As I understand it, the exponentiated beta value from a logistic regression is the odds ratio of that variable with respect to the dependent variable of interest. However, this value does not match the manually calculated odds ratio. My model predicts stunting (a measure of malnutrition) using, among other things, insurance status.
// Odds ratio from logistic regression, done in Stata
logit stunting insurance age ... etc.
or_insurance = exp(beta_value_insurance)
// Odds ratio, manually calculated
odds_stunted_ins = num_stunted_ins/num_not_stunted_ins
odds_stunted_unins = num_stunted_unins/num_not_stunted_unins
odds_ratio = odds_stunted_ins/odds_stunted_unins
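The same manual calculation can be sketched in Python; the variable names mirror the pseudocode above, and the counts are hypothetical (not from the question's data):

```python
# Hypothetical counts of stunted / not-stunted children by insurance status.
num_stunted_ins, num_not_stunted_ins = 40, 160      # insured
num_stunted_unins, num_not_stunted_unins = 60, 140  # uninsured

# Odds of stunting within each insurance group.
odds_stunted_ins = num_stunted_ins / num_not_stunted_ins
odds_stunted_unins = num_stunted_unins / num_not_stunted_unins

# Marginal (unadjusted) odds ratio for insurance.
odds_ratio = odds_stunted_ins / odds_stunted_unins
print(round(odds_ratio, 4))  # prints: 0.5833
```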
What is the conceptual reason why these values differ? Is it the controlling for other factors in the regression? I just want to be able to explain the discrepancy.
Answers:
If you have only that single predictor in the model, then the odds ratio between the predictor and the response will exactly equal the exponentiated regression coefficient. I don't think a derivation of this result is currently present on the site, so I will take this opportunity to provide it.
Consider a binary outcome Y and a single binary predictor X, and write p_ij = P(Y = i, X = j) for the cell probabilities of the two-by-two table.

Then one way to calculate the odds ratio between X and Y is

OR = (p_11 · p_00) / (p_01 · p_10)

By the definition of conditional probability, p_ij = P(Y = i | X = j) · P(X = j). In this ratio the marginal probabilities involving X cancel, and you can rewrite the odds ratio in terms of the conditional probabilities of Y | X:

OR = [P(Y = 1 | X = 1) / P(Y = 0 | X = 1)] / [P(Y = 1 | X = 0) / P(Y = 0 | X = 0)]

In logistic regression, you model these probabilities directly:

log( P(Y = 1 | X = x) / P(Y = 0 | X = x) ) = β0 + β1·x

So we can calculate these conditional odds directly from the model. The first ratio in the expression for OR above is

P(Y = 1 | X = 1) / P(Y = 0 | X = 1) = exp(β0 + β1)

and the second is

P(Y = 1 | X = 0) / P(Y = 0 | X = 0) = exp(β0)

so OR = exp(β0 + β1) / exp(β0) = exp(β1): with a single binary predictor, the exponentiated coefficient is exactly the odds ratio.

Note 1: When other predictors, say Z, are in the model, the coefficient on X instead satisfies

log( P(Y = 1 | X = x, Z = z) / P(Y = 0 | X = x, Z = z) ) = β0 + β1·x + α·z

so exp(β1) is the odds ratio conditional on the values of the other predictors in the model and, in general, is not equal to the marginal odds ratio calculated above.
So, it is no surprise that you're observing a discrepancy between the exponentiated coefficient and the observed odds ratio.
Note 2: I derived a relationship between the true β and the true odds ratio, but note that the same relationship holds for the sample quantities, since a fitted logistic regression with a single binary predictor will exactly reproduce the entries of a two-by-two table. That is, the fitted means exactly match the sample means, as with any GLM. So, all of the logic used above applies with the true values replaced by sample quantities.
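This sample-level claim can be checked numerically. The sketch below (pure Python, hypothetical counts) fits a logistic regression with one binary predictor by Newton-Raphson and confirms that exp(β1) reproduces the odds ratio computed straight from the two-by-two table:

```python
import math

# Hypothetical 2x2 table: table[x] = (count of Y=1, count of Y=0).
table = {0: (60, 140), 1: (40, 160)}

# Expand the table into individual (x, y) observations.
data = [(x, 1) for x, (n1, _) in table.items() for _ in range(n1)] + \
       [(x, 0) for x, (_, n0) in table.items() for _ in range(n0)]

# Newton-Raphson for the model logit(P(Y=1|X=x)) = b0 + b1*x.
b0, b1 = 0.0, 0.0
for _ in range(25):
    g0 = g1 = 0.0            # score (gradient of the log-likelihood)
    h00 = h01 = h11 = 0.0    # observed information matrix X'WX
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * x)))
        g0 += y - p
        g1 += (y - p) * x
        w = p * (1.0 - p)
        h00 += w
        h01 += w * x
        h11 += w * x * x
    det = h00 * h11 - h01 * h01
    b0 += (h11 * g0 - h01 * g1) / det   # Newton step: (X'WX)^{-1} score
    b1 += (h00 * g1 - h01 * g0) / det

table_or = (40 / 160) / (60 / 140)      # odds ratio straight from the table
print(round(math.exp(b1), 6), round(table_or, 6))  # prints: 0.583333 0.583333
```

The fitted probabilities exactly match the observed cell proportions, so the agreement is exact (up to numerical convergence), as the note says.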
You have a really nice answer from @Macro (+1), who has pointed out that the simple (marginal) odds ratio calculated without reference to a model and the odds ratio taken from a multiple logistic regression model (exp(β)) are in general not equal. I wonder if I can still contribute a little related information here, in particular explaining when they will and will not be equal.
Beta values in logistic regression, as in OLS regression, specify the ceteris paribus change in the parameter governing the response distribution associated with a 1-unit change in the covariate. (For logistic regression, this is a change in the logit of the probability of 'success', whereas for OLS regression it is the mean, μ.) That is, it is the change all else being equal. Exponentiated betas are likewise ceteris paribus odds ratios. Thus, the first issue is to be sure this is meaningful. Specifically, the covariate in question should not appear in other terms (e.g., in an interaction, or a polynomial term) elsewhere in the model. (Note that here I am referring to terms that are included in your model; there are also problems if, for example, the true relationship varies across levels of another covariate but an interaction term was not included.) Once we've established that it's meaningful to calculate an odds ratio by exponentiating a beta from a logistic regression model, we can ask when the model-based and marginal odds ratios will differ, and which you should prefer when they do.
The reason these ORs will differ is that the other covariates included in your model are not orthogonal to the one in question. For example, you can check by running a simple correlation between your covariates (it doesn't matter what the p-values are, or whether your covariates are 0/1 rather than continuous; the point is simply that r ≠ 0). On the other hand, when all of your other covariates are orthogonal to the one in question, exp(β) will equal the marginal OR.
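A small deterministic example (hypothetical counts, pure Python) shows how a confounder Z that is correlated with the covariate X makes the marginal OR differ from the within-stratum OR, even though the stratum-specific odds ratio is the same in every stratum:

```python
# Hypothetical stratified counts: strata[z][x] = (count Y=1, count Y=0).
# Within each stratum of Z the odds ratio for X is 2, but X is unevenly
# distributed across the strata of Z (i.e., X and Z are correlated).
strata = {
    0: {0: (10, 90), 1: (20, 90)},
    1: {0: (50, 50), 1: (20, 10)},
}

def odds_ratio(t):
    """Odds ratio from a table {x: (n_y1, n_y0)}."""
    return (t[1][0] / t[1][1]) / (t[0][0] / t[0][1])

# Conditional (stratum-specific) odds ratios: 2.0 in both strata.
cond = [odds_ratio(strata[z]) for z in strata]

# Marginal table: collapse over Z, then compute the unadjusted OR.
marg = {x: (sum(strata[z][x][0] for z in strata),
            sum(strata[z][x][1] for z in strata)) for x in (0, 1)}

print(cond, round(odds_ratio(marg), 4))  # prints: [2.0, 2.0] 0.9333
```

Collapsing over the confounder here flips an OR of 2 (apparently harmful) to roughly 0.93 (apparently slightly protective), which is exactly the Simpson's-Paradox-style reversal discussed below.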
If the marginal OR and the model-based OR differ, you should use / interpret the model-based version. The reason is that the marginal OR does not account for the confounding among your covariates, whereas the model does. This phenomenon is related to Simpson's Paradox, which you may want to read about (the SEP also has a good entry; there is a discussion on CV here: Basic-simpson's-paradox, and you can search CV's simpsons-paradox tag). For the sake of simplicity and practicality, you may want to just use the model-based OR, since it will be either clearly preferable or the same.