However, for the sake of demonstration, here is what the code for this method might look like:
Example:
# Create data vectors #
id <- c(1,1,1,1,1,1,2,2,2,2,2,2,3,3,3,3,3,3)
vara <- c(0,0,0,0,1,1,1,1,1,0,1,1,0,1,1,1,1,0)
varb <- c(0,0,0,0,0,0,1,1,1,0,0,1,1,1,1,0,0,0)
outcome <- c(1,1,1,1,0,0,1,1,1,1,1,0,0,0,0,0,1,1)
# Load the package: "lme4" (it must already be installed) #
library(lme4)
# Apply the repeated measures logistic regression model #
glmer(outcome ~ vara + varb + (1 | id), family = binomial)
This creates the output:
Generalized linear mixed model fit by maximum likelihood (Laplace Approximation) ['glmerMod']
Family: binomial ( logit )
Formula: outcome ~ vara + varb + (1 | id)
AIC BIC logLik deviance df.resid
26.5032 30.0647 -9.2516 18.5032 14
Random effects:
Groups Name Std.Dev.
id (Intercept) 2.592
Number of obs: 18, groups: id, 3
Fixed Effects:
(Intercept) vara varb
4.656 -3.757 -3.587
This output can then be used to write out the fitted model equation, as sketched below.
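As a minimal sketch of that step (not shown in the output above), the fit can be stored in an object, here called "fit" purely for illustration, so the fixed effects can be pulled out and turned into a predicted probability:
# Store the fitted model in an object (the name "fit" is only illustrative) #
fit <- glmer(outcome ~ vara + varb + (1 | id), family = binomial)
# The fixed effects give the population-level equation: #
# logit(P(outcome = 1)) = 4.656 - 3.757*vara - 3.587*varb + (per-id random intercept) #
fixef(fit)
# Predicted probability for vara = 1, varb = 0, ignoring the random intercept #
plogis(sum(fixef(fit) * c(1, 1, 0)))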
For our example, we can assume that we have three individuals who are being tested for a binary outcome. The independent variables we believe influence that outcome are "vara" and "varb". Because each individual was surveyed multiple times, the repeated measures logistic regression model was used to account for this aspect of the experimental methodology.
That’s all for now data heads. Stay tuned for more useful content!