We list here some recurrent problems reported by users. Other issues, questions, or concerns may have been reported on GitHub: https://github.com/CecileProust-Lima/lcmm/issues/ .
Please refer to GitHub, both open and closed issues, before sending any question, and please ask any new question via GitHub only.
It sometimes happens that a model does not converge correctly. Most of the time, this is due to the very stringent convergence criterion based on the derivatives. This criterion uses both the first derivatives and the inverse of the matrix of second derivatives (the Hessian); it ensures that the program converges at a maximum. When it cannot be computed correctly (most often because the Hessian is not positive definite), the program reports “second derivatives = 1”.
Several reasons may induce non-convergence, e.g.:
When the time variable (or, more generally, a variable with random effects) is coded in a unit that induces very small associated parameters (e.g., very small changes per day). In that case, changing the scale (for instance to months or years) may solve the problem.
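As a minimal sketch of such a rescaling (the dataset `mydata`, its variables, and the model formula are hypothetical examples, not a prescribed workflow):

```r
library(lcmm)

# Hypothetical data: a time variable in days can lead to very small
# slope parameters. Rescale it to years before fitting.
mydata$time_years <- mydata$time_days / 365.25

# Refit the model on the rescaled time variable
m <- hlme(outcome ~ time_years, random = ~ time_years,
          subject = "id", data = mydata)
```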
In models with splines in the link function (lcmm, multlcmm, Jointlcmm) or with splines in the baseline risk function (Jointlcmm), a spline parameter estimated very close to zero may prevent correct convergence, as it lies at the border of the parameter space. In that case, this parameter can be fixed to 0 and convergence should be reached immediately.
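A sketch of how this can be done with the `posfix` argument of the estimation functions (the fitted object `m0`, the data, the formula, and the parameter index 10 are all hypothetical; the index of the offending spline parameter must be identified from the fitted model's own output):

```r
library(lcmm)

# m0: a previously fitted model (hypothetical) in which, say, the 10th
# parameter (a spline coefficient) was estimated very close to zero.
binit <- m0$best   # estimated parameters from the previous run
binit[10] <- 0     # set the offending spline parameter to 0

# Refit with these values as starting point, keeping parameter 10 fixed at 0
m1 <- lcmm(outcome ~ time, random = ~ time, subject = "id",
           link = "5-quant-splines", data = mydata,
           B = binit, posfix = 10)
```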
Selection of the number of latent classes is a complex question. In some cases, the number is known. When it is not, different tools can be used to guide the decision.
Finally, it can be useful to present and contrast models with different numbers of latent classes.
The complexity of selecting the optimal number of latent classes is illustrated in the vignette https://cran.r-project.org/package=lcmm/vignettes/latent_class_model_with_hlme.html . Indeed, the different criteria may not be concordant in practice.
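As a sketch of such a comparison (dataset and variable names are hypothetical), models with increasing numbers of classes can be fitted and then contrasted with `summarytable`:

```r
library(lcmm)

# Fit models with 1 to 3 latent classes (hypothetical data 'mydata');
# the one-class model is used as starting point for the others.
m1 <- hlme(y ~ time, random = ~ time, subject = "id", ng = 1, data = mydata)
m2 <- hlme(y ~ time, random = ~ time, mixture = ~ time,
           subject = "id", ng = 2, data = mydata, B = m1)
m3 <- hlme(y ~ time, random = ~ time, mixture = ~ time,
           subject = "id", ng = 3, data = mydata, B = m1)

# Compare fit criteria and class sizes side by side
summarytable(m1, m2, m3,
             which = c("G", "loglik", "BIC", "entropy", "%class"))
```

To reduce sensitivity to initial values, each call with ng > 1 can also be wrapped in the `gridsearch` function.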
Good discrimination between classes is usually sought when fitting latent class mixed models. Discriminatory power can be assessed using the entropy criterion (provided by summarytable) and the classification table (obtained with postprob). The description of the classes may also help in understanding the latent class structure.
(see vignette https://cran.r-project.org/package=lcmm/vignettes/latent_class_model_with_hlme.html for further details)
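For instance, with a fitted latent class model `m2` (a hypothetical object, e.g., a 2-class hlme fit):

```r
# Posterior classification table and mean posterior probabilities per class
postprob(m2)

# Entropy and class proportions alongside the usual fit criteria
summarytable(m2, which = c("G", "BIC", "entropy", "%class"))
```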
Different techniques can be used in this package to evaluate the goodness of fit. As in mixed models, one can compare the subject-specific predictions with the observations or plot the subject-specific residuals.
Comparison with more flexible models can also be useful (more flexible link functions, more flexible baseline risk functions, more flexible functions of time, etc.).
Each vignette includes a section on the evaluation of the model.
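A brief sketch of these graphical checks (the fitted object `m` and the name of the time variable are hypothetical):

```r
# Subject-specific residuals of a fitted model m
plot(m, which = "residuals")

# Mean predictions against observed means over time intervals
plot(m, which = "fit", var.time = "time")
```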
This is detailed in the vignette on pre-normalizing: https://cran.r-project.org/package=lcmm/vignettes/pre_normalizing.html .