Of the two methods, FIML has been more widely adopted in the behavioral sciences. Under FIML, missing information is handled concurrently with model parameter estimation. To meet the assumption of MAR, variables that account for missingness are usually added directly to the analytic model.

However, this is not ideal when the variables that contain information about missingness are not themselves of substantive interest. MI is a three-step process: missing values are imputed multiple times to create m complete datasets; each dataset is then analyzed independently of the others; and finally results are pooled across these analyses. Different algorithms may be used for imputing missing values (the first step), but the general idea remains the same: variance within each imputed dataset reflects uncertainty in measurement, whereas variance across the datasets is a proxy for uncertainty due to missing information.
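
The pooling step combines the m sets of results using Rubin's rules, where total variance is the sum of average within-imputation variance and (inflated) between-imputation variance. A minimal sketch, with hypothetical estimates and standard errors, shown in Python purely for illustration (in practice one would use, e.g., the pooling routines of the MICE package):

```python
# Rubin's rules for pooling results across m imputed datasets.
# Inputs are hypothetical: per-dataset point estimates and their
# squared standard errors (within-imputation variances).

def pool_rubin(estimates, variances):
    """Return the pooled point estimate and its total variance."""
    m = len(estimates)
    q_bar = sum(estimates) / m                               # pooled estimate
    w = sum(variances) / m                                   # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    t = w + (1 + 1 / m) * b                                  # total variance
    return q_bar, t

pooled_est, total_var = pool_rubin(
    [0.52, 0.48, 0.55, 0.45, 0.50],       # hypothetical estimates from m = 5 datasets
    [0.010, 0.012, 0.011, 0.009, 0.010],  # hypothetical within-imputation variances
)
```

Note that the total variance is necessarily larger than the average within-imputation variance, which is how MI propagates the uncertainty due to missing information into the final standard errors.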

This contrasts with earlier methods wherein only a single complete dataset is imputed. Another advantage of MI is that variables that explain missingness (under the MAR assumption) but that are not of substantive interest can be included in the imputation step and then excluded from the analytic model.

Van Buuren [8] provides numerous illustrations of key technical and theoretical differences between imputation methods as implemented in the R statistical software package MICE (multiple imputation by chained equations). FIML is now implemented in most commercially available software packages and is easy to use (end users simply need to select FIML estimation), whereas MI requires additional knowledge and the overhead that comes with managing multiple datasets.

FIML is also statistically deterministic and efficient. In practice, this means FIML will provide reproducible estimates with tighter confidence intervals (smaller standard errors) than those generally obtained under MI. Under MI, by contrast, the imputation model must be congenial with (i.e., compatible with) the analytic model, which can be difficult to achieve for analytic models that include multilevel structures, for example. Congeniality is not an issue for FIML, wherein missingness and parameter estimation are handled under a single model [13].

On the other hand, MI produces complete datasets that can then be used in analyses for which FIML is not yet, or may never be, implemented. This point is especially salient for researchers in gerontology, who are often interested in nonlinear and nonnormally distributed outcomes. Another consideration for researchers faced with a large amount of missing data due to death or dropout is that FIML estimation will often fail to converge at higher levels of missingness.

What, then, when incomplete data are MNAR (nonignorable)? MNAR estimation methods, such as selection and pattern-mixture analyses, jointly model observed and missing information. Importantly, these models rely on strong, unverifiable assumptions [14]. Sensitivity analysis is therefore required to evaluate how analytical outcomes change under different MNAR scenarios.

Thus, MI may be better suited for researchers who wish to explore alternative hypotheses about unobserved sources of dropout in longitudinal studies of aging. In gerontological research, it is relatively common to rely on small samples or to examine constructs assessed by variables with typically asymmetric distributions. Most mathematical models used in psychology (the general linear model, linear structural equation modeling, linear mixed-effects models) rely on the assumption that the errors are normally distributed.

Moreover, commonly used models to compare groups (Student t tests, analysis of variance) assume homoscedasticity, or at least sphericity in the case of repeated measures.

## Methodological Issues in Aging Research

These assumptions are often not met in real data, and the consequences of their violation can be very serious, in terms of both increased type I error and decreased power. When assumptions are violated, researchers often transform the data. However, transformations typically only alleviate, rather than fully eliminate, the problem, while the interpretation of final results generally becomes more arduous. Moreover, outliers with unwarranted influence may bias all parameter estimates, thereby invalidating the overall analysis.

Modern robust statistics address such common research situations by estimating a model's parameters even with nonnormal errors, heteroskedasticity, and outliers [19]. They thus not only allow valid statistical conclusions when assumptions are violated, but also detect and deal with outliers.

The simplest example of a robust statistic is the median, which is far less sensitive to outliers than the mean: one can alter nearly half the values of a series and still obtain the same median. This is why the location of inherently skewed distributions, such as income, is characterized by the median rather than the mean.
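
This robustness is easy to demonstrate with a toy series (the values below are arbitrary):

```python
import statistics

clean = [2, 3, 3, 4, 5, 6, 7]
# Replace the three largest values with wild outliers: the median
# is unchanged, while the mean is pulled far upward.
corrupted = [2, 3, 3, 4, 500, 600, 700]

median_clean = statistics.median(clean)          # 4
median_corrupted = statistics.median(corrupted)  # still 4
mean_corrupted = statistics.mean(corrupted)      # roughly 259
```

Here three of seven values were corrupted, yet the median is untouched; the mean, by contrast, no longer describes the bulk of the data at all.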

Likewise, robust statistics also exist for distributions' scale parameters. The standard deviation, the archetypal scale estimate, is even more strongly influenced by outliers than the mean, given that its calculation involves squaring residuals, which amplifies the deleterious effect of outliers. The median absolute deviation, defined as the median of the absolute differences between each data point of a series and that series' median value, is a robust alternative.
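
A sketch of the median absolute deviation in Python (the 1.4826 factor is the conventional scaling that makes the MAD comparable to the standard deviation when the data are normally distributed):

```python
import statistics

def mad(xs, scale=1.4826):
    """Median absolute deviation: the median of |x - median(xs)|.
    The default scale factor makes the result consistent with the
    standard deviation under normality."""
    med = statistics.median(xs)
    return scale * statistics.median(abs(x - med) for x in xs)

# A single extreme value barely moves the MAD:
robust_scale = mad([1, 2, 3, 4, 100])  # 1.4826, despite the outlier 100
```

The standard deviation of the same series is roughly 44, driven almost entirely by the single outlier.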

These examples are mainly didactic, but they convey the message that a distribution deviating from familiar probability distributions is better characterized by robust estimates of location and scale. Robust statistics also allow estimating the parameters of well-known statistical models.

A popular robust estimation method is MM-estimation, a generalization of maximum likelihood estimation. MM-estimation determines empirically how many and which extreme cases should be excluded during an analysis, and does so as a function of the observed data. If data are excluded, the inferential conclusions are nevertheless based on the entire sample size, whereas with manual removal of outliers the sample is reduced in inferential calculations, thereby losing efficiency and increasing the risk of type II errors.

Moreover, MM-estimation may not completely exclude an observation, but rather downweight it, so that observations at the extreme tail are weighted less during estimation than those at the center of a distribution (whereas in traditional estimation all observations have the same weight). Thus, robust statistics also permit the automated detection of outliers, and prevent researchers from making arbitrary decisions about what counts as an outlier and what to do with it. A classic example of robust regression was provided by Rousseeuw and Leroy [20] and consists of a dataset with only 20 observations of schools' and pupils' characteristics.
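
As an illustration of downweighting, here is the Tukey bisquare weight function commonly used in MM-type estimators. This is a sketch of the weighting idea only, not the full estimator of any particular package; the tuning constant c = 4.685 is a conventional choice for 95% efficiency under normality:

```python
def bisquare_weight(residual, scale, c=4.685):
    """Tukey bisquare weight: 1 at the center of the distribution,
    smoothly decreasing toward the tails, and exactly 0 for
    standardized residuals at or beyond the cutoff c."""
    u = residual / (c * scale)
    return (1.0 - u * u) ** 2 if abs(u) < 1.0 else 0.0
```

An observation sitting on the regression line gets full weight; moderately discrepant ones are shaded down; gross outliers contribute nothing, yet the inferential sample size is unchanged.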

The outcome variable is sixth graders' verbal score, while the predictors are staff salary, teachers' verbal score, and three indicators of pupils' and parents' socioeconomic status. Robust statistics have influenced not just basic indicators of the location and shape of distributions, but also frequently used statistical models, such as ANOVA and linear regression. Multivariate analyses have also been enriched by robust statistics: factor analysis [21], structural equation modeling [22, 23], and linear mixed-effects models, for both hierarchical and crossed structures [24, 25, 26], can all be estimated within the robust framework.

Like others, we do not suggest that robust statistics should become a de facto replacement for classical statistics. We simply point to the availability of software for estimation using robust statistics. We believe that in the next decade more and more statistical models and related software will benefit from the option to use robust statistics, and that editors and reviewers of empirical studies will increasingly ask for them. For introductory, nontechnical readings, see for instance [18, 19]; for a more complete understanding, see [27].

There is little doubt that research in psychological aging, and more generally in development, has benefited greatly in the past 3 decades from the use of linear mixed-effects models to estimate change over time in an outcome of interest [28, 29].

Importantly, this modeling framework partitions variance due to interindividual differences (between subjects) from that due to intraindividual change (within subjects), and thereby allows testing the effects of both subject-specific but time-invariant predictors and time-varying predictors. This provides greater flexibility for evaluating theoretically relevant questions than other repeated-measures approaches, such as repeated-measures analysis of variance.

Moreover, under certain assumptions this model can provide unbiased parameter estimates despite incomplete data, and it can also reduce the undue influence of slightly outlying observations. That this methodology is readily available in popular statistical software adds to its popularity. Not surprisingly, reviews of current research practices in the broad field of development and aging dedicate much space to discussion of this model.

Probably the two most common specifications of this model are a linear and a quadratic polynomial relation between time and the outcome, so that change (growth or decline) is theorized either to be linear (thus constant in its rate) or to follow a quadratic curve, interpreted as acceleration or deceleration.

In statistical terms, both specifications refer to a linear model, in the sense that the parameters of the model are associated with the outcome via a linear combination. While linear models of change are widely adopted in the psychological aging literature, there are times when they are theoretically too limiting. Indeed, it is rather hard to believe that aging phenomena are linear or quadratic [30, 31, 32, 33]. Growth and decline in various organisms often follow a logistic function, survival in a population may follow a Weibull function, and the accumulation of assaults to the central nervous system may follow an exponential function.

These are just a few examples in which linear models may fail to satisfactorily describe the data and may poorly predict unobserved future occurrences.
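
As a rough illustration, the three trajectory shapes named above can be written as simple functions; all parameter values here are arbitrary, chosen only to show the curve forms, not estimates from any study:

```python
import math

def logistic(t, asymptote=1.0, rate=1.0, midpoint=5.0):
    """S-shaped growth toward an upper asymptote."""
    return asymptote / (1.0 + math.exp(-rate * (t - midpoint)))

def weibull_survival(t, scale=10.0, shape=1.5):
    """Proportion surviving at time t under a Weibull hazard."""
    return math.exp(-((t / scale) ** shape))

def exponential_accumulation(t, rate=0.2):
    """Accumulation that rises quickly at first and then levels off."""
    return 1.0 - math.exp(-rate * t)
```

None of these is linear in its parameters (the rate and shape parameters enter through exponentials and powers), which is precisely why polynomial growth models cannot reproduce them.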

There can also be statistical disadvantages to using polynomial functions; Fjell et al., for instance, discuss such problems. We would expect to find similar problems in data that are only partially longitudinal over a relatively short period. Unlike linear mixed-effects models, nonlinear mixed-effects (NLME) models do not constrain the parameters of the model to relate to the outcome via linear combinations only.

That is, the NLME model goes beyond the specification of nonlinear change over time (which polynomials within a linear model already permit) to allow parameters that relate to the outcome nonlinearly. Indeed, NLME models cover a wide array of functional specifications demonstrated to be effective for describing psychological phenomena. For instance, with experimental data, Ghisletta et al. compared linear and exponential descriptions of learning. Learning was not well characterized as a linear function, whereas the exponential function resulted in very good model fit and allowed for clearer interpretation of the estimated parameters.

Specifically, one parameter represented initial performance, a second indicated the rate of exponential learning, and a third corresponded to maximum performance. While age predicted final performance and rate of learning, spatial abilities independent of age predicted initial performance. Thus, older participants with strong spatial abilities could start as high as younger participants with the same level of spatial ability, but they could not hope to learn as much, or to perform as high at the end, as younger participants.
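
The three parameters just described map directly onto a standard three-parameter exponential learning curve. A minimal sketch (the functional form is the common exponential-approach curve; parameter values in the example are hypothetical):

```python
import math

def exp_learning(t, initial, rate, maximum):
    """Three-parameter exponential learning curve: performance equals
    `initial` at t = 0 and approaches `maximum` at a speed set by `rate`."""
    return maximum - (maximum - initial) * math.exp(-rate * t)

# Hypothetical participant: starts at 10, learns toward a ceiling of 50.
day0 = exp_learning(0, initial=10.0, rate=0.5, maximum=50.0)   # 10.0
late = exp_learning(30, initial=10.0, rate=0.5, maximum=50.0)  # near 50
```

Each parameter is psychologically interpretable on its own scale, which is exactly the interpretive advantage over a polynomial whose coefficients have no such direct meaning.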

Advantages of nonlinear functions include: (a) they can be specified with parameters that are directly interpretable in psychological terms, such as learning (positive exponential rate), maximal performance (upper asymptote), and intensity of a signal (amplitude); (b) they can naturally accommodate noncontinuous outcomes, such as counts and dichotomous events (both frequent in psychological research), without having to resort to transformations; and (c) they often require fewer parameters (hence are more parsimonious) than linear models to adequately describe a developmental process.

For all these reasons, many statisticians and methodologists increasingly favor the use of nonlinear functions to model psychological processes. Applications in the psychological aging literature remain limited, but they do exist and are gaining traction.

For instance, in the context of longitudinal lifespan data, McArdle et al. provide one such application. Interested readers can consult Ratkowsky [38] for a general understanding of nonlinear regression, Draper and Smith [39] for an introduction to nonlinear estimation, Cudeck and Harring [40] for a general overview of NLME models, Sit and Poulin-Costello [41] for a catalog of nonlinear functions, and Ghisletta et al. for applied examples.

Given this availability of software and research examples using NLME models, we encourage psychological aging researchers to consider this analytic approach whenever they think that linear models are neither theoretically nor statistically satisfying.

It is not uncommon that the change phenomenon under investigation is rather novel and consequently does not lend itself easily to confirmatory analyses. In such an exploratory setting, it is difficult to propose a linear or nonlinear model that best describes and facilitates interpretation of the phenomenon. Nonetheless, a researcher may wish to understand how the phenomenon unfolds over time, and how covariates of interest relate to it. For instance, treatment effectiveness in clinical psychological practice can be improved by an increased understanding of how intervention effects develop over time, but these longitudinal effects are not easily characterized by known statistical functions [43].

As an experimental example, Schmiedek et al. examined performance measured repeatedly across many days of practice.

Performance progressed nonmonotonically across the days (as a result of practice, habituation, possibly fatigue, etc.). In both of the above studies, rather than exploring a set of known parametric functions, the authors opted for a statistical procedure that captures patterns in the data with smoothing functions. Additive models [45] are semi- or nonparametric techniques that extend the general linear model by replacing one or more of the standard linear terms with smoothing functions that enter additively into the prediction model.

Smoothing functions are usually simple linear functions. Often, moving windows are employed: a smoothing function is first fit to a limited range of values of the predictor, then refit as the window advances across that predictor's range. Care is taken that the functions' values match at overlapping segments of the windows. Moreover, the smoothing procedure estimates so-called effective degrees of freedom, which assess the degree of nonlinearity of a smoothing term and can be interpreted as analogous to the polynomial order of the smoother.
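
The moving-window idea can be sketched with a deliberately crude local smoother; real additive-model smoothers fit richer local functions and penalize wiggliness, but the windowing logic is the same:

```python
def moving_average_smooth(ys, window=3):
    """Toy local smoother: each value is replaced by the mean of the
    values inside a symmetric window (edges use a truncated window).
    A crude stand-in for the local fits that additive-model smoothers
    stitch together across the predictor's range."""
    half = window // 2
    smoothed = []
    for i in range(len(ys)):
        lo, hi = max(0, i - half), min(len(ys), i + half + 1)
        segment = ys[lo:hi]
        smoothed.append(sum(segment) / len(segment))
    return smoothed
```

Because adjacent windows share observations, the fitted values change gradually from one position to the next, producing the continuous smooth line described below.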

The end result is a continuous smooth line across all values of the predictors, which may or may not be linear, according to the patterns in the data. If the true relationship between the predictor and the outcome is parametric (e.g., linear), the smoother will approximate it, with effective degrees of freedom near 1 indicating a linear relation. Generalized additive models (GAMs) extend this technique to noncontinuous dependent variables via link functions, just as the generalized linear model does for the general linear model. Finally, recent extensions to generalized additive mixed models (GAMMs) allow for multiple sources of variance (just as the linear mixed model does for the general linear model), thereby accommodating longitudinal assessments.

Additive models and GAMs have gained popularity in medicine, biostatistics, biology, and related fields, and are slowly gaining momentum in psychology as well. Software implementing these models also allows for estimating interaction effects between predictors and the parameters of a given smoothing function. Thus, when testing the effectiveness of a therapeutic treatment, an additive model would allow (a) studying how the intervention unfolds over time and (b) testing whether the time course differs between the intervention and control groups [43].

The same would apply when studying how older individuals perform on repeated cognitive tasks and whether group differences emerge in those trajectories. While we contend that it is always best to pair theory and data analysis whenever possible, there are instances in which a lack of prior knowledge, or the desire to test a covariate's effect with maximal power, makes additive models an excellent alternative to potentially misspecified parametric models.
