
4.1 Quantitative Research Paradigms

4.1.1 Structural Equation Modeling

Following the principles for quantitative modeling, one statistical method with the potential to support inferences about causality is structural equation modeling (SEM). Structural equation models may be considered an extension of regression analysis and the general linear model. The aim of a simple regression analysis is to demonstrate the mathematical relationship between two variables. At different points in time and with different samples, error may influence that relationship in different ways. Under the law of large numbers, every relationship is assumed to have an underlying value which describes it mathematically, and given a large enough sample, this value may be estimated accurately enough to reveal the general trend of the data. Similarly, multiple regression controls for several predictors by modeling their combined influence on a single outcome, while the more complicated multivariate regression examines multiple outcomes predicted from one or more predictors.
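For reference, the regression models described above may be written in conventional textbook notation; the symbols below are standard rather than drawn from this study's instruments or data.

```latex
% Simple regression: a single predictor x and a single outcome y, with error term \varepsilon
\[ y_i = \beta_0 + \beta_1 x_i + \varepsilon_i \]

% Multiple regression: k predictors jointly influencing one outcome
\[ y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \dots + \beta_k x_{ki} + \varepsilon_i \]
```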

Following this logic, SEM constructs multiple models to measure a potentially infinite number of relationships simultaneously, which allows for a more complete and honest picture of quantitative data. This allows for a comprehensive approach to the investigation of theoretical variables. With SEM models, researchers may investigate the underlying structure of a construct by using the covariance matrix of a set of observed variables to infer that these observations form a latent construct. A latent construct (or latent variable) represents a multifaceted concept, such as the ideas of autonomy, competence, relatedness, motivation, or classroom engagement discussed in Chapters 2 and 3.
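As an illustration of how a covariance matrix is used to infer a latent construct, the standard measurement model may be sketched in conventional LISREL-style notation; none of these symbols is specific to the constructs or instruments used in this study.

```latex
% Measurement model: observed indicators x are imperfect reflections of latent constructs \xi,
% with factor loadings \Lambda_x and measurement errors \delta
\[ x = \Lambda_x \xi + \delta \]

% Implied covariance structure: the model reproduces the observed covariance matrix
% from the loadings \Lambda_x, the factor covariances \Phi, and the error variances \Theta_\delta
\[ \Sigma(\theta) = \Lambda_x \Phi \Lambda_x^{\top} + \Theta_\delta \]
```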

Other statistical methods, such as analysis of variance (ANOVA), regression, t-tests, and even cluster analysis and path analysis, are only able to investigate observed variables or data parceled through some transformation, such as reducing a series of observations to their mean value. Using the mean value necessarily compresses outlying data and may mask measurement issues and non-normal data. Rather than relying on such data reduction, researchers may use structural equation models to draw an accurate picture of the data by modeling the natural variance of the originally gathered data as it exists. By demonstrating the validity of these models while accounting for the natural error involved in a set of latent variables, researchers may gain a more complete understanding of the strength and direction of the relationships between variables.

At the same time, an SEM model cannot be confirmed as causal or valid through mathematical inference alone. Care must be taken when considering the certitude of a model because any model may inadvertently exclude variables or factors, which may in turn change the nature of the relationships. While an SEM model may confirm that the gathered data are consistent with the hypothesized relationships, this fact alone does not guarantee that the model is true without external confirmation or a priori knowledge of the basic pattern of relationships (Kline, 2011). This type of knowledge is rare in the social sciences, and thus relationships of this type will not be hypothesized or investigated here.

One key issue in resolving an SEM model is the type of extraction to use. The extraction represents the basic equation used for partialing and calculating the variance based on the constraints and parameters set by the researcher. From this foundation, all of the related values for the observed variables may be calculated. Many types of extraction exist, but perhaps the most frequently used for continuous data are the various iterations of maximum likelihood algorithms. Maximum likelihood (ML) estimation attempts to create the most statistically probable generalizations about normal, continuous data drawn from the population, calculated from the covariance matrix (Kline, 2011, pp. 154–155). These estimates are only valid for data which sufficiently approximate a normal distribution, and thus may not accurately describe data which do not meet this requirement.
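As a sketch of what the maximum likelihood extraction actually minimizes, the standard ML fit function may be written as follows, in the conventional notation of introductory SEM texts rather than notation specific to this study.

```latex
% ML fit function minimized over the model parameters \theta, where S is the sample
% covariance matrix, \Sigma(\theta) the model-implied covariance matrix, and p the
% number of observed variables; smaller values indicate closer fit to the data.
\[ F_{ML}(\theta) = \ln\lvert\Sigma(\theta)\rvert + \operatorname{tr}\!\left(S\,\Sigma(\theta)^{-1}\right) - \ln\lvert S\rvert - p \]
```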

An alternative to maximum likelihood estimators is the family of least squares estimators, some of which do not require the same assumptions of normality. Least squares estimators, like ML estimators, are both scale invariant, meaning that their distributional features do not change when all elements in the equation are multiplied by a common factor, and scale free, meaning that any linear transformation can be reversed algebraically to replicate the original matrix. Weighted least squares estimators are described as robust, meaning that they are able to estimate data accurately under a variety of circumstances, including ordered-categorical variables, strongly skewed or leptokurtic data, or small sample sizes. These methods may be particularly useful with Likert-type scales using 5 points or fewer (Kline, 2011, pp. 178–179).
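For comparison with the ML fit function above, the general weighted least squares fit function may be sketched as follows, again in conventional notation rather than notation drawn from this study; the diagonally weighted variant retains only the diagonal of the weight matrix and is the form typically applied to ordinal data.

```latex
% WLS fit function: s and \sigma(\theta) stack the unique elements of the sample and
% model-implied covariance (or polychoric correlation) matrices, and W is a weight
% matrix, typically an estimate of the asymptotic covariance matrix of s.
\[ F_{WLS}(\theta) = \bigl(s - \sigma(\theta)\bigr)^{\top} W^{-1} \bigl(s - \sigma(\theta)\bigr) \]
```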

Further, there is some evidence that Likert-type data should generally not be treated as continuous, because each number represents agreement with a specific category rather than a point on a scale with continuous and equal distances between ratings (Carifio & Perla, 2007). As Likert-type scales often use wording such as “somewhat agree,” “agree,” and “strongly agree,” the subjective difference between these levels of agreement may not actually represent a recognizably continuous difference (e.g., one person’s response of “somewhat agree” may represent roughly anything above 51% agreement, while another person may perceive it as 70%). At the same time, maximum likelihood estimators are based on a logit transformation of the data as part of the calculation, and thus with a wide enough scale of variance (i.e., 5 points or more; Chang, 1994) or a sufficiently normal distribution, maximum likelihood may be acceptable, especially with the use of robust estimators. In either case, when employing surveys with Likert-type items, the use of weighted least squares or other robust estimators appears to represent the most valid option, accounting for the numerous analysis issues which may occur as a result of the shape of the data.
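To make the preceding discussion concrete, the following is a minimal sketch of how such a latent-variable model might be specified and fit with a least squares objective. It assumes the Python package semopy, hypothetical indicator names (a1–r3, eng1–eng3), a hypothetical data file, and the availability of a DWLS objective in the installed version; the analyses reported later may differ in software and settings.

```python
# Minimal illustrative sketch: a latent-variable model with Likert-type indicators,
# specified in lavaan-style syntax and estimated with a (diagonally) weighted least
# squares objective. All indicator names and the data file are hypothetical.
import pandas as pd
import semopy

model_desc = """
# Measurement model: latent constructs defined by observed Likert items
autonomy    =~ a1 + a2 + a3
competence  =~ c1 + c2 + c3
relatedness =~ r1 + r2 + r3
engagement  =~ eng1 + eng2 + eng3
# Structural model: engagement regressed on the three constructs
engagement ~ autonomy + competence + relatedness
"""

data = pd.read_csv("survey_responses.csv")  # hypothetical file of item-level responses

model = semopy.Model(model_desc)
model.fit(data, obj="DWLS")       # DWLS objective from the WLS family, assumed available
print(model.inspect())            # parameter estimates (loadings, regressions, variances)
print(semopy.calc_stats(model))   # global fit statistics (e.g., CFI, RMSEA)
```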

The options offered by structural equation modeling are numerous, and both the philosophy and the approach behind this statistical repertoire grant the researcher clear advantages over traditional univariate and multivariate statistical techniques. Specifically with regard to the analysis of survey data and multiple observations of student performance, structural equation models provide the clearest picture of the measured data and allow researchers to select the model that best fits the data.

Quantitative methodology is ultimately flawed in its inability to easily convey its findings to readers without extensive training; as can be seen in the descriptions of SEM procedures above, many of its nuances are lost without a level of comfort with abstract mathematical terminology. While the rigor involved in quantifying real-world phenomena ultimately makes it difficult to question, its results may not be readily understood or accepted, especially in the social sciences (Molden & Dweck, 2006). Especially when abstract concepts are involved, the target audience may interpret specialized jargon, such as the concept of autonomy, from a different perspective and thus may be unable to make an actionable response to specific research findings (Johnson & Onwuegbuzie, 2004). Most problematically, teachers and administrators may not take the time to examine what the data show and how, but rather rely on summaries of the research or even simply journal article titles. They may thus draw conclusions regarding practice based on an existing worldview, picking and choosing with a strong confirmation bias while never attempting to parse the technical nature of the work itself. While quantitative research allows for the best empirical evidence to be gathered and analyzed, the lack of a human quality may make it hard for practitioners to use, and in education this represents a gap between research and praxis on classroom learning.