This was another good job. I'm pasting below some of the common/good "factor analysis specific" and "more general" comments that I received. Many of you may have said these a little differently, but they all share a common theme.
For some of you, I'll be following up individually about specific comments/things I noticed. Thanks, Michael
- Use of PCA - PCA includes both the reliable and unreliable variance in the measures, so if a measure has a lot of unreliability, so will the component.
- Use of dichotomous variables for self-control items, which can reduce response variability.
- As a rule of thumb, a PCA should explain between 70% and 80% of the total variance. This rule was not followed.
- Absence of Kaiser's rule
- The latent dimensions are correlated (not appropriate for PCA)
- Although normality assumptions are not as important for PCA, the absence of multivariate outliers is important, and the authors did not report any checks
- did not report eigenvalues of components
- The authors did not mention the number of non-redundant residuals in the analysis, which tells us whether we did a good job with the final model
- .50 seems like a high cutoff for factor loadings (in class, the guideline for meaningfulness was >.3-.4; the authors here removed items loading at .468 and .429)
- Authors seem to use "factor analysis" and "principal components analysis" synonymously. Are they the same? If so, why use PCA and not EFA or CFA?
- The current measure was created from a factor analysis of a larger 50-item measure. Information for this initial factor analysis should have been provided.
- The authors should have provided more information about how the 13-item measure was developed and the theoretical underpinnings behind it. Also, what exactly do they mean when they say the items came from two self-help manuals? Did they copy them directly, or gather expert feedback and revise the items?
- One methodological flaw within the study was that they did not assess socially desirable responding. It seems as though many students would feel hesitant to answer questions about drinking habits honestly, particularly in a school environment.
- Of course, the biggest question that comes to mind regarding the analyses is: why did they use PCA over principal axis factoring? They provide no theoretical rationale for this.
- Another issue with the PCA was that they used a rotation with it (varimax) when traditionally you should not rotate PCAs. Why did they rotate it? Again, no rationale provided.
- Finally, when it came down to item and factor retention, the authors provided no rationale for their decisions. For example, why did they make the item-retention cutoff so stringent at .50 rather than use a more lenient cutoff like .30 or .40? What did they use as a cross-loading cutoff? Also, how did they decide to retain 3 factors? Did they look at eigenvalues or scree plots? Why did they not just force 2 components/factors and get rid of that one dangling 2-item factor (assertive communication)?
- The survey was designed for college students. In what ways are high school students different from college students? This difference should have been addressed.
- Pre-existing attitudes towards alcohol should have been addressed. Maybe students aren't trying not to drink.
- did not clarify why the DVs were translated into dichotomous variables or why odds ratios were used.
- They did not report the eigenvalues or the scree plot (and its statistical values), so we do not know how they arrived at these three factors. For example, we don't know whether they used Kaiser's rule or not.
- The authors did not explain why the two variables were excluded from the PCA (used healthy activities; thought about problems drinking can cause). Theoretically and intuitively, these items seem to belong to two of the principal components (healthy alternatives and self-regulation, respectively).
- A figure would have been nice, to see more clearly how the latent constructs were put together
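Several of the comments above (reporting eigenvalues, Kaiser's rule, the 70-80% variance-explained rule of thumb) can be illustrated concretely. Below is a minimal Python sketch on simulated data - the item numbers and loadings are entirely hypothetical, not taken from the article being critiqued - showing the diagnostics the authors could have reported for their PCA:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 respondents, 6 items driven by 2 latent factors.
factors = rng.normal(size=(200, 2))
loadings = np.array([[0.8, 0.0], [0.7, 0.1], [0.75, 0.0],
                     [0.0, 0.8], [0.1, 0.7], [0.0, 0.75]])
items = factors @ loadings.T + 0.4 * rng.normal(size=(200, 6))

# PCA on the correlation matrix (i.e., on standardized items).
corr = np.corrcoef(items, rowvar=False)
eigvals = np.sort(np.linalg.eigvalsh(corr))[::-1]  # descending order

# Kaiser's rule: retain components with eigenvalue > 1.
kaiser_k = int(np.sum(eigvals > 1))

# Cumulative proportion of total variance explained
# (the 70-80% rule of thumb applies to this quantity).
cum_var = np.cumsum(eigvals) / eigvals.sum()

print("eigenvalues:", np.round(eigvals, 2))
print("components retained by Kaiser's rule:", kaiser_k)
print("cumulative variance explained:", np.round(cum_var, 2))
```

Plotting the sorted eigenvalues against component number gives the scree plot several comments asked for; the point where the curve flattens ("elbow") is another common retention criterion.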