Thoughts about a few common questions from yesterday:


A few good questions came up yesterday:


1. Can unstandardized regression weights, covariances, and variances differ between two solutions, but the standardized regression weights, correlations, and squared multiple correlations, along with fit, be identical?


In a nutshell, yes. I know this is conceptually difficult for some, but if you remember that (a) we scale each factor by fixing one loading to one (1.0), and (b) which loading we choose to do this to is ARBITRARY, it starts to get clearer.


Scaling is literally just a way to identify our solutions -- to make sure we're not giving the program more "unknowns" than there is information to solve for.  It's kind of like the degrees-of-freedom concept.  If you conceptually treat each factor as a 'given' or a 'known', then, in a set of three measures, knowing any two loadings automatically tells you the third.


Consequently, regardless of how you scale the solution (which loading you set to one), all solutions (that differ only in the scaling loading) are mathematically equivalent -- and will thus yield the same standardized values and model fit.
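To make this concrete, here is a small numpy sketch (with made-up loadings and variances -- not from any of our models) of a one-factor, three-indicator model. One scaling fixes the first loading to 1, the other fixes the second; both imply identical covariances (hence identical fit) and identical standardized loadings:

```python
import numpy as np

def implied_cov(lam, phi, theta):
    """Model-implied covariance matrix: Sigma = phi * lam lam' + Theta."""
    lam = np.asarray(lam, float)
    return phi * np.outer(lam, lam) + np.diag(theta)

def std_loadings(lam, phi, sigma):
    """Standardized loading_i = lam_i * sqrt(phi) / sd(indicator_i)."""
    return np.asarray(lam, float) * np.sqrt(phi) / np.sqrt(np.diag(sigma))

theta = [0.5, 0.5, 0.5]                  # residual variances (hypothetical)

# Scaling A: first loading fixed to 1
lam_a, phi_a = [1.0, 0.8, 0.6], 2.0
# Scaling B: second loading fixed to 1 (rescale loadings, absorb into phi)
lam_b, phi_b = [1.0 / 0.8, 1.0, 0.6 / 0.8], 2.0 * 0.8**2

sig_a = implied_cov(lam_a, phi_a, theta)
sig_b = implied_cov(lam_b, phi_b, theta)

print(np.allclose(sig_a, sig_b))  # identical implied covariances -> identical fit
print(np.allclose(std_loadings(lam_a, phi_a, sig_a),
                  std_loadings(lam_b, phi_b, sig_b)))  # identical standardized loadings
```

The unstandardized loadings and factor variance differ between the two runs, but everything standardized -- and the fit -- is the same.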


2.  What do modification indices mean, say, for "regression weights"? Can you free up parameters flagged by modification indices that are not within-factor correlated residuals?


Remember "split loadings"? We saw these a bit in EFA.  For example, a measure of financial ability might load on both a "financial knowledge" factor and a "test anxiety" factor, if anxiety also influences your test performance.


If you ran a model with both "financial knowledge" and "test anxiety" factors, then your measure of financial ability might want to load on BOTH factors.  If you only estimate the loading from financial ability to the financial knowledge factor, you wouldn't be accounting for the anxiety variance.


In that case, the output might show a high modification index suggesting that your financial ability measure should ALSO have a freely estimated loading on the anxiety factor.


That would be an example where freeing up a parameter that is NOT a within-factor correlated residual might make sense.


3.  What exactly is a modification index again?  How do we know they're significant?


For each parameter in the modification index tables, the "MI" tells you approximately how much chi-square will go down in your next model if you free up that parameter. And "parameter change" gives you an estimate of what the resulting parameter (b-weight, covariance, variance) would be in your new model.


Because AMOS by default only shows modification indices of four and higher, ANY modification index you see -- if you acted on it -- should yield a significant improvement in fit.  That's because the modification index is how much the chi-square would go down by freeing ONE parameter (1 df), and the critical value of chi-square at alpha = .05 and 1 df is 3.84 (i.e., roughly 4).  So you're only seeing modification indices that would significantly improve your model.
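If you want to verify that arithmetic yourself, the chi-square survival function (p-value) with 1 df has a closed form via the error function, so a few lines of standard-library Python will do:

```python
import math

def chi2_sf_1df(x):
    """p-value for a chi-square statistic x with 1 df: P(chi2_1 > x)."""
    # With 1 df, the chi-square CDF is erf(sqrt(x/2)), so the
    # survival function is its complement.
    return 1.0 - math.erf(math.sqrt(x / 2.0))

print(chi2_sf_1df(3.84))  # just above .05 (3.8415 is the exact critical value)
print(chi2_sf_1df(4.00))  # below .05 -- hence AMOS's default display cutoff of 4
```

Any MI of 4 or more therefore corresponds to a one-df chi-square drop with p < .05.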


4. What is the single biggest difference between the correlation approach (question 3) and the regression approach (question 4)?


First, it's a good observation that solutions 3 and 4 are mathematically equivalent. They have the same fit.


The difference is the same as any analysis we've ever done when we do correlation (solution 3) or regression (solution 4).


In correlation, we get the bivariate association between each pair of measures, CONTROLLING FOR NOTHING. And so it is with a CFA.  We get the bivariate associations between each pair of factors, CONTROLLING FOR NOTHING.


In regression, we separate our variables into a DV and IVs. And then we get the UNIQUE AND RELATIVE IMPORTANCE of each predictor CONTROLLING FOR THE OTHER PREDICTORS.


We also get the VARIANCE EXPLAINED in our DV.


And sure enough, these are the things that the regression approach adds in Question 4 that we didn't have in Question 3:


- we now get the relative importance of the two exogenous factors in predicting the third (via the standardized regression weights, or betas), and


- we get the variance explained in the endogenous outcome.
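As a sketch of that difference, with purely hypothetical factor correlations (not from the assignment), the betas and R-squared of the regression approach can be recovered from the very correlations the correlation approach reports:

```python
import numpy as np

# Hypothetical correlations: two exogenous factors (F1, F2) predicting a third (F3).
r12 = 0.30                      # F1 <-> F2
r_y = np.array([0.50, 0.40])    # F1-F3 and F2-F3 (bivariate, controlling for nothing)

R_xx = np.array([[1.0, r12],
                 [r12, 1.0]])   # correlations among the predictors

# Standardized regression weights (betas): each predictor's UNIQUE
# contribution, controlling for the other predictor.
betas = np.linalg.solve(R_xx, r_y)

# Variance explained in the endogenous factor.
r_squared = r_y @ betas

print(betas)      # each beta is smaller than its bivariate correlation here
print(r_squared)  # R^2 for the outcome
```

Note how each beta differs from its zero-order correlation once the predictors' overlap is partialed out -- exactly the "controlling for the other predictors" point above.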


The class really seems to be getting the hang of this AMOS stuff -- I'm quite pleased.




From: CLP6529: Applied Multivariate Methods in Psychology, Fall 2017 [mailto:[log in to unmask]]
Sent: Thursday, November 09, 2017 8:35 AM
To: Michael Marsiske <[log in to unmask]>
Subject: Assignment Unmuted: Week 11 In-class Work, CLP6529: Applied Multivariate Methods in Psychology, Fall 2017


Your instructor has released grade changes and new comments for Week 11 In-class Work. These changes are now viewable.