I ran a set of hierarchical linear regressions on my full dataset (N =
351). I ran the same regressions on a subset of the data (N = 151). I
compared the results to see if the same regressions were significant
for the subset AND the full dataset. Most of them were. Some of them
were not. For the ones that were not, I want to compare the R squares,
because I know that is a measure of effect size in regression. My
question is this: what constitutes a significant difference of R
square? For example, for one of my tests the R square value for the
full dataset was .17, for the same test on the partial dataset the R
square value was .11. Are these values far enough apart to suggest
that the findings are different? I've done about 20 of these
regressions, so a general answer ("a change in R square value of .2
would mean a significant difference") would be more useful than the
answer for the specific example above. Alternatively, I know that
there are guidelines for Cohen's d measure of effect size, such that a
certain number would count as a "large effect." Are there any such
guidelines for R square?
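To illustrate the kind of conversion I have in mind, here is a minimal sketch that turns my two example R square values into Cohen's f², using the standard formula f² = R²/(1 − R²). The benchmark thresholds in the comment are Cohen's conventional ones for f², not my own, and the two R² values are just the example numbers above:

```python
# Cohen's f^2 effect size for a regression R^2: f2 = R^2 / (1 - R^2).
# Conventional benchmarks for f^2 (Cohen, 1988): 0.02 small, 0.15 medium, 0.35 large.
def cohens_f2(r2):
    return r2 / (1.0 - r2)

for label, r2 in [("full dataset", 0.17), ("subset", 0.11)]:
    print(f"{label}: R^2 = {r2:.2f}, f^2 = {cohens_f2(r2):.3f}")
```

This only converts each R² to an effect-size scale; it does not by itself test whether the two values differ significantly, which is the part of my question I'm still unsure about.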