Last week media outlets around the country reported on a study out of the University of Chicago on the relationship between religiosity and altruism in kids. The study can be found here. These are some of the headlines from last week: “Nonreligious children are more generous.” “Religion doesn’t make kids more generous or altruistic, study finds.” “Religion Makes Children More Selfish, Say Scientists.” How this research was portrayed is a case study in what can go wrong when social science research is presented to the public.
The participants of this study were “…1,170 children aged between 5 and 12 years in six countries (Canada, China, Jordan, Turkey, USA, and South Africa).” The key measure of altruism was how many stickers the kids were willing to share with peers. Kids in the non-religious group were willing to share, on average, 4.1 stickers (out of 10), while kids in the Christian group were willing to share 3.3 stickers and kids in the Muslim group 3.2 stickers. The researchers also determined that the correlation between the kids’ religiosity and altruism was -.173 (a negative correlation means that when one variable goes up, the other goes down).
To better understand the confusion in the reporting I need to explain the term “statistically significant.” Research is almost always done with samples that, we hope, represent the population under study. So, let’s say I’m a researcher who believes that 10-year-old boys who eat apples for a year will end up taller than 10-year-old boys who eat onions for a year. I then put together a sample of 800 10-year-old boys, half of whom eat apples and half of whom eat onions for one year. A test of statistical significance tells me, at the end of my study, whether the difference I observe in my sample likely reflects a real difference among all 10-year-old boys (the population) rather than chance. Let’s say at the end of the year my test of statistical significance says that my results are statistically significant. All that means is that the difference I found is unlikely to be due to chance alone (by the standard cutoff, there is a 5% or smaller probability that chance produced it). However, statistical significance tells me nothing about the meaningfulness of the difference. So, let’s say in my study the boys who ate the apples ended up .84 inches taller than the boys who ate the onions. I can tell the media that there is a significant difference between my two groups, and that would be true. But the media and the public equate “significant difference” with “meaningful difference,” and that would be troubling, especially to onion farmers.
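To make that distinction concrete, here is a minimal sketch in Python of the apple-onion comparison. Everything in it is hypothetical and my own assumption (the group sizes, the means, and the 3-inch standard deviation are invented for illustration); the point is only that a .84-inch difference comes out “statistically significant” once the sample is large enough:

```python
# Hypothetical apple-onion data: 400 boys per group, a .84-inch mean
# difference, and a 3-inch standard deviation in each group.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
apple_group = rng.normal(loc=56.84, scale=3.0, size=400)  # heights in inches
onion_group = rng.normal(loc=56.00, scale=3.0, size=400)

# An independent-samples t-test: a standard significance test for two groups.
t_stat, p_value = stats.ttest_ind(apple_group, onion_group)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < .05, so "significant"
```

The small p-value says only that the difference is unlikely to be due to chance; it says nothing about whether .84 inches matters.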
An example of a statistic that speaks to meaningfulness is the effect size (Cohen’s d): by convention, .20 is a small effect, .50 a moderate effect, and .80 a large effect. For correlations, .10 is considered small, .30 moderate, and .50 large.
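To see how such a number is computed, here is a minimal sketch of the standard Cohen’s d calculation, again using my hypothetical apple-onion numbers rather than anything from the study:

```python
import numpy as np

def cohens_d(group_a, group_b):
    """Cohen's d: the mean difference scaled by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    pooled_var = ((n_a - 1) * np.var(group_a, ddof=1) +
                  (n_b - 1) * np.var(group_b, ddof=1)) / (n_a + n_b - 2)
    return (np.mean(group_a) - np.mean(group_b)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
apple_group = rng.normal(56.84, 3.0, 400)  # hypothetical heights in inches
onion_group = rng.normal(56.00, 3.0, 400)
print(f"d = {cohens_d(apple_group, onion_group):.2f}")  # about .28
```

So the same .84-inch difference that was statistically significant corresponds to a small effect by the benchmarks above.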
So, let’s return to the study in question. The effect size for the main analysis (which the authors didn’t report but which I calculated) is .348, closer to the small category than to the moderate one (e.g., there was a .8-sticker difference between the non-religious kids and the Christian kids). Likewise, the correlation of -.173 is small.
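For readers who want to check the arithmetic, the .348 can be read back against the reported means. A caveat: the means come from the study, but the pooled standard deviation below is implied by my calculation rather than reported, so treat it as an assumption:

```python
# Back-of-the-envelope check, reading the .348 effect size as a Cohen's d:
# d = (mean difference) / (pooled SD), so the pooled SD implied by the
# reported means is the mean difference divided by d.
mean_nonreligious = 4.1  # average stickers shared, from the study
mean_christian = 3.3     # average stickers shared, from the study
d = 0.348                # the effect size calculated in the text

implied_pooled_sd = (mean_nonreligious - mean_christian) / d
print(f"implied pooled SD: {implied_pooled_sd:.1f} stickers")  # about 2.3
```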
But we need to return to my apple-onion study to consider another methodological issue. Researchers commonly collect data on other related variables that might moderate the results. Do the apple and onion diets have differing effects on boys who start out shorter than on boys who start out taller? Do boys who are obese have a different outcome than those who are not? Are the results different for boys who exercise than for those who don’t? Including measures like these helps researchers further interpret the meaning and relevance of the results, and in well-constructed studies such analyses are common.
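To show what such an analysis looks like in practice, here is a minimal sketch of a moderator test using my hypothetical apple-onion variables. The data, effect sizes, and variable names are all invented, and statsmodels is simply one common tool for this kind of regression, not anything the study used:

```python
# A moderator is typically tested by adding an interaction term to a
# regression. Here, baseline height is the candidate moderator of the
# diet effect on final height. All numbers are invented.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 800
df = pd.DataFrame({
    "diet": rng.integers(0, 2, n),            # 0 = onions, 1 = apples
    "baseline_height": rng.normal(54, 3, n),  # inches at the start
})
# Simulate a small diet effect that is larger for boys who start out shorter.
df["final_height"] = (df["baseline_height"] + 2.0 + 0.84 * df["diet"]
                      - 0.1 * df["diet"] * (df["baseline_height"] - 54)
                      + rng.normal(0, 1, n))

model = smf.ols("final_height ~ diet * baseline_height", data=df).fit()
print(model.summary().tables[1])  # look at the diet:baseline_height row
```

A significant coefficient on the diet:baseline_height interaction is what would tell the researcher that baseline height moderates the diet effect.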
In the study in question there were numerous potential moderators that were not investigated. These included the presence of mental health problems among the kids, the level of intelligence of the kids, and the number of siblings in each participant’s household, to name a few. Moreover, a key potential moderator variable, socio-economic status, was assessed merely by determining the mother’s level of education. So, even though the results are statistically significant, the effect sizes are small and there are many unanswered questions regarding potential moderators of the findings.
Is this study interesting? Yes. Does it make a useful contribution to the literature? Yes. Does it suggest that parents should alter their religious practices based on its findings? Absolutely not. Moreover, there is a great deal of scientific evidence indicating that numerous physical and psychological advantages are associated with religiosity in children. In next week’s blog I will review some of that science.