McGill wins Article of the Year from the Journal of School Psychology

Photo: Ryan McGill, assistant professor of school psychology, was co-author of the article, which focused on the history and validity of commonly used cognitive tests, such as the IQ test.

Ryan McGill, assistant professor of school psychology at the William & Mary School of Education, recently co-authored an article with Stefan Dombrowski (Rider University) and Gary Canivez (Eastern Illinois University) that received the Article of the Year Award from the Journal of School Psychology (JSP) and the Society for the Study of School Psychology (SSSP). The article, titled “Cognitive Profile Analysis: History, Issues, and Continued Concerns,” appeared in the November 2018 issue of JSP. Concurrent with the award, Elsevier has lifted the paywall on the article for six months, making it available for free download.

A review panel selected it from the 62 articles published in the journal in 2018. The award was announced at the JSP editorial board meeting at the 2019 National Association of School Psychologists (NASP) convention in Atlanta in February, where McGill and his colleagues were presented with a plaque and a $1,500 stipend. They were also invited to present their article at the 2020 NASP convention in Baltimore as part of a special documented session sponsored by SSSP.

McGill, who also serves as director of the school psychology program, was originally drawn to the field of school psychology because he wanted to help children and adolescents succeed in school. He worked for five years in the public schools as a school psychologist before entering academia. “Everything I’ve been able to achieve in my life and value, I can trace back in some way to my education. I’ve benefited from having positive interactions in schools. The overarching goal for me was to help kids develop those same connections with education.” Ultimately, he transitioned to an academic career because he believed that he could influence the broader conversation at a different level, as a researcher trying to answer questions that arose out of his time working as a practitioner.

McGill said that he and his co-authors wrote the article, a review of the history of cognitive test interpretation methods and the relevant psychometric research, to determine whether substantive advances had been made since these matters were last debated in the 1990s. After reviewing the state of contemporary research, they concluded that the evidence base for many popular interpretive methods remains less than compelling.

McGill noted that “we present 30 years of consistently negative evidence suggesting these things may not be as useful as they are often perceived,” adding that while intelligence tests are able to estimate general intelligence relatively well, empirical evidence for the utility of many profile analytic methods is presently lacking. Of particular concern are the profiles commonly used to identify learning disabilities. “From a practical perspective, if you engage in cognitive profile analysis, you begin with the assumption that these scores represent legitimate psychological dimensions. However, numerous research studies have not been able to replicate the models from which these scores are derived. If scores that have questionable psychometric properties are used in a decision-making model, the decisions generated from that model are likely to be flawed.”

McGill pointed out that this is a difficult conversation to have, because many practitioners intuitively believe in the insight afforded by these procedures, a notion often reinforced in pre-service training and subsequent clinical experiences. Furthermore, many interpretive resources and clinical guidebooks within the professional literature suggest that these procedures are valid despite a growing body of empirical literature finding the opposite. In lieu of these procedures, he encourages practitioners to allocate more time to low-inference assessment activities that evaluate “the variables that mediate student learning, such as response to instruction, learning rate, and domain specific skills.” In his opinion, interventions developed from those data are likely to have better ecological validity.

Despite the attention the article has received, McGill acknowledged that more work needs to be done to help practitioners make informed decisions about how best to use cognitive assessments. In his view, it is not a matter of throwing the baby out with the bathwater but of understanding the potential limitations of these tools and using them accordingly. Although debates on these matters have been contentious, McGill noted that momentum for these critical conversations has reached an apex as practitioners seek better and more efficient ways to meet the needs of students in schools.