As a retired professor, I have lots of thoughts on this. It's true that many students don't really know how to write or think critically. Those aren't easy skills to teach, and university systems are making it harder and harder for faculty to teach them. But how did the researchers test for that? There's plenty to write about, but let's just look at the test in this post.
From the AP story at Valleynewslive.com (which carries more of the story than the ADN):
The research found an average-scoring student in fall 2005 scored seven percentage points higher in spring of 2007 on the assessment. In other words, those who entered college in the 50th percentile would rise to the equivalent of the 57th after their sophomore years.
Among the findings outlined in the book and report, which tracked students through four years of college:
- Overall, the picture doesn't brighten much over four years. After four years, 36 percent of students did not demonstrate significant improvement, compared to 45 percent after two.
- Students who studied alone, read and wrote more, attended more selective schools and majored in traditional arts and sciences majors posted greater learning gains.

My first question was: how do you test such a thing? So I looked up the test.
It's written up in a new book, Academically Adrift by Richard Arum and Josipa Roksa, which I don't have. But the test they used is the Collegiate Learning Assessment (CLA), and I found information on it.
From the NYU Steinhardt School of Culture, Education, and Human Development:
Unlike standardized tests such as the GRE or GMAT, the CLA is not a multiple choice test. Instead, the multi-part test is a holistic assessment based on open-ended prompts. “Performance Task” section prompts students with an imagined “real world” scenario, and provides contextual documents that provide evidence and data. The students are asked to assess and synthesize the data and to construct an argument based on their analysis.

So one would find out not just how much trivia a student has memorized for the test, but whether they understand its relevance in context and can write about it. Grading this sort of test isn't easy, but with a rubric and training for the graders you can get reasonably reliable scores. The Performance Task is graded using four factors (see boxes below) and you can see the grading benchmarks for each here.
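I don't have the CLA's actual rubric or scoring formula, but the general idea of rubric scoring with trained graders is easy to sketch. Here's a purely hypothetical illustration - placeholder factor names and a made-up 1-6 scale, not the CLA's - of two graders rating the same essay and a crude check on whether they agree:

```python
# Hypothetical illustration of rubric scoring -- NOT the CLA's actual
# rubric, factor names, scale, or scoring formula.

FACTORS = ["analysis", "evidence", "organization", "writing"]  # placeholder names

def total_score(rubric_scores):
    """Sum one grader's (hypothetical) 1-6 ratings across the four factors."""
    return sum(rubric_scores[f] for f in FACTORS)

# Two trained graders rate the same essay independently.
grader_a = {"analysis": 4, "evidence": 3, "organization": 5, "writing": 4}
grader_b = {"analysis": 4, "evidence": 4, "organization": 5, "writing": 4}

# A crude reliability check: how far apart are the two totals?
a, b = total_score(grader_a), total_score(grader_b)
print(a, b, abs(a - b))  # 16 17 1 -- close; a big gap would typically
                         # send the essay to a third reader
```

Real testing programs compute formal inter-rater reliability statistics, but the underlying logic is the same: independent ratings against common benchmarks, plus some procedure for resolving disagreements.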
The Lumina Longitudinal Study: Summary of Procedures and Findings by Dr. Stephen Klein ("the principal investigator of the Lumina-funded CLA Longitudinal Study (2005-2009)") describes the kinds of test questions:
The Analytic Writing Task consists of two sections. First, students are allotted 45 minutes for the Make-an-Argument task in which they present their perspective on an issue like “Government funding would be better spent on preventing crime than dealing with criminals after the fact.” Next, the Critique-an-Argument task gives students 30 minutes to identify and describe logical flaws in an argument.
Here is one example:
Butter has now been replaced by margarine in Happy Pancake House restaurants throughout the southwestern United States. Only about 2 percent of customers have complained, indicating that 98 people out of 100 are happy with the change. Furthermore, many servers have reported that a number of customers who still ask for butter do not complain when they are given margarine instead. Clearly, either these customers cannot distinguish margarine from butter, or they use the term “butter” to refer to either butter or margarine. Thus, to avoid the expense of purchasing butter, the Happy Pancake House should extend this cost-saving change to its restaurants in the southeast and northeast as well.

How many logical flaws did you find in the argument? Even if you can't name them, can you describe them?
Wait! Don't just move on. Stop and try to find at least one or two flaws in that passage. After all, critical thinking ability is what this is all about. If you're having trouble, at least go look at this list of logical fallacies.
I'm of two minds here. First, I think we should be teaching people how to think critically. After all, this blog is called "What Do I Know?" and the underlying theme - even if it isn't always obvious - is how people know what they know. So I'm all for students learning more about rationality, non-rational ways of knowing, logic, etc.
But not scoring higher on this particular assessment only means that the students didn't improve at the specific skills this test measures. Perhaps the students greatly increased their knowledge of human anatomy or accounting, or their ability to read French, and it would seem the test wouldn't catch that.
A test like this is only fair if the colleges' goal is teaching the skills this test assesses.
But apparently there are other problems. Dr. Klein's team had trouble keeping schools and students in the test pool from year to year.
A total of 9,167 Lumina freshmen completed the fall 2005 testing, but only 1,330 of them (14 percent) eventually completed all three phases of testing. Most of the attrition was due to schools rather than individual students dropping out of the study (although some schools may have dropped out because of difficulty recruiting students to participate). Only 26 (52 percent) of the initial 50 schools tested at least 25 students in both Phases 1 and 3; just 20 schools (40 percent) met the minimum sample size requirements in all three phases. These 20 schools tested 4,748 freshmen in the fall of 2005, 2,327 rising juniors in the spring of 2007, and 1,675 seniors in the spring of 2009.
On the average, a school that stayed in the study for all three phases lost about one-third of its students that participated as freshmen. Thus, although this is a substantial loss, it is far less than the overall attrition rate. We found that dropouts were more likely to be Black or Hispanic, non-native English speakers, and students with total SAT scores about 80 points lower than their classmates. However, even when taken together, these student-level characteristics explained only five percent of the variance in students’ decisions to drop out of the study (but perhaps not from the school). We looked for but did not find any school characteristics associated with dropping out of the study.

I'll have to let others with better statistical skills judge whether the sample that remained was still large enough, and representative enough, to support legitimate conclusions.
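For what it's worth, the percentages in that passage are easy to reproduce. Here's a quick back-of-the-envelope sketch using only the figures quoted above:

```python
# Back-of-the-envelope attrition figures, using only the numbers quoted above.
freshmen_tested = 9_167   # fall 2005
completed_all_3 = 1_330
print(f"completed all three phases: {completed_all_3 / freshmen_tested:.1%}")   # ~14.5%

schools_initial   = 50
schools_phase_1_3 = 26
schools_all_3     = 20
print(f"schools in phases 1 and 3: {schools_phase_1_3 / schools_initial:.0%}")  # 52%
print(f"schools in all three phases: {schools_all_3 / schools_initial:.0%}")    # 40%

# For the 20 schools that met the sample-size requirements throughout:
freshmen_20 = 4_748   # tested fall 2005
juniors_20  = 2_327   # tested spring 2007
seniors_20  = 1_675   # tested spring 2009
print(f"of their tested freshmen, tested again as seniors: {seniors_20 / freshmen_20:.0%}")
```

Whether those retention rates leave a sample that still represents the original population is the statistical question I'm leaving to others.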
What Does It Mean?
If the research is saying, "US college students are not learning basic thinking skills," it's probably right. The next question is whether US colleges are even trying to teach them. (If they aren't trying to teach those skills and are teaching something else instead, then so what?) My experience - and I'll try to do another post on this - is that there are serious institutional barriers to teaching thinking skills, even for faculty who want to. As someone who taught graduate students in relatively small classes, I had the luxury of being able to assign (and comment on in detail) many writing assignments over a semester. But that's for another post.
But if the purpose of the test is to raise awareness that these skills - often bandied about as important - aren't being taught, then that's something else. And should people think this is important, well, they just happen to have a test with which to measure it.
I have no reason to be suspicious here, but we should talk about the cost of the test, because anyone who develops such a test has an incentive to get people to use it. From the Council for Aid to Education:
How much does it cost? The cost is $6,500 for a one-year cross-sectional design with an additional $25 charge for each student tested over the 100 each fall and spring. If you are interested in a more specialized design model, please contact CAE staff to discuss pricing.

One hundred schools would be $650,000 per year. That doesn't mean there is anything wrong with them making money off of this; after all, that's the incentive built into capitalism. And these tests will be costly to grade because you need real people doing it. But one gets into the murky area between objective assessment and promoting self-interest. I have no idea where the money goes (it's a non-profit, and it may well go into laudable activities), but it's a question to keep in mind. After all, some say that No Child Left Behind was a big money-maker for the test makers, who lobbied hard to set up the program.
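To make that estimate explicit, here's a rough sketch of the pricing as I read the quote - my assumption being that the $25 surcharge applies to each student beyond the 100 included in each of the fall and spring administrations:

```python
# Rough cost sketch based on the CAE pricing quoted above.
# Assumption (mine): the $25/student surcharge applies to each student
# beyond the 100 included in each of the fall and spring administrations.

def cla_cost(students_fall=100, students_spring=100):
    base = 6_500  # one-year cross-sectional design
    extra = 25 * (max(0, students_fall - 100) + max(0, students_spring - 100))
    return base + extra

# 100 schools, each testing just the included 100 students per term:
print(100 * cla_cost())       # 650000 -- the $650,000 figure above

# A single school testing 150 students each term would pay somewhat more:
print(cla_cost(150, 150))     # 9000
```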
It's also not clear to me exactly what the relationship is between Dr. Klein and the book authors Richard Arum and Josipa Roksa. The Council for Aid to Education (CAE) website says:
Dr. Stephen Klein, the principal investigator of the Lumina-funded CLA Longitudinal Study (2005-2009)

But an NYU page says:
Richard Arum, professor of sociology and education at New York University with joint appointments at FAS and NYU Steinhardt, and Josipa Roksa, assistant professor of sociology at the University of Virginia, embarked on a multi-year, longitudinal study of more than 2300 undergraduate students at 24 universities across the country. Using a newly developed, state-of-the-art measurement tool, the College Learning Assessment (CLA), the research pair measured the extent to which students improved on these higher order skills. The CLA is a tool developed by the Council for Aid to Education, a national nonprofit based in New York.

Are these different studies?
Another CAE page says this:
The Council for Aid to Education (CAE) heralds the publication of Academically Adrift: Limited Learning on College Campuses, by Richard Arum and Josipa Roksa (Chicago, IL: University of Chicago Press, 2011). This study was made possible by CAE’s policy to make its Collegiate Learning Assessment (CLA) database of over 200,000 student results across hundreds of colleges available to scholars and scholarly organizations. We were pleased to assist the authors in this major Social Science Research Council sponsored project.

But if they were only using existing data, why would the NYU page say that "the research pair measured the extent. . ."? I suppose that if you already know they were just using data collected by someone else, then using that data to "measure the extent . . ." reads as analysis; but to me it sounds as if they conducted the assessment themselves. I don't know which reading is right.
I don't think there's anything fishy here; it's just confusing. Maybe a webmaster put up the wrong stuff.
The NYU site says:
Funding for their research was provided by the Ford, Lumina, Carnegie, and Teagle Foundations.

It used to be that studies from non-profits could be trusted. But we've had a proliferation of "think tanks" set up to produce research that furthers a political or economic agenda, so we always need to look at where the money is coming from and where it's going.