
AP exams, personality traits more likely to predict long-term college success

Long-term success in college may be better predicted with Advanced Placement (AP) exams and personality traits in combination with standard admission practices, according to new research from the Georgia Institute of Technology and Rice University.

The study showed that predicting student graduation may be significantly improved by adding AP exam performance and a small set of personality traits to the college admission process, alongside traditional indicators of student ability and high school grades.

The research also revealed that, on average, males and females who changed their college major from a field in science, technology, engineering or math (STEM) identified different reasons for doing so. Women who changed from a STEM major tended to have lower “self-concepts” in math and science — they were less likely to view themselves in these fields. Men tended to have lower levels of orientation toward “mastery and organization.”

“There has been significant discussion in the domains of educational research and public policy about the difficulties in both attracting and retaining students in STEM majors,” said Margaret Beier, associate professor of psychology at Rice and the study’s co-author. “We’re very interested to know how the role of personality traits and domain knowledge influences the selection and retention of talented students and accounts for gender differences in STEM and non-STEM majors in a selective undergraduate institution.”

Phillip Ackerman, a professor of psychology at the Georgia Institute of Technology and the study’s lead author, said that they also hope university admissions officers consider taking into account what applicants “know,” in addition to their grades and standardized test scores.

“Given that over half of the AP exams are completed prior to the students’ senior year of high school, their actual exam scores could be part of the formal selection process and assist in identifying students most likely to graduate from college/university,” Ackerman said.

The study tracked individual trait measures (such as personality, self-concept and motivation) of 589 undergraduate students at the Georgia Institute of Technology from 2000 to 2008. The selected students were enrolled in Psychology 1000, a one-credit elective course for first-year undergraduates. Questionnaires assessing these trait measures were distributed to approximately 1,100 of the 1,196 students enrolled in the course in fall 2000, and 589 students completed the survey.

The researchers hope their findings will help students, counselors and other stakeholders better match high school elective options to student interests and personal characteristics, and that admissions officers will weigh what applicants learned in those electives alongside grades and standardized test scores.

Ruth Kanfer, a professor of psychology at the Georgia Institute of Technology, co-authored the study with Ackerman and Beier.

The study, “Trait Complex, Cognitive Ability and Domain Knowledge Predictors of Baccalaureate Success, STEM Persistence and Gender Differences,” was funded by the Georgia Institute of Technology and is available online at http://psycnet.apa.org/psycinfo/2013-14499-001.

Story written by Amy Hodges, Rice University.


Of the many factors that contribute to poor performance on standardized tests like the SAT, nerves and exhaustion, surprisingly, may not rank very high. In fact, according to a new paper published in the Journal of Experimental Psychology, a little anxiety — not to mention fatigue — might actually be a very good thing.

The study was conducted by psychology professors Phillip Ackerman and Ruth Kanfer of Georgia Tech. They recruited 239 college freshmen in the Atlanta area, each of whom agreed to take three different versions of the SAT reasoning test, given on three consecutive Saturday mornings. The tests would take three-and-a-half, four-and-a-half and five-and-a-half hours, and would be administered in a random order to each of the students. To boost the stress level in the students — who had already taken the SAT in the past and gotten into college — Ackerman and Kanfer offered a cash bonus to any volunteers who beat their high-school score.

Before the test began on each of the three Saturdays, the students filled out a questionnaire that asked them about their fatigue level, mood and confidence. They completed the questionnaire again at a break in the middle of the test and once more at the end. Together, all of these provided a sort of fever chart of the students’ energy and anxiety throughout the experience.

When the researchers scored the results, it came as no surprise that volunteers’ fatigue and stress rose steadily as the test got longer. What was unexpected was their corresponding performance: as the length of the test increased, so did the students’ scores. The average score on the three-and-a-half hour test was 1,209 out of 1,600. On the four-and-a-half-hour version it was 1,222; on the five-and-a-half-hour test it was 1,237. Virtually all of the students followed that pattern.

“The range of the scores was from about 800 to 1,600,” says Ackerman. “[But] within the study, lower-scoring examinees were not more or less affected by longer test lengths than higher-scoring examinees.”

Certainly, the subjects’ increasing familiarity with the test may have helped account for the improvement; this is just what happens in the real world, after all, when students take the SAT multiple times in an attempt to boost their scores. But in the real world, the test doesn’t keep getting longer; here it did — and yet the scores marched higher all the same. What the researchers believe explains the improvement is fatigue — or more precisely, what the fatigue represents. A feeling of exhaustion is often a stand-in for anxiety. Most students — particularly comparatively high achievers who have already gotten into college — learn to use the stress that accompanies a test as a prod to action and concentration. The experts call the phenomenon “achievement motivation,” or a kind of competitive energy spurt.

“One possibility,” Ackerman says, “is that more students respond to feelings of fatigue by increasing rather than decreasing their efforts.” This, however, could reveal a flaw in the study. By limiting the sample group to college freshmen, the researchers did not get a look at an entire category of kids: those who took the SAT in high school, did poorly, and never went on with their education. There’s no way of knowing whether achievement motivation was absent in those students or whether they redoubled their efforts too, but got low scores for other reasons.

Another concern raised by the work is the fact that the College Board sponsored it. The College Board is, of course, also the sponsor of the SAT. The study’s positive results are likely to be welcomed by the Board, which added a writing section to the SAT in 2005, extending the test from its previous three-hour length to three hours and 45 minutes. The move elicited criticism from educators and parents, who said the test had gotten too long to be a fair assessment of an exhaustible student’s true abilities.

It is not uncommon for commercial groups to bankroll research that bears directly on their business; pharmaceutical companies fund drug trials all the time, for example. No matter how rigorously the research is conducted, however, the risk always exists that researchers’ objectivity may be tainted by their backers’ agenda. But Ackerman insists this is not a concern with his and Kanfer’s work. The data from the study, he says, remained the property of Georgia Tech, not the College Board, and the two groups signed a contract in advance in which the school retained the rights to publish the results no matter what.

So if the study is to be believed and students do perfectly well in a test that runs five-plus hours, what is the practical limit? Six? Seven? Twelve? We may never know. “Testing beyond 5.5 to six hours is not practical,” says Ackerman, “because examinees would need a break of significant time to eat. It’s an open question whether eight or more hours with a lunch break would result in poorer performance.” For now, high school students dreading the SAT probably don’t have to worry that the test is going to get longer. But it’s not likely to get any shorter either.

The original version of this article misstated that study participants took one of three different versions of the SAT reasoning exam; each student took all three versions. The article also stated that students took the tests in ascending order of length, but in fact the tests were administered randomly.
