Artificial Intelligence grading your ‘neuroticism’? Welcome to colleges’ new frontier

Students may soon be surprised to learn that artificial intelligence is increasingly being used to grade, answer questions and even teach at colleges across the country. (Photo: Erin Bormett / Argus Leader)

Students newly accepted by colleges and universities this spring are being deluged by emails and texts in the hope that they will put down their deposits and enroll. If they have questions about deadlines, financial aid and even where to eat on campus, they can get instant answers.

The messages are friendly and informative. But many of them aren’t from humans.

Artificial intelligence, or AI, is being used to shoot off these seemingly personal appeals and deliver pre-written information through chatbots and text personas meant to mimic human banter. It can help a university or college by boosting early deposit rates while cutting down on expensive and time-consuming calls for stretched admissions staffs.

AI has long been quietly embedding itself into higher education in ways like these, often to save money — a need that’s been heightened by pandemic-related budget squeezes.

Now, simple AI-driven tools like these chatbots, plagiarism-detecting software and apps to check spelling and grammar are being joined by new, more powerful – and controversial – applications that answer academic questions, grade assignments, recommend classes and even teach.

The newest can evaluate and score applicants’ personality traits and perceived motivation, and colleges increasingly are using these tools to make admissions and financial aid decisions.

As the presence of this technology on campus grows, so do concerns about it. In at least one case, a seemingly promising use of AI in admissions decisions was halted because, by using algorithms to score applicants based on historical precedent, it perpetuated bias.

Much of the AI-powered software used by colleges and universities remains confined to fairly mundane tasks such as improving back-office workflow, said Eric Wang, senior director of AI at Turnitin, a service many institutions use to check for plagiarism.

“Where you start seeing things that get a bit more worrying,” he said, “is when AI gets into higher-stakes types of decisions.”

Among those are predicting how well students might do if admitted and assessing their financial need. 

Hundreds of colleges subscribe to private platforms that do intensive data analysis about past classes and use it to score applicants for admission on factors such as the likelihood they will enroll, the amount of financial aid they’ll need, the probability they’ll graduate and how likely they are to be engaged alumni.

Humans always make the final calls, these colleges and the AI companies say, but AI can help them narrow the field.
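
None of these platforms publish their models, but the general approach they describe, a statistical model trained on records of past classes and used to score new applicants, can be sketched in a few lines of code. Everything in the sketch below (the feature names, the data, the choice of library) is invented for illustration:

```python
# Illustrative sketch only: a model fit on hypothetical records of past applicants,
# then used to score a new applicant's likelihood of enrolling. The features and
# numbers are made up; no vendor's actual model is shown here.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical past applicants: [visited_campus, high_school_gpa, miles_from_campus, applied_for_aid]
past_applicants = np.array([
    [1, 3.8,  40, 1],
    [0, 3.2, 600, 1],
    [1, 3.5, 120, 0],
    [0, 3.9, 900, 0],
    [1, 3.0,  15, 1],
    [0, 3.7, 450, 0],
])
enrolled = np.array([1, 0, 1, 0, 1, 0])  # whether each past applicant ultimately enrolled

model = LogisticRegression().fit(past_applicants, enrolled)

# Score a new applicant; staff might use this probability to prioritize outreach.
new_applicant = np.array([[1, 3.6, 200, 1]])
print(f"estimated enrollment probability: {model.predict_proba(new_applicant)[0, 1]:.2f}")
```

A model built the same way but trained on different outcomes could just as easily estimate financial aid need or graduation odds, which is why the quality of the historical records it learns from matters so much.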

Baylor, Boston and Wake Forest universities are among those that have used the Canadian company Kira Talent, which offers a review system that can score an applicant’s “personality traits and soft skills” based on a recorded, AI-reviewed video the student submits. A company presentation shows students being scored on a five-point scale in areas such as openness, motivation, agreeableness and “neuroticism.”

New York University, Southeast Missouri State University and other schools have used a service called Element451, which rates prospects’ potential for success based on how they interact with a school’s website and respond to its messages.

The result is 20 times more predictive than relying on demographics alone, the company says.

Once admitted, many students now get messages from companies like AdmitHub, which advertises a customizable chatbot and text message platform that the company calls “conversational AI” to “nudge” accepted applicants into putting down deposits. The company says it has reached more than 3 million students this way on behalf of hundreds of university and college clients.

Bots such as “Pounce” at Georgia State University answer questions for potential students using artificial intelligence. (Photo: Georgia State University)

Georgia State University, which pioneered the use of these chatbots, says its version, named Pounce, has delivered hundreds of thousands of answers to questions from potential students since it launched in 2016 and reduced “summer melt” — the incidence of students enrolling in the spring but failing to show up in the fall — by 20%.

Georgia State was also among the first to develop inexpensive, always-on AI teaching assistants, ready to answer student questions about course material. Theirs is called Jill Watson, and studies found that some students couldn’t tell they were engaging with AI and not a human teaching assistant.

Staffordshire University in England offers students a “digital friend,” an AI teaching assistant named Beacon that can recommend reading resources and connect students with tutors. Australia’s Deakin University has an AI assistant named Genie that knows whether a student asking a question has engaged with specific online course materials. Genie can also check students’ locations and activities to determine whether they’ve visited the library, or tell them when they’ve spent too long in the dining hall and prompt them to move along.

Many colleges increasingly use AI to grade students, as online classes grow too large for instructors to handle the grading themselves.

The pandemic has hastened the shift to those kinds of classes. Even before that, however, Southern New Hampshire University — with 97% of its nearly 150,000 students exclusively online — was working on ways that AI could be used to grade large numbers of students quickly, said Faby Gagne, executive director of its research and development arm.

SNHU is also starting to use AI not just to grade students but to teach them. Gagne has been experimenting with having AI monitor such things as speech, movement and the speed with which a student responds to video lessons, and use that information to score achievement.

Georgia State University developed "Jill Watson," an inexpensive, always-on AI teaching assistant that was able to answer student questions about course material. Studies found that some students couldn't tell they were engaging with AI and not a human teaching assistant. (Photo: Georgia State University)

Turnitin, best known for checking for plagiarism, also sells AI language comprehension products to assess subjective written work. One tool can sort written assignments into batches, allowing a teacher to correct a mistake or give guidance just once instead of highlighting, commenting on and grading the same mistake again and again. The company says instructors check to verify that the machine made the correct assessment, and that eliminating repetitive work gives them more time to teach.
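
Turnitin has not published how that sorting works, but the underlying idea, grouping similar responses so one comment can cover many of them, can be illustrated with standard text-clustering tools. The student answers and the number of groups below are invented:

```python
# Rough illustration of batching similar answers, not Turnitin's actual method:
# vectorize short answers and cluster them so one comment can cover a whole group.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

answers = [
    "Photosynthesis converts sunlight into chemical energy.",
    "Plants use photosynthesis to turn light into chemical energy.",
    "Mitosis is the process by which one cell divides into two identical cells.",
    "Cell division through mitosis produces two identical daughter cells.",
]

vectors = TfidfVectorizer(stop_words="english").fit_transform(answers)
groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

for group, answer in sorted(zip(groups, answers)):
    print(group, answer)  # similar answers land in the same group, one shared comment per group
```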

AI tools are also being sold to colleges to make decisions once made by faculty. ElevateU, for example, uses AI to analyze student data and deliver individualized learning content to students based on how they answered questions. If the program determines that a particular student will do better with a video lesson as opposed to a written one, that’s what he or she gets.
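
ElevateU does not detail how those determinations are made. One simple way a system could make such a call, sketched here with invented data, is to compare how a student has scored after each lesson format and serve whichever has worked better:

```python
# Toy illustration of format selection, not ElevateU's actual logic: serve the
# lesson format (video vs. written) that has produced better quiz scores for this student.
from statistics import mean

quiz_scores_by_format = {   # hypothetical history for one student
    "video":   [0.90, 0.85, 0.80],
    "written": [0.60, 0.70, 0.65],
}

def next_lesson_format(history: dict[str, list[float]]) -> str:
    # Pick the format with the highest average past score.
    return max(history, key=lambda fmt: mean(history[fmt]))

print(next_lesson_format(quiz_scores_by_format))  # -> video
```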

But some research suggests that AI tools can be wrong, or even gamed. A team at MIT used a computer to create an essentially meaningless essay that nonetheless included all the prompts an AI essay reader searches for. The AI gave the gibberish a high score.
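
The MIT experiment is not reproduced here, but the failure mode is easy to demonstrate: a scorer that rewards surface features such as length, transition words and impressive vocabulary, rather than meaning, can be satisfied by text that says nothing. The scorer and the gibberish below are invented for illustration:

```python
# Invented, deliberately naive essay scorer that rewards surface features only.
# It is not the system the MIT researchers tested, but it shows why gibberish
# stuffed with the "right" signals can earn a high score.
IMPRESSIVE_WORDS = {"paradigm", "ontological", "epistemic", "axiomatic",
                    "moreover", "consequently", "nevertheless"}

def naive_essay_score(text: str) -> float:
    words = [w.strip(".,;") for w in text.lower().split()]
    length_score = min(len(words) / 250, 1.0)                               # rewards length
    vocab_score = min(sum(w in IMPRESSIVE_WORDS for w in words) / 10, 1.0)  # rewards buzzwords
    return round(3 * (length_score + vocab_score), 1)                       # 0 to 6 scale

gibberish = ("Moreover, the ontological paradigm nevertheless recapitulates "
             "epistemic flux; consequently, the axiomatic substrate transcends "
             "quixotic resonance. ") * 25
print(naive_essay_score(gibberish))  # prints 6.0 despite being meaningless
```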

In Spain, an AI bot named Lola answered more than 38,700 student questions with a 91.7% accuracy rate — meaning it gave out at least 3,200 wrong or incomplete answers.

“AI alone is not a good judge of human behavior or intention,” said Jarrod Morgan, the founder and chief strategy officer at ProctorU, which schools hire to manage and observe the tests students take online. “We found that people are better at this than machines are, pretty much across the board.”

The University of St. Thomas in Minnesota said it tested, but did not deploy, an AI system that can scan and analyze students’ facial expressions to determine whether they’re engaged or understand the material. The system would immediately tell professors or others which students were becoming bored or which points in a lecture required repeating or punching up.

And researchers at the University of California, Santa Barbara, studied whether students got more emotional reinforcement from animated than from real-life instructors and found that, while students recognized emotion in both human and animated teachers, they had stronger, more accurate perceptions of emotions such as “happy” and “frustrated” when the instructors were human.

Many people “think AI is smarter than people,” said Wang of Turnitin. “But the AI is us. It’s a mirror that reflects us to us, and sometimes in very exaggerated ways.” Those ways, Wang said, underscore that the data AI often uses is a record of what people have done in the past. That’s an issue because “we are more prone to accept recommendations that reinforce who we are.”

The University of Texas at Austin built an artificial intelligence system to evaluate applicants to a graduate program in computer science, but dropped it after finding it had the potential to reinforce bias. (Photo: Jackie Mader/The Hechinger Report)

That’s what happened with GRADE, the GRaduate ADmissions Evaluator, an AI evaluation system built and used by the graduate program in computer science at the University of Texas at Austin. GRADE reviewed applications and assigned scores based on the likelihood of admission by a review committee. The goal was to reduce human time spent reviewing the increasing pile of applications, which GRADE did, cutting review time by 74%.

But the university dropped GRADE last year, agreeing that it had the potential to replicate superficial biases in the scoring: it boosted some applications not because they were strong, but because they resembled the kinds of applications that had been approved in the past.
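
GRADE itself is not reproduced here, but the feedback loop the university worried about can be shown with synthetic data: a model trained only on past committee decisions learns whatever patterns those decisions contained, including superficial ones. The feature names and numbers below are invented:

```python
# Synthetic illustration of how training on past decisions can replicate bias.
# The features are made up; this is not GRADE or its data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
ability = rng.normal(size=n)                   # the quality we would like to measure
familiar_profile = rng.integers(0, 2, size=n)  # a superficial trait, e.g. a "familiar-looking" application

# Suppose past committees favored the familiar profile independent of ability.
past_admitted = (ability + 1.5 * familiar_profile + rng.normal(scale=0.5, size=n)) > 1.0

features = np.column_stack([ability, familiar_profile])
model = LogisticRegression().fit(features, past_admitted)

# The trained model gives real weight to the superficial trait, so new applications
# score higher simply for resembling those approved in the past.
print(dict(zip(["ability", "familiar_profile"], model.coef_[0].round(2))))
```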

These types of reinforcing biases that can surface in AI “can be tested initially and frequently,” said Kirsten Martin, a professor of technology ethics at the University of Notre Dame. “But universities would be making a mistake if they thought that automating decisions somehow relieved them of their ethical and legal obligations.”

This story was produced by The Hechinger Report, a nonprofit, independent news organization focused on inequality and innovation in education.
