Generative artificial intelligence holds “tremendous promise” in nearly every facet of higher education, but the technology needs guardrails, policies and strong governance, according to a new report.

The report from MIT SMR Connections, a subsection within MIT Sloan Management Review, describes itself as a “strategy guide” for responsibly using generative AI in higher ed. It highlights several institutional practices that have produced positive results in the two years since ChatGPT debuted in November 2022 and kicked off a flood of AI tools and applications.

The report urges institutions to establish guidelines, pointing to a University of Michigan AI committee report and toolkit, which included the university’s own version of ChatGPT.

AI guidelines can be flexible, the report said. MIT’s Sloan School of Management, for example, places AI policies on a spectrum ranging from restrictive to experimental. 

“I thought that was wise because, depending on the subject and the faculty member, there may be a different approach and philosophy,” Ben Shields, a senior lecturer at MIT Sloan, said in the report. He opted for a more open generative AI usage policy in his own courses.

Many institutions have declined to create guidelines of any kind, which the report calls a grave mistake.

“The world has changed,” the report said. “Trying to ban generative AI altogether, educators interviewed for this report agreed, is simply unrealistic.”

In addition to establishing guidelines, the strategy guide suggests preparing higher education staff and faculty for an “AI world.”

Faculty at Arizona State University, for example, are offered AI literacy courses, while the University of Michigan provides free AI literacy training for staff, faculty and students. Texas A&M University offers a session that lets faculty experiment with a variety of AI tools to see their benefits and shortcomings.

A major criticism of generative AI is its tendency to fabricate information or give wrong answers. The report, along with most experts, advises having a human review AI-produced work. Jake Hofman, a senior principal researcher at Microsoft Research, also suggests having AI tools label their results by confidence.

“For instance, the tool could code results with a green, yellow, or red light depending on the degree of certainty of its accuracy,” he said.
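In code, Hofman’s idea might look something like the minimal sketch below. It is purely illustrative: the traffic_light function, the numeric confidence score, and the 0.9 and 0.6 cutoffs are assumptions for this example, not part of any real tool’s API or of the report itself.

```python
# Hypothetical sketch of a confidence "traffic light" for AI output.
# The thresholds below are arbitrary placeholders; a real deployment
# would calibrate them against measured accuracy.

def traffic_light(confidence: float) -> str:
    """Map a model's self-reported confidence (0.0-1.0) to a label."""
    if confidence >= 0.9:
        return "green"   # likely accurate; light human review
    if confidence >= 0.6:
        return "yellow"  # uncertain; human review recommended
    return "red"         # low confidence; verify before use

# Example: annotate AI-generated answers before showing them to a user.
answers = [
    ("The syllabus allows AI use for first drafts.", 0.95),
    ("The assignment deadline is March 3.", 0.55),
]
for text, conf in answers:
    print(f"[{traffic_light(conf)}] {text}")
```

The design point, as Hofman frames it, is not the specific thresholds but surfacing uncertainty to the human reviewer rather than presenting every answer with equal authority.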

The report, sponsored by education technology provider Anthology, was released earlier this month. MIT SMR Connections says that while it takes sponsor input, it maintains control over the final content of its reports.
