
Fake quotes show Alaska education officials relied on generative AI, raising broader questions

ANCHORAGE, Alaska (Alaska Beacon) – The state’s top education official used generative artificial intelligence to draft a proposed policy on cellphone use in Alaska schools, resulting in a state document that cited purported academic studies that do not exist, the Alaska Beacon’s Claire Stremple reports.

The document did not disclose that AI had been used in its preparation. At least some of the AI-generated false information ended up before members of the state Board of Education and Early Development.

Policymakers in education and elsewhere in government rely on well-supported research. The commissioner’s use of false, AI-generated content points to the lack of any state policy on the use of AI tools, at a time when public trust depends on knowing that the sources informing government decisions are not only accurate, but real.

A department spokesperson initially called the fake sources “placeholders.” They were cited in the body of a resolution posted on the department’s website ahead of a state school board meeting held this month in the Matanuska-Susitna Borough.

Later, state Education Commissioner Deena Bishop said the citations were part of an early draft for which she had used generative AI. She said she realized her mistake before the meeting and sent corrected citations to board members. The board adopted the resolution.

However, erroneous references and other vestiges of so-called “AI hallucination” remain in the corrected document later distributed by the department, which Bishop said was the version the board voted on.

The resolution directs the state Department of Education and Early Development, known as DEED, to develop a model policy for cellphone restrictions. The resolution published on the state website cited purported scholarly articles that cannot be found at the web addresses listed and whose titles do not turn up in broader online searches.

Four of the six citations in the document appeared to be studies published in scientific journals, but they were fabricated. The journals the state cited exist, but the titles the department referenced do not appear in the listed issues; the listed links instead lead to works on entirely different topics.

Ellie Pavlick, an assistant professor of computer science and linguistics at Brown University and a research scientist at Google DeepMind, looked at the citations and said they resembled other fake citations she has seen AI systems generate.

“This is exactly the type of pattern you see with AI hallucinated quotes,” she said.

A hallucination is the term for when an AI system generates misleading or false information, usually because the model lacks sufficient data or makes incorrect assumptions.

“It’s very common to see these fake citations that reference a real journal, sometimes even a real person, with a plausible-sounding title, but that don’t correspond to anything real,” she said. “This is exactly the pattern of citations you would expect from a language model – at least, we’ve seen them do things like this.”

The document’s reference section includes URLs that lead to scholarly articles on different topics. Instead of “Mobile Phone Ban Improves Student Performance: Evidence from a Quasi-Experiment” in the journal Computers in Human Behavior, the state’s URL led to “Sexualized Behaviors on Facebook,” a different article from the same publication. A search for the cited title yielded no results. The same is true of two studies the state said appeared in the Journal of Educational Psychology.

After the Alaska Beacon asked the department to produce the fake studies, officials updated the document online. When asked whether the department had used AI, spokesperson Bryan Zadalis said the citations had simply served as placeholders until the correct information was inserted.

“Most of the sources listed were placeholders during the drafting process, used while the final sources were vetted, compared and reviewed. This is a process many of us have become accustomed to working with,” he wrote in an email Friday.

Bishop later said it was a first draft that had been published in error. She said she used generative AI to draft the documents and had corrected the errors.

But remnants of the AI-generated document can still be found in the document that Bishop said the board reviewed and voted on.

For example, the department’s updated document still refers readers to a fictional 2019 study from the American Psychological Association to support the resolution’s claim that “students in schools with cellphone restrictions showed lower levels of stress and higher academic performance.” The new citation leads to a study that examines mental health rather than academic outcomes. Notably, that study did not find a direct correlation between cellphone use and depression or loneliness.

Although that claim is not correctly supported in the document, there is a study showing that smartphones affect lesson comprehension and well-being – but among college students rather than adolescents. Melissa DiMartino, a researcher and professor at New York Tech who published that study, said that although she has not studied cellphones’ effects on adolescents, she believes her findings would only be amplified in that population.

“Their brains are still developing. They are very malleable. And if you look at the research on smartphones, it largely mirrors the research on substance dependence or any other type of addictive behavior,” she said.

She said the tricky part of studying adolescents, as the titles of the state’s fake studies purport to do, is that researchers must obtain permission from schools to conduct research on their students.

The department updated the document online Friday, after multiple inquiries from the Alaska Beacon about the origin of the sources. The updated reference list replaced the citation of a nonexistent article in the more than 100-year-old Journal of Educational Psychology with an actual article from the Malaysian Online Journal of Educational Technology.

Bishop said there was “nothing nefarious” at play with the errors and no discernible harm was caused by the incident.

The fake citations, however, show how AI-generated misinformation can shape state policy – particularly if senior state officials use the technology as a writing shortcut that lets errors slip into public documents and official resolutions.

The statement from the education department spokesperson suggests that the use of such “placeholders” is not unusual at the department. That kind of error could easily recur if those placeholders are routinely AI-generated content.

Pavlick, the AI expert, said the situation points to a broader question of where people get their information and how misinformation circulates.

“I think there’s also a real concern, especially when people in positions of authority use this, because of this kind of breakdown of trust that already exists, right?” she said. “Once it repeatedly appears that information is false, whether intentionally or not, then it becomes easy to dismiss anything as false.”

In this case, scholarly articles – a long-accepted way of backing an argument with research, data and facts – are at issue, which could erode how much they can be trusted.

“I think a lot of people view AI as a substitute for search, because in some ways it feels similar. Like, they’re at their computer, they’re typing into a text box and they’re getting these answers,” she said.

She pointed to a court case last year in which a lawyer used an AI chatbot to draft a legal brief. The chatbot cited fake cases that the lawyer then submitted in court, prompting the judge to consider sanctioning the lawyer. Pavlick said those errors are reminiscent of what happened with the DEED document.

She said it is concerning that the technology has become so widely used without the public better understanding how it works.

“I don’t know whose responsibility it really is – it’s probably more on us, the AI community, right, to educate people better, because it’s hard to blame people for not understanding, for not realizing that they have to treat this differently from other search tools and other technologies,” she said.

She said building AI literacy is one way to prevent misuse of the technology, but there are no universally agreed best practices for how that should happen.

“I think a few more examples like this will hopefully raise awareness, so that the whole country, the world, takes a little more interest in how this plays out,” she said.

Editor’s note: This story has been republished with permission from the Alaska Beacon.