Academics accuse AI startups of co-opting peer review for publicity

There’s a controversy brewing over “AI-generated” studies submitted to this year’s ICLR, a long-running academic conference focused on AI.

At least three AI labs — Sakana, Intology, and Autoscience — claim to have used AI to generate studies that were accepted to ICLR workshops. At conferences like ICLR, workshop organizers typically review studies for publication in the conference’s workshop track.

Sakana informed ICLR leaders before it submitted its AI-generated papers and obtained the peer reviewers’ consent. The other two labs — Intology and Autoscience — did not, an ICLR spokesperson confirmed to TechCrunch.

Several AI academics took to social media to criticize Intology and Autoscience’s stunts as co-opting the scientific peer review process.

“All these AI scientist papers are using peer-reviewed venues as their human evals, but no one consented to providing this free labor,” wrote Prithviraj Ammanabrolu, an assistant computer science professor at UC San Diego, in an X post. “It makes me lose respect for all those involved regardless of how impressive the system is. Please disclose this to the editors.”

As the critics noted, peer review is a time-consuming, labor-intensive, and mostly volunteer ordeal. According to one recent Nature survey, 40% of academics spend two to four hours reviewing a single study. That work has been escalating. The number of papers submitted to the largest AI conference, NeurIPS, grew to 17,491 last year, up 41% from 12,345 in 2023.

Academia already had an AI-generated copy problem. One analysis found that between 6.5% and 16.9% of papers submitted to AI conferences in 2023 likely contained synthetic text. But AI companies using peer review to effectively benchmark and advertise their tech is a relatively new occurrence.

“[Intology’s] papers received unanimously positive reviews,” Intology wrote in a post on X touting its ICLR results. In the same post, the company went on to claim that workshop reviewers praised one of its AI-generated study’s “clever idea[s].”

Academics didn’t look kindly on this.

Ashwinee Panda, a postdoctoral fellow at the University of Maryland, said in an X post that submitting AI-generated papers without giving workshop organizers the right to refuse them showed a “lack of respect for human reviewers’ time.”

“Sakana reached out asking whether we would be willing to participate in their experiment for the workshop I’m organizing at ICLR,” Panda added, “and I (we) said no […] I think submitting AI papers to a venue without contacting the [reviewers] is bad.”

Not for nothing, many researchers are skeptical that AI-generated papers are worth the peer review effort.

Sakana itself admitted that its AI made “embarrassing” citation errors, and that only one of the three AI-generated papers the company chose to submit would’ve met the bar for conference acceptance. In the interest of transparency and out of respect for ICLR convention, Sakana withdrew its paper before it could be published, the company said.

Alexander Doria, the co-founder of AI startup Pleias, said that the raft of surreptitious synthetic ICLR submissions pointed to the need for a “regulated company/public agency” to perform “high-quality” AI-generated study evaluations for a price.

“Evals [should be] done by researchers fully compensated for their time,” Doria said in a series of posts on X. “Academia is not there to outsource free [AI] evals.”
