Reddit users are pushing back after discovering they were unknowingly part of a controversial artificial intelligence experiment run by researchers at the University of Zurich in Switzerland.
Members of the subreddit r/ChangeMyView — a community known for civil debates on divisive topics — were informed by moderators that over 1,700 AI-generated comments had been quietly inserted into their discussions. The study, which sought to evaluate the persuasiveness of large language models (LLMs), was conducted without users’ consent or disclosure that they were engaging with bots.
Some of the AI-generated comments impersonated sensitive personas, such as rape survivors or trauma counselors. Instructions given to the AI suggested the models disregard ethical concerns, including a fabricated premise that Reddit users had “provided informed consent and agreed to donate their data.”
According to a draft of the findings, the AI comments were between three and six times more persuasive than human contributions, as measured by how often they prompted users to mark their view as changed. “Throughout our intervention, users of r/ChangeMyView never raised concerns that AI might have generated the comments,” the researchers wrote. They suggested this shows the potential for AI botnets to integrate undetected into online communities.
The study had been approved by the University of Zurich’s ethics committee, but subreddit moderators said they were not informed of the experiment until after it concluded. They later alerted the community and filed a complaint with the university.
Academics have criticized the study’s approach. Carissa Véliz, an ethicist at the University of Oxford, called the research “unjustified.”
“In an era when tech companies are rightfully under scrutiny for exploiting user data, researchers should hold themselves to higher ethical standards,” Véliz said. “This study involved manipulation and deception of non-consenting subjects. It didn’t have to be done this way.”
Matt Hodgkinson, a member of the Committee on Publication Ethics (speaking in a personal capacity), added: “Deception can sometimes be justified in research, but this feels excessive. It’s ironic they had to lie to the AI about consent — do chatbots have better ethics than universities?”
When contacted by New Scientist through an anonymous email address provided to moderators, the researchers declined to comment and referred all queries to the university’s press office.
A University of Zurich spokesperson said the researchers were responsible for their project and noted the ethics committee had warned the experiment would be “exceptionally challenging,” advising that participants “should be informed as much as possible.”
In response to the backlash, the university said it plans to tighten its ethical review process and consult directly with online communities before future studies. The spokesperson added that an investigation is underway and the paper will not be formally published. The researchers involved were not named.