The White House’s ambitious “Making America Healthy Again” (MAHA) report is facing mounting scrutiny after experts found that dozens of its scientific citations appear to have been generated using artificial intelligence, resulting in fabricated studies, duplicated entries and incorrect authorship.
Out of 522 footnotes reviewed by The Washington Post in the report’s initial version, at least 37 citations appeared multiple times. Some included broken links, authors who do not exist, or references to studies that were never published. The presence of “oaicite” in several URLs – a marker associated with the AI company OpenAI – strongly suggests AI tools were used in assembling the report, according to researchers.
AI-generated content is often marked by repetitive phrasing and a phenomenon known as “hallucination,” where plausible-sounding but false information is produced. That pattern was evident in the MAHA report, according to AI experts.
“Frankly, that’s shoddy work,” said Oren Etzioni, a professor emeritus at the University of Washington and AI researcher. “We deserve better.”
The MAHA report, led by Health and Human Services Secretary Robert F. Kennedy Jr. and compiled by a commission of Cabinet officials and government scientists, was produced in response to an executive order by President Donald Trump. It links declining U.S. health outcomes to environmental toxins, nutrition deficiencies and excessive screen time.
However, several key references were found to be inaccurate or nonexistent. For example, one citation supposedly supporting the overprescription of oral corticosteroids for children with asthma pointed to a study that does not exist. A separate, real study published in Pediatrics with a similar topic was later inserted in its place.
Another example involved a U.S. News & World Report article on recess time for children. Initially cited with incomplete links and misattributed authors, the reference was later corrected to name the actual author, Kate Rix.
A Post analysis revealed that at least 21 of the 522 citations contained dead links, and some reused the same references under different footnote numbers.
The credibility crisis echoes similar incidents. Former New York Gov. Andrew Cuomo faced backlash recently when a housing policy report tied to his mayoral campaign included garbled citations from ChatGPT. Legal professionals have also been penalized for submitting briefs containing AI-generated, fictitious case law.
Georges C. Benjamin, executive director of the American Public Health Association, said the report lacks the integrity required for serious policy use.
“This is not an evidence-based report, and for all practical purposes, it should be junked,” he said. “It cannot be used for policymaking or serious discussion.”
When asked about the citation issues during a briefing Thursday, White House press secretary Karoline Leavitt acknowledged formatting errors but defended the report’s overall findings.
“We have complete confidence in Secretary Kennedy and his team,” she said. “These minor issues don’t negate the report’s substance, which is one of the most transformative health documents ever produced by the federal government.”
As of Thursday evening, multiple updates had been made to the online version of the report. Mentions of “corrected hyperlinks” were removed, and at least one “oaicite” marker – referencing a New York Times Wirecutter article – was taken out later that night.
A spokesperson for HHS, Andrew Nixon, emphasized that only minor citation and formatting errors had been addressed, and said the core findings remain unchanged.
“Under President Trump and Secretary Kennedy, the federal government is finally confronting the chronic disease epidemic affecting our children,” Nixon said. “It’s time for the media to focus on that.”
Kennedy has previously advocated for the use of AI in healthcare, recently testifying before Congress about an AI nurse prototype that could improve care in rural areas.
Peter Lurie, president of the nonprofit Center for Science in the Public Interest, said he wasn’t surprised by the report’s AI links. His organization was cited in the report – inaccurately – as being affiliated with the USDA and HHS.
“The idea that they would wrap themselves in the credibility of science while relying heavily on AI is shockingly hypocritical,” said Lurie, a former Food and Drug Administration official.
Steven Piantadosi, a professor of psychology and neuroscience at the University of California at Berkeley, said AI still lacks basic mechanisms to weigh evidence or assess truth.
“AI is not trustworthy,” Piantadosi said. “It doesn’t understand logic or evidence. It just predicts what text should come next.”
The Post previously reported that the report contained several misleading interpretations of scientific research – and that its conclusions often stretched the boundaries of credible science.