“Text generated by large language models (LLMs) often violates several of Wikipedia’s core content policies. For this reason, the use of LLMs to generate or rewrite article content is prohibited, save for the exceptions given below,” reads the article. “Editors are permitted to use LLMs to suggest basic copyedits to their own writing, and to incorporate some of them after human review, provided the LLM does not introduce content of its own. Caution is required, because LLMs can go beyond what you ask of them and change the meaning of the text such that it is not supported by the sources cited.” Anyone who has used Grammarly before and after its embrace of AI is likely already well aware of this.
Some use of LLMs for translation purposes is still permitted, according to the article. Since human-written text can sometimes sound similar to AI output, editors are encouraged “to consider the text’s compliance with core content policies and recent edits by the editor in question.”
AI-generated copy was a somewhat fraught topic at Wikipedia as recently as last year. As of last June, the site had experimented with producing AI-generated overviews to sit at the top of articles. This sparked widespread backlash from the site’s editors; it also seemed pretty redundant, given that an overview of a topic basically already exists in a page’s introduction. Even then, though, the site was already taking steps to curb the AI slop content that was beginning to proliferate. Hopefully the new rules make that task easier!