Though OpenAI's policy once prohibited its own tech from being used for military purposes or warfare, as with most tech companies, "don't be evil" turned out to be more of a suggestion than a rule. The new $200 million contract with the Department of Defense doesn't include the development of weaponry, but will focus on "administrative operations" and cybersecurity, per The Verge. The DoD says that OpenAI "will develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains." At a time when there are, um, increased global tensions, to say the least, it's not particularly comforting to know the DoD will be using the same LLMs that have documented hallucinations and have sparked cult-like worship and even violence in some of their users (per a particularly sobering New York Times report last week).
Meanwhile, Mattel's OpenAI partnership will be used to develop smart toys, according to Deadline. Internally, the company said it plans to "incorporate OpenAI's tools such as ChatGPT Enterprise into its business operations to enhance product development and creative ideation, drive innovation, and deepen engagement with its audience." Externally, the partnership will help "bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety." So mass-market kids' toys will incorporate the same technology that has documented hallucinations and, well, you get the picture. There's apparently no stopping AI from being incorporated into modern life, but the "safety" bit doesn't seem quite guaranteed yet (not to mention the other ethical and environmental concerns). Will there be any industry untouched by OpenAI by the end of 2025?