The program has apparently imposed regulations targeting AI misinformation, including using AI to create and spread rumors, generate pornographic or violent images, impersonate others, manipulate web traffic, conduct “online trolling,” or abuse minors. According to Futurism, “authorities have been making examples out of a number of misinformation peddlers,” which does sound markedly less good.
China has generally been at the forefront of creating and enacting AI regulation. A report from the International Association of Privacy Professionals earlier this year states that “local authorities, such as in Shanghai and Shenzhen have issued their own experimental regulations to test case different regulatory approaches, although these remain relatively light-touch in terms of prescriptive obligations on companies” and have not been “adopted at the central level.” Further, the country has proposed “an international body designed to foster international collaboration on AI development and regulation,” which would “coordinate global efforts, complement the work of the UN, share China’s advancements in AI, and help prevent monopolistic control by a few countries or corporations,” according to the American National Standards Institute.
While China attempts to regulate how its citizens use AI, a report from OpenAI in June alleged that Chinese propagandists were already using the tech in “malicious ways,” per NPR. “What we’re seeing from China is a growing range of covert operations using a growing range of tactics,” said Ben Nimmo, principal investigator on OpenAI’s intelligence and investigations team. These Chinese operatives allegedly used ChatGPT and other OpenAI tools in ways that “targeted many different countries and topics…. Some of them combined elements of influence operations, social engineering, surveillance,” Nimmo said. In other words, artificial intelligence regulations are proving pretty necessary, even and especially at the national security level.