OpenAI identifies Chinese users leveraging ChatGPT for monitoring, profiling tools

OpenAI has said that suspected Chinese government-linked users attempted to use ChatGPT to design proposals and marketing materials for mass-surveillance tools, including systems to monitor Uyghurs and scan social media for political or religious speech.

In a newly released threat-intelligence report titled “Disrupting Malicious Uses of Our Models,” OpenAI outlined multiple cases in which accounts tied to state-affiliated actors used its artificial intelligence tools for potentially repressive purposes.

Users linked to Chinese entities

One of the most striking examples involved a ChatGPT user “likely connected to a [Chinese] government entity” who asked the model to draft a proposal for what was described as a “High-Risk Uyghur-Related Inflow Warning Model.”

The proposal detailed plans for a system that would analyse transport bookings and compare them with police databases to issue alerts on the movement of people categorised as “Uyghur-related and high-risk.”

Another user, writing in Chinese, requested assistance designing promotional materials for a “social media probe” tool purportedly built for a government client.

According to OpenAI’s report, the tool was described as capable of scanning platforms such as X (formerly Twitter), Facebook, Instagram, Reddit, TikTok, and YouTube to identify “extremist speech” or political, ethnic, and religious content.

OpenAI said both accounts were banned once the company detected the activity.

It stressed that there was no evidence the users employed ChatGPT to carry out actual surveillance or that the tools were ultimately deployed.

A glimpse into AI use by authoritarian actors

The report framed the findings as a “rare snapshot” of how authoritarian and malicious actors are beginning to incorporate generative AI into their operations—not necessarily to create new forms of cyberwarfare, but to refine existing capabilities in data analysis, propaganda, and social monitoring.

“As we wrote in June, the PRC is making real progress in advancing its autocratic version of AI,” the company stated in the report.

“Our disruption of ChatGPT accounts used by individuals apparently linked to Chinese government entities shines some light on the current state of AI usage in this authoritarian setting.”

OpenAI emphasised that the activity it identified appeared to be carried out by individual users, not large-scale institutional operations.

However, the company said the incidents highlight the need for continued vigilance around “potential authoritarian abuses” of generative AI.

OpenAI’s broader warning

OpenAI said it disclosed the findings publicly to alert policymakers, researchers, and the tech industry to how AI can be repurposed for surveillance and repression.

The company called for proactive monitoring and tighter safeguards to prevent similar misuse.

“The activity was consistent with individual users using ChatGPT, rather than large-scale, institutional adoption of our models,” the report noted.

“As such, it is a limited snapshot of the usage of different AI models in this context.”

The post OpenAI identifies Chinese users leveraging ChatGPT for monitoring, profiling tools appeared first on Invezz