    OpenAI Bans Accounts Misusing ChatGPT for Surveillance and Influence Campaigns

February 22, 2025 | Ravie Lakshmanan | Disinformation / Artificial Intelligence

    OpenAI on Friday revealed that it banned a set of accounts that used its ChatGPT tool to develop a suspected artificial intelligence (AI)-powered surveillance tool.

The social media listening tool is said to likely originate from China and to be powered by one of Meta’s Llama models. The accounts in question used the AI company’s models to generate detailed descriptions and analyze documents for an apparatus capable of collecting real-time data and reports about anti-China protests in the West and sharing the insights with Chinese authorities.

    The campaign has been codenamed Peer Review owing to the “network’s behavior in promoting and reviewing surveillance tooling,” researchers Ben Nimmo, Albert Zhang, Matthew Richard, and Nathaniel Hartley noted, adding the tool is designed to ingest and analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit.

    In one instance flagged by the company, the actors used ChatGPT to debug and modify source code that’s believed to run the monitoring software, referred to as “Qianyue Overseas Public Opinion AI Assistant.”

Besides using its models as a research tool to surface publicly available information about think tanks in the United States, as well as government officials and politicians in countries such as Australia, Cambodia, and the United States, the cluster was also found to leverage ChatGPT access to read, translate, and analyze screenshots of English-language documents.


    Some of the images were announcements of Uyghur rights protests in various Western cities, and were likely copied from social media. It’s currently not known if these images were authentic.

OpenAI also said it disrupted several other clusters that were found abusing ChatGPT for various malicious activities:

    • Deceptive Employment Scheme – A network from North Korea linked to the fraudulent IT worker scheme that was involved in creating personal documentation for fictitious job applicants, such as resumés, online job profiles, and cover letters, as well as coming up with convincing responses to explain unusual behaviors like avoiding video calls, accessing corporate systems from unauthorized countries, or working irregular hours. Some of the bogus job applications were then shared on LinkedIn.
    • Sponsored Discontent – A network likely of Chinese origin that was involved in the creation of social media content in English and long-form articles in Spanish that were critical of the United States, and subsequently published by Latin American news websites in Peru, Mexico, and Ecuador. Some of the activity overlaps with a known activity cluster dubbed Spamouflage.
    • Romance-baiting Scam – A network of accounts that was involved in the translation and generation of comments in Japanese, Chinese, and English for posting on social media platforms including Facebook, X and Instagram in connection with suspected Cambodia-origin romance and investment scams.
    • Iranian Influence Nexus – A network of five accounts that was involved in generating X posts and articles that were pro-Palestinian, pro-Hamas, and pro-Iran, as well as anti-Israel and anti-U.S., and shared on websites associated with Iranian influence operations tracked as the International Union of Virtual Media (IUVM) and Storm-2035. One of the banned accounts was used to create content for both operations, indicative of a “previously unreported relationship.”
    • Kimsuky and BlueNoroff – A network of accounts operated by North Korean threat actors that was involved in gathering information related to cyber intrusion tools and cryptocurrency-related topics, and in debugging code for Remote Desktop Protocol (RDP) brute-force attacks.
    • Youth Initiative Covert Influence Operation – A network of accounts that was involved in creating English-language articles for a website named “Empowering Ghana” and social media comments targeting the Ghanaian presidential election.
    • Task Scam – A network of accounts likely originating from Cambodia that was involved in translating comments between Urdu and English as part of a scam that lures unsuspecting people into jobs performing simple tasks (e.g., liking videos or writing reviews) in exchange for a non-existent commission, which victims can only access by parting with their own money.

    The development comes as AI tools are being increasingly used by bad actors to facilitate cyber-enabled disinformation campaigns and other malicious operations.


    Last month, Google Threat Intelligence Group (GTIG) revealed that over 57 distinct threat actors with ties to China, Iran, North Korea, and Russia used its Gemini AI chatbot to improve multiple phases of the attack cycle and conduct research into topical events, or perform content creation, translation, and localization.

    “The unique insights that AI companies can glean from threat actors are particularly valuable if they are shared with upstream providers, such as hosting and software developers, downstream distribution platforms, such as social media companies, and open-source researchers,” OpenAI said.

    “Equally, the insights that upstream and downstream providers and researchers have into threat actors open up new avenues of detection and enforcement for AI companies.”
