OpenAI Disrupts State-Linked Misuse of ChatGPT for Cyberattacks

OpenAI has revealed that it disrupted three coordinated operations by state-linked actors from Russia, North Korea and China who attempted to exploit ChatGPT to assist in developing malware, conducting phishing campaigns and supporting online influence activity. The announcement offers an unusually detailed look at how nation-state adversaries are beginning to incorporate large language models (LLMs) into cyber operations, and how the AI industry is adapting to limit those risks.

Russian Threat Actor Activity

OpenAI identified a Russian-language cluster that used ChatGPT to prototype parts of a remote access trojan and a credential stealer. The group avoided content filters by splitting tasks into smaller, benign requests such as clipboard monitoring or basic encryption, then reassembling those snippets into working code outside the chat environment. The same accounts repeatedly refined and debugged their code, an approach resembling a normal software development workflow rather than an isolated experiment.

The campaign illustrates a wider challenge for defenders: AI models can now accelerate low-level development work that previously required time and expertise. These advances make it harder to distinguish between legitimate programming and malicious preparation. Organisations should therefore focus on practical defence reviews, such as external and internal network testing, to verify how real-world attackers might chain together small weaknesses.

North Korean Use of ChatGPT

A separate cluster linked to North Korea showed similar technical ambition. Its activity overlaps with findings from Trellix, which documented phishing and command-and-control (C2) activity supported by AI tools. OpenAI observed that the actors experimented with DLL injection, Windows API hooking and browser extension conversion, using ChatGPT as a debugging assistant to identify errors and generate alternative solutions.

The campaign demonstrates how generative AI can reduce the barrier to entry for malware development and speed up refinement cycles. When combined with conventional North Korean tradecraft, which already relies on open-source tools and stolen frameworks, these efficiencies could shorten the development time for new variants. For blue teams, that means updating incident response and threat-hunting processes to detect smaller, faster-moving campaigns that change infrastructure or payloads more often.
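Blue teams can begin with simple hunts for that kind of infrastructure churn. The sketch below is a minimal illustration, assuming a hypothetical CSV export of domain observations with first_seen and last_seen columns; adapt the field names and lifetime threshold to whatever your own DNS or proxy telemetry actually provides.

```python
# Hedged sketch: flag short-lived domains in DNS/proxy telemetry.
# Assumes a hypothetical CSV export with columns domain, first_seen,
# last_seen (ISO-8601 dates); adapt names to your own data source.
import csv
from datetime import datetime

MAX_LIFETIME_DAYS = 7  # tunable definition of "fast-moving"

def short_lived_domains(path: str, max_days: int = MAX_LIFETIME_DAYS):
    hits = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            first = datetime.fromisoformat(row["first_seen"])
            last = datetime.fromisoformat(row["last_seen"])
            # domains observed only briefly may indicate rotating infrastructure
            if (last - first).days <= max_days:
                hits.append((row["domain"], first.date(), last.date()))
    return hits

if __name__ == "__main__":
    for domain, first, last in short_lived_domains("dns_observations.csv"):
        print(f"{domain}: active {first} -> {last}")
```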

Chinese Operations and Phishing

OpenAI also attributed accounts to a Chinese actor known as UNK_DropPitch (or UTA0388). These users generated multilingual phishing emails in English, Chinese and Japanese, automated elements of remote code execution, and researched tools like nuclei and fscan for reconnaissance. While OpenAI described the group as “technically competent but unsophisticated”, the activity demonstrates how language models help less advanced operators scale their efforts quickly.
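Reconnaissance with off-the-shelf scanners often leaves recognisable traces in web server logs. As a rough illustration only, the signature strings below are assumptions: nuclei's default User-Agent has varied between releases and operators can override it entirely, so a match is a lead, and an absence proves nothing.

```python
# Hedged sketch: sweep web access logs for common scanner signatures.
# The patterns are illustrative assumptions, not a vetted ruleset.
import re

SCANNER_PATTERNS = [
    re.compile(r"nuclei", re.IGNORECASE),  # often present in default UAs
    re.compile(r"fscan", re.IGNORECASE),   # rarely appears, but cheap to check
]

def flag_scanner_lines(log_path: str):
    with open(log_path, errors="replace") as fh:
        for lineno, line in enumerate(fh, 1):
            if any(p.search(line) for p in SCANNER_PATTERNS):
                yield lineno, line.rstrip()

for lineno, line in flag_scanner_lines("access.log"):
    print(f"{lineno}: {line}")
```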

The group’s targeting reportedly included technology and semiconductor companies, aligning with China’s established interest in industrial espionage. Using ChatGPT for translation, formatting and payload testing enables attackers to broaden their reach across geographies and industries without dedicated language specialists or large development teams.

Wider Misuse Beyond Malware

Alongside technical abuse, OpenAI uncovered accounts involved in scams, disinformation and surveillance. Networks in Cambodia, Myanmar, Nigeria and China used ChatGPT to automate translation, social media posting and content generation for fraudulent or propagandistic purposes. In some cases, operators requested that the model remove punctuation patterns and stylistic traits that make AI-generated text easier to detect, such as long dashes or repetitive phrasing.
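Detecting such sanitised output is inherently unreliable, but crude stylometric triage can still surface candidates for human review. The following toy heuristic, with entirely illustrative weights and thresholds, scores text on dash density and repeated phrasing; as the operators' own sanitisation requests show, it is trivially evaded and should only ever feed a manual review queue.

```python
# Hedged sketch: score text for surface markers sometimes associated
# with AI-generated copy. Weights and thresholds are illustrative only.
from collections import Counter

def ai_style_score(text: str) -> float:
    words = text.split()
    if not words:
        return 0.0
    # density of long dashes and spaced hyphens per word
    dash_rate = (text.count("\u2014") + text.count(" - ")) / len(words)
    # repeated three-word phrases as a proxy for formulaic phrasing
    trigrams = Counter(zip(words, words[1:], words[2:]))
    repeats = sum(c - 1 for c in trigrams.values() if c > 1)
    repeat_rate = repeats / len(words)
    return dash_rate * 10 + repeat_rate * 5  # arbitrary weighting

sample = "Our offer is unique \u2014 truly unique \u2014 and truly unique offers sell."
print(f"score: {ai_style_score(sample):.3f}")
```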

This form of content manipulation has implications for both national security and corporate communications. It suggests that the boundary between legitimate and inauthentic online activity is becoming more difficult to define, complicating moderation and threat intelligence work. For organisations, this highlights the need to reassess security awareness training so employees can recognise credible-looking phishing or social engineering attempts generated by AI.

OpenAI’s Response and Detection Strategy

OpenAI’s countermeasures involve behavioural monitoring, pattern analysis and account correlation rather than simple keyword blocking. The company said it monitors sequences of prompts that indicate iterative development or testing, as well as shared code fragments across multiple accounts. When consistent misuse patterns are found, accounts are terminated and connected identifiers are blocked.
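OpenAI has not published its detection pipeline, but the shared-code-fragment signal can be illustrated with a minimal sketch: normalise each submitted snippet, hash it, and link accounts whose hashes collide. The normalisation step and the demo data below are assumptions for illustration only, not a description of OpenAI's actual system.

```python
# Hedged sketch of cross-account correlation by shared code fragments.
# Illustrates the general idea only; not OpenAI's published method.
import hashlib
import re
from collections import defaultdict

def fingerprint(code: str) -> str:
    # crude normalisation: drop comments and whitespace so trivial
    # edits to the same snippet still collide
    stripped = re.sub(r"#.*", "", code)
    stripped = re.sub(r"\s+", "", stripped)
    return hashlib.sha256(stripped.encode()).hexdigest()[:16]

def correlate(submissions):
    """submissions: iterable of (account_id, code_snippet) pairs."""
    by_hash = defaultdict(set)
    for account, code in submissions:
        by_hash[fingerprint(code)].add(account)
    # keep only fragments seen from more than one account
    return {h: accts for h, accts in by_hash.items() if len(accts) > 1}

demo = [
    ("acct-1", "def grab():  # clipboard\n    return data"),
    ("acct-2", "def grab():\n    return data"),
]
print(correlate(demo))  # both accounts share the same fingerprint
```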

This process builds on OpenAI’s earlier June 2025 threat intelligence report, which documented previous takedowns involving spam and influence campaigns. Although account closures cannot eliminate abuse entirely, they feed data back into automated detection pipelines. The company continues to share findings with partners including Microsoft and other AI developers to strengthen collective defences.

The Broader Technical Picture

Across the three campaigns, several technical patterns emerge. Threat actors are using modular prompt strategies to avoid outright refusals from AI systems. They request benign-sounding components, such as encryption routines or file access functions, that they later combine into malicious tools. They also use models to explain compiler errors, rewrite code for cross-platform compatibility and improve runtime stability. These behaviours illustrate that attackers are not seeking “push-button” malware generation but leveraging AI as an iterative development aid.
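That modular pattern suggests a corresponding detection idea: score a session not on any single prompt, but on how many distinct capability categories its prompts touch in combination. The category keywords and threshold in this sketch are illustrative assumptions, not a vetted ruleset, and real systems would need far more context to avoid flagging legitimate developers.

```python
# Hedged sketch: flag sessions whose individually benign prompts
# together span several capability categories that commonly combine
# into offensive tooling. Keywords and threshold are assumptions.
CATEGORIES = {
    "persistence": {"registry run key", "scheduled task", "startup"},
    "collection": {"clipboard", "keylog", "screenshot"},
    "crypto": {"xor", "aes encrypt", "obfuscate string"},
    "exfil": {"http post", "dns tunnel", "upload file"},
}
THRESHOLD = 3  # categories touched before a session is flagged

def flag_session(prompts):
    hit = {
        name
        for name, terms in CATEGORIES.items()
        for p in prompts
        if any(t in p.lower() for t in terms)
    }
    return hit if len(hit) >= THRESHOLD else None

session = [
    "How do I read the clipboard in C++?",
    "Show me AES encrypt for a byte buffer",
    "Simple HTTP POST of a file in WinHTTP",
]
print(flag_session(session))  # {'collection', 'crypto', 'exfil'}
```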

Recent academic research, such as studies on fine-tuning vulnerabilities, supports this observation. It shows that adversaries can manipulate training data to override safety alignment and extract restricted capabilities. These findings highlight the limits of prompt filtering as a long-term safeguard and suggest that robust behavioural telemetry and anomaly detection will be essential.
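As a minimal illustration of what such telemetry-driven anomaly detection might look like, the sketch below applies a simple z-score to one per-session feature, a hypothetical count of debug iterations, and flags outliers. Real pipelines would use richer features, larger baselines and proper statistical controls.

```python
# Hedged sketch: z-score outlier detection over a per-session feature.
# The feature (debug-iteration count) and threshold are illustrative.
from statistics import mean, stdev

def anomalies(iteration_counts, z_threshold: float = 2.0):
    mu, sigma = mean(iteration_counts), stdev(iteration_counts)
    if sigma == 0:
        return []  # no variation, nothing to flag
    return [
        (i, n) for i, n in enumerate(iteration_counts)
        if (n - mu) / sigma > z_threshold
    ]

counts = [2, 3, 1, 4, 2, 3, 2, 41]  # last session iterates far more than peers
print(anomalies(counts))  # [(7, 41)]
```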

Defensive and Strategic Implications

For cybersecurity leaders, the key takeaway is that generative AI misuse has moved beyond proof-of-concept. Attackers are already integrating it into their operations. This requires defenders to expand their risk models and prepare for scenarios where routine tools like phishing kits or loaders are enhanced by AI assistance.

Practically, that means validating incident response readiness, ensuring telemetry can detect iterative attacker behaviour, and establishing governance for any internal AI deployments. It also means maintaining vigilance across digital supply chains. As more enterprises adopt generative AI internally, any misconfigured or poorly monitored integration point could offer a new vector for abuse.

Security professionals have previously encountered similar inflection points: the rise of automation in penetration testing, the mainstream adoption of malware-as-a-service, and the appearance of cloud-native attack frameworks. The current shift is different mainly in scale and accessibility: anyone with access to a general-purpose model can now automate fragments of attack logic or generate credible phishing copy.

Industry Collaboration and Policy Direction

The response from industry and policymakers remains fragmented but is improving. Competitors such as Anthropic and Google DeepMind have introduced safety evaluation frameworks and alignment testing for their models. Governments are developing voluntary codes of practice focused on transparency, risk disclosure and red-teaming. Collectively, these initiatives suggest that AI governance is beginning to resemble the compliance structures already familiar in cybersecurity.

For enterprise defenders, participation in these frameworks through information sharing, reporting and joint exercises may help identify misuse earlier. Aligning AI deployment oversight with existing information security management systems, such as ISO 27001 controls, can also reduce overlap and improve accountability.

Building Awareness of AI-Enhanced Threats

The growing convergence between traditional cybercrime and AI-assisted activity also affects workforce education. Employees who once relied on spotting clumsy phishing attempts must now handle messages with better grammar, correct branding and realistic context. Awareness programmes should therefore emphasise verification, not intuition, and incorporate recent examples of AI-generated lures.

SecureTeam previously discussed related risks in an earlier analysis of fake ChatGPT installers distributing malware, available here: Fake ChatGPT Installs Windows and Android Malware. While that story focused on opportunistic crime rather than state actors, it illustrates how quickly malicious ecosystems adapt to public interest in new technologies.

Conclusion

OpenAI’s disruption of state-linked misuse of ChatGPT represents a turning point in how both industry and government perceive AI in the cyber threat landscape. The cases confirm that generative models are now embedded in adversary toolchains, not as magic exploit generators but as accelerators for development and coordination.

For defenders, the implication is clear: AI misuse must be treated as an active risk vector. Monitoring, governance and human awareness all need to evolve to meet the challenge. As adversaries refine their methods, maintaining transparency and collaboration across the cybersecurity and AI communities will be essential to staying ahead.
