OpenAI has published a nine-page cybersecurity action plan that argues advanced AI models should be made available to trusted defenders under tighter access controls, rather than restricted to a small group of approved organizations.
The April plan sets out five pillars covering access, government-industry coordination, frontier model security, deployment visibility and consumer protection.
The plan’s central mechanism is Trusted Access for Cyber, OpenAI’s tiered program for giving vetted users access to more capable and more permissive models for defensive cybersecurity work.
Expanding the defensive circle
The program will expand to three groups: federal, state and local government users; industry actors that protect downstream users at scale; and smaller critical infrastructure providers reached through trusted intermediaries such as managed security service providers, sector bodies, security vendors and CISA-supported programs.
The shift toward gated distribution
OpenAI’s argument rests on what the plan calls “controlled acceleration.” The document rejects both unrestricted release and a model where frontier cyber capabilities are limited to a small number of selected organizations, contending that attackers are already using AI to improve phishing, automate reconnaissance, speed malware development and scale cyber operations.
OpenAI has also begun turning that access model into a product program. On April 14, OpenAI said it was scaling Trusted Access for Cyber to thousands of verified individual defenders and hundreds of teams, with higher tiers receiving GPT-5.4-Cyber, a more permissive model fine-tuned for defensive workflows including binary reverse engineering.
The company said the model would start with limited deployment to vetted security vendors, organizations and researchers because more permissive access raises additional visibility and misuse concerns.
Anthropic is pursuing a more restrictive version of the same gated-access model. Its Project Glasswing gives selected partners access to Claude Mythos Preview for defensive cybersecurity work, while its red-team write-up says the model showed major gains in finding and exploiting zero-day vulnerabilities.
Anthropic said it does not plan to make Claude Mythos Preview generally available, underscoring how frontier labs are converging on gated distribution for cyber-capable models.
The intermediary risk
A key enterprise detail in OpenAI’s plan is the distribution channel for smaller critical infrastructure operators. Hospitals, school districts, water utilities and municipalities often lack the capacity to operate frontier cyber models directly, so the plan points to managed security providers and other intermediaries as a route to reach them.
That approach could broaden access for smaller operators, but it also shifts more oversight risk onto intermediaries.
Guardz’s 2026 State of MSP Threat Report said remote monitoring and management tool abuse was the largest endpoint threat campaign in its dataset, accounting for 26% of detections, with tools including ScreenConnect, AteraAgent and MeshAgent observed in unauthorized persistence activity.
Microsoft also reported in March that signed malware impersonating workplace apps installed RMM tools to establish persistent access inside enterprise environments.
Governance and privileged access controls
The plan’s governance language responds to that risk by conditioning access to more capable models on identity, use case, security posture and defensive impact. For higher-trust users, OpenAI lists identity verification, legal attestations, baseline security commitments, abuse reporting and monitoring as obligations that should increase as model capability or permissiveness rises.
The controls are designed to tighten as risk increases. The plan says OpenAI must be able to apply stricter blocking, introduce account-level friction, reduce quotas, require reauthentication, downgrade access tiers or remove access where abuse is detected.
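The tiered obligations and graduated enforcement the plan describes can be pictured as a small policy model. The sketch below is purely illustrative: the tier names, obligation sets, severity scale and thresholds are hypothetical assumptions, not anything OpenAI has published.

```python
from dataclasses import dataclass
from enum import IntEnum

# Hypothetical tier ladder; the plan does not name specific tiers.
class AccessTier(IntEnum):
    GENERAL = 0
    VERIFIED_DEFENDER = 1
    TRUSTED_PARTNER = 2

# Obligations that, per the plan, should grow with model capability
# or permissiveness. The groupings here are illustrative.
OBLIGATIONS = {
    AccessTier.GENERAL: {"identity_verification"},
    AccessTier.VERIFIED_DEFENDER: {
        "identity_verification", "legal_attestation", "abuse_reporting",
    },
    AccessTier.TRUSTED_PARTNER: {
        "identity_verification", "legal_attestation", "abuse_reporting",
        "baseline_security", "monitoring",
    },
}

@dataclass
class Account:
    tier: AccessTier
    quota: int
    needs_reauth: bool = False

def respond_to_abuse(account: Account, severity: int) -> Account:
    """Graduated responses mirroring the plan's list: account-level
    friction, reduced quotas, reauthentication, tier downgrade,
    and removal of access. The severity scale is an assumption."""
    if severity >= 3:
        # Severe abuse: remove access entirely.
        account.quota = 0
        account.tier = AccessTier.GENERAL
    elif severity == 2:
        # Downgrade the access tier and force reauthentication.
        account.tier = AccessTier(max(account.tier - 1, AccessTier.GENERAL))
        account.needs_reauth = True
    else:
        # Low severity: add friction by halving quota and reauthenticating.
        account.quota = account.quota // 2
        account.needs_reauth = True
    return account
```

The point of the sketch is the shape of the regime, not the specifics: obligations are keyed to tier, and enforcement escalates in steps rather than jumping straight to revocation, which matches the privileged-access framing below.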
For CISOs, vendor-risk teams and legal teams, that makes participation closer to a privileged-access and monitored-use regime than a standard SaaS rollout.
Requirements for public-private coordination
OpenAI is also asking the government to fill a gap the company cannot close alone. The action plan calls for faster operational threat intelligence sharing between government and industry, a shared threat model for AI-enabled cyber activity and a real-time coordination hub where AI companies, cloud providers and other stakeholders can share threat information and deploy urgent mitigations.
The plan also extends OpenAI’s cyber-defense argument to consumers. OpenAI says ChatGPT users send more than 15 million messages per month asking whether something is a scam, a data point that positions AI tools as consumer-facing cyber infrastructure as well as enterprise defense tooling.
Proofpoint’s 2026 AI and Human Risk Landscape Report separately found that 42% of organizations reported a suspicious or confirmed AI-related incident, showing that AI-enabled risk is also reaching enterprise environments.