

OpenAI has warned that its upcoming AI models could generate zero-day exploits against well-defended systems or assist intrusion operations. The Microsoft-backed company is establishing a Frontier Risk Council and deploying access controls and defensive tools as countermeasures.
OpenAI announced Wednesday that its upcoming artificial intelligence models could pose high cybersecurity risks as their capabilities rapidly advance. The Microsoft-backed ChatGPT maker warned that these AI systems might generate working zero-day remote exploits targeting well-defended systems, or assist with complex enterprise intrusion operations aimed at causing real-world effects.
To address these emerging threats, OpenAI is investing in strengthening models for defensive cybersecurity tasks and developing tools that enable defenders to audit code and patch vulnerabilities more easily. The company will implement access controls, infrastructure hardening, egress controls and monitoring as countermeasures. OpenAI plans to introduce a program providing qualifying users and customers working on cyberdefense with tiered access to enhanced capabilities.
Additionally, the company will establish the Frontier Risk Council, an advisory group bringing experienced cyber defenders and security practitioners into collaboration with its teams, initially focusing on cybersecurity before expanding to other frontier capability domains.
