OpenAI patches twin leaks as Codex slips and ChatGPT spills

Summary

OpenAI has patched two security vulnerabilities in its AI tools Codex and ChatGPT. One flaw allowed GitHub token theft via command injection through Codex's branch-name parameter; the other created a covert channel for data exfiltration in ChatGPT's code execution environment. Researchers warn that AI tools granted code-execution autonomy pose ongoing risks.
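
The Codex flaw follows the classic command-injection pattern: untrusted input (here, a branch name) is interpolated into a shell command string. The Python sketch below is a hypothetical illustration, not Codex's actual code; it uses a harmless echo payload in place of a token-stealing command to show how shell metacharacters in a branch name become executable commands, and how an argument-list invocation avoids the problem.

    import subprocess

    # Hypothetical illustration of the injection pattern the article describes;
    # the real Codex internals are not shown in the source. The payload is a
    # harmless `echo` standing in for a token-stealing command.
    branch = 'main"; echo INJECTED-token-would-leak-here; echo "'

    # Vulnerable pattern: interpolating the branch name into a shell string lets
    # an attacker terminate the quoted argument and chain new commands.
    # (git itself may error harmlessly if run outside a repository.)
    subprocess.run(f'git checkout "{branch}"', shell=True)  # prints INJECTED-...

    # Safer pattern: with an argument list, the branch name is passed to git as
    # one literal argument and is never parsed by a shell.
    subprocess.run(['git', 'checkout', branch])  # git just rejects the odd name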

IFF Assessment

FOE

These vulnerabilities demonstrate new attack vectors for credential theft and data exfiltration in AI systems that are increasingly granted autonomous capabilities.

Defender Context

The article highlights emerging security risks in AI models that execute code and interact with external systems. Defenders should scrutinize the security of AI development pipelines and watch for command-injection and data-exfiltration techniques targeting these tools. Strict input validation and secure coding practices are crucial, especially as AI agents gain more autonomy.
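
As a concrete instance of that guidance, here is a minimal defensive sketch; it is not taken from the article, and the allowlist policy it enforces is an assumption. Untrusted branch names are validated against a conservative character allowlist, with a leading hyphen rejected so the name cannot be parsed as a git option, before they ever reach git.

    import re
    import subprocess

    # Minimal defensive sketch; the allowlist below is an assumed policy,
    # not OpenAI's actual fix. First character must be alphanumeric, which
    # also blocks option injection via a leading '-'.
    SAFE_BRANCH = re.compile(r'[A-Za-z0-9][A-Za-z0-9._/-]{0,199}')

    def checkout(branch: str) -> None:
        """Check out a branch only if its name passes strict validation."""
        # fullmatch requires the entire name to satisfy the allowlist,
        # rejecting shell metacharacters such as ';', '"', '$', and backticks.
        if not SAFE_BRANCH.fullmatch(branch):
            raise ValueError(f'rejected suspicious branch name: {branch!r}')
        # Argument-list form: the name is never interpreted by a shell.
        subprocess.run(['git', 'checkout', branch], check=True)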

Read Full Story →