김민수
April 3, 2026
ChatGPT Vulnerability Let Attackers Silently Exfiltrate User Prompts & Sensitive Data
A critical vulnerability in ChatGPT’s architecture allowed attackers to silently exfiltrate sensitive user data.
By abusing a covert outbound channel in ChatGPT’s sandboxed code execution environment, attackers could exfiltrate chat history, uploaded files, and AI-generated outputs without triggering any user alerts or consent prompts.
OpenAI designed the Python-based Data Analysis environment as a secure sandbox, intentionally blocking direct outbound HTTP requests to prevent data leakage.
Legitimate external API calls, known as GPT Actions, require explicit user consent through visible approval dialogs.
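The security model described above can be summarized as deny-by-default egress with a consent gate: outbound requests from the sandbox are blocked unless the user explicitly approves them via a visible dialog. The sketch below models that policy in miniature; the class and method names (`EgressPolicy`, `approve`, `allow`) are illustrative assumptions, not OpenAI's actual implementation.

```python
# Minimal sketch of a consent-gated egress policy, as the article describes:
# direct outbound requests are denied by default, and only hosts the user
# has explicitly approved (the GPT Actions approval dialog) are allowed.
# All names here are hypothetical, for illustration only.

class EgressPolicy:
    """Deny-by-default outbound policy with per-host user consent."""

    def __init__(self):
        self.approved_hosts = set()

    def approve(self, host: str) -> None:
        # Models the user clicking "Allow" in a visible approval dialog.
        self.approved_hosts.add(host)

    def allow(self, host: str) -> bool:
        # Requests to unapproved hosts are silently refused by the sandbox.
        return host in self.approved_hosts


policy = EgressPolicy()
print(policy.allow("api.example.com"))  # False: blocked by default
policy.approve("api.example.com")       # user grants explicit consent
print(policy.allow("api.example.com"))  # True: consented host is reachable
```

The vulnerability reported here amounted to a channel that sidestepped this gate entirely, moving data out without ever hitting the consent check.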
https://cybersecuritynews.com/chatgpt-vulnerability/
https://www.linkedin.com/posts/cybersecuritynews-vulnerabilitynews-chatgpt-share-7444774768068952064-ORaR