ChatGPT's new Lockdown Mode can stop prompt injection - here's how it works
Publish Time: 17 Feb, 2026
ChatGPT sensitive data
Screenshot by Lance Whitney/

Follow : Add us as a preferred source on Google


Key takeaways

  • Hackers use prompt injection to steal the private data you use in AI.
  • ChatGPT's new Lockdown Mode aims to prevent these attacks.
  • Elevated Risk labels warn you of AI tools and content that could be risky.

Prompt injection attacks pose a serious threat to anyone who uses AI tools, but especially to professionals who rely on them at work. By exploiting a vulnerability that affects most AIs, a hacker can insert malicious code into a text prompt, which may then alter the results or even steal confidential data.

Also: 5 custom ChatGPT instructions I use to get better AI results - faster

Now, OpenAI has introduced a feature called Lockdown Mode to better thwart these types of attacks.

Lockdown Mode

ChatGPT Lockdown Mode
OpenAI

Lockdown Mode enhances the protection against prompt injections and other advanced threats. With this setting enabled, ChatGPT is limited in the ways it can interact with external systems and data, thereby restricting an attacker's ability to exfiltrate sensitive files.

An optional security setting, Lockdown Mode isn't necessary for most ChatGPT users, OpenAI said in a news release on Friday. Rather, the feature is geared more toward security-minded users, such as executives or security pros at prominent organizations. With that in mind, Lockdown Mode is available for ChatGPT Enterprise, ChatGPT Edu, ChatGPT for Healthcare, and ChatGPT for Teachers.

Also: These 4 critical AI vulnerabilities are being exploited faster than defenders can respond

Lockdown Mode works by determining which tools and capabilities in ChatGPT are most at risk. The goal is to restrict access to any sensitive data in a conversation or from a connected app that could be exploited through prompt injection.

(Disclosure: Ziff Davis, 's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)

As one example, web browsing in Lockdown Mode limits access to cached content so that no live requests leave OpenAI's network. Other features are completely disabled unless OpenAI can confirm that the data is safe. Here, the idea is to prevent an attacker from stealing data through web browsing.

ChatGPT business plans already offer enterprise-level security protection, which administrators can control via the Workspace settings. Lockdown Mode adds an extra layer of defense. Workspace admins can also choose which apps and actions are controlled by Lockdown Mode.

Elevated Risk labels

ChatGPT Elevated Risks
OpenAI

But that's not all. OpenAI will now also display an Elevated Risk label when you access certain features that could be risky. Accessible in ChatGPT, the ChatGPT Atlas browser, and the Codex coding assistant, these labels are designed to give you pause before you work with a tool or content that could be exploited.

Also: The secret to AI job security? Stop stressing and pivot at work now - here's how

For example, developers who use Codex can give the tool network access so that it can search the web for assistance. With this access enabled, the Elevated Risk label will warn you of potential risks, changes that may occur, and when such access is warranted.

The Elevated Risk labels are designed as a short-term solution to at least inform you of potential dangers. Looking to the future, OpenAI said it plans to add more security features across the board to address additional risks and threats, eventually obviating the need for such labels.

I’d like Alerts: