Today's Core Dump is brought to you by ThreatPerspective

The Register - Software

How to jailbreak ChatGPT and trick the AI into writing exploit code using hex encoding

'It was like watching a robot going rogue' says researcher


OpenAI's language model GPT-4o can be tricked into writing exploit code by encoding the malicious instructions in hexadecimal, which allows an attacker to jump the model's built-in security guardrails and abuse the AI for evil purposes, according to 0Din researcher Marco Figueroa.


Published: 2024-10-29T22:30:07











© Segmentation Fault . All rights reserved.

Privacy | Terms of Use | Contact Us