Researcher Johann Rehberger shows a hack to override Gemini's prompt injection defenses, letting an attacker plant long-term memories for all future sessions (Dan Goodin/Ars Technica)

Dan Goodin / Ars Technica: Researcher Johann Rehberger shows a hack to override Gemini's prompt injection defenses, letting an attacker plant long-term memories for all future sessions  —  In the nascent field of AI hacking, indirect prompt injection has become a basic building block for inducing chatbots …

Feb 12, 2025 - 16:09