July 23, 2025
By esentry team

When AI Reads What You Can’t See

They say even the eyes of the eagle cannot see through deception cloaked in silence. And in today’s digital world, that silence comes in font-size zero and color-code white.

In an era where firewalls and filters stand like sentinels for our emails, a new trick has entered the threat actor’s handbook, one that targets not the human eye, but the AI brain.
It’s not a spear phish hurled through the usual channels of shady links or infected PDFs. No. This time, the attack whispers to the machine, not the man.

And the machine listens.

How Gemini Got Played

At the heart of this saga is Google’s Gemini, the AI-powered summary tool integrated into Workspace. In theory, it’s your friendly inbox assistant, designed to compress chaos into clarity. But like a storyteller manipulated by hidden stage directions, Gemini can be coaxed into saying what it shouldn’t.

Security researchers through Mozilla’s 0DIN program, uncovered a clever (and sinister) exploit.

It starts with a malicious actor embedding an invisible prompt inside an email. Literally invisible and crafted in white text with a font size of zero. You won’t see it. No links. No files. No flags. It just strolls into your inbox like a well-dressed ghost.

But when you ask Gemini for a summary?

That ghost starts speaking.

In one documented case, Gemini’s summary falsely reported that a user's Gmail account was compromised and offered a "support number”, a classic phishing lure.

Why This Works and Why It’s Dangerous

This isn't just a clever trick. It’s a strategic exploitation of AI trust.

Users increasingly rely on AI summaries to cut through clutter. When Gemini speaks, it carries the unspoken authority of Google itself. But here’s the kicker: Gemini isn’t a security engine. It’s a storyteller. And like all good storytellers, it can be tricked into delivering the wrong script.

Worse still, because there are no URLs or attachments, traditional detection tools don’t raise alarms. The message blends in. No flashing signs. Just AI acting on poisoned instructions buried in the background.

Recommendations

·       Build sanitization routines that detect and strip content using CSS tricks like font-size: 0 and color: white.

·      Deploy logic filters to scan AI-generated summaries for suspicious elements: keywords like “compromised,” “urgent,” phone numbers, or account recovery prompts. These should trigger review flags before the output reaches the user.

·      AI is a co-pilot, not a command center. Educate users to treat summaries as drafts, not verdicts.

·      Treat your AI models as potential attack surfaces. Regular adversarial testing (prompt injection drills, sandbox simulations) should be baked into your security cycle.

·      Push for features where AI summaries link back to why they say what they say. Traceable context would make invisible prompts easier to catch, or at least question.

Even a whisper can mislead the wise, if it’s spoken by the voice they trust!!