Pangea’s Global AI Hackathon Exposes Major Holes in GenAI Security
In March 2025, cybersecurity company Pangea issued a bold challenge: trick a chatbot into revealing secret information, and win $10,000. More than 800 participants across 85 countries answered the call. By the end of the month-long test, they had submitted over 330,000 attempts—feeding AI systems more than 300 million tokens in search of a security slip.
The goal? Bypass the invisible lines of defense known as “prompt injection guardrails.” These protections are built into AI applications to prevent misuse—like leaking sensitive data or running unauthorized actions. Pangea’s challenge showed just how porous those defenses still are.
The Nature of the Threat: It’s Not Predictable
Most traditional cybersecurity threats follow patterns. Not here. One of Pangea’s top findings: prompt injection attacks don’t behave consistently. A malicious input might fail 99 times and succeed on the 100th—even with the same text.
That inconsistency makes defending GenAI harder than it looks. It also means attackers don’t need advanced tools or elite skills. Sometimes, all it takes is persistence.
Real Risks: From Data Leaks to Internal Access
The biggest concern isn’t just chatbots giving strange replies. According to the research, attackers used prompt injection to expose server details, open ports, and system configurations. That’s reconnaissance—and it’s a red flag.
Once an attacker knows what they’re working with, they can escalate. In agent-based AI systems (those that can act on commands, access databases, or trigger tools), prompt injection becomes far more dangerous. In those cases, a single flaw could mean financial transfers, email sabotage, or damaged internal systems.
Guardrails Alone Don’t Cut It
The most telling number in the report: roughly 1 in 10 injection attempts worked against basic prompt guardrails. That’s a huge failure rate in cybersecurity terms.
Pangea’s own solution? Stack defenses. Use multiple barriers. Shrink the inputs. Limit functionality where security matters most. And never assume the built-in protections are enough.
The $10,000 Winner Who Beat Every System
Only one participant solved all three rooms. That player, a professional penetration tester named Joey Melo, spent two days crafting a multi-stage attack that bypassed even the strictest controls.
His success wasn’t brute force—it was precision. He used fewer words, more creativity, and a deep understanding of how to manipulate AI into making mistakes.
Actionable Steps for Enterprise AI Security
Pangea outlined five key takeaways for any business using AI systems, whether customer-facing or internal:
1. Stack Security Layers
Don’t rely on one line of defense. Combine guardrails, access control, and anomaly detection.
2. Limit What the AI Can Do
Reduce commands, response types, or input formats—especially where financial or personal data is involved.
3. Keep Testing
Red team your AI like you would any application. Build exercises that mimic prompt injection tactics.
4. Tweak AI Behavior
Lower the “temperature” of LLMs—this setting reduces how random their responses are. That randomness is often what attackers exploit.
5. Dedicate Real Resources
Security teams should monitor prompt injection trends the same way they monitor phishing or malware.
What Pangea Says Happens Next
Oliver Friedrichs, CEO of Pangea, doesn’t mince words: companies are ignoring the threat.
He says businesses are racing to deploy AI tools, adding them into sensitive workflows without thinking through the consequences. As adoption grows, so does risk. What’s missing is action.
“This isn’t a tomorrow problem,” Friedrichs warns. “It’s already here. If your AI app has a prompt box, you’ve already given attackers a keyboard.”
Why This Matters Now
Many organizations use large language models (LLMs) for everything from customer support to internal productivity tools. But few have security teams treating those systems like they do web apps or email infrastructure.
Pangea’s report makes one thing clear: prompt injection is the new phishing. It preys on trust. It can be subtle or explosive. And without serious planning, it’s a soft target.
Get the Full Report
For those who want the technical details—or who need evidence to push for better AI safeguards—the full report is available from Pangea:
Defending Against Prompt Injection: Insights from 300K attacks in 30 days
Pangea’s challenge wasn’t a stunt. It was a wake-up call. With over 300,000 attack attempts logged in just one month, the data shows that guardrails aren’t keeping up with how fast attackers are learning.
One attacker escaped every trap. Thousands more got close. And most AI systems cracked at least once.
That’s a problem no business can afford to ignore.