
Irregular envisions a future where artificial intelligence is developed and deployed with unwavering security and trust, safeguarding humanity from the complexities and risks of advanced AI technologies. Our mission is to protect the world by pioneering comprehensive AI security frameworks that anticipate and neutralize threats before they can manifest in real-world harm.
We are committed to advancing the frontier of AI security through innovative testing methods, including confidential inference and hardware-based verification, ensuring that every AI model is rigorously evaluated for potential vulnerabilities. By collaborating with leading AI labs and governments, we drive forward responsible AI deployment at scale, preserving the integrity of a future deeply integrated with intelligent systems.
At Irregular, we build the foundation for a safer AI-driven world by combining deep expertise in cybersecurity and artificial intelligence, fostering resilient AI ecosystems that empower society while minimizing risks. We are shaping the next era of AI where security is not an afterthought but a fundamental pillar of intelligent progress.
Our Review
When we first encountered Irregular (formerly Pattern Labs), what struck us wasn't just their impressive client roster — though having OpenAI, Google, and Anthropic on speed dial certainly turns heads. It was their uniquely proactive approach to what might be the most crucial challenge in tech today: keeping AI systems secure as they become increasingly powerful.
The Security-First Innovators
Think of Irregular as the special forces of AI security. While most companies are racing to build faster, smarter AI models, these folks are stress-testing them for vulnerabilities before the bad actors can. Their SOLVE framework, which puts AI models through rigorous security evaluations, is particularly impressive — it's like having a sophisticated flight simulator for AI safety.
What Sets Them Apart
We're especially intrigued by their use of confidential inference and hardware-based verification. It's one thing to spot vulnerabilities; it's another to create fortress-level security that's baked into the hardware itself. Their approach isn't just theoretical — they're running complex simulations where AI models play both attacker and defender, uncovering weak spots most wouldn't think to look for.
Growing at Light Speed
Their meteoric rise speaks volumes. Going from launch to a $450 million valuation with $80 million in Series A funding (led by Sequoia and Redpoint, no less) shows we're not the only ones impressed. What's particularly compelling is how they've managed to generate millions in revenue while working on something as forward-looking as AI security.
In an era where AI capabilities are expanding daily, Irregular's work feels less like a nice-to-have and more like a must-have. Their partnerships with government bodies, including the UK government, suggest they're playing a crucial role in shaping how we'll keep AI systems safe as they become more integral to our daily lives.
AI security testing tools
Confidential inference
Hardware-based verification
SOLVE framework for vulnerability detection
AI model vulnerability and resilience evaluation






