
Enfabrica envisions a future where AI datacenters operate with unprecedented efficiency and elasticity, enabling breakthroughs in accelerated computing at scale. By reinventing the networking fabric from the ground up, we empower next-generation AI infrastructure to overcome the limits of traditional backend fabrics.
At the core of our mission is the development of silicon and software that work in concert to deliver high-performance, resilient, and scalable solutions tailored to the unique challenges of modern AI workloads. Our innovations in AI system NICs and memory fabrics advance the state of infrastructure technology, making large-scale AI computing more accessible and adaptable.
Through our commitment to open standards and collaboration within the industry, we strive to set new benchmarks for AI infrastructure, fostering environments where compute, memory, and networking resources accelerate transformative AI applications worldwide.
Our Review
We'll be honest — when we first heard about Enfabrica, we thought it was just another networking startup throwing around AI buzzwords. But after digging into what founders Rochan Sankar and Shrijeet Mukherjee are actually building, we're genuinely impressed. These aren't fresh-faced entrepreneurs; they're industry veterans from Broadcom, Cisco, and Google who saw a real problem and decided to solve it from the ground up.
The "AI System NIC" That Actually Makes Sense
Here's what caught our attention: instead of trying to retrofit existing networking gear for AI workloads, Enfabrica built their ACF-S chip specifically for modern AI datacenters. Think of it as a 3.2 Tbps traffic controller that knows how to handle the chaos of GPUs, CPUs, and memory systems all trying to talk to each other at once.
The 5nm chip isn't just fast — it's designed to make AI clusters more elastic and resilient. That matters because when you're running massive language models, every bottleneck costs you real money in compute time.
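Some quick arithmetic puts that headline number in context. The sketch below converts 3.2 Tbps to bytes per second and times a hypothetical weight transfer; the 140 GB figure (roughly a 70B-parameter model in FP16) is our assumption, not Enfabrica's, and real links never sustain theoretical line rate.

```python
# Back-of-envelope: what 3.2 Tbps of NIC bandwidth means in practice.
# The 3.2 Tbps figure comes from the review above; the 140 GB model
# size is a hypothetical example (~70B parameters in FP16), and real
# transfers fall short of theoretical line rate.

LINK_TBPS = 3.2                        # aggregate ACF-S bandwidth, terabits/s
bytes_per_sec = LINK_TBPS * 1e12 / 8   # -> 400 GB/s theoretical

model_gb = 140                         # hypothetical FP16 weight snapshot
seconds = model_gb * 1e9 / bytes_per_sec

print(f"Theoretical throughput: {bytes_per_sec / 1e9:.0f} GB/s")
print(f"Time to move {model_gb} GB at line rate: {seconds:.2f} s")
```

Even at half that rate, moving a full set of model weights takes about a second, which is why per-device bandwidth matters so much when clusters need to rebalance work.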
Smart Timing on Memory Disaggregation
Their EMFASYS AI Memory Fabric System, launched in 2025, feels like perfect timing. As AI models get bigger and hungrier for memory, being able to pool and share memory resources across a cluster is becoming essential. We like that they're not just selling hardware — they're providing a complete software stack that gives customers actual control over their network transport.
The fact that they're already piloting with customers tells us this isn't vaporware. Real companies are willing to test unproven tech, which usually means the pain point is significant.
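To make the disaggregation idea concrete, here's a toy sketch of the allocation pattern a pooled memory tier enables. Everything in it is hypothetical: the class and method names are ours for illustration, not Enfabrica's software or any real API. The point is the spill behavior, where fast local memory fills first and fabric-attached memory absorbs the overflow instead of triggering an out-of-memory failure.

```python
# Illustrative only: a toy model of tiered memory pooling, the general
# idea behind a memory fabric like EMFASYS. None of these names come
# from Enfabrica's actual software; they're hypothetical stand-ins.

from dataclasses import dataclass

@dataclass
class MemoryTier:
    name: str
    capacity_gb: float
    used_gb: float = 0.0

    def try_alloc(self, gb: float) -> bool:
        if self.used_gb + gb <= self.capacity_gb:
            self.used_gb += gb
            return True
        return False

@dataclass
class TieredAllocator:
    local: MemoryTier    # e.g. HBM attached to one GPU
    pooled: MemoryTier   # e.g. DRAM shared over the fabric

    def alloc(self, gb: float) -> str:
        # Prefer fast local memory; spill to the shared pool when full.
        if self.local.try_alloc(gb):
            return self.local.name
        if self.pooled.try_alloc(gb):
            return self.pooled.name
        raise MemoryError(f"no tier can hold {gb} GB")

allocator = TieredAllocator(
    local=MemoryTier("HBM", capacity_gb=80),          # one GPU's worth
    pooled=MemoryTier("fabric-DRAM", capacity_gb=1024),
)

# Growing KV caches for long-context inference: early allocations land
# in HBM, later ones spill to the pooled tier instead of failing.
for request in range(6):
    tier = allocator.alloc(20)   # 20 GB per request (made-up figure)
    print(f"request {request}: placed in {tier}")
```

The appeal for inference operators is exactly this spill path: the expensive, scarce tier stays reserved for hot data while cheaper pooled capacity handles the long tail.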
Who Should Care About This
Enfabrica isn't for everyone. If you're running a small AI project or traditional enterprise workloads, you probably don't need what they're selling. But if you're a hyperscaler, cloud provider, or research institution trying to squeeze every ounce of performance out of massive AI clusters, this could be exactly what you've been waiting for.
With $365 million in funding and active involvement in industry consortia like the Ultra Ethernet Consortium, they seem to have both the resources and the industry credibility to make this work. Sometimes the best solutions come from people who've been in the trenches long enough to know what's actually broken.
ACF-S chip: 3.2 Tbps AI system NIC at 5nm for high-throughput, low-latency AI cluster networking
EMFASYS AI Memory Fabric System: Ethernet-based memory fabric for efficient large language model inference scaling
Modular, open-standards software stack for network transport control
Supports AI system disaggregation
Designed for scalable GPU, CPU, accelerator, and memory resource networking in AI datacenters