About Everest
At Everest, we believe executive assistants do meaningful, high-leverage work—and that work isn’t going anywhere. We're redefining executive support by turbocharging elite assistants with powerful agentic systems, so they can focus on the real work.
Because the real work is hard. There's no AI workflow for getting a Great Dane from Rome to Rio. There's no automating the trust required to support the people whose work is shaping our world.
So, what are we streamlining? Everything else.
We're building AI-native infrastructure into the bones of our operations—orchestration and observability layers that balance deterministic systems with stochastic outcomes. We prioritize thoughtful and ethical approaches to deploying AI agents that don't replace people, but make them superhuman.
We’re a small, senior team shipping the kind of agentic infrastructure most companies will still be demoing two years from now. You’ll help architect the core systems behind it, working closely with the world-class assistants who depend on it every day.
Our executive assistants support some of the most influential people across tech and finance. They're not just our users—they're our stakeholders and collaborators. That means clearer priorities, better data, tighter feedback loops, and the flexibility to be decisive and take big swings.
Core Responsibilities
Design and implement backend systems that power agentic workflows across LLM, deterministic, and hybrid pipelines.
Own and evolve core infrastructure like context memory, orchestration layers, and prompt routing systems.
Design composable multimodal systems that dynamically execute workflows from unstructured inputs (text, audio, video, images).
Optimize latency, extensibility, reliability, and inference cost of multi-agent pipelines.
Collaborate directly with internal EAs to pressure-test workflows in the real world.
Help us make clear decisions about when to use LLMs vs. traditional systems—and how to do both well.
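That last call, deciding what goes to a model and what stays in plain code, is the crux of the role. For a flavor of it, here is a minimal sketch in Go (the language we lean toward); the handler names and the routing heuristic are illustrative assumptions, not our actual stack:

```go
package main

import (
	"context"
	"fmt"
	"strings"
	"time"
)

// Handler is anything that can resolve a request: a deterministic rule,
// a templated workflow, or a call out to a model.
type Handler interface {
	Handle(ctx context.Context, input string) (string, error)
}

// RulesHandler covers requests plain code can resolve: cheap, fast, and
// fully predictable.
type RulesHandler struct{}

func (RulesHandler) Handle(_ context.Context, input string) (string, error) {
	return "rules: " + input, nil
}

// ModelHandler stands in for a call to an LLM provider.
type ModelHandler struct{}

func (ModelHandler) Handle(ctx context.Context, input string) (string, error) {
	// Bound inference latency so a slow model call can't stall the workflow.
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	// A real implementation would build the prompt, enforce a token budget,
	// and call the provider with ctx here.
	select {
	case <-ctx.Done():
		return "", ctx.Err()
	default:
		return "model: " + input, nil
	}
}

// Router sends structured, well-understood requests down the deterministic
// path and reserves the model for genuinely open-ended ones.
type Router struct {
	rules Handler
	model Handler
}

func (r Router) Handle(ctx context.Context, input string) (string, error) {
	// Toy heuristic: anything that looks like a known command stays deterministic.
	if strings.HasPrefix(input, "/") {
		return r.rules.Handle(ctx, input)
	}
	return r.model.Handle(ctx, input)
}

func main() {
	router := Router{rules: RulesHandler{}, model: ModelHandler{}}
	requests := []string{
		"/reschedule dinner to 8pm",
		"find a pet courier who can get a Great Dane from Rome to Rio",
	}
	for _, req := range requests {
		out, err := router.Handle(context.Background(), req)
		if err != nil {
			fmt.Println("error:", err)
			continue
		}
		fmt.Println(out)
	}
}
```

The real systems go well beyond this, with observability, retries, and cost tracking layered in, but the shape of the decision is the same.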
Skills and Qualifications
5+ years of experience in backend software engineering, preferably in Go or similar systems languages.
Shipped agentic LLM systems to production (not prototypes, not demos).
Built real-time systems, distributed async queues, or performance-critical services.
Deep understanding of prompt engineering, token budgeting, and context management.
Strong intuition for when to use AI—and when not to.
A track record of thriving in small teams with high trust and high ownership.
Bonus Points
Experience with RAG, embedding stores, and vector DBs.
Familiarity with tool orchestration frameworks.
Understanding of the architectural tradeoffs of agentic systems, RAG, MCP, memory, and orchestration.
Comfort working with (and around) the limitations of cutting-edge LLM technologies.
Background in AI safety, observability, or human-in-the-loop workflows.
A preference for systems that are simple, scalable, and "good enough" without sacrificing maintainability or future flexibility.
Fluency in small-team dynamics: high trust, low ego, shared accountability.
Work Environment
Location: Remote.
Collaboration: You’ll work closely with our Engineering Lead and the small team shipping core product features weekly.
Compensation and Benefits
Salary: Competitive, commensurate with experience
Healthcare: Medical, dental, and vision
Benefits: 401(k) plan, short- and long-term disability, life insurance, and a generous paid time off policy
Growth: Do foundational work on systems most companies won’t be building for two years