Location
London, United Kingdom
Salary
Undisclosed
Category
AI Engineer
Date posted
February 3, 2026
Job type
Full-time
Experience level
Senior (5+ years)

Job Description

At Kallikor, we're building the future of supply chain intelligence through AI-powered simulation digital twins. We create living digital representations of real-world operations (warehouses, distribution networks, global logistics) that help organisations make better decisions faster.

We're at an inflection point: moving from AI-assisted tools to domain-specific AI that understands supply chains as deeply as our best engineers do. You'll be instrumental in building our first domain-specific language model (DSLM) and the foundation for Project Genome, an ambitious initiative to capture and synthesise the world's supply chain knowledge into actionable intelligence.

This is a production engineering role first. You'll build robust Python systems that happen to train and serve LLMs, not the other way around. We need someone who writes production-quality code, debugs complex distributed systems, and thinks about reliability: an engineer who has learned ML/LLMs as powerful tools in their arsenal.

You'll work across our entire AI stack: building FastAPI services that serve models, creating training pipelines that process production data, deploying inference endpoints with proper monitoring, and integrating all of this into our existing Python backend. The ML is important, but the engineering discipline is what makes it production-ready.
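
To give a flavour of what "production-ready" means to us, here's a minimal sketch of the kind of inference endpoint you'd own. The route name, latency budget, and stubbed backend call are illustrative assumptions, not our actual API.

```python
import asyncio
import time

from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class PredictRequest(BaseModel):
    prompt: str


class PredictResponse(BaseModel):
    completion: str
    latency_ms: float


async def call_model_backend(prompt: str) -> str:
    """Stand-in for a real call to an inference backend (vLLM, hosted API, ...)."""
    await asyncio.sleep(0)  # placeholder for network / GPU time
    return f"stub completion for: {prompt}"


@app.post("/predict", response_model=PredictResponse)
async def predict(req: PredictRequest) -> PredictResponse:
    start = time.perf_counter()
    try:
        # Enforce a latency budget rather than letting slow requests pile up.
        completion = await asyncio.wait_for(call_model_backend(req.prompt), timeout=0.2)
    except asyncio.TimeoutError:
        # Fail fast and visibly; the caller can retry or degrade gracefully.
        raise HTTPException(status_code=504, detail="model inference timed out")
    return PredictResponse(
        completion=completion,
        latency_ms=(time.perf_counter() - start) * 1000,
    )
```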

Learn more at kallikor.ai.

Your Opportunity

  • Build production AI systems: Design and implement the full stack, from FastAPI endpoints that handle requests, to training pipelines that process data, to inference services that serve predictions. You'll own the architecture, not just the model weights.

  • Train and deploy our DSLM: Fine-tune models using Unsloth/Axolotl, but more importantly, build the robust infrastructure around it: data pipelines that feed training, evaluation frameworks that catch regressions, and deployment systems that handle failover. Make it production-grade.

  • Integrate ML into our backend: We use FastAPI, PydanticAI, FastMCP, Memgraph. You'll extend these systems with ML capabilities, not as a separate "ML service" but as a natural part of our backend architecture. Clean abstractions, proper error handling, observability.

  • Own inference performance: Get models running fast, whether that's vLLM deployment, quantization strategies, batching optimizations, or caching. Hit our <200ms latency targets through engineering, not just by throwing bigger GPUs at the problem (see the vLLM sketch after this list).

  • Shape Project Genome's foundation: Work with our Principal Engineer to architect how we ingest, process, and learn from global supply chain data. This is systems design as much as ML: data pipelines, graph databases, and incremental learning strategies matter just as much as the models.

  • Mentor through code review and pairing: Raise the bar on code quality, testing, and production practices across the team. Teach mid and junior engineers how to build ML systems that don't fall over.
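
For the inference-performance bullet above, here's a rough sketch using vLLM's offline batch API. The model name, prompts, and sampling settings are placeholders for illustration, not what we actually run.

```python
# Minimal vLLM sketch: batch prompts through one engine instead of looping
# over single requests. Model name and sampling settings are placeholders.
from vllm import LLM, SamplingParams

prompts = [
    "Summarise the inbound delays for warehouse A.",
    "Which SKUs are at risk of stock-out next week?",
]

sampling = SamplingParams(temperature=0.2, max_tokens=256)

# vLLM handles continuous batching and KV-cache management internally;
# quantised checkpoints can be loaded the same way to trade accuracy for speed.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")

for output in llm.generate(prompts, sampling):
    print(output.outputs[0].text)
```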

Why you're made for this

  • You're a strong production Python engineer: You write clean, maintainable, tested code. You understand async/await, know when to use generators vs lists, and can profile performance bottlenecks. You've built FastAPI services (or similar) that handle production traffic. Your code passes review without drama.

  • You've built with LLMs in production: You've integrated GPT-4/Claude into real applications, handled streaming responses, dealt with rate limits and retries, and cached intelligently. You know the practical challenges: prompt engineering, context management, error handling, cost control (a sketch of this kind of plumbing follows this list).

  • You've trained or fine-tuned models: Whether it's fine-tuning LLMs, training classifiers, or running experiments, you understand the workflow. You've dealt with training data quality, evaluation metrics, and overfitting. You can debug why a model isn't learning what you expected.

  • You think like a systems engineer: You design for failure, add instrumentation, consider edge cases. You know that "the model works on my laptop" isn't shipping. You care about monitoring, logging, alerting, and graceful degradation.

  • You can navigate the ML landscape pragmatically: You know enough about transformers, attention mechanisms, and training dynamics to make informed decisions. But you're not precious about it. If a simple heuristic beats a complex model, you ship the heuristic.

  • You balance velocity with quality: You ship incrementally and iterate based on production data. But you don't accumulate tech debt: you refactor proactively, write tests that matter, and leave the codebase better than you found it.

  • You communicate trade-offs clearly: You can explain to the team why we're choosing LoRA over full fine-tuning, why we're deploying on Fireworks instead of self-hosting, or why a 7B model might beat a 70B model. You help everyone make informed decisions.
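
And for the LLM-integration bullet, this is the sort of plumbing we mean: streaming, backoff on rate limits, and a simple cache. The sketch uses the OpenAI Python client purely for illustration; the model name and toy in-process cache are assumptions, not our stack.

```python
import hashlib
import time

from openai import OpenAI, RateLimitError

client = OpenAI()  # reads OPENAI_API_KEY from the environment
_cache: dict[str, str] = {}  # toy in-process cache; use Redis or similar in production


def complete(prompt: str, model: str = "gpt-4o", max_retries: int = 3) -> str:
    key = hashlib.sha256(f"{model}:{prompt}".encode()).hexdigest()
    if key in _cache:
        return _cache[key]

    for attempt in range(max_retries):
        try:
            stream = client.chat.completions.create(
                model=model,
                messages=[{"role": "user", "content": prompt}],
                stream=True,
            )
            # Accumulate streamed deltas; in a service you'd forward them to the caller.
            chunks = [
                c.choices[0].delta.content
                for c in stream
                if c.choices and c.choices[0].delta.content
            ]
            text = "".join(chunks)
            _cache[key] = text
            return text
        except RateLimitError:
            # Exponential backoff before retrying on rate limits.
            time.sleep(2 ** attempt)

    raise RuntimeError("exhausted retries against the LLM API")
```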

What we're looking for specifically

  • 5+ years building production Python systems (backend services, APIs, data processing)

  • Strong software engineering fundamentals: design patterns, testing, debugging, profiling

  • Experience integrating LLMs into applications (OpenAI/Anthropic APIs, prompt engineering, streaming, PydanticAI)

  • Understanding of ML training workflows (you don't need to be an expert, but you need to know enough to build the infrastructure)

  • Docker, CI/CD, production deployment experience

  • Can read and understand PyTorch code (you don't need to write novel architectures)

About Us

Kallikor is determined to foster an environment where people can do their best work and feel like they belong. We believe a healthy culture, strong values and contribution from a diverse range of individuals will help us to achieve success.

We do not discriminate based on race, ethnicity, gender, ancestry, national origin, religion, sex, sexual orientation, gender identity, age, disability, veteran status, genetic information, marital status or any other legally protected status.

Apply now
Kallikor is hiring a Senior AI/ML Engineer. Apply through The Homebase and make the next move in your career!
Company size
11-50
employees
Founded in
2024
Headquarters
London, United Kingdom
Country
United Kingdom
Industry
Computer Software

Similar AI jobs

Here are other jobs you might want to apply for.

  • Senior AI/ML Engineer, United Kingdom (Full-time, AI Engineer)
  • Senior AI Scientist, United States (Full-time, AI Engineer)
  • Forward Deployed Engineer (FDE), Life Sciences - Munich, Germany (Full-time, AI Engineer)
  • Forward Deployed Engineer (FDE), Life Sciences - Dublin, Ireland (Full-time, AI Engineer)
  • Forward Deployed Engineer (FDE), Life Sciences - London, United Kingdom (Full-time, AI Engineer)
  • Forward Deployed Engineer (FDE), Life Sciences - Paris, France (Full-time, AI Engineer)