Location
Melbourne, Australia
Salary
A$150,000 – A$180,000 (Yearly)
Date posted
March 5, 2026
Job type
Full-time
Experience level
Senior

Job Description

About the role

Maincode builds foundation models from first principles on Australian infrastructure. We design architectures, run our own compute, shape the training process, and operate the systems that serve our models.

We built Matilda, the first large language model designed and trained from scratch in Australia. Our new compute cluster is live; we are scaling the next version of Matilda and serving it publicly.

We are looking for AI researchers who want to work on the core architecture, training, and evaluation of large-scale language models that power Matilda.

This role is not focused on incremental benchmarking or paper output. You will work directly with the engineers running large-scale training systems and help design models that learn efficiently and behave reliably in production.

What you would actually do

You will work across the model development loop, from research questions to training runs to evaluation.

This includes:

  • Designing and testing architecture changes and training regimes for large language models

  • Running controlled experiments at scale and isolating causal effects

  • Studying failure modes in reasoning, generalisation, robustness, and representation

  • Shaping objectives, data mixtures, and optimisation choices that influence model behaviour

  • Building and refining evaluations that measure capability and reliability, not just scores

  • Analysing training dynamics using logs, metrics, and model outputs

  • Collaborating with ML systems engineers on distributed training and training operations

  • Writing clear internal notes that turn experimental results into design decisions

You will spend substantial time in code, training runs, logs, and evaluation outputs. The goal is clarity about what improves the model and why.

What we are looking for

We care about depth of reasoning, experimental discipline, and the ability to make progress under ambiguity.

We expect:

  • Hands-on experience writing and running production-grade ML or research code

  • Strong Python and experience with PyTorch or JAX

  • Solid understanding of transformer-based language models and the basics of pre-training and evaluation

  • Ability to design experiments, interpret results, and communicate tradeoffs clearly

  • Comfort working close to infrastructure, performance constraints, and operational reality

  • Interest in and exposure to reasoning-oriented architectures and training methods beyond standard LLM approaches


Nice to have

  • Experience with distributed training concepts and tooling (data parallel, tensor parallel, sharding, checkpointing)

  • Experience running training across multiple nodes and managing long training cycles

  • Familiarity with large-model training stacks and frameworks (for example Megatron-style systems, DeepSpeed-like tooling, or equivalent)

  • Comfort across the full workflow: training, evaluation, and deployment constraints

  • Experience working in ROCm-based environments

How you would work

This is hands-on research. You will use code as a primary tool for thinking.

You will be expected to:

  • Move between theory and implementation quickly and precisely

  • Prefer controlled experiments over broad sweeps

  • Use logs, metrics, and model behaviour to guide decisions

  • Work closely with engineering counterparts to scale and validate ideas

What this role is not

  • It is not a product research role

  • It is not prompt engineering

  • It is not fine-tuning someone else’s model and shipping wrappers around external APIs

You will work on Matilda, trained from scratch on our infrastructure, and pushed until its behaviour is understood and improved.

Why Maincode

Maincode builds and operates the full stack: training infrastructure, model code, evaluation systems, and deployment. We run one of the largest private AI compute environments in Australia, built for the sole purpose of training and deploying large-scale models.

If you want to work directly on training and evaluating a large language model built from scratch, this is the only role in Australia that will put you inside that work.

Note

This is a full-time role based in Melbourne, working closely with our in-person team. We are not able to offer visa sponsorship at this time, so applicants must have existing, unrestricted work rights in Australia.

Apply now
Maincode is hiring an AI Researcher. Apply through The Homebase and make the next move in your career!
Apply now
Company size
11-50
employees
Headquarters
Melbourne, Australia
Country
Australia
Industry
Computer Software
