
Software Engineer – AI

Software Engineering – London

We tackle the most complex problems in quantitative finance by bringing scientific clarity to financial complexity.

From our London HQ, we unite world-class researchers and engineers in an environment that values deep exploration and methodical execution - because the best ideas take time to evolve. Together, we’re building a platform to amplify our teams’ most powerful ideas.

As part of our engineering team, you’ll shape the platforms and tools that drive high-impact research - designing systems that scale, accelerate discovery and support innovation across the firm.

Take the next step in your career.

The role

The Core AI team is a centralised infrastructure team within the AI Engineering department. We build, operate and scale the foundational platform that powers AI innovation across G-Research, including on-prem open model inference, model serving, AI developer experience tooling, centralised MCP servers and secure agent sandboxing.

We provide the foundations that enable teams across the firm to innovate and deliver with confidence, working closely with our Applied AI team.

As an Engineer in Core AI, you will work across four key areas:

  • Infrastructure and serving - design, build and operate on-prem model inference and serving platforms
  • MCP server infrastructure - build and operate centralised MCP servers that provide secure, governed access to tools and data
  • Security and sandboxing - design and implement infrastructure for safe execution of autonomous AI agents in a regulated environment
  • AI developer experience - improve the developer experience through seamless integrations and user-facing tools

Key responsibilities of the role include:

  • Designing and operating model serving infrastructure, including inference pipelines and scheduling systems
  • Building and running centralised MCP servers, ensuring secure, reliable access to enterprise tools and data
  • Owning platform reliability, performance and scalability across Kubernetes-based infrastructure, including observability, capacity planning and incident response
  • Building self-service tooling and APIs to enable teams to provision and consume AI infrastructure independently
  • Integrating platform services with existing technology stacks, ensuring clear interfaces, monitoring and CI/CD
  • Evaluating and adopting open-source technologies and applying emerging best practices to improve the platform

Who are we looking for?

We value pragmatic engineers who combine deep infrastructure expertise with strong systems thinking and clear communication. You should enjoy building reliable, secure platforms at scale - the kind of foundations that hundreds of engineers and quants depend on daily without needing to think about.

The ideal candidate will have the following skills and experience:

Essential:

  • Strong expertise in C# and Python, building distributed systems and platform-level software
  • Deep Kubernetes expertise, including multi-tenant cluster operations and platform extensions
  • Experience with Docker, Terraform and CI/CD in controlled or regulated environments
  • Strong understanding of distributed systems, including networking, storage, security and performance
  • Experience with model serving and inference infrastructure, including deployment, scaling and optimisation of open models
  • Clear communication skills, with the ability to explain complex concepts and produce high-quality technical documentation

Desirable:

  • Experience with MCP or similar platform services
  • Familiarity with sandboxing and workload isolation technologies
  • Experience in quantitative finance or low-latency systems
  • AWS experience, particularly in hybrid environments
  • Experience with observability tooling such as Prometheus, Grafana or OpenTelemetry
  • Contributions to open-source projects in relevant domains

Why join us?

  • Highly competitive compensation plus annual discretionary bonus
  • Lunch provided (via Just Eat for Business) and dedicated barista bar
  • 30 days annual leave
  • 9% company pension contributions
  • Informal dress code and excellent work/life balance
  • Comprehensive healthcare and life assurance
  • Cycle-to-work scheme
  • Monthly company events
Location: London
Dexter, Software Engineer

"Work culture is an important aspect for me, so when I was contacted by G-Research I discovered a company where I could grow as a developer, whilst feeling like the company valued me as a person, not just a code monkey."


What our people say

Alexander, Software Engineer

"I've felt very lucky to work with teams of people across the business who are generous with their time, knowledge and ideas as we collaborate to continuously build and rebuild complex systems with lots of moving parts."

Owen, Software Engineer

"Before G-Research I’d completed internships at a few companies whilst studying Computer Science at the University of York. Since joining, I’ve progressed from an intern to a graduate to an engineer, having worked on systems and technology core to the business my entire time."

Sokratis, Software Engineer

"My favourite part of working at G-Research is the people, from my colleagues to our customers. The culture is great and encourages collaboration, which makes it easier for everyone to work together."

David, Director of Open Source Development

"Knowing that we make a difference to the bottom line of our business while also doing good for others in a collaborative way brings an extra layer of satisfaction to my day."

Ivan, DevOps Engineer

"I'm exposed to many different technologies and I get to learn the basics and then develop my skills with that technology further."

Jonathan, Open Source Evangelist

"Out of all the places where I've worked, it's the only place where I can say that I'm constantly learning from my colleagues. I really feel like this has made me grow a lot."

Caterina, Director of Open Source Developer Experience

"We have the built-in advantage of a collaborative culture, working with open source software developers who are global and come from all different kinds of backgrounds and environments."

Adam, ML Open Source Software Manager

"I get to build software that solves real problems for my colleagues, while also being part of the global Open Source community."

Peter, Quantitative Research Intern

"My internship was a hugely valuable experience that sharpened my skills and gave me a new perspective on building models with real-world impact."

Matteo, Quantitative Research Intern

"One of the things that has truly stood out to me is the collaborative and welcoming culture. I hadn’t expected such a supportive environment but it’s been one of the main reasons I’ve enjoyed working here from day one."


Interview process

Online Application

Our assessment process kicks off with our Talent Acquisition team, who will review your application and assess your fit for the role.

Stage One: Technical Interview

You will meet with a team member, or complete a remote test, to assess your technical abilities.

Stage Two: Behavioural Interview

We will set aside technical skills and focus on you.

Stage Three: Further Technical Interviews

Here, we will take a deeper dive into your technical skills and competencies.

Stage Four: Management Interviews

The final stage of our interview process is where you will meet members of your team, your future manager, and functional leadership.

