AI Engineer

Software Engineering | London

We tackle the most complex problems in quantitative finance by bringing scientific clarity to financial complexity.

From our London HQ, we unite world-class researchers and engineers in an environment that values deep exploration and methodical execution - because the best ideas take time to evolve. Together we're building a platform that amplifies our teams' most powerful ideas.

As part of our engineering team, you’ll shape the platforms and tools that drive high-impact research - designing systems that scale, accelerate discovery and support innovation across the firm.

Take the next step in your career.

The role

The Applied AI team is a centralised engineering team within the AI Engineering Department. We build, adopt, and maintain the abstracted agentic tools, platforms, and SDKs that enable intelligent systems across G‑Research.

We don't just build the platform — we use it ourselves to deliver high‑impact solutions, proving the patterns work and feeding real‑world lessons back into the tooling.

As an AI Engineer you will work across four dimensions:

  • Platform & SDKs — Build and maintain G‑Research's agentic platform, evaluation tooling, and Python SDKs that abstract away infrastructure complexity so teams across the firm can build agents quickly and safely.
  • Solutions — Use the platform to deliver production agentic workflows for research and corporate stakeholders, validating the platform through real use cases.
  • Ways of working — Define and champion best practices for building agents at G‑Research: evaluation standards, development patterns, testing approaches, and reference architectures. Lead by example.
  • Embedded delivery — When needed, deploy into specific business teams for weeks at a time to deeply understand their domain, solve critical problems, and uncover new AI opportunities first-hand.

Key responsibilities of the role include:

  • Build and evolve the agentic AI platform — develop the core abstractions, orchestration patterns, and infrastructure that enable agent development across G‑Research.
  • Create and maintain Python SDKs with G‑Research-specific abstractions that simplify common patterns: agent scaffolding, tool integration, context management, evaluation, and deployment.
  • Adopt and integrate best-in-class open-source tooling (LangGraph, Pydantic AI, and emerging frameworks), wrapping them in firm-specific abstractions rather than reinventing the wheel.
  • Define ways of working for agent development — establish evaluation standards, development patterns, testing approaches, and reference architectures that teams across the firm follow.
  • Lead by example on agentic evaluations — build and operate evaluation pipelines (e.g. LangSmith, Langfuse) that set the standard for how agents are measured, monitored, and improved at G‑Research.
  • Deliver production agentic solutions for internal stakeholders, using the platform to solve real problems and validating that the abstractions work under real-world conditions.
  • Embed with business teams across the firm — partner directly with research and corporate teams to solve critical problems and identify new AI opportunities. This may involve deploying into a specific team for several weeks to deeply understand their domain and deliver tailored solutions.
  • Apply context engineering techniques to optimise how agents retrieve, structure, and utilise information — and codify those techniques into the platform and SDKs.
  • Where needed, fine‑tune and optimise models (parameter‑efficient or full‑weight) to meet domain‑specific accuracy, latency, and cost targets.
  • Integrate with existing stacks (C#, C++, JVM) ensuring clear APIs, monitoring, and CI/CD pipelines.
  • Upskill engineers across the firm through pair‑programming, workshops, SDK documentation, and written playbooks on agent development best practices.
  • Stay on top of the LLM ecosystem (tooling, evaluation techniques, open‑source releases) and feed lessons learned back into the platform and wider AI Engineering Department.

Who are we looking for?

We value pragmatic engineers who combine deep technical ability with strong product intuition and impeccable stakeholder communication. You should enjoy moving between green‑field proofs‑of‑concept and hardening them into resilient, audited services.

Essential

AI Engineering

  • Hands‑on experience building LLM applications with LangGraph/LangChain, Pydantic AI, FastAPI, MCP (Model Context Protocol) servers, and RAG (pgvector, Pinecone, Qdrant, Milvus, etc.).
  • Strong understanding of context engineering — retrieval strategies, context window management, dynamic prompt construction, and information routing.
  • Experience designing complex agentic workflows including multi-step planning, tool use, self-correction, and multi-agent patterns.
  • Solid understanding of RAG patterns, prompt engineering, and safe deployment considerations.

Evaluation & Observability

  • Experience with agentic evaluation frameworks (e.g. LangSmith, Langfuse) for measuring accuracy, latency, cost, and detecting behavioural regressions.

Platform & Software Engineering

  • Proven expertise in Python for production systems, with fluency in modern async patterns, typing, and testing frameworks.
  • Experience building platform-level software — reusable APIs, shared libraries, SDKs, extensible architectures — not just one-off solutions.
  • Comfort integrating with heterogeneous tech stacks (REST/gRPC, message buses, SQL/NoSQL stores) and automating deployment with Git, Docker, and Kubernetes.

Communication

  • Ability to translate ambiguous requirements into clear technical plans and to communicate trade‑offs to both technical and non‑technical audiences.

Desirable

  • Exposure to enterprise security, data‑privacy, and model‑governance frameworks.
  • Demonstrable skill fine‑tuning or parameter‑efficiently adapting foundation models (LoRA, QLoRA, DPO, etc.) and evaluating their performance.
  • Experience running low‑latency inference on‑prem GPU clusters or hybrid cloud environments.
  • Knowledge of experiment‑tracking, offline evaluation, and A/B‑testing pipelines for LLM applications.
  • Experience building chat or agent UIs for end-user interaction with agentic systems.
  • Contributions to open‑source AI‑engineering projects or publication of technical blogs/talks.


Why join us?

  • Highly competitive compensation plus annual discretionary bonus
  • Lunch provided (via Just Eat for Business) and dedicated barista bar
  • 30 days annual leave
  • 9% company pension contributions
  • Informal dress code and excellent work/life balance
  • Comprehensive healthcare and life assurance
  • Cycle-to-work scheme
  • Monthly company events
Location: London
Apply Now
What our people say

Alexander, Software Engineer

"I've felt very lucky to work with teams of people across the business who are generous with their time, knowledge and ideas as we collaborate to continuously build and rebuild complex systems with lots of moving parts."

Dexter, Software Engineer

"Work culture is an important aspect for me, so when I was contacted by G-Research I discovered a company where I could grow as a developer, whilst feeling like the company valued me as a person, not just a code monkey."

Owen, Software Engineer

"Before G-Research I’d completed internships at a few companies whilst studying Computer Science at the University of York. Since joining, I’ve progressed from an intern to a graduate to an engineer, having worked on systems and technology core to the business my entire time."

Sokratis, Software Engineer

"My favourite part of working at G-Research is the people, from my colleagues to our customers. The culture is great and encourages collaboration, which makes it easier for everyone to work together."

David, Director of Open Source Development

"Knowing that we make a difference to the bottom line of our business while also doing good for others in a collaborative way brings an extra layer of satisfaction to my day."

Ivan, DevOps Engineer

"I'm exposed to many different technologies and I get to learn the basics and then develop my skills with that technology further."

Jonathan, Open Source Evangelist

"Out of all the places where I've worked, it's the only place where I can say that I'm constantly learning from my colleagues. I really feel like this has made me grow a lot."

Caterina, Director of Open Source Developer Experience

"We have the built-in advantage of a collaborative culture, working with open source software developers who are global and come from all different kinds of backgrounds and environments."

Adam, ML Open Source Software Manager

"I get to build software that solves real problems for my colleagues, while also being part of the global Open Source community."

Peter, Quantitative Research Intern

"My internship was a hugely valuable experience that sharpened my skills and gave me a new perspective on building models with real-world impact."

Matteo, Quantitative Research Intern

"One of the things that has truly stood out to me is the collaborative and welcoming culture. I hadn’t expected such a supportive environment but it’s been one of the main reasons I’ve enjoyed working here from day one."

Interview process

Online Application

Our assessment process kicks off with our Talent Acquisition team, who will review your application and assess your fit for the role.

Stage One: Technical Interview

You will meet with a team member - or take a remote test - where your technical abilities will be assessed.

Stage Two: Behavioural Interview

We will set aside technical skills and focus on you.

Stage Three: Further Technical Interviews

Here, we will take a deeper dive into your technical skills and competencies.

Stage Four: Management Interviews

The final stage of our interview process is where you will meet members of your team, your future manager, and functional leadership.
