
Forward‑Deployed AI Engineer

  • Software Engineering
  • London

Do you want to tackle the biggest questions in finance with near infinite compute power at your fingertips?

G-Research is a leading quantitative research and technology firm, with offices in London and Dallas.

We are proud to employ some of the best people in their field and to nurture their talent in a dynamic, flexible and highly stimulating culture where world-beating ideas are cultivated and rewarded.

This is a role based in our new Soho Place office – opened in 2023 – in the heart of Central London and home to our Research Lab.

The role

The Applied AI team acts as an internal strike force: small, high-impact groups that embed directly within business units, research groups and platform teams to accelerate the adoption of AI techniques across the firm.

As a Forward-Deployed AI Engineer you will turn LLM capabilities into production solutions that drive measurable value, from intelligent workflows to research agents.

Your day-to-day will alternate between rapid prototyping, integrating with existing systems and coaching domain experts on best practices for maintainable, safe LLM applications. Success is measured by the speed and robustness with which internal partners can ship and own new AI-based features.

Key responsibilities of the role include:

  • Engaging directly with internal clients to understand pain points, identify AI opportunities and shape solution roadmaps

  • Building end-to-end AI-powered systems in Python using LangGraph and similar orchestration frameworks, FastAPI, MCPs and vector-based retrieval services

  • Fine‑tuning and optimising models (parameter‑efficient or full‑weight) to meet domain‑specific accuracy, latency and cost targets

  • Designing RAG and agentic workflows that safely combine proprietary data with public and on-prem models

  • Integrating new services with existing C#, C++ or JVM-based stacks, ensuring clear APIs, monitoring and CI/CD pipelines

  • Establishing repeatable patterns, such as reference architectures, templated infrastructure and testing harnesses, that enable teams to self-serve future use cases

  • Upskilling engineers and analysts through pair‑programming, workshops and written playbooks on AI engineering best practices

  • Staying on top of the LLM ecosystem, including tooling, evaluation techniques and open-source releases, and feeding lessons learned back into the wider AI Engineering Department

Who are we looking for?

We value pragmatic engineers who combine deep technical ability with strong product intuition and impeccable stakeholder communication. You should enjoy moving between green‑field proofs‑of‑concept and hardening them into resilient, audited services.

The ideal candidate will have the following skills and experience:

  • Proven expertise in Python for production systems, with fluency in modern async patterns, typing and testing frameworks

  • Hands-on experience building agentic AI applications with LangGraph and LangChain, Pydantic, FastAPI, MCPs and RAG, using vector stores such as pgvector, Pinecone, Qdrant and Milvus

  • Demonstrable skill in fine-tuning or parameter-efficiently adapting foundation models, using techniques such as LoRA, QLoRA and DPO, and evaluating their performance

  • Solid understanding of RAG patterns, prompt engineering, evaluation metrics and safe deployment considerations

  • Comfort integrating with heterogeneous tech stacks, such as REST/gRPC, message buses and SQL/NoSQL stores, and automating their deployment with Git, Docker and Kubernetes

  • Ability to translate ambiguous requirements into clear technical plans and to communicate trade‑offs to both technical and non‑technical audiences

Desirable:

  • Exposure to enterprise security, data‑privacy and model‑governance frameworks

  • Experience running low-latency inference on on-prem GPU clusters or in hybrid cloud environments

  • Knowledge of experiment‑tracking, offline evaluation and A/B‑testing pipelines for LLM applications

  • Contributions to open‑source AI‑engineering projects or publication of technical blogs and talks

Why should you apply?

  • Highly competitive compensation plus annual discretionary bonus

  • Lunch provided (via Just Eat for Business) and dedicated barista bar

  • 30 days’ annual leave

  • 9% company pension contributions

  • Informal dress code and excellent work/life balance

  • Comprehensive healthcare and life assurance

  • Cycle-to-work scheme

  • Monthly company events

Location: London
Michael, Software Engineer

"It’s a privilege to be in a place where my curiosity is nurtured and my learning journey is supported!"


What our people say

Gabriel, Software Engineer

"The problems we solve are often novel in nature, meaning we get to solve the previously unsolved. I find this to be a great way to stay challenged and engaged!"

Garrett, Software Engineer

"The willingness to collaborate between both teams and functions has made the transition into my new role as easy as possible."

Alexander, Software Engineer

"I've felt very lucky to work with teams of people across the business who are generous with their time, knowledge and ideas as we collaborate to continuously build and rebuild complex systems with lots of moving parts."

Dexter, Software Engineer

"Work culture is an important aspect for me, so when I was contacted by G-Research I discovered a company where I could grow as a developer, whilst feeling like the company valued me as a person, not just a code monkey."

Owen, Software Engineer

"Before G-Research I’d completed internships at a few companies whilst studying Computer Science at the University of York. Since joining, I’ve progressed from an intern to a graduate to an engineer, having worked on systems and technology core to the business my entire time."

Sokratis, Software Engineer

"My favourite part of working at G-Research is the people, from my colleagues to our customers. The culture is great and encourages collaboration, which makes it easier for everyone to work together."

Willy, Data Services Manager

"My team and I have access to a wide range of training opportunities, which allowed us to get the entire team AWS certified within a quarter. We’re actively working on the latest AI and machine learning projects to stay ahead of industry standards."

Yoga, Software Engineering Manager

"The friendly, collaborative atmosphere here is a breath of fresh air and a perfect fit for me."

Joshua, Platform Engineer

"The best thing about working at G-Research is being around such smart people, it motivates you to always want to grow and learn."

Margot, HRIS Manager

"I enjoy how dynamic the work environment at G-Research is. It keeps you busy and continuously creates opportunities to develop yourself and your career, too."


Interview process

Online Application

Our assessment process kicks off with our Talent Acquisition team, who will review your application and assess your fit for the role.

Stage One: Technical Interview

You will meet with a team member, or take a remote test, where your technical abilities will be assessed.

Stage Two: Behavioural Interview

We will set aside technical skills and focus on you.

Stage Three: Further Technical Interviews

Here, we will take a deeper dive into your technical skills and competencies.

Stage Four: Management Interviews

The final stage of our interview process is where you will meet members of your team, your future manager, and functional leadership.

