
ICML 2024: Research Paper Reviews

Machine learning is a rapidly evolving field. To stay ahead of the curve, we encourage our quantitative researchers and machine learning engineers to attend conferences like ICML and engage with the latest research.

In this ICML paper review series, our team share their insights on the most interesting research presented at the conference. They discuss recent advances in ML, offering an overview of the field and where it is heading.

Follow the links to read each set of ICML 2024 paper reviews.

Paper review #1
  • Arrows of Time for Large Language Models
  • Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality

Yousuf, Machine Learning Engineer

Paper review #2
  • Compute Better Spent: Replacing Dense Layers with Structured Matrices
  • Emergent Equivariance in Deep Ensembles

Danny, Machine Learning Engineer

Paper review #3
  • A Universal Class of Sharpness-Aware Minimization Algorithms
  • Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks

Jonathan, Software Engineer

Paper review #4
  • Trained Random Forests Completely Reveal your Dataset
  • DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition (Test of Time Award)

Evgeni, Senior Quantitative Researcher

Paper review #5
  • Stop Regressing: Training Value Functions via Classification for Scalable Deep RL
  • Physics of Language Models: Part 3.1, Knowledge Storage and Extraction

Michael, Scientific Director

Paper review #6
  • I/O Complexity of Attention, or How Optimal is Flash Attention?
  • Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff

Fabian, Senior Quantitative Researcher

Paper review #7
  • Offline Actor-Critic Reinforcement Learning Scales to Large Models
  • Information-Directed Pessimism for Offline Reinforcement Learning

Ingmar, Quantitative Researcher

Paper review #8
  • Better & Faster Large Language Models via Multi-token Prediction
  • Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations

Oliver, Quantitative Researcher



Latest News

The Tyranny of Tech Debt
28 Apr 2025

Hear from our Head of Forecasting Engineering on why the term "tech debt" has outlived its usefulness. In this blog, he makes the case for moving away from generic labels and instead asking more precise, value-driven questions that lead to meaningful improvements in engineering and business outcomes.

G-Research March 2025 Grant Winners
22 Apr 2025

Each month, we provide up to £2,000 in grant money to early career researchers in quantitative disciplines. Hear from our March grant winners.

Invisible Work of OpenStack: Eventlet Migration
25 Mar 2025

Hear from Jay, an Open Source Software Engineer, on tackling technical debt in OpenStack. As technology evolves, outdated code becomes inefficient and harder to maintain. Jay highlights the importance of refactoring legacy systems to keep open-source projects sustainable and future-proof.


Latest Events

SIAM Conference on Financial Mathematics and Engineering

15 Jul 2025 - 18 Jul 2025
Hyatt Regency Miami, 400 SE 2nd St, Miami, FL 33131, United States
  • Quantitative Engineering
  • Quantitative Research

Imperial PhD Careers Fair

10 Jun 2025
Queen's Tower Rooms, Sherfield Building, South Kensington Campus, Imperial College London, London, SW7 2AZ
  • Quantitative Engineering
  • Quantitative Research

Oxbridge Women in Computer Science Conference

03 May 2025
The William Gates Building, 15 JJ Thomson Avenue, Cambridge, CB3 0FD
  • Platform Engineering
  • Software Engineering
