
ICML 2024: Research Paper Reviews

Machine learning is a rapidly evolving field. To stay ahead of the curve, we actively encourage our quantitative researchers and machine learning engineers to attend conferences like ICML, where they can engage with the latest cutting-edge research.

In this ICML paper review series, our team share their insights on the most interesting papers presented at the conference, discussing recent advancements in ML and offering an overview of where the field is heading.

Follow the links to read each set of ICML 2024 paper reviews.

Paper review #1
  • Arrows of Time for Large Language Models
  • Transformers are SSMs: Generalized Models and Efficient Algorithms Through Structured State Space Duality

Yousuf, Machine Learning Engineer

Read now
Paper review #2
  • Compute Better Spent: Replacing Dense Layers with Structured Matrices
  • Emergent Equivariance in Deep Ensembles

Danny, Machine Learning Engineer

Read now
Paper review #3
  • A Universal Class of Sharpness-Aware Minimization Algorithms
  • Rotational Equilibrium: How Weight Decay Balances Learning Across Neural Networks

Jonathan, Software Engineer

Read now
Paper review #4
  • Trained Random Forests Completely Reveal your Dataset
  • Test of Time Award: DeCAF: A Deep Convolutional Activation Feature for Generic Visual Recognition

Evgeni, Senior Quantitative Researcher

Read now
Paper review #5
  • Stop Regressing: Training Value Functions via Classification for Scalable Deep RL
  • Physics of Language Models: Part 3.1, Knowledge Storage and Extraction

Michael, Scientific Director

Read now
Paper review #6
  • I/O Complexity of Attention, or How Optimal is Flash Attention?
  • Simple Linear Attention Language Models Balance the Recall-Throughput Tradeoff

Fabian, Senior Quantitative Researcher

Read now
Paper review #7
  • Offline Actor-Critic Reinforcement Learning Scales to Large Models
  • Information-Directed Pessimism for Offline Reinforcement Learning

Ingmar, Quantitative Researcher

Read now
Paper review #8
  • Better & Faster Large Language Models via Multi-token Prediction
  • Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations

Oliver, Quantitative Researcher

Read now

Our Networking Party

Latest News

Going 15 Percent Faster with Graph-Based Type-checking (part two)
  • 13 Jan 2025

Hear from Florian, Open-Source Software Engineer, in the second part of this two-part series on the challenges and breakthroughs of an internal G-Research initiative aimed at enhancing the .NET developer experience at scale.

Read article
G-Research December 2024 Grant Winners
  • 09 Jan 2025

Each month, we provide up to £2,000 in grant money to early-career researchers in quantitative disciplines. Hear from our December grant winners.

Read article
James Maynard on Prime Numbers: Cryptography, Twin Primes and Groundbreaking Discoveries
  • 19 Dec 2024

We were thrilled to welcome James Maynard, 2022 Fields Medallist and Professor of Number Theory at the Mathematical Institute in Oxford, on stage for the latest Distinguished Speaker Symposium last month. James’ talk on patterns in prime numbers homed in on unanswered questions within mathematics and the recent developments that have brought solutions to those problems closer to reality. Hear more in his exclusive interview with us.

Read article

Latest Events

Hack the Burgh
  • Platform Engineering
  • Software Engineering

01 Mar 2025 - 02 Mar 2025
The Nucleus Building, The University of Edinburgh, Thomas Bayes Road, Edinburgh, UK

Pub Quiz: Oxford
  • Quantitative Engineering
  • Quantitative Research

12 Feb 2025
Oxford - to be confirmed after registration

Pub Quiz: Cambridge
  • Quantitative Engineering
  • Quantitative Research

25 Feb 2025
Cambridge - to be confirmed after registration
