
NeurIPS Paper Reviews 2023 #1

19 January 2024
  • Quantitative Research

Our team of quantitative researchers has shared the most interesting research presented during workshops and seminars at NeurIPS 2023.

Discover the perspectives of Danny, one of our machine learning engineers, as he discusses his most compelling findings from the conference.

The G-Research booth at NeurIPS 2022

A U-turn on Double Descent: Rethinking Parameter Counting in Statistical Learning

Alicia Curth, Alan Jeffares, Mihaela van der Schaar

The double descent hypothesis is a relatively recent idea that attempts to reconcile the “bigger is better” practice of modern machine learning with the bias-variance trade-off. It states that in the overparameterized regime, the traditional U-shaped test error vs model complexity curve breaks down, and it is possible to see improved generalization performance by continually increasing the model parameter count. This overparameterized regime is referred to as the interpolation region, where the number of model parameters is greater than or equal to the training set size.
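The shape of this curve can be probed with a toy experiment (a sketch for illustration, not code from the paper): min-norm least-squares regression on random Fourier features, sweeping the feature count across the training set size. The task, feature scale and grid below are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: noisy sine target.
n_train, n_test = 40, 200
x_train = rng.uniform(-1, 1, n_train)
y_train = np.sin(2 * np.pi * x_train) + 0.1 * rng.standard_normal(n_train)
x_test = rng.uniform(-1, 1, n_test)
y_test = np.sin(2 * np.pi * x_test)

test_errors = []
feature_counts = [5, 10, 20, 40, 80, 160, 320]  # crosses n_train = 40
for p in feature_counts:
    # Random Fourier features: cos(w x + b) with fixed random w, b,
    # shared between train and test.
    w = rng.standard_normal(p) * 5.0
    b = rng.uniform(0, 2 * np.pi, p)
    phi_train = np.cos(np.outer(x_train, w) + b)
    phi_test = np.cos(np.outer(x_test, w) + b)
    # pinv gives the minimum-norm least-squares solution, which becomes
    # the min-norm interpolant once p >= n_train (the interpolation region).
    coef = np.linalg.pinv(phi_train) @ y_train
    test_errors.append(np.mean((phi_test @ coef - y_test) ** 2))
```

Plotting `test_errors` against `feature_counts` traces out test error as the parameter count passes through the interpolation threshold at `p = n_train`.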

In this paper, the authors revisit the results from the original Belkin et al. (2019) paper, which observes a double descent for Random Fourier Feature regression, decision tree ensembles and gradient boosted trees. They claim that in each of these cases, model complexity is increased along multiple axes (for example, splits per tree and number of trees for a tree ensemble), and the double descent appears as an artefact of switching between these axes when increasing model complexity, rather than as a result of reaching the interpolation threshold (where the number of model parameters equals the training set size). When test error is plotted against increasing model complexity along any single axis, the traditional U-shaped bias-variance curve is recovered.

They go on to interpret each of these cases as a “smoother” from the classical statistical literature, which allows them to derive the effective number of parameters for each model. They then reproduce and re-plot the results from the original paper against the effective parameter count and recover the U-shaped curve in all cases. The obvious omission is the investigation of the “deep double descent” case which is suggested as the next direction for this work.
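For a linear smoother, predictions take the form ŷ = S y, and the effective number of parameters is the trace of S. A minimal sketch using ridge regression (chosen here for illustration; the paper derives analogous quantities for each model class it studies):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 10
X = rng.standard_normal((n, p))

def effective_params(X, lam):
    """Effective number of parameters of ridge regression, tr(S),
    where S = X (X^T X + lam * I)^{-1} X^T is the smoother matrix."""
    S = X @ np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T)
    return np.trace(S)

# With no regularisation, tr(S) equals the raw parameter count p
# (up to numerical error); increasing lam shrinks the effective
# count strictly below p, even though the raw count is unchanged.
df_unregularised = effective_params(X, 0.0)
df_regularised = effective_params(X, 10.0)
```

The point mirrors the paper's argument: the raw parameter count and the effective parameter count can move very differently as a regularisation or ensembling axis is varied.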


Normalization Layers Are All That Sharpness-Aware Minimization Needs

Maximilian Müller, Tiffany Vlaar, David Rolnick, Matthias Hein

Sharpness-Aware Minimization (SAM) is a technique that attempts to improve generalization performance by seeking out flatter minima: rather than minimizing the training loss L(w) directly, it minimizes the worst-case sharpness of the training loss in a neighbourhood around the current weights w.
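Concretely, SAM solves min_w max_{||ε|| ≤ ρ} L(w + ε), approximating the inner maximization with a single normalized gradient ascent step. A minimal numpy sketch on a toy quadratic loss (illustrative only; the loss, ρ and learning rate are assumptions, not the paper's setup):

```python
import numpy as np

def loss(w):
    # Toy quadratic loss with a known minimum at w = (1, -2).
    return (w[0] - 1.0) ** 2 + 2.0 * (w[1] + 2.0) ** 2

def grad(w):
    return np.array([2.0 * (w[0] - 1.0), 4.0 * (w[1] + 2.0)])

def sam_step(w, rho=0.05, lr=0.1):
    """One SAM update: ascend to the (approximate) worst-case point in a
    rho-ball, then apply the gradient computed there to the original weights."""
    g = grad(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # first-order inner max
    g_adv = grad(w + eps)                        # gradient at perturbed point
    return w - lr * g_adv

w = np.array([4.0, 3.0])
for _ in range(100):
    w = sam_step(w)
```

Because the descent direction is evaluated at the perturbed point, the update is biased away from regions where the loss rises steeply within the ρ-ball.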

In this paper, the authors introduce SAM-ON (SAM-OnlyNorm), which applies SAM to the normalization parameters of a network only. They find that SAM-ON achieves better generalization performance than the original SAM method (applied to all parameters) on ResNet architectures with BatchNorm and Vision Transformers with LayerNorm.
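The core idea can be sketched as a filter on which parameters receive the SAM perturbation (the parameter dictionary, names and helper below are hypothetical, loosely following PyTorch naming conventions):

```python
import numpy as np

# Hypothetical parameter dictionary for a small network: in SAM-ON, the
# adversarial perturbation touches only normalisation parameters.
params = {
    "conv1.weight": np.ones((8, 3)),
    "bn1.weight": np.ones(8),      # BatchNorm scale (gamma)
    "bn1.bias": np.zeros(8),       # BatchNorm shift (beta)
    "fc.weight": np.ones((10, 8)),
}

def is_norm_param(name):
    """Heuristic filter: treat 'bn'/'norm' layers as normalisation params."""
    return any(tag in name for tag in ("bn", "norm"))

def sam_on_perturb(params, grads, rho=0.05):
    """Ascent step of SAM applied only to normalisation parameters;
    all other parameters are left at their current values."""
    norm_grads = {n: g for n, g in grads.items() if is_norm_param(n)}
    total = np.sqrt(sum(np.sum(g ** 2) for g in norm_grads.values()))
    scale = rho / (total + 1e-12)
    return {
        n: p + scale * grads[n] if is_norm_param(n) else p
        for n, p in params.items()
    }

grads = {n: np.ones_like(p) for n, p in params.items()}
perturbed = sam_on_perturb(params, grads)
```

The descent step then uses gradients evaluated at `perturbed`, exactly as in standard SAM; only the inner maximization is restricted.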

They investigate this further by measuring loss sharpness for both SAM and SAM-ON and find that SAM-ON actually finds regions with sharper minima, despite exhibiting better generalization performance. This supports claims from previous work that the generalization performance of SAM is not solely due to it finding flatter minima.


Read more of our quantitative researchers' thoughts

NeurIPS Paper Reviews 2023 #2

Discover the perspectives of Paul, one of our quantitative researchers, on the following papers:

  • Sharpness-Aware Minimization Leads to Low-Rank Features
  • When Do Neural Nets Outperform Boosted Trees on Tabular Data?
NeurIPS Paper Reviews 2023 #3

Discover the perspectives of Szymon, one of our quantitative researchers, on the following papers:

  • Convolutional State Space Models for Long-Range Spatiotemporal Modeling
  • How to Scale Your EMA
NeurIPS Paper Reviews 2023 #4

Discover the perspectives of Dustin, our scientific director, on the following papers:

  • Abide by the law and follow the flow: conservation laws for gradient flows
  • The Tunnel Effect: Building Data Representations in Deep Neural Networks
NeurIPS Paper Reviews 2023 #5

Discover the perspectives of Laurynas, one of our machine learning engineers, on the following papers:

  • Monarch Mixer: A Simple Sub-Quadratic GEMM-Based Architecture
  • QLoRA: Efficient Finetuning of Quantized LLMs
NeurIPS Paper Reviews 2023 #6

Discover the perspectives of Rui, one of our quantitative analysts, on the following papers:

  • Predict, Refine, Synthesize: Self-Guiding Diffusion Models for Probabilistic Time Series Forecasting
  • Conformal Prediction for Time Series with Modern Hopfield Networks