ICML 2022: Research paper reviews

A number of our researchers attended the International Conference on Machine Learning (ICML) in Baltimore, USA, last month. Read their thoughts on some of the best papers from the conference.

ICML 2022 paper reviews

As part of our diamond sponsorship of ICML and commitment to learning and development, a group of our researchers and engineers attended this year’s conference, held in Baltimore, USA.

Our researchers have pulled together reviews of their favourite papers from ICML 2022, explaining what excited them about each one.

Follow the links to read each set of ICML 2022 paper reviews.

Casey H – Machine Learning Engineer

  • Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
  • Transformer Quality in Linear Time
  • Quant-BnB: A Scalable Branch-and-Bound Method for Optimal Decision Trees with Continuous Features
  • Monarch: Expressive Structured Matrices for Efficient and Accurate Training

Read the ICML paper review

Johann – Quantitative Research Manager

  • Unaligned Supervision for Automatic Music Transcription in-the-Wild
  • Dataset Condensation via Efficient Synthetic-Data Parameterization

Read the ICML paper review

Angus L – Machine Learning Engineer

  • FlashAttention
  • Head2Toe: Utilizing Intermediate Representations for Better Transfer Learning
  • DRAGONN: Distributed Randomized Approximate Gradients of Neural Networks

Read the ICML paper review

Oliver L – Quantitative Researcher

  • Improving Ensemble Distillation with Weight Averaging and Diversifying Perturbation
  • Hierarchical Shrinkage: Improving the Accuracy and Interpretability of Tree-Based Methods
  • Multirate Training of Neural Networks

Read the ICML paper review

Linden R – Quantitative Researcher

  • Uncertainty Modeling in Generative Compressed Sensing
  • Multi-Task Learning as a Bargaining Game
  • Anticorrelated Noise Injection for Improved Generalization

Read the ICML paper review

Jonathan L – Quantitative Researcher

  • Domain Adaptation for Time Series Forecasting via Attention Sharing
  • Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions
  • Unraveling Attention via Convex Duality: Analysis and Interpretations of Vision Transformers

Read the ICML paper review

Stephen C – Senior Quantitative Researcher

  • Last Layer Re-Training is Sufficient for Robustness to Spurious Correlations
  • Fast Convex Optimization for Two-Layer ReLU Networks: Equivalent Model Classes and Cone Decompositions

Read the ICML paper review

Jaak S – Quantitative Researcher

  • Planning with Diffusion for Flexible Behavior Synthesis
  • Prioritized Training on Points that are Learnable, Worth Learning, and not yet Learnt
  • GACT: Activation Compressed Training for Generic Network Architectures

Read the ICML paper review

James B – Quantitative Researcher

  • Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups
  • Fishr: Invariant Gradient Variances for Out-of-Distribution Generalization
  • Approximately Equivariant Networks for Imperfectly Symmetric Dynamics

Read the ICML paper review