Training Machine Learning Models More Efficiently with Dataset Distillation

For a machine learning (ML) algorithm to be effective, useful features must be extracted from (often) large amounts of training data. However, this process can be challenging due to the costs associated with training on such large datasets, both in terms of compute and wall-clock time. The idea of distillation plays an important role in these situations by reducing the resources required for the model to be effective. The most widely known form of distillation is model distillation (a.k.a. knowledge distillation), where the predictions of large, complex teacher models are distilled into smaller models.
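To make the contrast with dataset distillation concrete, the sketch below illustrates the standard model-distillation setup (in the style of Hinton et al.), where a student is trained against the teacher's temperature-softened predictions alongside the ground-truth labels. This example is not from the post itself; it is a minimal PyTorch sketch, and the function name and hyperparameter choices (temperature, alpha) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.5):
    """Knowledge-distillation loss: a weighted sum of
    (a) KL divergence between the teacher's and student's
        temperature-softened output distributions, and
    (b) ordinary cross-entropy against the ground-truth labels."""
    # Soften both output distributions with the temperature.
    soft_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=-1)

    # KL term; the T^2 factor keeps gradient magnitudes comparable
    # across temperatures, as in the original formulation.
    kd_term = F.kl_div(log_soft_student, soft_teacher,
                       reduction="batchmean") * temperature ** 2

    # Hard-label term on the unsoftened student logits.
    ce_term = F.cross_entropy(student_logits, labels)

    return alpha * kd_term + (1.0 - alpha) * ce_term
```

Dataset distillation instead compresses the training data itself: rather than shrinking the model, it synthesizes a much smaller dataset on which a model can be trained to comparable effectiveness.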
