Microsoft Research Blog

NeurIPS 2022: Seven Microsoft research papers selected for oral presentations

Published December 5, 2022



Microsoft is proud to be a platinum sponsor of the 36th annual Conference on Neural Information Processing Systems (NeurIPS), which is widely regarded as the world’s most prestigious research conference on artificial intelligence and machine learning.

Microsoft has a strong presence at NeurIPS again this year, with more than 150 of our researchers participating in the conference and 122 of our research papers accepted. Our researchers are also taking part in 10 workshops, four competitions and a tutorial.

In one of the workshops, AI for Science: Progress and Promises, a panel of leading researchers will discuss how artificial intelligence and machine learning have the potential to advance scientific discovery. The panel will include two Microsoft researchers: Max Welling, Vice President and Distinguished Scientist, Microsoft Research AI4Science, who will serve as moderator, and Peter Lee, Corporate Vice President, Microsoft Research and Incubations.

Of the 122 Microsoft research papers accepted for the conference, seven have been selected for oral presentations during the virtual NeurIPS experience the week of December 4th. The oral presentations provide a deeper dive into each of the featured research topics.

In addition, two other Microsoft research papers received Outstanding Paper Awards for NeurIPS 2022. One of those papers, Gradient Estimation with Discrete Stein Operators , explains how researchers developed a gradient estimator that achieves substantially lower variance than state-of-the-art estimators with the same number of function evaluations, which has the potential to improve problem solving in machine learning. In the other paper, A Neural Corpus Indexer for Document Retrieval , researchers demonstrate that an end-to-end deep neural network that unifies training and indexing stages can significantly improve the recall performance of traditional document retrieval methods.


Below we have provided the titles, authors and abstracts for all seven of the Microsoft research papers chosen for oral presentations at NeurIPS, with links to additional information for those who want to explore the topics more fully:

Uni[MASK]: Unified Inference in Sequential Decision Problems

Micah Carroll, Orr Paradise, Jessy Lin, Raluca Georgescu, Mingfei Sun, David Bignell, Stephanie Milani, Katja Hofmann, Matthew Hausknecht, Anca Dragan, Sam Devlin

Abstract: Randomly masking and predicting word tokens has been a successful approach in pre-training language models for a variety of downstream tasks. In this work, we observe that the same idea also applies naturally to sequential decision making, where many well-studied tasks like behavior cloning, offline RL, inverse dynamics, and waypoint conditioning correspond to different sequence maskings over a sequence of states, actions, and returns. We introduce the UniMASK framework, which provides a unified way to specify models that can be trained on many different sequential decision-making tasks. We show that a single UniMASK model is often capable of carrying out many tasks with performance similar to or better than single-task models. Additionally, after fine-tuning, our UniMASK models consistently outperform comparable single-task models.
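The abstract's central observation is that different decision-making tasks are just different binary masks over one grid of state and action tokens. The sketch below illustrates that idea with a few hypothetical mask patterns; the task names and exact patterns are illustrative, not the paper's definitions.

```python
import numpy as np

def task_mask(task: str, T: int) -> np.ndarray:
    """Return a (T, 2) visibility mask over (state, action) columns.
    1 = token given to the model, 0 = token the model must predict."""
    mask = np.ones((T, 2), dtype=int)
    if task == "behavior_cloning":
        mask[:, 1] = 0                  # predict all actions from the states
    elif task == "forward_dynamics":
        mask[1:, 0] = 0                 # predict future states from s_0 and actions
    elif task == "waypoint_conditioning":
        mask[1:-1, 0] = 0               # only start and goal states are given...
        mask[:, 1] = 0                  # ...and the actions must be inferred
    else:
        raise ValueError(task)
    return mask

print(task_mask("waypoint_conditioning", 4))
```

A single model trained on random maskings can then be queried with any of these patterns at inference time.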

K-LITE: Learning Transferable Visual Models with External Knowledge

Sheng Shen, Chunyuan Li, Xiaowei Hu, Yujia Xie, Jianwei Yang, Pengchuan Zhang, Zhe Gan, Lijuan Wang, Lu Yuan, Ce Liu, Kurt Keutzer, Trevor Darrell, Anna Rohrbach, Jianfeng Gao

Abstract: The new generation of state-of-the-art computer vision systems is trained from natural language supervision, ranging from simple object category names to descriptive captions. This form of supervision ensures high generality and usability of the learned visual models, based on the broad concept coverage achieved through a large-scale data collection process. Alternatively, we argue that learning with external knowledge about images is a promising way that leverages a much more structured source of supervision and offers sample efficiency.

In this paper, we propose K-LITE (Knowledge-augmented Language-Image Training and Evaluation), a simple strategy to leverage external knowledge for building transferable visual systems: In training, it enriches entities in natural language with WordNet and Wiktionary knowledge, leading to an efficient and scalable approach to learning image representations that uses knowledge about the visual concepts; In evaluation, the natural language is also augmented with external knowledge and then used to reference learned visual concepts (or describe new ones) to enable zero-shot and few-shot transfer of the pre-trained models. We study the performance of K-LITE on two important computer vision problems, image classification and object detection, benchmarking on 20 and 13 different existing datasets, respectively. The proposed knowledge-augmented models show significant improvement in transfer learning performance over existing methods. Our code is released at https://github.com/microsoft/klite.
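The enrichment step the abstract describes can be pictured as appending an external gloss to each class name before it is encoded by a language-image model. In this minimal sketch, the `KNOWLEDGE` dict is a stand-in for a real WordNet/Wiktionary lookup, and the prompt template is an illustrative assumption, not the paper's exact format.

```python
# Stand-in for a WordNet/Wiktionary lookup; entries are illustrative.
KNOWLEDGE = {
    "kit fox": "the smallest North American fox, with large ears",
    "airliner": "a commercial aircraft that carries passengers",
}

def enrich(class_name: str) -> str:
    """Append an external-knowledge gloss to a class-name prompt, if one exists."""
    gloss = KNOWLEDGE.get(class_name)
    prompt = f"a photo of a {class_name}"
    return f"{prompt}, {gloss}" if gloss else prompt

print(enrich("kit fox"))
```

The same enrichment is applied at evaluation time, so knowledge about unseen categories can describe new visual concepts for zero-shot transfer.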

Extreme Compression for Pre-trained Transformers Made Simple and Efficient

Xiaoxia Wu, Zhewei Yao, Minjia Zhang, Conglong Li, Yuxiong He

Abstract: Extreme compression, particularly ultra-low bit precision (binary/ternary) quantization, has been proposed to fit large NLP models on resource-constrained devices. However, to preserve the accuracy for such aggressive compression schemes, cutting-edge methods usually introduce complicated compression pipelines, e.g., multi-stage expensive knowledge distillation with extensive hyperparameter tuning. They also often focus less on smaller transformer models that have already been heavily compressed via knowledge distillation, and they lack a systematic study to show the effectiveness of their methods.

In this paper, we perform a comprehensive, systematic study to measure the impact of many key hyperparameters and training strategies from previous works. As a result, we find that previous baselines for ultra-low bit precision quantization are significantly under-trained. Based on our study, we propose a simple yet effective compression pipeline for extreme compression.

Our simplified pipeline demonstrates that:

(1) we can skip the pre-training knowledge distillation to obtain a 5-layer BERT while achieving better performance than previous state-of-the-art methods, like TinyBERT;

(2) extreme quantization plus layer reduction is able to reduce the model size by 50x, resulting in new state-of-the-art results on GLUE tasks.
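A hedged sketch of the core quantization step the abstract refers to: ternary quantization maps every weight to one of {-α, 0, +α}. The threshold and scale heuristics below are a common choice in the quantization literature, not necessarily this paper's exact recipe.

```python
import numpy as np

def ternarize(w: np.ndarray) -> np.ndarray:
    """Map each weight to {-alpha, 0, +alpha} (ternary quantization)."""
    threshold = 0.7 * np.abs(w).mean()        # common heuristic cutoff
    mask = np.abs(w) > threshold              # weights that stay non-zero
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # per-tensor scale
    return alpha * np.sign(w) * mask

w = np.array([0.9, -0.05, 0.4, -0.8])
print(ternarize(w))
```

Each tensor then needs only two bits per weight plus one scale, which is where the large size reductions come from; combining this with layer reduction gives the 50x figure cited above.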

On the Complexity of Adversarial Decision Making

Dylan J. Foster, Alexander Rakhlin, Ayush Sekhari, Karthik Sridharan

Abstract: A central problem in online learning and decision making—from bandits to reinforcement learning—is to understand what modeling assumptions lead to sample-efficient learning guarantees. We consider a general adversarial decision-making framework that encompasses (structured) bandit problems with adversarial rewards and reinforcement learning problems with adversarial dynamics. Our main result is to show—via new upper and lower bounds—that the Decision-Estimation Coefficient, a complexity measure introduced by Foster et al. in the stochastic counterpart to our setting, is necessary and sufficient to obtain low regret for adversarial decision making. However, compared to the stochastic setting, one must apply the Decision-Estimation Coefficient to the convex hull of the class of models (or, hypotheses) under consideration. This establishes that the price of accommodating adversarial rewards or dynamics is governed by the behavior of the model class under convexification, and recovers a number of existing results, both positive and negative. En route to obtaining these guarantees, we provide new structural results that connect the Decision-Estimation Coefficient to variants of other well-known complexity measures, including the Information Ratio of Russo and Van Roy and the Exploration-by-Optimization objective of Lattimore and György.

Maximum Class Separation as Inductive Bias in One Matrix

Tejaswi Kasarla, Gertjan J. Burghouts, Max van Spengler, Elise van der Pol, Rita Cucchiara, Pascal Mettes

Abstract: Maximizing the separation between classes constitutes a well-known inductive bias in machine learning and a pillar of many traditional algorithms. By default, deep networks are not equipped with this inductive bias and therefore many alternative solutions have been proposed through differential optimization. Current approaches tend to optimize classification and separation jointly: aligning inputs with class vectors and separating class vectors angularly.

This paper proposes a simple alternative: encoding maximum separation as an inductive bias in the network by adding one fixed matrix multiplication before computing the softmax activations. The main observation behind our approach is that separation does not require optimization but can be solved in closed-form prior to training and plugged into a network. We outline a recursive approach to obtain the matrix consisting of maximally separable vectors for any number of classes, which can be added with negligible engineering effort and computational overhead. Despite its simple nature, this one matrix multiplication provides real impact. We show that our proposal directly boosts classification, long-tailed recognition, out-of-distribution detection, and open-set recognition, from CIFAR to ImageNet. We find empirically that maximum separation works best as a fixed bias; making the matrix learnable adds nothing to the performance. The closed-form implementation and code to reproduce the experiments are available on GitHub.
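One closed-form choice of k maximally separated class vectors is the vertex set of a regular simplex, for which every pair of unit vectors has cosine similarity -1/(k-1), the minimum achievable for k classes. The paper derives its matrix recursively; the direct construction below is a sketch that achieves the same pairwise separation (the vectors live in a (k-1)-dimensional subspace of R^k).

```python
import numpy as np

def max_separation_matrix(k: int) -> np.ndarray:
    """Rows are k unit vectors with pairwise cosine similarity -1/(k-1)."""
    scale = np.sqrt(k / (k - 1))
    return scale * (np.eye(k) - np.ones((k, k)) / k)

P = max_separation_matrix(10)
# In a network, logits = features @ P.T would be computed before the softmax,
# with P kept fixed (not learned), matching the paper's finding that a fixed
# matrix works best.
```

Because the matrix is computed once in closed form before training, it adds essentially no engineering or compute overhead.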

Censored Quantile Regression Neural Networks for Distribution-Free Survival Analysis

Tim Pearce, Jong-Hyeon Jeong, Yichen Jia, Jun Zhu

Abstract: This paper considers doing quantile regression on censored data using neural networks (NNs). This adds to the survival analysis toolkit by allowing direct prediction of the target variable, along with a distribution-free characterization of uncertainty, using a flexible function approximator. We begin by showing how an algorithm popular in linear models can be applied to NNs. However, the resulting procedure is inefficient, requiring sequential optimization of an individual NN at each desired quantile. Our major contribution is a novel algorithm that simultaneously optimizes a grid of quantiles output by a single NN. To offer theoretical insight into our algorithm, we show firstly that it can be interpreted as a form of expectation-maximization, and secondly that it exhibits a desirable 'self-correcting' property. Experimentally, the algorithm produces quantiles that are better calibrated than existing methods on 10 out of 12 real datasets.
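The objective behind quantile regression is the pinball (quantile) loss, and optimizing a grid of quantiles with one network amounts to averaging that loss over all quantile levels at once. The sketch below shows the uncensored grid loss only; the paper's censoring adjustments are omitted, and the function name is illustrative.

```python
import numpy as np

def pinball_loss(y: np.ndarray, preds: np.ndarray, taus: np.ndarray) -> float:
    """y: (n,) targets; preds: (n, q), one column per quantile; taus: (q,) levels."""
    diff = y[:, None] - preds                        # positive where we under-predict
    loss = np.maximum(taus * diff, (taus - 1.0) * diff)  # asymmetric penalty per tau
    return float(loss.mean())

y = np.array([1.0, 2.0, 3.0])
taus = np.array([0.1, 0.5, 0.9])
preds = np.tile(y[:, None], (1, 3)) + np.array([-0.5, 0.0, 0.5])  # fixed offsets
print(pinball_loss(y, preds, taus))
```

Minimizing each column's expected loss drives that column toward the corresponding conditional quantile, which is why a single network can output the whole grid.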

Learning (Very) Simple Generative Models Is Hard

Sitan Chen, Jerry Li, Yuanzhi Li

Abstract: Motivated by the recent empirical successes of deep generative models, we study the computational complexity of the following unsupervised learning problem. For an unknown neural network \(F:\mathbb{R}^d\to\mathbb{R}^{d'}\), let \(D\) be the distribution over \(\mathbb{R}^{d'}\) given by pushing the standard Gaussian \(\mathcal{N}(0,\textrm{Id}_d)\) through \(F\). Given i.i.d. samples from \(D\), the goal is to output any distribution close to \(D\) in statistical distance.

We show under the statistical query (SQ) model that no polynomial-time algorithm can solve this problem even when the output coordinates of \(F\) are one-hidden-layer ReLU networks with \(\log(d)\) neurons. Previously, the best lower bounds for this problem simply followed from lower bounds for supervised learning and required at least two hidden layers and \(\mathrm{poly}(d)\) neurons [Daniely-Vardi '21, Chen-Gollakota-Klivans-Meka '22].

The key ingredient in our proof is an ODE-based construction of a compactly supported, piecewise-linear function \(f\) with polynomially-bounded slopes such that the pushforward of \(\mathcal{N}(0,1)\) under \(f\) matches all low-degree moments of \(\mathcal{N}(0,1)\).
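To make the setup concrete, the sketch below samples from a pushforward distribution \(D\): draw \(z \sim \mathcal{N}(0, \textrm{Id}_d)\) and apply a one-hidden-layer ReLU network \(F\). The network sizes and random weights are arbitrary illustrations, not the paper's hard instances; the learner in the paper sees only such samples.

```python
import numpy as np

rng = np.random.default_rng(0)
d, d_out, width = 3, 2, 4                      # illustrative sizes
W1 = rng.normal(size=(width, d))               # hidden-layer weights
W2 = rng.normal(size=(d_out, width))           # output-layer weights

def F(z: np.ndarray) -> np.ndarray:
    """One-hidden-layer ReLU network mapping R^d -> R^{d_out}."""
    return W2 @ np.maximum(W1 @ z, 0.0)

# An i.i.d. sample from D: draw z ~ N(0, Id) and push it through F.
samples = np.stack([F(rng.normal(size=d)) for _ in range(1000)])
print(samples.shape)
```

The hardness result says that even for networks this shallow (with only \(\log(d)\) hidden neurons per output coordinate), no SQ algorithm can recover a distribution close to \(D\) in polynomial time.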

Related publications

A Neural Corpus Indexer for Document Retrieval

Gradient Estimation with Discrete Stein Operators



Related events

  • NeurIPS 2022



Accepted Papers

Oral Presentations

Felipe Bivort Haiek

Bivort Haiek, F. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Oscar A. Bustos-Brinez, Joseph A Gallego, Fabio A. Gonzalez

Bustos-Brinez, O. A. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Maria Leonor Pacheco, Tunazzina Islam, Lyle Ungar, Ming Yin, Dan Goldwasser

Pacheco, M. L. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Vítor Lourenço, Aline Paes

Lourenço, V. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

William Berrios, Arturo Deza

Berrios, W. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Jorge Luis Guevara Diaz, Bianca Zadrozny, Campbell D Watson, Daniela Szwarcman, Debora Lima, Dilermando Queiroz, Leonardo Tizzei, Maria Garcia, Maysa M G Macedo, Priscilla Avegliano

Guevara Diaz, J. L. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Fatima Davelouis Gallardo, John D Martin, Joseph Modayil, Michael Bowling

Davelouis Gallardo, F. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Juan S. Salcedo Gallo, Jesus Solano, Hernan Garcia, David Zarruk, Alejandro Correa-Bahnsen

Salcedo Gallo, J. S. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Jeanfed Ramirez, Hugo Jair Escalante, Luis Villaseñor-Pineda

Ramirez, J. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Ana Maria Quintero Ossa, Jesus Solano, Hernan Garcia, David Zarruk, Alejandro Correa-Bahnsen, Carlos F Valencia

Quintero Ossa, A. M. et al., (2022). [Oral Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Poster Presentations

Gissella Bejarano, Joe Huamani, Cristian Lazo Quispe, Stev Huaman Ramos, Pablo Rivas, Tomas Cerny

Bejarano, G. et al., (2022). [Poster Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Hanan Ronaldo Quispe Condori, Jorshinno Sumire Mamani, Harley Vera Olivera, Edwin Alvarez Mamani, Rut Patricia Condori Obregon

Quispe Condori, H. R. et al., (2022). [Poster Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Jorge A. Sanchez-Bautista, Javier M. Antelis, Omar Mendoza-Montoya

Sanchez-Bautista, J. A. et al., (2022). [Poster Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Pedro A Colon-Hernandez

Colon-Hernandez, P. A. et al., (2022). [Poster Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Fernando Camarena, Miguel Gonzalez-Mendoza, Leonardo Chang, Neil Hernandez-Gress

Camarena, F. et al., (2022). [Poster Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Jorge Barreras Cortes

Barreras Cortes, J. et al., (2022). [Poster Presentation]. Neural Information Processing Systems Conference: LatinX in AI (LXAI) Research Workshop 2022, New Orleans, USA.

Virtual Session

Workshop Program

  
  
08:30 – 08:35 (by Shiqiang Wang)
08:35 – 09:00, by Bo Li
09:00 – 09:20, by Konstantin Mishchenko
09:20 – 10:00 (7 min talk + 3 min Q&A each)
  • Jayanth Reddy Regatti, Songtao Lu, Abhishek Gupta and Ness Shroff
  • Sai Praneeth Karimireddy, Wenshuo Guo and Michael Jordan
10:00 – 10:30
10:30 – 11:10 (7 min talk + 3 min Q&A each)
11:10 – 11:15
11:15 – 12:00
12:00 – 13:30
13:30 – 14:10 (7 min talk + 3 min Q&A each)
14:10 – 15:00
15:00 – 15:30
15:30 – 15:50 , by Jianyu Wang
15:50 – 16:15 , by Stacy Patterson
16:15 – 17:00
17:00
   

Invited Talks

    Trustworthy Federated Learning

Bo Li, Assistant Professor, University of Illinois at Urbana–Champaign (UIUC)


Dr. Bo Li is an assistant professor in the Department of Computer Science at the University of Illinois at Urbana–Champaign. She is the recipient of the IJCAI Computers and Thought Award, the Alfred P. Sloan Research Fellowship, the NSF CAREER Award, the MIT Technology Review TR-35 Award, the Dean's Award for Excellence in Research, the C.W. Gear Outstanding Junior Faculty Award, the Intel Rising Star Award, the Symantec Research Labs Fellowship, the Rising Star Award, research awards from tech companies such as Amazon, Facebook, Intel, and IBM, and best paper awards at several top machine learning and security conferences. Her research focuses on both theoretical and practical aspects of trustworthy machine learning, security, privacy, and game theory. She has designed several scalable frameworks for trustworthy machine learning and privacy-preserving data publishing systems. Her work has been featured by major publications and media outlets such as Nature, Wired, Fortune, and the New York Times.

    Asynchronous Optimization: Delays, Stability, and the Impact of Data Heterogeneity

Konstantin Mishchenko, Research Scientist, Samsung


Konstantin Mishchenko is a Research Scientist at Samsung in Cambridge, UK, working on optimization theory and federated learning. He received his double-degree MSc from Paris-Dauphine and École normale supérieure Paris-Saclay in 2017, and he did his PhD under the supervision of Peter Richtárik from 2017 to 2021. From December 2021 to October 2022, he was a postdoc in the group of Francis Bach at Inria Paris. Konstantin had research internships at Google Brain and Amazon, has been recognized as an outstanding reviewer for NeurIPS19, ICML20, AAAI20, ICLR21, ICML21, NeurIPS21, ICLR22, ICML22 and served as an Area Chair for ACML 2022. He was named a Rising Career in Data Science by the University of Chicago in 2021 and has published 11 conference papers at ICML, ICLR, NeurIPS, AISTATS, and UAI.

    On the Unreasonable Effectiveness of Federated Averaging with Heterogeneous Data

Jianyu Wang, Research Scientist, Meta


Jianyu Wang is a research scientist at Meta. He received his Ph.D. from the ECE Department at Carnegie Mellon University in 2022 and his B.Eng. in Electrical Engineering from Tsinghua University in 2017. He was a research intern with Google Research in 2020 and 2021, and with Facebook AI Research in 2019. His research interests are federated learning, distributed optimization, and systems for large-scale machine learning. His awards and honors include the Qualcomm Ph.D. Fellowship (2019), the best student paper award at the NeurIPS 2019 federated learning workshop, and the best poster award at the NSF CEDO workshop (2021).

    Scalable and Communication-Efficient Vertical Federated Learning

Stacy Patterson, Associate Professor, Rensselaer Polytechnic Institute


Stacy Patterson is an Associate Professor in the Department of Computer Science at Rensselaer Polytechnic Institute. She received her MS and PhD in computer science from UC Santa Barbara in 2003 and 2009, respectively. From 2009 to 2011, she was a postdoctoral scholar at the Center for Control, Dynamical Systems and Computation at UC Santa Barbara. From 2011 to 2013, she was a postdoctoral fellow in the Department of Electrical Engineering at Technion - Israel Institute of Technology. Dr. Patterson is the recipient of a Viterbi postdoctoral fellowship, the IEEE CSS Axelby Outstanding Paper Award, and an NSF CAREER award. She serves as an Associate Editor for the IEEE Transactions on Control of Network Systems. Her research interests include distributed algorithms, cooperative control, and edge and cloud computing.

  • Outstanding Paper Award : Jayanth Reddy Regatti, Songtao Lu, Abhishek Gupta and Ness Shroff. Conditional Moment Alignment for Improved Generalization in Federated Learning
  • Outstanding Paper Award : Sai Praneeth Karimireddy, Wenshuo Guo and Michael Jordan. Mechanisms that Incentivize Data Sharing in Federated Learning

Accepted Papers (Oral Presentation)

  • Jayanth Reddy Regatti, Songtao Lu, Abhishek Gupta and Ness Shroff. Conditional Moment Alignment for Improved Generalization in Federated Learning
  • Sai Praneeth Karimireddy, Wenshuo Guo and Michael Jordan. Mechanisms that Incentivize Data Sharing in Federated Learning
  • Hanhan Zhou, Tian Lan, Guru Prasadh Venkataramani and Wenbo Ding. Federated Learning with Online Adaptive Heterogeneous Local Models
  • Baturalp Buyukates, Jinhyun So, Hessam Mahdavifar and Salman Avestimehr. LightVeriFL: Lightweight and Verifiable Secure Federated Learning
  • Francesco Pase, Berivan Isik, Deniz Gunduz, Tsachy Weissman and Michele Zorzi. Efficient Federated Random Subnetwork Training
  • Filippo Galli, Sayan Biswas, Gangsoo Zeong, Tommaso Cucinotta and Catuscia Palamidessi. Group privacy for personalized federated learning
  • Yae Jee Cho, Divyansh Jhunjhunwala, Tian Li, Virginia Smith and Gauri Joshi. To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning
  • Marco Bornstein, Tahseen Rabbani, Evan Wang, Amrit Bedi and Furong Huang. SWIFT: Rapid Decentralized Federated Learning via Wait-Free Model Communication
  • Sharut Gupta, Kartik Ahuja, Mohammad Havaei, Niladri Chatterjee and Yoshua Bengio. FL Games: A Federated Learning Framework for Distribution Shifts
  • Simone Bottoni, Giulio Zizzo, Stefano Braghin and Alberto Trombetta. Verifiable Federated Machine Learning
  • Yeojoon Youn, Bhuvesh Kumar and Jacob Abernethy. Accelerated Federated Optimization with Quantization
  • Xingchen Ma, Junyi Zhu and Matthew Blaschko. Tackling Personalized Federated Learning with Label Concept Drift via Hierarchical Bayesian Modeling

Accepted Papers (Poster Presentation)

  • Jaeheon Kim and Bong Jun Choi. FedTH : Tree-based Hierarchical Image Classification in Federated Learning
  • M. Taha Toghani and Cesar Uribe. Unbounded Gradients in Federated Learning with Buffered Asynchronous Aggregation
  • Khaoula Chehbouni, Gilles Caporossi, Reihaneh Rabbany, Martine De Cock and Golnoosh Farnadi. Early Detection of Sexual Predators with Federated Learning
  • Timothy Castiglia, Shiqiang Wang and Stacy Patterson. Self-Supervised Vertical Federated Learning
  • Pei Fang and Jinghui Chen. On the Vulnerability of Backdoor Defenses for Federated Learning
  • Mariel AF Werner, Lie He, Sai Praneeth Karimireddy, Michael Jordan and Martin Jaggi. Towards Provably Personalized Federated Learning via Threshold-Clustering of Similar Clients
  • Xinwei Zhang, Bingqing Song, Mehrdad Honarkhah, Jie Ding and Mingyi Hong. Building Large Machine Learning Models from Small Distributed Models: A Layer Matching Approach
  • Yuqing Zhu, Xiang Yu, Yi-Hsuan Tsai, Francesco Pittaluga, Masoud Faraki, Manmohan Chandraker and Yu-Xiang Wang. Voting-Based Approaches for Differentially Private Federated Learning
  • Md Ibrahim Ibne Alam, Koushik Kar, Theodoros Salonidis and Horst Samulowitz. DASH: Decentralized CASH for Federated Learning
  • Yujia Wang, Pei Fang and Jinghui Chen. Accelerating Adaptive Federated Optimization with Local Gossip Communications
  • Dimitris Stripelis, Umang Gupta, Greg Ver Steeg and Jose Luis Ambite. Federated Progressive Sparsification (Purge-Merge-Tune)+
  • Pedro Valdeira, Yuejie Chi, Claudia Soares and Joao Xavier. A Multi-Token Coordinate Descent Method for Vertical Federated Learning
  • Rajarshi Saha, Michal Yemini, Emre Ozfatura, Deniz Gunduz and Andrea Goldsmith. ColRel: Collaborative Relaying for Federated Learning over Intermittently Connected Networks
  • Ziwei Li, Hong-You Chen, Han Wei Shen and Wei-Lun Chao. Understanding Federated Learning through Loss Landscape Visualizations: A Pilot Study
  • Krishna Pillutla, Yassine Laguel, Jérôme Malick and Zaid Harchaoui. Differentially Private Federated Quantiles with the Distributed Discrete Gaussian Mechanism
  • Chen Dun, Mirian Hipolito Garcia, Dimitrios Dimitriadis, Christopher Jermaine and Anastasios Kyrillidis. Efficient and Light-Weight Federated Learning via Asynchronous Distributed Dropout
  • Junyi Li and Heng Huang. FedGRec: Federated Graph Recommender System with Lazy Update of Latent Embeddings
  • Ljubomir Rokvic, Panayiotis Danassis and Boi Faltings. Privacy-Preserving Data Filtering in Federated Learning Using Influence Approximation
  • John Nguyen, Jianyu Wang, Kshitiz Malik, Maziar Sanjabi and Mike Rabbat. Where to Begin? On the Impact of Pre-Training and Initialization in Federated Learning
  • Joseph Lavond, Minhao Cheng and Yao Li. Trusted Aggregation (TAG): Model Filtering Backdoor Defense In Federated Learning
  • Stefanos Laskaridis, Javier Fernandez-Marques and Łukasz Dudziak. Cross-device Federated Architecture Search
  • Parker Newton, Olivia Choudhury, Bill Horne, Vidya Ravipati, Divya Bhargavi and Ujjwal Ratan. Client-Private Secure Aggregation for Privacy-Preserving Federated Learning
  • Shanshan Wu, Tian Li, Zachary Charles, Yu Xiao, Ken Liu, Zheng Xu and Virginia Smith. Motley: Benchmarking Heterogeneity and Personalization in Federated Learning
  • Daniel Lopes, João Nadkarni, Filipe Assunção, Miguel Lopes and Luís Rodrigues. Federated Learning for Predicting the Next Node in Action Flows
  • Chuan Guo, Kamalika Chaudhuri, Pierre Stock and Mike Rabbat. The Interpolated MVU Mechanism For Communication-efficient Private Federated Learning
  • Yi Sui, Junfeng Wen, Yenson Lau, Brendan Ross and Jesse Cresswell. Find Your Friends: Personalized Federated Learning with the Right Collaborators
  • Saeed Vahidian, Mahdi Morafah, Chen Chen, Mubarak Shah and Bill Lin. Rethinking Data Heterogeneity in Federated Learning: Introducing a New Notion and Standard Benchmarks
  • Afroditi Papadaki, Natalia Martinez, Martin Bertran, Guillermo Sapiro and Miguel Rodrigues. Federated Fairness without Access to Demographics
  • Mirian Hipolito Garcia, Andre Manoel, Daniel Madrigal, Robert Sim and Dimitrios Dimitriadis. FLUTE: A Scalable, Extensible Framework for High-Performance Federated Learning Simulations
  • Aleksei Triastcyn, Matthias Reisser and Christos Louizos. Decentralized Learning with Random Walks and Communication-Efficient Adaptive Optimization
  • Parsa Assadi, Byung Hoon Ahn and Hadi Esmaeilzadeh. Accelerating Federated Learning Through Attention on Local Model Updates
  • Ali Dadras, Karthik Prakhya and Alp Yurtsever. Federated Frank-Wolfe Algorithm
  • Atahan Ozer, Kadir Burak Buldu, Abdullah Akgül and Gozde Unal. How to Combine Variational Bayesian Networks in Federated Learning
  • Chhavi Sharma, Vishnu Narayanan and Balamurugan Palaniappan. Stochastic Gradient Methods with Compressed Communication for Decentralized Saddle Point Problems
  • Batiste Le bars, Aurélien Bellet, Marc Tommasi, Erick Lavoie and Anne-marie Kermarrec. Refined Convergence and Topology Learning for Decentralized Optimization with Heterogeneous Data
  • Amr Abourayya, Michael Kamp, Erman Ayday, Jens Kleesiek, Kanishka Rao, Geoffrey Webb and Bharat Rao. AIMHI: Protecting Sensitive Data through Federated Co-Training
  • Ilias Driouich, Chuan Xu, Giovanni Neglia, Frederic Giroire and Eoin Thomas. A Novel Model-Based Attribute Inference Attack in Federated Learning
  • Athul Sreemathy Raj, Irene Tenison, Kacem Khaled, Felipe Gohring de Magalhães and Gabriela Nicolescu. FedSHIBU: Federated Similarity-based Head Independent Body Update
  • Jaewoo Shin, Taehyeon Kim and Se-Young Yun. Revisiting the Activation Function for Federated Image Classification
  • Zhaozhuo Xu, Luyang Liu, Zheng Xu and Anshumali Shrivastava. Adaptive Sparse Federated Learning in Large Output Spaces via Hashing
  • Holger R Roth, Yan Cheng, Yuhong Wen, Isaac Yang, Ziyue Xu, YuanTing Hsieh, Kristopher Kersten, Ahmed Harouni, Can Zhao, Kevin Lu, Zhihong Zhang, Wenqi Li, Andriy Myronenko, Dong Yang, Sean Yang, Nicola Rieke, Abood Quraini, Chester Chen, Daguang Xu, Nic Ma, Prerna Dogra, Mona G Flores and Andrew Feng. FLARE: Federated Learning from Simulation to Real-World
  • Liam Collins, Enmao Diao, Tanya Roosta, Jie Ding and Tao Zhang. PerFedSI: A Framework for Personalized Federated Learning with Side Information
  • Shengyuan Hu, Jack Goetz, Kshitiz Malik, Hongyuan Zhan, Zhe Liu and Yue Liu. FedSynth: Gradient Compression via Synthetic Data in Federated Learning
  • Sourasekhar Banerjee, Alp Yurtsever and Monowar H Bhuyan. Personalized Multi-tier Federated Learning
  • Saeed Vahidian, Mahdi Morafah, Weijia Wang and Bill Lin. FLIS: Clustered Federated Learning via Inference Similarity for Non-IID Data Distribution
  • Hamid Mozaffari, Virendra Marathe and Dave Dice. Private and Robust Federated Learning using Private Information Retrieval and Norm Bounding
  • Karthik Prasad, Sayan Ghosh, Graham Cormode, Ilya Mironov, Ashkan Yousefpour and Pierre Stock. Reconciling Security and Communication Efficiency in Federated Learning
  • Shashi Raj Pandey, Lam Nguyen and Petar Popovski. FedToken: Tokenized Incentives for Data Contribution in Federated Learning
  • Giulio Zizzo, Ambrish Rawat, Naoise Holohan and Seshu Tirupathi. Federated Continual Learning with Differentially Private Data Sharing
  • Zhiwei Tang, Yanmeng Wang and Tsung-Hui Chang. z-SignFedAvg: A unified sign-based stochastic compression for federated learning
  • Kiwan Maeng, Chuan Guo, Sanjay Kariyappa and Edward Suh. Measuring and Controlling Split Layer Privacy Leakage Using Fisher Information
  • Yuanhao Xiong, Ruochen Wang, Minhao Cheng, Felix Yu and Cho-Jui Hsieh. FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning
  • Haibo Yang, Peiwen Qiu, Prashant Khanduri and Jia Liu. With a Little Help from My Friend: Server-Aided Federated Learning with Partial Client Participation
  • Yue Niu, Saurav Prakash, Souvik Kundu, Sunwoo Lee and Salman Avestimehr. Federated Learning of Large Models at the Edge via Principal Sub-Model Training
  • Yuhang Yao, Mohammad Mahdi Kamani, Zhongwei Cheng, Lin Chen, Carlee Joe-Wong and Tianqiang Liu. FedRule: Federated Rule Recommendation System with Graph Neural Networks
  • Chulin Xie, Pin-Yu Chen, Ce Zhang and Bo Li. Improving Vertical Federated Learning by Efficient Communication with ADMM
  • Virendra Marathe, Pallika Kanani and Daniel W. Peterson. Subject Level Differential Privacy with Hierarchical Gradient Averaging
  • Jingtao Li, Lingjuan Lyu, Daisuke Iso, Chaitali Chakrabarti and Michael Spranger. MocoSFL: Enabling Cross-Client Collaborative Self-Supervised Learning
  • Marco Schreyer, Hamed Hemati, Damian Borth and Miklos A. Vasarhelyi. Federated Continual Learning to Detect Accounting Anomalies in Financial Auditing
  • Motasem Alfarra, Juan Camilo Perez, Egor Shulgin, Peter Richtárik and Bernard Ghanem. Certified Robustness in Federated Learning
  • Sara Babakniya, Souvik Kundu, Saurav Prakash, Yue Niu and Salman Avestimehr. Federated Sparse Training: Lottery Aware Model Compression for Resource Constrained Edge
  • Mathieu Even, Hadrien Hendrikx and Laurent Massoulié. Asynchronous Speedup in Decentralized Optimization

Call for Papers

Submission Instructions

Submissions should be no more than 6 pages long, excluding references, and should follow the NeurIPS'22 template. Submissions are double-blind (author identities will not be revealed to the reviewers), so the submitted PDF file should not include any identifying information about the authors. An optional appendix of any length is allowed and should be placed at the end of the paper (after the references).

Submissions are collected on OpenReview at the following link: https://openreview.net/group?id=NeurIPS.cc/2022/Workshop/Federated_Learning . Accepted papers and their reviews will be posted publicly on OpenReview. Due to the short timeline, there will be no rebuttal period, but authors are encouraged to interact and discuss with reviewers on OpenReview after the acceptance notifications are sent out. Rejected papers and their reviews will remain private and will not be posted publicly.

For questions, please contact: [email protected]

Proceedings and Dual Submission Policy

Presentation Format

Organizing Committee

  • Nathalie Baracaldo (IBM Research Almaden, USA)
  • Olivia Choudhury (Amazon, USA)
  • Gauri Joshi (Carnegie Mellon University, USA)
  • Peter Richtárik (King Abdullah University of Science and Technology, Saudi Arabia)
  • Praneeth Vepakomma (Massachusetts Institute of Technology, USA)
  • Shiqiang Wang (IBM T. J. Watson Research Center, USA)
  • Han Yu (Nanyang Technological University, Singapore)

Program Committee

Sponsored by


FedML aims to provide an end-to-end machine learning operating system for people and organizations to transform their data into intelligence with minimal effort. FedML stands for “ ” in a broad scope, and “ ” in a specific scope. At the current stage, FedML develops and maintains a machine learning platform that enables zero-code, lightweight, cross-platform, and provably secure federated learning and analytics. It enables machine learning from decentralized data at various users/silos/edge nodes, without the need to centralize any data to the cloud, hence providing maximum privacy and efficiency. It consists of a lightweight, cross-platform Edge AI SDK that is deployable on edge GPUs, smartphones, and IoT devices. It also provides a user-friendly MLOps platform to simplify decentralized machine learning and real-world deployment. FedML supports vertical solutions across a broad range of industries (healthcare, finance, insurance, smart cities, IoT, etc.) and applications (computer vision, natural language processing, data mining, and time-series forecasting).
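As a concrete illustration of the pattern such platforms automate, here is a minimal federated-averaging (FedAvg) sketch in plain Python/NumPy. The clients, data, and one-parameter linear model are hypothetical and purely illustrative; this is not FedML's actual API, only the core loop in which raw data never leaves a client and a server aggregates model weights.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = 3.0

# Each client holds private data for y = TRUE_W * x + noise.
clients = []
for _ in range(4):
    x = rng.normal(size=50)
    clients.append({"x": x, "y": TRUE_W * x + 0.1 * rng.normal(size=50)})

def local_update(w, data, lr=0.05, epochs=5):
    """A few steps of local gradient descent on one client's private data."""
    for _ in range(epochs):
        grad = np.mean(2.0 * (w * data["x"] - data["y"]) * data["x"])
        w -= lr * grad
    return w

w_global = 0.0
for _ in range(10):  # communication rounds
    # Clients train locally; only model weights travel to the server.
    local_weights = [local_update(w_global, c) for c in clients]
    # The server averages the weights without ever seeing raw data.
    w_global = float(np.mean(local_weights))

print(f"federated estimate: {w_global:.3f} (true weight: {TRUE_W})")
```

Real systems add secure aggregation, compression, and client sampling on top of this loop, but the privacy argument is the same: the server only ever observes aggregated model parameters.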

Organized by


Nominations to Join the NeurIPS 2023 Organizing Committees


By Alice Oh and Tristan Naumann, General Chairs

With NeurIPS still fresh in our memory, planning is underway for NeurIPS 2023, which will be held once again in New Orleans, Dec 10-16, 2023. We are excited to join this upcoming year as co-General Chairs for the conference, and we look forward to contributing to this important meeting that holds significance for us and our field.

NeurIPS is a large conference, and its organization continues to be driven largely by volunteers from our community committed to its success. As we start this process, we hope to draw from as wide a pool of people as possible when selecting chairs for the conference. Please consider nominating yourself, or someone you know, for one of the organizer roles in the conference, or for a generalist role that you think might serve the conference better.

Please send nominations by Jan 13, 2023, using this form.

Serving as an organizer is a great way to build experience in crafting large scientific meetings and balancing the many tradeoffs involved; it also helps build new networks and lets you give back to the community in a way that differs from reviewing and workshops.

We look forward to your nominations, and we hope to share updates as planning progresses in the new year.

NeurIPS 2022 – Day 1 Recap


Here are the highlights from Monday, the first day of NeurIPS 2022, which was dedicated to Affinity Workshops , Education Outreach , and the Expo !

There were many exciting Affinity Workshops this year organized by the Affinity Workshop chairs – Arjun Subramonian, Kehinde Aruleba and Sunipa Dev – that included:

  • Women in Machine Learning: In-Person and Virtual , from 7:30 am – 1:00 pm CST
  • LatinX in AI , from 8:00 am – 6:00 pm CST
  • Queer in AI , from 9:00 am – 6:00 pm CST
  • NewInML , from 9:00 am – 5:30 pm CST
  • Black in AI , from 9:40 am – 6:00 pm CST
  • Indigenous in AI (https://indigenousinai.org/), from 11:30 am – 6:00 pm CST
  • Global South in AI , from 2:30 – 6:00 pm CST 
  • North Africans in ML , from 4:30 – 6:00 pm CST
  • Affinity Poster Session: In-Person and Virtual : 4:30 – 5:00 pm CST.

Throughout the day, from 9:30 am to 5:00 pm CST, there was a highly interactive Expo organized by Expo chairs Wenming Ye and Ismini Lourentzou, with a full schedule from a wide variety of industry participants and 77 exhibitors that included:

  • Expo Talk Panels,
  • Expo Workshops, and
  • Expo Demonstrations.

Please see the Expo schedule for more details.

The halls were full of enthusiastic students from 11 local New Orleans high schools, as NeurIPS hosted 240 students at its first Education Outreach Day, organized by Matt Wang and Jessica Forde. A special thanks to Mary Ellen Perry for spearheading the idea!

The conference officially kicked off in Hall H at 5:00 pm CST with the Opening Remarks from our General Chairs, Shakir Mohamed and Sanmi Koyejo, and Program Chairs, Alice Oh, Alekh Agarwal, Danielle Belgrave, and Kyunghyun Cho. It was followed at 5:15 pm by an opening invited talk by David Chalmers, Are Large Language Models Sentient?, which was streamed for our virtual attendees.

Following the talk, everyone socialized at the tasty NeurIPS reception from 6:00 to 8:00 pm.

If you have feedback or questions for the organizers, please send them by email to [email protected] ahead of Wednesday's Town Hall, 30 November from 6:00–7:00 pm in Theatre B.

Thank you to all attendees for following the NeurIPS Code of Conduct .  

Enjoy the conference!

Shakir Mohamed and Sanmi Koyejo

NeurIPS 2022 General Chairs,

on behalf of the NeurIPS 2022 Organizing Committee

How do Authors’ Perceptions of their Papers Compare with Co-authors’ Perceptions and Peer-review Decisions?


NeurIPS 2021 Author Perception Experiment

by Charvi Rastogi, Ivan Stelmakh, Hal Daumé III, Emma Pierson, and Nihar B. Shah, and the NeurIPS 2021 Program Chairs Alina Beygelzimer, Yann N. Dauphin, Percy Liang, Jennifer Wortman Vaughan, and Zhenyu Xue (NeurIPS 2021 Workflow Manager)

There is a considerable body of research on peer review. Within the machine learning community, there have been experiments establishing significant disagreement across reviewers and across reviewer panels — including at NeurIPS 2021 — and active discussions about the state of peer review. But how do author perceptions about their submitted papers match up to the outcomes of the peer-review process and perceptions of other authors? We investigate this question by asking authors who submitted papers to NeurIPS 2021 three questions:

(Q1) [At the time of paper submission] What is your best estimate of the probability (as a percentage) that this submission will be accepted?

(Q2) [At the time of paper submission; to authors submitting two or more papers] Rank your submissions in terms of your own perception of their scientific contributions to the NeurIPS community, if published in their current form.

(Q3) [After preliminary reviews were available to authors] After you read the reviews of this paper, how did your perception of the value of its scientific contribution to the NeurIPS community change (assuming it was published in its initially submitted form)?

Here are five key findings.


We find that among both accepted and rejected papers, about 50% of authors report that their perception of their own paper changed after seeing the initial reviews (Q3). Moreover, among both accepted and rejected papers, over 30% of authors report that their perception became more positive.

The fact that authors vastly overestimated the probability that their papers will be accepted suggests it would be useful for conference organizers and research mentors to attempt to recalibrate expectations prior to each conference. The disagreements we document around paper quality — between co-authors as well as between authors and reviewers — taken together with the disagreement among committees of reviewers observed in the complementary NeurIPS 2021 consistency experiment , suggest that assessing paper quality is not only an extremely noisy process but may be a fundamentally challenging task with no objectively right answer. The outcomes of paper submissions should thus be taken with a grain of salt. More broadly, as a community, we may take these findings into account when deciding on our policies and perceptions pertaining to the peer-review process and its outcomes. We hope the results of our experiment encourage discussion and introspection in the community.
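The calibration comparison behind Q1 can be sketched in a few lines: given each author's predicted acceptance probability and the paper's eventual outcome, the gap between the mean prediction and the realized acceptance rate measures the overestimate. The numbers below are synthetic, purely for illustration; the real survey data is analyzed in the linked paper.

```python
# Hypothetical survey entries: each pairs an author's predicted acceptance
# probability (Q1) with the paper's actual outcome (1 = accepted).
predictions = [0.80, 0.65, 0.90, 0.40, 0.70, 0.55, 0.85, 0.60]
outcomes    = [1,    0,    1,    0,    0,    0,    1,    0]

mean_predicted = sum(predictions) / len(predictions)
actual_rate = sum(outcomes) / len(outcomes)
overconfidence = mean_predicted - actual_rate

print(f"mean predicted acceptance probability: {mean_predicted:.0%}")
print(f"actual acceptance rate:                {actual_rate:.0%}")
print(f"overestimate:                          {overconfidence:+.0%}")
```

A positive overestimate across a large sample, as the survey found, is the signature of miscalibration that the authors suggest conference organizers and mentors could help correct.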

More details: https://arxiv.org/pdf/2211.12966.pdf

We would like to thank all the participants for the time they took to provide survey responses. We are grateful to the OpenReview team, especially Melisa Bok, for their support in running the survey on the OpenReview.net platform.

Get Ready for the NeurIPS 2022 Datasets and Benchmarks Track

Last year, NeurIPS launched the new Datasets and Benchmarks track to serve as a venue for exceptional work focused on creating high-quality datasets, insightful benchmarks, and discussions on improving dataset development and data-oriented work more broadly. Further details about the motivation and setup are discussed in our earlier blog post here.

This year, we received 447 submissions on a breadth of topics, of which 163 have been accepted for publication, an acceptance rate of 36.46%. Please explore the list of accepted papers. The reviewing standards were again set very high, and the process involved a set of specific attention points, such as the impact and documentation quality of datasets, the reproducibility of benchmarks, as well as ethics and long-term accessibility.

We are immensely grateful for the tremendous contributions of the 92 area chairs, 1,064 reviewers, and 39 ethics reviewers in making this new endeavor a success. Unlike last year, we organized a single reviewing round, more closely following the main NeurIPS review cycle, albeit with a longer rebuttal period, which allowed many submissions to be substantially improved.

Of the 163 accepted papers, about half of the papers were identified as introducing new datasets, while the other half presented new benchmarks. They covered a broad range of topics. Approximately 23% of papers were related to computer vision; 8% natural language processing; 7% reinforcement learning and simulation environments; and 6% multimodal data. The remainder covered various other topics, such as speech processing, explainable AI, and ethics. While these are rough estimates, we hope they provide a sense of the distribution of topics in this year’s track. 

This year, the Dataset and Benchmarks track also truly became a standard component of the NeurIPS conference. Datasets and Benchmarks papers are blended with the main conference papers in the poster sessions, panels, and on the virtual conference site. They will still be easily discoverable via a virtual site highlight page and stickers in the poster session. We are also delighted that the NeurIPS board has agreed to publish a single NeurIPS proceedings this year. The Datasets and Benchmarks papers will appear in the same proceedings as the other NeurIPS papers , with an indication that they are affiliated with the dataset and benchmark track to make them easy to find.

We are looking forward to another great edition of the NeurIPS Datasets and Benchmarks track, and hope to see you at the conference!

Get Ready for the NeurIPS Competition Track

By Marco Ciccone, Gustavo Stolovitzky, and Jake Albrecht

NeurIPS is here and we will have a dedicated Competition Track for the sixth time!

Social Event and Poster Session

We are glad to invite you to our social event at the conference in New Orleans on 29 November at 6 PM (Ballroom C).

The event will be opened by an invited talk from two pioneers of challenges in ML, Isabelle Guyon and Evelyne Viegas , who will discuss the role of competitions at NeurIPS, their evolution, and opportunities.

The invited talk will be followed by a poster session with the competition organizers presenting their challenges and the highlights of the past few months. 

Competitions have a valuable place in research and in solving complex problems. 

We encourage you to take advantage of this social opportunity to learn more about ML challenges and application trends. 

This year we selected competitions covering a broad spectrum of challenges and disciplines such as AutoML, Graph Representation Learning, Security, Machine Learning for Physical and Life Sciences, Natural Language Processing and Understanding, Robotics, and Multi-Agent Systems.

See the complete list of the selected competitions for this year: https://neurips.cc/Conferences/2022/CompetitionTrack

We are excited to finally meet the new and old faces of the organizers and participants who have made the competition track a success over the years. There will be pizza, salad, and soft drinks for everyone!

Online Workshops

After the success of the past editions, in addition to the physical event, the Competition Track will feature online workshops during the virtual week.

The online workshops aim to reach a larger audience and allow researchers worldwide interested in specific ML challenges to foster collaboration, exchange ideas, and grow a sense of community.

Each workshop is a focused session with invited talks from winners and experts of the specific competition. Check the schedule of each workshop on the conference competitions page or look at the general virtual program below.

6th December

  • 11:00–14:00 UTC: seven competition workshops in parallel
  • 21:00–24:00 UTC: two competition workshops in parallel

7th December

  • 13:00–16:00 UTC: eight competition workshops in parallel

8th December

  • 1:00–4:00 UTC: one competition workshop
  • 11:00–14:00 UTC: one competition workshop
  • 21:00–24:00 UTC: six competition workshops in parallel

We want to thank all the reviewers, organizers, and competition participants for their hard work and integrity over the past months of preparation. We look forward to meeting you all in NOLA and virtually for two inspiring weeks of science.

Getting Ready for NeurIPS (3): 2022 Conference Highlights

by the General Chairs, Sanmi Koyejo and Shakir Mohamed

The two weeks of NeurIPS 2022 are close, and we are excited to meet everyone in person in New Orleans during the first week and then to continue our interaction during the virtual week. There is a lot to look forward to, and this post is meant to help navigate the various events and activities. In our previous updates we described the steps we took for safety and facilities , and the overall format of the conference .

A highlight of every NeurIPS is the set of keynotes from leading academic and industry figures. This year's topics are:

  • Are Large Language Models Sentient? David Chalmers
  • Algorithms On the Bench: Examining Validity of ML Systems in the Public Sphere , Rediet Abebe
  • Conformal Prediction in 2022 , Emmanuel Candès
  • Interaction-Centric AI , Juho Kim
  • Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People , Alondra Nelson
  • The Data-Centric Era: How ML is Becoming an Experimental Science , Isabelle Guyon
  • The Forward-Forward Algorithm for Training Deep Neural Networks , Geoffrey Hinton

Poster Sessions

The in-person conference prioritizes in-person interaction and discussion, and this is centered around the poster sessions. There are two poster sessions each day of the main conference (Tues/Wed/Thurs). Poster boards have been placed with sufficient space for social distancing; we provide face shields for poster presenters, and we encourage mask-wearing for all attendees.

Posters come from three different streams:

Main Conference track. The main conference has 2,672 accepted papers. In addition to learning about the work directly from the authors, each paper has an individual page on the website where you can find a 5-minute video and a chat channel for discussing the work asynchronously.

Datasets and Benchmarks track. The second year of this track saw 163 papers accepted.

Journal Showcase. This year, we introduced a journal-to-conference track, where you can learn about the work of papers accepted into journals in our field. There are 41 papers from JMLR and 33 papers from ReScience in this track. 

Affinity Group Workshops and Expo

The first day of the meeting, Monday 28 Nov, includes most of the Affinity Group workshops as well as the Expo. This day is an opportunity to reconnect and make new connections. If you are attending NeurIPS for the first time, consider joining the New in ML workshop.

Affinity events. You can find the schedule for the Affinity Groups here. This year's affinity events include: Global South in AI; Women in ML (in both weeks); North Africans in ML; LatinX in AI; Queer in AI; Black in AI; and Indigenous in AI. The joint Affinity Poster Session in the early evening of the 28th is an opportunity for members across the Affinity Groups to showcase their work.

Expo. The Expo is an opportunity to hear about research and work from industry representatives from some of the platinum or gold exhibitors. There are expo talks, demonstrations, and workshops to experience; see the full expo schedule for locations and topics.

Competitions, Socials and Discussions

The evenings (6 PM onwards) of the in-person week provide further activities to get involved in the NeurIPS community. Some of the highlights are:

Competitions. On Tuesday 29 Nov 6pm, connect with other attendees to learn about this year’s competitions. There will be 22 competitions for you to interact with in an exhibition demo-style setup, and there will be pizza, salads and soft-drinks to keep you fed while you visit all the competition stands.

Ethics Review Open Discussion , also on Tuesday 29 Nov at 7pm. If you are interested in talking about ethics review processes and ways to improve them, join this moderated discussion led by the NeurIPS 2022 Ethics Review Chairs.

Town Hall on Wednesday 30 Nov, 6pm. As members of the NeurIPS community, your thoughts on building a stronger NeurIPS community and wider considerations are essential to the health of the conference. Join this moderated discussion hosted by this year’s Communications chairs, and with updates from the NeurIPS board, the general and program chairs, the diversity, inclusion and accessibility chairs, and other members of this year’s organizing committee.

Socials. Find a social to make new connections. This year's socials are broad and include: the negotiations social; K-Pop in NeurIPS; Women in AI Ignite; Un-Bookclub Haben: The Deafblind Woman Who Conquered Harvard Law; Interdisciplinary ML Mixer; ML&Space Social; Data Excellence; Industry, Academia, and the In-Betweens; and Gulf Coast AI. Check out the webpage listing all socials here.

Tutorials in 2022 are all virtual and held on Monday 5 December, covering time zones across the world. Catch up with the state of the art across 13 tutorials covering a broad range of subject areas in machine learning research. Sessions run at several times; see the tutorials blog and the website.

Spotlights and Paper Deep Dives

Since few of us can stay attentive for an all-day virtual conference, the virtual conference week keeps content focused in two two-hour blocks each day. In these sessions, you will see 1-minute spotlight presentations from authors of accepted papers, followed by mini panel discussions in which two papers are grouped and discussed together. You can ask questions through RocketChat.

Each two-hour block repeats the following structure:

  • [15 mins] 1-minute paper spotlights
  • [15 mins] Paper panel with the authors of 2 papers

The sessions run 9–11 AM UTC-8 and 9–11 AM UTC+8, with 2 or 3 tracks in parallel. Make sure to block the applicable times in your schedule; you can add them from the website.

There are three days of workshops this year: two days during the in-person conference and one during the virtual week. There is a range of workshops; you can see the full list on the website and read more about the workshops on our blog .

What’s Next

We encourage you to join the conference both in person and online, and to register if you have not yet done so. All that's left is to thank our organizing committees for their dedication. And a special thanks to Mary-Ellen Perry, Lee Campbell, Brad Brockmeyer, Brian Nettleton, Terri Auricchio, Max Wiesner, and the other members of our logistics and organizing staff, without whom the conference would not be possible. When you bump into them online or in person, please take a minute to share your thanks.

See you at the conference soon.

P.S. Our best wishes for a weekend ahead full of gratitude and grace. This post was written while listening to Jambalaya. And Tweet and Toot our content to help everyone plan for the two weeks ahead.

Announcing the NeurIPS 2022 Awards

by Alekh Agarwal, Alice Oh, Danielle Belgrave, Kyunghyun Cho, Deepti Ghadiyaram, Joaquin Vanschoren

We are excited to announce the award-winning papers for NeurIPS 2022! The three categories of awards are Outstanding Main Track Papers, Outstanding Datasets and Benchmark Track papers, and the Test of Time paper. We thank the awards committee for the main track, Anima Anandkumar, Phil Blunsom, Naila Murray, Devi Parikh, Rajesh Ranganath, and Tong Zhang. For the Datasets and Benchmarks track, we thank Hugo Jair Escalante, Sergio Escalera, Isabelle Guyon, Neil Lawrence, Olga Russakovsky, and Serena Yeung.

Congratulations to all authors!

Outstanding Papers

  • Is Out-of-distribution Detection Learnable? by Zhen Fang, Yixuan Li, Jie Lu, Jiahua Dong, Bo Han, Feng Liu This work provides a theoretical study of out-of-distribution (OOD) detection, focusing on the conditions under which such models are learnable. The work uses probably approximately correct (PAC) learning theory to show that OOD detection models are PAC learnable only for some conditions of the space of data distributions and the space of prediction models. It provides 3 concrete impossibility theorems, which can be easily applied to determine the feasibility of OOD detection in practical settings, and which was used in this work to provide a theoretical grounding for existing OOD detection approaches. This work also raises new theoretical questions, for example, about the learnability of near-OOD detection. As such, it has the potential for broad theoretical and practical impact in this important research area. Tues Nov 29 — Poster Session 1
  • Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding by Chitwan Saharia, William Chan, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Raphael Gontijo-Lopes, Tim Salimans, Jonathan Ho, David J Fleet, Mohammad Norouzi High-quality generative models of images based on diffusion processes are having a huge impact both within and beyond machine learning. This work represents the state of the art among such models, and it also innovates by demonstrating the effective combination of an independently trained large language model with an image decoder at scale. This inherently practical decoupling is likely to be a dominant paradigm for large-scale text-to-image models. The results are impressive and of interest to a broad audience. Thurs Dec 1 — Poster Session 5
  • Elucidating the Design Space of Diffusion-Based Generative Models by Tero Karras, Miika Aittala, Timo Aila, Samuli Laine This paper is an excellent demonstration of how a well-thought-through survey, one that seeks not just to list but to organize prior research into a coherent common framework, can provide insights that then lead to new modeling improvements. In this case the focus of the paper is generative models of images that incorporate some form of diffusion process, which have become extremely popular recently despite the difficulties of training such models. This paper is likely to be an important contribution to the evolution of both the understanding and the implementation of diffusion-process-based models. Wed Dec 7 — Featured Papers Panels 3B
  • ProcTHOR: Large-Scale Embodied AI Using Procedural Generation by Matt Deitke, Eli VanderBilt, Alvaro Herrasti, Luca Weihs, Kiana Ehsani, Jordi Salvador, Winson Han, Eric Kolve, Aniruddha Kembhavi, Roozbeh Mottaghi This work provides a framework for training embodied AI agents on large quantities of data, creating the potential for such agents to benefit from scaling, as language and image generation models have. The core of the framework is an engine for building procedurally-generated, physics-enabled environments with which agents can interact. This engine, in combination with provided digital assets and environmental controls, allows for generating a combinatorially large number of diverse environments. The authors demonstrate that this framework can be used to train SoTA models for several embodied AI tasks. The framework and code used in this work will be open-sourced, providing a valuable asset for the research community. Wed Nov 30 — Poster Session 3
  • Using natural language and program abstractions to instill human inductive biases in machines by Sreejan Kumar, Carlos G Correa, Ishita Dasgupta, Raja Marjieh, Michael Hu, Robert D. Hawkins, Jonathan Cohen, Nathaniel Daw, Karthik R Narasimhan, Thomas L. Griffiths Co-training on program abstractions and natural language enables incorporating human inductive biases into learning. This is a clean approach to incorporating human biases, one that also gains robustness from the program abstractions. Thurs Dec 1 — Poster Session 6
  • A Neural Corpus Indexer for Document Retrieval by Yujing Wang , Yingyan Hou, Haonan Wang, Ziming Miao, Shibin Wu, Hao Sun, Qi Chen , Yuqing Xia, Chengmin Chi, Guoshuai Zhao, Zheng Liu, Xing Xie, Hao Sun, Weiwei Deng, Qi Zhang, Mao Yang This work proposes a neural indexer that takes as input a query and outputs, via a decoder combined with beam search, a list of IDs corresponding to relevant documents in the index. It joins a small but growing line of research that departs from the dominant high recall-sparse retrieval paradigm. Notably, this new paradigm allows for gradient-based optimization of the indexer for target applications using standard deep learning algorithms and frameworks. The proposed approach introduces architectural and training choices that result in significant improvements compared to prior work, demonstrating the promise of neural indexers as a viable alternative. The paper is well-written and discusses the limitations and open questions following from this work, which can serve as inspiration for future research. Thurs Dec 1 — Poster Session 5
  • High-dimensional limit theorems for SGD: Effective dynamics and critical scaling by Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath This work studies the scaling limits of SGD with constant step size in the high-dimensional regime. It shows how complex the dynamics of SGD can be when the step size is large. Characterizing the limiting SDE, and comparing it to the ODE that arises when the step size is small, gives insights into the nonconvex optimization landscape.
  • Gradient Descent: The Ultimate Optimizer by Kartik Chandra, Audrey Xie, Jonathan Ragan-Kelley, Erik Meijer This paper reduces sensitivity to hyperparameters in gradient descent by developing a method to optimize with respect to hyperparameters and to recursively optimize *hyper*-hyperparameters. Since gradient descent is everywhere, the potential impact is tremendous. Wed Nov 30 — Poster Session 4
  • Riemannian Score-Based Generative Modelling by Valentin De Bortoli, Emile Mathieu, Michael John Hutchinson, James Thornton, Yee Whye Teh, Arnaud Doucet The paper generalizes score-based generative models (SGMs) from Euclidean space to Riemannian manifolds by identifying the major components that contribute to the success of SGMs. The method is both a novel and technically useful contribution. Wed Nov 30 — Poster Session 4
  • Gradient Estimation with Discrete Stein Operators by Jiaxin Shi, Yuhao Zhou, Jessica Hwang, Michalis Titsias, Lester Mackey This paper considers gradient estimation when the distribution is discrete. Most common gradient estimators suffer from excessive variance. To improve the quality of gradient estimation, the authors introduce a variance reduction technique based on Stein operators for discrete distributions. Even though the Stein operator is classical, this work provides a nice interpretation of it for gradient estimation and also shows practical improvement in experiments. Tues Nov 29 — Poster Session 1
  • An empirical analysis of compute-optimal large language model training by Jordan Hoffmann, Sebastian Borgeaud, Arthur Mensch, Elena Buchatskaya, Trevor Cai, Eliza Rutherford, Diego de las Casas, Lisa Anne Hendricks, Johannes Welbl, Aidan Clark, Tom Hennigan, Eric Noland, Katherine Millican, George van den Driessche, Bogdan Damoc, Aurelia Guy, Simon Osindero, Karen Simonyan, Erich Elsen, Oriol Vinyals, Jack William Rae, Laurent Sifre The work asks: “Given a fixed FLOPs budget, how should one trade off model size and the number of training tokens?” The work models this trade-off, makes a prediction based on that model, and trains a model corresponding to the prediction. The resulting model, which is significantly smaller but trained on significantly more tokens, outperforms its counterpart, while also being more practical to use downstream due to its smaller size. All in all, this work sheds new light on the way the community thinks about scale in the context of language models, which may be useful in other domains of AI as well. Wed Nov 30 — Poster Session 4
  • Beyond neural scaling laws: beating power law scaling via data pruning by Ben Sorscher, Robert Geirhos, Shashank Shekhar, Surya Ganguli, Ari S. Morcos The importance of high-quality data for achieving good results in machine learning is well known. Recent work on scaling laws has treated data quality as uniform and focused on the relationship between computation and data. This work renews our focus on the importance of selecting high-quality data as a means to achieve optimal scaling. It does so through a nicely designed analytic investigation that develops a theoretical model of the impact of data quality, in concert with an empirical instantiation of a range of data filtering metrics on ImageNet. This work is both insightful and timely and will shape the debate about the tradeoffs in the many dimensions of scale in machine learning. Wed Nov 30 — Poster Session 3
  • On-Demand Sampling: Learning Optimally from Multiple Distributions by Nika Haghtalab, Michael Jordan, Eric Zhao This paper studies multiple distribution learning using techniques from stochastic zero-sum games. The technique yields very interesting theoretical guarantees, achieving near-optimal results for a class of problems. Wed Nov 30 — Poster Session 3
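The compute-optimal trade-off studied in the Hoffmann et al. paper above can be sketched with the widely used approximation C ≈ 6ND (training FLOPs for a model with N parameters trained on D tokens), together with the paper's headline finding that parameters and tokens should grow in roughly equal proportion. The snippet below is an illustrative sketch only; the default of 20 tokens per parameter is an often-quoted rule of thumb consistent with the paper's Chinchilla configuration, not an exact constant from the paper.

```python
# Illustrative sketch (not the paper's actual fitting procedure) of the
# compute-optimal trade-off: training FLOPs are approximated as C ~ 6 * N * D
# for a model with N parameters trained on D tokens, and N and D are scaled
# in equal proportion, per the paper's headline finding. The default ratio of
# 20 tokens per parameter is a commonly quoted rule of thumb, not exact.

def compute_optimal_split(flops_budget: float, tokens_per_param: float = 20.0):
    """Return (n_params, n_tokens) spending `flops_budget` with D = k * N."""
    # C = 6 * N * D with D = k * N  =>  N = sqrt(C / (6 * k)), D = k * N
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

if __name__ == "__main__":
    for budget in (1e21, 5.88e23, 1e25):
        n, d = compute_optimal_split(budget)
        print(f"{budget:.2e} FLOPs -> ~{n:.2e} params, ~{d:.2e} tokens")
```

Under these assumptions, a Chinchilla-like budget of about 5.9e23 FLOPs lands near 70 billion parameters and 1.4 trillion tokens, matching the headline model described in the paper.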

Outstanding Datasets and Benchmarks Papers

  • LAION-5B: An open large-scale dataset for training next generation image-text models by Christoph Schuhmann, Romain Beaumont, Richard Vencu, Cade W Gordon, Ross Wightman, Mehdi Cherti, Theo Coombes, Aarush Katta, Clayton Mullis, Mitchell Wortsman, Patrick Schramowski, Srivatsa R Kundurthy, Katherine Crowson, Ludwig Schmidt, Robert Kaczmarczyk, Jenia Jitsev Studying the training and capabilities of language-vision architectures, such as CLIP and DALL-E, requires datasets containing billions of image-text pairs. Until now, no datasets of this size have been made openly available for the broader research community. This work presents LAION-5B, a dataset consisting of 5.85 billion CLIP-filtered image-text pairs, aimed at democratizing research on large-scale multi-modal models. Moreover, the authors use this data to successfully replicate foundational models such as CLIP, GLIDE and Stable Diffusion, and provide several nearest-neighbor indices, an improved web interface, and scores for watermark, NSFW, and toxic content detection. Wed Nov 30 — Poster Session 4
  • MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge by Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, Anima Anandkumar Autonomous agents have made great strides in specialist domains like Atari games and Go, but typically fail to generalize across a wide spectrum of tasks and capabilities. This work introduces MineDojo, a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse open-ended tasks and an internet-scale knowledge base with Minecraft videos, tutorials, wiki pages, and forum discussions. It also proposes a novel agent learning algorithm that is able to solve a variety of open-ended tasks specified in free-form language. It provides an open-source simulation suite, knowledge bases, algorithm implementation, and pretrained models to promote research on generally capable embodied agents. Tue Nov 29 — Poster Session 2

Test of Time Award

This year, following the usual practice, we considered NeurIPS papers from 10 years ago, and the Program Chairs unanimously selected “ImageNet Classification with Deep Convolutional Neural Networks” by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, known as the “AlexNet paper.” In 2012, it was presented as the first CNN trained on the ImageNet Challenge, far surpassing the state of the art at the time, and it has since had a huge impact on the machine learning community. Geoff will be giving an invited talk on this and more recent research on Thursday, Dec. 1, at 2:30 pm. https://neurips.cc/Conferences/2022/ScheduleMultitrack?event=55869

We again congratulate the award winners and thank the award committee members and the reviewers, ACs, and SACs for nominating the papers. We are looking forward to hearing from the authors of these and all other NeurIPS 2022 papers in New Orleans and on our virtual platform.

Alekh Agarwal, Alice Oh, Danielle Belgrave, Kyunghyun Cho

NeurIPS 2022 Program Chairs

Deepti Ghadiyaram, Joaquin Vanschoren

NeurIPS 2022 Datasets and Benchmark Chairs

Introducing the NeurIPS 2022 Tutorials

by Adji Bousso Dieng, Andrew Gordon Wilson, Jessica Schrouff

We are excited to announce the tutorials selected for presentation at the NeurIPS 2022 conference! We look forward to an engaging program, spanning many exciting topics, including Lifelong Learning, Bayesian Optimization, Algorithmic Discrimination, Neurosymbolic Programming, Data Compression, NLP in Healthcare, and others. In this blog post, we detail our selection process, the program, reflections on submissions, and considerations for future tutorials.

Tutorial programs

Each virtual tutorial will consist of:

  • A presentation by the speakers (1 hour 50 minutes)
  • Live Q&A with the speakers, answering technical or clarifying questions (10 minutes)
  • Live Panel with further researchers in the field to discuss challenges and promises (30 minutes)

There are two notable differences from last year’s programme: a mix of contributed and invited tutorials (rather than only invited), and a live panel.

Speakers for each time slot (times in UTC); where several tutorials run in parallel, their speaker groups are separated by semicolons:

  • 10:00: Eleni Triantafillou
  • 13:00: Pin-Yu Chen, Sijia Liu, Sayak Paul; Tyler Hayes, Dhireesha Kudithipudi, Gido van de Ven
  • 16:00: Swarat Chaudhuri, Armando Solar-Lezama, Jennifer Sun; Ndapa Nakashole; Antonio Vergari, YooJung Choi, Robert Pehar
  • 19:00: Virginia Aglietti, Jacob Gardner, Jana Doppa; Golnoosh Farnadi, Vera Liao, Elliot Creager; Chara Podimata
  • 22:00: Karen Ullrich, Yibo Yang, Stephan Mandt; Negar Rostamzadeh, Anna Huang, Mark Riedl
  • 01:00: Frederic Sala, Ramya Korlakai Vinayak; Hannah Korevaar, Manish Raghavan, Ashudeep Singh

Selection process

This year, we have experimented with a “contributed only” design (see the related blog post). Our hope was to obtain a “community-led” selection of topics and speakers while emphasising diversity both across and within tutorials. Our call for proposals had clear guidelines for the selection of topics, speakers, panellists, format, etc.

We received 34 submissions by the (strict) deadline. Each submission was reviewed by two Tutorial Chairs based on interest and expertise on the topic. Each chair gave a score between 1 (strong reject) and 10 (strong accept) to encapsulate their overall impression of the proposal. We then shortlisted the submissions that had received a 6 or higher from at least one chair (14 proposals out of 34) and discussed cases where the initial two reviews disagreed. A third review was then obtained from a different Tutorial Chair to finalise the decision to accept or reject a proposal. We accepted 9 proposals through this process.
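The two-stage shortlisting described above can be sketched as follows. The proposal IDs and scores are invented for illustration, and the 3-point disagreement threshold is a hypothetical choice (the post does not specify how "disagreement" between the two initial reviews was quantified):

```python
# Hypothetical sketch of the shortlisting logic: proposals scoring 6+ from at
# least one chair are shortlisted; shortlisted proposals whose two initial
# scores disagree strongly get a third review. Scores here are invented.

def shortlist(reviews: dict[str, tuple[int, int]]) -> list[str]:
    """Shortlist proposals that received a score of 6 or higher from a chair."""
    return [pid for pid, scores in reviews.items() if max(scores) >= 6]

reviews = {
    "proposal-A": (8, 7),   # both chairs positive
    "proposal-B": (6, 3),   # disagreement between the two chairs
    "proposal-C": (4, 5),   # below the bar for both chairs
}

shortlisted = shortlist(reviews)
# Assumed threshold: a gap of 3+ points counts as a disagreement.
needs_third_review = [p for p in shortlisted if abs(reviews[p][0] - reviews[p][1]) >= 3]
print(shortlisted)          # ['proposal-A', 'proposal-B']
print(needs_third_review)   # ['proposal-B']
```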

Some relatively common reasons for low scores included (but were not limited to):

  • The topic is too niche for a very broad audience.
  • The topic has been presented in recent tutorials in major machine learning conferences.
  • The speakers have recently contributed to major ML conferences as tutorial and/or keynote speakers.
  • Low diversity, broadly construed.
  • Guidelines were not followed (e.g. no panel included).

While no feature in particular would guarantee acceptance, certain features were often present in proposals that were favourably reviewed:

  • The proposal was highly polished. Significant effort and thought had gone into carefully organising and planning the tutorial, paying close attention to instructions, with few loose ends. These features suggested that the presentation itself would be carefully planned, avoiding last-minute organisation, logistical mishaps, etc.
  • The presenters had demonstrated significant commitment, contributions, and expertise in the chosen topic. 
  • The topic would be both fresh as a tutorial and have a relatively broad appeal.
  • Diversity, broadly construed. For example, speakers and panels with diverse perspectives on the material.

Given the manageable workload, we did not desk-reject proposals for not following guidelines. In the future, desk rejections might be considered (e.g. multiple proposals were 10-12 pages long instead of the required 5).

Diversity in submitted proposals

Each proposal included 1 to 3 speakers, and up to 6 panellists. Across all submissions, there were a total of 85 speakers and 160 confirmed panellists (with some overlap with speakers). We had explicitly asked tutorial presenters to consider diversity in terms of (non-exhaustively) gender, race, geographical location, institution, background and expertise, and to write a diversity statement. Our goals were (1) to ensure that a diverse set of opinions were considered, and (2) to include members from under-represented groups in the field in this program.

According to the diversity statements, researchers mostly focused on background, expertise and geographical location as diversity dimensions. We note that geographical locations in proposals were mostly limited to Western Europe, the US and Canada. It therefore seems that researchers only partly understood our first goal, and only a few proposals satisfactorily addressed the second.

Aspects of gender and race or ethnicity were rarely addressed explicitly in diversity statements, and sometimes diversity was highlighted where it was not clearly present. This “lip-service” diversity led to the following results:

  • Men were overrepresented as speakers, with 17 out of 34 proposals including only male speakers.
  • Asian (South and East) and White researchers were overrepresented.
  • Diversity was more often addressed in the panel than among the speakers.

To illustrate these impressions, we tried to identify each speaker and confirmed panellist according to their perceived gender (based on pronouns in the proposal), perceived race (from the proposal where available, otherwise from a combination of CV, online information and picture as a last resort), institution, and seniority level (early career: PhD student or early postdoc, 0-3 years post-PhD; mid-career: Assistant Prof, 3-10 years post-PhD; senior: Prof, 10+ years post-PhD). We acknowledge that this classification is somewhat arbitrary and does not fully reflect the gender and racial identities of the speakers and panellists. However, we believe it is important to provide an approximate quantification of the consequences of our design choices.

Figure 2: Perceived gender (pie chart) and race (bar plot) distribution of speakers across all submitted proposals. MENA stands for Middle East and North Africa. Note that percentages in the bar plot might not sum to 100 as race information might not be identifiable from the proposal or online information.

We observe that men (he/his pronouns) represent more than 75% of the proposed speakers. White and Asian speakers represent more than 90% of the speakers. Proposals mostly came from academia and included more academics than industry researchers (26 of the 85 speakers were from industry). Seniority levels were well balanced (early: 26, mid: 33, senior: 26).

Diversity restricted to the panel

Diversity in terms of perceived gender improved slightly in the panels (Figure 3), but panels remained dominated by men. Similarly, Asian and White researchers still represented more than 80% of the panellists. Interestingly, the panels included fewer researchers from industry (31 out of 160, i.e. ~19%) and skewed more towards senior researchers (early: 26, mid: 47, senior: 81). Overall, we see that diversity improves relative to the speakers, but remains low.

Figure 3: Perceived gender and race distribution in the panels of submitted proposals.

As a note, we would like to highlight that 3 proposals had made particular efforts in terms of diversity. These efforts, combined with strong proposals and timely topics, led to 2 of these proposals being accepted (the third being ineligible). These authors show that it is possible to propose a diverse set of speakers and panellists across all dimensions.

Finally, we assessed diversity across other dimensions, such as disability or being part of the LGBTQIA+ community. For privacy reasons, we do not communicate these numbers.

Improving on the quality of the program

We contacted the authors of accepted tutorials and worked with them in cases where we believed the program could be improved, in terms of organisation, scientific content, and diversity. Where appropriate, we also encouraged the speakers to rethink the format of the proposed tutorial to account for the online edition.

As we had initially planned for ~12-15 tutorials, we had the opportunity to invite tutorial speakers. We invited speakers by identifying researchers who have demonstrated excellence and expertise in a specific topic and who would benefit from the opportunity. We considered aspects of diversity in our selection to prioritise researchers from under-represented groups and to maximise the diversity in topics.

Thanks to the responsiveness of invited speakers and to the work of authors of submitted proposals, we are able to provide an exciting list of topics. While we also increased the diversity of speakers and panellists, there is still room for improvement.

Figure 4: Perceived gender and race distribution across speakers and panellists after proposals were revised and speakers were invited.

Considerations for future editions

  • Carefully review the topics of tutorials at major ML conferences in the past 3 years. Topics that are overlapping are unlikely to be selected unless the tutorial brings a significantly novel point of view or extension.
  • Read and follow the guidelines. This might seem obvious, but multiple proposals were rejected because they included speakers who were ineligible, did not include a panel, etc. While we did not desk-reject proposals this year, we did notice that the proposals we accepted mostly followed the guidelines. This simply highlights that the authors carefully considered the different requirements, wrote and proof-read their submission, and submitted on time. This increases the chances of acceptance.
  • Diversity should be considered across all aspects, and proposals should include voices from under-represented groups. Proposals with 6 or 7 participants that all identify with masculine pronouns are unlikely to be accepted. Refer to directories from affinity groups (e.g. https://www.directory.wimlworkshop.org/ , https://lxai.app/PUBLIC-DIRECTORY ), request recommendations from more senior researchers in the field, and consider non-Western institutions.
  • If you would like to propose a topic for a tutorial but are ineligible, please pass the opportunity to someone else! Consider encouraging others in the field to submit a proposal.

Speakers and panellists

A proposal might stem from one or a couple of researchers who will then invite other speakers and panellists. These co-presenters and panellists also have an important role to play:

  • We observed that some panellists were considered as confirmed in multiple proposals. These were often more senior researchers. If you do get invited for multiple opportunities, please consider suggesting other researchers instead. We had strict guidance that every speaker and panellist could only be considered for one tutorial.
  • Similarly, if you cannot participate, please suggest other researchers and think about researchers from under-represented groups who would benefit from the opportunity.
  • If you see that the list of speakers and panellists is not very diverse or dominated by groups that are already over-represented in the field, reach out to the authors and ask them to modify the list.

Members from under-represented groups

While the burden of defining a diverse program should not fall on members of under-represented groups, there are a couple of steps that can be taken to increase visibility:

  • Create a website, fill in (and update) your personal page on your institution’s or company’s website or make public profiles on LinkedIn, DBLP, ResearchGate,  Google Scholar, …
  • If you identify with an affinity group and this group has a directory, consider creating a profile. Examples include: https://www.directory.wimlworkshop.org/ , https://lxai.app/PUBLIC-DIRECTORY
  • Consider posting the recordings of previous talks
  • We recommend affinity groups to create open-source repositories of their members (where feasible) such that organisers can identify potential speakers to invite. 
  • Submit a proposal!

Web presence helps authors, speakers, panellists and tutorial organisers find your profile to consider you for the opportunity. Without this information, it is difficult to estimate whether someone has the breadth of experience and communication skills that would make a tutorial successful.

Tutorial organisers

We are of course not exempt from improvements. Some of the learnings we take for future editions include:

  • More proactively reach out to potential speakers to encourage them to submit a proposal. This includes repeated postings on mailing lists such as those from affinity groups, as well as directly reaching out to researchers.
  • Earlier invitations of invited tutorials.
  • Clearer guidelines, e.g. explaining the goals we are trying to achieve with the diversity statement.
  • Having a clear set of expectations and benefits for tutorial speakers and panellists.

NeurIPS organisers and board

Tutorial speakers provide significant content for the conference. When they come from under-represented groups, they could be better supported such that they can submit a proposal or accept this opportunity. Bottlenecks we have identified include:

  • No funding opportunity for tutorial speakers if they wanted to attend the in-person component of the conference. 
  • No honorarium for speakers. For some speakers from under-represented groups, the exposure that a tutorial provides does not compensate for the toll that building such a program takes as this is time not devoted to research or grant applications.
  • No opportunity for NeurIPS contributors to self-report their demographic characteristics.

The NeurIPS organisers and Board have been receptive to these requests and have now granted:

  • Tutorial speakers receive an in-person or virtual registration.
  • Thanks to DIA Chairs, tutorial speakers will also be considered in priority when applying for NeurIPS travel funding (previously limited to students and authors).
  • Each tutorial will receive an honorarium (to be split across speakers).
  • Self-reporting requires more consideration, especially given the laws regarding demographic surveys in different countries. It is being discussed for future meetings.

We are thankful to the organisers (in particular the General and DIA Chairs) and the Board for these measures. We believe they will help in providing an exciting and diverse set of tutorials in future editions. 

We are extremely excited about the programme, and look forward to seeing you at the tutorials! 

Reflections on the NeurIPS 2022 Ethics Review Process

By the NeurIPS 2022 ethics review chairs: Sasha Luccioni, Inioluwa Deborah Raji, Cherie Poland, and William Isaac

TL;DR: The 2022 ethics review process is done – come discuss the process and related considerations with us at the Ethics Review Open Discussion on Tuesday, November 29th at NeurIPS!

With the 2022 decision process behind us and as this year’s conference approaches, we wanted to take this opportunity to reflect on the 2022 NeurIPS ethics review process.

The ethics review process was first introduced at the 2020 NeurIPS conference, implemented as a step towards improving the ethical awareness and engagement of NeurIPS authors and reviewers in order to inspire overall improvements to ethical research conduct, practice and reflection throughout the field, especially for those participating and presenting at the conference. 

While the first year of the process was a pilot, last year was focused on operating the process at scale. This year’s main objective focused on consistency: incorporating the successful components from the previous editions of the ethics review process to reinforce its reliability and applicability for a conference of this size, further solidifying concrete policies to establish a coherent process moving forward.

Updates from the 2022 Ethics Review Process

The process saw updated Ethics Review Guidelines, which included new considerations regarding the misuse of ML algorithms to produce contradictory results, as well as the addition of a list of deprecated datasets to allow both authors and reviewers to check the status of training datasets and understand the different issues that may arise. The ethics reviews were not designed to be punitive or exclusionary. Rather, they were designed to inform, educate, and shed light on ethical concerns so authors could address these issues through an open discussion.

This year also saw the release of the first draft of the NeurIPS Provisional Code of Ethics, which aims to provide the community with more thorough ethical guidelines and expectations for the conference. As such, research integrity issues, including plagiarism, that were identified during the review process were remanded to the Program Chairs.

Overview of the Ethics Review Process

Main NeurIPS track

We allowed technical reviewers and area chairs (ACs) to flag papers that they found to have ethical issues based on a list provided for guidance. 

To help review these papers, we invited 328 individuals with diverse backgrounds and expertise in AI ethics to take part in the ethics review process. In total, 128 people agreed to participate. 

The categories of ethics reviewer expertise included:

  • Discrimination / Bias / Fairness Concerns
  • Data and Algorithm Evaluation
  • Inappropriate Potential Applications & Impact (i.e. human rights concerns)
  • Privacy and Security (consent, etc.)
  • Legal Compliance (GDPR, copyright, terms of use etc.)
  • Research Integrity Issues (i.e. plagiarism, etc.)
  • Responsible Research Practice (i.e. IRB, documentation, research ethics, etc.)

Paper reviews were conducted in Open Review, and reviewers were assigned algorithmically in a blinded fashion after preliminary conflict checks were performed, with many of the reviewers reviewing in multiple categories. Once the ethics reviews were completed, they were made available to the authors and technical reviewers so that discussions of concerns could be handled through open dialogue.

Handling false positives (flagged papers that did not have ethical issues): Of the 419 main track papers flagged for ethics review, no sub-category issues were flagged in 115 of them. This necessitated a manual review of all 115 papers to identify the potential ethical concerns. Of these, 103 papers had no apparent ethical issues. 

Handling false negatives (papers with ethical concerns unflagged by technical reviewers): There were also papers with clear ethical issues that were not surfaced by the primary reviewers. As in the previous year, these papers were identified through a keyword search for especially challenging topics that had required additional ethical scrutiny in the past (e.g. keywords such as surveillance, facial recognition, and biometric data generation).
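The keyword screen for false negatives might look like the sketch below. The keyword list comes from the examples above, but the paper titles are invented, and a real screen would search full submissions rather than titles alone:

```python
# Illustrative sketch of the keyword screen used to surface potential false
# negatives. The keywords are the examples given in the post; the paper
# titles are invented, and an actual screen would search full submissions.

SENSITIVE_KEYWORDS = {"surveillance", "facial recognition", "biometric"}

def flag_for_ethics_review(text: str) -> bool:
    """Flag a submission if it mentions any especially challenging topic."""
    lowered = text.lower()
    return any(keyword in lowered for keyword in SENSITIVE_KEYWORDS)

papers = [
    "A new optimizer for sparse transformer training",
    "Large-scale facial recognition under occlusion",
    "Biometric data generation with diffusion models",
]
flagged = [p for p in papers if flag_for_ethics_review(p)]
print(flagged)  # the last two titles
```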

 

[Figure: main-track ethics review counts: 419 papers flagged, 115 of them without a flagged sub-category, and 103 of those with no apparent ethical issues; 81 papers were flagged in the Datasets and Benchmarks track.]

The Datasets and Benchmarks Track

While fewer papers were submitted to the Datasets and Benchmarks Track than to the main track, the proportion of papers with ethical concerns was higher: 81 papers were flagged for ethics review, with 31 having confirmed ethical issues. The ethical challenges arising from datasets include issues of participant consent, privacy, anonymity, biometrics, data storage, and web scraping; all of these concerns were raised in this year’s ethics review process. Concerns about the risk of harm and deprecation of the datasets, due to historical problems, were discussed at length among the technical and ethics reviewers. The ethics reviewers often recommended improved datasheet documentation, shedding light on issues such as consent, privacy, and third-party oversight of data collection processes and procedures.

Of the 31 submissions with confirmed ethical issues, two were minor, 25 were serious, and four were severe enough for the ethics chairs to recommend rejection or conditional acceptance upon additional review and deliberation between the ethics chairs. The decision to recommend rejection or conditional acceptance on ethical grounds was not taken lightly and was made only after considerable open discussion (as seen on the Open Review pages of papers whose authors opted for their reviews to remain public). In these cases, the ethics chairs provided the area chairs with written justifications in support of their joint recommendations. The final decision on whether to accept or reject these papers was left to the track chairs.

Cross-cutting ethical issues

Issues pertaining to the utilization of Institutional Review Boards (IRBs) arose on several occasions in both the Main Track and the Datasets and Benchmarks Track. Concerns were related to ethical oversight of data collection, informed consent, ability to withdraw from participation in the dataset, data privacy, cross-border uses of data (global public availability), licensing, and copyright law issues. However, no requirements were made to use IRBs as a method of third-party oversight because the availability and access to IRBs as an oversight mechanism varies greatly between countries. 

Moving forward, it is crucial for the conference and the larger community to consider diverging international ethical standards, as reviewers raised concerns about how to equally and equitably address ethical standards for future conferences when laws, regulations, and ethics differ by country. The focus should continue to be placed on the technical merits with attention to ethical implications and impact, without unnecessarily burdening authors to follow rigid ethical protocols. It is therefore important to establish norms to guide and educate about potential ethical harms and dual uses, rather than to impose penalties on authors for their work in advancing technically relevant and important research.

Community feedback

During the ethics review process, we received insightful feedback from technical reviewers, ethics reviewers and ACs that we deemed relevant to share with the NeurIPS community:

  • Lack of clarity around the proper use of the ethics flag: technical and ethics reviewers noted that more information was needed regarding the purpose of the ethics flag and how to use it. The way in which the process was set up did not enable technical reviewers to add comments or reasons for flagging beyond the categories provided, which sometimes made it difficult for ethics reviewers to pinpoint the issue.  
  • Providing more support for technical reviewers: We also received several suggestions to add an FAQ to the technical reviewer guidelines with examples of how and when to use the ethics flag, and to make the comment text box a required field so that technical reviewers must provide their reason for flagging a paper. This year, over 112 papers in the main track were marked with an ethics flag without explanation or justification.
  • The difference between ethical guidelines and norms: While we endeavored to cover a large scope of potential issues and problems in the updated ethical guidelines, they remained high-level and did not address all possible use cases and applications. For instance, several debates cropped up regarding whether it was acceptable to download data from public repositories to train ML models and how copyright can/should be enforced. 
  • The potential influence of paper visibility/promotion on reviewers’ evaluation: Given that NeurIPS doesn’t have an explicit policy regarding the use of social media and press releases before/during the review period, it was proposed that paper visibility could potentially influence reviewer evaluation. 

Other minor points that were raised:

  • Visibility of ethics reviews: Determining whether ethics reviews should be made visible to the public, authors and reviewers from the outset can be a non-obvious challenge, as it dictates the level of awareness and quality of communication between these stakeholders. There is a clear educational benefit in making reviews publicly available to the community. However, it could potentially lead to unhelpful public engagement in the review process.
  • Communication between ER chairs and reviewers: There should be a clear articulation of guidelines on what reviewers are expected to flag and an established communication channel between ACs and reviewers to ensure they effectively comprehend the role of ethics reviews to appropriately solicit and react to the ethics reviews content. 
  • Ethics “office hours”: Given the lack of clarity around the scope and purpose of ethics reviews, having regular “office hours” to answer questions from both the ethical and technical reviewers could be useful to both groups.
  • Better matching/reasons for ethics reviews: While the categories provided for flagging the papers aimed to cover many different ethical aspects, they did not enable easy matching between papers and reviewers. Adding more categories (and modifying existing ones) could help better match reviewer expertise to the concrete issues present in a paper in both conference tracks.
  • Lack of clarity around the role of ethics reviews: It was brought to our attention that the purpose of the ethics review wasn’t always clear to both technical reviewers and the broader community. For instance, whether ethical issues alone could constitute a reason for rejecting an otherwise technically sound submission was a question that was raised on Open Review. We hope to discuss this question with conference attendees at our Ethics Open Discussion that we will organize on Tuesday, November 29th.

Retrospectives on the ethics review process

Since starting this review process, the broader community has shown enthusiasm and interest in these efforts, including some academic reflection about the process and possible improvements. Similar major institutional ethics review efforts have been launched at places such as Microsoft, DeepMind and Stanford in order to mediate project approval and ethical consideration, citing the NeurIPS ethics review process as an impetus for their internal efforts.

Furthermore, several great retrospective studies on the design of past NeurIPS ethics review processes have now been published, including an examination of past broader impact statements , and a review of last year’s move to the checklist , in addition to how-to guides for the research community to inform their thinking and taxonomize research ethics challenges. 

We hope these efforts serve as evidence that since the introduction of ethical oversight practices at NeurIPS, there has been a growing interest and uptake of further ethical reflections as part of the research process in machine learning. In addition, these specific efforts and others highlight the broader community’s participation in actively informing the next steps as it relates to this process through their feedback and adoption of recommended practices.

Further questions? Come to our Ethics Review Open Discussion at NeurIPS on November 29, or send an email to [email protected]!

Getting Ready for NeurIPS (2): Location, Facilities, Safety

From the General Chairs, Sanmi Koyejo and Shakir Mohamed

As a conference, NeurIPS has a commitment to creating more accessible and inclusive spaces, and to do this as a priority while staying within our means. This commitment includes reducing barriers to participation through programs for financial assistance, accounting for dietary requirements, facilities for parents’ rooms, child care, hearing loops, and other accommodations for the in-person conference, and having globally accessible platforms and tools for the virtual conference.

Travelling to an in-person conference continues to be an important and highly personal decision, and this post describes some of the facilities and safety considerations to inform your time at the conference. If you have not yet done so, register here for the in-person or online conference.

As a condition of registration, all attendees are expected to be familiar with the code of conduct and to abide by its conditions.

Venue Facilities

In anticipation of the conference’s growing size and the broad interest in attending, the 2022 conference is hosted in the award-winning New Orleans Ernest N. Morial Convention Center, the sixth-largest convention facility in the United States. We are using half the venue this year, which gives us sufficient room for uncrowded activities.

Poster sessions. We have one large poster hall this year. All posters are spaced six feet apart to allow for easy movement and to prevent crowding.

Accessibility. The conference offers all the facilities expected, including a prayer room, a nursing room, childcare, and gender-neutral toilets; the venue is wheelchair accessible and can accommodate other needs (e.g., guide dogs). Streams have captioning and remain available on the website afterwards for asynchronous viewing. If you have specific needs that you think we may not have accounted for, please let us know and we will do what we can to meet them.

Health and Safety

We have several components related to health and safety for the in-person conference.

Security and policing. Access to the facility requires a conference badge, which will be checked at all entrances by on-site security. The city is also aware of the conference, and there is generally an increased police presence around the venue during large events.

Medical emergencies. All attendees should ensure that they have appropriate travel insurance. As always, we have on-site paramedics for the entirety of the conference in case of medical emergencies. New Orleans has a robust health system, with three hospitals within the LCMC Health system located within two miles of the Ernest N. Morial Convention Center. One of those hospitals, University Medical Center, is the regional Level 1 Trauma and Burn Center; another, Touro Infirmary, is the LCMC Health system’s high-risk OB center and provides other advanced therapies.

Pregnancy. We have formally discussed concerns raised about access to reproductive healthcare at the conference location with the President and Chief Medical Officer of LCMC Health, the nonprofit network of healthcare providers in Southern Louisiana based in New Orleans. Hospitals in New Orleans will continue offering patient care, including birth control, emergency contraception, and care for miscarriages and ectopic pregnancies. The Dobbs ruling and the subsequently enacted Louisiana trigger law have not changed the ability to offer these services.

COVID-19. We continue to be mindful of the spread and risks of COVID-19 infection. The conference strongly encourages vaccination and boosters, distancing as much as possible to reduce spread, and the use of FFP2/N95 masks during indoor events such as keynotes and poster sessions. We have allocated spaces to ensure distancing is possible. Regular testing is encouraged, and tests can be obtained from local pharmacies. Face shields will be available for poster presenters, and hand sanitizer and masks will be available for attendees.

Reporting Concerns

During both the in-person and virtual conference, behaviour that violates the code of conduct should be reported as soon as possible; please refer to the Code of Conduct for the reporting process. The conference has a robust reporting process that ensures confidentiality. All cases are handled by an independent consultant who has worked with the conference for many years and specializes in conflict resolution and participant relations.

We encourage you to join the conference both in person and online, and to register if you have not yet done so. All the organizing teams are now hard at work bringing the final details of the conference together. You can find additional information on accessibility, health, and safety on the NeurIPS visiting New Orleans page.

Next week, our third and final post in this series will dig into specific highlights of the conference. For an overview of this year’s conference format, see last week’s post.

P.S. Our best wishes for a spooky and autumnal week ahead. This post was written while listening to Bourbon Street Parade. Tweet our content to encourage a wide gathering of our community, and plan your list of people to reconnect with at the conference.
