ML@GT at NeurIPS 2019

December 8-14, 2019
Vancouver, British Columbia

ML@GT Displays Diverse Research Interests at NeurIPS 2019

With 30 papers to present, the Machine Learning Center at Georgia Tech (ML@GT) will make a strong showing at this year's Neural Information Processing Systems (NeurIPS) conference held Dec. 8-14 in Vancouver, British Columbia.
 
The conference fosters the exchange of research on the theoretical, technological, biological, and mathematical aspects of neural information processing systems. ML@GT's research spans all of these categories, including work on neural data, fairness in machine learning algorithms, and teaching artificial intelligence to work in changing environments.
 
“NeurIPS continues to be an exciting conference to attend because of the diverse research presented each year. It is one of the most sought-after and anticipated conferences, and it’s great to see ML@GT have such a variety of accepted papers,” said Tuo Zhao, an assistant professor in the H. Milton Stewart School of Industrial and Systems Engineering (ISyE). Zhao has three accepted papers at this year's conference.
 
NeurIPS also continues to be a hotspot for major technology companies like Google, Microsoft, and Facebook to recruit new talent.

 

Research Highlights

We hate to brag (or do we?), but our students and faculty are producing some pretty cool research. Here are recaps of just a few of our papers at NeurIPS 2019.

Georgia Tech Researchers Explore New Ways to Overcome Robotic Mobility Limitations
By Peter Anderson, Ayush Shrivastava, Devi Parikh, Dhruv Batra, Stefan Lee

Researchers employ two new methods to help robots navigate with less supervision and instruction from humans. Work like this could help address climate change and cut down on time wasted traveling to and from meetings.

Learn more about this work at NeurIPS on Tues., Dec. 10 during the poster session from 10:45 a.m. - 12:45 p.m. in East Exhibition Hall B+C at Poster #209.

Making Sure Computing Machines Don't Stereotype People

By Uthaipon Tantipongpipat, Samira Samadi, Mohit Singh, Jamie Morgenstern, and Santosh Vempala

Our researchers updated principal component analysis (PCA) once to make it more fair, and they have done it again! The improved algorithm takes more criteria into account, reducing bias and increasing transparency when analyzing various populations.

Check out this work on Thurs., Dec. 12 during the spotlight presentation at 10:40 a.m. in West Ballroom C, or at the poster session from 10:45 a.m. - 12:45 p.m. in East Exhibition Hall B+C at Poster #80.

Making Artificial Intelligence Work in a Changing Environment
By Adrian Rivera Cardoso, He Wang, and Huan Xu 

Cardoso, Wang, and Xu tackle decision problems with a massive number of states in a nonstationary environment. This work could help AI agents better perform a broad range of tasks: recommending movies, driving cars, reducing electricity consumption, and more.

Learn more about it on the blog or on Wednesday, Dec. 11 from 5:00 - 7:00 p.m. at Poster #52.

Explaining “Optimal” Nonparametric Regression on Low Dimensional Manifolds using Deep Neural Networks
By Minshuo Chen, Haoming Jiang, Wenjing Liao, and Tuo Zhao
Exploring the intersection between theoretical machine learning and applied mathematics, this recent work from Georgia Tech is for anyone who loves math. 

Learn more about it on the blog or on Tuesday, Dec. 10 from 10:45 a.m. - 12:45 p.m. at Poster #52.

Explaining Blended Matching Pursuits: A Multi-Purpose AI Algorithm
By Cyrille Combettes and Sebastian Pokutta

With the potential to be used in market prediction, recommender systems, or object detection, the latest work from Combettes and Pokutta is one you don't want to miss. 

Learn more about it on the blog or on Tuesday, Dec. 10 from 5:30-7:30 p.m. at Poster #105.

Georgia Tech Papers

Online Learning via the Differential Privacy Lens
Jacob Abernethy, Young Hun Jung, Chansoo Lee, and Audra McMillan

Learning Auctions with Robust Incentive Guarantees
Jacob Abernethy, Rachel Cummings, Bhuvesh Kumar, Sam Taggart, and Jamie Morgenstern

ViLBERT: Pretraining Task-Agnostic Visiolinguistic Representations for Vision-and-Language Tasks
Jiasen Lu, Dhruv Batra, Devi Parikh, and Stefan Lee


Hierarchical Optimal Transport for Multimodal Distribution Alignment
John Lee, Max Dabagia, Eva Dyer, and Chris Rozell


On the Global Convergence of Actor-Critic: A Case for Linear Quadratic Regulator with Ergodic Cost
Zhuoran Yang, Yongxin Chen, Mingyi Hong, and Zhaoran Wang

RUBi: Reducing Unimodal Biases in Visual Question Answering
Remi Cadene, Corentin Dancette, Hedi Ben-younes, Matthieu Cord, and Devi Parikh

Chasing Ghosts: Instruction Following as Bayesian State Tracking
Peter Anderson*, Ayush Shrivastava*, Devi Parikh, Dhruv Batra, and Stefan Lee (* equal contribution)


Cross-Channel Neuron Communication Networks
Jianwei Yang, Zhile Ren, Hongyuan Zhu, Ji Lin, Chuang Gan, and Devi Parikh


Large Scale Markov Decision Processes with Changing Rewards
Adrian Rivera Cardoso, He Wang, and Huan Xu


Bayesian Meta-network Architecture Learning
Albert Shaw, Bo Dai, Weiyang Liu, and Le Song

Exponential Family Estimation via Adversarial Dynamics Embedding
Bo Dai, Zhen Liu, Hanjun Dai, Niao He, Arthur Gretton, Le Song, and Dale Schuurmans


Retrosynthesis Prediction with Conditional Graph Logic Network
Hanjun Dai, Bo Dai, Chengtao Li, Connor Coley, and Le Song


Neural Similarity Learning
Weiyang Liu, Zhen Liu, James Rehg, and Le Song

Learning Positive Functions with Pseudo Mirror Descent
Yingxiang Yang, Haoxiang Wang, Negar Kiyavash, and Niao He

Decentralized Sketching of Low Rank Matrices
Rakshith Sharma Srinivasa, Kiryung Lee, Marius Junge, and Justin Romberg


 
Value Propagation for Decentralized Networked Deep Multi-Agent Reinforcement Learning
Chao Qu, Shie Mannor, Huan Xu, Yuan Qi, Le Song, and Junwu Xiong

Blended Matching Pursuit
Cyrille W. Combettes and Sebastian Pokutta

Meta-Learning with Relational Information for Short Sequences
Yujia Xie, Haoming Jiang, Feng Liu, Tuo Zhao, and Hongyuan Zha


Spherical Text Embedding
Yu Meng, Jiaxin Huang, Guangyuan Wang, Chao Zhang, Honglei Zhuang, Lance Kaplan, and Jiawei Han

A Unified Variance-Reduced Accelerated Gradient Method for Convex Optimization
Guanghui Lan, Zhize Li, and Yi Zhou

Faster Width-Dependent Algorithm for Mixed Packing and Covering LPs
Digvijay Boob, Saurabh Sawlani, and Di Wang

Multi-Criteria Dimensionality Reduction with Applications to Fairness
Uthaipon Tantipongpipat, Samira Samadi, Mohit Singh, Jamie Morgenstern, and Santosh Vempala


Practical Differentially Private Top-k Selection with Pay-what-you-get Composition
David Durfee and Ryan Rogers

Towards Understanding the Importance of Shortcut Connections in Residual Networks
Tianyi Liu, Minshuo Chen, Mo Zhou, Simon S. Du, Enlu Zhou, and Tuo Zhao

Rapid Convergence of the Unadjusted Langevin Algorithm: Log-Sobolev Suffices
Santosh Vempala and Andre Wibisono

Efficient Approximation of Deep ReLU Networks for Functions on Low Dimensional Manifolds
Minshuo Chen, Haoming Jiang, Wenjing Liao, and Tuo Zhao

Fast, Provably Convergent IRLS Algorithm for p-norm Linear Regression
Deeksha Adil, Richard Peng, and Sushant Sachdeva

Enabling Hyperparameter Optimization in Sequential Autoencoders for Spiking Neural Data
Mohammad Reza Keshtkaran and Chethan Pandarinath

Addressing Sample Complexity in Visual Tasks Using HER and Hallucinatory GANs
Himanshu Sahni, Toby Buckley, Pieter Abbeel, and Ilya Kuzovkin

Migration through Machine Learning Lens - Predicting Sexual and Reproductive Health in Young Migrants
Amber Nigam, Pragati Jaiswal, Uma Girkar, Teertha Arora, and Leo A. Celi

ML@GT NeurIPS Luncheon

Wednesday, December 11
12:45 - 2:15 p.m.

TAPshack Coal Harbour

1199 W Cordova St, Vancouver, BC V6E 4R5, Canada

All ML@GT students, faculty, and alumni are welcome to join ML@GT for lunch.
The restaurant is about a 2-minute walk from the convention center. 


Please RSVP here no later than December 3.

Live NeurIPS Photo Gallery

Check here daily for live photo updates from the conference.

About ML@GT

WHO WE ARE

 
The Machine Learning Center was founded in 2016 as an interdisciplinary research center (IRC) at the Georgia Institute of Technology. Since then, we have grown to include over 190 affiliated faculty members and 60 Ph.D. students, all publishing at world-renowned conferences. The center aims to research and develop innovative and sustainable technologies using machine learning and artificial intelligence (AI) that serve our community in socially and ethically responsible ways.

AREAS OF EXPERTISE

 
Our world-class faculty and students specialize in areas including, but not limited to:
  • Computer Vision
  • Natural Language Processing
  • Robotics
  • Deep Learning
  • Game Theory
  • Neuro Computing
  • Ethics and Fairness
  • Artificial Intelligence
  • Internet of Things
  • Machine Learning Theory
  • Systems for Machine Learning
  • Bioinformatics
  • Computational Finance
  • Health Systems
  • Information Security
  • Logistics and Manufacturing 

JOIN THE
CONVERSATION

Use #NeurIPS2019 and #MLatGT
Story by Allie McFadden 
Photography by Allie McFadden, Lee Robinson, Terence Rushin