
ICML 2018: State Space Gaussian Processes with Non-Gaussian Likelihood

A few weeks ago the International Conference on Machine Learning (ICML 2018) concluded. It is one of the top three conferences in machine learning and artificial intelligence in the world. This year it was held in Stockholm, at the huge Stockholmsmässan exhibition center; a venue of exactly that scale was needed to accommodate more than 5,000 participants. With the recent advances in Artificial Intelligence (AI) and its increasing penetration into industry, the top machine learning conferences such as NIPS, ICML, ICLR and KDD are attracting rapidly growing attention. The number of participants grows every year (last year ICML had fewer than 4,000 participants), and registrations sell out several months before the deadline.

My name is Alexander Grigorevskiy and I work as an AI Scientist at Silo.AI. At the conference I presented work done jointly with researchers from Aalto University and Philips Research. Our paper, “State Space Gaussian Processes with Non-Gaussian Likelihood”, provides a fast approach to learning Gaussian process models for time series with non-Gaussian observations. For instance, we analyzed airline accidents over a 100-year span, treating the accidents as count data. The method also applies to time series with several discrete output values, or with a robust likelihood. I also hosted a webinar on this research; see it below.
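To give a flavour of the state-space view our paper builds on: a GP with a Matérn-1/2 (exponential) kernel is equivalent to an Ornstein–Uhlenbeck process, so GP regression can be done in O(n) time with a Kalman filter instead of the usual O(n³) kernel-matrix inversion. Below is an illustrative NumPy sketch of this idea for the plain Gaussian-likelihood case (the function name and parameters are mine, not from the paper); the paper's contribution concerns the non-Gaussian case, where the exact Gaussian update step is replaced by an approximate one.

```python
import numpy as np

def kalman_gp_regression(t, y, lengthscale=1.0, variance=1.0, noise=0.1):
    """O(n) GP regression with a Matern-1/2 (exponential) kernel,
    using its exact state-space (Ornstein-Uhlenbeck) representation.
    Returns filtered posterior means and variances at the inputs t."""
    m, P = 0.0, variance                 # stationary prior state
    means, variances = [], []
    t_prev = t[0]
    for tk, yk in zip(t, y):
        dt = tk - t_prev
        A = np.exp(-dt / lengthscale)    # discrete-time transition
        Q = variance * (1.0 - A**2)      # matching process noise
        m, P = A * m, A**2 * P + Q       # predict step
        S = P + noise                    # innovation variance
        K = P / S                        # Kalman gain
        m = m + K * (yk - m)             # Gaussian update step --
        P = (1.0 - K) * P                # replaced by an approximation
        means.append(m)                  # when the likelihood is
        variances.append(P)              # non-Gaussian
        t_prev = tk
    return np.array(means), np.array(variances)
```

A smoothing pass (running backwards over the filtered estimates) would give the full GP posterior at all points; the filter above already shows where the O(n) cost comes from.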

One organizational detail I liked at the conference is that every accepted paper must be presented as a poster, even if it is also presented orally. So if you discover an interesting paper at an oral session, you can come to the poster session in the evening and ask the authors questions. This was the case with my paper: during the day I gave a 20-minute oral presentation, and in the evening I spent three hours explaining the details of our work to curious researchers. The day was quite exhausting, but I also felt deep satisfaction. This, in my opinion, is the proper way to present research.

ICML 2018 Alex Grigorevskiy

Now, some more impressions about the six-day conference. There were 10 parallel oral sessions, so it was almost impossible to hear all the interesting presentations. Some people have strategies for covering all the relevant novelties, and poster sessions are, again, a great help: for instance, you can take a photo of every interesting poster, since a poster is much faster to read than a paper. With more than 200 posters every day this is demanding, but it seems to work for some people. I myself did not follow any particular strategy and decided to focus on quality rather than quantity. Moreover, since my presentation was on one of the last days of the conference, I planned not to overwhelm my mind before it.

After a short time I realized that it is possible to find a specialist on any topic at the venue. Whether it is a topic you want to learn more about or something you are currently working on, with a small amount of effort you can find someone to discuss it in detail. I consider this a big advantage of the conference.

In addition to the conference sessions, there was a large hall with company booths. All of the world's AI leaders and runners-up were there. The strongest representation came from internet companies and the financial sector; other sectors were often represented by a single company, such as Insilico Medicine. More traditional industrial research was also present, e.g. through Bosch. From Sweden, there were Spotify and Peltarion. Most of the companies were American, some were Chinese, and not many came from Europe (Spotify and Yandex being the two most prominent examples). Surprisingly, the city of Montreal had a booth as well, since it is positioning itself as a hub for AI research and business.

So, what was happening at the company spaces? For companies, the main reasons to participate are presenting their own research and/or products, as well as recruiting.
Hence, participants could learn more about what these companies are doing and what their job openings require. Intuit (an American financial company) attracted visitors with virtual reality demonstrations built specifically for the conference. I tried one: falling from a skyscraper was a realistic and worrying experience. Yandex demonstrated their library CatBoost, which they claim is superior to the supervised learning package XGBoost. Google DeepMind, Amazon, Uber and Facebook presented dozens of papers at the conference. There were many more companies, of course; in general, it was a very good opportunity to chat with people from industry and get an overview of what they are doing.

Now I'll present some highlights I picked from the conference. They are a random selection aligned with my interests and do not necessarily correlate with the quality of the papers.

The first day of the conference is the tutorial day. Listening to these tutorials is a good way to get the current state of the art on a topic, along with the personal opinions of a famous researcher. This year I really enjoyed Benjamin Recht's tutorial "Optimization Perspectives on Learning to Control". Do you remember his Test of Time Award lecture at NIPS 2017 (see also: http://www.argmin.net/2017/12/05/kitchen-sinks/)? In that talk he raised the concern that current Deep Learning (DL) algorithms involve a substantial amount of "alchemy", and that it is worth comparing your DL models with simple models and optimization algorithms. In the present tutorial he summarized the differences and similarities between learning and control, and also spoke about the reproducibility of Reinforcement Learning and control.

In the following days I was shuttling between several sessions: Gaussian processes, Bayesian deep learning, approximate inference, and generative models. The papers that grabbed my attention were:

  1. Quasi-Monte Carlo Variational Inference. I wonder how this method compares with the "central composite design" approach used for hyper-parameter integration in GP models (https://arxiv.org/pdf/1206.5754.pdf).
  2. Differentiable Compositional Kernel Learning for Gaussian Processes. They present the Neural Kernel Network approach for learning a GP kernel.
  3. Generalized Robust Bayesian Committee Machine for Large-scale Gaussian Process Regression. They present an alternative approach to speeding up GPs on large datasets.
  4. Conditional Neural Processes, from DeepMind, as well as the follow-up "Neural Processes", appear to be a rich nonlinear extension of standard GPs.

The weekend after the conference was dedicated to workshops. While the tutorials summarize the past and the conference highlights the present, the workshops glimpse into the future. I was really impressed by the quality of the top-level speakers at the workshops:

  • Yoshua Bengio (Université de Montréal)
  • Juergen Schmidhuber (IDSIA)
  • Sergey Levine (UC Berkeley)
  • Yann LeCun (Facebook & NYU)
  • Marc Deisenroth (Imperial College London)

They spoke at various workshops across the weekend.

I have not found videos of the workshop talks gathered in one place, so they have to be searched for individually. I attended two reinforcement learning workshops just to get an overview of the field, and thanks to the high level of those presentations, it seems I grasped its main trade-offs and approaches.

A fashion image dataset was presented at the Deep Generative Models workshop: about 290,000 images together with textual descriptions. The authors are currently running a generative modelling competition at https://www.fashion-gen.com/. I suppose it will become a very popular dataset.

As you can see from this overview, I am quite happy about attending ICML 2018. I am sure that everyone can find relevant people there to talk with about their current work as well as other interesting topics. The quality of the tutorials and workshops is also very high. See you at next year's ICML, which will be held in Long Beach, USA!



Author
Alexander Grigorevskiy, PhD
AI Scientist
Silo AI

