31. August 2020
| Virtual Event
All current and former employees and friends of the Max Planck Institute for Intelligent Systems and Cyber Valley are welcome to attend this event.
If you have any questions, please contact Cyber Valley Event Manager Oliwia Gust (email@example.com).
8:55 am - 9:00 am Opening remarks
9:00 am - 11:00 am Presentations | Part One
11:00 am - 2:00 pm - Break -
2:00 pm - 4:00 pm Presentations | Part Two
Technical University of Munich, Germany
Talk: Monday, August 31, 2020 – 9:00-9:30 am
Neural Rendering and Representation Learning for 3D Holograms of Humans
Communicating via text messages, audio calls, or video calls is part of our modern society. As a next evolutionary step, we already see early implementations of virtual 3D telepresence systems and personalized avatars. Capturing natural motions and expressions, as well as photorealistically reproducing images from free viewpoints, remains challenging. With the rise of deep learning methods and, especially, neural rendering, we are seeing immense progress toward meeting these challenges. In this talk, I will give an overview of my previous and ongoing research on image synthesis of humans and the underlying representations of appearance, geometry, and motion that allow for explicit and implicit control over the synthesis process. Specifically, I will talk about my work on facial reenactment and neural rendering.
Harvard University, Cambridge, USA
Wyss Institute for Biologically Inspired Engineering
Talk: Monday, August 31, 2020 – 9:30-10:00 am
A computational approach to the creation of digital matter
Human creativity, combined with bioinspiration and modern computational tools, is now able to derive the most captivating designs, elevating both structural appearance and functionality to unprecedented levels. The physical realization of these designs, however, remains a bottleneck. On the manufacturing side, the grand challenge is to create a process that is economical, fast, repeatable, and that enables the desired design freedom in both geometric complexity and choice of materials. Digital Fabrication and, in particular, Additive Manufacturing (AM) has emerged as a potent alternative to conventional manufacturing and is considered by many the holy grail. Despite this hype, however, AM still lags behind expectations and is often unable to handle the required complexity, which significantly limits progress in major research fields.
In my talk, I will address both the digital design of novel materials and structures with outstanding properties, and the fabrication thereof. First, I will present recent research that shows how the mutual exclusivity between stiffness and toughness can be overcome in mechanical metamaterials and how we can integrate (multi-)functionality, such as actuation and sensing, at the materials level. Second, I will demonstrate AM-based solutions specifically tailored to these design paradigms that cannot be fabricated in any other way. Third, I will address the general limitations of AM and show how they can be (partially) overcome, with the ultimate goal of solving the grand challenge. Finally, I will outline potential next steps and provide a perspective on how the proposed data-driven design approaches can dictate the future direction of the whole field.
University of Oxford, UK
Talk: Monday, August 31, 2020 – 10:00-10:30 am
Minimally Supervised Learning
We know that deep learning works exceptionally well with copious amounts of annotated training examples. Collecting this data is often tedious, expensive and sometimes even infeasible. This talk explores what we can learn from a reduced amount of annotated data and how including physical priors about the world can substitute manual supervision. We will show how incorporating explicit knowledge about the world in the model can lead to more interpretable predictions and better generalization.
DeepMind, London, UK
Talk: Monday, August 31, 2020 – 10:30-11:00 am
Towards Scalable Visual Reasoning
With deep learning and large-scale datasets, we have seen a seismic shift in computer vision, where more traditional problems such as image recognition on ImageNet are being tackled one by one with considerable success. However, can we build and investigate architectures that go beyond simple classification?
In the first part of the talk, I will discuss my work on visual question answering. In this line of research, we ask machines various questions about the visual scene. Within that paradigm, we can test "reasoning" capabilities beyond typical classification. During the talk, I will also briefly mention related works on grounding language in percepts and actions, as well as on building a symbolic proof system using deep learning.
In the second part of the talk, I will briefly introduce my recent work on more scalable and biologically plausible training on temporal input streams such as videos. The underlying hypothesis is that we can exploit various redundancies in such signals to approximate the standard backpropagation algorithm accordingly.
Institute for Machine Learning, ETH Zurich, Switzerland
Talk: Monday, August 31, 2020 – 2:00-2:30 pm
Geometric aspects of adversarial robustness of deep networks
Despite showing impressive classification performance on many challenging benchmarks, generally in well-controlled settings, deep networks are intriguingly vulnerable to adversarial perturbations: it is relatively easy to design noise that changes the label estimated by the classifier. The decision boundaries learned by state-of-the-art image classifiers are known to lie close to the data samples, which is precisely why they exhibit such poor performance under l_p-norm adversarial perturbations. The study of this high-dimensional phenomenon has proven to be a challenging yet extremely crucial task; it will eventually bring important benefits in safety-critical applications of machine learning such as self-driving cars, autonomous robots, and medical imaging.
In this talk, I will present some geometric insights into deep networks' decision boundaries and how their geometry contributes to the l_p-norm robustness properties of such classifiers. Furthermore, I will demonstrate how these observations can be exploited to design efficient methods to evaluate and improve the robustness of state-of-the-art image classifiers. In particular, I will show how controlling the curvature of the decision boundary can be critical to improving l_p adversarial robustness, and how, by exploiting curvature, computationally efficient algorithms can be derived to measure robustness properties.
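As a generic illustration of the l_p-norm perturbations this abstract refers to (not code from the talk), the sketch below applies an FGSM-style l_inf attack to a toy linear classifier; all weights and data are invented for the example:

```python
import numpy as np

# Toy linear classifier: logit = w.x + b, label = sign of the logit.
# All weights and data here are illustrative, not from the talk.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1
x = rng.normal(size=16)

logit = w @ x + b
label = np.sign(logit)

# FGSM-style l_inf perturbation: step each coordinate by eps against
# the gradient of the logit (for a linear model that gradient is w).
def fgsm_linear(x, w, label, eps):
    return x - eps * label * np.sign(w)

# The margin shrinks linearly with eps; the predicted label flips once
# eps exceeds |logit| / ||w||_1, i.e. the l_inf distance to the boundary.
eps_flip = abs(logit) / np.sum(np.abs(w))
x_adv = fgsm_linear(x, w, label, 1.01 * eps_flip)
assert np.sign(w @ x_adv + b) != label
```

For a linear model the flip threshold |logit| / ||w||_1 is exact; for a deep network the same sign-of-gradient step is taken on the loss, and the distance to the decision boundary must be estimated rather than computed in closed form.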
Harvard T.H. Chan School of Public Health, Boston, USA
Talk: Monday, August 31, 2020 – 2:30-3:00 pm
Cascade Processes in Machine Learning
Cascade processes have been proven useful in modeling such diverse phenomena as epidemic spreading, signaling in biological networks, information propagation in social media, financial systemic risk, and the reorganization of international trade networks. In this talk, we will focus on two model classes, load redistribution and spreading models, and use them as means to an end to improve and develop machine learning algorithms. We will leverage the fact that load redistribution models correspond one-to-one to the evaluation of deep neural networks. We will derive successful initialization strategies and speed up model training based on analytic insights for random graph ensembles. We will further discuss how we can utilize these improvements for the inference of gene regulatory networks.
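For readers unfamiliar with spreading models, the following is a minimal, self-contained sketch of a generic linear-threshold cascade on a small directed graph (an illustrative toy, not the speaker's model); the weights and thresholds are invented for the example:

```python
import numpy as np

# Linear-threshold cascade: a node activates once the summed weight of
# its active in-neighbors reaches its threshold. Values are illustrative.
W = np.array([
    [0.0, 0.6, 0.0, 0.0],
    [0.0, 0.0, 0.5, 0.5],
    [0.0, 0.0, 0.0, 0.4],
    [0.0, 0.0, 0.0, 0.0],
])  # W[i, j]: influence of node i on node j
theta = np.array([0.5, 0.5, 0.4, 0.8])

def cascade(W, theta, seeds):
    active = np.zeros(len(theta), dtype=bool)
    active[seeds] = True
    while True:
        influence = active @ W                 # incoming active weight
        new = (influence >= theta) & ~active
        if not new.any():
            return active
        active |= new

# Seeding node 0 activates node 1 (0.6 >= 0.5), then node 2 (0.5 >= 0.4);
# node 3 fires only once nodes 1 and 2 together supply 0.5 + 0.4 >= 0.8.
final = cascade(W, theta, [0])
```

The fixed-point iteration above is the basic mechanism behind the load-redistribution and spreading model classes mentioned in the abstract, though the correspondence to neural network evaluation discussed in the talk involves further analytic machinery.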
University of California, Los Angeles, USA
Talk: Monday, August 31, 2020 – 3:00-3:30 pm
From Simple Inference to Complex Probabilistic Reasoning
Probabilistic reasoning is generally considered to be the framework of choice to enable and support decision making under uncertainty in real-world scenarios. Ideally, we would like a probabilistic ML system deployed in the wild to be able to i) allow humans (or other AI agents) to pose arbitrary and articulated queries, that is, questions about states of the world; ii) provide guarantees on its results; iii) deal with complex, heterogeneous, and potentially structured data; and, moreover, iv) support chaining several inference steps together. In this talk, I will argue that the above desiderata are still unmet in the current landscape of probabilistic ML. Even the most prominent paradigm nowadays, deep generative modeling, provides only a shallow, simplistic form of inference and struggles when dealing with complex queries or data. I will then delineate how my past and current research has aimed at closing this gap. Specifically, I will touch on some recent works investigating principled frameworks within which complex tasks, such as reasoning about the behavior of classifiers or dealing with algebraic constraints over heterogeneous data, can be handled elegantly and efficiently. Lastly, I will talk about some future research perspectives: extending these complex probabilistic reasoning routines to interactive and relational settings while allowing for approximations with guarantees.
Facebook AI Research, USA
Talk: Monday, August 31, 2020 – 3:30-4:00 pm
Towards Embodied Intelligence
The creation of intelligent embodied artificial agents has long been a dream for roboticists and artificial intelligence researchers alike. In this talk, I will argue for several key advances that I believe are necessary before we can achieve this dream, focusing in particular on two key research areas: multi-modal sensing and data-efficient learning for fast adaptation. Regarding multi-modal sensing, I will discuss the importance of using alternative sensor modalities in addition to vision, and will present my recent research on using touch sensing to allow robots to perceive, understand, and interact with the world around them. Regarding data-efficient learning, I will discuss the use of model-based reinforcement learning -- which explicitly creates models of the world -- and present some of my recent work on understanding and overcoming the limitations of current approaches.