[ELLIS Meetup] Neural Ensemble Search

Speaker: Arbër Zela
Date: 2022-04-15


Ensembles of neural networks achieve superior performance compared to standalone networks, not only in terms of accuracy on in-distribution data but also on data with distributional shift, along with improved uncertainty calibration. Diversity among networks in an ensemble is believed to be key for building strong ensembles, but typical approaches only ensemble different weight vectors of a fixed architecture. Instead, we investigate neural architecture search (NAS) for explicitly constructing ensembles to exploit diversity among networks of varying architectures and to achieve robustness against distributional shift. By directly optimizing ensemble performance, our methods implicitly encourage diversity among networks, without the need to explicitly define diversity. We find that the resulting ensembles are more diverse than ensembles composed of a fixed architecture, and are therefore also more powerful. We show significant improvements in ensemble performance on image classification tasks, both for in-distribution data and under distributional shift, with better uncertainty calibration.
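The combination step described above can be illustrated with a minimal sketch: ensemble members (here, networks of possibly different architectures) each produce a predictive distribution, and the ensemble averages them. This is a generic illustration of ensembling predictive distributions, not the NES search procedure itself; the function name and the toy probabilities are hypothetical.

```python
import numpy as np

def ensemble_predict(member_probs):
    """Average the predictive distributions of ensemble members.

    member_probs: list of (n_samples, n_classes) arrays, one per network.
    Averaging the members' softmax outputs is a standard way to combine
    networks, regardless of whether their architectures differ.
    """
    stacked = np.stack(member_probs)  # (n_members, n_samples, n_classes)
    return stacked.mean(axis=0)

# Two hypothetical members that agree on the first sample but
# disagree on the second -- disagreement is where diversity helps:
p1 = np.array([[0.9, 0.1], [0.2, 0.8]])
p2 = np.array([[0.8, 0.2], [0.6, 0.4]])
avg = ensemble_predict([p1, p2])  # rows remain valid distributions
```

Because each row of the average is itself a probability distribution, standard calibration metrics (e.g. expected calibration error) can be applied to the ensemble prediction directly.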

[ELLIS Meetup] SAT Competitions + SAT Solving

Speaker: Armin Biere
Date: 2022-04-01


Armin Biere's research interests are applied formal methods, more specifically formal verification of hardware and software, using model checking and related techniques, with a focus on developing efficient SAT and SMT solvers. He is the author and co-author of more than 220 papers and has served on the program committees of more than 160 international conferences and workshops. His most influential work is his contribution to Bounded Model Checking. Decision procedures for SAT, QBF and SMT, developed by him or under his guidance, rank at the top of many international competitions and have been awarded 98 medals, including 55 gold medals. He is a recipient of an IBM faculty award in 2012, the TACAS 2014 award for the most influential paper in the first 20 years of TACAS, the HVC'15 award for the most influential work in the last five years in formal verification, simulation, and testing, the ETAPS 2017 Test of Time Award, the CAV Award in 2018, and the IJCAI-JAIR 2019 Award.

[ELLIS Meetup] Research in the Robot Learning Lab

Speaker: Abhinav Valada
Date: 2022-03-18


The research of the Robot Learning Lab lies at the intersection of robotics, machine learning and computer vision, with a focus on tackling fundamental robot perception, state estimation and planning problems using learning approaches, to enable robots to reliably operate in more complex domains and diverse environments. The overall goal of this research is to develop scalable lifelong robot learning systems that continuously learn multiple tasks from what they perceive and experience by interacting with the real world. The group's approach is to design deep learning algorithms that facilitate transfer of information through self-supervised multimodal and multitask learning by exploiting complementary features and cross-modal interdependencies. These techniques in turn enable robots to perceive more robustly and reason about the environment more effectively.

[ELLIS Meetup] Dynamic Algorithm Configuration

Speaker: André Biedenkapp
Date: 2022-08-26


The performance of many algorithms in the fields of hard combinatorial problem-solving, machine learning or AI in general depends on parameter tuning. Automated methods have been proposed to relieve users of the tedious and error-prone task of manually searching for performance-optimized configurations. However, there is still a lot of untapped potential. Existing solution approaches often neglect the non-stationarity of hyperparameters, where different hyperparameter values are optimal at different stages of an algorithm's run. Taking this non-stationarity into account in the optimization procedure promises much better performance, but also poses many new challenges. In this talk we will discuss existing solution approaches to classical hyperparameter optimization and explore ways of tackling the non-stationarity of hyperparameters.
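The non-stationarity point above can be made concrete with a toy sketch: the same hyperparameter (here a learning rate for gradient descent on f(x) = x²) can be held fixed for the whole run or adjusted per step, and the stage-dependent setting can reach a better result. This is only an illustration of why dynamic configuration can pay off, not the dynamic algorithm configuration method discussed in the talk; the schedule values are hypothetical.

```python
def run_gd(x0, steps, lr_policy):
    """Minimize f(x) = x^2 with gradient descent, where lr_policy(t)
    returns the learning rate for step t -- a hyperparameter that is
    allowed to change over the course of the run."""
    x = x0
    for t in range(steps):
        grad = 2 * x        # derivative of x^2
        x = x - lr_policy(t) * grad
    return x

# Static configuration: one value for the whole run.
static = run_gd(10.0, 20, lambda t: 0.05)

# Dynamic configuration: aggressive early, conservative late.
dynamic = run_gd(10.0, 20, lambda t: 0.4 if t < 5 else 0.05)
```

In this toy setting the stage-dependent schedule ends closer to the optimum than the best-of-both fixed value would, which is exactly the gap that dynamic configuration methods try to exploit on real algorithms.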