IbPRIA 2019: 9th Iberian Conference on Pattern Recognition and Image Analysis
Madrid, Spain. July 1-4



July 1, 2019

Machine Learning with scikit-learn
Date: Monday, July 1

Gaël Varoquaux

Gaël Varoquaux is a tenured computer-science researcher at Inria. His research develops statistical-learning tools for scientific inference. He has pioneered the use of machine learning on brain images to map cognition and brain pathologies. More generally, he develops tools to make machine learning easier to use, with statistical models suited to real-life, uncurated data, and software development for data science. He is the project lead for scikit-learn, one of the reference machine-learning toolboxes, as well as a core contributor to joblib, Mayavi, and nilearn. Varoquaux has contributed key methods for learning on spatial data, matrix factorizations, and modeling covariance matrices. He has a PhD in quantum physics and is a graduate of the École Normale Supérieure, Paris.


This tutorial will briefly cover how to do machine learning with scikit-learn. It will not go into the details, but rather give pointers to important aspects of the software as well as key concepts in machine learning.
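A core concept the tutorial refers to is scikit-learn's uniform estimator API: every model is fitted with `fit` and queried with `predict`. The sketch below illustrates this on a bundled dataset; the choice of dataset and estimator (iris, logistic regression) is illustrative, not taken from the tutorial itself.

```python
# Minimal scikit-learn workflow: load data, split it, fit an
# estimator, and evaluate on held-out data.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000)  # any estimator exposes fit/predict
clf.fit(X_train, y_train)                # learn from the training split
y_pred = clf.predict(X_test)             # predict on unseen data
print(accuracy_score(y_test, y_pred))
```

Because all estimators share this interface, the `LogisticRegression` above can be swapped for any other classifier without changing the surrounding code.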

Computer Vision for Affective Computing
Date: Monday, July 1

Agata Lapedriza Garcia

Agata Lapedriza is a Professor at the Universitat Oberta de Catalunya. She received her MS degree in Mathematics from the Universitat de Barcelona and her Ph.D. degree in Computer Science from the Computer Vision Center at the Universitat Autònoma de Barcelona. She worked as a Visiting Researcher in the Computer Science and Artificial Intelligence Laboratory at the Massachusetts Institute of Technology (MIT) from 2012 until 2015. Currently she is also a Visiting Researcher in the Affective Computing group at the MIT Media Lab. Her research interests are related to Scene Understanding and Emotional Artificial Intelligence.


Over the past decade we have observed an increasing interest in developing technologies for automatic emotion recognition. The capacity to automatically recognize emotions has many applications in environments where machines need to interact and collaborate with humans. But how can machines recognize emotions? In this tutorial I will give an introduction to Affective Computing (also known as Emotional Artificial Intelligence), the discipline that studies and develops systems and devices that can recognize, interpret, process, or simulate emotions or feelings. After a general introduction to Affective Computing, I will focus on techniques for emotion recognition, paying special attention to the problem of emotion recognition from images. We will review some research on emotion recognition based on face and body analysis, and we will discuss the importance of analyzing scenes and context, in addition to faces, to better recognize emotions. In particular, we will see how emotion recognition can be approached from a Scene Understanding perspective.

Bayesian Optimization
Date: Monday, July 1

Daniel Hernandez-Lobato

Dr. Daniel Hernandez-Lobato obtained a Ph.D. and an M.Phil. in Computer Science from Universidad Autónoma de Madrid, Spain, in January 2010 and June 2007, respectively. His Ph.D. thesis received the award for the best Computer Science thesis defended at that institution during that academic year. Between November 2009 and September 2011 he worked as a post-doctoral researcher at Université catholique de Louvain, Belgium, where he had the opportunity to collaborate with Prof. Pierre Dupont and Prof. Bernard Lauwerys on the identification of biomarkers for the early diagnosis of arthritis. In September 2011 he moved back to Universidad Autónoma de Madrid, where he has worked as a Lecturer in Computer Science since January 2014. His research interests focus mainly on the Bayesian approach to machine learning, including topics such as Bayesian optimization, kernel methods, Gaussian processes, and approximate Bayesian inference. He has participated as an invited speaker in the workshop on Gaussian process approximations, in 2015 and 2017, and in the Second Workshop on Gaussian Processes at Saint-Étienne, in 2018. He was also one of the two main organizers of the Machine Learning Summer School 2018 at Universidad Autónoma de Madrid.


Many optimization problems are characterized by an objective function that is very expensive to evaluate. More precisely, the evaluation may involve carrying out a time-consuming experiment. This also means that the objective may lack a closed-form expression and, moreover, that the evaluation process can be noisy: two measurements of the objective function at the same input location can give different results. Examples of such problems include tuning the hyper-parameters of a deep neural network, adjusting the parameters of the control system of a robot, or finding new materials for, e.g., solar energy production. Standard optimization methods give sub-optimal results when tackling problems of this type. In this tutorial, I will present a general overview of Bayesian optimization (BO), a collection of methods that can be used to efficiently solve problems with the characteristics described. For this, BO methods fit, at each iteration, a probabilistic model to the observed evaluations of the objective. This model is typically a Gaussian process whose predictive distribution captures the potential values of the objective in regions of the space where there are no observations. This uncertainty is then used to build an acquisition function whose maximum indicates where to perform the next evaluation of the objective, with the goal of solving the problem in the smallest number of steps. Because the acquisition function depends only on the probabilistic model and not on the actual objective, it can be optimized cheaply. BO methods therefore make, at each iteration, intelligent decisions about where to evaluate the objective next, which can save a lot of computational time. In this tutorial, I will explain in detail each of the steps performed by BO methods and, focusing on information theory-based methods, I will also describe some extensions that address multiple evaluations in parallel, and multiple constraints and/or objectives.
I will conclude with a description of BO software, open problems, and future research directions in the field. The tutorial will be followed by an afternoon session in which some of the concepts and methods described will be put into practice. More precisely, BO software will be used to tune the hyper-parameters of machine learning algorithms.
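The BO loop described above (fit a Gaussian process to the evaluations so far, build an acquisition function, evaluate the objective where the acquisition is maximal) can be sketched on a toy problem. Everything here is an illustrative assumption rather than the tutorial's own material: the 1-D noisy objective, the RBF kernel, the dense candidate grid standing in for a proper acquisition optimizer, and the choice of expected improvement as the acquisition.

```python
# Toy Bayesian-optimization loop: GP surrogate + expected improvement (EI).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def objective(x):
    # Expensive, noisy black box (toy stand-in): two evaluations at the
    # same x give different results because of the additive noise.
    return np.sin(3 * x) + 0.1 * rng.normal(size=np.shape(x))

X = rng.uniform(0, 2, size=(4, 1))       # small initial design
y = objective(X).ravel()

for _ in range(10):                      # one BO iteration per loop pass
    # 1. Fit the probabilistic model to the observed evaluations.
    gp = GaussianProcessRegressor(kernel=RBF(), alpha=0.01).fit(X, y)
    # 2. Build the acquisition on cheap candidate points (a grid here).
    cand = np.linspace(0, 2, 200).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    best = y.max()
    z = (mu - best) / np.maximum(sigma, 1e-9)
    ei = (mu - best) * norm.cdf(z) + sigma * norm.pdf(z)  # EI (maximization)
    # 3. Evaluate the true objective only where the acquisition peaks.
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, [x_next]])
    y = np.append(y, objective(x_next))

print(X[np.argmax(y)], y.max())          # incumbent solution after 10 steps
```

Note that only step 3 touches the expensive objective; steps 1 and 2 work entirely with the cheap surrogate, which is what makes the per-iteration decision affordable.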

