IbPRIA 2019: 9th Iberian Conference on Pattern Recognition and Image Analysis
Madrid, Spain. July 1-4, 2019


Plenary Talks
Building Computer Vision Systems That Really Work
Date: Tuesday, July 2

Andrew Fitzgibbon

Fitzgibbon is a partner scientist at Microsoft in Cambridge, UK. He has published numerous highly-cited papers, and received many awards for his work, including ten “best paper” prizes at various venues, the Silver medal of the Royal Academy of Engineering, and the BCS Roger Needham award. He is a fellow of the Royal Academy of Engineering, the British Computer Society, and the International Association for Pattern Recognition. He studied at University College, Cork, and then did a Masters at Heriot-Watt University, before taking up an RSE job at the University of Edinburgh, which eventually morphed into a PhD. He moved to Oxford in 1996 and drove large software projects such as the VXL project, and then spent several years as a Royal Society University Research Fellow before joining Microsoft in 2005. He loves programming, particularly in C++, and his recent work has included new numerical algorithms for Eigen, and compilation of F# to C.


I have been shipping advanced computer vision systems for two decades. In 1999, prize-winning research from Oxford University was spun out to become the Emmy-award-winning camera tracker “boujou”, which has been used to insert computer graphics into live-action footage in pretty much every movie made since its release, from the “Harry Potter” series to “Bridget Jones’s Diary”. In 2007, I was part of the team that delivered human body tracking in Kinect for Xbox 360, and in 2015 I moved from Microsoft Research to the Windows division to work on Microsoft’s HoloLens, an AR headset brimming with cutting-edge computer vision technology.

In all of these projects, the academic state of the art has had to be leapfrogged in accuracy and efficiency, sometimes by several orders of magnitude. Sometimes that’s just raw engineering, sometimes it means completely new ways of looking at the research. I will talk about this interplay, between mathematics and code, and show how each helps to understand the other. If I had to nominate one key to success, it’s a focus on, well, everything: from cache misses to end-to-end experience, and on always being willing to change one’s mind.

Face Analysis for Multimodal Emotional Interfaces
Date: Tuesday, July 2

Matti Pietikäinen

Matti Pietikäinen received his Doctor of Science in Technology degree from the University of Oulu, Finland. He is a professor at the Center for Machine Vision and Signal Analysis, University of Oulu. From 1980 to 1981 and from 1984 to 1985, he visited the Computer Vision Laboratory at the University of Maryland. He has made fundamental contributions, e.g. to local binary pattern (LBP) methodology, texture-based image and video analysis, and facial image analysis. He has authored about 350 refereed papers in international journals, books and conferences. His papers have about 53,500 citations in Google Scholar (h-index 78), and eight of these have over 1,350 citations. He was an Associate Editor of the IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), Pattern Recognition, IEEE Transactions on Information Forensics and Security, and Image and Vision Computing journals. Currently he serves as an Associate Editor of IEEE Transactions on Biometrics, Behavior and Identity Science, and as a Guest Editor for special issues of IEEE TPAMI and the International Journal of Computer Vision. He was President of the Pattern Recognition Society of Finland from 1989 to 1992, and was named its Honorary Member in 2014. From 1989 to 2007 he served as a member of the Governing Board of the International Association for Pattern Recognition (IAPR), and became one of the founding fellows of the IAPR in 1994. He is an IEEE Fellow for contributions to texture and facial image analysis for machine vision. In 2014, his research on texture-based face description was awarded the Koenderink Prize for Fundamental Contributions in Computer Vision. He received the prestigious IAPR King-Sun Fu Prize 2018 for fundamental contributions to texture analysis and facial image analysis. He was named a Highly Cited Researcher by Clarivate Analytics in 2018, for producing multiple highly cited papers in 2006-2016 that rank in the top 1% by citations for his field in Web of Science.


Emotions are central to human intelligence, and should have a similar role in artificial intelligence. There is a growing need to develop multimodal emotional interfaces, which are able to read the emotions of people and adapt their operation accordingly. Among the areas of application are emotional chatbots, personal assistants, human-robot interaction, emotion-aware games, health and medicine, on-line learning, safe car driving, security, and user / customer experience analysis. Facial image analysis will play a key role in developing emotionally intelligent systems. In this talk, an introduction to emotions, face information and applications of emotion analysis is first presented. Then, highlights of our recent research on facial image analysis are introduced, including methods for image and video description, face and facial (micro-)expression recognition, and heart-rate measurement from face videos. Some examples of multimodal emotion analysis are presented. Finally, future challenges for building multimodal emotional interfaces are discussed.

Fun with Human-Machine Collaboration for Computer Vision
Date: Wednesday, July 3

Vittorio Ferrari

Vittorio Ferrari is a Senior Staff Research Scientist at Google, where he leads a research group on visual learning. He received his PhD from ETH Zurich in 2004, then was a post-doc at INRIA Grenoble (2006-2007) and at the University of Oxford (2007-2008). Between 2008 and 2012 he was an Assistant Professor at ETH Zurich, funded by a Swiss National Science Foundation Professorship grant. From 2012 to 2018 he was on the faculty of the University of Edinburgh, where he became a Full Professor in 2016 (he is now an Honorary Professor). In 2012 he received the prestigious ERC Starting Grant, and the best paper award from the European Conference on Computer Vision. He is the author of over 110 technical publications. He regularly serves as an Area Chair for the major computer vision conferences, was a Program Chair for ECCV 2018, and will be a General Chair for ECCV 2020. He is an Associate Editor of IEEE Transactions on Pattern Analysis and Machine Intelligence. His current research interests are in learning visual models with minimal human supervision, human-machine collaboration, and semantic segmentation.


Training computer vision models typically requires tedious and time-consuming manual annotation, which hinders scaling, especially for complex tasks such as full image segmentation. In this talk I will present recent human-machine collaboration techniques from my team, in which the machine assists a human in annotating the training data and training a new model. These can substantially reduce human effort and also yield more interesting interfaces to interact with. The talk will explore several cases, including segmentation of individual objects, joint segmentation of all objects and background regions in an image, using speech together with mouse inputs, and annotating object classes using free-form text written by undirected annotators.

Towards Human Behavior Modeling from (Big) Mobile Data
Date: Wednesday, July 3

Nuria Oliver

Nuria Oliver is Director of Research in Data Science at Vodafone and Chief Data Scientist at Data-Pop Alliance. She has pioneered the development of intelligent, interactive systems that are able to recognize and predict different types of human behavior on desktops, mobile phones and even cars. She received a PhD in Perceptual Intelligence at the MIT Media Laboratory.

Nuria has over 20 years of research experience developing novel computational models of both individual and aggregate human behavior to power intelligent, interactive and personalized systems. Her work has contributed to the improvement of services, the creation of new services, the definition of strategies and the creation of new companies. Her projects include building a real-time facial expression recognition system which was licensed to Nokia in 1997, a visual surveillance system to detect and recognize human interactions in 1998, a smart car which was able to predict the most likely maneuver in 2000, a multi-modal office activity recognition system demoed with Bill Gates at IJCAI 2001, and a range of mobile intelligent interfaces to detect sleep apnea (2006), enable runners to achieve their exercise goals (2007), reduce medication non-compliance (2009) or even detect boredom (2015). Since 2009, she has also been working in the area of computational social sciences, leveraging large-scale human behavioral data to enable better decision making and have positive social impact. She has published over 180 academic papers and 40 patents. Ten of her papers have received awards or nominations for best scientific article, including two best paper awards at Ubicomp 2014 and 2015, a best paper award at RecSys 2012 and the ACM ICMI Ten Year Technical Impact Award in 2015. Nuria has given more than 140 invited talks.

She is a Fellow of the ACM (2017), a Fellow of the IEEE (2017) and a Fellow of the European Association of Artificial Intelligence (2016). She is a member of the Spanish Royal Academy of Engineering, the Academia Europaea and the ACM SIGCHI Academy. She received an honorary PhD from the University Miguel Hernandez in 2018. Her work has received many awards, including the MIT TR35 Young Innovator Award (2004), the European Ada Byron Award (2016), the Spanish National Computer Science Award (2016) and the Spanish Engineer of the Year Award (2018). She serves on the scientific advisory boards of six European universities, as well as those of Mahindra Comviva and the Future Digital Society. She advises the Spanish Government and the European Commission on AI-related topics. She is a member of a Global Future Council at the World Economic Forum.

Nuria is committed to service to the scientific community. She has served in a chair role in 18 ACM/IEEE/AAAI international conferences and is a regular member of the program committees of the top international conferences in her fields of research. She is on the editorial boards of several journals, and has served on about 10 PhD thesis committees and on the ACM IUI oversight committee, among others.

In addition to her scientific work, Nuria devotes part of her time to scientific-technological outreach and to inspiring young people and adolescents - and especially girls - to pursue technical careers. Her work and profile have been featured in more than 200 media articles. She has given talks to more than 8000 adolescents, contributed the chapter entitled "Digital Scholars" to the book "Digital natives do not exist" (Deusto, 2017), written articles for EL PAIS, The Guardian, India Economic Times and TechCrunch, among others, and co-organized large, open conferences, such as the first TEDxBarcelona event devoted to education. Her talks at WIRED, TEDx and similar events have been viewed thousands of times.


Human Behavior Modeling and Understanding is a key challenge in the development of intelligent systems and a great asset to help us make better decisions. Over the course of the past 23 years, I have worked on building automatic data-driven machine-learning based models of human behaviors for a variety of applications, including smart rooms, smart cars, smart offices, smart mobile phones and smart cities.

In my talk, I will describe three such projects. The first project is a smartphone app to automatically detect boredom; this project received the best paper award at Ubicomp 2015. The second project, MobiScore, tackles the challenge of financial inclusion by building machine-learning-based models of credit scoring from mobile network data. MobiScore enables people who do not have a bank account, and hence are excluded from the financial system, to get access to credit. Finally, the third project focuses on automatically detecting crime hotspots in a city through the analysis of mobile data.

