
Horaud, Radu
Thesis supervisor at Université Joseph Fourier, Grenoble
Radu Patrice Horaud is a research director at INRIA Grenoble Rhône-Alpes, where he founded and leads the PERCEPTION team. His research interests cover computational vision, audio signal processing, audio-visual scene analysis, machine learning, and robotics. He has authored over 160 scientific publications.
Radu has pioneered work in computer vision using range data (depth images) and has developed a number of principles and methods at the crossroads of computer vision and robotics. In 2006, he began developing audio-visual fusion and recognition techniques in conjunction with human-robot interaction.
Radu Horaud was the scientific coordinator of the European Marie Curie network VISIONTRAIN (2005-2009) and of the STREP projects POP (2006-2008) and HUMAVIPS (2010-2013), and the principal investigator of a collaborative project between INRIA and Samsung's Advanced Institute of Technology (SAIT) on computer vision algorithms for 3D television (2010-2013). In 2013, he was awarded an ERC Advanced Grant for his five-year project VHIA (2014-2019).
Videos
Part 5: Fusion of Audio and Vision
5.1. Audio-visual processing challenges
5.2. Representation of visual information
5.3. The geometry of vision
5.4. Audio-visual feature association
5.5. Audio
Part 4: Machine Learning and Binaural Hearing
4.1. Binaural Features
4.2. Mapping Sounds onto Their Direction
4.3. Collecting Training Data
4.4. The Binaural Manifold
4.5. Localization with a Look-up