MLSP 2014
IEEE International Workshop on
Machine Learning for Signal Processing

September 21-24, 2014  Reims, France

Brain Computer Interfacing: Signal Processing Challenges
Professor Marc M. Van Hulle
Computational Neuroscience Group, KU Leuven, Belgium


Marc M. Van Hulle holds MSc degrees in Electromechanical Engineering and in Electrotechnical Engineering and received a PhD degree in Applied Sciences from KU Leuven, Belgium. He also received BScEcon and MBA degrees from Hasselt University, Belgium, the Doctor Technices degree from the Technical University of Denmark (2003), and Honorary Doctoral degrees from Brest State University (2009) and Yerevan State Medical University (2013). He is a Full Professor at the KU Leuven Medical School, where he heads the Computational Neuroscience Group of the Laboratorium voor Neuro- en Psychofysiologie. In 1992, he was with the Brain and Cognitive Sciences Department, Massachusetts Institute of Technology, Cambridge, MA, as a Post-Doctoral Scientist. He has authored a monograph and 250 technical publications. His current research interests include computational neuroscience, brain computer interfacing, neural networks, data mining, and signal processing. Marc Van Hulle is an IEEE Fellow.


A Brain Computer Interface (BCI) is a device that records and decodes brain activity with the aim of establishing a communication channel that does not require any physical embodiment. It can assist patients who suffer from severe motor and/or communication disabilities and thereby improve their quality of life. BCI research primarily relies on electroencephalography (EEG) signals acquired from the subject's scalp, or on local field potentials and spike trains recorded with deep brain implants. The journal Science selected deep brain implant-based BCI research as one of ten runners-up for its 2012 "Breakthrough of the Year". A recent success of EEG-based BCI is the "first kick" of the ball by a paraplegic at the 2014 FIFA World Cup opening ceremony. EEG-based BCI offers the advantages of requiring no surgery and being relatively inexpensive, which probably explains why it has attracted a large research community. However, applications have only rarely left the prototype stage, for several reasons: a low information transfer rate, poor reliability (many false hits), inadequate automation, electrode failures, and the need for expert support and preparation time. The first two are genuine signal processing and machine learning challenges, and in some cases they have led to hybrid BCI paradigms and to non-control-based applications.
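The information transfer rate mentioned above is commonly quantified with Wolpaw's formula, which assumes N equiprobable targets and errors spread uniformly over the remaining targets. A minimal sketch (the speller parameters in the example are illustrative assumptions, not from the talk):

```python
import math

def wolpaw_itr(n_targets: int, accuracy: float, selections_per_min: float) -> float:
    """Information transfer rate in bits/minute (Wolpaw's formula).

    Assumes n_targets equiprobable classes and a symmetric error
    distribution over the n_targets - 1 wrong classes.
    """
    n, p = n_targets, accuracy
    if p <= 1.0 / n:
        return 0.0  # at or below chance level: no information transferred
    bits = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        bits += (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# e.g. a hypothetical 6-target speller at 80% accuracy, 10 selections/minute:
rate = wolpaw_itr(6, 0.80, 10)  # ≈ 14 bits/min
```

Rates of this order, versus the hundreds of bits per minute of ordinary typing, illustrate why throughput is listed among the main challenges.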

Invasive techniques such as deep brain implants and, more recently, electrocorticography (ECoG, also called intracranial EEG) offer a much higher spatiotemporal resolution, a key factor in achieving higher throughput and controllability. However, the required surgery, the risk of scarring, gliosis and infection (in particular for deep brain implants), and probably also the resulting restricted access for testing new hypotheses and algorithms, hamper the development of marketable applications in the near future.

We will review a number of BCI paradigms, focusing on EEG-based BCI, and indicate some of the signal processing issues and challenges.

Sparse and Spurious: Dictionary Learning with Noise and Outliers
Research Director Rémi Gribonval
Inria, Rennes, France


Rémi Gribonval holds a Directeur de Recherche position with Inria in Rennes, France, where he is the scientific leader of the PANAMA research group on sparse audio signal processing. His research focuses on the mathematical and algorithmic aspects of signal processing and machine learning, with an emphasis on the interplay between low-dimensional models and inverse problems in high dimensions. He founded the series of international workshops SPARS on Signal Processing with Adaptive/Sparse Representations. He has been the coordinator of several national, bilateral and European research projects. In 2011, he was awarded the Blaise Pascal Award of the GAMNI-SMAI by the French Academy of Sciences, and a starting investigator grant from the European Research Council. He is an IEEE Fellow.


Sparse modeling has become a very popular tool in signal processing and machine learning, where many tasks can be expressed as underdetermined linear inverse problems. Together with a growing family of related low-dimensional signal models, sparse models expressed with signal dictionaries have given rise to algorithms that combine provably good performance with bounded complexity. By choosing the model through data-driven principles, dictionary learning leverages these algorithms to address applications ranging from denoising to inpainting and super-resolution.
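To make the alternating structure behind dictionary learning concrete, here is a minimal NumPy sketch on synthetic data. All sizes and names are illustrative assumptions; the sparse coding step is a crude thresholding rule (practical toolboxes use OMP or ℓ1 solvers), and the dictionary update is a least-squares step in the style of the Method of Optimal Directions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: each column of X is a sparse combination of hidden atoms.
n_features, n_atoms, n_signals, sparsity = 20, 30, 500, 3
D_true = rng.standard_normal((n_features, n_atoms))
D_true /= np.linalg.norm(D_true, axis=0)
X = np.zeros((n_features, n_signals))
for j in range(n_signals):
    idx = rng.choice(n_atoms, sparsity, replace=False)
    X[:, j] = D_true[:, idx] @ rng.standard_normal(sparsity)

def sparse_code(D, X, k):
    """Crude sparse coding: keep the k largest-magnitude correlations per signal."""
    A = D.T @ X
    top = np.argsort(-np.abs(A), axis=0)[:k]
    mask = np.zeros(A.shape, dtype=bool)
    np.put_along_axis(mask, top, True, axis=0)
    return np.where(mask, A, 0.0)

def rel_error(D):
    """Relative reconstruction error of X under the current dictionary."""
    return np.linalg.norm(X - D @ sparse_code(D, X, sparsity)) / np.linalg.norm(X)

# Alternating minimization: sparse coding step, then a least-squares
# dictionary update, then renormalization of the atoms.
D = rng.standard_normal((n_features, n_atoms))
D /= np.linalg.norm(D, axis=0)
err_init = rel_error(D)
for _ in range(30):
    A = sparse_code(D, X, sparsity)
    D = X @ np.linalg.pinv(A)               # least-squares dictionary update
    D /= np.linalg.norm(D, axis=0) + 1e-12  # keep atoms unit-norm
err_final = rel_error(D)
```

The reconstruction error drops over the iterations even though the problem is non-convex in (D, A) jointly; the identifiability results discussed in the talk characterize when such procedures recover the underlying overcomplete dictionary despite noise and outliers.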

This tutorial draws a panorama of dictionary learning and related sparse matrix factorization approaches to low-dimensional modeling. The talk covers principles and connections with blind signal separation, algorithms, and an overview of recent theoretical results guaranteeing the robust identifiability of overcomplete dictionaries in the presence of noise and outliers.

Powered by CONWIZ, © Copyright 2014. Page maintained by Jan Larsen.