Automated abnormality detection in brain images

The research focuses on the design and implementation of algorithms that use multivariate extreme value theory to automatically detect and locate abnormalities (e.g., white matter hyperintensities, lesions, tumors, hypoperfused areas) caused by traumatic brain injury, cancer, multiple sclerosis, and ischemic stroke in multi-contrast brain MR images.
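
As a rough illustration of how extreme value theory can drive voxel-wise detection, the sketch below fits a Gaussian model to healthy multi-contrast intensities, models the tail of their Mahalanobis distances with a generalized Pareto distribution (peaks over threshold), and flags voxels with very small tail probabilities. The function names, thresholds, and toy data are illustrative assumptions, not the project's actual pipeline.

```python
import numpy as np
from scipy.stats import genpareto

def fit_tail_model(healthy, quantile=0.95):
    """Fit a Gaussian to healthy multi-contrast voxels and a generalized Pareto
    distribution to the tail of their Mahalanobis distances (peaks over threshold)."""
    mu = healthy.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(healthy, rowvar=False))
    d = np.sqrt(np.einsum('ij,jk,ik->i', healthy - mu, cov_inv, healthy - mu))
    u = np.quantile(d, quantile)                       # tail threshold
    c, loc, scale = genpareto.fit(d[d > u] - u, floc=0.0)
    return mu, cov_inv, u, (c, loc, scale), 1.0 - quantile

def abnormality_pvalue(voxels, model):
    """Per-voxel tail probability: small values indicate abnormal intensities."""
    mu, cov_inv, u, (c, loc, scale), p_u = model
    d = np.sqrt(np.einsum('ij,jk,ik->i', voxels - mu, cov_inv, voxels - mu))
    p = np.ones(len(d))                                # non-tail voxels treated as normal
    tail = d > u
    p[tail] = p_u * genpareto.sf(d[tail] - u, c, loc=loc, scale=scale)
    return p

# Toy example: 4 MR contrasts per voxel; flag voxels with tail probability < 1e-4.
rng = np.random.default_rng(0)
healthy = rng.normal(size=(5000, 4))
test = np.vstack([rng.normal(size=(95, 4)), rng.normal(6.0, 1.0, size=(5, 4))])
model = fit_tail_model(healthy)
print((abnormality_pvalue(test, model) < 1e-4).sum(), "voxels flagged")
```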

Image reconstruction and low-level processing

The research focuses on the design and implementation of compressed sensing-type MRI reconstruction mechanisms with emphasis on cardiac and abdominal imaging, as well as on the reconstruction/processing of 4D CT data of the lung for radiation therapy.
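
A minimal sketch of the kind of inverse problem involved, assuming a single-coil Cartesian acquisition and sparsity imposed directly in the image domain: the iterative soft-thresholding algorithm (ISTA) applied to min_x 0.5‖M F x − y‖² + λ‖x‖₁. The sampling mask, regularization weight, and phantom are illustrative; a practical reconstruction would use a wavelet or total-variation prior and account for coil sensitivities.

```python
import numpy as np

def cs_mri_ista(kspace, mask, lam=0.01, n_iter=200):
    """Reconstruct a real-valued image from undersampled k-space by ISTA:
    gradient step on the data term 0.5*||mask*F(x) - y||^2, then soft
    thresholding to promote sparsity of the image itself."""
    x = np.zeros(kspace.shape)
    for _ in range(n_iter):
        resid = mask * np.fft.fft2(x, norm="ortho") - kspace
        grad = np.fft.ifft2(mask * resid, norm="ortho").real   # adjoint of the sampling operator
        z = x - grad                                           # step size 1 (orthonormal FFT)
        x = np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)      # soft threshold
    return x

# Toy example: a sparse phantom sampled on a random 30% k-space mask.
rng = np.random.default_rng(1)
img = np.zeros((64, 64)); img[20:28, 30:38] = 1.0
mask = (rng.random((64, 64)) < 0.3).astype(float)
y = mask * np.fft.fft2(img, norm="ortho")
rec = cs_mri_ista(y, mask, lam=0.005)
print("relative reconstruction error:", np.linalg.norm(rec - img) / np.linalg.norm(img))
```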

Neuroimaging and the human connectome

This research focuses on the inference of brain connectivity from advanced diffusion MRI protocols (e.g., DSI, HARDI) and the fusion of the resulting structural connectivity information with functional connectivity. The project includes developing techniques for HARDI reconstruction, deterministic and probabilistic whole-brain tractography, quantification of the (loss of) structural connectivity, and discrimination between normal controls and subjects with neurodegenerative diseases. Recently, we have also focused on online methods for near real-time white matter fiber track clustering.
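
For orientation, the sketch below shows a bare-bones deterministic streamline tracker: Euler integration along a precomputed voxel-wise peak direction, with fractional-anisotropy and curvature stopping criteria and nearest-neighbour interpolation. The array layout, thresholds, and synthetic direction field are assumptions for illustration only.

```python
import numpy as np

def track_fiber(peak_dirs, fa, seed, step=0.5, fa_thresh=0.2,
                angle_thresh=60.0, max_steps=1000):
    """Deterministic streamline tractography sketch: Euler integration along the
    voxel-wise principal diffusion direction, stopping on low FA or sharp turns."""
    pos = np.asarray(seed, dtype=float)
    prev_dir = None
    streamline = [pos.copy()]
    cos_thresh = np.cos(np.deg2rad(angle_thresh))
    for _ in range(max_steps):
        idx = tuple(np.round(pos).astype(int))          # nearest-neighbour interpolation
        if not all(0 <= i < s for i, s in zip(idx, fa.shape)) or fa[idx] < fa_thresh:
            break
        d = peak_dirs[idx]
        if prev_dir is not None:
            if np.dot(d, prev_dir) < 0:                  # keep a consistent orientation
                d = -d
            if np.dot(d, prev_dir) < cos_thresh:         # curvature too high
                break
        pos = pos + step * d
        prev_dir = d
        streamline.append(pos.copy())
    return np.array(streamline)

# Toy example: a field whose principal direction points along the first axis everywhere.
shape = (20, 20, 20)
peak_dirs = np.zeros(shape + (3,)); peak_dirs[..., 0] = 1.0
fa = np.full(shape, 0.8)
print(track_fiber(peak_dirs, fa, seed=(2.0, 10.0, 10.0)).shape)
```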

Processing of HARDI data on the manifold of ODFs

The objective of this project is to develop mathematical methods to process (average, interpolate, filter), segment, and align HARDI data described by orientation distribution functions (ODFs). Our methods use tools from Riemannian geometry, compressed sensing, harmonic analysis, and graph theory. The resulting methods can be used to build more accurate ODF atlases of white matter.
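
One common way to endow ODFs with a Riemannian structure is the square-root representation, which maps a discretized ODF to a point on the unit hypersphere; the sketch below computes an intrinsic (Karcher) mean in that geometry using spherical log/exp maps. The discretization, helper names, and toy data are assumptions for illustration, not the project's exact formulation.

```python
import numpy as np

def sphere_log(p, q):
    """Log map on the unit hypersphere: tangent vector at p pointing toward q."""
    cos_t = np.clip(np.dot(p, q), -1.0, 1.0)
    theta = np.arccos(cos_t)
    if theta < 1e-12:
        return np.zeros_like(p)
    return theta / np.sin(theta) * (q - cos_t * p)

def sphere_exp(p, v):
    """Exp map on the unit hypersphere."""
    norm_v = np.linalg.norm(v)
    if norm_v < 1e-12:
        return p
    return np.cos(norm_v) * p + np.sin(norm_v) / norm_v * v

def karcher_mean_odf(odfs, n_iter=50, tol=1e-10):
    """Intrinsic mean of discretized ODFs under the square-root metric: each ODF
    (nonnegative, summing to one over the sample directions) maps to the unit
    hypersphere via its elementwise square root, and the Karcher mean is found
    by gradient descent with the log/exp maps above."""
    roots = np.sqrt(odfs)                      # points on the unit hypersphere
    mean = roots[0].copy()
    for _ in range(n_iter):
        t = np.mean([sphere_log(mean, r) for r in roots], axis=0)
        if np.linalg.norm(t) < tol:
            break
        mean = sphere_exp(mean, t)
    return mean ** 2                           # back to an ODF

# Toy example: average two ODFs sampled on 100 directions.
rng = np.random.default_rng(2)
odfs = rng.random((2, 100)); odfs /= odfs.sum(axis=1, keepdims=True)
print(karcher_mean_odf(odfs).sum())            # ~1 by construction
```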

Analysis of fibrous structures in 3D medical images

The objectives are to 1) create directional maps of fibrous structures, 2) robustly detect local complexities such as crossings or branching points, and 3) devise novel deterministic and stochastic techniques for fiber tracking. Specifically, we investigate structures such as microtubules, cardiac myofibers, the free-running cardiac Purkinje system, and coronary arteries. Our methods involve nonlinear filtering techniques, robust branch detectors, and analysis of orientation distribution functions (ODFs).
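
As a simple example of building a directional map, the sketch below estimates local fiber orientation from the 3-D structure tensor: the eigenvector associated with the smallest eigenvalue of the smoothed gradient outer product points along the fiber, where intensities vary least. The smoothing scales and synthetic tube are illustrative; the actual filters and branch detectors in the project are more elaborate.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fiber_orientation_map(volume, sigma_grad=1.0, sigma_tensor=2.0):
    """Directional map from the 3-D structure tensor: smooth the volume, take
    intensity gradients, smooth their outer products over a neighbourhood, and
    keep the eigenvector of the smallest eigenvalue as the fiber direction."""
    grads = np.gradient(gaussian_filter(volume.astype(float), sigma_grad))
    J = np.empty(volume.shape + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = gaussian_filter(grads[i] * grads[j], sigma_tensor)
    w, v = np.linalg.eigh(J)                   # eigenvalues in ascending order
    directions = v[..., :, 0]                  # eigenvector of the smallest eigenvalue
    anisotropy = (w[..., 2] - w[..., 0]) / (w[..., 2] + 1e-12)
    return directions, anisotropy

# Toy example: a synthetic tube running along the first axis.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
tube = np.exp(-((x - 16) ** 2 + (y - 16) ** 2) / 8.0)
dirs, aniso = fiber_orientation_map(tube)
print(np.abs(dirs[16, 16, 16]))                # dominant component along axis 0
```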

Unsupervised manifold clustering for object recognition and motion segmentation

We investigate an alternative mean shift formulation that performs the iterative optimization on the manifold of interest and intrinsically locates the modes via consecutive evaluations of a mapping. In particular, these evaluations constitute a modified gradient ascent scheme on Stiefel and Grassmann manifolds. Performance is evaluated through experiments on object categorization and segmentation of multiple motions.
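
A minimal sketch of intrinsic mean shift on the Grassmann manifold, assuming subspaces are represented by orthonormal bases: each iteration weights the data with a Gaussian kernel on geodesic distances, averages their log-map images in the tangent space at the current estimate, and moves along the exponential map. The kernel bandwidth, helper names, and toy data are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def grass_dist(Y, X):
    """Geodesic distance between span(Y) and span(X) via principal angles."""
    s = np.clip(np.linalg.svd(Y.T @ X, compute_uv=False), -1.0, 1.0)
    return np.linalg.norm(np.arccos(s))

def grass_log(Y, X):
    """Log map on the Grassmann manifold: tangent at span(Y) pointing to span(X)."""
    M = Y.T @ X
    U, s, Vt = np.linalg.svd((X - Y @ M) @ np.linalg.inv(M), full_matrices=False)
    return U @ np.diag(np.arctan(s)) @ Vt

def grass_exp(Y, H):
    """Exp map on the Grassmann manifold."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    Ynew = Y @ Vt.T @ np.diag(np.cos(s)) @ Vt + U @ np.diag(np.sin(s)) @ Vt
    return np.linalg.qr(Ynew)[0]               # re-orthonormalize

def intrinsic_mean_shift(points, start, bandwidth=0.5, n_iter=100, tol=1e-6):
    """One mean-shift trajectory: move the estimate along the kernel-weighted
    tangent mean of the data, i.e. a gradient ascent of the kernel density."""
    m = start
    for _ in range(n_iter):
        w = np.array([np.exp(-(grass_dist(m, X) / bandwidth) ** 2) for X in points])
        shift = sum(wi * grass_log(m, X) for wi, X in zip(w, points)) / w.sum()
        m = grass_exp(m, shift)
        if np.linalg.norm(shift) < tol:
            break
    return m

# Toy example: 2-D subspaces of R^5 clustered around a common plane.
rng = np.random.default_rng(3)
base = np.linalg.qr(rng.normal(size=(5, 2)))[0]
points = [np.linalg.qr(base + 0.1 * rng.normal(size=(5, 2)))[0] for _ in range(20)]
mode = intrinsic_mean_shift(points, points[0])
print(grass_dist(mode, base))                  # close to 0
```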

Dynamical system theoretic lip articulation analysis

We present a system for synthesizing lip movements and recognizing speakers/phrases from visual lip sequences. The temporal evolution of the lip features is modeled with linear dynamical systems (LDS), whose parameters are learned using system identification techniques. By carefully exploiting physical constraints of lip movement both in the learning and synthesis stages, realistic synthesis of novel sequences based on the learned LDS is achieved. Recognition is performed using classification methods, such as nearest neighbors and support vector machines, combined with metrics for dynamical systems, such as subspace angles and Binet-Cauchy trace kernels.
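
To make the modeling pipeline concrete, the sketch below learns an LDS from a feature sequence with a PCA-based identification step (C from the left singular vectors, A by least squares on the estimated state sequence) and compares two systems with a Martin-type distance built from the principal angles of their finite-horizon observability subspaces. The state dimension, horizon, and toy sequences are assumptions for illustration only.

```python
import numpy as np

def learn_lds(Y, n_states=5):
    """PCA-based system identification for y_t = C x_t + w_t, x_{t+1} = A x_t + v_t:
    C from the left singular vectors of the data matrix (features x time),
    A by least squares on the recovered state sequence."""
    U, S, Vt = np.linalg.svd(Y, full_matrices=False)
    C = U[:, :n_states]
    X = np.diag(S[:n_states]) @ Vt[:n_states]          # state sequence
    A = X[:, 1:] @ np.linalg.pinv(X[:, :-1])           # least-squares dynamics
    return A, C

def subspace_angle_distance(sys1, sys2, horizon=10):
    """Martin-type distance between two LDSs from the principal angles of their
    finite-horizon observability subspaces [C; CA; ...; CA^(h-1)]."""
    def obs(A, C):
        blocks, M = [], C
        for _ in range(horizon):
            blocks.append(M)
            M = M @ A
        return np.linalg.qr(np.vstack(blocks))[0]
    Q1, Q2 = obs(*sys1), obs(*sys2)
    cos_t = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), 1e-12, 1.0)
    return np.sqrt(-np.sum(np.log(cos_t ** 2)))

# Toy example: two lip-feature sequences (features x time) from different "speakers".
rng = np.random.default_rng(4)
seq1, seq2 = rng.normal(size=(30, 100)), rng.normal(size=(30, 100))
print("LDS distance:", subspace_angle_distance(learn_lds(seq1), learn_lds(seq2)))
```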

Discriminative lip motion features for multimodal speaker identification and speech-reading

We present a new multimodal speaker/speech recognition system that integrates audio, lip texture, and lip motion modalities. We propose using explicit lip motion information, instead of or in addition to lip texture and/or geometry information, within a unified feature selection and discrimination analysis framework. A novel two-stage, spatial and temporal discrimination analysis is also introduced to select the best lip motion features. It solves the dimension reduction problem optimally, taking into account the intra-class and inter-class distributions of individual single-frame lip feature vectors as well as the temporal discrimination information. We then investigate the benefit of including the best, i.e., the most discriminative, lip motion features for multimodal recognition. The fusion of audio, lip texture, and lip motion modalities is performed by the so-called Reliability Weighted Summation (RWS) decision rule.
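
A minimal sketch of reliability-weighted summation, under the assumption that each modality outputs a vector of normalized class scores: the fused score is their reliability-weighted sum and the decision is the arg-max class. The reliability values and scores below are made up for illustration; the actual rule estimates the reliabilities from the data rather than fixing them by hand.

```python
import numpy as np

def rws_fusion(scores, reliabilities):
    """Reliability Weighted Summation sketch: combine per-modality class scores
    with normalized reliability weights and pick the highest-scoring class."""
    w = np.asarray(reliabilities, dtype=float)
    w = w / w.sum()                                    # normalize the weights
    fused = sum(wi * np.asarray(s, dtype=float) for wi, s in zip(w, scores))
    return int(np.argmax(fused)), fused

# Toy example: 3 modalities (audio, lip texture, lip motion) scoring 4 speakers.
audio       = [0.70, 0.10, 0.15, 0.05]
lip_texture = [0.30, 0.40, 0.20, 0.10]
lip_motion  = [0.60, 0.20, 0.10, 0.10]
speaker, fused = rws_fusion([audio, lip_texture, lip_motion],
                            reliabilities=[0.5, 0.2, 0.3])
print("accepted speaker:", speaker, fused)
```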