MSc.

Sonia Laguna

PhD Student

E-Mail
sonia.lagunacillero@inf.ethz.ch
Address
Department of Computer Science
CAB G 15.2
Universitätstr. 6
CH – 8092 Zurich, Switzerland
Room
CAB G 15.2

I completed my Bachelor’s Degree in Biomedical Engineering at Universidad Carlos III de Madrid in 2020. During this time, I spent one year as an exchange student at the Georgia Institute of Technology, USA, where I started my research path working on medical devices. Later, I carried out an internship at ETH Zürich in the Mobile Health Systems Lab, working on medical data analysis. I obtained my Master’s Degree from the Department of Information Technology and Electrical Engineering (Bioimaging) at ETH Zurich in 2022, specializing in machine learning and computer vision. During my Master’s studies, I spent a semester at the Athinoula A. Martinos Center for Biomedical Imaging, affiliated with Harvard University, working on generative models for super-resolution of MRI images. For my Master’s thesis, I worked at the Computer Vision Laboratory at ETH, focusing on uncertainty estimation in deep variational networks for ultrasound image reconstruction. There, I developed an interest in understanding the underlying concepts of machine learning models and their interpretation. In January 2023, I joined the Medical Data Science group as a PhD student.

I am interested in generative models and in the explainability and interpretability of machine learning methods, with the goal of better understanding their behaviour, especially when they are applied to high-risk domains such as medical data.

Abstract

Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input. Through time-consuming manual interventions, a user can correct wrongly predicted concept values to enhance the model's downstream performance. We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies. In SCBMs, a single-concept intervention affects all correlated concepts. Leveraging the parameterization, we derive an effective intervention strategy based on the confidence region. We show empirically on synthetic tabular and natural image datasets that our approach improves intervention effectiveness significantly. Notably, we showcase the versatility and usability of SCBMs by examining a setting with CLIP-inferred concepts, alleviating the need for manual concept annotations.
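
To make the intervention mechanism concrete, the sketch below shows how a single-concept intervention can propagate to all correlated concepts when concept dependencies are modelled by a multivariate normal distribution over concept logits. The conditional-Gaussian update and all names here are illustrative assumptions, not the exact SCBM parameterization.

```python
import numpy as np

def intervene(mu, Sigma, idx, value):
    """Condition a Gaussian over concept logits on one intervened concept.

    mu:    (C,) predicted mean of the concept logits
    Sigma: (C, C) predicted covariance encoding concept dependencies
    idx:   index of the concept the user corrects
    value: corrected logit for that concept

    Returns the conditional mean and covariance of the remaining concepts,
    so that a single-concept intervention shifts all correlated concepts.
    """
    rest = [i for i in range(len(mu)) if i != idx]
    cross = Sigma[rest, idx]                       # cross-covariances
    gain = cross / Sigma[idx, idx]                 # regression coefficients
    mu_cond = mu[rest] + gain * (value - mu[idx])
    Sigma_cond = Sigma[np.ix_(rest, rest)] - np.outer(gain, cross)
    return mu_cond, Sigma_cond

# Toy example: two positively correlated concepts; correcting the first
# concept upward also raises the predicted value of the second.
mu = np.array([0.0, 0.0])
Sigma = np.array([[1.0, 0.8], [0.8, 1.0]])
print(intervene(mu, Sigma, idx=0, value=2.0))
```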

Authors

Moritz Vandenhirtz*, Sonia Laguna*, Ričards Marcinkevičs, Julia E. Vogt
* denotes shared first authorship

Submitted

ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling; Workshop on Models of Human Feedback for AI Alignment; and Workshop on Humans, Algorithmic Decision-Making and Society

Date

26.07.2024

Abstract

Recently, interpretable machine learning has re-explored concept bottleneck models (CBM), comprising step-by-step prediction of the high-level concepts from the raw features and the target variable from the predicted concepts. A compelling advantage of this model class is the user's ability to intervene on the predicted concept values, affecting the model's downstream output. In this work, we introduce a method to perform such concept-based interventions on already-trained neural networks, which are not interpretable by design, given only an annotated validation set. Furthermore, we formalise the model's intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black-box models. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We demonstrate that fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of the proposed techniques, we apply them to deep chest X-ray classifiers and show that fine-tuned black boxes can be as intervenable and more performant than CBMs.
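
As a rough illustration of the intervenability notion, the sketch below measures how much the downstream loss improves when the black box's hidden activations are edited so that a concept probe matches the corrected concepts. The names `encoder`, `head`, and `probe` are hypothetical stand-ins for the black box's feature extractor, its output layer, and a probe fitted on the annotated validation set; the gradient-based activation edit is one plausible intervention mechanism, not necessarily the paper's procedure.

```python
import torch
import torch.nn.functional as F

def intervenability(encoder, head, probe, x, c_true, y, steps=20, lr=0.5):
    """Downstream-loss gain from a concept-based intervention on activations."""
    h = encoder(x)
    loss_before = F.cross_entropy(head(h), y)
    h_edit = h.detach().clone().requires_grad_(True)
    opt = torch.optim.SGD([h_edit], lr=lr)
    for _ in range(steps):  # push the probed concepts toward the true ones
        opt.zero_grad()
        F.binary_cross_entropy_with_logits(probe(h_edit), c_true).backward()
        opt.step()
    loss_after = F.cross_entropy(head(h_edit), y)
    return (loss_before - loss_after).item()  # larger = more intervenable

# Toy example with linear modules standing in for a black-box classifier.
enc, head, probe = torch.nn.Linear(8, 16), torch.nn.Linear(16, 3), torch.nn.Linear(16, 4)
x, y = torch.randn(5, 8), torch.randint(0, 3, (5,))
c_true = torch.randint(0, 2, (5, 4)).float()
print(intervenability(enc, head, probe, x, c_true, y))
```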

Authors

Sonia Laguna*, Ričards Marcinkevičs*, Moritz Vandenhirtz, Julia E. Vogt
* denotes shared first authorship

Submitted

arXiv

Date

24.01.2024

Abstract

ExpLIMEable is a tool to enhance the comprehension of Local Interpretable Model-Agnostic Explanations (LIME), particularly within the realm of medical image analysis. LIME explanations often lack robustness due to variance in perturbation techniques and in the choice of interpretable function. Powered by a convolutional neural network for brain MRI tumor classification, ExpLIMEable seeks to mitigate these issues. This explainability tool allows users to tailor and explore the explanation space generated post hoc by different LIME parameters to gain deeper insights into the model’s decision-making process, its sensitivity, and its limitations. We introduce a novel dimension reduction step on the perturbations, seeking to find more informative neighborhood spaces, and extensive provenance tracking to support the user. This contribution ultimately aims to enhance the robustness of explanations, key in high-risk domains like healthcare.
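
The sketch below illustrates the kind of perturbation neighborhood the tool lets a user explore: LIME-style superpixel perturbations of a grayscale image, with a PCA projection standing in for the proposed dimension reduction step. The function names and the choice of PCA are illustrative assumptions; the tool's actual reduction may differ.

```python
import numpy as np

def lime_neighborhood(image, segments, model, n_samples=500, seed=0):
    """Generate LIME-style perturbations and a 2D view of the neighborhood.

    image:    grayscale image, shape (H, W)
    segments: integer superpixel label per pixel, shape (H, W)
    model:    maps a batch of images to class scores
    """
    rng = np.random.default_rng(seed)
    n_seg = segments.max() + 1
    masks = rng.integers(0, 2, size=(n_samples, n_seg))  # superpixels on/off
    batch = np.stack([image * masks[i][segments] for i in range(n_samples)])
    preds = model(batch)
    centered = masks - masks.mean(axis=0)                # PCA via SVD
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    coords = centered @ vt[:2].T                         # 2D neighborhood view
    return coords, preds

# Toy usage: 32x32 image, 4x4 grid of superpixels, dummy model.
img = np.random.rand(32, 32)
seg = np.arange(32)[:, None] // 8 * 4 + np.arange(32)[None, :] // 8
coords, preds = lime_neighborhood(img, seg, model=lambda b: b.mean(axis=(1, 2)))
```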

Authors

Sonia Laguna, Julian Heidenreich, Jiugeng Sun, Nilüfer Cetin, Ibrahim Al Hazwani, Udo Schlegel, Furui Cheng, Mennatallah El-Assady

Submitted

XAI in Action: Past, Present, and Future Applications, NeurIPS 2023

Date

16.12.2023

Abstract

Recently, interpretable machine learning has re-explored concept bottleneck models (CBM), comprising step-by-step prediction of the high-level concepts from the raw features and the target variable from the predicted concepts. A compelling advantage of this model class is the user's ability to intervene on the predicted concept values, consequently affecting the model's downstream output. In this work, we introduce a method to perform such concept-based interventions on already-trained neural networks, which are not interpretable by design. Furthermore, we formalise the model's intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black-box models. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We demonstrate that fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of the proposed techniques, we apply them to chest X-ray classifiers and show that fine-tuned black boxes can be as intervenable and more performant than CBMs.

Authors

Ričards Marcinkevičs*, Sonia Laguna*, Moritz Vandenhirtz, Julia E. Vogt
* denotes shared first authorship

Submitted

XAI in Action: Past, Present, and Future Applications, NeurIPS 2023

Date

16.12.2023

Abstract

Multimodal VAEs have recently received significant attention as generative models for weakly-supervised learning with multiple heterogeneous modalities. In parallel, VAE-based methods have been explored as probabilistic approaches for clustering tasks. Our work lies at the intersection of these two research directions. We propose a novel multimodal VAE model, in which the latent space is extended to learn data clusters, leveraging shared information across modalities. Our experiments show that our proposed model improves generative performance over existing multimodal VAEs, particularly for unconditional generation. Furthermore, our method compares favourably to alternative clustering approaches in weakly-supervised settings. Notably, we propose a post-hoc procedure that avoids the need for our method to have a priori knowledge of the true number of clusters, mitigating a critical limitation of previous clustering frameworks.
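
A minimal sketch of what a post-hoc cluster-selection procedure of this kind can look like: fit the model with a deliberately over-estimated number of mixture components in the latent space, then keep only the components that claim a non-negligible share of the data. The thresholding rule and names are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def posthoc_cluster_count(responsibilities, threshold=1e-2):
    """Prune unused mixture components after training.

    responsibilities: (N, K) posterior probabilities q(cluster | sample)
    Returns the number of retained clusters and a boolean keep-mask.
    """
    usage = responsibilities.mean(dim=0)  # average mass per component
    keep = usage > threshold
    return int(keep.sum()), keep

# Toy example: 3 real clusters hidden among K = 10 components.
r = torch.zeros(300, 10)
r[torch.arange(300), torch.randint(0, 3, (300,))] = 1.0
print(posthoc_cluster_count(r))  # -> (3, mask)
```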

Authors

Emanuele Palumbo, Sonia Laguna, Daphné Chopard, Julia E. Vogt

Submitted

ICML 2023 Workshop on Structured Probabilistic Inference/Generative Modeling

Date

23.06.2023

Abstract

Multimodal VAEs have recently received significant attention as generative models for weakly-supervised learning with multiple heterogeneous modalities. In parallel, VAE-based methods have been explored as probabilistic approaches for clustering tasks. Our work lies at the intersection of these two research directions. We propose a novel multimodal VAE model, in which the latent space is extended to learn data clusters, leveraging shared information across modalities. Our experiments show that our proposed model improves generative performance over existing multimodal VAEs, particularly for unconditional generation. Furthermore, our method compares favorably to alternative clustering approaches in weakly-supervised settings. Notably, we propose a post-hoc procedure that avoids the need for our method to have a priori knowledge of the true number of clusters, mitigating a critical limitation of previous clustering frameworks.

Authors

Emanuele Palumbo, Sonia Laguna, Daphné Chopard, Julia E. Vogt

Submitted

ICML 2023 Workshop on Deployable Generative AI

Date

23.06.2023

Abstract

Three-dimensional imaging of live processes at a cellular level is a challenging task. It requires high-speed acquisition capabilities, low phototoxicity, and low mechanical disturbances. Three-dimensional imaging in microfluidic devices poses additional challenges, as deep penetration of the light source is required, along with a stationary setting, so the flows are not perturbed. Different types of fluorescence microscopy techniques have been used to address these limitations, particularly confocal microscopy and light sheet fluorescence microscopy (LSFM). This manuscript proposes a novel architecture for a type of LSFM, single-plane illumination microscopy (SPIM). This custom-made microscope includes two mirror galvanometers to scan the sample vertically and reduce shadowing artifacts while avoiding unnecessary movement. In addition, two electro-tunable lenses fine-tune the focus position and reduce the scattering caused by the microfluidic devices. The microscope has been fully set up and characterized, achieving a resolution of 1.50 μm in the x-y plane and 7.93 μm in the z-direction. The proposed architecture rises to the challenges posed when imaging microfluidic devices and live processes, as it can successfully acquire 3D volumetric images together with time-lapse recordings, and it is thus a suitable microscopic technique for live tracking of miniaturized tissue and disease models.

Authors

Clara Gomez-Cruz, Sonia Laguna, Ariadna Bachiller-Pulido, Cristina Quilez, Marina Cañadas-Ortega, Ignacio Albert-Smet, Jorge Ripoll, Arrate Muñoz-Barrutia

Submitted

Biosensors

Date

01.12.2022

Abstract

Synthetic super-resolved images generated by a machine learning algorithm from portable low-field-strength (0.064-T) brain MRI had good agreement with real images at high field strength (1.5–3 T).

Authors

Juan Eugenio Iglesias, Riana Schleicher, Sonia Laguna, Benjamin Billot, Pamela Schaefer, Brenna McKaig, Joshua N. Goldstein, Kevin N. Sheth, Matthew S. Rosen, W. Taylor Kimberly

Submitted

Radiology

Date

08.11.2022

Abstract

Portable low-field MRI has the potential to revolutionize neuroimaging by enabling point-of-care imaging and affordable scanning in underserved areas. The lower resolution and signal-to-noise ratio of these scans preclude image analysis with existing tools. Super-resolution (SR) methods can overcome this limitation, but: (i) training with downsampled high-field scans fails to generalize; and (ii) training with paired low/high-field data is hard due to the lack of perfectly aligned images. Here, we present an architecture that combines denoising, SR and domain adaptation modules to tackle this problem. The denoising and SR components are pretrained in a supervised fashion with large amounts of existing high-resolution data, whereas unsupervised learning is used for domain adaptation and end-to-end fine-tuning. We present preliminary results on a dataset of 11 low-field scans. The results show that our method enables segmentation with existing tools, which yield ROI volumes that correlate strongly with those derived from high-field scans (ρ > 0.8).
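
A structural sketch of the three-module design, assuming PyTorch-style modules: the module internals and their composition order are placeholders for exposition, not the paper's actual architecture.

```python
import torch.nn as nn

class LowFieldSuperRes(nn.Module):
    """Denoising + super-resolution + domain adaptation, composed end to end."""

    def __init__(self, adapter: nn.Module, denoiser: nn.Module, sr: nn.Module):
        super().__init__()
        self.adapter = adapter    # trained unsupervised (domain adaptation)
        self.denoiser = denoiser  # pretrained, supervised, on high-res data
        self.sr = sr              # pretrained, supervised, on high-res data

    def forward(self, low_field_scan):
        x = self.adapter(low_field_scan)  # map input toward the domain the
        x = self.denoiser(x)              # pretrained modules expect
        return self.sr(x)                 # synthesize the high-res image
```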

Authors

Sonia Laguna, Riana Schleicher, Benjamin Billot, Pamela Schaefer, Brenna McKaig, Joshua N Goldstein, Kevin N Sheth, Matthew S Rosen, W Taylor Kimberly, Juan Eugenio Iglesias

Submitted

Medical Imaging with Deep Learning

Date

07.07.2022

Abstract

The recent introduction of portable, low-field MRI (LF-MRI) into the clinical setting has the potential to transform neuroimaging. However, LF-MRI is limited by lower resolution and signal-to-noise ratio, leading to incomplete characterization of brain regions. To address this challenge, recent advances in machine learning facilitate the synthesis of higher resolution images derived from one or multiple lower resolution scans. Here, we report the extension of a machine learning super-resolution (SR) algorithm to synthesize 1 mm isotropic MPRAGE-like scans from LF-MRI T1-weighted and T2-weighted sequences. Our initial results on a paired dataset of LF and high-field (HF, 1.5T-3T) clinical scans show that: (i) application of available automated segmentation tools directly to LF-MRI images falters; but (ii) segmentation tools succeed when applied to SR images, with high correlation to gold standard measurements from HF-MRI (e.g., r = 0.85 for hippocampal volume, r = 0.84 for the thalamus, r = 0.92 for the whole cerebrum). This work demonstrates proof-of-principle post-processing image enhancement from lower resolution LF-MRI sequences. These results lay the foundation for future work to enhance the detection of normal and abnormal image findings at LF and ultimately improve the diagnostic performance of LF-MRI. Our tools are publicly available on FreeSurfer (surfer.nmr.mgh.harvard.edu/).
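
For concreteness, the quoted numbers correspond to a validation step of the following shape: Pearson correlation between per-subject ROI volumes segmented from super-resolved low-field images and from paired high-field scans. The names and data layout are assumptions for illustration.

```python
import numpy as np

def roi_correlations(volumes_sr, volumes_hf):
    """Pearson r per ROI between SR-derived and high-field-derived volumes.

    volumes_sr, volumes_hf: dicts mapping ROI name -> per-subject volumes,
    e.g. as measured by a segmentation tool such as FreeSurfer.
    """
    return {roi: np.corrcoef(volumes_sr[roi], volumes_hf[roi])[0, 1]
            for roi in volumes_sr}

# Toy example with synthetic volumes (mm^3) for two ROIs and 20 subjects.
rng = np.random.default_rng(0)
hf = {"hippocampus": rng.normal(3500, 300, 20), "thalamus": rng.normal(6500, 500, 20)}
sr = {roi: v + rng.normal(0, 150, 20) for roi, v in hf.items()}
print(roi_correlations(sr, hf))
```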

Authors

Juan Eugenio Iglesias, Riana Schleicher, Sonia Laguna, Benjamin Billot, Pamela Schaefer, Brenna McKaig, Joshua N. Goldstein, Kevin N. Sheth, Matthew S. Rosen, W. Taylor Kimberly

Submitted

arXiv preprint arXiv:2202.03564

Date

07.02.2022