MSc. Sonia Laguna

PhD Student

E-Mail
sonia.lagunacillero@inf.ethz.ch
Address
Department of Computer Science
CAB G 15.2
Universitätstr. 6
CH-8092 Zurich, Switzerland
Room
CAB G 15.2

My PhD research focuses on the interpretability of machine learning methods and on the development of generative models (diffusion models, VAEs, LLMs), aiming to better understand and control them through their representations. Recently, I have become interested in machine unlearning, in an effort to understand how models can selectively forget data while retaining general knowledge. In parallel, I am interested in the development of foundation models as part of the Swiss AI initiative, to better understand VLMs and exploit their capabilities in real-world clinical applications. During my PhD, I have been a visiting student at Cambridge University working on alignment and interpretability of LLMs, and a Research Intern and Student Researcher at Google, developing 3D diffusion-based generative models in the AR & VR team. I am the team co-leader of CSNOW, the Computer Science Network of Women at ETH.

Prior to my doctoral studies, I obtained an MSc from the Department of Information Technology and Electrical Engineering (Bioimaging) at ETH Zurich, specializing in machine learning and computer vision, and spent a semester at Harvard University working on 3D generative models for super-resolution of MRIs. For my thesis, I worked at the Computer Vision Laboratory at ETH, focusing on uncertainty estimation in deep variational networks for ultrasound reconstruction. Before that, I completed my BSc in Biomedical Engineering at Universidad Carlos III de Madrid, spent one year at the Georgia Institute of Technology, and carried out an internship at ETH Zurich as an Amgen Scholar, working on medical data analysis.

I am always happy to collaborate and discuss new topics, feel free to reach out!

Further details and up-to-date information can be found on my webpage.

Publications

Abstract

We present RadVLM, a compact (7B) multitask conversational foundation model designed for chest X-ray (CXR) interpretation. Its development relies on the curation of a large-scale instruction dataset comprising over 1 million image-instruction pairs, covering both single-turn tasks, such as report generation, abnormality classification, and visual grounding, and multi-turn, multi-task conversational interactions. Our experiments show that RadVLM, fine-tuned on this instruction dataset, achieves state-of-the-art performance in conversational capabilities and visual grounding while remaining competitive in other radiology tasks (report generation, classification). Ablation studies further highlight the benefit of joint training across multiple tasks, particularly in scenarios with limited annotated data. Together, these findings highlight the potential of RadVLM as a clinically relevant AI assistant, providing structured CXR interpretation and conversational capabilities to support more effective and accessible diagnostic workflows.

Authors

Nicolas Deperrois, Hidetoshi Matsuo, Samuel Ruipérez-Campillo, Moritz Vandenhirtz, Sonia Laguna, Alain Ryser, Koji Fujimoto, Mizuho Nishio, Thomas Sutter, Julia Vogt, Jonas Kluckert, Thomas Frauenfelder, Christian Bluethgen, Farhad Nooralahzadeh, Michael Krauthammer

Submitted

PhysioNet

Date

08.10.2025

Link / DOI

Abstract

We release the RadVLM instruction dataset, a large-scale resource used to train the RadVLM model on diverse radiology tasks. The dataset contains 1,115,021 image–instruction pairs spanning five task families: (i) report generation from frontal CXRs using filtered Findings/Impression text; (ii) abnormality classification for the standard 14 CheXpert labels; (iii) anatomy grounding; (iv) abnormality detection and grounding; and (v) phrase grounding from report sentences. To support interactive use, we include ~89k LLM-generated multi-turn, multi-task conversations (~3k with spatial grounding) derived from image-linked attributes (reports, labels, boxes). Creation involved curating datasets from public sources, excluding lateral views, removing prior-study references and other non-image context from reports, fusing multi-reader annotations, and harmonizing label and coordinate formats. The resource is intended for training CXR assistants across diverse radiology tasks and within a conversational format.
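
To make the format concrete, here is a sketch of what a single-turn record and a multi-turn conversation might look like; the field names and paths are illustrative assumptions, not the released schema.

    # Hypothetical structure of one image-instruction pair (Python dicts);
    # consult the released dataset for the actual field names and formats.
    single_turn = {
        "image": "cxr/patient123/study1/frontal.jpg",  # hypothetical path
        "task": "abnormality_classification",
        "instruction": "Which of the 14 CheXpert abnormalities are present?",
        "answer": "Cardiomegaly, Pleural Effusion",
    }

    multi_turn = {
        "image": "cxr/patient123/study1/frontal.jpg",
        "turns": [
            {"role": "user", "content": "Generate the findings for this CXR."},
            {"role": "assistant", "content": "The heart size is enlarged. ..."},
            {"role": "user", "content": "Where is the effusion? Give a box."},
            {"role": "assistant", "content": "[0.55, 0.60, 0.95, 0.90]"},
        ],
    }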

Authors

Nicolas Deperrois, Hidetoshi Matsuo, Samuel Ruipérez-Campillo, Moritz Vandenhirtz, Sonia Laguna, Alain Ryser, Koji Fujimoto, Mizuho Nishio, Thomas Sutter, Julia Vogt, Jonas Kluckert, Thomas Frauenfelder, Christian Bluethgen, Farhad Nooralahzadeh, Michael Krauthammer

Submitted

PhysioNet

Date

25.09.2025

Link / DOI

Abstract

General movements (GMs) are spontaneous, coordinated body movements in infants that offer valuable insights into the developing nervous system. Assessed through the Prechtl GM Assessment (GMA), GMs are reliable predictors for neurodevelopmental disorders. However, GMA requires specifically trained clinicians, who are limited in number. To scale up newborn screening, there is a need for an algorithm that can automatically classify GMs from infant video recordings. These recordings pose challenges, including variability in recording length, device type, and setting, with each video only coarsely annotated for overall movement quality. In this work, we introduce a tool for extracting features from these recordings and explore various machine learning techniques for automated GM classification.

Authors

Daphné Chopard*, Sonia Laguna*, Kieran Chin-Cheong*, Annika Dietz, Anna Badura, Sven Wellmann, Julia E Vogt
* denotes shared first authorship

Submitted

Proceedings of Machine Learning Research, Machine Learning for Healthcare 2025; a previous version appeared in the ICLR 2025 Workshop AI4CHL (Best Paper Award, Oral)

Date

15.08.2025

Link / Code

Abstract

Building generalizable medical AI systems requires pretraining strategies that are data-efficient and domain-aware. Unlike internet-scale corpora, clinical datasets such as MIMIC-CXR offer limited image counts and scarce annotations, but exhibit rich internal structure through multi-view imaging. We propose a self-supervised framework that leverages the inherent structure of medical datasets. Specifically, we treat paired chest X-rays (i.e., frontal and lateral views) as natural positive pairs, learning to reconstruct each view from sparse patches while aligning their latent embeddings. Our method requires no textual supervision and produces informative representations. Evaluated on MIMIC-CXR, we show strong performance compared to supervised objectives and to baselines trained without leveraging structure. This work provides a lightweight, modality-agnostic blueprint for domain-specific pretraining where data is structured but scarce.
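
A minimal sketch of the paired-view objective described above, assuming a PyTorch-style encoder and decoder; the masking interface, cosine alignment term, and loss weighting are illustrative assumptions rather than the paper's implementation.

    import torch
    import torch.nn.functional as F

    def paired_view_loss(encoder, decoder, frontal, lateral,
                         mask_ratio=0.75, align_weight=1.0):
        """Reconstruct each masked view and align the two latents."""
        # Encode each view from a sparse subset of patches (MAE-style);
        # the mask_ratio keyword is an assumed interface of the encoder.
        z_f = encoder(frontal, mask_ratio=mask_ratio)
        z_l = encoder(lateral, mask_ratio=mask_ratio)

        # Masked reconstruction of each view from its own sparse patches
        rec = F.mse_loss(decoder(z_f), frontal) + F.mse_loss(decoder(z_l), lateral)

        # Alignment: treat frontal/lateral views of one study as a positive pair
        align = 1.0 - F.cosine_similarity(z_f.flatten(1), z_l.flatten(1)).mean()

        return rec + align_weight * align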

Authors

Andrea Agostini*, Sonia Laguna*, Alain Ryser*, Samuel Ruipérez-Campillo*, Moritz Vandenhirtz, Nicolas Deperrois, Farhad Nooralahzadeh, Michael Krauthammer, Thomas M Sutter, Julia E Vogt
* denotes shared first authorship; † denotes shared last authorship

Submitted

International Conference on Machine Learning (ICML) 2025 Workshop on FM4LS

Date

15.07.2025

Link

Abstract

We introduce Concept Bottleneck Reward Models (CB-RM), a reward modeling framework that enables interpretable preference learning through selective concept annotation. Unlike standard RLHF methods that rely on opaque reward functions, CB-RM decomposes reward prediction into human-interpretable concepts. To make this framework efficient in low-supervision settings, we formalize an active learning strategy that dynamically acquires the most informative concept labels. We propose an acquisition function based on Expected Information Gain and show that it significantly accelerates concept learning without compromising preference accuracy. Evaluated on the UltraFeedback dataset, our method outperforms baselines in interpretability and sample efficiency, marking a step towards more transparent, auditable, and human-aligned reward models.
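
As a rough illustration of the acquisition step, the sketch below scores a candidate concept annotation by its Expected Information Gain about the preference label; the Bernoulli beliefs and their names are assumptions for exposition, not the paper's exact formulation.

    import numpy as np

    def binary_entropy(p, eps=1e-12):
        p = np.clip(p, eps, 1 - eps)
        return -(p * np.log(p) + (1 - p) * np.log(1 - p))

    def expected_information_gain(p_concept, p_pref_if_c1, p_pref_if_c0):
        """EIG of labeling one concept: expected drop in preference entropy.

        p_concept:     current belief that the concept is present
        p_pref_if_c1:  predicted preference probability if the label were 1
        p_pref_if_c0:  predicted preference probability if the label were 0
        """
        p_pref = p_concept * p_pref_if_c1 + (1 - p_concept) * p_pref_if_c0
        h_marginal = binary_entropy(p_pref)
        h_conditional = (p_concept * binary_entropy(p_pref_if_c1)
                         + (1 - p_concept) * binary_entropy(p_pref_if_c0))
        return h_marginal - h_conditional  # acquire labels with the largest EIG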

Authors

Sonia Laguna, Katarzyna Kobalczyk, Julia E Vogt, Mihaela Van der Schaar

Submitted

International Conference on Machine Learning (ICML) 2025 Workshop on PRAL

Date

12.07.2025

Link / Code

Abstract

Purpose: Speed-of-sound (SoS) is a biomechanical characteristic of tissue, and its imaging can provide a promising biomarker for diagnosis. Reconstructing SoS images from ultrasound acquisitions can be cast as a limited-angle computed-tomography problem, with variational networks being a promising model-based deep learning solution. Some acquired data frames may, however, be corrupted by noise due to, e.g., motion, lack of contact, and acoustic shadows, which in turn negatively affects the resulting SoS reconstructions.
Methods: We propose to use the uncertainty in SoS reconstructions to attribute trust to each individual acquired frame. Given multiple acquisitions, we then use an uncertainty-based automatic selection among these, retrospectively, to improve diagnostic decisions. We investigate uncertainty estimation based on Monte Carlo Dropout and Bayesian Variational Inference.
Results: We assess our automatic frame selection method for the differential diagnosis of breast cancer, distinguishing between benign fibroadenoma and malignant carcinoma. We evaluate 21 lesions classified as BI-RADS 4, which represents suspicious cases of probable malignancy. The most trustworthy frame among four acquisitions of each lesion was identified using uncertainty-based criteria. Selecting a frame informed by uncertainty achieved an area under the curve (AUC) of 76% and 80% for Monte Carlo Dropout and Bayesian Variational Inference, respectively, superior to any uncertainty-uninformed baseline, the best of which achieved 64%.
Conclusion: A novel use of uncertainty estimation is proposed for selecting one of multiple data acquisitions for further processing and decision making.
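
A minimal sketch of the Monte Carlo Dropout variant of this frame selection, assuming a PyTorch reconstruction network with dropout layers; the uncertainty summary (mean per-pixel standard deviation) is an illustrative choice.

    import torch

    @torch.no_grad()
    def mc_dropout_uncertainty(model, frame, n_samples=20):
        """Spread of SoS reconstructions with dropout kept active."""
        model.train()  # keeps dropout stochastic at inference time
        recons = torch.stack([model(frame) for _ in range(n_samples)])
        return recons.std(dim=0).mean().item()

    def select_most_trustworthy(model, frames):
        """Among several acquisitions of a lesion, keep the least uncertain frame."""
        scores = [mc_dropout_uncertainty(model, f) for f in frames]
        return frames[min(range(len(frames)), key=scores.__getitem__)]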

Authors

Sonia Laguna, Lin Zhang, Can Deniz Bezek, Monika Farkas, Dieter Schweizer, Rahel A. Kubik-Huch, Orcun Goksel

Submitted

International Journal of Computer Assisted Radiology and Surgery

Date

10.06.2025

Link / DOI

Abstract

Modern machine learning models for scene understanding, such as depth estimation and object tracking, rely on large, high-quality datasets that mimic real-world deployment scenarios. To address data scarcity, we propose an end-to-end system for synthetic data generation of scalable, high-quality, and customizable 3D indoor scenes. By integrating and adapting text-to-image and multi-view diffusion models with Neural Radiance Field-based meshing, this system generates high-fidelity 3D object assets from text prompts and incorporates them into pre-defined floor plans using a rendering tool. By introducing novel loss functions and training strategies into existing methods, the system supports on-demand scene generation, aiming to alleviate the scarcity of currently available data, which is generally crafted manually by artists. This system advances the role of synthetic data in addressing machine learning training limitations, enabling more robust and generalizable models for real-world applications.

Authors

Sonia Laguna, Alberto Garcia-Garcia, Marie-Julie Rakotosaona, Stylianos Moschoglou, Leonhard Helminger, Sergio Orts-Escolano

Submitted

International Conference on Learning Representations (ICLR) 2025 Workshop SynthData

Date

17.04.2025

Link

Abstract

Concept Bottleneck Models (CBMs) aim to enhance interpretability by structuring predictions around human-understandable concepts. However, unintended information leakage, where predictive signals bypass the concept bottleneck, compromises their transparency. This paper introduces an information-theoretic measure to quantify leakage in CBMs, capturing the extent to which concept embeddings encode additional, unintended information beyond the specified concepts. We validate the measure through controlled synthetic experiments, demonstrating its effectiveness in detecting leakage trends across various configurations. Our findings highlight that feature and concept dimensionality significantly influence leakage, and that classifier choice impacts measurement stability, with XGBoost emerging as the most reliable estimator. Additionally, preliminary investigations indicate that the measure exhibits the anticipated behavior when applied to soft joint CBMs, suggesting its reliability in leakage quantification beyond fully synthetic settings. While this study rigorously evaluates the measure in controlled synthetic experiments, future work can extend its application to real-world datasets.
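
One simple proxy in the spirit of this measure (a simplification, not the paper's information-theoretic estimator) is to check how much predictive performance the concept embeddings add on top of the annotated concepts, using XGBoost as the classifier since the paper found it the most reliable estimator.

    import numpy as np
    from xgboost import XGBClassifier
    from sklearn.model_selection import cross_val_score

    def leakage_proxy(concepts, embeddings, target):
        """Performance gained by adding embeddings to the annotated concepts.

        A positive gap suggests the embeddings carry target-relevant signal
        beyond the specified concepts, i.e. leakage. Illustrative proxy only.
        """
        base = cross_val_score(XGBClassifier(), concepts, target, cv=5).mean()
        joint = cross_val_score(XGBClassifier(),
                                np.hstack([concepts, embeddings]), target,
                                cv=5).mean()
        return joint - base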

Authors

Mikael Makonnen, Moritz Vandenhirtz, Sonia Laguna, Julia E Vogt

Submitted

International Conference on Learning Representations (ICLR) 2025 Workshop XAI4Science

Date

15.04.2025

Link

Abstract

The dead-in-bed syndrome describes the sudden and unexplained death of young individuals with Type 1 Diabetes (T1D) without prior long-term complications. One leading hypothesis attributes this phenomenon to nocturnal hypoglycemia (NH), a dangerous drop in blood glucose during sleep. This study aims to improve NH prediction in children with T1D by leveraging physiological data and machine learning (ML) techniques. We analyze an in-house dataset collected from 16 children with T1D, integrating physiological metrics from wearable sensors. We explore predictive performance through feature engineering, model selection, architectures, and oversampling. To address data limitations, we apply transfer learning from a publicly available adult dataset. Our results achieve an AUROC of 0.75 ± 0.21 on the in-house dataset, further improving to 0.78 ± 0.05 with transfer learning. This research moves beyond glucose-only predictions by incorporating physiological parameters, showcasing the potential of ML to enhance NH detection and improve clinical decision-making for pediatric diabetes management.

Authors

Marco Voegeli*, Sonia Laguna*, Heike Leutheuser, Marc Pfister, Marie-Anne Burckhardt, Julia E Vogt
* denotes shared first authorship

Submitted

International Conference on Learning Representations (ICLR) 2025 Workshop AI4CHL

Date

14.04.2025

Link

Abstract

The widespread use of chest X-rays (CXRs), coupled with a shortage of radiologists, has driven growing interest in automated CXR analysis and AI-assisted reporting. While existing vision-language models (VLMs) show promise in specific tasks such as report generation or abnormality detection, they often lack support for interactive diagnostic capabilities. In this work, we present RadVLM, a compact, multitask conversational foundation model designed for CXR interpretation. To this end, we curate a large-scale instruction dataset comprising over 1 million image-instruction pairs, covering both single-turn tasks, such as report generation, abnormality classification, and visual grounding, and multi-turn, multi-task conversational interactions. After fine-tuning RadVLM on this instruction dataset, we evaluate it across different tasks along with re-implemented baseline VLMs. Our results show that RadVLM achieves state-of-the-art performance in conversational capabilities and visual grounding while remaining competitive in other radiology tasks. Ablation studies further highlight the benefit of joint training across multiple tasks, particularly in scenarios with limited annotated data. Together, these findings highlight the potential of RadVLM as a clinically relevant AI assistant, providing structured CXR interpretation and conversational capabilities to support more effective and accessible diagnostic workflows.

Authors

N Deperrois, H Matsuo, S Ruipérez-Campillo, M Vandenhirtz, S Laguna, A Ryser, K Fujimoto, M Nishio, TM Sutter, JE Vogt, J Kluckert, T Frauenfelder, C Blüthgen, F Nooralahzadeh, M Krauthammer

Submitted

arXiv

Date

01.02.2025

Link / DOI

Abstract

Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input. Through time-consuming manual interventions, a user can correct wrongly predicted concept values to enhance the model's downstream performance. We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies. In SCBMs, a single-concept intervention affects all correlated concepts, thereby improving intervention effectiveness. Unlike previous approaches that model the concept relations via an autoregressive structure, we introduce an explicit, distributional parameterization that allows SCBMs to retain the CBMs' efficient training and inference procedure. Additionally, we leverage the parameterization to derive an effective intervention strategy based on the confidence region. We show empirically on synthetic tabular and natural image datasets that our approach improves intervention effectiveness significantly. Notably, we showcase the versatility and usability of SCBMs by examining a setting with CLIP-inferred concepts, alleviating the need for manual concept annotations.
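
To illustrate how an explicit distributional parameterization propagates a single intervention, the sketch below conditions a multivariate Gaussian over concept logits on one corrected concept, so that all correlated concepts shift accordingly; this is a minimal illustration of the idea, not the paper's exact parameterization.

    import numpy as np

    def intervene(mu, Sigma, idx, value):
        """Condition a Gaussian over concept logits on one intervened concept.

        mu, Sigma: mean and covariance of the joint concept-logit distribution
        idx:       index of the concept the user corrects
        value:     logit the user sets it to
        Returns the conditional mean/covariance of the remaining concepts.
        """
        rest = [i for i in range(len(mu)) if i != idx]
        s_ii = Sigma[idx, idx]
        s_ri = Sigma[np.ix_(rest, [idx])][:, 0]  # cross-covariances with idx
        mu_rest = mu[rest] + s_ri / s_ii * (value - mu[idx])
        Sigma_rest = Sigma[np.ix_(rest, rest)] - np.outer(s_ri, s_ri) / s_ii
        return mu_rest, Sigma_rest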

Authors

Moritz Vandenhirtz*, Sonia Laguna*, Ričards Marcinkevičs, Julia E Vogt
* denotes shared first authorship

Submitted

NeurIPS: Thirty-Eighth Annual Conference on Neural Information Processing Systems

Date

14.12.2024

Link / Code

Abstract

General movements (GMs) are spontaneous, coordinated body movements in infants that offer valuable insights into the developing nervous system. Assessed through the Prechtl GM Assessment (GMA), GMs are reliable predictors for neurodevelopmental disorders. However, GMA requires specifically trained clinicians, who are limited in number. To scale up newborn screening, there is a need for an algorithm that can automatically classify GMs from infant video recordings. These recordings pose challenges, including variability in recording length, device type, and setting, with each video only coarsely annotated for overall movement quality. In this work, we introduce a tool for extracting features from these recordings and explore various machine learning techniques for automated GM classification.

Authors

Daphné Chopard*, Sonia Laguna*, Kieran Chin-Cheong*, Annika Dietz, Anna Badura, Sven Wellmann, Julia E Vogt
* denotes shared first authorship

Submitted

Findings track of the Machine Learning for Health (ML4H) Symposium, co-located with NeurIPS

Date

13.12.2024

Link

Abstract

Concept-based machine learning methods have increasingly gained importance due to the growing interest in making neural networks interpretable. However, concept annotations are generally challenging to obtain, making it crucial to leverage all available prior knowledge. By creating concept-enriched models that incorporate concept information into existing architectures, we exploit their interpretable capabilities to the fullest extent. In particular, we propose Concept-Guided Conditional Diffusion, which can generate visual representations of concepts, and Concept-Guided Prototype Networks, which can create a concept prototype dataset and leverage it to perform interpretable concept prediction. These results open up new lines of research by exploiting pre-existing information in the quest for rendering machine learning more human-understandable.

Authors

Alba Carballo-Castro, Sonia Laguna, Moritz Vandenhirtz, Julia E Vogt

Submitted

NeurIPS 2024 Workshop Interpretable AI

Date

12.12.2024

Link / Code

Abstract

Recently, interpretable machine learning has re-explored concept bottleneck models (CBM), comprising step-by-step prediction of the high-level concepts from the raw features and the target variable from the predicted concepts. A compelling advantage of this model class is the user's ability to intervene on the predicted concept values, affecting the model's downstream output. In this work, we introduce a method to perform such concept-based interventions on already-trained neural networks, which are not interpretable by design, given an annotated validation set. Furthermore, we formalise the model's intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black-box models. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We demonstrate that fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of the proposed techniques, we apply them to deep chest X-ray classifiers and show that fine-tuned black boxes can be as intervenable and more performant than CBMs.
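
A rough sketch of performing such an intervention on a black box: split the network into a backbone and head, fit a probe from activations to concepts and a head from concepts to the target on the annotated validation set, then override selected concept predictions at test time. All names are illustrative assumptions.

    import torch

    def predict_with_interventions(backbone, probe, concept_head, x, corrections):
        """corrections: dict {concept_index: corrected_value} from a user.

        backbone:     the black box up to some intermediate layer
        probe:        maps activations to concept predictions (fit on val. set)
        concept_head: maps (possibly corrected) concepts to the target
        """
        with torch.no_grad():
            z = backbone(x)            # black-box activations
            c = probe(z)               # predicted concept values
            for i, v in corrections.items():
                c[:, i] = v            # user intervention on selected concepts
            return concept_head(c)     # downstream prediction from concepts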

Authors

Sonia Laguna*, Ričards Marcinkevičs*, Moritz Vandenhirtz, Julia E. Vogt
* denotes shared first authorship

Submitted

NeurIPS: Thirty-Eighth Annual Conference on Neural Information Processing Systems

Date

11.12.2024

Link / Code

Abstract

Synthetic data have emerged as an attractive option for developing machine-learning methods in human neuroimaging, particularly in magnetic resonance imaging (MRI)—a modality where image contrast depends enormously on acquisition hardware and parameters. This retrospective paper reviews a family of recently proposed methods, based on synthetic data, for generalizable machine learning in brain MRI analysis. Central to this framework is the concept of domain randomization, which involves training neural networks on a vastly diverse array of synthetically generated images with random contrast properties. This technique has enabled robust, adaptable models that are capable of handling diverse MRI contrasts, resolutions, and pathologies, while working out-of-the-box, without retraining. We have successfully applied this method to tasks such as whole-brain segmentation (SynthSeg), skull-stripping (SynthStrip), registration (SynthMorph, EasyReg), super-resolution, and MR contrast transfer (SynthSR). Beyond these applications, the paper discusses other possible use cases and future work in our methodology. Neural networks trained with synthetic data enable the analysis of clinical MRI, including large retrospective datasets, while greatly alleviating (and sometimes eliminating) the need for substantial labeled datasets, and offer enormous potential as robust tools to address various research goals.
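
A toy sketch of the domain-randomization idea behind these tools: render a training image from a label map by sampling a random intensity per anatomical label, then applying random corruptions. The real generative model in SynthSeg and its relatives is considerably richer (per-label Gaussian intensities, bias fields, resampling), so this is only an illustration.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def synth_image_from_labels(label_map, rng=None):
        """Render one random-contrast image from a brain segmentation."""
        if rng is None:
            rng = np.random.default_rng()
        image = np.zeros(label_map.shape, dtype=np.float32)
        for label in np.unique(label_map):
            # Random mean intensity per label -> a new random "contrast"
            image[label_map == label] = rng.uniform(0.0, 1.0)
        image += rng.normal(0.0, rng.uniform(0.01, 0.1), size=image.shape)  # noise
        return gaussian_filter(image, sigma=rng.uniform(0.0, 2.0))          # blur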

Authors

Karthik Gopinath (corresponding author), Andrew Hoopes, Daniel C. Alexander, Steven E. Arnold, Yael Balbastre, Benjamin Billot, Adrià Casamitjana, You Cheng, Russ Yue Zhi Chua, Brian L. Edlow, Bruce Fischl, Harshvardhan Gazula, Malte Hoffmann, C. Dirk Keene, Seunghoi Kim, W. Taylor Kimberly, Sonia Laguna, Kathleen E. Larson, Koen Van Leemput, Oula Puonti, Livia M. Rodrigues, Matthew S. Rosen, Henry F. J. Tregidgo, Divya Varadarajan, Sean I. Young, Adrian V. Dalca, Juan Eugenio Iglesias

Submitted

Imaging Neuroscience

Date

19.11.2024

Link / DOI

Abstract

Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input. Through time-consuming manual interventions, a user can correct wrongly predicted concept values to enhance the model's downstream performance. We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies. In SCBMs, a single-concept intervention affects all correlated concepts. Leveraging an explicit, distributional parameterization of the concept dependencies, we derive an effective intervention strategy based on the confidence region. We show empirically on synthetic tabular and natural image datasets that our approach improves intervention effectiveness significantly. Notably, we showcase the versatility and usability of SCBMs by examining a setting with CLIP-inferred concepts, alleviating the need for manual concept annotations.

Authors

Moritz Vandenhirtz*, Sonia Laguna*, Ričards Marcinkevičs, Julia E. Vogt
* denotes shared first authorship

Submitted

ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling, Workshop on Models of Human Feedback for AI Alignment, and Workshop on Humans, Algorithmic Decision-Making and Society

Date

26.07.2024

Link

Abstract

Multimodal VAEs have recently gained significant attention as generative models for weakly-supervised learning with multiple heterogeneous modalities. In parallel, VAE-based methods have been explored as probabilistic approaches for clustering tasks. At the intersection of these two research directions, we propose a novel multimodal VAE model in which the latent space is extended to learn data clusters, leveraging shared information across modalities. Our experiments show that our proposed model improves generative performance over existing multimodal VAEs, particularly for unconditional generation. Furthermore, we propose a post-hoc procedure to automatically select the number of true clusters, thus mitigating critical limitations of previous clustering frameworks. Notably, our method compares favorably to alternative clustering approaches in weakly-supervised settings. Finally, we integrate recent advancements in diffusion models into the proposed method to improve generative quality for real-world images.

Authors

Emanuele Palumbo, Laura Manduchi, Sonia Laguna, Daphné Chopard, Julia E Vogt

Submitted

ICLR: The Twelfth International Conference on Learning Representations

Date

17.05.2024

Link / Code

Abstract

ExpLIMEable is a tool to enhance the comprehension of Local Interpretable Model-Agnostic Explanations (LIME), particularly within the realm of medical image analysis. LIME explanations often lack robustness due to variance in perturbation techniques and choices of interpretable function. Powered by a convolutional neural network for brain MRI tumor classification, ExpLIMEable seeks to mitigate these issues. This explainability tool allows users to tailor and explore the explanation space generated post hoc by different LIME parameters to gain deeper insights into the model's decision-making process, its sensitivity, and its limitations. We introduce a novel dimension-reduction step on the perturbations, seeking to find more informative neighborhood spaces, and extensive provenance tracking to support the user. This contribution ultimately aims to enhance the robustness of explanations, key in high-risk domains like healthcare.
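
For context, a minimal call to the public lime package's image explainer, the mechanism ExpLIMEable builds on; the classifier function and inputs are stand-ins, and num_samples is one of the LIME parameters the tool lets users vary.

    import numpy as np
    from lime import lime_image

    def classifier_fn(images):
        """Stand-in for the brain-MRI tumor CNN: returns class probabilities."""
        return np.tile([0.3, 0.7], (len(images), 1))  # placeholder output

    explainer = lime_image.LimeImageExplainer()
    explanation = explainer.explain_instance(
        np.random.rand(64, 64, 3),  # placeholder MRI slice
        classifier_fn,
        top_labels=1,
        num_samples=1000,           # size of the perturbation neighborhood
    )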

Authors

Sonia Laguna, Julian Heidenreich, Jiugeng Sun, Nilüfer Cetin, Ibrahim Al Hazwani, Udo Schlegel, Furui Cheng, Mennatallah El-Assady

Submitted

NeurIPS 2023, XAI in Action: Past, Present, and Future Applications

Date

16.12.2023

Link

Abstract

Recently, interpretable machine learning has re-explored concept bottleneck models (CBM), comprising step-by-step prediction of the high-level concepts from the raw features and the target variable from the predicted concepts. A compelling advantage of this model class is the user's ability to intervene on the predicted concept values, consequently affecting the model's downstream output. In this work, we introduce a method to perform such concept-based interventions on already-trained neural networks, which are not interpretable by design. Furthermore, we formalise the model's intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black-box models. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We demonstrate that fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of the proposed techniques, we apply them to chest X-ray classifiers and show that fine-tuned black boxes can be as intervenable and more performant than CBMs.

Authors

Ričards Marcinkevičs*, Sonia Laguna*, Moritz Vandenhirtz, Julia E. Vogt
* denotes shared first authorship

Submitted

XAI in Action: Past, Present, and Future Applications, NeurIPS 2023

Date

16.12.2023

Link

Abstract

Multimodal VAEs have recently received significant attention as generative models for weakly-supervised learning with multiple heterogeneous modalities. In parallel, VAE-based methods have been explored as probabilistic approaches for clustering tasks. Our work lies at the intersection of these two research directions. We propose a novel multimodal VAE model in which the latent space is extended to learn data clusters, leveraging shared information across modalities. Our experiments show that our proposed model improves generative performance over existing multimodal VAEs, particularly for unconditional generation. Furthermore, our method compares favourably to alternative clustering approaches in weakly-supervised settings. Notably, we propose a post-hoc procedure that avoids the need for a priori knowledge of the true number of clusters, mitigating a critical limitation of previous clustering frameworks.

Authors

Emanuele Palumbo, Sonia Laguna, Daphné Chopard, Julia E Vogt

Submitted

ICML 2023 Workshop on Structured Probabilistic Inference/Generative Modeling

Date

23.06.2023

Link

Abstract

Three-dimensional imaging of live processes at a cellular level is a challenging task. It requires high-speed acquisition capabilities, low phototoxicity, and low mechanical disturbances. Three-dimensional imaging in microfluidic devices poses additional challenges, as a deep penetration of the light source is required, along with a stationary setting, so the flows are not perturbed. Different types of fluorescence microscopy techniques have been used to address these limitations, particularly confocal microscopy and light sheet fluorescence microscopy (LSFM). This manuscript proposes a novel architecture of a type of LSFM, single-plane illumination microscopy (SPIM). This custom-made microscope includes two mirror galvanometers to scan the sample vertically and reduce shadowing artifacts while avoiding unnecessary movement. In addition, two electro-tunable lenses fine-tune the focus position and reduce the scattering caused by the microfluidic devices. The microscope has been fully set up and characterized, achieving a resolution of 1.50 µm in the x-y plane and 7.93 µm in the z-direction. The proposed architecture has risen to the challenges posed when imaging microfluidic devices and live processes, as it can successfully acquire 3D volumetric images together with time-lapse recordings, and it is thus a suitable microscopic technique for live tracking of miniaturized tissue and disease models.

Authors

Clara Gomez-Cruz, Sonia Laguna, Ariadna Bachiller-Pulido, Cristina Quilez, Marina Canadas-Ortega, Ignacio Albert-Smet, Jorge Ripoll, Arrate Munoz-Barrutia

Submitted

Biosensors

Date

01.12.2022

Link / DOI

Abstract

Synthetic super-resolved images generated by a machine learning algorithm from portable low-field-strength (0.064-T) brain MRI had good agreement with real images at high field strength (1.5–3 T).

Authors

Juan Eugenio Iglesias, Riana Schleicher, Sonia Laguna, Benjamin Billot, Pamela Schaefer, Brenna McKaig, Joshua N Goldstein, Kevin N Sheth, Matthew S Rosen, W Taylor Kimberly

Submitted

Radiology

Date

08.11.2022

Link / DOI

Abstract

Portable low-field MRI has the potential to revolutionize neuroimaging by enabling point-of-care imaging and affordable scanning in underserved areas. The lower resolution and signal-to-noise ratio of these scans preclude image analysis with existing tools. Super-resolution (SR) methods can overcome this limitation, but: (i) training with downsampled high-field scans fails to generalize; and (ii) training with paired low-/high-field data is hard due to the lack of perfectly aligned images. Here, we present an architecture that combines denoising, SR, and domain adaptation modules to tackle this problem. The denoising and SR components are pretrained in a supervised fashion with large amounts of existing high-resolution data, whereas unsupervised learning is used for domain adaptation and end-to-end fine-tuning. We present preliminary results on a dataset of 11 low-field scans. The results show that our method enables segmentation with existing tools, which yield ROI volumes that correlate strongly with those derived from high-field scans (ρ > 0.8).

Authors

Sonia Laguna, Riana Schleicher, Benjamin Billot, Pamela Schaefer, Brenna McKaig, Joshua N Goldstein, Kevin N Sheth, Matthew S Rosen, W Taylor Kimberly, Juan Eugenio Iglesias

Submitted

Medical Imaging with Deep Learning

Date

07.07.2022

Link