Moritz Vandenhirtz, MSc

PhD Student

E-Mail
moritz.vandenhirtz@inf.ethz.ch
Address
Department of Computer Science
CAB G 15.2
Universitätstr. 6
CH-8092 Zurich, Switzerland
Room
CAB G 15.2

I completed my bachelor's degree in Banking and Finance at the University of Zurich in 2020, where I worked with Prof. Dr. Michael Wolf on additive, high-dimensional models for predicting stock returns. I obtained my master's degree in Statistics at ETH in 2022, where I acquired a strong background in machine learning and was awarded the Willi Studer Prize. My master's thesis focused on interpretably discovering and removing hidden biases. In October 2022, I joined the Medical Data Science lab as a PhD student.

I am excited about exploring techniques that give insight into the decision-making process of machine learning models and about leveraging this extracted knowledge for medical problems. Currently, my focus lies on (inherently) interpretable machine learning methods and representation learning. Additionally, I am curious about combining the aforementioned topics with other fields such as adversarial machine learning, anomaly detection, drug discovery, or reinforcement learning.

Abstract

The structure of many real-world datasets is intrinsically hierarchical, making the modeling of such hierarchies a critical objective in both unsupervised and supervised machine learning. Recently, novel approaches for hierarchical clustering with deep architectures have been proposed. In this work, we take a critical perspective on this line of research and demonstrate that many approaches exhibit major limitations when applied to realistic datasets, partly due to their high computational complexity. In particular, we show that a lightweight procedure implemented on top of pre-trained non-hierarchical clustering models outperforms models designed specifically for hierarchical clustering. Our proposed approach is computationally efficient and applicable to any pre-trained clustering model that outputs logits, without requiring any fine-tuning. To highlight the generality of our findings, we illustrate how our method can also be applied in a supervised setup, recovering meaningful hierarchies from a pre-trained ImageNet classifier.
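
As a concrete illustration of such a lightweight procedure, the sketch below builds an agglomerative hierarchy on top of the soft assignments of a pre-trained flat clustering model. The random `logits`, the mean-assignment cluster profiles, and the Ward linkage are illustrative assumptions rather than the paper's exact algorithm; the point is that only the model's output logits are needed and no fine-tuning takes place.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, leaves_list

# Stand-in for the per-sample logits of any pre-trained flat
# clustering model, shape (n_samples, n_clusters).
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))

# Softmax: per-sample cluster-assignment probabilities.
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)

# Represent each flat cluster by the mean assignment profile of the
# samples it wins, then build a hierarchy over these profiles with
# standard agglomerative (Ward) linkage.
hard = probs.argmax(axis=1)
profiles = np.stack([probs[hard == k].mean(axis=0)
                     for k in range(probs.shape[1])])
tree = linkage(profiles, method="ward")
print(leaves_list(tree))  # leaf order of the recovered hierarchy
```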

Authors

Emanuele Palumbo, Moritz Vandenhirtz, Alain Ryser, Imant Daunhawer, Julia E. Vogt
denotes shared last authorship

Submitted

Preprint

Date

10.10.2024

Abstract

Concept Bottleneck Models (CBMs) have emerged as a promising interpretable method whose final prediction is based on intermediate, human-understandable concepts rather than the raw input. Through time-consuming manual interventions, a user can correct wrongly predicted concept values to enhance the model's downstream performance. We propose Stochastic Concept Bottleneck Models (SCBMs), a novel approach that models concept dependencies. In SCBMs, a single-concept intervention affects all correlated concepts. Leveraging the parameterization of these dependencies, we derive an effective intervention strategy based on a confidence region. We show empirically on synthetic tabular and natural image datasets that our approach significantly improves intervention effectiveness. Notably, we showcase the versatility and usability of SCBMs by examining a setting with CLIP-inferred concepts, alleviating the need for manual concept annotations.
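
To give intuition for how a single-concept intervention propagates, here is a minimal sketch that assumes, as in SCBMs' parameterization, a joint Gaussian over concept logits; intervening on one concept then amounts to conditioning the Gaussian, which shifts all correlated concepts. The numbers are invented, and the paper's confidence-region-based choice of intervention values is not shown.

```python
import numpy as np

# Hypothetical SCBM-style parameters: a joint Gaussian over the logits
# of three binary concepts (values invented for illustration).
mu = np.array([0.2, -0.5, 1.0])
Sigma = np.array([[1.0, 0.6, 0.1],
                  [0.6, 1.0, 0.3],
                  [0.1, 0.3, 1.0]])

def intervene(mu, Sigma, idx, value):
    """Condition the Gaussian on concept `idx` having logit `value`;
    the means of all correlated concepts shift accordingly."""
    rest = [i for i in range(len(mu)) if i != idx]
    S_rr = Sigma[np.ix_(rest, rest)]
    S_ri = Sigma[np.ix_(rest, [idx])]
    gain = S_ri / Sigma[idx, idx]                  # regression coefficients
    mu_cond = mu[rest] + (gain * (value - mu[idx])).ravel()
    Sigma_cond = S_rr - gain @ S_ri.T
    return mu_cond, Sigma_cond

# Setting concept 0 to a confidently "on" logit pulls concept 1 up too.
print(intervene(mu, Sigma, idx=0, value=3.0))
```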

Authors

Moritz Vandenhirtz*, Sonia Laguna*, Ricards Marcinkevics, Julia E. Vogt
* denotes shared first authorship

Submitted

ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling, Workshop on Models of Human Feedback for AI Alignment, and Workshop on Humans, Algorithmic Decision-Making and Society

Date

26.07.2024

Abstract

Recently, interpretable machine learning has re-explored concept bottleneck models (CBMs), which predict high-level concepts from the raw features and then the target variable from the predicted concepts. A compelling advantage of this model class is the user's ability to intervene on the predicted concept values, affecting the model's downstream output. In this work, we introduce a method to perform such concept-based interventions on already-trained neural networks, which are not interpretable by design, given an annotated validation set. Furthermore, we formalise the model's intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black-box models. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We demonstrate that fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of the proposed techniques, we apply them to deep chest X-ray classifiers and show that fine-tuned black boxes can be as intervenable as, and more performant than, CBMs.
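
The sketch below is a rough, hypothetical rendering of this setup: a linear probe maps a black box's intermediate features to concepts, and an intervention nudges the features so the probe matches user-specified concept values while staying close to the original representation. All modules, sizes, and the penalty weight are illustrative assumptions, not the paper's exact procedure.

```python
import torch

# Hypothetical split of a black box into feature extractor g and head h,
# plus a concept probe fit on an annotated validation set.
g = torch.nn.Sequential(torch.nn.Linear(16, 8), torch.nn.ReLU())
h = torch.nn.Linear(8, 2)      # features -> target logits
probe = torch.nn.Linear(8, 4)  # features -> concept logits

x = torch.randn(1, 16)
z0 = g(x).detach()
z = z0.clone().requires_grad_(True)
c_user = torch.tensor([[1., 0., 1., 0.]])  # concept values set by the user

# Edit the representation so the probe agrees with the intervened
# concepts while staying close to the original features.
opt = torch.optim.Adam([z], lr=0.1)
for _ in range(100):
    opt.zero_grad()
    loss = (torch.nn.functional.binary_cross_entropy_with_logits(probe(z), c_user)
            + 0.1 * (z - z0).pow(2).sum())
    loss.backward()
    opt.step()

print(h(z))  # downstream prediction after the intervention
```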

Authors

Sonia Laguna*, Ricards Marcinkevics*, Moritz Vandenhirtz, Julia E. Vogt
* denotes shared first authorship

Submitted

arXiv

Date

24.01.2024

Abstract

We propose the Tree Variational Autoencoder (TreeVAE), a new generative hierarchical clustering model that learns a flexible tree-based posterior distribution over latent variables. TreeVAE hierarchically divides samples according to their intrinsic characteristics, shedding light on hidden structures in the data. It adapts its architecture to discover the optimal tree for encoding dependencies between latent variables. The proposed tree-based generative architecture enables lightweight conditional inference and improves generative performance by utilizing specialized leaf decoders. We show that TreeVAE uncovers underlying clusters in the data and finds meaningful hierarchical relations between the different groups on a variety of datasets, including real-world imaging data. We show empirically that TreeVAE provides a more competitive log-likelihood lower bound than its sequential counterparts. Finally, due to its generative nature, TreeVAE is able to generate new samples from the discovered clusters via conditional sampling.
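
As a toy illustration of the tree-based architecture, the sketch below routes samples through a fixed depth-2 binary tree: each internal node outputs a probability of going left, leaf-reaching probabilities are products of the decisions along the path, and specialized per-leaf decoders are mixed accordingly. TreeVAE itself grows the tree and trains it with variational inference; everything here is a simplified assumption.

```python
import torch

d = 8
routers = [torch.nn.Linear(d, 1) for _ in range(3)]    # root, left, right nodes
decoders = [torch.nn.Linear(d, 16) for _ in range(4)]  # one decoder per leaf

z = torch.randn(5, d)                       # latent samples
p_root = torch.sigmoid(routers[0](z))       # P(go left at root)
p_l = torch.sigmoid(routers[1](z))          # P(go left at left child)
p_r = torch.sigmoid(routers[2](z))          # P(go left at right child)

# Probability of reaching each of the four leaves: product of routing
# decisions along the root-to-leaf path.
leaf_probs = torch.cat([p_root * p_l, p_root * (1 - p_l),
                        (1 - p_root) * p_r, (1 - p_root) * (1 - p_r)], dim=1)

# Mix the specialized leaf decoders by the leaf-reaching probabilities.
recons = torch.stack([dec(z) for dec in decoders], dim=1)     # (5, 4, 16)
expected_recon = (leaf_probs.unsqueeze(-1) * recons).sum(dim=1)
print(leaf_probs.sum(dim=1))  # sanity check: rows sum to 1
```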

Authors

Laura Manduchi*, Moritz Vandenhirtz*, Alain Ryser, Julia E. Vogt
* denotes shared first authorship

Submitted

Spotlight at Neural Information Processing Systems, NeurIPS 2023

Date

20.12.2023

Abstract

Recently, interpretable machine learning has re-explored concept bottleneck models (CBMs), which predict high-level concepts from the raw features and then the target variable from the predicted concepts. A compelling advantage of this model class is the user's ability to intervene on the predicted concept values, consequently affecting the model's downstream output. In this work, we introduce a method to perform such concept-based interventions on already-trained neural networks, which are not interpretable by design. Furthermore, we formalise the model's intervenability as a measure of the effectiveness of concept-based interventions and leverage this definition to fine-tune black-box models. Empirically, we explore the intervenability of black-box classifiers on synthetic tabular and natural image benchmarks. We demonstrate that fine-tuning improves intervention effectiveness and often yields better-calibrated predictions. To showcase the practical utility of the proposed techniques, we apply them to chest X-ray classifiers and show that fine-tuned black boxes can be as intervenable as, and more performant than, CBMs.

Authors

Ricards Marcinkevics*, Sonia Laguna*, Moritz Vandenhirtz, Julia E. Vogt
* denotes shared first authorship

Submitted

XAI in Action: Past, Present, and Future Applications, NeurIPS 2023

Date

16.12.2023

Abstract

Prototype learning, a popular machine learning method designed for inherently interpretable decisions, leverages similarities to learned prototypes for classifying new data. While it is mainly applied in computer vision, in this work, we build upon prior research and further explore the extension of prototypical networks to natural language processing. We introduce a learned weighted similarity measure that enhances the similarity computation by focusing on informative dimensions of pre-trained sentence embeddings. Additionally, we propose a post-hoc explainability mechanism that extracts prediction-relevant words from both the prototype and input sentences. Finally, we empirically demonstrate that our proposed method not only improves predictive performance on the AG News and RT Polarity datasets over a previous prototype-based approach, but also improves the faithfulness of explanations compared to rationale-based recurrent convolutions.
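
A minimal sketch of the learned weighted similarity idea: a learnable non-negative weight per embedding dimension rescales both the sentence embedding and the prototypes before a cosine similarity, letting informative dimensions dominate. The dimensions, prototype count, and the softplus positivity trick are assumptions for illustration, not the paper's exact formulation.

```python
import torch

d, n_proto = 384, 10
w = torch.nn.Parameter(torch.ones(d))            # learned jointly with the model
prototypes = torch.nn.Parameter(torch.randn(n_proto, d))

def weighted_cosine(x, protos, w):
    """Cosine similarity in a space rescaled per dimension by the
    (positive) learned weights."""
    s = torch.sqrt(torch.nn.functional.softplus(w))  # keep weights positive
    xw, pw = x * s, protos * s
    return torch.nn.functional.cosine_similarity(
        xw.unsqueeze(1), pw.unsqueeze(0), dim=-1)    # (batch, n_proto)

x = torch.randn(2, d)  # stand-in for pre-trained sentence embeddings
print(weighted_cosine(x, prototypes, w).shape)
```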

Authors

Claudio Fanconi*, Moritz Vandenhirtz*, Severin Husmann, Julia E. Vogt
* denotes shared first authorship

Submitted

Conference on Empirical Methods in Natural Language Processing, EMNLP 2023

Date

25.10.2023

Abstract

We propose a new generative hierarchical clustering model that learns a flexible tree-based posterior distribution over latent variables. The proposed Tree Variational Autoencoder (TreeVAE) hierarchically divides samples according to their intrinsic characteristics, shedding light on hidden structures in the data. It adapts its architecture to discover the optimal tree for encoding dependencies between latent variables, improving generative performance. We show that TreeVAE uncovers underlying clusters in the data and finds meaningful hierarchical relations between the different groups on several datasets. Due to its generative nature, TreeVAE can generate new samples from the discovered clusters via conditional sampling.

Authors

Laura Manduchi*, Moritz Vandenhirtz*, Alain Ryser, Julia E. Vogt
* denotes shared first authorship

Submitted

ICML 2023 Workshop on Structured Probabilistic Inference & Generative Modeling

Date

30.06.2023

Abstract

We propose a new generative hierarchical clustering model that learns a flexible tree-based posterior distribution over latent variables. The proposed Tree Variational Autoencoder (TreeVAE) hierarchically divides samples according to their intrinsic characteristics, shedding light on hidden structures in the data. It adapts its architecture to discover the optimal tree for encoding dependencies between latent variables, improving generative performance. We show that TreeVAE uncovers underlying clusters in the data and finds meaningful hierarchical relations between the different groups on several datasets. Due to its generative nature, TreeVAE can generate new samples from the discovered clusters via conditional sampling.

Authors

Laura Manduchi*, Moritz Vandenhirtz*, Alain Ryser, Julia E. Vogt
* denotes shared first authorship

Submitted

ICML 2023 Workshop on Deployment Challenges for Generative AI

Date

30.06.2023

Abstract

Spurious correlations are everywhere. While humans often do not perceive them, neural networks are notorious for learning unwanted associations, also known as biases, instead of the underlying decision rule. As a result, practitioners are often unaware of the biased decision-making of their classifiers. Such a biased model based on spurious correlations might not generalize to unobserved data, leading to unintended, adverse consequences. We propose Signal is Harder (SiH), a variational-autoencoder-based method that simultaneously trains a biased and an unbiased classifier using a novel, disentangling reweighting scheme inspired by the focal loss. Using the unbiased classifier, SiH matches or improves upon the performance of state-of-the-art debiasing methods. To improve the interpretability of our technique, we propose a perturbation scheme in the latent space for visualizing the bias that helps practitioners become aware of the sources of spurious correlations.
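
To sketch the flavor of such a reweighting, the hypothetical losses below couple a biased and an unbiased classifier: the unbiased branch's per-sample loss is scaled by a focal-style factor that is large exactly where the biased model is confident in the wrong class, steering it toward bias-conflicting samples. This is one plausible instantiation, not the paper's exact scheme (which also involves a VAE and a disentangling objective).

```python
import torch
import torch.nn.functional as F

def debias_losses(logits_b, logits_u, y, gamma=2.0):
    """Losses for the biased (b) and unbiased (u) classifiers."""
    # Biased branch: plain cross-entropy here (the paper uses a
    # bias-amplifying variant).
    loss_b = F.cross_entropy(logits_b, y)
    # Confidence of the biased model in the true class.
    p_b = logits_b.softmax(-1).gather(1, y[:, None]).squeeze(1).detach()
    # Unbiased branch: focal-style weight (1 - p_b)^gamma up-weights
    # samples the biased model gets wrong -- likely bias-conflicting.
    loss_u = ((1 - p_b) ** gamma
              * F.cross_entropy(logits_u, y, reduction="none")).mean()
    return loss_b, loss_u

y = torch.randint(0, 3, (4,))
print(debias_losses(torch.randn(4, 3), torch.randn(4, 3), y))
```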

Authors

Moritz Vandenhirtz, Laura Manduchi, Ricards Marcinkevics, Julia E. Vogt

Submitted

Domain Generalization Workshop, ICLR 2023

Date

04.05.2023

Abstract

Objective: To report the outcomes of active surveillance (AS) for low-risk prostate cancer (PCa) in a single-center cohort. Patients and Methods: This is a prospective, single-center, observational study. The cohort included all patients who underwent AS for PCa between December 1999 and December 2020 at our institution. Follow-up appointments (FU) ended in February 2021. Results: A total of 413 men were enrolled in the study, and 391 had at least one FU. Of those who followed up, 267 had PCa diagnosed by transrectal ultrasound (TRUS)-guided biopsy (T1c: 68.3%), while 124 were diagnosed after transurethral resection of the prostate (TURP) (T1a/b: 31.7%). Median FU was 46 months (IQR 25–90). Cancer-specific survival was 99.7% and overall survival was 92.3%. The median time to reclassification was 11.2 years, with 25% of patients reclassified within 4.58 years. After 20 years, 6.6% had opted to switch to watchful waiting, 4.1% had died, 17.4% were lost to FU, and 46.8% remained on AS. Those diagnosed by TRUS had a significantly higher reclassification rate than those diagnosed by TURP (p < 0.0001). Men diagnosed by targeted MRI/TRUS fusion biopsy tended to have a higher reclassification probability than those diagnosed by conventional template biopsies (p = 0.083). Conclusions: Our single-center cohort spanning over two decades revealed that AS remains a safe option for low-risk PCa even in the long term. Approximately half of AS enrollees will eventually require definitive treatment due to disease progression. Men with incidental prostate cancer were significantly less likely to have disease progression.

Authors

Sarah Hagmann, Venkat Ramakrishnan, Alexander Tamalunas, Marc Hofmann, Moritz Vandenhirtz, Silvan Vollmer, Jsmea Hug, Philipp Niggli, Antonio Nocito, Rahel A. Kubik-Huch, Kurt Lehmann, Lukas John Hefermehl

Submitted

Cancers

Date

12.01.2022
