Exhibition
Posters
1. A Cost-Effective Deep Learning Workflow for High-Throughput Food Formulation
Author
Adar Fridman, Remko Marcel Boom, Serafim Bakalis and Raghavendra Selvan
Abstract
High-throughput methods can accelerate food formulation design, where combining ingredients to achieve desired textures and stability is central, yet accessible tools for screening gelation are scarce. We present a workflow that combines a 96-well plate platform with sphere displacement tracking and a deep learning-based image analysis pipeline to map gelation behavior in parallel. A YOLOv8 model tracked spheres in each well. Gelatin was selected as a model system to validate the approach. Sphere velocity decreased with increasing concentration, capturing immobilization thresholds and systematic hysteresis between cooling and reheating. Validation against oscillatory rheology showed strong agreement with sol–gel boundaries, with only minor deviations due to discrete temperature steps. This demonstrates that deep learning–assisted sphere tracking provides a reliable, low-cost proxy for rheology, offering a practical tool for rapid, automated food formulation screening.
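To make the tracking step concrete, the sketch below shows how sphere tracking could look with the Ultralytics YOLOv8 API; the weights file, video path, and the per-sphere velocity estimate are illustrative assumptions rather than the authors' pipeline.

```python
# Minimal sketch: track spheres in a well-plate video with YOLOv8
# (weights path, video path, and the velocity estimate are hypothetical).
import numpy as np
from ultralytics import YOLO

model = YOLO("sphere_detector.pt")           # fine-tuned YOLOv8 weights (placeholder)
results = model.track("well_plate.mp4", persist=True, stream=True)

tracks = {}                                   # track_id -> list of (frame, y_center_px)
for frame_idx, r in enumerate(results):
    if r.boxes.id is None:
        continue
    for box, tid in zip(r.boxes.xywh.cpu().numpy(), r.boxes.id.int().cpu().numpy()):
        tracks.setdefault(int(tid), []).append((frame_idx, float(box[1])))

# Estimate a mean vertical velocity (pixels per frame) for each tracked sphere.
for tid, points in tracks.items():
    frames, ys = np.array(points, dtype=float).T
    if len(frames) > 1:
        velocity = np.polyfit(frames, ys, 1)[0]
        print(f"sphere {tid}: {velocity:.3f} px/frame")
```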
2. A longitudinal analysis of political neutrality in independent fact-checking
Author
Sahajpreet Singh, Sarah Masud and Tanmoy Chakraborty
Abstract
Independent fact-checking organisations have emerged as the guardians against fake news. However, these organisations might deviate from political neutrality by being selective in what false news they debunk and how the debunked information is presented. At the intersection of AI for social science and humanities, this work explores how journalistic frameworks and large language models (LLMs) can be employed both to detect political biases in fact-checking organisations and to make the public aware of the limitations of such a setup.

Prompting GPT-3.5 with the journalistic framework of 5W1H, we establish a longitudinal measure (2018-2023) for political neutrality that looks beyond the left-right spectrum. Specified on a scale from -1 to 1 (with zero being absolute neutrality), we establish the extent of negative portrayal of political entities that marks a difference in the readers' perception in six independent fact-checking organisations across the USA and India. Here, we observe an average score of -0.17 and -0.24 in the USA and India, respectively. The findings indicate how seemingly objective fact-checking can still carry distorted political views, indirectly and subtly impacting the perception of consumers of the debunked news.
3. Accelerated phase identification using deep learning and physics-informed neural network for retrieving physical parameters from SAXS data
Author
Smita Chakraborty
Abstract
Small-angle X-ray Scattering (SAXS) provides crucial insights into the structure of cellulose fibres. However, extracting quantitative physical parameters such as fibril diameter, orientation distribution, and porosity from SAXS patterns is challenging due to signal complexity and model ambiguity. This ongoing work presents deep learning and physics-informed neural network (PINN) approaches that leverage both experimental SAXS data and the underlying physical laws governing X-ray scattering.
4. Accelerating Water Simulations with Reliable Surrogate Modeling: Motivation and Early Directions
Author
Freja Høgholm Petersen, Jesper Mariegaard, Rocco Palmitessa and Allan Engsig-Karup
Abstract
Physics-based hydrodynamic models are accurate but computationally demanding, motivating interest in machine learning surrogates for coastal water simulations. A growing body of work explores physics-aware and stable data-driven methods, including reduced-order models, operator learning, and Koopman-based approaches, to improve generalization and robustness. This study reviews key advances and presents early experiments with reduced-order surrogates, where incorporating stability constraints in the training of Koopman autoencoders shows promise for reliable long-term forecasting.
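As a rough illustration of the direction described above, the sketch below shows the shape of a Koopman autoencoder with a soft stability penalty on the latent operator; the architecture sizes and the penalty form are illustrative assumptions, not the authors' formulation.

```python
# Sketch of a Koopman autoencoder with a soft stability penalty
# (layer sizes and penalty form are illustrative assumptions).
import torch
import torch.nn as nn

class KoopmanAE(nn.Module):
    def __init__(self, state_dim: int, latent_dim: int):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(state_dim, 64), nn.Tanh(),
                                     nn.Linear(64, latent_dim))
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                     nn.Linear(64, state_dim))
        self.K = nn.Linear(latent_dim, latent_dim, bias=False)  # linear Koopman operator

    def forward(self, x_t):
        z_t = self.encoder(x_t)
        z_next = self.K(z_t)                   # advance one step in latent space
        return self.decoder(z_t), self.decoder(z_next)

def loss_fn(model, x_t, x_next, stability_weight=1e-2):
    recon_t, pred_next = model(x_t)
    loss = nn.functional.mse_loss(recon_t, x_t) + nn.functional.mse_loss(pred_next, x_next)
    # Soft stability constraint: discourage a spectral norm of K above 1.
    spec_norm = torch.linalg.matrix_norm(model.K.weight, ord=2)
    return loss + stability_weight * torch.relu(spec_norm - 1.0)
```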
5. Acoustic field estimation with differentiable physics
Author
Peter Gerstoft, Samuel Verburg and Efren Fernandez-Grande
Abstract
Differentiable physics is used to estimate acoustic fields from a limited number of spatially distributed observations. The initial conditions of the wave equation are approximated with a neural network, and the differential operator is computed with a differentiable numerical solver. We introduce an additional sparsity-promoting constraint to achieve meaningful solutions even under severe undersampling conditions. Numerical experiments demonstrate that the approach can reconstruct sound fields under extreme data scarcity.
6. Advancing Interdisciplinary Research: The European Centre of Excellence in Artificial Intelligence for Digital Humanities (CoE AI4DH)
Author
Marko Robnik Šikonja, Antoine Doucet, Špela Arhar Holdt, Kaja Dobrovoljc, Simon Krek, Tinca Lukan, Ajda Pretnar Žagar, Polona Tratnik, Igor Vobič and Slavko Žitnik
Abstract
This paper introduces the European Centre of Excellence in Artificial Intelligence for Digital Humanities (CoE AI4DH), a newly established EU-funded initiative based at the University of Ljubljana. The Centre addresses key scientific and structural barriers to interdisciplinary collaboration between computer science and the humanities, particularly in applying artificial intelligence (AI) methods to cultural, historical, and linguistic research. CoE AI4DH offers an integrated approach combining technical innovation with domain expertise, providing AI tools, research infrastructure, and training tailored to the needs of digital humanities and social science scholars. Key research activities include the development of NLP tools for historical texts, knowledge-enhanced large language models, and AI-supported media discourse analysis in low-resource languages. By fostering collaboration across disciplines, the Centre enables new forms of inquiry and promotes scalable, explainable, and ethically grounded AI applications. This contribution outlines the Centre’s mission, services, and research agenda, highlighting opportunities for international collaboration and shared innovation.
7. AI as an Interdisciplinary Enabler: Case Studies and Ethical Challenges
Author
Zenon Lamprou, Haris Shekeris, Pilar Orero and Konstantinos Avgerinakis
Abstract
Artificial Intelligence (AI) has evolved into a socio-technical phenomenon, necessitating interdisciplinary collaboration for its effective and ethical deployment across diverse sectors. This paper examines the role of interdisciplinary approaches in AI development, focusing on healthcare and climate science as key case studies, while also touching upon education, industry, and accessibility. We explore how ethical and governance dimensions, particularly within the framework of EU policy such as the AI Act and GDPR, concretely shape these collaborations. In healthcare, AI advancements in diagnostics and drug discovery highlight the need for cooperation between clinicians, data scientists, and ethicists to mitigate risks like bias and unequal access. Similarly, in climate science, AI's contribution to environmental monitoring and policy decisions demands collaboration among scientists, policymakers, and ethicists to ensure accountability and transparency. We analyze how responsibilities are distributed, ethical risks are mitigated, and governance frameworks influence design choices in these domains. The paper concludes that successful AI deployment depends not only on technical innovation but also on robust, interdisciplinary governance structures that embed principles of fairness, accountability, and transparency into every stage of development and implementation, fostering AI as a shared societal project.
8. AI for Equine Welfare: Detecting Ear Movements for Affective State Assessment
Author
João Alves, Pia Haubro Andersen and Rikke Gade
Abstract
The Equine Facial Action Coding System (EquiFACS) enables the systematic annotation of equine facial movements through distinct Action Units (AUs) and serves as a crucial tool for assessing affective states in horses by identifying subtle facial expressions associated with discomfort.

In this work, we study different methods for specific ear AU detection and localization from horse videos. We achieve 87.5% classification accuracy of ear movement presence on a public horse video dataset, demonstrating the potential of our deep learning-based approach. Our code is publicly available at https://github.com/jmalves5/read-my-ears.
9. AI:XPERTISE Lab: Embedding AI in Expert Work
Author
Andreas Møgelmose and Kasper Trolle Elmholt
Abstract
Advanced AI technologies increasingly encroach on domains traditionally occupied by expert professions such as medicine and architecture. Existing work often isolates technical development from the organisational conditions that shape how AI is adopted. AI:XPERTISE is a new lab and research project at Aalborg University, which brings these perspectives together by co-designing new AI systems while simultaneously studying their integration into expert workflows. This position paper outlines the lab's research agenda, methodological approach, and expected contributions.
10. AI-driven strategic intelligence for policy- and decision-making
Author
Jasper van Kempen and Amber Geurts
Abstract
Technology foresight is often constrained by reliance on publications and patents, limiting early detection of emerging science, technology and innovation (STI) trends. We demonstrate a hybrid AI–expert approach that combines the breadth and speed of Large Language Models (LLMs) with expert validation and contextualization. The framework is applied to three policy challenges: (i) analyzing structured policy documents, (ii) nowcasting STI indicators from proxy data, and (iii) transforming unstructured web content into structured datasets. Case studies in Dutch innovation policy, offshore wind, growth markets, R&D nowcasting, and critical raw material supply risks show that the approach delivers in hours what previously took weeks, and incorporates previously inaccessible data sources. The resulting workflow, consisting of collection, normalization, AI transformation, expert gating and audit trails, was reviewable by design. It limited model hallucinations and expert tunnel vision, while allowing low cost updates as new data or sources emerged.
11. Auditing generative AI’s cultural imagination of cities
Author
Ingrid Campo-Ruiz
Abstract
Generative artificial intelligence (AI) increasingly shapes how people perceive and navigate cities, influencing travel choices, urban understanding, and collective memory. Yet how generative AI systems represent urban culture remains largely unexplored. This study presents a systematic audit of such representations, applied to Stockholm as a case study. I prompted ChatGPT-4o and Midjourney with controlled queries about the city’s “cultural context” and compared their outputs with geolocated demographic data, photographs collected through fieldwork, and findings from previous research on Stockholm’s cultural and spatial dynamics. The analysis reveals a consistent narrowing of urban cultural representation: generative AI systems privileged iconic, consumption-oriented places concentrated in the city center while overlooking suburban and lower-income areas. Integrating text and image generation with geospatial analysis, I introduce a transparent and reproducible framework for examining cultural bias in generative AI. The findings expose how AI systems filter urban life and invite collaboration on responsible, context-aware AI that promotes diversity, equity, and inclusion in cities.
12. Augmenting Anomaly Detection Datasets with Relighting and Depth Estimation
Author
Ivan Nikolov
Abstract
Variations in lighting from weather, day–night cycles, seasonal changes, or artificial sources can cause data drift in deep learning models by altering scene appearance, which leads to degraded performance. This is particularly problematic for outdoor anomaly detection, where training and testing datasets are usually limited to daytime recordings. In this paper, we present initial research on using an image relighting model with depth estimation to build augmented surrogate models of datasets under diverse lighting conditions. We show that such datasets pose significant challenges for anomaly detectors and can enable the development of more robust models. Furthermore, these surrogate relit datasets can support a wide range of outdoor vision tasks.
13. Automated Discovery of Sequential Sampling Models
Author
Ibrahim Muhip Tezcan, Daniel Weinhardt and Sebastian Musslick
Abstract
Modeling both choices and reaction times is central to decision-making research. Sequential sampling models (SSMs) provide a powerful framework for explaining how humans and animals make decisions over time, yet their development has largely relied on a few hand-crafted variants, limiting exploration of the broader hypothesis space of evidence accumulation dynamics and potentially overlooking novel mechanisms. We introduce a machine learning framework for automated discovery of SSMs directly from behavioral data, in which a recurrent neural network (RNN) with internal evidence accumulation mechanisms generates choice–reaction time distributions, while a discriminator network scores their similarity to empirical data. Validated on simulated datasets from three model classes---the drift diffusion model with a constant rate of evidence accumulation, and two urgency models with linearly and nonlinearly increasing rates of evidence accumulation---the system successfully recovered these underlying dynamics, reproducing behavior consistent with the original processes. These results demonstrate that RNNs equipped with evidence accumulation mechanisms can approximate a variety of sequential sampling mechanisms, opening new opportunities for automated discovery of interpretable models and systematic theory development in decision-making.
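The constant-drift diffusion model, the simplest of the three validation model classes named above, can be simulated in a few lines; the sketch below uses illustrative parameter values and is not the authors' simulator.

```python
# Sketch: simulate choices and reaction times from a constant-drift diffusion model
# (parameter values are illustrative).
import numpy as np

def simulate_ddm(n_trials=1000, drift=0.3, boundary=1.0, noise=1.0,
                 dt=0.001, non_decision=0.2, rng=None):
    rng = rng or np.random.default_rng(0)
    choices, rts = [], []
    for _ in range(n_trials):
        evidence, t = 0.0, 0.0
        while abs(evidence) < boundary:        # accumulate evidence until a boundary is hit
            evidence += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        choices.append(1 if evidence > 0 else 0)
        rts.append(t + non_decision)
    return np.array(choices), np.array(rts)

choices, rts = simulate_ddm()
print(f"P(upper boundary) = {choices.mean():.2f}, mean RT = {rts.mean():.2f} s")
```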
14. Automated Discovery of Sparse and Interpretable Cognitive Equations
Author
Daniel Weinhardt, Martyna Plomecka, Muhip Tezcan, Maria Eckstein and Sebastian Musslick
Abstract
Data-driven approaches to model discovery can uncover mechanisms that traditional theory-driven models fail to explain. In cognitive science, this task is particularly challenging because cognitive models aim to capture latent cognitive processes that are not directly observable. We introduce the Sparse Interpretable Cognitive Equations (SPICE) framework, a machine learning approach that combines recurrent neural networks with equation discovery to identify latent cognitive dynamics directly from noisy behavior in cognitive tasks. Applied to human reinforcement learning data, including clinical populations with depression, performing a 2-armed bandit task, SPICE discovered models that outperformed those proposed in the original study while revealing novel cognitive mechanisms such as linear and nonlinear interactions between reward beliefs and choice preference dynamics. Leveraging participant embeddings, SPICE also uncovered systematic structural differences in cognitive dynamics between healthy and clinical populations. By enabling the automated discovery of interpretable cognitive dynamics from behavior, SPICE paves the way for data-driven theory development in cognitive science.
15. Automated discovery of structural differences in dynamical systems
Author
Sedighe Raeisi, Pascal Nieters and Sebastian Musslick
Abstract
A major challenge in the natural and engineering sciences is uncovering dynamical systems that explain observed natural phenomena. Data-driven equation discovery has emerged as a powerful approach to automate this process. Existing equation discovery methods, however, typically pool data across instances of systems they seek to explain, ignoring structural differences or reducing them to parameter variations within the same equation. Yet, different instances of the same physical or biological system may be governed by different equations. The challenge is to uncover structural components that are universal across systems while distinguishing them from individual-specific variations. To capture structural differences across systems, we introduce a Bayesian hierarchical approach to equation discovery. We demonstrate across case studies in physics, ecology, and neuroscience that this method recovers mechanisms of the data-generating processes while capturing structural variability across systems. By explicitly modeling structural variability in equations, it establishes a foundation for data-driven automated model discovery at the population level, providing scientists with a tool to separate universal principles from individual differences.
16. Automated Prototyping of Behavioral Experiments with Large Language Models
Author
Alessandra Brondetta and Sebastian Musslick
Abstract
Piloting behavioral experiments is a critical yet resource-intensive step in behavioral research. Behavioral scientists often rely on intuition and repeated data collection before arriving at experimental designs that elicit desired behavioral phenomena. To address this challenge, we introduce a large language model (LLM)-driven framework for in silico prototyping of behavioral experiments. The framework involves an iterative interaction between an experimentalist LLM, which proposes candidate designs, and participant LLMs, which engage with them. We formalize this interaction as a black-box optimization problem, where the experimentalist LLM aims to minimize a loss function defined over behavioral metrics of interest by iteratively revising its proposals. We illustrate this approach in the context of task framing—the narrative explanations used to introduce participants to experimental tasks. Using the Wisconsin Card Sorting Test, a canonical psychological paradigm for studying cognitive flexibility, we demonstrate that the framework can discover framings that systematically shift the behavior of synthetic participants along a spectrum of cognitive stability and flexibility. Our findings demonstrate the potential of LLM-based in silico experimentation to accelerate the design cycle in behavioral research, enabling cost-effective exploration of experimental design spaces prior to in vivo validation with human participants.
17. Automatic Classification of Sedimentary Particles for Insights into Past Climate Environment
Author
Nikolai Andrianov and Kasia Sliwinska
Abstract
This work presents early-stage research on the classification of sedimentary particles in microscope images from samples spanning different geological epochs. The overarching goal is to enable a quantitative assessment of past environmental changes using a fast and unbiased annotation method. Since particle labels are available for only a small subset of images, we focus on unsupervised learning. The workflow consists of two stages: first, particles are segmented using a combination of thresholding methods and the pre-trained SAM2 model (Ravi et al. [2024]); second, segmented particles are grouped through zero-shot classification using the pre-trained DINOv2 model (Oquab et al. [2023]). Initial qualitative results show that most particles are successfully segmented. However, the resulting clusters contain heterogeneous mixtures of particle types, highlighting the challenges of unsupervised classification in this domain. Future work will incorporate available labeled data to improve the classification results.
18. ColliderML: Enabling Foundation Models in High Energy Physics Through Low-Level Detector Data
Author
Daniel Murnane, Paul Gessinger-Befurt, Andreas Salzburger and Anna Zaborowska
Abstract
We introduce ColliderML, an open dataset of one million fully simulated proton-proton collisions at HL-LHC conditions, providing detector-level measurements across ten physics processes. Unlike existing fast-simulation datasets operating on high-level objects, ColliderML provides hits, energy deposits, and reconstructed tracks from realistic detector geometry under high luminosity pile-up (µ ≈ 200). We argue that foundation models trained on such low-level data represent the future of collider physics, and present ColliderML as the infrastructure to realize this vision.
19. Compositional Shielding and Reinforcement Learning for Multi-Agent Systems
Author
Asger Horn Brorholt, Kim Guldstrand Larsen and Christian Schilling
Abstract
Deep reinforcement learning has emerged as a powerful tool for obtaining high-performance policies. However, the safety of these policies has been a long-standing issue. One promising paradigm to guarantee safety is a shield, which "shields" a policy from making unsafe actions. However, computing a shield scales exponentially in the number of state variables. This is a particular concern in multi-agent systems with many agents. In this work, we propose a novel approach for multi-agent shielding. We address scalability by computing individual shields for each agent. The challenge is that typical safety specifications are global properties, but the shields of individual agents only ensure local properties. The key to overcoming this challenge is to apply assume-guarantee reasoning. Specifically, we present a sound proof rule that decomposes a (global, complex) safety specification into (local, simple) obligations for the shields of the individual agents. Moreover, we show that applying the shields during reinforcement learning significantly improves the quality of the policies obtained for a given training budget. We demonstrate the effectiveness and scalability of our multi-agent shielding framework in two case studies, reducing the computation time from hours to seconds and achieving fast learning convergence.
20. Contrastive omics pre-training for multimodal embeddings of genomic sequences
Author
Lyam Baudry, Cyril Matthey-Doret, Amelie Fritz and Celia Raimondi
Abstract
The exponential growth of genomic data presents a significant challenge in bridging the gap between raw DNA sequences and their complex biological functions. We introduce CLOP (Contrastive Learning for Omics Pre-training), a model designed to learn joint representations of DNA sequences and their textual annotations. Inspired by the success of CLIP in vision-language tasks, CLOP employs a dual-encoder architecture trained with a contrastive objective to map DNA k-mers and descriptive text into a shared, multimodal embedding space. In this paper, we present a proof of concept based on a preliminary training run on genomic sequences from various species. Despite the limited training (5 epochs) and small sample size, our results demonstrate that the learned embeddings qualitatively capture meaningful biological information. The embedding space shows clear clustering of sequences by functional category. These early results highlight the potential of contrastive learning to build foundational models for genomics.
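The contrastive objective underlying a CLIP-style dual encoder such as CLOP can be written compactly; in the sketch below, random tensors stand in for the DNA and text encoder outputs, and the temperature value is an illustrative assumption.

```python
# Sketch of the symmetric contrastive objective used in CLIP-style dual encoders
# (the random embeddings are generic stand-ins for encoder outputs).
import torch
import torch.nn.functional as F

def contrastive_loss(dna_emb, text_emb, temperature=0.07):
    # Normalize both modalities and compute pairwise cosine similarities.
    dna_emb = F.normalize(dna_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = dna_emb @ text_emb.t() / temperature
    targets = torch.arange(len(dna_emb))       # matching pairs lie on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Example with random embeddings standing in for encoder outputs.
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```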
21. Deep Learning Segmentation and Analysis of Neural Organoids for Schizophrenia Research
Author
Jack Bodine, Lucrezia Criscuolo, Barbara Lykke Lind and Svetlana Kutuzova
Abstract
Manual region-of-interest annotation in two-photon calcium imaging is a necessary step for meaningful results, yet remains a tedious process, limiting scalable studies of neural organoids. We built an AI pipeline using U-Net to segment neural organoids and quantify activity, enabling cell-level investigations into schizophrenic and control organoids. Using just 12 ground truth images, we trained a U-Net segmentation model that achieved a macro Dice score of 0.8321 on the evaluation set. We used these masks to measure cell-specific activity. The activity traces were then used to analyze differences in calcium activity between schizophrenic and control subsets. The pipeline demonstrates the potential of AI to accelerate neuroscience research.
22. Dynamic Maps from Sparse Observations using Spatio-temporal Coordinates
Author
David Mickisch, Konstantin Klemmer, Mélisande Teng, Esther Rolf, Marc Rußwurm and David Rolnick
Abstract
Complex spatio-temporal dependencies govern many real-world processes -- from climate dynamics to disease spread. Modeling these processes continuously using purpose-built neural network architectures, so-called location encoders, presents an emerging paradigm in analyzing and interpolating geographic data. In this work, we expand existing spatial location encoders and introduce a new time-informed architecture: the space-time encoder. Our method takes in geographic (latitude, longitude) and temporal information simultaneously and learns smooth, continuous functions in space and time. The inputs are first transformed using positional encoding functions and then fed into neural networks that allow the learning of complex functions. We consider, via detailed experimental analysis, (1) how to integrate space and time encodings, (2) the effect of different choices of encoding functions for the time component and (3) frameworks for encouraging orthogonality of feature representations to improve representational power. We highlight the effectiveness and flexibility of the space-time encoder on a range of tasks representing different spatio-temporal dynamics, from climate prediction to animal species classification.
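A minimal sketch of the core idea, encoding coordinates with positional (Fourier) features before a neural network, is given below; the frequency schedule, normalization, and layer sizes are illustrative assumptions rather than the authors' architecture.

```python
# Sketch of a sinusoidal space-time encoding feeding an MLP, in the spirit of the
# space-time encoder described above (frequencies and layer sizes are illustrative).
import torch
import torch.nn as nn

def fourier_features(x, n_freqs=8):
    # x: (batch, d) coordinates scaled to roughly [-1, 1]; returns sin/cos features.
    freqs = 2.0 ** torch.arange(n_freqs, dtype=x.dtype)
    angles = x.unsqueeze(-1) * freqs * torch.pi        # (batch, d, n_freqs)
    return torch.cat([angles.sin(), angles.cos()], dim=-1).flatten(1)

class SpaceTimeEncoder(nn.Module):
    def __init__(self, n_freqs=8, out_dim=1):
        super().__init__()
        in_dim = 3 * 2 * n_freqs                        # lat, lon, time -> sin + cos
        self.mlp = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 128), nn.ReLU(),
                                 nn.Linear(128, out_dim))
        self.n_freqs = n_freqs

    def forward(self, lat, lon, t):
        coords = torch.stack([lat / 90.0, lon / 180.0, t], dim=-1)
        return self.mlp(fourier_features(coords, self.n_freqs))
```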
23. Enabling Selective Classification in NN-based Small-Signal Stability Analysis
Author
Galadrielle Humblot-Renaux, Yang Wu, Sergio Escalera, Thomas B. Moeslund, Xiongfei Wang and Heng Wu
Abstract
Neural network (NN)-based analysis methods have the potential to accelerate stability screening of modern power systems, but cannot guarantee accurate and reliable stability predictions for unseen operating scenarios (OSs), posing safety risks. To address this limitation, we propose a selective classification framework leveraging deep ensembles for uncertainty and asymmetric thresholding of predicted probabilities to identify safety-critical misclassifications. These uncertain OSs are then flagged for further analysis using physics-based methods, ensuring safety and robustness. We validate the proposed method both in simulation and on a physical system.

This paper is an aggressively abridged version of the interdisciplinary work by [Humblot-Renaux et al., 2025], published in IEEE Transactions on Power Electronics. Code is available at https://github.com/glhr/ibr-stability-ensemble
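For readers unfamiliar with selective classification, the sketch below illustrates the general pattern of combining a deep ensemble with asymmetric probability thresholds; the toy inputs and threshold values are illustrative and do not reproduce the published method.

```python
# Sketch of selective classification with a deep ensemble and asymmetric thresholds:
# a prediction is only accepted when the ensemble is confident enough, with a
# stricter threshold on the safety-critical "stable" decision (values are illustrative).
import numpy as np

def selective_predict(ensemble_probs, stable_thresh=0.99, unstable_thresh=0.90):
    """ensemble_probs: (n_members, n_samples) predicted P(stable) per ensemble member."""
    p_stable = ensemble_probs.mean(axis=0)               # ensemble mean probability
    predictions = np.full(len(p_stable), -1)              # -1 = defer to physics-based analysis
    predictions[p_stable >= stable_thresh] = 1             # confidently stable
    predictions[1.0 - p_stable >= unstable_thresh] = 0      # confidently unstable
    return predictions

probs = np.array([[0.995, 0.60, 0.05],
                  [0.990, 0.70, 0.02]])
print(selective_predict(probs))   # -> [ 1 -1  0]
```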
24. Enhancing Generative Decoders with Stochastic Training on Biomedical Data with Missingness
Author
Xi Shen, Yan Li, Andreas Bjerregaard and Anders Krogh
Abstract
Biomedical datasets are often heterogeneous and affected by noise or missing values. Deep Generative Decoders (DGD) provide a promising framework for latent representation learning, but their standard training procedure relies on sample-level SGD, which performs poorly with incomplete data. To address this, we introduce two stochastic training strategies---Nested Stochastic Gradient Descent (Nested SGD) and Feature Dropout---that incorporate feature-level randomness into optimization. Evaluations on biomedical tabular datasets demonstrate that Nested SGD improves robustness under missingness, while Feature Dropout accelerates convergence with lower computational cost. These results suggest that feature-level stochasticity is a practical way to strengthen biomedical AI pipelines.
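The feature-dropout idea can be illustrated with a small loss function that masks a random subset of observed features at each step; the loss form and dropout rate below are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch of feature-level dropout under missingness: compute the reconstruction loss
# only on features that are both observed and not randomly dropped in this step
# (loss form and drop rate are illustrative).
import torch

def masked_feature_dropout_loss(reconstruction, target, observed_mask, drop_rate=0.2):
    """observed_mask: float tensor, 1 where a feature is present, 0 where it is missing."""
    keep = (torch.rand_like(observed_mask) > drop_rate).float() * observed_mask
    squared_error = (reconstruction - target).pow(2) * keep
    return squared_error.sum() / keep.sum().clamp(min=1.0)
```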
25. Evaluating LLMs as Participant Simulators for Behavioral Science
Author
Sabrina Namazova, Alessandra Brondetta, Younes Strittmatter, Matthew Nassar and Sebastian Musslick
Abstract
Collecting data from human participants in cognitive experiments is a costly and time-consuming aspect of behavioral science. One promising direction is to fine-tune large language models (LLMs) on human behavior to act as participant simulators. In this work, we outline key criteria such simulators must satisfy and evaluate how well state-of-the-art fine-tuned LLMs meet them. Our analyses indicate that although such LLMs achieve high predictive accuracy, their generative behavior—a key requirement for simulating participants—systematically diverges from human data. To probe this discrepancy, we examine the role of experimental information provided to the model. The results replicate prior work showing that LLMs primarily act as autoregressive predictors: they excel at forecasting responses from prior behavior but fail to integrate information about the experimental context, leading to weak generative performance. These findings highlight both the potential and methodological challenges of using LLMs as synthetic participants, emphasizing the need for careful validation before integrating them into behavioral research.
26. Exploring Image–Text Alignment for Radio Galaxy Morphologies
Author
Erica Lastufka, Mariia Drozdova and Svyatoslav Voloshynovskiy
Abstract
We investigate whether specially constructed text captions can capture the same morphological information as radio galaxy images. Using the MiraBest dataset, we generate captions with a domain-specific prompt and evaluate their alignment with images through the SigLIP-2 vision-language model, with and without LoRA fine-tuning. Results show that caption-based classification of FR-I and FR-II galaxies performs similarly to images, with fine-tuning improving local coherence of embeddings but not global alignment.
27. Foundation Model for Multimodal Materials Science
Author
Le Yang, Anoop K Chandran, Sebastien Bompas, Bashir Kazimi, Karen Forberich, Christoph Brabec, Stefan Kesselheim, Stefan Sandfeld, Francesca Toma, Jona Östreicher, Felix Laufer, Muhammad Riaz, Robert Barthel, Ulrich Paetzold, Pascal Friederich, Michael Götte, Eva Unger, Adrian Mirza, Gordan Prastalo and Kevin Maik Jablonka
Abstract
The Helmholtz SOL-AI project is one of the seven projects of the Helmholtz Foundation Model Initiative (HFMI), dedicated to accelerating solar energy materials discovery through multimodal AI. Building on Helmholtz strengths in data, compute, and interdisciplinary collaboration, SOL-AI develops modular foundation model frameworks that unify heterogeneous scientific datasets. In its initial development phase, we created MatBind, a contrastive-learning framework that integrates crystal structures, density of states, powder X-ray diffraction (pXRD), and textual descriptors into a joint embedding space. MatBind, published at AI4Mat@ICLR 2025, demonstrates cross-modal retrieval and semantic alignment with recall@1 of up to 97%. Beyond this proof-of-concept, we showcase two new directions: (1) inverse design via arithmetic in the embedding space, and (2) extending to metal-organic frameworks (MOFs) using pXRD and MOFid descriptors. Together, these developments illustrate SOL-AI’s trajectory: from broad multimodal materials data to domain-specific applications in solar photovoltaics, supported by Helmholtz infrastructure such as JUPITER and HAICORE, and guided by FAIR and open science principles.
28. Funding by Algorithm? The responsible use of AI in research funding and assessment
Author
Denis Newman-Griffis, Helen Buckley Woods, Youyou Wu, Mike Thelwall and Jon Holm
Abstract
Research funding organisations globally are exploring the use of AI technologies to inform data-driven funding decisions and research assessment. However, AI expertise in the funding sector varies widely, and there is little best practice on responsible use of AI that is sensitive to the research funding context. We present key findings and outcomes from the Research on Research Institute's GRAIL project, which developed an international community of practice on responsible AI in research funding and assessment. We show that responsible AI implementation and management is an organisational and interprofessional challenge as much as a technical and data one, and that while AI presents significant potential benefit for funders, its impacts on science systems must be carefully managed. We present Funding by Algorithm, a new responsible AI handbook, to guide these discussions.
29. Generating Realistic Underwater Images using a Revised Image Formation Model
Author
Vasiliki Ismiroglou, Malte Pedersen, Stefan H. Bengtson, Andreas Aakerberg and Thomas B. Moeslund
Abstract
In this work, we propose an improved synthetic data generation pipeline based on the underwater image formation model with inclusion of the commonly omitted forward scattering term, while also considering a nonuniform medium. Our results demonstrate qualitative improvements over the reference model, particularly under increasing turbidity, with a selection rate of 82.5% by survey participants. Data, code, and more information can be accessed on the project page: vap.aau.dk/sea-ing-through-scattered-rays.
30. Generative AI for Life Cycle Assessment
Author
Lotte Ansgaard Thomsen, Ning An and Massimo Pizzol
Abstract
Life Cycle Assessment (LCA) is a systematic method for evaluating the environmental impacts of products and services across their entire life cycle, from raw material extraction to end-of-life disposal. A major challenge in current LCA practice is the significant manual effort required to search, match, and interpret information from large, complex, and nested databases.

This project introduces a modular AI-assisted workflow that integrates Large Language Models (LLMs) with Graph Retrieval-Augmented Generation (GraphRAG) to streamline data integration and enable more advanced querying. By representing LCA databases as graphs, the system allows for flexible and semantically rich interactions that go beyond traditional search.

A first prototype highlights the feasibility of this vision. Initial trials with generative AI show encouraging outcomes, particularly in aligning entities, extracting contextual information, and linking knowledge across datasets. Continued development is aimed at testing the boundaries of the approach, strengthening robustness, and ensuring scalability to broader applications.

The envisioned outcome is an LCA AI Assistant capable of guiding practitioners through assessments while also supporting the generation of new data entries needed to fill gaps in existing databases. This approach lays the groundwork for scalable, intelligent, and more automated sustainability analysis that reduces manual workload, improves consistency, and expands the capabilities of LCA tools.
31. GraphNeT: Machine Learning software for seeing the Universe in new light
Author
Troels Christian Petersen, Aske Rosted and Rasmus Ørsøe
Abstract
GraphNeT is an open-source Python framework for end-to-end event reconstruction in neutrino telescopes and related instruments. It is designed to bring cutting-edge deep learning methods—most notably graph neural networks (GNNs) and transformers—into large-scale astro-particle physics experiments. The software provides a modular, detector-agnostic infrastructure that allows physicists and machine learning practitioners to seamlessly develop, train, and deploy models for tasks such as particle classification, energy and direction estimation, and uncertainty quantification. Its design enables rapid experimentation across heterogeneous detector geometries, making it suitable for current experiments such as IceCube and KM3NeT as well as next-generation observatories like IceCube-Gen2 and P-ONE, but also gamma-ray telescopes like MAGIC and the Cherenkov Telescope Array.
32. HClimRep: AI Climate Model for Capturing the Atmosphere, Ocean, and Sea Ice Interactions
Author
Savvas Melidonis, Ankit Patnala, Asma Semcheddine, Martin Schultz, Julius Polz and Kacper Nowak
Abstract
Climate change presents critical challenges to ecosystems and human society. Accurate projections of climate change and its consequences are essential for assessing climate policies and developing proactive strategies to mitigate extreme weather events. While traditional climate models based on fluid dynamics and radiative transfer have provided valuable information, they face limitations such as inherent biases, coarse resolution, and structural errors. Moreover, these models are computationally intensive and dependent on physical constraints or governing equations, and it is therefore impossible to explore a wide range of policy scenarios and generate actionable climate information at the desired high resolutions. In the context of the Helmholtz Foundation Model Initiative (HFMI), we present HClimRep, an AI foundation model that captures complex interactions of the atmosphere, stratosphere, ocean, and sea ice to create realistic climate simulations. Our prototype extends the prototype of Weather Generator, an EU-funded HORIZON project on weather forecasting, with unique climate projection capabilities. HClimRep leverages large-scale transformer-based AI models, optimized for GPU efficiency, to overcome computational limitations, improve cost-accuracy trade-offs, and emulate first-principles climate simulations at a fraction of the traditional computational cost. Our main project objective is the creation of a generalizable large-scale foundation model, which will serve as a basis for various downstream climate-related applications and products, from stratospheric warming forecasts to tropical cyclone climatology and hydrological downscaling. By integrating further advancements in AI and HPC on JUPITER, Europe's first exascale supercomputer, we aspire for our model to provide a ground-breaking tool for climate research, offering new insights into the Earth's climate system, producing large climate projection ensembles for localized impact modelling, and supporting informed decision-making in climate adaptation and policy, for example in the context of digital twins.
33. Hyperbolic Contrastive Unlearning
Author
Àlex Pujol Vidal, Kamal Nasrollahi, Sergio Escalera and Thomas Moeslund
Abstract
Machine unlearning has become crucial for removing harmful concepts from large multimodal models, particularly for safety and copyright compliance. While recent work explores unlearning in Euclidean vision-language models, hyperbolic spaces remain unexplored despite their natural ability to capture semantic hierarchies. We present Hyperbolic Alignment Calibration (HAC), the first method for concept removal in hyperbolic contrastive learning models like MERU. Through systematic experiments, we demonstrate that HAC achieves better forget accuracy compared to Euclidean methods, particularly when removing multiple concepts simultaneously. Our approach introduces entailment calibration and norm regularization leveraging hyperbolic geometry's unique properties. Visualization analysis reveals that hyperbolic unlearning reorganizes semantic hierarchies, while Euclidean approaches merely disconnect cross-modal associations. These findings establish geometric unlearning as critical for safer deployment of machine learning models.
34. Hyperbolic Multimodal Representation Learning for Biological Taxonomies
Author
Zeming Gong, Chuanqi Tang, Xiaoliang Huo, Nicholas Pellegrino, Austin T. Wang, Graham W. Taylor, Angel X. Chang, Scott C. Lowe and Joakim Bruslund Haurum
Abstract
Taxonomic classification in biodiversity research involves organizing biological specimens into structured hierarchies based on evidence, which can come from multiple modalities such as images and genetic information. We investigate whether hyperbolic networks can provide a better embedding space for such hierarchical models. Our method embeds multimodal inputs into a shared hyperbolic space using a contrastive objective and a novel stacked entailment-based objective. Experiments on the BIOSCAN-1M dataset show that hyperbolic embedding achieves competitive performance with Euclidean baselines, and outperforms all other models on unseen species classification using DNA barcodes. However, fine-grained classification and open-world generalization remain challenging. Our framework offers a structure-aware foundation for biodiversity modelling, with potential applications to species discovery, ecological monitoring, and conservation efforts.
35. Incorporating Quality of Life in Climate Adaptation Planning via Reinforcement Learning
Author
Miguel Costa, Arthur Vandervoort, Martin Drews, Karyn Morrissey and Francisco C. Pereira
Abstract
Urban flooding is expected to increase in frequency and severity as a consequence of climate change, causing wide-ranging impacts that include a decrease in urban Quality of Life (QoL). Meanwhile, policymakers adapting cities to flooding have to devise adaptation strategies able to cope with the uncertain nature of climate change and the complex and dynamic nature of urban flooding. Reinforcement Learning (RL) holds significant promise in tackling such complex, dynamic, and uncertain problems. Because of this, we use RL to identify which climate adaptation pathways lead to a higher QoL in the long term. We do this using an Integrated Assessment Model (IAM) which combines a rainfall projection model, a flood model, a transport accessibility model, and a quality of life index. Our preliminary results suggest that this approach can be used to learn optimal adaptation measures and it outperforms other realistic and real-world planning strategies. Our framework is publicly available: https://github.com/MLSM-at-DTU/maat_qol_framework.
36. Integrating Wearables and Food Images for Precision Digital Healthcare
Author
Zhi Ye and Morten Arendt Rasmussen
Abstract
The paradigm shift from traditional reactive health management to precision digital healthcare demands continuous monitoring of physiological patterns outside traditional clinical settings. The current healthcare system is inherently limited in its ability to detect early disease and personalize interventions. This study proposes a novel machine learning framework that integrates multi-modal wearable data with food images to advance precision digital healthcare. We collected data from 600 subjects over 14 days using continuous glucose monitors (CGM), accelerometers, and smartphone food images. The model addresses the computational challenges of large-scale time-series signals from wearable devices by jointly embedding these multi-modal data into a shared latent space via self-supervised learning. This approach advances precision medicine by providing a comprehensive view of individual health dynamics and enabling data-driven personalized interventions such as diet recommendations based on daily lifestyle monitoring.
37. Introducing RegCheck: Automating consistency checks between study registrations and papers
Author
Jamie Cummins, Beth Clarke, Ian Hussey and Malte Elson
Abstract
Across medical and social sciences, researchers recognize that creating a record of study protocols prior to the commencement of research has benefits for both the transparency and rigour of science. Despite this, evidence suggests that across disciplines, study registrations frequently go unexamined, minimizing their effectiveness. In a way this is no surprise: manually checking registrations against papers is labour- and time-intensive, requiring careful reading across formats and expertise across domains. The advent of AI unlocks new possibilities in facilitating this activity. We present RegCheck, a modular AI-assisted tool designed to help researchers, reviewers, and editors from across scientific disciplines compare study registrations with their corresponding papers. Importantly, RegCheck keeps human expertise and judgement in the loop by (i) ensuring that users are the ones who determine which features should be compared, and (ii) presenting the most relevant text associated with each feature to the user, facilitating (rather than replacing) human discrepancy judgements. RegCheck also generates shareable reports with unique RegCheck IDs, enabling them to be easily shared and verified by other users. RegCheck is designed to be adaptable across scientific domains, as well as registration and publication formats. In this paper we provide an overview of the motivation, workflow, and design principles of RegCheck, and discuss its potential as an extensible infrastructure for reproducible science.
38. Machine Learning: New Perspectives for Science
Author
Tilman Gocht
Abstract
The Machine Learning Cluster of Excellence was established in 2019 at the University of Tübingen, Germany. This research cluster aims to advance machine learning to aid scientific understanding in a wide range of disciplines – from medicine and neuroscience to cognitive science, linguistics, economics, physics and the geosciences – and to better understand and steer the impact of machine learning on scientific practice. In the past years, we have developed the community and workflows to connect machine learning with different scientific disciplines. In the following years, we will continue to harness recent advances in machine learning for the benefit of science, sharpening the machine learning toolset, and tackling the most pressing questions in a wide range of scientific disciplines.
39. Metalens: Scaling Scientific Synthesis through AI-Powered Data Extraction and Dynamic Meta-Analysis
Author
Johanna Einsiedler, Kristin Jankowsky, Jamie Cummins, Linas Vastakas, Alan Hernes, Ulrich Schroeders and Rosa Lavelle-Hill
Abstract
The volume of scientific literature is growing at an unprecedented rate, with over 2.5 million articles published annually, yet synthesizing evidence remains slow, costly, and error-prone. Traditional systematic reviews and meta-analyses are static, limited to predefined questions, and rely on extensive manual data extraction, with error rates of 8–31%. While recent work shows that large language models (LLMs) can achieve human-level accuracy in data extraction, most high-performing systems depend on expensive proprietary models and lack interactive front-end integration. To address this, we are developing Metalens, an open-source platform that enables dynamic, user-driven meta-analyses through automated data extraction and continuously updated synthesis. Powered by thoroughly tested LLM pipelines, this project combines hyperparameter optimization with calibrated uncertainty quantification to ensure reliable extraction while flagging cases for human verification. This research addresses critical gaps in automated evidence synthesis, advances methodological standards for LLM evaluation in scientific applications, and enhances the accessibility of research findings for a broader audience.
40. MinervAI: Using Generative AI to Assist, Not Replace Humans in Peer Review
Author
Imogen Hüsing, Tim Petersen, Cornelius Wolff, Moritz-André Weiher and Sebastian Musslick
Abstract
The exponential growth of submitted scientific articles is straining an already overburdened peer review system. This trend is fueled by the increasing use of large language models (LLMs) for scientific writing, which enables higher article throughput per author without a corresponding increase in available reviewers. While LLMs can support specific review tasks, such as citation verification and argumentation mapping, their limitations in domain expertise, consistency, and bias caution against their use as autonomous reviewers. Based on structured expert interviews with scientists, we argue that LLMs should complement rather than replace human judgment in peer review. We present a tool that leverages LLM strengths to address key pain points in the scientific review process, while preserving the responsibility of human experts for critical evaluations. This approach seeks to improve both the integrity and efficiency of peer review through transparent, constrained AI support in scientific practice.
41. MOSAIC: A Multilingual, Taxonomy-Agnostic, and Computationally Efficient Approach for Radiological Report Classification
Author
Alice Schiavone, Desmond Elliott, Melanie Ganz, Marco Fraccaro, Rasmus Bonnevie, Lea Marie Pehrson, Silvia Ingala, Micheal Bachmann Nielsen and Vincent Beliveau
Abstract
Radiology reports contain rich clinical information for training imaging models without costly manual annotation. Existing methods have key drawbacks: rules miss linguistic variability, supervised models need large labeled sets, and LLM-based systems often rely on closed-source or resource-intensive models unsuitable for clinical use. They are also mostly English-only and limited to single taxonomies.

We present MOSAIC, a multilingual, taxonomy-agnostic, and efficient approach for radiology report classification. Based on the compact open-access MedGemma-4B model, MOSAIC supports zero-/few-shot prompting and lightweight fine-tuning, running on consumer GPUs. Evaluated across seven datasets in English, Spanish, French, and Danish, it achieves a mean F1 of 88 on chest X-rays, matching or surpassing expert-level baselines while requiring only 24 GB of GPU memory.
42. Neural Adaptive H∞ Control of an Active Suspension System
Author
Fouzi Tabouri, Kim Guldstrand Larsen and Christian Schilling
Abstract
We present a novel adaptive active suspension control framework that combines robust H∞ control with a Neural Network-based road classification and controller scheduling. The Neural Network classifies road conditions in real time, according to ISO 8608 standards, enabling dynamic scheduling to adapt the suspension system control to the identified road class. We evaluate and validate the classifier and the controller’s performance on realistic road profiles that adhere to ISO standards.
43. Neural Networks as Surrogates for Changing Dynamics in Epidemics
Author
Lukas Stelz, Jan Fuhrmann and Pascal Nieters
Abstract
Traditional epidemiological models struggle to capture how disease transmission changes over time due to policy interventions, behavioral shifts, and evolving testing strategies. We developed a hybrid AI-modeling framework that uses neural networks to automatically discover these time-varying patterns from routine surveillance data alone. Applied to Germany's COVID-19 data, our method successfully reconstructed how transmission rates responded to lockdowns and holidays, while simultaneously uncovering the hidden number of undetected infections as testing capacity evolved. Our AI system works with only reported cases and deaths, making it broadly applicable for real-time epidemic analysis. This approach offers public health officials an interpretable tool to evaluate interventions and understand outbreak dynamics.
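A minimal sketch of the hybrid idea, in which a small neural network supplies a time-varying transmission rate inside a classic SIR model, is given below; the discretization, parameters, and training comment are simplified stand-ins for the full framework.

```python
# Sketch of the hybrid idea: a small neural network supplies a time-varying
# transmission rate beta(t) inside a classic SIR model (a simplified stand-in
# for the full framework described above).
import torch
import torch.nn as nn

beta_net = nn.Sequential(nn.Linear(1, 32), nn.Tanh(), nn.Linear(32, 1), nn.Softplus())

def simulate_sir(days=100, gamma=0.1, dt=1.0, population=1.0, i0=1e-4):
    s, i, r = population - i0, i0, 0.0
    infected_curve = []
    for day in range(days):
        beta = beta_net(torch.tensor([[day / days]])).squeeze()  # learned beta(t)
        new_inf = beta * s * i / population * dt
        new_rec = gamma * i * dt
        s, i, r = s - new_inf, i + new_inf - new_rec, r + new_rec
        infected_curve.append(i)
    return torch.stack(infected_curve)

# Training would minimize the mismatch between simulated and reported cases,
# e.g. loss = ((simulate_sir() - observed_cases) ** 2).mean(), by gradient descent.
```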
44. Neuro-Symbolic Assistants for Science: Compiling Hypotheses into Verifiable Workflows
Author
Alexandre Termier and Luis Galárraga
Abstract
In an era of unprecedented data availability, scientific progress is increasingly constrained by the difficulty of designing reliable data-analysis workflows. These workflows demand scarce expertise, constant monitoring of expanding literature, and extensive trial-and-error. Generative AI can draft workflows but provides no guarantees of correctness, provenance, privacy, or bias.

We argue for AI assistants that shoulder much of this workload. Relying on neuro-symbolic approaches, they can synthesize, validate, and explain workflows while enforcing guarantees on reproducibility, provenance, and bias auditing. Recent advances make this vision feasible, offering a path to more efficient, transparent, and accessible scientific discovery.
45. On Foundation Models for Temporal Point Processes to Accelerate Scientific Discovery
Author
David Berghaus, Patrick Seifner, Kostadin Cvejoski and Ramses Sanchez
Abstract
Many scientific fields, from medicine to seismology, rely on analyzing sequences of events over time to understand complex systems. Traditionally, machine learning models must be built and trained from scratch for each new dataset, which is a slow and costly process. We introduce a new approach: a single, powerful model that learns the underlying patterns of event data in-context. We trained this "foundation model" on millions of simulated event sequences, teaching it a general-purpose understanding of how events can unfold. As a result, our model can analyze new scientific data instantly, without retraining, simply by looking at a few examples from the dataset. It can also be quickly fine-tuned for even higher accuracy. This approach makes sophisticated event analysis more accessible and accelerates the pace of scientific discovery. Our pretrained model, repository and tutorials will soon be available online.
46. Online-learning tracking of Qubit control parameters under jump-diffusion type drift
Author
Oswin Krause, Fabrizio Berritta, Ferdinand Kuemmeth, Jacob Hastrup and Morten Kjaergaard
Abstract
Controlling a quantum computer requires the estimation of a number of parameters for each qubit. Since the parameters fluctuate quickly over time, there is a need for fast and efficient online estimation algorithms that can adapt the measurement parameters to gain as much information as possible. We present our ongoing work on developing an online Bayesian estimation protocol for estimating the decay time of a transmon qubit and extend it to handle jump-diffusion models. We apply the algorithms both to simulated data with known ground truth and to real-world data.
47. Optimising DMTA through Academia-Industry Partnerships
Author
Fabian Krüger, Andrea Hunklinger, Matthew Ball and Bob Schendel
Abstract
Industrial drug discovery pipelines, such as the common design–make–test–analyze (DMTA) cycle, are lengthy and resource-intensive. While AI has begun to accelerate and optimize these processes, we argue that further progress could be achieved by incorporating academic innovations and fostering collaboration through joint research positions and challenge initiatives that evaluate models on industry-relevant datasets.
48. Physics-Informed Mixture Models and Surrogate Models for Precision Additive Manufacturing
Author
Sebastián Basterrech, Shuo Shan, Debabrata Adhikari and Sankhya Mohanty
Abstract
In this study, we leverage a mixture model learning approach to identify defects in laser-based Additive Manufacturing (AM) processes. By incorporating physics-based principles, we also ensure that the model is sensitive to meaningful physical parameter variations. The empirical evaluation was conducted by analyzing real-world data from two AM processes: Directed Energy Deposition and Laser Powder Bed Fusion. In addition, we also studied the performance of the developed framework over public datasets with different alloy types and experimental parameter information. The results show the potential of physics-guided mixture models to examine the underlying physical behavior of an AM system.
49. PolarBERT: A Foundation Model for IceCube
Author
Inar Timiryasov, Jean-Loup Tastet and Oleg Ruchayskiy
Abstract
The IceCube Neutrino Observatory instruments a cubic kilometer of Antarctic ice with optical sensors to detect the light emitted by neutrino interactions. Its data are used to reconstruct the direction, energy, and type of neutrinos for particle physics and astrophysics research. Although deep learning has been successfully applied to these reconstruction tasks, existing methods are typically supervised and require extensive labeled Monte Carlo simulations. In this work, we develop PolarBERT, a foundation model for IceCube, pre-trained without labels in a self-supervised manner. We show that it can be fine-tuned for neutrino directional reconstruction in a sample-efficient way, and that performance improves with a larger pre-training dataset.
50. Predicting Comprehensibility in Scientific Text Based on Word Facilitation
Author
Moritz Hartstang, Martyna Plomecka, Nicole Gotzner and Sebastian Musslick
Abstract
Comprehensible scientific writing is the basis for interdisciplinary work and science literacy. Common metrics for measuring text comprehensibility are based on word length, frequency, and predictability. Since scientific writing often contains long, rare, and low-predictability words, such linguistic metrics assess it negatively without sufficiently taking the context into account. Here, we introduce an attention-based metric derived from transformer models to capture how much a word facilitates the processing of others. We find that facilitation complements traditional linguistic metrics by explaining late reading times and neural correlates of text understanding, capturing unique variance for words that the existing linguistic metrics rate as difficult. These results hold promise for the development of an interpretable comprehensibility metric, usable for scientific writing and potentially adaptable to individual comprehension based on vocabulary knowledge.
51. Protein KABOOM: Kermut-Aided Bayesian Optimization Of Mutations
Author
Mads Herbert Kerrn, Wouter Boomsma and Henry Moss
Abstract
Machine learning has shown promise in enhancing protein engineering, particularly through guided selection in directed evolution. This study introduces a Bayesian optimization alternative to directed evolution. We propose using a Gaussian process with the recent Kermut kernel to guide the suggestion of protein variants, leveraging uncertainty estimates through Thompson sampling. Optimizing against established prediction tools for protein stability and solvent accessible surface area, we demonstrate that our Bayesian optimization framework consistently identifies superior protein variants compared to other methods, including traditional directed evolution, zero-shot models, and existing ML-guided directed evolution procedures.
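One round of GP-guided variant selection via Thompson sampling can be sketched as below, with a generic RBF kernel standing in for the Kermut kernel and random feature vectors standing in for protein variant encodings.

```python
# Sketch of one Thompson-sampling round over candidate variants with a Gaussian process
# (generic RBF kernel and random features are illustrative stand-ins).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X_observed = rng.normal(size=(20, 16))        # featurized variants measured so far
y_observed = rng.normal(size=20)               # e.g. predicted stability scores
X_candidates = rng.normal(size=(200, 16))      # pool of unmeasured variants

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(X_observed, y_observed)

# Thompson sampling: draw one function sample over the candidates and pick its argmax.
sample = gp.sample_y(X_candidates, n_samples=1, random_state=0).ravel()
next_variant = int(np.argmax(sample))
print(f"next variant to test: candidate {next_variant}")
```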
52. Reinforcement Learning for Efficient Quantum Circuit Equivalence Checking via Tensor Network Contraction
Author
Suhiab Al-Rousan, Kim Guldstrand Larsen, Christian Schilling and Max Tschaikowski
Abstract
Verifying whether two quantum circuits realize the same unitary is vital for optimization and compilation. However, approaches based on tensor network contraction often suffer from exponential blow-ups, not least since finding an optimal contraction order is NP-hard. We address this by employing tensor decision diagrams and formulating contraction as a sequential decision-making task. A reinforcement learning agent, trained via proximal policy optimization with a graph neural network policy, learns contraction orders to minimize both floating-point operations and peak memory. Our evaluation shows that the learned policy consistently outperforms heuristic baselines in running time while maintaining competitive memory usage, highlighting the potential of AI for scalable quantum circuit verification.
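The full PPO-plus-GNN setup is beyond the scope of an abstract, but the sequential-decision formulation it relies on can be sketched in a few lines; in this hypothetical example tensors are index sets, each step contracts one pair, and a greedy cost rule stands in for the learned policy (the dimensions and the FLOP estimate are illustrative assumptions).

# Minimal sketch of contraction ordering as a sequential decision problem.
# Tensors are index sets with given dimensions; at each step a policy picks
# a pair to contract.  A greedy rule stands in for the learned RL policy.
from itertools import combinations

dims = {"a": 8, "b": 8, "c": 4, "d": 4, "e": 16, "f": 2}
network = [{"a", "b"}, {"b", "c", "e"}, {"c", "d"}, {"d", "e", "f"}]

def pair_cost(t1, t2):
    # crude FLOP estimate: product of the dimensions of all involved indices
    prod = 1
    for idx in t1 | t2:
        prod *= dims[idx]
    return prod

def contract(net, policy):
    total = 0
    net = [set(t) for t in net]
    while len(net) > 1:
        i, j = policy(net)
        total += pair_cost(net[i], net[j])
        shared = net[i] & net[j]
        others = set().union(*(t for k, t in enumerate(net) if k not in (i, j)))
        merged = (net[i] | net[j]) - (shared - others)   # keep indices still needed elsewhere
        net = [t for k, t in enumerate(net) if k not in (i, j)] + [merged]
    return total

greedy = lambda net: min(combinations(range(len(net)), 2),
                         key=lambda ij: pair_cost(net[ij[0]], net[ij[1]]))
print("greedy contraction cost:", contract(network, greedy))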
53. Rethinking Large Language Model Development for Chemistry
Author
Nawaf Alampara, Mara Schilling-Wilhelmi, Martiño Ríos-García and Kevin Maik Jablonka
Abstract
AI models fall short in real-world chemistry applications due to a lack of domain knowledge. Existing approaches often rely on easily accessible datasets and evaluate progress primarily with simple question–answer benchmarks, offering only limited insights into true chemical competence. Drawing on our experience in building evaluations and datasets, we argue for a paradigm shift: the development of domain-specific models for chemistry trained on diverse, high-quality data that captures fine-grained reasoning. We highlight limitations of existing datasets and evaluation methods, and propose an approach involving practice-oriented evaluation tasks, chemical knowledge-intensive pre-training, and reasoning-intensive post-training. This new paradigm can better align AI capabilities with the complex demands of chemical discovery and innovation.
54. Risk-assessment of Mosquito-borne Diseases in African Cities
Author
Venkanna Babu Guthula, Mary Hahm, Stefan Oehmcke, Remigio Chilaule, Hui Zhang, Dimitri Gominski, Alex Limwagu, Exavery Chaki, Fredros Okumu, Leka Tingitana, Anders Hermund, Lucy S. Tusting, Rasmus Fensholt, Gustavo Riberio, Jakob Brandtberg Knudsen, Yeromin Mlacha, Nico Lang, Ankit Kariryaa, Johan Mottelson, Mary Cameron and Christian Igel
Abstract
Mosquito-borne diseases such as malaria, dengue, and yellow fever have been spreading across African cities, placing more than 126 million residents at risk of large-scale outbreaks. Poor housing quality is a key driver of mosquito-borne diseases, yet the role of the urban built environment in shaping the transmission dynamics of these diseases remains understudied. Therefore, we assess risk factors in the built environment and how they relate to vector-borne diseases. To do so, we extract these risk factors from high-resolution remote sensing imagery with deep learning to identify high-risk areas and to inform targeted intervention strategies. Here, we present initial results on mapping some of the key risk factors from drone imagery. Our findings demonstrate the ability to capture fine-grained urban details, such as roof materials and small water-holding containers, which are critical indicators of vector habitats.
55. Storm Surge Forecasting with LSTM Networks
Author
Villy Mik-Meyer, Francisco C. Pereira, Martin Drews and Morten Andreas Dahl Larsen
Abstract
Accurate storm surge prediction is essential for coastal risk management and climate adaptation, yet conventional hydrodynamic models are too computationally demanding for large ensemble analyses. We develop and validate a machine learning framework for the North Sea and Baltic Sea, trained on 58 years of wind data using a Long Short-Term Memory (LSTM) architecture to capture temporal dynamics of water levels. The approach requires only a fraction of the resources of physical models, enabling rapid forecasts across large domains and time horizons. This efficiency makes ensemble-based climate impact assessments feasible, offering a scalable alternative for projecting extreme water levels and their statistical distributions under future climate scenarios.
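A minimal sketch of the kind of sequence-to-one LSTM surrogate described above is shown below; the feature count, window length, and synthetic data are placeholders rather than the study's configuration.

# Minimal sketch of a sequence-to-one LSTM surrogate: a window of wind
# features maps to a predicted water level.  Shapes and data are placeholders.
import torch
import torch.nn as nn

class SurgeLSTM(nn.Module):
    def __init__(self, n_features=6, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, time, features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])      # water level at the final time step

model = SurgeLSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(32, 48, 6)               # 32 samples, 48-step wind window
y = torch.randn(32, 1)                    # synthetic target water levels
for _ in range(5):                        # toy training loop
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(x), y)
    loss.backward()
    opt.step()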
56. Synthesizability via Reward Engineering: Expanding Generative Molecular AI into Synthetic Space
Author
Dominik Dekleva, Jure Borišek, Martina Hrast-Rambaher, Alexey Voronov, Hannes H. Loeffler, Jon Paul Janet and Albin Ekborg
Abstract
Generating novel, drug-like molecules with realistic synthetic pathways is an essential goal in computer-aided drug discovery, yet generative models often lack synthesis awareness, resulting in compounds that are difficult or impossible to produce. To overcome this limitation, models must optimize not only molecular properties but also synthetic feasibility, which is not fully meaningful unless it incorporates user-defined factors like preferred reactions and available starting materials. Moreover, generating singleton compounds without respecting possibilities for parallel synthesis greatly increases the cost and complexity of synthesizing multiple proposed molecules. In practice, medicinal chemistry workflows group targets into families sharing coherent synthetic strategies and common intermediates, enabling efficient parallel and automated synthesis. Here we introduce SynthSense, a reinforcement learning framework that guides molecular design using retrosynthetic feedback. SynthSense offers extrinsic reward functions that assess molecule-level feasibility, such as adherence to available building blocks and preferred reactions, or synthesizability via predefined synthetic routes. It also implements intrinsic, batch-level functions that enforce route coherence across generated compounds. In silico multi-parameter validation demonstrated clear advantages over naïve approaches: SynthSense generated 6.2-fold more synthetically feasible hits than the control trained without SynthSense, achieved a 727-fold enrichment in hits synthesizable with a predefined synthetic route, and populated 2.3-fold more virtual parallel synthesis plates. These results demonstrate that by reframing synthesizability from a mere constraint into an active design objective, generative AI can better support the realities of modern medicinal chemistry by enabling personalized synthetic design, accelerating SAR exploration, and aligning more naturally with automated parallel synthesis workflows.
57. The APEX project - Artificial Intelligence for Policy Excellence in the Climate Crisis: initial results and outlook
Author
João Böger, Oskar Lassen, Francisco Madaleno, Carlos Azevedo, Guido Cantelmo, Filipe Rodrigues, Serio Agriesti and Francisco Pereira
Abstract
Policy-making requires comparing multiple scenarios involving many variables and uncertainties, assessing trade-offs between economic, environmental and social factors. To make informed decisions, policymakers often rely on data from simulators, to frame the impacts of choices on a set of possible futures. These simulations, though, can become computationally intensive and run over the course of days, if not weeks. This problem can become an insurmountable barrier, for example, in tackling the climate crisis, which requires long-term planning and the comparison of different policy trajectories on a large scale. We argue that to solve this problem, the generalizing power of simulators has to be coupled with the computational efficiency of machine learning solutions. In this position paper we detail how simulators and non-parametric approaches can be enhanced through each other’s strengths and how the results may increase the number of scenarios that can inform policy-making, while reducing uncertainties.
58. The Cost-Benefit of Interdisciplinarity in AI for Mental Health
Author
Katerina Drakos, Eva Paraschou, Simay Toplu, Line Harder Clemmensen, Christoph Lütge, Nicole Nadine Lønfeldt and Sneha Das
Abstract
Artificial intelligence has been introduced as a way to improve access to mental health support. However, most AI mental health chatbots rely on a limited range of disciplinary input, and fail to integrate expertise across the chatbot's lifecycle. This paper examines the cost-benefit trade-off of interdisciplinary collaboration in AI mental health chatbots. We argue that involving experts from technology, healthcare, ethics, and law across key lifecycle phases is essential to ensure value-alignment and compliance with the high-risk requirements of the AI Act. We also highlight practical recommendations and existing frameworks to help balance the challenges and benefits of interdisciplinarity in mental health chatbots.
59. The Eight Barriers to Implementing Artificial Intelligence in Research: A Researchers' Consensus Qualitative Assessment
Author
Quentin Loisel and Sebastien Chastin
Abstract
Artificial intelligence (AI) is transforming the research ecosystem, offering unprecedented opportunities to accelerate discovery, enhance efficiency, and foster interdisciplinary collaboration. Yet its integration into scientific practice is hindered by systemic barriers that extend beyond technical challenges to encompass issues of literacy, infrastructure, governance, ethics, and methodology. Despite a growing body of commentary, current literature often remains fragmented or discipline-specific, leaving a gap in cross-cutting, evidence-based understanding of the obstacles to responsible AI adoption.

To address this, we conducted a structured consensus study involving 29 researchers from diverse disciplines, career stages, and institutional contexts. Using a concept mapping methodology, participants generated and validated 157 barrier statements, which were refined into eight categories: skill gaps and uneven AI literacy; inadequate technical infrastructure and access; data quality, governance, and security; regulatory and policy gaps; financial and institutional constraints; trust, transparency, and explainability; ethical, social, and accountability concerns; and methodological and epistemic incompatibility. Each was rated for urgency, revealing pressing challenges around data governance, ethical responsibility, and methodological adaptation.

By translating lived experiences into structured knowledge, this work provides the first collective, cross-disciplinary map of barriers to AI in research. It offers a framework for aligning policy, infrastructure, and practice, inviting collaboration among researchers, funders, publishers, and regulators to ensure that AI-driven transformation strengthens science rather than undermines it.
60. The Helmholtz Foundation Model Initiative (HFMI) - Harnessing Foundation Models for Science and Society
Author
Eirini Kouskoumvekaki, Dagmar Kainmueller, Stefan Kesselheim, Stefan Bauer and Fabian Isensee
Abstract
The Helmholtz Foundation Model Initiative (HFMI) pioneers the development of open, interdisciplinary foundation models (FMs) to accelerate scientific discovery and deliver broad societal benefit. FMs represent a new generation of AI models, trained on vast and diverse datasets with immense computing power. Their adaptability enables them to tackle complex challenges across disciplines, ranging from climate science to healthcare, energy, and the life sciences. Unlike traditional AI models designed for narrow tasks, FMs provide a broad knowledge base that can be fine-tuned for many applications with minimal additional data.

HFMI builds on the unparalleled strengths of the Helmholtz Association, Germany’s largest scientific organization, including an active and vibrant AI community, vast domain-specific data repositories, leading expertise in both AI and domain sciences, and access to world-class computing infrastructure. A central role is played by JUPITER, Europe’s first exascale supercomputer at Forschungszentrum Jülich, which will surpass one quintillion calculations per second. JUPITER is complemented by the Helmholtz AI computing resources (HAICORE), a distributed network of powerful clusters that expand access to advanced computing across Helmholtz centers. Together, these resources enable the efficient and sustainable training of large-scale FMs with guaranteed scientific impact.

Seven pilot projects launched between spring 2024 and the start of 2025 demonstrate HFMI’s breadth and ambition: 3D-ABC (quantifying the global carbon budget of vegetation and soils), the Human Radiome Project (advancing 3D radiology and personalized medicine), HClimRep (developing high-resolution FM-based climate models), SOL-AI (accelerating discovery of photovoltaic materials), Virtual Cell (building a digital twin of a cell), AqQua (creating the first foundational pelagic imaging model for marine biodiversity), and PROFOUND (modeling protein dynamics to advance biomedicine and biotechnology). These projects are complemented by a Synergy Unit that addresses cross-cutting questions of scalability, interpretability, and interdisciplinary knowledge exchange, ensuring long-term impact across all domains.

HFMI aims to follow the principles of open science and FAIR data, democratizing access to advanced AI tools and empowering under-resourced researchers and institutions worldwide. By lowering barriers to AI adoption, HFMI envisions enhanced collaboration, reduced costs, and accelerated innovation. Its inclusive approach ensures that not only Helmholtz’s 45,000 employees but also partners across academia, industry, and civil society can benefit from these cutting-edge resources.
61. The Helmholtz Model Zoo: A Cloud-Based Platform for AI Model Sharing and Inference in the Helmholtz Association
Author
Hans Werners, Engin Eren, Patrick Fuhrmann and Philipp Heuser
Abstract
The Helmholtz Model Zoo (HMZ) is a cloud-based platform enabling seamless sharing and inference of deep learning models across the Helmholtz Association’s 18 research centers. By automating model deployment and providing both web and programmatic interfaces, the HMZ lowers technical barriers to AI adoption in scientific research. Integrated with Helmholtz infrastructure (Helmholtz ID authentication, dCache storage, DESY’s HPC cluster with NVIDIA L40S GPUs), the platform ensures secure, scalable inference while maintaining data sovereignty. NVIDIA Triton Inference Server and Slurm manage GPU resources efficiently, supporting datasets from gigabytes to terabytes. Virtual organizations enable fine-grained access control for specialized models. Launched in July 2025 in beta, the HMZ focuses on domain-specific applications, with future plans for model quality metrics and agentic capabilities.
62. The Quest for Reliable Metrics of Responsible AI
Author
Theresia Veronika Rampisela, Maria Maistro, Tuukka Ruotsalo and Christina Lioma
Abstract
The development of Artificial Intelligence (AI), including AI in Science (AIS), should be done following the principles of responsible AI. Progress in responsible AI is often quantified through evaluation metrics, yet there has been less work on assessing the robustness and reliability of the metrics themselves. We reflect on prior work that examines the robustness of fairness metrics for recommender systems as a type of AI application and summarise their key takeaways into a set of non-exhaustive guidelines for developing reliable metrics of responsible AI. Our guidelines apply to a broad spectrum of AI applications, including AIS.
63. Towards a Multidimensional Impact Evaluation Framework for AI Applications in the Energy Sector
Author
Emily Bringmann, Bianca Weber, Celina Kacperski and Florian Kutzner
Abstract
Artificial Intelligence (AI) is increasingly deployed to enhance energy efficiency, for example by predicting energy demand, enabling demand-side management, or supporting predictive maintenance of renewable energy infrastructure. While these applications offer substantial potential to reduce energy consumption and greenhouse gas emissions, their actual impacts often remain unclear. Especially unintended consequences are rarely assessed. This lack of systematic impact evaluation hampers responsible deployment and risks undermining the contribution of AI to a just energy transition. This contribution addresses this gap by developing a multidimensional impact evaluation framework tailored to AI applications in the energy sector. Based on a systematic literature review and expert interviews, the framework integrates environmental, social, economic, and technical impact dimensions and embeds energy justice considerations. It provides a modular structure of outcomes and associated indicators that practitioners can select according to their specific use case. The framework aims to support evidence-based evaluation of AI-based energy solutions, ensuring their effectiveness, efficiency, and fairness.
64. Towards a Pan-European Generative AI for Population Health
Author
Andrea Ganna
Abstract
Generative Artificial Intelligence (GenAI) has the potential to transform population health research by enabling digital twins and disease modelling across countries. Realizing this vision in Europe requires harmonized use of health data within the European Health Data Space (EHDS). We present the first steps towards a pan-European GenAI framework. Using aggregated data from 84M individuals across nine countries, we evaluated cross-country comparability of diagnosis and medication coding, revealing good concordance across Europe but marked divergences with U.S. data, underscoring the need for EU-specific models. Pooling data can enable rare disease studies that would be otherwise infeasible. Delphi, a foundation model trained on UK health trajectories, performed robustly in Finland, though with imperfect transferability. These findings establish the feasibility of pan-European GenAI and outline a roadmap for developing foundation models aligned with EHDS objectives. Our ongoing work includes the integration of more diverse data modalities and improving GenAI model transferability.
65. Towards Fast Coarse-graining and Equation Discovery with Foundation Inference Models
Author
Manuel Hinz, Maximilian Mauel, Patrick Seifner, David Berghaus, Kostadin Cvejoski and Ramses Sanchez
Abstract
High-dimensional recordings of dynamical processes are often characterized by a much smaller set of effective variables, evolving on low-dimensional manifolds. Identifying these latent dynamics requires solving two intertwined problems: discovering appropriate coarse-grained variables and simultaneously fitting the governing equations. Most machine learning approaches tackle these tasks jointly by training autoencoders together with models that enforce dynamical consistency. We propose to decouple the two problems by leveraging the recently introduced Foundation Inference Models (FIMs). FIMs are pretrained models that estimate the infinitesimal generators of dynamical systems (e.g., the drift and diffusion of a stochastic differential equation) in zero-shot mode. By amortizing the inference of the dynamics through a FIM with frozen weights, and training only the encoder–decoder map, we define a simple, simulation-consistent loss that stabilizes representation learning. A proof-of-concept on a stochastic double-well system with semicircle diffusion, embedded into synthetic video data, illustrates the potential of this approach for fast and reusable coarse-graining pipelines.
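The decoupling idea can be sketched as follows; here a hand-coded double-well drift acts as a stand-in for a pretrained, frozen FIM, and only the encoder and decoder receive gradients from a reconstruction term plus a one-step simulation-consistency term (all dimensions and data are illustrative assumptions, not the authors' setup).

# Minimal sketch of the decoupled training idea: a frozen dynamics estimator
# (stand-in for a pretrained FIM) supplies a latent drift; only the
# encoder/decoder is trained with reconstruction + simulation-consistency losses.
import torch
import torch.nn as nn

latent_dim, obs_dim, dt = 1, 64, 0.01

encoder = nn.Sequential(nn.Linear(obs_dim, 32), nn.Tanh(), nn.Linear(32, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 32), nn.Tanh(), nn.Linear(32, obs_dim))

class FrozenDrift(nn.Module):             # placeholder for a pretrained FIM
    def forward(self, z):
        return z - z ** 3                 # double-well drift, kept frozen

fim = FrozenDrift()
for p in fim.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
x_t, x_next = torch.randn(128, obs_dim), torch.randn(128, obs_dim)   # toy frame pairs

for _ in range(5):
    z_t = encoder(x_t)
    recon_loss = nn.functional.mse_loss(decoder(z_t), x_t)
    z_pred = z_t + dt * fim(z_t)          # one Euler step under the frozen dynamics
    sim_loss = nn.functional.mse_loss(z_pred, encoder(x_next))
    loss = recon_loss + sim_loss
    opt.zero_grad(); loss.backward(); opt.step()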
66. Towards Foundation Inference Models that Learn ODEs In-Context
Author
Maximilian Mauel, Manuel Hinz, Patrick Seifner, David Berghaus and Ramsés Sánchez
Abstract
Ordinary differential equations (ODEs) describe dynamical systems evolving deterministically in continuous time. Accurate data-driven modeling of systems as ODEs, a central problem across the natural sciences, remains challenging, especially if the data is sparse or noisy. We introduce FIM-ODE (Foundation Inference Model for ODEs), a pretrained neural model designed to estimate ODEs zero-shot (i.e., in-context) from sparse and noisy observations. Trained on synthetic data, the model utilizes a flexible neural operator for robust ODE inference, even from corrupted data. We empirically verify that FIM-ODE provides accurate estimates, on par with a neural state-of-the-art method, and qualitatively compare the structure of their estimated vector fields.
67. Towards multimodal genomic foundation modelling: integrating expression data
Author
Frederikke Marin, Panagiotis Antoniadis, Anders Christensen, Felix Teufel, Anders Krogh, Ole Winter and Wouter Boomsma
Abstract
The genome encodes the blueprint of cellular function. Advancing our understanding of genomic regulation is essential both for biology and for biomedical progress. The success of protein language models has inspired analogous efforts in genomics, but current models often underperform compared to simple task-specific ones, suggesting that sequence-only training might fail to capture long-range regulatory and structural interactions. To address this, we propose a multimodal foundation model that combines language modelling with functional genomics prediction and showcase that incorporating expression data improves representation quality and downstream performance.
68. UNAAGI: Atom-Level Diffusion for Generating Non-Canonical Amino Acid Substitutions
Author
Han Tang and Wouter Boomsma
Abstract
Proposing beneficial amino acid substitutions, whether for mutational effect prediction or protein engineering, remains a central challenge in structural biology. Recent inverse folding models, trained to reconstruct sequences from structure, have had considerable impact in identifying functional mutations. However, current approaches are constrained to designing sequences composed exclusively of natural amino acids (NAAs). The larger set of non-canonical amino acids (NCAAs), which offer greater chemical diversity, and are frequently used in in-vivo protein engineering, remain largely inaccessible for current variant effect prediction methods.

To address this gap, we introduce UNAAGI, a diffusion-based generative model that reconstructs residue identities from atomic-level structure using an E(3)-equivariant framework. By modeling side chains in full atomic detail rather than as discrete tokens, UNAAGI enables the exploration of both canonical and non-canonical amino acid substitutions within a unified generative paradigm. We evaluate our method on experimentally benchmarked mutation effect datasets and demonstrate that it achieves substantially improved performance on NCAA substitutions compared to the current state-of-the-art. Furthermore, our results suggest a shared methodological foundation between protein engineering and structure-based drug design, opening the door for a unified training framework across these domains.
69. Visual Anomaly Detection for Bycatch Detection onboard Fishing Vessels
Author
Stefan Hein Bengtson, Malte Pedersen, Mathias Søgaard, Claus Reedtz Sparrevohn and Kamal Nasrollahi
Abstract
With increasingly fragile marine environments, accurately documenting bycatch on fishing vessels is becoming ever more important for conducting informed and sustainable fisheries management. In this study, we investigate the feasibility of using visual anomaly detection (VAD) for bycatch detection using pre-installed general-purpose electronic monitoring cameras. We introduce a novel VAD dataset on which we achieve a mean AUROC of 0.79 with consistent performance across different types of bycatch, underscoring the potential of applying anomaly detection to automate bycatch detection.
Demonstrations
70. A national infrastructure for sharing AI solutions in medical imaging for radiotherapy of cancer diseases
Author
Stine Korreman, Ivan Richter Vogelius, Simon Long Krogh and Christian Rønn Hansen
Abstract
Radiotherapy (RT) is a resource-intensive process, with considerable variation in practice across centers. Artificial intelligence (AI) presents a means of automation and increased consistency in RT workflows, thereby reducing the burden on clinical staff and supporting more uniform, equitable care. We demonstrate a national data science infrastructure that enables secure medical image data upload, automated curation, ontology mapping, and standardized model deployment. By building on existing healthcare communication networks and adhering to FAIR principles, the infrastructure allows validated AI models to be trained and applied across institutions. Segmentation of organs at risk in CT-scans serves as one example of AI-supported automation, but the framework is designed to accommodate a wide range of applications including treatment planning and quality assurance.
71. AIoD: The European Open Science Commons for AI
Author
Marco Rorro and Ville Tenhunen
Abstract
AI on-Demand (AIoD) is a collaborative platform designed to unify access to AI resources and accelerate research and innovation across disciplines. Developed through European initiatives such as AI4EU, AI4Europe and DeployAI, AIoD provides a trusted ecosystem for sharing datasets, models, experiments, tools, publications, educational materials, and services. By integrating governance, participation, interoperability, and technical infrastructure, AIoD lowers barriers to AI research, fosters collaboration, and supports responsible and reproducible science.

In this demo, we will showcase the main services of the AIoD: the central portal, Single-Sign-On authentication, the AI Catalogue for browsing and searching assets and the metadata catalogue with its REST API and Python SDK for programmatic access. We will also demonstrate how services such as the Research and Innovation AI Lab (RAIL) and AI Builder rely on the metadata catalogue to enable experimentation, workflow design, and deployment. The objective of AIoD is to bring together the European AI community, promote open and reusable AI, and provide a sustainable commons to accelerate science, power innovation, and ensure responsible use of AI for society.
72. Aloe-Vision: Building Robust Vision-Language Models for Healthcare
Author
Cumhur Erkut
Abstract
Large Vision-Language Models (LVLMs) specialized in healthcare have recently become a hot area of research due to their potential use in practical domains. This work introduces Aloe-Vision-7B and Aloe-Vision-72B, two LVLMs openly released together with their training mixture: Aloe-Vision-Data. This dataset spans medical and general domains, as well as multimodal and text-only sources, and is intended for direct use in model fine-tuning. A comprehensive benchmarking shows the proposed model to be a competitive alternative to current solutions, and highlights the importance of balanced training strategies to develop robust medical LVLMs without compromising general utility. Finally, we also show that, despite high benchmark scores, current models remain insufficiently robust under adversarial or misleading conditions, limiting their use in real-world scenarios.
73. amass: An AI-First Scientific Intelligence Platform for the Life Sciences
Author
Alexander Junge, Ruben Halifa, Alexis Yang, Mads Hensel, René Bischoff and Henrik Jensen
Abstract
We demonstrate the amass platform, a comprehensive AI-first scientific intelligence platform designed to accelerate discovery and decision-making across the life sciences ecosystem. Amass addresses the exponential growth of scientific data, with more than 5 million papers published annually, 3.5 million patents filed annually, and 750,000 registered clinical trials, which significantly hampers research productivity. Our platform transforms scattered internal and external data into a unified workspace through three core capabilities: i) Synthesized Intelligence, enabling semantic search across millions of scientific articles, patents, clinical trials and beyond, using life science-specific embedding models; ii) Competitive Intelligence, providing real-time tracking of research trends and market signals; and iii) AI-Powered Q&A through our research assistant GEMA, which delivers citation-backed answers synthesizing private and public knowledge bases. The platform integrates multimodal data assets while maintaining enterprise-grade security with EU-hosted infrastructure and privacy-first design. We demonstrate real-world applications through interactive sessions showcasing automated research report generation, multimodal scientific analysis, clinical trial synthesis, and AI-assisted scientific writing, with validated efficiency improvements reducing multiday literature reviews to minutes while maintaining scientific rigor. The amass platform is publicly available at: https://www.amass.tech/
74. HumaRAG: A personal AI assistant for Humanists
Author
Yevhenii Osadchuk, Tiago Ribeiro, Selda Eren and Giovanni Colavizza
Abstract
Despite their impressive capabilities, large language models (LLMs) often generate fabricated or inaccurate responses, so-called "hallucinations", raising serious concerns about their reliability in scientific contexts. This issue is particularly relevant in the humanities, where scholarly work depends on the careful use of verifiable sources and accurate citation practices. To tackle the hallucination problem, we present a retrieval-augmented generation (RAG)-based AI research assistant called "HumaRAG", designed to deliver evidence-based, citation-rich answers to user queries grounded in a domain-specific knowledge base. By integrating state-of-the-art RAG practices, we show that HumaRAG not only retrieves the most relevant content given a query, but also generates responses grounded in verifiable sources.
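To make the retrieval step concrete, the sketch below shows a deliberately simplified stand-in: TF-IDF retrieval over a toy corpus in place of HumaRAG's embedding models and knowledge base, returning source identifiers that a downstream LLM prompt could cite (the generation step is omitted).

# Minimal sketch of the retrieval step in a RAG pipeline.  TF-IDF stands in
# for the embedding model; retrieved passages and their source identifiers
# would then be inserted into an LLM prompt for a citation-backed answer.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

corpus = {
    "doc1": "Petrarch's letters shaped Renaissance humanism.",
    "doc2": "The printing press accelerated the spread of scholarly editions.",
    "doc3": "Digital archives allow large-scale analysis of historical texts.",
}
vec = TfidfVectorizer().fit(corpus.values())
doc_matrix = vec.transform(corpus.values())

def retrieve(query, k=2):
    scores = cosine_similarity(vec.transform([query]), doc_matrix).ravel()
    ranked = sorted(zip(corpus, scores), key=lambda p: p[1], reverse=True)
    return ranked[:k]                      # (source id, score) pairs for citation

print(retrieve("How did humanism spread?"))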
75. Modeling Disrupted Conversation Dynamics: A Real-Time VR and Active Inference Approach
Author
Cumhur Erkut
Abstract
Disrupted conversational dynamics are a hallmark of both auditory verbal hallucinations and hearing impairment, where the brain struggles to balance prior expectations against unreliable sensory evidence. This project addresses the scientific challenge of modeling these disruptions by combining Active Inference (AIF) with real-time simulation in immersive virtual reality. We implement a discrete, message-passing formulation of AIF, where likelihood precision, policy confidence, and transition dynamics can be manipulated to generate either in-context or out-of-context disruptions. Our scientific AI approach thus bridges computational psychiatry, speech technology, and VR engineering. The potential impact lies in advancing mechanistic understanding of hearing loss or psychosis, creating novel therapeutic VR interventions, and fostering collaborations across AI, clinical neuroscience, and human-computer interaction.
76. ReAct-ExtrAct: A Tool for Source-Grounded Automated Data Extraction in Systematic Reviews
Author
Sebastian Krawczyk, Paweł Jemioło, Jan Karkowski, Ilinka Ivanoska, Miroslav Mirchev and Wojciech Kusa
Abstract
To address the bottleneck of manual data extraction for systematic reviews (SRs), we present ReAct-ExtrAct. Our tool uses retrieval-augmented generation with small language models and heuristic reasoning to automate extraction with source transparency, surpassing baselines in factual accuracy and completeness of extraction. Its lightweight, local deployment makes ReAct-ExtrAct a trustworthy, privacy-preserving, and sustainable alternative to large language model pipelines for automating SRs.
77. UltraUP: A Deep Learning Framework for Real-Time Processing of Ultrasound Images
Author
Simone Cammarasana and Giuseppe Patanè
Abstract
Ultrasound (US) is widespread across medical specialities due to its non-invasive nature, portability, cost-effectiveness, and real-time imaging capability. Nevertheless, portable and handheld US systems are limited by speckle noise, low spatial resolution, and computational constraints, which reduce diagnostic accuracy and broader clinical adoption. To address these challenges, we introduce UltraUP, a hardware-software prototype for real-time denoising and super-resolution of ultrasound data. UltraUP integrates adaptive edge-preserving denoising with deep learning-based super-resolution to suppress noise, reconstruct missing scan lines, and reveal fine anatomical structures. Optimised for both CPU and GPU platforms, the system delivers low-latency performance suitable for clinical workflows, while balancing resolution and penetration depth, particularly in deep tissue imaging. UltraUP applies to 2D images, videos, and volumetric acquisitions, increasing the field of view and frame rate, and supports device interoperability, reducing operator dependence and standardising output quality. UltraUP is designed for mobile and resource-limited contexts, including bedside diagnostics, field hospitals, and telemedicine. Finally, UltraUP enhances diagnostic confidence, improves reproducibility, and enables integration into different healthcare settings.
78. Uncertainty-Aware Conversations in Stroke Rehabilitation
Author
Valerio Bonsignori, Fosca Giannotti, Carlo Metta, Salvatore Rinzivillo, Francesca Cecchi, Badia Bahia Hakiki, Andrea Mannini and Stefano Doronzio
Abstract
Machine learning deployment in scientific domains faces a critical bottleneck: domain experts struggle to interpret AI-generated insights. While explainable AI generates technical explanations, communicating these to non-AI experts remains largely unsolved. We present an uncertainty-aware conversational AI framework that transforms complex ML explanations into natural language conversations for domain experts. Our approach integrates selective classification with conversational explanation delivery through familiar messaging interfaces. We demonstrate this through a Telegram bot for rehabilitation physicians analyzing stroke outcomes, translating SHAP values, counterfactuals, and decision rules into clinical language while communicating prediction uncertainty through intuitive patterns. This addresses the understudied intersection of uncertainty quantification and conversational explanation delivery, offering a generalizable methodology for deploying interpretable AI in expert domains where traditional interfaces fall short.
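The selective-classification component can be illustrated with a minimal, synthetic sketch: the classifier abstains, i.e. defers to the clinician, whenever its confidence falls below a threshold, and accuracy is reported only on the accepted cases (the data, model, and threshold are placeholders, not the deployed system).

# Minimal sketch of selective classification: abstain when confidence is
# below a threshold, report accuracy on the accepted cases only.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)
conf = proba.max(axis=1)

threshold = 0.8
accepted = conf >= threshold               # below threshold -> defer to the expert
coverage = accepted.mean()
accuracy = (clf.predict(X_te)[accepted] == y_te[accepted]).mean()
print(f"coverage={coverage:.2f}, accuracy on accepted cases={accuracy:.2f}")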
79. Wetlands' Greenhouse Gas Balance: A Use-Case for AI and Hybrid Modeling
Author
Guy Schurgers, Gyula Mate Kovacs, Frederikke Krogh Corydon, Laura van der Poel, Xin Lin, Stéphanie Horion, Stefan Oehmcke, Simon Stisen, Christian Tøttrup, Nico Lang and Christian Igel
Abstract
Wetlands play an important role in the global budgets of the major greenhouse gases. Modeling can help to quantify this role and can support policymaking for mitigation of climate change.

In this position paper, we highlight how AI can support the modeling of greenhouse gas budgets through unlocking vast amounts of relevant Earth observation data, and how hybrid modeling (combining process-based modeling and Machine Learning) can be used to integrate these Earth observation data in models that build on process understanding.
Title
Author
Coming soon
Abstract
Coming soon