Lectures
Prof. Joost Van de Weijer
Talk Title: Toward Lifelong Learning in Foundation Models
Abstract: Biological systems learn continuously over their lifetime, while most artificial intelligence systems still follow a rigid two-stage pipeline consisting of training and deployment. When new data arrives, these models are often retrained from scratch, leading to the loss of previously acquired knowledge and incurring significant computational costs. Continual learning aims to overcome these limitations by developing theoretical frameworks and practical methods that allow models to adapt to non-i.i.d. data streams while retaining prior knowledge. In the first part of this talk, I will introduce the core principles of continual learning and review key approaches for mitigating catastrophic forgetting. In the second part, I will present recent advances that extend continual learning techniques to the context of foundation models. I will conclude by highlighting several open challenges and directions for future research.
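As a concrete illustration of one family of forgetting-mitigation methods the talk surveys, rehearsal keeps a small memory of past examples and mixes them into later training. The sketch below (illustrative only, not material from the lecture; the class name is a hypothetical choice) maintains such a memory with reservoir sampling so that the buffer stays an unbiased sample of the whole stream:

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past examples for rehearsal-based continual learning."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total number of stream items observed so far

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Reservoir sampling (Algorithm R): each seen item survives
            # with probability capacity / seen, keeping the buffer unbiased.
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        """Draw a rehearsal minibatch to interleave with new-task data."""
        return random.sample(self.data, min(k, len(self.data)))
```

A training loop would then optimize each new batch jointly with a batch drawn from `sample`, so gradients on old tasks keep constraining the model.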
Bio: Prof. Joost van de Weijer is the leader of the Learning and Machine Perception (LAMP) team at the Computer Vision Center, Universitat Autònoma de Barcelona. His research is dedicated to advancing generative AI, with a focus on two primary areas: (i) Continual Learning, developing algorithms that can accumulate knowledge from a sequence of tasks over time, and (ii) Image Generative Models, investigating diffusion models that, guided by text prompts, can generate high-quality, realistic images.
Prof. Egidio Falotico
Talk Title: "Continual Learning for Adaptive Robot Perception and Control"
Abstract: Continual learning enables systems to incrementally acquire new knowledge from evolving data while retaining previously learned skills, a key requirement for robots operating in dynamic, non-stationary environments. This paradigm is particularly relevant in robotics, where agents must continuously adapt to changing tasks, environments, and sensory conditions over long time horizons. This lecture provides an overview of continual learning approaches for robot perception and control, with a focus on multimodal sensing and embodied intelligence. We discuss how continual learning supports robust perception across heterogeneous modalities, enabling both rigid and soft robotic systems to cope with sensor variability, deformation, and environmental changes. On the control side, we highlight how robots can incrementally acquire and refine skills through both imitation learning from demonstrations and reinforcement learning through interaction, enabling continuous improvement without full retraining. Particular attention is given to the trade-off between stability and plasticity, as well as to scalable learning architectures that support long-term autonomy. Overall, the lecture outlines how continual learning can unify perception and control, paving the way toward lifelong, adaptive robotic systems capable of operating reliably in complex real-world scenarios.
Bio: Prof. Egidio Falotico is Professor at The BioRobotics Institute, Scuola Superiore Sant’Anna (Pisa, Italy). Prof. Falotico’s research lies at the intersection of neuroscience, artificial intelligence, and robotics, with a strong focus on brain-inspired robotics, soft robot control, and continual and deep learning. His work aims to understand how movement is conceived, planned, and controlled by the brain and to translate these mechanisms into computational models for robotics. More recently, his research has expanded into the field of continual learning, investigating how robotic systems can acquire and refine skills over time without catastrophic forgetting. By integrating continual learning and deep learning techniques into soft robot control, he aims to develop robots capable of long-term adaptation and resilience, pushing the boundaries of autonomous and intelligent robotic systems.
Prof. David Kappel
Talk Title: Efficient Sequence Modelling with State Space Models
Abstract: Recurrent neural networks (RNNs) have undergone a renaissance, amplified by the latest developments in deep state space models (SSMs), which can match the performance of transformer networks while retaining the efficiency of RNNs. In this lecture, I will provide an overview of recent SSM research and introduce the Legendre Memory Unit (LMU), the Linear Recurrent Unit (LRU), and the Mamba model. I will then discuss recent applications of these models for efficient machine learning on edge devices and highlight future directions and open challenges. Finally, I will share insights into the in-context learning capabilities of SSMs and demonstrate how they can be tuned to solve reasoning tasks.
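To make the recurrence behind models such as the LRU concrete, the sketch below (an illustrative simplification, not code from the lecture; all names and shapes are assumptions for exposition) runs a diagonal complex linear recurrence sequentially. Real SSM implementations parallelize this scan, which is what gives them transformer-competitive training efficiency:

```python
import numpy as np

def lru_scan(x, lam, B, C):
    """Minimal LRU-style diagonal linear recurrence, run step by step.

    x:   (T, d_in) input sequence
    lam: (d_state,) complex eigenvalues with |lam| < 1 for stability
    B:   (d_state, d_in) input projection
    C:   (d_out, d_state) readout projection
    """
    h = np.zeros(B.shape[0], dtype=complex)
    ys = []
    for t in range(x.shape[0]):
        h = lam * h + B @ x[t]      # element-wise decay plus input injection
        ys.append((C @ h).real)     # read out the real part of the state
    return np.stack(ys)             # (T, d_out)
```

Because the recurrence is linear and diagonal, each output is a fixed exponentially-weighted sum of past inputs, which is exactly the structure that makes a parallel prefix-scan formulation possible.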
Bio: David Kappel is a Professor at the University of Bielefeld. His scientific goal is to understand the fundamental principles through which organisms generate behavior and cognition while linked to their environments through sensory and effector systems. Inspired by our insights into such natural cognitive systems, he seeks new solutions to problems of information processing in artificial cognitive systems. He draws from a variety of disciplines that include experimental psychology and neurophysiology as well as machine learning, neural artificial intelligence, computer vision, and robotics.
Prof. Oriol Pujol
Talk Title: Continuous management of AI systems by means of differential replication and machine learning copies
Abstract: Deployed machine learning systems rarely fail only because of prediction error; they fail because their surrounding environment changes. New regulatory demands, interpretability requirements, infrastructure constraints, privacy concerns, fairness objectives, or production bottlenecks can make an existing model suboptimal or even infeasible without changing either the task or the domain. This lecture introduces environmental adaptation as the formal problem of preserving task performance while satisfying a new set of operational constraints, and presents differential replication as a general strategy for transferring the decision behavior of an existing model into a new hypothesis space better suited to the new environment. The session then focuses on copying, sometimes referred to as zero-shot distillation, the most restrictive and agnostic form of differential replication, in which the original classifier is treated as a black box accessed only through hard-label membership queries, with no access to training data or model internals. We will cover the theoretical formulation of copying as a dual optimization over synthetic samples and copy-model parameters, discuss how copying differs from standard supervised learning, and review practical use cases. Finally, the lecture presents the evolution from the original single-pass copying framework to a scalable iterative approach that reduces memory usage and accelerates convergence while maintaining fidelity. We will examine the role of uncertainty-guided synthetic sample selection, regularization against forgetting, and the empirical gains reported across benchmark datasets, including large reductions in memory needs and faster convergence.
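The black-box copying loop described above can be sketched in a few lines. In the toy example below (illustrative only; `black_box` is a hypothetical stand-in for the deployed classifier, and a simple perceptron stands in for the copy model), each round draws synthetic samples, queries hard labels, and refits the copy on the growing synthetic set:

```python
import numpy as np

def black_box(X):
    # Hypothetical stand-in for the deployed classifier: only hard labels
    # are observable, with no access to internals or training data.
    return (X[:, 0] + X[:, 1] > 0).astype(int)

def copy_model(oracle, n_rounds=5, n_per_round=200, dim=2, seed=0):
    """Iterative copying sketch: synthetic queries -> hard labels -> refit copy."""
    rng = np.random.default_rng(seed)
    w = np.zeros(dim + 1)                         # copy parameters (+ bias)
    X_all = np.empty((0, dim))
    y_all = np.empty(0, dtype=int)
    for _ in range(n_rounds):
        X = rng.normal(size=(n_per_round, dim))   # synthetic samples
        y = oracle(X)                             # hard-label membership queries
        X_all = np.vstack([X_all, X])
        y_all = np.concatenate([y_all, y])
        Xb = np.hstack([X_all, np.ones((len(X_all), 1))])
        for _ in range(5):                        # perceptron passes over the set
            for xi, yi in zip(Xb, y_all):
                pred = 1 if xi @ w > 0 else 0
                w += (yi - pred) * xi             # error-driven update
    return w
```

The iterative scheme from the abstract would additionally select new queries where the copy is most uncertain and regularize against forgetting earlier rounds; this sketch keeps only the query-label-refit skeleton.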
Bio: Oriol Pujol Vila is a Full Professor of Computer Science and Artificial Intelligence at the department of Matemàtiques i Informàtica at Universitat de Barcelona. His research focuses on the foundations of machine learning and its application to social challenges, including medical image analysis, ensemble learning, anomaly detection, sequential learning, and deep learning with limited data and multitask supervision. In recent years, his work has shifted toward trustworthy and human-centered AI, including uncertainty estimation, auditing opaque models, bias mitigation, and privacy preservation. His scientific output includes more than two hundred international publications in the field of machine learning and its applications. He has held academic leadership positions such as Head of Studies in Computer Science, Director of the Master’s Degree in Foundations of Data Science, Vice-Rector for Digital Transformation, and Dean of the Faculty of Mathematics and Computer Science at the University of Barcelona.
Prof. Stefano Melacci & Prof. Alessandro Betti
Talk Title: "Collectionless AI - Hamiltonian Learning: From Optimal Control to Perpetual Generation and Continual Memory"
Abstract: How should a neural network learn from a stream of data that unfolds over time, possibly without end, without storing past observations? In this lecture we address this question by building on tools from optimal control theory. We introduce Hamiltonian Learning, a framework in which the dynamics of neural computation, parameter adaptation, and data are governed by a coupled system of differential equations that can be integrated forward in time without backpropagating into the past. We show that this formulation recovers classical gradient-based learning as a special case, while opening the door to fully local, distributed, and memory-efficient learning schemes. We then explore two research directions that naturally arise once the Hamiltonian perspective is adopted. The first is perpetual generation: can a model trained online on a single stream learn to generate coherent sequences of unbounded length? We discuss how spectral properties of the recurrent dynamics play a critical role. The second direction concerns continual memory: how can a network avoid catastrophic forgetting while learning over time? We present architectures in which classical weight matrices are replaced by hierarchical, attention-gated memory units that offer a principled form of parameter isolation. Throughout the lecture, we emphasize the common thread: once learning is reformulated as a dynamical system evolving forward in time, questions about generation, stability, and memory become questions about the structure and control of that dynamical system.
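As a minimal illustration of the special case mentioned in the abstract, where the framework recovers classical gradient-based learning, the sketch below integrates parameter dynamics forward in time with a forward-Euler step on a streaming least-squares loss. This is an illustrative stand-in, not the Hamiltonian Learning framework itself; the loss and function names are assumptions:

```python
import numpy as np

def online_gradient_flow(stream, theta0, eta=0.05):
    """Forward-Euler integration of the parameter dynamics
    theta' = -grad L_t(theta) on a data stream, strictly forward in time
    and without storing past observations.

    Illustrative instantaneous loss: L_t = 0.5 * (theta @ x_t - y_t)**2.
    """
    theta = np.array(theta0, dtype=float)
    for x, y in stream:
        grad = (theta @ x - y) * x    # gradient of the instantaneous loss
        theta -= eta * grad           # one Euler step of the flow
    return theta
```

Each stream element is touched exactly once and then discarded, which is the property the lecture's coupled-ODE formulation generalizes to neural state and data dynamics.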
Bio: Stefano Melacci is an Associate Professor at the Department of Information Engineering and Mathematics (DIISM), University of Siena (Italy), whose research activity is focused on the field of Artificial Intelligence with emphasis on Machine Learning, mainly using Neural Networks. Prof. Melacci studies Machine Learning problems in which the machine processes data streams and interactions, from which it is expected to continuously learn and adapt its predictive behavior (Lifelong Learning, Learning Over Time), as well as approaches to integrate symbolic knowledge and neural models (Neural-Symbolic Learning & Reasoning).
Bio: Prof. Alessandro Betti is Professor of Computer Science within the SySMA research unit of IMT Lucca. His research focuses on studying Machine Learning algorithms for processing data streams that exhibit dynamical properties. From the theoretical standpoint, Prof. Betti proposed and developed a framework rooted in the Calculus of Variations for interpreting laws of learning. In the visual domain, Prof. Betti applied these ideas to develop learning agents that utilize deep architectures to extract features that consider motion-induced regularities. Currently, Prof. Betti is working on formulating online continual learning in a principled manner, aiming to investigate the relationship between learning problems over time and optimal control.
Dr. Matteo Mendula
Talk Title: "Energy-Efficient Digital Twins and Foundational Dynamical Substrates with Reservoir Computing"
Abstract: The remarkable success of attention-based models has sparked a crucial question: Can efficient architectures such as Reservoir Computing (RC) compete with energy-intensive Deep Learning architectures in real-world scenarios? This seminar traces the evolution of RC from a lightweight tool for edge deployment to a robust Foundational Model for learning universal physical dynamics. First, we address the computational bottlenecks of RC deployment. We present an adaptive ε-Greedy search strategy that reduces offline hyperparameter optimization time by 70% and energy consumption by up to 88%. We further demonstrate how extending RC with Recursive Least Squares (RLS) enables seamless online transfer learning with minimal memory overhead, bridging the gap between offline training and online adaptation. Second, we scale these concepts to Hierarchical Digital Twin (DT) ecosystems. We introduce the "Fidelity" metric, a comprehensive evaluation balancing accuracy, maintainability, and deployability. We show how our proposed RC-DT engine achieves up to 39% higher accuracy than LSTM baselines while consuming an order of magnitude less energy, proving its viability for industrial Cyber-Physical Systems. Finally, we unveil our current research reframing RC as a "Universal Dynamical Substrate" that can serve as a Green Foundational AI capable of generalizing dynamics across domains.
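To ground the RC and RLS ingredients mentioned above, the sketch below (illustrative only; function names, shapes, and hyperparameters are assumptions, not the seminar's implementation) builds a fixed random reservoir and adapts only its linear readout online with Recursive Least Squares, which is what makes the memory overhead of online adaptation so small:

```python
import numpy as np

def make_reservoir(n_in, n_res, spectral_radius=0.9, seed=0):
    """Fixed random reservoir with echo-state scaling of the recurrent weights."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n_res, n_res))
    W *= spectral_radius / np.abs(np.linalg.eigvals(W)).max()  # |eig| <= 0.9
    W_in = rng.normal(size=(n_res, n_in))
    return W, W_in

def rls_train(W, W_in, xs, ys, lam=0.999, delta=1.0):
    """Online readout training with Recursive Least Squares (RLS):
    only the linear readout w adapts; the reservoir stays fixed."""
    n_res = W.shape[0]
    h = np.zeros(n_res)
    P = np.eye(n_res) / delta           # inverse-correlation estimate
    w = np.zeros(n_res)
    for x, y in zip(xs, ys):
        h = np.tanh(W @ h + W_in @ x)   # reservoir state update
        k = P @ h / (lam + h @ P @ h)   # RLS gain vector
        w += k * (y - w @ h)            # error-driven readout correction
        P = (P - np.outer(k, h @ P)) / lam
    return w
```

Since training touches only the readout vector and the n_res x n_res matrix P, the same loop can keep running at deployment time, giving the online transfer-learning behavior the abstract refers to.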
Bio: Dr. Matteo Mendula is a researcher at the Sustainable Artificial Intelligence Research Unit at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC). His research focuses on bridging the gap between the computational efficiency of Reservoir Computing (RC) and the performance demands of state-of-the-art Deep Learning. He specializes in the optimization of offline-online training cycles and the application of lightweight AI in Hierarchical Digital Twin ecosystems. His recent work explores the use of Lyapunov-guided reservoirs as foundational models for extracting universal dynamical laws in resource-constrained environments.
Lars Krupp
Talk Title: "Diving into the Energy Abyss: Measuring, Estimating, and Improving the Energy Cost of LLM-Based Systems"
Abstract: As Large Language Models (LLMs) become integrated into daily life, their environmental footprint has moved from a minor concern to a major technical challenge. However, accurately quantifying the energy demands of these systems remains notoriously difficult due to the complexity of hardware-software interactions and the opacity of proprietary models. This talk takes a deep dive into the challenges of researching energy consumption in LLM-based systems, using the emerging use case of web agents to explore the nuances and open problems in measuring, estimating, and improving their energy demand. By exposing the "hidden" costs of model inference, the talk will propose actionable solutions and identify open research problems necessary to achieve more precise energy estimates and more efficient LLM-based systems.
Bio: Lars Krupp is a researcher and PhD candidate at the German Research Center for Artificial Intelligence (DFKI) and RPTU University, where he is a member of the Embedded Intelligence department. His work focuses on making LLM-based systems more sustainable and beneficial for users from both a human-centric as well as a technical perspective. In the technical domain, Lars specializes in benchmarking and estimation of LLM energy consumption. He also explores LLM compression using quantum physics-inspired approaches. On the user side, he investigates the role of AI in education, specifically for teaching Quantum Physics and Quantum Computing. His goal is to leverage the potential of high-performance AI in a resource-efficient way for user-centric applications.
Victor Rotellar
Talk Title: "From Lab to Market: Xarxa RDI-IA’s Guide to Research Valorisation"
Abstract: Bringing research to the market is a key opportunity to increase the impact of scientific work. This talk introduces research valorisation as a practical process to transform research results into real-world applications, covering key steps such as identifying value, understanding market needs, validating technologies, and exploring different transfer pathways. It will also present how Xarxa RDI-IA supports researchers along this journey, helping them increase the impact and applicability of their work.
Bio: Víctor Rotellar currently works as Strategic Projects Coordinator at the Computer Vision Center, managing the RDI-AI Network. He holds a degree in Business Administration and Management from IQS (Ramon Llull) and has completed training in sales at ESADE, entrepreneurship at ISDI, and AI technology at UAB. He also has prior experience in sales, strategy, and business management. With an entrepreneurial profile, he has co-founded several startups and worked within the venture builder ecosystem. His interest in innovation, artificial intelligence, and Deep-Tech project development led him to join the CVC. His main motivation is to drive technology transfer at both national and international levels.
Organizers
Paolo Dini
CTTC
Marco Miozzo
CTTC
Vincenzo Lomonaco
LUISS University