Title: Reinforcement Learning with Active Particles
Duration: 2 years
Research Area: Reinforcement Learning, Physics and Chemistry
Living organisms adapt their behavior to their environment to achieve specific goals. They sense, process, and encode environmental information into biochemical processes, resulting in appropriate actions or properties. These adaptive processes occur within an individual’s lifetime, across generations, or over evolutionary timescales, leading to distinct behaviors in individuals and collectives. Examples include swarms of fish and flocks of birds developing collective strategies against predators and optimizing foraging tactics, birds learning to use convective air flows, sperm evolving complex swimming patterns for chemotaxis, and bacteria expressing specific shapes to follow gravity.
In this project, we implement learning in systems of microswimmers: tiny machines or microrobots that explore micrometer length scales by self-propulsion. They hold potential for applications such as drug delivery, and they also serve as model systems for larger-scale self-propelled systems, enabling studies of collective motion, self-organization, and adaptive behavior.
This project explores the adaptive navigation of active particles in real-world environments using reinforcement learning algorithms. We want to understand how specific behaviors in biological species emerge from particular environmental properties and sensory inputs. We also aim to extend reinforcement learning techniques to multi-agent settings with direct experimental control, addressing the reality gap between learning in simulated, virtual environments and learning in the real world.
The behavior of microorganisms is highly optimized by evolutionary pressures from past environmental conditions, yet the specific historical factors that shaped these biochemical feedback circuits are often unknown. This lack of historical knowledge makes it difficult to map current behaviors onto present environmental properties. A potential solution is to use reinforcement learning approaches to control microswimmers, artificial or engineered counterparts of microorganisms that serve as model systems.
Reinforcement learning and computer control allow these microswimmers to adaptively optimize their behaviors in real time based on current environmental feedback, effectively learning and responding to stimuli in a manner analogous to natural evolutionary processes. By training reinforcement learning algorithms to adjust the microswimmers' responses to varying environmental conditions, we can achieve precise control over their movements and functions, improving their efficiency in applications such as targeted drug delivery, environmental sensing, and micro-manipulation. This approach bridges the gap between historical evolutionary optimization and current environmental adaptability, leveraging advanced computational techniques to enhance the functionality of microswimmers in complex and dynamic environments.
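To make the training idea concrete, the following minimal sketch shows tabular Q-learning steering a simulated active Brownian particle toward a target; this is an illustrative assumption of how such a loop could look, not the project's actual implementation, and all parameters (speeds, noise strengths, rewards) are placeholder values.

# Minimal sketch (illustrative, not the project's implementation): tabular Q-learning
# steering a simulated active Brownian particle toward a target. The state is the
# discretized angle between the particle's heading and the target direction;
# the actions are "turn left", "keep heading", "turn right".
import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS = 12, 3          # 12 angular bins, 3 steering actions
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount, exploration rate
V, D_R, DT = 1.0, 0.2, 0.1           # self-propulsion speed, rotational noise, time step
TURN = np.array([-0.4, 0.0, 0.4])    # angular increment per action (rad)

Q = np.zeros((N_STATES, N_ACTIONS))

def state(pos, theta, target):
    """Discretize the angle between the heading and the direction to the target."""
    err = np.arctan2(*(target - pos)[::-1]) - theta
    err = (err + np.pi) % (2 * np.pi) - np.pi
    return int((err + np.pi) / (2 * np.pi) * N_STATES) % N_STATES

for episode in range(500):
    pos, theta = np.zeros(2), rng.uniform(0, 2 * np.pi)
    target = np.array([5.0, 0.0])
    s = state(pos, theta, target)
    for step in range(300):
        a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(Q[s]))
        # Active Brownian dynamics with the chosen steering action
        theta += TURN[a] + np.sqrt(2 * D_R * DT) * rng.normal()
        old_dist = np.linalg.norm(target - pos)
        pos = pos + V * DT * np.array([np.cos(theta), np.sin(theta)])
        new_dist = np.linalg.norm(target - pos)
        reward = old_dist - new_dist            # reward progress toward the target
        s_next = state(pos, theta, target)
        Q[s, a] += ALPHA * (reward + GAMMA * Q[s_next].max() - Q[s, a])
        s = s_next
        if new_dist < 0.2:                      # target reached
            break

In the experiments, the simulated dynamics above would be replaced by the observed particle response, while the same update rule can in principle be applied online.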
The insights gained from studying these relations, and from transferring them to biological systems, can be valuable for new biotechnological systems such as bioreactors, as well as for drug delivery with microrobots and for traffic optimization informed by the collective behavior of multi-agent systems.
Our experimental approaches implement various elements of machine learning and combine these algorithms with hardware control of an optical microscopy setup to steer active particles.
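A hedged sketch of such a perception-action cycle is shown below, assuming hypothetical camera and laser-steering interfaces (the actual drivers and APIs of the microscopy setup will differ); it only illustrates the closed loop of detecting a particle in each frame, querying a trained policy, and sending a steering command.

# Hedged sketch of a closed-loop control cycle. `camera`, `laser`, and `policy`
# are hypothetical objects standing in for the real hardware interfaces and a
# trained reinforcement learning policy.
import numpy as np

def detect_particle(frame: np.ndarray) -> tuple[float, float]:
    """Locate the brightest spot as a crude estimate of the particle position."""
    y, x = np.unravel_index(np.argmax(frame), frame.shape)
    return float(x), float(y)

def control_loop(camera, laser, policy, n_steps: int = 1000):
    """Perception-action loop: frame -> position -> policy action -> hardware command."""
    prev_pos = None
    for _ in range(n_steps):
        frame = camera.grab_frame()               # hypothetical hardware call
        pos = detect_particle(frame)
        # Build a simple observation from position and displacement
        vel = (0.0, 0.0) if prev_pos is None else (pos[0] - prev_pos[0], pos[1] - prev_pos[1])
        action = policy(np.array([*pos, *vel]))   # trained policy, assumed given
        laser.set_heating_angle(action)           # hypothetical steering command
        prev_pos = pos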
Our project employs advanced experimental control of optical microscopy to investigate the dynamics of active microparticles in liquids. The current focus is primarily on single active particles, but we aim to extend this to ensembles of active particles in complex, dynamic environments, including external flows and fields, to study the multi-agent response functions that result from environmental interactions. These systems exhibit spatial and temporal correlations, and our research could be expanded to include transformer models, which are highly effective at uncovering temporal correlations. Exploring spatio-temporal representation learning in these systems, and its connection to real-world physical mechanisms, could significantly benefit future AI developments.
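As an exploratory sketch of this direction, and purely as an assumption about how it could look (using PyTorch rather than any established pipeline of the project), a small transformer encoder could embed short trajectory segments (x, y, orientation per time step) so that self-attention picks up temporal correlations, for example to predict the next displacement.

# Exploratory sketch: a small transformer encoder over particle-trajectory
# segments. Architecture sizes and the prediction target are illustrative.
import torch
import torch.nn as nn

class TrajectoryTransformer(nn.Module):
    def __init__(self, n_features: int = 3, d_model: int = 32, seq_len: int = 64):
        super().__init__()
        self.embed = nn.Linear(n_features, d_model)                 # per-time-step embedding
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))   # learned positional encoding
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=4,
                                           dim_feedforward=64, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, 2)                           # predict next (dx, dy)

    def forward(self, traj: torch.Tensor) -> torch.Tensor:
        # traj: (batch, seq_len, n_features)
        h = self.encoder(self.embed(traj) + self.pos[:, : traj.shape[1]])
        return self.head(h[:, -1])                                  # read out the last time step

# Example forward pass on a batch of random trajectory segments
model = TrajectoryTransformer()
dummy = torch.randn(8, 64, 3)
print(model(dummy).shape)  # torch.Size([8, 2])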