In this research, we will explore learning among a network of robotic agents that must perceive and act in a physical world dominated by dynamic, uncertain phenomena: the maritime environment. We argue that the current big-data, big-model, deep-network paradigm leads to monolithic models that learn slowly, compose poorly, offer scant safety guarantees, and often make decisions for reasons that are uninterpretable to humans.
We propose a fundamental rethink of the deep learning framework for multi-robot systems. This research will address trustworthy swarm learning-based control, with the aims of interpretability, auditability, and online learning. Specifically, we will explore distributed training and inference for models such as Liquid Time-Constant (LTC) networks. We approach this problem by treating local communication between agents in the swarm as a form of perception; by considering communication to be analogous to perception, we effectively develop a new category of multi-modal learning-based control.
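To make the LTC model concrete, the following is a minimal sketch of a single LTC cell integrated with an explicit Euler step. It follows the standard formulation dx/dt = -(1/τ + f)·x + f·A, where the gate f depends on the input and recurrent state; all weights, dimensions, and the integration step are illustrative, not values from this research.

```python
import numpy as np

def ltc_step(x, I, W_in, W_rec, b, tau, A, dt=0.01):
    """One explicit-Euler step of a Liquid Time-Constant (LTC) cell.

    dx/dt = -(1/tau + f) * x + f * A, where the gate
    f = sigmoid(W_rec @ x + W_in @ I + b) modulates both the
    effective time constant and the drive toward the bias state A.
    """
    f = 1.0 / (1.0 + np.exp(-(W_rec @ x + W_in @ I + b)))
    dxdt = -(1.0 / tau + f) * x + f * A
    return x + dt * dxdt

# Tiny demo: 4 hidden units driven by a random 2-d input stream.
rng = np.random.default_rng(0)
n, m = 4, 2
x = np.zeros(n)
W_in = rng.normal(size=(n, m))
W_rec = rng.normal(size=(n, n))
b = np.zeros(n)
tau = np.ones(n)   # base time constants
A = np.ones(n)     # per-neuron bias state
for _ in range(100):
    x = ltc_step(x, rng.normal(size=m), W_in, W_rec, b, tau, A)
```

Because the gate appears in the decay term, each neuron's effective time constant varies with its input, which is the property that makes LTC networks attractive for the time-varying dynamics of the maritime environment.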
Many animals act as a group to improve the efficiency of various tasks. Some animals must scan huge areas on a daily basis in search of food, and in order to do so collectively, they must adopt optimal collective movement, sensing, and communication strategies.
Moreover, once the resource (e.g., prey) has been found, animals must solve the problem of exploiting it with minimal interference. In this research, we will draw on bio-inspired collective movement to design and build a group of autonomous bio-inspired robots. A particular and unique focus of this research is to study distributed perception in a communication-limited environment.
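A minimal sketch of such communication-limited collective movement is a Boids-style update in which each agent perceives only neighbors within a fixed radius; the interaction rules and weights below are illustrative stand-ins, not the controllers this research will develop.

```python
import numpy as np

def flock_step(pos, vel, radius=1.0, dt=0.1,
               w_coh=0.2, w_sep=0.5, w_ali=0.3):
    """One step of a Boids-style update where each agent only perceives
    neighbors within `radius`, mimicking communication-limited sensing.
    Weights and rules are illustrative, not tuned controllers."""
    n = len(pos)
    new_vel = vel.copy()
    for i in range(n):
        dists = np.linalg.norm(pos - pos[i], axis=1)
        nbrs = (dists < radius) & (dists > 0)
        if not nbrs.any():
            continue  # no agent in range: keep current velocity
        coh = pos[nbrs].mean(axis=0) - pos[i]   # move toward local centroid
        sep = (pos[i] - pos[nbrs]).sum(axis=0)  # avoid crowding neighbors
        ali = vel[nbrs].mean(axis=0) - vel[i]   # match neighbors' heading
        new_vel[i] += w_coh * coh + w_sep * sep + w_ali * ali
    return pos + dt * new_vel, new_vel

# Demo: 10 agents with random initial states.
rng = np.random.default_rng(0)
pos = rng.uniform(size=(10, 2))
vel = rng.normal(scale=0.1, size=(10, 2))
for _ in range(20):
    pos, vel = flock_step(pos, vel)
```

The key feature for our purposes is that every term each agent computes depends only on locally perceivable neighbors, so the same structure applies when "perception" is realized by short-range communication.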
Given the foundational principles of Representation Engineering (RepE) and its focus on enhancing AI transparency and control, several intriguing research directions emerge when considering its application to swarms and multi-agent systems:
1. Emergent Behaviors in Swarms: Investigating how RepE can be used to understand and control emergent behaviors in swarms. This could involve analyzing the representations of inter-agent interactions and environmental feedback to predict and shape collective outcomes.
2. Distributed Representation Learning: Exploring distributed methods for representation learning in multi-agent systems, where agents learn to encode useful information in their internal representations through collaboration and competition with other agents. This could improve efficiency and adaptability in changing environments.
3. Safety and Robustness in Multi-Agent Systems: Applying RepE to enhance safety and robustness in multi-agent systems by engineering representations that inherently encode safety constraints and ethical guidelines. This could help prevent harmful emergent behaviors and ensure that multi-agent systems act in accordance with human values.
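A core RepE reading technique that these directions build on is extracting a concept direction as the difference of mean hidden activations between two behavioral conditions, then projecting new activations onto it to monitor (or later steer) behavior. The sketch below uses synthetic activations as stand-ins for real agent network states; the regime labels and dimensions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 16  # hidden-state dimension (illustrative)

# Synthetic hidden activations recorded from agents in two behavioral
# regimes (e.g., "flocking" vs. "dispersing"); stand-ins for states
# logged from a real swarm policy network.
flock = rng.normal(loc=0.5, size=(200, d))
disperse = rng.normal(loc=-0.5, size=(200, d))

# RepE-style reading vector: difference of class-mean activations,
# normalized to unit length.
direction = flock.mean(axis=0) - disperse.mean(axis=0)
direction /= np.linalg.norm(direction)

# Project held-out activations onto the direction; positive scores
# indicate flocking-like internal states.
held_out = rng.normal(loc=0.5, size=(50, d))
scores = held_out @ direction
```

In a multi-agent setting, such a direction could be estimated per agent or over pooled swarm states, giving an interpretable scalar signal for the emergent-behavior and safety-monitoring directions listed above.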
This research aims to address these challenges by integrating Reinforcement Learning (RL) into the motion planning of AUVs. RL, a branch of machine learning, is adept at handling environments with high uncertainty and complexity, making it well suited for underwater navigation. The novelty of this research lies in its focus on predicting disturbances using AI-based computational fluid dynamics (CFD), which, to the best of our knowledge, has not previously been demonstrated on an AUV in real time. By leveraging advanced CFD models, the proposed system can predict water currents and other environmental disturbances with high accuracy. Combined with an RL framework, this predictive capability will enable the AUVs to plan paths that are not only efficient but also adaptive to changing underwater conditions.
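The coupling between a disturbance predictor and an RL planner can be sketched as follows: a stub function stands in for the AI-based CFD surrogate, its predicted drift is folded into the state, and a tabular Q-learning agent learns to anticipate the drift when choosing actions. The grid world, drift pattern, and hyperparameters are all illustrative assumptions, not the actual AUV system.

```python
import numpy as np

def predicted_current(cell, t):
    """Stub standing in for the AI-based CFD surrogate: returns a
    predicted drift vector (dx, dy) for a grid cell at time t.
    A real system would query a learned flow-field model here."""
    return (1 if (cell[1] + t) % 4 == 0 else 0, 0)

def train(goal=(4, 4), size=5, episodes=500, alpha=0.5, gamma=0.95, eps=0.1):
    """Tabular Q-learning on a grid 'ocean' where the predicted current
    displaces the vehicle after each action. Because the predicted drift
    is part of the state, the policy learns to anticipate disturbances
    rather than merely react to them."""
    actions = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    Q = {}
    rng = np.random.default_rng(0)

    def q(s):
        return Q.setdefault(s, np.zeros(len(actions)))

    for _ in range(episodes):
        pos, t = (0, 0), 0
        for _ in range(50):
            drift = predicted_current(pos, t)
            s = (pos, drift)  # state = position + predicted disturbance
            a = rng.integers(len(actions)) if rng.random() < eps \
                else int(np.argmax(q(s)))
            dx, dy = actions[a]
            nxt = (min(max(pos[0] + dx + drift[0], 0), size - 1),
                   min(max(pos[1] + dy + drift[1], 0), size - 1))
            r = 10.0 if nxt == goal else -1.0
            s2 = (nxt, predicted_current(nxt, t + 1))
            q(s)[a] += alpha * (r + gamma * np.max(q(s2)) - q(s)[a])
            pos, t = nxt, t + 1
            if pos == goal:
                break
    return Q

Q = train()
```

Replacing the stub with a real-time CFD surrogate, and the tabular learner with a deep RL policy over continuous states, yields the architecture the proposal describes.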