If you rub shoulders with tech enthusiasts or read our content regularly, you have probably heard the term ‘drone swarm’: a group of networked drones, often driven by artificial intelligence, that is controlled as a single unit.
Even though drone swarms have already seen plenty of use, including for military purposes, the technology still lacks the sophistication needed for widespread, international adoption across as many applications as possible.
According to a new paper by four authors at the Laboratory of Intelligent Systems in Lausanne, Switzerland, the collective behavior of drone swarms can be fully controlled, improved, and further developed in the coming years. Their approach to learning vision-based collective behavior imitates a simple flocking algorithm, predicting velocity commands directly from raw camera images.
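A flocking algorithm of this kind can be pictured as a classic Reynolds-style rule that blends a few simple velocity terms. The sketch below is a minimal stand-in, not the paper's controller: the function name, gains, and radii are illustrative assumptions.

```python
import numpy as np

def flocking_command(own_pos, neighbor_pos, migration_point,
                     sep_radius=1.5, k_sep=1.0, k_coh=0.5, k_mig=0.3,
                     v_max=2.0):
    """Toy Reynolds-style velocity command (all gains are made up).

    Combines three terms: separation (avoid collisions), cohesion
    (stay with the flock), and migration (head toward a shared waypoint).
    """
    own_pos = np.asarray(own_pos, dtype=float)
    neighbors = np.asarray(neighbor_pos, dtype=float)

    # Separation: push away from neighbors closer than sep_radius.
    sep = np.zeros_like(own_pos)
    for n in neighbors:
        offset = own_pos - n
        dist = np.linalg.norm(offset)
        if 0 < dist < sep_radius:
            sep += offset / dist * (sep_radius - dist)

    # Cohesion: pull toward the centroid of the visible neighbors.
    coh = neighbors.mean(axis=0) - own_pos if len(neighbors) else np.zeros_like(own_pos)

    # Migration: pull toward the shared migration point.
    mig = np.asarray(migration_point, dtype=float) - own_pos

    v = k_sep * sep + k_coh * coh + k_mig * mig
    speed = np.linalg.norm(v)
    if speed > v_max:          # clamp to a maximum speed
        v = v / speed * v_max
    return v
```

For example, a lone drone at the origin with a migration point at `[10, 0]` gets a velocity command pointing along positive x, clamped to `v_max`; a drone with a neighbor only half a meter away is pushed in the opposite direction first.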
By analogy with natural animal groups such as flocks of birds, similarly coordinated movement in drones can be further developed within the field of aerial swarm robotics.
As the authors note, one of the most appealing characteristics of collective animal behavior for robotics is that decisions are made from local information, such as visual perception.
By contrast, the main drawback of communication-based approaches is that they introduce a single point of failure and depend on unreliable data links. Relying on centralized control therefore carries significant risk, chiefly because agents lack the autonomy to make their own decisions in failure cases such as a communication outage.
So it is safe to say that vision is the most promising sensory modality for helping drone swarm technology reach full autonomy. Recent advances in computer vision, deep learning, and obstacle avoidance are important for drones in general, and potentially crucial for developing similar drone swarm behaviors.
Figure: A vision-based flock of nine drones during migration. The visual swarm controller operates fully decentralized and provides collision-free, coherent collective motion without the need to share positions among agents. The behavior of an agent depends only on its omnidirectional visual inputs and the migration point (blue circle and arrows). Collision avoidance (red arrows) and coherence (green arrows) between flock members are learned entirely from visual inputs.
The paper also addresses the control of multiple agents based on visual inputs, achieved with relative localization techniques that the authors tested on a group of three quadrotors.
Each of these drones carries a camera and a circular marker that lets other agents detect it and estimate its relative distance. The system relies only on local information obtained from the onboard cameras in near real time.
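Estimating range from a circular marker of known size follows from the pinhole camera model: distance scales with the marker's true diameter divided by its apparent diameter in pixels. A minimal sketch, where the focal length and marker size are made-up calibration values rather than numbers from the paper:

```python
def marker_distance(focal_px, marker_diameter_m, apparent_diameter_px):
    """Pinhole-model range estimate: distance = f * D / d.

    focal_px: camera focal length in pixels (from calibration, assumed known)
    marker_diameter_m: true diameter of the circular marker, in meters
    apparent_diameter_px: diameter of the marker in the image, in pixels
    """
    return focal_px * marker_diameter_m / apparent_diameter_px

# e.g. a 0.5 m marker that appears 100 px wide to a camera with an
# 800 px focal length sits roughly 4 m away
```

Real systems would also calibrate for lens distortion and handle partial occlusion of the marker, but the core geometry is this simple ratio.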
Facilitated by several recent advances in machine learning, this form of control is especially effective in real-world conditions. It allows drones to coordinate with one another and be commanded in terms of their velocity, to match that velocity as a group, and to mimic collective behavior, eliminating any dependence on knowledge of the other agents' positions by processing only local visual information, as the paper outlines.
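The learning step can be pictured as plain supervised imitation: collect pairs of observations and the expert flocking controller's velocity commands, then fit a regressor that predicts the command from the observation alone. The paper trains a convolutional network on raw images; the toy below substitutes a linear model on a made-up feature vector purely to show the shape of that pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: in the paper, X would be raw omnidirectional camera
# images and Y the velocity commands of the expert flocking controller.
n_samples, n_features = 500, 16
X = rng.normal(size=(n_samples, n_features))   # hypothetical visual features
W_expert = rng.normal(size=(n_features, 2))    # unknown expert mapping
Y = X @ W_expert                               # expert 2-D velocity labels

# Supervised imitation: fit W to minimize ||X W - Y||^2 (closed form).
W_learned, *_ = np.linalg.lstsq(X, Y, rcond=None)

# The learned controller now predicts velocity commands from local
# visual features alone, with no access to other agents' positions.
predicted = X @ W_learned
```

On noiseless data like this, the closed-form fit recovers the expert mapping almost exactly; with real images, a deep network takes the place of the linear model, but the training objective has the same structure.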
Velocity is obviously not the only challenge in collective drone behavior. The authors therefore ran experiments in which the drones shared a common migration goal, showing that the swarm remains collision-free during navigation; both the vision-based and the position-based swarms migrated with comparable precision.
With opposing migration goals, the agents are split into two subsets, each assigned a different waypoint. Here, too, the position-based and vision-based flocks exhibit very similar migration behavior, and in both cases swarm cohesion is strong enough to keep the agents together despite their diverging navigational preferences.
Generally speaking, the experiments show the massive potential of drone swarms and their collective behaviors, while also highlighting the controller's failure cases and laying the groundwork for further experimentation with the concept.
In the end, it is safe to say that this machine learning approach can keep a swarm of quadcopters from colliding and from moving incoherently: the drones coordinate by imitating a simple flocking algorithm. This, according to the authors, is the next step toward building a fully decentralized, vision-based swarm of drones.
Citation: Schilling, Fabian; Lecoeur, Julien; Schiano, Fabrizio; Floreano, Dario. “Learning Vision-based Cohesive Flight in Drone Swarms.” arXiv preprint arXiv:1809.00543. https://arxiv.org/abs/1809.00543 | https://arxiv.org/pdf/1809.00543.pdf