Swarm Robotics

Computer scientist Thomas Schmickl on bio-inspired robotics, virtual embryogenesis, and symbiotic multi-robot organisms

faq | January 14, 2016

We mainly investigate bio-inspired swarm algorithms and robot designs. For this we observe natural organisms in our lab, or we take known facts about their behaviors from the literature, and translate the "core" of those behaviors into algorithms, which we then implement in our robots. Of course we focus on especially interesting "swarm-intelligent" behaviors of animals. For example, we extracted the BEECLUST algorithm from the behavior of young honeybees in complex temperature fields. These collective behaviors allow the bees to reliably pick out the spot with the temperature best suited for aggregation (the global optimum) from a set of other warm spots (local optima). The basic principles of interaction were extracted from videos of our experiments with real bees and then transferred into a simple computer algorithm executed by swarms of autonomous robots, allowing these robots to find optimal spots in the environment collectively. BEECLUST is currently the simplest yet still quite powerful swarm algorithm in existence. Such algorithms also need sensors and actuators on the robots, as well as communication between robots, to work, so we also investigate bio-inspired designs of those components together with our hardware partners. Remember, we are actually a biology department, so we usually work together with international partners that operate engineering departments (mechatronics, electronics, sensors, …).
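
Below is a minimal simulation sketch of the BEECLUST rule set: robots move randomly, and whenever two robots meet, each stops and waits for a time that grows with the locally sensed temperature before moving on, so the swarm gradually accumulates at the warmest spot. The temperature field, the waiting-time function and all parameters are illustrative assumptions, not the values used in the real bee or robot experiments.

```python
import math
import random

# Minimal BEECLUST sketch: random motion, stop on robot-robot encounters,
# wait longer where it is warmer. Warmer spots hold robots longer, so the
# swarm aggregates at the global temperature optimum.

ARENA = 1.0            # square arena, side length 1 (arbitrary units)
ROBOT_RADIUS = 0.03
W_MAX = 60.0           # maximum waiting time (illustrative)

def temperature(x, y):
    """Toy temperature field: a weak local optimum and a stronger global one."""
    local = 0.6 * math.exp(-((x - 0.2) ** 2 + (y - 0.2) ** 2) / 0.02)
    best = 1.0 * math.exp(-((x - 0.8) ** 2 + (y - 0.7) ** 2) / 0.02)
    return local + best

def waiting_time(temp):
    """Monotonic mapping from sensed temperature to waiting time (assumed form)."""
    return W_MAX * temp ** 2 / (temp ** 2 + 0.25)

class Robot:
    def __init__(self):
        self.x, self.y = random.random(), random.random()
        self.heading = random.uniform(0, 2 * math.pi)
        self.wait = 0.0

    def step(self, others, dt=1.0, speed=0.02):
        if self.wait > 0:                       # still waiting at an aggregation site
            self.wait -= dt
            return
        # random walk with slight heading noise
        self.heading += random.gauss(0, 0.3)
        self.x += speed * math.cos(self.heading)
        self.y += speed * math.sin(self.heading)
        # walls: just turn away, no waiting
        if not (0 < self.x < ARENA and 0 < self.y < ARENA):
            self.x = min(max(self.x, 0.0), ARENA)
            self.y = min(max(self.y, 0.0), ARENA)
            self.heading += math.pi
            return
        # robot-robot encounter: stop and wait according to the local temperature
        for other in others:
            if other is not self and math.dist((self.x, self.y), (other.x, other.y)) < 2 * ROBOT_RADIUS:
                self.wait = waiting_time(temperature(self.x, self.y))
                self.heading += math.pi        # leave in another direction afterwards
                break

swarm = [Robot() for _ in range(20)]
for _ in range(2000):
    for robot in swarm:
        robot.step(swarm)

# After the run, most robots should idle near the warmest spot at (0.8, 0.7).
near_best = sum(math.dist((r.x, r.y), (0.8, 0.7)) < 0.15 for r in swarm)
print(f"{near_best}/{len(swarm)} robots near the global optimum")
```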

On adaptation and evolution for symbiotic multi-robot organisms

We had several dozen cubic robot modules (approx. 10x10x10 cm); each of them was an autonomous robot driving on tracks or screws. Each module also had a hinge to bend itself and four docking ports to physically connect to other modules (and to undock again). This way the modules could form a swarm of cells that aggregate and dock to form a more complex robot organism of various shapes. For steering this "embryogenetic process" we developed a software called "Virtual Embryogenesis" (aka "VE", mainly work with my colleague Dr. Ronald Thenius), which is inspired by the embryogenetic processes of Metazoans. Biological embryogenesis is driven by gene activation that leads to the production of substances (morphogens), which diffuse and build concentration gradients throughout the body of the embryo. The local concentration of such morphogens then determines the activation or blockage of other genes. This system is a very complex cascade of gene-morphogen-gene interactions that can actually be seen as the real "program" guiding the embryogenesis.

In VE, the robots run a distributed model of such processes, activating and deactivating virtual genes based on local morphogen levels, which are in turn affected by gene activation. These virtual morphogen levels are also changed by a diffusion-like communication process between all neighboring robot modules. Ultimately, the virtual morphogen concentrations affect the docking ports and can this way grow or reconfigure the multi-modular robotic organism.

To let such a robotic multi-cellular organism exhibit behaviors (e.g., walking), we developed another software called AHHS (Artificial Homeostatic Hormone System), aka "The Hormone Controller". This hormone controller works similarly to the Virtual Embryogenesis described above. Every module diffuses (communicates) substances that represent hormones with its local neighbors. These substances are locally added to or removed from the system by rules that are triggered based on the local hormone concentration. These rules can be seen as the "genome" of the module. In contrast to the Virtual Embryogenesis, which is used to grow the organism by coordinating the docking of single robot modules, the AHHS is used to move the organism. The hormone concentrations are also affected by the sensor input and ultimately affect the motors (tracks, screw drives, hinges) of the modules. Thus it represents an information-processing system that drives the whole robot organism.

One advantage is that the emerging hormone gradients can also subdivide the organism and "activate" different programs. For example, the legs can execute different programs than the central spine, regardless of how many legs the organism has. And based on specific sensor inputs, the whole organism can switch between "programs": with low batteries, a "hungry" organism can move differently than a fully charged one. In short, this software, inspired by the hormonal regulation system of animals, exploits the shape of the body, reflects environmental conditions (sensor input), and can act either as a driving software (actually producing specific behaviors) or as an action-selection mechanism (choosing which behaviors are exhibited). I developed this system together with my colleagues Dr. Heiko Hamann, Dr. Jürgen Stradner and Dr. Payam Zahadat.
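
As a rough illustration of how such a controller can be structured, here is a minimal hormone-controller-style module in the spirit of the description above: each module exchanges virtual hormones with its docked neighbors, rules (the module's "genome") add hormone when local concentrations cross thresholds, sensor input injects hormone, and one hormone level drives the module's motor. The rule format, the number of hormones and all constants are my own assumptions, not the actual AHHS implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Rule:
    """If hormone `trigger` exceeds `threshold`, change hormone `target` by `delta`."""
    trigger: int
    threshold: float
    target: int
    delta: float

@dataclass
class Module:
    hormones: list = field(default_factory=lambda: [0.0, 0.0])  # two virtual hormones
    rules: list = field(default_factory=list)                   # the module's "genome"
    neighbors: list = field(default_factory=list)               # docked modules

    def sense(self, sensor_value):
        # sensor input injects hormone 0
        self.hormones[0] += sensor_value

    def apply_rules(self):
        # rules add or remove hormone based on local concentrations
        for r in self.rules:
            if self.hormones[r.trigger] > r.threshold:
                self.hormones[r.target] += r.delta

    def diffuse(self, rate=0.1):
        # diffusion-like exchange with docked neighbors builds gradients
        for n in self.neighbors:
            for i in range(len(self.hormones)):
                flow = rate * (self.hormones[i] - n.hormones[i])
                self.hormones[i] -= flow
                n.hormones[i] += flow

    def decay(self, rate=0.05):
        self.hormones = [max(0.0, h * (1 - rate)) for h in self.hormones]

    def actuate(self):
        # hormone 1 drives this module's motor (track, screw drive or hinge)
        return self.hormones[1]

# Three modules in a chain; only the first one senses the environment.
genome = [Rule(trigger=0, threshold=0.5, target=1, delta=0.2)]
chain = [Module(rules=list(genome)) for _ in range(3)]
chain[0].neighbors.append(chain[1])
chain[1].neighbors.append(chain[2])

for step in range(50):
    chain[0].sense(1.0)
    for m in chain:
        m.apply_rules()
        m.diffuse()
        m.decay()

# a gradient of motor activation emerges along the chain
print([round(m.actuate(), 2) for m in chain])
```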

We mainly followed the Evo-Devo approach, which is a combination of developmental biology and evolution theory. In this theory, embryogenesis and evolution are interlinked processes: evolution adapts the genes, which shape the body that is grown, and the suitability of that body in turn affects its evolutionary fate. In our approaches (see VE and AHHS above), all programs/rules are encoded in a data structure called a "genome", and by applying evolutionary computation techniques we adapt these genomes based on the performance of the organism. We also considered ecology when designing such robotic systems, because we always had many artificial organisms "living" together in the same habitat (cooperation, competition) in our focus. We developed a set of experiments in which we deliberately put organisms in situations where they had to either compete or cooperate with other organisms. Thus the evolution of cooperation happened implicitly, as only those that competed well or cooperated well were good at reaching the final targets.
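
As a bare-bones illustration of this adaptation step, the following evolutionary loop mutates and selects such genomes based on a fitness score; the fitness function here is only a stand-in for evaluating an organism's task performance, and the encoding, selection scheme and parameters are illustrative assumptions.

```python
import random

GENOME_LENGTH = 8
POP_SIZE = 20
GENERATIONS = 100

def random_genome():
    # a genome is just a vector of controller parameters in this sketch
    return [random.uniform(-1, 1) for _ in range(GENOME_LENGTH)]

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

def fitness(genome):
    # Placeholder: in the real setting this would run the organism (or a
    # simulation of it) with the encoded rules and score its task performance.
    return -sum((g - 0.5) ** 2 for g in genome)

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]                       # truncation selection
    offspring = [mutate(random.choice(parents)) for _ in range(POP_SIZE - len(parents))]
    population = parents + offspring

print("best fitness:", round(max(fitness(g) for g in population), 4))
```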

On applications

I think we inspired many current research projects in multi-cellular robotics. The concepts developed in our older projects I-SWARM, SYMBRION and REPLICATOR have lived on and were further developed in a series of our own projects. For example, we went underwater with the robots in the project CoCoRo (the largest underwater robot swarm in the world) and in the current project subCULTron. With subCULTron we will be the first to apply an autonomous robot swarm of such a size (150+) in the real open world, monitoring the Venice lagoon. The current state of the art is that either large swarms are operated under controlled lab conditions or "swarms" of very few robots are operated in the wild; however, those out-of-the-lab installations are still semi-controlled and short-term. In subCULTron, we are going far beyond that: we plan to operate for long times (many days, weeks) with an almost 100% autonomous system.

Our projects ASSISIbf and FloraRobotica also use swarm algorithms together with real organisms (real honeybees, real ants, and real plants), aiming for future applications in livestock management, animal monitoring and agriculture that incorporate robotic swarms as a part of the system.

For example, in ASSISIbf we associate real bees with robots and close the behavioral feedback loops between them. That means that the real bees affect the robots, the robots affect the bees, and thus self-organization within bees and robots can kick in, ultimately merging both societies into one. In addition, we use evolutionary computation and machine learning to adapt the robots to the bees. Currently we have two research targets here: (1) evolving a program in a group of robots that allows them to estimate the local density of bees around them (a collective sensing task), and (2) evolving a program in the robots that makes the bees perform a specific task, for example to aggregate at a desired place (a collective actuation task).

I think that the ability to retrieve information from or about an animal population or society, as well as the ability to exert control over it (without forcing the animals!), might be advantageous for animal keeping and for controlling animal populations (including pests), and might also be a more humane way to treat animals: we never force the animals to do something, instead we convince them to go to a specific place using the normal stimuli they are used to reacting to. Our robots are, in a sense, mimicking other animals and just have their own say in the animal society. Thus, evolutionary computation and machine learning algorithms will allow the robots to adapt to the real organisms and to integrate smoothly into the natural society.

On current challenges

Evolutionary computation has a problem with the complexity of the behaviors it produces. For example, it is easy to evolve robot programs that do collision avoidance or simple target finding, but very difficult to evolve more complex behaviors; evolution tends to go for the simplest (cheapest) solution that is somehow good enough. Swarm robotics builds on simple things, allowing more complex collective behaviors to be constructed from those simple ingredients. So evolutionary swarm robotics is a promising perspective, as it is a win-win combination of swarm robotics and evolutionary computation. However, this field is still rather unexplored, as it is difficult to establish the required setups. The most interesting flavor of evolutionary swarm robotics is on-line, on-board evolution on real robots. This means that the evolutionary algorithm is executed live in all robots in parallel. This is technically tricky, as it demands good (decentralized) communication and also long runtimes. The ultimate goal is to generate robot swarms that are dropped somewhere and that reconfigure and reprogram themselves according to the environmental situation at the target place, without much a-priori knowledge of the situation there.
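
For illustration, here is a generic sketch of such an on-line, on-board scheme: every robot keeps and evaluates its own genome while it operates, and when two robots happen to meet they compare fitness and the locally worse one adopts a mutated copy of the better genome. This is a common embodied-evolution pattern written under my own assumptions, not the specific algorithm used in the projects discussed here.

```python
import random

def mutate(genome, sigma=0.1):
    return [g + random.gauss(0, sigma) for g in genome]

class SwarmRobot:
    def __init__(self, genome_length=6):
        self.genome = [random.uniform(-1, 1) for _ in range(genome_length)]
        self.fitness = 0.0

    def evaluate(self):
        # Stand-in for measuring task performance on board while driving around.
        self.fitness = -sum((g - 0.3) ** 2 for g in self.genome)

    def exchange(self, neighbor):
        # Decentralized selection: the locally worse robot adopts a mutated
        # copy of the better genome; no central evolution server is needed.
        if neighbor.fitness > self.fitness:
            self.genome = mutate(neighbor.genome)
        else:
            neighbor.genome = mutate(self.genome)

swarm = [SwarmRobot() for _ in range(10)]
for step in range(500):
    for robot in swarm:
        robot.evaluate()
    a, b = random.sample(swarm, 2)   # two robots happen to meet
    a.exchange(b)

print("best on-board fitness:", round(max(r.fitness for r in swarm), 4))
```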

On the future of the field

Deep sea. Other planets and/or their moons. And funny/interesting/motivating toys.

As I described above, self-reconfiguring/self-programming/self-adapting robot swarms will be excellent tools for exploring unknown places with potentially harsh conditions. The most prominent examples are deep-sea habitats and extraterrestrial exploration. In the nearer future, applications will be in environmental monitoring (like the system we are currently developing for the Venice lagoon). My personal experience after many, many exhibitions of our robot swarms is that they are very attractive to people, who like to touch the robots and to interfere (thus to play) with the swarm. So there is a "toy factor" in it. This is why I expect swarm toys to come up soon, and people will start to play around and try to figure out what interesting things can be done by and with them, so there is also a science-education aspect to them.

Affiliated Associate Professor at the Department of Zoology, Karl-Franzens University of Graz (Austria); head of the Artificial Life Lab, Department of Zoology, Karl-Franzens University of Graz