The paper "Simulating Kilobots within ARGoS: models and experimental validation", by Carlo Pinciroli, M. Salah Talamali, Andreagiovanni Reina, James Marshall, and Vito Trianni, proposes a new plugin for the ARGoS simulator that lets users simulate Kilobots quickly and realistically, run the same code in simulation and on the real robots, and simulate the ARK infrastructure alongside the Kilobots.
Looking at honeybees in a colony as if they were neurons in a brain could help us understand the basic mechanisms of human behaviour. A bee colony can be considered a single superorganism, composed of tens of thousands of bees, which displays a coordinated response to external stimuli. Our recent paper, published in Scientific Reports and authored by Andreagiovanni Reina, Thomas Bose, Vito Trianni, and James Marshall, has shown that honeybee colonies might respond to stimuli in the same way other organisms, such as humans, do. The superorganism's response is the result of interactions between individual bees; finding which types of interaction generate brain-like responses helps researchers identify the general mechanisms behind these responses, and may ultimately lead to a better understanding of our own brain.
The video above showcases the functionalities of ARK through three demos. In Demo A, ARK automatically assigns unique IDs to a swarm of 100 Kilobots. Demo B shows how ARK can automatically position 50 Kilobots, one of the typical preliminary operations in swarm robotics experiments. These operations are tedious and time-consuming when done manually; ARK saves researchers' time and makes operating large swarms considerably easier. Additionally, automating the operation gives more accurate control of the robots' start positions and removes undesired biases in comparative experiments.

Demo C shows a simple foraging scenario in which 50 Kilobots collect material from a source location and deposit it at a destination. The robots are programmed to pick up one virtual flower inside the source area (the green flower field), carry it to the destination (the yellow nest), and deposit the flower there. When performing an action in the virtual environment, a robot signals this by lighting its LED in blue. When a robot picks up a virtual flower, it reduces the source's size for the rest of the swarm (shrinking the area's diameter by 1 cm); similarly, when a robot deposits a flower at the destination, that area's diameter increases by 1 cm. This demo shows that the robots can perceive (and navigate) a virtual gradient, can modify the virtual environment by moving material from one location to another, and can autonomously decide when to change the virtual environment they sense (either the source or the destination).
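The bookkeeping that Demo C performs on the virtual environment can be sketched as follows. This is a hypothetical illustration, not ARK's actual code: the class and method names (`VirtualForagingField`, `pick_up`, `deposit`) are invented for the example, and only the 1 cm diameter updates come from the description above.

```python
class VirtualForagingField:
    """Tracks the diameters (in cm) of the virtual source and destination areas."""

    def __init__(self, source_diameter_cm: float, destination_diameter_cm: float):
        self.source_diameter_cm = source_diameter_cm
        self.destination_diameter_cm = destination_diameter_cm

    def pick_up(self) -> bool:
        """A robot picks one virtual flower: shrink the source's diameter by 1 cm.
        Returns False once the source is exhausted."""
        if self.source_diameter_cm <= 0.0:
            return False
        self.source_diameter_cm = max(0.0, self.source_diameter_cm - 1.0)
        return True

    def deposit(self) -> None:
        """A robot deposits a flower at the nest: grow the destination by 1 cm."""
        self.destination_diameter_cm += 1.0
```

In such a scheme the tracking system, not the robots, would hold the environment state, which matches ARK's role as an augmented-reality layer on top of the physical swarm.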
More information available at: http://diode.group.shef.ac.uk/kilobots/index.php/ARK
A bifurcation analysis of the model shows that, for previous parameterisations, there is always decision deadlock when three or more options have the same quality.
This result motivates the change of parameterisation with respect to previous work.
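The deadlock result can be illustrated with a minimal numerical sketch. The equations below follow the standard commitment-dynamics structure used in this line of work (spontaneous commitment gamma, abandonment alpha, recruitment rho, cross-inhibition sigma), but the function name and the specific rate values are illustrative assumptions, not the paper's parameterisation. With three same-quality options and quality-scaled rates, the three commitment fractions settle to the same value: no option wins.

```python
def simulate_commitment(n_options, gamma, alpha, rho, sigma,
                        t_max=200.0, dt=0.01):
    """Euler-integrate mean-field commitment dynamics for n same-quality options.

    psi[i] is the fraction of the swarm committed to option i; the rest are
    uncommitted.  Commitment to an option grows through spontaneous discovery
    (gamma) and recruitment (rho), and shrinks through abandonment (alpha)
    and cross-inhibition from robots committed to the other options (sigma).
    """
    # Tiny asymmetric start, so any symmetry breaking could show up.
    psi = [1e-3 * (i + 1) for i in range(n_options)]
    for _ in range(int(t_max / dt)):
        total = sum(psi)
        u = 1.0 - total                       # uncommitted fraction
        psi = [p + dt * (gamma * u - alpha * p + rho * p * u
                         - sigma * p * (total - p))
               for p in psi]
    return psi

# Illustrative rates tied to a common quality v = 5 (an assumed scaling, not
# the paper's values): discovery/recruitment scale with v, abandonment with 1/v.
v = 5.0
psi = simulate_commitment(3, gamma=v, alpha=1.0 / v, rho=v, sigma=v)
# The small initial asymmetry decays and all three fractions converge to the
# same value: decision deadlock.
```

With these rates the symmetric equilibrium is stable, so even a perturbed swarm relaxes back into deadlock; the paper's bifurcation analysis characterises when such symmetric equilibria persist for three or more equal-quality options.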
In the video below, a swarm of 150 Kilobots makes a value-sensitive decentralised decision between two options (red and blue). The swarm must select the best-quality option only if that quality exceeds a given threshold (in this study, 1.5). In this experiment, both options have quality v = 5, so the swarm commits to one of them (here, the blue option).
The overlaid coloured circles show where the two options are located in the environment. Each option is signalled by a static Kilobot acting as a beacon, which broadcasts infrared messages containing the option's ID and quality. The robots light up their LEDs in a colour corresponding to their internal commitment state: green for the uncommitted state, and red or blue for commitment to the option of that colour.
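The signalling scheme just described can be summarised in a small sketch. The names here (`BeaconMessage`, `led_for_state`) are hypothetical; only the message contents (option ID and quality) and the colour coding are taken from the text.

```python
from dataclasses import dataclass


@dataclass
class BeaconMessage:
    """Contents of the infrared message a beacon broadcasts for its option."""
    option_id: int    # which option this beacon advertises
    quality: float    # the option's quality, e.g. v = 5


# Commitment state -> LED colour, matching the colours in the video.
LED_COLOUR = {
    "uncommitted": "green",
    "option_red": "red",
    "option_blue": "blue",
}


def led_for_state(state: str) -> str:
    """Return the LED colour a robot lights for its commitment state."""
    return LED_COLOUR[state]
```

On the real robots this logic would live in the Kilobot's C control loop; the sketch only captures the mapping described above.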