Final Paper: Simulating Filial Imprinting With RobotBit

As documented earlier, Anand and I completed the benchmarks we set for our final project. The robots were fine-tuned so that each kept a set distance from the robot ahead of it in the line, and the movement was extremely fluid. I am quite satisfied with how the end products turned out, though I wish we had had more time to create a full ‘conga line’ of RobotBits. Nevertheless, I really enjoyed the different projects we created in class, and I hope to apply more of what I learned in future endeavors!

Link to the Final Paper:

https://docs.google.com/document/d/1uyzA88VI6YVixFFd-j42QGfMkUiLtyTW35_XHtpDS_M/edit?usp=sharing

BIRS Collective Decisions Lab Report

Plan

The purpose of this project is to analyze how a simple algorithm can govern the behavior of a population of individuals. With this in mind, we set out to devise an algorithm that produces clustering behavior among micro:bit-powered robots. Within the swarm, each robot finds its nearest neighbor, and the robots group together into ever-larger clusters until they form a single unit and collectively act as one entity.
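
As a rough illustration of the rule, here is a minimal sketch (not our actual robot code) in which each robot repeatedly steps toward its nearest neighbor; the coordinates and helper names are assumptions for the example:

import math

def nearest_neighbor(me, others):
    # Closest other robot by straight-line (Euclidean) distance.
    return min(others, key=lambda p: math.dist(me, p))

def step_toward(me, target, step=1.0):
    # Move a fixed step toward the target; repeating this for every
    # robot pulls the swarm together into ever-larger clusters.
    dx, dy = target[0] - me[0], target[1] - me[1]
    d = math.hypot(dx, dy)
    if d <= step:
        return target  # close enough: merge into the cluster
    return (me[0] + step * dx / d, me[1] + step * dy / d)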

Materials

micro:bit x 5
Robotbit set x 4 (optional)
Logitech C920 HD camera
Mac mini
ArUco markers

Code

First, we set up a computer vision system. We mounted the Logitech camera high above the project setup, facing downward so that it could see all the robots, and attached an ArUco marker to the top of each robot. The camera feeds a program that detects the position and orientation of each marker, calculates the distances between robots, and uses serial and radio communication to guide the robots into clusters.
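
The detection step looks roughly like the sketch below. This is a generic OpenCV example rather than the code from the repository linked underneath, and the camera index and marker dictionary are assumptions (the aruco API also varies between OpenCV versions):

import math
import cv2

cap = cv2.VideoCapture(0)  # overhead camera
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()

ret, frame = cap.read()
corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary, parameters=params)
if ids is not None:
    for quad, marker_id in zip(corners, ids.flatten()):
        pts = quad[0]  # the four corner points of one marker
        x, y = pts[:, 0].mean(), pts[:, 1].mean()  # center = robot position
        # The vector along the marker's top edge gives the robot's heading.
        heading = math.degrees(math.atan2(pts[1][1] - pts[0][1],
                                          pts[1][0] - pts[0][0]))
        print(marker_id, x, y, heading)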

The full program can be found at the following GitHub link:

https://github.com/krawc/arUco

Reflection

This project was very challenging for me to comprehend, especially because I wasn’t familiar with the concept of computer vision; however, it motivated me to put in extra effort because I was very interested in the concept behind it. Whilst I did not make a huge contribution to the technical development of this project, I still found the experience very valuable because I was able to learn from my peers. Moreover, it was very rewarding to see the successful end result, especially because we experienced multiple failures in getting the robots to communicate with each other and form clusters. I think this project is a good stepping stone towards devising more complex algorithms that can be applied to real-life swarm behaviors.

Anand Tyagi – Swarm Project: Final Observations

For our final project, our team (Gabi, Diana, Kevin, and I) decided to mimic the territorial behaviors of animals protecting their territory. The main procedure involved identifying the “enemy” and sending the robots to remove it from their territory. In this case, the enemy was a specific QR code and the “territory” was the green box under the camera. Our plan was to use the computer vision program set up in the room to track the position of the robots and use that information to send commands to each robot. Although we could have sent every command to every robot and had each individual robot decide how and when to run, we realized this could be done more efficiently by performing the calculations on the computer instead and sending more limited instructions to each robot. This way, the system could run faster and more naturally mimic a real-life system. Of course, this also takes away some of the “swarm nature” of the project. However, since the only difference is the amount of computation each robot has to do, rather than the type of computation, I think it is justified to do the computation on the computer instead of repeating the same computation on each robot.
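
In that centralized setup the per-robot messages can stay tiny. Below is a minimal sketch of the dispatch side, assuming a hypothetical "<id>:<command>" protocol and a micro:bit plugged into the computer to relay commands over radio (the port name and baud rate are also assumptions):

import serial  # pyserial

port = serial.Serial("/dev/tty.usbmodem1", 115200)

def send(robot_id, command):
    # All of the position and angle math has already happened on the
    # computer; each robot only receives a short instruction to act on.
    port.write(f"{robot_id}:{command}\n".encode())

send(2, "F")  # e.g. tell robot 2 to drive forward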

Beyond just having the robots collectively remove the enemy from the territory, we decided to add another layer of complexity and send only as many robots as necessary. We did this by sending one robot and checking whether it was able to move the enemy on its own. If not, it would call for reinforcements, and this process would continue until the group had enough power to push the enemy out of the rectangle. We were planning to test this by filling a box with varying amounts of sand.
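
The escalation logic could look something like the following sketch, where enemy_pos and send stand in for the overhead-camera lookup and the radio link, and the territory rectangle, wait time, and movement threshold are all assumed values:

import math
import time

TERRITORY = (0, 0, 640, 480)  # assumed territory rectangle in camera pixels

def in_territory(pos, rect=TERRITORY):
    x, y = pos
    x0, y0, x1, y1 = rect
    return x0 <= x <= x1 and y0 <= y <= y1

def expel_enemy(robots, enemy_pos, send, wait=5.0, moved=2.0):
    # Start with a single robot; call in a reinforcement only when the
    # enemy's marker has not moved since the last check.
    active = 1
    send(robots[0], "PUSH")
    last = enemy_pos()
    while in_territory(last):
        time.sleep(wait)
        now = enemy_pos()
        if math.dist(now, last) < moved and active < len(robots):
            send(robots[active], "PUSH")  # reinforcements
            active += 1
        last = now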

Although we were not able to test our project and code in the end, we did make a lot of progress. This project taught us a lot about how to break a problem down into steps and how to go about constructing a complex algorithm for a large system of robots. As you can see in the picture below, we broke down each process and listed the procedure step by step.

Besides this, we also iterated over our code several times while figuring out how to simplify and speed up the entire process. We learned a lot about breaking a larger project into more manageable parts and gained some insight into how larger coding projects are divided up and worked on by multiple people.

Overall, I enjoyed this project, as it taught me quite a bit about how to build code for larger systems and how to design algorithms for swarms of robots.

Final Project Proposal

For my final, I will be working with Anand to fine-tune and expand upon my midterm project. As you may recall, my midterm centered on the method of imprinting found in animals, namely birds. Imprinting usually occurs immediately after birth; organisms such as ducklings have been observed imprinting on the first object they see, whether it be shoes or other animals such as dogs. One of the most noticeable behaviors that results from imprinting is that the animal will follow the organism it has imprinted on, as demonstrated by this video:

As you can see, the duckling has imprinted on the dog and will essentially follow it everywhere. Once the imprint has solidified, it is extremely difficult to break. Here is another demonstration, with ducklings imprinting on their mother:

With these examples in mind, we wanted to emulate the most basic imprinting behavior by having multiple bots follow a ‘mother’ bot, matching both its speed and its changes in angle. To do so, we will continue to use the dual-ultrasound technique from my midterm, in which each motor is paired with its own ultrasonic sensor. This allows for smooth, quick turning, as well as accurate bot-to-bot sensing.

Dual-ultrasound:

Some improvements we intend to add include distance adjustment for the ‘duckling’ bots relative to the ‘mother’ bot: the babies will follow the mother while maintaining a set distance behind her to avoid collisions. We will also fine-tune the reactive turning to produce smoother motion, and add more bots to form a true duckling line. Lastly, since my midterm ducklings had some trouble detecting the back of the mother bot, we also want to add a ‘detection plate’ to her rear, so that each bot has a larger marker to detect.
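
A rough sketch of the intended distance-keeping controller is below; the target distance, gain, and speed range are assumed values, and reading the sensors and driving the Robotbit motors is left to the surrounding program:

TARGET_CM = 15  # assumed following distance behind the mother bot
GAIN = 3        # assumed proportional gain

def follow_step(left_cm, right_cm):
    # Each wheel's speed is proportional to its own sensor's distance
    # error: both wheels stop at TARGET_CM, holding the set spacing,
    # and a skewed reading makes the bot pivot toward the nearer side,
    # keeping the mother's rear centered.
    def wheel(dist_cm):
        return max(0, min(100, GAIN * (dist_cm - TARGET_CM)))
    return wheel(left_cm), wheel(right_cm)

# Example: left sensor at 25 cm, right at 15 cm -> the left wheel runs
# while the right wheel stops, so the duckling turns back into line.
print(follow_step(25, 15))  # (30, 0)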

The expected product of this project is a set of fully functional ducklings (complete with smooth motion and accurate detection), as well as a mother duck to act as the line leader.