Lab Report: Collective Decisions by Molly He

Plan:

In the lab, we pick a simple algorithm to simulate the "cohesion" behavior. The steps are as follows:

1. Trigger

We have five micro:bits operating in a square, plus one more micro:bit acting as the controller. Each operating micro:bit joins a radio group with the controller. Once the controller micro:bit sends out a trigger to all the operating micro:bits, they switch to "cohesion" mode and begin to move toward a target point.
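A minimal MakeCode sketch of this trigger step is shown below; the radio group number, the trigger string, and the icon are placeholders of ours, and the controller and operator snippets would be flashed onto separate micro:bits.

// Controller micro:bit: broadcast the trigger when button A is pressed.
radio.setGroup(1)
input.onButtonPressed(Button.A, function () {
    radio.sendString("cohesion")
})

// Operating micro:bit: switch into "cohesion" mode when the trigger arrives.
let mode = ""
radio.setGroup(1)
radio.onReceivedString(function (receivedString) {
    if (receivedString == "cohesion") {
        mode = "cohesion"
        basic.showIcon(IconNames.Yes)
    }
})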

2. Calculate

Each operating micro:bit will carry a different individual marker. The computer vision system determines the coordinates of each marker. After we get the coordinates of all five micro:bits, we calculate the coordinates of the average point (the centroid) of all five positions. We then send each robot's current position and the average-point position to the five operating micro:bits through the controller micro:bit.
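A rough sketch of the averaging and the relay, written in MakeCode-style JavaScript for illustration: in practice the centroid would be computed on the computer running the vision system, and the coordinate values and the value names "avgx"/"avgy" are placeholders of ours.

// Example marker coordinates as they might come from the computer vision system.
let xs = [12, 40, 55, 30, 22]
let ys = [18, 25, 60, 44, 35]

// The "average point" (centroid) is simply the mean of the x and y coordinates.
let avgX = 0
let avgY = 0
for (let i = 0; i < xs.length; i++) {
    avgX += xs[i]
    avgY += ys[i]
}
avgX = avgX / xs.length
avgY = avgY / ys.length

// The controller micro:bit relays the target point to the operating micro:bits.
radio.setGroup(1)
radio.sendValue("avgx", avgX)
radio.sendValue("avgy", avgY)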

3. Display

After the five operating micro:bits get the coordinates, they calculate the direction they need to head in. They then show an arrow pointing toward the target point on their 5×5 LED matrix.
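A sketch of how an operating micro:bit might turn the received coordinates into one of the eight LED arrows. It assumes image coordinates with y increasing downwards and the value names from the sketch above; in a full version each robot would need its own coordinate names (e.g. "x1", "y1").

let xCur = 0
let yCur = 0
let xDes = 0
let yDes = 0
radio.onReceivedValue(function (name, value) {
    if (name == "x") xCur = value
    if (name == "y") yCur = value
    if (name == "avgx") xDes = value
    if (name == "avgy") yDes = value
    // Bearing from the robot to the target, measured clockwise from "up".
    let bearing = Math.atan2(xDes - xCur, -(yDes - yCur)) * 180 / Math.PI
    if (bearing < 0) bearing += 360
    // ArrowNames runs clockwise from North (0) to NorthWest (7), 45 degrees apart.
    basic.showArrow(Math.round(bearing / 45) % 8)
})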

4. Move

Each operating micro:bit will move towards or away from the average point. After each move, we get the new location coordinates of the operating micro:bits and send them through the controller micro:bit. Each operating micro:bit then calculates its distance from the average point and compares it with the previous distance. If it has moved towards the average point (the distance is smaller), it shows a happy face on its screen. If it has moved away (the distance is larger), it shows a sad face. Once it reaches the desired position, its screen displays something else (a square or a circle, for example).

Code:

Important Variables:

previous_distance 

current_distance

cur=(x_cur, y_cur)

des=(x_des, y_des)

Logic:

void loop() {
    current_distance = dist(des, cur);

    if (previous_distance > current_distance) {
        display(HAPPY);   // moved closer to the average point
    } else {
        display(SAD);     // moved farther from it
    }

    previous_distance = current_distance;
}

The logic of our algorithm is not hard to figure out. The main problem we still need to solve is how to repeatedly send the coordinates (the information) out from the controller micro:bit.
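One possible approach, sketched below rather than tested: have the computer vision program stream the coordinates to the controller micro:bit over USB serial, and let the controller forward each value over radio. The "name:value" line format and the radio group are assumptions of ours.

// Controller micro:bit: relay every "name:value" line from USB serial to the radio group.
radio.setGroup(1)
serial.onDataReceived(serial.delimiters(Delimiters.NewLine), function () {
    let line = serial.readLine()
    let parts = line.split(":")
    if (parts.length == 2) {
        radio.sendValue(parts[0], parseFloat(parts[1]))
    }
})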

Reflection:

We spent a lot of time deciding which behavior we were going to mimic, and we also hesitated over laying out a clear logical plan for realizing our idea. In the end, we didn't have time to actually tinker with the computer vision code and put our idea into practice.

Final Paper & Documentation by Molly (Partner: Yaming Xu)

This is the final paper. It is written by Diana Xu. Link: final paper

After exhibiting my project at the IMA show, at the undergraduate research symposium, and at ICRA in Montreal, I collected many comments and much feedback:

1. So far so good:

1) It's good to keep the piece "magical" and let viewers see the ball moving from a top-down perspective, with the sand table kept on the floor. People are curious about the mechanism that makes the ball move; many of them actually bend down and look underneath the sand table, which is an interesting interaction worth photographing.

2) The appearance of the sand table is decent and beautiful, making it feel like an artwork.

3) The slow movement of the ball evokes a sense of "Zen". The audience actually enjoys watching it without feeling impatient; they stare with curiosity and interest, and some even stay longer to wait for the pattern to be revealed.

2. Suggestions:

1) The pattern could be more precise so that the ball could run continuously. (After it has finished the pattern, the robot could run backwards to its starting point.) In this case, the whole piece could be a display that doesn't need supervision.

2) There could be various patterns to choose from. This may deviate from our original point in making the sand table, but it would make the piece more flexible and intriguing.

3) The robot could incorporate feedback: if the ball on the table becomes detached from the magnet below, the robot would detect this, go back to fetch the ball, and push it forward again.

4) To create depth in the sand, we could also try vibration, an electromagnet, or back-and-forth motion.

5) People are in awe of the magnificent patterns created by the pufferfish, yet for now they can't tell how the piece relates to the pufferfish. We could either carve the ball into a fish, or use a big water tank, fill it with water, and have the magnet draw on the sand at the bottom of the tank. Meanwhile, there could be a display (a TV or a printed picture) of the actual pattern drawn by the pufferfish, to inform people better than a simple verbal or written description.

Biology Observation by Molly He (Partner: Diana Xu)

In this lab, we first learned about how to tell the difference between male and female flies. We also learned about the circadian rhythm.

This is the fly strain that carries the Parkinson's disease gene. There is also a healthy control group.


Here the flies are immobilized by CO2. The board shown also emits CO2 to keep the flies asleep.

On the left is the male fly; on the right, the female.


We are recording Monitor 1, lines 17-32. The red-dotted ones are the disease group.


We can see that some flies, accidentally killed by us, show all-zero readings. For the others, I lack the ability to use the computer tool to analyze the data, but I could tell by eye that most flies followed the circadian rhythm.

BIRS Final Documentation

Purpose of the Project

The aim of the final is to design a comprehensive experimental project, taking into account the theoretical and practical concepts learned throughout the semester, that culminates in a research paper exploring one of the following topics:

      • Biology knowledge that inspires, informs and nurtures robotic systems (e.g., swarm behaviors)
      • Robotic experimental information that helps to create a new understanding about biology (e.g.,  intelligence, decisions or behaviors of individuals)
      • Hybrid systems that are inspired in biology by design (e.g., new mechanical structures or locomotion systems)

Initial Ideas and Research

In my initial project proposal, I had decided to work individually and hone my focus on investigating swarm behavior and designing a robotic implementation to gain further insights into biological swarms. After doing more research, I decided that I wanted to work with ants. However, I soon realized that it would be more valuable to work with a partner, because we could align our research interests and, by combining our ideas, add a layer of creativity to the project. Our research revealed that ant behavior is very dynamic and that there are many different elements of sophistication involved in their decision-making and behavioral patterns. These insights relate not only to swarming behavior, but also to biological understanding.

As outlined in the Gaps in Knowledge section of my research paper, there were a few paths our investigation could take. We could investigate the rigidity of the sugar and protein ant categories. Another option was to discern whether more pungent or strong-smelling food particles that appeal to the foraging ant would affect the ant's speed. Lastly, another interesting topic we came across was the correlation between the accuracy of the ant's movements and the strength of the scent of the pheromone trail.

Final Project Focus

Kennedy and I finally decided to focus our experiment on the relationship between the strength of a food's scent and the time it takes for an ant to find and reach it. We would design a robotic model to simulate the ant behavior and conduct the experiment. Upon deciding our focus, we immediately started working on its development and evaluating different approaches to achieve our goal.

Robot Development Timeline and Process

Initial Planning

There are two parts to this project: the ant and the food that attracts it. Deciding on what approach to take in developing the robot involved a lot of discussion with our professor, peers, and IMA fellows, as well as research to figure out which method best aligned with both our technical capabilities and the purpose of this project. In my initial proposal, I wanted to use Arduino as the basis of the ant robots. My initial model consisted of an Arduino-powered robot with wheels and DC motors for locomotion, fitted with an infrared receiver to detect the signals. This would have made the project unnecessarily challenging, because it had been a long time since I had programmed with Arduino.

We then figured out that it was possible to build the robot using the Microbit, and that the Kittenbot kit already had its own infrared receiver as well as motors and wheels. It made more sense to take this approach because we could consolidate the programming knowledge we had accumulated all semester and program the whole thing on the same platform. The food part was a pretty simple concept: we decided to use an infrared emitter as a beacon that attracts the ant robot, with the infrared signals representing the scent of the food. The more beacons we place, the stronger the signal, and we hypothesized that the ant robots would find the beacon and reach its location faster.

Thus, our final plan was to build our own robot using the Kittenbot. While the concept for the food beacon was simple, we did not have an initial plan for how to build a free-standing structure fitted with an infrared emitter, so we decided to figure that out as we went along.

Initial Prototyping and Grasping MakeCode

As soon as we started prototyping, we realized that we did not need the fully assembled Kittenbot for the purpose of the project. We also felt that the Kittenbot chassis was too heavy and clunky, and at this phase of development we really only needed the Microbit connected to the motors. So we decided to build our own chassis out of cardboard, which also gave us flexibility in designing the robot, allowing us to create slots and make custom modifications specific to the purpose of our project. We chose cardboard instead of 3D printing a chassis or laser cutting plastic or wood because it was cheaper and made prototyping faster. If we made a mistake or created too many holes, we could easily build a new chassis that fit all our components better, and we could continue doing so multiple times.

Upon fitting the Microbit and motors to the cardboard chassis, we first explored the locomotion of our robot. Kennedy and I were not well versed in MakeCode, so this was an easy starting point for getting things working. We flashed a simple program to figure out how the motors worked.

As shown in the video, we immediately ran into our first problem: the robot would only spin around. We had to make an adjustment to the code. We had programmed both motors, M1A and M2B, with the value 150. We realized that in order for the robot to move straight, we had to program M2B with the value -150.
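In MakeCode JavaScript the fix looked roughly like this, using the MotorRunDual call from the Kittenbot Robotbit extension; the exact speed values depend on the build.

// Opposite signs because the two motors are mounted facing opposite directions.
MotorRunDual(
    Motors.M1A, 150,
    Motors.M2B, -150
)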

We tinkered around further, and I tried to familiarize myself with the mechanisms of MakeCode by looking at other sample projects as well. Now that we had figured out the locomotion and got our robot moving, we had to figure out the sensors, which is the core of our project.

Implementing Infrared Communication

This part of the project involved mounting an infrared receiver on our prototype, connecting it to the Microbit, and making it detect signals from an infrared-emitting beacon. Since I had used the infrared sensors from the Kittenbot kit for my midterm, I thought we could mount one on the front of our robot and use the Robotbit extension. For the beacon, we created a simple circuit with an Arduino infrared emitter and downloaded the code for it online. However, when we tested it, we weren't getting any results: the infrared receiver could not detect any signals from the emitter, even though we placed them very close to each other.

We sought help from Rudi and Tristan, and discovered that the Robotbit infrared receiver was also an infrared emitter; it would shoot infrared signals as we turned it on and wash out any of the signals coming from the Arduino emitter.

So we performed a little robot surgery and cut out the white bulbs on the Robotbit infrared sensor, because they were the emitters. We were then left with only the infrared receivers, so Kennedy cut slots in the cardboard and mounted them. The ant portion of the infrared communication was done; now we needed to figure out the beacon.

We had saved the infrared emitting bulbs that we cut out, so Tristan and Rudi suggested that we use them as our infrared-emitting beacons. We tested this by using an infrared sensor that we had not cut, to see if any signals were actually detected.

Fortunately, we got clear values right away, so it was definitely the right direction to take for the project. To construct our beacon, we soldered each bulb to two wires, which we plugged into a solderless breadboard and connected to Arduino as a power supply. 

We then tested it to see what kind of values we were getting, so that we could establish a threshold when we started coding the entire robotic simulation that would be the setup for our experiment (a small calibration sketch is shown below). With that, we had finally finished establishing the infrared communication.
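A calibration sketch of this kind, assuming the salvaged infrared receiver is wired to pin P0; the pin and the timing are placeholders rather than our actual wiring.

// Print the raw infrared reading every 200 ms so a detection threshold can be chosen.
basic.forever(function () {
    let ir = pins.analogReadPin(AnalogPin.P0)
    serial.writeLine("" + ir)   // watch the values in the MakeCode serial console
    basic.pause(200)
})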

Implementing Obstacle Avoidance

The next part was to implement the obstacle avoidance element. This element is necessary for the robot to easily move around and find the beacon without running into walls or other robots within the area, since we built multiple ant robots to roam in the arena together, replicating an ant swarm.

For this part, we opted to use an ultrasonic sensor to detect obstacles, and we figured we could use the Robotbit extension as well. We used an HC-SR04 sensor and initially attached it at the bottom of our robot, under the infrared receiver.

I then wrote a very simple obstacle-avoidance program in MakeCode to test its function. However, the sensor wouldn't work. I tried modifying the code, looked through online forums for advice, and even tried switching out the sensors, to no avail. I was very confused and sought help from Rudi. As it turns out, I hadn't realized that the sensor we used is different from the one in the Kittenbot kit, so it couldn't be programmed with the Robotbit extension in MakeCode. I looked for an ultrasonic extension for the HC-SR04 sensor, and after finding it I was very quickly able to put together a new program, which ran smoothly. With the obstacle-avoidance feature working, I had all the separate pieces of the robot ready and could finally start writing the final program to integrate all the components and create the setup for our experiment.
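A simplified version of that obstacle-avoidance test might look like the sketch below, using a sonar-style MakeCode extension for HC-SR04 sensors (the extension we actually used may differ); the pins, the 10 cm threshold, and the motor speeds are placeholders rather than our final values.

basic.forever(function () {
    // Distance to the nearest obstacle in centimetres (0 means no echo received).
    let distance = sonar.ping(DigitalPin.P1, DigitalPin.P2, PingUnit.Centimeters)
    if (distance > 0 && distance < 10) {
        // Obstacle ahead: spin in place for a moment to turn away from it.
        MotorRunDual(Motors.M1A, 150, Motors.M2B, 150)
    } else {
        // Path clear: drive straight (opposite signs, as described earlier).
        MotorRunDual(Motors.M1A, 150, Motors.M2B, -150)
    }
    basic.pause(100)
})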

Final Program

To integrate all the features into one dynamic program, I found the Functions blocks very helpful; it would have been tedious and disorderly to create one long program inside the Forever block. I found this part of the project to be the most challenging, as we had no blueprint for how to put the pieces together, and there were many different ways to design the code. In my early iterations I created separate functions for movement, infrared detection, and obstacle avoidance, and then tried to integrate everything in a "do" function. This was ineffective: the robot would only do the first part of the sequence, moving forward, and skipped the crucial steps such as infrared detection, obstacle avoidance, and finding its way to the beacon. This phase went through a number of iterations of the program, and with each iteration, as I tried to add more elements to improve the overall function, my code got more chaotic without much improvement. I finally abandoned the messy and complicated structure and decided to take a different approach to simplify the whole program, which led to the second phase of coding.

The main goal during this phase was to keep things simple without sacrificing function, so with each iteration I focused on removing redundancies and condensing the code. I first got rid of a lot of unnecessary variables which optimized the infrared detection and obstacle avoidance features. Next, I eliminated the lengthy “do” function, and implemented the obstacle avoidance as a part of the movement function.

I also added an LED symbol corresponding to each task in the program, so I could see the states changing and know what the robot was doing as it moved. Finally, I simplified the actions the robot performs when it reaches the beacon. In the first phase, I had tried to program it to spin, play a melody, and then stop. That complicated things and the robot could not do it reliably, so in the second phase I programmed the robots to simply stop, play a short melody, and show an LED sequence of a beating heart when they reached the beacon. A sketch of this simplified structure is shown below.
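A condensed sketch of what this second-phase structure might look like; the pins, thresholds, melody, and icons here are illustrative placeholders rather than our exact final values (the real final code is linked at the end of this section).

let arrived = false
let IR_THRESHOLD = 600   // placeholder value; the real one was found by testing

// Infrared check: above the threshold means the robot has reached the food beacon.
function checkBeacon() {
    if (!arrived && pins.analogReadPin(AnalogPin.P0) > IR_THRESHOLD) {
        arrived = true
        // Just reached the beacon: stop and play a short melody once.
        MotorStopAll()
        music.beginMelody(music.builtInMelody(Melodies.PowerUp), MelodyOptions.Once)
    }
}

// Movement with obstacle avoidance folded in, plus an LED symbol per state.
function moveAndAvoid() {
    let distance = sonar.ping(DigitalPin.P1, DigitalPin.P2, PingUnit.Centimeters)
    if (distance > 0 && distance < 10) {
        basic.showArrow(ArrowNames.East)                  // state: turning away
        MotorRunDual(Motors.M1A, 150, Motors.M2B, 150)
    } else {
        basic.showArrow(ArrowNames.North)                 // state: moving forward
        MotorRunDual(Motors.M1A, 150, Motors.M2B, -150)
    }
}

basic.forever(function () {
    if (arrived) {
        // Beating-heart animation while parked at the beacon.
        basic.showIcon(IconNames.Heart)
        basic.pause(300)
        basic.showIcon(IconNames.SmallHeart)
        basic.pause(300)
    } else {
        moveAndAvoid()
        checkBeacon()
        basic.pause(100)
    }
})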

Our robot finally started showing signs of working: it was moving around while avoiding obstacles. However, it wasn't quite perfect. It did not appear to detect the infrared signals well or respond to them accordingly. In addition, while the obstacle avoidance worked fairly well, having the ultrasonic sensor at the bottom of the robot created drag, which hindered effective movement.

This led to the final phase of programming, which mostly involved simple debugging and optimizing the features. First, we moved and rearranged the wiring for the ultrasonic sensor, placing it behind the infrared receiver but just above it to keep its path clear. Next, we had to tinker with the value thresholds for both the ultrasonic and infrared sensors, aiming to get the robot to function as effectively as possible. Although the most challenging part was over, this part was definitely the most painstaking and tedious, because it was very repetitive: we had to change a value in the code and then test the robot, over and over again.

It took several iterations to finally get the right values to serve as the threshold for determining whether the robot had arrived at the food beacon. It took more iterations to also establish the values for the ultrasonic sensor. In total, from the start of writing the final code to finally getting everything working right, we wrote 16 different iterations of our program.

Experiment

Conducting the experiment was very easy because we had the whole setup working perfectly, so all we needed to do was add the right number of beacons, position and turn on the robots, and then record the data.

Afterwards, I used the chi-squared test to analyze the significance of our data, which determined that there is no significant relationship between the strength of the infrared signal and the time it takes for all the ants to arrive at the food-beacon zone. The full details of this experiment can be found in my final research paper.

Final Code

The final program was written in Microsoft MakeCode (JavaScript) and is available on GitHub:

https://github.com/bishchand/BIRSFinal/blob/master/BIRSFinal.js

Herding Commands Adapted to a Microbit – Andres Malaga

Abstract:

The aim of this experiment is to translate the commands given to herding dogs into instructions given to a robot through a controller device in order to start replicating herding behavior. An example of a herding robot already in development and a mathematical model that describes the principles of herding behavior are discussed as examples of the implementation of the herding commands. The robot used in this experiment was only able to receive and follow commands related to its movement, but further improvements to this have been discussed that could lead to the development of fully autonomous robots that are capable of carrying out the functions of a herding dog.

Bio-Inspiration:

The behavior I am going to investigate is herding behavior, such as that observed in herding dogs. A herding dog (or sheepdog) is a dog that has been trained to keep a group of sheep or cattle together by following commands. The efficiency of a sheepdog relies on its ability to group the animals and move them forward. When grouping sheep, a sheepdog will attempt to move the sheep close to each other, closing the gaps between them, and then lead them from the back of the group. The first step is done so that the sheep (or cattle) move in a uniform manner, which facilitates the second step: leading the group from behind, with the dog staying behind the group and prompting it to move forward. To close gaps between sheep or cattle, the dog will usually go towards the stray animal and bark at it, prompting it to return to the group; the sheep will always move away from the dog. Herding dogs are trained to respond to commands from their owner, which usually include commands to turn left or right, go straight, get closer to or farther from the herd, group the animals, stop, and bark. These commands are often spoken by the owner or given as a whistle, where different pitches or patterns symbolize different commands. In this experiment, I will try to make a kittenbot follow instructions, similar to herding commands, given from a microbit, as a first step towards developing autonomous robots that can carry out herding efficiently, such as the Swagbot.

Herding Robots:

Herding robots are already in use. An example is Swagbot, a robot developed in Australia to group and lead cows around a pasture. The robot attempts to keep the group together and lead it around, following the principles of herding behavior. Although it does not follow external commands the way a herding dog would, the actions it needs to perform are already programmed so that it follows those principles. The robot is autonomous and can detect the herd and members of the herd straying from it, prompting it to close the gap and maintain cohesion while leading the group. Its behavior resembles a mathematical model that explains herding behavior, which is essentially derived from how herding dogs follow the commands given to them.

Mathematical model for herding behavior:

The principles of herding behavior have been mathematically modeled and found to be applicable in different fields, since herding behavior can be seen as a way of achieving and maintaining cohesion within a group of elements. There are two principles: the herder (in this case the sheepdog) has to keep the group together, and the herder has to drive the group forward. The sheepdog keeps the group together by closing gaps left by members of the group that have gone astray; it does this by getting in front of the stray animal, which will always move away from the dog and towards the group. The herder moves the group forward by leading it from behind, as the sheep will always move away from the dog. The dog cycles between keeping the group together and moving it forward. This mathematical model has been shown to resemble the behavior of real herding dogs and could be applied in fields such as human crowd control, cleaning of debris, and control of swarm robots. Although my experiment will not center on or implement this mathematical model, a herding robot could be programmed to follow it and use computer vision with an ultrasonic sensor to carry out herding on its own, as the model already takes into account the commands given to a herding dog.

Design and Purpose of the experiment:

Because a robot that replicates herding behavior already exists, and so does a mathematical model that explains this behavior, my experiment will center on the commands herding dogs receive and attempt to replicate them with a kittenbot, in order to show that, in a larger context, a remote-controlled robot could carry out herding as efficiently as or more efficiently than a herding dog. This would be the first step towards creating an autonomous robot that can carry out herding. While the experiment will be carried out only with microbits (which use radio signals), it should work with other kinds of communication, such as Bluetooth or voice input, provided the necessary hardware and software are available. The experiment will also focus only on commands related to locomotion; sensors could be added later to allow the robot to sense distance on its own and group the herd on its own. A LearningBot that turns when it detects things in front of its ultrasonic sensor will be used as the subject to be 'herded' by the kittenbot.

Materials and software needed for this experiment are:

  • Two Microbit microcontrollers
  • One kittenbot (or any robot that supports the microbit) fitted with two wheels, each attached to a different motor.
  • Microsoft MakeCode (to write the code for the microbit) or the Mu editor (to code the microbit in Python). MakeCode was used for this experiment.
  • One LearningBot, fitted with two servo motors attached to wheels and an ultrasonic sensor. The robot was pre-programmed via Arduino to turn right when it detects objects in front of it.

Procedure:

  • Both microbits were programmed using MakeCode to send and receive numbers through their built-in radio transmitters.
  • A different number was programmed to be sent depending on the gesture performed with or on the microbit, such as a button or buttons being pressed or the microbit being physically tilted sideways.
  • The microbits were programmed to receive the numbers and use a series of "if" and "else if" statements to determine which command the number corresponds to. The same code was flashed onto both microbits.
  • One microbit was left connected to the computer (which could be replaced with any other power source, such as a power bank, a wall plug or batteries), and the other one was plugged into the kittenbot’s microbit slot.
  • The kittenbot was then turned on and could be controlled via the microbit connected to the computer.
  • If a LearningBot is used, it will turn right when the kittenbot passes in front of it.

Code:

The microbits were programmed with Microsoft MakeCode, using its drag-and-drop block editor, which generates a JavaScript version of the code in the background. The JavaScript version of the code is shown below:

// Sender side: each button press or tilt gesture broadcasts a command number.
// Receiver side: the number is mapped to a letter on the display and a motor action.
// 0 = forward (F), 1 = backward (B), 2 = stop (S), 3 = turn left (L), 4 = turn right (R).
let State = 0
onButtonPressed(Button.A, function () {
    sendNumber(0)
})
onReceivedNumber(function (receivedNumber) {
    if (receivedNumber == 0) {
        MotorStopAll()
        showString("F")
        MotorRunDual(Motors.M1A, 150, Motors.M2B, 150)
    } else if (receivedNumber == 1) {
        MotorStopAll()
        showString("B")
        MotorRunDual(Motors.M1A, -80, Motors.M2B, -80)
    } else if (receivedNumber == 2) {
        MotorStopAll()
        showString("S")
    } else if (receivedNumber == 3) {
        MotorStopAll()
        showString("L")
        MotorRunDual(Motors.M1A, 150, Motors.M2B, 90)
    } else if (receivedNumber == 4) {
        MotorStopAll()
        showString("R")
        MotorRunDual(Motors.M1A, 90, Motors.M2B, 150)
    }
})
onGesture(Gesture.TiltLeft, function () {
    sendNumber(3)
})
onGesture(Gesture.TiltRight, function () {
    sendNumber(4)
})
onButtonPressed(Button.AB, function () {
    sendNumber(2)
})
onButtonPressed(Button.B, function () {
    sendNumber(1)
})
// Both microbits join radio group 1. The State variable and the empty forever
// loop are unused leftovers from the block editor.
State = 0
setGroup(1)
forever(function () {
    if (true) {
    } else {
    }
})


Carrying out the experiment:

The kittenbot was supposed to mimic a sheepdog carrying out herding behavior. When given commands from the other microbit, the kittenbot displayed the first letter of the action and then performed it: it displayed an F when told to go forward, a B when told to go backwards, an R when it had to turn right, an L when it had to turn left, and an S when it had to stop. The idea behind having the kittenbot display the command before executing it was to show that it acknowledged the command first, just as a sheepdog does when given a command. Because there were no sensors attached to the kittenbot, the commands given to it could only relate to its movement, with little room for commands related to herding itself. This made the kittenbot resemble a remote-controlled car more than a sheepdog, but it demonstrated that the robot was able to follow commands, setting the base for more complex features to be added, such as sensors and other methods of control.

Conclusions and possible improvements:

The robot acted with little delay after the action on the other microbit was performed, and while it was not equipped with any sensors to detect and group a group of things, it served as a first step towards creating a robot that carries out herding behavior, similar to Swagbot. In an ideal scenario the robot would be much bigger and mounted with sensors, such as an infrared or ultrasonic sensor, which would allow more commands to be given to it, such as keeping within a certain distance of the group, and would also allow it to detect a stray sheep or cow and close the gap, letting it behave more like a real sheepdog. A microphone could be added so that the robot detects voice commands or whistles of different pitches and acts on them, giving the user more control over the robot. Other types of control could also be experimented with; an analog controller (a joystick), for example, gives finer control over the robot's movement. In conclusion, with further improvements such as a different control interface and sensors that give it more autonomy and increase the number of possible commands, a robot will be able to apply the principles of herding behavior. Once this is implemented, a fully autonomous herding robot could be developed that has the herding commands written into its code and applies them according to the mathematical model of herding behavior.

Works Cited

Press Association. Sheepdogs Could Be Replaced by Robots After Scientists Crack Simple Process. 27 August 2014. <https://www.theguardian.com/uk-news/2014/aug/27/sheepdogs-replaced-by-robots>.

Klein, Alice. Cattle-herding Robot Swagbot Makes Debut on Australian Farms. 12 July 2016. <https://www.newscientist.com/article/2097004-cattle-herding-robot-swagbot-makes-debut-on-australian-farms/>.

Strombom, Daniel and Andrew King. Why We Programmed a Robot to Act Like a Sheepdog. 21 May 2018. <http://theconversation.com/why-we-programmed-a-robot-to-act-like-a-sheepdog-96961>.