Final Paper: Simulating Filial Imprinting With RobotBit

As documented earlier, Anand and I managed to complete the benchmarks we set for ourselves for the final project. The robots were fine-tuned so that each one kept a set distance from the preceding robot in the line, and the movement was extremely fluid. I am quite satisfied with how the end products turned out, though I wish we had had time to create a longer ‘conga line’ of RobotBits. Nevertheless, I really enjoyed the different projects we created in class, and I hope to use more of what I learned in future endeavors!

Link to the Final Paper:

https://docs.google.com/document/d/1uyzA88VI6YVixFFd-j42QGfMkUiLtyTW35_XHtpDS_M/edit?usp=sharing

Final Project Proposal

For my final, I will be working with Anand to fine-tune and expand upon my midterm project. As you may recall, my midterm was centered on the method of imprinting found in animals, namely birds. Imprinting usually occurs immediately after birth; organisms such as ducklings have been observed imprinting on the first object they see, whether it be shoes or other animals such as dogs. One of the most noticeable behaviors that result from imprinting is that the animal will follow the organism it has imprinted on, as demonstrated by this video:

As you can see, the duckling has imprinted on the dog and will essentially follow it everywhere. Once the imprint has solidified, it is extremely difficult to break. Here is another demonstration of ducklings imprinting on their mother:

So with these examples in mind, we wanted to emulate the most basic imprinting behavior by having multiple bots follow a ‘mother’ bot, matching both its speed and its changes in direction. To do so, we will continue to use the dual-ultrasound technique from my midterm, where each motor is paired with its own ultrasonic sensor. This allows for smooth, quick turning, as well as accurate bot-to-bot sensing.

Dual-ultrasound:
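To make that pairing concrete, here is a minimal sketch of the steering loop, with each sensor driving the motor on its own side. The helpers read_left_cm(), read_right_cm(), and set_motors() are hypothetical stand-ins for the actual RobotBit sensor and motor calls, which I am not reproducing here.

def read_left_cm():
    # placeholder: return the left ultrasonic sensor reading in cm
    return 20.0

def read_right_cm():
    # placeholder: return the right ultrasonic sensor reading in cm
    return 25.0

def set_motors(left, right):
    # placeholder: set left/right motor power (e.g. -100 to 100)
    print("motors:", left, right)

def follow_step(base_speed=40, gain=2.0):
    # if the leader sits more to the left, the left sensor reads a shorter
    # distance, so slowing the left wheel turns the bot toward the leader
    error = read_left_cm() - read_right_cm()
    set_motors(base_speed + gain * error, base_speed - gain * error)

# called repeatedly in the main loop, e.g. ~20 times per second
follow_step()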

Some improvements we intend to add include distance adjustment for the ‘duckling’ bots relative to the ‘mother’ bot, meaning that the babies will follow the mother while maintaining a set distance behind her, in order to avoid collisions (see the sketch after this paragraph). We will also fine-tune the reactive turning to produce smoother motion, and add more bots to form a true duckling line. Lastly, since my midterm ducklings had some trouble detecting the back of the mother bot, we also want to add some sort of ‘detection plate’ to the rear, so that each bot has a larger marker to detect.
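As a rough illustration of the distance adjustment, here is a minimal proportional controller on top of the same hypothetical wrappers as in the sketch above; the 15 cm setpoint and the gain are assumed values, not measured ones.

def hold_distance(target_cm=15.0, gain=3.0, max_speed=60):
    # average the two sensors to estimate the gap to the bot ahead
    distance = (read_left_cm() + read_right_cm()) / 2
    # positive error: too far behind, speed up; negative: too close, back off
    error = distance - target_cm
    speed = max(-max_speed, min(max_speed, gain * error))
    set_motors(speed, speed)

In practice this would be blended with the steering correction above, so that each duckling both holds its spot in the line and keeps facing the mother.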

The expected product of this project is a set of fully functional ducklings (complete with smooth motion and accurate detection), along with a mother duck to act as the line leader.

Swarm Project: End Result

As it turns out, our final project ran into a few issues when it came to putting all the code together and running final tests with serial communication. As a result, we did not manage to get the swarm running, but we do have the main code, as well as the specific functions for flipping the camera axis.

As stated, here was our original plan:

  1. Object is detected within the boundary
  2. The robot with the shortest distance to the object will move first to “attack” it
  3. That robot will navigate to the object and attempt to push it out of the boundary
  4. If the robot is unable to do so (most likely because the object is too heavy), the rest of the bots will navigate to the object as well, forming a chain behind the first bot to help push the object out
  5. Once the object is pushed out, the bots will resume their original reset positions

Of course, we did manage to code every step we intended to run, including an extra step of solving the camera axis inversion. Here are some problems we managed to solve during the process:

Calculating the position of a bot, as well as the angle it is facing (done by recognizing the four corners of the bot’s QR code):

import math

def calcBotData(data):
    # data = [id, corner1, corner2]; the two corners are diagonally across
    # from each other on the bot's QR code.
    # Returns [id, x, y, angle]: the center of the bot and the angle it is
    # facing. Can also be used for the enemy bot.
    id = data[0]
    corner1 = data[1]  # corner1 should be at the location of the head
    corner2 = data[2]  # corner2 should be at the location of the tail

    headx, heady = corner1[0], corner1[1]
    tailx, taily = corner2[0], corner2[1]

    # calculating the center of the bot
    x = (headx + tailx) / 2
    y = (heady + taily) / 2

    # vector from the center to the head gives the facing direction
    dx = headx - x
    dy = heady - y

    angle = math.atan2(dy, dx)
    if angle < 0:
        angle += 2 * math.pi  # normalize to 0 ~ 2*pi (note: add, not subtract)

    # THIS ANGLE IS IN RADIANS
    return [id, x, y, angle]

Sending ‘reinforcement bots’ in the case where one bot cannot push the object out:

import math
from time import sleep

def initiateCall(availablebots, enemy, dest):
    # availablebots: bots in the [id, x, y, angle] format from calcBotData
    # enemy: (x, y) of the detected object; dest: point to navigate toward
    # orient() (defined elsewhere) returns how far the bot must rotate
    reinforcementBot = availablebots[0]  # take the first bot in the list
    availablebots = availablebots[1:]

    # straight-line distance from the bot to the object
    distance = math.hypot(reinforcementBot[1] - enemy[0],
                          reinforcementBot[2] - enemy[1])

    timeA = 6.38 * orient(reinforcementBot, dest)  # turn time (calibrated)
    timeD = 60 * distance                          # drive time (calibrated)

    command = str(reinforcementBot[0]) + ",0," + str(timeA)  # 0 = turn
    # SEND TO SERIAL
    sleep(timeA + 2)
    command = str(reinforcementBot[0]) + ",1," + str(timeD)  # 1 = drive
    # SEND TO SERIAL
    sleep(timeD + 2)  # wait for the drive to finish before re-orienting

    timeA = 6.38 * orient(reinforcementBot, dest)
    command = str(reinforcementBot[0]) + ",0," + str(timeA)
    # SEND TO SERIAL

    command = str(reinforcementBot[0]) + ",1,0"  # drive time 0 = stop
    # SEND TO SERIAL

    return availablebots

Resetting the bot positions:

def reset(robots):
    # robots: current [id, x, y, angle] data for each bot; start holds each
    # bot's home position in the same format (a global set at startup)
    angles = []
    for i in range(3):  # for each bot
        # vector from the bot's current position back to its start position
        dx = start[i][1] - robots[i][1]
        dy = start[i][2] - robots[i][2]

        # atan2 replaces the original case analysis on the signs of dx and
        # dy, and handles dx = 0 without a special case
        angle = math.atan2(dy, dx)
        if angle < 0:
            angle += 2 * math.pi  # normalize to 0 ~ 2*pi
        angles.append([robots[i][0], angle])  # keep the bot id with its heading
    return angles

However, due to time constraints, as well as the ambitious nature of our project, we could only fit our testing time in at the end of the week, which resulted in some setbacks. Namely, we didn’t expect to have so much trouble sending signals to the bots over serial, and after several attempted test runs, we had to leave the project as-is to work on the final.

That being said, because the main issue was that the serial link wasn’t working, we actually have no way of knowing whether the rest of the code runs properly. Our hope for future improvements is to tackle the serial issue first, and then test the full swarm code if possible.

Final Project – 2Bot Conversation

As presented earlier, my final project builds upon text generation and online chatbots, where users find themselves interacting with a basic AI through speech. For my midterm, I created a bot that chats with the user, except the bot itself had a speech impediment, rendering the conversation essentially meaningless, since the user could not process what the bot was actually saying. For my final, I wanted to remove the speech-impediment aspect of the project and focus on generating comprehensible text. To push the project further, I also wanted to train two bots individually, and then place them in a conversation with each other to see the results.

Inspiration

As I have mentioned before, I’ve always been fascinated by chatbots and the idea of talking with a machine. Even if the computer doesn’t truly ‘understand’ the concepts behind words and sentences, I find it absolutely intriguing when a program is able to mimic human speech accurately. Here is a video (which I’ve posted quite a few times, so I’ll just mention it briefly) of an engaging conversation between two AIs:

First Stage 

So initially, I expected to utilize RASA to create my chatbots and have them converse naturally with each other. For some background, RASA is made up of RASA Core and RASA NLU: NLU interprets user messages (classifying intents and extracting entities), while Core manages the flow of the dialogue. However, after working with both frameworks for a while, I realized that they are extremely powerful for creating assistant AIs (especially for specific tasks, such as a weather bot or a restaurant-finding bot), but noticeably more complicated when attempting to create a bot that chats about a variety of subjects that may have no relation to each other. This is partly because building a dialogue model in RASA requires the programmer to organize utterances into intents; however, I don’t specifically want my chatbots to have an intent, or to have one bot solve another bot’s problems (like finding a nearby restaurant). Rather, I want them to merely chat with each other and build upon each other’s sentences. Therefore, I set RASA aside for the moment and searched for other tools that might better fit my project.

Second Stage

After searching endlessly for possible language-processing tools, I ended up utilizing Python’s NLTK, spaCy, and Polyglot libraries. This was essentially a gold mine, since the libraries took care of several issues I ran into with RASA. The NLTK documentation provides a lot of good resources for creating bidirectional LSTMs for language processing, along with documentation on implementing search/classification algorithms. For my purposes, though, the most useful capability was real-time training:

So basically, I was able to first train the bots on a corpus to give them a basic grounding in speech, and then converse with each bot by feeding it inputs. Each time I fed the bot a new input, it stored that phrase, along with the phrases immediately preceding and succeeding it. The bot therefore learned personalized phrases and speech patterns through actual conversation. It then ‘learned’ how to use these newly acquired words and sentences with the help of the spaCy library, which provides word vectors and semantic-similarity processing. In other words, each new phrase the bot receives is processed, and the output text is generated based on the content of the string. For example, if I fed it a list of the following:

-“How are you?”

-“I’m fine, how are you?”

-“I’m doing great!”

-“That’s good to hear!”

The bot would then be able to output similar results given a similar input. If I asked it “how are you?”, it would respond with “I’m fine, how are you?”, and so on. I also utilized the random module during response retrieval to add a bit of variety, so that the bots don’t keep repeating the same rehearsed conversations. After a lengthy training period, I was able to create several branches of phrases that build off of each other, simulating a real-life human conversation. A minimal sketch of this store-and-retrieve loop follows.
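To show the idea concretely, here is a minimal sketch of the retrieval loop described above, assuming spaCy’s medium English model (en_core_web_md, which ships with word vectors) is installed. The ChatBot class and its method names are my own illustration, not the exact code I used.

import random
import spacy

nlp = spacy.load("en_core_web_md")  # medium model: includes word vectors

class ChatBot:
    def __init__(self):
        self.pairs = []  # (statement Doc, response text)

    def train(self, conversation):
        # store each phrase together with the phrase that followed it
        for statement, response in zip(conversation, conversation[1:]):
            self.pairs.append((nlp(statement), response))

    def respond(self, text):
        doc = nlp(text)
        # score every stored statement by vector similarity to the input
        scored = [(doc.similarity(statement), response)
                  for statement, response in self.pairs]
        best = max(score for score, _ in scored)
        # choose randomly among near-best matches for a bit of variety
        options = [response for score, response in scored if score >= best - 0.05]
        return random.choice(options)

bot = ChatBot()
bot.train([
    "How are you?",
    "I'm fine, how are you?",
    "I'm doing great!",
    "That's good to hear!",
])
print(bot.respond("How are you today?"))  # usually "I'm fine, how are you?"

Putting two such bots into conversation is then just a loop that feeds each bot’s reply into the other’s respond().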

Training

For training, I first utilized a decently sized corpus of small back-and-forth conversations in English, with topics ranging from psychology and history to sports and humor. Though the corpus was by no means a huge dataset, the purpose of the initial training was to familiarize the bot with common English small talk. Once the bot could hold a normal conversation, I would feed it new input statements, which it would store in its database of phrases; talking with the bot is essentially the second half of training. An example of a corpus:

Of course, this is just one small section of one category; the usual set contains many more phrases. Another great thing about this setup is that you can also train the bot on a specific set of phrases, or reinforce certain responses. What I did was feed it a particular block of conversation that I wanted to change or reinforce, and run the training method a few times until the bot picked it up.
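In terms of the hypothetical sketch above, reinforcing a block is just repeated training: the duplicate pairs bias the retrieval step toward those responses whenever a similar input comes up.

# reinforce one specific exchange by training on it several times;
# the duplicated pairs make random.choice favor these responses
block = [
    "What's your favorite sport?",
    "I really enjoy watching basketball.",
]
for _ in range(5):
    bot.train(block)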

Some screenshots of my training process with individual chatbots:

Some more screenshots of putting the chatbots together into a conversation (early training stages):

Final screenshots of fully trained 2Bot conversation:

Final Thoughts

Working with two chatbots was definitely more complicated than I originally expected. With one chatbot, I had more control over the content of the conversation, since I was the other participant. With two bots, however, you never truly know what will happen, despite repeatedly training both of them on familiar datasets. Still, the project turned out the way I wanted, and I was actually quite shocked when the bots first conversed with each other during training. I definitely hope to work with a bigger dataset (such as the Ubuntu Dialogue Corpus), and to train the chatbots for a longer period of time to yield even better results.

Swarm Lab Proposal

Inspiration

For the swarm lab, our group will be attempting to recreate “group-defensive behavior”, where members of a pack actively fend off non-members from entering a defined territory. In this case, the bots will represent the group members, and the goal is to program them to push any foreign object out of a set square boundary as a swarm.

Process

We assigned specific roles to each member, as follows:

  • Bot Orientation and Angle/Boundary Distance calculation (Anand)
  • Real-Time Movement (Gabi)
  • Updating/Sending Data (Kevin)
  • Reset Bot Position (Diana)

Goal

Our hope is that the bots will react to the presence of a foreign object as follows (a rough sketch of the dispatch logic appears after the list):

  1. Object is detected within the boundary
  2. The robot with the shortest distance to the object will move first to “attack” it
  3. That robot will navigate to the object and attempt to push it out of the boundary
  4. If the robot is unable to do so (most likely because the object is too heavy), the rest of the bots will navigate to the object as well, forming a chain behind the first bot to help push the object out
  5. Once the object is pushed out, the bots will resume their original reset positions
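As a rough sketch of how these steps could fit together, here is hypothetical dispatch logic that sends the closest bot first and adds reinforcements while the object remains inside the boundary. The [id, x, y, angle] bot format matches the QR-corner calculation elsewhere in this blog; attack(), objectInsideBoundary(), and resetPositions() are placeholder stand-ins for the real movement, vision, and serial code.

import math

def distanceTo(bot, obj):
    # bot: [id, x, y, angle]; obj: (x, y) of the foreign object
    return math.hypot(bot[1] - obj[0], bot[2] - obj[1])

def attack(bot, obj):
    # placeholder: navigate the bot to the object and push (over serial)
    print("bot", bot[0], "pushing object at", obj)

def objectInsideBoundary(obj, size=100):
    # placeholder: the real version would re-detect the object by camera
    return 0 <= obj[0] <= size and 0 <= obj[1] <= size

def resetPositions(bots):
    # placeholder: step 5, drive every bot back to its start position
    print("all bots returning to reset positions")

def defend(bots, obj):
    # step 2: order the bots by distance so the closest attacks first
    ordered = sorted(bots, key=lambda b: distanceTo(b, obj))
    for bot in ordered:  # step 4: each extra bot joins the push chain
        attack(bot, obj)
        if not objectInsideBoundary(obj):  # success: object pushed out
            break
    resetPositions(bots)

bots = [[1, 10, 10, 0], [2, 50, 80, 0], [3, 90, 20, 0]]
defend(bots, (40, 40))  # step 1: an object was detected at (40, 40)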