Lab #2: Robot Brain

By: Gabrielle Branche

Getting Ready:

My partner Kevin and I tested the micro:bit after downloading the code and were pleasantly surprised at how responsive it was. We used a multimeter to measure the voltage output, which came to 3.2 V ± 0.05 V.

Programming the brain and using its sensors: 

These sections are merged because, while working in the simulator, we unintentionally programmed the brain so that it worked on the hardware as well, both reading inputs and producing outputs accordingly. Further programming was done while trying to debug our platypus code.

We decided to program the micro:bit to turn on certain LEDs when button A is pressed and to turn those same LEDs off when button B is pressed. Additionally, we programmed it to play music when shaken. We then tinkered with the code, allowing shake to manipulate the tempo of the music. At first this was difficult because we had not set up the wiring for the headphones correctly, but we finally got it to work.

Block code for first tinkering
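
In MakeCode's JavaScript view, behavior along these lines might look like the sketch below; the LED coordinates, note, and tempo step are illustrative rather than our exact code:

// Light a small pattern when button A is pressed
input.onButtonPressed(Button.A, function () {
    led.plot(1, 2)
    led.plot(2, 2)
    led.plot(3, 2)
})
// Turn the same LEDs off when button B is pressed
input.onButtonPressed(Button.B, function () {
    led.unplot(1, 2)
    led.unplot(2, 2)
    led.unplot(3, 2)
})
// On shake, speed up the tempo and play a note
input.onGesture(Gesture.Shake, function () {
    music.changeTempoBy(20)
    music.playTone(262, music.beat(BeatFraction.Whole))
})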

Limitations:

Since we mainly just fiddled with the different blocks, we realized afterwards that we had given ourselves unnecessary work by using less convenient code, such as plotting individual LEDs instead of using the built-in icon blocks.

Additionally, due to the micro:bit's small 5×5 grid, it is difficult to show full words, and the user has to wait until the entire message scrolls across the screen to read it.

Finally, the blocks provided offer limited variety, which makes it hard for a beginner coder to explore all the possible uses of the bot. However, the program does provide a space to code in JavaScript, which allows the micro:bit to be used more flexibly.

Even though this is a bit limiting, the micro:bit has the potential to be very handy, as other components can be attached to it, such as the headphones we used to play music.

Basic Animal Behavior System

We decided to explore the behavior of a platypus. While the exercise did not call for us to look at any particular animal, we thought that by narrowing down to a specific animal we could truly explore how it operates holistically. The platypus is an animal native to Australia. It is nocturnal and is known for its acute sight and sense of hearing. As such, we decided to design a program that could respond to these senses.

Flowchart showing animal behavior breakdown

By using a photocell, we could execute the responses only within a specific light-intensity range. Since the platypus is nocturnal, we decided that above a certain light level it would cease all activity.

Below that threshold, the platypus could respond to sound using a sound sensor, simulating its acute hearing. If a sudden loud sound was made, the eyes of the platypus would glow and it would shake.

Finally, using a proximity sensor, the platypus could be programmed to move away from obstacles within a specific distance. This mimics its sight, which allows it to maintain a safe distance from predators.

All these actions stem from one of the basic necessities of all living things, irritability: responding to internal and external stimuli. All animals respond to stimuli differently, but all respond to stimuli nonetheless.

Using the micro:bit as the brain of the platypus, the aforementioned components can be set up to execute its animal behavior.
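
As a sketch of how these components might tie together in MakeCode JavaScript, assuming analog sensors wired to pins P0–P2 (the pin assignments and thresholds below are hypothetical):

// Hypothetical wiring: photocell on P0, sound sensor on P1,
// proximity sensor on P2. All thresholds are illustrative.
basic.forever(function () {
    // Nocturnal: above the light threshold, cease all activity
    if (pins.analogReadPin(AnalogPin.P0) > 500) {
        return
    }
    // Acute hearing: react to sudden loud sounds
    if (pins.analogReadPin(AnalogPin.P1) > 600) {
        basic.showIcon(IconNames.Surprised)
    }
    // Sight: flee obstacles that come within a set distance
    if (pins.analogReadPin(AnalogPin.P2) > 700) {
        basic.showArrow(ArrowNames.West)
    }
})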

Connect it

Finally, we created a simplified version of the code for the micro:bit that maintained the core of stimulus response. Using the built-in light-intensity sensor, we could set a threshold for light. Then, if button B was pressed, the platypus would make a smiley face and sing, responding to 'sound'. When it was shaken, it would blink an arrow, representing running away from a predator.

At first, before using the light-intensity sensor, the code worked well. However, there was great trouble when working with the light. The values for light intensity changed sporadically, and the code seemed not to respond to any conditional statements that hinged on a specific light intensity. We tried to simplify the code such that above an intensity of 20 the bot would frown, and below that intensity it would display a heart.

// Note: this code runs once at startup, so the light level is sampled only once
let light = 0
basic.showNumber(input.lightLevel())
light = input.lightLevel()
if (light < 20) {
    basic.showIcon(IconNames.Heart)
}
if (light > 20) {
    basic.showIcon(IconNames.Sad)
}

However, no matter how we changed the light intensity using flashlights, the LEDs would not change from whatever was displayed first. We hope that with a more accurate photocell and more debugging, such as that done below, this experiment may work.

Block code for debugging light intensity

basic.showNumber(input.lightLevel())
let light = input.lightLevel()
if (light < 20) {
    basic.showIcon(IconNames.Heart)
}
if (light > 20) {
    basic.showNumber(input.lightLevel())
}
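
In hindsight, one likely culprit is that both versions sample the light level only once, at startup. A minimal fix, as a sketch, would be to re-read the sensor inside a forever loop so the display can track changes in lighting (threshold of 20 kept from above):

// Re-sample the light level continuously instead of once at startup
basic.forever(function () {
    let light = input.lightLevel()
    if (light < 20) {
        basic.showIcon(IconNames.Heart)
    } else {
        basic.showIcon(IconNames.Sad)
    }
})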

Reflection:

This was a very useful lab because it made me realize that while the task of building a robot seemed mountainous and challenging, it is in fact a simple matter of breaking the task down into individual responses to stimuli. As living beings, we act through reacting; by considering how one would behave under certain conditions, it becomes just a matter of having the correct syntax for creating the response. While this is not meant to invalidate the immense complexities found in almost all multicellular organisms, it does make starting the process of building robots significantly less daunting.

BIRS Lab 2: Robot Brain

Step 1: Getting Started

This was our first time working with the micro:bit. To start off, we went on the online Microsoft simulator <www.makecode.microbit.org> and downloaded the original source code. It was pretty easy to figure out, and we found the first easter egg, a game of Snake, pretty quickly.

Step 2: Simple Sequential and Looping Displays

After familiarizing ourselves with the basics of the micro:bit, we used the simulator to program a simple sequence. When we shook it to start, it displayed a message that said “Hello!” and then a flashing pattern of a box getting smaller. We came up with this sequence by playing around with the Basic and Led sections. It was helpful to have the virtual micro:bit display because we could test out the different patterns and shapes before finalizing the code and uploading it onto our actual micro:bit.
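
A sketch of such a sequence in MakeCode JavaScript (the exact patterns are illustrative):

// On shake: scroll a greeting, then flash a box shrinking inward
input.onGesture(Gesture.Shake, function () {
    basic.showString("Hello!")
    basic.showLeds(`
        # # # # #
        # . . . #
        # . . . #
        # . . . #
        # # # # #
        `)
    basic.showLeds(`
        . . . . .
        . # # # .
        . # . # .
        . # # # .
        . . . . .
        `)
    basic.showLeds(`
        . . . . .
        . . . . .
        . . # . .
        . . . . .
        . . . . .
        `)
    basic.clearScreen()
})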

Step 3: Programming the Brain

This part was really fun because we were able to get creative with the loops and sequences we wanted to put together. We decided to have a sequence of icons display when button A is pressed, and numbers displayed in a loop when button B is pressed. On the first try we failed: button A worked, but when button B was pressed it only displayed a 0. It turned out we had used the wrong kind of loop.

We replaced the while loop with a for loop, and then it worked perfectly. When button A was pressed, it displayed a series of icons (heart, smiley face, person) and then stopped after one iteration. When button B was pressed, it displayed a loop of the numbers 1 and 2, one after the other.
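
A sketch of what the corrected version might look like (the icons match ours; the repeat count for button B is illustrative):

// Button A: one pass through a series of icons
input.onButtonPressed(Button.A, function () {
    basic.showIcon(IconNames.Heart)
    basic.showIcon(IconNames.Happy)
    basic.showIcon(IconNames.StickFigure)
})
// Button B: a for loop alternating the numbers 1 and 2
input.onButtonPressed(Button.B, function () {
    for (let i = 0; i < 10; i++) {
        basic.showNumber(1)
        basic.showNumber(2)
    }
})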

Step 4: Using the Sensors

This was a little tricky because we weren’t sure where all the sensors were on the micro:bit, but we decided to use brightness as our independent variable. When the surroundings were bright, the micro:bit would play a sound, and it would stop when it was dark. This was harder to pull off because we couldn’t control the brightness in the classroom, but in the end the code worked.
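
A minimal sketch of this light-to-sound behavior (the threshold of 100, out of 255, is illustrative):

// Play notes while the surroundings are bright; stay quiet when dark
basic.forever(function () {
    if (input.lightLevel() > 100) {
        music.playTone(262, music.beat(BeatFraction.Quarter))
    } else {
        basic.pause(100)
    }
})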

Step 5: Creating A Basic Animal Behavior System

Our initial idea was to create a game mimicking the behaviors of chickens. One of the LED dots would represent a chicken and move along the grid, one would be a fox chasing it, and one would represent corn which the chicken would move towards to eat. However, this was difficult to achieve because the LED dots only light up in one color, so it would be confusing to know which dot represents what. So, we changed our idea to fish behavior, inspired by the scene in Finding Nemo where the little girl shakes Nemo in a plastic bag. Basically, at the start, a fish appears and a message saying “Hello I’m Nemo” is displayed. Then, if one presses button A, it feeds the fish and a heart appears accompanied by a positive sound. However, if one shakes the micro:bit, a sad face displays accompanied by a negative sound, because one should not shake fish.
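
A sketch of this fish behavior, assuming a hand-drawn LED pattern for Nemo and illustrative tone choices:

// On start: draw a fish and introduce it
basic.showLeds(`
    . . . . .
    . # # . #
    # # # # #
    . # # . #
    . . . . .
    `)
basic.showString("Hello I'm Nemo")
// Button A feeds the fish: heart plus a positive (high) tone
input.onButtonPressed(Button.A, function () {
    basic.showIcon(IconNames.Heart)
    music.playTone(523, music.beat(BeatFraction.Half))
})
// Shaking upsets the fish: sad face plus a negative (low) tone
input.onGesture(Gesture.Shake, function () {
    basic.showIcon(IconNames.Sad)
    music.playTone(131, music.beat(BeatFraction.Half))
})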

Reflection #4 – Embodied Cognitive Science

Synopsis:

This chapter of Vito Trianni’s Evolutionary Swarm Robotics (part of the Studies in Computational Intelligence series edited by Janusz Kacprzyk) deals with Embodied Cognitive Science. It goes through a history of Cognitive Science and relates it to theories such as Connectionism and Functionalism. These ideas center greatly on determining whether machines can be intelligent and defining the parameters by which a machine can be considered intelligent.

Useful Terms and my interpretation of them:

  • Artificial Intelligence:
    • the ability of machines to mimic human reasoning
  • Cybernetics:
    • Dealing with control theory and statistical information theory
    • Forefather of Artificial Intelligence
    • Modeling agent with the environment
    • sense think act cycle (act react system)
  • Behaviourism
    • responds to a stimulus
  • Practitioners of AI
    • Value internal mental processing rather than direct responses to stimuli
  • Unity
    • A system with boundaries that encompass a number of elementary components
    • self-organizing robotic systems
  • Connectionism
    • interconnected networks of simple units
    • symbolic, spatially structured representation
  • Functionalism
    • Symbolic syntactically structured representation
  • Subsumption Architecture
    • used in behavior-based robotics
  • Situatedness
    • being in the world
  • Embodiment
    • acting in the world

Reflection:

This reading was quite technical and I found it somewhat difficult to digest. However, there were some points that stuck out to me. Firstly, I found it interesting that the claim was made that machines can never truly be intelligent because they will always be allopoietic, since they can never be living organisms (pg 19).

This interests me because I am very intrigued by the question of whether robots need to be an artificial replica of nature to be considered successful. This seems to be the general consensus. The topic is mildly addressed in the article and made me think back to the children’s interpretation of the robotic fish. See the extract below for the article’s take on the question:

Not in looks, but in action, the model must resemble an animal. Therefore, it must have these or some measure of these attributes: exploration, curiosity, free-will in the sense of unpredictability, goal-seeking, self-regulation, avoidance of dilemmas, foresight, memory, learning, forgetting, association of ideas, form recognition, and the elements of social accommodation. Such is life.

Grey Walter, 1953, pp. 120-121

Two other things also intrigued me. The first was the Chinese Room thought experiment. By its logic, without intention there can be no intelligence, and artificial intelligence is then a paradox in itself, because while a machine can be responsive it can never achieve innate intention. Or can it? How is intention defined?

The second was a valid point the article raises: we do not truly understand even simple natural phenomena, so how can we jump to complex cognitive thought processes? The start of robotics should be building machines that can interact with the real world.