Simple Machines + Python // Konrad Krawczyk

Since I had sizeable issues getting the KittenBot commands to run from the GUI-based code editor, I decided to post the successful version of the lab now that I have access to the robot via Python.

  1. Prior work & issues

The easiest part of this lab was really assembling the robot. It was extremely straightforward, picture-based, like building a Bionicle toy. The wiring was slightly different, but I adjusted accordingly.

Just like in the prior lab, I included the right pieces of code so that I could scroll a “Hello, World” on the display. The code worked with no issues.
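
The scrolling text itself only takes a couple of lines of MicroPython, something along these lines:

from microbit import *

display.scroll("Hello, World")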

The curve got steeper when I tried to include certain functionalities in the code editor. I tried to use the Robotbit library in the in-browser editor for a really long time, and every time it simply failed to load. It took me a while to figure out that this was a bug in the browser, and that I would have to use Safari.

Then, once I got access to the right library, I tried flashing the code onto the micro:bit. The graphical part (the scroll) worked, but none of the motors would run.

  2. Python

Once we figured out together how to write the code in Python and load it onto the Robotbit, the motors finally started to work. With help from Tristan, I used the signal from the ultrasonic sensor to create conditions for the robot to change direction.

With Python code, I was able to scroll a “hello world”, drive the DC motors, and turn the servo around:

# Write your code here 🙂
from microbit import *
import robotbit

display.show(Image.HAPPY)

while True:
    if robotbit.sonar(pin1) > 30:
        # Path is clear (sonar reading above 30): keep driving
        robotbit.motor(1, 100, 0)
        robotbit.motor(4, -100, 0)
    else:
        # Obstacle detected: stop both drive motors, turn the servo, and run motor 0 to change direction
        robotbit.motorstop(1)
        robotbit.motorstop(4)
        robotbit.servo(7, 90)
        robotbit.motor(0, 100, 500)

discl // konrad krawczyk

discl is a CV filter that finds and draws a single contour of your face, in the style of Disclosure album covers.

Created with face-api and p5.js.

repo: https://github.com/krawc/discl

process:

The process started with me trying to figure out how to use face-api. I wanted to test the library and see what could be accomplished with it. I looked through various features, including expression recognition and face recognition, and picked the one that interested me aesthetically: facial landmark detection, which identifies specific facial features and draws them on the canvas.

This was similar to what Disclosure often do on their album covers, namely drawing a rough sketch around the faces of the band members or collaborators.

The hardest part was figuring out how to load the model. I tried to create a wrapper around the examples provided in the face-api repo, but that did not work. As it turned out, though, it was possible to just download the separate models, save them as part of the project, and load them with a simple function in face-api. I found out about this thanks to Navya’s repo.

I wrote my own implementation in which a single path is drawn across all ~60 feature points. I also experimented with various filters in the p5 library.
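
The project itself is written in JavaScript (p5.js + face-api), but the core drawing idea, threading one continuous path through every landmark point, can be sketched in Python with matplotlib, using a made-up set of (x, y) points in place of the detector's output:

import math
import matplotlib.pyplot as plt

# Made-up landmark points (in the real project they come from face-api's
# facial landmark detector); here, just a rough oval of ~60 points
landmarks = [(50 + 30 * math.cos(t * 0.1), 60 + 40 * math.sin(t * 0.1)) for t in range(63)]

xs, ys = zip(*landmarks)
plt.plot(xs, ys, color="black", linewidth=2)  # one single path through all points
plt.gca().invert_yaxis()                      # image coordinates: y grows downward
plt.axis("off")
plt.show()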

results:


AttnGAN outline // Konrad Krawczyk

AttnGAN is an example of an attentive generative adversarial network, developed and compiled into a browser-testable version by Cris Valenzuela from NYU ITP. Its primary task is generating images out of text input. The images are based on a dataset of 14,000 photos.

The generative adversarial network in use tries to generate an image that matches a set of images marked as original. It is a pair in which one component (the generator) tries to produce a fake image, and another component (the discriminator network) classifies it as real or fake. The generative component, in turn, tries to pass the fake image off as real, and the aim of the cycle is to make fake images indistinguishable from the real ones.
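
This two-player loop can be illustrated with a toy Python sketch using PyTorch; the random vectors below stand in for images, and nothing here reflects AttnGAN's actual architecture or dataset:

import torch
import torch.nn as nn

# Generator: noise -> fake sample; Discriminator: sample -> real/fake score
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
D = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

real = torch.randn(64, 8)  # stands in for the set of images marked as original

for step in range(200):
    # Discriminator step: learn to label real samples 1 and generated samples 0
    fake = G(torch.randn(64, 16)).detach()
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator step: try to make the discriminator call the fake output real
    fake = G(torch.randn(64, 16))
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()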

In some cases, the model achieves impressive results, for example when the text mentions certain animals (birds). However, it does not generate photorealistic images most of the time, and it fails completely when confronted with more complex sentences. What I find most interesting are its visual representations of abstract concepts, such as “the importance of being yourself” or “y tho”.

In future work, I would like to use the concept of GANs to create my own inferences. Perhaps, I could find better datasets to accomplish this.

My specific idea so far is to make a neural network that recognises corporate logos in public spaces.

BIRS Lab #2 // Konrad Krawczyk

Use the sensors

The accelerometer was something I got to use during the Easter egg snake game. Later on, I tried out various features, mostly focusing on screen input and sound interaction. I worked with Python right from the start, as I find GUI-based programming interfaces somewhat counterintuitive.
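
As a small illustration of that kind of sensor reading (not the snake game itself), a few lines of MicroPython are enough to map tilt from the accelerometer onto the display:

from microbit import *

while True:
    x = accelerometer.get_x()      # tilt along the left/right axis
    if x > 300:
        display.show(Image.ARROW_E)
    elif x < -300:
        display.show(Image.ARROW_W)
    else:
        display.show(Image.HAPPY)
    sleep(100)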

Create a bio-inspired interaction

Diana and I were inspired by fireflies. In short, fireflies communicate through light impulses (almost like Morse code), and these signals spread out to other colonies.

We found an example on the documentation website and followed it through.

There were some variables we decided to change. For example, by default a button push would make all the displays in the vicinity blink, except the one on the device itself. We fixed this by adding two lines to the Python code, so that on a button press the pressing device’s own LEDs would light up as well.
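
A rough sketch of what that change might look like, based on the firefly example in the micro:bit radio tutorial linked below (the exact lines we added may have been slightly different):

import radio
import random
from microbit import display, Image, button_a, sleep

radio.on()

# A fading flash animation: full brightness first, fading down to dark
flash = [Image().invert() * (i / 9) for i in range(9, -1, -1)]

while True:
    if button_a.was_pressed():
        # Our change: flash this device's own display too, not just the others
        display.show(flash, delay=100, wait=False)
        radio.send('flash')
    incoming = radio.receive()
    if incoming == 'flash':
        # Wait a random short time before flashing, so the swarm twinkles
        sleep(random.randint(50, 350))
        display.show(flash, delay=100, wait=False)
        # Occasionally re-broadcast so the signal propagates onward
        if random.randint(0, 9) == 0:
            sleep(500)
            radio.send('flash')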

Video:


Code:

https://microbit-micropython.readthedocs.io/en/latest/tutorials/radio.html