Biology lab: Circadian rhythm of Drosophila flies // Konrad Krawczyk

The objective of the lab was to investigate potential differences in circadian rhythm patterns among genotypically different specimens of male Drosophila flies. The circadian cycle is a biological feature of animal species, including humans, in which daily activity is chemically regulated by a mechanism that tracks the daily cycle (for example, the light intensity of the sun).

In this lab, we collected 10 specimens of male fruit flies without any genetic mutations into separate test tubes, and then collected 10 specimens of male fruit flies with a genetic mutation. We incapacitated the flies using CO2, used a metal rod to insert them into tubes partially filled with sugar paste, and then covered the tubes with cotton so the flies could breathe.

Over the course of one week, every hour on the hour, we counted the number of flies that were active versus asleep.

The sample results are available here:

These results clearly show a difference in how many flies are active at different times of the day. Looking further into the results, this does not appear to be an anomaly, as considerably more flies tend to be visibly active early in the morning. However, this did not apply to all test tubes: in some, no activity was detected for long stretches (some fruit flies may have died), and in others the pattern was not as clearly visible as in columns 1 and 4.
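If the hourly counts were kept in a spreadsheet, a small script could make this pattern easier to see. The sketch below is only an illustration and assumes a hypothetical CSV layout (a timestamp column plus one count column per tube), not the actual file we produced:

```python
# Minimal sketch of how the hourly activity counts could be aggregated.
# Assumes a hypothetical CSV with a "timestamp" column and one column of
# active-fly counts per test tube (tube_1 ... tube_10).
import pandas as pd

counts = pd.read_csv("activity_counts.csv", parse_dates=["timestamp"])

# Average activity per hour of day across the whole week, per tube.
hourly = counts.groupby(counts["timestamp"].dt.hour).mean(numeric_only=True)

print(hourly)  # rows 0-23: mean number of active flies at each hour of day
```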

This lab provided us with useful experience in following clear laboratory procedures, as well as in collecting and processing results. Even though further research could show the differences more clearly, our preparations enabled us to at least partially demonstrate how various organisms follow circadian rhythms.

BIRS Collective Decisions // Konrad Krawczyk

1. Plan:

The initial plan for our group was to build a 3-robot swarm mechanism in which separately distributed robots would clump together one by one. In this mechanism, numbered and labelled robots would be placed at uneven distances from each other. The software would then find the two robots closest to each other, make one of them (the one with the lower number) drive towards the other, and then both would proceed to the robot farthest away, as sketched below.
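As an illustration of that logic, here is a minimal Python sketch. The robot ids and position data are made up for the example; this is not the code we actually ran:

```python
# Sketch of the planned clustering logic, assuming each robot's position is
# known from its ArUco marker as an (x, y) pair keyed by robot id.
from itertools import combinations
from math import dist

positions = {1: (0.2, 0.5), 2: (0.8, 0.4), 3: (0.1, 0.9)}  # example data

# 1. Find the two robots closest to each other.
a, b = min(combinations(positions, 2),
           key=lambda pair: dist(positions[pair[0]], positions[pair[1]]))

# 2. The lower-numbered robot drives towards the other one ...
mover, target = (a, b) if a < b else (b, a)
print(f"robot {mover} -> robot {target}")

# 3. ... then both head for the remaining, farthest robot.
farthest = next(r for r in positions if r not in (a, b))
print(f"robots {a} and {b} -> robot {farthest}")
```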

2. Materials

Microbit x 3
Robotbit set x 4 (optional)
HD 920c Logitech camera
Mac mini
Aruco markers

3. Production

We started by setting up the aruco repository on our computers. I set up a separate forked repo and shared instructions so the entire group had access to the code. At first, our concerns were mostly mechanical: we didn't know how a robot would figure out its rotation angle, or how we would send the data to the swarm.
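For reference, this is roughly how marker positions and rotation angles can be read with OpenCV's aruco module. It is a simplified sketch of the approach, not necessarily the code in the repo, and the exact API differs between OpenCV versions:

```python
# Minimal sketch: detect ArUco markers in a camera frame and estimate each
# robot's heading from the marker corners. Requires opencv-contrib; newer
# OpenCV versions (4.7+) use cv2.aruco.ArucoDetector instead of detectMarkers.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # the Logitech HD 920c
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

ok, frame = cap.read()
if ok:
    corners, ids, _ = cv2.aruco.detectMarkers(frame, dictionary)
    if ids is not None:
        for marker_id, c in zip(ids.flatten(), corners):
            c = c.reshape(4, 2)
            center = c.mean(axis=0)
            # Heading: angle of the marker's top edge in image coordinates.
            dx, dy = c[1] - c[0]
            angle = np.degrees(np.arctan2(dy, dx))
            print(marker_id, center, angle)
cap.release()
```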

We figured out that we had to use one microbit as an antenna, in order to relay data to the other robots.

We successfully figured out how to send serial data to the antenna. However, we ran into issues when trying to forward it to the other microbits, and we never fully worked out how to make the robots move in a specific direction or turn by a specific angle. This could be the next step of the lab.
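For illustration, here is a MicroPython sketch of what the antenna microbit could do: read command lines arriving over USB serial and rebroadcast them over radio. The message format and radio group number are assumptions, and this is not the final working code:

```python
# MicroPython sketch for the "antenna" micro:bit: read commands arriving
# over USB serial and rebroadcast them to the other robots via radio.
from microbit import uart, sleep
import radio

radio.on()
radio.config(group=7)          # all robots must share this group number
uart.init(baudrate=115200)     # USB serial from the host computer

buffer = ""
while True:
    data = uart.read()
    if data:
        buffer += str(data, "utf-8")
        while "\n" in buffer:
            line, buffer = buffer.split("\n", 1)
            radio.send(line)   # e.g. "2:forward" or "3:turn:90" (assumed format)
    sleep(10)
```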

4. Code:

https://www.github.com/krawc/aruco

I am Not a Robot // BIRS final project documentation by Konrad Krawczyk

idea:

For my BIRS final, I wanted to explore the area of computer vision in robotics. After having worked with robots using MicroBit and Arduino, I decided it was time to take on something a bit more advanced, namely running machine learning algorithms on the robots themselves so that they can perform advanced analytical tasks. From the beginning, I knew this would require me to learn how to use the Raspberry Pi, as it is the only full-fledged computer among the small boards commonly used for robotics.
Ideally, I wanted my robot to be able to recognise itself in the mirror.

background:

The importance of visual self-recognition is outlined by Jacques Lacan in his Mirror Stage theory. The concept of the mirror stage is based on the belief that infants can recognise themselves in the mirror, and that moment creates a powerful shift in self-perception. According to Lacan, recognising oneself as an object adds a crucial bit of self-awareness needed to develop socially, and plays a role in ego formation. Aside from humans, the act of recognising oneself in the mirror has been observed in certain primate species.

I would like to look at the symbolic meaning of this moment by applying a seemingly similar mechanism to a robot. This kind of self-recognition would be very different from how an infant's mind works, in the sense that the robot would start with a pre-existing body of object-recognition intelligence, while infants develop it as they go. However, this could be a critical, and maybe somewhat ironic, take on the importance of the "mirror stage".

process:

I started by getting the Raspberry Pi 2. I had very little prior experience working with it, so I tried to install the NOOBS operating system. This, however, failed after several attempts due to a mysterious formatting issue.

I then picked up the RPi 1, which had OpenCV already installed. It quickly turned out that there was not enough disk space, so I deleted an unrelated tutorial file in .pdf format. Unfortunately, this caused the RPi to freeze. I will need to bring it back to usability ASAP.

However, this is how I found out there was an easier way to start working on the RPi 2, namely Raspbian Stretch Lite, a headless, command-line-only operating system. I burned its image onto an SD card and proceeded to get the Internet working, learning as I went. After half a day of struggle, I finally connected to the NYU Wi-Fi. I installed a list of packages, had the package database get corrupted after a reboot, reinstalled the system, installed the packages again, and finally tried to get the camera footage, but never got to that point. I had to reconnect to my own hotspot, after which the database got corrupted yet again. Then I had to reinstall the system once more, reinstall the packages, and try to get the camera footage working on the shared Wi-Fi. Even though I followed the tutorial step by step, at the end there was always a problem with the camera module.
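For reference, the kind of minimal capture test I was trying to get working looks roughly like this. It assumes OpenCV is installed and the camera is visible to it; for the Pi camera module the V4L2 driver usually has to be enabled first:

```python
# Minimal capture test: grab one frame from the camera and save it to disk.
# Assumes OpenCV can see the camera (index 0); this was the step that kept
# failing for me on the Pi.
import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    cv2.imwrite("test_frame.jpg", frame)
    print("captured", frame.shape)
else:
    print("camera not available")
cap.release()
```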

However, I did get to run some basic AlphaBot sketches on the robot setup.

joystick control:

obstacle avoidance:
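Since the videos are not embedded here, this is roughly what the obstacle-avoidance loop looks like in Python. The GPIO pin numbers below are assumptions based on the Waveshare documentation and may not match our unit; motor control is left as a stub:

```python
# Sketch of the obstacle-avoidance idea on the AlphaBot: poll the two IR
# obstacle sensors and steer away from whichever side reports an obstacle.
# Pin numbers are assumed, not verified on our robot.
import time
import RPi.GPIO as GPIO

DR, DL = 16, 19          # right / left IR obstacle sensor pins (assumed)

GPIO.setmode(GPIO.BCM)
GPIO.setup([DR, DL], GPIO.IN)

def drive(left, right):
    """Stub: set left/right wheel speeds via the motor driver."""
    print("drive", left, right)

try:
    while True:
        right_blocked = GPIO.input(DR) == GPIO.LOW   # LOW = obstacle detected
        left_blocked = GPIO.input(DL) == GPIO.LOW
        if right_blocked and left_blocked:
            drive(-0.5, -0.5)     # back up
        elif right_blocked:
            drive(-0.3, 0.3)      # turn left
        elif left_blocked:
            drive(0.3, -0.3)      # turn right
        else:
            drive(0.5, 0.5)       # go straight
        time.sleep(0.05)
finally:
    GPIO.cleanup()
```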

I truly, deeply wish I could have accomplished this task within the allotted time. However, I am determined to continue working on this experiment, as I have become very interested in making Web-embedded physical robots. I will continue working on it for as long as I am in Shanghai, or later next semester.

IML / Style Transfer / Konrad

In this style transfer exercise, I wanted to try creating a model that would turn natural landscapes, such as mountains, into synthetic and artificial ones. The images I decided to use for the style transfer were these:

Content:

Style:

I tried to train the model following the tutorial from the class slides. I prepared all the files, wrote the training shell script, and tried to qsub it. However, every time I added it to the queue, I got a message along the lines of:

XXXX-qsub-07-ai-builder

Nothing happened afterwards; I was immediately dropped back to bash. No training logs or error files were present.

After at least 10 attempts to train the network, I concluded that there was some other technical issue with my neural network setup.

I decided to follow a different tutorial instead. I found a style transfer algorithm in an IBM Watson neural network set, which enables users to train a neural network to transfer styles. The tutorial is available here:

https://www.youtube.com/watch?v=29S0F6GbcxU
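As a side note, a pre-trained arbitrary style transfer model can also be applied directly in Python. The sketch below uses TensorFlow Hub's Magenta stylization model; it is only an illustration of the technique, not the pipeline from the tutorial above:

```python
# Apply a pre-trained arbitrary style transfer model (TensorFlow Hub, Magenta).
# "content.jpg" and "style.jpg" are placeholder file names.
import tensorflow as tf
import tensorflow_hub as hub

def load_image(path):
    img = tf.io.read_file(path)
    img = tf.image.decode_image(img, channels=3, dtype=tf.float32)
    return img[tf.newaxis, ...]          # add a batch dimension

model = hub.load(
    "https://tfhub.dev/google/magenta/arbitrary-image-stylization-v1-256/2")

content = load_image("content.jpg")
style = tf.image.resize(load_image("style.jpg"), (256, 256))

stylized = model(tf.constant(content), tf.constant(style))[0]
tf.keras.utils.save_img("stylized.jpg", stylized[0])
```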

These are the results: