Video:
Stills:
Code:
IMA Documentation Blog
In the first prototype, I built a simple Wi-Fi beeper that reflects the signal strength (RSSI) of each network by connecting it to a buzzer. It was a proof of concept that the system can create an interactive experience, showing how the environment can have an impact on the individual in an invisible and untouchable way.
Taking a step further, the goal of the second prototype is to:
This blog post is divided into three parts: wiring, setting up Mozzi (two examples), and the working prototype 2.
Photo: Prototype 2 Wiring
For prototype 2, the wiring contains two potentiometers for analog input, plus an additional potentiometer for volume control (together with a capacitor to smooth the audio). The two input potentiometers are connected to the ESP32, which runs Mozzi. For output, a GPIO pin with DAC (audio) capability is connected to the earphone. The earphone is attached with wire clamps, one for each section of the 3.5mm audio jack.
In order from the inside to the outside: Microphone -> Ground -> Left -> Right
It’s a very temporary, experimental setup, which will change in the future for better stability and cleaner connections.
An ESP8266, responsible for Wi-Fi scanning, is connected to the ESP32 via an analog line; more specifics follow in the sections below.
In this example, two potentiometers are mapped to pitchValue (oscillator frequency) and the cutoff frequency of a low-pass filter, so the potentiometers control the character of a sine-wave synth. This example demonstrates a basic synth that can be shaped and manipulated further.
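As a rough sketch of that mapping (the `FREQ_MIN`/`FREQ_MAX` range and the 10-bit pot readings here are assumptions, not the exact values in my sketch), the two pot readings can be re-mapped like this:

```cpp
#include <cstdint>

// Assumed ranges for illustration only.
const int POT_MAX  = 1023;  // 10-bit pot reading
const int FREQ_MIN = 100;   // Hz, lowest oscillator pitch
const int FREQ_MAX = 1000;  // Hz, highest oscillator pitch

// Linear re-map, equivalent to Arduino's map().
long remap(long x, long inMin, long inMax, long outMin, long outMax) {
    return (x - inMin) * (outMax - outMin) / (inMax - inMin) + outMin;
}

// Potentiometer 1 -> oscillator frequency (pitchValue).
int potToFrequency(int potReading) {
    return (int) remap(potReading, 0, POT_MAX, FREQ_MIN, FREQ_MAX);
}

// Potentiometer 2 -> low-pass filter cutoff
// (Mozzi's LowPassFilter takes a 0-255 cutoff value).
uint8_t potToCutoff(int potReading) {
    return (uint8_t) remap(potReading, 0, POT_MAX, 0, 255);
}
```

In the real sketch these values would be read in updateControl() and fed to the oscillator and filter objects.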
This example uses a Bluetooth scan, attempting to achieve scanning and sound production on the same board, but with little success.
The logic of the Bluetooth scan is as follows:
BLEScan *scan = BLEDevice::getScan();
scan->setActiveScan(true);                 // active scan returns more device data
BLEScanResults results = scan->start(1);   // scan for 1 second
int best = CUTOFF;                         // CUTOFF acts as the floor RSSI value
for (int i = 0; i < results.getCount(); i++) {
  BLEAdvertisedDevice device = results.getDevice(i);
  int rssi = device.getRSSI();
  if (rssi > best) {
    best = rssi;                           // keep the strongest signal seen so far
  }
}
This code snippet grabs the “best” Bluetooth signal, continuously returning the strongest signal source and its numerical signal strength value (RSSI). In turn, the RSSI value defines an arpeggiator pattern that “should” make the sine-wave synth more musical.
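The arpeggiator mapping could look something like this sketch (the pattern contents and RSSI thresholds are illustrative assumptions, not the exact values in the prototype):

```cpp
#include <vector>

// Stronger (less negative) RSSI picks a denser arpeggio pattern.
// Each pattern is a list of semitone offsets from the base note.
std::vector<int> rssiToArpeggio(int rssi) {
    if (rssi > -50) return {0, 4, 7, 12, 7, 4};  // strong: full up-down arpeggio
    if (rssi > -70) return {0, 4, 7, 12};        // medium: ascending arpeggio
    return {0, 7};                               // weak: sparse two-note pattern
}
```

The synth would then step through the returned offsets at the control rate, transposing the base frequency by each offset in turn.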
However, a big problem is the incompatibility of Mozzi with running Bluetooth/Wi-Fi on the same board. Mozzi’s code structure includes special functions such as updateControl() and updateAudio(), which run at very high rates (updateAudio() at roughly 16,000 Hz) to match the audio rate. Adding anything related to serial communication, Wi-Fi, Bluetooth, or timing functions in general would not work with Mozzi.
Therefore, the only option left is to use another board (the ESP8266) to separate the scanning function from the sound board, and use analog output/input (a PWM pin) to transmit the data.
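A minimal sketch of that one-wire analog link, assuming an 8-bit PWM output on the ESP8266 and a 12-bit ADC on the ESP32 (the exact scaling in the prototype is not documented here):

```cpp
#include <cstdint>

// Sender (ESP8266): squeeze an RSSI value (about -100..0 dBm)
// into an 8-bit duty cycle for analogWrite() on the PWM pin.
uint8_t rssiToDuty(int rssi) {
    if (rssi < -100) rssi = -100;
    if (rssi > 0)    rssi = 0;
    return (uint8_t)((rssi + 100) * 255 / 100);
}

// Receiver (ESP32): the smoothed PWM arrives as a 12-bit ADC
// reading (0..4095); recover an approximate RSSI from it.
int adcToRssi(int adcReading) {
    return adcReading * 100 / 4095 - 100;
}
```

A small RC filter on the line smooths the PWM into a steady voltage before the ESP32 reads it.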
This example plays a fluctuating ambient wash in response to the Wi-Fi scan results and a potentiometer. The Wi-Fi scan controls the base frequency of the oscillators, and the potentiometer controls the oscillator offset depending on its resistance.
There are two sets of oscillators. The first set uses seven different cosine wavetables to produce a harmonic synth sound. The second set duplicates the first at slightly offset frequencies and is added to the originals. A preset offset scale maps the Wi-Fi scan result to the base frequency drift.
The base MIDI notes are C3, E3, G3, A3, C4, and E4, which translate to roughly 130.8 Hz, 164.8 Hz, 196.0 Hz, 220.0 Hz, 261.6 Hz, and 329.6 Hz.
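For reference, the note-to-frequency translation uses the standard MIDI formula; the 1.01 detune ratio for the second oscillator set below is an illustrative assumption, not the value from my sketch:

```cpp
#include <cmath>

// Standard MIDI-to-frequency conversion (A4 = MIDI 69 = 440 Hz).
float mtof(int midiNote) {
    return 440.0f * std::pow(2.0f, (midiNote - 69) / 12.0f);
}

// Base chord C3 E3 G3 A3 C4 E4, using the C3 = 48 convention.
const int BASE_NOTES[6] = {48, 52, 55, 57, 60, 64};

// The second oscillator set runs slightly off frequency and is
// mixed with the first set to thicken the sound.
float detuned(float baseFreq) {
    return baseFreq * 1.01f;
}
```

Mozzi provides its own mtof() helper, so in the actual sketch the conversion does not need to be hand-rolled.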
The current prototype took a step further and accomplished synth generation from Wi-Fi inputs. For the next prototype, the goal is to build a more defined user experience. Achieving that requires re-thinking the input variables the system uses (currently, the number of Wi-Fi signals detected). Adding temperature, light, or other inputs might complicate things further but generate richer sound. However, that requires a deeper, higher-level understanding of Mozzi, especially how to change synth parameters and control sounds.
The overarching theme of this project is the mix between digital and human features. Specifically, how the environment can have an impact on the individual in an invisible and untouchable way.
Inspired by the Bluetooth Visualizer and the Ambient Machine, I want to create a sound machine that reacts to radio signals (Wi-Fi signals as the primary option to receive signals).
The goal for the first prototype is to test how to detect Wi-Fi and make sounds accordingly. This process is based on an ESP32 chip, and developed in Arduino IDE.
First, I explored how to detect Wi-Fi signals on an ESP32 board, using the Wi-Fi Scan example.
In this example, the ESP32 board can get:
I am using RSSI (signal strength) as the main input in this prototype. The output is a simple buzzer that makes sound based on signal strength. I defined a couple of notes and their frequencies: the higher the signal strength, the higher the pitch. The notes and pitches are defined as follows:
The code works as follows: the ESP32 loops through all the Wi-Fi signals it detects, and the buzzer beeps according to each signal’s strength. For example, if there are three Wi-Fi networks with signal strengths of -80, -70, and -60 respectively, the buzzer will beep the notes E, F, and G.
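A hypothetical reconstruction of that RSSI-to-note logic (the exact thresholds and note table in my sketch may differ):

```cpp
// Note table: name and frequency in Hz for tone() on the buzzer pin.
struct Note { const char* name; int freqHz; };

const Note NOTES[] = {
    {"C", 262}, {"D", 294}, {"E", 330}, {"F", 349},
    {"G", 392}, {"A", 440}, {"B", 494},
};

// Stronger signal -> higher note: -80 dBm -> E, -70 -> F, -60 -> G.
const Note& rssiToNote(int rssi) {
    int index = (rssi + 100) / 10;  // -100..-40 dBm maps to index 0..6
    if (index < 0) index = 0;       // clamp very weak signals to C
    if (index > 6) index = 6;       // clamp very strong signals to B
    return NOTES[index];
}
```

In the loop, the sketch would call tone() with `rssiToNote(WiFi.RSSI(i)).freqHz` for each detected network.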
A key benefit of the ESP32 is portability. To demonstrate, I connected the ESP32 board to a power bank (thanks, Prof. de Bel, for the power bank) and walked around the campus. I found that there are more Wi-Fi networks in the courtyard than inside the building (because the beeping sequence was longer). Here’s a video demo:
Overall, this simple prototype demonstrates that the basic idea works: Wi-Fi detection is feasible and easy to implement. The harder next step is making pleasant sound. The music coding platform SuperCollider is very hard to manage, so I will try other approaches (synthesizers) or add effects to construct ambient sounds.
FULL CODE
This immersion experience was conducted in the Taikoo Li mall 2F (stone zone). The immersion comprises three parts:
and this post documents the observe phase and the interact phase. To avoid people staring at me (the immersion lasted around an hour), and to carry out the process more effectively, I adopted three items from Oblique Strategies by Brian Eno and Peter Schmidt:
Here are the results:
The map machine is located near the main entrance on each floor, with an industrial but modern look and design.
The map has a default view showing an overall map of the entire mall. Unfortunately, there were no markings, serial numbers, or indications of where it was made. The machine is housed in a metallic case, which makes it hard to probe its insides.
Immediately, a camera mounted on top of the machine is very visible. No application on the machine explicitly uses the camera, which left me wondering what it actually does (facial recognition? consumer portraits?).
A very detailed air quality dashboard.
When I touched the screen, the bubble effect on touch resembled an Android system.
A height-friendly mode that changes the screen size.
Finally, a very interesting AR navigation experiment: scan a QR code, log in on your phone, and the AR app will take you to your destination. But it is awkward because you need to hold the phone high and aim it at the road in front of you.
The Ice Cream Machine Visit: The interaction (see the interaction illustration below)
The interaction between the human and the ice cream machine starts with the customer selecting and paying for the ice cream, a process operated through a screen and a computer. Then the computer sends instructions to a robotic arm, which performs a set sequence: making the ice cream and delivering it to the customer.
In the entire process, no human is involved except the customer. But for maintenance, someone has to refill the supplies, such as milk and cones.
The content on the screen is pretty simple: a button to start ordering, a payment system based on WeChat or Alipay, and then a finishing up animation.
In terms of sound, it was very interesting that no computer-generated sounds were used at all. The robotic arm makes essentially no sound, so the experience is mostly visual and physical.
The first eye-catching and most obvious theme is the visualization of the “invisible links” of smart cities, which could represent the internet, data transfers, or something else.
The second common theme is a background picture of a highly developed city, typically at night, emphasizing the use of electricity as a manifestation of development.
The third theme is a futuristic, or even cyberpunk, interpretation of the smart city.
These themes are a symbolic representation of some of the characteristics of the smart city. They give people the feeling that smart cities are inadvertently linked to high tech, the internet, and highly developed metropolises.
The shared bikes! Shared umbrellas sometimes too. Waimai (delivery services).
The benefits are mostly commuting convenience (especially when the distance traveled is shorter than a taxi ride but too long for walking) and the fact that you can use the services with just a phone. The frustration is that the placement of the bikes is mostly random, and at peak times you cannot find a bike to ride home.
I find Constant’s idea of “the nomadic life of creative play” a very interesting concept. There are many ways to play in the city, and I think Constant meant creating a life of leisure inside the city, which could include things like going to the movies, visiting a bar, or taking a walk in a park.
Technology could enhance play mainly through automation and awareness. The author of the article mentioned that “Spaces in New Babylon would somehow need to be ‘aware’ of the activities taking place in them so that the environment could know when to change its appearance and behavior.” The ambient environment and its automation enhance the experience of many leisure activities, but not to the extent that Constant may have envisioned.
Lastly, diversity represents a significant factor in the making of a smart city. To quote from the article, “the smartness comes from the diverse human bodies of different genders, cultures, and classes whose rich, complex, and even fragile identities ultimately make the city what it is.” It’s the engagement and blending of people from different backgrounds that matters more than just the technology or the profit stream of companies.