Thousands of netizens collectively pull domestic violence out of its shadow and into the digital public domain. By distributing, commenting on, and rewriting original materials about domestic violence, they blur the boundaries between public and private, voyeurism and participation. This project stages a publicized case of domestic violence as an interactive visual spectacle and invites you to examine your own positionality when confronted with such mediated events.
Public Voyeurism: Confronting Domestic Violence in Cyberspace is an interactive video installation that explores the complexity of public-domestic relationships in media events about domestic violence. Netizens’ collective power has pushed domestic violence into the open as a publicly discussed social issue. Yet once domestic violence becomes a spectacle in cyberspace, the motivations and emotions of individual netizens are no longer straightforward. Is it the search for truth and justice, or curiosity, obsession, voyeurism, even paranoia, that lies behind a netizen’s engagement with media events about domestic violence? What happens when thousands of people try to get closer to the core of such an event? By staging these questions, my project refuses to ignore or flatten the public-domestic relationships and the unarticulated emotions involved in viewing acts of violence, or what we might call “public voyeurism.”
In this project, I created two sequential videos about domestic violence in media events and display them on a screen equipped with facial recognition and a distance sensor. The screen is placed at the end of a dim corridor. The audience is directed to walk along the corridor toward a visible line marked on the floor. When they cross the line, the sensor detects them and triggers the video to shift. The first video uses snapshots, audio, pictures, and comments about domestic violence gathered from social media platforms; it conveys a sense of the fragmentary, the plausible, and the overwhelming, reflecting the media environment we are living in. Seeing the first video from a distance, the audience is naturally drawn closer to the screen. The second video, which replaces the first when the audience crosses the line, is more fictional and emotional: I repeatedly use eyes as metaphors, mediate images through televisions and cameras, and adopt dramatized background music to demonstrate the unstable boundaries between public and private, participation and voyeurism. Finally, when the audience gets extremely close to the screen, the video disappears entirely, leaving them with a black screen in an empty corridor.
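A minimal sketch of how this distance-driven switching could be wired up in the browser, assuming the sensor reading reaches the page as a distance in centimeters; the readDistance() stub, element IDs, and threshold values are hypothetical stand-ins rather than the installation’s actual code:

```js
// Hypothetical sketch: switch between the two videos based on a distance reading.
const videoA = document.getElementById('video-fragments'); // first video: social media fragments
const videoB = document.getElementById('video-fictional'); // second video: fictional, emotional

const LINE_CM = 200; // assumed distance at which the floor line is crossed
const CLOSE_CM = 40; // assumed "extremely close" threshold: screen goes black

// Placeholder for the real sensor input (e.g. a serial-connected distance sensor).
function readDistance() {
  return 300; // replace with the actual sensor value, in centimeters
}

function update(distanceCm) {
  if (distanceCm < CLOSE_CM) {
    // Viewer is right in front of the screen: everything disappears.
    videoA.style.display = 'none';
    videoB.style.display = 'none';
  } else if (distanceCm < LINE_CM) {
    // Line crossed: the second video replaces the first.
    videoA.style.display = 'none';
    videoB.style.display = 'block';
    if (videoB.paused) videoB.play();
  } else {
    // Viewer is still far away: show the first video.
    videoB.style.display = 'none';
    videoA.style.display = 'block';
    if (videoA.paused) videoA.play();
  }
}

// Poll the sensor a few times per second.
setInterval(() => update(readDistance()), 250);
```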
The whole viewing process makes the tension between netizens’ positionality and the video content explicit, and through it I want the audience to reflect on their own positionality on social media platforms and their emotions when confronting domestic violence. As they draw closer to the depicted violence, the identities of netizens shift among bystander, helper, and voyeur, becoming far more intricate. That is what we experience, yet ignore, in publicized domestic violence.
Tags: #videoinstallation #domesticviolence #public/domestic
Jannie Zhou | Traces: Reimagining movement in 3D webspace
An interactive 3D movement visualization webpage that aims to change the context of our everyday movement, visualize its traces in the virtual space, and disclose the information our movement carries.
My project is an interactive 3D movement visualization webpage. Developed with Three.js, p5.js, and a machine learning 3D motion detection model built on TensorFlow.js, the project runs in web browsers while keeping movement prediction and visual rendering well synchronized. It aims to change the context of our everyday movement, visualize its traces in virtual space, and disclose the information our movement carries.
The TensorFlow.js machine learning model, developed by Google, predicts the user’s joint coordinates in three dimensions (x, y, z). I deployed the model in my project and used its output to create visuals in the web space. The visuals are abstract and geometrical, including lines, icosahedrons, butterflies, and particles, representing the traces of the users’ movement. The aesthetics of the visuals are deliberate, contributing to the overall abstract experience. The butterflies, as geometrical as they look, are the only element that ties the visual world to real-life experience: they are programmed to fly toward the users’ wrists, creating a lifelike effect. Every visual is closely tied together to tell a coherent story about movement. Users can observe how much information their movement carries, its angles, trails, speed, and dynamics, and play with them.
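A rough sketch of how this pipeline can be assembled, assuming the BlazePose model from TensorFlow.js’s pose-detection package and a bare Three.js scene; the single wrist trail stands in for the project’s richer visuals, and the coordinate mapping is illustrative rather than the project’s actual code:

```js
// Illustrative sketch: feed TensorFlow.js 3D pose keypoints into a Three.js trail.
import '@tensorflow/tfjs';
import * as poseDetection from '@tensorflow-models/pose-detection';
import * as THREE from 'three';

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(60, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 3;
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

// A simple line that traces the right wrist through 3D space.
const trailPoints = [];
const trailGeometry = new THREE.BufferGeometry();
const trail = new THREE.Line(trailGeometry, new THREE.LineBasicMaterial({ color: 0xffffff }));
scene.add(trail);

async function main() {
  const video = document.querySelector('video'); // assumes a webcam stream is already attached
  const detector = await poseDetection.createDetector(
    poseDetection.SupportedModels.BlazePose,
    { runtime: 'tfjs', modelType: 'full' } // 'full' provides 3D keypoints
  );

  async function tick() {
    const poses = await detector.estimatePoses(video);
    if (poses.length > 0) {
      // keypoints3D are roughly metric (x, y, z) coordinates centered on the hips.
      const wrist = poses[0].keypoints3D.find((k) => k.name === 'right_wrist');
      if (wrist && (wrist.score ?? 1) > 0.5) {
        trailPoints.push(new THREE.Vector3(wrist.x, -wrist.y, -wrist.z));
        if (trailPoints.length > 300) trailPoints.shift(); // keep the trail bounded
        trailGeometry.setFromPoints(trailPoints);
      }
    }
    renderer.render(scene, camera);
    requestAnimationFrame(tick);
  }
  tick();
}

main();
```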
A backend server was also implemented to allow interactions among users in different physical locations. Using Socket.io, the server collects data from each client and broadcasts it so that the visuals are rendered on every client’s device in real time. Since the project is developed entirely in JavaScript and published on Glitch, users can join from their own devices via the link and see each other’s traces in real time. The server is designed to promote exchange and communication between users. By substituting movement interaction for verbal communication, I hope to remind users that movement remains an expressive and direct method of communication, even though many of us have neglected this ability in today’s society.
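One plausible shape for such a relay, given only as a sketch of the Socket.io pattern described above; the 'trace' event name and payload are assumptions, not the project’s actual protocol:

```js
// server.js — illustrative Socket.io relay for sharing movement traces between clients.
const http = require('http');
const { Server } = require('socket.io');

const httpServer = http.createServer();
const io = new Server(httpServer, { cors: { origin: '*' } });

io.on('connection', (socket) => {
  // Forward each client's trace data to every other connected client.
  socket.on('trace', (data) => {
    socket.broadcast.emit('trace', { id: socket.id, ...data });
  });
});

httpServer.listen(3000);
```

On the client side, each browser would emit its own keypoint data with socket.emit('trace', ...) and draw a remote user’s visuals whenever a 'trace' event arrives, so every device renders the shared scene locally in real time.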
Tags: #3DWebspace #VisualizingMovement #WebServer
Haoquan Wang | Self, Digi: VR Interactive Fashion Show for NanoLove: A Capsule Collection
“Self, Digi” is an experimental VR fashion show exploring the possibilities of VR as an interactive medium that lets fashion designers present their clothes in an economically, environmentally, and functionally friendly way.
“Self, Digi” is an interactive fashion show in virtual reality for the “NanoLove” capsule collection. Through the construction of a virtual environment and the application of digital assets, “Self, Digi” creates a platform for fashion designer Dana Kosber to hold a hyperspatial dialogue with the audience about the interaction of fashion, self, and love in the cyberspace era. Unlike a traditional fashion show, the audience experiences the entire show from a first-person perspective. As the director of the show, Haoquan Wang gives the audience the freedom to move through the whole show and try on the clothes. The audience can observe the garments up close and try them on through an avatar-cloning function. The interaction design of avatar cloning and conversation with the avatar serves as a metaphor for Haoquan’s understanding of the digital and fashion: “Fashion is one of the extensions of self-expression and identity. In the digital world in the future, identity and expression are easy to switch, copy and experience”. In the final stage of the show, the audience comes to understand the design concept and the emotions the designer and the director wish to convey.
Tags: #VirtualReality #Fashion #Runwayshow
Shengli Wen | PlaNet in Crisis: A Web-Based Meditation on Climate Grief
PlaNet in Crisis is a series of web pages exploring the fragmented, confusing, and ambient nature of climate change-related grief.
PlaNet in Crisis is a series of web pages that traverse emotional responses to climate change. In a series of “rooms” that the viewer can visit, the project presents various forms of interactive text and generative ecological art, all leaning into the surreal feelings that emerge when grappling with a suffering planet. Studies have shown that young people across the globe are responding to climate change with emotions such as sadness, anxiety, anger, powerlessness, helplessness, and guilt. Climate emotions also manifest across different timescales: in grief for extinct species, for land lost to unfolding natural disasters, and in young people’s grief for their lost futures. Sometimes, the way climate change information is presented can be so overwhelming that people become apathetic to the subject.
The project consists of a minimalistic home page, with buttons that lead to an introduction to the project and to pages numbered 1-4. The introduction contains clickable text in an ambient style, grounding the user in the context of the project with a poem inspired by Netflix’s Don’t Look Up. Page 1, titled “The Algorithmic Beauty Of Slow Violence”, features three slowly growing fractal trees and a clock counting down to July 28, 2028, the estimated date on which the earth will reach 1.5 degrees of warming. The trees generate a new layer roughly every 6 minutes, and it takes 38 minutes for them to complete a full growth cycle. The second page, “Understanding our interconnected world”, is a collective canvas built with WebSocket, where multiple visitors can generate moss-like visuals on each other’s browsers. The third page, “Corporate Rhetoric as Reassurance”, projects an ExxonMobil press statement on climate policy onto a 3D visualization of global temperature change from 1880 to 2016. Combined with audio of a 10-hour oil engine recording originally posted on a white noise channel for sleep, this page hits on the strangeness of the relationship between corporation and consumer. The fourth page is inspired by the idea that naming confusing emotions is a first step toward grappling with them. When users open the page, they are greeted with text that reads, “what does it mean to think about climate change as a way of mourning?” and they can type a response that rotates randomly, creating a winding effect that echoes the non-linear nature of climate emotions.
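As an illustration of the timing logic described for Page 1 (the countdown target and the layer interval come from the description above; addTreeLayer() and drawCountdown() are hypothetical stand-ins for the page’s actual rendering code):

```js
// Illustrative timing sketch for "The Algorithmic Beauty Of Slow Violence".
const TARGET = new Date('2028-07-28T00:00:00Z'); // estimated 1.5-degree date used by the page
const LAYER_INTERVAL_MS = 6 * 60 * 1000;         // a new fractal layer roughly every 6 minutes
const FULL_CYCLE_LAYERS = Math.round(38 / 6);    // ~38 minutes for a full growth cycle

let layers = 0;

function addTreeLayer() {
  // Hypothetical: extend each of the three fractal trees by one branching level.
  layers = (layers + 1) % (FULL_CYCLE_LAYERS + 1);
  console.log(`trees now have ${layers} layers`);
}

function drawCountdown() {
  // Hypothetical: render the time remaining until the estimated 1.5-degree date.
  const ms = TARGET - Date.now();
  const days = Math.floor(ms / 86_400_000);
  const hours = Math.floor((ms % 86_400_000) / 3_600_000);
  console.log(`${days} days, ${hours} hours remaining`);
}

setInterval(addTreeLayer, LAYER_INTERVAL_MS);
setInterval(drawCountdown, 1000);
```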
PlaNet in Crisis is a series of four distinct pages/rooms, reflecting the fragmented experience of processing climate grief. It takes advantage of the web browser as a form of time-based media, allowing user experiences to unfold over time, according to the temporal logic of climate change: its slowness and dispersion. Additionally, browser art can be accessed by multiple users at once, reflecting our interconnected relationship with the earth. Through this minimalist form, it aims to offer a space of solace and questioning — a collective anthology of grief and hope.
Tags: #climatechange #digitalhumanities #website
Louis Veazey | TSA…TS欸?/TS-eh?: Checkpoint Using TSA’s Secret Checklist
This is an interactive installation inspired by the automated security checkpoints currently in use at airports across the globe, with a twist: the result of the approval or denial process is based on the TSA’s confidential checklist of what it looks for in potential terrorists.
We live in a world of increased surveillance, in which every single move and action can be observed, scrutinized, analyzed, and put into a database. This begs the question: how exactly are we being observed, and in what ways are we observed in particular situations? When one thinks of situations in which one is closely watched, a few certainly come to mind: a military checkpoint, a courthouse entrance, or an airport security checkpoint.
Currently, the TSA, the United States’ airport security agency, uses a set of arbitrary secret rules to identify potentially suspicious individuals based on their appearance, behaviors, movements, and so on. Some of these rules are obvious, such as appearing to be in disguise, while others, such as ‘wearing improper attire’ or ‘gazing down’, are so arbitrary that almost any traveler could be flagged as suspicious for behaviors they have surely exhibited before. Therefore, “TS-eh?” was created as an interactive, automated checkpoint-like installation that draws attention not only to the constant surveillance people are under in the modern world but also to how subjective the rules within security agencies’ checklists (published here: https://theintercept.com/2015/03/27/revealed-tsas-closely-held-behavior-checklist-spot-terrorists/) are.
The project accomplishes this by first presenting itself as an environment that is familiar to anyone, especially those who have traveled by air. There are lines on the ground to follow and walk along, and an end-point with a screen. At the same time, multiple cameras are placed around the walking path, creating a setting in which subjects know they are being observed and potentially analyzed. During participation, the screen shows the subject that their body, movement, and clothing are being analyzed, followed by their head, eyes, and facial expression. The subject then receives a result of approval or denial based on the rules in the TSA’s checklist; a denial displays only a single reason, mirroring how real security checkpoints often give those who are turned away little or no explanation.
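A minimal sketch of how the final verdict might select a single reason to display, assuming the analysis stage hands over a list of flagged checklist items; the flaggedBehaviors input is hypothetical, and the example items are the ones quoted from the published checklist above:

```js
// Hypothetical verdict logic: reveal at most one reason, as real checkpoints reveal little or none.
const CHECKLIST = [
  'Appears to be in disguise',
  'Wearing improper attire',
  'Gazing down',
];

function verdict(flaggedBehaviors) {
  if (flaggedBehaviors.length === 0) {
    return { approved: true };
  }
  // Only the first matching checklist item is ever shown to the subject.
  const reason = CHECKLIST.find((item) => flaggedBehaviors.includes(item));
  return { approved: false, reason };
}

// Example: a traveler flagged for looking at the floor.
console.log(verdict(['Gazing down']));
// -> { approved: false, reason: 'Gazing down' }
```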
Tags: #Surveillance #AirportSecurity #FacialAnalysis