Stanley Virgint | infinitecanvas: Collaborative AI Image Generation Canvas

Infinitecanvas is an experimental platform where users collaboratively create and expand an image through textual descriptions, leveraging AI technology and community-oriented design to pioneer a new approach to online art and collaboration.

Infinitecanvas is an experimental platform that allows users to collaboratively create and expand an image using textual descriptions. Inspired by my research thesis, Expressive Data: Fostering Connections With Online Crowdsourcing Through Creative Applications, the project explores how crowdsourcing can foster connections and collaboration among participants in creative contexts.

The project intervenes in key debates surrounding online collaboration, methods of crowdsourced contribution, and the role of generative AI in empowering participants in creative contexts. Above all, Infinitecanvas aims to pioneer a new approach to collaborative online art by leveraging emerging AI technology and community-oriented design. Its distinctive contribution lies in enabling users to collaboratively grow an image through textual prompts, creating a shared creative experience that transcends geographical and cultural boundaries.

The technical setup of Infinitecanvas centers on the integration of open-source machine learning models, including Stable Diffusion with outpainting support, along with cloud storage and databases. The project’s outpainting functionality and model configuration are based on a fork of lnyan/stablediffusion-infinity, a preexisting, utility-focused outpainting tool. The Infinitecanvas “space” on Hugging Face runs the machine learning models on an Nvidia A10G GPU and is embedded in a custom website wrapper I developed with HTML, CSS, and JavaScript. The wrapper guides users through the experience, providing intuitive controls and background information. The primary site is hosted on Heroku, while image data is stored on Firebase and updated with each visitor’s accepted contribution. A separate Firebase real-time database continuously publishes the most recent image URL, keeping the canvas image in sync for every visitor.
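The sync loop described above can be sketched roughly in browser JavaScript. This is a minimal illustration under stated assumptions, not the project’s actual code: the database path, the helper names, and the wiring are all hypothetical, and the real Firebase listener (shown only in comments) would use the Web SDK.

```javascript
// Hypothetical sketch of keeping the canvas image in sync with the
// latest image URL published to a Firebase real-time database.
// All names and the database path are assumptions for illustration.
//
// With the Firebase Web SDK, the listener would look roughly like:
//   import { getDatabase, ref, onValue } from "firebase/database";
//   onValue(ref(getDatabase(), "canvas/latestImageUrl"),
//           (snap) => updateCanvas(canvasImg, snap.val()));

// Pure helper: refresh only when a new, non-empty URL arrives.
function needsRefresh(currentUrl, latestUrl) {
  return typeof latestUrl === "string" &&
         latestUrl.length > 0 &&
         latestUrl !== currentUrl;
}

// Apply an update to an <img>-like object; returns the URL now shown.
function updateCanvas(imgElement, latestUrl) {
  if (needsRefresh(imgElement.src, latestUrl)) {
    imgElement.src = latestUrl; // the browser re-fetches the new image
  }
  return imgElement.src;
}
```

Keeping the listener callback this thin means every visitor’s page converges on the same canvas state as soon as a contribution is accepted and the database entry changes.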

Infinitecanvas explores the creative, philosophical, and conceptual implications of a communal AI graffiti wall, prompting each user to consider the visual trail they want to contribute and its relation to the growing narrative. By allowing users to collaboratively grow an image through textual descriptions, the project creates a shared creative experience that challenges traditional notions of art and collaboration. The platform’s use of AI technology and community-oriented design not only democratizes the creative process but also encourages users to engage with one another regardless of geographical and cultural boundaries. This community-driven approach is a direct application of the research paper’s findings, which highlight the appeal of low-skill contributions (i.e., typing text to generate an image instead of drawing by hand) and how such contributions enable consistent, simple participation from wide audiences. The project ultimately seeks to establish a new paradigm in collaborative online art, where users can participate in a collective creative experience driven by AI and human interaction.

Repository: https://github.com/stanleywalker1


Tags: #Collaboration #GenerativeAI


Freya Zhuang | Avatar Skin: Towards New Bodily Experience

The project seeks to blur the boundaries between our physical and digital selves by linking the avatars’ skin to our tangible, living skin. Through the integration of electronic components and the unfolding of UV maps from digital avatar models, the project facilitates communication between our virtual and real identities.


As individuals, we inhabit separate physical bodies and digital entities spread out across the internet. Avatars serve as our digital embodiments, existing within us.

Our physical bodies are constrained by their static, fixed nature; yet our skin, being permeable, engages in a constant exchange of matter with the environment, granting our physical beings adaptability. In contrast, avatars possess a dynamic and ever-changing appearance, offering boundless possibilities for self-expression. This project explores the relationship between these two bodies by introducing the concept of skin as a means to examine avatars as extensions of the human body, forging a tangible connection with our living, breathing skin. Through this convergence, a juxtaposition is created, facilitating seamless communication between our digital and physical identities and allowing avatars to transcend the boundaries of the virtual world and merge with our corporeal existence.

To accomplish this fusion, the project delves into the intricacies of UV maps, unfolding the detailed surface attributes of digital avatar models. Furthermore, electronics are integrated into the project, acknowledging the crucial role of algorithms, vast data sets, and powerful servers in the generation and manipulation of avatars.


Tags: #Avatar #DigitalSculpture


Yutong Liu | Station Earth: A Journey Between Life And Death

Death is after-life, life is after-death. Join me on a journey between life and death.


Station Earth is a VR experience designed to challenge the static perception of life and death. In this narrative, life and death are not viewed as two opposing ends, but rather as dynamic, intertwined stages of one constant flow. From this perspective, neither life nor death is a permanent stage; both are crucial forces pushing us along a constant journey of change. Under their influence we say goodbye to our past again and again, continuously forging newer experiences for ourselves.


Tags: #VRExperience #SpiritualJourney


Jiani Yu | beto beto べとべと: An Interactive Kinetic Yokai Sound Installation

beto beto べとべと is an interactive sound installation that recreates the auditory experience of encountering and interacting with the formless Japanese Yōkai Betobetosan, who is recognizable only by its telltale sound – the “beto beto” clacking of wooden clogs – and likes to prank lone walkers by following behind them.


In Japanese folklore, Yōkai (妖怪, “strange apparition”) are a class of supernatural entities and spirits. The idea of Yōkai originates from Japanese Shinto principles, including animism and nature worship, which hold that spiritual entities reside in all natural phenomena and objects. Yōkai were first imagined to explain supernatural or unaccountable phenomena. Over time, they became part of popular entertainment, depicted in manga, illustrations, dramas, and so on, and each Yōkai’s appearance, characteristics, and stories became standardized.

While Yōkai imagery is abundant and has long been emphasized, this capstone project focuses on studying the sounds of Yōkai. These sounds are significant because they are an important part of people’s encounters with Yōkai in folk tales; for some formless Yōkai, sound is their only means of interacting with people. Currently, the main way to learn Yōkai sounds is to read textual descriptions. Since many of the sounds are closely tied to Japanese tradition, people who lack knowledge of Japan or experience with similar sounds can hardly form a complete and accurate idea of them. Hence, hearing Yōkai sounds in person becomes important to understanding Yōkai and their sounds.

By collecting and analyzing 21 Yōkai that make sounds, this project draws two critical conclusions about Yōkai sounds: first, Yōkai make sounds that people know from daily life, such as footsteps, cracking wood, and waterfalls; second, people experience these ordinary sounds as supernatural and strange because they cannot identify their source.
To authentically restore the naturalness and ordinariness of Yōkai sounds, the project recreates them with primitive sound-making means rather than sound synthesis technologies. With a carefully designed kinetic mechanism driven by a motor, each sound device produces the disyllabic clack of wooden sandals and visually suggests the movement of walking. The whole installation consists of eight identical sound devices placed along a staircase. Computer vision is applied to detect the number of people on the staircase and their positions. When a single walker is detected, the sound devices are activated in sequence as they walk up or down the stairs, creating the auditory experience of being followed by the invisible footsteps monster.
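As a rough illustration of the triggering logic described above, the mapping from a walker’s detected position to the device that should clack “one step behind” could be sketched as follows. This is a hypothetical reconstruction, not the installation’s code: the normalized position, the direction encoding, the one-device lag, and all names are assumptions, and the computer vision step that produces the position is omitted.

```javascript
// Hypothetical sketch: choose which of the eight staircase devices
// should fire so the clacking trails one step behind the walker.
// The detection pipeline that yields `position` is not shown.

const NUM_DEVICES = 8; // eight identical sound devices along the stairs

// position:  walker's normalized location, 0 (bottom) .. 1 (top)
// direction: +1 when walking up, -1 when walking down
// Returns the index (0..7) of the device one step behind the walker,
// or null if that trailing position falls off the staircase.
function deviceBehind(position, direction) {
  const walkerStep = Math.min(
    NUM_DEVICES - 1,
    Math.floor(position * NUM_DEVICES)
  );
  const behind = walkerStep - direction; // one device behind the motion
  return behind >= 0 && behind < NUM_DEVICES ? behind : null;
}
```

Driving the motors in this order as the detected position updates would create the impression of invisible footsteps trailing the visitor.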

The project expands a traditional two-dimensional reading experience of Yōkai, which relies only on the visual sense, into a three-dimensional, immersive, multi-sensory experience. It explores sounds and the relationship between sounds and the environment. By providing a playful interactive sound experience, the project hopes to introduce Yōkai to people and bring them to life in a new and fun way.


Tags: #soundinstallation #computervision


Leah Bian | Animagine: Speculating on the Future Ethics of Animal Cyborgs

Experience a speculative future in 2036 where mind-controlled animal cyborgs are the next generation of companions and service providers. Explore the ethical implications of this emerging form of coexistence between humans and animals.


In a world where science fiction becomes reality, the concept of “animal cyborgs” is no longer confined to the realm of imagination. Recent scientific experiments have demonstrated the transformative potential of modern technologies, blurring the boundaries between animals and machines. Drawing inspiration from these advancements, this project embarks on a speculative journey to explore the ethics and implications of a future where animals are transformed into animal-robot hybrids.

The video installation, the centerpiece of this project, captivates viewers with two 3D-animated advertisements for “Animagine,” a fictional company that promotes mind-controlled animal cyborgs as companions and service providers to the public. The advertisements depict a world where these cybernetic creatures seamlessly integrate into people’s daily lives, offering a range of benefits and possibilities.

Delving deeper, this installation endeavors to explore the moral complexities surrounding this emerging form of coexistence. By presenting viewers with hypothetical scenarios and thought-provoking concepts, it challenges their preconceptions and prompts reflection on the autonomy and agency of animals. For instance, the installation introduces the notion of mosquito cyborgs programmed to eliminate other mosquitoes, raising questions about the manipulation of nature and the potential consequences of such interventions.

Beyond the video ads, the installation incorporates informative posters that elucidate the design ideas behind the products, as well as 3D-printed prototypes of “Anima,” the mind-control devices used in the project. These representations provide a tangible connection to the technology, allowing viewers to engage with the project on a more tactile level and fostering a deeper understanding of its implications.

In essence, this multifaceted installation serves as a catalyst for introspection, inviting viewers to contemplate the profound ethical questions posed by the rise of animal cyborgs. Through a captivating blend of visuals, theoretical research, and technological prototypes, it encourages audiences to critically examine the potential impacts on the human-animal relationship, the limits of intervention in nature, and the intricate balance between progress and responsibility.

Tags: #Animalcyborgethics #DigitalConsciousness