Speakers will present 10-minute case studies and are listed in the order they will present.
Speakers & Schedule are Subject to Change!
Welcome to #LTT2024! (8:45 – 9 AM EST)
- De Angela L. Duff (New York University)
Panel #1 (9 – 10:30 AM EST)
- Than van Nispen tot Pannerden (HKU University of the Arts Utrecht) – Proj1B “Maak het (niet) met A.I.” (translates to “(Don’t) Make It with A.I.”)
In “Project 1B” at the HKU University of the Arts Utrecht School of Music and Technology, first-year students embarked on a distinctive journey in which Artificial Intelligence (driven by GPT-4) played the role of a collaborative academic (lecturing) partner. This venture encompassed many facets, from outlining lessons and leading the initial presentations to co-designing homework assignments. The AI’s engagement even extended to proposing seating arrangements for the ethics lesson after being given a detailed description of the room.
Students were further supported with a tailored application (a Max/MSP patch) that enabled personalized interaction with their AI assistant, fondly named ‘Melodius Maximus’. The initiative encouraged students to invert the traditional homework paradigm, maximizing AI-generated content while minimizing manual student input.
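For readers curious about the mechanics behind such an assistant, the sketch below is a minimal, hypothetical Python rendering of the kind of GPT-4 chat loop a patch like this could wrap; the persona prompt, names, and model choice are illustrative assumptions, not the actual Project 1B implementation (which lives in Max/MSP).

```python
# Hypothetical sketch only: Project 1B's assistant is a Max/MSP patch;
# this just illustrates the kind of GPT-4 chat loop such a patch might wrap.
from openai import OpenAI  # assumes the openai Python package (v1+)

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative persona prompt; the real 'Melodius Maximus' prompt is not public.
SYSTEM_PROMPT = (
    "You are Melodius Maximus, a friendly teaching assistant for first-year "
    "music-and-technology students. Help them outline lessons and homework, "
    "and explain your reasoning briefly."
)

history = [{"role": "system", "content": SYSTEM_PROMPT}]

def ask(question: str) -> str:
    """Send one student question and return the assistant's reply."""
    history.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4", messages=history)
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

if __name__ == "__main__":
    print(ask("Suggest three homework ideas for a lesson on AI ethics."))
```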
A standout moment came when one student, unable to attend the final presentation, had it delivered by an AI avatar instead. This creative approach was well received by the faculty, showcasing the possibilities of AI in an academic setting. Moreover, ‘Melodius Maximus’ also provided feedback and took part in the final assessment, adding another layer to the discussion of AI’s evolving roles in education.
The insights from this endeavor, together with the experiences shared in the HKU-wide seminars “Arty Intelligence” and “AI, AI, A.I.”, position us to contribute meaningfully to the ongoing discourse on integrating generative AI into Creative Technologies contexts at the forthcoming #LTT2024 UnSymposium.
This narrative, rich with practical experiences and forward-thinking approaches, promises to be a stimulating contribution to the conference discussions, especially on the evolving synergy between AI and contemporary educational practices.
- Louis McCallum (University of the Arts London) – Natural Language Processing for Creatives
Generative Text and Human Collaboration. We look at different ways the output of generative text models can be developed by humans before it is presented as art or performance. We cover instances where humans don’t just generate lyrics but record the songs (AI Song Contest), don’t just generate scripts but have actors perform them (Sunspring, Date Night), and don’t just generate stories but give readings (Algonory – Shardcore). We also run a generative film club, watching two films written with different models (CharRNN and GPT-3) and comparing their strengths and weaknesses as scriptwriters.
This is about encouraging students to take the stance of Human as Collaborator, rather than Human as Curator, and explore ways to take the extra step beyond just presenting the text (or image, or audio) from a model as the final outcome.
- Clara Fernandez-Vara (New York University) – Generating Parser-based Games to Teach Narrative Design
Narrative Game Studio is a hands-on course that focuses on story-driven single-player games, providing the opportunity to do interdisciplinary work. This course introduces students to the design and development of narrative games, including conceptualization, foundational narrative design strategies, and writing. Students will learn how to use tools to develop narrative games; they will work individually at first and then in teams.
I use ChatGPT to generate code that students have to fix and expand on, as a way of introducing them to programming conventions they are not yet familiar with. In this case, I used ChatGPT to generate very short narrative games in the programming language Inform, which uses “natural language” to produce parser-based text adventure games. Students could see the raw generated code, what was needed to make it compile, and that it was functional, although far from an interesting game. They still had to do the work of modifying and expanding the original code into something compelling and interesting, which gave them a foundation to start from rather than a blank page. I have been teaching this class for 10+ years, and this exercise has been a resounding success in introducing people to this game development tool.
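As a purely illustrative aside (the abstract does not specify the exact tooling or prompt), a generation step of this kind could be scripted in Python roughly as follows; the model name and prompt wording are assumptions.

```python
# Illustrative sketch only: asks a chat model for a very short Inform 7 game
# and saves the raw, likely-imperfect output for students to repair and expand.
from openai import OpenAI  # assumes the openai Python package (v1+)

client = OpenAI()

PROMPT = (
    "Write a very short parser-based text adventure in Inform 7: "
    "two rooms, one takeable object, and one simple puzzle. "
    "Output only the Inform 7 source."
)

response = client.chat.completions.create(
    model="gpt-4",  # assumed model; the talk only mentions "ChatGPT"
    messages=[{"role": "user", "content": PROMPT}],
)

with open("generated_game.ni", "w") as f:
    f.write(response.choices[0].message.content)

print("Saved generated_game.ni -- compile it in the Inform IDE and fix what breaks.")
```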
- Phoenix Perry (University of the Arts London) – Coding One / Coding Two
These courses are the Creative Computing Institute’s advanced creative coding classes for the MSc in Creative Computing.
I will discuss a study of my students’ use of ChatGPT for understanding programming and enhancing learning, teaching with Copilot, and rethinking the computational thinking curriculum in light of students’ ability to generate code through prompts. What are the most effective ways to use Copilot and ChatGPT in the classroom, and how do they align with fostering inclusive creative practices through code?
- Rebecca Fiebrink (University of the Arts London) – Exploring Machine Intelligence
This is a one-term unit for our Masters in Creative Computing students. It introduces Machine Learning (ML) approaches, concepts, and methods through direct examples, practical problem-solving, and core technical training for creative applications. The unit explores a specific set of approaches for both interactive and offline Machine Learning using Keras. We cover creative applications of both supervised learning and generative learning. The unit culminates in a final independent creative project.
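As a rough flavour of the kind of coding such a unit assumes, rather than anything taken from its actual materials, a tiny supervised Keras model might look like the following sketch, with placeholder data standing in for, say, gesture or sensor features.

```python
# Illustrative only: a tiny supervised Keras classifier of the sort a
# creative-ML unit might start from (e.g., mapping gesture/sensor features
# to a handful of output classes). The data here is random placeholder data.
import numpy as np
import tensorflow as tf

# Placeholder dataset: 200 examples of 8 input features, 4 classes.
x_train = np.random.rand(200, 8).astype("float32")
y_train = np.random.randint(0, 4, size=(200,))

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(8,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(4, activation="softmax"),
])

model.compile(
    optimizer="adam",
    loss="sparse_categorical_crossentropy",
    metrics=["accuracy"],
)

model.fit(x_train, y_train, epochs=5, batch_size=16, verbose=0)

# Predict the class of one new (random) input.
print(model.predict(np.random.rand(1, 8).astype("float32")).argmax())
```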
What is the role of coding in teaching creative ML? I can talk about our decision to (for now) make this masters-level unit a coding-focused class, in which basic programming skills are taught as something that can unlock a more flexible, creative, bespoke approach to AI beyond the existing no-code tool space. On one hand, we see generative ML projects as a potentially good motivator to get people more comfortable with coding; on the other, this means we inherit all the inclusivity problems of programming and see students who come in with less coding knowledge or a lower sense of self-efficacy struggling (perhaps unnecessarily?).
Group Q&A Moderated by
- De Angela L. Duff (New York University)
Panel #2 (10:45 AM – 12:15 PM EST)
- Griffin Smith (Rhode Island School of Design) – Art and Artificial Intelligence
This studio course explores how AI’s rapid progress is challenging artists today. As we work with these exciting, terrifying new tools, we’ll discuss how artists have responded to transformative media of the past like the camera, the television, and the internet. How can we comment on the ethical concerns of AI technology? Should we change how we think about creativity? And who will the machines replace?
Students will experiment with new tools as they are released throughout the semester, as well as interview machine learning researchers and digital artists.
I’d like to share some of the key differences I’ve observed between designers and artists while teaching AI at RISD. Designers are concerned with dissecting different parts of their workflow for AI optimization, thinking of AI prompts like the briefs they get from clients, and leveraging AI as they sketch. Artists, on the other hand, are intrigued by AI as an alien mind, an unexpected and sometimes bizarre artistic muse, and a tool for exploring cultural bias dredged up from the internet. In our critiques, these two kinds of thinking inform many useful points of disagreement, and highlight opportunities for rethinking AI creativity.
- Roopa Vasudevan (University of Massachusetts Amherst) – Digital Imaging (UMass) and Design 21: Design After the Digital (UPenn)
This presentation is based on two class experiences. The first is an experimental demo I conducted while teaching the studio class ART 275 (Digital Imaging) this past semester at UMass, where I used my own DALL-E, Midjourney, and Adobe accounts to demonstrate 1) the craft necessary in writing image generation prompts, and 2) the differences in what each system generated for the same prompt, based on the models it was trained on. The second is an assignment given to students in DSGN 0020, a design and technology ethics seminar I led at the University of Pennsylvania, where I asked students to generate a set of images with DALL-E and critically reflect on the process, their expectations, and what was produced. Together, these assignments start to probe the necessity of examining generative image creation as a creative process in and of itself—requiring craft and attention to writing, not simply clicking a button and spitting out beautiful imagery, as many students believe going in—alongside all of the ethical consideration necessary when using these tools (e.g., using a specific artist’s work as a reference, generating “realistic” people, using the tools for mis- and disinformation, etc.).
- Golan Levin (Carnegie Mellon University) – Foundations: Digital Media II
This course is a practical introduction to expanded modes of creative practice made possible by the computer. In this studio course, students develop the skills and confidence necessary to produce interactive, generative, and immersive artworks; discuss their work in relation to current and historic praxes of electronic art; and engage new technologies critically. Topics in this course include no-code and low-code approaches to: internet art, immersive world-building, and environmental storytelling; generative art and experimentation with learning machines; creative interventions and performance in social platforms; and the development of branching narratives and games.
Since 2020, our Foundations II course has included a unit on the exploratory use of generative AI.
The use of generative AI tools in arts pedagogy is new, poorly understood, and controversial. In spring 2023, we introduced a new exercise to explore their use: the creation of “AI Chapbooks” written and illustrated by student-guided machines.
This project engages arts students with experimental tools that are only a few months old, and whose very existence is fraught with new forms of potential infringement and abuse. Our hope is that this exercise is not only instructive in the aesthetics and mechanics of using these new technologies, but in their ethics as well.
The purpose of this project is for arts students to learn how to guide AI systems to produce a compelling cultural object—and for them to navigate digital systems to realize this object through a real-world publication process. In order to generate their books, students had to learn how to guide AI tools like ChatGPT (for text) and Midjourney (for images), an indirect creative process that is very different from their usual way of working. Additionally, students learned how to use Adobe InDesign to lay out their books, and how to use a digital print-on-demand service, a publication method of great practical value.
The results of this project can be seen here:
https://golancourses.net/60120/deliverables/1-aiart/ai-chapbook-gallery-2023/
- Christian Grewell (New York University) – B-Roll: Open Source AI Short Filmmaking
B-Roll is a workshop focused on the practical use of open-source AI tools in filmmaking. In this session, we’ll be utilizing tools like Stable Diffusion, ComfyUI, and Bark to create super short works of media art.
I would like to teach participants a potential workflow for working with generative AI tools for film, photography, and sound. My goal is to run everything from my laptop and to give participants a clear, step-by-step way of recreating the workflow.
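The workshop itself is built around ComfyUI, so the following is only a rough, hedged Python sketch of comparable open-source building blocks (the diffusers and bark libraries); the model ID, prompts, and file names are assumptions, and a GPU is effectively required.

```python
# Rough, illustrative sketch of open-source image + speech generation.
# The actual workshop workflow uses ComfyUI; this is not that workflow.
import torch
from diffusers import StableDiffusionPipeline            # pip install diffusers
from bark import SAMPLE_RATE, preload_models, generate_audio  # pip install suno-bark
from scipy.io.wavfile import write as write_wav

# 1) Generate a still frame with Stable Diffusion (assumed model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
frame = pipe("an empty cinema at dawn, 35mm film still").images[0]
frame.save("frame_001.png")

# 2) Generate a line of narration with Bark and save it as a WAV file.
preload_models()
audio = generate_audio("The projector hums to life in the empty room.")
write_wav("narration_001.wav", SAMPLE_RATE, audio)
```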
- Liat Lavi (University College London) – AI Doppelgängers In The Service Of Reflective Practice
The workshop presented centers on creating AI doppelgängers using the character.ai platform. Students then interrogate their AI counterparts about their graduation project, development process, research, and creative outputs. The results are surprising, often amusing, and always pertinent to the students’ reflective process. Generally, the AI outputs and their analysis advance the students by demonstrating what they would *not* like to do in their projects. The outputs also highlight the trivial/banal/kitsch replies to the questions the students pose, allowing them to steer their projects toward more complex and profound avenues.
The talk will center on using AI outputs as a means of creative provocation, relieving existential fears, jumping into the process, and better understanding what one would like to do through a critical rejection of what AI has to suggest.
Group Q&A Moderated by
- Scott Fitzgerald (New York University)
Lunch + Learn (12:30 – 1:30 PM EST)
- Workshop 12:30 PM – 1:00 PM EST
Adam Tindale, Emilie Brancato, and Lori Riva (OCAD University) – Navigating Generative AI in Art and Design Education: Policy, Pedagogy, and Creative Practice
This is an active participation session in which we ask participants to generate a list of experiences, concerns, and hopes they have for GAI in the classroom, in their institutions, and in the world. We create a collaborative Miro board where people can post their ideas, and we have a few ways of categorizing and arranging the ideas spatially so we can see trends. We use this as a way of generating ideas collectively that participants can use to shape their own practice or effect change in their institutions. We try to focus on promoting and enabling positive and ethical action with GAI rather than on time-consuming actions that simply impede its use, like banning it in writing classes. The exercise starts by asking participants to note experiences they have had recently and to place them between desired and undesired effects; we then ask participants to extrapolate from these experiences to future ones. We arrange the notes in space with an X axis for time and a Y axis for desired/undesired, with the top-right quadrant holding items in a desired future. As a group, we work to summarize and analyze the board so that we can start to devise actions we can take today that we can imagine leading to the desired future the group has envisioned.
- Lunch Break 1:00 PM – 1:30 PM EST
Panel #3 (K-12 • 1:30 – 2:45 PM EST)
- Emma Wingreen (Code.org®) – CS Connections: AI for English and History
Code.org has been designing lessons on the use of generative AI for English and History classrooms in the 6-12 grade bands. The ELA lessons focus on different ways to use generative AI in creative and argumentative essay writing, with the end result being that students develop their own set of AI values. The History lessons focus on ways to use generative AI to complement or augment search engines when writing document-based question (DBQ) essays.
This presentation will share findings from Code.org’s focus groups and current middle school and high school pilot classrooms. It will document the process of how the Code.org Curriculum Team pitched the idea to teachers, created the lesson plans, and ultimately piloted the lessons.
- Beth Rosenberg, Ray Barash, and Haley Shibble (Tech Kids Unlimited) – NATE: A Neurodiverse Accessibility Tech Education Tool – A Look at How AI Can Help Educators and Excite Students in the Classroom
While AI might be perceived as a source of frustration for educators, at Tech Kids Unlimited (TKU), a non-profit education organization housed within The Ability Project at NYU Tandon, we see it as a powerful tool to captivate and empower neurodiverse students. At TKU, neurodiverse students ages 10 to 24 learn about machine learning, get excited about AI, and make a live chatbot, all while educators learn how to improve their own curricula. Using a new educator platform, playlab.ai (currently in beta), TKU educators will talk about how AI can level the playing field for students with disabilities and will engage fellow educators in making a live chatbot!
TKU educators will discuss why AI matters for students with disabilities and will invite participants to help create a live chatbot centered on an education topic. The session aims to strengthen educators’ own curricula and culminates in a live chatbot demonstration.
- Diana Dias (Avenues São Paulo) – AI Explorer
The AI Explorer Workshop is a Professional Development session for K-12 Faculty and Staff. Through hands-on activities and discussions, teachers will journey through different perspectives on AI, examining its varied applications, societal impacts, ethical considerations, and potential future scenarios. Additionally, this workshop offers educators practical insights on how to integrate AI into their classrooms, encouraging the sharing of AI lesson plans and strategies.
How do we support and guide critical learning of emergent technology in a school community? How can we meet faculty at their different stages of understanding Artificial Intelligence? The AI Explorer workshop was a key piece of my journey as a Technology Integrator in a K-12 school as the impacts of generative AI on education became more and more present. I will share how we approached preparing this Professional Development session as a way to embrace the challenges that emerged from everyday use: shining a light on ethical concerns, fostering space for speculation about the future, and sharing practices.
- Ariel Han (University of California, Irvine) – StoryAI: AI-based Personalized Story Authoring Platform to Empower Youth and Families’ Literacy Development
StoryAI is an interactive, web-based platform aimed at helping children aged 5 to 12 improve their reading and writing (ELA) skills. The platform is engaging, educational, accessible, and user-friendly, incorporating a mix of teacher-created lessons, gamified modules, and quizzes. The system is constructed from atomic, interchangeable AI-assisted web services. Each service is implemented in a self-contained fashion that allows developers or lesson content authors to fully customize any type of lesson with a defined purpose. By harnessing generative AI, stakeholders in the education sector can create more personalized, engaging, and effective learning experiences that cater to diverse needs and support overall student success.
I will present the process of designing and developing the StoryAI platform, from need-finding interviews through an iterative design cycle, to the learning implications of deploying the platform in educational settings with students in grades 4 to 6.
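The abstract does not name StoryAI’s actual stack, so the sketch below is a purely hypothetical illustration of what one “atomic, self-contained, AI-assisted web service” could look like; the framework choice (FastAPI), endpoint, and lesson fields are all invented for illustration.

```python
# Hypothetical illustration only: StoryAI's real architecture is not described
# in the abstract. This sketches one small, self-contained "lesson service".
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-story-lesson-service")

class LessonRequest(BaseModel):
    grade_level: int               # e.g., 4-6
    theme: str                     # e.g., "friendship", "space exploration"
    vocabulary_words: list[str] = []

class LessonResponse(BaseModel):
    story_prompt: str

@app.post("/lesson/story-prompt", response_model=LessonResponse)
def make_story_prompt(req: LessonRequest) -> LessonResponse:
    """Build a personalized story-writing prompt; a real service would pass
    this on to a generative model rather than returning it directly."""
    words = ", ".join(req.vocabulary_words) or "any words you like"
    prompt = (
        f"Write a short story about {req.theme} suitable for grade "
        f"{req.grade_level}. Try to use these words: {words}."
    )
    return LessonResponse(story_prompt=prompt)

# Run locally (assumes uvicorn is installed): uvicorn story_service:app --reload
```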
Group Q&A Moderated by
- Hui Soo Chae (New York University)
Panel #4 (3 – 4:30 PM EST)
- Todd Ingalls (Arizona State University) – Machine Learning for Media Arts
This course provides an interdisciplinary introduction to machine learning techniques, from the classical to the contemporary. It discusses methods and implementations of machine learning in creative outcomes, including image, video, and sound synthesis, with both real-time and non-real-time approaches. It also discusses issues of representation, bias, and ethics in machine learning that help students frame their projects with critical awareness. Preferred incoming skills include programming, system building, or integration using Max/MSP or Python, and/or expertise in signal processing, composition, technical development, or sensor work.
This course is co-taught with my colleague Pavan Turaga, director of the School of Arts, Media, and Engineering. I would like to give a quick overview of how we have structured and adapted the class over the past four years, show the breadth of work being done by students, and share the perspectives on teaching this type of class that we have developed along the way.
- Norah Lorway (Toronto Metropolitan University) – Artificial Intelligence and Music
In this course, students will learn the fundamentals of artificial intelligence and machine learning that can be used to work with musical audio, human-computer interaction, and other real-time data events. This course will focus on the creation of algorithms for live performance practices which can be directly employed in creating new real-time systems for the arts. Students will apply these frameworks through a series of lectures, studio labs, workshops, and assignments.
In this presentation, I will discuss new strategies for teaching music computing using generative AI such as ChatGPT. I will also discuss Scorch, a language I co-created, which students have used throughout my course to test its ability to integrate with ChatGPT in ways that make learning more accessible.
- Chen Wang (California State University, Fullerton) – Computer-Assisted Graphics
This course revolutionizes graphic and interaction design education by seamlessly integrating Augmented Reality (AR) and Artificial Intelligence (AI). Students gain a profound understanding of these technologies, cultivating skills to incorporate them into design processes. Examining AI’s impact on the design workflow, we explore streamlined tasks, varied design generation, and enhanced creative decision-making. Emphasizing human-AI collaboration, designers can amplify their skills for impactful communication.
In an innovative initiative, we propose converting the school’s Arboretum into an ‘AR-boretum,’ a cross-disciplinary laboratory. This space facilitates AR and AI experimentation, encouraging collaboration with experts from diverse fields. By embracing AI as a catalyst for innovation, educators guide students to enhance creativity and problem-solving. Through interdisciplinary collaboration, students can address real-world challenges, pushing the boundaries of interactive design. Transforming the Arboretum into an ‘AR-boretum’ leverages technology for experimentation, putting AI and AR in the service of design education.
- Tyler Coleman (University of California Santa Cruz) – Creativity with AI (at the University of Texas at Austin)
In the Creativity with AI course at UT Austin, we took an AI-first approach to creativity: all students taught themselves cutting-edge generative tools, wrote case studies on the ethical concerns, and then chose either to keep working with the tools or to continue their case studies as a final project. I’d like to present my findings on splitting an AI course between using the tools and discussing their ethical implications, along with takeaways from this approach.
- Daniel Lefcourt (Rhode Island School of Design) – Generative Systems
Open to all departments, this course centers on student-led projects that extend the work of students’ primary studios in Film, Architecture, Industrial Design, Sculpture, and more.
I’ll present my approach to teaching Generative Art & Design. Specifically, how new AI technologies connect to the history of contemporary art.
Group Q&A Moderated by
- Luke DuBois (New York University)
Panel #5 (4:45 – 6:15 PM EST)
- Veronika Szkudlarek (OCAD University) – XR Space Jam: Introducing AI in Experimental Animation and Painting
XR Space Jam is an introductory course I developed for the new Experimental Animation department at OCADU. It explores themes of immersion and interaction in relation to extended reality, in an effort to demystify new technology and inspire contemporary engagements with AI. Through a series of low-stakes assignments, students are introduced to a variety of XR techniques that act as catalysts for exploration and play.
This presentation will share some of the work made in this course and provide a brief overview of how its assignment structure can help promote AI and XR in Painting and Animation.
- Jennifer Kowalski (Lehigh University) – Web Design I
This course is an introduction to the design and fabrication of web pages. I want to share a case study of how students used AI for more interesting concepts and content in an introductory web design class.
Students in this Web Design I course each created websites to advertise a fictional event. They came up with unique concepts for their event and used ChatGPT to help generate copy for the site. Many students also used generative images instead of stock images for the web design. Students then used Figma to create wireframes and high-fidelity prototypes, followed by creating a basic functioning website.
The use of generative AI allowed students to explore their own unique topics rather than relying on “Lorem Ipsum” generic placeholder text or a standard assigned prompt.
- Mario Nakazawa (Berea College) – Introduction to Game Design
This course is an introduction to creative (not cookie-cutter) game design, whether or not one thinks of computer games. Students should expect to apply the game design process, explore the myriad ways that make games compelling, and examine the various societal issues game developers must consider. Students will (1) develop their own simple computer game and (2) complete a design document for an unrelated game that they want to design (without building it).
I created two assignments: the first asks students to develop a prompt for ChatGPT to generate a story for their game, and the second asks them to create a prompt so that https://pixlr.com/ai/ai-image-generator/ will produce art pieces for their game. The assignments gave them guidelines for writing an effective prompt, for critiquing the generated output, and for modifying their prompts to get something closer to what they want.
- Lisa Maione (Kansas City Art Institute) – Imagining Near-Future Workspace Interfaces
This is a junior-level research and process course.
I would like to present an ongoing project in which I invite students to envision and design toward workspaces and interfaces that do not yet exist. The students and I use generative AI image engines to create images of workplaces. We push, test, and iterate on what near-future screen interfaces might look like, along with the rooms, spaces, and people we imagine inhabiting these scenarios and the interface designs that go with them. It is one thing to design for a screen or wearable that we can see and hold today; it is another to imagine a system of visuals and haptics that does not yet exist. We talk about interface design for film and TV as an adjacent place for this kind of exercise.
- Alexander Manu (OCAD University) – Disruptive Futures
This course examines how companies can deploy and implement disruptive technologies such as AI and robotics, and the role design plays in this context. We explore the development of disruptive technologies and how companies can adopt them to improve user experience and efficiency, enhance products and services, and diversify their revenue streams. Industry strategies are used to identify opportunities within disruptive technologies and to implement them in developing products and services. The legal, ethical, and economic implications of incorporating disruptive technologies into everyday products and services are examined.
I would like to present a project in which students used generative AI (ChatGPT) for the research phase and the reporting of results and opportunities, as well as Midjourney for the physical design of the opportunity.
In this team project, students were tasked with exploring the disruptive potential of integrating AI technology into waste management systems. The project covers all facets, from technological possibilities and design implications to business strategies and ethical considerations.
Group Q&A Moderated by
- Carla Gannis (New York University)
Closing (6:15 – 6:30 PM EST)
- De Angela L. Duff (New York University)