Events

Chicago Creative Machines is a free, public lecture-performance series featuring artists who use machine learning and ML-adjacent tools and techniques in their creative practices. Each event consists of a technical lecture/demonstration in which the artist presents their tools, followed by a performance of work created with those tools. No previous knowledge of machine learning, coding, algorithms, or computation is required!

Series II at Experimental Sound Studio

February 23, 2025 // [Watch Here]
2:00 - 4:00 pm
Patricia Alves-Oliveira will address how robots can be used for unconventional tasks, such as serving as promoters of human creativity and artistic expression. She will share the design process and techniques for creating robots for and with users. Her work focuses on human empowerment and on envisioning technology as a way to partner with, rather than replace, humans.

March 2, 2025 // [Watch Here]
2:00 - 4:00 pm
Ted Moore presents a free FluCoMa workshop // Bring a laptop!

This workshop will introduce participants to FluCoMa by training neural networks to classify instruments by timbre and to control synthesizers in high-dimensional parameter space. FluCoMa is a toolkit for making music with machine listening and machine learning within Max, SuperCollider, and Pure Data. It offers audio decomposition tools that separate real-time and recorded audio into component elements, audio analysis tools that describe those components as analytical and statistical representations, data analysis and machine learning algorithms for pattern detection and expressive corpus browsing, and audio morphing and hybridization algorithms for remixing, interpolation, and variation-making.
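For the curious, here is a rough sense of the workflow in code. This is not FluCoMa itself (the workshop runs inside Max, SuperCollider, and Pure Data); it is a minimal Python sketch of the same idea, summarizing each recording with timbral features and training a small neural network to classify the instrument. The librosa and scikit-learn calls, and the file names, are illustrative stand-ins rather than workshop materials.

```python
# Conceptual sketch only, not the FluCoMa API: summarize recordings with timbral
# features (MFCC means) and train a small neural network to classify the instrument.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def timbre_features(path):
    """Return a rough timbre fingerprint: the mean MFCC vector of a recording."""
    audio, sr = librosa.load(path, sr=None, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
    return mfcc.mean(axis=1)

# Hypothetical training examples: short recordings of two instruments.
trumpet_files = ["trumpet_01.wav", "trumpet_02.wav", "trumpet_03.wav"]
oboe_files = ["oboe_01.wav", "oboe_02.wav", "oboe_03.wav"]

X = np.array([timbre_features(f) for f in trumpet_files + oboe_files])
y = ["trumpet"] * len(trumpet_files) + ["oboe"] * len(oboe_files)

# A small multilayer perceptron, analogous in spirit to FluCoMa's MLP classifier.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
clf.fit(X, y)

# Classify a new recording by its timbre.
print(clf.predict([timbre_features("mystery_note.wav")]))
```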

March 9, 2025 // [Watch Here]
2:00 - 4:00 pm
Caitlyn Min-ji Au will give a talk about indeterminacy and her work-in-progress with Roombas.

March 16, 2025 // [Watch Here]
2:00 - 4:00 pm
Bryan Pardo’s Interactive Audio Lab invents new tools to find, label, separate, modify, and generate sound. Prof. Pardo will discuss several projects that illustrate the lab’s work:

VampNet: VampNet is fast enough for interactive performance and can be prompted with music audio, making it well suited for creating loops and variational accompaniment.

Text2FX: Audio effects (e.g., equalization, reverberation, compression) are a cornerstone of modern audio production but are challenging for non-technical musicians, podcasters, and sound artists. Text2FX uses natural language as a more intuitive and accessible way to navigate the complex parameter spaces of audio effects.

AI-tribution: Today, "inspiration" has been automated by generative music models. The black-box nature of these models obscures the identity of the works that influence their creative output. As a result, users may inadvertently appropriate, misuse, or copy existing artists' works. AI-tribution establishes a methodology to systematically identify similar pieces of music audio in a way that is useful for understanding training-data attribution.

Experimental Sound Studio
5925 N Ravenswood
Chicago, IL 60660
link to live stream

Series I Archive

February 25, 2024 // [Watch Here]
2:00 - 4:00 pm
Hugo Flores García discussed his research in machine learning and music, where he designs artist-centered interfaces for creative expression through sound. He also shared several tools and techniques he uses for creating sound art with machine learning. Hugo presented an 8-channel fixed-media composition for voice and generative model, and performed a solo set of improvised music with an ML-powered instrument.

March 3, 2024 // [Watch Here]
2:00 - 4:00 pm
Hunter Brown demonstrated how he incorporates machine learning techniques into the control of sound synthesis algorithms implemented in a custom SuperCollider instrument he uses for free improvisation.

March 10, 2024 // [Watch Here]
2:00 - 4:00 pm
Julie Zhu talked about her obsession with the sound of writing and drawing and how it has played an important role in her interdisciplinary collaborations, from chamber compositions to video installations to spatial audio experiments. She also discussed a new digital co-performer being developed using deep learning models trained on her drawing habits.

March 17, 2024 // [Watch Here]
2:00 - 4:00 pm
In a talk titled "Undiscerning Mirrors," X. A. Li discussed the aesthetic possibilities of deep neural networks given both their architectural constraints and societal context. How can artists approach machine learning systems critically and inventively, beyond assuming the functional paradigms established by scientists and tech companies? The talk was followed by a performance of the paired video and sound pieces Instruction and Ecstasies, exploring the dogma of self-optimization amplified, embodied, and refracted by algorithms.

Q+A with the Series I Artists // [Watch Here]
March 30, 2024
9:00 - 10:00 am
X. A. Li, Julie Zhu, Hunter Brown, Hugo Flores García, and Molly Jones streamed a live Zoom conversation about machine learning as part of their creative practices: its promise, its pitfalls, its challenges, the concerns it raises, and future directions in their own work and in artist/engineer collaboration.

Introduction to AI and Music Workshops at the Library

Molly Jones, a composer, data engineer, and audio computation nerd, will provide an introduction to AI and music. Participants will look at inspiring examples of artists working with machine learning and music, learn a little about what AI is (and isn’t), and see demonstrations of neural networks that can generate and transform audio. All abilities and confidence levels welcome!

June 26, 2024 // Chicago Public Library, Portage-Cragin
6:00 - 7:00 pm

July 23, 2024 // Chicago Public Library, Jeffery Manor
6:00 - 7:00 pm

July 25, 2024 // Chicago Public Library, West Loop
6:00 - 7:00 pm

August 20, 2024 // Chicago Public Library, Richard J Daley
6:00 - 7:00 pm

November 16, 2024 // Lake Forest Library
1:00 - 2:00 pm

February 13, 2025 // Nerd Nite by Ann Arbor District Library
7:00 - 9:00 pm

Please contact me (Molly Jones) if you’re interested in booking an Intro to AI and Music workshop.