Featuring
Keyboardist/Technologist: Jordan Rudess
AI Music System Designer: Lancelot Blanchard
Installation Artist/Designer: Perry Naseck
Faculty Advisor: Joe Paradiso
with special guest Camilla Bäckman, violin/vocals
Additional project support from:
Madhav Lavakare, visual mapping
Carlo Mandolini, fabrication
Brian Mayton, mechanical design, fabrication, and installation
Nathan Perry, embedded software support and installation
Phil Cherner, pre-show AI visuals
Eyal Amir, Spectrasonics, and Audio Modeling: sound software contributions
Program
Prelude to the Machine
The show opens with an improvisation introducing Jordan to the audience. There is no AI in this piece: just Jordan performing on Expressive E’s Osmose synth, a keyboard instrument whose gesture-sensitive keybed offers unique expressivity.
Lead by Code
In Jordan’s first interaction with the model, he plays chords and bass lines while the model listens and responds with lead lines.
Synthetic Dreams
The model shows it can rock out. Jordan trained the model in the style of progressive music, and the two trade phrases back and forth.
Symbiotic Rhapsody
Camilla enters, and now there are three musicians on stage. This time it’s Jordan’s turn to play the lead lines, which guide the model’s harmonic choices. The model is trained to listen closely to Jordan’s modal shifts and performs the bass lines. Camilla improvises based on both Jordan’s and the model’s musical decisions. Jordan and Camilla can also view a display of the model’s timeline, which previews where the model is going harmonically.
Freeform Frequencies
It’s the model’s turn to be in charge of harmonic decisions. The model lays down chords and bass lines, which interact with Jordan’s leads.
Fugue in Digital Minor
Let’s see how the model does with a contrapuntal style. The model is able to answer Jordan’s contrapuntal improvisation, matching his style.
Timeless Touch
Back to basics. Jordan has a solo musical moment, and lets the machine rest up for the finale.
Veil of Truth
This original song performed by Jordan and Camilla is the one piece of the evening that is NOT improvised live, except for 16 bars at the end, where the model takes its own solo over the chord progression.
Whispers of the Machine
We let the model take over and present us with its own choice of lush chords and harmonies. Jordan and Camilla will join the model for a grand finale group improvisation.
About the Project
When presented with the task of training our model to improvise, I didn’t have to teach it the essential improvisation skills that I had to learn as a music student. With the magic of machine learning blended with my musical modeling, it already had a foundation in music theory, familiarity with musical genres, technical mastery, and the ability to listen and respond. Our model, or “Baby Jordan” as we have affectionately called it, is a quick learner. It has made us proud with its consistent growth, but it is still learning and sometimes “misbehaves,” so you might hear some “wrong notes” tonight. Charlie Parker, one of the world’s greatest improvisers, said it best: “You’ve got to learn your instrument. Then, you practice, practice, practice. And then, when you finally get up there on the bandstand, forget all that and just wail.” Our model has been practicing, practicing, and practicing. Tonight we get to see if it is ready to wail.
Welcome to opening night, friends. This is only the beginning of what our team hopes will be a long journey together.
– Jordan Rudess
The kinetic sculpture behind Jordan is the physical embodiment of the jam_bot. The sculpture visualizes what the AI, and only the AI, is playing at any given time. Since the AI companion has no physical form, the sculpture helps the audience tell the onstage musicians apart: the notes the AI plays are expressed through its movements. The sculpture is also a medium for research: traditionally, onstage visuals are either pre-programmed as exact sequences or react only to what is currently being played. With AI improvisation, we can use the next minute of generated but not-yet-played music to inform the visuals. This allows the sculpture to communicate anticipation, energy, and focus, not unlike the cues a musician gives through posture, eye contact, and breath. Just as a violinist may lean in to prepare for a long bow stroke, or a keyboardist’s hands may jump across the keyboard, the sculpture shows what comes next through its movements. Because the AI constantly changes its mind, re-planning and adjusting to what Jordan is playing, the sculpture makes sweeping movements to signal a change in musical direction.
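For readers curious how not-yet-played music could drive movement, here is a minimal, purely illustrative sketch (not the installation’s actual code; the function name and the density/spread heuristic are our own assumptions) of mapping upcoming generated notes to an anticipation signal:

```python
# Hypothetical sketch: derive a 0..1 "anticipation energy" from music the
# AI has generated but not yet played, to scale a sculpture's movements.

def anticipation_energy(upcoming_notes, horizon=60.0):
    """upcoming_notes: list of (onset_seconds, midi_pitch) events.
    Looks at the next `horizon` seconds and blends note density with
    pitch spread into a single energy value in [0, 1]."""
    window = [(t, p) for t, p in upcoming_notes if 0.0 <= t < horizon]
    if not window:
        return 0.0
    density = len(window) / horizon                 # notes per second
    pitches = [p for _, p in window]
    spread = (max(pitches) - min(pitches)) / 127.0  # normalized pitch range
    # Blend the two cues and clamp: busy, wide-ranging passages -> big moves.
    return min(1.0, 0.5 * min(density / 8.0, 1.0) + 0.5 * spread)
```

A controller could poll this value as the generated timeline updates, making sweeping gestures when the energy of the upcoming minute jumps.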
– Perry Naseck
The jam_bot is made up of a variety of large generative AI music models, all capable of creating real-time accompaniments and improvisations in reaction to Jordan’s live playing. These models, based on the Music Transformer architecture, were trained on a large corpus of symbolic music and then fine-tuned on improvisations Jordan recorded in various styles and genres. The models were then optimized to run in real time, and we developed several interaction modalities that let Jordan dynamically prompt them. The result is a reactive, versatile musical agent that serves as Jordan’s improvisation partner on stage.
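To illustrate the general shape of such autoregressive continuation (this is a toy sketch, not the jam_bot’s code: the stand-in `stub_model` replaces a trained Music Transformer, and real systems use richer event tokens than bare pitches):

```python
# Illustrative sketch of autoregressive musical continuation: each newly
# sampled token joins the context before the next one is drawn.
import random

def stub_model(context):
    """Toy next-token sampler standing in for a trained network:
    picks a pitch near the last one heard."""
    last = context[-1] if context else 60
    choices = [last - 2, last, last + 2, last + 5]
    return random.choice([p for p in choices if 0 <= p <= 127])

def continue_phrase(live_input, n_tokens=16, model=stub_model):
    """Extend the performer's phrase token by token, feeding each
    generated note back into the model's context."""
    context = list(live_input)
    generated = []
    for _ in range(n_tokens):
        token = model(context)
        context.append(token)
        generated.append(token)
    return generated
```

In a live setting this loop would run ahead of playback, which is what makes the “preview” of the model’s upcoming harmonic direction possible.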
– Lancelot Blanchard
About the Artists
Voted “Best Keyboardist of All Time” by Music Radar Magazine, Jordan Rudess is best known as the keyboardist/multi-instrumentalist extraordinaire for the platinum-selling, Grammy Award–winning rock band Dream Theater. A classical prodigy who began his studies at the Juilliard School at the age of 9, Rudess blends classical and rock influences into a unique musical voice. In addition to performing with Dream Theater and pursuing a solo career, he has developed state-of-the-art keyboard controllers and music apps through his company Wizdom Music.
Camilla Bäckman is a singer, violinist, and songwriter from Helsinki, Finland. Her background is in classical violin, but she realized at a very young age that she was also a singer and a storyteller. Camilla competed on The Voice of Finland in 2014, and from 2017 to 2020 she performed as lead singer, violinist, and on-stage character in Cirque du Soleil’s VOLTA. Her debut album of original music, “Give Me A Moment,” was released in fall 2022.
Lancelot Blanchard is an AI researcher, engineer, and musician pursuing a Master’s degree in the Responsive Environments research group at the MIT Media Lab, where he researches applications of generative AI to musical expression and communication. His research interests center on working with artists and musicians to design interfaces that embed AI and use analog and digital signals to create new musical experiences.
Perry Naseck is an artist and engineer interested in interactive, kinetic, light-based, and time-based media. He specializes in the interaction, orchestration, and animation of systems of sensors and actuators. Perry completed a Bachelor’s degree in Engineering Studies and Arts at Carnegie Mellon University in 2022, studying both Art and Electrical & Computer Engineering. At the Hypersonic studio, he worked on large interactive, kinetic, and light-based public art installations found across the U.S. and abroad. He currently focuses on developing interactions that bring people closer to their environments.
Joseph Paradiso directs the Responsive Environments group, which explores how sensor networks augment and mediate human experience, interaction, and perception. His current research interests include wireless sensing systems, wearable and body sensor networks, sensor systems for built and natural environments, energy harvesting and power management for embedded sensors, ubiquitous/pervasive computing and the Internet of Things, human-computer interfaces, space-based systems, and interactive music/media.
About the Presenting Entities
The MIT Center for Art, Science & Technology (CAST) creates new opportunities for art, science, and technology to thrive as interrelated, mutually informing modes of exploration, knowledge, and discovery. CAST’s multidisciplinary platform presents performing and visual arts programs, supports research projects for artists working with science and engineering labs, and sponsors symposia, classes, workshops, design studios, lectures, and publications. The Center is funded in part by a generous grant from the Andrew W. Mellon Foundation.
The MIT Media Lab is an interdisciplinary creative playground rooted squarely in academic excellence, comprising dozens of research groups, initiatives, and centers working collaboratively on hundreds of projects. The Media Lab focuses not only on creating and commercializing transformational future technologies, but also on their potential to impact society for good.
Presented by the MIT Center for Art, Science & Technology with support from the MIT Media Lab
Special thanks to Danielle Rudess, Kevin Davis, Cornelle King, Jordan Miller, Eran Egozy, Irmandy Wicaksono, Jimmy Day, Xan Foote, Eyal Amir, Cedric Honnet, Fangzheng Liu, Sam Chin, Patrick Chwalek, Alan Han, Marie Kuronaga, Benedetta D’Eliah, Martin Sawtell, Char Stiles, Elinor Poole-Dayan, Michael Wong, and the CAST Team: Lydia Brosnahan, Philana Brown, Rayna Yun Chou, Stacy DeBartolo, Heidi Erickson, Katherine Higgins, Stephanie Irigoyen, Leila W. Kinney, Tim Lemp, Marisa McCarthy, Isaac Tardy, Leah Talatinian, and Evan Ziporyn.