Computer rendering of a network of many black lines connected at nodes.
Markus Buehler’s Network Model. Photo: Dr. Zhao Qin, MIT.

The nexus of materialized sound and sonified material

Markus Buehler covers the interface of material and sound, explaining how we can transcend scales in space and time to make the invisible accessible to our senses, and to manipulate it from different vantage points. He starts from the macroscale—things we can see with our eyes—and moves to smaller scales, probing the unique features of the nano-world. Learn how sound can be shaped with molecular vibrations, how molecules can be designed with new sound, and how the neural networks of living systems (their brains) can form a medium for translation between different material manifestations.

Buehler begins with a brief review of recent work with CAST Visiting Artist Tomás Saraceno, CAST Faculty Director Evan Ziporyn, and others studying three-dimensional spider webs, modeling, and translating those complex structures into a playable musical instrument. The instrument is used to manipulate the natural sounds of spider webs, generating human input that can be fed back to the spider. The spider processes the auditory signals as vibrations of the silk strings and responds by altering its behavior and generation of silk material, reflecting a vision by which sound becomes material.

While spider webs are fascinating, there is more to see at the nano-level, where all things are always in motion. Tiny objects are excited by thermal energy and set in motion to undergo large deformations. Taking advantage of this phenomenon, the frequency spectrum of all known protein structures—more than 110,000—can be computed, translating motion into audible sound. Using AI, these natural protein sounds are evolved into new patterns, exploring an interface between human musical expression and learned behavior, and how it can guide the discovery of new materials and a better understanding of physiology and disease etiology.
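The translation from molecular vibration to audible sound can be sketched as an octave shift: protein modes vibrate at terahertz frequencies, far above hearing, but repeatedly halving each frequency preserves the ratios between modes while bringing them into the audible band. The following is a minimal illustration of that idea, not Buehler's actual pipeline; the mode frequencies here are hypothetical placeholders, whereas the real work derives them from protein structures via vibrational analysis.

```python
import numpy as np

def transpose_to_audible(freq_hz, lo=20.0, hi=20000.0):
    """Shift a frequency into the audible band by whole octaves,
    preserving the ratio relationships between modes."""
    f = freq_hz
    while f > hi:
        f /= 2.0
    while f < lo:
        f *= 2.0
    return f

def sonify_modes(mode_freqs_thz, duration=1.0, rate=44100):
    """Render a set of vibrational modes (given in THz) as a chord of
    sine partials, each octave-shifted into hearing range."""
    t = np.arange(int(duration * rate)) / rate
    wave = np.zeros_like(t)
    for f_thz in mode_freqs_thz:
        f_aud = transpose_to_audible(f_thz * 1e12)  # THz -> Hz, then shift
        wave += np.sin(2 * np.pi * f_aud * t)
    return wave / max(len(mode_freqs_thz), 1)  # normalize amplitude

# hypothetical mode frequencies, for illustration only
chord = sonify_modes([1.2, 2.4, 3.1])
```

The octave shift is a design choice: it keeps intervals between modes intact, so the resulting chord reflects the spectrum's internal structure rather than its absolute scale.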

Proteins are the most abundant building blocks of all living things, and how they move, assemble, and fail in disease is a foundational question that transcends many academic disciplines. The structure of the very protein materials used to build our bones, skin, organs, and brains finds representation in the many creative expressions humans have projected over tens of thousands of years, in that our bodies—how they function in a healthy state but also how they fail in disease—are reflected in all expressions of art. With this form of microscope, we can begin to see the world within us and exploit it for new engineering designs.

The translation from various hierarchical systems into one another poses a powerful paradigm to understand the emergence of properties in materials, sound, and related systems, and offers new design methods to materialize what we hear and help us understand how the materials build us up.

Markus J. Buehler
Jerry McAfee (1940) Professor

Markus J. Buehler is the McAfee Professor of Engineering at MIT, Head of the Department of Civil and Environmental Engineering (CEE), and leads the Laboratory for Atomistic and Molecular Mechanics (LAMM).

His primary research interests focus on the structure and mechanical properties of biological and bio-inspired materials, with the aim of characterizing, modeling, and creating materials with architectural features from the nano- to the macroscale.

More about Markus Buehler

Vision in Art and Neuroscience.

Computation for the Interstices

Structure, collectively cultivated over generations, underlies artistic creation. A single artwork manifests one instance of that structure; learning executable models of it enables the creation of manifold instances and quick iteration. With machine learning algorithms, models of the statistical structure underlying a set of existing works can be inferred and used to generate new, similar work. An algorithm that knows little else about the world can learn, from the combined features of its inputs, a recipe for new creation—ranging from homages to a particular period to chimeras of geography and style. Here we find a great beauty of AI intermingling with art: the emergence of a toolkit for collaborating with a model of cultural history on a timeline that allows experimentation and rapid evolution in the present. The toolkit should be broadly and openly accessible, allowing everyone to take part in cultivating the collective structures that underlie culture and creation. Implementing this on a digital collection of cultural works gives us the ability to interpolate and to iterate. Interpolating lets us explore the interstices of a collection to discover, in the space between existing works, echoes of those that could have been created but never were. Iterating lets us explore thousands of variations and select feature combinations to inspire the next generation of outputs, evolving collections forward.
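The interpolation described above can be sketched in a few lines: latent codes for two existing works are blended linearly, and each intermediate code is decoded into a work occupying the space between them. The sketch below uses a toy linear "generator" as a stand-in for a trained GAN; the names, dimensions, and decoder are illustrative assumptions, not the project's implementation.

```python
import numpy as np

def interpolate(z_a, z_b, steps=5):
    """Linear interpolation between two latent codes; each intermediate
    point decodes to a work 'between' the two originals."""
    alphas = np.linspace(0.0, 1.0, steps)
    return [(1 - a) * z_a + a * z_b for a in alphas]

# Stand-in 'generator': any decoder mapping latent codes to outputs
# (a trained GAN generator would play this role in practice).
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 8))       # toy decoder weights
generator = lambda z: np.tanh(W @ z)   # latent (8,) -> 'image' (64,)

z_a, z_b = rng.standard_normal(8), rng.standard_normal(8)
frames = [generator(z) for z in interpolate(z_a, z_b, steps=7)]
```

Because the endpoints of the interpolation decode exactly to the two original works, the intermediate frames trace a continuous path through the interstices between them.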

Schwettmann discusses the recent collaboration between the Metropolitan Museum of Art, Microsoft, and MIT, which offered an opportunity to bring a prototype of this vision to life. With Dasha Zhukova Distinguished Visiting Artist at CAST Matthew Ritchie and collaborators from Microsoft and the Met, she built a machine learning model of the structure underlying different categories in the Met’s digital collection and developed an interactive web studio where visitors can explore and experiment with that structure. Gen Studio places existing Met images on a map of an associated latent space, a space of features that can be recombined to create new images using trained neural networks known as GANs (Generative Adversarial Networks). As visitors explore the map, they see new images generated from features of the Met images, weighted by their distance on the map from each one. The experience was designed to be generative—simultaneously conveying both the uniqueness of each image and its potential to be iterated.
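The distance-weighted generation described here can be approximated with inverse-distance weighting: the latent features of each anchor image contribute in proportion to how close the visitor's position on the map sits to that image. A minimal sketch, assuming toy 2-D map coordinates and 3-D latent codes (the actual studio blends trained GAN latents):

```python
import numpy as np

def blended_latent(cursor_xy, anchor_xy, anchor_z):
    """Inverse-distance weighting: the closer the cursor is to an
    anchor image on the map, the more its latent code contributes."""
    d = np.linalg.norm(anchor_xy - cursor_xy, axis=1)  # cursor-to-anchor distances
    w = 1.0 / (d + 1e-8)                               # nearer anchors weigh more
    w /= w.sum()                                       # weights sum to 1
    return w @ anchor_z                                # weighted sum of latent codes

anchor_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # map positions
anchor_z = np.eye(3)             # toy latent codes, one per anchor image
z = blended_latent(np.array([0.0, 0.0]), anchor_xy, anchor_z)
```

When the cursor sits exactly on an anchor, its weight dominates and the blend collapses to that image's own latent code; in between, the generated image mixes features of all nearby anchors.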

Schwettmann addresses this project as well as a future it inspires, where AI models for making become an explicit part of shared heritage, and participation in training and evolving them becomes a shared cultural practice.

Sarah Schwettmann
PhD candidate, MIT Brain and Cognitive Sciences

Sarah Schwettmann is a computational neuroscientist interested in the creativity underlying the human relationship to the world: from the brain’s fundamentally constructive role in sensory perception to the explicit creation of experiential worlds in art.

More about Sarah Schwettmann

A hallway with colorful art on the ceiling and walls.
Matthew Ritchie's Game of Chance and Skill, 2002. MIT Zesiger Sports and Fitness Center (Building W35). Commissioned with MIT Percent-for-Art Funds.

Turbulence in Porous Media

Just as previously disparate and inaccessible perceptual systems and physical scales are becoming accessible through new technology, it is becoming possible to imagine previously disparate informational systems and disciplines, such as art and science, becoming porously interconnected through AI, and even to imagine these newly accessible scales and systems being coherently visualized in virtual reality. In such a fluid environment, questions of informational turbulence, relative transparency, and porosity across different media will become critical.

Art practice can operate between world building and picture building, generating noemas, or thought-objects, that can be manipulated (using GANs) to explore the relationships between systems of thought, types of intelligence, and thought-objects. A first proof of concept, developed in collaboration with Sarah Schwettmann, was presented at the Met on February 4, 2019. The next step is the integration of human perceptual structures and non-human signaling environments into a fluid but coherent information ecology—an invisible college.

Matthew Ritchie
2018-19 Dasha Zhukova Distinguished Visiting Artist, CAST

Matthew Ritchie’s artistic mission has been nothing less ambitious than to question the various systems we use to represent and visualize the universe, and the relationships between the structures of knowledge and belief that we use to understand it.

More about Matthew Ritchie