This year’s biennial CAST symposium explores the art and science of computation
In the popular imagination, artificial intelligence is either a salve or a menace: a bright panacea that will optimize our brains and solve all our problems, or a cold interloper threatening our livelihoods, our democracy, and our humanity itself. By bringing together artists, humanists, scientists, and engineers, the MIT CAST symposium “Unfolding Intelligence: The Art and Science of Contemporary Computation” challenged this binary to investigate new ways of thinking about computation and its ramifications for art and society. “Unfolding offers different, and sometimes counterintuitive and disruptive, in a good sense, ways of asking what computers do and mean,” said Stefan Helmreich, who convened the symposium alongside D. Fox Harrell, Caroline A. Jones, and William Lockett. Over the week-long virtual gathering, participants explored how AI might provide new models of the world, offer new forms of perception and creative expression, and help create a more equitable and just future.
Deep Time & Intelligence
Looking at scales ranging from the human to the universe, the panel Deep Time & Intelligence, convened by William Lockett, examined how intelligence develops over time and in response to specific environments. As musicologist Gary Tomlinson pointed out, meaning-making is not a fixed property but a flexible and evolving process, involving attention, memory, and learning, shaped by an organism’s interactions with a changing environment. Even genes, he said, are not fixed blueprints but adaptive “open information systems in constant flux with the situations around them.” But can machines begin to approach this kind of fluid intelligence? In his work, computer scientist Antonio Torralba builds neural networks that form sensory associations–learning, for example, the difference between the sound of a drumstick hitting a cymbal versus a drum, whether it was a hit or a scratch, and for how long–without any human training, just as an infant learns to discriminate between different materials through senses like sight, sound, and touch.
Musician Holly Herndon’s “AI baby” Spawn is fed with communally sourced data sets and performs alongside Herndon in her vocal ensemble. In this work, Herndon probes the relationship between the art form and its source material, contrasting “sampling” with the idea of “spawning.” Whereas the former often appropriates from its sources without adequate credit, Herndon asserts, the latter consciously attempts to properly credit and build upon the communal knowledge of the past. Materials scientist Markus Buehler also collaborates with AI tools to create music. By sonifying protein sequences, the basic structures of life on earth, Buehler looks to shed light on our primordial past–what might have happened during the birth of the universe. “The creation of music is really a deeply human experience,” Buehler said. “It’s a way of mapping out memory of deep time.”
Unfolding Models
In exploring how computers are used to represent everything from the state of the cosmos to the evolution of species to the spread of COVID-19 across different communities, the panel Unfolding Models, convened by Stefan Helmreich, demonstrated how our lives are shaped by simulations. The cosmologist Priya Natarajan discussed how she inputs telescope data into a computer simulation to understand how black holes grow in the universe. The computer model, she proposed, not only reflects knowledge but generates it, engendering new kinds of questions and offering a common language of exchange among scientists from different subfields.
Using Charles Darwin’s metaphor of the entangled bank, evolutionary biologist C. Brandon Ogbunu discussed how mathematical models can aid us in unfolding — disentangling — the complexities of biological and ecological systems, which, in the case of phenomena such as the contemporary global pandemic, must always be recognized as also enmeshed in sociopolitical dynamics. Artist Rosa Menkman challenged us to consider how computer models always exist within digital substrates that may glitch or stutter and, in so doing, reveal limitations — as well as unexpected possibilities — in practices of information storage and rendering. Models, as convener Stefan Helmreich explained, “act as theory animated — in cosmology, biology, aesthetics — with consequences that may be all at once epistemological, political, and world bending.”
Bias in AI
While technological change is often seen as a sign of progress, the panel Bias in AI, convened by D. Fox Harrell and co-presented by the Center for Advanced Virtuality, explored how entrenched prejudices are in fact built into new technologies. Sociologist Ruha Benjamin, outlining the historical connections between eugenics and statistics, discussed what she calls “the New Jim Code,” in which ongoing racist practices are encoded into new technological systems, appearing everywhere from healthcare to education. Computer scientist Jon Kleinberg demonstrated, in mathematical terms, how blinding an algorithm to race or other identity categories, rather than achieving a “colorblind” neutrality, in actuality creates inequity.
But if artificial intelligence is used to perpetuate discrimination, can these same computational tools be used to subvert power? How might we train machines to look and act differently? With her robotic garments, artist Behnaz Farahi uses computer vision technologies to subvert the male gaze. In one “smart” cape she created, a facial-tracking algorithm provides women with information about the age and gender of those looking at them. And yet, while it might be tempting to see artificial intelligence as a quick-fix solution to long-standing social problems, Benjamin cautions us against putting faith in a single “social justice bot” to “slay centuries of racist and sexist demons.” Instead, as convener D. Fox Harrell noted, “we can combine our technical and scientific approaches with the profound insights from social sciences, humanities, and the arts—ranging from feminist scholarship to digital media arts—in order to more completely understand the kind of operation of these systems.”
Open Systems
The panel Open Systems, convened by Caroline A. Jones and co-presented by the Transmedia Storytelling Initiative, questioned the very definition of intelligence itself. Could the idea of sentience encompass not just an individual human brain, but the homeostatic intelligence of a planetary body? “For me ‘open systems’ implies opening out intelligence to these different kinds of forms that might be machinically more than human, or organically multi-species, or cosmically other,” said Jones. In her multimedia performances, artist Jenna Sutela collaborates with neural networks and extremophile bacteria in an attempt to find a common language among species. With curator Lars Bang Larsen, Sutela recast AI as a form of “alien intelligence,” a more-than-human mode of cognition found, as Jones put it, in everything from “the poorly understood gut brain of the microbes that govern our human moods, to the intelligence of other-than-human entities in the universe.”
In exploring these more-than-human collaborative modes, biologist Megan Frederickson discussed her research on the mutualist behavior of rhizobia bacteria inside legume roots–a counterpoint to common Darwinist ideas about the “selfish gene” and the survival of the fittest–and how her team is attempting to evolve better symbionts in the lab. Could this same kind of kinship also be formed with our technologies, starting with the minerals used to create them? Digital media theorist, poet, and software designer Jason Edward Lewis explained how artificial intelligence, as a product of an individualist and productivity-obsessed colonial Western culture, could be transformed by Indigenous worldviews that instead organize the world as a set of interconnecting relationships. AI system engineers, he said, are “blind to important aspects of human existence, such as trust, care, and community, that are fundamental to how human intelligence actually operates.”
At a moment when the almighty algorithm has permeated every aspect of contemporary existence, the symposium offered critical perspectives from a diverse range of fields, from astrophysics to aesthetics. “Relieving ourselves of that overexposure of AI to intellectual sureties, we may sense a moment in which an instrument of optimization and automation becomes a medium open to reuse and comment, tinkering, and informed reflection, even a generator of surprise results,” said Lockett. Over the course of the conference, artists and scholars challenged basic assumptions–what is intelligence? whose intelligence?–to think in radically new ways about what computers can do, and how we as a society might learn, create, and act alongside them.
Artificially intelligent systems, far from cold automatons, can be expressive, malleable, radical, and surprising. “The AI system doesn’t have to do something just like a human,” remarked Harrell. “And in fact, the places where innovation will come might be where it’s doing something very different than what we already do naturally.” Rather than compromise our humanity, AI may in fact offer new ways of perceiving, thinking, feeling, and being beyond our natural limitations, and in the process deepen our relationships with the planet and one another. By imagining what other worlds might be made by and with computers, we might find ourselves somehow more human.
Written by Anya Ventura