In late January 2020, Matthew Ritchie staged a beta version of his VR game, The Invisible College, in the U-shaped atrium of MIT’s Physics building, a century-old former courtyard. On the bright grid-like floor designed by Sol LeWitt, audiences wandered through fields of images generated by artificial intelligence, virtual worlds created from datasets spanning the subatomic to the galactic, set against an otherworldly soundscape by Professor Evan Ziporyn. There was a sense of limitlessness and uncertainty as participants moved from one level of the game to the next, sliding through virtual walls with a swipe of the hand, guided by the following instructions:
The game is like time. Everything happens in the present tense. Go forward (you can) and you’ll see nothing has happened yet (you’ll just see the structure of the game). Go backward after completing a room (you can) and there are only ghosts.
The VR experience, for Ritchie, was like a ghost: a data space that could be shamanically conjured and then, all at once, sealed back into its box.
Ritchie’s multimedia art—expansive works that often aspire to the scale of the universe—is what one critic once called “an exercise in systematic complexity.” Ritchie, A Dasha Zhukova Distinguished Visiting Artist at CAST, has a long history with the Institute that dates back to his 2002 installation, Games of Chance and Skill, in the 80-foot-long corridor of MIT’s fitness center. Most recently, he participated in a collaboration with MIT, The Met, and Microsoft. In The Invisible College, Ritchie continues his interventions into the secret life of the institution.
The project incorporates the many dimensions of the university—social, material, intellectual, technological—from the informal conversations to the new technologies being developed in the labs. In exploring the Institute as both an information space and a physical one, Ritchie captures those states of turbulence, chaos and indeterminacy—the rush between classes, or the blurry moment before a picture sharpens into legibility—sketchy zones that, as he says, create “the most generative space for the radical rethinking of reality.”
After the beta test, Ritchie would decide that the preparatory activity—the labyrinthine walk to the atrium, the distribution and donning of the VR masks—was a meaningful part of the aesthetic experience. This is his genius: the evolving conditions that first appear extraneous, even delimiting, to the artwork are constantly being absorbed into it, as part of the dynamism of the system. The university, far from a static object reducible to slogans, titles, and certificates, is a complex ecology whose changing circumstances are always shaping new iterations of the work.
Four months later, when we checked back in with Ritchie, a global pandemic had displaced much of public life to the virtual, and uprisings against police brutality had just erupted across the country. We found ourselves in a new transitional and indeterminate state. Everywhere, it seemed, the invisible systems that structured our lives were in the process of changing—a new world, still forming, only hazily sketched in. With a site-specific work that required visitors to share VR headsets now out of the question, Ritchie turned to the 360 footage he had been filming of the campus—the parking lots, underground garages, chapels and concert halls—those interstitial, and often empty, spaces that seemed eerily prescient, as if they were images transmitted from the future.
With the campus evacuated, Ritchie’s original premise now holds an uncanny resonance—the university is, in many ways, invisible. As the form of the project continues to shift, it still contains, like the campus itself, the traces of its other iterations, the phantoms of both past and future.
The idea of the invisible college is often traced to Francis Bacon’s utopian novel, New Atlantis, which describes a community of scholars on a fictional island. How would you describe the invisible college in relation to MIT?
After exploring the MIT campus over several months, attending lectures, classes and events and meeting with faculty and students, I became convinced there was a hidden topology and a choreography of knowledge here, a dance of thought in time and space—an invisible college within the university. There’s an active thing we all know: the classes, the administration, the university identity. Then there’s a role that’s emerged as an almost autonomous, invisible self. That is a much longer process, more vegetative, as an institution becomes something over hundreds of years—that part becomes invisible even to itself. As all these customs, rituals, and ways of communication begin to network with each other and connect, do they become a kind of autonomous entity? From the start, we thought the only way to represent that would be a multi-platform investigation into all the different ways that both had existed in the past: the history of PhD papers at MIT, the output of academic discourse and the seminars, the PowerPoints, and the weird forms in which the university proliferates itself. There’s the information space, the physical space, and the people negotiating within them. What’s an information environment that could even possibly sustain all of those different points of view? As we were going through all the possible options, we landed on VR. It was this collection of strange, completely different, scalar representations of reality put into this quasi-architecture at MIT.
How did you decide to make MIT the subject of the work?
I’ve been working on projects at MIT for almost twenty years. I’ve experienced it in multiple states of being. There are parts of the campus where I was literally there while they were being built that are now ancient history. I have this sense of it as a physical engine that actually wasn’t always the way it is today. For this project, we landed on this idea of the investigation of the investigation. It goes back to the questions about Bruno Latour and the hard look that was taken in the 1970s into what institutions are doing, and whether they have a part of themselves that is at a higher consciousness, that’s not just on autopilot. MIT, like every other institution, is both open and closed to that at the same time. Because it can’t constantly take itself apart and rebuild itself in motion. And yet it knows it must constantly refresh itself or it will become ossified and old.
How did the project change after the coronavirus?
Part of the genesis of the work was that I would wander through all the hallways. This turbulent stream of students would appear and disappear at certain hours. And at other hours, the halls would be deserted. And there might be just somebody dancing or wandering around with a kite or something. I’d taken a lot of footage all around campus. I was filming at night. There’s this odd resonance in the project of what was yet to come, which is an empty campus. Hardly any people are seen. There’s this huge machine working all by itself. Now I’ve made a short film version of The Invisible College based on that footage of these eerie uninhabited spaces. Being in VR is very much a one-off isolated experience. You put on a mask. Make yourself isolated from everyone. It has an odd parallel to the moment we’re in. The perspective has shifted from a cool new gaming tech to a more sobering encounter with isolation.
The other shift is that we’ve moved into augmented reality. That platform wasn’t available when we started the project, nor was a lot of the technology. I didn’t come in with an agenda to make an artwork from some science that had been done at MIT. I came in to ride the wave of what the researchers were realizing was possible with the technology that was emerging, the means of representation—one of which was the famous Generative Adversarial Networks (GANs), which would make a new kind of visuality. Another was a usable 360 camera, the GoPro MAX; another, the Oculus Quest. As the technology arrives, I try to incorporate it into the project.
How would you describe your short film?
There’s a particular kind of film that has emerged in the last few years called a machinima, where you make a film inside a video game. It’s a self-compiling work, where you set up rules, and then add the clips. This is something of a hybrid, because it’s material that I’ve made, but under this set of rules. The story is also generated by AI, on another platform developed by OpenAI that came out while I was in residency. Evan’s music is also based on a database that’s being manipulated.
For the last twenty years, we’ve tended to treat these sophisticated new tools as instrumentalized, neutral products. Even in the Latourian sense, you don’t think of the beakers as having agency in the laboratory. The scientists have agency, but the tools themselves are neutral. What’s happened in the thinking of the last five years is that the tools aren’t neutral anymore. The tools—and not in a passive way—coax you toward specific results. The entire project is a co-creation at multiple levels, where what is happening is a product of this environment, but also of the tool sets within that environment. Rather than try to superimpose some final singular interpretation, I accept that a lot of these things are emergent, and play with that. I don’t want to misrepresent this project as something that’s entirely autonomous—this is a much more polluted relationship between me and the technologies. This film as it’s evolving is very much a hybrid form.
Much of this work seems to address emergence, complexity, co-creation, and radical contingency—even the way that you’re talking about how these technologies are organically integrated into the creative process. It seems as if you set up this ordered system, but then a chaos agent like a virus enters into society, it transforms the system again, and then you’re creating something new. As conditions evolve, and the constraints shift, they are absorbed into the DNA of the project.
A residency is about being a resident, which means you’re going to lunch with people, going to dinner with people, and staying at their house. You’re going through the virus together, and you’re going through people’s tenure applications with them. You’re experiencing all of that, and accepting that as an engine of whatever product. There have been a lot of things that have gone on in that period, and just at MIT. Read The Tech [MIT’s student newspaper—ed.]. There’s Jeffrey Epstein and complaints about the food halls, minor scandals and major scandals, and now coronavirus—and it’s all part of that.
It reminds me of Latourian actor-network theory, where the tenure application, the coronavirus, Oculus Quest, the Sol LeWitt floor, are all elements interacting in this complex system, and you don’t know what’s going to emerge. The resulting work is the product of co-creation with both human and more-than-human actors.
There has been a nice peacefulness to that look at the interconnectivity of all things. Obviously, at this particular level, there’s riots in the streets. The interconnectivity of all things isn’t a problem that can then be comfortably placed just in the realm of academic collaboration. It becomes a much larger set of questions. Coronavirus, Black Lives Matter, these things really start to show that there are no boundaries—that no matter how uncomfortable it might be for an institution, the virus is inside the building. Jeffrey Epstein is inside the building. Black Lives Matter is inside the building. When Bruno Latour or Jacques Rancière were first writing, there was almost an innocence about how clean-handed everyone could be just thinking about rhizomatic networks, and how they all connect. But if you start seriously thinking about how everything connects—how food networks connect, climate networks connect, pollution networks connect—if you extend the logic to all possible spheres of activity, then it becomes much more challenging in an interesting way.
In the past, you’ve engaged a lot with the idea of the diagram. I’m thinking about the clean rationality of those early cybernetic diagrams, and how you’re engaging with much more ambiguous states.
Cybernetic networks were developed in the postwar period—precisely to clean up the mess, the social mess, and create an ordered society that stopped going to war with itself. But when you look at Latour’s diagrams closely, the parts that are difficult are unlabeled. There’s a moment of rupture or turbulence, where those nice simple and complex geometric forms stop being representable mathematically or theorizable, and become a murky, turbulent, chaotic swirl, where all the parts of the system start to touch each other and this beautiful chaotic mess starts to happen. It goes quite quickly from the diagram to something you might call the drawing, and then into the sketch, like a very sketchy, shadowy space. And then into chaos. But that’s not to say that it isn’t theorizable. It’s just much harder. In a chaotic or topological environment, things start to touch each other a lot more often.
What was it like for you as an artist to have this project always shifting? Were there any breakthrough moments?
This is part of the sketch space, the conceptual sketch space, which is very typical for MIT. You always engineer a product with multiple phases of development. But it’s so not typical for artists. It’s fascinating to me to be able to hybridize like this. While I was at the hackathon working with MIT and Microsoft, there was a breakthrough aesthetic moment where I saw how you could influence the production of aesthetic material without changing the data itself. There is a presumption that data production is always neutral aesthetically. And if an artist steps in and changes it all, then you’re changing the data. But in this particular case, it’s not really true. What you’re watching is an aesthetic environment that only has meaning if a person discriminates, because the GAN itself is not designed to discriminate. It produces billions of potential images. And the only way that it produces them is if you ask it a specific question. If you say, “Show me all the faces,” it doesn’t know what faces are; it just goes to the database where it says these things are faces. Whatever you’ve put in that database, those are the faces.
I thought for the first time in my lifetime that I’m actually seeing the emergence of what information looks like as a picture, as opposed to a diagram or a schematic. It’s like looking at a system emerging while you’re watching it, like watching a sort of pool of mud and then you sort of see something surface vaguely, and you realize suddenly that it’s a turtle. And then it kind of goes back down again. There’s no defining moment where it’s not a turtle anymore. It’s just this shape vanishing. Before it comes up, you don’t know it’s a turtle. Then you’re like, “Oh, it’s a turtle.” And then it’s not a turtle again. That’s what it felt like watching information become visible in a way that I’ve never seen before. It’s this strange, blurry, muddy, chaotic thing that started to appear. And then moments of it become extremely legible, but you realize they’re not legible because there’s no content. You’re just looking at the AI rearranging the pixels infinitely. And every now and then, it makes some things. So it’s the monkeys and the typewriter debate. Is it meaningful?
In a piece you wrote about LeWitt, you talk about this idea of a shadow project, where there is an extra-dimensional space overlaid on top of his work that’s a space of transcendence or immanence. Do you see this project, in exploring what you call “the higher consciousness” of the institution, as pointing toward that, or actualizing it in a way?
A beautiful term that came out of The Met project that’s used by neural network researchers is latent space. Latent space is enormously large. And then there is this other idea of the valencies within that, so you can have a sort of poly-valence space, which has maybe got quite a lot of directions. And then you might have a bi-valence space, where you’re restricted to only two choices. The emergence of a form, its specificity, is tied to the number of valences left to it. Anything that becomes clearly identifiable at a certain point—like that turtle emerging—has to become knowable and legible in some very specific ways for you to decide. The super deep neurological question that I’m drawn to is the shadow space between the infiniteness of latent space, which is effectively noise, even if it is a lot of information to us. That whole question about signal to noise comes very much out of the ’40s, where you’re trying to define what is the signal. But of course, to the universe, everything is a signal. The static is not noise, it just seems like noise, but it’s actually the sound of the universe doing its day to day business. You just want to extract what’s important to you out of it.
There’s a kind of information extraction that is very teleological, based on already knowing what you’re looking for—you presume you know what the signal is in order to tune out the so-called noise. Neurologically, when the human brain encounters something like a picture, it doesn’t have the same kind of urgency. You don’t have to know what a picture is. You’re allowed to exist in a state of deferred information. And that poly-valence space is something that I’m super interested in—where you haven’t decided on something. When you look at a sketch, a hand-drawn sketch in charcoal of some murky form, you don’t know what it is. But it might sponsor 100 possible contemplative spaces. That’s why you sit on your porch at sunset and look at the trees. That’s what you’re turning on, that part of your brain. That’s what I think happens on all those big blackboards at MIT. That’s where the most generative space for radical rethinking of reality is going to happen.
MIT has got these big, shadowy, complicated spaces. It’s almost like the institution knows it has to keep making them, but it doesn’t know why. The LeWitt space, for all of its geometric certainties on the floor, is mostly this very large, shadowy, odd, cathedral-like space that’s contemplative. And the volume of that is much larger than the realized high-color space. But when I think about LeWitt, I think about someone who thought a lot about that differential. On the one hand, there’s the rules, very simple. On the other, there’s the work, which is never quite what the rules say it is. The work is much bigger. The work is the world. The rules are just a description.
I’m working with Adobe to beta test their AR app, and that has become this personification of inquiry. There’s a figure who can manifest, who looks like the people doing the VR. She’s like a personage in a mask, in a robe, who can show up anywhere, on anyone’s phone now. That will be this persona who gets migrated from VR to the movie, but then it’s going to migrate back out into the spaces of MIT as a project where she can be—the ghost of what we’re talking about can be seen in the LeWitt, even if no one else can go there. She can go there, which is beautiful.
In the past, you’ve talked about the moment in a rational system where something changes and becomes a new form of being, and yet within the system there remains the “echo of that condition.” The Adobe ghost, as kind of the memory of the project, reminds me of that echo.
A commonly known figure is Walter Benjamin’s Angelus Novus, the angel of history. He’s confused and saddened by the chaos that always recedes into the past. I’ve always preferred another character in the iconography of angels, who is the angel of the future. She is looking forward. She’s the angel of chance. But her vision is obscured—she has partial sight. But she’s not afraid of the future. She’s not flying backwards into the future, but facing it.
Written by Anya Ventura