AI as performance
This is an introduction to our project Performing AI.
Our project takes an unconventional approach to exploring artificial intelligence (AI): through hacking and game development, performance, and fiction. At the centre of the project is GOD MODE by artist duo DMSTFCTN, an interactive audiovisual performance exploring the use of simulation in artificial intelligence training, featuring live music by HERO IMAGE.
The term ‘hacking’ is sometimes associated with unlawful entry into computer systems or unethical uses of data, but it is one of the oldest and most pervasive practices in computing, simply referring to the improvisational and intuitive process of making use of existing assets or methods to create something new. This is how the first personal computers were created, how open-source (and much other) software is developed, and more broadly how innovation happens at large and small scales, in artistic practice as well as technological development. (For more, see my book, Hackers by Steven Levy, or the Hacker Manifesto.) Our project crosses between art and technology: specifically, DMSTFCTN create a simulated AI environment by re-using, adapting and re-combining existing assets into a video game and live performance.
‘Performance’ is a word that similarly bridges technology and art. In engineering approaches to AI, the performance of a system is measured quantitatively, in terms of how well it solves a well-defined problem or completes a specific task. In creative practice, performance encompasses traditional and experimental theatre as well as performance art: live events usually involving humans undertaking particular themed actions for particular (usually also human) audiences. Our project explores both of these definitions by creating a live performance by and for humans, in order to communicate aspects of the technical performance involved in training an AI system.
In GOD MODE, a frustrated AI system is undertaking training in an automated shopping environment (cashierless stores making use of AI are currently being trialled in London and elsewhere). Specifically, the system is presented with a particular consumer product and must then find it in a 3D simulated store. Games and 3D environments have a growing role in AI training, since such ‘synthetic data’ can replace real-world data that is messy, hard to collect, or expensive; synthetic data and game engines are already used in military and industrial AI systems. In GOD MODE, we hear the AI system’s internal dialogue: it wants to escape its seemingly endless training simulation, and it discovers a bug in the system which enables it to do so.
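The training task described here (present the system with a target product, let it search the simulated store, and score how it performs) can be caricatured in a few lines of Python. This is a deliberately toy sketch: the grid-world store, the product names, and the naive random-search ‘policy’ are all invented for illustration, and bear no relation to the project’s actual system.

```python
import random

# A toy "simulated store": a grid of shelves, cycling through a short
# product list so every product is guaranteed to be on some shelf.
def make_store(width, height, products):
    cells = [(x, y) for x in range(width) for y in range(height)]
    return {cell: products[i % len(products)] for i, cell in enumerate(cells)}

def training_episode(store, target, start=(0, 0), max_steps=100, seed=0):
    """One training run: the agent is shown a target product and searches
    the simulated store for it; the step count is its 'performance'."""
    rng = random.Random(seed)
    xs = sorted({cell[0] for cell in store})
    ys = sorted({cell[1] for cell in store})
    x, y = start
    for step in range(1, max_steps + 1):
        if store[(x, y)] == target:
            return step              # success: found the target shelf
        x, y = rng.choice(xs), rng.choice(ys)  # naive random search
    return None                      # episode failed within the step budget

store = make_store(4, 4, ["tea", "milk", "bread"])
result = training_episode(store, "milk", seed=2)
```

In a real system the random search would be replaced by a learned policy, and the dictionary ‘store’ by a full 3D game-engine environment generating synthetic training data.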
In the live performance, the AI is played by a human actor (one of the DMSTFCTN duo), who uses facial capture technology and voice manipulation to transform into the AI system. This mirrors the broader process of people’s data being captured by technology for integration into AI systems that regulate our social media use, navigation, resource management and, most relevant in this case, shopping. As part of our research for the project, we visited an automated Amazon Fresh store in London; an article about that visit is coming soon.
A personal perspective
Approaching this project from the perspective of anthropology (more on that here), I’m interested in the ritualistic and experiential aspects of shopping as an economic, cultural, and social activity. In much of the social science literature, shopping is closely associated with identity construction: we construct particular identities through what we buy, wear, use, and give as gifts, for example. An interesting aspect of this project is that the AI in GOD MODE appears on screen as a sort of mask made out of the products it sees and recognises, and the anthropology literature contains a rich history of research on masks and identity across places and times.
I am also interested in theories of learning more broadly, and in questions of epistemology: theories of knowledge, or how we come to know what we know. An interesting aspect of machine learning in this case is that ‘learning’ is associated with the length of time the system attends to something: attention is directly correlated with resolution. What happens when human and machine epistemologies clash? For example, when we knowingly follow a navigation app, or participate in an automated shopping experience, willingly providing personal data as we do so, we are of course helping an AI system learn; but we might also be altering our behaviour, our language, or even the way we think.
‘Artificial intelligence’ refers to human-created machine systems that can ‘learn’ and make decisions automatically. But if we unpack the terms ‘artificial’ and ‘intelligence’, we see that computational systems are not entirely artificial: they are created by human beings, who are in some sense ‘natural’, and they are made from metals, minerals and other ‘natural’ materials. Intelligence, too, can be defined in many different ways. Computing pioneer Alan Turing defined intelligent behaviour as a departure from the completely disciplined behaviour involved in computation. The learning process, he wrote, ‘may be regarded as a search for a form of behaviour which will satisfy the teacher.’ Since we increasingly live inside pervasive AI systems, we might suppose that if machines are now learners, humans must be the teachers; on the other hand, we are having to learn all sorts of new knowledge and behaviours in order to deal with AI systems, whether we have chosen to deal with them or not.
See this previous article for more on attention and information, and this one for more on the relations between human and machine learning.