Piano vs. prompt

Which keyboard is more creative?

increasingly unclear
17 min read · Jan 27, 2024

This is a transcript of an interview with Dr Caterina Moruzzi. You can read a condensed version of this interview here.

Q: Tell me about your background — philosophy, right?

A: I was originally trained as a pianist, actually. I studied piano for 15 years in Italy, finished the conservatory and got a diploma. In the meantime, I was also doing a BA and MA in philosophy at the University of Bologna.

During my MA I was mainly working on interpretation in music — the authenticity of musical transcriptions. Then I started a PhD in Nottingham in 2014 on the philosophy of music — the nature and authenticity of musical works.

You still play the piano today?

Yeah, just before this meeting I went for a guided tour of the collection of early musical instruments here in Edinburgh. My diploma was in piano, but I was playing early keyboards, and was interested in the technology of the instrument.

So, from very old technologies to new ones.

Yeah, there are quite a lot of interesting parallels. Look at how the technology of the piano evolved — it has had to keep up with the needs of different pianists.

I got into AI during my PhD, during a visit to McGill University in Montreal — the same period when Ian Goodfellow was developing Generative Adversarial Networks there. And I thought, this might change my PhD — it might even change the nature of music. I started working on GANs, and on how technology is changing our creative processes.

In the PhD I developed a new theory on the nature of musical works: that every performance is a different musical work, with an argument about authenticity — there is not one single authenticity but many. Part of the thesis was on automated music, and whether that might go against my argument.

But now with generative AI, I think it’s more and more valid — this idea that there is not one single artwork anymore. It’s much more decentralised, for each individual.

And distributed between the human and the instrument — the computer or the piano, I guess. Even with automated music, there’s always a human somewhere in the process.

One computer scientist I spoke with compared the computer to the piano — both are a kind of button box, a fairly simple mechanism where you’re pressing keys. But of course it can be very expressive in the way you use it, and what you can produce. Could computers be as expressive as pianos? Is that a valid comparison?

Yeah, especially with the piano, because in the piano each note is fixed and discrete, not like a violin or other string instruments. This is similar to how we have to deal with computers — with very distinct categories.

It’s one of the challenges I think we will have in using AI in the arts — making sure that we don’t lose the nuances that human art has.

One definition of AI is that it exists to classify and discriminate between one thing and another — although there are confidence probabilities, and maybe this is where the nuance comes in. So, working in music influenced your approach to creativity?

Yes, it comes directly from my practice as a musician. When I started working on AI my focus was on music. At the time, the Google Magenta project was cutting edge. But now music is a bit behind with respect to the other modalities.

Music should be easier because it is rule-based. But there is still no tool for people who do not compose or play music to achieve good results. There are text-to-music tools coming out all the time, but so far they’re a bit boring to me.

Or you have very specialised systems, like live coding, which are more interesting but less accessible than something like Midjourney.

That’s why I shifted my attention to the visual arts — partly because I am interested in creativity in general, not just music, and partly because I started collaborating with Adobe a couple of years ago.

What do you do with Adobe?

I work with Laura Herman, a researcher there who is also doing a PhD at the Oxford Internet Institute. She got in touch because she saw some of my studies when I was in Nottingham. So we applied for a couple of funded projects while I was in Konstanz — about creativity and embodiment in human–machine co-creation. We did a workshop at the International Conference on Computational Creativity last year. And I collaborated with her on an exhibition she did on algorithmic and human curation of images.

Now we have the BRAID programme at the University of Edinburgh. It has a fellowship to work with industry partners, so we applied for one of these — to work on Firefly, Adobe’s generative engine, and how creatives are using it.

In one of your papers, you say that the AI technological revolution is causing us to rethink concepts like agency and creativity that are normally attributed to humans. You say there is a shift from humans having control and supervision to the machine having more agency — no longer being a tool but a “co-pilot” or teammate. Is that right?

In general, when we talk about AI these days it’s about autonomy. AI started with the idea of creating “agents” with behaviours and appearances similar to humans. But in the 1980s, Marvin Minsky was specifically talking about creativity as one of the key features of what should be considered AI.

The way I think about creativity is not the individual genius, or as only applied to the arts, but the idea of creating something novel, with some kind of agency. The key to this is when someone or something knows when to stop creating.

I would say AI hasn’t reached this stage yet. But GANs were a huge step forward, because there is a kind of self-judgement within the system. Generative AI doesn’t work exactly like this, but the idea of internal competition is still present. But for me, until it knows when to stop, it’s still a tool.
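The adversarial “internal competition” she describes can be sketched in a few lines. This is a deliberately minimal toy illustration, not how any production generative system works: all names and the one-dimensional setup are invented here. A one-parameter “generator” shifts random noise toward real data, while a logistic “discriminator” tries to tell real samples from generated ones — each network improving against the other.

```python
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    # "Real" data: samples from a normal distribution centred at 4.
    return rng.normal(4.0, 1.0, size=n)

# Generator: noise plus a single learned shift (a deliberately tiny model).
gen_shift = 0.0
# Discriminator: logistic score p(real) = sigmoid(w*x + b).
disc_w, disc_b = 1.0, 0.0

def discriminate(x):
    return 1.0 / (1.0 + np.exp(-(disc_w * x + disc_b)))

lr = 0.05
for step in range(300):
    noise = rng.normal(0.0, 1.0, size=64)
    fake = noise + gen_shift          # generated samples
    real = real_batch(64)

    # Discriminator step: ascend log D(real) + log(1 - D(fake)),
    # i.e. score real data higher and fakes lower.
    p_real, p_fake = discriminate(real), discriminate(fake)
    disc_w += lr * (np.mean((1 - p_real) * real) - np.mean(p_fake * fake))
    disc_b += lr * (np.mean(1 - p_real) - np.mean(p_fake))

    # Generator step: ascend log D(fake), shifting fakes toward
    # whatever the discriminator currently scores as "real".
    p_fake = discriminate(fake)
    gen_shift += lr * np.mean((1 - p_fake) * disc_w)

# The learned shift should have drifted toward the real mean (4.0).
print(f"learned shift: {gen_shift:.2f}")
```

Note how this also illustrates the stopping problem raised above: nothing inside the loop tells the system when the output is good enough — it simply runs a fixed number of steps.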

Yes, in your writing you make a distinction between judging creativity from the outside, versus this internal definition. To me this implies that humans and machines are very different. A lot of AI development these days seems to be based on the assumption that we can make machines as “smart” as humans somehow — this explicit comparison between machines and humans. Artificial General Intelligence, whatever that means.

To me, the counter-argument is that humans and machines are completely different and we shouldn’t even try to compare them. It goes back to the Turing test — that machines just imitate humans.

Yes, I think the comparison is useful if what we want to achieve is the appearance of intelligence, or any other human feature. But yes, I would also say that, ontologically speaking, we are two different kinds of entities. So taking this analogy to the extreme may actually hurt the development of AI.

But still, to address the alignment problem between humans and AI, we should strive for AI that at least shares our principles and values. It all depends what we want to develop AI for. For me, it’s for helping humans in what we already do, taking us further, helping us think outside our own bubble. If it’s too similar, we cannot go outside of what we already know.

Untitled (Oracle 3), 2022 (detail) by Leo Robinson. Photo: Kevin Walker.

Yes, I talk with a lot of artists who use AI, and for many of them it’s about the element of surprise — coming up with something that you might not think about yourself.

But let’s get deeper into the notion of creativity. You go into detail about it — whether it’s a product or a process or an agent, or some combination of these. You refer to Margaret Boden’s definition — that it refers to something new, which has some kind of value. A mapping of a conceptual space, a search through some problem space. Can you explain that?

That is one of the three kinds of creativity that Boden talks about. There is creativity that combines elements from different areas. Then there is exploratory creativity — to explore something we didn’t think about. And then transformational creativity — completely going outside how we see things.

So the first one is what is already known, but remixing it — this is how generative AI works. The second is about the “known unknowns” — exploring what we think can exist but don’t know exactly what it is. And the third is the completely unknown unknowns. What Boden says is that only this third kind of creativity [in addition to the capacity to evaluate the outcome] would qualify AI as being as creative as humans. For now, I believe that AI is at the exploratory creativity stage.

Do you think it can get there?

Maybe we can reach transformational creativity through AI. Of course there are ethical issues around data collection, but for the first time we can harness huge amounts of human knowledge in one place, that as humans we wouldn’t be able to process.

Some people find AI so difficult to understand that it’s like an alien intelligence. So we should treat it on the same level as human intelligence, but separate. What do you think about that?

I think it’s always striking a balance — keeping the alignment so that it doesn’t go in directions we don’t want it to, but at the same time having this dimension of exploration. In the arts, there is already a new aesthetics of AI images — an excessive homogenisation of aesthetic features.

You know that study done years ago in the US, where they asked people about their preferred features in a painting, and based on the results created the “perfect painting”? That’s more or less what AI is doing now. It mirrors human preferences — the preferences of some humans anyway. It will stay there if we don’t allow it to explore outside of what we already know we like.

Yamaha, 2013 (detail) by Lutz Bacher. Photo: Kevin Walker.

In one of your studies, you asked people what they regard as creativity, and some of the most frequent terms that came up were novelty, problem-solving, surprise, value. So you’re trying to get at specific dimensions of creativity.

In my studies, I like to leave these terms to the interpretation of the participants. This is always criticised by reviewers — “You cannot have valid results if what you are talking about is not the same for everyone”. But that’s exactly the point I want to make, whether it’s with embodiment or authenticity or creativity. I want to capture common understandings of these topics.

For creativity, it seems that Boden’s categories work, even for people who don’t know her work. We’re still struggling to get past the idea of the romantic genius as the creator — a lot of people still believe this.

One of the dimensions of creativity you discuss is intentionality. Could a computer ever be said to have intentions?

I think it will be almost impossible to know for sure. We don’t even know how often humans have explicit intentions. But what I said before about knowing when to stop — knowing when you have the best possible result — that can be a measure of intentionality, of doing something with a purpose. And I think that’s something that machines shouldn’t take away from creators.

It seems you’re trying to measure creativity in an empirical way. Do you think it’s possible to have objective or quantitative measures of creativity?

If you asked me three years ago, I would have probably said yes — that’s what I was trying to do. Now I’m not so sure. I think there is a measure of creativity, but it’s a very subjective measure, depending on context and on who you ask. In computational creativity, it’s important to have a way for people, and also computers, to evaluate their creativity. So it might be impossible to say exactly what creativity is, but at least we can narrow down some key dimensions.

You refer to Herbert Simon, who says that creativity doesn’t happen by magic, but is the result of hard work, goals, and skills.

Yes, this is a problem-solving approach. It’s controversial — some people say that in art, we don’t always have problems to solve. But in the history of art, artists’ development, and the development of tools — like the piano — it’s because they had problems to solve. Even if the problem is, “How can I express myself, given that the tools I have available are insufficient?” Creative constraints.

Punch cards used for a Jacquard loom by artist Magdalena Abakanowicz. Photo: Kevin Walker.

The title of another of your papers is “Natural and artificial creativity.” What do you mean by this distinction? What is artificial creativity?

It starts from the idea that there is one way to measure creativity across humans, animals and machines, while still recognising that they are distinct. So artificial creativity is simply creativity performed by artificial systems — machines — without relying on humans. It’s trying to give a less anthropocentric way of understanding creativity, trying to level the field.

As you say, “anthropogenic but not anthropocentric”.

Yes — it is created by humans, but this doesn’t need to only reflect human actions.

One artist who works with AI told me that the way humans operate is completely different from machines, and we only regard it as creative because it doesn’t conform to our understanding. What do you think about that?

Do you know Daniel Kahneman’s Thinking Fast and Slow? People have one system that immediately and intuitively reacts to the environment, and another that is more logical — reasoning, slowing down — when we’re confronted with something unexpected. That second, more analytical, system is closer to how machine learning works. The other, more impulsive, reaction can often be wrong, but that’s the beauty of human nature.

Machine learning works by recognising and categorising patterns, and we as humans do that all the time, even if we don’t realise it. We have all this noise, all this information coming in all the time, and we categorise it without realising. Otherwise we wouldn’t be able to move.

I guess it’s also related to emotions, and that’s something I didn’t see in any of your publications. Some computer scientists try to quantify emotions, with biosensors and things. Do you think this is ever something that we could quantify, that computers could ever understand?

I think there are two very distinct types of emotions: again the very impulsive kind that come from our animal nature — fear, panic, attraction etc. And then there are the more complex emotions. I see these as more easily reproducible in a machine. They’re not rational, but we can better trace what triggered them — our background, our environment. Some scholars don’t even talk about the intuitive ones like fear — they see them not as emotions but as something different.

Of course machines don’t have the same needs as we do, moving in the environment. They can fake emotions. But would we even want them to have emotions? They could share our principles and values — yes.

In fact there are some systems that can detect human emotional cues better than humans can, from someone’s voice or their facial expressions, so maybe they do have some value.

But let’s talk about agency, another key concept in your work. If I understand, there are two main components: autonomy and intentionality. Then you list a lot of other factors: individuality, being goal-oriented, thinking ahead, adapting, being able to reflect and reason, looking at causes and counterfactuals.

And then you come to this distinction between first person and second person. If I understand correctly, I can detect in myself some sort of agency, that I’m in control of something. Or I can perceive in some other individual some agency.

You say agents can infer information, observe patterns, and adapt their understanding. And I can see how computers can do all these things. Learning from the environment of course also, using sensors and things.

I guess the question is about reflection or self-reflection, or self-regulation or self-evaluation?

Yes, again, knowing when something is done, and reflecting on our own performance. Networks competing against each other sort of do this, a bit like when we talk to ourselves.

A display in the Science Museum, London. Photo: Kevin Walker.

Speaking of networks, you talk about Actor-Network Theory, and networked notions of agency. That’s interesting because it places humans and nonhumans on an equal level. In that case, do you place agency in the connections between the nodes or actors?

Yes, and also the actions performed by the network are bigger than the sum of the individual actions. Think about the agency of a corporation.

This notion also might be helpful in addressing issues of copyright and attribution — normally we recognise copyrights only with respect to individual agents. Human persons, specifically in the US constitution — so agency is quite closely connected to being human.

But now corporations can hold copyrights. So we can adapt our notions of copyright and attribution to these new modalities, and I do believe we need to speed up this process. Lawsuits and things avoid the bigger issue: how the creative sector will work in the future.

So in this networked agency, it’s not only the machine producing things.

All the training data, the developers — a whole supply chain of actors that is distributed.

Actually, going back into our history can help. Think about the history of music. Even into the 19th century there was no copyright; people were stealing from each other all the time, and it was considered normal. Before that, composers like Bach stole a lot from others. He had a fixed income, irrespective of how much he composed.

Fast-forward to today and we’re talking about universal basic income. The model used by record labels or music streaming companies like Spotify clearly doesn’t work because it centralises the money and power.

That’s why with AI it’s important to have interdisciplinary conversations.

Let’s talk about communication, and what’s called “explainable AI” in one of your papers. What does it mean?

This term came up about five years ago. It means developing transparent systems, so that users and other stakeholders can understand how they work. That idea has evolved a bit, because at the time it was trying to put too many ideas into one label. But the idea of transparency and inspection is still relevant, especially in high-risk applications like autonomous driving.

In that paper, we looked at how there is no universal explanation for how a system works — you cannot give the same explanation to a computer scientist and to a user.

So actually one thing I am working on, which came out of that, is applied to computational creativity: how to develop customisable interfaces for users to choose from different dimensions of a system they are co-creating with. And how much explainability they need.

Look at how human artists collaborate in many different ways. And sometimes we don’t require our human partner to explain everything they are doing in co-creation. The same should happen in co-creation with systems.

To me, the notion of a “black box” is problematic. Everything is explainable, we just need the right tools. Humans are black boxes, computers are not. We might not know how to extract the information, but there is a way.

Control Data 6600, at London Science Museum. Photo: Kevin Walker

In interfaces, do you see a move away from text prompts to more graphical forms?

I hope so. One of the things I’d like to explore with Adobe is moving beyond just text-based interfaces. But it can be difficult to bring in other modalities because of the hardware that users have — things are not very accessible yet.

But a visual artist might have difficulty explaining in words what they’re doing.

Thinking of communication, I’m surprised you don’t discuss the traditional models of communication, like Shannon’s, or Gordon Pask’s conversation theory.

We discussed these but didn’t include them, because we were working within a different paradigm. But I want to go back to the idea of conversation and feedback.

Yeah, reading that paper where you talk about sharing mental models, I thought of my own PhD research, which was situated in education, and I thought — this is the process of learning: sharing our mental models.

Yes — in fact, in our framework, one of the aims is learning something from your conversational partner.

The last topic on my list is embodiment, and the question of where AI ends and the human begins.

When we have an interface besides the keyboard, it opens up so many more possibilities for interaction. Going back to my practice as a musician, sometimes you don’t know where the instrument stops and the body starts — it’s a really close integration of the tool with your body.

Thinking about embodiment and AI, most people think of robots. But they are complex and can be dangerous to be around. There are so many other technologies we deal with on a day-to-day basis. Look at all the tools DJs use to modulate sound. Even an iPad and a pen can be interesting — how you use it in an embodied way impacts your creative process.

In that paper, you mention sensory-motor intelligence. The idea that there are multiple intelligences goes against AI being purely cognitive. One of my favourite studies is how cockroaches can adjust their movements faster than it takes for signals to reach their brain — their bodies therefore contain some “intelligence”.

There are also microcellular organisms that, if you cut them into five pieces, basically then have five brains — or, more specifically, the nervous system in each piece can perform the same tasks.

Last question: AI is moving so fast these days. Are you generally hopeful or pessimistic about the direction things are going?

I am quite hopeful. We had many technological revolutions before. Photoshop changed everything for designers, for example. But yes, the pace this time is really fast.

I talk with many creatives, and some of them shut down when talking about AI, they don’t want anything to do with it. But it’s here, it won’t go away, and we need to engage with it.

There are interesting developments in AI hardware. It’s early days, but it’s exciting to think of new ways of interacting with these technologies — we’ve had the PC and the smartphone for too long now, and I would like to see more of that. It’s an exciting time to be working in this area.

Find out more about Dr Moruzzi’s research here.