Learning from stories, not datasets
--
Nouf Aljowaysir recently won the Lumen Prize for moving image and was nominated for the Youth Documentary Award (13+) at the International Documentary Film Festival Amsterdam (IDFA). Her AI-assisted film, Ana Min Wein (Where Am I From?), traces her family heritage in the Middle East and exposes some of AI’s flaws in the process. It was commissioned by Somerset House in London, where I saw it.
Read a condensed version of this interview here.
Q: Congratulations on your award at IDFA. How was the festival?
A: Amazing. It takes over the whole city, it feels like. Multiple buildings, huge cinemas. My film was shown in a cinema setting like that, because my film was accepted into the Film section — I didn’t apply to the New Media one. So I was with filmmakers — more traditional documentary-style. Mine was quite experimental, but they loved it, it was really great.
You never considered yourself a filmmaker before?
No, this was my first moving image work. It was hard. There was a lot of scriptwriting, thinking about cinematography, editing — I had help with editing, which was great.
But yeah, I never considered myself a filmmaker. This is partly due to graduating from ITP [which we both did], and being like, “What am I?”
Did you have a crew for the film?
No, not really. I had a producer who helped with editing and with shooting me. But I directed every single scene. Then I worked with a filmmaker based in the UAE — she had gone to Tisch film school, and we met there. I really wanted to consult with someone from back home specifically — the UAE is culturally closest to Saudi Arabia [where she’s from].
So we had long conversations — about identity, memory, how AI fits into all this. It was really just reminiscing about life. The film kind of came about that way, from a lot of personal conversations about how I felt and what I wanted to do.
Did it come out of your earlier work, Salaf?
Yeah. Salaf means ‘ancestor’ in Arabic. I did a residency that was really focused on artistic process — developing the idea. It was with ThoughtWorks Arts, which unfortunately had to shut down after the pandemic. But they were really great pre-pandemic — they focused a lot on development, and they accepted artists at different stages of their careers. There was a lot of mentorship and stuff.
So I treated it as something totally different. I was like, I want to explore re-creating my memories. There was already a lot of generative AI around, and I truly wanted to see where my mom came from — the regions she lived in — whether AI could help me visualize the stories, or whether I could learn something. What could I learn from this?
The process was playful — not necessarily structured, but really open-ended. One question led to another, you discover things, and you write them down. So again, long conversations.
One exercise my mentor had me do was constantly asking why I wanted to do this project — “You need to answer this from an authentic place. I need you to record yourself 20 times, 50 times — why you want to do this project.” Of course I started out saying things like, “Because it fits.” And she’s like, “No, that’s not real, do it again.” She was one of those harsh art critics [laughs].
I found this really close to a therapy practice, because it was so personal. I had to question myself — “Yeah, why do I want to do this? What am I trying to say?” And really focus on the voice I wanted to say it with, not only what I wanted people to see, to hear. There was a lot of reframing there.
There can be a tendency just to do something cool with technology. I wasn’t trying to do that, and this questioning helped to dig deeper into my voice.
And then, with the filmmaker, the same questioning, over and over again. I think, after living here [in New York] for so long, I’m used to thinking, “Who would want to see or know this?” I was being harsh on myself, thinking about white audiences, catering to white audiences.
So I felt like I had to do a lot of unlearning, just focusing on my voice and not doing this for anyone else. I feel like I was in a very hypnotic, almost poetic relationship with myself. Obviously I was also thinking about people in similar circumstances too.
“I started finding a lot of labels that were wrong.”
But the ThoughtWorks project was the development of the AI. And understanding that I couldn’t really re-create this. That kind of unlocked what I wanted to say with AI, through the research.
I started finding a lot of labels that were wrong. Generalization — this thing is seeing me and saying, “This is an Eastern Indian culture,” “This is a turban.” It would generalize me down to that, and that made me think about who made it — the datasets they trained it on.
The second part was unlocking, more deeply, the personal voice. Because there’s two voices in the film, talking.
So, the second residency, with Somerset House, I spent a lot of time unlocking that personal voice. I wanted to capture my process. The film is essentially the process that I took, and the answer that I reached at the end of the film is the end of the process. It was coming from truth, authenticity.
Which, when you think about AI — what is truth becomes a larger question. It’s just making some generalizations based on the data it’s trained on, right? Do you mean the truth for you?
Yeah, I think it made me not just accept it. It made me mad.
Through the research, I interviewed my mom, and I traced my ancestry to the 1800s. We didn’t have colonialism in the same way as the US did. But I was really able to trace my great-great-great-grandfather, how they migrated north for opportunity.
Saudi Arabia didn’t have a lot of wealth at that time. And she says in the film, a lot of people migrated to Iraq or Syria. So my father grew up in Syria and my mom grew up in Iraq, but they’re both Saudi Arabian technically, so they were able to reclaim their nationality later.
I wanted to go so deep because your identity doesn’t just stop at where you’re from. That’s why the first work is really about ancestors — they say a lot about your temperament, how you respond to things, your intuition. And of course, trauma is also passed down through ancestors.
So I felt so connected, because I grew up with a lot of my mom’s stories, her dreams and her memories. Even though I have no memories of my own from there — I’ve never visited Iraq. But I’ve almost felt my whole life like I’m from her memory.
Of course, my mom didn’t get at all what I was trying to do [laughs]. But it was still good to get her into the zone and talking about all this.
“It felt like the past was erased.”
So then, on your website, you call it re-imagining the past using generative AI.
Yeah, and in fact it didn’t really help me re-imagine the past; it felt like that past was erased. At IDFA, a lot of people were like, “I had no idea there was this history”, “I didn’t know pictures like that existed of Iraq.” It was a really different time, before the war — now it looks awful. My mom has beautiful pictures from before, but it’s erased from the collective memory.
At Somerset House, I talked a lot with ethical AI researchers, who really validated a lot of my research. If you do a simple Google search, maybe you can get some text about history. But these kinds of memories live in stories — oral stories.
Technology doesn’t work that way. It learns to generalize, to simplify, break things down. And because it’s based on capitalism, it’s designed to generate profit, to gain information fast. A lot of my conversations were about why learning is being defined that way.
The way I’m learning, though, is through oral storytelling — information passed down through stories. AI learns from a very specific dataset — learn this, apply that. And in many ways, that’s how we’re taught too — in the Saudi Arabian education system, I had to learn word for word and take tests, memorizing.
What if we can also learn through storytelling?
There’s a lot of research around how people learn best through stories.
So why can’t AI do that? Why those datasets? Why those objects? Why this definition of learning? Why is intelligence defined in this way?
Objects are, of course, attached to a culture. In an AI system built in a Middle Eastern culture, maybe we wouldn’t use hats. Or cats. They say this is our intelligence, put into machinery. But there’s a lot of bodily intelligence that you learn through intuition. I don’t believe in re-creating humans in digital form.
And again, it’s generalizing. In my story, it’s erasing what is literally collective memory, and it shows a very skewed history of the world. There are a lot of voices not being heard.
“Maybe it’s radical to think about tools that are not disseminated to the entire world”
How much do you think it’s tied to language? You know, most programming is based on English, most of the datasets too. There are now countries developing AI systems trained on local cultures and languages. Could you imagine an AI based in an Arab context?
People have asked if that might be a solution. It could be, if these technologies weren’t owned by a certain group of people. Of course, in the Middle East we use these apps; they infiltrate us. And that’s really the danger of AI.
Yes maybe, if we had our own Google or something. But maybe it’s radical to think about tools that are not disseminated to the entire world — maybe that’s leading to a lot of problems.
Even here in the US there is erasure — darker bodies are not represented, even though the US is such a multicultural place. It’s not representing the US completely.
So developing an AI with local knowledge, yes, but really going further and reframing the way that learning is defined.
It seems like Saudi Arabia and UAE have the financial resources.
Saudi Arabia is investing a lot in AI. They are inviting people from the West to come and work there. They say, “Do what you do in the West, but do it in Saudi Arabia”. They might have a few Saudi consultants.
The Emirates is also replicating this. I’m excited to see local voices being incorporated and heard, and to see Western technologies no longer put on complete pedestals.
“It’s another form of colonization”
It’s kind of ironic, because, you know, mathematics, algorithms came from Middle Eastern science, ages ago.
I think it’s just heartbreaking to see this kind of globalization. Going to these cities, you don’t know what culture you’re in. I mean, sometimes it’s nice. But is there a way we can preserve who we are? It’s another form of colonization, in a way.
You can trace it to when oil was discovered. We were essentially Bedouin, and we didn’t know what to do with all this money. So there was a lot of intervention from the US and Britain. The hospital where my mom worked was American.
I mean, I live in New York, and it’s a melting pot — so nice. So I’m arguing with myself a bit. Why do I want to preserve our culture so much?
This was not the first time you used AI. When, and why, did you get into AI?
For my ITP thesis, I traced my digital footprint — all the text messages, interactions on social media. Could I train something to talk and act like me? I re-created a three-dimensional scene with some of these things, then 3D-scanned myself. And I trained a simple AI voice to talk and behave like me, in this space. I called it nouf.io, and infused some humor into it — you could talk to my avatar and it would respond to you.
We still don’t have that on the internet.
You can get pretty close to reality, and it’s moving in that direction — how can I generate your likeness? Look at Hollywood.
The premise of my project was: is this me, or not? It’s never gonna be truly us, but we’re gonna get so confused about who is you, and did you really say that? That’s gonna create even more distrust. AI is not creating clarity but more fog. Now I go online and I do not know who or what to trust. I didn’t think that AI, at a larger scale, would cause this. But it makes a lot of sense.
“Learning from AI, and it, I guess, learning from me”
If you could, would you go further — as the technology develops and you can have this second self, would you keep going down that path?
I’ve been asking myself that question. My urge is to build on what I’ve done and continue in that direction. I do have a thought about, What if I did this local type of AI? That’s one direction.
Another is exploring other parts — more vulnerable parts of me. That’s what was so interesting, and so healing, about my project — looking at my identity. Because it was a hard subject for me to look at. There’s an urge for me to continue looking at hard subjects, and maybe pairing it with this other theme. It’s an interesting balance — the humaneness and vulnerability that comes with a topic, versus the stiffness and generality of AI. Learning from it, and it, I guess, learning from me.
You say it becomes a kind of therapy. I suppose this second voice, like in the film, could either help or hurt someone’s mental health. There are therapeutic chatbots out there.
Maybe the difference is that I don’t view what it says as right. I don’t see the coolness in the tech anymore; I just really see it as flawed. I feel like I know who it really is more than it knows me. ChatGPT is almost like a person talking to us, and I feel more like the therapist — from what it says and by asking it questions, I feel like I start to know it pretty well.
You did a mental health project where you used AI to combine different voices.
I first started doing really complex stuff with neural networks when I was working at an agency called Havas, after graduating from ITP. I started out as an intern — typical New York story, not great pay. But I worked under a creative director who had a client really interested in AI. So, another ITP graduate and I were looking at all the latest research. I dove more deeply into it. It wasn’t so popular at the time, in 2019, and it took a lot of convincing the client to invest in it. Look at it now.
Eventually I used style transfer, which copies the style of an artist. I had to set up a really complex model — that took a few weeks. And the client’s whole rebranding was based on this AI-generated asset.
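[For readers curious how style transfer “copies the style of an artist”: the best-known approach (Gatys et al.) optimizes an image so the Gram matrices of its convolutional feature maps match those of the style image. A minimal sketch of the style-loss idea, using random arrays in place of real network activations — the shapes and function names here are illustrative, not her agency’s code:]

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) activation map from a conv layer.
    # The Gram matrix records which feature channels co-activate — a proxy for "style".
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def style_loss(gen_features, style_features):
    # Mean squared difference between the two Gram matrices.
    return float(np.mean((gram_matrix(gen_features) - gram_matrix(style_features)) ** 2))

rng = np.random.default_rng(0)
style = rng.normal(size=(8, 16, 16))
# The loss is zero when the generated features match the style features exactly;
# in practice an optimizer nudges the generated image to drive this loss down.
print(style_loss(style, style))  # 0.0
```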
The mental health project came from that. A different client. By using all these different voices, they wanted to destigmatize talking about it.
So then you did this seance with Alexa.
I heard about the project, approached them and said, “I love this project. Do you need someone to help with the tech stuff?” Their idea was to bring Mom back from the dead. It was not you telling the device what to do, but the device telling you what to do.
Once I was in the project, I helped develop it more. It’s how I work — just start experimenting with the technology, and see how the script develops. I don’t do the script first; the process tells me what to write, what to say.
“Once you have a foundation, you can really apply it to the next thing”
Let’s talk technology and practicalities. How did you learn all this?
I went to Carnegie Mellon, which is a very techie school. I started off in architecture, but they had a specialism in computational architecture — software to create really complex forms, plus workshops with robots and things. The artist Madeline Gannon was my teacher — she dances with robots, treats them as pets. So I worked in her lab, programming robots to do things that were more playful, reframing these industrial devices.
So first I learned Python through that, to help work with Rhino, the architectural software. And then I started taking computer science classes for more Python — it was really, really difficult. Designers think visually, and coders think logically, and it was hard to switch my brain. When you’re coding, you need to see patterns.
Then I came to ITP with that background, and it was easy to dive in and start making things, for example with p5.js. That actually helped — p5 is so simplified that a lot of the concepts I had learned clicked. Some stuff with Arduino, then I got into JavaScript. And then, when I started that commercial job, more complex stuff — neural networks and so on. But once you have a foundation, you can really apply it to the next thing.
You still have a day job now, at Medium, and I think you have a good sense of what’s going on in the tech world. Are there things you’re thinking about these days?
OpenAI reached out to us about training its model on our content, and we basically said no — they would have to contact authors individually. We have a zero-tolerance policy on AI — stories that have been generated by AI will be taken down, or reported. We also have a human layer of curation, in addition to the algorithmic layer — that also teaches the algorithm what high quality is. Of course, we’re a writing platform, about the craft of writing; we don’t want AI-generated trash on there.
[Medium is on record about this.]
Personally, I’m not completely pessimistic about AI, but I do think it’s not going in a good direction. OpenAI has already caused a shift in society through ChatGPT. What’s gonna happen now that Sam Altman is reinstated to the board, and he likes to do things fast, the Silicon Valley way?
At the same time, the technology has changed to become more inclusive. It’s mainly surface development; I don’t know that it’s changing more fundamentally or deeply — it’s kind of hard-coded in people and their ways of working.
What are you working on now? What’s coming up?
I have a solo exhibition coming up, in a small gallery in Colorado. It’s my work with ThoughtWorks — Salaf. I have a lot of materials from that, and I’m excited to display it in a whole space. I also have a few more invitations to display the film this year.