This essay was written in 2018, when I was a second-year undergraduate, as part of a philosophy course at the Higher School of Economics. I still find some parts of it relevant, so I’m publishing it retroactively. It was originally written in Russian; Claude Opus 4 translated it, and I fixed minor translation issues. There is also a self-reflection after the essay, written in June 2025.

These days, news outlets regularly run headlines like “AI Learns to Paint” or “Neural Network Trained to Create Music,” and, more recently, “AI Creates Horror Movie.” With this essay, I want to show that people who use these examples to prove AI’s creative abilities are subtly wrong. At least for now.

Let’s first define our terms. First off, what’s creativity? I think it’s appropriate to use the Cambridge Dictionary’s definition: the ability to produce or use original and unusual ideas¹. Now let’s define artificial intelligence (which we’ll sometimes call a “neural network,” “neural net,” or “machine”). Let’s use the definition introduced by John McCarthy²: “the science and technology of making intelligent machines, especially intelligent computer programs,” where intelligence means “the computational component of the ability to achieve goals.” Following John Searle’s³ classification, we distinguish between “strong” and “weak” AI, where strong AI is a real computer brain: with the right set of programs, such a machine would be capable of understanding what’s happening and of having other cognitive states.

To answer whether AI can do creative work, we’ll only need to consider strong AI, since creativity is still considered a conscious process.

For simplicity, let’s narrow the topic and focus only on painting, since neural networks are actively used for generating images, and this area is pretty well studied. So let’s look at how computers “paint” right now. There are two approaches.

narrator note: at this point I had no clue about diffusion-based models taking off and probably had no clue about diffusion at all

The first approach is that the algorithm gets paintings by some artist and a source image as input, then it memorizes the key features of the artist’s style and tries to paint an image that wouldn’t differ much in style from the artist’s paintings but would still look like the source image. I think it’s obvious that this approach is hard to call creativity, since it’s basically just applying the style of already-painted works to some images. This approach is pure “Chinese room”⁴ — the algorithm doesn’t understand what’s happening, doesn’t paint anything “in a creative burst,” but just tries to fit the image to a known style — thus deceiving observers. This way you can’t create a unique work of art, so it won’t fit the definition of “creativity.” If AI can do creative work, it’s definitely not like this.
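To make this first approach concrete, here’s a minimal sketch of optimization-based style transfer in the spirit of Gatys et al. (2015), written in PyTorch. Everything specific here (the VGG layer indices, the weights, the step count) is an illustrative assumption rather than a canonical recipe: the idea is to nudge an output image toward the source’s content features while matching the Gram-matrix statistics of the artist’s style.

```python
# A minimal sketch of optimization-based style transfer (in the spirit of
# Gatys et al., 2015). Inputs are assumed to be (1, 3, H, W) tensors,
# ImageNet-normalized; layer indices and weights are illustrative.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER = 21                 # conv4_2 in VGG-19
STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1 ... conv5_1

def features(x):
    """Run x through VGG, collecting content features and style Grams."""
    content, styles = None, []
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i == CONTENT_LAYER:
            content = x
        if i in STYLE_LAYERS:
            _, c, h, w = x.shape
            f = x.reshape(c, h * w)
            styles.append(f @ f.T / (c * h * w))  # Gram matrix = style statistics
    return content, styles

def stylize(source, painting, steps=300, style_weight=1e5):
    with torch.no_grad():
        target_content, _ = features(source)   # what the image should depict
        _, target_styles = features(painting)  # how it should be painted
    image = source.clone().requires_grad_(True)
    opt = torch.optim.Adam([image], lr=0.02)
    for _ in range(steps):
        content, styles = features(image)
        loss = F.mse_loss(content, target_content) + style_weight * sum(
            F.mse_loss(s, t) for s, t in zip(styles, target_styles))
        opt.zero_grad(); loss.backward(); opt.step()
    return image.detach()
```

Note that there is no “creative burst” anywhere in this loop: it’s gradient descent on a fixed similarity objective, which is exactly why the paragraph above calls it a “Chinese room.”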

The second approach is more interesting. Images are also fed in as input, but here two neural networks participate: a “generator” and a “discriminator.” The “generator” creates an image, and the “discriminator” evaluates the “generator’s” work: several images from the input data are mixed with the “generator’s” created image, and the “discriminator” has to figure out which one was generated and which ones are real. If the “discriminator” guesses the synthetically created image, the “generator” redoes it. This continues until the “discriminator” stops correctly identifying artificially created images altogether. Interestingly, this approach essentially models the Turing test, except that it’s not a human deciding what’s created by a machine and what’s not (that would be too slow and expensive, and sometimes inaccurate) but another machine. The paintings turn out different from the input data, in different styles, so by the dictionary definition, the algorithm is formally doing creative work. And it even passes the Turing test!
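This generator/discriminator setup is a generative adversarial network, or GAN (Goodfellow et al., 2014). Below is a minimal, hedged sketch of its training step in PyTorch; the tiny fully-connected networks and all hyperparameters are placeholders I chose for illustration (real painting generators are convolutional), but the two losses are the essence: the discriminator learns to spot fakes, and the generator learns to stop being spotted.

```python
# A toy GAN training step: the "generator" paints, the "discriminator" judges.
# Architectures and sizes are illustrative placeholders, not a real model.
import torch
import torch.nn as nn

LATENT, IMG = 64, 28 * 28  # noise dimension, flattened image size (assumed)

G = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                  nn.Linear(256, IMG), nn.Tanh())      # noise -> "painting"
D = nn.Sequential(nn.Linear(IMG, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1))                   # image -> "real?" logit

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real):  # real: a batch of real paintings, shape (B, IMG)
    b = real.size(0)
    fake = G(torch.randn(b, LATENT))
    # Discriminator: tell real paintings (label 1) from generated ones (label 0).
    d_loss = (bce(D(real), torch.ones(b, 1)) +
              bce(D(fake.detach()), torch.zeros(b, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: "redo" its images so the discriminator stops spotting them.
    g_loss = bce(D(fake), torch.ones(b, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```

In the essay’s terms, training ends when the discriminator can no longer tell which images were generated.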

So it seems like here’s our universal artist. What’s wrong? Empirically, researchers found a problem with that same “discriminator” that conducts the Turing test for the “generator.” The problem is this: if you replace several pixels in an image so that the changes are completely invisible to the human eye (a person would still see a normal image and not distinguish it from a work of art), it’s quite likely that such changes won’t go unnoticed by the machine: the “discriminator” will classify such an image as wildly different from the original paintings. In other words, replacing a few pixels fundamentally changes the machine’s verdict on the painting’s validity. I believe this can only be interpreted one way: such a neural network doesn’t understand what’s happening at all, doesn’t know what distinguishes a work of art from non-art, and gets confused in its testimony. So we’ve got an interesting situation: an AI that passes the Turing test but doesn’t understand the process of creating paintings and can’t distinguish a real painting from a slightly modified one. It turns out we’ve just modeled the “Chinese room” experiment differently. Now instead of John Searle sitting in the room, we have a neural network, and instead of Chinese replies, we get paintings. To us outside observers, it might seem like there’s something intelligent in the room, capable of painting, capable of creating. But we know for sure that the object inside the room (our machine) doesn’t comprehend its creations.
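The pixel trick described above is what the adversarial-examples literature formalizes; its simplest version is the fast gradient sign method (FGSM, Goodfellow et al., 2014). Here is a hedged sketch, assuming `model` is any differentiable network that outputs a “this looks like a real painting” score:

```python
# FGSM-style adversarial perturbation: nudge every pixel by an imperceptible
# amount in the direction that most lowers the model's "real painting" score.
import torch

def adversarial_example(model, image, eps=2 / 255):
    image = image.clone().requires_grad_(True)
    score = model(image)          # logit: how "real" the painting looks
    score.sum().backward()        # gradient of the verdict w.r.t. the pixels
    perturbed = image - eps * image.grad.sign()  # step against the score
    return perturbed.clamp(0, 1).detach()        # still a valid image to us
```

With `eps` around 2/255 the change is invisible to a human, yet a brittle discriminator’s verdict can flip completely, which is exactly the essay’s point.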

I should explain why I take the machine’s verdict flipping over a few pixels as proof that it paints unconsciously. It’s strange because an artist who has just created a painting would be sure that an error invisible to the human eye couldn’t ruin it: it would still be a work of art, because the artist understands the value of their painting and the value of art in general. Right now, machines are too unstable in their understanding of a “work of art,” so we can’t say for sure that they’re doing creative work, because to do any work, you should understand what that work consists of.

I’m convinced that when AI is taught to be more confident in creating works of art, when the definition of “creativity” becomes “stable” for AI (small changes won’t confuse it), then machines will truly become capable of doing creative work. Heidegger writes in “The Origin of the Work of Art” that an artist becomes an artist through creation: “it is the work that allows the artist to emerge as a master of their art.” This applies to AI artists too, since they’re also defined by their creations.

Machine creators are actually a pretty dangerous thing for creativity, because they’ll immediately devalue the work of mediocre artists: machines are more productive, so they’ll produce more works. In that case, we’ll likely see a differentiation of art into “machine” and human, and brilliant human art will significantly increase in value; being different from mass-produced art, it’ll become unique. So this will elevate creativity to a new level, similar to what the invention of photography did. It turns out that as soon as machines learn to create artistic works, the standards of creativity will rise to a level inaccessible to machines at that moment, and machine-created paintings will stop being considered novel in conception. Does this mean that when machines learn to create, they’ll immediately unlearn it and have to learn again?

What do we have? Yes, the existence of AI capable of creating is definitely possible. And I think it’ll happen pretty soon. But will such creativity be valued?

Post Notes:

The main argument seems to come from the pixel problem (adversarial attacks) standing in the way of true AI creativity. To me today, that’s pretty much just a scale problem, and models will get more robust with size and scale. Diffusion and AR over images seem to also fit the criteria, though such a process introduces the aspect of prompting with text: outsourcing the creative part by seeding the result externally?

Another post-note is that I now don’t think understanding is required. What even is understanding in that case? In my view, understanding is being robust to exogenous factors, not getting lost when something unpredictably changes, and this type of issue is definitely solved with scale as models learn to operate over more complicated abstractions. So yeah, AI for sure can do creative work, and arguably the bar has already been crossed.

Everything is computer.


¹ The Russian original used S.I. Ozhegov’s definition: creativity is the creation of new cultural or material values based on original ideas.

² John McCarthy, 1955, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”

³ John Searle, 1980, “Minds, Brains, and Programs”

⁴ The Chinese Room argument: a thought experiment by John Searle intended to show that a system can appear to understand Chinese by following rules without any actual understanding.