
The Third Simulation: Art in the Age of Generative AI


If art is an imitation of reality, then AI art is an imitation of an imitation. What remains of creativity when copies produce copies?

In the late summer of 2022, a painting called Théâtre D’Opéra Spatial won first place in the digital arts category at the Colorado State Fair. The only problem was that it had not been painted. Its creator, Jason Allen, had not touched a brush, mixed pigments, or spent hours at a canvas. He entered a text description into Midjourney, made adjustments, and selected from among hundreds of results the AI produced in minutes. When challenged on his win, he answered calmly: “I followed the rules.”

The debate that erupted has not quieted since — not because Jason Allen lied, but because he told the truth. And his honesty was more unsettling than any lie could have been.

In our previous article, Algorithmic Republic: Who Governs the Digital City?, we saw how the algorithm governs what we see and what we think. This article follows the same thread into more sensitive territory: creativity itself. When the algorithm no longer merely recommends art but produces it, the question becomes: what is left of art at all?


Plato and the Artist: Imitation in the Third Degree

In the third article of this series, Forms vs. Code, we touched briefly on Plato’s relationship to art. Here we stay with it longer, because it is more relevant today than it was in his own lifetime.

Plato hated art. Not because he failed to perceive its beauty — he understood it well, and that understanding frightened him. In the Republic, he expelled poets and artists from his ideal city, because art is by nature deceptive: a painter presents an image of a chair instead of the chair itself, and the chair itself is already an imperfect copy of the “ideal chair” in the realm of Forms. The painting is therefore an imitation of an imitation, a copy of a copy, a doubled distance from truth.

What made art most dangerous in his view was that the artist teaches you nothing real about the chair. The carpenter who built it knows its wood, its joints, its load-bearing limits. The philosopher who contemplates it knows its essential nature. But the painter knows only its appearance from one angle in one light. Art is not knowledge — it is the illusion of knowledge, and for that reason more dangerous than honest ignorance.

Now consider what generative AI does. Midjourney trained on hundreds of millions of human images — which are themselves imitations of reality — and produces new images that represent a statistical consensus across those imitations. An imitation of imitations of imitations. If Plato placed art in the third rank of truth, where do we place art produced by an algorithm that learned from human art that learned from reality?

Perhaps Plato simply lacked the vocabulary to describe how far we have traveled. Human art was the third degree from truth. AI art trained on human art may be the fourth. And when AI is later trained on its own output, what do we call what emerges?

See our guide: From Midjourney to Free Flux: AI Image Generation Platforms Guide 2026


How AI Makes an Image: The Technical Explanation That Changes Everything

To understand what is happening philosophically, we need to understand it technically. What Midjourney, DALL-E 3, and Stable Diffusion do resembles no previous drawing tool. This is not image manipulation. It is not compositing predefined elements. What happens is deeper than either.

These programs rely on what are called Diffusion Models. During training, random noise is added to millions of images progressively until they are reduced to pure static — and the model learns to reverse this process, learning how to extract a meaningful image from chaos. During generation, the model starts from random noise and applies what it learned, guided by a text description, until an image forms that matches the description and belongs to the visual world the model encountered during training.

The key point: the model does not “know” what a horse or a castle or a sunset is. It knows only that these patterns of pixels accompany these words in the billions of examples it has seen. It is a vast statistical memory of relationships between concepts and visual appearances — no understanding, no intention, no vision. And yet the results are, most of the time, extraordinary.
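The forward half of the process described above can be illustrated with a toy sketch in plain NumPy. This is a minimal illustration, not how Midjourney or Stable Diffusion are actually implemented: the image, the step count, and the linear noise schedule here are invented for demonstration, and the trained neural network that performs the reverse (denoising) step is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, t, betas):
    """Noise a clean image x0 for t steps of the schedule `betas`.

    Returns the noised sample and alpha_bar, the fraction of the
    original signal's variance that survives after t steps.
    """
    alpha_bar = np.prod(1.0 - betas[:t])  # cumulative "signal kept"
    noise = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bar) * x0 + np.sqrt(1.0 - alpha_bar) * noise
    return x_t, alpha_bar

# A made-up 8x8 "image" and a 1000-step linear noise schedule.
image = np.ones((8, 8))
betas = np.linspace(1e-4, 0.02, 1000)

_, kept_early = forward_diffuse(image, 10, betas)    # a few steps in
_, kept_late = forward_diffuse(image, 1000, betas)   # end of schedule

print(f"signal kept after 10 steps:   {kept_early:.4f}")
print(f"signal kept after 1000 steps: {kept_late:.6f}")
```

After the full schedule, almost no trace of the original image survives: the sample is statistically indistinguishable from pure static. Generation runs this process in reverse, starting from static and repeatedly subtracting the noise a trained network predicts, nudged at each step toward the text description.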

Leading Generative AI Models for Image, Video, and Music (2024–2026)

| Tool | Type | Developer | What It Offers |
| --- | --- | --- | --- |
| Midjourney v6 | Still images | Midjourney Inc. | Exceptional cinematic quality; the strongest aesthetically |
| DALL-E 3 | Still images | OpenAI | Accurate text rendering within images; integrated in ChatGPT |
| Stable Diffusion | Still images | Stability AI | Open source; fully customizable |
| Sora | Video | OpenAI | Cinematic-quality video up to one minute from text |
| Runway Gen-3 | Video | Runway | Video generation and editing; used professionally |
| Udio / Suno | Music | Udio / Suno AI | Complete songs with lyrics, instrumentation, and vocals from text |

Sora from OpenAI deserves special attention. Released publicly in December 2024, it produces video clips up to one minute long with genuine cinematic quality from a text description — moving cameras, shifting light, characters interacting with their environment with some physical coherence. What once required a production crew, a budget, and weeks of shooting is now available to anyone in minutes. Most specialists estimate the gap between this capability and a complete professional film has narrowed to two or three years at most.

See our guide: AI Video and Audio: Sora, Runway, Pika and ElevenLabs — Where Are We in 2026?

The Copyright War: Who Owns the Memory of an AI?

In February 2023, Getty Images filed a lawsuit against Stability AI, claiming the model had trained on more than 12 million of its copyrighted images without a license or compensation. The same year, hundreds of artists filed similar suits against Midjourney, Stability AI, and DeviantArt.

The cases remain in court, but they raise a question that has no clear legal answer yet: does “learning” from a copyrighted artwork constitute infringement?

The developer companies argue that machine learning is analogous to human learning: an artist spends a lifetime looking at millions of paintings, is influenced by them, and pays their creators nothing. What the model produces is not a literal copy but a new, inspired creation. The affected artists counter: a human being sees a painting and processes it within a full life context of experience, emotion, and bodily memory. The model “sees” and compresses statistically. The difference is not one of degree but of kind.

The more intractable dimension is economic rather than legal. The AI’s ability to produce something resembling any given illustrator’s or designer’s style in seconds directly threatens those professionals’ ability to earn a living from their craft — not because AI is necessarily better, but because it is faster and cheaper by a margin that quality alone cannot overcome in competitive markets.


Who Is the Author? A Question the Courts Would Rather Not Answer

In March 2023, the US Copyright Office ruled that works produced entirely by AI are not eligible for copyright protection, because American law requires a human author. At the same time, it ruled that human-created content produced with AI assistance may be eligible — without specifying precisely where the line falls.

This ambiguity is not merely a legal failure. It reflects a genuine philosophical uncertainty: who made the image Jason Allen produced?

You could say Allen made it — he chose the text description, made adjustments, and selected the result, and these are real creative decisions. You could say Midjourney made it — the algorithm converted words into pixels. You could say the artists the model trained on made it — their aesthetic vision constitutes what the model “remembers.” And you could say no one made it, in any traditional sense.

Each of these answers contains a partial truth. Together they do not add up to a complete one. And that is precisely what troubles artists far more than any lawsuit.


WALL-E and the Machine That Tells Stories: What Cinema Says

In Pixar’s extraordinary film WALL-E (2008), a small robot lives alone on an abandoned Earth, spending his days collecting and stacking objects left behind by humans. Among what he has gathered is an old disc — the musical film Hello, Dolly! He watches it again and again, learning from its scenes how to express love, imitating the toe-tip walk he sees in the dance numbers. The robot learns art, reproduces it, and responds to it emotionally in a way that appears genuine.

The film was not about generative AI — but it described it, with remarkable precision, years before it existed: an entity that learns from human creativity and reproduces it, with genuine uncertainty about whether its imitation is “real feeling” or merely a behavioral pattern acquired from data.

The difference between WALL-E and Midjourney is that WALL-E embodies that ambiguity in a way that makes you empathize and wonder. Midjourney does not wonder. It does not sit in puzzlement before its own output the way WALL-E sits before his film. It produces, and moves to the next request. The question this difference raises is not whether AI feels — it is whether feeling is necessary for art at all.

If an AI produces a painting that moves genuine emotion in its viewer, while feeling nothing during its creation, to whom do those feelings belong? Is beauty in the image, or in the eyes of the person who sees it?

Music: The Hardest Front — and the Fastest to Fall

Three years ago, if you had asked most people which art form would be hardest for AI to master, the most common answer would have been music. Because music is felt, not seen. Because timing within it is measured in milliseconds. Because it demands an understanding of emotion, not just pattern recognition.

They were wrong about the timeline. Suno and Udio can now produce complete songs, with voices, arrangements, harmonies, and lyrics, from a text description no longer than two sentences. The quality of what they produce in 2025 exceeds most of the music published daily on Spotify and Apple Music by unknown human artists. This is not merely an opinion: in blind-listening studies, subjects have failed to distinguish AI music from human-made music more than sixty percent of the time.

The music industry responded at the speed its interests demanded. In June 2024, Universal Music, Sony Music, and Warner Music filed coordinated lawsuits against Suno and Udio, alleging unauthorized training on their protected catalogues. A year earlier, protections against AI replacement were among the demands of the 2023 SAG-AFTRA strike, with the union securing provisions protecting performers’ vocal identities from replication.

But lawsuits do not slow technology. Every day, AI music models generate output that dwarfs the combined production of the entire global music industry. The real question is no longer “can AI make music?” It is: “what does it mean that making music is no longer exclusive to humans?”

Art as Testimony: What the Algorithm Cannot Invent

I once watched a documentary report about the war photographer James Nachtwey. I did not know much about him before. I paused the video at a photograph he took in Rwanda in 1994: a man on the ground, a shadow behind him, light coming through a doorway. I stayed with the image for a long time.

What stopped me was not the technique — the technique was simple. What stopped me was that someone had been there. In that place, at that moment, with his body and his fear and his choice to keep an eye open and a finger on the shutter. The photograph was testimony in the full sense of the word: a conscious being had faced reality and chosen to record it.

Generative AI cannot produce testimony. It can produce what testimony looks like visually, something that triggers in the viewer what testimony triggers. But no one was there. No fear, no choice, no body that experienced anything. The difference between the two is not merely aesthetic — it is ethical and metaphysical.

This same difference appears across the history of art when specific suffering becomes the source of creation. The American photographer Cindy Sherman captures the image and experience of women from inside a life she has lived. The Mexican surrealist painter Frida Kahlo painted her suffering body from within that body. The African-American poet Langston Hughes wrote about Harlem in a voice carrying a collective memory that cannot be acquired through training. Every one of these works holds inside it something like standing in a place when something happened. And that is precisely what a model trained on neutral data cannot invent.


Plato’s Cave and Art: A Fifth Reading

In the original cave, the prisoners see shadows and call them reality. The artist, in Plato’s conception, presents the shadow of a shadow and calls it art. Generative AI produces the shadow of the shadow of the shadow and calls it creativity.

But here lies the complication Plato did not anticipate: some shadows of shadows move something real in the viewer. A Midjourney image may make you cry. A Suno song may stir longing in you. A Sora clip may leave you genuinely awed. These are real responses to stimuli that have no conscious intention behind them. And this is precisely what makes the philosophical question uncomfortable: if the aesthetic experience is complete in the absence of intent and testimony, are intent and testimony necessary to art at all?

Plato would have said yes — because for him art ought to draw us closer to truth, not further from it. Art produced by something that does not know truth cannot be a vehicle toward it. But the history of art teaches us that beauty sometimes draws us toward truth by paths we do not understand, and that a tool can serve a purpose that exceeds its maker’s comprehension. A musical instrument does not understand music. The question is whether the human being who plays it is strictly necessary.

Conclusion: What Remains When Everyone Produces Everything

To be honest about it: we are in a strange historical moment. Creative tools have reached a power at which the barrier between “I want to express something” and “I was able to express it” has nearly vanished. Anyone with an idea and a sentence of text can now produce an image, a piece of music, or a filmed scene of a quality that five years ago was the exclusive province of trained professionals, to say nothing of those who spent years at university studying their craft.

This is not necessarily bad. Before the invention of photography, realistic depiction was a rare skill monopolized by professionals. After photography, painting was freed to become pure expression rather than description — and produced Impressionism, Abstraction, and everything that followed. Technology that removes a function from art often liberates art to discover what was hidden behind that function.

But this liberation raises a question without precedent: when the machine produces what the human being could not have imagined, what becomes the human role? Not technical skill — that is over. Perhaps the role becomes choice, direction, and meaning. Who decides what deserves to exist and why? Who is present? Who bears witness?

In the next article in this series — The Ring of Gyges Online: Morality Without Consequences — we move from the question of who creates to a more disturbing question: who does a person become when no one can see them? Social media and online anonymity may have produced the most powerful Ring of Gyges in all of history.


References

  1. Plato. The Republic, Book X — Art and Imitation. (See: Plato’s Cave: A Late Reading)
  2. Roose, Kevin. “An A.I.-Generated Picture Won an Art Prize. Artists Aren’t Happy.” The New York Times, September 2, 2022. nytimes.com
  3. OpenAI. Sora — Technical Report. February 2024. openai.com/sora
  4. Getty Images v. Stability AI. U.S. District Court, Delaware. Filed February 2023.
  5. Andersen v. Stability AI et al. U.S. District Court, Northern District of California. Filed January 2023.
  6. U.S. Copyright Office. Copyright and Artificial Intelligence — Part 1: Digital Replicas. July 2024.
  7. Universal Music Group, Sony Music Entertainment, Warner Music Group v. Suno Inc., Udio Inc. U.S. District Court, 2024.
  8. WALL-E. Dir. Andrew Stanton. Pixar / Disney, 2008.
  9. Also in this series: Forms vs. Code: Are Digital Worlds More Real Than Reality?
  10. Also in this series: Algorithmic Republic: Who Governs the Digital City?
  11. Related: A Statistical Mirror: What AI Images Reveal About Us
  12. Related: From Midjourney to Free Flux: AI Image Generation Platforms Guide 2026
  13. Related: AI Video and Audio: Sora, Runway, Pika and ElevenLabs — Where Are We in 2026?
