Artists Are Suing Artificial Intelligence Companies and the Lawsuit Could Upend Legal Precedents Around Art
https://www.artnews.com/art-in-america/features/midjourney-ai-art-image-generators-lawsuit-1234665579/ (May 5, 2023)

Mike Winkelmann is used to being stolen from. Before he became Beeple, the world’s third most-expensive living artist with the $69.3 million sale of Everydays: The First 5000 Days in 2021, he was a run-of-the-mill digital artist, picking up freelance gigs from musicians and video game studios while building a social media following by posting his artwork incessantly.

Whereas fame and fortune in the art world come from restricting access to an elite few, making it as a digital creator is about giving away as much of yourself as possible. For free, all the time.

“My attitude’s always been, as soon as I post something on the internet, that’s out there,” Winkelmann said. “The internet is an organism. It just eats things and poops them out in new ways, and trying to police that is futile. People take my stuff and upload it and profit from it. They get all the engagements and clicks and whatnot. But whatever.”

Winkelmann leveraged his two million followers and became the face of NFTs. In the process, he became a blue-chip art star, with an eponymous art museum in South Carolina and pieces reportedly selling for close to $10 million to major museums elsewhere. That’s without an MFA, a gallery, or prior exhibitions.

“You can have [a contemporary] artist who is extremely well-selling and making a shitload of money, and the vast majority of people have never heard of this person,” he said. “Their artwork has no effect on the broader visual language of the time. And yet, because they’ve convinced the right few people, they can be successful. I think in the future, more people will come up like I did—by convincing a million normal people.”

In 2021 he might have been right, but more recently that path to art world fame is being threatened by a potent force: artificial intelligence. Last year, Midjourney and Stability AI turned the world of digital creators on its head when they released AI image generators to the public. Both now boast more than 10 million users. For digital artists, the technology represents lost jobs and stolen labor. The major image generators were trained by scraping billions of images from the internet, including countless works by digital artists who never gave their consent.

In the eyes of those artists, tech companies have unleashed a machine that scrambles human—and legal—definitions of forgery to such an extent that copyright may never be the same. And that has big implications for artists of all kinds.

Left: night scene with Kara, 2021, Sam Yang; Right: Samdoesarts v2: Model 8/8, Prompt: pretty blue-haired woman in a field of a cacti at night beneath vivid stars (wide angle), highly detailed.

In December, Canadian illustrator and content creator Sam Yang received a snide email from a stranger asking him to judge a sort of AI battle royale in which he could decide which custom artificial intelligence image generator best mimicked his own style. In the months since Stability AI released the Stable Diffusion generator, AI enthusiasts had rejiggered the tool to produce images in the style of specific artists; all they needed was a sample of a hundred or so images. Yang, who has more than three million followers across YouTube, Instagram, and Twitter, was an obvious target.

Netizens took hundreds of his drawings posted online to train the AI to pump out images in his style: girls with Disney-wide eyes, strawberry mouths, and sharp anime-esque chins. “I couldn’t believe it,” Yang said. “I kept thinking, This is really happening … and it’s happening to me.”

Yang trawled Reddit forums in an effort to understand how anyone could think it was OK to do this, and kept finding the same assertion: there was no need to contact artists for permission. AI companies had already scraped the digital archives of thousands of artists to train the image generators, the Redditors reasoned. Why couldn’t they?

Like many digital artists, Yang has been wrestling with this question for months. He doesn’t earn a living selling works in rarefied galleries, auction houses, and fairs, but instead by attracting followers and subscribers to his drawing tutorials. He doesn’t sell to collectors, unless you count the netizens who buy his T-shirts, posters, and other merchandise. It’s a precarious environment that has gotten increasingly treacherous.

“AI art seemed like something far down the line,” he said, “and then it wasn’t.”

Left: JH’s Samdoesarts: Model 5/8, Prompt: pretty blue-haired woman in a field of a cacti at night beneath vivid stars (wide angle), highly detailed. Right: Kara sees u, Kara unimpressed, 2021, Sam Yang

Yang never went to a lawyer, as the prospect of fighting an anonymous band of Redditors in court was overwhelming. But other digital artists aren’t standing down so easily. In January, several filed a class action lawsuit targeted at Stability AI, Midjourney, and the image-sharing platform DeviantArt.

Brooklyn-based illustrator Deb JJ Lee is one of those artists. By January, Lee was sick and tired of being overworked and undervalued. A month earlier, Lee had gone viral after posting a lowball offer from Epic Games to do illustration work for the company’s smash hit Fortnite, arguably the most popular video game in the world. Epic, which generated over $6 billion last year, offered $3,000 for an illustration and ownership of the copyright. For Lee, it was an all-too-familiar example of the indignities of working as a digital artist. Insult was added to injury when an AI enthusiast—who likely found out about Lee from the viral post—released a custom model based on Lee’s work.

“I’ve worked on developing my skills my whole life and they just took it and made it to zeros and ones,” Lee said. “Illustration rates haven’t kept up with inflation since the literal 1930s.”

Illustration rates have stagnated and, in some cases, shrunk since the ’80s, according to Tim O’Brien, a former president of the Society of Illustrators. The real money comes from selling usage rights, he said, especially to big clients in advertising. Lee continued, “I know freelancers who are at the top of their game that are broke, I’m talking [illustrators who do] New Yorker covers. And now this?”

Lee reached out to their community of artists and, together, they learned that the image generators, custom or not, were trained on the LAION dataset, a collection of 5.6 billion images scraped, without permission, from the internet. Almost every digital artist has images in LAION, given that DeviantArt and ArtStation were lifted wholesale, along with Getty Images and Pinterest.

The artists who filed suit claim that the use of these images is a brazen violation of intellectual property rights; Matthew Butterick, who specializes in AI and copyright, leads their legal team. (Getty Images is pursuing a similar lawsuit, having found 12 million of their images in LAION.) The outcome of the case could answer a legal question at the center of the internet: in a digital world built on sharing, are tech companies entitled to everything we post online?

The class action lawsuit is tricky. While it might seem obvious to claim copyright infringement, given that billions of copyrighted images were used to create the technology underlying image generators, the artists’ lawyers are attempting to apply existing legal standards made to protect and restrict human creators, not a borderline-science-fiction computing tool. To that end, the complaint describes a number of abuses: First, the AI training process, called diffusion, is suspect because it requires images to be copied and re-created as the model is tested. This alone, the lawyers argue, constitutes an unlicensed use of protected works.

From this understanding, the lawyers argue that image generators essentially call back to the dataset and mash together millions of bits of millions of images to create whatever image is requested, sometimes with the explicit instruction to recall the style of a particular artist. Butterick and his colleagues argue that the resulting product then is a derivative work, that is, a work not “significantly transformed” from its source material, a key standard in “fair use,” the legal doctrine underpinning much copyright law.

As of mid-April, when Art in America went to press, the courts had made no judgment in the case. But Butterick’s argument irks technologists who take issue with the suit’s description of image generators as complicated copy-paste tools.

“There seems to be this fundamental misunderstanding of what machine learning is,” said Ryan Murdock, a developer who has worked on the technology since 2017, including for Adobe. “It’s true that you want to be able to recover information from the images and the dataset, but the whole point of machine learning is not to memorize or compress images but to learn higher-level general information about what an image is.”

Diffusion, the technology undergirding image generators, works by adding random noise, or static, to an image in the dataset, Murdock explained. The model then attempts to fill in the missing parts of the image using hints from a text caption that describes the work, and those captions sometimes refer to an artist’s name. The model’s efforts are then scored based on how accurately the model was able to fill in the blanks, leading it to contain some information associating style and artist. AI enthusiasts working under the name Parrot Zone have completed more than 4,000 studies testing how many artist names the model recognizes. The count is close to 3,000, from art historical figures like Wassily Kandinsky to popular digital artists like Greg Rutkowski.
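
To make the training loop Murdock describes concrete, here is a minimal sketch of a caption-conditioned denoising step in Python (PyTorch). The tiny model, dimensions, and random tensors are illustrative stand-ins, not Stable Diffusion’s actual architecture or data.

```python
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Toy stand-in for the U-Net that real diffusion models use."""
    def __init__(self, image_dim=64, caption_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(image_dim + caption_dim, 128),
            nn.ReLU(),
            nn.Linear(128, image_dim),
        )

    def forward(self, noisy_image, caption_embedding):
        # The caption ("...in the style of artist X") guides the denoising.
        return self.net(torch.cat([noisy_image, caption_embedding], dim=-1))

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

image = torch.randn(8, 64)      # stand-in for a batch of training images
caption = torch.randn(8, 16)    # stand-in for caption embeddings
noise = torch.randn_like(image)

noisy_image = image + noise                     # add random static to the image
predicted_noise = model(noisy_image, caption)   # try to fill in what was hidden

# The model is "scored" on how accurately it recovered the missing parts.
loss = nn.functional.mse_loss(predicted_noise, noise)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```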

The class action suit aims to protect human artists by asserting that, because an artist’s name is invoked in the text prompt, an AI work can be considered “derivative” even if the work produced is the result of pulling content from billions of images. In effect, the artists and their lawyers are trying to establish copyright over style, something that has never before been legally protected.

A side-by-side comparison of works by Lynthia Edwards (left) and Deborah Roberts (right), included as an exhibit in Roberts’s complaint filed in August 2022.

The most analogous recent copyright case involves fine artists debating just that question. Last fall, well-known collage artist Deborah Roberts sued artist Lynthia Edwards and her gallerist, Richard Beavers, accusing Edwards of imitating her work and thus confusing potential collectors and harming her market. Attorney Luke Nikas, who represents Edwards, recently filed a motion to dismiss the case, arguing that Roberts’s claim veered into style as opposed to the forgery of specific elements of her work.

“You have to give the court a metric to judge against,” Nikas said. “That means identifying specific creative choices, which are protected, and measuring that against the supposedly derivative work.”

Ironically, Nikas’s argument is likely to be the one used by Stability AI and Midjourney against the digital artists. Additionally, the very nature of the artists’ work as content creators makes assessing damages a tough job. As Nikas described, a big part of arguing copyright cases entails convincing a judge that the derivative artwork has meaningfully impacted the plaintiff’s market, such as the targeting of a specific collecting class.

In the end, it could be the history of human-made art that empowers an advanced computing tool: copyright does not protect artistic style so that new generations of artists can learn from those who came before, or remix works to make something new. In 2013 a federal appeals court famously ruled that Richard Prince did not violate copyright in incorporating a French photographer’s images into most of his “Canal Zone” paintings, to say nothing of the long history of appropriation art practiced by Andy Warhol, Barbara Kruger, and others. If humans can’t get in trouble for that, why should AI?

Three of 400 “Punks by Hanuka” created by a cyberpunk brand that provides a community around collaborations, alpha, and whitelists on AI projects.

In mid-March, the United States Copyright Office released a statement of policy on AI-generated works, ruling that components of a work made using AI were not eligible for copyright. This came as a relief to artists who feared that their most valuable asset—their usage rights—might be undermined by AI. But the decision also hinders the court’s ability to determine how artists are being hurt financially by AI image generators. Quantifying damages online is tricky.

Late last year, illustrator and graphic novelist Tomer Hanuka discovered that someone had created a custom model based on his work, and was selling an NFT collection titled “Punks by Hanuka” on the NFT marketplace OpenSea. But Hanuka had no idea whom to contact; such scenarios usually involve anonymous users who disappear as soon as trouble strikes.

“I can’t speak to what they did exactly because I don’t know how to reach them and I don’t know who they are,” Hanuka said. “They don’t have any contact or any leads on their page.” The hurt, he said, goes deeper than run-of-the-mill online theft. “You develop this language that can work with many different projects because you bring something from yourself into the equation, a piece of your soul that somehow finds an angle, an atmosphere. And then this [AI-generated art] comes along. It’s passable, it sells. It doesn’t just replace you but it also muddies what you’re trying to do, which is to make art, find beauty. It’s really the opposite of that.”

For those who benefited from that brief magical window when a creator could move more easily from internet to art world fame, new tools offer a certain convenience. With his new jet-setting life, visiting art fairs and museums around the world, Winkelmann has found a way to continue posting an online illustration a day, keeping his early fans happy by letting AI make the menial time-consuming imagery in the background.

This is exactly what big tech promised AI would do: ease the creative burden that, relatively speaking, a creator might see as not all that creative. Besides, he points out, thieving companies are nothing new. “The idea of, like, Oh my god, a tech company has found a way to scrape data from us and profit from it—what are we talking about? That’s literally been the last 20 years,” he said. His advice to up-and-coming digital artists is to do what he did: use the system as much as possible, and lean in.

That’s all well and good for Winkelmann: He no longer lives in the precarious world of working digital artists. Beeple belongs to the art market now.

Getty Images Sues Stability AI Over Photos Used to Train Stable Diffusion Image Generator
https://www.artnews.com/art-news/news/getty-images-lawsuit-stability-ai-12-million-photos-copied-stabile-diffusion-1234656475/ (February 7, 2023)

Getty Images has filed a lawsuit against artificial intelligence company Stability AI Inc., accusing it of “brazen infringement” in using more than 12 million photographs from the stock image company’s collection to train its artificial intelligence image generator Stable Diffusion.

The lawsuit was filed in a Delaware federal court on Feb. 3 and made public earlier this week. In the complaint, Getty alleges that Stability copied the company’s photographs, associated captions, and metadata “as part of its efforts to build a competing business” through its revenue-generating interface called DreamStudio.

“Stability AI now competes directly with Getty Images by marketing Stable Diffusion and its DreamStudio interface to those seeking creative imagery, and its infringement of Getty Images’ content on a massive scale has been instrumental to its success to date,” reads the complaint, which was first reported by Reuters.

Stability also offers open-source versions of Stable Diffusion that third-party developers can access, use, and build on to develop their own image-generating models. Getty Images is apparently not thrilled about that either, stating in the lawsuit, “Those third parties benefit from Stability AI’s infringement on Getty Images and, in turn, Stability AI benefits from the widespread adoption of its model.”

The lawsuit also alleges that Stability has “removed or altered Getty Images’ copyright management information, provided false copyright management information, and infringed Getty Images’ famous trademarks through various treatments on the company’s watermark” in copied photos and generated images.

Getty’s claims against Stability include copyright infringement, providing false copyright management information, removal or alteration of copyright management information, trademark infringement, unfair competition, and deceptive trade practices. Getty’s lawyers have requested a jury trial and statutory damages of up to $150,000 for each infringed work.

Earlier this year, Getty filed a separate case against the artificial intelligence image company in the United Kingdom. Meanwhile, in January, artists filed a class-action lawsuit against Stability and two other image-generating giants in the United States District Court for the Northern District of California.

In September, Getty Images announced it was banning AI-generated art from its platform due to concerns about copyright law. The lawsuit also followed a report from tech blogger Andy Baio, who sampled the 12 million images used to train Stable Diffusion, indexed the domains, and found that 15,000 were from Getty Images.
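
The kind of tally Baio describes can be approximated in a few lines of code. Below is a minimal sketch that counts source domains in a CSV manifest of sampled training rows; the file name and column layout are assumptions for illustration, not Baio’s actual script.

```python
import csv
from collections import Counter
from urllib.parse import urlparse

counts = Counter()
# Hypothetical manifest: one row per sampled training image, with a "url" column.
with open("training_sample.csv", newline="", encoding="utf-8") as f:
    for row in csv.DictReader(f):
        domain = urlparse(row["url"]).netloc.lower()  # e.g. "media.gettyimages.com"
        counts[domain] += 1

# Print the most common source domains and how many images each contributed.
for domain, n in counts.most_common(20):
    print(f"{domain}\t{n}")
```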

Getty Images and Stability AI did not immediately respond to a request for comment.

Korean Illustrator Kim Jung Gi’s ‘Resurrection’ via AI Image Generator Is Orientalism in New Clothing
https://www.artnews.com/art-news/news/kim-jung-gi-death-stable-diffusion-artificial-intelligence-1234649787/ (December 9, 2022)

It took exactly three days for someone to resurrect beloved Korean artist Kim Jung Gi after his death in early October at the age of 47. Not in flesh and blood, of course, but in code through the use of an AI image generator.

By October 6, a former French game developer known online as 5you announced that he had released a tool, based on popular open-source AI image generator Stable Diffusion, that could generate images in Kim’s iconic style with a text prompt. 5you described the tool as an “hommage [sic]” and encouraged users to “feel free to use it” so long as they “credit plz.”

Manga, anime, and comic book artists were universally outraged, with 5you telling Rest of World that he had received death threats from Kim’s fans and fellow artists. Some denounced the move as a tasteless ploy for publicity; others, meanwhile, read it as a premonition of the dystopian future that lay ahead—“they won’t let people die, they’ll work forever” read one popular comment on a TikTok. For many, 5you’s request for credit echoed a long line of white creators appropriating and then benefitting from Asian art styles.

But the controversy illuminates a truth long held by technology ethicists: New technology is not ideologically neutral; these tools latch onto and aggravate existing structural inequalities. Image generators like Stable Diffusion, DALL-E, and others are poised to do the same, creating new avenues of exploitation while also reinforcing our existing biases. 

While all artists stand to lose from AI, the Kim Jung Gi bot reveals how Asian artists in particular — long racialized as the robotic Other — are uniquely vulnerable to these tools as well as a new form of technologically-inflected Orientalism.

The Old and the New Orientalism

In the early 1900s, many advertisements, like these from New York-based Murad cigarettes, incorporated Orientalist motifs or models in Middle Eastern dress.

Historically, Orientalism in Western art was motivated by a vision of Asia as an archaic, traditionalist alternative to Europe — prone to barbarism, yet in contact with a mystic essence lost to those in modernity.

During the early period of Orientalism — typically considered the 18th and 19th centuries — it wasn’t uncommon for artists to adopt the styles and signifiers of these cultures, from clothing and decor to artistic practices like printmaking, in the hopes of imbuing their work with a kind of exotic authenticity.

In the post-World War II era, however, Asia became an increasingly dominant player in the global economy and in the development of technology. The notion of Eastern backwardness could no longer hold in the face of its growing influence, and so a new myth arose to rationalize the superiority of the Western subject while making sense of this global reconfiguration. Asian peoples began to be depicted in the media as essentially robotic — automata capable of fine-tuned execution and coordination, but lacking the sort of individual creativity and spirit that defined the Western subject.

As the theorist Wendy Hui Kyong Chun wrote in her 2019 study on “High-Tech Orientalism,” these depictions protected whiteness’ hegemony over the human by “jettisoning… the Asian/Asian American as robotic, as machine-like,” while explaining Asia’s technological proficiency in dehumanizing terms that justified their continued exploitation.

This framework has since been dubbed “techno-Orientalism,” where the traditional and premodern imagery that once stereotyped Asia is replaced by “hyper technological terms,” as the editors of an essay collection on the subject note. In this formulation, Asian people are racialized using the language of machines, as industrious, hard-working, and functionally competent, yet hollow and lacking personality, while their art is rendered “soulless and mechanical.” 

Techno-Orientalist stereotypes have become commonplace in mainstream media, from Hollywood blockbuster films like Blade Runner, with its Tokyo-inspired cityscapes and Asian holograms, to musicians like Grimes, who frequently draws on Asian iconography in her flirtations with technoscapes, to video games like 2022’s Stray, which depicts a thinly disguised Kowloon Walled City populated solely by robots. White creators in nearly every medium have used and continue to use Asianness to symbolize robotic futurity, and vice versa.

The Logical End of Techno-Orientalism

In this photo illustration, a cat walks by a screen as a person plays the video game “Stray.” The video game has been criticized for including techno-Orientalist themes and imagery.

5you’s model simulating Kim Jung Gi and his art style is just the latest evolution in this twisted association between Asian bodies and roboticism. Finally, the Asian artist, long racialized as mechanistic, is reproduced via technology. The Asian artist is fully realized as a machine, thrust into the public as a product to be exploited and consumed. It’s not just their art, but the artists themselves that are now appropriated — pacified, transformed into a pure tool, made manipulable for anyone who might be so inclined (as long as they give the developer credit, of course).

Though one might point out that historically famous white artists have also been simulated by image generators, the speed with which Kim was mechanically refashioned after his death gestures towards an underlying attitude that already saw him as mechanistic to begin with. 

In September, Jennifer Gradecki, a Northeastern University art and design professor, said in an interview about the future of artists that “creativity is actually the one thing that isn’t going to be able to be automated” in the future. Gradecki may have meant to reassure artists, but for Asians, a group often denied claims to creativity — and characterized at best as effective replicators, but rarely creators or originators — her statement rings like a warning for how these tools might push us further towards the margins and alienate us from our art. 

Jet Li, for instance, voiced precisely this fear when he explained in 2018 that his refusal to work on The Matrix franchise stemmed from the fact that the filmmakers wanted Li to “record and copy all [his] moves into a digital library,” meaning that the movements of his body, honed during a lifetime of training, would become someone else’s intellectual property forever.

When you’re already seen as mechanistic, the reproduction of your bodies and art through machines isn’t seen as theft, but simply the next logical step.

Fighting the AI Future

“Hyundai Commission: Anicka Yi” is unveiled as the Tate Modern Hyundai Commission 2021 at Tate Modern on October 11, 2021 in London, England.

We shouldn’t simply resign ourselves to this alienated future — and many artists aren’t, instead using a range of disciplines to interrogate the boundaries between human and machine creativity.

Chinese Canadian painter Sougwen Chung, for example, incorporates a robotic arm into her art-making practice. As the robotic arm and Chung read each other’s brush strokes and respond in an ever-evolving dance, the machine collaboration centers her body — rather than divorcing her from it — by highlighting the physical performance between these organic and inorganic beings.

Meanwhile, last year, Korean American artist Anicka Yi installed a series of floating “biologized machines” — giant ovoid airborne structures that suggest jellyfish — in Tate Modern’s Turbine Hall. The installation contained another component often included in Yi’s practice: a particular aroma that changed week to week based on different time periods. One scent profile was meant to evoke the bubonic plague. Yi’s semi-autonomous robo-organisms challenge the cold, sterile notion of artificial intelligence as disembodied “pure cognition” by integrating them in natural forms, while her olfactory scent-scapes serve as a carnal reminder of embodiment.

Korean American filmmaker Kogonada has challenged techno-Orientalist ideology directly in his latest film, After Yang. In the film, starring Colin Farrell, Jodie Turner-Smith, and Justin Min, a father goes on a mission to repair his daughter’s non-responsive robotic “big brother” and caretaker, Yang.  The film is a profound meditation on the figure of the Asian robot in the sci-fi genre, complicating this archetype beyond recognition and making space for something new.

The impact of these technologies is not yet written, but if history is to serve as a guide, new tools tend to carry on the mistakes of the past without direct intervention. Liberating these devices from their dehumanizing potential will require us to challenge and reimagine the frameworks we use to navigate the oppressive binaries of man and machine and East and West. 

Only then might we truly live up to Kim Jung Gi, who hoped that these technologies would help us pave new paths forward — enabling us to express ourselves more clearly and ultimately “make our lives more diverse and interesting.”

Getty Images Bans AI-Generated Images Due To Copyright Worries
https://www.artnews.com/art-news/news/getty-images-bans-ai-generated-images-due-to-copyright-1234640201/ (September 22, 2022)

Getty Images announced on Wednesday that it is banning AI-generated art, including images produced by OpenAI’s DALL-E and Meta AI’s Make-A-Scene, from its platform. The decision, according to Getty, stems from concerns that copyright laws are currently unsettled with regard to imagery created by those tools.

“There are real concerns with respect to the copyright of outputs from these models and unaddressed rights issues with respect to the imagery, the image metadata and those individuals contained within the imagery,” Getty Images CEO Craig Peters told the Verge. “We are being proactive to the benefit of our customers.”

The concern over copyright is not unfounded. AI image generators scrape publicly available pictures from across the web to train their algorithms and to sample them when producing new imagery. Those images are often copyrighted ones that come from news sites or even stock photo sites like Getty. As Gizmodo noted, tech blogger Andy Baio analyzed the image set used to train Stable Diffusion, a Stability AI tool similar to DALL-E, and found that 35,000 of the 12 million images were scraped from stock photo sites.

Whether that usage violates U.S. copyright law is an open question for the courts. Typically, to use copyrighted material, a creator has to demonstrate that the copying was done for a “transformative” purpose, which generally falls into commentary, criticism, or parody of the material in question, as noted by the Stanford Libraries’ primer on Fair Use doctrine. The question of whether images produced by DALL-E and other AI tools constitute a “transformative” purpose is, at best, murky due to the automated nature of their production.

Many in the arts and the AI space have noted that it will likely take new legislation to settle the question.

“On the business side, we need some clarity around copyright before using AI-generated work instead of work by a human artist,” Jason Juan, an art director and artist with clients including Disney and Warner Bros., told Forbes last week. “The problem is, the current copyright law is outdated and is not keeping up with the technology.”

Similarly, Daniela Braga, who is on the White House Task Force for AI Policy, said a “legislative solution” is necessary.

“If these models have been trained on the styles of living artists without licensing that work, there are copyright implications,” Braga told Forbes.

In the meantime, Getty has said it is using the Coalition for Content Provenance and Authenticity, an industry-created development project, to filter out AI-generated content. It’s unclear whether such a tool will be effective.

Word Processing
https://www.artnews.com/art-in-america/features/word-processing-surrealism-artificial-intelligence-1234625624/ (April 18, 2022)

IN THE ORIGIN STORY FOR SURREALISM that André Breton provides in his 1924 manifesto, he claims that “a rather strange phrase” came to him in the hypnagogic state before sleep: “There is a man cut in two by the window.” Easy as it is to link the phrase with Surrealism’s preoccupation with transgressing binaries and seeking passages between life’s apparently divided aspects (what if dreams are real and reality a dream?), Breton seems to ignore the phrase’s substance to fixate on the means of its arrival:

I realized I was dealing with an image of a fairly rare sort, and all I could think of was to incorporate it into my material for poetic construction. No sooner had I granted it this capacity than it was in fact succeeded by a whole series of phrases, with only brief pauses between them, which surprised me only slightly less and left me with the impression of their being so gratuitous that the control I had then exercised upon myself seemed to me illusory and all I could think of was putting an end to the interminable quarrel raging within me.

What mattered was not the content but the loss of conscious control over the language that was coming to him. Breton concludes that “poetic construction” should be a matter not of will but of surrender: You can assume an aesthetic distance from yourself and become a spectator to your own thought process, ideally a detached connoisseur of it. “Let yourself be carried along,” Breton declares, “events will not tolerate your interference.”

In Breton’s view, this approach to creation was democratizing; he proclaimed in the 1933 essay “The Automatic Message” that “it is to the credit of Surrealism that it has proclaimed the total equality of all ordinary human beings before the subliminal message, that it has constantly insisted that this message is the heritage of all.” It was also (somewhat absurdly, given Breton’s manifest egomania) a means of escaping individualist egotism: Michel Carrouges, the author of an early sympathetic study of Surrealism, goes so far as to say that Breton discovered the “natural link” between “personal unconscious, collective unconscious, and even cosmic unconscious.”

Breton’s flash of inspiration would coalesce into the inaugural Surrealist method of automatic writing, the attempt to outflank the conscious mind by scribbling words or doodles down faster than the speed of thought. For Breton, this radical abdication technique was a breakthrough; it allowed writers to seemingly repudiate intentionality and ambition and become “simple receptacles” and “modest recording instruments.” Not only did this thwart any debasing, approval-seeking tactics on the part of the artist, it offered the “superior reality of certain forms of previously neglected associations, in the omnipotence of dream, in the disinterested play of thought.”

Deep Dream Generator: Starry Night – Reworked, 2022, dimensions variable, digital file.

Uninhibited by a rationalistic need to make sense or link causes to effects, automatic writing can surprise us with the intentions and connections we discover retroactively in what might otherwise seem like random gibberish or the product of sheer coincidence. Breton claimed that automatic writing, for one thing, “tends to ruin once and for all all other psychic mechanisms and to substitute itself for them in solving all the principal problems of life.”

As hyperbolic as that sounds, it anticipates the hype frequently deployed today to justify a similar sort of passivity. Only now, rather than turn over our agency to “objective chance” and the unfathomable power of the collective unconscious as the Surrealists preached, we are invited to give way to machine-learning algorithms, often touted as artificial intelligence. Where Breton imagined that significant truths were somehow mystically imprinted in our unconscious depths, proponents of AI can point to the billions of actual data points (i.e., “mechanical recordings” of reality), aggregating the observable effects of countless human decisions and capable of positing “previously neglected associations” within the data on demand, and investing these correlations with the air of oracular truth. Predictive systems have been widely introduced to solve the “principal problems of life,” to foster efficient processing everywhere. They are used to sort social media feeds and tailor retailing sites to individual users, and have been implemented to automate decisions in banking, government services, and the judicial system. They are often said to be “disinterested” (though they have repeatedly been shown to be riddled with bias).

In this respect, AI represents a realization of what for Breton was merely a speculative faith in decision-making procedures that could surmount human calculation. The rapid cultural rise of the “algorithm”—in its vernacular sense of connoting an oracular technological deity—testifies to the success of the Surrealist revolution that Breton never tired of promising. AI cultivates and caters to our passivity, seeming to offer the fruits of creativity and self-examination without the effort and self-doubt. Algorithms always find us interesting and always testify to our insatiable desires by showing us all the things we should still want. We can routinely experience the fascination of being surprised by our own depths, revealed to us for our delectation by personalized feeds. The experience of AI in everyday life renders us default Surrealists, deferring to opaque automatic processes that no longer need be arduously evoked with Ouija-esque analog rituals.

Natural language processing models, like those developed by OpenAI—a research lab that launched with $1 billion of funding from the likes of Elon Musk and Peter Thiel with a mission of developing “artificial general intelligence”—make the link between Surrealism and AI seem especially clear. When fed a textual prompt, GPT-3, a model that OpenAI launched in 2020, predicts what sentences should follow based on its statistical analysis of billions of words of text pulled from the internet. How it completes whatever prompt it’s fed can be interpreted as a “social average” response, making it a kind of oblique search engine of the collective consciousness, liberated from any of the contextual social relations that would discipline what it produces. It doesn’t experience inhibition or self-satisfaction. Thus it seems to fulfill Breton’s wildest dreams for automatic writing, producing text that is estranged from human agency yet nonetheless has some perceivable sense to it that a reader can extract, or project on it, “reason’s role being limited to taking note of, and appreciating, the luminous phenomenon,” as Breton put it, of unpremeditated language that can at the same time be parsed.
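
The “social average” idea can be illustrated with a toy next-word predictor that always continues a prompt with the statistically most common following word in its training text. GPT-3 is vastly more sophisticated, but the underlying move, predicting what usually comes next, is the same; the miniature corpus below is an illustrative stand-in.

```python
from collections import Counter, defaultdict

# Tiny stand-in corpus; a real model trains on billions of words.
corpus = "the man cut in two by the window the man in the window".split()

# Count which word most often follows each word.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def continue_prompt(word: str, length: int = 5) -> str:
    """Continue a one-word prompt with the most statistically likely next words."""
    out = [word]
    for _ in range(length):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]  # the "social average" choice
        out.append(word)
    return " ".join(out)

print(continue_prompt("the"))
```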

GPT-3 has its visual equivalent in AI-generated images, which over the past few years have come in a variety of flavors from a range of approaches. Some systems, like Google’s DeepDream, were developed to try to document how image-recognition algorithms work, producing videos, often described as “surreal,” in which eyeballs and dog heads and the like (prominent shapes in the training data) emerge grotesquely from images the system is fed. Others like Nvidia’s StyleGAN can make realistic human faces. Some algorithms have been trained on images from the fine art corpus to produce “paintings” in the style of particular eras or artists, as with Microsoft’s Next Rembrandt project. OpenAI’s Dall-E (get it?) produces images in response to text prompts by using a GPT-3-like model trained on pixel sequences rather than words.

Leonel Moura: 270504, 2004, permanent ink on paper, 77 by 71 inches.

Artists have adopted these kinds of tools to produce generative works that are sometimes described as instances of machine creativity. Leonel Moura, who had works included in the 2018 “Artists & Robots” show of AI art at the Grand Palais in Paris, has linked his practice with “artbots” directly to Surrealism and its attempts to “take human consciousness out of the loop.” In a review of the 2021 generative art exhibition “LUX: New Wave of Contemporary Art” in London for the New Left Review, art historian Julian Stallabrass argues that “what is new here, and undeniably impressive, is the scale and speed of this processing, the vast datasets on which it draws, and the hypnotic vision of an inhuman intelligence playing with human cultural techniques and material.” Of work by onetime Google “artist-in-residence” Refik Anadol, in which machines trained on Italian Renaissance paintings project morphing images that approximate and deform faces and landscapes, Stallabrass writes that “the viewer is held in the sublime of a vision of a superior generator of painterly form. . . . The work opens up a glimpse of a future in which the traces or indeed ruins of human creation are reworked forever by inhuman intelligences.”

Many journalists have also been warily impressed with GPT-3. While some worry about its capacity to produce well-tailored disinformation on demand or eliminate journalistic careers, most commentators have responded with guarded wonder, balancing gee-whiz enthusiasm with vague concern about the future of humanity. At the same time that GPT-3 can perform stunts like writing explanatory articles about itself or review books on AI, it can be an exploratory tool for writers to expand their creative potential. Last April, in the New Yorker, writer Stephen Marche likened using GPT-3 to the ancients invoking the muses. Novelist Vauhini Vara used it to help her write a requiem about her dead sister for the Believer. GPT-3’s language, she notes, “was weird, off-kilter—but often poetically so, almost truer than writing any human would produce.” In an essay for n+1, critic Meghan O’Gieblyn notes the parallels between GPT-3 and automatic writing, pointing out the similarities between an automatic writing text like Breton and Philippe Soupault’s The Magnetic Fields and one written using GPT-3.

AI Art Shop: Surreal Beauty, 2021, dimensions variable, digital file.

The text outputs of generative models are at once mechanistic and unpredictable; they are based entirely on calculations and old data, but they can come across as original, ingenious—combinations humans would likely never think of. For years, researcher Janelle Shane has been playing with generative models like OpenAI’s to explore their limits and extract funny and surprising output from them for her blog, “AI Weirdness”: AI names your pet, AI bakes some cakes, AI makes New Year’s resolutions. In these experiments, Shane tweaks the models, adjusting their settings and her inputs until they cohere as a system of well-managed distortions, producing phrases or images that home in on uncanniness. Their odd juxtapositions garble received ideas just enough to create a frisson; they seem funny or clever in a way that cannot be anticipated but appreciated only after the fact. From Shane’s recent list of New Year’s resolutions: “Record every adjective I hear on the radio.” “Act like a cabbage for a month.” “At 4 o’clock every day I will climb a tree.” “Speak only to apples for 24 hours.” In the process, Shane trains herself and her readers to enjoy these occurrences, learning in a sense to be alive to the results of unpredictable creativity—what Breton called “convulsive beauty.” By interacting with AI, one can refine the sense of one’s own unique human capacity, capable of seeing what the machine cannot about its own production. The flight from agency appears to redeem itself in such acts of aesthetic recognition.

But the fact that AI and Surrealism so readily fit together is reason to be suspicious of both. Surrealism was supposed to free minds from bourgeois rationality by subtracting intentionality and tapping into deep levels of mythic experience through randomness and dreams, celebration of “primitivism” and “subjectlessness.” But this experience is readily elicited by AI, the supremely rational application of reductive ideas about how minds work.

In Compulsive Beauty (1993), Hal Foster argues that the Surrealists seized upon the “irrational residue” left behind by the most mechanistic, capitalistic production processes to undermine the “modernisms that value industrialist objectivity.” Their fixation on uncanny hybrids of animate and inanimate, human and machine, of robotized laborers of various types, could be understood as critique, Foster argues, and such approaches as automatic writing were “a form of autonomism that parodies the world of automatization.” But the Surrealist critique proved readily susceptible to capitalist co-optation: “In the postmodern world of advanced capitalism, the real has become the surreal,” Foster acknowledges, and he wonders whether “our forest of symbols is less disruptive in its uncanniness than disciplinary in its delirium.” Surrealism’s dreamscapes no longer posit an escape from the bourgeois life of convention but form a commonplace expression and experience of it.

AI systems make generating surreal images or conducting surrealistic experiments trivially easy. They don’t require rigor but invite us to let go, to see AI’s efforts to predict us as a form of play or a kind of dream state. Their immense processing power and capacity for digesting troves of data on our preferences and predilections allows them to construct and exhaust the field of imaginative possibility for us. In Anadol’s November 2021 discussion with MoMA curators Michelle Kuo and Paola Antonelli, the artist describes training AI on data sets of the museum’s collection as tracing a vast multidimensional “latent space” of “other creations and imaginations and outcomes.” Kuo sees this as yielding “totally fantastical images, almost automatist, or like automatic writing or drawing.”

But another way of describing AI systems is that they systematically work through countless concatenations of ideas on their own terms and overwhelm us with them, as if that is all there could possibly be. As artist and media theorist Joanna Zylinska writes in AI Art: Machine Visions and Warped Dreams (2020), paraphrasing the ideas of Polish writer Jacek Dukaj, “AI exponentially amplifies the knowledge shared by marketing experts with regard to our desires and fantasies” and is “much quicker and much more efficient” in putting that knowledge to use. Under such conditions, AI art can become “an outpouring of seemingly different outcomes whose structure has been predicted by the algorithmic logic that underpins them, even if not yet visualized or conceptualized by the carbon-based human.”

Recuperated as AI, Surrealism provides the basis not for liberation but for further entrapment in existing cultural patterns, reshuffled in novel ways but not fundamentally changed. Rather, they are further ingrained. The idea of escaping from the control exercised by reason ends up being a way of fully submitting to a different form of programming, to what a machine learning model can produce and what algorithmic forms of control can induce.

Andre Breton: Mouchoir noir, handwritten manuscript, circa 1924.

OpenAI hopes that GPT-3 will be integrated across a range of applications where it’s necessary to generate spontaneous text. It could produce more dynamic nonplayer characters for games, or make automated small talk in customer service settings. It could turn search engines into a kind of conversation between human and machine. Such interactions turn Surrealism into a business model, following in the footsteps of artists like Dalí who long ago discovered and exploited its commercial potential.

Another business model for the Surrealism-AI symbiosis is evident in generative art NFTs (CryptoKitties, Bored Apes, and countless other copycat projects), where the content associated with any token is basically a pretext for financial speculation. The absence of human intention in the generated image for any given NFT makes sure ideas don’t interfere with trading, but the Surrealist pedigree for “automatic” creation helps substantiate the alibi that works of art are still involved. In the other direction, the whimsicality of artists’ experiments with machine learning—whether Shane’s quirky lists or Moura’s toylike action-painting robots—helps domesticate AI, providing a framework for making sense or making light of its glitches and anomalies, rendering it more acceptable, perhaps, when it attempts to dictate options to us, framing our sense of the range of choices. We can experience the way technology is being deployed to inhibit and control us as wonder and surprise. We can imagine that our effort to direct it toward outputs that amuse us doesn’t at the same time function as a form of surveillance, of data collection that will be used to refine its more ominous capabilities.

Predictive text functionality (e.g., autocompletion of your words or sentences) already lives in email and texting apps and will ultimately move into all sorts of consumer products, a form of control implemented as a mercy of convenience. Its point is not to estrange us from the familiar channels of our thoughts in the classic Surrealist manner but to more efficiently conduct us through them. Algorithmic text completion intervenes in how we think, making us absent where we are expected to be present, at the moment we are ostensibly speaking. It assures us that we don’t need to be the speaking subject behind our words, just as Surrealism promised.

This article appears in the April 2022 issue, pp. 78–83.

AI Analysis Says National Gallery’s ‘Samson and Delilah’ Painting Isn’t a Rubens
https://www.artnews.com/art-news/news/national-gallery-london-rubens-samson-and-delilah-ai-authentication-1234604957/ (September 27, 2021)

A painting attributed to Peter Paul Rubens, Samson and Delilah (1609–10), that hangs in London’s National Gallery has long been under suspicion as to whether it is in fact an authentic work by the Baroque artist. Research by the Swiss-based tech company Art Recognition, which uses artificial intelligence (AI) to authenticate artworks, has concluded that the painting has a 91 percent probability of being fake, according to a report in the Guardian.

Rubens did paint a scene of Samson and Delilah, depicting the biblical episode in which Delilah orchestrates the cutting of a sleeping Samson’s hair, for one of his major patrons, Nicolaas Rockox, the mayor of Antwerp, but the painting disappeared after the artist’s death in 1640.

This Samson and Delilah allegedly resurfaced only in 1929, when Ludwig Burchard, a Rubens expert, attributed the work to the artist. Suspicions regarding the work’s authenticity first arose in 1960, after Burchard’s death, when it was revealed that he had provided certificates of authenticity for cash. More than 60 works by Rubens that were authenticated by Burchard have since been identified as fakes, including two versions of Diana and Her Nymphs Departing for the Chase, respectively held at the Cleveland Museum of Art in Ohio and the J. Paul Getty Museum in Los Angeles.

The National Gallery purchased the work for a then-record price of £2.5 million from Christie’s in 1980. Over the years, several critics have questioned the Rubens attribution for the museum’s Samson and Delilah, including artist and independent scholar Euphrosyne Doxiadis who has claimed in several papers and interviews that certain details didn’t add up. Doxiadis pointed out that the National Gallery’s painting differed from the studies that Rubens made for the piece purchased by Rockox, including that Samson’s feet are cropped in the painting but shown in studies and engravings. 

The recent discovery using AI technology is just another strike against the piece. Using a database of fake and authentic Rubens paintings, Art Recognition taught an AI bot to identify the minute details that characterize authentic Rubens works. Then, the trained bot analyzed the National Gallery’s Samson and Delilah by dividing the canvas into a grid and looking for signs of deviance from Rubens’s style square by square.
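
As a rough illustration of that square-by-square scoring, here is a minimal sketch assuming a binary classifier already trained on authentic and fake Rubens images; the patch size, helper names, and commented-out usage are assumptions, not Art Recognition’s actual pipeline.

```python
import numpy as np

PATCH = 256  # pixels per grid square (illustrative choice)

def patch_scores(image: np.ndarray, classifier) -> np.ndarray:
    """Slide over the canvas grid and score each square's probability of being fake."""
    h, w = image.shape[:2]
    rows, cols = h // PATCH, w // PATCH
    scores = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            square = image[i*PATCH:(i+1)*PATCH, j*PATCH:(j+1)*PATCH]
            scores[i, j] = classifier(square)  # P(not by Rubens) for this square
    return scores

# Hypothetical usage, with load_image and classifier supplied elsewhere:
# scores = patch_scores(load_image("samson_and_delilah.jpg"), classifier)
# print((scores > 0.9).all(), scores.mean())  # every square > 90% fake, as reported
```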

“We repeated the experiments to be really sure that we were not making a mistake and the result was always the same,” Carina Popovici, the scientist who led the AI analysis, told the Guardian. “Every patch, every single square, came out as fake, with more than 90% probability.”

However, it is unclear how the AI bot might adjust for the variations in style that might be evidence of studio assistants’ hands. Art Recognition did not immediately respond to a request for comment.

In an email to ARTnews, the National Gallery said, “The Gallery always takes note of new research. We await its publication in full so that any evidence can be properly assessed. Until such time, it will not be possible to comment further.”

Scientists Discover the Key to Artistic Success: ‘Promising New Ideas’ and Intense Focus
https://www.artnews.com/art-news/news/artificial-intelligence-predicts-success-study-1234603724/ (September 14, 2021)

In a new study published on Monday in Nature Communications, a team of scientists said it had discovered the key to “hot streaks,” or periods of intense and successful artistic productivity. The paper cites Jackson Pollock’s four-year period of intense productivity and success with his drip paintings.

In a previous paper, Dashun Wang and a team of researchers had proven the existence of hot streaks. “In scientific careers, we see that it is in a four to five year period where scientists publish their best work,” he said. “Ninety percent of scientists experience a hot streak, and it usually happens once.”

But one discovery complicated his findings. “There is equal probability that the hot streak could occur in the beginning, middle, or end of a career,” Wang said. “It seemed like a random magical period.” This puzzle set the team on a three-year journey to understand what kinds of conditions precede a hot streak.

To produce its new paper, Wang and his team relied on artificial intelligence to track the kinds of outputs artists, filmmakers, and scientists made in the period leading up to a hot streak. In particular, they were looking to see if exploration or exploitation best predicted periods of peak creativity. For the researchers, exploitation meant focusing on a narrow range of subjects or style, not abuse of some kind.

The researchers’ specially designed AI was able to consider the evolution of an artist’s style over time. It was exposed to 800,000 images culled from museum and gallery collections that represent the careers of 2,128 artists. If the AI detected a lot of variety in style, that period was termed one of exploration; if it detected little variety, it was a period of exploitation. An artist’s hot streak was identified by examining which period of time produced the artist’s most expensive works.
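
One way to picture that exploration/exploitation signal: embed each artwork’s style as a vector and measure how much style varies within a sliding window of consecutive works. The sketch below is a hedged approximation; the embedding, window size, and threshold are illustrative assumptions, not the paper’s actual model.

```python
import numpy as np

def style_phase(embeddings: np.ndarray, window: int = 20, threshold: float = 0.5):
    """Label each window of consecutive works as 'exploration' or 'exploitation'."""
    phases = []
    for start in range(len(embeddings) - window + 1):
        chunk = embeddings[start:start + window]
        # Average distance from the window's mean style: high = varied, low = focused.
        spread = np.linalg.norm(chunk - chunk.mean(axis=0), axis=1).mean()
        phases.append("exploration" if spread > threshold else "exploitation")
    return phases

# Hot-streak precursor pattern: a run of "exploration" followed by "exploitation".
career = np.random.rand(200, 64)  # stand-in for 200 artworks' style vectors
print(style_phase(career)[:5])
```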

The scientists found that neither exploration nor exploitation on its own could significantly predict a hot streak, writing, “Not all explorations are fruitful, and exploitation in the absence of promising new ideas may not be as productive.”

On the other hand, the researchers did find that a sequence of exploration followed by exploitation could predict hot streaks in the careers of not only artists but filmmakers and scientists too. The researchers cited Jackson Pollock as an example in the paper, saying that the Abstract Expressionist’s hot streak took place between 1946 and 1950, when he produced some of the drip paintings for which he is best known. For Pollock, this era was a time of intense focus on a very specific style, and this, the researchers said, was preceded by a good deal of experimentation.

“We searched for an answer for three years,” Wang said. “I was surprised the answer was this simple. But the best kind of results are the results that are so obvious once you know the answer.”

Artificial Intelligence Restores Mutilated Rembrandt Painting ‘The Night Watch’
https://www.artnews.com/art-news/news/rembrandt-ai-restoration-1234596736/ (June 23, 2021)

One of Rembrandt’s finest works, Militia Company of District II under the Command of Captain Frans Banninck Cocq (better known as The Night Watch) from 1642, is a prime representation of Dutch Golden Age painting. But the painting was greatly disfigured after the artist’s death, when it was moved from its original location at the Arquebusiers Guild Hall to Amsterdam’s City Hall in 1715. City officials wanted to place it in a gallery between two doors, but the painting was too big to fit. Instead of finding another location, they cut large panels from the sides as well as some sections from the top and bottom. The fragments were lost after removal.

Now, centuries later, the painting has been made complete through the use of artificial intelligence. The Rijksmuseum in the Netherlands has owned The Night Watch since it opened in 1885 and considers it one of the best-known paintings in its collection. In 2019, the museum embarked on a multi-year, multi-million-dollar restoration project, referred to as Operation Night Watch, to recover the painting. The effort marks the 26th restoration of the work over the span of its history.

Restoring The Night Watch to its original size hadn’t been considered until the eminent Rembrandt scholar Ernst van de Wetering suggested it in a letter to the museum, noting that the composition would change dramatically. The museum tapped its senior scientist, Rob Erdmann, to head the effort using three primary tools: the remaining preserved section of the original painting, a 17th-century copy of the original painting attributed to Gerrit Lundens that had been made before the cuts, and AI technology.

About the decision to use AI to reconstruct the missing pieces instead of commissioning an artist to repaint the work, Erdmann told ARTnews, “There’s nothing wrong with having an artist recreate [the missing pieces] by looking at the small copy, but then we’d see the hand of the artist there. Instead, we wanted to see if we could do this without the hand of an artist. That meant turning to artificial intelligence.” 

AI was used to solve a set of specific problems, the first of which was that the copy made by Lundens is one-fifth the size of the original, which measures almost 12 feet in length. The other issue was that Lundens painted in a different style than Rembrandt, which raised the question of how the missing pieces could be restored to an approximation of how Rembrandt would have painted them. To address these problems, Erdmann created three separate neural networks, a type of machine-learning system that trains computers to perform specific tasks.

“The first [neural network] was responsible for identifying shared details. It found more than 10,000 details in common between The Night Watch and Lundens’s copy.” For the second, Erdmann said, “Once you have all of these details, everything had to be warped into place,” essentially by tinkering with the pieces by “scoot[ing one part] a little bit to the left” and making another section of the painting “2 percent bigger, and rotat[ing another] by four degrees. This way all the details would be perfectly aligned to serve as inputs to the third and final stage. That’s when we sent the third neural network to art school.”
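Neither the article nor the museum publishes Erdmann’s code, but the first two stages (finding shared details, then warping them into register) have familiar open-source analogues. The sketch below uses off-the-shelf OpenCV feature matching and a single global homography purely as stand-ins; the museum’s custom networks and the local piecewise adjustments Erdmann describes are far finer-grained.

```python
# Hedged stand-in for stages one and two: match shared details between the
# Lundens copy and the original, then warp the copy onto the original's
# geometry. Generic ORB features plus one global homography only illustrate
# the idea; Erdmann used custom neural networks and local warps.
import cv2
import numpy as np

def align_copy_to_original(copy_img: np.ndarray, original: np.ndarray) -> np.ndarray:
    orb = cv2.ORB_create(nfeatures=20000)
    g1 = cv2.cvtColor(copy_img, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(original, cv2.COLOR_BGR2GRAY)
    kp1, des1 = orb.detectAndCompute(g1, None)
    kp2, des2 = orb.detectAndCompute(g2, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # discard bad matches
    h, w = original.shape[:2]
    return cv2.warpPerspective(copy_img, H, (w, h))
```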

Erdmann made a test for the neural network, similar to flashcards, by splitting up the painting into thousands of tiles and placing matching tiles from both the original and the copy side-by-side. The AI then had to create an approximation of those tiles in the style of Rembrandt. Erdmann graded the approximations—and if it painted in the style of Lundens, it failed. After the program ran millions of times, the AI was ready to reproduce tiles from the Lundens copy in the style of Rembrandt. 
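The third stage, the “art school,” amounts to supervised image-to-image translation on aligned tile pairs. Here is a deliberately tiny PyTorch sketch of that training loop; the toy architecture and pixel loss are placeholders, since the museum’s actual model and grading criteria weren’t published in this article.

```python
# Minimal "flashcard" training sketch: learn to turn a Lundens-style tile
# into its Rembrandt-style counterpart. Toy model, not the museum's.
import torch
import torch.nn as nn

model = nn.Sequential(                       # tiny convolutional translator
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 3, 3, padding=1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()                        # pixel loss stands in for the "grade"

def train_step(lundens: torch.Tensor, rembrandt: torch.Tensor) -> float:
    """One flashcard: predict the Rembrandt tile from the aligned Lundens tile."""
    opt.zero_grad()
    loss = loss_fn(model(lundens), rembrandt)  # how far from Rembrandt's manner?
    loss.backward()
    opt.step()
    return loss.item()
```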

The AI’s reproduction was printed onto canvas and lightly varnished, and the reproduced panels were then attached to the frame of The Night Watch in front of the fragmented original. The reconstructed panels do not touch Rembrandt’s original painting and will be taken down in three months out of respect for the Old Master. “It already felt to me like it was quite bold to put these computer reconstructions next to Rembrandt,” Erdmann said.

As for the original painting by Rembrandt, it may receive conservation treatment depending on the conclusions of the research being conducted as part of Operation Night Watch. The painting has sustained damage that may warrant additional interventions. In 1975, the painting was slashed several times, and, in 1990, it was splashed with acid.

The reconstructed painting went on view at the Rijksmuseum on Wednesday and will remain on display into September.

Algorithms Can’t Automate Beauty https://www.artnews.com/art-in-america/aia-reviews/trevor-paglen-bloom-algorithms-beauty-1234571299/ Mon, 21 Sep 2020 17:35:26 +0000 https://www.artnews.com/?p=1234571299 You feel the subtle effects of algorithms while using digital platforms: Spotify automatically plays another song based on what you already like; Instagram shows you the stories first from the accounts you interact with most often; and TikTok, dispensing with agency entirely, just gives you a feed of videos “For You,” no choice about who to follow required. Algorithms are designed so that you don’t necessarily recognize their effects and can’t always tell whether or not they’re modifying your behavior. A new body of work by the interdisciplinary artist and technology activist Trevor Paglen—on view at Pace Gallery’s London venue, with a virtual version online—attempts to visualize their workings.

“Bloom” is a series of high-resolution photographs of flowering trees. The sprays of blossoms are tinted different colors in variegated sections, a slightly nauseating spectrum of reds, yellows, blues, and purples. The colors are the biggest sign that something inhuman has happened: they don’t seem to follow a single logic and their arrangements are too granular to have been executed by hand. As Paglen explains in a video published by Pace, the colors have been assigned by machine-learning algorithms developed by his studio that dissect the images’ textures and spatial arrangements, then apply colors to mark differences. Flowers might stay bright white while the trees’ leaves and branches recede into blues. Looking at the images means trying to decode what the computer was evaluating when adding color.
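Paglen’s studio hasn’t released its algorithms, so any reconstruction is a guess at the genre of technique rather than the technique itself. A minimal sketch of that genre: describe each pixel by simple texture statistics, cluster the descriptors, and paint each cluster an arbitrary hue. The feature choice, cluster count, and palette here are all assumptions.

```python
# Hedged sketch of texture-driven recoloring, not Paglen's actual code:
# cluster pixels by multi-scale texture responses, then map clusters to hues.
import numpy as np
from skimage import color, filters
from sklearn.cluster import KMeans

def recolor_by_texture(rgb: np.ndarray, n_clusters: int = 6) -> np.ndarray:
    gray = color.rgb2gray(rgb)
    # Crude texture descriptor: Gaussian responses at several scales
    feats = np.stack([filters.gaussian(gray, sigma=s) for s in (1, 2, 4, 8)], axis=-1)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        feats.reshape(-1, feats.shape[-1]))
    palette = np.random.default_rng(0).random((n_clusters, 3))  # arbitrary colors
    return palette[labels].reshape(gray.shape + (3,))
```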

Flowers are a perennial artistic subject, from the Dutch Baroque memento mori that Paglen references in the video to Andy Warhol’s screen prints. But Paglen’s images visualize how a machine perceives an image. The algorithms interpret no symbolism; there’s no ephemerality or tragedy latent to a springtime blossom. The colors emerge from a mathematical process that could be applied to any other image. The elegiac quality of the series comes from the contrast between the content of the images, familiar to human viewers, and the coldness of the machine’s gaze. We don’t really know what it’s looking for, or at.

Cameras surveil a gallery with white walls and parquet flooring; high in a corner, a monitor shows a viewer looking at the exhibition from home.

View of Trevor Paglen’s exhibition “Bloom,” 2020, at Pace Gallery.

Paglen’s recent work, both at Pace and in a concurrent exhibition at the Carnegie Museum of Art, evokes the uncanniness that we feel when using Spotify, Facebook, or Tinder. These platforms purport to calculate our judgments and tastes and then replicate them, serving us our own desires so quickly that we don’t have time to consider how well our identities are being reflected by the algorithms’ decisions. Over the past decade, since he earned a PhD in geography in 2008 from the University of California at Berkeley, Paglen has become famous for using his practice to reveal things that are hidden, making media headlines as much as exhibitions. He moves between formats—photography, collage, renderings, and installations of technological devices—to expose contemporary artifacts like the physical cables that undergird the Internet and souvenir badges from classified Pentagon programs. In recent years he has shifted his attention to artificial intelligence, exploring how machine vision is shaping our perception of the world.

“Bloom” shows that beauty can’t be automated—at least, not by the technology we currently have. More than a series of visual alignments or colors, beauty lies in our memories of the world, the connection of a flower to the experience of spring inevitably passing. Algorithms lack any understanding of this context; they can only approximate it.

A digital photograph of flowering trees in a forest, where blossoms in the background have been colored a strange shade of orange that enhances the contrast with the white flowers in the foreground

Trevor Paglen, Bloom (#79655d), 2020, dye sublimation print, 26 by 19½ inches.

In his “CLOUD” series (2019), Paglen uses algorithms to analyze transcendental photos of the sky; he has continued exploring this technique using the mountainous landscapes in the American West, as seen in the Carnegie exhibition. He applies calculations like the Hough Circle Transform, a variant of a pattern-detection technique first introduced in 1962, to find circles in images, and then retains the results on the print so that the viewer knows what the machine has seen: thin white circular outlines with dots at the center identify patterns that the human eye would otherwise pass over. The algorithmic lines recall the jokey meme in which the golden ratio is superimposed on any image and always fits something, like Donald Trump’s hair. Paglen’s series appears ominous—machines attempt to perceive beauty by reducing it to straight lines and perfect shapes—but it’s also a little goofy. The patterns don’t change our understanding of the photographs, and the photographs don’t educate us about the algorithms. They function as illustrations.
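For readers curious what that looks like in practice, here is roughly how one would run a stock Hough Circle Transform with OpenCV and draw the same thin white outlines and center dots; the file names and parameter values are illustrative, and whatever Paglen’s studio actually runs is its own build.

```python
# Stock Hough Circle Transform via OpenCV, annotating detections the way
# the prints do: thin white outlines with a dot at each center.
# "sky.jpg" and all parameter values are hypothetical.
import cv2
import numpy as np

img = cv2.imread("sky.jpg")
gray = cv2.medianBlur(cv2.cvtColor(img, cv2.COLOR_BGR2GRAY), 5)
circles = cv2.HoughCircles(gray, cv2.HOUGH_GRADIENT, dp=1.2, minDist=40,
                           param1=100, param2=30, minRadius=5, maxRadius=80)
if circles is not None:
    for x, y, r in np.round(circles[0]).astype(int):
        cv2.circle(img, (x, y), r, (255, 255, 255), 1)   # thin white outline
        cv2.circle(img, (x, y), 2, (255, 255, 255), -1)  # dot at the center
cv2.imwrite("sky_annotated.jpg", img)
```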

Paglen tends to hide his critical epiphanies in sumptuous visuals. Viewers may get lost in color or pattern and turn away after a few seconds. Paglen’s activist bent—the artist as investigative journalist or social educator—competes with his urge to make compelling objects. In the best examples, like the “Bloom” series, these goals merge. Art history meets the technological filter through which we now experience much of visual culture, via iPhone cameras, Instagram posts, and TikTok feeds. Once we learn to recognize the influence of algorithms, Paglen hopes, we might figure out how to counter it and reclaim some of the humanity of our vision.

Hard Truths: Will Museums’ Digital Plans Make Curators Obsolete? https://www.artnews.com/art-in-america/columns/artificial-intelligence-art-curators-art-world-advice-1202687166/ Thu, 14 May 2020 18:40:48 +0000 https://www.artnews.com/?p=1202687166 The museum where I work as a curator of prints and drawings instructed staff to work from home during the coronavirus lockdown. In a recent Zoom conference call, our director revealed that revenue had dropped to such an extent during our closure that the entire institution may be in peril. He said that we need to recalibrate (he’s right), but then he went on about the immediate need for blockbuster shows and Instagrammable experiences to drive up future ticket sales. Our superb collection of works on paper is focused on prewar America, and while I can think of many wonderful exhibitions that could feature these works, if I’m being honest, none of them are barnstormers. There has been a big push to digitize collections for our newly redesigned website, but the media guys haven’t uploaded my files, and only a quarter of my collection has even been scanned or photographed. The museum recently launched flashy online virtual exhibitions in response to the pandemic and they’re generating a lot of traffic. But I rarely get to select works from my collection to go online because they just want “the hits,” and when it comes to captions I can’t even edit the texts that they pull straight out of our internal database. I feel left out of this digital revolution and the overall big show boom. It certainly looks like ticket sales, algorithms, and social media are taking over the industry. What I’m wondering is: will curators like me become obsolete in the future?

We pondered this question while reading a Hard Truthing research report titled A Future That Works: Automation, Employment, and Productivity, by fellow consultants McKinsey & Company. They report that engineers have nearly perfected automation technology that can simulate human cognition, social behaviors, and emotional capacities. Evidently, “about half of all the activities people are paid to do in the world’s workforce could potentially be automated by adapting currently demonstrated technologies.” The rise of robotics, artificial intelligence, and machine learning could have devastating consequences even for professionals as humanistic as museum curators. As art institutions continue to raise ticket prices and roll out blockbusters, we may come to see shows that will be curated entirely by optimized algorithms. It is one thing for cars to become driverless, but what happens when a museum becomes curator-less? Will this be the end of art history as we’ve known it, or the dawn of a techno renaissance?

Imagine, if you will, that an artificially intelligent robot, let’s call it Kunst Lieben Algorithmic Unix System (K.L.A.U.S.), has been programmed to curate the next Venice Biennale. Rather than spend two years making studio visits around the world, this robo-curator would scan every art periodical, browse gallery and art fair websites, scrape all social media accounts, and linger at parties where it could soberly overhear all the gossip and insider trading. After K.L.A.U.S. finished crunching petabytes of data and training its neural network, would the resulting exhibition outperform a biennial organized by human curators? What metrics would it use to determine how to make a cohesive exhibition? Would it choose the usual suspects or venture into territory outside the confines of the art market? How creatively would it install the show? Would its catalogue essay possess any literary flair or genuine insight? How much would the show touch the hearts of humans? If this robot had a romantic partner, would it be included in the show? What kind of after-party would it plan and, more important, who would DJ?

Our human survival instinct tells us that K.L.A.U.S. would be a harbinger of doom, yet we wonder if this intelligent machine might curate one of the most monumental shows of all time. Employing endless streams of metadata and market research, this curatomaton could effectively synthesize an exhibition that satisfies the often contrasting tastes of the general public and our most puckered critics. You might think we’re being absurd, and we admit that this dark prophecy is alarmist, but we raise the specter to drive home a point.

Curators must draw on their most human qualities in order to combat the steel-cold judgment of a smart refrigerator that can tell you when your milk has spoiled or who the next hotshot Bard grad will be. Only by trusting in their eyes and hearts will today’s curators have a slim chance to outsmart machines. Until that harrowing day comes when robots learn to tell the difference between hotdogs and genitalia, we suspect you will still have a job.

Questions? Email hardtruths@artinamericamag.com.


This article appears under the title “Auto-Curator” in the May 2020 issue, p. 20.
