Mindar, a robot priest built on ChatGPT technology, is giving Buddhist teachings at Kodai-ji Temple in Kyoto, Japan.

Infocalypse or Cultural Renaissance? The Two Faces of Generative AI

by Jean Desta

Cover photo: © Raül Gallego Abellán

In 2023, artificial intelligence (AI) took the world by storm. New applications such as ChatGPT dominated news headlines while gaining hundreds of millions of users faster than any other digital product in history. AI-related companies captured one-third of the overall $170.6 billion invested in US startup companies during the year. But above all, 2023 marked something of a cultural revolution. For millions of people, artificial intelligence has become a part of everyday life and consciousness, shaping the way we understand and imagine our interactions and societies.

However, even before the recent breakthroughs, artificial intelligence and machine-generated content had been steadily entering cultural production, especially in digital media. Research recently published by developers at the Amazon Web Services AI Lab revealed that a “shocking amount” of online content is already produced by technologies generally grouped under the concept of artificial intelligence. Their research focused on text, finding that up to 57% of sentences published online already appear in three or more languages via machine translation.

“Machine-generated content not only dominates the translations in lower resource languages; it also constitutes a large fraction of the total web content in those languages,” the research concludes. The authors raised “serious concerns” about the proliferation of “low-quality machine translations” and about training new language models on data already generated by other models or by earlier versions of the same algorithms.

The same increasingly applies to images and other types of content, amid forecasts that generative software will outpace humans in audio and visual formats as well. Recent projections by the investment bank Goldman Sachs suggest that in Western countries, up to 26% of all jobs in the media, entertainment, and sports industries may be automated or displaced by AI-powered technologies. Given these applications’ low costs and high speed, the anticipated change in content volumes will be even more significant than the change in replaced work hours.

The prospect of AI-generated content dominating the internet has even been dubbed a looming “infocalypse” by journalist Nina Schick. In this dystopian trajectory, the internet will become a dumping ground of relatively homogenous, low-quality, mass-produced synthetic content. AI models will degenerate or even collapse without enough high-quality data to train them. Pervasive misinformation (such as AI “hallucinations” and unreviewed, machine-generated news articles) and disinformation (increasingly sophisticated voice, image, and video fakes) will blur the boundaries of reality and unreality, alienate people, and further erode social cohesion and the prospects of informed and democratic decision-making.

“I grew up with a lot of optimism about the internet in the 1990s. Also, the period of the Arab Spring was very optimistic and revived the promise of democracy via the internet. But now we have a somewhat darker vision of the internet,” says Filippo Lubrano. Lubrano is an AI and cybersecurity consultant, freelance journalist, and the author of “Anthropology for AI – A Cultural Guide for the Next Generations of Technological Innovations,” published in 2023 by the Italian publishing house D Editore. Although Lubrano does not share the dystopian visions of the future of AI and the internet, he says he can understand the fears associated with these developments.

“The ‘Spotify approach’ already shattered the music industry, providing artists a small river of streaming revenue. The music industry has less power and money than before,” continues Lubrano, explaining how new digital technologies have already shaken the cultural status quo and traditional privileges. Generative AI is similarly expected to disrupt the content creation industry, and a large number of stable jobs will most probably disappear. Even these positions have become something of a privilege in the 21st century, amid the intense precarization of media and cultural work.

“In the knowledge work era, the means of production are in everybody’s hands, as Karl Marx would say. Everyone can access these tools, and the entrance barrier is very, very low,” adds Lubrano. The previous decade saw a rapid expansion of content creators, ranging from bloggers, podcasters, and social media influencers to more traditional job titles such as translators, designers, photographers, and freelance journalists. In many cases, practical skills to produce attractive content and the ability to rally audiences on publishing platforms outweighed traditional qualification requirements such as formal professional education or CVs. At the same time, freelancing, platform royalties, and self-employment increasingly outpaced stable work contracts and incomes. Generative AI adds to the other digital tools in our smartphones and laptops that have driven this transformation.

A more optimistic vision of the future depicts a world where any regular smartphone user, empowered by ever more easy-to-use digital content creation tools, can become a pop star, journalist, or movie director and start distributing professional-grade content on social networks or ‘platforms.’ This vision highlights increased participation, empowerment, autonomy, and even democracy. This user-empowered future is also the promise that the companies behind these tools are marketing. While generally optimistic, Lubrano believes the reality is more nuanced.

“In theory, anyone can do great stuff or become famous, but it is very difficult to stand out. Of course, every now and then, someone goes viral, but in reality, 99% of high-quality content producers struggle to distribute, and you still need to go through some gatekeepers to stand out,” Lubrano says. “We are also overflowing with content. It is difficult to move in the jungle.”

Once again, everyone is promised the chance to be the architect of their own fortune, yet only a few ever make it, while the majority struggles to make ends meet. Despite technological disruptions, traditional power dynamics also appear to dominate the new digital organization of cultural labor: pervasive competition, job insecurity, and quasi-monopolized markets steered by a handful of massive corporations. While publishing houses and production companies have lost their decisive role in cultural production, new power brokers like Spotify and Netflix have taken their thrones. If the media, arts, and entertainment industries are still struggling to adapt to the digital transformation of the past 20 years, the disruptions of AI may prove even more challenging to absorb.


To gauge just how significant the cultural impact of AI-driven transformation could be, generative technologies may be compared to the rise of earlier modern technologies of cultural production: photography, film, television, and recorded music. While esteemed art critics largely dismissed nascent photography and cinema as vulgar and lacking the traditional qualities of art, these forms eventually took their place in the canon of modern art, pop culture, and society at large. The industries built around these creative technologies shaped the cultural trajectory of the 20th century, transforming values, tastes, and lifestyles across the globe. Movies played a pivotal role in selling the “American dream” as well as the dream of the Third Reich. Illustrated newspapers and televised newsreels brought the world into living rooms and allegedly shaped the course of wars and revolutions. Later developments, like wireless internet, social media, and live streaming, have kept media and communication technologies at the center of social and cultural evolution.

However, with AI, the speed of transformation is on a different scale. Generative AI applications have produced up to 15 billion images since OpenAI’s DALL-E 2 launch in April 2022. Human photographers took 150 years to create the same number of photos (since Nicéphore Niépce first photographed the View from the Window at Le Gras in 1826).

In terms of economic power, the 12 leading generative AI companies already command $7.28 trillion in market capitalization, nine times the total market value of Hollywood. Furthermore, the AI market is estimated to grow by a staggering 3,150% within the next decade.

“Things are developing so fast that it is even difficult to understand or imagine how everything is going to evolve and transform,” says journalist and filmmaker Raül Gallego Abellán. Gallego has followed artificial intelligence for years, and his recent documentary, Japan, the Utopia of AI and Robots, examined the subject in Japan, reportedly the society most open to incorporating robots and AI into everyday life.

“For sure, as a journalist, filmmaker, and person who produces content, I have many questions about how we are going to face all of this,” adds Gallego. Citing Tristan Harris, a former Google employee and the current head of the Center for Humane Technology, Gallego estimates that AI-generated content will surpass human-made content in volume, possibly within as little as five years.

“I believe that people’s perception or care about how content is created will change. At some point, AI-generated content will become normal, normalized,” elaborates Gallego. According to him, so far people have not cared much whether AI is used to create illustrations, animations, music, or movies. The primary public concern appears to be identifying or verifying whether information is factual, not how it is created. For Gallego, content creators will increasingly share their field of work with AI-powered generative applications.

“There is going to be all this content produced by AI, perhaps supervised by humans, and then possibly premium human-created content,” Gallego concludes. He draws a comparison with organic food, saying that ‘this magazine or documentary is 100% human-made’ may become a trademark that attracts specific audiences. Meanwhile, content partially or entirely made by generative tools will become the standard of cultural production.

Both Gallego and Lubrano believe there will always be demand for human-created content, regardless of advances in generative technologies. Both also welcome democratic oversight and regulation of AI technologies.

“A ‘move fast and break things’ attitude may have worked in the beginning of social networks, but it cannot be applied to AI or self-driving cars,” says Lubrano, referring to Mark Zuckerberg’s philosophy of unconstrained innovation and creative destruction.

In an attempt to catch up, the European Union reached an agreement on its Artificial Intelligence Act in December 2023. Set in motion in 2021, the AI Act is considered the most comprehensive regulation of AI-related technologies globally. Among other things, the act prohibits using AI technologies to create biometric recognition or social scoring systems and imposes guardrails for developing general-purpose AI (GPAI). However, the act appears to be driven more by traditional national security concerns than by concerns about job displacement and cultural impact.

The AI Act does not, for example, address questions regarding copyright or compensation for the use of human-made content in datasets and the training of AI models. Since the public launch of generative AI applications, the use of human-made content to train AI models has been one of their most disputed aspects. In January 2023, artists Sarah Andersen, Kelly McKernan, and Karla Ortiz filed a class action lawsuit in the US District Court for the Northern District of California against three companies – Stability AI, Midjourney, and DeviantArt – for the alleged abuse of copyrighted material of “millions of artists.” Similar cases have followed in other jurisdictions, and in December 2023, the New York Times filed a new high-profile lawsuit against Microsoft and OpenAI. According to the NYT, the two companies should be held responsible for “billions of dollars in statutory and actual damages” due to their “unlawful” use of the newspaper’s content.

In Europe as well, the use of human works to build AI models has stirred a heated moral and legal dispute. In the months leading up to the EU’s AI Act agreement, artists’ and cultural workers’ associations such as European Visual Artists (EVA) and the newly founded European Guild for Artificial Intelligence Regulation (EGAIR) intensified their campaigning efforts to include provisions for content creators.

While the industry associations and unions champion the artistic and creative possibilities opened up by AI-related technologies, they have also opposed the way tech companies exploit the efforts and labor of human creators to make massive profits for their shareholders. Companies such as Stability AI and Midjourney have admitted to building their text-to-image generators by collecting “billions” of images across the internet with open disregard for copyrights and creator consent.

The associations’ core demands revolve around the three golden rules, or “three c’s,” of intellectual and creative labor: consent, compensation, and credit. The associations claim that the companies behind generative AI models fail to comply with any of the three: there has been no creator consent for using content to train AI models, and no compensation or credit for the original creators. Furthermore, the exact use of the content is obfuscated behind proprietary software whose functioning remains opaque to outsiders.

“We are in no way against this technology, and indeed, we can only hope for a future in which it can help us improve everyone’s living conditions. We are against the misuse of data. We think the problem is in defining a better agreement between human beings, not between man and machine,” EGAIR states on their website.

Members of SAG-AFTRA and their supporters during the SAG-AFTRA Labor Day Parade in New York, Sept. 9, 2023. Photo credit: Derek French / Shutterstock for SAG-AFTRA.

Without clear and binding regulation specifically covering AI-related use and processing of content, such agreements are likely to be made outside the legislative system. The first large-scale agreement was reached in Hollywood, the heart of the Western culture industry, in early autumn 2023. After months of strike action, the Writers Guild of America and the actors’ union SAG-AFTRA signed an accord with the Alliance of Motion Picture and Television Producers, commonly known as “the studios.” The strike was the largest labor dispute in Hollywood since 1960, and the resulting deal was hailed as a “fantastic win for writers.”

The final agreement states that “AI can’t write or rewrite literary material, and AI-generated material will not be considered source material.” The deal also prohibits using screenwriters’ works to train AI models. While the arm wrestling between labor unions and corporations over the adoption of AI is likely to continue and even intensify in the coming years, such clauses are expected to safeguard workers from automation and AI-driven job displacement in the immediate future. The Hollywood strike may encourage other industries and unions to follow, as the implications of AI tools extend far beyond cultural and creative work. In many ways, we find ourselves at a crossroads in defining how culture will be produced, reproduced, and enjoyed in the 21st century.

It is unlikely that court decisions based on human-centric legal concepts like intellectual property and copyright will settle the questions around synthetic content for good. At the same time, it seems equally unlikely that defensive labor-protection clauses in individual sectors will do more than postpone the inevitable, unless wider parts of society participate in shaping the development, use, and adoption of these technologies.

The most significant long-term impact of the Hollywood strike and the recent legal actions may turn out to be a wake-up call: a reminder that it is humans who agree upon and define how new technologies are adopted and used. At the end of the day, it is the very real visions, ambitions, and struggles of very real people that will define the future trajectory: whether we sink towards an “infocalypse” and the concentration of technological, economic, and cultural power in the hands of a few, or carve a path, perhaps even a cultural renaissance, that benefits the whole of humanity.

Jean Desta

Jean Desta is a pseudonym representing a collaborative effort to explore libertarian and internationalist perspectives for contemporary society. By examining various social and ecological issues, the writings of Jean Desta aim to ignite conversations, stimulate debates, and offer potential solutions for navigating our rapidly changing world.


This article was published in Turning Point, an independent online magazine created by and for those actively seeking radical change. Read more articles at www.turningpointmag.org.

Published under Creative Commons CC BY-NC-SA 4.0.