Artificial Intelligence (AI) has come a long way in recent years, and its impact can be seen all around us today. Read an article on the Internet and chances are it was written by an AI bot. Watch a video on YouTube and it might be narrated by an AI voice.
This leads to some big questions – Can AI replace people in creative fields? What are the risks of becoming too dependent on AI? And will the limitations of AI leave a deeper, less desirable mark?
International Business Review explores the growing use of AI in the creative industries and analyses its potential to replace people.
Is Machine Intelligence the Beginning of the End?
As we move into an age where the information superhighway has grown beyond all recognition since the nineties, the amount of data being generated is overtaking our ability to process it.
This is where Artificial Intelligence (AI) comes in, acting as an assistant to human thinking. Using AI technology to store and process the information behind decisions, especially in the workplace, translates into higher levels of productivity and efficiency. Ideally, automating mundane tasks will save money in the long run.
But what does this mean for people who are being displaced by such advanced technology?
The list of tasks currently performed by people that can be automated through AI is extensive. In customer service, manufacturing and even the healthcare sector, using AI is already the norm.
Due to the repetitive nature of some of the jobs in these sectors, many are seeing their positions go the way of the telephone operators – workers who manually connected phone calls through a switchboard. Back then, they were replaced by an earlier form of machine intelligence: the automatic telephone exchange.
With AI already integrating itself into various fields, many seem likely to face the same job displacement and skills obsolescence, as companies once again opt for the faster and cheaper option, just as phone companies did in the 1960s.
The Notable Big Five
A contender already making waves in cyberspace is OpenAI’s Chat Generative Pre-Trained Transformer (ChatGPT), an AI chatbot that provides text responses to user-input queries. Despite only being released to the public in November 2022, a survey by Resumebuilder.com found that ChatGPT is already being used to replace people in the workforce, with companies deploying the chatbot to write code, copywrite, create content, provide customer support, and prepare meeting minutes.
This was done despite warnings from OpenAI co-founder and CEO Sam Altman, who advised against relying on the limited capabilities of the AI chatbot for “anything important” as there is still “lots of work to do on robustness and truthfulness”.
The Financial Times reported that Big Tech companies are “aggressively pursuing investments and alliances with artificial intelligence start-ups”. The wave of AI technology being developed by these start-ups is impressing many and outperforming behemoths like Google and Amazon.
You would think this would scare these companies, but the technology behind software like ChatGPT requires extensive hardware that is mostly owned and operated by the Tech Giants themselves.
“There is a heightened concern about how the large information services firms are limiting opportunities for new generations of competitors to come forward,” said William Kovacic, former Chair of the United States’ Federal Trade Commission (FTC) and George Washington University Professor of Antitrust Law.
The only way AI start-ups can afford to build their own data infrastructure is through partnerships with companies that have the cloud computing capabilities required to train their AI models. With news of Tech Giants handing out massive layoffs to over 40,000 employees in 2022 while investing heavily in AI, they seem to be pushing for a world with a less human-oriented workforce.
Artificially Programmed Bias
When AI start-ups like the makers of ChatGPT require the support of Tech Giants to essentially run their business, it will inadvertently allow AI to be programmed with certain biases.
Its inability to understand the nuances of human communication and context is another Achilles heel.
Since an AI is programmed to identify certain keywords or phrases that its makers consider inappropriate, it may flag content as potentially offensive language without understanding the context or intent behind it.
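The keyword-flagging behaviour described above can be sketched with a toy example (purely illustrative, and not any real platform’s moderation code): a filter that matches words against a blocklist cannot tell a threat apart from an innocuous phrase that happens to contain the same word.

```python
# Toy illustration of naive keyword-based moderation (hypothetical, not a
# real platform's system): any text containing a blocklisted word is flagged,
# regardless of the context or intent behind it.
BLOCKLIST = {"kill", "attack"}

def flag(text: str) -> bool:
    """Return True if any blocklisted word appears in the text."""
    words = text.lower().split()
    return any(w.strip(".,!?") in BLOCKLIST for w in words)

print(flag("I will attack you"))                # True - genuinely hostile
print(flag("The heart attack survivor is well"))  # True - flagged despite benign context
print(flag("Have a lovely day"))                # False
```

The second sentence is flagged purely because it contains the word “attack” – exactly the context-blindness the paragraph above describes.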
Another limitation of AI arises when discussing controversial topics, where different perspectives and viewpoints are expressed in a range of ways. Because of its programmed censorship, when applied to contentious viewpoints it could hinder free speech and limit the ability to engage in open and honest debate.
Take the aforementioned ChatGPT, for example. Its AI was programmed to follow ethical guidelines set by OpenAI, which maintains content policies restricting its use. For example, it will not write on a controversial topic such as the gender and sexual legitimacy of trans women. It will assert the stance of its policymakers that trans women are real women and will not support the opposing argument. If people are actually replaced by these AIs one day, big tech companies would practically legislate what you are able to consume and create.
Moreover, if an AI is used to moderate content, it raises the question of who is responsible for determining what is acceptable speech and what is not. The reasoning behind decisions made by AI algorithms is hard to understand, creating a lack of transparency and accountability.
Even though it is the AI that flags and moderates content violating guidelines established by online platforms, it is still following rules set by people. Through their algorithms and content curation, the power to shape online discourse and public opinion therefore rests with the companies that own the AI.
This kind of power was already on display amid allegations that Facebook promoted certain viewpoints on its platform to influence the outcome of the 2016 presidential election in the United States. Facebook’s algorithm was accused of inadvertently amplifying and spreading false information and propaganda from Russian entities seeking to influence the election.
With that, if companies continue the path of opting for ChatGPT to create content instead of human writers, then will the content not eventually evolve into a regurgitated amalgamation of only one side of the story?
Artificial Art Intelligence
The creative industries are also being backed into a corner. AI tools that convert text to images have met with mixed reviews over their ability to produce images in the style of human artists.
DeviantArt, an online art community platform, announced its own AI text-to-image generator, DreamUp. Artwork uploaded to the website by DeviantArt users would automatically be used as training material for the AI. Within 24 hours of the announcement, the policy was changed to exclude all work by default, requiring users to opt in to having their content included in the AI datasets.
In spite of this change, it still left a sour taste with users, as their work might already have been used to train DreamUp without their consent: the tool was based on Stable Diffusion, an existing AI text-to-image generator developed using images obtained freely online. The start-up behind Stable Diffusion, Stability AI, is currently being sued by Getty Images for copying 12 million images to train its AI tool without permission. In a statement, Getty Images said Stability AI “chose to ignore viable licensing options and long-standing legal protections in pursuit of their stand-alone commercial interests.”
However, Getty Images’ competitor in the stock image marketplace, Shutterstock, chose to steer into the skid. The company embraced the new capabilities of generative AI through a partnership with OpenAI: Shutterstock will make its stock image libraries available to train OpenAI’s image-generating AI system, DALL-E 2, to generate photos rather than relying on human photography.
When it comes to online art, it is getting harder to distinguish original art created by human artists from the output of AI software. Companies would essentially prefer AI-generated content, as it is cheaper than paying royalties or commission fees to artists.
Those in the creative arts industry can choose to fight, or follow Shutterstock’s direction and embrace the change – but what would that mean for human creativity?
If art created by human artists is required to train AI software, will the result not again be a collection of different paintings that all share the same style?
While AI has made leaps and bounds in the creative industries, it is programmed and designed from existing data and can therefore possess only the broad context of a creative work. It is still limited in its capabilities, especially in terms of originality, emotional intelligence, contextual understanding, intuition, and imagination – all concepts born of human experience, emotions, and perspectives.
Though AI-generated content can be impressive, it lacks the emotional depth and personal connection that comes from human creativity.
The rise of AI has the potential to transform the way we work and create. While there are many potential benefits to using AI in the workforce and the arts, what it means for the future of work and human creativity must be addressed. As AI technologies continue to develop, it is important that we consider the potential impact of these changes and work to ensure they benefit society as a whole.