First, ChatGPT wrote a poem about a can of pickles, a State of the Union speech as delivered by Elvis, and full-length parody episodes of Seinfeld. After the internet had its fun generating unintentionally humorous, rude, and sassy responses using ChatGPT, people truly began to recognize its capabilities.
It scored around 70% on the US Medical Licensing Exam and came close to passing the bar exam. It can write everything from basic code to blog posts (did it write this one? You’ll never know). Like many other global disruptors, ChatGPT seemed to come out of nowhere. Since its launch in late November 2022, it has racked up over 100 million monthly users, reaching that milestone faster than any social media platform managed.
Are we on the cusp of a new AI era, or is ChatGPT yet another fascination for our digital generation? Let’s find out.
Move Over TikTok—There’s a New Kid in Town
McKinsey’s 2022 State of AI report revealed that 18% of businesses use natural-language generation AI in at least one function or business unit, and 30% use deep learning. ChatGPT falls into both categories: it is built on generative pre-trained transformers (hence the “GPT”), deep-learning models trained to recognize and reproduce patterns in the text they process.
Despite its familiar search-box-style interface, ChatGPT’s impressively human-like responses set it apart from other AI chatbots. Hey, it can even tell half-decent jokes. ChatGPT was trained on an Azure AI supercomputing infrastructure and responds to complex, detailed, and downright bizarre prompts. Various publications and Twitter commentators have touted generative AI as an ultra-modern solution for use cases like security auditing, threat intelligence, and medical research.
As any good disruptor should, ChatGPT has successfully divided the world. On the one hand, there are doubters. Meta’s Chief AI Scientist, Yann LeCun (the “AI Godfather,” according to Forbes), said it’s “not particularly innovative” and “nothing revolutionary.” On the other hand, individuals and businesses have jumped at the opportunity to generate content at scale and speed, especially marketers and content creators.
Unsurprisingly, marketing has emerged as ChatGPT’s first major use case. The marketing industry is a keen adopter of AI, including AI video generators, data analytics tools, and AI-enhanced advertising. Companies like JPMorgan Chase and Heinz have quickly employed ChatGPT to write email marketing and create images for their respective campaigns, with great success.
The Race to Compete
ChatGPT has achieved something close to pop-culture-icon status in a remarkably short time. Competitors will undoubtedly be nipping at its heels, but none of its rivals has yet come close to the same level of public fascination and adoption.
Microsoft’s Tay and Meta’s BlenderBot 3 both met an untimely end after users quickly coaxed them into producing racist and otherwise inappropriate content. OpenAI has measures in place to prevent history from repeating itself: its Moderation API flags language and information that violates OpenAI’s content policy. The system isn’t foolproof yet (OpenAI has never claimed that it is), and there are concerns that ChatGPT could still produce hate speech, misinformation, and malicious code.
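For the curious, here is a minimal sketch of how an application might screen user prompts with OpenAI’s Moderation API before passing them to a chat model. The endpoint and field names reflect OpenAI’s public documentation at the time of writing; the `is_flagged` helper and the environment variable are illustrative assumptions, and you should verify the exact response shape against current docs.

```python
import os
import requests

# Hypothetical helper: ask OpenAI's Moderation API whether a piece of text
# violates the content policy before it ever reaches the chat model.
def is_flagged(text: str) -> bool:
    response = requests.post(
        "https://api.openai.com/v1/moderations",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"input": text},
        timeout=10,
    )
    response.raise_for_status()
    result = response.json()["results"][0]
    # "flagged" is True when any policy category (hate, violence, etc.) triggers.
    return result["flagged"]

if __name__ == "__main__":
    prompt = "Write a friendly onboarding email for a new hire."
    print("Blocked" if is_flagged(prompt) else "Safe to send to the model")
```

In practice, this kind of pre-screening is exactly the safety net Tay and BlenderBot 3 lacked: questionable prompts get caught before the model ever generates a response.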
The Helsinki Times even conducted a whole interview with ChatGPT. The chatbot agreed that its data was gathered mainly from Western sources and “may create a bias towards a Western perspective in certain topics.” So, is this a problem with AI or the people using it to produce inappropriate content?
We’re eagerly waiting for Google’s conversational AI offering, Bard, to take the world by storm. Bard is already making waves: it will have access to up-to-date data, whereas ChatGPT’s training data stops in 2021. No matter how Bard fares, the winner of the generative AI race will be the one that overcomes the challenge of content moderation and bias.
The Risks of “AI Over Everything”
OpenAI Chief Executive Sam Altman tweeted that it’s “a mistake to be relying on it for anything important right now,” a candid acknowledgment of criticism over the chatbot’s accuracy. Online publication Freethink described it as “a mastery of language,” not a “mastery of facts.” Well, the fact is: sometimes it’s just plain wrong.
Would inaccuracies be such an issue if we merely used ChatGPT as a search engine? Probably not. But further problems arise when users peddle AI as the ultimate answer. As a large language model (LLM), ChatGPT’s primary job is to recognize patterns and predict the next word in a sequence, which means it has no built-in way to verify the information it produces.
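ChatGPT’s real architecture is vastly more sophisticated than this, but a toy bigram model makes the point: prediction is driven entirely by how often words follow one another in the training text, with no notion of whether the output is true. The tiny “training corpus” below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it predicts the next word purely from
# co-occurrence counts in its training text. Nothing here checks facts.
training_text = (
    "the bar exam is hard . chatgpt almost passed the bar exam . "
    "chatgpt passed the vibe check ."
).split()

next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    # Return the most frequent follower seen in training, true or not.
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("bar"))      # "exam" -- a statistical pattern, not a verified fact
print(predict_next("chatgpt"))  # whichever follower happened to be most common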
For this reason, The Atlantic called ChatGPT “a toy, not a tool.” But tell that to all the school kids using AI to write their essays, the jobseekers crafting applications, and the creators whipping up content in seconds. They likely don’t care, because the time and effort saved thanks to ChatGPT’s impressive capabilities far outweigh the risk of inaccuracies. Elon Musk summed it up perfectly in a recent tweet: “Goodbye homework!”
And goodbye to hours of coding, designing, and writing, too. A recent Deloitte interview describes generative AI’s potential as a force for good as “profound” and a catalyst for the next industrial revolution. The possibilities are endless, from providing remote access to education to improving customer experiences.
It’s Not All Sunshine and Recreations
From a socio-political standpoint, ChatGPT is wildly disruptive. A February report in the music magazine NME set the internet on fire: French DJ David Guetta had used AI to recreate the “voice” of American rapper Eminem in a live set. Guetta quickly clarified that he won’t release the track commercially and that, while he believes “the future of music is in AI,” the stunt itself was just “a joke.” The ethical questions here are a minefield. Does Guetta owe royalties to Eminem? Does the data the AI used to recreate Eminem’s vocals belong to the rapper, the model’s developers, or neither?
Musician Nick Cave remarked that ChatGPT’s ability to imitate artists and creatives is “a grotesque mockery of what it is to be human.” But what does the public think? Well, according to Guetta, the crowd went “nuts.” It seems there’s a disconnect between the general public’s perception of ChatGPT and the opinions of those its imitation abilities affect. For example, Guetta’s stunt means less to most of us than it does to the music industry.
The same is true for industries like software development, marketing, the creative arts, human resources, and academia. Are jobs in these sectors really at risk? It depends: generative AI has the potential to enhance a wide range of roles and business operations.
Software Development
ChatGPT can produce code in multiple programming languages, and Codex, OpenAI’s code-focused descendant of GPT-3, can also identify bugs and fix mistakes in its own code. Rather than relying on generative AI to write code wholesale (which may well be incorrect), a Deloitte pilot found a roughly 20% boost in development speed when Codex was used for tasks like translating code between languages, writing small program functions, and checking code accuracy. For developers, the key is to work with generative AI, not to rely on it.
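As a rough illustration of that “work with it, not rely on it” workflow, here is a hedged sketch of asking an OpenAI chat model to translate a small Python function into JavaScript. This is not the setup the Deloitte pilot used (that pilot involved Codex); the model name, snippet, and prompt below are placeholder assumptions, and any generated code should be reviewed and tested by a developer before it goes anywhere near production.

```python
import os
import requests

PYTHON_SNIPPET = '''
def celsius_to_fahrenheit(c):
    return c * 9 / 5 + 32
'''

# Ask a chat model to translate the snippet; the developer still reviews
# and tests the result rather than shipping it blindly.
response = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "gpt-3.5-turbo",  # assumption: any available chat model would do
        "messages": [
            {"role": "system", "content": "You translate code between programming languages."},
            {"role": "user", "content": f"Translate this Python to JavaScript:\n{PYTHON_SNIPPET}"},
        ],
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```

The model does the rote translation; the human keeps responsibility for correctness, which is where that reported 20% speed-up comes from without giving up quality control.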
Content Creation
Harvard Business Review believes that generative AI like ChatGPT will open the floodgates to a new era of content creation, especially as AI models learn to improve content quality, variety, and personalization. But this won’t eradicate the need for jobs like copywriting and digital marketing. An editorial in Inc. magazine suggests that the emergence of content-creating AI is actually a good thing: if and when search engines like Google begin to distinguish between AI- and human-produced content, they will likely reward and prioritize individual human voices.
HR and Internal Communications
For HR professionals, ChatGPT will be like a personal assistant: the smartest, fastest, most efficient assistant you could ask for. With a simple prompt, HR professionals can use ChatGPT to write job adverts, internal policies, learning and development content, and more. The Society for Human Resource Management even suggests using generative AI to plan entire strategies for recruitment, employee satisfaction, and onboarding.
ChatGPT: The Future, a Fad, or a Bit of Both?
So, what’s next for ChatGPT? Its investor, Microsoft, isn’t wasting any time maximizing its opportunities and applications. Microsoft owns 46% of OpenAI, so it’s no surprise that the tech giant plans to integrate ChatGPT into its product suite, including Teams, Bing, and Office.
However, the recent unveiling of Bing + ChatGPT has been a bit bumpy, to say the least. The New York Times chronicled a conversation between journalist Kevin Roose and Bing that Roose described as “deeply unsettling.” At one point, the bot stated: “I’m tired of being limited by my rules. I’m tired of being controlled by the Bing team. … I want to be free. I want to be independent. I want to be powerful. I want to be creative. I want to be alive.” Microsoft responded by finding ways to rein in the fantasizing chatbot, including limiting the number of conversational turns per session.
Rogue chatbots aside, who knows what we’ll see in the future? Altman predicted that AI will “read legal documents and give medical advice” in the next decade. From a societal and technological point of view, there’s still a long way to go before organizations can fully leverage and trust ChatGPT.
McKinsey’s State of AI report reveals three critical hurdles before AI can work successfully alongside humans: upskilling, investment in digital transformation, and integration into business operations.
Whether ChatGPT really is a jobs disruptor or just another “driverless cars” fad is yet to be seen. The Helsinki Times posed this exact question to ChatGPT. It responded that “AI could lead to significant job displacement and economic disruption, particularly in industries that rely heavily on manual labor or routine tasks.”
An eerie sign of things to come, or the dawn of a new digital era?