At the end of November 2022 the San Francisco-based company OpenAI launched a prototype of a free artificial intelligence chatbot fine-tuned with both supervised and reinforcement learning techniques to deliver detailed, articulate responses across a wide range of topic areas.
Within five days, the chatbot, called ChatGPT in reference to the category of large language models it uses, known as Generative Pre-trained Transformers, had over a million users. By January, it had over 100 million users, making it the fastest-growing consumer application to date.
Within just three months, on the strength of ChatGPT’s early performance, OpenAI began eyeing an IPO at a valuation of $29 billion.
That’s billion with a B.
To put that in context, when Saudi Arabia took its national oil company public in December of 2019, the initial public offering raised $25.6 billion. The Saudi Aramco IPO remains the largest in history, and OpenAI shows every sign of surpassing it sometime very soon.
Now a lot can happen between preparations for an IPO and actually going public, but it is still worth sitting quietly for a moment and thinking about what it means when a piece of software that is currently doing its thing online for free is expected to stage a larger public offering than one of the largest companies in the world by revenue, a company built upon more than a century of infrastructure and technology development and more than 270 billion barrels of proven crude oil reserves.
What exactly is ChatGPT expected to do, and how quickly, and what should you and your business be doing right now to get ready for it?
Before I get too far into things, I want to admit that, as someone who writes for both work and play, I am biased against this powerful new tool. I will grant, and expand upon the point, that it is a remarkable piece of technology that is going to change the way businesses interact with each other and their customers while saving a lot of people a lot of drudgery. But as someone who views writing as a craft and a calling, I dislike the notion that what I do can be replaced by a computer program. I am painfully aware that I wrote a blog entitled “Will Automation Cost Me My Job?” fourteen months ago without any notion that software like ChatGPT was less than a year away from going mainstream. You may also have noticed I have been leaning into writing with a distinctive narrative voice over the last few months, one that cannot be mistaken for the current generation of automated text.
With that said, my goodness if this is not a powerful tool for people who view writing as a bit of a chore, who are not particularly good at stringing words together in print, and who do not view the personal touch as a key component of how they communicate with others. It also offers the promise of liberating millions of people from the day-to-day task of writing content-light but otherwise on-brand business copy.
Let’s Start with the Cons
Having already said I do not particularly care for the idea of copywriters going the way of the whale-oil salesman, let’s start off by acknowledging that ChatGPT is not a magical box where you put in a prompt and it spits out a correct and polished reply: well-researched, deeply thought out, and comparable to the output one would expect from a subject matter expert.
OpenAI itself admits there are limitations to the tool. Among other things, the company says, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers. Fixing this issue is challenging, as: (1) during RL [Reinforcement Learning] training, there’s currently no source of truth; (2) training the model to be more cautious causes it to decline questions that it can answer correctly; and (3) supervised training misleads the model because the ideal answer depends on what the model knows, rather than what the human demonstrator knows.”
Let me put this more simply. When you ask ChatGPT to do something, it does not do what you think it is doing. Instead, it generates an answer to the question, “What would a response to this sound like?” If you ask it a scientific question, it will draw on patterns in its training data to create something you will accept as sounding like a real answer. It will not hesitate to cite a non-existent paper with a plausible title, attributed to a real publication and a real author who has written in the area you asked about, and you will be given a response that sounds like a correct answer to your question but involved no actual thought or original insight.
If you spot an error and ask for a correction or the addition of more information, ChatGPT will generate something that sounds like the next turn of the conversation, but it is not engaging in introspection about where it went wrong the first time. It is not fact-checking or reflecting on why the first response was not acceptable. It is only ever generating, “What would a response to this sound like?”
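ChatGPT’s internals are proprietary, but the underlying idea, a model that only learns “what usually comes next,” can be illustrated with a deliberately tiny sketch. The corpus, the bigram model, and the sampler below are toy inventions for illustration, not anything resembling OpenAI’s actual system; the point is that nowhere in this loop is there a fact-checking step.

```python
import random

# Toy illustration (NOT OpenAI's implementation): a language model learns
# which words tend to follow which, so its output sounds plausible by
# construction. Note there is no verification step anywhere in this loop.
corpus = ("the study shows the treatment works . "
          "the study shows the method fails . "
          "the treatment works in most cases .").split()

# Count bigram transitions: every word that was observed following each word.
model = {}
for prev, nxt in zip(corpus, corpus[1:]):
    model.setdefault(prev, []).append(nxt)

def generate(start, length=8, seed=0):
    """Sample a continuation one word at a time, purely from observed frequencies."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        options = model.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))  # pick a likely-sounding next word
    return " ".join(out)

print(generate("the"))  # fluent-looking output, but nothing here was verified
```

Even this crude sampler will happily assemble sentences that read naturally while asserting nothing it has checked, which is the “sounds like an answer” behavior described above, just at a vastly smaller scale.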
It is powerfully tempting to anthropomorphize ChatGPT, to imagine it is thinking and acting as a person would, but it is not. It is only obeying its programming, programming that demands it produce an output that will satisfy the user even when it does not have everything it needs to do so honestly or accurately. Where that programming has rules that steer it away from an accurate answer, you will lose accuracy. And because there is no transparency into how a response was generated, there is no way to check whether an answer is right without doing the kind of deep reading, learning, and analysis you were trying to delegate to software in the first place.
To summarize bluntly, if you ask ChatGPT something with any kind of depth or nuance, you will only receive a response designed to satisfy you, and a response that satisfies you will probably not stand up to examination by someone better informed or more attentive than you were at the time you engaged with the AI chatbot.
Here are a few real-world examples where you would not want to take ChatGPT’s answer on faith:
- You would not want to drive across a bridge whose architects were passing off ChatGPT’s understanding of physics as their own.
- You would not want to make choices about your health with a doctor who is cribbing their knowledge of current pharmaceutical options from ChatGPT’s summaries.
- You would not want your elected officials drawing up legislation based on ChatGPT’s understanding of existing laws and precedents.
- You would not want to eat food made by a company whose Quality and Safety guidelines, protocols, inspection processes, and compliance documentation were put together by ChatGPT.
I could go on and on. The parlor trick of “Hey, that sounds right!” cannot be relied upon when something actually needs to be correct. To emphasize just how far off we know the software is right now, ChatGPT has only a limited understanding of events that have occurred in the roughly two years since its training data was assembled and it was first exposed to the human oversight phase of testing. In that time, it has learned that humans tend to be more impressed with long answers than short ones, and it has been observed to ‘pad’ its output to receive better feedback. It is also only as good as the data it draws upon, which is finite, largely unspecified by OpenAI, and often clearly dated. For instance, ChatGPT tends to prefer gender roles that would be more acceptable in the 1920s than the 2020s.
Now all of this will probably be addressed in future iterations of the software. In fact, an update was released on a limited basis to paid subscribers on March 14th, but ChatGPT today is not delivering what people believe it is capable of. To use a parallel AI phenomenon by way of illustration, you can often spot art generated by the DALL·E 2 AI-driven image creator because, for whatever reason, artificial intelligence really struggles with human hands. Think how much harder it will be to teach ChatGPT to give answers that go beyond sounding plausible to actually being correct than it is to teach an art generator where four fingers and a thumb go.
The Pros, and There Are Many!
Having said what ChatGPT is not and should not be trusted to do, let’s remember that a $29-billion IPO estimation is not built on a fundamental misunderstanding of this technology’s capabilities and promise. Maybe ChatGPT is not ready to replace subject matter experts just yet, but it can certainly take over a lot of mundane copywriting tasks, especially as the software is refined by interaction with the curious public and bespoke paid-for services become available to businesses of all shapes and sizes. Those businesses will want ChatGPT to do things that right now only people can do, especially the boring, low-level, entry-level things people do not want to spend their entire careers doing.
Let’s start with social media. Most companies have an online presence these days, and while it is true the people doing those jobs have an awareness of what their company does and how it wants to be perceived by the marketplace, awareness of the company’s offerings and guidelines on branding and engagement can be programmed into a customized ChatGPT. The marketing and communications professionals whose day-to-day work used to revolve around simple copywriting will then be freed to do more important things.
Technology doesn’t replace people. It replaces tasks. Meanwhile, every company is struggling to hold onto the people it has while attracting the talent it will need to succeed in the future. Automating boring, low-content social media copywriting offers huge opportunities to make work more interesting and engaging for people who would otherwise view an entry-level position as a temporary gig.
Okay, social media copywriting to one side, what about customer service? We already have chatbots on many websites, and they tend to be terrible. ChatGPT is going to be a step change in how people engage with companies online. Most interactions follow the same broad themes, which the software can analyze and handle over and over again. That also means the software can be trained to identify and flag when an atypical conversation requiring a real subject matter expert is happening, and employees can be standing by to receive only the conversations that require their knowledge and abilities.
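The handoff idea above can be sketched in a few lines. Everything here is hypothetical: the intents, canned replies, word-overlap scoring, and threshold are invented for illustration, and a real deployment would use a trained intent classifier rather than this crude similarity check. The shape of the logic is the point: routine questions get an automated reply, and anything the bot does not recognize with confidence is escalated to a person.

```python
# Hypothetical sketch of routing routine questions to a bot and escalating
# atypical ones to a human. All intents, replies, and thresholds are invented.
KNOWN_INTENTS = {
    "order status": "You can track your order under My Account > Orders.",
    "return policy": "Returns are accepted within 30 days with a receipt.",
}

def score(message: str, intent: str) -> float:
    """Crude word-overlap similarity standing in for a real intent model."""
    words = {w.strip(".,?!") for w in message.lower().split()}
    intent_words = set(intent.split())
    return len(words & intent_words) / len(intent_words)

def route(message: str, threshold: float = 0.5):
    """Return (handler, reply): 'bot' for recognized questions, 'human' otherwise."""
    best = max(KNOWN_INTENTS, key=lambda i: score(message, i))
    if score(message, best) >= threshold:
        return "bot", KNOWN_INTENTS[best]
    return "human", "Connecting you with a specialist..."

print(route("What is your return policy?"))              # handled by the bot
print(route("My shipment arrived damaged and leaking"))  # escalated to a person
```

The design choice worth noticing is that the threshold is tunable: set it high and more conversations reach employees, set it low and more are automated, which is exactly the lever a business would adjust as confidence in the tool grows.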
Again, we now see customer service representatives elevated to doing more interesting, engaging, and challenging jobs for the customers who actually need them rather than having a large number of people handling all kinds of different inquiries, many of which could have been easily handled by automation if that automation was just more palatable to customers. Customer satisfaction and employee satisfaction both go up, and sooner than you would believe the companies that are doing this will have a distinct competitive advantage over the companies that are not.
What about sales? A salesperson who knows their product and can talk one-on-one with the decision-maker is invaluable, but how does that salesperson find and engage with that decision-maker right now? How much of lead generation, first contact, and follow-up could be outsourced to a ChatGPT-like piece of software that has been taught enough about the business and the product to make realistic and engaging small talk but also knows when to pivot things over to a human being for the important part of the conversation? How many salespeople would like every interaction with a customer to be the decision point? ChatGPT might finally solve the puzzle of how to actually automate large parts of the sales cycle, freeing up talented salespeople to do the most important part of their job more often.
It should also not be denied that with a thorough understanding of its limitations, this is an excellent way to gather a lot of connected data and put it in a digestible format. As long as the user understands they are looking at a rough and unverified overview to be edited, confirmed, and refined by a human being, ChatGPT as an information aggregator and grammatically correct first draft producer is an incredible tool for anyone and everyone, especially those who struggle to do independent research and writing quickly and to a deadline.
So What Should Your Business Be Doing Right Now?
One of the most impressive things about ChatGPT is the speed with which it has thrust itself into the public zeitgeist. Everyone is asking questions about it, and that’s a good thing! There is still a lot to learn, and I encourage everyone to keep following news and updates on this tool. The software is going to get better. Customization is going to become a standard part of the premium service, and maybe even the free service to an extent. The experience of early adopters is going to inform everyone else. Customers are going to respond to seeing ChatGPT used as an engagement tool. Employees are going to respond to having it become part of their working lives. There are so many lessons we are all about to learn, so pay attention as they come.
It is worth saying that almost all successful technology integrations involve a pull coming from within the business, an issue that needs to be addressed, rather than the push of senior management trying to force a new tool into a business that was not looking for one. Maybe the next few months should be about examining where ChatGPT can do the most good in automating repetitive writing tasks that require little in-depth knowledge, freeing the people whose days were once heavily committed to that kind of work to do more important and valuable tasks.
The best course right now seems to be ongoing research: keeping eyes and ears open for updates on new capabilities and the elimination of current growing pains, brainstorming where you would want this tool to help your people if and when it can, and sparing a thought for the soft skills of change management that will be involved in bringing this new tool into the toolbox of existing copywriters. People fear change, and this is going to be a big one. It’s not quite here yet, but it’s not going away, and it’s getting closer every day.
I imagine I will be writing more about ChatGPT and its successors in the future. Stay tuned for that while I follow my own advice from the previous paragraph.
Head of Content & Research
Geoff joined the industry events business as a conference producer in 2010 after four years working in print media. He has researched, planned, organized, run, and contributed to more than a hundred events across North America and Europe for senior leaders, with special emphasis on the energy, mining, manufacturing, maintenance, supply chain, human resources, pharmaceutical, food and beverage, finance, and sustainability sectors. As part of his role as Head of Content & Research, Geoff hosts Executive Platforms’ bluEPrint Podcast series as well as a weekly blog focusing on issues relevant to Executive Platforms’ network of business leaders.
Geoff is the author of five works of historical fiction: Inca, Zulu, Beginning, Middle, and End. The New York Times and National Public Radio have interviewed him about his writing, and he wrote and narrated an animated short for Vice Media that appeared on HBO. He has a BA Honours with High Distinction from the University of Toronto specializing in Journalism with a Double Minor in History and Classical Studies, as well as Diploma in Journalism from Centennial College.