Chatbots and GPT-3: Using human knowledge and relevant context for better chatbot experiences


GPT, or Generative Pre-trained Transformer, is an autoregressive language model that uses deep learning to produce human-like text. GPT-3 is the third generation of the GPT series from OpenAI, the research company co-founded by Elon Musk. OpenAI began granting selective access to the technology in July 2020 to stimulate the use of GPT-3 for building language-based solutions.

Such language comprehension in AI comes at a much-needed time, with many of us now operating in a disparate digital landscape. By the end of 2021, 80% of businesses are expected to have some sort of chatbot automation; however, the user experience with chatbots to date has been a rocky one.

The conversational context GPT-3 provides enables the bot to understand user intent better, respond in a much more human-like way, and engage with brand personality. To understand how GPT-3 will re-imagine the customer experience through chatbots, let’s break down what GPT-3 is (sans the hype), and how it applies to chatbots.

About the author

Nitesh Dudhia is co-founder and CBO at Aikon Labs

The ABCs of GPT-3

(G) Generative: Generative models take a statistical approach to learning the true distribution of a training data set, aiming to estimate, predict, or generate an output given some input. Generative models have shown remarkable progress in recent years for unsupervised deep learning. GPT-3 applies this generative methodology at the scale of 175 billion parameters, learned from a vast corpus of publicly available text.

(P) Pre-trained: With this large body of knowledge, not much input is needed, making GPT-3 ‘pre-trained’ and ready to use. With minimal prompting it can discern the linkages and context in a conversation. GPT-3 can sound like Shakespeare or Richard Feynman if you wish—but the catch is that it doesn't really understand the emotion or content. It just understands the minute details of how words are strung together in a given context. It does this better than any other AI, however, resulting in the closest thing we have to consistently generated human-like prose with minimal prompting.
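This "minimal prompting" is often called few-shot prompting: rather than retraining the model, you show it a handful of in-prompt examples that establish the pattern and style it should continue. The sketch below (plain Python, no actual GPT-3 call; the question/answer examples are hypothetical) shows how such a prompt is typically assembled.

```python
# A minimal sketch of few-shot prompting: a few labelled examples are
# concatenated ahead of the new query, and the model completes the
# pattern from the trailing "A:".

def build_few_shot_prompt(examples, query):
    """Concatenate labelled Q/A examples, then the new query, into one prompt."""
    lines = []
    for question, answer in examples:
        lines.append(f"Q: {question}")
        lines.append(f"A: {answer}")
    lines.append(f"Q: {query}")
    lines.append("A:")  # the model would generate its answer from here
    return "\n".join(lines)

examples = [
    ("What is the capital of France?", "Paris."),
    ("What is the capital of Japan?", "Tokyo."),
]
prompt = build_few_shot_prompt(examples, "What is the capital of Italy?")
print(prompt)
```

The same scaffold works for tone: swap the example answers for Shakespearean ones and the completion follows suit.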

(T) Transformer: Transformers can extract words from a sentence and then compute their proximity based on how frequently particular words occur together. They do this by projecting words into a multidimensional space, a mathematical representation, which in turn helps them predict what words can be strung together as a relevant response to a particular prompt. GPT-3 takes this ability further, as it doesn't require a ton of training data in order to perform multiple language tasks, making it operational right out of the box.
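"Projecting words into a multidimensional space" can be made concrete with a toy example (this is an illustration, not GPT-3's internals, and the three-dimensional vectors below are made up; real models learn hundreds or thousands of dimensions): each word becomes a vector, and geometric closeness stands in for how likely words are to belong together.

```python
# Toy word embeddings: cosine similarity between vectors approximates
# semantic proximity. "king" and "queen" point in nearly the same
# direction; "car" points elsewhere.
import math

embeddings = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.88, 0.82, 0.12],
    "car":   [0.1, 0.2, 0.95],
}

def cosine(u, v):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine(embeddings["king"], embeddings["queen"]))  # close to 1
print(cosine(embeddings["king"], embeddings["car"]))    # much smaller
```

A transformer layers attention on top of this idea, weighting every word in the prompt against every other word before predicting the next one.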

Why does GPT-3 matter for chatbots or text-based human machine interactions in general?

Not so long ago, chatbots struggled to hold their own in a conversation with a human. For example, when a person calls into a call center or a helpline, they typically don’t get the help they need because they are thrown into a loop of robot speak. This is because chatbots are, more often than not, tightly scripted. To date, most chatbots have had hard-coded scripts with little wiggle room in the words and phrases they understand. This has improved significantly over time, and chatbots are becoming more capable of handling edge cases thanks to machine learning (ML) and Natural Language Processing (NLP), but GPT-3 takes this a quantum leap further.

Chatbots need two key capabilities to be useful and deliver a better experience. Firstly, they need to understand the user's intent better. This is where a combination of GPT-3 and Natural Language Understanding (NLU) comes in to help extract intent from conversational interactions. Secondly, chatbots need to be able to respond in a more meaningful manner. To date, chatbots have been limited to scripts and templates, making them inauthentic, robotic, and most importantly—often unhelpful. GPT-3 can give more freedom, within the bounds of personality, politeness, and even domain, to craft a response—heck, it can even do math on the fly during a conversation if you want it to!
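To see what "understanding intent" means mechanically, here is a deliberately simplified sketch. A real NLU layer would use learned embeddings or a model like GPT-3; here, plain keyword overlap stands in for the idea of scoring an utterance against each candidate intent (the intent names and keyword sets are hypothetical).

```python
# Simplified intent detection: score each intent by how many of its
# keywords appear in the user's utterance, and fall back when nothing matches.

INTENTS = {
    "check_balance": {"balance", "account", "much", "money"},
    "reset_password": {"password", "reset", "forgot", "login"},
}

def detect_intent(utterance):
    """Return the best-scoring intent name, or 'fallback' if nothing matches."""
    words = set(utterance.lower().replace("?", "").split())
    scores = {name: len(words & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "fallback"

print(detect_intent("I forgot my password"))              # reset_password
print(detect_intent("How much money is in my account?"))  # check_balance
```

The gap GPT-3 closes is exactly the brittleness of this approach: "I can't get into my profile" scores zero here, while a large language model can map it to the same intent.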

There is also the opportunity to fine-tune the structure, style, and mannerisms of the chatbot by using GPT-3's vast capacity to customize the generated response. Infusing your chatbot with GPT-3 gives it language context superpowers. It can sense when there is a switch in context, and that information can help a bot load the script relevant to the new context and manage the conversation just like a human would. It can use analogies and relevant examples based on the user’s profile, and even mirror or mimic their style or voice. The super language model can use your inputs as a prompt and generate an appropriate response while still following a script. An improved self-service experience can be had with a chatbot even if it is powered by a script, because it can be supercharged with the knowledge and context discovered by GPT-3.
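The "load the script relevant to the context" pattern can be sketched as follows. This assumes the context switch has already been detected upstream (by GPT-3 or any classifier); the topics and script lines are hypothetical, and in a real bot the language model would rephrase the scripted line rather than emit it verbatim.

```python
# A sketch of script switching: when the detected topic changes, the bot
# swaps in the script for the new context; within a topic it stays on script.

SCRIPTS = {
    "billing": "I can help with billing. Could you share your invoice number?",
    "shipping": "Happy to help with shipping. What is your order number?",
}

class ScriptedBot:
    def __init__(self):
        self.context = None  # no active topic yet

    def handle(self, topic):
        if topic != self.context:  # context switch detected upstream
            self.context = topic
            return SCRIPTS.get(topic, "How can I help?")
        return "Still on it, thanks for your patience."

bot = ScriptedBot()
print(bot.handle("billing"))   # loads the billing script
print(bot.handle("shipping"))  # switch: loads the shipping script
```

The design point is the separation of concerns: the script guarantees the bot stays on-task, while the language model supplies the human-sounding delivery.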

GPT-3 can glean context and knowledge from structured and unstructured conversations in the form of intent, entities, correlations, etc.—helping to create rich knowledge graphs. Richer knowledge graphs can help create better models with embedded context, which in turn further enriches the knowledge graph. This is a virtuous cycle that will make the collection, organization, and reuse of knowledge within an enterprise exponentially better. GPT-3 working in tandem with other models and an enterprise knowledge graph will power the next generation of cognitive agents.
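At its simplest, a knowledge graph is a set of subject-relation-object facts. The sketch below shows the shape of the idea with facts a pipeline might extract from a support conversation (the entity names and relations are made up for illustration; production systems use dedicated graph databases, not a dict).

```python
# A minimal in-memory knowledge graph: each subject maps to a list of
# (relation, object) edges, and lookups walk those edges.
from collections import defaultdict

graph = defaultdict(list)

def add_fact(subject, relation, obj):
    """Record one extracted fact as an edge in the graph."""
    graph[subject].append((relation, obj))

# Facts an extraction pipeline might pull from a chat transcript:
add_fact("order_1042", "has_status", "delayed")
add_fact("order_1042", "placed_by", "customer_77")
add_fact("customer_77", "prefers_channel", "chat")

def neighbors(subject):
    """Everything the graph knows about one entity."""
    return dict(graph[subject])

print(neighbors("order_1042"))
```

The virtuous cycle in the paragraph above amounts to feeding such accumulated edges back in as conversational context, so each conversation both draws on and extends the graph.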

GPT-3 chatbot features

The "aha" moment achievable with GPT-3 in chatbots is that it becomes much easier to have a civil and meaningful chat with the interactive pseudo-human personality (bot). Thus, the chatbot aspires to bring back a social element and create user engagement.

GPT-3 does this through three main features: Engagement Hangouts, Custom Actions, and Machine Reading Comprehension.

Engagement Hangouts enables the bot to disappear after a predetermined number of messages from a user to avoid awkward "dead air." GPT-3 has a machine-learning model to gauge the best amount of time to avoid the "dead air" situation.

Custom Actions allow for more dynamic engagement with the chatbot. The chatbot can store your responses and use them, in context, in future conversations.

Machine Reading Comprehension is GPT-3's ability to predict what the user is going to type next. For example, if the user says there is traffic on "6th Street," the chatbot can suggest a short-term or long-term solution for avoiding traffic.

Chatbots have become increasingly popular. While the conversational context used by most chatbots isn't very human-like, GPT-3 can make it far more so through its engagement hangouts, custom actions, and machine reading comprehension.

Bigger does not necessarily mean better

However, any kind of progress we make comes with measurable costs.

The catch with GPT-3 is that it doesn't really know or understand what it has said—it is simply regurgitating from the information and context it has built via the algorithm. This means it can reflect inherent biases without understanding that it is doing so. It can only string words together in a particular style, and it doesn’t really appreciate the emotion that a poetic verse can elicit. At the end of the day it is only a language model that maps everything it has seen into a multi-dimensional vector space: nothing more, nothing less.

GPT-3's 175 billion parameters are pre-trained on a vast corpus of available content—giving it a worldview of context, but unless it has been recently updated, that view is limited to everything that happened before its last refresh. For example, if its last update covered the world up to October 2019, it may still think that Donald Trump is the US president. Its inferences are bounded by the data it has seen and the orientation it was given during training. Rules need context, however, because one thing can have multiple meanings.

Many human biases and views, whether far left or far right, may already be present in GPT-3, for it has seen and processed virtually all the content available at the time of its creation. This is not necessarily the algorithm's fault; it is a reflection of what it has been fed. GPT-3 has seen some contemptible content too, and if you don’t constrain it to be polite, it can easily respond with offensive content. This is like a baby using swear words: the baby picks up on what is happening around them, what their mother and father are saying, what the people around them are doing, and mimics it.

Shane Legg, Chief Scientist and Co-founder at DeepMind, explained that AI works on “one-algorithm,” versus the “one-brain” generality humans have. One-algorithm generality is very useful but not as interesting as the one-brain kind. “You and I don’t need to switch brains when we change tasks; we don’t put our chess brains in to play a game of chess,” he said.

Even with its progress, this “one-algorithm” approach that AI works on means that it segregates information, limiting its ability to connect incongruent data points. In other words, it cannot think critically, which is often a human's strongest problem-solving asset when an issue arises. This could very well show up in chatbots: as much as it might seem like we are talking to another human online, like “Judy B. from Kansas”, in reality we are not, and that truth could surface in a multitude of ways.

A future with GPT-3

A machine can have infinite memory and lightning-quick recall. Imagine combining that with universal language models that derive intent and context, and we have the next generation of chatbots, powered by GPT-3 and knowledge graphs, that can replicate human-like responses and deliver new levels of user experience.

This makes a potent mix of intelligence that will disrupt how chat experiences for customers and employees are built. Understanding the cogs behind the machine, the potential gears that could get stuck, and the ways in which you can apply the machine to the language in your everyday business are the first steps to integrating this new evolution of AI into the world of intelligence that we now live in.

P.S. One of these paragraphs was written by GPT-3’s AI. Can you spot which one?
