GPT-3, the Human Algorithm From OpenAI


Created by OpenAI, a research group co-founded by Elon Musk, GPT-3 is an artificial intelligence program that uses machine learning to generate human-like text on almost any topic, in a wide variety of writing styles.

First released in 2020 as a private beta, Generative Pre-trained Transformer 3, or GPT-3 for short, is a huge leap forward in creating content written entirely by a computer. It is the third iteration of the software, and access is currently subscription-based, with developers reaching the AI through a cloud-based API.

Below we’ll take a look at GPT-3 in a little more detail.

How Does GPT-3 Work?

In a nutshell, GPT-3 uses its pre-learned judgment about how language works, built from vast amounts of text found on the internet, to predict word by word what should come next, producing original content rather than copying or rearranging what it has read.

It achieves this by having been “fed” around 570 GB of text scraped from the internet via the Common Crawl dataset. In addition to scraped content, its training data included curated sources such as Wikipedia.

With this training data, GPT-3 gives candidate words a “weight”, a probability score, and judges which word should come next given the user’s prompt in order to produce its text output.
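To make the idea of word “weights” concrete, here is a toy sketch (not OpenAI’s actual code; the vocabulary and scores are invented) of how a language model turns raw scores into probabilities and picks the next word:

```python
import math

def softmax(scores):
    # Convert raw scores into probabilities that sum to 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores a model might assign for the prompt "the cat is on the"
vocab = ["mat", "roof", "moon", "banana"]
scores = [4.2, 2.1, 0.3, -1.5]  # higher score = better fit with the prompt

probs = softmax(scores)
for word, p in zip(vocab, probs):
    print(f"{word}: {p:.3f}")

# Greedy choice: take the highest-probability word ("mat").
print("Next word:", vocab[probs.index(max(probs))])
```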

GPT-3 Is Pre-Trained

What sets GPT-3 apart from other AIs is that the software has been “pre-trained” on roughly 499 billion tokens (words and word fragments), training that reportedly set OpenAI back to the tune of $4.6 million in computing costs. Though costly, this training model is vastly larger than anything that has come before it, including Google’s BERT. It allows GPT-3 to refer back to what it already knows about language and make unsupervised decisions on word choice.

For example, GPT-3 may have been fed the beginning of a sentence: “the cat is on the”. It would then rummage through what it already knows and attempt to predict the word that actually came next: “mat”. Each guess is checked against the real text, and the model adjusts accordingly.
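As a rough illustration of how those guesses are scored, here is another toy sketch (the probabilities are invented, and real systems work over huge vocabularies): the model is penalized based on how little probability it gave to the word that actually appeared.

```python
import math

def loss(predicted_probs, actual_next_word):
    # Cross-entropy: small when the true word got high probability,
    # large when it got low probability.
    return -math.log(predicted_probs[actual_next_word])

# Hypothetical model output for the context "the cat is on the"
predicted = {"mat": 0.70, "roof": 0.20, "moon": 0.09, "banana": 0.01}

print(loss(predicted, "mat"))   # ~0.36: a good prediction, small penalty
print(loss(predicted, "moon"))  # ~2.41: a poor prediction, large penalty
```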


GPT-3 and Machine Learning

Over millions upon millions of attempts, the software begins to “learn” what is right and wrong. Over the course of this process, the numerical “weights” inside the model are nudged up or down, signals that let GPT-3 know it’s on the right track.

And it’s this “weighting” that is causing such a stir in the world of AI: GPT-3 uses a whopping 175 billion parameters (learned weights) to formulate sentences. That is vastly more than the previous iteration, GPT-2, which had only 1.5 billion.

When it comes to machine learning and producing content like text, more really is better.
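To give a feel for what adjusting one of those billions of weights looks like, here is a minimal single-weight sketch (the error curve is invented; real models measure prediction error on text): each training step nudges the weight in whichever direction reduces the error.

```python
weight = 0.0
learning_rate = 0.1

def error(w):
    # Invented error curve with its minimum at w = 3.0.
    return (w - 3.0) ** 2

def gradient(w):
    # Slope of the error curve at w; tells us which way is "downhill".
    return 2 * (w - 3.0)

for step in range(25):
    weight -= learning_rate * gradient(weight)  # move downhill a little

print(round(weight, 3))  # approaches 3.0, the weight with the least error
```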

What Is GPT-3 Useful For?

As discussed, the key selling point of GPT-3 is that being pre-trained takes much of the complexity out of machine learning. The upshot is that content produced by GPT-3 sounds far more natural than any AI-generated text that has come before it. The practical applications of this are numerous.

Ease Writing Frustration

For those who find it difficult to write content, AI like GPT-3 offers a way to get their ideas out there.

By simply giving GPT-3 a few prompts, an entire article can be drafted quickly and accurately, and the AI’s language model can even adjust the content’s tone and complexity.

For people with dyslexia or other disabilities that make writing difficult, software like GPT-3 offers a way to quickly express concepts in concise prose.
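As a sketch of what this looks like for a developer, here is roughly how a draft could be requested through OpenAI’s Python client as it worked during the GPT-3 beta (the prompt, engine name, and parameter values are illustrative):

```python
import openai

openai.api_key = "YOUR_API_KEY"  # issued with beta access

response = openai.Completion.create(
    engine="davinci",  # the largest GPT-3 model in the beta
    prompt="Write a short, friendly blog post explaining how tides work.\n",
    max_tokens=250,    # upper bound on the length of the generated draft
    temperature=0.7,   # higher values give more varied, creative wording
)

print(response["choices"][0]["text"])
```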

Human and Machine Interaction

As well as making writing easier, GPT-3 and future iterations will most likely see a lot of use in customer support.

Today, we can immediately tell when we are speaking to a chatbot, and we simply wait for a human to connect. With GPT-3’s natural-sounding language, companies may be able to resolve customer issues entirely with “friendly” AI that has immediate access to information, expediting things considerably.
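Because GPT-3 simply continues whatever text it is given, a support bot can be steered with a transcript-style prompt. A sketch (the exchange and wording are entirely invented) might look like this:

```python
# Seeding the prompt with an example exchange steers GPT-3 toward
# helpful, on-brand replies; the model then continues the transcript.
prompt = """The following is a conversation with a friendly support agent.

Customer: I was charged twice this month.
Agent: Sorry about that! I've flagged the duplicate charge for a refund;
you should see it back on your card within 3-5 business days.

Customer: My package never arrived.
Agent:"""

# Sending this prompt to the API (as in the earlier sketch) returns the
# agent's next reply, and the transcript grows turn by turn.
```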

Videogames may also see AI like GPT-3 integrated into virtual experiences in the future, allowing a level of immersion not currently possible with prewritten scripts.

Generate Code

GPT-3 has also demonstrated an ability to write in various programming languages based on natural-language requests. The Twitter user who discovered this described it as “mind-blowing”.

The possibilities for this are quite significant. Whereas you currently need a trained developer to write software or a website for you, with software like GPT-3, someone could have an app up and running in no time without writing a single line of code.
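As an illustrative sketch (the request is invented, and the output shown is only the kind of thing GPT-3 might plausibly produce), a plain-English request can be framed as the start of a code file for the model to complete:

```python
# Prompt sent to GPT-3: the start of a Python file for it to continue.
prompt = (
    "# Python 3\n"
    "# A function that returns the n-th Fibonacci number.\n"
)

# A completion GPT-3 might plausibly return:
#
# def fibonacci(n):
#     a, b = 0, 1
#     for _ in range(n):
#         a, b = b, a + b
#     return a
```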

Problems

It’s not all smooth sailing, though. GPT-3, while a remarkable feat of software engineering, is not truly intelligent, and it makes mistakes, sometimes big ones.

One issue some users have found is that the AI will come out with toxic ramblings like an out-of-touch grandpa. It seems that when you rely on the internet to train an AI, you shouldn’t be surprised when it starts spouting hateful, sexist, and racist comments. OpenAI says it is aware of these issues and is attempting to rectify them with measures that allow GPT-3 and future iterations to judge how sensitive or controversial a generated sentence is.
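In practice, that kind of safeguard means screening output before it reaches the user. Here is a purely hypothetical sketch (the keyword check is a crude invented stand-in; OpenAI’s beta offered a dedicated content-filter model for this job):

```python
def looks_unsafe(text):
    # Crude invented stand-in; a real filter is another trained model
    # that scores how sensitive or unsafe a passage is.
    flagged_terms = {"hateful", "sexist", "racist"}
    lowered = text.lower()
    return any(term in lowered for term in flagged_terms)

generated = "Here is a friendly, helpful reply."
print("[withheld by safety filter]" if looks_unsafe(generated) else generated)
```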

Another issue is the social impact that an AI like GPT-3 will have. While the idea of software automatically doing the legwork of writing text or code for us sounds good, the impact on jobs has more than a few people worried.

Similarly, while students everywhere are licking their lips at the prospect of an AI writing their homework for them, the issue of authenticity will only become more serious going forward. Some of GPT-3’s samples are already good enough to fool readers and could easily pass as a mediocre essay.

OpenAI claims its aim is to better humanity through the use of AI, and, despite a few teething problems, GPT-3 appears to be a stepping stone to bigger and better things.