A Simple Guide to OpenAI API with
Python
The ChatGPT API is coming soon, but you can learn how to use
the OpenAI API today!
Recently OpenAI announced that ChatGPT will be coming to their
API soon.
While we don’t know how long that will take, we can familiarize
ourselves with the OpenAI API today!
By learning the OpenAI API today, you’ll be able to access OpenAI’s
powerful models such as GPT-3 for natural language tasks, Codex to
translate natural language to code, and DALL-E to create and edit
original images.
In this guide, we’ll learn how to use the OpenAI API with Python.
First Things First — Generate Your API Key
Before we start working with the OpenAI API, we need to log in to
our OpenAI account and generate an API key.
Remember that OpenAI won’t display your secret API key again
after you generate it, so copy it and store it somewhere safe.
I’ll create an environment variable named OPENAI_API_KEY that will
contain my API key and will be used in the next sections.
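Before making any API calls, it’s worth confirming the variable is actually visible to Python. Here’s a minimal sketch (the printed messages are my own, not part of the API):

```python
import os

# Read the key from the environment; os.getenv returns None if unset.
api_key = os.getenv("OPENAI_API_KEY")
print("key found" if api_key else "OPENAI_API_KEY is not set")
```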
Exploring the OpenAI API with Python
To interact with the OpenAI API, we need to install the official
Python bindings by running the following command.
pip install openai
There are many things we can do with this API. In this guide, we’ll
do text completion, code completion, and image generation.
1. Text Completion
Text completion can be used for classification, text generation,
conversations, transformation, conversion, summarization, etc. To
work with it, we have to use the completion endpoint and give the
model a prompt. Then the model will generate text that attempts to
match the context/pattern given.
Say we want to classify the following text.
Decide whether a Tweet's sentiment is positive, neutral, or
negative.
Tweet: I didn't like the new Batman movie!
Sentiment:
Here’s how we’ll do this with OpenAI API.
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
prompt = """
Decide whether a Tweet's sentiment is positive, neutral, or negative.
Tweet: I didn't like the new Batman movie!
Sentiment:
"""
response = openai.Completion.create(
    model="text-davinci-003",
    prompt=prompt,
    max_tokens=100,
    temperature=0
)
print(response)
According to the OpenAI docs, GPT-3 models are meant to be used
with the text completion endpoint. That’s why we’re using the
model text-davinci-003 for this example.
Here’s part of the printed output.
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "Negative"
    }
  ],
  ...
}
In this example, the sentiment of the tweet was classified as
Negative.
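Since the full response is a JSON-like object, you’ll usually want just the completion text. Here’s an offline sketch using the response shape shown above (the dict is hard-coded to mirror that output, not a live API call):

```python
# Mimic the structure of the response printed above.
response = {
    "choices": [
        {"finish_reason": "stop", "index": 0, "logprobs": None, "text": "Negative"}
    ]
}

# The completion text lives in the first choice.
sentiment = response["choices"][0]["text"].strip()
print(sentiment)  # → Negative
```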
Let’s have a look at the parameters used in this example:
model: The ID of the model to use (here you can see all the
available models)
prompt: The prompt(s) to generate completions for
max_tokens: The maximum number of tokens to generate in the
completion (here you can see the tokenizer that OpenAI uses)
temperature: The sampling temperature to use. Values close to 1
give the model more risk/creativity, while values close to 0
produce more deterministic, well-defined answers.
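To build intuition for temperature, here’s a self-contained sketch (no API involved) of the standard temperature-scaled softmax that sampling is based on; the logit values are made up for illustration:

```python
import math

def softmax_with_temperature(logits, temperature):
    # Lower temperature sharpens the distribution (more deterministic);
    # higher temperature flattens it (more varied/creative output).
    scaled = [x / temperature for x in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.2))  # almost all mass on one token
print(softmax_with_temperature(logits, 1.0))  # probability spread out more
```

Note that temperature=0 in the API corresponds to simply taking the most likely token rather than dividing by zero here.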
You can also insert and edit text using the completions and edits
endpoints, respectively.
2. Code Completion
Code completion works similarly to text completion, but here we use
the Codex model to understand and generate code.
The Codex model series is a descendant of the GPT-3 series trained
on natural language and billions of lines of code. With Codex, we can
turn comments into code, rewrite code for efficiency, and more.
Let’s generate Python code using the model code-davinci-002 and
the prompt below.
Create an array of weather temperatures for Los Angeles
import os
import openai
openai.api_key = os.getenv("OPENAI_API_KEY")
response = openai.Completion.create(
    model="code-davinci-002",
    prompt="\"\"\"\nCreate an array of weather temperatures for Los Angeles\n\"\"\"",
    temperature=0,
    max_tokens=256,
    top_p=1,
    frequency_penalty=0,
    presence_penalty=0
)
print(response)
Here’s part of the printed output.
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "logprobs": null,
      "text": "\n\nimport numpy as np\n\ndef create_temperatures(n):\n    \"\"\"\n    Create an array of weather temperatures for Los Angeles\n    \"\"\"\n    temperatures = np.random.uniform(low=14.0, high=20.0, size=n)\n    return temperatures"
    }
  ],
  ...
}
If you format the generated text properly, you get this.
import numpy as np

def create_temperatures(n):
    temperatures = np.random.uniform(low=14.0, high=20.0, size=n)
    return temperatures
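If NumPy isn’t available, an equivalent version using only the standard library might look like this (my own sketch, not Codex output):

```python
import random

def create_temperatures(n):
    # Same idea as np.random.uniform(low=14.0, high=20.0, size=n),
    # but returning a plain Python list instead of a NumPy array.
    return [random.uniform(14.0, 20.0) for _ in range(n)]

temps = create_temperatures(5)
print(temps)
```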
You can do a lot more, but first I recommend testing Codex in
the Playground (here are some examples to get you started).
Also, we should follow best practices to make the most out of Codex.
We should specify the language and libraries to use in the prompt,
put comments inside of functions, etc.
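For example, a more specific prompt following those practices might look like this (the wording is hypothetical; adapt it to your task):

```python
# A Codex-style prompt that names the language and library up front
# and states the task in a comment block, as the best practices suggest.
prompt = (
    '"""\n'
    "Python 3 with NumPy\n"
    "Create an array of 7 daily high temperatures (in Celsius) for Los Angeles\n"
    '"""'
)
print(prompt)
```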
3. Image Generation
We can generate images using DALL-E models. To do so, we have to
use the image generation endpoint and provide a text prompt.
Here’s the prompt we’ll use (remember that the more detail we give
in the prompt, the more likely we are to get the result we want).
A fluffy white cat with blue eyes sitting in a basket of flowers,
looking up adorably at the camera
import os
import openai

openai.api_key = os.getenv("OPENAI_API_KEY")

response = openai.Image.create(
    prompt="A fluffy white cat with blue eyes sitting in a basket of flowers, looking up adorably at the camera",
    n=1,
    size="1024x1024"
)
image_url = response['data'][0]['url']
print(image_url)
After opening the printed URL, I got the following image.
Source: OpenAI
But that’s not all! You can also edit an image and generate a
variation of a given image using the image edits and image
variations endpoints.
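Since the returned URL is temporary, you may want to download the image right away. Here’s a standard-library sketch (the URL and filename below are placeholders, not real values):

```python
import urllib.request

def save_image(url, path):
    # Fetch the image from the signed URL and write it to disk.
    urllib.request.urlretrieve(url, path)

# Placeholder values — substitute the image_url returned by the API.
image_url = "https://example.com/generated-image.png"
# save_image(image_url, "cat.png")  # uncomment once you have a real URL
```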
That’s it for now! If you want to explore more things you can do
with the OpenAI API, check out its documentation.