OpenAI Completion API

Vivek Singh

OpenAI’s Completions API is a service that allows developers to integrate language models into their applications. It uses advanced AI models, such as GPT-3.5 or GPT-4, to generate natural-language text based on user-provided prompts or instructions. The API takes an input prompt and returns a continuation, or “completion”, of that text, making it useful for a variety of tasks, including:

  1. Text Generation: Creating human-like text based on a given prompt, such as writing essays, articles, or stories.
  2. Summarization: Condensing large pieces of text into shorter, meaningful summaries.
  3. Chatbots and Virtual Assistants: Powering conversational interfaces that understand and respond to user queries.
  4. Code Generation and Debugging: Helping developers write code or identify issues in existing code.
  5. Text Classification: Categorizing text, such as identifying sentiment, spam detection, or other classification tasks.
  6. Data Extraction: Pulling structured information from unstructured text, such as extracting dates, names, or other details.
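To make a few of these tasks concrete, the sketch below shows how prompts might be framed for summarization and sentiment classification. The function names and wording of the templates are illustrative assumptions, not part of OpenAI’s API:

```python
# Hypothetical prompt templates for two of the tasks above (illustrative only)

def summarization_prompt(text: str) -> str:
    # Ask the model to condense the input into a single sentence
    return f"Summarize the following text in one sentence:\n\n{text}"

def sentiment_prompt(text: str) -> str:
    # Frame classification as a completion: the model continues after "Sentiment:"
    return (
        "Classify the sentiment of the following review as Positive or Negative.\n"
        f"Review: {text}\n"
        "Sentiment:"
    )

print(sentiment_prompt("The battery life is fantastic."))
```

Ending the classification prompt with `Sentiment:` nudges the model to complete with just the label, which keeps the response short and easy to parse.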

Using this API

import requests

# Set up the API key and endpoint
api_key = 'YOUR_OPENAI_API_KEY'
url = 'https://api.openai.com/v1/completions'

# Define the prompt
prompt = "Explain the significance of the Apollo 11 mission."

# Set up the request headers and payload
headers = {
    'Authorization': f'Bearer {api_key}',
    'Content-Type': 'application/json'
}

data = {
    'model': 'gpt-3.5-turbo-instruct',  # 'text-davinci-003' has been retired; this is a current completions model
    'prompt': prompt,
    'max_tokens': 100,   # Limits the length of the generated response
    'temperature': 0.7,  # Controls randomness (higher value = more creative response)
    'n': 1,              # Number of completions to generate
}

# Make the API request
response = requests.post(url, headers=headers, json=data)

# Extract and print the generated response
if response.status_code == 200:
    completion = response.json()['choices'][0]['text'].strip()
    print(f"Generated Response: {completion}")
else:
    print(f"Error: {response.status_code}, {response.text}")
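A raw `requests.post` call like the one above can fail transiently (rate limits, network hiccups), so production code usually wraps it in a retry loop. Below is a minimal sketch; the helper name and structure are my own, not part of OpenAI’s API. `send` stands in for something like `lambda: requests.post(url, headers=headers, json=data)`:

```python
import time

def post_with_retries(send, attempts=3, backoff=1.0):
    """Call `send` (a zero-argument function that issues the request),
    retrying on failure with exponential backoff between attempts."""
    for attempt in range(attempts):
        try:
            return send()
        except Exception:
            if attempt == attempts - 1:
                raise  # Out of retries: surface the last error
            time.sleep(backoff * (2 ** attempt))
```

In real code you would narrow the `except` clause to the errors worth retrying (e.g. timeouts or HTTP 429 responses) rather than catching everything.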

Points to remember

  1. A low temperature value produces more consistent, near-deterministic output; higher values make responses more varied and creative.
  2. When building generative applications that serve multiple users concurrently, maintain a separate conversation context for each user. Keeping contexts isolated ensures one user’s history never bleeds into another’s responses and that each conversation remains coherent.
  3. The context window is the maximum number of tokens (prompt plus completion) an LLM can handle in a single API call. Different LLMs have different context windows, depending on their architecture and how they were trained.
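Points 2 and 3 can be combined in a small sketch: one history per user, trimmed to fit an assumed context window. The names, the limit, and the whitespace-based token count are all simplifying assumptions (real models count tokens with a proper tokenizer):

```python
from collections import defaultdict

CONTEXT_LIMIT = 50  # assumed limit, in approximate (whitespace-split) tokens

conversations = defaultdict(list)  # user_id -> list of utterances

def add_turn(user_id, text):
    """Append a turn to one user's isolated history, dropping the oldest
    turns until the approximate token count fits the context window."""
    history = conversations[user_id]
    history.append(text)
    while len(history) > 1 and sum(len(t.split()) for t in history) > CONTEXT_LIMIT:
        history.pop(0)

def build_prompt(user_id):
    # Only this user's turns go into the prompt sent to the API
    return "\n".join(conversations[user_id])
```

Because each user’s turns live under their own key, one user’s long conversation being trimmed has no effect on anyone else’s context.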

References

https://platform.openai.com/docs/api-reference/introduction
