98 LEC: Working with OpenAI’s API
This module introduces the basics of interacting with OpenAI’s API from R. We’ll explore how to make API calls, handle responses, and integrate AI capabilities into data science workflows.
98.1 Getting Started
First, we need to load the required packages:
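Based on the functions used throughout this module, we need httr for the HTTP requests and jsonlite for parsing the JSON responses:

library(httr)     # POST(), add_headers(), content_type_json(), content()
library(jsonlite) # fromJSON()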
98.1.1 API Authentication
To use OpenAI’s API, you’ll need an API key. Like we learned with other APIs, it’s important to keep this secure:
# Store API key securely (NEVER commit to Git!)
openai_api_key <- readLines("path/to/api_key.txt")
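Alternatively, a common approach is to store the key in an environment variable (for example, a line `OPENAI_API_KEY=...` in your user-level .Renviron file) and read it with Sys.getenv():

# Alternative: read the key from an environment variable
# (assumes OPENAI_API_KEY is set, e.g. in ~/.Renviron)
openai_api_key <- Sys.getenv("OPENAI_API_KEY")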
98.1.2 Making API Requests
The core workflow involves:
- Constructing the API request
- Sending it to OpenAI’s endpoint
- Processing the response
Next, we define a basic function to generate text using OpenAI’s API. The function takes a prompt as input, sends it to the chat completions endpoint, and returns the parsed response:
generate_text <- function(prompt) {
  response <- POST(
    # curl https://api.openai.com/v1/chat/completions
    url = "https://api.openai.com/v1/chat/completions",
    # -H "Authorization: Bearer $OPENAI_API_KEY"
    add_headers(Authorization = paste("Bearer", openai_api_key)),
    # -H "Content-Type: application/json"
    content_type_json(),
    # -d '{
    #   "model": "gpt-3.5-turbo",
    #   "messages": [{"role": "user", "content": "What is a banana?"}]
    # }'
    encode = "json",
    body = list(
      model = "gpt-3.5-turbo",
      messages = list(list(role = "user", content = prompt))
    )
  )
  str_content <- content(response, "text", encoding = "UTF-8")
  parsed <- fromJSON(str_content)
  # return(parsed$choices$message$content) would return only the generated text
  return(parsed)
}
98.2 Example Usage and Handling the Response
Now that we’ve defined our generate_text() function, let’s test it by sending a request to OpenAI’s API and working with the response.
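98.2.1 Step 1: Define a Prompt and Send the Request
We start by defining a prompt and passing it to generate_text(). A prompt along these lines (the exact wording here is illustrative) produces the response shown below:

# Ask the model for the steps of a data science workflow
# (this particular phrasing is an illustrative assumption)
prompt <- "List the steps of a typical data science workflow."
generated_text <- generate_text(prompt)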
98.2.2 Step 2: Examine the Raw API Response
When we call the generate_text(prompt) function, OpenAI’s API returns a structured response in JSON format, which R reads as a list. This response contains multiple components, but the most important part is the generated text.
Let’s print the raw response to see its structure.
print(generated_text)
#> $id
#> [1] "chatcmpl-DAJn3LdqzSqEoRQ0VCmtNxvemI21L"
#>
#> $object
#> [1] "chat.completion"
#>
#> $created
#> [1] 1771351725
#>
#> $model
#> [1] "gpt-3.5-turbo-0125"
#>
#> $choices
#> index message.role
#> 1 0 assistant
#> message.content
#> 1 1. Define the problem: Clearly state the problem or question that needs to be answered using data and analytics.\n\n2. Data collection: Gather relevant data from various sources, such as databases, APIs, surveys, or web scraping.\n\n3. Data cleaning and preparation: Clean and preprocess the data to ensure its quality and suitability for analysis, including handling missing values, outliers, and inconsistencies.\n\n4. Exploratory data analysis: Explore the data visually and statistically to gain insights, identify patterns, and detect relationships between variables.\n\n5. Feature engineering: Create new features or transform existing ones to optimize model performance and improve predictive power.\n\n6. Model selection: Choose the most appropriate machine learning model or algorithm based on the problem type and data characteristics.\n\n7. Model training: Train the selected model on the prepared data to learn patterns and relationships.\n\n8. Model evaluation: Evaluate the model's performance using metrics such as accuracy, precision, recall, or ROC curve to assess its effectiveness.\n\n9. Model tuning: Fine-tune the model by adjusting hyperparameters and optimizing its performance through techniques like cross-validation or grid search.\n\n10. Model deployment: Deploy the trained model into production or make predictions on new data to solve the problem and deliver insights or recommendations.\n\n11. Model monitoring and maintenance: Continuously monitor the model's performance, retrain it with new data if necessary, and update it to ensure it remains effective over time.
#> message.refusal message.annotations logprobs finish_reason
#> 1 NA NULL NA stop
#>
#> $usage
#> $usage$prompt_tokens
#> [1] 19
#>
#> $usage$completion_tokens
#> [1] 284
#>
#> $usage$total_tokens
#> [1] 303
#>
#> $usage$prompt_tokens_details
#> $usage$prompt_tokens_details$cached_tokens
#> [1] 0
#>
#> $usage$prompt_tokens_details$audio_tokens
#> [1] 0
#>
#>
#> $usage$completion_tokens_details
#> $usage$completion_tokens_details$reasoning_tokens
#> [1] 0
#>
#> $usage$completion_tokens_details$audio_tokens
#> [1] 0
#>
#> $usage$completion_tokens_details$accepted_prediction_tokens
#> [1] 0
#>
#> $usage$completion_tokens_details$rejected_prediction_tokens
#> [1] 0
#>
#>
#>
#> $service_tier
#> [1] "default"
#>
#> $system_fingerprint
#> NULL
As you can see, the response is a nested list containing various metadata (e.g., request ID, model name, creation time), the AI-generated response (inside $choices$message$content), token usage information (inside $usage$total_tokens), and more.
98.2.3 Step 3: Extract the AI-Generated Text
Since the response contains both metadata and content, we need to extract only the generated text. The key part of the response is stored in:
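# With fromJSON()'s default simplification, $choices is a data frame,
# so the generated text lives in $choices$message$content
ai_response <- generated_text$choices$message$content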
Now, let’s print the AI-generated text:
print(ai_response)
#> [1] "1. Define the problem: Clearly state the problem or question that needs to be answered using data and analytics.\n\n2. Data collection: Gather relevant data from various sources, such as databases, APIs, surveys, or web scraping.\n\n3. Data cleaning and preparation: Clean and preprocess the data to ensure its quality and suitability for analysis, including handling missing values, outliers, and inconsistencies.\n\n4. Exploratory data analysis: Explore the data visually and statistically to gain insights, identify patterns, and detect relationships between variables.\n\n5. Feature engineering: Create new features or transform existing ones to optimize model performance and improve predictive power.\n\n6. Model selection: Choose the most appropriate machine learning model or algorithm based on the problem type and data characteristics.\n\n7. Model training: Train the selected model on the prepared data to learn patterns and relationships.\n\n8. Model evaluation: Evaluate the model's performance using metrics such as accuracy, precision, recall, or ROC curve to assess its effectiveness.\n\n9. Model tuning: Fine-tune the model by adjusting hyperparameters and optimizing its performance through techniques like cross-validation or grid search.\n\n10. Model deployment: Deploy the trained model into production or make predictions on new data to solve the problem and deliver insights or recommendations.\n\n11. Model monitoring and maintenance: Continuously monitor the model's performance, retrain it with new data if necessary, and update it to ensure it remains effective over time."Ok, so that wasn’t really readable. Let’s try to format it a bit better:
1. Define the problem: Clearly state the problem or question that needs to be answered using data and analytics.
2. Data collection: Gather relevant data from various sources, such as databases, APIs, surveys, or web scraping.
3. Data cleaning and preparation: Clean and preprocess the data to ensure its quality and suitability for analysis, including handling missing values, outliers, and inconsistencies.
4. Exploratory data analysis: Explore the data visually and statistically to gain insights, identify patterns, and detect relationships between variables.
5. Feature engineering: Create new features or transform existing ones to optimize model performance and improve predictive power.
6. Model selection: Choose the most appropriate machine learning model or algorithm based on the problem type and data characteristics.
7. Model training: Train the selected model on the prepared data to learn patterns and relationships.
8. Model evaluation: Evaluate the model’s performance using metrics such as accuracy, precision, recall, or ROC curve to assess its effectiveness.
9. Model tuning: Fine-tune the model by adjusting hyperparameters and optimizing its performance through techniques like cross-validation or grid search.
10. Model deployment: Deploy the trained model into production or make predictions on new data to solve the problem and deliver insights or recommendations.
11. Model monitoring and maintenance: Continuously monitor the model’s performance, retrain it with new data if necessary, and update it to ensure it remains effective over time.
98.2.4 Step 4: Understanding Token Usage
Since OpenAI charges based on token usage, it’s useful to monitor how many tokens are used per request. The API response includes:
- usage$prompt_tokens → Tokens in the input prompt
- usage$completion_tokens → Tokens generated by the model
- usage$total_tokens → The total token count for billing
To check token usage:
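generated_text$usage$prompt_tokens
#> [1] 19
generated_text$usage$completion_tokens
#> [1] 284
generated_text$usage$total_tokens
#> [1] 303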
98.3 Error Handling
Like we’ve seen with other APIs, it’s important to handle errors gracefully: calls can fail due to network issues, invalid requests, or rate limits. To ensure our script doesn’t crash, we can wrap API calls in tryCatch():
generate_text_safe <- function(prompt) {
  tryCatch(
    {
      generate_text(prompt)
    },
    error = function(e) {
      warning("API call failed: ", e$message)
      return(NULL)
    }
  )
}
Now, we can use generate_text_safe() to handle errors. If an error occurs, the function will return NULL and print a warning message.
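For example (the prompt here is purely illustrative):

result <- generate_text_safe("Define overfitting in one sentence.")
if (is.null(result)) {
  # The request failed; the warning above explains why
  message("No response received; continuing without it.")
}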
98.4 Processing Multiple Requests
When working with multiple prompts, we can use purrr::map() to process them efficiently:
library(purrr)
prompts <- c(
  "Define p-value",
  "Explain Type I error",
  "What is statistical power?"
)
responses <- map(prompts, generate_text_safe)
This code generates text for each prompt in the prompts vector. If an error occurs, the corresponding response will be NULL. After running this code, we can examine the responses and handle any errors. I’ve included a table below to display the responses.
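Here is a sketch of how such a table could be assembled with tibble and purrr, using the parsed-list structure we saw earlier (the column choices and helper code are one reasonable option, not the only one):

library(tibble)

response_table <- tibble(
  prompt = prompts,
  # Pull out the generated text, guarding against failed (NULL) responses
  response = map_chr(responses, function(r) {
    if (is.null(r)) NA_character_ else r$choices$message$content[1]
  }),
  total_tokens = map_dbl(responses, function(r) {
    if (is.null(r)) NA_real_ else r$usage$total_tokens
  }),
  model = map_chr(responses, function(r) {
    if (is.null(r)) NA_character_ else r$model
  }),
  # $created is a Unix timestamp; convert it to a date-time
  completed_at = as.POSIXct(
    map_dbl(responses, function(r) {
      if (is.null(r)) NA_real_ else r$created
    }),
    origin = "1970-01-01"
  )
)
response_table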
As you can see, the table displays the prompts, AI-generated responses, token usage, model name, and completion time for each request. This information can help us monitor the API usage and response quality.
98.4.1 Rate Limiting
OpenAI has rate limits we need to respect. We can add delays between requests to avoid exceeding these limits. Here’s a throttled wrapper around generate_text_safe():
generate_text_throttled <- function(prompt) {
  Sys.sleep(1) # Wait 1 second between requests
  generate_text_safe(prompt)
}
This function adds a 1-second delay between requests to avoid exceeding OpenAI’s rate limits. You can adjust the delay as needed based on the API’s rate limits.
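It works as a drop-in replacement in the map() call from before:

responses <- map(prompts, generate_text_throttled)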
98.5 Conclusion
In this guide, we’ve covered how to generate text from R using OpenAI’s chat completions API. We defined a function to interact with the API, handled responses, extracted the generated text, monitored token usage, and processed multiple requests. We also discussed error handling, rate limiting, and best practices for working with the API. By following these steps, you can effectively use OpenAI’s API to generate text in R for various applications. For the curious: yes, these prompts and responses are generated using the OpenAI API every time this notebook is rendered.