A Simple Implementation to Use OpenAI Chat Completions in Jupyter Notebooks
While working on a research project in JupyterLab, I found myself jumping back and forth between ChatGPT and the notebook far too often. My research involves prompt engineering, so capturing the ongoing dialogue is important. That friction prompted me to write a small Python class that lets me conduct my full workflow in Jupyter.
Here’s how you can configure it in your notebook, too!
Install Packages, Load Environment Variables
First, install the required dependencies in your Jupyter Notebook: the OpenAI library for interacting with OpenAI models, and python-dotenv for handling environment variables securely.
# Install required modules and set required tokens
!pip install openai python-dotenv
# Load the .env file in a Jupyter notebook:
%load_ext dotenv
%dotenv
I use the python-dotenv package to manage environment variables in a local .env file. This is an important step to protect your OpenAI API key and organization ID when sharing the notebook or committing it to Git.
At a minimum, your .env file needs to contain the following values:
OPENAI_ORG_ID=your-epic-org-id
OPENAI_API_KEY=your-string-de-jibberish
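Before calling the API, it can be worth sanity-checking that the variables actually made it into the environment. A minimal sketch (the require_env helper is my own, not part of python-dotenv):

```python
import os

def require_env(*names):
    """Return the values of the given environment variables, raising if any is unset."""
    missing = [name for name in names if not os.environ.get(name)]
    if missing:
        raise EnvironmentError(f"Missing environment variables: {', '.join(missing)}")
    return [os.environ[name] for name in names]

# In the notebook you would call:
# require_env('OPENAI_API_KEY', 'OPENAI_ORG_ID')
```

A loud failure here beats a cryptic authentication error from the API later on.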
Now, we can create a Python class that will manage our chat sessions.
Adding the ChatSession Class
The ChatSession class provides an easy way to create and manage chat completions with any chat-enabled model. Copy and paste the following class into a code cell and run it.
import os
import json
import openai

openai.api_key = os.environ.get('OPENAI_API_KEY')
openai.organization = os.environ.get('OPENAI_ORG_ID')


class ChatSession:
    """
    Class for running a chat session.

    Create and use a new chat by running:
        chat = ChatSession('new-chat-name', 'system-role-message')

    Load an existing chat by running:
        chat = ChatSession('existing-chat-name')
    """

    model = 'gpt-3.5-turbo-16k-0613'

    def __init__(self, chat_name, system_role=None):
        self.name = chat_name
        self.path = f"./chats/{chat_name}.json"
        if system_role is not None:
            with open(self.path, "w") as outfile:
                outfile.write(json.dumps([system_role], indent=4))
            print(f"Chat {self.path} created 💬")
        else:
            print(f"Active chat set to {self.name} 💬")

    def chat(self, message):
        # Retrieve the chat history
        with open(self.path) as openchat:
            chat_history = json.load(openchat)
        # Append the user message before requesting a completion
        chat_history.append({'role': 'user', 'content': message})
        # Run the completion
        chat_completion = openai.ChatCompletion.create(model=self.model, messages=chat_history)
        # Save the assistant's reply to the chat history
        chat_history.append(chat_completion.choices[0].message)
        # Handle a 'length' finish reason by requesting a continuation
        if chat_completion.choices[0].finish_reason == 'length':
            chat_completion = openai.ChatCompletion.create(model=self.model, messages=chat_history)
            chat_history.append(chat_completion.choices[0].message)
        # Persist the updated chat history
        with open(self.path, "w") as outfile:
            json.dump(chat_history, outfile)
        # Print the latest exchange
        print(chat_history[-2:])
        # Return the completion so it can be passed to other chats
        return chat_completion
Next, create a directory named chats in the same directory as your notebook. This will be used to store the chat history files.
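If you'd rather not create the folder by hand, a one-liner in a notebook cell does the same thing, and is safe to re-run:

```python
import os

# Create the chats directory next to the notebook; a no-op if it already exists
os.makedirs('./chats', exist_ok=True)
```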
Using the Class
You can create a new chat session by instantiating the class, passing in a chat name and a system role definition.
c1 = ChatSession("philosophical-chat", { "role": "system", "content": "You're an extremely amateur philosopher." })
You can interact with the chat using the .chat() method, which takes your message as input and generates a response from the model.
c1.chat("Do memories exist even if you forget them?")
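Because every exchange is persisted as JSON, you can inspect the running transcript at any time without re-querying the model. A quick sketch (the load_history helper is my own; the file path assumes a chat named philosophical-chat already exists on disk):

```python
import json

def load_history(path):
    """Return the list of message dicts stored in a chat history file."""
    with open(path) as f:
        return json.load(f)

# For example, after a few exchanges:
# for msg in load_history('./chats/philosophical-chat.json'):
#     print(f"{msg['role']}: {msg['content'][:80]}")
```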
One unique opportunity that bringing chat into your notebook opens up is running multiple chats concurrently and letting one chat instance inform another. For example:
# Create an instance of ChatSession for the first chat. We don't pass a system role, so it loads an existing chat.
chat_1 = ChatSession('philosophical-chat')
# In the first chat, we want to create a summary of our conversation to bring a new participant up to speed.
completion = chat_1.chat('We are going to bring a new participant into the conversation. Summarize our full chat in 250 words to give the second participant prompt context on our conversation.')
# Create an instance of ChatSession for the second chat. This time, we provide a system role to create a new chat.
chat_2 = ChatSession('free-spirit-chat', {
"role": "system",
"content": "You are a free spirit."
})
# We take the summary provided by chat_1 (stored in 'completion'), pass it to chat_2, and ask for questions based on the summary.
completion_1 = chat_2.chat(completion.choices[0].message.content + "\n\n What questions do you have, based on the summary provided?")
# The questions asked by chat_2 (stored in 'completion_1') are then returned to chat_1.
chat_1.chat(completion_1.choices[0].message.content)
In this code snippet, chat_1 and chat_2 are working together.
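The handoff above generalizes to a simple relay loop: each turn, one session's reply becomes the other's prompt. A minimal sketch of that pattern (the relay helper is my own; ask_a and ask_b stand in for thin wrappers around calls like chat_1.chat that return the reply text):

```python
def relay(ask_a, ask_b, opener, turns=2):
    """Alternate a message between two chat callables, starting with ask_a.

    ask_a and ask_b each take a message string and return a reply string.
    Returns the transcript as a list of (speaker, message) tuples.
    """
    transcript = []
    message = opener
    for turn in range(turns):
        # Even turns go to participant A, odd turns to participant B
        speaker, ask = ('a', ask_a) if turn % 2 == 0 else ('b', ask_b)
        message = ask(message)
        transcript.append((speaker, message))
    return transcript
```

With real sessions, the wrappers would pull the text out of each completion, e.g. lambda m: chat_1.chat(m).choices[0].message.content.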
Wrap Up
Hopefully, you found this helpful! There are a thousand ways to add complexity and features to the class to make it more powerful. Should you feel inclined, drop your ideas in a comment!