Step-by-Step Guide to Developing a Story Generator with OpenAI's GPT and Ruby on Rails
Build a story generator in 10 minutes
Last night I was working on a new tool that I'll share more about later. Fans of text-based games are going to love it when it's ready. :)
In the meantime, I've written this tutorial to discuss how we can set up an OpenAI client in Rails, send it a test prompt, and get a response.
If you have an OpenAI account, you can find your API keys here:
https://platform.openai.com/account/api-keys
If you don't have an OpenAI account yet, make one! I spent a grand total of $0.11 learning this for the first time and setting up this tutorial using text-davinci-003.
If you are interested in testing out the API, I highly recommend it.
We're going to keep our app super simple for this example's sake, but this can be expanded to be useful in any part of your Rails app.
Let's dive in:
Step 1: Initialize a new Rails app
rails new new-app
cd new-app
Step 2: Installing the OpenAI gem
Add ruby-openai to your Gemfile. That was easy. Next.
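In case it helps to see it spelled out, the Gemfile entry is just this one line (you can run bundle install now, or wait until we add one more gem in the next step):
# Gemfile
gem "ruby-openai"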
Step 3: Set up an environment variable
Add dotenv-rails to your Gemfile and run bundle install.
Create a new file in your root directory named .env and include your OpenAI token. The format looks like this, with no quotation marks around the value:
OPENAI_TOKEN=abcd1234zxcv
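Since this file holds a secret, it's worth making sure it never gets committed; one easy way is to add it to your .gitignore from the project root:
echo ".env" >> .gitignore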
Step 4: Add a Character with fields to populate
We're going to make a Character model with a name and two traits. We'll ask GPT to write us a backstory for it, which makes an easy demonstration of connecting to OpenAI, prompting it, and updating our record with the response.
Add the character using a scaffold, then run the migration:
rails generate scaffold Character name trait_1 trait_2 backstory
rails db:migrate
To be clear, when we list attributes in these generators without specifying a type, Rails treats them as strings by default.
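In other words, the generator command above is shorthand for writing the types out explicitly:
rails generate scaffold Character name:string trait_1:string trait_2:string backstory:string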
For simplicity's sake, let's also navigate to app/views/characters/_form.html.erb.
Delete the following form labels and text fields, since we don't want users to input a backstory or traits; we want the AI to generate them for us.
<div>
<%= form.label :trait_1, style: "display: block" %>
<%= form.text_field :trait_1 %>
</div>
<div>
<%= form.label :trait_2, style: "display: block" %>
<%= form.text_field :trait_2 %>
</div>
<div>
<%= form.label :backstory, style: "display: block" %>
<%= form.text_field :backstory %>
</div>
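After deleting those, the trimmed form should look roughly like this (your scaffold output may also include an error-explanation block at the top, which you can leave in place):
<%= form_with(model: character) do |form| %>
  <div>
    <%= form.label :name, style: "display: block" %>
    <%= form.text_field :name %>
  </div>
  <div>
    <%= form.submit %>
  </div>
<% end %>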
Step 5: Set up an example request, see the response
For now, let's just throw all of this into the Character model. If we wanted to extend this further, we could build a GPT client model with fields for the necessary setup, but we can worry about that in a blog post about best practices or refactoring.
5a. GPT client setup
# app/models/character.rb
require 'openai'

class Character < ApplicationRecord
  # Builds a client using the token dotenv loads from .env
  def gpt_client
    OpenAI::Client.new(access_token: ENV["OPENAI_TOKEN"])
  end

  # The completion model we'll send our prompts to
  def gpt_model
    "text-davinci-003"
  end

  # Sends a prompt to OpenAI and saves the generated text into the given attribute
  def run_prompt(attribute, prompt, max_tokens)
    response = gpt_client.completions(
      parameters: {
        model: gpt_model,
        prompt: prompt,
        max_tokens: max_tokens
      }
    )
    self.update(attribute => response["choices"][0]["text"])
  end
end
Let's break this down piece by piece.
First off, we're requiring openai so that we have access to the gem's library. That's how we're able to utilize it in the gpt_client method.
Next is our gpt_client method, which establishes a connection with OpenAI. ENV["OPENAI_TOKEN"] is how we access environment variables, which dotenv loads from the .env file we made earlier.
Self-explanatorily, gpt_model is the name of the model we want to use. You can use GPT-4 if you want, but it's more expensive. Check out API pricing here.
Now we get to the magic. The run_prompt method is designed so that we can update self, in this case our Character, by invoking run_prompt with an attribute, a prompt, and the maximum number of tokens we'd be willing to spend on the query. A token pricing calculator, separate from the API pricing, is available here.
An example of invoking this might look like the following:
Character.first.run_prompt(:trait_1, "Give me a number from 20-100", 100)
After running this in the Rails console, you would see the SQL as your Character record is updated with a new trait_1 value. If you refreshed your character's page, you'd see that reflected in the UI.
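For reference, the parsed response from the completions endpoint is a Hash shaped roughly like this (abbreviated, and the exact fields can vary by gem version):
{
  "id" => "cmpl-...",
  "object" => "text_completion",
  "model" => "text-davinci-003",
  "choices" => [
    { "text" => "\n\n42", "index" => 0, "finish_reason" => "stop" }
  ],
  "usage" => { "prompt_tokens" => 9, "completion_tokens" => 3, "total_tokens" => 12 }
}
That's why response["choices"][0]["text"] pulls out the generated text. Completions often start with leading newlines, so you may want to call .strip on that text before saving it.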
5b. Adding prompts
Now that we've got a connection to OpenAI working, it's time to start chaining methods together to build out a backstory for our character. Let's add some prompts to the model:
# Generate the traits first, then a backstory that uses them
def generate_all
  self.generate_traits
  self.generate_backstory
end

def generate_traits
  run_prompt(:trait_1, trait_1_prompt, 60)
  run_prompt(:trait_2, trait_2_prompt, 60)
end

def generate_backstory
  run_prompt(:backstory, backstory_prompt, 500)
end

def trait_1_prompt
  "Describe a positive trait about #{self.name}. Only use one word"
end

def trait_2_prompt
  "Describe a negative trait about #{self.name}. Only use one word"
end

def backstory_prompt
  "You're a legendary storycrafter. Build a backstory about a fantasy character named #{self.name}. They have #{self.trait_1} and #{self.trait_2} traits."
end
In this code we're setting up prompts to send to GPT. If you've ever typed into the chat interface for ChatGPT, these prompts are exactly the same as a prompt you might type there.
Pro tip: you can tell GPT to take on the role of an experienced author from history if you're interested in adding creative flair.
As you can see from the way we use #{self.name} in backstory_prompt, you can combine user-entered terms with your own prompts, letting users shape the input to the command. In this case we're using AI-generated traits, but a user could provide them as well.
Also noteworthy: each time we call run_prompt, we change the max_tokens we allow. This keeps costs from getting ridiculously high. Since we only want one word for these traits, 60 tokens is more than enough; if max_tokens were still set to 500, we might end up spending more than we expect.
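Before wiring this into the controller, you can try the whole chain from a Rails console (the character name here is just an example):
character = Character.create!(name: "Seraphine")
character.generate_all
character.backstory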
5c. Call our model's method
Let's call generate_all to generate AI characteristics when we save a character in characters_controller.rb:
# app/controllers/characters_controller.rb
def create
  @character = Character.new(character_params)

  respond_to do |format|
    if @character.save
      # Generate traits and a backstory right after the record saves
      @character.generate_all
      format.html { redirect_to @character, notice: "Character was successfully created." }
    else
      format.html { render :new, status: :unprocessable_entity }
    end
  end
end
Now, when you create a new character at the /characters/new route, the request will pause while the prompts run, and the page you land on will show an AI-generated backstory!
Step 6: Expanding
There are a lot of flaws in this implementation. I just wanted to build a quick example of how easy it is to get set up with OpenAI and start pulling results from LLMs into your own applications!
If you were to build this out, I would recommend creating a separate model for the GPT connection and another for prompts, to move those concerns out of the Character model and keep the logic clean.
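As a rough sketch of that direction, assuming a plain Ruby class (the name and structure here are just one possibility):
# app/models/gpt_client.rb
class GptClient
  MODEL = "text-davinci-003"

  def initialize(access_token: ENV["OPENAI_TOKEN"])
    @client = OpenAI::Client.new(access_token: access_token)
  end

  # Returns just the generated text for a given prompt
  def complete(prompt, max_tokens:)
    response = @client.completions(
      parameters: { model: MODEL, prompt: prompt, max_tokens: max_tokens }
    )
    response["choices"][0]["text"]
  end
end
Character#run_prompt could then shrink to a single line that calls GptClient.new.complete(prompt, max_tokens: max_tokens) and saves the result.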
You could also move the AI queries to a background worker like Delayed Job or Sidekiq to create a smoother experience for the user. Alternatively, a loading screen with a progress bar would be nice.
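For example, with ActiveJob (which can sit on top of Sidekiq or Delayed Job), a minimal sketch might look like this; the job class name here is made up for illustration:
# app/jobs/generate_character_job.rb
class GenerateCharacterJob < ApplicationJob
  queue_as :default

  def perform(character_id)
    Character.find(character_id).generate_all
  end
end
The controller would then call GenerateCharacterJob.perform_later(@character.id) instead of @character.generate_all, so the redirect happens immediately and the backstory fills in once the job finishes.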
Of course, projects like these always look better styled in a UI.
Final thoughts, useful links
Thanks so much for reading this all the way through!
If you run into any issues implementing this, feel free to leave a comment and I'd be happy to connect and help you troubleshoot.
Links
Check out this repo on GitHub: https://github.com/frogr/ruby-AI-blog-post
Follow me on Twitter: https://twitter.com/ex_austin
Subscribe to receive more posts like this straight into your inbox: https://austn.net/newsletter