This serverless function needs to serve as an API for my frontend web code, allowing it to call the "generateStory" function. The generateStory function takes parameters for the Story Theme and Story Characters and then produces a streamed AI output from OpenAI.
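To make the rest of this post easier to follow, here's a rough sketch of what the request handling looks like in Go. The field names and handler shape are illustrative rather than copied from my actual code:

```go
package story

import (
	"encoding/json"
	"net/http"
)

// generateStoryRequest is a sketch of the payload the frontend sends; the
// real field names in my code may differ.
type generateStoryRequest struct {
	Theme      string   `json:"theme"`      // Story Theme chosen in the app
	Characters []string `json:"characters"` // Story Characters chosen in the app
}

// generateStory decodes the request and then streams the story text back to
// the caller as it is generated, rather than returning one complete response.
func generateStory(w http.ResponseWriter, r *http.Request) {
	var req generateStoryRequest
	if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
		http.Error(w, "bad request", http.StatusBadRequest)
		return
	}
	// ... call OpenAI with req.Theme and req.Characters and stream the output ...
}
```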
Here’s a rundown of how I tackled this step:
1. Preparing the Go Environment
First things first, I needed to set up Go on my local machine. This guide came in handy, and ChatGPT provided some essential pointers as well.
2. Creating a Go Project
With Go up and running, I followed the instructions to create a new Go project on my local computer. I then created a new GitHub repository to host the server code and connected it as a remote repo.
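A new Go project for this kind of server doesn't need much to get started. This is the kind of skeleton I mean; the route and port here are illustrative, not necessarily what I used:

```go
// main.go: a bare HTTP server to build the real story logic on top of.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	http.HandleFunc("/generate-story", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "generate-story placeholder")
	})
	log.Println("listening on :8080")
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```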
3. Porting the Code
To start writing code, I had to port the TypeScript code from my first-pass implementation to Go. I could have implemented the server in TypeScript/Node, but I wanted to give Go a try just to learn a bit of it. I fed the TypeScript code to ChatGPT and asked it to rewrite it in Go. It was a huge time saver, although there were some fixes I had to make due to recent changes in the third-party OpenAI Go library.
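To give a flavor of what the ported code looks like, here's a minimal sketch of the streaming call, assuming the sashabaranov/go-openai client; my actual code and library version may differ in the details, and those version changes are exactly where the fixes came in:

```go
package story

import (
	"errors"
	"fmt"
	"io"
	"net/http"
	"os"

	openai "github.com/sashabaranov/go-openai"
)

// streamStory forwards the model output to the HTTP response as it arrives.
// Prompt building and error handling are trimmed down for the sketch.
func streamStory(w http.ResponseWriter, r *http.Request, theme string, characters []string) error {
	client := openai.NewClient(os.Getenv("OPENAI_API_KEY"))

	prompt := fmt.Sprintf("Write a short story with the theme %q featuring these characters: %v.", theme, characters)

	stream, err := client.CreateChatCompletionStream(r.Context(), openai.ChatCompletionRequest{
		Model:  openai.GPT3Dot5Turbo,
		Stream: true,
		Messages: []openai.ChatCompletionMessage{
			{Role: openai.ChatMessageRoleUser, Content: prompt},
		},
	})
	if err != nil {
		return err
	}
	defer stream.Close()

	flusher, _ := w.(http.Flusher)
	for {
		resp, err := stream.Recv()
		if errors.Is(err, io.EOF) {
			return nil // the model finished the story
		}
		if err != nil {
			return err
		}
		if len(resp.Choices) > 0 {
			// Send each chunk to the client as soon as it arrives.
			fmt.Fprint(w, resp.Choices[0].Delta.Content)
			if flusher != nil {
				flusher.Flush()
			}
		}
	}
}
```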
4. Local Server Testing
I tested the server locally and made adjustments to my local client front-end code to ensure it was using the new server properly. During testing, I ran into some issues with cross-site requests. I had to dive into HTTP headers to learn about the Access-Control-Allow-Origin header to facilitate cross-site communication. This resource provided some valuable insights, and ChatGPT again helped as a teaching aid for questions that I had.
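In case it saves someone else the digging: the fix boils down to answering the browser's preflight OPTIONS request and setting the Access-Control-Allow-Origin header on responses. A rough sketch of that wrapper (the origin value here is a placeholder, not my real frontend URL):

```go
package story

import "net/http"

// withCORS wraps a handler and adds the headers browsers require for
// cross-site requests. Replace the origin with your frontend's URL;
// "*" works for local testing but is too permissive for production.
func withCORS(next http.HandlerFunc) http.HandlerFunc {
	return func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Access-Control-Allow-Origin", "https://example-frontend.netlify.app") // placeholder origin
		w.Header().Set("Access-Control-Allow-Methods", "POST, OPTIONS")
		w.Header().Set("Access-Control-Allow-Headers", "Content-Type")

		// The browser sends an OPTIONS preflight before the real request;
		// answer it without running the actual handler.
		if r.Method == http.MethodOptions {
			w.WriteHeader(http.StatusNoContent)
			return
		}
		next(w, r)
	}
}
```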
5. Creating a GCP Account
I considered myself ready to deploy to a production environment. Since I didn’t already have a Google Cloud Platform (GCP) account, I needed to create one. Thankfully, the process was straightforward, and my usage level looks like it will stay within the free tier.
6. Implementing Cloud Functions
For the serverless setup, I chose Google Cloud Functions. Setting it up manually allowed me to learn the configuration aspects, including triggers, permissions, and secrets storage for API keys and passwords. It was super easy to set up. I validated that it worked by using Postman from my local machine as a client.
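For reference, one common way to wire up a Go HTTP Cloud Function is to register it through the Functions Framework, roughly like the sketch below; the entry point name and environment variable are placeholders for my setup:

```go
package story

import (
	"net/http"
	"os"

	"github.com/GoogleCloudPlatform/functions-framework-go/functions"
)

func init() {
	// The entry point name registered here must match the one configured
	// for the Cloud Function in GCP.
	functions.HTTP("GenerateStory", generateStoryHTTP)
}

func generateStoryHTTP(w http.ResponseWriter, r *http.Request) {
	// With Secret Manager, the API key can be exposed to the function as an
	// environment variable so credentials never live in the code or repo.
	apiKey := os.Getenv("OPENAI_API_KEY")
	if apiKey == "" {
		http.Error(w, "server not configured", http.StatusInternalServerError)
		return
	}
	// ... decode the request and stream the story back, as sketched earlier ...
}
```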
7. Automating Deployments
Now I was ready to automate deployments so that changes to the code would deploy automatically. I wanted a system similar to Netlify's auto-deploy, and Google Cloud Build seemed to fit the bill. I manually configured it to connect to my GitHub repository, fetch the source, and trigger builds via a webhook.
The trickiest part was getting the permissions right within GCP. Understanding how permissions work and granting service accounts only the access they need to maintain least privilege was important to me, having worked on security-minded projects in the past.
8. Frontend code update
The last step was a basic configuration update in my front-end artificial_expo code to provide the proper environment configuration (local dev and production) for the URL that generate-story requests are sent to.
With all these pieces in place, I successfully set up the Go serverless function on GCP in about 8 hours. It took longer than I expected due to the learning curve, but I’m confident that future implementations will be much smoother. Thinking back on everything I did, this would typically have taken me several days to work out: dev environment setup, coding, production infrastructure setup, and deployment automation. Between the ease of use of the cloud and generative AI speeding up the process, things are pretty easy, even though there are many moving parts to keep track of across front-end code, backend servers, deploy automation, and environment management.