When integrating advanced AI models like OpenAI’s GPT series into your applications, a critical yet often overlooked architectural component is AI middleware.
This intermediary layer acts as a vital bridge between your frontend and AI APIs, enabling secure, controlled, and scalable communication.
Why is middleware necessary?
Security: Safeguard sensitive API keys by keeping them off client devices.
Input validation & transformation: Enforce business logic, sanitize inputs, or enhance prompts before forwarding to the AI.
Response formatting: Normalize AI output for consistent client consumption.
Operational controls: Implement logging, monitoring, rate-limiting, and user-specific quotas without complicating your frontend.
In essence, middleware helps you abstract the AI integration, reduce frontend complexity, and adhere to security best practices.
Exposing API keys in client-side code is a real risk: anyone inspecting network requests or browser sources can extract your keys and rack up costs or breach data. A backend middleware service acts as a gatekeeper, securely holding API keys and proxying client requests with appropriate validation and controls.
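To make the operational-controls point concrete, here is a minimal, standalone sketch (separate from the app we build below) of a gatekeeper-style FastAPI dependency that applies a simple in-memory, per-user rate limit before a request would ever be forwarded to the AI provider. The X-User-Id header, the window size, and the quota are illustrative assumptions; a production setup would typically back this with a persistent store.

import time
from collections import defaultdict, deque

from fastapi import Depends, FastAPI, Header, HTTPException

app = FastAPI()

WINDOW_SECONDS = 60            # assumed sliding-window length
MAX_REQUESTS_PER_WINDOW = 10   # assumed per-user quota
_request_log = defaultdict(deque)  # user id -> timestamps of recent requests

def enforce_quota(x_user_id: str = Header(default="anonymous")) -> str:
    """Reject the request if this user exceeded their quota in the current window."""
    now = time.time()
    history = _request_log[x_user_id]
    # Drop timestamps that have fallen outside the sliding window
    while history and now - history[0] > WINDOW_SECONDS:
        history.popleft()
    if len(history) >= MAX_REQUESTS_PER_WINDOW:
        raise HTTPException(status_code=429, detail="Rate limit exceeded")
    history.append(now)
    return x_user_id

@app.post("/ask-ai")
async def ask_ai(user_id: str = Depends(enforce_quota)):
    # This is where the middleware would validate the input and forward the
    # request to the AI provider using a server-side API key.
    return {"ok": True, "user": user_id}

The main example below applies the same gatekeeper idea, focusing on key handling, input validation, logging, and response formatting rather than quotas.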
Catalyst: The modern full-stack cloud platform
Managing a traditional backend involves configuring servers, SSL, scaling, logging, and deploying code, which adds considerable operational overhead for many teams.
Catalyst revolutionizes this by providing:
Serverless functions: Write backend endpoints in your favorite language without managing infrastructure. Just code, deploy, and you're live.
Secrets management: Use environment variables to securely manage sensitive credentials like API keys.
Local development & testing: Catalyst runs your full app locally for seamless testing.
One-command deployment: Catalyst instantly publishes your app with automatic HTTPS, logging, and scaling.
This platform-centric approach empowers developers to focus on code and features, not operational complexity.
One of the best ways to understand this setup is through a practical example. Here, we’ll build a small regex generator tool where users describe a pattern in plain English, and the backend returns a valid regex and explanation, powered by AI.
Let’s jump in:
Before we start wiring up prompts, we need to lay the foundation by setting up both our Catalyst project and OpenAI access.
Create a Catalyst project
If you haven’t already, sign up at Catalyst. You can create a new project either through the Catalyst Developer Console (UI-based) or via the CLI, depending on how you like to work.
We used the CLI; it’s fast, clean, and gives us more control during development.
You’ll also need an OpenAI API key, which you can generate from your OpenAI account dashboard; we’ll store it as an environment variable rather than hardcoding it. That’s it: with Catalyst initialized and our OpenAI key ready, we’re all set to start coding the AI middleware.
Now let’s connect everything. Our app will have two main parts: the frontend, which you can build using your favorite language—for now, we’ll use HTML, CSS, and JavaScript to keep it simple—and the backend, written in Python, which takes the input and talks to OpenAI to generate a regex pattern along with a quick explanation.
Add the code
We’ll place the Python logic inside a Catalyst Advanced I/O function using FastAPI. The best part? Catalyst takes care of all the client and backend setup, so we can focus entirely on building. Use the following sample as a starting point, and enhance the code with your own creativity.
# This backend function receives a prompt from the user, sends it to OpenAI, and returns a regex pattern with an explanation.
from fastapi import FastAPI
from pydantic import BaseModel
from fastapi.middleware.cors import CORSMiddleware
import os
import openai
import logging
# Set up basic logging to track activity and errors
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
# Initialize the FastAPI app
app = FastAPI()
# Enable CORS (Cross-Origin Resource Sharing)
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],  # TODO: Replace with actual domains in production
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)
# Set your OpenAI API key here.
# You can securely store your API key as an environment variable
# and access it instead of hardcoding. Catalyst Functions support this approach.
openai_api_key = os.getenv("OPENAI_API_KEY") # Retrieve key from environment variables
openai.api_key = openai_api_key
# Define the input format expected from the frontend
class PromptInput(BaseModel):
    prompt: str  # Example: "Match email addresses"
# Define the POST endpoint for generating regex
@app.post("/generate-regex")
async def generate_regex(data: PromptInput):
    try:
        # Get the input from the user and clean whitespace
        user_input = data.prompt.strip()

        # Handle empty input early
        if not user_input:
            logger.warning("Empty prompt received.")
            return {"error": "Prompt is empty"}

        logger.info(f"Received prompt: {user_input}")
        logger.info(f"Using OpenAI Key: {'***' + openai_api_key[-6:]}")  # Log only last part for safety

        # Define the message format to send to OpenAI
        base_prompt = f"""
You are a helpful AI that generates regular expressions.
Given the following natural language description, return ONLY the regex pattern and then a short explanation.
Description: "{user_input}"
Format your response as:
Pattern: <regex>
Explanation: <one-line explanation>
"""

        logger.info("Calling OpenAI API...")

        # Send the prompt to OpenAI's GPT-3.5 model
        response = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            messages=[{"role": "user", "content": base_prompt}],
            temperature=0.2,
            max_tokens=200,
        )

        logger.info("OpenAI API call successful.")

        # Extract the content from the API response
        content = response['choices'][0]['message']['content'].strip()
        logger.info(f"OpenAI Response:\n{content}")

        # Parse the regex and explanation from the response
        regex = ""
        explanation = ""
        for line in content.split('\n'):
            lower_line = line.lower()
            if "pattern" in lower_line:
                regex = line.split(":", 1)[1].strip()
            elif "explanation" in lower_line:
                explanation = line.split(":", 1)[1].strip()

        # Handle missing regex pattern in response
        if not regex:
            logger.error("Regex not found in response.")
            return {"error": "Regex not generated properly. Please try again."}

        logger.info(f"Parsed Regex: {regex}")
        logger.info(f"Explanation: {explanation}")

        # Send back the parsed result to the frontend
        return {
            "regex": regex,
            "explanation": explanation
        }

    except Exception as e:
        # Log and return errors in case anything goes wrong
        logger.exception("Error during regex generation:")
        return {"error": f"Internal error: {str(e)}"}
The frontend JavaScript handles both the AI prompt and the regex-testing workflow: it sends user input to the backend for regex generation and visually highlights regex matches in test strings. You can paste it into your JavaScript file (Main.js).
Catalyst serves as your security-first AI middleware, enabling you to integrate intelligence into your applications with confidence and control. That makes it a strong fit for building compliant, scalable, and trustworthy AI solutions.