Why Build with the OpenAI API?

The OpenAI API gives you programmatic access to powerful language models, letting you add features like text generation, summarization, Q&A, code assistance, and more to your web applications. In this guide you'll connect to the API, send your first prompt, and learn best practices for building AI-powered features responsibly.

Prerequisites

  • Basic JavaScript knowledge (async/await, fetch)
  • Node.js installed (v18+ recommended)
  • An OpenAI account with an API key

Step 1: Get Your API Key

Sign in to platform.openai.com, navigate to API Keys, and create a new secret key. Never expose this key in client-side code or commit it to version control. Store it in an environment variable.

# .env
OPENAI_API_KEY=sk-xxxxxxxxxxxxxxxxxxxxxxxx

Step 2: Install the OpenAI SDK

The official Node.js library simplifies authentication and request formatting.

npm install openai
npm install dotenv  # to load .env variables

Step 3: Make Your First API Call

Create a file called chat.js and write the following:

import OpenAI from 'openai';
import 'dotenv/config';

const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

async function askQuestion(userMessage) {
  const response = await client.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [
      { role: 'system', content: 'You are a helpful web development assistant.' },
      { role: 'user', content: userMessage }
    ],
    max_tokens: 500,
  });

  return response.choices[0].message.content;
}

const answer = await askQuestion('What is the difference between var, let, and const in JavaScript?');
console.log(answer);

Because the file uses ES module import syntax, add "type": "module" to your package.json (or name the file chat.mjs). Then run it with node chat.js and you'll see the model's response printed to your terminal.

Understanding the Request Parameters

Parameter   | Purpose
model       | Which AI model to use (e.g. gpt-4o, gpt-4o-mini)
messages    | The conversation history: system prompt plus user messages
max_tokens  | Limits the length of the response
temperature | Randomness (0 = focused and deterministic, higher = more varied). Range 0–2, default 1.
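These parameters are just fields on a plain object passed to the SDK. A small helper can make the defaults explicit; buildChatRequest below is an illustrative sketch, not part of the OpenAI SDK:

```javascript
// Illustrative helper (not part of the OpenAI SDK): assembles the
// request body you would pass to client.chat.completions.create().
function buildChatRequest(userMessage, options = {}) {
  return {
    model: options.model ?? 'gpt-4o-mini',
    messages: [
      { role: 'system', content: options.systemPrompt ?? 'You are a helpful assistant.' },
      { role: 'user', content: userMessage },
    ],
    max_tokens: options.maxTokens ?? 500,   // cap response length
    temperature: options.temperature ?? 1,  // 0 = focused, higher = more varied
  };
}

const request = buildChatRequest('Explain closures.', { temperature: 0.2 });
console.log(request.model);        // 'gpt-4o-mini'
console.log(request.temperature);  // 0.2
```

Centralizing the defaults this way also makes it easy to swap models or tighten max_tokens in one place later.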

Step 4: Build a Simple Express API Wrapper

To use this from a frontend safely, create a backend proxy so the API key stays server-side.

import express from 'express';
import OpenAI from 'openai';
import 'dotenv/config';

const app = express();
app.use(express.json());
const client = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

app.post('/api/chat', async (req, res) => {
  const { message } = req.body;
  if (!message) return res.status(400).json({ error: 'Message required' });

  try {
    const completion = await client.chat.completions.create({
      model: 'gpt-4o-mini',
      messages: [{ role: 'user', content: message }],
      max_tokens: 300,
    });
    res.json({ reply: completion.choices[0].message.content });
  } catch (err) {
    console.error(err);
    res.status(500).json({ error: 'Upstream request failed' });
  }
});

app.listen(3000, () => console.log('API proxy running on port 3000'));
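From the frontend, you then call your own proxy instead of OpenAI. A minimal sketch, assuming the Express server above is running on localhost:3000 (sendMessage is a hypothetical helper name):

```javascript
// Calls our own backend proxy rather than OpenAI directly, so the
// API key never reaches the browser. Assumes the Express server
// above is listening on localhost:3000.
async function sendMessage(message) {
  const res = await fetch('http://localhost:3000/api/chat', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ message }),
  });
  if (!res.ok) throw new Error(`Proxy returned ${res.status}`);
  const data = await res.json();
  return data.reply;
}

// Example usage, e.g. wired to a form's submit handler:
// sendMessage('How do I center a div?').then(reply => console.log(reply));
```

In production you would serve the frontend from the same origin (or configure CORS) and use a relative path like /api/chat instead of a hard-coded host.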

Important Best Practices

  1. Always proxy through your backend — Never call the OpenAI API directly from the browser.
  2. Rate limit your endpoints — Prevent abuse with libraries like express-rate-limit.
  3. Validate and sanitize input — Don't pass raw user input straight into prompts without checks.
  4. Handle errors gracefully — The API can return errors for rate limits, content policy violations, or network issues. Always wrap calls in try/catch.
  5. Monitor your usage — Set spending limits in your OpenAI dashboard to avoid surprise bills.

What to Build Next

  • A blog post summarizer that takes a URL and returns key points
  • A code review assistant integrated into your dev workflow
  • A chatbot widget embedded in a customer support page
  • An SEO meta-description generator for your CMS

Wrapping Up

The OpenAI API is straightforward to integrate and opens up a wide range of features for your web apps. Start small — one endpoint, one clear purpose — and expand from there. The biggest pitfalls are security (key exposure) and cost (unbounded token usage), both of which are easy to manage with the right setup.