You've built an incredible app with Cursor and Claude. It works perfectly on your laptop. Users are asking for access. You're ready to ship.
Then you open your terminal and freeze: "Wait... how do I actually deploy this?"
Suddenly you're drowning in questions: Where do I host this? How do I set up CI/CD? What about monitoring? Security? Backups? SSL certificates? Environment variables? Database migrations?
If you don't have a dedicated DevOps team (and let's be honest, most indie hackers don't), this can feel overwhelming. But here's the truth: You don't need a DevOps team to productionize your AI-built app. You just need the right approach.
This guide walks you through the exact process I've used to help dozens of indie hackers go from localhost to production-ready in one day.
Why Most AI-Built Apps Fail in Production
Your app works on localhost. You deploy it. Then reality hits:
Week 1:
- API keys don't work (they're hardcoded for localhost)
- Database isn't configured
- Environment variables are missing
- Deployment crashes with cryptic errors
Week 2 (if you get it deployed):
- Random crashes you can't debug
- Slow performance under load
- Users report errors you can't reproduce
- No way to monitor what's happening
- One bad commit breaks everything
Week 3:
- You're terrified to deploy updates
- Users are frustrated
- You're manually SSHing into servers at 2 AM
- Your product launch becomes a firefighting nightmare
The problem isn't your code. It's that AI code generators optimize for "works on localhost," not "survives production."
Let's fix that.
The Production Readiness Checklist
Before you deploy anything, work through the checklist below, grouped into three priority tiers. Skip the P0 items and you're setting yourself up for failure.
P0: Must-Haves (Before First User)
- Environment Variables: All secrets in .env, never in code
- Database: Connection pooling configured, migrations automated
- Error Monitoring: Sentry or similar installed
- Health Check Endpoint: Simple /health route that verifies dependencies
- SSL Certificate: HTTPS everywhere (Let's Encrypt is free)
- Rate Limiting: Basic protection against abuse
P1: Should-Haves (Before 100 Users)
- CI/CD Pipeline: Automated testing and deployment
- Logging: Structured logs sent to aggregator (Axiom, Logtail)
- Backup Strategy: Automated database backups
- Rollback Plan: Can revert bad deployments in < 5 minutes
- Performance Monitoring: Track response times and errors
- Staging Environment: Test changes before production
P2: Nice-to-Haves (Before 1000 Users)
- Load Testing: Know your breaking point
- CDN: Fast asset delivery worldwide
- Database Replicas: Read replicas for scaling
- Auto-Scaling: Handle traffic spikes automatically
- Security Scanning: Automated vulnerability detection
- Custom Metrics: Business-specific tracking
Don't try to do everything at once. Focus on P0 first. Get to production. Iterate from real usage.
Step-by-Step: Localhost to Production in One Day
Let's walk through a real productionization process for a typical AI-built app (Next.js + PostgreSQL + Python API).
Step 1: Clean Up Your Code (30 minutes)
Audit for hardcoded secrets:
```bash
# Search for potential secrets
grep -r "sk-" . --exclude-dir=node_modules
grep -r "API_KEY" . --exclude-dir=node_modules
grep -r "mongodb://" . --exclude-dir=node_modules
grep -r "postgresql://" . --exclude-dir=node_modules
```
Move everything to .env:
```bash
# .env.example (commit this)
OPENAI_API_KEY=your_key_here
DATABASE_URL=postgresql://localhost:5432/myapp
STRIPE_SECRET_KEY=sk_test_...
NEXT_PUBLIC_API_URL=http://localhost:3000
```

```bash
# .env (NEVER commit this)
OPENAI_API_KEY=sk-proj-abc123...
DATABASE_URL=postgresql://prod.db:5432/myapp
STRIPE_SECRET_KEY=sk_live_xyz789...
NEXT_PUBLIC_API_URL=https://api.myapp.com
```
Validate environment variables at startup:
```typescript
// env.ts
const requiredEnvVars = [
  'OPENAI_API_KEY',
  'DATABASE_URL',
  'STRIPE_SECRET_KEY',
]

for (const varName of requiredEnvVars) {
  if (!process.env[varName]) {
    throw new Error(`Missing required environment variable: ${varName}`)
  }
}

export const env = {
  openaiKey: process.env.OPENAI_API_KEY!,
  databaseUrl: process.env.DATABASE_URL!,
  stripeKey: process.env.STRIPE_SECRET_KEY!,
}
```
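To unit-test this check without touching `process.env`, the loop can be pulled into a small helper. A minimal sketch (`assertEnv` is an illustrative name, not part of the code above):

```typescript
// assertEnv is a hypothetical helper: the same check as above,
// but testable against any plain object instead of process.env.
export function assertEnv(
  required: string[],
  env: Record<string, string | undefined>
): void {
  for (const name of required) {
    if (!env[name]) {
      throw new Error(`Missing required environment variable: ${name}`)
    }
  }
}
```

At startup you'd call `assertEnv(requiredEnvVars, process.env)` once, before anything else runs.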
Step 2: Choose Your Deployment Platform (15 minutes)
Pick based on your stack and complexity:
| Platform | Best For | Pros | Cons |
|---|---|---|---|
| Vercel | Next.js, static sites | Zero config, great DX, fast | Expensive at scale, vendor lock-in |
| Railway | Full-stack apps with DB | Easy DB management, fair pricing | Smaller ecosystem |
| Render | Containerized apps | Simple, good free tier | Slower cold starts |
| Fly.io | Global edge apps | Best performance, Docker-native | Steeper learning curve |
My recommendation for most AI apps: Start with Vercel (frontend) + Railway (backend + DB). It's the fastest path to production.
Step 3: Set Up Monitoring (20 minutes)
Problem: You don't know when your app goes down until users complain.
Solution: 5-minute monitoring setup with Sentry (free tier)
```bash
npm install @sentry/nextjs
npx @sentry/wizard@latest -i nextjs
```
```typescript
// sentry.client.config.ts
import * as Sentry from "@sentry/nextjs"

Sentry.init({
  dsn: process.env.NEXT_PUBLIC_SENTRY_DSN,
  environment: process.env.NODE_ENV,
  tracesSampleRate: 0.1, // 10% of requests for performance
  beforeSend(event) {
    // Don't send errors from localhost
    if (event.request?.url?.includes('localhost')) {
      return null
    }
    return event
  },
})
```
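Filters like `beforeSend` are easy to get subtly wrong, so it's worth extracting the decision into a pure function you can test. A sketch (the event shape is simplified, not the full Sentry type, and `dropLocalhostEvents` is an illustrative name):

```typescript
// Simplified event shape — the real Sentry Event type has many more fields
interface MinimalEvent {
  request?: { url?: string }
}

// Returning null tells Sentry to drop the event entirely
export function dropLocalhostEvents(event: MinimalEvent): MinimalEvent | null {
  if (event.request?.url?.includes('localhost')) {
    return null
  }
  return event
}
```

In the config above you'd then write `beforeSend: dropLocalhostEvents`.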
Add a health check endpoint:
```typescript
// app/api/health/route.ts
import { db } from '@/lib/db'

export async function GET() {
  try {
    // Verify database connection
    await db.query('SELECT 1')

    // Verify critical services
    const openaiKey = process.env.OPENAI_API_KEY
    if (!openaiKey) throw new Error('OpenAI key missing')

    return Response.json({
      status: 'healthy',
      timestamp: new Date().toISOString(),
      services: {
        database: 'up',
        openai: 'configured',
      },
    })
  } catch (error) {
    // `error` is `unknown` in TypeScript; narrow it before reading .message
    const message = error instanceof Error ? error.message : 'Unknown error'
    return Response.json(
      { status: 'unhealthy', error: message },
      { status: 503 }
    )
  }
}
```
Set up uptime monitoring (free):
- Go to uptimerobot.com (free plan: 50 monitors)
- Add your /health endpoint
- Get alerts when it goes down
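If you later script your own checks on top of the `/health` endpoint, keep the pass/fail decision in a small pure function. A sketch (`isHealthy` and the body shape mirror the route above, but the names are illustrative):

```typescript
// Matches the JSON the /health route above returns
interface HealthBody {
  status?: string
  services?: Record<string, string>
}

// A response is healthy only if the status code is 200, the body says
// 'healthy', and no dependency reports 'down'.
export function isHealthy(statusCode: number, body: HealthBody): boolean {
  if (statusCode !== 200) return false
  if (body.status !== 'healthy') return false
  return !Object.values(body.services ?? {}).includes('down')
}
```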
Step 4: Database Production Config (25 minutes)
AI often generates database code that leaks connections. Fix it now before you run out of connections under load.
Connection pooling (PostgreSQL example):
```typescript
// lib/db.ts
import { Pool } from 'pg'

// Create ONE pool for the entire app
const pool = new Pool({
  connectionString: process.env.DATABASE_URL,
  max: 20, // Maximum connections
  idleTimeoutMillis: 30000, // Close idle connections after 30s
  connectionTimeoutMillis: 2000, // Fail fast if pool is exhausted
})

// Graceful shutdown
process.on('SIGTERM', async () => {
  await pool.end()
  process.exit(0)
})

export const db = {
  async query(text: string, params?: any[]) {
    const client = await pool.connect()
    try {
      return await client.query(text, params)
    } finally {
      client.release() // CRITICAL: Always release
    }
  },
}
```
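Pooling handles steady load, but transient connection errors still happen during deploys and failovers. A retry-with-backoff wrapper is a common complement. A sketch, not part of the original code (`withRetry` is an illustrative name; tune the counts to your workload):

```typescript
// Retry an async operation with exponential backoff before giving up.
export async function withRetry<T>(
  fn: () => Promise<T>,
  retries = 3,
  baseDelayMs = 100
): Promise<T> {
  let lastError: unknown
  for (let attempt = 0; attempt <= retries; attempt++) {
    try {
      return await fn()
    } catch (err) {
      lastError = err
      if (attempt === retries) break
      // Backoff doubles each attempt: 100ms, 200ms, 400ms, ...
      await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** attempt))
    }
  }
  throw lastError
}
```

Usage would look like `withRetry(() => db.query('SELECT 1'))`. Only wrap idempotent reads this way; blindly retrying writes can duplicate work.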
Migration strategy:
```bash
# Use a migration tool (Prisma, Drizzle, or plain SQL)
npm install prisma --save-dev
npx prisma init
npx prisma migrate dev --name init

# In production, run migrations before deploy
npx prisma migrate deploy
```
Step 5: CI/CD Pipeline (30 minutes)
Automate deployments so you're not manually pushing to production at 2 AM.
GitHub Actions example (for Vercel + Railway):
```yaml
# .github/workflows/deploy.yml
name: Deploy to Production

on:
  push:
    branches: [main]

jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3

      - name: Setup Node.js
        uses: actions/setup-node@v3
        with:
          node-version: '20'

      - name: Install dependencies
        run: npm ci

      - name: Run tests
        run: npm test

      - name: Run type check
        run: npm run typecheck

      - name: Deploy to Vercel
        uses: amondnet/vercel-action@v25
        with:
          vercel-token: ${{ secrets.VERCEL_TOKEN }}
          vercel-org-id: ${{ secrets.VERCEL_ORG_ID }}
          vercel-project-id: ${{ secrets.VERCEL_PROJECT_ID }}
          vercel-args: '--prod'
```
Railway auto-deploys from GitHub:
- Connect your repo in Railway dashboard
- Railway auto-deploys on push to main
- Done.
Step 6: Add Basic Security (20 minutes)
Rate limiting to prevent abuse:
```typescript
// middleware.ts
import { Ratelimit } from '@upstash/ratelimit'
import { Redis } from '@upstash/redis'

const ratelimit = new Ratelimit({
  redis: Redis.fromEnv(),
  limiter: Ratelimit.slidingWindow(10, '10 s'), // 10 requests per 10 seconds
})

export async function middleware(request: Request) {
  const ip = request.headers.get('x-forwarded-for') ?? 'unknown'
  const { success } = await ratelimit.limit(ip)
  if (!success) {
    return new Response('Too many requests', { status: 429 })
  }
  // Returning nothing lets the request continue to the route
}
```
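Upstash needs a hosted Redis instance. If you're on a single server and just want basic protection, the same sliding-window idea can be sketched in memory. Illustrative only: it resets on restart, doesn't share state across instances, and grows unbounded without eviction.

```typescript
// A minimal in-memory sliding-window rate limiter (single-instance only).
export class SlidingWindowLimiter {
  private hits = new Map<string, number[]>()

  constructor(private limit: number, private windowMs: number) {}

  // Returns true if the request is allowed; `now` is injectable for testing.
  allow(key: string, now = Date.now()): boolean {
    // Keep only timestamps still inside the current window
    const recent = (this.hits.get(key) ?? []).filter(
      (t) => now - t < this.windowMs
    )
    if (recent.length >= this.limit) {
      this.hits.set(key, recent)
      return false
    }
    recent.push(now)
    this.hits.set(key, recent)
    return true
  }
}
```

In the middleware above you'd replace the Upstash call with `limiter.allow(ip)`.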
Input validation (Zod example):
```typescript
import { z } from 'zod'

const createUserSchema = z.object({
  email: z.string().email(),
  name: z.string().min(2).max(100),
})

export async function POST(req: Request) {
  try {
    const body = await req.json()
    const validated = createUserSchema.parse(body) // Throws if invalid

    // Now safe to use validated data
    const user = await createUser(validated)
    return Response.json(user)
  } catch (error) {
    if (error instanceof z.ZodError) {
      return Response.json({ error: error.errors }, { status: 400 })
    }
    return Response.json({ error: 'Internal error' }, { status: 500 })
  }
}
```
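If you'd rather not pull in a dependency for one endpoint, the same guard can be hand-rolled. A sketch (`validateCreateUser` is an illustrative stand-in for the Zod schema above; Zod's error reporting is much richer, and the email regex here is deliberately loose):

```typescript
// Hand-rolled equivalent of the Zod schema: throws on invalid input.
export function validateCreateUser(body: unknown): { email: string; name: string } {
  if (typeof body !== 'object' || body === null) {
    throw new Error('Invalid body')
  }
  const { email, name } = body as Record<string, unknown>
  // Deliberately simple email check — enough to catch obvious garbage
  if (typeof email !== 'string' || !/^[^@\s]+@[^@\s]+\.[^@\s]+$/.test(email)) {
    throw new Error('Invalid email')
  }
  if (typeof name !== 'string' || name.length < 2 || name.length > 100) {
    throw new Error('Invalid name')
  }
  return { email, name }
}
```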
Common Production Mistakes (And How to Avoid Them)
Mistake 1: No Error Boundaries
Problem: One component crashes, entire app white-screens.
Fix: Add error boundaries to all route components.
```tsx
// components/ErrorBoundary.tsx
'use client'
import React from 'react'

interface Props {
  children: React.ReactNode
}

interface State {
  hasError: boolean
  error: Error | null
}

export class ErrorBoundary extends React.Component<Props, State> {
  state: State = { hasError: false, error: null }

  static getDerivedStateFromError(error: Error): State {
    return { hasError: true, error }
  }

  render() {
    if (this.state.hasError) {
      return (
        <div className="p-8 bg-red-50 border border-red-200 rounded">
          <h2 className="text-xl font-bold mb-2">Something went wrong</h2>
          <button onClick={() => window.location.reload()}>
            Reload page
          </button>
        </div>
      )
    }
    return this.props.children
  }
}
```
Mistake 2: Infinite Loops in Production
AI-generated code loves infinite loops. Common culprits:
```typescript
// ❌ BAD: Infinite re-render
useEffect(() => {
  fetchData() // Runs on every render, triggers re-render, loops forever
})

// ✅ GOOD: Runs once on mount
useEffect(() => {
  fetchData()
}, []) // Empty dependency array
```
Mistake 3: Missing Database Indexes
Problem: Queries slow down as data grows.
Fix: Add indexes to frequently queried columns.
```sql
-- Find slow queries (PostgreSQL)
SELECT query, mean_exec_time
FROM pg_stat_statements
ORDER BY mean_exec_time DESC
LIMIT 10;

-- Add indexes
CREATE INDEX idx_users_email ON users(email);
CREATE INDEX idx_posts_user_id ON posts(user_id);
```
When to Get Help
You don't need a DevOps team, but sometimes you need a second pair of eyes. Get help when:
- Your app crashes in production but works locally
- Database queries are slow (> 1 second)
- Your deployment pipeline is broken
- Security vulnerabilities are flagged
- You're spending more time firefighting than building
VibeOps specializes in exactly this: We review your AI-built app and deliver a personalized Loom video + PDF showing what will break in production before it happens.
Summary: Your Production Checklist
Before you deploy:
- All secrets in environment variables
- Database connection pooling configured
- Error monitoring installed (Sentry)
- Health check endpoint created
- SSL certificate configured
- Rate limiting enabled
- CI/CD pipeline set up
- Database migrations automated
- Backup strategy in place
- Can roll back bad deployments in < 5 minutes
Productionizing an AI-built app without a DevOps team is absolutely possible. You don't need to become a DevOps expert; you just need to follow a proven checklist and use the right tools.
Start with the P0 checklist. Deploy to production. Learn from real users. Iterate.
The biggest mistake I see is perfectionism. You don't need a perfect setup on day one. You need a good enough setup that won't cause disasters, then iterate as you grow.
Need a production vibe check? Get a free async audit: we'll review your repo and send back a personalized 8–12 minute Loom video + PDF showing exactly where your app will break under real load.