Documentation

Everything you need to set up and deploy your AI chatbot. Follow the sections below to configure each service and get running.

Before You Start

hourzero.dev is a production-grade codebase with a lot of moving parts — auth, billing, streaming, rate limiting, and more, all wired together. That's the value, but it also means jumping straight into modifications without understanding how the pieces connect will slow you down.

Before writing any code, spend some time reading. Walk through the folder structure, trace a chat message from the input box to the streaming response, look at how auth wraps API routes. You don't need to understand every line — you're building a mental map that will pay off every time you need to find or change something later.

Use an AI coding tool to help you navigate. Claude Code, Cursor, Codex, OpenCode — any of them will dramatically speed up your orientation. Ask them to explain a file, trace a data flow, or find where a feature is implemented. If you're not already using one of these, now is a great time to start. Each boilerplate (Starter, Pro, Enterprise) ships with a set of well-documented Markdown (.md) docs that your AI assistant can reference.

Read the docs before the code. They're structured to walk you through setup, configuration, and architecture in order, and they'll give you the context that makes the codebase click.

And resist the urge to restructure early. Get the app running as-is, build your first feature on top of the existing patterns, and refactor later once you understand why things are where they are.

Take your time with this step. It makes everything that follows faster.

Quick Start

Get the boilerplate running locally in four steps. You need Node.js 20+, pnpm, and optionally Docker for a local PostgreSQL instance.

1. Install dependencies

pnpm install

2. Configure environment variables

Copy the example file and fill in your keys. Each variable is covered in detail in the sections below.

cp .env.example .env.local

3. Run database migrations

This creates all required tables in your PostgreSQL database. If you need a local instance, see the Database section.

pnpm db:migrate

Note: If you're using a managed Postgres instance (e.g. Supabase or Neon), try setting POSTGRES_URL to the Session Pooler or Transaction Pooler connection string rather than the Direct one. Some providers ship with default settings that can cause the migration script to fail on a direct connection.

4. Start the dev server

pnpm dev

Open http://localhost:3000 in your browser. The chatbot is ready to use once you provide at least one AI provider key.

Environment Variables

The project uses 28 environment variables. Below is a grouped quick-reference. Each service section later in this page explains how to obtain the values.

Core

Variable                                Description                              Source
BETTER_AUTH_SECRET                      Secret for signing sessions and tokens   openssl rand -base64 32
BETTER_AUTH_URL                         Application base URL                     http://localhost:3000 for local dev
NODE_ENV                                Runtime environment                      development or production
NEXT_PUBLIC_BETTER_AUTH_URL             Public URL used for absolute links       Your deployed domain
NEXT_PUBLIC_EMAIL_VERIFICATION_OPTION   Verification method for email sign-up    otp or magic_link

Google Auth

Variable               Description                  Source
GOOGLE_CLIENT_ID       Google OAuth client ID       Google Cloud Console
GOOGLE_CLIENT_SECRET   Google OAuth client secret   Google Cloud Console

AI Providers

Variable                       Description                  Source
ANTHROPIC_API_KEY              Anthropic (Claude) API key   console.anthropic.com
OPENAI_API_KEY                 OpenAI API key               platform.openai.com
GOOGLE_GENERATIVE_AI_API_KEY   Google Gemini API key        aistudio.google.com

Database & Redis

Variable                   Description                    Source
POSTGRES_URL               PostgreSQL connection string   See Database section
UPSTASH_REDIS_REST_URL     Upstash Redis REST endpoint    console.upstash.com
UPSTASH_REDIS_REST_TOKEN   Upstash Redis auth token       console.upstash.com

Email

Variable         Description                          Source
RESEND_API_KEY   Resend transactional email API key   resend.com
ADMIN_EMAIL      Receives contact form submissions    Your email address

Stripe

Variable                      Description                         Source
STRIPE_SECRET_KEY             Stripe secret API key               Stripe Dashboard
STRIPE_WEBHOOK_SECRET         Webhook signing secret              Stripe CLI or Dashboard
STRIPE_PRO_MONTHLY_PRICE_ID   Price ID for monthly subscription   Stripe product page
STRIPE_PRO_YEARLY_PRICE_ID    Price ID for yearly subscription    Stripe product page

AI Tools

Variable         Description                  Source
SERPER_API_KEY   Google Search via Serper     serper.dev
EXA_API_KEY      Snippet search via Exa       exa.ai
JINA_API_KEY     Webpage retrieval via Jina   jina.ai

File Uploads (Railway)

Variable                                  Description                  Source
FILE_UPLOAD_BUCKET                        S3 bucket name               Railway volume settings
FILE_UPLOAD_ENDPOINT                      S3-compatible endpoint URL   https://storage.railway.app
FILE_UPLOAD_REGION                        Bucket region                Railway volume settings
FILE_UPLOAD_ACCESS_KEY_ID                 S3 access key                Railway volume settings
FILE_UPLOAD_SECRET_ACCESS_KEY             S3 secret key                Railway volume settings
FILE_UPLOAD_RAILWAY_S3_FORCE_PATH_STYLE   Force path-style URLs        Set to true

Database

The project uses PostgreSQL as its primary database, managed through Drizzle ORM for schema definitions, migrations, and type-safe queries. The schema lives in lib/db/schema.ts and covers 18 tables across authentication, organizations, and application data.

Local PostgreSQL with Docker

The included Docker Compose file spins up a local PostgreSQL instance. Run it and set your connection string.

pnpm dev:docker

Then set your POSTGRES_URL in .env.local:

POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/chatbot

Migration commands

Drizzle provides a full migration toolkit. The most common commands:

Command            Purpose
pnpm db:generate   Generate migration files from schema changes
pnpm db:migrate    Run pending migrations against the database
pnpm db:studio     Open Drizzle Studio — a visual database browser
pnpm db:push       Push schema changes directly (dev only, no migration files)
pnpm db:pull       Pull existing database schema into Drizzle format

Full-text search setup

The memory system uses PostgreSQL full-text search with a GIN index. Custom SQL files in lib/db/custom-sql/ are executed automatically after Drizzle migrations. The memory-fts-setup.sql file creates a search_memory function and a GIN index on to_tsvector('english', content). These scripts are idempotent — they are safe to run multiple times.

Authentication

Authentication is handled by better-auth, which provides email + password sign-up, session management, and organization support out of the box. A single catch-all route at /api/auth/* handles all auth endpoints automatically.

Required configuration

Generate a secret for signing sessions and tokens:

openssl rand -base64 32

Set it as BETTER_AUTH_SECRET in your .env.local. Also set BETTER_AUTH_URL to your application URL (http://localhost:3000 for local development).

Email verification mode

The boilerplate supports two email verification methods, controlled by NEXT_PUBLIC_EMAIL_VERIFICATION_OPTION:

  • otp — Users receive a 6-digit code via email. They enter it on the sign-in page to verify.
  • magic_link — Users receive a clickable link that signs them in directly.

Both methods require a working Resend configuration to send verification emails.

Post-signup behavior

When a user signs up, a Stripe customer is automatically created in the background. This ensures every user has a Stripe customer ID ready for when they subscribe to a paid plan. The organization onboarding flow is presented after first sign-up, allowing users to create or skip creating their first organization.

Session handling

Sessions are stored in the database. A session hook automatically populates activeOrganizationId on login, so all subsequent queries are correctly scoped to the user's active organization.

AI Providers

The chatbot supports three AI providers through the Vercel AI SDK v6. Eight models are registered in a custom provider, and users can switch between them from the chat UI.

API keys

You need at least one provider key to use the chatbot. Create accounts and generate API keys at console.anthropic.com (Anthropic), platform.openai.com (OpenAI), or aistudio.google.com (Google).

Available models

Model ID                Provider    Context   Notes
claude-opus-4.6         Anthropic   200K      Default chat model
claude-sonnet-4.6       Anthropic   200K
claude-sonnet-4.5       Anthropic   200K
claude-haiku-4.5        Anthropic   200K
gpt-4.1-mini            OpenAI      1M        Default model for summarizing webpages whose content is too long
gpt-5.2                 OpenAI      400K
gemini-3-pro            Google      1M
gemini-2.5-flash-lite   Google      1M        Internal only (generates titles for conversations)

Context window compaction

When a conversation exceeds 60% of the active model's context window (minimum 50,000 tokens), the system automatically compresses older messages into a summary. The summary is cached in Redis with a 7-day TTL. The 6 most recent messages are always preserved in full. This keeps long conversations functional without ballooning token costs.
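
The trigger rule can be sketched as a small helper. This is an illustration of the thresholds described above, not the project's actual code:

```typescript
// Sketch of the compaction trigger (illustrative, not the real implementation).
const MIN_COMPACTION_TOKENS = 50_000; // floor on the trigger threshold
const COMPACTION_RATIO = 0.6;         // 60% of the model's context window

function shouldCompact(conversationTokens: number, contextWindow: number): boolean {
  const threshold = Math.max(contextWindow * COMPACTION_RATIO, MIN_COMPACTION_TOKENS);
  return conversationTokens > threshold;
}
```

With a 200K-context model this triggers past 120,000 tokens; with a small 64K-context model the 50,000-token floor applies instead.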

AI Tools

The chatbot can do more than conversation — it has four built-in tools that let it search the web, read pages, and execute code. Tools are automatically invoked by the model when relevant, with up to 30 sequential tool calls per request.

Google Search (Serper)

Sign up at serper.dev and set SERPER_API_KEY. The model can perform Google searches and receive up to 10 organic results with titles, snippets, and URLs. Serper also serves as the primary webpage scraper for the retrieve tool.

Snippet Search (Exa)

Sign up at exa.ai and set EXA_API_KEY. This tool finds semantically relevant text snippets across the web — returning 9 results with highlighted passages of up to 13 sentences each. It also serves as a fallback for webpage retrieval.

Webpage Retrieval (Jina AI)

Sign up at jina.ai and set JINA_API_KEY. When the model needs to read a full webpage, a 3-provider fallback chain is used: Serper Scraper (17s timeout) → Exa → Jina AI. Content is truncated at 200K characters and summarized via GPT-4.1-mini if needed.
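
The fallback behavior can be sketched generically: try each provider in order, return the first success, and truncate long content. The provider functions here are placeholders, not real API clients:

```typescript
// Hypothetical sketch of a retrieval fallback chain with truncation.
type Retriever = (url: string) => Promise<string>;

const MAX_CONTENT_CHARS = 200_000; // truncation limit noted above

async function retrieveWithFallback(url: string, providers: Retriever[]): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      const content = await provider(url);
      return content.slice(0, MAX_CONTENT_CHARS); // cap at 200K characters
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw new Error(`all providers failed: ${String(lastError)}`);
}
```

In the real chain each entry (Serper Scraper with its 17s timeout, Exa, Jina AI) wraps an HTTP call, and truncated content may still be handed to a smaller model for summarization.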

Code Execution

Anthropic and Google models include native code execution in a sandboxed Python environment. No additional API keys are needed. Code runs in the provider's own sandbox — no access to the filesystem or network.

Adding custom tools

To add a new tool, define it in lib/ai/tools/ using the Vercel AI SDK tool() helper, register it in the chat API route, create a result component in components/tool-results/, and add a rendering case in the message component. Tool keys use snake_case and map to tool-* part types in the message stream.
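
As a rough illustration of the shapes involved (plain TypeScript rather than the actual tool() helper, with an invented get_weather tool):

```typescript
// Illustrative sketch only: a minimal tool-shaped object and the
// snake_case key -> "tool-*" part-type mapping mentioned above.
// Real tools are defined with the AI SDK's tool() helper and a schema.
interface SketchTool<In, Out> {
  description: string;
  execute: (input: In) => Promise<Out>;
}

// Hypothetical example tool (not part of the boilerplate).
const getWeather: SketchTool<{ city: string }, { summary: string }> = {
  description: "Look up the current weather for a city",
  execute: async ({ city }) => ({ summary: `Sunny in ${city}` }),
};

// Tool keys use snake_case and map to tool-* part types in the stream.
function toolPartType(toolKey: string): string {
  return `tool-${toolKey}`;
}
```

The rendering case you add in the message component would match the part type returned by toolPartType for your new tool key.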

Email

Transactional emails are sent via Resend with templates built using React Email. The system handles verification emails, password resets, organization invitations, and contact form submissions.

Setup

1. Create a Resend account

Sign up at resend.com and create an API key. Set it as RESEND_API_KEY.

2. Verify your sending domain

In the Resend dashboard, add your domain and configure the DNS records they provide (SPF, DKIM, DMARC). Until verified, you can only send to your own email address.

3. Set the admin email

Set ADMIN_EMAIL to the address that should receive contact form submissions.

Built-in email types

  • Verification link — Magic link for email verification
  • Verification OTP — 6-digit code for verification, sign-in, or password reset
  • Organization invitation — Invite with accept button linking to the app
  • Password reset — Reset link for forgotten passwords
  • Contact form — Submission forwarded to the admin email

Templates live in lib/emails/templates/ and utility functions in lib/emails/utils.ts.

Payments

Subscription billing is powered by Stripe. The boilerplate includes two plans, a checkout flow, a billing management portal, and webhook handlers that sync subscription state to the database.

Stripe setup

1. Get your API keys

Go to Stripe Dashboard → Developers → API keys. Copy the secret key (starts with sk_test_ in test mode) and set it as STRIPE_SECRET_KEY.

2. Create products and prices

In the Stripe Dashboard, create a product with the following price options:

  • Pro Monthly — recurring, monthly billing
  • Pro Yearly — recurring, yearly billing

Copy each Price ID (starts with price_) into STRIPE_PRO_MONTHLY_PRICE_ID and STRIPE_PRO_YEARLY_PRICE_ID.

3. Set up webhooks for local development

Install the Stripe CLI and forward events to your local server:

# Install (macOS)
brew install stripe/stripe-cli/stripe

# Log in to your Stripe account
stripe login

# Forward webhook events to your local server
stripe listen --forward-to localhost:3000/api/webhooks/payments

The CLI prints a webhook signing secret (starts with whsec_). Set it as STRIPE_WEBHOOK_SECRET.

Webhook events handled

The webhook route at /api/webhooks/payments processes four event types:

  • checkout.session.completed — First-time purchase. Updates the user with subscription fields and resets their token budget to 1,000,000.
  • customer.subscription.updated — Plan changes or renewals. Syncs subscription fields and resets tokens.
  • invoice.payment_succeeded — Confirms the plan is active.
  • customer.subscription.deleted — Cancellation. Clears all Stripe fields and deactivates the plan.
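
A hedged sketch of the resulting state transitions (the state shape and reducer are placeholders, not the boilerplate's webhook code):

```typescript
// Sketch of how the four Stripe event types above affect subscription state.
// The fields here are illustrative; the real handler updates the user row.
const PAID_TOKEN_BUDGET = 1_000_000;

interface SubscriptionState {
  active: boolean;
  tokenBudget: number;
}

function applyStripeEvent(state: SubscriptionState, eventType: string): SubscriptionState {
  switch (eventType) {
    case "checkout.session.completed":
    case "customer.subscription.updated":
      // First purchase / plan change or renewal: activate and reset tokens.
      return { active: true, tokenBudget: PAID_TOKEN_BUDGET };
    case "invoice.payment_succeeded":
      // Payment confirmed: keep the plan active.
      return { ...state, active: true };
    case "customer.subscription.deleted":
      // Cancellation: deactivate the plan.
      return { active: false, tokenBudget: 0 };
    default:
      return state; // unrecognized events are ignored
  }
}
```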

Production webhooks

For production, create a webhook endpoint in the Stripe Dashboard pointing to https://yourdomain.com/api/webhooks/payments and subscribe it to the four events above. Use the signing secret from the Dashboard (not the CLI) as your production STRIPE_WEBHOOK_SECRET.

Redis

Upstash Redis is used for three purposes: rate limiting, token budget tracking, and conversation compaction caching. It operates over HTTP, so no persistent connections are needed — ideal for serverless environments.

Setup

Create a free Redis database at console.upstash.com. Copy the REST URL and token from the database details page into UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN.

Rate limiting

Requests are rate-limited using a sliding window algorithm via the @upstash/ratelimit npm package:

  • Free tier — 0 messages per 60 seconds (effectively blocked from sending messages)
  • Paid tier — 10 messages per 60-second window
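
The window behavior can be illustrated with a minimal in-memory sliding log. @upstash/ratelimit does this against Redis (using a weighted sliding-window approximation); this sketch only demonstrates the limits above:

```typescript
// Minimal in-memory sliding-window sketch of the limits described above.
// Not the real limiter: timestamps are kept in memory as an exact log.
const WINDOW_MS = 60_000;

function makeLimiter(maxRequests: number) {
  const hits: number[] = []; // timestamps of accepted requests
  return (now: number): boolean => {
    // Drop hits that have slid out of the 60-second window.
    while (hits.length > 0 && now - hits[0] >= WINDOW_MS) hits.shift();
    if (hits.length >= maxRequests) return false; // rate limited
    hits.push(now);
    return true;
  };
}

const freeTier = makeLimiter(0);  // free users cannot send messages
const paidTier = makeLimiter(10); // 10 messages per 60-second window
```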

Token budget

Each paid user gets a token budget of 1,000,000 tokens per billing cycle. Token counts are tracked in Redis under the key chatbot-usage:<userId>. The budget resets on subscription renewal. When a user exhausts their tokens, the API returns a 429 with error code rate_limit:insufficient_tokens.

Compaction cache

When a conversation is compacted (summarized to fit within the context window), the resulting summary is cached in Redis under compaction:<chatId> with a 7-day TTL. This avoids re-summarizing the same messages on every request.
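
A get-or-compute sketch of this cache (the cache interface and summarize callback are placeholders):

```typescript
// Sketch of the compaction cache lookup: reuse a cached summary when
// present, otherwise summarize and cache with a 7-day TTL.
const COMPACTION_TTL_SECONDS = 7 * 24 * 60 * 60; // 7 days

interface Cache {
  get(key: string): string | undefined;
  set(key: string, value: string, ttlSeconds: number): void;
}

function compactionKey(chatId: string): string {
  return `compaction:${chatId}`; // key pattern from the docs
}

function getOrSummarize(cache: Cache, chatId: string, summarize: () => string): string {
  const key = compactionKey(chatId);
  const cached = cache.get(key);
  if (cached !== undefined) return cached; // avoid re-summarizing
  const summary = summarize();
  cache.set(key, summary, COMPACTION_TTL_SECONDS);
  return summary;
}
```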

File Uploads

File uploads use Railway's S3-compatible object storage. Users upload files directly to Railway via presigned POST URLs — the Next.js server never handles file bytes, keeping it lightweight.

Railway bucket setup

Create a volume (object storage) in your Railway project. Copy the bucket name, endpoint, region, access key, and secret key into the six FILE_UPLOAD_* environment variables listed in the Environment Variables section above. Set FILE_UPLOAD_RAILWAY_S3_FORCE_PATH_STYLE to true.

CORS configuration

Since browsers upload directly to Railway, you need to configure CORS on the bucket. Install the AWS CLI and run:

aws s3api put-bucket-cors \
  --bucket FILE_UPLOAD_BUCKET \
  --endpoint-url FILE_UPLOAD_ENDPOINT \
  --cors-configuration '{
    "CORSRules": [{
      "AllowedOrigins": [
        "http://localhost:3000",
        "https://yourdomain.com"
      ],
      "AllowedMethods": ["POST", "PUT"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3600
    }]
  }'

Verify with aws s3api get-bucket-cors --bucket FILE_UPLOAD_BUCKET --endpoint-url FILE_UPLOAD_ENDPOINT. Remember to include all origins where your app runs (localhost for dev, production domain, any preview deployments).

Supported files

Allowed MIME types: JPEG, PNG, WebP, GIF, and PDF. Maximum file size is 10 MB. Users can attach up to 10 files per message, with a concurrency limit of 10 simultaneous uploads. Files are stored with the key pattern chat-uploads/<userId>/<yyyy>/<mm>/<dd>/<uuid>-<filename>. These limits are configurable in code, though the defaults are reasonable for most use cases.
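
A sketch of the validation and key construction (randomUUID stands in for whatever ID generation the app actually uses):

```typescript
// Illustrative sketch of the upload limits and key pattern described above.
import { randomUUID } from "node:crypto";

const ALLOWED_MIME = new Set([
  "image/jpeg", "image/png", "image/webp", "image/gif", "application/pdf",
]);
const MAX_FILE_BYTES = 10 * 1024 * 1024; // 10 MB

function isAllowedUpload(mimeType: string, sizeBytes: number): boolean {
  return ALLOWED_MIME.has(mimeType) && sizeBytes <= MAX_FILE_BYTES;
}

// chat-uploads/<userId>/<yyyy>/<mm>/<dd>/<uuid>-<filename>
function uploadKey(userId: string, filename: string, date = new Date()): string {
  const yyyy = date.getUTCFullYear();
  const mm = String(date.getUTCMonth() + 1).padStart(2, "0");
  const dd = String(date.getUTCDate()).padStart(2, "0");
  return `chat-uploads/${userId}/${yyyy}/${mm}/${dd}/${randomUUID()}-${filename}`;
}
```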

Upload flow

The upload process is fully automated. The client validates the file, calls /api/file-uploads/railway/prepare to get a presigned POST URL, uploads the file directly to Railway, and then creates a database record via trpc.file.create. The UI provides a queue with progress tracking, cancel, and retry.

Organizations & Teams

The boilerplate includes a full multi-tenant system powered by better-auth's organization plugin. Organizations group users together and scope data — chats, memories, files, and usage events are all isolated per organization.

Organization roles

  • Owner — Full control. Can delete the organization, manage all members, and access everything.
  • Admin — Can manage members, invitations, teams, and view org-wide usage.
  • Member — Standard access. Can use the chatbot, manage own files, and view shared resources.

Teams

Within an organization, teams provide finer-grained grouping. Team admins can manage their team's members. File sharing can be scoped to specific teams rather than the entire org. Usage analytics can be filtered by team.

Invitations

Organization owners and admins can invite users by email. Invitations are sent via Resend and include an accept button. The invitation page at /accept-invitation/[id] handles authentication gating — if the invitee isn't signed in, they're redirected to sign-in first.

Usage analytics

The organization settings page includes a Usage tab with date range filtering, summary cards (messages sent, tokens used), a daily area chart powered by Recharts, and per-user breakdowns. Usage data is scoped to the active organization, team, or personal context.

tRPC API

The entire client-server API layer uses tRPC v11 for end-to-end type safety. All endpoints live under /api/trpc/* as a catch-all route. The client uses httpBatchLink for automatic request batching.

Router overview

The API is organized into eight routers with a total of 69 procedures:

Router     Procedures   Description
chat       16           Chat CRUD, messages, votes, streams
user       9            Profile, billing sync, sessions, preferences
usage      6            Token balance, rate limits, org analytics
payments   2            Stripe checkout & billing portal
memory     7            CRUD, full-text search, upsert (org-scoped)
file       11           CRUD, sharing, org/team scoping
org        11           Organization management, invitations, roles
team       7            Team CRUD, member management, permissions

Authentication

Two procedure types are available: publicProcedure (no auth required) and privateProcedure (requires a valid session). Private procedures automatically extract the user's activeOrganizationId from the session, so all downstream queries are correctly scoped.
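
Conceptually, the private guard behaves like this sketch (placeholder types and names, not the actual tRPC middleware):

```typescript
// Illustrative sketch: a private procedure rejects requests without a
// session and scopes downstream work to the active organization.
interface Session {
  userId: string;
  activeOrganizationId: string | null;
}

interface Ctx {
  session: Session | null;
}

function requirePrivate(ctx: Ctx): { userId: string; orgId: string | null } {
  if (!ctx.session) {
    throw new Error("UNAUTHORIZED"); // the real middleware throws a TRPCError
  }
  return {
    userId: ctx.session.userId,
    orgId: ctx.session.activeOrganizationId, // downstream queries scope to this
  };
}
```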

Adding new routes

To add a new router, create a file in trpc/routers/, define your procedures using publicProcedure or privateProcedure, and register the router in trpc/routes.ts. The client automatically picks up the new types — no code generation step needed.

Deployment

The boilerplate is ready to deploy out of the box. You have two options depending on your infrastructure preferences.

Vercel

The fastest path to production. Connect your repository to Vercel and every push to your main branch will trigger an automatic deployment. Vercel natively supports Next.js — no build configuration is needed. Set your environment variables in the Vercel project settings and you're live.

Docker (self-hosted)

A Dockerfile is included in the repository, allowing you to deploy anywhere Docker is supported — AWS ECS, Google Cloud Run, Fly.io, Railway, a VPS, or your own servers. Build and run the image:

# Using the compose.yml file
docker compose --profile app up --build

# Or build the image manually
docker build -t nextjs-standalone-image .

# Then run the container
docker run -p 3000:3000 --env-file .env.local -e NODE_ENV=production -e PORT=3000 nextjs-standalone-image

Pass your environment variables via --env-file or your platform's secret management. The container exposes port 3000 by default.

Deployment suggestion

Where to deploy your application is a deceptively tricky choice, especially for AI chatbots. You should consider your expected usage throughput and how it will affect your server(s).

AI chatbots tend to make a lot of requests, all while streaming their responses to the client. The Node.js server is kept busy waiting for tool calls to complete and for the token stream to finish. Depending on your use case, a single request may consistently take 500-600 seconds to complete. It happened to me!

Regardless of where you choose to deploy your chatbot, strongly consider a platform that lets you set up auto-scaling.

© 2026 hourzero. All rights reserved.