Documentation
Everything you need to set up and deploy your AI chatbot. Follow the sections below to configure each service and get running.
Before You Start
hourzero.dev is a production-grade codebase with a lot of moving parts — auth, billing, streaming, rate limiting, and more, all wired together. That's the value, but it also means jumping straight into modifications without understanding how the pieces connect will slow you down.
Before writing any code, spend some time reading. Walk through the folder structure, trace a chat message from the input box to the streaming response, look at how auth wraps API routes. You don't need to understand every line — you're building a mental map that will pay off every time you need to find or change something later.
Use an AI coding tool to help you navigate. Claude Code, Cursor, Codex, OpenCode — any of them will dramatically speed up your orientation. Ask them to explain a file, trace a data flow, or find where a feature is implemented. If you're not already using one of these, now is a great time to start. Each boilerplate (Starter, Pro, Enterprise) ships with a set of well-documented .md files that your AI assistant can reference.
Read the docs before the code. They're structured to walk you through setup, configuration, and architecture in order, and they'll give you the context that makes the codebase click.
And resist the urge to restructure early. Get the app running as-is, build your first feature on top of the existing patterns, and refactor later once you understand why things are where they are.
Take your time with this step. It makes everything that follows faster.
Quick Start
Get the boilerplate running locally in four steps. You need Node.js 20+, pnpm, and optionally Docker for a local PostgreSQL instance.
Install dependencies
```
pnpm install
```

Configure environment variables
Copy the example file and fill in your keys. Each variable is covered in detail in the sections below.
```
cp .env.example .env.local
```

Run database migrations
This creates all required tables in your PostgreSQL database. If you need a local instance, see the Database section.
```
pnpm db:migrate
```

Note: If you're using a managed Postgres instance (e.g. Supabase or Neon), try setting POSTGRES_URL to the Session Pooler or Transaction Pooler connection string rather than the Direct one. Some providers ship with default settings that can cause the migration script to fail on a direct connection.
Start the dev server
```
pnpm dev
```

Open http://localhost:3000 in your browser. The chatbot is ready to use once you provide at least one AI provider key.
Environment Variables
The project uses around 25 environment variables. Below is a grouped quick-reference. Each service section later in this page explains how to obtain the values.
Core
| Variable | Description | Source |
|---|---|---|
| BETTER_AUTH_SECRET | Secret for signing sessions and tokens | openssl rand -base64 32 |
| BETTER_AUTH_URL | Application base URL | http://localhost:3000 for local dev |
| NODE_ENV | Runtime environment | development or production |
| NEXT_PUBLIC_BETTER_AUTH_URL | Public URL used for absolute links | Your deployed domain |
| NEXT_PUBLIC_EMAIL_VERIFICATION_OPTION | Verification method for email sign-up | otp or magic_link |
Google Auth
| Variable | Description | Source |
|---|---|---|
| GOOGLE_CLIENT_ID | Google OAuth client ID | Google Cloud Console |
| GOOGLE_CLIENT_SECRET | Google OAuth client secret | Google Cloud Console |
AI Providers
| Variable | Description | Source |
|---|---|---|
| ANTHROPIC_API_KEY | Anthropic (Claude) API key | console.anthropic.com |
| OPENAI_API_KEY | OpenAI API key | platform.openai.com |
| GOOGLE_GENERATIVE_AI_API_KEY | Google Gemini API key | aistudio.google.com |
Database & Redis
| Variable | Description | Source |
|---|---|---|
| POSTGRES_URL | PostgreSQL connection string | See Database section |
| UPSTASH_REDIS_REST_URL | Upstash Redis REST endpoint | console.upstash.com |
| UPSTASH_REDIS_REST_TOKEN | Upstash Redis auth token | console.upstash.com |
Email
| Variable | Description | Source |
|---|---|---|
| RESEND_API_KEY | Resend transactional email API key | resend.com |
| ADMIN_EMAIL | Receives contact form submissions | Your email address |
Stripe
| Variable | Description | Source |
|---|---|---|
| STRIPE_SECRET_KEY | Stripe secret API key | Stripe Dashboard |
| STRIPE_WEBHOOK_SECRET | Webhook signing secret | Stripe CLI or Dashboard |
| STRIPE_PRO_MONTHLY_PRICE_ID | Price ID for monthly subscription | Stripe product page |
| STRIPE_PRO_YEARLY_PRICE_ID | Price ID for yearly subscription | Stripe product page |
AI Tools
| Variable | Description | Source |
|---|---|---|
| SERPER_API_KEY | Google Search via Serper | serper.dev |
| EXA_API_KEY | Snippet search via Exa | exa.ai |
| JINA_API_KEY | Webpage retrieval via Jina | jina.ai |
File Uploads (Railway)
| Variable | Description | Source |
|---|---|---|
| FILE_UPLOAD_BUCKET | S3 bucket name | Railway volume settings |
| FILE_UPLOAD_ENDPOINT | S3-compatible endpoint URL | https://storage.railway.app |
| FILE_UPLOAD_REGION | Bucket region | Railway volume settings |
| FILE_UPLOAD_ACCESS_KEY_ID | S3 access key | Railway volume settings |
| FILE_UPLOAD_SECRET_ACCESS_KEY | S3 secret key | Railway volume settings |
| FILE_UPLOAD_RAILWAY_S3_FORCE_PATH_STYLE | Force path-style URLs | Set to true |
Database
The project uses PostgreSQL as its primary database, managed through Drizzle ORM for schema definitions, migrations, and type-safe queries. The schema lives in lib/db/schema.ts and covers 18 tables across authentication, organizations, and application data.
Local PostgreSQL with Docker
The included Docker Compose file spins up a local PostgreSQL instance. Run it and set your connection string.
```
pnpm dev:docker
```

Then set your POSTGRES_URL in .env.local:
```
POSTGRES_URL=postgresql://postgres:postgres@localhost:5432/chatbot
```

Migration commands
Drizzle provides a full migration toolkit. The most common commands:
| Command | Purpose |
|---|---|
| pnpm db:generate | Generate migration files from schema changes |
| pnpm db:migrate | Run pending migrations against the database |
| pnpm db:studio | Open Drizzle Studio — a visual database browser |
| pnpm db:push | Push schema changes directly (dev only, no migration files) |
| pnpm db:pull | Pull existing database schema into Drizzle format |
Full-text search setup
The memory system uses PostgreSQL full-text search with a GIN index. Custom SQL files in lib/db/custom-sql/ are executed automatically after Drizzle migrations. The memory-fts-setup.sql file creates a search_memory function and a GIN index on to_tsvector('english', content). These scripts are idempotent — they are safe to run multiple times.
Authentication
Authentication is handled by better-auth, which provides email + password sign-up, session management, and organization support out of the box. A single catch-all route at /api/auth/* handles all auth endpoints automatically.
Required configuration
Generate a secret for signing sessions and tokens:
```
openssl rand -base64 32
```

Set it as BETTER_AUTH_SECRET in your .env.local. Also set BETTER_AUTH_URL to your application URL (http://localhost:3000 for local development).
Email verification mode
The boilerplate supports two email verification methods, controlled by NEXT_PUBLIC_EMAIL_VERIFICATION_OPTION:
- otp — Users receive a 6-digit code via email. They enter it on the sign-in page to verify.
- magic_link — Users receive a clickable link that signs them in directly.
Both methods require a working Resend configuration to send verification emails.
Post-signup behavior
When a user signs up, a Stripe customer is automatically created in the background. This ensures every user has a Stripe customer ID ready for when they subscribe to a paid plan. The organization onboarding flow is presented after first sign-up, allowing users to create or skip creating their first organization.
Session handling
Sessions are stored in the database. A session hook automatically populates activeOrganizationId on login, so all subsequent queries are correctly scoped to the user's active organization.
AI Providers
The chatbot supports three AI providers through the Vercel AI SDK v6. Eight models are registered in a custom provider, and users can switch between them from the chat UI.
API keys
You need at least one provider key to use the chatbot. Create accounts and generate API keys at:
- Anthropic — console.anthropic.com → set ANTHROPIC_API_KEY
- OpenAI — platform.openai.com → set OPENAI_API_KEY
- Google — aistudio.google.com → set GOOGLE_GENERATIVE_AI_API_KEY
Available models
| Model ID | Provider | Context | Notes |
|---|---|---|---|
| claude-opus-4.6 | Anthropic | 200K | Default chat model |
| claude-sonnet-4.6 | Anthropic | 200K | |
| claude-sonnet-4.5 | Anthropic | 200K | |
| claude-haiku-4.5 | Anthropic | 200K | |
| gpt-4.1-mini | OpenAI | 1M | Default model for summarizing a webpage when its content is too long |
| gpt-5.2 | OpenAI | 400K | |
| gemini-3-pro | Google | 1M | |
| gemini-2.5-flash-lite | Google | 1M | Internal only (generates conversation titles) |
Context window compaction
When a conversation exceeds 60% of the active model's context window (minimum 50,000 tokens), the system automatically compresses older messages into a summary. The summary is cached in Redis with a 7-day TTL. The 6 most recent messages are always preserved in full. This keeps long conversations functional without ballooning token costs.
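As a sketch, the trigger and split logic described above might look like this. Names and shapes are illustrative, not the actual functions in the codebase:

```typescript
// Compaction trigger: 60% of the context window, floored at 50K tokens.
// The 6 most recent messages are always preserved in full.
type Message = { role: string; content: string; tokens: number };

const COMPACTION_RATIO = 0.6;
const MIN_THRESHOLD = 50_000;
const KEEP_RECENT = 6;

function shouldCompact(totalTokens: number, contextWindow: number): boolean {
  const threshold = Math.max(contextWindow * COMPACTION_RATIO, MIN_THRESHOLD);
  return totalTokens > threshold;
}

// Split history into the span to summarize and the tail to keep verbatim.
function splitForCompaction(messages: Message[]): {
  toSummarize: Message[];
  toKeep: Message[];
} {
  const cut = Math.max(messages.length - KEEP_RECENT, 0);
  return { toSummarize: messages.slice(0, cut), toKeep: messages.slice(cut) };
}
```

For a 200K-context model, compaction would trigger at 120K tokens; for a model with a context window under ~83K, the 50K floor applies instead.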
AI Tools
The chatbot can do more than hold a conversation — it has four built-in tools that let it search the web, read pages, and execute code. The model invokes tools automatically when relevant, with up to 30 sequential tool calls per request.
Google Search (Serper)
Sign up at serper.dev and set SERPER_API_KEY. The model can perform Google searches and receive up to 10 organic results with titles, snippets, and URLs. Serper also serves as the primary webpage scraper for the retrieve tool.
Snippet Search (Exa)
Sign up at exa.ai and set EXA_API_KEY. This tool finds semantically relevant text snippets across the web — returning 9 results with highlighted passages of up to 13 sentences each. It also serves as a fallback for webpage retrieval.
Webpage Retrieval (Jina AI)
Sign up at jina.ai and set JINA_API_KEY. When the model needs to read a full webpage, a 3-provider fallback chain is used: Serper Scraper (17s timeout) → Exa → Jina AI. Content is truncated at 200K characters and summarized via GPT-4.1-mini if needed.
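The fallback chain above can be sketched as a generic helper that tries each provider in order with a per-provider timeout. This is an illustration, not the codebase's implementation — the provider functions here are injected stubs rather than the real Serper/Exa/Jina clients:

```typescript
// Try each retriever in order; move to the next on error or timeout.
type Retriever = (url: string) => Promise<string>;

const MAX_CONTENT_CHARS = 200_000; // truncation limit from the docs above

function withTimeout<T>(p: Promise<T>, ms: number): Promise<T> {
  return Promise.race([
    p,
    new Promise<never>((_, reject) =>
      setTimeout(() => reject(new Error("timeout")), ms)
    ),
  ]);
}

async function retrieveWithFallback(
  url: string,
  providers: { fn: Retriever; timeoutMs: number }[]
): Promise<string> {
  let lastError: unknown;
  for (const { fn, timeoutMs } of providers) {
    try {
      const content = await withTimeout(fn(url), timeoutMs);
      return content.slice(0, MAX_CONTENT_CHARS); // truncate very long pages
    } catch (err) {
      lastError = err; // fall through to the next provider
    }
  }
  throw lastError;
}
```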
Code Execution
Anthropic and Google models include native code execution in a sandboxed Python environment. No additional API keys are needed. Code runs in the provider's own sandbox — no access to the filesystem or network.
Adding custom tools
To add a new tool, define it in lib/ai/tools/ using the Vercel AI SDK tool() helper, register it in the chat API route, create a result component in components/tool-results/, and add a rendering case in the message component. Tool keys use snake_case and map to tool-* part types in the message stream.
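To make the shape concrete, here is a hand-rolled sketch of what a tool definition looks like structurally. In the codebase this would use the Vercel AI SDK's tool() helper with a schema; this version is plain TypeScript with no dependencies, and the get_weather tool is entirely hypothetical:

```typescript
// Minimal stand-in for a tool definition: a description plus an execute
// function. A real definition would also declare an input schema.
interface ToolDefinition<Args, Result> {
  description: string;
  execute: (args: Args) => Promise<Result>;
}

const getWeather: ToolDefinition<{ city: string }, { summary: string }> = {
  description: "Look up current weather for a city",
  execute: async ({ city }) => {
    // A real tool would call an external API here; this is a stub.
    return { summary: `Weather for ${city}: (stubbed)` };
  },
};

// Tool keys use snake_case, matching the tool-* part types in the stream.
const tools = { get_weather: getWeather };
```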
Email
Transactional emails are sent via Resend with templates built using React Email. The system handles verification emails, password resets, organization invitations, and contact form submissions.
Setup
Create a Resend account
Sign up at resend.com and create an API key. Set it as RESEND_API_KEY.
Verify your sending domain
In the Resend dashboard, add your domain and configure the DNS records they provide (SPF, DKIM, DMARC). Until verified, you can only send to your own email address.
Set the admin email
Set ADMIN_EMAIL to the address that should receive contact form submissions.
Built-in email types
- Verification link — Magic link for email verification
- Verification OTP — 6-digit code for verification, sign-in, or password reset
- Organization invitation — Invite with accept button linking to the app
- Password reset — Reset link for forgotten passwords
- Contact form — Submission forwarded to the admin email
Templates live in lib/emails/templates/ and utility functions in lib/emails/utils.ts.
Payments
Subscription billing is powered by Stripe. The boilerplate includes two plans, a checkout flow, a billing management portal, and webhook handlers that sync subscription state to the database.
Stripe setup
Get your API keys
Go to Stripe Dashboard → Developers → API keys. Copy the secret key (starts with sk_test_ in test mode) and set it as STRIPE_SECRET_KEY.
Create products and prices
In the Stripe Dashboard, create a product with the following price options:
- Pro Monthly — recurring, monthly billing
- Pro Yearly — recurring, yearly billing
Copy each Price ID (starts with price_) into STRIPE_PRO_MONTHLY_PRICE_ID and STRIPE_PRO_YEARLY_PRICE_ID.
Set up webhooks for local development
Install the Stripe CLI and forward events to your local server:
```
# Install (macOS)
brew install stripe/stripe-cli/stripe

# Log in to your Stripe account
stripe login

# Forward webhook events to your local server
stripe listen --forward-to localhost:3000/api/webhooks/payments
```

The CLI prints a webhook signing secret (starts with whsec_). Set it as STRIPE_WEBHOOK_SECRET.
Webhook events handled
The webhook route at /api/webhooks/payments processes four event types:
- checkout.session.completed — First-time purchase. Updates the user with subscription fields and resets their token budget to 1,000,000.
- customer.subscription.updated — Plan changes or renewals. Syncs subscription fields and resets tokens.
- invoice.payment_succeeded — Confirms the plan is active.
- customer.subscription.deleted — Cancellation. Clears all Stripe fields and deactivates the plan.
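The event-to-action mapping can be sketched as a plain dispatcher. The action names here are illustrative — the real route also verifies the Stripe signature and writes to the database:

```typescript
// Map the four handled Stripe event types to subscription actions.
const TOKEN_BUDGET = 1_000_000;

type SubscriptionAction =
  | { kind: "activate"; resetTokensTo: number }
  | { kind: "confirm" }
  | { kind: "deactivate" };

function handleStripeEvent(eventType: string): SubscriptionAction | null {
  switch (eventType) {
    case "checkout.session.completed":
    case "customer.subscription.updated":
      // Sync subscription fields and reset the token budget.
      return { kind: "activate", resetTokensTo: TOKEN_BUDGET };
    case "invoice.payment_succeeded":
      return { kind: "confirm" };
    case "customer.subscription.deleted":
      // Clear Stripe fields and deactivate the plan.
      return { kind: "deactivate" };
    default:
      return null; // ignore event types we don't handle
  }
}
```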
Production webhooks
For production, create a webhook endpoint in the Stripe Dashboard pointing to https://yourdomain.com/api/webhooks/payments and subscribe it to the four events above. Use the signing secret from the Dashboard (not the CLI) as your production STRIPE_WEBHOOK_SECRET.
Redis
Upstash Redis is used for three purposes: rate limiting, token budget tracking, and conversation compaction caching. It operates over HTTP, so no persistent connections are needed — ideal for serverless environments.
Setup
Create a free Redis database at console.upstash.com. Copy the REST URL and token from the database details page into UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN.
Rate limiting
Requests are rate-limited using a sliding-window algorithm via the @upstash/ratelimit package:
- Free tier — 0 messages per 60 seconds (effectively blocked from sending messages)
- Paid tier — 10 messages per 60-second window
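To illustrate the algorithm only — the real implementation is @upstash/ratelimit backed by Redis, not this in-memory class — a sliding-window limiter can be sketched as a per-key log of recent request timestamps:

```typescript
// Sliding-window (log-based) limiter: a request is allowed if fewer than
// `limit` requests fall inside the trailing window. Denied requests do not
// count toward the window here, a deliberate simplification.
class SlidingWindowLimiter {
  private hits = new Map<string, number[]>();
  constructor(private limit: number, private windowMs: number) {}

  allow(key: string, now: number = Date.now()): boolean {
    const cutoff = now - this.windowMs;
    const recent = (this.hits.get(key) ?? []).filter((t) => t > cutoff);
    if (recent.length >= this.limit) {
      this.hits.set(key, recent);
      return false; // over the limit for this window
    }
    recent.push(now);
    this.hits.set(key, recent);
    return true;
  }
}
```

With limit = 10 and windowMs = 60_000, this matches the paid-tier policy above: the 11th message inside any 60-second span is rejected.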
Token budget
Each paid user gets a token budget of 1,000,000 tokens per billing cycle. Token counts are tracked in Redis under the key chatbot-usage:<userId>. The budget resets on subscription renewal. When a user exhausts their tokens, the API returns a 429 with error code rate_limit:insufficient_tokens.
Compaction cache
When a conversation is compacted (summarized to fit within the context window), the resulting summary is cached in Redis under compaction:<chatId> with a 7-day TTL. This avoids re-summarizing the same messages on every request.
File Uploads
File uploads use Railway's S3-compatible object storage. Users upload files directly to Railway via presigned POST URLs — the Next.js server never handles file bytes, keeping it lightweight.
Railway bucket setup
Create a volume (object storage) in your Railway project. Copy the bucket name, endpoint, region, access key, and secret key into the six FILE_UPLOAD_* environment variables listed in the Environment Variables section above. Set FILE_UPLOAD_RAILWAY_S3_FORCE_PATH_STYLE to true.
CORS configuration
Since browsers upload directly to Railway, you need to configure CORS on the bucket. Install the AWS CLI and run:
```
aws s3api put-bucket-cors \
  --bucket FILE_UPLOAD_BUCKET \
  --endpoint-url FILE_UPLOAD_ENDPOINT \
  --cors-configuration '{
    "CORSRules": [{
      "AllowedOrigins": [
        "http://localhost:3000",
        "https://yourdomain.com"
      ],
      "AllowedMethods": ["POST", "PUT"],
      "AllowedHeaders": ["*"],
      "MaxAgeSeconds": 3600
    }]
  }'
```

Verify with aws s3api get-bucket-cors --bucket FILE_UPLOAD_BUCKET --endpoint-url FILE_UPLOAD_ENDPOINT. Remember to include every origin your app runs on (localhost for dev, your production domain, any preview deployments).
Supported files
Allowed MIME types: JPEG, PNG, WebP, GIF, and PDF. Maximum file size is 10 MB. Users can attach up to 10 files per message, with a concurrency limit of 10 simultaneous uploads. Files are stored with the key pattern chat-uploads/<userId>/<yyyy>/<mm>/<dd>/<uuid>-<filename>. These limits are configurable in code, though the defaults are reasonable for most use cases.
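The validation rules and key pattern above can be sketched as two small helpers. Function names are illustrative, and the UUID is passed in so the key builder stays deterministic:

```typescript
// Upload constraints from the docs above: five MIME types, 10 MB cap.
const ALLOWED_TYPES = new Set([
  "image/jpeg", "image/png", "image/webp", "image/gif", "application/pdf",
]);
const MAX_BYTES = 10 * 1024 * 1024; // 10 MB

function validateUpload(mime: string, sizeBytes: number): boolean {
  return ALLOWED_TYPES.has(mime) && sizeBytes <= MAX_BYTES;
}

// Builds chat-uploads/<userId>/<yyyy>/<mm>/<dd>/<uuid>-<filename>
function storageKey(userId: string, uuid: string, filename: string, date: Date): string {
  const yyyy = date.getUTCFullYear();
  const mm = String(date.getUTCMonth() + 1).padStart(2, "0");
  const dd = String(date.getUTCDate()).padStart(2, "0");
  return `chat-uploads/${userId}/${yyyy}/${mm}/${dd}/${uuid}-${filename}`;
}
```

The date-partitioned prefix keeps listings small and makes per-day cleanup or archiving straightforward.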
Upload flow
The upload process is fully automated. The client validates the file, calls /api/file-uploads/railway/prepare to get a presigned POST URL, uploads the file directly to Railway, and then creates a database record via trpc.file.create. The UI provides a queue with progress tracking, cancel, and retry.
Organizations & Teams
The boilerplate includes a full multi-tenant system powered by better-auth's organization plugin. Organizations group users together and scope data — chats, memories, files, and usage events are all isolated per organization.
Organization roles
- Owner — Full control. Can delete the organization, manage all members, and access everything.
- Admin — Can manage members, invitations, teams, and view org-wide usage.
- Member — Standard access. Can use the chatbot, manage own files, and view shared resources.
Teams
Within an organization, teams provide finer-grained grouping. Team admins can manage their team's members. File sharing can be scoped to specific teams rather than the entire org. Usage analytics can be filtered by team.
Invitations
Organization owners and admins can invite users by email. Invitations are sent via Resend and include an accept button. The invitation page at /accept-invitation/[id] handles authentication gating — if the invitee isn't signed in, they're redirected to sign-in first.
Usage analytics
The organization settings page includes a Usage tab with date range filtering, summary cards (messages sent, tokens used), a daily area chart powered by Recharts, and per-user breakdowns. Usage data is scoped to the active organization, team, or personal context.
tRPC API
The entire client-server API layer uses tRPC v11 for end-to-end type safety. All endpoints live under /api/trpc/* as a catch-all route. The client uses httpBatchLink for automatic request batching.
Router overview
The API is organized into eight routers with a total of 69 procedures:
| Router | Procedures | Description |
|---|---|---|
| chat | 16 | Chat CRUD, messages, votes, streams |
| user | 9 | Profile, billing sync, sessions, preferences |
| usage | 6 | Token balance, rate limits, org analytics |
| payments | 2 | Stripe checkout & billing portal |
| memory | 7 | CRUD, full-text search, upsert (org-scoped) |
| file | 11 | CRUD, sharing, org/team scoping |
| org | 11 | Organization management, invitations, roles |
| team | 7 | Team CRUD, member management, permissions |
Authentication
Two procedure types are available: publicProcedure (no auth required) and privateProcedure (requires a valid session). Private procedures automatically extract the user's activeOrganizationId from the session, so all downstream queries are correctly scoped.
Adding new routes
To add a new router, create a file in trpc/routers/, define your procedures using publicProcedure or privateProcedure, and register the router in trpc/routes.ts. The client automatically picks up the new types — no code generation step needed.
Deployment
The boilerplate is ready to deploy out of the box. You have two options depending on your infrastructure preferences.
Vercel
The fastest path to production. Connect your repository to Vercel and every push to your main branch will trigger an automatic deployment. Vercel natively supports Next.js — no build configuration is needed. Set your environment variables in the Vercel project settings and you're live.
Docker (self-hosted)
A Dockerfile is included in the repository, allowing you to deploy anywhere Docker is supported — AWS ECS, Google Cloud Run, Fly.io, Railway, a VPS, or your own servers. Build and run the image:
```
# Using the compose.yml file
docker compose --profile app up --build

# Or build the image manually and run the container
docker build -t nextjs-standalone-image .
docker run -p 3000:3000 --env-file .env.local -e NODE_ENV=production -e PORT=3000 nextjs-standalone-image
```

Pass your environment variables via --env-file or your platform's secret management. The container exposes port 3000 by default.
Deployment suggestion
The choice of where to deploy your application is deceptively tricky, especially for AI chatbots. Consider your expected usage throughput and how it will load your server(s).
What is your expected usage volume? AI chatbots tend to make many long-lived requests, streaming responses to the client while the Node.js server waits for tool calls to complete and the token stream to finish. Depending on your use case, a single request may consistently take 500–600 seconds to complete. It happened to me!
Regardless of where you choose to deploy your chatbot, you should strongly consider a choice that allows you to set up auto-scaling.