
Presigned or it didn't happen: safe direct-to-bucket uploads in Next.js

Bogdan Aioanei
Tags: guide · nextjs · file-uploads · presigned-url

TL;DR

Don't proxy file uploads through your Next.js server. Instead, authenticate the user, validate the upload intent (type, size, filename), generate a short-lived presigned POST with policy conditions that lock the key, content type, and size range, then let the browser upload directly to your S3-compatible bucket. The bucket enforces your rules even if the client lies. Your server stays lean, your bandwidth stays cheap, and your security is enforced at two layers instead of one.

File uploads are one of those features that feel straightforward until you actually sit down to build them. You think: user picks a file, file goes to server, server saves it somewhere. Done. Then you remember what “somewhere” really means, you start thinking about auth, abuse, and bandwidth, and what looked like an afternoon task starts turning into a small infrastructure project with feelings. Suddenly you’re asking yourself whether streaming every byte through your Next.js server is wise, or whether you’ve accidentally built a public “upload anything” endpoint with extra steps. Fun.

The pattern that cleans this up is direct-to-bucket uploads via presigned requests. Your server never touches the file bytes. Instead, it authenticates the user, validates what they’re trying to upload, generates a short‑lived cryptographically signed permission slip that bakes your constraints into the storage policy, and hands that back to the browser. The browser then uploads straight to your S3-compatible bucket using that slip. If the upload violates your rules—wrong content type, file too large, expired policy—the bucket rejects it, even if the client tries to get creative. Your app stays lean, your bandwidth costs stay sane, and your security story is enforced in two places instead of one.

In this tutorial I’ll use TypeScript, Next.js (App Router), and Railway’s S3-compatible buckets (and yes, the free egress is nice). The same approach works with AWS S3, R2, MinIO, and basically anything that speaks S3. The names change. The shape of the solution doesn’t.

Let’s get it.

Step 0: Type definitions

Before a single byte moves anywhere, you need to establish a shared language between your queue, your uploader, and your UI. This is the part of every tutorial where people’s eyes glaze over. It’s also the part that prevents your future self from shipping “progress: 700%” to production and learning humility the hard way.

Create src/lib/file-uploads/types.ts:

export type UploadStatus =
  | "queued"
  | "uploading"
  | "success"
  | "error"
  | "cancelled";

export type UploadProgress = {
  loaded: number;
  total: number;
  percent: number;
};

export type UploadResult = {
  key: string;
  objectUrl: string;
  previewUrl: string;
};

export type UploadQueueItem = {
  id: string;
  file: File;
  status: UploadStatus;
  progress: UploadProgress | null;
  result: UploadResult | null;
  error: string | null;
};

UploadStatus is the lifecycle of a file as it slowly transforms from “I swear I clicked upload” into “ok it’s actually in the bucket.” It starts as "queued", becomes "uploading", and eventually lands on "success", "error", or "cancelled" depending on how the day is going. UploadProgress is the shape you can render without inventing a new math problem. UploadResult is what you store when the upload is real: the object key, and two URLs you can use to reference or preview it.
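If you want to be defensive about percent ever drifting outside 0–100 (the "700%" scenario), a tiny clamping helper does it. This is a sketch, not one of the tutorial's files:

```typescript
// Sketch only: mirrors the UploadProgress shape from types.ts and clamps
// percent so a weird progress event can never render as "700%".
type UploadProgress = { loaded: number; total: number; percent: number };

function toUploadProgress(loaded: number, total: number): UploadProgress {
  // total === 0 covers empty files and non-lengthComputable events.
  const ratio = total > 0 ? loaded / total : 0;
  const percent = Math.min(100, Math.max(0, Math.round(ratio * 100)));
  return { loaded, total, percent };
}
```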

Now add two more types, because the queue needs to talk to the thing that actually uploads:

export type UploadAdapterInput<TContext = unknown> = {
  file: File;
  context?: TContext;
  signal: AbortSignal;
  onProgress: (progress: UploadProgress) => void;
};

export type UploadAdapter<TContext = unknown> = (
  input: UploadAdapterInput<TContext>,
) => Promise<UploadResult>;

UploadAdapter is your contract. It says, “give me a function that accepts a file, an AbortSignal, and a progress callback, and I will treat it like an uploader.” The queue doesn’t care if you’re uploading to Railway, S3, R2, MinIO, or an old laptop you repurposed into ‘cloud.’ It just wants the interface. TContext is there for when you eventually want to attach metadata to an upload, like a chat message id or a “this belongs to project X” tag. You won’t need it until you do. That’s how it works.
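To make the contract concrete, here's a minimal fake adapter you could hand to the queue in tests or Storybook. It's illustrative only, with the types repeated so the snippet stands alone:

```typescript
// Illustrative only: a network-free adapter that satisfies the
// UploadAdapter contract. Types are duplicated from types.ts so this
// snippet is self-contained.
type UploadProgress = { loaded: number; total: number; percent: number };
type UploadResult = { key: string; objectUrl: string; previewUrl: string };

type UploadAdapterInput<TContext = unknown> = {
  file: File;
  context?: TContext;
  signal: AbortSignal;
  onProgress: (progress: UploadProgress) => void;
};
type UploadAdapter<TContext = unknown> = (
  input: UploadAdapterInput<TContext>,
) => Promise<UploadResult>;

const fakeUploader: UploadAdapter = async ({ file, signal, onProgress }) => {
  const total = file.size;
  // Report a couple of synthetic progress ticks, respecting cancellation.
  for (const fraction of [0.5, 1]) {
    if (signal.aborted) throw new DOMException("Aborted", "AbortError");
    const loaded = Math.round(total * fraction);
    onProgress({ loaded, total, percent: Math.round(fraction * 100) });
  }
  return {
    key: `fake/${file.name}`,
    objectUrl: `memory://fake/${file.name}`,
    previewUrl: `memory://fake/${file.name}`,
  };
};
```

Because the queue only sees the interface, swapping this for the real Railway uploader later changes nothing upstream.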


Step 1: The user picks a file (React)

This is where the story begins: the user clicks, selects a file, and expects magic. Your job in the UI is not to understand presigned policies or object storage quirks. Your job is to take a File object and hand it to the queue like it’s a fragile package and you’re not paid enough to ask questions.

Start with the component in src/components/ChatUploader.tsx:

"use client";

import { useMemo } from "react";
import { useFileUploadQueue } from "@/lib/file-uploads/use-file-upload-queue";
import { createRailwayPresignedPostUploader } from "@/lib/file-uploads/railway/client-uploader";

export function ChatUploader() {
  const uploader = useMemo(() => createRailwayPresignedPostUploader(), []);

  const { items, enqueue, cancel, retry, remove } = useFileUploadQueue({
    uploader,
    concurrency: 2,
    autoStart: true,
    onItemSuccess: (item) => {
      console.log("Upload succeeded:", item.result);
    },
    onItemError: (item, err) => {
      console.error("Upload failed:", item.id, err);
    },
  });

  return (
    <div>
      <input
        type="file"
        multiple
        onChange={(e) => {
          if (!e.target.files) return;
          enqueue(e.target.files);
          e.target.value = "";
        }}
      />

      <div style={{ marginTop: 12 }}>
        {items.map((item) => (
          <div key={item.id} style={{ padding: 8, border: "1px solid #ddd", marginBottom: 8 }}>
            <div>{item.file.name}</div>
            <div>Status: {item.status}</div>

            {item.progress && (
              <div>Progress: {item.progress.percent}%</div>
            )}

            {item.error && (
              <div style={{ color: "crimson" }}>{item.error}</div>
            )}

            {item.status === "uploading" && (
              <button onClick={() => cancel(item.id)}>Cancel</button>
            )}

            {item.status === "error" && (
              <button onClick={() => retry(item.id)}>Retry</button>
            )}

            {item.status === "success" && item.result && (
              <a href={item.result.previewUrl} target="_blank" rel="noreferrer">
                Preview
              </a>
            )}

            {(item.status === "success" || item.status === "error" || item.status === "cancelled") && (
              <button onClick={() => remove(item.id)}>Remove</button>
            )}
          </div>
        ))}
      </div>
    </div>
  );
}

A few things worth calling out. uploader is wrapped in useMemo so it doesn’t get recreated on every render, because React will happily do that forever and your queue will politely suffer in silence. Small thing. Big peace.

Also, that e.target.value = "" line is there because browsers have a personality. Without it, selecting the same file twice won’t fire onChange, and you’ll get a bug report that reads “upload button randomly stops working” from someone who is technically correct. Resetting the input value makes the browser behave like a tool instead of a mood.

The UI is intentionally plain. You can make it pretty later. The shape matters: render each item’s status, show progress if you have it, and expose the right action at the right time. Cancel while uploading. Retry on error. Preview on success. The queue already tracks all of this. Your component just tells the story on screen.

And onItemSuccess is where you’d do real app work. In a chat app, that’s where you’d attach the returned key to a pending message. That’s the moment the upload becomes “an asset the rest of your app can refer to,” not just “a request that didn’t crash.”
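As a sketch of that "real app work" in code (the PendingMessage shape and field names here are hypothetical, not from this tutorial):

```typescript
// Hypothetical helper: how onItemSuccess might attach an upload to a
// pending chat message. PendingMessage and attachmentKeys are made-up
// names for illustration.
type UploadResult = { key: string; objectUrl: string; previewUrl: string };

type PendingMessage = {
  id: string;
  text: string;
  attachmentKeys: string[];
};

function attachUploadToMessage(
  message: PendingMessage,
  result: UploadResult,
): PendingMessage {
  // Store only the object key; URLs can be re-derived or re-signed later.
  return {
    ...message,
    attachmentKeys: [...message.attachmentKeys, result.key],
  };
}
```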


Step 2: The upload queue (your traffic controller)

When the user picks files, enqueue is called, and this hook becomes the adult supervision. It decides when uploads start, how many run in parallel, how you track progress, and what happens when the user panic-clicks cancel. This is the most complex part of the client-side flow, so we’ll do it the way sane people do: slowly, with good names, and with enough refs to make React stop gaslighting you.

Create src/lib/file-uploads/use-file-upload-queue.ts. Start with the setup:

"use client";

import { useCallback, useMemo, useRef, useState } from "react";
import type {
  UploadAdapter,
  UploadProgress,
  UploadQueueItem,
  UploadResult,
} from "@/lib/file-uploads/types";

type UploadQueueInternalItem<TContext = unknown> = UploadQueueItem & {
  context?: TContext;
};

export type UseFileUploadQueueOptions<TContext = unknown> = {
  uploader: UploadAdapter<TContext>;
  concurrency?: number;
  autoStart?: boolean;
  onItemSuccess?: (item: UploadQueueItem) => void;
  onItemError?: (item: UploadQueueItem, error: Error) => void;
  onAllComplete?: (items: UploadQueueItem[]) => void;
};

function createQueueItem<TContext = unknown>(
  file: File,
  context?: TContext,
): UploadQueueInternalItem<TContext> {
  return {
    id: crypto.randomUUID(),
    file,
    context,
    status: "queued",
    progress: null,
    result: null,
    error: null,
  };
}

UploadQueueInternalItem extends UploadQueueItem with an optional context. That context never leaves the hook. It’s just passed through to the uploader when it’s time to run. createQueueItem gives every file a UUID immediately, which means you can render the item and track it even before the first network request happens. This makes the UI feel responsive, which users interpret as “reliable,” even when the Wi‑Fi is plotting against them.

Now add the hook itself, starting with state and refs:

export function useFileUploadQueue<TContext = unknown>({
  uploader,
  concurrency = 2,
  autoStart = true,
  onItemSuccess,
  onItemError,
  onAllComplete,
}: UseFileUploadQueueOptions<TContext>) {
  const [items, setItems] = useState<UploadQueueInternalItem<TContext>[]>([]);
  const [isRunning, setIsRunning] = useState(autoStart);
  const itemsRef = useRef<UploadQueueInternalItem<TContext>[]>([]);
  const activeCountRef = useRef(0);
  const controllersRef = useRef<Map<string, AbortController>>(new Map());
  const processQueueRef = useRef<() => void>(() => undefined);

You’ll notice there’s both React state (items) and an itemsRef. That’s not paranoia. It’s experience. React state is great for rendering, but async code loves to read stale values from closures. itemsRef is the escape hatch: it always points at the latest list, even while uploads are mid-flight. activeCountRef is the same idea for concurrency: it’s a number you can update synchronously without waiting for renders. It keeps your concurrency logic honest.

controllersRef is a Map from item id to AbortController. One controller per active upload. This is the nice version of cancellation: no special cases, no global flags, no “we’ll ignore the response when it comes back.” When you cancel, you abort the signal. The upload stops.
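If the stale-closure problem feels abstract, here's a React-free sketch of it. A snapshot taken when a closure is created stays frozen, the way values in a render closure do; reading through a mutable ref always sees the latest value, which is what mid-flight async code needs:

```typescript
// A React-free demonstration of why the hook reads through refs.
const ref = { current: ["a"] };

function makeReader() {
  const snapshot = ref.current; // captured once, like state in a render closure
  return () => ({ viaSnapshot: snapshot.length, viaRef: ref.current.length });
}

const read = makeReader();
ref.current = ["a", "b", "c"]; // the list moved on, e.g. more uploads enqueued
```

Calling `read()` now reports 1 via the snapshot but 3 via the ref. That gap is exactly why `itemsRef` and `activeCountRef` exist.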

Next, add updateItem and checkAllComplete:

const updateItem = useCallback(
  (
    id: string,
    updater: (
      item: UploadQueueInternalItem<TContext>,
    ) => UploadQueueInternalItem<TContext>,
  ) => {
    setItems((currentItems) => {
      const nextItems = currentItems.map((item) =>
        item.id === id ? updater(item) : item,
      );
      itemsRef.current = nextItems;
      return nextItems;
    });
  },
  [],
);

const checkAllComplete = useCallback(() => {
  const currentItems = itemsRef.current;
  if (currentItems.length === 0) return;

  const hasPendingItems = currentItems.some(
    (item) => item.status === "queued" || item.status === "uploading",
  );
  if (!hasPendingItems && activeCountRef.current === 0) {
    onAllComplete?.(currentItems);
  }
}, [onAllComplete]);

updateItem takes an updater function so you always compute the next value from the current value, not from whatever your callback happened to capture an hour ago. It also keeps itemsRef in sync inside the same setItems call, so your ref doesn’t lag behind your UI. checkAllComplete uses that ref because completion checks usually happen in finally blocks, and finally blocks are not the place to discover you were looking at yesterday’s state.

Now the most important piece: runItemUpload:

const runItemUpload = useCallback(
  async (item: UploadQueueInternalItem<TContext>) => {
    activeCountRef.current += 1;
    const controller = new AbortController();
    controllersRef.current.set(item.id, controller);

    updateItem(item.id, (currentItem) => ({
      ...currentItem,
      status: "uploading",
      error: null,
      progress: { loaded: 0, total: item.file.size, percent: 0 },
    }));

    try {
      const result = await uploader({
        file: item.file,
        context: item.context,
        signal: controller.signal,
        onProgress: (progress: UploadProgress) => {
          updateItem(item.id, (currentItem) => ({
            ...currentItem,
            progress,
          }));
        },
      });

      const nextItem: UploadQueueInternalItem<TContext> = {
        ...item,
        status: "success",
        result,
        error: null,
        progress: {
          loaded: item.file.size,
          total: item.file.size,
          percent: 100,
        },
      };

      updateItem(item.id, () => nextItem);
      onItemSuccess?.(nextItem);
    } catch (error) {
      const wasCancelled = controller.signal.aborted;
      const resolvedError =
        error instanceof Error ? error : new Error("Upload failed.");

      const nextItem: UploadQueueInternalItem<TContext> = {
        ...item,
        status: wasCancelled ? "cancelled" : "error",
        error: wasCancelled ? null : resolvedError.message,
        progress: null,
        result: null,
      };

      updateItem(item.id, () => nextItem);
      if (!wasCancelled) {
        onItemError?.(nextItem, resolvedError);
      }
    } finally {
      controllersRef.current.delete(item.id);
      activeCountRef.current -= 1;
      checkAllComplete();
      queueMicrotask(() => processQueueRef.current());
    }
  },
  [checkAllComplete, onItemError, onItemSuccess, updateItem, uploader],
);

This is the heartbeat. When an upload starts, you increment the active count, register an AbortController, and immediately mark the item as "uploading" so the UI matches reality. Then you call uploader and wait. That uploader is the next step, and it’s going to do two network requests, which is where the real presigned magic lives.

The error branch distinguishes cancellation from failure. That’s subtle. It’s also what makes the UI feel respectful: “cancelled” isn’t an error, it’s a choice. The finally block cleans up and then schedules another processQueue run via queueMicrotask, which avoids the “two slots are open but only one upload starts” race condition that shows up when state updates haven’t flushed yet.
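Here's a minimal look at the timing `queueMicrotask` buys: the scheduled callback runs only after the current synchronous call stack finishes, never in the middle of it.

```typescript
// queueMicrotask defers the callback until the current synchronous
// work is done, which is why the finally block can schedule another
// processQueue pass without racing its own cleanup.
const order: string[] = [];

function finallyBlock() {
  order.push("cleanup starts");
  queueMicrotask(() => order.push("processQueue runs"));
  order.push("cleanup ends");
}

finallyBlock();
// The microtask fires only after finallyBlock has fully returned.
```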

Now add processQueue:

const processQueue = useCallback(() => {
  if (!isRunning) return;

  const maxConcurrency = Math.max(1, Math.floor(concurrency));
  const currentItems = itemsRef.current;
  if (currentItems.length === 0) return;

  const availableSlots = maxConcurrency - activeCountRef.current;
  if (availableSlots <= 0) return;

  const queuedBatch = currentItems
    .filter((item) => item.status === "queued")
    .slice(0, availableSlots);

  for (const item of queuedBatch) {
    void runItemUpload(item);
  }
}, [concurrency, isRunning, runItemUpload]);
// Keep the ref pointing at the latest processQueue (the "latest ref"
// pattern), so callbacks scheduled earlier invoke the current version.
processQueueRef.current = processQueue;

This is the concurrency governor. It calculates available slots, selects a stable batch of queued items, and starts them. The .slice(0, availableSlots) is doing quiet hero work. Without it, you can schedule the same queued item twice before status updates land. Then you get “why did my file upload twice?” and you start questioning your life choices.

Finally, the public API your component calls:

  const enqueue = useCallback(
    (files: File[] | FileList, context?: TContext) => {
      const nextItems = Array.from(files).map((file) =>
        createQueueItem(file, context),
      );

      setItems((currentItems) => {
        const merged = currentItems.concat(nextItems);
        itemsRef.current = merged;
        return merged;
      });

      if (autoStart || isRunning) {
        setIsRunning(true);
        queueMicrotask(processQueue);
      }

      return nextItems.map((item) => item.id);
    },
    [autoStart, isRunning, processQueue],
  );

  const start = useCallback(() => {
    setIsRunning(true);
    queueMicrotask(processQueue);
  }, [processQueue]);

  const pause = useCallback(() => {
    setIsRunning(false);
  }, []);

  const cancel = useCallback(
    (itemId?: string) => {
      if (itemId) {
        controllersRef.current.get(itemId)?.abort();
        updateItem(itemId, (currentItem) => ({
          ...currentItem,
          status: "cancelled",
          error: null,
          progress: null,
        }));
        return;
      }

      for (const controller of controllersRef.current.values()) {
        controller.abort();
      }
      controllersRef.current.clear();
      setItems((currentItems) => {
        const nextItems = currentItems.map(
          (item): UploadQueueInternalItem<TContext> =>
            item.status === "uploading"
              ? { ...item, status: "cancelled", progress: null, error: null }
              : item,
        );
        itemsRef.current = nextItems;
        return nextItems;
      });
    },
    [updateItem],
  );

  const retry = useCallback(
    (itemId: string) => {
      updateItem(itemId, (currentItem) => ({
        ...currentItem,
        status: "queued",
        error: null,
        progress: null,
      }));
      queueMicrotask(processQueue);
    },
    [processQueue, updateItem],
  );

  const remove = useCallback((itemId: string) => {
    controllersRef.current.get(itemId)?.abort();
    setItems((currentItems) => {
      const nextItems = currentItems.filter((item) => item.id !== itemId);
      itemsRef.current = nextItems;
      return nextItems;
    });
  }, []);

  const clear = useCallback(() => {
    for (const controller of controllersRef.current.values()) {
      controller.abort();
    }
    controllersRef.current.clear();
    activeCountRef.current = 0;
    itemsRef.current = [];
    setItems([]);
  }, []);

  const stats = useMemo(() => {
    const queued = items.filter((i) => i.status === "queued").length;
    const uploading = items.filter((i) => i.status === "uploading").length;
    const success = items.filter((i) => i.status === "success").length;
    const error = items.filter((i) => i.status === "error").length;
    const cancelled = items.filter((i) => i.status === "cancelled").length;
    return { queued, uploading, success, error, cancelled, total: items.length };
  }, [items]);

  return {
    items,
    stats,
    isRunning,
    enqueue,
    start,
    pause,
    cancel,
    retry,
    remove,
    clear,
    processQueue,
  };
}

These helpers are exactly what you want: enqueue, cancel, retry, remove, clear. They read like the actions a user expects. stats is derived state for UI summaries like “3 uploading, 12 queued,” which is the kind of information users don’t ask for explicitly but immediately miss when it’s not there.


Step 3: The uploader adapter (where “uploading” becomes two requests)

Now the queue calls your uploader, and we enter the part of the story that surprises people: direct-to-bucket uploads are a two-request dance. First you ask your app server for permission. Then you upload bytes directly to the bucket using that permission slip. If you skip the first request, you’re either shipping credentials to the browser (no) or you’re leaving your bucket wide open (also no).

Create src/lib/file-uploads/railway/client-uploader.ts. Start with the response shape your /prepare route will return:

// src/lib/file-uploads/railway/client-uploader.ts
"use client";

import type { UploadAdapter } from "@/lib/file-uploads/types";

type PrepareResponse = {
  key: string;
  url: string;
  fields: Record<string, string>;
  objectUrl: string;
  previewUrl: string;
};

This is the contract: the server returns a bucket URL and form fields for a presigned POST, plus a key and some convenience URLs. The browser doesn’t get credentials. It gets a short-lived capability with tight constraints.
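For a feel of what that payload actually carries, here's a representative response. Every value is fabricated for illustration; real `Policy` and `X-Amz-Signature` strings come from the signing SDK:

```typescript
// Illustrative only: roughly what a PrepareResponse looks like for an
// S3-style presigned POST. All values are made up.
const examplePrepareResponse = {
  key: "chat-uploads/user_123/9b1df0c2-fake/photo.png",
  url: "https://storage.example.com/my-bucket",
  fields: {
    "Content-Type": "image/png",
    bucket: "my-bucket",
    "X-Amz-Algorithm": "AWS4-HMAC-SHA256",
    "X-Amz-Credential": "FAKEACCESSKEY/20250101/auto/s3/aws4_request",
    "X-Amz-Date": "20250101T000000Z",
    key: "chat-uploads/user_123/9b1df0c2-fake/photo.png",
    Policy: "eyJjb25kaXRpb25z...base64-encoded-policy...",
    "X-Amz-Signature": "0123abcd...hex-signature...",
  },
  objectUrl:
    "https://storage.example.com/my-bucket/chat-uploads/user_123/9b1df0c2-fake/photo.png",
  previewUrl:
    "https://storage.example.com/my-bucket/chat-uploads/user_123/9b1df0c2-fake/photo.png",
};
```

Notice that the constraints live inside the signed `Policy`: the browser can't edit them without invalidating the signature.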

Now define the upload helper. We use XMLHttpRequest here because it gives you upload progress events without drama. fetch is great, but “upload progress” is where it starts acting like it’s too cool for school.

function xhrUploadWithProgress({
  url,
  formData,
  signal,
  onProgress,
}: {
  url: string;
  formData: FormData;
  signal: AbortSignal;
  onProgress: (loaded: number, total: number) => void;
}) {
  return new Promise<void>((resolve, reject) => {
    const xhr = new XMLHttpRequest();
    xhr.open("POST", url, true);

    xhr.upload.onprogress = (evt) => {
      if (!evt.lengthComputable) return;
      onProgress(evt.loaded, evt.total);
    };

    xhr.onload = () => {
      if (xhr.status >= 200 && xhr.status < 300) resolve();
      else reject(new Error(`Upload failed with status ${xhr.status}`));
    };

    xhr.onerror = () =>
      reject(new Error("Upload failed due to a network error."));

    const onAbort = () => {
      xhr.abort();
      // DOMException extends Error in modern runtimes; no cast needed.
      reject(new DOMException("Aborted", "AbortError"));
    };

    if (signal.aborted) return onAbort();
    signal.addEventListener("abort", onAbort, { once: true });

    xhr.send(formData);
  });
}

The abort wiring is important. Your queue is built around AbortController. This helper respects that, so cancellation works end-to-end without inventing a second cancellation mechanism. When the user hits cancel, the signal aborts, XHR aborts, and your queue can mark the item cancelled. Clean.

Now the adapter itself: ask permission, then upload bytes.

export function createRailwayPresignedPostUploader(): UploadAdapter {
  return async ({ file, signal, onProgress }) => {
    const prepareRes = await fetch("/api/file-uploads/railway/prepare", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        fileName: file.name,
        contentType: file.type || "application/octet-stream",
        size: file.size,
      }),
      signal,
    });

    if (!prepareRes.ok) {
      const message = await prepareRes.text().catch(() => "");
      throw new Error(message || "Failed to prepare upload.");
    }

    const prepared = (await prepareRes.json()) as PrepareResponse;

That request body is intentionally tiny: name, type, size. No file bytes. Not even a base64 preview. Just metadata.

And yes, the browser can lie about file.type. Browsers are allowed to be wrong. People are allowed to be malicious. That’s why we’re going to enforce content type again in the presigned policy on the server side. The client sending it is just a hint, not a source of truth.

Now you transform the server’s response into the exact multipart form the bucket expects. Presigned POST is picky. It wants those fields included exactly, and the actual file attached under file.

const form = new FormData();
for (const [k, v] of Object.entries(prepared.fields)) {
  form.append(k, v);
}
form.append("file", file);

And finally, you POST to the bucket URL, not your app. This is where the bytes go. This is the whole point.

    await xhrUploadWithProgress({
      url: prepared.url,
      formData: form,
      signal,
      onProgress: (loaded, total) => {
        const percent = total > 0 ? Math.round((loaded / total) * 100) : 0;
        onProgress({ loaded, total, percent });
      },
    });

    return {
      key: prepared.key,
      objectUrl: prepared.objectUrl,
      previewUrl: prepared.previewUrl,
    };
  };
}

Pause and appreciate what just happened.

Your server never saw the file. Not a single byte. It only minted a short-lived permission slip with constraints, and the bucket enforced them.

This is also why this pattern scales so well. If 10 users upload 10 files each, your app server doesn’t suddenly become a reluctant CDN. It stays calm. Possibly even hydrated.

Next up we’ll walk into the /api/file-uploads/railway/prepare route itself, because that’s where the real “safety and auth” decisions live: authentication, allowlisted types, size limits, safe object keys, and policy conditions that make the bucket say “nope” when something doesn’t match.


Step 4: The “prepare upload” route (where your app says yes or no)

So far the browser has behaved. It asked for permission before sending bytes. Now we’re on the server, and this is where you get to be strict. Politely strict. The prepare route is not an upload route. It’s the bouncer at the door checking IDs, looking at the dress code, and refusing entry to anything that looks like it’s about to start a fire.

Create app/api/file-uploads/railway/prepare/route.ts. Start with imports and the request schema:

import { z } from "zod";
import { auth } from "@/lib/auth";
import { CHAT_UPLOAD_ALLOWED_CONTENT_TYPES_SET } from "@/lib/file-uploads/allowed-content-types";
import {
  buildRailwayObjectKey,
  createRailwayPresignedPost,
} from "@/lib/file-uploads/railway/server-utils";

const MAX_FILE_SIZE_BYTES = 10 * 1024 * 1024;

const bodySchema = z.object({
  fileName: z.string().min(1).max(255),
  contentType: z.string().min(1).max(100),
  size: z.number().int().positive(),
});

This schema is doing more than type-checking. It’s your first filter for nonsense. And the internet is mostly nonsense. Notice what’s missing: bucket details, credentials, and any hint that the client can choose where the file goes. That’s intentional. The client requests an upload. The server decides the destination.

Now authenticate:

export async function POST(request: Request) {
  const session = await auth.api.getSession({
    headers: request.headers,
  });

  if (!session?.user?.id) {
    return Response.json({ message: "Unauthorized" }, { status: 401 });
  }

This is the philosophy in one decision: don’t mint upload capabilities for anonymous traffic. Presigned requests are powerful because they let clients write directly to your storage. That power should be tied to an identity you can trust and audit.

Then validate the body:

let body: z.infer<typeof bodySchema>;
try {
  body = bodySchema.parse(await request.json());
} catch {
  return Response.json(
    { message: "Invalid upload request body" },
    { status: 400 },
  );
}

If parsing fails, you exit. No side effects. No “maybe” presigned policy. The route stays boring. Boring routes are reliable routes.

Now enforce your rules: allowlisted types and max size.

if (!CHAT_UPLOAD_ALLOWED_CONTENT_TYPES_SET.has(body.contentType)) {
  return Response.json({ message: "Unsupported file type" }, { status: 400 });
}

if (body.size > MAX_FILE_SIZE_BYTES) {
  return Response.json({ message: "File is too large" }, { status: 400 });
}

This is where developers get anxious: “contentType comes from the browser, so isn’t this meaningless?” It would be meaningless if it were the only check. It’s not. This is the fast reject on your app server. The bucket-enforced check comes next when you bake the same constraints into the presigned policy conditions. Two layers. Same rule. Less regret.

Now generate the key on the server:

const key = buildRailwayObjectKey({
  fileName: body.fileName,
  namespace: "chat-uploads",
  userId: session.user.id,
});

This is a security decision disguised as a convenience function. If the client could pick the key, they could attempt overwrites, spray arbitrary prefixes, or create keys that make cleanup and lifecycle management painful. So the server chooses a safe, scoped key, typically including user id boundaries and a random UUID. Predictability is the enemy. Boring keys are your friend.
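As a sketch of what such a key builder can look like (the sanitization rules here are my assumption, not the tutorial's exact implementation, but the properties are the point: server-chosen prefix, per-user scoping, and a random UUID):

```typescript
// Sketch of a buildRailwayObjectKey-style helper. The sanitizer is an
// assumption; the invariants are what matter: the server picks the
// prefix, scopes it by user id, and inserts a UUID so keys are
// unguessable and collision-free.
import { randomUUID } from "node:crypto";

function sanitizeFileName(fileName: string): string {
  // Keep a recognizable name, drop path separators and odd characters.
  const base = fileName.split(/[\\/]/).pop() ?? "file";
  return base.replace(/[^a-zA-Z0-9._-]/g, "_").slice(0, 100) || "file";
}

export function buildRailwayObjectKey({
  fileName,
  namespace,
  userId,
}: {
  fileName: string;
  namespace: string;
  userId: string;
}): string {
  return `${namespace}/${userId}/${randomUUID()}/${sanitizeFileName(fileName)}`;
}
```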

Finally, mint the presigned POST and return it:

  const presigned = await createRailwayPresignedPost({
    key,
    contentType: body.contentType,
    maxFileSizeBytes: MAX_FILE_SIZE_BYTES,
    minFileSizeBytes: 1,
    expiresInSeconds: 60 * 5,
  });

  return Response.json(presigned);
}

This response is the permission slip your uploader uses in Step 3. The route still hasn’t touched file bytes. It doesn’t need to. It just encoded your decision into a policy the bucket will enforce. That’s the win.
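For a feel of what that policy encodes, here's the shape of the parameters a createRailwayPresignedPost helper would hand to `createPresignedPost` from @aws-sdk/s3-presigned-post. The helper name and wiring are assumptions until the utility gets built in Step 7; the condition syntax matches the S3 POST policy format:

```typescript
// Sketch: the parameters object for the SDK's createPresignedPost.
// Key, content type, size range, and expiry are all locked into the
// signed policy, so the bucket rejects anything that deviates.
function buildPresignedPostParams({
  bucket,
  key,
  contentType,
  minFileSizeBytes,
  maxFileSizeBytes,
  expiresInSeconds,
}: {
  bucket: string;
  key: string;
  contentType: string;
  minFileSizeBytes: number;
  maxFileSizeBytes: number;
  expiresInSeconds: number;
}) {
  return {
    Bucket: bucket,
    Key: key,
    Conditions: [
      // The bucket enforces the byte range, even if the client lied about size.
      ["content-length-range", minFileSizeBytes, maxFileSizeBytes],
      // And the content type, even if the client swaps the form field.
      ["eq", "$Content-Type", contentType],
      ["eq", "$key", key],
    ],
    Fields: { "Content-Type": contentType },
    Expires: expiresInSeconds, // seconds until the policy stops working
  };
}
```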


Step 5: The allowlist (aka “no, you can’t upload a zip bomb here”)

At some point, every app has to answer an awkward question: “What kinds of files do we accept?” If your answer is “anything,” that’s not a product decision. That’s an incident report draft.

So we use an allowlist. It’s simple. It’s explicit. It’s also a great way to sleep at night.

Create src/lib/file-uploads/allowed-content-types.ts:

export const CHAT_UPLOAD_ALLOWED_CONTENT_TYPES_SET = new Set<string>([
  "image/png",
  "image/jpeg",
  "image/webp",
  "image/gif",
  "application/pdf",
  "text/plain",
]);

That’s it. You can expand this later, but start small. Most apps only need a handful of types. If you’re building chat uploads, images and PDFs cover a surprising amount of real use.

One nuance, because developers will (correctly) worry about it: the browser’s file.type is not a cryptographic truth. It’s a hint. Sometimes it’s missing. Sometimes it’s wrong. Sometimes it’s malicious. That’s why the allowlist check in your /prepare route is only one layer. The second layer is more important: you also embed the content-type requirement into the presigned POST policy so the bucket enforces it.

We’ll get to that next.


Step 6: Auth (the presigned URL is a capability, so treat it like one)

Your prepare route calls auth.api.getSession({ headers }). In your real app this is whatever auth system you already use. The key point is: you should only mint presigned upload permissions for authenticated users, and ideally you want a stable userId you can use to scope object keys.

I won’t try to replace your auth provider in a tutorial. That’s like trying to replace someone’s plumbing over a blog post. But I do want to be explicit about what the prepare route expects from auth: it expects that given the request headers, you can derive a session with a user.id.

A minimal shape that matches what the upload route needs looks like this:

export type Session = {
  user?: { id?: string };
};

export const auth = {
  api: {
    async getSession({
      headers,
    }: {
      headers: Headers;
    }): Promise<Session | null> {
      // Replace with your real implementation.
      return null;
    },
  },
};

That’s not meant to be “copy-paste production auth.” It’s just the interface contract: the upload system only cares about one thing—who is this user? If you can answer that, the rest plugs in cleanly.

Now we can move on to the server utilities, where the big security decisions actually live: safe object keys, endpoint normalization, and presigned policies with conditions.


Step 7: Server utilities (part 1) — bucket config + S3 client, safely

This entire module should be server-only. It contains secrets. It also contains the logic that mints upload capabilities. You do not want any of this anywhere near a client bundle.

Create src/lib/file-uploads/railway/server-utils.ts and begin with the server-only guard and config types:

import "server-only";

import { randomUUID } from "node:crypto";
import {
  DeleteObjectCommand,
  GetObjectCommand,
  S3Client,
} from "@aws-sdk/client-s3";
import { createPresignedPost } from "@aws-sdk/s3-presigned-post";
import { getSignedUrl } from "@aws-sdk/s3-request-presigner";

export type RailwayBucketConfig = {
  bucket: string;
  endpoint: string;
  region: string;
  accessKeyId: string;
  secretAccessKey: string;
  forcePathStyle: boolean;
};

Then define a couple of small helpers. These look boring. They’re not. They prevent the “it works on my machine but the endpoint is malformed in prod” genre of pain.

function parseBoolean(value: string | undefined) {
  if (!value) return false;
  const normalized = value.trim().toLowerCase();
  return normalized === "1" || normalized === "true" || normalized === "yes";
}

function normalizeEndpoint(rawEndpoint: string) {
  const endpointWithProtocol = rawEndpoint.startsWith("http")
    ? rawEndpoint
    : `https://${rawEndpoint}`;
  const url = new URL(endpointWithProtocol);
  url.pathname = "";
  url.search = "";
  url.hash = "";
  return url.toString().replace(/\/$/, "");
}
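To see what these helpers actually produce, here they are exercised on the kind of values that show up in real env vars (the helpers are repeated verbatim so the snippet runs standalone; the hostname is a placeholder):

```typescript
function parseBoolean(value: string | undefined) {
  if (!value) return false;
  const normalized = value.trim().toLowerCase();
  return normalized === "1" || normalized === "true" || normalized === "yes";
}

function normalizeEndpoint(rawEndpoint: string) {
  const endpointWithProtocol = rawEndpoint.startsWith("http")
    ? rawEndpoint
    : `https://${rawEndpoint}`;
  const url = new URL(endpointWithProtocol);
  url.pathname = "";
  url.search = "";
  url.hash = "";
  return url.toString().replace(/\/$/, "");
}

// A bare hostname gains a protocol; stray paths and query strings are dropped.
console.log(normalizeEndpoint("storage.example.app/bucket-ui?tab=files"));
// → https://storage.example.app

// Truthy spellings are accepted; anything unrecognized is false.
console.log(parseBoolean(" TRUE "));  // → true
console.log(parseBoolean("enabled")); // → false
```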

Now the environment-based config loader. This is the part that makes your system fail fast and loudly if anything is misconfigured, which is exactly what you want when dealing with storage credentials.

export function getRailwayBucketConfig(): RailwayBucketConfig {
  const bucket = process.env.FILE_UPLOAD_BUCKET?.trim();
  if (!bucket) throw new Error("Missing FILE_UPLOAD_BUCKET");

  const rawEndpoint = process.env.FILE_UPLOAD_ENDPOINT?.trim();
  if (!rawEndpoint) throw new Error("Missing FILE_UPLOAD_ENDPOINT");
  const endpoint = normalizeEndpoint(rawEndpoint);

  const accessKeyId = process.env.FILE_UPLOAD_ACCESS_KEY_ID?.trim();
  if (!accessKeyId) throw new Error("Missing FILE_UPLOAD_ACCESS_KEY_ID");

  const secretAccessKey = process.env.FILE_UPLOAD_SECRET_ACCESS_KEY?.trim();
  if (!secretAccessKey)
    throw new Error("Missing FILE_UPLOAD_SECRET_ACCESS_KEY");

  const region = process.env.FILE_UPLOAD_REGION?.trim() || "auto";
  const forcePathStyle = parseBoolean(
    process.env.FILE_UPLOAD_RAILWAY_S3_FORCE_PATH_STYLE,
  );

  return {
    bucket,
    endpoint,
    region,
    accessKeyId,
    secretAccessKey,
    forcePathStyle,
  };
}
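For reference, the environment variables this loader expects look like this. The variable names come straight from the code above; the values are placeholders you’d replace with your own:

```shell
# Required
FILE_UPLOAD_BUCKET=my-app-uploads
FILE_UPLOAD_ENDPOINT=storage.example.app   # protocol optional; normalized by the loader
FILE_UPLOAD_ACCESS_KEY_ID=your_access_key_id
FILE_UPLOAD_SECRET_ACCESS_KEY=your_secret_access_key

# Optional
FILE_UPLOAD_REGION=auto                          # defaults to "auto"
FILE_UPLOAD_RAILWAY_S3_FORCE_PATH_STYLE=true     # "1" / "true" / "yes" accepted
```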

And finally, the client constructor:

export function createRailwayS3Client(config = getRailwayBucketConfig()) {
  return new S3Client({
    region: config.region,
    endpoint: config.endpoint,
    forcePathStyle: config.forcePathStyle,
    credentials: {
      accessKeyId: config.accessKeyId,
      secretAccessKey: config.secretAccessKey,
    },
  });
}

At runtime, this is what happens during an upload: your /prepare route calls createRailwayPresignedPost, which calls getRailwayBucketConfig, which builds an S3 client using your endpoint and credentials. That client never gets sent to the browser. It exists only long enough to generate a presigned policy.

Next we’ll do the most delicate part: object keys. This is where you avoid traversal weirdness, avoid leaking filenames, and keep keys scoped per user and namespace. Then we’ll finally generate the presigned POST itself, including the conditions that make the bucket enforce type and size limits even if a client tries to cheat.


Step 7 (continued): Server utilities (part 2) — object keys that won’t ruin your week

You can have perfect auth and perfect presigned policies and still ship something risky if you let the client influence the object key too much. Keys are sneaky. They’re “just strings,” until someone figures out they can create confusing paths, collide with other users, or upload files that become hard to reason about later.

So we make the server choose the key. Every time. No exceptions.

Also, we treat filenames as untrusted input, because they are. Users will upload my resume final FINAL (2).pdf. Attackers will upload ../../definitely-not-suspicious. Your job is to accept the file, not their chaos.

Here’s the key-safety core. It’s small enough to keep in the article, and it’s the part I’d rather over-explain than under-explain.

function sanitizeKeySegment(input: string) {
  return input
    .normalize("NFKD")
    .replace(/[^\w.-]+/g, "-")
    .replace(/^-+|-+$/g, "")
    .replace(/-{2,}/g, "-")
    .toLowerCase();
}

function sanitizeFileName(fileName: string) {
  const baseName = fileName
    .replace(/[\\/:*?"<>|]/g, "-")
    .replace(/\s+/g, " ")
    .trim();

  const noTraversal = baseName.replace(/\.\./g, "");
  // Split off only the final extension; everything before the last dot stays
  // in the base name, so "archive.tar" survives in "archive.tar.gz".
  const parts = noTraversal.split(".");
  const extension = parts.length > 1 ? parts.pop() : "";
  const namePart = parts.join(".");

  const sanitizedName = sanitizeKeySegment(namePart || "file");
  const sanitizedExtension = extension ? sanitizeKeySegment(extension) : "";

  const combined = sanitizedExtension
    ? `${sanitizedName}.${sanitizedExtension}`
    : sanitizedName;

  return combined.slice(0, 140) || "file";
}

function assertSafeObjectKey(key: string) {
  if (!key || key.length > 1024) {
    throw new Error(
      "Invalid object key. It must be between 1 and 1024 characters.",
    );
  }

  if (
    key.includes("..") ||
    key.startsWith("/") ||
    key.endsWith("/") ||
    key.includes("\\")
  ) {
    throw new Error("Unsafe object key detected.");
  }
}

This does a few things that matter. It makes segments boring, because boring keys are predictable to you and uninteresting to attackers. It strips characters that make object keys hard to work with across platforms. It also explicitly blocks traversal-ish patterns and weird leading/trailing slashes. Yes, object storage isn’t a filesystem. No, you still don’t want to allow filesystem-shaped nonsense.
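Here’s what that looks like in practice: the two sanitizers run against a hostile filename and a messy-but-honest one. The functions are repeated so the snippet runs standalone, and the expected outputs were worked out by hand from the code, so treat them as a sanity check rather than a spec.

```typescript
function sanitizeKeySegment(input: string) {
  return input
    .normalize("NFKD")
    .replace(/[^\w.-]+/g, "-")
    .replace(/^-+|-+$/g, "")
    .replace(/-{2,}/g, "-")
    .toLowerCase();
}

function sanitizeFileName(fileName: string) {
  const baseName = fileName
    .replace(/[\\/:*?"<>|]/g, "-")
    .replace(/\s+/g, " ")
    .trim();

  const noTraversal = baseName.replace(/\.\./g, "");
  const parts = noTraversal.split(".");
  const extension = parts.length > 1 ? parts.pop() : "";
  const namePart = parts.join(".");

  const sanitizedName = sanitizeKeySegment(namePart || "file");
  const sanitizedExtension = extension ? sanitizeKeySegment(extension) : "";

  const combined = sanitizedExtension
    ? `${sanitizedName}.${sanitizedExtension}`
    : sanitizedName;

  return combined.slice(0, 140) || "file";
}

// A traversal attempt collapses into a harmless, boring segment:
console.log(sanitizeFileName("../../etc/passwd")); // → "etc-passwd"

// A normal messy filename stays recognizable:
console.log(sanitizeFileName("My Resume (2).PDF")); // → "my-resume-2.pdf"
```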

Now the actual key builder. This is where you scope uploads to a namespace and user, and you add a UUID so keys can’t be guessed.

export function buildRailwayObjectKey({
  fileName,
  namespace = "uploads",
  userId,
  prefix,
}: {
  fileName: string;
  namespace?: string;
  userId?: string;
  prefix?: string;
}) {
  const now = new Date();
  const datePrefix = `${now.getUTCFullYear()}/${String(now.getUTCMonth() + 1).padStart(2, "0")}/${String(
    now.getUTCDate(),
  ).padStart(2, "0")}`;

  const safeNamespace = sanitizeKeySegment(namespace || "uploads") || "uploads";
  const safeUserId = userId ? sanitizeKeySegment(userId) : "";
  const safePrefix = prefix ? sanitizeKeySegment(prefix) : "";
  const safeFileName = sanitizeFileName(fileName);
  const id = randomUUID();

  const segments = [safeNamespace, safeUserId, safePrefix, datePrefix]
    .filter(Boolean)
    .join("/");

  const key = `${segments}/${id}-${safeFileName}`;
  assertSafeObjectKey(key);
  return key;
}

The structure here is intentional. You get an obvious grouping (chat-uploads/<userId>/yyyy/mm/dd/...) that makes it easier to debug, to set lifecycle rules, and to clean up later. You also get uniqueness without relying on user filenames. You’re not in the business of deduplicating resumes named “resume.pdf.” You’re in the business of not overwriting them.

If you’re thinking, “Do I really need the date prefix?” you don’t. But it’s one of those small ops-friendly decisions that pays off later, especially when you’re looking at logs or doing selective cleanup. It’s cheap. Keep it.

Next we’ll do the part everybody came for: generating the presigned POST, with conditions that make the bucket enforce your file type and size rules even if the client lies through its teeth. And yes, some clients will. Not your users. Definitely not your users. The other users.


Step 8: Presigned POST generation (where your rules become bucket-enforced)

Up to now, your server has authenticated the user and picked a safe key. Good. But there’s still a gap you want to close.

If you only validate on your server, you’re basically saying: “I trust the client to upload what they told me they’d upload.” That’s optimistic. It’s also unnecessary.

The nicer pattern is to encode your constraints into the presigned POST policy itself. That way, even if the client tries to upload a different content type, or a larger file, the bucket rejects it. Your server doesn’t have to be in the data path to enforce rules. The storage layer becomes an enforcement point.

This is the function that turns “upload intent” into “upload capability.”

Start with the input/output types and a few constants:

const MAX_PRESIGNED_POST_EXPIRATION_SECONDS = 60 * 60 * 24 * 90;
const DEFAULT_PRESIGNED_POST_EXPIRATION_SECONDS = 60 * 5;
const DEFAULT_MIN_UPLOAD_BYTES = 1;
const DEFAULT_MAX_UPLOAD_BYTES = 25 * 1024 * 1024;

export type CreateRailwayPresignedPostInput = {
  key: string;
  contentType?: string;
  contentTypePrefix?: string;
  expiresInSeconds?: number;
  minFileSizeBytes?: number;
  maxFileSizeBytes?: number;
};

export type RailwayPresignedPostResult = {
  key: string;
  url: string;
  fields: Record<string, string>;
  objectUrl: string;
  previewUrl: string;
};

type PresignedPostCondition =
  | Record<string, string>
  | ["eq" | "starts-with", string, string]
  | ["content-length-range", number, number];

The big idea is: you’re going to produce url and fields for a multipart form POST. The browser will submit those fields exactly, plus the file bytes. If anything doesn’t match the policy, the upload fails.

Now the function itself. First, validate the key and the expiration. Short expirations are your friend. Five minutes is plenty for a user upload in most apps, and it shrinks the blast radius if something leaks.

export async function createRailwayPresignedPost(
  input: CreateRailwayPresignedPostInput,
  config = getRailwayBucketConfig(),
): Promise<RailwayPresignedPostResult> {
  assertSafeObjectKey(input.key);

  const expiresInSeconds =
    input.expiresInSeconds ?? DEFAULT_PRESIGNED_POST_EXPIRATION_SECONDS;

  if (
    !Number.isInteger(expiresInSeconds) ||
    expiresInSeconds <= 0 ||
    expiresInSeconds > MAX_PRESIGNED_POST_EXPIRATION_SECONDS
  ) {
    throw new Error(
      `Invalid presigned post expiration. Value must be 1-${MAX_PRESIGNED_POST_EXPIRATION_SECONDS} seconds.`,
    );
  }

Now validate your file size constraints. This is the second place you enforce size. The first place was in the /prepare route (fast reject). This place is the bucket policy (hard reject).

  const minFileSizeBytes = input.minFileSizeBytes ?? DEFAULT_MIN_UPLOAD_BYTES;
  const maxFileSizeBytes = input.maxFileSizeBytes ?? DEFAULT_MAX_UPLOAD_BYTES;

  if (
    minFileSizeBytes < 0 ||
    maxFileSizeBytes <= 0 ||
    minFileSizeBytes > maxFileSizeBytes
  ) {
    throw new Error("Invalid upload file-size constraints.");
  }

Now build the POST policy conditions. This is the part that makes the presigned POST “tight.”

You’re saying: the upload must be for exactly this key, and the object size must fall within this range.

  const conditions: PresignedPostCondition[] = [
    ["eq", "$key", input.key],
    ["content-length-range", minFileSizeBytes, maxFileSizeBytes],
  ];

  const fields: Record<string, string> = {
    key: input.key,
  };

Now content type. You’ve designed this nicely: either you lock to an exact content type, or you allow a prefix (useful for “image/” style rules). For chat uploads, exact matches are usually better. They’re stricter. They’re clearer.

  if (input.contentType) {
    fields["Content-Type"] = input.contentType;
    conditions.push(["eq", "$Content-Type", input.contentType]);
  } else if (input.contentTypePrefix) {
    conditions.push(["starts-with", "$Content-Type", input.contentTypePrefix]);
  }

This is also where a lot of developer anxiety lives: “What if the browser lies about the content type?” Great question. If the browser lies and then tries to upload with a mismatched Content-Type, the bucket rejects the POST because it violates ["eq", "$Content-Type", "..."]. That’s the whole point of doing this at the policy layer.

Now generate the presigned POST with the AWS SDK:

  const client = createRailwayS3Client(config);

  const { url, fields: presignedFields } = await createPresignedPost(client, {
    Bucket: config.bucket,
    Key: input.key,
    Conditions: conditions,
    Fields: fields,
    Expires: expiresInSeconds,
  });

At this moment, your server has produced a short-lived capability: “anyone holding these fields may POST to this URL and write an object at exactly this key, provided they meet these conditions.” The policy stays usable until it expires, which is one more reason to keep expirations short.

That’s the permission slip your browser uses in the uploader adapter.
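On the client, consuming that slip is just a multipart form POST: no auth headers, no app server in the path. Here’s a minimal sketch of that step (the helper name is mine; your uploader adapter will look different). One detail worth knowing: S3 ignores form fields that come after the file, so the file part must be appended last.

```typescript
// Hypothetical client-side helper: turn the presigned POST result into the
// multipart body the bucket expects.
function buildPresignedFormData(
  presigned: { url: string; fields: Record<string, string> },
  file: Blob,
  fileName: string,
) {
  const formData = new FormData();
  for (const [name, value] of Object.entries(presigned.fields)) {
    formData.append(name, value);
  }
  // S3 ignores fields that appear after the file, so the file goes last.
  formData.append("file", file, fileName);
  return formData;
}

// Then the upload itself is a plain fetch:
//   const body = buildPresignedFormData(presigned, file, file.name);
//   const res = await fetch(presigned.url, { method: "POST", body });
//   if (!res.ok) throw new Error(`Upload rejected: ${res.status}`);
```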

Now you return the presigned form data, plus two URLs: an “object URL” (useful for internal references) and a “preview URL” (a signed GET URL) so you can show the user what they uploaded without making your bucket public.

We’ll define those two functions next, but the shape looks like this:

  const previewUrl = await createRailwayPresignedGetUrl({
    key: input.key,
    expiresInSeconds: 60 * 60,
    config,
  });

  return {
    key: input.key,
    url,
    fields: presignedFields,
    objectUrl: buildRailwayObjectUrl({
      endpoint: config.endpoint,
      bucket: config.bucket,
      key: input.key,
      forcePathStyle: config.forcePathStyle,
    }),
    previewUrl,
  };
}

That’s the end of the prepare step. From the browser’s perspective, it now has everything needed to upload bytes directly to the bucket.

From your perspective, you’ve achieved something important: the server is in charge, but it’s not in the hot path. You authenticate, authorize, and encode constraints. Then you get out of the way.

Next we’ll finish the server-utils module by defining buildRailwayObjectUrl (including path encoding and path-style vs virtual-hosted style), and createRailwayPresignedGetUrl so you can safely preview private objects.


Step 9: URLs and previewing (getting the file back without making your bucket public)

At this point the upload has already happened. The browser has POSTed the multipart form to the bucket URL, the bucket has verified the policy, and the object now exists at the key your server generated.

Now the question becomes: what do you return to the UI, and how do you let a user preview what they just uploaded without turning your bucket into the world’s least curated file-sharing service?

That’s why your createRailwayPresignedPost returns two different URL concepts: an objectUrl and a previewUrl.

objectUrl is the canonical location of the object in the bucket. It’s useful as an internal reference, and it’s deterministic. But it might not be publicly accessible, and depending on your provider it may not even be the URL you want clients to hit directly.

So you also return previewUrl, which is a short-lived signed GET URL. It’s like saying: “This object is private, but for the next hour, anyone holding this link may fetch it.” That’s a much safer default for user content.

To build objectUrl correctly you need to handle two things: path encoding (keys can include slashes and weird characters) and whether your provider expects “path-style” URLs or “virtual-hosted-style” URLs. Railway/S3-compatible providers vary, so it’s nice that you made this configurable.

Here are the URL helpers, broken out.

First, a safe path encoder (it encodes each segment without destroying the / separators):

function encodePath(key: string) {
  return key
    .split("/")
    .map((segment) => encodeURIComponent(segment))
    .join("/");
}

Now the object URL builder:

export function buildRailwayObjectUrl({
  endpoint,
  bucket,
  key,
  forcePathStyle = false,
}: {
  endpoint: string;
  bucket: string;
  key: string;
  forcePathStyle?: boolean;
}) {
  assertSafeObjectKey(key);

  const endpointUrl = new URL(endpoint);
  const encodedKey = encodePath(key);

  if (forcePathStyle) {
    endpointUrl.pathname = `/${bucket}/${encodedKey}`;
    return endpointUrl.toString();
  }

  endpointUrl.hostname = `${bucket}.${endpointUrl.hostname}`;
  endpointUrl.pathname = `/${encodedKey}`;
  return endpointUrl.toString();
}

This is one of those “looks like string concatenation” functions that quietly saves you from bugs. Encoding matters. Providers differ. And when you eventually upload a filename with spaces or unicode, you’ll be glad you didn’t hand-roll this at 1am.
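For a concrete picture, here’s the same key rendered both ways. `encodePath` is repeated so the snippet runs standalone, and the bucket and hostname are placeholders:

```typescript
function encodePath(key: string) {
  return key
    .split("/")
    .map((segment) => encodeURIComponent(segment))
    .join("/");
}

const key = "uploads/user-1/2026/01/15/abc my file.png";
const encoded = encodePath(key);
// encoded → "uploads/user-1/2026/01/15/abc%20my%20file.png"

// Path-style: the bucket lives in the path.
console.log(`https://storage.example.app/my-bucket/${encoded}`);

// Virtual-hosted-style: the bucket becomes a subdomain.
console.log(`https://my-bucket.storage.example.app/${encoded}`);
```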

Now for the preview URL. This is a standard presigned GET, generated server-side, time-limited, and safe for private buckets.

export async function createRailwayPresignedGetUrl({
  key,
  expiresInSeconds = 60 * 60,
  config = getRailwayBucketConfig(),
}: {
  key: string;
  expiresInSeconds?: number;
  config?: RailwayBucketConfig;
}) {
  assertSafeObjectKey(key);

  const client = createRailwayS3Client(config);

  return getSignedUrl(
    client,
    new GetObjectCommand({
      Bucket: config.bucket,
      Key: key,
    }),
    { expiresIn: expiresInSeconds },
  );
}

That’s it. Your UI can now show a “Preview” link immediately after upload, without your bucket being public, and without your app server proxying downloads.

If you want cleanup tooling later, you already have the delete helper, and it follows the same rule as everything else: validate the key, then issue a server-side command.

export async function deleteRailwayObjectByKey(
  key: string,
  config = getRailwayBucketConfig(),
) {
  assertSafeObjectKey(key);

  const client = createRailwayS3Client(config);

  await client.send(
    new DeleteObjectCommand({
      Bucket: config.bucket,
      Key: key,
    }),
  );
}

At this point the flow is complete. The user picked a file. The queue started an upload. The browser asked your app for permission. Your app authenticated the user, validated intent, minted a short-lived presigned policy tied to a safe key, and returned it. The browser uploaded straight to the bucket under those constraints. The bucket enforced them. And your UI got back stable identifiers plus a safe preview URL.


Ending: what you’ve built (and why it’s worth it)

If you’ve ever shipped “uploads” by just POSTing a FormData blob to your server and hoping for the best, this approach will feel like a step up in ceremony. It is. But it’s the good kind of ceremony—the kind that buys you scale, cost control, and a dramatically smaller blast radius when something goes wrong.

You’re now running uploads in a way that’s hard to abuse by accident and surprisingly annoying to abuse on purpose. The presigned POST is short-lived. It’s constrained to one key. It bakes in content type and size requirements that the bucket enforces even if the client lies. Your server never handles the bytes, so you don’t melt your app under load or pay for bandwidth you didn’t need.

One practical gotcha before you celebrate: browsers are picky, and your bucket needs to explicitly allow cross-origin requests from your site. If you skip CORS, your presigned POST can be perfectly valid and still fail in the browser with a CORS error, which is the web’s way of saying “technically correct, emotionally devastating.” So make sure you configure bucket CORS to allow POST (and PUT if you ever switch to presigned PUT) from your domain.

Here’s an example using the AWS CLI against an S3-compatible endpoint:

AWS_ACCESS_KEY_ID=your_access_key_id \
AWS_SECRET_ACCESS_KEY=your_secret_access_key \
  aws s3api put-bucket-cors \
  --bucket your_bucket_name \
  --endpoint-url https://storage.domain.app \
  --cors-configuration '{
    "CORSRules": [
      {
        "AllowedHeaders": ["*"],
        "AllowedMethods": ["PUT","POST"],
        "AllowedOrigins": ["https://your_domain.tld"],
        "MaxAgeSeconds": 3000
      }
    ]
  }'

From here, the pattern stays friendly as your app grows. Want virus scanning? Add an async job after upload and only “activate” attachments once scanned. Want to tie uploads to chat messages? Store the returned key when the message is created and delete orphaned objects on a schedule. Want stricter rules? Tighten the allowlist and policy conditions.

You’ve built a solid spine, and now you can add features without rewriting the whole thing.

© 2026 hourzero. All rights reserved.