How to Bypass Vercel's 4.5MB Payload Limit
You have built an incredible file-sharing application. It works perfectly on your local machine. You deploy it to Vercel, attempt to upload a 5MB image, and your console instantly lights up with a fatal error: 413 Payload Too Large.
Serverless platforms like Vercel and AWS Lambda impose strict limits on API request body sizes (4.5MB for Vercel functions). Proxying a file from the browser, through your backend API, and then on to a storage bucket is an anti-pattern. Here is how to build the standard production solution: direct-to-cloud uploads using presigned URLs.
Why Serverless Functions Hate Files
Serverless functions are designed to be ephemeral, lightweight compute nodes. They wake up, execute a quick database query, and die.
When you stream a 50MB file to a serverless function via a multipart/form-data request, the function has to buffer that entire file in its limited RAM while simultaneously opening a new network connection to your actual storage provider (such as AWS S3 or Supabase Storage). The result is memory pressure, timeouts, and double bandwidth costs, because every byte transits the function before it ever reaches the bucket.
The Solution: The "Valet" Architecture
Instead of having the serverless function carry the file, the function should act as a security guard.
- The Request: The browser asks the backend, "I want to upload a 10MB encrypted file."
- The Signature: The backend verifies the user's permissions, generates a temporary, cryptographically signed URL with a short expiry, and sends it back to the browser.
- The Direct Upload: The browser takes that URL and uploads the file directly to the cloud storage bucket, completely bypassing Vercel.
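The permission check in the second step is application-specific, but a minimal guard is easy to sketch. The size cap and content-type allow-list below are illustrative assumptions, not values from this architecture:

```javascript
// Hypothetical pre-signing guard: reject obviously bad requests
// before issuing a signed URL. Limits here are example values.
const MAX_BYTES = 100 * 1024 * 1024; // assumed cap: 100MB
const ALLOWED_TYPES = new Set([
  'application/octet-stream',
  'image/png',
  'image/jpeg'
]);

function validateUploadRequest({ size, contentType }) {
  if (!Number.isInteger(size) || size <= 0) {
    return { ok: false, reason: 'invalid size' };
  }
  if (size > MAX_BYTES) {
    return { ok: false, reason: 'file too large' };
  }
  if (!ALLOWED_TYPES.has(contentType)) {
    return { ok: false, reason: 'content type not allowed' };
  }
  return { ok: true };
}
```

The backend would run a check like this on the metadata the browser sends in the first step, before ever touching the storage SDK.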
Step 1: Generate the Presigned URL (Backend)
In your Vercel API route, you will use your storage provider's SDK to generate the signed URL. This example uses Supabase, but the same pattern applies to AWS S3.
// api/generate-upload-url.js (Vercel Serverless Function)
import { createClient } from '@supabase/supabase-js';
import { randomUUID } from 'node:crypto';

export default async function handler(req, res) {
  if (req.method !== 'POST') {
    return res.status(405).json({ error: 'Method not allowed' });
  }

  // 1. Initialize an admin client (the service role key bypasses RLS -- server-side only)
  const supabase = createClient(
    process.env.SUPABASE_URL,
    process.env.SUPABASE_SERVICE_ROLE_KEY
  );

  // Generate a unique filename (e.g., using a UUID)
  const fileName = `${randomUUID()}.enc`;

  // 2. Request a short-lived signed upload URL from the storage bucket
  const { data, error } = await supabase
    .storage
    .from('encrypted_vault')
    .createSignedUploadUrl(fileName);

  if (error) return res.status(500).json({ error: error.message });

  // 3. Return the URL and the path to the frontend
  return res.status(200).json({
    uploadUrl: data.signedUrl,
    path: fileName
  });
}
Step 2: Upload Directly from the Client (Frontend)
Now that your frontend has a secure, temporary ticket, you can use the native fetch API to send the file directly from the user's browser into the storage bucket.
async function uploadFileDirectly(fileBuffer) {
  // 1. Ask our backend for the secure upload ticket
  const response = await fetch('/api/generate-upload-url', { method: 'POST' });
  if (!response.ok) throw new Error('Could not obtain an upload URL.');
  const { uploadUrl, path } = await response.json();

  // 2. PUT the file directly to the cloud storage bucket
  const uploadResponse = await fetch(uploadUrl, {
    method: 'PUT',
    body: fileBuffer,
    headers: {
      // It is critical to set the correct content type
      'Content-Type': 'application/octet-stream'
    }
  });

  if (uploadResponse.ok) {
    console.log(`Success! File stored at: ${path}`);
    return path;
  } else {
    throw new Error('Direct upload failed.');
  }
}
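Direct uploads still fail occasionally (flaky connections, an expired ticket). A small retry wrapper — a hypothetical helper, not part of any upload API — keeps the flow resilient; note that each attempt calls uploadFileDirectly again, so every retry fetches a fresh signed URL:

```javascript
// Hypothetical helper: retry a flaky async operation with exponential backoff.
async function withRetry(operation, attempts = 3, baseDelayMs = 500) {
  let lastError;
  for (let i = 0; i < attempts; i++) {
    try {
      return await operation();
    } catch (err) {
      lastError = err;
      // Back off: 500ms, 1000ms, 2000ms, ... before the next attempt
      if (i < attempts - 1) {
        await new Promise((resolve) => setTimeout(resolve, baseDelayMs * 2 ** i));
      }
    }
  }
  throw lastError;
}
```

Usage: `const path = await withRetry(() => uploadFileDirectly(fileBuffer));`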
Secure, Heavy-Duty Uploads
By combining client-side Web Crypto API encryption with direct-to-cloud Presigned URLs, we engineered ZeroKey to handle secure file payloads without ever choking our serverless architecture.
Your browser encrypts the file locally into a raw ArrayBuffer, requests a temporary ticket from our API, and sends the ciphertext directly into our locked Supabase buckets.
Conclusion
Vercel's 4.5MB payload limit isn't a restriction; it's a guardrail forcing you to build better architecture. By adopting direct-to-cloud uploads, you reduce server costs, decrease upload latency for your users, and unlock the ability to handle gigabytes of data on a serverless free tier.