AnotherWrapper includes several security layers out of the box. Here’s what’s built in and what you should add before launch.

Security Layers

The repo uses centralized helpers in lib/auth/server.ts for all server-side auth checks. For API routes, use requireApiUser():

```ts
// app/api/protected/route.ts
import { requireApiUser } from "@/lib/auth/server";

export async function GET() {
  const { user, unauthorizedResponse } = await requireApiUser();
  if (unauthorizedResponse) return unauthorizedResponse;

  return Response.json({
    message: "Protected data",
    userId: user.id,
  });
}
```
For server components and pages, use requireUser():

```tsx
// app/protected/page.tsx
import { requireUser } from "@/lib/auth/server";

export default async function ProtectedPage() {
  const user = await requireUser();

  return <div>Welcome, {user.email}</div>;
}
```
Always use these helpers instead of rolling your own auth checks. They handle the edge cases for you.
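For intuition, here is a simplified sketch of the contract these helpers provide. This is an illustrative stand-in, not the actual implementation in lib/auth/server.ts, and getSessionUser() is a hypothetical placeholder for the real session lookup:

```typescript
// Simplified sketch of the requireApiUser() contract. The real helper in
// lib/auth/server.ts is the one to use in your routes.
type User = { id: string; email: string };

async function getSessionUser(): Promise<User | null> {
  // Real code would validate the session cookie or token here.
  return null; // pretend the request is unauthenticated for this sketch
}

export async function requireApiUser(): Promise<{
  user: User | null;
  unauthorizedResponse: Response | null;
}> {
  const user = await getSessionUser();
  if (!user) {
    // Callers return this response directly, short-circuiting the handler.
    return {
      user: null,
      unauthorizedResponse: Response.json(
        { error: "Unauthorized" },
        { status: 401 }
      ),
    };
  }
  return { user, unauthorizedResponse: null };
}
```

The shape explains the `if (unauthorizedResponse) return unauthorizedResponse;` pattern above: either `user` is set, or you get a ready-made 401 response to return.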
The default model enforces ownership through authenticated server helpers and user-scoped queries, not database-level RLS policies. This keeps the starter portable across PostgreSQL hosts. The practical rules:
  1. Fetch the current user on the server
  2. Scope reads and writes by user.id
  3. Keep that logic in lib/db/* instead of scattering it across UI code
For example, a user-scoped read:

```ts
import { and, eq } from "drizzle-orm";
import { db } from "@/lib/db/client";
import { chatDocuments } from "@/lib/db/schema/chat";

export async function getUserDocument(documentId: string, userId: string) {
  return db.query.chatDocuments.findFirst({
    where: and(
      eq(chatDocuments.id, documentId),
      eq(chatDocuments.user_id, userId)
    ),
  });
}
```
This pattern prevents one user from reading or mutating another user’s records.
If you want database-level defense in depth, you can add your own PostgreSQL RLS policies on top of this model.
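As an illustration, an ownership policy for the chat documents table might look like the following. Table and setting names are assumptions for the sketch: this presumes your application sets an `app.current_user_id` value per connection, and on a host like Supabase you would use `auth.uid()` instead.

```sql
-- Illustrative only: assumes a chat_documents table with a user_id column
-- and an app.current_user_id setting populated by the application.
ALTER TABLE chat_documents ENABLE ROW LEVEL SECURITY;

CREATE POLICY chat_documents_owner ON chat_documents
  FOR ALL
  USING (user_id = current_setting('app.current_user_id')::uuid)
  WITH CHECK (user_id = current_setting('app.current_user_id')::uuid);
```

With this in place, even a query that forgets to filter by user_id cannot read or write another user's rows.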
AI services can get expensive fast. Protect yourself on two levels:
  • Provider-level: Set budget alerts and hard spending limits in every AI provider dashboard you enable (OpenAI, Anthropic, Google, etc.).
  • Application-level: The credit system acts as a usage meter. Users must have sufficient credits for paid AI features, and credits are consumed through transactional database updates to prevent race conditions.
Always set up budget alerts and hard limits in your AI service dashboards. The credit system is your app-level safeguard, but provider-level limits are your financial safety net.
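The application-level guard boils down to an atomic check-and-decrement. A minimal sketch of the invariant, using an in-memory map as a stand-in for the credits table (the real system performs this as a single conditional UPDATE inside a database transaction):

```typescript
// In-memory stand-in for the credits table; illustrates the invariant the
// transactional update enforces: a balance never goes below zero.
const balances = new Map<string, number>([["user-1", 5]]);

// Mirrors a conditional UPDATE such as:
//   UPDATE credits SET balance = balance - $cost
//   WHERE user_id = $id AND balance >= $cost
function consumeCredits(userId: string, cost: number): boolean {
  const balance = balances.get(userId) ?? 0;
  if (balance < cost) return false; // insufficient credits: reject the request
  balances.set(userId, balance - cost); // decrement only after the check
  return true;
}
```

In the database, the `balance >= cost` check and the decrement happen in one statement, so two concurrent requests cannot both pass the check against the same starting balance.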
Magic link and password-reset emails go through your configured provider (Resend, Loops, or Brevo), not through a bundled SMTP path. Before launch:
  • Verify your sender domain
  • Monitor bounces and suppressions
  • Rate limit auth email endpoints if abuse becomes a concern
  • Test magic link and reset-password delivery
Revisit your auth, upload, and AI-abuse protections before launch. Most production security issues in products like this come from missed operational controls, not missing UI code.