
Your Framework Is Your AI's Biggest Expense

· 9 min read
Vince Canger
Developer Relations @ Wasp. Creator of OpenSaaS.sh.

TL;DR

We gave Claude Code the exact same feature prompt for two identical apps (one built with Next.js, the other built with Wasp) and measured everything Claude Code did to implement the feature.

| Metric | Wasp | Next.js | Wasp's Advantage |
| --- | --- | --- | --- |
| Total cost | $2.87 | $5.17 | 44% cheaper |
| Total input & output tokens | 2.5M | 4.0M | 38% fewer |
| API turns | 66 | 96 | 31% fewer |
| Tool uses | 52 | 66 | 21% fewer |
| Files read | 12 | 15 | Smaller blast radius |
| Output tokens (code written) | 5,416 | 5,395 | ~same |

Not surprisingly, the savings were proportional to the amount of code (measured in tokens) that the Wasp framework abstracts away, which we also measured by running a static token count across both codebases. In this case, the Wasp version reduced total code by ~40%.

Because the framework allows Claude Code to get the same work done in fewer tokens, it delivers a ~70% higher token efficiency (output per token).
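If you want to ballpark that kind of static token count yourself, here's a crude sketch. It uses the common ~4 characters per token heuristic rather than a real tokenizer, and the extension list is illustrative, so treat the numbers as rough estimates only:

```typescript
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join, extname } from "node:path";

// Rough approximation: ~4 characters per token for English text and code.
const CHARS_PER_TOKEN = 4;
const SOURCE_EXTENSIONS = new Set([".ts", ".tsx", ".js", ".jsx", ".prisma", ".wasp"]);

// Recursively estimate the token count of all source files under `dir`.
function countTokens(dir: string): number {
  let total = 0;
  for (const entry of readdirSync(dir)) {
    if (entry === "node_modules" || entry === ".git") continue;
    const path = join(dir, entry);
    if (statSync(path).isDirectory()) {
      total += countTokens(path);
    } else if (SOURCE_EXTENSIONS.has(extname(path))) {
      total += Math.ceil(readFileSync(path, "utf8").length / CHARS_PER_TOKEN);
    }
  }
  return total;
}

// Example usage (directory names are illustrative):
// const waspTokens = countTokens("saas-starter-wasp");
// const nextTokens = countTokens("saas-starter-next");
// console.log(`Wasp is ${(100 * (1 - waspTokens / nextTokens)).toFixed(0)}% smaller`);
```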

So, if you're using AI coding tools daily, your framework choice might be your single biggest lever for improving your AI's ability to generate accurate, complex code, quickly and cheaply.

What Wasp Actually Is

Wasp is a batteries-included full-stack framework for React, Node.js, and Prisma. Think Laravel-like productivity for the JS ecosystem, but where authentication, routing, server functions, database access, and cron jobs are defined declaratively as config.

You write business logic and Wasp handles the boilerplate and glue code for you.

Here's a simple example of a Wasp app's config file:

main.wasp.ts

import { App } from 'wasp-config'

const app = new App('todoApp', {
  title: 'ToDo App',
  wasp: { version: '^0.21' },
  // head: []
});

app.auth({
  userEntity: User,
  methods: {
    google: {},
    email: {...}
  }
});

const mainPage = app.page('MainPage', {
  component: { importDefault: 'Main', from: '@src/pages/MainPage' }
});
app.route('RootRoute', { path: '/', to: mainPage });

app.query('getTasks', {
  fn: { import: 'getTasks', from: '@src/queries' },
  entities: ['Task']
});

app.job('taskReminderJob', {
  executor: 'PgBoss',
  perform: {
    fn: { import: 'sendTaskReminder', from: '@src/workers/taskReminder' }
  },
  schedule: { cron: "*/5 * * * *" },
  entities: ['Task']
});

export default app;

Wasp's opinionated approach via its config gives AI tools (and developers) a big advantage: it acts as a large "specification" that both you and your AI coding agents already understand and agree on.

This gives AI one common pattern to follow, fewer decisions to make, less boilerplate to write, and fewer tools to stitch together, making the entire development process more reliable.
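To show where the business logic actually lives, here's a sketch of the server function behind the `getTasks` declaration above. The `Task` and `Context` types are simplified stand-ins for illustration; in a real Wasp app they come from Wasp's generated `wasp/server/operations` module and the Prisma client:

```typescript
// Sketch of src/queries.ts: the function referenced by the `getTasks`
// declaration. Types below are simplified stand-ins, not Wasp's real ones.
type Task = { id: number; description: string; isDone: boolean };

type Context = {
  // Wasp injects only the entities listed in the declaration
  // (`entities: ['Task']`), so the function can't touch anything else.
  entities: { Task: { findMany: (args?: object) => Promise<Task[]> } };
};

// Plain async function: Wasp generates the HTTP endpoint, the typed
// client call, and the React Query caching around it.
export const getTasks = async (_args: unknown, context: Context): Promise<Task[]> => {
  return context.entities.Task.findMany({ orderBy: { id: 'asc' } });
};
```

On the client, the same declaration yields an importable, typed query hook, so neither you nor the AI writes any fetch/route/serialization glue.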

Show, Don't Tell

Before the numbers mean anything, here's how Wasp's "declarative config" compares to Next.js's equivalent.

In this example, we're comparing the fundamental auth setup from the actual test apps: the first block shows Wasp's config, and the second shows the Next.js equivalent.

main.wasp

auth: {
  userEntity: User,
  methods: {
    email: {
      fromField: {
        name: "SaaS App",
        email: "hello@example.com"
      },
      emailVerification: {
        clientRoute: EmailVerificationRoute,
      },
      passwordReset: {
        clientRoute: PasswordResetRoute,
      },
    },
  },
  onAfterSignup: import { onAfterSignup } from "@src/auth/hooks",
  onAuthFailedRedirectTo: "/login",
  onAuthSucceededRedirectTo: "/dashboard",
},
lib/auth/session.ts + middleware.ts
// lib/auth/session.ts
import { compare, hash } from 'bcryptjs';
import { SignJWT, jwtVerify } from 'jose';
import { cookies } from 'next/headers';
import { NewUser } from '@/lib/db/schema';

const key = new TextEncoder().encode(process.env.AUTH_SECRET);
const SALT_ROUNDS = 10;

type SessionData = {
  user: { id: number };
  expires: string;
};

export async function signToken(payload: SessionData) {
  return await new SignJWT(payload)
    .setProtectedHeader({ alg: 'HS256' })
    .setIssuedAt()
    .setExpirationTime('1 day from now')
    .sign(key);
}

export async function verifyToken(input: string) {
  const { payload } = await jwtVerify(input, key, { algorithms: ['HS256'] });
  return payload as SessionData;
}

export async function setSession(user: NewUser) {
  const expiresInOneDay = new Date(Date.now() + 24 * 60 * 60 * 1000);
  const session: SessionData = {
    user: { id: user.id! },
    expires: expiresInOneDay.toISOString(),
  };
  const encryptedSession = await signToken(session);
  (await cookies()).set('session', encryptedSession, {
    expires: expiresInOneDay,
    httpOnly: true, secure: true, sameSite: 'lax',
  });
}

export async function getSession() {
  const session = (await cookies()).get('session')?.value;
  if (!session) return null;
  return await verifyToken(session);
}

export async function hashPassword(password: string) {
  return hash(password, SALT_ROUNDS);
}

export async function comparePasswords(plainText: string, hashed: string) {
  return compare(plainText, hashed);
}

// middleware.ts: route protection + token refresh
import { NextRequest, NextResponse } from 'next/server';
import { signToken, verifyToken } from '@/lib/auth/session';

const protectedRoutes = '/dashboard';

export async function middleware(request: NextRequest) {
  const { pathname } = request.nextUrl;
  const sessionCookie = request.cookies.get('session');
  const isProtectedRoute = pathname.startsWith(protectedRoutes);

  if (isProtectedRoute && !sessionCookie) {
    return NextResponse.redirect(new URL('/sign-in', request.url));
  }

  let res = NextResponse.next();

  if (sessionCookie && request.method === 'GET') {
    try {
      const parsed = await verifyToken(sessionCookie.value);
      const expiresInOneDay = new Date(Date.now() + 24 * 60 * 60 * 1000);
      res.cookies.set({
        name: 'session',
        value: await signToken({
          ...parsed,
          expires: expiresInOneDay.toISOString(),
        }),
        httpOnly: true, secure: true, sameSite: 'lax',
        expires: expiresInOneDay,
      });
    } catch (error) {
      res.cookies.delete('session');
      if (isProtectedRoute) {
        return NextResponse.redirect(new URL('/sign-in', request.url));
      }
    }
  }
  return res;
}

As you can see, Wasp's config is far more concise and readable than the equivalent Next.js code.

Why This Matters for AI

Now that you can see the abstraction gap, here's why it compounds for AI:

  1. Context window is a hard ceiling

    AI tools are stateless and have a finite context window. A larger codebase fills it faster, which means earlier message compression, less room for reasoning, and degraded output. Developers also have to maintain detailed memory and skill files to help AI understand the app structure and implementation expectations.

  2. Signal-to-noise ratio

    More tokens aren't just more expensive; they're more noise. Wasp's config is pure signal for AI, acting like a spec and a concise map of the app the AI can follow. Next.js's equivalent is spread across route files, API handlers, middleware, and config. Higher noise means a higher chance of mistakes.

  3. Every API turn re-reads the codebase

    AI tools don't remember between turns. Each turn re-reads the session conversation and codebase context from scratch. A bigger codebase means every single turn costs more. In our test, cache reads alone cost $1.71 with Next.js vs. $1.09 for Wasp.

  4. It compounds over a project's lifetime

    This test only measured one full-stack feature (db model, server operation, client page), and the differences were already significant (2.5M tokens vs. 4.0M tokens). In larger codebases, these differences quickly compound.

The Full Results

Combined: Planning + Implementation Phases

| Metric | Wasp | Next.js | Delta |
| --- | --- | --- | --- |
| Total cost | $2.87 | $5.17 | Next.js 80% more expensive |
| Total duration | 14.9m | 15.0m | Nearly identical |
| Total API turns | 66 | 96 | Next.js 45% more |
| Total tokens | 2,505,796 | 4,049,413 | Next.js 62% more |
| Total tool uses | 52 | 66 | Next.js 27% more |
| Subagents spawned | 3 | 3 | Same |
| Unique files read | 12 | 15 | |
| Files edited | 6 | 6 | Same |
| Files created | 2 | 3 | |
| Token cost by category | | | |
| Input | $0.0005 | $0.0801 | |
| Output | $0.2099 | $0.2135 | Nearly identical |
| Cache read | $1.0861 | $1.7073 | Next.js 57% more |
| Cache creation | $1.3230 | $2.8184 | Next.js 113% more |
| Subagent | $0.25 | $0.35 | |

Here are the main takeaways:

  • Output tokens (what the AI wrote): nearly identical ($0.21 vs $0.21)
  • Cache creation (what the AI first loaded): Next.js 113% more ($2.82 vs $1.32)
  • Cache read (what the AI re-read each turn): Next.js 57% more ($1.71 vs $1.09)

Cache creation cost. Next.js had 2.2x more cache-creation tokens (343K vs 155K); at $6.25/M that's $2.15 vs $0.97, a $1.18 difference that accounts for most of the gap. The bigger codebase means more new content being loaded into cache.

Cache read cost. Each API turn re-reads the growing context, and the Next.js codebase is bigger, so each read costs more: $1.14 vs $0.67. Output tokens were nearly identical (~5,400), meaning the AI wrote roughly the same amount of code but had to read far more to do it.

The main difference comes from the Next.js codebase being bigger: more tokens to load and re-read across every single API turn to get the same output.
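To make the arithmetic concrete, here's a small sketch reproducing the per-category cost math, using the Opus prices from the Methodology section. Note it uses the rounded token counts quoted above, so the cents can differ slightly from the figures in the text:

```typescript
// Per-1M-token prices (Opus, from the Methodology section).
const PRICE_PER_M = { input: 5.0, output: 25.0, cacheRead: 0.5, cacheCreate: 6.25 };

type Usage = { input: number; output: number; cacheRead: number; cacheCreate: number };

// Total cost of a session: tokens in each category times its price.
function sessionCost(u: Usage): number {
  return (
    (u.input * PRICE_PER_M.input +
      u.output * PRICE_PER_M.output +
      u.cacheRead * PRICE_PER_M.cacheRead +
      u.cacheCreate * PRICE_PER_M.cacheCreate) /
    1_000_000
  );
}

// Cache creation alone, with the rounded counts quoted above:
const nextjs = sessionCost({ input: 0, output: 0, cacheRead: 0, cacheCreate: 343_000 });
const wasp = sessionCost({ input: 0, output: 0, cacheRead: 0, cacheCreate: 155_000 });
console.log(nextjs.toFixed(2), wasp.toFixed(2)); // 2.14 0.97
```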

Proportionality: The Core Insight

The savings mirror what Wasp abstracts: authentication, routing, database management, server functions, and jobs are defined in the config, so AI doesn't need to read, navigate, or generate layers of glue code.

And while this is a new approach in the framework space, Wasp is just following a general principle here: tools that make coding easier for humans make it easier for AI, too. They offload structural work and let the AI focus on business logic, giving a clearer path to generating complex code more accurately.

Think of it as a new evaluation axis for any framework: "context efficiency", or how much of an AI's context window goes to signal vs. boilerplate. Add it alongside DX, performance, and ecosystem when choosing your stack in the AI-assisted coding era.

Methodology

We compared the two frameworks on the same feature prompt against the same base app: Vercel's SaaS Starter.

We made sure to use the same models (Opus for planning and implementation, Haiku for exploring), the same framework-agnostic prompt, and the same plan-then-implement flow, outlined in the test protocol. Measurement scripts pulled metrics from Claude Code's detailed JSONL session transcripts.
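As a rough idea of what such a measurement script does, here's a sketch that sums token usage across a JSONL transcript. The `message.usage` field names mirror Anthropic's API usage object, but treat the exact shape as an assumption and verify it against your own transcript files:

```typescript
// Sum token usage over a Claude Code JSONL session transcript.
// Assumed shape: one JSON object per line; assistant entries carry
// a `message.usage` object with Anthropic-style token counters.
type Totals = { input: number; output: number; cacheRead: number; cacheCreate: number; turns: number };

function aggregateUsage(jsonlText: string): Totals {
  const totals: Totals = { input: 0, output: 0, cacheRead: 0, cacheCreate: 0, turns: 0 };
  for (const line of jsonlText.split('\n')) {
    if (!line.trim()) continue;
    const entry = JSON.parse(line);
    const usage = entry?.message?.usage;
    if (!usage) continue; // user turns and tool results carry no usage
    totals.turns += 1;
    totals.input += usage.input_tokens ?? 0;
    totals.output += usage.output_tokens ?? 0;
    totals.cacheRead += usage.cache_read_input_tokens ?? 0;
    totals.cacheCreate += usage.cache_creation_input_tokens ?? 0;
  }
  return totals;
}
```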

We used Anthropic's API pricing as of March 2026 (per 1M tokens), e.g.:

| Model | Input | Output | Cache Read | Cache Create |
| --- | --- | --- | --- | --- |
| Opus 4.6 | $5.00 | $25.00 | $0.50 | $6.25 |

Fairness caveats: Claude has seen far more Next.js training data (advantage: Next.js). Wasp's codebase is ~40% smaller, but that's the point. And this is a single feature test, not a comprehensive benchmark.

Explore the comparison yourself: both apps, the test protocol, and measurement scripts are in the comparison repo.

Try Wasp

Want to try Wasp? Get started with:

npm i -g @wasp.sh/wasp-cli@latest

Then start a new Wasp app:

wasp new my-app
wasp start

or check us out on GitHub.

Wasp is 100% open source. Join our Discord to learn from others and get help whenever you need it!
