Getting Structured Output from LLMs (Without the Pain)

Nov 2, 2025

Getting LLMs to return proper JSON instead of rambling text is harder than it sounds. Ask for JSON and you'll get "Sure! Here's the JSON:" followed by... sometimes JSON, sometimes not.

Here's what actually works.

The Problem

LLMs love to add commentary. Even when you explicitly say "return only JSON", they often wrap it in explanations.
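You can scrape the object out of a chatty response defensively, though the solutions below mostly avoid the need. For completeness, a sketch of the scraping fallback (extractJson is a hypothetical helper, and naive about stray braces in trailing commentary):

// Hypothetical fallback: strip markdown fences, grab the outermost {...}
function extractJson(raw: string): string {
  const unfenced = raw.replace(/```(?:json)?/g, "");
  const start = unfenced.indexOf("{");
  const end = unfenced.lastIndexOf("}");
  if (start === -1 || end <= start) {
    throw new Error("No JSON object found in model output");
  }
  return unfenced.slice(start, end + 1);
}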

Solution 1: Output Anchoring

Start the response for the model:

const response = await llm.complete({
  prompt: `Extract user info as JSON.
Text: "${input}"

{
  "name":`,
});

// The model picks up mid-object, so re-attach the anchored prefix
const json = JSON.parse(`{"name":${response}`);

This works because the model sees you've already started the JSON and just continues. No preamble, no "here you go".
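If your completion API supports stop sequences, you can also keep the model from rambling past the object. A sketch, assuming the hypothetical llm.complete client takes an OpenAI-style stop option and that the model puts the closing brace on its own line:

const response = await llm.complete({
  prompt: `Extract user info as JSON.
Text: "${input}"

{
  "name":`,
  // Halt generation before the closing brace (assumes the model
  // formats the object with a newline before "}")
  stop: ["\n}"],
});

// Re-attach both the anchored prefix and the brace the stop consumed
const json = JSON.parse(`{"name":${response}\n}`);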

Solution 2: Use Response Format (OpenAI)

OpenAI and some others support native JSON mode:

const response = await openai.chat.completions.create({
  model: "gpt-4-turbo",
  response_format: { type: "json_object" },
  messages: [{
    role: "user",
    content: `Return user data as JSON with fields: name, email, age.
    Text: "${input}"`
  }]
});

// Syntactically valid JSON is guaranteed; your exact fields are not
const data = JSON.parse(response.choices[0].message.content);

Way more reliable than prompting alone. Use it when available.
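One caveat: JSON mode guarantees syntax, not shape. Fields can still be missing or mistyped. Newer OpenAI models also accept a stricter json_schema response format (structured outputs) that constrains generation to a schema you supply. Roughly, per OpenAI's docs (verify the exact shape against the current API):

const response = await openai.chat.completions.create({
  model: "gpt-4o",
  response_format: {
    type: "json_schema",
    json_schema: {
      name: "user",
      strict: true, // constrain decoding to the schema
      schema: {
        type: "object",
        properties: {
          name: { type: "string" },
          email: { type: "string" },
          age: { type: "integer" },
        },
        required: ["name", "email", "age"],
        additionalProperties: false,
      },
    },
  },
  messages: [{ role: "user", content: `Extract user data.\nText: "${input}"` }],
});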

Solution 3: Zod + Validation

Never trust LLM output. Always validate:

import { z } from "zod";

const UserSchema = z.object({
  name: z.string(),
  email: z.string().email(),
  age: z.number().int().positive()
});

async function extractUser(text: string) {
  const response = await llm.complete(/* ... */);

  try {
    const parsed = JSON.parse(response);
    return UserSchema.parse(parsed);
  } catch (e) {
    // Retry with the schema spelled out (sketch below)
    return retryWithSchema(text, UserSchema);
  }
}
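retryWithSchema is just a name here, not a library function. One way to implement it: serialize the schema into the prompt and validate the second attempt too. A sketch using the zod-to-json-schema package and the same hypothetical llm.complete client:

import { zodToJsonSchema } from "zod-to-json-schema";

async function retryWithSchema<T extends z.ZodTypeAny>(
  text: string,
  schema: T
): Promise<z.infer<T>> {
  // Turn the Zod schema into JSON Schema the model can read
  const jsonSchema = JSON.stringify(zodToJsonSchema(schema));

  const response = await llm.complete({
    prompt: `Return ONLY a JSON object matching this JSON Schema, no prose:
${jsonSchema}

Text: "${text}"`,
  });

  // Validate again; let it throw if the second attempt also fails
  return schema.parse(JSON.parse(response));
}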

Solution 4: Instructor Library

If you use this pattern a lot, check out Instructor. It handles the retry logic for you:

import Instructor from "@instructor-ai/instructor";

// mode tells Instructor how to extract the data (tool calls here);
// response_model needs both the Zod schema and a name
const client = Instructor({ client: openai, mode: "TOOLS" });

const user = await client.chat.completions.create({
  model: "gpt-4",
  response_model: { schema: UserSchema, name: "User" },
  max_retries: 3, // re-prompts automatically when validation fails
  messages: [{ role: "user", content: `Extract: ${text}` }]
});
// Returns a typed, validated object

Quick Comparison
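Approach           Reliability           Best for
Output anchoring   Medium                Quick scripts, any model
JSON mode          High (syntax only)    Production, when the API supports it
Zod validation     Catches bad output    Any setup; pair with the above
Instructor         High, typed           Production with minimal boilerplate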

My Recommendation

For production code: JSON mode + Zod validation. The model handles formatting, Zod handles type safety. Best of both worlds.
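Wired together, it's only a few lines (reusing UserSchema from Solution 3; safeParse returns a result object instead of throwing):

const response = await openai.chat.completions.create({
  model: "gpt-4-turbo",
  response_format: { type: "json_object" },
  messages: [{
    role: "user",
    content: `Return user data as JSON with fields: name, email, age.
    Text: "${input}"`
  }]
});

// JSON mode keeps the syntax valid; Zod enforces the shape
const result = UserSchema.safeParse(
  JSON.parse(response.choices[0].message.content ?? "{}")
);

if (!result.success) {
  // Schema mismatch: retry or fall back instead of crashing
  throw new Error(result.error.message);
}

const user = result.data; // fully typed: { name: string; email: string; age: number }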

For quick scripts: Output anchoring works fine.

Don't rely on prompting alone. The model will eventually return malformed JSON and your app will crash at 2am. Ask me how I know.
