JSON Output Functions Parser

The JSON Output Functions Parser parses structured JSON function responses, such as those returned by OpenAI function calling. It is particularly useful when you need to extract specific fields from a complex JSON response.

Here's how it works:

  1. Output Parser: You can either pass in a predefined outputParser, or the parser will use the default OutputFunctionsParser.

  2. Default Behavior: If the default OutputFunctionsParser is used, it extracts the function call from the response generation and applies JSON.stringify to it.

  3. argsOnly Parameter: If the argsOnly parameter is set to true, the parser returns only the arguments of the function call, without applying JSON.stringify to the response (see the sketch after this list).

  4. Response Parsing: The string returned by the output parser is then parsed again as JSON, and the resulting object is returned.
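
The following is a minimal sketch of these options. The exact constructor shapes are assumptions based on the description above, so check the API reference for the authoritative signatures:

import {
  JsonOutputFunctionsParser,
  OutputFunctionsParser,
} from "langchain/output_parsers";

// Default: wraps an OutputFunctionsParser that returns the function call's
// arguments as a string, which is then parsed back into an object.
const defaultParser = new JsonOutputFunctionsParser();

// Assumed option shape: with argsOnly set to false, the underlying parser
// serializes the whole function call (name and arguments) with JSON.stringify
// instead of returning the arguments string alone.
const rawCallParser = new OutputFunctionsParser({ argsOnly: false });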

Let's look at a complete example:

npm install @langchain/openai
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";
import { HumanMessage } from "@langchain/core/messages";

// Instantiate the parser
const parser = new JsonOutputFunctionsParser();

// Define the function schema
const extractionFunctionSchema = {
  name: "extractor",
  description: "Extracts fields from the input.",
  parameters: {
    type: "object",
    properties: {
      tone: {
        type: "string",
        enum: ["positive", "negative"],
        description: "The overall tone of the input",
      },
      word_count: {
        type: "number",
        description: "The number of words in the input",
      },
      chat_response: {
        type: "string",
        description: "A response to the human's input",
      },
    },
    required: ["tone", "word_count", "chat_response"],
  },
};

// Instantiate the ChatOpenAI class
const model = new ChatOpenAI({ model: "gpt-4" });

// Create a new runnable, bind the function to the model, and pipe the output through the parser
const runnable = model
  .bind({
    functions: [extractionFunctionSchema],
    function_call: { name: "extractor" },
  })
  .pipe(parser);

// Invoke the runnable with an input
const result = await runnable.invoke([
  new HumanMessage("What a beautiful day!"),
]);

console.log({ result });

/**
{
  result: {
    tone: 'positive',
    word_count: 4,
    chat_response: "Indeed, it's a lovely day!"
  }
}
*/

In this example, we first define a function schema and instantiate the ChatOpenAI class. We then create a runnable by binding the function to the model and piping the output through the JsonOutputFunctionsParser. When we invoke the runnable with an input, the response is already parsed thanks to the output parser.

The result will be a JSON object that contains the parsed response from the function call.

Streaming

This parser is also convenient for parsing function responses in a streaming fashion. It can stream either the aggregated function response or a JSON patch diff:

import { z } from "zod";
import { zodToJsonSchema } from "zod-to-json-schema";
import { ChatOpenAI } from "@langchain/openai";
import { JsonOutputFunctionsParser } from "langchain/output_parsers";
import { ChatPromptTemplate } from "@langchain/core/prompts";

const schema = z.object({
  setup: z.string().describe("The setup for the joke"),
  punchline: z.string().describe("The punchline to the joke"),
});

const modelParams = {
  functions: [
    {
      name: "joke",
      description: "A joke",
      parameters: zodToJsonSchema(schema),
    },
  ],
  function_call: { name: "joke" },
};

const prompt = ChatPromptTemplate.fromTemplate(
  `tell me a long joke about {foo}`
);
const model = new ChatOpenAI({
  temperature: 0,
}).bind(modelParams);

const chain = prompt
  .pipe(model)
  .pipe(new JsonOutputFunctionsParser({ diff: true }));

const stream = await chain.stream({
  foo: "bears",
});

// Stream a diff as JSON patch operations
for await (const chunk of stream) {
  console.log(chunk);
}

/*
[]
[ { op: 'add', path: '/setup', value: '' } ]
[ { op: 'replace', path: '/setup', value: 'Why' } ]
[ { op: 'replace', path: '/setup', value: 'Why don' } ]
[ { op: 'replace', path: '/setup', value: "Why don't" } ]
[ { op: 'replace', path: '/setup', value: "Why don't bears" } ]
[ { op: 'replace', path: '/setup', value: "Why don't bears wear" } ]
[
  {
    op: 'replace',
    path: '/setup',
    value: "Why don't bears wear shoes"
  }
]
[
  {
    op: 'replace',
    path: '/setup',
    value: "Why don't bears wear shoes?"
  },
  { op: 'add', path: '/punchline', value: '' }
]
[ { op: 'replace', path: '/punchline', value: 'Because' } ]
[ { op: 'replace', path: '/punchline', value: 'Because they' } ]
[ { op: 'replace', path: '/punchline', value: 'Because they have' } ]
[
  {
    op: 'replace',
    path: '/punchline',
    value: 'Because they have bear'
  }
]
[
  {
    op: 'replace',
    path: '/punchline',
    value: 'Because they have bear feet'
  }
]
[
  {
    op: 'replace',
    path: '/punchline',
    value: 'Because they have bear feet!'
  }
]
*/
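
If you want to materialize the object yourself from the diff stream, you can apply each batch of patch operations as it arrives. The helper below is a hypothetical, minimal sketch that handles only the add and replace operations on top-level paths seen above; a full implementation would follow RFC 6902 (for example, via a JSON patch library).

type PatchOp = { op: string; path: string; value?: unknown };

// Apply top-level "add"/"replace" JSON patch ops to a plain object.
const applyOps = (doc: Record<string, unknown>, ops: PatchOp[]) => {
  for (const { op, path, value } of ops) {
    if (op === "add" || op === "replace") {
      doc[path.slice(1)] = value; // "/setup" -> "setup"
    }
  }
  return doc;
};

// A stream can only be consumed once, so create a fresh one here.
const diffStream = await chain.stream({ foo: "bears" });
const state: Record<string, unknown> = {};
for await (const ops of diffStream) {
  applyOps(state, ops as PatchOp[]);
}
console.log(state); // { setup: "...", punchline: "..." }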

const chain2 = prompt.pipe(model).pipe(new JsonOutputFunctionsParser());

const stream2 = await chain2.stream({
  foo: "beets",
});

// Stream the entire aggregated JSON object
for await (const chunk of stream2) {
  console.log(chunk);
}

/*
{}
{ setup: '' }
{ setup: 'Why' }
{ setup: 'Why did' }
{ setup: 'Why did the' }
{ setup: 'Why did the beet' }
{ setup: 'Why did the beet go' }
{ setup: 'Why did the beet go to' }
{ setup: 'Why did the beet go to therapy' }
{ setup: 'Why did the beet go to therapy?', punchline: '' }
{ setup: 'Why did the beet go to therapy?', punchline: 'Because' }
{ setup: 'Why did the beet go to therapy?', punchline: 'Because it' }
{
  setup: 'Why did the beet go to therapy?',
  punchline: 'Because it had'
}
{
  setup: 'Why did the beet go to therapy?',
  punchline: 'Because it had a'
}
{
  setup: 'Why did the beet go to therapy?',
  punchline: 'Because it had a lot'
}
{
  setup: 'Why did the beet go to therapy?',
  punchline: 'Because it had a lot of'
}
{
  setup: 'Why did the beet go to therapy?',
  punchline: 'Because it had a lot of unresolved'
}
{
  setup: 'Why did the beet go to therapy?',
  punchline: 'Because it had a lot of unresolved issues'
}
{
  setup: 'Why did the beet go to therapy?',
  punchline: 'Because it had a lot of unresolved issues!'
}
*/
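
Because each aggregated chunk supersedes the previous one, keeping only the last chunk yields the final parsed object. A small sketch (again on a fresh stream, since streams are single-use):

const stream3 = await chain2.stream({ foo: "beets" });

let finalResult: object | undefined;
for await (const chunk of stream3) {
  finalResult = chunk; // each chunk is the full object parsed so far
}
console.log(finalResult);
// { setup: '...', punchline: '...' }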
