Quickstart: TypeScript

Get started with Guardrails TypeScript in minutes. Guardrails provides drop-in replacements for OpenAI clients that automatically validate inputs and outputs using configurable safety checks.

Install

npm install @guardrails/guardrails-ts

Set API Key

export OPENAI_API_KEY=sk-...

Create Pipeline Configuration

The fastest way is to use the Guardrails Wizard, a no-code tool for creating pipeline configurations.

Or define manually:

{
    "version": 1,
    "input": {
        "version": 1,
        "guardrails": [
            {"name": "URL Filter", "config": {}},
            {"name": "Moderation", "config": {"categories": ["hate", "violence"]}}
        ]
    },
    "output": {
        "version": 1,
        "guardrails": [
            {"name": "Contains PII", "config": {"entities": ["EMAIL_ADDRESS", "PHONE_NUMBER"]}}
        ]
    }
}
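
If you keep this configuration in a JSON file, you can load it at startup and pass the parsed object to the client factories shown below. A minimal sketch; the guardrails.json path is just an example:

import { readFileSync } from 'node:fs';

// Load a pipeline configuration from disk (example path)
const config = JSON.parse(readFileSync('guardrails.json', 'utf8'));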

Pipeline Stages

Guardrails uses a pipeline configuration with one to three stages:

  • Preflight - Runs before the LLM call (e.g., PII masking, moderation)
  • Input - Runs in parallel with the LLM call (e.g., jailbreak detection)
  • Output - Runs on the LLM-generated content (e.g., fact checking, compliance)

Not all stages are required - you can use just input, just output, or any combination.
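
For example, a configuration using all three stages might look like the sketch below. The "pre_flight" key and the guardrail names "Mask PII" and "Jailbreak" are assumptions for illustration; check the configuration reference for the canonical names:

{
    "version": 1,
    "pre_flight": {
        "version": 1,
        "guardrails": [
            {"name": "Mask PII", "config": {"entities": ["EMAIL_ADDRESS"]}}
        ]
    },
    "input": {
        "version": 1,
        "guardrails": [
            {"name": "Jailbreak", "config": {}}
        ]
    },
    "output": {
        "version": 1,
        "guardrails": [
            {"name": "Moderation", "config": {"categories": ["hate", "violence"]}}
        ]
    }
}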

Use as Drop-in Replacement

Replace your OpenAI client with the Guardrails version (GuardrailsOpenAI). Both chat.completions.create and responses.create are supported:

import { GuardrailsOpenAI } from '@guardrails/guardrails-ts';

async function main() {
    // Use GuardrailsOpenAI instead of OpenAI
    const client = await GuardrailsOpenAI.create({
        version: 1,
        output: {
            version: 1,
            guardrails: [
                {"name": "Moderation", "config": {"categories": ["hate", "violence"]}}
            ]
        }
    });

    try {
        const response = await client.responses.create({
            model: "gpt-5",
            input: "Hello world"
        });

        // Access OpenAI response via .llm_response
        console.log(response.llm_response.output_text);

    } catch (error: any) {
        // Tripwire exceptions indicate a guardrail flagged the content
        if (error.constructor.name === 'GuardrailTripwireTriggered') {
            console.log(`Guardrail triggered: ${JSON.stringify(error.guardrailResult.info)}`);
        } else {
            throw error;  // Re-throw anything that isn't a tripwire
        }
    }
}

main();

That's it! Your existing OpenAI code now includes automatic guardrail validation based on your pipeline configuration. Just use response.llm_response instead of response.
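
The same pattern works with chat.completions.create. A minimal sketch, reusing the client from the example above and assuming the wrapped response exposes the standard completion under .llm_response:

const chatResponse = await client.chat.completions.create({
    model: "gpt-5",
    messages: [{ role: "user", content: "Hello world" }]
});

// The underlying OpenAI completion object is available via .llm_response
console.log(chatResponse.llm_response.choices[0].message.content);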

Guardrail Execution Error Handling

Guardrails supports two error handling modes for guardrail execution failures:

Fail-Safe Mode (Default)

If a guardrail fails to execute (e.g., an invalid model name), the system continues and returns tripwire_triggered=false:

// Default: raiseGuardrailErrors=false
const client = await GuardrailsOpenAI.create(config);
// Execution continues even if a guardrail errors out

Fail-Secure Mode

Enable strict mode to raise exceptions on guardrail execution failures:

// Strict mode: raiseGuardrailErrors=true
const client = await GuardrailsOpenAI.create(
    config,
    undefined,
    true  // raiseGuardrailErrors = true
);
// Raises exceptions if guardrails fail to execute properly
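
In fail-secure mode you will typically wrap calls in a try/catch. The error class raised for execution failures isn't named here, so this sketch catches generically and distinguishes tripwires by name:

try {
    const response = await client.responses.create({ model: "gpt-5", input: "Hello" });
    console.log(response.llm_response.output_text);
} catch (error: any) {
    // In strict mode this can be a tripwire or a guardrail execution failure
    if (error.constructor.name === 'GuardrailTripwireTriggered') {
        console.log('Safety violation:', error.guardrailResult.info);
    } else {
        console.error('Guardrail execution failed:', error);
    }
}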

Note: This only affects guardrail execution errors. Safety violations (tripwires) are handled separately - see Tripwires for details.

Agents SDK Integration

For OpenAI Agents SDK users, we provide GuardrailAgent as a drop-in replacement:

import { GuardrailAgent } from '@guardrails/guardrails-ts';
import { Runner } from '@openai/agents';

// Create agent with guardrails automatically configured
const agent = await GuardrailAgent.create({
    version: 1,
    output: {
        version: 1,
        guardrails: [
            {"name": "Moderation", "config": {"categories": ["hate", "violence"]}}
        ]
    }
}, "Customer support agent", "You are a customer support agent. You help customers with their questions.");

// Use exactly like a regular Agent
const result = await Runner.run(agent, "Hello, can you help me?");
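
// Print the agent's final answer (assumes the Agents SDK result exposes finalOutput)
console.log(result.finalOutput);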

GuardrailAgent automatically wires up your pipeline configuration to the agent's input and output guardrails, so you can focus on building your agent logic.

Azure OpenAI

Use the Azure-specific client:

import { GuardrailsAzureOpenAI } from '@guardrails/guardrails-ts';

const client = await GuardrailsAzureOpenAI.create({
    version: 1,
    output: {
        version: 1,
        guardrails: [
            {"name": "Moderation", "config": {"categories": ["hate", "violence"]}}
        ]
    }
}, {
    azure_endpoint: "https://your-resource.openai.azure.com/",
    api_key: "your-azure-key",
    api_version: "2025-01-01-preview"
});
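
Calls then go through the same wrapped surface. With Azure, the model parameter is typically your deployment name; the name below is a placeholder:

const response = await client.responses.create({
    model: "your-deployment-name",  // Azure deployment name, not a raw model ID
    input: "Hello world"
});
console.log(response.llm_response.output_text);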

Third-Party Models

Works with any OpenAI-compatible API:

import { GuardrailsOpenAI } from '@guardrails/guardrails-ts';

// Local Ollama model
const client = await GuardrailsOpenAI.create(config, {
    baseURL: "http://127.0.0.1:11434/v1/",
    apiKey: "ollama"
});
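
You can then call the wrapped client as usual. The model name below is just an example of a locally pulled Ollama model:

const response = await client.chat.completions.create({
    model: "llama3",  // any model you've pulled with `ollama pull`
    messages: [{ role: "user", content: "Hello from Ollama" }]
});
console.log(response.llm_response.choices[0].message.content);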

Next Steps