
LogSpanGuardrailContent

Span content for safety and compliance checks. Use this type when logging guardrail operations — content moderation, PII detection, toxicity scoring, policy enforcement, or any validation that gates whether a response is safe to return.

Import

import type { LogSpanGuardrailContent } from '@adaline/api';
import { LogSpanGuardrailContentTypeEnum } from '@adaline/api';

Type Definition

interface LogSpanGuardrailContent {
  type: 'Guardrail';
  input: string;                   // JSON string (must be valid JSON)
  output: string;                  // JSON string (must be valid JSON)
}
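For instance, a minimal value satisfying this shape (a sketch with placeholder payloads; `as const` keeps the discriminator as the literal `'Guardrail'`):

```typescript
// Satisfies LogSpanGuardrailContent: the discriminator plus two JSON strings.
const content = {
  type: 'Guardrail' as const,
  input: JSON.stringify({ text: 'hello' }),
  output: JSON.stringify({ flagged: false }),
};
```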

Properties

  • type - Discriminator field, always 'Guardrail' for this content type
  • input - The content being checked as a JSON string (JSON.stringify() of the input payload)
  • output - The guardrail verdict as a JSON string (JSON.stringify() of the result)

Both input and output must be valid, parseable JSON strings (typically the result of JSON.stringify()). Passing a plain string that is not valid JSON will cause the span to be rejected.
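If a value may already be a string, a small helper can guarantee a valid JSON string either way (a sketch; `toJsonString` is a hypothetical helper, not part of the SDK):

```typescript
// Hypothetical helper: ensure a value ends up as a valid JSON string.
// Plain strings like "hello" are re-serialized to "\"hello\"" so the
// span is not rejected; objects are serialized with JSON.stringify().
function toJsonString(value: unknown): string {
  if (typeof value === 'string') {
    try {
      JSON.parse(value); // already valid JSON, pass through unchanged
      return value;
    } catch {
      return JSON.stringify(value); // plain string, wrap as a JSON string
    }
  }
  return JSON.stringify(value);
}
```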

Example

// Call an external moderation service (example endpoint)
async function checkSafety(text: string) {
  const res = await fetch('https://moderation.example/v1/classify', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text }),
  });
  return res.json();
}

const input = { text: userMessage, checks: ['toxicity', 'pii', 'jailbreak'] };
const result = await checkSafety(input.text);

// Both fields must be JSON strings, so serialize with JSON.stringify()
span.update({
  content: {
    type: 'Guardrail',
    input: JSON.stringify(input),
    output: JSON.stringify(result),
  },
});
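A common follow-up is to gate the response on the verdict before returning it. A sketch, assuming the moderation service returns a `flagged` flag and a `categories` list (that response shape is an assumption, not part of the SDK):

```typescript
// Assumed verdict shape returned by the moderation service (hypothetical).
interface Verdict {
  flagged: boolean;
  categories: string[];
}

// Gate the response: throw if the guardrail flagged the content,
// otherwise pass the response through unchanged.
function enforceVerdict(verdict: Verdict, response: string): string {
  if (verdict.flagged) {
    throw new Error(`Blocked by guardrail: ${verdict.categories.join(', ')}`);
  }
  return response;
}
```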

See Also

  • LogSpanContent — union type that includes LogSpanGuardrailContent
  • Span — class that accepts LogSpanContent via span.update()