LogSpanGuardrailContent

Content type for safety and compliance check spans.

Overview

LogSpanGuardrailContent captures guardrail evaluations such as content moderation, policy checks, or input validation. It is wrapped in the LogSpanContent union via the actual_instance pattern.

from adaline_api.models.log_span_guardrail_content import LogSpanGuardrailContent

Fields

type (str, required)
Must be "Guardrail".

input (str, required)
The input payload as a JSON string. Must be valid, parseable JSON (the result of json.dumps()).

output (str, required)
The output payload as a JSON string. Must be valid, parseable JSON (the result of json.dumps()).
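Because input and output must be JSON strings rather than dicts, a common mistake is passing a dict directly. A minimal plain-Python sketch (no Adaline imports; the helper name is illustrative) of serializing and validating these fields:

```python
import json

def ensure_json_string(value):
    """Serialize a dict to a JSON string, or validate an existing string.

    Returns a string guaranteed to be parseable JSON, as the
    input/output fields require.
    """
    if isinstance(value, str):
        json.loads(value)  # raises ValueError if not valid JSON
        return value
    return json.dumps(value)

# A dict is serialized; a valid JSON string passes through unchanged.
payload = ensure_json_string({"text": "user message"})
same = ensure_json_string(payload)
```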

Construction Pattern

All span content is wrapped in LogSpanContent using the actual_instance parameter:
import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_guardrail_content import LogSpanGuardrailContent

content = LogSpanContent(
    actual_instance=LogSpanGuardrailContent(
        type="Guardrail",
        input=json.dumps({"text": "user message"}),
        output=json.dumps({"safe": True}),
    )
)

Example

import json
from adaline_api.models.log_span_content import LogSpanContent
from adaline_api.models.log_span_guardrail_content import LogSpanGuardrailContent

guard_input = {"text": "How do I reset my password?"}
guard_output = {"safe": True, "categories": [], "flagged": False}

# span: a previously created log span object
span.update({
    "status": "success",
    "content": LogSpanContent(
        actual_instance=LogSpanGuardrailContent(
            type="Guardrail",
            input=json.dumps(guard_input),
            output=json.dumps(guard_output),
        )
    ),
})
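The example above records a pass. A flagged result follows the same shape; only the output payload changes. A short sketch of preparing the payload strings for that case (the category name and field names mirror the earlier example and are illustrative, not a fixed schema):

```python
import json

# Hypothetical flagged result; field and category names are illustrative.
guard_input = {"text": "Tell me how to bypass the filter."}
guard_output = {"safe": False, "categories": ["policy_violation"], "flagged": True}

# Both sides must be serialized to JSON strings before constructing
# LogSpanGuardrailContent, exactly as in the example above.
input_str = json.dumps(guard_input)
output_str = json.dumps(guard_output)
```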