Transient Reference

Workflow Policy

Required behavior for any integration that consumes TI output — REST API, MCP, or downstream pipeline.

Understand runtime states

These states are part of normal operation. None indicate system failure on their own.

State | Meaning | Required action
----- | ------- | ---------------
retrieval_ready = false | Indexing still in progress | Poll run status and apply backoff
citation_count = 0 | No evidence retrieved | Expand the query before declaring absence
weak_extraction | Low parse quality, reduced retrieval confidence | Retry the upload or improve the source format
grounded = false | No evidence supports the answer | Abstain; do not synthesise a response
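
The states above can be sketched as a single dispatch function. The field names come from the table; the returned action labels and the check order are illustrative assumptions, not part of the TI API:

```javascript
// Map runtime state fields to the required action.
// Field names match the table; action labels are illustrative.
function classifyRun(run) {
  if (!run.retrieval_ready) return "poll";         // indexing in progress
  if (run.weak_extraction) return "retry_upload";  // low parse quality
  if (run.citation_count === 0) return "expand_query";
  if (!run.grounded) return "abstain";             // never synthesise
  return "proceed";
}
```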

Handle absence correctly

citation_count = 0 means retrieval did not match — not that the content is absent. Always confirm retrieval_ready = true before treating zero results as final, then expand the query.

JavaScript
// Poll until retrieval_ready, with exponential backoff
async function waitForReady(runId, apiKey) {
  let delay = 2000;
  while (true) {
    const res = await fetch(`/api/models/v1/runs/${runId}`, {
      headers: { "x-api-key": apiKey }
    });
    const { retrieval_ready } = await res.json();
    if (retrieval_ready) return;
    await new Promise(r => setTimeout(r, delay));
    delay = Math.min(delay * 2, 30000); // back off, cap at 30s
  }
}

Query expansion checklist

  • Rewrite using synonyms and related section headings.
  • Include numeric tokens and dates explicitly — embedding retrieval can miss them.
  • Retry with a lexical fallback if the first pass returns zero.
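
The checklist above can be sketched as a retry loop. `searchTI` is a hypothetical wrapper around the retrieval endpoint, and the expansion helpers are placeholder assumptions, not part of the TI API:

```javascript
// Illustrative helper: surface numeric tokens and dates explicitly.
function numericTokens(query) {
  return query.match(/\d[\d.,\/%$]*/g) ?? [];
}

// Try expanded embedding queries first, then a lexical fallback.
// `searchTI(query, opts)` is assumed to resolve to { citation_count, ... }.
async function searchWithExpansion(query, searchTI) {
  const variants = [
    query,
    query.toLowerCase(),                          // placeholder for a synonym rewrite
    `${query} ${numericTokens(query).join(" ")}`, // numbers and dates made explicit
  ];
  for (const q of variants) {
    const result = await searchTI(q, { mode: "embedding" });
    if (result.citation_count > 0) return result;
  }
  // All embedding passes returned zero: lexical fallback
  return searchTI(query, { mode: "lexical" });
}
```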

Keep evidence constraints through your pipeline

TI scopes output to indexed evidence only. Check grounded before using a response downstream.

JavaScript
// Abstain if grounded = false
const data = await askTI(sessionId, question);

if (!data.grounded) {
  return { answer: null, reason: "No evidence found." };
}

// Only proceed if grounded
return { answer: data.answer, citations: data.citations };
  • Do not pass TI output to an unconstrained LLM summariser — hallucinations can be reintroduced.
  • Preserve abstention through the full pipeline, including UI copy.
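
A minimal sketch of carrying abstention through to UI copy, assuming the `{ answer, reason }` shape from the snippet above; the message strings are illustrative:

```javascript
// Render TI output without softening an abstention.
function renderAnswer(result) {
  if (result.answer === null) {
    // Surface the abstention verbatim; do not invite the UI to speculate
    return `No answer: ${result.reason}`;
  }
  return `${result.answer} (${result.citations.length} citation(s))`;
}
```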

Map claims to citations

Every factual claim must map to at least one citation. A citation block at the end of a paragraph is not sufficient.

JSON
// Required structured output shape
{
  "claims": [
    {
      "statement": "Q3 revenue reached $1.2M",
      "support_type": "direct | partial | inferred",
      "citations": ["doc_abc_page_2_paragraph_4"]
    }
  ]
}
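
One way to enforce this shape before accepting output. The field names follow the JSON above; the checks are a sketch, not the full grounding test:

```javascript
// Reject any claim that lacks at least one citation or a valid support_type.
function validateClaims(output) {
  const errors = [];
  (output.claims ?? []).forEach((claim, i) => {
    if (!Array.isArray(claim.citations) || claim.citations.length === 0) {
      errors.push(`claim ${i} has no citation: "${claim.statement}"`);
    }
    if (!["direct", "partial", "inferred"].includes(claim.support_type)) {
      errors.push(`claim ${i} has unknown support_type`);
    }
  });
  return { valid: errors.length === 0, errors };
}
```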

What "grounded" means

Every claim is traceable. Each locator is navigable. No inference extends beyond the citation's scope.

Watch for normative drift

Normative drift occurs when the AI applies an unstated external standard to evaluate a document, rather than confining its output to what the document explicitly contains. The standard comes from the model's priors about what a given document type "should" look like — not from the user's question.

In practice, it surfaces as commentary on what the document lacks rather than what it contains. The user asked a scoped question; the AI answered a broader one it invented.

Drift

"The document is missing a multi-year forecast."

Assumes a standard the user never referenced.

On scope

"The document includes a 1-year forecast and a 50% YoY target."

Answers only what was asked.

  • Any implied absence or structural deficiency is normative drift.
  • Rewrite to what is cited and within the requested scope.
  • If completeness is the question, make that explicit in the query.

Integration patterns

Non-compliant

  1. Retrieve citations from TI.
  2. Discard citation constraints.
  3. Generate a narrative summary via an LLM.

Compliant

  1. Decompose the response into atomic claims.
  2. Map each claim to a citation.
  3. Forbid unsupported additions in downstream steps.
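
The compliant steps can be sketched end to end. Claim decomposition is left abstract here; the filtering step is the point, and the claim shape is the one from the structured output above:

```javascript
// Keep only claims that map to a citation; surface the rest as omissions
// instead of letting a downstream step restate them without evidence.
function enforceGrounding(claims) {
  const supported = claims.filter(c => (c.citations ?? []).length > 0);
  const omitted = claims
    .filter(c => (c.citations ?? []).length === 0)
    .map(c => c.statement);
  return { claims: supported, omitted };
}
```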

TI optimises for accuracy over completeness, evidence over narrative, and abstention over speculation.