
AI Clients

Setup guides for Claude, ChatGPT, Perplexity, Gemini, and Grok. For MCP-capable clients, use the hosted MCP endpoint + OAuth. For non-MCP clients, use /v1/answer as a custom tool or action.

The canonical MCP setup values are: https://mcp.transientintelligence.com/mcp and OAuth client ID ti-mcp-public-20260306.

Session note: reuse the same session_id only for follow-up questions on the same evidence set. Start a new session for new documents or new review goals to avoid mixing context.

Claude (MCP)

Use the hosted TI MCP endpoint with OAuth. Do not pass a client secret in end-user setup.

Claude connector fields
MCP URL: https://mcp.transientintelligence.com/mcp
Auth type: OAuth
Client ID: ti-mcp-public-20260306
Client secret: Leave blank; this is a public client and no secret is used
First commands after OAuth
ti_set_api_key({ api_key: "ti_..." })
ti_workflow_policy()
ti_run_workflow({ question: "...", text: "..." })

ChatGPT

Two integration paths depending on your plan: hosted MCP (OAuth) or Custom GPT Actions (REST). Hosted MCP is the recommended path when available.

Custom GPT Actions setup

  1. Open your Custom GPT editor → Actions → Add action.
  2. Import the TI OpenAPI spec from /TI01_V1_OPENAPI.yaml.
  3. Set authentication to API Key, header name x-api-key.
  4. Point the server URL to your TI API base.
System prompt snippet
When a user provides a document or question:
1. Call /v1/answer with the document as 'input' and the question as 'question'.
2. Only present claims that appear in the returned 'citations' array.
3. If citations is empty, state that no evidence was found — do not synthesise.
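The citation rule in steps 2 and 3 amounts to a small post-processing check. A minimal sketch, assuming a response object with `answer` and `citations` fields as named in the snippet above (the exact response shape is an assumption):

```python
def present_answer(response: dict) -> str:
    """Enforce citation discipline: surface the answer only when the
    'citations' array is non-empty; otherwise report that no evidence
    was found rather than synthesising a claim."""
    citations = response.get("citations") or []
    if not citations:
        return "No evidence was found in the document."
    evidence = "; ".join(c.get("text", "") for c in citations)
    return f"{response.get('answer', '')}\n\nEvidence: {evidence}"
```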

Perplexity

Perplexity Pro and local instances support custom tool endpoints. Point a custom tool at /v1/answer for a one-call integration — no upload/poll loop required.

Custom tool definition
{
  "name": "transient_intelligence",
  "description": "Evidence-first document analysis. Returns grounded citations only.",
  "endpoint": "https://api.transientintelligence.com/api/models/v1/answer",
  "method": "POST",
  "headers": { "x-api-key": "YOUR_API_KEY" },
  "parameters": {
    "question": "string (required)",
    "input": "string — document content to analyse",
    "top_k": "integer — evidence depth (default 10)"
  }
}
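Outside Perplexity, the same endpoint can be called directly. A minimal Python sketch using only the standard library; the URL, header, and body fields mirror the tool definition above, and error handling is omitted for brevity:

```python
import json
import urllib.request

API_URL = "https://api.transientintelligence.com/api/models/v1/answer"


def build_answer_request(question: str, document: str,
                         api_key: str, top_k: int = 10) -> urllib.request.Request:
    """Assemble the POST /v1/answer request with the x-api-key header."""
    payload = json.dumps({
        "question": question,
        "input": document,   # document content to analyse
        "top_k": top_k,      # evidence depth (default 10)
    }).encode("utf-8")
    return urllib.request.Request(
        API_URL,
        data=payload,
        headers={"x-api-key": api_key, "Content-Type": "application/json"},
        method="POST",
    )


def ask_transient(question: str, document: str, api_key: str) -> dict:
    """Send the request and decode the JSON response."""
    with urllib.request.urlopen(build_answer_request(question, document, api_key)) as resp:
        return json.load(resp)
```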

Gemini (Google AI Studio)

Gemini supports Function Calling in both the Gemini API and AI Studio custom agents. Define /v1/answer as a function and Gemini will invoke it when document evidence is needed.

Function declaration (Python SDK)
transient_intelligence = {
    "name": "transient_intelligence",
    "description": "Evidence-first document analysis. Returns grounded citations from the provided document.",
    "parameters": {
        "type": "object",
        "properties": {
            "question": { "type": "string", "description": "The question to answer from the document" },
            "input": { "type": "string", "description": "The full document text to analyse" },
            "top_k": { "type": "integer", "description": "Number of evidence passages to retrieve (default 10)" }
        },
        "required": ["question", "input"]
    }
}
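When Gemini decides to invoke the function, your code receives the call's arguments and must forward them to TI. A minimal dispatch sketch, independent of the Gemini SDK; `call_ti` is a hypothetical callable (for example, a wrapper around POST /v1/answer), not part of any SDK:

```python
REQUIRED = ("question", "input")  # from the declaration's "required" list


def dispatch_transient_call(args: dict, call_ti) -> dict:
    """Validate a model-issued function call against the declaration
    above, apply the declared default, and forward it to TI."""
    missing = [k for k in REQUIRED if k not in args]
    if missing:
        raise ValueError(f"missing required parameters: {missing}")
    args.setdefault("top_k", 10)  # declaration default
    return call_ti(**args)
```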

Grok (xAI)

Grok supports tool use via the xAI API. Pass /v1/answer as a tool definition in the tools array and Grok will invoke it when evidence retrieval is appropriate.

xAI API — tool definition (JavaScript)
const tools = [{
  type: "function",
  function: {
    name: "transient_intelligence",
    description: "Evidence-first document analysis. Returns grounded citations from the provided document.",
    parameters: {
      type: "object",
      properties: {
        question: { type: "string" },
        input: { type: "string" },
        top_k: { type: "integer" }
      },
      required: ["question", "input"]
    }
  }
}]
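When Grok invokes the tool, your code executes it and returns the result as a tool message. A Python sketch of that loop, assuming the OpenAI-compatible message shape the xAI API uses (tool calls carry their arguments as a JSON string); `call_ti` is a hypothetical wrapper around /v1/answer:

```python
import json


def handle_tool_calls(assistant_message: dict, call_ti) -> list:
    """Execute transient_intelligence tool calls from an assistant turn
    and build the 'tool' role messages to send back to the model."""
    results = []
    for call in assistant_message.get("tool_calls", []):
        fn = call["function"]
        if fn["name"] != "transient_intelligence":
            continue  # not ours; leave for other handlers
        args = json.loads(fn["arguments"])  # arguments arrive as a JSON string
        results.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(call_ti(**args)),
        })
    return results
```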

System prompt templates

Each client needs a system prompt that enforces citation discipline and defines when to call TI. The Prompt Library has ready-to-copy templates for every client surface, plus use-case addons for legal, financial, and research documents.

Prompt Library

Operating prompt, client-specific system prompts (Claude Desktop, ChatGPT, Gemini, Grok), and use-case addons — all in one place.

Browse the Prompt Library →