AgentThinkConfig

LLM configuration for the Agent API think stage.

Defined in: src/providers/agent/deepgram/types.ts:441

Remarks

Controls which LLM provider handles inference, the system prompt, available functions, and optional custom endpoint routing.

Properties

| Property | Type | Description | Defined in |
| --- | --- | --- | --- |
| `context_length?` | `number \| "max"` | Maximum number of conversation-history tokens sent to the LLM. Set to `"max"` to use the model's full context window; a numeric value caps the token count, which can reduce latency and cost for long sessions. | src/providers/agent/deepgram/types.ts:472 |
| `endpoint?` | `{ headers?: Record<string, string>; url: string; }` | Custom LLM endpoint override. Routes inference through a self-hosted or OpenAI-compatible endpoint instead of the provider's default URL. | src/providers/agent/deepgram/types.ts:452 |
| `endpoint.headers?` | `Record<string, string>` | Additional HTTP headers sent with each request. | src/providers/agent/deepgram/types.ts:456 |
| `endpoint.url` | `string` | Fully-qualified URL of the custom LLM endpoint. | src/providers/agent/deepgram/types.ts:454 |
| `functions?` | `AgentFunctionDefinition[]` | Functions the agent can call during the conversation. | src/providers/agent/deepgram/types.ts:463 |
| `prompt?` | `string` | System prompt that defines the agent's persona and behaviour. | src/providers/agent/deepgram/types.ts:460 |
| `provider` | `ThinkProvider` | LLM provider configuration. See `ThinkProvider` for options. | src/providers/agent/deepgram/types.ts:443 |
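
Example

A minimal sketch of assembling a think-stage configuration with a custom endpoint and a capped context window. The stub interfaces below are illustrative stand-ins only: the real `ThinkProvider` and `AgentFunctionDefinition` shapes are defined in src/providers/agent/deepgram/types.ts and may differ, and the provider fields, URL, and token value shown are assumptions, not documented values.

```typescript
// Illustrative stand-ins for the library types; the real definitions live in
// src/providers/agent/deepgram/types.ts and may have a different shape.
interface ThinkProviderStub {
  type: string;   // assumed discriminator, e.g. "open_ai"
  model?: string; // assumed model selector
}

interface AgentFunctionDefinitionStub {
  name: string;
  description?: string;
  parameters?: Record<string, unknown>;
}

interface AgentThinkConfigStub {
  provider: ThinkProviderStub;
  prompt?: string;
  functions?: AgentFunctionDefinitionStub[];
  context_length?: number | "max";
  endpoint?: { url: string; headers?: Record<string, string> };
}

// Route inference through a hypothetical self-hosted OpenAI-compatible
// endpoint and cap conversation history at 8000 tokens.
const thinkConfig: AgentThinkConfigStub = {
  provider: { type: "open_ai", model: "gpt-4o-mini" },
  prompt: "You are a concise support agent.",
  context_length: 8000,
  endpoint: {
    url: "https://llm.internal.example.com/v1/chat/completions",
    headers: { Authorization: "Bearer <token>" },
  },
};
```

Omitting `endpoint` falls back to the provider's default URL, and omitting `context_length` leaves the history limit at the provider default; `"max"` trades latency and cost for the largest usable window.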

© 2026 CompositeVoice. All rights reserved.
