AI Models

TeXRA supports models from multiple providers. Select models from the dropdown in the TeXRA UI. Hover over options to see context window and cost estimates.

Model ID suffixes:

  • T = Thinking/reasoning mode enabled (shows chain-of-thought)
  • - = Lighter/faster variant (e.g., gpt5-, gemini25f-)
  • Numbers indicate version (e.g., 45 = 4.5, 25 = 2.5)

Anthropic Models

| Model ID | Use Case | Cost | Speed |
|----------|----------|------|-------|
| opus46T | Complex tasks with reasoning | $$$$ | Slow |
| opus46 | High quality, complex tasks | $$$$ | Slow |
| opus45T | Complex tasks with reasoning | $$$$ | Slow |
| opus45 | High quality, complex tasks | $$$$ | Slow |
| sonnet45T | All-rounder with reasoning | $$$ | Medium |
| sonnet45 | Strong all-rounder | $$$ | Medium |
| haiku45T | Fast with reasoning | $$ | Fast |
| haiku45 | Fast responses | $$ | Fast |
| haiku35 | Budget option | $ | Fast |

For 1M-token context on Opus 4.6 and Sonnet 4/4.5, enable texra.model.useAnthropic1MBeta in settings.
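In VS Code's settings.json, that toggle might look like the following (the key name comes from the note above; treating it as a boolean is an assumption):

```json
{
  "texra.model.useAnthropic1MBeta": true
}
```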

OpenAI Models

| Model ID | Use Case | Cost | Speed |
|----------|----------|------|-------|
| gpt52pro | Premium reasoning | $$$$ | Slow |
| gpt52 | Flagship reasoning | $$$ | Medium |
| gpt51 | Flagship, 400k context | $$$ | Medium |
| gpt5 | Flagship reasoning | $$$ | Medium |
| gpt5- | Fast flagship | $$ | Fast |
| gpt41 | Long context (1M), vision | $$$ | Medium |
| gpt4o | Strong all-rounder, vision | $$$ | Medium |
| o3pro | Heavy compute reasoning | $$$$ | Slow |
| o3 | Coding, tool calling | $$$ | Medium |
| o1 | Advanced reasoning | $$$$ | Slow |

GPT-5 reasoning summaries require account verification. Enable with texra.model.gpt5ReasoningSummary.
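A settings.json sketch for that option (key name taken from the note above; assuming it is a boolean flag):

```json
{
  "texra.model.gpt5ReasoningSummary": true
}
```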

Google Models

| Model ID | Use Case | Cost | Speed |
|----------|----------|------|-------|
| gemini3p | Pro with reasoning, 1M context | $$$ | Medium |
| gemini3f | Flash with reasoning, 1M context | $$ | Fast |
| gemini25p | Strong reasoning, vision, 1M context | $$$ | Medium |
| gemini25f | Fast reasoning, 1M context | $$ | Fast |
| gemini25f- | Budget flash, 64k context | $ | Fast |

DeepSeek Models

| Model ID | Use Case | Cost | Speed |
|----------|----------|------|-------|
| deepseek | V3.2 chat mode | $ | Fast |
| deepseekT | V3.2 with reasoning | $ | Medium |
| dsr1 | Advanced reasoning | $$ | Medium |

Moonshot Kimi Models

| Model ID | Use Case | Cost | Speed |
|----------|----------|------|-------|
| kimi25T | K2.5 with thinking mode | $$$ | Medium |
| kimi25 | K2.5, agent tasks | $$$ | Medium |

DashScope Qwen Models

| Model ID | Use Case | Cost | Speed |
|----------|----------|------|-------|
| qwen3max | Flagship coding, 262k context | $$$ | Medium |
| qwenplus | Hybrid thinking, 1M context | $$ | Medium |
| qwenturbo | Fast with optional thinking | $ | Fast |

Grok / xAI Models

| Model ID | Use Case | Cost | Speed |
|----------|----------|------|-------|
| grok4 | Large context (256k), reasoning | $$$ | Medium |
| grok3 | Large context (131k) | $$$ | Medium |
| grok2v | Vision-enabled | $$ | Medium |

Choosing a Model

  • Simple tasks: Fast, cheap models (gemini25f-, gpt5-, haiku35)
  • Complex tasks: Powerful models (opus46, gpt52pro, o1)
  • Reasoning-heavy: Thinking models (sonnet45T, deepseekT, o3)
  • Large documents: High-context models (gemini*, gpt41, gpt5)

Configuration

Customize available models in VS Code Settings under texra.models:

```json
"texra.models": [
  "gemini3p",
  "sonnet45T",
  "opus46T",
  "gpt52",
  "deepseekT"
]
```

Using OpenRouter

To access additional models or alternative pricing:

  1. Get an OpenRouter API key
  2. Add via TeXRA: Set API Key command
  3. Enable texra.model.useOpenRouter in settings
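Step 3 in settings.json might look like the following (assuming the setting is a boolean; the API key itself is stored via the TeXRA: Set API Key command, not in settings):

```json
{
  "texra.model.useOpenRouter": true
}
```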

Streaming

Enable streaming for long responses in settings:

```json
"texra.model.useStreaming": true
```

Next Steps