MATTERAI / MODELS

AXON
MODEL FAMILY

Enterprise-grade AI models built for coding agents, tool calling, and advanced reasoning. Available via API and integrated directly in Orbital.

OPENAI-COMPATIBLE API
https://api2.matterai.so/v1
INFERENCE DOCS
ALL MODELS

Axon 2.5 Pro High

MOST CAPABLE

Highest intelligence for the most demanding long-running agent tasks, complex reasoning, and advanced tool calling.

axon-2-5-pro-high
Context: 200K tokens
Max Output: 128K tokens
Input: $3.00 / 1M tokens
Output: $12.00 / 1M tokens
Tool Calling · Structured Outputs · Web Search · Text + Image In · Text Out

Axon 2.5 Pro

LATEST

High-intelligence model for long-running agent tasks, tool calling, coding, and general-purpose use.

axon-2-5-pro
Context: 200K tokens
Max Output: 128K tokens
Input: $2.00 / 1M tokens
Output: $8.00 / 1M tokens
Tool Calling · Structured Outputs · Web Search · Text + Image In · Text Out

Axon 2.5 Mini

FASTEST

Fast, general-purpose coding model for low-effort, day-to-day tasks.

axon-2-5-mini
Context: 200K tokens
Max Output: 16K tokens
Input: $0.50 / 1M tokens
Output: $2.00 / 1M tokens
Tool Calling · Structured Outputs · Web Search · Text + Image In · Text Out

Axon 2.1 Pro

High-intelligence model for long-running agent tasks, tool calling, coding, and general-purpose use.

axon-2-pro
Context: 200K tokens
Max Output: 128K tokens
Input: $1.50 / 1M tokens
Output: $6.00 / 1M tokens
Tool Calling · Structured Outputs · Web Search · Text + Image In · Text Out

Axon 2.1

High-intelligence model for long-running agent tasks, tool calling, coding, and general-purpose use.

axon-2
Context: 200K tokens
Max Output: 128K tokens
Input: $1.00 / 1M tokens
Output: $4.00 / 1M tokens
Tool Calling · Structured Outputs · Web Search · Text + Image In · Text Out
CACHING

PROMPT CACHING.

Cached input tokens are automatically discounted at 50% off the standard input price. Caching kicks in automatically for repeated prompt prefixes.

Axon 2.5 Pro High: $1.50 / 1M cached tokens
Axon Code 2.5 Pro High: $1.50 / 1M cached tokens
Axon 2.5 Pro: $1.00 / 1M cached tokens
Axon Code 2.5 Pro: $1.00 / 1M cached tokens
Axon 2.5 Mini: $0.25 / 1M cached tokens
Axon 2.1 Pro: $0.75 / 1M cached tokens
Axon 2.1: $0.50 / 1M cached tokens
Axon Code 2.1 Pro High: $0.75 / 1M cached tokens
Axon Code 2.1 Pro: $0.75 / 1M cached tokens
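
As a sketch of what the 50% cache discount means in practice, using the Axon 2.5 Pro input price above (the helper function and token counts here are illustrative, not part of the API):

```python
def input_cost_usd(cached_tokens: int, fresh_tokens: int,
                   input_price: float = 2.00,
                   cached_price: float = 1.00) -> float:
    """Input cost in USD at per-1M-token prices (Axon 2.5 Pro shown)."""
    return (cached_tokens * cached_price + fresh_tokens * input_price) / 1_000_000

# A 10K-token prompt prefix reused across 100 requests: the first request
# pays the full input price; the next 99 hit the cache at half price.
first_request = input_cost_usd(cached_tokens=0, fresh_tokens=10_000)        # 0.02
later_requests = 99 * input_cost_usd(cached_tokens=10_000, fresh_tokens=0)  # 0.99
```

Because caching applies to repeated prompt prefixes, keeping a long system prompt byte-identical across requests is what unlocks the discount.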
INTEGRATION

USE WITH ANY SDK.

Fully OpenAI-compatible. Drop in your existing OpenAI client with just a base URL change.

PYTHON
from openai import OpenAI

# Point the standard OpenAI client at the Axon endpoint.
client = OpenAI(
    api_key="YOUR_MATTERAI_KEY",
    base_url="https://api.matterai.so/v1",
)

response = client.chat.completions.create(
    model="axon-2-pro",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
TYPESCRIPT
import OpenAI from "openai";

// Point the standard OpenAI client at the Axon endpoint.
const client = new OpenAI({
  apiKey: "YOUR_MATTERAI_KEY",
  baseURL: "https://api.matterai.so/v1",
});

const response = await client.chat.completions.create({
  model: "axon-2-pro",
  messages: [{ role: "user", content: "Hello" }],
});
console.log(response.choices[0].message.content);
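
Since the API is OpenAI-compatible, tool calling follows the standard chat-completions format. A minimal Python sketch, assuming standard OpenAI tool-calling semantics (the `get_weather` tool and its stubbed result are hypothetical):

```python
import json

# OpenAI-style tool definition, passed as tools=[...] to
# client.chat.completions.create(model="axon-2-pro", ...).
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Look up current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def dispatch(tool_call: dict) -> dict:
    """Route a model-emitted tool call to a local function."""
    args = json.loads(tool_call["function"]["arguments"])
    if tool_call["function"]["name"] == "get_weather":
        return {"city": args["city"], "temp_c": 21}  # stubbed result
    raise ValueError("unknown tool")

# Shape of one entry in response.choices[0].message.tool_calls:
example_call = {"function": {"name": "get_weather",
                             "arguments": '{"city": "Berlin"}'}}
result = dispatch(example_call)
```

When the model responds with `tool_calls`, run each one through a dispatcher like this and send the result back to the model as a `role: "tool"` message to continue the turn.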

START BUILDING WITH AXON.