Isac

Unified LLM gateway. 28 AI models from OpenAI, Google, Anthropic, DeepSeek, and xAI — one API, full cost tracking, native MCP support.

Powered by Isac

If you arrived here from a “Powered by Isac” link on a chat widget — Isac is the AI gateway that routed your conversation. The site owner chose which AI model to use, and Isac handled the connection.

Your privacy: Isac does not store conversation content. Each message is forwarded to the selected AI provider and the response is returned directly. Only usage metadata (token count, cost, latency) is logged for the service owner.

5 AI Providers
28 Models
12 MCP Tools
$0.05/1M Tokens Cheapest Model

What is Isac?

Isac is an LLM gateway that routes requests to 28 AI models across OpenAI, Google Gemini, Anthropic Claude, DeepSeek, and xAI Grok. One endpoint, one API key — Isac handles provider authentication, request formatting, and cost tracking.

Built by Publifye in Norway. Available as a hosted service at isac.publifye.pro with a web chat UI, or as a self-hosted Go binary on your own infrastructure.

Why Isac

One API, All Providers

Send the same JSON to GPT-5.2, Gemini 3 Flash, Claude Opus 4.6, or DeepSeek Chat. Isac normalizes inputs and outputs across all 5 providers.
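A rough sketch of what "same JSON, different model" means in practice. The field names (`model`, `messages`) are assumptions for illustration, not Isac's documented schema; the point is that only the model identifier changes between providers:

```python
# Hypothetical unified request payloads. Field names are illustrative,
# not Isac's documented schema: only "model" differs per provider.
def make_request(model: str, prompt: str) -> dict:
    return {"model": model, "messages": [{"role": "user", "content": prompt}]}

gpt = make_request("gpt-5.2", "Summarize this repo.")
gemini = make_request("gemini-3-flash", "Summarize this repo.")

# Everything except the model identifier is identical.
assert {k: v for k, v in gpt.items() if k != "model"} == \
       {k: v for k, v in gemini.items() if k != "model"}
```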

Per-Request Cost Tracking

Every response includes input tokens, output tokens, exact cost in USD, and latency in milliseconds. Models range from $0.05 to $75 per million tokens.
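The cost arithmetic behind per-million pricing can be sketched in a few lines. The rates below are hypothetical placeholders, not Isac's actual per-model prices:

```python
# Per-request cost: tokens are billed per million, input and output
# at separate rates. The rates used below are hypothetical examples.
def request_cost_usd(input_tokens: int, output_tokens: int,
                     in_rate_per_m: float, out_rate_per_m: float) -> float:
    return (input_tokens * in_rate_per_m
            + output_tokens * out_rate_per_m) / 1_000_000

# e.g. 12,000 input + 800 output tokens at $0.25 in / $1.00 out per 1M:
cost = request_cost_usd(12_000, 800, 0.25, 1.00)
print(f"${cost:.6f}")  # -> $0.003800
```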

MCP Native

12 MCP tools work with Claude Code, Claude Desktop, and any MCP client. Register Isac with one command and access all 28 models from your AI assistant.

Tiered Access Control

Control which models each user can reach by setting cost thresholds. Non-admin users default to models under $3/1M input tokens — configurable per user.
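Conceptually, a cost threshold is just a price filter over the model catalog. This is a sketch of the idea, not Isac's implementation; the Opus price below is a hypothetical placeholder:

```python
# Tiered access as a price filter. Prices are input $/1M tokens; the
# claude-opus-4.6 entry is a hypothetical price for illustration only.
MODELS = {
    "gpt-5-nano": 0.05,
    "gemini-3.1-flash-lite": 0.25,
    "deepseek-chat": 0.27,
    "claude-opus-4.6": 15.00,
}

def allowed_models(price_cap_per_m: float) -> list[str]:
    """Models a user may reach, given their per-1M-token input price cap."""
    return sorted(m for m, p in MODELS.items() if p < price_cap_per_m)

print(allowed_models(3.0))  # the default non-admin cap of $3/1M
# -> ['deepseek-chat', 'gemini-3.1-flash-lite', 'gpt-5-nano']
```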

Image Generation & Vision

Generate images with Gemini models (~$0.04/image). Analyze images with 12 vision-capable models from Google, OpenAI, and xAI.

Self-Hosted Binary

Single Go binary, bring your own provider API keys. Auto-updates, PubHub SSO integration, and Redis/Dragonfly storage for request logs.

28 Models, 5 Providers

Google Gemini

8 models. Gemini 3.1 Flash Lite (default — $0.25/1M, 1M context), Gemini 3 Flash, Gemini 3.1 Pro, image generation models. All support vision.

OpenAI

8 models. GPT-5 Nano ($0.05/1M), GPT-5.2, GPT-5.2 Codex (SWE-bench 82%), GPT-5.4 (1M context), o4-mini, o3.

Anthropic Claude

7 models. Claude Haiku 4.5, Claude Sonnet 4.5, Claude Opus 4.6 (1M context, SWE-bench 81%). Best prose quality and reasoning.

xAI Grok

3 models. Grok 4.1 Fast Reasoning ($0.20/1M, 2M context), Grok Code Fast, Grok 4 with real-time web search.

DeepSeek

2 models. DeepSeek Chat ($0.27/1M) and DeepSeek Reasoner. Budget-friendly with strong prose quality. Routes through Chinese servers.

Model Selection

Best coding: GPT-5.2 Codex. Cheapest: GPT-5 Nano. Longest context: Grok 4.1 (2M). Best prose: Claude Opus. Best value: Gemini 3.1 Flash Lite.

For Developers

Model Context Protocol (MCP)

12 MCP tools: send prompts to any model, list models with pricing and benchmarks, check provider health, upload images for vision analysis, and manage per-user access levels. All via JSON-RPC 2.0.
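The JSON-RPC 2.0 shape of an MCP tool invocation looks roughly like this. The `tools/call` method with `name`/`arguments` params comes from the MCP spec; the tool name `send_prompt` and its arguments are assumptions, not confirmed Isac tool signatures:

```python
import json

# Sketch of an MCP tool invocation (JSON-RPC 2.0). The tool name
# "send_prompt" and its arguments are hypothetical, for illustration.
def mcp_call(req_id: int, tool: str, arguments: dict) -> str:
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "tools/call",  # standard MCP method for invoking a tool
        "params": {"name": tool, "arguments": arguments},
    })

msg = mcp_call(1, "send_prompt",
               {"model": "gemini-3.1-flash-lite", "prompt": "hello"})
assert json.loads(msg)["jsonrpc"] == "2.0"
```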

Register as a stdio daemon for Claude Desktop, or connect to the HTTP endpoint at isac.publifye.pro/mcp. CLI available: isac call for direct requests, isac models for pricing, isac stdio for MCP mode.

HTTP & MCP API

REST endpoints for health, metadata, and images. MCP JSON-RPC for AI tool use. API key authentication via header or environment variable.
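Header-based authentication can be sketched with the standard library. The endpoint path and the header name `X-API-Key` are assumptions here; check Isac's docs for the actual ones:

```python
import urllib.request

# Building an authenticated request. The "/health" path and the
# "X-API-Key" header name are assumptions, not confirmed Isac values.
req = urllib.request.Request(
    "https://isac.publifye.pro/health",
    headers={"X-API-Key": "YOUR_KEY"},
)

# urllib normalizes header names via str.capitalize(), hence "X-api-key".
assert req.get_header("X-api-key") == "YOUR_KEY"
```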

CLI & Stdio

isac call sends prompts from the terminal. isac stdio runs as a background MCP daemon. isac configure sets provider keys.

Response Metadata

Every response: model used, provider, input/output tokens, cost in USD, latency in ms. Full request logs with get_request_log for auditing.
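An audit over those logs reduces to simple aggregation. The log-entry shape below is an assumed example built from the fields this section lists, not Isac's exact log format:

```python
# Aggregating usage metadata for an audit. Entry fields mirror the
# metadata listed above; the exact shape and values are assumed examples.
logs = [
    {"model": "gpt-5-nano", "input_tokens": 900, "output_tokens": 120,
     "cost_usd": 0.000051, "latency_ms": 420},
    {"model": "claude-opus-4.6", "input_tokens": 4_000, "output_tokens": 600,
     "cost_usd": 0.105000, "latency_ms": 2_150},
]

total_cost = sum(e["cost_usd"] for e in logs)
avg_latency = sum(e["latency_ms"] for e in logs) / len(logs)
print(f"total ${total_cost:.6f}, avg {avg_latency:.0f} ms")
# -> total $0.105051, avg 1285 ms
```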