TetherAI is a modular AI provider SDK that abstracts the complexity of multiple vendor APIs (OpenAI, Anthropic, Mistral, Grok, local LLMs) into a single, clean TypeScript interface.
It's designed for modern frameworks, lightweight agents, and SaaS products that need reliable AI access without vendor lock-in.
Highlights
- Unified API: Call OpenAI, Anthropic, Mistral, Grok, or Local LLM with the same interface.
- Zero Bloat: Each provider is a standalone package (`@tetherai/openai`, `@tetherai/anthropic`, `@tetherai/mistral`, `@tetherai/grok`, `@tetherai/local`) – install only what you use.
- TypeScript First: Full typings, modern ESM, tree-shakeable.
- Lightweight: Packages are ~10–20 kB, built for speed.
- Pluggable: Designed for future extensions (memory, agents, custom backends).
- Open Source: MIT-licensed, actively maintained.
- Production Ready: 100% test coverage, CI/CD pipeline, packages published to npm.
Quick Start
- Install any provider package (all packages are now v0.2.0):
```bash
# OpenAI Provider
pnpm install @tetherai/openai

# Anthropic Provider
pnpm install @tetherai/anthropic

# Mistral Provider
pnpm install @tetherai/mistral

# Grok Provider
pnpm install @tetherai/grok

# Local LLM Provider
pnpm install @tetherai/local
```
- Run examples locally:
- Next.js Example

```bash
cd examples/nextjs
export OPENAI_API_KEY=sk-...
pnpm dev
```

- Node.js Example

```bash
cd examples/node
export OPENAI_API_KEY=sk-...
pnpm dev
```
- Try it out:
- Next.js → http://localhost:3000
- Node.js → POST to http://localhost:8787/chat
```bash
curl -X POST http://localhost:8787/chat \
  -H "Content-Type: application/json" \
  -d '{ "model": "gpt-4o-mini", "messages": [{ "role": "user", "content": "Hello" }] }'
```
Usage
Create Providers
```ts
// OpenAI
import { openai } from "@tetherai/openai";
const openaiProvider = openai({ apiKey: process.env.OPENAI_API_KEY! });

// Anthropic
import { anthropic } from "@tetherai/anthropic";
const anthropicProvider = anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! });

// Mistral
import { mistral } from "@tetherai/mistral";
const mistralProvider = mistral({ apiKey: process.env.MISTRAL_API_KEY! });

// Grok
import { grok } from "@tetherai/grok";
const grokProvider = grok({ apiKey: process.env.GROK_API_KEY! });

// Local LLM (e.g. an Ollama server; 11434 is Ollama's default port)
import { localLLM } from "@tetherai/local";
const localProvider = localLLM({
  baseURL: "http://localhost:11434",
  apiKey: "local",
});
```
Add Retry and Fallback
```ts
import { openai, withRetry, withFallback } from "@tetherai/openai";

const resilientProvider = withFallback(
  withRetry(openai({ apiKey: process.env.OPENAI_API_KEY! }), { retries: 2 }),
  [] // additional fallback providers can be listed here
);
```
Stream a Chat Completion
```ts
import type { ChatRequest } from "@tetherai/openai";

const req: ChatRequest = {
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Tell me a joke." }],
};

for await (const chunk of resilientProvider.streamChat(req)) {
  process.stdout.write(chunk.delta);
}
```
API Overview
- `openai(options)` → create OpenAI provider
- `anthropic(options)` → create Anthropic provider
- `mistral(options)` → create Mistral provider
- `grok(options)` → create Grok provider
- `localLLM(options)` → create Local LLM provider
- `provider.streamChat(request)` → async iterator of chat chunks
- `provider.chat(request)` → single chat completion
- `provider.getModels()` → list available models
- `provider.validateModel(modelId)` → validate model compatibility
- `provider.getMaxTokens(modelId)` → get token limits
- `withRetry(provider, { retries })` → wrap a provider with retry middleware
- `withFallback([providerA, providerB])` → try multiple providers in order
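A short sketch of the non-streaming helpers. The return types (model list, boolean, number) and the assumption that these methods are async are inferred from the names above; the exact response shape of `chat()` is not documented here:

```ts
import { openai } from "@tetherai/openai";

const provider = openai({ apiKey: process.env.OPENAI_API_KEY! });

// Discover and validate models before sending a request.
const models = await provider.getModels();
console.log("available models:", models);

if (await provider.validateModel("gpt-4o-mini")) {
  console.log("max tokens:", await provider.getMaxTokens("gpt-4o-mini"));

  // Single (non-streaming) completion via provider.chat().
  const completion = await provider.chat({
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Hello" }],
  });
  console.log(completion); // see the package docs for the response shape
}
```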
Middleware Examples
With Retry
```ts
import { openai, withRetry } from "@tetherai/openai";

const provider = withRetry(
  openai({ apiKey: process.env.OPENAI_API_KEY! }),
  { retries: 2 }
);
```
With Fallback
```ts
import { anthropic, withFallback, withRetry } from "@tetherai/anthropic";

const provider = withFallback(
  withRetry(anthropic({ apiKey: process.env.ANTHROPIC_API_KEY! }), { retries: 2 }),
  [] // additional fallback providers can be listed here
);
```
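Middleware composes because each wrapper returns another provider. As a hedged sketch of what custom middleware could look like, here is a hypothetical logging wrapper; the `StreamingProvider` type and the exact `streamChat` signature are assumptions based on the API overview above, not the SDK's actual exports:

```ts
import type { ChatRequest } from "@tetherai/openai";

// Minimal structural type for what we wrap; the real SDK exports richer types.
interface StreamingProvider {
  streamChat(req: ChatRequest): AsyncIterable<{ delta: string }>;
}

// Hypothetical logging middleware: passes requests through unchanged,
// logging the model used and the number of streamed chunks.
function withLogging<P extends StreamingProvider>(provider: P): P {
  return {
    ...provider,
    async *streamChat(req: ChatRequest) {
      console.log(`[tetherai] streamChat → model=${req.model}`);
      let chunks = 0;
      for await (const chunk of provider.streamChat(req)) {
        chunks++;
        yield chunk;
      }
      console.log(`[tetherai] done after ${chunks} chunks`);
    },
  } as P;
}
```

Because `withLogging` returns the same provider shape it receives, it can be stacked with `withRetry` and `withFallback` in any order, e.g. `withLogging(withRetry(provider, { retries: 2 }))`.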
Testing & Quality
100% Test Success Rate across all test suites:
- Unit Tests: Individual provider method testing with mocked dependencies
- Integration Tests: Real API response format validation without API keys
- E2E Tests: Complete provider functionality verification
- Build Validation: All packages compile and export correctly
Testing Strategy:
- No real API keys required for testing
- Mock realistic API responses using ReadableStream
- Test actual parsing logic with real response formats
- Fast execution (the full test suite runs in under one second)
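As an illustration of the mocking approach, here is a minimal sketch of faking a streaming API response with `ReadableStream` so parsing logic can be exercised without an API key. The OpenAI-style SSE chunk format is an assumption; the actual fixtures live in the test suites:

```ts
// Build a fake fetch Response whose body streams SSE-formatted chunks.
function mockStreamingResponse(deltas: string[]): Response {
  const encoder = new TextEncoder();
  const body = new ReadableStream<Uint8Array>({
    start(controller) {
      for (const delta of deltas) {
        const event = `data: ${JSON.stringify({
          choices: [{ delta: { content: delta } }],
        })}\n\n`;
        controller.enqueue(encoder.encode(event));
      }
      controller.enqueue(encoder.encode("data: [DONE]\n\n"));
      controller.close();
    },
  });
  return new Response(body, {
    headers: { "Content-Type": "text/event-stream" },
  });
}
```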
Quality Indicators:
- Comprehensive provider method coverage
- Real API validation without external dependencies
- Robust error handling testing
- Consistent test results every run
Examples
Ready-to-run demos:
- Next.js Chat App – Edge runtime UI example with streaming + retry/fallback.
- Node.js Server – Minimal backend exposing a `/chat` endpoint with SSE streaming.
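For a sense of what the Node.js example does, here is a minimal sketch of such a `/chat` endpoint using Node's built-in `http` module. The body parsing and SSE framing are illustrative assumptions, not the example's exact code:

```ts
import http from "node:http";
import { openai, withRetry } from "@tetherai/openai";
import type { ChatRequest } from "@tetherai/openai";

const provider = withRetry(openai({ apiKey: process.env.OPENAI_API_KEY! }), { retries: 2 });

http
  .createServer(async (req, res) => {
    if (req.method !== "POST" || req.url !== "/chat") {
      res.writeHead(404).end();
      return;
    }
    // Collect the JSON request body.
    let raw = "";
    for await (const chunk of req) raw += chunk;
    const chatReq = JSON.parse(raw) as ChatRequest;

    // Stream tokens back as server-sent events.
    res.writeHead(200, { "Content-Type": "text/event-stream" });
    for await (const chunk of provider.streamChat(chatReq)) {
      res.write(`data: ${JSON.stringify({ delta: chunk.delta })}\n\n`);
    }
    res.end();
  })
  .listen(8787, () => console.log("listening on http://localhost:8787"));
```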
Why TetherAI?
Modern AI applications need to move fast, but every provider (OpenAI, Anthropic, …) has slightly different APIs, SDKs, and error handling. TetherAI was built to solve this fragmentation.
- Unified Interface: One consistent Provider interface across vendors.
- Streaming-First: All providers implement async iterators for token-by-token streaming.
- Middleware Compatibility: Add retry, fallback, logging, or custom middleware without rewriting code.
- Lightweight & Modular: Install only what you need (`@tetherai/openai`, `@tetherai/anthropic`, `@tetherai/mistral`, `@tetherai/grok`, `@tetherai/local`).
- Edge & Node Compatible: Works seamlessly in Next.js Edge, Vercel, Cloudflare Workers, and Node.js.
- TypeScript First: Strictly typed, ESM-only, and tree-shakeable.
TetherAI helps developers build resilient AI-powered products faster, without vendor lock-in or unnecessary boilerplate.