Welcome to Unify!#

Unify is your centralized platform for LLM endpoints. If you’re using or planning to use an LLM in your application or service, Unify lets you:

  • 🚀 Route to the best endpoints: Send each prompt to the endpoint(s) that will (a) return the best response, and (b) yield the best performance for a target metric of your choice, such as high throughput, low cost, or low latency.

  • ⚖️ Benchmark endpoint performance: Objectively compare endpoints' speed, latency, and cost with our dynamic benchmarks, so you can make informed decisions when selecting a model or provider.

  • 🔑 Use any endpoint with a single key: Access the latest LLMs through any endpoint provider with a single API key. Each model has a unified API that enables seamless provider switching without refactoring your code.
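As a sketch of what provider switching under a unified API might look like: because the request shape stays the same for every provider, only the model identifier changes. The `model@provider` identifier format and the payload field names below are illustrative assumptions, not a documented schema.

```python
# Sketch: switching providers by changing only the model identifier.
# The "model@provider" format and payload fields are assumptions for
# illustration -- see the API guides for the actual schema.

def build_request(model: str, prompt: str) -> dict:
    """Build a chat-completion style payload; only `model` varies per provider."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Same prompt, two hypothetical providers -- no other code changes.
req_a = build_request("llama-3-8b-chat@provider-a", "Hello!")
req_b = build_request("llama-3-8b-chat@provider-b", "Hello!")
```

The point of the sketch is that the surrounding application code never needs to know which provider serves the request.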

Getting Started#

We recommend giving the concepts section a quick read to get familiar with routing and benchmarking. Once you’re ready to go, start by Signing In. From there, you can learn how to:

  • Use our interfaces (Recommended): The interfaces guides explain how to interact with endpoints and deploy your custom router in a no-code environment.

  • Make your first request: The API guides explain how to start querying endpoints and using the router with our API.
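As a rough sketch of what a first request could look like over HTTP: the endpoint URL, model identifier, and header names below are placeholders assumed for illustration, so check the API guides for the real values before sending anything.

```python
import json
import urllib.request

# Placeholder URL and key -- replace with the values from the API guides.
API_URL = "https://api.example.com/v0/chat/completions"
API_KEY = "YOUR_API_KEY"

payload = json.dumps({
    "model": "llama-3-8b-chat@provider-a",  # illustrative identifier
    "messages": [{"role": "user", "content": "Say hello."}],
}).encode()

request = urllib.request.Request(
    API_URL,
    data=payload,
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# Uncomment once you have real credentials and the real endpoint URL:
# with urllib.request.urlopen(request) as response:
#     print(json.load(response))
```

Any HTTP client works the same way; the only per-request decision is which model identifier to put in the payload.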

Warning

Throughout the guides, you’ll notice some sections marked as (Beta). Any section marked as Beta is not yet available and only illustrates planned features we are currently working on. We’re constantly iterating on our roadmap, so if you’d like to leave feedback or suggestions on features you’d like to see, we’d love to hear from you!