Free Beta Now Available

Manage Prompts, Route LLMs, Ship with Confidence

Create a workspace, scope each application, version prompts like code, test them in the Playground, route changes through approvals, and trace every request in one place. Install the SDK once, then teach Codex or Claude Code to use it with one portable skill.

Use the SDK in your app

Install the typed client, fetch prompts, and call Skyeline's multi-provider gateway from your codebase.

pnpm add @skyeline/sdk
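The SDK's exact call surface isn't shown on this page, but since the gateway is described as OpenAI-compatible, a chat request body should follow the standard chat-completions schema. A hedged sketch (the model name and message contents are illustrative only):

```typescript
// Sketch only: Skyeline's gateway is advertised as OpenAI-compatible,
// so a chat request body should match the chat-completions schema.
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

interface ChatRequest {
  model: string;
  messages: ChatMessage[];
  stream?: boolean;
}

const body: ChatRequest = {
  model: "gpt-4o-mini",
  messages: [
    { role: "system", content: "You are a support assistant for Skyeline customers." },
    { role: "user", content: "Where is my refund?" },
  ],
  stream: true, // the gateway supports SSE streaming
};
```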

Teach your coding agent

Install the Skyeline skill with one command so supported coding agents pick up the right SDK context immediately.

npx skills add kulkarniatharv/skyeline-skills --skill skyeline-sdk

Free during beta — no credit card required

Product model

How Skyeline works

Skyeline provides a unified workflow for your entire AI engineering lifecycle. From drafting and testing prompts to managing approvals and tracing production requests, everything is built around a single, consistent model.

01

Create a workspace and application

Start with the same mental model the product uses everywhere: your team lives in a workspace, and each project lives in its own application.

Onboarding flow · Workspace switcher · Application switcher

02

Test prompt versions against real providers

Version prompts with statuses, tags, diffs, and rollback. Load any version into the Playground, stream output, and save the result back to the library.

Draft → Active → Archived · Variables · Run history

03

Approve, observe, and ship with context

Route sensitive changes through approval chains, inspect every request with latency and token detail, then integrate using user- or application-scoped API keys.

Approval inbox · Request tracing · SDK + API keys

Everything you need to ship AI with confidence

A complete toolkit for managing the full lifecycle of your AI applications, from development to production.

Available Now

Prompt Library

Version, diff, and roll back prompts with a full audit trail. Approval workflows and tag-based organization.

  • Version control with full history
  • Status management (Draft → Active → Archived)
  • Full audit trail
  • Tag organization
  • Approval workflows for prompt changes

LLM Gateway

Unified multi-provider gateway for routing LLM requests. Track token usage and latency across OpenAI, Anthropic, and Groq.

  • Multi-provider routing (OpenAI, Anthropic, Groq)
  • Token usage tracking
  • Latency monitoring
  • Streaming (SSE)
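Streamed responses arrive as server-sent events. A minimal sketch of pulling text deltas out of an OpenAI-style SSE stream — the chunk schema here is an assumption based on the gateway's OpenAI-compatible claim, not Skyeline's documented wire format:

```typescript
// Extract the text delta from one SSE line of an OpenAI-style stream.
// Returns null for non-data lines, unparsable payloads, and the
// [DONE] sentinel that terminates the stream.
function deltaFromSseLine(line: string): string | null {
  if (!line.startsWith("data: ")) return null;
  const payload = line.slice("data: ".length).trim();
  if (payload === "[DONE]") return null;
  try {
    const chunk = JSON.parse(payload);
    return chunk.choices?.[0]?.delta?.content ?? null;
  } catch {
    return null;
  }
}
```

For example, `deltaFromSseLine('data: {"choices":[{"delta":{"content":"Hi"}}]}')` returns `"Hi"`, while keep-alive comments and `data: [DONE]` yield `null`.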

Observability

Trace every LLM request. Monitor token usage, latency, and errors across all providers.

  • Request tracing
  • Token & latency monitoring
  • Error tracking

Ready

TypeScript SDK

Type-safe client with full IntelliSense support. OpenAI-compatible API for seamless integration.

pnpm add @skyeline/sdk
New

Agent Skills

Start with the skills CLI for supported agents, then use the hosted raw file as a manual fallback.

npx skills add kulkarniatharv/skyeline-skills --skill skyeline-sdk
View install steps
App-informed previews

A guided tour of the product

These preview cards are built from the actual product surfaces in Skyeline: the Prompt Library, Playground, Approvals, Observability, and settings used to ship the SDK into production.

Versioned prompt workflow

Prompt Library

Manage prompts like assets instead of copy-pasting strings between files.

Search · Status: active · Tags: support · New Prompt

support-routing · v18 · triage, escalation, handoff · active

refund-policy · v04 · commerce, policy · draft

daily-summary · v21 · ops, digest · archived

Lifecycle: Draft → Active → Archived

Review hooks: Approval history per version

Useful detail: {{customer_query}}

Live iteration loop

Playground

Load a saved version, swap providers, stream output, inspect tokens, then export SDK code.
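Prompts in the library use `{{variable}}` placeholders such as `{{customer_query}}`. As a rough sketch of how that interpolation works — the Playground's actual rendering rules may differ, and this helper is illustrative, not the SDK's API:

```typescript
// Substitute {{name}} placeholders with supplied values; unknown
// placeholders are left intact so missing variables stay visible.
function renderTemplate(template: string, vars: Record<string, string>): string {
  return template.replace(/\{\{\s*(\w+)\s*\}\}/g, (match, name) =>
    name in vars ? vars[name] : match,
  );
}

const prompt = "Route each request using {{customer_query}} and {{workspace_tier}}.";
renderTemplate(prompt, { customer_query: "refund status", workspace_tier: "pro" });
// → "Route each request using refund status and pro."
```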

OpenAI · Anthropic · Groq · Run

System

You are a support assistant for Skyeline customers.

Context

Route each request using {{customer_query}} and {{workspace_tier}}.

{{customer_query}} · {{workspace_tier}} · Run history · Export SDK code

Output · Complete

1. Match the user intent to billing escalation.

2. Ask one clarifying question about invoice period.

3. Hand off if the customer mentions charge disputes.

Tokens: 1,248 in · 382 out · 1,630 total

Team governance

Approvals

Inbox-style review plus configurable approval chains for high-stakes prompt changes.

Pending inbox · 3 pending

support-routing v18 · Awaiting: Editor review

refund-policy v04 · Awaiting: Owner signoff

Chain builder

1. Editor review (required step)
2. Security review (required step)
3. Owner signoff (required step)

Request tracing

Observability

Filter by app, provider, status, and time range, then drill into messages, raw JSON, tokens, and latency.

App: all · Provider: OpenAI · Status: success · Range: 7d
Request   Model               Status    Latency   Tokens
#1288     gpt-4o-mini         success   640ms     16,420
#1287     claude-3-5-sonnet   error     1.9s      4,180
#1286     llama-3.3-70b       success   420ms     8,104

Request #1288 details

Inspect provider/model, latency, input/output tokens, request body, response body, and the exact messages that produced a result.
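As a toy illustration of the kind of roll-up the trace table supports, here's a sketch that aggregates the sample rows shown above. The row data mirrors the preview; the actual API for exporting traces isn't shown on this page:

```typescript
interface TraceRow {
  id: string;
  model: string;
  status: "success" | "error";
  latencyMs: number;
  tokens: number;
}

// Sample rows mirroring the preview table above.
const rows: TraceRow[] = [
  { id: "#1288", model: "gpt-4o-mini", status: "success", latencyMs: 640, tokens: 16420 },
  { id: "#1287", model: "claude-3-5-sonnet", status: "error", latencyMs: 1900, tokens: 4180 },
  { id: "#1286", model: "llama-3.3-70b", status: "success", latencyMs: 420, tokens: 8104 },
];

// Total tokens across successful requests, and the error rate.
const successes = rows.filter((r) => r.status === "success");
const totalTokens = successes.reduce((sum, r) => sum + r.tokens, 0); // 24,524
const errorRate = rows.filter((r) => r.status === "error").length / rows.length; // 1/3
```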

Production handoff

API keys & providers

Manage workspace-level provider connections and issue application-scoped keys to safely authorize your SDK or coding agent.

User keys · Active
sk_live_user_********4n7
Inherits member permissions.

Application keys · Preferred
sk_live_app_********9xk
Locked to one app for safer deploys.

Provider connections

BYOK
OpenAI: Connected
Anthropic: Connected
Groq: Missing key

SDK handoff

pnpm add @skyeline/sdk

Simple, transparent pricing

Start building for free during our beta. No credit card required.

Limited Time

Free Beta

Full access during beta period

$0 during beta

  • Unlimited prompts and versions
  • Multi-provider LLM gateway
  • Full request tracing
  • TypeScript SDK with full type safety
  • Approval workflows
  • OpenAI-compatible API

Get Started Free

No credit card required.

Pro and Enterprise plans coming soon with increased limits and priority support.

Ready to build with confidence?

Join the beta and start managing your AI applications like never before. It's free to get started.

Questions? Get in touch