
GPT-OSS 120B

Access GPT-OSS 120B through one API key. OpenAI's open-source model.

OpenAI's open-source 120B-parameter model. Extremely affordable with competitive performance.

Context Window: 131K tokens
Max Output: 131K tokens
Input Price: $0.04 / 1M tokens
Output Price: $0.19 / 1M tokens
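With per-token pricing, request cost is easy to estimate up front. A minimal sketch using the input and output prices listed above (the `estimateCostUSD` helper is illustrative, not part of YepAPI):

```javascript
// Per-million-token prices for GPT-OSS 120B through YepAPI,
// taken from the pricing table above; check the live page for changes.
const PRICE_PER_M = { input: 0.04, output: 0.19 };

function estimateCostUSD(inputTokens, outputTokens) {
  return (inputTokens / 1_000_000) * PRICE_PER_M.input +
         (outputTokens / 1_000_000) * PRICE_PER_M.output;
}

// A 10K-token prompt with a 1K-token reply costs well under a tenth of a cent.
console.log(estimateCostUSD(10_000, 1_000));
```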

Try it live

Send a message and see GPT-OSS 120B respond in real time.

POST /v1/ai/openai/gpt-oss-120b

Strengths

Cost-effective
Open source
General purpose
Coding

Quick start

Copy this snippet and start making calls with GPT-OSS 120B.

// Call GPT-OSS 120B through YepAPI's REST endpoint.
const res = await fetch('https://api.yepapi.com/v1/ai/openai/gpt-oss-120b', {
  method: 'POST',
  headers: {
    'x-api-key': 'YOUR_API_KEY',   // your YepAPI key
    'Content-Type': 'application/json',
  },
  body: JSON.stringify({
    messages: [
      { role: 'user', content: 'Explain API gateways in 2 sentences.' },
    ],
    maxTokens: 256,                // cap on response length
  }),
});
const data = await res.json();
console.log(data);

Why use GPT-OSS 120B through YepAPI?

One API key for all models — no separate accounts
OpenAI SDK compatible — just change the base URL
No monthly minimums — pay per token
Switch models with one line of code
Full provider passthrough — citations, search results, and all extras included
Streaming and non-streaming support on every model
Works with Cursor, Claude, LangChain, and any LLM tool
Unified billing across all providers
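Streamed responses from chat APIs typically arrive as server-sent events. This page doesn't show YepAPI's exact chunk format, so the sketch below assumes OpenAI-style `data:` lines carrying JSON deltas with a `[DONE]` sentinel; the framing is an assumption, not confirmed here:

```javascript
// Sketch: accumulate text from OpenAI-style SSE lines.
// Assumes each chunk is `data: {"choices":[{"delta":{"content":"..."}}]}`
// and the stream ends with `data: [DONE]`.
function collectStreamedText(sseBody) {
  let text = '';
  for (const line of sseBody.split('\n')) {
    const trimmed = line.trim();
    if (!trimmed.startsWith('data:')) continue;       // skip comments/blank lines
    const payload = trimmed.slice('data:'.length).trim();
    if (payload === '[DONE]') break;                  // end-of-stream sentinel
    const delta = JSON.parse(payload).choices?.[0]?.delta?.content;
    if (delta) text += delta;
  }
  return text;
}
```

In a real streaming call you would read chunks incrementally from the response body rather than buffering the whole stream, but the line-by-line parsing is the same.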

Frequently asked questions

What is GPT-OSS 120B?

OpenAI's open-source 120B-parameter model. Extremely affordable with competitive performance.

How much does GPT-OSS 120B cost?

Input tokens cost $0.04 per 1M tokens and output tokens cost $0.19 per 1M tokens through YepAPI. No monthly minimums — you only pay for what you use.

How do I get started?

Sign up for a free API key, then send requests to the /v1/ai/openai/gpt-oss-120b endpoint. YepAPI is OpenAI SDK compatible, so you can use it with any tool that supports OpenAI — just change the base URL.

What context window does GPT-OSS 120B support?

GPT-OSS 120B supports a 131K token context window with up to 131K output tokens per request.
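Since the prompt and the requested output both have to fit in the context window, it can help to check the budget before sending. A rough sketch — the 4-characters-per-token ratio is a heuristic for English text, not the model's actual tokenizer:

```javascript
const CONTEXT_WINDOW = 131_000; // tokens, per the specs above

// Rough token estimate: ~4 characters per token for English text.
// Heuristic only; use a real tokenizer for precise counts.
function roughTokenCount(text) {
  return Math.ceil(text.length / 4);
}

// True if the prompt plus the requested maxTokens fits in the window.
function fitsContext(messages, maxTokens) {
  const promptTokens = messages.reduce(
    (sum, m) => sum + roughTokenCount(m.content), 0);
  return promptTokens + maxTokens <= CONTEXT_WINDOW;
}
```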

Ready to use GPT-OSS 120B?

Get your API key and start making calls in 30 seconds. No credit card required.

Explore more models