o3-Pro

Model Page: o3-Pro API

Basic Information

The o3-Pro API is a RESTful endpoint that gives developers access to OpenAI's advanced chain-of-thought reasoning, code execution, and data-analysis capabilities through configurable parameters (model="o3-pro", messages, temperature, max_tokens, streaming, etc.), allowing integration into complex workflows.

OpenAI o3-pro is the "pro" variant of the o3 reasoning model, engineered to think longer and deliver more dependable responses. It uses a private chain of thought trained with reinforcement learning, sets state-of-the-art benchmarks in domains such as science, programming, and business, and can autonomously invoke tools such as web search, file analysis, Python execution, and visual reasoning from within the API.

Technical Details

  • Architecture: Builds on the o3 backbone with an enhanced private chain of thought, enabling multi-step reasoning at inference.
  • Tokenization: Supports the same token schema as its predecessors—1 million input tokens ≈ 750,000 words.
  • Extended Capabilities: Includes web search, Python code execution, file analysis, and visual reasoning; image generation remains unsupported in this release.
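The token-to-word rule of thumb above (1 token ≈ 0.75 English words) can be sketched as a quick estimator; the 0.75 ratio is an approximation and varies by language and content:

```python
def estimate_words(tokens: int) -> int:
    """Rough word-count estimate from a token count (1 token ~ 0.75 words)."""
    return int(tokens * 0.75)

# 1 million input tokens corresponds to roughly 750,000 words
print(estimate_words(1_000_000))
```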

Benchmark Performance

  • Math & Science: Surpassed Google Gemini 2.5 Pro on the AIME 2024 contest, demonstrating superior problem-solving in advanced mathematics.
  • PhD-Level Science: Outperformed Anthropic’s Claude 4 Opus on the GPQA Diamond benchmark, indicating robust expertise in scientific domains.
  • Enterprise Use: Internal tests report consistent wins over predecessor models across coding, STEM, and business reasoning tasks.

How to call the o3-Pro API from CometAPI

**o3-Pro** API pricing in CometAPI, 20% off the official price:

  • Input Tokens: $16 / M tokens
  • Output Tokens: $64 / M tokens
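The per-million-token rates above translate into a simple cost formula; a minimal sketch (the helper name and defaults are illustrative, not part of the API):

```python
def cost_usd(input_tokens: int, output_tokens: int,
             input_rate: float = 16.0, output_rate: float = 64.0) -> float:
    """Estimate the CometAPI cost in USD for a request.

    Rates are dollars per million tokens ($16 input / $64 output).
    """
    return (input_tokens / 1_000_000) * input_rate + (output_tokens / 1_000_000) * output_rate

# Example: 50,000 input tokens and 10,000 output tokens
print(cost_usd(50_000, 10_000))  # 0.8 + 0.64 = 1.44 USD
```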

Required Steps

  • Log in to cometapi.com. If you are not a user yet, please register first.
  • Get an API key as your access credential: in the personal center, click "Add Token" under API token, then copy the generated key (sk-xxxxx) and submit.
  • Use this site's base URL: https://api.cometapi.com/

Usage Methods

  1. Select the "**o3-pro**" or "**o3-pro-2025-06-10**" endpoint, then set the request body and send the request. The request method and body format are described in our website's API doc; the website also provides an Apifox test environment for your convenience.
  2. Replace with your actual CometAPI key from your account.
  3. Insert your question or request into the content field—this is what the model will respond to.
  4. Process the API response to extract the generated answer.

For model-access information in CometAPI, please see the API doc.

This model follows the OpenAI v1/responses standard call format. For reference:

```shell
curl --location \
--request POST 'https://api.cometapi.com/v1/responses' \
--header 'Authorization: Bearer sk-xxxxxx' \
--header 'User-Agent: Apifox/1.0.0 (https://apifox.com)' \
--header 'Content-Type: application/json' \
--header 'Accept: */*' \
--header 'Host: api.cometapi.com' \
--header 'Connection: keep-alive' \
--data-raw '{
  "model": "o3-pro",
  "input": [
    {"role": "user", "content": "What is the difference between inductive and deductive reasoning?"}
  ]
}'
```
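The same request can be sketched in Python using only the standard library (the `build_payload` and `ask` helpers are illustrative, not part of any SDK; replace the placeholder key with your own):

```python
import json
import urllib.request

API_URL = "https://api.cometapi.com/v1/responses"
API_KEY = "sk-xxxxxx"  # replace with your actual CometAPI key

def build_payload(question: str) -> dict:
    """Build a request body in the OpenAI v1/responses format used above."""
    return {"model": "o3-pro", "input": [{"role": "user", "content": question}]}

def ask(question: str) -> dict:
    """POST the question to the endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(question)).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    # o3-pro can reason for a long time, so use a generous timeout
    with urllib.request.urlopen(req, timeout=600) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

In the Responses API, the generated text is typically found inside the `output` list of the returned JSON; inspect the full response object for the exact shape before extracting fields.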

If you have any questions about the call or any suggestions for us, please contact us through social media or by email at [email protected].
