To compete more aggressively with rival AI companies like Google, OpenAI has launched Flex processing, an API option that offers lower AI model usage prices in exchange for slower response times and "occasional resource unavailability."
Flex processing, available in beta for OpenAI's recently released o3 and o4-mini reasoning models, is aimed at lower-priority and "non-production" tasks such as model evaluations, data enrichment, and asynchronous workloads, OpenAI says.
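OpenAI's documentation describes opting in to Flex via a `service_tier` field on the request. As a rough sketch (the model name and prompt are invented for illustration, and the exact parameter shape should be checked against OpenAI's current API reference), a low-priority request body might be assembled like this:

```python
import json

def build_flex_request(model: str, prompt: str) -> dict:
    """Assemble a chat completion payload for low-priority Flex processing.

    Assumes the service_tier request field described in OpenAI's docs;
    this dict would be passed to the API client, not executed locally.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        # Opting in to Flex: slower responses and possible resource
        # unavailability in exchange for half-price tokens.
        "service_tier": "flex",
    }

payload = build_flex_request("o3", "Grade these evaluation transcripts.")
print(json.dumps(payload, indent=2))
```

Because Flex requests may queue or fail with resource-unavailable errors, callers would typically pair a payload like this with generous timeouts and retry logic.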
It cuts API costs exactly in half. For o3, Flex processing is $5 per million input tokens (roughly 750,000 words) and $20 per million output tokens, versus the standard $10 per million input tokens and $40 per million output tokens. For o4-mini, Flex drops the price to $0.55 per million input tokens and $2.20 per million output tokens, down from $1.10 per million input tokens and $4.40 per million output tokens.
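The discount math works out like this (a quick sketch using the per-million-token prices quoted above; the token counts are invented for illustration):

```python
# (input $/M tokens, output $/M tokens), per the prices quoted above
PRICES = {
    "o3-standard": (10.00, 40.00),
    "o3-flex": (5.00, 20.00),
    "o4-mini-standard": (1.10, 4.40),
    "o4-mini-flex": (0.55, 2.20),
}

def cost(tier: str, input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of a job at the given pricing tier."""
    in_price, out_price = PRICES[tier]
    return (input_tokens * in_price + output_tokens * out_price) / 1_000_000

# A hypothetical batch job: 2M input tokens, 0.5M output tokens on o3.
standard = cost("o3-standard", 2_000_000, 500_000)  # $40.00
flex = cost("o3-flex", 2_000_000, 500_000)          # $20.00
print(standard, flex)
```

For this example job, Flex comes out to exactly half the standard price, matching the across-the-board 50% discount.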
The introduction of Flex processing comes as the price of frontier AI continues to climb and as rivals release cheaper, more efficient budget-focused models. On Thursday, Google rolled out Gemini 2.5 Flash, a reasoning model that matches or beats DeepSeek's R1 in performance at a lower input token cost.
In an email to customers announcing the launch of Flex pricing, OpenAI also noted that developers in tiers 1-3 of its usage tier hierarchy will have to complete the company's newly introduced ID verification process to access o3. (Tiers are determined by the amount of money spent on OpenAI services.)
OpenAI has previously said that ID verification is intended to stop bad actors from violating its usage policies.
