OpenAI has updated its Preparedness Framework, the internal system it uses to assess whether AI models are safe and what safeguards are needed during development and release. In the update, OpenAI said it may adjust its requirements if a rival AI developer releases a "high-risk" system without comparable protections.
The change reflects the growing competitive pressure on commercial AI developers to ship models quickly. OpenAI has been accused of lowering safety standards in favor of faster releases, and of failing to deliver timely reports detailing its safety testing.
Perhaps anticipating criticism, OpenAI claims it would not make these policy adjustments lightly, and that it would keep its safeguards at "a level more protective."
"If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements," OpenAI wrote in a blog post published Tuesday afternoon. "However, we would first rigorously confirm that the risk landscape has actually changed, publicly acknowledge that we are making an adjustment, assess that the adjustment does not meaningfully increase the overall risk of severe harm, and still keep safeguards at a level more protective."
The refreshed Preparedness Framework also makes clear that OpenAI is leaning more heavily on automated evaluations to speed up product development. The company says that while it has not abandoned human-led testing entirely, it has built "a growing suite of automated evaluations" that can supposedly keep up with "[a] faster [model release] cadence."
According to the Financial Times, OpenAI gave testers less than a week to run safety checks on an upcoming major model, a compressed timeline compared with previous releases. The publication's sources also alleged that many of OpenAI's safety tests are now conducted on earlier versions of models rather than the versions released to the public.
Other changes to OpenAI's framework concern how the company categorizes models according to risk, including models that can conceal their capabilities, evade safeguards, prevent their own shutdown, and even self-replicate. OpenAI says it will now focus on whether models meet one of two thresholds: "high" capability or "critical" capability.
OpenAI's definition of the former is a model that could "amplify existing pathways to severe harm." The latter, per the company, is a model that "introduces unprecedented new pathways to severe harm."
"Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed," OpenAI wrote in its blog post. "Systems that reach critical capability also require safeguards that sufficiently minimize associated risks during development."
The changes are the first OpenAI has made to the Preparedness Framework since 2023.
