OpenAI ships GPT-4.1 without a safety report

By Karla T Vasquez

On Monday, OpenAI launched GPT-4.1, a new family of AI models that outperformed some of the company's existing models on certain benchmarks, particularly programming tests. However, GPT-4.1 shipped without the safety report — known as a model or system card — that typically accompanies OpenAI's model releases.

As of Tuesday morning, OpenAI had not published a safety report for GPT-4.1 — and it appears it doesn't plan to. In a statement to TechCrunch, OpenAI spokesperson Shaokyi Amdo said: "GPT-4.1 is not a frontier model, so there won't be a separate system card released for it."

It's fairly standard for AI labs to release safety reports showing the types of tests they conducted, internally and with third-party partners, to evaluate the safety of particular models. These reports occasionally reveal unflattering information — for example, that a model tends to deceive people or is dangerously persuasive. By and large, the AI community perceives these reports as good-faith efforts by AI labs to support independent research and red teaming.

Over the past several months, however, leading AI labs appear to have lowered their reporting standards, prompting backlash from safety researchers. Some, like Google, have dragged their feet on safety reports, while others have published reports lacking their usual level of detail.

OpenAI's own recent track record is not exceptional. In December, the company was criticized for releasing a safety report containing benchmark results for a different version of a model than the one it deployed to production. Last month, OpenAI launched a model, deep research, weeks before publishing the system card for that model.

Steven Adler, a former OpenAI safety researcher, told TechCrunch that safety reports aren't mandated by any law or regulation — they're voluntary. Nevertheless, OpenAI has made several commitments to governments to increase transparency around its models. Ahead of the UK AI Safety Summit in 2023, OpenAI called system cards "a key part" of its approach to accountability in a blog post. And leading up to the Paris AI Action Summit in 2025, OpenAI said that system cards provide valuable insights into a model's risks.

"System cards are the AI industry's main tool for transparency and for describing what safety testing was done," Adler told TechCrunch in an email. "Today's transparency norms and commitments are ultimately voluntary, so it's up to each AI company to decide whether or when to release a system card for a given model."

GPT-4.1 is shipping without a system card at a time when current and former employees are raising concerns over OpenAI's safety practices. Last week, Adler and five other former OpenAI employees filed a proposed amicus brief in Elon Musk's case against OpenAI, arguing that a for-profit OpenAI might cut corners on safety work. The Financial Times recently reported that the ChatGPT maker has slashed the time and resources it allocates to safety testers.

While GPT-4.1 is not the highest-performing model in OpenAI's roster, it does make considerable gains in the efficiency and latency departments. Thomas Woodside, co-founder and policy analyst at Secure AI Project, told TechCrunch that the performance improvements make a safety report all the more critical. The more sophisticated the model, the higher the risk it could pose, he said.

Many AI labs have pushed back against efforts to codify safety-reporting requirements into law. For example, OpenAI opposed California's SB 1047, which would have required many AI developers to audit and publish safety evaluations of models they make public.
