Sam Altman says OpenAI doesn’t completely understand how GPT works


OpenAI CEO Sam Altman has acknowledged that the artificial intelligence company does not fully understand how its GPT models work, even as it continues to ship new iterations of the product. Altman made the remarks at the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, in a conversation with Nicholas Thompson, CEO of The Atlantic, about the potential benefits of AI technology and its safety.

Interpretability refers to understanding how AI and machine-learning systems arrive at their decisions. “We certainly have not solved interpretability,” Altman admitted. Thompson pressed him: “Isn’t that an argument to not keep releasing new, more powerful models? If you don’t understand what’s happening?”

“These systems [are] generally considered safe and robust,” Altman responded, comparing them to the human brain: even though we cannot fully explain the workings of individual neurons, we know that a person can follow certain guidelines and answer questions about their own reasoning.

The exchange came the same week OpenAI announced that it has “recently begun training its next frontier model and we anticipate the resulting systems to bring us to the next level of capabilities on our path to AGI [artificial general intelligence],” an announcement that coincided with the release of GPT-4o.

Referring to the creation of a new safety and security committee at OpenAI, Altman said that “if we are right that the trajectory of improvement is going to remain steep,” developing safety policies will be crucial. “It seems to me that the more we can understand about what’s going on in these models, the better,” he continued, adding that such understanding “seems like it may be a part of this whole package that enables us to validate and make safety assertions.”


The admission that even the company’s own researchers don’t fully understand how GPT (Generative Pre-trained Transformer) models function at a deep level highlights the complexity and unpredictability of artificial intelligence. It comes as AI systems, including OpenAI’s GPT-4, continue to demonstrate impressive but sometimes inexplicable reasoning, problem-solving, and language-generation abilities.

During a recent discussion, Altman acknowledged that while OpenAI has extensive knowledge of how large language models (LLMs) are trained and how they process data, certain aspects of their internal decision-making remain a “black box.” This means that while developers can predict overall patterns and behaviors, they cannot always explain why the model arrives at a specific answer or exhibits unexpected emergent properties.

This lack of full transparency raises both opportunities and concerns. On one hand, AI systems continue to surprise researchers by displaying capabilities that were not explicitly programmed, such as reasoning skills and creative problem-solving. On the other hand, the inability to fully explain how and why AI models behave in certain ways raises ethical and safety concerns, especially as they are deployed in critical industries like healthcare, finance, and legal services.

Altman emphasized that OpenAI is actively working on improving AI interpretability, a field of research focused on making AI decision-making more understandable and predictable. Efforts include mechanistic interpretability studies, fine-tuning techniques, and transparency measures designed to enhance trust in AI systems.
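To make the “black box” idea concrete, one of the simplest interpretability probes is feature ablation: perturb one input at a time and measure how much the output moves. The toy network and weights below are entirely hypothetical (nothing from OpenAI’s models), a minimal sketch of why raw weights are opaque to inspection while ablation still recovers a rough ranking of which inputs matter.

```python
def tiny_net(x):
    """A toy 'black box': a fixed 3-input, 2-hidden-unit network whose
    raw weights (made up for illustration) carry no obvious meaning."""
    w1 = [[0.9, -1.2, 0.3],   # hidden-layer weights, one row per unit
          [-0.4, 0.8, 1.1]]
    w2 = [1.5, -0.7]          # output-layer weights
    # ReLU hidden layer followed by a linear output
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]
    return sum(wo * h for wo, h in zip(w2, hidden))

def ablation_importance(x):
    """Simple interpretability probe: zero out each input feature in turn
    and record how far the output shifts. Bigger shift = more influence."""
    base = tiny_net(x)
    scores = []
    for i in range(len(x)):
        ablated = list(x)
        ablated[i] = 0.0
        scores.append(abs(base - tiny_net(ablated)))
    return scores

scores = ablation_importance([1.0, 2.0, 0.5])
print(scores)
```

Running the probe shows that zeroing the second feature moves the output far more than the others, even though nothing in the raw weight matrices makes that obvious by eye; production interpretability research applies the same idea, at vastly greater scale, to billions of parameters.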

The revelation has sparked discussions in the AI community about the need for regulatory frameworks and safety protocols to mitigate risks associated with advanced AI. As models become more powerful, ensuring that they behave reliably and align with human values becomes increasingly important.


Despite these challenges, Altman remains optimistic about the future of AI, stating that OpenAI is committed to responsible development and that advancements in interpretability research will eventually lead to more transparent and controllable AI models. However, for now, the inner workings of AI models like GPT remain a complex mystery, underscoring both the potential and unpredictability of modern artificial intelligence.


