Even as it continues to release new iterations of its GPT product, OpenAI does not fully understand how the technology works, CEO Sam Altman acknowledged. Speaking at the International Telecommunication Union (ITU) AI for Good Global Summit in Geneva, Switzerland, Altman discussed the potential benefits of AI technology and its safety in a conversation with Nicholas Thompson, CEO of The Atlantic.
Interpretability refers to understanding how AI and machine learning systems make decisions, and Altman conceded, “We certainly have not solved interpretability.” Thompson pressed him: “Isn’t that an argument to not keep releasing new, more powerful models? If you don’t understand what’s happening?”
“These systems [are] generally considered safe and robust,” Altman responded, drawing an analogy to the human brain: even though we cannot fully explain the workings of individual neurons, we know that people can follow guidelines and answer questions about their reasoning.
The remarks come the same week OpenAI announced it has “recently begun training its next frontier model” and that it anticipates “the resulting systems to bring us to the next level of capabilities on our path to AGI [artificial general intelligence],” an announcement that coincides with the release of GPT-4o.
Referring to the creation of a new safety and security committee at OpenAI, Altman said that “if we are right that the trajectory of improvement is going to remain steep,” developing sound policies will be crucial. “It seems to me that the more we can understand about what’s going on in these models, the better,” he continued, adding that such understanding could be part of the broader package that enables the company to validate its systems and make claims about their safety.