
OpenAI's Sam Altman says GPT-5 falls short of Artificial General Intelligence (AGI), admitting "it seems there's something lacking," as the Microsoft partnership shows signs of strain.

GPT-5, OpenAI's latest model, fell short of artificial general intelligence (AGI) in CEO Sam Altman's view, in part because it cannot learn from fresh experiences as it operates.


OpenAI has unveiled GPT-5 as a significant stride towards Artificial General Intelligence (AGI), a class of AI system that would match or surpass human cognitive capabilities across the board. Even so, the model falls short of being fully human-level, according to the company's CEO, Sam Altman.

AGI is generally defined as an artificial system with the capability to perform intellectual tasks across a broad range of environments at a level comparable to or surpassing human cognitive abilities. This contrasts with narrow or weak AI, which specializes in specific tasks without broad adaptability or general reasoning.

OpenAI's GPT-5 represents a significant step towards AGI, with advanced capabilities in multi-modal input processing, extended context understanding, improved multi-step reasoning, and refined self-evaluation. While highly capable, it is generally viewed as approaching AGI rather than reaching it, and is typically placed around the "emerging AGI" level.

Guidance issued in 2025 under the European Commission's AI Act defines General Purpose AI (GPAI) models partly by a computational threshold: a model whose training compute exceeds 10^23 FLOP and that can generate language (text or audio), text-to-image, or text-to-video outputs is presumed to be GPAI. Functional generality is also required: models must not be narrowly specialized but must exhibit broad capabilities across multiple tasks. The threshold roughly corresponds to a model of around one billion parameters trained on a large amount of data, capturing scale in both model size and training data. Once a model meets the GPAI classification, lifecycle obligations under the law apply, including transparency and documentation requirements.
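
To make the threshold concrete, here is a minimal back-of-the-envelope sketch in Python. It relies on the common "compute ≈ 6 × parameters × training tokens" heuristic from the scaling-law literature, which is an assumption on our part rather than anything prescribed by the AI Act, and the parameter and token counts below are purely hypothetical.

```python
# Back-of-the-envelope check against the AI Act's GPAI presumption threshold.
# Assumption: training compute is estimated with the common scaling-law
# heuristic FLOP ≈ 6 * parameters * training tokens; the Act and its guidance
# do not prescribe this formula, and the model sizes below are hypothetical.

GPAI_FLOP_THRESHOLD = 1e23  # indicative threshold from the 2025 GPAI guidance


def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute via the 6*N*D rule of thumb."""
    return 6.0 * n_params * n_tokens


# Hypothetical one-billion-parameter model trained on 20 trillion tokens.
flop = estimated_training_flop(n_params=1e9, n_tokens=2e13)
print(f"Estimated training compute: {flop:.2e} FLOP")
print(f"Presumed GPAI under the guidance: {flop > GPAI_FLOP_THRESHOLD}")
```

Under these illustrative numbers the estimate lands at roughly 1.2 × 10^23 FLOP, just above the presumption line, which is why "around one billion parameters trained on a large dataset" works as a rough shorthand for the threshold.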

No company has publicly announced achieving full human-level AGI, as current models, even GPT-5, retain limitations in autonomy, deep understanding, and adaptability relative to biological intelligence. The path to AGI remains partly conceptual, with ongoing debate on precise definitions, evaluation methods, and ethical implications.

Despite the progress made, challenges remain. GPT-5 has faced criticism from users due to glitches, bugs, and unresponsiveness, leading to a backlash, especially after OpenAI's decision to deprecate GPT-5's predecessors and bury them behind its $20 ChatGPT Plus paywall.

Google DeepMind's framework classifies current large language models (such as GPT-4 and LLaMA 2) as "emerging AGI": systems that show some generalized abilities but remain below human-level competence in many areas. The framework implies that true AGI is considerably more advanced than today's top models.

Anthropic and other leading AI labs are focused on advancing safety, interpretability, and aligned general intelligence, but publicly available information suggests their systems likewise sit in the emerging-to-competent AGI range rather than constituting fully achieved AGI.
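
As a rough illustration of how such a tiered framework can be expressed, the sketch below encodes the level names from DeepMind's published taxonomy as a simple Python mapping; the placement of specific models is illustrative only, echoing this article's summary rather than any official rating.

```python
# A minimal sketch of the tiered "levels of AGI" idea referenced above.
# The level names follow Google DeepMind's published taxonomy; the example
# placements of specific models are illustrative, based on this article's
# summary, and are not official ratings.

AGI_LEVELS = {
    0: "No AI",
    1: "Emerging (comparable to an unskilled human)",
    2: "Competent (at least 50th percentile of skilled adults)",
    3: "Expert (at least 90th percentile of skilled adults)",
    4: "Virtuoso (at least 99th percentile of skilled adults)",
    5: "Superhuman (outperforms all humans)",
}

# Illustrative placements echoing the article's framing, not official ratings.
example_placements = {"GPT-4": 1, "LLaMA 2": 1}

for model, level in example_placements.items():
    print(f"{model}: Level {level} - {AGI_LEVELS[level]}")
```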

In a separate report, concerns have been raised about the potential for OpenAI to prematurely declare AGI via an AI coding agent.

As the race towards AGI continues, it is crucial to remember the ethical implications and the need for transparency in AI development. The journey towards AGI is a complex one, requiring vast resources, including cooling water, GPUs, AI talent, and money.

Sam Altman, CEO of OpenAI, attributed the rocky rollout of GPT-5 to a broken model auto-switcher, which he says has since been fixed. Microsoft's CEO, Satya Nadella, shares the same focus on delivering real-world impact with AI.

Reports have emerged that Microsoft's multi-billion-dollar partnership with OpenAI is under strain over OpenAI's plans to restructure as a for-profit entity, a move intended to guard against outside interference and hostile takeovers and to keep investor funding flowing. Microsoft, however, has indicated it is prepared to walk away from the high-stakes negotiations and ride out the remainder of the existing partnership, which runs through 2030.

Sam Altman suggested that users complaining about ChatGPT's supposedly degraded user experience want the tool to be a "yes man" because they never had anyone support them before. He also proposed a shift in focus towards self-replication in the tech world.

Demis Hassabis, CEO of Google's DeepMind, claims we're on the verge of achieving AGI. Despite the progress, society's readiness to handle all that AGI entails is still a matter of debate, with Hassabis warning that we may not be fully prepared.

  1. Sam Altman, CEO of OpenAI, recently provided an update on the GPT-5 issue, attributing it to a broken auto-switcher that has since been fixed.
  2. Microsoft's CEO, Satya Nadella, shares the focus on delivering real-world impact with AI, including through the software giant's Windows and Office applications in the finance and business technology sectors.
  3. Google DeepMind's framework classifies current large language models such as GPT-4 and LLaMA 2 as emerging AGI, still short of human-level competence in many areas.
  4. Microsoft's partnership with OpenAI risks turning into a matter of contractual disputes and drawn-out negotiations, primarily over OpenAI's plans to restructure as a for-profit entity.
