
OpenAI's ChatGPT Faces Criticism, Gets Safety Update

ChatGPT's new safety feature may confuse users. OpenAI works on age verification and a new model to improve user experience.



OpenAI's ChatGPT has sparked both admiration and criticism. Users praise its conversational style but question its transparency and handling of sensitive topics. OpenAI is responding with new safety measures and model updates.

OpenAI has been testing a new safety routing system in ChatGPT. The system temporarily switches between language models based on a conversation's content; for instance, it may redirect sensitive or emotional topics to the 'gpt-5-chat-safety' model. Users, however, have criticized the lack of transparency, saying the chatbot's sudden change in behavior feels patronizing.

Even harmless emotional or personal inputs can trigger this redirection, which users find confusing and intrusive. OpenAI says it is aware of these concerns and is developing more robust age verification systems for selected regions, though accurately categorizing conversations with an LLM remains a challenge.

OpenAI's ChatGPT continues to evolve, with a new reasoning model, 'o3', set to launch at the end of January 2025. The company has also adjusted model personalities in response to complaints that the chatbot seemed 'cold' or 'distant'. Despite these efforts, users' emotional attachment to ChatGPT persists, with some describing it as a real friend. OpenAI's push to improve transparency and user experience remains a work in progress.
