OpenAI introduces GPT-4o, making more capabilities available for free in ChatGPT.
OpenAI has announced GPT-4o, its new flagship model that can reason across audio, vision, and text in real time.
GPT-4o (“o” for “omni”) is reportedly a step towards much more natural human-computer interaction: it accepts any combination of text, audio, and image as input and generates any combination of text, audio, and image as output. It can respond to audio inputs in as little as 232 milliseconds, with an average of 320 milliseconds, which is similar to human response time in a conversation. It matches GPT-4 Turbo performance on English text and code, with significant improvement on non-English text, while also being much faster and 50% cheaper in the API. GPT-4o is especially strong at vision and audio understanding compared to existing models.
With GPT-4o, OpenAI reportedly trained a single new model end-to-end across text, vision, and audio, meaning that all inputs and outputs are processed by the same neural network.
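The mixed text-and-image input described above maps onto OpenAI's Chat Completions API, where a single user message can carry multiple content parts. Below is a minimal sketch of how such a request payload could be assembled; the helper name `build_multimodal_request` and the image URL are illustrative assumptions, and no request is actually sent:

```python
# Sketch of a multimodal Chat Completions request payload for GPT-4o.
# Only the JSON-style payload is built here; nothing is sent over the network.

def build_multimodal_request(prompt: str, image_url: str) -> dict:
    """Assemble a chat request pairing a text prompt with an image URL.

    The helper name is hypothetical; the payload shape follows the
    Chat Completions convention of a list of typed content parts.
    """
    return {
        "model": "gpt-4o",
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": prompt},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

request = build_multimodal_request(
    "Describe this image in one sentence.",
    "https://example.com/photo.jpg",  # placeholder URL, an assumption
)
print(request["model"])  # gpt-4o
```

Because text and image parts travel in one message, the same model processes both, consistent with the end-to-end training described above.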
OpenAI is an American artificial intelligence (AI) research organization founded in December 2015 with the goal of developing “safe and beneficial” artificial general intelligence, which it defines as “highly autonomous systems that outperform humans at most economically valuable work”. As one of the leading organizations of the AI boom, it has developed several large language models, advanced image generation models, and, previously, open-source models. Its release of ChatGPT has been credited with starting the AI boom.
The organization consists of the non-profit OpenAI, Inc. registered in Delaware and its for-profit subsidiary OpenAI Global, LLC.