Just when it seemed that artificial intelligence had reached one of its high points with ChatGPT, OpenAI announced this week the next evolution of its natural language processing technology: GPT-4.

And, unlike its predecessor, which only accepted instructions or questions as text, the new GPT-4 version of the chatbot reaches a new level: it can process images.

With GPT-4, users can include images as part of the input from which they want to generate a response: for example, taking a photo of the ingredients available in their kitchen and asking the chatbot to suggest recipes and cooking instructions.
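To make the idea concrete, here is a minimal sketch of what such a mixed text-and-image request could look like as a chat-style payload. This is an illustration only: the model name "gpt-4", the content format, and the helper function below are assumptions for the example, not OpenAI's confirmed public interface, and no network call is made.

```python
# Hypothetical sketch: building a multimodal (text + image) chat request.
# The "gpt-4" model name and the message/content structure are assumptions
# for illustration; the image-input feature was not publicly available.

def build_recipe_request(image_url: str) -> dict:
    """Build a chat-style payload asking for recipes from a kitchen photo."""
    return {
        "model": "gpt-4",  # assumed model identifier
        "messages": [
            {
                "role": "user",
                # A single user turn can interleave text and an image reference.
                "content": [
                    {"type": "text",
                     "text": "Suggest recipes using the ingredients in this photo."},
                    {"type": "image_url",
                     "image_url": {"url": image_url}},  # hypothetical image field
                ],
            }
        ],
    }

payload = build_recipe_request("https://example.com/kitchen.jpg")
print(payload["model"])                              # which model is requested
print(len(payload["messages"][0]["content"]))        # two parts: text + image
```

The point of the structure is simply that text and image parts travel together in one user message, which is what "interspersing texts and images" means in practice.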

According to international media, when presented with an image of hundreds of balloons tied to an object, GPT-4 can not only discern that they are balloons but also understand that if the strings are cut, the balloons will fly away.

Beyond image processing, the artificial intelligence can also tackle more complex tasks. If a user sketches on a piece of paper the content they would like on a new web page, GPT-4 can read the sketch and generate all the code needed to produce a complete website.

During the GPT-4 presentation, Greg Brockman, president and co-founder of OpenAI, spoke of the "superpowers" the AI gains when fed images. "GPT-4 is not only a language model, it is also a visual model. It flexibly accepts inputs that intersperse texts and images," Brockman said.

However, the image-input feature is not yet available to the public; it is currently being tested exclusively by the company Be My Eyes.

"Our GPT-4 model is the most capable and aligned to date. It is a multimodal model, so it accepts images as well as text as prompts," said Sam Altman, also a co-founder of OpenAI.

However, the company has clarified that, despite the improvements, GPT-4 will not be immune to "hallucinations," the factual errors in responses already observed in ChatGPT. The new model reportedly scores 40% higher than its predecessor on evaluations of its propensity for these kinds of errors.

"GPT-4 still has many known limitations that we are working to address, such as social biases, hallucinations, and adversarial prompts," OpenAI explained.

How to use GPT-4?

For now, OpenAI has announced that it will let users test the new GPT-4 model through a ChatGPT Plus subscription, a service that costs $20 per month. Once subscribed, ChatGPT Plus offers the option to choose which AI model to use, including GPT-4.

Another way to experiment with GPT-4 is through Bing Chat, Microsoft's search tool, which has already implemented GPT-4 technology to answer user questions in a chat format.

To use Bing Chat, users must open the Bing app and select the chat feature. From there, they can type their questions in natural language, and the system will provide answers in real time.

Notably, Microsoft has invested $10 billion in OpenAI.

Image: Getty Images