
OpenAI launches GPT-4o, a faster, free model that can “see” with your cell phone camera 

The update arrives with new apps for cell phones and desktops, and the model can better understand and talk to the user, as if it were human

In a quick event held on its YouTube channel, OpenAI introduced its new language model, GPT-4o. The LLM promises to be twice as fast and 50% cheaper than GPT-4 Turbo, the company's most complete language model to date. See all the details right now.

Advantages of GPT-4o

Mira Murati, OpenAI's Chief Technology Officer, presented the new language model (Screenshot: Glauco Vital/Showmetech)

The “o” in GPT-4o comes from the Latin word “omni”, which means “everything”. During the OpenAI event, Mira Murati, the company's Chief Technology Officer, took the stage to present the new language model. She highlighted that it can be used free of charge by everyone who has an OpenAI account, while ChatGPT Plus subscribers get 5x the message limit.

The great advantage of the new language model is its support for more than 97 languages, in addition to its ability to create text and images and to perform better than GPT-4 Turbo. It also brings a memory feature, which records previous conversations for a better understanding of future chats.

The company also mentioned that the new language model will soon be able to browse the web, but did not say whether this will be available to users with free accounts or only to Plus subscribers.

Finally, by listening to what the user is saying, OpenAI's new language model can gauge a person's mood. In an example shown during the presentation, it identified that a person was nervous just from their heavy breathing. GPT-4o can also identify mood from a photo.

The new feature can be accessed starting today (Image: Glauco Vital/Showmetech)

Mira Murati also shared that more than 100 million people currently use ChatGPT to create images, texts and other content. The executive highlighted OpenAI's overarching goal of creating accessible technologies for everyone, and the launch of the new language model is yet another way of putting that commitment into practice.

Key Features and Examples

There is no denying that GPT-4o looks more like a full personal assistant, along the lines of Google Assistant and Amazon Alexa. During the OpenAI event, the LLM's ability to use the smartphone's camera to support its tasks was demonstrated.

Developer Mark Chen simulated being in a stressful situation and, immediately after identifying the abnormal behavior, the language model suggested breathing exercises to calm him down. Check out the video below:

The most interesting feature is the language model's ability to recognize facial expressions, environments and more using only the smartphone's camera. Just give ChatGPT a command, open the camera and wait a few seconds for the result. See the demo:
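The app drives the camera natively, but GPT-4o's image understanding is also exposed through OpenAI's public API. Below is a minimal Python sketch, assuming a frame has already been captured and saved as photo.jpg (the file name and prompt are illustrative):

    from base64 import b64encode
    from openai import OpenAI

    client = OpenAI()  # reads the OPENAI_API_KEY environment variable

    # Encode the captured frame as base64 so it can be sent inline
    with open("photo.jpg", "rb") as f:
        image_b64 = b64encode(f.read()).decode()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe the scene and the mood of anyone in it."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    print(response.choices[0].message.content)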

If you need help with math problems, the artificial intelligence can also help solve them with the camera's support. During today's event, a developer wrote a first-degree equation (3x + 1 = 4) and, guided by the new language model in ChatGPT, worked through it to the answer. See another demo:
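For the record, the equation is simple to verify: subtracting 1 from both sides of 3x + 1 = 4 gives 3x = 3, and dividing by 3 gives x = 1.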

With support for more than 97 languages, OpenAI's artificial intelligence can also help two people who speak different languages talk to each other in real time (something already seen in Google Translate). Just give the command, naming both languages, and the conversation can proceed with the GPT-4o language model as the interpreter. Check it out:
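The demo uses the app's voice mode, but the same interpreter behavior can be sketched at the text level against the public API. A minimal example, assuming English and Portuguese as the language pair (the system prompt is illustrative):

    from openai import OpenAI

    client = OpenAI()

    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system",
             "content": "You are a live interpreter between English and Portuguese. "
                        "Translate each message into the other language and say nothing else."},
            {"role": "user", "content": "Hey, how has your week been?"},
        ],
    )
    print(response.choices[0].message.content)  # the Portuguese rendering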

The company also highlights that users can interrupt GPT-4o while its voice is still delivering an answer. This way, it is no longer necessary to wait for an entire response to finish before making a new request to the artificial intelligence. Watch the demo:

When interrupting a response, it is also possible to change the tone of voice, the intonation and the speech speed of the artificial intelligence's voice, all in just a few seconds. These are just a few examples of what the new language model can do, and it will certainly gain new possibilities once it is released for general use.

New desktop app

The desktop application will be launched for macOS, at least for now (Photo: Glauco Vital/Showmetech)

A desktop application that mirrors the web experience of the artificial intelligence was launched and shown during today's OpenAI event. In addition to answering commands typed in the chat, the application, which already has the new language model integrated, can see what is being shown on the screen and even produce summaries of it. It can also use the Mac's webcam to “see” and recognize images.

For now, the ChatGPT desktop app is only available for macOS, starting today for Plus users and rolling out to everyone else over the next few weeks. There is no information about availability for Windows or Linux, but we will update this article when OpenAI releases that information.

The company also took advantage of the launch to refresh the AI's web interface, although it did not announce a date for the rollout. Look:

Navigation became less cluttered (Photo: Reproduction/MacRumors)

Among the changes are repositioned buttons and a more centered layout for the AI's responses, all intended to feel more “friendly and conversational”, according to the company.

Availability

The rollout of GPT-4o, despite starting today, will happen in stages. According to the press statement, everyone with a free or Plus account will have access to GPT-4o at no charge, but subscribers will get a 5x higher message limit. See the new interface for OpenAI subscribers:

The language model is being released starting today (Photo: Bruno Martinez/Showmetech)

As for Voice Mode, which gives ChatGPT its voice, a little more patience will be needed: the company announced that the feature will launch at full capacity “in the coming weeks”.

The GPT-4o API has also been released: it is 2x faster than GPT-4 Turbo, in addition to being 50% cheaper and having 5x higher rate limits compared to the previous model.
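For developers, adopting the new model should amount to pointing the model parameter at gpt-4o. A minimal sketch of such a call (the prompt is illustrative):

    from openai import OpenAI

    client = OpenAI()

    # Migrating from GPT-4 Turbo is a one-line change: swap the model name
    response = client.chat.completions.create(
        model="gpt-4o",  # previously: model="gpt-4-turbo"
        messages=[{"role": "user",
                   "content": "Summarize the GPT-4o launch in one sentence."}],
    )
    print(response.choices[0].message.content)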

Can ChatGPT become a personal assistant?

The news changes how we use the AI that has been famous since the end of 2022. And there is no denying that OpenAI is positioning its big tool as an interesting alternative to Google Assistant and Amazon Alexa, especially now that it can “speak”.

It remains to be seen whether it will ship on devices launched during the rest of 2024, but we are witnessing the beginning of a new era for ChatGPT. In the meantime, tell us in the comments how you see this change and which feature introduced today is your favorite.

See also other news

ChatGPT-4 outperforms psychologists in social intelligence test, says study

OpenAI and Moderna announce partnership to improve vaccines and treatments

With information: OpenAI

Reviewed by Glauco Vital on May 13, 2024.
