
Huge Announcement from OpenAI Spring Update 2024

Published at 10:30 AM


Introduction

Houston, we have a problem

Hi developers, a short blog post is here… just before my 20th birthday… but I've got a huge problem from OpenAI.

Last night (May 13th), I attended the announcement keynote on OpenAI’s YouTube channel, and I saw a lot of innovations introduced during the keynote that I hadn’t expected from OpenAI. The week before, you might have heard OpenAI’s CEO Sam Altman telling reporters that GPT-4 is “the dumbest model any of you will ever have to use”. So last night, I think, was his bet to show something that has (not never) been on Earth before, but that could be better than any other model.

Introducing GPT-4o

Last night, we got the announcement of the newest GPT model, GPT-4o (“omni”), OpenAI’s new flagship model that can reason across audio (speech-to-text and speech output), vision (image analysis), and text in real time. Currently, it gives us a preview with only the text and vision features.

Here are some more specifications that they included with GPT-4o:

  1. We get high intelligence on text, reasoning, and coding, while it also performs well at image analysis.
  2. I felt that GPT-4o generates tokens about 2x faster than GPT-4 and GPT-4 Turbo when I experimented with it in my Microsoft Azure playground.
  3. GPT-4o will have 5x the rate limits of GPT-4 Turbo, up to 10 million tokens per minute.
  4. GPT-4o has improved vision capabilities across the majority of tasks.
  5. GPT-4o has improved capabilities in non-English languages and uses a new tokenizer that tokenizes non-English text more efficiently than GPT-4 Turbo (see the sketch after this list).
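
To see point 5 in practice, here is a minimal sketch using the tiktoken library. I’m assuming tiktoken ≥ 0.7.0 (which added GPT-4o’s new o200k_base encoding), and the Thai sample sentence is my own example, not from the keynote:

```python
# Compare how many tokens the same non-English text costs on each tokenizer.
# GPT-4 / GPT-4 Turbo use cl100k_base; GPT-4o uses the new o200k_base.
import tiktoken

gpt4_turbo_enc = tiktoken.get_encoding("cl100k_base")
gpt4o_enc = tiktoken.get_encoding("o200k_base")

text = "สวัสดีครับ ยินดีต้อนรับสู่โลกของ GPT-4o"  # Thai sample text (my own)

print("GPT-4 Turbo tokens:", len(gpt4_turbo_enc.encode(text)))
print("GPT-4o tokens:", len(gpt4o_enc.encode(text)))
# Fewer tokens for the same text means cheaper and faster non-English prompts.
```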

So far, we can try the performance and new features of GPT-4o by visiting ChatGPT, the Playground, or the Introduction to GPT-4o cookbook to test the model. If you want to see the general features, please visit here: Hello GPT-4o | OpenAI
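
If you’d rather try it through the API than the Playground, here is a minimal sketch of a text-plus-vision request using the official openai Python SDK (v1+). I’m assuming OPENAI_API_KEY is set in your environment, and the image URL is just a placeholder of mine:

```python
# Ask GPT-4o to analyze an image together with a text prompt.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What is shown in this image?"},
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"},  # placeholder
                },
            ],
        }
    ],
)

print(response.choices[0].message.content)
```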

New Capabilities Coming to ChatGPT, and It’s Free

In the meantime, OpenAI is starting to roll out more intelligence and advanced tools to ChatGPT Free users over the coming weeks, per the announcement keynote. They will now have access to features such as:

  * GPT-4o-level intelligence
  * Responses from both the model and the web
  * Data analysis and chart creation
  * Chatting about photos you take
  * Uploading files for help with summarizing, writing, or analyzing
  * Discovering and using GPTs and the GPT Store
  * A more helpful experience with Memory

There will be a limit on the number of messages that free users can send with GPT-4o, depending on usage and demand. When the limit is reached, ChatGPT will automatically switch to GPT-3.5 so users can continue their conversations, then switch back once the limit resets.
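
ChatGPT does this switching for you automatically, but if you like the pattern, here is a minimal sketch of the same fallback idea on the API side. The `chat_with_fallback` helper and the choice of fallback model are my own assumptions, not something OpenAI ships:

```python
# Try GPT-4o first; if the rate limit is hit, retry on a cheaper model.
from openai import OpenAI, RateLimitError

client = OpenAI()

def chat_with_fallback(messages, primary="gpt-4o", fallback="gpt-3.5-turbo"):
    try:
        response = client.chat.completions.create(model=primary, messages=messages)
    except RateLimitError:
        # Out of GPT-4o quota: continue the conversation on the fallback model.
        response = client.chat.completions.create(model=fallback, messages=messages)
    return response.choices[0].message.content

print(chat_with_fallback([{"role": "user", "content": "Hello, GPT-4o!"}]))
```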

ChatGPT Desktop is going to be available on your PC

For both free and paid users, they also announced a new ChatGPT desktop app for macOS that is designed to integrate seamlessly into anything you’re doing on your computer. It also lets you take a screenshot and have ChatGPT analyze what’s on the screen.

Additionally, you can have voice conversations with ChatGPT directly from your computer, starting with the Voice Mode that ChatGPT has had since launch, with GPT-4o’s new audio and video capabilities coming in the future. They also plan to launch a Windows version later in 2024.

You can visit here for more information about the tools: Click Here

In conclusion,

Shut Up and Take My Money

Honestly, I have seen all of this technology before: it works with Microsoft Copilot and Google’s Gemini in their latest updates, where you can do everything like uploading images, summarizing PDFs, and using them as a search engine. However, with the model’s better intelligence and faster performance (including visualizing graph illustrations, which is not in the preview yet), OpenAI’s ChatGPT with GPT-4o might perform better than other commercial LLM/GPT models.

So far, there are limits on GPT-4o usage if you are a free user. So, why don’t you shut your mouth, Sam, and take my money 💸🥲🙏

For some of the functions I didn’t mention above, please look through their demonstrations: YouTube Playlist