OpenAI DevDay, Opening Keynote: What's next for ChatGPT?
It's 6 November 2023, and OpenAI's Sam Altman has just kicked off the company's inaugural DevDay with a keynote jam-packed with updates, giving us a window into the near future of generative AI.
So what's new? And what might these latest developments mean for you?
The session started with a recap of some impressive figures: ChatGPT now boasts 100 million weekly active users, and over 92% of Fortune 500 companies use the platform. Then, taking a page straight out of Apple's keynote playbook, the presentation quickly cut to a video reel of users sharing stories of how ChatGPT is changing their lives. Tim Cook, take note.
New stuff
But enough of that, what’s actually new?
Well, looking at the fundamental platform, OpenAI's latest AI model, GPT-4 Turbo, has launched and will roll out to all users in the next couple of weeks. It features:
A greatly increased context length, up from 8k to 128k tokens (equivalent to around 300 pages of text). You can feed in far more data, and, crucially, GPT will be far less likely to 'forget' earlier parts of a thread.
More control over responses and outputs, including a new JSON mode and reproducible outputs.
Better knowledge - the training data cutoff is now April 2023 (previously September 2021), and OpenAI is working to close the gap further.
New modalities - DALL-E 3, GPT-4 Turbo with vision and text-to-speech - are all now accessible via the API (there's a quick sketch of this below).
Increased rate limits.
And the fees for using the APIs are dropping whilst the functionality improves. Expect to see some insane new apps built on the back of these updates.
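For the developer-curious, here's a minimal sketch of what calling these new capabilities could look like with OpenAI's Python SDK. The identifiers below (the gpt-4-1106-preview name for the GPT-4 Turbo preview, dall-e-3, tts-1 and the alloy voice) are the names in circulation around DevDay and may change, and the input file is just a placeholder - treat it as an illustration, not a recipe.

```python
# A rough sketch using the openai Python SDK (v1.x).
# Model names ("gpt-4-1106-preview", "dall-e-3", "tts-1") are the preview
# identifiers around DevDay and may change; the file below is a placeholder.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# 1. GPT-4 Turbo: the 128k context window means you can pass in a very long
#    document and still ask about its earliest sections.
with open("annual_report.txt") as f:  # placeholder document
    long_document = f.read()

chat = client.chat.completions.create(
    model="gpt-4-1106-preview",  # GPT-4 Turbo preview
    messages=[
        {"role": "system", "content": "You are a concise analyst."},
        {"role": "user", "content": f"Summarise the key risks in this report:\n\n{long_document}"},
    ],
)
print(chat.choices[0].message.content)

# 2. DALL-E 3 through the same API
image = client.images.generate(
    model="dall-e-3",
    prompt="A watercolour sketch of a developer conference keynote",
    size="1024x1024",
)
print(image.data[0].url)

# 3. Text-to-speech
speech = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="GPT-4 Turbo now supports a 128k token context window.",
)
speech.stream_to_file("keynote_note.mp3")
```

As a rough sense of scale, 128k tokens works out to somewhere in the region of 96,000 words (at roughly 0.75 words per token), which is where that 'around 300 pages' figure comes from.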
ChatGPT improvements
Not a developer? Me neither, but you'll be glad to hear that OpenAI made some substantial announcements that will fundamentally change what we mere mortals can do with ChatGPT.
Unsurprisingly, all of the new features of GPT-4 Turbo will be rolling out to GPT-4 users soon.
And one of the big changes to usability is a simple one: the annoying model picker is gone. Now, there will be no need to select your model (i.e. Browse with Bing, DALL-E 3, Advanced Data Analysis) - you'll be able to access all of these through a single chat request.
Custom GPTs
But the really big update coming through on ChatGPT is the ability to create - and share - completely custom chat agents, primed and ready to work on any number of tasks.
Examples listed on the ChatGPT website include a Sous Chef bot, giving you recipes based on the foods you love and the ingredients you have, and a Tech Support bot, helping the folks around you with everything from setting up a printer to troubleshooting a device. Man, I'm building one of those for my family the minute I have access!
Hey, there's even an example of a bot being created to act as a Maths Mentor. Wherever did they get that idea from?
GPTs can be private or shared publicly. And Enterprise users (those with a premium business account) will be able to create and deploy GPTs within their company.
What's more, the GPT Store will be launching later this month. Whilst it's not entirely clear exactly how creators will monetise their bots, Altman revealed that OpenAI will pay the people building the most useful and most-used GPTs a share of its revenue.
Seriously, this is huge.
No coding knowledge required
You don't need to be a coder to create your bot; you can program a GPT just through a natural language conversation. You'll be able to feed it your own data (docs, databases, spreadsheets) for it to draw insights from, and you can set custom instructions for how it should behave in conversation.
Imagine being able to create a custom GPT to support your work, or the work your team does. There's huge potential here.
It also feels like the basic 'wrapper' AI products - those built around simple API integrations packaged for a particular use case - are pretty much dead.
That's all for now. I'll be itching to get my hands on this stuff...but until then...