ChatGPT-4 Gets a Boost: OpenAI Addresses AI 'Laziness' with Innovative Updates
On January 25, 2024, OpenAI made a surprising announcement about updates to the AI models that power the ChatGPT assistant. Among the updates, OpenAI highlighted a potential fix for the widely discussed 'laziness' observed in GPT-4 Turbo since its release in November 2023. OpenAI also announced an updated GPT-3.5 Turbo model, a new text moderation model, new embedding models, and new ways to manage API usage.
Purpose of the New GPT-4 Turbo Model
The updated GPT-4 Turbo model is intended to complete tasks such as code generation more thoroughly and accurately than its predecessor. It is also intended to reduce cases of 'laziness', where the model fails to finish a given task and stops halfway through.
GPT-4 Turbo's Recent Observations
Ever since GPT-4 Turbo launched, many users have reported, and are convinced, that the model has been refusing tasks or answering without the exhaustive depth that earlier GPT-4 versions provided.
Though OpenAI hasn't officially acknowledged this behaviour, several OpenAI employees have addressed it on social media.
One OpenAI employee, @willdepue, wrote in a post on X (formerly known as Twitter):
"It's confusing how Twitter thinks RLHF is like the 'wokeness algorithm' that only makes models stupid.
Rather than argue, here's some evidence that this isn't true:
1. Most open researchers run experiments/use the same RLHF'ed model checkpoints in the app every day, even though we all have access to base models!
2. Custom models, which allow customers to modify the post-training process, will most likely end up with post-trained models like our production models!
3. There is confusion about where regressions come from: system prompts, model variability, chatbot vs API differences, and different tool use.
I'm not saying we don't have problems with over-refusals (we do) or other weird things (working on fixing a recent laziness issue), but that's a product of the iterative process of serving and trying to support so many use cases at once.
When we significantly improve one part of the ChatGPT experience, you don't hear much about it (I mean, there's lots of AI hype, but per-model updates aren't noticed as often). When some parts of the model occasionally regress, those issues are far more noticeable.
We totally get that over-refusals and regressions suck. Trust me: I use the same ChatGPT every day as you (except we got ChatGPT for Enterprise, which is pretty sick, ngl, a lot faster and private GPTs).
If you see any issues or have specific examples, please annoy me or others on the team (@ me is easiest). Having concrete examples is so helpful to resolve these things as fast as possible."
Another employee also mentioned, "We've heard all your feedback about GPT-4 getting lazier! We haven't updated the model since Nov 11th, and this certainly isn't intentional. Model behaviour can be unpredictable, and we're looking into fixing it."
GPT-3.5 Turbo's Recent Observations
As mentioned earlier, OpenAI also announced an updated GPT-3.5 Turbo model, promising higher accuracy in responding in requested formats and a fix for a bug that caused a text-encoding problem in non-English-language function calls.
OpenAI is also decreasing the cost of GPT-3.5 Turbo for the third time in the past year, in order to help customers scale and to regain users' trust.
In addition, OpenAI is slashing prices for both the model's input and output. The input costs are dropping by 50% to $0.0005 per thousand tokens. Meanwhile, the output costs are getting a 25% cut, now at $0.0015 per thousand tokens.
The benefit is that these token prices will make operating third-party bots significantly less costly, though the GPT-3.5 model is generally more likely to confabulate than GPT-4 Turbo.
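To make the per-thousand-token rates above concrete, here is a minimal cost-estimate sketch. The helper name and the token counts in the example are hypothetical; only the two prices come from the announcement.

```python
# Illustrative cost estimate at the quoted GPT-3.5 Turbo prices:
# $0.0005 per 1K input tokens and $0.0015 per 1K output tokens.
INPUT_PRICE_PER_1K = 0.0005
OUTPUT_PRICE_PER_1K = 0.0015

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one API call at the quoted per-1K-token rates."""
    return (input_tokens / 1000) * INPUT_PRICE_PER_1K \
         + (output_tokens / 1000) * OUTPUT_PRICE_PER_1K

# A hypothetical chatbot turn: 1,500-token prompt, 500-token reply.
print(f"${request_cost(1500, 500):.6f}")  # $0.001500
```

At these rates, even a million such requests would cost on the order of $1,500, which is why the cuts matter for high-volume third-party bots.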
New embedding models with lower pricing
Apart from GPT-3.5 Turbo and GPT-4 Turbo, OpenAI introduced two new embedding models: a smaller, highly efficient text-embedding-3-small model and a more powerful text-embedding-3-large model.
For those who don't know, an embedding is a sequence of numbers that represents the concepts within content such as natural language or code. Embeddings make it easy for machine-learning models and other algorithms to understand the relationships between pieces of content and to perform tasks such as clustering or retrieval. They power applications like knowledge retrieval in ChatGPT, the Assistants API, and many retrieval-augmented generation (RAG) developer tools.
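The retrieval idea above can be sketched in a few lines: compare embedding vectors with cosine similarity and rank documents by how close they are to a query. The vectors here are tiny made-up examples; real models such as text-embedding-3-small return vectors with hundreds or thousands of dimensions, obtained from the API rather than written by hand.

```python
from math import sqrt

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional "embeddings" standing in for real model output.
docs = {
    "cats are small pets":      [0.9, 0.1, 0.0, 0.2],
    "dogs are loyal pets":      [0.8, 0.2, 0.1, 0.3],
    "how to file income tax":   [0.0, 0.9, 0.8, 0.1],
}
query = [0.85, 0.15, 0.05, 0.25]  # "popular household pets"

# Retrieval: rank documents by similarity to the query vector.
ranked = sorted(docs, key=lambda d: cosine_similarity(query, docs[d]),
                reverse=True)
print(ranked[0])  # the pet-related documents score highest
```

This nearest-neighbour ranking is the core step that RAG tools perform, usually over a vector database instead of a Python dict.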
Good News for Developers
Lastly, OpenAI is working on enhancements to its developer platform, including tools for managing API keys and a brand-new dashboard for monitoring API usage. Developers can now assign permissions to API keys directly from the API-keys page, helping reduce the mishandling of keys, which can be costly for developers.
Curious about leveraging the power of trending language models like GPT-4 for your organization's applications?
It's time to join hands with Mobiloitte's OpenAI ChatGPT solutions.
Mobiloitte is here to keep you ahead of the curve with its OpenAI ChatGPT development services. From model training, natural language processing, machine learning, user interface (UI) design, API development, feedback mechanisms, and content moderation to ethical AI considerations, Mobiloitte covers the entire development spectrum.
Ready to transform communication? Then contact Mobiloitte today!