NIX Solutions: OpenAI Unveils GPT-4 Turbo

Today at its first developer conference, OpenAI unveiled GPT-4 Turbo, an improved version of its flagship large language model.

Improved Performance and Lower Costs

According to OpenAI, the new GPT-4 Turbo is both more capable and cheaper to use than GPT-4.


Two Versions for Enhanced Functionality

The GPT-4 Turbo language model will be offered in two versions: one that handles text only, and a second that understands the context of both text and images. The text-only model is available in preview via the API starting today. The company promised to make both versions of the neural network publicly available “in the coming weeks.”

Pricing Structure

The cost to use GPT-4 Turbo is $0.01 per 1,000 input tokens (about 750 words) and $0.03 per 1,000 output tokens. Input tokens are pieces of raw text. For example, the word “fantastic” is split into the tokens “fan,” “tas,” and “tic.” Output tokens, in turn, are the tokens that the model generates based on the input tokens. The price of GPT-4 Turbo for image processing will depend on the image size. For example, processing a 1080×1080 pixel image in GPT-4 Turbo would cost $0.00765.
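To make the per-token pricing concrete, here is a minimal sketch of a text-cost calculator using only the rates quoted above (image pricing is size-dependent and not modeled here):

```python
# Sketch: estimate GPT-4 Turbo text cost from the rates quoted above.
# Rates: $0.01 per 1,000 input tokens, $0.03 per 1,000 output tokens.

def gpt4_turbo_text_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the cost in USD for one request at the quoted rates."""
    input_rate = 0.01 / 1000   # USD per input token
    output_rate = 0.03 / 1000  # USD per output token
    return input_tokens * input_rate + output_tokens * output_rate

# A ~750-word prompt (~1,000 tokens) with a 500-token reply:
print(round(gpt4_turbo_text_cost(1000, 500), 4))  # 0.025
```

At these rates, a prompt of roughly 750 words answered with 500 tokens of output costs about 2.5 cents.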

Knowledge Base and Context Window

“We optimized performance so we’re able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4,” OpenAI said in a blog post. OpenAI also refreshed GPT-4 Turbo’s knowledge base, which the model draws on when answering queries. GPT-4 was trained on web data up to September 2021, while GPT-4 Turbo’s knowledge cutoff is April 2023. In other words, the neural network can give more accurate answers to queries about events up to April 2023.

Expanded Context Window

Like its predecessors, GPT-4 Turbo was trained on many examples from the Internet to predict the probability of words from patterns in the surrounding text, including its semantic context. The model has a context window of 128,000 tokens, four times larger than GPT-4’s. According to OpenAI, this was the largest context window of any commercially available AI model at the time of the announcement.
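A rough way to reason about the 128,000-token window is the ~750 words per 1,000 tokens rule of thumb quoted earlier. The sketch below uses that heuristic (not a real tokenizer) to estimate whether a document fits:

```python
# Sketch: estimate whether text fits in GPT-4 Turbo's 128,000-token
# context window, using the ~750 words per 1,000 tokens rule of thumb.
# This is a heuristic, not a real tokenizer.

CONTEXT_WINDOW = 128_000  # tokens

def estimated_tokens(text: str) -> int:
    """Approximate token count: ~1.33 tokens per word."""
    return round(len(text.split()) * 1000 / 750)

def fits_in_context(text: str) -> bool:
    return estimated_tokens(text) <= CONTEXT_WINDOW

print(fits_in_context("word " * 90_000))   # ~120,000 tokens -> True
print(fits_in_context("word " * 100_000))  # ~133,333 tokens -> False
```

By this estimate, the window holds roughly 96,000 words, on the order of a full-length novel in a single prompt.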

JSON Format Capability and Flexible Settings

The GPT-4 Turbo model can be instructed to return syntactically valid JSON, which is useful for web applications that exchange structured data. More generally, GPT-4 Turbo exposes more flexible settings that developers will find useful.
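JSON mode is enabled through the `response_format` parameter of the Chat Completions API. The sketch below only builds the request payload (executing it requires an API key); the model name `gpt-4-1106-preview` and the prompt text are taken as illustrative assumptions:

```python
# Sketch: a Chat Completions request payload with JSON mode enabled.
# response_format={"type": "json_object"} asks the model to emit valid JSON.
import json

payload = {
    "model": "gpt-4-1106-preview",  # GPT-4 Turbo preview model name
    "response_format": {"type": "json_object"},  # force valid JSON output
    "messages": [
        # JSON mode expects the word "JSON" to appear in the instructions.
        {"role": "system",
         "content": "Reply in JSON with keys 'city' and 'country'."},
        {"role": "user", "content": "Where is the Eiffel Tower?"},
    ],
}

print(json.dumps(payload, indent=2))
```

With this flag set, the API guarantees the message content parses as JSON, so the application can feed it straight into `json.loads` without guarding against prose wrappers.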

Integration with Other AI Capabilities

GPT-4 Turbo can also be integrated with DALL-E 3, text-to-speech, and visual capabilities, expanding the use of AI.

Copyright Protection Guarantee

OpenAI also announced that it will provide copyright protection guarantees for enterprise users through the Copyright Shield program. “We will now defend our customers and pay their costs if they face legal claims of copyright infringement,” the company said in a blog post.

Fine-Tuning and Rate Limit Increase

For GPT-4, the company launched an experimental fine-tuning access program, giving developers even more tools for customizing AI for specific tasks. The company has also doubled the tokens-per-minute rate limit for all paying GPT-4 customers.

OpenAI’s GPT-4 Turbo offers improved performance, enhanced capabilities, and cost-efficiency, concludes NIX Solutions. It brings valuable updates to the world of AI, promising better responses, extended context, and copyright protection for enterprise users. Developers can explore fine-tuning options, making it a versatile tool for various applications.