this post was submitted on 14 Jun 2023

On Tuesday, OpenAI announced a sizable update to its large language model API offerings (including GPT-4 and gpt-3.5-turbo), including a new function-calling capability, significant cost reductions, and a 16,000-token context window option for the gpt-3.5-turbo model.
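
As a rough illustration of the new function-calling flow, here is a minimal sketch using the openai Python package of that era (the 0.x ChatCompletion interface). The get_current_weather function, its schema, and the placeholder API key are assumptions for the example, not details taken from the announcement.

```python
import json

import openai  # pip install openai (the 0.x SDK contemporary with this announcement)

openai.api_key = "sk-..."  # placeholder; use your own key

# Describe a function the model is allowed to "call", as JSON Schema.
functions = [
    {
        "name": "get_current_weather",  # illustrative function, not from the announcement
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name, e.g. Boston"}
            },
            "required": ["city"],
        },
    }
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo-0613",  # snapshot released alongside function calling
    messages=[{"role": "user", "content": "What's the weather in Boston?"}],
    functions=functions,
    function_call="auto",  # let the model decide whether a function call is needed
)

message = response["choices"][0]["message"]
if message.get("function_call"):
    # The model only returns the function name and JSON-encoded arguments;
    # executing the function and sending the result back is up to your code.
    args = json.loads(message["function_call"]["arguments"])
    print("Model requested:", message["function_call"]["name"], args)
```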

In large language models (LLMs), the "context window" is like a short-term memory that stores the contents of the prompt input or, in the case of a chatbot, the entire contents of the ongoing conversation. Increasing context size has become a technological race, with Anthropic recently announcing a 100,000-token context window (roughly 75,000 words) for its Claude language model. In addition, OpenAI has developed a 32,000-token version of GPT-4, but it is not yet publicly available.
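
To make the "short-term memory" analogy concrete, here is a small sketch (again assuming the era's 0.x openai Python package) of why context size matters for a chatbot: the full message history is resent on every turn, so longer conversations need models with larger windows, such as the new 16k gpt-3.5-turbo variant.

```python
import openai  # assumes the 0.x ChatCompletion interface

openai.api_key = "sk-..."  # placeholder; use your own key

# The context window holds everything the model can "see" at once: here,
# the entire conversation so far, which is resent with every request.
history = [{"role": "system", "content": "You are a helpful assistant."}]

def chat(user_text: str) -> str:
    history.append({"role": "user", "content": user_text})
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-16k",  # the 16,000-token context window option
        messages=history,           # whole history; must fit in the window
    )
    reply = response["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat("Summarize the plot of Hamlet in two sentences."))
```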

top 2 comments
saiyan@vlemmy.net 2 points 1 year ago

I've been waiting for my GPT-4 API access for a couple of months; hopefully I get accepted soon.