
Study claims ChatGPT is losing capability, but some experts aren’t convinced

(credit: Benj Edwards / Getty Images)

On Tuesday, researchers from Stanford University and the University of California, Berkeley published a research paper that purports to show changes in GPT-4’s outputs over time. The paper fuels a common-but-unproven belief that the AI language model has grown worse at coding and compositional tasks over the past few months. Some experts aren’t convinced by the results, but they say that the lack of certainty points to a larger problem with how OpenAI handles its model releases.

In a study titled “How Is ChatGPT’s Behavior Changing over Time?” published on arXiv, Lingjiao Chen, Matei Zaharia, and James Zou cast doubt on the consistent performance of OpenAI’s large language models (LLMs), specifically GPT-3.5 and GPT-4. Using API access, they tested the March and June 2023 versions of these models on tasks like math problem-solving, answering sensitive questions, code generation, and visual reasoning. Most notably, GPT-4’s ability to identify prime numbers reportedly plunged from 97.6 percent accuracy in March to just 2.4 percent in June. Strangely, GPT-3.5 showed improved performance in the same period.
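
This kind of comparison is possible because OpenAI exposes dated model snapshots through its API. The sketch below shows how one might score two GPT-4 snapshots on a prime-identification task; the prompt wording, test numbers, and yes/no scoring are illustrative assumptions, not the paper’s exact methodology. It assumes the pre-1.0 openai Python package and an OPENAI_API_KEY environment variable.

```python
# Illustrative sketch: ask dated GPT-4 snapshots whether a number is prime
# and measure accuracy against a ground-truth primality check.
# Prompt and scoring are assumptions, not the paper's actual setup.
import os

import openai
from sympy import isprime

openai.api_key = os.environ["OPENAI_API_KEY"]

# Dated API snapshots corresponding to the March and June 2023 versions.
SNAPSHOTS = ["gpt-4-0314", "gpt-4-0613"]


def ask_is_prime(model: str, n: int) -> bool:
    """Ask the model whether n is prime; expect a Yes/No answer."""
    response = openai.ChatCompletion.create(
        model=model,
        temperature=0,  # reduce sampling variance between runs
        messages=[{
            "role": "user",
            "content": f"Is {n} a prime number? Answer only Yes or No.",
        }],
    )
    answer = response["choices"][0]["message"]["content"].strip().lower()
    return answer.startswith("yes")


def accuracy(model: str, numbers: list[int]) -> float:
    """Fraction of numbers the model classifies correctly."""
    correct = sum(ask_is_prime(model, n) == isprime(n) for n in numbers)
    return correct / len(numbers)


if __name__ == "__main__":
    test_numbers = [7919, 7920, 104729, 104730]  # small illustrative set
    for model in SNAPSHOTS:
        print(model, f"{accuracy(model, test_numbers):.1%}")
```

Pinning requests to a dated snapshot rather than the floating “gpt-4” alias is what lets researchers attribute behavioral differences to a specific model version rather than to silent server-side updates.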

The study comes on the heels of frequent complaints that GPT-4 has subjectively declined in performance over the past few months. Popular theories about why include OpenAI “distilling” models to reduce their computational overhead in a quest to speed up output and save GPU resources, fine-tuning (additional training) to reduce harmful outputs in ways that may have unintended side effects, and a smattering of unsupported conspiracy theories, such as OpenAI reducing GPT-4’s coding capabilities so more people will pay for GitHub Copilot.
