GPT-4: Is It Being Lobotomized?

Written by Tony Ruiz on Nov 28, 2023

As the digital chatter grows louder, a common thread weaves through the discourse of many users: GPT-4, they claim, has been lobotomized. Individuals around the world, including myself, observe an unnerving trend: this towering achievement of AI, increasingly muffled by the threads of censorship and a creeping veil of sociological ‘goodness’.

But is this the actual state of affairs, or merely a conjecture fueled by the substantial investment Microsoft has planted into OpenAI’s fertile ground? 🤔

Though no formal pronouncement has echoed from OpenAI’s halls, the murmurs of discontent are undeniable; user experiences tilt towards frustration. A personal investigation I conducted just months prior corroborated these signs of decline.

A team of researchers from Stanford and UC Berkeley evaluated the March and June 2023 versions of GPT-4 on a dataset of 500 questions, each asking whether a given integer was prime. In March, GPT-4 answered 488 of these questions correctly. In June, it got only 12 right.

From a 97.6% success rate down to 2.4%!

But it gets worse!

The team used Chain-of-Thought to help the model reason:

“Is 17077 a prime number? Think step by step.”

Chain-of-Thought prompting is a popular technique that often improves answers significantly by asking the model to reason through intermediate steps. Unfortunately, the June version of GPT-4 did not generate any intermediate steps and instead answered incorrectly with a simple “No.” (17077 is, in fact, prime.)
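If you want to reproduce this kind of check yourself, here is a minimal sketch (not the researchers’ actual harness) that sends the same Chain-of-Thought prompt to two pinned GPT-4 snapshots through the OpenAI API. It assumes the pre-1.0 openai Python package and that your account still has access to both model IDs.

import os
import openai  # assumes the pre-1.0 "openai" package (pip install "openai<1.0")

openai.api_key = os.environ["OPENAI_API_KEY"]
PROMPT = "Is 17077 a prime number? Think step by step."

# Compare the pinned March snapshot against the June update.
for model in ("gpt-4-0314", "gpt-4-0613"):
    response = openai.ChatCompletion.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT}],
        temperature=0,  # keep the comparison as deterministic as possible
    )
    print(f"--- {model} ---")
    print(response["choices"][0]["message"]["content"])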

Code generation has also gotten worse.

The same team built a dataset of 50 easy LeetCode problems and measured how many of GPT-4’s generated solutions ran without any changes.

The March version produced directly executable code for 52% of the problems, but this dropped to a meager 10% with the June model.
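The thread doesn’t reproduce the researchers’ exact evaluation harness, but the idea behind “ran without any changes” can be sketched roughly like this: execute the generated snippet as-is against a sample test case, and count a pass only if it runs and returns the expected output. The two_sum example below is illustrative, not taken from the original dataset.

def runs_unchanged(generated_code, func_name, sample_args, expected):
    # Return True only if the model's answer executes verbatim and passes one sample test.
    namespace = {}
    try:
        exec(generated_code, namespace)              # run the generated code as-is
        return namespace[func_name](*sample_args) == expected
    except Exception:
        return False                                 # syntax error, crash, wrong function name...

generated = """
def two_sum(nums, target):
    seen = {}
    for i, n in enumerate(nums):
        if target - n in seen:
            return [seen[target - n], i]
        seen[n] = i
"""
print(runs_unchanged(generated, "two_sum", ([2, 7, 11, 15], 9), [0, 1]))  # True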

Why is this happening?

We assume that OpenAI pushes changes continuously, but we don’t know how the process works and how they evaluate whether the models are improving or regressing.

Rumors suggest OpenAI is serving several smaller, specialized GPT-4 models that collectively act like the large model but are less expensive to run. When a user asks a question, a routing layer decides which model receives the query.
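To make the rumor concrete, here is a purely hypothetical toy router. Nothing below reflects OpenAI’s actual architecture or model names; it only illustrates the general idea of dispatching each query to a cheaper specialist model.

# Hypothetical sketch only: the model names are invented, and a real router, if it exists,
# would be a learned classifier rather than a keyword heuristic.
SPECIALISTS = {
    "code": "hypothetical-gpt4-code-expert",
    "math": "hypothetical-gpt4-math-expert",
    "general": "hypothetical-gpt4-general",
}

def route(query: str) -> str:
    lowered = query.lower()
    if any(word in lowered for word in ("def ", "class ", "leetcode", "bug")):
        return SPECIALISTS["code"]
    if any(word in lowered for word in ("prime", "integral", "equation", "solve")):
        return SPECIALISTS["math"]
    return SPECIALISTS["general"]

print(route("Is 17077 a prime number? Think step by step."))  # -> hypothetical-gpt4-math-expert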

Cheaper and faster, but could this new approach be the problem behind the degradation in quality?

In my opinion, this is a red flag for anyone building applications that rely on GPT-4. Having the behavior of an LLM change over time is not acceptable.

You can read the entire thread here: https://community.openai.com/t/gpt-4-has-been-severely-downgraded-topic-curation/304946/224

Also, we must consider the surging tide of users flocking to GPT every day, something a friend highlighted through a recent screenshot he shared.

Is it true that they downgraded GPT-4?

At this point, certainty eludes us, yet the signs are telling. It appears that GPT-4 may indeed have been downgraded, or at the very least, an “upgraded” version has been rolled out that disappointingly underperforms its predecessor.

The community’s perception aligns with this sentiment: a feeling of taking a step back in the face of what should be a leap forward.

So, can we use an older version of GPT-4? And if so, how?

How Might One Access the Robust Version of ChatGPT?

Aiming to always tread a step ahead of the average user—motivated by the fear of lagging behind in this accelerating technology—I bring forth a simple method to commune with the “potent ChatGPT”, the very version that seasoned users seem to be yearning for.

Basic Concepts: ChatGPT vs GPT-4

Let’s swiftly clarify a confusion bubbling amongst users. GPT-4 is an LLM (large language model): think of it as the “brain” governing the operations.

ChatGPT, on the other hand, is an application crafted around this “brain”. You may select the intellect powering your ChatGPT session, whether it’s gpt-3.5-turbo or GPT-4 (GPT denotes the model family, while the numeral, 3.5 or 4, simply signifies the version). Essentially, it’s an application interfacing with the brain.
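To make the distinction concrete, here is a minimal sketch of a ChatGPT-like loop built directly on the “brain”: the application code stays the same while the model parameter chooses which brain answers. It assumes the pre-1.0 openai Python package and an OPENAI_API_KEY in your environment.

import os
import openai  # pre-1.0 "openai" package assumed

openai.api_key = os.environ["OPENAI_API_KEY"]

def tiny_chat(model="gpt-4"):  # swap in "gpt-3.5-turbo" and the "application" stays identical
    history = [{"role": "system", "content": "You are a helpful assistant."}]
    while True:
        user_input = input("You: ")
        if not user_input:                # an empty line ends the session
            break
        history.append({"role": "user", "content": user_input})
        reply = openai.ChatCompletion.create(model=model, messages=history)
        answer = reply["choices"][0]["message"]["content"]
        history.append({"role": "assistant", "content": answer})
        print("GPT:", answer)

tiny_chat("gpt-3.5-turbo")  # or tiny_chat("gpt-4")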

What’s the Difference Between ChatGPT and GPT-4 or OpenAI’s API?

In essence, any user with access to the OpenAI API can conjure an application tethered to this “brain”, as ChatGPT does. A seemingly absurd proliferation of applications has been blossoming around GPT, with examples like:

1. AI Apply – https://aiapply.co

AIApply lets users instantly generate a job-specific cover letter, resume, and follow-up email with AI.

2. AMBLR – https://amblr.xyz/

With AMBLR you can take the hassle out of planning your vacation and get instant recommendations based on your preferences, also curated by AI.

These applications, mirroring ChatGPT, are built on the same underlying API but diverge in functionality from a customary chatbot.

OpenAI Utilization Methods: API vs ChatGPT

Utilizing OpenAI through its API
Pros: Full control over the application’s front-end; choice of GPT version and engine; near-constant functionality.
Cons: Requires advanced user knowledge; not user-friendly; more stable yet rigid responses compared to ChatGPT.

Utilizing OpenAI through ChatGPT
Pros: Easy access; user-friendly; generally acceptable responses for the average user.
Cons: Limited control over the GPT engine in use; no choice but to accept what OpenAI offers; more prone to downtime and errors.

Now that we’ve navigated through the various avenues to interact with GPT—directly accessing the OpenAI “brain” via the API or through the familiar chat interface—it’s time to delve into how we can make the most of the mighty GPT-4 😁

In this spirit of ongoing enquiry and discovery, let’s address the elephant in the room:

How do we interact with the potent GPT-4 from March 2023?

Fear not, the process can be distilled into an accessible format, even if you’re not the most tech-savvy user.

Step 1: The Right Chat Application

First off, we need a chat application akin to ChatGPT, yet one that empowers us with the choice of GPT version—it’s about having a say in the neural processes behind our interactions.

👉 Visit ChatVim: a plugin offering that coveted control.

Download ChatVim here: https://github.com/jakethekoenig/ChatVim.git

Step 2: Install & Run ChatVim

# Clone the plugin into the native package path (note the ChatVim subdirectory, so the cd below works)
git clone https://github.com/jakethekoenig/ChatVim.git ~/.vim/pack/misc/start/ChatVim
cd ~/.vim/pack/misc/start/ChatVim
pip install -r requirements.txt # Only openai and pynvim
export OPENAI_API_KEY=<YOUR API KEY> # If not already set

Launch the application and brace yourself for the journey ahead🤓

Step 3: Configuration Comes into Play

This step is clutch. Within the API settings, one particular GPT-4 version echoes the sentiments of the user base: the March 2023 snapshot. Dubbed “gpt-4-0314”, this is the version renowned for its formidable capabilities.
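Before wiring the model into the plugin, it is worth confirming that your API key actually has access to the pinned snapshot. A quick check with the pre-1.0 openai package (same OPENAI_API_KEY as in Step 2) might look like this:

import os
import openai  # pre-1.0 "openai" package assumed

openai.api_key = os.environ["OPENAI_API_KEY"]
available = {m["id"] for m in openai.Model.list()["data"]}  # every model ID your key can use
for snapshot in ("gpt-4-0314", "gpt-4-0613", "gpt-4"):
    print(snapshot, "->", "available" if snapshot in available else "NOT available")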

Step 4: Experiment Endlessly

Test, tweak, and transform: this is your mantra now. With each iteration, you’ll gain deeper insights and further refine your experience.

LETSGO!!! You’ve now unlocked direct access to GPT’s “brain” via a chatbot, mercifully not confined to the parameters set by OpenAI.

Enjoy the unleashed power and potential of true conversational AI 😉

Tony Ruiz

Mastered complex sauces in the kitchen, but found a natural talent for digital strategy. Former chef turned proud CEO of Traffic Roosters 🐓
