There’s a now-iconic screenshot circulating online.
It shows a tweet Sam Altman posted on November 30, 2022, announcing OpenAI’s new tool: ChatGPT. In the replies, one user bluntly declared, “This is the worst product yet.”
Fast forward to today? That “worst product” now has over 100 million users. It’s being used to write code, apply for jobs, brainstorm startup ideas, and even settle arguments in group chats. If the internet had a command line, this would be it.
So what actually happened behind the scenes? Let’s break it down.
What Is ChatGPT?
ChatGPT is an artificial intelligence chatbot that can respond to questions, generate text, write stories, and hold surprisingly human-like conversations. It’s based on OpenAI’s powerful language models — a family of systems trained on massive amounts of text data to understand and generate language like a human would.
What made ChatGPT go viral wasn’t just the tech. It was the usability. For the first time, anyone could chat with a large language model (LLM) as easily as texting a friend.
The Launch That Wasn’t Meant To Go Big
ChatGPT didn’t start as a polished product — it was more of a public experiment.
On November 30, 2022, OpenAI released it as a “research preview.” The team wanted to study how people would use it (or misuse it). Internally, they expected a few thousand testers. Instead, ChatGPT hit 1 million users in just 5 days — faster than any major consumer app before it.
The interface was minimal. The outputs were sometimes awkward. But something had clicked: people realized they could now talk to a machine that actually responded with useful answers.
What’s Under the Hood?
The first version of ChatGPT was built on GPT-3.5, a fine-tuned version of OpenAI’s third-generation language model. But it wasn’t just raw power. What made it usable was a training technique called Reinforcement Learning from Human Feedback (RLHF).
Before this, OpenAI had already experimented with an earlier model called InstructGPT, which taught the AI to follow human instructions more effectively. That groundwork helped shape ChatGPT into a tool that could understand context, follow directions, and avoid going off the rails.
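The heart of RLHF is a reward model trained on human preferences: given two candidate replies, it should score the one humans preferred higher, using a pairwise loss of the form -log(sigmoid(r_chosen - r_rejected)). Here’s a minimal sketch in plain Python — the scores are toy values and the function names are illustrative, not OpenAI’s actual implementation:

```python
import math

def preference_loss(chosen_score: float, rejected_score: float) -> float:
    """Pairwise reward-model loss used in RLHF:
    -log(sigmoid(r_chosen - r_rejected)).
    The loss shrinks when the reward model rates the
    human-preferred reply above the rejected one."""
    margin = chosen_score - rejected_score
    return -math.log(1.0 / (1.0 + math.exp(-margin)))

# Toy scores a reward model might assign to two candidate replies.
well_ranked = preference_loss(chosen_score=2.0, rejected_score=-1.0)
badly_ranked = preference_loss(chosen_score=-1.0, rejected_score=2.0)

# A correct ranking yields a small loss; an inverted ranking a large one.
print(well_ranked < badly_ranked)  # True
```

Minimizing this loss across many human-labeled comparisons teaches the reward model what “helpful” looks like; the language model is then tuned (via reinforcement learning) to produce replies that reward model scores highly.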
And unlike earlier models that were only accessible through developer APIs, ChatGPT was open to the public. No coding required.
The Double-Edged Sword
As millions of users began stress-testing the chatbot, problems quickly surfaced.
People found ways to get it to generate harmful content, biased responses, or even give instructions for illegal activities. From writing malware to making offensive jokes, ChatGPT’s flaws became part of the internet’s testing ground.
OpenAI responded by tuning filters, adding more moderation, and building a faster feedback loop — a process that continues to this day. The goal? Make ChatGPT more helpful and less harmful, without turning it into a lifeless robot.
The Origin of OpenAI
OpenAI started in December 2015 as a nonprofit research lab. The founding team included big names like Sam Altman, Elon Musk, Greg Brockman, Ilya Sutskever, and John Schulman.
The mission? Make sure artificial general intelligence (AGI) benefits all of humanity — and keep it out of the wrong hands.
Elon Musk later stepped away from the company, and OpenAI restructured into a “capped-profit” model. That allowed them to take on major funding while limiting how much investors could profit — a rare hybrid in Silicon Valley.
Microsoft Enters the Chat
In 2019, Microsoft invested $1 billion into OpenAI. By 2023, that partnership had grown into a multi-billion-dollar deal.
Microsoft integrated OpenAI models into its own products — from Word and Excel to Bing and GitHub Copilot. Behind the scenes, OpenAI relies heavily on Microsoft’s Azure cloud to run the massive infrastructure needed to keep ChatGPT online.
It’s not just a partnership. It’s an alliance that could reshape the future of computing.
The Road to GPT-4 — One Giant Leap at a Time
Let’s rewind.
ChatGPT didn’t appear out of thin air. It’s the product of a multi-year evolution of GPT (Generative Pre-trained Transformer) models:
GPT-1 (2018): The original prototype — 117 million parameters. It proved that unsupervised learning on text could work.
GPT-2 (2019): 1.5 billion parameters. So good that OpenAI initially held it back, fearing it could be abused.
GPT-3 (2020): A staggering 175 billion parameters. Suddenly, AI could write code, poetry, blog posts — and sound *almost* human.
GPT-3.5 (2022): Fine-tuned for instruction following. This is the engine that powered the first public version of ChatGPT.
GPT-4 (2023): Even more advanced reasoning, fewer hallucinations, and multimodal capabilities (text + images). The full version is available to ChatGPT Plus users and through Microsoft tools.
The Bigger Picture
ChatGPT isn’t just another app. It’s a sign of what’s coming.
Large language models are already reshaping education, law, customer service, content creation, and software development. And ChatGPT — with its mix of utility, accessibility, and virality — has become the poster child for this shift.
Whether it ends up as humanity’s ultimate tool or just the first chapter in the AI revolution, one thing is clear: we’re not going back.
If this article made you think—bookmark Wiz Fact.
We’re just getting started.
Smart, rebellious ideas are what we do here. Soon, we’ll launch our newsletter too—so these insights can fly straight to your inbox. No fluff. Just facts worth knowing.