Why ChatGPT Users Still Have an Upper Hand in the Workforce (and How to Take Advantage)
By now, people who know what ChatGPT is but don’t use it themselves are probably sick and tired of hearing about it from those who do. But even with its meteoric rise, the vast majority of people out there still have little to no idea that it even exists.
Intro: the adoption curve
With every new technology, we see the predictable rise and fall of what’s often called the adoption curve: a graph showing how quickly a new technology gets picked up by the population.
At the beginning you have the early adopters: the slim minority who pick up on and start using the new technology first. If you know me, you might agree that this is where I am on the curve. And if you’re reading this, chances are you’re closer to this side of the graph than you might think. In the middle, at the top of the curve, you have the majority of people, the mainstream, as the trend is widely picked up by “everyone.” And then at the tail end you have the “laggards”: the people who refused to pick up on the trend while it was mainstream but are now filing in for one reason or another, sometimes because they have little to no choice.
We’re seeing this play out right now with several technologies at various stages of the curve, but you know which one I want to talk about.
Understanding the usefulness of ChatGPT is more important than understanding how it works.
A lot of people don’t seem to fully understand how to use ChatGPT, or appreciate the value it can bring. And I get that. It’s a new technology and it takes a bit of a mindset shift, especially given the cultural fear of “AI.” But that fear has pulled attention away from how to use it and fixed it on the fact that barely anyone knows exactly how these systems work, which only breeds more fear and misunderstanding.
You can rest assured that the way people think of “AI” as being “exactly like a human brain, but a machine instead” is not even close to what this is. There’s actually a confusing mix of “ChatGPT sucks because it isn’t ‘true AI’” and “ChatGPT is scary because it’s literally sentient, just like a human” (which it isn’t).
Relying on things we don’t understand is not new.
Not being able to understand how a technology works does not mean you can’t understand how to make use of it. In fact, that’s how most people relate to most technology. Most people aren’t car mechanics or computer scientists, but nearly all of us drive and use computers. Hell, only some of us are medical professionals, but everyone has a body! Most of us don’t know exactly how our bodies work, and we manage to survive.
It’s like a calculator.
Trying to name all the things you can do with ChatGPT is like trying to list all the uses for a calculator. If you think that sounds easy, I invite you to participate in the first ever Shloomth thought experiment.
Creative exercise
Stop reading this. Open a note on your phone or grab a pen and a piece of paper, whichever you’re more comfortable with, set a timer for two minutes (or two and a half if that doesn’t sound like enough time), and write down all the uses you can think of for a calculator.
If you’re like me, your mind may have gone to things like calculating the total square footage of a building, building a rocket, counting the stars, grocery budgeting, budgeting in general, and so on.
This is actually an established creative exercise. Normally, participants are told they’ll be given an object and a time limit, and they get their note-taking instrument ready before the object is revealed. So I actually gave you a head start by telling you about the exercise after you already knew which object it would be about.
After two minutes, the number of uses you were able to think of for the given object is treated as a rough measure of a certain kind of creative intelligence. A MacGyver skill, if you like.
So you might be wondering, what happens if we engage ChatGPT in this creative exercise? This is actually a good example of the type of task it’s really good at. It came up with fifty uses for a calculator in twelve seconds, and it was happy to continue. Wanna see the list?
And then, feeling like a real smartass, I asked it to make a similar list of uses for itself, avoiding any math-related ones, to avoid redundancy. Here’s that list.
Follow-up commands
You might notice that I sent a second command in that one, where I just described what I wanted it to do. I didn’t bother being polite to it because it’s not a person, but I was able to describe in natural language what task I wanted performed, and it was able to follow the instructions. In this case it was modifying a previous instruction (changing something I had previously told it to do), which it understood because it can see the conversation leading up to that point.
ChatGPT isn’t like Google, where you type in one thing and hope for the best. It really is more like having a conversation. Not in a lofty metaphorical sense, but literally: you just talk to it. Sometimes it doesn’t understand the first time, but if you re-explain and rephrase, it’s more likely to get it, given the added context of its failed attempt. It has an example of what not to do and a bit of instruction about what it did wrong, and it can use that to perform better.
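If you’re curious what that back-and-forth looks like from a programmer’s side, here’s a minimal sketch using OpenAI’s Python client instead of the chat window. The prompts are just stand-ins for the calculator example above, and it assumes you have the openai package installed and an API key set up; the point is simply that the follow-up instruction travels along with the earlier turns.

from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY is set in your environment

# The running conversation. Every turn gets appended, so the model can
# "see" the earlier instruction when it reads the follow-up.
messages = [{"role": "user", "content": "List 50 uses for a calculator."}]

first = client.chat.completions.create(model="gpt-4o", messages=messages)
messages.append({"role": "assistant", "content": first.choices[0].message.content})

# The follow-up refers back to the previous instruction in plain language.
messages.append({"role": "user",
                 "content": "Now make a similar list of uses for yourself, skipping anything math-related."})

second = client.chat.completions.create(model="gpt-4o", messages=messages)
print(second.choices[0].message.content)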
Siri ruined everyone’s concept of what a “digital assistant” can be. Siri is programmed to listen for a specific set of words in a preset library of commands. And improving Siri up until now has meant finding individual little ways to make the system seem like it understands you better, but really it’s just listening for a big list of key words like “minutes” or “umbrella.”
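To make the contrast concrete, here’s a caricature of that keyword approach. This is a toy sketch, not Siri’s actual code: the “assistant” only reacts if your sentence happens to contain one of its preset keywords, and anything off-script falls straight through.

COMMANDS = {
    "timer": "Starting a timer.",
    "umbrella": "Checking the forecast for rain.",
    "minutes": "Setting a reminder a few minutes from now.",
}

def keyword_assistant(utterance: str) -> str:
    # Scan the sentence for any known keyword and fire the canned response.
    for keyword, response in COMMANDS.items():
        if keyword in utterance.lower():
            return response
    return "Sorry, I didn't get that."

print(keyword_assistant("Will I need an umbrella tomorrow?"))   # matches "umbrella"
print(keyword_assistant("Should I bring a raincoat tomorrow?")) # no keyword, no luck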
A little bit about how it works
The way GPT works revolves around predicting which word will come next, over and over, based on every previous word up to that point. It’s like a massively scaled-up version of the keyboard prediction on your phone, but optimized for entire conversations instead of just one word after another in a sentence. The big difference is the “context window,” or how much information the system can actively hold and work with.
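Here’s a deliberately tiny sketch of that loop, just to make its shape visible. The “model” here is a lookup table of which word tends to follow which, where the real thing is a neural network scoring every possible next word, and the context window is eight words instead of thousands.

CONTEXT_WINDOW = 8  # real models hold thousands of tokens, not eight

# A toy stand-in for the model: "which word tends to follow this one?"
NEXT_WORD = {"the": "cat", "cat": "sat", "sat": "on", "on": "the"}

def predict_next(context: list[str]) -> str:
    return NEXT_WORD.get(context[-1], "the")

def generate(prompt: str, new_words: int = 10) -> str:
    words = prompt.split()
    for _ in range(new_words):
        context = words[-CONTEXT_WINDOW:]    # only the most recent words fit
        words.append(predict_next(context))  # predict one word, then repeat
    return " ".join(words)

print(generate("the"))  # "the cat sat on the cat sat on the cat sat"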
GPT’s context window is large enough that, if you start a new conversation for each new topic, it can keep the entire conversation in view at any given point. This means it can account for details that were discussed earlier. Which in turn means that, if you ask it to do something and it does it wrong, you can explain what it did wrong; it can look back on its previous output, understand its mistake, and do it again, better this time.
Think about how you would explain a mistake to a real person. Clarify, rephrase, give follow-up instructions. I’ve had moments where it misunderstood my instructions, especially with 4o, but with a bit of persistence and clear explanations, it got it right. It’s capable of so much more than people often give it credit for. It just needs guidance.
The limitations make it better for workers than bosses.
There is a widespread fear of people losing their jobs to this technology, and that fear isn’t without precedent. But knowing how to use a new technology always puts you at an advantage. For now, the technology isn’t replacing workers; workers who use it are going to replace workers who don’t. So many people would benefit from figuring out how to make use of this new tech. And as always, there is a balance to be struck. We don’t want to become overly reliant on it for things we don’t really need it for. But there is no denying that it can bring value to many tasks.
Also, it’s free now
All the functionality that used to be exclusive to paying users is now rolling out to free users, which opens up incredible possibilities for everyone. Paying users still get higher usage caps, in case you were wondering why they would offer the same functionality to free users. It’s the same functionality; you just get more of it if you pay.
And if, again, you’re wondering how and why that works, it may help to know that OpenAI described ChatGPT as a “research product” when it came out. It’s part of a research project, but it’s also a product. By using it, you’re helping them with their research. They’re studying how these kinds of systems can be useful and how to identify and deal with any potential dangers, and right now, having more people interacting with ChatGPT helps them test it from every possible angle. It also costs a lot of money to run, so they offer a paid tier for people who want to use it heavily.
Wrap up
I’m not saying you should try to figure out how to make ChatGPT do as much of your job for you as possible. I’m saying you should figure out which parts of your job are time-consuming or effort-intensive for you and could be done by a GPT. And to do that, it helps to have a good idea of the kinds of things it’s capable of. Trying to list everything it can do is a futile task, but just saying “language-based tasks” doesn’t give a clear idea either.
Let me know if this did or didn’t make sense, and whether it helped you understand something you didn’t already.