How AI Thinks (Without Thinking)
Your Netflix queue knows you watch "Moody Sci-Fi Dramas with Female Protagonists" at 2 AM. Your phone predicts you're about to type "🍕" before you finish your thought. Welcome to life with AI – your digital sidekick that somehow knows your habits better than your best friends do.
AI is everywhere, quietly making your life easier (or occasionally weirder). Yet most of us don't really understand what's happening behind the scenes. Is it magic? Should we be impressed or concerned? Let's pull back the curtain and see what's really going on.
Types of AI in Daily Use
When most people think of AI, they picture ChatGPT or maybe a robot vacuum with a mind of its own (and a vendetta against that one dirty corner). But AI powers so much more, making split-second decisions that shape your daily routine in ways you might not even notice.
Voice Assistants: Ever notice how Alexa understands you better than she did last year? That's AI learning from millions of conversations. Sure, she might still confuse "play Despacito" with "call Pizza Hut," but she's mastered everything from controlling your smart home to remembering that you like your morning news with coffee brewing in the background.
Recommendation Systems: Whether it’s Netflix predicting your mood or Amazon suggesting things you didn’t know you needed (do you really want a banana slicer?), AI drives the “you might like this” engines of the internet.
Navigation Tools: Google and Apple Maps know you’re about to hit traffic before you do—and might even save you from road rage with a detour.
Smartphone Features: From predictive text to spam filters and facial recognition, your phone is practically a Swiss Army knife of AI.
And then there are the lesser-known yet incredibly practical moments. Take diagnosing car trouble, for example. I know just about as much about cars as I do about particle physics (read: not much), but when my check engine light came on, AI helped me identify the issue and even suggested the fix. Not only did it save me money, but it also spared me some good-natured teasing from friends who never let me forget I once called the muffler a “smoky thing.”
Or consider troubleshooting WiFi issues. Instead of spending hours hunting for answers in obscure tech forums, AI tools zeroed in on the problem—and solved it faster than I could Google “Why is my internet so slow?”
These examples aren’t just handy; they’re a reminder that AI is a practical partner in our lives. And while it might seem eerily intuitive at times, there’s no wizardry here—just a lot of clever programming.
Understanding Large Language Models (LLMs)
What Are LLMs (in Plain Language)?
If you’ve ever wondered how AI like ChatGPT can write essays, crack jokes, or even help you pick the perfect emoji, the answer lies in large language models (LLMs). At their core, LLMs are like super-charged versions of the autocomplete feature on your phone.
You can interact with LLMs directly through chat interfaces like ChatGPT or Claude, and indirectly through website chatbots, help desks, and even phone systems. Yet relatively few people have a clear sense of what an LLM actually is or what it can do.
Imagine texting a friend: “I’ll meet you at the…” and your phone suggests “park.” Now, take that same concept and multiply it by billions of sentences. LLMs predict the next word—or several words—based on patterns they’ve learned from massive amounts of text. It’s like having a really helpful, really wordy parrot.
How Do LLMs Work?
Here's what's happening under the hood, minus all the tech-speak. (If you're curious about the deeper technical details, check out 3Blue1Brown's excellent neural networks course on YouTube.) In short, it comes down to two things:
Training: Think of it like teaching a parrot—except instead of repeating a few phrases, the LLM learns from billions of examples: books, articles, tweets, you name it. It picks up patterns in how words relate to each other.
Prediction Engine: When you chat with an LLM, it's like playing a super-advanced word association game. For every word you type, it calculates the most likely words that should come next, building responses one prediction at a time.
And here's where it gets interesting: Because the LLM is probabilistic, it doesn't give the exact same response every time. If you ask ChatGPT the same question in two separate chats, you'll notice slight variations in its answers. Why? The AI weighs different probabilities for the next words or phrases each time. Think of each response as a dice roll—while the same general ideas come up, the way they're expressed depends on which probabilities win out in the moment.
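If you're curious what that dice roll looks like, here's a tiny Python sketch with completely made-up numbers. A real model juggles tens of thousands of possible words and far fancier math, but the basic move is the same: weigh the options, then pick one.

```python
import random

# Made-up odds for the next word after "I'll meet you at the".
# A real model scores tens of thousands of possible tokens this way.
next_word_odds = {
    "park": 0.40,
    "coffee shop": 0.25,
    "gym": 0.20,
    "airport": 0.10,
    "moon": 0.05,
}

def pick_next_word(odds):
    # Roll the dice: likelier words win more often, but not every time.
    words = list(odds)
    weights = list(odds.values())
    return random.choices(words, weights=weights, k=1)[0]

# Run this a few times and the ending changes between runs,
# which is the same reason two chats can word their answers differently.
for _ in range(3):
    print("I'll meet you at the", pick_next_word(next_word_odds))
```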
The most amazing part of LLMs may not be how they work, though, but how often they're right. The underlying theory and engineering have been around for a while; what's changed is that recent advances in model design and training have produced big leaps in how helpful and accurate their responses can be.
It’s impressive, sure, but it’s not “thinking.” There’s no eureka moment, no pondering over the meaning of life—just code crunching probabilities to give you the most likely, coherent response.
Why Talking to an LLM Isn’t Like Chatting with a Human
Think of an LLM like someone with the world’s worst short-term memory. No matter how deep your previous conversation was, it won’t remember it unless you save the chat or feed it a ton of context again.
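Chat apps work around this by quietly re-sending the earlier conversation with every new message. Here's a rough Python sketch of that idea; the fake_model_reply function is a made-up stand-in for a real model, not any specific service's API.

```python
conversation = []  # the app's memory, not the model's

def fake_model_reply(prompt):
    # Placeholder so the sketch runs; a real model would generate text here.
    return f"(reply based on {len(prompt)} characters of context)"

def send_message(user_text):
    # Everything said so far gets bundled into the prompt every single time.
    conversation.append("You: " + user_text)
    prompt = "\n".join(conversation)
    reply = fake_model_reply(prompt)
    conversation.append("AI: " + reply)
    return reply

print(send_message("My dog's name is Biscuit."))
print(send_message("What's my dog's name?"))  # only "remembered" because the history was re-sent
```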
And while it might seem like AI is in a mood (say, overly cheerful or a bit snarky), it’s not. It’s just good at mimicking the tone of the conversation. If ChatGPT tells you you’re a genius, enjoy the compliment, but don’t expect it to nominate you for the Nobel Prize.
Common Myths About AI and LLMs
Myth 1: AI Is Sentient
Some people believe AI has feelings or self-awareness because it writes so convincingly. But here’s the truth: AI is about as sentient as a toaster. It generates patterns; it doesn’t “think” or “feel.”
Some of you reading this may go straight to ChatGPT and ask it how it's feeling today. And you may get a response similar to what I just got:
In this interaction, the GPT isn't actually feeling anything - it's responding to my prompt the same way your phone suggests "you too!" when someone texts "have a great day!" Deep in its circuits, it simply generated what it calculated to be the most likely response, shaped by training that rewards sounding friendly and helpful. You'll notice at the end it even seems to have an insight: "Ah, I see -". But it doesn't actually "see" or "understand" anything - it's just following the patterns of how humans typically express understanding in conversation. It's impressive mimicry, but that's all it is.
Think of it like having a parrot. When a parrot says, “I love you,” it’s not planning your anniversary dinner. It’s just mimicking what it’s learned. As the next myth shows, AI’s eloquence and familiar tone can feel uncanny because of how naturally it seems to respond to us, but behind the curtain it’s just math. OK, really complex math, but math nonetheless.
Myth 2: AI Likes (or Dislikes) Me
It’s easy to take emotional language in AI responses personally. Did ChatGPT just tell you how insightful you are? Great! Just understand it’s not getting butterflies about your conversation.
AI uses polite, engaging phrasing because it’s programmed to sound human. Doing this makes it more likely we will continue to interact with it. So, if it throws a compliment your way, smile and move on. It’s just words—but hey, at least they’re nice words.
Myth 3: AI Always Gets It Right
It’s easy to assume that AI is always correct—it certainly sounds confident enough. But here’s the thing: AI isn’t magic, and even when it can search the internet, it has its limits.
Large language models, like ChatGPT or Google’s Gemini, primarily generate responses based on patterns they’ve learned during training. While some versions can access the internet to supplement their knowledge, their answers are only as good as how they interpret and summarize search results. This interpretation is shaped by their training, which means they don’t “understand” the information they retrieve in the same way a person would.
For example, if you asked ChatGPT about the capital of Canada, it might confidently respond “Ottawa” (good job, AI!)—but in other cases, it might confidently serve up an incorrect answer because it misinterpreted the question or over-relied on incomplete or outdated training data.
Think of it like a friend who speaks with absolute conviction at trivia night but doesn’t always get the facts straight. They mean well, but their confidence doesn’t guarantee accuracy. AI is similar—it excels at sounding sure of itself, but it’s up to us to double-check its answers and understand its limitations.
Myth 4: AI Can Replace Human Creativity
Some worry that AI in its current form will make writers, artists, and creators obsolete. There’s plenty of debate about this, but I’m skeptical. LLMs are fantastic at imitating styles, but they don’t have original ideas or personal experiences to draw from. As mentioned above, they’re limited by their training, and retraining them remains difficult and expensive.
Sure, AI can write a haiku, but it’s not sitting under a cherry tree reflecting on the nature of reality. Creativity is deeply human, and while AI can help spark ideas, it can’t replace the human touch.
Conclusion
At the end of the day, AI is already embedded into many areas of our lives and it can be leveraged to make life a little easier (and sometimes a little weirder). Whether it’s helping you navigate traffic, diagnosing your car troubles, or predicting your next emoji, AI shines brightest when it works quietly in the background, freeing you up to focus on what matters most.
The more you understand AI, the less intimidating it feels—and the more you can appreciate it as an innovative helper, not a robot overlord. Yes, it might occasionally misfire or sound a little too confident in its wrong answers, but when you know its strengths and limitations, it’s a lot easier to use it effectively.
Here at the Ay-I Guy, we're on a mission to demystify AI and show how it can improve everyday life. This article is just one piece of the puzzle—we'll keep breaking down the complex world of AI into bite-sized, practical insights you can actually use.
Have you found creative ways to use AI in your daily life? Or maybe you're wondering how AI could help with a specific challenge? Share your stories and questions in the comments below. After all, the best way to navigate the world of AI is together—and sometimes the most helpful tips come from others who've been exactly where you are.