Thanks @treehugg3 for the thoughtful feedback! 🙏 My main aim here was to break things down in simple terms for people who are new to AI or just starting to learn about reasoning models. I completely understand your concern, and I'll make sure to include more detailed explanations, examples, sources, and technical depth (like backtracking and novel reasoning paths) in upcoming blogs. Really appreciate your input; it helps me improve!
Rakshit Aralimatti (RakshitAralimatti)
AI & ML interests: Nvidia
Recent Activity
Replied to their post · 6 days ago
When you ask ChatGPT, Claude, or Gemini a really tough question,
you might notice that little "thinking..." moment before it answers.
But what does it actually mean when an LLM is “thinking”?
Imagine a chess player pausing before their next move, not because they don't know how to play, but because they're running through possibilities, weighing options, and choosing the best one.
LLMs do something similar… except they’re not really thinking like us.
Here's the surprising part:
You might think these reasoning skills come from futuristic architectures or alien neural networks.
In reality, most reasoning LLMs still use the same decoder-only transformer architecture as other models (see the sketch below this post).
The real magic?
It’s in how they’re trained and what data they learn from.
Can AI actually think, or is it just insanely good at faking it?
I broke it down in a simple, 4-minute Medium read.
Bet you’ll walk away with at least one “aha!” moment. 🚀
Read here - https://lnkd.in/edZ8Ceyg
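
To make the "same architecture" claim concrete, here is a minimal sketch (my addition, not from the post or the linked article) using the Hugging Face transformers library. The two checkpoint names are illustrative picks, one ordinary instruct model and one reasoning-trained distill; any similar pair would work.

```python
# Minimal sketch: compare the declared architecture of an ordinary
# instruct model and a reasoning-trained model. Checkpoint names are
# illustrative choices, not ones the author endorses.
from transformers import AutoConfig

checkpoints = [
    "Qwen/Qwen2.5-7B-Instruct",                 # standard chat/instruct model
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-7B",  # reasoning-trained distill
]

for name in checkpoints:
    cfg = AutoConfig.from_pretrained(name)  # fetches config.json only, no weights
    print(f"{name}: {cfg.architectures}")

# Expected: both print ['Qwen2ForCausalLM'], i.e. the same decoder-only,
# causal transformer class. The reasoning behavior comes from the training
# recipe and data, not from a different architecture.
```

Both configs report the same causal-LM transformer class, which is exactly the point of the post: the "thinking" comes from training, not from exotic architecture.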
Commented on their article · 7 days ago
What’s MXFP4? The 4-Bit Secret Powering OpenAI’s GPT‑OSS Models on Modest Hardware