Nice overview! It might have been worth mentioning that some AIs are less restricted in how up to date their knowledge is, like Perplexity or Gemini, which search the web and synthesise what they find to answer your questions.
We have a tendency to say that, because they're just predicting the next word, they "don't think like us", but I'm not always so sure. The whole idea of a neural net is a simplification of our own brain structures, and even now researchers are experimenting with adding more of the structures we know (or think we know) about. I'm not so sure there isn't a part of our brain that thinks exactly the way an LLM does, even if we don't realise it.
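(A quick aside to unpack the "just predicting the next word" idea: here's a deliberately toy sketch in Python. It's a simple bigram word-counter, nothing like a real transformer, but it shows the basic loop of "given what came before, pick a likely next word, append it, repeat".)

```python
# Toy illustration only: a bigram "next word" predictor. Real LLMs learn vastly
# richer statistics with neural nets, but the generation loop is the same idea.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word tends to follow which in the (tiny) training text
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent follower of `word` seen in the corpus."""
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

# Generate text by repeatedly predicting the next word and feeding it back in
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    if word is None:
        break
    output.append(word)

print(" ".join(output))  # e.g. "the cat sat on the cat"
```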
Thanks for the feedback, Nicholas! And thanks for reading! You bring up a great point re: Perplexity and Gemini (and DeepSeek, since they're all over the news right now). I struggled a bit with how far to go down that road and in the end decided it's a finer distinction than I wanted to make in this article, better left for a follow-up that can also address things like source weighting. But your point is well taken!
To your point about neural nets and the nature of intellect, I think that's a fascinating discussion. I actually had an interesting exchange with ChatGPT on the subject of how its information processing and reasoning were similar to, and different from, human experience. I saw something earlier today about some folks having digitized smell that brought these questions to mind (although admittedly I didn't read past the headline). Anyway, I tend to agree with what I take to be your point, and I find it fascinating, especially in the sense that, in our rush to AGI and ASI, we're piling on speed and complexity without really pausing to ask many questions about the nature of what we're creating.
I suspect AGI, if and when it comes, will be a surprise to us, so much so that we'll have had it for a long time before we admit it. To use popular culture as a reference, there's an early episode of Star Trek: TNG in which Data, the crew's sentient android, is scheduled to be dismantled and examined against his will to see how he works, the argument being that a machine isn't a person, so Starfleet owns him, and it's all for a greater good.
It's an old cliché, perhaps, but it's what comes to mind when I see people talking so confidently, so sure, that they _know_ AIs aren't like people because they just predict, and that they _know_ we don't think like that. For thousands of years we've understood that we don't really know much of anything about how we know things, and yet in 2025 we're still blindly confident in our assumptions. In some ways I don't want to see us reach AGI, because if we do, these people are likely to hurt it just to prove it can't be hurt. (And if we manage to reach ASI, it'll probably hurt us back in response.)