The key point is that LLMs don't process information the way we do. The knowledge they have is predigested, embedded in the text they were trained on.
Don't get me wrong, they are a big step towards true AI, but they cannot do some things that seem fundamental to intelligence. The best analogy is that they are a lobotomized speech centre. They can put on a veneer of being intelligent and self-aware, but it's only a veneer.
I personally suspect they will be a critical component of a future AI, but are a dead-end path on their own.