We describe our reality using words. Large Language Models (LLMs) build an abstraction of our use of words, not of reality itself. Because of this gap, their output can be fluent yet false (a hallucination).
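To make the gap concrete, here is a deliberately tiny sketch: a toy bigram model, not an LLM, but it illustrates the same principle. The corpus, the `follows` table, and `generate` below are all illustrative inventions, not any real system's API. The model learns only which words follow which, so it produces fluent output while representing none of the facts the training sentences describe.

```python
# Minimal sketch (hypothetical toy corpus and code, not any real LLM):
# a model of word use, not of reality, can emit fluent but false text.
import random
from collections import defaultdict

# Toy corpus: every sentence here is true.
corpus = [
    "paris is the capital of france",
    "berlin is the capital of germany",
    "rome is the capital of italy",
]

# Learn bigram statistics: for each word, which words follow it.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for a, b in zip(words, words[1:]):
        follows[a].append(b)

def generate(start, length=6):
    """Sample a plausible word sequence from the learned statistics."""
    out = [start]
    for _ in range(length - 1):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

random.seed(1)
# The model continues "berlin is the capital of ..." with any word it
# has seen after "of" -- possibly "france" -- because it models word
# adjacency, not geography. Fluent form, no grounding in fact.
print(generate("berlin"))
```

Every sentence the model was trained on is true, yet the output can be false: nothing in the learned statistics encodes what a capital *is*. Scaled up enormously, this is the same flaw the passage above describes.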

Fundamental Flaw In AI
Author: Gladray
Published: 24 May 2025
Related Clips

Can We Build AI Without Losing Control Over It
Scared of superintelligent AI? You should be, says neuroscientist and philosopher Sam Harris
Published: 01 Jun 2016

What Happens When Computers Get Smarter
Nick Bostrom's presentation about smart machines: will they have our values, or values of their own?
Published: 01 Mar 2015

Implications Of Computers That Can Learn
Jeremy Howard's presentation on the implications of computers that can learn
Published: 01 Dec 2014

How Brain Science Will Change Computing
Jeff Hawkins' presentation on how the brain lets us intelligently predict what will happen next
Published: 01 Feb 2003