Knowledge🧾
Section overview

From Truth to Knowledge
If you have already worked through the episode on truth, you will be familiar with Emma. However, you do not need to have finished that episode to tackle this one.
After pondering the topic of truth for a long time, Emma leaned back in her chair, feeling the weight of the challenge. Her journey had begun with questions of truth - how to make her AI accurate and reliable in its responses. But as she delved deeper, these questions naturally evolved into a more profound challenge: what does it mean to ‘know’?
If truth itself was elusive in the realm of AI, could an AI ever truly know anything, let alone understand it?
She considered how casually we speak of what an AI "knows," "understands," or even "thinks," as if it were capable of human-like comprehension. We might even regard it as a "source of knowledge." But this led her to question: does the ability to generate plausible responses equate to knowledge or understanding? Or is the AI simply simulating the appearance of knowledge without genuinely grasping the meaning behind its words?
In this episode, we explore whether LLMs can 'know' anything at all and what is required for humans to have knowledge.
Click on the two options below to explore these two central questions of this episode: whether LLMs can 'know' anything at all, and what criteria need to be met for humans to know. Along the way, you will also be introduced to some interesting philosophical thought experiments related to these questions.
Can LLMs 'know'?
As Emma pondered the nature of knowledge, she recalled John Searle’s Chinese Room argument, a thought experiment that challenged the very idea that machines could truly understand language.
Please watch this short video for an overview of the thought experiment:
[Embedded YouTube video]
Searle argued that while the person in the room can manipulate symbols to produce appropriate responses, there’s no actual understanding going on. They’re simply following rules.
This struck Emma as uncannily similar to how her AI functioned. Like the person in the Chinese Room, the AI could generate responses that seemed intelligent, meaningful, even insightful. But under the surface, it was merely processing patterns in language, matching inputs to outputs without any comprehension.
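To make the analogy concrete, consider the following toy sketch. It is purely illustrative and is not how Emma's AI or any real LLM is implemented: a program that simply looks up canned replies for familiar inputs can produce seemingly apt answers while understanding nothing about them, just like the person in the Chinese Room.

```python
# A toy "Chinese Room": the program follows lookup rules to produce
# plausible-sounding replies, but nothing in it understands the symbols.
# (Illustrative only; real LLMs predict tokens statistically, yet the
# philosophical point about rule-following vs. understanding is the same.)

RULES = {
    "what is the capital of france": "The capital of France is Paris.",
    "who wrote hamlet": "Hamlet was written by William Shakespeare.",
}

def reply(user_input: str) -> str:
    # Normalise the input and look for a matching rule.
    key = user_input.lower().strip(" ?!.")
    # If a rule matches, return its canned output; otherwise admit defeat.
    return RULES.get(key, "I have no rule for that input.")

if __name__ == "__main__":
    print(reply("What is the capital of France?"))    # looks knowledgeable
    print(reply("Who wrote Hamlet?"))                 # looks knowledgeable
    print(reply("Do you understand these answers?"))  # it does not
```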
Emma realised that her users might naturally interpret the AI’s responses as signs of understanding, as if it “knew” the information it presented. Yet, in reality, it was as oblivious to meaning as the person in Searle’s Chinese Room.
This raised a significant concern for Emma: if users believed the AI understood, they might begin to treat it as a reliable “knower.” And if that happened, the line between real knowledge and superficial imitation might blur, leaving users unknowingly dependent on an entity that offered the appearance of knowledge without the substance.
So are LLMs, if anything, just a knowledge aid, but never a knowledge source?
When do humans 'know'?
Emma's thoughts spiralled. If large language models (LLMs) like her AI merely generated plausible answers without any real understanding - perhaps even acting as "bullshitters" in philosopher Harry Frankfurt's sense of speech that is indifferent to truth - could they really convey knowledge? Could humans gain knowledge from something that doesn't understand what it presents?
This reminded Emma of Gettier cases, named after the philosopher Edmund Gettier. In his 1963 paper, 'Is Justified True Belief Knowledge?', Gettier challenged the traditional definition of knowledge as “justified true belief.” This definition suggests that if someone has a belief that is true and well-justified, then it counts as knowledge. However, Gettier presented scenarios where people had justified true beliefs that didn’t actually constitute knowledge. His cases showed that it’s possible to be right “by luck,” even with a belief that seems fully justified, and still not possess genuine knowledge.
Please watch this short video to get a grasp of Gettier's point:
[Embedded YouTube video]
Emma realised that similar “right by luck” situations could easily occur when interacting with AI. Imagine a user asking the AI a question about an obscure historical event. The AI, based on language patterns in its training data, generates a response that happens to align with historical fact - perhaps it even sounds well-justified. However, the AI has simply pieced together words that resemble the correct answer.
In this case, the user might believe they’ve “learned” something true, but like in a Gettier case, they’ve only stumbled onto truth by luck, rather than through real knowledge from a knowledgeable source.
This realisation strengthened Emma’s conviction: if she wanted her AI to be a useful tool, she needed to make users aware that the AI could appear knowledgeable without truly being so. Gettier cases in AI interactions underscored the importance of critical engagement, showing that users should verify and question AI responses to avoid mistaking chance correctness for real knowledge.
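One way to build this critical engagement into everyday use is to treat every AI answer as unverified until it has been checked against an independent source. The sketch below illustrates that habit in code; the names (Answer, ask_llm, check_against_source) are hypothetical placeholders for illustration, not part of any real system.

```python
from dataclasses import dataclass

@dataclass
class Answer:
    text: str
    verified: bool  # only True once an independent source confirms the claim

def ask_llm(question: str) -> Answer:
    # Placeholder for a call to a language model; every answer starts
    # out unverified, no matter how confident or fluent it sounds.
    return Answer(text=f"[model output for: {question}]", verified=False)

def check_against_source(answer: Answer, confirmed_by_source: bool) -> Answer:
    # The user (or a separate fact-checking step) decides whether an
    # independent source actually supports the claim.
    return Answer(text=answer.text, verified=confirmed_by_source)

answer = ask_llm("When did the obscure historical event take place?")
answer = check_against_source(answer, confirmed_by_source=True)
print("Treat as knowledge?", answer.verified)
```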
AI as a Tool for Knowledge Acquisition
Emma considered what carrying Searle's thought experiment and Gettier's criticism of the "justified true belief" definition of knowledge over to AI implied: if users accepted AI responses at face value, mistaking fluent language for true knowledge, they might assume they were “learning” something meaningful when, in reality, they were interacting with text devoid of understanding. Worse, the confidence of an AI’s responses could obscure the fact that it might be “right” only by coincidence or luck. If the AI functioned as a bullshitter, uninterested in truth, then any knowledge gained from it would be tenuous, relying on users’ critical thinking rather than on the AI’s reliability.
In the episode on truth, this is discussed at length.

So could humans gain knowledge from an indifferent entity? Emma realised the answer was complex. Users might extract valuable information by critically engaging with the AI’s output, fact-checking, and applying their judgment. In other words, humans could create knowledge from AI responses by scrutinising them carefully, but the AI itself was no reliable source of knowledge. It couldn’t impart understanding directly; it could only offer language patterns that users would have to dissect and validate for truth.
The AI, then, could be a tool, a prompt, or a starting point for knowledge, but not a true knower or teacher.
And this, Emma decided, was a crucial insight. If she proceeded with the design, she’d need to make it clear to users that the AI was not an authority but an assistant—an entity that could aid in exploration but should never be mistaken for a source of true understanding.
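In practice, one simple way to signal this is to attach a visible reminder to every AI response that the output is a starting point for exploration, not verified knowledge. The following sketch shows what such a wrapper might look like; it is illustrative only and does not describe Emma's actual design or any particular product.

```python
DISCLAIMER = (
    "Note: this answer was generated by a language model. "
    "It may be wrong even when it sounds confident - please verify "
    "important claims against an independent source."
)

def present_response(model_output: str) -> str:
    # Wrap every model answer with a reminder that the AI is an
    # assistant for exploration, not an authority on the subject.
    return f"{model_output}\n\n{DISCLAIMER}"

print(present_response("The treaty was signed in 1648."))
```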
Emma is now trying to apply her insights at work. Maybe you will find yourself in a similar situation in the future. Let's see how it goes for Emma.
Emma's Conversation: Designing for Knowledge
Emma gathered her team to discuss the implications of Searle’s Chinese Room and of Gettier cases for their AI’s design. She wanted to explore practical solutions to make the AI responsible and transparent, and she was excited to see her team’s reactions to these ideas.
Read through Emma's conversation to find out what practical solutions she and her team came up with. Luckily, she took screenshots of the conversation. Use the navigation arrows to view all screenshots.