Truth✨
Section Overview
Truth Filters for LLMs?
Emma's curiosity deepened as she wondered how these philosophical perspectives on truth could be applied in practice to help both users and developers interact more responsibly with LLMs. If truth could be seen through different lenses, then each lens might offer a way to improve how people work with LLMs. She began to think of these theories as 'truth filters' - different ways of evaluating, interpreting, and even designing model responses with specific goals in mind. These filters, she thought, could form the basis of an ethics-by-design framework: a system that could make interactions with LLMs more transparent, grounded, and ultimately safer for all.
She sketched out ideas for how users and developers could apply each truth filter to guide their approach:
- Correspondence Filter
- Coherence Filter
- Pragmatic Filter
- Epistemic Filter
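To make the four filters more concrete, here is a minimal sketch of how a developer like Alex might wire them up as automated checks on a model response. All function names and heuristics below are hypothetical illustrations invented for this sketch - real implementations would use fact-checking services, consistency models, and calibrated uncertainty estimates rather than these toy string checks.

```python
# Toy sketch of the four "truth filters" as checks on a model response.
# Every heuristic here is a deliberately simple stand-in, not a real API.

def correspondence_filter(response: str, known_facts: set[str]) -> bool:
    """Correspondence: does the response match externally verifiable facts?
    (Toy heuristic: at least one known fact appears verbatim.)"""
    return any(fact in response for fact in known_facts)

def coherence_filter(response: str, prior_statements: list[str]) -> bool:
    """Coherence: is the response consistent with what was said before?
    (Toy heuristic: it must not flatly negate a prior statement.)"""
    return not any(f"not {s}" in response for s in prior_statements)

def pragmatic_filter(response: str) -> bool:
    """Pragmatic: is the response usable at all?
    (Toy heuristic: more than a couple of words.)"""
    return len(response.split()) >= 3

def epistemic_filter(response: str) -> bool:
    """Epistemic: does the response signal uncertainty where appropriate?
    (Toy heuristic: presence of hedging language.)"""
    hedges = ("might", "may", "likely", "possibly", "according to")
    return any(h in response.lower() for h in hedges)

def evaluate(response: str, known_facts: set[str],
             prior_statements: list[str]) -> dict[str, bool]:
    """Run all four filters and report which ones the response passes."""
    return {
        "correspondence": correspondence_filter(response, known_facts),
        "coherence": coherence_filter(response, prior_statements),
        "pragmatic": pragmatic_filter(response),
        "epistemic": epistemic_filter(response),
    }

report = evaluate(
    "The Eiffel Tower is in Paris, and it may be the most visited monument.",
    known_facts={"Eiffel Tower is in Paris"},
    prior_statements=["the tower is in Paris"],
)
```

A developer could then refuse to surface, or flag for review, any response that fails a filter - which is one way the filters become design decisions rather than only philosophical lenses.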
Excited by her insights, Emma decided to share her ideas with Alex, an AI developer. She hoped Alex could help her understand how practical it would be to integrate these truth filters and find ways to control hallucinations in LLMs. As the conversation deepened, Alex brought up a technical concept that piqued her curiosity. Let's follow their conversation to learn more.


Emma Uses ChatGPT at Work






Feeling better informed after her deep dive with the developers, Emma returned to her workspace, ready to make a pivotal decision about how her team would implement their AI assistant. She wanted the tool to be both useful and responsible, but she was faced with two distinct options for how the assistant should handle information and respond to users.
Emma paused, weighing the pros and cons of each option. She understood the stakes - each choice could shape users' experience with the AI and, potentially, how responsibly her team would be seen to have deployed this technology.