Ethics by Design

Truth Filters for LLMs?

Emma's curiosity deepened as she wondered how these philosophical perspectives on truth could be applied in practice, to help both users and developers interact more responsibly with LLMs. If truth could be seen through different lenses, each lens might offer its own way of improving how people work with these models. She began to think of the theories as 'truth filters': different ways of evaluating, interpreting, and even designing model responses with specific goals in mind. These filters, she thought, could form the basis of an ethics-by-design framework, a system that would make interactions with LLMs more transparent, grounded, and ultimately safer for everyone.
She sketched out ideas for how users and developers could apply each truth filter to guide their approach:

  1. Correspondence Filter
  2. Coherence Filter
  3. Pragmatic Filter
  4. Epistemic Filter

3. Pragmatic Filter

  • For users: Emma recognised that not all tasks require strict factual accuracy, so the pragmatic filter could guide users to assess the practical utility of an output. If the model's answer serves the task at hand - such as generating ideas in a brainstorming session - it has value, even if it is not perfectly factual. The filter could remind users to think about context: is this a creative or exploratory scenario where a useful, flexible answer matters more than a strictly accurate one? (A sketch of such a check follows this list.)
  • For developers: Developers could use this filter to prioritise outputs according to the intended purpose of a prompt. Contextual fine-tuning could encourage models to produce relevant, pragmatic answers that meet user needs, especially in exploratory or creative applications. In this way, the model could focus on providing answers that are "good enough" for the context rather than absolute truths, creating a more flexible interaction; the second sketch after this list illustrates the idea.
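
As a rough illustration of the user-side check, here is a minimal sketch in Python. Everything in it is hypothetical - the `PragmaticCheck` record, the context lists, and the three verdicts are illustrative names invented for this example, not part of any existing tool - and in practice the pragmatic filter would be a mental habit rather than code.

```python
from dataclasses import dataclass

# Hypothetical task contexts in which the pragmatic filter applies.
CREATIVE_CONTEXTS = {"brainstorming", "ideation", "fiction", "naming"}
FACTUAL_CONTEXTS = {"research", "medical", "legal", "news"}

@dataclass
class PragmaticCheck:
    context: str          # what the user is trying to accomplish
    output_helped: bool   # did the answer move the task forward?

def pragmatic_verdict(check: PragmaticCheck) -> str:
    """Judge an answer by its utility in context, not by accuracy alone."""
    if check.context in FACTUAL_CONTEXTS:
        return "verify"       # utility alone is not enough here; fact-check first
    if check.context in CREATIVE_CONTEXTS and check.output_helped:
        return "accept"       # 'good enough' for the task, even if not strictly true
    return "reconsider"       # unclear context: ask what the answer is for

print(pragmatic_verdict(PragmaticCheck("brainstorming", output_helped=True)))  # -> accept
print(pragmatic_verdict(PragmaticCheck("medical", output_helped=True)))        # -> verify
```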
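
On the developer side, contextual fine-tuning is beyond a short example, so this sketch stands in with a lighter-weight version of the same idea at inference time: routing decoding parameters by a crude keyword check of prompt intent. The cue list and the specific `temperature` and `top_p` values are assumptions chosen for illustration; a production system would use a trained intent classifier and carefully tuned settings.

```python
# Hypothetical intent router: loosen decoding for creative prompts,
# tighten it for factual ones, so outputs match the prompt's purpose.
CREATIVE_CUES = ("brainstorm", "imagine", "story", "ideas for", "names for")

def decoding_params(prompt: str) -> dict:
    """Pick generation settings from a crude keyword intent check."""
    lowered = prompt.lower()
    if any(cue in lowered for cue in CREATIVE_CUES):
        # Exploratory task: favour diverse, 'good enough' answers.
        return {"temperature": 1.0, "top_p": 0.95}
    # Default: favour precise, reproducible answers.
    return {"temperature": 0.2, "top_p": 0.9}

print(decoding_params("Brainstorm ideas for a team offsite"))   # creative settings
print(decoding_params("When was the Eiffel Tower completed?"))  # factual settings
```

Either way, the design choice is the one Emma's filter describes: judge an answer by whether it fits the purpose of the prompt, and demand strict accuracy only where the context calls for it.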