Ethics by Design
Truth Filters for LLMs?
Emma's curiosity deepened as she wondered how these philosophical perspectives on truth could be applied in practice to help both users and developers interact more responsibly with LLMs. If truth could be seen through different lenses, then each lens might offer its own way of improving how people work with LLMs. She began to think of these theories as 'truth filters' - different ways of evaluating, interpreting and even designing model responses with specific goals in mind. These filters, she thought, could form the basis of an ethics by design framework, a system that could make interactions with LLMs more transparent, grounded, and ultimately safer for all.
She sketched out ideas for how users and developers could apply each truth filter to guide their approach, even roughing out what the filters might look like in code (see the sketch after this list):
- Correspondence Filter: does the response correspond to verifiable facts about the world?
- Coherence Filter: does the response hang together, free of contradictions with itself and with what is already established in the conversation?
- Pragmatic Filter: does the response actually work for the user's purpose when put into practice?
- Epistemic Filter: is the response properly justified, with evidence offered and uncertainty acknowledged?
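A minimal Python sketch of the idea, purely illustrative: the TruthFilter class, the check functions and the review() helper are assumptions made for this sketch, not part of any real library or moderation API. Each filter turns one theory of truth into a concrete check that attaches an advisory note to a model response.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch: each truth filter becomes a named check that attaches
# an advisory note to a model response. The checks below are placeholder
# heuristics; a real system would replace them with actual verification logic.

@dataclass
class TruthFilter:
    name: str                     # which theory of truth the filter draws on
    question: str                 # the question it prompts people to ask
    check: Callable[[str], str]   # returns a short advisory for this response

def correspondence_check(response: str) -> str:
    # Correspondence: does the response match verifiable facts?
    return "Verify factual claims against trusted external sources."

def coherence_check(response: str) -> str:
    # Coherence: is the response consistent with itself and the conversation?
    return "Check for contradictions within and across turns."

def pragmatic_check(response: str) -> str:
    # Pragmatic: does the response actually serve the user's goal?
    return "Test the answer in practice before relying on it."

def epistemic_check(response: str) -> str:
    # Epistemic: is the response justified and honest about uncertainty?
    return "Look for cited evidence and acknowledged limits."

FILTERS: List[TruthFilter] = [
    TruthFilter("Correspondence", "Does it match the facts?", correspondence_check),
    TruthFilter("Coherence", "Does it hang together?", coherence_check),
    TruthFilter("Pragmatic", "Does it work in practice?", pragmatic_check),
    TruthFilter("Epistemic", "Is it justified?", epistemic_check),
]

def review(response: str) -> List[str]:
    """Run every truth filter over a response and collect its advisories."""
    return [f"{f.name}: {f.check(response)}" for f in FILTERS]
```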
Emma realised that embedding these truth filters into both user interfaces and model development processes could shape a responsible and user-conscious approach to interacting with AI. By guiding both sides - users to evaluate responses critically and developers to implement mindful safeguards - these filters could improve transparency, usability and accountability in AI applications.
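Continuing the sketch above, an interface might surface the filters as gentle prompts printed alongside each answer; the sample response and the '[truth filter]' prefix here are invented for illustration.

```python
# Illustrative usage, assuming the review() sketch above: an interface could
# print the filters' advisories as gentle reminders alongside each answer.
answer = "The Eiffel Tower is 330 metres tall."
for note in review(answer):
    print(f"[truth filter] {note}")
# [truth filter] Correspondence: Verify factual claims against trusted external sources.
# [truth filter] Coherence: Check for contradictions within and across turns.
# [truth filter] Pragmatic: Test the answer in practice before relying on it.
# [truth filter] Epistemic: Look for cited evidence and acknowledged limits.
```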
She imagined a future where every interaction with an LLM included subtle reminders of these filters, encouraging users to engage critically and ensuring that developers took proactive steps to minimise the risk of error and unintended consequences. In this future, truth wasn't just about producing correct answers; it was a dynamic process that allowed people to interact with LLMs in a meaningful and responsible way. With truth filters as her blueprint, Emma saw the possibility of an AI landscape where technology and human values were thoughtfully aligned.