Ethics by Design
Course Topics
‘Ethics by Design’ is a pioneering learning programme that emphasises the central role of ethics in the development and application of technology. Instead of treating ethics as a retrospective concern, the programme integrates it into every phase of development. This course shows you how to build technology that is not only innovative but also fair, just, and ethically sound.
In interactive modules, you will make decisions in everyday scenarios and deal intensively with topics such as fairness, justice and data protection. This hands-on content sensitises you to the relevance of ethical considerations: the positive effects of embedding them in technology development processes, and the negative consequences when ethics plays no role in development.
Basically, it's like with food: our consumer behaviour changes when we know the origin of our food and the consequences of our consumption. The same applies to technology: the more we know about how it is developed and what its consequences are, the more differently we view and develop it. The course provides the necessary tools and knowledge to develop technology that is fair, transparent and in the best interests of all stakeholders.
Truth Filters for LLMs?
Emma's curiosity deepened as she wondered how these philosophical perspectives on truth could be practically applied to help both users and developers interact more responsibly with LLMs. If truth could be seen through different lenses, then each lens might offer a way to improve the way people work with LLMs. She began to think of these theories as 'truth filters' - different ways of evaluating, interpreting and even designing model responses with specific goals in mind. These filters, she thought, could form the basis of an ethics by design framework, a system that could make interactions with LLMs more transparent, grounded, and ultimately safer for all.
She sketched out ideas for how users and developers could apply each truth filter to guide their approach:
- Correspondence Filter
- Coherence Filter
- Pragmatic Filter
- Epistemic Filter
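To make the idea concrete, Emma's four filters could be prototyped as simple checks that a developer runs over a model response before showing it to a user. The following Python sketch is a hypothetical illustration; all function names and string heuristics are assumptions, not part of the course material. Real implementations would rely on fact-checking services, consistency models, and calibrated uncertainty estimates rather than substring matching.

```python
# A hypothetical sketch of Emma's "truth filters" as simple checks on a
# model response. All names and heuristics here are illustrative
# assumptions, not a real LLM API.

def correspondence_filter(response: str, verified_facts: list[str]) -> bool:
    """Correspondence: does the response contain externally verified facts?"""
    return all(fact in response for fact in verified_facts)

def coherence_filter(response: str, prior_statements: list[str]) -> bool:
    """Coherence: a toy contradiction check against earlier statements."""
    return not any(f"not {statement}" in response
                   for statement in prior_statements)

def pragmatic_filter(response: str, user_goal: str) -> bool:
    """Pragmatic: does the response address the user's stated goal at all?"""
    return user_goal.lower() in response.lower()

def epistemic_filter(response: str) -> bool:
    """Epistemic: does the response hedge instead of projecting certainty?"""
    hedges = ("may", "might", "likely", "uncertain", "possibly")
    return any(hedge in response.lower() for hedge in hedges)

# Example: run all four filters on one response.
response = "Paris is the capital of France and is likely the most visited city."
checks = {
    "correspondence": correspondence_filter(
        response, ["Paris is the capital of France"]),
    "coherence": coherence_filter(
        response, ["Paris is the capital of France"]),
    "pragmatic": pragmatic_filter(response, "capital of France"),
    "epistemic": epistemic_filter(response),
}
print(checks)
```

In this toy setup a response is surfaced only if every filter passes; the point is not the crude string checks but the design pattern of making each truth criterion an explicit, inspectable gate.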
Excited by her insights, Emma decided to share her ideas with Alex, an AI developer. She hoped Alex could help her understand how practical it would be to integrate these truth filters and find ways to control hallucinations in LLMs. As the conversation deepened, Alex brought up a technical concept that piqued her curiosity. Let's follow their conversation to learn more by clicking here or on the title of this activity.
Work and the Quest for Meaning: Designing AI with Purpose and Autonomy in Mind
Work has long been a cornerstone of human identity, shaping our sense of meaning, purpose, and connection to the world. Through work, individuals often find purpose, achieve goals, and contribute to their communities. It serves not only as a means of economic survival but also as a way to express creativity, develop mastery, and engage in something larger than oneself.
As AI continues to transform the nature of work, understanding how people derive meaning from work becomes essential for designing AI systems that support, rather than detract from, human fulfillment, autonomy, and dignity.
Emma Uses ChatGPT at Work
Feeling better informed after her deep dive with the developers, Emma returned to her workspace, ready to make a pivotal decision about how her team would implement their AI assistant. She wanted the tool to be both useful and responsible, but she was faced with two distinct options for how the assistant should handle information and respond to users.
Emma paused, considering the pros and cons of each option. She understood the stakes: how each choice could shape the users’ experience with the AI and, potentially, how responsible her team would appear in deploying this technology.
If you have already worked through the episode on truth, you are familiar with Emma. However, finishing that episode is not a prerequisite for tackling this one.
Emma considered how Searle's thought experiment and Gettier's critique of defining knowledge as justified true belief carry over to AI: if users accepted AI responses at face value, mistaking fluent language for true knowledge, they might assume they were “learning” something meaningful when, in reality, they were interacting with text devoid of understanding. Worse, the confidence of an AI’s responses could obscure the fact that it might be “right” only by coincidence or luck.
Emma gathered her team to discuss the implications of Searle’s Chinese Room and Gettier cases for their AI’s design. She wanted to explore practical solutions to make the AI responsible and transparent, and she was excited to see her team’s reactions to these ideas.