2.4 Fundamental ethical and social principles and reflection
Can algorithms be fair? A discussion of fundamental ethical principles is needed.
Alongside legal requirements, fundamental ethical principles provide an indispensable framework for the use of Artificial Intelligence in higher education. Ethical guidelines help to shape technological developments in line with societal values and academic integrity. Ethical questions are particularly essential in the field of education, where personal development, participation and the fair distribution of opportunities are key concerns.
Key ethical principles in the use of AI
- Autonomy: The freedom of choice of learners and teachers must be preserved. AI systems should support users, not patronise them. Users must be able to recognise at all times whether they are interacting with an AI system, and must not be forced to use one.
- Fairness and non-discrimination: AI systems must be designed in such a way that they do not exacerbate existing inequalities. Bias in training data or decision-making processes must be identified and corrected. Particular attention must be paid to vulnerable groups.
- Transparency: The functioning, areas of application and limitations of an AI system must be comprehensible – both to users and to those responsible. Non-transparent behaviour (“black box”) is at odds with the requirements of informed academic practice.
- Accountability: Responsibility for decisions made with or by AI systems must be clearly defined. Responsibility cannot be delegated entirely to a technical system.
- Sustainability and societal benefit: The use of AI in higher education institutions should make a positive long-term contribution to educational processes, inclusion and scientific progress. Technical feasibility alone is not a sufficient criterion for decision-making.
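The fairness principle above can be made concrete with a simple audit. The following is a minimal illustrative sketch, not a method prescribed by this chapter: it computes the positive-decision rate per group for a hypothetical binary decision system (e.g. automated admission pre-screening) and reports the largest gap between groups, a basic demographic-parity check. All function names and data are invented for illustration; real audits require domain-appropriate metrics, representative data and expert interpretation.

```python
# Minimal sketch of a fairness audit for a binary decision system.
# All names and data here are hypothetical illustrations.

from collections import defaultdict

def selection_rates(decisions):
    """Return the positive-decision rate per group.

    decisions: iterable of (group, outcome) pairs, outcome in {0, 1}.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        positives[group] += outcome
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions):
    """Largest difference in selection rates between any two groups."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

if __name__ == "__main__":
    # Invented example: group A is selected far more often than group B.
    sample = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
              ("B", 1), ("B", 0), ("B", 0), ("B", 0)]
    print(selection_rates(sample))         # {'A': 0.75, 'B': 0.25}
    print(demographic_parity_gap(sample))  # 0.5
```

A large gap does not by itself prove discrimination, and a small gap does not prove fairness; such numbers are a starting point for the kind of critical evaluation and interdisciplinary review the chapter calls for.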
Practical implementation at higher education institutions
- Development of institution-wide ethical guidelines or codes of conduct for the use of AI
- Integration of ethical reflection into degree programmes, particularly in teacher training, engineering and social sciences
- Establishment of interdisciplinary ethics committees with representatives from teaching, IT, law, the student body and equality officers
- Regular ethical evaluation of AI systems, particularly those involving decision-making
- Promotion of critical media literacy and participatory negotiation processes
Responsibility means reflection and structure.
The responsible use of AI in higher education requires more than just compliance with legal norms and technical standards. It necessitates continuous reflection that takes into account individual attitudes, institutional structures and societal impacts. As educational and research institutions, universities bear a special responsibility to lead by example and actively shape an ethically informed approach to AI.
Reflection at the individual level: When do I rely on AI?
Lecturers, researchers and staff should be encouraged to reflect on their own role in dealing with AI. This includes questions such as:
- In which situations do I rely on AI systems?
- Where should I critically question, interpret or intervene to correct?
- How do I communicate the use of AI to students, colleagues or the public?
The development of reflective competence can be fostered through academic development programmes, workshops, case studies and interdisciplinary discourse. Teaching and learning settings in which students are guided to analyse ethical dilemmas in dealing with AI also contribute to critical engagement.
Institutional responsibility: creating structures
Higher education institutions should create structures that enable the responsible development, use and evaluation of AI. These include:
- the definition of clear responsibilities for AI projects,
- transparent decision-making processes when introducing new systems,
- regular evaluation and accountability mechanisms,
- the involvement of different stakeholder groups (e.g. students, teaching staff, administration, equality officers, IT and research) in decision-making processes,
- an active discourse on ethical and societal challenges in everyday university life.
Social responsibility: Initiating discourse
As centres of knowledge, opinion-forming and innovation, universities also bear a responsibility towards society. They should:
- contribute to the public debate on the use of AI,
- promote interdisciplinary research projects on the opportunities and risks of AI,
- document best practices and make them openly accessible,
- and impart ethical AI knowledge in the training of teachers, journalists, engineers and other professional groups.
Reflection and responsibility must not be understood as isolated tasks for individuals, but are part of a comprehensive culture of mindfulness, transparency and participation. Only in this way can higher education institutions actively and responsibly help shape the digital transformation.
Where AI reaches its limits
However powerful these systems may be, they remain prone to error. Hallucinations – statements that are factually incorrect but sound convincing – are not uncommon. Bias also remains a key risk.
Technological advances raise profound societal questions:
- Who bears responsibility for wrong decisions?
- What impact does AI have on labour markets, education or our self-image?
- How do we deal with deep fakes (media content such as videos or images generated or manipulated using AI), fake news or the high energy footprint of such models?
Many of these questions remain unanswered. This makes transparent rules – such as those set out in the EU AI Act – and open discussions about opportunities, risks and limits of use all the more important.
Conclusion: The future of AI is a task we must shape
AI is changing our world – this is not a question of the future, but of the present. Whether this change leads to greater justice, freedom and participation depends on the decisions we make today: which values we prioritise, which risks we mitigate, and which opportunities we seize. The crucial question is not whether Artificial Intelligence will change our society – it is already doing so.
The answer to the AI question is therefore not a technical one, but a societal one. And it requires: knowledge, dialogue, responsibility.
💡 Learning Summary Chapter 2.4: Fundamental Ethical and Social Principles and Reflection
- Reflection is the key to the responsible use of AI: Lecturers, researchers and staff should be aware of their role in dealing with AI and critically examine when and how they use AI systems.
- Ethics as a guiding framework for AI in higher education: Alongside legal requirements, ethical principles such as autonomy, fairness, transparency and accountability are crucial for the responsible use of AI in education.
- Empowerment rather than control: AI systems should support users, not control them. Responsibility lies with people – not with algorithms.
- Practical implementation requires structures: Higher education institutions should develop ethical guidelines, interdisciplinary committees and teaching formats to promote ethical reflection and continuously monitor AI systems critically.
- Risks of AI systems: AI can generate misinformation (“hallucinations”), be misused to deliberately mislead, and reinforce societal biases. In addition, its high energy consumption works against sustainable development goals.
- Societal impacts: AI is transforming information ecosystems, working environments and our societal self-image. At the same time, new opportunities are emerging in education, research, health and participation.