1.1 Basics: How AI works
What happens when machines suddenly start creating text, images or music? Welcome to the world of generative AI!
Artificial Intelligence (AI) is an umbrella term for technologies that perform tasks which previously required human intelligence, such as understanding language, recognising images or making decisions.
As early as 1950, Alan Turing proposed a benchmark for machine intelligence: a system counts as intelligent if, in conversation, a human can no longer tell whether they are speaking to a machine or to another human. With developments such as AI chatbots like ChatGPT, however, this understanding keeps shifting. The bar for what counts as ‘true’ artificial intelligence is continually raised, so that AI is never regarded as genuinely intelligent. This phenomenon is known as moving the goalposts: if someone moves the goalposts during a match, no player can score, no matter how well they play. For this reason, the Turing test is controversial today.
Machine learning – the foundation of modern AI
A central subfield of AI is machine learning (ML). Here, algorithms independently recognise patterns in data and improve their performance with every new piece of information. Unlike with traditional programmes, behaviour is not specified in detail but optimised through experience.
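A minimal sketch of this idea: the data and the learning rule below are purely illustrative, but they show how a model can pick up a pattern from examples without the rule ever being programmed in. Here the hidden pattern is y = 2x, yet the factor 2 appears nowhere in the code – it is learned from the data.

```python
# Illustrative sketch: the model "learns" a rule from examples
# instead of being told the rule. The data encodes y = 2x,
# but the factor 2 is never written into the program.

def train(examples, epochs=200, lr=0.05):
    w = 0.0  # model parameter, starts with no knowledge
    for _ in range(epochs):
        for x, y in examples:
            pred = w * x
            error = pred - y
            w -= lr * error * x  # nudge w to reduce the error
    return w

examples = [(1, 2), (2, 4), (3, 6)]  # the pattern is hidden in the data
w = train(examples)
print(round(w, 2))  # → 2.0, learned purely from the examples
```

With every pass over the data the error shrinks, which is exactly what "improving performance with every new piece of information" means in miniature.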
The basis for this is training data: structured data (e.g. tables or JSON files – JSON stands for JavaScript Object Notation) or unstructured data (e.g. text, images, audio). Different data formats are used depending on the field of application and the algorithm.
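The difference between the two kinds of data can be shown with a short sketch (the record itself is a made-up example): in structured data, fields are named and directly machine-readable; in unstructured data, the same information is buried in free text and must first be extracted.

```python
import json

# Structured data: named fields, directly machine-readable.
structured = '{"name": "Ada", "age": 36}'
record = json.loads(structured)
print(record["age"])  # → 36

# Unstructured data: the same information hidden in free text.
# A program cannot access "age" here without extra processing,
# e.g. text analysis by a machine learning model.
unstructured = "Ada is 36 years old."
```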
When prejudice creeps into the code: bias
A key problem in machine learning is systematic bias. This can lead to unfair or discriminatory results. The causes are often non-representative training data, unsuitable selection procedures or problematic evaluation metrics. Bias arises within the system – often unnoticed – but can have a massive impact on decisions.
Chat assistants and Large Language Models (LLMs)
AI has recently become widely known primarily through chat assistants such as ChatGPT, Claude or Gemini. These are based on so-called Large Language Models (LLMs). Neural networks are machine learning models loosely modelled on the human brain; deep networks consist of many interconnected layers of neurons stacked on top of one another. LLMs are particularly large deep neural networks trained on vast amounts of text, which enables them to process and generate language.
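The idea of stacked layers can be made concrete with a toy forward pass. The weights below are arbitrary illustrative values, not trained ones; real networks have millions or billions of such parameters.

```python
import math

# Toy sketch of a "deep" network: several layers of neurons stacked
# on top of one another. Each neuron computes a weighted sum of its
# inputs and passes it through an activation function (here tanh).
# All weight values are arbitrary and for illustration only.

def layer(inputs, weights):
    return [math.tanh(sum(w * x for w, x in zip(ws, inputs)))
            for ws in weights]

x = [0.5, -0.2]                            # input signal
h1 = layer(x, [[0.1, 0.4], [0.7, -0.3]])   # first hidden layer
h2 = layer(h1, [[0.2, 0.5], [-0.6, 0.1]])  # second hidden layer
out = layer(h2, [[0.3, 0.9]])              # output layer (one neuron)
print(len(out))  # → 1
```

Training such a network means adjusting all the weights so that the output matches the desired result – for an LLM, the next word in a text.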
LLMs learn to understand linguistic structures and to generate new, contextually coherent texts themselves. This is why they are also referred to as generative models. In addition to commercial models, there are now also open-source alternatives such as LLaMA (Meta) or DeepSeek, which can be used locally.
[Figure: Discriminative vs. generative models – by Lwneal, own work, CC0]
What modern AI systems can do
Current AI systems are multimodal: they process not only text but also spoken language, images or music, and generate new content from them. They can draft articles, write program code, generate images or answer complex exam questions.
A key technical feature is the context window. Whilst early systems could only take the user’s most recent input into account, current models analyse entire documents or incorporate longer dialogue histories into their responses.
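What a limited context window means in practice can be sketched as follows: when a dialogue history no longer fits into the window, only the most recent messages are kept. The token counting here is a crude word count for illustration; real models use subword tokenisers.

```python
# Sketch: fitting a dialogue history into a limited context window.
# Only the newest messages that fit into the token budget are kept.
# Counting words as "tokens" is a simplification for illustration.

def fit_into_context(messages, max_tokens):
    kept, used = [], 0
    for msg in reversed(messages):   # walk from newest to oldest
        tokens = len(msg.split())
        if used + tokens > max_tokens:
            break                    # older messages no longer fit
        kept.append(msg)
        used += tokens
    return list(reversed(kept))

history = ["Hello there",
           "Tell me about neural networks",
           "What is a context window",
           "Explain it briefly please"]
print(fit_into_context(history, 8))  # → ['Explain it briefly please']
```

Early systems effectively had a budget of one message; current models can keep entire documents and long dialogue histories in the window.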
Another recent technique is Chain of Thought (CoT): before responding, the model analyses the input, considers possible interpretations and develops hypotheses – a kind of visible internal monologue that precedes the actual answer.
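In its simplest form, chain-of-thought behaviour can be encouraged through the prompt itself. The wording below is only one common illustrative formulation; real prompts and model interfaces vary.

```python
# Sketch of a chain-of-thought style prompt: the model is explicitly
# asked to reason step by step before stating its final answer.
# The phrasing is illustrative, not a fixed API or standard.

question = "A train travels 60 km in 1.5 hours. What is its average speed?"
cot_prompt = (
    f"{question}\n"
    "Let's think step by step, then state the final answer."
)
print(cot_prompt)
```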
💡 Learning Summary Chapter 1.1: Basics
- AI and Machine Learning: Artificial Intelligence mimics human thinking. Machine learning identifies patterns based on data and improves problem-solving without explicit programming.
- How modern AI systems work: LLMs such as ChatGPT use deep neural networks, can be used in a multimodal manner and generate content independently based on large amounts of data.
- Challenges and the need for regulation: Risks such as bias and misinformation make clear rules necessary – such as those established by the EU AI Act.