History of AI

Alessandro Anatrini

2. The Pioneers

2.1 The Artificial Neuron & Boolean Logic

Several cyberneticians endeavoured to comprehend human cognition by drawing inspiration from the brain's fundamental components: neurons. The pioneering artificial neuron (AN), proposed by Warren McCulloch and Walter Pitts in 1943, stands as a milestone in the fields of artificial intelligence and computational neuroscience.

The AN serves as a simplified abstraction of how biological neurons in the human brain function. It is grounded in a mathematical model that describes the synaptic connections between neurons and the way electrical signals are integrated and transmitted through these connections.

The model envisioned by Pitts and McCulloch comprises two main components: inputs and outputs. Inputs are represented by electrical signals from other neurons, while the output is the neuron's response based on these inputs.

Here, Boolean logic, derived from the principles of mathematical logic articulated by George Boole in 1854 in his work An Investigation of the Laws of Thought, plays a crucial role. Boolean logic operates on variables that can only assume two values: 0 (false) and 1 (true). The primary operations of Boolean logic include AND, OR and NOT, from which all other operations derive.

In the context of the AN, Boolean logic is employed to determine its most distinguishing feature: the activation function. This function dictates whether the neuron should be activated or remain inactive based on the weighted sum of its inputs. If the weighted sum reaches a certain threshold, the neuron activates and produces an output; otherwise, it remains inactive. This activation mechanism is crucial to the operation of artificial neurons and mirrors the fundamental logic of decision-making in the biological neurons of the human brain.
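
To make this concrete, here is a minimal sketch in Python of such a threshold unit, showing how suitable weights and thresholds reproduce the Boolean operations AND, OR and NOT; the function names and specific values are illustrative choices, not part of the original 1943 formulation.

```python
# A minimal McCulloch-Pitts-style threshold unit (illustrative, not the original notation).

def threshold_unit(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of the inputs reaches the threshold."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Boolean operations expressed as threshold units
AND = lambda a, b: threshold_unit([a, b], [1, 1], threshold=2)
OR  = lambda a, b: threshold_unit([a, b], [1, 1], threshold=1)
NOT = lambda a:    threshold_unit([a],    [-1],   threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b})={AND(a, b)}  OR({a},{b})={OR(a, b)}")
print(f"NOT(0)={NOT(0)}  NOT(1)={NOT(1)}")
```

Setting the threshold equal to the number of inputs yields AND, while a threshold of 1 yields OR; a negative weight with threshold 0 inverts its input, giving NOT.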

Boolean logic is therefore fundamental for understanding how the AN processes information and makes decisions. It provides a theoretical foundation for the computational models of artificial neurons and neural networks, paving the way for the more sophisticated activation functions used in modern artificial neural networks. The simplicity and effectiveness of McCulloch and Pitts's model show that even an extremely elementary computational system can process information and solve problems by integrating inputs across its connections.


2.2 Rosenblatt's Perceptron

In the latter part of the 1950s, the psychologist Frank Rosenblatt (1957) proposed a neural-inspired system capable of pattern recognition. The perceptron represents an evolution of the AN and was designed to recognize patterns in images through supervised learning. Using a learning algorithm that updates the weights of the connections between neurons whenever an input is misclassified, the perceptron proved capable of learning to distinguish between categories of objects based on their visual characteristics.
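
A minimal sketch of this kind of learning rule, assuming a simple step activation and a toy linearly separable dataset (the data, learning rate and number of epochs are illustrative, not Rosenblatt's original configuration):

```python
# A perceptron-style learning rule on a toy dataset (illustrative parameters).

def train_perceptron(samples, labels, epochs=20, lr=0.1):
    weights = [0.0] * len(samples[0])
    bias = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            # Step activation: fire if the weighted sum plus bias is non-negative
            prediction = 1 if sum(w * xi for w, xi in zip(weights, x)) + bias >= 0 else 0
            error = target - prediction
            # Weights change only when the prediction is wrong
            weights = [w + lr * error * xi for w, xi in zip(weights, x)]
            bias += lr * error
    return weights, bias

# Learn the linearly separable OR pattern from labelled examples
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 1, 1, 1]
w, b = train_perceptron(samples, labels)
print([1 if sum(wi * xi for wi, xi in zip(w, x)) + b >= 0 else 0 for x in samples])  # [0, 1, 1, 1]
```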


2.3 The Pandemonium

Contemporaneous with the perceptron is Oliver Selfridge's pandemonium (1959), a different pattern recognition model whose name evokes a teeming assembly of entities called 'demons' that collaborate to recognize complex patterns. Each demon is responsible for recognizing a specific aspect of the input patterns, and the demons then collaborate through a voting process to determine the final outcome of the recognition. The pandemonium illustrates the efficacy of distributing the workload among multiple specialised components to solve complex pattern recognition problems. It thus employs a bottom-up approach, analysing the basic features of the inputs and combining them to form more complex and meaningful representations.
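
The division of labour among demons can be sketched as follows; the features, patterns and voting scheme are invented stand-ins for illustration and do not reproduce Selfridge's original demons.

```python
# A pandemonium-style sketch: feature "demons" each score one aspect of the input,
# and a decision stage picks the pattern with the most support (illustrative only).

def has_vertical(pixels):
    return any(all(row[c] for row in pixels) for c in range(len(pixels[0])))

def has_horizontal(pixels):
    return any(all(row) for row in pixels)

FEATURE_DEMONS = {"vertical": has_vertical, "horizontal": has_horizontal}

# Each pattern "demon" knows which features its pattern should contain
PATTERN_DEMONS = {"L": {"vertical", "horizontal"}, "I": {"vertical"}, "-": {"horizontal"}}

def recognize(pixels):
    present = {name for name, demon in FEATURE_DEMONS.items() if demon(pixels)}
    # Decision stage: each pattern demon 'votes' according to how well its features match
    votes = {pattern: len(features & present) - len(features - present)
             for pattern, features in PATTERN_DEMONS.items()}
    return max(votes, key=votes.get)

# A 3x3 image containing both a vertical and a horizontal stroke
image = [[1, 0, 0],
         [1, 0, 0],
         [1, 1, 1]]
print(recognize(image))  # expected: "L"
```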


2.4 The Illiac Suite

During this period, the first computer-assisted experiments in algorithmic composition also emerged. In 1957 the first work composed with artificial intelligence techniques, the Illiac Suite for string quartet, appeared. The composer Lejaren Hiller, together with the mathematician Leonard Isaacson, employed a Monte Carlo procedure that generated random numbers mapped to musical properties such as pitch or rhythm. A series of constraints then confined these random values to elements permitted by the rules of music theory, by statistical probabilities (Markov chains) and by the imagination of the two composers.
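
The generate-and-test idea behind such experiments can be sketched as follows; the scale, rule and transition table below are invented for illustration and do not reproduce Hiller and Isaacson's actual procedures.

```python
import random

# Monte Carlo generation of pitches, filtered by a toy rule and weighted by a
# first-order Markov chain (all musical details here are illustrative).

SCALE = ["C", "D", "E", "F", "G", "A", "B"]

def allowed(previous, candidate):
    """Toy rule standing in for a music-theoretical constraint: no repeated pitch."""
    return candidate != previous

# Toy first-order Markov chain: stepwise motion is weighted more heavily than leaps
TRANSITIONS = {p: {q: (3 if abs(SCALE.index(p) - SCALE.index(q)) == 1 else 1)
                   for q in SCALE} for p in SCALE}

def generate_melody(length=8):
    melody = [random.choice(SCALE)]
    while len(melody) < length:
        weights = TRANSITIONS[melody[-1]]
        candidate = random.choices(list(weights), weights=list(weights.values()))[0]
        if allowed(melody[-1], candidate):  # reject candidates that break the rule
            melody.append(candidate)
    return melody

print(generate_melody())
```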