History of AI

Alessandro Anatrini

3. Symbolic AI vs. Machine Learning


3.1 Cognitivism

The evolution of artificial models capable of emulating human learning abilities is intricately intertwined with the comprehension and modelling of mental processes and the theories of mind governing them. The advent of cognitivism, also referred to as computationalism, heralded a paradigm shift in the realms of artificial intelligence and cognitive psychology, moving the focus from mere subjects of study to the underlying processes and systems. Cognitivism presents a theoretical framework positing that the human mind operates akin to a computational system, processing information based on predefined rules and cognitive schemas similar to those employed by computers. Noteworthy figures in the realm of cognitivism, such as Allen Newell and Herbert Simon, spearheaded crucial concepts like cognitive processes and the significance of symbolic representation in mental processing. Additionally, Noam Chomsky's contributions to generative grammar (1957) have significantly shaped our comprehension of human language, emphasising the presence of inherent cognitive structures underlying linguistic capability.

Noam Chomsky on cognition and deep learning (with Lex Fridman, "Podcast #53," YouTube, Nov. 2019).

This theoretical framework, together with the abstractions of the functioning of biological neurons introduced earlier, laid the foundation for the development of modern artificial neural networks and has profoundly influenced the field of cognitive computing.


3.2 First AI Winter

Despite the efforts of these trailblazers, such artificial models encountered notable limitations in addressing complex and nonlinearly separable problems. During the late 1960s and early 1970s, two divergent approaches emerged regarding how systems should learn: symbolic AI and machine learning. Marvin Minsky and Seymour Papert (1969) advocated for symbolic AI, which integrated heuristic-computational rules to represent intelligence. The inherent limitations of this approach, however, led to the so-called "first AI winter" in the late 1970s.

In contrast, machine learning proposed a model wherein machines autonomously learn, rather than explicitly incorporate intelligence. This paradigm shift marked a fundamental turning point, emphasising the necessity for machines to structurally possess the technical conditions for independent learning.


3.3 Connectionism

While symbolic AI is closely associated with cognitivism, machine learning finds its correspondence in connectionism. The latter underscores the importance of connections between concepts, ideas and resources in facilitating distributed learning. Unlike cognitivism, connectionism, a field in which the philosopher Paul Churchland is a prominent figure, highlights that learning is not confined to the individual but is distributed across networks of connections among individuals, concepts and ideas (Churchland 1986). This approach broadens the concept of learning, emphasising the crucial role of interactions and discussions in learning networks, in addition to directly accessible information.


3.4 Probabilistic Processes in Music

During this historical period a prominent figure in the musical landscape was Iannis Xenakis, a composer and engineer who extensively employed statistical techniques akin to those used today in the field of AI.

Stochastic processes, such as those employed by Xenakis and formalised in "Musiques Formelles" (1962), are sequences of random events whose individual outcomes cannot be predicted but whose aggregate behaviour can be described statistically. In the early 1960s Xenakis used computers and the FORTRAN language to intertwine various probability functions that determined the overall structure and other parameters (such as pitch and dynamics) of a composition. Xenakis modelled his music as if he were conducting a scientific experiment: each instrument was akin to a molecule and would undergo a stochastic process that determined its behaviour, such as the pitch and duration of certain notes.
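The idea can be illustrated with a minimal sketch, not a reconstruction of Xenakis's actual FORTRAN programs: each note of a voice is drawn from probability distributions (here a Gaussian for pitch and an exponential for duration, both hypothetical choices for illustration), so that no single note is predictable while the whole voice has stable statistical properties.

```python
import random

random.seed(1)  # fixed seed so the sketch is reproducible

def stochastic_note(pitch_mean=60.0, pitch_sd=7.0, rate=2.0):
    """Draw one note: pitch from a Gaussian distribution,
    duration from an exponential distribution."""
    pitch = round(random.gauss(pitch_mean, pitch_sd))  # MIDI pitch number
    duration = random.expovariate(rate)                # duration in seconds
    return pitch, duration

# Each "instrument" behaves like a molecule in a gas: its individual
# notes are unpredictable, but the voice as a whole can be analysed.
voice = [stochastic_note() for _ in range(100)]

# The sample mean clusters around pitch_mean (60) even though
# no single note could have been predicted in advance.
mean_pitch = sum(p for p, _ in voice) / len(voice)
```

The same two-level logic, random at the level of the event, lawful at the level of the distribution, is what Xenakis exploited to shape large-scale sound masses.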

The graphical score of Iannis Xenakis's Pithoprakta (1955–56).

His contribution not only introduced new compositional approaches but also represented one of the earliest examples of AI in a dual role: both as a generator of musical content and as an analytical and supportive tool. This duality clearly highlights how AI can, depending on the specific implementation, simultaneously exhibit generative and analytical characteristics, a phenomenon widely observable today.