History of AI
Website: Hamburg Open Online University
Course: MUTOR: Artificial Intelligence for Music and Multimedia
Book: History of AI
Description
Alessandro Anatrini
1. Introduction
Cybernetics emerged in the post-war era in the UK and the USA with the aim of probing the complexities of the brain and understanding the fundamental mechanisms governing both organic and computational systems. It laid the foundations for machine learning, a field that took shape in the 1980s and experienced a resurgence in the 2000s. The interdisciplinary nature of cybernetics led to the creation of adaptive and autonomous machines (Walter 1950, Ashby 1954), as well as the formulation of new theories of control and communication (Shannon 1948, Wiener 1961).
Cybernetics and artificial intelligence, though often conflated, represent distinct paradigms for understanding intelligent systems. While cybernetics focuses on examining complex systems and their self-regulatory mechanisms, artificial intelligence (AI) aims to instil in machines behaviours resembling human actions. Both disciplines explore the conditions necessary for learning, but they approach the concept from differing perspectives. Artificial intelligence relies heavily on datasets to inform intelligent behaviour, whereas cybernetics emphasises grounded behaviours that express intelligence and learning capacity through interaction and feedback mechanisms.
Cybernetics and AI operate within the framework of binary logic and share a fundamental principle: intent. While the logic is universal, the intentions are culturally contingent.
2. The Pioneers
2.1 The Artificial Neuron & Boolean Logic
Several cyberneticians endeavoured to comprehend human cognition by drawing inspiration from the brain's fundamental components: neurons. The pioneering artificial neuron (AN), proposed by Warren McCulloch and Walter Pitts in 1943, stands as a milestone in the fields of artificial intelligence and computational neuroscience.
The AN is a simplified abstraction of the functioning of biological neurons in the human brain. It is grounded in a mathematical model that describes the synaptic connections between neurons and how electrical signals are integrated and transmitted through these connections.
The model envisioned by Pitts and McCulloch comprises two main components: inputs and outputs. Inputs are represented by electrical signals from other neurons, while the output is the neuron's response based on these inputs.
Here, Boolean logic, derived from the principles of mathematical logic articulated by George Boole in 1854 in his work An Investigation of the Laws of Thought, plays a crucial role. Boolean logic operates on variables that can only assume two values: 0 (false) and 1 (true). The primary operations of Boolean logic include AND, OR and NOT, from which all other operations derive.
In the context of the AN, Boolean logic is employed to define its most distinguishing feature: the activation function. This function dictates whether the neuron activates or remains inactive based on the weighted sum of its inputs. If the weighted sum surpasses a certain threshold, the neuron activates and produces an output; otherwise, it remains inactive. This activation mechanism is crucial to the operation of artificial neurons and mirrors the all-or-nothing firing behaviour of biological neurons.
Therefore, Boolean logic is fundamental for understanding how the AN processes information and makes decisions. This system provides a theoretical foundation for comprehending the computational models of artificial neurons and neural networks, paving the way for the development of more advanced concepts of activation functions used in modern artificial neural networks. The simplicity and effectiveness of Pitts and McCulloch's model demonstrate that even an extremely elementary computational system can process information and solve problems through the process of input integration and connection.
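The threshold behaviour described above is easy to convey in a few lines of code. The following minimal Python sketch (not part of the original text) implements a McCulloch-Pitts-style neuron whose Boolean output depends on whether the weighted sum of its binary inputs reaches a threshold; with suitable weights and thresholds it reproduces the AND and OR operations.

```python
def mcculloch_pitts_neuron(inputs, weights, threshold):
    """Binary threshold unit: fires (returns 1) when the weighted
    sum of its 0/1 inputs reaches the threshold, otherwise stays silent."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Boolean AND: both inputs must be active to reach the threshold of 2.
assert mcculloch_pitts_neuron([1, 1], weights=[1, 1], threshold=2) == 1
assert mcculloch_pitts_neuron([1, 0], weights=[1, 1], threshold=2) == 0

# Boolean OR: a single active input is enough to reach the threshold of 1.
assert mcculloch_pitts_neuron([0, 1], weights=[1, 1], threshold=1) == 1
assert mcculloch_pitts_neuron([0, 0], weights=[1, 1], threshold=1) == 0
```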
2.2 Rosenblatt's Perceptron
In the latter part of the 1950s, psychologist Frank Rosenblatt (1957) proposed a neural-inspired system capable of pattern recognition. The perceptron represents an evolution of the AN and was designed to recognize patterns in images through supervised learning. Utilising a learning algorithm based on updating the weights of connections between neurons in response to inputs, the perceptron proved capable of learning to distinguish between categories of objects based on their visual characteristics.
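Rosenblatt's learning rule can be sketched in a few lines: whenever the perceptron misclassifies an example, its weights are nudged towards the correct answer. The toy data and learning rate below are invented for illustration; this is a simplified sketch, not Rosenblatt's original implementation.

```python
def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (feature_vector, label) pairs with label in {0, 1}.
    Returns the learned weights and bias."""
    n = len(samples[0][0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in samples:
            # Step activation on the weighted sum, as in the artificial neuron.
            output = 1 if sum(xi * wi for xi, wi in zip(x, w)) + b >= 0 else 0
            error = target - output
            # Update rule: move the weights towards the correct decision.
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Toy, linearly separable task: "is the second feature larger than the first?"
data = [([0, 1], 1), ([1, 0], 0), ([2, 3], 1), ([3, 1], 0)]
weights, bias = train_perceptron(data)
print(weights, bias)
```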
2.3 The Pandemonium
Contemporaneous with the perceptron is Oliver Selfridge's pandemonium (1959), a different pattern recognition model whose name evokes a hall of demons: various entities called 'demons' collaborate to recognise complex patterns. Each demon is responsible for recognising a specific aspect of the input patterns, and the demons collaborate through a voting process to determine the final outcome of pattern recognition. The pandemonium illustrates the efficacy of distributing the workload among multiple, specialised components to solve complex pattern recognition problems. It thus employs a bottom-up approach, analysing the basic features of inputs and combining them to form more complex and meaningful representations.
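A toy version of that voting scheme might look like the sketch below, in which each feature 'demon' reports how strongly it detects its feature in the input and a decision demon picks the candidate with the strongest combined support. The features, letters and weights are invented purely for illustration and are far cruder than Selfridge's model.

```python
# Hypothetical feature demons: each shouts how strongly it detects its
# own feature (a vertical stroke, a closed loop) in a toy string 'image'.
feature_demons = {
    "vertical_stroke": lambda img: img.count("|"),
    "closed_loop":     lambda img: img.count("o"),
}

# Cognitive demons: each letter weighs the shouts of the feature demons.
cognitive_demons = {
    "L": {"vertical_stroke": 1.0, "closed_loop": 0.0},
    "O": {"vertical_stroke": 0.0, "closed_loop": 1.0},
    "b": {"vertical_stroke": 0.5, "closed_loop": 0.5},
}

def decision_demon(img):
    """Pick the cognitive demon shouting loudest (bottom-up voting)."""
    shouts = {f: demon(img) for f, demon in feature_demons.items()}
    scores = {letter: sum(w * shouts[f] for f, w in weights.items())
              for letter, weights in cognitive_demons.items()}
    return max(scores, key=scores.get)

print(decision_demon("|"))   # -> 'L'
print(decision_demon("o"))   # -> 'O'
```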
2.4 The Illiac Suite
During this period the first computer-assisted experiments in algorithmic composition also emerged. In 1957 the Illiac Suite for string quartet appeared, widely regarded as the first substantial work composed with the aid of a computer. Composer Lejaren Hiller, together with mathematician Leonard Isaacson, employed a Monte Carlo algorithm that generated random numbers mapped to musical properties such as pitch and rhythm. Through a series of constraints, these random values were confined to what the rules of music theory allowed, supplemented by statistical models (Markov chains) and the imagination of the two composers.
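A rough flavour of this procedure, random choices filtered through rules and weighted by transition probabilities, can be conveyed by a sketch like the following. The pitch set, transition table and 'rule' are invented for illustration and are far simpler than Hiller and Isaacson's actual constraints.

```python
import random

# Hypothetical first-order Markov chain over pitches of the C major scale.
transitions = {
    "C": {"D": 0.4, "E": 0.3, "G": 0.3},
    "D": {"C": 0.5, "E": 0.5},
    "E": {"D": 0.4, "F": 0.3, "G": 0.3},
    "F": {"E": 0.6, "G": 0.4},
    "G": {"C": 0.5, "E": 0.3, "F": 0.2},
}

def allowed(prev, candidate):
    """Toy counterpoint-style rule: forbid immediate repetition."""
    return candidate != prev

def generate_melody(start="C", length=16):
    melody = [start]
    while len(melody) < length:
        options = transitions[melody[-1]]
        # Monte Carlo draw weighted by the transition probabilities...
        pitch = random.choices(list(options), weights=list(options.values()))[0]
        # ...kept only if the rule allows it.
        if allowed(melody[-1], pitch):
            melody.append(pitch)
    return melody

print(" ".join(generate_melody()))
```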
3. Symbolic AI vs. Machine Learning
3.1 Cognitivism
The evolution of artificial models capable of emulating human learning abilities is intricately intertwined with the comprehension and modelling of mental processes and the theories of mind governing them. The advent of cognitivism, also referred to as computationalism, heralded a paradigm shift in artificial intelligence and cognitive psychology, moving the focus from observable behaviour to the underlying mental processes and systems. Cognitivism posits that the human mind operates akin to a computational system, processing information according to predefined rules and cognitive schemas similar to those employed by computers. Noteworthy figures in cognitivism, such as Allen Newell and Herbert Simon, advanced crucial concepts such as cognitive processes and the significance of symbolic representation in mental processing. Additionally, Noam Chomsky's work on generative grammar (1957) has significantly shaped our comprehension of human language, emphasising the presence of inherent cognitive structures underlying linguistic capability.
Noam Chomsky on cognition and deep learning (with Lex Fridman, "Podcast #53," YouTube, Nov. 2019).
This theoretical framework, along with the abstractions of the functioning of biological neurons, introduced earlier, laid the foundation for the development of modern, artificial neural networks and has profoundly influenced the field of cognitive computing.
3.2 First AI Winter
Despite the efforts of these trailblazers, such artificial models encountered notable limitations in addressing complex and nonlinearly separable problems. During the late 1960s and early 1970s, two divergent approaches emerged regarding how systems should learn: symbolic AI and machine learning. Marvin Minsky and Seymour Papert (1969) advocated for symbolic AI, which integrated heuristic-computational rules to represent intelligence. The inherent limitations of this approach, however, led to the so-called "first AI winter" in the late 1970s.
In contrast, machine learning proposed a model wherein machines autonomously learn, rather than explicitly incorporate intelligence. This paradigm shift marked a fundamental turning point, emphasising the necessity for machines to structurally possess the technical conditions for independent learning.
3.3 Connectionism
While symbolic AI is closely associated with cognitivism, machine learning finds its correspondence in connectionism. The latter underscores the importance of connections between concepts, ideas and resources in facilitating distributed learning. Unlike cognitivism, connectionism, a field in which the philosopher Paul Churchland is a prominent figure, highlights that learning is not confined to the individual but is distributed across networks of connections among individuals, concepts and ideas (Churchland 1986). This approach broadens the concept of learning, emphasising the crucial role of interactions and discussions in learning networks, in addition to directly accessible information.
3.4 Probabilistic Processes in Music
During this period a prominent figure on the musical landscape was Iannis Xenakis, a composer and engineer who extensively employed approaches based on statistical techniques akin to those used today in the field of AI.
Stochastic processes, such as those employed by Xenakis and formalised in "Musiques Formelles" (1963), are sequences of random events whose individual outcomes cannot be predicted but whose overall behaviour can be analysed statistically. In the early 1960s Xenakis used computers and the FORTRAN language to intertwine various probability functions and thereby determine the overall structure and other parameters (such as pitch and dynamics) of a composition. Xenakis modelled his music as if he were conducting a scientific experiment: each instrument was akin to a molecule, undergoing a stochastic process that determined its behaviour, such as the pitch and duration of particular notes.
The graphical score of Iannis Xenakis's Pithoprakta (1955–56).
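As a hedged illustration of this kind of procedure (not a reconstruction of Xenakis's actual FORTRAN programs), one can draw each sound event of an instrumental 'cloud' from probability distributions: for instance, exponentially distributed time gaps between entries and normally distributed pitches around a centre. All parameter choices below are invented for the example.

```python
import random

def stochastic_cloud(n_events=20, density=4.0, pitch_centre=60, pitch_spread=7):
    """Draw a cloud of sound events, one per 'molecule' (instrument).
    Onsets: exponential inter-arrival times (density = events per second).
    Pitches: normal distribution around a central MIDI note."""
    events, t = [], 0.0
    for _ in range(n_events):
        t += random.expovariate(density)                 # next onset time
        pitch = round(random.gauss(pitch_centre, pitch_spread))
        duration = random.uniform(0.1, 1.0)              # seconds
        events.append((round(t, 3), pitch, round(duration, 2)))
    return events

for onset, pitch, dur in stochastic_cloud():
    print(f"t={onset:6.3f}s  midi={pitch}  dur={dur}s")
```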
His contribution not only introduced new compositional approaches but also represented one of the earliest examples of AI in a dual role: both as a generator of musical content and as an analytical and supportive tool. This duality clearly highlights how AI can, depending on the specific implementation, exhibit both generative and analytical characteristics, a phenomenon widely observable today.
4. Generative Modelling
4.1 Artificial Life
In the mid-1980s a transformative shift occurred, marking the transition from the pioneering era. Cybernetics, once a dominant science, gradually faded away, making room for a new epoch characterised by advancements in computational power, the emergence of early graphical applications, and the advent of synthetic sound production. Amidst these evolving landscapes a resurgence of interest in neural networks took root, propelled by their capacity to navigate increasingly intricate configurations, building upon the foundations laid by Rosenblatt's perceptron.
Simultaneously, rule-based methodologies gained attention, propelled by the development of expert systems designed to encapsulate human expertise within logical frameworks. Inspired by cybernetics, complexity theory and the broader domain of AI, a nascent field emerged: artificial life (ALife). ALife sought to emulate vital processes using computational means, employing a bottom-up approach to construct intricate systems from elemental components. This methodology frequently simulated evolutionary and adaptive processes, which were inherently challenging to capture through top-down approaches that deconstructed systems into their constituent parts for analytical purposes. This field drew inspiration from John Conway's seminal Game of Life, a cellular automaton introduced in Scientific American in 1970, laying the groundwork for evolutionary algorithms.
Epic Conway's Game of Life.
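The rules of Conway's Game of Life are compact enough to state directly in code: a live cell survives with two or three live neighbours, and a dead cell comes alive with exactly three. The sketch below applies the rules generation by generation to a small starting pattern (a 'glider'); it is a generic illustration, not tied to any particular implementation.

```python
from collections import Counter

def step(cells):
    """One generation of Conway's Game of Life.
    cells: set of (x, y) coordinates of live cells."""
    # Count, for every candidate cell, how many live neighbours it has.
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in cells
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth with exactly 3 neighbours; survival with 2 or 3.
    return {
        cell for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in cells)
    }

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
for generation in range(4):
    print(f"generation {generation}: {sorted(glider)}")
    glider = step(glider)
```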
4.2 Evolutionary Algorithms
One such influential algorithm was the biomorph, conceived by ethologist Richard Dawkins, which recursively generated branching lines reminiscent of biological structures. In 1986 William Latham fused Dawkins's evolutionary engine with novel geometric shapes and 3D graphics, collaborating with Stephen Todd to develop the Mutator program. Mutator aimed to construct a virtual ecosystem where forms could evolve and mutate over time through manipulation of parameters and growth rules. The works produced by Mutator often exhibited organic and abstract structures reminiscent of natural and biological forms, offering profound insights into the nexus between nature, technology and human creativity. Latham's endeavours represented a watershed moment in digital art, showcasing the creative potential of evolutionary algorithms in graphic design and inspiring artists, designers and architects from the early 1990s onwards.
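In the spirit of Dawkins's biomorphs and Latham's Mutator, the core loop is always the same: a form is described by a handful of 'genes', offspring are produced by small random mutations, and a selector (the artist, in Latham's case) chooses which variant breeds next. The genome and the stand-in preference function below are invented purely for illustration.

```python
import random

# Hypothetical genome for a branching form: angle, branch length ratio, depth.
def mutate(genes, rate=0.1):
    """Return a copy of the genome with small random perturbations."""
    return {k: v + random.gauss(0, rate * abs(v) + 0.01) for k, v in genes.items()}

def preference(genes):
    """Stand-in for the human selector: here we simply prefer forms whose
    branching angle is close to 30 degrees and whose depth is large."""
    return -abs(genes["angle"] - 30.0) + genes["depth"]

parent = {"angle": 15.0, "length_ratio": 0.7, "depth": 3.0}
for generation in range(10):
    offspring = [mutate(parent) for _ in range(8)]
    parent = max(offspring, key=preference)      # breed the favourite variant
    print(f"gen {generation}: angle={parent['angle']:.1f}, depth={parent['depth']:.1f}")
```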
During the same years we also witnessed the theoretical development of the first recurrent neural networks (RNN), notably through the work of John Hopfield who, in 1982, proposed the use of a recurrent structure for data processing (Hopfield Network). However, their practical application and widespread use in the fields of music generation and natural language processing (NLP) would only occur in the subsequent decades.
4.3 Second AI Winter
By the late 1980s, however, the resurgence of interest and funding in artificial intelligence and artificial life encountered headwinds due to limitations in the real-world applicability of the introduced algorithms, precipitating what became known as the "second AI winter."
4.4 Experiments in Musical Intelligence
In the 1980s, the composer and researcher David Cope, with his Experiments in Musical Intelligence (EMI), strongly advocated that computer-assisted composition could embrace a deeper understanding of music from three distinct perspectives:
- analysis and segmentation into parts,
- identification of common elements and patterns that define what is perceived as style, and
- recombination of musical elements to create new works.
His work was based on the idea of recombining elements from previous compositions to create new musical pieces. Many of the greatest composers of all time have explored this concept, consciously or not, as they reshaped existing ideas and styles in their work, e.g. the ReComposed series by Deutsche Grammophon. With EMI, Cope aimed to replicate this process through the use of computers and their computing power. Cope's work laid the groundwork for many of the current AI models on the market. Initially, music and its attributes are encoded into databases, then recombinant segments are extracted using specific identifiers and pattern matching systems. From there, musical segments are categorised and reconstructed in a logical and musical order using augmented transition networks until new music output is produced. This type of 'regenerative' construction of music is reminiscent of many of today's neural networks such as MuseNet by OpenAI, which uses a transformer-based architecture to generate compositions with multiple instruments in a variety of styles.
David Cope's "Chorale (after Bach)," from the album Bach by Design (1994).
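Cope's actual system relied on sophisticated pattern matching and augmented transition networks; the toy sketch below conveys only the general recombinant idea, splitting source melodies into short segments and chaining segments whose boundary pitches match. The source phrases and segmentation scheme are invented for illustration.

```python
import random

# Hypothetical 'database' of source phrases (as MIDI pitch lists).
phrases = [
    [60, 62, 64, 65, 67],
    [67, 65, 64, 62, 60],
    [64, 65, 67, 69, 67],
    [67, 69, 71, 72, 72],
]

def segments(phrase, size=3):
    """Cut a phrase into overlapping recombinant segments."""
    return [phrase[i:i + size] for i in range(len(phrase) - size + 1)]

database = [seg for p in phrases for seg in segments(p)]

def recombine(length=6, start_pitch=60):
    """Chain segments whose first pitch matches the melody's last pitch."""
    melody = [start_pitch]
    for _ in range(length):
        candidates = [s for s in database if s[0] == melody[-1]]
        if not candidates:
            break
        melody += random.choice(candidates)[1:]
    return melody

print(recombine())
```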
Other developments in this period continued to explore the boundaries of computational creativity. Composer Robert Rowe devised a system enabling a machine to deduce metre, tempo and note durations as someone plays freely on a keyboard. Furthermore, in 1995 Stephen Thaler's company, Imagination Engines Inc., utilised reinforcement learning to train a neural network with popular melodies, resulting in the creation of over 10,000 new musical choruses. This method involves rewarding or penalising the model based on its decisions, aiming to achieve predefined goals.
The shift towards AI systems that autonomously develop their understanding of musical elements forms the cornerstone of contemporary, advanced musical intelligence. In the mid-1980s the first AI research endeavours also commenced at IRCAM, a research and artistic production institute in Paris founded by Pierre Boulez about a decade earlier. Specifically, their focus lay primarily on the development of rule-based systems for sound synthesis (Formes) and environments based on LISP for the symbolic manipulation of musical structures (PatchWork, OpenMusic). Research has relentlessly continued to the present day, addressing the creation of instrumental sample databases and assisted orchestration (Orchidea).
5. New AI & Embodied Intelligence
During the "Second AI Winter," scientist Rodney Brooks, celebrated for his groundbreaking work in robotics, pioneered an alternative approach to traditional AI that eschewed reliance on complex, internal symbolic representations (Brooks 1999). Instead, Brooks focused on fostering direct interaction between robots and their environment, giving rise to what came to be known as the "New AI." This departure from the conventional paradigm of explicit rule encoding marked a significant shift in AI research.
Contrary to the conventional methods of developing intricate algorithms and symbolic knowledge models, New AI emphasises the use of emergent behavioural patterns and experiential learning. This approach enables robots to engage with their surroundings in a more autonomous and adaptable manner, a methodology often referred to as "behaviour-based robotics." Central to New AI is the concept of embodied intelligence, positing that intelligence emerges not solely from cerebral processing but from the dynamic interplay between an agent's body and its external environment. Brooks exemplified this concept through the development of physical robots like the renowned Cog, engineered to move and interact with the physical world and humans in a naturalistic manner.
5.1 Systems Thinking in Video Games
Throughout the 1990s New AI and ALife served as wellsprings of inspiration for new-media artists. Notably, their influence permeated the socio-cultural sphere, profoundly impacting video games such as SimCity and Civilization. SimCity, conceived by Will Wright and initially launched in 1989, gained widespread popularity during the 1990s. The game introduced the concept of systems thinking into city design, requiring players to balance diverse factors such as transportation, housing and environmental concerns to cultivate successful urban centres. This approach underscored the notion that complex social systems could be effectively represented in a game or simulated environment, with implications extending to real-world urban planning. Indeed, professionals in fields such as engineering, architecture and design increasingly embraced a holistic approach to urban planning, mirroring the strategies depicted in SimCity.
Video of SimCity, first release (1989).
The influence of New AI and ALife in the 1990s transcended the realm of video games, permeating various aspects of culture and technology.
5.2 ALife in the Arts
In contemporary art the integration of artificial life (ALife) into artistic expression has evolved into a distinct art form. Notable figures such as Nell Tenhaaf, Susie Ramsay and Rafael Lozano-Hemmer contributed significantly to this movement. In 1999 Lozano-Hemmer initiated the Art and Artificial Life International Awards (VIDA), which aimed to foster artistic exploration within the realm of artificial life. Over its sixteen-year span the award recognized artists exploring the intersection of technology and creativity. Among the recipients were robotics artists Louis-Philippe Demers, Ken Rinaldo and Bill Vorn, whose work drew inspiration from Rodney Brooks's New AI movement. Similarly influenced by Brooks, artist and media theorist Simon Penny coined the term aesthetics of behaviour to describe artwork involving artificial agents interacting with the real world.
6. Contemporary AI Models
6.1 Iamus
In the last 20 years the use of AI in the field of music has undergone significant advancement. Consider for example Iamus (2010), a computer cluster powered by Melomics’s technology and developed at the University of Málaga. Iamus is based on the use of genetic algorithms capable of autonomously creating scores. As in a natural selection process, a random sequence of notes is first generated, then mutated and finally analysed by a set of rules. These rules are based on music theory in order to compose contemporary classical music, such as Opus One and Admus (2010), the former recorded by the London Symphony Orchestra in 2011.
Admus, Málaga Philharmonic Orchestra.
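Melomics's actual system is far more elaborate (it evolves developmental genomes rather than raw note lists), but the generate-mutate-evaluate loop described above can be sketched as a simple genetic algorithm: random note sequences are mutated and scored against a rule-based fitness function, and the best candidates survive. The rules below are invented purely for illustration.

```python
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]   # C major, one octave (MIDI pitches)

def random_phrase(length=8):
    return [random.choice(SCALE) for _ in range(length)]

def mutate(phrase, rate=0.2):
    return [random.choice(SCALE) if random.random() < rate else n for n in phrase]

def fitness(phrase):
    """Toy rule set: reward stepwise motion, penalise repeated notes,
    reward ending on the tonic."""
    score = 0
    for a, b in zip(phrase, phrase[1:]):
        interval = abs(a - b)
        score += 2 if 1 <= interval <= 2 else 0
        score -= 1 if interval == 0 else 0
    score += 3 if phrase[-1] == 60 else 0
    return score

# Natural-selection-style loop: keep the fittest phrases, breed mutants from them.
population = [random_phrase() for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [mutate(random.choice(survivors)) for _ in range(20)]

print(max(population, key=fitness))
```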
6.2 Variational Autoencoders: Magenta, Jukebox and RAVE
In 2011 Google inaugurated Google Brain, a company department dedicated to conducting research in the field of AI. In 2016 Google released Magenta, an ecosystem of tools, models and resources designed to support the creation and processing of artistic content through AI (Engel et al. 2017).
One of the main components of Magenta is MusicVAE, short for Music Variational Autoencoder. In brief, MusicVAE encodes input data, such as scores, into a continuous latent space and decodes it back by means of two neural networks: an encoder and a decoder. The encoder converts high-dimensional input data into a low-dimensional probability distribution in the latent space, providing a mathematically tractable representation. The decoder reconstructs the original data from a point in the latent space, allowing MusicVAE to learn a compact and structured representation of the data and to generate new musical material consistent with the characteristics of the input. On top of this compress-and-decompress scheme, MusicVAE adds a hierarchical decoder so that the generated output also respects the long-term structure of the input data. MusicVAE has been successfully applied in various musical contexts, including AI-assisted composition, the creation of sound environments for video games and the automatic generation of musical content.
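The encoder-decoder idea is easiest to see in code. The sketch below is a generic, bare-bones VAE in PyTorch (not Magenta's MusicVAE, which uses recurrent networks and a hierarchical decoder): the encoder maps an input to the mean and variance of a latent Gaussian, a sample is drawn with the reparameterisation trick, and the decoder reconstructs the input from that latent point.

```python
import torch
import torch.nn as nn

class TinyVAE(nn.Module):
    def __init__(self, input_dim=128, latent_dim=16):
        super().__init__()
        # Encoder: input -> parameters of a Gaussian in latent space.
        self.enc = nn.Linear(input_dim, 64)
        self.to_mu = nn.Linear(64, latent_dim)
        self.to_logvar = nn.Linear(64, latent_dim)
        # Decoder: latent point -> reconstruction of the input.
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, input_dim))

    def forward(self, x):
        h = torch.relu(self.enc(x))
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterisation trick: sample z = mu + sigma * noise.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def vae_loss(x, recon, mu, logvar):
    # Reconstruction error plus KL divergence to the standard normal prior.
    recon_err = nn.functional.mse_loss(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

x = torch.rand(32, 128)                 # a batch of dummy 'score' vectors
model = TinyVAE()
recon, mu, logvar = model(x)
print(vae_loss(x, recon, mu, logvar))
```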
In addition to MusicVAE, Magenta includes a series of other tools and models for creating and manipulating musical content. One of these is PerformanceRNN, an LSTM-based recurrent neural network designed to model polyphonic music with expressive timing and dynamics. Notably, PerformanceRNN also includes a layer of constraint that allows for the specification of chords and keys, providing more control over the generated music. Finally, Magenta also includes NSynth, a sound synthesis model capable of generating sound from individual samples rather than with oscillators and wavetables, as found in conventional synthesisers.
VAEs are used not only for symbolic music generation, as in Magenta, but also for neural audio synthesis. This involves generating sound sample by sample, without relying on oscillators or wavetables. The popular Jukebox, introduced by OpenAI in 2020 (Dhariwal et al. 2020), is based on this family of models. Similarly RAVE (Real-Time Audio Variational auto-Encoder), developed specifically for real-time neural synthesis, also stems from this model family (Caillon & Esling 2021). RAVE is highly versatile, enabling unsupervised learning on extensive audio datasets without the need for labels or annotations, making it particularly suitable for performative and research contexts.
From the 2010s onwards, in addition to the autoencoder architecture discussed above, other types of models emerged that would go on to revolutionise the field of AI. Here is a brief overview of those most frequently used in creative contexts.
6.3 Recurrent Neural Networks: SampleRNN by Dadabots
The already mentioned recurrent neural networks (RNNs) are widely used for processing sequential data, such as texts and time series. RNNs are able to capture long-term dependencies between different parts of a sequence, which makes them well suited to music generation. This is the case with the Dadabots duo (Carr & Zukowski 2018), who in 2016 used a modified version of SampleRNN, an RNN architecture for raw audio, to generate a parody album, Bot Prownies, based on NOFX's punk rock album Punk in Drublic.
Bot Prownies features music generated autoregressively with SampleRNN. During training the model processed the NOFX album the equivalent of 26 times; across the various training epochs it produced roughly 900 minutes of audio, from which about 20 minutes were selected by hand to assemble the album.
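Autoregressive generation means the network predicts a distribution over the next audio sample given the samples produced so far, then feeds its own output back in as input. The PyTorch sketch below shows that sampling loop with a small GRU over quantised 8-bit samples; it is a generic illustration, not the Dadabots' actual SampleRNN, which uses a multi-scale hierarchy of recurrent tiers.

```python
import torch
import torch.nn as nn

QUANT = 256   # 8-bit quantisation: each audio sample is one of 256 classes

class TinySampleModel(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(QUANT, 64)       # previous sample -> vector
        self.rnn = nn.GRU(64, hidden, batch_first=True)
        self.head = nn.Linear(hidden, QUANT)       # distribution over next sample

    def forward(self, prev_samples, state=None):
        h, state = self.rnn(self.embed(prev_samples), state)
        return self.head(h), state

@torch.no_grad()
def generate(model, n_samples=16000, temperature=1.0):
    """Generate audio one sample at a time, feeding each output back in."""
    sample = torch.tensor([[QUANT // 2]])          # start from 'silence'
    state, out = None, []
    for _ in range(n_samples):
        logits, state = model(sample, state)
        probs = torch.softmax(logits[:, -1] / temperature, dim=-1)
        sample = torch.multinomial(probs, 1)       # draw the next sample
        out.append(sample.item())
    return out

audio = generate(TinySampleModel(), n_samples=100)  # untrained model: noise
print(audio[:10])
```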
6.4 Generative Adversarial Networks: WaveGAN
Generative adversarial networks (GANs), first introduced by Goodfellow et al. (2014), are composed of two neural networks, a generator and a discriminator, which compete against each other. The generator creates synthetic data while the discriminator tries to distinguish between real and synthetic data. This model is typically used to generate images that are both locally and globally coherent. A variant of this model, DCGAN, forms the basis of WaveGAN, a popular model for unsupervised neural audio synthesis (Donahue et al. 2019).
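The adversarial setup can be summarised in a few lines: the discriminator is trained to output 'real' for training data and 'fake' for the generator's output, while the generator is trained to fool it. The PyTorch sketch below shows one training step on dummy data; it is a generic GAN, not WaveGAN itself, which uses one-dimensional convolutions over audio waveforms.

```python
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
D = nn.Sequential(nn.Linear(data_dim, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.randn(32, data_dim)                  # stand-in for real training data

# Discriminator step: push real towards label 1, fakes towards label 0.
fake = G(torch.randn(32, latent_dim)).detach()
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake), torch.zeros(32, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: try to make the discriminator label the fakes as real.
fake = G(torch.randn(32, latent_dim))
g_loss = bce(D(fake), torch.ones(32, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```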
6.5 Convolutional Neural Networks: Text-to-image Tools
Convolutional neural networks (CNNs) are specialised in processing grid-structured data such as images. They use convolutional filters to detect patterns and features in images, making them effective for object recognition, classification and other image-related tasks. Popular text-to-image models such as Stable Diffusion are based on architectures derived from them, such as U-Net.
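A convolutional layer slides a small filter across the input, so the same pattern detector is reused at every position. The sketch below applies a hand-crafted 3x3 edge-detection kernel to a dummy image with PyTorch's conv2d; in a trained CNN the kernel values would be learned from data rather than written by hand.

```python
import torch
import torch.nn.functional as F

# Dummy 1-channel 'image': a bright square on a dark background.
image = torch.zeros(1, 1, 8, 8)
image[0, 0, 2:6, 2:6] = 1.0

# A single hand-crafted 3x3 filter that responds to vertical edges.
kernel = torch.tensor([[[[-1.0, 0.0, 1.0],
                         [-1.0, 0.0, 1.0],
                         [-1.0, 0.0, 1.0]]]])

response = F.conv2d(image, kernel, padding=1)
print(response[0, 0])   # strong responses along the square's left and right sides
```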
6.6 Transformers: Large Language Models
The transformer model, introduced in the paper "Attention Is All You Need" (Vaswani et al. 2017), has revolutionised the field of natural language processing (NLP). Transformers use self-attention mechanisms to capture relationships between the words of a text, making them particularly effective for tasks such as automatic translation, text generation and natural language understanding. Popular large language models (LLMs) such as ChatGPT and Copilot are based on this type of model, and transformer components also appear in the aforementioned Stable Diffusion, where a transformer-based text encoder conditions the image generation.
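At the heart of the transformer is scaled dot-product self-attention: every position in a sequence builds queries, keys and values, compares its query against all keys, and takes a weighted average of the values. The sketch below implements a single attention head from the formula in Vaswani et al. (2017), omitting the multi-head projections, masking and positional encodings of the full architecture.

```python
import torch

def self_attention(x, w_q, w_k, w_v):
    """Single-head scaled dot-product attention.
    x: (sequence_length, model_dim); w_q/w_k/w_v: (model_dim, head_dim)."""
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / k.shape[-1] ** 0.5     # how much each token attends to each other
    weights = torch.softmax(scores, dim=-1)   # each row sums to 1
    return weights @ v                        # weighted mix of the values

seq_len, model_dim, head_dim = 5, 16, 8
x = torch.randn(seq_len, model_dim)           # e.g. embedded words or notes
w_q, w_k, w_v = (torch.randn(model_dim, head_dim) for _ in range(3))
print(self_attention(x, w_q, w_k, w_v).shape) # -> torch.Size([5, 8])
```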
References
Ashby, William R. Design for a Brain (second printing, corrected). New York: John Wiley & Sons, 1954.
Audry, Sofian. Art in the Age of Machine Learning. Cambridge: The MIT Press, 2021.
Brooks, Rodney A. Cambrian Intelligence: The Early History of the New AI. Cambridge: The MIT Press, 1999.
Caillon, Antoine and Philippe Esling. “RAVE: A variational autoencoder for fast and high-quality neural audio synthesis.” arXiv:2111.05011, 2021. Url: https://doi.org/10.48550/arXiv.2111.05011.
Carr, C.J. and Zack Zukowski. “Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands.” arXiv:1811.06633, 2018. Url: https://doi.org/10.48550/arXiv.1811.06633.
Chomsky, Noam. Syntactic Structures. The Hague: Mouton & Co., 1957.
Churchland, Paul M. “Some Reductive Strategies in Cognitive Neurobiology.” Mind 95, no. 379 (1986): 223-253.
Cope, David. “Experiments in musical intelligence (EMI): Non‐linear linguistic‐based composition.” Interface 18 no. 1-2, (1989): 117-139.
Dhariwal, Prafulla, Heewoo Jun, Christine Payne, Jong Wook Kim, Alec Radford and Ilya Sutskever. “Jukebox: A Generative Model for Music.” arXiv:2005.00341, 2020. Url: https://doi.org/10.48550/arXiv.2005.00341.
Diaz-Jerez, Gustavo. “Composing with Melomics: Delving into the Computational World for Musical Inspiration.” Leonardo Music Journal 21, (2011): 13-14.
Donahue, Chris, Julian McAuley and Miller Puckette. “Adversarial Audio Synthesis.” arXiv:1802.04208, 2019. Url: https://doi.org/10.48550/arXiv.1802.04208.
Engel, Jesse, Cinjon Resnick, Adam Roberts, Sander Dieleman, Mohammad Norouzi, Douglas Eck and Karen Simonyan. “Neural audio synthesis of musical notes with WaveNet autoencoders.” In Proceedings of the 34th International Conference on Machine Learning (ICML17), edited by D. Precup, Y. Whye Teh, 1068-1077, 2017.
Gardner, Martin. “Mathematical Games - The fantastic combinations of John Conway's new solitaire game ‘Life’.” Scientific American 223, (1970): 120-123.
Goodfellow, Ian J., Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville and Yoshua Bengio. “Generative Adversarial Networks.” arXiv:1406.2661, 2014. Url: https://doi.org/10.48550/arXiv.1406.2661.
Grünberger, Christoph, ed. The Age of Data: Embracing Algorithms in Art & Design. Salenstein: Niggli, 2022.
Hopfield, John J. “Neural networks and physical systems with emergent collective computational abilities.” Proceedings of the National Academy of Sciences 79, no. 8 (1982): 2554-2558.
McCulloch, Warren S. and Walter Pitts. “A logical calculus of the ideas immanent in nervous activity.” The bulletin of mathematical biophysics 5 (1943): 115-133.
Minsky, Marvin Lee, and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. Cambridge: MIT Press, 1969.
Rosenblatt, Frank. “The Perceptron: A Perceiving and Recognizing Automaton.” Report no. 85-460-1, Cornell Aeronautical Laboratory, 1957.
Selfridge, Oliver G. “Pandemonium: A Paradigm for Learning.” In Symposium on the Mechanization of Thought Processes, edited by D.K. Blake and A.M. Uttley, 511-531, 1959.
Shannon, Claude E. “A Mathematical Theory of Communication.” The Bell System Technical Journal 27 (1948): 379-423.
Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser and Illia Polosukhin. “Attention Is All You Need.” arXiv:1706.03762, 2017. Url: https://doi.org/10.48550/arXiv.1706.03762.
Walter, William Grey. “An Imitation of Life.” Scientific American 182, no. 5 (1950): 42-45.
Wiener, Norbert. Cybernetics or Control and Communication in the Animal and the Machine. Cambridge: MIT Press, 1961.
Xenakis, Iannis. “Musiques Formelles.” Special issue of La Revue Musicale no. 253-254 (1963).