My composition The Instrument consists of an interactive live-electronics computer system programmed in SuperCollider, with supplementary audio material; it can be played by any number of musicians connected to the computer through its audio inputs. Any musical instrument may be used to trigger the system, as may other kinds of resonating objects that are not traditional musical instruments. The composition does not involve a score or any other prescribed instructions for performance. This chapter will focus on the patterns and behaviors according to which the computer and the musicians interact and, through this interaction, produce a musical structure in real time. Throughout the chapter I will show how the musical identity of the composition is based on the combination of freedom and fixity.
I will start with an account of The Instrument’s performance history (Part 2). Since the first performance in 2013, the work has had several realizations, reshaping the composed content and presenting it in different staged contexts. For example, what was initially conceived as an installation – an open-ended piece, not presented in a concert situation – later transformed into a “normal” concert piece when the same live-electronics system was presented in the form of an improvised set. The content of the composition itself (the programmed code) has remained largely the same throughout these realizations. I will give a detailed account of the different realizations and propose that a single source of composed material can manifest itself in such a variety of ways because of the various combinations of its free and fixed properties. In that sense, the different derivations of the composed material of The Instrument are a direct result of the fundamental idea of my research.
In Part 3, I will concentrate on the composition. This part will be divided into three subsections, focusing on the structure, the audio material (used for the digital processing), and the interactive features of the live-electronics system. In section 3.1, I will use the term “flexibility” to refer to The Instrument’s musical form, which emerges only in the course of performance, based on the interactions between the musicians and the computer. This flexibility is generated by the system’s preset patterns (the SuperCollider code), but the patterns are activated in real time, triggered by the musicians’ live input. In that sense, the freedom of the musicians to act spontaneously (in the sense of not acting according to any prescribed path, but of following impromptu impulses) is necessary for the structuring of the music, while at the same time, this freedom is always intertwined with fixed elements (the responsive behaviour of the system, and the audio material used for processing).
In section 3.2, I will focus on the audio material processed by the computer. This consists of a collection of samples, mostly of people reading a list of music-related words and phrases: names of instruments, musical terms, and so on. The audio samples form the building blocks or “raw” material from which the structure will be molded during a performance. On the various occasions The Instrument was played, I continued to collect new recordings by additional speakers, so that the collection of samples has been constantly evolving. This collecting process suggests freedom at a different level from the structural one: in this case, it is the sonic material, the pre-processed “core” from which the musical form will be later generated, which undergoes modification.
In section 3.3, I will describe the real-time interactive system of The Instrument. This system combines several different patterns of interaction between the computer and the musicians, including direct and indirect modes of operation, depending upon how the computer responds to the musicians’ input. Furthermore, the system is designed to include both preset and random patterns. In order to gain a wider perspective on the subject of real-time interactivity, I will discuss two different schemes of interactive computer systems, those of Robert Rowe and Simon Emmerson (3.3.1). Each involves its own taxonomy and, by comparing their approaches to mine, I will be able to uncover some of the musical potential of The Instrument.
In Part 4, I will present several different viewpoints from researchers and musicians on live-electronics systems and what these systems may provide for performers. This will contextualize my project and allow me to reflect on the ideas which have guided me while composing the work, such as the various different functions which can coexist within a single musical work. In section 4.1, I will discuss the work Voyager by composer, performer, and researcher George Lewis. Like The Instrument, Voyager consists only of computer code, which runs an interactive live-electronics system. Lewis regards the computer as an autonomous improviser: it is capable not only of interacting with the musicians but also of generating independent sounds without any input from the musicians. Although comparable to Voyager, The Instrument differs in significant ways from Lewis’ work. The Instrument is, for example, designed to be more subordinate to its human performers. Rather than establishing the computer as an autonomous improviser, my focus has been on the real-time generation of a musical structure. I have tried to create a structure which can be stretched and reshaped and with which the musicians interact by improvising: the system in itself does not act autonomously.
In section 4.2, I will focus on the concepts of “open work” and “work-in-movement.” These terms, suggested by semiotician Umberto Eco, point to a notion of incompleteness in composed works. Works of this kind are not entirely fixed, but rather remain explicitly open, providing an array of possible paths which the performer can follow in their interpretation. This idea is further developed by composer, performer, and researcher Henrik Frisk as an ongoing negotiation between composer, system, musical material, and performer, for example in his composition Repetition Repeats All Other Repetitions. In my work, I tried to embody the work-in-movement concept within the structure of the composition, which can take different shapes depending on the actions of the performers. My notion of openness is not restricted to only one particular part within a modular structure – an idea which is in itself rather limited, since it allows freedom only in a controlled manner, that is, within an otherwise determinate compositional fabric. In The Instrument, the entire structure is generated in real time. The computer code does define certain fixed variables – a “blueprint” or framework for the musical structure – but the way in which this design will be rendered into music is dependent on the interaction between the musicians and the electronics during the performance. This quality of the system, reinforced by the absence of any prescribed score, embodies a more substantial degree of openness or freedom in comparison with Frisk’s approach.
Finally, I will discuss the idea of the computer as a musical instrument (section 4.3). I will suggest that a reciprocal relation exists between the musicians and the computer system: the musicians trigger and control the system, which cannot function without them, but the computer also influences the behavior of the musicians. This relationship between the computer and the musicians links The Instrument with certain present-day ideas on technology. I will discuss the thoughts of two scholars: the philosopher and sociologist Bruno Latour, and Aden Evens, whose research focuses on digital studies and contemporary culture. Both link technology to openness. Neither Latour nor Evens regards technology as having a determinate function, as a “means to an end.” Instead, they propose a more open view, based on the palpable range of possible paths which technology can open up, and which are not necessarily foreseen in advance. In this sense, a performance of The Instrument can be perceived as a process of learning in which the musicians can freely explore the in principle infinite characteristics of the system, and through this exploration create music. I will conclude this chapter with a summary of the topics discussed, reflecting on some further possibilities which this work might generate.
2. Performance History
The creation of The Instrument began at a week-long workshop/residency, organized by Musica in July 2013, tutored by composer and sound artist Volker Staub and composer Wim Henderickx. The initial concept I had in mind was to develop a sound installation with two basic preconditions: firstly that it would allow for audience participation, and secondly that it would be presented in a format other than that of a staged concert.
The first performance was realized at the end of my residency, on July 6, 2013. Two large metal triangles were hung from the ceiling, and their sound, captured by two microphones, was used to trigger the computer. For audience members, who were walking across the performance space and striking the triangles, the combination of percussive sounds and electronic “responses” produced a sonic environment which they could freely explore.
Already during the residency period, while working towards the first version of The Instrument, I began to realize that the same work could in fact function not only as an installation but also as a concert piece. This would suggest a “normal” stage presentation with a formal beginning and end, contrary to the open-ended and continuous presentation of the sound installation. In addition, presenting The Instrument as a concert piece would also mean that the interactive computer system could be triggered onstage by performing musicians instead of by an audience of “passers-by.”
Apart from these alterations, there is no fundamental difference in the basic design. The composition – comprising the interactive system and the pre-recorded audio samples – allows for a range of realizations of the same material. I regard these as different manifestations of one single work: The Instrument allows for realizations which are distinctively different from each other, while at the same time retaining certain stable elements connecting the different versions (I will elaborate more on this in section 4.2). At this point, I include an outline of the performance history of The Instrument, demonstrating the different possibilities for rendering the same material:
- In a performance at the Laaktheater (The Hague, December 8, 2013), The Instrument was included as part of a concert program alongside other musical pieces. The performance opened with two musicians playing onstage – contrabass and voice. This was then followed by an invitation to audience members to come on stage and replace the musicians, taking over the performance. The participants could use their voices or explore the sonic possibilities provided by the contrabass. No further instructions were given. This performance could be described as a hybrid of interactive installation and concert piece. The event provided the opportunity to establish a staged concert presentation, and to disrupt that conceptual frame as audience members became active participants in the performance. The idea fitted well with the rest of the concert program, which included compositions that were performed off-stage or explored audience participation.
- I have performed The Instrument in Israel, the UK, and the Netherlands in 2013 and 2014 with two different groups – a voice–bass duo (together with singer Elisenda Pujals), and Hatzatz (together with viola player Maya Felixbrodt and Tomer Harari on MIDI keyboard). The composition has become part of the standard repertoire of both groups, exclusively performed as a concert piece. The rest of the repertoire of the voice–bass duo consists mainly of notated works. Hatzatz, by contrast, has created music in collaborative processes, often by exploring the possibilities of non-notated compositions. The Instrument, however, seems to have sat well in both habitats. Its openness has provided a wide-ranging palette of performative possibilities that works well in both settings.
- In 2014 and 2015, the composition went through another metamorphosis with guitar player Roberto Garretón and myself on contrabass. We approached The Instrument not as an autonomous composition, but as part of a more elaborate “toolkit.” This toolkit consisted of several (other) interactive live-electronics systems, involving our musical instruments in combination with computers. The Instrument fitted within this constellation: the instruments functioned as sources for two separate computers, giving us the opportunity to play The Instrument alongside other software, creating spontaneously chosen combinations of several of the “tools” available. The overall result was shaped using this variety of acoustic/software instruments as a free-improvised set.
- On September 30, 2016, I presented The Instrument at the Nutshuis in The Hague, again as an installation. This time I used two sets of percussion instruments for the sound input, hanging from the ceiling of the venue’s main hall, available for the public to play. This became the central hub around which the rest of the evening’s performances took place. The sound of the percussion and the electronic system could be heard in various parts of the building, sometimes even during the performance of other pieces in the program. In that sense, the ongoing, continuous character of the original installation idea kept resonating throughout the entire evening, giving The Instrument not only the role of a musical work in its own right, but also of a central axis around which the rest of the program revolved.
To sum up: any performer, on any instrument, professional or otherwise, could participate in playing The Instrument. It could be featured either as a separate piece or as part of a more involved performance setting. At the same time, a presentation of The Instrument would always maintain certain compositional features, which is why I still treat all renditions of The Instrument as a single work. The ability of this work to re-adapt to different performance situations, combined with the “stability” resulting from the retention of certain compositional features, points towards an interesting and challenging combination of freedom and fixity. In the next section I will describe the different features of The Instrument, paying attention to the specific choices I have made during the creation processes, in order to better explain how they reflect ideas of freedom and fixity.
3. The Composition: Structure, Audio Material, and Interactive System
In the following subsections, I will describe the design of The Instrument, divided into three different components: the structure, the audio material, and the interactive computer system. In each part, I will elaborate on how various free and fixed properties are intertwined, thus forming a composed framework for improvisation.
3.1 Structure and Flexibility
The software of The Instrument uses a list of pre-recorded audio samples, processing them into shorter particles with varying properties, for example, length or envelope (amplitude shape). This procedure takes place in real time, that is, during the performance: at each moment, the preset code selects one particular sample which will then be processed into an electronic sound with distinct characteristics. The changes between the different samples and the resulting sonic transformations become the defining feature of the musical form. In this sense, the structure of The Instrument is based on (composed) preset patterns, but it is generated in real time, as a live, electronic process.
The patterns according to which the samples are selected determine a fixed number of times that each sample will be processed before moving on to the next one. While this number is fixed, the time it takes to move between one sample and the next is indeterminate, since it depends on the interaction with the musicians who operate the system: upon each trigger, the computer selects one sample and emits one processed sound. Furthermore, the code offers the possibility of cycling through the collection of samples endlessly, so that the musicians can decide on the length and shape of the performance through their interactions with the system in real time and not according to any composed prescription. And finally, if more than one musician is playing, the computer can run several independent systems simultaneously (each one triggered separately by a different audio input of the computer), so in that sense more than one “Instrument” can function at the same time, creating parallel layers of sound which combine into a single multi-layered structure. The structure of The Instrument can therefore be described as flexible or elastic: it is influenced both by fixed features and by the behavior of the musicians during the performance.
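The trigger-driven cycling described above can be sketched in a few lines. The actual system is written in SuperCollider; the following Python sketch only illustrates the logic, and the sample names and the `repeats_per_sample` parameter are hypothetical stand-ins for values defined in the code.

```python
import itertools

def sample_cycler(samples, repeats_per_sample):
    """Yield one sample per trigger: each sample is processed a fixed
    number of times before moving on to the next, and the whole
    collection cycles endlessly, as described above."""
    for sample in itertools.cycle(samples):
        for _ in range(repeats_per_sample):
            yield sample

# Each call to next() corresponds to one trigger from a musician;
# the *timing* of the triggers, and hence the form, stays indeterminate.
cycler = sample_cycler(["violin.wav", "slow.wav", "pluck.wav"],
                       repeats_per_sample=4)
first_eight = [next(cycler) for _ in range(8)]
# first_eight == ["violin.wav"] * 4 + ["slow.wav"] * 4
```

Running several independent instances of such a cycler, one per audio input, corresponds to the parallel layers mentioned above.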
3.2 Audio Material: Recorded Samples
In order to collect the necessary samples, I recorded various audio materials. This task, which has formed a significant part of the composition process, manifests the idea of freedom at another level than the real-time structuring of the music. The sampled material has kept evolving independently of the structure, allowing The Instrument to remain open also at a more preliminary compositional level – that of the “raw” material (the unprocessed samples).
Initially, the source material was supposed to consist only of pre-recorded “untrained” voices reading a list of music-related terms: names of musical instruments, performance instructions (“slow,” “loud,” etc.) and music-related actions (“pluck,” “bow,” “improvise,” etc.). Later, with the intention of providing additional samples, further recordings were made, and the inventory of source material became more diverse. In subsequent versions of The Instrument, additional material was included: sung or played parts, which served as a contrast to the original spoken samples. In another version, I added the possibility of real-time recording: the computer’s audio inputs are recorded and continuously updated in a buffer, thus forming an additional source for the live processing.
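The continuously updated recording buffer mentioned above is, in essence, a circular (ring) buffer: new input overwrites the oldest audio, so the buffer always holds the most recent material. The Instrument implements this with a SuperCollider buffer; the following minimal Python analogue is only meant to illustrate the principle, with a toy buffer size.

```python
class RingBuffer:
    """Sketch of a continuously updated recording buffer: incoming
    samples overwrite the oldest ones, so the buffer always contains
    the most recent audio for live processing."""
    def __init__(self, size):
        self.data = [0.0] * size
        self.pos = 0  # next write position

    def write(self, block):
        for sample in block:
            self.data[self.pos] = sample
            self.pos = (self.pos + 1) % len(self.data)

buf = RingBuffer(4)
buf.write([1.0, 2.0, 3.0, 4.0, 5.0])  # the fifth sample wraps around
# buf.data == [5.0, 2.0, 3.0, 4.0]
```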
As demonstrated by all of the above cases, The Instrument’s structure remains open to “absorb” different materials. These will define the most basic characteristics of the performed result – the “color” of the processed audio. The search for specific voices and other materials, and the recording process itself, gave rise to a particular type of involvement in the composition process which sets The Instrument apart from the other case studies discussed in this thesis: there is an ongoing freedom to shape the basic sonic material, a process which is separate from any decision made regarding the structure. Here, freedom is embodied by the fact that certain choices have to be made before each performance: selecting specific audio material and distributing it through the structure of the composition in order to “charge” the structure with the necessary content.
3.2.1 “Auxiliary” Influences: The Significance of the Recording Process and Searching for Audio Materials
Why is a structure which remains open to absorbing different materials important? Why not confine the idea of freedom to the structure generated during the performance? One answer is that in this way The Instrument can embrace a range of materials (spoken, played, sung, pre- or live-recorded), adapting to specific situations or circumstances. For example, for the performance of The Instrument in Israel, the original text was translated into Hebrew, Arabic, and Russian. Instead of reusing the original material, this version was tailored to a particular situation, relying on local ingredients and addressing “native” ears.
This kind of reciprocal influence between the performing circumstances and the composition’s structure is also described by Henrik Frisk in relation to his work process on the composition Repetition Repeats All Other Repetitions:
Many circumstances, some of which are auxiliary to the actual process of ‘composing’ (i.e. the tasks traditionally assigned to the labor of the ‘composer’) had a great influence on the way the piece developed. However, in the end it would turn out that these ‘circumstances’ or ‘processes’ were not in fact ‘auxiliary’: They were, or would become, an integral part of the process of composing (now also in the extended sense of the term). Some of these were planned and others came about as a result of the ways in which the project developed. (Frisk, 2008, pp. 45–6)
The role of the recorded samples, initially understood as a response to a technical demand (to generate the necessary audio material for the real-time processing), developed substantially, facilitating greater and more nuanced artistic expressions. It has allowed the composition to adapt itself to the different situations in which it was performed and the musical result to be influenced by the changing circumstances of each performance.
3.3 The Real-Time Interactive System
The main processing function of the computer program operates through a sound synthesis method called granular synthesis. It slices the specified audio sample into tiny sound grains or particles, here with durations between 100 and 1000 milliseconds. The program controls various parameters of each grain, such as duration, envelope, or frequency (pitch shifting), so that each grain has distinct sound properties. Additionally, the individual grains are grouped into discrete sets of successive grains, which I call grain tails. The length of each grain tail is determined by the number of grains it contains (between 3 and 9) and by the duration of each grain (between 100 and 1000 milliseconds). These grain tails, derived from the original vocal (or other) materials, form the basic sound blocks of The Instrument. The accumulation of these sound blocks provides an electronic soundscape or “sound mass,” with distinct colors (the characteristics of the individual grains) and densities (the number of grains being distributed), based on the original source material. Throughout each performance, these sound masses undergo a series of fluctuating textures and grain densities, which together form the overall musical structure.
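Under the constraints stated above (3 to 9 grains per tail, grain durations between 100 and 1000 milliseconds), the construction of one grain tail can be sketched as follows. This is not the actual SuperCollider code: the parameter names, the envelope placeholder, and the pitch-shift range are illustrative assumptions.

```python
import random

def make_grain_tail(rng):
    """Sketch of one grain tail: 3-9 grains, each 100-1000 ms long.
    The envelope label and the rate range are assumed for illustration."""
    n_grains = rng.randint(3, 9)
    grains = [{"dur_ms": rng.randint(100, 1000),
               "env": "perc",                    # placeholder envelope shape
               "rate": rng.uniform(0.5, 2.0)}    # pitch shift (assumed range)
              for _ in range(n_grains)]
    total_ms = sum(g["dur_ms"] for g in grains)  # length of the whole tail
    return grains, total_ms

rng = random.Random(0)
grains, total_ms = make_grain_tail(rng)
# total_ms necessarily lies between 3 * 100 and 9 * 1000 milliseconds
```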
The system operates by triggering the onsets of the sound blocks through either a direct response to the performer’s audio input or an indirect response, in which the frequency of triggering of grain tails is determined by the performer’s activity level, calculated as the number of input onsets per second. The musicians can freely switch between these two modes during a performance.
Both modes operate in response to the performer’s activity levels: the “busier” it gets, the higher the triggering rate will be. What is then the difference between the two modes? For each mode of operation, a different elastic quality is superimposed onto the sound properties and materials. Performing The Instrument is in fact based on the exploration of these different qualities: how each one affects the transformation of sound material and how it allows the performers to find an emerging mode of engagement or interaction between themselves and the system. In other words, both the direct and indirect modes of response allow the structure of The Instrument to become flexible: it will be shaped, in real time, according to the decisions of the performers. But the fact that each mode reveals a different responsivity – creating a different interaction with the computer system – presents different states of flexibility.
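The difference between the two modes can be made concrete with a small sketch. Again, this is not the actual SuperCollider implementation: the mapping from activity level to triggering rate is a simplified assumption (here, one grain tail per second for each onset per second).

```python
def trigger_times_direct(onsets):
    """Direct mode: one grain tail is emitted per input onset."""
    return list(onsets)

def trigger_rate_indirect(onsets, window_start, window_end):
    """Indirect mode: the computer triggers on its own, at a rate
    derived from the activity level (onsets per second) measured over
    a time window. The scaling is an illustrative assumption."""
    span = window_end - window_start
    count = len([t for t in onsets if window_start <= t < window_end])
    return count / span  # grain tails per second: busier input, faster triggering

onsets = [0.1, 0.4, 0.5, 1.2]                   # onset times in seconds
direct = trigger_times_direct(onsets)           # one response per onset
rate = trigger_rate_indirect(onsets, 0.0, 1.0)  # 3 onsets in 1 s -> 3.0 tails/s
```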
The combination of freedom and fixity exists not only in the way in which the system sets off the grain tails, but also in the way it controls specific parameters of each individual grain. While some of these parameters are mapped from the data extracted from the audio input (level, frequency, etc.), others (duration, envelope, speed, etc.) are determined through preset patterns which do not rely on the audio input. These latter patterns do not just demonstrate a deterministic nature but may at certain points also act randomly. Various degrees of randomness are embedded in the SuperCollider code and incorporated in combination with the input of the players or with the preset pattern values. I deliberately use the quantitative term “degree” in relation to randomness, since the programming language can in fact define a specific proportional value for distributing the control over the system’s generated values, divided between responsiveness, preset control, and randomness. For example, by multiplying the grain’s amplitude value (mapped from the input level) with random values between 0.9 and 1.1, arbitrary micro-fluctuations can be achieved, giving the whole system a slight instability in its reaction. This will loosen, up to a certain degree, the sense of predictability displayed by the system and introduce a sense of freedom.
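The amplitude example just given (the mapped input level multiplied by random values between 0.9 and 1.1) can be written out directly; the `spread` parameter below corresponds to the “degree” of randomness described above, and the function name is a hypothetical one.

```python
import random

def grain_amplitude(input_level, rng, spread=0.1):
    """Micro-fluctuation sketch: the grain's amplitude is mapped from
    the input level, then multiplied by a random value between
    1 - spread and 1 + spread (0.9 and 1.1 for spread = 0.1)."""
    return input_level * rng.uniform(1 - spread, 1 + spread)

rng = random.Random(42)
amps = [grain_amplitude(0.5, rng) for _ in range(100)]
# every value stays within 10% of the mapped level 0.5,
# yet no two triggers respond identically
```

Setting `spread` to 0 would make the system fully predictable at this level; larger values shift the balance from responsiveness towards randomness.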
A combination of patterns which are predetermined, or random, or which rely on the performer’s input creates a flexible, unfixed, and open-ended structure, generated in real time and shaped both by the behavior of the musicians and by the preset features. This kind of structure demonstrates a combination of freedom and fixity, embodied within a real-time interactive computer system. The structure can stretch or compress, condense or become rarefied. It is a live structure, which comes into existence only through the interaction with the musicians.
In the following section I will discuss two alternative schemes for live-electronics interactive systems, in order to gain a broader perspective on how my work deals with freedom and fixity.
3.3.1 Two Paradigms of Interactive Systems: Instrument–Player (Rowe) and Local–Field (Emmerson)
The various features of The Instrument’s interactive system – its direct and indirect modes, its preset patterns, and its real-time input-dependent parametric control – bring to mind the frequently used taxonomy of computer systems which divides them into “instrument” and “player,” as suggested by electronic music composer and researcher Robert Rowe:
Instrument paradigm systems are those that treat the machine contribution as an extension or augmentation of the human performance. Player paradigm systems present the machine as an interlocutor – another musical presence in the texture that has weight and independence distinguishing it from its human counterpart. (Rowe, 2001, p. 302)
In the most basic, technical sense, the direct triggering mode bears a resemblance to the instrument paradigm: the computer augments the sound of the performer by emitting the grain tails in direct response to the instrumental onset. The indirect triggering mode resembles the player paradigm, since no input is required for the computer to create sounds (although the rate of triggering is influenced by the input).
Another relevant viewpoint, concerning the relation between the sound of a live musician and electroacoustic sounds, is suggested by Simon Emmerson, a composer of electroacoustic music. Emmerson suggests the terminology “local” and “field,” which, according to him
has its roots in a simple model of the situation of the human performer (as sound source) in an environment. Local controls and functions seek to extend (but not to break) the perceived relation of human performer action to sound production. While field functions place the results of this activity within a context, a landscape or an environment. (Emmerson, 1994, p. 31, italics in original)
Emmerson’s focus is mainly on issues of amplification and diffusion in works that combine live and electronic sounds; nevertheless, his ideas can also make a useful contribution to the present discussion of live processing. In The Instrument, the differences between local and field processes are made apparent by the duration of the electronic responses (the length of one grain tail or the accumulation of several) compared to the source sound that triggered it: a shorter response will be perceived as local, whereas longer responses will create a more extended, global field of sound. In this way, a single grain tail directly triggered by an onset provides a local function which extends the original acoustic sound source, while the accumulation of several grain tails provides the function of a field by creating an electroacoustic environment.
The ideas proposed by Rowe and Emmerson should not be taken as strict binary classifications, deriving the function of an electronic music system either from a player- or instrument-related paradigm, or from either local or field processes. Such extreme cases would probably produce results that are predictable and not interesting from a musical point of view. A more expanded viewpoint, and one which is probably more realistic in terms of performed music, would be to look within the range of possibilities that might emerge between the extremes of these ideas. This is well understood by Emmerson himself, who asserts that “the listener’s perspective on the relationship of local to field may vary continuously and hence so can the composer’s aims. Local is continuous to field: the borderline varies with musical context and may in fact not exist” (Emmerson, 1994, p. 33). This idea is also understood by Casserley: “Clearly many processes can fall into more than one category according to how they are used. In addition, these are not discrete conditions; there is a continuum between them, and there are many areas of ambiguity” (Casserley, 1997, n.p.).
Also, the idea of a middle ground between the instrument and the player is not a novel one: in Voyager (which I will discuss more extensively in the next section), George Lewis suggests that these “two models of role construction in interactive systems should be viewed as on a continuum” (Lewis, 2000, p. 34). Additionally, Frisk – who applies Rowe’s ideas to the discussion of his own work – states that “these are not fixed positions but possible starting points” (Frisk, 2008, p. 21). How then does a performance of The Instrument explore this middle ground between the instrument and the player or between the local and the field? Furthermore, how can my work contribute to the already existing discussion?
The SuperCollider patch generates the structure of the music during the performance, as a musical form which is indeed composed, yet also flexible: it is based on a balance between predetermined properties and freedom with which the musicians interact in real time. This flexible structure can provide a middle ground or continuum between the local and the field, between the instrument and the player paradigms. For example, the local–field continuum is embodied in the rate at which the grain tails are being triggered: it is governed in real time by the performer’s activity rate, which produces what is perceived as either a more direct, local response (separate, single grain tails) or a field process (the accumulation of various grain tails). Also relevant is the balance between the duration and shape of the individual grains, the overall duration of the grain tails (the aggregate of several grains), and the triggering rate of the system’s response. Since the triggering rate is influenced by the performers’ activity, the response has to be carefully adjusted in order not to cascade into an over-dominating texture (when the musicians’ level of activity is high and the durations of the grains/grain tails are too long) or to evaporate too fast into complete silence (when the activity level is too low and the durations are too short). Also the continuum between instrument and player paradigms is explored in and through The Instrument: the way in which the system interacts with the input is kept unpredictable to a certain extent. The samples and grain tails are generated by several functions which may be directly or indirectly responsive to the input, thus demonstrating a behavior on the range between a more dependent instrument and a more autonomous player.
All of the above ideas are combined within one system, so the musicians cannot rely on a single, stable response pattern. Furthermore, the design of the system is based on constant change: processes of transformation in the distinct characteristics of the sound (changing samples) and textural density (single versus multiple grain tails). Performing The Instrument can be perceived as an ongoing exploratory process (even in repeated performances with the same musicians), which requires the players to stay alert, whereas more straightforward solutions would provide simpler, more predictable conditions for the performance, and hence, be less surprising. The result is a complex network that combines the different paradigms – the instrument (as an “augmentation” for the actions of the musicians) and the player (as an independent “interlocutor”), the local (as a single triggered electronic response) and the field (as the accumulation of several responses) – and integrates all of them into a single musical structure.
4. Contextualization and Discussion
In the following three subsections I will present and discuss the views of several musicians and researchers whose ideas are relevant within the context of developing and reflecting on The Instrument. George Lewis’ composition Voyager is a classic example of the use of a computer in a musical context. Lewis composed this work during the late 1980s, working on it at STEIM in Amsterdam, and since then it has played an important role in discussions of music technology and improvisation. I will compare Lewis’ work to mine from the perspective of the computer as an autonomous improviser. Another important concept is the work-in-movement. Proposed by Umberto Eco, this is a paradigm of a flexible musical structure that incorporates freedom into the performance. I will discuss this concept in the work of Henrik Frisk and compare his interpretation with mine. Finally, I will discuss several ideas by Bruno Latour. Latour suggests that the notion of openness is inherent in technology, and, as such, his thought has a bearing on my work. Continuing from Latour’s views, I will also present some ideas by Aden Evens, who has discussed the computer as a musical instrument. In each subsection, I will note direct links to The Instrument as a live-electronics interactive system which calls for improvisation during its performance.
4.1 The Computer as an Improviser
In the work Voyager, George Lewis explores the idea of the computer as an improviser. Like The Instrument, Lewis’ system is designed as computer code which runs an interactive electronic system. It is played by and together with live musicians who influence the system while also responding to it. The comparison between the two works raises important issues regarding the autonomous role of the computer during a performance and the interaction between humans and computers within the domain of music. How autonomous is the behavior of the computer and in what ways is it dependent on the musicians? What function can an interactive computer system have within a musical performance?
Lewis describes Voyager as a “virtual improvising orchestra” which is responsive to the actions of “up to two human improvisors, who are either performing on MIDI-equipped keyboards or playing acoustic instruments through ‘pitch followers’, devices that try to parse the sounds of acoustic instruments into MIDI data streams” (Lewis, 2000, pp. 33–4). In addition to being responsive to the player(s), Voyager also functions as an independent system: “In the absence of outside input, the complete specification of the system’s musical behavior is internally generated. In practical terms, this means that Voyager does not need to have real-time human input to generate music” (Lewis, 2000, p. 36). These two modes of behavior – the responsive and the independent – make Voyager’s system a computational improvising partner to the musician(s) in an improvised dialogue: it grants the machine the role of an “active contributor to the unfolding creative process” (McCormack and d’Inverno, 2016, p. 98), or a “collaborative musical improvisor” (Linson, Dobbyn, Lewis, and Laney, 2015, p. 3).
The way in which Voyager creates communication between the musicians and the computer, as two autonomous yet interactive sound-generating streams, can be compared to the interaction within a group of human improvisers. Every interaction between the computer and the musicians takes place sonically, without involving other channels of control:
Since the program exhibits generative behavior independently of the improviser, decisions taken by the computer have consequences for the music that must be taken into account by the improvisor. With no built-in hierarchy of human leader/computer follower – no ‘veto’ buttons, foot-pedals or physical cues – all communication between the system and the improvisor takes place sonically. (Lewis, 2000, p. 36)
Yet there are also certain limitations to this system, for example “the fact that the computer is given no information about the sound itself – the timbre. Only the pitch is fed to the computer” (Frisk, 2008, p. 22). This is, according to Frisk, a fundamental limitation in Lewis’ work, since “the particularity of that which is ‘said’ is encoded in the sound rather than the pitch” (Frisk, 2008, p. 23). Frisk finds Voyager lacking in its ability to improvise:
When I listen to George Lewis and Roscoe Mitchell improvising together with/in Voyager, that is what I hear: I hear that the interaction between the two musicians and the computer is of a different order than the interaction between Lewis and Mitchell. (Frisk, 2008, p. 73)
In his own work Frisk tries to bridge the gap between the computer and the human player by creating a system which is sensitive and responsive to timbre changes, as well as being able to produce convincing timbral results, thus establishing a situation which is as close as possible to what happens between human improvisers.
Both Lewis and Frisk have designed computer systems that are capable of improvising. They seek a non-hierarchical relation between the computer and the human musician, regarding the two as equals. The Instrument, on the other hand, is designed to be more subordinate to its human performers: the electronic sounds are triggered in real time by the musicians and would not exist without their constant input. In comparison to Lewis’ or Frisk’s approach – both highlighting improvisation as the starting point for musical interaction – my focus with The Instrument is on the musical structure: its ability to stretch or absorb different audio materials, and to combine improvisation and pre-determined structural features through the interactive features of the system. Rather than an equal counterpart for an improvising musician, the electronic system of The Instrument should be perceived primarily as composed, even if it demonstrates flexibility (of its real-time generated structure) and unpredictability (due to certain autonomous or random patterns which are embedded in the computer code), and even if the performance involves improvisation (by the musicians).
Following this interpretation, I would situate The Instrument as part of what Lewis regards as
the overwhelming majority of computer music research and compositional activity [which] locates itself . . . within the belief systems and cultural practices of European concert music. Voyager, [on the other hand] exemplifies an area of musical discourse using computers that is not viewed culturally and historically as a branch of trans-European contemporary concert music and, moreover, is not necessarily modeled as a narrative about “composition.” (Lewis, 2000, p. 33)
This view, which is rooted in Lewis’ commitment to his African-American tradition and involves a certain sense of criticism, provides a dichotomy which is not entirely relevant to my work. Even though The Instrument should be perceived first and foremost as a composed work, it still contains a substantial degree of freedom. The fact that the computer is subordinate to the musicians does not disrupt this idea: it embodies it within a composed structure.
Lastly, it is also worth mentioning that as several decades have already passed since the creation of Voyager, some of the methods Lewis has employed might seem outdated. However, an important statement made by Lewis helps to shift the focus from the technological issues towards the musical ones – and these are still relevant today:
Voyager is not asking whether machines exhibit personality or identity, but how personalities and identities become articulated through sonic behavior. Instead of asking about the value placed . . . on artworks made by computers, Voyager continually refers to human expression. Rather than asking if computers can be creative and intelligent – those qualities, again, that we seek in our mates, or at least in a good blind date – Voyager asks us where our own creativity and intelligence might lie – not ‘How do we create intelligence?’ but ‘How do we find it?’ Ultimately, the subject of Voyager is not technology or computers at all, but musicality itself. (Lewis, 2000, p. 38)
In a similar way to Lewis’ approach in Voyager, the focus of The Instrument is not on any technological research question; rather, it is a musical question: How to create freedom within a (live-electronics, interactive) composition? And what is the reciprocal relationship between structure and improvisation in such a case? Rather than placing the technical issues at the center, my main concern in this work is how they address ideas of freedom and structure.
4.2 The Work-in-Movement
Earlier in this chapter I described how The Instrument can provide different performance possibilities. The fact that a single work can yield an array of potential realizations suggests a link to the concept of the “work-in-movement.” In his book The Open Work, Umberto Eco proposes a “search for suggestiveness [which] is a deliberate move to ‘open’ the work to the free response of the addressee” (Eco, 1989, p. 9). In other words, the artwork does not determine one fixed interpretation, but allows for multiple readings, depending on its addressee. Eco describes this by using the term “open work.” He goes further to present a more elaborate idea, and one that is also more relevant in the context of this research. It is a more drastic degree of openness, which he calls the work-in-movement. Works of this kind
characteristically consist of unplanned or physically incomplete structural units. . . . In other words, the author [of a work-in-movement] offers the interpreter, the performer, the addressee a work to be completed. . . . It installs a new relationship between the contemplation and the utilization of a work of art. (Eco, 1989, pp. 12–23)
Applying this concept to music implies that the completion of the work is entrusted to the performer. He or she shares the process of “organizing and structuring” the music, in collaboration with the composer (Eco, 1989, p. 12). This idea fits well with each of the compositions discussed in this chapter – their open, free qualities, and the involvement of the performers in impromptu playing processes. The idea that the performing musicians (and not only the composer) are involved in the organization and structuring of the music stands at the very basis of this research.
At the same time, Eco’s idea as it stands does not provide a more objectified understanding of the nature of the musical work itself. What are the qualities we would need to allow for this kind of openness, and how should the work-in-movement be constructed? Within the context of artistic research, Eco’s ideas seem too general, and further elaboration is required.
Such elaboration can be found in Henrik Frisk’s approach to composition. Frisk builds upon Eco’s approach, suggesting a further interpretation of work-in-movement:
It was in the radical way that we [Frisk and his collaborator, guitarist Stefan Östersjö] gave up the notion of the work, and even the open work and established a re-interpretation of Eco’s work-in-movement that the full consequences of my altered composer role became evident. The work-in-movement is focused on the process rather than the result, in itself not a novel idea at all. However, in the context of computers and interaction and in combination with the idea of the augmented score, the focus on the process allows for an altered view on musical interpretation as well as composition. The score as a growing container of musical experience, all of which is open-sourced to allow for any kind of transformation but with the request to let the interactive narrative, the collaboration, guide the additions, alterations and removals of material from the score. (Frisk, 2008, p. 104)
Here, the work remains ever open, as an ongoing negotiation between composer and performer. Making use of his composition Repetition Repeats All Other Repetitions as a case study, Frisk describes a process of “collaboration, negotiation and interaction” (Frisk, 2008, p. 91) between himself, in the role of the composer, and guitar player Stefan Östersjö, in the role of the performer. The composition is subject to constant transformations, developing from one version to the next through various performances. According to Frisk’s approach, this kind of process brings into question the very concept of the musical work itself, suggesting an unfixed entity instead of the more traditional notion of a stable one. The score is no longer a representation of a complete and finished work; instead, it functions as a dynamic, mutable set of instructions.
Frisk’s thought and work imply that the traditional roles of composer and performer should be redefined. A system of feedback between the two can be established, through which the work can be repeatedly re-modified for different performance occasions. It requires a commitment not only to the interactive narrative between composer and performer but also to that between instruments and electronics.
As much as Frisk’s work may seem progressive, in the sense that it represents a commitment to fluidity and constant change, I would claim that a more genuine integration of openness and structure is possible, creating a work which profoundly manifests both. Once the composing process is over, Frisk’s work-in-movement essentially leaves a structure which is modular: it has to be “completed” during the performance by “putting the pieces together,” but it does not remain unrestrictedly open to any kind of input from the performer: it does not enable an unbiased, free exploration of the structure’s properties, but rather calls for the interpretation of prescribed material. For example, in Repetition Repeats All Other Repetitions, Frisk describes how the different “sections are ‘modular’ and may be combined in any way the performer sees fit” (Frisk, 2008, p. 185). This approach to the score is indeed open, as “the performer is not even restricted to using entire sections as building blocks [but] the sections themselves may be broken down into smaller units” (p. 185); nevertheless, the openness is still, to a great extent, contained or determined by the score. Frisk emphasizes the fact that “one must be careful not to distort the identities [of the notated materials] beyond recognition” (p. 185), which narrows down the idea of freedom.
With Frisk’s approach, flexibility is reduced to a simple scheme in which each part is interchangeable; yet, in my opinion, this fails to unlock a fuller potential inherent in the structure itself: to stretch the whole work into different shapes, to transfigure it into different appearances or different realizations by relying on the performer’s freedom and their proactive involvement in the creative process.
With The Instrument, I have tried to widen the capacity to explore freely the musical properties of the work by addressing the idea of openness on a different level. The musical structure is created in real time, and the work is (re-)shaped by the interactive processing of original source material. Instead of introducing modular fragments of partly notated and partly improvisatory material, the musical form of The Instrument is generated live, during the performance, through the real-time processing of the “raw” source material, the audio samples. The entire structure of The Instrument is based on the notion of flexibility, since the selection of the samples and the processing relies on the live interactions with the musicians. The Instrument does not only “offer the interpreter . . . a work to be completed” (Eco, 1989, p. 12); instead, it allows for the emergence of a musical structure during and through the performance. The performance itself is based on this real-time generated structure and on it alone, in the absence of a notated score or any other method of instruction. In this sense, The Instrument manifests Frisk’s idea of focusing “on the process rather than the result” (Frisk, 2008, p. 104) adequately, since through it “the outcome of the process exceeds any foreknowledge of it; the musician manages to not foresee even when the productive algorithm is known in advance” (Evens, 2005, p. 150). The Instrument’s system enables exactly this state: freedom is a result of the structure, even though the latter contains certain fixed attributes. The result remains genuinely free, and does not have to restrict the musician in any way in order to keep the identity of the work intact.
4.3 Instruments, Technology, and Openness
In the previous sections, I suggested two viewpoints that could be used to describe my work: the first perceives The Instrument as a composed work, in which the role of the predetermined structure is prominent (as opposed to Lewis’ notion of Voyager as a computer system which is an independent improviser and, as such, provides less concrete directions for the performance); the second draws on the work-in-movement concept, based on the different performance possibilities which are made available by the material. In this section I propose an additional perspective, by describing this composition as an “instrument.” This latter concept does not relate only to the idea of a musical instrument but can also be comprehended as a device in a more general sense, thereby creating a direct link to technology.
During a performance of The Instrument, the musicians are connected to the computer’s sound input, influencing a variety of parameters that shape the live-electronics sounds. In this sense, the performers are operating a device: they control an instrument which cannot function autonomously and therefore should not be considered as an independent actor; rather, it is an object which is activated by its user – without human interference it would do nothing. At the same time, the musicians also engage in a dialogue with the system: it responds, demonstrating unexpected behavior at times because of its autonomous or random elements. This two-way interaction raises several questions: What is the effect of The Instrument on the musicians? In other words, how can a live electronic system, which is triggered by and responds to the musicians, also influence their behavior – their sound, their instrumental and physical gestures? And in what way can a musical instrument provide its player with freedom? By perceiving this composition as an instrument, a reciprocal relation is established: the instrument is being used, while, at the same time, also affecting its user. I will shed light on the correlation between my composition and the musicians who perform it.
A possible starting point from which to answer these questions would be to acknowledge that an instrument – a musical one, as much as any kind of device – does not just serve as a means to an end by fulfilling a certain predetermined function. Rather, an instrument holds within itself a wide range of possible effects, some of which may be anticipated while some others remain unforeseen. In this sense, a musical instrument should be perceived not only according to its sonic and tactile features but also according to the palpable range of musical paths that it might open up for the musicians.
This understanding of a (musical) instrument corresponds with the thought of Bruno Latour, a philosopher and sociologist of science who focuses on the role of technology. For Latour, technology is not (only) instrumental: it does not exclusively serve a designated and predefined aim, but, rather, its purpose remains open in the sense that unknown paths and unanticipated experiences can be revealed to its user. Technology, instead of filling a “functional utility . . . has never ceased to introduce a history of enfoldings, detours, drifts, openings and translations that abolish the idea of function as much as that of neutrality” (Latour, 2002, p. 255). Latour describes the way in which an instrument can open up the path of its user:
With [an instrument] in hand, the possibilities are endless, providing whoever holds it with schemes of action that do not precede the moment it is grasped. . . . [An instrument offers its user the possibility of] exploring heterogeneous universes that nothing, up to that point, could have foreseen and behind which trail new functions. (Latour, 2002, p. 250)
It is the unforeseen possibilities provided by an instrument that are important for Latour, rather than a single, functional, predetermined purpose. The users of an instrument may discover new paths and unforeseen directions, directions that they would not initially be aware of. The instrument is being used, and, at the same time, it affects its user, opening up for them a broader horizon.
A computer used as a musical instrument influences its performers in the same manner. In his book Sound Ideas: Music, Machines, and Experience (2005), Aden Evens discusses music in the digital age – how we perceive it, the way we play it, and how this is influenced by contemporary technology. According to Evens, the computer is not a transparent device. It does not simply react to the actions of its user but carries with it an added, unforeseen value which is significant and cannot be ignored. The computer is not only an extension of its user but also a countervailing force. However, rather than posing this as a problem, this is exactly what grants the computer a role as an expressive musical instrument: “It offers to the musician a resistance; it pushes back. The musician applies force to the instrument, and the instrument conveys this force, pushing sound out and pushing back against the musician” (Evens, 2005, p. 159). And it is precisely through this act – playing with the computer’s resistance – that music is created. In this sense, Evens agrees with Latour, by perceiving a musical instrument not as a means to an end, but as a portal to unknown paths:
Like any instrument, a musical instrument is a means. The player makes sound by means of the instrument, which transduces force into vibration. But a musical instrument is no mere means: it does not disappear in its use. The musical instrument remains opaque, and one does not know how it will respond to a given gesture. (Evens, 2005, p. 82)
Also a computer, in order “to become an expressive instrument, to allow the generation of ideas, . . . must not disappear, neither into the sensation nor the desire of the user. On the contrary, the computer must become resistant, it must become a machine for posing problems.” (Evens, 2005, p. 164)
A performance of The Instrument is directed by the exploration of the computer system: the players interact with the live-electronics system, and gradually, throughout the course of the performance, they get to know the system’s “behavior.” Through the interaction with the system, the behavior of the musicians also inevitably alters: forced away from their common performance practice, either through the attempt of “taming” the machine (for example, by trying to create a denser or sparser sound from the electronics, which might lead to unexpected results due to the system’s autonomous or random features), or by reacting to the system’s sounds (for example, playing together with the computer in an open dialogue), the performers will find themselves on unknown musical territory. This can be perceived as a learning process in which music is created simultaneously: “To play is to learn (to play), and one invents in concert with one’s instrument” (Evens, 2005, p. 82). During this process, the “persistence” of the computer – its idiosyncratic behavior, unforeseen results, and independent or random patterns – is translated into music. In this sense, the system is not only fed with input, it “pushes back,” providing its user with constant feedback.
Of the four case studies which are the focus of this dissertation, The Instrument is the most “open” one. It does not include a score or any instructions for performance, and its structure is not only flexible – featuring elastic length and shape, and a capacity to absorb different material (audio samples) – but also interactive: the live-electronics system depends on the input of the musicians to create the sounds, to shape them, and to generate, in real time, a musical form. At the same time, this “open” quality does not exclude certain fixed properties, which are just as essential to the structuring: pre-recorded samples whose order of appearance is set in advance, and designed patterns of interaction according to which the processing of the samples is triggered during the performance.
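The combination of a fixed sample order with input-dependent triggering can be illustrated schematically. The following is a hypothetical Python sketch, not the composition’s SuperCollider code; the sample names, the amplitude threshold, and the function itself are invented for illustration.

```python
from itertools import cycle

# Fixed property: the order in which the pre-recorded samples appear.
# The names are placeholders, not the actual samples of the composition.
SAMPLE_ORDER = ["sample_a", "sample_b", "sample_c"]

def triggered_samples(input_amplitudes, threshold=0.5):
    """Free property: *when* each sample is processed depends on the live input.

    Each time the musicians' input level exceeds the (illustrative)
    threshold, the next sample in the fixed order is selected for
    processing; the order is fixed, the timing is not.
    """
    order = cycle(SAMPLE_ORDER)
    return [next(order) for amp in input_amplitudes if amp > threshold]
```

In this schematic sense, the structure (the sample order) is composed in advance, while the timing and density of its unfolding are left to the performers: the fixed/free combination at the heart of the work.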
Perhaps the main conclusion that can be drawn from this case study is that a single work can be observed from different perspectives: The Instrument is a composition, as much as it is a work-in-movement. It is an interactive live-electronics system as well as a musical instrument. The coexistence of these different identities was not just realized in retrospect, as part of a scholarly analysis; it was taken into consideration during the composing process itself, providing a rich “toolbox” for composing. The multiplicity of identities raises questions about the nature of the musical work, and of musicality in itself. For composer and performer alike, these are open questions which encourage the exploration of unknown musical territories and set off unexpected interactions between different creative modes.
One final question which can be asked is: what comes next? Which musical paths are worth exploring further? Perhaps the issue which has remained the most underdeveloped in The Instrument is that of the computer’s autonomy. Although the interactive system allows for certain autonomous features, a more thorough exploration seems necessary. For that, a more in-depth study of the technical possibilities and of the existing knowledge should be undertaken, for example by introducing code based on machine-learning paradigms, rather than the task-specific algorithms which I have used here. The design of such a system should also be modular, allowing for greater flexibility and complexity within a network which combines separate units of machine listening, audio analysis, and sound generation. Without abandoning the notions of a composed work or a musical instrument, such a system would exhibit far more autonomous behavior, making the interaction during the performance more “musical”: a true dialogue between the computer and the musicians, within the context of improvisation.