[Untitled, 2012]


[Untitled, 2012] – score excerpt

1. Introduction

[Untitled, 2012] is a composition for solo contrabass and electronics. Unlike my other compositions, it involves no live processing or real-time interaction between a musician and a computer, just a pre-recorded soundtrack which is played uninterruptedly throughout the performance (with one exception, which will be discussed later).

This case study provides the opportunity to revisit the theme of freedom and fixity within a particular situation: a comprehensively composed musical environment. The soundtrack plays a dominant role in this composition, as it presents a strong pole of fixity and provides the skeleton for the entire structure. In this sense, [Untitled, 2012] supports philosopher Andy Hamilton’s claim that “pre-realized electronic music stands at the far limit of pre-structuring since, although possibly possessing spontaneity at the level of composition, at the level of performance or ‘sounding’ it is fixed” (Hamilton, 2007, p. 197). The soundtrack in [Untitled, 2012] can thus be perceived as a fixed time grid within which the performer can exercise real-time freedom. I will discuss the relation between fixed soundtrack and live performance (Part 2.1) and introduce two concepts that establish a notion of freedom in the interaction of musicians with fixed media: musical time scales (2.1.1) and groove (2.1.2).

The score of [Untitled, 2012] can be described as particularly detailed (at least in comparison to my other compositions). It confronts the player with specific instructions regarding pitch, rhythm, dynamics, articulation, and other sound-production techniques such as bow position, bow pressure, glissandi, and portamenti. This level of precision remains constant throughout the entire composition. Nevertheless, despite the complex notation, the degree of freedom here is not less than in the other works; rather, the approach is different. The dense notational environment implies a link to interpretation, perhaps more than it does to improvisation. I will elaborate on the differences and similarities between these two concepts in Part 2.2, making reference to various notational approaches which create a space for collaboration between composer and performer: “non-finished” forms and complex notations.

In order to place [Untitled, 2012] within a broader context, the chapter will include a discussion of three other relatively recent compositions (Part 3): Plex by Agostino Di Scipio (1991), Bump by Amnon Wolman (2005), and Bokeh by Janco Verduin (2014). Besides the general resemblance between these works and mine (all combine live performance with fixed media), the decision to include a comprehensive discussion of these works was influenced by my involvement in them as a performer. Plex, Bump, and Bokeh have become parts of my regular repertoire as a bass player. I learned the scores, practiced the pieces, and performed them on several occasions. In addition, I had the chance to discuss the works with the composers themselves, asking them directly about the way they understand the relation between their works and the idea of freedom. Hence the discussion will not only be based on my personal approach to composition but also on my experience as a performer and my conversations with the composers of these works. In that sense, the three compositions form a path that leads towards [Untitled, 2012] as an experiment in artistic research, musically formulating my ideas on freedom and fixity through the “collision” between fixed media and live performance.

Finally, a note about the title of the work: clearly, [Untitled, 2012] provides nothing more than a temporary placeholder for a more proper name. The reason is this work’s lack of “mileage”: its sole performance was its premiere in 2012. Unlike the other compositions presented in this thesis, [Untitled, 2012] has not had the chance to grow between one performance and the next and thus should not be regarded as a fully developed work (compare with the case study The Instrument). However, despite its relative compositional “immaturity,” [Untitled, 2012] has been included in this thesis because of how it relates to such central concepts as notation and interpretation, and because it demonstrates my personal approach to these concepts based on the notions of freedom and fixity.

2. Music-theoretical Context

Here I will present various music-theoretical concepts that will help to explain the interaction of the musicians with the soundtrack and how the notation weaves real-time freedom around the fixed electronic time grid.

2.1 Fixed Media and Live Performance

[Untitled, 2012] is my own take on a fixed media and live performance work. Common during earlier stages in the development of electronic music, this format has become in a certain sense obsolete, pushed aside by later technological developments based on more reciprocal relationships between computer and musician, such as live-processing and interactive computer systems. An inherent limitation of the format is that the performing musician is straitjacketed by the tape’s progress, the latter imposing significant constraints on the freedom of interpretation. The challenge to overcome this limitation is a creative opportunity in itself, which formed the drive to compose [Untitled, 2012].

The question of how to employ freedom effectively in a fixed environment without giving up a clear relationship between the performer’s part and the soundtrack influences the notation and the way in which the score is aligned to the soundtrack. Instead of allowing the bass player to play freely within designated time frames (for example, during an entire section of the soundtrack) – a somewhat looser compositional approach – I chose to use more precision in the notation, challenging the musician to be flexible within the fixed time grid while remaining in direct relation to its sound content. In the following subsections I will introduce two concepts that locate my approach within a broader context: musical time scales and groove.

2.1.1 Musical Time Scales

The terms “meso” and “sound object,” suggested by electronic music composer and theorist Curtis Roads to describe different concepts of musical time scales, can offer a better understanding of freedom in a live performance with a fixed soundtrack. While meso relates to the “divisions of form, groupings of sound objects into hierarchies of phrase structures of various sizes, measured in minutes or seconds,” a sound object is “a basic unit of musical structure, generalizing the traditional concept of a note to include complex and mutating sound events on a time scale ranging from a fraction of a second to several seconds” (Roads, 2001, p. 3). The two concepts demonstrate the distinction between, on the one hand, seeking freedom of interpretation within larger time frames, and, on the other hand, establishing accurate relationships between smaller fractions of sound. The different time scales provide the performer with a range of ways of aligning themselves with the soundtrack: the interaction with the electronic sounds occurs at different compositional levels, each of which demands a different kind of attention and results in different playing and sound quality. For example, the performing musician relates to the single beats (provided by the electronic soundtrack) or to entire sections of the composition, each of which demands the use of a different improvisational/creative faculty: rhythmical hyperawareness or a more “remote” listening. Although the two terms were originally intended to suggest a taxonomy of electroacoustic sounds, the concern here is not “the myriad types of electroacoustic sound objects and structures,” but, rather, “the relation of these to our live performer . . . [for example] supportive/accompanying, antagonistic, alienated, contrasting, responsorial, developmental/extended” (Emmerson, 1994, p. 32). These different dispositions between the musician and the fixed time grid are, in fact, ways in which freedom is already incorporated in the composition.

2.1.2 Groove

Another concept which can assist in settling the (seeming) contradiction between freedom of interpretation and direct attention to details is “groove.” This term describes the ability to “[select] salient features out of a sequence of sounds and [relate] these features in such a manner that . . . a sense of regularity, differentiation, and cyclicity in the music” can be identified (Meelberg, 2011, n.p.). Particularly focusing on the differentiation between simultaneously played musical lines and their gradual falling into synchronization, electronic musician Tomer Baruch introduces the notion of “participatory discrepancies,” which addresses “the slight deviations which occur every time more than one person is playing music (together)” (Baruch, 2016, p. 13; see also Keil, 1987). Baruch’s definition is based on a “relation between music and a listener [which might be also a performer] which involves entrainment and participation” (Baruch, 2016, pp. 15–16, my italics). “Entrainment” refers to the synchronization of musician and music, whereas “participation” refers to the involvement of a musician in the music.

The way in which a musician engages with a fixed soundtrack is comparable to groove. The soundtrack can be perceived as a rhythmic frame which allows interpretive freedom as much as rhythmical synchronicity and around which the live playing develops. In this sense, the combination of rhythmic stability and instability (the participatory discrepancies) creates an effective mix of freedom and fixity, allowing for real-time adaptations rather than posing musical constraints.

2.2 Between Interpretation and Improvisation

The degree to which the score of [Untitled, 2012] provides detailed instructions suggests a link to the concept of interpretation. At the same time, the score of [Untitled, 2012] also remains substantially open, thus creating a link to improvisation. What is the difference between these two concepts, and should they be discerned as fundamentally different from each other? A possible answer is provided by Andy Hamilton, a philosopher interested in the aesthetics of composition and jazz:

As interpreters get to know a work intimately, they internalize it and make it their own – just as actors do not merely recite the lines of a play but become the part. A certain freedom then develops. In contrast to the macro-freedom of improvisers, there is a micro-freedom for interpreters to reconceive the work at the moment of performance, involving many subtle parameters such as tone and dynamics. A performance will then feel like a ‘leap into the unknown’ and will have an improvised feel. (Hamilton, 2007, p. 212)

Following Hamilton, it would be wrong to exclude freedom from either case. Yet the way in which it appears is different. For the improviser, freedom exists at a macro level: it is the liberty to invent from scratch, creating something which did not exist until the moment of playing. In contrast, interpretation encompasses micro freedom: this is the liberty of the musician to stretch or to compress, to emphasize or to understate, or to flex an already existing musical text in any way. The same notion also fits the idea of extemporization – embellishing a given melody by adding ornaments, yet without changing the pre-established structure. Preserving the prescribed information is in fact what makes this kind of freedom possible, since it provides the material which affords flexibility to the musician: “not only do performers have room for improvisation but also it is required: for there can be no performance without filling in [the] Unbestimmtheitsstellen [places of indeterminacy]” (Benson, 2003, p. 82, italics in original). In this sense, the notion of freedom is relevant to interpretation as much as it is to improvisation since it can reanimate the (already existing) material – the process which Hamilton describes as “an improvised feel” or a “leap into the unknown.”

In practice, however, the interesting question is not how to distinguish between interpretation and improvisation, but, rather, how to combine them on the basis of one factor which is common to both of them: freedom. Composer Pierre Boulez, in a harsh criticism of indeterminacy in music in his essay Alea, attempted to configure a way of incorporating what he refers to as “chance” into the compositional process using – in his opinion – a well-established, responsible approach: “If the interpreter can modify the text as he likes, this modification must be implied by the text and not merely added afterwards. The musical text should contain inherently this ‘chance’ of the interpreter” (Boulez, 1964, p. 46). Even if not meant as such, this statement can perhaps suggest a bridge between interpretation and improvisation. The performative process revolves around both micro and macro freedoms – a liberty which emerges out of the already existing text, yet also exceeds its pre-established boundaries. The freedom to improvise while, simultaneously, realizing an existing composition, should be afforded by the composition. The same idea can also be encountered from the perspective of the performer, for whom “the interpretive act is an assertion of . . . individual values and ideas, as well as a rendering of the composer’s intentions” (Waterman, 1994, pp. 154–5).

2.2.1 “Non-Finished” Notations

A good example of the combination of interpretation and improvisation is the work of composer and improviser Anthony Braxton. Braxton acknowledges only sociocultural (rather than inherent) differences between interpretation and improvisation. According to music journalist Graham Lock:

Notation plays a different role in Western classical music than it does in African American creative music, where improvisation on written material is more highly prized than the correct execution of it. . . . In many black musics . . . notation is used as a guide or platform for improvisation – for example, in the way a written-out ensemble riff might underpin an improvised solo – so that the score is only one component of the total performance, whereas in the Western classical tradition there is generally more emphasis on a faithful rendition of the score as being the main focus and purpose of the performance. (Lock, 2008, p. 8, italics in original)

The difference between improvisation and interpretation, according to Braxton, is thus not inherent to notation, but to the way it is used (see also Lewis, 2002). In his own written compositions Braxton indeed recognizes the coexistence of interpretation and improvisation. As noted by Lock, Braxton’s notations

represent a kind of porous or [intentionally] non-finished form in which tiny pockets of improvisational space permeate the musical structure. This embedding of space within the formal fabric of the composition, via the visual ‘improviser’s notation’, means it is virtually impossible to play these works, even as a straight run-through of the score, without ‘individual presence’ and the ‘feeling of the moment’ suffusing the performance. (Lock, 2008, p. 8)

The space within the compositional fabric, however, should not be reserved only for improvisation, which happens in real time; it might also provide an invitation for the interpreter to participate in a creative process which takes place prior to the performance and goes beyond just learning and practicing a given part. Extending Hamilton’s ideas, interpretation here becomes permeated by macro freedom too. In works such as those by Braxton,

the processes of revision and annotation inherent to the preparation of a performance often turn the score into something active and rather more transitory than the bound collection of printed sheets suggests at first sight. (Rebelo, 2010, pp. 21–2)

The emphasis here should be on “active” and “transitory” processes, which reconfigure the score into something which cannot be foreseen by the composer. The performers’ preparations exceed the notion of interpretation in its more traditional sense, blurring the division between interpreter and composer where the former is responsible for the realization of the material provided by the latter, while the latter’s responsibility is to communicate his or her ideas in a “finished” form.

2.2.2 Complex Notations

The notion of freedom as an integral part of interpretation should not be reserved only for so-called incomplete notation forms. What if the score does not underspecify the material, but in fact overspecifies it? In the latter case, the interpreter is obliged to omit certain parts of the information, since the entire aggregate of instructions is sometimes impossible to execute. This involves significant preparation by the musician, exceeding the process of simply practicing the notated music. Describing the score of Cassandra’s Dream Song, a particularly complex composition for solo flute, composer Brian Ferneyhough writes: “This work owes its conceptions to certain considerations arising out of the problems and possibilities inherent in the notation – realisation relationship” (Ferneyhough, 1970, n.p.). The discrepancies, so to speak, between the information conveyed and its execution are perceived as a virtue rather than as a disadvantage.

It is important to realize, then, that even if the notation is highly detailed, requiring the musician to perform many simultaneous actions, the intention is not necessarily musical determinacy. Extremely detailed notation may promote freedom as much as it can imply fixity: an idea which has not been overlooked by composers who are fully aware that “the final sounding result is not precisely definable in advance, arising as it does from the intent of the performer to realise as many of the highly-specific notated actions as possible” (Ferneyhough, 1974, n.p.). The same point of view is shared by composer and improviser Richard Barrett, who writes: “Complexity is not a forbidding exterior but an endlessly attractive interior, a strange attractor” (Barrett, 1992, n.p.). Complex notations clearly point towards freedom being inseparable from the musical information conveyed by the score. The endless “mystery” behind complexity demands the attention of the performer, and his or her commitment to go beyond a simplistic view of the relationship between score, the actions of playing and the sounding result, in order to discover new and unexpected paths which may have been unforeseen by the composer.

3. Musical Context: Works for Fixed Media and Live Performance

How would the ideas discussed so far come through as part of a composed musical text? In the following sections I will discuss three compositions – Plex by Agostino di Scipio, Bump by Amnon Wolman, and Bokeh by Janco Verduin – each of which is based on a different approach to weaving instrumental instructions around a fixed soundtrack. I will comment on the advantages and disadvantages of the choices made by these composers regarding notation, soundtrack, and their combination.

While each of the three compositions has a distinct notational and compositional approach, they also have one important common factor: they define a relatively broad reference point for the musician(s) to follow the soundtrack. The relation between the live performance and the electronics is formed through wide musical gestures that relate to the overall texture of sound rather than to particular details or that occur at the level of entire sections of the composition (a meso time scale rather than a sound object). The outcome of such an approach is that the musicians develop their sound independently of the soundtrack without being “interrupted” by the electronic events and are free to explore various paths within entire sections or even throughout the whole piece. Plex, for example, lets the sound of the bass evolve independently of the electronics, disregarding (in most parts) the alignment between bass part and soundtrack. Bump has even less of a concrete relation between bass and electronics, since the performer is free to explore the musical material within what seems to be a surrounding environment of unrelated electronic sounds. And in Bokeh, although the score does introduce an exact alignment between the instrumental parts and the soundtrack, the performance does not necessarily depend on this idea, but relies on other qualities of the composition which are far more open.

While all three works rely on a less precise relation between the live parts and the soundtrack, [Untitled, 2012] asks the musician to lock tightly into the soundtrack, rhythmically engaging with the electronic sounds at the (micro) level of the beats and the individual phrases. But rather than implying a rigid synchronization between the instrumental part and the electronics, this particular focus is in fact what permits the performer’s freedom. My approach, which I see as essentially different from that of the other three composers, provides an alternative musical manifestation of the ideas I have discussed so far and fills a certain gap between freedom and fixity which I have become aware of through my interaction with these works.

3.1 Plex

Plex (Agostino Di Scipio), excerpt

Plex (1991) by Agostino Di Scipio is a composition for contrabass and electronic soundtrack. The score is divided into four parts, and apart from their starting moment and a few other events that have to be synchronized with the soundtrack (the player uses a stopwatch in order to keep track of time), the notation does not relate directly to the electronic part.

The score introduces a relatively small amount of notated material, one stave only for each of the four parts. This basic material is elaborated by what Di Scipio calls “backtrack paths”: the performer is invited to repeat smaller segments of the part, freely advancing forward and backward between the designated paths. The repetitions are enhanced with an extra layer of musical information: indications of speed, dynamics, or technique are superimposed on the material, allowing a single phrase to sound different each time. This stretches the interpretational micro freedom beyond its conventional boundaries and transforms the original content into smaller fractions of idiosyncratic material. The idea of musical development in Plex is derived directly from this flexibility, “harvesting” the expansion of the basic material from the decisions of the performer.

However, I am ambivalent about whether Plex really allows the musician to exercise improvisation effectively. Di Scipio encourages the performer to “plan what paths should be followed in his/her way through the score, rather than taking random decisions while playing” (Di Scipio, 1991). And, indeed, from my experience as a player, realizing all the necessary factors for the performance – choosing which backtrack path to follow, applying the speed, dynamics, and playing technique for each part, while at the same time keeping track of the stopwatch – has proved an almost impossible task. After several experiments and performances with which I was less than content, I decided to fix my performance path by preselecting the backtrack paths.



My annotated renditions of the score.
Plex (excerpt, part B) played at haTeiva (Ilya Ziblat, contrabass)

Each backtrack path segment was cut and pasted in the correct sequence for the performance. I also marked in advance the playing technique for each segment, using a color code. Regarding spontaneous performance, Di Scipio commented: “Well, I was aware that ‘spontaneous decisions’ would have been too difficult to make, as you have seen for yourself. ‘Improvisation’ here would be possible only by very very deeply ‘internalizing’ the particular materials and the performance praxis. . . . It would be like an ideal target situation, but not achievable in actuality” (Di Scipio, personal communication, November 10, 2014).

In fact, in my interpretation of Plex the original notion of flexibility suggested by the navigation between the backtrack paths was eliminated from the performance itself, because I was relying on a pre-prepared path. Yet this proved a more practical solution for performing the piece, and my interpretation gained a greater sense of conviction that was lacking in earlier performances, where I was improvising my path in real time. The decision to remain within the limits of micro freedom has proved a liberating factor, allowing for musical flow to properly emerge during the performance.

What, then, stands behind the decision to use the backtrack paths? According to Di Scipio:

The same notated gesture would reveal different nuances of timbre if played with different timing and variable dynamics. I wanted everything to be more qualitatively merged in the sound flow heard from the tape, and I wanted to leave room for the performer to listen to the taped materials and find his/her way into the pace and rhythm of the whole thing. I never wanted instrumentalists to be under the spell of a click track. What was new, for me, in Plex was the . . . ‘local’ freedom to recycle and vary the notated materials, and the fixed matrix of larger-scale time spots where synch with the tape is requested. I have used these dual arrangements in many other pieces after that: sound matter evolves more qualitatively, ‘against’ a fixed frame of deadlines to be matched. I assume that creates in each section a sense of growing anxiety for the bassist (which reflects the overall form of the piece: a very long ‘anacrusis’, leading to no downbeat). (Di Scipio, personal communication, November 10, 2014)

Di Scipio’s approach seems first of all sound-oriented. The player is given the space to find his or her way during the performance in order to develop the instrumental part. This is also how the connection between the musician and the fixed soundtrack can be established: the bass part develops uninterruptedly, in parallel with the unfolding electronic soundtrack and independently of exact synchronizations. The player’s attention is directed towards a wider perspective than the small-scale level of the rhythmic details, which intensifies the listening experience and lets the live performance merge with the pre-recorded soundtrack: “[By] leav[ing] room for the performer to listen to the taped materials . . . everything [is] more qualitatively merged in the sound flow heard from the tape” (Di Scipio, personal communication, November 10, 2014).

Finally, Di Scipio also discussed the relation between the flexibility of the score and the freedom of the performer and his interest in the subject, which is evidently different from mine:

It seems to me that, being more interested in timbre, texture and noise, as a composer I’d better provide an interpreter with ways to find his/her way, not prescribing a fixed result. A ‘fixed result’ would anyway remain an ideal. I am not about the actualisation of an ideal image of what a sound or a gesture should exactly be, I am more about opening up specific material conditions for the kind of sound events or gestures that can be acceptable and consistent with the context I propose. (Di Scipio, personal communication, November 10, 2014)

While my approach relies mainly on establishing freedom as the key notion of the composition itself, Di Scipio’s concern is more with sound: “timbre, texture and noise.” The difference can perhaps be best perceived in terms of the composer’s focus: towards the audience, who perceives the composition mainly as an auditory or performative experience, or towards the performer, who has to be concerned with “under-the-hood” practicalities which are essential for negotiating between the live performance and the requirements of the work. As asserted by electronic-music composer Simon Emmerson, the concern of the composer should not ignore

the frustrations of the real performer, straight-jacketed by a tape part, unable to hear the overall effect of live electronics, etc.; perhaps our position has moved to too great an extent towards the listener. One of the greatest dislocations of western art music (the performer/listener distinction) must not blind us to the need to let the performer have some control even over those elements which may not articulate ‘expressive’ detail. (Emmerson, 1994, p. 33)

Perhaps because of my experience as a performing musician, the real-time freedom of the performer has become an indispensable focus of my compositions – a focus which to a certain extent is lacking in Plex.

3.2 Bump

Bump, excerpts from a concert at Nutshuis, the Hague (performed by Ilya Ziblat, 2016)

Bump (2005) by Amnon Wolman is a composition for bass and electronic soundtrack. Unlike the other works discussed in this chapter, this work is not meant to be part of a “normal” concert program but to be presented as a performance piece or installation with no determined length, “in an open space where people are usually standing or walking but not sitting. A gallery, a lobby, a foyer of a concert hall, or the middle of a park could all serve as places for the performance of the piece, but not a traditional concert hall” (Wolman, 2005, n.p.). The bass player wanders around the performance space, chooses one audience member, “stand[ing] as close as possible to that person . . . in the most intimate way” (Wolman, 2005, n.p.), and performs a short segment of the composition (simultaneously singing and playing the bass) before moving to the next person. 

Bump – score

Each system in the score contains three staves: one for the instrumental part, a second one for the vocal part – both notated in hand-drawn, broken/curved lines (graphic notation) – and a third stave that contains only a single note as a reference pitch for tuning the playing and singing. This graphic information has to be rendered into a performable version, and so, similarly to Plex, this work necessitates a certain amount of preparation before the performance, an active revision and annotation of the score by the performer. This is, in fact, an open invitation by the composer to the performer to share compositional responsibility. Wolman described this decision-making process to me in an email: “In general, with my scores, I decide in advance which factors I find important to define explicitly, and for which factors I would be willing to accept any decisions made by the player, as a presentation of my work. After that, I will leave it in their [the interpreter’s] hands” (Wolman, personal communication, June 4, 2016). I decided to “complete” Wolman’s “non-finished” notation, making my own version of the score:


Bump, my rendition of the score

Although the score has only one page, the entire performance might last up to one hour (which is the total duration of the soundtrack) or even longer (in a live-electronics version of Bump in which the electronic sounds are generated by a Max/MSP patch). This intended discrepancy between the length of the bass part and the electronics opens another channel of freedom for the performing musician, who has to make choices concerning the distribution of the notated material, dividing it into shorter segments and moving across the performance space from one “private” performance to another.

The feelings of intimacy and awareness which arise from experiencing the work lead to an unusual musical encounter. The player shares his or her interpretation of the score, making use of the material learned in advance, and does so spontaneously (in real time). This constitutes the musical identity of Bump as an ever-changing and flexible but simultaneously fixed work. It requires a demanding combination of mental faculties (memorizing, playing, singing, tuning, improvising), and the sharing of these performance “risks” with the audience in a direct and intimate way. Although the result cannot be said to be improvised (the score clearly indicates: “This is not an improvisation but rather the performer is asked to prepare a fixed version before the public interaction” [Wolman, 2005, n.p.]), it still involves a substantial degree of freedom for the player, which is communicated to the audience at first hand, in a one-on-one interaction between performer and audience.

Although the playing does not align with the soundtrack but rather floats independently in the same space, a sense of enhanced awareness can be experienced while playing Bump, reinforcing a strong feeling of engagement between the performing musician, the electronic soundtrack, and the audience. The discovery of such a quality within a compositional framework which can easily be labeled “experimental” – avoiding any structural arrangement that would suggest a link between soundtrack and score – offers an interesting, if distinct, approach towards notation and composition on the one hand, and a particularly rewarding experience of how these can be transmitted to an audience on the other.

3.3 Bokeh

Bokeh performed by Elisenda Pujals and Ilya Ziblat

Bokeh (2014) by the Dutch composer Janco Verduin is a composition for bass, voice, and electronic soundtrack. The score comprises eight parts (four vocal and four bass), of which six are pre-recorded, processed (passing through a reverb effect with various settings, creating the simulation of different recording spaces), and mixed down as a fixed 2-channel track. The remaining two parts, one for voice and one for bass, are performed live, simultaneously with the pre-recorded soundtrack. The entire aggregate of overlapping parts creates a richly woven tapestry which appears somewhat blurry (the term “bokeh” refers to out-of-focus parts of a photograph).

Bokeh, excerpt

The score is more traditional than most of the others described in this thesis. It uses metric notation, dividing the music into bars, beats, and their subdivisions (a constant 1/8 or 1/16 pulse is maintained throughout the entire work). This kind of notation implies precise synchronization between the parts – those that are played live and those that are pre-recorded. The score prescribes not only notes and rhythms but also dynamics and different playing techniques: for example, the position of the bow on the string (sul ponticello, sul tasto), different vocalization techniques for the singer (open/closed mouth). This layer of information is superimposed over the notes and rhythm, together creating an aggregate of undercurrent rhythmical pulses and textural changes. While this score appears to leave very little room for the musician, performing Bokeh does involve, in fact, a significant amount of freedom. How does the notation here convey flexibility to the performer?

Based on my views as a composer and performer I would have opted for a notation that represents the sound transformations differently, particularly the weaving of the individual parts. An open notation would contribute to a freer performance, liberating the musician from a more restrictive mode of playing, and reducing the risk of losing the alignment between score and soundtrack. But more fundamentally, open notation could help to shift the attention of the musician more towards listening and less towards following the score. Playing an open score would contribute to interlacing the different parts more loosely. In an email interview, I asked Verduin if, in his opinion, his composition would not have gained from a more flexible notation. His answer was:

I don’t agree. Not because of the advantages you mention but because of the concept of the piece itself. . . . [In] Bokeh I wanted to explore the idea of sound as the sum of transitions through different acoustical environments. As you know, the piece is like four duos of voice and double bass where each duo has a different environment (up close, far away and two intermediates of which one is the live duo). Rhythmically, I could have chosen more fuzzy textures, but for the bass I wanted a steady pulse so there would be a kind of unity that travels through these spaces, as if it were one sound made up by four particles in different circumstances. The total sound would be a compound creature, the sum of those four elements. (Verduin, personal communication, October 24, 2014)

Verduin’s description, “the sum of transitions through different acoustical environments,” refers to the textural transitions prescribed in the score and the way that they blend into an eight-part matrix, in a way which could perhaps be compared to the process of mixing an electronic track using the faders in order to blend in different channels. But at the same time the underlying pulse is an essential part of Bokeh’s identity (in a manner which can be considered as typically Dutch, demonstrating an obvious link to the rhythms and drive of works by Louis Andriessen and his followers). Without this pulse, the rhythmic drive would be lost, and “more fuzzy textures” would appear instead of the powerful drive which carries the performance in its current version.

In our performances of Bokeh, the singer and I interpreted the score in a relatively flexible manner, renouncing, to a certain extent, the strictness of the rhythmic alignment between live and pre-recorded parts and, as a result, attaining freedom without abandoning the rhythmic “engine.” In this way, despite its traditional notation, Bokeh could allow a considerable degree of freedom without losing its rhythmic drive. This form of extended interpretational micro freedom – on the verge of improvisational macro freedom – has proved practical in the sense that it has allowed the performance to accumulate enough energy while also respecting the composer’s directions.

4. [Untitled, 2012]

– download full score –

Each of the works described so far presents a distinct approach to interweaving live performance with fixed media. However, none was entirely satisfying to me, and [Untitled, 2012] stands as an experiment in overcoming the disadvantages I see in all three works. The compositions Bump and Plex let the bass part develop independently of the soundtrack and, to a certain extent, give up any clear sense of a relationship between the two. Bokeh offers a more elaborate relationship between performers and soundtrack, yet, to my mind, it fails to provide a sufficient degree of freedom (or, when such freedom is nevertheless “claimed” by the musicians, they, the score, and the soundtrack become disconnected). How could my composition provide an answer to these shortcomings?

Although these reflections are mainly based on my personal experience, these shortcomings seem inherent to compositions that combine live performers and fixed soundtracks. [Untitled, 2012] is my tentative musical answer to the question of how a composition based on these two ingredients can form a more elaborate connection, while at the same time providing real-time musical freedom. It demands a tight engagement from the performers – “tight” being a jazz term to describe the speed of response and level of engagement within a group of improvising musicians. I wanted to focus the attention of the performer on the rhythmical aspects and not only on the texture of the soundtrack; to require the performer to stay alert to particular details, rather than allowing a wider, and more distant perspective; and to create a more idiosyncratic connection between player and soundtrack, allowing the immersion of live sounds into electronics to compensate for the inherent unresponsive quality of the fixed soundtrack.

 [Untitled, 2012] (excerpt), performed by John Eckhardt, ISCM New Music Days, 2012, Antwerp

The soundtrack of [Untitled, 2012] provides the rhythmic grid into which the bass player interweaves the instrumental part in real time. While the electronic sounds take on the role of accompaniment – a pre-composed “rhythm section” – the bass player takes the lead role as the “soloist.” A comparison to jazz improvisation seems logical: there, the soloist usually keeps a tight connection with the rhythm section. In [Untitled, 2012] the role of the rhythm section is filled by the soundtrack, yet, rather than preventing freedom, this accompaniment has to supply the necessary creative drive for the bass player. The key feature that provides that freedom is that the score can be played in different tempi, so the orientation of the instrumental part to the soundtrack can be adjusted during the performance. This stretching or compressing of parts of the score enables the bass player to “lock in” to the accompanying electronic soundtrack, intertwining it with an additional layer of live instrumental sounds without the loss of freedom that would result from rigid submission to the soundtrack.

The score of [Untitled, 2012] consists of small units, several of which have flexible speed indications. The very beginning provides a good example: the first two bars are played repeatedly for 30 seconds, during which the soundtrack features a single layer of percussive-sounding periodic pulses. While the speed of the electronic beats is gradually accelerating (that is, the gap between each electronic beat and the next is getting shorter), the musician is instructed to perform his or her part slower on each repeat. The effect of these opposite processes – acceleration of the soundtrack and slowing down of the live part – is that a decreasing amount of notated material will be played with each repeat, “trimming” the end of a bar more and more. 

[Untitled, 2012] (beginning), performed by John Eckhardt
This part is played repeatedly, each repeat lasting 3 beats (marked in the score as an encircled “p” with a down arrow). The bass starts bar 1 in sync with the first beats, but the rest of the material in this bar (non-continuous glissandi, with separated bows) will be cropped incrementally every time it repeats because of the accelerating speed of the soundtrack’s beat.
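To make the arithmetic of these two opposite processes explicit, the following minimal sketch (in Python) simulates the opening with hypothetical values: the initial pulse interval, the duration of the notated bar, and the rates of acceleration and slowing are illustrative assumptions and are not taken from the score or the soundtrack.

```python
# A minimal numeric sketch of the opening's "trimming" effect (hypothetical values):
# the electronic pulse accelerates while the bass plays its bar slower on each
# repeat, so a smaller fraction of the notated material fits into each repeat.

pulse = 0.50          # initial interval between electronic beats, in seconds (assumed)
bar_duration = 1.50   # time the bass needs to play the full bar at its initial speed (assumed)

for repeat in range(1, 7):
    window = 3 * pulse                      # each repeat lasts three electronic beats
    fraction_played = min(1.0, window / bar_duration)
    print(f"repeat {repeat}: window {window:.2f}s, "
          f"{fraction_played:.0%} of the notated bar is played before it is cropped")
    pulse *= 0.90                           # soundtrack accelerates: beats move ~10% closer (assumed rate)
    bar_duration *= 1.10                    # bass slows down: the bar takes ~10% longer (assumed rate)
```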

In order to allow for greater flexibility, the soundtrack, for the most part, features several overlapping layers of sound that are perceived as a multilayered polyrhythmic grid. Although it is fixed in advance, this grid still allows freedom in the interaction between bass player and soundtrack. The relation to musical groove should be clear: the soundtrack presents a rhythmically regular structure, but the way the player engages with it rests on a selection process performed in real time by the musician. The strong multilayered nature of the soundtrack simulates, to a certain extent, the interactions within a group of improvisers. The bass player can choose which sound layer to respond to and adjust the notated material in relation to it, creating the effect of momentarily synchronizing with the rhythm; more specifically, playing “before” or “after” the beat or with a double- or half-time “feel.” The flexibility of the notation, in combination with the compound rhythms implied by the electronic soundtrack, allows the bass player to shape the material, thus expanding interpretation beyond its traditional boundaries. The relation between the performed part and the electronics does not rely only on the perception of the sound quality: instead of that relatively amorphous frame of reference, the bassist interlocks more precisely at the level of the individual beats. As an example, the last system of the first page (in the audio recording starting from 1:06) features a sequence of five successive crotchets, corresponding to an accompaniment of five beats (“p” marks), which is repeated several times. But as the accompaniment is an amalgam of several overlapping rhythms, it provides the player with multiple options for how to synchronize with it.

[Untitled, 2012], excerpt

Hence, this freedom is situated more at the level of Roads’ sound object than the larger meso time scale.
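The selection process described above can be illustrated with a small sketch: a few hypothetical pulse layers overlap into a polyrhythmic grid, and a single bass attack is measured against the nearest beat of whichever layer the player chooses to follow, landing slightly “before” or “after” it. The layer periods and the attack time are invented for the example and do not correspond to the actual soundtrack.

```python
# A minimal sketch (not the actual soundtrack data) of the multilayered grid:
# overlapping periodic pulse layers, and a bass attack aligned to the nearest
# beat of a chosen layer, slightly before or after it.

layers = {"layer A": 0.50, "layer B": 0.75, "layer C": 1.20}   # pulse periods in seconds (assumed)
section_length = 6.0                                           # length of the section in seconds (assumed)

# Beat times of each layer within the section.
grid = {name: [round(i * period, 2) for i in range(int(section_length / period) + 1)]
        for name, period in layers.items()}

def nearest_beat(layer_name: str, moment: float) -> float:
    """Return the beat of the chosen layer closest to the moment of the attack."""
    return min(grid[layer_name], key=lambda beat: abs(beat - moment))

attack = 2.3                                                   # hypothetical moment of a bass attack
for name in layers:
    beat = nearest_beat(name, attack)
    offset = attack - beat
    feel = "after" if offset >= 0 else "before"
    print(f"{name}: nearest beat at {beat}s, attack lands {abs(offset):.2f}s {feel} the beat")
```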

Another feature of [Untitled, 2012] which overcomes fixity is the ability to interrupt the composed narrative: the player can stop the tape at any point for an unlimited period and freely elaborate on one notated event on the paused electronic timeline. In the score I have described these interruptions as a “comment on the given material, or as a possibility to break away from the compulsory motion of time,” suggesting that the player has the ability to break away from the automated tape progression in order to reflect on the composed content by improvising. This creates an explicit contrast with the fixity of the soundtrack.

 [Untitled, 2012] (excerpt), performed by John Eckhardt, ISCM New Music Days, 2012, Antwerp

By using the methods described above, I have ensured that the live part and the soundtrack connect in a “safe” way, eliminating the risk that the musician loses his or her place in relation to the electronics. But more fundamentally, this approach introduces freedom as an inherent facet of the composition. Although the soundtrack is fixed – a “frozen” aggregate of sounds and rhythms – the performance is still an open dialogue between bass player and soundtrack.

5. Conclusion

In each of the compositions I have described in this chapter, a soundtrack functions as the main structural backbone. These soundtracks present inflexible, hard-coded time grids to which the performers have to align themselves. They raise a compositional challenge: how to allow freedom while also retaining a clear relation to the electronic sounds? How not to fall into either of the two “traps”: creating a performance situation in which the musician is straitjacketed by a totally mechanical clock, or letting the live performance float freely without a coherent relation to the electronic part?

The scores I have discussed attempt to provide the missing link between the soundtrack and the live performance. The notations provide the space for the live sounds to develop as an interaction between the performer’s real-time decisions, the pre-composed contents, and the soundtrack. The directions remain open enough to allow freedom, while also directing the attention of the performer to the fixed electronics. The soundtrack becomes the primary source for evoking real-time creativity, rather than functioning as a restricting factor.

Fixity has a strong presence in all these case studies. By confronting the hard-coded framework, the other three composers and I had to look for alternative ways to introduce freedom. On the border between improvisation and interpretation, freedom can be embodied at different levels of the composition: the musical material (especially the rhythm), the instrumental instructions, or, more generally, the sound quality. These different musical ingredients provide choices of how to “inject” real-time freedom into the fixed soundtrack.

My compositional strategy in [Untitled, 2012] was different from those of the other composers: the score is much more condensed, and richer in detail. I allowed the fixity of the soundtrack to influence my notation, making it more precise than in any of my other compositions. Nevertheless, freedom is still very much present: it is embodied in the accurate details and in the way these are superimposed on the soundtrack’s multilayered rhythmic grid; for example, the way in which relatively short gestures can be stretched or compressed in relation to the electronic pulses. And, prior to any played gesture, freedom exists in how the musician listens to the soundtrack’s groove, which provides a myriad of potential paths to follow. Freedom appears in [Untitled, 2012] in its micro-scale form more than it does in any of my other compositions; this, however, should not mean that the freedom is less concrete. This fine-tuning of freedom calls for close attention, careful listening, and fast responses. The outcome should be evaluated according to the close focus and “tightness” it requires from the performing musician, which hopefully is also transferred to the listener.

How does this case study provide a new perspective on the idea of freedom and fixity that can add to the ideas presented in and through my other compositions? To start with, the fixed soundtrack in [Untitled, 2012] presents a different approach from those works in which I am using a flexible timeline: hasara and MRMO. While those works imply innate freedom, the soundtrack of [Untitled, 2012] is a fixed skeleton around which the musician exercises real-time freedom. The notation here also differs from that of the other two case studies: the scores of hasara and MRMO present traditional notation only at the beginning of each section, in order to establish a local musical idiom from which point on the performer is asked to continue improvising in the same “style.” In [Untitled, 2012], on the other hand, the notation remains detailed throughout the entire score. The same idea also applies to the electronics, which in [Untitled, 2012] are fixed from beginning to end, while in The Instrument the computer part is interactive and shaped by the musicians in real time. Finally, a comparison could be made between the role of the soundtrack in [Untitled, 2012] and that of the undirected improvisation in hasara. In the latter case free improvisation functions as a compositional void around which a musical narrative is formed, thus playing a central structural role which is similar to that of the fixed electronics in [Untitled, 2012]. In this sense, the two compositions can be understood as each other’s antipodes, highlighting either fixity or freedom as their main musical-gravitational forces.

What next? What this case study might suggest in terms of future musical works and fresh artistic visions is not an easy question, since the path I have been exploring since composing [Untitled, 2012] has been directed at live-processing and interactive computer systems, on the one hand, and open, “improvisatory” notation, on the other. A possible continuation would be to combine the score with a real-time generated electronic part and challenge the bass player’s flexibility even more by forcing him or her to react to a much less foreseeable timeline. Another possibility would be to expand the electronic part, rendering the material as a multichannel track, thus creating a sonic environment which is richer in possibilities. Such an elaborated, multilayered electronics part could play on the range between “local” and “field” paradigms (Emmerson, 1994), offering the performing musician the choice to relate to a more direct, nearby sound source or to a more distant sonic environment. Such a work could take the shape of an installation (inspired by Wolman’s Bump), where audience and performer are free to move around in a large space in which several sound sources are placed, thus changing the sonic perspective of both player and listeners as the music unfolds. To conclude, it is worth noting that the compositional “restraint” in the form of a fixed soundtrack has served as a creative challenge, providing different paths for composing, performing, and discussing the material – many more than I might have thought at the outset.

