
Spatial objectives


Research on how to link and associate the rhythm and space of sound events through dynamic, finite event-based spatialisation

1. Continuous input-based spatialisation & (vs) Dynamic layer- and event-based spatialisation

Here, we propose a distinction between continuous input-based spatialisation and finite event-based spatialisation. Unlike most tools, which spatialise continuous inputs from a fixed number of tracks, this tool is based on layer- and event-based spatialisation, which makes it possible to control the temporality (rhythm, density, duration and amplitude) of spatialised sound events, which can be visualised as a graphical representation of a sequence.

  1. By continuous input-based spatialisation, we mean that the user selects the position or trajectory of each continuous input (as in most spatialisation plugins, such as Ircam Spat). In my experience, this tends to create simpler and clearer spatialisation.
  2. By finite event-based spatialisation, we mean that the user selects (from a pre-defined library) the position and trajectory of several layers of events with different start times and durations (as in some creation systems that allow both the time and space of events to be controlled, such as Live 4 Life or Sound Particles). In my experience, this tends to create more complex and confusing spatialisation (see the sketch below).
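To make the distinction concrete, here is a minimal sketch in SuperCollider (the environment in which Live 4 Life is built), assuming a running server and a 4-speaker ring. The SynthDef and pattern names below are illustrative, not Live 4 Life's actual implementation: each event of the pattern carries its own duration, amplitude and position, whereas a continuous input would keep a single panner running on a fixed or slowly evolving position.

```supercollider
// Hypothetical event-based layer: every event gets its own duration,
// amplitude and azimuth. Boot the server first (s.boot).
(
SynthDef(\grainEvent, { |out = 0, freq = 440, azim = 0, amp = 0.1, sustain = 0.2|
    var env = EnvGen.kr(Env.perc(0.01, sustain), doneAction: 2);
    Out.ar(out, PanAz.ar(4, SinOsc.ar(freq) * env * amp, azim)); // 4-channel azimuth panner
}).add;
)

// One layer of spatialised events: rhythm, density, duration, amplitude
// and position are all controlled per event.
(
Pbind(
    \instrument, \grainEvent,
    \dur, Pseq([0.25, 0.25, 0.5], inf), // rhythm of the layer
    \azim, Pwhite(-1.0, 1.0),           // a new azimuth for each event
    \amp, Pexprand(0.05, 0.2),
    \freq, Pexprand(200, 2000)
).play;
)
```

Several such layers, each with its own start time, duration and trajectory selection, can then be superposed to obtain the layer- and event-based behaviour described above.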

A way to mix different spatialisation concepts and paradigms

1. Mixing slow textural & (vs) quick dynamic rhythmic spatialisation

Space composition

The lack of direct, easily modifiable control over spatial parameters, combined with our symmetrical and blurred perception of space, means that space is generally not treated as a full dimension of composition, particularly since our auditory spatial perception is more a comparison mechanism than an analysis tool. In the absence of simple, flexible tools for experimenting with spatial perception and of a common, detailed spatial vocabulary covering different scales, composers generally work with global notions and registers of space and their opposites (front vs back, bottom vs top, near vs distant, static vs moving, precise locations vs enveloping sensation, inside vs outside).

In order to emphasize the spatial aspect, sound elements are generally kept simple, or other parameters tend to be slowed down, as Stockhausen writes in the comments to the score of Oktophonie:

“The simultaneous movements - in 8 layers - of the electronic music of Invasion - Explosion with Farewell demonstrate how - through Octophony - a new dimension of musical space-composition has opened. In order to be able to hear such movements - especially simultaneously - the musical rhythm has to be drastically slowed down; the pitch changes must take place less often and only in smaller steps or with glissandi, so they can be followed; the composition of dynamics serves the audibility of the individual layers - i.e. dependent on the timbres of the layers and the speed of their movements; the timbre composition primarily serves the elucidation of these movements” [1].

Although spatial composition may involve freezing or slowing down other sound parameters, especially rhythm, in order to foreground spatial perception, rhythmic spatialisation based on quick, dynamic and cyclic parameter sequences deserves further investigation. By combining or alternating it with spatial textures, we can reveal and create more complex polyrhythms that are easier to localise in space thanks to their sharp attack transients [2].
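As an illustration, here is a hedged SuperCollider sketch (not taken from the tool itself) that superposes a slow spatial texture and a quick, cyclic spatial rhythm on a 4-speaker ring; the sharp attack transients of the clicks are what makes the fast sequence localisable [2]:

```supercollider
(
// Slow texture: a filtered noise band rotating once every 20 seconds.
SynthDef(\slowTexture, { |out = 0|
    var src = BPF.ar(PinkNoise.ar(0.3), 400, 0.3);
    Out.ar(out, PanAz.ar(4, src, LFSaw.kr(1/20))); // slow circular drift
}).add;

// Sharp click: the attack transient makes each position easy to localise.
SynthDef(\click, { |out = 0, azim = 0, amp = 0.2|
    var src = Decay2.ar(Impulse.ar(0), 0.001, 0.05) * WhiteNoise.ar(amp);
    DetectSilence.ar(src, doneAction: 2); // free the synth once silent
    Out.ar(out, PanAz.ar(4, src, azim));
}).add;
)

x = Synth(\slowTexture);

// Quick, cyclic spatial sequence: eight positions per second around the ring.
(
Pbind(
    \instrument, \click,
    \dur, 0.125,
    \azim, Pseq([-1, -0.5, 0, 0.5], inf)
).play;
)
```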


2. Mixing object-based & (vs) channel-based paradigms

  1. An object-based paradigm (a sound sent to a virtual 3D position) means that the spatialisation can easily be reproduced on other, different spatial configurations. In my experience, the spatial algorithm tends to make the listener forget the presence of the loudspeakers and to be more CPU-demanding, which lowers the maximum number of events that can be spatialised in real time.

  2. A channel-based paradigm (a sound or effect sent to a physical loudspeaker position) means that the spatialisation cannot be reproduced in exactly the same way on different loudspeaker configurations. In my experience, this simpler spatial approach tends to reinforce the presence of the loudspeakers and to have low CPU usage, which increases the maximum number of events that can be spatialised in real time.

Despite the general tendency towards the object-based paradigm, for reasons of simplicity, standardisation and reproducibility in any space regardless of the speaker setup, mixing channel- and object-based paradigms on every sound event makes it possible to exploit the strengths of the channel-based approach. Sending a sound directly to a specific loudspeaker can have much more impact (or at least a different effect) than sending it to the exact coordinates of that loudspeaker through a VBAP or ambisonics algorithm, as the sketch below illustrates. However, sequences that integrate channel-based events or a multichannel effect system then depend on the number of loudspeakers available and cannot be reproduced in the same way on different loudspeaker configurations.
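A minimal sketch of this contrast, assuming a 4-speaker ring (PanAz stands in here for any object-based panner such as VBAP or ambisonics; the SynthDef names are illustrative):

```supercollider
(
// Object-based: a virtual azimuth, rendered by the panning algorithm.
SynthDef(\objectBased, { |out = 0, azim = 0|
    Out.ar(out, PanAz.ar(4, Dust.ar(20, 0.3), azim));
}).add;

// Channel-based: the sound is sent directly to one physical output.
SynthDef(\channelBased, { |chan = 0|
    Out.ar(chan, Dust.ar(20, 0.3));
}).add;
)

// Nominally similar positions, potentially different results:
Synth(\objectBased, [\azim, 0.5]); // virtual position at speaker 2's coordinates
Synth(\channelBased, [\chan, 1]);  // direct feed to speaker 2 itself

// With this simple amplitude panner the two can coincide; with an ambisonics
// or spread-enabled VBAP renderer, the virtual image and the direct feed
// remain audibly different, as discussed above.
```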

This desire to mix the two paradigms, and to move away from the standardisation around the object-based model, is also a way of questioning the norm, as Feda Werdak, creator of plastic architectural installations in public space, says: “I thwart the norm, I provoke it, because I think the norm inhibits things. At a certain point, over-norming sanitises things, it industrialises, it standardises. And I think common sense should take precedence over the norm.”


3. Mixing abstract & (vs) concrete spatialisation techniques

As I wrote in an article for the International Computer Music Conference in 2018, in the same way that Pierre Schaeffer contrasts the term concrete music, which implies direct manipulation of sound objects, with abstract music, which requires a score to be written and then played by performers, I distinguish two main kinds of spatialisation from the composer's point of view: abstract and concrete spatialisation techniques.

  1. In abstract spatialisation techniques, you think first in terms of trajectories or positions, which indirectly determine the phase, amplitude and spectral differences applied to a sound in each loudspeaker so that a specific direction is perceived.
  2. In concrete spatialisation techniques, the starting point is the temporal and spectral parameters of the sound material, on which you act directly to broaden the sound field.

Abstract spatialisation

Abstract spatialisation is external to the sound, built on the conceptual idea of spatial trajectories and pointillistic spatialisation within a geometrical design. In this top-down approach, external spatial parameters are assigned and imposed on sound objects.

In terms of perception, a few events or streams can form clear, well-defined trajectories (as in Turenas, with its Lissajous curves [3]), while superposing a large number of layers can create a rich polyphony of spatial movements or spatial textures (as in Stria, with static positions rotating slowly at each new event [4]). According to Gestalt grouping principles [5], different sounds may be perceived as coming from the same source, e.g. if their trajectories are correlated (symmetry) [6] or if their pitches fall within a specific bandwidth, up to about a minor third (similarity) [7], and may therefore be fused spatially into one very large object or separated into distinct sources in space.

Abstract spatialisation, which applies a specific spatial trajectory or precise position to a sound, is present in almost all spatialisation tools, perhaps because it is the most direct, most controllable and easiest way to conceive a position in space abstractly. Trajectories can be drawn in most DAWs, built algorithmically in specific editors [6], derived from the pixels of an image and modified with image filters [8], simulated via sequences of figures representing the speakers [9], defined through matrices that dynamically route input streams to output channels [10, 11], or made up of several crossfades between simple movement patterns with different sets of complex assignments to create a 3D listening environment (as in Oktophonie [12]).
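As a small sketch of an algorithmically built trajectory, the following SuperCollider example, in the spirit of the Lissajous curves of Turenas [3], maps a Lissajous-like figure onto azimuth and a simple 1/d distance cue on a 4-speaker ring (all names and parameter values are illustrative):

```supercollider
(
SynthDef(\lissajous, { |out = 0, fx = 0.13, fy = 0.21|
    var x = SinOsc.kr(fx);                 // slow oscillation on the x axis
    var y = SinOsc.kr(fy, pi/2);           // incommensurate rate on the y axis
    var azim = atan2(y, x) / pi;           // angle of (x, y) in PanAz units (-1..1)
    var dist = (x.squared + y.squared).sqrt.max(0.1);
    var src = Decay2.ar(Impulse.ar(6), 0.002, 0.1) * SinOsc.ar(800);
    Out.ar(out, PanAz.ar(4, src / dist, azim)); // louder when the curve comes closer
}).add;
)
x = Synth(\lissajous);
```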

Concrete spatialisation

Concrete spatialisation, internal to the sound, extracts or acts directly on the parameters or internal space of the sound signal, and may break its spectrum down in the time or frequency domain. In this bottom-up approach, sound features and dimensions are imposed upon external space, either by using data from sound analysis to determine spatial location and movement [6, 13], or by spreading and decorrelating multiple instances of a sound (particles) in order to create generally broad, diffuse sound fields and an immersive space [14]. Concrete spatialisation thus takes the internal parameters of the sound more into account, either through correlation, by linking a spectral parameter or the intensity of the sound to a spatial dimension, or through micro/macro decorrelation – for example of time, phase, playback speed, transposition or distortion – which tends to create a diffuse or contrasted space.

It covers a wide range of spatialisation techniques, such as: micro-temporal [15] and phase decorrelation [16]; amplitude decorrelation with asymmetrical envelope shapes [17]; decorrelation of multiple sound processes by coherently modulating synthesis parameters through functions on surfaces [18]; pitch shifting and delays [19] (inter-channel micro-delays and/or small pitch shifts of about a quarter tone – or, by extension, variations in playback rate in one of the channels – are well-known techniques for broadening apparent source width and spaciousness; Rainer Boesch, for example, used these strategies in combination with dynamic filters to create virtual spaces in Drama in 1975 [19]); sub-band decorrelation [20]; spectral diffusion, either with band-pass filters [21] or with FFT delays and filters [22, 23]; and spatial granulation [24], whose particles can be controlled with a GUI or with algorithms such as boids [25], wave-terrain synthesis [26] or image-based spatial sound maps [27].
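As an example from this concrete family, here is a sketch of micro-temporal decorrelation combined with slight pitch shifts [15, 16, 19]: the same source is copied to four speakers, each copy with its own micro-delay and a detune within a quarter tone, which broadens the apparent source width (names and values are illustrative):

```supercollider
(
SynthDef(\decorrelate, { |out = 0|
    var src = Saw.ar(110, 0.1);
    var chans = 4.collect {
        var delayed = DelayC.ar(src, 0.05, Rand(0.0, 0.02)); // 0-20 ms micro-delay
        PitchShift.ar(delayed, 0.1, Rand(0.985, 1.015));     // ~±25 cents, within a quarter tone
    };
    Out.ar(out, chans); // one decorrelated copy per speaker
}).add;
)
x = Synth(\decorrelate);
```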

Beyond classification: towards the composition of spatio-temporal relationships between sound objects

Some spatial techniques sit at the border of this proposed classification and belong to both categories. Whereas a few abstract techniques use sound analysis to determine spatial parameters, e.g. when the playback rate of a trajectory is determined by the spectral centroid of a sound, some concrete strategies may use spatial figures to spatialise spectral particles. It is in the compositional interest not to seek a strict split, but rather to exploit and combine the special features of each of these techniques, setting up a space of dynamic relationships that reveals the different aspects and natures of several sounds, their development and their links. Although the choreography of a single movement may deliver a metaphorical message, as Clozier said:

“The fact that a sound element is placed or inscribed in one place rather than another does not confer any value upon it. For each sound, it is its relationship with the others, its movement towards or away from them which creates musical expression” [28].


References

[1] K. Stockhausen, Score Oktophonie: Elektronische Musik vom Dienstag aus Licht. Kürten, Germany: Stockhausen-Verlag, 1994.

[2] G. S. Kendall, “Spatial Perception and Cognition in Multichannel Audio for Electroacoustic Music”, Organised Sound, vol. 15, no. 3, pp. 228–238, 2010.

[3] J. Chowning, “Turenas: the realization of a dream”, in Proceedings of Journées d’Informatique Musicale, Saint-Etienne, France, 2011.

[4] M. Meneghini, “An analysis of the compositional techniques in John Chowning’s Stria”, Computer Music Journal, vol. 31, no. 3, pp. 26–37, 2007.

[5] A. S. Bregman, Auditory Scene Analysis: The Perceptual Organisation of Sound. Cambridge, Massachusetts: MIT Press, 1994.

[6] L. Pottier, “Le contrôle de la spatialisation”, in La spatialisation des musiques électroniques, L. Pottier, Ed. Saint-Etienne, France: Publications de l’Université de Saint-Etienne, 2012, pp. 81–104.

[7] R. Gottfried, “Studies on the compositional use of space”, IRCAM Research Report, 2012.

[8] E. Lyon, “Image-based spatialization”, in Proceedings of International Computer Music Conference, Ljubljana, Slovenia, 2012.

[9] A. Boiteau, “Intégration de l’espace dans les processus compositionnels d’Emanuel Nunes : le cas de Lichtung I”, Master’s thesis, IRCAM - Paris, 1997.

[10] B. Truax, “Composition et diffusion : espace du son dans l’espace”, in Académie de Bourges, Actes III, Composition / Diffusion en Musique Electroacoustique. Editions Mnémosyne, 1997, pp. 177–181.

[11] J. Mooney and D. Moore, “Resound: Open-Source Live Sound Spatialisation”, in Proceedings of International Computer Music Conference, Belfast, N. Ireland, 2008.

[12] M. Clarke and P. Manning, “The influence of technology on the composition of Stockhausen’s Octophonie, with particular reference to the issues of spatialisation in a three-dimensional listening environment”, Organised Sound, vol. 13, no. 3, pp. 177–187, 2008.

[13] P. C. Chagas, “Composition in circular sound space: Migration - 12-channel electronic music (1995-97)”, Organised Sound, vol. 13, no. 3, pp. 189–198, 2008.

[14] H. Lynch and R. Sazdov, “An investigation into the perception of spatial techniques used in multi-channel electroacoustic music”, in Proceedings of International Computer Music Conference, Perth, Australia, 2013.

[15] H. Vaggione, “Décorrélation microtemporelle, morphologies et figurations spatiales du son musical”, in Espaces sonores, A. Sédès, Ed. Paris, France: Editions musicales transatlantiques, 2002, pp. 17–29.

[16] G. S. Kendall, “The decorrelation of audio signals and its impact on spatial imagery”, Computer Music Journal, vol. 19, no. 4, pp. 71–87, 1995.

[17] F. Cavanese, F. Giomi, D. Meacci, and K. Schwoon, “Asymmetrical envelope shapes in sound spatialization”, in Proceedings of Sound and Music Computing Conference, Berlin, Germany, 2008.

[18] M. C. Negrao, “ImmLib - A new library for immersive spatial composition”, in Proceedings of International Computer Music Conference, Athens, Greece, 2014.

[19] R. Boesch, “Composition / diffusion en électroacoustique”, in Académie de Bourges, Actes III, Composition / Diffusion en Musique Electroacoustique, F. Barrière and G. Bennett, Eds. Bourges, France: Editions Mnémosyne, 1997, pp. 39–43.

[20] G. Potard and I. Burnett, “Decorrelation techniques for the rendering of apparent sound source width in 3D audio displays”, in Proceedings of Conference on Digital Audio Effects, Naples, Italy, 2004.

[21] R. Normandeau, “Timbre spatialisation: the medium is the space”, Organised Sound, vol. 14, no. 3, pp. 277–285, 2009.

[22] R. H. Torchia and C. Lippe, “Techniques for Multi-Channel Real-Time Spatial Distribution Using Frequency-Domain Processing”, in Proceedings of International Computer Music Conference, 2003.

[23] D. Kim-Boyle, “Spectral spatialization - an overview”, in Proceedings of International Computer Music Conference, Belfast, N. Ireland, 2008.

[24] S. Wilson, “Spatial swarm granulation”, in Proceedings of International Computer Music Conference, Belfast, N. Ireland, 2008.

[25] D. Kim-Boyle, “Spectral and granular spatialization with boids”, in Proceedings of International Computer Music Conference, New Orleans, USA, 2006.

[26] S. James, “From Autonomous to Performative Control of Timbral Spatialisation”, in Proceedings of Australasian Computer Music Conference, Brisbane, Australia, 2012.

[27] E. Deleflie and G. Schiemer, “Images as spatial sound maps”, in Proceedings of Conference on New Interfaces for Musical Expression, Sydney, Australia, 2010.

[28] C. Clozier, “Composition - diffusion / interprétation en musique électroacoustique”, in Académie de Bourges, Actes III, Composition / Diffusion en musique électroacoustique, F. Barrière and G. Bennett, Eds. Bourges, France: Editions Mnémosyne, 1997, pp. 52–85.
