Friday, January 30, 2009

About 'Haptics'

What is Haptics?

"Haptics" comes from the Greek word haptesthai, meaning "to touch"; the field it names is the science of touch.

The world of haptics is expansive by definition. It is the field of science and technology dedicated to tactile sensation, and it has applications for everything from handheld electronic devices to remotely operated robots. Yet outside of the research and engineering community, it is a virtually unknown concept. “People don’t even recognize the word ‘haptics’ yet,” says Ralph Hollis, director of the Microdynamic Systems Laboratory. “You have to spell it for them.”

In an age of digital devices that stimulate and amaze the eyes and ears with increasingly high fidelity, haptics has been employed mostly in relatively unsophisticated applications—rumbling video-game controllers and buzzers that alert you to a cellphone call. But as our digital tools have become more complex and capable, our interfaces with these devices are beginning to run into the limitations of sight and sound. “It’s really only now that we’re seeing a migration from keyboards and mechanical switches to touchscreens and touch-sensitive surfaces,” says a representative of Immersion, a company that produces haptic interfaces. “We’re losing that tactile feel that we had before, and now we’re trying to bring it back.”

Touchable Technology

Moving haptics out of the lab can be challenging. Tactile feedback in consumer electronics must be both convincing to the user and appropriate for the device. Broadly speaking, touch can be divided into cutaneous sensing through the skin surface (feeling the pebbly surface of a basketball), and deeper kinesthetic sensing from muscles and tendons (experiencing the impact when hitting a ball with a bat). But much of the recent haptic development in consumer electronics has focused on fooling the fingertips into feeling onscreen buttons that aren’t physically there.

General consumers will first encounter haptics on these touchscreen gadgets and desktop controllers, but the most sophisticated touch technology outside of the lab is found in industrial, military and medical applications.


Thursday, January 29, 2009

Le Corbusier at the Barbican Centre

Film & Music seasons

The film season, Visions of Utopia, explores the vision and legacy of Le Corbusier through film, a medium embraced by the legendary architect himself in his integrated approach to architecture and the arts.


With the music season Total Immersion: Iannis Xenakis, the BBC Symphony Orchestra celebrates the music of the extraordinary composer and architect Iannis Xenakis, who studied with Le Corbusier and worked with him on several projects, in a day of concerts, films, talks and free events.

Saturday 7 March 2009


Wednesday, January 28, 2009

An interview with Natasha Barrett, pt.3

by Matteo Milani, Federico Placidi - U.S.O. Project, January 2009

(Continued from Page 2)

USO: Pier Paolo Pasolini said that progress without true cultural development is irrelevant. What, for you, is the current relationship between technological progress and the living electroacoustic music community?

NB: One needs to be careful not to confuse cultural development and technological progress. They are of course tightly linked but not exactly the same thing. Cultural development is more complex to define than technological progress (which is also problematic – it is maybe better to discuss ‘technological change’). During recent years, technological change, which has made tools easier to use, has had a two-fold effect. We see a greater interest in non-pop electroacoustic music (everyone is ‘hands on’ for a small cost) – which we can say is good – coupled to a wave of similar-sounding works – which we can say is not necessarily bad, because the people who are active clearly have a lot of fun! But it’s difficult sifting out the really good work – simply the number of hours of music you need to listen to before finding a gem. Another change is in the music storage and distribution system. Currently Internet distribution is open and easy, which we could view as both a technological and cultural development. But as new laws and market forces come into place these changes may involve a cultural recession.

USO: What is it that makes your collaboration with Tanja Orning (DR.OX) particularly inspiring?

NB: I find that working with a real, responsive person in an improvisation setting demands a different way of working and thinking from when sitting alone composing. This variety I find important. For someone else to throw in ideas that aren’t your own always makes collaborations interesting, but specifically in my live electronics improvisation it means I have to think very fast – and that’s fun! I also have a similar project with the Swedish guitarist Stefan Ostersjo.

USO: What is the main difference between composing for a fixed medium and for a live performance involving real players?

NB: There is a lot to say about this, but here are some points concerning only acousmatic versus live electroacoustic music with visual performers. I leave out lap-top performance as that opens up a slightly different discussion.

The point of the acousmatic is that there is nothing to perceive via our eyes that will concretise the original sounding object, reveal where the sound came from, or reveal the situational context in which the sound was made. In the visual context we receive a layer of visual information that renders the sound concrete.

Acousmatic listening is part of our everyday experience. Listening, understanding and ‘getting into’ acousmatic music is simply to focus on the acousmatic experience within a realm where human experiences are moulded, reconfigured and reconnected by the composer. Listening to acousmatic music is therefore natural. The perception of complex structures requires no understanding of Western instrumental music – no training beyond life in general. When we involve an instrumental performer, even if we avoid connection to history in our compositional material, the listener on the other hand identifies the instrument and enters a listening mode based on their understanding of instrumental music – at least to some extent.

Visual electroacoustic music may incorporate pre-made sound material and create ‘live’ electroacoustic material from sound or data derived from the live performance act. We need to consider the many relationships between these layers and the live instrument.

When a live performer is introduced the composition needs to consider this performer – to understand where human limits lie, to understand and take advantage of the live act, to incorporate the live human element in the fixed composition.

USO: What's on your schedule at the moment?

NB: Over the next week I must complete the score for a live-electronics work called “Zone-1” (for percussion, piano, clarinet and a sizeable live electronics part). This work will be tested in early January, ready for its premiere in February. I will also set up an old installation, Microclimates III-VI, and complete a new CD project with the Swedish guitarist Stefan Ostersjo. In March I’ll be in the UK to perform a concert and give lectures, then finally (hurrah!) get stuck into the new commission from the Giga-Hertz competition.


An interview with Natasha Barrett, pt.2

by Matteo Milani, Federico Placidi - U.S.O. Project, January 2009

(Continued from Page 1)

USO: In recent years you have received many commissions from institutions. Did their requests influence your ideas and the work itself?

NB: Normally I am free to compose what I wish within two types of guideline. The first is whether the work is acousmatic or involves live instrumental performers. The second is the duration and an agreed degree of involvement reflected in the time the work takes to compose. Even for a commission that is part of a thematic festival I am normally able to specify what I would like to do. Only in a few cases am I strictly controlled. For example, last year I composed some sound materials for a visual artist where the requirements were clearly specified.

USO: Which software resources are you using to compose your pieces? What makes them special?

NB: Well, as I mentioned above it’s easier to talk about the ‘why’ rather than the ‘how’ as I use any software that is appropriate – and probably the same as everyone else. It’s all down to how you use the tools! 14 years ago I would maybe have discussed how I use university mainframe systems or software that was non-commercial and custom made. Now, most non-commercial sound transformation algorithms we used in the mid-late 90’s are embedded in commercial software, which is accessible to all and significantly easier to use than the buggy programmes of the last decade. Even if we build a custom-made Max/MSP patch we are still working within a defined collection of Max/MSP objects (which I do quite a lot). We can of course write our own programmes and our own objects but are nevertheless mainly coding existing algorithms. I can however talk about one main area that I find interesting, and that is the difference between software intended for haptic ‘real-time’ control and software more suited to scripting and ‘out-of-time’ control. For many reasons ‘real-time’ and ‘out-of-time’ control of the same algorithm can produce markedly different results.
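Barrett's closing remark, that ‘real-time’ and ‘out-of-time’ control of the same algorithm can produce markedly different results, can be illustrated with a small numerical sketch (Python/NumPy; the glissando, block size and rates are my own illustrative assumptions, not her software):

```python
import numpy as np

SR = 1000  # a deliberately low sample rate to keep the sketch cheap

def glissando(freqs):
    """Render a sine tone whose frequency follows a per-sample curve."""
    return np.sin(2.0 * np.pi * np.cumsum(freqs) / SR)

t = np.linspace(0.0, 1.0, SR)
target = 50.0 + 100.0 * t  # the intended frequency curve, 50 -> 150 Hz

# 'Out-of-time': the script knows the whole curve in advance.
scripted = glissando(target)

# 'Real-time': control values arrive in blocks and are held between
# updates, so the same algorithm sees a stair-stepped curve instead.
block = 64
held = np.repeat(target[::block], block)[:SR]
realtime = glissando(held)

# The two renderings of the 'same' glissando drift audibly apart.
difference = np.max(np.abs(scripted - realtime))
```

The point of the sketch is that the difference comes not from the algorithm but from how its parameters are delivered: the stair-stepped control accumulates a phase drift the scripted version never has.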

USO: What's the importance of Ambisonics in your works?

NB: I first used Ambisonics in 1999 and discovered how three-dimensional spatial structures (rather than stereo phantom images) may be transmitted directly to the listener without the need for headphones. Prior to this time, for me, three-dimensional sound lived either metaphorically in the stereo phantom image or in concert sound-diffusion performance over a large loudspeaker orchestra. In my work since 2000 Ambisonics has opened up a new layer of compositional potential in terms of both sound and temporal structure. I should point out that I am referring to 3-D sound through the spatial continuum, rather than sound being positioned on specific loudspeaker points. I do however regularly perform traditional sound-diffusion.

USO: Can you talk about your mentors and how they affected your work?

NB: Hmm. Well mentors change, don’t they? Maybe I don’t really have any mentors, rather people whom I admire, or works that I admire or find inspiring – and the list grows all the time, spanning electronic music, instrumental music and visual art. To throw in a few classics: the early works of Stockhausen. Xenakis. Luc Ferrari. François Bayle. Brian Ferneyhough. Early electronic and tape works from the ’50s to ’70s – some are inspiring for the composition, others for the pioneering spirit and sheer commitment to the immensely time-consuming processes of the time.

USO: What does it mean to be an electroacoustic music composer in Norway today?

NB: I guess the same as anywhere else in Europe – one needs to be open-minded and active in a broad definition of electroacoustic music as composition and as art involving sound, while staying true to personal beliefs and knowledge.

USO: How should a composer survive nowadays in the global community, where the market and profits determine the reality that surrounds us, acting like a natural selection process and leaving outsider thinkers behind?

NB: If you give in to market and profits then you may be able to live through working with sound, but then you need to ask yourself if you are surviving in what you believe in and what you find fun. The point is that no contemporary marginal art-forms survive in a free market economy. But there is rarely a completely free market. Public funds and private grants are there for marginal art-forms. In some countries they are of course very sparse, tricky to get and unfortunately sometimes distributed on a collegial basis. Collaboration can be both interesting on creative terms and useful in opening up new funding opportunities. I would say be faithful to your ideas, yet flexible enough to imagine how an idea could function in an alternative setting or framework.

USO: Is there room for a contemporary music scene in our western culture?

NB: Yes. There will always be people who wish to be actively stimulated by music or art rather than simply ‘absorbing’ it or using music as a function, for example to dance or work to. I know I am not alone in finding intellectual experience one of the spices of life, and contemporary music is one area that offers such an experience.


An interview with Natasha Barrett, pt.1

by Matteo Milani, Federico Placidi - U.S.O. Project, January 2009

U.S.O. Project reached out to composer Natasha Barrett to talk about her acousmatic piece 'Kongsberg Silver Mines', honoured at the 'Giga-Hertz-Award for electronic music 2008', and more.

Natasha Barrett (1972) works first and foremost with composition and creative uses of sound. Her output spans concert composition through to sound-art, large sound-architectural installations, collaboration with experimental designers and scientists, acousmatic performance interpretation and, more recently, live electroacoustic improvisation. Whether writing for live performers or electroacoustic forces, the focus of this work stems from an acousmatic approach to sound, the aural images it can evoke and an interest in techniques that reveal detail the ear will normally miss. The spatio-musical potential of acousmatic sound features strongly in her work. Barrett studied in England with Jonty Harrison and Denis Smalley for masters and doctoral degrees in composition. Both degrees were funded by the humanities section of the British Academy. Since 1999 Norway has been her compositional and research base for an international platform.

USO: Can you tell us the history of your winning acousmatic piece 'Kongsberg Silver Mines' at the GIGA-HERTZ-AWARD (field recording, processing and composing)?

NB: It was in fact the work ‘Sub Terra’ that won the prize. For the award-concert they needed a shorter composition due to the already rather lengthy programme. ‘Kongsberg Silver Mines’ was originally one of three sound-art installations used as a prelude to ‘Sub Terra’. The other two installations are ‘Under the Sea Floor (Coring and Strata)’ and ‘Sand Island’. Each of these three installations zooms in on sounds unique to three locations under Norwegian ground, creating surreal semi-narrative journeys. In the full ‘Sub Terra’ cycle the installation sites lead the visitor through underground or enclosed sound-worlds, gradually closer to the concert space. The concert work Sub Terra then crystallises that which is most abstract in the installations into a musical form. The version of ‘Kongsberg Silver Mines’ played at the award-concert was a special concert remix of the installation. This version was designed to play over the ZKM ‘Klangdom’ – a dome of 43 loudspeakers. The remix is intended as a self-contained work, to be listened to from beginning to end, rather than with the open-time characteristic of the installation.

Sub Terra was commissioned by Ny Musikk Rogaland, which is an organisation working with contemporary music and art in the southwest of Norway. Long before the title and idea were set, the plan was to explore interesting features of sound-worlds hidden from everyday experience. For concerts and other events Ny Musikk Rogaland often use an old brewery, which was partly converted into an arts venue. In the basement of this building you find enormous storage cellars, and it was for this space the installations were initially intended (although in the end the installations were spread out across the Norwegian town of Stavanger). The pitch-blackness and crazy acoustics in the cellars started the ‘underground’ chain of thought and eventually pointed to the type of field recordings that I would make.

The sounds for ‘Kongsberg Silver Mines’ were recorded during a journey into the Kongsberg Silver Mines – an enormous mine complex dating from the 1700s, extending over 1 km downwards and several km into a mountain. For access to the mines I tagged onto a group of geologists who were visiting for a research conference. I made a recording of the 2 km journey on the original old miners' train, which was used to transport silver ore and miners in and out of the mines. The deafening sound and immense vibrations of the old trucks were amazing. It was necessary to wear earplugs, and so I had to guess at the recording by looking at level meters and hedging my bets on how the microphones were responding. Inside the mines we were met by a Norwegian guide. In shaky English he explained the local history and demonstrated some of the original lift machines still in operation.

In ‘Under the Sea Floor (Coring and Strata)’ I was looking for sounds literally under the sea floor. At this time the University of Oslo was undertaking a research project in which a 10-meter-long core sample would be taken from a “pockmark” in the Oslo Fjord, 32 meters below sea level. I slipped onto this trip via a friend and was not completely sure what to expect, apart from knowing that I had to be careful not to get in the way! So for two days I hovered in the background on a cold, wet drilling vessel, recording sounds from on deck, below water and on the sea floor (and luckily I had brought hydrophones with 40-meter-long cables!).

In ‘Under the Sea Floor (Coring and Strata)’ there are also two more sets of sounds. The first originates not from field recordings but from a seismic shot created by a large TNT blast, recorded by an array of 2000 geophones spread over tens of kilometers. The sound on each geophone is about 15 seconds long and records the response from the Earth's crust and well into the mantle. The impulse and responses are very low frequency – well below 50 Hz. I obtained the geophone recordings as a seismic data set. After various tests I found that by using data from every 100th geophone the geological information was preserved while reducing the data set to a workable size. Each of these 20 data streams was converted into audible sound at the original sampling frequency of just 125 Hz. After pitch-shifting, the sound was finally in a useful audible range. The next set of sounds was created by using the seismic data as control data. The data mainly controlled frequency and volume modulation of sine tones. By selecting a suitable time step (speed) and modulation width (pitch variation), the simultaneous playback of the 20 data streams sonifies the seismic response. Audible interferences and correlations were also used as points of departure for compositional development.
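The mapping described here, low-rate data streams driving frequency and volume modulation of sine tones, can be sketched as follows (a minimal Python/NumPy illustration; the function name, rates and modulation ranges are assumptions, not Barrett's actual patch). The 1875-sample stream below corresponds to 15 seconds at the 125 Hz data rate mentioned above:

```python
import numpy as np

def sonify_stream(control, sr=44100, duration=15.0, base_hz=220.0, mod_width=0.5):
    """Turn one low-rate data stream into a frequency- and
    amplitude-modulated sine tone (illustrative, not Barrett's code)."""
    c = np.asarray(control, dtype=float)
    span = c.max() - c.min()
    # Normalise the control data to [-1, 1].
    c = 2.0 * (c - c.min()) / span - 1.0 if span > 0 else np.zeros_like(c)
    # Upsample the control stream to audio rate by linear interpolation.
    n = int(sr * duration)
    ctrl = np.interp(np.linspace(0.0, len(c) - 1.0, n), np.arange(len(c)), c)
    # Frequency modulation: deviate from base_hz by up to mod_width octaves.
    freq = base_hz * 2.0 ** (mod_width * ctrl)
    phase = 2.0 * np.pi * np.cumsum(freq) / sr
    # Amplitude modulation follows the magnitude of the control signal.
    amp = 0.2 + 0.8 * np.abs(ctrl)
    return amp * np.sin(phase)

# Several streams played simultaneously expose interferences and correlations.
streams = [sonify_stream(np.sin(np.linspace(0.0, 10.0 * (k + 1), 1875)))
           for k in range(3)]
mix = np.sum(streams, axis=0) / len(streams)
```

The "time step" and "modulation width" in the interview correspond here to the interpolation speed and the `mod_width` parameter.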

For the recordings in the third installation, ‘Sand Island’, I simply buried two hydrophones under the sand in the tidal zone of a small bay on Søndre Sandøy in Hvaler, Norway. After four hours the tide lifted the hydrophones out of the sand and carried them into a floating bed of seaweed.

When it comes to processing it’s easier to talk about the ‘why’ rather than the ‘how’ as I use any technique and software that is appropriate. Whether it’s techniques originating in traditional ‘tape methods’ such as editing, mixing and montage or the latest spectral manipulation software, for me the point is to explore that which I find most interesting and important to the composition. This process normally begins by exploring on the one hand a sound’s spectrum, temporal and spatial shapes (the intrinsic content), and on the other hand the way the sound directly and abstractly connects to our experiences (the extrinsic content). Composition then involves making sense of the discovered information, further transformations to reinforce emerging connections, exploring counterpoints and motions and so forth. Specifically in the installations it was important to maintain compositional narrative journeys and a tight connection to my own original experiences during the field recording.

USO: How did you decide which sounds to use for this particular work called 'Sub Terra'?

NB: The installations were not originally intended for concert performance, even though concert remixes now exist. Each installation is intended to be open for a listener entering and leaving as they wish, while the concert work ‘Sub Terra’ requires the listener to be attentive to the complete 16-minute work – an approach which involved somewhat different approaches to sound and structure. I wanted to tie narrative and recognisable sounds to the installations, while the concert work would be more abstract and ‘intrinsic’. Here we have a paradox which I hope creates some degree of tension in the listening process: the installations are semi-narrative yet non-linear enough to allow a listener to stay as long as they wish, while the concert work is more abstract, yet contains a level of detail and structural connections that require a continuous and concentrated listening process. For Sub Terra I created a new network from the more abstract installation materials. You may also hear materials clearly referencing the installations, but their development is somewhat different.


Monday, January 26, 2009

An interview with Trevor Wishart - pt.3

by Matteo Milani, Federico Placidi - U.S.O. Project, January 2009

(Continued from Page 2)

USO: What are your thoughts about the spatialisation issue?

TW: In the 70s I worked with analogue 4-track. But then the 4-track technology died. Since then I have been cautious about using any spatialisation procedure beyond stereo, as I don’t want my work to be dependent on a technology which might not last. I’m also concerned with the average listener, who will not go to a concert, or a special institution with complex spatialisation facilities. Most people will listen to music on headphones or domestic hi-fis, so the music must work in this context. With diffusion, however, I can expand the stereo work, using the diffusion process to reinforce and expand the gestures within the music. My current piece will probably be in 8 tracks, partly for aesthetic reasons – it is based on recognizable human speech, and I would like the speech ‘community’ to surround the audience – and partly because 8-track is, perhaps, future-proof, being essentially 4 times stereo! I’m excited by the multichannel spatialisation systems being developed at the moment, but I would like to see the development of very cheap, high-quality loudspeakers, to make these technologies accessible to smaller (normal) venues, and to composers like me, who work at home.

USO: How should your music be performed (by you or any other sound artist) in a live context?

TW: With the pure electro-acoustic works, the (stereo) pieces should be diffused over many stereo pairs. I provide basic diffusion scores if I am not diffusing the pieces myself.

USO: What do you think about the structural approach in music composition? Do you think it could be helpful to shape tools for computer-aided composition in order to speed up the composer's work?

TW: Absolutely essential, there is no music without structure. But computer-aided composition is something for commercial music and film, where we don’t want to stray from known pathways, so we’re happy to encapsulate those in an algorithm. For art music, I want to hear how another person shapes materials in order to bring me a musical experience. I’m not interested in hearing the output of an algorithm, though, of course, algorithms might be used on a small scale to help shape particular events.

USO: Has your academic career helped you to expand your knowledge of your way of composing?

TW: Yes. Because I was studying Chemistry at Oxford, when I changed subject to Music it was into one of the most conservative music departments in the UK. But I’m glad I was therefore brought into contact with music from the Middle Ages to the early 1900s and taught about the many different approaches composers have taken to organizing their materials over hundreds of years.

USO: In your music is there any technical or inspirational reference to composers of the past?

TW: I don’t think so. Or, should I say, not yet.

USO: What are your near future projects?

TW: My present project, a 3-year residency, involves recording the speaking voices of many people across the community in the North East of England. I have recorded in schools, old people’s centres, pubs and homes. I wish to capture people speaking naturally, and to collect together a diverse range of vocal types – different ages (from 4 to 93), genders, and sheer vocal qualities. My intention is to make an electro-acoustic piece of about one hour, in 4 ‘acts’, which plays on both the uniqueness of each speaker and the common features of the voice we all share. The piece must be comprehensible to the local community, particularly to those people whose voices I recorded (few of whom have any special interest in experimental music!), but also to an international concert audience, where I cannot assume that the listeners will understand English.


An interview with Trevor Wishart - pt.2

by Matteo Milani, Federico Placidi - U.S.O. Project, January 2009

(Continued from Page 1)

USO: You have been involved in software development for several years. Why did you choose to shape your own tools? Can you tell us about the genesis and the current development of the Composers’ Desktop Project?

TW: There are two main reasons. The first is poverty (!). Most of my life I’ve been a freelance composer, and being a freelance experimental composer in England is seriously difficult! When I first worked with electronics you were dependent on custom-built black boxes like the SPX90. The problem for me was, even if I could afford to buy one of these, I could not afford to upgrade every other year, as university music departments could. It became clear that the solution would be to have IRCAM-like software running on a desktop computer. A group of like-minded composers and engineers in York got together (in 1985–6) as the Composers’ Desktop Project, and began porting some of the IRCAM/Stanford software to the Atari ST (the Mac was then too slow for professional audio). We then began to design our own software.

The second reason is that creating one’s own instruments means you can follow your sonic imagination wherever it leads, rather than being restricted by the limitations of commercially-focused software. You can develop or extend an instrument when you feel the need to (not when the commercial producer decides it’s profitable to do so), and you can fix it if it goes wrong!

The original ports onto the Atari ran very slowly: e.g. doing a spectral transformation of a 4 second stereo sound might take 4 minutes at IRCAM, but took 2 days on the Atari. However, on your own system at home, you could afford to wait … certainly easier than attempting to gain access to the big institutions every time you wanted to make a piece.

Gradually PCs got faster – even IRCAM moved onto Macs. The CDP graduated onto the PC (Macs were too expensive for freelancers!) and the software gradually got faster than realtime.

The CDP has always been a listening-based system, and I was resistant for a long time to creating any graphic interface – much commercial software had a glamorous-looking interface hiding limited musical possibilities. However, in the mid-90s I eventually developed the ‘Sound Loom’ in Tcl/Tk (this language was particularly helpful as it meant the interface could be developed without changing the underlying sound-processing programs). The advantages of the interface soon became apparent, particularly its ability to store an endless history of musical activities, save parameter patches, and create multi-processes (called ‘Instruments’). More recently I’ve added tools to manage large numbers of files. ‘Bulk Processing’ allows hundreds of files to be submitted to the same process, while ‘Property Files’ allow user-defined properties and values to be assigned to sounds, and sounds can then be selected on the basis of those properties. There are more and more high-level functions which combine various CDP processes to achieve higher-level functionality.
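The ‘Property Files’ idea described above, user-defined properties attached to sounds and then used for selection, can be sketched in a few lines (hypothetical file names and properties for illustration only; this is not actual Sound Loom code):

```python
# Hypothetical 'property file': user-defined properties assigned to sounds.
properties = {
    "bell_01.wav":  {"source": "metal", "pitch": "A4", "duration": 3.2},
    "bell_02.wav":  {"source": "metal", "pitch": "C5", "duration": 1.1},
    "voice_01.wav": {"source": "voice", "pitch": "A4", "duration": 5.0},
}

def select_sounds(props, **criteria):
    """Return, sorted, the sounds whose properties match every criterion."""
    return sorted(
        name for name, p in props.items()
        if all(p.get(key) == value for key, value in criteria.items())
    )

metal_sounds = select_sounds(properties, source="metal")
a4_sounds = select_sounds(properties, pitch="A4")
```

A bulk process in this spirit would then simply loop the same transformation over whichever list of files the selection returns.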

USO: How did you develop your skills in programming algorithms for the spectral domain?

TW: I studied science and maths at school, and did one term of university maths (for chemists). Mathematics has always been one of my hobbies – it’s beautiful, like music. When I was 15, my school organized a visit to the local (Leeds) university computer, a vast and mysterious beast hidden in an air-conditioned room from where it was fed by punched-card readers. I wrote my first Algol programs then. I only took up programming later, as a student at York, when Clive Sinclair brought out his first ultra-cheap home computer. I taught myself Basic, graduated to the other Sinclair machines, up to the final ‘QL’ which I used to control the sound-spatialisation used in VOX-1. Later I graduated to C.

I’ve never been formally taught to program or taken a proper course, but I’ve been lucky enough to hang around with some programming wizards, like Martin Atkins, who designed the CDP’s sound-system, and Miller Puckette at IRCAM, from whom I picked up some useful advice … but I’m still only a gifted amateur when it comes to programming!

USO: Could you explain to us your preference for offline processing software instead of real-time environments?

TW: Offline and realtime work are different both from a programming and from a musical perspective. The principal thing you don’t have to deal with in offline work is getting the processing done in a specific time. A realtime program must be efficient and fast, and understand timing issues; offline, all this is simply irrelevant. Offline, also, you can take your time to produce a sound-result: e.g. a process might consult the entire sound and make some decisions about what to do, run a second process, and on the basis of this run a third, and so on. As machines get faster and programmers get cleverer (and if composers are happy for sounds to be processed offline and re-injected into the system later) then you can probably get round most of these problems.
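The multi-pass idea Wishart describes, where each pass may consult the entire sound before deciding what to do, might be sketched like this (an illustrative Python/NumPy example with assumed thresholds; not CDP code):

```python
import numpy as np

def offline_multipass(signal):
    """Each pass consults the whole sound before acting - something a
    strict sample-by-sample realtime process cannot do."""
    x = np.asarray(signal, dtype=float)
    # Pass 1: global analysis over the entire file.
    peak = np.max(np.abs(x))
    rms = np.sqrt(np.mean(x ** 2))
    # Pass 2: decide, from the full analysis, whether to normalise.
    if peak > 0 and rms < 0.1:
        x = x / peak
    # Pass 3: gate below a threshold derived from the new global RMS.
    thresh = 0.05 * np.sqrt(np.mean(x ** 2))
    x = np.where(np.abs(x) < thresh, 0.0, x)
    return x
```

A realtime version of pass 2 would have to commit to a gain before hearing the whole file; here the decision can wait until the full analysis is in.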

But the main difference is aesthetic. If you’re processing a live event, you have to accept whatever comes in from the mike or input device. However precise the score, the sounds coming in will always be subtly different in each performance. So the processes you use have to work with a range of potential inputs.
The main problem with live music is that you have to be the sort of person who likes going to concerts, or is happy in the social milieu of the concert. And many people are not. This can be changed through, on the one hand, education, but, more significantly, by making the concert world more friendly to more groups of people, e.g. by taking performances to unusual venues (I’ve performed in working men’s clubs, schools, and so on).

Working offline you can work with the unique characteristics of a particular sound – it may be a multiphonic you managed to generate in an improvising session but which is not simply ‘reproducible’ at will, or it may be the recording of a transient event, or a specific individual, that cannot be reproduced at will in a performance. Moreover, the absence of performers on stage might seem like a weakness, but it has its own rewards. Compare theatre and film. A live performance has obvious strengths – a contact with living musicians, the theatre of the stage, etc. – but you’re always definitely there in the concert hall. In the pure electro-acoustic event we can create a dream-world which might be realistic, abstract, surreal or all of these things at different times – a theatre of the ears where we can be transported away from the here and now into a world of dreams. Electro-acoustic music is no different from cinema with respect to its repeatability, except that sound-diffusion, in the hands of a skilled diffuser, can make each performance unique, an interaction with the audience, the concert situation and the acoustics of the space. Divorcing the music from the immediacy of a concert stage allows us to explore imaginary worlds, conjured in sound, beyond the social conventions of the concert hall.


An interview with Trevor Wishart - pt.1

by Matteo Milani, Federico Placidi - U.S.O. Project, January 2009

U.S.O. Project had the pleasure of interviewing Trevor Wishart, whose work was recently honoured at the 'Giga-Hertz-Award for electronic music 2008'. The 'Grand Prize' recognized the artistic and technical achievements of his career.
Trevor Wishart, born 1946 in Leeds, is a composer and performer who developed, among other things, the sound transformation tools in his Sound Loom - Composers’ Desktop Project software package. He is the author of the well-known books 'On Sonic Art' and 'Audible Design', in which he shares his extensive research on the extended human voice and computer-music techniques for sound morphing. He is currently 'Arts Council Composer Fellow' in the Department of Music at Durham University, England.

USO: You've been awarded prizes at the Bourges Festival, at Ars Electronica and now at Giga-Hertz, three prestigious honours for electronic music. How do you feel about all these tributes to your life's work as a composer?

TW: It’s very gratifying to receive this kind of recognition from the musical community, especially as the type of music I make is almost invisible in the cultural press in the UK, where electro-acoustic composition is squeezed out between classical instrumental music and popular music.

USO: How did you first become interested in music composition?

TW: I was brought up in a working class household, with no real connection with the world of concerts, or theatres, but there was a piano in the house, a family heirloom, which no-one ever played. I became fascinated by it, and so my parents agreed to let me have piano lessons. A little later, when I was 7, I was looking in the window of the only classical music shop in town when I noticed a strange book, a musical score with no notes on it (it was a manuscript book). I immediately bought one and began to write my own music.

USO: Why did you choose the electronic medium and how has this affected your way of composing?

TW: I went to university to study Chemistry (my parents could not imagine music as a real job … my school advised me against pursuing this ‘dilettantism’) but soon changed to music. After being thrown into the deep end of Darmstadt by my college tutor at Oxford, I was writing complex instrumental music (7 tone rows with different intervallic structures + random number tables) when my father died. He worked in a factory in Leeds, and I suddenly felt my sense of disconnectedness from his world. So I bought the only tape-recorder I could afford (it ran at 3+3/4 inches per second and had a built in poor-quality mike) and went around factories and power-stations in Nottingham and Leeds, recording the sounds of industrial machinery, with some vague notion of making a piece for him. In the studio I discovered that many of my presumptions about composition were challenged in this new medium. Sounds had a life of their own which had to be respected. Preconceived absolute notions of musical proportions and musical relationships were challenged by the concrete reality of the sounds themselves. After two large early projects (‘Machine’ and ‘Journey-into-Space’) I settled on a way of working with complex materials based on the idea of sound-transformation, in the work ‘Red Bird’.

‘Red Bird’ treats its real-world sounds (birds, animals, voices, machines) as metaphors in a mythical landscape, one sound type transforming into another to create a form somewhere between music and mythic narrative. In fact the approach was suggested by reading Lévi Strauss’s ‘The Raw and the Cooked’, where he uses music as a kind of metaphor for his structural anthropological approach to myth. My idea was to turn this on its head and create a contemporary myth (about industrialism, determinism, ‘freedom’ and creativity) using sound and musical structure. The sound-transformation approach (very difficult to achieve in the analogue studio) was thus partly inspired by a political idea … the possibility of changing the social world.

With the advent of computers, very precise control of sound structure made it possible to adopt sound-transformation as a general approach to musical materials. I think of this as a generalization of the classical ideas of variation and development, extended into the entire spectral domain.

USO: You prefer to use samples generated from vocal sources that you perform yourself with extended techniques. Could you describe the creation process?

TW: Not strictly. I very often use the human voice (sometimes my own, sometimes live performers like ‘Electric Phoenix’ or ‘Singcircle’, sometimes voices from the media, or from the local community), for two reasons. On the one hand the voice is much more than a musical instrument. Through speech it connects us to the social world, and thence to the traditions of poetry, drama, comedy, etc. It reveals much about the speaker, from gender, age and health to attitude, mood and intention, and it also connects us with our primate relatives. The listener recognizes a voice even when it is minimally indicated, or vastly transformed, just as we recognize faces from very few cues. And the average listener will be affected by e.g. a vocal multiphonic, an immediate empathic or guttural link, in a way in which they will not be affected by a clarinet multiphonic (appreciated more in the sphere of contemporary-music aficionados).

At the same time, apart from the computer itself, the voice is the richest sound-producing ‘instrument’ that we have, generating a vast variety of sounds from the stably pitched, to the entirely noisy, to complex rapidly-changing multiphonics or textures of grit and so on. This is a rich seam to mine for musical exploration.

My musical creation process depends on the particular work. Normally I have a general idea (a ‘poetic’) for the work. For example, ‘Imago’ has the idea of surprising events arising from unlikely origins (just like the imago stage of insect metamorphosis, the butterfly, emerges from the unprepossessing pupa). For ‘Globalalia’ the idea is to express the connectedness of human beings by exploring the fundamental elements of all languages, the syllables. I also have some general notions of how musical forms work … clearly stating the principal materials is important for the listener in a context where a traditional musical language is not being used; establishing key moments or climaxes in the work; repetition and development, and recapitulation of materials, especially leading towards and away from these foci, and so on. Repetition and (possibly transformed) recapitulation are especially important to the listener to chart a path through an extended musical form.

But all sound materials are different, and it is not possible to predict, except in the most obvious ways, what will arise when one begins to transform the sounds. So I spend a lot of time exploring and playing with the sources, transforming them, and transforming the transformations, and gradually a formal scheme appropriate to what I discover, and to those particular materials, crystallizes in the studio.

USO: How was your experience working at IRCAM for "Vox 5"?

TW: My first visit to IRCAM was the most exciting learning experience of my life. Discovering the possibilities opened up by the analysis and transformation of sounds using software, learning about psycho-acoustics in Steve McAdams's inspiring lectures, and mingling with high-powered researchers bursting with new ideas. As you probably know, I was offered the opportunity to make a piece during my 1981 visit, but then the entire hardware base of IRCAM changed and it was not until 1986 that I could actually begin work on ‘VOX-5’, a piece growing out of the experience of ‘Red Bird’. I was assigned a mentor (Thierry Lancino), but he soon realized that I could probably manage on my own (my science background at school meant that programming came naturally to me), and I soon discovered the Phase Vocoder, taking apart the data files to discover what was in them, and using this knowledge to begin designing sound-morphing tools based on the transformation of that data.


Friday, January 23, 2009

Australia: Featurette - Sound Design

Wayne Pashley, supervising sound designer for Australia, gives an overview of how he combines on-set recordings and post production sound effects to let viewers see the movie with their ears.


Saturday, January 17, 2009

InharmoniCity (ZBF-0108): now shipping!

We are very happy to announce that the dvd-video 'InharmoniCity' is out. It's a co-production by U.S.O. Project with visual artist Selfish. Limited edition hard copies (300) are available via Synesthesia Recordings or via Zerofeedback (PayPal - $16). The press release is available here (pdf).

An excerpt from the dvd 'InharmoniCity':

Girl Running - U.S.O. Project + Selfish on Vimeo.


1) Girl Running (08.48)
2) Invisible Words (12.20)
3) ...from the Past...out of the Future... (40.36)

The dvd is packaged in a 2-panel wallet.


Best played at high volume in the dark.

Thursday, January 15, 2009

An Argument For Reinventing The Term "Sound Design"

Below is an email Randy Thom is sending to people who work in film sound.

In his own words: "My previous post on the list about "Dialog As Sound Design" is related to my interest in broadening the scope of sound design to encompass all creative work in sound. I don't expect this little piece to change people's minds, but I hope it will help start a dialog."

[via Sound Article List]

Hi, All, and Happy New Year!

I'm sorry for this mass mailing, but I couldn't think of a better or less intrusive way to air some ideas I've been puzzling over, regarding issues we all know about, but rarely get to discuss in a formal way. At the center of it is that relatively new, controversial and ambiguous term "Sound Design." To begin to frame some of the issues I want to say something about another historically controversial screen credit. I don't bring it up to suggest that it is an exact model for our current situation in Sound, or a predictor of how the credit Sound Design will eventually be seen, but simply because there are some parallels between the two that are interesting.

In 1939 William Cameron Menzies was the first Art Director to receive the screen credit "Production Designer" on a film that made a rather big splash. The film was "Gone With The Wind." Many other Art Directors were appalled at the new credit, and the acrimony over the "Production Design" continued for several decades. Why, some said, is this new title necessary? Menzies is doing exactly the same job he did when he called himself an Art Director. Is he trying to aggrandize his position? Is he trying to make himself seem better, more desirable than we mere Art Directors are? In other words... Is this a scheme to steal our clients?... some wondered.

In the mid 1980's Richard Beggs and I (presumably because Murch and Burtt weren't available : ) were asked to come to a meeting of the Executive Committee of the Academy's Sound Branch so that we could explain what a sound designer is. I honestly don't remember what we said, but I suspect we did an appallingly bad job of it. What follows is something close to what I think we should have said...

The credit first appeared on a film, actually two films, in 1979. On "Apocalypse Now" Walter Murch took the screen credit "Sound Design and Montage." About the same time Ben Burtt got the "Sound Design" credit on the sequel to "American Graffiti." But they weren't the first to get that credit, they were just the first to get it on a film.
Earlier, some sound people working on Broadway plays in New York had received the credit "Sound Design," and Dan Dugan, who was doing the same kinds of work in the San Francisco theater scene, took that credit as well, before it appeared in the world of film.

Contrary to what many people think, the work that "Sound Designers" do is not new. It was being done long before anyone called him or herself a "Sound Designer." People in film had been doing what "Sound Designers" do at least two generations before "Apocalypse Now." The crucial question then: what is that work? What does a Sound Designer do? Well, we know what sound is, and we know what design is, so shouldn't it be clear? Maybe, but it's not.

When Walter and Ben took that credit they saw it as something very similar to what a few Supervising Sound Editors and Mixers had indeed already done... work with the Director to shape the sound of the film beginning very early in the process, as early as production or even pre-production, and continue that work all the way through post production. Since those opportunities to work on a project from pre-production through post were very rare, they thought it deserved a new name, and Sound Design seemed appropriate. Among others, they used Orson Welles and the way he worked with his sound crews as a model. The idea was that if sound was to be a full collaborator somebody was needed to work with the Director from nearly the beginning of the project so that sound ideas could influence creative decisions in the other crafts before it was too late, so that Sound could be a driving creative force rather than a band-aid. That was the grand theory in a nutshell, but it didn't catch on.

The "sexy" term Sound Design caught on in the movie biz, but with a very different and unfortunately much narrower meaning. Somehow Sound Design in film came to be associated exclusively with things "high tech," with using 24-track tape recorders and MIDI in the early days, and a little later plug-ins. Basically the grand notion of a sound collaborator for the Director morphed into "gadget specialist."
A Sound Designer became something like a high tech audio grease monkey, a nerd you hired to electronically fabricate sounds you couldn't find in the effects library. Lots and lots of people started calling themselves Sound Designers. It quickly became an easy way to get cheap attention, whether the attention was deserved or not. Established Supervising Sound Editors and Mixers, especially in LA, justifiably saw many of these newly minted "Sound Designers" as con artists out to steal their clients with a few slick techy moves.

In my view, the word design applies to all the creative work we do in sound. Fabricating and manipulating sounds is sound design. Editing existing sounds is sound design. Brilliant sound design can be done using unmodified sound effects from the most basic commercial library. Breathtakingly beautiful sound design can be done and has been done with dialog alone, no sound effects at all. Supervising is also design. It's a crucial kind of sound design in my opinion because it consists of guiding the creative process. And finally, Mixing is design. Despite the sometimes questionable use of the term by "wannabees," I think Sound Design is a credit very worth preserving. The "grand notion" is worth preserving and spreading as well. We should all be pushing, to the degree we can, to make Sound a full creative collaborator in the storytelling process.

The most important part of the work that Editors and Mixers do is making creative decisions. The word "design" makes it easy to distinguish us from engineers and administrators, whose work is extremely important but not focused on artistic creativity. Oscars in the Sound categories are awarded to those people who make the final creative decisions for the Director's approval. If someone has acted as a creative supervisor for sound all the way through post production until the end of the final mix I feel strongly that he or she should be eligible for Oscar nomination regardless of whether the person's screen credit was "Supervising Sound Editor," "Mixer," or "Sound Designer." The borders between editing and mixing are rapidly disappearing as technology allows both kinds of work to be done with nearly identical machines. Given that "mixing" and "editing" are becoming one thing, wouldn't it be better if the people supervising the creative decisions in Sound were called "Supervising Sound Designers?"

Randy Thom
January 13, 2009

Max for Live

After two years of development, Cycling '74 and Ableton have finally revealed their integration project for extending Live using Max. Don't miss David Zicarelli's perspective on the Max for Live project, and read about the new Max tools for building Live devices. Here's the press release.

[update: Create Digital Music - Make Max Patches that Integrate with Ableton]

Ableton proudly presents...


And, if you'll be in Los Angeles, please take note of the following:

Touch Controlling Ableton Live: a seminar presented by Lemur experts Gareth Williams (aka Raw Hedroom) and Bryant Place (aka CPU), focusing on the unique interfacing capabilities of the JazzMutant multitouch controller with Ableton Live.
Topics include:
  • "Performing a powerful hybrid live/DJ set with the Lemur"
  • "Using the Lemur with Live as a powerful sound design tool"
  • "Natural and musical sequencing with Lemur and Live"
Ableton Live User Group-Los Angeles
January 22, 2009 - 8 p.m.
SAE Institute | DFC Theater | 6565 Sunset Blvd., Suite 100
Los Angeles, CA 90028

[read more - pdf]

Monday, January 12, 2009

Kyma X & Pacarana/Paca: Now shipping!

In 1990, Symbolic Sound revolutionized the sound design and music software industry with the introduction of Kyma, a graphical modular software sound design environment accelerated by the software-reconfigurable Capybara multi-processor sound computation engine.
The flagship model Pacarana delivers 150% of the power of a fully-loaded Capybara-320 for less than half the price ($4,402). The entry-level Paca costs less than a basic Capybara-320 ($2,907), yet the new model is five times more powerful.

On the back of the Pacarana is all the high-speed connectivity you need for digital audio production: two FireWire 800 ports, two USB ports, a 100Base-T Ethernet jack, and more.
A DC power plug connects the Pacarana to an external power supply that auto-senses the voltage and frequency of the AC power source no matter where in the world you travel.
The Pacarana communicates with the Kyma X software running under Mac OS or Windows via FireWire 800 (IEEE 1394b) or an 800-to-400 adapter cable.

Audio and MIDI input and output are handled via an external FireWire or USB converter or, for current Kyma owners, through a Capybara-320 with the Flame FireWire I/O. Additional USB MIDI controllers like keyboards or fader boxes connect via the second USB port.
Pacarana is one rack-unit high and weighs approximately 1.7 kg (about 3.7 lbs). The entry-level Paca is the same width and height as the pro model but is a few inches shorter.

[more info via]
[Audio News Room]

Saturday, January 10, 2009

The Klangdom (Sound dome) in the ZKM_Cube

High-tech instrument for spatialization at the ZKM | Institute for Music and Acoustics.

ZKM "Cube" (photo by Marc Wathieu)

The Klangdom was completed in 2006 after three years of conception, planning and installation. Forty-three Meyer Sound speakers are attached three-dimensionally to an elliptical rig system in the room, and four additional ones are placed on the ground. With this unique speaker instrument, as the head of the institute Ludger Brümmer explains, "complex polyphonic space-sound-movements can be displayed realistically and perceived from every spot in the room".

The dome's control software, Zirkonium, can be driven via the Open Sound Control protocol or used as an Audio Unit plug-in. Thus, users can incorporate Zirkonium's panning capabilities into their preferred environment (including Digital Performer and Logic).
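For readers curious what "controlled via Open Sound Control" looks like on the wire, here is a minimal sketch of how an OSC message is packed, following the OSC 1.0 specification (null-terminated strings padded to 4-byte boundaries, a type tag string, then big-endian arguments). The address path `/pan/az` and its arguments are hypothetical; consult the Zirkonium manual for the actual address space.

```python
import struct

def osc_message(address, *args):
    """Encode a basic OSC message with float arguments, per OSC 1.0."""
    def pad(b):
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)

    msg = pad(address.encode("ascii"))               # address pattern
    msg += pad(("," + "f" * len(args)).encode("ascii"))  # type tag string
    for a in args:
        msg += struct.pack(">f", float(a))           # big-endian 32-bit float
    return msg

# Hypothetical panning message: azimuth 45 degrees, elevation 0.5
packet = osc_message("/pan/az", 45.0, 0.5)
```

Sending `packet` over UDP to the software's listening port is then a one-liner with the standard `socket` module.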

User interface of the control software Zirkonium, studio window

[more via]
[Zirkonium Manual -PDF]
[Zirkonium - Installer]
[Zirkonium - Max/MSP and SuperCollider Examples]

Wednesday, January 07, 2009

Michel Chion: Audition Musicale

Michel Chion will present an audition musicale and a talk about his work at the University of Edinburgh on 21st January 2009.

Michel Chion was born in 1947 in Creil (France). After literary and musical studies, he began in 1970 to work for the ORTF (French Radio and Television Organization) Service de la recherche, where he was assistant to Pierre Schaeffer at the Conservatoire national de musique in Paris, producer of broadcasts for the GRM, and publications director for the Ina-GRM, of which he was a member from 1971 to 1976.

He also works as a theoretician in a new area, the systematic study of audio-visual relationships, which he teaches at several centres (notably at the Université de Paris III, where he is an Associate Professor) and film schools (ESEC, Paris; DAVI, Lausanne), and which he has developed in a series of five books. He has also written on Pierre Henry, François Bayle, Charlie Chaplin, Jacques Tati and David Lynch, as well as on diverse subjects in music and film.
After dedicating his Guide des objets sonores to the ideas of Schaeffer, in 1991 he published "L'art des sons fixés", in which he proposes, in order to properly designate this music, a return to the term ‘musique concrète' in its initial non-causal sense. His redefinition insists upon the effects particular to the fixation of sound, a term which he proposes in place of recording.


An interview with Michel Chion

[excerpt from]

When did you join the GRM yourself?

In 1971, not as a student but as a member. My first job was to be Schaeffer's assistant for his classes at the Conservatoire. It was a very original class which didn't only focus on electroacoustic music, but all forms of music. I prepared his lessons, taught some of the classes, and set assignments for the students, which Schaeffer graded. Composition assignments, exercises in montage. The course was about music in general, including non-Western musics and music therapy, and I thought it was quite original. Schaeffer wanted it that way. He wanted students to ask questions on the music's background, its social origins and function. In class it was more a question of participating and debating. So I was his assistant for a year, and then someone else took over.

What led you to join the GRM in the first place?

I'd already read Schaeffer's Traité des Objets Musicaux, and found it more honest, direct and relevant than certain books by Boulez, which I thought avoided a lot of questions relating to perception. But Schaeffer's book was 700 pages long, so to make it more widely known I wrote a kind of abridged version called Guide des Objets Sonores. That's why I joined the GRM. Schaeffer's book still hasn't been translated into English, by the way, but mine has.

You referred to "electroacoustic music" a minute ago. What name do you prefer for this music?

Well, as you know, the terminology has changed. At the beginning of the 70s "electroacoustic" meant something on magnetic tape. But live electronic music already existed, and more and more composers started adding live instruments. Then you had people like Jean-Michel Jarre saying he was making electroacoustic music too, and people started thinking electroacoustic music had to be live. Anyone doing it on tape was kind of retarded. That was the ideology at IRCAM in the mid 70s, saying people who made music on magnetic tape (or later on a computer) were somehow lagging behind, or didn't understand that it could be done live. Well, I've always been of the opinion that there are things you can't do live, or rather, things you can do better on tape. It's like someone who doesn't understand that cinema is the art of fixing things, and tries to make a live film, with actors acting live in front of people and being filmed at the same time. Obviously that's absurd. In the same way I think there are many pieces of live electronic music that don't make sense. So after the mid 1970s, the terminology changed, and François Bayle came up with the idea that we should find a term specifically to describe music on tape. The problem for me was that he found a word – "acousmatic" – which was understood by only a handful of people in France and by nobody else in the world. When I mentioned acousmatic music outside France, nobody had a clue what I was talking about. They couldn't find the word in the dictionary. It didn't exist. You still can't find it in any French dictionary today. In the 1980s I suggested we return to the term "musique concrète", because it's known throughout the world. It's in all the dictionaries. Musique concrète, the art of fixed sounds – I wrote a book on the subject. I thought it was important that members of the public should understand what a work of musique concrète consisted of. 
So, yes, I still call it musique concrète, and that applies to François Bayle as well as Karlheinz Stockhausen.

When did you begin writing about cinema yourself?

That started in about 1980, when I was 33. Pierre Schaeffer told me about an offer he'd had to lecture on sound in cinema at a film school, which at the time was called IDHEC, Institut de Hautes Etudes Cinématographiques (now called FEMIS). He declined, but told them Michel Chion could do it! I'd already written an article on the relationship between sound and image, so I agreed. It was right about that time that something very important happened, with the arrival of videotape. Prior to that, if you wanted to study a film it was difficult, because you had to borrow a copy of a print and sit in a cutting room for three days taking notes. But by 1980 video recorders had appeared, and you could record a film from the television. Which I did. The first thing I analysed was a film by Bresson I recorded from the television.

Which one?

Un condamné à mort s'est échappé, which is magnificent, and very good for showing off-screen sound. It's the story of a man in prison, and we hear the sounds as he hears the sounds. It's a real lesson in off-screen sound, and a very beautiful film.
So with a VCR you can stop the image, analyse the sound, listen to it alone or watch the image without it. Until then, few people had studied film like that. When I started, it was an area in which there were few books published, and most of those were by people coming from a technical or literary background. I'm one of only a few writers in France who came from a musical background, and I think you have to understand music to be able to talk about the use of sound in cinema. So I started writing articles and ended up at the Cahiers du Cinéma. I suppose I'm best known as a writer. But it all started because of Pierre Schaeffer.

[read the full interview - via]

Tuesday, January 06, 2009

Ben Burtt on Star Trek?

"According to studio insiders, one of the last people to join the Star Trek movie team was Academy Award-winning sound designer Ben Burtt, best known for his work on the Star Wars franchise. Burtt worked on all six Star Wars films and created most of their iconic sounds, including Chewbacca’s roar, the lightsabers and the droids. Burtt also worked as a sound designer on all of the Indiana Jones films, E.T., and last year’s Pixar hit Wall-E.
Burtt is working alongside Mark Stoeckinger, who was the sound editor for J.J. Abrams’ Mission: Impossible III. According to the source, they needed all the help they could get given the sheer number of new sounds that had to be created for this new Star Trek. Even though the film re-uses some iconic TOS sounds (like the Red Alert klaxon), all the sounds still need to be recreated, and there are many more new sounds as well. Apparently Burtt is a huge Trek fan and was thrilled to get a chance to be involved."


[The "twang" from Star Trek and Star Wars compared]

Sunday, January 04, 2009

Curtis Roads: Microsound (Lecture)

Curtis Roads teaches and pursues research in the interdisciplinary territory spanning music and technology. He is currently Professor of Media Arts and Technology + Music, University of California Santa Barbara.

This lecture presents an overview of several projects pursued over the past five years in laboratories at UC Santa Barbara. All this research is based on a scientific model of sound initially proposed by Dennis Gabor (1946) and soon afterward extended to music by Iannis Xenakis (1960). Granular analysis (also called atomic decomposition) and granular synthesis have evolved over more than five decades from a paper theory into a broad range of applied techniques. Specific to the granular model is its focus on the microsonic time scale (typically 1 to 100 ms). Granular methods treat sound as a stream of acoustic particles in both the time domain and the time-frequency (TF) domain.
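To make the "stream of acoustic particles" concrete, here is a toy time-domain granular synthesis sketch in Python. It is not Roads's own implementation, just an illustration of the basic idea: extract short windowed grains (here 50 ms, within the 1-100 ms microsonic scale he mentions) from a source and scatter them across the output by overlap-add. The grain length, density and sine-tone source are all illustrative parameters.

```python
import math
import random

def hann(n):
    """Hann window of length n, used to smooth each grain's edges."""
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / (n - 1)) for i in range(n)]

def granulate(source, grain_ms=50, density=100, length_s=1.0, sr=44100):
    """Scatter short windowed grains of `source` across the output
    at random positions and sum them (overlap-add)."""
    grain_len = int(sr * grain_ms / 1000)    # 50 ms -> 2205 samples at 44.1 kHz
    window = hann(grain_len)
    out = [0.0] * int(sr * length_s)
    n_grains = int(density * length_s)       # grains per second * duration
    for _ in range(n_grains):
        src_pos = random.randrange(0, len(source) - grain_len)
        out_pos = random.randrange(0, len(out) - grain_len)
        for i in range(grain_len):
            out[out_pos + i] += source[src_pos + i] * window[i]
    return out

# Illustrative source: one second of a 220 Hz sine tone
sr = 44100
tone = [math.sin(2 * math.pi * 220 * t / sr) for t in range(sr)]
cloud = granulate(tone, grain_ms=50, density=100, length_s=1.0, sr=sr)
```

With a pitched source like this, the result is a shimmering "cloud" of the original tone; replacing the random positions with a moving read pointer gives the familiar time-stretching variant of the technique.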

960 East 3rd Street
Los Angeles, CA, 90013 USA
7 PM - February 11th, 2009