
Monday, January 26, 2009

An interview with Trevor Wishart - pt.2

by Matteo Milani, Federico Placidi - U.S.O. Project, January 2009

(Continued from Page 1)

USO: You have been involved in software development for several years. Why did you choose to shape your own tools? Can you tell us about the genesis and the current development of the Composer Desktop Project?

TW: There are two main reasons. The first is poverty (!). Most of my life I’ve been a freelance composer, and being a freelance experimental composer in England is seriously difficult! When I first worked with electronics you were dependent on custom-built black boxes like the SPX90. The problem for me was, even if I could afford to buy one of these, I could not afford to upgrade every other year, as University music departments could. It became clear that the solution to this would be to have IRCAM-like software running on a desktop computer. A group of like-minded composers and engineers in York got together (in 1985-6) as the Composers’ Desktop Project, and began porting some of the IRCAM/Stanford software to the Atari ST (the Mac was, then, too slow for professional audio). We then began to design our own software.

The second reason is that creating one’s own instruments means you can follow your sonic imagination wherever it leads, rather than being restricted by the limitations of commercially-focused software. You can develop or extend an instrument when you feel the need to (not when the commercial producer decides it’s profitable to do so), and you can fix it if it goes wrong!

The original ports onto the Atari ran very slowly: e.g. doing a spectral transformation of a 4-second stereo sound might take 4 minutes at IRCAM, but took 2 days on the Atari. However, on your own system at home, you could afford to wait … certainly easier than attempting to gain access to the big institutions every time you wanted to make a piece.

Gradually PCs got faster – even IRCAM moved onto Macs. The CDP graduated onto the PC (Macs were too expensive for freelancers!) and the software gradually got faster than realtime.

The CDP has always been a listening-based system, and I was resistant for a long time to creating any graphic interface – much commercial software had a glamorous-looking interface hiding limited musical possibilities. However, in the mid-90s I eventually developed the ‘Sound Loom’ in Tcl/Tk (this language was particularly helpful as it meant the interface could be developed without changing the underlying sound-processing programs). The advantages of the interface soon became apparent, particularly its ability to store an endless history of musical activities, save parameter patches, and create multi-processes (called ‘Instruments’). More recently I’ve added tools to manage large numbers of files. ‘Bulk Processing’ allows hundreds of files to be submitted to the same process, while ‘Property Files’ allow user-defined properties and values to be assigned to sounds, and sounds can then be selected on the basis of those properties. And there are more and more high-level functions which combine various CDP processes to achieve more complex results.
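
To make those two facilities concrete, here is a minimal sketch in Python of how bulk processing and property-based selection might work. It is an illustration only, not CDP’s actual implementation: the property-file format, the file names and the processing command are all invented for the example.

    # Toy model of 'Bulk Processing' and 'Property Files' as described above.
    # The property-file format (one sound per line, followed by key=value
    # pairs) is invented here purely for illustration.
    import subprocess
    from pathlib import Path

    def load_properties(propfile):
        # e.g. a line might read:  bell1.wav source=metal pitch=high
        props = {}
        for line in Path(propfile).read_text().splitlines():
            if not line.strip():
                continue
            name, *pairs = line.split()
            props[name] = dict(p.split("=", 1) for p in pairs)
        return props

    def select(props, **wanted):
        # Pick the sounds whose user-defined properties match.
        return [f for f, p in props.items()
                if all(p.get(k) == v for k, v in wanted.items())]

    def bulk_process(infiles, command, outdir):
        # Submit every selected file to the same external process;
        # 'command' stands in for any command-line sound-processing tool.
        Path(outdir).mkdir(exist_ok=True)
        for f in infiles:
            outfile = str(Path(outdir) / Path(f).name)
            subprocess.run(command + [f, outfile], check=True)

Hypothetically, bulk_process(select(load_properties("sounds.props"), source="metal"), ["someprocess"], "out") would then submit every sound tagged as metallic to the same process.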


USO: How did you develop your skills in programming algorithms for the spectral domain?

TW: I studied science and maths at school, and did one term of university maths (for chemists). Mathematics has always been one of my hobbies – it’s beautiful, like music. When I was 15, my school organized a visit to the local (Leeds) university computer, a vast and mysterious beast hidden in an air-conditioned room, where it was fed by punched-card readers. I wrote my first Algol programs then. I only took up programming later, as a student at York, when Clive Sinclair brought out his first ultra-cheap home computer. I taught myself BASIC, graduated to the other Sinclair machines, up to the final ‘QL’, which I used to control the sound-spatialisation used in VOX-1. Later I graduated to C.

I’ve never been formally taught to program or taken a proper course, but I’ve been lucky enough to hang around with some programming wizards, like Martin Atkins, who designed the CDP’s sound-system, and Miller Puckette at IRCAM, from whom I picked up some useful advice … but I’m still only a gifted amateur when it comes to programming!


USO: Could you explain your preference for offline processing software over real-time environments?

TW: Offline and realtime work are different both from a programming and from a musical perspective. The principal thing you don’t have to deal with in offline work is getting the processing done within a specific time: a realtime program must be efficient and fast, and must handle timing issues, whereas offline all this is simply irrelevant. Offline, you can also take your time to produce a sound-result, e.g. a process might consult the entire sound and make some decisions about what to do, run a second process, and on the basis of this run a third, and so on. As machines get faster and programmers get cleverer (and if composers are happy for sounds to be processed offline and re-injected into the system later), then you can probably get round most of these problems.
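
To illustrate that multi-pass way of working (and not any actual CDP process), here is a minimal Python sketch: a first pass reads the entire sound and measures its peak, a decision is taken on the basis of that whole-file analysis, and a second pass then applies it. It assumes a 16-bit PCM WAV file, and the file names are placeholders.

    # Pass 1: consult the entire sound before deciding what to do.
    # Assumes 16-bit PCM; "in.wav" and "out.wav" are placeholder names.
    import wave
    import numpy as np

    with wave.open("in.wav", "rb") as w:
        params = w.getparams()
        samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

    peak = int(np.abs(samples.astype(np.int32)).max())  # whole-file measurement

    # The decision: normalise to 90% of full scale, based on the analysis.
    gain = 0.9 * 32767 / peak if peak else 1.0

    # Pass 2: apply the decision. A third pass could analyse the result and
    # trigger further processing, and so on; there is no deadline to meet.
    with wave.open("out.wav", "wb") as w:
        w.setparams(params)
        w.writeframes((samples * gain).astype(np.int16).tobytes())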

But the main difference is aesthetic. If you’re processing a live event, you have to accept whatever comes in through the mike or input device. However precise the score, the sounds coming in will always be subtly different at each performance. So the processes you use have to work with a range of potential inputs.
The main problem with live music is that you have to be the sort of person who likes going to concerts, or is happy in the social milieu of the concert. And many people are not. This can be changed partly through education, but more significantly by making the concert world more friendly to more groups of people, e.g. by taking performances to unusual venues (I’ve performed in working men’s clubs, schools, and so on).

Working offline you can work with the unique characteristics of a particular sound – it may be a multiphonic you managed to generate in an improvising session but cannot simply reproduce at will, or it may be the recording of a transient event, or of a specific individual, that cannot be recreated in a performance. Moreover, the absence of performers on stage might seem like a weakness, but it has its own rewards. Compare theatre and film. A live performance has obvious strengths – contact with living musicians, the theatre of the stage, etc. – but you’re always definitely there in the concert hall. In the pure electro-acoustic event we can create a dream-world which might be realistic, abstract, surreal, or all these things at different times – a theatre of the ears where we can be transported away from the here and now into a world of dreams. Electro-acoustic music is no different from cinema in respect of its repeatability, except that sound-diffusion, in the hands of a skilled diffuser, can make each performance unique, an interaction with the audience, the concert situation and the acoustics of the space. Divorcing the music from the immediacy of a concert stage allows us to explore imaginary worlds, conjured in sound, beyond the social conventions of the concert hall.

