Tuesday, February 17, 2009

An interview with James A. Moorer, pt.3

by Matteo Milani, U.S.O. Project, February 2009

(Continued from Page 2)

MM: Can you talk about the synthesized arrows in Indiana Jones and the Temple of Doom?

JM: This was done by linear prediction. Ben had recorded the sounds of arrows going by, but they were too fast. I took 100 ms from the middle of one of those sounds and created a filter of order 150 from it. When driven by white noise, it made the same noise as the arrow, but continuing forever. He then put that sound into the Doppler program to produce the sounds of the arrows flying by.
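
The technique he describes, fitting an all-pole filter to a short excerpt and then driving it with white noise for as long as needed, can be sketched in a few lines. The snippet below is my own illustration, not Lucasfilm code: it uses a synthetic stand-in for the arrow recording and a low filter order for brevity (Moorer's filter was order 150).

```python
import numpy as np
from scipy.signal import lfilter

def lpc(x, order):
    """All-pole LPC fit: autocorrelation method + Levinson-Durbin recursion."""
    r = np.correlate(x, x, mode="full")[len(x) - 1 : len(x) + order]
    a = np.zeros(order + 1)
    a[0] = 1.0
    err = r[0]
    for i in range(1, order + 1):
        acc = r[i] + np.dot(a[1:i], r[i - 1 : 0 : -1])
        k = -acc / err                      # reflection coefficient
        a[1:i] = a[1:i] + k * a[i - 1 : 0 : -1]
        a[i] = k
        err *= 1.0 - k * k                  # residual prediction error
    return a, err

fs = 8000
rng = np.random.default_rng(0)
# Stand-in for 100 ms of the recorded arrow "whoosh": resonant filtered noise.
excerpt = lfilter([1.0], [1.0, -1.6, 0.81], rng.standard_normal(int(0.1 * fs)))

a, err = lpc(excerpt, order=16)             # Moorer used order 150
gain = np.sqrt(err / len(excerpt))
# Drive the fitted all-pole filter 1/A(z) with fresh white noise:
# the spectrum matches the excerpt, but the sound continues indefinitely.
extended = lfilter([gain], a, rng.standard_normal(fs))   # 1 s shown here
```

The autocorrelation method guarantees a stable (minimum-phase) synthesis filter, which is why the resynthesized noise can run forever without blowing up.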

In addition to being a numbers prodigy, ASP is quite garrulous. It can synthesize speech, the sounds of musical instruments, and even special effects by the same mathematical techniques. In Indiana Jones, for example, there is a hang-onto-your-seat scene in which Jones and his pals, while dangling precariously from a rope bridge slung across a deep chasm, come under attack by a band of archers. Lucasfilm technicians had recorded the sound of a flying arrow in a studio, but they discovered that the whistling noise did not last long enough to match the flight of the arrow on the film.
ASP came to the rescue. Moorer copied 25 milliseconds from the middle of the one-and-a-half-second recording and spliced the duplicate sounds to both ends, all electronically. Then he manipulated the arrow's noise so that it faded as the missile moved from left to right across the screen. To ensure total accuracy, Moorer even used ASP to include a Doppler shift - the change in pitch from high to low heard when an object sweeps rapidly past. Thus, as the arrow flies by actor Harrison Ford's head the audience hears a subtle change of frequency in its noise. In this way the sound track dramatically increases the audience's sense of the hero's peril.
[excerpt from Discover Magazine, August 1984]

MM: How did Ben Burtt use the machine at its fullest on his projects?

JM: There were a number of sounds in "Temple of Doom" that were done using the ASP. The main ones were the arrow sounds and the sound of the airplane crashing near the beginning of the movie, although there were many, many other sounds throughout the film that were processed on the ASP.

MM: Do you recall the genesis of the 'SoundDroid Spotter'? Could you describe the early "spotting" system for searching sound effects libraries inside SoundDroid?

JM: People said they wanted a less expensive system. I designed and built a 1-board processor with a TI DSP chip on it for doing sound effects spotting. TI (Texas Instruments) made a DSP (digital signal processor) chip called the 320. It was a bit like the Motorola 56000 that we used later at Sonic Solutions, except that it could only use 16-bit samples.

MM: The first digital devices have landed in recording studios as outboard gear. Does it make sense nowadays to have custom DSP to spread calculation outside the computer?

JM: No. It didn't then, and it doesn't now. The first digital devices were outboard devices, and that never made any sense. For instance, they could not be automated. You could have a nice, big mixing desk with all this fancy automation, but none of your outboard equipment could be turned on or off using the mixer automation. Similarly, the tape recorder, the mixing desk, and the outboard equipment were all separate things. There was no way to make sure that your automation program for the mixing desk was the correct one for the audio tape you put on the tape recorder. If you had to pick up a project a year later, it was impossible to get all the same settings, since you never remembered what outboard equipment was used or what special patching might have been present between the tape recorder and the mixing desk. If you wanted to make the "disco mix" a year later, there was no way to do it. It was like starting all over. It created more problems than it solved. Now, you do everything through one program, all the effects are perfectly automated, and all the automation is combined with the editing into one integrated project. The mixing desk with an external tape recorder and "uncontrolled" outboard devices never made any sense to me.

MM: You're the real composer of the original THX logo. Michael Rubin wrote in his book "Droidmaker" that it was first the startup sound of the ASP, correct? Who came up with the idea of a startup sound, like the Apple Mac's? Did you name it Deep Note (or "The Chord")?

JM: It was not the startup sound of the ASP. It was a program that you would load into the ASP which, when started, synthesized the sound in real time. I had named it Deep Note. I have no idea who came up with the Apple Mac's startup sound. The THX logo was started because the THX sound system was to come out in time for the release of "Return of the Jedi". They were doing a 35-second animation that would go before the feature was presented in the theater. The director of that animation came to me and said that he wanted a sound that "comes out of nowhere and gets really, really big". I told him that I knew how to do that, and two weeks later, we had the THX logo theme.

Jim Kessler found Andy Moorer in the second floor kitchen when he pitched him on his vision for a THX logo. He [...] wondered if the ASP could generate a cool sound. "It's gotta swirl around and it should be like this really big one dynamic thing," Kessler explained, "and make sure we all get the stereo high range and low range." Being a musician and composer himself, Moorer was excited to have a musical project. [...] Moorer retreated to the prototype ASP he had built. It was the only device of its kind in the world. [...] He envisioned a power chord that would emerge from a cluster of sounds. He built a single note from thirty-two voices of synthesized sounds. Moorer called the sound "Deep Note" (possibly one of the most recognizable sound logos ever, and certainly the most recognizable piece of completely digitally synthesized music. It plays more than 4,000 times every day).

Thirty-two simultaneous voices of sound from a synthesizer in real time would be a computational challenge, even by modern standards. Moorer programmed the computer to generate a cluster of sounds between 200 and 400 Hertz that sort of "wandered up and down," but soon resolved into a big carefully planned chord based on a note of 150 Hertz, a little below D. The exact trajectories of the "wandering" sounds were created randomly by the ASP, so every time Moorer ran the program, the score was slightly different.
[excerpt from Droidmaker]

MM: The task of re-creating the THX sound is a standard homework assignment at Stanford's CCRMA (Center for Computer Research in Music and Acoustics) in an introductory course entitled "Fundamentals of Computer-Generated Sound." Every year, twenty or so students attempt to re-create the sound using LISP and Common Lisp Music. How much influence has your musical preparation had in the "composition" of "The Deep Note"?

JM: I knew exactly what it was going to sound like before I made a sound.

The THX logo theme consists of 30 voices over seven measures, starting in a narrow range of 200 to 400 Hz and slowly diverting to preselected pitches encompassing three octaves. The 30 voices begin at pitches between 200 Hz and 400 Hz and arrive at pre-selected pitches spanning three octaves by the fourth measure. The highest pitch is slightly detuned, while there are double the number of voices on the lowest two pitches.
[via tarr.uspto.gov]

I set up some synthesis programs for the ASP that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the "score" for the piece.
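
The oscillator scheme he describes, a stored single-cycle waveform read out at a varying rate, is the classic wavetable oscillator. Below is a minimal numpy sketch; the 1/h harmonic weights and the table size are my invention, standing in for the waveform extracted from the digitized cello tone with its 12 harmonics:

```python
import numpy as np

TABLE_SIZE = 2048
# Build one cycle from 12 harmonics. The 1/h amplitude roll-off is an
# invented stand-in for the spectrum of the digitized cello tone.
harmonics = np.arange(1, 13)
phase_idx = np.arange(TABLE_SIZE) / TABLE_SIZE
table = sum((1.0 / h) * np.sin(2.0 * np.pi * h * phase_idx) for h in harmonics)
table /= np.max(np.abs(table))              # normalize to [-1, 1]

def osc(freq_hz, dur_s, fs=22050):
    """Phase-accumulator wavetable oscillator with truncating table lookup."""
    n = int(dur_s * fs)
    phase = np.cumsum(np.full(n, freq_hz / fs)) % 1.0   # phase in [0, 1)
    return table[(phase * TABLE_SIZE).astype(int)]

tone = osc(150.0, 0.5)                      # half a second at 150 Hz
```

Running 30 such oscillators, each with its own time-varying frequency, is the kind of real-time load the ASP handled in hardware.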

The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form "set frequency of oscillator X to Y Hertz". [...] It took about 4 days to program and debug the thing. The sound was produced entirely in real-time on the ASP. [...] There are many, many random numbers involved in the score for the piece. Every time I ran the C-program, it produced a new "performance" of the piece. The one we chose had that conspicuous descending tone that everybody liked. It just happened to end up real loud in that version.
[via musicthing.blogspot.com]
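
The overall shape of the piece, 30 voices wandering randomly in a 200-400 Hz cluster and then gliding onto the pitches of a chord built on a low D, can be roughed out as a toy Python approximation. This is emphatically not Moorer's 20,000-line C score or the ASP oscillator code: the target frequencies, glide timing, and random-walk parameters below are all invented for illustration.

```python
import numpy as np

fs = 22050
dur = 6.0                                   # seconds (the real logo runs longer)
n = int(fs * dur)
t = np.arange(n) / fs
rng = np.random.default_rng(1)

# Invented target chord: D spread over several octaves around 150 Hz.
targets = np.array([37.5, 75.0, 150.0, 300.0, 600.0, 1200.0])
ends = rng.choice(targets, size=30)         # one target per voice

out = np.zeros(n)
for end in ends:
    start = rng.uniform(200.0, 400.0)       # initial cluster, 200-400 Hz
    # Log-frequency random walk: the "wandering up and down" phase.
    f = start * 2.0 ** np.cumsum(rng.normal(0.0, 3e-4, n))
    # Crossfade the wander into a glide that locks onto the target pitch.
    glide = np.clip((t - dur / 2) / (dur / 2), 0.0, 1.0)
    f = f * (1.0 - glide) + end * glide
    phase = 2.0 * np.pi * np.cumsum(f) / fs # integrate frequency to phase
    out += np.sin(phase)

out *= np.minimum(t, 1.0) / 30.0            # fade-in swell, normalize
```

Because each voice's trajectory is drawn from a random generator, every run produces a slightly different "performance", just as every run of the real score did on the ASP.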

MM: Would you like to make some comments about the evolution of the digital audio industry, and some of the expectations for the future of this technology?

JM: It was very difficult to convince people that we could perform all the operations on audio using the computer. Nowadays, nobody could imagine doing audio in any other way. The big challenge for the future is the amount of audio we will have. By 2040, we will all carry devices the size of an iPod Nano that will hold ALL THE AUDIO THAT HAS EVER BEEN RECORDED IN THE WORLD! You will not download audio any more - it will ALL be on your player all the time. The problem will be in indexing that material so you can find it. You will not press buttons - you will talk to your devices to tell them what to do. You will be able to get your email by asking your telephone to read you your new messages without ever touching it. You will start to forget that they are machines. You will give them names, like we do our pets.

"Sometime in the near future, the technology will make it possible to store all of the information in the world in a cube that's 12 inches square....but the index to access that knowledge would take up an space 1/2 the size of the planet." - James A. Moorer
[via blogs.adobe.com]


Related Posts:


Moorer, J. A. "Signal Processing Aspects of Computer Music: A Survey." Invited Paper, Proceedings of the IEEE, 65(8), August 1977, pp. 1108-1137.

Moorer, J. A. "The Audio Signal Processor: The Next Step in Digital Audio." In "Digital Audio," Collected Papers from the AES Premiere Conference, Rye, New York, June 3-6, 1982, pp. 205-214.

Moorer, J. A., et al. "Digital Audio Processing Station: A New Concept in Audio Post-Production." Journal of the Audio Engineering Society, 34(6), June 1986, pp. 454-463.

Rubin, M. (2006). Droidmaker: George Lucas and the Digital Revolution. Triad Publishing Company.