
Tuesday, February 17, 2009

An interview with James A. Moorer, pt.1

by Matteo Milani, February 2009


I had the pleasure of interviewing James A. Moorer, an internationally known figure in digital audio and computer music, with over 40 technical publications and four patents to his credit. He personally designed and wrote many of the advanced DSP algorithms for the Sonic Solutions "NoNOISE" process, which is used to restore vintage recordings for CD remastering.
Between 1980 and 1987, while Vice-President of Research and Development at Lucasfilm's The Droid Works, he designed the Audio Signal Processor (ASP), which was used in the production of sound tracks for Return of the Jedi, Indiana Jones and the Temple of Doom, and others.
Between 1977 and 1979, he was a researcher and the Scientific Advisor to IRCAM in Paris.
In the mid-seventies he was Co-Director and Co-Founder of the Stanford Center for Computer Research in Music and Acoustics. He received his PhD in Computer Science from Stanford University in 1975.
In 1991, he won the Audio Engineering Society Silver Medal award for lifetime achievement. In 1996, he won an Emmy Award for Technical Achievement, with his partners Robert J. Doris and Mary C. Sauer, for Sonic Solutions "NoNOISE" for Noise Reduction on Television Broadcast Sound Tracks. In 1999, he won an Academy of Motion Picture Arts and Sciences Scientific and Engineering Award for his pioneering work in the design of digital signal processing and its application to audio editing for film. He is currently working at Adobe Systems as a Senior Computer Scientist on the DVD team.


[James Moorer (second from left), who gave the 2000 Richard C. Heyser Memorial Lecture, "Audio in the New Millennium," at the 108th AES Convention in Paris, receives Technical Council recognition - via aes.org]


The SoundDroid was an early digital audio workstation designed by a team of engineers led by Moorer at Lucasfilm. It was a hard-disk-based, nonlinear audio editor built around the Audio Signal Processor. Only one prototype was ever built, and it was never commercialized. Lucasfilm started putting together a computer division right after Star Wars as an in-house project to build a range of digital tools for filmmaking. The audio project that became SoundDroid was done in close collaboration with the post-production division, Sprocket Systems, and was later spun out as part of a joint venture called The Droid Works. Complete with a trackball, touch-sensitive displays, moving faders, and a jog-shuttle wheel, the SoundDroid included programs for sound synthesis, digital reverberation, recording, editing, and mixing. EditDroid and SoundDroid were the beginnings of the desktop digital-tools revolution.


MM: Mr. Moorer, who developed the concept of a "digital audio workstation" during those early days? How did you gather the ideas for the ASP?

JM: It was my idea from back in my days at Stanford University. My 1977 paper describes a "digital recording studio". The digital audio processing station and digital audio workstation came out of that work. I first coined the term "digital audio workstation" in a talk I gave at the AES. A paper from that talk came out in 1982, but I don't think the term "digital audio workstation" made it into the paper.

Andy Moorer, the director of Audio Research, was moving ahead with the design of the digital audio processor, the ASP. [...] The sound work centered on the invention of a brand new set of chips that could perform the special kinds of calculations required for digital audio. [...] Designing new hardware chips was a long and complex process. Once the engineer determined precisely what needed to be done, and perhaps executed the various algorithms in software, he needed to translate the software into fundamental logical steps that could be built with wire and chips. [...] This process of chip design, finding the best position of the chips on a board and the most efficient way to wire them together, was expedited by using specialized CAD software.
By the summer of 1982, after almost two years of work, Andy Moorer and his cohorts finally completed the first working prototype of the audio computer, the ASP. The massive ASP consisted of eight oversized boards assembled entirely by hand, requiring more than 3,000 IC chips. It represented a number of significant engineering breakthroughs.
[excerpt from Droidmaker]


MM: Who's missing from this list of your crew at that time: John Max Snell, Curtis Abbott, Jim Lawson, Bernard Mont-Reynaud, John M. Strawn?

JM: That's about it. John Snell worked on user interfaces (touch-sensitive screens, assignable knobs, moving faders, and a shuttle wheel). The other folks worked on software.

John Snell had met repeatedly with Ben Burtt and the rest of the Sprockets sound team. They all felt that the audio computer had to look and work pretty much like the traditional tools they knew. [...] Though Ben wanted the flexibility that digital processing and random access might provide, it couldn't be at the cost of changing the interface.
"We felt that you could use one knob or slider and have the sofware change its function," said Snell, "and a console could be built with perhaps eight sliders, or twelve, but not a hundred. [ The old console design ] simply wasn't practical."
Before Peter Nye arrived to the audio project, the ASP was controlled by a command-line interface, much like MS-DOS, and a small box of knobs that John Snell had built for Ben Burtt. Peter Nye, a computer-music student of Moorer's, implemented the paradigm on the SoundDroid for a film-oriented system, which traditionally viewed tracks running vertically, from top to bottom. The new SoundDroid interface featured Snell's touchscreen and a novel by-product of the digital audio: a waveform of the sound. [...] The Droid Works trademarked the feature as "See the Sound."
[excerpt from Droidmaker]
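As an aside on how a "See the Sound"-style display is produced: a waveform view is typically drawn from a reduced min/max envelope of the samples rather than from every individual sample. Here is a minimal, hypothetical Python sketch of that reduction; the function name and parameters are my own illustration, not SoundDroid or ASP code.

```python
import numpy as np

def waveform_overview(samples, width=1024):
    """Reduce a long run of audio samples to one (min, max) pair per screen
    column - the usual way a waveform display is drawn from sample data.
    Illustrative only; not taken from the SoundDroid software."""
    chunks = np.array_split(np.asarray(samples, dtype=float), width)
    return [(float(c.min()), float(c.max())) for c in chunks if c.size]
```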


MM: Are you a co-founder of CCRMA at Stanford University? And later, how was your experience in Paris at IRCAM?

JM: Yes, with John Chowning, John Grey and Loren Rush. It was an interesting experience at IRCAM. I enjoyed my time there. I did not enjoy the internal politics.


MM: Did you help John Chowning during his research on FM synthesis? What kinds of sound synthesis ran on the ASP?

JM: Yes, of course. I designed the original hardware FM synthesizer that was the basis of the FM patent. Yamaha later built that particular device. We could run all kinds of sound synthesis on the ASP - FM, additive (Fourier), subtractive, whatever.
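For readers unfamiliar with the technique, FM synthesis in its classic two-operator Chowning form is simply a sine carrier whose phase is modulated by another sine. Below is a minimal, hypothetical Python sketch of that formula; it is an illustration only, not code for the ASP or the Yamaha hardware, and all names and parameter values here are my own.

```python
import numpy as np

SAMPLE_RATE = 48000  # an assumed sample rate for the example, not the ASP's

def fm_tone(fc=440.0, fm=220.0, index=3.0, amp=0.5, dur=1.0, sr=SAMPLE_RATE):
    """Two-operator FM: y(t) = amp * sin(2*pi*fc*t + index * sin(2*pi*fm*t)).
    fc is the carrier frequency, fm the modulator, index the modulation depth."""
    t = np.arange(int(sr * dur)) / sr
    return amp * np.sin(2 * np.pi * fc * t + index * np.sin(2 * np.pi * fm * t))

tone = fm_tone()  # one second of an FM tone, as a plain array of sample values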


MM: Thanks to George Lucas, who subsidized pure research, for the first time in movie history scientists and filmmakers worked together under the same roof (The Droid Works). What were the big hurdles, on the machine side and on the human side, during the development years?

JM: Technically, the problem was just that we didn't have a lot of the modern technology, like computers with 1 GHz processors and 1 GByte of memory. We had to build all the technology we used. From the human side, people were having trouble understanding the workflow in the digital audio world. I had a very, very difficult time selling people on the idea that you could do "scrub" or "reel-rock" on a computer, rather than by sliding a tape over a head. Someone even suggested that I use the top of a tape recorder as a computer interface, even with no tape on it, so it would "feel" the same. Today, nobody even knows that audio was edited using razor blades and scotch tape.


ASP does for sound tracks - the band of squiggly lines at the edge of a film that encode the film's sounds - what EditDroid does for images: it uses its silicon circuitry to mix, edit, and even synthesize the music, speech, and sound effects so important to filmmakers, especially somebody like Lucas, for whom sound is as important as sight in stirring the emotions of an audience. ASP has already been used to heighten the drama - by changing the pitch of a scream, for example, to make it more chilling - in the current hit Indiana Jones and the Temple of Doom, co-produced by Lucas and Steven Spielberg.
Lucasfilm's sound mixer is even more ambitious technologically. Today's films have as many as six sound tracks to drive the multiple speakers in many theaters. But even these are usually the distillation of many more tracks. Project chief Andy Moorer points out that in a film like Return of the Jedi the sounds in a typical scene may represent a mix of 70 separate tracks of dialogue, music, and audio effects. A single change in one of the original tracks - say, the boom of a rocket or the pitch of a siren - would require remixing all of them. Jedi needed only five film editors but 17 sound editors. "What we set out to do," says Moorer, "was to put a computer in the middle of all of this."
That was not easy: just as the visual images must first be converted to electronic signals for EditDroid, so the sounds must be turned into digital form for ASP. This means every flutter of noise has to be translated into the "on-off" binary language of the computer. In other words, the sounds become numbers.
[excerpt from Discover Magazine, August 1984]
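To make the magazine's closing point concrete ("the sounds become numbers"): once every track is a stream of sample values, mixing dozens of tracks is just arithmetic on those numbers. Here is a minimal, hypothetical Python sketch of a digital mix; it is not the ASP's implementation, and the function and its parameters are my own illustration.

```python
import numpy as np

def mix_tracks(tracks, gains=None):
    """Mix several equal-length tracks (arrays of samples in [-1, 1]) into one.
    Illustrative only: a digital mix is a weighted sum of sample values."""
    tracks = [np.asarray(t, dtype=float) for t in tracks]
    if gains is None:
        gains = [1.0 / len(tracks)] * len(tracks)  # default: equal weighting
    mixed = sum(g * t for g, t in zip(gains, tracks))
    return np.clip(mixed, -1.0, 1.0)  # keep the result in the legal sample range
```

Changing one source track means recomputing this sum, which is the remixing burden the article describes; on a computer that is a recalculation rather than a manual re-recording pass.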


