
Saturday, February 28, 2009

SONOPEDIA Sound Design Competition Results

Blastwave FX, in collaboration with Post Magazine, has announced the winning composition of the SONOPEDIA Sound Design Competition. It was designed by Weston Fonger.
Here is what Academy Award winner Richard King had to say about Weston's piece:

“[this piece]… gave me the strongest emotional reaction … It’s clear the sound designer was striving to tell a story – certainly an abstract one which the listener can interpret in any number of ways, but there’s a purposeful development to the sounds. The subtle moments are well done and pleasingly organic, with a very effective use of total silence in one transition, and the louder, denser sections retain their clarity. The sounds chosen stand out for their uniqueness and clarity and the mix is focused. Very good work!”



But the piece I like most is this one by Erik Reimers.
Academy Award winner Randy Thom had this to say about it:


“Designing voices for creatures and robots is probably the most difficult kind of sound design there is. [Bot vs Bee] had a nice narrative arc, it was funny, and I’m guessing the designer had to twist the raw sounds quite a bit to achieve the final result.”



[more here - via blastwavefx.com]
[postmagazine.com]
[Judges]

Computer Making Music on MAKE Magazine

Meet CCRMA, a group of musical makers who stretch the sonic boundaries by turning personal computers into an electronic symphony. Based at Stanford University, CCRMA teams composers, artists and acoustical researchers together to meld music with new technology and explore the outer limits of audio from playground-activated sounds to laptop orchestras.



[via makemagazine]

Friday, February 20, 2009

Brian Eno on UbuWeb Film

Brian Eno: 14 Video Paintings (1981 & 1984)

These video installations were produced by Brian Eno to be shown at various galleries around the world. Subsequently released on VHS and laserdisc, the two works are to the video format what his audio pieces were to music: ambient musings on the nature of the medium. They are non-linear and have no obvious plotline or direction: 'video paintings', as the title suggests, drifting in and out of focus. The first piece on the disc is accompanied by Eno's seminal Thursday Afternoon, a beautiful single hour-long piano track, and the second piece, entitled Mistaken Memories of Mediaeval Manhattan, is set to tracks from 'Music for Airports' and 'On Land'. 'Thursday Afternoon' is probably the most accessible, with Eno using film footage taken of his close friend Christine Alicino and cutting it together intimately. It ends up playing a little like a nostalgic diary, a musing on the life of a person now departed. The second piece is less figurative, and features painterly shots of the New York skyline, clouds moving overhead and colours drifting like a melting palette.

[watch video - via UbuWeb]
[direct download - mp4]

[thanks to babelrecords.blogspot.com]

Tuesday, February 17, 2009

An interview with James A. Moorer, pt.3

by Matteo Milani, February 2009

(Continued from Page 2)

MM: Can you talk about the synthesized arrows in Indiana Jones and the Temple of Doom?

JM: This was done by linear prediction. Ben had recorded the sounds of arrows going by, but they were too fast. I took 100 ms from the middle of one of those sounds and created a filter of order 150 from it. When driven by white noise, it made the same noise as the arrow, but continuing forever. He then put that sound in the Doppler program to produce the sounds of the arrows flying by.

In addition to being a numbers prodigy, ASP is quite garrulous. It can synthesize speech, the sounds of musical instruments, and even special effects by the same mathematical techniques. In Indiana Jones, for example, there is a hang-onto-your-seat scene in which Jones and his pals, while dangling precariously from a rope bridge slung across a deep chasm, come under attack by a band of archers. Lucasfilm technicians had recorded the sound of a flying arrow in a studio, but they discovered that the whistling noise did not last long enough to match the flight of the arrow on the film.
ASP came to the rescue. Moorer copied 25 milliseconds from the middle of the one-and-a-half-second recording and spliced the duplicate sounds to both ends, all electronically. Then he manipulated the arrow's noise so that it faded as the missile moved from left to right across the screen. To ensure total accuracy, Moorer even used ASP to include a Doppler shift - the change in pitch from high to low heard when an object sweeps rapidly past. Thus, as the arrow flies by actor Harrison Ford's head the audience hears a subtle change of frequency in its noise. In this way the sound track dramatically increases the audience's sense of the hero's peril.
[excerpt from Discover Magazine, August 1984]
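
As a rough illustration of the technique, here is a minimal Python sketch of LPC resynthesis, assuming a hypothetical mono 16-bit file arrow.wav and using the 100 ms excerpt and order-150 filter Moorer mentions; it sketches the general method, not the original ASP code.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import lfilter

def lpc(frame, order):
    """Estimate all-pole (LPC) coefficients by the autocorrelation method."""
    frame = frame * np.hanning(len(frame))
    r = np.correlate(frame, frame, mode="full")[len(frame) - 1:]
    # Solve the Yule-Walker equations R a = -r for the predictor coefficients.
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, -r[1:order + 1])
    return np.concatenate(([1.0], a))        # denominator polynomial A(z)

sr, arrow = wavfile.read("arrow.wav")        # hypothetical source recording
arrow = arrow.astype(np.float64)
mid = len(arrow) // 2
frame = arrow[mid:mid + int(0.1 * sr)]       # 100 ms from the middle of the sound
A = lpc(frame, order=150)                    # a filter of order 150, as in the text

# Driving the all-pole synthesis filter 1/A(z) with white noise reproduces the
# arrow's spectral colour, but the sound now continues for as long as we like.
noise = np.random.randn(10 * sr)
endless = lfilter([1.0], A, noise)
endless /= np.abs(endless).max()
wavfile.write("endless_arrow.wav", sr, (endless * 32767).astype(np.int16))
```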


MM: How did Ben Burtt use the machine at its fullest on his projects?

JM: There were a number of sounds in "Temple of Doom" that were done using the ASP. The main ones were the arrow sounds and the sound of the airplane crashing near the beginning of the movie, although there were many, many other sounds throughout the film that were processed on the ASP.


MM: Do you recall the 'SoundDroid Spotter' genesis? Could you describe the early “spotting” system for searching sound effects libraries inside SoundDroid?

JM: People said they wanted a less expensive system. I designed and built a 1-board processor with a TI DSP chip on it for doing sound effects spotting. TI (Texas Instruments) made a DSP (digital signal processor) chip called the 320. It was a bit like the Motorola 56000 that we used later at Sonic Solutions, except that it could only use 16-bit samples.


MM: The first digital devices landed in recording studios as outboard gear. Does it make sense nowadays to use custom DSP hardware to spread computation outside the computer?

JM: No. It didn't then, and it doesn't now. The first digital devices were outboard devices. This never made any sense. For instance, they could not be automated. You can have this nice, big mixing desk with all this fancy automation, but none of your outboard equipment could be turned on or off using the mixer automation. Similarly, the tape recorder and the mixing desk and the outboard equipment were all separate things. There is no way to make sure that your automation program for the mixing desk is the correct one for the audio tape you put on the tape recorder. If you have to pick up a project a year later, it is impossible to get all the same settings, since you never remember what outboard equipment was used or what special patching might have been present between the tape recorder and the mixing desk. If you want to make the "disco mix" a year later, there is no way to do it. It is like starting all over. It creates more problems than it solves. Now, you do everything through one program, and all the effects are perfectly automated, and all the automation is combined with the editing into one integrated project. The mixing desk with an external tape recorder and "uncontrolled" outboard devices never made any sense to me.


MM: You're the real composer of the original THX logo. Michael Rubin wrote in his book "Droidmaker" that it was the startup sound of the ASP first, correct? Who came up with the idea, as with the Apple Mac's startup sound? Did you name it Deep Note (or The Chord)?

JM: It was not the startup sound of the ASP. It was a program that you would load into the ASP which, when started, synthesized the sound in real time. I had named it Deep Note. I have no idea who came up with the Apple Mac's startup sound. The THX logo was started because the THX sound system was to come out in time for the release of "Return of the Jedi". They were doing a 35-second animation that would go before the feature was presented in the theater. The director of that animation came to me and said that he wanted a sound that "comes out of nowhere and gets really, really big". I told him that I knew how to do that, and 2 weeks later, we had the THX logo theme.

Jim Kessler found Andy Moorer in the second floor kitchen when he pitched him on his vision for a THX logo. He [...] wondered if the ASP could generate a cool sound. "It's gotta swirl around and it should be like this really big one dynamic thing," Kessler explained, "and make sure we all get the stereo high range and low range." Being a musician and composer himself, Moorer was excited to have a musical project. [...] Moorer retreated to the prototype ASP he had built. It was the only device of its kind in the world. [...] He envisioned a power chord that would emerge from a cluster of sounds. He built a single note from thirty-two voices of synthesized sounds. Moorer called the sound "Deep Note" (possibly one of the most recognizable sound logos ever, and certainly the most recognizable piece of completely digitally synthesized music. It plays more than 4,000 times every day).

Thirty-two simultaneous voices of sound from a synthesizer in real time would be a computational challenge, even by modern standards. Moorer programmed the computer to generate a cluster of sounds between 200 and 400 Hertz that sort of "wandered up and down," but soon resolved into a big carefully planned chord based on a note of 150 Hertz, a little below D. The exact trajectories of the "wandering" sounds were created randomly by the ASP, so every time Moorer ran the program, the score was slightly different.
[excerpt from Droidmaker]



MM: The task of re-creating the THX sound is a standard homework assignment at Stanford's CCRMA (Center for Computer Research in Music and Acoustics) in an introductory course entitled "Fundamentals of Computer-Generated Sound." Every year, twenty or so students attempt to re-create the sound using LISP and Common Lisp Music. How much influence has your musical preparation had in the "composition" of "The Deep Note"?

JM: I knew exactly what it was going to sound like before I made a sound.

The THX logo theme consists of 30 voices over seven measures, starting in a narrow range, 200 to 400 Hz, and slowly diverging to preselected pitches encompassing three octaves; the voices arrive at their pre-selected pitches by the fourth measure. The highest pitch is slightly detuned, while there are double the number of voices on the lowest two pitches.
[via tarr.uspto.gov]


I set up some synthesis programs for the ASP that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the "score" for the piece.

The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but the sequence of parameters that drives the oscillators on the ASP. Those 20,000 lines of code produce about 250,000 lines of statements of the form "set frequency of oscillator X to Y Hertz". [...] It took about 4 days to program and debug the thing. The sound was produced entirely in real-time on the ASP. [...] There are many, many random numbers involved in the score for the piece. Every time I ran the C-program, it produced a new "performance" of the piece. The one we chose had that conspicuous descending tone that everybody liked. It just happened to end up real loud in that version.
[via musicthing.blogspot.com]
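
Since all the ingredients are in the passages above (about 30 wavetable oscillators, a random cluster between 200 and 400 Hz, a slow glide into a chord built on roughly 150 Hz), a toy re-creation is easy to sketch in Python. The voicing, timings, envelope and the simple additive stand-in for the digitized-cello wavetable below are guesses in the spirit of the CCRMA homework assignment, not Moorer's actual score.

```python
import numpy as np
from scipy.io import wavfile

SR, DUR, N_VOICES = 48000, 25.0, 30
t = np.arange(int(SR * DUR)) / SR
rng = np.random.default_rng(1983)            # a different seed = a new "performance"

# Guessed target chord: octaves of a 150 Hz root ("a little below D"),
# doubled at the bottom and slightly detuned at the top, per the trademark text.
octaves = [37.5, 37.5, 75.0, 75.0, 150.0, 300.0, 600.0, 1200.0]
targets = np.array([octaves[i % len(octaves)] for i in range(N_VOICES)], float)
targets[targets == targets.max()] *= 1.003   # slightly detune the top pitch

glide = np.clip((t - 8.0) / 6.0, 0.0, 1.0)   # wander for 8 s, then settle over 6 s
mix = np.zeros_like(t)
for f1 in targets:
    f0 = rng.uniform(200.0, 400.0)           # initial pitch inside the cluster
    wander = 30.0 * np.sin(2 * np.pi * rng.uniform(0.1, 0.3) * t
                           + rng.uniform(0, 2 * np.pi))
    freq = (1.0 - glide) * (f0 + wander) + glide * f1
    phase = 2 * np.pi * np.cumsum(freq) / SR
    # Cheap additive stand-in for the digitized-cello wavetable (12 harmonics).
    mix += sum(np.sin(k * phase) / k for k in range(1, 13))

env = np.minimum(t / 5.0, 1.0) * np.clip((DUR - t) / 2.0, 0.0, 1.0)
mix *= env
mix /= np.abs(mix).max()
wavfile.write("deep_note_sketch.wav", SR, (mix * 32767).astype(np.int16))
```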


MM: Would you like to make some comments about the evolution of the digital audio industry, and some of the expectations for the future of this technology?

JM: It was very difficult to convince people that we could perform all the operations on audio using the computer. Nowadays, nobody could imagine doing audio in any other way. The big challenge for the future is the amount of audio we will have. By 2040, we will all carry devices the size of an iPod Nano that will hold ALL THE AUDIO THAT HAS EVER BEEN RECORDED IN THE WORLD! You will not download audio any more - it will ALL be on your player all the time. The problem will be in indexing that material so you can find it. You will not press buttons - you will talk to your devices to tell them what to do. You will be able to get your email by asking your telephone to read you your new messages without ever touching it. You will start to forget that they are machines. You will give them names, like we do our pets.

"Sometime in the near future, the technology will make it possible to store all of the information in the world in a cube that's 12 inches square....but the index to access that knowledge would take up an space 1/2 the size of the planet." - James A. Moorer
[via blogs.adobe.com]


[Prev | 1 | 2 | 3]



References

Moorer, J. A., "Signal Processing Aspects of Computer Music: A Survey," invited paper, Proceedings of the IEEE, vol. 65, no. 8, August 1977, pp. 1108-1137.

Moorer, J. A., "The Audio Signal Processor: The Next Step in Digital Audio," in Digital Audio: Collected Papers from the AES Premiere Conference, Rye, New York, June 3-6, 1982, pp. 205-214.

Moorer, J. A., et al., "Digital Audio Processing Station: A New Concept in Audio Post-Production," Journal of the Audio Engineering Society, vol. 34, no. 6, June 1986, pp. 454-463.


Bibliography
Books
Rubin, M. (2006). Droidmaker: George Lucas and the Digital Revolution. Triad Publishing Company.

Websites
jamminpower.com
mixonline.com
en.wikipedia.org
cnmat.berkeley.edu

An interview with James A. Moorer, pt.2

by Matteo Milani, February 2009

(Continued from Page 1)

MM: Did Lucas himself choose the name SoundDroid?

JM: No. It came from the name of the group, which was "The Droid Works". Peter Langston came up with the name "Droid Works".


MM: Is it true that SoundDroid was the second-generation digital audio workstation, after the ASP? Or was the ASP itself later renamed SoundDroid?

JM: The ASP was later renamed SoundDroid. We were working on a second generation when the work was stopped.

The Droid Works closed before the production-ready version of the device was completed, before any products were delivered. Still, the audio research that went into it lived on. In the last months of 1986, Lucasfilm tried to find an industry leader to purchase the technologies that had been created there, with no success.
[excerpt from Droidmaker]


MM: What kind of sound processing did you implement? Filtering, eq, dynamics?

JM: All the kinds that were known at that time - filtering, dynamics, spatialization, reverberation, pitch shifting, denoising, and many more.


MM: Did your pioneering work on the Phase Vocoder lead to NoNoise for removing hiss, noise, clicks and pops from recordings?

JM: The Phase Vocoder work did lead directly to the algorithms for removing clicks and pops. The broadband noise reduction came out of some work I did with John Strawn on the "Karen Silkwood Tapes" back at Stanford. These were the techniques used for the Amadeus restoration projects.

Milos Forman was [...] mixing “Amadeus” at Fantasy Films (Saul Zaentz). [...] There was a scene in the movie (the asylum scene) where the performances of Salieri and Mozart were excellent but the audio recording was not. Forman's sound engineers, Mark Berger and Tom Scott, knew that Lucas's teams were pioneering new technology for exactly this kind of problem. They brought the tapes over to Andy Moorer and asked if his ASP could help.[...] Andy immediately digitized the tapes into the ASP. Then he excised small samples of pure noise and had the computer analyze them - getting, in essence, a fingerprint of the background noise. Then the pure noise was subtracted from the dialog tracks, leaving the human voices intact.
"About half of the tapes they brought me we were able to clean up," said Moorer. Amadeus was the first feature film to utilize their new noise reduction technology.
[excerpt from Droidmaker]
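
The broadband method the excerpt describes (sample some pure noise, average it into a spectral fingerprint, subtract that fingerprint from every frame) is what textbooks call spectral subtraction. Sonic Solutions' actual NoNoise algorithms are proprietary, so the Python sketch below shows only the classic form of the idea, with hypothetical file names and parameter values.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import stft, istft

def spectral_subtract(x, noise_clip, sr, oversub=2.0, floor=0.05):
    """Textbook spectral subtraction: fingerprint the noise, then remove it."""
    _, _, X = stft(x, fs=sr, nperseg=1024)
    _, _, N = stft(noise_clip, fs=sr, nperseg=1024)
    fingerprint = np.abs(N).mean(axis=1, keepdims=True)   # average noise spectrum
    # Subtract the fingerprint from each frame's magnitude, keeping a small
    # spectral floor so over-subtraction doesn't punch "musical noise" holes.
    mag = np.maximum(np.abs(X) - oversub * fingerprint, floor * np.abs(X))
    _, y = istft(mag * np.exp(1j * np.angle(X)), fs=sr, nperseg=1024)
    return y

sr, dialog = wavfile.read("asylum_scene.wav")    # hypothetical noisy dialog track
dialog = dialog.astype(np.float64)
noise_only = dialog[: sr // 2]                   # assume the first 0.5 s is pure noise
clean = spectral_subtract(dialog, noise_only, sr)
wavfile.write("asylum_clean.wav", sr, np.clip(clean, -32768, 32767).astype(np.int16))
```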


MM: The visual waveform display had been demonstrated originally in your program called "S" (for Sound) at Stanford in 1970. What was your contribution to the user interface design? Where did you find the inspiration for the waveforms? From optical recordings?

JM: ALL modern digital audio workstations use the SAME waveform display that I created in the program "S". All modern digital audio workstations are descended directly from "S". I have no idea where the idea for the display on "S" came from. It just seemed good at the time.


MM: What was the audio recording format for the SoundDroid?

JM: We used 16 or 24-bit samples at 48 kHz.


MM: How big was the storage of the SoundDroid?

JM: We used 300 MByte "Storage Module Drives" on the original ASP. One drive would hold about 50 minutes of monaural sound at 48K and 16-bit samples. You could attach up to 4 drives to one ASP. With SoundDroid, we got some newer drives that could hold 850 MBytes, or about 2 hours of mono. Four of those drives add up to about 1 hour of eight-track audio.
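
Those figures are easy to verify with a little arithmetic; here is a quick check in Python for mono audio at 48 kHz with 16-bit samples:

```python
# Mono audio at 48 kHz with 16-bit (2-byte) samples:
bytes_per_second = 48_000 * 2                  # 96 KB of audio per second
print(300e6 / bytes_per_second / 60)           # ~52 minutes on a 300 MB drive
print(850e6 / bytes_per_second / 60)           # ~148 minutes (about 2 hours) per 850 MB
print(4 * 850e6 / bytes_per_second / 60 / 8)   # ~74 minutes of 8-track from 4 drives
```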


MM: How was your connection with Tomlinson Holman? Did you collaborate at that time?

JM: I enjoyed working with Tom Holman, although we had different responsibilities. He was all analog and didn't really work much with digital audio.


MM: How was your relationship with Ben Burtt? Was he the only one who helped you sweeten the software?

JM: Ben helped me understand what sound designers did and how they do it.

"Ben Burtt said the biggest bottleneck was the spatializing of sounds; it was his biggest time sink," recalled Andy [Moorer]. Ben needed fine control over the room size around a sound, a magic knob that would adjust the spatial environment so he could immediately hear how a sound could be varied.
His traditional method was slow and tedious, with tape decks and amplifiers. And creating flybys! All those objects - space ships, motorcycles, airplanes - that had to be made to sound like they were moving fast by an imaginary microphone. He needed a Doppler shift that could be added to a sound.
Andy provided a solution. "We put a box in his little mixing room in the basement of C building, connected through the wall to the sound pit, where the ASP resided." He wrote Ben a basic set of programs that would allow the box to input a sound, fly it around at a desired speed, and output it.
[excerpt from Droidmaker]
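
A flyby like this reduces to a time-varying propagation delay plus distance attenuation, so it can be sketched compactly. The geometry, speeds and file names below are hypothetical; this illustrates the physics of the effect, not the program Moorer wrote for Ben.

```python
import numpy as np
from scipy.io import wavfile

def doppler_flyby(x, sr, speed=80.0, miss_distance=3.0, c=343.0):
    """Fly a steady mono sound past a virtual microphone in a straight line.

    Each emitted sample arrives after a time-varying propagation delay r(t)/c
    and is attenuated by 1/r(t); resampling onto the listener's uniform time
    grid produces the familiar pitch drop as the source passes.
    """
    t = np.arange(len(x)) / sr
    t -= t[-1] / 2                                       # closest approach at midpoint
    r = np.sqrt((speed * t) ** 2 + miss_distance ** 2)   # source-to-mic distance
    arrival = t + r / c                                  # arrival time of each sample
    listener_t = np.linspace(arrival[0], arrival[-1], len(x))
    y = np.interp(listener_t, arrival, x / r)            # variable-delay resampling
    return y / np.abs(y).max()

sr, engine = wavfile.read("engine_loop.wav")        # hypothetical steady source sound
flyby = doppler_flyby(engine.astype(np.float64), sr, speed=120.0, miss_distance=2.0)
wavfile.write("flyby.wav", sr, (flyby * 32767).astype(np.int16))
```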

MM: Was the Doppler machine you invented for Ben stand-alone, or was it a remote controller doing a specific task?

JM: It was a remote terminal on the ASP. We would load a specific program into the machine for him to use.


[Prev | 1 | 2 | 3 | Next]

An interview with James A. Moorer, pt.1

by Matteo Milani, February 2009


I had the pleasure of interviewing James A. Moorer, an internationally known figure in digital audio and computer music, with over 40 technical publications and four patents to his credit. He personally designed and wrote many of the advanced DSP algorithms for the Sonic Solutions "NoNOISE" process, which is used to restore vintage recordings for CD remastering.
Between 1980 and 1987, while Vice-President of Research and Development at Lucasfilm's The Droid Works, he designed the Audio Signal Processor (ASP) which was used in the production of sound tracks for Return of the Jedi, Indiana Jones and the Temple of Doom, and others.
Between 1977 and 1979, he was a researcher and the Scientific Advisor to IRCAM in Paris.
In the mid-seventies he was Co-Director and Co-Founder of the Stanford Center for Computer Research in Music and Acoustics. He received his PhD in Computer Science from Stanford University in 1975.
In 1991, he won the Audio Engineering Society Silver award for lifetime achievement. In 1996, he won an Emmy Award for Technical Achievement with his partners, Robert J. Doris and Mary C. Sauer for Sonic Solutions "NoNOISE" for Noise Reduction on Television Broadcast Sound Tracks. In 1999, he won an Academy of Motion Picture Arts and Sciences Scientific and Engineering Award for his pioneering work in the design of digital signal processing and its application to audio editing for film. He is currently working at Adobe Systems as Senior Computer Scientist in the DVD team.


[James Moorer (second from left), who gave the 2000 Richard C. Heyser Memorial Lecture at the 108th AES Convention in Paris, Audio in the New Millennium, receives Technical Council recognition - via aes.org]


The SoundDroid is an early digital audio workstation designed by a team of engineers led by Moorer at Lucasfilm. It was a hard-disk-based, nonlinear audio editor developed on the Audio Signal Processor. Only one prototype was ever built and it was never commercialized. Lucasfilm started putting together a computer division right after Star Wars as an in-house project to build a range of digital tools for filmmaking. The audio project that became SoundDroid was done in close collaboration with the post-production division, Sprocket Systems, and later spun out as part of a joint venture called The Droid Works. Complete with a trackball, touch-sensitive displays, moving faders, and a jog-shuttle wheel, the SoundDroid included programs for sound synthesis, digital reverberation, recording, editing and mixing. EditDroid and SoundDroid were the beginning of the digital revolution in desktop filmmaking tools.


MM: Mr. Moorer, who developed the concept of a "digital audio workstation", during those early days? How did you collect the ideas for the ASP?

JM: It was my idea from back in my days at Stanford University. My 1977 paper describes a "digital recording studio". The digital audio processing station and digital audio workstation came out of that work. I first coined the term "digital audio workstation" in a talk I gave at the AES. A paper from that talk came out in 1982, but I don't think the term "digital audio workstation" made it into the paper.

Andy Moorer, the director of Audio Research, was moving ahead with the design of the digital audio processor, the ASP. [...] The sound work centered on the invention of a brand new set of chips that could perform the special kinds of calculations required for digital audio. [...] Designing new hardware chips was a long and complex process. Once the engineer determined precisely what needed to be done, and perhaps executed the various algorithms in software, he needed to translate the software into fundamental logical steps that could be built with wire and chips. [...] This process of chip design, finding the best position of the chips on a board and the most efficient way to wire them together, was expedited by using specialized CAD software.
By the summer of 1982, after almost two years of work, Andy Moorer and his cohorts finally completed the first working prototype of the audio computer, the ASP. The massive ASP consisted of eight oversized boards assembled entirely by hand, requiring more than 3,000 IC chips. It represented a number of significant engineering breakthroughs.
[excerpt from Droidmaker]


MM: Who's missing from this list of your crew at that time: John Max Snell, Curtis Abbott, Jim Lawson, Bernard Mont-Reynaud, John M. Strawn?

JM: That's about it. John Snell worked on user interfaces (touch-sensitive screens, assignable knobs, moving faders and shuttle wheel). The other folks worked on software.

John Snell had met repeatedly with Ben Burtt and the rest of the Sprockets sound team. They all felt that the audio computer had to look and work pretty much like the traditional tools they knew. [...] Though Ben wanted the flexibility that digital processing and random access might provide, it couldn't be at the cost of changing the interface.
"We felt that you could use one knob or slider and have the sofware change its function," said Snell, "and a console could be built with perhaps eight sliders, or twelve, but not a hundred. [ The old console design ] simply wasn't practical."
Before Peter Nye arrived to the audio project, the ASP was controlled by a command-line interface, much like MS-DOS, and a small box of knobs that John Snell had built for Ben Burtt. Peter Nye, a computer-music student of Moorer's, implemented the paradigm on the SoundDroid for a film-oriented system, which traditionally viewed tracks running vertically, from top to bottom. The new SoundDroid interface featured Snell's touchscreen and a novel by-product of the digital audio: a waveform of the sound. [...] The Droid Works trademarked the feature as "See the Sound."
[excerpt from Droidmaker]


MM: Are you a co-founder of CCRMA at Stanford University? Later, how was your experience in Paris at IRCAM?

JM: Yes, with John Chowning, John Grey and Loren Rush. It was an interesting experience at IRCAM. I enjoyed my time there. I did not enjoy the internal politics.


MM: Did you help John Chowning during his research on FM synthesis? What kinds of sound synthesis ran on the ASP?

JM: Yes, of course. I designed the original hardware FM synthesizer that was the basis of the FM patent. Yamaha later built that particular device. We could run all kinds of sound synthesis on the ASP - FM, additive (Fourier), subtractive, whatever.
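
For reference, Chowning-style FM needs only two oscillators. The tiny Python sketch below, with made-up carrier, modulator and index values, produces the bell-like tone the technique is famous for; it illustrates the algorithm itself, not the ASP hardware or Yamaha's implementation.

```python
import numpy as np
from scipy.io import wavfile

SR = 48000

def fm_tone(fc, fm, index, dur):
    """Two-oscillator FM in Chowning's form: sin(2*pi*fc*t + I(t)*sin(2*pi*fm*t))."""
    t = np.arange(int(SR * dur)) / SR
    env = np.exp(-3.0 * t)             # decaying envelope on amplitude and index
    return env * np.sin(2 * np.pi * fc * t + index * env * np.sin(2 * np.pi * fm * t))

# A non-integer carrier:modulator ratio yields inharmonic, bell-like partials.
y = fm_tone(fc=400.0, fm=280.0, index=8.0, dur=3.0)
wavfile.write("fm_bell.wav", SR, (32767 * y / np.abs(y).max()).astype(np.int16))
```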


MM: Thanks to George Lucas, who subsidized pure research, for the first time in movie history scientists and filmmakers were together under the same roof (The Droid Works). What were the big hurdles on the machine side and on the human side during the development years?

JM: Technically, the problem was just that we didn't have a lot of the modern technology, like computers with 1 GHz processors and 1 GByte of memory. We had to build all the technology we used. From the human side, people were having trouble understanding the workflow in the digital audio world. I had a very, very difficult time selling people on the idea that you could do "scrub" or "reel-rock" on a computer, rather than by sliding a tape over a head. Someone even suggested that I use the top of a tape recorder as a computer interface, even with no tape on it, so it would "feel" the same. Today, nobody even knows that audio was edited using razor blades and scotch tape.


ASP does for sound tracks - the band of squiggly lines at the edge of a film that encode the film's sounds - what EditDroid does for images: it uses its silicon circuitry to mix, edit, and even synthesize the music, speech, and sound effects so important to film makers, especially somebody like Lucas for whom sound is as important as sight in stirring the emotions of an audience. ASP has already been used to heighten the drama - by changing the pitch of a scream, for example, to make it more chilling - in the current hit Indiana Jones and the Temple of Doom, co-produced by Lucas and Steven Spielberg. Lucasfilm's sound mixer is even more ambitious technologically. Today's films have as many as six sound tracks to drive the multiple speakers in many theaters. But even these are usually the distillation of many more tracks. Project chief Andy Moorer points out that in a film like Return of the Jedi the sounds in a typical scene may represent a mix of 70 separate tracks of dialogue, music, and audio effects. A single change in one of the original tracks - say, the boom of a rocket or the pitch of a siren - would require remixing all of them. Jedi needed only five film editors but 17 sound editors. "What we set out to do," says Moorer, "was to put a computer in the middle of all of this." That was not easy: just as the visual images must first be converted to electronic signals for EditDroid, so the sounds must be turned into digital form for ASP. This means every flutter of noise has to be translated into the "on-off" binary language of the computer. In other words, the sounds become numbers.
[excerpt from Discover Magazine, August 1984]


[ 1 | 2 | 3 | Next ]

Sunday, February 15, 2009

Brian Eno at the opening of the 258th academic year of the Accademia di Belle Arti (Venice)

Accademia di Belle Arti - Venezia: Year 258

On Monday, February 16th at 11 am, the inauguration of the Academic Year of the oldest institution of high culture operating in the city will take place. The Academy was founded in 1750 by the will of the Serenissima Republic, at a time when the University was rooted in Padua.
The ceremony will take place in the auditorium of San Servolo, where the Academy, for lack of space in its headquarters at the (former hospital of the) Incurabili, has transferred part of its teaching, in particular the subjects related to New Technologies in Art.
It is precisely for this reason that Brian Eno, a self-defined musician-non-musician who took a degree in Fine Arts in England in 1969, was invited to hold the opening lectio magistralis. Devoting himself to music, Eno invented "ambient music" in the mid-70s, theorizing a new way of making and enjoying music, while collaborating with, among others, U2, David Bowie, Talking Heads, John Cale and Laurie Anderson.
A multi-instrumentalist, sculptor and painter, but above all a video experimenter combining images and sound, Eno creates alternative environments in which the audience can immerse themselves completely.
By visiting the multimedia laboratories of the Academy of Fine Arts on San Servolo, the audience will have access to a "sound environment" that evokes the process of demolition of reality theorized and applied by Eno.
"Imaginary Landscapes, a film on Brian Eno", the work that Duncan Ward and Gabriella Cardazzo dedicated to him in 1989, was previewed in its entirety on Thursday, February 12th, with an introduction by Gabriella Cardazzo, at the headquarters of the Incurabili, along with Eno's video "Mistaken Memories of Mediaeval Manhattan" (1980-81).


[Brian Eno on his exhibition, Constellations | 77 Million Paintings]

[coming next: "77 Million Paintings" manifests itself as "PRESENTISM: Time And Space In The Long Now" at Palazzo Ruspoli in Rome | 20th February to 10th March]

UPDATE:

Brian Eno said the sort of future proposed by Marinetti has had its day. "The kind of modernism the Futurists endorsed was sort of: smash the past, build the future and it's going to be rough."

"It very much fitted with ideas that were going around at that time in the century, and of course it had this very progressivist idea which then infected and was infected by what was going on in the rest of culture: the idea that we had to rebuild the world from scratch."

Eno's installation, in an ancient palace on Via del Corso, is at the polar extreme from the noisy, hyper-active, self-assertive art of the Futurists. "People enter a darkish room which has music coming from many different sources," he said.

"There are several large plasma screens on the wall and those form a continually changing, slowly moving painting; basically, a very complicated, extremely rich, coloured abstract picture. The important part about the motion on the screens is that it's very, very slow. It flies in the face of the Hollywood idea that people need more and more stimulation, that they have increasingly short attention spans.

"I'm absolutely convinced that that's the diametrical opposite of what's true... People who come to the shows say, 'I wish there was one of these in the city all the time.' And it makes me realise that there are things that people traditionally do – like go to church or sit in parks or daydream – which have become harder to do...

"So when people find a place where they can do that, they are pretty excited."

[via independent.co.uk]

Information:
[accademiavenezia.it]
[artspace.it]
[fondazionememmo.com]

Thursday, February 05, 2009

Pacarana in action

Edmund Eagan demonstrating the Continuum at the Haken Audio booth at Winter NAMM 2009.

The new Pacarana was connected to a TC Electronic Konnekt 24D, which handles audio and MIDI I/O. Here is the current list of supported converters.

[see miroc.co.jp - via 8th Nerve Kyma News]