Mo’Wax, James Lavelle, DJ Shadow, and more in a new documentary

A new documentary promises what looks like a personal, thrilling take on the UK turntablism revolution.

The film is “The Man from Mo’Wax,” a documentary set to premiere at the end of August, with a full release (disc and download) on September 10.

The film centers on James Lavelle and his label Mo’Wax, the pioneering purveyor of trip hop, alternative hip hop, and other things involving vinyl. And because of Mo’Wax’s seminal role in the 90s UK music scene, you get Lavelle’s story, but a lot more. DJ Shadow, Joshua Homme, Badly Drawn Boy, Robert Del Naja (3D), Ian Brown, Futura, Thom Yorke and Grandmaster Flash… you name them, they’re in this picture. And it’s a coming-of-age story about Lavelle, who launched his DJ career at 14 and the label at 18 – all the ups and downs.

And of course, a lot of what sampling and beat-driven music is today is connected to what happens in this film.

How you get to watch this – apart from the YouTube trailer we’ve embedded here – is also rather interesting. Via something dubbed ourscreen, you can actually order up a screening at a participating local cinema… erm, provided you’re in the UK. The rest of us, of course, can just wait a few extra days, microwave some popcorn, and make everyone crowd around a MacBook or something.

The real fun will be for Londoners on the premiere date:

On Thursday, 30 August at 20:30, London’s BFI Southbank will host a premiere launch screening alongside a live Q&A with James Lavelle and the filmmakers. The event will also feature a Pitchblack Playback of an exclusive mix from UNKLE’s forthcoming album. Plus, join us for an after-party with a live DJ set from Lavelle. The Q&A with James Lavelle will also be broadcast via Facebook Live from the BFI.

Given the subject of the film, of course there’s also a lovely limited edition record to go with it:

http://www.themanfrommowax.com/pre-order/

If you can’t wait, though, here’s FACT’s two-parter on Lavelle from the label’s 21st birthday.

Images courtesy the filmmakers.

http://www.themanfrommowax.com

Thanks, Martin Backes!


Explore a huge, free archive of the history of Japanese animation

We live in a marvelous age – one that gives us access not just to what’s new, but to what’s old, too. And artists feel free to draw from the past for their visual and musical imagination. Media archaeology and invention go hand in hand.

And if you want to appreciate just how much is possible, there’s something about watching an animated movie from 1917 – one that looks like it could be at home on Adult Swim in 2017.

The National Film Center of The National Museum of Modern Art, Tokyo has put up an enormous, free archive in celebration of a century of animation in that country. And it’s simply astounding.

It’s so astounding, actually, that even with Japanese-only text, it’s worth randomly diving in. (Once you get to the films, there are English subtitles available. English navigation is coming soon, the museum says.)


There are works that seem like Japanese clones of California animation, to be sure. And you get the requisite propaganda films from WWII (which are themselves eerily similar to films Disney produced for the same purpose).

But you also get work that’s dazzlingly fresh. Unique art styles match surreal imaginative fantasies (yes, there are creatures with fish for heads, of course). There are flattened pictorial styles, nods to traditional imagery. There are animations that draw technique from puppetry. Heck, there’s something that looks like South Park, about 60 years early.

I’d say if someone were looking for a fresh take on digital animation – whether a new way to think about software-generated motion, or a way of adding traditional animation techniques and aesthetics to a hybrid digital/software project – this could be a revolutionary resource.

If you find any that are particularly inspiring, particularly in the vein of synesthesia and audiovisual work, we’d love to see – share links in comments.

http://animation.filmarchives.jp/



This Movie About EDM DJing Is Apparently Not A Joke

Wait, what did we just watch, exactly?

So, there’s some sort of EDM movie involving Zac Efron. And then 128 bpm the… what?!

Laptops?

eBay?

2006 club hits by Justice … as the title / hit tune … hashtag?

Obvious DJ gear but also … as Aroon Karvna notes “WTF detail: there’s a Buchla Music Easel at 1:25.” Holy boutique modular, Batman, you’re right!

I want to comment. But I feel I’m walking into a big trap. Could it be worth it to someone to actually troll all DJs everywhere with a trailer? Is there a real movie, or is this all a viral campaign to sell something else? Like maybe a new turntable or something; I don’t know. Why do I have a feeling marketing departments at DJ manufacturers paid good money to make sure their gear wasn’t seen in this film?

For now, I’m embarrassed on behalf of mixers, laptops, LA, the United States of America, music, and projectors in movie theaters.

Though I do agree that you should totally do some field recording for more interesting sounds.

When this hits video, though, watch it drunk with your DJ friends. And then – you’re welcome.

Tell you what, share this on social media and I’ll reward you with, well, really anything else in the hopes that we all regrow whatever brain cells we just murdered.

And thank you, The Debrief, who are as baffled as we are:
YOU WON’T WATCH ZAC EFRON’S NEW EDM FILM, BUT LOL OVER ITS TRAILER, PLEASE.

I swear I’m going to start rickrolling any comment trolls on CDM directly to a torrent of this film from now on. I’ll do it! Don’t think I won’t! That’ll teach ya.

Public service record:

I like to start at 125 bpm, too. But 960 bpm – that’s the magic number.

Side note: wow, CDM is still older than that 2006 Justice hit. Crazy.


Q+A: How the THX Deep Note Creator Remade His Iconic Sound


How do you improve upon a sound that is already shorthand for noises that melt audiences’ faces off? And how do you revisit sound code decades after the machines that ran it were scrapped?

We get a chance to find out, as the man behind the THX “Deep Note” sound talks about its history and reissue. Dr. Andy Moorer, the character I called “the most interesting digital audio engineer in the world,” has already been terrifically open in talking about his sonic invention. He’s got more to say – and the audience is listening. (Sorry, I sort of had to do that.)


CDM: First, my big question is – how did you go about reconstructing something like this? Since the SoundDroid / Audio Signal Processor (ASP) is gone, that’s obviously out. Was it difficult to match the original? Was there any use of the original recordings?

Andy: I had two computer files from 1983. One was the original C program that generated the score, and the other was the “patch” file for the ASP that was written in Cleo, an audio-processing language that doesn’t exist anymore. The first thing I did was to resurrect the C program and make sure that it did generate the score properly. Then I wrote a set of special-purpose sound synthesis routines to interpret the score and produce the sound.

This wasn’t as difficult as it sounds. First off, the original program didn’t use a lot of different kinds of processing elements – maybe a total of 6 different elements. Next, this is the kind of software I have been writing for 45 years – I could write it in my sleep. It was not a problem. It took about a week to write the synthesis engine software and plug it into the original 30-year-old C program and get some sound out of it. There were also some calibration issues that I had to deal with, since the original ASP used some fixed-point scaling that I no longer recall. I had to experiment a bit to get the scaling right.

Then I was back at zero – I could start making the top-quality modern version. It took about another week to “tune” the new version and get it the way I wanted it. I spent more time in San Francisco with the THX folks making sure it met their needs as well. We then took the resulting sound files to Skywalker Sound to make the final mixes. It was a real thrill to finally hear the various versions in the Skywalker Stag theater. It was breathtaking.

I used the original only as an audio reference when I was bringing up the synthesis engine. Otherwise, the original sound was not used.
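
For the curious, here’s what that score-plus-engine split can look like in miniature. This toy Python sketch is purely illustrative – the event format and names are invented, not Moorer’s Cleo patch or his C score generator – but it shows the idea: a “score” that is just a stream of timed “set parameter of oscillator X” statements, driving a separate oscillator bank.

    # Hypothetical sketch of the score/engine split described above. The
    # "score" is a list of timed parameter changes ("set frequency of
    # oscillator X to Y Hertz"); the engine applies them while rendering.
    # Event format, names, and defaults are invented for illustration.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Event:
        time: float   # seconds
        osc: int      # oscillator index
        param: str    # "freq" or "amp"
        value: float

    def render(events, n_osc=30, sr=22050, dur=2.0):
        events = sorted(events, key=lambda e: e.time)
        freq = np.full(n_osc, 200.0)   # default state before any events
        amp = np.zeros(n_osc)
        phase = np.zeros(n_osc)
        out = np.zeros(int(sr * dur))
        k = 0
        for i in range(len(out)):
            while k < len(events) and events[k].time <= i / sr:
                e = events[k]          # apply every event whose time has arrived
                (freq if e.param == "freq" else amp)[e.osc] = e.value
                k += 1
            phase += 2 * np.pi * freq / sr
            out[i] = float(np.sum(amp * np.sin(phase)))
        return out / max(1.0, float(np.max(np.abs(out))))

    # e.g. two voices beating against each other, then snapping into tune:
    demo = [Event(0.0, 0, "amp", 0.5), Event(0.0, 0, "freq", 300.0),
            Event(0.0, 1, "amp", 0.5), Event(0.0, 1, "freq", 303.0),
            Event(1.0, 1, "freq", 300.0)]
    audio = render(demo)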


What’s it like making this sound with today’s tech? It seems one advantage, apart from the clear affordability and accessibility of hardware, is the ability to control sound in real-time. But how did you work with it now? What’s your current tool setup?

Boy, it is night and day. In 1983, it took a rack of digital hardware 6 feet [1.8m] tall to synthesize 30 voices in real time. Today I can do 70 voices on my laptop at close to real time. It is an absolute joy the power we have at our fingertips today. For this particular project, I just used a standard Dell W3 laptop. Nothing fancy. I have an M-Audio Fast Track Ultra 8R for the multi-channel D/A output for monitoring and testing. That gives me 8 channels of output, so I can do up to 7.1 but not Dolby Atmos. I didn’t actually hear the Atmos version I had synthesized until I got to Skywalker. I had a pretty good idea of what it would sound like, though.

A couple of people have already asked me about how you wound up with 20,000 lines of code in the original. I expect there was a fair bit of manual mucking about?

Actually I made a mistake with that 20,000 lines of code statement – that was just off the top of my head. I need to correct that if I can figure out how, but it also depends a bit on what lines of code you count. The original 30-year-old C program is 325 lines, and the “patch” file for the synthesizer was 298 more lines. I guess it just felt like 20,000 lines when I did it.

Given that it was written and debugged in 4 days, I can’t claim the programming chops to make 20,000 lines of working code that quickly. But, to synthesize it in real time, in 1983, took 2 years to design and build a 19” rack full of digital hardware and 200,000 lines of system code to run the synthesizer. All that was already done, so I was building on a large foundation of audio processing horsepower, both hardware and software. Consequently, a mere 325 lines of C code and 298 lines of audio patching setup for the 30 voices was enough to invoke the audio horsepower to make the piece.

What state is that code in, presently? Some people were curious to see it, but it seems efforts like Batuhan Bozkurt’s do a good job. I was actually unclear in the other correspondence I read what you meant by musical notation.

I guess you are asking who owns the code. THX, Ltd is the owner of the code that I produced. As you note, Mr. Bozkurt and a number of other folks have done perfectly marvelous versions of the piece as well. Around 1984, folks at the trademark division of Lucasfilm asked me to make a “score” for the piece in musical notation – you know – treble clefs and staff lines and that stuff. I took out my Rapidograph india-ink pens and drafted something that kind of suggests how the piece works as well as it can be expressed in traditional music notation. That was apparently necessary to copyright the piece in 1984.


This time, THX wanted a whole lot of versions. How did you approach all of these different surround formats?

It was a bit tricky, since it was not just the additional formats, but it was also the fact that we wanted it to work well in the living room as well as the modern cinema. In 1983, home theater systems were not even a gleam in the eye. In the living room, you are relatively close to the speakers, so to make it sound rich and “enveloping”, I wanted the sound in each speaker to be as different as possible. In the cinema, you have a different problem – if you sit to one side, you will be close to one speaker and a long ways away from the others, so that speaker will dominate. In this case, the sound in each speaker had to be as rich as possible so it could stand on its own.

My first couple of tries sounded OK in the living room, but when I listened to each channel separately, they sounded thin and cheesy. If I just ran up the number of voices to 70, 80, or 90, each speaker sounded fine, but the overall impression got “mushy” and diffused. What I ended up doing was to put about 8 distinct voices into each speaker, so the 5.1 has about 40 voices (plus the subsonic ones in the subwoofer), the 7.1 version about 56 voices, and the Atmos 9.1 bed has over 70 voices.

The different versions do sound different. The 5.1 has a lot of “clarity” – you can really hear the individual voices move around. The 7.1 is still pretty crisp – since the voices come from different directions, you can still make them out separately – but there are some new voices moving in different directions that aren’t in the 5.1 version. The Atmos 9.1 adds two more overhead. These are far enough away that you don’t hear them clearly, but they just add to the richness.

In short, it was a challenge, but I think we came up with a viable solution that provides an experience that scales with the setup but preserves a lot of clarity and precision.
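
(A quick sanity check on those voice counts, with roughly 8 distinct voices per full-range channel – the channel layouts here are our own illustrative assumption, not a THX spec:)

    # Rough arithmetic behind the per-format voice counts quoted above:
    # ~8 distinct voices per full-range speaker, plus subsonics in the LFE.
    # Channel layouts are illustrative assumptions, not a THX spec.
    VOICES_PER_CHANNEL = 8
    layouts = {
        "5.1": ["L", "R", "C", "Ls", "Rs"],
        "7.1": ["L", "R", "C", "Ls", "Rs", "Lrs", "Rrs"],
        "Atmos 9.1 bed": ["L", "R", "C", "Ls", "Rs", "Lrs", "Rrs", "Ltop", "Rtop"],
    }
    for name, chans in layouts.items():
        print(f"{name}: {len(chans)} mains x {VOICES_PER_CHANNEL} = "
              f"{len(chans) * VOICES_PER_CHANNEL} voices (+ LFE)")
    # -> 40, 56, and 72: matching "about 40", "about 56", and "over 70"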

Are you involved in music and sound today? (Still playing the banjo?)

Is there air? Yes, of course. I play and make music every chance I get. I did some sax and banjo tracks for an album of spoken poetry a couple years ago and I play at the local watering holes from time to time.

Thanks, Andy! Really an inspiration to get to hear some of the nitty-gritty details – and to see a sound like this have such power. Now, up to us to work on our sound design coding chops – and our banjo licks, both, perhaps!

Shown: Andy Moorer at THX Ltd. San Francisco headquarters in 2014 during his visit to work on the regenerated THX Deep Note. (Photos: © THX Ltd.)

Previously: THX Just Remade the Deep Note Sound to be More Awesome



THX Just Remade the Deep Note Sound to be More Awesome

It’s one of the best-known electronic sounds ever – perhaps the best electronic sound branding in history. It launched in 1983 – right before Star Wars: Episode VI – Return of the Jedi, no less.

But it seems the THX “Deep Note” was due for an upgrade. And that’s what it got last week. Lucasfilm called upon the original creator of Deep Note, Dr. James ‘Andy’ Moorer, to remake his legendary sound design for modern theater audio technology.

Here’s a look at that history – and how far it’s come.

In the meanwhile, you can watch the trailer here, though I think you’ll really want a THX-certified theater for this, obviously (through stereo headphones and whatever they’ve used to encode it here, it isn’t really distinguishable from the original):
http://www.thx.com/consumer/movies/120832135

This is actually the third major version of the Deep Note trailer. “Wings” was the first, heralding the arrival of Lucasfilm’s theater sound certification process. The one that probably springs to mind is “Broadway”, which features a blue frame on the screen. Less is more: that elegant rectangle plays against the holy $#*(& my face is about to melt off!!$#($*& effect of the sound. See the brilliant authorized Simpsons parody:

The sound itself is copyrighted – and using it unaltered, without permission, can land you in hot water. (Lucasfilm sued Dr. Dre after he used it without permission on his album 2001.)

But the history of the sound, and of Dr. Moorer, says a lot about the massive pace of creative technology in the past decades.

Dr. Moorer has four patents to his name and a series of lives in technology.

In the 70s, he co-founded and co-directed Stanford’s CCRMA research center, which continues to give birth to leading figures in music technology. (Today, superstars like doctoral student Holly Herndon go there to study with teachers like Ge Wang, who managed both to invent the ChucK programming language and to reimagine the phone as an instrument with the hugely successful Smule.)

Dr. Moorer was also an advisor to Paris’ IRCAM, where he worked on speech analysis and synthesis – for a ballet company.

And he worked in research and development at Lucasfilm’s The Droid Works. There, he designed something called the Audio Signal Processor, the mainframe on which the Deep Note sound would be created – alongside pioneering sound design production techniques for Jedi, Temple of Doom, and more. That machine would eventually be sold for scrap, but its legacy lives on.

In fact, the ASP and the larger “SoundDroid” system around it read like a template for everything that would happen in audio production tools since. Listen to how it’s described on Wikipedia: “Complete with a trackball, touch-sensitive displays, moving faders, and a jog-shuttle wheel, the SoundDroid included programs for sound synthesis, digital reverberation, recording, editing and mixing.” Yes, touch displays, like the iPad. Hardware controls, like advanced studio controllers – years before they would become available for computers. Digital processing. Sure, we take this stuff for granted, but in the 80s, it had to be built from scratch.

And he worked with Steve Jobs at NeXT (which also would pioneer sound tech that would reach the masses later – the forerunner of today’s Max/MSP, for instance, ran exclusively on a NeXT machine).

Accordingly, Dr. Moorer has an Emmy Award and an Oscar.

And now he’s Principal Scientist at Adobe Systems.

And he repairs old tube radios and plays banjo, says Music thing.

He is the most interesting digital audio engineer in the world.

He told Music thing the full story of the THX sound, built on a massive mainframe – no DSP chips could be had at the time.

As he tells it:

“I was asked by the producer of the logo piece to do the sound. He said he wanted “something that comes out of nowhere and gets really, really big!” I allowed as to how I figured I could do something like that.

“I set up some synthesis programs for the ASP that made it behave like a huge digital music synthesizer. I used the waveform from a digitized cello tone as the basis waveform for the oscillators. I recall that it had 12 harmonics. I could get about 30 oscillators running in real-time on the device. Then I wrote the “score” for the piece.

“The score consists of a C program of about 20,000 lines of code. The output of this program is not the sound itself, but is the sequence of parameters that drives the oscillators on the ASP. That 20,000 lines of code produce about 250,000 lines of statements of the form “set frequency of oscillator X to Y Hertz”.

“The oscillators were not simple – they had 1-pole smoothers on both amplitude and frequency. At the beginning, they form a cluster from 200 to 400 Hz. I randomly assigned and poked the frequencies so they drifted up and down in that range. At a certain time (where the producer assured me that the THX logo would start to come into view), I jammed the frequencies of the final chord into the smoothers and set the smoothing time for the time that I was told it would take for the logo to completely materialize on the screen. At the time the logo was supposed to be in full view, I set the smoothing times down to very low values so the frequencies would converge to the frequencies of the big chord (which had been typed in by hand – based on a 150-Hz root), but not converge so precisely that I would lose all the beats between oscillators. All followed by the fade-out. It took about 4 days to program and debug the thing. The sound was produced entirely in real-time on the ASP.”
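
If you want a feel for that recipe in modern code, here’s a rough, unofficial sketch in Python – a drifting 200–400 Hz cluster, 1-pole smoothers on frequency, convergence onto a slightly detuned chord built on a 150-Hz root, then a fade-out. The timings, levels, chord voicing, and the 12-harmonic stand-in for the digitized cello waveform are all guesses, not Moorer’s actual values.

    # Unofficial sketch of the Deep Note recipe described above. Every
    # specific value (timings, chord voicing, levels, harmonics) is a guess.
    import numpy as np
    from scipy.io import wavfile

    SR, DUR, N = 22050, 9.0, 30        # sample rate, length in s, oscillators
    rng = np.random.default_rng()      # no fixed seed: each run "performs" anew

    # Final chord: octaves of a 150-Hz root, each voice detuned a touch so
    # the converged oscillators still beat against each other.
    chord = np.repeat([37.5, 75.0, 150.0, 300.0, 600.0, 900.0], 5)
    chord = chord * (1 + rng.uniform(-0.004, 0.004, N))

    target = rng.uniform(200.0, 400.0, N)  # the opening 200-400 Hz cluster
    freq = target.copy()
    phase = rng.uniform(0, 2 * np.pi, N)
    harm = np.arange(1, 13)                # 12 harmonics standing in for the
    h_amp = 1.0 / harm                     # digitized cello tone

    out = np.zeros(int(SR * DUR))
    for i in range(len(out)):              # per-sample for clarity, not speed
        t = i / SR
        if t < 3.0:                        # phase 1: random drift in the cluster
            target = np.clip(target + rng.normal(0, 0.3, N), 200.0, 400.0)
            tau = 0.5                      # sluggish smoothing while drifting
        else:                              # phase 2: "jam" the chord into the
            target = chord                 # smoothers as the logo materializes,
            tau = 1.5 if t < 6.0 else 0.05 # then shorten the smoothing time
        freq += (1 - np.exp(-1 / (tau * SR))) * (target - freq)  # 1-pole smoother
        phase = (phase + 2 * np.pi * freq / SR) % (2 * np.pi)
        out[i] = float((np.sin(np.outer(phase, harm)) @ h_amp).sum())

    out *= np.minimum(1.0, (DUR - np.arange(len(out)) / SR) / 1.5)  # fade-out
    out *= 0.9 / np.max(np.abs(out))                                # normalize
    wavfile.write("deepnote_sketch.wav", SR, (out * 32767).astype(np.int16))

Run it a few times: with no fixed random seed, every render comes out slightly different – which is exactly the generative point made below.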

For more, check out the 2005 Music thing story:
TINY MUSIC MAKERS: Pt 3: The THX Sound

The other interesting thing about the story told to Music thing is that the piece is essentially a generative performance. Random numbers mean each time the code is run, it “performs” a different version. So some of the recognizable features of the THX recording are very much the outcome of a particular performance – so much so that, when the recording was temporarily lost, people complained.

I’m going to try to get hold of Dr. Moorer to find out how the new piece was created, as press materials (naturally) fail to go into detail. But part of the reason you’ll want to hear it in a theater is the mix: there are three different lengths (30, 45, and 60 seconds), each of them made in stereo, 5.1, 7.1, and Atmos mixes.

And yes, I definitely hear a similarity to Xenakis’ Metastasis. In fact, the technique described above in code is similar to the overlaid glissandi in the Xenakis score – and perception will do the rest.

Perception itself is interesting – particularly the fact that the design of the sound, not its actual amplitude, is what gives it its power. (Lesson to learn for all of us, there.) Even with the sound turned down, it sounds loud; sound designer Gary Rydstrom has said that this spectral saturation means it “just feels loud.”

It’s also been a model for recreation – a kind of perfect homework assignment for sound design coders. For instance:

Recreating THX’s Deep Note in JavaScript with the Web Audio API

Writing for his blog Earslap, Batuhan Bozkurt has a masterful recreation of the Deep Note sound in the coding environment SuperCollider. Whether it sounds exactly like that original recording I think isn’t so important – just working through the basic technique of reproducing it opens up a lot of techniques you could expand into other, more personal expressions.

This is a great article and well worth reading (and I wonder if the original creator would have anything to say about it, in fact):

Recreating the THX Deep Note [Earslap]

And it’s also notable that the SuperCollider language can run comfortably on a $25 Raspberry Pi – no Lucas mainframes in sight. Coding is also something that’s opened up to countless young men and women around the world, even in typical music classes.

Think about that: what was once the domain of a tiny handful of people in Hollywood is now something you can run on a $25 piece of hardware, something you can learn with more ease than finding a violin teacher. Indeed, only education and literacy are the final, if significant, barriers. With that knowledge and basic technology access, the most advanced and unique computer music technique of my own childhood is now nearly as accessible worldwide as opening your mouth and singing. This says a lot about the power of access to ideas in the modern world – and it makes it even less excusable that significant gaps remain in gender, in economic status, and in geography.
