These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator, or Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper into the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork goes into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking at music as an isolated element, and connecting it to architecture and memory.)

We were talking about imagining sound. Sounds from memories, sounds from everyday life, and unheard sounds. Later we started to create sonic events just with words, which we translated into some tracks. “Drawing from Memory” is a sonic interpretation of one of those sound / word pieces. FIELDS now makes it possible to unfold the individual parts of this composition, and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.

The post These fanciful new apps weave virtual music worlds in VR and AR appeared first on CDM Create Digital Music.

4DSOUND: CIRCADIAN – Interactive Sound Performances at TodaysArt Festival The Hague, NL

Not specifically a synth event, but some of you might be interested in this. If you are unfamiliar with 4DSOUND, see this previous post featuring a couple of videos from Ableton. Also filing this one in the Art Installations channel. The following is the official press release on the event. You’ll find the event site here.

“4DSOUND: CIRCADIAN
PREMIERE AT TODAYSART FESTIVAL IN THE HAGUE, NL

Envelop Wants to Make an Ambisonic 3D Venue and Tools

3D, spatialized sound is some part of the future of listening – both privately and in public performances. But the question is, how? Right now, there are various competing formats, most of them proprietary. There are cinema formats (hello, Dolby), meant mainly for theaters. There are research installations, particularly in Germany. And then there are one-off environments like the 4DSOUND installation I performed on and on which CDM hosted an intensive weekend hacklab – beautiful, but only in one place in the world, and served up with a proprietary secret sauce.

Artist Christopher Willits has teamed up with two sound engineers / DSP scientists and a researcher studying sound’s impact on the body to produce ENVELOP – basically, a venue/club for performances and research.

The speaker diffusion system is relatively straightforward for this kind of advanced spatial sound. You get a sphere of speakers to produce the immersive effect – 28 in total, plus 4 positioned subwoofers. (A common misconception is that bass can’t be spatialized; in fact, I’ve heard researchers demonstrate that you can localize low frequencies as well as high ones.) Like the 4DSOUND project (and, incidentally, unlike some competing systems), the speaker install is built into a set of columns.

And while the crowd-funding project is largely to finish building the physical venue, the goal is wider. They want to not only create the system, but they say they want to host workshops, hackathons, and courses in immersive audio, as well.

You can watch the intro video:

Another key difference between ENVELOP and the 4DSOUND system is that ENVELOP is built around Ambisonics. The key with this approach, in theory, at least, is that sound designers and composers choose coordinates once and then can adapt a work to different speaker installations. An article on Ambisonics is probably a worthy topic for CDM (some time after I’ve recovered from Musikmesse, please), but here’s what the ENVELOP folks have to say:

With Ambisonics, artists determine a virtual location in space where they want to place a sound source, and the source is then rendered within a spherical array of speakers. Ambisonics is a coordinate based mapping system; rather than positioning sounds to different locations around the room based on speaker locations (as with conventional surround sound techniques), sounds are digitally mapped to different locations using x,y,z coordinates. All the speakers then work simultaneously to position and move sound around the listener from any direction – above, below, and even between speakers.
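That coordinate-based idea is easy to make concrete. Below is a minimal sketch of first-order Ambisonic (B-format) encoding – my own illustration using the FuMa weighting convention, not code from ENVELOP – in which one mono sample is weighted into four channels (W, X, Y, Z) from its azimuth and elevation, with speaker decoding deferred to a later stage:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample into first-order B-format (W, X, Y, Z).

    azimuth and elevation are in radians. W carries the omnidirectional
    part (the FuMa convention scales it by 1/sqrt(2)); X, Y, Z carry the
    directional parts along the three coordinate axes.
    """
    w = sample * (1.0 / math.sqrt(2.0))
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z

# A source dead ahead (azimuth 0, elevation 0) lands entirely in W and X.
```

Because the encoding records only direction, the same four channels can later be decoded to a 28-speaker sphere or to binaural headphones without touching the composition – which is exactly the portability argument being made here.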

One of our hackers at the 4DSOUND day did try “porting” a multichannel ambisonic recording to 4DSOUND with some success, I might add. But 4DSOUND’s own spatialization system is separate.

The ENVELOP project is “open source” – but it’s based on proprietary tools. That includes some powerful-looking panners built in Max for Live which I would have loved to have whilst working on 4DSOUND. But it also means that the system isn’t really “open source” – I’d be interested to know how you’d interact, say, with genuinely open tools like Pure Data and SuperCollider. That’s not just a philosophical question; the workflow is different if you build tools that interface directly with a spatial system.

It seems open to other possibilities, at least – with CCRMA at Stanford nearby, as well as the headquarters of Cycling ’74 (no word from Dolby, who are also in the area), the brainpower is certainly in the neighborhood.

Of course, the scene around spatial audio is hardly centered exclusively on the Bay Area. So I’d be really interested to put together a virtual panel discussion with some competing players here – 4DSOUND being one obvious choice, alongside Fraunhofer Institute and some of the German research institutions, and… well, the list goes on. I imagine some of those folks are raising their hands and shouting objections, as there are strong opinions here about what works and what doesn’t.

If you’re interested, let us know. Believe me, I’m not a partisan of any one system – I’m keen to see different ideas play out.

ENVELOP – 3D Sound [Kickstarter]

For background, here’s a look at some of the “hacking” we did of spatial audio in Amsterdam at ADE in the fall. Part of our idea was really that hands-on experimentation with artists could lead to new ideas – and I was overwhelmed with the results.

4DSOUND Spatial Sound Hack Lab at ADE 2014 from FIBER on Vimeo.

The post Envelop Wants to Make an Ambisonic 3D Venue and Tools appeared first on Create Digital Music.

Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND

The impressive, futuristic physical form of the 4DSOUND system. Photo: George Schroll.

You can’t really hear the results of the Spatial Audio Hacklab sitting at your computer – by definition, you had to be there to take in the experience of sounds projected in space. But you’ll probably feel the enthusiasm and imagination of its participants.

And that’s why it’s a pleasure to share the video documentation, produced for 4DSOUND by a team from FIBER – the Dutch audiovisual events and art platform – at Amsterdam Dance Event last month. In unleashing a diverse group of artist-experimenters on 4DSOUND’s unique speaker installation, we got a chance to create a sonic playground, a laboratory experiment in what people could do. It’s tough to overstate just how much those participants brought to the table – or just how little time they had. Actually working on the system was measured in minutes, forcing artists to improvise quickly with reality television levels of pressure. (Only, unlike TV show challenges, everyone kept their nerves and wits.)

4DSOUND Spatial Sound Hack Lab at ADE 2014 from FIBER on Vimeo.

To get through it, these artists focused on collaboration, finding ways of connecting essential skills. In the days and weeks leading up to Amsterdam, many of them fired missives back and forth wondering how best to exploit the spatial sound system. They then worked intensively to devise something they could try quickly, forming spontaneous teams to combine resources. They did in minutes what resident artists had done in days. With input from Nicholas Bougaïeff from Liine and a whole lot of guidance and assistance from the entire 4DSOUND team, in particular founder Paul Oomen, the gathered hackers managed to get a whole lot up and running. No project went silent; with tweaks, everything worked.

This wasn’t merely a show of coding prowess or engineering. Each project found some way to involve musical practice and sound, each was a “jam” as well as “hack.” That’s something different from the typical shape of hack days; these projects weren’t just demos. They were given a voice — sometimes literally singing, rather beautifully.

It was what we hoped for, and more. The Spatial Audio Hacklab was a cooperation between CDM, FIBER, the 4DSOUND team who built the system and its software, and Liine (makers of the Lemur app), with support from Ableton (and their talented developer relations liaison, Céline Dedaj). It followed a week of intensive artist projects on the 4D from Max Cooper, Vladislav Delay, Stimming, and even myself with Robert Lippok (on a bill with raster-noton labelmates Grischa Lichtenberger, Senking, and Frank Bretschneider). But it was also a kind of contrast to those performances and their accompanying master classes, one where any “what if?” question was game.

If days of programming hadn’t already convinced you, by the rapid-fire hacklab it was clear: a spatial sound system can be more than just a clever effect. It can feel like a venue, as unlimited in possibilities as the stage of a concert hall. It’s an empty box to be filled, in the best possible way.

Photo (during the raster-noton showcase) by Fanni Fazakas.

Hearing fully-formed improvised music was especially gratifying. But for me, perhaps the most promising result was the Processing-powered game of Pong whipped up by a coder team, because it validated the accuracy of listeners’ perception. Using sensors that had previously tracked singers, it involved players scurrying around to bang a sonic ball back and forth. (That, too, makes it a nice three-dimensional counterpart of InvisiBall, a similar sonic game by Hakan Lidbo – and, tellingly, that game has been played even by blind people. Thinking out of the box can mean inventing things that aren’t limited to the usual population and audience.)

Developers from Ableton have hacked a game of pong using spatial sound. #4dsound #ade14

A video posted by Peter Kirn (@pkirn)

Participant Will Copps also shared some of his thoughts after the experience, alongside lots of positive feedback we got from the hacklabbers:

The documentary does a good job of capturing the event, but is also incredibly impressive in capturing the various hack lab ideas and distilling them into a thirteen-minute piece… I’m very impressed. I know we could have spent more than thirteen minutes talking about each individual project.

As for additional thoughts, I’d stress that there was a clear benefit to just being there and hearing the system. While many of the ideas we had were specifically for the 4D system, hearing sounds through it challenges us to incorporate as many of the benefits of spatialization as we can into our current practices. The most obvious takeaway for me, at least to implement imminently, is exploring the possibilities of binaural recording. I may not be able to easily create immersion by placing firework recordings all around the listener like we did at 4D, but now it just seems foolish not to explore ways I can try.

It’s incredible when exposure to a technological development like this alters your perception of your practice in such a specific way. We’ve been fortunate to take lifetimes of those developments for granted in our art: the ability to record, to mix in stereo, to capture colors in video. I recognize that this distinct spatialization of sound is still a ways away from being as widely implemented as the developments I just mentioned. But to be one of the first to experience something like it is unreal. Comparing 4D to those developments may sound like (and may be) hyperbole… but I don’t think any of us knows for sure. And that’s perhaps what is most exciting.

Ana Laura Rincon, aka the DJ Hyperaktivist, echoed similar sentiments:

Some experiences in life must be lived so you really understand; in this case, the 4DSOUND is an experience that must be heard so you can have a real idea of the possibilities the system offers. And so we did in the Hacklab. The 4DSOUND is a very forward-thinking idea that allows you to control sound and its dimensions in the space, giving you the possibility of creating natural environments and also of making music with the space. The system gives you the possibility of listening to sound without being confined to a speaker or a stereo system; you hear sound as it is generated outside in everyday life. Having had the opportunity to participate in the HackLab, sharing ideas with amazing musicians, developers, and just great people, and being able to use the system and really understand how it works gave me a really good insight into the vast possibilities it has, and how the exploration of these is just starting. There is a new window, a very big door, just opened for the future exploration of music and sound, and how it can be experienced and perceived – look out.

My feeling was, looking through the applications, that something would happen just by getting people together in the same room. Spatial audio is a nascent but evolving field, with a scattering of different systems. I’ll talk more about that soon, but the short version: they’re all rather different, from wave field synthesis to Dolby Atmos to the unique sound and physical presence of 4DSOUND. Amsterdam was a chance to reach human critical mass thinking about the problems, accelerating progress by connecting people. It brought into one arena some of the most passionate researchers and artists, from those who have done doctoral dissertations on the topic to those curious to explore spatialization for the first time.

And that alone was transformative – as if everyone began dreaming in color for the first time, and then we all got to share the dream.

It’ll be terrific to see what’s next.

In the meanwhile, we can reveal the next playground: we’ll be in Berlin at CTM Festival making Tuning Machines in our third hacklab collaboration with that event in as many years, no doubt with a new group of collaborative spirits.

You can also have a listen to some audio impressions of the event, recorded in binaural sound.

Recording by Sero (one of the participants)
https://m.soundcloud.com/bassik

More on the film:

fiber-space.nl
codedmatters.nl
Credits:
Production – Jessica Dreu
Camera / editing – Tanja Busking
Interviews – Dayna Casey
FIBER Facebook: facebook.com/pages/Fiber-Festival/169577819730388
Music:
Frank Bretschneider – Phased Out, Oscillation, Funkalogic, Monoplex, Multiplex, Panback, Blue, Prussian
Moskitoo – Wham & Whammy (Frank Bretschneider remix)
Robert Lippok & Peter Kirn – Live performance at 4DSOUND during ADE 16-10-2014
All other sound was recorded during the Hacklab itself.
Coded Matter(s) is supported by the Creative Industry Fund NL

The post Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND appeared first on Create Digital Music.

Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos]

It was inspired by Nikola Tesla’s radical ideas about energy in air – and site-specific opera. It breaks every notion you have of how to mix, how to set volume, and what “panning” or “stereo” means. It’s, specifically, the forest of metal columns filled with omni-directional speakers we’ve come to know as 4DSOUND. And it’s all coming to Amsterdam Dance Event in October in a big way.

But what’s most important about 4DSOUND isn’t just this particular, not-inexpensive and specific installation. It’s the fact that once you start imagining sound as virtually projected into three-dimensional space, you probably won’t really think about sound in the same way.

Taking something like a site-specific spatial audio system and putting it into an online video is a recipe for failure. But the team at Ableton have done a pretty bang-on job of doing just that in two films, one focused more on the system in general and its significance, and one on specifically how the technique works.

Various composers have worked on 4DSOUND; this film focuses on Stimming. That makes an interesting choice, because his set is so live. In his work, Ableton Live is mostly a control interface for the spatialization; its audio duties are limited to mixing in the system and adding some clips. Everything else is outboard, like the MFB Tanzbär drum machine, a Teenage Engineering OP-1, and an acoustic piano.

Just as important, 4DSOUND’s Paul Oomen, a classical composer, talks about the connections to Tesla and theater. See the deeper meaning introduced at top, then the technical – and thoughts for the future – below.

With that conceptual background, it’s likewise important to understand that this system is neither a surround setup like those in cinemas (most recently Dolby’s Atmos), nor Wave Field Synthesis.

Cinema sound is generally a different animal. Those systems, or crude systems like quad (or even stereo), are capable of spatializing sounds, but they’re dependent on listener position. Wave field synthesis is closer, in that it does produce virtual sonic locations, as if sounds are in specific places beyond the speakers, even as you move around. Wave field is also interesting in that it has been adopted by MPEG. But wave field synthesis, while very precise, works on a horizontal plane, and requires very specific settings and speakers.

4DSOUND takes a different approach, using something called vertical phantom imaging. By taking advantage of omni-directional speakers, they get the advantages of virtual projection – that illusion that sounds fill specific locations or volumes – without requiring so many speakers or particular environments. That makes a unique sound space in which artists can play, and while this isn’t cheap or yet ready for club environments, it is able to make it to festivals. 4DSOUND came to Berlin’s Atonal Festival last month, for instance, and in a series of events (including a lab co-hosted by CDM), will next head back to Amsterdam Dance Event.
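The phantom-imaging principle underneath all of this is simple to demonstrate with just two speakers: a constant-power pan law splits one signal so that listeners perceive a single virtual source floating between them. A toy sketch – my own illustration, not 4DSOUND’s actual algorithm, which works across many omni-directional drivers:

```python
import math

def constant_power_gains(position):
    """Gains for a phantom source between two speakers.

    position runs from 0.0 (fully at speaker A) to 1.0 (fully at
    speaker B). The cos/sin weighting keeps total radiated power
    constant, so the phantom image moves between the speakers without
    a loudness dip in the middle.
    """
    theta = position * math.pi / 2.0
    return math.cos(theta), math.sin(theta)

# A centered source puts both speakers at ~0.707, summing to unit power.
```

Systems like 4DSOUND extend this basic trick to vertical pairs and whole columns, which is what produces the illusion of sounds occupying locations and volumes in the room.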

I’ve been working with 4DSOUND now in my own music, in a collaboration with Robert Lippok, and it’s been a unique learning experience. I couldn’t agree more with Stimming that it can change how you listen to music and sonic environments. Stereo is artificial enough that it’s easy to lose sight of sounds in terms of how they exist in space. It’s simply too distant from how we hear. But when you can manipulate sounds in a virtual environment, you really begin to appreciate the spatial as a compositional element.

In our project, we’re working to use those elements to create our own virtual architectures. It’s a first opportunity to see how you might perceive architecture purely as sonic, non-physical form. We’re working with Berlin’s Arno Brandlhuber, who constructed a form in a proposal for housing that perfectly fits the grid of the 4DSOUND – real and virtual.

Translating architecture into sound, in process on 4DSOUND. Photos by Robert Lippok.

As seen in the video, you’re not only positioning sounds: you can produce volumes, paths with motion, and create effects that are calculated around the space (for reflections, delays, and more). You can add Doppler effect and other filtering to enhance the illusion that sound sources are moving around you. You can create sonic perceptions that seem real, and others that would normally be impossible.
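The Doppler enhancement mentioned above follows from basic acoustics: a source closing on the listener at radial velocity v scales the perceived frequency by c / (c - v). A purely illustrative sketch:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def doppler_frequency(freq_hz, radial_velocity):
    """Perceived frequency for a moving source and a stationary listener.

    radial_velocity > 0 means the source approaches the listener. In a
    virtual environment this velocity is just the rate of change of the
    source-to-listener distance, derived from the source's x,y,z path.
    """
    return freq_hz * SPEED_OF_SOUND / (SPEED_OF_SOUND - radial_velocity)

# A 440 Hz source approaching at 10 m/s sounds slightly sharp (~453 Hz).
```

In practice a spatial engine implements this with a variable delay line rather than by shifting frequencies directly, but the pitch effect the listener hears is the same.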

To implement this system, you’re granted per-voice controls of each sonic object. Ableton Live is a bit ill-equipped to work in this way; music software in general is built around mixers that assume stereo recordings are the end result. But those voices are represented by graphical controls added to an Ableton session, built in Max for Live. It in turn is a front-end, alongside a Lemur remote control communicating over OSC, for a back-end system that does the processing necessary to pipe 57 channels of audio out through the RME audio interfaces to the amps. (The back end is built in Max/MSP, with apparently heavy use of gen~ DSP objects for performance.)
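Whatever the front-end, control data like this typically travels as OSC messages carrying per-voice coordinates. Here’s a stdlib-only sketch of encoding and sending one such message; note the /voice/N/xyz address pattern is hypothetical – 4DSOUND’s actual OSC namespace isn’t public:

```python
import socket
import struct

def osc_message(address, *floats):
    """Encode a minimal OSC message with 32-bit float arguments.

    Per the OSC 1.0 spec, strings are null-terminated and padded to a
    4-byte boundary, and floats are big-endian IEEE 754.
    """
    def pad(raw):
        return raw + b"\x00" * (4 - len(raw) % 4)

    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))  # type tag string
    for value in floats:
        msg += struct.pack(">f", value)
    return msg

def send_voice_position(sock, host, port, voice, x, y, z):
    """Send one voice's x,y,z position over UDP (hypothetical namespace)."""
    sock.sendto(osc_message("/voice/%d/xyz" % voice, x, y, z), (host, port))

# Usage: move voice 1 to a point one meter ahead of the listener.
# sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# send_voice_position(sock, "127.0.0.1", 9000, 1, 0.0, 1.0, 0.0)
```

The point of the sketch: because the wire format is just addresses plus coordinates, a Lemur layout, a Max for Live patch, or a script like this can all drive the same spatialization back-end interchangeably.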

So many of our sonic habits have been constructed by the stereo mixdown and its crude virtual space that we may be unaware how much it impacts our composition and sound design. So it’s interesting to listen to a binaural recording of Stimming. You’ll want to not only listen on headphones, but also be patient as the work builds up. Obviously, even binaural recordings don’t really capture the impact. But you will begin to hear panning that’s vertical, with a great deal of distance in the mix rather than the packed recordings common in dance music. This will be less evident if you haven’t heard the 4D in person, but a lot of the timbres you hear, the sense of these sonic objects in some real space and the way they reverberate, is also a feature of working in this way. It will no doubt transform habits of producing and mixing even in stereo – once you’ve done this, you can’t ever go back to mono or stereo in the same way.

Stimming explains:

Equipment used: MFB Tanzbär, Clavia Nord Rack 2, Teenage Engineering OP-1, Arturia MicroBrute, and Ableton Live as master clock, sampler, and MIDI sequencer.

Everything on the 4D sound was tweaked by hand in real-time, as well as the whole arrangement. I preprogrammed some chords and grooves on my machines though.

The 4D System is an advanced spatial sound system and the set is binaural (also called dummy head) recorded – in order to get an idea of how it sounded you need to use your headphones.

For the full binaural experience, I made the lossless AIFF file available for download. Please note that the download is over 1 GB in size.

Imagine being INSIDE the music, and the sounds move around you in all three dimensions.

It really is thinking in four dimensions – the three spatial dimensions, plus time (and adding that fourth element truly feels like a fourth dimension).

And the 4DSOUND setup is complex enough to feel like an instrument, the combination of its spatial capabilities and various effects and live controls.

So, it’s significant that in Amsterdam, we’ll have a full program of new music for the 4DSOUND (including Stimming, a Raster Noton showcase including Robert and myself along Grischa Lichtenberger, Frank Bretschneider, and Senking), Max Cooper, and Vladislav Delay.

It’s just as important that we’ll have developers from Ableton joining a select lineup of artists and researchers of lots of backgrounds on Spatial Audio Hack Lab we’re co-hosting. We have everyone from doctoral experts in spatialization to singers.

This isn’t a gimmick or a fad or some cool new toy. There is a lot of work remaining to be done, on 4DSOUND and spatial audio in general. The 4DSOUND itself is a canvas for all kinds of work; it’s not obvious how to work with it or what it should do. Imagining how interfaces should look is a wide-open question. And on 4D and spatial audio in general, there’s a huge opening for people to suggest new ideas for sound, composition, performance, and control. That can relate to architecture, to data sonification, to simulation. In Paul’s case, sensors on singers can produce a new way of enhancing theatre with amplified and electronic sound, as audio follows performers.

And the whole field is about to blow wide open. New microphone and headphone technology could make 4DSOUND’s specific system still more relevant – a playground for challenging ideas that will become increasingly commonplace.

So, if you’re in Amsterdam, I hope you’ll join us. If not, we’ll keep piping these spatial possibilities to you.

thinkingin3d_1

thinkingin3d_2

thinkingin3d_3

Thinking in 3D – or 4D – will be a new challenge. Above, photos from our recent working sessions.

Amsterdam events:
http://www.facebook.com/4dsoundonline

More of the latest from 4DSOUND:
http://4dsound.net/news/

The post Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos] appeared first on Create Digital Music.

Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos]

It was inspired by Nikola Tesla’s radical ideas about energy in air – and by site-specific opera. It breaks every notion you have of how to mix, how to set volume, and what “panning” or “stereo” means. It is, specifically, the forest of metal columns filled with omni-directional speakers we’ve come to know as 4DSOUND. And it’s all coming to Amsterdam Dance Event in October in a big way.

But what’s most important about 4DSOUND isn’t this particular (and not inexpensive) installation. It’s the fact that once you start imagining sound as virtually projected into three-dimensional space, you probably won’t think about sound in the same way again.

Taking something like a site-specific spatial audio system and putting it into an online video is a recipe for failure. But the team at Ableton have done a pretty bang-on job of doing just that in two films, one focused more on the system in general and its significance, and one on specifically how the technique works.

Various composers have worked on 4DSOUND; this film focuses on Stimming. That makes an interesting choice, because his set is so live. In his work, Ableton Live is mostly a control interface for the spatialization; its audio duties are limited to mixing in the system and adding some clips. Everything else is outboard, like the MFB Tanzbär drum machine, a Teenage Engineering OP-1, and an acoustic piano.

Just as important, 4DSOUND’s Paul Oomen, a classical composer, talks about the connections to Tesla and theater. See the deeper meaning introduced at top, then the technical – and thoughts for the future – below.

With that conceptual background, it’s likewise important to understand that this system is neither a surround setup like those in cinemas (most recently Dolby’s Atmos), nor Wave Field Synthesis.

Cinema sound is generally a different animal. Those systems – like cruder quad or even stereo setups – can spatialize sounds, but the effect depends on listener position. Wave field synthesis is closer, in that it produces virtual sonic locations, as if sounds occupy specific places beyond the speakers, even as you move around. (Wave field synthesis is also notable for having been adopted by MPEG.) But while very precise, it works only on a horizontal plane, and requires very particular rooms and speaker arrays.
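For a sense of how little “space” conventional stereo actually computes, here is the standard equal-power pan law – essentially the entirety of stereo panning, sketched in Python. (Real mixers add refinements; this is just the textbook principle, not any particular DAW’s implementation.)

```python
import math

def equal_power_pan(position):
    """Split a mono signal across left/right gains.

    position: -1.0 (hard left) to +1.0 (hard right).
    The equal-power law keeps perceived loudness roughly constant
    as a sound sweeps across the stereo field.
    """
    theta = (position + 1.0) * math.pi / 4.0  # maps -1..1 to 0..pi/2
    return math.cos(theta), math.sin(theta)   # (left_gain, right_gain)

# Center position: both channels at ~0.707, i.e. -3 dB
print(equal_power_pan(0.0))
```

That single left-right parameter is all the “space” a stereo mix gives each sound; the contrast with freely positioning sonic objects in three dimensions could hardly be greater.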

4DSOUND takes a different approach, using something called vertical phantom imaging. By taking advantage of omni-directional speakers, they get the advantages of virtual projection – that illusion that sounds fill specific locations or volumes – without requiring so many speakers or particular environments. That makes a unique sound space in which artists can play, and while this isn’t cheap or yet ready for club environments, it is able to make it to festivals. 4DSOUND came to Berlin’s Atonal Festival last month, for instance, and in a series of events (including a lab co-hosted by CDM), will next head back to Amsterdam Dance Event.

I’ve been working with 4DSOUND now in my own music, in a collaboration with Robert Lippok, and it’s been a unique learning experience. I couldn’t agree more with Stimming that it can change how you listen to music and sonic environments. Stereo is artificial enough that it’s easy to lose sight of how sounds exist in space; it’s simply too distant from how we hear. But when you can manipulate sounds in a virtual environment, you really begin to appreciate the spatial as a compositional element.

In our project, we’re working to use those elements to create our own virtual architectures. It’s a first opportunity to see how you might perceive architecture purely as sonic, non-physical form. We’re working with Berlin’s Arno Brandlhuber, who constructed a form in a proposal for housing that perfectly fits the grid of the 4DSOUND – real and virtual.

Translating architecture into sound, in process on 4DSOUND. (Photos by Robert Lippok.)

As seen in the video, you’re not only positioning sounds: you can produce volumes, paths with motion, and create effects that are calculated around the space (for reflections, delays, and more). You can add Doppler effect and other filtering to enhance the illusion that sound sources are moving around you. You can create sonic perceptions that seem real, and others that would normally be impossible.
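One of those cues is easy to state precisely. The Doppler effect mentioned above follows the classic formula for a moving source and a stationary listener; here’s a sketch (the function name and the fixed speed of sound are my own choices for illustration, not anything from 4DSOUND’s API):

```python
def doppler_shift(freq_hz, radial_velocity, speed_of_sound=343.0):
    """Perceived frequency for a moving source and a stationary listener.

    radial_velocity: source speed along the line to the listener, in m/s.
    Positive means the source is receding (pitch drops);
    negative means it is approaching (pitch rises).
    """
    return freq_hz * speed_of_sound / (speed_of_sound + radial_velocity)

# A 440 Hz source receding at 34.3 m/s (one tenth the speed of sound)
# is heard a whole tone lower, at 400 Hz
print(doppler_shift(440.0, 34.3))
```

Spatial systems apply exactly this kind of pitch modulation (plus distance filtering) to sell the illusion that a virtual source is really in motion.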

To implement this system, you get per-voice control of each sonic object. Ableton Live is a bit ill-equipped to work this way; music software in general is built around mixers that assume stereo recordings are the end result. Instead, those voices are represented by graphical controls added to an Ableton session, built in Max for Live. That, in turn, is a front end – alongside a Lemur remote control communicating over OSC – for a back-end system that does the processing necessary to pipe 57 channels of audio out of the RME audio interfaces to the amps. (The back end is built in Max/MSP, with apparently heavy use of gen~ DSP objects for performance.)
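To sketch what that front-end/back-end split implies: each voice ultimately just needs a stream of timed position messages. Here’s a toy trajectory generator; note that the `/voice/1/xyz` address pattern is hypothetical (4DSOUND’s actual OSC namespace isn’t public), and actually sending it with a library like python-osc is shown only as a comment:

```python
import math

def circular_path(radius, height, steps):
    """Yield (x, y, z) points for a voice orbiting the listener
    at a fixed height -- the kind of motion path described above."""
    for i in range(steps):
        phi = 2.0 * math.pi * i / steps
        yield (radius * math.cos(phi), radius * math.sin(phi), height)

# With python-osc, each point would go out roughly like this
# (host/port and the OSC address are assumptions, not real endpoints):
#   client = SimpleUDPClient("192.168.0.10", 9000)
#   client.send_message("/voice/1/xyz", [x, y, z])
for x, y, z in circular_path(2.0, 1.5, 4):
    print(f"/voice/1/xyz {x:+.2f} {y:+.2f} {z:+.2f}")
```

The point is the architecture: the DAW session only has to emit control data like this, while all 57 channels of actual signal processing live in the dedicated back end.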

So many of our sonic habits have been shaped by the stereo mixdown and its crude virtual space that we may be unaware how much it impacts our composition and sound design. So it’s interesting to listen to a binaural recording of Stimming. You’ll want not only to listen on headphones, but to be patient as the work builds up. Obviously, even a binaural recording doesn’t really capture the impact. But you will begin to hear panning that’s vertical, with a great deal of distance in the mix rather than the packed recordings common in dance music. This will be less evident if you haven’t heard the 4D in person, but a lot of the timbres you hear – the sense of these sonic objects in some real space, and the way they reverberate – is also a feature of working this way. It will no doubt transform habits of producing and mixing even in stereo; once you’ve worked this way, you can’t hear mono or stereo in quite the same way again.
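Part of what a binaural (dummy-head) recording captures is the tiny interaural time difference your brain uses to localize sources. Woodworth’s classic approximation gives its rough magnitude; a sketch (the head radius is a textbook average, not a measured value):

```python
import math

def itd_seconds(azimuth_rad, head_radius=0.0875, speed_of_sound=343.0):
    """Woodworth's approximation of interaural time difference.

    azimuth_rad: source angle from straight ahead, 0 to pi/2.
    Returns the extra travel time to the far ear, in seconds.
    """
    return head_radius * (azimuth_rad + math.sin(azimuth_rad)) / speed_of_sound

# A source directly to one side reaches the far ear roughly 0.66 ms late
print(itd_seconds(math.pi / 2))
```

Sub-millisecond delays like these, along with level and spectral differences between the ears, are exactly what survives in the dummy-head recording and what ordinary stereo mixdowns flatten away.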

Stimming explains:

Equipment used: MFB Tanzbär, Clavia Nord Rack 2, Teenage Engineering OP-1, Arturia MicroBrute, and Ableton Live as master clock, sampler, and MIDI sequencer.

Everything on the 4D sound was tweaked by hand in real-time, as well as the whole arrangement. I preprogrammed some chords and grooves on my machines though.

The 4D System is an advanced spatial sound system and the set is binaural (also called dummy head) recorded – in order to get an idea of how it sounded you need to use your headphones.

For the full binaural experience, I made the lossless AIFF file available for download. Please note that the download is over 1 GB in size.

Imagine being INSIDE the music, and the sounds move around you in all three dimensions.

It really is thinking in four dimensions – the three spatial dimensions, plus time – and composing motion over time genuinely feels like working in that fourth dimension.

And the 4DSOUND setup – the combination of its spatial capabilities, various effects, and live controls – is complex enough to feel like an instrument in its own right.

So, it’s significant that in Amsterdam, we’ll have a full program of new music for the 4DSOUND: Stimming, Max Cooper, Vladislav Delay, and a Raster-Noton showcase featuring Robert and myself alongside Grischa Lichtenberger, Frank Bretschneider, and Senking.

It’s just as important that developers from Ableton will be joining a select lineup of artists and researchers from many backgrounds at the Spatial Audio Hack Lab we’re co-hosting. We have everyone from doctoral experts in spatialization to singers.

This isn’t a gimmick or a fad or some cool new toy. There is a lot of work remaining to be done, on 4DSOUND and spatial audio in general. The 4DSOUND itself is a canvas for all kinds of work; it’s not obvious how to work with it or what it should do. Imagining how interfaces should look is a wide-open question. And on 4D and spatial audio in general, there’s a huge opening for people to suggest new ideas for sound, composition, performance, and control. That can relate to architecture, to data sonification, to simulation. In Paul’s case, sensors on singers can produce a new way of enhancing theatre with amplified and electronic sound, as audio follows performers.
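The “audio follows performers” idea reduces, at its simplest, to mapping a tracked position onto per-speaker gains. Here is a naive inverse-distance sketch – a toy stand-in for illustration only, not 4DSOUND’s actual rendering (which uses phantom imaging, as described above):

```python
def follow_gains(source_xy, speaker_positions, min_dist=0.1):
    """Map a tracked 2D position to normalized per-speaker gains:
    closer speakers get louder, so the sound tracks the performer."""
    raw = []
    for sx, sy in speaker_positions:
        d = max(((source_xy[0] - sx) ** 2 + (source_xy[1] - sy) ** 2) ** 0.5,
                min_dist)  # clamp to avoid division blow-up at the speaker
        raw.append(1.0 / d)
    total = sum(raw)
    return [g / total for g in raw]

# Four speakers at the corners of a 4 m square; performer at the first corner
speakers = [(0, 0), (4, 0), (4, 4), (0, 4)]
print(follow_gains((0.0, 0.0), speakers))
```

Feed this a live position from, say, a sensor on a singer, and amplified sound tracks the performer through the space – the theatrical application Paul describes.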

And the whole field is about to blow wide open. New microphone and headphone technology could make 4DSOUND’s specific system still more relevant – a playground for challenging ideas that will become increasingly commonplace.

So, if you’re in Amsterdam, I hope you’ll join us. If not, we’ll keep piping these spatial possibilities to you.

Thinking in 3D – or 4D – will be a new challenge, as photos from our recent working sessions show.

Amsterdam events:
http://www.facebook.com/4dsoundonline

More of the latest from 4DSOUND:
http://4dsound.net/news/

The post Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos] appeared first on Create Digital Music.

A Week of Spatial Sound on 4DSOUND at Amsterdam Dance Event; Open Hack Lab Call

As if Amsterdam Dance Event, the electronic music mecca of Europe and the world’s largest festival of its kind, weren’t packed enough already – there’s more.

Tucked inside the festival we’ve got five days of programming devoted to spatial audio, on the 4DSOUND system. As part of ADE Sound Academy, itself focused on threads between technology, practice, and music, the event at Amsterdam’s Compagnietheater will explore the frontiers of new settings for music and sound. From plumbing the possibilities of 4DSOUND’s forest of speakers to opening a discussion of immersive sound and music now and in the future, a combination of master classes, hands-on workshops, and live performances will challenge us to imagine what is possible as music fills new environments.

Meeting that challenge requires us to be engineers and artists, teachers and students, all at the same time. So I’m humbled to be involved in this program myself from all of those perspectives – as an artist venturing into connections between architecture and music with Robert Lippok (Raster-Noton), and, via CDM, hosting discussions on how to push this and other technologies forward.

And you can be, too. The event is open to the public during ADE, and because we want your input, CDM is hosting an open call for participants to join us for a weekend-long Hack Lab. In that laboratory – limited in participation to maximize collaboration and time on the system – we’ll see what we can discover in new ways of exploiting spatial sound (and visuals).

The Schedule

With a varied program of artists, technologists, performances, and discussions, there will be plenty of reason to hang about Compagnietheater during ADE. Here’s an overview:

Wednesday, Oct 15
14:00 – 17:00 MASTERCLASS WITH PAUL OOMEN
Founder of 4DSOUND and composer Paul Oomen gives an overview of best practices in spatial sound design

21:00 – 23:00 MAX COOPER LIVE PERFORMANCE
Max Cooper explores a next level of psycho-acoustics and spatiality in his compositions specially arranged for 4DSOUND

Thursday, Oct 16
14:00 – 16:00 MASTERCLASS WITH MAX COOPER
Exclusive behind-the-scenes with Max Cooper including the world premiere of six new sound sculptures specially designed for 4DSOUND

21:00 – 03:00 RASTER-NOTON SHOWCASE
– Grischa Lichtenberger
– Frank Bretschneider
– Senking
– Robert Lippok & Peter Kirn
Live performances on the 4DSOUND system

Friday, Oct 17
14:00 – 17:00 MASTERCLASS WITH ROBERT LIPPOK & PETER KIRN
Robert Lippok & Peter Kirn go in-depth with the architectural concepts behind their 4DSOUND live show

21:00 – OPEN END VLADISLAV DELAY EXTENDED SET
Vladislav Delay’s exhibition performance in 4DSOUND lasting the entire night

Saturday, Oct 18
14:00 – 17:00 MASTERCLASS WITH VLADISLAV DELAY

Vladislav Delay reveals his working processes with intuitive and self-built control environments

21:00 – 23:00 STIMMING LIVE PERFORMANCE
Stimming presents his completely intuitive and improvisational club night on the 4DSOUND system

Sunday, Oct 19
19:00 SPATIAL SOUND HACK LAB PERFORMANCE
A series of short public performances by the participants of the ADE Sound Academy and panel discussion including: Peter Kirn (CDM), Ableton, Gareth Williams (Liine), Paul Oomen (4DSOUND), Jarl Schulp (Fiber) and Martin Stimming

4DSOUND at ADE: ADE Sound Academy, October 15-19

RSVP for 4DSOUND events on Facebook

Tickets are available now; we’ll have more on that soon (including whether a pass to the five-day program is available). But one way to get into all of it for free is to apply for the Hack Lab – below.
Online tickets

ADE pass holders have limited access to the event, but capacity is extremely constrained; first come, first served.

More About 4DSOUND

The 4DSOUND system is an ideal platform for the week, because it makes all of these discussions material and invites ADE participants into a first-hand experience. For more on how this system works, some past coverage:

Full Immersion in Audio, as Artists Explore 4DSOUND in a Spatial Grid [Ableton, Max, Lemur] [CDM]

4DSOUND: A new sound experience [Resident Advisor’s Jordan Rothlein does an in-depth story on the system]

Enter An Alternate Sensory Reality With Max Cooper’s 4D Sound Show [The Creators Project]

Also, Max Cooper is the one artist so far to have made a binaural recording, so that – provided you have headphones on – you can experience a pathway through the work.

Also worth watching: Lucy’s take on the system, which made use of sensing.

Of course, I think what we discover about immersive experience, architecture and sound, and spatiality in music can extend far beyond just this system.

Spatialization Examples : 1 from 4DSOUND on Vimeo.

Open call for participants

Spatial Sound Hack Lab
10h Saturday 18 October – 21h Sunday 19 October

Hosted by CDM (createdigitalmusic.com), Ableton, Liine, and Fiber Festival at Amsterdam Dance Event (ADE)

What’s possible when the experience of sound can happen in more dimensions? We invite artists and hackers to explore the 4DSOUND system using a variety of tools of their own choice, culminating in new experiments in sound, music, and performance.

Media can include:

  • New music and sonic creations in Max for Live and Ableton Live
  • New control interfaces with Liine’s Lemur app
  • New interactive UIs built in Canvas on Lemur (familiar to those who have worked with Web technologies)
  • New sound, control creations in other tools (Reaktor, Pure Data, Processing, etc.)
  • Visuals projected onto the 4DSOUND system (projection mapping, live VJing)

Previous experience with these tools is not required. Artists and coders alike are welcome. You might be a musician interested in exploring your instrument or voice with a spatial interface, or an artist interested in making visuals on the surface. You might be a coder or patcher eager to explore new tools. You might have some novel technology you would like to interface with a spatial sound rig. Perhaps you’re doing something completely different we haven’t thought of!

A limited number of participants will be invited to participate. We provide:

  • A pass to the week’s 4DSOUND performance series at ADE, including the Raster-Noton showcase and other events
  • Free entry to the full Hack Lab weekend
  • Access to the 4DSOUND system and gathered experts from 4DSOUND, Ableton, Liine, and CDM
  • Workshops and education, including a master class by Vladislav Delay and panel discussion on spatial and immersive sound
  • A chance to give a short public performance/demo
  • Food and refreshments for the weekend

We regret that we can’t cover other expenses. Travel and accommodation are the responsibility of participants. However, there’s no charge to participate.

Rough schedule:
Saturday 18 October
Arrival of all participants (10:00 at the latest – assuming you weren’t already enjoying the week’s 4DSOUND events)
Master class by Vladislav Delay
Meeting fellow participants, collaboration discussions
Introduction to 4DSOUND system
Introduction to control tools in Ableton Live, Max for Live, and Lemur (and compatibility with other tools)
Attend evening performance by Martin Stimming

Sunday 19 October
Full day of development with access to the system
19:00 Short performance/presentations
Final panel discussion on the future of spatial production and immersive sound, with Martin Stimming, Peter Kirn (CDM), Jarl Schulp (Fiber Festival), Olaf Bohn (Ableton), Gareth Williams (Liine), and Paul Oomen (4DSOUND)

To apply, fill out the entry form by 23:59, Sunday 14 September, and let us know a little bit about your work and skill set and what you hope to accomplish. We will select a group that we think can collaborate well on the system and that represents a variety of interests.

Apply Online for the Hack Lab

Accepted participants will be notified that week.

We look forward to meeting some of you in Amsterdam!

The post A Week of Spatial Sound on 4DSOUND at Amsterdam Dance Event; Open Hack Lab Call appeared first on Create Digital Music.

Coding the Club: How the Sensory Experience of Electronic Music Could Expand [Video]

Coded Matter(s) #5: Coding the Club from FIBER on Vimeo.

For so much of the world, the club experience is nothing if not predictable. The sound, the aesthetic, the entire evening fit a familiar mold. But drawing upon decades of work in experimental audiovisual performance, a new generation of sound and light artists is applying today’s tools to build live creations that can transcend all of that – creations that appeal directly to the senses and transform the architecture of the musical environment.

One highlight for me this year was the chance to moderate a March installment of Fiber Festival’s terrific Coded Matter(s) series. (Also on their 2014 agenda: algorithmic, generative raving. More on that separately.)

Building atop a tradition of immersive performance, the artists in this talk are part of a growing community of creators who are experimenting with how sonic, visual, and musical structures are built in clubs (and galleries, and installations, and venues that look beyond clubland specifically). They replace liquid light shows with new motorized mirrors, light, three-dimensional sound, and music and visual programming to match.

We talked a lot, about a great deal. But filmmaker Tanja Busking manages to draw those threads together in a nine-minute film that adds some clarity to these questions. Thanks, Tanja; it’s a pleasure to watch.

Details about what you’re seeing:

On March 6th FIBER and music festival 5 Days Off presented an evening about new developments in the production of dance floor environments, groundbreaking audiovisual live shows and electronic music.
This video report gives an overview of the content and atmosphere of this edition of Coded Matter(s), including some footage from the workshop and the Electric Deluxe Label Night which started after the evening programme. Featuring Simon Geilfus (ANTIVJ), 4DSOUND, Peter Kirn, Children of the Light and Matthijs Munnik.
For more information about the event:
codedmatters.nl/event/coding-the-club/
For more information about Coded Matter(s):
codedmatters.nl
Coded Matter(s) is supported by the Creative Industries Fund NL

Additional credits
Music:
Murcof + Simon Geilfus – Music from their performance
Peter Kirn + Ganucheau – Fairytale fashion mix
Darkside – Golden Arrow
Abdulla Rashim – live PA in Berlin 09/03/13
Footage:
ANTIVJ
– 3Deconstruct – vimeo.com/32935093
– Murcof + Simon Geilfus – vimeo.com/7844683
– Onion Skin by Olivier Ratsi – vimeo.com/76521918
– PALEODICTYON – making of – vimeo.com/60209340
4DSOUND
– Sound Spatialization Examples:1 – vimeo.com/81682623
– Studio Science: Lucy at 4DSOUND – youtube.com/watch?v=Mhk_LxxJuqY
Children of the Light
– Darkside at Concertgebouw, ADE 2013

Coded Matter(s) #5: Coding the Club, Electric Deluxe at 5 Days Off. Photo by Raymond van Mil.

See also:
photo gallery from the event

The post Coding the Club: How the Sensory Experience of Electronic Music Could Expand [Video] appeared first on Create Digital Music.