Mics that record in “3D” ambisonics are the next big thing

Call it the virtual reality microphone … or just think of it as an evolution of microphones that capture sounds more as you hear them. But mics purporting to give you 3D recording are arriving in waves – and they could change both immersive sound and how we record music.

Let’s back up from the hype a little bit here. Once we’re talking virtual reality or you’re imagining people in goggles, Lawnmower Man style, we’re skipping ahead to the application of these mic solutions, beyond the mics themselves.

The microphone technology itself may wind up being the future of recording with or without consumers embracing VR tech.

Back in the glorious days of mono audio, a single microphone that captured an entire scene was … well, any single microphone. And in fact, to this day there are plenty of one-mic recording rigs – think voice overs, for instance.

The reason this didn’t satisfy anyone is more about human perception than it is technology. Your ears and brain are able to perceive extremely accurate spatial positioning in more or less a 360-degree sphere through a wide range of frequencies. Plus, the very things that screw up that precise spatial perception – like reflections – contribute to the impact of sound and music in other ways.

And so we have stereo. And with stereo sound delivery, a bunch of two-microphone arrangements become useful ways of capturing spatial information. Eventually, microphone makers work out ways of building integrated capsules with two microphone diaphragms instead of just one, and you get the advantages of two mics in a single housing. Those in turn are especially useful in mobile devices.

So all these buzzwords you’re seeing in mics all of a sudden – “virtual reality,” “three-dimensional” sound, “surround mics,” and “ambisonic mics” – are really about extending this idea. They’re single microphones that capture spatial sound, just like those stereo mics, but in a way that gives them more than just two-channel left/right (or mid/side) information. To do that, these solutions have two components:

1. A mic capsule with multiple diaphragms for capturing full-spectrum sound from all directions
2. Software processing so you can decode that directional audio, and (generally speaking) encode it into various surround delivery formats or ambisonic sound

(“Surround” here generally means the multichannel formats beyond just stereo; ambisonics are a standard way of encoding full 360-degree sound information, so not just positioning on the same plane as your ears, but above and below, too.)

The B360 ambisonics encoder from plug-in maker WAVES.

The software encoding is part of what’s interesting here. Once you have a mic that captures 360-degree sound, you can use it in a number of ways. These sorts of mic capsules are useful in modeling different microphones, since you can adjust the capture pattern in software after the fact. So these spherical mics could model different classic mics, in different arrangements, making it seem as though you recorded with multiple mics when you only used one. Just like your computer can become a virtual studio full of gear, that single mic can – in theory, anyway – act like more than one microphone. That may prove useful for production applications other than just “stuff for VR.”
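
To make that “virtual mic” idea a little more concrete: from a first-order ambisonic (B-format) recording – four channels conventionally labeled W, X, Y, Z – you can derive a microphone pointed in any direction, with any first-order polar pattern, entirely in software after the fact. Here’s a minimal Python sketch of the general idea; the √2 weighting assumes the traditional B-format convention, and real products layer calibration and filtering on top of this.

```python
import numpy as np

def virtual_mic(W, X, Y, Z, azimuth, elevation=0.0, pattern=0.5):
    """Derive a virtual first-order mic from a B-format recording.

    W, X, Y, Z : numpy arrays holding the four B-format channels
    azimuth, elevation : where the virtual capsule points, in radians
    pattern : 1.0 = omni, 0.5 = cardioid, 0.0 = figure-8
    """
    # Unit vector for the direction the virtual capsule faces.
    dx = np.cos(azimuth) * np.cos(elevation)
    dy = np.sin(azimuth) * np.cos(elevation)
    dz = np.sin(elevation)
    # Blend the omni (W) and directional (X, Y, Z) components.
    return pattern * np.sqrt(2) * W + (1.0 - pattern) * (dx * X + dy * Y + dz * Z)

# e.g. a cardioid aimed 90 degrees to the left, on the horizontal plane:
# left = virtual_mic(W, X, Y, Z, azimuth=np.pi / 2)
```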

There are a bunch of these microphones showing up all at once. I’m guessing that’s for two reasons – one, a marketing push around VR recording, but two, likely some system-on-a-chip developments that make this possible. (All those Chinese-made components could get hit with hefty US tariffs soon, so we’ll see how that plays out. But I digress.)

Here is a non-comprehensive selection of examples of new or notable 360-degree mics.

8ball

Maker: HEAR360, a startup focused on this area

Cost: US$2500

The pitch: Here’s a heavy-duty, serious solution – a camera-mountable, “omni-binaural” mic that gives you 8 channels of sound that come closest to how we hear, complete with head tracking-capable recordings. PS, if you’re wondering which DAW to use – they support Pro Tools and, surprise, Reaper.

Who it’s for: High-end video productions focused on capturing spatial audio with the mic.

https://hear360.io/shop/8ball

NT-SF1

Maker: RØDE, collaborating with a 40-year veteran of these sorts of mics, Soundfield (acquired by RØDE’s parent in 2016)

Cost: US$999

The pitch: Make full-360, head-trackable recordings in a single mic (records in A-format, converts to B-format) for ambisonic audio you can use across formats. Works with Dolby Atmos, works with loads of DAWs (Reaper and Pro Tools, Cubase and Nuendo, and Logic Pro). Four channels to the 8ball’s titular eight, but much cheaper and with more versatile software.

Who it’s for: Studios and producers wanting a moderately-priced, flexible solution right now. Plus it’s a solid mic that lets you change mic patterns at will.

Software matters as much as the mic in these applications; RØDE supports DAWs like Cubase/Nuendo, Pro Tools, Reaper, and Logic.
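
For the curious, the “records in A-format, converts to B-format” step mentioned above is, at its core, a simple sum-and-difference matrix applied to the four tetrahedral capsule signals. Here’s a rough Python sketch of the textbook conversion – the mic’s companion software also applies per-capsule calibration and frequency-dependent correction, which this omits:

```python
import numpy as np

def a_to_b(LFU, RFD, LBD, RBU):
    """Textbook tetrahedral A-format -> first-order B-format conversion.

    LFU, RFD, LBD, RBU : the four capsule signals as numpy arrays
    (left-front-up, right-front-down, left-back-down, right-back-up).
    """
    W = 0.5 * (LFU + RFD + LBD + RBU)   # omnidirectional pressure
    X = 0.5 * (LFU + RFD - LBD - RBU)   # front-back figure-8
    Y = 0.5 * (LFU - RFD + LBD - RBU)   # left-right figure-8
    Z = 0.5 * (LFU - RFD - LBD + RBU)   # up-down figure-8
    return W, X, Y, Z
```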

https://en.rode.com/nt-sf1

H3-VR

Maker: ZOOM

Cost: US$350

The pitch: ZOOM is making this dead simple – like the GoPro camera of VR mics. A 4-capsule ambisonic mic plus a 6-axis motion sensor with automatic positioning and level detection promises to make this the set-it-and-forget-it solution. And to make this more mobile, the encoding and recording are handled on the device itself. Record ambisonics, stereo binaural, or just use it like a normal stereo mic, all controlled onboard with buttons or using an iOS device as a remote. Your recording is saved on SD cards, even with slate tone and metadata. And you can monitor the 3D sound, sort of, using stereo binaural output of the ambisonic signal (not perfect, but you’ll get the idea).

Who it’s for: YouTube stars wanting to go 3D, obviously, plus one-stop live streaming and music streaming and recording. The big question mark here to me is what’s sacrificed in quality for the low price, but maybe that’s a feature, not a bug, given this area is so new and people want to play around.

https://www.zoom-na.com/products/field-video-recording/field-recording/zoom-h3-vr-handy-recorder

ZYLIA

Maker: ZYLIA, a Polish startup that funded its first run on Indiegogo last year. But the electronics inside come from Infineon, the German semiconductor giant that spun off of Siemens.

Cost: US$1199 list (Pro) / $699 for the basic model

The pitch: This futuristic football contains 19 mic capsules, versus the four to eight above. But the idea isn’t necessarily VR – instead, Zylia claims they use this technology to automatically separate sound sources from this single device. In other words, put the soccer ball in your studio, and the software separates out your drums, keys, and vocalist. Or get the Pro model and capture 3rd-order ambisonics – with more spatial precision than the other offerings here, if it works as advertised.

Who it’s for: Musicians wanting a new-fangled solution for multichannel recording from just one mic (on the basic model), useful for live recording and education, or people doing 3D recordings wanting the same plug-and-play simplicity and more spatial information.

Oh yeah, also – 69dB signal-to-noise ratio is nothing to sneeze at.
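
Zylia doesn’t publish how its source separation works, and real separation is far more involved, but the basic intuition behind pulling individual instruments out of a single multi-capsule array is beamforming: time-align the capsules toward one direction so that source adds up coherently while everything else partially cancels. Here’s a minimal delay-and-sum sketch in Python – the geometry and names are purely illustrative, not Zylia’s actual algorithm:

```python
import numpy as np

def delay_and_sum(signals, mic_positions, direction, sample_rate, c=343.0):
    """Steer a mic array toward one direction by time-aligning capsules.

    signals:       (num_mics, num_samples) array of capsule recordings
    mic_positions: (num_mics, 3) capsule coordinates in metres
    direction:     unit vector pointing from the array toward the source
    """
    num_mics, num_samples = signals.shape
    # How many samples earlier each capsule hears a plane wave from `direction`.
    delays = (mic_positions @ direction) / c * sample_rate
    freqs = np.fft.rfftfreq(num_samples)  # cycles per sample
    out = np.zeros(num_samples)
    for m in range(num_mics):
        # Apply a fractional-sample delay as a phase shift, then sum.
        spectrum = np.fft.rfft(signals[m]) * np.exp(-2j * np.pi * freqs * delays[m])
        out += np.fft.irfft(spectrum, n=num_samples)
    return out / num_mics
```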

Pro Tools Expert did a review late last year, though I think we’ll soon need a more complete review of the 3D applications.

http://www.zylia.co/

What did we miss? With this area growing fast, plenty, I suspect, so sound off. This is one big area in mics to watch, for sure – and the latest example that software processing and intelligence will continue to transform music and audio hardware, even if the fundamental hardware components remain the same.

And, uh, I guess we’ll all soon wind up like this guy?

(Photo source, without explanation, is the very useful archives of the ambisonics symposium.)


Watch The Black Madonna DJ live from … inside a video game

Algorithmic selection, soulless streaming music, DJ players that tell you what to play next and then do it for you… let’s give you an alternative – a much more fun and futuristic future. Let’s watch The Black Madonna DJ from inside a video game.

This is some reality-bending action here. The Black Madonna, an actual human, played an actual DJ set in an actual club, as that entire club set was transformed into a virtual rendition. That in turn was then streamed as a promotion via Resident Advisor. Eat your heart out, Boiler Room. Just pointing cameras at people? So last decade.

From Panorama Bar to afterhours in the uncanny valley:

This has less to do with CDM, but… I enjoy watching the trailer about the virtual club, just because I seriously never get tired of watching Marea punching a cop. (Create Digital Suckerpunches?)

Um… apologies to members of law enforcement for that. Just a game.

So, back to why this is significant.

First, I actually think The Black Madonna doesn’t get nearly the credit she deserves for how she’s been able to make her personality translate across the cutthroat-competitive electronic music industry of the moment. There’s something to learn from her approach – from the fact that she’s relatable, both as she plays and in her outspoken public persona.

And somehow, seeing The Black Madonna go all Andy Serkis here puts that into relief. (See video at bottom.) I mean, what better metaphor is there for life in the 21st century? You have to put on a weird, uncomfortable, hot suit, then translate all the depth of your humanness into a virtual realm that tends to strip you of dimensions, all in front of a crowd of strangers online you can’t see. You have to be uncannily empathic inside the uncanny valley. A lot of people see the apparent narcissism on social media and assume they’re witnessing a solution to the formula, when in fact it may simply be a sign of desperation.

Marea isn’t the only DJ to appear in the Grand Theft Auto series, but she’s the one who actually seems to establish herself as a character in the game.

To put it bluntly: whatever you think of The Black Madonna, take this as a license to ignore the people who try to stop you from being who you are. It’s not going to get you success, but it is going to allow you to be human in a dehumanizing world.

And then there’s the game itself, now a platform for music. Rockstar Games have long been incurable music nerds – yeah, our people. That’s why you hear well curated music playlists all over the place, as well as elaborate interactive audio and music systems for industry-leading immersion. They’re nerds enough that they’ve even made some side trips like trying to make a beat production tool for the Sony PSP with Timbaland. (Full disclosure: I consulted on an educational program around that.)

This is unquestionably a commercial, mass market platform, but it’s nonetheless a pretty experimental concept.

Yes, yes – lots of flashbacks to the days of Second Life and its fledgling attempts to work as a music venue.

The convergence of virtual reality tech, motion capture, and virtual venues on one hand with music, the music industry, and unique electronic personalities on the other I think is significant – even if only as a sign of what could be possible.

I’m talking now to Rockstar to find out more about how they pulled this off. Tune in next time as we hopefully get a behind-the-scenes look at what this meant for the developers and artists.

While we wait on that, let’s nerd out with Andy Serkis about motion capture performance technique:


These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator and Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re also helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper with the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork go into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking at music as an isolated element, and connecting it to architecture and memory.)

“We were talking about imagining sound. Sounds from memories, sounds from everyday life, and unheard sounds. Later on we started to create sonic events just with words, which we translated into some tracks. “Drawing from Memory” is a sonic interpretation of one of those sound / word pieces. FIELDS now makes it possible to unfold the individual parts of this composition and frees it at the same time from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.”

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.


The best news for iOS, macOS musicians and artists from WWDC

Apple’s WWDC, while focused on developers, tends to highlight consumer features of its OS – not so much production stuff. But as usual, there are some tidbits for creative Mac and iOS users.

Here’s the stuff that looks like good news, at least in previews. Note that Apple tends to focus on just major new features they want to message, so each OS may reveal more in time.

iOS 12

Performance. On both iPad and iPhone, Apple is promising big performance optimizations. They’ve made it sound like they’re particularly targeting older devices, which should come as welcome news to users finding their iThings feel sluggish with age. (iPhone 5s and iPad Air onwards get the update.)

A lot of this has to do with responsiveness when launching apps or bringing up the keyboard or camera, so it may not directly impact audio apps – most of which do their heavy work at a pretty low level. But it’s nice to see Apple improve the experience for long-term owners, not just show off things that are new. And even as Android devices boast high-end specs on paper, that platform still lags iOS badly when it comes to things like touch response or low-latency audio.

Smoother animation is also a big one.

Augmented reality. Apple has updated their augmented reality to ARKit 2. These are the tools that let you map 3D objects and visualizations to a real-world camera feed – it basically lets you hold up a phone or tablet instead of donning goggles, and mix the real world view with the virtual one.

New for developers: persist your augmented reality between sessions (without devs having to do that themselves), object detection and tracking, and multi-user support. They’ve also unveiled a new file format for AR objects, USDZ.

I know AR experimentation is already of major interest to digital artists. The readiness of iOS as a platform means they have a canvas for those experiments.

There are also compelling music and creative applications, some still to be explored. Imagine using an augmented reality view to help visualize spatialized audio. Or use a camera to check out how a modular rack or gear will fit in your studio. And there are interesting possibilities in education. (Think a 3D visualization of acoustic waves, for instance.)

Both augmented reality and virtual reality offer some new immersive experiences musicians and artists are sure to exploit. Hey, no more playing Dark Side of the Moon to The Wizard of Oz; now you can deliver an integrated AV experience.

Google’s Android and Apple are neck and neck here, but because Apple delivers updates faster, they can rightfully claim to be the largest platform for the technology. (They also have devices: iPhone SE / 6s and up and 5th generation iPad and iPads Pro all work.) Google’s challenge here I think is really adoption.

Apple’s also pretty good at telling the story here:
https://www.apple.com/ios/augmented-reality/

That said, Google has some really compelling 3D audio solutions – more on this landscape soon, on both platforms.

Real Do Not Disturb. This is overdue, but I think a major addition for those of us wanting to focus on music on iOS and not have to deal with notifications.

Siri Shortcuts. This is a bit like the third-party power app Workflow; it allows you to chain activities and apps together. I expect that could be meaningful to advanced iOS users; we’ll just have to see more details. It could mean, for instance, handy record + process audio batches.

Voice Memos on iPad. I know a lot of musicians still use this so – now you’ve got it in both places, with cloud sync.

https://www.apple.com/ios/ios-12-preview/features/

macOS 10.14 Mojave

Dark Mode. Finally. And another chance to keep screens from blaring at us in studios or onstage – though Windows and Linux users have this already, of course.

Improved Finder. This is more graphics oriented than music oriented, of course – but creative users in general will appreciate the complete metadata preview pane.

Also nice: Quick Actions, which also support the seldom-used, ill-documented, but kind of amazing Automator. Automator also has a lot of audio-specific actions with some apps; it’s worth checking out.

There are also lots of nice photo and image markup tools.

Stacks. Iterations of this concept have been around since the 90s, but finally we see it in an official Apple OS release. Stacks organize files on your desktop automatically, so you don’t have a pile of icons everywhere. Apple got us in this mess in the 80s (or is that Xerox in the 70s) but … finally they’re helping us dig out again.

App Store and the iOS-Mac ecosystem. Apple refreshing their App Store may be a bigger deal than it seems. A number of music developers are seeing big gains on Apple mobile platforms – and they’re trying to leverage that success by bringing apps to desktop Mac, something that the Windows ecosystem really doesn’t provide. It sounds like Intua, creators of BeatMaker, might even have a desktop app in store.

And having a better App Store means that it’s more likely developers will be able to sell their apps – meaning more Mac music apps.

https://www.apple.com/macos/mojave-preview/

That’s about it

There’s of course a lot more to these updates, but more on either the developer side or consumer things less relevant to our audience.

The big question for Apple remains – what is their hardware roadmap? The iPad has no real rivals apart from shifting focus to Windows-native tablets like the Surface, but the Mac has loads of competition for visual and music production.

Generally, I don’t know that either Windows or macOS can deliver a lot for pro users in these kinds of updates. We’re at a mature, iterative phase for desktop OSes. But that’s okay.

Now, what we hope as always is that updates don’t just break our existing stuff. Case in point: Apple moving away from OpenCL and OpenGL.

But even there, as one reader comments, hardware is everything. Apple dropping OpenCL isn’t as big to some developers and artists as the fact that you can’t buy a machine with an NVIDIA card in it.

Well, we’ll be watching. And as usual, anything that may or may not break audio and music tools, we’ll find out only closer to release.


SPAT Revolution announces integration with Avid VENUE | S6L system

This year at InfoComm 2018, Flux and Avid will be previewing the integration of the highly acclaimed SPAT Revolution software with Avid’s VENUE | S6L systems, creating a powerful and innovative real-time multichannel immersive 3D-audio platform for Live, Theatrical, and Installed Sound applications. Avid has introduced a number of highly anticipated workflow enhancements to their […]

Unreal game engine’s modular sound features explained: video

Unreal Engine may be built for games, but under the hood, it’s got a powerful audio, music, and modular synthesis engine. Its lead audio programmer explained it all this afternoon in a livestream from HQ.

Now a little history: back when I first met Aaron McLeran, he was at EA and working with Brian Eno and company on Spore. Generative music in games and dreams of real interactive audio engines to drive it have some history. As it happens, those conversations indirectly led us to create libpd. But that’s another story.

Aaron has led an effort to build real synthesis capabilities into Unreal. That could open a new generation of music and sound for games, enabling scores that are more responsive to action and scale better to immersive environments (including VR and AR). And it could mean that Unreal itself becomes a tool for art, even without a game per se, by giving creators access to a set of tools that handle a range of 3D visual and sound capabilities, plus live, responsive sound and music structures, on the cheap. (Getting started with Unreal is free.)

I’ll write about this more soon, but here’s what they cover in the video:

  • Submix graph and source rendering (that’s how your audio bits get mixed together)
  • Effects processing
  • Realtime synthesis (which is itself a modular environment)
  • Plugin extensions

Aaron is joined by Community Managers Tim Slager and Amanda Bott.

I’m just going to put this out there —

— and let you ask CDM some questions. (Or let us know if you’re using Unreal in your own work, as an artist, or as a sound designer or composer for games!)

Forum topic with the stream:

Unreal Engine Livestream – Unreal Audio: Features and Architecture – May 24 – Live from Epic HQ


Free new tools for Live 10 unlock 3D spatial audio, VR, AR

Envelop began life by opening a space for exploring 3D sound, directed by Christopher Willits. But today, the nonprofit is also releasing a set of free spatial sound tools you can use in Ableton Live 10 – and we’ve got an exclusive first look.

First, let’s back up. Listening to sound in three dimensions is not just some high-tech gimmick. It’s how you hear naturally with two ears. The way that actually works is complex – the Wikipedia overview alone is dense – but close your eyes, tilt your head a little, and listen to what’s around you. Space is everything.

And just as in the leap from mono to stereo, space can change a musical mix – it allows clarity and composition of sonic elements in a new way, which can transform its impact. So it really feels like the time is right to add three dimensions to the experience of music and sound, personally and in performance.

Intuitively, 3D sound seems even more natural than visual counterparts. You don’t need to don weird new stuff on your head, or accept disorienting inputs, or rely on something like 19th century stereoscopic illusions. Sound is already as ephemeral as air (quite literally), and so, too, is 3D sound.

So, what’s holding us back?

Well, stereo sound required a chain of gear, from delivery to speaker. But those delivery mechanisms are fast evolving for 3D, and not just in terms of proprietary cinema setups.

But stereo audio also required something else to take off: mixers with pan pots. Stereo effects. (Okay, some musicians still don’t know how to use this and leave everything dead center, but that only proves my point.) Stereo only happened because tools made its use accessible to musicians.

Looking at something like Envelop’s new tools for Ableton Live 10, you see something like the equivalent of those first pan pots. Add some free devices to Live, and you can improvise with space, hear the results through headphones, and scale up to as many speakers as you want, or deliver to a growing, standardized set of virtual reality / 3D / game / immersive environments.

And that could open the floodgates for mixing music in 3D. (Maybe it could even open your own floodgates there.)

Envelop tools for Live 10

Today, Envelop for Live (E4L) has hit GitHub. It’s not a completely free setup – you need the full version of Ableton Live Suite, Live 10 minimum (since that’s what provides the requisite multichannel audio plumbing). Provided you’re working from that as a base, though, musicians get a set of Max for Live-powered devices for working with spatial audio production and live performance, and developers get a set of tools for creating their own effects.

Start here for the download:

http://www.envelop.us/software/

See also the more detailed developer site:

https://github.com/EnvelopSound/EnvelopForLive/

Read an overview of the system, and some basic explanations of how it works (including some definitions of 3D sound terminology):

https://github.com/EnvelopSound/EnvelopForLive/wiki/System-Overview

And then find a getting started guide, routing, devices, and other reference materials on the wiki:

https://github.com/EnvelopSound/EnvelopForLive/wiki

It’s beautiful, elegant software – the friendliest I’ve seen yet to take on spatial audio, and very much in the spirit of Ableton’s own software. Kudos to core developers Mark Slee, Roddy Lindsay, and Rama Gotfried.

Here’s the basic idea of how the whole package works.

Output. There’s a Master Bus device that stands in for your output buses. It decodes your spatial audio, and adapts routing to however many speakers you’ve got connected – whether that’s just your headphones or four speakers or a huge speaker array. (That’s the advantage of having a scalable system – more on that in a moment.)

Sources. Live 10’s Mixer may be built largely with the idea of mixing tracks down to stereo, but you probably already think of it sort of as a set of particular musical materials – as sources. The Source Panner device, added to each track, lets you position that particular musical/sonic entity in three-dimensional space.

Processors. Any good 3D system needs not only 3D positioning, but also separate effects and tools – because normal delays, reverbs, and the like presume left/right or mid/side stereo output. (Part of what completes the immersive effect is hearing not only the positioning of the source, but reflections around it.)

In this package, you get:
  • Spinner: automates motion in 3D space horizontally and with vertical oscillations
  • B-Format Sampler: plays back existing Ambisonics wave files (think samples with spatial information already encoded in them)
  • B-Format Convolution Reverb: imagine a convolution reverb that works with three-dimensional information, not just two-dimensional – in other words, exactly what you’d want from a convolution reverb
  • Multi-Delay: cascading, three-dimensional delays out of a mono source
  • HOA Transform: without explaining Ambisonics, this basically molds and shapes the spatial sound field in real-time
  • Meter: Spatial metering. Cool.

Spinner, for automating movement.

Spatial multi-delay.

Convolution reverb, Ambisonics style.

Envelop SF and Envelop Satellite venues also have some LED effects, so you’ll find some devices for controlling those (which might also be useful templates for stuff you’re doing).

All of this spatial information is represented via a technique called Ambisonics. Basically, any spatial system – even stereo – involves applying some maths to determine relative amplitude and timing of a signal to create particular impressions of space and depth. What sets Ambisonics apart is, it represents the spatial field – the sphere of sound positions around the listener – separately from the individual speakers. So you can imagine your sound positions existing in some perfect virtual space, then being translated back to however many speakers are available.

This scalability really matters. Just want to check things out with headphones? Set your master device to “binaural,” and you’ll get a decent approximation through your headphones. Or set up four speakers in your studio, or eight. Or plug into a big array of speakers at a planetarium or a cinema. You just have to route the outputs, and the software decoding adapts.
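
A minimal sketch of why that scalability works: a mono source gets encoded once into first-order B-format, and only the decode stage knows anything about speakers. Here’s a naive projection decode to an evenly spaced horizontal ring in Python – Envelop’s actual decoders are more sophisticated (handling elevation and binaural output), and the function names here are purely illustrative:

```python
import numpy as np

def encode_first_order(mono, azimuth, elevation=0.0):
    """Pan a mono signal into first-order B-format (W, X, Y, Z)."""
    W = mono / np.sqrt(2)                              # omni component
    X = mono * np.cos(azimuth) * np.cos(elevation)
    Y = mono * np.sin(azimuth) * np.cos(elevation)
    Z = mono * np.sin(elevation)
    return np.stack([W, X, Y, Z])

def decode_to_ring(bformat, num_speakers):
    """Naive projection decode to an evenly spaced horizontal speaker ring."""
    angles = np.linspace(0, 2 * np.pi, num_speakers, endpoint=False)
    W, X, Y, Z = bformat
    feeds = []
    for a in angles:
        # Each speaker feed is effectively a "virtual mic" aimed at that speaker.
        feeds.append((np.sqrt(2) * W + np.cos(a) * X + np.sin(a) * Y) / num_speakers)
    return np.stack(feeds)

# The same encoded material can feed 4 studio monitors or a 24-speaker ring:
# quad = decode_to_ring(encode_first_order(signal, azimuth=np.pi / 4), 4)
# ring24 = decode_to_ring(encode_first_order(signal, azimuth=np.pi / 4), 24)
```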

Envelop is by no means the first set of tools to help you do this – the technique dates back to the 70s, and various software implementations have evolved over the years, many of them free – but it is uniquely easy to use inside Ableton Live.

Open source, standards

Free software. It’s significant that Envelop’s tools are available as free and open source. Max/MSP, Max for Live, and Ableton Live are proprietary tools, but the patches and externals exist independently, and a free license means you’re free to learn from or modify the code and patches. Plus, because they’re free in cost, you can share your projects across machines and users, provided everybody’s on Live 10 Suite.

Advanced Max/MSP users will probably already be familiar with the basic tools on which the Envelop team have built. They’re the work of the Institute for Computer Music and Sound Technology (ICST), at the Zürcher Hochschule der Künste in Zurich, Switzerland. ICST have produced a set of open source externals for Max/MSP:

https://www.zhdk.ch/downloads-ambisonics-externals-for-maxmsp-5381

Their site is a wealth of research and other free tools, many of them additionally applicable to fully free and open source environments like Pure Data and Csound.

But Live has always been uniquely accessible for trying out ideas. Building a set of friendly Live devices takes these tools and makes them make more sense in the Live paradigm.

Non-proprietary standards. There’s a strong push to proprietary techniques in spatial audio in the cinema – Dolby, for instance, we’re looking at you. But while proprietary technology and licensing may make sense for big cinema distributors, it’s absolute death for musicians, who likely want to tour with their work from place to place.

The underlying techniques here are all fully open and standardized. Ambisonics work with a whole lot of different 3D use cases, from personal VR to big live performances. By definition, they don’t define the sound space in a way that’s particular to any specific set of speakers, so they’re mobile by design.

The larger open ecosystem. Envelop will make these tools new to people who haven’t seen them before, but it’s also important that they share an approach, a basis in research, and technological compatibility with other tools.

That includes the German ZKM’s Zirkonium system, HoaLibrary (that repository is deprecated but links to a bunch of implementations for Pd, Csound, OpenFrameworks, and so on), and IRCAM’s SPAT. All these systems support ambisonics – some support other systems, too – and some or all components include free and open licensing.

I bring that up because I think Envelop is stronger for being part of that ecosystem. None of these systems requires a proprietary speaker delivery system – though they’ll work with those cinema setups, too, if called upon to do so. Musical techniques, and even some encoded spatial data, can transfer between systems.

That is, if you’re learning spatial sound as a kind of instrument, here you don’t have to learn each new corporate-controlled system as if it’s a new instrument, or remake your music to move from one setting to another.

Envelop, the physical version

You do need compelling venues to make spatial sound’s payoff apparent – and Envelop are building their own venues for musicians. Their Envelop SF venue is a permanent space in San Francisco, dedicated to spatial listening and research. Envelop Satellite is a mobile counterpart to that, which can tour festivals and so on.

Envelop SF: 32 speakers in total, including overheads – 24 speakers set in 3 rings of 8 (the speakers in the columns), plus 4 subs and 4 ceiling speakers. (28.4)

Envelop Satellite: 28 speakers. 24 in 3 rings + 4 subs (overhead speakers coming soon) (24.4)

The competition, as far as venues: 4DSOUND and Berlin’s Monom, which houses a 4DSOUND system, are similar in function, but use their own proprietary tools paired with the system. They’ve said they plan a mobile system, but no word on when it will be available. The Berlin Institute of Sound and Music’s Hexadome uses off-the-shelf ZKM and IRCAM tools and pairs them with projection surfaces. It’s a mobile system by design, but there’s nothing particularly unique about its sound array or toolset. In fact, you could certainly use Envelop’s tools with any of these venues, and I suspect some musicians will.

There are also many multi-speaker arrays housed in music venues, immersive audiovisual venues, planetariums, cinemas, and so on. So long as you can get access to multichannel interfacing with those systems, you could use Envelop for Live with all of these. The only obstacle, really, is whether these venues embrace immersive, 3D programming and live performance.

But if you thought you had to be Brian Eno to get to play with this stuff, that’s not likely to be the situation for long.

VR, AR, and beyond

In addition to venues, there’s also a growing ecosystem of products for production and delivery, one that spans musical venues and personal immersive media.

To put that more simply: after well over a century of recording devices and production products assuming mono or stereo, now they’re also accommodating the three dimensions your two ears and brain have always been able to perceive. And you’ll be able to enjoy the results whether you’re on your couch with a headset on, or whether you prefer to go out to a live venue.

Ambisonics-powered products now include Facebook 360, Google VR, Waves, GoPro, and others, with more on the way, for virtual and augmented reality. So you can use Live 10 and Envelop for Live as a production tool for making music and sound design for those environments.

Steinberg are adopting ambisonics, too (via Nuendo). Here’s Waves’ guide – they now make plug-ins that support the format, and this is perhaps easier to follow than the Wikipedia article (and relevant to Envelop for Live, too):

https://www.waves.com/ambisonics-explained-guide-for-sound-engineers

Ableton Live with Max for Live has served as an effective prototyping environment for audio plug-ins, too. So developers could pick up Envelop for Live’s components, try out an idea, and later turn that into other software or hardware.

I’m personally excited about these tools and the direction of live venues and new art experiences – well beyond what’s just in commercial VR and gaming. And I’ve worked enough on spatial audio systems to at least say, there’s real potential. I wouldn’t want to keep stereo panning to myself, so it’s great to get to share this with you, too. Let us know what you’d like to see in terms of coverage, tutorial or otherwise, and if there’s more you want to know from the Envelop team.

Thanks to Christopher Willits for his help on this.

More to follow…

http://envelop.us

https://github.com/EnvelopSound/EnvelopForLive/

Further reading

Inside a new immersive AV system, as Brian Eno premieres it in Berlin [Extensive coverage of the Hexadome system and how it works]

Here’s a report from the hacklab on 4DSOUND I co-hosted during Amsterdam Dance Event in 2014 – relevant to these other contexts, having open tools and more experimentation will expand our understanding of what’s possible, what works, and what doesn’t work:

Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND

And some history and reflection on the significance of that system:
Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos]

Plus, for fun, here’s Robert Lippok [Raster] and me playing live on that system and exploring architecture in sound, as captured in a binaural recording by Frank Bretschneider [also Raster] during our performance for 2014 ADE. Binaural recording of spatial systems is really challenging, but I found it interesting in that it created its own sort of sonic entity. Frank’s work was just on the Hexadome.

One thing we couldn’t easily do was move that performance to other systems. Now, this begins to evolve.


How to try GPU-accelerated live visuals in a few steps, for free

The growing power of gaming architectures for visuals has a side benefit: it can produce elaborate visuals without touching the CPU, which is busy on musicians’ machines dealing with sound.

But how do you go about exploring some of that power? The code language spoken natively by the GPU is a little frightening at first. Fortunately, you can actually have a play in a few minutes. It’s easy enough that I prepared this lightning tutorial:

I shared this with the #RazerMusic program as it’s in fact a good artistic application for laptops with gaming architectures – and it’s terrific having that NVIDIA GTX 1060 with 6 GB of memory. (This example can’t even begin to show that off, in fact.) These steps will work on the Mac, too, though.

I’m stealing a demo here. Isadora creator Mark Coniglio showed off his team’s GLSL support more or less like this when they unveiled the feature at the Isadora Werkstatt a couple of summers ago. But Isadora itself – while known among a handful of live visualists and people working with dance and theater tech – is, I think, underrated. And sure enough, this support makes the powers of GLSL friendly to non-programmers. You can grab some shader code and then modify parameters or combine it with other effects, modular style, without delving into the code itself. Or if you are learning (or experienced, even) with GLSL, Isadora provides an uncommonly convenient environment to work with graphics-accelerated generative visuals and effects.

If you’re not quite ready to commit to the tool, Isadora has a fully functioning demo version so you can get this far – and look around and decide if buying a license is right for you. What I do like about it is, apart from some easy-to-use patching powers, Isadora’s scene-based architecture works well in live music, theater, dance, and other performance arts. (I still happily use it alongside stuff like Processing, Open Frameworks, and Touch Designer.)

There is a lot of possibility here. And if you dig around, you’ll see pretty radically different aesthetics are possible, too.

Here’s an experiment also using mods to the GLSL facility in Isadora, by Czech artist Gabriela Prochazka (as I jam on one of my tunes live).

Resources:

https://troikatronix.com/

https://www.shadertoy.com/

Planning to do more like this, so open to requests!


Inside a new immersive AV system, as Brian Eno premieres it in Berlin

“Hexadome,” a new platform for audiovisual performance and installation, began a world-hopping tour with its debut today – with Brian Eno and Peter Chilvers as the opening act.

I got the chance to go behind the scenes in discussion with the organizing team, as well as some of the artists, to try to understand both how the system works technically and what the artistic intention of launching a new delivery platform is.

Brian Eno and Peter Chilvers present the debut work on the system – from earlier today. Photo courtesy ISM.

It’s not that immersive projection and sound is anything new in itself. Even limiting ourselves to the mechanical/electronic age, there’s of course been a long succession of ideas in panoramic projection, spatialized audio, and temporary and permanent architectural constructions. You’ve got your local cineplex, too. But as enhanced 3D sound and image is more accessible from virtual and augmented reality on personal devices, the same enhanced computational horsepower is also scaling to larger contexts. And that means if you fancy a nice date night instead of strapping some ugly helmet on your head, there’s hope.

But if 3D media tech is as ubiquitous as your phone, cultural venues haven’t kept up. Here in Germany, there are a number of big multichannel arrays. But they’ve tended to be limited to institutions – planetariums, academies, and a couple of media centers. So art has remained somewhat frozen in time, limited to single cinematic projections and stereo sound. The projection can get brighter, the sound can get louder, but very often those parameters stay the same. And that keeps artists from using space in their compositions.

A handful of spaces are beginning to change that around the world. An exhaustive survey I’ll leave for another time, but here in Germany, we’ve already got the Zentrum für Kunst und Medien Karlsruhe (ZKM) in Karlsruhe, and the 4DSOUND installation Monom in Berlin, each running public programs. (In 2014, I got to organize an open lab on the 4DSOUND while it was in Amsterdam at ADE, while also making a live performance on the system.)

The Hexadome is the new entry, launching this week. What makes it unique is that it couples visuals and sound in a single installation that will tour. Then it will make a round trip back to Berlin where the long-term plan is to establish a permanent home for this kind of work. It’s the first project of an organization dubbing itself the Institute for Sound and Music, Berlin – with the hope that name will someday grace a permanent museum dedicated to “sound, immersive arts, and electronic music culture.”

For now, ISM just has the Hexadome, so it’s parked in the large atrium of the Martin Gropius Bau, a respected museum in the heart of town.

And it’s launching with a packed program – a selection of installation-style pieces, plus a series of live audiovisual performances. On the live program:

Michael Tan’s design for CAO.

Holly Herndon & Mathew Dryhurst
Tarik Barri
Lara Sarkissian & Jemma Woolmore
Frank Bretschneider & Pierce Warnecke
Ben Frost & MFO
Peter van Hoesen & Heleen Blanken
CAO & Michael Tan
René Löwe & Pfadfinderei

Brian Eno’s installation launches a series of works that simply play back on the system, though the experience is still similar – you wander in and soak in projected images and spatial sound. The other artists all contributed installation versions of their work, plus a collaboration between Tarik Barri and Thom Yorke.

But before we get to the content, let’s consider the system and how it works.

Hexadome technology

The two halves of the Hexadome describe what this is – it’s a hexagonal projection arrangement, plus a dome-shaped sound array.

I spoke to Holger Stenschke, Lead Support Technician, from ZKM Karlsruhe, as well as Toby Götz, director of the Pfadfinderei collective. (Toby doubles up here, as the designer of the visual installation, and as one of the visual artists.) So they filled me in both on technical details and the intention of the whole thing.

Projection. The visuals are the simpler part to describe. You get six square projection screens, arranged in a hexagon, with large gaps in between. These are driven by two new iMacs Pro – the current top of Apple’s range as of this launch – supplemented by still more external GPUs connected via Thunderbolt. MadMapper runs on the iMacs, and then the artists are free to fill all those pixels as needed. (Each screen is a little less than 4K resolution – so multiply that by six. Some shows will actually require both iMacs Pro.)

Jemma Woolmore shares this in-progress image of her visuals, as mapped to those six screens.

Sound. In the hemispherical sound array, there are 52 Meyer Sound speakers, arranged on a frame that looks a bit like a playground jungle gym. Why 52? Well, they’re arranged into a triangular tessellation around the dome. That’s not just to make this look impressive – it means that the sound dispersal from the speakers lines up in such a way that you cover the space with sound.

The speakers also vary in size. There are three subwoofers, spaced around the hexagonal periphery, bigger speakers with more bass toward the bottom, and smaller, lighter speakers overhead. In Karlsruhe, where ZKM has a permanent installation, more of the individual speakers are bigger. But the Hexadome is meant to be portable, so weight counts. I can also attest from long hours experimenting on 4DSOUND that for whatever reason, lower frequency sounds seem to make more sense to the ears closer to the ground, and higher frequency sounds overhead. There’s actually no obvious reason for this – researchers I’ve heard who investigated how we localize sound find there’s no significant difference in how well we can localize across frequency range. (Ever heard people claim it doesn’t matter where you put a subwoofer? They’re flat out wrong.) So it’s more an expectation of experience than anything else, presumably. (Any psychoacoustics researchers wanting to chime in on comments, feel free.)

Audio interfaces. MOTU are all over this rig, because of AVB. AVB is the standard (IEEE, no less) for pro audio networking, letting you run sound over Ethernet connections. AVB audio interfaces from MOTU are there to connect to an AVB network that drives all those individual speakers.

Sound spatialization software. Visualists here are pretty much on their own – your job is to fill up those screens. But on the auditory side, there’s actually some powerful and reasonably easy to understand software to guide the process of positioning sound in space.

It’s actually significant that the Hexadome isn’t proprietary. Whereas the 4DSOUND system uses its own bespoke software and various Max patches, the Hexadome is built on some standard tools.

Artists have a choice between IRCAM’s Panoramix and Spat, and ZKM’s Zirkonium.

IRCAM Spat.

ZKM Zirkonium – here, a screenshot of the work of Lara Sarkissian (in collaboration with Jemma Woolmore). Thanks, Lara, for the picture in progress! (The artists have been in residence at ZKM working away on this.)

On the IRCAM side, there’s not so much one toolchain as a bunch of smaller tools that work in concert. Panoramix is the standalone mixing tool an artist is likely to use, and it works with, for example, JACK (so you can pipe in sound from your software of choice). Then Spat comprises a Max/MSP implementation of IRCAM’s spatialization, perception, and reverb tools. Panoramix is deep software – you can choose, per sound source, among various spatialization techniques, and the reverb and other effects are capable of some terrific sonic results.

Zirkonium is what the artists on the Hexadome seemed to gravitate toward. (Residencies at ZKM offered mentorship on both tools.) It’s got a friendly, single UI, and it’s free and open source. (Its sound engine is actually built in Pure Data.)

Then it’s a matter of whether the works are made for an installation, in which case they’re typically rendered (“freezing” the spatialization information) and played back in Reaper, or if they’re played live. For live performance, artists might choose to control the spatialization engine by sending OSC data, and using some kind of tool as a remote control (an iPad, for example).
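
For the live case, the remote control can be anything that speaks OSC. As a rough illustration, here’s a tiny Python script using the python-osc package that orbits one source around the listener – the port number and the /source/1/xyz address are placeholders, since Zirkonium, Spat, and friends each define their own OSC namespace (check their documentation for the real addresses):

```python
import math
import time
from pythonosc.udp_client import SimpleUDPClient

# Placeholder host, port, and address pattern - consult the spatializer's
# own OSC documentation for the actual namespace it listens on.
client = SimpleUDPClient("127.0.0.1", 50036)

# Slowly orbit source 1 around the listener on the horizontal plane.
for step in range(3600):
    azimuth = (step % 360) * math.pi / 180.0
    x, y = math.cos(azimuth), math.sin(azimuth)
    client.send_message("/source/1/xyz", [x, y, 0.0])   # hypothetical address
    time.sleep(0.05)
```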

I’ve so far only heard Brian Eno’s piece (both the sound check the other day and the installation), but the spatialization is already convincing. Spatialization will always work best when there are limited reflections from the physical space. The more reflected sound reaches your ear, the harder it is to localize the sound source. (The inverse is true, as well: the reason adding reverberation to a part of a mix seems to make it distant in the stereo field is that you already recognize that you hear more direct sound from sources that are close and more reflected sound from sources that are far away.)
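
That direct-versus-reflected cue is easy to put rough numbers on. In a back-of-the-envelope model, direct sound falls off about 6 dB per doubling of distance while a diffuse reverberant field stays roughly constant, so their ratio tracks distance – which is part of why turning up a reverb send reads as “further away.” A quick sketch (the 4-meter critical distance is an arbitrary assumption for illustration):

```python
import numpy as np

# Direct level falls ~6 dB per doubling of distance (1/r in pressure), while
# an idealized diffuse reverberant field stays constant; the ratio is a distance cue.
distances = np.array([1.0, 2.0, 4.0, 8.0, 16.0])   # metres (illustrative)
critical_distance = 4.0                             # where direct == reverberant (assumed)

direct_db = -20 * np.log10(distances)               # direct level, 0 dB at 1 m
reverb_db = -20 * np.log10(critical_distance)       # diffuse field, constant
for r, d in zip(distances, direct_db):
    print(f"{r:5.1f} m  direct-to-reverberant ratio: {d - reverb_db:+6.1f} dB")
```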

Holger tells CDM that the team worked to mediate this effect by precisely positioning speakers in such a way that, once you’re inside the “dome” area, you hear mainly direct sound. In addition, a multichannel reverb like the IRCAM plug-in can be used to tune virtualized early reflections, making reverberation seem to emanate from beyond the dome.

In Eno’s work, at least, you have a sense of being enveloped in gong-like tones that emerge from all directions, across distances. You hear the big reverb tail of the building mixed in with that sound, but there’s a blend of virtual and real space – and there’s still a sense of precise distance between sounds in that hemispherical field.

That’s hard to describe in words, but think about the leap from mono to stereo. While mono music can be satisfying to hear, stereo creates a sense of space and makes individual elements more distinct. There’s a similar leap when you go to these larger immersive systems, and more so than the cartoonish effects you tend to get from cinematic multichannel – or even the wrap-around effects of four-channel.

What does it all mean?

Okay, so that’s all well and good. But everything I’ve described – multi-4K projection, spatial audio across lots of speakers – is readily available, with or without the Hexadome per se. You can actually go download Zirkonium and Panoramix right now. (You’ll need a few hundred euros if you want plug-in versions of all the fancy IRCAM stuff, but the rest is a free download, and ZKM’s software is even open source.) You don’t even necessarily need 50 speakers to try it out – Panoramix, for instance, lets you choose a binaural simulation for trying stuff out in headphones, even if it’ll sound a bit primitive by comparison.

The Hexadome for now has two advantages: one, this program, and two, the fact that it’s going mobile. Plus, it is a particular configuration.

The six square screens may at first seem unimpressive, at least in theory. You don’t get the full visual effect that you do from even conventional 180-degree panoramic projection, let alone the ability to fill your visual field as full domes or a big IMAX screen can do. Speaking to the team, though, I understood that part of the vision of the Hexadome was to project what Toby calls “windows.” And because of the brightness and contrast of each, they’re still stunning when you’re there with them in person.

This fits Eno’s work nicely, in that his collaboration with Peter Chilvers projects gallery-style images into the space, like slowly transforming paintings in light.

The gaps between the screens and above mean that you’re also aware of the space you’re in. So this is immersive in a different sense, which fits ISM’s goal of inserting these works in museum environments.

How that’s used in the other works, we’ll get to see. Projection it seems is a game of tradeoffs – domes give you greater coverage and real immersion, but distort images and also create reflections in both sound and light. (On the other hand, domes have also been in architectural use for centuries, as have rectangles, so don’t expect either to go away!)

The question “why” is actually harder to answer. There wasn’t a clear statement of mission from ISM and the Hexadome and its backers – this thing is what it is because they wanted it to be what it is, essentially. There’s no particular curatorial theme to the works. They touted some diversity of established and emerging artists. Though just about anyone may seem like they’re emerging next to Eno, that includes both local Berlin and international artists from Peru and the USA, among others, and a mix of ages and backgrounds.

The overall statement launching the Hexadome was therefore of a blank canvas, which will be approached by a range of people. And Eno/Chilvers made it literally seem a canvas, with brilliantly colored, color field-style images, filling the rectangles with oversized circles and strips of shifting color. Chilvers uses a slightly esoteric C++ raytracing engine, generating those images in realtime, in a digital, generative, modern take on the kind of effects found in Georges Seurat. Eno’s sounds were precisely what you’d expect – neutral chimes and occasional glitchy tune fragments, floating on their own or atop gentle waves of noise that arrive and recede like a tide. Organic if abstract tones resonated across the harmonic series, in groupings of fifths. Both image and sound are, in keeping with Eno’s adherence to stochastic ideas, produced in real-time according to a set of parameters, so that nothing ever exactly repeats. These are not ideas originated by Eno – stochastic processes and chance operations obviously have a long history – but as always, his rendition is tranquil and distinctively his own.

In a press conference this morning, Eno said he’d adjusted that piece until the one we heard was the fifth iteration – presumably an advantage of a system that’s controlled by a set of parameters. (Eno works with a set of software that allows this. You can try similar software, and read up on the history of the developers’ collaboration with the artist, at the Intermorphic site.)

What strikes me about beginning with Eno is that it sets a controlled tone for the installation. Eno/Chilvers’ aesthetic was at home on this system; the arrangement of screens fit the set of pictures, and Eno’s music is organic enough that when it’s projected into space, it seems almost naturally occurring.

And Eno in a press conference found a nice metaphor for justifying the connection of the Hexadome’s “windows” to the gallery experience. He noted that Chilvers’ subtly-shifting, ephemeral color compositions exploded the notion of painting or still image as something that could be consumed as a snapshot. Effectively, their work suggests the raison d’etre that the ISM curators seemed unable to articulate. The Hexadome is a canvas that incorporates time.

But that also raises a question. If spatial audio and immersive visuals have often been confined to institutions, this doesn’t so much liberate them as make a statement as to how museums can capitalize on deploying them. An inward-looking set of square images also seems firmly rooted in the museum frame (literally).

And the very fact that Eno’s work is so comfortable sets the stage for some interesting weeks.

Now we’ll see whether the coming lineup can find any subversive threads with the same setup, and in the longer run, what comes of this latest batch of installations. Will these sorts of setups incubate new ideas – especially as there’s a mix of artists and engineers engaged in the underlying tech? Or will spend-y installations like the Hexadome simply be a showy way to exhibit the tastes of big institutional partners? With some new names on that lineup for the coming weeks, I think we’ll at least get some different answers to where this could go. And looking beyond the Hexadome, the power of today’s devices to drive spatialization and visuals more easily means there’s more to come. Stay tuned.

Institute for Sound and Music, Berlin

Martin Gropius Bau: Program – ISM Hexadome

In coming installments, I’ll look deeper at some of these tools, and talk to some of the up-and-coming artists doing new things with the Hexadome.


IK Multimedia’s 3D Cab Room now FREE for all AmpliTube Custom Shop users

IK Multimedia has announced it has added 3D mic positioning and other Cab Room features for AmpliTube Custom Shop, the free version of their AmpliTube guitar amp and effects modelling software. Previously only available for AmpliTube 4 users, these premium features are now available at no extra cost to all users. AmpliTube delivers a hyper-realistic, […]