Explode your face with Detroit Underground’s AR mask, melt it with these releases

Being a label – hell, being a human – can feel pretty virtual these days. So let’s lean into that, huh? Face exploded – filter, check. Face melted – music, yes.


Imagine clubbing, space odyssey-style, with this pandemic-proof suit concept

If you can’t be rid of the pandemic, why not transform the clubber? A speculative project has you suiting up like you’re going to encounter Alien or ask to open the pod bay doors – but its futuristic features are all real and doable, right now.

Micrashell, announced this week, is the work of Production Club, a creative studio who specialize in immersive experiences and have worked with everyone from The Chainsmokers and Skrillex to Amazon. Of course, that also means that, like the rest of us in the arts, they’ve got time on their hands to ponder what to do when there’s no audience.

And the results are wild. This suit doesn’t just protect you from the virus. It also integrates a phone, the ability to vape (yeah, really) or sip on a drink, and reimagines how you might communicate and hear sound. Read on, because that includes the ability to mute people in a way that has to be the coolest made-up notion in sound since Get Smart‘s Cone of Silence.

It’s extreme, but like good speculative work, as you dig in, you find creative ideas that could lead somewhere.

Is it a workable solution to the current situation? Well, no, probably not, given that vital protective equipment isn’t even available to front-line health workers. But we’re already seeing strange science-fiction scenarios we wouldn’t have imagined before – and the victims of the 1918 pandemic certainly wouldn’t have envisioned digital tracking or drone surveillance on one hand, or sophisticated protein sequencing to produce faster vaccines on the other. So there are real ideas to be explored here, and there’s no question the global notion of what you wear has shifted, just as the pandemic a century ago inspired masks. So it’s worth pulling this apart and understanding why – and how – it could be made.

Mike808 (Miguel Risueño), Head of Innovations, tells us more.

CDM: You’ve done some major event production, of course. Can you tell us about the background that led to this work, and to this project?

Miguel: Jaja, thanks! My background is originally in music technology and audiovisual engineering. My events résumé started with designing the stage for my own DJ show 12 years ago. ^_^ From there, it kept escalating; we started Production Club in 2012, and that’s how I came to work with Zhu, Skrillex, Zedd, Martin Garrix, Chainsmokers, Notch (creator of Minecraft), Intel, Amazon, YouTube Gaming and some other cool cats.

How about this project? Obviously, it’s partly imaginative and speculative, which is great but – was there any consultation of people who could tell you a bit about how to make something like this work?

The idea was born in a brainstorming session where we tried to vomit as many ideas as we could on “how to solve this problem” – [that problem] being, the concert and event industry going to s***. Our background comes from “if it doesn’t exist, then go build it”, so that’s what we are doing.

The design – as you well said – is speculative and imaginative in nature, because that’s pretty much the only way we know to come up with big ideas. Production Club’s mentality is always that concept comes first, execution after. So far we have always been able to figure out how to successfully build our ideas, but of course, this is an especially ambitious one. I always remember a quote from the last Miyazaki movie [Hayao Miyazaki’s The Wind Rises] that said something like “Inspiration unlocks the future. Technology eventually catches up.” And I feel that perfectly synthesizes our mindset.

We consulted some third parties we believed necessary when designing the suit – mainly a doctor, a biologist, a sports scientist, a systems architect, and a fashion designer. The concept design team on the suit itself is also pretty badass and already known for some of our best designs (Sadgas, Juan Civera, Fran Zurita and this cat). Finally, we have a Technical Director in-house whose main responsibility is to figure out how to build what we design (scenic, lighting, production, automation, spatial, etc.) – so all those, plus a bunch of other Production Club badass members, helped to get where we are. [Check Sadgas’ site for more conceptual 3D work from this Spanish designer, including wild suits and designs.]

Oh yeah, there’s a sonic element, too? Sub resonators? Is this something like a more elaborate take on the SubPac (immersive audio system)? Or how does it work?

Yes! There’s a sonic element to this, and it’s one of the most important parts. The SubPac-ish system is cool, but what is most interesting is the audio and music processing. Since the suit can be used in a club with a pre-existing audio system or just on its own, we have defined three ways to listen to music.

One way is by using the integrated external mics to feed the internal speakers, kinda like when you go to the bank and talk to the cashier. Another one would be more like a silent disco, where the DJ or FOH [front of house sound] directly streams pure audio to the different Micrashell suits out there.

The final one is kind of a mix of the two — a direct feed from the DJ gets processed using spatial and psychoacoustic rules, so the audio that the user perceives feels more realistic and “club-like” even if the club doesn’t have a PA. At the end of the day, it’s a way of tricking the brain to make everything more real and immersive, as being inside of the suit will isolate you on some level.
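To make those three modes concrete, here’s a minimal sketch of how a suit’s audio router might switch between them. This is purely illustrative – Micrashell is a concept, none of this is Production Club code, and the naive convolution stands in for the real psychoacoustic room processing:

```python
from enum import Enum, auto

class ListenMode(Enum):
    PASSTHROUGH = auto()    # external mics feed the internal speakers, bank-teller style
    DIRECT_STREAM = auto()  # DJ/FOH feed streamed straight to the suit, silent-disco style
    VIRTUAL_ROOM = auto()   # direct feed plus spatial/psychoacoustic "club" processing

def convolve(signal, impulse):
    """Naive convolution; a stand-in for the real psychoacoustic room modeling."""
    out = [0.0] * (len(signal) + len(impulse) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse):
            out[i + j] += s * h
    return out

def route_audio(mode, mic_in, stream_in, room_response):
    """Return the buffer the wearer hears. All arguments are mono float lists."""
    if mode is ListenMode.PASSTHROUGH:
        return mic_in
    if mode is ListenMode.DIRECT_STREAM:
        return stream_in
    return convolve(stream_in, room_response)  # VIRTUAL_ROOM

# Example: the "virtual club" mode smears a dry feed with a toy room response.
print(route_audio(ListenMode.VIRTUAL_ROOM, [], [1.0, 0.0], [1.0, 0.3, 0.1]))
```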

Honestly, this clip is probably more relevant now than when it was made. I’m sure I’m far from the first person to reference it in the context of privacy or personal listening technology.

I saw some other specific sound features you’re proposing – machine learning analysis of sounds, or a “software system that allows you to control the audio levels of different sources individually”?

The part of this you can’t see is the fact that you can decide to “not listen” to somebody, based on certain rules that you define. It’s similar to privacy settings on a social network but with audio. So, for example, if you are dancing by yourself and don’t want to be bothered, you could create a rule where only your friends or friends of friends can talk to you. The voice signal is digitally controlled so we can do things like that. Of course there are other [use cases], like having different levels and processing paths for different people or music sources.
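That rules-based muting maps neatly onto ordinary access-control logic. Here’s a toy sketch of the idea in Python – the user names, rule names, and data layout are all invented for illustration, not taken from Micrashell’s actual system:

```python
# Hypothetical social graph; "voice_privacy" is each listener's chosen rule.
USERS = {
    "ana": {"friends": {"ben"}, "voice_privacy": "friends_of_friends"},
    "ben": {"friends": {"ana", "cam"}, "voice_privacy": "everyone"},
    "cam": {"friends": {"ben"}, "voice_privacy": "friends"},
}

def can_hear(listener_id, speaker_id, users=USERS):
    """Should the speaker's voice stream reach the listener, per the listener's rule?"""
    listener = users[listener_id]
    rule = listener["voice_privacy"]
    if rule == "everyone":
        return True
    if speaker_id in listener["friends"]:
        return True  # direct friends pass under both stricter rules
    if rule == "friends_of_friends":
        return any(speaker_id in users[f]["friends"] for f in listener["friends"])
    return False

def mix(listener_id, sources):
    """Per-source level control: sources maps id -> (level 0..1, mono buffer).
    Non-voice sources like the DJ feed always pass; voices are rule-filtered."""
    audible = {sid: (lvl, buf) for sid, (lvl, buf) in sources.items()
               if sid == "dj" or can_hear(listener_id, sid)}
    if not audible:
        return []
    out = [0.0] * max(len(buf) for _, buf in audible.values())
    for lvl, buf in audible.values():
        for i, x in enumerate(buf):
            out[i] += lvl * x
    return out

# "cam" only hears friends: ben's voice passes, ana's doesn't; the DJ always does.
print(mix("cam", {"dj": (0.8, [1.0, 1.0]), "ben": (1.0, [0.5, 0.5]), "ana": (1.0, [0.5, 0.5])}))
```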

Have you constructed any of these elements before, in other contexts? Are you building anything now? (Are you also making cloth masks like so many of us?)

Jajaja, didn’t have the time to make the masks myself, although I do love sewing. 

Regarding having built some of these items before… most of the ideas come from stuff that already exists, or that we have built before, or that we are positive that could be built. Creating a suit that could go “straight into production” was one of our main design constraints since day one. Funny enough, we already created a hi-tech suit for Skrillex’s show years ago, although it had nothing to do with this one. ^_^

In the days of audiences, the crew has also produced futuristic environments for stage shows. Look at all those unprotected people, so close together – wow. ZHU – ‘Dune Tour.’ Stage, lighting, and visual design by Production Club.

Had you already thought of partners who might be up for this, or who might be interested in the concept?

Currently, we are working on our own prototype, which is based on what we can get done in-house with our 3D printers (Form [from Formlabs] and Prusa), sewing machines, Arduino [hardware prototyping platform], and Unreal Engine [3D/graphics platform].

But this is actually the right step before anything else; otherwise we won’t be able to tell a potential fabricator or partner what’s really needed or where they’re f***ing up. >_< Fortunately, a lot of unexpected people and brands have already contacted us, but there’s not much I can say yet besides that.

Lastly, where are you now? How are you spending the lockdown, especially with events off for a while?

We have temporarily closed our design studio in DTLA [downtown Los Angeles] to comply with the social distancing orders. Physical events are cancelled for a while, yes – but that’s why we are working to bring ’em back soon! At this very specific moment, I’m blasting Noisia Radio in my home studio, it’s 1 am, and my two cats are fkn around trying to break shit — actually, one is sleeping, as that’s pretty much all he does, I have realized during this quarantine.

And full specs:

PHONE INTEGRATION
• Seamless integration and suit control based on smartphone app
• Connection provided to charge/recharge phones/devices

VOICE COMMUNICATION
• Wireless voice communication system based on physical proximity and orientation
• Privacy driven communication system based on user-defined rules for social interaction.
Options include:
– Everyone can speak to you
– Only certain groups of people (i.e. people in your contact list)
– Specific people you select
• Software system that allows you to control the audio levels of different sources individually (DJ,
ambiance, friend_1, friend_2, … friend_n)
• Voice subsystem that allows you to modify how your voice is presented/streamed to other
users in real time – think like AR filters for audio – for example vocoder, talkbox, octaver, pitch
modulation, etc.
• Internal (voice) and external (ambiance) microphones

SOUND SYSTEM & AUDIO PROCESSING
• Integrated, controllable internal speaker system that allows you to listen to live music in 3
modes:
– Directly streamed from the DJ/band (dry)
– As an emulation of the room’s sound based on psychoacoustics (wet)
– As a passthrough from the room, thanks to the suit’s embedded microphones
• Contact bass speaker cones integrated in the back area to transmit low frequencies under
150 Hz by direct contact with the user’s body

BASIC NEEDS & SUIT HANDLING
• “Top only” suit design allows the user to wear their normal clothes, use the toilet and engage in
intercourse without being exposed to respiratory risks
• Hand latch system to facilitate dressing and undressing the suit

FASHION ACCESSORIES & ADD-ONS
• Accessible NFC pouch
• Strap system allowing expandable garment to fit people of different sizes
• Quick attachment features across the pouch and suit allowing for add-ons and fashion
customization (i.e. patches, velcro, magnets, hooks)

SUPPLY SYSTEM
• Supply system based on partially disposable canisters allows users to vape and/or drink safely
from their suit. Drinks can be alcoholic, non-alcoholic, or a liquid meal replacement
• Snap system based on magnets and differentiated plug-in shape makes it easy to plug your
canister in the proper slot
• Remaining amount of drink and vape monitored through a canister-embedded RGB light and
smartphone app
• This system removes the possibility of being roofied, as the drink remains enclosed
inside a custom canister and is not exposed to external agents once the user starts
drinking
• This system allows for pre-made drinks, so long lines at the bar could be mitigated or
fully eliminated
• Supply nozzles are controlled from the smartphone app and have 4 modes each: clean, fully
deployed, fully retracted, and scratch mode (doubles as a stick for reaching different
parts of the face)

LIGHTING
User-customized monitoring and emotion-broadcast lighting system composed of
several groups of screens and addressable RGBWA SMD LEDs, serving as indicators of
the user’s mood, needs, warnings, messages, desires and more. For example, a rainbow
lighting chase effect across your suit can express joy, a static red light could
express “busy,” and a slowly shimmering green light could express an “idle” or
“resting” state

CAMERA
Pan + tilt camera system with RGB LED monitoring has three main functionalities:
– “Camera app” function as an added extra POV camera that connects with your phone to
take snaps and videos
– Proactive computer-vision safety recording, based on AI analysis of external agents; can
be set to record based on the system’s perceived level of threat, or on a “trigger
word” that starts a remote recording to a cloud platform
– “Chest eye” system that lets you see, in real time, anything your suit or helmet
subsystems might be occluding


GameSoundCon open call for speakers about video game music & sound design


The GameSoundCon conference for video game music and sound design has announced an open call for video game sound designers and game audio experts to speak at GSC in Los Angeles on October 29th & 30th, 2019. “This is one of my favorite parts of GameSoundCon!” says Executive Director Brian Schmidt. “Calling on experts to […]


AES announces 2018 International Conference on Audio for Virtual and Augmented Reality

The Audio Engineering Society has announced the second International Conference on Audio for Virtual and Augmented Reality, taking place August 20-22, 2018, at the DigiPen Institute of Technology, Redmond, WA. The conference and exhibition will bring together a community of influential research scientists, engineers, VR and AR developers, and content creators. The conference’s esteemed keynote […]

These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently – from game and PC companies talking up VR (including via Steam), to Facebook showing off Oculus (the Kickstarter-funded project it acquired), to Apple this week making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of Ryuichi Sakamoto collaborator Ami Yamasaki, or Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper into the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork go into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking at music as an isolated element, and connecting it to architecture and memory.)

We were talking about imagining sound. Sounds from memories, sounds from everyday life, and unheard sounds. Later on, we started to create sonic events just with words, which we translated into some tracks. “Drawing from Memory” is a sonic interpretation of one of those sound/word pieces. FIELDS now makes it possible to unfold the individual parts of this composition, and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.


The best news for iOS, macOS musicians and artists from WWDC

Apple’s WWDC, while focused on developers, tends to highlight consumer features of its OS – not so much production stuff. But as usual, there are some tidbits for creative Mac and iOS users.

Here’s the stuff that looks like good news, at least in previews. Note that Apple tends to focus on just major new features they want to message, so each OS may reveal more in time.

iOS 12

Performance. On both iPad and iPhone, Apple is promising big performance optimizations. They’ve made it sound like they’re particularly targeting older devices, which should come as welcome news to users finding their iThings feel sluggish with age. (iPhone 5s and iPad Air onwards get the update.)

A lot of this has to do with responsiveness when launching apps or bringing up the keyboard or camera, so it may not directly impact audio apps – most of which do their heavy work at a pretty low level. But it’s nice to see Apple improve the experience for long-term owners, not just show off things that are new. And even as Android devices boast high-end specs on paper, that platform still lags iOS badly when it comes to things like touch response or low-latency audio.

Smoother animation is also a big one.

Augmented reality. Apple has updated its augmented reality framework to ARKit 2. These are the tools that let you map 3D objects and visualizations to a real-world camera feed – it basically lets you hold up a phone or tablet instead of donning goggles, and mix the real-world view with the virtual one.

New for developers: persist your augmented reality between sessions (without devs having to do that themselves), object detection and tracking, and multi-user support. They’ve also unveiled a new file format for AR objects (USDZ).

I know AR experimentation is already of major interest to digital artists. The readiness of iOS as a platform means they have a canvas for those experiments.

There are also compelling music and creative applications, some still to be explored. Imagine using an augmented reality view to help visualize spatialized audio. Or use a camera to check out how a modular rack or gear will fit in your studio. And there are interesting possibilities in education. (Think a 3D visualization of acoustic waves, for instance.)

Both augmented reality and virtual reality offer some new immersive experiences musicians and artists are sure to exploit. Hey, no more playing Dark Side of the Moon to The Wizard of Oz; now you can deliver an integrated AV experience.

Google’s Android and Apple are neck and neck here, but because Apple delivers updates faster, it can rightfully claim to be the largest platform for the technology. (It also has the devices: iPhone SE/6s and up, plus the 5th-generation iPad and iPads Pro, all work.) Google’s challenge here, I think, is really adoption.

Apple’s also pretty good at telling the story here:
https://www.apple.com/ios/augmented-reality/

That said, Google has some really compelling 3D audio solutions – more on this landscape soon, on both platforms.

Real Do Not Disturb. This is overdue, but I think a major addition for those of us wanting to focus on music on iOS and not have to deal with notifications.

Siri Shortcuts. This is a bit like the third-party power app Workflow; it allows you to chain activities and apps together. I expect that could be meaningful to advanced iOS users; we’ll just have to see more details. It could mean, for instance, handy batched operations like record-then-process audio.

Voice Memos on iPad. I know a lot of musicians still use this – so now you’ve got it in both places, with cloud sync.

https://www.apple.com/ios/ios-12-preview/features/

macOS 10.14 Mojave

Dark Mode. Finally. And another chance to keep screens from blaring at us in studios or onstage – though Windows and Linux users have this already, of course.

Improved Finder. This is more graphics oriented than music oriented, of course – but creative users in general will appreciate the complete metadata preview pane.

Also nice: Quick Actions, which also support the seldom-used, ill-documented, but kind of amazing Automator. Automator also has a lot of audio-specific actions with some apps; it’s worth checking out.

There are also lots of nice photo and image markup tools.

Stacks. Iterations of this concept have been around since the 90s, but finally we see it in an official Apple OS release. Stacks organize files on your desktop automatically, so you don’t have a pile of icons everywhere. Apple got us into this mess in the 80s (or was that Xerox in the 70s?), but … finally they’re helping us dig out again.

App Store and the iOS-Mac ecosystem. Apple refreshing their App Store may be a bigger deal than it seems. A number of music developers are seeing big gains on Apple mobile platforms – and they’re trying to leverage that success by bringing apps to desktop Mac, something that the Windows ecosystem really doesn’t provide. It sounds like Intua, creators of BeatMaker, might even have a desktop app in store.

And having a better App Store means that it’s more likely developers will be able to sell their apps – meaning more Mac music apps.

https://www.apple.com/macos/mojave-preview/

That’s about it

There’s of course a lot more to these updates, but more on either the developer side or consumer things less relevant to our audience.

The big question for Apple remains – what is their hardware roadmap? The iPad has no real rivals, apart from a shift in focus to Windows-native tablets like the Surface; the Mac, though, has loads of competition for visual and music production.

Generally, I don’t know that either Windows or macOS can deliver a lot for pro users in these kinds of updates. We’re at a mature, iterative phase for desktop OSes. But that’s okay.

Now, what we hope as always is that updates don’t just break our existing stuff. Case in point: Apple moving away from OpenCL and OpenGL.

But even there, as one reader comments, hardware is everything. Apple dropping OpenCL isn’t as big to some developers and artists as the fact that you can’t buy a machine with an NVIDIA card in it.

Well, we’ll be watching. And as usual, anything that may or may not break audio and music tools, we’ll find out only closer to release.


Free new tools for Live 10 unlock 3D spatial audio, VR, AR

Envelop began life by opening a space for exploring 3D sound, directed by Christopher Willits. But today, the nonprofit is also releasing a set of free spatial sound tools you can use in Ableton Live 10 – and we’ve got an exclusive first look.

First, let’s back up. Listening to sound in three dimensions is not just some high-tech gimmick. It’s how you hear naturally with two ears. The way that actually works is complex – the Wikipedia overview alone is dense – but close your eyes, tilt your head a little, and listen to what’s around you. Space is everything.

And just as in the leap from mono to stereo, space can change a musical mix – it allows clarity and composition of sonic elements in a new way, which can transform its impact. So it really feels like the time is right to add three dimensions to the experience of music and sound, personally and in performance.

Intuitively, 3D sound seems even more natural than visual counterparts. You don’t need to don weird new stuff on your head, or accept disorienting inputs, or rely on something like 19th century stereoscopic illusions. Sound is already as ephemeral as air (quite literally), and so, too, is 3D sound.

So, what’s holding us back?

Well, stereo sound required a chain of gear, from delivery to speaker. But those delivery mechanisms are fast evolving for 3D, and not just in terms of proprietary cinema setups.

But stereo audio also required something else to take off: mixers with pan pots. Stereo effects. (Okay, some musicians still don’t know how to use this and leave everything dead center, but that only proves my point.) Stereo only happened because tools made its use accessible to musicians.

Looking at something like Envelop’s new tools for Ableton Live 10, you see something like the equivalent of those first pan pots. Add some free devices to Live, and you can improvise with space, hear the results through headphones, and scale up to as many speakers as you want, or deliver to a growing, standardized set of virtual reality / 3D / game / immersive environments.

And that could open the floodgates for mixing music in 3D. (Maybe it could even open your own floodgates there.)

Envelop tools for Live 10

Today, Envelop for Live (E4L) has hit GitHub. It’s not a completely free set of tools – you need the full version of Ableton Live Suite, and Live 10 is the minimum (since it provides the requisite multi-point audio plumbing). Provided you’re working from that as a base, though, musicians get a set of Max for Live-powered devices for working with spatial audio production and live performance, and developers get a set of tools for creating their own effects.

Start here for the download:

http://www.envelop.us/software/

See also the more detailed developer site:

https://github.com/EnvelopSound/EnvelopForLive/

Read an overview of the system, and some basic explanations of how it works (including some definitions of 3D sound terminology):

https://github.com/EnvelopSound/EnvelopForLive/wiki/System-Overview

And then find a getting started guide, routing, devices, and other reference materials on the wiki:

https://github.com/EnvelopSound/EnvelopForLive/wiki

It’s beautiful, elegant software – the friendliest I’ve seen yet to take on spatial audio, and very much in the spirit of Ableton’s own software. Kudos to core developers Mark Slee, Roddy Lindsay, and Rama Gotfried.

Here’s the basic idea of how the whole package works.

Output. There’s a Master Bus device that stands in for your output buses. It decodes your spatial audio, and adapts routing to however many speakers you’ve got connected – whether that’s just your headphones or four speakers or a huge speaker array. (That’s the advantage of having a scalable system – more on that in a moment.)

Sources. Live 10’s Mixer may be built largely with the idea of mixing tracks down to stereo, but you probably already think of it sort of as a set of particular musical materials – as sources. The Source Panner device, added to each track, lets you position that particular musical/sonic entity in three-dimensional space.
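Under the hood, that positioning comes down to a handful of trig formulas. Here’s the textbook first-order B-format encode, sketched in Python – this is not Envelop’s actual device code, and note that real implementations vary in channel ordering and normalization conventions (FuMa vs. SN3D and so on):

```python
import math

def encode_first_order(sample, azimuth_deg, elevation_deg):
    """Encode one mono sample into first-order B-format (W, X, Y, Z).
    Azimuth 0 = front, counterclockwise-positive; elevation 0 = horizon."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    w = sample / math.sqrt(2.0)               # omnidirectional component
    x = sample * math.cos(az) * math.cos(el)  # front-back
    y = sample * math.sin(az) * math.cos(el)  # left-right
    z = sample * math.sin(el)                 # up-down
    return w, x, y, z

# A source panned hard left at the horizon: X is ~0, Y carries the signal.
print(encode_first_order(1.0, 90.0, 0.0))
```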

Processors. Any good 3D system needs not only 3D positioning, but also separate effects and tools – because normal delays, reverbs, and the like presume left/right or mid/side stereo output. (Part of what completes the immersive effect is hearing not only the positioning of the source, but reflections around it.)

In this package, you get:

– Spinner: automates motion in 3D space, horizontally and with vertical oscillations
– B-Format Sampler: plays back existing Ambisonics wave files (think samples with spatial information already encoded in them)
– B-Format Convolution Reverb: imagine a convolution reverb that works with three-dimensional information, not just two-dimensional – in other words, exactly what you’d want from a convolution reverb
– Multi-Delay: cascading, three-dimensional delays out of a mono source (a toy sketch of the idea appears below)
– HOA Transform: without explaining Ambisonics, this basically molds and shapes the spatial sound field in real time
– Meter: spatial metering. Cool.

Spinner, for automating movement.

Spatial multi-delay.

Convolution reverb, Ambisonics style.
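As a flavor of how spatial processors differ from their stereo cousins, here’s the toy multi-delay sketch promised above: each successive echo is rotated around the listener before being encoded, so the repeats cascade through space rather than just bouncing left-right. A sketch of the concept only – the actual device is far more capable:

```python
import math

def encode(sample, az_deg):
    """First-order horizontal B-format (W, X, Y) for one sample."""
    az = math.radians(az_deg)
    return (sample / math.sqrt(2.0),
            sample * math.cos(az),
            sample * math.sin(az))

def spatial_multi_delay(signal, delay=2205, taps=4, feedback=0.5, spread_deg=90.0):
    """Cascade delayed copies of `signal`, rotating each tap by spread_deg.
    Returns three B-format channel buffers (W, X, Y)."""
    n = len(signal) + delay * taps
    w = [0.0] * n; x = [0.0] * n; y = [0.0] * n
    for t in range(taps + 1):        # tap 0 is the dry signal, straight ahead
        gain = feedback ** t         # each repeat is quieter...
        az = spread_deg * t          # ...and arrives from a new direction
        offset = delay * t
        for i, s in enumerate(signal):
            ew, ex, ey = encode(gain * s, az)
            w[offset + i] += ew
            x[offset + i] += ex
            y[offset + i] += ey
    return w, x, y
```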

Envelop SF and Envelop Satellite venues also have some LED effects, so you’ll find some devices for controlling those (which might also be useful templates for stuff you’re doing).

All of this spatial information is represented via a technique called Ambisonics. Basically, any spatial system – even stereo – involves applying some maths to determine relative amplitude and timing of a signal to create particular impressions of space and depth. What sets Ambisonics apart is, it represents the spatial field – the sphere of sound positions around the listener – separately from the individual speakers. So you can imagine your sound positions existing in some perfect virtual space, then being translated back to however many speakers are available.

This scalability really matters. Just want to check things out with headphones? Set your master device to “binaural,” and you’ll get a decent approximation through your headphones. Or set up four speakers in your studio, or eight. Or plug into a big array of speakers at a planetarium or a cinema. You just have to route the outputs, and the software decoding adapts.
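That scalability falls straight out of the math. A basic projection decoder just samples the encoded field at each speaker’s direction, so the same W/X/Y signals can feed four speakers, eight, or (via a binaural renderer) plain headphones. A rough first-order, horizontal-ring sketch – real decoders, Envelop’s included, are considerably more sophisticated:

```python
import math

def ring_decode(w, x, y, num_speakers):
    """Decode one horizontal first-order B-format frame to an equally
    spaced speaker ring. Returns one gain-weighted feed per speaker."""
    feeds = []
    for k in range(num_speakers):
        az = 2.0 * math.pi * k / num_speakers  # speaker k's azimuth
        gain = (w * math.sqrt(2.0) + x * math.cos(az) + y * math.sin(az)) / num_speakers
        feeds.append(gain)
    return feeds

# The same encoded frame renders on 4 speakers or 8 -- only the decode changes.
frame = (1.0 / math.sqrt(2.0), 1.0, 0.0)  # a source dead ahead
print(ring_decode(*frame, 4))   # front speaker loudest, rear silent
print(ring_decode(*frame, 8))
```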

Envelop is by no means the first set of tools to help you do this – the technique dates back to the 70s, and various software implementations have evolved over the years, many of them free – but it is uniquely easy to use inside Ableton Live.

Open source, standards

Free software. It’s significant that Envelop’s tools are available as free and open source. Max/MSP, Max for Live, and Ableton Live are proprietary tools, but the patches and externals exist independently, and a free license means you’re free to learn from or modify the code and patches. Plus, because they’re free in cost, you can share your projects across machines and users, provided everybody’s on Live 10 Suite.

Advanced Max/MSP users will probably already be familiar with the basic tools on which the Envelop team have built. They’re the work of the Institute for Computer Music and Sound Technology (ICST) at the Zürcher Hochschule der Künste in Zurich, Switzerland. ICST have produced a set of open source externals for Max/MSP:

https://www.zhdk.ch/downloads-ambisonics-externals-for-maxmsp-5381

Their site is a wealth of research and other free tools, many of them additionally applicable to fully free and open source environments like Pure Data and Csound.

But Live has always been uniquely accessible for trying out ideas. Building a set of friendly Live devices takes these tools and makes them feel at home in the Live paradigm.

Non-proprietary standards. There’s a strong push toward proprietary techniques in cinema spatial audio – Dolby, for instance, we’re looking at you. But while proprietary technology and licensing may make sense for big cinema distributors, it’s absolute death for musicians, who likely want to tour with their work from place to place.

The underlying techniques here are all fully open and standardized. Ambisonics work with a whole lot of different 3D use cases, from personal VR to big live performances. By definition, they don’t define the sound space in a way that’s particular to any specific set of speakers, so they’re mobile by design.

The larger open ecosystem. Envelop will make these tools new to people who haven’t seen them before, but it’s also important that they share an approach, a basis in research, and technological compatibility with other tools.

That includes the German ZKM’s Zirkonium system, HoaLibrary (that repository is deprecated but links to a bunch of implementations for Pd, Csound, OpenFrameworks, and so on), and IRCAM’s SPAT. All these systems support ambisonics – some support other systems, too – and some or all components include free and open licensing.

I bring that up because I think Envelop is stronger for being part of that ecosystem. None of these systems requires a proprietary speaker delivery system – though they’ll work with those cinema setups, too, if called upon to do so. Musical techniques, and even some encoded spatial data, can transfer between systems.

That is, if you’re learning spatial sound as a kind of instrument, here you don’t have to learn each new corporate-controlled system as if it’s a new instrument, or remake your music to move from one setting to another.

Envelop, the physical version

You do need compelling venues to make spatial sound’s payoff apparent – and Envelop are building their own venues for musicians. Their Envelop SF venue is a permanent space in San Francisco, dedicated to spatial listening and research. Envelop Satellite is a mobile counterpart to that, which can tour festivals and so on.

Envelop SF: 32 speakers, including overheads – 24 set in 3 rings of 8 (the speakers in the columns), plus 4 subs and 4 ceiling speakers. (28.4)

Envelop Satellite: 28 speakers – 24 in 3 rings, plus 4 subs (overhead speakers coming soon). (24.4)

The competition, as far as venues: 4DSOUND and Berlin’s Monom, which houses a 4DSOUND system, are similar in function, but use their own proprietary tools paired with the system. They’ve said they plan a mobile system, but there’s no word on when it will be available. The Berlin Institute of Sound and Music’s Hexadome uses off-the-shelf ZKM and IRCAM tools and pairs them with projection surfaces. It’s a mobile system by design, but there’s nothing particularly unique about its sound array or toolset. In fact, you could certainly use Envelop’s tools with any of these venues, and I suspect some musicians will.

There are also many multi-speaker arrays housed in music venues, immersive audiovisual venues, planetariums, cinemas, and so on. So long as you can get access to multichannel interfacing with those systems, you could use Envelop for Live with all of these. The only obstacle, really, is whether these venues embrace immersive, 3D programming and live performance.

But if you thought you had to be Brian Eno to get to play with this stuff, that’s not likely to be the situation for long.

VR, AR, and beyond

In addition to venues, there’s also a growing ecosystem of products for production and delivery, one that spans musical venues and personal immersive media.

To put that more simply: after well over a century of recording devices and production products assuming mono or stereo, now they’re also accommodating the three dimensions your two ears and brain have always been able to perceive. And you’ll be able to enjoy the results whether you’re on your couch with a headset on, or whether you prefer to go out to a live venue.

Ambisonics-powered products now include Facebook 360, Google VR, Waves, GoPro, and others, with more on the way, for virtual and augmented reality. So you can use Live 10 and Envelop for Live as a production tool for making music and sound design for those environments.

Steinberg are adopting ambisonics, too (via Nuendo). Here’s Waves’ guide – they now make plug-ins that support the format, and this is perhaps easier to follow than the Wikipedia article (and relevant to Envelop for Live, too):

https://www.waves.com/ambisonics-explained-guide-for-sound-engineers

Ableton Live with Max for Live has served as an effective prototyping environment for audio plug-ins, too. So developers could pick up Envelop for Live’s components, try out an idea, and later turn that into other software or hardware.

I’m personally excited about these tools and the direction of live venues and new art experiences – well beyond what’s just in commercial VR and gaming. And I’ve worked enough on spatial audio systems to at least say, there’s real potential. I wouldn’t want to keep stereo panning to myself, so it’s great to get to share this with you, too. Let us know what you’d like to see in terms of coverage, tutorial or otherwise, and if there’s more you want to know from the Envelop team.

Thanks to Christopher Willits for his help on this.

More to follow…

http://envelop.us

https://github.com/EnvelopSound/EnvelopForLive/

Further reading

Inside a new immersive AV system, as Brian Eno premieres it in Berlin [Extensive coverage of the Hexadome system and how it works]

Here’s a report from the hacklab on 4DSOUND I co-hosted during Amsterdam Dance Event in 2014 – relevant to these other contexts, too: having open tools and more experimentation will expand our understanding of what’s possible, what works, and what doesn’t work:

Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND

And some history and reflection on the significance of that system:
Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos]

Plus, for fun, here’s Robert Lippok [Raster] and me playing live on that system and exploring architecture in sound, as captured in a binaural recording by Frank Bretschneider [also Raster] during our performance for 2014 ADE. Binaural recording of spatial systems is really challenging, but I found it interesting in that it created its own sort of sonic entity. Frank’s work was just on the Hexadome.

One thing we couldn’t easily do was move that performance to other systems. Now, this begins to evolve.


NuSpace Audio releases Zephyr 3D audio binaural/surround reverb

NuSpace Audio has announced the release of Zephyr, a 3D audio binaural/surround reverb effect plugin based on higher-order Ambisonics (HOA), a speaker-independent format for representing 3D sound fields, suited for spatial audio systems. Zephyr operates directly in the Ambisonics domain, extending the immersive quality and control of binaural and surround systems. The plugin allows parameterizing important […]

Plugin Alliance releases dearVR music and dearVR pro 3D audio plugins

Plugin Alliance has introduced dearVR, a 3D audio reality engine from Dear Reality that delivers focused 3D object-based workflows for music and sound design. dearVR lets you turn your flat stereo multitrack sessions into an immersive 360° soundscape that envelops your listeners when they hear your music on headphones. Build an interactive, virtual acoustic environment […]