Save up to 40% off NUGEN Audio’s Halo Upmix & Downmix plugins!


Plugin Boutique has announced a Halo-Ween Flash Sale, offering up to 40% off NUGEN Audio’s Halo Upmix and Halo Downmix stereo placement plugins for a limited time only. From naturally extracted and expanded soundscapes to full cinematic big-stage enhancement, Halo Upmix delivers with intuitive ease, all the control you need to fine-tune your surround mix […]

The post Save up to 40% off NUGEN Audio’s Halo Upmix & Downmix plugins! appeared first on rekkerd.org.

Max 8: Multichannel, mappable, faster patching is here

Max 8 is out today; the latest version of the audiovisual development environment brings new tools, faster performance, multichannel patching, MIDI learn, and more.

Max is now 30 years old, with a direct lineage to the beginning of visual programming for musicians – creating your own custom tools by connecting virtual cables on-screen instead of typing in code. Since then, its developers have incorporated additional facilities for other code languages (like JavaScript), different data types, real-time visuals (3D and video), and integrated support inside Ableton Live (with Max for Live). Max 8 actually hits all of those different points with improvements. Here’s what’s new:

MC multichannel patching.

It’s always been possible to do multichannel patching – and therefore support multichannel audio (as with spatial sound) – in Max and Pure Data. But Max’s new MC approach makes this far easier and more powerful.

  • Any sound object can be made into multiples, just by typing mc. in front of the object name.
  • A single patch cord can incorporate any number of channels.
  • You can edit multiple objects all at once.

So, yes, this is about multichannel audio output and spatial audio. But it’s also about way more than that – and it addresses one of the most significant limitations of the Max/Pd patching paradigm.

Polyphony? MC.

Synthesis approaches with loads of oscillators (like granular synthesis or complex additive synthesis)? MC.

MPE assignments (from controllers like the Linnstrument and ROLI Seaboard)? MC.

MC means the ability to use a small number of objects and cords to do a lot – from spatial sound to mass polyphony to anything else that involves multiples.

It’s just a much easier way to work with a lot of stuff at once. That was already possible in the open source coding environment SuperCollider, for instance, if you were willing to put in some time learning SC’s language. But it was never terribly easy in Max. (Pure Data, your move!)

MIDI mapping

Mapping lets you MIDI-learn from controllers, keyboards, and whatnot, just by selecting a control and moving your controller.

Computer keyboard mappings work the same way.

The whole implementation looks very much borrowed from Ableton Live, down to the list of mappings for keyboard and MIDI. It’s slightly disappointing they didn’t cover OSC messages with the same interface, though, given this is Max.

It’s faster

Max 8 has various performance optimizations, says Cycling ’74. But in particular, look for 2x (Mac) to 20x (Windows) faster launch times, 4x faster patch loading, and performance enhancements in the UI, Jitter, physics, and objects like coll.

Also, Max 8’s Vizzie library of video modules is now OpenGL-accelerated, which additionally means you can mix and match with Jitter OpenGL patching. (No word yet on what that means for OpenGL deprecation by Apple.)

Node.JS

This is, I suspect, a pretty big deal for a lot of Max patchers who moonlight in some JavaScript coding. Node.js support lets you run Node applications from inside a patch – for extending what Max can do, running servers, connecting to the outside world, and whatnot.

There’s full NPM support, which is to say all the ability to share code via that package manager is now available inside Max.
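
For a sense of scale, a Node for Max script can be only a few lines. Here’s a minimal sketch – it assumes the max-api module that the node.script object exposes to its Node process, and the “roll” message name is just something made up for the example.

```javascript
// Minimal Node for Max sketch: respond to a message from the patch.
// Assumes the max-api module provided to scripts run by [node.script].
const maxApi = require('max-api');

// Hypothetical handler: a "roll" message sent from the patch, e.g. [roll 8(
maxApi.addHandler('roll', (sides = 6) => {
  const value = Math.floor(Math.random() * sides) + 1;
  maxApi.outlet(value); // send the result back out into the patch
});

maxApi.post('dice script loaded'); // log to the Max console
```

Because NPM works as usual, the same script could just as easily pull in an HTTP server or a WebSocket client from the registry and pipe the results into your patch.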

Patching works better, and other stuff that will make you say “finally”

Actually, this may be the bit that a lot of long-time Max users find most exciting, even more than the banner features.

Patching is now significantly enhanced. You can patch and unpatch objects just by dragging them in and out of patch cords, instead of doing this in multiple steps. Group dragging and whatnot finally works the way it should, without accidentally selecting other objects. And you get real “probing” of data flowing through patch cords by hovering over the cords.

There’s also finally an “Operate While Unlocked” option so you can use controls without constantly locking and unlocking patches.

There’s also a refreshed console, color themes, and a search sidebar for quickly bringing up help.

Plus there’s external editor support (coll, JavaScript, etc.). You can use “waypoints” to print stuff to the console.

Also essential:

  • High definition and multitouch support on Windows
  • UI support for the latest macOS
  • Plug-in scanning

And of course a ton of new improvements for Max objects and Jitter.

What about Max for Live?

Okay, Ableton and Cycling ’74 did talk about “lockstep” releases of Max and Max for Live. But… what’s happening is not what lockstep usually means. Maybe it’s better to say that the releases of the two will be better coordinated.

Max 8 today is ahead of the Max for Live that ships with Ableton Live. But we know Max for Live incorporated elements of Max 8, even before its release.

For their part, Cycling ’74 today say that “in the coming months, Max 8 will become the basis of Max for Live.”

Based on past conversations, that means that as much functionality as can practically be delivered in Max for Live will be there. And with all these Max 8 improvements, that’s good news. I’ll try to get more clarity on this as information becomes available.

Max 8 now…

There’s a 30-day free trial. Upgrades are US$149; the full version is US$399, plus subscription and academic discount options.

Full details on the new release are neatly laid out on Cycling’s website today:

https://cycling74.com/products/max-features?utm_source=press&utm_campaign=max8-release

The post Max 8: Multichannel, mappable, faster patching is here appeared first on CDM Create Digital Music.

Mics that record in “3D” ambisonics are the next big thing

Call it the virtual reality microphone … or just think of it as an evolution of microphones that capture sounds more as you hear them. But mics purporting to give you 3D recording are arriving in waves – and they could change both immersive sound and how we record music.

Let’s back up from the hype a little bit here. Once we’re talking virtual reality or you’re imagining people in goggles, Lawnmower Man style, we’re skipping ahead to the application of these mic solutions, beyond the mics themselves.

The microphone technology itself may wind up being the future of recording with or without consumers embracing VR tech.

Back in the glorious days of mono audio, a single microphone that captured an entire scene was … well, any single microphone. And in fact, to this day there are plenty of one-mic recording rigs – think voice overs, for instance.

The reason this didn’t satisfy anyone is more about human perception than it is technology. Your ears and brain are able to perceive extremely accurate spatial positioning in more or less a 360-degree sphere through a wide range of frequencies. Plus, the very things that screw up that precise spatial perception – like reflections – contribute to the impact of sound and music in other ways.

And so we have stereo. And with stereo sound delivery, a bunch of two-microphone arrangements become useful ways of capturing spatial information. Eventually, microphone makers work out ways of building integrated capsules with two microphone diaphragms instead of just one, and you get the advantages of two mics in a single housing. Those in turn are especially useful in mobile devices.

So all these buzzwords you’re seeing in mics all of a sudden – “virtual reality,” “three-dimensional” sound, “surround mics,” and “ambisonic mics” – are really about extending this idea. They’re single microphones that capture spatial sound, just like those stereo mics, but in a way that gives them more than just two-channel left/right (or mid/side) information. To do that, these solutions have two components:

1. A mic capsule with multiple diaphragms for capturing full-spectrum sound from all directions
2. Software processing so you can decode that directional audio, and (generally speaking) encode it into various surround delivery formats or ambisonic sound

(“Surround” here generally means the multichannel formats beyond just stereo; ambisonics are a standard way of encoding full 360-degree sound information, so not just positioning on the same plane as your ears, but above and below, too.)
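
To make that encoding step a little more concrete, here’s a rough sketch of a first-order ambisonic (B-format) encode in JavaScript – pure illustration, using the traditional FuMa channel weighting rather than any particular vendor’s processing, with made-up function names.

```javascript
// Rough first-order ambisonic (B-format) encode of a mono sample.
// FuMa-style convention: W is the omni component (with a -3 dB weight);
// X/Y/Z carry front-back, left-right, and up-down direction.
// Angles are in radians; everything here is illustrative only.
function encodeBFormat(sample, azimuth, elevation) {
  return {
    w: sample * Math.SQRT1_2,
    x: sample * Math.cos(azimuth) * Math.cos(elevation),
    y: sample * Math.sin(azimuth) * Math.cos(elevation),
    z: sample * Math.sin(elevation),
  };
}

// A source 45 degrees to the left and slightly above ear level:
const frame = encodeBFormat(0.8, Math.PI / 4, Math.PI / 12);
```

The mics themselves work the other way around: their capsules record a raw “A-format” signal, which the bundled software matrixes into these B-format channels before any further processing.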

The B360 ambisonics encoder from plug-in maker WAVES.

The software encoding is part of what’s interesting here. Once you have a mic that captures 360-degree sound, you can use it in a number of ways. These sorts of mic capsules are useful in modeling different microphones, since you can adjust the capture pattern in software after the fact. So these spherical mics could model different classic mics, in different arrangements, making it seem as though you recorded with multiple mics when you only used one. Just like your computer can become a virtual studio full of gear, that single mic can – in theory, anyway – act like more than one microphone. That may prove useful for production applications other than just “stuff for VR.”
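
As a sketch of that “one mic becomes many” idea: once you have B-format, a horizontal virtual microphone with a chosen pickup pattern is just a weighted mix of those channels. This is the classic textbook formulation, not any specific product’s math, and the names are made up.

```javascript
// Derive a horizontal virtual mic signal from first-order B-format
// (classic Soundfield-style formulation; illustrative only).
// pattern: 1.0 = omni, 0.5 = cardioid, 0.0 = figure-8 (roughly).
// aimAzimuth: where the virtual capsule points, in radians.
function virtualMic({ w, x, y }, aimAzimuth, pattern) {
  const omniPart = pattern * Math.SQRT2 * w;
  const directionalPart =
    (1 - pattern) * (x * Math.cos(aimAzimuth) + y * Math.sin(aimAzimuth));
  return omniPart + directionalPart;
}

// One recording, two "mics" after the fact: a cardioid facing front,
// and a figure-8 aimed 90 degrees to the left.
const b = { w: 0.6, x: 0.4, y: 0.4 };
const front = virtualMic(b, 0, 0.5);
const side = virtualMic(b, Math.PI / 2, 0.0);
```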

There are a bunch of these microphones showing up all at once. I’m guessing that’s for two reasons – one, a marketing push around VR recording, but two, likely some system-on-a-chip developments that make this possible. (All those Chinese-made components could get hit with hefty US tariffs soon, so we’ll see how that plays out. But I digress.)

Here is a non-comprehensive selection of examples of new or notable 360-degree mics.

8ball

Maker: HEAR360, a startup focused on this area

Cost: US$2500

The pitch: Here’s a heavy-duty, serious solution – a camera-mountable, “omni-binaural” mic that gives you 8 channels of sound that come closest to how we hear, complete with head tracking-capable recordings. PS, if you’re wondering which DAW to use – they support Pro Tools and, surprise, Reaper.

Who it’s for: High-end video productions focused on capturing spatial audio with the mic.

https://hear360.io/shop/8ball

NT-SF1

Maker: RØDE, collaborating with 40-year veteran of these sorts of mics, Soundfield (acquired by RØDE’s parent in 2016)

Cost: US$999

The pitch: Make full-360, head-trackable recordings in a single mic (records in A-format, converts to B-format) for ambisonic audio you can use across formats. Works with Dolby Atmos, works with loads of DAWs (Reaper and Pro Tools, Cubase and Nuendo, and Logic Pro). 4-channel to the 8-ball’s titular eight, but much cheaper and with more versatile software.

Who it’s for: Studios and producers wanting a moderately-priced, flexible solution right now. Plus it’s a solid mic that lets you change mic patterns at will.

Software matters as much as the mic in these applications; RØDE supports DAWs like Cubase/Nuendo, Pro Tools, Reaper, and Logic.

https://en.rode.com/nt-sf1

H3-VR

Maker: ZOOM

Cost: US$350

The pitch: ZOOM is making this dead simple – like the GoPro camera of VR mics. A 4-capsule ambisonic mic plus a 6-axis motion sensor with automatic positioning and level detection promises to make this the set-it-and-forget-it solution. And to make this more mobile, the encoding and recording is included on the device itself. Record ambisonics, stereo binaural, or just use it like a normal stereo mic, all controlled onboard with buttons or using an iOS device as a remote. Your recording is saved on SD cards, even with slate tone and metadata. And you can monitor the 3D sound, sort of, using stereo binaural output of the ambisonic signal (not perfect, but you’ll get the idea).

Who it’s for: YouTube stars wanting to go 3D, obviously, plus one-stop live streaming and music streaming and recording. The big question mark here to me is what’s sacrificed in quality for the low price, but maybe that’s a feature, not a bug, given this area is so new and people want to play around.

https://www.zoom-na.com/products/field-video-recording/field-recording/zoom-h3-vr-handy-recorder

ZYLIA

Maker: ZYLIA, a Polish startup that Indiegogo-funded its first run last year. But the electronics inside come from Infineon, the German semiconductor giant that spun off of Siemens.

Cost: US$1199 list (Pro) / $699 for the basic model

The pitch: This futuristic football contains some 19 mic capsules, versus the 4-8 in the mics above. But the idea isn’t necessarily VR – instead, Zylia claims they use this technology to automatically separate sound sources from this single device. In other words, put the soccer ball in your studio, and the software separates out your drums, keys, and vocalist. Or get the Pro model and capture 3rd-order ambisonics – with more spatial precision than the other offerings here, if it works as advertised.

Who it’s for: Musicians wanting a new-fangled solution for multichannel recording from just one mic (on the basic model), useful for live recording and education, or people doing 3D recordings wanting the same plug-and-play simplicity and more spatial information.

Oh yeah, also – 69dB signal-to-noise ratio is nothing to sneeze at.

Pro Tools Expert did a review late last year, though I think we soon need a more complete review for the 3D applications.

http://www.zylia.co/

What did we miss? With this area growing fast, plenty, I suspect, so sound off. This is one big area in mics to watch, for sure – and the latest example that software processing and intelligence will continue to transform music and audio hardware, even if the fundamental hardware components remain the same.

And, uh, I guess we’ll all soon wind up like this guy?

(Photo source, without explanation, is the very useful archives of the ambisonics symposium.)

The post Mics that record in “3D” ambisonics are the next big thing appeared first on CDM Create Digital Music.

Watch The Black Madonna DJ live from … inside a video game

Algorithmic selection, soulless streaming music, DJ players that tell you what to play next and then do it for you… let’s give you an alternative, and much more fun and futuristic future. Let’s watch The Black Madonna DJ from inside a video game.

This is some reality-bending action here. The Black Madonna, an actual human, played an actual DJ set in an actual club, as that entire club set was transformed into a virtual rendition. That in turn was then streamed as a promotion via Resident Advisor. Eat your heart out, Boiler Room. Just pointing cameras at people? So last decade.

From Panorama Bar to afterhours in the uncanny valley:

This has less to do with CDM, but… I enjoy watching the trailer about the virtual club, just because I seriously never get tired of watching Marea punching a cop. (Create Digital Suckerpunches?)

Um… apologies to members of law enforcement for that. Just a game.

So, back to why this is significant.

First, I think actually The Black Madonna doesn’t get nearly the credit she deserves for how she’s been able to make her personality translate across the cutthroat-competitive electronic music industry of the moment. There’s something to learn from her approach – to the fact that she’s relatable, as she plays and in her outspoken public persona.

And somehow, seeing The Black Madonna go all Andy Serkis here puts that into relief. (See video at bottom.) I mean, what better metaphor is there for life in the 21st century? You have to put on a weird, uncomfortable, hot suit, then translate all the depth of your humanness into a virtual realm that tends to strip you of dimensions, all in front of a crowd of strangers online you can’t see. You have to be uncannily empathic inside the uncanny valley. A lot of people see the apparent narcissism on social media and assume they’re witnessing a solution to the formula, when in fact it may be simply signs of desperation.

Marea isn’t the only DJ to play the Grand Theft Auto series, but she’s the one who seems to actually manage to establish herself as a character in the game.

To put it bluntly: whatever you think of The Black Madonna, take this as a license to ignore the people who try to stop you from being who you are. It’s not going to get you success, but it is going to allow you to be human in a dehumanizing world.

And then there’s the game itself, now a platform for music. Rockstar Games have long been incurable music nerds – yeah, our people. That’s why you hear well curated music playlists all over the place, as well as elaborate interactive audio and music systems for industry-leading immersion. They’re nerds enough that they’ve even made some side trips like trying to make a beat production tool for the Sony PSP with Timbaland. (Full disclosure: I consulted on an educational program around that.)

This is unquestionably a commercial, mass market platform, but it’s nonetheless a pretty experimental concept.

Yes, yes – lots of flashbacks to the days of Second Life and its fledgling attempts to work as a music venue.

The convergence of virtual reality tech, motion capture, and virtual venues on one hand with music, the music industry, and unique electronic personalities on the other I think is significant – even if only as a sign of what could be possible.

I’m talking now to Rockstar to find out more about how they pulled this off. Tune in next time as we hopefully get some behind-the-scenes look at what this meant for the developers and artists.

While we wait on that, let’s nerd out with Andy Serkis about motion capture performance technique:

The post Watch The Black Madonna DJ live from … inside a video game appeared first on CDM Create Digital Music.

These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator, or Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re also helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper with the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork go into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking an interface that already works perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking at music as an isolated element, and connecting it to architecture and memory.)

“We were talking about imagining sound. Sounds from memories, sounds from everyday life, and unheard sounds. Later we started to create sonic events just with words, which we translated into some tracks. ‘Drawing from Memory’ is a sonic interpretation of one of those sound / word pieces. FIELDS now makes it possible to unfold the individual parts of this composition, and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.”

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.

The post These fanciful new apps weave virtual music worlds in VR and AR appeared first on CDM Create Digital Music.

The best news for iOS, macOS musicians and artists from WWDC

Apple’s WWDC, while focused on developers, tends to highlight consumer features of its OS – not so much production stuff. But as usual, there are some tidbits for creative Mac and iOS users.

Here’s the stuff that looks like good news, at least in previews. Note that Apple tends to focus on just major new features they want to message, so each OS may reveal more in time.

iOS 12

Performance. On both iPad and iPhone, Apple is promising big performance optimizations. They’ve made it sound like they’re particularly targeting older devices, which should come as welcome news to users finding their iThings feel sluggish with age. (iPhone 5s and iPad Air onwards get the update.)

A lot of this has to do with responsiveness when launching apps or bringing up the keyboard or camera, so it may not directly impact audio apps – most of which do their heavy work at a pretty low level. But it’s nice to see Apple improve the experience for long-term owners, not just show off things that are new. And even as Android devices boast high-end specs on paper, that platform still lags iOS badly when it comes to things like touch response or low-latency audio.

Smoother animation is also a big one.

Augmented reality. Apple has updated their augmented reality to ARKit 2. These are the tools that let you map 3D objects and visualizations to a real-world camera feed – it basically lets you hold up a phone or tablet instead of donning goggles, and mix the real-world view with the virtual one.

New for developers: augmented reality that persists between sessions (without devs having to build that themselves), object detection and tracking, and multi-user support. They’ve also unveiled a new format for objects.

I know AR experimentation is already of major interest to digital artists. The readiness of iOS as a platform means they have a canvas for those experiments.

There are also compelling music and creative applications, some still to be explored. Imagine using an augmented reality view to help visualize spatialized audio. Or use a camera to check out how a modular rack or gear will fit in your studio. And there are interesting possibilities in education. (Think a 3D visualization of acoustic waves, for instance.)

Both augmented reality and virtual reality offer some new immersive experiences musicians and artists are sure to exploit. Hey, no more playing Dark Side of the Moon to The Wizard of Oz; now you can deliver an integrated AV experience.

Google’s Android and Apple are neck and neck here, but because Apple delivers updates faster, they can rightfully claim to be the largest platform for the technology. (They also have devices: iPhone SE / 6s and up and 5th generation iPad and iPads Pro all work.) Google’s challenge here I think is really adoption.

Apple’s also pretty good at telling the story here:
https://www.apple.com/ios/augmented-reality/

That said, Google has some really compelling 3D audio solutions – more on this landscape soon, on both platforms.

Real Do Not Disturb. This is overdue, but I think a major addition for those of us wanting to focus on music on iOS and not have to deal with notifications.

Siri Shortcuts. This is a bit like the third-party power app Workflow; it allows you to chain activities and apps together. I expect that could be meaningful to advanced iOS users; we’ll just have to see more details. It could mean, for instance, handy record + process audio batches.

Voice Memos on iPad. I know a lot of musicians still use this so – now you’ve got it in both places, with cloud sync.

https://www.apple.com/ios/ios-12-preview/features/

macOS 10.14 Mojave

Dark Mode. Finally. And another chance to keep screens from blaring at us in studios or onstage – though Windows and Linux users have this already, of course.

Improved Finder. This is more graphics oriented than music oriented, of course – but creative users in general will appreciate the complete metadata preview pane.

Also nice: Quick Actions, which also support the seldom-used, ill-documented, but kind of amazing Automator. Automator also has a lot of audio-specific actions with some apps; it’s worth checking out.

There are also lots of nice photo and image markup tools.

Stacks. Iterations of this concept have been around since the 90s, but finally we see it in an official Apple OS release. Stacks organize files on your desktop automatically, so you don’t have a pile of icons everywhere. Apple got us into this mess in the 80s (or was that Xerox in the 70s?), but finally they’re helping us dig out again.

App Store and the iOS-Mac ecosystem. Apple refreshing their App Store may be a bigger deal than it seems. A number of music developers are seeing big gains on Apple mobile platforms – and they’re trying to leverage that success by bringing apps to desktop Mac, something that the Windows ecosystem really doesn’t provide. It sounds like Intua, creators of BeatMaker, might even have a desktop app in store.

And having a better App Store means that it’s more likely developers will be able to sell their apps – meaning more Mac music apps.

https://www.apple.com/macos/mojave-preview/

That’s about it

There’s of course a lot more to these updates, but more on either the developer side or consumer things less relevant to our audience.

The big question for Apple remains – what is their hardware roadmap? The iPad has no real rivals apart from a shift in focus to Windows-native tablets like the Surface, but the Mac has loads of competition for visual and music production.

Generally, I don’t know that either Windows or macOS can deliver a lot for pro users in these kinds of updates. We’re at a mature, iterative phase for desktop OSes. But that’s okay.

Now, what we hope as always is that updates don’t just break our existing stuff. Case in point: Apple moving away from OpenCL and OpenGL.

But even there, as one reader comments, hardware is everything. Apple dropping OpenCL isn’t as big to some developers and artists as the fact that you can’t buy a machine with an NVIDIA card in it.

Well, we’ll be watching. And as usual, anything that may or may not break audio and music tools, we’ll find out only closer to release.

The post The best news for iOS, macOS musicians and artists from WWDC appeared first on CDM Create Digital Music.

SPAT Revolution announces integration with Avid VENUE | S6L system

This year at InfoComm 2018, Flux and Avid will be previewing the integration of the highly acclaimed SPAT Revolution software with Avid’s VENUE | S6L systems, creating a powerful and innovative real-time multichannel immersive 3D-audio platform for Live, Theatrical, and Installed Sound applications. Avid has introduced a number of highly anticipated workflow enhancements to their […]

Unreal game engine’s modular sound features explained: video

Unreal Engine may be built for games, but under the hood, it’s got a powerful audio, music, and modular synthesis engine. Its lead audio programmer explained how it works this afternoon in a livestream from HQ.

Now a little history: back when I first met Aaron McLeran, he was at EA and working with Brian Eno and company on Spore. Generative music in games and dreams of real interactive audio engines to drive it have some history. As it happens, those conversations indirectly led us to create libpd. But that’s another story.

Aaron has led an effort to build real synthesis capabilities into Unreal. That could open a new generation of music and sound for games, enabling scores that are more responsive to action and scale better to immersive environments (including VR and AR). And it could mean that Unreal itself becomes a tool for art, even without a game per se, by giving creators access to a set of tools that handle a range of 3D visual and sound capabilities, plus live, responsive sound and music structures, on the cheap. (Getting started with Unreal is free.)

I’ll write about this more soon, but here’s what they cover in the video:

  • Submix graph and source rendering (that’s how your audio bits get mixed together)
  • Effects processing
  • Realtime synthesis (which is itself a modular environment)
  • Plugin extensions

Aaron is joined by Community Managers Tim Slager and Amanda Bott.

I’m just going to put this out there — and let you ask CDM some questions. (Or let us know if you’re using Unreal in your own work, as an artist, or as a sound designer or composer for games!)

Forum topic with the stream:

Unreal Engine Livestream – Unreal Audio: Features and Architecture – May 24 – Live from Epic HQ

The post Unreal game engine’s modular sound features explained: video appeared first on CDM Create Digital Music.

Free new tools for Live 10 unlock 3D spatial audio, VR, AR

Envelop began life by opening a space for exploring 3D sound, directed by Christopher Willits. But today, the nonprofit is also releasing a set of free spatial sound tools you can use in Ableton Live 10 – and we’ve got an exclusive first look.

First, let’s back up. Listening to sound in three dimensions is not just some high-tech gimmick. It’s how you hear naturally with two ears. The way that actually works is complex – the Wikipedia overview alone is dense – but close your eyes, tilt your head a little, and listen to what’s around you. Space is everything.

And just as in the leap from mono to stereo, space can change a musical mix – it allows clarity and composition of sonic elements in a new way, which can transform its impact. So it really feels like the time is right to add three dimensions to the experience of music and sound, personally and in performance.

Intuitively, 3D sound seems even more natural than visual counterparts. You don’t need to don weird new stuff on your head, or accept disorienting inputs, or rely on something like 19th century stereoscopic illusions. Sound is already as ephemeral as air (quite literally), and so, too, is 3D sound.

So, what’s holding us back?

Well, stereo sound required a chain of gear, from delivery to speaker. But those delivery mechanisms are fast evolving for 3D, and not just in terms of proprietary cinema setups.

But stereo audio also required something else to take off: mixers with pan pots. Stereo effects. (Okay, some musicians still don’t know how to use this and leave everything dead center, but that only proves my point.) Stereo only happened because tools made its use accessible to musicians.

Looking at something like Envelop’s new tools for Ableton Live 10, you see something like the equivalent of those first pan pots. Add some free devices to Live, and you can improvise with space, hear the results through headphones, and scale up to as many speakers as you want, or deliver to a growing, standardized set of virtual reality / 3D / game / immersive environments.

And that could open the floodgates for 3D music mixing. (Maybe it could even open your own floodgates there.)

Envelop tools for Live 10

Today, Envelop for Live (E4L) has hit GitHub. The tools themselves are free, but you need the full version of Ableton Live Suite – Live 10 minimum, since it provides the requisite multi-point audio plumbing. Provided you’re working from that as a base, though, musicians get a set of Max for Live-powered devices for working with spatial audio production and live performance, and developers get a set of tools for creating their own effects.

Start here for the download:

http://www.envelop.us/software/

See also the more detailed developer site:

https://github.com/EnvelopSound/EnvelopForLive/

Read an overview of the system, and some basic explanations of how it works (including some definitions of 3D sound terminology):

https://github.com/EnvelopSound/EnvelopForLive/wiki/System-Overview

And then find a getting started guide, routing, devices, and other reference materials on the wiki:

https://github.com/EnvelopSound/EnvelopForLive/wiki

It’s beautiful, elegant software – the friendliest I’ve seen yet to take on spatial audio, and very much in the spirit of Ableton’s own software. Kudos to core developers Mark Slee, Roddy Lindsay, and Rama Gotfried.

Here’s the basic idea of how the whole package works.

Output. There’s a Master Bus device that stands in for your output buses. It decodes your spatial audio, and adapts routing to however many speakers you’ve got connected – whether that’s just your headphones or four speakers or a huge speaker array. (That’s the advantage of having a scalable system – more on that in a moment.)

Sources. Live 10’s Mixer may be built largely with the idea of mixing tracks down to stereo, but you probably already think of it sort of as a set of particular musical materials – as sources. The Source Panner device, added to each track, lets you position that particular musical/sonic entity in three-dimensional space.

Processors. Any good 3D system needs not only 3D positioning, but also separate effects and tools – because normal delays, reverbs, and the like presume left/right or mid/side stereo output. (Part of what completes the immersive effect is hearing not only the positioning of the source, but reflections around it.)

In this package, you get:

  • Spinner: automates motion in 3D space horizontally and with vertical oscillations
  • B-Format Sampler: plays back existing Ambisonics wave files (think samples with spatial information already encoded in them)
  • B-Format Convolution Reverb: imagine a convolution reverb that works with three-dimensional information, not just two-dimensional – in other words, exactly what you’d want from a convolution reverb
  • Multi-Delay: cascading, three-dimensional delays out of a mono source
  • HOA Transform: without explaining Ambisonics, this basically molds and shapes the spatial sound field in real time
  • Meter: spatial metering. Cool.

Spinner, for automating movement.

Spatial multi-delay.

Convolution reverb, Ambisonics style.

Envelop SF and Envelop Satellite venues also have some LED effects, so you’ll find some devices for controlling those (which might also be useful templates for stuff you’re doing).

All of this spatial information is represented via a technique called Ambisonics. Basically, any spatial system – even stereo – involves applying some maths to determine relative amplitude and timing of a signal to create particular impressions of space and depth. What sets Ambisonics apart is, it represents the spatial field – the sphere of sound positions around the listener – separately from the individual speakers. So you can imagine your sound positions existing in some perfect virtual space, then being translated back to however many speakers are available.

This scalability really matters. Just want to check things out with headphones? Set your master device to “binaural,” and you’ll get a decent approximation through your headphones. Or set up four speakers in your studio, or eight. Or plug into a big array of speakers at a planetarium or a cinema. You just have to route the outputs, and the software decoding adapts.
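
To see why that scaling works, here’s a toy first-order decode to a horizontal ring of speakers in JavaScript – the simplest projection-style decode, with made-up names. Real decoders (Envelop’s included) add proper normalization, shelf filtering, and binaural rendering for headphones, so treat this strictly as an illustration of the idea that the speaker count is a parameter, not part of the mix.

```javascript
// Toy first-order ambisonic decode to a ring of equally spaced speakers.
// Simple projection-style decode with FuMa-ish weights; illustrative only --
// real decoders handle normalization, shelf filters, irregular layouts, etc.
function decodeToRing({ w, x, y }, speakerCount) {
  const signals = [];
  for (let n = 0; n < speakerCount; n++) {
    const az = (2 * Math.PI * n) / speakerCount; // this speaker's azimuth
    signals.push(
      (Math.SQRT2 * w + x * Math.cos(az) + y * Math.sin(az)) / speakerCount
    );
  }
  return signals; // one feed per speaker
}

// The same encoded material, rendered for four speakers or eight:
const b = { w: 0.7, x: 0.5, y: 0.5 };
const quad = decodeToRing(b, 4);
const octo = decodeToRing(b, 8);
```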

Envelop is by no means the first set of tools to help you do this – the technique dates back to the 70s, and various software implementations have evolved over the years, many of them free – but it is uniquely easy to use inside Ableton Live.

Open source, standards

Free software. It’s significant that Envelop’s tools are available as free and open source. Max/MSP, Max for Live, and Ableton Live are proprietary tools, but the patches and externals exist independently, and a free license means you’re free to learn from or modify the code and patches. Plus, because they’re free of charge, you can share your projects across machines and users, provided everybody’s on Live 10 Suite.

Advanced Max/MSP users will probably already be familiar with the basic tools on which the Envelop team have built. They’re the work of the Institute for Computer Music and Sound Technology (ICST), at the Zürcher Hochschule der Künste in Zurich, Switzerland. ICST have produced a set of open source externals for Max/MSP:

https://www.zhdk.ch/downloads-ambisonics-externals-for-maxmsp-5381

Their site is a wealth of research and other free tools, many of them additionally applicable to fully free and open source environments like Pure Data and Csound.

But Live has always been uniquely accessible for trying out ideas. Building a set of friendly Live devices takes these tools and makes them make sense within the Live paradigm.

Non-proprietary standards. There’s a strong push toward proprietary techniques in spatial audio in the cinema – Dolby, for instance, we’re looking at you. But while proprietary technology and licensing may make sense for big cinema distributors, it’s absolute death for musicians, who likely want to tour with their work from place to place.

The underlying techniques here are all fully open and standardized. Ambisonics work with a whole lot of different 3D use cases, from personal VR to big live performances. By definition, they don’t define the sound space in a way that’s particular to any specific set of speakers, so they’re mobile by design.

The larger open ecosystem. Envelop will make these tools new to people who haven’t seen them before, but it’s also important that they share an approach, a basis in research, and technological compatibility with other tools.

That includes the German ZKM’s Zirkonium system, HoaLibrary (that repository is deprecated but links to a bunch of implementations for Pd, Csound, OpenFrameworks, and so on), and IRCAM’s SPAT. All these systems support ambisonics – some support other systems, too – and some or all components include free and open licensing.

I bring that up because I think Envelop is stronger for being part of that ecosystem. None of these systems requires a proprietary speaker delivery system – though they’ll work with those cinema setups, too, if called upon to do so. Musical techniques, and even some encoded spatial data, can transfer between systems.

That is, if you’re learning spatial sound as a kind of instrument, here you don’t have to learn each new corporate-controlled system as if it’s a new instrument, or remake your music to move from one setting to another.

Envelop, the physical version

You do need compelling venues to make spatial sound’s payoff apparent – and Envelop are building their own venues for musicians. Their Envelop SF venue is a permanent space in San Francisco, dedicated to spatial listening and research. Envelop Satellite is a mobile counterpart to that, which can tour festivals and so on.

Envelop SF: 32 speakers in total, including overheads – 24 set in 3 rings of 8 (the speakers in the columns), plus 4 subs and 4 ceiling speakers. (28.4)

Envelop Satellite: 28 speakers. 24 in 3 rings + 4 subs (overhead speakers coming soon) (24.4)

The competition, as far as venues: 4DSOUND and Berlin’s Monom, which houses a 4DSOUND system, are similar in function, but use their own proprietary tools paired with the system. They’ve said they plan a mobile system, but no word on when it will be available. The Berlin Institute of Sound and Music’s Hexadome uses off-the-shelf ZKM and IRCAM tools and pairs projection surfaces. It’s a mobile system by design, but there’s nothing particularly unique about its sound array or toolset. In fact, you could certainly use Envelop’s tools with any of these venues, and I suspect some musicians will.

There are also many multi-speaker arrays housed in music venues, immersive audiovisual venues, planetariums, cinemas, and so on. So long as you can get access to multichannel interfacing with those systems, you could use Envelop for Live with all of these. The only obstacle, really, is whether these venues embrace immersive, 3D programming and live performance.

But if you thought you had to be Brian Eno to get to play with this stuff, that’s not likely to be the situation for long.

VR, AR, and beyond

In addition to venues, there’s also a growing ecosystem of products for production and delivery, one that spans musical venues and personal immersive media.

To put that more simply: after well over a century of recording devices and production products assuming mono or stereo, now they’re also accommodating the three dimensions your two ears and brain have always been able to perceive. And you’ll be able to enjoy the results whether you’re on your couch with a headset on, or whether you prefer to go out to a live venue.

Ambisonics-powered products now include Facebook 360, Google VR, Waves, GoPro, and others, with more on the way, for virtual and augmented reality. So you can use Live 10 and Envelop for Live as a production tool for making music and sound design for those environments.

Steinberg are adopting ambisonics, too (via Nuendo). Here’s Waves’ guide – they now make plug-ins that support the format, and this is perhaps easier to follow than the Wikipedia article (and relevant to Envelop for Live, too):

https://www.waves.com/ambisonics-explained-guide-for-sound-engineers

Ableton Live with Max for Live has served as an effective prototyping environment for audio plug-ins, too. So developers could pick up Envelop for Live’s components, try out an idea, and later turn that into other software or hardware.

I’m personally excited about these tools and the direction of live venues and new art experiences – well beyond what’s just in commercial VR and gaming. And I’ve worked enough on spatial audio systems to at least say, there’s real potential. I wouldn’t want to keep stereo panning to myself, so it’s great to get to share this with you, too. Let us know what you’d like to see in terms of coverage, tutorial or otherwise, and if there’s more you want to know from the Envelop team.

Thanks to Christopher Willits for his help on this.

More to follow…

http://envelop.us

https://github.com/EnvelopSound/EnvelopForLive/

Further reading

Inside a new immersive AV system, as Brian Eno premieres it in Berlin [Extensive coverage of the Hexadome system and how it works]

Here’s a report from the hacklab on 4DSOUND I co-hosted during Amsterdam Dance Event in 2014 – relevant to these other contexts: having open tools and more experimentation will expand our understanding of what’s possible, what works, and what doesn’t work:

Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND

And some history and reflection on the significance of that system:
Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos]

Plus, for fun, here’s Robert Lippok [Raster] and me playing live on that system and exploring architecture in sound, as captured in a binaural recording by Frank Bretschneider [also Raster] during our performance for 2014 ADE. Binaural recording of spatial systems is really challenging, but I found it interesting in that it created its own sort of sonic entity. Frank’s work was just on the Hexadome.

One thing we couldn’t easily do was move that performance to other systems. Now, this begins to evolve.

The post Free new tools for Live 10 unlock 3D spatial audio, VR, AR appeared first on CDM Create Digital Music.

How to try GPU-accelerated live visuals in a few steps, for free

The growing power of gaming architectures for visuals has a side benefit: it can produce elaborate visuals without touching the CPU, which is busy on musicians’ machines dealing with sound.

But how do you go about exploring some of that power? The code language spoken natively by the GPU is a little frightening at first. Fortunately, you can actually have a play in a few minutes. It’s easy enough that I prepared this lightning tutorial:

I shared this with the #RazerMusic program as it’s in fact a good artistic application for laptops with gaming architectures – and it’s terrific having that NVIDIA GTX 1060 with 6 GB of memory. (This example can’t even begin to show that off, in fact.) These steps will work on the Mac, too, though.

I’m stealing a demo here. Isadora creator Mark Coniglio showed off his team’s GLSL support more or less like this when they unveiled the feature at the Isadora Werkstatt a couple of summers ago. But Isadora itself, while known among a handful of live visualists and people working with dance and theater tech, is I think underrated. And sure enough, this support makes the powers of GLSL friendly to non-programmers. You can grab some shader code and then modify parameters or combine it with other effects, modular style, without delving into the code itself. Or if you are learning (or experienced, even) with GLSL, Isadora provides an uncommonly convenient environment for working with graphics-accelerated generative visuals and effects.

If you’re not quite ready to commit to the tool, Isadora has a full-functioning demo version so you can get this far – and look around and decide if buying a license is right for you. What I do like about it is, apart from some easy-to-use patching powers, Isadora’s scene-based architecture works well in live music, theater, dance, and other performance arts. (I still happily use it alongside stuff like Processing, Open Frameworks, and Touch Designer.)

There is a lot of possibility here. And if you dig around, you’ll see pretty radically different aesthetics are possible, too.

Here’s an experiment also using mods to the GLSL facility in Isadora, by Czech artist Gabriela Prochazka (as I jam on one of my tunes live).

Resources:

https://troikatronix.com/

https://www.shadertoy.com/

Planning to do more like this, so open to requests!

The post How to try GPU-accelerated live visuals in a few steps, for free appeared first on CDM Create Digital Music.