Roland’s revised VT-4 – the replacement for the first AIRA VT-3 – makes it look like someone finally gets what vocalists want in effects. More effects options, actual control over harmony, and MIDI could make all the difference.
The original VT-3 is a little too simple to recommend. A big dial locks you into some stock effects, without any parameter controls beyond pitch and formant. But at the same time, it is unusually direct and accessible, and it doubles as a USB interface, meaning for singers it’s carry-on luggage friendly. So as an inexpensive, fun effect, it does have potential. It’s cheap on the used market, but then so are a number of pedals.
Roland have apparently been listening, though. Just as the fun but simplistic TR-8 was replaced with the sample-loading, all-around improved TR-8S, so too has the VT-3 gotten a revamp. The Slimer-green trim is gone, but more importantly, you now can control the way it sounds, via expanded effects options and controls. And it does MIDI input.
Here’s the thing: there are lots of great sounding vocal effects out there, but none of them seems designed with singers in mind. They fit into two categories: pedals that seem to have been created by guitarists, or “studio” boxes that have way too much menu diving. (If you can think of an exception, shout in comments.) The VT-3 was already significant in that it was live friendly. Now the VT-4 fills in the gaps the VT-3 left open.
From the VT-3, and still a good idea:
USB audio interface functionality (so you can use this with a computer)
XLR mic in with phantom power, plus minijack in
Four faders: pitch, formant, balance (for controlling wet/dry of the effect), reverb
Push-button preset recall
Dedicated bypass switch
But new on the VT-4:
A friendly “key” dial at the top right
Direct access to “vocoder” and “harmonizer” modes
Multiple effects at once
MIDI input – so play in the notes/harmonies you want for the vocoder, harmonizer, and pitch engines
Variations for all the effects
It’s finally what you want to sing with, whether you’re a great singer or can barely sing at all – direct access to effects, performance-friendly controls. Singers don’t necessarily want to have to do everything with their feet or in pages of menus. This hardware’s designers seem to understand that.
It’s the effects that appear to be totally overhauled. The only variations on the VT-3 are printed directly around the dial – as in, you get two alternatives for the auto pitch, and that’s it.
On the VT-4, there’s a whole slew of effects hidden behind the variation buttons. (Those buttons still double as preset storage and recall, so what you’ll likely do is explore to find the ones you like, then lock them in at the top.)
There are still some toy-like presets as on the VT-3 – though some of those are interesting for processing drums and the like. In addition, though, you also get a bunch of new, musical effects, and enough variations that you can dial in what you need.
There’s a chorus effect (categorized, inexplicably, as “megaphone”). There’s a model of the classic Roland VP in the vocoder, along with talk box, advanced, and Speak & Spell (sorry, trademark – “spell toy”) variations. The Harmony option lets you choose intervals (fifth, third, fourth below, and combinations, though you can also use MIDI for more). Even robot has octave options and a new feedback variation.
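Interval harmony like this is just pitch shifting by fixed semitone offsets. Assuming equal temperament (my illustration, not anything published about the VT-4’s internals), the pitch ratios work out as follows:

```python
def semitone_ratio(semitones):
    """Pitch (or playback-rate) ratio for a shift in equal temperament."""
    return 2.0 ** (semitones / 12.0)

# Typical harmonizer intervals:
fifth_up    = semitone_ratio(7)   # perfect fifth up, ~1.498x
third_up    = semitone_ratio(4)   # major third up, ~1.26x
fourth_down = semitone_ratio(-5)  # perfect fourth below, ~0.749x
```

An octave (12 semitones) doubles the frequency exactly, and complementary shifts cancel out, which is a handy sanity check for any harmonizer math.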
Also, that fixed “reverb” is now really a multi effects unit – reverb, echo, synced tempo delay, and dub echo are now available.
I’d likely buy it for those upgrades alone, but then you can also use a MIDI keyboard as input to control pitch.
I need to research more how multiple effects work and exactly how these models relate to those available on the VT-3 and other Roland AIRA and Boutique series models. But generally these days Roland are constantly improving their modeling and sounds, thanks to architectures that are more flexible than those of the past.
There’s a nice gift for Red Bull Music Academy attendees: a hardware convolver effect from the man who led the team at KORG that gave us volcas and minilogues. Here’s a sneak preview.
The Granular Convolver is a collaboration between Tatsuya, now working as an independent designer and relocated to Germany from Japan (while still in an advisory position with KORG), and Berlin’s own E-RM Erfindungsbüro, maker of obsessive-quality clock devices. (Founder Maximilian Rest is the design mind there.)
I’ve got one in-hand, and will detail its operation with some sound samples shortly, but here’s a quick teaser.
First, a Jony Ive (sorry)-style video from Tats:
The important thing: this Raspberry Pi-powered device feels amazing, like a heavyweight metal luxury item, and makes wonderful sounds.
The basic operation:
1. Record a sound snippet.
2. Play back that sound snippet via a granular engine.
3. Convolve that playback with a live input, combining the two sounds – the timbre of your original sound, the envelope of what you’re playing now.
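The actual engine isn’t public, but the three steps above can be sketched in miniature: chop a stored snippet into overlapping grains, overlap-add them into a playback buffer, then convolve that with the live input, so the result carries the snippet’s timbre shaped by what you play. All function names here are illustrative:

```python
# Miniature sketch of grain playback + convolution (illustrative only --
# the Granular Convolver's real engine is not public).

def grains(snippet, size, hop):
    """Chop a recorded snippet into overlapping grains."""
    return [snippet[i:i + size] for i in range(0, len(snippet) - size + 1, hop)]

def granular_playback(snippet, size=4, hop=2):
    """Naive granular engine: overlap-add the grains back into a buffer."""
    out = [0.0] * len(snippet)
    for g_index, grain in enumerate(grains(snippet, size, hop)):
        start = g_index * hop
        for n, sample in enumerate(grain):
            out[start + n] += sample * 0.5  # constant overlap gain
    return out

def convolve(a, b):
    """Direct-form convolution: the output combines a's timbre with b's envelope."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

snippet = [1.0, 0.5, -0.5, -1.0, 0.5, 0.25, -0.25, -0.5]  # stored recording
live_input = [1.0, 0.0, 0.0, 0.5]                          # what you play now
wet = convolve(granular_playback(snippet), live_input)
```

Real-time convolvers do this with FFT-based partitioned convolution rather than the direct form shown here, but the signal flow is the same.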
There are also some features for storing and recalling presets, which make this performance friendly.
Why this matters: it gives you an expressive way of “playing” an effect, like an instrument.
And it’s a unique boutique hardware making project, for the particular context of an event – very different from the mass-manufactured designs of something like the volca series. The units were all hand-assembled (by Tats himself) here in Berlin, and even the boards and cases were made locally, so it really is a Berlin manufacturing product in a way most things aren’t.
More on this soon – and you can bet if you follow any RBMA attendees, you’ll see some of their experiments with this hardware show up in social channels!
Czech builder Bastl Instruments are working simultaneously in modular and desktop instruments. But it’s not about choosing one or the other – it’s getting inspired to play musically, either way.
So Patchení s Nikol is back, with Nikol to show you some serious patching techniques. And yes, of course, this is a nice showcase of Bastl’s own skiff of modules. But it’s also a nice example of what you can do with modulated envelopes – adding modulation to an amplitude envelope to give it a more complicated shape than just attack and release and so on. You could certainly apply this to other modular environments.
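The envelope trick is easy to see in numbers: sum a modulation source into a plain attack/release contour and clamp the result. This is just an illustration of the concept, not a Bastl patch:

```python
import math

def ar_envelope(t, attack, release):
    """Simple linear attack/release envelope, output 0..1."""
    if t < attack:
        return t / attack
    if t < attack + release:
        return 1.0 - (t - attack) / release
    return 0.0

def modulated_envelope(t, attack, release, lfo_rate, lfo_depth):
    """Sum a sine LFO into the envelope, clamped to stay in 0..1,
    giving a more complicated shape than plain attack/release."""
    level = ar_envelope(t, attack, release) + lfo_depth * math.sin(2 * math.pi * lfo_rate * t)
    return min(1.0, max(0.0, level))

# Sample the resulting contour over one second:
shape = [modulated_envelope(t / 100, 0.1, 0.5, 8.0, 0.2) for t in range(100)]
```

In a patch, the “sum” is just mixing the LFO into the envelope’s CV before it hits the VCA, exactly as in the video.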
Actually, one of my favorite modules Bastl have put out lately is this one: Hendrikson is designed just to make it easier to add stomp box and external effects to your modular rig. It gives you easy-access jacks for patching in your pedal or pedal chain, some handy knobs, and all-important wet/dry mix. Plus, you can patch control into that wet/dry to automate wet dry controls with your modular if you like.
Speaking of economizing, how about that Zoom MultiStomp you see in the middle of the video? It’s got a whole massive list of different effects, all of which you control, and a street price of around $100 right now.
Václav, I believe, turned me on to that Zoom. And now, switching to the desktop hardware Bastl make, here’s a personal testimonial about how much he’s appreciating their THYME looper – seen here played live and with some destructive looping.
Václav tells us: “I have been playing the THYME for quite a while and [it] has a certain instrumental quality that is quite hard to master – as with any other instrument… it really became one of the most essential pieces of musical gear that I use all the time. I am really proud of it being a real instrument now and not just a dream that I had more than 3 years ago!”
I’m here in Moscow now for Synthposium where we’ll see Bastl at the Expo and in a talk on music gear business in the online age. Stay tuned.
This is a serious Frankenstein’s monster: a DIY synth made of a 1981 Casio keyboard, an AM radio, stompboxes, and more – and held together with glue and tape.
Legowelt is somewhere between modding, circuit bending, and instrument design here, concocting a kind of wonky workstation of weirdness from the cannibalized bits of other stuff.
Essentially, it’s a Casio keyboard fed through a series of effects and circuit-bent circuitry, with a looper pedal thrown in and an AM radio as noise source. Maestro Legowelt explains:
Enter the STAR SHEPHERD a synth I Build/bent/hacked/modified from old guitar pedals FX and EQ boxes, a small AM radio and a 1981 Casio 403 keyboard. The oscillator section is made out of Pitchshifter/Harmonizers/Sub Octavers and a graphic EQ pedal to create complex harmonic tones – transmorphed from the simple keyboard sounds fed by the Casio. The sound then goes through a bunch of circuitbend Analog delays, reverbs, Tremolos & Vibratos (figuring as makeshift LFO sources) and Wahwah pedals as filters. The AM radio is figuring as a random noise source. There is also a very simple keyboard style ‘sequencer’ made from a looper pedal.
The case is made out of cheap plywood and everything is held together with screws, glue and tape. There are also some LED strips pulsating from the inside for some extra intense magic.
It is very noisey, crackly and sometimes starts doing its own thing like some sentient synthesizer being that is alive. This makes it quite an adventurous experience.
It has all the spirit of electronics pioneer Reed Ghazala’s original notion of circuit bending: it’s modification of equipment as a way to “evolve” it into some organic machine life. But that AM radio alone gives it some unique and sci-fi sounds. It sounds like a whole studio for some rich communist-era space epic. And the formants on the filters give you the impression it’s singing to you.
Oh yeah, and there’s a painting, entitled “The Star Shepherd guiding his flock through Palm Springs”. Of course:
Your store-bought synth is now way too new, too generic, and involves too little taped-together assembly.
More of this on the official site, which has an impressive 1996 Web design:
Sound processing machines have always tread a line between necessary tool and creative effect. The latest mixing and mastering bundle from Eventide promises on both those fronts.
It’s called the Elevate 1.5 bundle, with two new plug-ins, and two major updates to existing plug-ins (including the titular Elevate). And it’s made for mixing and mastering, though there’s clearly appeal for production, too.
Kudos to Eventide for at least coming up with clever titles for the stuff in their most recent bundle, in an industry that, like carmakers, so often resorts to unintelligible combinations of letters and numbers. (BMW 325i? UA 1176LN revision H? Who knows?)
So, you get the Punctuate, the Saturate, the Elevate … which are at least descriptive, if slightly sounding like EDM festival stages or energy drink flavors. (I’ll meet you at Saturate! E!-vent1d3 is on in 30! PLUR, bro!) And the … EQuivocate. Okay, that one earns extra points on pun factor, minus a few by sounding like what happens when you get yelled at by your mixing and mastering engineer.
But it’s what the Eventide processors do that’s very cool. Rather than simply emulate vintage gear – since you’ve got plenty of options for that – these are modern processors that focus on redesigning processing in a way that’s closer to how your human hearing works.
And appropriately enough, then, they’re part of a collaboration between Eventide and Newfangled Audio, a sort of boutique DSP algorithm house founded by veteran Eventide engineer Dan Gillespie. (Science!)
EQuivocate is a “human ear” EQ – so a graphic equalizer that’s designed not around a set of theoretical frequency bands, but around frequencies that you actually hear.
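Eventide haven’t published exactly how those bands are placed, but a standard way to space frequencies “as you actually hear” is the Mel scale, where equal steps correspond to roughly equal perceived pitch distances. A hypothetical sketch:

```python
import math

def hz_to_mel(f):
    """Standard O'Shaughnessy Mel-scale conversion."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def perceptual_bands(n_bands, f_lo=20.0, f_hi=20000.0):
    """Center frequencies equally spaced on the Mel scale -- crowded
    in the bass, sparse in the treble, roughly matching the ear.
    (Illustrative; not Eventide's published band layout.)"""
    lo, hi = hz_to_mel(f_lo), hz_to_mel(f_hi)
    step = (hi - lo) / (n_bands - 1)
    return [mel_to_hz(lo + i * step) for i in range(n_bands)]

centers = perceptual_bands(26)  # e.g. 26 bands across the audible range
```

Note how the low bands cluster tightly while the top bands spread out over thousands of Hz – the opposite of a linear band layout.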
Elevate is a combination of multi-band limiter (so you get frequency-specific dynamics if you want), that “human ear” EQ, and audio maximizer. The idea, then, is to control both dynamics and frequency domain to max out your sound in a way that’s human-focused, bringing those integrated tools to bear on the mastering process.
New in those existing tools: EQuivocate gains more controls and range adjustments, and Elevate adds a true peak limiter – so you get the futuristic features but without clipping or becoming broadcast unsafe in the process.
Though the new bundle is only dubbed “1.5,” this release also adds two fascinating all-new creations. And they’re both all about driving the sound, in a day and age that calls for louder sounds, without squashing.
Saturate is a spectral clipper – so in addition to the 24dB drive, you can continuously control the way the sound curves and distorts. (Hey, I said some of this stuff could be fun to abuse in the production phase.)
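Eventide don’t detail Saturate’s math, but the idea of a continuously variable clipping curve can be sketched as a morph between smooth saturation and a hard clip (illustrative only, not their algorithm):

```python
import math

def variable_clip(x, drive=1.0, curve=0.5):
    """Clip with an adjustable knee: curve=0 gives smooth tanh
    saturation, curve=1 approaches a hard clip. Illustrative only --
    not Eventide's actual algorithm."""
    driven = x * drive
    soft = math.tanh(driven)
    hard = max(-1.0, min(1.0, driven))
    return (1.0 - curve) * soft + curve * hard

# Drive a ramp from -2 to 2 through the clipper:
shaped = [variable_clip(x / 10.0, drive=4.0, curve=0.3) for x in range(-20, 21)]
```

Sweeping `curve` continuously changes how abruptly the waveform flattens, which in turn changes which harmonics the distortion adds – the same kind of control the plug-in exposes as a knob.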
Punctuate is a transient emphasis plug-in – so you take 26 bands, again shaped around the human ear, and emphasize or suppress attacks. And that seems really appealing – the idea that you dig into shaping the envelopes of elements in a mix, beyond just applying conventional dynamics processing or compression with some blanket controls over everything. It’s less “big hammer,” more precision tool. (I think it’ll also be interesting to compare this to Accusonus’ Beatformer which – while not the same thing – has some related ideas. DSP zeitgeist, basically.)
I always get a little nervous when magic tools for mixing and mastering are unleashed on producers who don’t entirely know what they’re doing. I should know – I’m one of those people. But on the other hand, the hearing-focused design of these tools and the ways they let you work with dynamics and frequency domain make them interesting to the creative process, too, when it’s actually okay that you’re messing around and breaking the rules.
I wanted to go ahead and write up this news in advance of a review, because I’m going to take a look with a couple of other producers/engineers so we can go 360-degrees on how you might use this. Let us know if this raises any questions you’d like answered (and anything else you’d like to see us review).
AU/VST/AAX, macOS 10.7 and later, Windows 7 and later.
US$139 promo, $199 after that; upgrade from EQuivocate US$99 intro, $149 after. (It’s a free upgrade if you have the original Elevate.)
For you Eventide devotees, here’s the full list of what’s new in the existing two plug-ins in the bundle:
Elevate 1.5 Release Notes:
1. Added True Peak Limiting mode to Elevate as well as True Peak output metering
2. Added new Saturate Spectral Clipper plug-in
3. Added new Punctuate Auditory Transient Emphasis plug-in
4. Alt click now sets sliders to default in both DRAW CURVE on and off modes.
5. Updated some graphics
6. New UPDATE button will inform user when further updates are available
EQuivocate 1.5 Release Notes:
1. Added Range Parameter which will allow you to scale and invert the EQ curve, even after MATCH EQ is locked in.
2. Added Band Activate/Deactivate switches to allow you to hear the effect of each band in context.
3. Alt click now sets sliders to default in both DRAW CURVE on and off modes.
4. Updated some graphics
5. New UPDATE button will inform user when further updates are available
It’s full of gun sounds. But thanks to a combination of a unique sample architecture and engine and a wealth of original assets, the Weaponiser plug-in becomes a weapon of a different kind. It helps you make drum sounds.
Call me a devoted pacifist, call me a wimp – really, either way. Guns actually make me uncomfortable, at least in real life. Of course, we have an entirely separate industry of violent fantasy. And to a sound designer for games or soundtracks, Weaponiser’s benefits should be obvious and dazzling.
But I wanted to take a different angle, and imagine this plug-in as a sort of swords into plowshares project. And it’s not a stretch of the imagination. What better way to create impacts and transients than … well, fire off a whole bunch of artillery at stuff and record the result? With that in mind, I delved deep into Weaponiser. And as a sound instrument, it’s something special.
Like all advanced sound libraries these days, Weaponiser is both an enormous library of sounds, and a powerful bespoke sound engine in which those sounds reside. The Edinburgh-based developers undertook an enormous engineering effort here both to capture field recordings and to build their own engine.
It’s not even all about weapons here, despite the name. There are sound elements unrelated to weapons – there’s even an electronic drum kit. And the underlying architecture combines synthesis components and a multi-effects engine, so it’s not limited to playing back the weapon sounds.
What pulls Weaponiser together, then, is an approach to weapon sounds as a modularized set of components. The top set of tabs is divided into ONSET, BODY, THUMP, and TAIL – which turns out to be a compelling way to conceptualize hard-hitting percussion, generally. We often use vaguely gunshot-related metaphors when talking about percussive sounds, but here, literally, that opens up some possibilities. You “fire” a drum sound, or choose “burst” mode (think automatic and semi-automatic weapons) with an adjustable rate.
This sample-based section is then routed into a mixer with multi-effects capabilities.
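That ONSET/BODY/THUMP/TAIL model is worth internalizing even away from this plug-in: a hard-hitting percussion sound can be assembled from a transient, a main tone, low-end weight, and a decay, each with its own gain and time offset. A toy sketch – the names and numbers are mine, not Krotos’s API:

```python
def layer_hit(components, length):
    """Mix onset/body/thump/tail layers into one percussion hit.
    Each component is (samples, start_offset, gain)."""
    out = [0.0] * length
    for samples, offset, gain in components:
        for n, s in enumerate(samples):
            if offset + n < length:
                out[offset + n] += s * gain
    return out

onset = ([1.0, -0.8, 0.4], 0, 1.0)       # sharp transient, right at the start
body  = ([0.5, 0.4, 0.3, 0.2], 1, 0.8)   # main tone, entering slightly later
thump = ([0.3, 0.3, 0.2, 0.1], 0, 1.2)   # low-end weight under the transient
tail  = ([0.1, 0.05, 0.02], 4, 0.6)      # decay / room
hit = layer_hit([onset, body, thump, tail], 8)
```

The micro-timing offsets are the interesting part: shifting the thump or tail by a few samples changes the perceived punch of the composite hit, which is exactly what Weaponiser’s built-in timeline lets you tune.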
In music production, we’ve grown accustomed to repetitive samples – a Roland TR clap or rimshot that sounds the same every single time. In foley or game sound design, of course, that’s generally a no-no; our ears quickly detect that something is amiss, since real-world sound never repeats that way. So the Krotos engine is replete with variability, multi-sampling, and synthesis. Applied to musical applications, those same characteristics produce a more organic, natural sound, even if the subject has become entirely artificial.
Let’s have a look at those components in turn.
Gun sounds. This is still, of course, the main attraction. Krotos have field recordings of a range of weapons:
For those of you who don’t know gun details, that amounts to pistol, rifle, automatic, semiautomatic, and submachine gun (SMG). These are divided up into samples by the onset/body/thump/tail architecture I’ve already described, plus there are lots of details based on shooting scenario. There are bursts and single fires, sniper shots from a distance, and the like. But maybe most interesting actually are all the sounds around guns – cocking and reloading vintage mechanical weapons, or the sound of bullets impacting bricks or concrete. (Bricks sound different than concrete, in fact.) There are bullets whizzing by.
And that’s just the real weapons. There’s an entire bank devoted to science fiction weapons, and these are entirely speculative. (Try shooting someone with a laser; it … doesn’t really work the way it does in the movies and TV.) Those presets get interesting, too, because they’re rooted in reality. There’s a Beretta fired interdimensionally, for example, and the laser shotguns, while they defy present physics and engineering, still have reloading variants.
In short, these Scottish sound designers spent a lot of time at the shooting range, and then a whole lot more time chained to their desk working with the sampler.
Things that aren’t gun sounds. I didn’t expect to find so many sounds of the non-gun variety, however. There are twenty dedicated kits, which tend toward a sort of IDM / electro crossover, just building drum sounds on this engine. There are a couple of gems in there, too – enough so that I could imagine Krotos following up this package with a selection of drum production tools built on the Weaponiser engine but having nothing to do with bullets or artillery.
Until that happens, you can think of that as a teaser for what the engine can do if you spend time building your own presets. And to that end, you have some other tools:
Variations for each parameter randomize settings to avoid repetition.
Four engines, each polyphonic with their own sets of samples, combine. But the same things that allow you different triggering/burst modes for guns prove useful for percussion. And yes, there’s a “drunk” mode.
A deep multi-effects section with mixing and routing serves up still more options.
Four engines, synthesis. Onset, Body, Thump, and Tail each have associated synthesis engines. Onset and Body are specialized FM synthesizers. Thump is essentially a bass synth. Tail is a convolution reverb – but even that is a bit deeper than it may sound. Tail provides both audio playback and spatialization controls. It might use a recorded tail, or it might trigger an impulse response.
Also, the way samples are played here is polyphonic. Add more samples to a particular engine, and you will trigger different variants, not simply keep re-triggering the same sounds over and over again. That’s the norm for more advanced percussion samplers, but lately electronic drum engines have tended to dumb that down. And – there’s a built-in timeline with adjustable micro-timings, which is something I’ve never seen in a percussion synth/sampler.
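That polyphonic, non-repeating triggering can be mimicked with simple round-robin selection plus a little per-hit randomization – a hypothetical sketch, not the Krotos engine:

```python
import random

class RoundRobinVoice:
    """Cycle through sample variants so consecutive triggers differ,
    with slight random gain variation per hit (illustrative sketch,
    not Krotos's implementation)."""
    def __init__(self, variants, gain_jitter=0.1, seed=0):
        self.variants = variants
        self.index = 0
        self.jitter = gain_jitter
        self.rng = random.Random(seed)

    def trigger(self):
        samples = self.variants[self.index]
        self.index = (self.index + 1) % len(self.variants)  # round-robin
        gain = 1.0 + self.rng.uniform(-self.jitter, self.jitter)
        return [s * gain for s in samples]

voice = RoundRobinVoice([[1.0, 0.5], [0.9, 0.6], [1.1, 0.4]])
hits = [voice.trigger() for _ in range(4)]  # the 4th hit wraps to variant 0
```

Even this crude version already defeats the machine-gun effect of identical retriggers; real engines add filtering, pitch, and start-point variation on top.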
The synth bits have their own parameters, as well, and FM and Amplitude Modulation modes. You can customize carriers and modulators. And you can dive into sample settings, including making radical changes to start and end points, envelope, and speed.
Effects and mixing. Those four polyphonic engines are mixed together in a four-part mix engine, with multi-effects that can be routed in various ways. Then you can apply EQ, Compression, Limiting, Saturation, Ring Modulation, Flanging, Transient Shaping, and Noise Gating.
Oh, you can also use this entire effects engine to process sounds from your DAW, making this a multi-effects engine as well as an instrument.
Is your head spinning yet?
About the sounds
Depending on which edition you grab, from the limited selection of the free 10-day demo up to the “fully loaded” edition, you’ll get as many as 2228 assets, with 1596 edited weapon recordings. There are also 692 “sweeteners” – a grab bag of still more sounds, from synths to a black leopard (the furry feline, really), and the sound recordists messing around with their recording rig, keys, Earth, a bicycle belt… you get the idea. There are also various impulse responses for the convolution reverb engine, allowing you to place your sound in different rooms, stairwells, and synthetic reverbs.
The recording chain itself is worth a look. There are the expected mid/side and stereo recordings, classic Neumann and Sennheiser mics, and a whole lot of use by the Danish maker DPA – including mics positioned directly on the guns in some recordings. But they’ve also included recordings made with the Sennheiser Ambeo VR Mic for 360-degree, virtual reality sound.
They’ve shared some behind-the-scenes shots with CDM, and there’s a short video explaining the process.
In use, for music
Some of the presets are realistic enough that it did really make me uncomfortable at first working with these sounds in a music project – but that was sort of my aim. What I found compelling is, because of this synth engine, I was quickly able to transform those sounds into new, organic, even unrecognizable variations.
There are a number of strategies here that make this really interesting.
You can mess with samples. Adjusting speed and other parameters, as with any samples, of course gives you organic, complex new sounds.
There’s the synthesis engine. Working with the synth options either to reprocess the sounds or on their own allows you to treat Weaponiser basically as a drum synth.
The variations make this sound like acoustic percussion. With subtle or major variations, you can produce sound that’s less repetitive than electronic drums would be.
Mix and match. And, of course, you have presets to warp and combine, the ability to meld synthetic sounds and gun sounds, to sweeten conventional percussion with those additions (synths and guns and leopard sounds)… the mind reels.
Routing, of course is vital, too; here’s their look at that:
In fact, there’s so much, that I could almost go on a separate tangent just working with this musically. I may yet do that, but here is a teaser at what’s possible – starting with the obvious:
But I’m still getting lost in the potential here, reversing sounds, trying the drum kits, working with the synth and effects engines.
The plug-in can get heavy on CPU with all of that going on, obviously, but it’s also possible to render out layers or whole sounds, useful both in production and foley/sound design. Really, my main complaint is the tiny, complex UI, which can mean it takes some time to get the hang of working with everything. But as a sound tool, it’s pretty extraordinary. And you don’t need to have firing shotguns in all your productions – you can add some subtle sweetening, or additional layers and punch to percussion without anyone knowing they’re hearing the Krotos team messing with bike chains and bullets hitting bricks and an imaginary space laser.
Weaponiser runs on a Mac or PC, 64-bit only, in VST, AU, and AAX formats. You’ll need about five and a half gigs of space free. Basic, which is already pretty vast, runs $399 / £259 / €337. Fully loaded is over twice that size, and costs $599 / £379 / €494.
What makes a delay more interesting? A delay that’s combined with spectral controls. What makes that better? Getting it for free. MSpectralDelay is here – and looks like a must-download.
It’s been a while – I’m sure I’m not alone in missing Native Instruments’ Spektral Delay, discontinued some years back. MSpectralDelay is a different animal – NI’s offering had a whopping 160 bands, whereas this has just six – but you do get a powerful, musical interface that lets you treat delays in a different way.
The idea is this: divide up your sound by frequency, with one to six bands, then add the delay effect with tempo sync and apply modulation.
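That routing – split into bands, delay each band independently, mix back – can be sketched like this, assuming the signal has already been split by crossover filters (which the plug-in handles itself; everything below is my illustration):

```python
def delay(signal, n):
    """Delay a signal by n samples, zero-padding the front."""
    if n == 0:
        return list(signal)
    return [0.0] * n + signal[:-n]

def banded_delay(bands, delays, mix=1.0):
    """Delay each frequency band independently, then sum.
    `bands` is the signal already split into per-band signals
    (a real spectral delay does this with its own crossovers)."""
    length = len(bands[0])
    out = [0.0] * length
    for band, n in zip(bands, delays):
        delayed = delay(band, n)
        for i in range(length):
            out[i] += (1.0 - mix) * band[i] + mix * delayed[i]
    return out

low  = [1.0, 1.0, 0.0, 0.0, 0.0, 0.0]  # a pulse in the low band
high = [0.5, 0.0, 0.0, 0.0, 0.0, 0.0]  # a click in the high band
wet = banded_delay([low, high], delays=[0, 3])  # only the highs are delayed
```

Give each band its own tempo-synced delay time and modulate those times, and you get the smeared, frequency-dependent echoes that make spectral delays sound so different from a single tap.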
What the developers Melda have done to set their offering apart is to provide really precise parameter controls with clear visual feedback, MIDI control of everything, and clever features like automatic gain compensation and a “safety” limiter to prevent you from overdriving the results.
Also surprising: not only is there mid/side processing, but you can set up to eight channels of surround, offering some spatial applications.
Melda plugins also feature some nice standard features like modulators with time signatures, morphing and preset recall, different channel modes, and more.
Full feature list from the devs:
The most advanced user interface on the market – stylable, resizable, GPU accelerated
Dual user interface, easy screen for beginners, edit screen for professionals
Unique visualisation engine with classic meters and time graphs
1-6 fully configurable independent bands
Adjustable oscillator shape technology
M/S, single channel, up to 8 channels surround processing…
Automatic gain compensation (AGC)
Adjustable up-sampling 1x-16x
Synchronization to host tempo
MIDI controllers with MIDI learn
64-bit processing and an unlimited sampling rate
Extremely fast, optimized for newest AVX2 capable processors
Global preset management and online preset exchange
Supports VST, VST3, AU and AAX interfaces on Windows & Mac, both 32-bit and 64-bit
No dongle nor internet access is required for activation
There’s also this kind of funny demo video, which first explains why you want a delay, and then – as is custom in our industry – tells you that, naturally, everyone from complete beginners who barely know how to switch on their computer to advanced professionals will be able to have exactly the same experience, because presets, parameters, blah blah.
That said… well, you do need a delay. And this is awesome. And beginners and pros will probably have fun with it. And there are presets. So… fair points, all.
It’s time for another trip into the strange and wonderful world of artist-created Reaktor ensembles. This time, our guide is dub techno maestro Deadbeat.
The Canadian-born, Berlin-based Scott Monteith is an artist whose chops are at peak maturity, from timbre to rhythm, recording to mix. And Scott’s latest, Wax Poetic For This Our Great Resolve, is both more personal — pulling from inspirational texts from friends — and more sonically intimate. The entire album sounds open and airy and organic, thanks to using acoustic re-recording of electronic elements. Every percussion hit, every synth line was either recorded in real space in the studio or recorded out of the box and into that open space and then miked.
With all this focus on acoustic recording and re-recording, you’d think there wouldn’t be much to say about software – but you’d be wrong. There’s yet more shade and color around these sounds that’s produced by synthetic processing, a whole lot of it in Reaktor.
“There’s tons and tons of extra stuff that you would normally delete in vocal takes or guitar takes or whatever that ended up as sauce for feeding vocoders or feeding [Reaktor ensemble] grainstates,” says Scott, “or even some of the real classic [ensembles].” You’re hearing some of that in the hyperreal, clear color of the arrangements and mix.
“I think it’s nice to treat that stuff completely independently,” Scott says, “and then you end up with this bank of stuff that you know is going to be in key. And it’s somehow relatable, whether it be melodically or aesthetically – because you’ve fed it this stuff from a particular track. And then you go back to arrangement mode, because then I can take off my sound designer’s hat and put on my arrangers’ hat.”
Scott is confident enough in his skills to give that secret sauce away, so here’s a tour. Some of these are some long-lost gems of the library, too, so don’t expect to find them just by sorting for the latest or most popular ensembles. Some of these were used on this particular record; others represent related techniques but have been used on other productions.
“I’m using that just to add color to things. I love vocoders, period.
It’s like taking the vocals of Gudrun talking or Fatima talking, and using that as the modulator and the carrier signal being the chords in the track. Or it could also be the extra recording of the high hats in the room, and vocoding the vocals with that. So, then you have something rhythmic that’s the same, and in the same air, but then can be free as its own track. Or taking the guitar or the bass…”
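The modulator/carrier relationship Scott describes is the core of any channel vocoder: measure the modulator’s energy in each frequency band, then apply those energies as gains to the corresponding bands of the carrier. A bare-bones sketch, assuming pre-split bands (real vocoders use filter banks and smoothed envelope followers):

```python
def band_energy(band):
    """Crude envelope measurement: mean absolute level of a band."""
    return sum(abs(s) for s in band) / len(band)

def vocode(modulator_bands, carrier_bands):
    """Impose the modulator's per-band energies onto the carrier,
    band by band, then sum back into one signal."""
    out_bands = []
    for mod, car in zip(modulator_bands, carrier_bands):
        gain = band_energy(mod)
        out_bands.append([s * gain for s in car])
    return [sum(samples) for samples in zip(*out_bands)]

voice_bands = [[0.8, 0.6], [0.1, 0.1]]    # modulator: strong low band (speech)
synth_bands = [[1.0, -1.0], [1.0, -1.0]]  # carrier: equal energy everywhere (chords)
out = vocode(voice_bands, synth_bands)
```

Swap the roles – hi-hats as modulator, vocals as carrier – and you get exactly the rhythmic cross-pollination Scott describes.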
There’s this preset – ‘Coming Up From Hell.’ I use that a lot – I’ve been using that for years. If you’re rolling along, and you want to create density, it’s like, okay, flip this into the Ultimate Reverb, and all of a sudden you’ve got this underlying cloud of ffffoooooosssssh. You’ve made things thick without adding another element.
And that with some sort of distortion, and some sort of sidechain compression to make sure that it doesn’t get in the way of anything — all of a sudden, you’ve created raging hell.”
Don’t forget the granular Reaktor ensemble that started the craze. Martin’s landmark granular processor has had an influence even outside the Reaktor community on imagining how grain processing effects can be used as instruments.
Hacking together custom ensembles
The biggest advantage of using Reaktor as a modular environment is, you can hack together what you need if a particular tool doesn’t do exactly what you want. Scott long ago made his name as a Reaktor patcher, but don’t feel obligated to achieve mastery — even he doesn’t necessarily go that route now. “The last one that I did … this thing [Deadbeats] 13 years ago.”
The aforementioned Grain Cloud synth, for instance, he used to substitute oscillators inside a drum machine. Or with granular processors, he’s swapped a sample player with a live input, as on The Swarm. These aren’t complicated hacks – you barely need to know how to operate Reaktor to pull them off. But they then open worlds of new performance and sound design possibilities.
In another instance, Scott had a happy accident hacking mmmd1, the “morphing minimal drum machine” by grainstates creator Martin Brinkmann. That ensemble includes a series of assignable X/Y controllers which can modulate the filter, bitcrush, and so on, with step-based sequencing.
Scott tried applying a child ensemble with a crossfader for interpolating between presets – and that’s when he was surprised. “Because this is step-based, morphing between presets on this thing, as you would go across, it would go thththththththththt …. and you would get these totally twisted, glitchy crossfade things.”
Thanks, Scott! Got more favorite Reaktor ensembles, other granular tools, or the like? Let us know in comments.
Envelop began life by opening a space for exploring 3D sound, directed by Christopher Willits. But today, the nonprofit is also releasing a set of free spatial sound tools you can use in Ableton Live 10 – and we’ve got an exclusive first look.
First, let’s back up. Listening to sound in three dimensions is not just some high-tech gimmick. It’s how you hear naturally with two ears. The way that actually works is complex – the Wikipedia overview alone is dense – but close your eyes, tilt your head a little, and listen to what’s around you. Space is everything.
And just as in the leap from mono to stereo, space can change a musical mix – it allows clarity and composition of sonic elements in a new way, which can transform its impact. So it really feels like the time is right to add three dimensions to the experience of music and sound, personally and in performance.
Intuitively, 3D sound seems even more natural than visual counterparts. You don’t need to don weird new stuff on your head, or accept disorienting inputs, or rely on something like 19th century stereoscopic illusions. Sound is already as ephemeral as air (quite literally), and so, too, is 3D sound.
So, what’s holding us back?
Well, stereo sound required a chain of gear, from delivery to speaker. But those delivery mechanisms are fast evolving for 3D, and not just in terms of proprietary cinema setups.
But stereo audio also required something else to take off: mixers with pan pots. Stereo effects. (Okay, some musicians still don’t know how to use this and leave everything dead center, but that only proves my point.) Stereo only happened because tools made its use accessible to musicians.
Looking at something like Envelop’s new tools for Ableton Live 10, you see something like the equivalent of those first pan pots. Add some free devices to Live, and you can improvise with space, hear the results through headphones, and scale up to as many speakers as you want, or deliver to a growing, standardized set of virtual reality / 3D / game / immersive environments.
And that could open the floodgates for 3D music mixing. (Maybe it could even open your own creative floodgates.)
Envelop tools for Live 10
Today, Envelop for Live (E4L) has hit GitHub. It’s not a completely free set of tools – you need the full version of Ableton Live Suite, and Live 10 at minimum (since it provides the requisite multi-point audio plumbing). Provided you’re working from that as a base, though, musicians get a set of Max for Live-powered devices for spatial audio production and live performance, and developers get a set of tools for creating their own effects.
It’s beautiful, elegant software – the friendliest I’ve seen yet to take on spatial audio, and very much in the spirit of Ableton’s own software. Kudos to core developers Mark Slee, Roddy Lindsay, and Rama Gotfried.
Here’s the basic idea of how the whole package works.
Output. There’s a Master Bus device that stands in for your output buses. It decodes your spatial audio, and adapts routing to however many speakers you’ve got connected – whether that’s just your headphones or four speakers or a huge speaker array. (That’s the advantage of having a scalable system – more on that in a moment.)
Sources. Live 10’s Mixer may be built largely with the idea of mixing tracks down to stereo, but you probably already think of it sort of as a set of particular musical materials – as sources. The Source Panner device, added to each track, lets you position that particular musical/sonic entity in three-dimensional space.
Processors. Any good 3D system needs not only 3D positioning, but also separate effects and tools – because normal delays, reverbs, and the like presume left/right or mid/side stereo output. (Part of what completes the immersive effect is hearing not only the positioning of the source, but reflections around it.)
In this package, you get:
Spinner: automates motion in 3D space, horizontally and with vertical oscillations
B-Format Sampler: plays back existing Ambisonics wave files (think samples with spatial information already encoded in them)
B-Format Convolution Reverb: a convolution reverb that works with three-dimensional information, not just two-dimensional – in other words, exactly what you’d want from a convolution reverb
Multi-Delay: cascading, three-dimensional delays from a mono source
HOA Transform: without getting deep into Ambisonics theory, this molds and shapes the spatial sound field in real time
Meter: spatial metering. Cool.
Spinner, for automating movement.
Convolution reverb, Ambisonics style.
Envelop SF and Envelop Satellite venues also have some LED effects, so you’ll find some devices for controlling those (which might also be useful templates for stuff you’re doing).
All of this spatial information is represented via a technique called Ambisonics. Basically, any spatial system – even stereo – involves applying some maths to determine relative amplitude and timing of a signal to create particular impressions of space and depth. What sets Ambisonics apart is, it represents the spatial field – the sphere of sound positions around the listener – separately from the individual speakers. So you can imagine your sound positions existing in some perfect virtual space, then being translated back to however many speakers are available.
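To make that concrete, here’s a minimal sketch – not Envelop’s actual code; the function name and the classic first-order “B-format” weighting are illustrative assumptions – of how a mono source gets encoded into a speaker-independent spatial field:

```python
import math

def encode_first_order(sample, azimuth, elevation):
    """Encode one mono sample into first-order Ambisonics B-format.

    W is the omnidirectional component; X, Y, Z are figure-of-eight
    components pointing front, left, and up. Note there's no mention
    of speakers anywhere - the position lives in the field itself.
    Angles are in radians; the traditional -3 dB weight on W is used.
    """
    w = sample / math.sqrt(2)
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return w, x, y, z
```

A source panned dead ahead (azimuth 0, elevation 0) puts all of its directional energy into X; rotate the azimuth and it smoothly redistributes into Y, with W always carrying the omnidirectional part.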
This scalability really matters. Just want to check things out with headphones? Set your master device to “binaural,” and you’ll get a decent approximation through your headphones. Or set up four speakers in your studio, or eight. Or plug into a big array of speakers at a planetarium or a cinema. You just have to route the outputs, and the software decoding adapts.
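Here’s a minimal sketch of that decoding step – again an illustrative assumption, not Envelop’s actual decoder, and real decoders differ in their normalization – showing how the same encoded field renders to however many speakers you route:

```python
import math

def decode_to_ring(w, x, y, speaker_azimuths):
    """Render horizontal-only first-order B-format to a ring of
    speakers at the given azimuths (radians), using a basic
    projection decode. The same W/X/Y values work for 4 speakers,
    8, or more - only the speaker list changes."""
    return [w * math.sqrt(2) + x * math.cos(a) + y * math.sin(a)
            for a in speaker_azimuths]

# A source encoded straight ahead (W = 1/sqrt(2), X = 1, Y = 0),
# sent to a square of four speakers: front, left, back, right.
square = [0.0, math.pi / 2, math.pi, -math.pi / 2]
feeds = decode_to_ring(1 / math.sqrt(2), 1.0, 0.0, square)
```

The front speaker gets the strongest feed and the rear the weakest; swap in eight azimuths and the same encoded material decodes to an octagon – which is exactly why the representation travels between venues.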
Envelop is by no means the first set of tools to help you do this – the technique dates back to the 70s, and various software implementations have evolved over the years, many of them free – but it is uniquely easy to use inside Ableton Live.
Open source, standards
Free software. It’s significant that Envelop’s tools are available as free and open source. Max/MSP, Max for Live, and Ableton Live are proprietary tools, but the patches and externals exist independently, and a free license means you’re free to learn from or modify the code and patches. Plus, because they’re free in cost, you can share your projects across machines and users, provided everybody’s on Live 10 Suite.
Advanced Max/MSP users will probably already be familiar with the basic tools on which the Envelop team have built. They’re the work of the Institute for Computer Music and Sound Technology (ICST), at the Zürcher Hochschule der Künste in Zurich, Switzerland. ICST have produced a set of open source Ambisonics externals for Max/MSP.
Their site is a wealth of research and other free tools, many of them additionally applicable to fully free and open source environments like Pure Data and Csound.
But Live has always been uniquely accessible for trying out ideas. Building a set of friendly Live devices takes those underlying tools and makes them feel at home in the Live paradigm.
Non-proprietary standards. There’s a strong push toward proprietary techniques in cinema spatial audio – Dolby, for instance, we’re looking at you. But while proprietary technology and licensing may make sense for big cinema distributors, it’s absolute death for musicians, who likely want to tour with their work from place to place.
The underlying techniques here are all fully open and standardized. Ambisonics work with a whole lot of different 3D use cases, from personal VR to big live performances. By definition, they don’t define the sound space in a way that’s particular to any specific set of speakers, so they’re mobile by design.
The larger open ecosystem. Envelop will make these tools new to people who haven’t seen them before, but it’s also important that they share an approach, a basis in research, and technological compatibility with other tools.
That includes the German ZKM’s Zirkonium system, HoaLibrary (that repository is deprecated but links to a bunch of implementations for Pd, Csound, OpenFrameworks, and so on), and IRCAM’s SPAT. All these systems support ambisonics – some support other systems, too – and some or all components include free and open licensing.
I bring that up because I think Envelop is stronger for being part of that ecosystem. None of these systems requires a proprietary speaker delivery system – though they’ll work with those cinema setups, too, if called upon to do so. Musical techniques, and even some encoded spatial data, can transfer between systems.
That is, if you’re learning spatial sound as a kind of instrument, here you don’t have to learn each new corporate-controlled system as if it’s a new instrument, or remake your music to move from one setting to another.
Envelop, the physical version
You do need compelling venues to make spatial sound’s payoff apparent – and Envelop are building their own venues for musicians. Their Envelop SF venue is a permanent space in San Francisco, dedicated to spatial listening and research. Envelop Satellite is a mobile counterpart to that, which can tour festivals and so on.
Envelop SF: 32 speakers in all – 24 arranged in 3 rings of 8 (the speakers in the columns), plus 4 ceiling speakers and 4 subs. (A 28.4 configuration.)
The competition, as far as venues: 4DSOUND and Berlin’s Monom, which houses a 4DSOUND system, are similar in function, but use their own proprietary tools paired with the system. They’ve said they plan a mobile system, but there’s no word on when it will be available. The Berlin Institute of Sound and Music’s Hexadome uses off-the-shelf ZKM and IRCAM tools and pairs them with projection surfaces. It’s a mobile system by design, but there’s nothing particularly unique about its sound array or toolset. In fact, you could certainly use Envelop’s tools with any of these venues, and I suspect some musicians will.
There are also many multi-speaker arrays housed in music venues, immersive audiovisual venues, planetariums, cinemas, and so on. So long as you can get access to multichannel interfacing with those systems, you could use Envelop for Live with all of these. The only obstacle, really, is whether these venues embrace immersive, 3D programming and live performance.
In addition to venues, there’s also a growing ecosystem of products for production and delivery, one that spans musical venues and personal immersive media.
To put that more simply: after well over a century of recording devices and production products assuming mono or stereo, now they’re also accommodating the three dimensions your two ears and brain have always been able to perceive. And you’ll be able to enjoy the results whether you’re on your couch with a headset on, or whether you prefer to go out to a live venue.
Ambisonics-powered products now include Facebook 360, Google VR, Waves, GoPro, and others, with more on the way, for virtual and augmented reality. So you can use Live 10 and Envelop for Live as a production tool for making music and sound design for those environments.
Steinberg are adopting ambisonics, too (via Nuendo). Here’s Waves’ guide – they now make plug-ins that support the format, and this is perhaps easier to follow than the Wikipedia article (and relevant to Envelop for Live, too):
Ableton Live with Max for Live has served as an effective prototyping environment for audio plug-ins, too. So developers could pick up Envelop for Live’s components, try out an idea, and later turn that into other software or hardware.
I’m personally excited about these tools and the direction of live venues and new art experiences – well beyond what’s just in commercial VR and gaming. And I’ve worked enough on spatial audio systems to at least say, there’s real potential. I wouldn’t want to keep stereo panning to myself, so it’s great to get to share this with you, too. Let us know what you’d like to see in terms of coverage, tutorial or otherwise, and if there’s more you want to know from the Envelop team.
Thanks to Christopher Willits for his help on this.
Here’s a report from the 4DSOUND hacklab I co-hosted during Amsterdam Dance Event in 2014 – still relevant in these other contexts: having open tools and more experimentation will expand our understanding of what’s possible, what works, and what doesn’t:
Plus, for fun, here’s Robert Lippok [Raster] and me playing live on that system and exploring architecture in sound, as captured in a binaural recording by Frank Bretschneider [also Raster] during our performance for ADE 2014. Binaural recording of spatial systems is really challenging, but I found it interesting in that it created its own sort of sonic entity. Frank’s own work was recently featured on the Hexadome.
One thing we couldn’t easily do was move that performance to other systems. Now, this begins to evolve.
The growing power of gaming architectures for visuals has a side benefit: it can produce elaborate visuals without touching the CPU, which is busy on musicians’ machines dealing with sound.
But how do you go about exploring some of that power? The code language spoken natively by the GPU is a little frightening at first. Fortunately, you can actually have a play in a few minutes. It’s easy enough that I prepared this lightning tutorial:
I shared this with the #RazerMusic program as it’s in fact a good artistic application for laptops with gaming architectures – and it’s terrific having that NVIDIA GTX 1060 with 6 GB of memory. (This example can’t even begin to show that off, in fact.) These steps will work on the Mac, too, though.
I’m stealing a demo here. Isadora creator Mark Coniglio showed off his team’s GLSL support more or less like this when they unveiled the feature at the Isadora Werkstatt a couple of summers ago. But Isadora – while known among a handful of live visualists and people working with dance and theater tech – is itself, I think, underrated. And sure enough, this support makes the powers of GLSL friendly to non-programmers. You can grab some shader code and then modify parameters or combine it with other effects, modular style, without delving into the code itself. Or if you are learning GLSL (or even experienced with it), Isadora provides an uncommonly convenient environment for working with graphics-accelerated generative visuals and effects.
If you’re not quite ready to commit to the tool, Isadora has a fully functioning demo version, so you can get this far – then look around and decide if buying a license is right for you. What I do like about it is, apart from some easy-to-use patching powers, Isadora’s scene-based architecture works well in live music, theater, dance, and other performance arts. (I still happily use it alongside stuff like Processing, openFrameworks, and TouchDesigner.)
There is a lot of possibility here. And if you dig around, you’ll see pretty radically different aesthetics are possible, too.
Here’s an experiment also using mods to the GLSL facility in Isadora, by Czech artist Gabriela Prochazka (as I jam on one of my tunes live).