To master sound design, no technology can top your own hearing. That’s the message from Francis Preve, who gave a gripping talk at Ableton Loop. Now we’ve got video – and more discussion. Nothing is sacred – not even the vaunted TB-303 filter.
It’s really easy to fall into the trap of defining specialization in the narrowest terms possible, chasing worth in whatever trend is generating it at the moment. But part of why I’ve been glad to know Fran over the years is that his knowledge and experience are deep and far-reaching, and that he adapts that ability to a range of work. That is, if you ever worry about how to live off your love of music and machines, Fran is a great model: he’s built a skill set that can shift to new opportunities when times change.
So, essentially what he can do is understand sound, technology, and music, put them together, and apply that to diverse results. He’s quietly been a big part of sound design for clients from Dave Smith to KORG to Ableton. He teaches, and keeps up a huge workload of writing and editing. He’s run a label, been a producer, and made hit remixes. And now he has his own unique sound design products, Symplesound and his Scapes series, which act as a calling card for his ability to produce sounds and articulate their significance.
Francis isn’t shy about sharing his thought process. But as with his presets, that means you can learn that thinking method and then apply it to your own work. And that’s how we started at Ableton Loop, beginning with some listening.
Maybe most poetic: finding the same joy in teaching as you do in gardening.
About the 303…
There are a bunch of mini TED talk-style inspirational moments in there, but maybe the most quotable came in Francis’ take on resonance – and the TB-303.
But wait a minute – even if you love the 303, it’s worth listening to Francis’ analysis of why it sits at the edge between success and failure. (And actually, part of why I like the TB-303 personally is that I don’t feel obligated to like it.) Fran re-watched our talk and chose to elaborate for CDM:
To further explain my point, Nate Harrison’s Bassline Baseline is a wonderful historical analysis of the whole 303 phenomenon and why it was initially unsuccessful.
That said, I feel quite differently about the TB-03 and expressed this in my 2016 review for Electronic Musician. For starters, it expands greatly on the original’s synthesis parameters—adding distortion, delay, and reverb—which vastly broadens its tonal palette. These effects were also essential components of the “acid house” sound, as most 303 owners relied on them to beef up its thin, resonant flavor. The TB-03 also addressed the original 303’s absolutely opaque approach to sequencing, which resolves my other issue with the first unit (and the music it produced).
So, while I generally dislike the sound of envelope modulated resonant lowpass filters, I wanted to clarify my statements on the 303 and specifically the TB-03. It’s common knowledge that I’m a diehard Roland user and frankly, the TR-8S and System-8 are cornerstones of my current rig (as well as an original SH-101), but after 35 years, I still can’t find a way to enjoy the original 303.
Here’s actually where Francis and I agree – and I’ve taken some flak for saying I thought the TB-03 improves on the original. But that little Boutique often finds its way into my luggage when I’m playing live for this very reason, and I know I’m not alone. (And I do like the original 303 and acid house and acid techno – and I love cilantro, too, as it happens!)
Get more of Fran’s brain (and sounds)
Francis has a regular masterclass series for Electronic Musician. Of particular interest: deep dives into Ableton’s new Wavetable device in Live 10 and the latest Propellerhead Reason instruments, the phenomenal Europa and Grain.
Since 2016, Francis has added sounds to:
– Ableton Live 10
– Korg Prologue
– Dave Smith REV2
– Korg Gadget
– Korg iMonoPoly
– Propellerhead Reason
– Xfer preset packs
– Various Symplesound products
New physical modeling sounds for AAS’ unique Chromaphone.
Serum is a heavyweight among producers; Fran’s got your tools for Xfer.
(Other clients over the years: Propellerhead, Roland, iZotope)
And this year, so far:
DSI Prophet X
AAS Solids Chromaphone 2 Pack (arriving next week – rather keen for this one; physical modeling in Chromaphone is great!)
System-8 and Roland Cloud Synthwave pack (with Carma Studios)
Xfer Serum Toolkit Vol 3 (summer release)
Major multi-platform Symplesound release
More Scapes based on field recordings (Fran is roaming with a camper van now) – he says he’s “cracked the code for recreating fire in Ableton”
Live 10 (literally hundreds of presets, mostly Operator and quite a few wavetables)
Korg Prologue, Gadget, and iMonoPoly
Dave Smith REV2
Xfer Serum Toolkit Vol 2 expansion pack – https://www.xferrecords.com/preset_packs/serum_toolkit_2
Scapes – https://www.francispreve.com/scapes/
But the big hit is perhaps the one we debuted here on CDM:
Envelop began life by opening a space for exploring 3D sound, directed by Christopher Willits. But today, the nonprofit is also releasing a set of free spatial sound tools you can use in Ableton Live 10 – and we’ve got an exclusive first look.
First, let’s back up. Listening to sound in three dimensions is not just some high-tech gimmick. It’s how you hear naturally with two ears. The way that actually works is complex – the Wikipedia overview alone is dense – but close your eyes, tilt your head a little, and listen to what’s around you. Space is everything.
And just as with the leap from mono to stereo, space can change a musical mix – it allows you to arrange and clarify sonic elements in a new way, which can transform the mix’s impact. So it really feels like the time is right to add three dimensions to the experience of music and sound, personally and in performance.
Intuitively, 3D sound seems even more natural than visual counterparts. You don’t need to don weird new stuff on your head, or accept disorienting inputs, or rely on something like 19th century stereoscopic illusions. Sound is already as ephemeral as air (quite literally), and so, too, is 3D sound.
So, what’s holding us back?
Well, stereo sound required a chain of gear, from delivery to speaker. But those delivery mechanisms are fast evolving for 3D, and not just in terms of proprietary cinema setups.
But stereo audio also required something else to take off: mixers with pan pots. Stereo effects. (Okay, some musicians still don’t know how to use this and leave everything dead center, but that only proves my point.) Stereo only happened because tools made its use accessible to musicians.
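Those pan pots are, at heart, a tiny bit of math. As a rough illustration only (not Live’s actual pan law), here’s what an equal-power pan looks like in a few lines of Python:

```python
import math

def equal_power_pan(sample, pan):
    """Equal-power panning: pan in [-1, 1], -1 = hard left, 1 = hard right.

    Mapping pan onto an angle between 0 and pi/2 keeps total power
    (left**2 + right**2) constant, so a sound doesn't dip in loudness
    at the center the way a naive linear crossfade does.
    """
    theta = (pan + 1) * math.pi / 4  # -1..1 -> 0..pi/2
    return sample * math.cos(theta), sample * math.sin(theta)

# Dead center: both channels sit at ~0.707 (-3 dB), not 0.5
left, right = equal_power_pan(1.0, 0.0)
```

Spatial panners generalize exactly this idea from one angle to a full sphere of directions – which is where the devices below come in.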
Looking at something like Envelop’s new tools for Ableton Live 10, you see something like the equivalent of those first pan pots. Add some free devices to Live, and you can improvise with space, hear the results through headphones, and scale up to as many speakers as you want, or deliver to a growing, standardized set of virtual reality / 3D / game / immersive environments.
And that could open the floodgates for mixing music in 3D. (Maybe it could even open your own floodgates.)
Envelop tools for Live 10
Today, Envelop for Live (E4L) has hit GitHub. It’s not a completely free set of tools – you need the full version of Ableton Live Suite, with Live 10 as the minimum (since it provides the requisite multi-point audio plumbing). Provided you’re working from that as a base, though, musicians get a set of Max for Live-powered devices for spatial audio production and live performance, and developers get a set of tools for creating their own effects.
It’s beautiful, elegant software – the friendliest I’ve seen yet to take on spatial audio, and very much in the spirit of Ableton’s own software. Kudos to core developers Mark Slee, Roddy Lindsay, and Rama Gotfried.
Here’s the basic idea of how the whole package works.
Output. There’s a Master Bus device that stands in for your output buses. It decodes your spatial audio, and adapts routing to however many speakers you’ve got connected – whether that’s just your headphones or four speakers or a huge speaker array. (That’s the advantage of having a scalable system – more on that in a moment.)
Sources. Live 10’s Mixer may be built largely with the idea of mixing tracks down to stereo, but you probably already think of it sort of as a set of particular musical materials – as sources. The Source Panner device, added to each track, lets you position that particular musical/sonic entity in three-dimensional space.
Processors. Any good 3D system needs not only 3D positioning, but also separate effects and tools – because normal delays, reverbs, and the like presume left/right or mid/side stereo output. (Part of what completes the immersive effect is hearing not only the positioning of the source, but reflections around it.)
In this package, you get:
– Spinner: automates motion in 3D space, horizontally and with vertical oscillations
– B-Format Sampler: plays back existing Ambisonics wave files (think samples with spatial information already encoded in them)
– B-Format Convolution Reverb: a convolution reverb that works with three-dimensional information, not just two-dimensional – in other words, exactly what you’d want from a convolution reverb
– Multi-Delay: cascading, three-dimensional delays out of a mono source
– HOA Transform: without getting deep into Ambisonics theory, this molds and shapes the spatial sound field in real time
– Meter: spatial metering. Cool.
Spinner, for automating movement.
Convolution reverb, Ambisonics style.
Envelop SF and Envelop Satellite venues also have some LED effects, so you’ll find some devices for controlling those (which might also be useful templates for stuff you’re doing).
All of this spatial information is represented via a technique called Ambisonics. Basically, any spatial system – even stereo – involves applying some maths to determine relative amplitude and timing of a signal to create particular impressions of space and depth. What sets Ambisonics apart is, it represents the spatial field – the sphere of sound positions around the listener – separately from the individual speakers. So you can imagine your sound positions existing in some perfect virtual space, then being translated back to however many speakers are available.
This scalability really matters. Just want to check things out with headphones? Set your master device to “binaural,” and you’ll get a decent approximation through your headphones. Or set up four speakers in your studio, or eight. Or plug into a big array of speakers at a planetarium or a cinema. You just have to route the outputs, and the software decoding adapts.
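To make that separation concrete, here’s a toy first-order Ambisonics encoder and projection decoder in Python. This is purely illustrative (real decoders, including Envelop’s, are far more sophisticated, and binaural decoding adds head-related filtering), but it shows how the encoded field stays fixed while only the decoding stage knows about speakers:

```python
import math

def encode_b_format(sample, azimuth, elevation):
    """Encode a mono sample into first-order B-format (traditional
    FuMa weighting): an omni channel (W) plus three figure-of-eight
    channels (X front/back, Y left/right, Z up/down)."""
    w = sample * math.sqrt(0.5)  # omni component, -3 dB
    x = sample * math.cos(azimuth) * math.cos(elevation)
    y = sample * math.sin(azimuth) * math.cos(elevation)
    z = sample * math.sin(elevation)
    return (w, x, y, z)

def decode(b_format, speaker_positions):
    """Naive projection decoder: one gain per speaker, for ANY number
    of speakers. The encoded field never changes; only this stage
    adapts to the rig."""
    w, x, y, z = b_format
    return [0.5 * (w * math.sqrt(2)
                   + x * math.cos(az) * math.cos(el)
                   + y * math.sin(az) * math.cos(el)
                   + z * math.sin(el))
            for az, el in speaker_positions]

# The same encoded signal can feed a quad rig, an 8-ring, or more:
b = encode_b_format(1.0, azimuth=math.pi / 2, elevation=0.0)  # hard left
quad = decode(b, [(math.pi / 4, 0), (3 * math.pi / 4, 0),
                  (-3 * math.pi / 4, 0), (-math.pi / 4, 0)])
```

Note how the left-hand speakers in `quad` come out louder than the right-hand ones – and re-running `decode` with eight positions instead of four needs no re-encoding at all.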
Envelop is by no means the first set of tools to help you do this – the technique dates back to the 70s, and various software implementations have evolved over the years, many of them free – but it is uniquely easy to use inside Ableton Live.
Open source, standards
Free software. It’s significant that Envelop’s tools are available as free and open source. Max/MSP, Max for Live, and Ableton Live are proprietary tools, but the patches and externals exist independently, and a free license means you’re free to learn from or modify the code and patches. Plus, because they’re free in cost, you can share your projects across machines and users, provided everybody’s on Live 10 Suite.
Advanced Max/MSP users will probably already be familiar with the basic tools on which the Envelop team have built. They’re the work of the Institute for Computer Music and Sound Technology (ICST) at the Zürcher Hochschule der Künste in Zurich, Switzerland. ICST have produced a set of open source externals for Max/MSP:
Their site is a wealth of research and other free tools, many of them additionally applicable to fully free and open source environments like Pure Data and Csound.
But Live has always been uniquely accessible for trying out ideas. Building a set of friendly Live devices takes these tools and makes them fit naturally into the Live paradigm.
Non-proprietary standards. There’s a strong push toward proprietary techniques in spatial audio in the cinema – Dolby, for instance, we’re looking at you. But while proprietary technology and licensing may make sense for big cinema distributors, it’s absolute death for musicians, who likely want to tour with their work from place to place.
The underlying techniques here are all fully open and standardized. Ambisonics works with a whole lot of different 3D use cases, from personal VR to big live performances. By definition, it doesn’t define the sound space in a way that’s particular to any specific set of speakers, so it’s mobile by design.
The larger open ecosystem. Envelop will make these tools new to people who haven’t seen them before, but it’s also important that they share an approach, a basis in research, and technological compatibility with other tools.
That includes the German ZKM’s Zirkonium system, HoaLibrary (that repository is deprecated but links to a bunch of implementations for Pd, Csound, OpenFrameworks, and so on), and IRCAM’s SPAT. All these systems support ambisonics – some support other systems, too – and some or all components include free and open licensing.
I bring that up because I think Envelop is stronger for being part of that ecosystem. None of these systems requires a proprietary speaker delivery system – though they’ll work with those cinema setups, too, if called upon to do so. Musical techniques, and even some encoded spatial data, can transfer between systems.
That is, if you’re learning spatial sound as a kind of instrument, here you don’t have to learn each new corporate-controlled system as if it’s a new instrument, or remake your music to move from one setting to another.
Envelop, the physical version
You do need compelling venues to make spatial sound’s payoff apparent – and Envelop are building their own venues for musicians. Their Envelop SF venue is a permanent space in San Francisco, dedicated to spatial listening and research. Envelop Satellite is a mobile counterpart to that, which can tour festivals and so on.
Envelop SF: 32 speakers in all – 24 set in 3 rings of 8 (the speakers in the columns), 4 ceiling speakers overhead, plus 4 subs (a 28.4 configuration).
The competition, as far as venues: 4DSOUND and Berlin’s Monom, which houses a 4DSOUND system, are similar in function, but use their own proprietary tools paired with the system. They’ve said they plan a mobile system, but no word on when it will be available. The Berlin Institute of Sound and Music’s Hexadome uses off-the-shelf ZKM and IRCAM tools and pairs them with projection surfaces. It’s a mobile system by design, but there’s nothing particularly unique about its sound array or toolset. In fact, you could certainly use Envelop’s tools with any of these venues, and I suspect some musicians will.
There are also many multi-speaker arrays housed in music venues, immersive audiovisual venues, planetariums, cinemas, and so on. So long as you can get access to multichannel interfacing with those systems, you could use Envelop for Live with all of these. The only obstacle, really, is whether these venues embrace immersive, 3D programming and live performance.
In addition to venues, there’s also a growing ecosystem of products for production and delivery, one that spans musical venues and personal immersive media.
To put that more simply: after well over a century of recording devices and production products assuming mono or stereo, now they’re also accommodating the three dimensions your two ears and brain have always been able to perceive. And you’ll be able to enjoy the results whether you’re on your couch with a headset on, or whether you prefer to go out to a live venue.
Ambisonics-powered products now include Facebook 360, Google VR, Waves, GoPro, and others, with more on the way, for virtual and augmented reality. So you can use Live 10 and Envelop for Live as a production tool for making music and sound design for those environments.
Steinberg are adopting ambisonics, too (via Nuendo). Here’s Waves’ guide – they now make plug-ins that support the format, and this is perhaps easier to follow than the Wikipedia article (and relevant to Envelop for Live, too):
Ableton Live with Max for Live has served as an effective prototyping environment for audio plug-ins, too. So developers could pick up Envelop for Live’s components, try out an idea, and later turn that into other software or hardware.
I’m personally excited about these tools and the direction of live venues and new art experiences – well beyond what’s just in commercial VR and gaming. And I’ve worked enough on spatial audio systems to at least say, there’s real potential. I wouldn’t want to keep stereo panning to myself, so it’s great to get to share this with you, too. Let us know what you’d like to see in terms of coverage, tutorial or otherwise, and if there’s more you want to know from the Envelop team.
Thanks to Christopher Willits for his help on this.
Here’s a report from the hacklab on 4DSOUND that I co-hosted during Amsterdam Dance Event in 2014. It’s relevant to these other contexts, too: having open tools and more experimentation will expand our understanding of what’s possible, what works, and what doesn’t work:
Plus, for fun, here’s Robert Lippok [Raster] and me playing live on that system and exploring architecture in sound, as captured in a binaural recording by Frank Bretschneider [also Raster] during our performance for 2014 ADE. Binaural recording of spatial systems is really challenging, but I found it interesting in that it created its own sort of sonic entity. Frank’s work was just on the Hexadome.
One thing we couldn’t easily do was move that performance to other systems. Now, this begins to evolve.
Ableton Live 10 has some great new drum synth devices, as part of Max for Live. But that kick could be better. Max modifications, to the rescue!
The Max for Live kick sounds great – especially if you combine it with a Drum Buss or even some distortion via the Pedal, also both new in Live 10. But it makes some peculiar decisions. The biggest problem is, it ignores the pitch of incoming MIDI.
Green Kick fixes that, by mapping MIDI note to Pitch of the Kick, so you can tap different pads or keyboard keys to pitch the kick where you want it. (You can still trigger a C0 by pressing the Kick button in the interface.)
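For reference, mapping a MIDI note to a pitch is the standard equal-tempered conversion – here’s a quick Python sketch of the math (not the device’s actual Max patching):

```python
def midi_note_to_hz(note):
    """Standard equal-tempered tuning: MIDI note 69 (A4) = 440 Hz,
    with each semitone a factor of 2**(1/12)."""
    return 440.0 * 2 ** ((note - 69) / 12)

# C1 (MIDI note 36) lands around 65.4 Hz -- typical kick territory
kick_pitch = midi_note_to_hz(36)
```

So tapping pads a few semitones apart gives you tuned kicks that track your song’s key, which is exactly why ignoring incoming note pitch was such an odd default.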
Also: “It seemed strange to have Attack as a numbox and the Decay as a dial.”
Yes, that does seem strange. So you also get knobs for both Attack and Decay, which makes more sense.
Now, all of this is possible thanks to the fact that this is a Max for Live device, not a closed-box internal device. While it’s a pain to have to pony up for the full cost of Live Suite to get Max for Live, the upside is, everything is editable and modifiable. And it’d be great to see that kind of openness in other tools, for reasons just like this.
Likewise, if this green color bothers you, you can edit this mod and … so on.
There’s a big push among software makers to deliver integrated solutions – and that’s great. But if you’re a big user of both, say, MASCHINE MK3 and Ableton Live, here’s some good news.
NI made available two software updates yesterday, for their Maschine groove workstation software and for Komplete Kontrol, their software layer for hosting instruments and effects and interfacing with their keyboards. So, the hardware proposition there is the 4×4 pad grid of the MK3, and the Komplete Kontrol keyboards.
For Maschine users, being able to move between Ableton Live and Maschine could make a lot of producers and live performers happy. Now, unlike working with Ableton Push, the setup isn’t entirely seamless, and there’s not total integration of hardware and software. But it’s still a big step forward. For instance, I often find myself starting a project with Maschine, because I’ve got a kit I like (including my own samples), or I’m using some of its internal drum synths or bass synth, or just want to wail on four pads and use its workflow for sampling and groove creation. But then, once I’ve built up some materials, I may shift back to playing with Ableton’s workflow in Session or Arrange view to compose an idea. And I know lots of users work the same way. It makes sense, given the whole idea of Maschine is to have the feeling of a piece of hardware.
So, you’ve got this big square piece of gear plugged in. Then sometimes you’re literally unplugging the USB port and connecting Push or something else… or it just sits there, useless.
Having these templates means you switch from one tool to the other, without changing workflow. You could already do this with Maschine Jam, which has a bunch of shortcuts for different tasks and a big grid of triggers (which fits Session View). But the appeal of Maschine for a lot of us is those big, expressive pads on the MK3, so this is what we were waiting for.
On the Komplete Kontrol side, there’s a related set of use cases. Whether you’re the sort to just pull up some presets from Komplete, or at the opposite end of the spectrum, you’re using Komplete Kontrol to manipulate custom Reaktor ensembles, it’s nice to have a set of encoders and transport controls at the ready. The MK2 keyboards brought that to the party – so, for instance, now it’s really easy in Apple’s Logic Pro to play some stuff on the keys, then do another take, without, like – ugh – moving over to the table your computer is on, fumbling for the mouse or keyboard shortcut … you get the idea.
And again, a lot of us are using Ableton Live. I love Logic, but there have been times where I find myself comically missing the Session View as a way of storing ideas.
The notion here is, of course, to get you to buy into Native Instruments’ keyboards. But there is an awfully big ecosystem now of third-party instruments (like those from Output, among some of my favorites) that take advantage of compatibility via the NKS format. (NI likes to call that a “standard,” which I think is a bit of a stretch, given for now there’s no SDK for other hardware and host software makers. But it’s a useful step for now, anyway.)
So, here’s how to get going and what else is new.
The big deal with 2.7.4 is new controller workflows (JAM, MK3) and Live integration (MK3). Live users, you’ll want to begin here:
There are actually two big improvements here workflow-wise. One is Live support, but the other is easier creation of Loop recordings. With the “Target” parameter, you can drop recordings into:
2. “Sounds” (the Audio plug-in, where you can layer up sounds)
3. Pattern (creates both an Audio plug-in recording and a pattern with the playback)
I think the two together could be a godsend, actually, for composing ideas in a more improvisatory flow. The Target workflow also works on MASCHINE JAM (via different controllers).
There’s also footswitch-triggered recording.
So, Native Instruments are finally listening to feedback from people for whom live sampling is at the heart of their music making process. It’s about time, given that Maschine was modeled on hardware samplers.
The Live integration includes just the basics, but important basics – and it might still be useful even with Push and Maschine side-by-side. The MK3 can access the mixer (Volume, Pan, Mute / Solo / Arm states), clip navigation and launching, recording and quantize, undo/redo, automation toggle, tap tempo, and loop tempo.
As always, you also get various other fixes.
Komplete Kontrol 2.0
Again, you’ll start with the (slightly annoying) installation process, and then you’ll get to playing. NI support has a set of instructions with that, plus some useful detailed links on how the integration works (scroll to the bottom, read the whole thing!):
The other big update here is all about supporting more plug-ins, so your NI keyboard becomes the command center for lots of other instruments and effects you own. NI now boasts hundreds of supporting plug-ins for its NKS format, which maps hardware controls to instrument parameters.
Now that includes effects, too. And that’s cool, since sometimes playing is about loading an instrument on the keys, but manipulating the parameters of an effect that processes that instrument. Those plug-ins show up in the browser, now, if they’ve added support, and they also map to the controls.
Scoff if you like, but I know these keyboards have been big sellers. If nothing else, the lesson here is that making your software sounds and effects accessible with a keyboard for tangible control is something people like.
By the way, NI also quietly pushed out a Kontakt sampler update with a whole bunch of power-user improvements to KSP, their custom language for extending/scripting sound patches. That’s of immediate interest only to Kontakt sound content developers, but you can bet some of those little things will mean more improvements to Kontakt-based content you use, if you’re on NI’s ecosystem.
All three updates are available from NI’s Service Center.
If you’ve found a useful workflow with any of this, if you’ve got any tips or hacks, as always – shout out; we’re curious to hear! (I assume you might even be making some music with all this, so that, too.)
Ableton shared this set of three videos, which take a look at cinematic ambient artist Mattokind, and how they translate their music from studio to stage. The first video, above, captures Mattokind’s live performance of Scenescape. The next videos feature the group breaking down their setup and discussing their process for adapting their studio sound to the stage.
One of them likes to solve Rubik’s Cubes, blindfolded, on tour. The other is capable of executing elaborate drum programs programmed on a computer, on live percussion. Meet Tennyson and learn how they work.
As we saw before, Ableton Loop is a place not just for learning about a particular product for musicians, but gathering together ideas from the electronic music community as a whole. And Ableton have been sharing some of that work in an online minisite, so you get a free front row ticket to some of the event from wherever you are.
Tennyson is a good example of how explorations at Loop can cover playing technology as instrument – and everything that means for musicians. Watch:
Tennyson are a young Canadian brother and sister duo, with a unique musical idiom they tested together in live acoustic-electronic improvisations in jazz cafes. Complicated, angular rhythms flow effortlessly and gently, the line between kit and machine blurring. For Loop, they’re interviewed by Jesse Terry, who is product owner for Ableton Push (and has a long history working with the hardware that interacts with Live).
And the sample programming is insane: you get Runescape samples. A baby sneezing. The Mac volume control sound. It’s obsessive Internet-age programming – and then Tess plays this all as acoustic percussion and kit.
In this talk, they talk about jazz education, getting started as kids, Skype lessons. And then they get into the workings of a song.
The big trick here: the duo use Live’s Racks, using the Chain function, so that consistently mapped drum parts can cycle through different sounds as she plays. (I’ll review that technique in more detail soon.) 24 variable pads play all the sounds as Tess is playing.
Working with Chains in Ableton Live’s Device Racks can let you cycle through samples, patches, and layered/split instrument settings.
Part of why the video is interesting to watch is it’s really as much about how Tess has gradually learned how to memorize and recall these elaborate percussion parts. It’s a beautiful example of the human brain expanding to keep up with, then surpass, what the machine makes available.
For Luke’s part, there’s a monome [grid controller], keyboard triggers, and still more electronic pads. The monome loops chopped up samples, sticks can trigger more samples manually — it’s dense. He plays melodic parts both on keyboard and 4×4 pad grid.
The track makeup:
Arrangement view contains the song structure
A click track (obviously)
Software synths each have set lists of sounds, with clips triggering sound changes as MIDI program changes
The monome / mlrv sequencer
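Those clip-triggered sound changes are ordinary MIDI Program Change messages – on the wire, each one is just two bytes. A minimal Python sketch (the channel and program numbers are illustrative):

```python
def program_change_bytes(channel, program):
    """Raw MIDI for a Program Change: a status byte of 0xC0 plus the
    channel (0-15), followed by the program number (0-127). This is
    all a 'sound change' clip amounts to on the wire."""
    assert 0 <= channel <= 15 and 0 <= program <= 127
    return bytes([0xC0 | channel, program])

# Jump the synth on channel 0 to preset 3 -> b'\xc0\x03'
msg = program_change_bytes(0, 3)
```

In Live, you don’t build these by hand – each clip’s Program Change field in the Clip View does it for you – but it’s useful to know how simple the mechanism underneath really is.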
Here’s an (older) extended live set, so you can see more of how they play:
Here’s their dreamy, poppy latest music video (released March) – made all the more impressive when you realize they basically sound like this live:
The quiet addition of arbitrary audio routing in Max for Live in Live 10 has opened the floodgates to new tools. This one free device could transform how you route signal in the software.
One of the frustrations of longtime Ableton Live users, in fact, is that routing options are fairly restricted. You’ve got sends and returns, sure, plus some easy and convenient drop-downs in the I/O section of each channel. But if you’ve ever discovered a particular sidechain routing wasn’t possible, or you just couldn’t get there from here, you know what I’m talking about.
And so, you knew something like Outist was coming. Amidst a bunch of Max for Live plug-in developers thinking up creative things to do with the new routing tools (like spatialization or visualization), this one is dead-simple. It just uses that loophole to give you a device you can easily insert to add a routing wherever you want – a bit like having a virtual patch cable you can plug into your DAW.
And it’s free.
outist is a maxforlive device that lets you route any signal to any internal or external destination.
It’s originally designed to bypass Live’s restricted return buss routing. With outist you can have pre and post send PER return channel.
You can also simply use it to send the signal to any physical output or just anywhere in your set…
Find Outist and a bunch of other weird and interesting stuff:
With those floodgates open, as I said, there may well be a better tool out there. So please, readers – don’t be shy. Happy to hear other tips, or about your patch that’s better, or other ideas – shoot!
And yeah, I definitely wish Ableton just did this by default, natively – but I’ll take this hack as a solution!