This free Ableton Live device makes images into wavetables

It’s the season of the wavetable – again. With Ableton Live 10.1 on the horizon and its free Wavetable device, we’ve got yet another free Max for Live device for making sound materials – and this time, you can make your wavetables from images.

Let’s catch you up first.

Ableton Live 10.1 will bring Wavetable as a new instrument to Standard and Suite editions – arguably one of the bigger native synth additions to Live in its history, ranking with the likes of Operator. And sure, as when Operator came out, you already have plug-ins that do the same; Ableton’s pitch is, as always, their unique approach to UI (love it or hate it), integration with the host, and … having it right in the box:

Ableton Live 10.1: more sound shaping, work faster, free update

Earlier this week, we saw one free Max for Live device that makes wavetables for you. (Odds are anyone able to run this will have a copy of Live with Wavetable in it, since it targets 10.1, but it also exports to other tools.) Wave Weld focuses on dialing in the sounds you need and spitting out precise, algorithmic results:

Generate wavetables for free, for Ableton Live 10.1 and other synths

One thing Wave Weld cannot do, however, is make a wavetable out of a picture of a cat.

For that, you want Image2Wavetable. The name says it all: it generates wavetable samples from image data.

This means if you’re handy with graphics software, or graphics code like Processing, you can also make visual patterns that generate interesting wavetables. It reminds me of my happy hours and hours spent using U&I Software’s ground-breaking MetaSynth, which employs some similar concepts to build an entire sound laboratory around graphic tools. (It’s still worth a spin today if you’ve got a Mac; among other things, it is evidently responsible for those sweeping digital sounds in the original Matrix film.)
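
To make the concept concrete, here’s a rough sketch of the underlying idea – not Image2Wavetable’s actual code – that reads a grayscale image and treats each row as one single-cycle frame of a wavetable. The 1024-sample frame length, the 64-frame table, and the 16-bit WAV output are all assumptions; match whatever your target synth expects.

```python
# Sketch of image-to-wavetable conversion (not Image2Wavetable's actual code).
# Assumes Pillow and NumPy are installed; frame/table sizes are guesses.
import wave
import numpy as np
from PIL import Image

FRAME_LEN = 1024           # samples per single-cycle frame (assumption)
NUM_FRAMES = 64            # number of frames in the table (assumption)

img = Image.open("cat.png").convert("L")             # grayscale
img = img.resize((FRAME_LEN, NUM_FRAMES))            # width -> samples, height -> frames
pixels = np.asarray(img, dtype=np.float64) / 255.0   # 0..1 brightness

frames = []
for row in pixels:                  # each image row becomes one waveform cycle
    cycle = row * 2.0 - 1.0         # map brightness to -1..+1
    cycle -= cycle.mean()           # remove DC offset so it behaves like audio
    peak = np.abs(cycle).max() or 1.0
    frames.append(cycle / peak)     # normalize each cycle

table = np.concatenate(frames)
with wave.open("cat_wavetable.wav", "wb") as f:
    f.setnchannels(1)
    f.setsampwidth(2)               # 16-bit PCM
    f.setframerate(44100)
    f.writeframes((table * 32767).astype(np.int16).tobytes())
```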

Image2Wavetable is new – the creation of Dillon Bastan and Carlo Cattano – and there are some rough edges, so be patient; it sounds like they’re ready to hear feedback on how it works.

But the workflow is really simple: drag and drop an image, then drag and drop the resulting wavetable into the Wavetable instrument.

Okay, I suspect I know what I’m doing for the rest of the night.

Image2Wavetable Device [maxforlive.com]


Generate wavetables for free, for Ableton Live 10.1 and other synths

Wavetables are capable of a vast array of sounds. But just dumping arbitrary audio content into a wavetable is unlikely to get the results you want. And that’s why Wave Weld looks invaluable: it makes it easy to generate useful wavetables, in an add-on that’s free for Max for Live.

Ableton Live users are going to want their own wavetable maker very soon. Live 10.1 will add Wavetable, a new synth based on the technique. See our previous preview:

Ableton Live 10.1: more sound shaping, work faster, free update

Live 10.1 is in public beta now, and will be free to all Live 10 users soon.

So long as you have Max for Live to run it, Wave Weld will be useful with other synths as well – including the developer’s own Wave Junction.

Because wavetables are periodic by their very nature, it’s likely more helpful to generate content algorithmically than to just dump in sample content of your own. (Nothing against the latter – it’s definitely fun – but you may soon find yourself limited by the results.)
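
To give “algorithmically” a concrete shape – a minimal sketch, not Wave Weld’s method – you can sum a handful of sine harmonics into one single-cycle frame. The 2048-sample frame length and the particular harmonic amplitudes are assumptions; adjust them to whatever your target synth expects.

```python
# Minimal sketch of algorithmic wavetable generation (not Wave Weld's method):
# sum a few sine harmonics into a single cycle that loops cleanly.
import numpy as np

FRAME_LEN = 2048                                 # samples per cycle (assumption)
harmonics = {1: 1.0, 2: 0.5, 3: 0.33, 5: 0.2}    # harmonic number -> amplitude

phase = np.linspace(0.0, 2.0 * np.pi, FRAME_LEN, endpoint=False)
cycle = np.zeros(FRAME_LEN)
for n, amp in harmonics.items():
    cycle += amp * np.sin(n * phase)

cycle /= np.abs(cycle).max()                     # normalize to -1..+1
# Because the frame is built from whole-number harmonics, it loops seamlessly,
# which is exactly what makes it usable as a wavetable cycle.
```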

Wave Weld handles generating those materials for you, as well as exporting them in the format you need.

1. Make the wavetable: use waveshaping controls to dial in the sound materials you want.

2. Build up a library: adapt existing content or collect your own custom creations.

3. Export in the format you need: adjusting the size lets you support Live 10.1’s own device or other hardware and plug-ins.

The waveshaping features are really the cool part:

Unique waveshaping controls to generate custom wavetables
Sine waveshape phase shift and curve shape controls
Additive style synthesis via choice of twenty four sine waveshape harmonics for both positive and negative phase angles
Saw waveshape curve sharpen and partial controls
Pulse waveshape width, phase shift, curve smooth and curve sharpen controls
Triangle waveshape phase shift, curve smooth and curve sharpen controls
Random waveshape quantization, curve smooth and thinning controls

Wave Weld isn’t really intended as a synth, but one advantage of it being an M4L device is, you can easily preview sounds as you work.

More information on the developer’s site – http://metafunction.co.uk/wave-weld/

The download is free with a sign-up for their mailing list.

They’ve got a bunch of walkthrough videos to get you started, too:

Major kudos to Phelan Kane of Meta Function for this release. (Phelan is an Ableton Certified Trainer, a specialist in Reaktor and Maschine on the Native Instruments side, and London chairman for AES.)

I’m also interested in other ways to go about this – SuperCollider code, anyone?

Wavetable on!


Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, applying even GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

That also has its own Ableton Live device, from a couple years back.

https://magenta.tensorflow.org/nsynth-instrument

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
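
As a toy illustration of “predicting sequences from a training set” – a first-order Markov chain here, which is far cruder than Magenta’s actual recurrent models, and the note data is invented – you can count which note tends to follow which, then sample from those counts:

```python
# Toy stand-in for sequence prediction: a first-order Markov chain, NOT
# Magenta's recurrent neural networks. The "training set" is invented.
import random
from collections import defaultdict

training_melodies = [
    [60, 62, 64, 65, 67, 65, 64, 62, 60],   # made-up example melodies
    [60, 64, 67, 72, 67, 64, 60],
]

transitions = defaultdict(list)
for melody in training_melodies:
    for a, b in zip(melody, melody[1:]):
        transitions[a].append(b)

def continue_melody(seed, length=8):
    """Extend a seed note by sampling the learned transitions."""
    out = [seed]
    for _ in range(length):
        nxt = transitions.get(out[-1])
        out.append(random.choice(nxt) if nxt else out[-1])
    return out

print(continue_melody(60))
```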

One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations and the length in bars.
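
Temperature is easier to hear than to read about, but the math behind it is simple enough to sketch. This is the generic softmax-with-temperature trick, not necessarily Magenta Studio’s exact implementation: low values make the most likely choice dominate, high values flatten the odds.

```python
# Generic temperature sampling (the standard technique, not necessarily
# Magenta Studio's exact code): low temperature -> predictable choices,
# high temperature -> more surprising ones.
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    logits = np.asarray(logits, dtype=np.float64)
    scaled = logits / max(temperature, 1e-6)       # avoid divide-by-zero
    scaled -= scaled.max()                         # numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()  # softmax
    return np.random.choice(len(probs), p=probs)

# Hypothetical scores for four candidate notes:
logits = [2.0, 1.0, 0.5, 0.1]
print(sample_with_temperature(logits, temperature=0.1))  # almost always index 0
print(sample_with_temperature(logits, temperature=2.0))  # much more varied
```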

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is largely set up with expectations about what a drum kit is, and with melodies built around a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them.

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
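
For contrast, the “crude percentage-based templates of the past” mentioned under Groove above amount to bounded random jitter on timing and velocity, with no model of feel – a minimal sketch:

```python
# The old-school "humanize" baseline that Groove's learned model improves on:
# random jitter on timing and velocity, with no sense of musical feel.
import random

def crude_humanize(notes, timing_pct=0.05, velocity_pct=0.1):
    """notes: list of (start_in_beats, velocity); jitter is a fraction of a beat / of 127."""
    out = []
    for start, vel in notes:
        start += random.uniform(-timing_pct, timing_pct)
        vel += random.uniform(-velocity_pct, velocity_pct) * 127
        out.append((max(0.0, start), int(min(127, max(1, vel)))))
    return out

print(crude_humanize([(0.0, 100), (0.5, 100), (1.0, 100), (1.5, 100)]))
```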

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with the history of pure technical inquiries past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.

As Jesse Engel tells CDM:

We’re a research group (not a Google product group), which means that Magenta Studio is not static – a lot more interesting models are probably on the way.

Things like more impressive MIDI generation (https://magenta.tensorflow.org/music-transformer – check out “Score Conditioned”)

And state of the art transcription: (https://magenta.tensorflow.org/onsets-frames)

And new controller paradigms:
(https://magenta.tensorflow.org/pianogenie)

Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.

So okay, music makers – have at it:

g.co/magenta
g.co/magenta/studio


More surprise in your sequences, with ESQ for Ableton Live

With interfaces that look lifted from a Romulan warbird and esoteric instruments, effects, and sequencers, K-Devices have been spawning surprising outcomes in Ableton Live for some time now. ESQ is the culmination of that: a cure for preset sounds and ideas in a single device.

You likely know the problem already: all of the tools in software like Ableton Live that make it easy to quickly generate sounds and patterns also tend to do so in a way that’s … always the same. So instead of being inspiring, you can quickly feel stuck in a rut.

ESQ is a probability-based sequencer: you adjust a few parameters – velocity, chance, and relative delay for each step – to generate a wide variety of possibilities. You can create polyrhythms (multiple tracks of the same length, but different steps) or different-length tracks, you can copy and paste, and there are various random functions to keep things fresh. The results are still somehow yours – maybe even more so – it’s just that you use probability and generative rules to get to what you want when you aren’t sure how to describe what you want. Or maybe before you knew you wanted it.
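
The basic mechanics are easy to sketch, even though K-Devices’ actual implementation is surely more involved: each step stores a velocity, a chance, and a relative delay, and on every pass through the pattern a step only fires if it wins its dice roll.

```python
# Sketch of a chance-based step sequencer (the general idea, not K-Devices' code):
# per step, store velocity, probability, and relative delay, then roll the dice.
import random

steps = [
    # (velocity, chance 0..1, delay as a fraction of a step)
    (110, 1.00, 0.00),
    (80,  0.50, 0.10),
    (96,  0.75, 0.00),
    (64,  0.25, -0.05),
]

def play_pass(steps, step_len_beats=0.25):
    events = []
    for i, (vel, chance, delay) in enumerate(steps):
        if random.random() < chance:
            events.append(((i + delay) * step_len_beats, vel))
    return events   # (time in beats, velocity) pairs you'd pass on as MIDI

print(play_pass(steps))   # different output on every pass – that's the point
```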

Because you can trigger up to 12 notes, you can use ESQ to turn bland presets into something unexpected (like working with preset Live patches). Or you can use it as a sequencer with all those fun modular toys we’ve been talking about lately (VCV Rack, Softube Modular, Cherry Audio Voltage Modular, and so on) – because 5- and 8-step sequencers are often just dull.

There’s no sound produced by ESQ – it’s just a sequencer – but it can have a big enough impact on devices that this “audio” demo is just one instance of ESQ and one Drum Rack. Even those vanilla kits start to get more interesting.

K-Devices has been working this way for a while, but ESQ feels like a breakthrough. The generative sequence tools are uniquely complete and especially powerful for producing rhythms. You can make this sound crazy and random and IDM-y, but you can also add complexity without heading into deep space – it’s really up to you.

And they’ve cleverly made two screens – one full parameter screen that gets deep and detailed, and a compact device screen that lets you shift everything with single gestures or adjust parameters as macros – ideal for live performance or for making bigger changes.

It seems like a good wildcard to keep at your disposal … for any of those moments when you’re getting stuck and boring.

And yes, of course Richard Devine already has it:

But you can certainly make things unlike Devine, too, if you want.

Right now ESQ is on sale, 40% off through December 31 – €29 instead of €49. So it can be your last buy of 2018.

Have fun, send sequences!

https://k-devices.com/products/esq/


Ableton Live Looping gets its own custom controller

A crowd-funded custom controller has just arrived on the scene, designed to assist live triggering and looping in Ableton Live. And there’s already a free download for Max for Live to get you started, even without the hardware.

Hardware like Ableton’s Push lets you play Live with your fingers – but what about your feet? (Ableton Sole?) And what about looping? Pierre-Antoine Grison, Ableton Certified Trainer and producer/musician signed to Ed Banger Records, has come up with his own solution – just in time to show it this weekend at Ableton’s aptly-titled Loop “summit for music makers.” “State Of The Loop” is a custom MIDI controller for Ableton Live’s built-in Looper device.

The Looper in Ableton Live has been around for a few versions, after loads of requests from users. It delivered the kind of looping workflows you’d expect from a looping pedal. But that doesn’t mean everyone knows how to use it, or use it effectively. There are some nice resources online, including:

Ableton Looper Cheat Sheet (Free Download) [Beat Lab Academy]

Ableton Live Devices – How To Use Live Looper [Loopmasters.com articles]

and a ton of tips here:
http://looping.me.uk/category/ableton/

The stomp-style hardware controls not only the Looper device itself but also scenes. So it works both for controlling entire sets and for pedal-style looping, and you can use multiple (software) loopers to layer using different on-screen devices.

Features:

Display and control the state of Live’s Looper
Unlimited number of loopers!
2 Expression Pedal inputs with “dynamic mapping”
Scenes Mode to launch Scenes and display their color and name
Sturdy metal case
100% Made in France
USB or MIDI connection for longer distances (up to 15m/50ft)
USB powered
Very light on the CPU
Easy configuration
Weight: 1.7 kg / 3.7 lb
W x L x H: 30 x 13 x 6 cm / 12 x 5 x 2.5 inches

There’s even a free download that adds some features Ableton Live forgot – the equivalent of follow actions for scenes, plus a heads-up display so you can see what’s happening without hunching over your computer screen. (Seriously, Ableton, those belong as standard features in Live!)

You can use that download as long as you have a compatible version of Live and Max for Live; no hardware needed.

http://kblivesolutions.com/wp-content/uploads/2018/11/Scene-Launcher.zip

Dig this custom version too:

Pricing starts at €240 for an “early bird” price, €260 after that. (There’s also a €350 limited edition still available as I write this.)

Project info on Kickstarter:

http://kck.st/2SH5gJE


Free Ableton Live add-ons will f*** up your mixes and insult you

That headline isn’t a mistake. If you’ve ever wanted a plug-in to f*** up your mixes, sabotage you, insult you, or “get passive aggressive,” this free collection of Max for Live Devices is for you.

Not to completely spoil the results here, but as I write this, my screen is covered with virtual bees. I cannot make the bees go away. I thought the “bees” instrument was going to make some sounds, but instead it has brought bees onto my screen, both inside and outside Ableton Live.

That’s the sort of result you can expect from Really Useful Plugins.

ru.bomb will take your mix and completely f*** it up, as my headline promises.

ru.no is basically an onscreen version of the nagging doubts inside your head.

Sad.

That is way too much f***ing reverb.

And that’s just the beginning.

Simon Kitmine and David Synth bring you 12 instruments, audio effects and MIDI effects for Ableton Live, featuring:

Insults!
Games!
Bombs!
Self importance!
Sabotage!
Ways to magically sound like everyone else!
The Chuckle Brothers!
Annoying insects!
Exploration!
Passive aggression!

Really Useful Plugins Set #1 now available!

How much would you pay for such a collection? $99? $299? $999 for a multi-seat license? Well, it’s … free, for some reason. (Can’t imagine why. Free as in bees. Erm, beer.)

Max for Live is required, so Live Suite or Live with the M4L add-on. I’ve said before that’s worth it. Now, there’s no doubt.

You know, it really is too much reverb.

Sigh.

PS, if you appreciate this kind of insight, definitely check out #gothscreenshots:

https://www.instagram.com/goth_screenshots/

It’s the curated collection of digital artist Sougwen, who has also participated at Ableton Loop, bringing this all full circle.


Free new tools for Live 10 unlock 3D spatial audio, VR, AR

Envelop began life by opening a space for exploring 3D sound, directed by Christopher Willits. But today, the nonprofit is also releasing a set of free spatial sound tools you can use in Ableton Live 10 – and we’ve got an exclusive first look.

First, let’s back up. Listening to sound in three dimensions is not just some high-tech gimmick. It’s how you hear naturally with two ears. The way that actually works is complex – the Wikipedia overview alone is dense – but close your eyes, tilt your head a little, and listen to what’s around you. Space is everything.

And just as in the leap from mono to stereo, space can change a musical mix – it allows clarity and composition of sonic elements in a new way, which can transform its impact. So it really feels like the time is right to add three dimensions to the experience of music and sound, personally and in performance.

Intuitively, 3D sound seems even more natural than visual counterparts. You don’t need to don weird new stuff on your head, or accept disorienting inputs, or rely on something like 19th century stereoscopic illusions. Sound is already as ephemeral as air (quite literally), and so, too, is 3D sound.

So, what’s holding us back?

Well, stereo sound required a chain of gear, from delivery to speaker. But those delivery mechanisms are fast evolving for 3D, and not just in terms of proprietary cinema setups.

But stereo audio also required something else to take off: mixers with pan pots. Stereo effects. (Okay, some musicians still don’t know how to use this and leave everything dead center, but that only proves my point.) Stereo only happened because tools made its use accessible to musicians.

Looking at something like Envelop’s new tools for Ableton Live 10, you see something like the equivalent of those first pan pots. Add some free devices to Live, and you can improvise with space, hear the results through headphones, and scale up to as many speakers as you want, or deliver to a growing, standardized set of virtual reality / 3D / game / immersive environments.

And that could open the floodgates for 3D mixing music. (Maybe even it could open your own floodgates there.)

Envelop tools for Live 10

Today, Envelop for Live (E4L) has hit GitHub. It’s not a completely free set of tools – you need the full version of Ableton Live Suite. Live 10 minimum is required (since it provides the requisite multi-point audio plumbing). Provided you’re working from that as a base, though, musicians get a set of Max for Live-powered devices for working with spatial audio production and live performance, and developers get a set of tools for creating their own effects.

Start here for the download:

http://www.envelop.us/software/

See also the more detailed developer site:

https://github.com/EnvelopSound/EnvelopForLive/

Read an overview of the system, and some basic explanations of how it works (including some definitions of 3D sound terminology):

https://github.com/EnvelopSound/EnvelopForLive/wiki/System-Overview

And then find a getting started guide, routing, devices, and other reference materials on the wiki:

https://github.com/EnvelopSound/EnvelopForLive/wiki

It’s beautiful, elegant software – the friendliest I’ve seen yet to take on spatial audio, and very much in the spirit of Ableton’s own software. Kudos to core developers Mark Slee, Roddy Lindsay, and Rama Gotfried.

Here’s the basic idea of how the whole package works.

Output. There’s a Master Bus device that stands in for your output buses. It decodes your spatial audio, and adapts routing to however many speakers you’ve got connected – whether that’s just your headphones or four speakers or a huge speaker array. (That’s the advantage of having a scalable system – more on that in a moment.)

Sources. Live 10’s Mixer may be built largely with the idea of mixing tracks down to stereo, but you probably already think of it sort of as a set of particular musical materials – as sources. The Source Panner device, added to each track, lets you position that particular musical/sonic entity in three-dimensional space.

Processors. Any good 3D system needs not only 3D positioning, but also separate effects and tools – because normal delays, reverbs, and the like presume left/right or mid/side stereo output. (Part of what completes the immersive effect is hearing not only the positioning of the source, but reflections around it.)

In this package, you get:
Spinner: automates motion in 3D space horizontally and with vertical oscillations
B-Format Sampler: plays back existing Ambisonics wave files (think samples with spatial information already encoded in them)
B-Format Convolution Reverb: imagine a convolution reverb that works with three-dimensional information, not just two-dimensional – in other words, exactly what you’d want from a convolution reverb
Multi-Delay: cascading, three-dimensional delays out of a mono source
HOA Transform: without explaining Ambisonics, this basically molds and shapes the spatial sound field in real-time
Meter: Spatial metering. Cool.

Spinner, for automating movement.

Spatial multi-delay.

Convolution reverb, Ambisonics style.

Envelop SF and Envelop Satellite venues also have some LED effects, so you’ll find some devices for controlling those (which might also be useful templates for stuff you’re doing).

All of this spatial information is represented via a technique called Ambisonics. Basically, any spatial system – even stereo – involves applying some maths to determine relative amplitude and timing of a signal to create particular impressions of space and depth. What sets Ambisonics apart is, it represents the spatial field – the sphere of sound positions around the listener – separately from the individual speakers. So you can imagine your sound positions existing in some perfect virtual space, then being translated back to however many speakers are available.

This scalability really matters. Just want to check things out with headphones? Set your master device to “binaural,” and you’ll get a decent approximation through your headphones. Or set up four speakers in your studio, or eight. Or plug into a big array of speakers at a planetarium or a cinema. You just have to route the outputs, and the software decoding adapts.
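
To give a flavor of the math – a deliberately simplified first-order sketch that ignores the normalization conventions (SN3D, FuMa) and proper decoder design that real tools like Envelop for Live have to handle – encoding places a mono source on the sphere, and decoding projects that same field onto however many speakers you have:

```python
# Deliberately simplified first-order Ambisonics sketch. Real implementations
# (including Envelop's) handle normalization conventions and decoder design.
import numpy as np

def encode_first_order(mono, azimuth, elevation):
    """Place a mono signal at (azimuth, elevation) in radians -> W, X, Y, Z."""
    w = mono * 1.0
    x = mono * np.cos(azimuth) * np.cos(elevation)
    y = mono * np.sin(azimuth) * np.cos(elevation)
    z = mono * np.sin(elevation)
    return w, x, y, z

def decode_to_ring(wxyz, speaker_azimuths):
    """Naive 'sampling' decode of the horizontal field to a ring of speakers."""
    w, x, y, _ = wxyz
    return [w + x * np.cos(a) + y * np.sin(a) for a in speaker_azimuths]

mono = np.sin(2 * np.pi * 220 * np.arange(44100) / 44100)   # 1-second test tone
bformat = encode_first_order(mono, azimuth=np.pi / 4, elevation=0.0)
quad = decode_to_ring(bformat, [np.pi/4, 3*np.pi/4, 5*np.pi/4, 7*np.pi/4])
# Same encoded field, any speaker count: just change the list of azimuths.
```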

Envelop is by no means the first set of tools to help you do this – the technique dates back to the 70s, and various software implementations have evolved over the years, many of them free – but it is uniquely easy to use inside Ableton Live.

Open source, standards

Free software. It’s significant that Envelop’s tools are available as free and open source. Max/MSP, Max for Live, and Ableton Live are proprietary tools, but the patches and externals exist independently, and a free license means you’re free to learn from or modify the code and patches. Plus, because they’re free in cost, you can share your projects across machines and users, provided everybody’s on Live 10 Suite.

Advanced Max/MSP users will probably already be familiar with the basic tools on which the Envelop team have built. They’re the work of the Institute for Computer Music and Sound Technology (ICST) at the Zürcher Hochschule der Künste in Zurich, Switzerland. ICST have produced a set of open source externals for Max/MSP:

https://www.zhdk.ch/downloads-ambisonics-externals-for-maxmsp-5381

Their site is a wealth of research and other free tools, many of them additionally applicable to fully free and open source environments like Pure Data and Csound.

But Live has always been uniquely accessible for trying out ideas. Building a set of friendly Live devices takes these tools and makes them fit more naturally into the Live paradigm.

Non-proprietary standards. There’s a strong push toward proprietary techniques in spatial audio in the cinema – Dolby, for instance, we’re looking at you. But while proprietary technology and licensing may make sense for big cinema distributors, it’s absolute death for musicians, who likely want to tour with their work from place to place.

The underlying techniques here are all fully open and standardized. Ambisonics work with a whole lot of different 3D use cases, from personal VR to big live performances. By definition, they don’t define the sound space in a way that’s particular to any specific set of speakers, so they’re mobile by design.

The larger open ecosystem. Envelop will make these tools new to people who haven’t seen them before, but it’s also important that they share an approach, a basis in research, and technological compatibility with other tools.

That includes the German ZKM’s Zirkonium system, HoaLibrary (that repository is deprecated but links to a bunch of implementations for Pd, Csound, OpenFrameworks, and so on), and IRCAM’s SPAT. All these systems support ambisonics – some support other systems, too – and some or all components include free and open licensing.

I bring that up because I think Envelop is stronger for being part of that ecosystem. None of these systems requires a proprietary speaker delivery system – though they’ll work with those cinema setups, too, if called upon to do so. Musical techniques, and even some encoded spatial data, can transfer between systems.

That is, if you’re learning spatial sound as a kind of instrument, here you don’t have to learn each new corporate-controlled system as if it’s a new instrument, or remake your music to move from one setting to another.

Envelop, the physical version

You do need compelling venues to make spatial sound’s payoff apparent – and Envelop are building their own venues for musicians. Their Envelop SF venue is a permanent space in San Francisco, dedicated to spatial listening and research. Envelop Satellite is a mobile counterpart to that, which can tour festivals and so on.

Envelop SF: 32 speakers in total – 24 set in 3 rings of 8 (the speakers in the columns), plus 4 subs and 4 ceiling speakers (28.4)

Envelop Satellite: 28 speakers – 24 in 3 rings plus 4 subs, with overhead speakers coming soon (24.4)

The competition, as far as venues: 4DSOUND and Berlin’s Monom, which houses a 4DSOUND system, are similar in function, but use their own proprietary tools paired with the system. They’ve said they plan a mobile system, but no word on when it will be available. The Berlin Institute of Sound and Music’s Hexadome uses off-the-shelf ZKM and IRCAM tools and pairs them with projection surfaces. It’s a mobile system by design, but there’s nothing particularly unique about its sound array or toolset. In fact, you could certainly use Envelop’s tools with any of these venues, and I suspect some musicians will.

There are also many multi-speaker arrays housed in music venues, immersive audiovisual venues, planetariums, cinemas, and so on. So long as you can get access to multichannel interfacing with those systems, you could use Envelop for Live with all of these. The only obstacle, really, is whether these venues embrace immersive, 3D programming and live performance.

But if you thought you had to be Brian Eno to get to play with this stuff, that’s not likely to be the situation for long.

VR, AR, and beyond

In addition to venues, there’s also a growing ecosystem of products for production and delivery, one that spans musical venues and personal immersive media.

To put that more simply: after well over a century of recording devices and production products assuming mono or stereo, now they’re also accommodating the three dimensions your two ears and brain have always been able to perceive. And you’ll be able to enjoy the results whether you’re on your couch with a headset on, or whether you prefer to go out to a live venue.

Ambisonics-powered products now include Facebook 360, Google VR, Waves, GoPro, and others, with more on the way, for virtual and augmented reality. So you can use Live 10 and Envelop for Live as a production tool for making music and sound design for those environments.

Steinberg are adopting ambisonics, too (via Nuendo). Here’s Waves’ guide – they now make plug-ins that support the format, and this is perhaps easier to follow than the Wikipedia article (and relevant to Envelop for Live, too):

https://www.waves.com/ambisonics-explained-guide-for-sound-engineers

Ableton Live with Max for Live has served as an effective prototyping environment for audio plug-ins, too. So developers could pick up Envelop for Live’s components, try out an idea, and later turn that into other software or hardware.

I’m personally excited about these tools and the direction of live venues and new art experiences – well beyond what’s just in commercial VR and gaming. And I’ve worked enough on spatial audio systems to at least say, there’s real potential. I wouldn’t want to keep stereo panning to myself, so it’s great to get to share this with you, too. Let us know what you’d like to see in terms of coverage, tutorial or otherwise, and if there’s more you want to know from the Envelop team.

Thanks to Christopher Willits for his help on this.

More to follow…

http://envelop.us

https://github.com/EnvelopSound/EnvelopForLive/

Further reading

Inside a new immersive AV system, as Brian Eno premieres it in Berlin [Extensive coverage of the Hexadome system and how it works]

Here’s a report from the hacklab on 4DSOUND I co-hosted during Amsterdam Dance Event in 2014 – relevant to these other contexts, having open tools and more experimentation will expand our understanding of what’s possible, what works, and what doesn’t work:

Spatial Sound, in Play: Watch What Hackers Did in One Weekend with 4DSOUND

And some history and reflection on the significance of that system:
Spatial Audio, Explained: How the 4DSOUND System Could Change How You Hear [Videos]

Plus, for fun, here’s Robert Lippok [Raster] and me playing live on that system and exploring architecture in sound, as captured in a binaural recording by Frank Bretschneider [also Raster] during our performance for 2014 ADE. Binaural recording of spatial systems is really challenging, but I found it interesting in that it created its own sort of sonic entity. Frank’s work was just on the Hexadome.

One thing we couldn’t easily do was move that performance to other systems. Now, this begins to evolve.


Mod Max: One free download fixes Live 10’s new kick

Ableton Live 10 has some great new drum synth devices, as part of Max for Live. But that kick could be better. Max modifications, to the rescue!

The Max for Live kick sounds great – especially if you combine it with a Drum Buss or even some distortion via the Pedal, also both new in Live 10. But it makes some peculiar decisions. The biggest problem is, it ignores the pitch of incoming MIDI.

Green Kick fixes that, by mapping MIDI note to Pitch of the Kick, so you can tap different pads or keyboard keys to pitch the kick where you want it. (You can still trigger a C0 by pressing the Kick button in the interface.)
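
Green Kick’s exact internal mapping isn’t documented here, but “MIDI note to pitch” usually means the standard equal-tempered conversion:

```python
# Standard MIDI-note-to-frequency conversion (equal temperament, A4 = 440 Hz).
# Green Kick's internal mapping isn't documented here; this is just the usual math.
def midi_to_hz(note, a4=440.0):
    return a4 * 2 ** ((note - 69) / 12)

print(midi_to_hz(36))   # C1 in Live's octave numbering, ~65.4 Hz – a typical kick fundamental
print(midi_to_hz(24))   # C0, ~32.7 Hz
```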

Also: “It seemed strange to have Attack as a numbox and the Decay as a dial.”

Yes, that does seem strange. So you also get knobs for both Attack and Decay, which makes more sense.

Now, all of this is possible thanks to the fact that this is a Max for Live device, not a closed-box internal device. While it’s a pain to have to pony up for the full cost of Live Suite to get Max for Live, the upside is, everything is editable and modifiable. And it’d be great to see that kind of openness in other tools, for reasons just like this.

Likewise, if this green color bothers you, you can edit this mod and … so on.

Go grab it:

http://maxforlive.com/library/device/4680/green-kick

Thanks to Sonic Bloom for this one. They’ve got tons more tips like this, so go check them out:

https://twitter.com/sonicbloomtuts


Route audio from anywhere to anywhere in Ableton, free

The quiet addition of arbitrary audio routing in Max for Live in Live 10 has opened the floodgates to new tools. This one free device could transform how you route signal in the software.

One of the ongoing frustrations of Ableton Live users, in fact, is that routing options are fairly restricted. You’ve got sends and returns, sure, plus some easy and convenient drop-downs in the I/O section of each channel. But if you’ve ever discovered a particular sidechain routing wasn’t possible, or you just couldn’t get there from here, you know what I’m talking about.

And so, you knew something like Outist was coming. Amidst a bunch of Max for Live plug-in developers thinking up creative things to do with the new routing tools (like spatialization or visualization), this one is dead-simple. It just uses that loophole to give you a device you can easily insert to add a routing wherever you want – a bit like having a virtual patch cable you can plug into your DAW.

And it’s free.

Description:

outist is a maxforlive device that lets you route any signal to any internal or external destination.

It’s originally designed to bypass Live’s restricted return buss routing. With outist you can have pre and post send PER return channel.

You can also simply use it to send the signal to any physical output or just anywhere in your set…

Find Outist and a bunch of other weird and interesting stuff:

https://gumroad.com/valiumdupeuple

With those floodgates open, as I said, there may well be a better tool out there. So please, readers – don’t be shy. Happy to hear other tips, or about your patch that’s better, or other ideas – shoot!

And yeah, I definitely wish Ableton just did this by default, natively – but I’ll take this hack as a solution!


BSOD simulates the sound your laptop makes when it crashes

Finally! Now you don’t have to wait for your computer to start glitching out – you can make it happen yourself, with this inexpensive Max for Live device.

Okay, so technically what we’re talking about is a “stochastic sample freezing effect.” Since it’s a Max for Live Device, you can drop its audio-munching powers on any track you want, making for glitched out percussion, vocals, or whatever you like. But if you’ve ever watched a computer melt down and listened to the resulting sounds and thought, “hey, actually, I could use that” – this is for you.

The reason it matches a BSOD is, computer stability issues cause the digital audio buffer to “freeze” on particular sounds rather than continue to process buffered audio normally. (Digital audio systems give the illusion of running in real time, without losing a continuous stream of audio, by dividing digital audio into chunks and feeding those chunks in sequence to the audio card… so that if the machine falls behind a few samples, you won’t notice.)
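
A tame version of that failure mode is easy to fake in code (this is just the general idea, not Isotonik’s implementation): process the signal in fixed-size blocks, and with some probability keep repeating the last block instead of passing the new one through.

```python
# Tame simulation of the "frozen buffer" crash sound (not Isotonik's actual code):
# chop the signal into blocks and randomly get stuck repeating the previous one.
import random
import numpy as np

def bsod_freeze(signal, block_size=512, freeze_chance=0.2):
    out = np.copy(signal)
    held = None
    for start in range(0, len(signal) - block_size, block_size):
        if held is not None and random.random() < freeze_chance:
            out[start:start + block_size] = held      # stuck on the old buffer
        else:
            held = signal[start:start + block_size]   # pass through, remember it
    return out

# Example: glitch a one-second 220 Hz tone.
sr = 44100
tone = np.sin(2 * np.pi * 220 * np.arange(sr) / sr)
glitched = bsod_freeze(tone)
```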

This creation is the second Max for Live invention from Isotonik Studios today – happy Valentine’s Day, y’all – and carries the price of €9.52. For that, you get some control over the effect – especially since it isn’t actually crashing your machine. The developers describe the parameters as follows:

Freeze: control the gate frequency in time signatures
Width: make the gating wider or tighter
Dry/Wet: master dry/wet control

And all of this is MIDI-controllable.

If you want to live more dangerously, the classic Smart Electronix effect Buffer Override actually does screw around with your machine. The work of developer Sophia Poirier, this is the opposite of what would normally constitute a stable plug-in. The idea: it “overcomes your host app’s audio processing buffer size and then (unsuccessfully) overrides that new buffer size to be a smaller buffer size.”

Beware, as that will actually cause some hosts to, you know, crash. But Buffer Override is free. (Well, it’d be a bit strange to charge for that!)

http://destroyfx.smartelectronix.com/

For safer, more playable operation, you should stick to Isotonik Studios’ creation. Have at it:

https://isotonikstudios.com/product/bsod/
