You’ve got tons of devices that let you tweak sounds of synths and effects with knobs. So why not warp time, too?
That’s the idea of FlexGroove, the latest add-on for Ableton Live and Max for Live. Just as you use envelopes and breakpoints to control volume or effects parameters elsewhere in Live, this tool lets you go in and speed up time, slow down time, and transform groove and meter just as easily.
Even as a big believer in words (words rock!), that is something that screams out for a demo. And once you hear this, you’ll get right away why you might want something that does this:
Speeding up (accelerando), slowing down (decelerando), expressive give and take (rubato), and meter changes are essential building blocks of music in a wide variety of genres and cultures. So on some level, it’s weird that they tend to be hidden in machine music interfaces, in hardware and software – or at least relegated to a master tempo track.
That said, putting them into a dedicated device like this means you can treat these elements in a focused, compositional mindset. And device creator Martin von Frantzius, a composer and musician himself teaching in Germany, has pulled out all the stops.
So you get six timing modes, each with its own presets:
Free time (drawn in with breakpoints)
Sine/half sine curves
Ratio (which lets you do metric modulations)
And there’s a built-in pair of step sequencers, plus controls for humanization and velocity, plus probability.
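To make the time-warping idea concrete, here’s a minimal sketch of one way a breakpoint speed envelope can remap note timings – this is purely illustrative and not FlexGroove’s actual implementation; `warp_times` and its piecewise-linear averaging are my own stand-ins.

```python
# Sketch of time-warping via a breakpoint speed envelope -- not FlexGroove's
# actual code, just the core idea: integrate a speed curve to remap beat times.

def warp_times(note_times, breakpoints):
    """Remap note onset times (in beats) through a piecewise-linear speed curve.

    breakpoints: list of (beat, speed) pairs, e.g. speed 2.0 = twice as fast.
    Returns warped times, approximating the integral of 1/speed per segment.
    """
    warped = []
    for t in note_times:
        elapsed = 0.0
        for (b0, s0), (b1, s1) in zip(breakpoints, breakpoints[1:]):
            if t <= b0:
                break
            seg_end = min(t, b1)
            # speed at the end of the portion we traverse, by linear interpolation
            frac = (seg_end - b0) / (b1 - b0)
            s_end = s0 + (s1 - s0) * frac
            avg_speed = (s0 + s_end) / 2
            elapsed += (seg_end - b0) / avg_speed
        warped.append(elapsed)
    return warped

# Accelerando: speed ramps from 1.0 to 2.0 over four beats,
# so later notes land progressively earlier in clock time.
print(warp_times([0, 1, 2, 3, 4], [(0, 1.0), (4, 2.0)]))
```

The same mechanism runs in reverse for a decelerando – ramp the speed below 1.0 and the same notes stretch out in clock time.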
Basically, you fire this up, then spit out clips. Some of the ideas here are really performative, so it’s a shame in a way that it doesn’t focus on playing these things like an instrument. On the other hand, for composers, for anyone adding excitement to a score bed or creating a dynamic break/drop in dance music – or just spawning a ton of more interesting clips – it looks seriously addictive.
And it should also cure you of the dreary feeling of a bunch of on-the-grid, monotonous, unmusical clips in your Session View. I just now got the NFR, but this looks worth €39 to me.
Got patches of your own, or favorites from maxforlive.com? Let us know! The more time-warping devices, the merrier, really!
And it’s great to see Ableton continue to use ableton.com as a kind of label for creative Max patchers.
Check out Martin’s page for tons of interesting teaching and engineering and violin and composition projects, like an online church-organ you can play, and — this, for more experimental time-bending with violin:
Chordmaker, arpeggiator on steroids, harmonic processor – CHORDimist is another of the powerful Max for Live tools for composition.
I figured yesterday’s blitz of Max for Live news would bring out something I missed. Chris Hahn pointed us to this one, by South Korean-based developer Leestrument.
It’s a chord generator, but it’s also really an advanced arpeggiator / MIDI harmonizer, with modes for firing off, sustaining, or arpeggiating harmonies. Add lots of parameters for direction and variation – both of the chords themselves and how they’re played – and you have a sophisticated MIDI effect.
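The core recipe of any chord generator plus arpeggiator is simple enough to sketch – the following toy version is purely illustrative and has nothing to do with CHORDimist’s actual implementation; the names and interval tables are mine.

```python
# A toy chord generator / arpeggiator in the spirit of devices like CHORDimist --
# purely illustrative, not based on its actual implementation.

CHORD_INTERVALS = {
    "maj": [0, 4, 7],
    "min": [0, 3, 7],
    "maj7": [0, 4, 7, 11],
}

def make_chord(root, quality):
    """Return MIDI note numbers for a chord built on a root note."""
    return [root + i for i in CHORD_INTERVALS[quality]]

def arpeggiate(notes, direction="up"):
    """Order chord tones for playback: up, down, or up-down."""
    if direction == "up":
        return notes
    if direction == "down":
        return notes[::-1]
    if direction == "updown":
        return notes + notes[-2:0:-1]
    raise ValueError(direction)

chord = make_chord(60, "min")          # C minor: [60, 63, 67]
print(arpeggiate(chord, "updown"))     # [60, 63, 67, 63]
```

A real device like this layers variation on top – per-step direction changes, inversions, probability – but it all builds on this root-plus-intervals idea.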
CHORDimist is US$49 and requires the latest Max for Live, meaning you want Live Suite 10.1 or greater (or an equivalent Max for Live license).
Ha, also – I love that the filename for the screenshot on Lee’s site is _E1_84_89_E1_85_B3_E1_84_8F_E1_85_B3_E1_84_85_E1_85_B5_E1_86_AB_E1_84_89_E1_85_A3_E1_86_BA_202019-10-02_20_E1_84_8B_E1_85_A9_E1_84_8C_E1_85_A5_E1_86_AB_204.13.04.png.
Signal from Isotonik was already a revelation – a powerful toolkit for adding modulation to Ableton Live. But curves, step sequences, and crossfades add real motion and transformation to your music.
Darren of Isotonik Studios has been busy documenting how to use this with some no-nonsense, clear video tutorials. It’s the latest episode, adding the Steps and Crossfader modules, that gets really exciting:
The new Steps module alone is reason to write home. It’s capable of both the titular fixed, step-sequenced values and curves. And while you’ll find modulation built into tools like FL Studio, Reason, and Bitwig Studio, Isotonik’s Max for Live implementation has some really lovely usability that stands on its own.
The Crossfader is unique, too – this isn’t just a mixer for audio signals, but modulation sources, as well.
It’s worth checking the other videos, too. Episode two looked at the cult hit VST plug-in Serum, creating sound design with Signal in combination. And even with Massive X just out, this is some interesting stuff:
You’ll probably want to start at the beginning, which introduces Freeze and LFO (since I’m listing these in reverse chronological order):
You’ll notice the Chaos Culture moniker on there; this is their creation. You’ll probably want Live 10 Suite, but anything from Live 9.7.5 on, plus an active Max for Live license (with Max 8.0.2), will work, across Mac and Windows.
It’s so deep, it suggests whole new workflows and compositional ideas, so I’ll be sure to start some music from scratch with this one. But it’s really quite well done, and a rich enough approach to modulation that developers on other environments may well want to have a look.
Signal is €88.05 – pricey for a Max for Live creation, but then it’s arguably a bigger addition than some recent Live updates from Ableton themselves. If you have a go, let us know how it works; I’ll try to post some more impressions in August.
It’s an automatic glitching bass. It’s a transformative set of 128 Wavetable sounds. It’s a Max for Live chaining device. It’s all of that – it’s Leakage, the free/pay-what-you-will Ableton Live creation from Tom Cosm.
The idea is to give you ever-changing bassline sounds each time you hit a note, for colorful and glitchy results. To pull that off, you get a number of features:
128 custom Wavetable presets
Max for Live device that switches sounds
Preset switching, via chains – 128 chains, one for each sound
Set number of steps, up to 128, to determine rate of change
“Count MIDI” sets the step size to the number of notes in the active clip
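The preset-cycling idea in the list above can be sketched in a few lines – this is hypothetical logic only (the `ChainStepper` class is my invention; the real device does this with a Rack’s chain selector inside Max for Live).

```python
# Rough sketch of Leakage's preset-cycling idea: each incoming note advances a
# chain-selector value so a different Wavetable preset (chain) sounds per hit.
# Hypothetical logic only -- the real device does this in Max for Live.

class ChainStepper:
    def __init__(self, num_chains=128, steps=128):
        self.num_chains = num_chains
        self.steps = min(steps, num_chains)  # rate of change, up to 128
        self.position = 0

    def on_note(self):
        """Return the chain index to activate for this note, then advance."""
        chain = self.position * (self.num_chains // self.steps)
        self.position = (self.position + 1) % self.steps
        return chain

stepper = ChainStepper(num_chains=128, steps=4)
print([stepper.on_note() for _ in range(6)])  # cycles through 4 of 128 chains
```

Set `steps` to the note count of the active clip and you get the “Count MIDI” behavior: the sound changes once per note, wrapping at the end of the clip.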
Tom says this is the culmination of five years of work, but he’s been waiting for Ableton Live 10.1 and the processing bandwidth of current machines to unleash this. You’ll of course need Live 10.1 with Wavetable and a Max for Live license (most commonly via Suite).
This is pay what you want, starting at $0 to download. If you do put in some money, you’ll be added to an early access list for promised future editions, with bassline, lead, and effect features.
It’s really encouraging to hear Tom talk about how well that’s worked:
“To be honest, it blew my mind how many of you made a contribution. People chipped in 1, 2 or 5 bucks… but a lot of you did! It was so much it covered my rent and bills for a month, freeing up my time so I could work on this Leakage release. I was totally blown away by the generosity, so I am going to keep rolling with this system. Even if it’s just 2 dollars, it all adds up and means I can keep pumping out new and exciting tools, without having to restrict the availability to people who have money.”
It’s all about voltage these days. Ableton’s new CV Tools are designed for integrating with modular and semi-modular/desktop gear with CV. And they’re built in Max – meaning builders can learn from these tools and build their own.
The basic idea of CV Tools, like any software-CV integration, is to use your computer as an additional source of modulation and control. You route analog signal directly to your audio interface – you’ll need an interface that has DC coupled outputs (more about that separately). But once you do that, you can make your software and hardware rigs work together, and use your computer’s visual interface and open-ended possibilities to do still more stuff with analog gear.
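At bottom, CV-over-audio just means writing a slowly varying DC level to an output channel instead of an audible waveform. Here’s a sketch of that idea, assuming the common 1V/octave pitch standard; the function names and the 10 V full-scale figure are my own illustrative choices, and the actual voltage depends on your DC-coupled interface’s output range.

```python
# How CV-over-audio works in outline: the software writes a slowly varying DC
# signal to an audio output. Assumes 1V/oct pitch scaling and a hypothetical
# 10 V full-scale output -- check your own interface's specs.

def pitch_to_cv(midi_note, base_note=60, volts_per_octave=1.0):
    """Map a MIDI note to a control voltage, 1V/oct relative to a base note."""
    return (midi_note - base_note) / 12.0 * volts_per_octave

def cv_block(midi_note, num_samples, full_scale_volts=10.0):
    """Render a constant-voltage block as normalized audio samples (-1..1)."""
    level = pitch_to_cv(midi_note) / full_scale_volts
    return [level] * num_samples

print(pitch_to_cv(72))   # one octave up -> 1.0 V
print(cv_block(72, 4))   # constant 0.1 blocks at a 10 V full-scale output
```

This is also why DC coupling matters: a normal AC-coupled output filters away exactly these constant levels, which is fine for audio but useless for pitch CV.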
This is coming on the eve of Superbooth, and certainly a lot of the audience will be people with modular racks. But nowadays, hardware with CV I/O is hardly limited to Eurorack – gear from the likes of Moog, Arturia, KORG, and others also makes sense with CV.
CV Tools aren’t the first Max for Live tools for CV – not by far. Spektro Audio makes the free CV Toolkit Mini, for instance. Its main advantage is a single, integrated interface – and a clever patch bay. There’s a more extensive version available for US$19.99.
Ableton’s own CV Tools is news, though, in that these modules are powerful, flexible, and polished, and have a very Ableton-esque UI. They also come from a collaboration with Skinnerbox, the live performance-oriented gearheads here in Berlin, so I have no doubt they’ll be useful. (Yep, that’s them in the video.) I think there’s no reason not to grab this and Spektro and go to town.
And since these are built in Max, Max patchers may want to take a look inside – to mod or use as the basis of your own.
What you get:
CV Instrument, which complements existing Ableton devices for integrating outboard MIDI instruments and effects – it lets you treat modular/analog gear as if it’s a plug-in integrated with Live
CV Triggers for triggering drums and sequencing drum modules
CV Utility, a signal processing hub inside Live, for adding automation curves, add/shift/multiply operations, and other processing
CV Clock In and CV Clock Out for clocking Live from outboard analog gear and vice versa
CV In, which connects outboard analog signals directly to modulation of parameters inside Live
CV Shaper, CV Envelope Follower, and CV LFO, which give you graphical tools for designing modulation inside Live and using it for CV control of your analog hardware
And there’s more: the Rotating Rhythm Generator, which lets you dial up polyrhythms. This one works with both MIDI and CV, so you can work with either kind of external hardware.
I got to chat with Skinnerbox, and there’s even more here than may be immediately obvious.
For one thing, you get what they tell us is “extremely accurate broad-range” auto calibration of oscillators, filters, and so on. That’s often an issue with analog equipment, especially once you start getting complex or adding polyphony (or creating polyphony by mixing your software instruments with your hardware). Here’s a quick demo:
Clocking, they say, is “jitter free” and “super high resolution.”
So this means you can make a monster hybrid combining your computer running Ableton Live (and all your software) with hardware, without having to have the clock be all over the place or everything out of tune. (Well, unless that’s what you’re going for!)
If you’re in Berlin, Skinnerbox will play live with the rig this Friday at Superbooth.
They sent us this quick demo of working with the calibration tools, resulting in an accurate ten-octave range (here with oscillator from Endorphin.es).
To interface with their gear, they’re using the Expert Sleepers ES-8 interface in the modular. You could also use a DC-coupled audio interface, though – MOTU audio interfaces are a popular choice, since they offer DC coupling across a huge range of interface configurations.
CV Tools is listed as “coming soon,” but a beta version is available now.
For full CV control of analog gear, you’ll want a DC-coupled audio interface. Most audio interfaces lack that feature – I’m writing an explanation of this in a separate story – but if you do have one with compatible outputs, you’ll be able to take full advantage of the features here, including tuned pitch control. MOTU have probably made more interfaces that work than anyone else. You can also look to a dedicated interface like the Expert Sleepers one Skinnerbox used in the video above.
See MOTU and Expert Sleepers, both of which Skinnerbox have tested:
Universal Audio have already written to say they’ll be demoing DC coupling on their audio interfaces at Superbooth with Ableton’s CV Tools, so their stuff works, too. (Double-checking which models they’re using.)
But wait – just because you lack the hardware doesn’t mean you can’t use some of the functionality here with other audio interfaces. Skinnerbox remind us that any audio interface inputs will work with CV In in Pitch mode. Clock in and out will work with any device, too.
It’s the season of the wavetable – again. With Ableton Live 10.1 on the horizon and its free Wavetable device, we’ve got yet another free Max for Live device for making sound materials – and this time, you can make your wavetables from images.
Let’s catch you up first.
Ableton Live 10.1 will bring Wavetable as a new instrument to Standard and Suite editions – arguably one of the bigger native synth additions to Live in its history, ranking with the likes of Operator. And sure, as when Operator came out, you already have plug-ins that do the same; Ableton’s pitch is, as always, their unique approach to UI (love it or hate it), integration with the host, and … having it right in the box:
Earlier this week, we saw one free device that makes wavetables for you, built as a Max for Live device. (Odds are anyone able to run this will have a copy of Live with Wavetable in it, since it targets 10.1, but it also exports to other tools). Wave Weld focuses on dialing in the sounds you need and spitting out precise, algorithmic results:
One thing Wave Weld cannot do, however, is make a wavetable out of a picture of a cat.
For that, you want Image2Wavetable. The name says it all: it generates wavetable samples from image data.
This means if you’re handy with graphics software, or graphics code like Processing, you can also make visual patterns that generate interesting wavetables. It reminds me of my happy hours and hours spent using U+I Software’s ground-breaking MetaSynth, which employs some similar concepts to build an entire sound laboratory around graphic tools. (It’s still worth a spin today if you’ve got a Mac; among other things, it is evidently responsible for those sweeping digital sounds in the original Matrix film, I’m told.)
Image2Wavetable is new, the creation of Dillon Bastan and Carlo Cattano – and there are some rough edges, so be patient and it sounds like they’re ready to hear some feedback on how it works.
But the workflow is really simple: drag and drop image, drag and drop resulting wavetable into the Wavetable instrument.
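The underlying idea is straightforward enough to sketch – this is not Image2Wavetable’s actual code, just an illustration of the concept: scan an image column by column and turn each column’s brightness profile into one frame (single cycle) of a wavetable.

```python
# Illustrative sketch of the image-to-wavetable concept -- not the device's
# actual code: each image column's brightness becomes one wavetable frame.

def image_to_wavetable(pixels, frame_size=8):
    """pixels: 2D list of grayscale values 0..255, rows x columns.

    Each column becomes one wavetable frame: brightness is resampled to
    frame_size points and recentered to the -1..1 audio range.
    """
    rows = len(pixels)
    cols = len(pixels[0])
    frames = []
    for c in range(cols):
        column = [pixels[r][c] for r in range(rows)]
        frame = []
        for i in range(frame_size):
            src = int(i * rows / frame_size)         # nearest-neighbour resample
            frame.append(column[src] / 127.5 - 1.0)  # 0..255 -> -1..1
        frames.append(frame)
    return frames

# A tiny 4x2 "image": a dark column and a bright column.
img = [[0, 255], [0, 255], [0, 255], [0, 255]]
print(image_to_wavetable(img, frame_size=4))
```

Feed this a photo of a cat and each column of fur texture becomes a different harmonic snapshot – which is why visually patterned images tend to make the most interesting tables.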
Okay, I suspect I know what I’m doing for the rest of the night.
Wavetables are capable of a vast array of sounds. But just dumping arbitrary audio content into a wavetable is unlikely to get the results you want. And that’s why Wave Weld looks invaluable: it makes it easy to generate useful wavetables, in an add-on that’s free for Max for Live.
Ableton Live users are going to want their own wavetable maker very soon. Live 10.1 will add Wavetable, a new synth based on the technique. See our previous preview:
Live 10.1 is in public beta now, and will be free to all Live 10 users soon.
So long as you have Max for Live to run it, Wave Weld will be useful to other synths, as well – including the developer’s own Wave Junction.
Because wavetables are periodic by their very nature, it’s more likely helpful to generate content algorithmically than just dump sample content of your own. (Nothing against the latter – it’s definitely fun – but you may soon find yourself limited by the results.)
Wave Weld handles generating those materials for you, as well as exporting them in the format you need.
1. Make the wavetable: use waveshaping controls to dial in the sound materials you want.
2. Build up a library: adapt existing content or collect your own custom creations.
3. Export in the format you need: adjusting the size lets you support Live 10.1’s own device or other hardware and plug-ins.
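The generate-and-export steps above come down to something like this in miniature – an illustration of the general technique, not Wave Weld’s actual DSP: sum sine harmonics into a single-cycle frame, then resample to whatever table size the target synth wants.

```python
# The generate/export steps in miniature (an illustration, not Wave Weld's
# actual DSP): build a periodic waveform from harmonics, then resample it
# to a target table size.

import math

def additive_frame(harmonics, size=2048):
    """Sum sine harmonics into one single-cycle wavetable frame.

    harmonics: list of (harmonic_number, amplitude) pairs.
    """
    frame = []
    for n in range(size):
        phase = 2 * math.pi * n / size
        frame.append(sum(a * math.sin(h * phase) for h, a in harmonics))
    peak = max(abs(s) for s in frame) or 1.0
    return [s / peak for s in frame]  # normalize to -1..1

def resize(frame, new_size):
    """Nearest-neighbour resample so one table can target multiple formats."""
    old = len(frame)
    return [frame[int(i * old / new_size)] for i in range(new_size)]

# First three odd harmonics at 1/n amplitude: a square-ish wave.
square_ish = additive_frame([(1, 1.0), (3, 1 / 3), (5, 1 / 5)], size=2048)
print(len(resize(square_ish, 1024)))  # 1024
```

Because everything here is built from whole-numbered harmonics of one cycle, the result is periodic by construction – which is exactly the property that makes algorithmic generation a better fit for wavetables than dumping in arbitrary sample content.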
The waveshaping features are really the cool part:
Unique waveshaping controls to generate custom wavetables
Sine waveshape phase shift and curve shape controls
Additive-style synthesis via a choice of twenty-four sine waveshape harmonics for both positive and negative phase angles
Saw waveshape curve sharpen and partial controls
Pulse waveshape width, phase shift, curve smooth and curve sharpen controls
Triangle waveshape phase shift, curve smooth and curve sharpen controls
Random waveshape quantization, curve smooth and thinning controls
Wave Weld isn’t really intended as a synth, but one advantage of it being an M4L device is, you can easily preview sounds as you work.
The download is free with a sign-up for their mailing list.
They’ve got a bunch of walkthrough videos to get you started, too:
Major kudos to Phelan Kane of Meta Function for this release. (Phelan is an Ableton Certified Trainer as well as a specialist in Reaktor and Maschine on the Native Instruments side, as well as London chairman for AES.)
I’m also interested in other ways to go about this – SuperCollider code, anyone?
Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.
Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.
I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.
Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:
Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.
Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.
Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.
You may know Magenta from its involvement in the NSynth synthesizer —
NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.
But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.
Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.
Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
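A much simpler stand-in can show what “training on a melody corpus” means in practice: the first-order Markov model below just counts note-to-note transitions. Magenta’s recurrent networks are far more capable, but the dependence on the training set works the same way – change the corpus, change the output. (This code is mine, purely for illustration; it is not how Magenta is implemented.)

```python
# A toy illustration of corpus-dependent sequence prediction: a first-order
# Markov model learns note-to-note transition counts from training melodies.
# Magenta's RNN/VAE models are far richer, but share this dependence on data.

from collections import Counter, defaultdict

def train(melodies):
    transitions = defaultdict(Counter)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            transitions[a][b] += 1
    return transitions

def most_likely_next(transitions, note):
    return transitions[note].most_common(1)[0][0]

# Train on a tiny "corpus" of MIDI pitches; a corpus of bluegrass tunes or
# gamelan transcriptions would yield very different predictions.
corpus = [[60, 62, 64, 62, 60], [60, 62, 64, 65, 64]]
model = train(corpus)
print(most_likely_next(model, 62))  # 64: this corpus mostly steps up from D
```

Swap in a different training set and `most_likely_next` changes its answers – the miniature version of why a bluegrass-trained Magenta model continues melodies differently than a plainchant-trained one would.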
One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)
What’s in Magenta Studio
Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.
Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.
Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and length in bars.
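The usual mechanism behind a temperature control looks like this – a sketch of the standard technique, not Magenta Studio’s actual code: before sampling, the model’s output probabilities get reshaped, so low temperature sharpens them (more predictable) and high temperature flattens them (more surprising).

```python
# Standard temperature scaling of a probability distribution -- the generic
# technique behind sliders like Magenta Studio's, not its actual code.

import math

def apply_temperature(probs, temperature):
    """Rescale a probability distribution by a sampling temperature."""
    logits = [math.log(p) / temperature for p in probs]
    exps = [math.exp(l) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

probs = [0.7, 0.2, 0.1]
print(apply_temperature(probs, 0.5))  # sharper: the favourite dominates more
print(apply_temperature(probs, 2.0))  # flatter: underdogs get a real chance
```

Note the non-linearity: halving the temperature doesn’t halve anything audible, which matches the caveat above about adjusting the slider by ear.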
The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is fairly set up with expectations about what a drum kit is, and with melodies around a 12-tone equal tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)
Here are your options:
Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.
Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.
Interpolate: Instead of one clip, use two clips and merge/morph between them.
Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.
Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
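Of the modes above, Interpolate is the easiest to sketch conceptually: MusicVAE encodes each clip into a latent vector, and the morph is a weighted blend between the two vectors before decoding back to notes. The code below is only a sketch of that blending step – the 4-dimensional vectors are hypothetical stand-ins (real MusicVAE latents are much larger, and the encoder/decoder are neural networks).

```python
# Conceptual sketch of latent-space interpolation, as used by modes like
# Interpolate: blend two encoded clips before decoding. The tiny vectors
# here are hypothetical stand-ins for real MusicVAE latents.

def interpolate(latent_a, latent_b, steps):
    """Return evenly spaced blends from latent_a to latent_b, inclusive."""
    out = []
    for i in range(steps):
        t = i / (steps - 1)
        out.append([a * (1 - t) + b * t for a, b in zip(latent_a, latent_b)])
    return out

clip_a = [0.0, 1.0, 0.0, 0.5]   # stand-ins for two encoded clips
clip_b = [1.0, 0.0, 1.0, 0.5]
for v in interpolate(clip_a, clip_b, 3):
    print(v)  # first is clip_a, last is clip_b, middle is the halfway morph
```

The interesting part is that the blend happens in the learned space, not on the notes themselves – so the halfway point is a musically plausible clip, not a literal overlay of the two inputs.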
So, is it useful?
It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.
More to the point with something like Magenta is, do you really get musically useful results?
Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.
Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.
One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.
I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.
The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.
And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.
And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.
Where this could go next
There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with the history of pure technical inquiries past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.
As Jesse Engel tells CDM:
We’re a research group (not a Google product group), which means that Magenta Studio is not static, and a lot more interesting models are probably on the way.
Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.
With interfaces that look lifted from a Romulan warbird and esoteric instruments, effects, and sequencers, K-Devices have been spawning surprising outcomes in Ableton Live for some time now. ESQ is the culmination of that: a cure for preset sounds and ideas in a single device.
You likely know the problem already: all of the tools in software like Ableton Live that make it easy to quickly generate sounds and patterns also tend to do so in a way that’s … always the same. So instead of being inspiring, you can quickly feel stuck in a rut.
ESQ is a probability-based step sequencer: you adjust a few controls – velocity, chance, and relative delay for each step – to generate a wide variety of possibilities. You can create polyrhythms (multiple tracks of the same length, but different steps) or different-length tracks, you can copy and paste, and there are various random functions to keep things fresh. The results are still somehow yours – maybe even more so – it’s just that you use probability and generative rules to get to what you want when you aren’t sure how to describe it. Or maybe before you knew you wanted it.
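Those per-step parameters can be sketched in plain Python – this is my own illustration of the general probability-sequencer idea, not K-Devices’ code; the dict keys and `run_sequence` helper are invented for the example.

```python
# Per-step velocity / chance / delay, sketched as plain Python -- an
# illustration of the probability-sequencer idea, not K-Devices' code.

import random

def run_sequence(steps, rng=random.random):
    """steps: list of dicts with 'velocity', 'chance', 'delay' (in beats).

    Returns (step_index, time, velocity) for the steps that fired this pass.
    """
    events = []
    for i, step in enumerate(steps):
        if rng() < step["chance"]:
            events.append((i, i + step["delay"], step["velocity"]))
    return events

pattern = [
    {"velocity": 100, "chance": 1.0, "delay": 0.0},   # always fires, on grid
    {"velocity": 60, "chance": 0.5, "delay": 0.02},   # fires half the time
    {"velocity": 90, "chance": 0.8, "delay": -0.01},  # usually, slightly early
]
print(run_sequence(pattern))
```

Run it a few times and no two passes are identical – which is the whole point: the pattern is a distribution of possibilities rather than one fixed sequence.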
Because you can trigger up to 12 notes, you can use ESQ to turn bland presets into something unexpected (like working with preset Live patches). Or you can use it as a sequencer with all those fun modular toys we’ve been talking about lately (VCV Rack, Softube Modular, Cherry Audio Voltage Modular, and so on) – because 5- and 8-step sequencers are often just dull.
There’s no sound produced by ESQ – it’s just a sequencer – but it can have a big enough impact on devices that this “audio” demo is just one instance of ESQ and one Drum Rack. Even those vanilla kits start to get more interesting.
K-Devices has been working this way for a while, but ESQ feels like a breakthrough. The generative sequence tools are uniquely complete and especially powerful for producing rhythms. You can make this sound crazy and random and IDM-y, but you can also add complexity without heading into deep space – it’s really up to you.
And they’ve cleverly made two screens – a full parameter screen that gets deep and detailed, and a compact device screen that lets you shift everything with single gestures or adjust everything as macros – ideal for live performance or for making bigger changes.
It seems like a good wildcard to keep at your disposal … for any of those moments when you’re getting stuck and boring.
A crowd-funded custom controller has just arrived on the scene, designed to assist live triggering and looping in Ableton Live. And there’s already a free download for Max for Live to get you started, even without the hardware.
Hardware like Ableton’s Push lets you play Live with your fingers – but what about your feet? (Ableton Sole?) And what about looping? Pierre-Antoine Grison, Ableton Certified Trainer and producer/musician signed to Ed Banger Records, has come up with his own solution – just in time to show it this weekend at Ableton’s aptly-titled Loop “summit for music makers.” “State Of The Loop” is a custom MIDI controller for Ableton Live’s built-in Looper device.
The Looper in Ableton Live has been around for a few versions, after loads of requests from users. It delivered the kind of looping workflows you’d expect from a looping pedal. But that doesn’t mean everyone knows how to use it, or use it effectively. There are some nice resources online, including:
The stomp-style hardware controls not only the Looper device itself but also scenes. So it works for both controlling entire sets and for pedal-style looping, and you can use multiple (software) loopers so you can layer using different on-screen devices.
Display and control the state of Live’s Looper
Unlimited number of loopers!
2 Expression Pedal inputs with “dynamic mapping”
Scenes Mode to launch Scenes and display their color and name
Sturdy metal case
100% Made in France
USB or MIDI connection for longer distances (up to 15m/50ft)
Very light on the CPU
Weight : 1.7 kg / 3 lb
WxLxH : 30 x 13 x 6 cm / 12 x 5 x 2.5 inches
There’s even a free download that adds some features Ableton Live forgot – the equivalent of follow actions for scenes, plus a heads-up display so you can see what’s happening without hunching over your computer screen. (Seriously, Ableton, those belong as standard features in Live!)
You can use that download as long as you have a compatible version of Live and Max for Live; no hardware needed.