Free Downgrade turns Ableton Live into lo-fi wobbly vaporwave tape

Fidelity? High-quality sound? No – degradation! And if you don’t have a ragged VHS deck or cassette Walkman handy, these free effects racks in Ableton Live will sort you out.

Downgrade is the work of Tom Cosm, long-time Ableton guru. There are five effects:

Fluffer
Corrupt
Hiss
Morph
Flutter

— plus if you give him literally US$1 or more (you cheapskate), you get an additional Stutter rack.

Basically, you get loads of controls for manipulating downsampling, tape effects, saturation, distortion, modulation of various kinds, echo, vocoder, and more. It’s a sort of retro Vaporwave starter kit if you’d like to think of it that way – or an easy, dial-up greatest hits of everything Ableton Live can now do to make your sound worse. And by worse, I mean better, naturally.

Ableton have gradually added all of this to Live – digital downsampling features early on, then simulated analog tape, saturation, and nonlinear modulation more recently. Tom has neatly packed them into one very useful set of Racks.

Notice I say “Racks,” not Max for Live devices. That means these will mostly run on different editions of Live, and they’re a bit easier to pick apart and adjust/modify – without requiring Max knowledge.

Go download them:

https://gumroad.com/l/wmIbJ


Max TV: go inside Max 8’s wonders with these videos

Max 8 – and by extension the latest Max for Live – offers some serious powers to build your own sonic and visual stuff. So let’s tune in some videos to learn more.

The major revolution in Max 8 – and a reason to look again at Max even if you’ve lapsed for some years – is really MC. It’s “multichannel,” so it has significance in things like multichannel speaker arrays and spatial audio. But even that doesn’t do it justice. By transforming the architecture of how Max treats multiple, well, things, you get a freedom in sketching new sonic and instrumental ideas that’s unprecedented in almost any environment. (SuperCollider’s bus and instance system is capable of some feats, for example, but it isn’t as broad or intuitive as this.)

The best way to have a look at that is via a video from Ableton Loop, where the creators of the tech talk through how it works and why it’s significant.

Description [via C74’s blog]:

In this presentation, Cycling ’74’s CEO and founder David Zicarelli and Content Specialist Tom Hall introduce us to MC – a new multi-channel audio programming system in Max 8.

MC unlocks immense sonic complexity with simple patching. David and Tom demonstrate techniques for generating rich and interesting soundscapes that they discovered during MC’s development. The video presentation touches on the psychoacoustics behind our recognition of multiple sources in an audio stream, and demonstrates how to use these insights in both musical and sound design work.

The patches aren’t all ready for download (hmm, some cleanup work being done?), but watch this space.

If that’s got you in the learning mood, there are now a number of great video tutorials up for Max 8 to get you started. (That said, I also recommend the newly expanded documentation in Max 8 for more at-your-own-pace learning, though this is nice for some feature highlights.)

dude837 has an aptly-titled “delicious” tutorial series covering both musical and visual techniques – and the dude abides, skipping directly to the coolest sound stuff and best eye candy.

Yes to all of these:

There’s a more step-by-step set of tutorials by dearjohnreed (including the basics of installation, so really hand-holding from step one):

For developers, the best thing about Max 8 is likely the new Node features. And this means the possibility of wiring musical inventions into the Internet as well as applying some JavaScript and Node.js chops to anything else you want to build. Our friends at C74 have the hook-up on that:

Suffice to say that also could mean some interesting creations running inside Ableton Live.

It’s not a tutorial, but on the visual side, Vizzie is also a major breakthrough in the software:

That’s a lot of looking at screens, so let’s close out with some musical inspiration – and a reminder of why doing this learning can pay off later. Here’s Second Woman, favorite of mine, at LA’s excellent Bl__K Noise series:


Use Ableton Live faster with the free Live Enhancement Suite

Day in, day out, a lot of producers spend a lot of time editing in Ableton Live. Here’s a free tool that automates some common tasks so you can work more quickly – easing some FL Studio envy in the process.

This one comes to us from Madeleine Bloom’s terrific Sonic Bloom, the best destination for resources on learning and using Ableton Live. Live Enhancement Suite is Windows-only for the moment, but a Mac version is coming soon.

The basic idea is, LES adds shortcuts for producers, and some custom features (like sane drawing) you might expect from other tools:

Add devices (like your favorite plug-ins) using a customizable pop-up menu (opened with a double right-click)

Draw notes easily with the ~ key in Piano Roll.

Pop up a shortcut menu with scales in Piano Roll

Add locators (right shift + L) at the cursor

Pan with your mouse, not just the keyboard (via the middle mouse button, so you’ll need a three-button mouse for this one)

Save multiple versions (a feature FL Studio users know well)

Ctrl-shift-Z to redo

Alt-E to view envelope mode in piano roll

And there are more customizations and multi-monitor support, too.

Ableton are gradually addressing long-running user requests to make editing easier; Live 10.1 builds on the work of Live 10. Case in point: 10.1 finally lets you solo a selected track (mentioned in the video as previously requiring one of these shortcuts). But it’s likewise nice to see users add in what’s missing.

Oh, and… you’re totally allowed to call it “Ableton.” People regularly refer to cars by the make rather than the model. We know what you mean.

Here’s a video walking through these tools and the creator Dylan Tallchief’s approach:

More info:

LES Collaborators:
Inverted Silence: https://soundcloud.com/invertedsilence
Aevi: https://twitter.com/aevitunes
Sylvian: https://sylvian.co/

https://www.patreon.com/dylantallchief
https://www.twitter.com/dylantallchief
https://soundcloud.com/dylantallchief
https://facebook.com/dylantallchief
https://www.twitch.tv/dylantallchief

Give it a go – I’ll try to check in when there’s a Mac version.

https://enhancementsuite.me/

PS, Windows users will want to check out the excellent open source AutoHotkey for automation, generally.


This free Ableton Live device makes images into wavetables

It’s the season of the wavetable – again. With Ableton Live 10.1 on the horizon and its free Wavetable device, we’ve got yet another free Max for Live device for making sound materials – and this time, you can make your wavetables from images.

Let’s catch you up first.

Ableton Live 10.1 will bring Wavetable as a new instrument to Standard and Suite editions – arguably one of the bigger native synth additions to Live in its history, ranking with the likes of Operator. And sure, as when Operator came out, you already have plug-ins that do the same; Ableton’s pitch is as always their unique approach to UI (love it or hate it), and integration with the host, and … having it right in the box:

Ableton Live 10.1: more sound shaping, work faster, free update

Earlier this week, we saw one free device that makes wavetables for you, built as a Max for Live device. (Odds are anyone able to run this will have a copy of Live with Wavetable in it, since it targets 10.1, but it also exports to other tools). Wave Weld focuses on dialing in the sounds you need and spitting out precise, algorithmic results:

Generate wavetables for free, for Ableton Live 10.1 and other synths

One thing Wave Weld cannot do, however, is make a wavetable out of a picture of a cat.

For that, you want Image2Wavetable. The name says it all: it generates wavetable samples from image data.

This means if you’re handy with graphics software, or graphics code like Processing, you can also make visual patterns that generate interesting wavetables. It reminds me of my happy hours and hours spent using U&I Software’s ground-breaking MetaSynth, which employs some similar concepts to build an entire sound laboratory around graphic tools. (It’s still worth a spin today if you’ve got a Mac; among other things, it is evidently responsible for those sweeping digital sounds in the original Matrix film, I’m told.)

Image2Wavetable is new, the creation of Dillon Bastan and Carlo Cattano – and there are some rough edges, so be patient and it sounds like they’re ready to hear some feedback on how it works.

But the workflow is really simple: drag and drop image, drag and drop resulting wavetable into the Wavetable instrument.
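If you’re curious what that conversion might look like under the hood, here’s a minimal sketch of the general idea – my own guess, to be clear, not the actual Image2Wavetable code. Each row of a grayscale image becomes one 1024-sample frame; the filename, the 64-frame count, and the Pillow/soundfile dependencies are all just stand-ins for the demo:

```python
# A rough sketch of the image-to-wavetable idea - not Image2Wavetable's code.
import numpy as np
from PIL import Image          # pip install pillow
import soundfile as sf         # pip install soundfile

FRAME, FRAMES = 1024, 64       # Live 10.1 user wavetables use 1024-sample frames

img = Image.open("cat.png").convert("L").resize((FRAME, FRAMES))
table = np.asarray(img, dtype=np.float32) / 255.0 * 2.0 - 1.0  # brightness -> -1..1

table -= table.mean(axis=1, keepdims=True)             # remove per-frame DC offset
peak = np.abs(table).max(axis=1, keepdims=True)
table /= np.where(peak > 0, peak, 1.0)                 # normalize each frame

sf.write("cat_wavetable.wav", table.reshape(-1), 44100)  # frames laid end-to-end
```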

Okay, I suspect I know what I’m doing for the rest of the night.

Image2Wavetable Device [maxforlive.com]


Generate wavetables for free, for Ableton Live 10.1 and other synths

Wavetables are capable of a vast array of sounds. But just dumping arbitrary audio content into a wavetable is unlikely to get the results you want. And that’s why Wave Weld looks invaluable: it makes it easy to generate useful wavetables, in an add-on that’s free for Max for Live.

Ableton Live users are going to want their own wavetable maker very soon. Live 10.1 will add Wavetable, a new synth based on the technique. See our previous preview:

Ableton Live 10.1: more sound shaping, work faster, free update

Live 10.1 is in public beta now, and will be free to all Live 10 users soon.

So long as you have Max for Live to run it, Wave Weld will be useful for other synths, as well – including the developer’s own Wave Junction.

Because wavetables are periodic by their very nature, it’s often more helpful to generate content algorithmically than to just dump in sample content of your own. (Nothing against the latter – it’s definitely fun – but you may soon find yourself limited by the results.)

Wave Weld handles generating those materials for you, as well as exporting them in the format you need.

1. Make the wavetable: use waveshaping controls to dial in the sound materials you want.

2. Build up a library: adapt existing content or collect your own custom creations.

3. Export in the format you need: adjusting the size lets you support Live 10.1’s own device or other hardware and plug-ins.

The waveshaping features are really the cool part:

Unique waveshaping controls to generate custom wavetables
Sine waveshape phase shift and curve shape controls
Additive style synthesis via choice of twenty four sine waveshape harmonics for both positive and negative phase angles
Saw waveshape curve sharpen and partial controls
Pulse waveshape width, phase shift, curve smooth and curve sharpen controls
Triangle waveshape phase shift, curve smooth and curve sharpen controls
Random waveshape quantization, curve smooth and thinning controls
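Meta Function haven’t published what’s under Wave Weld’s hood, but the additive-style option in that list is easy to picture in code. A minimal sketch, assuming a 1024-sample frame and a made-up harmonic recipe:

```python
# Not Wave Weld's code - a toy version of additive-style wavetable generation:
# sum a handful of sine harmonics into one single-cycle frame and write it out
# at whatever frame size the target synth expects.
import numpy as np
import soundfile as sf   # pip install soundfile

def additive_frame(amps, size=1024):
    """One single-cycle frame from a list of harmonic amplitudes."""
    t = np.arange(size) / size                          # one period, 0..1
    frame = sum(a * np.sin(2 * np.pi * (h + 1) * t)     # harmonic number h+1
                for h, a in enumerate(amps))
    return frame / np.abs(frame).max()                  # normalize to -1..1

# Hypothetical recipe: fundamental plus a couple of odd harmonics.
sf.write("additive_frame.wav", additive_frame([1.0, 0.0, 0.5, 0.0, 0.25]), 44100)
```

Swap the amplitude list and you get a different timbre – which is essentially what a bank of harmonic controls is doing for you.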

Wave Weld isn’t really intended as a synth, but one advantage of it being an M4L device is, you can easily preview sounds as you work.

More information on the developer’s site – http://metafunction.co.uk/wave-weld/

The download is free with a sign-up for their mailing list.

They’ve got a bunch of walkthrough videos to get you started, too:

Major kudos to Phelan Kane of Meta Function for this release. (Phelan is an Ableton Certified Trainer, a specialist in Reaktor and Maschine on the Native Instruments side, and London chairman for the AES.)

I’m also interested in other ways to go about this – SuperCollider code, anyone?

Wavetable on!


Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. It gives you easy access to machine learning models for musical patterns, so you can generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, applying even GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

That also has its own Ableton Live device, from a couple years back.

https://magenta.tensorflow.org/nsynth-instrument

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
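Here’s a deliberately tiny stand-in for that idea – a first-order Markov chain rather than the recurrent networks Magenta actually uses, trained on two made-up toy “datasets” – just to make concrete how the training material shapes what comes out:

```python
# Not Magenta's code - a much simpler model (Markov chain, not an RNN),
# purely to illustrate: swap the training melodies and the output changes.
import random
from collections import defaultdict

def train(melodies):
    """Count note-to-note transitions in a list of MIDI-pitch lists."""
    table = defaultdict(list)
    for mel in melodies:
        for a, b in zip(mel, mel[1:]):
            table[a].append(b)
    return table

def generate(table, start, length=16):
    out = [start]
    for _ in range(length - 1):
        out.append(random.choice(table.get(out[-1], [start])))
    return out

# Hypothetical toy "datasets" with very different flavors:
bluegrass = [[62, 64, 66, 69, 66, 64, 62], [57, 62, 66, 62, 69, 66]]
plainchant = [[60, 62, 60, 58, 60, 62, 63, 62], [60, 58, 57, 58, 60]]

print(generate(train(bluegrass), 62))   # leaps around the scale
print(generate(train(plainchant), 60))  # mostly stepwise motion
```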

One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence they chose this new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and the length in bars.
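If you’re wondering what temperature typically does mathematically in sequence models like these: the model’s raw scores (logits) get divided by the temperature before the softmax, so low values sharpen the distribution (predictable) and high values flatten it (surprising). A quick sketch, with made-up scores for three notes:

```python
# Standard temperature sampling - low temperature = predictable,
# high temperature = surprising.
import numpy as np

def sample(logits, temperature=1.0):
    scaled = np.asarray(logits, dtype=np.float64) / temperature
    probs = np.exp(scaled - scaled.max())   # subtract max for numerical stability
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

logits = [2.0, 1.0, 0.2]                           # hypothetical scores for 3 notes
print([sample(logits, 0.1) for _ in range(10)])    # almost always note 0
print([sample(logits, 2.0) for _ in range(10)])    # much more varied
```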

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is largely set up around expectations about what a drum kit is, and around melodies on a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them.

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with the history of pure technical inquiries past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.

As Jesse Engel tells CDM:

We’re a research group (not a Google product group), which means that Magenta Studio is not static, and a lot more interesting models are probably on the way.

Things like more impressive MIDI generation (https://magenta.tensorflow.org/music-transformer – check out “Score Conditioned”)

And state of the art transcription: (https://magenta.tensorflow.org/onsets-frames)

And new controller paradigms:
(https://magenta.tensorflow.org/pianogenie)

Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.

So okay, music makers – have at it:

g.co/magenta
g.co/magenta/studio


Ableton Live 10.1: more sound shaping, work faster, free update

There’s something about point releases – not the ones with any radical changes, but just the ones that give you a bunch of little stuff you want. That’s Live 10.1; here’s a tour.

Live 10.1 was announced today, but I sat down with the team at Ableton last month and have been working with pre-release software to try some stuff out. Words like “workflow” are always a bit funny to me. We’re talking, of course, mostly music making. The deal with Live 10.1 is, it gives you some new toys on the sound side, and makes mangling sounds more fun on the arrangement side.

Oh, and VST3 plug-ins work now, too. (MOTU’s DP10 also has that in an upcoming build, among others, so look forward to the Spring of VST3 Support.)

Let’s look at those two groups.

Sound tools and toys

User wavetables. Wavetable just got more fun – you can drag and drop samples onto Wavetable’s oscillator now, via the new User bank. You can get some very edgy, glitchy results this way, or if you’re careful with sample selection and sound design, more organic sounds.

This looks compelling.

Here’s how it works: Live splits up your audio snippet into 1024 sample chunks. It then smooths out the results – fading the edges of each table to avoid zero-crossing clicks and pops, and normalizing and minimizing phase differences. You can also tick a box called “Raw” that just slices up the wavetable, for samples that are exactly 1024 samples or a regular periodic multiple of that.
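Here’s roughly what that slicing process looks like, going by the description above – Ableton’s actual implementation isn’t public, and the 64-sample fade length here is my guess:

```python
# A sketch of the described behavior, not Ableton's code: chop audio into
# 1024-sample frames, fade the frame edges to avoid clicks, and normalize.
# "Raw" mode would skip the fades and normalization.
import numpy as np

FRAME = 1024

def to_wavetable(audio, raw=False):
    n = len(audio) // FRAME
    frames = np.array(audio[:n * FRAME], dtype=np.float64).reshape(n, FRAME)
    if raw:
        return frames                                  # just slice, no smoothing
    ramp = np.minimum(np.arange(FRAME), np.arange(FRAME)[::-1])
    fade = np.minimum(1.0, ramp / 64.0)                # 64-sample edge fades (a guess)
    frames *= fade
    peak = np.abs(frames).max(axis=1, keepdims=True)
    return frames / np.where(peak > 0, peak, 1.0)      # per-frame normalize
```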

Give me some time and we can whip up some examples of this, but basically you can glitch out, mangle sounds you’ve recorded, carefully construct sounds, or just grab ready-to-use wavetables from other sources.

But it is a whole lot of fun and it suggests Wavetable is an instrument that will grow over time.

Here’s that feature in action:

Delay. Simple Delay and Ping Pong Delay have merged into a single lifeform called … Delay. That finally updates an effect that hasn’t seen love since the last decade. (The original ones will still work for backwards project compatibility, though you won’t see them in a device list when you create a new project – don’t panic.)

At first glance, you might think that’s all that’s here, but in typical Ableton fashion, there are some major updates hidden behind those vanilla, minimalist controls. So now you have Repitch, Fade, and Jump modes. And there’s a Modulation section with rate, filter, and time controls (as found on Echo). Oh, and look at that little infinity sign next to the Feedback control.

Yeah, all of those things are actually huge from a sound design perspective. So since Echo has turned out to be a bit too much for some tasks, I expect we’ll be using Delay a lot. (It’s a bit like that moment when you figure out you really want Simpler and Drum Racks way more than you do Sampler.)

The old delays. Ah, memories…

And the new Delay. Look closely – there are some major new additions in there.

Channel EQ. This is a new EQ with visual feedback and filter curves that adapt across the frequency range – that is, “Low,” “Mid,” and “High” each adjust their curves as you change their controls. Since it has just three controls, that means Channel EQ sits somewhere between the dumbed down EQ Three and the complexity of EQ Eight. But it also means this could be useful as a live performance EQ when you don’t necessarily want a big DJ-style sweep / cut.

Here it is in action:

Arranging

The stuff above is fun, but you obviously don’t need it. Where Live 10.1 might help you actually finish music is in a slew of new arrangement features.

Live 10 felt like a work in progress as far as the Arrange view. I think it immediately made sense to some of us that Ableton were adjusting arrangement tools, and ironing out the difference between, say, moving chunks of audio around and editing automation (drawing all those lovely lines to fade things in and out, for instance).

But it felt like the story there wasn’t totally complete. In fact, the change may have been too subtle – different enough to disturb some existing users, but without a big enough payoff.

So here’s the payoff: Ableton have refined all those subtle Arrange tweaks with user feedback, and added some very cool shape drawing features that let you get creative in this view in a way that isn’t possible in other tools.

Fixing “$#(*& augh undo I didn’t want to do that!” Okay, this problem isn’t unique to Live. In every traditional DAW, your mouse cursor does conflicting things in a small amount of space. Maybe you’re trying to move a chunk of audio. Maybe you want to resize it. Maybe you want to fade in and out the edges of the clip. Maybe it’s not the clip you’re trying to edit, but the automation curves around it.

In studio terms, this sounds like one of the following:

[silent, happy clicking, music production getting … erm … produced]

OR ….
$#(*&*%#*% …. Noo errrrrrrrgggggg … GAACK! SDKJJufffff ahhh….

Live 10 added a toggle between automation editing and audio editing modes. For me, I was already doing less of the latter. But 10.1 is dramatically better, thanks to some nearly imperceptible adjustments to the way those clip handles work, because you can more quickly change modes, and because you can zoom more easily. (The zoom part may not immediately seem connected to this, but it’s actually the most important part – because navigating from your larger project length to the bit you’re actually trying to edit is usually where things break down.)

In technical terms, that means the following:

Quick zoom shortcuts. I’ll do a separate story on these, because they’re so vital, but you can now jump to the whole song or to details, zoom to various heights, and toggle between zoom states via keyboard shortcuts. There are even a couple of MIDI-mappable ones.

Clips in Arrangement have been adjusted. From the release notes: “The visualisation of Arrangement clips has been improved with adjusted clip borders and refinements to the way items are colored.” Honestly, you won’t notice, but ask the person next to you how much you’re grunting / swearing like someone is sticking something pointy into your ribs.

Pinch gestures! You can pinch-zoom the Arrangement and MIDI editor with the Option or Alt keys – that works well on Apple trackpads and newer PC trackpads. And yeah, this means you don’t have to use Apple Logic Pro just to pinch zoom. Ahem.

The Clip Detail View is clearer, too, with a toggle between automation and modulation clearly visible, and color-coded modulation for everything.

The Arrangement Overview was also adjusted with better color coding and new resizing.

In addition, Ableton have worked a lot with how automation editing functions. New in 10.1:

Enter numerical values. Finally.

Free-hand curves more easily. With grid off, your free-hand, wonky mouse curves now get smoothed into something more logical and with fewer breakpoints – as if you can draw better with the mouse/trackpad than you actually can.

Simplify automation. There’s also a command that simplifies existing recorded automation. Again – finally.
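Ableton don’t say what’s under the hood there, but the classic way to thin out breakpoint data is the Ramer–Douglas–Peucker algorithm: keep a point only if removing it would shift the curve by more than some tolerance. A sketch of that idea:

```python
# Classic Ramer-Douglas-Peucker breakpoint simplification - one plausible
# approach, not necessarily what Live 10.1 actually uses.
import math

def simplify(points, tol=0.01):
    """points: [(time, value), ...] sorted by time."""
    if len(points) < 3:
        return list(points)
    (x0, y0), (x1, y1) = points[0], points[-1]

    def dist(p):  # distance from p to the chord between the endpoints
        x, y = p
        den = math.hypot(x1 - x0, y1 - y0)
        if den == 0:
            return math.hypot(x - x0, y - y0)
        return abs((y1 - y0) * x - (x1 - x0) * y + x1 * y0 - y1 * x0) / den

    i, far = max(enumerate(points[1:-1], start=1), key=lambda kv: dist(kv[1]))
    if dist(far) <= tol:
        return [points[0], points[-1]]   # everything in between is expendable
    return simplify(points[:i + 1], tol)[:-1] + simplify(points[i:], tol)
```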

So that fixes a bunch of stuff, and while this is pretty close to what other DAWs do, I actually find Ableton’s implementation to be (at last) quicker and friendlier than most other DAWs. But Ableton kept going and added some more creative ideas.

Insert shapes. Now you have some predefined shapes that you can draw over automation lanes. It’s a bit like having an LFO / modulation, but you can work with it visually – so it’s nice for those who prefer that editing phase as a way to do their composition. Sadly, you can only access these via the mouse menu – I’d love some keyboard shortcuts, please – but it’s still reasonably quick to work with.

Modify curves. Hold down Option/Ctrl and you can change the shape of curves.

Stretch and skew. Reshape envelopes to stretch or skew them, or to stretch time / ripple edit.

Insert Shapes promises loads of fun in the Arrangement – words that have never been uttered before.

Check out those curve drawing and skewing/scaling features in action:

Freeze/Export

You can freeze tracks with sidechains, instead of a stupid dialog box popping up to tell you you can’t, because it would break the space-time continuum or crash the warp core injectors or … no, there’s no earthly reason why you shouldn’t be able to freeze sidechains on a computer.

You can export return and master effects on the actual tracks. I know, I know. You really loved bouncing out stems from Ableton or getting stems to remix and having little bits of effects from all the tracks on separate stems that were just echos, like some weird ghost of whatever it was you were trying to do. And I’m a lazy kid, who for some reason thinks that’s completely illogical since, again, this is a computer and all this material is digital. But yes, for people who are soft like me, this will be a welcome feature.

So there you have it. Plus you now get VST3s, which is great, because VST3 … is so much … actually, you know, even I don’t care all that much about that, so let’s just say now you don’t have to check if all your plug-ins will run or not.

Go get it

One final note – Max for Live. Live 10.0.6 synchronized with Max 8.0.2. See the release notes from Cycling ’74:

https://cycling74.com/forums/max-8-0-2-released

Live 10.1 is keeping pace, with the beta you download now including Max 8.0.3.

Ableton haven’t really “integrated” Max for Live; they’re still separate products. And so that means you probably don’t want perfect lockstep between Max and Live, because that could mean instability on the Live side. It’d be more accurate to say that what Ableton have done is to improve the relationship between Max and Live, so that you don’t have to wait as long for Max improvements to appear in Max for Live.

Live 10.1 is in beta now with a final release coming soon.

Ableton Live 10.1 release notes

And if you own a Live 10 license, you can join the beta group:

Beta signup

Live 10.1: User wavetables, new devices and workflow upgrades

Thanks to Ableton for those short videos. More on these features soon.


Akai Force: hands-on preview of the post-PC live-in-a-box music tool

The leak was real. Akai have a standalone box that can free you from a laptop, when you want that freedom. It works with your computer and gear, but it also does all the arranging and performance (and some monster sounds and sequencing) on its own. It’s what a lot of folks were waiting for – and we’ve just gotten our hands on it.

Akai have already had a bit of a hit with the latest MPCs, which work as a controller/software combo if you want, but also stand on their own.

The Akai Force (it’s not an MPC or APC in the end) is more than that. It’s a single musical device with computer-like power under the hood, but standalone stability. It’s a powerful enough sequencer (for MIDI and CV) that some people might just buy it on those merits.

But it also performs all the Ableton Live-style workflows you know. So there’s an APC/Push style interface, clip launching and editing, grids for playing drums and instruments, and sampling capability. There’s also a huge selection of synths and effects (courtesy AIR Music Technology), so while it can’t run third-party VST plug-ins, you should feel comfortable using it on its own. And it integrates with your computer when you’re in your studio – in both directions, though more on that in a bit.

And it’s US$1499 – so it’s reasonably affordable, at least in that it’s possibly cheaper than upgrading your laptop, or buying a new controller and a full DAW license.

First – the specs:

• Standalone – no computer required
• 8×8 clip launch matrix with RGB LEDs
• 7″ color capacitive multitouch display
• Mic/Instrument/Line Inputs, 4 outputs
• MIDI In/Out/Thru via 1/8″ TRS jacks (5-pin DIN adapters included)
• (4) configurable CV/Gate Outputs to integrate your modular setup
• (8) touch-sensitive knobs with graphical OLED displays
• Time stretch/pitch shift in real time
• Comprehensive set of AIR effects and Hype, TubeSynth, Bassline and Electric synth engines
• Ability to record 8 stereo tracks
• 16GB of on-board storage (over 10 gigs of sound content included)
• 2 GB of RAM
• Full-Size SD card Slot
• User-expandable 2.5″ SATA drive connector (SSD or HDD)
• (2) USB 3.0 slots for thumb drives or MIDI controllers

Clarification: about those eight tracks. You can have eight stereo tracks of audio, but up to 128 tracks total.

And there’s a powerful and clever scheme here that lets the Force adapt to different combinations of onboard synths and effects. Akai tells us the synths use a “weighted voice management” scheme so you can maximize simultaneous voices. Effects are unlimited, until you run out of CPU power. Since this is integrated hardware and software, though, you don’t fail catastrophically when you run out of juice, as you do on a conventional computer. (Ahem.)
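Akai haven’t published any details, so take this as pure speculation – but to make “weighted voice management” less abstract, here’s a toy sketch of the usual idea: when polyphony runs out, steal whichever voice scores lowest on some weighted ranking (the ranking here is invented for the demo):

```python
# Speculative illustration only - not Akai's algorithm. Weighted voice
# stealing: released voices go first, then the quietest, then the oldest.
import time

class VoicePool:
    def __init__(self, max_voices=8):
        self.max_voices = max_voices
        self.voices = []   # dicts: note, level, released, started

    def weight(self, v):
        # Hypothetical ranking tuple: held > released, louder > quieter,
        # newer > older (smaller tuples get stolen first).
        return (not v["released"], v["level"], v["started"])

    def note_on(self, note, level):
        if len(self.voices) >= self.max_voices:
            self.voices.remove(min(self.voices, key=self.weight))  # steal one
        self.voices.append(dict(note=note, level=level,
                                released=False, started=time.monotonic()))
```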

All that I/O – USB connectivity, USB host (for other USB gear), CV (for analog gear), MIDI (via standard minijacks), plus audio input / mic and separate out and cue outs.

US$1499 (European pricing still being confirmed), shipping on 5 February to the USA and later in the month to other markets.

I’ve had a hands-on with AKAI Professional’s product managers. The software was still pre-release – this was literally built last night – but it was very close to final form, and we should have a detailed review once we get hardware next month.

The specs don’t really tell the whole story, so let’s go through what this thing is about.

In person, the arrangement turns out to be logical and tidy.

Form factor

The images leaked via an FCC filing of a prototype did make this thing look a bit homely. In person with the final hardware, it seems totally logical.

On the bottom of the unit is a grid with shortcut triggers, looking very much like a Push 2. On the top is a touch display and more shortcut keys that resemble the MPC Live. You also get a row of endless encoders, which Akai now call just “knobs.”

The “hump” that contains the touch display enables a ton of I/O crammed onto the back – even with minijacks for MIDI, the space is needed. And it means the displays for the knobs are tilted at an angle, so they’re easier to read as you play, from either sitting or standing position.

There are also some touches that tell you this is Akai hardware. Everything is labeled. Triggers most often do just one thing, rather than changing modes as on Ableton Push. And there are features like obvious, dedicated navigation, and a crossfader.

In short, you can tell this is from the folks who built the APC40. Whereas sometimes functions on Ableton Push can be maddeningly opaque, the Akai hardware makes things obvious. I’ll talk more about that in the review, of course, but it’s obvious even when looking at the unit what everything does and how to navigate.

Oh and – while this unit is big, it still looks like it’d fit snugly onto a table at a venue or DJ booth. Plus you don’t need a computer. And yeah, the lads from Akai brought it to Berlin on Ryanair. You can absolutely fit it in a backpack.

Workflows

What impresses me about this effort from Akai is that it takes into account a whole range of use cases. Rather than describe what it does, maybe I should jump straight into what I think it means for those use cases, based on what I’ve seen.

It runs live sets. Well, here this is clearly a winner. You get clip launching just like you do with Ableton Live, without a laptop. And so even if you still stick to Live for production (or Maschine, or Reason, or FL Studio, or whatever DAW), you can easily load up stems and clips on this and free yourself from the laptop later.

You get consistent color coding and near-constant feedback on the grid and heads-up display / touch display about where you are, what’s muted, what’s record-enabled, and what’s playing. My impression is that it’s far clearer than on other devices, thanks to the software being built around the hardware. (Maschine got further than some of its rivals, but it lacks this many controls, lights, and display.)

That feedback seemed like it’s also not overwhelming, either, because it’s spread out over this larger footprint. There’s also a handy overview of your whole clip layout on the touch display, so you can page through more clip slots easily.

Logical, dedicated triggers and loads of feedback so you don’t get lost.

Full-featured clip launching and mixing.

It’s a playable instrument – finger-drummer friendly. Of course, now that you can do all that stuff with clips, as with Push, you can also play instruments. There are onboard synths from AIR – Electric, Bassline, TubeSynth, and the new multifunctional FM + additive + wavetable hybrid Hype. And there are a huge number of effects from lo-fi stuff to reverbs to delays, meaning you can get away without packing effects pedals. It’s literally the full range of AIR stuff – so like having a full Pro Tools plug-in folder on dedicated hardware.

That may or may not be enough for everyone, but you can also use MIDI and CV and USB to control external gear (or a computer).

The grid setup features are also easy to get into and powerful. There are a range of pitch-to-grid mappings, from guitar fret-style arrangements to a Tonnetz layout (5th on one axis, 3rd on another) to piano and chromatic layouts. There are of course scale and chord options – though no microtuning onboard, yet. (Wait until Aphex Twin gets his, I think.)

And there are drum layouts, too, or step sequencers if you want them.
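For the curious, pad-grid layouts like these usually boil down to simple row/column offsets in semitones. Akai’s exact mappings aren’t documented yet, so the numbers below are just the conventional ones – a Push-style fourths grid moves up 5 semitones per row, and the Tonnetz described above stacks a 5th (7 semitones) on one axis against a major 3rd (4) on the other:

```python
# Conventional pad-to-pitch math for common grid layouts - assumed offsets,
# not Akai's documented mapping.
LAYOUTS = {
    "chromatic": (8, 1),   # (semitones per row, per column), assuming an 8-wide grid
    "fourths":   (5, 1),   # Push-style layout
    "tonnetz":   (7, 4),   # 5th on one axis, major 3rd on the other
}

def pad_to_note(row, col, layout="fourths", base=36):  # base 36 = C2, a common choice
    row_step, col_step = LAYOUTS[layout]
    return base + row * row_step + col * col_step

print(pad_to_note(0, 0), pad_to_note(1, 0), pad_to_note(0, 5))  # 36 41 41
```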

Two major, major deviations from Push, though. You know how easy it is to accidentally change parts on Push when you’re trying to navigate clips and wind up playing the wrong instrument? Or how easy it is to get lost when recording clips? Or how suddenly a step sequencer turns up when you just want to finger drum a pad? Or…

Yeah, okay well – you have none of those problems here. Force makes it easy to select parts, easy to select tracks, easy to mute tracks, and lets you choose the layout you want when you want it without all that confusion.

Again, more on this in the review, but I’m thoroughly relieved that Akai seems to understand the need for dedicated triggers and less cognitive overhead when you play live.

Tons of playing options.

It can replace a computer for production, if you want. There’s deep clip editing and sampling and arrangement and mixing functionality here. Clips even borrow one of the best features from Bitwig Studio – you can edit and move and duplicate audio inside a clip, which you can’t do in Live without bringing that audio out into the Arrangement. So you could use this to start and even finish tracks.

The Force doesn’t have the same horsepower as a laptop, of course. So you’re limited to eight stereo tracks. Then again, back in the days of tape that bouncing process was also creatively useful – and the sampling capabilities here make it easy to resample work.

Powerful clip editing combines with sampling – and you can use both the touchscreen and dedicated hardware controls.

Or you can use it as a companion to a computer. You can also use Force as a sketchpad – much like some iPad tools now, but of course with physical controls. There’s even an export to ALS feature coming, so you could start tracks on Force and finish them in Ableton Live – with your full range of mixing and mastering tools and plug-ins. (I believe that doesn’t ship at launch, but is due soon.)

Also coming in the first part of this year, Akai are working on a controller mode so you can use Force as an Ableton Live controller when you are at your computer.

There’s wired connectivity. You can set up MIDI tracks, you can set up CV tracks. There’s also USB host mode. Like the grid, but wish you had some MPC-style velocity-sensitive pads? Or want some faders? Plug in inexpensive controllers via USB, just as you would on your computer. You only get two audio ins, but that’s of course still enough to do sampling – and you get the sorts of sampling and live time stretching capabilities you’d expect of the company that makes the MPC.

For audio output, there’s a dedicated cue out as well as the stereo audio output.

On the front – SD card loading (there’s also USB support and internal drive upgradeability), plus a dedicated cue output for your headphones.

The full range of AIR effects is onboard.

Powerful audio effects should help you grow with this one.

And there’s wireless connectivity, too. You can sync sample content via Splice.com – which includes your own samples, by the way. (Wow, do I wish Roland did this with Roland Cloud and the TR-8S – yeah, being able to have all my own kits and sample sets and sync them with a WiFi connection is huge to me, even just for the sounds I created myself.)

There’s Ableton Link support, so you can wirelessly sync up to your computer, iPad, and other tools – clocking the Force without wires.

There’s even wireless support for control and sound, meaning that Force is going to be useful even before you plug in cables.

Yeah, it’s a standalone instrument, but it’s also a monster sequencer / hub.

Bottom line. It replaces Ableton Live. It works with Ableton Live. It replaces your computer. It works with your computer. It’s a monster standalone instrument. It’s a monster sequencer for your other instruments. It does a bunch of stuff. It doesn’t try to do too much (manageable controls, clear menus).

Basically, this already looks like the post-PC device a lot of us were waiting for. Can’t wait to get one for review.


Ableton Live as standalone hardware? Leaked Akai APC Live

It’s what a lot of people wanted – an MPC crossed with an Ableton Push – which could mean it’s too good to be true. But the APC Live leaked in images looks viable enough, and it could signal big changes for electronic performance in 2019.

Standalone hardware that does what software does – it’s a funny thing. It has seemed inevitable for a long time. But lots of hardware remains tethered to the ubiquitous computer (Ableton Push, Novation Launchpad, Native Instruments Maschine, Native Instruments Traktor) … or is exceptionally expensive (Pioneer CDJ). Then there was Akai’s own MPC Live, which seemed to be both affordable and flexible – you can use it with or without a computer – but failed to catch on. That may be because the MPC Live was too late to win people over to a new workflow. It wasn’t really like the original MPC hardware, and computer users had opted for Maschine, Live, and other tools.

That makes these leaked photos of the supposed Akai APC Live so interesting. Ableton, with a user base literally in the millions, doesn’t have to convince anyone of a new workflow. If the APC Live does what the MPC Live does – work as a controller with your computer plugged in, but then switch to standalone mode for onstage use – it could be a winner.

The ever leak-savvy sequencer.de get the scoop, in a forum post (which seems to have gotten these images from an FCC filing):

https://www.sequencer.de/synthesizer/media/apc-live-3.976/

Behold:

It seems to have everything you’d need:

A Push-style grid surface with shortcuts.
Encoders and heads-up display for parameter editing.
An MPC-style workspace with edit buttons.
USB connection (locked, so it doesn’t come out accidentally) and 2-port USB hub for expansion (or storage, hard to say).
SD card slot (load samples, sets?).
Separate cue mix for your headphones.
4 outs (so you can also have a separate cue line mix/monitors out, or easy quad output, or whatever)
CV and gate, MIDI – though crammed on minijacks, so you’ll need some dongles, no doubt.
XLR input for a vocal mic.

The only thing that’s odd about this is that the MPC-style screen is tacked rather awkwardly on top, giving this a really tall footprint.

The other big question will be what happens with plug-ins. Akai for their part first came out talking about embedded Windows on their MPC Live, but eventually shipped a Linux-based application. That makes their MPC software behave the same as a self-contained app on the hardware as it does on your computer. But Live users are accustomed to using third-party plug-ins; will they have to stick to Live internal devices when running in standalone mode?

Another possibility – maybe the “live” moniker doesn’t really mean this works on its own. This could just be an oversized controller for Ableton Live, but still tethered to the computer. That would make sense, too – it would be a lot of work to get Live to run on its own, and just shipping another controller would be an easy solution.

Just don’t rule out standalone as a possibility. It’s technically possible, and we know Ableton has posted some Linux and embedded engineering jobs on their site – plus Akai has done this once before, meaning they have the talent in-house to work on it.

I expect we’ll know later this month, either at the NAMM show or slightly before.


More surprise in your sequences, with ESQ for Ableton Live

With interfaces that look lifted from a Romulan warbird and esoteric instruments, effects, and sequencers, K-Devices have been spawning surprising outcomes in Ableton Live for some time now. ESQ is the culmination of that: a cure for preset sounds and ideas in a single device.

You likely know the problem already: all of the tools in software like Ableton Live that make it easy to quickly generate sounds and patterns also tend to do so in a way that’s … always the same. So instead of being inspiring, you can quickly feel stuck in a rut.

ESQ is a probability-based sequencer: you adjust a few controls per step – velocity, chance, and relative delay – to generate a wide variety of possibilities. You can create polyrhythms (multiple tracks of the same length, but different steps), or different-length tracks, you can copy and paste, and there are various random functions to keep things fresh. The results are still somehow yours – maybe even more so – it’s just that you use probability and generative rules to get to what you want when you aren’t sure how to describe it. Or maybe before you knew you wanted it.
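To make that concrete, here’s a toy sketch of the per-step idea – not ESQ’s code, just the concept of each step carrying a velocity, a chance of firing, and a relative delay (as a fraction of a step):

```python
# Toy per-step probability sequencer - illustrates the concept, not ESQ itself.
import random

def run_track(steps, step_len=0.25, loops=2):
    """steps: list of dicts with 'note', 'vel', 'chance', 'delay'."""
    events = []
    for loop in range(loops):
        for i, s in enumerate(steps):
            if random.random() < s["chance"]:        # per-step probability
                t = (loop * len(steps) + i + s["delay"]) * step_len
                events.append((round(t, 3), s["note"], s["vel"]))
    return events

# Hypothetical hi-hat track: a sure hit, a coin-flip, and a nudged-early hit.
hats = [dict(note=42, vel=100, chance=1.0, delay=0.0),
        dict(note=42, vel=60,  chance=0.5, delay=0.1),
        dict(note=42, vel=80,  chance=0.8, delay=-0.05)]
print(run_track(hats))
```

Run two tracks of different lengths through something like this and you get the different-length-track tricks mentioned above.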

Because you can trigger up to 12 notes, you can use ESQ to turn bland presets into something unexpected (like working with preset Live patches). Or you can use it as a sequencer with all those fun modular toys we’ve been talking about lately (VCV Rack, Softube Modular, Cherry Audio Voltage Modular, and so on) – because 5- and 8-step sequencers are often just dull.

There’s no sound produced by ESQ – it’s just a sequencer – but it can have a big enough impact on devices that this “audio” demo is just one instance of ESQ and one Drum Rack. Even those vanilla kits start to get more interesting.

K-Devices has been working this way for a while, but ESQ feels like a breakthrough. The generative sequence tools are uniquely complete and especially powerful for producing rhythms. You can make this sound crazy and random and IDM-y, but you can also add complexity without heading into deep space – it’s really up to you.

And they’ve cleverly made two screens – a full parameter screen that gets deep and detailed, and a compact device screen that lets you shift everything with single gestures or adjust everything as macros – ideal for live performance or for making bigger changes.

It seems like a good wildcard to keep at your disposal … for any of those moments when you’re getting stuck and boring.

And yes, of course Richard Devine already has it:

But you can certainly make things unlike Devine, too, if you want.

Right now ESQ is on sale, 40% off through December 31 – €29 instead of €49. So it can be your last buy of 2018.

Have fun, send sequences!

https://k-devices.com/products/esq/
