Vember Audio owner @Kurasu made this happen. But software just “being open sourced” often leads nowhere. In this case, Surge has a robust community around it, turning this uniquely open instrument into something you can happily use as a plug-in alongside proprietary choices.
And it really is deep: stack three oscillators per voice, choose from morphable classic, FM, ring modulation, or noise engines, route through a rich filter block with feedback and every kind of variation imaginable – even more exotic notch, comb, or sample & hold choices – and then add loads of modulation. There are some 12 LFOs per voice, multiple effects, a vocoder, a rotary speaker…
I mention it again because now you can grab Mac (64-bit AU/VST), Windows (32-bit and 64-bit VST), and Linux (64-bit VST) versions, built for you.
And there’s VST3 support.
And there’s support for MPE (MIDI Polyphonic Expression), meaning you can use hardware from ROLI, Roger Linn, Haken, and others – I’m keen to try the Sensel Morph, perhaps with that Buchla overlay.
There’s also an analog mode for the envelopes now.
This also holds great promise for people who desire a deep synth but can’t afford expensive hardware. While Apple’s approach means backwards compatibility on macOS is limited, it’ll run on fairly modest machines – meaning this could also be an ideal starting point for building your own integrated hardware/software solution.
In fact, if you’re not much of a coder but are a designer, it looks like design is what they need most at this point. Plus you can contribute sound content, too.
Most encouraging is really that they are trying to build a whole community around this synth – not just make open source maintenance a chore, but really a shared endeavor.
We’ve been waiting, but now the waiting is done. Propellerhead has added the VST performance boost it had promised to Reason users – meaning plug-ins now benefit from Reason 10’s patchable racks.
I actually flew to Stockholm, Sweden back in the dark days of December to talk to the engineers about this update (among some other topics). Short version of that story: yeah, it took them longer than they’d hoped to get VST plug-ins operating as efficiently as native devices in the rack.
If you want all those nitty-gritty details, here’s the full story:
But for now, suffice to say that the main reason for the hold-up – Reason’s patchable, modular virtual rack of gear – just became an asset rather than a liability. Now that VSTs in Reason perform roughly as they do in other DAWs, what Reason adds is the ability to route those plug-ins however you like, in Reason’s unique interface.
Combine that with Reason’s existing native effects and instruments and third-party Rack Extensions, and I think Reason becomes more interesting as both a live performance rig and a DAW for recording and arranging than before. It could also be interesting to stick a modular inside the modular – as with VCV Rack or this week’s Blocks Base and Blocks Prime from Native Instruments.
Anyway, that’s really all there is to say about 10.3 – it’s what Propellerhead call a “vitamin injection” (which, given those dark Swedish winters, I’m guessing all of them need about now).
This also means the engineers have gotten over a very serious and time-consuming hurdle and can presumably move on to other things. It’s also notable that the company has been upfront in talking about a flaw before, during, and after development – and that’s welcome from any music software maker. So props to the Props – now go get some sunshine; you’ve earned it. (And the rest of us can tote these rigs out into the park, too.)
Arturia have followed up their hit “3 Filters…” and “3 Preamps You’ll Actually Use” with the inevitable trio of compressors – but as with the other bundles, there are some twists (and lower intro prices on offer now).
Before they expanded into making their own MIDI controllers and synth hardware, Arturia rose to prominence on their modeling chops. And they have tended to spin those modeling engines and competency in recreating vintage gear into spin-off products. The trick with the “3 [things] You’ll Actually Use” series has been rising above the crowd of vintage remakes now available to music producers. That has meant doing two things: one, picking three blockbusters to reproduce, and two, adding some functionality extras that let producers get creative with the results.
And that’s to me what has made the series interesting – while lots of vendors will sell you reproductions of classic studio equipment, these have been ones you might well use in the production process. It’s not only about perfecting a recording or mix, but also about integrating into your creative process as you’re developing ideas.
The compressor trio goes that route, too – so in addition to using these routinely in mixing or mastering, you can also use them for some inventive sidechaining or saturation.
The three compressors getting the Arturia treatment – and the circuitry inside:
The 1176 is just about universally desired at this point – and among other recreations, you can keep it in the family of creator Bill Putnam Sr. and try Universal Audio’s own. It’s something you can use for subtle tonal shifts even at lower levels, in addition to cranking up compression if you want. So why add another reproduction to the pile? Arturia has added a “link” button for automatic volume leveling if you want it – giving you the 1176 sound but more modern behavior on demand.
And you can use the 1176 as a sidechain. Oh – wait, that’s really huge. And there’s a creative “Time Warp” feature with pre-delay. So thanks to the fact that Arturia aren’t being quite as precious with the historical design as some of their rivals, you can choose either an “authentic” 1176 recreation, or something that’s 1176-ish but does things that were impossible on the original analog hardware.
It’s surprising enough for the 1176 to be new again, but the other models here have some similar ideas.
Next, you’ve got the “Over Easy” 165A, an essential compressor in a lot of studios, which has some nice dirty, gritty timbral character of its own and punchy compression – plus Mid/Side processing. For this model, Arturia have introduced a whole new panel of additional controls that folds out when you want it in the UI.
Don’t be fooled by the skeuomorphic knobs; the original DBX didn’t have these options. That also includes their “Time Warp” pre-delay, convenient sidechaining (here with an easy “manual mode” trigger so you can preview what it’ll sound like), and now an integrated EQ. That EQ is modeled on SSL-style channels, so it’s a bit like having a pre-configured mixer rig to use with your sidechaining.
The STA-Level is maybe the most interesting of the three as far as rarity. It also gets (optional) modernization, with an input-output link for automatic leveling, a parallel compression mode that’s integrated with the software (plus an easy “mix” knob for adjusting how much parallel compression you want to hear), and sidechaining.
All in all, it’s an intriguing approach. On one hand, you get panels that look and operate and sound more like the original than a lot of software models at the low end of the price range. (For instance, the compressors added recently to Logic Pro X, while free, are more loose impressions than authentic recreations.)
But on the other, and here’s where Arturia clearly has an edge, you get new sidechaining and auto-leveling and other features that make these more fun to use in modern contexts and easier to drop into your creative flow.
Sidechaining with these kinds of compressor models is a win on its own, I think; the convenience of the UIs and the fact that these run natively on any platform makes them invaluable – maybe even more so than the existing filter and preamp offerings.
I’ve been playing around with them a bit already; I’m especially curious if I can run a couple in a live context – will report back on that. But I’m already impressed on sound and functionality.
Everything is on sale, so if you own existing Arturia stuff, you could get these for as little as $/EUR 49 (or half off if you’re new to the series), or in discounted bundles. You can also buy the compressors individually, if there’s one that really catches your fancy.
Plus, there are some new tutorials to get you started:
What’s to say a music idea can’t be both a tool and a tape, an instrument someone could play or an album they can get lost in? Puremagnetik are launching their new experimental label with two free tools that let you keep the drones and grains and ambient soundscapes flowing.
There’s a bunch of hype this week because Warner Music signed an algorithm. And with everyone abusing the term “AI,” you might well think that a computer has automated a composer’s job. Except that’s not what happened – in the long tradition of algorithmic music, a group of composers applied their ideas to the development of software. One of the first apps launched for the iPhone, in fact, was the Brian Eno – Peter Chilvers “Bloom.” Endel has more in common with Bloom, I’d argue, than it does with some dystopia where unseen, disembodied AI come to rob you of your lucrative ambient music recording contract. (At least, we’re not there yet. Endel is here in Berlin; I hope to talk to them soon – what they’ve done sounds very interesting, and maybe not quite what the press have reported.) Bloom in turn was a follow-up to Eno’s software-based generative music releases. Ableton co-founders Gerhard Behles and Robert Henke released software in the 90s, too.
So let’s talk about the role of musician as blurred with the role of instrument builder. Soundware and software shop Puremagnetik is made by musicians; founder Micah Frank was moonlighting in sound design for others as he worked on his own music. While this may come as shocking news to some, it turns out that for many people, selling music tools is a better day job than selling music or music performances. (I hope you were sitting down for that bombshell. Don’t tell my/your/anyone’s parents.)
But there are many ways to express something musically. Many of us who love tools as we do love playing live and recording and listening do so because all of these things embody sound and feeling.
It’s fitting, then, that Puremagnetik are launching their own record label to house some of the recorded experiments – Puremagnetik Tapes, which already has some beautiful music on cassette and as digital downloads.
And the perfect companions to those albums are these two free plug-ins. Like the label, they promise a trip for the mind.
The first two tapes (also available digitally)… gorgeous sound worlds to lose yourself in on loop.
The label announces it will focus on “experimental, ambient and acousmatic music.” That already yields two enchanting ambient forays. “Into a Bright Land” is by turns crystalline and delicate, warm and lush as a thick blanket. It’s Micah Frank himself, releasing under his Larum moniker. The musical craft is a digital-analog hybrid – part synths and tape machines, the kind the company has been known for sampling in its sound work, and part Micah’s intricate custom coding in the free environment Csound.
To accompany Into a Bright Land, there’s the plug-in “Expanse,” a “texture generator,” with a combination of “texture tone” filter, spectral blurring, adjustable pitch shift, and a healthy supply of noise generation and space.
Its drones and sonic landscapes draw from that same world.
Tyler Gilmore aka BlankFor.ms has crafted “Works for Tape and Piano,” pushing each instrument to its most vulnerable place, the tape itself becoming instrument, sounding almost as if at the point of a beautiful breakdown.
Since you can’t just borrow Tyler’s tape machines and such, Driftmaker is a digital equivalent – a “delay disintegration” device. Add your own audio, and the plug-in will model analog deterioration. The artist himself supplies the presets. Again, you have plenty of control – “parse,” which sets the record buffer; “chop,” which determines how much to recall; and then controls for delay, modulation, filtering, and wet/dry.
Both plug-ins are free with an email address or Gumroad login.
…and the plug-ins, each created to aesthetically accompany the albums.
There’s a pattern here, though. Far from a world where artists remove themselves from craft or automate the hard work, here, artists relish getting close to everything that makes sound. They make music the hard way because each element of DIY is fun. And then they share that same fun. It might well be the opposite of the narrative we’re given about AI and automation (and I suspect that may also mean artists don’t approach machine learning for music in the way some people currently predict).
Or, well, even if you don’t believe that, I think you’ll easily lose whole evenings with these albums and plug-ins alike.
Native Instruments’ Massive synth defined a generation of soft synths and left a whole genre or two in its wake. But its sequel remains mysterious. Now the company is revealing some of what we can expect.
First, temper your expectations: NI aren’t giving us any sound samples or a release date. (It’s unclear whether the blog talking about “coming months” refers just to this blog series or … whether we’re waiting some months for the software, which seems possible.)
What you do get to see, though, is some of what I got a preview of last fall.
After a decade and a half, making a satisfying reboot of Massive is a tall order. What’s encouraging about Massive X is that it seems to return to some of the original vision of creator Mike Daliot. (Mike is still heavily involved in the new release, too, having crafted all 125 wavetables himself, among other things.)
So Massive X, like Massive before it, is all about making complex modulation accessible – about providing some of the depth of a modular in a fully designed semi-modular environment. Those are packaged into a UI that’s cleaner, clearer, prettier – and finally, scalable. And since this is not 2006, the sound engine beneath has been rewritten – another reason I’m eager to finally hear it in public form.
Massive X is still Massive. That means it incorporates features that are now so widely copied, you would be forgiven for forgetting that Massive did them first. That includes drag-and-drop modulation, the signature ‘saturn ring’ indicators of modulation around knobs, and even aspects of the approach to sections in the UI.
What’s promising is really the approach to sound and modulation. In short, revealed publicly in this blog piece for the first time:
Two dedicated phase modulation oscillators. Phase modulation was one of the deeper features of the original Massive – and, if you could figure out Yamaha’s arcane approach to programming, of instruments like the DX7. But now it’s more deeply integrated with the Massive architecture, and there’s more of it.
Lots of noise. In addition to those hundred-plus wavetables for the oscillators, you also get dozens of noise sources. (Rain! Birdies!) That rather makes Massive into an interesting noise synth, and should open up lots of sounds that aren’t, you know, angry EDM risers and basslines.
New filters. Comb filters, parallel and serial routing, and new sound. The filters are really what make a lot of NI’s latest generation stuff sound so good (as with a lot of newer software), so this is one to listen for.
New effects algorithms. Ditto.
Expanded Insert FX. This was another of the deeper features in Massive – and a case of the semi-modular offering some of the power of a full-blown modular, in a different (arguably, if you like, more useful) context. Since this can include both effects and oscillators, there are some major routing possibilities. Speaking of which:
Audio routing. Route an oscillator to itself (phase feedback), or to one another (yet more phase modulation), and make other connections you would normally expect of a modular synth, not necessarily even a semi-modular one.
Modulators route to the audio bus, too – so again like modular hardware, you can treat audio and modulation interchangeably.
More envelopes. Now you get up to nine of these, and unique new devices like a “switcher” LFO. New “Performers” can use time signature-specific rhythms for modulation, and you can trigger snapshots.
It’s a “tracker.” Four Trackers let you use MIDI as assignable modulation.
Maybe this is an oversimplification, but at the end of the day, it seems to me this is really about whether you want to get deep with this specific, semi-modular design, or go into a more open-ended modular environment. The tricky thing about Massive X is, it might have just enough goodies to draw in even the latter camp.
And, yeah, sure, it’s late. But … Reaktor has proven to us in the past that some of the stuff NI does slowest can also be the stuff the company does best. Blame some obsessive engineers who are totally uninterested in your calendar dates, or, like, the forward progression of time.
For a lot of us, Massive X will have to compete with the fact that on the one hand, the original Massive is easy and light on CPU, and on the other, there are so many new synths and modulars to play with in software. But let’s keep an eye on this one.
Hey, at least I can say – I think I was the first foreign press to see the original (maybe even the first press meeting, full stop), I’m sure because at the time, NI figured Massive would appeal only to CDM-ish synth nerds. (Then, oops, Skrillex happened.) So I look forward to Massive X accidentally creating the Hardstyle Bluegrass Laser Tag craze. Be ready.
K-Devices have brought alien interfaces and deep modulation to Max patches – now they’re doing plug-ins. And their approach to delay and tremolo isn’t quite like what you’ve seen before, a chance to break out of the usual patterns of how those work. Meet TTAP and WOV.
“Phoenix” is the new series of plug-ins from K-Devices, who previously had focused on Max for Live. Think equal parts glitchy IDM, part spacey analog retro – and the ability to mix the two.
TTAP is obviously a play on both multi-tap delay and tape, and it’s another multi-faceted experiment with analog and digital effects.
At its heart, there are two buffers with controls for delay time, speed, and feedback. You can sync time controls or set them free. But the basic idea here is you get smooth or glitchy buffers warping around based on modulation and time you can control. There are some really beautiful effects possible:
WOV is a tremolo that’s evolved into something new. So you can leave it as a plain vanilla tremolo (a regular-rate amplitude shifter), but you can also adjust how sensitively it responds to an incoming signal. And there’s an eight-step sequencer. There are extensive controls for shaping waves for the effect, and a Depth section that’s, well, deep – or that lets you turn this tremolo into a kind of gate.
These are the sorts of things you could do with a modular and a number of modules, but having it in a single, efficient, integrated plug-in where you get straight at the controls without having to do a bunch of patching – that’s something.
Right now, each plug-in is on sale (25% off) for 45EUR including VAT (about forty-two bucks in the USA), or 40% off if you buy both. The sale runs through March 17.
VCV Rack, the open source platform for software modular, keeps blossoming. If what you were waiting for was more maturity and stability and integration, the current pipeline looks promising. Here’s a breakdown.
Even with other software modulars on the scene, Rack stands out. Its model is unique – build a free, open source platform, and then build the business on adding commercial modules, supporting both the platform maker (VCV) and third parties (the module makers). That has opened up some new possibilities: a mixed module ecosystem of free and paid stuff, support for ports of open source hardware to software (Music Thing Modular, Mutable Instruments), robust Linux support (which other Eurorack-emulation tools currently lack), and a particular community ethos.
Of course, the trade-off with Rack 0.xx is that the software has been fairly experimental. Versions 1.0 and 2.0 are now in the pipeline, though, and they promise a more refined interface, greater performance, a more stable roadmap, and more integration with conventional DAWs.
New for end users
VCV founder and lead developer Andrew Belt has been teasing out what’s coming in 1.0 (and 2.0) online.
Here’s an overview:
Polyphony, polyphonic cables, polyphonic MIDI support and MPE
Multithreading and hardware acceleration
Tooltips, manual data entry, and right-click menus with more information on modules
Virtual CV to MIDI and direct MIDI mapping
2.0 version coming with fully-integrated DAW plug-in
More on that:
Polyphony and polyphonic cables. The big one – you can now use polyphonic modules and even polyphonic patching. Here’s an explanation:
Polyphonic MIDI and MPE. Yep, native MPE support. We’ve seen this in some competing platforms, so great to see here.
Multithreading. Rack will now use multiple cores on your CPU more efficiently. There’s also a new DSP framework that adds CPU acceleration (which helps efficiency for polyphony, for example). (See the developer section below.)
Oversampling for better audio quality. Users can choose higher engine settings to reduce aliasing.
Tooltips and manual value entry. Get more feedback from the UI and precise control. You can also right-click to open other stuff – links to developer’s website, manual (yes!), source code (for those that have it readily available), or factory presets.
Core CV-MIDI. Send virtual CV to outboard gear as MIDI CC, gate, note data. This also integrates with the new polyphonic features. But even better –
Map MIDI directly. The MIDI map module lets you map parameters without having to patch through another module. A lot of software has been pretty literal with the modular metaphor, so this is a welcome change.
And that’s just what’s been announced. 1.0 is due in the coming months – and 2.0 is coming, as well…
Rack 2.0 and VCV for DAWs. After 1.0, 2.0 isn’t far behind. “Shortly after” 2.0 is released, a DAW plug-in will be launched as a paid add-on, with support for “multiple instances, DAW automation with parameter labels, offline rendering, MIDI input, DAW transport, and multi-channel audio.”
These plans aren’t totally set yet, but a price around a hundred bucks and multiple ins and outs are also planned. (Multiple I/O also means some interesting integrations will be possible with Eurorack or other analog systems, for software/hardware hybrids.)
VCV Bridge is already deprecated, and will be removed from Rack 2.0. Bridge was effectively a stopgap for allowing crude audio and MIDI integration with DAWs. The planned plug-in sounds more like what users want.
Rack 2.0 itself will still be free and open source software, under the same license. The good thing about the plug-in is, it’s another way to support VCV’s work and pay the bills for the developer.
New for developers
Rack v1 will bring a new, stabilized API – meaning you will need to do some work to port your modules. It’s not a difficult process, though – and I think part of Rack’s appeal is the friendly API and SDK from VCV.
You’ll also be able to use an SSE wrapper (simd.hpp) to take advantage of accelerated code on desktop CPUs, without hard-coding manual calls to hardware that could break your plug-ins in the future. This also theoretically opens up future support for other instruction sets – like NEON or AVX acceleration. (It does seem like ARM platforms are the future, after all.)
While the Facebook group is still active and a place where a lot of people share work, there’s a new dedicated forum. That does things Facebook doesn’t allow, like efficient search, structured sections in chronological order so it’s easy to find answers, and generally not being part of a giant, evil, destructive platform.
Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.
Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, it lets you generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, on both Mac and Windows.
I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.
Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:
Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.
Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.
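To make the jargon a little more concrete, here’s a hypothetical sketch – plain NumPy, not actual TensorFlow code – of what “processing tensors” in one neural network layer amounts to. The specific numbers are made up for illustration:

```python
import numpy as np

# A "tensor" is just an n-dimensional array of numbers.
x = np.array([0.5, -1.0, 2.0])      # rank-1 tensor: an input vector
W = np.array([[0.1, 0.2, 0.3],
              [0.4, 0.5, 0.6]])     # rank-2 tensor: the layer's weights
b = np.array([0.01, 0.02])          # bias vector

# One layer of a neural network is just tensor math:
# multiply, add, then squash negatives to zero (ReLU activation).
y = np.maximum(W @ x + b, 0.0)
print(y)                            # two output values, one per "neuron"
```

An engine like TensorFlow exists to do exactly this kind of arithmetic, just vastly faster and across millions of values at once.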
Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.
You may know Magenta from its involvement in the NSynth synthesizer —
NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.
But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.
Recurrent Neural Networks are a kind of mathematical model that algorithmically loops over and over. We say it’s “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly means that you can predict sequences more and more effectively given a particular data set.
Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)
What’s in Magenta Studio
Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.
Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.
Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence the new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and length in bars.
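As a rough illustration of what a temperature control does – this is a generic sampling sketch, not Magenta’s actual code – temperature rescales the model’s output scores before one result is picked:

```python
import numpy as np

def sample_with_temperature(logits, temperature, rng):
    """Pick an index from unnormalized model scores ("logits").

    Low temperature sharpens the distribution (predictable picks);
    high temperature flattens it (more surprising picks).
    """
    scaled = np.asarray(logits, dtype=float) / temperature
    scaled -= scaled.max()                      # for numerical stability
    probs = np.exp(scaled) / np.exp(scaled).sum()
    return rng.choice(len(probs), p=probs)

rng = np.random.default_rng(0)
logits = [2.0, 1.0, 0.1]                        # model favors the first option
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(20)]
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(20)]
# cold picks are nearly all index 0; hot picks wander across all three
```

That nonlinearity is also why the slider won’t feel like a simple “more random” knob.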
The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is largely set up around expectations about what a drum kit is, and around melodies on a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)
Here are your options:
Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.
Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.
Interpolate: Instead of one clip, use two clips and merge/morph between them.
Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.
Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.
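To give a flavor of how Interpolate works under the hood – hugely simplified, and not MusicVAE’s actual code – each clip is encoded to a latent vector, and the intermediate clips come from blending those vectors before decoding. The two-element vectors here are made-up stand-ins:

```python
import numpy as np

def interpolate_latents(z_a, z_b, steps):
    """Return evenly spaced blends between two latent codes.

    In a real system like MusicVAE, an encoder would produce z_a and
    z_b from two MIDI clips, and a decoder would turn each blended
    code back into a playable clip.
    """
    ts = np.linspace(0.0, 1.0, steps)
    return [(1.0 - t) * z_a + t * z_b for t in ts]

z_a = np.array([0.0, 1.0])      # stand-in latent code for clip A
z_b = np.array([1.0, 0.0])      # stand-in latent code for clip B
path = interpolate_latents(z_a, z_b, 5)
# path starts at z_a, ends at z_b, and morphs through the space between
```

The interesting musical results come from the fact that the space between the codes was learned from real melodies, so the in-between clips tend to sound plausible rather than like a crossfade.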
So, is it useful?
It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.
More to the point with something like Magenta is, do you really get musically useful results?
Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.
Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.
One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music deals with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – take chant music, for example. Composers were also working with less quantifiable elements, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.
I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.
The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.
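What that Temperature control does is statistically well-defined, even if the “right” setting isn’t. Here’s a minimal sketch of temperature sampling over a categorical distribution – the logits are made-up scores for three candidate notes, purely for illustration:

```python
import math
import random

def sample_with_temperature(logits, temperature, rng):
    """Scale logits by 1/temperature, softmax, then sample an index.
    Low temperature -> conservative (favors the likeliest event);
    high temperature -> adventurous (flattens the distribution)."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    r = rng.random()
    acc = 0.0
    for i, p in enumerate(probs):
        acc += p
        if r < acc:
            return i
    return len(probs) - 1

rng = random.Random(0)
logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate notes
cold = [sample_with_temperature(logits, 0.1, rng) for _ in range(1000)]
hot = [sample_with_temperature(logits, 5.0, rng) for _ in range(1000)]
print(cold.count(0) / 1000)  # near 1.0: almost always the top choice
print(hot.count(0) / 1000)   # much lower: other choices get a real shot
```

There’s no formula telling you the musically correct temperature – which is exactly why you end up setting it by ear.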
And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.
And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.
Where this could go next
There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with the history of pure technical inquiries past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.
As Jesse Engel tells CDM:
We’re a research group (not a Google product group), which means that Magenta Studio is not static – a lot more interesting models are probably on the way.
Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.
VST brings more choice to Reason, but more support demands, too. Here’s an update on how Propellerhead are optimizing Reason to bring plug-in performance in line with what users expect.
For years, Reason was a walled-off garden. Propellerhead resisted supporting third-party plug-ins, and when they did open up, it was via their own native Rack Extensions technology. That enables more integrated workflows, better user experience, greater stability, and easier installation and updates than a format like VST or AU allows.
But hey, we have a lot of VSTs we want to run inside Reason, engineering arguments be damned. And so Propellerhead finally listened to users, delivering support for VST effects and instruments on Mac and Windows in Reason 9.5. (Currently only VST2 plug-ins are supported, not VST3.)
Propellerhead have been working on improving stability and performance continuously since then. Reason 10.3 is a much-anticipated update, because it addresses a significant performance issue with VST plug-ins – without disrupting one of the things that makes Reason’s native devices work well.
The bad news is, 10.3 is delayed.
The good news is, it works really well. It puts Reason on par with other DAWs as far as VST performance goes. That’s a big deal to Reason users, precisely because in so many other ways Reason is unlike other DAWs.
I met with Propellerhead engineers yesterday in Stockholm, including Mattias Häggström Gerdt (product manager for Reason). We got to discuss the issue, their whole development effort, and get hands-on with their alpha version.
Why this took a while
Okay, first, some technical discussion. “Real time” is actually not a thing in digital hardware and software. The illusion of a system working in real time is created by buffering – using very small windows of time to pass audio information, so small that the results seem instantaneous to the user.
There’s a buffer size you set for your audio interface – this one you may already know about. But software also has internal buffers for processing, hidden from the user. In a modular environment, you really want this buffer to be as small as possible, so that patching and processing feels responsive – just as it would if you were using analog hardware. Reason accordingly has an internal buffer of 64 frames to do just that. That means without any interruptions to your audio stream, you can patch and repatch and tweak and play to your heart’s content.
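To put those numbers in perspective, here’s the back-of-the-envelope latency math. The 64-frame figure is Reason’s internal buffer as described above; the 1024-frame figure is just a common plug-in buffer size, chosen for illustration:

```python
# Latency of one processing block, in milliseconds.
def block_latency_ms(frames, sample_rate):
    return frames / sample_rate * 1000.0

# Reason's internal 64-frame block at 44.1 kHz: under 1.5 ms --
# short enough that patching feels instantaneous.
print(round(block_latency_ms(64, 44100), 2))

# A 1024-frame block (a common, comfortable plug-in buffer size):
# over 23 ms, which you'd clearly feel while playing.
print(round(block_latency_ms(1024, 44100), 2))
```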
Here’s the catch: some plug-in developers prefer larger buffers (higher latency) by design, in order to reduce CPU consumption – even though their plug-ins technically work in Reason’s small-buffer environment. This is common in plug-ins where ultra-low-latency internal processing isn’t as important. But running inside Reason, that approach adds strain to your CPU. Some users won’t notice anything, because they don’t use these plug-ins, or use fewer instances of them. But some will see their machine run out of CPU resources faster in Reason than in other DAWs. The result: the same plug-in setup you used in another DAW will make Reason sputter, which is of course not what you want.
Another catch: adjusting the audio buffer size on your interface – the usual trick for reducing CPU load – won’t help in this case. So users encountering the issue are left frustrated.
This is a fixable problem. You give those plug-ins larger buffers when they demand them, while Reason and its devices continue to work as they always have. It’s just that there’s a lot of work in going back through the rest of Reason’s code to adjust for the change. And like a lot of coding work, that takes time. Adding more people doesn’t necessarily speed it up, either. (Ever tried adding more people to a kitchen to “speed up” cooking dinner? Like that.)
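For the curious, the general idea can be sketched in a few lines. This is a toy illustration of buffer batching, not Propellerhead’s actual code – the class and numbers here are hypothetical:

```python
# Toy sketch of buffer batching: the host keeps running in small 64-frame
# blocks, but only invokes the plug-in once a full 512-frame batch has
# accumulated. (Hypothetical names and sizes -- not Propellerhead's code.)

HOST_BLOCK = 64
PLUGIN_BLOCK = 512

class BatchingWrapper:
    def __init__(self, plugin_process):
        self.plugin_process = plugin_process  # callable taking one 512-frame batch
        self.pending = []                     # frames waiting to be batched
        self.calls = 0                        # how often the plug-in actually ran

    def process(self, frames):
        """Called by the host once per 64-frame block."""
        self.pending.extend(frames)
        if len(self.pending) >= PLUGIN_BLOCK:
            batch = self.pending[:PLUGIN_BLOCK]
            self.pending = self.pending[PLUGIN_BLOCK:]
            self.plugin_process(batch)        # per-call overhead paid here
            self.calls += 1

def expensive_plugin(batch):
    pass  # imagine per-call overhead dominating the cost

w = BatchingWrapper(expensive_plugin)
for _ in range(16):                # 16 host blocks = 1024 frames total
    w.process([0.0] * HOST_BLOCK)
print(w.calls)  # the plug-in ran only twice instead of 16 times
```

The trade-off is that the batched plug-in’s output arrives later, which a real host has to handle with delay compensation – part of why threading this change through the rest of the codebase takes time.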
When it’s done, existing Reason users won’t notice anything. But users of the affected plug-ins will see big performance gains.
What to expect when it ships
I sat with the engineers looking at an alpha and we measured CPU usage. The results by plug-in are what you might expect.
We worked with three plug-ins by way of example – charts are here. With iZotope Ozone 7, there’s a massive gain in the new build. That makes sense – a mastering plug-in isn’t so concerned about low-latency performance. With Xfer Records Serum, there’s almost no gain. Native Instruments’ Massive is somewhere in between. These are just typical examples – other plug-ins will fall along the same range.
Native Instruments’ Massive gets a modest but measurable performance boost. Left: before. Right: after.
iZotope’s Ozone is a more dramatic example. Stack some instances of this mastering-focused plug-in, and you can max out the CPU quickly in Reason. (left) But in the Reason 10.3 alpha, you can see the “big batch” approach resolves that performance issue. (right)
Those graphs are from the Mac version, but the OS in this case doesn’t really matter.
The fix is coming to the public. The alpha is not something you want to run – it’s in the hands of testers who don’t mind working with prerelease software. A public beta won’t happen in the couple of weeks we have left in 2018, but it is coming soon – as soon as it’s done. And of course, 10.3 will be a free upgrade for Reason 10 users.
When it ships, Reason 10.3 will give you performance on par with other DAWs. That is, your performance will depend on your CPU and which plug-ins you’re using, but Reason will be more or less the same as other hosts beyond that.
So this isn’t really exciting stuff, but it will make your life easier. We’ll let you know how it comes along, and we’ll try to test the final version.
Hey, hardware modular – the computer is back. Cherry Audio’s Voltage Modular is another software modular platform. Its angle: be better for users — and now, easier and more open to developers, with a new free tool.
Voltage Modular was shown at the beginning of the year, but its official release came in September – and now is when it’s really hitting its stride. Cherry Audio’s take certainly isn’t alone; see also, in particular, Softube Modular, the open source VCV Rack, and Reason’s Rack Extensions. Each of these supports live patching of audio and control signal, hardware-style interfaces, and has rich third-party support for modules with a store for add-ons. But they’re all also finding their own particular take on the category. That means now is suddenly a really nice time for people interested in modular on computers, whether for the computer’s flexibility, as a supplement to hardware modular, or even just because physical modular is bulky and/or out of budget.
So, what’s special about Voltage Modular?
Easy patching. Audio and control signals can be freely mixed, and there’s even a six-way pop-up multi on every jack, so each jack has tons of routing options. (This is a computer, after all.)
Each jack can pop up to reveal a multi.
It’s polyphonic. This one’s huge – you get true polyphony via patch cables and poly-equipped modules. Again, you know, like a computer.
It’s open to development. There’s now a free Module Designer app (commercial licenses available), and it’s impressively easy to code for. You write DSP in Java, and Cherry Audio say they’ve made it easy to port existing code. The app also looks like it reduces a lot of friction in this regard.
There’s an online store for modules – and already some strong early contenders. You can buy modules, bundles, and presets right inside the app. The mighty PSP Audioware, as well as Vult (who make some of my favorite VCV stuff) are already available in the store.
There’s an online store for free and paid add-ons – modules and presets. But right now, a hundred bucks gets you started with a bunch of stuff right out of the gate.
Voltage Modular is a VST/AU/AAX plug-in and runs standalone. And it supports 64-bit double-precision math with zero-latency module processing – but, impressively in our tests, isn’t as hard on your CPU as some of its rivals.
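As an aside on that “double-precision” point: 64-bit math matters for long-running accumulations, like feedback paths or slow envelopes, where 32-bit rounding errors pile up. Here’s a tiny sketch – the float32 arithmetic is simulated with Python’s struct module, and the numbers are purely illustrative, not a benchmark of Voltage Modular:

```python
import struct

def to_f32(x):
    """Round a Python float (64-bit) down to 32-bit precision."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Accumulate a tiny increment 100,000 times -- the kind of long-running
# summation a feedback path or envelope might perform.
step = 1e-7
acc32 = 0.0
acc64 = 0.0
for _ in range(100_000):
    acc32 = to_f32(to_f32(acc32) + to_f32(step))  # every op rounded to float32
    acc64 += step                                  # full double precision

print(acc64)  # essentially exact: 0.01
print(acc32)  # typically drifts further from 0.01, as float32
              # runs out of mantissa bits for the small increments
```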
Right now, Voltage Modular Core + Electro Drums are on sale for just US$99.
Real knobs and patch cords are fun, but … let’s be honest, this is a hell of a lot of fun, too.
So what about that development side, if that interests you? Well, Apple-style, there’s a 70/30 split in developers’ favor. And it looks really easy to develop on their platform:
Java may be something of a bad word to developers these days, but I talked to Cherry Audio about why they chose it, and it definitely makes some sense here. Apart from being a reasonably friendly language, and having unparalleled support (particularly on the Internet connectivity side), Java solves some of the pitfalls that might make a modular environment full of third-party code unstable. You don’t have to worry about memory management, for one. I can also imagine some wackier, creative applications using Java libraries. (Want to code a MetaSynth-style image-to-sound module, and even pull those images from online APIs? Java makes it easy.)
Just don’t think of “Java” as in legacy Java applications. Here, DSP code runs on the HotSpot virtual machine, so your DSP is actually running as machine language by the time it’s in an end user’s patch. It seems Cherry have also thought through the GUI: the UI is coded natively in C++, while you can create custom graphics like oscilloscopes (again, using just Java on your side). This is similar to the models chosen by VCV and Propellerhead for their own environments, and it suggests a direction for plug-ins that involves far less extra work and greater portability. It’s no stretch to imagine experienced developers porting to multiple modular platforms reasonably easily. Vult of course is already in that category … and their stuff is so good I might almost buy it twice.
Or to put that in fewer words: the VM can match or even best native environments, while saving developers time and trouble.
Cherry also tell us that iOS, Linux, and Android could theoretically be supported in the future using their architecture.
Of course, the big question here is installed user base and whether it’ll justify effort by developers, but at least by reducing friction and work and getting things rolling fairly aggressively, Cherry Audio have a shot at bypassing the chicken-and-egg dangers of trying to launch your own module store. Plus, while this may sound counterintuitive, I actually think that having multiple players in the market may call more attention to the idea of computers as modular tools. And since porting between platforms isn’t so hard (in comparison to VST and AU plug-in architectures), some interested developers may jump on board.
Well, that and there’s the simple matter that in music, we synth nerds love to toy around with this stuff both as end users and as developers. It’s fun and stuff. On that note:
Modulars gone soft
Stay tuned; I’ve got this for testing and will let you know how it goes.