NI Massive X synth sees first features, interface revealed

Native Instruments’ Massive synth defined a generation of soft synths and left a whole genre or two in its wake. But its sequel remains mysterious. Now the company is revealing some of what we can expect.

First, temper your expectations: NI aren’t giving us any sound samples or a release date. (It’s unclear whether the blog talking about “coming months” refers just to this blog series or … whether we’re waiting some months for the software, which seems possible.)

What you do get to see, though, is some of what I got a preview of last fall.

After a decade and a half, making a satisfying reboot of Massive is a tall order. What’s encouraging about Massive X is that it seems to return to some of the original vision of creator Mike Daliot. (Mike is still heavily involved in the new release, too, having crafted all 125 wavetables himself, among other things.)

So Massive X, like Massive before it, is all about making complex modulation accessible – about providing some of the depth of a modular in a fully designed semi-modular environment. Those are packaged into a UI that’s cleaner, clearer, prettier – and finally, scalable. And since this is not 2006, the sound engine beneath has been rewritten – another reason I’m eager to finally hear it in public form.

Massive X is still Massive. That means it incorporates features that are now so widely copied, you would be forgiven for forgetting that Massive did them first. That includes drag-and-drop modulation, the signature ‘saturn ring’ indicators of modulation around knobs, and even aspects of the approach to sections in the UI.

What’s promising is really the approach to sound and modulation. In short, here’s what’s being revealed publicly in this blog piece for the first time:

Two dedicated phase modulation oscillators. Phase modulation was one of the deeper features of the original – and, if you could figure out Yamaha’s arcane approach to programming, instruments like the DX7. But now it’s more deeply integrated with the Massive architecture, and there’s more of it.
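NI hasn’t published the new engine’s internals, but the core of phase modulation itself is compact enough to sketch. Here’s a minimal two-operator PM voice in Python – all names and parameter values are illustrative, not Massive X’s actual code:

```python
import numpy as np

SR = 48000  # sample rate in Hz

def pm_voice(carrier_hz, mod_hz, mod_index, seconds=1.0):
    """Two-operator phase modulation, DX7-style: the modulator's output
    is added to the carrier's phase rather than to its frequency."""
    t = np.arange(int(SR * seconds)) / SR
    modulator = np.sin(2 * np.pi * mod_hz * t)
    # mod_index scales how far the modulator pushes the carrier's phase
    return np.sin(2 * np.pi * carrier_hz * t + mod_index * modulator)

# A 2:1 modulator-to-carrier ratio gives a classic hollow PM timbre
signal = pm_voice(carrier_hz=220.0, mod_hz=440.0, mod_index=2.5)
```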

Lots of noise. In addition to those hundred-plus wavetables for the oscillators, you also get dozens of noise sources. (Rain! Birdies!) That rather makes Massive into an interesting noise synth, and should open up lots of sounds that aren’t, you know, angry EDM risers and basslines.

New filters. Comb filters, parallel and serial routing, and new sound. The filters are really what make a lot of NI’s latest generation stuff sound so good (as with a lot of newer software), so this is one to listen for.

New effects algorithms. Ditto.

Expanded Insert FX. This was another of the deeper features in Massive – and a case of the semi-modular offering some of the power of a full-blown modular, in a different (arguably, if you like, more useful) context. Since this can include both effects and oscillators, there are some major routing possibilities. Speaking of which:

Audio routing. Route an oscillator to itself (phase feedback), or route oscillators to one another (yet more phase modulation), and make other connections you would normally expect of a modular synth – not necessarily even a semi-modular one.

Modulators route to the audio bus, too – so again like modular hardware, you can treat audio and modulation interchangeably.

More envelopes. Now you get up to nine of these, and unique new devices like a “switcher” LFO. New “Performers” can use time signature-specific rhythms for modulation, and you can trigger snapshots.

It’s a “tracker.” Four Trackers let you use MIDI as assignable modulation.

Maybe this is an oversimplification, but at the end of the day, it seems to me this is really about whether you want to get deep with this specific, semi-modular design, or go into a more open-ended modular environment. The tricky thing about Massive X is, it might have just enough goodies to draw in even the latter camp.

And, yeah, sure, it’s late. But … Reaktor has proven to us in the past that some of the stuff NI does slowest can also be the stuff the company does best. Blame some obsessive engineers who are totally uninterested in your calendar dates, or, like, the forward progression of time.

For a lot of us, Massive X will have to compete with the fact that on the one hand, the original Massive is easy and light on CPU, and on the other, there are so many new synths and modulars to play with in software. But let’s keep an eye on this one.

And yes, NI, can we please hear the thing soon?

https://blog.native-instruments.com/massive-x-lab-welcome-to-massive-x/

Hey, at least I can say – I think I was the first foreign press to see the original (maybe even the first press meeting, full stop), I’m sure because at the time, NI figured Massive would appeal only to CDM-ish synth nerds. (Then, oops, Skrillex happened.) So I look forward to Massive X accidentally creating the Hardstyle Bluegrass Laser Tag craze. Be ready.


Unique takes on delay and tremolo from K-Devices, now as plug-ins

K-Devices have brought alien interfaces and deep modulation to Max patches – now they’re doing plug-ins. And their approach to delay and tremolo isn’t quite like what you’ve seen before – a chance to break out of the usual patterns of how those effects work. Meet TTAP and WOV.

“Phoenix” is the new series of plug-ins from K-Devices, who previously had focused on Max for Live. Think part glitchy IDM, part spacey analog retro – and the ability to mix the two.

TTAP

TTAP is obviously a play on both multi-tap delay and tape, and it’s another multi-faceted experiment with analog and digital effects.

At its heart are two buffers with controls for delay time, speed, and feedback. You can sync the time controls or set them free. But the basic idea here is that you get smooth or glitchy buffers warping around based on modulation and timing you control. Some really beautiful effects are possible.
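K-Devices haven’t detailed TTAP’s DSP, so treat this as a rough mental model only: a toy feedback delay buffer in Python. Imagine two of these running with independent time, speed, and feedback, blended and modulated – a sketch of the idea, not the plug-in’s actual algorithm:

```python
import numpy as np

def feedback_delay(x, sr, time_s=0.3, feedback=0.5):
    """Toy single-tap feedback delay; the description above suggests
    TTAP pairs two buffers like this, each with its own settings."""
    d = max(1, int(sr * time_s))
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - d] if n >= d else 0.0
        y[n] = x[n] + feedback * delayed
    return y
```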

WOV

WOV is a tremolo that’s evolved into something new. You can leave it as a plain vanilla tremolo (a regular-rate amplitude modulator), but you can also adjust how it responds to the incoming signal. And there’s an eight-step sequencer. There are extensive controls for shaping the effect’s waves, and a Depth section that’s, well, deep – or that lets you turn this tremolo into a kind of gate.
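Again as a rough mental model (not K-Devices’ code): a plain tremolo is just periodic amplitude modulation, and pushing the depth while hardening the modulation shape is what turns it into a gate, as described above:

```python
import numpy as np

def tremolo(x, sr, rate_hz=5.0, depth=0.5):
    """Plain-vanilla tremolo: periodic amplitude modulation."""
    t = np.arange(len(x)) / sr
    lfo = 0.5 * (1.0 + np.sin(2 * np.pi * rate_hz * t))  # range 0..1
    # For a gate-like effect, harden the shape (e.g. lfo = np.round(lfo))
    # and push depth toward 1.0 so the troughs fall to silence.
    return x * (1.0 - depth * lfo)
```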

These are the sorts of things you could do with a modular and a number of modules, but having it in a single, efficient, integrated plug-in where you get straight at the controls without having to do a bunch of patching – that’s something.


Right now, each plug-in is on sale (25% off) for 45EUR including VAT (about forty-two bucks for the USA), or 40% off if you buy both. The sale runs through March 17.

VST/VST3/AU/AAX, Mac and Windows.

More:

https://k-devices.com/


VCV Rack nears 1.0, new features, as software modular matures

VCV Rack, the open source platform for software modular, keeps blossoming. If what you were waiting for was more maturity and stability and integration, the current pipeline looks promising. Here’s a breakdown.

Even with other software modulars on the scene, Rack stands out. Its model is unique – build a free, open source platform, and then build the business on adding commercial modules, supporting both the platform maker (VCV) and third parties (the module makers). That has opened up some new possibilities: a mixed module ecosystem of free and paid stuff, support for ports of open source hardware to software (Music Thing Modular, Mutable Instruments), robust Linux support (which other Eurorack-emulation tools currently lack), and a particular community ethos.

Of course, the trade-off with Rack 0.xx is that the software has been fairly experimental. Versions 1.0 and 2.0 are now in the pipeline, though, and they promise a more refined interface, greater performance, a more stable roadmap, and more integration with conventional DAWs.

New for end users

VCV founder and lead developer Andrew Belt has been teasing out what’s coming in 1.0 (and 2.0) online.

Here’s an overview:

  • Polyphony, polyphonic cables, polyphonic MIDI support and MPE
  • Multithreading and hardware acceleration
  • Tooltips, manual data entry, and right-click menus with more information on modules
  • Virtual CV to MIDI and direct MIDI mapping
  • 2.0 version coming with fully-integrated DAW plug-in

More on that:

Polyphony and polyphonic cables. The big one – you can now use polyphonic modules and even polyphonic patching. Here’s an explanation:

https://community.vcvrack.com/t/how-polyphonic-cables-will-work-in-rack-v1/

New modules will help you manage this.
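Conceptually, a polyphonic cable just means one connection carries a vector of voltages – up to 16 channels in Rack v1 – instead of a single value. Rack itself is C++; this toy Python model is only to illustrate the idea:

```python
import numpy as np

POLY_CHANNELS = 16  # a Rack v1 cable carries up to 16 channels

def sum_poly(frame):
    """Roughly what a unity-mix/sum module does: collapse a polyphonic
    cable back down to a single monophonic voltage."""
    return float(np.sum(frame))

frame = np.zeros(POLY_CHANNELS)
frame[:3] = [1.2, -0.4, 0.7]  # three active voices on one cable
mono = sum_poly(frame)
```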

Polyphonic MIDI and MPE. Yep, native MPE support. We’ve seen this in some competing platforms, so great to see here.

Multithreading. Rack will now use multiple cores on your CPU more efficiently. There’s also a new DSP framework that adds CPU acceleration (which helps efficiency for polyphony, for example). (See the developer section below.)

Oversampling for better audio quality. You can choose higher oversampling settings in the engine to reduce aliasing.

Tooltips and manual value entry. Get more feedback from the UI, plus precise control. You can also right-click to open other stuff – links to the developer’s website, the manual (yes!), source code (for modules that make it readily available), or factory presets.

Core CV-MIDI. Send virtual CV to outboard gear as MIDI CC, gate, note data. This also integrates with the new polyphonic features. But even better –

Map MIDI directly. The MIDI map module lets you map parameters without having to patch through another module. A lot of software has been pretty literal with the modular metaphor, so this is a welcome change.

And that’s just what’s been announced. 1.0 is due in the coming months – and 2.0 is coming as well…

Rack 2.0 and VCV for DAWs. After 1.0, 2.0 isn’t far behind. “Shortly after” 2.0 is released, a DAW plug-in will be launched as a paid add-on, with support for “multiple instances, DAW automation with parameter labels, offline rendering, MIDI input, DAW transport, and multi-channel audio.”

These plans aren’t totally set yet, but a price around a hundred bucks and multiple ins and outs are also planned. (Multiple I/O also means some interesting integrations will be possible with Eurorack or other analog systems, for software/hardware hybrids.)

VCV Bridge is already deprecated, and will be removed from Rack 2.0. Bridge was effectively a stopgap for allowing crude audio and MIDI integration with DAWs. The planned plug-in sounds more like what users want.

Rack 2.0 itself will still be free and open source software, under the same license. The good thing about the plug-in is, it’s another way to support VCV’s work and pay the bills for the developer.

New for developers

Rack v1 is under a BSD license – proper free and open source software. There’s even a mission statement that deals with this.

Rack v1 will bring a new, stabilized API – meaning you will need to do some work to port your modules. It’s not a difficult process, though – and I think part of Rack’s appeal is the friendly API and SDK from VCV.

https://vcvrack.com/manual/Migrate1.html

You’ll also be able to use an SSE wrapper (simd.hpp) to take advantage of accelerated code on desktop CPUs, without hard-coding calls to hardware that could break your plug-ins in the future. This also theoretically opens up future support for other platforms – like NEON or AVX acceleration. (It does seem like ARM platforms are the future, after all.)

Plus check this post on adding polyphony to your stuff.

And in other Rack news…

Also worth mentioning:

While the Facebook group is still active and a place where a lot of people share work, there’s a new dedicated forum. That does things Facebook doesn’t allow, like efficient search, structured sections in chronological order so it’s easy to find answers, and generally not being part of a giant, evil, destructive platform.

https://community.vcvrack.com/

It’s powered by open source forum software Discourse.

For a bunch of newly free add-ons, check out the wonderful XFX stuff (I paid for at least one of these, and would do so again if they add more commercial stuff):

http://blamsoft.com/vcv-rack/

Vult is a favorite of mine, and there’s a great review this week, with 79 demo patches too:

There’s also a new version of Mutable Instruments Tides, Tidal Modulator 2, available in the Audible Instruments Preview add-on – and 80% of your money goes to charity.

https://vcvrack.com/AudibleInstruments.html#preview

And oh yeah, remember that in the fall Rack already added support for hosting VST plugins, via its Host module. It will even work inside the forthcoming plugin, so you can host plugins inside a plugin.

https://vcvrack.com/Host.html

Here it is with the awesome d16 stuff, another of my addictions:

Great stuff. I’m looking forward to some quality patching time.

http://vcvrack.com/


Magenta Studio lets you use AI tools for inspiration in Ableton Live

Instead of just accepting all this machine learning hype, why not put it to the test? Magenta Studio lets you experiment with open source machine learning tools, standalone or inside Ableton Live.

Magenta provides a pretty graspable way to get started with a field of research that can get a bit murky. By giving you easy access to machine learning models for musical patterns, it lets you generate and modify rhythms and melodies. The team at Google AI first showed Magenta Studio at Ableton’s Loop conference in LA in November, but after some vigorous development, it’s a lot more ready for primetime now, both on Mac and Windows.

If you’re working with Ableton Live, you can use Magenta Studio as a set of devices. Because they’re built with Electron (a popular cross-platform JavaScript tool), though, there’s also a standalone version. Developers can dig far deeper into the tools and modify them for their own purposes – and even if you have just a little comfort with the command line, you can also train your own models. (More on that in a bit.)

Side note of interest to developers: this is also a great showcase for doing powerful stuff with machine learning using just JavaScript, applying even GPU acceleration without having to handle a bunch of complex, platform-specific libraries.

I got to sit down with the developers in LA, and also have been playing with the latest builds of Magenta Studio. But let’s back up and first talk about what this means.

Magenta Studio is out now, with more information on the Magenta project and other Google work on musical applications of machine learning:

g.co/magenta
g.co/magenta/studio

AI?

Artificial Intelligence – well, apologies, I could have fit the letters “ML” into the headline above but no one would know what I was talking about.

Machine learning is a better term. What Magenta and TensorFlow are based on is applying algorithmic analysis to large volumes of data. “TensorFlow” may sound like some kind of stress exercise ball you keep at your desk. But it’s really about creating an engine that can very quickly process lots of tensors – geometric units that can be combined into, for example, artificial neural networks.

Seeing the results of this machine learning in action means having a different way of generating and modifying musical information. It takes the stuff you’ve been doing in music software with tools like grids, and lets you use a mathematical model that’s more sophisticated – and that gives you different results you can hear.

You may know Magenta from its involvement in the NSynth synthesizer —

https://nsynthsuper.withgoogle.com/

That also has its own Ableton Live device, from a couple years back.

https://magenta.tensorflow.org/nsynth-instrument

NSynth uses models to map sounds to other sounds and interpolate between them – it actually applies the techniques we’ll see in this case (for notes/rhythms) to audio itself. Some of the grittier artifacts produced by this process even proved desirable to some users, as they’re something a bit unique – and you can again play around in Ableton Live.

But even if that particular application didn’t impress you – trying to find new instrument timbres – the note/rhythm-based ideas make this effort worth a new look.

Recurrent Neural Networks are a kind of mathematical model that loops over data again and again. We say they’re “learning” in the sense that there are some parallels to very low-level understandings of how neurons work in biology, but this is on a more basic level – running the algorithm repeatedly over a particular data set means the model can predict sequences more and more effectively.

Magenta’s “musical” library applies a set of learning principles to musical note data. That means it needs a set of data to “train” on – and part of the results you get are based on that training set. Build a model based on a data set of bluegrass melodies, for instance, and you’ll have different outputs from the model than if you started with Gregorian plainchant or Indonesian gamelan.
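To make that concrete with something far simpler than Magenta’s neural networks: even a toy first-order Markov model shows how the training corpus shapes what comes out. The note lists below are made up purely for illustration:

```python
import random
from collections import defaultdict

def train_markov(melodies):
    """Tiny first-order Markov model over MIDI note sequences - much
    simpler than Magenta's models, but it makes the same point: the
    training data determines what the model can generate."""
    table = defaultdict(list)
    for melody in melodies:
        for a, b in zip(melody, melody[1:]):
            table[a].append(b)
    return table

def generate(table, start, length=8):
    note, out = start, [start]
    for _ in range(length - 1):
        note = random.choice(table[note]) if table[note] else note
        out.append(note)
    return out

corpus_a = [[60, 62, 64, 67, 64, 62, 60]]       # toy "bluegrass"
corpus_b = [[60, 62, 60, 58, 60, 62, 63, 62]]   # toy "plainchant"
print(generate(train_markov(corpus_a), 60))
print(generate(train_markov(corpus_b), 60))
```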

One reason that it’s cool that Magenta and Magenta Studio are open source is, you’re totally free to dig in and train your own data sets. (That requires a little more knowledge and some time for your computer or a server to churn away, but it also means you shouldn’t judge Magenta Studio on these initial results alone.)

What’s in Magenta Studio

Magenta Studio has a few different tools. Many are based on MusicVAE – a recent research model that looked at how machine learning could be applied to how different melodies relate to one another. Music theorists have looked at melodic and rhythmic transformations for a long time, and very often use mathematical models to make more sophisticated descriptions of how these function. Machine learning lets you work from large sets of data, and then not only make a model, but morph between patterns and even generate new ones – which is why this gets interesting for music software.

Crucially, you don’t have to understand or even much care about the math and analysis going on here – expert mathematicians and amateur musicians alike can hear and judge the results. If you want to read a summary of that MusicVAE research, you can. But it’s a lot better to dive in and see what the results are like first. And now instead of just watching a YouTube demo video or song snippet example, you can play with the tools interactively.

Magenta Studio lets you work with MIDI data, right in your Ableton Live Session View. You’ll make new clips – sometimes starting from existing clips you input – and the device will spit out the results as MIDI you can use to control instruments and drum racks. There’s also a slider called “Temperature” which determines how the model is sampled mathematically. It’s not quite like adjusting randomness – hence the new name – but it will give you some control over how predictable or unpredictable the results will be (if you also accept that the relationship may not be entirely linear). And you can choose the number of variations, and the length in bars.
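Temperature is a standard technique for sampling from models like these, and it helps explain why it isn’t plain randomness. A sketch of the usual math – not Magenta Studio’s literal code, and the logits here are hypothetical model outputs for candidate next notes:

```python
import numpy as np

def sample_with_temperature(logits, temperature=1.0):
    """Rescale the model's output distribution before sampling: below
    1.0 sharpens it (more predictable picks), above 1.0 flattens it
    (more surprising picks)."""
    scaled = np.asarray(logits, dtype=float) / max(temperature, 1e-6)
    probs = np.exp(scaled - scaled.max())  # numerically stable softmax
    probs /= probs.sum()
    return np.random.choice(len(probs), p=probs)

next_note = sample_with_temperature([2.0, 1.0, 0.2, -1.0], temperature=0.8)
```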

The data these tools were trained on represents millions of melodies and rhythms. That is, they’ve chosen a dataset that will give you fairly generic, vanilla results – in the context of Western music, of course. (And Live’s interface is set up with expectations about what a drum kit is, and with melodies oriented around a 12-tone equal-tempered piano, so this fits that interface… not to mention, arguably there’s some cultural affinity for that standardization itself and the whole idea of making this sort of machine learning model, but I digress.)

Here are your options:

Generate: This makes a new melody or rhythm with no input required – it’s the equivalent of rolling the dice (erm, machine learning style, so very much not random) and hearing what you get.

Continue: This is actually a bit closer to what Magenta Studio’s research was meant to do – punch in the beginning of a pattern, and it will fill in where it predicts that pattern could go next. It means you can take a single clip and finish it – or generate a bunch of variations/continuations of an idea quickly.

Interpolate: Instead of one clip, use two clips and merge/morph between them.

Groove: Adjust timing and velocity to “humanize” a clip to a particular feel. This is possibly the most interesting of the lot, because it’s a bit more focused – and immediately solves a problem that software hasn’t solved terribly well in the past. Since the data set is focused on 15 hours of real drummers, the results here sound more musically specific. And you get a “humanize” that’s (arguably) closer to what your ears would expect to hear than the crude percentage-based templates of the past. And yes, it makes quantized recordings sound more interesting.

Drumify: Same dataset as Groove, but this creates a new clip based on the groove of the input. It’s … sort of like if Band-in-a-Box rhythms weren’t awful, basically. (Apologies to the developers of Band-in-a-Box.) So it works well for percussion that ‘accompanies’ an input.

So, is it useful?

It may seem un-human or un-musical to use any kind of machine learning in software. But from the moment you pick up an instrument, or read notation, you’re working with a model of music. And that model will impact how you play and think.

More to the point with something like Magenta is, do you really get musically useful results?

Groove to me is really interesting. It effectively means you can make less rigid groove quantization, because instead of some fixed variations applied to a grid, you get a much more sophisticated model that adapts based on input. And with different training sets, you could get different grooves. Drumify is also compelling for the same reason.

Generate is also fun, though even in the case of Continue, the issue is that these tools don’t particularly solve a problem so much as they do give you a fun way of thwarting your own intentions. That is, much like using the I Ching (see John Cage, others) or a randomize function (see… all of us, with a plug-in or two), you can break out of your usual habits and create some surprise even if you’re alone in a studio or some other work environment.

One simple issue here is that a model of a sequence is not a complete model of music. Even monophonic music can deal with weight, expression, timbre. Yes, theoretically you can apply each of those elements as new dimensions and feed them into machine learning models, but – let’s take chant music, for example. Composers were working with less quantifiable elements as they worked, too, like the meaning and sound of the text, positions in the liturgy, multi-layered quotes and references to other compositions. And that’s the simplest case – music from punk to techno to piano sonatas will challenge these models in Magenta.

I bring this up not because I want to dismiss the Magenta project – on the contrary, if you’re aware of these things, having a musical game like this is even more fun.

The moment you begin using Magenta Studio, you’re already extending some of the statistical prowess of the machine learning engine with your own human input. You’re choosing which results you like. You’re adding instrumentation. You’re adjusting the Temperature slider using your ear – when in fact there’s often no real mathematical indication of where it “should” be set.

And that means that hackers digging into these models could also produce new results. People are still finding new applications for quantize functions, which haven’t changed since the 1980s. With tools like Magenta, we get a whole new slew of mathematical techniques to apply to music. Changing a dataset or making minor modifications to these plug-ins could yield very different results.

And for that matter, even if you play with Magenta Studio for a weekend, then get bored and return to practicing your own music, even that’s a benefit.

Where this could go next

There are lots of people out there selling you “AI” solutions and – yeah, of course, with this much buzz, a lot of it is snake oil. But that’s not the experience you have talking to the Magenta team, partly because they’re engaged in pure research. That puts them in line with the history of pure technical inquiries past, like the team at Bell Labs (with Max Mathews) that first created computer synthesis. Magenta is open, but also open-ended.

As Jesse Engel tells CDM:

We’re a research group (not a Google product group), which means that Magenta Studio is not static, and a lot more interesting models are probably on the way.

Things like more impressive MIDI generation (https://magenta.tensorflow.org/music-transformer – check out “Score Conditioned”)

And state of the art transcription: (https://magenta.tensorflow.org/onsets-frames)

And new controller paradigms:
(https://magenta.tensorflow.org/pianogenie)

Our goal is just for someone to come away with an understanding that this is an avenue we’ve created to close the gap between our ongoing research and actual music making, because feedback is important to our research.

So okay, music makers – have at it:

g.co/magenta
g.co/magenta/studio


Reason 10.3 will improve VST performance – here’s how

VST brings more choice to Reason, but more support demands, too. Here’s an update on how Propellerhead are optimizing Reason to bring plug-in performance in line with what users expect.

For years, Reason was a walled-off garden. Propellerhead resisted supporting third-party plug-ins, and when they did, introduced their own native Rack Extensions technology for supporting them. That enables more integrated workflows, better user experience, greater stability, and easier installation and updates than a format like VST or AU allows.

But hey, we have a lot of VSTs we want to run inside Reason, engineering arguments be damned. And so Propellerhead finally listened to users, delivering support for VST effects and instruments on Mac and Windows in Reason 9.5. (Currently only VST2 plug-ins are supported, not VST3.)

Propellerhead have been working on improving stability and performance continuously since then. Reason 10.3 is a much-anticipated update, because it addresses a significant performance issue with VST plug-ins – without disrupting one of the things that makes Reason’s native devices work well.

The bad news is, 10.3 is delayed.

The good news is, it works really well. It puts Reason on par with other DAWs as far as VST performance. That’s a big deal to Reason users, just because in many other ways Reason is unlike other DAWs.

I met with Propellerhead engineers yesterday in Stockholm, including Mattias Häggström Gerdt (product manager for Reason). We got to discuss the issue, their whole development effort, and get hands-on with their alpha version.

Why this took a while

Okay, first, some technical discussion. “Real time” is actually not a thing in digital hardware and software. The illusion of a system working in real time is created by buffering – using very small windows of time to pass audio information, so small that the results seem instantaneous to the user.

There’s a buffer size you set for your audio interface – this one you may already know about. But software also has internal buffers for processing, hidden from the user. In a modular environment, you really want this buffer to be as small as possible, so that patching and processing feel responsive – just as they would if you were using analog hardware. Reason accordingly has an internal buffer of 64 frames to do just that. That means without any interruptions to your audio stream, you can patch and repatch and tweak and play to your heart’s content.

Here’s the catch: some plug-in developers, for design reasons, prefer larger buffers (higher latency) in order to reduce CPU consumption – even though their plug-ins technically work in Reason’s small-buffer environment. This is common in plug-ins where ultra-low-latency internal processing isn’t as important. But running inside Reason, that approach adds strain to your CPU. Some users won’t notice anything, because they don’t use these plug-ins, or use fewer instances of them. But some will see their machine run out of CPU resources faster in Reason than in other DAWs. The result: the same plug-in setup you used in another DAW will make Reason sputter, which is of course not what you want.

Another catch: if you have ever tried adjusting the audio buffer size on your interface to reduce CPU usage, in this case, that won’t help. So users encountering this issue are left frustrated.

This is a fixable problem. You give those plug-ins larger buffers when they demand them, while Reason and its devices continue to work as they always have. It’s just there’s a lot of work going back through all the rest of Reason’s code to adjust for the change. And like a lot of coding work, that takes time. Adding more people doesn’t necessarily even speed this up, either. (Ever tried adding more people to a kitchen to “speed up” cooking dinner? Like that.)
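Propellerhead didn’t share implementation details, but the general shape of this kind of fix is easy to sketch: keep the host’s small blocks, and let a buffer-hungry plug-in process in bigger batches, trading latency for fewer DSP calls. A toy Python model – the sizes and names are my assumptions, purely for illustration:

```python
import numpy as np

HOST_BLOCK = 64      # Reason's small internal buffer (per the article)
PLUGIN_BLOCK = 512   # a hypothetical plug-in that prefers big buffers

class BlockAdapter:
    """Accumulate small host blocks, then hand the plug-in one big
    block. The cost is added latency; the win is that the plug-in runs
    its DSP once per 512 frames instead of once per 64."""
    def __init__(self, process_big_block):
        self.process = process_big_block
        self.inbuf = np.zeros(0)
        self.outbuf = np.zeros(PLUGIN_BLOCK)  # silence while priming
        self.pos = 0

    def run(self, small_block):
        self.inbuf = np.concatenate([self.inbuf, small_block])
        if len(self.inbuf) >= PLUGIN_BLOCK:
            self.outbuf = self.process(self.inbuf[:PLUGIN_BLOCK])
            self.inbuf = self.inbuf[PLUGIN_BLOCK:]
            self.pos = 0
        out = self.outbuf[self.pos:self.pos + HOST_BLOCK]
        self.pos += HOST_BLOCK
        return out
```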

When it’s done, existing Reason users won’t notice anything. But users of the affected plug-ins will see big performance gains.

What to expect when it ships

I sat with the engineers looking at an alpha and we measured CPU usage. The results by plug-in are what you might expect.

We worked with three plug-ins by way of example – charts are here. With iZotope Ozone 7, there’s a massive gain in the new build. That makes sense – a mastering plug-in isn’t so concerned with low-latency performance. With Xfer Records Serum, there’s almost none. Native Instruments’ Massive is somewhere in between. These are just typical examples – many other plug-ins will also fall along this range.

Native Instruments’ Massive gets a modest but measurable performance boost. Left: before. Right: after.

iZotope’s Ozone is a more dramatic example. Stack some instances of this mastering-focused plug-in, and you can max out the CPU quickly in Reason (left). But in the Reason 10.3 alpha, you can see the “big batch” approach resolves that performance issue (right).

Those graphs are from the Mac, but the OS in this case won’t really matter.

The fix is coming to the public. The alpha is not something you want to run; it’s already in the hands of testers who don’t mind working with prerelease software. A public beta won’t happen in the couple of weeks we have left in 2018, but it is coming soon – as soon as it’s done. And of course 10.3 will be a free upgrade for Reason 10 users.

When it ships, Reason 10.3 will give you performance on par with other DAWs. That is, your performance will depend on your CPU and which plug-ins you’re using, but Reason will be more or less the same as other hosts beyond that.

So this isn’t really exciting stuff, but it will make your life easier. We’ll let you know how it goes and try to test the final version.

Official announcement:

Update on Reason and VST performance

For more on Reason and VST support, see their support section:

Propellerhead Software Rack Extensions, ReFills and VSTs


Cherry Audio Voltage Modular: a full synth platform, open to developers

Hey, hardware modular – the computer is back. Cherry Audio’s Voltage Modular is another software modular platform. Its angle: be better for users — and now, easier and more open to developers, with a new free tool.

Voltage Modular was shown at the beginning of the year, but its official release came in September – and now is when it’s really hitting its stride. Cherry Audio’s take certainly isn’t alone; see also, in particular, Softube Modular, the open source VCV Rack, and Reason’s Rack Extensions. Each of these supports live patching of audio and control signals and hardware-style interfaces, and each has rich third-party module support with a store for add-ons. But they’re all also finding their own particular take on the category. That means now is suddenly a really nice time for people interested in modular on computers – whether for the computer’s flexibility, as a supplement to hardware modular, or even just because physical modular is bulky and/or out of budget.

So, what’s special about Voltage Modular?

Easy patching. Audio and control signals can be freely mixed, and there’s even a six-way pop-up multi on every jack, so each jack has tons of routing options. (This is a computer, after all.)

Each jack can pop up to reveal a multi.

It’s polyphonic. This one’s huge – you get true polyphony via patch cables and poly-equipped modules. Again, you know, like a computer.

It’s open to development. There’s now a free Module Designer app (commercial licenses available), and it’s impressively easy to code for. You write DSP in Java, and Cherry Audio say they’ve made it easy to port existing code. The app also looks like it reduces a lot of friction in this regard.

There’s an online store for modules – and already some strong early contenders. You can buy modules, bundles, and presets right inside the app. The mighty PSP Audioware, as well as Vult (who make some of my favorite VCV stuff) are already available in the store.

The store covers both free and paid add-ons – modules and presets. And right now, a hundred bucks gets you started with a bunch of stuff right out of the gate.

Voltage Modular is a VST/AU/AAX plug-in and runs standalone. And it supports 64-bit double-precision math with zero-latency module processing – but, impressively in our tests, it isn’t as hard on your CPU as some of its rivals.

Right now, Voltage Modular Core + Electro Drums are on sale for just US$99.

Real knobs and patch cords are fun, but … let’s be honest, this is a hell of a lot of fun, too.

For developers

So what about that development side, if that interests you? Well, Apple-style, there’s a 70/30 split in developers’ favor. And it looks really easy to develop on their platform:

Java may be something of a bad word to developers these days, but I talked to Cherry Audio about why they chose it, and it definitely makes some sense here. Apart from being a reasonably friendly language with unparalleled support (particularly on the Internet connectivity side), Java avoids some of the pitfalls that might make a modular environment full of third-party code unstable. You don’t have to worry about memory management, for one. I can also imagine some wackier, creative applications using Java libraries. (Want to code a MetaSynth-style image-to-sound module, and even pull those images from online APIs? Java makes it easy.)

Just don’t think of “Java” as in legacy Java applications. Here, DSP code runs on the HotSpot virtual machine, so your DSP is actually running as machine language by the time it’s in an end user patch. It seems Cherry have also thought through the GUI: the UI is coded natively in C++, while you can create custom graphics like oscilloscopes (again, using just Java on your side). This is similar to the models chosen by VCV and Propellerhead for their own environments, and it suggests a direction for plug-ins that involves far less extra work and greater portability. It’s no stretch to imagine experienced developers porting to multiple modular platforms reasonably easily. Vult of course is already in that category … and their stuff is so good I might almost buy it twice.

Or to put that in fewer words: the VM can match or even best native environments, while saving developers time and trouble.

Cherry also tell us that iOS, Linux, and Android could theoretically be supported in the future using their architecture.

Of course, the big question here is installed user base and whether it’ll justify effort by developers, but at least by reducing friction and work and getting things rolling fairly aggressively, Cherry Audio have a shot at bypassing the chicken-and-egg dangers of trying to launch your own module store. Plus, while this may sound counterintuitive, I actually think that having multiple players in the market may call more attention to the idea of computers as modular tools. And since porting between platforms isn’t so hard (in comparison to VST and AU plug-in architectures), some interested developers may jump on board.

Well, that, and there’s the simple matter that in music, us synth nerds love to toy around with this stuff both as end users and as developers. It’s fun and stuff. On that note:

Modulars gone soft

Stay tuned; I’ve got this for testing and will let you know how it goes.

https://cherryaudio.com/voltage-modular

https://cherryaudio.com/voltage-module-designer


Pigments is a new hybrid synth from Arturia, and you can try it free now

Arturia made their name emulating classic synths, and then made their name again in hardware synths and handy hardware accessories. But they’re back with an original synthesizer in software. It’s called Pigments, and it mixes vintage and new together. You know, like colors.

The funny thing is, wavetable synthesis as an idea is as old or older than a lot of the vintage synths that spring to mind – you can trace it back to the 1970s and Wolfgang Palm, before instruments from PPG and Waldorf.

But “new” is about sound, not history. It’s only recently become practical to build powerful morphing wavetable engines with this much voice complexity and modulation – plus now we have computer displays for visualizing what’s going on.
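For the curious, the core of a morphing wavetable oscillator is simple to express, even if a production engine like Pigments’ adds serious anti-aliasing and interpolation on top. A bare-bones Python sketch, illustrative only:

```python
import numpy as np

SR = 48000

def morphing_wavetable(tables, morph, freq, seconds=1.0):
    """Scan a stack of single-cycle wavetables: 'morph' (0..1)
    crossfades between adjacent tables while 'freq' drives the read
    phase. No anti-aliasing here - that is the hard part real engines
    spend their effort on."""
    t = np.arange(int(SR * seconds)) / SR
    phase = (freq * t) % 1.0
    idx = (phase * tables.shape[1]).astype(int)
    pos = morph * (len(tables) - 1)
    lo = int(pos)
    hi = min(lo + 1, len(tables) - 1)
    frac = pos - lo
    return (1 - frac) * tables[lo][idx] + frac * tables[hi][idx]

n = 2048
sine = np.sin(2 * np.pi * np.arange(n) / n)
ramp = np.linspace(-1, 1, n, endpoint=False)
out = morphing_wavetable(np.stack([sine, ramp]), morph=0.25, freq=110.0)
```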

Pigments brings together the full range of possible colors to work with – vintage to modern, analog to advanced digital. And it does so in a way that feels coherent and focused.

I’ve just started playing around with Pigments – expect a real hands-on shortly – and it’s impressive. You get the edgier sounds of wavetable synthesis with all the sonic language you expect from virtual analog, including all those classic and dirty and grimy sounds. (I can continue my ongoing mission to make everyone think I’m using analog hardware when I’m in the box. Fun.)

Arturia’s marketing copy here is clever – like I wish I’d thought of this phrase: “Pigments can sound like other synths, [but] no other synth can sound like Pigments.”

Okay, so what’s under the hood that makes them claim that?

Two engines: one wavetable, one virtual analog – each representing Arturia’s latest work. The waveshaping side gives you lots of options for sculpting the oscillator and fluidly controlling the amount of aliasing, which determines so much of the sound’s harmonic character.

Advanced pitch modulation which you can quantize to scale – so you can make complex modulations melodic.
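The quantize-to-scale idea is straightforward: snap whatever the modulation source outputs to the nearest allowed scale degree. A minimal sketch (my own illustration, not Arturia’s implementation):

```python
def quantize_to_scale(semitones, scale=(0, 2, 4, 5, 7, 9, 11)):
    """Snap a pitch offset (in semitones) to the nearest degree of a
    scale - C major by default - so a wild modulation source still
    lands on 'musical' pitches. (Octave wrap-around at the top of the
    scale is ignored here for brevity.)"""
    octave, pc = divmod(semitones, 12)
    nearest = min(scale, key=lambda deg: abs(deg - pc))
    return int(octave) * 12 + nearest

# A raw LFO value of +6.3 semitones snaps to +7 (a perfect fifth)
print(quantize_to_scale(6.3))
```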

From the modeling Arturia has been doing and their V Collection, you get the full range of filters, classic and modern (surgeon and comb). There’s also a bunch of effects, like wavefolder, overdrive, parametric EQ, and delay.

There’s also extensive routing for all those toys – drag and drop effects into inserts or sends, choose series or parallel routings, and so on.

The effects section is as deep as modulation, but somehow everything is neatly organized, visual, and never overwhelming.

You can modulate anything with anything, Arturia says – which sounds about right. And for modulation, you have tons of choices in envelopes, modulation shapes, and even function generators and randomization sources. But all of this is also graphical and neatly organized, so you don’t get lost. Best of all, there are “heads-up” graphical displays that show you what’s happening under the hood of even the most complex patch.

The polyphonic sequencer alone is huge, meaning you could work entirely inside Pigments.

Color-coded and tabbed, the UI constantly gives you subtle visual feedback on what the waveforms of modulation, oscillators, and processors are doing at any given time – useful both in building up sounds from scratch and in picking apart the extensive presets available. You can build something step by step if you like, with a sense that inside this semi-modular world, you’re free to focus on one thing at a time while doing something more multi-layered.

Then on top of all of that, it’s not an exaggeration to say that Pigments is really a synth combined with a sequencer. The polyphonic sequencer/arpeggiator is full of trigger options and settings that mean it’s totally possible to fire up Pigments in standalone mode and make a whole piece, just as you would with a full synth workstation or modular rig.

Instead of a short trial, you get a full month to enjoy this – a free release for everyone, expiring only on January the 10th. So now you know what to do with any holiday break. During that time, pricing is $149 / 149€, rising to $199 / 199€ after that.

I’m having a great deal of fun with it already. And we’re clearly looking at a new generation of advanced soft synths. Stay tuned.

Product page:

https://www.arturia.com/products/analog-classics/pigments/media


What it’s like calibrating headphones and monitors with Sonarworks tools

No studio monitors or headphones are entirely flat. Sonarworks Reference calibrates any studio monitors or headphones, with any source. Here’s an explanation of how that works and what the results are like – even if you’re not someone who’s considered calibration before.

CDM is partnering with Sonarworks to bring some content on listening with artist features this month, and I wanted to explore specifically what calibration might mean for the independent producer working at home, in studios, and on the go.

That means this isn’t a review and isn’t independent, but I would prefer to leave that to someone with more engineering background anyway. Sam Inglis reviewed the latest version for Sound on Sound at the start of this year; Adam Kagan reviewed version 3 for Tape Op. (Pro Tools Expert also compared IK Multimedia’s ARC and chose Sonarworks for its UI and systemwide monitoring tools.)

With that out of the way, let’s actually explain what this is for people who might not be familiar with calibration software.

In a way, it’s funny that calibration isn’t part of most music and sound discussions. People working with photos and video and print all expect to calibrate color. Without calibration, no listening environment is really truly neutral and flat. You can adjust a studio to reduce how much it impacts the sound, and you can choose reasonably neutral headphones and studio monitors. But those elements nonetheless color the sound.

I came across Sonarworks Reference partly because a bunch of the engineers and producers I know were already using it – even my mastering engineer.

But as I introduced it to first-time calibration product users, I found they had a lot of questions.

How does calibration work?

First, let’s understand what calibration is. Even studio headphones will color sound – emphasizing certain frequencies, de-emphasizing others. And that’s with the sound source right next to your head. Put studio monitors in a room – even a relatively well-treated studio – and you combine the coloration of the speakers themselves with the reflections and character of the environment around them.

The idea of calibration is to process the sound to cancel out those modifications. Headphones can use existing calibration data. For studio speakers, you take some measurements. You play a known test signal and record it inside the listening environment, then compare the recording to the original and compensate.
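In signal terms, that “compare and compensate” step amounts to estimating the playback system’s frequency response and applying its inverse. A heavily simplified Python sketch of the core idea – real calibration software adds smoothing, phase handling, and safety limits:

```python
import numpy as np

def compensation_curve(played, recorded, n_fft=8192):
    """Compare the known test signal with what the mic captured to
    estimate the speaker+room magnitude response, then invert it.
    Clamping keeps deep room nulls from becoming absurd boosts."""
    ref = np.abs(np.fft.rfft(played, n_fft))
    meas = np.abs(np.fft.rfft(recorded, n_fft))
    response = meas / np.maximum(ref, 1e-9)
    return 1.0 / np.maximum(response, 1e-3)
```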

Hold up this mic, measure some whooping sounds, and your calibration is done. No expertise needed.

What can I calibrate?

One of the things that sets Sonarworks Reference apart is that it’s flexible enough to deal with both headphones and studio monitors, and works both as a plug-in and a convenient universal driver.

The Systemwide driver works on Mac and Windows with the final output. That means you can listen everywhere – I’ve listened to SoundCloud audio through Systemwide, for instance, which has been useful for checking how the streaming versions of my mixes sound. The driver supports Core Audio on the Mac and the latest WASAPI on Windows, which is these days perfectly useful and reliable on my Windows 10 machine. (There’s unfortunately no Linux support, though maybe some enterprising user could get the Windows VST working.)

On the Mac, you select the calibrated output via a pop-up on the menu bar. On Windows, you switch to it just like you would any other audio interface. Once selected, everything you listen to in iTunes, Rekordbox, your Web browser, and anywhere else will be calibrated.

That works for everyday listening, but in production you often want your DAW to control the audio output. (Choosing the plug-in is essential on Windows for use with ASIO; Systemwide doesn’t yet support ASIO though Sonarworks says that’s coming.) In this case, you just add a plug-in to the master bus and the output will be calibrated. You just have to remember to switch it off when you bounce or export audio, since that output is calibrated for your setup, not anyone else’s.

Three pieces of software and a microphone. Sonarworks is a measurement tool, a plug-in and systemwide tool for outputting calibrated sound from any source, and a microphone for measuring.

Do I need a special microphone?

If you’re just calibrating your headphones, you don’t need to do any measurement. But for any other monitoring environment, you’ll need to take a few minutes to record a profile. And so you need a microphone for the job.

Calibrating your headphones is as simple as choosing the make and model number for most popular models.

Part of the convenience of the Sonarworks package is that it includes a ready-to-use measurement mic, and the software is already pre-configured to work with the calibration. These mics are omnidirectional – since the whole point is to pick up a complete image of the sound. And they’re meant to be especially neutral.

Sonarworks’ software is pre-calibrated for use with their included microphone.

Any microphone whose vendor provides a calibration profile – available in standard text form – can also be used with the software in fully calibrated mode. If you have some cheap musician-friendly omni mic, though, its maker usually doesn’t provide anything of the sort the way a calibration mic maker would.

I think it’s easier to just use these mics, but I don’t have a big mic cabinet. Production Expert did a test of generic omni mics – mics that aren’t specifically for calibration – and got results that approximate the results of the test mic. In short, they’re good enough if you want to try this out, though Production Expert were being pretty specific with which omni mics they tested, and then you don’t get the same level of integration with the calibration software.

Once you’ve got the mics, you can test different environments – so your untreated home studio and a treated studio, for instance. And you wind up with what might be a useful mic in other situations – I’ve been playing with mine to sample reverb environments, like playing and re-recording sound in a tile bathroom, for instance.

What’s the calibration process like?

Let’s actually walk through what happens.

With headphones, this job is easy. You select your pair of headphones – all the major models are covered – and then you’re done. So when I switch from my Sony to my Beyerdynamic, for instance, I can smooth out some of the irregularities of each of those. That’s made it easier to mix on the road.

For monitors, you run the Reference 4 Measure tool. Beginners I showed the software got slightly discouraged when they saw the measurement would take 20 minutes but – relax. It’s weirdly kind of fun and actually once you’ve done it once, it’ll probably take you half that to do it again.

The whole thing feels a bit like a Nintendo Wii game. You start by making a longer measurement at the point where your head would normally be sitting. Then you move around to different targets as the software makes whooping sounds through the speakers. Once you’ve covered the full area, you will have dotted a screen with measurements. Then you’ve got a customized measurement for your studio.

Here’s what it looks like in pictures:

Simulate your head! The Measure tool walks you through exactly how to do this with friendly illustrations. It’s easier than putting together IKEA furniture.

You’ll also measure the speakers themselves.

Eventually, you measure the main listening spot in your studio. (And you can see why this might be helpful in studio setup, too.)

Next, you move the mic to each measurement location. There’s interactive visual feedback showing you as you get it in the right position.

Hold the mic steady, and listen as a whooping sound comes out of your speakers and each measurement is completed.

You’ll make your way through a series of these measurements until you’ve dotted the whole screen – a bit like the fingerprint calibration on smartphones.

Oh yeah, so my studio monitors aren’t so flat. When you’re done, you’ll see a curve that shows you the irregularities introduced by both your monitors and your room.

Now you’re ready to listen to a cleaner, clearer, more neutral sound – switch your new calibration on, and if all goes to plan, you’ll get much more neutral sound for listening!

There are other useful features packed into the software, like the ability to apply the curve used by the motion picture industry. (I loved this one – it was like, oh, yeah, that sound!)

It’s also worth noting that Sonarworks have created different calibration types made for real-time usage (great for tracking and improv) and accuracy (great for mixing).

Is all of this useful?

Okay, disclosure statement is at the top, but … my reaction was genuinely holy s***. I thought there would be some subtle impact on the sound. This was more like the feeling – well, as an eyeglass wearer, when my glasses are filthy and I clean them and I can actually see again. Suddenly details of the mix were audible again, and moving between different headphones and listening environments was no longer jarring – like that.

Double blind A/B tests are really important when evaluating the accuracy of these things, but I can at least say, this was a big impact, not a small one. (That is, you’d want to do double blind tests when tasting wine, but this was still more like the difference between wine and beer.)

How you might actually use this: once they adapt to the calibrated results, most people leave the calibrated version on and work from a more neutral environment. Cheap monitors and headphones work a little more like expensive ones; expensive ones work more as intended.

There are other use cases, too, however. Previously I didn’t feel comfortable taking mixes and working on them on the road, because the headphone results were just too different from the studio ones. With calibration, it’s far easier to move back and forth. (And you can always double-check with the calibration switched off, of course.)

The other advantage of Sonarworks’ software is that it does give you so much feedback as you measure from different locations, and that it produces detailed reports. This means if you’re making some changes to a studio setup and moving things around, it’s valuable not just in adapting to the results but giving you some measurements as you work. (It’s not a measurement suite per se, but you can make it double as one.)

Calibrated listening is very likely the future even for consumers. As computation has gotten cheaper, and as software analysis has gotten smarter, it makes sense that these sort of calibration routines will be applied to giving consumers more reliable sound and in adapting to immersive and 3D listening. For now, they’re great for us as creative people, and it’s nice for us to have them in our working process and not only in the hands of other engineers.

If you’ve got any questions about how this process works as an end user, or other questions for the developers, let us know.

And if you’ve found uses for calibration, we’d love to hear from you.

Sonarworks Reference is available with a free trial:

https://www.sonarworks.com/reference

And some more resources:

Our friends at Erica Synths on this tool:

Plus you can MIDI map the whole thing to make this easier:


You can now add VST support to VCV Rack, the virtual modular

VCV Rack is already a powerful, free modular platform that synth and modular fans will want. But a $30 add-on makes it more powerful when integrating with your current hardware and software – VST plug-in support.

Watch:

It’s called Host, and for $30, it adds full support for VST2 instruments and effects, including the ability to route control, gate, audio, and MIDI to the appropriate places. This is a big deal, because it means you can integrate VST plug-ins with your virtual modular environment, for additional software instruments and effects. And it also means you can work with hardware more easily, because you can add in VST MIDI controller plug-ins. For instance, without our urging, someone just made a MIDI controller plug-in for our own MeeBlip hardware synth (currently not in stock, new hardware coming soon).

You’re already able to integrate VCV’s virtual modular with hardware modular using audio and a compatible audio interface (one with DC coupling, like the MOTU range). Now you can also easily integrate outboard MIDI hardware, without having to manually select CC numbers and so on, as you did previously.

Hell, you could go totally crazy and run Softube Modular inside VCV Rack. (Yo dawg, I heard you like modular, so I put a modular inside your modular so you can modulate the modular modular modules. Uh… kids, ask your parents who Xzibit was? Or what MTV was, even?)

What you need to know

Is this part of the free VCV Rack? No. Rack itself is free, but you have to buy “Host” as a US$30 add-on. Still, that means the modular environment and a whole bunch of amazing modules are totally free, so that thirty bucks is pretty easy to swallow!

What plug-ins will work? Plug-ins need to be 64-bit, they need to be VST 2.x (that’s most plugs, but not some recent VST3-only models), and you can run on Windows and Mac.

What can you route? Modular is no fun without patching! So here we go:

There’s Host for instruments – 1V/octave CV for controlling pitch, and gate input for controlling note events. (Forget MIDI and start thinking in voltages for a second here: VCV notes that “When the gate voltage rises, a MIDI note is triggered according to the current 1V/oct signal, rounded to the nearest note. This note is held until the gate falls to 0V.”)

Right now there’s only monophonic input. But you do also get easy access to note velocity and pitch wheel mappings.
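That quoted behavior is simple enough to state in code. A small Python sketch of the 1V/oct-plus-gate logic – the send callback is hypothetical, and VCV’s convention maps 0V to C4 (MIDI note 60):

```python
def volts_to_midi(cv_volts, zero_volt_note=60):
    """1V/oct: each volt spans an octave (12 semitones), rounded to
    the nearest note, per the Host behavior quoted above."""
    return zero_volt_note + round(12 * cv_volts)

class GateToMidi:
    """Trigger a note when the gate rises; hold it until the gate
    falls back to 0V."""
    def __init__(self):
        self.note = None

    def step(self, gate_volts, cv_volts, send):
        if self.note is None and gate_volts > 0.0:
            self.note = volts_to_midi(cv_volts)
            send('note_on', self.note)
        elif self.note is not None and gate_volts <= 0.0:
            send('note_off', self.note)
            self.note = None
```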

Host-FX handles effects, pedals, and processors. Input stereo audio (or mono mapped to stereo), get stereo output. It doesn’t sound like multichannel plug-ins are supported yet.

Both Host and Host-FX let you choose plug-in parameters and map them to CV – just be careful mapping fast modulation signals, as plug-ins aren’t normally built for audio-rate modulation. (We’ll have to play with this and report back on some approaches.)

Will I need a fast computer? Not for MIDI integration, no. But I find the happiness level of VCV Rack – like a lot of recent synth and modular efforts – is directly proportional to people having fast CPUs. (The Windows platform has some affordable options there if Apple is too rich for your blood.)

What platforms? Mac and Windows, it seems. VCV also supports Linux, but there your best bet is probably to add the optional installation of JACK, and … this is really the subject for a different article.

How to record your work

I actually was just pondering this. I’ve been using ReaRoute with Reaper to record VCV Rack on Windows, which for me was the most stable option. But it also makes sense to have a recorder inside the modular environment.

Our friend Chaircrusher recommends the NYSTHI modules for VCV Rack. It’s a huge collection but there’s both a 2-channel and 4-/8-track recorder in there, among many others – see pic:

NYSTHI modules for VCV Rack (free):
https://vcvrack.com/plugins.html#nysthi
https://github.com/nysthi/nysthi/blob/master/README.md

And have fun with the latest Rack updates.

Just remember when adding Host, plug-ins inside a host can cause… stability issues.

But it’s definitely a good excuse to crack open VCV Rack again! And also nice to have this when traveling… a modular studio in your hotel room, without needing a carry-on allowance. Or hide from your family over the holiday and make modular patches. Whatever.

https://vcvrack.com/Host.html

