Reason 10.3 will improve VST performance – here’s how

VST brings more choice to Reason, but more support demands, too. Here’s an update on how Propellerhead are optimizing Reason to bring plug-in performance in line with what users expect.

For years, Reason was a walled garden. Propellerhead resisted supporting third-party plug-ins, and when they did open up, it was via their own native Rack Extensions technology. That format enables more integrated workflows, better user experience, greater stability, and easier installation and updates than a format like VST or AU allows.

But hey, we have a lot of VSTs we want to run inside Reason, engineering arguments be damned. And so Propellerhead finally listened to users, delivering support for VST effects and instruments on Mac and Windows in Reason 9.5. (Currently only VST2 plug-ins are supported, not VST3.)

Propellerhead have been working on improving stability and performance continuously since then. Reason 10.3 is a much-anticipated update, because it addresses a significant performance issue with VST plug-ins – without disrupting one of the things that makes Reason’s native devices work well.

The bad news is, 10.3 is delayed.

The good news is, it works really well. It puts Reason on par with other DAWs as far as VST performance goes. That’s a big deal to Reason users, precisely because in so many other ways Reason is unlike other DAWs.

I met with Propellerhead engineers yesterday in Stockholm, including Mattias Häggström Gerdt (product manager for Reason). We discussed the issue and the whole development effort, and got hands-on with their alpha version.

Why this took a while

Okay, first, some technical discussion. “Real time” is actually not a thing in digital hardware and software. The illusion of a system working in real time is created by buffering – using very small windows of time to pass audio information, so small that the results seem instantaneous to the user.

There’s a buffer size you set for your audio interface – this one you may already know about. But software also has internal buffers for processing, hidden from the user. In a modular environment, you really want this buffer to be as small as possible, so that patching and processing feels responsive – just as it would if you were using analog hardware. Reason accordingly has an internal buffer of 64 frames to do just that. That means without any interruptions to your audio stream, you can patch and repatch and tweak and play to your heart’s content.

Here’s the catch: some plug-in developers prefer larger buffers (higher latency) by design, in order to reduce CPU consumption – even though their plug-ins technically work in Reason’s small-buffer environment. This is common in plug-ins where ultra-low latency internal processing isn’t as important. But running inside Reason, that approach adds strain to your CPU. Some users won’t notice anything, because they don’t use these plug-ins, or use fewer instances of them. But some will see their machine run out of CPU resources faster in Reason than in other DAWs. The result: the same plug-in setup you used in another DAW will make Reason sputter, which is of course not what you want.

Another catch: adjusting the audio buffer size on your interface – the usual trick for reducing CPU usage – won’t help in this case. So users encountering this issue are left frustrated.

This is a fixable problem. You give those plug-ins larger buffers when they demand them, while Reason and its devices continue to work as they always have. It’s just that there’s a lot of work in going back through the rest of Reason’s code to adjust for the change. And like a lot of coding work, that takes time. Adding more people doesn’t necessarily speed this up, either. (Ever tried adding more people to a kitchen to “speed up” cooking dinner? Like that.)
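To make that concrete, here’s a rough sketch of the general technique – batching the host’s small fixed-size blocks into the larger buffers a plug-in prefers. This is purely illustrative Python, not Propellerhead’s code, and the block sizes are just examples; the point is that the plug-in gets called once per big block, at the cost of added latency the host then has to account for everywhere else.

```python
import numpy as np

class LargeBlockAdapter:
    """Feed a plug-in that wants big buffers from a host running small ones.

    Illustrative only: a real host does this in C++ with ring buffers and
    reports the added samples as plug-in delay compensation.
    """

    def __init__(self, plugin_process, host_block=64, plugin_block=512):
        assert plugin_block % host_block == 0
        self.process = plugin_process        # callable: ndarray -> ndarray
        self.host_block = host_block
        self.plugin_block = plugin_block
        self.in_fifo = np.zeros(0)
        # Pre-fill the output FIFO with silence equal to the extra latency,
        # so the host always gets a full small block back.
        self.out_fifo = np.zeros(plugin_block - host_block)

    def process_host_block(self, block):
        assert len(block) == self.host_block
        self.in_fifo = np.concatenate([self.in_fifo, block])
        # Once a full large block has accumulated, run the plug-in in one batch.
        if len(self.in_fifo) >= self.plugin_block:
            chunk = self.in_fifo[:self.plugin_block]
            self.in_fifo = self.in_fifo[self.plugin_block:]
            self.out_fifo = np.concatenate([self.out_fifo, self.process(chunk)])
        # Hand back exactly one small block, delayed by (plugin_block - host_block).
        out = self.out_fifo[:self.host_block]
        self.out_fifo = self.out_fifo[self.host_block:]
        return out
```

In this sketch the added latency is plugin_block minus host_block frames (448 here), and it’s exactly that kind of bookkeeping – buffers, delays, signal graphs – that has to be threaded through the rest of the host, which is where the development time goes.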

When it’s done, existing Reason users won’t notice anything. But users of the affected plug-ins will see big performance gains.

What to expect when it ships

I sat with the engineers looking at an alpha and we measured CPU usage. The results by plug-in are what you might expect.

We worked with three plug-ins by way of example – charts are here. With iZotope Ozone 7, there’s a massive gain in the new build. That makes sense – a mastering plug-in isn’t so concerned about low-latency performance. With Xfer Records Serum, there’s almost no difference. Native Instruments’ Massive is somewhere in between. These are just typical examples – many other plug-ins will fall somewhere along this range.

Native Instruments’ Massive gets a marginal but measurable performance boost. Left: before. Right: after.

iZotope’s Ozone is a more dramatic example. Stack some instances of this mastering-focused plug-in, and you can max out the CPU quickly in the current version of Reason (left). But in the Reason 10.3 alpha, you can see the “big batch” approach resolves that performance issue (right).

Those graphs were made on the Mac, but the OS won’t really matter in this case.

The fix is coming to the public. The alpha is not something you want to run; it’s already in the hands of testers who don’t mind working with prerelease software. A public beta won’t happen in the couple of weeks we have left in 2018, but it is coming soon – as soon as it’s done. And of course 10.3 will be a free upgrade for Reason 10 users.

When it ships, Reason 10.3 will give you performance on par with other DAWs. That is, your performance will depend on your CPU and which plug-ins you’re using, but Reason will be more or less the same as other hosts beyond that.

So this isn’t really exciting stuff, but it will make your life easier. We’ll let you know how it develops and try to test the final version.

Official announcement:

Update on Reason and VST performance

For more on Reason and VST support, see their support section:

Propellerhead Software: Rack Extensions, ReFills and VSTs


Cherry Audio Voltage Modular: a full synth platform, open to developers

Hey, hardware modular – the computer is back. Cherry Audio’s Voltage Modular is another software modular platform. Its angle: be better for users — and now, easier and more open to developers, with a new free tool.

Voltage Modular was shown at the beginning of the year, but its official release came in September – and now is when it’s really hitting its stride. Cherry Audio’s take certainly isn’t alone; see also, in particular, Softube Modular, the open source VCV Rack, and Reason’s Rack Extensions. Each of these supports live patching of audio and control signal, hardware-style interfaces, and has rich third-party support for modules with a store for add-ons. But they’re all also finding their own particular take on the category. That means now is suddenly a really nice time for people interested in modular on computers, whether for the computer’s flexibility, as a supplement to hardware modular, or even just because physical modular is bulky and/or out of budget.

So, what’s special about Voltage Modular?

Easy patching. Audio and control signals can be freely mixed, and there’s even a six-way pop-up multi on every jack, so each jack has tons of routing options. (This is a computer, after all.)

Each jack can pop up to reveal a multi.

It’s polyphonic. This one’s huge – you get true polyphony via patch cables and poly-equipped modules. Again, you know, like a computer.

It’s open to development. There’s now a free Module Designer app (commercial licenses available), and it’s impressively easy to code for. You write DSP in Java, and Cherry Audio say they’ve made it easy to port existing code. The app also looks like it reduces a lot of friction in this regard.

There’s an online store for modules – and already some strong early contenders. You can buy modules, bundles, and presets right inside the app. The mighty PSP Audioware, as well as Vult (who make some of my favorite VCV stuff) are already available in the store.

There’s an online store for free and paid add-ons – modules and presets. But right now, a hundred bucks gets you started with a bunch of stuff right out of the gate.

Voltage Modular is a VST/AU/AAX plug-in and runs standalone. It supports 64-bit double-precision math with zero-latency module processing – but, impressively, in our tests it isn’t as hard on your CPU as some of its rivals.

Right now, Voltage Modular Core + Electro Drums are on sale for just US$99.

Real knobs and patch cords are fun, but … let’s be honest, this is a hell of a lot of fun, too.

For developers

So what about that development side, if that interests you? Well, Apple-style, there’s a 70/30 split in developers’ favor. And it looks really easy to develop on their platform:

Java may be something of a bad word to developers these days, but I talked to Cherry Audio about why they chose it, and it definitely makes some sense here. Apart from being a reasonably friendly language, and having unparalleled support (particularly on the Internet connectivity side), Java solves some of the pitfalls that might make a modular environment full of third-party code unstable. You don’t have to worry about memory management, for one. I can also imagine some wackier, creative applications using Java libraries. (Want to code a MetaSynth-style image-to-sound module, and even pull those images from online APIs? Java makes it easy.)

Just don’t think of “Java” as in legacy Java applications. Here, DSP code runs on the HotSpot virtual machine, so your DSP is actually running as machine code by the time it’s in an end user’s patch. It seems Cherry have also thought through the GUI: the UI is coded natively in C++, while you can create custom graphics like oscilloscopes (again, using just Java on your side). This is similar to the models chosen by VCV and Propellerhead for their own environments, and it suggests a direction for plug-ins that involves far less extra work and greater portability. It’s no stretch to imagine experienced developers porting for multiple modular platforms reasonably easily. Vult of course is already in that category … and their stuff is so good I might almost buy it twice.

Or to put that in fewer words: the VM can match or even best native environments, while saving developers time and trouble.

Cherry also tell us that iOS, Linux, and Android could theoretically be supported in the future using their architecture.

Of course, the big question here is installed user base and whether it’ll justify effort by developers, but at least by reducing friction and work and getting things rolling fairly aggressively, Cherry Audio have a shot at bypassing the chicken-and-egg dangers of trying to launch your own module store. Plus, while this may sound counterintuitive, I actually think that having multiple players in the market may call more attention to the idea of computers as modular tools. And since porting between platforms isn’t so hard (in comparison to VST and AU plug-in architectures), some interested developers may jump on board.

Well, that and there’s the simple matter that in music, us synth nerds love to toy around with this stuff both as end users and as developers. It’s fun and stuff. On that note:

Modulars gone soft

Stay tuned; I’ve got this for testing and will let you know how it goes.

https://cherryaudio.com/voltage-modular

https://cherryaudio.com/voltage-module-designer


FL Studio 20.1 arrives, studio-er, loop-ier, better

The just-before-the-holiday-break software updates just keep coming. Next: the evergreen, lifetime-free-updates latest release of the DAW the developer calls FL Studio, and everyone else calls “Fruity Loops.”

FL Studio has given people reason to take it more seriously of late, too. There’s a real native Mac version, so FL is no longer a PC-vs-Mac thing. There’s integrated controller hardware from Akai (the new Fire), and that in turn exploits all those quick-access record and step sequence features that made people love FL in the first place.

AKAI Fire and the Mac version might make lapsed or new users interested anew – but hardcore users, this software release is really for you.

The snapshot view:

Does your DAW have a visualizer built on a game engine inside it? No? FL does. And you thought you were going to just have to make your next music video be a bunch of shaky iPhone footage you ran through some weird black and white filter. No!

Stepsequencer looping is back (previously seen in FL 11), but now has more per-channel controls so you can make polyrhythms – or not, lining everything up instead if you’d rather.

Plus if you’re using FIRE hardware, you get options to set channel loop length and the ability to burn to Patterns.

Audio recording is improved, making it easier to arm and record and get audio and pre/post effects where you want.

And there are 55 new minimal kick drum samples.

And now you can display the GUI FPS.

And you have a great way of making music videos by exporting from the included video game engine visualizer.

Actually, you know, I’m just going to stop – there’s just a whole bunch of new stuff, and you get it for free. And they’ve made a YouTube video. And as you watch the tutorial, it’s evident that FL really has matured into a serious DAW to stand toe-to-toe with everything else, without losing its personality.

https://www.image-line.com/flstudio/

20.1 update


Pigments is a new hybrid synth from Arturia, and you can try it free now

Arturia made their name emulating classic synths, and then made their name again in hardware synths and handy hardware accessories. But they’re back with an original synthesizer in software. It’s called Pigments, and it mixes vintage and new together. You know, like colors.

The funny thing is, wavetable synthesis as an idea is as old as or older than a lot of the vintage synths that spring to mind – you can trace it back to the 1970s and Wolfgang Palm, before instruments from PPG and Waldorf.

But “new” is about sound, not history. Powerful morphing wavetable engines with this much voice complexity and modulation only became practical recently – plus we now have computer displays for visualizing what’s going on.

Pigments brings together the full range of possible colors to work with – vintage to modern, analog to advanced digital. And it does so in a way that feels coherent and focused.

I’ve just started playing around with Pigments – expect a real hands-on shortly – and it’s impressive. You get the edgier sounds of wavetable synthesis with all the sonic language you expect from virtual analog, including all those classic and dirty and grimy sounds. (I can continue my ongoing mission to make everyone think I’m using analog hardware when I’m in the box. Fun.)

Arturia’s marketing copy here is clever – like I wish I’d thought of this phrase: “Pigments can sound like other synths, [but] no other synth can sound like Pigments.”

Okay, so what’s under the hood that makes them claim that?

Two engines: one wavetable, one virtual analog, each representing Arturia’s latest work. The waveshaping side gives you lots of options for sculpting the oscillator and fluidly controlling the amount of aliasing, which determines so much of the sound’s harmonic character.

Advanced pitch modulation which you can quantize to scale – so you can make complex modulations melodic.

From the modeling Arturia has been doing and their V Collection, you get the full range of filters, classic and modern (surgeon and comb). There’s also a bunch of effects, like wavefolder, overdrive, parametric EQ, and delay.

There’s also extensive routing for all those toys – drag and drop effects into inserts or sends, choose series or parallel routings, and so on.

The effects section is as deep as modulation, but somehow everything is neatly organized, visual, and never overwhelming.

You can modulate anything with anything, Arturia says – which sounds about right. And for modulation, you have tons of choices in envelopes, modulation shapes, and even function generators and randomization sources. But all of this is also graphical and neatly organized, so you don’t get lost. Best of all, there are “heads-up” graphical displays that show you what’s happening under the hood of even the most complex patch.

The polyphonic sequencer alone is huge, meaning you could work entirely inside Pigments.

Color-coded and tabbed, the UI constantly gives you subtle visual feedback on what the modulation, oscillators, and processors are doing at any given time – useful both in building up sounds from scratch and in picking apart the extensive presets available. You can build something step by step if you like, with a sense that inside this semi-modular world, you’re free to focus on one thing at a time while still doing something more multi-layered.

Then on top of all of that, it’s not an exaggeration to say that Pigments is really a synth combined with a sequencer. The polyphonic sequencer/arpeggiator is full of trigger options and settings that mean it’s totally possible to fire up Pigments in standalone mode and make a whole piece, just as you would with a full synth workstation or modular rig.

Instead of a short trial, you get a full month to enjoy this – the trial is free for everyone and doesn’t expire until January 10th. So now you know what to do with any holiday break. During that time, the intro price is $149 / 149€, rising to 199 after that.

I’m having a great deal of fun with it already. And we’re clearly looking at a new generation of advanced soft synths. Stay tuned.

Product page:

https://www.arturia.com/products/analog-classics/pigments/media


Bitwig Studio 2.5 beta arrives with features inspired by the community

We’re coasting to the end of 2018, but Bitwig has managed to squeeze in Studio 2.5, with features the company says were inspired by or directly requested by users.

The most interesting of these adds some interactive arrangement features to the linear side of the DAW. Traditional DAWs like Cubase have offered interactive features, but they generally take place on the timeline. Or you can loop individual regions in most DAWs, but that’s it.

Bitwig are adding interactive actions to the clips themselves, right in the arrangement. “Clip Blocks” apply Next Action features to individual clips.

Also in this release:

“Audio Slide” lets you slide audio inside clips without leaving the arranger. That’s possible in many other DAWs, but it’s definitely a welcome addition in Bitwig Studio – especially because an audio clip can contain multiple audio events, which isn’t necessarily possible elsewhere.

Note FX Selector lets you sweep through multiple layers of MIDI effects. We’ve seen something like this before, too, but this implementation is really nice.

There’s also a new set of 60 Sampler presets with hundreds of full-frequency waveforms – looks great for building up instruments. (This makes me ready to boot into Linux with Bitwig, too, where I don’t necessarily have my full plug-in library at my disposal.)

Other improvements:

  • Browser results by relevance
  • Faster plug-in scanning
  • 50 more functions accessible as user-definable key commands

To me, the thing that makes this newsworthy, and the one to test, is really this notion of an interactive arrangement view.

Ableton pioneered Follow Actions in Live’s Session View years back, but they’ve failed to apply that concept even to scenes inside Session View. (Some Max for Live hacks fill in the gap, but that only proves that people are looking for this feature.)

Making the arrangement itself interactive at the clip level – that’s really something new.

Now, that said, let’s play with Clip Blocks in Bitwig 2.5 and see if this is helpful or just confusing or superfluous in arrangements. (Presumably you can toy with different arrangement possibilities and then bounce out whatever you’ve chosen? I have to test this myself.) And there’s also the question of whether this much interactivity actually just has you messing around instead of making decisions, but that’s another story.

Go check out the release, and if you’re a Bitwig user, you can immediately try out the beta. Let us know what you think and how those Clip Blocks impact your creative process. (Or share what you make!)

Just please – no EDM tabla. (I think that moment sent a chill of terror down my spine in the demo video.)

https://www.bitwig.com/en/18/bitwig-studio-2_5.html


Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws

Musicians don’t just endure technology when it breaks. They embrace the broken. So it’s fitting that Holly Herndon’s team have produced a demonic spawn of machine learning algorithms – and that the results are wonderful.

The new music video for the Holly Herndon + Jlin collaboration has been making the rounds online, so you may have seen it already.


But let’s talk about what’s going on here. Holly is continuing a long-running collaboration with producer Jlin, here joined by technologist Mat Dryhurst and coder Jules LaPlace. (The music video itself is directed by Daniel Costa Neves with software developer Leif Ryge, employing still more machine learning techniques to merge the two artists’ faces.)

Machine learning processes are being explored across different media in parallel – text, images, sound, voice, and music. But the results can be all over the place. And ultimately, humans are the last stage. We judge the results of the algorithms, project our own desires and fears onto what they produce, and imagine anthropomorphic intents and characteristics.

Sometimes errors like over-fitting then take on a personality all their own – even as mathematically sophisticated results fail to inspire.

But that’s not to say these reactions aren’t just as real. And part of what may make the video “Godmother” compelling is not just the buzzword of AI, but the fact that it genuinely sounds different.

The software ‘Spawn,’ developed by Ryge working with the team, is a machine learning-powered encoder. Herndon and company have anthropomorphized that code in their description, but that itself is also fair – not least because the track is composed in such a way as to suggest a distinct vocalist.

I love Holly’s poetic description below, but I think it’s also important to be precise about what we’re hearing. That is, we can talk about the evocative qualities of an oboe, but we should definitely still call an oboe an oboe.

So in this case, I confirmed with Dryhurst exactly what I was hearing. The analysis stage employs neural network style transfer – some links on that below, though LaPlace and the artists here did make their own special code brew. And then they merged that with a unique vocoder – the high-quality WORLD vocoder. That is, they feed a bunch of sounds into the encoder, and get some really wild results.
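If you want to poke at the vocoder half of that chain yourself, the WORLD wrapper linked at the end of this post (pyworld) makes it easy. Here’s a minimal sketch – assuming a mono file called voice.wav and the soundfile library for I/O – that just does an analysis/resynthesis round-trip with a crude pitch shift; the team’s neural style-transfer stage is custom code and isn’t reproduced here.

```python
import numpy as np
import soundfile as sf   # assumed here just for reading/writing audio
import pyworld as pw     # the WORLD vocoder wrapper linked below

# Load audio and fold to mono; WORLD expects a 1-D float64 signal.
x, fs = sf.read("voice.wav")          # "voice.wav" is a hypothetical input
if x.ndim > 1:
    x = x.mean(axis=1)
x = np.ascontiguousarray(x, dtype=np.float64)

# Analysis: fundamental frequency, spectral envelope, aperiodicity.
f0, sp, ap = pw.wav2world(x, fs)

# A crude "reimagining": transpose the pitch contour, leave the timbre alone.
f0_shifted = f0 * 1.5

# Resynthesis from the (modified) parameters.
y = pw.synthesize(f0_shifted, sp, ap, fs)
sf.write("voice_reimagined.wav", y, fs)
```

Even a trivial round-trip like this has a hint of the ‘digital’ artifacts discussed below; feeding the vocoder parameters generated by a neural network, rather than parameters measured from a clean recording, pushes that much further.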

And all of that in turn makes heavy use of the unique qualities of Jlin’s voice, Holly’s own particular compositional approach and the arresting percussive take on these fragmented sounds, Mat’s technological sensibilities, LaPlace’s code, a whole lot of time spent on parameters and training and adaptation…

Forget automation in this instance. All of this involves more human input and more combined human effort than any conventionally produced track would.

Is it worth it? Well, aesthetically, you could make comparisons to artists like Autechre, but then you could do that with anything with mangled sample content in it. And on a literal level, the result is the equivalent of a mangled sample. The results retain recognizable spectral components of the original samples, and they add a whole bunch of sonic artifacts which sound (correctly, really) ‘digital’ and computer-based to our ears.

But it’s also worth noting that what you hear is particular to this vocoder technique, and especially to audio texture synthesis and neural network-based style transfer of sound. It’s a commentary on 2018 machine learning not just conceptually, but because what you hear sounds the way it does because of the state of that tech.

And that’s always been the spirit of music. The peculiar sound and behavior of a Theremin says a lot about how radios and circuits respond to a human presence. Vocoders have ultimately proven culturally significant for their aesthetic peculiarities even if their original intention was encoding speech. We respond to broken circuits and broken code on an emotional and cultural level, just as we do acoustic instruments.

In a blog post that’s now a couple of years old – ancient history in machine learning terms, perhaps – Dmitry Ulyanov and Vadim Lebedev acknowledged that some of the techniques they used for “audio texture synthesis and style transfer” borrowed a technique intended for something else. And they implied that the results didn’t quite work – that they had stylistic interest more than functional value.

Dmitry even calls this a partial failure: “I see a slow but consistent interest increase in music/audio by the community, for sure amazing things are just yet to come. I bet in 2017 already we will find a way to make WaveNet practical but my attempts failed so far :)”

Spoiler – that hasn’t really happened in 2017 or 2018. But “failure” to be practical isn’t necessarily a failure. The rising interest has been partly in producing strange results – again, recalling that the vocoder, Theremin, FM synthesis, and many other techniques evolved largely because musicians thought the sounds were cool.

But this also suggests that musicians may uniquely be able to cut through the hype around so-called AI techniques. And that’s important, because these techniques are assigned mystical powers, Wizard of Oz-style.

Big corporations can only hype machine learning when it seems to be magical. But musicians can hype up machine learning even when it breaks – and knowing how and when it breaks is more important than ever. Here’s Holly’s official statement on the release:

For the past two years, we have been building an ensemble in Berlin.

One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances.

Spawn can already do quite a few wonderful things. ‘Godmother’ was generated from her listening to the artworks of her godmother Jlin, and attempting to reimagine them in her mother’s voice.

This piece of music was generated from silence with no samples, edits, or overdubs, and trained with the guidance of Spawn’s godfather Jules LaPlace.

In nurturing collaboration with the enhanced capacities of Spawn, I am able to create music with my voice that far surpass the physical limitations of my body.

Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.

Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic, path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are, and dream up new ways of creating and organizing accordingly.

I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.

– Holly Herndon

Some interesting code:
https://github.com/DmitryUlyanov/neural-style-audio-tf

https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder

Go hear the music:

http://smarturl.it/Godmother

Previously, from the hacklab program I direct, talks and a performance lab with CTM Festival:

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

A look at AI’s strange and dystopian future for art, music, and society

I also wrote about machine learning:

Minds, machines, and centralization: AI and music


Save $229 with Music Maker Premium Edition 2019 deal at Magix

Magix Music Maker Premium 2019 sale

Magix has a limited time sale on Music Maker 2019 Premium Edition, the music software that comes with the same audio engine used by the Samplitude DAW. Music Maker gives you the room you need to bring your ideas to life. And now you can even customize Music Maker to fit your personal style. Simply […]


Magix launches Holiday Sale with 25% off Samplitude, Sound Forge, ACID & more

Magix Holiday Sale 2018

Magix has announced the launch of its 2018 Holiday Sale, offering a 25% discount on selected products for a limited time. The sale includes the newly released Samplitude Pro X4, Sound Forge Audio Studio 12, Acid Pro 8, Music Maker 2019, and lots more. To take advantage of this offer, use coupon code SAVE25MGX at […]


Samplitude Pro X4 recording, production, mixing & mastering DAW now available!

Samplitude Pro X4

Magix has announced the launch of Samplitude Pro X4, the latest version of its digital audio workstation for Windows. With professional effects, efficient editing functions, as well as a powerful audio engine, the new version continues to set new standards in the professional audio sector. Samplitude Pro X4 provides the highest level of quality for […]


What it’s like calibrating headphones and monitors with Sonarworks tools

No studio monitors or headphones are entirely flat. Sonarworks Reference calibrates any studio monitors or headphones with any source. Here’s an explanation of how that works and what the results are like – even if you’re not someone who’s considered calibration before.

CDM is partnering with Sonarworks this month to bring you some content on listening, including artist features, and I wanted to explore specifically what calibration might mean for the independent producer working at home, in studios, and on the go.

That means this isn’t a review and isn’t independent, but I would prefer to leave that to someone with more engineering background anyway. Sam Inglis wrote one for Sound on Sound at the start of this year, covering the latest version; Adam Kagan reviewed version 3 for Tape Op. (Pro Tools Expert also compared IK Multimedia’s ARC and chose Sonarworks for its UI and systemwide monitoring tools.)

With that out of the way, let’s actually explain what this is for people who might not be familiar with calibration software.

In a way, it’s funny that calibration isn’t part of most music and sound discussions. People working with photos and video and print all expect to calibrate color. Without calibration, no listening environment is really truly neutral and flat. You can adjust a studio to reduce how much it impacts the sound, and you can choose reasonably neutral headphones and studio monitors. But those elements nonetheless color the sound.

I came across Sonarworks Reference partly because a bunch of the engineers and producers I know were already using it – even my mastering engineer.

But as I introduced it to first-time calibration product users, I found they had a lot of questions.

How does calibration work?

First, let’s understand what calibration is. Even studio headphones will color sound – emphasizing certain frequencies, de-emphasizing others. That’s with the sound source right next to your head. Put studio monitors in a room – even a relatively well-treated studio – and you combine the coloration of the speakers themselves with the reflections and character of the environment around them.

The idea of calibration is to process the sound to cancel out those modifications. Headphones can use existing calibration data. For studio speakers, you take some measurements. You play a known test signal and record it inside the listening environment, then compare the recording to the original and compensate.
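At its core, that comparison is simple arithmetic: work out how much each frequency band was boosted or cut by the speakers and the room, then apply the opposite. Here’s a deliberately crude sketch of the idea in Python – my own simplification, not Sonarworks’ actual algorithm, which also has to deal with smoothing, phase, multiple measurement positions, and safety limits:

```python
import numpy as np

def correction_curve_db(reference, recorded, fs, n_bands=64):
    """Compare the spectrum of the original test signal with the recording
    of it made in the room, band by band, and return the gain in dB needed
    to cancel the difference. Assumes the recording has already been
    time-aligned and trimmed to the same length as the reference."""
    assert len(reference) == len(recorded)
    spec_ref = np.abs(np.fft.rfft(reference))
    spec_rec = np.abs(np.fft.rfft(recorded))
    freqs = np.fft.rfftfreq(len(reference), 1 / fs)

    # Log-spaced bands from 20 Hz to 20 kHz, like an analyzer display.
    edges = np.geomspace(20.0, 20000.0, n_bands + 1)
    correction = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        if not band.any():
            correction.append(0.0)
            continue
        ref_level = spec_ref[band].mean()
        rec_level = spec_rec[band].mean()
        # If the room/speakers boosted this band, cut it by the same amount.
        correction.append(20 * np.log10(ref_level / (rec_level + 1e-12)))
    return edges, np.array(correction)
```

Apply a curve like that as an EQ on everything you monitor through, and in principle the band-by-band coloring cancels out – which is, in broad strokes, what the Reference plug-in and Systemwide driver described below do, far more carefully.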

Hold up this mic, measure some whooping sounds, and you’re done calibrating. No expertise needed.

What can I calibrate?

One of the things that sets Sonarworks Reference apart is that it’s flexible enough to deal with both headphones and studio monitors, and works both as a plug-in and a convenient universal driver.

The Systemwide driver works on Mac and Windows with the final output. That means you can listen everywhere – I’ve listened to SoundCloud audio through Systemwide, for instance, which has been useful for checking how the streaming versions of my mixes sound. The driver supports Core Audio on the Mac and the latest WASAPI on Windows, which these days is perfectly usable and reliable on my Windows 10 machine. (There’s unfortunately no Linux support, though maybe some enterprising user could get the Windows VST working.)

On the Mac, you select the calibrated output via a pop-up on the menu bar. On Windows, you switch to it just like you would any other audio interface. Once selected, everything you listen to in iTunes, Rekordbox, your Web browser, and anywhere else will be calibrated.

That works for everyday listening, but in production you often want your DAW to control the audio output. (Choosing the plug-in is essential on Windows for use with ASIO; Systemwide doesn’t yet support ASIO though Sonarworks says that’s coming.) In this case, you just add a plug-in to the master bus and the output will be calibrated. You just have to remember to switch it off when you bounce or export audio, since that output is calibrated for your setup, not anyone else’s.

Three pieces of software and a microphone: Sonarworks Reference comprises a measurement tool, a plug-in and a systemwide tool for outputting calibrated sound from any source, plus a microphone for measuring.

Do I need a special microphone?

If you’re just calibrating your headphones, you don’t need to do any measurement. But for any other monitoring environment, you’ll need to take a few minutes to record a profile. And so you need a microphone for the job.

Calibrating your headphones is as simple as choosing the make and model number for most popular models.

Part of the convenience of the Sonarworks package is that it includes a ready-to-use measurement mic, and the software is already pre-configured to work with the calibration. These mics are omnidirectional – since the whole point is to pick up a complete image of the sound. And they’re meant to be especially neutral.

Sonarworks’ software is pre-calibrated for use with their included microphone.

Any microphone whose vendor provides a calibration profile – in standard text form – can also be used with the software in fully calibrated mode. If you have some cheap musician-friendly omni mic, though, its maker usually doesn’t provide anything of the sort in the way a calibration mic maker would.

I think it’s easier to just use these mics, but I don’t have a big mic cabinet. Production Expert did a test of generic omni mics – mics that aren’t specifically for calibration – and got results that approximate the results of the test mic. In short, they’re good enough if you want to try this out, though Production Expert were being pretty specific with which omni mics they tested, and then you don’t get the same level of integration with the calibration software.

Once you’ve got the mics, you can test different environments – so your untreated home studio and a treated studio, for instance. And you wind up with what might be a useful mic in other situations – I’ve been playing with mine to sample reverb environments, like playing and re-recording sound in a tile bathroom, for instance.

What’s the calibration process like?

Let’s actually walk through what happens.

With headphones, this job is easy. You select your pair of headphones – all the major models are covered – and then you’re done. So when I switch from my Sony to my Beyerdynamic, for instance, I can smooth out some of the irregularities of each of those. That’s made it easier to mix on the road.

For monitors, you run the Reference 4 Measure tool. Beginners I showed the software got slightly discouraged when they saw the measurement would take 20 minutes but – relax. It’s weirdly kind of fun and actually once you’ve done it once, it’ll probably take you half that to do it again.

The whole thing feels a bit like a Nintendo Wii game. You start by making a longer measurement at the point where your head would normally be sitting. Then you move around to different targets as the software makes whooping sounds through the speakers. Once you’ve covered the full area, you will have dotted a screen with measurements. Then you’ve got a customized measurement for your studio.

Here’s what it looks like in pictures:

Simulate your head! The Measure tool walks you through exactly how to do this with friendly illustrations. It’s easier than putting together IKEA furniture.

You’ll also measure the speakers themselves.

Eventually, you measure the main listening spot in your studio. (And you can see why this might be helpful in studio setup, too.)

Next, you move the mic to each measurement location. There’s interactive visual feedback showing you as you get it in the right position.

Hold the mic steady, and listen as a whooping sound comes out of your speakers and each measurement is completed.

You’ll make your way through a series of these measurements until you’ve dotted the whole screen – a bit like the fingerprint calibration on smartphones.

Oh yeah, so my studio monitors aren’t so flat. When you’re done, you’ll see a curve that shows you the irregularities introduced by both your monitors and your room.

Now you’re ready to listen to a cleaner, clearer, more neutral sound – switch your new calibration on, and if all goes to plan, you’ll get much more neutral sound for listening!

There are other useful features packed into the software, like the ability to apply the curve used by the motion picture industry. (I loved this one – it was like, oh, yeah, that sound!)

It’s also worth noting that Sonarworks have created different calibration modes: one made for real-time use (great for tracking and improv) and one for accuracy (great for mixing).

Is all of this useful?

Okay, disclosure statement is at the top, but … my reaction was genuinely holy s***. I thought there would be some subtle impact on the sound. This was more like the feeling – well, as an eyeglass wearer, when my glasses are filthy and I clean them and I can actually see again. Suddenly details of the mix were audible again, and moving between different headphones and listening environments was no longer jarring – like that.

Double blind A/B tests are really important when evaluating the accuracy of these things, but I can at least say, this was a big impact, not a small one. (That is, you’d want to do double blind tests when tasting wine, but this was still more like the difference between wine and beer.)

How you might actually use this: once they adapt to the calibrated results, most people leave the calibrated version on and work from a more neutral environment. Cheap monitors and headphones work a little more like expensive ones; expensive ones work more as intended.

There are other use cases, too, however. Previously I didn’t feel comfortable taking mixes and working on them on the road, because the headphone results were just too different from the studio ones. With calibration, it’s far easier to move back and forth. (And you can always double-check with the calibration switched off, of course.)

The other advantage of Sonarworks’ software is that it does give you so much feedback as you measure from different locations, and that it produces detailed reports. This means if you’re making some changes to a studio setup and moving things around, it’s valuable not just in adapting to the results but giving you some measurements as you work. (It’s not a measurement suite per se, but you can make it double as one.)

Calibrated listening is very likely the future even for consumers. As computation has gotten cheaper, and as software analysis has gotten smarter, it makes sense that these sort of calibration routines will be applied to giving consumers more reliable sound and in adapting to immersive and 3D listening. For now, they’re great for us as creative people, and it’s nice for us to have them in our working process and not only in the hands of other engineers.

If you’ve got any questions about how this process works as an end user, or other questions for the developers, let us know.

And if you’ve found uses for calibration, we’d love to hear from you.

Sonarworks Reference is available with a free trial:

https://www.sonarworks.com/reference

