Your questions answered: Sonarworks Reference calibration tools

If getting your headphones and studio monitors calibrated sounds like a good New Year's resolution, we've got you covered. Some good questions came up in our last story on Sonarworks Reference, the automated calibration tool, so we've gotten answers for you.

First, if you’re just joining us, Sonarworks Reference is a tool for automatically calibrating your studio listening environment and headphones so that the sound you hear is as uncolored as possible – more consistent with the source material. Here’s our previous write-up, produced in cooperation with Sonarworks:

What it’s like calibrating headphones and monitors with Sonarworks tools

CDM is partnering with Sonarworks to help users better understand how to use the tool to their benefit. And so that means in part answering some questions with Sonarworks engineers. If you’re interested in the product, there’s also a special bundle discount on now: you get the True-Fi mobile app for calibration on your mobile device, free with a Sonarworks Studio Edition purchase (usually US$79):

https://try.sonarworks.com/christmasspecial/

Readers have been sending in questions, so I’ll answer as many as I can as accurately as possible.

Does it work?

Oh yeah, this one is easy. I found it instantly easier to mix both on headphones and sitting in the studio, in that you hear far more consistency from one listening environment / device to another, and in that you get a clearer sense of the mix. It feels a little like cleaning my eyeglasses: you're removing stuff that's in the way. That's my own personal experience, anyway; I linked some full reviews and comparisons with other products in the original story. But my sense in general is that automated calibration has become a fact of life for production and live situations. It doesn't eliminate the role of human experts, not by a long shot – but then color calibration in graphics didn't get rid of the need for designers and people who know how to operate the printing press, either. It's just a tool.

Does it work when outside of the sweet spot in the studio?

This is a harder question, actually, but anecdotally, yes – I still left it on. You're calibrating for the sweet spot in your studio, so from a calibration perspective you do want to sit in that location when monitoring – just as you always would. But since a lot of what Sonarworks Reference is doing concerns frequency response as much as space, I found it was still useful to leave the calibration on even when wandering around my studio. It's not as though the calibration suddenly stops working when you move around. You only really get in trouble if you have the wrong calibration profile selected, or if you make the mistake of bouncing audio with it left on (oops) – but that's of course exactly what you'd expect.

What about Linux support?

Linux is officially unsupported, but you can easily calibrate on Windows (or Mac) and then use the calibration profile on Linux. It’s a 64-bit Linux-native VST, in beta form.

If you run the plug-in in the handy plug-in host Carla, you can calibrate any source you like (via JACK). So this is really great – it means you can have calibrated results while working with SuperCollider or Bitwig Studio on Linux, for example.

This is beta-only, so I'm really keen to hear results. Do let us know, as I suspect that if a bunch of CDM readers start trying the Linux build, there will be added incentive for Sonarworks to expand Linux support. And we have seen some commercial vendors from the Mac/Windows side (Pianoteq, Bitwig, Renoise, etc.) start toying with supporting the OS.

If you want to try this out, go check the Facebook group:
https://www.facebook.com/groups/1751390588461118/

(Direct compiled VST download link is available here, though that may change later.)

What’s up with latency?

You get a choice of either more accuracy and higher latency, or lower accuracy and lower latency. So if you need real-time responsiveness, you can prioritize low latency performance – and in that mode, you basically won’t notice the plug-in is on at all in my experience. Or if you aren’t working live / tracking live, and don’t mind adding latency, you can prioritize accuracy.

Sonarworks clarifies for us:

Reference 4 line-up has two different *filter* modes – zero latency and linear phase. Zero latency filter adds, like the name states, zero latency, whereas linear phase mode really depends on sample-rate but typically adds about 20ms of latency. These numbers hold true in plugin form. Systemwide, however, has the variable of driver introduced latency which is set on top of the filter latency (zero for Zero latency and approx 20ms for linear phase mode) so the numbers for actual Systemwide latency can vary depending on CPU load, hardware specs etc. Sometimes on MacOS, latency can get up to very high numbers which we are investigating at the moment.
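As a back-of-the-envelope check on that ~20ms figure: a linear-phase FIR correction filter delays the signal by half its length, i.e. (N − 1)/2 samples of group delay. This sketch is not Sonarworks' actual filter – the tap count below is a made-up illustration – but it shows how filter length and sample rate translate into milliseconds of latency:

```python
# Group delay of a linear-phase FIR filter is (N - 1) / 2 samples.
# The tap count here is an illustrative guess, not Sonarworks' real number.

def linear_phase_latency_ms(num_taps: int, sample_rate: int) -> float:
    """Latency in milliseconds introduced by an N-tap linear-phase FIR."""
    group_delay_samples = (num_taps - 1) / 2
    return 1000.0 * group_delay_samples / sample_rate

# At 44.1 kHz, a filter of roughly 1765 taps adds about 20 ms:
print(round(linear_phase_latency_ms(1765, 44100), 1))  # 20.0
```

A minimum-phase ("zero latency") filter avoids this delay entirely, at the cost of the phase linearity – which is exactly the trade-off the two modes expose.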

What about loudness? Will this work in post production, for instance?

Some of you are obviously concerned about loudness as you work on projects where that’s important. Here’s an explanation from Sonarworks:

So what we do in terms of loudness as a dynamic range character is – nothing. What we do apply is overall volume reduction to account for the highest peak in correction to avoid potential clipping of output signal. This being said, you can turn the feature off and have full 0dBFS volume coming out of our software, controlled by either physical or virtual volume control.
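To make that concrete: if the correction curve's biggest boost is, say, +4.5 dB, the whole output is pulled down by 4.5 dB so that boosted peak can't clip. Here's a minimal sketch of that headroom logic – the curve values are invented for illustration, and this is not Sonarworks' code:

```python
# Pre-gain to avoid clipping: attenuate the output by the largest boost
# in the correction curve. Curve values below are made-up examples.

def safety_gain_db(correction_curve_db):
    """Gain (in dB) to apply so the biggest boost can't push past 0 dBFS."""
    highest_boost = max(correction_curve_db)
    return -highest_boost if highest_boost > 0 else 0.0

curve = [-2.0, 1.5, 4.5, 0.0, -3.0]   # per-band correction in dB
print(safety_gain_db(curve))           # -4.5
```

Turning the feature off, as described above, simply skips that attenuation and lets the full 0 dBFS signal through.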

Which headphones are supported?

There’s a big range of headphones with calibration profiles included with Sonarworks Reference. Right now, I’ve got that folder open, and here’s what you get at the moment:

AIAIAI TMA-1

AKG K72, K77, K121, K141 MKII, K240, K240 MKII, K271 MKII, K550 MKII, K553 Pro, K612 Pro, K701, K702, K712 Pro, K812, Q701

Apple AirPods

Audeze LCD-2, LCD-X

Audio-Technica ATH-M20x, M30x, M40x, M50x, M70x, MSR7, R70x

Beats EP, Mixr, Pro, Solo2, Solo3 wireless, Studio (2nd generation), X Average

Beyerdynamic Custom One Pro, DT 150, DT 250 80 Ohm, DT 770 Pro (80 Ohm, 32 Ohm Pro, 80 Ohm Pro, 250 Ohm Pro), DT 990 Pro 250 Ohm, DT 1770 Pro, DT 1990 Pro (analytical + balanced), T 1

Blue Lola, Mo-Fi (On/On+)

Bose QuietComfort 25, 35, 35 II, SoundLink II

Bowers & Wilkins P7 Wireless

Extreme Isolation EX-25, EX-29

Focal Clear Professional, Clear, Listen Professional, Spirit Professional

Fostex TH900 mk2, TH-X00

Grado SR60e, SR80e

HiFiMan HE400i

HyperX Cloud II

JBL Everest Elite 700

Koss Porta Pro Classic

KRK KNS 6400, 8400

Marshall Major II, Monitor

Master & Dynamic MH40

Meze 99, 99 NEO

Oppo PM-3

Philips Fidelio X2HR, SHP9500

Phonon SMB-02

Pioneer HDJ-500

Plantronics BackBeat Pro 2

PreSonus HD 7

Samson SR850

Sennheiser HD 25 (70 Ohm, Light), HD-25-C II, HD 201, HD 202, HD 205, HD 206, HD 215-II, HD 280 Pro (incl. new facelift version), HD 380 Pro, HD 518, HD 598, HD 598 C, HD 600, HD 650, HD 660 S, HD 700, HD 800, HD 800 S, Momentum On-Ear Wireless, PX 100-II

Shure SE215, SRH440, SRH840, SRH940, SRH1440, SRH1540, SRH1840

Skullcandy Crusher (with and without battery), Hesh 2.0

Sony MDR-1A, MDR-1000X, MDR-7506, MDR-7520, MDR-CD900ST, MDR-V150, MDR-XB450, MDR-XB450AP, MDR-XB650BT, MDR-XB950AP, MDR-XB950BT, MDR-Z7, MDR-ZX110, MDR-ZX110AP, MDR-ZX310, MDR-ZX310AP, MDR-ZX770BN, WH-1000XM2

Status Audio CB-1

Superlux HD 668B, HD-330, HD681

Ultrasone Pro 580i, 780i, Signature Studio

V-Moda Crossfade II, M-100

Yamaha HPH-MT5, HPH-MT7, HPH-MT8, HPH-MT220

So there you have it – lots of favorites, and lots of … well, actually, some truly horrible consumer headphones in the mix, too. But I know lots of serious mixers like testing a mix on consumer cans. The advantage of doing that with calibration is presumably that you get to hear the limitations of different headphones while still hearing the reference version of the mix – not the one exaggerated by those particular headphones. That way, you get greater benefit from those additional tests. And you can make better use of random headphones you have around – even the fairly awful ones become usable.

Even after that long list, I’m sure there’s some stuff you want that’s missing. Sonarworks doesn’t yet support in-ear headphones for its calibration tools, so you can rule that out. For everything else, you can either request support or if you want to get really serious, opt for individual mail-in calibration in Latvia.

More:

https://www.sonarworks.com/reference

The post Your questions answered: Sonarworks Reference calibration tools appeared first on CDM Create Digital Music.

How MIDI Works on the Linnstrument, or “Sometimes 14 Bits is Enough”

Roger at his creation. Photo courtesy Roger Linn Designs.


“How can an instrument be truly expressive if it only supports MIDI?”

This seemed to be a frequently asked question in our coverage of the upcoming Roger Linn Linnstrument. While OSC certainly has its merits, it is possible to get higher-resolution data via MIDI. You're likely most familiar with MIDI's standard 0–127 values – 7-bit data, as used in simple Control Change messages. A 14-bit message, by contrast, gives you over 16,000 levels of resolution – more than enough for most tasks.

The way the Linnstrument works is to send that higher-resolution data for pitch via standard pitch bend messages. And if you really need that resolution for other messages, you can get it.
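For reference, here's what a 14-bit pitch bend message actually looks like on the wire: the value 0–16383 is split into two 7-bit data bytes, LSB first. A quick sketch using raw status bytes (no MIDI library assumed):

```python
# A MIDI pitch-bend message carries a 14-bit value (0-16383, center 8192)
# split across two 7-bit data bytes: LSB first, then MSB.

def pitch_bend_message(value: int, channel: int = 0) -> bytes:
    """Build the 3-byte pitch-bend message for a 14-bit value."""
    assert 0 <= value <= 16383 and 0 <= channel <= 15
    lsb = value & 0x7F          # low 7 bits
    msb = (value >> 7) & 0x7F   # high 7 bits
    return bytes([0xE0 | channel, lsb, msb])

print(pitch_bend_message(8192).hex())  # 'e00040' -- centered, no bend
```

So "14-bit MIDI" isn't an extension at all for pitch – it's how pitch bend has always worked.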

I’ll let Roger explain directly, since this was such a source of reader confusion:

Is 14-bit support possible? Or even desirable?

Roger: For X/pitch, it’s already there in MIDI Bend messages.
For Z/pressure, it currently uses 7-bit Poly Pressure or Channel Pressure messages, which seems to work fine. If someone wants more, it's easy to send an additional 7 bits of resolution in a CC.
For Y/timbre, 7 bits seems enough to report the vertical position within a 3/4” pad. But again, if someone wants more, it’s easy enough to send an additional 7 bits of resolution in a CC.
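That "additional 7 bits in a CC" maps neatly onto MIDI's paired-controller convention, where CCs 0–31 carry the coarse (MSB) value and the controller number + 32 carries the fine (LSB) value. A sketch of building such a pair – the controller number is arbitrary, and this is the general MIDI convention rather than anything Linnstrument-specific:

```python
# 14-bit continuous controllers pair a coarse CC (0-31) with a fine CC
# at controller number + 32, each carrying 7 bits of the value.

def cc14_messages(controller: int, value: int, channel: int = 0):
    """Two Control Change messages encoding one 14-bit value."""
    assert 0 <= controller <= 31 and 0 <= value <= 16383
    status = 0xB0 | channel
    coarse = bytes([status, controller, (value >> 7) & 0x7F])
    fine = bytes([status, controller + 32, value & 0x7F])
    return coarse, fine

coarse, fine = cc14_messages(1, 10000)   # CC1 (mod wheel), arbitrary value
print(coarse.hex(), fine.hex())
```

A receiver that only understands 7-bit data can simply ignore the fine CC and still track the coarse one.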

Is polyphonic aftertouch supported?

Roger: Yes, either in Note-Per-Channel mode or in Single Channel Poly Pressure mode.

In addition to note-per-channel, what other schemes are supported?

Roger: There are 3 MIDI modes:

1) Note Per Channel, in which each note and its X, Y and Z movements are sent on a separate MIDI channel. This works well with Logic Pro X’s MIDI Mono Mode, in which each voice receives Note On/off, Bend and Channel Pressure (and hopefully in future, a CC for Y-axis) on its own channel.

2) Single Channel / Poly Pressure, in which everything is sent on a single MIDI channel, but continuous X and Y messages are sent only from movements of the most-recent touch.

3) Single Channel / Channel Pressure, in which everything is sent on a single MIDI channel, but continuous X, Y and Z messages are sent only from movements of the most-recent touch.

Thanks. That clears things up. And, of course, it’s possible someone could write custom firmware to, say, support OSC for applications that can make it useful. I can also imagine a monome app – with velocity.
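If you're wondering how note-per-channel works under the hood, the core of it is just a channel allocator: each new touch claims a free MIDI channel, so its bend and pressure messages can't collide with other held notes. A toy sketch of that bookkeeping – not Linnstrument firmware, just the general idea:

```python
# Round-robin allocator for note-per-channel MIDI: each active note
# owns one channel so its bend/pressure messages stay independent.

class ChannelAllocator:
    def __init__(self, channels=range(1, 16)):  # keep channel 0 for globals
        self.free = list(channels)
        self.owner = {}   # note number -> channel

    def note_on(self, note: int) -> int:
        channel = self.free.pop(0)
        self.owner[note] = channel
        return channel

    def note_off(self, note: int) -> int:
        channel = self.owner.pop(note)
        self.free.append(channel)   # recycle at the back, round-robin style
        return channel

alloc = ChannelAllocator()
print(alloc.note_on(60), alloc.note_on(64))  # 1 2
```

Recycling channels at the back of the queue (rather than reusing the one just freed) gives a synth time to finish a release envelope before its channel gets new data – a common trick in note-per-channel implementations.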

Previously:

Roger Linn’s Linnstrument Could Finally Make Grids Expressive for Music [Hands On]

The post How MIDI Works on the Linnstrument, or “Sometimes 14 Bits is Enough” appeared first on Create Digital Music.

Reason 7′s New Tools for Slicing, Stretching, Retiming Audio: Q&A, Tutorial Vid


It slices! It dices! No, really – it does. Finally, you don’t have to leave Reason to prep samples and loops or re-time recorded sound.

Far beyond the simple sampling that first appeared in hardware, slicing, re-timing, and stretching audio keeps getting more sophisticated, manipulating recorded sound in musical ways. But a lot of the popularity of this technique traces back to Propellerhead and their ReCycle tool. By bringing together smart digital slicing with its REX file format for loops, ReCycle helped launch the looping craze in software.

REX support has been part of Reason since the start. But the way sound works in Reason has gradually evolved, particularly as Swedish developers Propellerhead made Reason less of a rack of synths and more of a full production environment. Bringing integrated recording, live sampling, and time stretching into the mix – literally – meant that you might go directly from a mic into an instrument.

And that brings us to Reason 7. If you want to do your own sampling work, you probably want the ability to have everything happen inside Reason rather than rely on an external tool like ReCycle. Propellerhead certainly kept you waiting for the chance to do that, but in typical form, they’ve also got their own way of going about it.

CDM talked to Propellerhead about what they’ve done and why they think it’s worth your attention. It’s a companion to the first conversation we had with them, about the addition of MIDI; see:
When Reason Met MIDI Out: How MIDI, Virtual CV Work in the New Reason 7 [Pictures, Details]

But if the last story got their answers on what Reason 7 could do for your favorite synth or drum machine, let’s put them in the hot seat on the question of what it does for your microphone.

Why add ReCycle integration now?

It just made sense! :-)

What we’ve done in Reason isn’t exactly ReCycle though. It’s a combination of our amazing time stretch now being used for audio slicing, too, and the added ability for Reason to create REX files. We haven’t actually integrated ReCycle and Reason. We have given Reason the ability to convert audio into REX loops, but that happens completely independent of ReCycle! It’s a feature available to all Reason 7 users, natively in Reason.

The focus for the implementation in Reason is to make it fast and easy so you can quickly get a recording or imported audio file from the sequencer into Dr. Octo Rex, the NN-XT sampler or Kong drum designer and continue working creatively with it in the rack. Think of ReCycle as an “editor” and Reason as a music-making program and you get a pretty good idea of how the implementations differ.

ReCycle still remains as a standalone tool, and for those who use programs other than Reason for their music-making, who create sample libraries, or who just want the control that ReCycle brings (like threshold for slice creation, stretch amount, and more), it's great for that. As an example, about half of the current ReCycle user base are not Reason users — they use ReCycle to create REX files for other programs, like Live or Logic, or Stylus RMX, or are sample library creators who need the added control that ReCycle gives.

For most of us, though, Reason’s REX file implementation will do everything we’ll ever need, without interrupting the workflow. The automatic slicing is really accurate, and twisting your recordings into something new is a lot of fun!

What’s different about it?

The REX file creation is one puzzle piece in the improved audio handling in Reason’s sequencer! Reason 7 now instantly analyzes any recorded or imported audio, finds the transients and adds slices to it. This opens up for quantizing, stretching the individual slices, or bouncing the audio to a REX file. The slice detection is very accurate, and the slice stretching uses the same time stretch algorithms as we already have for full tracks in Reason, which we know is of extremely high quality, so you can trust that your songs will still sound great. So really, when we’re talking about stretch, there’s both the classic ReCycle type stretch of increasing the space between the slices when you’re using REX files, and our modern time stretch that actually stretches the audio.

For the user, this means that recordings or imported audio files can be worked with in a number of new creative ways. You can change the timing, change the tempo, make the recording “better” (tighter) or groovier. And then when you’re happy with it, put it in a REX player in the rack to play the slices from a keyboard, rearrange the performance live and even put effects on the individual slices in Kong.

What sorts of workflows might people use with this integrated functionality, as far as sampling, slicing, recording in different contexts?

Resampling is of course a big concept in electronic music-making today. Being able to take your resampled sounds directly into a REX player in Reason opens up a ton of new possibilities.

Recording something, instantly have it sliced and ready to throw into a REX player means you can hammer out beats and work creatively with your recordings with just a few mouse clicks.

It really is about taking down the barriers between the rack and the sequencer in Reason, and open up for more creative possibilities with audio.

Tutorial: How to get going

So, that’s the rationale, some clarification, and the marketing pitch. If you’re grabbing Reason 7, though, you’ll want to know how to get working.

Product specialist Mattias produced a tutorial video on slice markers and the newly-integrated functionality, and includes a number of useful tips:

So, what do you think? Is this something you’ll use? Are you sticking to a different tool of choice, or excited to see this in Reason? (And we’ll be keen to hear how you work with it once that Reason 7 download finishes – or if you’ve been on the beta.)

Update: Drag and drop workflow

Via comments, it appears some people are confused about how the drag-and-drop sample loading workflow actually … works.

eXode explains:

1. Drag the desired .wav from the browser in Explorer to Reason 7 [Ed.: or any other supported format, or from OS X Finder]
2. Double-click the clip in the sequencer, then choose "Bounce -> Bounce Clip to REX Loop" in the right-click context menu.
3. The "Tool" window will automatically pop up with the new REX file selected. In that same Tool window, click the "To Rack" button.
Voila! Your brand new REX loop is now loaded into a Dr. OctoRex.

It actually works rather nicely. Of course, all of this assumes that you already like Dr. OctoRex, Kong, or other built-in devices – as otherwise you wouldn’t be using Reason. But that to me is rather the point: yes, multiple apps will slice and dice and stretch audio. Ultimately, it depends on how you prefer to work with that audio, whether in Reason’s semi-modular environment, traditional DAWs (SONAR comes up in comments), drum machine-style tools like Maschine, clip-based live performance environments like Ableton Live (okay, that’s still sort of a category of one), or something entirely different. Reason’s advantage is these devices that feel a bit like hardware, but still have software flexibility, in a semi-modular environment you can rig up however you like.

When Reason Met MIDI Out: How MIDI, Virtual CV Work in the New Reason 7 [Pictures, Details]


It’s been a long time coming, to say the least. But you can be sure that even when Propellerhead do something as basic as MIDI output, they’ll do it in, well, a Reason way. So, it was intriguing to hear Reason was adding MIDI out precisely because in Reason it’s integrated with virtual patch cords and connections that make it work differently than in other hosts. And that means we wanted details.

Propellerhead’s Leo Nathorst-Böös answers some of our questions, as CDM awaits the beta version.

Most helpfully, we have some new images, so you can see better what’s going on. To handle MIDI output to your external, physical hardware, Reason 7 has a virtual device and connections that correspond to the outboard gear. The External MIDI Instrument device in the rack is fairly basic, with connections for gate and “CV” (Reason’s nomenclature for internal, virtual control signal), and pitch and mod, plus an assignable output for Control Change messages.

Note, analog fans, that when we say “CV” we don’t mean external analog signal. If you are lucky enough to have some analog or modular gear, you’ll need to use MIDI somehow to connect to them. (That’s not to rule out future possibilities, though; in comments on the last story, we got a very interesting thread on the possibility of providing this functionality via a Rack Extension.)

Back to this device, just how does that “CC assign” work?

The CC assign sets what the knob on the front and the CV input on the back currently controls.

The knob is there on the front to make it easier for those who record automation by turning knobs in the rack with the mouse. All CC values are always accessible through Remote (from a hardware controller) and in the sequencer for automation, regardless of what the knob is set to.

If you select a CC# on the front, record automation by turning the knob, and then change what the knob controls, your automation will remain on the original CC#, and you can continue to record additional automation for more parameters using the knob on the front.

Of course, the big deal here is that you can route virtual control signal from other Reason devices out to your external gear – something you can’t necessarily do in many other software environments (or not without some effort or translation).

And how does that assignment work? Leo gives us a basic idea:

The External MIDI Instrument has inputs for Gate and Note CV, to be controlled from Reason’s Matrix or RPG-8 devices—or other sequencers, like some of the Rack Extensions available. In addition to that there are inputs for Mod Wheel, Pitch Bend and one selectable CV->CC input.


The External MIDI Device lets you connect outboard gear to the virtual instruments, sequencers, and control inside Reason itself. Click for larger versions, from front and (for modular connections) back. Images courtesy Propellerhead.
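Conceptually, the CV-to-CC input Leo describes is doing a simple conversion: sample a continuous control signal, quantize it to 7 bits, and send a Control Change only when the quantized value actually changes. A sketch under the assumption of a unipolar 0.0–1.0 CV signal (Reason's actual internal CV scaling isn't documented here, and the controller number is arbitrary):

```python
# Quantize a unipolar CV signal (assumed 0.0-1.0) to 7-bit CC values,
# emitting a message only when the quantized value changes.

def cv_to_cc_stream(cv_samples, controller=74, channel=0):
    messages, last = [], None
    for cv in cv_samples:
        value = max(0, min(127, round(cv * 127)))
        if value != last:
            messages.append(bytes([0xB0 | channel, controller, value]))
            last = value
    return messages

msgs = cv_to_cc_stream([0.0, 0.5, 0.5, 1.0])
print(len(msgs))  # 3 -- the repeated 0.5 is thinned out
```

The only-on-change thinning matters in practice: a CV signal runs at audio or control rate, while MIDI would choke if you sent a CC for every sample.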

Q. Are the Program and MIDI Channel controls, etc., themselves MIDI assignable? That is, could you take a ReMote device and use it to map program changes on outboard gear?

Program Change can be automated, but not MIDI channel. You can control both from a hardware controller through Remote.

Q. Why does the device have pitch and mod on it – so you can test these from a mouse?

Yes, and for visual feedback when it is automated in the sequencer.

Hopefully that gives you a basic idea of what’s at work. I’m sure this in turn prompts new questions, so ask away. Reason 7 is now hitting beta, which means you testers out there should have something to try first-hand soon, and a final release will follow in the near future.

Next, I want to talk more about slicing and audio loop workflows, also something I’ve been waiting for a long time in Reason. So, if you’ve been eager, too, send us your questions, here or via Facebook.

More on the upgrade:
http://www.propellerheads.se/products/reason/new/

Looping Technique: New BOSS, VOX Loopers Will Do One-Shots

So, you’re the fastest one-shot sampler in the West, huh? We’ve got good news for you, then – you can now proceed to spend money on new gear. Photo (CC-BY) William Clifford.

What was the most-asked question around new music tech announcements earlier this month, coinciding with the industry’s NAMM trade show? Was it, “What’s the best accessory for my iPad?” Was it, “what was the game changer for music workstations?”

Nope – not among CDM readers, anyway. It was, "can I do one-shot samples with the new loopers?"

A one-shot sample – for those of you thinking of True Grit – is just a sample that plays once and then stops, instead of immediately looping. It shouldn't be rocket science, but makers of loopers are often convinced you want everything looping. The nice thing about one-shot samples is that they provide more opportunities to be musically expressive and virtuosic than you'd have if you were knee-deep in never-ending loops.

VOX (Korg) and BOSS (Roland) each had dueling looper introductions – and each, in turn, earned some attention from readers. Those readers wanted to know if one shots were practical on the hardware. The answer: yes.

The BOSS LoopStations offer extensive sample time and memory storage; you can even use them as mobile recorders:
New Boss Loop Stations Add Features, Up to Three Hours of Recording; the Loopers to Beat

The VOX looper lacks that recording flexibility with small sample times, but many readers liked its live performance-oriented features and effects:
VOX Gets in Looping Game with Dynamic Looper – 90 Seconds, But with Live Features

So, about those 1-shots… First, Amanda Whiting confirms the new RC LoopStations each offer one-shot looping. Add that together with other usability enhancements, and I’d say the LoopStations really are looking a lot better.

Second, I asked Korg’s Leslie Buttonow if the VOX Dynamic Looper will do one shots:

Yes, it does because the Looper allows you versatile ways of ending your loops. Ex.—-“Stop at end of a loop; playback; fade out; delay out (like a fade while repeating last note).”

So, using the “stop at end of loop” mode would in essence give people a “one-shot” loop trigger of sorts.

Actually, I’d say that’s not just a one-shot “of sorts” – that there, pardner, is a gosh-honest, one hundred percent-authentic one-shot. Put that in your … sampler … and … smoke it. Erm. Yeah.

There you have it, folks. Each looper looks like it holds some serious potential. Oddly, talking to Roger Linn about the Tempest, his new drum machine with Dave Smith, our conversation turned to looping as an ideal way to translate the act of recording into performance. So, there's great interest in this stuff. If you put together some fantastic looping performance, whether you're sampling your singing or your ukulele or your crumhorn, do send it our way!