You can make music with test equipment – Hainbach explains

Before modulars became a product, some of the first electronic synthesis experiments made use of test equipment – gear that could make sound, but wasn’t necessarily intended to make music. And now that approach is making a comeback.

Hainbach, the Berlin-based experimental artist, has been helping this time-tested approach to sound reach new audiences.

I actually have never seen a complete, satisfying explanation of the relationship of abstract synthesis, as developed by engineers and composers, to test gear. Maybe it’s not even possible to separate the two. But suffice to say, early in the development of synthesis, you could pick up a piece of gear intended for calibration and testing of telecommunications and audio systems, and use it to make noise.

Why the heck would you do that now, given the availability of so many options for synthesis? Well, for one – until folks like Hainbach and me make a bunch of people search the used market – a lot of this gear is simply being scrapped. Since it’s heavy and bulky, it ranges from cheap to “if you get this out of my garage, you can have it” pricing. And the sound quality of a lot of it is also exceptional. Sold to big industry back when cutting prices on this sort of equipment wasn’t essential, a lot of it feels and sounds great. And just like any other sound design or composition exercise that begins with finding something unexpected, the strange wonderfulness of these devices can inspire.

I got a chance to spend a few days playing with the Waveform Research Centre at Rotterdam’s WORM, a strange and wild collection of these orphaned devices lovingly curated by Dennis Verschoor. And I got sounds unlike anything I was used to. It wasn’t just the devices and their lovely dials that made that possible – it was also the unique approach required when the normal envelope generators and such aren’t available. Human creativity does tend to respond well to obstacles.

Whether or not you go that route, it is worth delving into the history and possibilities – and Hainbach’s video is a great start. It might at the very least change how you approach your next Reaktor patch, SuperCollider code, synth preset, or Eurorack rig.

Previously:

Immerse yourself in Rotterdam’s sonic voltages, in the WORM laboratory

The post You can make music with test equipment – Hainbach explains appeared first on CDM Create Digital Music.

Get going with MOTU’s DP10 with these videos

DP10 for Mac and Windows, unveiled this spring, brought breakthrough features to the long-standing favorite DAW called Digital Performer. So now it’s time to dig in and start using the new stuff.

DP has never been short on updates, but some of them certainly felt iterative. And the software had to make the jump from Mac to Windows – a transition initially complicated by Windows’ archaic high-density display support, which left the screen hard to read.

DP10 is interesting because it brings some genuinely new ideas. There’s a Clip View that looks an awful lot like Ableton’s Session View, but with some new twists – and in a more traditional DAW, with stuff like proper video and cue support which Live so sorely lacks. There are more ways to manipulate audio and pitch without jumping into a plug-in. There’s a substantially beefed-up waveform editor. If you missed it before, I covered this when it debuted in February:

DP10 adds clip launching, improved audio editing to MOTU’s DAW

Or watch Sound on Sound‘s breakdown of the upgrade:

I’m a great fan of written tutorials, but some of this stuff really does benefit from a visual aid. So let’s get started. As it happens, while it’s a bit hidden, you can now download a 30-day demo – enough time to try finishing a project in DP and see if you like it. They’ve got a US$395 upgrade from competing products, so DP fits nicely in a mid-range price point when some competing options have crept up to a grand or more. (Cough, you know who you are.)

http://www.motu.com/download

First, Thomas Foster holds your hand with a total-beginner walkthrough of getting started with DP10. And unlike MOTU’s own videos, this one is also oriented toward in-the-box electronic production – so it’ll be friendly to a lot of the sorts who read this site.

From the absolute beginning, here’s a look at actually creating something, using the Model12 and the BassLine instruments:

(If you want to get more advanced with BassLine, check the MOTU videos below.)

And also at the 101-level, importing audio and applying audio effects to vocals:

VCA Faders are one of the more distinctive new features – here’s a walkthrough focused on that:

Lastly, round about March MOTU posted a huge trove of demos and tutorials from seminars at NAMM. It’s maybe doubly interesting for including some industry heavyweights – Family Guy composer Walter Murphy, LA producer/composer David Das, Mike McKnight who programs and plays keyboards for Roger Waters, music tech legend Craig Anderton, and more.

It’s easier to navigate what’s available from MOTU’s blog than in the distracting maze that is YouTube, so have a look here:

MOTU demos from NAMM 2019

I expect some CDM readers out there are DP users, so I’d love to hear from you about how you feel about this update and how you use the software in your work.

And as always, if there’s a tool you want to see featured, don’t hesitate to write.

The ABCs of Live 10.1: 2 minutes of shortcuts will help you work faster

A is for Ableton Live – and Madeleine Bloom can get you up and running with a bunch of 10.1 shortcuts in just over two minutes.

Madeleine of Sonic Bloom is one of the world’s top experts on staying productive in Live (to say nothing of helping us re-skin the thing so the colors are the way we want).

Live 10.1 actually added a lot of shortcuts to save you time – it’s what 10 promised, but implemented in a way that makes more sense. And she plows through them in a hurry:

via SonicBloom, which has loads more

F lets you get at fades right away.

H makes everything fill space vertically in the Arrangement so you don’t have to squint.

My personal favorite – Z, which zooms right to what’s selected and fills the Arrangement so you can focus and see easily.

And more…

This is all so much better than hunting around.

Z is so much my favorite that it just earned this:

For more on Live 10.1 and how to get started:

Ableton Live 10.1 is out now; here are the first things you should try

Ableton Live 10.1 arrives; here are the first things you should try

Ableton Live 10.1 is here, a free update to Live 10 with some new devices, streamlined automation and editing, and new sound features. So what should you dig into after the download? Here’s a place to start.

There’s no surprise reveal here since 10.1 has been in public beta and was announced in the winter. Here’s the full run-down of what’s in the release from February (still accurate):

Ableton Live 10.1: more sound shaping, work faster, free update

I’ve been working with the beta for some time, to the point of not wanting to go back even to 10.0 (or even getting a bit confused when switching to a friend’s machine that didn’t have 10.1).

So let’s skip ahead to stuff you should check out right away when you download:

Refresh a track in Arrangement View

I will shortly do a separate story just on getting around Arrangement View quickly, but — there’s a lot of fun to be had. (Yes, fun, not just screaming at the screen as you painstakingly move envelopes around.)

Ableton have accordingly updated their Arrangement View tutorials:

(Video is actually a terrible medium for shortcuts, but more on those soon.)

Here are some quick things to try:

Resize the Arrangement Overview (that’s the bit at the top of Arrangement View)

Draw some shapes! Right click, pick some shapes, and you can draw in envelopes. Try this actually two ways: first, select some time and draw in shapes. Next, deselect time, and try drawing with different grid values – you’ll get different corresponding quantization.

Get at fades directly. Press the F key.

Clean up envelopes. Right click on a time selection and choose Simplify Envelope.

Stretch and scale! Select some time in automation, and you’ll see handles so you can stretch the envelope both horizontally (in time) and vertically (amount/scale).

Enter some specific values. Right click, choose Edit value, type in a number, and hit enter.

There’s a lot more. But all of this is an opportunity to duplicate one of your projects and give it a refresh by going nuts with some modulation because – why not.

You know, conventional wisdom says, don’t mess with your existing tracks too much. The hell with that. If I were a painter, I would definitely be the kind constantly scraping away and painting over canvases. You can always save a backup. Sometimes it’s fun to mess around and take something somewhere else entirely.

Everything Freezes

Go ahead and freeze whatever you want! Track has a sidechain? It’ll freeze. It’ll even still be a source for other sidechains. (There are actually a bunch of things that had to happen for this to work – check Arrangement Editing in release notes if you’re curious. But the beauty is you don’t really have to think about it.)

Here’s a new explanation of how it works:

Try your own wavetables

User wavetables make the Live 10 Wavetable synth far more interesting.

Like arrangement, this probably deserves its own story, but here’s a place to get started:

And for extra help exploiting that feature, there are some useful utilities that will assist you in creating wavetables:

Generate wavetables for free, for Ableton Live 10.1 and other synths

While you’re in there, Ableton quietly added a very powerful randomization feature inside Wavetable for glitching out still more:

Added a new “Rand” modulation source to Wavetable’s MIDI tab, which generates a random value when a note starts.
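The behavior that release note describes – a value sampled at note-on and then held – is simple to sketch. Here’s a rough illustration in Python (just the concept, not Ableton’s implementation; the class and method names are invented):

```python
import random

class NoteRandom:
    """Sample-and-hold random modulation: one value drawn per note,
    held steady until the next note-on (the 'Rand' idea in sketch form)."""

    def __init__(self, seed=None):
        self.rng = random.Random(seed)
        self.value = 0.0

    def note_on(self):
        # A fresh random value in 0..1 is drawn each time a note starts...
        self.value = self.rng.random()
        return self.value

    def modulate(self, base, amount):
        # ...and applied as a bipolar offset, scaled by the modulation amount.
        return base + amount * (self.value - 0.5)

# Within one note, the modulation offset never changes:
r = NoteRandom(seed=1)
r.note_on()
steady = r.modulate(0.5, 1.0) == r.modulate(0.5, 1.0)
```

Each note thus gets its own fixed detune, filter offset, or whatever you route it to – subtle per-note variation without an envelope or LFO.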

Pinch and zoom

Trackpads and touchscreens (most of them, anyway) now support pinch gestures in Arrangement View, so try that out. It works for me both on a Razer and (of course) Apple laptop; lots of other hardware will work, too. It’s a little thing, but zooming is a big part of getting around an arrangement.

Try Channel EQ as a creative tool or live

There are already a lot of EQs out there. The Channel EQ, however, has some draw: it could become for live PA / experimental sets what the EQ Three has been for DJ sets.

Stop futzing around with sends when you export stems

Okay, see if this is familiar:

You output stems – say, for a remix artist, or to mix in a different tool – and suddenly everything sounds completely different from what you expected, because you used sends and returns and/or master effects.

That’s no longer an issue in 10.1, as there’s now a new export option that addresses this.

So, time to go make some stems, right?

Make some new sounds with Delay

Okay, Delay at first glance may seem like a step backward from the excitement of Space Echo-ish Echo in Live 10. Isn’t it just a combination of Simple Delay and Ping Pong Delay into one Device?

Well, it is that, but it also has an LFO built in that can modulate both delay time and filter frequency.

These modes were there before, but Repitch, Fade, and Jump are now surfaced as buttons.

So put all of this together – the things that were already there that you didn’t notice, plus new things that are simple but powerful – and one unit becomes very capable indeed.

That is, if you’re modulating something like delay time, then changing between Repitch, Fade, and Jump actually gives you a lot of different sonic possibilities. And yes, this is the sort of thing people with modular rigs like to do with wires but… if you’re a Live 10 owner, it’ll cost you nothing to check out right now.

Specifically, maxforcats pointed us to some cool granular-ish sounds when you choose Fade mode and start modulating delay time.
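If you’re curious why modulating delay time is so fertile, the core mechanism is just a delay buffer whose read position is swept by an LFO, with fractional positions interpolated. Here’s a minimal Python sketch of that idea (illustrative only – not Live’s DSP, and the modes differ in how they handle the moving read point: Repitch glides like tape, Jump steps abruptly, Fade crossfades):

```python
import math

def modulated_delay(signal, sr=44100, base_ms=20.0, lfo_hz=0.5, depth_ms=5.0):
    """Run a signal through a delay line whose delay time is swept by a sine LFO.
    Fractional read positions are linearly interpolated."""
    buf = [0.0] * sr  # one second of circular delay buffer
    out = []
    for n, x in enumerate(signal):
        buf[n % sr] = x
        # Delay time in samples, wobbling around base_ms by +/- depth_ms:
        delay = (base_ms + depth_ms * math.sin(2 * math.pi * lfo_hz * n / sr)) * sr / 1000.0
        pos = (n - delay) % sr
        i, frac = int(pos), pos - int(pos)
        # Linear interpolation between the two nearest buffer samples:
        out.append(buf[i] * (1 - frac) + buf[(i + 1) % sr] * frac)
    return out
```

With depth at zero this is a plain delay; turn the depth and rate up and the sweeping read position starts to smear and pitch-bend the input, which is exactly the territory those granular-ish Fade-mode sounds live in.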

And keep using Echo. The big challenge with an effect like Echo is balancing loudness. As it happens, there’s a little right-click option that solves this for you in Echo:

In the Echo device, the Dry/Wet knob now features a context menu to switch to “Equal-Loudness”. When enabled, a 50/50 mix will sound equally loud for most signals, instead of being attenuated. In the Delay device, the maximum delay time offset is now consistent with the Simple Delay and Ping Pong Delay devices.
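The reason a plain 50/50 dry/wet sounds quieter is that linear gains sum in amplitude but dip in power at the midpoint; an equal-power law keeps combined power constant. A quick sketch of the two curves (the general technique, assumed here – not necessarily Ableton’s exact curve):

```python
import math

def linear_mix(mix):
    """Naive dry/wet: gains sum to 1.0, but combined power dips at the midpoint."""
    return (1.0 - mix, mix)

def equal_power_mix(mix):
    """Equal-power dry/wet: quarter-sine gain curves so that
    dry^2 + wet^2 == 1 everywhere -- perceived loudness stays roughly constant."""
    return (math.cos(mix * math.pi / 2), math.sin(mix * math.pi / 2))

dry, wet = linear_mix(0.5)      # 0.5 each -> combined power 0.5, roughly a 3 dB dip
d2, w2 = equal_power_mix(0.5)   # ~0.707 each -> combined power 1.0
```

That ~3 dB midpoint dip is exactly the attenuation the “Equal-Loudness” option is there to avoid.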

Discover Simpler, again

Oddly, Simpler is often itself a reason to use Ableton Live, for its absurd combination of directness and power – in contrast to mostly overcomplicated software (and hardware, for that matter).

Now you can mess around with volume envelopes (even synced ones) and loop time, previously only in Sampler – for both powerful sound design and beat-synced ideas:

Added a Loop Mode chooser, Loop Time slider and Beat Sync/Rate slider to the Volume Envelope in Simpler’s Classic Playback Mode. Previously, these controls were exclusively available in Sampler.

Oh, and go map some macros

You’d probably easily miss this, too – it means that now mapping macros works the way you’d expect, in fewer steps:

When mapping a parameter to an empty macro, the macro assumes the full range of the target parameter, and will be set to the current value of the target parameter.

— and while using mice for everything is no fun, macros are also a great intermediary between what you’re doing onscreen and twisting knobs on controller hardware (Push, certainly, but lots of other gear, too).

Speaking of which, that nice compact NI keyboard controller works thanks to this update, too, making it an ideal thing to throw in your bag with a laptop for a mobile Ableton Live work rig.

Where to find more on 10.1

Detailed ongoing release notes on Live 10 are here:
Live 10 Release Notes

Max TV: go inside Max 8’s wonders with these videos

Max 8 – and by extension the latest Max for Live – offers some serious powers to build your own sonic and visual stuff. So let’s tune in some videos to learn more.

The major revolution in Max 8 – and a reason to look again at Max even if you’ve lapsed for some years – is really MC. It’s “multichannel,” so it has significance in things like multichannel speaker arrays and spatial audio. But even that doesn’t do it justice. By transforming the architecture of how Max treats multiple, well, things, you get a freedom in sketching new sonic and instrumental ideas that’s unprecedented in almost any environment. (SuperCollider’s bus and instance system is capable of some feats, for example, but it isn’t as broad or intuitive as this.)

The best way to have a look at that is via a video from Ableton Loop, where the creators of the tech talk through how it works and why it’s significant.

Description [via C74’s blog]:

In this presentation, Cycling ’74’s CEO and founder David Zicarelli and Content Specialist Tom Hall introduce us to MC – a new multi-channel audio programming system in Max 8.

MC unlocks immense sonic complexity with simple patching. David and Tom demonstrate techniques for generating rich and interesting soundscapes that they discovered during MC’s development. The video presentation touches on the psychoacoustics behind our recognition of multiple sources in an audio stream, and demonstrates how to use these insights in both musical and sound design work.

The patches aren’t all ready for download (hmm, some cleanup work being done?), but watch this space.

If that’s got you in the learning mood, there are now a number of great video tutorials up for Max 8 to get you started. (That said, I also recommend the newly expanded documentation in Max 8 for more at-your-own-pace learning, though this is nice for some feature highlights.)

dude837 has an aptly-titled “delicious” tutorial series covering both musical and visual techniques – and the dude abides, skipping directly to the coolest sound stuff and best eye candy.

Yes to all of these:

There’s a more step-by-step set of tutorials by dearjohnreed (including the basics of installation, so really hand-holding from step one):

For developers, the best thing about Max 8 is likely the new Node features. And this means the possibility of wiring musical inventions into the Internet as well as applying some JavaScript and Node.js chops to anything else you want to build. Our friends at C74 have the hook-up on that:

Suffice to say that also could mean some interesting creations running inside Ableton Live.

It’s not a tutorial, but on the visual side, Vizzie is also a major breakthrough in the software:

That’s a lot of looking at screens, so let’s close out with some musical inspiration – and a reminder of why doing this learning can pay off later. Here’s Second Woman, favorite of mine, at LA’s excellent Bl__K Noise series:

How to make a multitrack recording in VCV Rack modular, free

In the original modular synth era, your only way to capture ideas was to record to tape. But that same approach can be liberating even in the digital age – and it’s a perfect match for the open VCV Rack software modular platform.

Competing modular environments like Reaktor, Softube Modular, and Cherry Audio Voltage Modular all run well as plug-ins. That functionality is coming soon to a VCV Rack update, too – see my recent write-up on that. In the meanwhile, VCV Rack is already capable of routing audio into a DAW or multitrack recorder – via the existing (though soon-to-be-deprecated) VST Bridge, or via inter-app routing schemes on each OS, including JACK.

Those are all good solutions, so why would you bother with a module inside the rack?

Well, for one, there’s workflow. There’s something nice about being able to just keep this record module handy and grab a weird sound or nice groove at will, without having to shift to another tool.

Two, the big ongoing disadvantage of software modular is that it’s still pretty CPU intensive – sometimes unpredictably so. Running Rack standalone means you don’t have to worry about overhead from the host, or its audio driver settings, or anything like that.

A free recording solution inside VCV Rack

What you’ll need to make this work is the NYSTHI module collection for VCV Rack, available via Rack’s plug-in manager. The modules are free – though get ready, there are a hell of a lot of them.

Big thanks to chaircrusher for this tip and some other ones that informed this article – do go check his music.

Type “recorder” into the search box for modules, and you’ll see several options from NYSTHI – current at least as of this writing.

2 Channel MasterRecorder is a simple stereo recorder.
2 Channel MasterRecorder 2 adds various features: monitoring outs, autosave, a compressor, and “stereo massaging.”
Multitrack Recorder is a multitrack recorder with 4- or 8-channel modes.

The multitrack is the one I use the most. It allows you to create stems you can then mix in another host, or turn into samples (or, say, load onto a drum machine or the like), making this a great sound design tool and sound starter.

This is creatively liberating for the same reason it’s actually fun to have a multitrack tape recorder in the same studio as a modular, speaking of vintage gear. You can muck about with knobs, find something magical, and record it – and then not worry about going on to do something else later.

The AS mixer, routed into NYSTHI’s multitrack recorder.

Set up your mix. The free included Fundamental modules in Rack will cover the basics, but I would also go download Alfredo Santamaria’s excellent selection, the AS modules – also in the Plugin Manager, and also free. Alfredo has created friendly, easy-to-use 2-, 4-, and 8-channel mixers that pair perfectly with the NYSTHI recorders.

Add the mixer, route your various parts, set level (maybe with some temporary panning), and route the output of the mixer to the Audio device for monitoring. Then use the ‘O’ row to get a post-fader output with the level.

(Alternatively, if you need extra features like sends, there’s the mscHack mixer, though it’s more complex and less attractive.)

Prep that signal. You might also consider a DC Offset and Compressor between your raw sources and the recording. (Thanks to Jim Aikin for that tip.)

Configure the recorder. Right-click on the recorder for an option to set 24-bit audio if you want more headroom, or to pre-select a destination. Set 4- or 8-track mode with the switch. Set CHOOSE FILE if you want to manually select where to record.

There are trigger ins and outs, too, so apart from just pressing the START and STOP buttons, you can either trigger a sequencer or clock directly from the recorder, or vice versa.

Record away! And go to town… when you’re done, you’ll get a stereo WAV file, or a 4- or 8-track WAV file. Yes, that’s one file with all the tracks. So about that…

Splitting up the multitrack file

This module produces a single, multichannel WAV file. Some software will know what to do with that. Reaper, for instance, has excellent multichannel support throughout, so you can just drag and drop into it. Adobe’s Audition CS also opens these files, but it can’t quickly export all the stems.

Software like Ableton Live, meanwhile, will just throw up an error if you try to open the file. (Bad Ableton! No!)

It’s useful to have individual stems anyway. ffmpeg is an insanely powerful cross-platform tool capable of doing all kinds of things with media. It’s completely free and open source, it runs on every platform, and it’s fast and deep. (It converts! It streams! It records!)

Installing is easier than it used to be, thanks to a cleaned-up site and pre-built binaries for Mac and Windows (plus of course the usual easy Linux installs):

https://ffmpeg.org/

Unfortunately, it’s so deep and powerful, it can also be confusing to figure out how to do something. Case in point – this audio channel manipulation wiki page.

In this case, you can use the map_channel option to make this happen. So for eight channels, I do this:

ffmpeg -i input.wav -map_channel 0.0.0 0.wav -map_channel 0.0.1 1.wav -map_channel 0.0.2 2.wav -map_channel 0.0.3 3.wav -map_channel 0.0.4 4.wav -map_channel 0.0.5 5.wav -map_channel 0.0.6 6.wav -map_channel 0.0.7 7.wav

But because this is a command line tool, you could create some powerful automated workflows for your modular outputs now that you know this technique.
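For instance, you could skip ffmpeg entirely and script the de-interleaving yourself. Here’s a sketch using only Python’s standard-library wave module (it assumes integer PCM, which matches the recorder’s output; the function name and stem naming scheme are my own):

```python
import wave

def split_wav(path, prefix="stem"):
    """Split an interleaved multichannel WAV into mono files:
    stem0.wav, stem1.wav, ... Returns the channel count."""
    with wave.open(path, "rb") as src:
        nch, width, rate = src.getnchannels(), src.getsampwidth(), src.getframerate()
        frames = src.readframes(src.getnframes())
    for ch in range(nch):
        # Take every nch-th sample of `width` bytes, starting at this channel's offset.
        mono = b"".join(frames[i:i + width]
                        for i in range(ch * width, len(frames), nch * width))
        with wave.open(f"{prefix}{ch}.wav", "wb") as dst:
            dst.setnchannels(1)
            dst.setsampwidth(width)
            dst.setframerate(rate)
            dst.writeframes(mono)
    return nch
```

Point it at the recorder’s output – e.g. `split_wav("session.wav")` – and you get one mono WAV per channel, ready to drag into any DAW.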

Sound Devices, the folks who make excellent multichannel recorders, also have a free Mac and Windows tool called Wave Agent which handles this task if you want a GUI instead of the command line.

https://www.sounddevices.com/products/accessories/software/wave-agent

That’s worth keeping around, too, since it can also mix and monitor your output. (No Linux version, though.)

Record away!

Bonus tutorial here – the other thing apart from recording you’ll obviously want with VCV Rack is some hands-on control. Here’s a nice tutorial this week on working with BeatStep Pro from Arturia (also a favorite in the hardware modular world):

I really like this way of working, in that it lets you focus on the modular environment instead of juggling tools. I actually hope we’ll see a Fundamental module for the task in the future. Rack’s modular ecosystem changes fast, so if you find other useful recorders, let us know.

https://vcvrack.com/

Previously:

Step one: How to start using VCV Rack, the free modular software

How to make the free VCV Rack modular work with Ableton Link

Your questions answered: Sonarworks Reference calibration tools

If getting your headphones and studio monitors calibrated sounds like a good New Year’s resolution, we’ve got you covered. Some good questions came up in our last story on Sonarworks Reference, the automated calibration tool, so we’ve gotten answers for you.

First, if you’re just joining us, Sonarworks Reference is a tool for automatically calibrating your studio listening environment and headphones so that the sound you hear is as uncolored as possible – more consistent with the source material. Here’s our previous write-up, produced in cooperation with Sonarworks:

What it’s like calibrating headphones and monitors with Sonarworks tools

CDM is partnering with Sonarworks to help users better understand how to use the tool to their benefit. And so that means in part answering some questions with Sonarworks engineers. If you’re interested in the product, there’s also a special bundle discount on now: you get the True-Fi mobile app for calibration on your mobile device, free with a Sonarworks Studio Edition purchase (usually US$79):

https://try.sonarworks.com/christmasspecial/

Readers have been sending in questions, so I’ll answer as many as I can as accurately as possible.

Does it work?

Oh yeah, this one is easy. I found it instantly easier to mix both on headphones and sitting in the studio, in that you hear far more consistency from one listening environment / device to another, and in that you get a clearer sense of the mix. It feels a little bit like how I feel when I clean my eyeglasses. You’re removing stuff that’s in the way. That’s my own personal experience, anyway; I linked some full reviews and comparisons with other products in the original story. But my sense in general is that automated calibration has become a fact of life for production and live situations. It doesn’t eliminate the role of human experts, not by a long shot – but then color calibration in graphics didn’t get rid of the need for designers and people who know how to operate the printing press, either. It’s just a tool.

Does it work when outside of the sweet spot in the studio?

This is a harder question, actually, but anecdotally, yeah, I still left it on. You’re calibrating for the sweet spot in your studio, so from a calibration perspective, yeah, you do want to sit in that location when monitoring – just as you always would. But since a lot of what Sonarworks Reference does concerns frequency response as much as the space, I found it was still useful to leave the calibration on even when wandering around my studio. It’s not as though the calibration suddenly stops working when you move around. You only notice something’s wrong if you have the wrong calibration profile selected or you make the mistake of bouncing audio with calibration left on (oops). But that’s of course exactly what you’d expect to happen.

What about Linux support?

Linux is officially unsupported, but you can easily calibrate on Windows (or Mac) and then use the calibration profile on Linux. It’s a 64-bit Linux-native VST, in beta form.

If you run the plug-in the handy plug-in host Carla, you can calibrate any source you like (via JACK). So this is really great – it means you can have calibrated results while working with SuperCollider or Bitwig Studio on Linux, for example.

This is beta only so I’m really keen to hear results. Do let us know, as I suspect if a bunch of CDM readers start trying the Linux build, there will be added incentive for Sonarworks to expand Linux support. And we have seen some commercial vendors from the Mac/Windows side (Pianoteq, Bitwig, Renoise, etc.) start to toy with support of this OS.

If you want to try this out, go check the Facebook group:
https://www.facebook.com/groups/1751390588461118/

(Direct compiled VST download link is available here, though that may change later.)

What’s up with latency?

You get a choice of either more accuracy and higher latency, or lower accuracy and lower latency. So if you need real-time responsiveness, you can prioritize low latency performance – and in that mode, you basically won’t notice the plug-in is on at all in my experience. Or if you aren’t working live / tracking live, and don’t mind adding latency, you can prioritize accuracy.

Sonarworks clarifies for us:

Reference 4 line-up has two different *filter* modes – zero latency and linear phase. Zero latency filter adds, like the name states, zero latency, whereas linear phase mode really depends on sample-rate but typically adds about 20ms of latency. These numbers hold true in plugin form. Systemwide, however, has the variable of driver introduced latency which is set on top of the filter latency (zero for Zero latency and approx 20ms for linear phase mode) so the numbers for actual Systemwide latency can vary depending on CPU load, hardware specs etc. Sometimes on MacOS, latency can get up to very high numbers which we are investigating at the moment.

What about loudness? Will this work in post production, for instance?

Some of you are obviously concerned about loudness as you work on projects where that’s important. Here’s an explanation from Sonarworks:

So what we do in terms of loudness as a dynamic range character is – nothing. What we do apply is overall volume reduction to account for the highest peak in correction to avoid potential clipping of output signal. This being said, you can turn the feature off and have full 0dBFS volume coming out of our software, controlled by either physical or virtual volume control.
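That headroom logic is easy to illustrate: find the biggest boost anywhere in the correction curve, and trim the whole output by that amount so no corrected band can clip. A sketch of the arithmetic (assumed behavior based on the description above, not Sonarworks’ code):

```python
def safety_gain_db(correction_curve_db):
    """Trim the output by the largest boost in the correction curve (in dB),
    so that no corrected frequency band can push the signal past 0 dBFS.
    Curves that only cut need no trim at all."""
    highest_boost = max(correction_curve_db)
    return -highest_boost if highest_boost > 0 else 0.0

# A correction curve that boosts up to +4.5 dB gets trimmed by 4.5 dB overall:
trim = safety_gain_db([-2.0, 1.5, 4.5, -6.0])
```

Which is also why the option exists to switch the trim off and take back the full 0 dBFS range under your own volume control.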

Which headphones are supported?

There’s a big range of headphones with calibration profiles included with Sonarworks Reference. Right now, I’ve got that folder open, and here’s what you get at the moment:

AIAIAI TMA-1

AKG K72, K77, K121, K141 MKII, K240, K240 MKII, K271 MKII, K550 MKII, K553 Pro, K612 Pro, K701, K702, K712 Pro, K812, Q701

Apple AirPods

Audeze LCD-2, LCD-X

Audio-Technica ATH-M20x, M30x, M40x, M50x, M70x, MSR7, R70x

Beats EP, Mixr, Pro, Solo2, Solo3 wireless, Studio (2nd generation), X Average

Beyerdynamic Custom One Pro, DT 150, DT 250 80 Ohm, DT 770 Pro (80 Ohm, 32 Ohm PPRO, 80 Ohm Pro, 250 Ohm Pro), DT 990 Pro 250 Ohm, DT 1770 Pro, DT 1990 Pro (analytical + balanced), T 1

Blue Lola, Mo-Fi (o/On+)

Bose QuietComfort 25, 35, 35 II, SoundLink II

Bowers & Wilkins P7 Wireless

Extreme Isolation EX-25, EX-29

Focal Clear Professional, Clear, Listen Professional, Spirit Professional

Fostex TH900 mk2, TH-X00

Grado SR60e, SR80e

HiFiMan HE400i

HyperX Cloud II

JBL Everest Elite 700

Koss Porta Pro Classic

KRK KNS 6400, 8400

Marshall Major II, Monitor

Master & Dynamic MH40

Meze 99, 99 NEO

Oppo PM-3

Philips Fidelio X2HR, SHP9500

Phonon SMB-02

Pioneer HDJ-500

Plantronics BackBeat Pro 2

PreSonus HD 7

Samson SR850

Sennheiser HD, HD 25 (70 Ohm, Light), HD-25-C II, HD 201, HD 202, HD 205, HD 206, HD 215-II, HD 280 Pro (incl. new facelift version), HD 380 Pro, HD 518, HD 598, HD 598 C, HD 600, HD 650, HD 660 S, HD 700, HD 800, HD 800 S, Momentum On-Ear Wireless, PX 100-II

Shure SE215, SRH440, SRH840, SRH940, SRH1440, SRH1540, SRH1840

Skullcandy Crusher (with and without battery), Hesh 2.0

Sony MDR-1A, MDR-1000X, MDR-7506, MDR-7520, MDR-CD900ST, MDR-V150, MDR-XB450, MDR-XB450AP, MDR-XB650BT, MDR-XB950AP, MDR-XB950BT, MDR-Z7, MDR-ZX110, MDR-ZX110AP, MDR-ZX310, MDR-ZX310AP, MDR-ZX770BN, WH-1000XM2

Status Audio CB-1

Superlux HD 668B, HD-330, HD681

Ultrasone Pro 580i, 780i, Signature Studio

V-Moda Crossfade II, M-100

Yamaha HPH-MT5, HPH-MT7, HPH-MT8, HPH-MT220

So there you have it – lots of favorites, and lots of … well, actually, some truly horrible consumer headphones in the mix, too. But I know lots of serious mixers like testing a mix on consumer cans. The advantage of doing that with calibration is presumably that you get to hear the limitations of different headphones while still hearing the reference version of the mix – not the one exaggerated by those particular headphones. That way, you get greater benefit from those additional tests. And you can make better use of random headphones you have around – even if they’re, well, fairly awful, they can now still be usable.

Even after that long list, I’m sure there’s some stuff you want that’s missing. Sonarworks doesn’t yet support in-ear headphones with its calibration tools, so you can rule those out. For everything else, you can either request support or, if you want to get really serious, opt for individual mail-in calibration in Latvia.

More:

https://www.sonarworks.com/reference

The post Your questions answered: Sonarworks Reference calibration tools appeared first on CDM Create Digital Music.

What it’s like calibrating headphones and monitors with Sonarworks tools

No studio monitors or headphones are entirely flat. Sonarworks Reference calibrates any studio monitors or headphones with any source. Here’s an explanation of how that works and what the results are like – even if you’re not someone who’s considered calibration before.

CDM is partnering with Sonarworks to bring some content on listening with artist features this month, and I wanted to explore specifically what calibration might mean for the independent producer working at home, in studios, and on the go.

That means this isn’t a review and isn’t independent, but I would prefer to leave that to someone with more engineering background anyway. Sam Inglis reviewed the latest version for Sound on Sound at the start of this year; Adam Kagan reviewed version 3 for Tape Op. (Pro Tools Expert also compared IK Multimedia’s ARC and chose Sonarworks for its UI and systemwide monitoring tools.)

With that out of the way, let’s actually explain what this is for people who might not be familiar with calibration software.

In a way, it’s funny that calibration isn’t part of most music and sound discussions. People working with photos and video and print all expect to calibrate color. Without calibration, no listening environment is really truly neutral and flat. You can adjust a studio to reduce how much it impacts the sound, and you can choose reasonably neutral headphones and studio monitors. But those elements nonetheless color the sound.

I came across Sonarworks Reference partly because a bunch of the engineers and producers I know were already using it – even my mastering engineer.

But as I introduced it to first-time calibration product users, I found they had a lot of questions.

How does calibration work?

First, let’s understand what calibration is. Even studio headphones will color sound – emphasizing certain frequencies, de-emphasizing others. That’s with the sound source right next to your head. Put studio monitors in a room – even a relatively well-treated studio – and you combine the coloration of the speakers themselves with the reflections and character of the environment around them.

The idea of calibration is to process the sound to cancel out those modifications. Headphones can use existing calibration data. For studio speakers, you take some measurements. You play a known test signal and record it inside the listening environment, then compare the recording to the original and compensate.
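To make that comparison step concrete, here’s a minimal Python sketch of the general principle – not Sonarworks’ actual algorithm, just the idea: compare the spectrum of a known test signal to the recorded version, and derive a per-frequency correction gain. All names and numbers here are illustrative.

```python
import numpy as np

def correction_curve(reference, recording, n_fft=4096):
    """Estimate a magnitude-correction curve by comparing a known
    test signal to its recording in the listening environment."""
    ref_spec = np.abs(np.fft.rfft(reference, n_fft))
    rec_spec = np.abs(np.fft.rfft(recording, n_fft))
    eps = 1e-12  # avoid division by zero in silent bins
    # Gain needed per frequency bin to cancel the coloration
    gain = ref_spec / (rec_spec + eps)
    # Clamp extreme boosts so we don't amplify noise or room nulls
    return np.clip(gain, 0.1, 10.0)

# Toy example: pretend the "room" attenuates everything by half (-6 dB)
sr = 48000
t = np.arange(sr) / sr
test_signal = np.sin(2 * np.pi * 440 * t)
recorded = 0.5 * test_signal
gain = correction_curve(test_signal, recorded)
print(round(float(np.median(gain)), 2))  # 2.0, i.e. +6 dB of correction
```

A real tool does far more (phase, smoothing, per-position averaging), but the compensate-by-inverse idea is the core of it.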

Hold up this mic, measure some whooping sounds, and you’re done calibrating. No expertise needed.

What can I calibrate?

One of the things that sets Sonarworks Reference apart is that it’s flexible enough to deal with both headphones and studio monitors, and works both as a plug-in and a convenient universal driver.

The Systemwide driver works on Mac and Windows with the final output. That means you can listen everywhere – I’ve listened to SoundCloud audio through Systemwide, for instance, which has been useful for checking how the streaming versions of my mixes sound. This driver works seamlessly with Mac and Windows, supporting Core Audio on the Mac and the latest WASAPI Windows support, which is these days perfectly useful and reliable on my Windows 10 machine. (There’s unfortunately no Linux support, though maybe some enterprising user could get that Windows VST working.)

On the Mac, you select the calibrated output via a pop-up on the menu bar. On Windows, you switch to it just like you would any other audio interface. Once selected, everything you listen to in iTunes, Rekordbox, your Web browser, and anywhere else will be calibrated.

That works for everyday listening, but in production you often want your DAW to control the audio output. (Choosing the plug-in is essential on Windows for use with ASIO; Systemwide doesn’t yet support ASIO though Sonarworks says that’s coming.) In this case, you just add a plug-in to the master bus and the output will be calibrated. You just have to remember to switch it off when you bounce or export audio, since that output is calibrated for your setup, not anyone else’s.

Three pieces of software and a microphone. Sonarworks is a measurement tool, a plug-in and systemwide tool for outputting calibrated sound from any source, and a microphone for measuring.

Do I need a special microphone?

If you’re just calibrating your headphones, you don’t need to do any measurement. But for any other monitoring environment, you’ll need to take a few minutes to record a profile. And so you need a microphone for the job.

Calibrating your headphones is as simple as choosing the make and model number for most popular models.

Part of the convenience of the Sonarworks package is that it includes a ready-to-use measurement mic, and the software is already pre-configured to work with the calibration. These mics are omnidirectional – since the whole point is to pick up a complete image of the sound. And they’re meant to be especially neutral.

Sonarworks’ software is pre-calibrated for use with their included microphone.

Any microphone whose vendor provides a calibration profile – available in standard text form – can also be used with the software in a fully calibrated mode. If you have some cheap musician-friendly omni mic, though, its maker usually doesn’t provide anything of the sort in the way a calibration mic maker would.

I think it’s easier to just use the included mic, but I don’t have a big mic cabinet. Production Expert did a test of generic omni mics – mics that aren’t specifically for calibration – and got results that approximate those of the test mic. In short, they’re good enough if you want to try this out, though Production Expert were pretty specific about which omni mics they tested, and you don’t get the same level of integration with the calibration software.

Once you’ve got the mic, you can test different environments – your untreated home studio and a treated studio, for instance. And you wind up with what might be a useful mic in other situations – I’ve been playing with mine to sample reverb environments, like playing and re-recording sound in a tiled bathroom.

What’s the calibration process like?

Let’s actually walk through what happens.

With headphones, this job is easy. You select your pair of headphones – all the major models are covered – and then you’re done. So when I switch from my Sony to my Beyerdynamic, for instance, I can smooth out some of the irregularities of each of those. That’s made it easier to mix on the road.

For monitors, you run the Reference 4 Measure tool. Beginners I showed the software to got slightly discouraged when they saw the measurement would take 20 minutes – but relax. It’s weirdly kind of fun, and once you’ve done it once, it’ll probably take you half that time to do it again.

The whole thing feels a bit like a Nintendo Wii game. You start by making a longer measurement at the point where your head would normally be sitting. Then you move around to different targets as the software makes whooping sounds through the speakers. Once you’ve covered the full area, you will have dotted a screen with measurements. Then you’ve got a customized measurement for your studio.

Here’s what it looks like in pictures:

Simulate your head! The Measure tool walks you through exactly how to do this with friendly illustrations. It’s easier than putting together IKEA furniture.

You’ll also measure the speakers themselves.

Eventually, you measure the main listening spot in your studio. (And you can see why this might be helpful in studio setup, too.)

Next, you move the mic to each measurement location. There’s interactive visual feedback showing you as you get it in the right position.

Hold the mic steady, and listen as a whooping sound comes out of your speakers and each measurement is completed.

You’ll make your way through a series of these measurements until you’ve dotted the whole screen – a bit like the fingerprint calibration on smartphones.

Oh yeah, so my studio monitors aren’t so flat. When you’re done, you’ll see a curve that shows you the irregularities introduced by both your monitors and your room.

Now you’re ready to listen to a cleaner, clearer, more neutral sound – switch your new calibration on, and if all goes to plan, you’ll get much more neutral sound for listening!
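For the curious: those “whooping” test tones are typically logarithmic sine sweeps, a standard tool in acoustic measurement. Here’s a sketch of generating one in Python – an illustration of the general technique, not Sonarworks’ actual test signal; the function name and defaults are made up.

```python
import numpy as np

def log_sweep(f_start=20.0, f_end=20000.0, duration=5.0, sr=48000):
    """Generate an exponential (log-frequency) sine sweep -- the kind of
    'whooping' tone commonly used to measure speakers and rooms."""
    t = np.arange(int(duration * sr)) / sr
    k = np.log(f_end / f_start)
    # Phase integrates an exponentially rising instantaneous frequency
    phase = 2 * np.pi * f_start * duration / k * (np.exp(t / duration * k) - 1)
    return np.sin(phase)

sweep = log_sweep()
print(len(sweep))  # 240000 samples = 5 seconds at 48 kHz
```

Recording this sweep through the speakers and deconvolving it against the original yields the room’s impulse response – which is where curves like the one above come from.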

There are other useful features packed into the software, like the ability to apply the curve used by the motion picture industry. (I loved this one – it was like, oh, yeah, that sound!)

It’s also worth noting that Sonarworks have created different calibration types made for real-time usage (great for tracking and improv) and accuracy (great for mixing).

Is all of this useful?

Okay, disclosure statement is at the top, but … my reaction was genuinely holy s***. I thought there would be some subtle impact on the sound. This was more like the feeling – well, as an eyeglass wearer, when my glasses are filthy and I clean them and I can actually see again. Suddenly details of the mix were audible again, and moving between different headphones and listening environments was no longer jarring – like that.

Double blind A/B tests are really important when evaluating the accuracy of these things, but I can at least say, this was a big impact, not a small one. (That is, you’d want to do double blind tests when tasting wine, but this was still more like the difference between wine and beer.)

How you might actually use this: once they adapt to the calibrated results, most people leave the calibrated version on and work from a more neutral environment. Cheap monitors and headphones work a little more like expensive ones; expensive ones work more as intended.

There are other use cases, too, however. Previously I didn’t feel comfortable taking mixes and working on them on the road, because the headphone results were just too different from the studio ones. With calibration, it’s far easier to move back and forth. (And you can always double-check with the calibration switched off, of course.)

The other advantage of Sonarworks’ software is that it does give you so much feedback as you measure from different locations, and that it produces detailed reports. This means if you’re making some changes to a studio setup and moving things around, it’s valuable not just in adapting to the results but giving you some measurements as you work. (It’s not a measurement suite per se, but you can make it double as one.)

Calibrated listening is very likely the future even for consumers. As computation has gotten cheaper, and as software analysis has gotten smarter, it makes sense that these sorts of calibration routines will be applied to giving consumers more reliable sound and to adapting immersive and 3D listening. For now, they’re great for us as creative people, and it’s nice to have them in our working process and not only in the hands of other engineers.

If you’ve got any questions about how this process works as an end user, or other questions for the developers, let us know.

And if you’ve found uses for calibration, we’d love to hear from you.

Sonarworks Reference is available with a free trial:

https://www.sonarworks.com/reference

And some more resources:

Our friends at Erica Synths on this tool:

Plus you can MIDI map the whole thing to make this easier:

The post What it’s like calibrating headphones and monitors with Sonarworks tools appeared first on CDM Create Digital Music.

How to recreate vintage polyphonic character, using Softube Modular

It’s not about which gear you own any more – it’s about understanding techniques. That’s especially true when a complete modular rig in software runs you roughly the cost of a single hardware module. All that remains is learning – so let’s get going, with Softube Modular as an example.

David Abravanel joins us to walk us through technique here using Softube’s Modular platform, all with built-in modules. If you missed the last sale, by the way, Modular is on sale now for US$65, as are a number of the add-on modules that might draw you into their platform in the first place. But if you have other hardware or software, of course, this same approach applies. -Ed.

Classic Style Polyphony with Softube Modular

If you’ve ever played an original Korg Mono/Poly synthesizer, then you know why it’s so prized for its polyphonic character. Compared to fully polyphonic offerings (such as Korg’s own Polysix synthesizer), the Mono/Poly features four analog oscillators which can either be played stacked (monophonic), or triggered in order for “polyphony” (though still with just the one filter).

The original KORG classic Mono/Poly synth, introduced in 1981.

The resulting sound is richly imperfect – each time a chord is played, the minute differences in timing between individual fingers create differences in sound.

The cool thing is – we can easily re-create this in the Softube Modular environment, using the unique “Quad MIDI to CV” interface module. Follow along:

Our chord progression.

To start with, I need a reason for having four voices. In this case, it’s the simple chord sequence above. In order to play those notes simultaneously using Modular, I’ll need a dedicated oscillator for each. Each virtual voice will consist of one oscillator, ADSR envelope, and VCA amplifier. Here’s the basic setup – the VCO / ADSR / VCA modules will be repeated three more times to give us four voices:

Wiring up the first oscillator.

For the first oscillator, I’ve selected a pulse wave – go with whichever sounds you’d like to hear (things sound especially nice with multiple waveforms stacked on top of one another). With all four voices, the patch should look like this:

Note that each voice has its own dedicated note and gate channels from the Quad MIDI to CV. Now, we need to combine the voices – for this, we’ll use the Audio Mix module. I’m also adding a VCF filter, with its own ADSR. Because the filter needs to be triggered every time any note is input, I’m going to add a single MIDI to CV module to gate the filter envelope. It all looks like this:

Now, let’s hear what we’ve got:

That’s not bad, but we can spice it up a little bit. I went with two pulse waves, a saw wave, and a tri wave for my four oscillators – I’ll add a couple LFOs to modulate the pulsewidths of the two pulse waves and add some thickness. For extra dubby space, I’m also adding the Doepfer BBD module, a recent addition to Softube Modular which includes a toggle option for the clock noise bleed-through of the analog original. I’m also adding one more LFO, for a bit of modulation on the filter.
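If you want to see what those pulse-width LFOs are actually doing, here’s a rough Python sketch of the idea – a naive (non-bandlimited) pulse oscillator whose duty cycle is swept by a slow sine. The function and its parameters are invented for illustration, not anything from Softube.

```python
import numpy as np

def pwm_pulse(freq=110.0, lfo_freq=0.5, duration=2.0, sr=48000, depth=0.4):
    """Pulse oscillator whose duty cycle is swept by a slow LFO --
    the classic 'thickening' trick described above."""
    t = np.arange(int(duration * sr)) / sr
    # Duty cycle swings around 50% by +/- depth
    width = 0.5 + depth * np.sin(2 * np.pi * lfo_freq * t)
    phase = (t * freq) % 1.0  # naive phase ramp, 0..1 per cycle
    return np.where(phase < width, 1.0, -1.0)

sig = pwm_pulse()
print(sig.shape[0])  # 96000 samples = 2 seconds at 48 kHz
```

As the width moves, the harmonic balance of the pulse shifts continuously – which is why stacked PWM voices sound so much fatter than static ones.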

Adding in some additional modules for flavor. The Doepfer BBD (an add-on for the Softube Modular) adds unique retro delays and other effects, including bitcrushing, distortion, and lots of other chorusing, flanging, ambience, and general swirly crunchy stuff.

Honestly, the characterful BBD module deserves its own article – and may get one! Stay tuned.

Here’s our progression, really moving and spacey now:

And there we have it! A polyphonic patch with serious analog character. You can also try playing monophonic melodies through it – in Quad MIDI to CV’s “rotate” mode, each incoming note will go to a different oscillator.
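That “rotate” behavior is plain round-robin voice allocation. A hypothetical sketch in Python (the class and names are invented for illustration, not from Softube):

```python
class RotateAllocator:
    """Round-robin note dispatch, mimicking a 'rotate' mode:
    each incoming note goes to the next of four voices."""
    def __init__(self, n_voices=4):
        self.n_voices = n_voices
        self.next_voice = 0

    def note_on(self, note):
        voice = self.next_voice
        self.next_voice = (self.next_voice + 1) % self.n_voices
        return voice, note

alloc = RotateAllocator()
# A four-note chord lands on four separate voices...
print([alloc.note_on(n)[0] for n in (60, 64, 67, 71)])  # [0, 1, 2, 3]
# ...and a fifth note wraps back around to voice 0
print(alloc.note_on(72)[0])  # 0
```

Because each voice keeps its own detune and waveform, a melody cycling through them picks up the same shifting character the Mono/Poly is loved for.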

Want to try this out for yourself? Download the preset and run it in Modular (requires Modular and the BBD add-on, both of which you can demo from Softube).

DHLA poly + BBD.softubepreset

We’re just scratching the surface with Modular here – there’s an enormous well of potential, and they’ve really nailed the sound of many of these modules. Modular is a CPU-hungry beast – don’t try to run more than one or two instances of a rich patch like this one without freezing some tracks – but sound-wise it’s really proved its worth.

Stay tuned for future features, as we dive into some of Modular’s other possibilities, including the vast potential found in the first-ever model of Buchla’s legendary Twisted Waveform oscillator!

Softube Modular

The post How to recreate vintage polyphonic character, using Softube Modular appeared first on CDM Create Digital Music.

Escape vanilla modulation: Nikol shows you waveshaping powers

You wouldn’t make music with just simple oscillators, so why use only basic, repetitive modulation? In the latest video in Bastl’s how-to series Patchení s Nikol, waveshaping gets applied to control signals.

A-ha! But what’s waveshaping? Well, Nikol teaches basic classes in modular synthesis to beginners, but she did skip over that. Waveshapers add more complex harmonic content to simple waveform inputs. Basic vanilla waveform in, nice wiggly complex waveform out. (See Wikipedia for that moment when you say, oh, well, why didn’t my math teacher bring in synthesizers when she taught us polynomials, then I would have stayed awake!)

Bastl unveiled the Timber waveshaping module back in May, and we all thought it was cool:

Bastl do waveshaping, MIDI, and magically tune your modules

But when most people hear waveshapers, they think of them just as a fancy oscillator – as a sound source. But in the modular world, you can also imagine it as a way of adding harmonics (read: complexity) to simple control signals, which is what Nikol demonstrates here.

That is, instead of Waveshaper -> out, you’ll route [modulation/control signal/LFO] -> Waveshaper in, and mess with that signal. WahWahWahWah can turn into WahwrrEEEEkittyglrblMrcbb… ok, okay, video:

Keep watching, because this eventually gets into adding variation to a sequenced signal.

You can try this in any software or hardware environment, but you do need your waveshaper to work with your control input. What’s relatively special about Timber, in the hardware domain at least, is its ability to process slow control signals.
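In software, the core trick is tiny: run your control signal through a nonlinear function. Here’s a hedged Python sketch using a tanh shaper – one common choice among many; Timber’s actual curves differ, and the names here are illustrative.

```python
import numpy as np

def waveshape(signal, drive=3.0):
    """Simple tanh waveshaper: a plain sine in, a squashed,
    harmonically richer wave out. Works on audio or slow control signals."""
    return np.tanh(drive * signal)

t = np.linspace(0, 1, 1000, endpoint=False)
lfo = np.sin(2 * np.pi * t)   # vanilla sine LFO, one cycle
shaped = waveshape(lfo)       # flattened tops = added harmonics

print(round(float(np.mean(np.abs(lfo))), 2))  # 0.64 -- plain sine
# The shaped wave spends more time near its extremes than the sine does
print(float(np.mean(np.abs(shaped))) > float(np.mean(np.abs(lfo))))  # True
```

Patch the shaped signal to a filter cutoff instead of the raw LFO and you get exactly the “WahwrrEEEE” effect in the video: the modulation lingers at its extremes and snaps through the middle.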

https://www.bastl-instruments.com/modular/timber/

You can also follow Nikol on Instagram.

But more of Deina the modular dog, please!

Tragically, while Nikol’s English is getting fluent, we Americans are not doing any better with our Czech. So, Bastl, we may need an immersion language program more than synthesis.

The post Escape vanilla modulation: Nikol shows you waveshaping powers appeared first on CDM Create Digital Music.