Split MIDI, without latency, for under $40: meet MeeBlip cubit

You want to play with your music toys together, and instead you wind up unplugging and repatching MIDI. That’s no fun. We wanted to solve this problem for ourselves, without having to trade high performance for low cost or simplicity. The result is MeeBlip cubit.

cubit is the first of a new generation of MeeBlip tools from us, as we work to make synths more accessible and fun for everybody. cubit’s mission is simple: take one input, and turn it into four outputs, with minimum latency, minimum fuss, and at the lowest price possible.

MeeBlip cubit

Why cubit?

Rock-solid timing. Everything you throw at the input jack is copied to the four outputs with ultra-low latency. Result: you can use it for MIDI messages, you can use it for clock, and keep your timing tight. (Under the hood is a hardware MIDI passthrough circuit, with active processing for each individual output – but that just translates to you not having to worry.)

It fits anywhere. Ports are on the top, so you can use it in tight spaces. It’s small and lightweight, so you can always keep it with you.

You’ll always have power. The USB connection means you can get power from your laptop or a standard USB power adapter (optional).

Don’t hum along. Opto-isolated MIDI IN to reduce ground loops. Everything should have this, but not everything does.

Blink! LED light flashes so you know MIDI is coming in – ideal for troubleshooting.

One input, four outputs, no lag – easy.

We love little, inexpensive gear. But makers of that gear often leave out MIDI out/thru just to save space. With cubit, you can put together a portable rig and keep everything jamming together – alone or with friends.

Right now, cubit is launching at just US$39.95 with a USB cable thrown in.

If you need extra adapters or cables, we’ve got those too, so you can start playing right when the box arrives. (Shipping in the US is free, with affordable shipping to Canada and worldwide.) And if you’re grabbing some stocking stuffers, don’t forget to add in a cubit so your gifts can play along and play with others.

Get one while our stocks last. And don’t look in stores – we sell direct to keep costs low.

Full specs from our engineer, James:

  • Passes all data from the MIDI IN to four MIDI OUT jacks
  • Ultra-low latency hardware MIDI pass-through
  • Runs on 5V Power from a computer USB port or optional USB power adapter
  • Opto-isolated MIDI IN to reduce ground loops
  • Individual active signal processing for each MIDI OUT
  • Bright green MIDI data indicator LED flashes when you’re receiving MIDI
  • Measures: 4.25″ x 3″ x 1″, weighs 92 g (3.25 oz)
  • Includes 3 ft (1 m) USB cable
  • Optional 5V USB power adapter available
  • Made in Canada
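
To make the first spec above concrete: in software terms the box is a MIDI thru – every message arriving at the input gets copied to each output. A rough Python analogue using the mido library might look like this (the port names here are hypothetical); the whole point of doing it in hardware is that the copies happen without the latency and jitter a computer adds.

import mido

# Open the single input and the four outputs by name (names are made up here).
inp = mido.open_input("MIDI IN")
outs = [mido.open_output(name) for name in ("OUT 1", "OUT 2", "OUT 3", "OUT 4")]

for msg in inp:        # blocks, yielding each incoming MIDI message
    for out in outs:
        out.send(msg)  # copy every message to every output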

MeeBlip cubit product and order page


Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws

Musicians don’t just endure technology when it breaks. They embrace the broken. So it’s fitting that Holly Herndon’s team have produced a demonic spawn of machine learning algorithms – and that the results are wonderful.

The new music video for the Holly Herndon + Jlin collaboration has been making the rounds online, so you may have seen it already:


But let’s talk about what’s going on here. Holly is continuing a long-running collaboration with producer Jlin, here joined by technologist Mat Dryhurst and coder Jules LaPlace. (The music video itself is directed by Daniel Costa Neves with software developer Leif Ryge, employing still more machine learning techniques to merge the two artists’ faces.)

Machine learning processes are being explored in different media in parallel – characters and text, images, sound, voice, and music. But the results can be all over the place. And ultimately, there are humans as the last stage. We judge the results of the algorithms, project our own desires and fears on what they produce, and imagine anthropomorphic intents and characteristics.

Sometimes errors like over-fitting then take on a personality all their own – even as mathematically sophisticated results fail to inspire.

But that’s not to say these reactions aren’t just as real. Part of what may make the video “Godmother” compelling is not just the buzzword of AI, but the fact that it genuinely sounds different.

The software ‘Spawn,’ developed by Ryge working with the team, is a machine learning-powered encoder. Herndon and company have anthropomorphized that code in their description, but that itself is also fair – not least because the track is composed in such a way to suggest a distinct vocalist.

I love Holly’s poetic description below, but I think it’s also important to be precise about what we’re hearing. That is, we can talk about the evocative qualities of an oboe, but we should definitely still call an oboe an oboe.

So in this case, I confirmed with Dryhurst what I was hearing. The analysis stage employs neural network style transfers – some links on that below, though LaPlace and the artists here did make their own special code brew. And then they merged that with a unique vocoder – the high-quality WORLD vocoder. That is, they feed a bunch of sounds into the encoder, and get some really wild results.

And all of that in turn makes heavy use of the unique qualities of Jlin’s voice, Holly’s own particular compositional approach and the arresting percussive take on these fragmented sounds, Mat’s technological sensibilities, LaPlace’s code, a whole lot of time spent on parameters and training and adaptation…

Forget automation in this instance. All of this involves more human input and more combined human effort than any conventionally produced track would.

Is it worth it? Well, aesthetically, you could make comparisons to artists like Autechre, but then you could do that with anything with mangled sample content in it. And on a literal level, the result is the equivalent of a mangled sample. The results retain recognizable spectral components of the original samples, and they add a whole bunch of sonic artifacts which sound (correctly, really) ‘digital’ and computer-based to our ears.

But it’s also worth noting that what you hear is particular to this vocoder technique and especially to audio texture synthesis and neural network-based style transfer of sound. It’s a commentary on 2018 machine learning not just conceptually, but because what you hear sounds the way it does because of the state of that tech.

And that’s always been the spirit of music. The peculiar sound and behavior of a Theremin says a lot about how radios and circuits respond to a human presence. Vocoders have ultimately proven culturally significant for their aesthetic peculiarities even if their original intention was encoding speech. We respond to broken circuits and broken code on an emotional and cultural level, just as we do acoustic instruments.

In a blog post that’s now a couple of years old – ancient history in machine learning terms, perhaps – Dmitry Ulyanov and Vadim Lebedev acknowledged that some of the techniques they used for “audio texture synthesis and style transfer” used a technique intended for something else. And they implied that the results didn’t work – that they held “stylistic” interest more than functional value.

Dmitry even calls this a partial failure: “I see a slow but consistent interest increase in music/audio by the community, for sure amazing things are just yet to come. I bet in 2017 already we will find a way to make WaveNet practical but my attempts failed so far :)”

Spoiler – that hasn’t really happened in 2017 or 2018. But “failure” to be practical isn’t necessarily a failure. The rising interest has been partly in producing strange results – again, recalling that the vocoder, Theremin, FM synthesis, and many other techniques evolved largely because musicians thought the sounds were cool.

But this also suggests that musicians may uniquely be able to cut through the hype around so-called AI techniques. And that’s important, because these techniques are assigned mystical powers, Wizard of Oz-style.

Big corporations can only hype machine learning when it seems to be magical. But musicians can hype up machine learning even when it breaks – and knowing how and when it breaks is more important than ever. Here’s Holly’s official statement on the release:

For the past two years, we have been building an ensemble in Berlin.

One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances.

Spawn can already do quite a few wonderful things. ‘Godmother’ was generated from her listening to the artworks of her godmother Jlin, and attempting to reimagine them in her mother’s voice.

This piece of music was generated from silence with no samples, edits, or overdubs, and trained with the guidance of Spawn’s godfather Jules LaPlace.

In nurturing collaboration with the enhanced capacities of Spawn, I am able to create music with my voice that far surpass the physical limitations of my body.

Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.

Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic, path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are, and dream up new ways of creating and organizing accordingly.

I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.

– Holly Herndon

Some interesting code:
https://github.com/DmitryUlyanov/neural-style-audio-tf

https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder
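
If you want to poke at that second link, here’s a minimal sketch of what WORLD vocoder analysis and resynthesis looks like with the pyworld wrapper – to be clear, this is just the generic vocoder step, not the actual Spawn pipeline, and the file name and the pitch tweak are placeholders of my own.

import numpy as np
import soundfile as sf   # assumed here for WAV input/output
import pyworld as pw

# Load a mono voice recording (hypothetical file name).
x, fs = sf.read("voice.wav")
x = np.ascontiguousarray(x, dtype=np.float64)  # pyworld expects contiguous float64

# Decompose into pitch contour (f0), spectral envelope, and aperiodicity.
f0, sp, ap = pw.wav2world(x, fs)

# Toy "reimagining": shift the pitch up a fifth, keep the envelope, resynthesize.
y = pw.synthesize(f0 * 1.5, sp, ap, fs)

sf.write("voice_resynth.wav", y, fs)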

Go hear the music:

http://smarturl.it/Godmother

Previously, from the hacklab program I direct, talks and a performance lab with CTM Festival:

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

A look at AI’s strange and dystopian future for art, music, and society

I also wrote about machine learning:

Minds, machines, and centralization: AI and music


TUNNELS imagines Eurorack if you could multiply and patch anywhere

Kids today. First, they want synth modules with the power of computers but the faceplate of vintage hardware – and get just that. Next, they take for granted the flexibility of patching that virtual systems in software have. Well, enter TUNNELS: “infinite multiple” for your Eurorack.

TUNNELS is a set of modules that doesn’t do anything on its own. It’s just a clever patch bay for your modular system. But with the IN and OUT modules, what you get is the ability to duplicate signals (so a signal from one patch cord can go multiple places), and then route signals anywhere you like.

“Infinite” is maybe a bit hyperbolic. (Well, I suppose what you might do with this is potentially, uh, infinite.) It’s really a bus for signals. And maybe not surprisingly, this freer, ‘virtual’ way of thinking about signal comes from people with some software background on one side, and the more flexible Buchla patching methodology on the other. TUNNELS is being launched by Olympia Modular, a collaboration between Patterning developer Ben Kamen and Buchla Development Engineer Charles Seeholzer.

There are two module types. TUNNEL IN just takes a signal and duplicates it to multiple outs. Counting input signals to output signals, that’s 1:6, 2:3 (each signal gets three duplicates, for two signals), or 3:2 (each signal gets two duplicates, for three signals).

You might be fine with just IN, but you can also add one or more OUT modules. That connects via a signal link cable, and duplicates the outputs from the IN module. (Cool!) So as you add more OUT modules, this can get a lot fancier, if you so desire. It means some patches that were impossible before become possible, and other patches that were messy tangles of spaghetti become clean and efficient.

Actually, I’m comparing to software (think Reaktor, Pd, Max), but even some dataflow software could use some utility modules like this just to clean things up. (Most dataflow software does let you connect as many outputs from a patch point as you want. Code environments like SuperCollider also make it really easy to work with virtual ‘buses’ for signal… but then hardware has the advantage of making the results visible.)

Tunnels is on Kickstarter, with a module for as little as US$75 (limited supply). But, come on, spring for the t-shirt, right?

Specs:
TUNNEL IN: buffered multiple, duplicate input across multiple outputs
TUNNEL OUT: add additional outputs at another location – chain infinitely for massive multiple banks, or use as sends for signals like clock and 1v/oct

Add more OUTs, and you get a big bank of multiples.

I’d say it’s like send and receive objects in Max/Pd, but… that’ll only make sense to Max/Pd people, huh? But yeah, like that.

On Kickstarter:
https://www.kickstarter.com/projects/639167978/tunnels-infinite-multiple-for-eurorack-synthesizer


The new iPad Pro has a USB-C port – so what can it do, exactly?

The iPad finally gets a dedicated port for connectivity, as you’d find on a “desktop” computer – and it’s loaded with potential uses, from power to music gear. Let’s break down exactly what it can do.

“USB-C” is a port type; it refers to the reversible, slim, oval-shaped connector on the newest gadgets. But it doesn’t actually describe what the port can do as far as capabilities. So initially, Apple’s reference to the “USB-C” port on the latest iPad Pro generation was pretty vague.

Since then, press have gotten their hands on hardware and Apple themselves have posted technical documentation. Specifically, they’ve got a story up explaining the port’s powers:

https://support.apple.com/en-us/HT209186

Now, keep in mind the most confusing thing about Apple and USB-C is the two different kinds of ports. There’s a Thunderbolt 3 port, as found on the high-end MacBook Pro models and the Mac mini. It’s got a bolt of lightning indicator on it, and is compatible with audio devices like those from Universal Audio, and high-performance video gadgetry. And then there’s the plain-vanilla USB-C port, which has the standard USB icon on it.

All Thunderbolt 3 ports also double as USB-C ports, just not the other way around. The Thunderbolt 3 one is the faster port.

Also important, USB-C is backwards compatible with older USB formats if you have the right cable.

So here’s what you can do with USB-C. The basic story: do more, with fewer specialized adapters and dongles.

You can charge your iPad. Standard USB-C power devices work, as does Apple’s own adapter. Nicely enough, you might even charge faster with a third-party adapter – like one you could share with a laptop that uses USB-C power.

Connect your iPad to a computer. Just as with Lightning-to-USB, you can use USB cables to connect to a USB-C port or older standard USB-A port, for charge and sync.

Connect to displays, projectors, TVs. Here you’ve got a few options, but they all max out at far higher quality than before:

  • USB-C to HDMI. (up to 4K resolution, 60 Hz, with HDMI 2.0 adapter.)
  • USB-C Digital AV Multiport. Apple’s own adapter supports up to 4K resolution, 30Hz. (The iPad display itself is 1080p / 60Hz, video up to 4K, 30Hz.)
  • USB-C displays. Up to 5K, with HDR10 high dynamic range support. Some will even charge the iPad Pro in the process.

High end video makes the new iPad Pro look indispensable as a delivery device for many visual applications – including live visuals. It’s not hard to imagine people carrying these to demo high-end graphics with, or even writing custom software using the latest Apple APIs for 3D graphics and using the iPad Pro live.

Connect storage – a lot of it. Fast. USB-C is now becoming the standard for fast hard drives – USB 3.1/3.2. That theoretically allows for up to 2500 MB/s data access, and Apple says the iPad Pro will now work with 1 TB of storage. I’ve asked them for more clarification, but basically, yes, you can plug in big, fast storage and use it with your iPad, not limiting yourself to internal storage capacity. So that’s a revelation for pros, especially when using the iPad as an accessory to process video and photos and field recordings on the go.

Play audio. There’s no minijack audio output (grrr), but what you do get is audio playback to USB-C audio interfaces, docks, and specialized headphones. There’s also a USB-C to 3.5 mm headphone jack adapter, but that’s pretty useless because it doesn’t include power passthrough – it’s a step backward from what you had before. Better to use a specialized USB-C adapter, which could also mean getting an analog audio output that’s higher quality than the one previously included internally on the iPad range.

And of course you can use AirPlay or Bluetooth, though it doesn’t appear Apple yet supports higher quality Bluetooth streaming, so wires seem to win for those of us who care about sound.

Oh, also interesting – Apple says they’ve added Dolby Digital Plus support over HDMI, but not Dolby Atmos. That hints a bit at consumer devices that do support Atmos – these are rare so far, but it’ll be interesting to watch, and to see whether Apple and Dolby work together or compete in this space.

Speaking of audio and music, though, here’s the other big one:

Work with USB devices. Apple specifically calls out audio and MIDI tools, presumably because musicians remain a big target Pro audience. What’s great here is, you no longer need the extra Lightning to USB “Camera” adapter required on older iPads, which was expensive and only worked with the iPad, and you should be free of some of the more restrictive electrical power capabilities of those past models.

You could also use a standard external keyboard to type on, or wired Ethernet – the latter great for wired use of applications like Liine’s Lemur.

The important thing here is there’s more bandwidth and more power. (Hardware that draws more power may still require external power – but that’s already true on a computer, too.)

The iPad Pro is at last closer to a computer, which makes it a much more serious tool for soft synths, controller tools, audio production, and more.

Charge other stuff. This is also cool – if you ever relied on a laptop as a mobile battery for phones and other accessories, now you can do that with the USB-C on the iPad Pro, too. So that means iPhones as well as other non-Apple phones. You can even plug one iPad into another iPad Pro.

Thunderbolt – no. Note that what you can’t do is connect Thunderbolt hardware. For that, you still want a laptop or desktop computer.

What about Made for iPhone? Apple’s somewhat infamous “MFI” program, which began as “Made for iPod,” is meant to certify certain hardware as compatible with their products. Presumably, that still exists – it would have to do so for the Lightning port products, and it seems likely certain iPad-specific products will still carry the certification.

That isn’t all bad – there are a lot of dodgy USB-C products out there, so some Apple seal of approval may be welcome. But MFI has hamstrung some real “pro” products. The good news as far as USB-C is, because it’s a standard port, devices made for particular “pro” music and audio and video uses no longer need to go through Apple’s certification just to plug directly into the iPad Pro. (And they don’t have to rely on something like the Camera Connection Kit to act as a bridge.)

Apple did not initially respond to CDM’s request for comment on MFI as it relates to the USB-C port.

More resources

MacStories tests the new fast charging and power adapter.

9to5Mac go into some detail on what works and what doesn’t (largely working from the same information I am, I think, but you get another take):
What can you connect to the new iPad Pro with USB-C?

And yeah, this headline gives it away, but I agree totally. Note that Android is offering USB-C across a lot of devices, but that platform lacks some of the support for high-end displays and robust music hardware support that iOS does – meaning it’d be more useful coming from Apple than coming from those Android vendors.

The iPad Pro’s USB-C port is great. It should be on my iPhone, too


What it’s like calibrating headphones and monitors with Sonarworks tools

No studio monitors or headphones are entirely flat. Sonarworks Reference calibrates any studio monitors or headphones with any source. Here’s an explanation of how that works and what the results are like – even if you’re not someone who’s considered calibration before.

CDM is partnering with Sonarworks to bring some content on listening with artist features this month, and I wanted to explore specifically what calibration might mean for the independent producer working at home, in studios, and on the go.

That means this isn’t a review and isn’t independent, but I would prefer to leave that to someone with more engineering background anyway. Sam Inglis wrote one for Sound on Sound at the start of this year, covering the latest version; Adam Kagan reviewed version 3 for Tape Op. (Pro Tools Expert also compared IK Multimedia’s ARC and chose Sonarworks for its UI and systemwide monitoring tools.)

With that out of the way, let’s actually explain what this is for people who might not be familiar with calibration software.

In a way, it’s funny that calibration isn’t part of most music and sound discussions. People working with photos and video and print all expect to calibrate color. Without calibration, no listening environment is really truly neutral and flat. You can adjust a studio to reduce how much it impacts the sound, and you can choose reasonably neutral headphones and studio monitors. But those elements nonetheless color the sound.

I came across Sonarworks Reference partly because a bunch of the engineers and producers I know were already using it – even my mastering engineer.

But as I introduced it to first-time calibration product users, I found they had a lot of questions.

How does calibration work?

First, let’s understand what calibration is. Even studio headphones will color sound – emphasizing certain frequencies, de-emphasizing others. That’s with the sound source right next to your head. Put studio monitors in a room – even a relatively well-treated studio – and you combine the coloration of the speakers themselves as well as reflections and character of the environment around them.

The idea of calibration is to process the sound to cancel out those modifications. Headphones can use existing calibration data. For studio speakers, you take some measurements. You play a known test signal and record it inside the listening environment, then compare the recording to the original and compensate.
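
For the curious, here’s that idea reduced to a deliberately simplified sketch – compare the recording to the reference, estimate the coloration, and invert it. This is the generic concept only, not Sonarworks’ actual algorithm, and the boost limit is an arbitrary assumption.

import numpy as np

def correction_curve(reference, recorded, eps=1e-9, max_boost_db=12.0):
    # Estimate the coloration |H(f)| = |recorded(f)| / |reference(f)|,
    # then return its inverse, clipped so the correction never boosts
    # or cuts by more than max_boost_db (a safety limit for this sketch).
    n = min(len(reference), len(recorded))
    X = np.fft.rfft(reference[:n])
    Y = np.fft.rfft(recorded[:n])
    coloration = np.abs(Y) / (np.abs(X) + eps)
    inverse = 1.0 / (coloration + eps)
    limit = 10 ** (max_boost_db / 20.0)
    return np.clip(inverse, 1.0 / limit, limit)

# In practice you’d fit a bank of EQ filters to this curve, or apply it
# to the monitoring path’s spectrum.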

Hold up this mic, measure some whooping sounds, and you’re done calibrating. No expertise needed.

What can I calibrate?

One of the things that sets Sonarworks Reference apart is that it’s flexible enough to deal with both headphones and studio monitors, and works both as a plug-in and a convenient universal driver.

The Systemwide driver works on Mac and Windows with the final output. That means you can listen everywhere – I’ve listened to SoundCloud audio through Systemwide, for instance, which has been useful for checking how the streaming versions of my mixes sound. This driver works seamlessly with Mac and Windows, supporting Core Audio on the Mac and the latest WASAPI on Windows, which is these days perfectly useful and reliable on my Windows 10 machine. (There’s unfortunately no Linux support, though maybe some enterprising user could get that Windows VST working.)

On the Mac, you select the calibrated output via a pop-up on the menu bar. On Windows, you switch to it just like you would any other audio interface. Once selected, everything you listen to in iTunes, Rekordbox, your Web browser, and anywhere else will be calibrated.

That works for everyday listening, but in production you often want your DAW to control the audio output. (Choosing the plug-in is essential on Windows for use with ASIO; Systemwide doesn’t yet support ASIO though Sonarworks says that’s coming.) In this case, you just add a plug-in to the master bus and the output will be calibrated. You just have to remember to switch it off when you bounce or export audio, since that output is calibrated for your setup, not anyone else’s.

Three pieces of software and a microphone. Sonarworks is a measurement tool, a plug-in and systemwide tool for outputting calibrated sound from any source, and a microphone for measuring.

Do I need a special microphone?

If you’re just calibrating your headphones, you don’t need to do any measurement. But for any other monitoring environment, you’ll need to take a few minutes to record a profile. And so you need a microphone for the job.

Calibrating your headphones is as simple as choosing the make and model number for most popular models.

Part of the convenience of the Sonarworks package is that it includes a ready-to-use measurement mic, and the software is already pre-configured to work with the calibration. These mics are omnidirectional – since the whole point is to pick up a complete image of the sound. And they’re meant to be especially neutral.

Sonarworks’ software is pre-calibrated for use with their included microphone.

Any microphone whose vendor provides a calibration profile – available in standard text form – can also be used with the software in a fully calibrated mode. If you have some cheap musician-friendly omni mic, though, those makers usually don’t do anything of the sort in the way a calibration mic maker would.

I think it’s easier to just use these mics, but I don’t have a big mic cabinet. Production Expert did a test of generic omni mics – mics that aren’t specifically for calibration – and got results that approximate the results of the test mic. In short, they’re good enough if you want to try this out, though Production Expert were being pretty specific with which omni mics they tested, and then you don’t get the same level of integration with the calibration software.

Once you’ve got the mics, you can test different environments – so your untreated home studio and a treated studio, for instance. And you wind up with what might be a useful mic in other situations – I’ve been playing with mine to sample reverb environments, like playing and re-recording sound in a tile bathroom, for instance.

What’s the calibration process like?

Let’s actually walk through what happens.

With headphones, this job is easy. You select your pair of headphones – all the major models are covered – and then you’re done. So when I switch from my Sony to my Beyerdynamic, for instance, I can smooth out some of the irregularities of each of those. That’s made it easier to mix on the road.

For monitors, you run the Reference 4 Measure tool. Beginners I showed the software to got slightly discouraged when they saw the measurement would take 20 minutes but – relax. It’s weirdly kind of fun, and once you’ve done it once, it’ll probably take you half that time to do it again.

The whole thing feels a bit like a Nintendo Wii game. You start by making a longer measurement at the point where your head would normally be sitting. Then you move around to different targets as the software makes whooping sounds through the speakers. Once you’ve covered the full area, you will have dotted a screen with measurements. Then you’ve got a customized measurement for your studio.

Here’s what it looks like in pictures:

Simulate your head! The Measure tool walks you through exactly how to do this with friendly illustrations. It’s easier than putting together IKEA furniture.

You’ll also measure the speakers themselves.

Eventually, you measure the main listening spot in your studio. (And you can see why this might be helpful in studio setup, too.)

Next, you move the mic to each measurement location. There’s interactive visual feedback showing you as you get it in the right position.

Hold the mic steady, and listen as a whooping sound comes out of your speakers and each measurement is completed.

You’ll make your way through a series of these measurements until you’ve dotted the whole screen – a bit like the fingerprint calibration on smartphones.

Oh yeah, so my studio monitors aren’t so flat. When you’re done, you’ll see a curve that shows you the irregularities introduced by both your monitors and your room.

Now you’re ready to listen to a cleaner, clearer, more neutral sound – switch your new calibration on, and if all goes to plan, you’ll get much more neutral sound for listening!

There are other useful features packed into the software, like the ability to apply the curve used by the motion picture industry. (I loved this one – it was like, oh, yeah, that sound!)

It’s also worth noting that Sonarworks have created different calibration types made for real-time usage (great for tracking and improv) and accuracy (great for mixing).

Is all of this useful?

Okay, disclosure statement is at the top, but … my reaction was genuinely holy s***. I thought there would be some subtle impact on the sound. This was more like the feeling – well, as an eyeglass wearer, when my glasses are filthy and I clean them and I can actually see again. Suddenly details of the mix were audible again, and moving between different headphones and listening environments was no longer jarring – like that.

Double blind A/B tests are really important when evaluating the accuracy of these things, but I can at least say, this was a big impact, not a small one. (That is, you’d want to do double blind tests when tasting wine, but this was still more like the difference between wine and beer.)

How you might actually use this: once they adapt to the calibrated results, most people leave the calibrated version on and work from a more neutral environment. Cheap monitors and headphones work a little more like expensive ones; expensive ones work more as intended.

There are other use cases, too, however. Previously I didn’t feel comfortable taking mixes and working on them on the road, because the headphone results were just too different from the studio ones. With calibration, it’s far easier to move back and forth. (And you can always double-check with the calibration switched off, of course.)

The other advantage of Sonarworks’ software is that it does give you so much feedback as you measure from different locations, and that it produces detailed reports. This means if you’re making some changes to a studio setup and moving things around, it’s valuable not just in adapting to the results but giving you some measurements as you work. (It’s not a measurement suite per se, but you can make it double as one.)

Calibrated listening is very likely the future even for consumers. As computation has gotten cheaper, and as software analysis has gotten smarter, it makes sense that these sorts of calibration routines will be applied to giving consumers more reliable sound and to adapting to immersive and 3D listening. For now, they’re great for us as creative people, and it’s nice for us to have them in our working process and not only in the hands of other engineers.

If you’ve got any questions about how this process works as an end user, or other questions for the developers, let us know.

And if you’ve found uses for calibration, we’d love to hear from you.

Sonarworks Reference is available with a free trial:

https://www.sonarworks.com/reference

And some more resources:

Our friends at Erica Synths on this tool:

Plus you can MIDI map the whole thing to make this easier:


Roland’s little VT-4 vocal wonder box just got new reverbs

Roland’s VT-4 is more than a vocal processor. It’s best thought of as a multi-effects box that happens to be vocal friendly. And it’s getting deeper, with new reverb models, downloadable now.

Roland tried this once before with an AIRA vocoder/vocal processor, the VT-1. But that model proved a bit shallow: limited presets and pitch control only through the vocal input meant that it worked great in some situations, but didn’t fit others.

The VT-4 is really about retaining a simple interface, but adding a lot more versatility (and better sound).

As some of you noted in comments when I wrote it up last time, it’s not a looper. (Roland or someone else will gladly sell you one of those.) But what you do get are dead simple controls, including intuitive access to pitch, formant, balance, and reverb on faders. And you can control pitch through either a dial on the front panel or MIDI input. I’ll have a full hands-on review soon, as I’m particularly interested in this as a live processor for vocalists and other live situations.

If your use case is sometimes you want a vocoder, and sometimes you want some extra effects, and sometimes you’re playing with gear or sometimes with a laptop, the VT-4 is all those things. It’s got USB audio support, so you can pack this as your only interface if you just need mic in and stereo output.

And it has a bunch of effects now: re-pitch, harmonize, feedback, chorus, vocoder, echo, tempo-synced delay, dub delay … and some oddities like robot and megaphone and radio. More on that next time.

This update brings new reverb effects. They’re thick, lush, digital-style reverbs:

DEEP REVERB
LARGE REVERB
DARK REVERB
… and the VT-1’s rather nice retro-ish reverb is back as VT-1 REVERB

Deep dark say what? So the VT-1 reverb already was deeper (more reflections) and had a longer tail than the new VT-4 default; that preset restores those possibilities. “Deep” is deeper (more reflections). “Large” has longer duration reflections or simulates a larger room. And “DARK” is like the default, but with more high frequency filtering. You’ll flash the new settings via USB.

Roland is pushing more toward adding features to their gear over time, now via the AIRA minisite, so you can grab this pack there:
https://aira.roland.com/soundlibrary/reverb-pack-1/

And this being Japan, they introduce the pack by saying “It will set you in a magnificent space.” Yes, indeed, it will. That’s lovely.

The VT-4 got a firmware update, too.

1. PITCH AND FORMANT can now be active irrespective of input signal level and length, via a new setting. (Basically, this lets you disable a tracking threshold, I think. I have to play with this a bit.)
2. ROBOT VOICE now won’t hang notes; it disables with note off events.
3. There’s a new MUTE function setting.

VT-4 page:
http://www.roland.co.in/products/vt-4/

I mean, a really easy-to-use pitch + vocoder + delay + reverb for just over $200, and sometimes you can swap it for an audio interface? Seems a no brainer to me. So if you have some questions or things you’d like me to try with this unit I just got in, let me know.



It’s time for music and music technology to be a voice for migrants

From countries across Europe to the USA, migration is at the center of Western politics at the moment. But that raises a question: why aren’t more people who make music, music instruments, and music tech louder about these issues?

Migration – temporary and permanent – is simply a fact of life for a huge group of people, across backgrounds and aspirations. That can involve migration to follow opportunities, and refugees and asylum seekers who move for their own safety and freedom. So if you don’t picture immigrants, migrants, and refugees when you think of your society, you just aren’t thinking.

Musicians ought to be uniquely qualified to speak to these issues, though. Extreme anti-immigration arguments all assume that migrants take away more from a society than they give back. And people in the music world ought to know better. Music has always been based on cultural exchange. Musicians across cultures have always considered touring to make a living. And to put it bluntly, music isn’t a zero sum game. The more you add, the more you create.

Music gets schooled in borders

As music has grown more international, as more artists tour and cross borders, at least the awareness is changing. That’s been especially true in electronic music, in a DJ industry that relies on travel. Resident Advisor has consistently picked up this story over the last couple of years, as artists spoke up about being denied entry to countries while touring.

In a full-length podcast documentary last year, they dug into the ways in which the visa system hurts artists outside the US and EU, with a focus on non-EU artists trying to gain entry to the UK:

Andrew Ryce also wrote about a visa rate hike in the USA back in 2016 – and this in the Obama Administration, not under Trump:

US raises touring artist visa fees by 42%

Now, being a DJ crossing a border isn’t the same as being a refugee running for your life. But then on some other level, it can allow artists to experience immigration infrastructure – both when it works for them, and when it works against them. A whole generation of artists, including even those from relatively privileged Western nations, is now learning the hard way about the immigration system. And that’s something they might have missed as tourists, particularly if they come from places like the USA, western Europe, Australia, and other places well positioned in the system.

The immigration system they see will often come off as absurdist. National policies worldwide categorize performing music as migrant labor and require a visa. In many countries, these requirements are unenforced in all but big-money gigs. But in some countries – the USA, Canada, and UK being prime examples – they’re rigorously enforced, and not coincidentally, the required visas have high fees.

Showing up at a border carrying music equipment or a bag of vinyl records is an instant red flag – whether a paid gig is your intention or not. (I’m surprised, actually, that no one talks about this in regards to the rise of the USB stick DJ. If you aren’t carrying a controller or any records, sailing through as a tourist is a lot easier.) Border officials will often ask visitors to unlock phones, hand over social media passwords. They’ll search Facebook events by name to find gigs. Or they’ll even just view the presence of a musical instrument as a violation.

Being seen as “illegal” because you’re traveling with a guitar or some records is a pretty good illustration of how immigration can criminalize simple, innocent acts. Whatever the intention behind that law, it’s clear there’s something off here – especially given the kinds of illegality that can cross borders.

When protection isn’t

This is not to argue for open borders. There are times when you want border protections. I worked briefly in environmental advocacy as we worked on invasive species that were hitching a ride on container ships – think bugs killing trees and no more maple syrup on your pancakes, among other things. I was also in New York on 9/11 and watched from my roof – that was a very visible demonstration of visa security oversight that had failed. Part of the aim of customs and immigration is to stop the movement of dangerous people and things, and I don’t think any rational person would argue with that.

But even as a tiny microcosm of the larger immigration system, music is a good example of how laws can be uneven, counter-intuitive, and counterproductive. The US and Canada, for instance, do have an open border for tourists. So if an experimental ambient musician from Toronto comes to play a gig in Cleveland, that’s not a security threat – they could do the same as a tourist. It’s also a stretch of the imagination that this individual would have a negative impact on the US economy. Maybe the artist makes a hundred bucks cash and … spends it all inside the USA, not to mention brings in more money for the venue and the people employed by it. Or maybe they make $1000 – a sum that would be wiped out by the US visa fee, to say nothing of slow US visa processing. Again, that concert creates more economic activity inside the US economy, and it’s very likely the American artist sharing the bill goes up to Montreal and plays with them next month on top of it. I could go on, but it’s … well, boring and obvious.

Artists and presenters worldwide often simply ignore this visa system because it’s slow, expensive, and unreliable. And so it costs economies (and likely many immigration authorities) revenue. It costs societies value and artistic and cultural exchange.

Of course, scale that up and the same is true, across other fields. Immigrants tend to pay more into government services than they take out, they tend to own businesses that employ more local people (so they create jobs), they tend to invent new technologies (so they create jobs again), and so on.

Ellis Island, NYC. 12 million people passed through here – not all of my family who came to the USA, but some. I’ve now come the other way through Tegel Airport and the Ausländerbehörde, Berlin. Photo (CC-BY-ND) A. Strakey.

Advocacy and music

Immigration advocacy could be seen as something in the charter of anyone in the music industry or musical instruments industry.

Music technology suffers as borders are shut down, too. Making musical instruments and tools requires highly specialized labor working in highly specialized environments. From production to engineering to marketing, it’s an international business. I actually can’t think of any major manufacturer that doesn’t rely on immigrants in key roles. (Even many tiny makers involve immigrants.)

And the traditional music industry leans heavily on immigrant talent, too. Those at the top of the industry have powerful lobbying efforts – efforts that could support greater cultural exchange and rights for travelers. Certainly, its members are often on the road. But let’s take the Recording Academy (the folks behind the Grammy Awards).

Instead, their efforts seem to fixate on domestic intellectual property law. So the Recording Academy and others were big on the Music Modernization Act – okay, fine, a law to support compensation for creators.

But the same organization advocated on behalf of instruments traveling – domestic rules around carry-on and checked instruments – not necessarily humans. So it could be that there’s more interest in your guitar getting across borders than people.

I don’t want to be unfair to the Recording Academy – and not just because I think it might hurt my Grammy winning chances. (Hey, stop laughing.) No, I think it’s more that we as a community have generally failed to take up this issue in any widespread way. (I sincerely hope someone out there works for the record industry and writes to say that you’re actually working on this and I’m wrong.)

More than anything else, music can cross borders. It can speak to people when you don’t speak their language, literally. When music travels, emotion and expression travels – artists and technology alike.

It’s personal – isn’t it for you?

I personally feel the impact of all of this, now having been seven years in Berlin, and able to enjoy opportunities, connections, and perspective that come from living in Germany and working with people both from Germany and abroad. I feel hugely grateful to the German state for allowing my business to immigrate (my initial visa was a business visa, which involved some interesting bureaucracy explaining to the Berlin Senate what this site is about). I’ve even benefited from the support of programs like the Goethe Institut and host governments to work in cultural diplomacy.

I’ve also had the chance to be involved writing in support of visas and financial backing for artists coming from Iran, Mexico, Kazakhstan, and many other places, for programs I’ve worked on.

And all of this is really a luxury – even when we’re talking about artists traveling to support their careers and feed themselves. For many people, migration is a matter of survival. Sometimes the threats to their lives come from geopolitical and economic policies engineered by the governments we come from – meaning as citizens, we share some responsibility for the impact others have felt. But whether or not that’s the case, I would hope we feel that obligation as human beings. That’s the basis of international rule of law on accepting refugees and granting asylum. It’s the reason those principles are uncompromising and sometimes even challenging. Our world is held together – or not – based on that basic fairness we afford to fellow humans. If people come to where we live and claim their survival and freedom depends on taking them in, we accept the obligation to at least listen to their case.

Those of us in the music world could use our privilege, and the fact that our medium is so essential to human expression, to be among the loudest voices for these human rights. When we live in countries who listen to us, we should talk to other citizens and talk to our governments. We should tell the stories that make these issues more relatable. We should do what some people I know are doing in the music world, too – work on education and involvement for refugees, help them to feel at home in our communities and to develop whatever they need to make a home here, and make people feel welcome at the events we produce.

That’s just the principles, not policies. But I know a lot of people in my own circle have worked on the policy and advocacy sides here. I certainly would invite you to share what we might do. If you’ve been impacted by immigration obstacles and have ideas of how we can help, I hope we hear that, too.

Some likely policy areas:
Supporting the rights of refugees and asylum seekers
Supporting refugee and asylum seeker integration
Advocating for more open visa policies for artists – keeping fees low, and supporting exchange
Advocating the use of music and culture, and music technology, as a form of cultural diplomacy
Supporting organizations that connect artists and creative technologists across borders

And so on…

But I do hope that as musicians, we work with people who share basic beliefs in caring for other people. I know there’s no single “community” or “industry” that can offer that. But we certainly can try to build our own circle in a way that does.

Some examples from here in Berlin of people working on refugee issues. I would argue immigration policy can find connections across refugees and migrants, asylum seekers and touring musicians, as everyone encounters the same larger apparatus and set of laws:

Photo at top: CC-BY Nicola Romagna.


You can now add VST support to VCV Rack, the virtual modular

VCV Rack is already a powerful, free modular platform that synth and modular fans will want. But a $30 add-on makes it more powerful when integrating with your current hardware and software – VST plug-in support.

Watch:

It’s called Host, and for $30, it adds full support for VST2 instruments and effects, including the ability to route control, gate, audio, and MIDI to the appropriate places. This is a big deal, because it means you can integrate VST plug-ins with your virtual modular environment, for additional software instruments and effects. And it also means you can work with hardware more easily, because you can add in VST MIDI controller plug-ins. For instance, without our urging, someone just made a MIDI controller plug-in for our own MeeBlip hardware synth (currently not in stock, new hardware coming soon).

You already are able to integrate VCV’s virtual modular with hardware modular using audio and a compatible audio interface (one with DC coupling, like the MOTU range). Now you can also easily integrate outboard MIDI hardware, without having to manually select CC numbers and so on as previously.

Hell, you could go totally crazy and run Softube Modular inside VCV Rack. (Yo dawg, I heard you like modular, so I put a modular inside your modular so you can modulate the modular modular modules. Uh… kids, ask your parents who Xzibit was? Or what MTV was, even?)

What you need to know

Is this part of the free VCV Rack? No. Rack itself is free, but you have to buy “Host” as a US$30 add-on. Still, that means the modular environment and a whole bunch of amazing modules are totally free, so that thirty bucks is pretty easy to swallow!

What plug-ins will work? Plug-ins need to be 64-bit, they need to be VST 2.x (that’s most plugs, but not some recent VST3-only models), and you can run on Windows and Mac.

What can you route? Modular is no fun without patching! So here we go:

There’s Host for instruments – 1v/octave CV for controlling pitch, and gate input for controlling note events. (Forget MIDI and start thinking in voltages for a second here: VCV notes that “When the gate voltages rises, a MIDI note is triggered according to the current 1V/oct signal, rounded to the nearest note. This note is held until the gate falls to 0V.”)
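
Translated into a quick sketch (my own reading of that description, not VCV’s code), the gate-plus-1V/oct logic looks something like this:

def cv_to_midi_note(pitch_volts, base_note=60):
    # 1V/oct rounded to the nearest semitone; putting middle C (60) at 0V
    # is an assumption for illustration.
    return int(round(base_note + 12.0 * pitch_volts))

def gate_cv_to_events(samples, gate_threshold=1.0):
    # samples: iterable of (gate_volts, pitch_volts) pairs.
    # A rising gate triggers note-on at the current pitch CV; that note is
    # held until the gate falls back to 0V.
    events, held = [], None
    for gate, pitch in samples:
        if held is None and gate >= gate_threshold:
            held = cv_to_midi_note(pitch)
            events.append(("note_on", held))
        elif held is not None and gate <= 0.0:
            events.append(("note_off", held))
            held = None
    return events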

Right now there’s only monophonic input. But you do also get easy access to note velocity and pitch wheel mappings.

Host-FX handles effects, pedals, and processors. Input stereo audio (or mono mapped to stereo), get stereo output. It doesn’t sound like multichannel plug-ins are supported yet.

Both Host and Host-FX let you choose plug-in parameters and map them to CV – just be careful mapping fast modulation signals, as plug-ins aren’t normally built for audio-rate modulation. (We’ll have to play with this and report back on some approaches.)

Will I need a fast computer? Not for MIDI integration, no. But I find the happiness level of VCV Rack – like a lot of recent synth and modular efforts – is directly proportional to people having fast CPUs. (The Windows platform has some affordable options there if Apple is too rich for your blood.)

What platforms? Mac and Windows, it seems. VCV also supports Linux, but there your best bet is probably to add the optional installation of JACK, and … this is really the subject for a different article.

How to record your work

I actually was just pondering this. I’ve been using ReaRoute with Reaper to record VCV Rack on Windows, which for me was the most stable option. But it also makes sense to have a recorder inside the modular environment.

Our friend Chaircrusher recommends the NYSTHI modules for VCV Rack. It’s a huge collection but there’s both a 2-channel and 4-/8-track recorder in there, among many others – see pic:

NYSTHI modules for VCV Rack (free):
https://vcvrack.com/plugins.html#nysthi
https://github.com/nysthi/nysthi/blob/master/README.md

And have fun with the latest Rack updates.

Just remember when adding Host, plug-ins inside a host can cause… stability issues.

But it’s definitely a good excuse to crack open VCV Rack again! And also nice to have this when traveling… a modular studio in your hotel room, without needing a carry-on allowance. Or hide from your family over the holiday and make modular patches. Whatever.

https://vcvrack.com/Host.html


Cyber Monday means still more deals on music software

If you snoozed on some deals this weekend, and you’re longing to build out your software arsenal, erm, legally, it’s not too late. Here are some of the best deals we missed over the weekend plus some Cyber Monday news.

And yes, if you think I’d do this just as an excuse to run an image of some Cybermen – vintage ones looking like BBC actors dressed in balaclavas and some combination of hardware store parts that makes it look like they have an air conditioner strapped to their chest – oh absolutely I would.

Ah, back to deals.

pluginboutique.com continues its weekend sale with a bunch of Monday “flash” deals. That includes ROLI’s wonderful new Cypher2 synth on sale, Softube Tape for thirty bucks, and many others – plus loads of plug-ins are $1 or free, meaning you can go shopping for next to nothing or actually nothing. Also, pluginboutique.com’s site is up, which isn’t always the case with some of these flash deals from plug-in developers, so they’re a good place to check out.

Some examples:
Loopmasters Studio Bundle at 90% off, or $132 for a bunch of stuff.

iZotope at 78% off (weird number, but great)!

AAS / Applied Acoustics for 50% off – I’ve always loved their unique physical modeling creations.

The beautiful Sinevibes creations for 30% off.

Harrison make wonderful consoles. Now Mixbus was already kind of ridiculously affordable – US$79 buys you a full console emulation that’s great for mixdowns and mastering and the like. But for Cyber Monday, that’s an “okay, you have to buy this” $19, which is just stupidly good. Alternatively get Mixbus plus 5 plug-ins for $39. They didn’t pay me to say that, either; at those prices, I don’t imagine they have much marketing budget!

Enter code CYBERMON18 when you shop their store.

Tracktion are back with 50% off everything today only. Try entering code EPIC2018, too.

Sugarbytes have everything on sale: EUR69 plug-ins, EUR333 bundle, plus up to 50% on iOS Apps.

Propellerhead have a huge Cyber Monday sale, and with loads of big discounts on Reason add-ons and the cheapest ever price on an upgrade, it’s nice fodder for their loyal users. Euclidean rhythms, the KORG Polysix, the Parsec “spectral synth,” the Resonans physical modeling synth – some serious goodies there on sale. And €99 for the upgrade means you can finally stop putting off getting the latest Reason 10. (Not only is VST compatibility in there, but the Props have done a lot lately on usability and stability meaning now seems a good time to jump for Reason users.)

Eventide have their software on sale through the end of the month. This is really the most affordable way to get Eventide sound in your productions (short of a subscription deal).

Anthology XI for US$699 instead of the usual $1799 is especially notable. Having those 23 plug-ins feels a bit like you’ve just rented a serious studio, virtually.

If that’s too much to budget, consider also the new Elevate Bundle – makes your sounds utterly massive, and the three do fit well together, so $79 is a steal.

There’s also the excellent H3000 delay on steep discount, and the luscious Blackhole reverb for just $69. (Or for more studio reverb sounds, the ‘Heroes’/Visconti-inspired Tverb for $99.) And of course the rest of the lineup, too.

Waves had a big sale over the weekend, but for Cyber Monday they also have a new synth – the Flow Motion FM Synth. This crazy UI is certainly a new take on making FM easier to grasp, and it’s got an intro price of US$39. (I have no idea how good it is as I haven’t tried it yet, but they’ve got my attention – and NI aren’t shipping the new Massive yet, so Waves gets in here first with their own hybrid take!) And Waves are doing a buy 2 get 1 free deal, as well.

After introducing a vocal plug-in over the weekend, Waves are using Cyber Monday for a product launch, too – the Flow Motion FM synth seen here.

Output have added a 25% off discount on their software, even including their already discounted bundle, for Cyber Monday.

Steinberg have a big sale this week, including apps, with up to 60% off. That’s a big deal for fans of their production software and plug-ins, but also take note that their terrific mobile app Cubasis – perhaps the most feature-complete DAW for iOS – is half off, as is the Waves in-app purchase for the same.

App lovers, it’s worth checking the Android App Store / Google Play as a bunch of stuff is on sale now – too much to track, probably. But some top picks this week: Imaginando’s Traktor and Live controllers, iOS and Android, are all 40% off – everything.

KORG’s apps are still 50% off.

And the terrific MoMinstruments line is all on sale:
Elastic Drums: 10,99€ -> 5,49€, $9.99 -> $4.99
Elastic FX: 10,99€ -> 5,49€, $9.99 -> $4.99
iLep: 10,99€ -> 5,49€, $9.99 -> $4.99
fluXpad: 8,99€ -> 3,99€, $7.99 -> $4.49
WretchUp: 4,49€ -> 2,29€, $3.99 -> $1.99

Puremagnetik have US$10 Cyber Monday deals – $20 each, then enter code BLACKFRIDAY18 for 50% off on top of that – so ten bucks for String Machines XL, Retro Computers +, and Soniq’s classic synths.

Still going… A lot of the deals I wrote up over the weekend are still on, including Arturia and Soundtoys.

Native Instruments have a 50% off sale still going. Tons of stuff in there, but Reaktor 6 for a hundred bucks – full version, meaning you don’t need a past version – that’s insane. That’s a hundred bucks to buy you what could be the last plug-in you ever need.

IRRUPT/audio have a 50% off deal on their unique sound selection if you enter code IRRUPT-VIP.

Sonic Faction have a 40% off sale on instruments for Ableton Live and Native Instruments Kontakt – enter code CYBRMNDY40

Need to learn things and not just buy them? Askvideo/Macprovideo have a deal for today only with US$75 for a yearly pass (the price that usually gets you just three months), or 75% off all à la carte training.

And SONAR+D in Barcelona has a 200EUR delegate pass sale today only.

Some of the deals are expiring, but some last through today or through Friday (with a few straggling into December), so check out our previous guide and guide to other guides:

Here’s where to find all the don’t-miss deals for Black Friday weekend
