You want to play with your music toys together, and instead you wind up unplugging and repatching MIDI. That’s no fun. We wanted to solve this problem for ourselves, without having to trade high performance for low cost or simplicity. The result is MeeBlip cubit.
cubit is the first of a new generation of MeeBlip tools from us, as we work to make synths more accessible and fun for everybody. cubit’s mission is simple: take one input and turn it into four outputs, with minimum latency, minimum fuss, and at the lowest price possible.
Rock-solid timing. Everything you throw at the input jack is copied to the four outputs with ultra-low latency. Result: you can use it for MIDI messages, you can use it for clock, and keep your timing tight. (Under the hood is a hardware MIDI passthrough circuit, with active processing for each individual output – but that just translates to you not having to worry.)
It fits anywhere. Ports are on the top, so you can use it in tight spaces. It’s small and lightweight, so you can always keep it with you.
You’ll always have power. The USB connection means you can get power from your laptop or a standard USB power adapter (optional).
Don’t hum along. Opto-isolated MIDI IN to reduce ground loops. Everything should have this, but not everything does.
Blink! LED light flashes so you know MIDI is coming in – ideal for troubleshooting.
One input, four outputs, no lag – easy.
We love little, inexpensive gear. But makers often leave out MIDI OUT/THRU jacks just to save space. With cubit, you can put together a portable rig and keep everything jamming together – alone or with friends.
If you need extra adapters or cables, we’ve got those too, so you can start playing right when the box arrives. (Shipping in the US is free, with affordable shipping to Canada and worldwide.) And if you’re grabbing some stocking stuffers, don’t forget to add in a cubit so your gifts can play along and play with others.
Get one while our stocks last. And don’t look in stores – we sell direct to keep costs low.
Full specs from our engineer, James:
Passes all data from the MIDI IN to four MIDI OUT jacks
Ultra-low latency hardware MIDI pass-through
Runs on 5V power from a computer USB port or optional USB power adapter
Opto-isolated MIDI IN to reduce ground loops
Individual active signal processing for each MIDI OUT
Bright green MIDI data indicator LED flashes when you’re receiving MIDI
Musicians don’t just endure technology when it breaks. They embrace the broken. So it’s fitting that Holly Herndon’s team have produced a demonic spawn of machine learning algorithms – and that the results are wonderful.
The new music video for the Holly Herndon + Jlin collaboration has been making the rounds online, so you may have seen it already:
But let’s talk about what’s going on here. Holly is continuing a long-running collaboration with producer Jlin, here joined by technologist Mat Dryhurst and coder Jules LaPlace. (The music video itself is directed by Daniel Costa Neves with software developer Leif Ryge, employing still more machine learning technique to merge the two artists’ faces.)
Machine learning processes are being explored in different media in parallel – characters and text, images, sound, voice, and music. But the results can be all over the place. And ultimately, humans are the last stage. We judge the results of the algorithms, project our own desires and fears onto what they produce, and imagine anthropomorphic intents and characteristics.
Sometimes errors like over-fitting then take on a personality all their own – even as mathematically sophisticated results fail to inspire.
But that’s not to say these reactions aren’t just as real. Part of what may make the video “Godmother” compelling is not just the buzzword of AI, but the fact that it genuinely sounds different.
The software ‘Spawn,’ developed by Ryge working with the team, is a machine learning-powered encoder. Herndon and company have anthropomorphized that code in their description, but that itself is also fair – not least because the track is composed in such a way to suggest a distinct vocalist.
I love Holly’s poetic description below, but I think it’s also important to be precise about what we’re hearing. That is, we can talk about the evocative qualities of an oboe, but we should definitely still call an oboe an oboe.
So in this case, I confirmed with Dryhurst what I was hearing. The analysis stage employs neural network style transfer – some links on that below, though LaPlace and the artists here did make their own special code brew. And then they merged that with a unique vocoder – the high-quality WORLD vocoder. That is, they feed a bunch of sounds into the encoder, and get some really wild results.
And all of that in turn makes heavy use of the unique qualities of Jlin’s voice, Holly’s own particular compositional approach and the arresting percussive take on these fragmented sounds, Mat’s technological sensibilities, LaPlace’s code, a whole lot of time spent on parameters and training and adaptation…
Forget automation in this instance. All of this involves more human input and more combined human effort than any conventionally produced track would.
Is it worth it? Well, aesthetically, you could make comparisons to artists like Autechre, but then you could do that with anything with mangled sample content in it. And on a literal level, the result is the equivalent of a mangled sample. The results retain recognizable spectral components of the original samples, and they add a whole bunch of sonic artifacts which sound (correctly, really) ‘digital’ and computer-based to our ears.
But it’s also worth noting that what you hear is particular to this vocoder technique and especially to audio texture synthesis and neural network-based style transfer of sound. It’s a commentary on 2018 machine learning not just conceptually, but because what you hear sounds the way it does because of the state of that tech.
And that’s always been the spirit of music. The peculiar sound and behavior of a Theremin says a lot about how radios and circuits respond to a human presence. Vocoders have ultimately proven culturally significant for their aesthetic peculiarities even if their original intention was encoding speech. We respond to broken circuits and broken code on an emotional and cultural level, just as we do acoustic instruments.
In a blog post that’s now a couple of years old – ancient history in machine learning terms, perhaps – Dmitry Ulyanov and Vadim Lebedev acknowledged that some of the techniques they used for “audio texture synthesis and style transfer” were intended for something else. And they implied that the results didn’t quite work – that they held “stylistic” interest more than functional value.
Dmitry even calls this a partial failure: “I see a slow but consistent interest increase in music/audio by the community, for sure amazing things are just yet to come. I bet in 2017 already we will find a way to make WaveNet practical but my attempts failed so far :)”
Spoiler – that hasn’t really happened in 2017 or 2018. But “failure” to be practical isn’t necessarily a failure. The rising interest has been partly in producing strange results – again, recalling that the vocoder, Theremin, FM synthesis, and many other techniques evolved largely because musicians thought the sounds were cool.
But this also suggests that musicians may uniquely be able to cut through the hype around so-called AI techniques. And that’s important, because these techniques are assigned mystical powers, Wizard of Oz-style.
Big corporations can only hype machine learning when it seems magical. But musicians can embrace machine learning even when it breaks – and knowing how and when it breaks is more important than ever. Here’s Holly’s official statement on the release:
For the past two years, we have been building an ensemble in Berlin.
One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances.
Spawn can already do quite a few wonderful things. ‘Godmother’ was generated from her listening to the artworks of her godmother Jlin, and attempting to reimagine them in her mother’s voice.
This piece of music was generated from silence with no samples, edits, or overdubs, and trained with the guidance of Spawn’s godfather Jules LaPlace.
In nurturing collaboration with the enhanced capacities of Spawn, I am able to create music with my voice that far surpass the physical limitations of my body.
Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.
Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic, path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are, and dream up new ways of creating and organizing accordingly.
I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.
Kids today. First, they want synth modules with the power of computers but the faceplate of vintage hardware – and get just that. Next, they take for granted the flexibility of patching that virtual systems in software have. Well, enter TUNNELS: “infinite multiple” for your Eurorack.
TUNNELS is a set of modules that doesn’t do anything on its own. It’s just a clever patch bay for your modular system. But with the IN and OUT modules, what you get is the ability to duplicate signals (so a signal from one patch cord can go multiple places), and then route signals anywhere you like.
“Infinite” is maybe a bit hyperbolic. (Well, I suppose what you might do with this is potentially, uh, infinite.) It’s really a bus for signals. And maybe not surprisingly, this freer, ‘virtual’ way of thinking about signal comes from people with some software background on one side, and the more flexible Buchla patching methodology on the other. TUNNELS is being launched by Olympia Modular, a collaboration between Patterning developer Ben Kamen and Buchla Development Engineer Charles Seeholzer.
There are two module types. TUNNEL IN just takes a signal and duplicates it to multiple outs. In terms of input-to-output routing, that’s 1:6 (one signal, six duplicates), 2:3 (three duplicates each for two signals), or 3:2 (two duplicates each for three signals).
You might be fine with just IN, but you can also add one or more OUT modules. An OUT connects via a link cable and duplicates the outputs from the IN module. (Cool!) So as you add more OUT modules, this can get a lot fancier, if you so desire. It means some patches that were impossible before become possible, and other patches that were messy tangles of spaghetti become clean and efficient.
Actually, I’m comparing to software (think Reaktor, Pd, Max), but even some dataflow software could use some utility modules like this just to clean things up. (Most dataflow software does let you connect as many outputs from a patch point as you want. Code environments like SuperCollider also make it really easy to work with virtual ‘buses’ for signal… but then hardware has the advantage of making the results visible.)
Tunnels is on Kickstarter, with a module for as little as US$75 (limited supply). But, come on, spring for the t-shirt, right?
TUNNEL IN: buffered multiple, duplicate input across multiple outputs
TUNNEL OUT: add additional outputs at another location – chain infinitely for massive multiple banks, or use as sends for signals like clock and 1v/oct
Add more OUTs, and you get a big bank of multiples.
I’d say it’s like send and receive objects in Max/Pd, but… that’ll only make sense to Max/Pd people, huh? But yeah, like that.
The iPad finally gets a dedicated port for connectivity, as you’d find on a “desktop” computer – and it’s loaded with potential uses, from power to music gear. Let’s break down exactly what it can do.
“USB-C” is a port type; it refers to the reversible, slim, oval-shaped connector on the newest gadgets. But it doesn’t actually describe what the port can do as far as capabilities. So initially, Apple’s reference to the “USB-C” port on the latest iPad Pro generation was pretty vague.
Since then, press have gotten their hands on hardware and Apple themselves have posted technical documentation. Specifically, they’ve got a story up explaining the port’s powers:
Now, keep in mind the most confusing thing about Apple and USB-C is that there are two different kinds of ports. There’s the Thunderbolt 3 port, as found on the high-end MacBook Pro models and the Mac mini. It’s got a lightning bolt indicator on it, and is compatible with audio devices like those from Universal Audio, and high-performance video gadgetry. And then there’s the plain-vanilla USB-C port, which has the standard USB icon on it.
All Thunderbolt 3 ports also double as USB-C ports, just not the other way around. The Thunderbolt 3 one is the faster port.
Also important, USB-C is backwards compatible with older USB formats if you have the right cable.
So here’s what you can do with USB-C. The basic story: do more, with fewer specialized adapters and dongles.
You can charge your iPad. Standard USB-C power devices work, as well as Apple’s own adapter. Nicely enough, you might even charge faster with a third-party adapter – like one you could share with a laptop that uses USB-C power.
Connect your iPad to a computer. Just as with Lightning-to-USB, you can use USB cables to connect to a USB-C port or older standard USB-A port, for charging and syncing.
Connect to displays, projectors, TVs. Here you’ve got a few options, but they all max out at far higher quality than before:
USB-C to HDMI. (up to 4K resolution, 60 Hz, with HDMI 2.0 adapter.)
USB-C Digital AV Multiport. Apple’s own adapter supports up to 4K resolution, 30Hz. (The iPad display itself is 1080p / 60Hz, video up to 4K, 30Hz.)
USB-C displays. Up to 5K, with HDR10 high dynamic range support. Some will even charge the iPad Pro in the process.
High end video makes the new iPad Pro look indispensable as a delivery device for many visual applications – including live visuals. It’s not hard to imagine people carrying these to demo high-end graphics with, or even writing custom software using the latest Apple APIs for 3D graphics and using the iPad Pro live.
Connect storage – a lot of it. Fast. USB-C is now becoming the standard for fast hard drives – here, USB 3.1 Gen 2, which theoretically allows up to 10 Gbps (roughly 1,250 MB/s). And Apple says the iPad Pro will now work with 1 TB of storage. I’ve asked them for more clarification, but basically, yes, you can plug in big, fast storage and use it with your iPad, not limiting yourself to internal storage capacity. So that’s a revelation for pros, especially when using the iPad as an accessory to process video and photos and field recordings on the go.
Play audio. There’s no minijack audio output (grrr), but what you do get is audio playback to USB-C audio interfaces, docks, and specialized headphones. There’s also a USB-C to 3.5 mm headphone jack adapter, but that’s pretty useless because it doesn’t include power passthrough – it’s a step backward from what you had before. Better to use a specialized USB-C adapter, which could also mean getting an analog audio output that’s higher quality than the one previously included internally on the iPad range.
And of course you can use AirPlay or Bluetooth, though it doesn’t appear Apple yet supports higher quality Bluetooth streaming, so wires seem to win for those of us who care about sound.
Oh, also interesting – Apple says they’ve added Dolby Digital Plus support over HDMI, but not Dolby Atmos. That hints a bit at consumer devices that do support Atmos – these are rare so far, but it’ll be interesting to watch, and to see whether Apple and Dolby work together or compete in this space.
Speaking of audio and music, though, here’s the other big one:
Work with USB devices. Apple specifically calls out audio and MIDI tools, presumably because musicians remain a big target audience for the Pro. What’s great here is, you no longer need the extra Lightning to USB “Camera” adapter required on older iPads, which was expensive and only worked with the iPad, and you should be free of some of the more restrictive electrical power limitations of those past models.
You could also use a standard external keyboard to type on, or wired Ethernet – the latter great for wired use of applications like Liine’s Lemur.
The important thing here is there’s more bandwidth and more power. (Hardware that draws more power may still require external power – but that’s already true on a computer, too.)
The iPad Pro is at last closer to a computer, which makes it a much more serious tool for soft synths, controller tools, audio production, and more.
Charge other stuff. This is also cool – if you ever relied on a laptop as a mobile battery for phones and other accessories, now you can do that with the USB-C port on the iPad Pro, too. That means iPhones as well as non-Apple phones. You can even plug one iPad into another iPad Pro.
Thunderbolt – no. Note that what you can’t do is connect Thunderbolt hardware. For that, you still want a laptop or desktop computer.
What about Made for iPhone? Apple’s somewhat infamous “MFi” program, which began as “Made for iPod,” is meant to certify certain hardware as compatible with their products. Presumably, that still exists – it would have to for Lightning port products, and it seems likely certain iPad-specific products will still carry the certification.
That isn’t all bad – there are a lot of dodgy USB-C products out there, so some Apple seal of approval may be welcome. But MFi has hamstrung some real “pro” products. The good news as far as USB-C goes: because it’s a standard port, devices made for particular “pro” music, audio, and video uses no longer need to go through Apple’s certification just to plug directly into the iPad Pro. (And they don’t have to rely on something like the Camera Connection Kit to act as a bridge.)
Apple did not initially respond to CDM’s request for comment on MFi as it relates to the USB-C port.
And yeah, this headline gives it away, but I agree totally. Note that Android offers USB-C across a lot of devices, but that platform lacks some of the support for high-end displays and robust music hardware that iOS has – meaning the port is more useful coming from Apple than from those Android vendors.
No studio monitors or headphones are entirely flat. Sonarworks Reference calibrates any studio monitors or headphones with any source. Here’s an explanation of how that works and what the results are like – even if you’re not someone who’s considered calibration before.
CDM is partnering with Sonarworks to bring some content on listening with artist features this month, and I wanted to explore specifically what calibration might mean for the independent producer working at home, in studios, and on the go.
With that out of the way, let’s actually explain what this is for people who might not be familiar with calibration software.
In a way, it’s funny that calibration isn’t part of most music and sound discussions. People working with photos, video, and print all expect to calibrate color. Without calibration, no listening environment is truly neutral and flat. You can adjust a studio to reduce how much it impacts the sound, and you can choose reasonably neutral headphones and studio monitors. But those elements nonetheless color the sound.
I came across Sonarworks Reference partly because a bunch of the engineers and producers I know were already using it – even my mastering engineer.
But as I introduced it to first-time calibration product users, I found they had a lot of questions.
How does calibration work?
First, let’s understand what calibration is. Even studio headphones will color sound – emphasizing certain frequencies, de-emphasizing others. And that’s with the sound source right next to your head. Put studio monitors in a room – even a relatively well-treated studio – and you combine the coloration of the speakers themselves with the reflections and character of the environment around them.
The idea of calibration is to process the sound to cancel out those modifications. Headphones can use existing calibration data. For studio speakers, you take some measurements. You play a known test signal and record it inside the listening environment, then compare the recording to the original and compensate.
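The compare-and-compensate step above can be sketched in a few lines. This is my own simplified, magnitude-only illustration with NumPy – not Sonarworks’ actual algorithm, which also handles phase, smoothing, and multiple measurement positions:

```python
import numpy as np

def compensation_curve(test_signal, recording, eps=1e-8):
    """Estimate per-frequency gains that undo the speaker/room coloration.

    test_signal: the known reference signal that was played
    recording:   the same signal as captured by the measurement mic
    Returns a gain per FFT bin; applying these gains to future audio
    makes the measured response approximate the flat reference.
    """
    ref = np.abs(np.fft.rfft(test_signal))
    meas = np.abs(np.fft.rfft(recording))
    # Where the room boosted a frequency, cut it – and vice versa.
    return ref / (meas + eps)

# Toy example: pretend the "room" boosts everything by 6 dB.
rng = np.random.default_rng(0)
signal = rng.standard_normal(1024)
colored = 2.0 * signal                      # flat +6 dB coloration
gains = compensation_curve(signal, colored)  # ≈ 0.5 per bin, i.e. -6 dB
```

A real room boosts and cuts different frequencies by different amounts, so the resulting curve is anything but flat – which is exactly what the software shows you after measurement.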
Hold up this mic, measure some whooping sounds, and you’re done calibrating. No expertise needed.
What can I calibrate?
One of the things that sets Sonarworks Reference apart is that it’s flexible enough to deal with both headphones and studio monitors, and works both as a plug-in and a convenient universal driver.
The Systemwide driver works with the final output on Mac and Windows. That means you can listen everywhere – I’ve listened to SoundCloud audio through Systemwide, for instance, which has been useful for checking how the streaming versions of my mixes sound. It supports Core Audio on the Mac and WASAPI on Windows – which these days is perfectly usable and reliable on my Windows 10 machine. (There’s unfortunately no Linux support, though maybe some enterprising user could get the Windows VST working.)
On the Mac, you select the calibrated output via a pop-up on the menu bar. On Windows, you switch to it just like you would any other audio interface. Once selected, everything you listen to in iTunes, Rekordbox, your Web browser, and anywhere else will be calibrated.
That works for everyday listening, but in production you often want your DAW to control the audio output. (Choosing the plug-in is essential on Windows for use with ASIO; Systemwide doesn’t yet support ASIO though Sonarworks says that’s coming.) In this case, you just add a plug-in to the master bus and the output will be calibrated. You just have to remember to switch it off when you bounce or export audio, since that output is calibrated for your setup, not anyone else’s.
Three pieces of software and a microphone. Sonarworks Reference comprises a measurement tool, a plug-in, and a systemwide tool for outputting calibrated sound from any source, plus a microphone for measuring.
Do I need a special microphone?
If you’re just calibrating your headphones, you don’t need to do any measurement. But for any other monitoring environment, you’ll need to take a few minutes to record a profile. And so you need a microphone for the job.
Calibrating your headphones is as simple as choosing the make and model number for most popular models.
Part of the convenience of the Sonarworks package is that it includes a ready-to-use measurement mic, and the software is already pre-configured to work with the calibration. These mics are omnidirectional – since the whole point is to pick up a complete image of the sound. And they’re meant to be especially neutral.
Sonarworks’ software is pre-calibrated for use with their included microphone.
Any microphone whose vendor provides a calibration profile – in standard text form – can also be used with the software in fully calibrated mode. If you have some cheap musician-friendly omni mic, though, its maker usually doesn’t provide anything of the sort the way a calibration mic maker would.
I think it’s easier to just use these mics, but I don’t have a big mic cabinet. Production Expert did a test of generic omni mics – mics that aren’t specifically for calibration – and got results that approximate the results of the test mic. In short, they’re good enough if you want to try this out, though Production Expert were being pretty specific with which omni mics they tested, and then you don’t get the same level of integration with the calibration software.
Once you’ve got the mics, you can test different environments – so your untreated home studio and a treated studio, for instance. And you wind up with what might be a useful mic in other situations – I’ve been playing with mine to sample reverb environments, like playing and re-recording sound in a tile bathroom, for instance.
What’s the calibration process like?
Let’s actually walk through what happens.
With headphones, this job is easy. You select your pair of headphones – all the major models are covered – and then you’re done. So when I switch from my Sony to my Beyerdynamic, for instance, I can smooth out some of the irregularities of each of those. That’s made it easier to mix on the road.
For monitors, you run the Reference 4 Measure tool. Beginners I showed the software to got slightly discouraged when they saw the measurement would take 20 minutes, but – relax. It’s weirdly kind of fun, and once you’ve done it once, it’ll probably take you half that time to do it again.
The whole thing feels a bit like a Nintendo Wii game. You start by making a longer measurement at the point where your head would normally be sitting. Then you move around to different targets as the software makes whooping sounds through the speakers. Once you’ve covered the full area, you will have dotted a screen with measurements. Then you’ve got a customized measurement for your studio.
Here’s what it looks like in pictures:
Simulate your head! The Measure tool walks you through exactly how to do this with friendly illustrations. It’s easier than putting together IKEA furniture.
You’ll also measure the speakers themselves.
Eventually, you measure the main listening spot in your studio. (And you can see why this might be helpful in studio setup, too.)
Next, you move the mic to each measurement location. There’s interactive visual feedback showing you as you get it in the right position.
Hold the mic steady, and listen as a whooping sound comes out of your speakers and each measurement is completed.
You’ll make your way through a series of these measurements until you’ve dotted the whole screen – a bit like the fingerprint calibration on smartphones.
Oh yeah, so my studio monitors aren’t so flat. When you’re done, you’ll see a curve that shows you the irregularities introduced by both your monitors and your room.
Now you’re ready to listen to a cleaner, clearer, more neutral sound – switch your new calibration on, and if all goes to plan, you’ll get much more neutral sound for listening!
There are other useful features packed into the software, like the ability to apply the curve used by the motion picture industry. (I loved this one – it was like, oh, yeah, that sound!)
It’s also worth noting that Sonarworks have created different calibration types, one optimized for real-time use (great for tracking and improvising) and one for accuracy (great for mixing).
Is all of this useful?
Okay, disclosure statement is at the top, but … my reaction was genuinely holy s***. I thought there would be some subtle impact on the sound. This was more like the feeling – well, as an eyeglass wearer, when my glasses are filthy and I clean them and I can actually see again. Suddenly details of the mix were audible again, and moving between different headphones and listening environments was no longer jarring – like that.
Double blind A/B tests are really important when evaluating the accuracy of these things, but I can at least say, this was a big impact, not a small one. (That is, you’d want to do double blind tests when tasting wine, but this was still more like the difference between wine and beer.)
How you might actually use this: once they adapt to the calibrated results, most people leave the calibrated version on and work from a more neutral environment. Cheap monitors and headphones work a little more like expensive ones; expensive ones work more as intended.
There are other use cases, too, however. Previously I didn’t feel comfortable taking mixes and working on them on the road, because the headphone results were just too different from the studio ones. With calibration, it’s far easier to move back and forth. (And you can always double-check with the calibration switched off, of course.)
The other advantage of Sonarworks’ software is that it does give you so much feedback as you measure from different locations, and that it produces detailed reports. This means if you’re making some changes to a studio setup and moving things around, it’s valuable not just in adapting to the results but giving you some measurements as you work. (It’s not a measurement suite per se, but you can make it double as one.)
Calibrated listening is very likely the future even for consumers. As computation has gotten cheaper and software analysis has gotten smarter, it makes sense that these sorts of calibration routines will be applied to giving consumers more reliable sound and to adapting to immersive, 3D listening. For now, they’re great for us as creative people, and it’s nice to have them in our working process and not only in the hands of other engineers.
If you’ve got any questions about how this process works as an end user, or other questions for the developers, let us know.
And if you’ve found uses for calibration, we’d love to hear from you.
Sonarworks Reference is available with a free trial:
Látlaus Ský’s Pythian Drift is a gorgeous ambient concept album, the kind that’s easy to get lost in. The set-up: a probe discovered on Neptune in the 26th century will communicate with just one woman back on Earth.
The Portland, Oregon-based artists write CDM to share the project, which is accompanied by this ghostly video (still at top). It’s the work of Ukrainian-born filmmaker Viktoria Haiboniuk (now also based in Portland), who composed it from three years’ worth of images shot on 120 film.
Taking in the album even before checking the artists’ perspective, I was struck by the sense of post-rocket age music about the cosmos. In a week when images of Mars’ surface spread as soon as they were received, to a generation that grew up as the first native space-faring humans, space is no longer alien and unreachable, but present.
In slow-motion harmonies and long, aching textures, this seems to be cosmic music that sings of longing. It calls out past the Earth in hope of some answer.
The music is the work of duo Brett and Abby Larson. Brett explains his thinking behind this album:
This album has roots in my early years of visiting the observatory in Sunriver, Oregon with my Dad. Seeing the moons of Jupiter with my own eyes had a profound effect on my understanding of who and where I was. It slowly came to me that it would actually be possible to stand on those moons. The ice is real, it would hold you up. And looking out your black sky would be filled with the swirling storms of Jupiter’s upper clouds. From the ice of Europa, the red planet would be 24 times the size of the full moon.
Though these thoughts inspire awe, they begin to chill your bones as you move farther away from the sun. Temperatures plunge. There is no air to breathe. Radiation is immense. Standing upon Neptune’s moon Triton, the sun would begin to resemble the rest of the stars as you faded into the nothing.
Voyager 2 took one of the only clear images we have of Neptune. I don’t believe we were meant to see that kind of image. Unaided, our eyes are only prepared to see the sun, the moon, and the stars. Looking into the blue clouds of the last planet, you cannot help but think of the black halo of space that surrounds the planet and extends forever.
I cannot un-see those images. They have become a part of human consciousness. They are the dawn of an unnamed religion. They are more powerful and more fearsome than the old God. In a sense, they are the very face of God. And perhaps we were not meant to see such things.
This album was my feeble attempt to make peace with the blackness. The immense cold that surrounds and beckons us all. Our past and our future.
The album closes with an image of standing amidst Pluto’s Norgay mountains. Peaks of 20,000 feet of solid ice. Evening comes early in the mountains. On this final planet we face the decision of looking back toward Earth or moving onward into the darkness.
Abby with pedals. The BOSS RC-50 Loop Station (predecessor to today’s RC-300), Strymon BlueSky, and Electro-Harmonix Soul Food stand out.
Plus more on the story:
Pythia was the actual name of the Oracle at Delphi in ancient Greece. She was a real person who, reportedly, could see the future. This album, “Pythian Drift,” is only the first of three parts. In this part, the craft is discovered and Dr. Amala Chandra begins a dialogue with it. Dr. Chandra then begins publishing papers that rock the scientific world and reformulate our understanding of mathematics and physics. There is also a phenomenon called Pythian Drift that begins to spread from the craft. People begin to see images and hear voices, prophecies. Some prepare for an interstellar pilgrimage to the craft’s home galaxy in Andromeda.
Part two will be called Black Sea. Part three will be Andromeda.
And some personal images connected to that back story:
Brett as a kid, with ski.
Abby beside a faux fire.
More on the duo and their music at the Látlaus Ský site:
Roland’s VT-4 is more than a vocal processor. It’s best thought of as a multi-effects box that happens to be vocal friendly. And it’s getting deeper, with new reverb models, downloadable now.
Roland tried this once before with an AIRA vocoder/vocal processor, the VT-1. But that model proved a bit shallow: limited presets and pitch control only through the vocal input meant it worked great in some situations but didn’t fit others.
The VT-4 is really about retaining a simple interface, but adding a lot more versatility (and better sound).
As some of you noted in comments when I wrote it up last time, it’s not a looper. (Roland or someone else will gladly sell you one of those.) But what you do get are dead simple controls, including intuitive access to pitch, formant, balance, and reverb on faders. And you can control pitch through either a dial on the front panel or MIDI input. I’ll have a full hands-on review soon, as I’m particularly interested in this as a live processor for vocalists and other live situations.
If your use case is sometimes you want a vocoder, and sometimes you want some extra effects, and sometimes you’re playing with gear or sometimes with a laptop, the VT-4 is all those things. It’s got USB audio support, so you can pack this as your only interface if you just need mic in and stereo output.
And it has a bunch of effects now: re-pitch, harmonize, feedback, chorus, vocoder, echo, tempo-synced delay, dub delay … and some oddities like robot and megaphone and radio. More on that next time.
This update brings new reverb effects. They’re thick, lush, digital-style reverbs:
… and the VT-1’s rather nice retro-ish reverb is back as VT-1 REVERB.
Deep dark say what? The VT-1 reverb was already deeper (more reflections) and had a longer tail than the new VT-4 default; this preset restores those possibilities. “Deep” is deeper still (more reflections). “Large” has longer reflections, simulating a bigger room. And “Dark” is like the default, but with more high-frequency filtering. You flash the new settings via USB.
And this being Japan, they introduce the pack by saying “It will set you in a magnificent space.” Yes, indeed, it will. That’s lovely.
The VT-4 got a firmware update, too.
1. PITCH and FORMANT can now be active irrespective of input signal level and length, via a new setting. (Basically, this lets you disable a tracking threshold, I think. I have to play with this a bit.)
2. ROBOT VOICE no longer hangs notes; it disables with note-off events.
3. There’s a new MUTE function setting.
I mean, a really easy-to-use pitch + vocoder + delay + reverb box for just over $200, one that can sometimes stand in for an audio interface? Seems like a no-brainer to me. So if you have questions or things you’d like me to try with the unit I just got in, let me know.
From countries across Europe to the USA, migration is at the center of Western politics at the moment. But that raises a question: why aren’t more people who make music, music instruments, and music tech louder about these issues?
Migration – temporary and permanent – is simply a fact of life for a huge group of people, across backgrounds and aspirations. That can involve migration to follow opportunities, and refugees and asylum seekers who move for their own safety and freedom. So if you don’t picture immigrants, migrants, and refugees when you think of your society, you just aren’t thinking.
Musicians ought to be uniquely qualified to speak to these issues, though. Extreme anti-immigration arguments all assume that migrants take away more from a society than they give back. And people in the music world ought to know better. Music has always been based on cultural exchange. Musicians across cultures have always considered touring to make a living. And to put it bluntly, music isn’t a zero sum game. The more you add, the more you create.
Music gets schooled in borders
As music has grown more international, as more artists tour and cross borders, at least the awareness is changing. That’s been especially true in electronic music, in a DJ industry that relies on travel. Resident Advisor has consistently picked up this story over the last couple of years, as artists spoke up about being denied entry to countries while touring.
In a full-length podcast documentary last year, they dug into the ways in which the visa system hurts artists outside the US and EU, with a focus on non-EU artists trying to gain entry to the UK:
Andrew Ryce also wrote about a visa rate hike in the USA back in 2016 – and this in the Obama Administration, not under Trump:
Now, being a DJ crossing a border isn’t the same as being a refugee running for your life. But on another level, it can allow artists to experience immigration infrastructure – both when it works for them, and when it works against them. A whole generation of artists, including even those from relatively privileged Western nations, is now learning the hard way about the immigration system. And that’s something they might have missed as tourists, particularly if they come from places like the USA, western Europe, Australia, and other places well positioned in the system.
The immigration system they see will often come off as absurdist. National policies worldwide categorize music as migrant labor and require a visa. In many countries, these requirements go unenforced for all but big-money gigs. But in some countries – the USA, Canada, and UK being prime examples – they’re rigorously enforced, and not coincidentally, the required visas carry high fees.
Showing up at a border carrying music equipment or a bag of vinyl records is an instant red flag – whether a paid gig is your intention or not. (I’m surprised, actually, that no one talks about this in regard to the rise of the USB stick DJ. If you aren’t carrying a controller or any records, sailing through as a tourist is a lot easier.) Border officials will often ask visitors to unlock phones and hand over social media passwords. They’ll search Facebook events by name to find gigs. Or they’ll even just view the presence of a musical instrument as a violation.
Being seen as “illegal” because you’re traveling with a guitar or some records is a pretty good illustration of how immigration can criminalize simple, innocent acts. Whatever the intention behind that law, it’s clear there’s something off here – especially given the kinds of illegality that can cross borders.
When protection isn’t
This is not to argue for open borders. There are times when you want border protections. I worked briefly in environmental advocacy as we worked on invasive species that were hitching a ride on container ships – think bugs killing trees and no more maple syrup on your pancakes, among other things. I was also in New York on 9/11 and watched from my roof – that was a very visible demonstration of visa security oversight that had failed. Part of the aim of customs and immigration is to stop the movement of dangerous people and things, and I don’t think any rational person would argue with that.
But even as a tiny microcosm of the larger immigration system, music is a good example of how laws can be uneven, counter-intuitive, and counterproductive. The US and Canada, for instance, do have an open border for tourists. So if an experimental ambient musician from Toronto comes to play a gig in Cleveland, that’s not a security threat – they could do the same as a tourist. It’s also a stretch of the imagination that this individual would have a negative impact on the US economy. Maybe the artist makes a hundred bucks cash and … spends it all inside the USA, not to mention brings in more money for the venue and the people employed by it. Or maybe they make $1000 – a sum that would be wiped out by the US visa fee, to say nothing of slow US visa processing. Again, that concert creates more economic activity inside the US economy, and it’s very likely the American artist sharing the bill goes up to Montreal and plays with them next month on top of it. I could go on, but it’s … well, boring and obvious.
Artists and presenters worldwide often simply ignore this visa system because it’s slow, expensive, and unreliable. And so it costs economies (and likely many immigration authorities) revenue. It costs societies value and artistic and cultural exchange.
Of course, scale that up and the same is true across other fields. Immigrants tend to pay more into government services than they take out, they tend to own businesses that employ more local people (so they create jobs), they tend to invent new technologies (creating jobs again), and so on.
Ellis Island, NYC. 12 million people passed through here – not all of my family who came to the USA, but some. I’ve now come the other way, through Tegel Airport and the Ausländerbehörde in Berlin. Photo (CC-BY-ND) A. Strakey.
Advocacy and music
Immigration advocacy arguably belongs in the charter of anyone in the music industry or the musical instrument industry.
Music technology suffers as borders are shut down, too. Making musical instruments and tools requires highly specialized labor working in highly specialized environments. From production to engineering to marketing, it’s an international business. I actually can’t think of any major manufacturer that doesn’t rely on immigrants in key roles. (Even many tiny makers involve immigrants.)
And the traditional music industry leans heavily on immigrant talent, too. Those at the top of the industry have powerful lobbying efforts – efforts that could support greater cultural exchange and rights for travelers. Certainly, its members are often on the road. But let’s take the Recording Academy (the folks behind the Grammy Awards).
Instead, their efforts seem to fixate on domestic intellectual property law. So the Recording Academy and others were big on the Music Modernization Act – okay, fine, a law to support compensation for creators.
I don’t want to be unfair to the Recording Academy – and not just because I think it might hurt my Grammy winning chances. (Hey, stop laughing.) No, I think it’s more that we as a community have generally failed to take up this issue in any widespread way. (I sincerely hope someone out there works for the record industry and writes to say that you’re actually working on this and I’m wrong.)
More than anything else, music can cross borders. It can speak to people when you don’t speak their language, literally. When music travels, emotion and expression travels – artists and technology alike.
It’s personal – isn’t it for you?
I personally feel the impact of all of this, now having been seven years in Berlin, and able to enjoy opportunities, connections, and perspective that come from living in Germany and working with people both from Germany and abroad. I feel hugely grateful to the German state for allowing my business to immigrate (my initial visa was a business visa, which involved some interesting bureaucracy explaining to the Berlin Senate what this site is about). I’ve even benefited from the support of programs like the Goethe Institut and host governments to work in cultural diplomacy.
I’ve also had the chance to be involved writing in support of visas and financial backing for artists coming from Iran, Mexico, Kazakhstan, and many other places, for programs I’ve worked on.
And all of this is really a luxury – even when we’re talking about artists traveling to support their careers and feed themselves. For many people, migration is a matter of survival. Sometimes the threats to their lives come from geopolitical and economic policies engineered by the governments we come from – meaning as citizens, we share some responsibility for the impact others have felt. But whether or not that’s the case, I would hope we feel that obligation as human beings. That’s the basis of international rule of law on accepting refugees and granting asylum. It’s the reason those principles are uncompromising and sometimes even challenging. Our world is held together – or not – based on that basic fairness we afford to fellow humans. If people come to where we live and claim their survival and freedom depends on taking them in, we accept the obligation to at least listen to their case.
Those of us in the music world could use our privilege, and the fact that our medium is so essential to human expression, to be among the loudest voices for these human rights. When we live in countries that listen to us, we should talk to other citizens and talk to our governments. We should tell the stories that make these issues more relatable. We should do what some people I know are doing in the music world, too – work on education and involvement for refugees, help them to feel at home in our communities and to develop whatever they need to make a home here, and make people feel welcome at the events we produce.
Those are just principles, not policies. But I know a lot of people in my own circle have worked on the policy and advocacy sides here. I certainly would invite you to share what we might do. If you’ve been impacted by immigration obstacles and have ideas of how we can help, I hope we hear that, too.
Some likely policy areas:
Supporting the rights of refugees and asylum seekers
Supporting refugee and asylum seeker integration
Advocating for more open visa policies for artists – keeping fees low, and supporting exchange
Advocating the use of music and culture, and music technology, as a form of cultural diplomacy
Supporting organizations that connect artists and creative technologists across borders
And so on…
But I do hope that as musicians, we work with people who share basic beliefs in caring for other people. I know there’s no single “community” or “industry” that can offer that. But we certainly can try to build our own circle in a way that does.
Some examples from Berlin of people working on refugee issues. I would argue immigration policy can find connections across refugees and migrants, asylum seekers and touring musicians, as everyone encounters the same larger apparatus and set of laws:
What’s the sound of an exhibition devoted to silence? From John Cage recreations to the latest in interactive virtual reality tech, it turns out there’s a lot. The exhibition’s lead Jascha Dormann tells us more – and gives us a look inside.
The results are surprisingly poetic – like a surrealist listening playground on the topic of isolation.
“Sounds of Silence” opened this month at the Museum of Communication in Bern, Switzerland, and runs through July 2019. Just as John Cage discovered that visiting an anechoic chamber was, in fact, noisy, “silence” in this case invites listening and exploration. It’s about surprise, not void. As the exhibition creators say, “the search for a place where stillness may be experienced, however, becomes difficult: stillness is holding sway only in outer space – yet even there the astronaut is hearing his own breaths.”
Inside the exhibition, there’s not a word of written text, and few traditional photos or videos. Instead, you get abstract spatial graphics. Tracking systems respond as you navigate the exhibit, and an unseen voice hints at what you might do. There’s a snowy cotton-like entry, radio-like sound effects, and then a pathway to explore silence from the start of the universe until this century.
And you get some unique experiences: the isolation tank invented by neurophysiologist John C. Lilly, 3D soundscapes, Sara Maitland talking to you about her experience in seclusion on the Isle of Skye, and yes, Cage’s iconic, if ironic, “4’33”.” The Cage work is realized as an eight-channel ORTF 3D audio recording, from a performance by the Staatsorchester Stuttgart at the Beethovensaal Stuttgart. (That has to be silence’s largest-ever orchestration, I suppose.) It’s silence in full immersive sound.
“The piece had never been recorded in 3D-audio before,” says Dormann. “We have then implemented the recording into the interactive sound system so visitors can experience it in a version that’s binauralized in real-time.”
Recording silence – in 3D! The session in Stuttgart, Germany.
Photos source: Museum of Communication Bern
Sound Concept and Sound Production Lead: Jascha Dormann (Idee und Klang GmbH)
Sound Concept and Sound Design: Ramon De Marco (Idee und Klang GmbH)
Sound Design: Simon Hauswirth (Idee und Klang GmbH)
Development Sound System: Steffen Armbruster (Framed immersive projects GmbH & Co. KG)
Sound Implementation: Marc Trinkhaus (Framed immersive projects GmbH & Co. KG)
Performance John Cage – 4’33’’: Staatsorchester Stuttgart conducted by Cornelius Meister
Recording John Cage – 4’33’’: Jascha Dormann at Beethovensaal / Liederhalle Stuttgart
Project in general
Project Lead and Curator: Kurt Stadelmann (Museum of Communication)
Project Manager: Angelina Keller (Museum of Communication)
Scenography: ZMIK spatial design / Rolf Indermühle
Exhibition Graphics: Büro Berrel Gschwind / Dominique Berrel
Author: Bettina Mittelstrass
Head of Exhibitions at Museum of Communication: Christian Rohner (Museum of Communication)
Various events are running alongside the exhibition; full details on the museum’s site:
VCV Rack is already a powerful, free modular platform that synth and modular fans will want. But a $30 add-on makes it more powerful when integrating with your current hardware and software – VST plug-in support.
It’s called Host, and for $30, it adds full support for VST2 instruments and effects, including the ability to route control, gate, audio, and MIDI to the appropriate places. This is a big deal, because it means you can integrate VST plug-ins with your virtual modular environment, for additional software instruments and effects. And it also means you can work with hardware more easily, because you can add in VST MIDI controller plug-ins. For instance, without our urging, someone just made a MIDI controller plug-in for our own MeeBlip hardware synth (currently not in stock, new hardware coming soon).
You can already integrate VCV’s virtual modular with hardware modular using audio and a compatible audio interface (one with DC coupling, like the MOTU range). Now you can also easily integrate outboard MIDI hardware, without having to manually assign CC numbers and so on as before.
Hell, you could go totally crazy and run Softube Modular inside VCV Rack. (Yo dawg, I heard you like modular, so I put a modular inside your modular so you can modulate the modular modular modules. Uh… kids, ask your parents who Xzibit was? Or what MTV was, even?)
What you need to know
Is this part of the free VCV Rack? No. Rack itself is free, but you have to buy “Host” as a US$30 add-on. Still, that means the modular environment and a whole bunch of amazing modules are totally free, so that thirty bucks is pretty easy to swallow!
What plug-ins will work? Plug-ins need to be 64-bit and VST 2.x (that’s most plug-ins, but not some recent VST3-only releases), and they run on Windows and Mac.
What can you route? Modular is no fun without patching! So here we go:
There’s Host for instruments – 1v/octave CV for controlling pitch, and gate input for controlling note events. (Forget MIDI and start thinking in voltages for a second here: VCV notes that “When the gate voltages rises, a MIDI note is triggered according to the current 1V/oct signal, rounded to the nearest note. This note is held until the gate falls to 0V.”)
Right now there’s only monophonic input. But you do also get easy access to note velocity and pitch wheel mappings.
Host-FX handles effects, pedals, and processors. Input stereo audio (or mono mapped to stereo), get stereo output. It doesn’t sound like multichannel plug-ins are supported yet.
Both Host and Host-FX let you choose plug-in parameters and map them to CV – just be careful mapping fast modulation signals, as plug-ins aren’t normally built for audio-rate modulation. (We’ll have to play with this and report back on some approaches.)
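VCV’s description of that rising-gate behavior is easy to model in code. Here’s a minimal sketch – not VCV’s actual implementation – assuming VCV Rack’s usual convention that 0 V corresponds to C4 (MIDI note 60); the 1 V gate threshold is a hypothetical value for illustration:

```python
# Minimal sketch of the 1V/oct + gate behavior VCV describes for Host.
# Assumptions (not from VCV's docs): 0 V maps to C4 (MIDI note 60),
# and any gate voltage at or above a small threshold counts as "high".

GATE_THRESHOLD = 1.0  # volts; hypothetical rising-gate threshold

def cv_to_midi_note(cv_volts: float) -> int:
    """1 V/oct: each volt is an octave (12 semitones), rounded to the nearest note."""
    return round(60 + 12 * cv_volts)

def process(samples):
    """samples: list of (cv_volts, gate_volts) pairs over time.
    Returns MIDI-like messages: ('note_on', n) on a rising gate,
    ('note_off', n) when the gate falls back to 0 V."""
    messages = []
    gate_high = False
    held_note = None
    for cv, gate in samples:
        if not gate_high and gate >= GATE_THRESHOLD:
            held_note = cv_to_midi_note(cv)  # note is sampled at the rising edge
            messages.append(('note_on', held_note))
            gate_high = True
        elif gate_high and gate <= 0.0:
            messages.append(('note_off', held_note))
            gate_high = False
            held_note = None
    return messages
```

So a CV of 1.0 V with a high gate would emit a note-on for MIDI note 72 (an octave up), held until the gate returns to 0 V – which matches why VCV tells you to think in voltages rather than MIDI here.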
Will I need a fast computer? Not for MIDI integration, no. But I find the happiness level of VCV Rack – like a lot of recent synth and modular efforts – is directly proportional to people having fast CPUs. (The Windows platform has some affordable options there if Apple is too rich for your blood.)
What platforms? Mac and Windows, it seems. VCV also supports Linux, but there your best bet is probably to add the optional installation of JACK, and … this is really the subject for a different article.
How to record your work
I actually was just pondering this. I’ve been using ReaRoute with Reaper to record VCV Rack on Windows, which for me was the most stable option. But it also makes sense to have a recorder inside the modular environment.
Our friend Chaircrusher recommends the NYSTHI modules for VCV Rack. It’s a huge collection but there’s both a 2-channel and 4-/8-track recorder in there, among many others – see pic:
Just remember that when adding Host, plug-ins inside a host can cause… stability issues.
But it’s definitely a good excuse to crack open VCV Rack again! And also nice to have this when traveling… a modular studio in your hotel room, without needing a carry-on allowance. Or hide from your family over the holiday and make modular patches. Whatever.