Arturia’s V Collection 7 continues to expand as the go-to software library of vintage synths – seemingly every one you’d ever want. But let’s focus on one new gem: the brilliant CZ-101 remake.
First off, V Collection 7 is worth a look. Arturia keep making their mega-bundle of software instruments better. That means reworking the modeling inside these tools, adding new features, and – of course – continuing to expand the library of available instruments. As modeling has improved, these instruments have gotten closer and closer to the originals in sound, not just in function and look. At the same time, Arturia keeps beefing up those originals – so the authentic sound engines get new sound design features atop them.
The EMS Synthi V makes an appearance in the new V Collection, too – if your tastes go more 70s than 80s. And it’s a big deal.
Version 7 continues to balance the desires of the casual keyboardist and the obsessive synth sound designer – and everything in between. So if you just want to add a convincing Mellotron or B-3, you’re covered – with an all-new Mellotron and a total ground-up sound engine overhaul for the B-3 V2. Jimmy Smith Strawberry Fields Forever, check and mate.
If the idea of a whole bunch of unfamiliar keyboards and control layouts is unappealing, V Collection 7 also includes the new Analog Lab 4, which consolidates all these things into easy presets and macro controls, and hundreds of new presets in their “Synthopedia.” That way if you do want to look up the way a familiar sound was produced – then tweak it yourself – you can.
Of course, if you read CDM, your favorite preset may be “default template,” and the idea of getting lost for hours in a vintage synth control layout may be the whole selling point. For that crowd, V Collection 7 adds the EMS Synthi V and the CZ-101 from Casio, circa 1985.
The ability to just dial up a menu and say, “do I want an Oberheim SEM or a CS-80” is already pretty crazy, and the number of choices continues to grow. So my approach to V Collection is actually to ignore all those presets – apologies, dear sound designer friends – and try to focus on one instrument. It’s a bit like what you do in a packed studio – you pull out one piece of gear, and say, hey, tonight is going to be about me and this instrument and very little else.
I want to talk about the CZ-101 because it’s long been one of my favorite instruments, and it’s a fairly unsung one. The CZ is somehow too easy, too friendly, too compact, too inexpensive to have earned the kind of adoration lavished on some of the other 80s and 70s throwbacks. It’s not a collectors’ item. You can still find them at flea markets. So yes, Arturia are quick to drop the names of artists who have used it, like Salt-N-Pepa and Vince Clarke. But to me the whole appeal of the CZ-101 is that it’s for people who love synths, not people trying to emulate their heroes.
Of course, you could for these reasons go get an actual CZ-101. That means Arturia has to sweeten the deal a bit so the software can compete. They did just that. Let’s dive in.
CZ V reproduces the simple hardware interface (at bottom) but also expands to this view with lots of additional visual feedback and features, at top.
Phase Distortion lovers, rejoice
The original CZ-101 is about two things: a simple front panel layout, and phase distortion. If you just want to drop the CZ into a session as-is, CZ V does that.
Phase distortion synthesis isn’t so much a different synthesis method as it is a compelling way of mucking about with two digital oscillators. It’s easy enough to dismiss PD as Casio’s cheaper, non-patented answer to Yamaha’s DX7 and frequency modulation (FM). But now as we grow more accustomed to digital, non-harmonic timbres, PD is better appreciated on its own terms – as a way of producing unique digital color.
In short, what phase distortion does for you is to add rich harmonic content to sound. It can be a distortion. It can sound something like a resonant filter – in its own way. And because it’s normally using synced oscillators – here’s the important bit – it’s way easier to control than FM generally is.
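To make the synced-oscillator phase-warp idea concrete, here’s a minimal sketch of Casio-style phase distortion in Python. This is purely illustrative – the knee formula and the `amount` parameter are my own simplifications, not Arturia’s or Casio’s actual implementation.

```python
import math

def pd_oscillator(freq, sr, n_samples, amount):
    """One-oscillator phase distortion sketch (illustrative only).
    A cosine is read through a bent, piecewise-linear phase: at
    amount=0 the phase is undistorted (pure cosine tone); as amount
    rises, the knee moves earlier and the tone brightens toward a saw.
    """
    out = []
    phase = 0.0  # master phase, 0..1 per cycle
    # knee: where the warped phase crosses 0.5; amount in [0, 1)
    knee = 0.5 - 0.499 * amount
    for _ in range(n_samples):
        if phase < knee:
            warped = 0.5 * phase / knee
        else:
            warped = 0.5 + 0.5 * (phase - knee) / (1.0 - knee)
        out.append(-math.cos(2.0 * math.pi * warped))
        phase = (phase + freq / sr) % 1.0
    return out
```

Because the warp is driven by one master phase that resets every cycle – effectively oscillator sync – pitch stays put while only the timbre changes, which is exactly why PD is easier to control than free-running FM.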
On the Casio, this allows some unique filtering and sound shaping and distortion sounds that can easily be controlled by macros. And on the Arturia remake, graphical access to envelopes and expanded power means that you can use that shaping creatively.
The CZ V kind of goes a bit nuts versus what an original CZ-101 would give you. Let’s compare 1985 and 2019.
Arturia’s effects mean you don’t have to listen to the CZ dry.
The modulation matrix makes this feel as much modern soft synth as 1980s hardware.
The original oscillators are there – sine, saw, square, pulse, resonance, double-sine, saw-pulse – as are the 8-stage envelope generators and vibrato and LFOs. You can even import SysEx from the original. But being able to program these features on a display makes sound design accessible.
In addition to making hidden CZ features more visible, Arturia have expanded what’s possible:
32-voice polyphony (the original had just 8).
A modulation matrix – no, really.
More modulation: a Sample and Hold module, 2 LFOs with 6 waveforms, 3 source combinators, and an Arpeggiator
New effects – while an authentic approach to the CZ might leave it dry, now you get all the Arturia multi-effects (adding things like chorus and reverb sound especially nice, for instance)
There’s visual feedback for everything, too.
Where the CZ fits in
In some ways, the CZ-101 is weirdly going from dated 80s thrift store find to … ahead of its time? After all, we’re seeing modular makers embrace these kinds of digital oscillator effects, and phase manipulation and phase distortion even inspired the upcoming sequel to Native Instruments’ Massive, the new Massive X.
Envelope editing is powerful – and includes animated visual feedback.
The CZ architecture is uniquely suited to making a lot of different sounds – including percussion and modulating timbres and edgy digital business – with a minimum of resources. So there’s a noise source built-in. You can modulate with the noise source. There’s ring modulation.
Using the CZ’s DADSR and multi-segment envelopes, you can then sculpt those percussive and metallic timbres over time – including with the DCW (Digitally Controlled Waveform) envelope, which morphs between a sine wave and a distorted wave.
The reason I’m using the CZ V to talk about the new V Collection edition, though, is that it’s an instrument where it feels like Arturia’s authentic side matches up with the “vintage on steroids” additions. So, by the time you have something like the new Synthi, you’re already presented with tons of sound design possibilities. Arturia has added some amazing ideas there – a step sequencer, a beat-synced LFO, plus onboard effects, atop all the new graphical options for working with envelopes and modulation.
The thing is, on a Synthi, that starts to feel like too much. With the Synthi, I was almost tempted to force myself not to open the tab full of new stuff. If I want an open-ended sound environment on a computer, I can use Reaktor rather than try to recreate a 1970s take on the idea.
On the Arturia edition of the Casio, though, all these additions help the CZ graduate from fun toy to serious sound design tool. The visual envelopes make more sense. Effects are something most CZ owners invested in anyway. And more polyphony means you can run one instance and do a lot with it. Heck, even the matrix is easier to follow than on the original EMS Synthi because the architecture of the CZ-101 is so straightforward. In other words, because the original did less, it’s both a good match for software remake and for some thoughtful additions – which Arturia delivers.
Check these templates for an easy way to get started making your own sounds.
Here’s a little sketch I made with this. It’s all one patch – noise and ring modulation and layering the ring source, plus some DCW and pitch envelope use, generate all those sounds. I added Arturia’s Pre TridA and some reverb from Softube’s TSAR-1 Reverb and … that was it.
By laying out faders, encoders, displays, and an 8×8 expressive grid, Polyend hopes you’ll play the Medusa’s synth sounds. So here’s some sound of what was going on in my studio.
Here’s a live jam, just getting a bit lost in the Medusa world:
It’s not really a demo so much as me enjoying what the instrument can do. Because instruments like this are new, we rely on musical performance to get to know them – but that’s not to say it’s obvious how to do so. We “demo” an instrument, even though we’d never expect to “demo” a violin (not any more, anyway).
A few features stand out to me as useful to play, which you’ll see getting some use:
Swapping and modulating wavetables: this was recently expanded with a bunch of additional wavetable sources; there’s a particular character to the Medusa offerings that I really enjoy
Grid Mode: this lets you sequence and even ‘play’ different parameters stored in each individual grid
Different internal scale modes (no custom scales/tunings or Scala support yet, though there’s a nice scale/mode assortment, and you can set custom tunings in Grid Mode by manually tuning them in)
Envelopes and modulation: obviously, this adds additional motion in the music; what sets the Medusa apart is on-the-fly assignment, which you can think of as a digital equivalent to patching cables
FM adjustment – well, just because this can sound wild, as frequency modulation does (both on the filter and oscillators)
Mixing oscillators: with three digital + three analog + noise source, you can add and subtract layers in the sound via the faders
I also went ahead and added some effects and an extended version of this live set:
The first recording is dry apart from some very very light plate reverb and compression. The SoundCloud upload includes my favorite Eventide effects – Ultratap [multitap delay], Omnipressor [compressor], Blackhole [reverb].
Here’s a more straightforward play with the different oscillators and basic voice structure:
Logic Pro X 10.4.5, seen onstage at WWDC, is now available. And yes, it supports the new ultra-high-end Mac Pro – but there are fixes and performance optimizations for everyone, with or without new machines.
10.4.5 looks like the most pro-oriented Logic Pro in a long time. Apple has been aggressive with its update cadence for Logic for years running now, even with free upgrades, and this version is no exception.
This release also marks the end of the road for Mac OS X 10.12 Sierra. The new minimum OS requirement is macOS 10.13.6 High Sierra. (Mojave is seeming stable these days, and it’s summertime, so maybe now is a good time to do a full backup and take the plunge.)
First up – yes, the banner feature from Apple’s perspective is that the new Logic runs on the new Mac Pro. Under the hood, that means support for up to 56 cores, the kind of massive multiprocessing the new Mac Pro can do.
The use case for this kind of processing power is slim, but then, that’s what the ‘pro’ concept is all about. Doing artist relations, you may have a film composer with advanced technical needs and a shelf full of Academy Awards. Even one user in your user base can be critical.
That said, I think the real story here is that Apple is shaking the tree across the whole code base – meaning these performance optimizations and fixes could benefit you even if you’re running on a beat-up older MacBook, too.
So, think really big track counts – which could be meaningful since even some mid-range CPUs can theoretically churn through a lot of tracks, to say nothing of that shiny Mac Pro tower.
Increased Number of Tracks and Channels, up to:
1000 stereo audio channel strips
1000 software instrument channel strips
1000 auxiliary channel strips
1000 external MIDI tracks
12 sends per channel strip
Way back around 2006, I heard from a Macworld reader complaining about lifting exactly these limitations and how I didn’t mention them in a review. (See above: that one user thing.)
16 ports of MIDI clock, MTC, and MMC from Logic – yeah, expect to see a Mac Pro in broadcast/film/TV applications running audio
Mixer configuration can be set to your own user-definable defaults (huge time saver there, finally)
A clever automatic duplicate erase when you’re merging MIDI recordings
And some new keyboard shortcuts to save you time when editing:
Option + Shift while rubber-band selecting in the Piano Roll: new Time Handles selection.
Option-click track on/off button: loads/unloads the plug-ins on the channel strip (wow, easy A/B!)
Shift-double-click Tracks background: start playback from that position.
And for the “I have an old beat-up MacBook you can pry out of my dead fingers” crowd – finally Freeze works the way it should. (How many of you were desperately freezing tracks while cursing Logic as the CPU meter refused to go down?) From the release notes:
“Freezing a track now unloads its plug-ins to free up resources.”
There are also fixes and performance optimizations and workflow and display improvements throughout. As you’d expect, fixes are concentrated on newer features – Smart Tempo, ARA, Flex, and the like.
So you don’t get any earth-shaking new features unless you’re really into de-essing, but what you do get is some evidence the Apple engineers are working through their log of stuff.
Full disclosure: this week I will refresh my MacBook Pro’s OS and then update Logic.
Hey, say what you will about Apple, but Logic Pro these days is pretty accessible from a user experience perspective. There are numerous powerful competitors that either fail that ease-of-use test or simply lack features you need to get big jobs done in scoring and the like. Maddeningly, a lot get the ease right but lack features, or have insanely powerful features but demand you contort your brain to use them. (Once upon a time, an earlier version of Logic was also far harder to use.)
I know from CDM’s own site stats and plenty of anecdotal evidence that all this matters to music makers. It’s not just Apple brand loyalty that makes Logic last.
It’s an analog-wavetable polysynth with an expressive grid – but that only begins to describe what makes the Polyend Medusa such a unique instrument. Here’s a deep dive into this hybrid synthesizer and what it means musically.
A year after its public debut, the Polyend-Dreadbox collaboration Medusa hybrid synth has gotten a flurry of updates expanding its capabilities. The Medusa caught my eye when it was previewed at last year’s Superbooth extravaganza in Berlin – and has since reappeared full of refined functionality at this year’s edition. The instrument combines Polyend’s expressive grid with a gnarly synthesizer made in collaboration with Dreadbox. So you get a hybrid analog-digital sound engine, which you can use in monophonic or one of two polyphonic modes, and a grid you can use for performance or sequencing.
That description seems obvious and straightforward, but it also doesn’t really fully describe what this thing is. It’s really about the combination of elements. The synth engine gets delightfully grimy – the Dreadbox filter can really scream, especially paired with frequency modulation. And the digital oscillators (from Polyend) stack to give you metallic edge and wavetable madness atop a thick 3-oscillator analog beast. The copious modulation and multiple envelopes provide loads of sound design possibilities, too – you can really go deep with this, since basically everything is assignable to LFOs or envelopes. (That’d be a lot of rack space to get this many oscillators and modulation sources in a Eurorack form.) Combining digital control and wavetables with Dreadbox-supplied analog grunge makes this as much an all-in-one studio as a polysynth.
What really binds this together for me, though, is using the grid to make this more like an instrument. You can lock parameters and scales to steps in the sequencer, and then use elaborate scale mappings and expression options to put sounds beneath your fingertips. This isn’t about menus, but it’s also unlike conventional keyboard synths. The grid and one-press modulation and envelope assignment make the Medusa a portal to sound design, composition, and performance.
The workflow then fits spatially. On your right, you can sculpt sounds and (thanks to a recent update) make on-the-fly assignments of modulation and envelopes with just one press. On your left, the grid can be configured for sequencing and playing. Mix oscillators and shape envelopes and dial modulation live atop that. You can also use the sequencer as a kind of sketchpad for ideas, since sequences are saved with presets.
All of this comes in a long, metal case with MIDI I/O and external audio input. Even the form factor suggests this is an instrument you focus on directly. So whatever you do in sound design should naturally translate to sequencing and playing live.
Here’s the basic approach to sound design workflow – dialing in and layering different analog and digital oscillators, playing with wavetables, shaping envelopes and filter, adding FM (including on the filter), and assigning modulation. Improvised / no talking:
Let’s look at those components individually (now with some of the recent firmware updates in place):
On the synth side, the Medusa has a hybrid 3+3 structure – three analog oscillators, plus three digital oscillators, for a total of six. (There’s an additional noise source, as well, with adjustable color.) To that, you add a filter derived from the Dreadbox Erebus (highpass, 2-pole lowpass, and 4-pole lowpass). There are two fixed envelopes (filter and amplitude), plus three more assignable envelopes. You also get five (!) assignable LFOs. That’s just enough to be readily accessible, but also focused enough that it neatly structures your use of the onboard controls and assignable modulation and sequencing.
The idea is to mix analog + digital + noise in different combinations, which you can layer as monophonic lines or chords, or trigger in turn, with always-accessible mixer controls for each voice + noise.
Oscillator controls. The oscillator section does double duty as analog and digital, so you’ll need to understand how those relate. To save space, there’s a button in the oscillator section labeled DIGITAL.
With digital mode off (analog mode), you get control over the three analog oscillators, plus a pulse width control, and a frequency modulation control for FM between oscillators 1 and 2. You can select ramp, PWM, triangle, and sine waves for each oscillator. You can also hard sync oscillators – 1+2 (sync 2) and 2+3 (sync 3). Note that you will need to give the Medusa some warmup time for these analog oscillators to be in tune; there’s also automated calibration to tune up.
With the digital mode on, you control the three digital oscillators, and get a wavetable shape in addition to the four wave shapes, plus a wavetable control that modulates between different wavetables. (There’s no FM between oscillators 1 and 2, and you don’t get the pulse width control for the digital oscillators – which in the end doesn’t matter much given all the wavetable options.)
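For intuition about what that wavetable-position control is doing, here’s a generic sketch of how such a control typically works – crossfading between adjacent single-cycle tables by a fractional position. It’s an illustration of the common technique, not Polyend’s actual code.

```python
def read_table(table, phase):
    """Linear interpolation within one single-cycle wavetable (phase 0..1)."""
    x = phase * len(table)
    i = int(x) % len(table)
    j = (i + 1) % len(table)
    frac = x - int(x)
    return table[i] * (1.0 - frac) + table[j] * frac

def wavetable_sample(tables, position, phase):
    """Crossfade between adjacent tables by a fractional 'position'
    (0 .. len(tables)-1), so sweeping the control morphs smoothly --
    the same thing an LFO does when it targets wavetable position."""
    i = min(int(position), len(tables) - 2)
    frac = position - i
    a = read_table(tables[i], phase)
    b = read_table(tables[i + 1], phase)
    return a * (1.0 - frac) + b * frac
```

Sweep `position` slowly with an LFO and the timbre glides through the whole stack of waves – the “wavetable madness” in practice.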
The other controls are doubled up to save space, as well. Instead of dedicated coarse and fine tuning knobs, there’s a FINETUNE switch. The FM knob likewise has two functions, selected via switches.
Modulation. There’s more modulation than you’ll likely ever need, between the sequencer steps, five envelopes, and five LFOs. Since there’s only one set of encoders and sliders, you choose which envelope or LFO you want to target. You can toggle that modulation on and off by double-pressing the controls for each.
The latest firmware adds on-the-fly parameter assignment, so you can simply hold down an envelope or LFO, then twist the parameter you want to target. That’s much more fun than scrolling through menus.
Sound design is a blast, but there’s some room for growth, too. LFO shapes morph between square, sine, ramp, and triangle, but there’s no random or sample & hold option, which seems an obvious future addition. Also, it could be nice, I think, to have different wavetables on different oscillators, or separate wavetable position controls. (At least for now, you can set LFOs to target all wavetables or just one wavetable when modulating position, so you can separately modulate the three digital oscillators if you wish.)
Now, you can assign both modulation and envelopes with just one tap, on the fly. With multiple envelopes and LFOs, combined with the sequencer, there’s plenty of choice for composition and sound design.
FM can be applied to the filter and between analog oscillators 1+2.
Musical ideas: synth
Use envelopes and modulation. Envelopes have free-flowing timing, but can each be (independently) looped, creating subtle or rhythmic modulation. And LFOs can be either free or clock-synced. With these two features in concert, you can create both shifting timbres and rhythmic patterns – while assigning them hands-on, rather than diving into menus. (That can be even faster than working with patch cords.)
Work with the different polyphonic modes. Mono play mode stacks all six oscillators onto a single voice, which is great for thick sounds. But the two polyphonic modes offer some unique features. P1 is three-voice polyphonic, with two oscillators per voice. P2 is six-voice polyphonic, and has one amp envelope for each of the six voices.
Change voice priority. In CONFIG > Voice Priority, you can set P1 and P2 from “First” to “Next,” and each trigger will rotate through each of the available oscillators. Remember with P2, that means you have separate envelopes. So you can retrigger the same pitch, or “strum” or roll a chord, or create rhythmic variations… it all makes for some lively variations.
Self-oscillate the filter with tracking. If you turn up resonance and crank TRACK on the filter, you’ll get self-oscillation that’s mapped to the pitch range. (You’ll probably want to turn down master volume here; I don’t yet have a trick for that, but you could also save lower oscillator mixer values with a preset.)
Go mad with FM. Frequency modulating the OSC 1+2 combination can create some wild ring mod-style effects as you play with different octave ranges and tunings.
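For a sense of why FM gets so clangorous: two-operator frequency modulation puts sidebands at the carrier frequency plus and minus multiples of the modulator frequency, so non-integer tuning ratios produce inharmonic, ring mod-like spectra. A minimal sketch of generic two-operator FM (not the Medusa’s circuit):

```python
import math

def fm_sample(t, carrier_hz, mod_hz, index):
    """Classic two-operator FM: the modulator pushes the carrier's phase.
    index=0 gives a pure sine; larger indices spread energy into
    sidebands at carrier_hz +/- k * mod_hz."""
    return math.sin(2.0 * math.pi * carrier_hz * t
                    + index * math.sin(2.0 * math.pi * mod_hz * t))
```

Detune `mod_hz` away from a simple ratio of `carrier_hz` (say, 440 and 227) and the partials stop lining up harmonically – the “ring mod-style” effect described above.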
I think one confusion about the Medusa is, because people see an 8×8 grid of pads, they assume the main function is sequencing. That’s really not how to think of the Medusa pad matrix – it’s better to imagine it as a performance and editing interface as much as a sequencer, and to see ambient/drone/non-metric possibilities along with the usual things you’d expect of an 8×8 layout.
Sequences themselves have a length of 1 to 64 steps. (Yes, with a 1-step sequence, you get basically a repeat function, and with a few steps, a sort of fixed phrase arpeggiator – more on how you’d play that live below.) Steps are fixed-rhythm, with no sub-steps – I do wish there were a way to clock-divide step length from the master tempo, add subdivisions of a step, or even control step timing individually. For now, if you want that, you’ll need to do it externally, via MIDI.
You can set tempo from 10-300 bpm or use an external clock source. And you get control for swing, plus different sequence playback directions (forward, backward, ping pong, and random).
In NOTES mode, you enter pitch. With REC enabled but not PLAY, you can enter and edit steps one at a time. (Pressing a pad creates a pitch, rather than sets a step, so you’d use the big menu encoder to the right of the pads to dial through steps.) With PLAY enabled, you can live record, though everything is still quantized to the step.
The pitch and rhythm stuff is a bit basic, but it’s the GRID mode where the Medusa shines. There, you can set specific steps to contain parameter data. Again, this works in both step and live modes – in live modes, you’ll overwrite parameter data as you move a control. This is what some sequencers call “p-locks” / parameter locks, but here the workflow is different. You can stop the transport, and manually tweak parameters while holding a pad to modify parameters for that step. This means an individual step may contain a whole bunch of layered information.
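As a mental model for those per-step layers, here’s a tiny sketch of the parameter-lock idea – hypothetical names and structure for illustration, not Polyend’s firmware:

```python
class Step:
    """One sequencer step: an optional note plus a dict of parameter
    overrides ('locks') that apply only while this step fires."""
    def __init__(self, note=None):
        self.note = note
        self.locks = {}  # e.g. {"cutoff": 0.2, "wavetable_pos": 12}

def params_for_step(step, patch):
    """Merge a step's locks over the current patch, non-destructively;
    the underlying preset is untouched once the step passes."""
    return {**patch, **step.locks}
```

Because each step carries its own dictionary, holding a pad and tweaking controls can pile as many parameter changes as you like onto that one step – which is how a single step ends up containing “a whole bunch of layered information.”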
At first, it may seem counter-intuitive to separate notes and parameter data on two different screens, but it opens up some new possibilities. You can step-sequence really elaborate sequences of timbral changes. Or – here’s the interesting one – you can trigger different presets as your sequence plays. That lets you ‘perform’ the presets – play with the timbres – the way you normally would with notes.
Not only do you have a powerful step sequencer page dedicated to parameter control, you can think of presets as something you can play live. I don’t know of another sequencer that works quite like this.
Musical ideas: sequencer
Trigger play modes, voice priority, sequence length live: With a sequence playing, it’s possible to toggle play modes (between unison and polyphony), the Voice Priority setting (First or Next, in either of the polyphonic modes), and sequence length, all live without interrupting sequenced playback. So you can have some fun messing about with these settings.
Use GRID for variation. The sequencer only triggers preset changes when the GRID mode is enabled. So you can start a sequence, then toggle your sequenced parameters on and off by switching GRID mode on and off. (You can combine this with live-triggered parameters – more on that below.)
Glide! Combining glide with the polyphonic modes (and adjusting the amplitude envelope, particularly Release as needed) will create some lovely, overlapping portamento effects.
Arpeggiate/transpose. You can now press HOLD + a pad to transpose a sequence live as it plays. With short sequences, this can be a bit like running an arpeggiator or phrase sequencer.
If you just use the pads as a sequencer, you’re really missing half the power of the instrument. The pads also work for playing live, with up to three axes of additional expression (z-axis pressure, and x- and y-position). The pads are also low-profile, so you can easily strum your fingers across them.
Three-axis control can be a little confusing. Only the last pad adds modulation, and it takes a bit of muscle memory to get used to modulating with just the last finger press if you’re playing in a polyphonic context. But the pads are nicely sensitive; I hope there’s the possibility of polyphonic expression internally in future.
As an external controller, Medusa does support an MPE mode, so you can use this – like the Roger Linn Linnstrument – as an MPE controller with compatible devices.
The grid in general is expressive and inspiring. In particular, you might try one of the 40 included scales, which include various exotic options apart from the usual church modes. I especially like the Japanese and Enigmatic options. You can also change not only the scale but the layout (the relationship of notes on the pads).
Musical ideas: pads
Drone mode. Use HOLD to hold up to six notes at a time and drone away (press HOLD, then toggle individual notes on and off). And again, this is also interesting with the different polyphonic modes and glide. You can also use, for instance, z-axis pressure to add modulation as you drone. (One confusing thing about X/Y/Z and HOLD: since only the last trigger uses the X/Y/Z modulation, it can get a bit strange additionally toggling off that step as you hold. I’m working out whether there’s a better solution there.)
Use GRID for triggering: With GRID instead of notes, you can use individual pads to trigger different sounds, or even map an ensemble of sounds (setting up particular pads for percussion, and others for melody, for instance). This also opens up other features, like:
DIY scales. A new feature of the Medusa firmware adds the ability to store pitch in pads, and thus make custom scales. Turn GRID on, and REC, then with FINETUNE on, you can use the oscillator to tune a custom scale, including with microtuning. I’d love to see custom scale modes or Scala support, but this in the meantime has a beautiful analog feel to it.
Bend it: You can bend between notes by targeting Pitch with the x-axis. To keep that range manageable and slide between notes, I suggest a value of just 1 or 2 (instead of the full 100, which will slide over the whole pitch range as you wiggle your finger). You might also consider adding the same on the y-axis, since it is a grid.
Trigger expression. Not only can you trigger modulation live over a sequence in GRID mode, you can also use those triggers to modulate X, Y, and Z targets of your choice as a sequence plays. You can also try modulating expression in NOTES mode over a playing sequence.
Use external control. You can also map to external MIDI aftertouch, pitch, and mod, which opens up novel external or even DIY controllers. (You could connect a Leap Motion or something if you want to get creative, or combine a keyboard and the grid for some wild possibilities.)
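On the DIY scales tip above: if you’d rather calculate a custom tuning than dial it in purely by ear, the math is simple – pitch offsets in cents map to frequency ratios as powers of two. A quick reference sketch (standard tuning math, nothing Medusa-specific):

```python
def cents_to_ratio(cents):
    # 100 cents = one equal-tempered semitone; 1200 cents = one octave
    return 2.0 ** (cents / 1200.0)

def scale_frequencies(root_hz, cents_offsets):
    """Frequencies for a custom scale, given offsets in cents from the root."""
    return [root_hz * cents_to_ratio(c) for c in cents_offsets]
```

For example, a just major third sits at about 386 cents versus 400 in equal temperament – a 14-cent difference that is well within reach of tuning a pad by hand with FINETUNE.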
Medusa takes a little time to get into, as you start to feel comfortable with the sound engine, and adapting to a new way of thinking about the pads – as performance controller plus separate note and parameter sequencer. Once you do, though, I think you begin to get into this as an instrument – one with rich and sometimes wild sound capabilities, always beneath your fingertips.
The result is something that’s really unique and creative. The combination of that edgy, deep digital+analog sound engine with the superb Dreadbox filter, plus all this modulation and sequencing and performance possibility makes the whole feel like a particular instrument – something you want to learn to play.
I really have fallen in love with it as a special instrument in that way. And I find I am really wanting to practice it, both as sound designer and instrumentalist.
At €999, it also holds up against some other fine polysynth choices from Dave Smith, Novation, KORG, and most recently, Elektron. Most importantly, it’s unlike any of those tools, with both its unique, expressive controller and its copious controls and access to sound.
The presence of an instrument like this from a boutique maker, charting some new territory and in a desktop form factor and not only a set of modules, seems a promising sign for synth innovation.
Before modulars became a product, some of the first electronic synthesis experiments made use of test equipment – gear intended to make sound, but not necessarily musically. And now that approach is making a comeback.
Hainbach, the Berlin-based experimental artist, has been helping this time-tested approach to sound reach new audiences.
I actually have never seen a complete, satisfying explanation of the relationship of abstract synthesis, as developed by engineers and composers, to test gear. Maybe it’s not even possible to separate the two. But suffice to say, early in the development of synthesis, you could pick up a piece of gear intended for calibration and testing of telecommunications and audio systems, and use it to make noise.
Why the heck would you do that now, given the availability of so many options for synthesis? Well, for one – until folks like Hainbach and me make a bunch of people search the used market – a lot of this gear is simply being scrapped. Since it’s heavy and bulky, it ranges from cheap to “if you get this out of my garage, you can have it” pricing. And the sound quality of a lot of it is also exceptional. Sold to big industry back in a time when slicing prices of this sort of equipment wasn’t essential, a lot of it feels and sounds great. And just like any other sound design or composition exercise that begins with finding something unexpected, the strange wonderfulness of these devices can inspire.
I got the chance to spend a few days playing with the Waveform Research Centre at Rotterdam’s WORM, a strange and wild collection of these orphaned devices lovingly curated by Dennis Verschoor. And I got sounds unlike anything I was used to. It wasn’t just the devices and their lovely dials that made that possible – it was also the unique approach required when the normal envelope generators and such aren’t available. Human creativity does tend to respond well to obstacles.
Whether or not you go that route, it is worth delving into the history and possibilities – and Hainbach’s video is a great start. It might at the very least change how you approach your next Reaktor patch, SuperCollider code, synth preset, or Eurorack rig.
Gen X and Y just got their Beatles Anthology, basically – and it’s fantastic. Radiohead remind us why we love them with nearly two gigs of demos ripped from (seriously) MiniDiscs.
Maybe it’s taking Radiohead back to the “just a band” phase, but there’s something gorgeous about these stripped-down and earnest productions. And if you don’t want to burden yourself with the 1.8GB, you can stream them to get a rough impression of one of the biggest bands of their generation when … they were developing ideas and didn’t bother to tune their guitars.
Live sets in there, too, sketches, the lot…
The amazing thing about this story is, they evidently aren’t kidding about being “hacked” – it seems someone really did try to ransom all these recordings. (Maybe. It’s certainly a believable possibility.)
Of course, unlike the previous generation’s demos, the 90s produced recordings that are actually half-decent. You’ll hear some charming sounds as mics are moved about, but the quality is pretty crisp – and you get an in-the-room feel missing from the umpteen times we’ve heard Radiohead’s albums and the various covers.
Heck, even though I run a site that celebrates technology, you might just say the band is even a bit better in this raw, punk format, without all the studio work. There’s just way too much to listen to all at once, but £18 gets you what in the 90s we thought was a big file (two gigs is a lot of dialup download time).
Someone could say something about the value of music here, except Radiohead already have given away albums, so really, this is a slight increase of value? I guess?
Enjoy. And maybe dust off your MiniDisc recorder and go make something.
Hard-hitting sub bass and percussion is the focus of SubLab, a new instrument from Future Audio Workshop. And it puts a ton of sound elements into an uncommonly friendly interface. Let’s get our hands on it.
This begins our Tools of Summer series of selections – stuff you’ll want to use when the days are long (erm, northern hemisphere) and you need some new inspiration from instruments to actually use.
We hadn’t heard much lately from Future Audio Workshop. Their ground-breaking Circle instrument was uniquely friendly, clean, and easy to use. At a time when nearly all virtual instruments had virtually unreadable, tiny UIs, Circle broke from the norm with displays you could see easily. Beginners could track signal flow and modulation, and experts (erm, many of them, you know, older and with aging eyes) could be more productive and focused.
SubLab takes that same approach – so much so that a couple of quick shots I posted to Instagram got immediate feedback.
And then it’s just chock full of bass – with a whole lot of potential applications.
Sound layers, plus filter, plus distortion, plus compressor – deceptively simple and powerful.
So, sure, FAW talk trap and hip-hop and future bass and sub basslines – you’ll get those, for sure. But I think you’ll start using SubLab all over the place.
If you just want a recipe for 808 bass, this instrument is there for you. You can layer and filter and overdrive and distort sounds into basslines made from punchy drum bits. Then you discover that this produces interesting melodic lines, too. Or that since you have all the elements of various kick drums – not only from Roland, but sampled from a studio full of drum machines (Vermona to JoMoX) – you … might as well make some punchy kicks and toms.
It’s just fast. And that’s not because the interface is particularly dumbed down – on the contrary, it’s because once all the chrome and tiny controls are out of the way and the designers have focused on what this does, you can get at a lot of options more quickly.
The synth has an easy-to-follow structure – sound, distortion, compressor. Sound is divided into a simple multi-oscillator synth, a sample playback engine, and then the trademarked ‘x-sub’ sub-oscillator. You can then mix these separately, and route a percentage of the synth and sampler to a multi-mode filter. (Don’t miss the essential ‘glide’ control lurking just at the bottom, as I did at first.) Pulling it all together, you get a ‘master’ overview that shows you how each element layers in the resulting sound spectrum.
Also in the sound > synth section, you can easily access multiple envelopes with visual feedback. (Arturia, who I’m also writing about this week, have also gone this route, and it makes a big difference being able to see as well as hear.)
The sampler has essential tracking, pitching, and looping features for this application. The x-sub bit is uniquely controllable – you can set individual harmonic levels just by dragging around purple vertical bars. It’s rare to sculpt sub-bass like this so easily, and it’s addictive.
X-sub (trademarked?) means you can sculpt the harmonics inside the sub-oscillator section just by dragging.
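If you’re curious what that harmonic sculpting amounts to under the hood, here’s a minimal additive-synthesis sketch in Python – my own illustration of the general idea, not FAW’s actual x-sub algorithm, which isn’t public. Each draggable bar corresponds to the level of one sine harmonic above the sub’s fundamental:

```python
import math

def xsub_style_sub(freq, levels, sr=44100, dur=0.1):
    """Additive sub-oscillator sketch: one sine per harmonic, each
    scaled by a user-set level -- the 'drag the purple bars' idea."""
    n_samples = int(sr * dur)
    out = []
    for i in range(n_samples):
        t = i / sr
        # harmonic h+1 of the fundamental, weighted by its bar height
        s = sum(level * math.sin(2 * math.pi * freq * (h + 1) * t)
                for h, level in enumerate(levels))
        out.append(s)
    peak = max(abs(s) for s in out) or 1.0
    return [s / peak for s in out]  # normalize to +/- 1.0

# A 55 Hz sub: strong fundamental, gentler upper harmonics
buf = xsub_style_sub(55.0, [1.0, 0.4, 0.2, 0.1])
```

Raising the upper levels brightens the sub so it reads on small speakers; pulling them down leaves a pure, fundamental-heavy rumble – which is essentially the trade-off those bars let you make by ear.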
The interface is easy enough, but a couple of characteristic additions really complete the package. The sampler section is full of inspiring hardware samples to use as building blocks – great stuff that you might use for your non-melodic kicks, or try out for punchy percussion and melodies even in higher registers. The Distortion also has some compelling modes, like the lovely “darkdrive” and convincing tube and overdrive options.
Tons of hardware samples abound for layering.
There aren’t a lot of presets – it looks like FAW’s plan is to get you hooked, then add more patch packs. But with enough sound design options here, including custom sample loading, you might be fine just making your own.
Really, my only complaint is that I find the filter and compressor a bit vanilla, particularly in this age of so many beautiful modeled options from Native Instruments, Arturia, u-he, and others.
I figured I would be writing this glowing review and telling you, oh yeah, it’s definitely worth $149.
But — damn, this thing is $70, on sale for $40.
Sheesh. Just get it, then. There are lots of deeper and more complex things out there. But this is something else – simple enough that you’ll actually use it to design your own creative sounds. As FAW has shown us before, visual feedback and accessible interfaces combine to make sound design connect with your brain more effectively.
Following nerve damage, Icelandic composer/producer/musician Olafur Arnalds was unable to play the piano. With his ‘Ghost Pianos’, he gets that ability back, through intelligent custom software and mechanical pianos.
It’s moving to hear him tell the story (to the CNN viral video series) – with, naturally, the obligatory shots of Icelandic mountains and close-up images of mechanical pianos working. No complaints:
This frames accessibility in terms any of us can understand. Our bodies are fragile, and indeed piano history is replete with musicians who lost the use of their hands and had to adapt. Here, an accident caused Arnalds to lose dexterity in his left hand, so he needed a way to connect one hand to more parts.
And in the end, as so often is the case with accessibility stories and music technology, he created something that was more than what he had before.
With all the focus on machine learning, a lot of generative algorithmic music continues to work more traditionally. That appears to be the case here – the software analyzes incoming streams and follows rules and music theory to accompany the work. (As I learn more about machine learning, though, I suspect the combination of these newer techniques with the older ones may slowly yield even sharper algorithms – and challenge us to hone our own compositional focus and thinking.)
I’ll try to reach out to the developers, but meanwhile it’s fun squinting at screenshots – you can tell a lot. There’s a polyphonic step sequencer / pattern sequencer of sorts in there, with some variable chance. You can also tell from the screenshots that the pattern lengths are set to be irregular, so you get these lovely polymetric echoes of what Olafur is playing.
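To make that polymetric idea concrete, here’s a toy sketch – my speculation about the general mechanism, not the actual Ghost Pianos software. Two looping patterns of unequal length, each step with its own trigger probability, drift against each other and only realign after a number of steps equal to the least common multiple of their lengths:

```python
import random

def polymetric_steps(patterns, total_steps, seed=1):
    """Run several looping patterns of unequal length in parallel.
    Each pattern is a list of (note, probability) steps; because the
    lengths differ (e.g. 5 against 7), the combined texture drifts
    in and out of phase -- the 'polymetric echo' effect."""
    rng = random.Random(seed)
    timeline = []
    for step in range(total_steps):
        fired = []
        for pat in patterns:
            note, prob = pat[step % len(pat)]  # each loop wraps on its own length
            if note is not None and rng.random() < prob:
                fired.append(note)
        timeline.append(fired)
    return timeline

# A 5-step and a 7-step pattern: they realign only every 35 steps.
a = [(60, 1.0), (None, 0), (62, 0.8), (None, 0), (64, 0.5)]
b = [(48, 1.0), (None, 0), (None, 0), (50, 0.7), (None, 0), (52, 0.4), (None, 0)]
grid = polymetric_steps([a, b], total_steps=35)
```

Even this crude version produces phrases that echo and mutate rather than repeat exactly – which is roughly the quality you hear the ‘ghost’ contributing behind the live playing.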
Of course, what makes this most interesting is that Olafur responds to that machine – human echoes of the ‘ghost.’ I’m struck by how even a simple input can do this for you – like even a basic delay and feedback. We humans are extraordinarily sensitive to context and feedback.
The music itself is quite simple – familiar minimalist elements. If that isn’t your thing, you should definitely keep watching so you get to his trash punk stage. But it won’t surprise you at all that this is a guy who plays Clapping Music backstage – there’s some serious Reich influence.
You can hear the ‘ghost’ elements in the recent release ‘ekki hugsa’, which comes with some lovely joyful dancing in the music video:
re:member debuted the software:
There is a history here of adapting composition to injury. (That’s not even including Robert Schumann, who evidently destroyed his own hands in an attempt to increase dexterity.)
Paul Wittgenstein, who had his entire right arm amputated following a World War I injury, commissioned a number of works for just the left hand. (There’s a surprisingly extensive article on Wikipedia, which definitely retrieves more than I had lying around inside my brain.) Ravel’s Piano Concerto for the Left Hand is probably the best-known result, and there’s even a 1937 recording by Wittgenstein himself. It’s an ominous, brooding performance, made as Europe was plunging itself into violence a second time. But it’s notable in that it’s made even more virtuosic in the single hand – it’s a new kind of piano idiom, made for this unique scenario.
I love Arnalds’ work, but listening to the Ravel – a composer known as whimsical, even crowd-pleasing – I do lament a bit what’s been lost in the push for cheery, comfortable concert music. It seems to me that some of that darkness and edge could come back to the music, and the circumstances of that piece’s composition ought to remind us how necessary those emotions are to our society.
I don’t say that to diss Mr. Arnalds. On the contrary, I would love to hear some of his punk side return. And his quite beautiful music aside, I also hope that these ideas about harnessing machines in concert music may also find new, punk, even discomforting conceptions among some readers here.
Here’s a more intimate performance, including a day without Internet:
And lastly, more detail on the software:
Meanwhile, whatever kind of music you make, you should endeavor to have a promo site that is complete, like this – also, sheet music!
In glitching collisions of faces, percussive bolts of lightning, Lorem has ripped open machine learning’s generative powers in a new audiovisual work. Here’s the artist on what he’s doing, as he’s about to join a new inquisitive club series in Berlin.
Machine learning that derives gestures from System Exclusive MIDI data … surprising spectacles of unnatural adversarial neural nets … Lorem’s latest AV work has it all.
And by pairing producer Francesco D’Abbraccio with a team of creators across media, it brings together a serious think tank of artist-engineers pushing machine learning and neural nets to new places. The project, as he describes it:
Lorem is a music-driven multidisciplinary project working with neural networks and AI systems to produce sounds, visuals and texts. In the last three years I had the opportunity to collaborate with AI artists (Mario Klingemann, Yuma Kishi), AI researchers (Damien Henry, Nicola Cattabiani), video artists (Karol Sudolski, Mirek Hardiker) and music instrument designers (Luca Pagan, Paolo Ferrari) to produce original materials.
Adversarial Feelings is the first release by Lorem, and it’s a 22-minute AV piece plus 9 music tracks and a book. The record will be released on April 19 on Krisis via Cargo Music.
And what about achieving intimacy with nets? He explains:
Neural networks are nowadays widely used to detect, classify and reconstruct emotions, mainly in order to map users’ behaviours and to affect them in effective ways. But what happens when we use machine learning to perform human feelings? And what if we use it to produce autonomous behaviours, rather than to affect consumers? Adversarial Feelings is an attempt to inform non-human intelligence with “emotional data sets”, in order to build an “algorithmic intimacy” through those intelligent devices. The goal is to observe the subjective/affective dimension of intimacy from the outside, to speak about human emotions as perceived by non-human eyes. Transposing them into a new shape helps Lorem to embrace a new perspective, and to recognise fractured experiences.
I spoke with Francesco as he made the plane trip toward Berlin. Friday night, he joins a new series called KEYS, which injects new inquiry into the club space – AV performance, talks, all mixed up with nightlife. It’s the sort of thing you get in festivals, but in festivals all those ideas have been packaged and finished. KEYS, at a new post-industrial space called Trauma Bar near Hauptbahnhof, is a laboratory. And, of course, I like laboratories. So I was pleased to hear what mad science was generating all of this – the team of humans and machines alike.
So I understand the ‘AI’ theme – am I correct in understanding that the focus to derive this emotional meaning was on text? Did it figure into the work in any other ways, too?
Neural networks and AI were involved in almost every step of the project. On the musical side, they were used mainly to generate MIDI patterns, to deal with SysEx from a digital sampler, and to manage recursive re-sampling and intelligent timestretch. Rather than generating the final audio, the goal here was to simulate the musician’s behaviors and creative processes.
On the video side, [neural networks] (especially GANs [generative adversarial networks]) were employed both to generate images and to explore the latent spaces through custom-tailored algorithms, in order to let the system edit the video autonomously, according to the audio source.
What data were you training on for the musical patterns?
MIDI – basically I trained the NN on patterns I create.
And wait, SysEx, what? What were you doing with that?
Basically I record every change of state of a sampler (i.e. the automations on a knob), and I ask the machine to “play” the same patch of the sampler according to what it learned from my behavior.
What led you to getting involved in this area? And was there some education involved just given the technical complexity of machine learning, for instance?
I have always tried to express my work through multidisciplinary projects. I am very fascinated by the way AI approaches data, allowing us to work across different media with the same perspective. Intelligent devices are really a great tool for melting languages together. On the other hand, the emergence of AI discloses political questions we have been trying to face for some years at Krisis Publishing.
I started working through the Lorem project three years ago, and I was really a newbie on the technical side. I am not a hyper-skilled programmer, and building a collaborative platform has been really important to Lorem’s development. I had the chance to collaborate with AI artists (Klingemann, Kishi), researchers (Henry, Cattabiani, Ferrari), digital artists (Sudolski, Hardiker)…
How did the collaborations work – Mario I’ve known for a while; how did you work with such a diverse team; who did what? What kind of feedback did you get from them?
To be honest, I was very surprised by how open and responsive the AI community is! Some of the people involved are really huge points of reference for me (like Mario, for instance), and I didn’t expect to really get them on Adversarial Feelings. Some of the people involved prepared original content for the release (Mario, for instance, realized a video for “The Sky would Clear What the …”, Yuma Kishi realized the girl/flower on “Sonnet#002”, and Damien Henry did the train hallucination on the “Shonx – Canton” remix). With other people involved, the collaboration was more based on producing something together, such as a video, a piece of code, or a way to explore latent spaces.
What was the role of instrument builders – what are we hearing in the sound, then?
Some of the artists and researchers involved realized videos from the audio tracks (Mario Klingemann, Yuma Kishi). Damien Henry gave me the right to use a video he made with his Next Frame Prediction model. Karol Sudolski and Nicola Cattabiani worked with me to develop, respectively, “Are Eyes invisible Socket Contenders” + “Natural Readers” and “3402 Selves”. Karol Sudolski also realized the video part on “Trying to Speak”. Nicola Cattabiani developed the ELERP algorithm with me (to let the network edit videos according to the music) and GRUMIDI (the network working with my MIDI files). Mirek Hardiker built the data set for the third chapter of the book.
I wonder what it means for you to make this an immersive performance. What’s the experience you want for that audience; how does that fit into your theme?
I would say Adversarial Feelings is an AV show totally based on emotions. I always try to prepare the most intense, emotional, and direct experience I can.
You talk about the emotional content here and its role in the machine learning. How are you relating emotionally to that content; what’s your feeling as you’re performing this? And did the algorithmic material produce a different emotional investment or connection for you?
It’s a bit like when I was a kid listening to my recorded voice… it was always strange: I wasn’t fully able to recognize my voice as it sounded from the outside. I think neural networks can be an interesting tool for observing our own subjectivity through external, non-human eyes.
The AI hook is of course really visible at the moment. How do you relate to other artists who have done high-profile material in this area recently (Herndon/Dryhurst, Actress, etc.)? And do you feel there’s a growing scene here – is this a medium that has a chance to flourish, or will the electronic arts world just move on to the next buzzword in a year before people get the chance to flesh out more ideas?
I messaged Holly Herndon a couple of times online… I’ve been really into her work since her early releases, and when I heard she was working with AI systems I was trying to finish the Adversarial Feelings videos… so I was so curious to discover her way of dealing with intelligent systems! She’s a really talented artist, and I love the way she’s able to embed conceptual/political frameworks inside her music. Proto is a really complex, inspiring device.
More generally, I think the advent of a new technology always discloses new possibilities in artistic practices. I directly experienced the impact of the internet (and of digital culture) on art, design, and music when I was a kid. I’m thrilled that at this point new configurations are not yet codified in established languages, and I feel that working on AI today gives me the chance to be part of a public debate about how to set new standards for the discipline.
What can we expect to see / hear today in Berlin? Is it meaningful to get to do this in this context in KEYS / Trauma Bar?
I am curious, too, to be honest. I am very excited to take part in such a situation, alongside artists and researchers I really respect and enjoy. I think the folks at KEYS are trying to do something beautiful and challenging.
Live in Berlin, 7 June
Lorem will join Lexachast (an ongoing collaborative work by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel), N1L (an A/V artist, producer/dj based between Riga, Berlin, and Cairo), and a series of other tantalizing performances and lectures at Trauma Bar.
808 day, sure. But let’s pause for 606 day – the logical anniversary of the 1982 TR-606, a drum machine squeezed inside a tiny enclosure that looks like a 303 but isn’t. It’s the lesser known runt of the Roland family, and you kind of love it for that alone.
If you think about it, the 606 was way ahead of its time. Now selling customers on buying a little bass machine, then buying a little drum machine to go with it is par for the course. But in the early 80s, the music that would make the 303 and even the 606 desirable … hadn’t been made yet.
The TR-606 is certainly simple. It’s got all analog circuitry inside, for seven parts – kick, snare, two toms, open and closed hats, cymbal. There’s an accent control. It isn’t the most sought-after sound of the TR series, by any stretch, but now that you’ve heard way too many 808 and 909 hats, you might appreciate this just for some variety.
It can trigger other gear. It’s got accent. It was designed so you could chain 606 models together. So it’s not a terrible little machine. And it is – I’ll stand by this – the cutest drum machine Roland ever made. (I have to admit, I just went back to my boutique TR-09 this week and had a blast. Sometimes getting something tiny and restricted is oddly inspiring. An itsy bitsy teenie weenie silver TR drum machine-y?)
It’s famous, and yet mercifully no one has ever called it iconic. It just is what it is. Here’s Tatsuya of KORG fame giving it a once-over – as he should, since nothing channels the spirit of the 606 (even from Roland) quite like the KORG volca series he helmed:
And here’s Reverb.com giving it the once-over:
The 606 has been in some great music – Aphex Twin, Nine Inch Nails, Autechre, Orbital, plus one favorite artist that shares its name – Kid606. Moby I think also used one, probably in that spell when he and I were dating that he doesn’t like to talk about. (Man, did that beetroot smoothie we shared together while programming 606 patterns mean nothing to you? Nothing?!)
It’s also been heavily modded and copied. It’s a reminder, basically, that drum machines need not look like a truck. They can be a funny sidecar you can easily squeeze into spaces where no one else can parallel park. When people talked about buying unloved Roland drum machines for $50 in pawn shops in the 80s – the TR-606 was one likely candidate. This was one of the machines cheap enough to enable people without cash to change music.
You know the sound. Because it was tinnier than the 808 and 909, the 606 often stood in when someone wanted something with an even thinner Roland sound.
Put that sound with the 303, and you really do get a combo that makes sense.
Bonus – this bit: you can swap between PLAY and WRITE pattern modes while the TR-606 is running – so you can edit as a pattern plays. The other TRs ideally would work that way, but they don’t.
And yes, Roland at various times has brought this back in … strange ways, like on the SP-606 which really … has nothing to do with the TR-606. But here it is, because D-Beam! It’s also been spotted inside the recent recreations like the TR-8S and even the Serato-collaboration DJ controllers.