Arturia’s V Collection 7 continues to expand as the go-to software library of every vintage synth you would ever want. But let’s focus on one new gem: the brilliant CZ-101 remake.
First off, V Collection 7 is worth a look. Arturia keep making their mega-bundle of software instruments better. That means reworking the modeling inside these tools, adding new features, and – of course – continuing to expand the library of available instruments. As modeling has improved, these instruments have gotten more and more like the originals in sound, not just in function and look. At the same time, Arturia keeps beefing up those originals with new features – so the authentic sound engines get new sound design features atop them.
The EMS Synthi V makes an appearance in the new V Collection, too – if your tastes go more 70s than 80s. And it’s a big deal.
Version 7 continues to balance the desires of the casual keyboardist and the obsessive synth sound designer – and everything in between. So if you just want to add a convincing Mellotron or B-3, you’re covered – with an all-new Mellotron and a total ground-up sound engine overhaul for the B-3 V2. Jimmy Smith Strawberry Fields Forever, check and mate.
If the idea of a whole bunch of unfamiliar keyboards and control layouts is unappealing, V Collection 7 also includes the new Analog Lab 4, which consolidates all these things into easy presets and macro controls, and hundreds of new presets in their “Synthopedia.” That way if you do want to look up the way a familiar sound was produced – then tweak it yourself – you can.
Of course, if you read CDM, your favorite preset may be “default template,” and the idea of getting lost for hours in a vintage synth control layout may be the whole selling point. For that crowd, the V Collection 7 adds the EMS Synthi V and the CZ-101 from Casio, circa 1985.
The ability to just dial up a menu and say, “do I want an Oberheim SEM or a CS-80” is already pretty crazy, and the number of choices continues to grow. So my approach to V Collection is actually to ignore all those presets – apologies, dear sound designer friends – and try to focus on one instrument. It’s a bit like what you do in a packed studio – you pull out one piece of gear, and say, hey, tonight is going to be about me and this instrument and very little else.
I want to talk about the CZ-101 because it’s long been one of my favorite instruments, and it’s a fairly unsung one. The CZ is somehow too easy, too friendly, too compact, too inexpensive to have the kind of adoration of some of the other 80s and 70s throwbacks. It’s not a collectors’ item. You can still find them at flea markets. So yeah, Arturia are quick to drop the names of artists who have used it, like Salt-N-Pepa and Vince Clarke. But to me the whole appeal of the CZ-101 is that it’s for people who love synths, not people trying to emulate their heroes.
Of course, you could for these reasons go get an actual CZ-101. That means Arturia has to sweeten the deal a bit so the software can compete. They did just that. Let’s dive in.
CZ V reproduces the simple hardware interface (at bottom) but also expands to this view with lots of additional visual feedback and features, at top.
Phase Distortion lovers, rejoice
The original CZ-101 is about two things: a simple front panel layout, and phase distortion. If you just want to drop the CZ into a session as-is, CZ V does that.
Phase distortion synthesis isn’t so much a different synthesis method as it is a compelling way of mucking about with two digital oscillators. It’s easy enough to dismiss PD as Casio’s cheaper, non-patented answer to Yamaha’s DX7 and frequency modulation (FM). But now as we grow more accustomed to digital, non-harmonic timbres, PD is better appreciated on its own terms – as a way of producing unique digital color.
In short, what phase distortion does for you is to add rich harmonic content to sound. It can be a distortion. It can sound something like a resonant filter – in its own way. And because it’s normally using synced oscillators – here’s the important bit – it’s way easier to control than FM generally is.
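The phase-bending trick is easier to see in code. Here’s a minimal sketch of a phase distortion oscillator in Python – the two-segment warp and the `distortion` parameter are illustrative, not the CZ’s actual DCW scaling:

```python
import math

def pd_osc(phase, distortion):
    """One sample of a phase-distortion oscillator.

    phase: position in the cycle, 0..1
    distortion: 0 = pure cosine; approaching 1 = bright, saw-like
    The phase is bent through a two-segment linear map before it
    reads a cosine - roughly the Casio trick, simplified.
    """
    knee = 0.5 * (1.0 - distortion)  # where the fast segment ends
    knee = max(knee, 1e-6)           # guard against division by zero
    if phase < knee:
        warped = 0.5 * phase / knee                         # fast half
    else:
        warped = 0.5 + 0.5 * (phase - knee) / (1.0 - knee)  # slow half
    return math.cos(2 * math.pi * warped)

# With distortion = 0 this is just a cosine cycle;
# raising distortion adds harmonics without any filter.
cycle = [pd_osc(i / 256, 0.8) for i in range(256)]
```

Note there’s no filter anywhere in that code – the brightness comes entirely from how fast the phase sweeps through the first part of the wave, which is why PD can sound “like a resonant filter” while staying trivially stable.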
On the Casio, this allows some unique filtering and sound shaping and distortion sounds that can easily be controlled by macros. And on the Arturia remake, graphical access to envelopes and expanded power means that you can use that shaping creatively.
The CZ V goes a bit nuts versus what an original CZ-101 would give you. Let’s compare 1985 and 2019.
Arturia’s effects mean you don’t have to listen to the CZ dry.
The modulation matrix makes this feel as much modern soft synth as 1980s hardware.
The original oscillators are there – sine, saw, square, pulse, resonance, double-sine, saw-pulse – as are the 8-stage envelope generators and vibrato and LFOs. You can even import SysEx from the original. But being able to program these features on a display makes sound design accessible.
In addition to making hidden CZ features more visible, Arturia have expanded what’s possible:
32-voice polyphony (the original had just 8).
A modulation matrix – no, really.
More modulation: a Sample and Hold module, 2 LFOs with 6 waveforms, 3 source combinators, and an arpeggiator
New effects – while an authentic approach to the CZ might leave it dry, now you get all the Arturia multi-effects (additions like chorus and reverb sound especially nice, for instance)
There’s visual feedback for everything, too.
Where the CZ fits in
In some ways, the CZ-101 is weirdly going from dated 80s thrift store find to … ahead of its time? After all, we’re seeing modular makers embrace these kinds of digital oscillator effects, and phase and phase distortion even inspired the upcoming sequel to Native Instruments’ Massive, the new Massive X.
Envelope editing is powerful – and includes animated visual feedback.
The CZ architecture is uniquely suited to making a lot of different sounds – including percussion and modulating timbres and edgy digital business – with a minimum of resources. So there’s a noise source built-in. You can modulate with the noise source. There’s ring modulation.
Using the CZ’s DADSR and multi-segment envelopes, you can then sculpt those percussive and metallic timbres over time – including using the DCW (Digitally Controlled Waveform) envelope, which morphs between a sine wave and a distorted wave.
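A multi-segment envelope like the ones driving the DCW can be sketched as a simple list of rate/level stages – the stage format here is illustrative, not the CZ’s actual eight-stage parameters:

```python
def envelope(stages, t):
    """Value of a multi-segment envelope at time t.

    stages: list of (duration_seconds, target_level) pairs, in the
    spirit of the CZ's multi-stage envelopes (names and units here
    are a sketch, not the hardware's actual rate/level encoding).
    """
    level, elapsed = 0.0, 0.0
    for duration, target in stages:
        if t < elapsed + duration:
            frac = (t - elapsed) / duration   # position within this segment
            return level + (target - level) * frac
        level, elapsed = target, elapsed + duration
    return level  # past the last segment, hold the final level

# A DCW-style shape: snap open fast, then decay toward a mellow sine.
dcw_stages = [(0.01, 1.0), (0.5, 0.2)]
```

Feed that value into a distortion parameter (like the `distortion` of a PD oscillator) and you get the characteristic CZ “filter sweep” without any filter.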
The reason I’m using the CZ V to talk about the new V Collection edition, though, is that it’s an instrument where it feels like Arturia’s authentic side matches up with the “vintage on steroids” additions. So, by the time you have something like the new Synthi, you’re already presented with tons of sound design possibilities. Arturia has added some amazing ideas there – a step sequencer, a beat-synced LFO, plus onboard effects, atop all the new graphical options for working with envelopes and modulation.
The thing is, on a Synthi, that starts to feel like too much. With the Synthi, I was almost tempted to force myself not to expand the tab full of new additions. If I want an open-ended sound environment on a computer, I can use Reaktor, not try to recreate a 1970s take on the idea.
On the Arturia edition of the Casio, though, all these additions help the CZ graduate from fun toy to serious sound design tool. The visual envelopes make more sense. Effects are something most CZ owners invested in anyway. And more polyphony means you can run one instance and do a lot with it. Heck, even the matrix is easier to follow than on the original EMS Synthi because the architecture of the CZ-101 is so straightforward. In other words, because the original did less, it’s both a good match for software remake and for some thoughtful additions – which Arturia delivers.
Check these templates for an easy way to get started making your own sounds.
Here’s a little sketch I made with this. This is all one patch – noise and ring modulation and layering the ring source, plus some DCW and pitch envelope use, are what generate all those sounds. I added Arturia’s TridA-Pre and some reverb from Softube’s TSAR-1 Reverb and … that was it.
Logic Pro X 10.4.5, seen onstage at WWDC, is now available. And yes, it supports the new ultra-high-end Mac Pro – but there are fixes and performance optimizations for everyone, with or without new machines.
10.4.5 looks like the most pro-oriented Logic Pro in a long time. Apple has been aggressive with its update cadence for Logic for years running now, even with free upgrades, and this version is no exception.
This release also marks the end of the road for Mac OS X 10.12 Sierra. The new minimum OS requirement is 10.13.6 High Sierra. (Mojave is seeming stable these days, and it’s summertime, so maybe now is a good time to do a full backup and take the plunge.)
First up – yes, the banner feature from Apple’s perspective is that the new Logic runs on the new Mac Pro. Under the hood, that means support for up to 56 cores, the kind of massive multiprocessing the new Mac Pro can do.
The use case for this kind of processing power is slim, but then, that’s what the ‘pro’ concept is all about. If you’re doing artist relations, you may have a film composer with advanced technical needs and a shelf full of Academy Awards. Even one user in your user base can be critical.
That said, I think the real story here is that Apple is shaking the tree across the whole code base – meaning these performance optimizations and fixes could benefit you even if you’re running on a beat-up older MacBook, too.
So, think really big track counts – which could be meaningful since even some mid-range CPUs can theoretically churn through a lot of tracks, to say nothing of that shiny Mac Pro tower.
Increased Number of Tracks and Channels, up to:
1000 stereo audio channel strips
1000 software instrument channel strips
1000 auxiliary channel strips
1000 external MIDI tracks
12 sends per channel strip
Way back around 2006, I heard from a Macworld reader complaining about exactly these limitations – and that I hadn’t mentioned them in a review. (See above: that one user thing.)
16 ports of MIDI clock, MTC, and MMC from Logic – yeah, expect to see a Mac Pro in broadcast/film/TV applications running audio
Mixer configuration can be set to your own user-definable defaults (huge time saver there, finally)
A clever automatic duplicate erase when you’re merging MIDI recordings
And some new keyboard shortcuts to save you time when editing:
Option + Shift while rubber-band selecting in the Piano Roll: creates a new Time Handles selection.
Option-click track on/off button: loads/unloads the plug-ins on the channel strip (wow, easy A/B!)
Shift-double-click Tracks background: start playback from that position.
And for the “I have an old beat-up MacBook you can pry out of my dead fingers” crowd – finally Freeze works the way it should. (How many of you were desperately freezing tracks while cursing Logic as the CPU meter refused to go down?) From the release notes:
“Freezing a track now unloads its plug-ins to free up resources.”
There are also fixes and performance optimizations and workflow and display improvements throughout. As you’d expect, fixes are concentrated on newer features – Smart Tempo, ARA, Flex, and the like.
So you don’t get any earth-shaking new features unless you’re really into de-essing, but what you do get is some evidence the Apple engineers are working through their log of stuff.
Full disclosure: this week I will refresh my MacBook Pro’s OS and then the Logic release.
Hey, say what you will about Apple, but Logic Pro these days is pretty accessible from a user experience perspective. There are numerous powerful competitors that either fail that ease-of-use test or simply lack features you need to get big jobs done in scoring and the like. Maddeningly, a lot get the ease right but lack features, or have insanely powerful features but demand you contort your brain to use them. (Once upon a time, an earlier version of Logic was also far harder to use.)
I know from CDM’s own site stats and plenty of anecdotal evidence that all this matters to music makers. It’s not just Apple brand loyalty that makes Logic last.
It’s an analog-wavetable polysynth with an expressive grid – but that only begins to describe what makes the Polyend Medusa such a unique instrument. Here’s a deep dive into this hybrid synthesizer and what it means musically.
A year after its public debut, the Polyend-Dreadbox collaboration Medusa hybrid synth has gotten a flurry of updates expanding its capabilities. The Medusa caught my eye when it was previewed at last year’s Superbooth extravaganza in Berlin – and has since reappeared full of refined functionality at this year’s edition. The instrument combines Polyend’s expressive grid with a gnarly synthesizer made in collaboration with Dreadbox. So you get a hybrid analog-digital sound engine, which you can use in monophonic or one of two polyphonic modes, and a grid you can use for performance or sequencing.
That description seems obvious and straightforward, but it also doesn’t really fully describe what this thing is. It’s really about the combination of elements. The synth engine gets delightfully grimy – the Dreadbox filter can really scream, especially paired with frequency modulation. And the digital oscillators (from Polyend) stack to give you metallic edge and wavetable madness atop a thick 3-oscillator analog beast. The copious modulation and multiple envelopes provide loads of sound design possibilities, too – you can really go deep with this, since basically everything is assignable to LFOs or envelopes. (That’d be a lot of rack space to get this many oscillators and modulation sources in a Eurorack form.) Combining digital control and wavetables with Dreadbox-supplied analog grunge makes this as much an all-in-one studio as a polysynth.
What really binds this together for me, though, is using the grid to make this more like an instrument. You can lock parameters and scales to steps in the sequencer, and then use elaborate scale mappings and expression options to put sounds beneath your fingertips. This isn’t about menus, but it’s also unlike conventional keyboard synths. The grid and one-press modulation and envelope assignment make the Medusa a portal to sound design, composition, and performance.
The workflow then fits spatially. On your right, you can sculpt sounds and (thanks to a recent update) make on-the-fly assignments of modulation and envelopes with just one press. On your left, the grid can be configured for sequencing and playing. Mix oscillators and shape envelopes and dial modulation live atop that. You can also use the sequencer as a kind of sketchpad for ideas, since sequences are saved with presets.
All of this comes in a long, metal case with MIDI I/O and external audio input. Even the form factor suggests this is an instrument you focus on directly. So whatever you do in sound design should naturally translate to sequencing and playing live.
Here’s the basic approach to sound design workflow – dialing in and layering different analog and digital oscillators, playing with wavetables, shaping envelopes and filter, adding FM (including on the filter), and assigning modulation. Improvised / no talking:
Let’s look at those components individually (now with some of the recent firmware updates in place):
On the synth side, the Medusa has a hybrid 3+3 structure – three analog oscillators, plus three digital oscillators, for a total of six. (There’s an additional noise source, as well, with adjustable color.) To that, you add a filter derived from the Dreadbox Erebus (highpass, 2-pole lowpass, and 4-pole lowpass). There are two fixed envelopes (filter and amplitude), plus three more assignable envelopes. You also get five (!) assignable LFOs. That’s just enough to be readily accessible, but also focused enough that it neatly structures your use of the onboard controls and assignable modulation and sequencing.
The idea is to mix analog + digital + noise in different combinations, which you can layer as monophonic lines or chords, or trigger in turn, with always-accessible mixer controls for each voice + noise.
Oscillator controls. The oscillator section does double duty as analog and digital, so you’ll need to understand how those relate. To save space, there’s a button in the oscillator section labeled DIGITAL.
With digital mode off (analog mode), you get control over the three analog oscillators, plus a pulse width control, and a frequency modulation control for FM between oscillators 1 and 2. You can select ramp, PWM, triangle, and sine waves for each oscillator. You can also hard sync oscillators – 1+2 (sync 2) and 2+3 (sync 3). Note that you will need to give the Medusa some warmup time for these analog oscillators to be in tune; there’s also automated calibration to tune up.
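Hard sync is worth a quick illustration, since it’s what gives those 1+2 and 2+3 routings their ripping character. Here’s a rough Python sketch – sine waves and the parameter names are my own simplification, not the Medusa’s actual oscillator code:

```python
import math

def hard_sync(master_freq, slave_freq, sr, n):
    """Render n samples of a slave sine oscillator hard-synced to a master.

    Every time the master completes a cycle, the slave's phase resets -
    a rough sketch of sync-style routings like the Medusa's sync 2/sync 3.
    """
    out = []
    master_phase = slave_phase = 0.0
    for _ in range(n):
        out.append(math.sin(2 * math.pi * slave_phase))
        master_phase += master_freq / sr
        slave_phase += slave_freq / sr
        if master_phase >= 1.0:        # master wrapped: reset the slave too
            master_phase -= 1.0
            slave_phase = 0.0
        elif slave_phase >= 1.0:
            slave_phase -= 1.0
    return out
```

Detuning the slave against the master then changes timbre rather than pitch – the output stays locked to the master’s period, which is why sync sweeps sound so aggressive yet remain in tune.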
With the digital mode on, you control the three digital oscillators, and get a wavetable shape in addition to the four wave shapes, plus a wavetable control that modulates between different wavetables. (There’s no FM between oscillators 1 and 2, and you don’t get the pulse width control for the digital oscillators – which in the end doesn’t matter much given all the wavetable options.)
The other controls are doubled up to save space, as well. Instead of dedicated coarse and fine tuning, there’s a FINETUNE switch. The FM knob has two functions, also via switches.
Modulation. There’s more modulation than you’ll likely ever need, between the sequencer steps, five envelopes, and five LFOs. Since there’s only one set of encoders and sliders, you choose which envelope or LFO you want to target. You can toggle that modulation on and off by double-pressing the controls for each.
The latest firmware adds on-the-fly parameter assignment, so you can simply hold down an envelope or LFO, then twist the parameter you want to target. That’s much more fun than scrolling through menus.
Sound design is a blast, but there’s some room for growth, too. LFO shapes morph between square, sine, ramp, and triangle, but there’s no random or sample & hold option, which seems an obvious future addition. Also, it could be nice, I think, to have different wavetables on different oscillators, or separate wavetable position controls. (At least for now, you can set LFOs to target all wavetables or just one wavetable when modulating position, so you can separately modulate the three digital oscillators if you wish.)
Now, you can assign both modulation and envelopes with just one tap, on the fly. With multiple envelopes and LFOs, combined with the sequencer, there’s plenty of choice for composition and sound design.
FM can be applied to the filter and between analog oscillators 1+2.
Musical ideas: synth
Use envelopes and modulation. Envelopes have free-flowing timing, but can each be (independently) looped, creating subtle or rhythmic modulation. And LFOs can be either free or clock-synced. With these two features in concert, you can create both shifting timbres and rhythmic patterns – while assigning them hands-on, rather than diving into menus. (That can be even faster than working with patch cords.)
Work with the different polyphonic modes. Mono play mode stacks all six oscillators onto a single voice, which is great for thick sounds. But the two polyphonic modes offer some unique features. P1 is three-voice polyphonic, with two oscillators per voice. P2 is six-voice polyphonic, and has one amp envelope for each of the six voices.
Change voice priority. In CONFIG > Voice Priority, you can set P1 and P2 from “First” to “Next,” and each trigger will rotate through each of the available oscillators. Remember with P2, that means you have separate envelopes. So you can retrigger the same pitch, or “strum” or roll a chord, or create rhythmic variations… it all makes for some lively variations.
Self-oscillate the filter with tracking. If you turn up resonance and crank TRACK on the filter, you’ll get self-oscillation that’s mapped to the pitch range. (You’ll probably want to turn down master volume here; I don’t yet have a trick for that, but you could also save lower oscillator mixer values with a preset.)
Go mad with FM. Frequency modulating the OSC 1+2 combination can create some wild ring mod-style effects as you play with different octave ranges and tunings.
I think one confusion about the Medusa is, because people see an 8×8 grid of pads, they assume the main function is sequencing. That’s really not how to think of the Medusa pad matrix – it’s better to imagine it as a performance and editing interface as much as a sequencer, and to see ambient/drone/non-metric possibilities along with the usual things you’d expect of an 8×8 layout.
Sequences themselves have a length from 1 to 64 steps. (Yes, with a 1-step sequence, you get basically a repeat function, and with a few steps, a sort of fixed phrase arpeggiator – more on how you’d play that live below.) Steps are fixed rhythm, with no sub-steps – I do wish there were a way to clock divide step length from the master tempo, or add subdivisions of a step, or even control step timing individually. For now, if you want that, you’ll need to do it externally, via MIDI.
You can set tempo from 10-300 bpm or use an external clock source. And you get control for swing, plus different sequence playback directions (forward, backward, ping pong, and random).
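The tempo-and-swing math is simple enough to sketch. This assumes 16th-note steps and a 0–1 swing control – my own illustrative parameters, not the Medusa’s actual ranges:

```python
def step_durations(bpm, steps, swing=0.5):
    """Per-step durations in seconds for a 16th-note step sequencer.

    swing = 0.5 means straight timing; higher values lengthen the
    odd-numbered steps and shorten the even ones in each pair.
    (Illustrative parameters, not the Medusa's actual spec.)
    """
    pair = 60.0 / bpm / 2.0   # a pair of 16th steps spans one 8th note
    return [pair * swing if i % 2 == 0 else pair * (1.0 - swing)
            for i in range(steps)]
```

The nice property of defining swing per pair is that total bar length never changes – only how the two steps inside each pair split the time.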
In NOTES mode, you enter pitch. With REC enabled but not PLAY, you can enter and edit steps one at a time. (Pressing a pad creates a pitch, rather than sets a step, so you’d use the big menu encoder to the right of the pads to dial through steps.) With PLAY enabled, you can live record, though everything is still quantized to the step.
The pitch and rhythm stuff is a bit basic, but it’s the GRID mode where the Medusa shines. There, you can set specific steps to contain parameter data. Again, this works in both step and live modes – in live modes, you’ll overwrite parameter data as you move a control. This is what some sequencers call “p-locks” / parameter locks, but here the workflow is different. You can stop the transport, and manually tweak parameters while holding a pad to modify parameters for that step. This means an individual step may contain a whole bunch of layered information.
At first, it may seem counter-intuitive to separate notes and parameter data on two different screens, but it opens up some new possibilities. You can step-sequence really elaborate sequences of timbral changes. Or – here’s the interesting one – you can trigger different presets as your sequence plays. That lets you ‘perform’ the presets – play with the timbres – the way you normally would with notes.
Not only do you have a powerful step sequencer page dedicated to parameter control, you can think of presets as something you can play live. I don’t know of another sequencer that works quite like this.
Musical ideas: sequencer
Trigger play modes, voice priority, sequence length live: With a sequence playing, it’s possible to toggle play modes (between unison and polyphony), the Voice Priority setting (First or Next, in either of the polyphonic modes), and sequence length, all live, without impacting sequenced playback. So you can have some fun messing about with these settings.
Use GRID for variation. The sequencer only triggers preset changes when the GRID mode is enabled. So you can start a sequence, then toggle your sequenced parameters on and off by switching GRID mode on and off. (You can combine this with live-triggered parameters – more on that below.)
Glide! Combining glide with the polyphonic modes (and adjusting the amplitude envelope, particularly Release as needed) will create some lovely, overlapping portamento effects.
Arpeggiate/transpose. You can now press HOLD + a pad to transpose a sequence live as it plays. With short sequences, this can be a bit like running an arpeggiator or phrase sequencer.
If you just use the pads as a sequencer, you’re really missing half the power of the instrument. The pads also work for playing live, with up to three axes of additional expression (z-axis pressure, plus x and y position). The pads are also low-profile, so you can easily strum your fingers across them.
Three-axis control can be a little confusing. Only the last pad adds modulation, and it takes a bit of muscle memory to get used to modulating with just the last finger press if you’re playing in a polyphonic context. But the pads are nicely sensitive; I hope there’s the possibility of polyphonic expression internally in future.
As an external controller, Medusa does support an MPE mode, so you can use this – like the Roger Linn LinnStrument – as an MPE controller with compatible devices.
The grid in general is expressive and inspiring. In particular, you might try one of the 40 included scales, which include various exotic options apart from the usual church modes. I especially like the Japanese and Enigmatic options. You can also change not only the scale but the layout (the relationship of notes on the pads).
Musical ideas: pads
Drone mode. Use HOLD to trigger up to six notes at a time and drone away (press HOLD, then toggle individual notes on and off). And again, this is also interesting with different polyphonic modes and glide. You can also use, for instance, the Z-axis pressure to add additional modulation as you drone. (One confusing thing about X/Y/Z and HOLD – since only the last trigger uses the X/Y/Z modulation, it can get a bit strange additionally toggling off that step as you hold. I’m working on whether there’s a better solution there.)
Use GRID for triggering: With GRID instead of notes, you can use individual pads to trigger different sounds, or even map an ensemble of sounds (setting up particular pads for percussion, and others for melody, for instance). This also opens up other features, like:
DIY scales. A new feature of the Medusa firmware adds the ability to store pitch in pads, and thus make custom scales. Turn GRID on, and REC, then with FINETUNE on, you can use the oscillator to tune a custom scale, including with microtuning. I’d love to see custom scale modes or Scala support, but this in the meantime has a beautiful analog feel to it.
Bend it: You can bend between notes by targeting Pitch with the x-axis. To keep that range manageable and slide between notes, I suggest a value of just 1 or 2 (instead of the full 100, which will slide over the whole pitch range as you wiggle your finger). You might also consider adding the same on the y-axis, since it is a grid.
Trigger expression. Not only can you trigger modulation live over a sequence in GRID mode, you can also use those triggers to modulate X, Y, and Z targets of your choice as a sequence plays. You can also try modulating expression in NOTES mode over a playing sequence.
Use external control. You can also map to external MIDI aftertouch, pitch, and mod, which opens up novel external or even DIY controllers. (You could connect a Leap Motion or something if you want to get creative. Or combine a keyboard and the grid, for some wild possibilities.)
Medusa takes a little time to get into, as you start to feel comfortable with the sound engine and adapt to a new way of thinking about the pads – as a performance controller plus separate note and parameter sequencer. Once you do, though, I think you begin to get into this as an instrument – one with rich and sometimes wild sound capabilities, always beneath your fingertips.
The result is something that’s really unique and creative. The combination of that edgy, deep digital+analog sound engine with the superb Dreadbox filter, plus all this modulation and sequencing and performance possibility makes the whole feel like a particular instrument – something you want to learn to play.
I really have fallen in love with it as a special instrument in that way. And I find I am really wanting to practice it, both as sound designer and instrumentalist.
At 999EUR, it also holds up against some other fine polysynth choices from Dave Smith, Novation, KORG, and most recently, Elektron. Most importantly, it’s unlike any of those tools, both with its unique and expressive controller and its copious controls and access to sound.
The presence of an instrument like this from a boutique maker, charting some new territory and in a desktop form factor and not only a set of modules, seems a promising sign for synth innovation.
In glitching collisions of faces, percussive bolts of lightning, Lorem has ripped open machine learning’s generative powers in a new audiovisual work. Here’s the artist on what he’s doing, as he’s about to join a new inquisitive club series in Berlin.
Machine learning that derives gestures from System Exclusive MIDI data … surprising spectacles of unnatural adversarial neural nets … Lorem’s latest AV work has it all.
And by pairing producer Francesco D’Abbraccio with a team of creators across media, it brings together a serious think tank of artist-engineers pushing machine learning and neural nets to new places. The project, as he describes it:
Lorem is a music-driven multidisciplinary project working with neural networks and AI systems to produce sounds, visuals and texts. In the last three years I had the opportunity to collaborate with AI artists (Mario Klingemann, Yuma Kishi), AI researchers (Damien Henry, Nicola Cattabiani), video artists (Karol Sudolski, Mirek Hardiker) and music instrument designers (Luca Pagan, Paolo Ferrari) to produce original materials.
Adversarial Feelings is the first release by Lorem, and it’s a 22 min AV piece + 9 music tracks and a book. The record will be released on APR 19th on Krisis via Cargo Music.
And what about achieving intimacy with nets? He explains:
Neural networks are nowadays widely used to detect, classify and reconstruct emotions, mainly in order to map users’ behaviors and to affect them in effective ways. But what happens when we use machine learning to perform human feelings? And what if we use it to produce autonomous behaviors, rather than to affect consumers? Adversarial Feelings is an attempt to inform non-human intelligence with “emotional data sets,” in order to build an “algorithmic intimacy” through those intelligent devices. The goal is to observe the subjective/affective dimension of intimacy from the outside, to speak about human emotions as perceived by non-human eyes. Transposing them into a new shape helps Lorem to embrace a new perspective, and to recognise fractured experiences.
I spoke with Francesco as he made the plane trip toward Berlin. Friday night, he joins a new series called KEYS, which injects new inquiry into the club space – AV performance, talks, all mixed up with nightlife. It’s the sort of thing you get in festivals, but in festivals all those ideas have been packaged and finished. KEYS, at a new post-industrial space called Trauma Bar near Hauptbahnhof, is a laboratory. And, of course, I like laboratories. So I was pleased to hear what mad science was generating all of this – the team of humans and machines alike.
So I understand the ‘AI’ theme – am I correct in understanding that the focus to derive this emotional meaning was on text? Did it figure into the work in any other ways, too?
Neural networks and AI were involved in almost every step of the project. On the musical side, they were used mainly to generate MIDI patterns, to deal with SysEx from a digital sampler, and to manage recursive re-sampling and intelligent timestretch. Rather than generating the final audio, the goal here was to simulate the musician’s behaviors and creative processes.
On the video side, [neural networks] (especially GANs [generative adversarial networks]) were employed both to generate images and to explore the latent spaces through custom-tailored algorithms, in order to let the system edit the video autonomously, according to the audio source.
What data were you training on for the musical patterns?
MIDI – basically I trained the NN on patterns I create.
And wait, SysEx, what? What were you doing with that?
Basically I record every change of state of a sampler (i.e. the automations on a knob), and I ask the machine to “play” the same patch of the sampler according to what it learned from my behavior.
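The interview doesn’t specify the model behind this, but the general recipe – record a stream of knob values, then have a model “play” the patch in the same style – can be sketched with something as simple as a first-order Markov chain over quantized controller values. Everything below (the function names, the toy filter-cutoff gesture) is illustrative, not Lorem’s actual code:

```python
import random
from collections import defaultdict

def train_markov(values):
    """First-order Markov model over a stream of (quantized) knob values."""
    transitions = defaultdict(list)
    for prev, nxt in zip(values, values[1:]):
        transitions[prev].append(nxt)
    return transitions

def generate(transitions, start, steps, rng=random.Random(42)):
    """'Play' the patch: emit new knob movements in the learned style."""
    out = [start]
    for _ in range(steps):
        choices = transitions.get(out[-1])
        if not choices:            # dead end: restart from a known state
            choices = list(transitions)
        out.append(rng.choice(choices))
    return out

# A recorded filter-cutoff gesture (0-127 values, as MIDI CC would arrive):
recorded = [10, 20, 30, 40, 30, 20, 10, 20, 30, 40]
model = train_markov(recorded)
performed = generate(model, start=10, steps=16)
```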
What led you to getting involved in this area? And was there some education involved just given the technical complexity of machine learning, for instance?
I have always tried to express my work through multidisciplinary projects. I am very fascinated by the way AI approaches data, allowing us to work across different media with the same perspective. Intelligent devices are really a great tool for blending languages. On the other hand, the emergence of AI raises political questions we have been trying to face for some years at Krisis Publishing.
I started working through the Lorem project three years ago, and I was really a newbie on the technical side. I am not a hyper-skilled programmer, and building a collaborative platform has been really important to Lorem’s development. I had the chance to collaborate with AI artists (Klingemann, Kishi), researchers (Henry, Cattabiani, Ferrari), digital artists (Sudolski, Hardiker)…
How did the collaborations work – Mario I’ve known for a while; how did you work with such a diverse team; who did what? What kind of feedback did you get from them?
To be honest, I was very surprised at how open and responsive the AI community is! Some of the people involved are really huge points of reference for me (like Mario, for instance), and I didn’t expect to really get them on Adversarial Feelings. Some of the people involved prepared original content for the release (Mario, for instance, realised a video on “The Sky would Clear What the …”, Yuma Kishi realized the girl/flower on “Sonnet#002”, and Damien Henry did the train hallucination on the “Shonx – Canton” remix). With other people involved, the collaboration was more based on producing something together, such as a video, a piece of code or a way to explore latent spaces.
What was the role of instrument builders – what are we hearing in the sound, then?
Some of the artists and researchers involved realized videos from the audio tracks (Mario Klingemann, Yuma Kishi). Damien Henry gave me the right to use a video he made with his Next Frame Prediction model. Karol Sudolski and Nicola Cattabiani worked with me in developing respectively “Are Eyes invisible Socket Contenders” + “Natural Readers” and “3402 Selves”. Karol Sudolski also realized the video part on “Trying to Speak”. Nicola Cattabiani developed the ELERP algorithm with me (to let the network edit videos according to the music) and GRUMIDI (the network working with my MIDI files). Mirek Hardiker built the data set for the third chapter of the book.
I wonder what it means for you to make this an immersive performance. What’s the experience you want for that audience; how does that fit into your theme?
I would say Adversarial Feelings is an AV show totally based on emotions. I always try to prepare the most intense, emotional and direct experience I can.
You talk about the emotional content here and its role in the machine learning. How are you relating emotionally to that content; what’s your feeling as you’re performing this? And did the algorithmic material produce a different emotional investment or connection for you?
It’s a bit like when I was a kid and I was listening to my recorded voice… it was always strange: I wasn’t fully able to recognize my voice as it sounded from the outside. I think neural networks can be an interesting tool to observe our own subjectivity from external, non-human eyes.
The AI hook is of course really visible at the moment. How do you relate to other artists who have done high-profile material in this area recently (Herndon/Dryhurst, Actress, etc.)? And do you feel there’s a growing scene here – is this a medium that has a chance to flourish, or will the electronic arts world just move on to the next buzzword in a year before people get the chance to flesh out more ideas?
I’ve messaged Holly Herndon online a couple of times… I’ve been really into her work since her early releases, and when I heard she was working on AI systems I was trying to finish the Adversarial Feelings videos… so I was so curious to discover her way of dealing with intelligent systems! She’s a really talented artist, and I love the way she’s able to embed conceptual/political frameworks inside her music. Proto is a really complex, inspiring device.
More generally, I think the advent of a new technology always discloses new possibilities in artistic practices. I directly experienced the impact of the internet (and of digital culture) on art, design and music when I was a kid. I’m thrilled that at this point new configurations are not yet codified in established languages, and I feel working on AI today gives me the possibility to be part of a public debate about how to set new standards for the discipline.
What can we expect to see / hear today in Berlin? Is it meaningful to get to do this in this context in KEYS / Trauma Bar?
I am curious too, to be honest. I am very excited to take part in such a situation, alongside artists and researchers I really respect and enjoy. I think the guys at KEYS are trying to do something beautiful and challenging.
Live in Berlin, 7 June
Lorem will join Lexachast (an ongoing collaborative work by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel), N1L (an A/V artist, producer/dj based between Riga, Berlin, and Cairo), and a series of other tantalizing performances and lectures at Trauma Bar.
808 day, sure. But let’s pause for 606 day – the logical anniversary of the 1982 TR-606, a drum machine squeezed inside a tiny enclosure that looks like a 303 but isn’t. It’s the lesser known runt of the Roland family, and you kind of love it for that alone.
If you think about it, the 606 was way ahead of its time. Nowadays, selling customers a little bass machine, then a little drum machine to go with it, is par for the course. But in the early 80s, the music that would make the 303 and even the 606 desirable … hadn’t been made yet.
The TR-606 is certainly simple. It’s got all analog circuitry inside, for seven parts – kick, snare, two toms, open and closed hats, cymbal. There’s an accent control. It isn’t the most sought-after sound of the TR series, by any stretch, but now that you’ve heard way too many 808 and 909 hats, you might appreciate this just for some variety.
It can trigger other gear. It’s got accent. It was designed so you could chain 606 models together. So it’s not a terrible little machine. And it is – I’ll stand by this – the cutest drum machine Roland ever made. (I have to admit, I just went back to my boutique TR-09 this week and had a blast. Sometimes getting something tiny and restricted is oddly inspiring. An itsy bitsy teenie weenie silver TR drum machine-y?)
It’s famous, and yet mercifully no one has ever called it iconic. It just is what it is. Here’s Tatsuya of KORG fame giving it a once-over – as he should, as nothing channels the spirit of the 606 (even from Roland) quite like the entry of the KORG volca series he helmed:
And here’s Reverb.com giving it the once over:
The 606 has been in some great music – Aphex Twin, Nine Inch Nails, Autechre, Orbital, plus one favorite artist that shares its name – Kid606. Moby I think also used one, probably in that spell when he and I were dating that he doesn’t like to talk about. (Man, did that beetroot smoothie we shared together while programming 606 patterns mean nothing to you? Nothing?!)
It’s also been heavily modded and copied. It’s a reminder, basically, that drum machines need not look like a truck. They can be a funny sidecar you can easily squeeze into spaces where no one else can parallel park. When people talked about buying unloved Roland drum machines for $50 in pawn shops in the 80s – the TR-606 was one likely candidate. This was one of the machines cheap enough to enable people without cash to change music.
You know the sound. Because it was tinnier than the 808 and 909, the 606 often stood in when someone wanted something with an even thinner Roland sound.
Put that sound with the 303, and you really do get a combo that makes sense.
Bonus – this bit. You can swap between PLAY and WRITE pattern modes while the TR-606 is running – so you can edit as a pattern is playing. The other TRs would ideally work that way, but they don’t.
And yes, Roland at various times has brought this back in … strange ways, like on the SP-606 which really … has nothing to do with the TR-606. But here it is, because D-Beam! It’s also been spotted inside the recent recreations like the TR-8S and even the Serato-collaboration DJ controllers.
There’s a giant expensive cheese grater Mac and display and new versions of all Apple’s platforms. But what’s going on with iTunes? iPadOS? And what else might matter to musicians and visual artists? Here’s a round-up.
iTunes is getting split into Music, Podcasts, and TV. This you probably heard – Apple is breaking up iTunes and releasing fresh new Mac apps with more focus. That’s caused some people to panic – but don’t panic yet. Apart from the likelihood that you’ll be able to continue using iTunes for now, the new Music app may give you reason to switch – without losing existing functionality or libraries.
iTunes download sales aren’t going away. Apple made a big change when it went from the iTunes Music Store – which offered paid downloads – to the ability to stream most of its catalog in Apple Music, for a subscription fee. But that announcement was made in June 2015. Apple confirms you’ll still be able to buy downloads and access purchases in the new Music app. The music industry is still torn between the download and streaming models, but this week’s announcements don’t really change much as far as Apple.
Music Store is “a click away.” Here’s the thing: far from being bad news for download sales, if the Music app is cleaner and more pleasurable to use than iTunes, it could actually improve visibility of the Music Store and give a little boost to sales. You still see streaming options by default, and Apple is promoting their own recommendations. But that’s the trend with Spotify, too – it’s not necessarily good for music producers and independent music, but it’s also not news.
In fact, the real news is, Apple might be more interested in growing music revenue, not less. Here’s the thing to remember – Apple is an iPhone business ($31 billion in the second quarter of this year), but it’s also a services business. Services are what is growing, and services are what set records in the quarter Apple just reported. In fact, Services outpaced the Mac and iPad businesses in that same quarter – combined.
$11.45 billion: Services
$5.51 billion: Mac
$4.87 billion: iPad
Killing downloads makes no sense for Apple. If anything, it makes sense for them to find ways to grow music purchases. Basically, Apple cares about music revenue just as musicians care about it – even if Apple’s goal is to get a bite of that, uh, fruit.
Music appears to do what iTunes did. All the major playlist, library management, and sync and conversion features of iTunes appear to be coming to the new Music app, too. It reportedly will even burn CDs, a feature dating back to the early iTunes “Rip, Mix, Burn” days. Apple also says you’ll see updated Library pages and easier typing to find what you want, plus a refreshed player. (9to5mac called it weeks ago.)
Ars Technica got some clarification of this. The main thing is, you can import your existing library without losing anything. And you’ll sync in the file system (which makes more sense, frankly). Apple Music may turn out to be more iTunes than iTunes.
Devices are now in the Finder, not iTunes. Sync, backup, update, restore in Finder, plus get cloud sync options – rather than digging around iTunes.
Music may even work with your DJ software. Many DJs currently manage libraries in iTunes, then sync with desktop software like Rekordbox, TRAKTOR, and Serato. We don’t have a specific answer on how this will work – specifically, if something like the current iTunes XML format for metadata will be available. But the fact that the new Music app syncs using Finder, in the file system, is encouraging. Watch this space for more information.
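For context, the iTunes XML that DJ software reads today is an ordinary XML property list, so Python’s standard plistlib can parse it. Here’s a sketch against a stripped-down, made-up stand-in for that file – the key names (`Tracks`, `Name`, `Artist`, `BPM`, `Location`) follow the current iTunes export format, but whether the new Music app keeps producing this file is exactly the open question:

```python
import plistlib

# A stripped-down stand-in for an exported iTunes "Library.xml";
# real exports carry many more keys per track.
LIBRARY_XML = b"""<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
  <key>Tracks</key>
  <dict>
    <key>1001</key>
    <dict>
      <key>Name</key><string>Acid Line</string>
      <key>Artist</key><string>Kid606</string>
      <key>BPM</key><integer>140</integer>
      <key>Location</key><string>file:///Music/acid-line.aif</string>
    </dict>
  </dict>
</dict>
</plist>
"""

library = plistlib.loads(LIBRARY_XML)
tracks = library["Tracks"].values()
playlist = [(t["Artist"], t["Name"], t.get("BPM")) for t in tracks]
```

DJ apps do a more elaborate version of this read: walk the Tracks dictionary, resolve each Location URL to a file on disk, and import the metadata into their own library.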
It’s not clear what happens to iTunes on Windows going forward. If you think iTunes on the Mac is due for a refresh, you should see the clunky Windows port. Since Apple is making “Apple Music” part of macOS, and building as it always does with native tools, it’s unclear what Windows users will get going forward. Given the new sync stuff is all tied to the file system, this gets even murkier.
In the same Ars piece, Apple confirmed they’re keeping iTunes for Windows for now. But that goes without saying – otherwise Apple would break their music product for a huge number of their users – and still doesn’t answer the future situation.
Sidecar looks very cool – for everything from sketching and drawing to a new gestural input method and shortcuts.
Apple’s Sidecar will make it easier to use your iPad with your Mac. It’s what Duet Display already does – and that app was made by ex-Apple engineers – but Apple is promising native integration of the iPad as a second display, plus support for Apple Pencil. I’ll keep using Duet on my Windows machine, but I’m betting the Apple-native integration will dominate on the Mac. Sidecar also does more than Duet ever did – with additional gestures, inserting sketches into apps, modifiers for pro apps, and native developer support.
(So far, of pro apps, Final Cut Pro, Motion, and Illustrator are listed – though not Logic, in case you think of a way of sketching into your music arrangements.)
Zoom a second display. Independent second monitor zoom should come in very handy in multi-monitor editing of both video and music.
Uh, this might break some drivers. I’ll quote Apple’s documentation here: “Previously many hardware peripherals and sophisticated features needed to run their code directly within macOS using kernel extensions, or kexts. Now these programs run separately from the operating system, just like any other app, so they can’t affect macOS if something goes wrong.” Obviously, we’ll need to check in on compatibility of audio drivers and copy protection for audio software.
Sophisticated voice control. Apple is significantly advancing everyone’s “Tea, Earl Grey, Hot” Star Trek voice command fantasies with new, more accurate, more powerful, more integrated, lower-latency voice control. There’s no sign yet of how this might get used in pro audio or visual apps, but you can bet someone is thinking about it.
QuickTime gets an update. It’s probably been since the days of the long-lamented QuickTime Pro 7 that we got QuickTime application features to write home about. But there are some compelling new features – turn a folder of images into a motion sequence in any format (yes!), open a more powerful Movie Inspector, and show accurate timecode, plus export transparency in ProRes 4444.
Snapshots with restore. I’ve long complained that macOS lacks the snapshot features of Windows – which let you easily roll back your system to a state before you, like, screwed something up. There’s now “Restore from snapshot.” Apple only mentions third-party software, but it seems recent file system changes will mean this should also work with ill-behaved OS updates from Apple, too. (Yes, sometimes even Apple tech can go wrong.)
Apple not only announced major updates to iOS in iOS 13, but also a new more pro-focused iPadOS.
Expect more sharing between macOS and iOS/iPadOS development. AudioUnit is listed as a shared framework, allowing developers to target Mac and iOS with a single SDK. You can also expect AV frameworks like Core Audio, and other media and 3D tools. Of course, that was always the vision Apple had with its mobile OS – one that can even trace some lineage back to early work done pre-Apple at NeXT. That said, while this SDK is appealing, many developers will continue to look elsewhere so they’re not restricted to Apple platforms, depending on their use case.
You’ll need specific devices to support the new OS. iOS 13 requires iPhone SE / 6s or better, or 7th-gen iPod touch. iPadOS is even more limited – the iPad Pro, iPad Air 3rd gen or Air 2 or better, iPad mini 4 or better, and 5th-gen or better iPad.
iPadOS: external storage. Finally, you can plug USB storage into your iPad and navigate the external file system – a huge boon to managing photos, video, audio recordings, and even USB sticks for DJ sets. Yes, of course, Android and all desktop OSes do this already, but it’s definitely welcome on the iPad.
iPadOS: better file management. The Files app has been updated with columns, and you can share whole folders via iCloud Drive. (Finally and … finally.)
iPadOS: ‘desktop’-style browser. Apple says you get something more like the desktop Safari on your iPad – so you can use more sites and you get a download manager.
iPadOS: mouse support. This is an accessibility feature, but the combination of touch and mouse will be useful to everyone – like so many accessibility features. I expect it’ll also make working with tools like Cubasis way more fun. Basically, your external mouse or trackpad gets to behave like a very accurate finger. It’s not a desktop mouse so much as it is a way to access touch via the mouse:
So, on mouse support… Apple made clear to me it is an ACCESSIBILITY FEATURE first and foremost. Meant for users who literally cannot access their devices without a mouse, joystick, whatnot. As @stroughtonsmith found, it’s in AssistiveTouch menu.
Many iOS music makers want to route audio between apps – just as you would in a studio. But news came this week that Apple would drop support for its own IAA (Inter App Audio), used by apps like KORG Gadget, Animoog, and Reason Compact. What will that mean? I spoke with Audiobus’ creator to find out.
Michael Tyson created popular music apps Audiobus and Loopy. And he’s made frameworks for other developers, too, not only supporting countless developers working with Audiobus, but also creating the framework The Amazing Audio Engine, now part of Audiokit. So he’s familiar with both what users and developers want here.
Audiobus is key. At first, iOS music apps were each an island. Audiobus changed all that, by suggesting users might want to combine apps the way they do on a stompbox pedalboard or when wiring gear together in a studio. Take an interesting synth, add a delay that sounds nice with it, patch that into a recording app – you get the idea. That expectation was also familiar from plug-in formats on desktop and inter-app tools like the open source JACK and Soundflower. And Tyson’s team developed this before Apple followed with their own IAA or the plug-in format AUv3.
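Conceptually, what Audiobus pioneered is routing: each app is a node, and the user patches nodes into a chain – synth into delay into recorder. The real SDK is an Objective-C/Swift framework passing live audio buffers between processes; the toy Python sketch below only illustrates the chain idea, with made-up stand-ins for each “app”:

```python
import math

def synth(n, freq=110.0, sr=44100):
    """Toy 'synth app': a mono sine burst of n samples."""
    return [math.sin(2 * math.pi * freq * i / sr) for i in range(n)]

def delay(signal, time=2205, feedback=0.5):
    """Toy 'delay app': a single feedback tap, with a tail appended."""
    out = list(signal) + [0.0] * time
    for i in range(time, len(out)):
        out[i] += feedback * out[i - time]
    return out

def recorder(signal):
    """Toy 'recorder app': report what it received."""
    return len(signal), max(abs(s) for s in signal)

# Patch the apps together, Audiobus-style: synth -> delay -> recorder
frames, peak = recorder(delay(synth(4410)))
```

The point of the format is exactly this composability: any synth can feed any effect, which can feed any recorder, without the apps knowing about each other in advance.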
So now, having pushed their own format, Apple is abandoning it. iOS and the new iPadOS will deprecate IAA, according to the iOS 13 beta release notes.
This won’t mean you lose access to your IAA apps right away. “Deprecated” in Apple speak generally means that something remains available in this OS release but will disappear in some major release that follows. Apple often deprecates tech quickly – as in one major release later (iOS 14?) – but that’s anyone’s guess, and can take longer.
That is still a worry for many users, as many iOS developers do abandon apps without updates. It’s tough enough to make money on an initial release, tougher still to squeeze any money out of upgrades – and iOS developers are often as small as one-person operations. Sometimes they just go get another job. That may mean for backwards compatibility it even makes sense to hold on to one old iPad and keep it from updating – not only because of this development, but to retain consistent support for a selection of instruments and effects.
But if you’re worried about Audiobus dying in iOS 13 – don’t. Michael explains to CDM what’s going on.
Can you comment on the deprecation of Audiobus and IAA for iOS? It’s safe to say this should mean compatibility at least for the foreseeable future, but not much future in OS updates after that, given Apple’s past record?
To be specific, this is a deprecation of IAA rather than Audiobus – Audiobus is a combination of a host app, and a communication technology built into supporting third party apps. The latter is presently based on IAA, but doesn’t have to be.
As for the IAA deprecation, I consider this a very positive move by Apple. The technology that replaces it, Audio Unit v3, is a big step forward in terms of usability and robustness, and focusing their own attention and that of the developer community on AUv3 is a good thing. I doubt IAA is going anywhere any time soon though; deprecations can last many years.
Does this mean the Audiobus app will reach its end of life? Do you have plans for further development in other areas?
Not at all. I’ve got lots of plans for Audiobus, to increase its value as an audio unit host, and possibly to fill the gap left by IAA if it’s ever switched off.
Do we lose anything by shifting to AUv3 versus IAA? (I have to admit I have a slightly tough time wrapping my head round this myself, in that there’s a workflow paradigm shift here, so it’s not so fair to compare the enabling technologies alone…)
AUv3 is actually quite impressive lately, and continues to grow. As you say, they’re pretty different workflows, so it can be tricky to compare. The shortcomings we see I largely put down to developers not fully exploiting the opportunities of the platform – myself included! This will only improve going forward, I suspect.
There is one pretty big downside, which is that implementing AUv3 support in an app is a lot harder than implementing IAA, which itself is harder than implementing Audiobus support. It’s the difference between just a few lines of code, and a whole restructure of an app. Minutes vs days or weeks; worse if there’s file management involved. For apps that want to host audio units (on the receiving end), it’s a lot more work too, as they would need to implement all of the audio unit selection and routing themselves, rather than letting Audiobus do all the work and just receiving the audio at the end.
This is the reason there are still plenty of apps that only do Audiobus or IAA – my own apps Loopy and Samplebot included! If those apps that don’t have AUv3 yet don’t update in time and Apple ever pull the plug on IAA, those will just stop working. And it’s possible we’ll see less adoption of AUv3 for new apps.
But if things do go that way, I’m completely open to the possibility of stepping in to fill the gap left by IAA; there’s no reason Audiobus couldn’t continue to function as it does right now without IAA, as this is how it worked in the beginning. But we’ll wait and see what happens.
AUv3 plug-in format is supported by instruments and effects, like this RM-1 Wave Modulator from Numerical Audio.
Is there some way to re-imagine Audiobus using AUv3?
Audiobus actually already has great AUv3 support built in, and lots of users are already on exclusively AUv3 setups. I’m continuing to add stuff to make the workflow even better, like MIDI learn and MIDI sync – and 2-up split screen coming soon.
Have you heard reaction from other developers?
Not as yet, no.
So you see a justification to Apple going this direction?
Sure, I’d say it’s so we can all focus on the new hotness that is AUv3. IAA was never enormously stable, and felt like a bridging technology until something like AUv3 came along. The resources of the audio team at Apple are just better put towards working on AUv3.
Thanks, Michael. We’ll keep an eye on this one, and if there’s anything CDM can do to pass on useful information to developers interested in adding AUv3 support, I imagine we can do that, too.
Apple promised something special – and modular – at the pro desktop level, and they’re delivering something ambitious. Welcome the return of big metal desktops with tons of PCI slots and maxed-out power supplies. And if you missed Macs that look like cheese graters, you also get your wish.
In a nice touch, they’ve also added wheels so you can roll this thing around a studio.
And studio is what they have in mind. Following an hour of consumer-focused iPad and iPhone stuff and things on your watch, the pitch for the new Mac Pro is about video editing, music and audio production, 3D, and gaming.
Most of this stuff is on the visual side, despite the mention of lots of “virtual instruments” in the keynote (cue cheesy “Kenya” music), partly because audio users don’t need this amount of power for most of what they do. But visual people, this again looks exciting – at last.
Music folks, there’s also a new release of Logic Pro, which I’ll write about separately. It seems to be the Hans Zimmer school of benchmarking, with 1000 virtual instruments. For the first time in a long time, Apple shows Logic and Final Cut together.
The system appears to max out everything:
PCI slots are back.
CPU. Now top-of-range Intel Xeon – with up to 28 cores.
GPU. A huge power supply and new connections, plus “Thunderbolt throughout”, mean support for one or two top-of-the-line AMD Radeon Pro Vega II GPUs. (Apple continues its preference for AMD; there was no mention of rival NVIDIA.) It’s all part of a new connection module Apple calls MPX.
Afterburner for video. This is actually an FPGA that assists with video codec processing, for faster proxies and whatnot, or direct RAW editing.
PCI expansion. This is finally back – tons of it. So you can add (mostly graphics) add-in cards. It’ll be interesting to see if the audio market goes back to working with PCI, after largely moving to interconnects like USB and Thunderbolt, which let vendors target laptop and desktop owners at the same time. But for graphics, it’s huge.
Lots of electricity. Don’t expect to save on your electricity bill or save the planet with this one. A massive 1.4 kW power supply runs the whole thing.
Quiet cooling. The GPU interestingly uses a massive heat sink, but there are tons of fans to move air through the device. Some of us remember when this went awry with the “jet engine” Macs of the past, but Apple promises it’ll be quiet.
Apple shows 1000 tracks in Logic at once.
Logic’s threading makes use of all that insane number of cores.
And there’s a new display, of course – the Pro Display XDR for your desktop. Apple’s displays have tended to command a price premium, so a lot of pros opt for other brands, but here they seem to be leaning into that with an ultra-high-end option. There’s a massive contrast ratio, color range (“extreme”), and high density. It all connects with Thunderbolt 3, and there’s a new stand design.
That means you can use this with your MacBook Pro, too. They would like you to buy six of them for your desktop. (“Whoo!” she shouts, Ballmer style. I’m sure they would like to sell that many.)
High-spec memory is part of the story here. And lots of it, if you want.
Apple goes to a high-end GPU again – and has another modular format for updating it (MPX).
MPX allows one or two high-end AMD GPUs.
The CPU is a star here – loaded up with cores.
Afterburner is an FPGA-based add-on for assisting video editors with handling high-res footage.
There’s a display to go with this, too.
Just get ready for some sticker shock.
There’s tons of innovation here, but that also means early adopters will be taking some risks. More ambition tends to mean more potential points of failure. But it’s exciting to see Apple do this kind of innovation on the Mac again – and with the actual needs of pros in mind. I look forward to seeing how this pans out.
The high-end Mac will cost you – US$5999, though Apple compares it to high-end machines at the 8-grand level. Coming in the fall.
I think I’ll probably think about getting one when… the next generation arrives and these prices drop. But the logic here makes sense: the pro market for studios has become more rarified. If Apple can make a case that these are machines that will last a longer time, I could imagine these machines becoming very popular in those core segments. Musicians, even serious pros, will probably still largely stick with capable laptops, but for video and high-end visuals, the need is real. And the pay is better. Just saying.
You know the Minimoog and the modular. But do you know The Operator – a business telephone? Or the Moog air hockey game? The Moog name wound up in some strange places in the 80s.
These creations have little to do with Bob Moog. The company first known as R.A. Moog underwent buyouts by other manufacturers, before Bob Moog left the company bearing his name in 1977. Then around 1981, Moog turned to contract manufacturing – at around the same time as the last Minimoog came off the assembly line. Management bought out the company in 1983 and did even more contract work.
But some of the weird side tracks that happened next are nothing if not intriguing. And synth manufacturers diversifying isn’t actually that strange a concept. We have to remember that part of what allows our industry to make weird devices like boutique modules is that we can source components and contract manufacturing from companies making other stuff. (Case in point – I spent Friday morning at ALFA in Riga, who partner with Erica Synths, Gamechanger Audio, and others. Even ALFA gets the lion’s share of revenue from other stuff – in their case, it seemed to be electronic safe circuitry and supplying the Russian car industry. That’s to say nothing of factories in Shenzhen, China.)
So, sure, the most infamous contract synth was the Concertmate MG-1 for Tandy Corp (sold under Realistic, the brand name used by Radio Shack). But there’s more. As Moog Electronics in the mid-80s, the company made subway door openers and climate control systems. And then these:
The Operator (originally the Telesys 3) in 1983 was a business phone with some features I’d find handy today, even if they’re dated:
A digital clock with stopwatch, automatic call timing, and alarms
Custom ring tones, plus a timer that sets the ringer to mute automatically
Tons of memory positions and automation
Built-in paper address book
Automatic redial for getting through on busy numbers
A “privacy detector” that warns you if someone has picked up the line and is listening in
— plus this being the 80s, it also boasted all kinds of archaic compatibility features so it would work with touch, rotary, and pulse lines and corporate PBX and interfacing. Some things we definitely won’t miss.
Of course, the main synth connection here is, Moog Electronics accidentally predicted the FM synth that would one day come from Ableton. Ahem. But the “Moog Telecommunications” name tells you they aspired to make more devices, even if that never happened.
The Moog air hockey table surfaced in 2012 in a Gearslutz thread, captured by user plaidemu. If we look back to 2004, we find some trivia background on what this was – evidently also from around 1983 or so.
Moog’s logo is on the scoreboard because they made the sound generation circuits. User vorlon42 (whoa, is that a Babylon 5 reference crossed with a Hitchhiker’s Guide reference?):
About 20 years ago, a Buffalo, NY-based company called Innovative Concepts in Entertainment rolled out a heavy-duty arcade-quality table hockey game called Chexx. Like the old “slot hockey” games many of us who grew up in the northern US and Canada had when we were kids, we could control each player (forwards, defensemen, and goaltender) by pushing and pulling a rod for each player, and turn the player by twisting the rod left and right. The playing arena was encased in a hard lucite dome, so that the puck wouldn’t fly out of the arena.
On top of the dome was a box scoreboard with three lights on each of its four sides, and sound-generation circuitry that would play crowd noises and organ “charge” riffs. The electronics for the game were manufactured by… Moog Music. The Moog logo was featured prominently on the scoreboard.
The Chexx game, and successive versions, can be found in various game rooms, arcades, amusement parks, and sports bars around the world. The most recent version is called Super Chexx. (Unfortunately, it lacks the Moog music circuitry.)
I love that the Russia-US matchup lets you recreate the miracle on ice. (Well, unless Russia wins, of course.)
A music system for the Commodore
The Moog Song Producer was a very useful looking interface for the Commodore 64 – something you might want even now, if you’re a chip music fan. It’s a combination of software (for sequencing) and I/O for both MIDI and analog signal:
· 1 MIDI in
· 1 MIDI thru
· 4 MIDI outs
· 8 drum trigger outs
· 2 footswitch ins
· 1 clock/sync in
· 1 clock/sync out
Friend of the site (and Retro Thing alum) Bohus Blahut wrote in to Matrixsynth in the heady days of 2005 to add more detail:
These aren’t actually rare at all. I’ve seen them on Ebay dozens of times. I think that I got mine for $30 a few years back. I haven’t used it yet (know how that feels?), but it is an amazing package. The thing that would make it even more amazing is if Moog had ever come out with the device mentioned in the manual; an analog sound module. How hip would that be?
Long before the 2008 Paul Vo Moog Guitar, there was the Gibson-Moog collaboration on the RD series guitars. This even predates Moog Electronics, so Bob Moog himself designed the circuit – an active preamp intended to widen the tonal range and let the guitar’s sound compete with synths. Or something. With bright, treble, and bass modes, plus compression and expansion, it was more complex than guitarists might have wanted at the time – but also more capable. You can read up on it at Reverb.com:
Composer Hildur Guðnadóttir went the extra distance for her score for Chernobyl – taking a real power plant as the inspiration for her haunting music.
In a fascinating interview for Score: The Podcast, Guðnadóttir recounts how she followed the film crew to a decommissioned nuclear power plant in Lithuania – even donning a Haz-Mat suit for the research. (Lithuania here is a stand-in for the original site in Ukraine.)
Guðnadóttir, the composer and cellist (she’s played with Throbbing Gristle, scored films, and toured with Sunn O)))), was joined by Chris Watson on field recording. But this wasn’t just about gathering cool samples; as she puts it, it was about listening. Every sound you hear is indeed drawn from the landscape of a similar Soviet-era nuclear plant, but as she tells it, the act of observing was just as important.
“I wanted to experience what it feels like to be inside a power plant,” she says. “Trying to make music out of a story – most of it is about listening.” So they go into this world just to listen – with a man who records ants.
And yes, this finally gets us away from Geiger counters and other cliches.
It’s funny to be here in Riga – just last night I was talking to Erica Synths founder Girts about his experience of the series, having lived through the incident within reach of the radiation fallout.
Thanks to Noncompliant for this link.
The HBO drama trailer (though a poor representation of the score – like many trailers, it’s edited to music from outside the actual score):