Polish maker Polyend has one special grid – expressive sensing meets powerful sequencing and recording. And now, combined with a dedicated synth made with Dreadbox, it starts to really come alive.
The first impression of Medusa, the new instrument shown last week at Superbooth, is a little bit of a Dreadbox synth tacked into a case alongside the grid sequencer from Polyend’s SEQ. But that’s really not what you’re getting here. For one thing, Polyend had a hand in the synth portion of this instrument, too, contributing new architectural features. And for another, because every single parameter on the synth side can be played live and sequenced from the grid, you really get the sense of a complete, integrated instrument.
That’s not to say that SEQ, Polyend’s expansive sequencer product, can’t do these things well, too. In fact, Medusa acts as a nice calling card / advertisement for what SEQ can do. But there’s something about immediately getting sound when you press into a space on the grid that makes a big difference.
And even before you start up the step sequencer, Medusa’s grid is irresistible to play. Each pad responds to x/y/z input, not just pressure. It’s sort of the opposite of the lifeless, on/off digital feeling of the monome – every continuous variation of the finger, every movement around the pad controls the sound. (Apologies to the monome, but that to me is a significant evolution – now that we’re accustomed to the once-radical grid interactions of the monome, we might well expect this kind of expressive dimension.)
Polyend have equipped that grid with a dedicated display, and mapped every parameter from the synth. So you can play live, you can record those performances, or you can increment through steps and play or program detailed changes as steps, then play back and jam.
This is what it’s all about – deep control of parameters, which you can then assign to individual pads and automate step-by-step.
Of course, the other advantage of an integrated instrument is, you don’t have the bandwidth problems of MIDI. The internal architecture is there both for synth and sequencer, so you can modulate everything as fast as you like. (Richard Devine was on hand to turn up the bpm knob really high to test that.)
The Medusa is planned for availability August/September 2018 at 999€.
That’s 999 including VAT and shipping, so figure even a bit less in USD.
And yeah, if you want to know my favorite thing from Superbooth – this is it. It seemed to be a crowd favorite, as well.
Here are the full planned, confirmed specs as provided to CDM – though Polyend hinted there may be more in the works by launch, too. (Dreadbox may have more to say about this, too; I only had time to talk to Polyend!)
64 customizable, three-dimensionally expressive pads for the controller/sequencer
Step, live, and incremental sequence modes
256 independent sequences and voice presets
Per-step sequencing of notes, parameter locks, or even entire synth voice presets
Assign X and Y pressure axes to any modulation parameter, per pad
Randomization of voice and sequence
OLED display with customizable user menus
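Since the specs above describe per-pad X/Y/Z assignment, here’s a minimal sketch of how that kind of expression routing can work in principle – all names and the data layout here are illustrative, not Polyend’s actual implementation:

```python
# Hypothetical sketch of per-pad expression routing: each pad reports
# x/y/z (position plus pressure), and each axis can be assigned to any
# modulation parameter. Names are illustrative only.

def make_pad(x_dest=None, y_dest=None, z_dest=None):
    """A pad maps each normalized axis (0.0-1.0) to a parameter name."""
    return {"x": x_dest, "y": y_dest, "z": z_dest}

def apply_expression(pad, x, y, z, params):
    """Route a touch event's axes into the synth parameter dict."""
    for axis, value in (("x", x), ("y", y), ("z", z)):
        dest = pad[axis]
        if dest is not None:
            params[dest] = value
    return params

pad = make_pad(x_dest="filter_cutoff", y_dest="osc_detune", z_dest="amp_level")
params = apply_expression(pad, x=0.25, y=0.8, z=0.5, params={})
print(params)  # {'filter_cutoff': 0.25, 'osc_detune': 0.8, 'amp_level': 0.5}
```

The point is simply that assignment is stored per pad, so two pads can send the same finger gesture to completely different destinations.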
The synth is a nice digital-analog hybrid – 3 + 3, analog + digital wavetable (and comes with its own separate OLED display):
This synth end of Medusa means business, too.
Three analog oscillators with sync, four wave types per oscillator
Three wavetable oscillators
24dB Dreadbox analog multimode filter (2- or 4-pole lowpass, highpass)
Play modes: monophonic, paraphonic x 3, paraphonic x 6 (so you can route the digital oscillators through the analog filter, yes)
Frequency modulation for oscillators and filter
Noise generator with color shaping
Powerful, assignable envelopes and LFOs let you shape the 3 analog + 3 digital oscillators… and all of this is accessible from the grid/sequencer, too.
Modulation + control:
5 independent LFOs, which you can route into almost anything
5 independent DADSR envelopes with looping, each with its own parameter assignment
Mixer for all seven analog/digital/noise voices
Separate volume control for headphone and main audio out
USB MIDI in + out and DIN MIDI in + out + thru
Here’s Piotr talking about it for a couple of minutes with FACT:
What if you had all the modules you need to make techno and industrial in one rack? Meet Erica’s line of drum and synth modules. They seem to know their market.
Now, it’s meaningful this is coming from Erica. The Latvia-based company with some ex-Soviet Polivoks lineage has a knack for making simply mental boxes that bring that grimy, dirty industrial sound straight out of the actual post-Communist industrial landscape of Riga. If I had to sum up that user experience, it’d run something like this: turn knob, machine screams.
But that’s saying something. Making wild sounds intuitive is a feat. And Erica have earned their reputation by putting those sounds into boxes that are reliable, easy to understand, and deliver a punch without hitting the high end of the cost spectrum.
Running down these modules, you just have to keep nodding – yes, that’s what I want out of this module, and yes, that’s the sensible way to lay out these controls. I can’t really judge sound quality at a trade show, but the sound was good enough that it actually blew me away over the din of Superbooth, out of some small monitors – and that’s saying a lot. We’ll get to check out Erica’s crew at a club tonight here in Berlin, and this is one I think we’ll need to give a full review.
(Bonus: they’re also coming with the effects collaboration they built with Ninja Tune. I’m keen to see that, as well.)
I also think it’s totally reasonable to build systems around musical applications like techno. Plenty of modular instruments have morphed into particular configurations to make them musically accessible. And then since this is still patchable, you don’t have to make this sound like techno you’ve heard before – you can push that flexible sequencer and patch things together to bend something into your own genre and voice. Or, this being modular, you also have now a big line of components that could fill gaps in whatever setup you choose.
Here’s a look at those modules.
Sample slicing and triggering of WAV files (it even imports CUE points), with assignable CV inputs. Actually, there’s nothing to say this has to be a drum module – it’s also a general-purpose sample slicer module.
microSD for loading sounds.
Well, here’s your distortion. Three dedicated modes for each side, cascaded in series for extreme distortion. This is really the heart and soul of the Erica Techno System sound, and even if you didn’t get the rest of the line here, this one could be a must.
Built on the Spin FV-1 chip – a custom reverb platform – the dual FX has a set of custom mono and stereo effects from Erica’s in-house musician-madman KODEK.
It’s all about the bass – and here, those basslines will be more than a little acidic. Erica’s Acidbox proved how crazy their filters can be. It apparently inspired the filter here – so expect really aggressive, terror-inducing acid.
Full analogue circuit
BBD-based VCO detune emulation
Built in VCF and VCA decay envelope
External VCO FM and VCF cutoff CV inputs
Of course, what keeps this compact is, the sequencing all falls to the dedicated sequencer unit (or a sequencer module of your choice – Superbooth has had a lot of them).
Toms can easily be a throwaway, but here there was a lot of attention to detail. The Toms module has dedicated controls for low, mid, and high, and promises 909-inspired tom sounds. Erica says they built this in collaboration with e-licktronic – the boutique/DIY maker perhaps best known for their Roland clones and custom kits.
Erica are actually introducing three different hat/cymbal models. There’s an analog module (“A”) with accent and individual CV controls of everything, also made with e-licktronic. There’s a digital sample-based “D.” And there are sample-based cymbals (“Cymbals”).
It’s easy to overlook this one. But when you’re actually in the heat of the moment playing live, you need that ability to just reach over, twist a knob, and add in a particular part.
And the Drum Mixer looks just about perfect. It boasts vactrol-based compression to keep everything properly loud and intense without losing clarity, plus a stupidly easy setup for controlling compression and the various parts, with seven inputs and both main and aux outs.
Erica also plan a more compact 6-input “Lite” version of the same, and a 4-channel Stereo Mixer.
Oh yeah, and if you’re not into the black craze, they plan to release everything again in white.
Lastly, the sequencing here comes from the Erica Drum Sequencer. Announced in January, it debuted in March – but now it has some modules to sequence:
Features of that are numerous:
12x Accent outputs
1x CV/GATE track
2x LFO, with frequency either free-running or synced to the BPM
Time signature per track
Pattern length per track
Shuffle per track
Probability per step
Retrigger per step
Instant pattern switching
Step/Tap record modes
16 Banks of 16 Patterns
MIDI sync in with start/stop
Firmware upgrade via MIDI SysEx
It’s an analog drum machine plus bassline synth. It’s a digital drum machine with sample loading. It’s packed with live features and modulation. The coming MFB box could be … The One.
While big brands have focused on digital machines (or even software/hardware combos), MFB out of Berlin are the little boutique brand who have come out with a steady stream of analog boxes that are nonetheless compact and accessibly priced. And it’s not so much the fact that they have analog circuitry inside them as the fact that they’re different. Those drum timbres will hammer through your music when called upon, just like the Roland classics and whatnot, but they also sound distinctive. And with so much music already made on the well-known machines, different is good.
That said, for all the lovely sounds packed into any of these boxes, they all fell a little short of “must-have” – great-sounding but a bit fiddly and more focused on sound than performance features and sequencing. Then there was the confusing availability of two similar compact boxes, the Tanzmaus and Tanzbär Lite, alongside the Tanzbär flagship which was also … a bit similar to the other two.
Well, forget all that: because even in prototype form, the Tanzbär-2 is a whole new beast. If Roland’s TR-8S and Elektron Digitakt look poised to be the live drum machines for the mainstream, then the MFB might be the best boutique rival.
Or to put it another way – plug this thing in, and you can jam like a crazy person, with bassline and drums all ready to go.
Highlights (there’s no press release so … I’m doing this from memory):
A built-in bass synth that sounds totally brilliant, with internal melodic programming
Analog drum parts, plus digital drum parts (hey, it worked for the 909)
Sample loading, via MIDI dump or over USB, so you can load your own samples
Tons of front panel parameters for hands-on control of both the analog and digital sections’ parts
Dedicated faders for all the parts’ volumes
Two additional parameters for each part (accessed by the screen)
An LFO you can route to absolutely anything
Step sequencer, with per-step parameter automation
Separate outs for each part
And it’s really compact, too – not exactly lightweight (though that’s okay when you’re jamming hard on it), but easily slipped into a bag with a small footprint.
Really the only missing feature is, there aren’t internal effects … but that would complicate the design, and it does have separate outs.
The TB2 is really three instruments in one. There’s a simple analog bassline synth. The analog percussion section houses kicks, toms, congas, and snares. And then a digital section handles hats and additional percussion – or load your own digital samples for more choices. Sounds about perfect.
Faders! Dedicated outs! And it’s all really compact. Those knobs feel great, too, if you had a more fiddly experience with older MFB gear.
There are already a lot of parameters on the front panel, but parts also have additional parameters accessed by the two data knobs, with feedback on this display. (You’ll see some hints as to those features on the silkscreen, too.)
I’m sold. I think the fact that it includes a bassline synth internally is already great. I’ve got lots of questions, but they’re working on finishing this up this summer, so it’ll be better to make a separate trip to MFB after Superbooth. Then we can get some real sound samples without a convention going on behind us, and learn more about the details.
Cost isn’t confirmed, but they’re planning for under a grand (USD/EUR). Given you could pretty much do all your live dance sets on this box alone, that sounds good.
But wait — there’s more! MFB also has new modules coming. Here’s a sneak peek of those:
From the creators of the monome grid, there’s a teaser out now for a new standalone box that could replace the computer for various creative tasks – and that builds on the legendary mlr patch.
The story so far
The monome 40h was arguably the most important invention in electronic music in the century’s first decade. Its minimalist aesthetics broke from industry norms at the time (and earned accolades in modern art museums, even). It set the tone for music products built on open, community-driven ecosystems. It defined the grid as a paradigm for computer music interaction, and in particular a bi-directional relationship that gave feedback with lights. And it set up the value of a controller combined with software to create new interactions with digital sound. Every single one of these things has been endlessly duplicated by makers big and small – it’s actually pretty astonishing just how much Brian Crabtree and partner Kelli Cain were ahead of the curve.
But the thing that really made the original monome 40h work wasn’t that it was an undifferentiated grid. That made a strong visual statement, but Yamaha’s Tenori-On did that, too, and had nowhere near the impact. The monome community took off as music makers, spawning albums, meetups and festivals, and eventually seeing controllers from Novation, Akai, Ableton, and others follow suit, partly because of the software that went with the grid. mlr, built by monome’s Brian Crabtree in Max/MSP, gave the grid musical utility by carving up samples across the grid and allowing them to be triggered rhythmically.
At the same time, this meant monome users were tethered to computers. And that destroys the image of the monome as a singular instrument. Brian has had some ideas over the years that could help users get away from that, including the teletype algorithmic module. But the new thing he’s teasing most resembles the previous aleph – a standalone computer stand-in powered by a DSP platform.
If the beginning of the century was about figuring out how to create a computer and controller combination that worked (see Ableton Live, Maschine, et al), maybe now we’ll finally tackle new standalone instruments built on the open-ended possibilities of software.
norns is the new monome box. And like teletype and aleph, it seems to be built around making a dedicated computational device that’s focused on typing as an interface.
Brian has composed some lofty text around what this thing is about, but I’ll … reduce a little.
It looks beautiful – a luxurious block with minimal encoders and display. And the opening teaser “poem” suggests that it can do a variety of tasks related to sound, interfacing, and control (MIDI and CV):
It’s also nice to see a musical intro, which is how Brian brought out the original grid. And it features stretta, aka engineer/designer/musician and Berklee professor Matthew Davidson, who drove a lot of original innovative development for monome. (He’s the prof with a GitHub.)
Apparently, it’s a DSP platform. I’m rather hoping it’s ARM-powered, as that platform offers greater horsepower for the money and can open up additional processing power, but it may be Blackfin DSP-based as the aleph was – we’ll see.
It’s the way it’s scripted that gets interesting. Not only is it scriptable with Lua, but the plan appears to be to make an online IDE and community database of scripts, so you can load up a granulator or a delay somebody has built and play with it right away. tehn also promises some interesting features like keypress performance – it’ll be interesting to see how that online scripting works in this golden age of musical livecoding.
Brian also gets into some details of his next take on mlr – an “evolution,” he calls it – which may be what sells this thing:
virtual tape loops are mapped to grid rows where playback position is displayed and key presses cut to the location.
playback speed (with reverse) is mapped to the grid in addition to record punch in and overdub.
keypresses can be recorded and played back in patterns to automate gestures.
within the cutting interface smaller sub-loops can be selected and looped.
there’s a lot more.
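The grid-row tape loop idea above boils down to a simple proportional mapping between grid columns and loop positions – cutting playback to the position a key represents, and lighting the key over the current playhead. Here’s a rough Python illustration of that concept (not monome’s code, just the arithmetic):

```python
# Toy sketch of mlr-style row mapping: a loop is spread across one grid
# row; keypresses cut playback, LEDs show where the playhead is.

def key_to_sample_position(column, grid_width, loop_length_samples):
    """Cut playback to the loop position a grid key represents."""
    return (column * loop_length_samples) // grid_width

def playhead_to_column(playhead, grid_width, loop_length_samples):
    """Which key's LED should light for the current playback position."""
    return (playhead * grid_width) // loop_length_samples

loop_len = 44100  # a one-second loop at 44.1 kHz
print(key_to_sample_position(4, 16, loop_len))   # 11025: key 4 of 16 cuts a quarter in
print(playhead_to_column(22050, 16, loop_len))   # 8: halfway through lights key 8
```

Sub-loops, as described above, would just constrain both mappings to a narrower span of the loop.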
Only 100 aleph units were ever made. It’ll be interesting to see if this makes it further. While it’s easy to knock commodity computers as ugly and inelegant, they’re also what allows access to this kind of music making for most people. Look no further than the livecoding movement, which does this on hardware that can run as cheap as a Raspberry Pi – and which is accordingly spreading all over the world, including in markets where importing gear is expensive.
Then again, that being the case, it remains nice to see something luxurious and beautiful and artful, even if only as a symbol of what the rest of this field can be. We’ve had expansive conversations with Brian since the beginning of his project, so let us know your questions for him and we’ll check in.
In the meantime, the monome community are more than a little excited over on the forum.
“The Norns (Old Norse: norn, plural: nornir) in Norse mythology are female beings who rule the destiny of gods and men. They roughly correspond to other controllers of humans’ destiny, such as the Fates, elsewhere in European mythology.”
Well, then, let the Norns decide how this one plays out. But we’ll be watching.
KORG’s analog flagship synth, introduced earlier this year, hinted at a tantalizing feature – open programmability. It seems we’re about to learn what that’s about.
Amidst some other teasers floating around in advance of Berlin’s Superbooth synth conference this week, the newly-birthed “KORG Analogue” account on Instagram showed us what the SDK looks like. It’s an actual dev board, which KORG seem to be just releasing to interested DIYers.
This should also mean we get to find out more about what KORG are actually offering. The open SDK promises the ability to program your own oscillators and modulation effects, taking advantage of the Prologue’s wavetable capabilities and deep modulation architecture, respectively. Here’s a look:
Now, whether that appeals to you or not, this also will mean a library of community-contributed hacks that any Prologue owners can enjoy.
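For a sense of what a user-programmed oscillator actually has to do, here’s a generic wavetable oscillator sketched in Python. The real Prologue SDK is a C API with its own callback conventions; this only illustrates the underlying technique:

```python
import math

# Generic wavetable oscillator: step a phase accumulator through a stored
# single-cycle wave, with linear interpolation between table entries.
# This illustrates the concept only - it is not KORG's SDK API.

def make_wavetable(size=256):
    """One cycle of a cubed sine, standing in for a custom user wave."""
    return [math.sin(2 * math.pi * i / size) ** 3 for i in range(size)]

def render(table, freq, sample_rate, n_frames):
    out, phase = [], 0.0
    step = freq * len(table) / sample_rate   # table positions per sample
    for _ in range(n_frames):
        i = int(phase)
        frac = phase - i
        a, b = table[i], table[(i + 1) % len(table)]
        out.append(a + (b - a) * frac)       # linear interpolation
        phase = (phase + step) % len(table)
    return out

block = render(make_wavetable(), freq=440.0, sample_rate=48000, n_frames=64)
print(len(block))  # 64
```

Swapping in a different `make_wavetable` is exactly the kind of change a community library of user oscillators would trade in.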
I can’t think of anything quite like this in synth hardware. There have certainly been software-based solutions for making sounds and community libraries of mods and sounds before. But it’s pretty wild that one of the biggest synth manufacturers is taking what would normally be a developer board for internal use only, and pitching it to the synth community at large. It shows just how much the synth world has embraced its nerdier side. And presumably the notion here is, that nerdy side is palatable, not frightening, to musicians at large.
And why not? If this means the average Prologue owner can go to a website and download some new sounds, bring it on.
Curious if KORG will have anything else this week in Berlin. Looking forward to seeing them – stay tuned.
From the extraordinary first digital breakthroughs of the 70s, when lightbulbs stood in for LEDs, to what may have been the first use of the word “plug-in” – meet the inventors of Eventide’s classics, who now have a Grammy nod of their own.
Rock and pop have their heroes, their great records. But when you’ve got an engineering hero, their work finds realization behind the scenes in all that music, in hit music and obscure music. And then it can find its way into your work, too.
These inventions have already indirectly won plenty of Grammy Awards, if you care about that sort of thing. But at the beginning of this year, the pioneers at Eventide got a Lifetime Achievement Award, putting their technical achievements alongside the musical contributions of Tina Turner, Emmylou Harris, and Queen, among others.
Why are these engineers smiling? Because they got a Grammy for their inventions. Tony Agnello (left) and Richard Factor (right) at the headquarters.
Electrical engineers and inventors are rarely household names. But you’ve heard the creations of Richard Factor and Tony Agnello, who remain at Eventide today (as do those inventions, in various hardware and software recreations, including for the Universal Audio platform). For instance, David Bowie’s “Low,” Kraftwerk’s “Computer World” and AC/DC’s “Back In Black” all use their H910 harmonizer, the gear called out specifically by the Grammy organization. And that’s before even getting into Eventide’s harmonizers, delays, the Omnipressor, and many others.
1974 radio advertising:
Here’s the thing – whether or not you care about sounding like a classic record or lived through all of the 1970s (that’s, uh, “not so much” for me on both of those, sorry), the story of how this gear was made is totally fascinating. You’d expect an electrical engineering tale to be dry as dust, but – this is frontier adventure stuff, like, if you’re a total nerd.
Here’s the story of the DDL 1745 from 1971, back when engineers had to “rewind the f***ing tape machines” just to hear a delay.
Eventide founder Richard Factor started experimenting with digital delays while working a day job in the defense industry, at the height of the Vietnam War, working with shift registers that work in bits.
Their advice from the 70s still holds. What do you do with a delay? “Put stuff in it!” Do you need to know what the knobs are doing? No! (Sorry, I may have just spoiled potentially thousands of dollars in audio training. My apologies to the sound schools of the world.)
Susan Rogers of Prince fame (who we’ve been talking about lately) also talks about how she “had to have” her Eventide harmonizer and delays. I now have come to feel that way about my plug-in folder, and their software recreations, just because then you have the ability to dial up unexpected possibilities.
Or, there’s the Omnipressor, the classic early 70s gear that introduced the very concept of the dynamics processor. Here, inventor Richard Factor explains how its creation grew out of the Richard Nixon tapes. No – seriously. I’ll let him tell the story:
Tony deals with those philosophical questions of imaginative possibility, perhaps most eloquently – in a way perhaps only an engineer can. Let’s get to it.
The first commercial digital delay looked like… this. DDL1745, 1971.
So you’ve already told this amazing story of the Omnipressor. Maybe you can tell us a bit about how the H910 came about?
When I joined Eventide in early 1973, the first model of the Digital Delay Line, the DDL1745, had just started shipping. At that time, there were no digital audio products of any kind in any studio anywhere.
The DDL was a primitive box. It predated memory (no RAM), LEDs (it had incandescent bulbs), and integrated Analog-to-Digital Converters [ADCs]. It offered 200 msec of delay for the price of a new car — US$4,100 in 1973, which is equivalent to ~$22,000 today! The fact is that DDLs were expensive and rare and only installed in a few world-class studios. They were used to replace tape delay.
At the time, studios were using tape delay for ADT (automatic double tracking) and, in some cases, as a pre-delay to feed plate reverbs. Plate reverbs had replaced ‘echo chambers’ but fell short in that, unlike a real room, a plate reverb’s onset is instantaneous.
I don’t believe that any recording studio had more than one DDL installed because they were so expensive. I was lucky. On the second floor of Eventide’s building was a recording studio – Sound Exchange. I was able to use the studio when it wasn’t booked to record my friends and relatives. And I had access to several DDLs! I remember carrying a few DDLs up to the studio and patching them into the console and having fun (a la Les Paul) with varying delay and using the console’s faders and feedback. By 1974 Richard Factor had designed the 1745M DDL which used RAM and had an option for a simple pitch change module.
At that point, I became convinced that I could create a product that combined delay, feedback, and pitch change that would open up a world of possible effects. I also thought that a keyboard would make it possible to ‘play’ a harmony while singing. In fact, my prototype had a 2-octave keyboard bolted to the top. Playing the keyboard was unorthodox in that center C was unison, C# would shift the voice up a half step, B down a half step, etc.
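The keyboard mapping Tony describes – center C as unison, each key a semitone up or down – reduces to the standard equal-tempered ratio formula, sketched here with MIDI note 60 standing in for center C:

```python
# Pitch-change ratio for a keyboard where center C is unison and each
# key shifts the incoming voice by one equal-tempered semitone.

def key_to_ratio(midi_note, unison_note=60):
    """Ratio = 2^(semitones/12); MIDI 60 (center C) means no shift."""
    return 2 ** ((midi_note - unison_note) / 12)

print(round(key_to_ratio(60), 4))  # 1.0    - center C: unison
print(round(key_to_ratio(61), 4))  # 1.0595 - C#: up a half step
print(round(key_to_ratio(59), 4))  # 0.9439 - B: down a half step
```

So "playing a harmony while singing" is just feeding this ratio to the pitch changer in real time.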
Now you can “f***” (to use the technical term) with the H910 in plug-in form, which turns out to be f***ing fun, actually.
Squint at this outboard gear shot for Michael Jackson’s “Thriller” and you can see the H910 – essential.
I liked in particular the idea of trying things out from an engineering perspective – as you put it, from what you think might sound interesting, rather than guessing in advance what the musical application would be. So, how do you decide something will sound interesting before it exists? How much is trial and error; how much do you envision how things will sound in advance?
Hmmm. First off, it starts with a technical advance. Integrated circuits made digital audio practical and every advance in technology makes new techniques/things possible, and new capabilities ensue.
At the dawn of digital audio, the mission was clear and simple from my perspective. I had studied DSP in grad school and read about the work being done at places like Bell Labs. At the time, the researchers couldn’t experiment with real-time audio, which was a huge limitation.
It was obvious that if you could digitize audio, you could delay it. It was also somewhat obvious that you should be able to play the audio back at a different rate than it was recorded (sampled). The question was, how can you do that without changing duration? In retrospect, splicing is obvious and that’s what I did in the H910. Splicing resulted in glitches, however (I’m pretty sure that we introduced that word into the audio lexicon). So, my next challenge: I needed to come up with a method for splicing without glitches.
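The splice-based approach Tony describes can be sketched naively: read short segments at a different rate to change pitch, then butt them together so total duration stays the same – with audible clicks (the “glitches”) at each joint. A toy Python illustration, not the H910’s actual design:

```python
# Naive splice-based pitch shifting: each segment is read at `ratio`
# speed (changing pitch) but written at unit rate (preserving duration).
# The abrupt joins between segments are the audible "glitches" that
# later de-glitching designs had to remove.

def splice_pitch_shift(signal, ratio, segment=256):
    """Shift pitch by `ratio` while keeping the original length."""
    out = []
    pos = 0
    while pos < len(signal):
        seg = signal[pos:pos + segment]
        resampled = [seg[min(int(i * ratio), len(seg) - 1)]
                     for i in range(len(seg))]
        out.extend(resampled)  # butt-splice: discontinuity = glitch
        pos += segment
    return out[:len(signal)]

shifted = splice_pitch_shift(list(range(1024)), ratio=1.5)
print(len(shifted))  # 1024 - same duration, pitch raised by 1.5x
```

Real designs crossfade or pick splice points intelligently; this shows only why the naive version clicks.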
My design of the H949 was the first de-glitched pitch changer. With that project behind me, the next obvious challenge was digitally simulating a room – reverb. At Bell Labs, Manfred Schroeder had done some preliminary work, and I tried implementing his approach, but the results were awful. I came to the conclusion that I needed a programmable array processor to meet this challenge. This was before DSP chips became available. I designed the SP2016 and developed reverb algorithms that are now available as plug-ins and still highly regarded.
The “de-glitched” classic, the H949, also in plug-in form (thanks to Eventide Anthology).
Given that the SP2016 was general purpose, I had some other ideas that seemed obvious. For instance, Band Delays — create a set of band pass filters and delay their outputs differentially. Suzanne Ciani famously used Band Delays on her ground-breaking “Seven Waves” composition.
I also developed vocoders, timescramble, and gated reverb for the SP2016. The SP2016 had a complete development system that allowed third parties to create their own effects. The effects were stored in EPROMs (Erasable Programmable Read Only Memory) that plugged into sockets. We called them ‘plug-ins’ back in 1982, long before anyone else in the audio community used that phrase.
Did I think that these effects would be musical? Yes! For example, while my goal with reverb was to create a convincing simulation of a real room, I mindfully brought out user controls to allow the algorithm to sound unreal. I was never concerned that an artist would have a ‘failure of imagination.’ I simply strove to create new and flexible tools.
On that same note, I wonder if maybe what made these inventions – and hopefully future inventions – useful to musicians is that they were just some new sound. Do you get the sense that this makes them more useful in different musical applications, more novel? Or maybe you just don’t know in advance?
I think that novel is good in that it broadens the acoustic palette. Music is a uniquely human phenomenon. It conveys emotion in a rich and powerful way. Broadening the palette broadens the impact. We don’t create a single static effect; we create a tool that can be manipulated. Our recent breakthrough with Physion is a wonderful example. We’re now able to surgically separate the tonal and transient components of a sound – what the artist does with those pieces of the puzzle is up to them.
It’s funny in that a sound is a sound. Its tonal and transient components are simply how we perceive the sound. I find it amazing that our team has developed software that perceives these components of sound the way that we humans do and has figured out how to split sounds accordingly.
We’re really fortunate to have all these reissues. Your Grammy nomination referred mainly to seminal, big-selling records. Do you think there’s special significance to that – or have you found interest in more experimental applications? What about your users: are they largely looking to recreate those things, or to find new applications – or is it a balance of those two?
Well, the H910 was used not only because it did something new but because it had a particular sound. In the same sense that artists prefer different mics or EQs or amps, a device like the H910 has a certain characteristic. The digital portion of the H910 was simple – most of the audio path was analog, and the analog portion was tuned to sound good to me! Recreating the analog subtleties (and not-so-subtleties) was quite the challenge, but I think we nailed it. The Omnipressor is another case in point. That product deserves a lot more respect and attention than it gets, and the plugin emulation is excellent. On the other hand, our emulation of the Instant Phaser isn’t even close. That’s why we don’t offer it as a standalone plugin. In fact, we’re working on a much improved version of it and are getting pretty darn close. Stay tuned…
On the third hand, our Stereo Room emulation of the original reverb of the SP2016 is very close, but even so, we’re not satisfied so we’re busily measuring it in fine detail with the hope of improving it. In fact, there are a couple of other SP2016 reverbs that were popular and we’ve taken a look at emulating those.
The Stereo Room plug-in recreates the Eventide SP2016 reverb. And while it’s really good, Tony says they’re still thinking how to make it better – ah, obsessive engineers, we love you.
And, yes while there’s a balance between old and new, our goal is always to take the next step. The algorithms in our stompboxes and plugins are mostly new and in a few cases ground-breaking. Crushstation, PitchFuzz and Sculpt represent advances in simulating the non-linearities of analog distortion.
[Ed.: This is a topic I’ve heard repeated many, many times by DSP engineers. If you’re curious why software sounds better, and why it now can pass for outboard gear whereas in the past it very much couldn’t, the ability to recreate analog distortion is a big key. And it turns out our ears seem to like those kind of non-linearities, with or without a historical context.]
What’s the relationship you have with engineers and artists? What kind of feedback do you get from them – and does it change your products at all? (Any specific examples in terms of products we’d know?)
We have a good relationship with artists. They give us ideas for new products and, more often, help us create better UIs by explaining how they would like to work.
One specific example is our work with Tony Visconti. I am honored that he was open to working with us to create a plug-in, Tverb, that emulated his 3 mic recording technique used on Bowie’s “Heroes.” Tony was generous with his time and brilliant in suggesting enhancements that weren’t possible in the real world. The industry response to Tverb has been incredibly gratifying – there is nothing else like it.
Eventide’s Tverb plug-in, which allows you, impossibly, to say “I wish I had Tony Visconti’s entire recording studio rig from “Heroes” on this channel in my DAW.” And it does still more from there. Visconti himself was a collaborator.
We are currently exploring new ways to use our structural effects method and having discussions with engineers and artists. We also have a few secret projects.
How would you relate what something like the H9 or the H9000 [Eventide’s new digital effects platforms] is to the early history like the H910 and Omnipressor? What does that heritage mean – and what do you do to move it forward? Where do recreations fit in with the newer ideas?
The consistent thread over all these years is ‘the next step.’ As technology advances, as processing power increases, new techniques and new approaches become possible. The H9000 is capable of thousands of times the sheer processing power of the H910, plus it is our first network-attached processor. Its ability to sit on an audio network and handle 32 channels of audio opens up possibilities for surround processing.
Ed.: I tried out the H9000 in a technical demo at AES in Berlin last year. It’s astonishingly powerful – and also represents the first Eventide gear to make use of the ARM platform instead of DSPs (or native software running on Intel, etc.).
One major difference, obviously, is that you now have so many plug-in users – even so many more hardware users than before. What does it mean for Eventide to have a global culture where there are so many producers? Is that expanding the kind of musical applications?
As I said earlier, there is no fear of a failure of imagination in our species. Art and music define us, enrich us. The more the merrier.
What was your experience of the Grammies – obviously, nice to have this recognition; did anything come out of it personally or in terms of how this made people reflect on Eventide’s history and present?
The ‘lifetime achievement’ aspect of the Grammy award is confirmation that I’m old.
Ha, well you just have to achieve more after, and you’re fine! Thanks, Tony – as far as I’m concerned, your stuff always makes me feel like a kid.
Also, because I know that bundle is out of reach of beginning producers or musicians on a budget, it’s worth checking out Gobbler’s subscription plans. That gives you all the essentials here, including my personal must-haves – the H3000 band delays, Omnipressor, Blackhole reverb, and the H910 – plus a lot of other great ones, too:
At the tail end of China’s Cultural Revolution, one inventor secretly created a futuristic take on traditional instruments – and it easily still inspires today.
I don’t know much about this instrument, but given CDM’s readership, I expect our collective knowledge should say something (not to mention some of you speak the language). But according to the video, it’s the work of Tian Jin Qin: a ribbon-controlled analog synthesizer first prototyped in 1978 and featured here in a documentary movie entitled “Dian Zi Qin / 电子琴” (1980).
There’s some irony to the fact that a simple touch instrument was something driven underground in China just one generation ago. Now, of course, China leads the world in manufacturing touch interfaces, has been the center of a global revolution in touch-powered smartphones (based loosely on the same principle, even), and even drives a significant portion of today’s technological innovation.
But… even without getting into that, this design is freaking great. It’ll make you immediately wonder why a single ribbon design is so popular, when the ability to finger multiple ribbons, fretless style, both relates to traditional instrument designs and allows more sophisticated melodic playing and expression.
Like… you’ll watch this video and want to go build one right now.
The synth is essentially two connected designs. A main synth console features organ-like push-button timbre controls and rotaries, plus four touch plates that respond both to being depressed and to continuous control vertically along the surface. (That arrangement, in turn, closely resembles the ROLI Seaboard keys, as well as having some lineage to the Buchla modular’s touch plates. In fact, a couple elements of the design suggest that the creator may have seen something like the Buchla 112 keyboard.)
The Chinese twist, though, is really the upright, fretless touch interface. This instrument is as subtle and sophisticated as Keith Emerson’s ribbon controller for the Moog wasn’t. Zithers are among the most ancient of instruments across a range of cultures, as antecedents of what we’d now consider both southeast Asian and European musics. Someone following the narration here or with background in Chinese instruments (which I largely lack) could say more, but it seems inspired by instruments like the guqin. That family of instruments can be plucked or fingered with glissandi (or played with a slide). The electronic rendition here simplifies a bit by using four metal strips, whereas Chinese classical instruments can feature more strings.
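To make the fretless idea concrete: a continuous ribbon maps finger position to pitch exponentially, the same way position along a fretless string does – which is exactly what makes glissandi and vibrato come naturally. Here’s a minimal sketch; the function name, the 110 Hz base pitch, and the one-octave-per-12cm scaling are my own illustrative assumptions, not measurements of this instrument.

```python
# Hypothetical sketch of fretless ribbon-to-pitch mapping.
# BASE_FREQ and CM_PER_OCTAVE are made-up illustrative values.

BASE_FREQ = 110.0     # "open string" pitch in Hz (A2, chosen arbitrarily)
CM_PER_OCTAVE = 12.0  # how much ribbon length spans one octave

def ribbon_to_freq(position_cm: float) -> float:
    """Convert a finger position along the ribbon to a frequency.

    Pitch varies continuously (no frets), doubling every
    CM_PER_OCTAVE centimeters of travel up the ribbon.
    """
    return BASE_FREQ * 2.0 ** (position_cm / CM_PER_OCTAVE)

# A slow slide up one octave, sampled at a few points along the way:
gliss = [round(ribbon_to_freq(x), 1) for x in (0.0, 3.0, 6.0, 9.0, 12.0)]
```

Sliding a finger from 0 to 12 cm sweeps smoothly from 110 Hz to 220 Hz, hitting every microtone in between – the electronic equivalent of a guqin glissando.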
So I will indeed put this out to CDM readers. Anyone out there who’s done research on this creator or knows about this instrument?
Anyone built something like this?
(Apologies, I’d normally do the research first and then write but … as Ted Pallas who tipped me off to this promised, I indeed wanted to share it right away.)
For all the turbulence of our modern time, one thing I believe can keep us out of a new Dark Age is the fact that we are more connected globally than ever, or at least potentially so. From the walls around China and the east to the former Iron Curtain, we’re discovering that a lot of the people kept unknown to those of us in the West were pretty ingenious. And maybe we get a second chance to learn from them and share.
There’s a big push among software makers to deliver integrated solutions – and that’s great. But if you’re a big user of both, say, MASCHINE MK3 and Ableton Live, here’s some good news.
NI made available two software updates yesterday, for their Maschine groove workstation software and for Komplete Kontrol, their software layer for hosting instruments and effects and interfacing with their keyboards. So, the hardware proposition there is the 4×4 pad grid of the MK3, and the Komplete Kontrol keyboards.
For Maschine users, the ability to use Ableton Live and Maschine together could make a lot of producers and live performers happy. Now, unlike working with Ableton Push, the setup isn’t entirely seamless, and there’s not total integration of hardware and software. But it’s still a big step forward. For instance, I often find myself starting a project with Maschine, because I’ve got a kit I like (including my own samples), or I’m using some of its internal drum synths or bass synth, or just want to wail on the pads and use its workflow for sampling and groove creation. But then, once I’ve built up some materials, I may shift back to playing with Ableton’s workflow in Session or Arrange view to compose an idea. And I know lots of users work the same way. It makes sense, given the whole idea of Maschine is to have the feeling of a piece of hardware.
So, you’ve got this big square piece of gear plugged in. Then sometimes literally you’re unplugging the USB port and connecting Push or something else… or it just sits there, useless.
Having these templates means you switch from one tool to the other, without changing workflow. You could already do this with Maschine Jam, which has a bunch of shortcuts for different tasks and a big grid of triggers (which fits Session View). But the appeal of Maschine for a lot of us is those big, expressive pads on the MK3, so this is what we were waiting for.
On the Komplete Kontrol side, there’s a related set of use cases. Whether you’re the sort to just pull up some presets from Komplete, or at the opposite end of the spectrum, you’re using Komplete Kontrol to manipulate custom Reaktor ensembles, it’s nice to have a set of encoders and transport controls at the ready. The MK2 keyboards brought that to the party – so, for instance, now it’s really easy in Apple’s Logic Pro to play some stuff on the keys, then do another take, without, like – ugh – moving over to the table your computer is on, fumbling for the mouse or keyboard shortcut … you get the idea.
And again, a lot of us are using Ableton Live. I love Logic, but there have been times where I find myself comically missing the Session View as a way of storing ideas.
The notion here is, of course, to get you to buy into Native Instruments’ keyboards. But there is an awfully big ecosystem now of third-party instruments (like those from Output, among some of my favorites) that take advantage of compatibility via the NKS format. (NI likes to call that a “standard,” which I think is a bit of a stretch, given that, for now, there’s no SDK for other hardware and host software makers. But it’s a useful step for now, anyway.)
So, here’s how to get going and what else is new.
The big deal with 2.7.4 is new controller workflows (JAM, MK3) and Live integration (MK3). Live users, you’ll want to begin here:
There are actually two big improvements here workflow-wise. One is Live support, but the other is easier creation of Loop recordings. With the “Target” parameter, you can drop recordings into:
2. “Sounds” (the Audio plug-in, where you can layer up sounds)
3. Pattern (creates both an Audio plug-in recording and a pattern with the playback)
I think the two together could be a godsend, actually, for composing ideas in a more improvisatory flow. The Target workflow also works on MASCHINE JAM (via different controllers).
There’s also footswitch-triggered recording.
So, Native Instruments are finally listening to feedback from people for whom live sampling is at the heart of their music making process. It’s about time, given that Maschine was modeled on hardware samplers.
The Live integration includes just the basics, but important basics – and it might still be useful even with Push and Maschine side-by-side. The MK3 can access the mixer (Volume, Pan, Mute / Solo / Arm states), clip navigation and launching, recording and quantize, undo/redo, automation toggle, tap tempo, and loop tempo.
As always, you also get various other fixes.
Komplete Kontrol 2.0
Again, you’ll start with the (slightly annoying) installation process, and then you’ll get to playing. NI support has a set of instructions with that, plus some useful detailed links on how the integration works (scroll to the bottom, read the whole thing!):
The other big update here is all about supporting more plug-ins, so your NI keyboard becomes the command center for lots of other instruments and effects you own. NI now boasts hundreds of supporting plug-ins for its NKS format, which maps hardware controls to instrument parameters.
Now that includes effects, too. And that’s cool, since sometimes playing is about loading an instrument on the keys, but manipulating the parameters of an effect that processes that instrument. Those plug-ins now show up in the browser, if their developers have added support, and they also map to the controls.
Scoff if you like, but I know these keyboards have been big sellers. If nothing else, the lesson here is that making your software sounds and effects accessible with a keyboard for tangible control is something people like.
By the way, NI also quietly pushed out a Kontakt sampler update with a whole bunch of power-user improvements to KSP, their custom language for extending/scripting sound patches. That’s of immediate interest only to Kontakt sound content developers, but you can bet some of those little things will mean more improvements to Kontakt-based content you use, if you’re on NI’s ecosystem.
All three updates are available from NI’s Service Center.
If you’ve found a useful workflow with any of this, if you’ve got any tips or hacks, as always – shout out; we’re curious to hear! (I assume you might even be making some music with all this, so that, too.)
With some 128 voices, the Valkyrie packs dense sound and effects that never let up. The all new UK-built synth was available to try in prototype form at Musikmesse – and it’s seriously impressive.
When I say “play with your forearm,” I’m not kidding. I got my hands on the prototype. Glancing around, I noticed people were cautiously plucking a note or two there and noodling some melodic lines.
With that much polyphony, I wanted to hear a cloud – a doomsday-sized swarm – of oscillations. And this literally involved cranking up various parameters, dialing up portamento, and then playing the keys with… my fist… my arm… I decided sticking a leg up there might upset someone, but we’re talking a serious amount of sound.
The heart of this machine is an FPGA. You don’t need to care about that if you’re not an engineer, but suffice to say the idea of the thing is hardware that can be “re-wired” on the fly. So you get the power of dedicated hardware, without the enormous investment of time and money to create something so inflexible. That means the Valkyrie has horsepower that DSP chips – or your high-end laptop – can’t reliably deliver.
And it’s not just about having a bunch of voices, though that’s already formidable. The Valkyrie drives 10 oscillators for each voice.
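Some back-of-envelope arithmetic – my own numbers, extrapolated from the quoted specs, not anything from Exodus – shows why that count is punishing for a CPU or conventional DSP, and a natural fit for parallel FPGA hardware:

```python
# Rough, hedged estimate of the raw oscillator workload, using the
# figures quoted in the article (128 voices, 10 oscillators per voice,
# 96 kHz output). This is illustrative only, not the maker's math.

VOICES = 128
OSCILLATORS_PER_VOICE = 10
SAMPLE_RATE = 96_000  # Hz, per the quoted output spec

# Every oscillator needs at least one state update per output sample:
updates_per_second = VOICES * OSCILLATORS_PER_VOICE * SAMPLE_RATE
# over 120 million oscillator updates per second, before a single
# filter, envelope, or effect is even touched
```

A CPU would have to grind through those updates sequentially (with everything else it’s doing); an FPGA can lay down dedicated parallel logic for them, which is the architectural point being made here.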
It probably really is the synth Richard Wagner would have bought, were he alive today, so… nice brand name. Now, ride:
Multiple synthesis methods: FM, dual wavetable, hard sync
4096 different waveshapes, ring mod, hypersaw
Dual 2- and 4-pole ladder filters
10 oscillators per voice (double to 20 by combining voices)
Dedicated outs: four balanced outputs, 32-bit/96kHz each, or separate parts streamed over USB2 at 24/96
9-unit dedicated effects, with shelving EQ on each part
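One entry in that list worth unpacking is “hypersaw” – generically, a stack of sawtooth oscillators detuned symmetrically around a center pitch, which is what produces that wall-of-sound thickness. Here’s a naive single-sample sketch of the general technique; all parameter names and values are my own illustration, not the Valkyrie’s actual engine:

```python
def hypersaw_sample(t, freq=110.0, n_osc=7, detune_cents=12.0, sr=96_000):
    """One output sample of a stack of detuned sawtooth oscillators.

    A generic 'hypersaw': n_osc saws spread symmetrically around the
    center frequency by up to +/- detune_cents. Naive phasor saws
    (they alias) - fine for illustrating the idea, not for production.
    """
    total = 0.0
    for i in range(n_osc):
        # spread oscillators evenly from -1 to +1 times the detune amount
        spread = (2 * i / (n_osc - 1) - 1) if n_osc > 1 else 0.0
        f = freq * 2.0 ** (spread * detune_cents / 1200.0)
        phase = (f * t / sr) % 1.0   # wrapping phasor in [0, 1)
        total += 2.0 * phase - 1.0   # convert phasor to a saw in [-1, 1)
    return total / n_osc             # normalize so the mix stays in range

# Render a short buffer:
buffer = [hypersaw_sample(t) for t in range(256)]
```

Because the detuned saws slowly drift in and out of phase with one another, the summed signal shimmers and thickens rather than just getting louder – scale that up to 10 oscillators across 128 voices and you get the “doomsday-sized swarm” described above.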
The interface for all of this is a lovely high-res OLED. There are quick, slick animations to help you navigate. With that many parts/voices, of course, some menu dialing is a necessity – otherwise, the thing would take up a city block. But that navigation is quick and effortless, so you feel like you can dial up hands-on control easily. The menus were pretty logical, too, once you understand the structure of parts navigation. And everything is kept reasonably flat, which is stunning for an instrument of this complexity.
And the key is that you turn on this firehose of sound and it never skips or stutters – including with all the effects running. It’s a bit like having a Vangelis/Hans Zimmer-sized electronic studio, in a compact unit. It sounds utterly epic.
Pricing: expected under two grand (Exodus said that was their main purpose at Messe – to talk to dealers and figure that out). Availability: expected in volume early Q3 2018.
And do have a listen:
I have to say, if you’re going to spend nearly two grand on some hardware and want it to sound futuristic, this could be the one. It seems to be just the right kind of crazy for the job. Hope we get to try one some more.
As if you hadn’t had enough of the retro 808 drum machine craze, Puma are creating a pair of runners in collaboration with Roland. And this time, you can actually buy them.
So, who needs some new kicks?
I got a chance to take a look at the new Roland TR-808-inspired Puma sneakers. They’re basically just a color scheme for Puma’s relaunch of the RS (RUNNING SYSTEM) shoe line. The RS-0 is a reboot; the 1980s original was built around a unique-for-the-time cushioning system. To capitalize on 80s nostalgia, Puma went to Polaroid, Roland, and Sega for special looks for the shoes. Sadly, you don’t get any special drum machine features in the sneakers. (No built-in metronome or clock source; no TB-303 runners with acid basslines printed in the soles. In other words, I didn’t design them.)
What you do get is a slick-but-subtle black color scheme, with accents taken from the drum machine and a nod on the heel to the front panel label on the original.
“Jeez, CDM,” say the readers, “first balalaikas and now some branded runners, just how desperate are you this week to avoid the subject matter of the site?” Ah-ha – but we’re not done yet, folks. First twist to the story: this isn’t the first time someone has designed 808 sneaks. Less than one year ago, design agency Neely & Daughters produced a considerably less subtle pair of hi-top sneakers.
And – gasp – they were for Puma’s arch-nemesis, Adidas. And that in turn leads us to a bitter rivalry between two brothers from small-town Germany – Herzogenaurach, to be exact! Let’s go back in time to the 1920s, when… oh, no, actually, let’s not do that, as I’m sure neither Adidas nor Puma really want to get into their legacy here. (In the words of Fawlty Towers, don’t mention the War. But it is a fascinating story.)
Anyway, here’s that 2017 remake, which you couldn’t buy, as reported by Synthtopia:
Twist number two: Puma can claim the RS was “innovative,” but it’s really the RC Computer Shoe that lives up to that. Chunky protruding wings on the heel of these 1986 runners contained microcomputer pedometers. Connect them by wire (16-pin) to an Apple II, IBM PC, or Commodore 64/128, load up Puma’s software, and a graphical display would tell you time, calories burned, and distance run. That’s actually fairly forward thinking, in terms of predicting today’s fitness bands and smartphone accessories. Also, even now, you have to admit there’s something more intuitive about this being embedded in your athletic shoes rather than worn on a bracelet.
Now, I’m sure someone out there will read this post, grab a tiny Arduino or Teensy, and figure out a way to connect the 808 to shoe sensors. Who likes a challenge?
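For anyone tempted to take that challenge: the core of it is just rising-edge detection on a sensor stream – spot the footfall spike, fire a trigger (a MIDI note, or a pulse into a drum machine’s trigger input). A toy sketch, with every threshold and sensor detail made up for illustration:

```python
# Toy footfall detector for the shoe-sensor idea. STEP_THRESHOLD and
# the fake data are illustrative assumptions; a real build would read
# an accelerometer on a Teensy/Arduino and send the trigger out as MIDI.

STEP_THRESHOLD = 2.5  # g; a footfall spike well above walking noise

def detect_steps(samples):
    """Return the indices where the signal crosses the threshold upward."""
    triggers = []
    above = False
    for i, g in enumerate(samples):
        if g >= STEP_THRESHOLD and not above:
            triggers.append(i)   # rising edge: one trigger per footfall
        above = g >= STEP_THRESHOLD
    return triggers

# Fake sensor data: two footfalls amid low-level noise
stream = [0.9, 1.1, 3.2, 3.0, 1.0, 0.8, 2.9, 1.2]
hits = detect_steps(stream)
```

The edge-detection (as opposed to a plain threshold test) matters: one footfall spans several samples, and you want exactly one kick drum hit per step, not a machine-gun burst.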
Meanwhile, since the marketing event I went to in Berlin seemed lost on the crowd – they needed “nerdsters” rather than hipsters, as one marketing blogger once dubbed the crowd I ran with at Make and Etsy in New York – here you go. You’re welcome.
Disclosure: If anyone thinks this is a paid promotion, I did … manage to mention both Adidas and the Third Reich. I’m supremely sorry, Puma.