A great live set brews up new musical directions before your ears. It’s a burst of creativity and energy that’s distinct from the layered, process-heavy work that happens alone in a studio. From Liverpool (Madeleine T Hall) and Moscow (Nikita Zabelin x Xandr.vasiliev), here are two fine examples to take you into the weekend.
Acid-tinged synths unfold over this brilliant half hour from M T Hall (pictured, top), at a party hosted earlier this year by HMT Liverpool x Cartier 4 Everyone:
I love that this set feels so organic and colors outside the lines, without ever losing forward drive or focus. It morphs from timbre to timbre, genre to genre. So just when it seems like it’s going to be a straight-ahead acid set (that’s apparently not actually a 303, by the way), it proceeds to perpetually surprise.
I think people are afraid to create contrast in live sets, but each shift here feels intentional and confident – and the result is a set you won’t mistake for anyone else’s.
Check out her artist site; she’s got a wildly diverse set of creative endeavors, including immersive drawing and sound performances, and work as an artist covering sculpture, sound, video and installation. (Madeleine, if you’re reading this, hope we can feature your work in more depth! I just can’t wait to release this particular set first!)
Darker (well, and redder, thanks to the lighting), but related in its free-flowing machine explorations, we’ve got another set out of Moscow this month:
It’s the project of Nikita Zabelin x Xandr.vasiliev, at Moscow’s Pluton club, a repurposed factory building that provides a suitably raw industrial setting.
The two sets are connected for me, though. Dark as it is, the duo isn’t overly serious – weird and whimsical sounds still bubble out of the shadows. And it shows that grooves and free-form sections can intermix successfully. I got to play after this duo in St. Petersburg, and you really do get the sense of open improvisation.
Facing off at Moscow’s Pluton.
xandr aka Alexander has a bunch more here:
That inspires me for the coming days. Have a good weekend, everybody.
Synesthesia is a term that gets thrown around a lot, usually to describe the common associations of color, image, and music. But for some people, intermingling of senses can be far more extreme. Listen to LJ Rich talk about what happens when hearing and taste intersect.
LJ Rich is most widely known as presenter of the BBC program Click. She’s also got vast musical experience, from composition to engineering. But for someone so involved in music, her experience is out of the ordinary. Her sense of taste evokes music, and her sense of music evokes taste.
The thing about our senses is, each of us assumes our own experience is the same as everyone else’s – until we hear otherwise. And then, it’s almost impossible to describe; human experience is far more relativistic than any of us can ever hope to understand. But listening to LJ talk about this will give you insight into her unique way of taking in sound, as well as some clues to why music is profound and often cross-sensory for so many of us. She can describe what Debussy tastes like.
This opens up some new interdisciplinary performances; at Music Tech Fest (MTF) in Umeå, Sweden, LJ performed with a bartender, who mixed a cocktail onstage.
LJ’s talk makes up the first episode of the new MTF podcast. Listen at their site:
It’s a great time to love synths, even on a budget. The latest entry is the DIY Brunswick kit from Future Sound Systems in the UK. It’s simple (one oscillator), but weird and dirty sounding – and you can patch this semi-modular instrument to your own delight. And the price is under £99.
So yeah, if you want to mess about with synths and patch things together, modular is hardly your only option. There are loads of ways to make noise.
Brunswick made its debut at Synthfest in Sheffield earlier this month:
One oscillator (pulse/saw) only, but that’s paired with a multimode analog filter and analog envelope, and FM inputs to spice up the sound (plus other modulation). Add 24 patch points, and you can wire up other sound design options. The patchability has obviously made this a hit; the first batch sold out, but another is arriving in November.
Oh, and it says “BEEF” on it, which is important.
At £82.50, that’s just over 110EUR with VAT, or around US$100 (before shipping costs).
It is a DIY kit, not assembled. I’d say it’s an intermediate beginner build – nothing especially difficult, but it’ll take some time and you might want a simple project under your belt before you use this to learn soldering.
What’s notable is that Future Sound Systems are giving you a semi-modular instrument that works perfectly well on its own as well as a voice in a modular environment. They make a lot of other lovely stuff, too, though more in the Eurorack domain.
It’s trending now based on nothing more than a Reddit member posting that their box arrived – so I guess people want it!
Full Synthesizer Voice
VCO PWM & FM
2-Pole VCF with FM
Internal Triangle & Square wave LFOs
Internal Envelope & VCA
PLL & Phase Comparator
24 point Patch bay
Power: 2x PP3 9V batteries (+35, -20mA current draw)
Batteries not included
Dimensions: 194 x 120 mm
Patch bay I/O:
VCO 1V/Octave pitch control input
VCO Pulse Width Modulation input (normalled to LFO Triangle output)
VCO FM 1 input (normalled to LFO Triangle output)
VCO FM 2 input (normalled to Envelope output)
VCO “Sawtooth” output
VCO Pulse output
Phase Comparator input
Phase Comparator output
Phase Locked Loop input
Phase Locked Loop output
LFO Triangle output
LFO Square output
Low-Pass Filter input (normalled to switched VCO output)
Band-Pass Filter input
High-Pass Filter input
VCF FM 1 input (normalled to LFO Triangle output)
VCF FM 2 input (normalled to Envelope output)
VCA AM 1 input (normalled to LFO Triangle output)
VCA AM 2 input (normalled to Envelope output)
Envelope Gate input (normalled to LFO Square output)
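Many of those inputs are “normalled” – wired internally to a default source until a patch cable breaks the connection. As a rough sketch of that behavior (in Python, with illustrative names, not anything from Future Sound Systems’ documentation):

```python
# Sketch of how a "normalled" patch point behaves: with nothing
# plugged in, the input falls back to an internal default source;
# inserting a cable breaks that connection. Names are illustrative.

class NormalledInput:
    def __init__(self, name, normalled_source):
        self.name = name
        self.normalled_source = normalled_source  # default internal routing
        self.patched_source = None                # set when a cable is inserted

    def patch(self, source):
        """Plugging in a cable breaks the normalled connection."""
        self.patched_source = source

    def source(self):
        return self.patched_source or self.normalled_source

# Per the spec above, VCO FM 1 is normalled to the LFO Triangle output:
fm1 = NormalledInput("VCO FM 1", normalled_source="LFO Triangle")
assert fm1.source() == "LFO Triangle"   # default routing, no cable

# Patch the Envelope in instead, and the normal is broken:
fm1.patch("Envelope")
assert fm1.source() == "Envelope"
```

That’s why the Brunswick makes sound out of the box but still rewards patching – every normalled default is just a starting point you can override.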
You know – for kids. Mini.mu is a musical glove that can get young people coding and crafting and making music and electronics work. And it’s off to a simple, elegant, and affordable start, courtesy of artist Imogen Heap and designer Helen Leigh.
It’s one thing for music stars to try out bleeding edge technology and explore performance using gestural interfaces. It’s another thing for kids to tackle computing and electronics – and to make teaching tools that serve them. But a new musical glove design could reach a far wider audience.
MI.MU gloves have been a story we’ve followed since the beginning. With artist Imogen Heap, the effort was to expand on musical gloves past and make something that could expressively navigate a performance.
But MI.MU’s work has tended to be technically complex and pricey. Not so MINI.MU.
You make this glove from scratch, with everything kids need included in the kit. (Helen Leigh is not only a brilliant engineer, but also a children’s author and workshop instructor – so she gets how to teach and how kids get going quickly. The kit is rated for age 6+.)
The price: £39.95 retail (just about fifty bucks USD). For many in the UK, it’ll be even cheaper, as schools already have the micro:bit “brains” of the glove.
Apart from a cute-looking glove to put on your hand, the MINI.MU has a speaker, an accelerometer, and buttons. You use those sensors to pick up the position of the hand and particular events (like tilt or shake). Then code running on an included chip translates those motions into sounds – which you hear right on the glove, without any additional hardware.
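The actual MINI.MU code runs on the micro:bit (via MakeCode or MicroPython); purely to illustrate the core idea – tilt in, pitch out – here’s a plain-Python sketch. The ranges and names are assumptions for illustration, not the real firmware:

```python
# Illustrative sketch only: map an accelerometer tilt reading to a
# pitch, the basic gesture-to-sound idea behind the glove. The
# -1024..1023 range approximates a micro:bit accelerometer axis in
# milli-g; the note range is an arbitrary choice for this example.

def tilt_to_frequency(tilt, lo_hz=220.0, hi_hz=880.0):
    """Map a tilt reading (-1024..1023) onto a frequency in Hz."""
    tilt = max(-1024, min(1023, tilt))   # clamp out-of-range readings
    normalized = (tilt + 1024) / 2047    # scale to 0.0 .. 1.0
    return lo_hz + normalized * (hi_hz - lo_hz)

# Tilt fully one way for the lowest note, the other way for the highest:
assert tilt_to_frequency(-1024) == 220.0
assert tilt_to_frequency(1023) == 880.0
assert 540 < tilt_to_frequency(0) < 560   # roughly mid-range when level
```

On the glove itself, the same mapping would feed the onboard speaker in a loop, so the pitch follows your hand in real time.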
The UK-based project takes advantage of the BBC micro:bit, an initiative to get UK schoolchildren into coding and embedded computing. There are loads of micro:bits around, so the glove is designed to build on this platform, but you can also buy the glove with a bundled micro:bit if you don’t have one.
And this can be extended, too. Pins on the board let you connect additional sensors, like flex sensors.
Helen worked with the MI.MU team, Imogen, and kit maker Pimoroni to make this happen.
What’s promising about MINI.MU is that it makes computing and crafting personal – you’re coding something that’s expressive and literally in your hand. If the creators can keep kids (and adults) interested in doing stuff with a glove, and building code around music, there’s real potential.
It looks like the beginning of a platform that could be a lot more – and that realizes some longstanding dreams to bring new ways of interacting with music and learning about STEM through music technology. We’ll be watching.
Check out how kids would get coding with this:
Visual coding using musical examples. (Check these things out in your browser, free.)
What if music were made mechanically, with giant wheels and bellows and valves? The Mammoth Beat Organ makes that happen, using parts from toilets, a hearse, and a treadmill.
Yes, it has balloons connected by tubes and something called a “wind sequencer” with pegs and … it sounds like a calliope that’s gone a bit mental. And it comes with roll-on “modules” so you can add different layers of sound (like mechanically played drums). Watch:
It’s the Dunning Underwood Mammoth Beat Organ, the creation of two wild musical minds – Sam Underwood and Graham Dunning – in their first collaboration. It has the sonic thinking of the Giant Feedback Organ (Underwood) and the mechanical performance approach of Mechanical Techno (Dunning). And accordingly, it’s even meant to be a two-player contraption, involving both artists.
That performance spectacle is really part of the magic, as components are wheeled around and bits and bobs added and subtracted. Having seen Graham’s live show, that performance energy drives things in a way you wouldn’t get from just an installation – it has improvisation in it.
More on how this works – in particular, the deep research into historical instruments and the alternative histories they suggest, plus how the back of a hearse and a treadmill got incorporated into the construction:
This project is just getting going, so it’ll be fun to watch it evolve – especially if we get to see it in person.
It’s worth noting that they talk about needing years and years to keep building and rehearsing with the invention. We of course value novelty in tech, but that’s telling, whatever your instrumental fantasies (large and mechanical, compact and digital, or anything else). So I do hope they’ll keep us posted as they continue developing, and as they use this instrument to spark new creative directions in their own imaginations.
The video at top is shot and explained by Michael Forrest of Michael & Ivanka’s Grand Podcast – well worth a listen: http://grandpodcast.com
I’m not a fan of YouTube and the next videos it plays, but following this with Sir Simon Rattle conducting Chariots of Fire with Mr. Bean sure as hell works. In case you need some motivation for today’s soldering / hammering DIY instruments, have at it.
Music, film/TV, games… yes. But another frontier is opening for sound design you might not expect: cars. That has led automaker Jaguar to sound designer Richard Devine, and that in turn means when this Jag accelerates, it sounds like it’s headed into hyperdrive, bound for the outer rim.
Sound will be another differentiation point in the auto brand experience, a way to set luxury vehicles apart, it’s true. But when it comes to engine noise, there’s actually a safety issue: fully electric cars don’t make the noise that internal combustion engines do, which means you can’t hear them coming – and that makes them dangerous.
The cool thing is, manufacturers are finally beginning to consider aesthetics in sound design. And in a world that’s flooded with repetitions of the Windows startup sound, that Nokia theme tune (only mostly driven away by the iPhone), horrible sirens, beeps, and whatnot, this couldn’t come a moment too soon.
Richard Devine has been doing sound design across various industries, from sounds used in films to strange presets you find lurking in your plug-ins (as well as making some great music himself). Now at last he can share publicly that he did sound for the mighty Jaguar, and its all-electric I‑PACE car.
The engine acceleration noise is cool, and with good reason – this car may be ecologically minded, but it also does 0 to 60 in 4.5 seconds. (I’m not advertising for Jaguar, though… uh, hey Jag, I accept money. And automobiles. Be in touch.)
Iain Suffield, Acoustics Technical Specialist at Jaguar:
“We have taken a completely blank canvas and worked with electronic musician and sound designer Richard Devine to interpret the design language of the vehicle, to create building blocks of sound we can craft into the I-PACE.”
And they’ve worked on every aspect of the sound: “The Stop/Start noise of the motors, the audible vehicle alert system, the dynamic driving sounds all have been designed completely from scratch.”
From the outside, the car hums. Inside the cabin, you get different sound sets to reward you as you engage “dynamic” mode, and there is manual customization. (Yes, your car has sound sets. I’m waiting until I can drive a car that looks like a LADA on the outside but sounds like the Enterprise-D on the inside. I’ll keep dreaming.)
You can expect major car companies to enlist these sorts of sound departments more frequently, along with other manufacturers of various products keen to engage customers. And since these teams are developing internally, as well as hiring outside creative talent as with Richard Devine, that means more opportunities for music producers and audio engineers.
So the next time you’re obsessing over getting a sound right and layering instead of just dialing in a preset the easy way, think of it as a career investment. It worked for Richard.
Previously on CDM, German maker Audi followed a similar path:
For all the great sounds they can make, software synths eventually fit a repetitive mold: lots of knobs onscreen, simplistic keyboard controls when you actually play. ROLI’s Cypher2 could change that. Lead developer Angus chats with us about why.
Angus Hewlett has been in the plug-in synth game a while, having founded his own FXpansion, maker of various wonderful software instruments and drums. That London company is now part of another London company, fast-paced ROLI, and thus has a unique charge to make instruments that can exploit the additional control potential of ROLI’s controllers. The old MIDI model – note on, note off, and wheels and aftertouch that impact all notes at once – gives way to something that maps more of the synth’s sounds to the gestures you make with your hands.
So let’s nerd out with Angus a bit about what they’ve done with Cypher2, the new instrument. Background:
Peter: Okay, Cypher2 is sounding terrific! Who made the demos and so on?
Angus: Demos – Rafael Szaban, Heen-Wah Wai, Rory Dow. Sound Design – Rory Dow, Mayur Maha, Lawrence King & Rafael Szaban
Can you tell us a little bit about what architecture lies under the hood here?
Sure – think of it as a multi-oscillator subtractive synth. Three oscillators with audio-rate intermodulation (FM, S&H, waveshape modulation and ring mod), each switchable between Saw and Sin cores. Then you’ve got two waveshapers (each with a selection of analogue circuit models and tone controls, and a couple of digital wavefolders), and two filters, each with a choice of five different analogue filter circuit models – two variations on the diode ladder type, OTA ladder, state variable, Sallen-Key – and a digital comb filter. Finally, you’ve got a polyphonic, twin stereo output amp stage which gives you a lot of control over how the signal hits the effects chain – for example, you can send just the attack of every note to the “A” chain and the sustain/release phase to the “B” chain, all manner of possibilities there.
Controlling all of that, you’ve got our most powerful TransMod yet. 16 assignable modulation slots, each with over a hundred possible sources to choose from, everything from basics like Velocity and LFO through to function processors, step sequencers, paraphonic mod sources and other exotics. Then there’s eight fixed-function mod slots to support the five dimensions of MPE control and the three performance macros. So 24 TransMods in total, three times as many as v1.
Okay, so Cypher2 is built around MPE, or MIDI Polyphonic Expression. For those readers just joining us, this is a development of the existing MIDI specification that standardizes additional control around polyphonic inputs – that is, instead of adding expression to the whole sound all at once, you can get control under each finger, which makes way more sense and is more fun to play. What does it mean to build a synth around MPE control? How did you think about that in designing it?
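(For readers who want the nuts and bolts: the usual MPE trick is to rotate each new note onto its own MIDI channel, so channel-wide messages like pitch bend land on one note at a time. A minimal sketch of that idea – simplified for illustration, not the full spec, with no zones or channel reclaiming:)

```python
# Minimal sketch of MPE-style channel rotation. Each held note gets
# its own MIDI member channel, so per-channel messages (pitch bend,
# pressure, slide) affect only that note. Illustrative only - a real
# MPE implementation also handles zones, channel exhaustion, etc.

class MpeAllocator:
    def __init__(self, member_channels=range(2, 17)):
        # In an MPE "lower zone", channel 1 is the master channel and
        # channels 2-16 are the per-note member channels.
        self.free = list(member_channels)
        self.active = {}  # note number -> assigned channel

    def note_on(self, note):
        channel = self.free.pop(0)
        self.active[note] = channel
        return channel

    def note_off(self, note):
        channel = self.active.pop(note)
        self.free.append(channel)  # channel becomes available again
        return channel

mpe = MpeAllocator()
a = mpe.note_on(60)   # each held note lands on its own channel...
b = mpe.note_on(64)
assert a != b         # ...so bending one note leaves the other alone
mpe.note_off(60)
```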
It’s all about giving the sound designers maximum possibility to create expressive sound, and to manage how their sound behaves across the instrument’s range. When you’re patching for a conventional synth, you really only need to think about pitch and velocity: does the sound play nicely across the keyboard? With 5D MPE sounds, sound designers start having to think more like a software engineer or a game world designer – there are so many possibilities for how the player might interact with the sound, and they’ve got to have the tools to make it sound musical and believable across the whole range.
What this translates to in the specific case of Cypher2 is adapting our TransMod system (which is, at its heart, a sophisticated modulation matrix) to make it easy for sound designers to map the various MPE control inputs, via dynamically controllable transfer function curves, on to any and every parameter on the synth.
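The general shape of such a modulation slot – a control source passed through a transfer curve on its way to a parameter – can be sketched like this (the curve shape and parameter names are illustrative assumptions, not Cypher2’s actual internals):

```python
import math

# Illustrative sketch of a modulation-matrix slot: a 0..1 control
# source (say, MPE "slide") passes through a transfer-function curve
# before scaling a destination parameter. Not Cypher2's real code.

def apply_curve(value, amount):
    """Bend a 0..1 control value: amount > 1 eases in, < 1 eases out."""
    return value ** amount

def mod_slot(source_value, curve_amount, param_min, param_max):
    shaped = apply_curve(source_value, curve_amount)
    return param_min + shaped * (param_max - param_min)

# Slide at halfway through a steep curve: the (hypothetical) filter
# cutoff stays low until the gesture is well underway, then opens fast.
cutoff = mod_slot(0.5, 3.0, param_min=200.0, param_max=8000.0)
assert math.isclose(cutoff, 200.0 + 0.125 * 7800.0)  # 0.5**3 = 0.125
```

Swap the curve amount per patch, and the same physical gesture can feel gentle in one preset and explosive in another – which is the sound-design control Angus is describing.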
How does this relate to your past line of instruments?
Clearly, Cypher2 is a successor to the original Cypher, which was one of the DCAM Synth Squad synths; it inherits many of the same functional upgrades that Strobe 2 gained over its predecessor a couple of years ago – the extended TransMod system, the effects engine, the Retina-friendly, scalable, skinnable GUI – but goes further, and builds on a lot of user and sound-designer feedback we had from Strobe 2. So the modulation system is friendlier, the effects engine is more powerful, and it’s got a brand new and much more powerful step-sequencer and arpeggiator. In terms of its relationship to the original Cypher – the overall layout is similar, but the oscillator section has been upgraded with the sine cores and additional FM paths; the shaper section gains wavefolders and tone controls; the filters have six circuits to choose from, up from two in the original, so there’s a much wider range of tones available there; the envelopes give you more choice of curve responses; the LFOs each have a sub oscillator and quadrature outputs; and obviously there’s MPE as described above.
Of course, ROLI hope that folks will use this with their hardware. But since part of the beauty is that this is open via MPE – are there any interesting applications with other MPE hardware? Have you tried it out on non-ROLI stuff (or with testers, etc.)?
Yes, we’ve tried it (with Linnstrument, mainly), and yes, it very much works – although with one caveat. Namely, MPE, as with MIDI, is a protocol which specifies how devices should talk to one another – but it doesn’t specify, at a higher level, what the interaction between the musician and their sound should feel like.
That’s a problem that I actually first encountered during the development of BFD2 in the mid-2000s: “MIDI Velocity 0-127” is adequate to specify the interaction between a basic keyboard and a sound module, and some of the more sophisticated stage controller boards (Kurzweil, etc.) have had velocity curves at least since the 90s. But as you increase the realism and resolution of the sounds – and BFD2 was the first time we really did so in software to the extent that it became a problem – it becomes apparent that MIDI doesn’t specify how velocity should map on to dB, or foot-pounds-per-second force equivalent, or any real-world units.
That’s tolerable for a keyboard, where a discerning user can set one range for the whole instrument, but when you’re dealing with a V-Drums kit with, potentially, ten or twelve pads, of different types, to set up, and little in the way of a standard curve to aim for, the process becomes cumbersome and off-putting for the end-user. What does “Velocity 72” actually mean from Manufacturer A’s snare drum controller, at a sensitivity setting B, via drum brain C triggering sample D?
Essentially, you run into something of an Uncanny Valley effect (a term from the world of movies / games where, as computer generated graphics moved from obviously artificial 8-bit pixel art to today’s motion-captured, super-sampled cinematic epics, paradoxically audiences would in some cases be less satisfied with the result). So it’s certainly a necessary step to get expressive hardware and software talking to one another – and MPE accomplishes that very nicely indeed – but it’s not sufficient to guarantee that a patch will result in a satisfactory, believable playing experience OOTB.
Some sound-synth-controller-player combinations will be fine; others may not quite live up to expectations. Right now I think it’s natural to expect that it may be a bit hit-and-miss. Feedback on this is something I’d like to actively encourage – we have a great dialogue with the other hardware vendors and are keen to achieve a high standard of interoperation, but it’s a learning process for all involved.
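To make Angus’s velocity point concrete: the same incoming value can land at very different loudness depending on the curve the receiver happens to use. Both mappings below are plausible manufacturer choices, illustrative only – neither comes from the MIDI spec, which is exactly the problem:

```python
import math

# Two equally defensible ways a receiver might map MIDI velocity
# (0-127) to loudness in dB. MIDI itself specifies neither, so
# "velocity 72" means different things on different gear.

def velocity_to_db_linear(velocity, floor_db=-40.0):
    """Linear in dB: velocity 127 -> 0 dB, lower values approach the floor."""
    return floor_db * (1 - velocity / 127)

def velocity_to_db_squared(velocity):
    """Amplitude proportional to velocity squared (a common power-law choice)."""
    return 20 * math.log10((velocity / 127) ** 2)

# The same "velocity 72" lands several dB apart between the mappings:
a = velocity_to_db_linear(72)    # about -17.3 dB
b = velocity_to_db_squared(72)   # about -9.9 dB
assert abs(a - b) > 5
```

Several dB is an easily audible gap – which is why a patch that feels perfect on one controller can feel limp or twitchy on another, even when both speak MPE flawlessly.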
Thanks, Angus! I’ll be playing with Cypher2 and seeing what I can do with it – but fascinating to hear this take on synths and control mapping. More food for thought.
This week in blasphemy: LOOK MUM NO COMPUTER has another weird nerdy superhit, this time modding and glitching out an electronic bible. Jesus, take the soldering iron!
LOOK MUM NO COMPUTER is inventor-musician-composer Sam Battle of London, whose projects have included synths on bikes, flamethrower organs, and Theremin lightsabres, among other concoctions. And he has a knack for creating weird and wonderful inventions that then go viral.
But speaking of viral millennial sensations (okay, very different millennium), maybe you’ve heard of a bestselling book called … The Bible? All about a thought leader / influencer who … okay, I’ll stop.
Long story short: electronic bible. Soldering iron. Circuit bends. Apparently, a dare from deadmau5. And then, this:
And before I tempt fate and get struck by lightning while blogging – don’t worry, bible lovers – Sam says “Nothing against the bible here. I showed it to a couple of christian friends before and they seemed to like it.” There, that’s good enough for me.
Okay, sure, it sounds a little demonic, but you know, it’s still the actual Bible. If Christian rock sounded like this, I’d be up for it. (Bach, I like.)
As it happens, this project is interesting from an engineering perspective, too. Recent products are way harder to bend, thanks to fewer exposed bend points and chips hidden beneath black blobs and the like. There’s a reason circuit bending often starts with a trip to eBay or a flea market.
Sam promises more info on his site soon on just how he pulled this off. We’ll be watching.
For more on circuit bending, start with the man who started it all – Reed Ghazala, whose approach to bending is like an ecologist assisting machines in evolving. (He even gives them eyes and the like, for a window into their soul.) It’s radical, wonderful stuff – from an engineering perspective as well as a human and philosophical one. His site:
Okay, for anyone who thought the TR-808 shoes were too much, you may want to sit down. Now there’s a craft beer immortalizing the Roland drum machine for #808day.
It’s called the BR-808, and it’s quite silly – but you do get a celebrity endorsement from the likes of pioneering artist A Guy Called Gerald:
Filmmakers Origin Workshop have teamed up with beat-loving brewers Mondo Brewing Company (U.K.), DevilCraft (Japan) and Melvin Brewing (U.S.A.) and together they have developed and produced The Origin Workshop BR-808, a special collaboration beer that honours the enduring sounds and cultural legacy of the TR-808 and the seismic shift it created in music.
It’s a taste of the future, a brew defined by the legendary kick drum of the 808. It’s been developed to recreate that deep sub-bass low end, delivering a solid Japanese kick that resonates through the American IPA flavours.
With tropical, citrus aromas from mikan orange peel and flavours from the generous use of Citra and Amarillo hops, the Origin Workshop BR-808 provides a refined taste. In the spirit of true collaboration, the brewers added no caramel malt – only light British pale malt and Cara pils – resulting in a refreshing brew coming in at 7% ABV.
That is a hipster marketing singularity if ever I heard one – London + 808 nostalgia + film agency + American IPA. I haven’t tasted this, though, so I can’t vouch yet.
That said, hmmm, 7%? I’m in Berlin, so I’ll stick to the lighter pilsner. 7% and I might not be able to properly operate STEP LOOP on the TR-8S.
If you don’t have a studio, four cellists, amps and recording equipment plus an engineer handy, try this — it’s free.
“Amplified Cello” is the latest instrument in the ongoing LABS series from our friends at Spitfire Audio, a boutique sample house in the UK. They promised some more “experimental” content, after the soft piano and strings and drums, and here you go.
Not only do these cellos get routed through amps for extra edge, but Spitfire founder Christian Henson and engineer Harry Wilson actually did that processing live during recording – cellos in one room, tracking through the amps in another.
But what really makes this interesting to play around with is, they’ve put a bunch of different articulations and gestures in the library. My graduate level musicology education here wants to use words like glissandi and tremolo, but actually, their words “evil,” “wobbly,” and “tension” are both more descriptive of the music and, let’s be honest, truer to our lives sometimes.
And there’s quite a selection:
Now, LABS are really easy and fun to play with, but here we do run up against a limitation: there are a bunch of different samples for various articulations here, and you can only get at them one at a time. Do try out the minimalist controls, as they have more of an impact on the result. It might also be worth setting up a multi-instrument to play with these. (Might get back to you on that!)
These minimal controls may confuse long-time sample users, but – don’t think too much; have a play.
To install, as before you head to the LABS page, login or register, and then click “GET” for each library you want. You can then choose where your plug-ins go, where to store the sample content (as on an external drive), and then download from the Spitfire app:
That app also has updates for Spitfire’s other LABS instruments.
Spitfire’s app has also improved: you can finally set a default path for VST2 (essential on Windows), choose default install and plug-in paths, and log in automatically.
I’m still surprised at readers’ resistance to these sorts of apps, but I’m guessing that means you’ve had a bad experience with some developers. (That part, I understand.) This app for me is reducing frustration, not adding to it, and I’ve tried on both Mac and Windows machines.