Reimagine pixels and color, melt your screen live into glitches and textures, and do it all for free on the Web – as you play with others. We talk to Olivia Jack about her invention, live coding visual environment Hydra.
Inspired by analog video synths and vintage image processors, Hydra is open, free, collaborative, and all runs as code in the browser. It’s the creation of US-born, Colombia-based artist Olivia Jack. Olivia joined our MusicMakers Hacklab at CTM Festival earlier this winter, where she presented her creation and its inspirations, and jumped in as a participant – spreading Hydra along the way.
Olivia’s Hydra performances are explosions of color and texture, where even the code becomes part of the aesthetic. And it’s helped take Olivia’s ideas across borders, both in the Americas and Europe. It’s part of a growing interest in the live coding scene, even as that scene enters its second or third decade (depending on how you count), but Hydra also represents an exploration of what visuals can mean and what it means for them to be shared between participants. Olivia has rooted those concepts in the legacy of cybernetic thought.
Following her CTM appearance in Berlin, I wanted to find out more about how Olivia’s tool has evolved and its relation to DIY culture and self-fashioned tools for expression.
CDM: Can you tell us a little about your background? Did you come from some experience in programming?
Olivia: I have been programming now for ten years. Since 2011, I’ve worked freelance — doing audiovisual installations and data visualization, interactive visuals for dance performances, teaching video games to kids, and teaching programming to art students at a university, and all of these things have involved programming.
Had you worked with any existing VJ tools before you started creating your own?
Alexandra Cárdenas and Olivia Jack @ ICLC 2019:
In your presentation in Berlin, you walked us through some of the origins of this project. Can you share a bit about how this germinated, what some of the precursors to Hydra were and why you made them?
It’s based on an ongoing investigation of:
Collaboration in the creation of live visuals
Possibilities of peer-to-peer [P2P] technology on the web
Satellite Arts project by Kit Galloway and Sherrie Rabinowitz http://www.ecafe.com/getty/SA/ (1977)
I was also questioning some of the metaphors we use to understand and interact with the web. “Visiting” a website is exchanging a bunch of bytes with a faraway place, routed through other faraway places. Rather than think about a webpage as a “page”, “site”, or “place” that you can “go” to, what if we think about it as a flow of information where you can configure connections in realtime? I like the browser as a place to share creative ideas – anyone can load it without having to go to a gallery or install something.
And I was interested in using the idea of a modular synthesizer as a way to understand the web. Each window can receive video streams from and send video to other windows, and you can configure them in real time using WebRTC (realtime web streaming).
Here’s one of the early tests I did:
I really liked this philosophical idea you introduced of putting yourself in a feedback loop. What does that mean to you? Did you discover any new reflections of that during our hacklab, for that matter, or in other community environments?
It’s a process of creation without a specific idea of where it will end up – trying something, seeing what happens, and then trying something else.
Code tries to define the world using a specific set of rules, but at the end of the day it ends up chaotic. Maybe the world is chaotic. It’s important to be self-reflective.
How did you come to developing Hydra itself? I love that it has this analog synth model – and these multiple frame buffers. What was some of the inspiration?
I had no intention of creating a “tool”… I gave a workshop at the International Conference on Live Coding in December 2017 about collaborative visuals on the web, and made an editor to make the workshop easier. Then afterwards people kept using it.
I didn’t think too much about the name but [had in mind] something about multiplicity. Hydra organisms have no central nervous system; their nervous system is distributed. There’s no hierarchy of one thing controlling everything else, but rather interconnections between pieces.
Ed.: Okay, Olivia asked me to look this up and – wow, check out nerve nets. There’s nothing like a head, let alone a central brain. Instead, the aquatic creatures in the genus Hydra have senses and neurons essentially as one interconnected network, with cells that detect light and touch forming a distributed sensory awareness.
Most graphics abstractions are based on the idea of a 2d canvas or 3d rendering, but the computer graphics card actually knows nothing about this; it’s just concerned with pixel colors. I wanted to make it easy to play with the idea of routing and transforming a signal rather than drawing on a canvas or creating a 3d scene.
This also contrasts with directly programming a shader (one of the other common ways that people make visuals using live coding), where you generally only have access to one frame buffer for rendering things to. In Hydra, you have multiple frame buffers that you can dynamically route and feed into each other.
MusicMakers Hacklab in Berlin. Photo: Malitzin Cortes.
Livecoding is of course what a lot of people focus on in your work. But what’s the significance of code as the interface here? How important is it that it’s functional coding?
It’s inspired by [Alex McLean’s sound/music pattern environment] TidalCycles — the idea of taking a simple concept and working from there. In Tidal, the base element is a pattern in time, and everything is a transformation of that pattern. In Hydra, the base element is a transformation from coordinates to color. All of the other functions either transform coordinates or transform colors. This directly corresponds to how fragment shaders and low-level graphics programming work — the GPU runs a program simultaneously on each pixel, and that receives the coordinates of that pixel and outputs a single color.
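That coordinates-to-color model can be sketched outside the GPU, too. Here is a minimal Python illustration – not Hydra’s actual implementation, just the same idea – where a “shader” is a function from (x, y) to an RGB tuple, and every other function either transforms the coordinates before sampling or transforms the color after:

```python
import math

# A "shader" is a function (x, y) -> (r, g, b), evaluated once per pixel.
def osc(freq):
    """Base source: a horizontal oscillation, loosely like Hydra's osc()."""
    return lambda x, y: ((math.sin(x * freq) + 1) / 2,) * 3

def rotate(shader, angle):
    """Coordinate transform: rotate the sampling coordinates, not the color."""
    c, s = math.cos(angle), math.sin(angle)
    return lambda x, y: shader(x * c - y * s, x * s + y * c)

def invert(shader):
    """Color transform: operate on the shader's output color instead."""
    return lambda x, y: tuple(1 - v for v in shader(x, y))

# Compose a pipeline, then "run" it on every pixel of a tiny 8x8 frame --
# conceptually what the GPU does for every pixel simultaneously.
pipeline = invert(rotate(osc(10.0), 0.5))
frame = [[pipeline(x / 8, y / 8) for x in range(8)] for y in range(8)]
```

The function names mimic Hydra’s vocabulary but are hypothetical stand-ins; the point is only that a whole pipeline stays a single pure function from coordinates to color.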
What’s the experience you have of the code being onscreen? Are some people actually reading it / learning from it? I mean, in your work it also seems like a texture.
I am interested in it being somewhat understandable even if you don’t know what it is doing or that much about coding.
Code is often a visual element in a live coding performance, but I am not always sure how to integrate it in a way that feels intentional. I like using my screen itself as a video texture within the visuals, because then everything I do — like highlighting, scrolling, moving the mouse, or changing the size of the text — becomes part of the performance. It is really fun! Recently I learned about prepared desktop performances, and, related to the live-coding mantra of “show your screens,” I like the idea that everything I’m doing is part of the performance. And that’s also why I directly mirror the screen from my laptop to the projector. You can contrast that to just seeing the output of an AV set, and having no idea how it was created or what the performer is doing. I don’t think it’s necessary all the time, but it feels like using the computer as an instrument and exploring different ways that it is an interface.
The algorave thing is now getting a lot of attention, but you’re taking this tool into other contexts. Can you talk about some of the other parties you’ve played in Colombia, or when you turned the live code display off?
Most of my inspiration and references for what I’ve been researching and creating have been outside of live coding — analog video synthesis, net art, graphics programming, peer-to-peer technology.
Having just said I like showing the screen, I think it can sometimes be distracting and isn’t always necessary. I did visuals for Putivuelta, a queer collective and party focused on diasporic Latin club music and wanted to just focus on the visuals. Also I am just getting started with this and I like to experiment each time; I usually develop a new function or try something new every time I do visuals.
Community is such an interesting element of this whole scene. So I know with Hydra so far there haven’t been a lot of outside contributions to the codebase – though this is a typical experience of open source projects. But how has it been significant to your work to both use this as an artist, and teach and spread the tool? And what does it mean to do that in this larger livecoding scene?
I’m interested in how technical details of Hydra foster community — as soon as you log in, you see something that someone has made. It’s easy to share via the Twitter bot, to see and edit live the code of something someone has made, and to make your own. It acts as a gallery of shareable things that people have made:
Although I’ve developed this tool, I’m still learning how to use it myself. Seeing how other people use it has also helped me learn how to use it.
I’m inspired by work that Alex McLean and Alexandra Cardenas and many others in live coding have done on this — just the idea that you’re showing your screen and sharing your code with other people to me opens a conversation about what is going on, that as a community we learn and share knowledge about what we are doing. Also I like online communities such as talk.lurk.org and streaming events where you can participate no matter where you are.
I’m also really amazed at how this is spreading through Latin America. Do you feel like there’s some reason the region has been so fertile with these tools?
It’s definitely influenced me rather than the other way around, getting to know Alexandra [Cardenas’] work, Esteban [Betancur, author of live coding visual environment Cine Vivo], rggtrn, and Mexican live coders.
Madrid performance. Photo: Tatiana Soshenina.
What has the scene been like there for you – especially now living in Bogota, having grown up in California?
I think people are more critical about technology and so that makes the art involving technology more interesting to me. (I grew up in San Francisco.) I’m impressed by the amount of interest in art and technology spaces such as Plataforma Bogota that provide funding and opportunities at the intersection of art, science, and technology.
The press lately has fixated on live coding or algorave but maybe not seen connections to other open source / DIY / shared music technologies. But – maybe now especially after the hacklab – do you see some potential there to make other connections?
To me it is all really related, about creating and hacking your own tools, learning, and sharing knowledge with other people.
Oh, and lastly – want to tell us a little about where Hydra itself is at now, and what comes next?
Right now, the focus is improving documentation and making it easier for others to contribute.
Personally, I’m interested in performing more and developing my own performance process.
VCV Rack, the open source platform for software modular, keeps blossoming. If what you were waiting for was more maturity and stability and integration, the current pipeline looks promising. Here’s a breakdown.
Even with other software modulars on the scene, Rack stands out. Its model is unique – build a free, open source platform, and then build the business on adding commercial modules, supporting both the platform maker (VCV) and third parties (the module makers). That has opened up some new possibilities: a mixed module ecosystem of free and paid stuff, support for ports of open source hardware to software (Music Thing Modular, Mutable Instruments), robust Linux support (which other Eurorack-emulation tools currently lack), and a particular community ethos.
Of course, the trade-off with Rack 0.xx is that the software has been fairly experimental. Versions 1.0 and 2.0 are now in the pipeline, though, and they promise a more refined interface, greater performance, a more stable roadmap, and more integration with conventional DAWs.
New for end users
VCV founder and lead developer Andrew Belt has been teasing out what’s coming in 1.0 (and 2.0) online.
Here’s an overview:
Polyphony, polyphonic cables, polyphonic MIDI support and MPE
Multithreading and hardware acceleration
Tooltips, manual data entry, and right-click menus with more information on modules
Virtual CV to MIDI and direct MIDI mapping
2.0 version coming with fully-integrated DAW plug-in
More on that:
Polyphony and polyphonic cables. The big one – you can now use polyphonic modules and even polyphonic patching. Here’s an explanation:
Polyphonic MIDI and MPE. Yep, native MPE support. We’ve seen this in some competing platforms, so great to see here.
Multithreading. Rack will now use multiple cores on your CPU more efficiently. There’s also a new DSP framework that adds CPU acceleration (which helps efficiency for polyphony, for example). (See the developer section below.)
Oversampling for better audio quality. Users can choose higher oversampling settings in the engine to reduce aliasing.
Tooltips and manual value entry. Get more feedback from the UI and precise control. You can also right-click to open other stuff – links to developer’s website, manual (yes!), source code (for those that have it readily available), or factory presets.
Core CV-MIDI. Send virtual CV to outboard gear as MIDI CC, gate, note data. This also integrates with the new polyphonic features. But even better –
Map MIDI directly. The MIDI map module lets you map parameters without having to patch through another module. A lot of software has been pretty literal with the modular metaphor, so this is a welcome change.
And that’s just what’s been announced. 1.0 is imminent, in the coming months, but 2.0 is coming, as well…
Rack 2.0 and VCV for DAWs. After 1.0, 2.0 isn’t far behind. “Shortly after” 2.0 is released, a DAW plug-in will be launched as a paid add-on, with support for “multiple instances, DAW automation with parameter labels, offline rendering, MIDI input, DAW transport, and multi-channel audio.”
These plans aren’t totally set yet, but a price around a hundred bucks and multiple ins and outs are also planned. (Multiple I/O also means some interesting integrations will be possible with Eurorack or other analog systems, for software/hardware hybrids.)
VCV Bridge is already deprecated, and will be removed from Rack 2.0. Bridge was effectively a stopgap for allowing crude audio and MIDI integration with DAWs. The planned plug-in sounds more like what users want.
Rack 2.0 itself will still be free and open source software, under the same license. The good thing about the plug-in is, it’s another way to support VCV’s work and pay the bills for the developer.
New for developers

Rack v1 will bring a new, stabilized API – meaning you will need to do some work to port your modules. It’s not a difficult process, though – and I think part of Rack’s appeal is the friendly API and SDK from VCV.
You’ll also be able to use an SSE wrapper (simd.hpp) to take advantage of accelerated code on desktop CPUs, without hard-coding manual calls to hardware that could break your plug-ins in the future. This also theoretically opens up future support for other acceleration instruction sets – like NEON or AVX. (It does seem like ARM platforms are the future, after all.)
While the Facebook group is still active and a place where a lot of people share work, there’s a new dedicated forum. That does things Facebook doesn’t allow, like efficient search, structured sections in chronological order so it’s easy to find answers, and generally not being part of a giant, evil, destructive platform.
Fahmi Mursyid from Indonesia has been creating oceans of wondrously sculpted sounds on netlabels for the past several years. Be sure to watch these magical constructions built on nothing but Walkman tape loops with effects pedals and VCV Rack patches – immense sonic drones from minimal materials.
Fahmi hails from Bandung, in West Java, Indonesia. While places like Yogyakarta have hogged the attention traditionally (back even to pre-colonial gamelan kingdom heydays), it seems like Bandung has quietly become a haven for experimentalists.
He also makes gorgeous artworks and photography, which I’ve added here to visualize his work further. Via:
This dude and his friends are absurdly prolific. But you can be ambitious and snap up the whole discography for about twelve bucks on Bandcamp. It’s all quality stuff, so you could load it up on a USB key and have music when you’re away from the Internet ranging from glitchy edges to gorgeous ambient chill.
Watching the YouTube videos gives you a feeling for the materiality of what you’re hearing – a kind of visual kinetic picture to go with the sound sculpture. Here are some favorites of mine:
Via Bandcamp, he’s just shared this modded Walkman looping away. DSP, plug-in makers: here’s some serious nonlinearity to inspire you. Trippy, whalesong-in-wormhole stuff:
The quote added to YouTube from Steve Reich fits:
“…the process of composition, but rather pieces of music that are, literally, processes. The distinctive thing about musical processes is that they determine all the note-to-note (sound-to-sound) details and the overall form simultaneously. (Think of a round or infinite canon.)”
He’s been gradually building a technique around tapes.
But there’s an analog to this kind of physical process in working virtually with unexpected, partially unstable modular creations. Working with the free and open source software modular platform VCV Rack, he’s created some wild ambient constructions:
Or the two together:
Eno and Reich pepper the cultural references, but there are aesthetic cues from Indonesia, too, I think (and no reason not to tear down those colonial divisions between the two spheres). Here’s a reinterpretation of Balinese culture of the 1940s, which gives you some texture of that background and also his own aesthetic slant on the music of his native country:
Check out the releases, too. These can get angular and percussive:
— or become expansive soundscapes, as here in collaboration with Sofia Gozali:
— or become deep, physical journeys, as with Jazlyn Melody (really love this one):
Here’s a wonderful live performance:
I got hooked on Fahmi’s music before, and … honestly, far from playing favorites, I find I keep accidentally running across it through aliases and different links and enjoying it over and over again. (While I was just in Indonesia for Nusasonic, it wasn’t the trip that made me discover the music – it was the work of musicians like Fahmi that brought us all to the other side of the world in the first place. They discovered new sounds, and us.) So previously:
Vember Audio’s Surge synth could be an ideal choice for an older machine or a tight budget – with deep modulation and loads of wavetables, now free and open source.
And that really means open source: Surge gets a GPL v3 license, which could also make this the basis of other projects.
People are asking for this a lot – “just open source it.” But that can be a lot of work, often prohibitively so. So it’s impressive to see source code dumped on GitHub.
And Surge is a deep synth, even if last updated in 2008. You get an intensive modulation architecture, nearly 200 wavetables, and a bunch of effects (including vocoder and rotary speaker). Plus it’s already 64-bit, so even though it’s a decade old, it’ll play reasonably nicely on newer machines.
Inside the modulation engine.
Synthesis method: Subtractive hybrid
Each patch contains two ‘scenes’, which are separate instances of the entire synthesis engine (except effects) that can be used for layering or split patches.
Quick category-based patch-browser
Future-proof: comes as both a 32- & 64-bit VST plugin (Windows PC)
Universal Binary for both VST and AU (Mac)
8 versatile oscillator algorithms: Classic, Sine, Wavetable, Window, FM2, FM3, S/H Noise and Audio-input
The classic oscillator is a morphable pulse/saw/dualsaw oscillator with a sub-oscillator and self-sync.
The FM2/FM3 oscillators consist of one carrier with 2/3 modulators and various options.
Most algorithms (except FM2, FM3, Sine and Audio-input) offer up to 16-voice unison at the oscillator level.
Most oscillator algorithms (except FM2/FM3) are strictly band-limited yet still cover the entire audible spectrum, delivering a clear punchy yet clean sound.
Noise generator with variable spectrum.
Two filter units, arrangeable in 8 different configurations
Feedback loop (the number of variations appears in parentheses below)
Available filter-algorithms: LP12 (3), LP24 (3), LP24L (1-4 poles), HP12 (3), HP24 (3), BP (4), Notch (2), Comb (4), S&H
Filters can self-oscillate (with excitation) and respond amazingly fast to cutoff frequency changes.
Waveshaper (5 shapes)
12 LFO-units available to each voice (6 are running on each voice and 6 are shared for the scene)
DAHDSR envelope generators on every LFO-unit
7 deformable LFO-waveforms + 1 drawable/stepsequencer waveform
LFO1 allows envelope retriggering when used as stepsequencer
Extremely fast and flexible modulation routing. Almost every continuous parameter can be modulated.
8 effect units arranged as 2 inserts/scene, 2 sends and 2 master effects
10 top-quality algorithms: Delay, Reverb, Chorus, Phaser, EQ, Distortion, Conditioner (EQ, stereo-image control & limiter), Rotary speaker, Frequency shifter, Vocoder
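The “strictly band-limited” claim in the oscillator list above boils down to a classic trick: only synthesize harmonics that fall below the Nyquist frequency, so nothing can alias. A minimal additive sketch of the idea in Python – not Surge’s actual (far more efficient) DSP:

```python
import math

def bandlimited_saw(freq, sample_rate, n_samples):
    """Additive sawtooth: sum sin(k*w)/k only for harmonics below Nyquist.
    No partial can alias, so the spectrum stays clean at any pitch.
    Surge's real oscillators use more efficient techniques, but the
    band-limiting principle is the same."""
    nyquist = sample_rate / 2
    max_k = int(nyquist // freq)  # highest harmonic still below Nyquist
    out = []
    for n in range(n_samples):
        t = n / sample_rate
        s = sum(math.sin(2 * math.pi * freq * k * t) / k
                for k in range(1, max_k + 1))
        out.append((2 / math.pi) * s)  # scale toward the ideal saw amplitude
    return out

samples = bandlimited_saw(440.0, 48000, 256)
```

A naive saw (`2 * (t * freq % 1) - 1`) would fold all harmonics above Nyquist back into the audible range; truncating the harmonic series is the simplest way to hear why band-limited oscillators sound “clear punchy yet clean.”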
It’s definitely an underground subculture of audiovisual media, but lovers of graphics made with vintage displays, analog oscilloscopes, and lasers are getting their own fall festival to share performances and techniques.
Vector Hack claims to be “the first ever international festival of experimental vector graphics” – a claim that is, uh, probably fair. And it’ll span two cities, starting in Zagreb, Croatia, but wrapping up in the Slovenian capital of Ljubljana.
Why vectors? Well, I’m sure the festival organizers could come up with various answers to that, but let’s go with because they look damned cool. And the organizers behind this particular effort have been spitting out eyeball-dazzling artwork that’s precise, expressive, and unique to this visceral electric medium.
Unconvinced? Fine. Strap in for the best. Festival. Trailer. Ever.
Here’s how they describe the project:
Vector Hack is the first ever international festival of experimental vector graphics. The festival brings together artists, academics, hackers and performers for a week-long program beginning in Zagreb on 01/10/18 and ending in Ljubljana on 07/10/18.
Vector Hack will allow artists creating experimental audio-visual work for oscilloscopes and lasers to share ideas and develop their work together alongside a program of open workshops, talks and performances aimed at allowing young people and a wider audience to learn more about creating their own vector based audio-visual works.
We have gathered a group of fifteen participants, all working in the field, from a diverse range of locations including the EU, USA and Canada. Each participant brings a unique approach to this exciting field, and it will be a rare chance to see all their works together in a single program.
Vector Hack festival is an artist-led initiative organised with support from Radiona.org/Zagreb Makerspace as a collaborative international project alongside Ljubljana’s Ljudmila Art and Science Laboratory and Projekt Atol Institute. It was conceived and initiated by Ivan Marušić Klif and Derek Holzer with assistance from Chris King.
Robert Henke is featured, naturally – the Berlin-based artist and co-founder of Ableton and Monolake has spent the last years refining his skills in spinning his own code to control ultra-fine-tuned laser displays. But maybe what’s most exciting about this scene is discovering a whole network of people hacking into supposedly outmoded display technologies to find new expressive possibilities.
One person who has helped lead that direction is festival initiator Derek Holzer. He’s finishing a thesis on the topic, so we’ll get some more detail soon, but anyone interested in this practice may want to check out his open source Pure Data library. The Vector Synthesis library “allows the creation and manipulation of vector shapes using audio signals sent directly to oscilloscopes, hacked CRT monitors, Vectrex game consoles, ILDA laser displays, and oscilloscope emulation software using the Pure Data programming environment.”
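The core trick of vector synthesis is simple: two audio signals drive the X and Y deflection directly, so any pair of waveforms draws a shape. A minimal Python sketch of the principle – independent of the Vector Synthesis Pd library itself – generating one Lissajous figure as X/Y sample pairs:

```python
import math

def lissajous(freq_x, freq_y, phase, sample_rate=48000, n=1000):
    """Generate X/Y audio-rate sample pairs. Sent out the left/right
    channels of a sound card wired to an oscilloscope's X and Y inputs,
    these would trace a Lissajous figure on the screen."""
    pts = []
    for i in range(n):
        t = i / sample_rate
        x = math.sin(2 * math.pi * freq_x * t + phase)  # X deflection
        y = math.sin(2 * math.pi * freq_y * t)          # Y deflection
        pts.append((x, y))
    return pts

# A 3:2 frequency ratio with a 90-degree phase offset draws the
# classic knot-like Lissajous figure.
points = lissajous(300.0, 200.0, math.pi / 2)
```

Because the drawing signals are themselves audio, sound and image are literally the same data – which is why this medium intertwines the two so tightly.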
The results are entrancing – organic and synthetic all at once, with sound and sight intertwined (both in terms of control signal and resulting sensory impression). That is itself perhaps significant, as neurological research reveals that these media are experienced simultaneously in our perception. Here are just two recent sketches for a taste:
They’re produced by hacking into a Vectrex console – an early-80s consumer game console that used vector signals to manipulate a cathode-ray screen. From Wikipedia, here’s how it works:
The vector generator is an all-analog design using two integrators: X and Y. The computer sets the integration rates using a digital-to-analog converter. The computer controls the integration time by momentarily closing electronic analog switches within the operational-amplifier based integrator circuits. Voltage ramps are produced that the monitor uses to steer the electron beam over the face of the phosphor screen of the cathode ray tube. Another signal is generated that controls the brightness of the line.
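In code, that integrator scheme reduces to accumulating a rate while a switch is closed. Here is a toy Python model of one axis (the parameter names are hypothetical, not Vectrex internals):

```python
def integrate_ramp(rate, dt, steps, switch_closed_for):
    """Model one analog integrator axis: while the electronic switch is
    closed, the output voltage ramps at `rate` volts/second; once the
    switch opens, the voltage (and thus the beam position) holds.
    The DAC-set rate and the switch-closed duration together determine
    how far and how fast the beam travels."""
    v = 0.0
    trace = []
    for step in range(steps):
        if step * dt < switch_closed_for:
            v += rate * dt  # switch closed: linear voltage ramp
        trace.append(v)     # switch open: voltage holds steady
    return trace

# Ramp at 100 V/s for 10 ms, then hold -- ending near 1 V.
trace = integrate_ramp(rate=100.0, dt=0.001, steps=20, switch_closed_for=0.01)
```

The computer draws a line segment by choosing the two rates (X and Y) and the closure time; brightness is a separate signal, as the excerpt notes.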
Ted Davis is working to make these technologies accessible to artists, too, by developing a library for coding-for-artists tool Processing.
Oscilloscopes, ready for interaction with a library by Ted Davis.
Here’s a glimpse of some of the other artists in the festival, too. It’s wonderful to watch new developments in the post digital age, as artists produce work that innovates through deeper excavation of technologies of the past.
Context, built in Pure Data, is a free and open source modular sequencer that opens up new ways of thinking about melody, rhythm, and pattern.
Sequencers: we’ve seen, well, a lot of them. There are easy-to-use step sequencers, but they tend to be limited to pretty simple patterns. More sophisticated options go to the other extreme, making you build up patterns from scratch or program your own tools.
The challenge is, how do you employ the simplicity of a step sequencer, but make a wider range of patterns just as accessible?
Context is one clever way of doing that. Building on modular patching of patterns – the very essence of what made Max and Pd useful in the first place – Context lets you combine bits and pieces together to create sequencers around your own ideas. And a whole lot of ideas are possible here, from making very powerful sequencers quick to build, LEGO-style, to allowing open-ended variations to satisfy the whims of more advanced musicians and patchers.
Where this gets interesting in Pd specifically is that you can build little feedback networks, creating anything from simple loopers up to fancy generative or interactive music machines.
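The feedback idea can be sketched in a few lines of Python – a conceptual stand-in for Pd patching, not Context’s actual objects: a module emits an event, and its output is routed back into its own input to form a loop.

```python
def make_looper(pattern):
    """A tiny 'module': each call consumes its own previous output
    (fed back as `state`) and emits the next step. Wiring a module's
    output back to its input -- as you would with Pd connections -- is
    what turns a one-shot pattern into a loop."""
    def step(state):
        index = (state + 1) % len(pattern)
        return index, pattern[index]
    return step

looper = make_looper(["kick", "hat", "snare", "hat"])
state, events = -1, []
for _ in range(6):  # run the feedback loop for six ticks
    state, event = looper(state)
    events.append(event)
# events: kick, hat, snare, hat, kick, hat
```

Swap the simple wraparound for probabilistic or input-driven rules and the same wiring becomes a generative or interactive machine – which is roughly the leap Context’s modules let you make by patching.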
It’s all just about sequencing, so if you’re a Pd nut, you can combine this with existing patches, and if not, you can use it to sequence other hardware or software instruments.
At first I thought this would be a simple set of Pd patches or something like that, but it’s really deep. There’s a copious manual, which even holds new users by the hand (including with some first-time issues like the Pd font being the wrong size).
You combine patches graphically, working with structures for timing and pattern. But you can control them not only via the GUI, but also via a text-based command language, or – in the latest release – using hardware. (They’ve got an example set up that works directly with the Novation Launchpad.)
So live coder, live musician, finger drummer, whatever – you’re covered.
There are tons of examples and tutorials, plus videos in addition to the PDF manual. (Even though I personally like reading, that gives you some extra performance examples to check out for musical inspiration!)
Once you build up a structure – as a network of modules with feedback – you can adapt Context to a lot of different work. It could drive the timing of a sample player. It could be a generative music tool. You could use it in live performance, shaping sound as you play. You might even take its timing database and timeline and apply it to something altogether different, like visuals.
But impressively, while you can get to the core of that power if you know Pd, all of this functionality is encapsulated in menus, modules, and commands that mean you can get going right away as a novice.
In fact… I really don’t want to write any more, because I want to go play with this.
Here’s an example of a performance all built up:
And you can go grab this software now, free (GPL v3) — ready to run on your Mac, Windows PC, Linux machine, or Raspberry Pi, etc.:
Patching music and visuals is fun, but it helps to learn from other people. With everything from apps (Audulus) to modulars (Softube, VCV Rack) to code and free software (Pd, SuperCollider, Bela), patchstorage is like a free social network for actually making stuff.
It’s funny that we needed international scandal, political catastrophe, numerous academic studies of depression, and everyone’s data getting given away before figuring it out – Facebook isn’t really solving all our problems. But that opens up a chance for new places to find community, learn from each other, and focus on the things we love, like making live music and visuals.
Enter Patchstorage. Choose a platform you’re using – or maybe discover one you aren’t. (Cabbage, for instance, is a free platform for making music software based on Csound.)
If you’re a newcomer, you can attempt to just load this up and make sound. And a lot of these patches are made for free environments, meaning you don’t have to spend money to check them out. If you’re a more advanced user, of course, poking through someone else’s work can help you get outside your own process. And there are those moments of – “oh, I didn’t know this did that,” or “huh, that’s how that works.”
There are also, naturally, a ton of creations for VCV Rack, the free and open source Eurorack modular emulation we’ve been going on about so much lately.
Oh, yeah, and — another thing. This doesn’t use Facebook as its social network. Instead, chats are powered by gamer-friendly, Slack-like chat client Discord. That means a new tool to contend with when you want to talk about patches, but it does mean you get a focused environment for doing so. So you won’t be juggling your ex, your boss, some spammers, and propaganda bots in the middle of an environment that’s constantly sucking up data about you.
The free and open VCV Rack software modular platform already is full of a rich selection of open source modules. Now, Rack users get first access to the newest Mutable Instruments modules – and your $20 even goes to charity.
Mutable Instruments is unique among modular makers partly in that its modules are open source – and partly in that they’re really exceptionally creative and sound amazing.
Mutable’s Olivier Gillet was an early adopter of the open source model for music hardware (along with CDM and our 2010 MeeBlip), starting with the classic Shruthi-1 desktop synth. But it’s really been in modular that Mutable has taken off. Even as Eurorack has seen a glut of modules, Olivier’s creations – like Braids, the Macro Oscillator, Clouds, and others – have stood out. And the open source side of this has allowed creative mods, like the Commodore 64 speech synthesis firmware we saw recently.
But Rack, by providing an open software foundation to run modules on, has opened a new frontier for those same modules, even after they’re discontinued. Rack’s ecosystem is a mix of free and open modules and proprietary paid modules. Here, you get a combination of those two ideas.
The software. (Macro Oscillator 2, “Audible Instruments,” in VCV Rack.)
Mutable’s Plaits, a successor to the original multi-functional Braids oscillator, isn’t out yet. And its source will be delayed a bit after that. But for twenty bucks, you get Plaits (dubbed Macro Oscillator 2 inside VCV) ahead of release, opening up a wonderful new source for pitched and percussion sounds. Most of your money even goes to charity. (Actually, I’m happy to support these developers, too, but sure!) It’s one of the more versatile sound sources anywhere.
The idea is, would-be hardware purchasers get an advance test. And everyone gets a version they can run in software for convenience. Either way, all synth lovers win, pretty much. Synthtopia has a similar take:
Maybe, maybe not – but on another level, even if this is just the model for Mutable’s stuff, it’s already good news for modular fans and VCV Rack users.
And let’s not forget what it all sounds like. Here’s a mesmerizing, tranquil sound creation by Leipzig-based artist Synthicat, showing off Plaits / Macro Oscillator 2:
Another bonus of VCV Rack support for studio work – you get multiple instances easily, without buying multiple modules. So I can imagine a lot of people using elaborate modular setups they could never afford in the studio, then buying a smaller Eurorack rig for live performance use, for example. Check out Synthicat’s music at his Bandcamp site:
You’ll find a bunch of sound models available, from more traditional FM and analog oscillations to granular to percussive to, indeed, some of that weird speech synthesis business we mentioned. You also get a new interface with more flexible control and CV modulation, unifying what are in fact many different models of sound production into a single, unified, musical interface.
Loads and loads of models. Pop them up by right-clicking, or check the different icons on the center of the module panel.
As for Plaits hardware, here’s some more beautiful music:
When Mutable Instruments releases a new Eurorack module, its source code is kept closed to limit the proliferation of opportunistic “DIY” clones at a time when there is a lot of demand for the module and to avoid exposing dealers to canceled pre-orders. After several months, a second production run is finished and the source code is released.
In a collaboration between VCV and Mutable Instruments, we allow you to test these new modules before their source code is publicly available with the “Audible Instruments Preview” plugin.
We don’t intend to profit from this collaboration. Instead, 80% of sales are donated to the Direct Relief (https://www.directrelief.org/) Humanitarian Medical Aid charity organization. The price exists to limit widespread distribution until each module is mature enough to be merged into Audible Instruments.
I have no doubt this will get hardware people hooked on the software, software people hooked on the hardware, and everybody synth-y and happy.