“Underground romantic engineering” is the motto of up-and-coming gearmaker SOMA Laboratory. Here’s a look at the Russian-Polish foundry creating wild new electronic instruments – and their latest creation.
Music store All For DJ has become a badly needed new hub for Moscow’s electronic producers. Despite the name, they’re host to all kinds of electronic instruments. I met the folks operating the retailer earlier this summer, and it’s an oasis – easy access to lots of gear, which until recently had sometimes been a challenge in Russia, plus tons of information (including a Russian-language blog).
And they’re producing documentaries, like this one looking at SOMA. It’ll definitely be up CDMers’ alley – Ukrainian-Russian creator Vlad Kreimer is the kind of mad scientist experimental musician we love. Now, his Lyra-8 has become a sought-after, one-of-a-kind instrument, and he’s teaming up with Vyacheslav Grigoriev (previously of VG-Line). (Vyacheslav joined me last year on a panel for Synthposium in Moscow, talking about his upbringing in electronics in the USSR.) The operation is growing, with operations both in Russia and Poland, as the electronic music community embraces exactly this sort of strange.
The film is a beautiful and intimate portrait of the creators and their ideas (subtitled in English):
The Lyra synth is like a “book, album, work of art that contains a message,” says Vlad. And there’s a new tome coming – the Pulsar-23, which brings the same ethos to drum machines. Its release is eagerly anticipated, with photos (seen here) showing it in final prototype state, about to hit production. (Advance buyers are apparently bugging them for that.)
Vlad showed off the new box at Superbooth in Berlin in May (which I missed, ironically, as I had to fly to St. Petersburg – positions keep swapping):
Selekta.fm did a hands-on, too, and – wow, that sound.
Selekta also did a full interview:
I hope I get to experience this drum machine in person soon, as well. Best wishes to SOMA with the project. Meanwhile, behold:
Virtual ANS from prolific omni-platform developer Alexander Zolotov brings back spectral synthesis like it’s the mid-century USSR. But it also future-proofs that tech – full Android and iOS (plus desktop) support, and now a version that’s polyphonic and MIDI playable.
Alexander Zolotov can single-handedly make a mobile device useful. On my new Android phone, it was his stuff I grabbed first – and, well, last. Once you’ve got a tracker like SunVox that runs anywhere, what more do you need?
And for anyone bored with the world of knobs and subtractive synthesis (yawn), enter the eerily beautiful alien sound world of the ANS – an alternate timeline of synth history in which sound is painted as well as made electrical. The creation of Russian engineer Evgeny Murzin, the ANS used a unique analog-optical hybrid approach. Borrowing from the graphic scores used in early film audio, waveforms were optically produced. It’s What You See Is What You Get For Sound – the spectrogram is the interface as well as a representation of what you hear. This technique is what creates the gorgeous, otherworldly timbres of Tarkovsky’s Solaris – and now it can be on your phone.
The original ANS – its name drawn from the initials of Alexander Nikolayevich Scriabin, the synesthesia-experiencing esoteric composer – used a series of optical discs. It’s easier to do this in software, of course. Everything works in real time, you can have as many pure tone generators as you like (since you won’t just run out of optical-mechanical wheels), and you can convert to and from digital files of both images and sounds.
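It takes surprisingly little code to sketch the core idea in software. Here’s a rough Python illustration of the principle (my own sketch, not Zolotov’s code): treat each row of a grayscale image as one sine-wave partial on a logarithmic pitch scale, scan the columns left to right as time, and let pixel brightness set amplitude.

```python
import numpy as np

def image_to_sound(image, duration=2.0, sr=44100, f_lo=55.0, f_hi=7040.0):
    """ANS-style spectral playback sketch: each row of a grayscale image
    (values 0..1) drives one sine partial, rows mapped top-to-bottom onto
    a high-to-low logarithmic frequency scale, columns mapped onto time."""
    n_rows, n_cols = image.shape
    t = np.linspace(0, duration, int(sr * duration), endpoint=False)
    # Logarithmic pitch scale, in the spirit of the original's microtonal tone bank
    freqs = f_hi * (f_lo / f_hi) ** (np.arange(n_rows) / max(n_rows - 1, 1))
    # Stretch each pixel column across its slice of the output time axis
    col_idx = np.minimum(np.arange(t.size) * n_cols // t.size, n_cols - 1)
    out = np.zeros_like(t)
    for row, f in enumerate(freqs):
        amp = image[row, col_idx]          # per-sample amplitude envelope
        if amp.max() > 0:
            out += amp * np.sin(2 * np.pi * f * t)
    peak = np.abs(out).max()
    return out / peak if peak > 0 else out

# A single bright pixel row produces a pure tone:
img = np.zeros((8, 16))
img[3, :] = 1.0
audio = image_to_sound(img, duration=0.5)
```

The real instrument (and the app) adds far more – phase handling, smoothing between columns, hundreds of partials – but the spectrogram-as-score principle is exactly this.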
Sound from pictures, pictures from sounds.
Now with MIDI support on both Android and iOS (not to mention desktop OSes).
ANS 3.0 is a major update that moves the whole affair from fascinating proof of concept to full-featured instrument. The engine is now polyphonic, and you can play your creations via MIDI – including via external MIDI controllers.
Adding MIDI controllers actually makes for a wild instrument:
Oh, and remember how I just said that AUv3 was the way forward on iOS? Well, Sasha is of course supporting AUv3 – as he’s supported Audiobus, IAA, JACK, ALSA, OSS, MME, DirectSound, and ASIO in the past. (That long list of formats comes from supporting Mac, Windows, Linux, Android, and iOS all at once.)
And there’s more. On iOS, you get high-res support and MIDI. Android 6+ gets MIDI support. Linux gets multitouch support. Files are accessible in the file system on both iOS and Android – including all those project, image, and sound files. There are more audio export options, new brushes, new lighten and darken layering modes like you’d expect in Photoshop, and lots of shortcuts. Check the full changelog:
Of course, because it runs on every platform (well, every modern platform), you can sketch an idea on your Android phone, move to iPad and work some more, then load it onto your PC and drop it into a DAW.
Frankly, I think it’s more exciting than anything from Apple this week, but I am impossibly biased toward this esoterica so … that goes without saying.
SOMA laboratory and enigmatic “romantic” engineer Vlad Kreimer have already delivered the strange and wonderful LYRA “organismic” synths. Next up: a drum machine.
The PULSAR-23 takes on that same “organismic” design philosophy, complete with rich, layered, deep-space-exploration sounds. With a full 23 independent modules, it turns those powers to drum machine design.
And maybe even “drum machine” doesn’t quite do this justice – you could just as easily imagine this as a percussion-heavy synthesizer. There are four independent loop recorders that capture trigger events, which you can clock into a single groove or leave to independent timing for more experimental rhythms. You can even set each channel to sustain, so this is a noise/drone synth, too, not just a dancefloor object.
The PULSAR-23 was first announced last year, but now we get to see it move into its production form factor and – wow, it looks great:
It could be a gorgeous standalone machine, or you could see it as part of a larger modular rig. Full specs:
– 4 drum channels: Bass drum, Bass\Percussion, Snare drum, Cymbals\Hi-Hat
– 4 envelope generators with the unique ability to generate a sustain for the drum channels, turning them into noise\drone synthesizers.
– 4 independent loop recorders with the option for individual clocking. They record triggering events, not audio.
– Clock generator with an array of dividers as a very powerful tool for rhythm synthesis.
– Wide range LFO (0.1 – 5000Hz) with variable waveform.
– Shaos – a unique pseudo-random generator based on shift registers with 4 independent outputs, sample and hold and other cool features.
– FX processor with CV control incl. CV control of the entire DSP’s sample rate.
– 2 CV-controlled gates.
– 2 CV-controlled VCAs.
– 2 controllable inverters.
– 3 assignable attenuators.
– Dynamic CV sensors for CV generation, etc.
Plus there’s MIDI control and sync, in addition to all the CV options. And if you want the really important specs – 52 knobs, 11 switches, 100 inputs and outputs for patching.
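SOMA hasn’t published how Shaos works internally, but the classic way to get pseudo-random gates out of shift registers is a linear-feedback shift register (LFSR). Here’s a generic Python sketch of that textbook circuit, with four gate outputs tapped from different bits – an illustration of the principle only, not SOMA’s actual design:

```python
class ShiftRegisterRandom:
    """Hypothetical Shaos-style generator: a 16-bit Fibonacci LFSR.
    Tap positions correspond to the maximal-length polynomial
    x^16 + x^14 + x^13 + x^11 + 1, so the state cycles through
    65535 values before repeating."""

    TAPS = (15, 13, 12, 10)

    def __init__(self, seed=0xACE1):
        self.state = seed & 0xFFFF  # any nonzero 16-bit seed works

    def clock(self):
        """Advance one step: XOR the tap bits, shift the result in,
        and read four 'independent' gate outputs from different bits."""
        fb = 0
        for tap in self.TAPS:
            fb ^= (self.state >> tap) & 1
        self.state = ((self.state << 1) | fb) & 0xFFFF
        return tuple((self.state >> b) & 1 for b in (0, 4, 8, 12))

lfsr = ShiftRegisterRandom()
pattern = [lfsr.clock()[0] for _ in range(16)]  # a pseudo-random trigger pattern
```

Clock that from a master clock through dividers and you get exactly the kind of repeating-but-unpredictable trigger streams that make shift-register randomness so musical.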
There’s also “live circuit bending” – whatever that entails, exactly.
This is the video from June 2018, when the PULSAR-23 was still just a bunch of guts – no pretty red case – but it at least gives you an idea of the sound possibilities.
No lie here: SOMA will be way at the top of my list of gear to check out at Superbooth. I think this is poised to be a 2019 highlight.
Previously, we checked this out from SOMA this spring:
Techno, without all those pesky human producers? Petr Serkin’s Eternal Flow is a generative radio station – and even a portable device – able to make endless techno and deep house variations automatically.
You can run a simple version of Eternal Flow right in your browser:
Recorded sessions are available on a SoundCloud account, as well:
But maybe the most interesting way to run this is in a self-contained portable device. It’s like a never-ending iPod of … well, kind of generic-sounding techno and deep house, depending on mode. Here’s a look at how it works; there’s no voiceover, but you can turn on subtitles for additional explanation:
There are real-world applications here: apart from interesting live performance scenarios, think workout dance music that follows you as you run, for example.
I talked to Moscow-based artist Petr about how this works. (And yeah, he has his own deep house-tinged record label, too.)
“I used to make deep and techno for a long period of time,” he tells us, “so I have some production patterns.” Basically, take those existing patterns, add some randomization, and instead of linear playback, you get material generated over a longer duration with additional variation.
There was more work involved, too. While the first version used one-shot snippets, “later I coded my own synth engine,” Petr tells us. That means the synthesized sounds save on sample space in the mobile version.
It’s important to note this isn’t machine learning – it’s good, old-fashioned generative music. And in fact this is something you could apply to your own work: instead of just keeping loads and loads of fixed patterns for a live set, you can use randomization and other rules to create more variation on the fly, freeing you up to play other parts live or make your recorded music less repetitive.
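In code, that rule-based approach can be tiny. Here’s a minimal Python sketch of the idea (the names and probabilities are my own illustration, not Petr’s engine): store one fixed 16-step pattern, then randomly toggle a few steps each bar so playback never repeats exactly.

```python
import random

BASE_HATS = [1, 0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 0, 1, 0, 1, 0]  # one fixed 16-step pattern

def vary(pattern, flip_prob=0.15, rng=random):
    """Return a new bar: mostly the stored pattern, with a few steps
    randomly toggled so no two bars stay identical forever."""
    return [step ^ 1 if rng.random() < flip_prob else step
            for step in pattern]

def endless(pattern, bars, seed=None):
    """Generate `bars` bars of non-linear 'playback' from one pattern.
    A seed makes the stream reproducible, like a generative radio preset."""
    rng = random.Random(seed)
    return [vary(pattern, rng=rng) for _ in range(bars)]

performance = endless(BASE_HATS, bars=4, seed=23)
```

Swap the toggle rule for probability weights per step, or per-genre constraints, and you’re most of the way to the live-set trick described above: fewer stored patterns, more on-the-fly variation.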
And this also points to a simple fact: machine learning doesn’t always generate the best results. We’ve had generative music algorithms for many years, which simply produce results based on simple rules. Laurie Spiegel’s ground-breaking Music Mouse, considered by many to be the first-ever software synth, worked on this concept. So, too, did the Brian Eno – Peter Chilvers “Bloom,” which applied the same notion to ambient generative software and became the first major generative/never-ending iPhone music app.
By contrast, the death metal radio station I talked about last week works well partly because its results sound so raunchy and chaotic. But it doesn’t necessarily suit dance music as well. Just because neural network-based machine learning algorithms are in vogue right now doesn’t mean they will generate convincing musical results.
I suspect that generative music will freely mix these approaches, particularly as developers become more familiar with them.
But from the perspective of a human composer, this is an interesting exercise not necessarily because it puts yourself out of a job, but that it helps you to experiment with thinking about the structures and rules of your own musical ideas.
And, hey, if you’re tired of having to stay in the studio or DJ booth and not get to dance, this could solve that, too.
AI in music is as big a buzzword as in other fields. So now’s the time to put it to the test – to reconnect to history, human practice, and context, and see what holds up. That’s the goal of the Gamma_LAB AI in St. Petersburg next month. An open call is running now.
Machine learning has trended so fast that there are disconnects between genres and specializations. Mathematicians or coders may get going on ideas without checking whether they work for musicians or composers or musicologists – and the other way around.
I’m excited to join as one of the hosts with Gamma_LAB AI partly because it brings together all those possible disciplines, puts international participants in an intensive laboratory, and then shares the results in one of the summer’s biggest festivals for new electronic music and media. We’ll make some of those connections because those people will finally be together in one room, and eventually on one live stage. That investigation can be critical, skeptical, and can diverge from clichéd techniques – the environment is wide open and packed with skills from an array of disciplines.
Natalia Fuchs, co-producer of GAMMA Festival, founder of ARTYPICAL and media art historian, is curating Gamma_LAB AI. The lab will run in May in St. Petersburg, with an open call due this Monday April 8 (hurry!), and then there will be a full AI-stage as a part of Gamma Festival.
Image: Helena Nikonole.
Invited participants will delve into three genres – baroque, jazz, and techno. The idea is not just a bunch of mangled generative compositions, but a broad look at how machine learning could analyze deep international archives of material in these fields, and how the work might be used creatively as an instrument or improviser. We expect participants with backgrounds in musicianship and composition as well as in coding, mathematics, and engineering – plus people in between, and researchers and theorists.
To guide that work, we’re working to set up collaboration and confrontation between historical approaches and today’s bleeding-edge computational work. Media artist Helena Nikonole is the Lab’s conceptual artist. She will bring her interests in connecting AI with new aesthetics and media, as she has exhibited from ZKM to CTM to Garage Museum of Contemporary Art. Dr. Konstantin Yakovlev joins as one of Russia’s leading mathematicians and computer scientists working at the forefront of AI, machine learning, and smart robotics – meaning we’re guaranteed some of the top technical talent. (Warning: crash course likely.)
Russia has an extraordinarily rich culture of artistic and engineering exploration, in AI as elsewhere. Some of that work was seen recently at Berlin’s CTM Festival exhibition. Helena, for her part, has created work that, among other things, applies machine learning to unraveling the structure of birdsong (with a bird-human translator perhaps on the horizon), and has combined hacked Internet-connected CCTV cameras with voice synthesis to meld machine learning-generated sacred texts with … well, some guys trapped in an elevator. See below:
I’m humbled to get to work with them and in one of the world’s great musical cities, because I hope we also get to see how these new models relate to older ones, and where gaps lie in music theory and computation. (We’re including some musicians/composers with serious background in these fields, and some rich archives that haven’t been approached like this ever before.)
I came from a musicology background, so I see in so-called “AI” a chance to take musicology and theory closer to the music, not further away. Google recently presented a Bach “doodle” – more on that soon, in fact – with the goal of replicating some details of Bach’s composition. To those of us with a music theory background, some of the challenges of doing that are familiar: analyzing music is different from composing it, even for the human mind. To me, part of why it’s important to attempt working in this field is that there’s a lot to learn from mistakes and failures.
It’s not so much that you’re making a robo-Bach – any more than your baroque theory class will turn all the students into honorary members of the extended Bach family. (Send your CV to your local Lutheran church!) It’s a chance to find new possibilities in this history we might not have seen before. And it lets us test (and break) our ideas about how music works with larger sets of data – say, all of Bach’s cantatas at once, or a set of jazz transcriptions, or a library full of nothing but different kick drums, if you like. This isn’t so much about testing “AI,” whatever you want that to mean – it’s a way to push our human understanding to its limits.
Oh yes, and we’ll definitely be pushing our own human limits – in a fun way, I’m sure.
A small group of participants will be involved in the heart of St. Petersburg from May 11-22, with time to investigate and collaborate, plus inputs (including at the massive Planetarium No. 1).
But where this gets really interesting – and do expect to follow along here on CDM – is that we will wind up in July with an AI mainstage at the globally celebrated Gamma Festival. Artist participants will create their own AI-inspired audiovisual performances and improvisations, acoustic and electronic hybrids, and new live scenarios. The finalists will be invited to the festival and fully covered in terms of expenses.
So just as I’ve gotten to do with partners at CTM Festival (and recently with southeast Asia’s Nusasonic), we’re making the ultimate laboratory experiment in front of a live audience. Research, make, rave, repeat.
The open call deadline is fast approaching if you think you might want to participate.
Participation at GAMMA_LAB AI is free for the selected candidates. Send a letter of intent and portfolio to firstname.lastname@example.org by end of day April 8, 2019. Participants have to bring personal computers of sufficient capacity to work on their projects during the Laboratory. Transportation and living expenses during the Laboratory are paid by the participants themselves. The organizers provide visa support, as well as the travel of the best Lab participants to GAMMA festival in July.
Electronics are redefining what “sound” means – by remapping other signals into our audible spectrum. The latest is SOMA’s invention Ether, a “microphone” for electromagnetic fields. If that sounds familiar, this one’s a bit different than some EMF devices that came before.
Here’s a look at the new Ether. It’s a new creation from SOMA Laboratory, the same Russian instrument builder who has given us the gorgeous “organismic” LYRA synths. (I covered them in the Russian Synthposium write-up last year.)
First, let’s talk electromagnetic fields. Just like gravity, these fields extend throughout nature. Since we have electricity and electrically charged stuff pulsing all around us, there’s a lot happening in the electromagnetic field. But we can’t perceive that, because our bodies lack sense organs equipped to do so – well, until now, that is. Now we’ve invented devices that translate those fields into things we can sense. Think of it as expanded sensory perception for the transhumanist, technologically augmented age.
Various artists have built electromagnetic detectors that you can use for music – both by listening directly with headphones, and by letting you plug that signal into a recorder or use it in live performance. That includes the superb Elektrosluch by LOM Label and artist Jonáš Gruska, who both makes these instruments available and has built a body of works around them on his label (both by him and by invited artists).
Part of what makes Jonáš special, though, is his interest in delicate, focused sounds – something he applies to his acoustic microphones, as well.
So here’s where the SOMA Ether becomes interesting.
The invention of engineer Vlad Kreimer, the Ether is a portable EMF device. But it’s much more sensitive than other offerings – making it well suited to picking up larger ambiences in recording or live performances. It works on a slightly different technique, and yields different results.
Vlad himself sends along an explanation to make this clearer:
ETHER is not just an inductive sniffer like some projects you can easily find online. A simple low-frequency inductive sniffer will be silent in most places that are full of sounds in the video. Such devices need to be placed close to an emitting source and will not work on a street. All they contain is a coil and a low-frequency amplifier. In comparison, ETHER has a regenerating circuit and a demodulator, making it an actual radio wave receiver, not just an amplifier of low-frequency magnetic fields. However, ETHER can perceive the low-frequency magnetic fields as well. But, honestly, if your goal is to scan objects in close proximity (0-20 centimetres), devices like Elektrosluch will work cleaner and more focused due to its narrow band and lower sensitivity. ETHER was designed to be a part of your walks in the city and may even pick up sounds in a forest or at the seashore (I have such experience). Elektrosluch was designed for using over a table full of gear. Also, ETHER can perceive the electric component of the radiation as well, capturing radiation that is far above the audio range and is much more sensitive. Therefore, it has a significantly different design, functions and implementation than a simple inductive sniffer even if in some cases their functions can overlap.
I can imagine devoted EMF fans carrying both the Ether and something like an Elektrosluch to capture different sounds, a bit like photographers carry multiple lenses. (Oh yes – this addiction is about to run deep. Or you can think about the difference between a double bass and an oboe.)
As you can hear in the demo, you get these sweeping, overlapping waves of EMF with some really fantastic distortion – punk electromagnetism.
120 EUR, available to order now. (VAT and shipping are additional.)
A great live set brews up new musical directions before your ears. It’s a burst of creativity and energy that’s distinct from what happens alone in a studio, with layers of process. From Liverpool (Madeline T Hall) and Moscow (Nikita Zabelin x Xandr.vasiliev), here are two fine examples to take you into the weekend.
Acid-tinged synths unfold over this brilliant half hour from M T Hall (pictured, top), at a party hosted earlier this year by HMT Liverpool x Cartier 4 Everyone:
I love that this set feels so organic and colors outside the lines, without ever losing forward drive or focus. It morphs from timbre to timbre, genre to genre. So just when it seems like it’s going to be a straight-ahead acid set (that’s apparently not actually a 303, by the way), it proceeds to perpetually surprise.
I think people are afraid to create contrast in live sets, but each shift here feels intentional and confident – and so the result is, you won’t mistake this for someone else’s set.
Check out her artist site; she’s got a wildly diverse set of creative endeavors, including immersive drawing and sound performances, and work as an artist covering sculpture, sound, video and installation. (Madeleine, if you’re reading this, hope we can feature your work in more depth! I just can’t wait to release this particular set first!)
Darker (well, and redder, thanks to the lighting), but related in its free-flowing machine explorations, we’ve got another set from Moscow from this month:
It’s the project of Nikita Zabelin x Xandr.vasiliev, at Moscow’s Pluton club, a repurposed factory building giving a suitably raw industrial setting.
The two sets feel connected to me, though. Dark as it is, the duo isn’t overly serious – weird and whimsical sounds still bubble out of the shadows. And it shows that grooves and free-form sections can intermix successfully. I got to play after this duo in St. Petersburg, and you really do get the sense of open improvisation.
Facing off at Moscow’s Pluton.
xandr aka Alexander has a bunch more here:
That inspires me for the coming days. Have a good weekend, everybody.
Deep in the Arctic Circle, the USSR was drilling deeper into the Earth than anyone before. One artist has combined archaeology and invention to bring its spirit back in sound.
Meet SG-3 (СГ-3) — the Kola Superdeep Borehole. You know when kids would joke about digging a hole to China? Well, the USSR’s borehole got to substantial depths – 12,262 m (over 40,000 ft) at the time of the USSR’s collapse.
The borehole was so epic – and the Soviets so secretive – that it has inspired legends of seismic weapons and even demonic drilling. (A YouTube search gets really interesting – like some people who think the Soviets actually drilled into the gates to Hell.)
Artist Dmitry Morozov – ::vtol:: – evokes some of that quality while returning to the actual evidence of what this thing really did. And what it did is already spectacular – he compares the scale of the project to launching humans into space (well, sort of in the opposite direction).
vtol’s installation 12262 is the perfect example of how sound can be made material, and how digging into history can produce futuristic, post-contemporary speculative objects.
The two stages:
Archaeology: Dima absorbed SG-3’s history and lore, and spent years buying up sample cores at auctions as they were sold off. And twice he visited the remote, ruined site himself – once in 2016, and then back in July with his drilling machine. He even located a punched data tape from the site, though of course it’s difficult to know what it contains. (The investigation began with the Dark Ecology project, a three-year curatorial/research/art project bringing together partners from Norway, Russia, and across Europe, and still bearing this sort of fascinating fruit.)
Invention: The installation itself is a kinetic sound instrument, reading the coded information from the punch tape and operating miniature drilling operations, working on actual core samples. The sounds you hear are produced mechanically and acoustically by those drills.
As usual, Dima lists his cooking ingredients, though I think the sum is uniquely more than these individual parts. It’s as he describes it, a poetic, kinetic meditation, evocative both intellectually and spiritually. That said, the parts:
Commission by NCCA-ROSIZO (National Centre for Contemporary Arts), special for TECHNE “Prolog” exhibition, Moscow, 2018.
Curators: Natalia Fuchs, Antonio Geusa. Producer: Dmitry Znamenskiy.
The work was also a collaboration with Gallery Ch9 (Ч9) in Murmansk. That’s itself something of an achievement; it’s hard enough to find media art galleries in major cities, let alone remote Russia. (That’s far enough northwest in Russia that most of Finland and all of Sweden are south of it.)
But the alien-looking object also got its own trip to the site, ‘performing’ at the location.
It’s appropriate that this would happen in Russia. Cosmism visionary Nikolai Fyodorovich Fyodorov and his ideas about creating immortality by resurrecting ancestors may seem bizarre today. But translate that to media art, which threatens to become stuck in time when not informed by history. (Those who do not learn from history are doomed to make installation art that looks like it came from a mid-1990s Ars Electronica or Transmediale, forever, I mean.) To be truly futuristic, media art has to have a deep understanding of technology’s progression, its workings, and all the moments in the past that were themselves ahead of their time. That is, maybe we have to dig deep into the ground beneath us, dig up our ancestors, and construct the future atop that knowledge.
At Spektrum Berlin this weekend, there’s also a “materiality of sound” project. Fellow Moscow-based artist Andrey Smirnov will create an imaginative new performance inspired by Theremin’s infamous KGB listening device of the 1940s – also new art fabricated from Soviet history – joined by a lineup of other artists exploring similar themes making sound material and kinetic. (Evelina Domnitch and Dmitry Gelfand, Sonolevitation, Camera Lucida, Eleonora Oreggia aka Xname share the bill.)
To me, these two themes – materiality, drawing from kinetic, mechanical, optical, and acoustic techniques (and not just digital and analog), and archaeological futurism, employing deep historical inquiry that is in turn re-contextualized in forward-thinking, speculative work – offer tremendous possibility. They sound like more than just zeitgeist-friendly buzzwords (yeah, I’m looking at you, blockchain). They sound like something to which artists might even be happy to devote lifetimes.
For another virtual trip to the borehole, here’s Rosa Menkman’s film on a soundwalk at the site in 2016.
Related (curator Natalia Fuchs, interviewed before, also curated this work):
It’s Moscow’s quirkier, playful side that’s probably easiest for us foreigners to miss. But Kate Shilonosova (Kate NV) is earning an international audience for her introspective, surrealist whimsy, and one that’s well-deserved.
Kate NV’s music is beautifully minimal and reflective. The Japan tour makes perfect sense – there’s a distinctively Japanese-compatible electronic aesthetic here. (The poppier nods to minimalism and extensive use of percussion remind me a bit of Cornelius, as do the hand-drawn graphics everywhere.) But her approach to found sound and sampling is equally enjoyable when taken in live. Kate was another highlight for me of Synthposium, and emblematic of Moscow’s experimental, open-minded, live performance-oriented electronic scene. Her own background is in punk and guitars, and she brings that musicianship and improvisational spirit even to this very different sonic idiom.
Live, she works with mics and small percussion and sampling (on various Novation gear and Ableton Live), pulling in elements in a way that’s accessible and fluid. And yeah, she’s the kind of producer who keeps a glockenspiel by her computer in her home studio.
She’s been picked up by RVNG Intl, the Brooklyn-based label with a particularly sharp nose for musical inventiveness. And her LP is terrifically charming. It’s also accompanied by cheery, trippy films from Moscow director Sasha Kulak. Watch “дуб OAK” (each is titled in a combination of the Russian and English equivalent of a word):
— or the extended film “для FOR”:
These films are also available in a generative form, which you can watch on her website – click, and you get different variations:
This project is based on the works of Moscow conceptualist Victor Pivovarov – more specifically, on his 1975 series “Project for the Lonely Man.” The movie tells the story of one lonely man’s day. Every time the button is pressed, a new, slightly different day is generated from the same routine actions – creating the sense that all regular days are the same, and yet each is, in its own way, very different.