Before modulars became a product, some of the first electronic synthesis experiments made use of test equipment – gear intended to make sound, but not necessarily musically. And now that approach is making a comeback.
Hainbach, the Berlin-based experimental artist, has been helping this time-tested approach to sound reach new audiences.
I actually have never seen a complete, satisfying explanation of the relationship of abstract synthesis, as developed by engineers and composers, to test gear. Maybe it’s not even possible to separate the two. But suffice it to say, early in the development of synthesis, you could pick up a piece of gear intended for calibration and testing of telecommunications and audio systems, and use it to make noise.
Why the heck would you do that now, given the availability of so many options for synthesis? Well, for one – until folks like Hainbach and me make a bunch of people search the used market – a lot of this gear is simply being scrapped. Since it’s heavy and bulky, it ranges from cheap to “if you get this out of my garage, you can have it” pricing. And the sound quality of a lot of it is also exceptional. Sold to big industry back when slashing prices on this sort of equipment wasn’t essential, a lot of it feels and sounds great. And just like any other sound design or composition exercise that begins with finding something unexpected, the strange wonderfulness of these devices can inspire.
I got a chance to spend a few days playing with the Waveform Research Centre at Rotterdam’s WORM, a strange and wild collection of these orphaned devices lovingly curated by Dennis Verschoor. And I got sounds unlike anything I was used to. It wasn’t just the devices and their lovely dials that made that possible – it was also the unique approach required when the normal envelope generators and such aren’t available. Human creativity does tend to respond well to obstacles.
Whether or not you go that route, it is worth delving into the history and possibilities – and Hainbach’s video is a great start. It might at the very least change how you approach your next Reaktor patch, SuperCollider code, synth preset, or Eurorack rig.
Gen X and Y just got their Beatles Anthology, basically – and it’s fantastic. Radiohead remind us why we love them with nearly two gigs of demos ripped from (seriously) MiniDiscs.
Maybe it’s taking Radiohead back to the “just a band” phase, but there’s something gorgeous about these stripped-down and earnest productions. And if you don’t want to burden yourself with the 1.8GB, you can stream them to get a rough impression of one of the biggest bands of their generation when … they were developing ideas and didn’t bother to tune their guitars.
Live sets in there, too, sketches, the lot…
The amazing thing about this story is, they evidently aren’t kidding about being “hacked” – it seems someone really did try to ransom all these recordings. (Maybe. It’s certainly a believable possibility.)
Of course, unlike the previous generation’s demos, the 90s produced recordings that were actually half-decent. You’ll hear some charming sounds as mics are moved about, but the quality is pretty crisp – and you get an in-the-room presence missing in the umpteen times we’ve heard Radiohead’s albums and then various covers.
Heck, even though I run a site that celebrates technology, you might just say the band is even a bit better in this raw, punk format, without all the studio work. There’s just way too much to listen to all at once, but £18 gets you what in the 90s we thought was a big file (two gigs is a lot of dialup download time).
Someone could say something about the value of music here, except Radiohead already have given away albums, so really, this is a slight increase of value? I guess?
Enjoy. And maybe dust off your MiniDisc recorder and go make something.
Following nerve damage, Icelandic composer/producer/musician Ólafur Arnalds was unable to play the piano. With his ‘Ghost Pianos’, he gets that ability back, through intelligent custom software and mechanical pianos.
It’s moving to hear him tell the story (to the CNN viral video series) – with, naturally, the obligatory shots of Icelandic mountains and close-up images of mechanical pianos working. No complaints:
This frames accessibility in terms any of us can understand. Our bodies are fragile, and indeed piano history is replete with musicians who lost the use of one or both hands and had to adapt. Here, an accident cost him dexterity in his left hand, so he needed a way to connect one hand to more parts.
And in the end, as so often is the case with accessibility stories and music technology, he created something that was more than what he had before.
With all the focus on machine learning, a lot of generative algorithmic music continues to work more traditionally. That appears to be the case here – the software analyzes incoming streams and follows rules and music theory to accompany the work. (As I learn more about machine learning, though, I suspect the combination of these newer techniques with the older ones may slowly yield even sharper algorithms – and challenge us to hone our own compositional focus and thinking.)
I’ll try to reach out to the developers, but meanwhile it’s fun squinting at screenshots, as you can tell a lot. There’s a polyphonic step sequencer / pattern sequencer of sorts in there, with some variable chance. You can also tell in the screenshots that the pattern lengths are set to be irregular, so that you get these lovely polymetric echoes of what Olafur is playing.
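You can get a feel for that polymetric trick with a tiny sketch. To be clear, this is my own illustration of the general idea – coprime pattern lengths drifting against each other, with a dash of chance – and not Arnalds’ actual software:

```python
import random

# Two looping patterns with different, coprime lengths. Stepped in
# lockstep, their combination only repeats every 5 * 7 = 35 steps -
# which is what produces those "polymetric echo" effects.
MELODY = ["C", "E", "G", "B", "D"]          # 5-step pattern
ECHO = ["c", "e", "g", "b", "d", "f", "a"]  # 7-step pattern

def run(steps, chance=0.8, rng=random.random):
    """Advance both patterns together; each echo note fires with `chance`."""
    events = []
    for i in range(steps):
        m = MELODY[i % len(MELODY)]
        e = ECHO[i % len(ECHO)] if rng() < chance else "-"  # variable chance
        events.append((m, e))
    return events
```

Run it for 35 steps and every (melody, echo) pairing appears once before the combined cycle restarts – the melody keeps looping, but its echo keeps landing somewhere new.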
Of course, what makes this most interesting is that Olafur responds to that machine – human echoes of the ‘ghost.’ I’m struck by how even a simple input can do this for you – like even a basic delay and feedback. We humans are extraordinarily sensitive to context and feedback.
The music itself is quite simple – familiar minimalist elements. If that isn’t your thing, you should definitely keep watching so you get to his trash punk stage. But it won’t surprise you at all that this is a guy who plays Clapping Music backstage – there’s some serious Reich influence.
You can hear the ‘ghost’ elements in the recent release ‘ekki hugsa’, which comes with some lovely joyful dancing in the music video:
re:member debuted the software:
There is a history here of adapting composition to injury. (That’s not even including Robert Schumann, who evidently destroyed his own hands in an attempt to increase dexterity.)
Paul Wittgenstein, who had his entire right arm amputated following a World War I injury, commissioned a number of works for just the left hand. (There’s a surprisingly extensive article on Wikipedia, which definitely retrieves more than I had lying around inside my brain.) Ravel’s Piano Concerto for the Left Hand is probably the best-known result, and there’s even a 1937 recording by Wittgenstein himself. It’s an ominous, brooding performance, made as Europe was plunging itself into violence a second time. But it’s notable in that the single hand is made even more virtuosic – it’s a new kind of piano idiom, made for this unique scenario.
I love Arnalds’ work, but listening to the Ravel – a composer known for being whimsical, even crowd-pleasing – I do lament a bit of what’s been lost in the push for cheery, comfortable concert music. It seems to me that some of that darkness and edge could come back to the music, and the circumstances of the composition in that piece ought to remind us how necessary those emotions are to our society.
I don’t say that to diss Mr. Arnalds. On the contrary, I would love to hear some of his punk side return. And his quite beautiful music aside, I also hope that these ideas about harnessing machines in concert music may also find new, punk, even discomforting conceptions among some readers here.
Here’s a more intimate performance, including a day without Internet:
And lastly, more detail on the software:
Meanwhile, whatever kind of music you make, you should endeavor to have a promo site that is complete, like this – also, sheet music!
In glitching collisions of faces, percussive bolts of lightning, Lorem has ripped open machine learning’s generative powers in a new audiovisual work. Here’s the artist on what he’s doing, as he’s about to join a new inquisitive club series in Berlin.
Machine learning that derives gestures from System Exclusive MIDI data … surprising spectacles of unnatural adversarial neural nets … Lorem’s latest AV work has it all.
And by pairing producer Francesco D’Abbraccio with a team of creators across media, it brings together a serious think tank of artist-engineers pushing machine learning and neural nets to new places. The project, as he describes it:
Lorem is a music-driven multidisciplinary project working with neural networks and AI systems to produce sounds, visuals and texts. In the last three years I had the opportunity to collaborate with AI artists (Mario Klingemann, Yuma Kishi), AI researchers (Damien Henry, Nicola Cattabiani), video artists (Karol Sudolski, Mirek Hardiker) and music instrument designers (Luca Pagan, Paolo Ferrari) to produce original materials.
Adversarial Feelings is the first release by Lorem, and it’s a 22-minute AV piece plus 9 music tracks and a book. The record will be released April 19th on Krisis via Cargo Music.
And what about achieving intimacy with nets? He explains:
Neural Networks are nowadays widely used to detect, classify and reconstruct emotions, mainly in order to map users’ behaviours and to affect them in effective ways. But what happens when we use Machine Learning to perform human feelings? And what if we use it to produce autonomous behaviours, rather than to affect consumers? Adversarial Feelings is an attempt to inform non-human intelligence with “emotional data sets”, in order to build an “algorithmic intimacy” through those intelligent devices. The goal is to observe the subjective/affective dimension of intimacy from the outside, to speak about human emotions as perceived by non-human eyes. Transposing them into a new shape helps Lorem to embrace a new perspective, and to recognise fractured experiences.
I spoke with Francesco as he made the plane trip toward Berlin. Friday night, he joins a new series called KEYS, which injects new inquiry into the club space – AV performance, talks, all mixed up with nightlife. It’s the sort of thing you get in festivals, but in festivals all those ideas have been packaged and finished. KEYS, at a new post-industrial space called Trauma Bar near Hauptbahnhof, is a laboratory. And, of course, I like laboratories. So I was pleased to hear what mad science was generating all of this – the team of humans and machines alike.
So I understand the ‘AI’ theme – am I correct in understanding that the focus to derive this emotional meaning was on text? Did it figure into the work in any other ways, too?
Neural Networks and AI were involved in almost every step of the project. On the musical side, they were used mainly to generate MIDI patterns, to deal with SysEx from a digital sampler and to manage recursive re-sampling and intelligent timestretch. Rather than generating the final audio, the goal here was to simulate the musician’s behaviors and creative processes.
On the video side, [neural networks] (especially GANs [generative adversarial networks]) were employed both to generate images and to explore the latent spaces through custom-tailored algorithms, in order to let the system edit the video autonomously, according to the audio source.
What data were you training on for the musical patterns?
MIDI – basically I trained the NN on patterns I create.
And wait, SysEx, what? What were you doing with that?
Basically I record every change of state of a sampler (i.e. the automations on a knob), and I ask the machine to “play” the same patch of the sampler according to what it learned from my behavior.
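To make that concrete, here is a minimal sketch of that capture-and-replay idea – my own illustration with hypothetical names, not Lorem’s actual code, and a simple frequency model standing in where his system uses a neural network over the SysEx stream. Record (time, value) pairs from a knob, learn the distribution of moves, then generate a new automation in the same style:

```python
import random
from collections import Counter

def record(events):
    """events: list of (timestamp, knob_value) captured from the sampler.
    Returns the value changes (deltas) between successive states."""
    values = [v for _, v in events]
    return [b - a for a, b in zip(values, values[1:])]

def train(deltas):
    """Learn how often each move size occurred - a crude stand-in
    for the neural network Lorem describes."""
    return Counter(deltas)

def replay(model, start, steps, seed=0):
    """Generate a new automation curve by sampling the learned moves."""
    rng = random.Random(seed)
    moves, weights = zip(*model.items())
    value, curve = start, [start]
    for _ in range(steps):
        value += rng.choices(moves, weights)[0]
        value = max(0, min(127, value))  # clamp to the 0-127 MIDI range
        curve.append(value)
    return curve
```

Feed it a recorded filter sweep and it will “play” the patch back in roughly your style – the machine performing what it learned from your behavior.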
What led you to getting involved in this area? And was there some education involved just given the technical complexity of machine learning, for instance?
I always tried to express my work through multidisciplinary projects. I am very fascinated by the way AI approaches data, allowing us to work across different media with the same perspective. Intelligent devices are really a great tool to melt languages. On the other hand, the emergence of AI discloses political questions we have been trying to face for some years at Krisis Publishing.
I started working through the Lorem project three years ago, and I was really a newbie on the technical side. I am not a hyper-skilled programmer, and building a collaborative platform has been really important to Lorem’s development. I had the chance to collaborate with AI artists (Klingemann, Kishi), researchers (Henry, Cattabiani, Ferrari), digital artists (Sudolski, Hardiker)…
How did the collaborations work – Mario I’ve known for a while; how did you work with such a diverse team; who did what? What kind of feedback did you get from them?
To be honest, I was very surprised about how open and responsive the AI community is! Some of the people involved are really huge points of reference for me (like Mario, for instance), and I didn’t expect to really get them on Adversarial Feelings. Some of the people involved prepared original contents for the release (Mario, for instance, realised a video on “The Sky would Clear What the …”, Yuma Kishi realized the girl/flower on “Sonnet#002” and Damien Henry did the train hallucination on the “Shonx – Canton” remix). With other people involved, the collaboration was more based on producing something together, such as a video, a piece of code or a way to explore latent spaces.
What was the role of instrument builders – what are we hearing in the sound, then?
Some of the artists and researchers involved realized videos from the audio tracks (Mario Klingemann, Yuma Kishi). Damien Henry gave me the right to use a video he made with his Next Frame Prediction model. Karol Sudolski and Nicola Cattabiani worked with me in developing, respectively, “Are Eyes invisible Socket Contenders” + “Natural Readers” and “3402 Selves”. Karol Sudolski also realized the video part on “Trying to Speak”. Nicola Cattabiani developed the ELERP algorithm with me (to let the network edit videos according to the music) and GRUMIDI (the network working with my MIDI files). Mirek Hardiker built the data set for the third chapter of the book.
I wonder what it means for you to make this an immersive performance. What’s the experience you want for that audience; how does that fit into your theme?
I would say Adversarial Feelings is an AV show totally based on emotions. I always try to prepare the most intense, emotional and direct experience I can.
You talk about the emotional content here and its role in the machine learning. How are you relating emotionally to that content; what’s your feeling as you’re performing this? And did the algorithmic material produce a different emotional investment or connection for you?
It’s a bit like when I was a kid and I was listening to my recorded voice… it was always strange: I wasn’t fully able to recognize my voice as it sounded from the outside. I think neural networks can be an interesting tool to observe our own subjectivity through external, non-human eyes.
The AI hook is of course really visible at the moment. How do you relate to other artists who have done high-profile material in this area recently (Herndon/Dryhurst, Actress, etc.)? And do you feel there’s a growing scene here – is this a medium that has a chance to flourish, or will the electronic arts world just move on to the next buzzword in a year before people get the chance to flesh out more ideas?
I’ve messaged Holly Herndon a couple of times online… I’ve been really into her work since her early releases, and when I heard she was working on AI systems I was trying to finish the Adversarial Feelings videos… so I was so curious to discover her way of dealing with intelligent systems! She’s a really talented artist, and I love the way she’s able to embed conceptual/political frameworks inside her music. Proto is a really complex, inspiring device.
More generally, I think the advent of a new technology always discloses new possibilities in artistic practices. I directly experienced the impact of the internet (and of digital culture) on art, design and music when I was a kid. I’m thrilled by the fact that, at this point, new configurations are not yet codified in established languages, and I feel working on AI today gives me the possibility to be part of a public debate about how to set new standards for the discipline.
What can we expect to see / hear today in Berlin? Is it meaningful to get to do this in this context in KEYS / Trauma Bar?
I am curious too, to be honest. I am very excited to take part in such a situation, alongside artists and researchers I really respect and enjoy. I think the guys at KEYS are trying to do something beautiful and challenging.
Live in Berlin, 7 June
Lorem will join Lexachast (an ongoing collaborative work by Amnesia Scanner, Bill Kouligas and Harm van den Dorpel), N1L (an A/V artist, producer/dj based between Riga, Berlin, and Cairo), and a series of other tantalizing performances and lectures at Trauma Bar.
From Garderobe to dark rooms to toilets to dance floors, jbkrauss has lovingly built a Minecraft recreation of Be– uh, I really don’t want this to be taken down. Of some Berlin club. Looks like Tresor, probably.
Anyway, this strangely Tresor-ish Berlin club sure does, let’s say, lend itself to the cubic block architecture of Minecraft. (Always said that place was really the Borg cube, on so many levels.) Watch:
No doubt it is.
No Halle, but you do get an Eisbar. Erm, sorry – this is definitely not that club. Some club that has something up some stairs. Maybe it’s fourbar at Tresor. Yes.
I have no doubt that when we’re all stuck in an old age home, we will be visiting techno festivals and clubs inside some sort of virtual reality, whether it’s this in Berlin, or a VR Movement Festival, or MUTEK from our retirement home. Here’s our future. So we better start mining materials.
Source: posted by the creator to the techno subreddit today.
Composer Hildur Guðnadóttir went the extra distance for Chernobyl – taking a real power plant as inspiration for her haunting score.
In a fascinating interview for Score: The Podcast, Guðnadóttir recounts how she followed the film crew to a decommissioned nuclear power plant in Lithuania – even donning a Haz-Mat suit for the research. (Lithuania here is a stand-in for the original site in Ukraine.)
Guðnadóttir, the composer and cellist (she’s played with Throbbing Gristle, scored films, and toured with Sunn O)))), was joined by Chris Watson on field recording. But this wasn’t just about gathering cool samples – as she puts it, it was about listening. So every sound you hear is indeed drawn from the landscape of a similar Soviet-era nuclear plant, but as she tells it, the act of observing was just as important.
“I wanted to experience what it feels like to be inside a power plant,” she says. “Trying to make music out of a story – most of it is about listening.” So they go into this world just to listen – with a man who records ants.
And yes, this finally gets us away from Geiger counters and other cliches.
It’s funny to be here in Riga, just last night talking to Erica Synths founder Girts about his experience of the documentary – having lived through the incident within reach of radiation fallout.
Thanks to Noncompliant for this link.
The HBO drama trailer (though a poor representation of the score – like many trailers, it’s edited to materials outside the actual film score):
Ready to drone the f*** out? Here’s your own personal all-night chillout stage, full of ten hours of drones. It’s all part of a growing international annual celebration of drone sounds.
Oh sure, if you’re American you probably had Memorial Day weekend on the mind last weekend. But there was another holiday, too, dedicated to ambient and experimental music.
“Every year we make a noise together that stretches around the world,” proclaim the organizers on the site. “The answer comes through tiny vibrations in our skin and between our bones,” they say. “Gather and drone with friends, with the public, or alone (though you are never truly alone in the drone).”
Drone, community, and experimental sounds are all welcome. The ritual began a few years ago with organizers Marie Claire LeBlanc Flanagan and Weird Canada. This year’s edition had some 60 drone events worldwide.
But if you missed Drone Day on Saturday, don’t worry – you didn’t miss out. We’ve got a full ten hours recorded (and streamed live) in Berlin for your droning needs.
The details of this broadcast, plus the (very lovely) performing lineup:
For Drone Day, May 25th 2019, a live studio broadcast and deep listening session was held in Berlin with funding support from the Musicboard Berlin GmbH. An audio broadcast was also streamed with kind thanks to Radio nunc from 14.00-22.00CET.
0:00:00 improvisation with diane + vida vojić
3:34:10 vida vojić
4:28:31 improvisation with diane + DuChamp
5:15:30 Auguste + Nina Guo
5:55:30 Nina Pixel
6:58:32 Inter Lineas
7:44:05 improvisation with diane + Alexandra Macià + sn(50)
It’s not actually shot in black and white murk; we just live like that in Berlin – it follows us around, like a fog.
You asked for it – you’ve got it. It’s the biggest hits of orchestrions, street organs, fairground music, and mechanical tunes, from the 17th century, the 20th century, and yesterday.
Oh, you didn’t ask for that? Well… you get it anyway. “We’ve picked the most iconic instruments of all styles, from 17th century music boxes, to self playing MIDI accordions,” say the broadcasters of 24-hour-a-day Mechanical Music Radio.
Need a pick-me-up? You’re covered – there are “feel good” toe tappers at 6pm and midnight to get your party going, plus Friday night uptempo party lineups. And … daily at 6am and noon, because mechanical music lovers party at all kinds of hours. That’s how they roll. (The phrase “24 hour party people,” I believe, refers to listeners of this streaming radio station.)
Need the latest news of what’s happening in mechanical music events around the world? Top of the hour, every hour. (You need to know what’s up in street organ events, like hourly, way more than traffic and weather together on the 8s.)
Vinyl? Digital? No – fairground mechanical, for maximum fidelity and impact.
It’s the world’s first 24-hour mechanical music radio.
It’s also the world’s only 24-hour mechanical music radio, but I’ll establish a CDM tag for it just in case it inspires others.
Thanks to Graham Dunning, who has single-handedly made mechanical music a modern techno act:
Just discovered Mechanical Music Radio – 24hr streaming fairground organ bangers. "Every hour takes you on a journey through automatic instruments. We've crafted every hour to make sure whenever you tune in, you're never far away from a style of mechanical music you enjoy."
Speaking of Stravinsky, I wish I could find some of the photos of him and other great composers lounging at his pool. But the “famous composer who would most easily fit into an episode of MTV’s cribs” is undoubtedly mister Rite of Spring himself, who escaped the Soviet Union, embraced capitalism in a major way, and found this sweet pad with an enormous pool in Hollywood. Seriously.
Dear young Buster: why do you look so sad and lonely? Don’t you know that having a yellow Teenage Engineering pocket modular is all the love you need?
Okay, so Buster is in fact up-and-coming Millennial Swedish pop star Emil Lennstrand, and he is the first face of a record label (really) from the perpetually-open-to-creative-distraction crew of Teenage Engineering. You see, having done cameras for IKEA and marketing campaigns and various synthesizers and … bicycles and lamps and other things … the Teenagers are now getting into a record label.
It’s surprisingly silky-smooth pop from this otherwise fairly hypernerdy and experimental Stockholm shop. But it does predictably feature Teenage Engineering instruments – in this case the pocket operator modular.
They bill the song as “partly produced” by that modular 400 system (what – the modular isn’t used on the vocals?). But it’s slick stuff, for sure.
The other star of the music video is this – TE’s pocket operator modular series.
So what’s up with the record label? It’s tough to tell from this one track, but here’s what the Teenagers say for themselves:
first teenage engineering started their own band to field test their instruments. now they are taking the next step starting a record label for songs made with teenage engineering products. there are just two rules, it needs to be a good song (easy) and have at least one of teenage engineering's instruments used in the song. the main distribution platform for their releases will be spotify.
Now that’s some serious Swedish loyalty, going Spotify only.
I’m slightly confused, but intrigued. To my mind, the OP-Z remains the best thing recently from Teenage Engineering hands down, but stay tuned for my explanation of why I feel that way.
And there’s more Teenage Engineering stuff to come, including me joining them in Barcelona during SONAR+D this summer – which means a chance to grill them for more information, of course.