An injury left Olafur Arnalds unable to play, so he turned to machines

Following nerve damage, Icelandic composer/producer/musician Olafur Arnalds was unable to play the piano. With his ‘Ghost Pianos’, he gets that ability back, through intelligent custom software and mechanical pianos.

It’s moving to hear him tell the story (to the CNN viral video series) – with, naturally, the obligatory shots of Icelandic mountains and close-up images of mechanical pianos working. No complaints:

This frames accessibility in terms any of us can understand. Our bodies are fragile, and indeed piano history is replete with musicians who lost the use of their hands and had to adapt. Here, an accident caused him to lose left-hand dexterity, so he needed a way to connect one hand to more parts.

And in the end, as so often is the case with accessibility stories and music technology, he created something that was more than what he had before.

With all the focus on machine learning, a lot of generative algorithmic music continues to work more traditionally. That appears to be the case here – the software analyzes incoming streams and follows rules and music theory to accompany the work. (As I learn more about machine learning, though, I suspect the combination of these newer techniques with the older ones may slowly yield even sharper algorithms – and challenge us to hone our own compositional focus and thinking.)

I’ll try to reach out to the developers, but meanwhile it’s fun squinting at screenshots, as you can tell a lot. There’s a polyphonic step sequencer / pattern sequencer of sorts in there, with some variable chance. You can also tell from the screenshots that the pattern lengths are set to be irregular, so that you get these lovely polymetric echoes of what Olafur is playing.
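The software itself isn’t public, but the behavior visible in those screenshots can be sketched. Here’s a minimal Python model – the names and parameters are my own invention, not from Arnalds’ actual software: each lane loops a pattern of its own length with a per-step trigger chance, and lanes with co-prime lengths drift out of phase:

```python
import random

class Lane:
    """One sequencer lane: a looping note pattern with per-step trigger chance."""
    def __init__(self, notes, chance):
        self.notes, self.chance, self.pos = notes, chance, 0

    def step(self):
        note = self.notes[self.pos]
        self.pos = (self.pos + 1) % len(self.notes)  # each lane loops independently
        return note if random.random() < self.chance else None

# Lanes with co-prime lengths (5 vs 7 steps) drift in and out of phase,
# yielding polymetric echoes like those described above.
lanes = [Lane([60, 64, 67, 72, 64], 0.9),
         Lane([48, 55, 52, 59, 50, 57, 53], 0.5)]
for tick in range(16):
    print(tick, [lane.step() for lane in lanes])
```

With irregular lengths, the combined pattern only repeats after the least common multiple of the lane lengths – 35 steps here – which is where the “echo” quality comes from.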

Of course, what makes this most interesting is that Olafur responds to that machine – human echoes of the ‘ghost.’ I’m struck by how even a simple input can do this for you – like even a basic delay and feedback. We humans are extraordinarily sensitive to context and feedback.
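Even the “basic delay and feedback” case is easy to see in code. This toy sketch (mine, not from any product) mixes a decaying copy of a signal back into itself, so a single gesture comes back as a train of softer echoes:

```python
def feedback_delay(dry, delay_samples, feedback):
    """Mix a delayed, decaying copy of the input back into itself."""
    out = list(dry)
    for i in range(delay_samples, len(out)):
        out[i] += out[i - delay_samples] * feedback  # each echo feeds the next
    return out

# A single impulse returns as progressively quieter 'ghost' echoes.
echoes = feedback_delay([1.0] + [0.0] * 9, delay_samples=3, feedback=0.5)
```

Each echo is something the player can respond to, which is the feedback loop – in both senses – described above.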

The music itself is quite simple – familiar minimalist elements. If that isn’t your thing, you should definitely keep watching so you get to his trash punk stage. But it won’t surprise you at all that this is a guy who plays Clapping Music backstage – there’s some serious Reich influence.

You can hear the ‘ghost’ elements in the recent release ‘ekki hugsa’, which comes with some lovely joyful dancing in the music video:

re:member debuted the software:

There is a history here of adapting composition to injury. (That’s not even including Robert Schumann, who evidently destroyed his own hands in an attempt to increase dexterity.)

Paul Wittgenstein, who had his entire right arm amputated following a World War I injury, commissioned a number of works for just the left hand. (There’s a surprisingly extensive article on Wikipedia, which definitely retrieves more than I had lying around inside my brain.) Ravel’s Piano Concerto for the Left Hand is probably the best-known result, and there’s even a 1937 recording by Wittgenstein himself. It’s an ominous, brooding performance, made as Europe was plunging itself into violence a second time. But it’s notable in that it’s made even more virtuosic in the single hand – it’s a new kind of piano idiom, made for this unique scenario.

I love Arnalds’ work, but listening to the Ravel – a composer known as whimsical, crowd pleasing even – I do lament a bit of what’s been lost in the push for cheery, comfortable concert music. It seems to me that some of that dark and edge could come back to the music, and the circumstances of the composition in that piece ought to remind us how necessary those emotions are to our society.

I don’t say that to diss Mr. Arnalds. On the contrary, I would love to hear some of his punk side return. And his quite beautiful music aside, I also hope that these ideas about harnessing machines in concert music may also find new, punk, even discomforting conceptions among some readers here.

Here’s a more intimate performance, including a day without Internet:

And lastly, more detail on the software:

Meanwhile, whatever kind of music you make, you should endeavor to have a promo site that is complete, like this – also, sheet music!

olafurarnalds.com

Previously:

The KellyCaster reveals what accessibility means for instruments

The post An injury left Olafur Arnalds unable to play, so he turned to machines appeared first on CDM Create Digital Music.

For ‘Chernobyl’ score, Hildur Guðnadóttir went to a nuclear power plant

Composer Hildur Guðnadóttir went the extra distance for Chernobyl – taking a real power plant as inspiration for her haunting score.

In a fascinating interview for Score: The Podcast, Guðnadóttir recounts how she followed the film crew to a decommissioned nuclear power plant in Lithuania – even donning a Haz-Mat suit for the research. (Lithuania here is a stand-in for the original site in Ukraine.)

Guðnadóttir, the composer and cellist (she’s played with Throbbing Gristle, scored films, and toured with Sunn O)))) was joined by Chris Watson on field recording. But this wasn’t just about gathering cool samples – it was, as she puts it, about listening. So every sound you hear is indeed drawn from the landscape of a similar Soviet-era nuclear plant, but as she tells it, the act of observing was just as important.

“I wanted to experience what it feels like to be inside a power plant,” she says. “Trying to make music out of a story – most of it is about listening.” So they go into this world just to listen – with a man who records ants.

And yes, this finally gets us away from Geiger counters and other cliches.

It’s funny to be here in Riga, just last night talking to Erica Synths founder Girts about his experience of the documentary – having lived through the incident within reach of radiation fallout.

Thanks to Noncompliant for this link.

The HBO drama trailer (though a poor representation of the score – like many trailers, it’s edited to materials outside the actual film score):

Score: The Podcast on Apple Podcasts


Experimental Ukrainian music, through a looking glass

April is a generous month for fans of unusual Ukrainian compilations – now having covered new braindance from the country, we’re directed by readers to another set, giving a tour of experimentalism and electronic composition.

Flaming Pines is a wonderful label for experimental music, also setting up its virtual home on Bandcamp, that last best hope for underground digital downloads and physical releases. Check their full catalog for adventurous sounds from 9T Antiope (great stuff) to Kate Carr (seriously, just go give those a listen). The label, a transplant from Sydney to London, has also taken on a number of tours of experimental electronic scenes in far-off locales, including a gorgeous Iranian compilation called Absence, and up-and-coming Vietnamese avant garde in Emergence.

It’s not so much exoticism the label seems to find as threads connecting kindred spirits. And now, having plumbed the depths of mystical sound in Ukrainian duo Gamardah Fungus, the label brings back half of that duo to curate a selection of sounds from that motherland. Igor Yalivec is the guide here, leading us in just twelve tracks to some highlights of established compositional voices and younger contributions alike. Igor you’ll also find showing off modular musicianship as a solo artist in addition to working in the duo:

Guitar and electronics yield magical metallic timbres like a lucid dream, in the work of Gamardah Fungus – some potent brew of remembered folklore and time-warped futurism. It’s Slavic spirit ambient, but always inventive – modal melodies tensely wandering about layers of tape and sound:

So this was a perfect starting point for Kaleidoscope. That leads to Alla Zahaikevych (aka Zagaykevych) – her work spanning traditional concert music training, historical folk singing technique (with over a decade singing in an ensemble dedicated to the practice), and founding the Electronic Music Studio of Kyiv’s National Music Academy of Ukraine. I can’t think of many composers worldwide covering that many directions in a single career, making her a leader on the world stage as well as in Ukraine.

Or there’s Andrey Kiritchenko, obsessively prolific generation X-aged composer who founded the cutting-edge Nexsound label – and has worked with names like Kim Cascone, Francisco López, Andreas Tilliander, Frank Bretschneider, Scanner, Charlemagne Palestine, and many others.

But thinking in generations or separating academy from disreputable underground – it’s fitting that we cross those borders freely now. So it’s an easy step to a younger artist like Motorpig, a visceral, dark project spanning techno, industrial, and experimental veins – and things that are none of those, but rather ambient, undulating merry-go-rounds of texture. (Been a while since there was new Motorpig, so I’m up for any new track):

To come full circle, understanding the reason for this journey out to Ukraine, it’s worth hearing the terrifically nuanced sound world of Flaming Pines’ own Kate Carr. These are ambient soundscapes that breathe and ache, as precarious and fragile as evidently the artist was recording them – “sliding about in freezing mud on steep inclines.” And maybe that’s what this is all about – music that invokes deep spirits and puts itself in positions of extreme difficulty, all to catch fleeting moments of beauty.

So the compilation promises great things – like this utterly chilling vocal composition by Alla Zagaykevych, some evidently convolved, ghostly sound that seems to be about to blow away like frost:

Also in future-vocal territory, Andrey Kiritchenko delivers a chanting vocoder:

The art, at top, also comes from Ukraine – artist Alina Gaeva. I look forward to the compilation coming out on April 22 – but there’s plenty of link holes to drain our PayPal accounts on Bandcamp in the meantime.

https://flamingpines.bandcamp.com/album/kaleidoscope

And all of this makes a nice contrast to that naive nerdy braindance business from a couple of days ago. Previously, on “there’s a lot of really cool music from Ukraine on Bandcamp now and it’s worth dropping other things to talk about it”:

From Ukraine, a compilation to resist normality and go braindance


Take a 3D trip into experimental turntablism with V-A-C Moscow, Shiva Feshareki

Complex music conjures up radical, fluid architectures, vivid angles – why not experience those spatial and rhythmic structures together? Here’s insight into a music video this week in which experimental turntablism and 3D graphics collide.

And collide is the right word. Sound and image are all hard edges, primitive cuts, stimulating corners.

Shiva Feshareki is a London-born composer and turntablist; she’s also got a radio show on NTS. With a research specialization in Daphne Oram (there’s a whole story there, even), she’s made a name for herself as one of the world’s leading composers working with turntables as medium, playing venues like the Royal Albert Hall with the London Contemporary Orchestra. Her sounds are themselves often spatial and architectural, too – not just taking over art spaces, but working with spatial organization in her compositions.

That makes a perfect fit with the equally frenetic jump cuts and spinning 3D architectures of visualist Daniel James Oliver Haddock. (He’s a man with so many dimensions they named him four times over.)

NEW FORMS, her album on Belfast’s Resist label, explores the fragmented world of “different social forms,” a cut-up analog to today’s sliced-up, broken society. The abstract formal architecture, then, has a mission. As she writes in the liner notes: “if I can demonstrate sonically how one form can be vastly transformed using nothing other than its own material, then I can demonstrate this complexity and vastness of perspective.”

You can watch her playing with turntables and things around and atop turntables on Against the Clock for FACT:

And grab the album from Bandcamp:

Shiva herself works with graphical scores, which are interpreted in the album art by artist Helena Hamilton. Have a gander at that edition:

But since FACT covered the sound side of this, I decided to snag Daniel James Oliver Haddock. Daniel also wins the award this week for “quickest to answer interview questions,” so hey kids, experimental turntablism will give you energy!

Here’s Daniel:

The conception formed out of conversations with Shiva about the nature of her work and the ways in which she approaches sound. She views sound as these unique 3D structures which can change and be manipulated. So I wanted to emulate that in the video. I also was interested in the drawings and diagrams that she makes to plan out different aspects of her performances, mapping out speakers and sound scapes, I thought they were really beautiful in a very clinical way so again I wanted to use them as a staging point for the 3D environments.

I made about six environments in Cinema 4D which were all inspired by these drawings. Then I animated these quite rudimentary irregular polyhedrons in the middle to kind of represent various sounds.

Her work usually has a lot of sound manipulation, so I wanted the shapes to change and have variables. I ended up rendering short scenes in different camera perspectives and movements and also changing the textures from monotone to colour.

After all the Cinema 4D stuff, it was just a case of editing it all together! That was fairly labour intensive: the track is not only very long, but all the sounds have a very unusual tempo to them, some growing over time and then shortening; sounds change and get re-manipulated, so it was challenging getting everything cut well. I basically just went through second by second with the waveforms and matched sounds by eye. Once I got the technique down it moved quite quickly. I then got the idea to involve some found footage to kind of break apart the aesthetic a bit.

Of course, there’s a clear link here to Autechre’s Gantz Graf music video, ur-video of all 3D music videos after. But then, there’s something really delightful about seeing those rhythms visualized when they’re produced live on turntables. Just the VJ in me really wants to see the visuals as live performance. (Well, and to me, that’s easier to produce than the Cinema 4D edits!)

But it’s all a real good time at the audio/visual synesthesia experimental disco.

More:

Watch experimental turntablist Shiva Feshareki’s ‘V-A-C Moscow’ video [FACT]

https://www.shivafeshareki.co.uk/

https://resistbelfast.bandcamp.com/album/new-forms

Resist label


Why is this Valentine’s song made by an AI app so awful?

Do you hate AI as a buzzword? Do you despise the millennial whoop? Do you cringe every time Valentine’s Day arrives? Well – get ready for all those things you hate in one place. But hang in there – there’s a moral to this story.

Now, really, the song is bad. Like laugh-out-loud bad. Here’s iOS app Amadeus Code “composing” a song for Valentine’s Day, which says love much in the way a half-melted milk chocolate heart does, but – well, I’ll let you listen, millennial pop cliches and all:

Fortunately this comes after yesterday’s quite stimulating ideas from a Google research team – proof that you might actually use machine learning for stuff you want, like improved groove quantization and rhythm humanization. In case you missed that:

Magenta Studio lets you use AI tools for inspiration in Ableton Live

Now, as a trained composer / musicologist, I do find this sort of exercise fascinating. And on reflection, I think the failure of this app tells us a lot – not just about machines, but about humans. Here’s what I mean.

Amadeus Code is an interesting idea – a “songwriting assistant” powered by machine learning, delivered as an app. And it seems machine learning could generate, for example, smarter auto accompaniment tools or harmonizers. Traditionally, those technologies have been driven by rigid heuristics that sound “off” to our ears, because they aren’t able to adequately follow harmonic changes in the way a human would. Machine learning could – well, theoretically, with the right dataset and interpretation – make those tools work more effectively. (I won’t re-hash an explanation of neural network machine learning, since I got into that in yesterday’s article on Magenta Studio.)
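To illustrate what such a rigid heuristic looks like, here’s a hypothetical toy – not Amadeus Code’s actual method – that snaps every melody note to a diatonic triad built on it, with no awareness of harmonic context:

```python
C_MAJOR = [0, 2, 4, 5, 7, 9, 11]  # pitch classes of the C major scale

def triad_under(note):
    """Rigid heuristic: treat the note as a chord root and stack diatonic thirds."""
    degree = C_MAJOR.index(note % 12)  # assumes a diatonic melody note
    third = (C_MAJOR[(degree + 2) % 7] - C_MAJOR[degree]) % 12
    fifth = (C_MAJOR[(degree + 4) % 7] - C_MAJOR[degree]) % 12
    return (note, note + third, note + fifth)

# Every C always gets C-E-G, every D always gets D-F-A - with no awareness of
# the surrounding harmony, which is exactly why such accompaniments sound "off".
print([triad_under(n) for n in (60, 62, 64)])
```

A human accompanist would instead choose chords based on where the phrase is going – which is the gap machine learning approaches hope to close.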

https://amadeuscode.com/

You might well find some usefulness from Amadeus, too.

This particular example does not sound useful, though. It sounds soulless and horrible.

Okay, so what happened here? Music theory at least cheers me up even when Valentine’s Day brings me down. Here’s what the developers sent CDM in a pre-packaged press release:

We wanted to create a song with a specific singer in mind, and for this demo, it was Taylor Swift. With that in mind, here are the parameters we set in the app.

Bpm set to slow to create a pop ballad
To give the verses a rhythmic feel, the note length settings were set to “short” and also since her vocals have great presence below C, the note range was also set from low~mid range.
For the chorus, to give contrast to the rhythmic verses, the note lengths were set longer and a wider note range was set to give a dynamic range overall.

After re-generating a few ideas in the app, the midi file was exported and handed to an arranger who made the track.

Wait – Taylor Swift is there just how, you say?

Taylor’s vocal range is somewhere in the range of C#3-G5. The key of the song created with Amadeus Code was raised a half step in order to accommodate this range making the song F3-D5.

From the exported midi, 90% of the topline was used. The rest of the 10% was edited by the human arranger/producer: The bass and harmony files are 100% from the AC midi files.

Now, first – these results are really impressive. I don’t think traditional melodic models – theoretical and mathematical in nature – are capable of generating anything like this. They’ll tend to fit melodic material into a continuous line, and as a result will come out fairly featureless.

No, what’s compelling here is not so much that this sounds like Taylor Swift, or that it sounds like a computer, as it sounds like one of those awful commercial music beds trying to be a faux Taylor Swift song. It’s gotten some of the repetition, some of the basic syncopation, and oh yeah, that awful overused millennial whoop. It sounds like a parody, perhaps because partly it is – the machine learning has repeated the most recognizable cliches from these melodic materials, strung together, and then that was further selected / arranged by humans who did the same. (If the machines had been left alone without as much human intervention, I suspect the results wouldn’t be as good.)

In fact, it picks up Swift’s tics – some of the funny syncopations and repetitions – but without stringing them together, like watching someone do a bad impression. (That’s still impressive, though, as it does represent one element of learning – if a crude one.)

To understand why this matters, we’re going to have to listen to a real Taylor Swift song. Let’s take this one:

Okay, first, the fact that the real Taylor Swift song has words is not a trivial detail. Adding words means adding prosody – so elements like intonation, tone, stress, and rhythm. To the extent those elements have resurfaced as musical elements in the machine learning-generated example, they’ve done so in a way that no longer is attached to meaning.

No amount of analysis, machine or human, can be generative of lyrical prosody for the simple reason that analysis alone doesn’t give you intention and play. A lyricist will make decisions based on past experience and on the desired effect of the song, and because there’s no real right or wrong to how to do that, they can play around with our expectations.

Part of the reason we should stop using AI as a term is that artificial intelligence implies decision making, and these kinds of models can’t make decisions. (I did say “AI” again because it fits into the headline. Or, uh, oops, I did it again. AI lyricists can’t yet hammer “oops” as an interjection or learn the playful setting of that line – again, sorry.)

Now, you can hate the Taylor Swift song if you like. But it’s catchy not because of a predictable set of pop music rules so much as its unpredictability and irregularity – the very things machine learning models of melodic space are trying to remove in order to create smooth interpolations. In fact, most of the melody of “Blank Space” is a repeated tonic note over the chord progression. Repetition and rhythm are also combined into repeated motives – something else these simple melodic models can’t generate, by design. (Well, you’ll hear basic repetition, but making a relationship between repeated motives again will require a human.)
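To make that critique concrete: a first-order Markov chain is the classic example of an analytical model pressed into generative service. Trained on a melody, it faithfully reproduces local transitions – the “tics” – but each step only knows the previous note, so motives and phrase structure never emerge. A toy sketch (my own, not how any particular product works):

```python
import random
from collections import defaultdict

def train(melody):
    """Count first-order transitions: an analytical model of 'what follows what'."""
    table = defaultdict(list)
    for a, b in zip(melody, melody[1:]):
        table[a].append(b)
    return table

def generate(table, start, length, rng=random.Random(0)):
    """Use the analysis generatively: each step only knows the previous note."""
    out = [start]
    for _ in range(length - 1):
        out.append(rng.choice(table[out[-1]]))
    return out

# Trained on a toy tune, the model reproduces plausible local moves,
# but it has no notion of motive, phrase, or intention.
tune = [60, 60, 67, 67, 69, 69, 67, 65, 65, 64, 64, 62, 62, 60]
print(generate(train(tune), 60, 12))
```

Every local transition in the output appeared somewhere in the training melody – and that is all the model guarantees. Relationships between repeated motives, the thing the paragraph above says these models can’t produce by design, live above the level this table can see.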

It may sound like I’m dismissing computer analysis. I’m actually saying something more (maybe) radical – I’m saying part of the mistake here is assuming an analytical model will work as a generative model. Not just a machine model – any model.

This mistake is familiar, because almost everyone who has ever studied music theory has made the same mistake. (Theory teachers then have to listen to the results, which are often about as much fun as these AI results.)

Music theory analysis can lead you to a deeper understanding of how music works, and how the mechanical elements of music interrelate. But it’s tough to turn an analytical model into a generative model, because the “generating” process involves decisions based on intention. If the machine learning models sometimes sound like a first-year graduate composition student, that may be because that student is steeped in the analysis but not in the experience of decision making. But that’s important. The machine learning model won’t get better, because while it can keep learning, it can’t really make decisions. It can’t learn from what it’s learned, as you can.

Yes, yes, app developers – I can hear you aren’t sold yet.

For a sense of why this can go deep, let’s turn back to this same Taylor Swift song. The band Imagine Dragons picked it up and did a cover, and, well, the chord progression will sound more familiar than before.

As it happens, in a different live take I heard the lead singer comment (unironically) that he really loves Swift’s melodic writing.

But, oh yeah, even though pop music recycles elements like chord progressions and even groove (there’s the analytic part), the results take on singular personalities (there’s the human-generative side).

“Stand by Me” dispenses with some of the tics of our current pop age – millennial whoops, I’m looking at you – and at least as well as you can with the English language, hits some emotional meaning of the words in the way they’re set musically. It’s not a mathematical average of a bunch of tunes, either. It’s a reference to a particular song that meant something to its composer and singer, Ben E. King.

This is his voice, not just the emergent results of a model. It’s a singer recalling a spiritual that hit him with those same three words, which sets a particular psalm from the Bible. So yes, drum machines have no soul – at least until we give them one.

“Sure,” you say, “but couldn’t the machine learning eventually learn how to set the words ‘stand by me’ to music”? No, it can’t – because there are too many possibilities for exactly the same words in the same range in the same meter. Think about it: how many ways can you say these three words?

“Stand by me.”

Where do you put the emphasis, the pitch? There’s prosody. What melody do you use? Keep in mind just how different Taylor Swift and Ben E. King were, even with the same harmonic structure. “Stand,” the word, is repeated as a suspension – a dissonant note – above the tonic.

And even those observations still lie in the realm of analysis. The texture of this coming out of someone’s vocal cords, the nuances to their performance – that never happens the same way twice.

Analyzing this will not tell you how to write a song like this. But it will throw light on each decision, make you hear it that much more deeply – which is why we teach analysis, and why we don’t worry that it will rob music of its magic. It means you’ll really listen to this song and what it’s saying, listen to how mournful that song is.

And that’s what a love song really is:

If the sky that we look upon
Should tumble and fall
Or the mountain should crumble to the sea
I won’t cry, I won’t cry
No, I won’t shed a tear
Just as long as you stand
Stand by me

Stand by me.

Now that’s a love song.

So happy Valentine’s Day. And if you’re alone, well – make some music. People singing about heartbreak and longing have gotten us this far – and it seems if a machine does join in, it’ll happen when the machine’s heart can break, too.

PS – let’s give credit to the songwriters, and a gentle reminder that we each have something to sing that only we can:
Singer Ben E. King, Best Known For ‘Stand By Me,’ Dies At 76 [NPR]


Explore sonic inspiration, via this artist’s approach to Novation’s Peak 1.2

Novation packed new sounds – and 43 new wavetables – into an update for their flagship Peak synthesizer. Sound designer Patricia Wolf writes to share how she approached making some of those new sounds.

Peak, in case you missed it, has been one of the more compelling new synths in recent years. Novation designed a unique-sounding 8-voice polysynth, melding digital wavetable oscillators with analog processing, per-voice filtering and all-important distortion all over the place.

As with other Novation products, they’ve also been adding features in frequent firmware updates, listening to users in the process.

The big deal in Peak 1.2, released this month, is 43 additional wavetables (which evidently some of you were asking for). But you also get:

16 tuning tables
Two more LFOs you can assign to anything (not just per-voice)
Pitch bend wheel modulation (if you like)
A quicker interface for the Mod Matrix
A new four-slot FX Matrix – so you can route four LFOs to effects parameters
A hold stage for the envelopes (on top of the existing ADSRs)
An option to initialize with current knob/fader positions (instead of defaults)
New soundpacks from GForce and Patricia Wolf
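The hold stage effectively turns the familiar ADSR into an AHDSR: the envelope sits at full level for a fixed time between attack and decay. A rough sketch of the shape – linear segments for clarity; Peak’s actual curves will differ, and the parameter names here are hypothetical:

```python
def ahdsr_level(t, attack, hold, decay, sustain, release, gate_len):
    """Envelope level at time t, with a hold stage between attack and decay.
    Linear segments for illustration only; real hardware curves differ."""
    if t < attack:                       # rise to full level
        return t / attack
    if t < attack + hold:                # NEW: sit at full level for `hold`
        return 1.0
    if t < attack + hold + decay:        # fall toward the sustain level
        frac = (t - attack - hold) / decay
        return 1.0 - frac * (1.0 - sustain)
    if t < gate_len:                     # sustain while the gate is held
        return sustain
    frac = (t - gate_len) / release      # release after the gate goes off
    return max(0.0, sustain * (1.0 - frac))
```

A hold stage like this is handy for percussive sounds that need a guaranteed moment at full level before decaying, regardless of how the attack is set.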

More info:
https://novationmusic.com/news/peak-v12-firmware-update

The update is free via Novation’s Web-based tools:
https://components.novationmusic.com/

Now, as it happens, Patricia Wolf wrote us on her own to share what she has done with her 50 sounds. Patricia is building what sounds like a great career in sound design, and her approach to these sounds is really musical – including sharing these etudes of sorts to illustrate them, inspired by the likes of pioneering BBC Radiophonic Workshop composer Delia Derbyshire. Listen:

Here’s what Patricia has to say:

Hello CDM:) I am a sound designer and electronic musician based in Portland, Oregon. I am one of the official sound designers for the Novation Peak synthesizer and just made a sound pack of 50 patches for their firmware update launch. My soundpack is available for free through Novation’s Components App.

I created a recording demonstrating my patches in a musical/artistic way.

Patricia playing live in Seattle for Further Records. Photo Valerie Ann/DJ Explorateur, framed by video art live by Leo Mayberry.

This recording is a demonstration of the sound design work I did for the Novation Peak. I created 50 patches demonstrating some of the new features that the v1.2 firmware update has to offer. My sound pack is available for free with the update through Novation’s Components App. Select the Novation tab on that app to access them as well as GForce Software’s free patches.

The patches are performed with a mixture of Octatrack sequencing (using sequences from songs I have written) and live performance with a MIDI controller. I was inspired by artists like Delia Derbyshire and wanted to record little vignettes and sonatas using the Peak without other sound sources.

I made this recording so that friends can hear the sounds I made and so that other Peak users can get a closer glimpse into how I envision sound design.

The Novation Peak was recorded directly into a Steinberg UR44 interface. No external effects. Subtle mastering from Tokyo Dawn Labs software to balance recordings of different patches.

More on Patricia:

Patricia Wolf is an electronic musician, sound designer, and gallery curator based in Portland, Oregon. After years of working in the synth pop duo Soft Metals, Wolf became interested in exploring non-linear songwriting and new forms of synthesis. Alongside working with Novation, Wolf co-founded the gallery Variform which focuses on sound design and modern composition. Patricia Wolf is a recipient of the Precipice Fund, a grant funded by the Andy Warhol Foundation for the Visual Arts, to explore synthesis in the contemporary art world.


Ethereal, enchanting Winter Solstice drone album, made in VCV Rack

It’s the shortest day of the year and first astronomical day of winter in the Northern Hemisphere. Don’t fight it. Embrace all that darkness – with this transcendent album full of drones, made in the free VCV Rack modular platform.

And really, what better way to celebrate modular than with expansive drones? Leave the on-the-grid “mad beats” and EDM wavetable presets to commercial tools. Enjoy as each modular patch achingly, slowly shifts, like a frost across a snowbank. Or something like that.

These aren’t just any drones. The compilation, for its part, is absolutely gorgeous, start to finish. It’s the work of ablaut, a Dutch-born artist based in Suzhou, China, with a winter wonderland’s worth of lush sonic shapes to send a chill up your spine. And everything came from the active VCV Rack community, where users of the open source modular platform have been avidly sharing patches and music alongside.

There’s terrific attention to detail. The group were inspired by the work of composers like La Monte Young, and … this is no lazy “pad through some reverb” work here. It’s utterly otherworldly:

We’ll hopefully take a look at some of these patches soon. If you’ve got ambient Rack creations of your own and missed out on the collaboration, we’d love to hear those, too.

The album is pay-what-you-will.

https://ablaut.bandcamp.com/album/winter-solstice-drone

https://vcvrack.com/

VCV Rack Official Facebook group


An exploration of silence, in a new exhibition in Switzerland

What’s the sound of an exhibition devoted to silence? From John Cage recreations to the latest in interactive virtual reality tech, it turns out there’s a lot. The exhibition’s lead Jascha Dormann tells us more – and gives us a look inside.

The results are surprisingly poetic – like a surrealist listening playground on the topic of isolation.

“Sounds of Silence” opened this month at the Museum of Communication in Bern, Switzerland, and is on through July 2019. Just as John Cage found that visiting an anechoic chamber was, in fact, noisy, “silence” in this case challenges listening and exploration. It’s about surprise, not void. As the exhibition creators say, “the search for a place where stillness may be experienced, however, becomes difficult: stillness is holding sway only in outer space – yet even there the astronaut is hearing his own breaths.”

Inside the exhibition, there’s not a word of written text, and few traditional photos or videos. Instead, you get abstract spatial graphics. Tracking systems respond as you navigate the exhibit, and an unseen voice hints at what you might do. There’s a snowy cotton-like entry, radio-like sound effects, and then a pathway to explore silence from the start of the universe until this century.

And you get some unique experiences: the isolation tank invented by neurophysiologist John C. Lilly, 3D soundscapes, Sarah Maitland talking to you about her experience in seclusion on the Isle of Skye, and yes, Cage’s iconic if ironic “4’33”.” The Cage work is realized as an eight-channel ORTF 3D audio recording, from a performance by Staatsorchester Stuttgart at the Beethovensaal Stuttgart. (That has to be silence’s largest-ever orchestration, I suppose.) It’s silence in full immersive sound.

“The piece had never been recorded in 3D-audio before,” says Dormann. “We have then implemented the recording into the interactive sound system so visitors can experience it in a version that’s binauralized in real-time.”

Recording silence – in 3D! The session in Stuttgart, Germany.

Photos source: Museum of Communication Bern
Digitale Massarbeit

Exhibition credits:

Sound Concept and Sound Production Lead: Jascha Dormann (Idee und Klang GmbH)
Sound Concept and Sound Design: Ramon De Marco (Idee und Klang GmbH)
Sound Design: Simon Hauswirth (Idee und Klang GmbH)
Development Sound System: Steffen Armbruster (Framed immersive projects GmbH & Co. KG)
Sound Implementation: Marc Trinkhaus (Framed immersive projects GmbH & Co. KG)
Performance John Cage – 4’33’’: Staatsorchester Stuttgart conducted by Cornelius Meister
Recording John Cage – 4’33’’: Jascha Dormann at Beethovensaal / Liederhalle Stuttgart
Project in general
Project Lead and Curator: Kurt Stadelmann (Museum of Communication)
Project Manager: Angelina Keller (Museum of Communication)
Scenography: ZMIK spatial design / Rolf Indermühle
Exhibition Graphics: Büro Berrel Gschwind / Dominique Berrel
Author: Bettina Mittelstrass
Head of Exhibitions at Museum of Communication: Christian Rohner (Museum of Communication)

Various events are running alongside the exhibition; full details on the museum’s site:

Exhibitions: Sounds of Silence

More images:

http://www.mfk.ch/en/

The post An exploration of silence, in a new exhibition in Switzerland appeared first on CDM Create Digital Music.

From food stamps and survival to writing the songs you know

“I don’t know what I’m doing,” says artist and composer Allee Willis. Yet her output ranges from Earth, Wind & Fire’s “September” to the theme song of Friends. If you don’t know Willis, you should – and her story might inspire yours.

Behind all the cheery social media these days, most artists you talk to have struggled. They’ve struggled with creativity and sobriety, mental health and creative blocks, unfriendly industries and obscurity. And sometimes they’ve struggled just to get by – which is where Allee Willis was in 1978, living off food stamps and wondering what would happen next.

What happened next is a career that led to an insane number of hit songs – along with plenty of other fascinating side trips into kitsch and art. (There’s a kitsch-themed social network, an artist alterego named Bubbles, and a music video duet with a 91-year-old woman drummer on an oxygen tank, to name a few.) But what it hasn’t involved is a lot of widespread personal notoriety. Allee Willis is a celebrity’s celebrity, which is to say famous people know her but most people don’t know she’s famous.

This story is partly about that gap. The odds that you don’t know her? Decent. The odds that you don’t know her songs? Slim.

Let’s go: Earth, Wind & Fire “September” and “Boogie Wonderland,” The Pointer Sisters’ “Neutron Dance,” Pet Shop Boys with Dusty Springfield’s “What Have I Done To Deserve This.” The theme from Friends, recorded by The Rembrandts (if you knew that, which I suspect you didn’t)… all these and more add up to 60 million records. And she co-authored the Oprah Winfrey-produced, Tony and Grammy-winning Broadway musical The Color Purple. More songs you know in movies: Beverly Hills Cop, The Karate Kid (“You’re the Best”), Howard the Duck.

The Detroit native is an impassioned user of Web tech and animation, once networked machines together to design an orchestration workflow for The Color Purple musical, and now lives in LA with … Pro Tools, of course, alongside some cats.

But this isn’t about her resume so much as it is about what she says drives her – that itch to create stuff. And for anyone worried about how to get into the creative zone, maybe the first step is to stop worrying about getting into the creative zone. We value analysis and self-critique so much that sometimes we forget to just have fun making and stop worrying about even our own opinions (or maybe, especially those). In the end, it’s that instinct that has driven her work, and presumably lots of stuff that didn’t do as well as that Friends theme song. (But there are her cats. Not the Broadway kind; that’s Andrew Lloyd Webber – the furry ones.)

There’s a great video out from CNN-produced Web video series Great Big Story:

And her site is a wild 1999-vintage-design wonderland of HTML, if you want to dive in:

https://alleewillis.com

More:

How she wrote “What Have I Done to Deserve This” gets into her musical thinking – and incongruity (and she sure does seem like she knows what she’s doing):

Plus how she hears and why she needed a Fender Rhodes:

The post From food stamps and survival to writing the songs you know appeared first on CDM Create Digital Music.

Exploring a journey from Bengali heritage to electronic invention

Can electronic music tell a story about who we are? Debashis Sinha talks about his LP for Establishment, The White Dog, and how everything from Toronto noodle bowls to Bengali field recordings got involved.

The Canadian artist has a unique knack for melding live percussion techniques and electro-acoustic sound with digital manipulation, and in The White Dog, he dives deep into his own Bengali heritage. Just don’t think of “world music.” What emerges is deeply his and composed in a way that’s entirely electro-acoustic in character, not a pastiche of someone else’s musical tradition glued onto some beats. And that’s what drew me to it – this is really the sound of the culture of Debashis, the individual.

And that seems connected to what electronic music production can be – where its relative ease and accessibility can allow us to focus on our own performance technique and a deeper sense of expression. So it’s a great chance not just to explore this album, but what that trip in this work might say to the rest of us.

CDM’s label side project Establishment put out the new release. I spoke to Debashis just after he finished a trip to Germany and a live performance of the album at our event in Berlin. He writes us from his home in Toronto.

First, the album:

I want to start with this journey you took across India. What was that experience like? How did you manage to gather research while in that process?

I’ve been to India many times to travel on my own since I turned 18 – usually I spend time with family in and near Kolkata, West Bengal and then travel around, backpacking style. Since the days of Walkman cassette recorders, I’ve always carried something with me to record sound. I didn’t have a real agenda in mind when I started doing it – it was the time of cassettes, really, so in my mind there wasn’t much I could do with these recordings – but it seemed like an important process to undertake. I never really knew what I was going to do with them. I had no knowledge of what sound art was, or radio art, or electroacoustic music. I switched on the recorder when I felt I had to – I just knew I had to collect these sounds, somehow, for me.

As the years went on and I understood the possibilities for using sound captured in the wild on both a conceptual and technical level, and with the advent of tools to use them easily, I found to my surprise that the act of recording (when in India, at least) didn’t really change. I still felt I was documenting something that was personal and vital to my identity or heart, and the urge to turn on the recorder still came from a very deep place. It could easily have been that I gathered field sound in response to or in order to complete some kind of musical idea, but every time I tried to turn on the recorder in order to gather “assets” for my music, I found myself resisting. So in the end I just let it be, safe in the knowledge that whatever I gathered had a function for me, and may (or may not) in future have a function for my music or sound work. It didn’t feel authentic to gather sound otherwise.

Even though this is your own heritage, I suppose it’s simultaneously something foreign. How did you relate to that, both before and after the trip?

My father moved to Winnipeg, in the center of Canada, almost 60 years ago, and at the time there were next to no Indians (i.e., people from India) there. I grew up knowing all the brown people in the city. It was a different time, and the community was so small, and from all over India and the subcontinent. Passing on art, stories, myth and music was important, but not so much language, and it was easy to feel overwhelmed – I think that passing on of culture operated very differently from family to family, with no overall cultural support at large to bolster that identity for us.

My mom – who used to dance with Uday Shankar’s troupe – would corral all the community children to choreograph “dance-dramas” based on Hindu myths. The first wave of Indian people in Winnipeg finally built the first Hindu temple in my childhood – until then we would congregate in people’s basement altars, or in apartment building common rooms.

There was definitely a relationship with India, but it was one that left me what I call “in/between” cultures. I had to find my own way to incorporate my cultural heritage with my life in Canada. For a long time, I had two parallel lives — which seemed to work fine, but when I started getting serious about music it became something I really had to wrestle with. On the one hand, there was this deep and rich musical heritage that I had tenuous connections to. On the other hand, I was also interested in the 2-Tone music of the UK, American hardcore, and experimental music. I took tabla lessons in my youth, as I was interested in and playing drums, but I knew enough to know I would never be a classical player, and had no interest in pursuing that path, understanding even then that my practice would be eclectic.

I did have a desire to contribute to my Indian heritage from where I sat – to express somehow that “in/between”-ness. And the various trips I undertook on my own to India since I was a young person were in part an effort to explore what that expression might take, whether I knew it or not. The collections of field recordings (audio and later video) became a parcel of sound that somehow was a thread to my practice in Canada on the “world music” stage and later in the realms of sound art and composition.

One of the projects I do is a durational improvised concert called “The (X) Music Conference”, which is modeled after the all-night classical music concerts that take place across India. They start in the evening and the headliner usually goes on around 4am and plays for 3 or more hours. Listening to music for that long, and all night, does something to your brain. I wanted to give that experience to audience members, but I’m only one person, so my concert starts at midnight and goes to 7am. There is tea and other snacks, and people can sit or lie down. I wanted to actualize this idea of form (the classical music concert) suffused with my own content (sound improvisations) – it was a way to connect the music culture of India to my own practice. Using field recordings in my solo work is another, or re-presenting/-imagining Hindu myths another.

I think with the development of the various facets of my sound practice, I’ve found a way to incorporate this “form and content” approach, allowing the way that my cultural heritage functions in my psyche to express itself through the tools I use in various ways. It wasn’t an easy process to come to this balance, but along the way I played music with a lot of amazing people that encouraged me in my explorations.

In terms of integrating what you learned, what was the process of applying that material to your work? How did your work change from its usual idioms?

I went through a long process of compartmentalizing when I discovered (and consumer technology supported) producing electroacoustic work easily. When I was concentrating on playing live music with others on the stage, I spent a lot of time studying various drumming traditions under masters all over – Cairo, Athens, NYC, LA, Toronto – and that was really what kept me curious and driven, knowing I was only glimpsing something that was almost unknowable completely.

As the “world music” industry developed, though, I found the “story” of playing music based on these traditions less and less engaging, and the straight folk festival concert format more and more trivial – fun, but trivial – in some ways. I was driven to tell stories with sound in ways that were more satisfying to me, that ran deeper. These field recordings were a way in, and I made my first record with this in mind – Quell. I simply sat down and gathered my ideas and field recordings, and started to work. It was the first time I really sustained an artistic intention all the way through a major project on my own. As I gained facility with my tools, and as I became more educated on what was out there in the world of this kind of sound practice, I found myself seeking these kinds of sound contexts more and more.

However, what I also started to do was eschew my percussion experience. I’m not sure why, but it was a long time before I gave myself permission to introduce more musical and percussion elements into the sound art type of work I was producing. I think in retrospect I was making up rules that I thought applied, in an effort to navigate this new world of sound production – maybe that was what was happening. I think now I’m finding a balance between music, sound, and story that feels good to me. It took a while though.

I’m curious about how you constructed this. You’ve talked a bit about assembling materials over a longer span of time (which is interesting, too, as I know Robert is working the same way). As we come along on this journey of the album, what are we hearing; how did it come together? I know some of it is live… how did you then organize it?

This balance between the various facets of my sound practice is a delicate one, but it’s also driven by instinct, because really, instinct is all I have to depend on. Whereas before I would give myself very strict parameters about how or what I would produce for a given project, now I’m more comfortable drawing from many kinds of sound production practice.

Many of the pieces on “The White Dog” started as small ideas – procedural or mixing explorations. The “Harmonium” pieces were from a remix of the soundtrack to a video art piece I made at the Banff Centre in Canada, where I wanted to make that video piece a kind of club project. “entr’acte” is from a live concert I did with prepared guitar and laptop accompanying the works of Canadian visual artist Clive Holden. Tracks on other records were part of scores for contemporary dance choreographer Peggy Baker (who has been a huge influence on how I make music, speaking of being open). What brought all these pieces together was in a large part instinct, but also a kind of story that I felt was being told. This cross pollination of an implied dramatic thread is important to me.

And there’s some really beautiful range of percussion and the like. What are the sources for the record? How did you layer them?

I’ve quite a collection, and luckily I’ve built that collection through real relationships with the instruments, both technical and emotional/spiritual. They aren’t just cool sounds (although they’re that, too) — but each has a kind of voice that I’ve explored and understood in how I play it. In that regard, it’s pretty clear to me what instrument needs to be played or added as I build a track.

Something new happens when you add a live person playing a real thing inside an electronic environment. It’s something I feel is a deep part of my voice. It’s not the only way to hear a person inside a piece of music, but it’s the way I put myself in my works. I love metallic sounds, and sounds with a lot of sustain, or power. I’m intrigued by how percussion can be a texture as well as a rhythm, so that is something I explore. I’m a huge fan of French percussionist Le Quan Ninh, so the bass-drum-as-tabletop is a big part of my live setup and also my studio setup.

This programmatic element is part of what makes this so compelling to me as a full LP. How has your experience in the theater imprinted on your musical narratives?

My theater work encompasses a wide range of theater practice – from very experimental and small to quite large stages. Usually I do both the sound design and the music, meaning pretty much anything coming out of a speaker from sound effects to music.

My inspiration starts from many non-musical places. That’s mostly the text/story, but not always — anything could spark a cue, from the set design to the director’s ideas to even how an actor moves. Being open to these elements has made me a better composer, as I often end up reacting to something that someone says or does, and follow a path that ends up in music that I never would have made on my own. It has also made me understand better how to tell stories, or rather maybe how not to – the importance of inviting the audience into the construction of the story and the emotion of it in real time. Making the listener lean forward instead of lean back, if you get me.

This practice of collaborative storytelling of course has impact on my solo work (and vice versa) – it’s made me find a voice that is more rooted in story, in comparison to when I was spending all my time in bands. I think it’s made my work deeper and simpler in many ways — distilled it, maybe — so that the story becomes the main focus. Of course when I say “story” I mean not necessarily an explicit narrative, but something that draws the listener from end to end. This is really what drives the collecting and composition of a group of tracks for me (as well as the tracks themselves) and even my improvisations.

Oh, and on the narrative side – what’s going on with Buddha here, actually, as narrated by the ever Buddha-like Robert Lippok [composer/artist on Raster Media]?

I asked Robert Lippok to record some text for me many years ago, a kind of reimagining of the mind of Gautama Buddha under the bodhi tree in the days leading to his enlightenment. I had this idea that maybe what was going through his mind might not have been what we may imagine when we think of the myth itself. I’m not sure where this idea came from – although I’m sure that hearing many different versions of the same myths from various sources while growing up had its effect – but it was something I thought was interesting. I do this often with my works (see above link to Kailash) and again, it’s a way I feel I can contribute to the understanding of my own cultural heritage in a way that is rooted in both my ancestors’ history as well as my own.

And of course, when one thinks of what the Buddha might have sounded like, I defy you to find someone who sounds more perfect than Robert Lippok.

Techno is some kind of undercurrent for this label, maybe not in the strict definition of the genre… I wonder actually if you could talk a bit about pattern and structure. There are these rhythms throughout that are really hypnotic, that regularity seems really important. How do you go about thinking about those musical structures?

The rhythms I seem drawn to run the gamut of time signatures and tempos. Of course, this comes from my studies of various music traditions and repertoire (Arabic, Greek, Turkish, West Asian, South Indian…). As a hand percussionist for many years playing and studying music from various cultures, I found a lot of parallels and cross talk particularly in the rhythms of the material I encountered. I delighted in finding the groove in various tempos and time signatures. There is a certain lilt to any rhythm; if you put your mind and hands to it, the muscles will reveal this lilt. At the same time, the sound material of electronic music I find very satisfying and clear. I’m at best a middling recording engineer, so capturing audio is not my forte – working in the box I find way easier. As I developed skills in programming and sound design, I seemed to be drawn to trying to express the rhythms I’ve encountered in my life with new tools and sounds.

Regularity and grid are important in rhythm – even breaking the grid, or stretching it to its breaking point, has a place. (You can hear this very well in South Indian music, among others.) This grid undercurrent is the basis of electronic music and the tools used to make it. The juxtaposition of the human element with various degrees of quantization of electronic sound is something I think I’ll never stop exploring. Even working strongly with a grid has a kind of energy and urgency to it if you’re playing acoustic instruments. There’s a lot to dive into, and I’m planning to work with that idea a lot more for the next release(s).

And where does Alvin Lucier fit in, amidst this Bengali context?

The real interest for me in creating art lies in actualizing ideas, and Lucier is perhaps one of the masters of this – taking an idea of sound and making it real and spellbinding. “Ng Ta (Lucier Mix)” was a piece I started to make with a number of noodle bowls I found in Toronto’s Chinatown – the white ones with blue fishes on them. The (over)tones and rhythms of the piece as it came together reminded me of a piece I’m really interested in performing, “Silver Streetcar for The Orchestra”, a piece for amplified triangle by Lucier. Essentially the musician plays an amplified triangle, muting and playing it in various places for the duration of the piece. It’s an incredible meditation, and to me Ng Ta on The White Dog is a meditation as well – it certainly came together in that way. And so the title.

I wrestle with the degree to which I invoke my cultural heritage in my work. Sometimes it’s very close to the surface, and the work is derived very directly from Hindu myth, say, or field recordings from Kolkata. Sometimes it simmers in other ways, and with varying strength. I struggle with allowing it to be expressed instinctually or more directly and with more intent. Ultimately, the music I make is from me, and all those ideas apply whether or not I think of them consciously.

One of the problems I have with the term “world music” is it’s a marketing term to allow the lumping together of basically “music not made by white people”, which is ludicrous (as well as other harsher words that could apply). To that end, the urge to classify my music as “Indian” in some way, while true, can also be a misnomer or an “out” for lazy listening. There are a billion people in India, I believe, and more on the subcontinent and abroad. Why wouldn’t a track like “entr’acte” be “Indian”? On the other hand, why would it? I’m also a product of the west. How can I manage those worlds and expectations and still be authentic? It’s something I work on and think about all the time – but not when I’m actually making music, thank goodness.

I’m curious about your live set, how you were working with the Novation controllers, and how you were looping, etc.

My live sets are always, always constructed differently – I’m horrible that way. I design new effects chains and different ways of using my outboard MIDI gear depending on the context. I might use contact mics on a kalimba and a prepared guitar for one show, and then a bunch of external percussion that I loop and chop live for another, and for another just my voice, and for yet another only field recordings from India. I’ve used Ableton Live to drive a lot of sound installations as well, using follow actions on clips (“any” comes in handy a lot), and I’ve even made some installations that do the same thing with live input (making sure I have a 5-second delay on that input has… been occasionally useful, shall we say).

The concert I put together for The White Dog project is one that I try and keep live as much as possible. It’s important to me to make sure there is room in the set for me to react to the room or the moment of performance – this is generally true for my live shows, but since I’m re-presenting songs that have a life on a record, finding a meaningful space for improv was trickier.

Essentially, I try and have as many physical knobs and faders as possible – either a Novation Launch Control XL or a Behringer BCR2000 [rotary controller], which is a fantastic piece of gear (I know – Behringer?!). I use a Launchpad Mini to launch clips and deal with grid-based effects, and I also have a little Launch Control mapped to the effects parameters and track views or effects I need to see and interact with quickly. Since I’m usually using both hands to play/mix, I always have a Logidy UMI3 to control live looping from a microphone. It’s a 3 button pedal which is luckily built like a tank, considering how many times I’ve dropped it. I program it in various ways depending on the project – for The White Dog concerts with MIDI learn in the Ableton looper to record/overdub, undo and clear button, but the Logidy software allows you to go a lot deeper. I have the option to feed up to 3 effects chains, which I sometimes switch on the fly with dummy clips.
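The pedal-to-looper mapping Debashis describes – three buttons, each assigned via MIDI learn to record/overdub, undo, and clear – boils down to translating incoming MIDI Control Change messages into looper commands. As a rough illustration only (this is not his actual configuration; the controller numbers 20–22 are hypothetical), here is how that dispatch might be sketched:

```python
# Illustrative sketch of mapping a 3-button MIDI pedal (like the Logidy
# UMI3) to looper actions. The CC numbers below are hypothetical; in
# practice you'd assign them via MIDI learn in your DAW or pedal editor.

# Hypothetical controller number -> looper command
PEDAL_MAP = {
    20: "record_overdub",  # button 1: toggle record/overdub
    21: "undo",            # button 2: undo the last layer
    22: "clear",           # button 3: clear the loop
}

def handle_midi(status, data1, data2):
    """Translate a raw 3-byte MIDI message into a looper action.

    status: 0xB0-0xBF means Control Change on channels 1-16
    data1:  controller number
    data2:  value (a press sends > 0; a release sends 0)
    Returns the action name, or None if the message is ignored.
    """
    is_cc = (status & 0xF0) == 0xB0
    if is_cc and data2 > 0:          # act on press, ignore release
        return PEDAL_MAP.get(data1)  # None for unmapped controllers
    return None

if __name__ == "__main__":
    # Button 2 pressed on channel 1 triggers an undo:
    print(handle_midi(0xB0, 21, 127))
```

In a real rig the raw bytes would arrive from a MIDI library’s input callback rather than being called directly; the point is just that each physical button reduces to one controller number, which is why a three-button pedal covers the whole record/undo/clear cycle with both hands free.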

The Max For Live community has been amazing and I often keep some kind of chopper on one of the effect chains, and use the User mode on the Launchpad Mini to punch in and out or alter the length of the loop or whatnot. Sometimes I keep controls for another looper on that grid.

Basically, if you want an overview – I’m triggering clips, and have a live mic that I use for percussion and voice for the looper. I try and keep the mixer in a 1:1 relationship with what’s being played/played back/routed to effects because I’m old school – I find it tricky to do much jumping around when I’m playing live instruments. It’s not the most complicated setup but it gets the job done, and I feel like I’ve struck a balance between electronics and live percussion, at least for this project.

What else are you listening to? Do you find that your musical diet is part of keeping you creative, or is it somehow partly separate?

I jump back and forth – sometimes I listen to tons of music with an ear to try and expand my mind, sometimes just to enjoy myself. Sometimes I stop listening to music just because I’m making a lot on my own. One thing I try to always take care of is my mind. I try to keep it open and curious, and try to always find new ideas to ponder. I am inspired by a lot of different things – paintings, visual art, music, sound art, books – and in general I’m really curious about how people make an idea manifest – science, art, economics, architecture, fashion, it doesn’t matter. Looking into or trying to derive that jump from the mind idea to the actual real life expression of it I find endlessly fascinating and inspiring, even when I’m not totally sure how it might have happened. It’s the guessing that fuels me.

That being said, at the moment I’m listening to lots of things that I feel are percolating some ideas in me for future projects, and most of it coming from digging around the amazing Bandcamp site. Frank Bretschneider turned me on to goat(jp), which is an incredible quartet from Japan with incredible rhythmic and textural muscle. I’ve rediscovered the fun of listening to lots of Stereolab, who always seem to release the same record but still make it sound fresh. Our pal Robert Lippok just released a new record and I am so down with it – he always makes music that straddles the emotional and the electronic, which is something I’m so interested in doing.

I continue to make my way through the catalog of French percussionist Le Quan Ninh, who is an absolute warrior in his solo percussion improvisations. Tanya Tagaq is an incredible singer from Canada – I’m sure many of the people reading this know of her – and her live band – drummer Jean Martin, violinist Jesse Zubot, and choirmaster Christine Duncan, an incredible improv vocalist in her own right – are unstoppable. We have a great free music scene in Toronto, and I love so many of the musicians who are active in it, many of them internationally known – Nick Fraser (drummer/composer), Lina Allemano (trumpet), Andrew Downing (cello/composer), Brodie West (sax) – not to mention folks like Sandro Perri and Ryan Driver. They’ve really lit a fire under me to be fierce and in the moment – listening to them is a recurring lesson in what it means to be really punk rock.

Buy and download the album now on Bandcamp.

https://debsinha.bandcamp.com/album/the-white-dog

The post Exploring a journey from Bengali heritage to electronic invention appeared first on CDM Create Digital Music.