From food stamps and survival to writing the songs you know

“I don’t know what I’m doing,” says artist and composer Allee Willis. Yet her output ranges from Earth, Wind & Fire’s “September” to the theme song of Friends. If you don’t know Willis, you should – and her story might inspire yours.

Behind all the cheery social media these days, most artists you talk to have struggled. They’ve struggled with creativity and sobriety, mental health and creative blocks, unfriendly industries and obscurity. And sometimes they’ve struggled just to get by – which is where Allee Willis was in 1978, living off food stamps and wondering what would happen next.

What happened next is a career that led to an insane number of hit songs – along with plenty of other fascinating side trips into kitsch and art. (There’s a kitsch-themed social network, an artist alter ego named Bubbles, and a music video duet with a 91-year-old woman drummer on an oxygen tank, to name a few.) But what it hasn’t involved is a lot of widespread personal notoriety. Allee Willis is a celebrity’s celebrity, which is to say famous people know her but most people don’t know she’s famous.

This story is at least partly about that gap. The odds that you don’t know her? Decent. The odds that you don’t know her songs? Slim to none.

Let’s go: Earth, Wind & Fire’s “September” and “Boogie Wonderland,” The Pointer Sisters’ “Neutron Dance,” Pet Shop Boys with Dusty Springfield’s “What Have I Done To Deserve This.” The theme from Friends, recorded by The Rembrandts (if you knew that, which I suspect you didn’t)… all these and more add up to 60 million records. And she co-authored the Oprah Winfrey-produced, Tony and Grammy-winning Broadway musical The Color Purple. More songs you know in movies: Beverly Hills Cop, The Karate Kid (“You’re the Best”), Howard the Duck.

The Detroit native is an impassioned user of Web tech and animation – she networked machines together to design an orchestration workflow for The Color Purple musical – and now lives in LA with … Pro Tools, of course, alongside some cats.

But this isn’t about her resume so much as it is about what she says drives her – that itch to create stuff. And for anyone worried about how to get into the creative zone, maybe the first step is to stop worrying about getting into the creative zone. We value analysis and self-critique so much that sometimes we forget to just have fun making – and to stop worrying about even our own opinions (or maybe, especially those). In the end, it’s that instinct that has driven her work, and presumably lots of stuff that didn’t do as well as that Friends theme song. (But there are her cats. Not the Broadway kind; that’s Andrew Lloyd Webber – the furry ones.)

There’s a great video out from the CNN-produced Web video series Great Big Story:

And her site is a wild 1999-vintage-design wonderland of HTML, if you want to dive in:

https://alleewillis.com

More:

How she wrote “What Have I Done to Deserve This” gets into her musical thinking – and incongruity (and she sure does seem like she knows what she’s doing):

Plus how she hears and why she needed a Fender Rhodes:

Exploring a journey from Bengali heritage to electronic invention

Can electronic music tell a story about who we are? Debashis Sinha talks about his LP for Establishment, The White Dog, and how everything from Toronto noodle bowls to Bengali field recordings got involved.

The Canadian artist has a unique knack for melding live percussion techniques and electro-acoustic sound with digital manipulation, and in The White Dog, he dives deep into his own Bengali heritage. Just don’t think of “world music.” What emerges is deeply his own, and electro-acoustic through its whole course – not a pastiche of someone else’s musical tradition glued onto some beats. And that’s what drew me to it – this is really the sound of the culture of Debashis, the individual.

And that seems connected to what electronic music production can be – where its relative ease and accessibility can allow us to focus on our own performance technique and a deeper sense of expression. So it’s a great chance not just to explore this album, but also what the journey behind it might say to the rest of us.

CDM’s label side project Establishment put out the new release. I spoke to Debashis just after he finished a trip to Germany and a live performance of the album at our event in Berlin. He writes us from his home in Toronto.

First, the album:

I want to start with this journey you took across India. What was that experience like? How did you manage to gather material during that process?

I’ve been to India many times to travel on my own since I turned 18 – usually I spend time with family in and near Kolkata, West Bengal and then travel around, backpacking style. Since the days of Walkman cassette recorders, I’ve always carried something with me to record sound. I didn’t have a real agenda in mind when I started doing it – it was the time of cassettes, really, so in my mind there wasn’t much I could do with these recordings – but it seemed like an important process to undertake. I never really knew what I was going to do with them. I had no knowledge of what sound art was, or radio art, or electroacoustic music. I switched on the recorder when I felt I had to – I just knew I had to collect these sounds, somehow, for me.

As the years went on and I understood the possibilities for using sound captured in the wild on both a conceptual and technical level, and with the advent of tools to use them easily, I found to my surprise that the act of recording (when in India, at least) didn’t really change. I still felt I was documenting something that was personal and vital to my identity or heart, and the urge to turn on the recorder still came from a very deep place. It could easily have been that I gathered field sound in response to or in order to complete some kind of musical idea, but every time I tried to turn on the recorder in order to gather “assets” for my music, I found myself resisting. So in the end I just let it be, safe in the knowledge that whatever I gathered had a function for me, and may (or may not) in future have a function for my music or sound work. It didn’t feel authentic to gather sound otherwise.

Even though this is your own heritage, I suppose it’s simultaneously something foreign. How did you relate to that, both before and after the trip?

My father moved to Winnipeg, in the center of Canada, almost 60 years ago, and at the time there were next to no Indians (i.e. people from India) there. I grew up knowing all the brown people in the city. It was a different time, and the community was so small, and from all over India and the subcontinent. Passing on art, stories, myth and music was important, but not so much language, and it was easy to feel overwhelmed – I think that passing on of culture operated very differently from family to family, with no overall cultural support at large to bolster that identity for us.

My mom – who used to dance with Uday Shankar’s troupe – would corral all the community children to choreograph “dance-dramas” based on Hindu myths. The first wave of Indian people in Winnipeg finally built the first Hindu temple in my childhood – until then we would congregate in people’s basement altars, or in apartment building common rooms.

There was definitely a relationship with India, but it was one that left me what I call “in/between” cultures. I had to find my own way to incorporate my cultural heritage with my life in Canada. For a long time, I had two parallel lives — which seemed to work fine, but when I started getting serious about music it became something I really had to wrestle with. On the one hand, there was this deep and rich musical heritage that I had tenuous connections to. On the other hand, I was also interested in the 2-Tone music of the UK, American hardcore, and experimental music. I took tabla lessons in my youth, as I was interested in drums and already playing them, but I knew enough to know I would never be a classical player, and had no interest in pursuing that path, understanding even then that my practice would be eclectic.

I did have a desire to contribute to my Indian heritage from where I sat – to express somehow that “in/between”-ness. And the various trips I undertook on my own to India since I was a young person were in part an effort to explore what form that expression might take, whether I knew it or not. The collections of field recordings (audio and later video) became a parcel of sound that somehow was a thread to my practice in Canada on the “world music” stage and later in the realms of sound art and composition.

One of the projects I do is a durational improvised concert called “The (X) Music Conference”, which is modeled after the all-night classical music concerts that take place across India. They start in the evening and the headliner usually goes on around 4am and plays for 3 or more hours. Listening to music for that long, and all night, does something to your brain. I wanted to give that experience to audience members, but I’m only one person, so my concert starts at midnight and goes to 7am. There is tea and other snacks, and people can sit or lie down. I wanted to actualize this idea of form (the classical music concert) suffused with my own content (sound improvisations) – it was a way to connect the music culture of India to my own practice. Using field recordings in my solo work is another way; re-presenting and reimagining Hindu myths is yet another.

I think with the development of the various facets of my sound practice, I’ve found a way to incorporate this “form and content” approach, allowing the way that my cultural heritage functions in my psyche to express itself through the tools I use in various ways. It wasn’t an easy process to come to this balance, but along the way I played music with a lot of amazing people that encouraged me in my explorations.

In terms of integrating what you learned, what was the process of applying that material to your work? How did your work change from its usual idioms?

I went through a long process of compartmentalizing when I discovered electroacoustic work – and when consumer technology made producing it easy. When I was concentrating on playing live music with others on the stage, I spent a lot of time studying various drumming traditions under masters all over – Cairo, Athens, NYC, LA, Toronto – and that was really what kept me curious and driven, knowing I was only glimpsing something that was almost unknowable completely.

As the “world music” industry developed, though, I found the “story” of playing music based on these traditions less and less engaging, and the straight folk festival concert format more and more trivial – fun, but trivial – in some ways. I was driven to tell stories with sound in ways that were more satisfying to me, that ran deeper. These field recordings were a way in, and I made my first record with this in mind – Quell. I simply sat down and gathered my ideas and field recordings, and started to work. It was the first time I really sustained an artistic intention all the way through a major project on my own. As I gained facility with my tools, and as I became more educated on what was out there in the world of this kind of sound practice, I found myself seeking these kinds of sound contexts more and more.

However, what I also started to do was eschew my percussion experience. I’m not sure why, but it was a long time before I gave myself permission to introduce more musical and percussion elements into the sound art type of work I was producing. In retrospect, I think I was making up rules that I thought applied, in an effort to navigate this new world of sound production. I think now I’m finding a balance between music, sound, and story that feels good to me. It took a while though.

I’m curious about how you constructed this. You’ve talked a bit about assembling materials over a longer span of time (which is interesting, too, as I know Robert is working the same way). As we come along on this journey of the album, what are we hearing; how did it come together? I know some of it is live… how did you then organize it?

This balance between the various facets of my sound practice is a delicate one, but it’s also driven by instinct, because really, instinct is all I have to depend on. Whereas before I would give myself very strict parameters about how or what I would produce for a given project, now I’m more comfortable drawing from many kinds of sound production practice.

Many of the pieces on “The White Dog” started as small ideas – procedural or mixing explorations. The “Harmonium” pieces were from a remix of the soundtrack to a video art piece I made at the Banff Centre in Canada, where I wanted to make that video piece a kind of club project. “entr’acte” is from a live concert I did with prepared guitar and laptop accompanying the works of Canadian visual artist Clive Holden. Tracks on other records were part of scores for contemporary dance choreographer Peggy Baker (who has been a huge influence on how I make music, speaking of being open). What brought all these pieces together was in large part instinct, but also a kind of story that I felt was being told. This cross-pollination of an implied dramatic thread is important to me.

And there’s a really beautiful range of percussion and the like here. What are the sources for the record? How did you layer them?

I’ve quite a collection, and luckily I’ve built that collection through real relationships with the instruments, both technical and emotional/spiritual. They aren’t just cool sounds (although they’re that, too) — but each has a kind of voice that I’ve explored and understood in how I play it. In that regard, it’s pretty clear to me what instrument needs to be played or added as I build a track.

Something new happens when you add a live person playing a real thing inside an electronic environment. It’s something I feel is a deep part of my voice. It’s not the only way to hear a person inside a piece of music, but it’s the way I put myself in my works. I love metallic sounds, and sounds with a lot of sustain, or power. I’m intrigued by how percussion can be a texture as well as a rhythm, so that is something I explore. I’m a huge fan of French percussionist Le Quan Ninh, so the bass-drum-as-tabletop is a big part of my live setup and also my studio setup.

This programmatic element is part of what makes this so compelling to me as a full LP. How has your experience in the theater imprinted on your musical narratives?

My theater work encompasses a wide range of theater practice – from very experimental and small to quite large stages. Usually I do both the sound design and the music, meaning I handle pretty much anything coming out of a speaker, from sound effects to music.

My inspiration starts from many non-musical places. That’s mostly the text/story, but not always — anything could spark a cue, from the set design to the director’s ideas to even how an actor moves. Being open to these elements has made me a better composer, as I often end up reacting to something that someone says or does, and following a path that ends up in music that I never would have made on my own. It has also made me understand better how to tell stories, or rather maybe how not to – the importance of inviting the audience into the construction of the story and the emotion of it in real time. Making the listener lean forward instead of lean back, if you get me.

This practice of collaborative storytelling of course has an impact on my solo work (and vice versa) – it’s made me find a voice that is more rooted in story, in comparison to when I was spending all my time in bands. I think it’s made my work deeper and simpler in many ways — distilled it, maybe — so that the story becomes the main focus. Of course when I say “story” I mean not necessarily an explicit narrative, but something that draws the listener from end to end. This is really what drives the collecting and composition of a group of tracks for me (as well as the tracks themselves) and even my improvisations.

Oh, and on the narrative side – what’s going on with Buddha here, actually, as narrated by the ever Buddha-like Robert Lippok [composer/artist on Raster Media]?

I asked Robert Lippok to record some text for me many years ago, a kind of reimagining of the mind of Gautama Buddha under the bodhi tree in the days leading to his enlightenment. I had this idea that maybe what was going through his mind might not have been what we may imagine when we think of the myth itself. I’m not sure where this idea came from – although I’m sure that hearing many different versions of the same myths from various sources while growing up had its effect – but it was something I thought was interesting. I do this often with my works (see above link to Kailash) and again, it’s a way I feel I can contribute to the understanding of my own cultural heritage in a way that is rooted in both my ancestors’ history and my own.

And of course, when one thinks of what the Buddha might have sounded like, I defy you to find someone who sounds more perfect than Robert Lippok.

Techno is some kind of undercurrent for this label, maybe not in the strict definition of the genre… I wonder actually if you could talk a bit about pattern and structure. There are these rhythms throughout that are really hypnotic, that regularity seems really important. How do you go about thinking about those musical structures?

The rhythms I seem drawn to run the gamut of time signatures and tempos. Of course, this comes from my studies of various music traditions and repertoire (Arabic, Greek, Turkish, West Asian, South Indian…). As a hand percussionist for many years playing and studying music from various cultures, I found a lot of parallels and cross talk particularly in the rhythms of the material I encountered. I delighted in finding the groove in various tempos and time signatures. There is a certain lilt to any rhythm; if you put your mind and hands to it, the muscles will reveal this lilt. At the same time, the sound material of electronic music I find very satisfying and clear. I’m at best a middling recording engineer, so capturing audio is not my forte – working in the box I find way easier. As I developed skills in programming and sound design, I seemed to be drawn to trying to express the rhythms I’ve encountered in my life with new tools and sounds.

Regularity and grid are important in rhythm – even breaking the grid, or stretching it to its breaking point, has a place. (You can hear this very well in South Indian music, among others.) This grid undercurrent is the basis of electronic music and the tools used to make it. The juxtaposition of the human element with various degrees of quantization of electronic sound is something I think I’ll never stop exploring. Even working strongly with a grid has a kind of energy and urgency to it if you’re playing acoustic instruments. There’s a lot to dive into, and I’m planning to work with that idea a lot more for the next release(s).
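
Ed.: the “various degrees of quantization” Debashis describes can be made concrete. Most DAWs implement quantize strength as a linear pull of each played event toward the nearest grid point – here’s a minimal sketch of that idea in Python, with made-up timing values for illustration:

```python
# Partial quantization: pull event times toward a grid by strength s in [0, 1].
# s = 1.0 snaps hard to the grid; s = 0.0 leaves the human timing untouched.
def quantize(times, grid=0.25, strength=0.5):
    """times and grid are in beats (grid=0.25 means 16th notes in 4/4)."""
    return [t + strength * (round(t / grid) * grid - t) for t in times]

played = [0.02, 0.27, 0.49, 0.77, 1.01]   # a slightly loose hand-played bar
print(quantize(played, strength=1.0))      # snaps to 0.0, 0.25, 0.5, 0.75, 1.0
print(quantize(played, strength=0.5))      # halfway there: keeps some of the feel
```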

And where does Alvin Lucier fit in, amidst this Bengali context?

The real interest for me in creating art lies in actualizing ideas, and Lucier is perhaps one of the masters of this – taking an idea of sound and making it real and spellbinding. “Ng Ta (Lucier Mix)” was a piece I started to make with a number of noodle bowls I found in Toronto’s Chinatown – the white ones with blue fishes on them. The (over)tones and rhythms of the piece as it came together reminded me of a work I’ve long been interested in performing: “Silver Streetcar for the Orchestra”, Lucier’s piece for amplified triangle. Essentially the musician plays an amplified triangle, muting and playing it in various places for the duration of the piece. It’s an incredible meditation, and to me Ng Ta on The White Dog is a meditation as well – it certainly came together in that way. And so the title.

I wrestle with the degree to which I invoke my cultural heritage in my work. Sometimes it’s very close to the surface, and the work is derived very directly from Hindu myth, say, or field recordings from Kolkata. Sometimes it simmers in other ways, and with varying strength. I struggle with whether to allow it to be expressed instinctually, or more directly and with more intent. Ultimately, the music I make is from me, and all those ideas apply whether or not I think of them consciously.

One of the problems I have with the term “world music” is that it’s a marketing term that allows the lumping together of basically “music not made by white people”, which is ludicrous (as well as other harsher words that could apply). To that end, the urge to classify my music as “Indian” in some way, while true, can also be a misnomer or an “out” for lazy listening. There are over a billion people in India, and more on the subcontinent and abroad. Why wouldn’t a track like “entr’acte” be “Indian”? On the other hand, why would it? I’m also a product of the West. How can I manage those worlds and expectations and still be authentic? It’s something I work on and think about all the time – but not when I’m actually making music, thank goodness.

I’m curious about your live set, how you were working with the Novation controllers, and how you were looping, etc.

My live sets are always, always constructed differently – I’m horrible that way. I design new effects chains and different ways of using my outboard MIDI gear depending on the context. I might use contact mics on a kalimba and a prepared guitar for one show, and then a bunch of external percussion that I loop and chop live for another, and for another just my voice, and for yet another only field recordings from India. I’ve used Ableton Live to drive a lot of sound installations as well, using follow actions on clips (“any” comes in handy a lot), and I’ve even made some installations that do the same thing with live input (making sure I have a 5-second delay on that input has… been occasionally useful, shall we say).
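
Ed.: for anyone who hasn’t used them, Live’s follow actions automatically pick the next clip when one finishes – “Any” chooses at random, which is why it suits installations that shouldn’t settle into a loop. A toy sketch of that behavior outside Live, with invented clip names and lengths:

```python
import random
import time

# Simulating a Live-style "Any" follow action: when a clip ends, jump to a
# random clip in the group (possibly the same one), so playback keeps shifting.
clips = {"waves": 4.0, "bells": 2.5, "voice": 6.0}   # clip name -> length (s)
current = "waves"
for _ in range(8):
    print(f"playing {current} for {clips[current]}s")
    time.sleep(0.1)                         # stand-in for the clip's real duration
    current = random.choice(list(clips))    # follow action: Any
```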

The concert I put together for The White Dog project is one that I try and keep live as much as possible. It’s important to me to make sure there is room in the set for me to react to the room or the moment of performance – this is generally true for my live shows, but since I’m re-presenting songs that have a life on a record, finding a meaningful space for improv was trickier.

Essentially, I try and have as many physical knobs and faders as possible – either a Novation Launch Control XL or a Behringer BCR2000 [rotary controller], which is a fantastic piece of gear (I know – Behringer?!). I use a Launchpad Mini to launch clips and deal with grid-based effects, and I also have a little Launch Control mapped to the effects parameters and track views or effects I need to see and interact with quickly. Since I’m usually using both hands to play/mix, I always have a Logidy UMI3 to control live looping from a microphone. It’s a 3-button pedal which is luckily built like a tank, considering how many times I’ve dropped it. I program it in various ways depending on the project – for The White Dog concerts, it’s MIDI-learned to record/overdub, undo, and clear in the Ableton looper, but the Logidy software allows you to go a lot deeper. I have the option to feed up to 3 effects chains, which I sometimes switch on the fly with dummy clips.
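
Ed.: the pedal side of this is plain MIDI – each button sends a message you can map with MIDI learn, or intercept yourself. A minimal Python sketch using the mido library; the note numbers and the three-way mapping are assumptions for illustration, not the UMI3’s actual defaults:

```python
import mido  # pip install mido python-rtmidi

# Hypothetical mapping: three pedal buttons arriving as note-ons 60/61/62.
ACTIONS = {60: "record/overdub", 61: "undo", 62: "clear"}

with mido.open_input() as port:     # default input port; pass a name to choose
    for msg in port:                # blocks, yielding messages as they arrive
        if msg.type == "note_on" and msg.velocity > 0:
            action = ACTIONS.get(msg.note)
            if action:
                print(f"looper: {action}")  # stand-in for triggering Live's Looper
```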

The Max For Live community has been amazing and I often keep some kind of chopper on one of the effect chains, and use the User mode on the Launchpad Mini to punch in and out or alter the length of the loop or whatnot. Sometimes I keep controls for another looper on that grid.

Basically, if you want an overview – I’m triggering clips, and have a live mic that I use for percussion and voice for the looper. I try and keep the mixer in a 1:1 relationship with what’s being played/played back/routed to effects because I’m old school – I find it tricky to do much jumping around when I’m playing live instruments. It’s not the most complicated setup but it gets the job done, and I feel like I’ve struck a balance between electronics and live percussion, at least for this project.

What else are you listening to? Do you find that your musical diet is part of keeping you creative, or is it somehow partly separate?

I jump back and forth – sometimes I listen to tons of music with an ear to expanding my mind, sometimes just to enjoy myself. Sometimes I stop listening to music just because I’m making a lot on my own. One thing I try to always take care of is my mind. I try to keep it open and curious, and try to always find new ideas to ponder. I am inspired by a lot of different things – paintings, visual art, music, sound art, books – and in general I’m really curious about how people make an idea manifest – science, art, economics, architecture, fashion, it doesn’t matter. That jump from the idea in the mind to its actual real-life expression is something I find endlessly fascinating and inspiring, even when I’m not totally sure how it might have happened. It’s the guessing that fuels me.

That being said, at the moment I’m listening to lots of things that I feel are percolating some ideas in me for future projects, most of it coming from digging around the amazing Bandcamp site. Frank Bretschneider turned me on to goat(jp), an incredible quartet from Japan with serious rhythmic and textural muscle. I’ve rediscovered the fun of listening to lots of Stereolab, who always seem to release the same record but still make it sound fresh. Our pal Robert Lippok just released a new record and I am so down with it – he always makes music that straddles the emotional and the electronic, which is something I’m so interested in doing.

I continue to make my way through the catalog of French percussionist Le Quan Ninh, who is an absolute warrior in his solo percussion improvisations. Tanya Tagaq is an incredible singer from Canada – I’m sure many of the people reading this know of her – and her live band of drummer Jean Martin, violinist Jesse Zubot, and choirmaster Christine Duncan (an incredible improv vocalist in her own right) is unstoppable. We have a great free music scene in Toronto, and I love so many of the musicians who are active in it, many of them internationally known – Nick Fraser (drummer/composer), Lina Allemano (trumpet), Andrew Downing (cello/composer), Brodie West (sax) – not to mention folks like Sandro Perri and Ryan Driver. They’ve really lit a fire under me to be fierce and in the moment – listening to them is a recurring lesson in what it means to be really punk rock.

Buy and download the album now on Bandcamp.

https://debsinha.bandcamp.com/album/the-white-dog

From Japan, an ambient musician on solitude and views of the sea

As haunting, oceanic wells of sound sing achingly in the background, Tokyo-based ambient musician Chihei Hatakeyama talks in a new documentary about what inspires him.

The creative series toco toco follows the musician to the places and views that inspired the images of his music – including gazing into the sea. Of that view, he says:

“There wasn’t any gap in space, it was translating directly into music.”

Filmmaker Anne Ferrero writes to share her work, as she follows the artist “to the roots of his universe, in the Kamakura and Enoshima areas, where he grew up.”

And he speaks of the beauty in ambient music, and its connection to nature. And while solitude in computer music is often seen as something of a liability, here he talks about its importance – as he uses his laptop as a box for editing improvisations.

“Being able to create music alone made it more personal. The music that I wanted to make could now express my mind – what I felt inside.”

The film is subtitled in English, with Japanese audio. (Don’t forget to turn CC on.)

It’s a deeply personal film throughout, and it even talks about the journey from electronic sounds on dancefloors to the quieter, more contemplative world of ambient music. And he finds that moment of liberating himself from the beat – not by trying to copy what people would call ambient music on a superficial level, but by fumbling his way to this solution after eliminating obstacles to expression.

Hey, I love both modes of music, myself, so I can appreciate that balance. It’s just rained here in Berlin, and I’m reminded of that feeling of relief when it rains after long periods of sun … and vice versa. Maybe music is the same way.

Have a watch, and I’m sure you’ll want to pick up a guitar or laptop, or go to a beach, or take a personal field trip to the museum and stare at paintings.

Painting with colors in sound … filling the world with oceans of your own expression. What could be more lovely?

Now, an insane amount of beautiful music:

http://www.chihei.org

https://www.discogs.com/artist/440866-Chihei-Hatakeyama

https://chiheihatakeyama.bandcamp.com

These fanciful new apps weave virtual music worlds in VR and AR

Virtual reality and augmented reality promise new horizons for music. But one studio is delivering apps you’ll actually want to use – including collaborations with artists like Matmos, Safety Scissors, Robert Lippok, Patrick Russell, Ami Yamasaki, and Patrick Higgins (of Zs).

Consumer-accessible graphics hardware and computation – particularly on mobile – is finally keeping up with the demands of immersive 3D visuals and sound. That includes virtual reality (when you completely block out the outside world, most often using goggles), and mixed reality or augmented reality, which blends views of the world around you with 3D imagery. (Microsoft seems to prefer “mixed reality,” and still has you wearing some goggles; Apple likes “augmented reality,” even if that harkens back to some old apps that did weird things with markers and tags. I think I’ve got that right.)

And indeed, we’ve seen this stuff highlighted a lot recently, from game and PC companies talking VR (including via Steam), Facebook showing off Oculus (the Kickstarter-funded project it acquired), and this week Apple making augmented reality a major selling point of its coming iOS releases and developer tools.

But what is this stuff actually for?

That question is still open to creative interpretation. What New York City-based studio Planeta is doing is showing off something artful, not just a tech demo.

They’ve got two apps now, one for VR, and one for AR.

Fields is intended both for listening and creation. Sounds form spatial “sculptures,” which you can build up on your own by assembling loops or recording sounds, then mix with the environment around you – as viewed through the display of your iOS device. There’s a lovely, poetic trailer:

Unlike the sound toys we saw just after the release of the original iPhone App Store, though, they’re partnering with composers and musicians to make sure Fields gets used creatively. It’s a bit like turning it into a (mobile) venue. So in addition to Matmos, you get creations by the likes of a Ryuichi Sakamoto collaborator, or Robert Lippok (of Raster Media, née Raster-Noton).

But if you think you have something to say, too, and you aren’t one of those artists, you can also share your own creations as videos, constructed from original sounds and motion captured with your device’s camera and mic.

The developers of Fields are also partnering with the Guggenheim to showcase the app. And they’re helping Berlin’s Monom space, which is powered by the 4DSOUND spatial audio system, to deliver sounds that otherwise would have to get squashed into a bland stereo mix. The ability to appreciate spatial works outside of limited installation venues may help listeners get deeper with the music, and take the experience home.

The results can be totally crazy. Here’s one example:

Pitchfork go into some detail as to how this app came about:

Fields Wants to Be The Augmented Reality App for Experimental Music Fans and Creators Alike

More on the app, including a download, on its site:

http://fields.planeta.cc/

And then there’s Drops – a “rhythm garden.”

We’ve seen some clumsy attempts at VR for music before. Generally, they involve rethinking interfaces that already work perfectly well in hardware controllers or onscreen with a mouse, and “reimagining” them in a way that … makes them slightly stupid to use.

It seems this is far better. I’ve yet to give this a try myself – you need Oculus Rift or HTC Vive hardware – but at the very least, the concept is right. The instrument begins as a kind of 3D physics game involving percussion, with elaborate floating clockwork worlds, and builds a kind of surreal ambient music around those Escher-Magritte fantasies. So the music emerges from the interface, instead of bending an existing musical paradigm to awkward VR gimmicks.

And it’s just five bucks, meaning if you’ve bought the hardware, I guess you’ll just go get it!

And it’s really, as it should be, about composition and architecture. Designer Dan Brewster tells the Oculus Blog about inspiration found in Japan:

One space in particular, created by Tadao Ando for Benesse House and consisting of an enclosed circle of water beneath a circle of open sky, felt perfectly suited to VR and inspired the environment of Drops.

VR Visionaries: Planeta

Brewster and team paired with experimental composers – Patrick Russell and Patrick Higgins – to construct a world that is musically composed. I always recoil a bit when people separate technology from music, or engineering from other dimensions of tech projects. But here, we get at what it is they’re really missing – form and composition. You wouldn’t take the engineering out of a building – that’d hurt your head a lot when it collapses on you – but at the same time, you wouldn’t judge a building based on engineering alone. And maybe that’s what’s needed in the VR/AR field.

Clot magazine goes into some more detail about where Drops and this studio fit into the bigger picture, including talking to composer Robert Lippok. (Robert also, unprompted by me, name drops our own collaboration on 4DSOUND.)

Robert based this piece, he says, on an experiment he did with students. (He’s done a series of workshops and the like looking about music as an isolated element, and connecting it to architecture and memory.)

“We were talking about imagining sound. Sounds from memories, sounds from everyday life, and unheard sounds. Later we started to create sonic events just with words, which we translated into some tracks. ‘Drawing from Memory’ is a sonic interpretation of one of those sound/word pieces. FIELDS now makes it possible to unfold the individual parts of this composition, and at the same time frees it from its one-directional existence as a track on a CD. I should do this with all of my pieces. I see a snowstorm of possibilities.”

Check out that whole article, as it’s also a great read:

Launch: Planeta, addressing the future of interface-sound composition

Find the apps:

http://fields.planeta.cc
http://drops.garden

And let us know if you have any questions or comments for the developers, or on this topic in general – or if you’ve got a creation of your own using these technologies.

Gestural Arranging With Cubase & A Leap Motion Controller

Davidoff uses Cubase’s Chord Pads feature to define chord progressions, and uses the Leap Motion to control inversions and chord versions.

Virtuoso Commodore 64 composer Martin Walker is back

News for nerds: one of the musicians who was most adept at coaxing intricate music from chips is set to make a return. And that means it’s time for some chip music.

Nowadays, the MOS Technology SID chip might as well claim its place as an instrument, not just a chip with a particular game legacy, but among beloved classic synthesizers. And if instruments from the Minimoog to the Roland D-50 are seeing a return, it’s because there are particular techniques you can apply to those synthesizers. (For instance, our friend Francis Preve has delved into remaking the D-50’s synthesis approach, with or without Roland hardware – while we’re talking about the 80s.)

And this isn’t just nostalgia, partly because this stuff takes practice.

Talk about practice: Martin Walker makes the SID sing.

The radar engineer turned programmer turned composer is something of a legend in chip music circles. His productions are just dense. It wasn’t just chip music, either – he’s gone on to other projects, including circuit bending and composition on other instruments (he likes the Chromaphone plug-in as much as I do), and has seen bylines in Sound on Sound.

Commodore Format reported yesterday that he’ll make a return to C64 music for the first time in almost 30 years.

Here’s the thing: far from nostalgic, those 80s creations sound positively forward. Here are a few:

Dragon Breed

Altered Beast

Indiana Jones: Fate of Atlantis

(this is a funny one for me, as this game was oddly a favorite of my composition teacher in college…)

Speedball 2 [love this]

And a whole collection of “Walker’s Warblers”:

Full list of his creations:

http://www.vgmpf.com/Wiki/index.php?title=Martin_Walker

And his own site/label/project:

http://www.yewtreemagic.co.uk/about.php

We’ll be watching Commodore Format for the news this Friday, because… the future ain’t what it used to be?

http://www.commodoreformatarchive.com/

Bougaïeff & Narciss talk craft, and composing 60-second techno loops

Talk about minimal techno: Nicolas Bougaïeff and Narciss made a selection of 60-second locked grooves. Here’s more on that project and their practice.

If you’re hungry for electronic music that still pushes boundaries and technique, Dr. Nicolas Bougaïeff is a good place to start. (Yes, he’s a real doctor – the Ph.D. is in music composition). And lately, he’s been on a tear. Apart from a fanciful EP for our own Establishment, his recent output has focused on aggressively distorted, dystopian timbres, expertly constructed machines that pound forward like giant robots. He’s gotten deserved attention for that, as well, including the 12″ release of Cognitive Resonance, which relaunched Daniel Miller’s seminal NovaMute label.

There’s no paint-by-numbers techno here: each rhythm, each sound is considered. (It’s little wonder that Nick is now offering composition lessons on the side – in a field that has been largely short of expert training.)

Now, you can get a view of that in Principles of Newspeak, his Denkfabrik LP, and take a cinematic journey through these realms.

But I thought we’d take the occasion to explore a unique set of etudes that came at the beginning of this year. It’s called Vocabulary C, and it takes the meticulous construction of techno to an extreme. The whole album is a set of locked grooves, each just one minute in length.

It’s not just a simple DJ “tools” release, though – think of it as tools that are also effective etudes. You can actually listen to each of these as a one-minute, standalone composition. There’s audio material drawn from Principles of Newspeak, but you almost don’t need to know that: these stand on their own. (Miniatures are a topic Nicolas has taken up before, not surprisingly – he’s got a release called 24 Miniatures coming out now, too.)

Nicolas teamed up with Berlin-born artist Narciss for this one – an artist who has literally grown up in the middle of Berlin techno, and has a DJ resume (and more releases upcoming on DRVMS LTD. and Seelen Records) to match.

With the fusion of composition and technology here, of course, we had plenty to talk about with these two.

There are two video documentaries as a starting point. First, there’s a short feature of Principles of Newspeak, visiting Nick in his studio:

From there, there’s a second video in which Nicolas and Narciss talk about the project and their collaboration:

CDM: Nick, from the release for Daniel Miller, to your own follow-up on your label, to this reuse of materials … it feels like you’re making connective tissue between releases now. Is that about your own continuity? Is it about a narrative?

NB: Making a large scale musical work inspired by 1984 has been on my mind for over 20 years. Once I got started, I owed it to myself to explore every aspect of the topic. I’m happy I found an angle to the novel that hadn’t really been covered by other musicians, so I just kept on going. Vocabulary C gave me a feeling of closure.

And you’ve worked with miniatures before, too, yes?

I’ve done this sort of project before. Back in 2011, I recorded a new sketch every day for nearly the whole year – 20 minutes every day, first thing in the morning, no thinking allowed. That yielded hundreds of musical fragments. From those I eventually compiled an album by selecting the very best moments, with no further work whatsoever besides touching up the mixdown and trimming to the shortest edit possible. It’s sat on my hard drive for seven years now, which is a nice contrast to how spontaneous the original process was. I feel it really aged well, so I’m finally about to release the 24 Miniatures album via Denkfabrik.

All of these projects draw from the well of dystopia and dystopian imagination – what was that inspiration here? (What’s the Orwell connection?)

NB: Vocabulary C is the last release in a thematic series of three records, all of them inspired by the appendix to George Orwell’s 1984. The lead single “Cognitive Resonance” came out as a 12″ on NovaMute; the album Principles of Newspeak came out on my own label Denkfabrik, and finally, Vocabulary C as a collection of locked grooves inspired by the sounds from the album.

The 1984 appendix is focused on the particular way language is distorted in that fictional universe, a mashup of political slogans and the Whorf-Sapir linguistics theory. The idea is that if you destroy words, you destroy the ability to think of that concept. Fortunately, that’s not the way language works in reality. In the book, vocabulary C is a facet of the language that is used strictly to describe technical processes. In parallel, it seemed to me very fitting that a locked groove, historically, is a very technical musical tool.

Also, at the risk of repeating the video a little, maybe you can elaborate on those vocabularies? How did you apply them to managing the material here?

NB: Best to directly quote Orwell here.

“The A vocabulary consisted of the words needed for the business of everyday life — for such things as eating, drinking, working, putting on one’s clothes, going up and down stairs, riding in vehicles, gardening, cooking, and the like.”

“The B vocabulary consisted of words which had been deliberately constructed for political purposes: words, that is to say, which not only had in every case a political implication, but were intended to impose a desirable mental attitude upon the person using them.”

See, both of those are interesting, but way too literal to be used for instrumental music. But when you get to Vocabulary C, it’s abstract and detached in a way that seemed to really fit with techno.

“The C vocabulary was supplementary to the others and consisted entirely of scientific and technical terms.”

Can you explain what a locked groove is?

NB: A vinyl groove is normally cut in a spiral. A locked groove is a circle, so the needle loops around over and over. You literally have to pick up the needle to choose another loop, and you can have lots of different loops on a record. Pioneering techno artists — Jeff Mills, for example — produced and performed with locked groove records, sometimes making it a central part of their process.

Narciss: To me, it’s kind of the most stripped-down techno tool in existence. It really is just an endless loop that can, for example, be used to mix two tracks that don’t perfectly mesh together, or to add some spice to your transitions. Instrumentation is pretty interesting, because using the sounds we had meant we mainly patched things through different effects.
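
Ed.: the arithmetic behind locked grooves is worth spelling out – one revolution at 33⅓ rpm lasts exactly 1.8 seconds, so a groove holding one 4/4 bar implies 133⅓ BPM. A quick sketch of which tempos a locked groove can hold:

```python
# One locked groove = one revolution, so its loop length is fixed by platter
# speed alone; only tempos fitting a whole number of beats into that time lock.
def groove_tempos(rpm, max_beats=8):
    seconds_per_rev = 60.0 / rpm
    return {beats: round(60.0 * beats / seconds_per_rev, 2)
            for beats in range(1, max_beats + 1)}

print(groove_tempos(100.0 / 3))  # 33 1/3 rpm: 4 beats -> 133.33 BPM (one 4/4 bar)
print(groove_tempos(45.0))       # 45 rpm: 4 beats -> 180.0 BPM
```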

There’s something a bit cheeky about embracing minimalism in this way, right? This isn’t phases like Steve Reich; it isn’t messing with time like Morton Feldman. You’re into full-on repetition – right into the heart of what many people claim to dislike about techno. What made you go that route? Is there a personal story to this embrace of rigid structure and repetition, intellectual curiosity aside?

NB: There’s a holy grail in techno: that magical moment when the groove is so good that you bliss out and don’t touch the machines anymore. We experience this all the time as music producers working in the studio, and also on the dance floor when everything is just spot on. You get the same thing in many improvised musics – searching until you lock in. That’s what I wanted to focus on with this project: finding self-standing moments where time stands still.

Timbre is significant here, too, I feel. There’s a real brutality to this, maybe something missing in a lot of drenched-out, effect-pedal, too-much-reverb music trending now. What was the source of those sounds; how did you arrive at them?

Narciss: This can mainly be credited to the extremely raw-sounding base material that we were working with. Both of the albums that Nicolas made have a very violent, heavy structure to them, so naturally, working with sounds from them, you would get something like that out too. Although even on the loops where we didn’t use any of that material, it was a pretty natural adaptation to what we made before, I guess.

NB: The sound palette was more of a consequence of where I had been with my other projects rather than a conscious conceptual choice. We used a bunch of Narciss’ favorite drum loops as well as a big chunk of my personal sound library from the past couple years, which was all industrial and electroacoustic sounds derived from electric cello, modular synth, and loads of distortion pedals. Looking back, I can now better appreciate the tension between the timeless locked groove format and the sounds that grab your attention.

I want to ask about the element of setting the timer. In order to be that immediate, did you find that there was practice necessary first – on your own, as a duo?

Narciss: I didn’t really see it as practice – we pretty much sat down and recorded everything from the first loop to the last. Obviously, quality improved – generally toward the end of the process, we hit it home more times than in the beginning. But I think a little less than half of the record was made during our first day.

NB: I’ve been an improvising musician for over 15 years – working fast feels very comfortable. Also, quantity was a very important part of this project. Our goal was to make 100 locked grooves, and then we would select the best 20 or 30. Many of them were really bad, silly or just boring, but that didn’t matter, because five minutes later, we had an opportunity to begin again.

Actually, I’m kind of interested now that this has been out in the world for a while … uh, not just to rationalize turning in these questions late. What’s happened in the interim; what has the response been?

NB: I’ve been notified by Bandcamp about who downloads the records. I’ve had some interesting surprises there!

Functionally speaking, how do you expect these tracks to live? Are people DJing with them – are you? How do they work as tools – are they intended as tools? Would these encourage people perhaps even to DJ in a different way?

Narciss: I’m certainly playing them out live, yes. Not all of them, of course — “Loop C-02” is a particular favorite. Some are definitely meant more as an exploration of the medium than as an actual “locked groove” in its regular function. I think it does force people who only blend two tracks at a time to play differently, though, yeah – because in that environment, a locked groove doesn’t make much sense. But if you play with three decks or more, then I think the more dancefloor-oriented grooves won’t challenge you that much.

NB: Of course they’re tools! They’re radically minimal not only in their form, but also in their sparseness. I’m always trying to figure out what is the least amount of instruments necessary to get a really banging sound. Now whether they’re played on their own or deep in the mix, that really depends on the musical context.

Does that change the meaning, if they are blended with other tracks?

NB: No, they don’t need to be played as stark naked loops on their own, unprocessed. As a central element, my challenge to DJs would be to try to figure out how long you can keep them going on with the least amount of transformation and mixing.

Narciss: It’s an interesting thought, to be sure. But since this project was more of an exploration of this “locked groove” concept, I think that if people play them out, it doesn’t so much change the meaning as hammer home the functionality of it, even if you get analytical and deconstructive with it.

I know you’ve worked together before. This got you working more closely, though, yes?

Narciss: For sure – for me personally, this project has furthered this “sensei-student mentality” with Nicolas just so much more, although I think he hates it when I say that, ha!

NB: Yeah, Narciss contributed a remix for my release on Establishment, and I just did a remix for his new record on DRVMS Ltd. We’ve been friends for a couple years, and with this project it was a really intense five or six sessions, actually. The five-minute non-stop sprints were pretty exhausting. And we’re still friends now!

Narciss, you’re obviously out there in the trenches, too, in the DJ scene. What was the connection like between this slightly experimental format and that clubland experience?

Narciss: There most definitely was a connection between the two. I mean originally, locked grooves themselves are something that only make sense in the context of a DJ-set. So it actually took me personally quite a while to get away from the “four-to-the-floor-mentality” of the medium.

Also, being born in this city, where do you look for inspiration – are you attracted to new things that are flowing into the city’s cultural life? Is the familiarity of growing up here something significant, or is it that turnover that drives you, or some combination? (I do notice different perspectives of natives and transplant.)

Narciss: I love this question – but there are so many aspects to this subject.

It definitely is a combination. Growing up here, the extremely hedonistic way in which Berlin is perceived from the outside was always very perplexing to me, because this was simply not the way that I saw it. Even when I started DJing, I didn’t actually go out that much, because the way I got into it was just by discovering the genre in my record store, not by going to the parties. The problem with this is that techno is, of course, a genre that is inspired by parties and clubs, from the way it sounds to the overall existence of it. I only really understood this, though, when two British friends of mine moved here. They had so much unbridled passion for techno that only through them did I fully understand that these two things cannot exist without each other.

So for me, personally, I do actually like to get my inspiration from the memories that I have of Berlin before it got “un-dangerous”, or the corners that people just do not explore enough (like Marzahn, for example). [Ed.: Take note of Marzahn, architecture fans. Oh dear; I probably just sent someone down a linkhole.] But to be honest, without the turnover of Berlin, and just absolute heaps of people moving here from all over the world, I probably would not be making the music I am making today. That being said, if someone who is thinking about renting an overpriced apartment just to go to Panorama Bar loads is reading this: please don’t – you’re making my rent go up. [laughs]

Will we see these animations live outside of the digital release? Audiovisual show?

NB: Itaru Yasuda — itaru.org — made the Vocabulary C animations; that was the beginning of a new live AV collaboration. Itaru and I just released a new video, and that live AV project is moving forward fast.

And lastly, what’s next? I know you both have a bunch of upcoming projects, and maybe at least one of you has big bookings… will this particular project or collaboration also carry on somehow?

NB: I have a couple big bookings coming up, and I already have three solo EPs confirmed for release this year. Narciss and I took one of the locked grooves from Vocabulary C and fleshed it out into a full track, which should be coming out later this year as well.

Narciss: Well, there’s a track of ours on the next Seelen Records release that was still part of the same sessions in which we made “Vocabulary C”. Other than that, time will tell, I think. I’d definitely be down to make more stuff together, but the magic of this project was that the process was so different from how we usually make our music individually, so I’m not sure how we would go about just making “normal techno” together.

Thanks! We’ll be listening!

https://bougaieffnarciss.bandcamp.com/album/vocabulary-c

The amazing classic synth and experimental moments on children’s TV

Before it reverted to Internet-age blandness, American kids’ TV enjoyed a golden age of music, scored by oddball indie composers and legends alike.

And, wow, it could even teach you about synthesis.

Perhaps the most famous of these moments is when none other than Suzanne Ciani went on 3-2-1 Contact in 1980 to step inside her studio:

Fred Rogers of Mister Rogers’ Neighborhood fame was actually a composer before going into television, and the show’s deep commitment to music education reflected that. That music was generally of the acoustic variety, but he did one day tote a rare ARP Soloist synthesizer along with his trademark shoes and handmade sweaters – and his message and song about “play” might well be an anthem for us all.

Canadian-born composer Bruce Haack made an epic appearance on that same show in 1968, where he demonstrated a homemade electronic instrument. Haack himself was as prolific a composer of far-out sci-fi music for children as he was of (much darker) experimental compositions and psychedelic works.

The best all-time “Fairlight CMI on a kids’ program” (because, amazingly, there’s been more than one of those) – Herbie Hancock, Sesame Street, 1983. Herbie keeps a terrific sense of cool and calm that all kids’ shows could learn from in this day of cloying, sugar-sweet patronizing programming:

Synths were all over vintage Sesame Street, often providing sound effects as in this oddly hypnotic Ernie puzzle:

Steve Horelick, the composer behind Reading Rainbow, showed off his Fairlight CMI and how digital sampling worked. (I have vivid memories of watching this as a kid – sorry, Steve.) Steve apparently came up at a time when Fairlight ownership was rare enough to get you gigs – but a good thing, too, as a whole generation still sings along with that theme song. And you probably got a second educational gift from Steve if you ever followed one of his brilliant video tutorials on Logic.

Even better than that is Reading Rainbow‘s synesthesia 3D trip – John Sanborn and Dean Winkler’s Luminaire, made for Expo ’86, to music by composer Daniel “No, I’m not Philip Glass” Lentz.

Better video of the actual animation and music, which – sorry, Mr. Glass, I actually kind of prefer to Glassworks:

Somehow this looks fresher than it did when it was new.

A young, chipper Thomas Dolby explained synthesis on Jim Henson’s little-known 1989 program The Ghost of Faffner Hall:

Oh yeah, also, apparently Jem and the Misfits imagined an audiovisual synth in 1985 that predicts both Siri and Coldcut / AV software years before their time. Plus dolls should always have synthesizer accessories:

Apart from education, there’s been some wildly adventurous music from obscure (who’s that?) and iconic sources (the Philip Glass?!) alike.

For a time, an experimental music Tumblr followed some of these moments. Here are some of my favorites.

Joan La Barbara does the alphabet (1977):

And yes, trip out with a composition by Philip Glass written especially for Sesame Street:

You can read the full history of this animation on Muppet Wiki.

More obscure, but clever (and I remember this one) – from HBO’s Braingames (1983-85), evidently by a guy named Matt Kaplowitz.

Not growing up in the UK, I’d never heard of Chocky, but it has this trippy, gorgeous opening with music by John W. Hyde:

American composer Paul Chihara’s 1983 score for a show called Whiz Kids is hilariously dated and nostalgia-packed now. But the man is a heavyweight in composition – think Nadia Boulanger student and LA Chamber Orchestra resident. He has an extensive film resume, too, which has since landed him a position at NYU:

From Chicago public access TV, there’s a show called Chic-A-Go-Go, which in 2001 hosted The Residents.

But The Residents were on Pee-Wee, too:

Absurdly awesome, to close: “The Experimental Music Must Be Stopped.” This one comes to us from 2010 and French animation series Angelo Rules:

The post The amazing classic synth and experimental moments on children’s TV appeared first on CDM Create Digital Music.

Accusonus explain how they’re using AI to make tools for musicians

First, there was DSP (digital signal processing). Now, there’s AI. But what does that mean? Let’s find out from the people developing it.

We spoke to Accusonus, the developers of loop unmixer/remixer Regroover, to try to better understand what artificial intelligence will do for music making – beyond just the buzzwords. It’s a topic they presented recently at the Audio Engineering Society conference, alongside some other developers exploring machine learning.

At a time when a lot of music software retreads existing ground, machine learning is a relatively fresh frontier. One important distinction to make: machine learning involves training the software in advance, then applying those algorithms on your computer. But that already opens up some new sound capabilities, as I wrote about in our preview of Regroover, and can change how you work as a producer.

And the timing is great, too, as we take on the topic of AI and art with CTM Festival and our 2018 edition of our MusicMakers Hacklab. (That call is still open!)

CDM spoke with Accusonus’ co-founders, Alex Tsilfidis (CEO) and Elias Kokkinis (CTO). Elias explains the story from a behind-the-scenes perspective – but in a way that I think remains accessible to us non-mathematicians!

Elias (left) and Alex (right). As Elias is the CTO, he filled us in on the technical inside track.

How did you wind up getting into machine learning in the first place? What led this team to that place, and what research background does it have?

Elias: Alex and I started out our academic work with audio enhancement, combining DSP with the study of human hearing. Toward the end of our studies, we realized that the convergence of machine learning and signal processing was the way to actually solve problems in real life. After the release of drumatom, the team started growing, and we brought people on board who had diverse backgrounds, from audio effect design to image processing. For me, audio is hard because it’s one of the most interdisciplinary fields out there, and we believe a successful team must reflect that.

It seems like there’s been movement in audio software from what had been pure electrical engineering or signal processing to, additionally, understanding how machines learn. Has that shifted somehow?

I think of this more as a convergence than a “shift.” Electrical engineering (EE) and signal processing (SP) are always at the heart of what we do, but when combined with machine learning (ML), they can lead to powerful solutions. We are far from understanding how machines learn. What we can actually do today is “teach” machines to perform specific tasks with very good accuracy and performance. In the case of audio, these tasks are always related to some underlying electrical engineering or signal processing concept. The convergence of these principles (EE, SP, and ML) is what allows us to develop products that help people make music in new or better ways.

What does it mean when you can approach software with that background in machine learning? Does it change how you solve problems?

Machine learning is just another tool in our toolbox. It’s easy to get carried away, especially with all the hype surrounding it now, and use ML to solve any kind of problem, but sometimes it’s like using a bazooka to kill a mosquito. We approach our software products from various perspectives and use the best tools for the job.

What do we mean when we talk about machine learning? What is it, for someone who isn’t a researcher/developer?

The term “machine learning” describes a set of methods and principles engineers and scientists use to teach a computer to perform a specific task. An example would be identifying the genre of a given song. Let’s say we’d like to know whether a song we’re currently listening to is an EDM song or not. The “traditional” approach would be to create a set of rules saying that EDM songs fall in this BPM range, have that tonal balance, etc. Then we’d have to implement specific algorithms that detect a song’s BPM value, a song’s tonal balance, and so on, analyze the results according to the rules we specified, and decide whether the song is EDM or not. You can see how this gets time-consuming and complicated, even for relatively simple tasks. The machine learning approach is to show the computer thousands of EDM songs and thousands of songs from other genres, and train the computer to distinguish between EDM and other genres.
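Ed.: to make that distinction concrete, here’s a toy sketch (in Python, with scikit-learn) of the learned side of Elias’ example. Everything in it – the two made-up features, the numbers, the classifier choice – is invented for illustration and has nothing to do with Accusonus’ code; the point is simply that we supply labeled examples instead of hand-written rules, and the model works out the boundary itself. -PK

```python
# Toy "EDM or not?" classifier: labeled examples in, decision rule out.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Pretend features per song: [BPM, spectral centroid in Hz].
# (Assumed features -- a real system would use far richer representations.)
edm   = np.column_stack([rng.normal(128, 4, 500), rng.normal(3500, 400, 500)])
other = np.column_stack([rng.normal(100, 20, 500), rng.normal(2000, 600, 500)])

X = np.vstack([edm, other])
y = np.array([1] * 500 + [0] * 500)  # 1 = EDM, 0 = anything else

# Training: the model learns the boundary from examples -- no BPM rules written by us.
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# A new, unseen song at 126 BPM with a bright spectrum:
print(clf.predict([[126, 3300]]))  # -> [1], i.e. "probably EDM"
```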

Computers can get very good at this sort of very specific task. But they don’t learn like humans do. Humans also learn by example, but don’t need thousands of examples. Sometimes a few or just one example can be enough. This is because humans can truly learn, reason and abstract information and create knowledge that helps them perform the same task in the future and also get better. If a computer could do this, it would be truly intelligent, and it would make sense to talk about Artificial Intelligence (A.I.), but we’re still far away from that. Ed.: lest the use of that term seem disingenuous, machine learning is still seen as a subset of AI. -PK

If a reader would like to read more on the subject, a great blog post by NVIDIA and a slightly more technical blog post by F. Chollet will shed more light on what machine learning actually is.

We talked a little bit on background about the math behind this. But in terms of what the effect of doing that number crunching is, how would you describe how the machine hears? What is it actually analyzing, in terms of rhythm, timbre?

I don’t think machines “hear,” at least not now, and not as we might think. I understand the need we all have to explain what’s going on and find some reference that makes sense, but what actually goes behind the scenes is more mundane. For now, there’s no way for a machine to understand what it’s listening to, and hence start hearing in the sense a human does.

Inside Accusonus products, we have to choose what part of the audio file/data to “feed” the machine. We might send an audio track’s rhythm or pitch, along with instructions on what to look for in that data. The data we send are “representations,” limited by our understanding of, for instance, rhythm or pitch. For example, Regroover analyzes the energy of an audio loop across time and frequency, then tries to identify patterns that are musically meaningful and extract them as individual layers.
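Ed.: Regroover’s actual algorithm is proprietary, but one well-known technique for pulling “layers” out of a time/frequency representation is non-negative matrix factorization (NMF) of the magnitude spectrogram: each component pairs a spectral shape (a timbre) with an activation curve (when it sounds), which together behave like a layer. A minimal sketch of that general idea on a synthetic loop – assuming nothing about how Regroover itself works: -PK

```python
# Separate a toy "kick + hat" loop into two layers via NMF on the spectrogram.
import numpy as np
from scipy.signal import stft
from sklearn.decomposition import NMF

sr = 22050
t = np.arange(sr * 2) / sr                      # two seconds of toy "loop"
rng = np.random.default_rng(0)

gate_k = (t % 0.5) < 0.05                       # kick-like gate, every 0.5 s
gate_h = (t % 0.25) < 0.02                      # hat-like gate, every 0.25 s
loop = (np.sin(2 * np.pi * 60 * t) * gate_k
        + 0.3 * rng.standard_normal(len(t)) * gate_h)

f, frames, Z = stft(loop, fs=sr, nperseg=1024)
S = np.abs(Z)                                   # magnitude spectrogram (freq x time)

nmf = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
W = nmf.fit_transform(S)   # spectral templates: one column per layer (timbre)
H = nmf.components_        # activations: when each layer sounds

# W[:, i:i+1] @ H[i:i+1, :] is layer i's spectrogram; pair it with the
# mixture phase and an inverse STFT to audition each layer on its own.
```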

Is all that analysis done in advance, or does it also learn as I use it?

Most of the time, the analysis is done in advance, or just when the audio files are loaded. But it is possible to have products that get better with time – i.e., that “learn” as you use them. There are several technical challenges to making our products learn through use, including significant processing load and having to run inside old-school DAW and plug-in platforms that were primarily developed for more “traditional” applications. As plug-in creators, we are forced to constantly fight our way around obstacles, and this comes at a cost for the user.

What’s different about this versus another approach – what does this let me do that maybe I wasn’t able to do before?

Sampled loops and beats have been around for many years, and people have many ways to edit, slice, and repurpose them. Before Regroover, everything happened in one dimension: time. Now people can edit and reshape loops and beats in both time and frequency. They can also go beyond the traditional multi-band approach by using our tech to extract musical layers and original sounds. The possibilities for unique beat production and sound design are practically endless. A simple loop can be a starting point for many musical ideas.

How would you compare this to other tools on the market – those performing these kind of analyses or solving these problems? (How particular is what you’re doing?)

The most important thing to keep in mind when developing products that rely on advanced technologies and machine learning is what the user wants to achieve. We try to “hide” as much of the complexity as possible and provide a familiar, intuitive user interface that lets people focus on the music and not the science. Our single-knob noise and reverb removal plug-ins are very good examples of this. The number of parameters and options in the algorithms would be too confusing to expose to the end user, so we created a simple UI that delivers a quick result.

If you take something as simple as being able to re-pitch samples, each time there’s some new audio process, various uses and abuses follow. Is there a chance to make new kinds of sounds here? Do you expect people to also abuse this to come up with creative uses? (Or has that happened already?)

Users are always the best “hackers” of our products. They come up with really interesting applications that push the boundaries of what we originally had in mind. And that’s the beauty of developing products that expand the sound-processing horizons for music. Regroover is the best example of this. Stavros Gasparatos has used Regroover in an installation where he split industrial recordings, routing the layers to six speakers inside a big venue. He pushed the algorithm to create all kinds of crazy splits and extract inspiring layers. The effect was that in the middle of the room you could hear the whole sound, and when you approached one of the speakers, crazy things happened. We even had some users extract inspiring layers from washing machine recordings! I’m sure the CDM audience can think of even more uses and abuses!

Regroover gets used in Gasparatos’ expanded piano project:

Looking at the larger scene, do you think machine learning techniques and other analyses will expand what digital software can do in music? Does it mean we get away from just modeling analog components and things like that?

I believe machine learning can be the driving force for a much-needed paradigm shift in our industry. The computational resources available today, not only on our desktop computers but also in the cloud, are tremendous, and machine learning is a great way to use them to expand what software can do in music and audio. Essentially, the only limit is our imagination. And if we keep being haunted by the analog sounds of the past, we can never imagine the sound of the future. We hope Accusonus can play its part and change this.

Where do you fit into that larger scene? Obviously, your particular work here is proprietary – but then, what’s shared? Is there larger AI and machine learning knowledge (inside or outside music) that’s advancing? Do you see other music developers going this direction? (Well, starting with those you shared an AES panel on?)

I think we fit among the forward-thinking companies trying to bring about this paradigm shift by actually solving problems and providing new ways of processing audio and creating music. Think of iZotope with their newest Neutron release, Adobe Audition’s Sound Remover, or Apple Logic’s Drummer. What we need to share among us (and we already do with some of those companies) is the vision of moving things forward, beyond the analog world, and our experience designing great products using machine learning (here’s our CEO’s keynote from a recent workshop on the subject).

Can you talk a little bit about your respective backgrounds in music – not just in software, but your experiences as a musician?

Elias: I started out as a drummer in my teens. I played with several bands during high school and as a university student. At the same time, I started getting into sound engineering, where my studies really helped. I ended up working a lot of gigs, from small venues to stadiums, doing everything from cabling and PA setup to mixing the show and monitors. During this time I got interested in signal processing and acoustics, and I focused my studies on those fields. Toward the end of university, I spent a couple of years in a small recording studio, where I did some acoustic design for the control room and recorded and mixed local bands. After graduating, I started working on my PhD thesis on microphone bleed reduction and general audio enhancement. Funnily enough, Alex was the one who built the first version of the studio, he was the supervisor of my undergraduate thesis, and we spent most of our PhDs working together in the same research group. It was almost meant to be that we would start Accusonus together!

Alex: I studied classical piano and music composition as a kid and turned to synthesizers and electronic music later. As many students do, I formed a band with some friends, and that band happened to be one of the few abstract electronic/trip-hop bands in Greece. We started making music around an old Atari computer running an early, MIDI-only version of Cubase that triggered some cheap synthesizers, and recorded our first demo on a crappy 4-channel tape recorder in a friend’s bedroom. Fun days!

We then bought a PC and fancier equipment and started making our living writing soundtracks for theater and dance shows. In that period I was practically living as a professional musician/producer and had quit my studies. But after a couple of years, I realized I was more and more fascinated by the technology side of music, so I returned to the university and focused on audio signal processing. After graduating from the Electrical and Computer Engineering Department, I studied acoustics in France and then started my PhD on de-reverberation and room acoustics at the same lab as Elias. We became friends, worked together as researchers for many years, and realized that we share the same vision of how we want to create innovative products to help everyone make great music! That’s why we founded Accusonus!

So much of software development is just modeling what analog circuits or acoustic instruments do. Is there a chance for software based on machine learning to sound different, to go in different directions?

Yes, I think machine learning can help us create new, inspiring sounds and lead us in different directions. Google Magenta’s NSynth is a great example of this, I think. While still mostly a research prototype, it shows the new directions these techniques can open up.

Can you recommend some resources showing the larger picture with machine learning? Where might people find more on this larger topic?

https://openai.com/

Siraj Raval’s YouTube channel:

Google Magenta’s blog for audio/music applications https://magenta.tensorflow.org/blog/

Machine learning for artists https://ml4a.github.io/

Thanks, Accusonus! Readers, if you have more questions for the developers – or the machine learning field in general, in music industry developments and in art – do sound out. For more:

Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

http://accusonus.com

The post Accusonus explain how they’re using AI to make tools for musicians appeared first on CDM Create Digital Music.

Celebrate Blade Runner with these videos on Vangelis and his sounds

1982’s Blade Runner is one of the reasons a lot of us fell in love with synths. So, with the sequel out, let’s look back on that music.

Surely no composer – not even the legendary Wendy Carlos – managed to inspire so many obvious rip-off sound presets. (Barely-veiled references to chariots and fire and Deckard were there just to avoid any doubt.) And Blade Runner is essentially without comparison, with thick synthesizer instrumentation that recalls the colors and shapes of orchestral timbres but is simultaneously unmistakably synthetic and new.

In fact, you might reasonably argue that Blade Runner was one of the popular vehicles to introduce the public to the capabilities of the polysynth, after years of rock music dominated by the Minimoog and its ilk.

I think talking just about those colors might miss some of the compositional elements of the music. Vangelis’ stately pacing and soaring melodies, with the tension of slow sweeps in pitch, kept Ridley Scott’s movie from being dull by injecting futuristic wonder and suspense. But the instrumentation is of course in service of that – and if you ever want to escape those presets, it helps to do an autopsy of how the original sounds were constructed.

First, let’s check out a good breakdown of the signature sound design on the Yamaha CS-80, which you could duplicate on any polysynth with a similar architecture. (Here, it’s faked reasonably well using a slightly later-era Yamaha CS-70M, and strings on a Roland MV-8800 – an unrelated animal to anything available in 1982, but it does the trick.)

Reverb.com breaks down these memorable sounds in a new video that talks about how to recreate them on the kind of gear you’re likely to find today. And, of course, just like studying scores or learning a favorite song, picking apart those sound designs can be a great way to better understand how to make new sounds of your own:

Hat tip to Synthtopia for catching that one. More at Reverb.com, including sample packs for a couple bucks.
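And if you’d rather muck about from a blank patch than chase presets, the basic architecture in that video – two detuned sawtooths, delayed vibrato, a slowly opening lowpass, a soft attack – is easy to prototype anywhere, even in a few lines of Python. Here’s a minimal sketch of that general recipe, not a recreation of Vangelis’ actual CS-80 settings; every number in it is a guess:

```python
# Bare-bones "CS-80 brass"-style patch: detuned saws -> swept lowpass -> slow attack.
import numpy as np
from scipy.signal import butter, lfilter
from scipy.io import wavfile

sr, dur, f0 = 44100, 3.0, 220.0
t = np.arange(int(sr * dur)) / sr

# Delayed vibrato, ramping in after half a second (a CS-80 signature move).
vib = 0.006 * np.clip(t - 0.5, 0.0, 1.0) * np.sin(2 * np.pi * 5.5 * t)

def saw(freq):
    # Naive sawtooth from accumulated phase (aliases a bit; fine for a sketch).
    phase = np.cumsum(freq) / sr
    return 2.0 * (phase % 1.0) - 1.0

# Two slightly detuned oscillators, both carrying the vibrato.
osc = saw(f0 * (1 + vib) * 0.997) + saw(f0 * (1 + vib) * 1.003)

# Slowly opening lowpass, approximated block by block with a Butterworth filter
# (carrying filter state between blocks to avoid clicks).
out = np.zeros_like(osc)
zi = np.zeros(2)
block = 1024
for i in range(0, len(osc), block):
    cutoff = 400.0 + 2500.0 * (t[i] / dur)        # filter sweep
    b, a = butter(2, cutoff / (sr / 2))
    out[i:i + block], zi = lfilter(b, a, osc[i:i + block], zi=zi)

out *= np.minimum(t / 0.4, 1.0)                   # slow attack envelope
wavfile.write("brass_sketch.wav", sr, (0.3 * out * 32767).astype(np.int16))
```

Render it and you’ll land somewhere in the neighborhood – the rest (ring modulation, ribbon glides, gobs of reverb) is where the mucking about comes in.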

Vangelis isn’t prone to a lot of interviews or public appearances, but there are a couple of chances to hear him speak poetically about the role of music in the world – particularly the 2011 interview with Al Jazeera, top:

For the serious Vangelis fan, there’s this two-hour documentary portrait:

At about one hour twenty, you get Vangelis and Ridley Scott talking about Blade Runner, just after a chat about the composer’s collaboration with NASA. I imagine someone somewhere has cornered him more on this score specifically, but there are some nice tidbits here.

From that interview:

“It was like being in the cave of a magician,” Ridley Scott says. “I’d be there at 2am … watching him just muck about.”

Vangelis: “I don’t really like working on film … everybody’s under pressure.”

Now, there you go: you’re hereby empowered to do some mucking about in your cave, or (thanks to modern tech) on your couch or in your bed or wherever it is your synths are at your disposal.

Just in case the new Blade Runner has you living your own Vangelis fantasy of yourself – go for it. Just make sure to record or hit save, or all those moments will be lost in time… like tears in rain…

Um, sorry, I’ll stop. Enjoy.

The post Celebrate Blade Runner with these videos on Vangelis and his sounds appeared first on CDM Create Digital Music.