Music

CDM Mixes: Voyage into sound like a mystic space cat, with akkamiau

Start your week right with some underground techno. akkamiau is the multi-faceted Prague-born Akkamiau Kočičí, and she kicks off a special January for us.

Here in Berlin on Saturday, we’re hosting a special night of live performances with akkamiau joining us for a DJ set rounding out the night:
https://www.residentadvisor.net/events/1053318

The artists on the bill have all released or are forthcoming on our label Establishment, and all of them have robust projects of their own, from live coding work in the Algorave scene with Miri Kat, to their own up-and-coming label projects (Gradient from Jamaica Suk, Denkfabrik from Nicolas Bougaïeff, and a new project emerging from Stanislav Glazov aka Procedural). They’re also teaching – Stas is a modular and TouchDesigner guru traveling the world with those projects; both Nick and Jamaica teach privately, and Nick teaches modulars and coaches composition as Dr. Techno – because he’s a real doctor. Oliver Torr, on behalf of Prague’s XYZ project, is preparing an interactive light installation that will evolve over the course of the night, as well.

Stratofyzika, intermedia group.

I wanted to invite Lenka to send some vibrations to our readers all over the world. Lenka’s own projects are myriad: she’s a founding member of female:pressure, the network and advocacy organization that has worked for years to break apart the gendering of electronic music, she releases and performs and DJs as akkamiau and hiT͟Hərˈto͞o, and adds live sound and music to the choreography- and audiovisual-driven intermedia project Stratofyzika.

She’s also recently hosted quadraphonic sound workshops, working in Ableton Live, plus the wildly popular jam room at Ableton Loop.

And while the trend these days seems to be toward narrowly defined DJs, I believe all those broad influences come across in her DJ mixes as well as her music. Lenka has shared an exclusive mix with us, recorded straight from the mixer in the grimy confines of Berlin’s club Suicide Circus aka Suicide Club. It was the opening of the respected RITUALS series, which brings commanding, dark techno to Berlin’s Thursday night / Friday morning (because this is Berlin, and Thursdays are a big night).

Just don’t expect monotonous pounding. Lenka’s mixing is effortlessly fluid and organic, unfolding across the duration, layering beautiful, strange, otherworldly textures atop a heavy, dirty pulse. And, as always, Lenka’s quirky cosmic feline character comes through. That doesn’t mean it’s soft in any way: these space cats have big rockets.

Dark but not drab … industrial with groove … powerful but dreamy … sounds like good New Year’s resolutions for techno to me.

Track listing (yep, those Ancient Methods and Perc tracks are two favorites of mine, for starters):

Moerbeck & Subjected – 006SB1
Mamiffer – Enantiodromia
Adam X – It’s All Relative
Alexey Volkov – Corner
H880 – weird signs
Drasko V & Kero – Exponent (Drumcell Remix)
Tensal – Levia
Regis – Keep Planning (Original Mix)
Discord – Backyard Trapp
MTd – Basement (Moerbeck Remix)
P.E.A.R.L. – Station1
Tsorn – Strange Theory
FJAAK – The Tube
Ancient Methods – Knights & Bishops
Perc – Look What Your Love Has Done To Me
H880 – KEPLER
Niki Istrefi – Red Armor

Join us in Berlin if you can, and regardless, stay tuned for more of akkamiau, these other artists, and Establishment. Frohes Neues!

Follow akkamiau on SoundCloud, MixCloud, and Facebook

For more listening, check out akkamiau’s work on Colaboradio 88.4FM Berlin. There’s a special episode devoted to the voice:

— and one highlighting those Ableton Link-ed jam sessions at the company’s Loop conference from November:

Saturday’s event, featuring akkamiau:

Establishment: XL & live [Discount advance tickets exclusively on Resident Advisor]
RSVP on Facebook

The post CDM Mixes: Voyage into sound like a mystic space cat, with akkamiau appeared first on CDM Create Digital Music.

CDM Mixes: Sofia Kourtesis takes us dreaming in wintry skies

Year-end lists, while valuable, can blur into vague hype, dizzying lists of artists and tracks. Let’s start by spending some time listening.

Long-time friend of the site Sofia Kourtesis, the producer/DJ with German-Peruvian-Greek connections now based in Berlin, fired over a new mix and her latest production this week. I make no claim of weighing what’s important in grander schemes, but I was moved by the fact that it touched so much of the music I resonated with personally this year, in headphones and in clubs both. There’s Octo Octa and Benjamin Damage – each mastering live performance – and Avalon Emerson and Etapp Kyle and DVS1, who dazzled me as DJs and with productions. And then onward from there.

Sofia calls this “pieces of winter sky”:

1 Olof Dreijer – Echoes from Mamori
2 Adam Marshall – Hose Shipping (Jammed Mix)
3 Avalon Emerson – One More Fluorescent Rush
4 Etapp Kyle – Essay [KW20]
5 DVS1 – In The Middle [KW20]
6 Octo Octa – Adrift
7 Benjamin Damage – Montreal
8 Helena Hauff – Do you really think like that
9 Sofia Kourtesis – Iquitos
10 Aphex Twin – Alberto Balsam

Sofia is busy. In addition to handling bookings at Chalet (the former tollhouse right next to the Berlin headquarters of Native Instruments), she’s playing a festival in Peru organized around the issue of child trafficking on May 17, has a full schedule at some of the most respected venues in Germany, NYC, and Latin America (see below), and will be curating a concert series at Berlin’s storied Funkhaus (ex-DDR radio facility and host recently to Ableton Loop). She also has a new EP in the works for spring.

Here’s what she says about this mix:

This mix is somehow playful, but also really dynamic, with sounds of mellow, Amazonian, and moody techno and electronica.

I chose Olof Dreijer to begin with, because that track always makes me go out of myself on a dreamy journey, thinking about home, or about what home is. I really like his Amazonian elements — and this bass kills me, it’s just beautiful. It keeps me motivated throughout the day.

I also selected some of my favorite female artists at the moment, not just for them being women, but mainly because they’re talented producers using a lot of analog gear. Helena Hauff always gets right to the point, and without needing to try, she simply sounds really organic. I really love her new EP on Ninja Tune. I also like Avalon’s new track that she released on Whities, one of my favorite labels at the moment, alongside Studio Barnus.

The production, the video and her artwork are always really special. I wonder why she hasn’t written music for computer games. She could totally do it – what a dream; I would be the first one to buy it. Ed.: We may have to round up some video game music at some point, on that note – see for instance SØS Gunver Ryberg’s wonderful work.

I just found out about Octo Octa this year. She’s a wonderful artist; I really like playing “Adrift” in the middle of a set; it takes me on a journey. Also really good for dancing is Benjamin Damage’s “Montreal” — what a tune… wish I had made it!

I also dared myself to include one of my own new tracks called “Guerrero.” It’s about a close friend of mine who is fighting against FIFA’s corruption.

All the best things at the end — I will never forget to include Aphex Twin in anything I do; he’s always been my hero.

By the way, from Sofia or anyone else, I will rabidly defend left-turn mixing and surprises; I think mixing and DJing could use more risks, not fewer. Seems a good resolution for 2018.

We’ll have more audio content from CDM coming in 2018, so consider this one end-of-year teaser as we squeeze in some holidays. If you have ideas for how you’d like that to go, I’d love to hear from you. But I believe there should always be more room for listening.

In person is even better, so here are Sofia’s coming dates:

19.01.2018 Chalet Club Berlin
16.02.2018 Institut für Zukunft Leipzig
22.02.2018 Bossa Nova Civic New York
24.02.2018 New York [TBC]
17.05.2018 Proyecto Play Me Lima, Peru
25.05.2018 Mexico City [TBC]

https://www.facebook.com/sofia.kourtesis/

https://soundcloud.com/sofia-kourtesis

The post CDM Mixes: Sofia Kourtesis takes us dreaming in wintry skies appeared first on CDM Create Digital Music.

djay Pro 2 brings algorithms and machine learning to DJing

A.I.D.J.? The next-generation djay Pro 2 for Mac adds mixing and recommendations powered by machine learning – and more human-powered features, too.

When Big Data meets the DJ

The biggest break from how we’ve normally thought about DJ software comes in the form of automatic mixing and selection tools. One is powered by machine learning trained on DJ sets, and one by data collected from listening (via Spotify).

Automix AI is a new mixing technology. And hold on to your hats, folks, if the “sync” button was unnerving to you, this goes further.

When we say “A.I.,” we’re really talking machine learning – that is, “training” algorithms on large sets of data. In this case, that data comes from existing DJ sets. (Algoriddim tells CDM the data was drawn from a variety of DJs, mostly in hip-hop and electronic genres.) Those sets were analyzed according to various sonic features, and the automixing applies those to your music. So this isn’t just about mixing two different techno tracks with mechanical efficiency – it’s meant to go further across different tempos and genres.

It’s also more than matching tempo. Automix AI will identify where the transition occurs, decide how long the fade should be, and apply filters and EQ. So, if you’ve ever listened to existing Automix features and how clumsy they are with starting and stopping tracks, this takes a different approach. Algoriddim explains to CDM:

The core of this tech is finding good start and end regions for transition between two songs, while also respecting the corresponding sound energies and choosing an appropriate transition accordingly (e.g. most likely EQ or short filter transition if you have two high energy parts of the song for the transition).
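To make that concrete: “sound energy” is commonly measured as a short-time RMS level, and an auto-mixer could choose a transition style by comparing the energies of the two regions. Here’s a minimal, purely illustrative sketch – my own guess at the shape of the logic, not Algoriddim’s code; the function names and threshold are invented:

```python
import math

def rms_energy(samples):
    """Root-mean-square level of an audio region (floats in -1..1)."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def choose_transition(energy_out, energy_in, high=0.25):
    """Pick a transition style from the energies of the outgoing
    and incoming regions (threshold chosen arbitrarily here)."""
    if energy_out >= high and energy_in >= high:
        return "eq_or_short_filter"  # two high-energy parts: quick swap
    return "long_fade"               # otherwise, a gentler blend
```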

Then there’s “Morph” – which Algoriddim argues opens up new ways of mixing:

This actually goes beyond what a regular DJ can do with two hands. Morph not only syncs the songs but seamlessly ramps the changed tempo of the inactive deck to its regular speed as the transition progresses. E.g. in the past if you had a hip-hop song at say 95 BPM and an electronic track at 130 BPM, syncing the two and making a transition would leave the new track in an awkwardly rate changed state (even with time-stretching enabled). So as the transition starts, both songs (in this example) would be playing at 130 BPM but as we are doing a simultaneous tempo “crossfade”, the hip-hop track ends up being back at 95 BPM at the end of the transition. This ensures the tracks always play at their regular tempo and these types of mixes sound very natural, allowing for seamless cross-genre transitions.
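The tempo “crossfade” described above boils down to both decks sharing a single tempo that ramps linearly from the outgoing track’s BPM to the incoming track’s native BPM over the course of the transition. A hedged sketch of that idea (my own illustration, not djay Pro’s implementation):

```python
def morph_tempo(bpm_out, bpm_in, progress):
    """Shared playback tempo during a Morph-style transition.

    progress runs from 0.0 (transition start) to 1.0 (end). Both
    decks play at this tempo, so the incoming track lands exactly
    at its native BPM when the transition completes.
    """
    return bpm_out + (bpm_in - bpm_out) * progress

# The example from the quote: mixing from a 130 BPM electronic
# track into a 95 BPM hip-hop track.
print(morph_tempo(130, 95, 0.0))  # 130.0 - both decks start at the old tempo
print(morph_tempo(130, 95, 1.0))  # 95.0 - hip-hop track ends at native speed
```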

Also impressive: while you might think this sort of technology would be licensed externally, the whiz kids over at Algoriddim did all of this on their own, in-house.

On the Spotify integration side, and also related to automating DJ tasks, “Match” technology recommends music based on BPM, key, and music style. Existing Spotify users will be familiar with some of this recommendation engine already. Where this could be good for producers: it means there’s an avenue by which algorithms expose your music. And that in turn is potentially good news if you’re a producer whose music isn’t always charting the top of a genre on Beatport.
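As a rough sketch of how BPM/key matching might rank candidates – purely illustrative, since Algoriddim hasn’t published its algorithm and the scoring here is invented – tracks could score higher when their tempos are close and their keys sit next to each other on the Camelot wheel DJs use for harmonic mixing:

```python
def camelot_compatible(key_a, key_b):
    """Keys as (number, mode) tuples, e.g. (8, 'A') for 8A.
    Compatible: same key, a neighbor on the 12-position wheel in
    the same mode, or the same number in the other mode."""
    num_a, mode_a = key_a
    num_b, mode_b = key_b
    if mode_a == mode_b:
        return abs(num_a - num_b) in (0, 1, 11)  # wheel wraps 12 -> 1
    return num_a == num_b

def match_score(bpm_a, key_a, bpm_b, key_b):
    """Higher is better: BPM closeness plus a bonus for key fit."""
    score = 1.0 - abs(bpm_a - bpm_b) / max(bpm_a, bpm_b)
    if camelot_compatible(key_a, key_b):
        score += 0.5
    return score
```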

These “autopilot” features are all under your control, too: you can choose which parameters are used, choose your own tracks, switch it off at will – as you like. Or you can sit back and let djay Pro run in the background while you’re doing something else, if you want to let the machine do the DJing while you cook dinner, for instance.

Pro features, for humans

Okay, so at this point, djay Pro 2 may sound a bit like this:

But one of the disruptive things about Algoriddim’s approach to DJ software is that it has challenged rivals among entry-level and casual users and more advanced users at the same time.

So, here’s the more “Pro” sounding side of this. Some of these are features that are either missing or not implemented quite the way we’d like in industry leaders like Serato and Traktor.

A new audio engine with master AU plug-ins. A rewrite of the engine now allows high-res waveforms, post-fader effects, higher-quality filters, plus the ability to add Audio Unit plug-ins as master output effects.

Integrated libraries. iTunes, Spotify, and music in the file system / Finder are now all integrated and can be viewed side-by-side.

Integrated library views bring together everything on your local machine as well as Spotify.

Smart filters. Set up dynamic playlists sorted by BPM, key, date, genre, and other metadata. (Those columns are available in other tools, but here you get them dynamically, a bit like the ones in iTunes.)
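In other words, a smart filter behaves like a saved query over track metadata, re-evaluated as your library changes. A toy sketch of the idea (the field names are illustrative, not djay Pro’s actual schema):

```python
def smart_filter(library, min_bpm=None, max_bpm=None, key=None, genre=None):
    """Return the tracks matching every criterion that was given."""
    results = []
    for track in library:
        if min_bpm is not None and track["bpm"] < min_bpm:
            continue
        if max_bpm is not None and track["bpm"] > max_bpm:
            continue
        if key is not None and track["key"] != key:
            continue
        if genre is not None and track["genre"] != genre:
            continue
        results.append(track)
    return results

# e.g. a dynamic "peak time" playlist:
# smart_filter(library, min_bpm=126, genre="techno")
```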

Keyboard Shortcuts Editor. There’s a full editor for assigning individual features to custom shortcuts – which in turn can also map to custom hardware or the MacBook Pro Touch Bar.

CDJ and third-party hardware support. Whereas some other players make their own hardware or limit compatibility (or even require specific hardware just to launch, ahem), Algoriddim’s approach is more open. So they’re fully certified by Pioneer for CDJ compatibility, they support 60 MIDI controllers out of the box, and they have an extensive MIDI learn function.

More cueing and looping. Version 2 now has up to eight cue points and loops, with naming, per song. (I recently lauded Soda for adding this.) You can also now assign loop triggers to cue points.

Single deck mode for preparation. Okay, some (cough, again Serato) lock you into this view if you don’t have authorized hardware plugged in. But here, it’s designed specifically for the purpose of making set prep easier.

Accessibility. VoiceOver support makes djay Pro 2 work for vision-impaired users. We really need more commitment to this in the industry; it’s also been great to see this technology from Algoriddim showcased at Apple’s developer conference. If you’re using this (and hopefully CDM is working well with screen readers), do let us know.

New photo / still image support.

And it does photos

Back to the less club/pro features: the other breakthrough for casual users, weddings, and commercial gigs is photo integration. Drag and drop photos or albums onto the visual decks, and the software will make beat-matched slide shows.

The photo decks also work with existing, fairly powerful VJ features, which includes external output, effects, and the like. You can also adjust beat sync.

Still image support builds on an existing video/VJ facility.

Plus a no-brainer price

The other thing that’s disruptive about djay Pro 2: price. It’s US$49.99, with an intro price of US$39.99, on the App Store.

You’ll need Spotify Premium for those features, of course, and macOS 10.11 or later is required.

https://www.algoriddim.com/

The post djay Pro 2 brings algorithms and machine learning to DJing appeared first on CDM Create Digital Music.

Here’s what artists in the 50-hour Moogfest live stream have to say

As Moogfest runs an international, 50-hour livestream of women and transgender artists, here are those voices talking about music, technology, and inspiration.

We’ll update this piece as we hear from more artists, so keep reloading in the next couple of days for more. (At top: Ana Paula Santana.)

See also our full writeup of this project and the first wave lineup announcement from the festival. All images courtesy Moogfest and the artists.

Ana Paula Santana (Guadalajara, Mexico)

1. What was your first access to electronic music technology? Where did you go to learn more about it – and did you find any obstacles to doing so?

I started to do experimental sound compositions when I was working as an editor for a Mexican radio station. In the beginning I was doing soundscapes with some electronic instruments in Ableton Live, and then I integrated different keyboards and voice. After a while doing this, I went to Barcelona to study for a master’s degree in sound art, and there I met electronic musicians with whom I collaborated. From this last experience I learned many tricks and techniques to create with.

2. What is your choice of instrumentation for the stream, and where in it do you draw inspiration?

I’m going to play with a microKORG synthesizer, four contact microphones, and one MIDI interface. I also do atmospheres with my voice, and I use the feedback in the space as a frequency generator to play with on the MIDI keyboard.

I’m inspired by constant machine sounds, the sound of the city, and speed in contrast with natural and random soundscapes. I’m also inspired by love stories, ’cause I think what I do has a lot of melancholy in it.

3. What does it mean to participate in this stream for you?

I’m very happy; it’s a great opportunity to share my work, and I love the idea of it being a festival to celebrate the creation of female sound. I also feel very honored to share my work together with artists whom I admire.

FARI B (London, UK)

1. What was your first access to electronic music technology? Where did you go to learn more about it – and did you find any obstacles to doing so?

Through sewing and knitting I learned algorithmic thinking, and I studied acoustic music and later journalism, which taught me how to edit sound. But at 12 I had a ZX Spectrum…! I used to load the games with a cassette player…
The obstacle was that there was no culture of learning this stuff among my friends, or at my schools or colleges; I had to find my interest group by volunteering as an engineer at an arts music radio station called Resonance 104.4FM in 2004.

2. What is your choice of instrumentation for the stream, and where in it do you draw inspiration?

A whole load of found objects and handmade instruments and keyboard… inspiration comes from the many journeys and performances I’ve done around the world, from Novi Sad to the Isle of Wight.

3. What does it mean to participate in this stream for you?

That something’s shifting in interest and perception, about whose voices we are listening to. Mainstream can’t cater for everyone! Humanity is starting to reflect itself back at itself in media properly, at last.

Maia Koenig (Buenos Aires, Argentina)

1. What was your first access to electronic music technology? Where did you go to learn more about it? Did you find any obstacle to doing it?

In 2008 I started playing bass in Mielcitas Trash Me. I did not have money and I needed a distortion pedal, so a friend helped me build one; from there, I’ve related to electronics in a very intuitive way. It’s a little complicated at the beginning, but once you let go, taking on a new DIY project is always an enriching challenge.

2. What is your choice of instrumentation for the broadcast, and where do you draw inspiration?

In the last few years I have been playing mostly Game Boy with the LSDJ tracker. I also incorporate a Casio PT-80 keyboard, a cacophonator (DIY), and whatever else comes up in the moment. I like to improvise with the environment and the energy of that instant, the only one I live in: the present.

3. What does it mean to participate in this current for you?

The electric current, the action / reaction, an impulse, an expansive flare, the electromagnetic network that unites us in a sometimes very destructive world, where music and other arts are part of a transmutation, that’s why noise is necessary as a protest aware that we can change things a little.

Nesa Azadikhah (Tehran, Iran)

1. What was your first access to electronic music technology? Where did you go to learn more about it? Did you find any obstacle to doing it?

My first access to electronic music would be purchasing software by the name of “FL Studio”. I started working with the software and getting more familiar and involved with electronic music. Before I purchased this software, I had also started working with CDJs and a DJ mixer, and learning from people around me who also played at that time. So I started using FL Studio, Logic, and Reason, and now I work with Ableton.

2. What is your choice of instrumentation for the broadcast, and where do you draw inspiration?

What I have chosen for this stream is a one-hour DJ set across different genres. I’m using my laptop, CDJs, and a mixer. I’m not using any instruments, because I’m not playing live, only a one-hour DJ set. Since I have more than one gig, I didn’t have the opportunity to prepare a live set for this stream, and that’s why my only choice was a DJ set.

3. What does it mean to participate in this stream for you?

Lastly, I have to say I am very excited and most of all honored to be part of the 50 artists in this live stream, and happy to be part of this team. This is a new and interesting experience for me. I am also looking forward to seeing even more growth and accomplishment for women artists and artists who are part of the transgender and non-binary community.

The post Here’s what artists in the 50-hour Moogfest live stream have to say appeared first on CDM Create Digital Music.

Watch Moogfest kick off with epic 50-hour livestream, lineup – minus men

Women and transgender artists have too often seen their work in electronic music pushed to the margins. Moogfest’s launch this year puts them first.

Moogfest this year promises to have the mix they’ve been brewing in the latest editions: part music festival, part conference, with music and music technology meeting up with larger themes around science and innovation. The difference is, instead of the presence of female and transgender artists being just another box for curators to tick — “hey, look, we booked some women” — here, they’re leading the announcement. That includes both a 50-hour livestream of back-to-back sets from a pretty amazing and diverse set of artists, plus the first wave announcement of artists.

Here’s Madame Gandhi explaining the idea:

The result is a mixture of people you know really well (legends like Suzanne Ciani, Moor Mother) alongside a lot of artists who are almost certainly new to you – particularly as they’ve been drawn from disparate genres and geographies. Indeed, these are the kind of people who have been quietly pushing music in new directions, but who might get lost in the fine print of music programs, or pushed to the side in music headlines. In fact, I think the upshot is a potential victory not only for gender equality, but for independent and out-of-the-mainstream music, too. And knowing CDM readers, irrespective of your gender, I think that’s a value you’re likely to enjoy seeing represented.

As Ciani tells The New York Times:

For Ms. Ciani, the theme for Moogfest 2018 is only natural. “Women have long been intimately connected to electronic music, perhaps because it offered a path outside male-dominated conventional music worlds,” she said. “What has changed is an awareness of women in the field historically as well as a huge influx of contemporary talent.”

Moogfest Shines a Spotlight on Female, Nonbinary and Transgender Musicians

To that I’d add that the “influx” and “contemporary” parts are also closely tied to international artists. Our own CDM contributor will have a conversation with a fellow Romanian woman in the Bucharest scene, for one link to that; I’ve also had conversations recently with some Iranian artists about the situation for women making music there (and the resulting international scene as they travel), and … well, look down the list of countries below.

Moor Mother, the ground-breaking experimental project of Philadelphia’s Camae Ayewa, is one of many people deserving of first-wave headliner recognition – and now getting it.

We’ll have some interviews with artists shortly, so Moogfest’s lineup is your gain, wherever you are.

To watch the livestream:

You can watch from anywhere beginning at 12pm ET on Wednesday December 6 until 2pm ET on Friday December 8.
http://AlwaysOn.Live

Or watch here:

I’m also cross-posting to our CDM Facebook page.

The schedule:

The stream is starting very radical, in a nice way! Unfortunately, upstream bandwidth / encoding looks … very choppy. Hoping some of the artists sort that out better. (This is a real roadblock of livestreaming, but that’s a topic for another time.)

Livestream artists:

Admina
(Bucharest, Romania)
Adriana T
(Athens, GA, USA)
Alissa Derubeis
(Asheville, NC, USA)
Amy Knoles
(Valencia, CA, USA)
Ana Paula Santana
(Guadalajara, Mexico)
Andrea Alvarez
(Buenos Aires, Argentina)
Annie Hart
(Brooklyn, NY, USA)
Awaymsg
(Durham, NC, USA)
Aseul
(Seoul, South Korea)
Bells Roar
(Albany, NY, USA)
Caz9
(Dublin, Ireland)
Club Chai (8ULENTINA & FOOZOOL)
(Bay Area, CA, USA)
Despicable Zee
(Oxford, UK)
DJ Haram
(Philadelphia, PA, USA)
Dot
(Los Angeles, CA, USA)
Ela Minus
(Bogotá, Colombia)
Elles
(London, UK)
Emily Wells
(New York, NY, USA)
Fari B
(London, UK)
FOSIL
(Santiago, Chile)
Galcid
(Tokyo, Japan)
Jil Christensen
(Durham, NC, USA)
KALONICA NICX
(Bandung, Indonesia)
Kandere
(Melbourne, Australia)
Katie Gately
(Los Angeles, CA, USA)
Kim Ki O
(Istanbul, Turkey)
Lauren Flax
(New York, NY, USA)
Lilith Ai
(London, UK)
Lucy Cliche
(Sydney, Australia)
Lya “Drummer”
(London, UK)
Madame Gandhi
(New Delhi, India)
Mileece
(Los Angeles, CA, USA)
Moor Mother
(Philadelphia, PA, USA)
Nazira
(Almaty, Kazakhstan)
Nesa Azadikhah
(Tehran, Iran)
Nicola Kuperus
(Detroit, MI, USA)
Nonku Phiri
(Johannesburg, South Africa)
OG Lullabies
(Washington, DC, USA)
OTOMO X (Fay Milton & Ayse Hassan)
(London, UK)
PlayPlay
(Durham, NC, USA)
Pulpy Shilpy
(Pune, India)
SARANA
(Samarinda, East Borneo)
Sassy Black
(Los Angeles, CA, USA)
Stud1nt
(Asheville, NC, USA)
Sui Zhen
(Melbourne, Australia)
Suzanne Ciani & Layne
(Bolinas, CA, USA)
Suzi Analogue
(Miami, FL, USA)
Therese Workman
(New York, NY, USA)
Vessel Skirt
(Hobart, Tasmania)
Zensofly
(Durham, NC, USA)

Of course, even better than live streaming is – being there in person. (No buffering issues! Or… if there are, seek medical attention!)

Here’s the first-wave lineup announcement, including a couple of friends (and a couple of idols)!

Amber Mark
Annie Hart
Armen Ra
Aurora Halal
Bonaventure
Carla Dal Forno
CEP (Caroline Polachek)
Caterina Barbieri
DJ HARAM
Ellen Allien
Emily Sprague
Fatima Al Qadiri
Fawkes
Gavin Rayna Russom
Helen Money
Honey Dijon
Jamila Woods
Jenny Hval
Kaitlyn Aurelia Smith
Karyyn
Katie Gately
Kristin Kontrol
Kyoka
Lawrence Rothman
Madame Gandhi
Maliibu Miitch
Midori Takada
Nadia Sirota
Nicole Mitchell
Noncompliant
Pamelia Stickney
Sassy Black
Shanti Celeste
SOPHIE
Stud1nt
Umfang
Upper Glossa

The post Watch Moogfest kick off with epic 50-hour livestream, lineup – minus men appeared first on CDM Create Digital Music.

The amazing classic synth and experimental moments on children’s TV

Before it reverted to Internet-age blandness, American kids’ TV enjoyed a golden age of music, scored by oddball indie composers and legends alike.

And, wow, it could even teach you about synthesis.

Perhaps the most famous of these moments is when none other than Suzanne Ciani went on 3-2-1 Contact in 1980 to step inside her studio:

Fred Rogers of Mister Rogers’ Neighborhood fame was actually a composer before going into television, and the show’s deep commitment to music education reflected that. That music was generally of the acoustic variety, but he did one day tote a rare ARP Soloist synthesizer along with his trademark shoes and handmade sweaters – and his message and song about “play” might well be an anthem for us all.

Canadian-born composer Bruce Haack made an epic appearance on that same show in 1968, where he demonstrated a homemade electronic instrument. Haack himself was as prolific a composer of far-out sci-fi music for children as he was of (much darker) experimental compositions and psychedelic works.

The best all-time “Fairlight CMI on a kids’ program” (because, amazingly, there’s been more than one of those) – Herbie Hancock, Sesame Street, 1983. Herbie keeps a terrific sense of cool and calm that all kids’ shows could learn from in this day of cloying, sugar-sweet patronizing programming:

Synths were all over vintage Sesame Street, often providing sound effects as in this oddly hypnotic Ernie puzzle:

Steve Horelick, the composer behind Reading Rainbow, showed off his Fairlight CMI and how digital sampling worked. (I have vivid memories of watching this as a kid – sorry, Steve.) Steve apparently came up at a time when Fairlight ownership was rare enough to get you gigs – but a good thing, too, as a whole generation still sings along with that theme song. And you probably got a second educational gift from Steve if you ever followed one of his brilliant video tutorials on Logic.

Even better than that is Reading Rainbow‘s synesthesia 3D trip – John Sanborn and Dean Winkler’s Luminaire, which was made for Montreal’s Expo ’86, to music by composer Daniel “No, I’m not Philip Glass” Lentz.

Better video of the actual animation and music, which – sorry, Mr. Glass, I actually kind of prefer to Glassworks:

Somehow this looks fresher than it did when it was new.

A young, chipper Thomas Dolby explained synthesis on Jim Henson’s little-known 1989 program The Ghost of Faffner Hall:

Oh yeah, also, apparently Jem and the Misfits imagined an audiovisual synth in 1985 that predicts both Siri and Coldcut / AV software years before their time. Plus dolls should always have synthesizer accessories:

Apart from education, there’s been some wildly adventurous music from obscure (who’s that?) and iconic sources (the Philip Glass?!) alike.

For a time, an experimental music Tumblr followed some of these moments. Here are some of my favorites.

Joan La Barbara does the alphabet (1977):

And yes, trip out with a composition by Philip Glass written especially for Sesame Street:

You can read the full history of this animation on Muppet Wiki.

More obscure, but clever (and I remember this one) – from HBO’s Braingames (1983-85), evidently by a guy named Matt Kaplowitz.

Not growing up in the UK, I’d never heard of Chocky, but it has this trippy, gorgeous opening with music by John W. Hyde:

American composer Paul Chihara’s 1983 score for a show called Whiz Kids is hilariously dated and nostalgia-packed now. But the man is a heavyweight in composition – think Nadia Boulanger student and LA Chamber Orchestra resident. He has an extensive film resume, too, which has now landed him a position at NYU:

From Chicago public access TV, there’s a show called Chic-A-Go-Go, which in 2001 hosted The Residents.

But The Residents were on Pee-Wee, too:

Absurdly awesome, to close: “The Experimental Music Must Be Stopped.” This one comes to us from 2010 and French animation series Angelo Rules:

The post The amazing classic synth and experimental moments on children’s TV appeared first on CDM Create Digital Music.

Watch a completely mental set of MeeBlip synth stop motion animations

You’ve got your acid basslines. Then, you’ve got your acid trips involving a bass synth. Roikat takes us in the direction of the latter.

Creatures dance around urban streets. AI deep dream wildlife stares at you on title cards. Worms amiably amble from car doors and make their way onto the amplitude knobs.

And there are cats. Of course there are cats.

It’s all adorable stop motion with the raw sounds of our MeeBlip synth and no, I really didn’t have any idea this was going to happen until I spotted it on YouTube. Roikat is evidently both animator and MeeBlip composer. The combination is brilliant. I’d go for a whole show.

Your sound demos will never be the same. Behold:

Of course, perhaps the wildest of all is this … ultrasonic demo?! (Watch it drive your cats crazy.)

Plus there was a Halloween jam some time back:

Whoever you are, Roikat, you’re crazy and a genius. Looking forward to more synth vids and those promised presets for Dave Smith – we’ll share them here!

The MeeBlip in question here is the anode series, but our triode is closely related to the anodes – and it’s on a Black Friday sale now with a lower price and all the cables you need included:

https://meeblip.com/

MeeBlip triode [shop]


What you can learn from Belief Defect’s modular-PC live rig

Belief Defect’s dark, grungy, distorted sounds come from hardware modulars in tandem with Reaktor and Maschine. Here’s how the Raster artists make it work.

Belief Defect is a duo from two known techno artists, minus their usual identities, with a full-length out on Raster (the label formerly known as Raster-Noton). It digresses from techno into aggressively crunchy left-field sonic tableau and gothic song constructions. There are some video excerpts from their stunning live debut at Berlin’s Atonal Festival, featuring visuals by OKTAform:

See also: STREAM BELIEF DEFECT’S DECADENT YET DEPRAVED ALBUM AND READ THE STORIES BEHIND THEIR CREEPY SAMPLES

They’ve got analog modulars in the studio and onstage, but a whole lot of the live set’s sounds emanate from computers – and the computer pulls the live show together. That’s no less expressive or performative – on the contrary, the combination with Maschine hardware means easy access to playing percussion live and controlling parameters.

Native Instruments asked me to do an in-depth interview for the new NI Blog, which gave me the chance to dig into their music. The full interview:

Belief Defect on their Maschine and Reaktor modular rig [blog.native-instruments.com]

They’ve got a diverse setup: modular gear across two studios, Bitwig Studio running some stems (and useful in the studio for interfacing with modulars), a Nord Drum connected via MIDI, and then one laptop running Maschine and Reaktor that ties it all together.

Here are some tips picked up from that interview and reviewing the Reaktor patch at the heart of their album and live rig:

1. Embrace your Dr. Frankenstein.

Patching together something from existing stuff to get what you want can give you a tool that gets used and reused. In this case, Belief Defect used some familiar Reaktor ensemble bits to produce their versatile drum kit and effects combo.

2. Saturator love.

Don’t overlook the simple. A lot of the sound of Belief Defect is clever, economical use of the distinctive sound of delay, reverb, filter, and distortion. The distortion, for instance, is the sound of Reaktor’s built-in Saturator 2 module, which is routed after the filter. I suspect that’s not accidental – by not overcomplicating layers of effects, it frees up the artists to use their ears, focus on their source material, and dial in just the sound they want.

And remember if you’re playing with the excellent Reaktor Blocks, you can always modify a module using these tried-and-true bits and pieces from the Reaktor library.

For more saturation, check out the free download they recommend, which you can drop into your Blocks modular rig, too:

ThatOneKnob Compressor [Reaktor User Library]

3. Check out Molekular for vocals.

Also included with Reaktor 6, Molekular is its own modular multi-effects environment. Belief Defect used it on vocals via the harmonic quantizer. And it’s “free” once you have Reaktor – waiting to be used, or even picked apart.

“Using the harmonic quantizer, and then going crazy and have everything not drift into gibberish was just amazing.”

Maschine clips in the upper left trigger snapshots in Reaktor – simple, effective.

4. Maschine can act as a controller and snapshot recall for Reaktor.

One challenge, I suspect, for some Reaktor users: while your patching and sound design process starts out all about the mouse and computer, when you play you want to get tangible. Here, Belief Defect have used Reaktor inside Maschine. The Maschine pads then trigger drum sounds, and the encoders control parameters.

Group A on Maschine houses the Reaktor ensemble. Macro controls are mapped consistently, so that turning the third encoder always has the same result. Then Reaktor snapshots are triggered from clips, so that each track can have presets ready to go.

This is so significant, in fact, that I’ll be looking at this in some future tutorials. (Reaktor also pairs nicely with Ableton Push in the same way; I’ve done that live with Reaktor Blocks rigs. Since what you lose going virtual is hands-on control, this gets it back – and handles that preset recall that analog modulars, cough, don’t exactly do.)

5. Maschine can also act as a bridge to hardware.

On a separate group, Belief Defect control their Nord Drum – this time using MIDI CC messages mapped to encoders. That group is color-coded Nord red (cute).
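For the curious, the protocol at work here is simple: a MIDI Control Change message is just three bytes – a status byte (0xB0 plus the channel), the controller number, and a 7-bit value. Here’s a minimal sketch of mapping an encoder to a CC message; the channel and CC numbers are placeholders, not the Nord Drum’s actual mapping:

```python
# Sketch of an encoder-to-MIDI-CC mapping as raw bytes.
# Channel and CC numbers below are hypothetical placeholders.

NORD_CHANNEL = 9   # hypothetical: MIDI channel 10, zero-indexed
TONE_CC = 14       # hypothetical CC number for a tone parameter

def encoder_to_cc(encoder_value, channel=NORD_CHANNEL, cc=TONE_CC):
    """Map an encoder position (0.0-1.0) to a 3-byte Control Change message."""
    value = max(0, min(127, int(encoder_value * 127)))  # clamp to 7 bits
    return bytes([0xB0 | channel, cc, value])

msg = encoder_to_cc(0.5)
print(msg.hex(" "))  # b9 0e 3f
```

Those three bytes would then go out any MIDI output port. The 7-bit value also explains the feel: 128 steps per parameter, which is one reason hardware sometimes sounds “steppier” over basic CC than software macros do.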

Belief Defect, the duo, in disguise. (You… might recognize them in the video, if you know them.)

6. Build a committed relationship.

Well, with an instrument, that is. By practicing with that one Reaktor ensemble, they built a coherent sound, tied the album together, and then had room to play – live and in the studio – by really making it an instrument and an extension of themselves. Some of the drum sounds, they point out, have lasted them ten years. There’s a parallel on the hardware side, too – they talk about taking their Buchla Music Easel out to work on.

Check out the full interview:

Belief Defect on their Maschine and Reaktor modular rig [blog.native-instruments.com]

Whoa.

Follow Belief Defect on Twitter:
https://twitter.com/Belief_Defect

and Instagram:
https://www.instagram.com/belief_defect/

Reaktor 6

Reaktor User Library

Photo credits: Giovanni Dominice.


Accusonus explain how they’re using AI to make tools for musicians

First, there was DSP (digital signal processing). Now, there’s AI. But what does that mean? Let’s find out from the people developing it.

We spoke to Accusonus, the developers of loop unmixer/remixer Regroover, to try to better understand what artificial intelligence will do for music making – beyond just the buzzwords. It’s a topic they presented recently at the Audio Engineering Society conference, alongside some other developers exploring machine learning.

At a time when a lot of music software retreads existing ground, machine learning is a relatively fresh frontier. One important distinction to make: machine learning involves training the software in advance, then applying those algorithms on your computer. But that already opens up some new sound capabilities, as I wrote about in our preview of Regroover, and can change how you work as a producer.

And the timing is great, too, as we take on the topic of AI and art with CTM Festival and our 2018 edition of our MusicMakers Hacklab. (That call is still open!)

CDM spoke with Accusonus’ co-founders, Alex Tsilfidis (CEO) and Elias Kokkinis (CTO). Elias explains the story from a behind-the-scenes perspective – but in a way that I think remains accessible to us non-mathematicians!

Elias (left) and Alex (right). As Elias is the CTO, he filled us in on the technical inside track.

How do you wind up getting into machine learning in the first place? What led this team to that place; what research background do they have?

Elias: Alex and I started out our academic work with audio enhancement, combining DSP with the study of human hearing. Toward the end of our studies, we realized that the convergence of machine learning and signal processing was the way to actually solve problems in real life. After the release of drumatom, the team started growing, and we brought people on board who had diverse backgrounds, from audio effect design to image processing. For me, audio is hard because it’s one of the most interdisciplinary fields out there, and we believe a successful team must reflect that.

It seems like there’s been movement in audio software from what had been pure electrical engineering or signal processing to, additionally, understanding how machines learn. Has that shifted somehow?

I think of this more as a convergence than a “shift.” Electrical engineering (EE) and signal processing (SP) are always at the heart of what we do, but when combined with machine learning (ML), it can lead to powerful solutions. We are far from understanding how machines learn. What we can actually do today is “teach” machines to perform specific tasks with very good accuracy and performance. In the case of audio, these tasks are always related to some underlying electrical engineering or signal processing concept. The convergence of these principles (EE, SP and ML) is what allows us to develop products that help people make music in new or better ways.

What does it mean when you can approach software with that background in machine learning. Does it change how you solve problems?

Machine learning is just another tool in our toolbox. It’s easy to get carried away, especially with all the hype surrounding it now, and use ML to solve any kind of problem, but sometimes it’s like using a bazooka to kill a mosquito. We approach our software products from various perspectives and use the best tools for the job.

What do we mean when we talk about machine learning? What is it, for someone who isn’t a researcher/developer?

The term “machine learning” describes a set of methods and principles engineers and scientists use to teach a computer to perform a specific task. An example would be the identification of the music genre of a given song. Let’s say we’d like to know if a song we’re currently listening to is an EDM song or not. The “traditional” approach would be to create a set of rules that say EDM songs are in this BPM range and have that tonal balance, etc. Then we’d have to implement specific algorithms that detect a song’s BPM value, a song’s tonal balance, etc. Then we’d have to analyze the results according to the rules we specified and decide if the song is EDM or not. You can see how this gets time-consuming and complicated, even for relatively simple tasks. The machine learning approach is to show the computer thousands of EDM songs and thousands of songs from other genres and train the computer to distinguish between EDM and other genres.
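As a toy sketch of the contrast Elias describes – hand-written rules versus learning from labeled examples – here’s a hypothetical genre classifier on made-up BPM and brightness features. This has nothing to do with Accusonus’ actual code; it just makes the two approaches concrete:

```python
# Rules vs. learning from examples, on two hypothetical features:
# tempo (BPM) and spectral centroid (a rough "brightness" in Hz).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Traditional" approach: hand-written thresholds.
def rule_based_is_edm(bpm, centroid_hz):
    return 120 <= bpm <= 140 and centroid_hz > 2000

# ML approach: show the computer labeled examples instead.
# Synthetic "EDM" songs cluster around 128 BPM with bright spectra;
# "other" songs are slower and darker.
edm = np.column_stack([rng.normal(128, 4, 200), rng.normal(3000, 400, 200)])
other = np.column_stack([rng.normal(95, 15, 200), rng.normal(1200, 300, 200)])
X = np.vstack([edm, other])
y = np.array([1] * 200 + [0] * 200)  # 1 = EDM, 0 = other

clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.predict([[128, 3100]])[0])  # → 1 (classified as EDM)
```

The point of the sketch: the rule function encodes our assumptions directly, while the classifier infers a boundary from the labeled data – and in a real system the hard part is extracting good features and gathering those thousands of labeled songs, not the final classifier.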

Computers can get very good at this sort of very specific task. But they don’t learn like humans do. Humans also learn by example, but don’t need thousands of examples. Sometimes a few or just one example can be enough. This is because humans can truly learn, reason and abstract information and create knowledge that helps them perform the same task in the future and also get better. If a computer could do this, it would be truly intelligent, and it would make sense to talk about Artificial Intelligence (A.I.), but we’re still far away from that. Ed.: lest the use of that term seem disingenuous, machine learning is still seen as a subset of AI. -PK

If a reader would like to dig deeper into the subject, a great blog post by NVIDIA and a slightly more technical blog post by F. Chollet will shed more light on what machine learning actually is.

We talked a little bit on background about the math behind this. But in terms of what the effect of doing that number crunching is, how would you describe how the machine hears? What is it actually analyzing, in terms of rhythm, timbre?

I don’t think machines “hear,” at least not now, and not as we might think. I understand the need we all have to explain what’s going on and find some reference that makes sense, but what actually goes behind the scenes is more mundane. For now, there’s no way for a machine to understand what it’s listening to, and hence start hearing in the sense a human does.

Inside Accusonus products, we have to choose what part of the audio file/data to “feed” the machine. We might send an audio track’s rhythm or pitch, along with instructions on what to look for in that data. The data we send are “representations” and are limited by our understanding of, for instance, rhythm or pitch. For example, Regroover analyses the energy of the audio loop across time and frequency. It then tries to identify patterns that are musically meaningful and extract them as individual layers.
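One standard technique for exactly this kind of time-frequency unmixing is non-negative matrix factorization (NMF), which factors a spectrogram into spectral templates and their activations over time. Regroover’s actual algorithm is proprietary and may well differ – this is just an illustrative sketch on a synthetic “loop”:

```python
# Decompose a time-frequency "energy" matrix into layers with NMF.
# One common unmixing technique -- not necessarily Regroover's.
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(1)

# Fake magnitude spectrogram of a drum loop: a low-frequency "kick"
# pattern plus a high-frequency "hat" pattern, mixed together.
freqs, frames = 64, 32
kick_spec = np.zeros(freqs); kick_spec[:8] = 1.0    # low bins
hat_spec = np.zeros(freqs); hat_spec[40:] = 0.7     # high bins
kick_act = np.zeros(frames); kick_act[::8] = 1.0    # on the beat
hat_act = np.zeros(frames); hat_act[4::8] = 1.0     # off the beat
V = np.outer(kick_spec, kick_act) + np.outer(hat_spec, hat_act)
V += 0.01 * rng.random(V.shape)                     # small noise floor

# Factor V ~= W @ H: columns of W are spectral templates,
# rows of H are their activations over time.
model = NMF(n_components=2, init="nndsvd", max_iter=500, random_state=0)
W = model.fit_transform(V)
H = model.components_

# Each extracted "layer" is one template times its activation curve.
layers = [np.outer(W[:, k], H[k]) for k in range(2)]
print(W.shape, H.shape)  # (64, 2) (2, 32)
```

Because the factorization is constrained to non-negative energies, the components tend to land on musically meaningful parts (here, the kick-like and hat-like patterns) rather than arbitrary cancellations – which is roughly the intuition behind extracting “layers” from a loop.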

Is all that analysis done in advance, or does it also learn as I use it?

Most of the time, the analysis is done in advance, or just when the audio files are loaded. But it is possible to have products that get better with time – i.e., “learn” as you use them. There are several technical challenges for our products to learn by using, including significant processing load and having to run inside old-school DAW and plug-in platforms that were primarily developed for more “traditional” applications. As plug-in creators, we are forced to constantly fight our way around obstacles, and this comes at a cost for the user.


What’s different about this versus another approach – what does this let me do that maybe I wasn’t able to do before?

Sampled loops and beats have been around for many years and people have many ways to edit, slice and repurpose them. Before Regroover, everything happened in one dimension, time. Now people can edit and reshape loops and beats in both time and frequency. They can also go beyond the traditional multi-band approach by using our tech to extract musical layers and original sounds. The possibilities for unique beat production and sound design are practically endless. A simple loop can be a starting point for many musical ideas.

How would you compare this to other tools on the market – those performing these kind of analyses or solving these problems? (How particular is what you’re doing?)

The most important thing to keep in mind when developing products that rely on advanced technologies and machine learning is what the user wants to achieve. We try to “hide” as much of the complexity as possible from the user and provide a familiar and intuitive user interface that lets them focus on the music and not the science. Our single-knob noise and reverb removal plug-ins are very good examples of this. The number of parameters and options in the algorithms would be too confusing to expose to the end user, so we created a simple UI to deliver a quick result.

If you take something as simple as being able to re-pitch samples, each time there’s some new audio process, various uses and abuses follow. Is there a chance to make new kinds of sounds here? Do you expect people to also abuse this to come up with creative uses? (Or has that happened already?)

Users are always the best “hackers” of our products. They come up with really interesting applications that push the boundaries of what we originally had in mind. And that’s the beauty of developing products that expand the sound processing horizons for music. Regroover is the best example of this. Stavros Gasparatos has used Regroover in an installation where he split industrial recordings, routing the layers to six speakers inside a big venue. He tried to push the algorithm to create all kinds of crazy splits and extract inspiring layers. The effect was that in the middle of the room you could hear the whole sound, and when you approached one of the speakers, crazy things happened. We even had some users who extracted inspiring layers from washing machine recordings! I’m sure the CDM audience can think of even more uses and abuses!

Regroover gets used in Gasparatos’ expanded piano project:

Looking at the larger scene, do you think machine learning techniques and other analyses will expand what digital software can do in music? Does it mean we get away from just modeling analog components and things like that?

I believe machine learning can be the driving force for a much-needed paradigm shift in our industry. The computational resources available today not only on our desktop computers but also on the cloud are tremendous and machine learning is a great way to utilize them to expand what software can do in music and audio. Essentially, the only limit is our imagination. And if we keep being haunted by the analog sounds of the past, we can never imagine the sound of the future. We hope accusonus can play its part and change this.

Where do you fit into that larger scene? Obviously, your particular work here is proprietary – but then, what’s shared? Is there larger AI and machine learning knowledge (inside or outside music) that’s advancing? Do you see other music developers going this direction? (Well, starting with those you shared an AES panel on?)

I think we fit among the forward-thinking companies that try to bring this paradigm shift by actually solving problems and providing new ways of processing audio and creating music. Think of iZotope with their newest Neutron release, Adobe Audition’s Sound Remover, and Apple Logic’s Drummer. What we need to share between us (and we already do with some of those companies) is the vision of moving things forward, beyond the analog world, and our experiences on designing great products using machine learning (here’s our CEO’s keynote in a recent workshop for this).

Can you talk a little bit about your respective backgrounds in music – not just in software, but your experiences as a musician?

Elias: I started out as a drummer in my teens. I played with several bands during high school and as a student at university. At the same time, I started getting into sound engineering, where my studies really helped. I ended up working a lot of gigs, from small venues to stadiums, doing everything from cabling and PA setup to mixing the show and monitors. During this time I got interested in signal processing and acoustics and I focused my studies on these fields. Towards the end of university I spent a couple of years in a small recording studio, where I did some acoustic design for the control room, recording and mixing local bands. After graduating I started working on my PhD thesis on microphone bleed reduction and general audio enhancement. Funny enough, Alex was the one who built the first version of the studio, he was the supervisor of my undergraduate thesis, and we spent most of our PhDs working together in the same research group. It was almost meant to be that we would start Accusonus together!

Alex: I studied classical piano and music composition as a kid, and turned to synthesizers and electronic music later. As many students do, I formed a band with some friends, and that band happened to be one of the few abstract electronic/trip hop bands in Greece. We started making music around an old Atari computer, an early MIDI-only version of Cubase that triggered some cheap synthesizers and recorded our first demo in a crappy 4-channel tape recorder in a friend’s bedroom. Fun days!

We then bought a PC and fancier equipment and started making our living writing soundtracks for theater and dance shows. During that period I practically lived as a professional musician/producer and had quit my studies. But after a couple of years, I realized that I was more and more fascinated by the technology side of music, so I returned to the university and focused on audio signal processing. After graduating from the Electrical and Computer Engineering Department, I studied acoustics in France and then started my PhD in de-reverberation and room acoustics at the same lab as Elias. We became friends, worked together as researchers for many years, and realized that we shared the same vision of creating innovative products to help everyone make great music! That’s why we founded Accusonus!

So much of software development is just modeling what analog circuits or acoustic instruments do. Is there a chance for software based on machine learning to sound different, to go in different directions?

Yes, I think machine learning can help us create new inspiring sounds and lead us to different directions. Google Magenta’s NSynth is a great example of this, I think. While still mostly a research prototype, it shows the new directions that can be opened by these new techniques.

Can you recommend some resources showing the larger picture with machine learning? Where might people find more on this larger topic?

https://openai.com/

Siraj Raval’s YouTube channel:

Google Magenta’s blog for audio/music applications https://magenta.tensorflow.org/blog/

Machine learning for artists https://ml4a.github.io/

Thanks, Accusonus! Readers, if you have more questions for the developers – or the machine learning field in general, in music industry developments and in art – do sound out. For more:

Regroover is the AI-powered loop unmixer, now with drag-and-drop clips

http://accusonus.com


Real underground: watch a live set in the Copenhagen metro

You’ve seen buskers in your subway, maybe, but odds are a full-on rave is a rarity. That’s what Strøm Festival and Anastasia Kristensen gave Copenhagen’s metro.

The basic notion: step onto your metro, get a live party. Watch:

Strøm, for their part, have established themselves as a cornerstone of the Danish electronic scene. And Russian-born, Copenhagen-based Anastasia Kristensen is maybe just the person you’d expect for this gig, in that lately it seems like she’s been everywhere.

I was curious, so asked her about the experience. She tells CDM:

“I compiled a live set with elements of Detroit techno, UK jungle, and all kinds of obscura that I could map to a MIDI controller and launch whenever. There was even an airhorn. 🙂

The experience was massive. As soon as people danced and jumped, the entire train was shaking. It felt like it could derail. I think it was a great way to rethink the way we can imagine a party and get around the city. Big up to Strøm for this!”

It’s also nice to hear some different vibes, beyond just what you’d catch in Tresor or Berghain from her.

I, uh, guess I can say I was following Anastasia before she blew up? (Resident Advisor says that was apparently earlier this year?) On the other hand, I think being everywhere, playing everything has long been her strong suit – someone with the resolve and raw discipline to relentlessly pursue music. I have to point that out just because I think it’s easy from the outside to assume this business is just luck – and that Kristensen, like many of the most prolific and reliable members of the scene, has accomplished all of this atop a full-time day job.

And yes, follow those people, and you tend to catch those “rising stars” as RA puts it! (It’s a relief to know that’s the case!)

Best to let her and her music speak for themselves, though. Check out both her latest RA mix and her detailed follow-up on what’s driving her currently:

RA.598 Anastasia Kristensen

Her tool of choice, incidentally, is the now-discontinued Pioneer XDJ Aero. What’s nifty about it: it’s small and cheaper than buying new CDJs and components separately, but it still lets you practice CDJ-style mixing somewhere other than sound check at a club or in front of a crowd. It maintains rekordbox compatibility, so you can plug in those USB sticks, and it works standalone if you choose. It could be something to look for used and … hey, Pioneer, maybe think about following up in this direction?

I’ve gotten to a few of her recent gigs, but Boiler Room was the one with cameras rolling, for some straight-ahead techno:

Just don’t miss her dark, evocative productions, which remain among my favorites:
