What culture, ritual will be like in the age of AI, as imagined by a Hacklab

Machine learning is presented variously as nightmare and panacea, gold rush and dystopia. But a group of artists hacking away at CTM Festival earlier this year did something else with it: they humanized it.

The MusicMakers Hacklab continues our collaboration with CTM Festival, and this winter I co-facilitated the week-long program in Berlin with media artist and researcher Ioann Maria (born in Poland, now in the UK). Ioann has long brought critical speculative imagination to her work (meaning, she gets weird and scary when she has to), as well as being able to wrangle large groups of artists and the chaos the creative process produces. Artists are a mess – as they need to be, sometimes – and Ioann can keep them comfortable with that and moving forward. No one could have been more ideal, in other words.

And our group delved boldly into the possibilities of machine learning. Most compellingly, I thought, these ritualistic performances captured a moment of transformation for our own sense of being human, as if folding this technological moment in against itself to reach some new witchcraft, to synthesize a new tribe. If we were suddenly transported to a cave with flickering electronic light, my feeling was that this didn’t necessarily represent a retreat from tech. It was a way of connecting some long human spirituality to the shock of the new.

This wasn’t just speculation about what AI would do to people, though. Machine learning applications were turned into interfaces, making the interaction between gesture and machine clearer. The free, artist-friendly Wekinator was a popular choice. That stands in contrast to corporate-funded AI and how it’s marketed – which is largely as a weird consumer convenience. (Get me food reservations tonight without me actually talking to anyone, and then tell me what music to listen to and who to date.)

Here, instead, artists took machine learning algorithms and made them another raw material for creating instruments. This was AI put to work enabling performance traditions. And that’s partly our hope in who we bring to these performance hacklabs: we want people with experience in code and electronics, but also performance media, musicology, and culture, in various combinations.

(Also spot some kinetic percussion in the first piece, courtesy dadamachines.)

Check out the short video excerpt or scan through our whole performance documentation. All documentation courtesy CTM Festival – thanks. (Photos: Stefanie Kulisch.)

Big thanks to the folks who give us support. The CTM 2018 MusicMakers Hacklab was presented with Native Instruments and SHAPE, which is co-funded by the Creative Europe program of the European Union.

Full audio (which makes for a nice sort of radio play, somehow, thanks to all these beautiful sounds):

Full video:

2018 participants – all amazing artists, and ones to watch:

Adrien Bitton
Alex Alexopoulos (Wild Anima)
Andreas Dzialocha
Anna Kamecka
Aziz Ege Gonul
Camille Lacadee
Carlo Cattano
Carlotta Aoun
Claire Aoi
Damian T. Dziwis
Daniel Kokko
Elias Najarro
Gašper Torkar
Islam Shabana
Jason Geistweidt
Joshua Peschke
Julia del Río
Karolina Karnacewicz
Marylou Petot
Moisés Horta Valenzuela AKA ℌEXOℜℭℑSMOS
Nontokozo F. Sihwa / Venus Ex Machina
Sarah Martinus
Thomas Haferlach



For some of the conceptual and research background on these topics, check out the Input sessions we hosted. (These also clearly inspired, frightened, and fired up our participants.)

A look at AI’s strange and dystopian future for art, music, and society

Minds, machines, and centralization: AI and music

The post What culture, ritual will be like in the age of AI, as imagined by a Hacklab appeared first on CDM Create Digital Music.

Minds, machines, and centralization: AI and music

Far from the liberated playground the Internet once promised, online connectivity now threatens to give us mainly pre-programmed culture. As we continue reflections on AI from CTM Festival in Berlin, here’s an essay from this year’s program.

If you attended Berlin’s festival this year, you got this essay I wrote – along with a lot of compelling writing from other thinkers – in a printed book in the catalog. I asked for permission from CTM Festival to reprint it here for those who didn’t get to join us earlier this year. I’m going to actually resist the temptation to edit it (apart from bringing it back to CDM-style American English spellings), even though a lot has happened in this field even since I wrote it at the end of December. But I’m curious to get your thoughts.

I was also lucky enough to program a series of talks for CTM Festival, which we made available in video form with commentary earlier this week, also with CTM’s help:
A look at AI’s strange and dystopian future for art, music, and society

The complete set of talks from CTM 2018 is now available on SoundCloud. It’s a pleasure to get to work with a festival that not only has a rich and challenging program of music and art, but serves as a platform for ideas, debate, and discourse, too. (Speaking of which, greetings from another European festival that commits to that – SONAR, in Barcelona.)

The image used for this article is an artwork by Memo Akten, used with permission, as suggested by curator and CTM 2018 guest speaker Estela Oliva. It’s called “Inception,” and I think it’s a perfect example of how artists can make these technologies expressive and transcendent, amplifying their flaws into something uniquely human.

Minds, Machines, and Centralization: Why Musicians Need to Hack AI Now


It’s now a defunct entity, but “Muzak,” the company that provided background music, was once everywhere. Its management saw to it that its sonic product was ubiquitous, intrusive, and even engineered to impact behavior — and so the word Muzak became synonymous with all that was hated and insipid in manufactured culture.

Anachronistic as it may seem now, Muzak was a sign of how telecommunications technology would shape cultural consumption. Muzak may be known for its sound, but its delivery method is telling. Nearly a hundred years before Spotify, founder Major General George Owen Squier originated the idea of sending music over wires — phone wires, to be fair, but still not far off from where we’re at today. The patent he got for electrical signaling doesn’t mention music, or indeed even sound content. But the Major General was the first successful business founder to prove in practice that electronic distribution of music was the future, one that would take power out of the hands of radio broadcasters and give the delivery company additional power over content. (He also came up with the now-loathed Muzak brand name.)

What we now know as the conventional music industry has its roots in pianola rolls, then in jukeboxes, and finally in radio stations and physical media. Muzak was something different, as it sidestepped the whole structure: playlists were selected by an unseen, centralized corporation, then piped everywhere. You’d hear Muzak in your elevator ride in a department store (hence the phrase, elevator music). There were speakers tucked into potted plants. The White House and NASA at some points subscribed. Anywhere there was silence, it might be replaced with pre-programmed music.

Muzak added to its notoriety by marketing the notion of using its product to boost worker productivity, through a pseudo-scientific regimen it called the “stimulus progression.” And in that, we see a notion that presages today’s app behavior loops and motivators, meant to drive consumption and engagement, ad clicks and app swipes.

Muzak for its part didn’t last forever, with stimulus progression long since debunked, customers preferring licensed music to this mix of original sounds, and newer competitors getting further ahead in the marketplace.

But what about the idea of homogenized, pre-programmed culture delivered by wire, designed for behavior modification? That basic concept seems to be making a comeback.

Automation and Power

“AI” or machine intelligence has been tilted in the present moment to focus on one specific area: the use of self-training algorithms to process large amounts of data. This is a necessity of our times, and it has special value to some of the big technical players who just happen to have competencies in the areas machine learning prefers — lots of servers, top mathematical analysts, and big data sets.

That shift in scale is more or less inescapable, though, in its impact. Radio implies limited channels; limited channels imply human selectors — meet the DJ. The nature of the internet as wide open to any kind of culture means wide-open scale. And it will necessarily involve machines doing some of the sifting, because it’s simply too large to operate otherwise.

There’s danger inherent in this shift. One, users may be lazy, willing to let their preferences be tipped for them rather than face the tyranny of choice alone. Two, the entities that select for them may have agendas of their own. Taken as an aggregate, the upshot could be greater normalization and homogenization, plus the marginalization of anyone whose expression is different, unviable commercially, or out of sync with the classes of people with money and influence. If the dream of the internet as global music community seems in practice to lack real diversity, here’s a clue as to why.

At the same time, this should all sound familiar — the advent of recording and broadcast media brought with it some of the same forces, and that led to the worst bubblegum pop and the most egregious cultural appropriation. Now, we have algorithms and corporate channel editors instead of charts and label execs — and the worries about payola and the eradication of anything radical or different are just as well-placed.

What’s new is that there’s now also a real-time feedback loop between user actions and automated cultural selection (or perhaps even soon, production). Squier’s stimulus progression couldn’t monitor metrics representing the listener. Today’s online tools can. That could blow apart past biases, or it could reinforce them — or it could do a combination of the two.

In any case, it definitely has power. At last year’s CTM hacklab, Cambridge University’s Jason Rentfrow looked at how music tastes could be predictive of personality and even political thought. The connection was timely, as the talk came the same week as Trump assumed the U.S. presidency, his campaign having employed social media analytics to determine how to target and influence voters.

We can no longer separate musical consumption — or other consumption of information and culture — from the data it generates, or from the way that data can be used. We need to be wary of centralized monopolies on that data and its application, and we need to be aware of how these sorts of algorithms reshape choice and remake media. And we might well look for chances to regain our own personal control.

Even if passive consumption may seem to be valuable to corporate players, those players may discover that passivity suffers diminishing returns. Activities like shopping on Amazon, finding dates on Tinder, watching television on Netflix, and, increasingly, music listening, are all experiences that push algorithmic recommendations. But if users begin to follow only those automated recommendations, the suggestions fold back in on themselves, and those tools lose their value. We’re left with a colorless growing detritus of our own histories and the larger world’s. (Just ask someone who gave up on those Tinder dates or went to friends because they couldn’t work out the next TV show to binge-watch.)
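That fold-back effect is easy to demonstrate. Here’s a toy simulation (all numbers invented, not drawn from any real service) in which each listening choice either follows the top recommendation — defined crudely as whatever has the most plays so far — or explores at random. The more faithfully users follow, the more listening collapses onto a single early winner:

```python
import random
from collections import Counter

def top_item_share(follow_rate, items=20, steps=2000, seed=42):
    """Fraction of all plays captured by the single most-played item."""
    rng = random.Random(seed)
    plays = Counter({i: 1 for i in range(items)})  # start from a level field
    for _ in range(steps):
        if rng.random() < follow_rate:
            # "Recommendation": simply the current most-played item.
            choice = plays.most_common(1)[0][0]
        else:
            # Independent exploration: any item, uniformly at random.
            choice = rng.randrange(items)
        plays[choice] += 1
    return plays.most_common(1)[0][1] / sum(plays.values())

# Pure exploration spreads plays out; faithful followers pile onto one hit.
print(top_item_share(0.0))  # roughly an even split across 20 items
print(top_item_share(0.9))  # one item swallows most of the plays
```

It’s a deliberately dumb model — real recommenders are far more sophisticated — but it shows the shape of the problem: once recommendations feed on their own output, whatever led early keeps leading.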

There’s also clearly a social value to human recommendations — expert and friend alike. But there’s a third way: use machines to augment humans, rather than diminish them, and open the tools to creative use, not only automation.

Music is already reaping benefits of data training’s power in new contexts. By applying machine learning to identifying human gestures, Rebecca Fiebrink has found a new way to make gestural interfaces for music smarter and more accessible. Audio software companies are now using machine learning as a new approach to manipulating sound material in cases where traditional DSP tools are limited. What’s significant about this work is that it makes these tools meaningful in active creation rather than passive consumption.

AI, back in user hands

Machine learning techniques will continue to expand as tools by which the companies mining big data make sense of their resources — from ore into product. It’s in turn how they’ll see us, and how we’ll see ourselves.

We can’t simply opt out, because those tools will shape the world around us with or without our personal participation, and because the breadth of available data demands their use. What we can do is to better understand how they work and reassert our own agency.

When people are literate in what these technologies are and how they work, they can make more informed decisions in their own lives and in the larger society. They can also use and abuse these tools themselves, without relying on magical corporate products to do it for them.

Abuse itself has special value. Music and art are fields in which these machine techniques can and do bring new discoveries. There’s a reason Google has invested in these areas — because artists very often can speculate on possibilities and find creative potential. Artists lead.

The public seems to respond to rough edges and flaws, too. In the 60s, when researcher Joseph Weizenbaum attempted to parody a psychotherapist with crude language pattern matching in his program ELIZA, he was surprised when users started to tell the program their darkest secrets and imagine understanding that wasn’t there. The crudeness of Markov chains as a predictive text tool — they were developed to analyze letter patterns in Pushkin’s verse, after all, not to generate language — has given rise to breeds of poetry based on their very weirdness. When Google’s DeepDream technique was applied using a database of dog images, the bizarre, unnatural images that warped photos into dogs went viral online. Since then, Google has developed vastly more sophisticated techniques that apply realistic painterly effects and… well, it seems that’s attracted only a fraction of the interest the dog images did.
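For the curious, the whole word-level Markov chain trick fits in a few lines of Python: record which words follow which, then walk those transitions at random. The corpus below is a made-up fragment, just to show the mechanics:

```python
import random
from collections import defaultdict

def build_chain(text):
    """Map each word to the list of words that follow it in the corpus."""
    words = text.split()
    chain = defaultdict(list)
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=10, seed=None):
    """Walk the chain from `start`, picking each successor at random."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        followers = chain.get(out[-1])
        if not followers:  # dead end: this word never had a successor
            break
        out.append(rng.choice(followers))
    return " ".join(out)

corpus = "the cold rain falls and the cold wind sings and the rain sings on"
chain = build_chain(corpus)
print(generate(chain, "the", seed=1))
```

The charm — and the weirdness — comes from the model only ever knowing one word of context, which is exactly why its output drifts into accidental poetry.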

Maybe there’s something even more fundamental at work. Corporate culture dictates predictability and centralized value. The artist does just the opposite, capitalizing on surprise. It’s in the interest of artists if these technologies can be broken. Muzak represents what happens to aesthetics when centralized control and corporate values win out — but it’s as much the widespread public hatred that’s the major cautionary tale. The values of surprise and choice win out, not just as abstract concepts but also as real personal preferences.

We once feared that robotics would eliminate jobs; the very word is derived (by Czech writer Karel Čapek’s brother Josef) from the Czech word for forced labor. Yet in the end, robotic technology has extended human capability. It has taken us as far as space, and, through Logo and its Turtle, it has even taught generations of kids math, geometry, logic, and creative thinking in code.

We seem to be at a similar fork in the road with machine learning. These tools can serve the interests of corporate control and passive consumption, optimized only to extract value from their human users. Or, we can abuse and misuse the tools, take them apart and put them back together again, apply them not in the sense that “everything looks like a nail” when all you have is a hammer, but as a precise set of techniques to solve specific problems. Muzak, in its final days, was nothing more than a pipe dream. What people wanted was music — and choice. Those choices won’t come automatically. We may well have to hack them.

PETER KIRN is an audiovisual artist, composer/musician, technologist, and journalist. He is the editor of CDM and co-creator of the open source MeeBlip hardware synthesizer (meeblip.com). For six consecutive years, he has directed the MusicMakers Hacklab at CTM Festival, most recently together with new media artist Ioann Maria.



A look at AI’s strange and dystopian future for art, music, and society

Machine learning and new technologies could unlock new frontiers of human creativity – or they could take humans out of the loop, ushering in a new nightmare of corporate control. Or both.

Machine learning, the field of applying neural networks to data analysis, unites a range of issues from technological to societal. And audio and music are very much at the center of the transformative effects of these technologies. Commonly dubbed (partly inaccurately) “artificial intelligence,” they suggest a relationship between humans and machines, individuals and larger state and corporate structures, far beyond what has existed traditionally. And that change has gone from far-off science fiction to a reality that’s very present in our homes, our lives, and of course the smartphones in our pockets.

I had the chance to co-curate with CTM Festival a day of inputs from a range of thinkers and artist/curators earlier this year. Working with my co-host, artist and researcher Ioann Maria, we packed a day full of ideas and futures both enticing and terrifying. We’ve got that full afternoon, even including audience discussion, online for you to soak in.

Me, with Moritz, pondering the future. Photo: CTM Festival / Isla Kriss.

And there are tons of surprises. There are various terrifying dystopias, with some well-reasoned arguments for why they might actually come to fruition (or evidence demonstrating these scenarios are already in progress). There are more hopeful visions of how to get ethics, and humans, back in the loop. There are surveys of artistic responses.

All of this kicked off our MusicMakers Hacklab at CTM Festival, which set a group of invited artists on collaborative, improvisatory explorations of these same technologies as applied to performance.

These imaginative and speculative possibilities become not just idle thoughts, but entertaining and necessary explorations of what might be soon. This is the Ghost of Christmas Yet-to-Come, if a whole lot more fun to watch, here not just to scare us, but to spur us into action and invention.

Let’s have a look at our four speakers.

Machine learning and neural networks

Moritz Simon Geist: speculative futures

Who he is: Moritz is an artist and researcher; he joined us for my first-ever event for CTM Festival with a giant robotic 808, but he’s just as adept at researching history and futures.

Topics: Futurism, speculation, machine learning and its impact on music, body enhancement and drugs

Takeaways: Moritz gives a strong introduction to style transfer and other machine learning techniques, then jumps into speculating on where these could go in the future.

In this future, remixes and styles and timbres might all become separate from a more fluid creativity – but that might, in turn, dissolve artistic value.

“In the future … music will not be conceived as an art form any more.” – Moritz Simon Geist

Then, Moritz goes somewhere else entirely – dreaming up speculative drugs that could transform humans, rather than only machines. (The historical basis for this line of thought: Alexander Shulgin and his drug notebooks, which might even propose a drug that transforms perception of pitch.)

Moritz imagines an “UNSTYLE” plug-in that can extract vocals – then change genre.

What if self-transformation – or even fame – were in a pill?

Gene Kogan: future dystopias

Who he is: An artist/technologist who works with generative systems and their overlap with creativity and expression. Don’t miss Gene’s expansive open source resource for code and learning, machine learning for artists.

Topics: Instrument creation, machine learning – and eventually AI’s ability to generate its own music

Takeaways: Gene’s talk began with “automation of songwriting, production, and curation” as a topic – but tilted enough toward dystopia that he changed the title.

“This is probably going to be the most depressing talk.”

In a more hopeful vision, he presented the latest work of Snyderphonics – instruments that train themselves as musicians play, rather than only the other way around.

He turned to his own work in generative models and artistic works like his Donald Trump “meat puppet,” but presented a scary image of what would happen if analytic and generative machine learning models eventually combined, producing music without human involvement:

“We’re nowhere near anything like this happening. But it’s worth asking now, if this technology comes to fruition, what does that mean about musicians? What is the future of musicians if algorithms can generate all the music we need?”

References: GRUV, a generative model for producing music

WaveNet, the DeepMind tech being used by Google for audio

Sander Dieleman’s content-based recommendations for music

Gene presents – the death of the human musician.

Wesley Goatley: machine capitalism, dark systems

Who he is: A London-based sound artist and researcher in “critical data aesthetics,” plumbing the meaning of data in his own work and as a media theorist

Topics: Capitalism, machines, aesthetics, Amazon Echo … and what they may all be doing to our own agency and freedom

Takeaways: Wesley began with “capitalism at machine-to-machine speeds,” then traced how that informs systems which, hidden away from criticism, can enforce bias and power. In particular, he pitted claims like “it’s not minority report – it’s science; it’s math!” against the realities of how these systems were built – by whom, for whom, and with what reason.

“You are not working them; they are working you.”

As companies like Amazon and Google extend control, under the banner of words like “smart” and “ecosystem,” Wesley argues, what they’re really building is “dark systems”:

“We can’t get access or critique; they’re made in places that resemble prisons.”

The issue then becomes signal-to-noise. Data isn’t really ever neutral, so the position of power lets a small group of people set an agenda:

“[It] isn’t a constant; it’s really about power and space.”

Wesley on dark connectionism, from economics to design. Photo: CTM Festival / Isla Kriss.

Deconstructing an Amazon Echo – and data and AI as echo chamber. Photo: CTM Festival / Isla Kriss.

What John Cage can teach us: silence is never neutral, and neither is data.

Estela Oliva: digital artists respond

Who she is: Estela is a creative director / curator / digital consultant, an anchor of London’s digital art scene, with work on Alpha-ville Festival, a residency at Somerset House, and her new Clon project.

Topics: Digital art responding to these topics, in hopeful and speculative and critical ways – and a conclusion to the dystopian warnings woven through the afternoon.

Takeaways: Estela grounded the conclusion of our afternoon in a set of examples from across digital arts disciplines and perspectives, showing how AI is seen by artists.

Works shown:

Terence Broad and his autoencoder

Sougwen Chung and Doug, her drawing mate


Marija Bozinovska Jones and her artistic reimaginings of voice assistants and machine training:

Memo Akten’s work (also featured in the image at top), “you are what you see”

Archillect’s machine-curated feed of artwork

Superflux’s speculative project, “Our Friends Electric”:


Estela also found dystopian possibilities – as bias, racism, and sexism are echoed in the automated machines. (Contrast, indeed, the machine-to-machine amplification of those worst characteristics with the more hopeful human-machine artistic collaborations here, perhaps contrasting algorithmic capitalism with individual humanism.)

But she also contrasted that with more emotionally intelligent futures, especially with the richness and dimensions of data sets:

“We need to build algorithms that represent our values better – but I’m just worried that unless we really talk about it more seriously, it’s not going to happen.”

Estela Oliva, framed by Memo Akten’s work. Photo: CTM Festival / Isla Kriss.

It was really a pleasure to put this together. There’s obviously a deep set of topics here, and ones I know we need to continue to cover. Let us know your thoughts – and we’re always glad to share in your research, artwork, and ideas.

Thanks to CTM Festival for hosting us.



Hugo rattled Berghain’s enormous system, talked to us about sound

What do you do when faced with a sound system associated with a very particular techno sound? One answer: push the speakers until they scream, in a good way.

That’s what Hugo Esquinca did last month at CTM Festival – okay, under the watchful eyes of one of the club’s technicians. (That tech seemed happy with the results; I saw him leap over to Hugo after the show, grinning.)

It’s just another creative sound art experiment from Hugo and fits perfectly with the ethos of the collective he’s part of, oqko.

As part of our new series Cues, I’ll be talking to artists about musical creativity and live performance. And so for this one, we get an exclusive live performance – recorded in front of us at Maze, an underground club in Kreuzberg – and a chat with Hugo about his work. Listen (I’ll have podcast subscription information for you next week, too):

If you’re tired of commercial boilerplate for electronics, feast your brain on this text Hugo shared on his creation:

Study on (in)operable rigour at this years edition of CTM @ Berghain was a site-specific composition in which the extensive differences and categories assigned as dimension to space and duration to time were but variables among variables in various algorithmic operations which precisely exposed those values to intensive micro temporal variations, where indeterminate modulations produced a multiplicity of events ranging from aleatory amplification of certain room mode resonances, errors in the sound card deriving from random oversampling which produced unexpected sonorous incidents to emerge, and where regarding a recursive mode in the programming where no halt was assigned, the composition could have potentially runned for an indefinite amount of time, as it was precisely by means of my intervention in ‘stopping’ the events that they were prevented and/or terminally halted.

Here’s a closer look at some of the Pd and Max mayhem:

And we’ve covered oqko previously:

Transmissions from the magnetic ooze, in new oqko video premiere

How sound takes Lvis Mejía from Mexico to a collective unconscious


MusicMakers Hacklab Berlin to take on artificial minds as theme

AI is the buzzword on everyone’s lips these days. But how might musicians respond to themes of machine intelligence? That’s our topic in Berlin, 2018.

We’re calling this year’s theme “The Hacked Mind.” Inspired by AI and machine learning, we’re inviting artists to respond in the latest edition of our MusicMakers Hacklab hosted with CTM Festival in Berlin. In that collaborative environment, participants will have a chance to answer these questions however they like. They might harness machine learning to transform sound or create new instruments – or even answer ideas around machines and algorithms in other ways, through performance and composition ideas.

As always, the essential challenge isn’t just hacking code or circuits or art: it’s collaboration. By bringing together teams from diverse backgrounds and skill sets, we hope to exchange ideas and knowledge and build something new, together, on the spot.

The end result: a live performance at HAU2, capping off a dense week-plus festival of adventurous electronic music, art, and new ideas.

Hacklab application deadline: 05.12.2017
Hacklab runs: 29.1 – 4.2.2018 in Berlin (Friday opening, Monday – Saturday lab participation, Sunday presentation)

Apply online:
MusicMakers Hacklab – The Hacked Mind – Call for works

We’re not just looking for coders or hackers. We want artists from a range of backgrounds. We want people to wrestle with machine learning tools – absolutely, and some of those tools are specifically designed to be trained to recognize sounds and gestures and to work with musical instruments. But we also hope for unorthodox artistic reactions to the topic and its larger social implications.

To spur you on, we’ll have a packed lineup of guests, including Gene Kogan, who runs the amazing resource ml4a – machine learning for artists – and has done AV works like these:

And there’s Wesley Goatley, whose work delves into the hidden methods and biases behind machine learning techniques and what their implications might be.

Of course, machine learning and training on big data sets open up new possibilities for musicians, too. Accusonus recently explained that to us in terms of new audio processing techniques. And tools like Wekinator now use machine training to recognize gestures more intelligently, so you can transform electronic instruments and how humans play them.

Dog training. No, not like that – training your computer on dogs. From ml4a.

Meet Ioann Maria

We have, as always, a special guest facilitator joining me. This time, it’s Ioann Maria, whose audiovisual background will be familiar to CDM readers, but who has since entered a realm of specialization that fits perfectly with this year’s theme.

Ioann wrote a personal statement about her involvement, so you can get to know where she’s come from:

My trip into the digital started with real-time audiovisual performance. From there, I went on to study Computer Science and AI, and quickly got into fundamentals of Robotics. The main interest and focus of my studies was all that concerns human-machine interaction.

While I was learning about CS and AI, I was co-directing LPM [Live Performers Meeting], the world’s largest annual meeting dedicated to live video performance and new creative technologies. In that time I started attending Dorkbot Alba meet-ups – “people doing strange things with electricity.” From our regular gatherings arose an idea of opening the first Scottish hackerspace, Edinburgh Hacklab (in 2010 – still prospering today).

I grew up in the spirit of open source.

For the past couple of years, I’ve been working at the Sussex Humanities Lab at the University of Sussex, England, as a Research Technician, Programmer, and Technologist in Digital Humanities. SHL is dedicated to developing and expanding research into how digital technologies are shaping our culture and society.

I provide technical expertise to researchers at the Lab and University.

At the SHL, I do software and hardware development for content-specific events and projects. I’ve been working on long-term jobs involving big data analysis and visualization, where my main focus for example was to develop data visualization tools looking for speech patterns and analyzing anomalies in criminal proceedings in the UK over the centuries.

I also touched on the technical possibilities and limitations of today’s conversational interfaces, learning more about natural language processing, speech recognition and machine learning.

There’s a lot going on in our Digital Humanities Lab at Sussex and I’m feeling lucky to have a chance to work with super brains I got to meet there.

In the past years, I dedicated my time speaking about the issues of digital privacy, computer security and promoting hacktivism. That too found its way to exist within the academic environment – in 2016 we started the Sussex Surveillance Group, a cross-university network that explores critical approaches to understanding the role and impact of surveillance techniques, their legislative oversight and systems of accountability in the countries that make up what are known as the ‘Five Eyes’ intelligence alliance.

With my background in new media arts and performance, and some knowledge in computing, I’m awfully curious about what will happen during the MusicMakers Hacklab 2018.

What fascinating and sorrowful times we happen to live in. How will AI manifest and substantiate our potential, and how will we translate this whole weight and meaning into music, into performing art? Is it going to be us for, or against, the machine? I can’t wait to meet our to-be-chosen Hacklab participants and link our brains and forces into something creative, technological, and new – entirely IRL!

MusicMakers Hacklab – The Hacked Mind – Call for works

In collaboration with CTM Festival, CDM, and the SHAPE Platform.
With support from Native Instruments.

The post MusicMakers Hacklab Berlin to take on artificial minds as theme appeared first on CDM Create Digital Music.

How sound takes Lvis Mejía from Mexico to a collective unconscious

In Mexican artist Lvis Mejía’s imagination, the ritual of sound blinks from Peruvian shamans to the Berlin zoo. We talked to him about his new work. It’s an experience of shared culture, collective unconscious – and a tale of assembling a career and collective between Mexico and Europe.

Born in Mexico but with his career emerging in the European art scene, Mejía is now a known name from appearances at Centre Pompidou, MIT, Transmediale Berlin, MUTEK (Montréal & Mexico City), CTM Siberia, ICA London, Secret Solstice Festival, and Visiones Sonoras Festival.

But while Lvis’ answers are complex, layered, and abstract, his music is anything but dry. Instead of a clinical collage of sound recordings, his project Anthropology of AmnesiA is full of acrobatic, cacophonous collisions – a musicircus for the headphones. It’s part meditation, part anarchy – sometimes unexpectedly moving from one to the other. Some sounds are found, some synthesized, some spontaneously orchestral. It’s music for a century of dirt-cheap international airfare and dislodged post-colonial hierarchies, a celebratory ceremony of chaos. And that seems a wonderful antidote to the designer-chic, on-brand conservatism of so much music today.

It’s all a perfect fit for his own collective, oqko, on which this record appears.

There’s also quite a lot of thought behind this, as his expansive discussion with us reveals.

CDM: I want to speak first to this act of remembering. Field recordings are a big part of your process, I understand. Is that connected to your own process of collecting and returning to memories? Does it transport you back to where a recording was made, or are these raw materials in your work?

Lvis: Each take brings me back to its source if needed, nevertheless – within the bigger picture – it is more about every element serving the common cause, a place where each single factor succumbs to the sum of all. The singular beauty behind them as raw materials resides in their essential immateriality and the uniqueness of exploiting time as a bridge between the recording process as an esoteric happening and the dry moment of rationalized sound treatment at the studio in order to articulate the final shape of it.

It has been a while since the circumstance of remembering – “be mindful of” (from the Latin re, ‘again’, and memorari) – surpassed some barriers of my own comprehension and became a recurrent topic of analysis in my work. Regardless, approaching it through different angles and disciplines, the events in Anthropology of AmnesiA address rather the symptoms of what I understand as a sound essay.

Given this is such a multi-layered collage of sound, what is your process for gathering materials? Do you have an approach to collecting field recordings?

This album is a mysteriously unexpected hybrid. Its compositional process was directly affected by two other parallel endeavors. In other words, it is the offspring of a major ongoing project called Memory in Amnesia and one of my previous albums, AformA, which was released in 2012 on CMMAS.

Memory in Amnesia is a project based on the premise of a common origin. I believe that there is, at large, a single culture of humanity with a shared set of themes. The focus lies on following the trace of our ancestors and capturing audio recordings of rituals, which I see as the purest form of collective memory and as typical examples of different manifestations of a universal culture. Thanks to the Oral Tradition (viva voce) involved, these living ceremonies mark the influence of the past on the present. The route is not defined by myself, but the aim is to closely follow the footsteps of humanity, “out of Africa” (cf. Salopek), through Laurasia (cf. Witzel). This is the anthropological theory of a common origin. Most of the ethnological recordings derive from this project, though the compositional and arrangement aspect of Anthropology of AmnesiA stems directly from the time I was writing AformA, an album allowing a symbiosis of contemporary classical and religious sound recordings. It is actually since then that Anthropology of AmnesiA was designed to be some sort of sequel to AformA.

What about other sounds on the record – it seems there is a mix of field recordings and synthetic sounds; what are the sources for some of the more purely electronic timbres we hear (or were they also derived from the recordings)?

The synthetic components were tailored with the intention of generating an organic dialogue between them and their counterparts.

Being completely frank about this, I have to say that strangely, this particular task was the most pleasant to do. I feel quite bad making such a statement, though. The thing is, recording a ritual is extremely exciting and it always helps me to put myself in perspective — it is an indescribable sensation. Nevertheless, one has to acquire a lot of sensitivity to the situation and its surroundings, both technically and as an “external” entity that can potentially interfere with the sacral procedure, so it makes it difficult to “enjoy.” Whereas in studio work, one is in control of the situation itself. I somehow felt that producing the electronic material was more like doing the sound design for a piece. I know this could open some redundancy, but I really saw myself assuming the work of a sound designer/engineer because of the huge respect I have towards the rituals and the rest of the field recordings.

The structure seems really fluid, through-composed. But is there a narrative, an evolution?

I was very interested in generating a dissolution of perceived time using sound as an architectonic instance and an abstract non-suggestive dramaturgy.

Due to the fact that there is a decent amount of information being delivered in a relatively short timespan, I opted for some pauses and fixed calm scenes. Through the implementation of pauses, one can draw a very different storyline. Silence is eloquent and majestic. Silence, when used effectively, secures meaning and opens layers of interpretation <-> comprehension. Silence is sacred, so I tried to set it as a cue actor.

The main structure itself was composed, pretty much, in a literary way.

That is one reason why I also like to understand the whole as more of a sound essay.

At some point there are some hard left turns into the realm of percussion, more conventional instrumentation – maybe even a Varèse reference. What’s the inspiration for these moments? (Were these all samples, or did you add some additional live recordings?)

I felt there was a necessity to have a longer presence of some concrete events, and rhythm embodies this purpose very well. This necessity existed for two main reasons: one was to have a more recognizable component standing out from the composition through its repetitive character, and the second is utterly linked to repetition itself. Repetition provides a sense of time loss, and as I mentioned previously, it was an important aim to try to influence time perception.

It is interesting that you quote Varèse at this point, because without his work being a direct reference for the album, I consider the realm of percussion on Anthropology of AmnesiA to be very clearly similar to one of his concepts, “organized sound.”

Most of the rhythmic parts are a combination of field recordings and some subtle dubbing or studio-recorded arrangements. It was important to me to keep the idiosyncrasy of the field recordings, so I was focused more on the detailed operations supporting them.

There are a lot of layers here. Can you talk about some of the sources? This is a collective unconscious; is it somehow international? What are some of the geographies that are sourced here; are you ever concerned that you’re appropriating something that’s foreign?

While the major project, Memory in Amnesia, takes on a more self-research-bounded historic ontology, this album seeks to avoid secular neutrality. It is a piece evoking, as you properly mention, the collective unconscious. Most of the audio content could sound, to a certain extent, familiar to us. Even without knowing its exact source, there is a thread depicting this collective unconscious.

In point of fact, I decided to use a subheading for the title:
“Culture is essentially more than the reflection of human desire.”
(To me, this signifies more an insult to the human species and an ode to human culture.)

I am more concerned about scoping humankind as a one-culture phenomenon – more than one-race-act – and that is the very reason why the album examines a number of interpretations of rituals, orchestrations, chants, synthesis and field recordings – in the piece you’ll hear recordings of animals, fire, water and also the human heart – leaving appropriation aside. The whole is greater than the sum of its parts, and I am not adopting elements of another culture, it being a minority or not and myself being part of a dominant one or not, for the sake of my own aesthetics and/or benefits.

I highly respect all these expressions and traditions; they represent a big column of the project itself.

Just to mention some of the recordings’ precedences, here is a short list:
a Parisian Mosque, a Peruvian shamanic ceremony, some nice specimens from the Berlin Zoo, the chants of Japanese nuns and a Mexica (Aztec) ritual near Mexico City.

I know you come from Mexico; where did you grow up? My limited experience of Latin America versus the USA gave me the sense that the pre-Columbian, indigenous history is more present in urban life, that you’re always aware of these layers – maybe a bit like this soundscape. Is there a kind of pre-Columbian ritual sensibility here, or does your background play into this?

I grew up in one of Mexico City’s suburbs and left one year after finishing high school. I have always been aware of the richness of indigenous influence in Latin America, but I cannot claim this is a recurrent topic in my general practice. I belong to a mixed ethnicity, just one more under the veil of demographics embodying the result of a long species incest. Nevertheless, I decided to finish the record with a fragment of the “Ritual del Sol,” a former Mexica ceremony. It is a humble homage to a geography that permeated my early years through its charisma, among others.

Of course, now we meet in Berlin – and now I’ve had some other Mexican colleagues move here as I have, not to mention meeting more who are thinking of the move. Is this just an international capital for people making sound, or is there some sense that this is the refuge for people making more experimental stuff; are your opportunities more limited if you remain in Mexico?

I cannot say my opportunities would be more limited if I remained in Mexico, because I started my artistic career abroad and thus do not have an objective view on that. I could start making comparisons about the scenes, the modus operandi, the socio-political contexts and whatnot, but those are extremely complex contexts and historic circumstances. Forgive me for having to pass on that this time.

Fortunately, there are other interesting environments worldwide in which to work and develop. That is not exclusive to this city.

Berlin, Bärlin, BLN….

About the German capital being or turning into the hotspot of “#you-name-it-phenomenon”: sincerely, I am already very bothered by the hype many people have over this city. It is true that it is a refuge – without any political connotation – for some artistic communities, and that is good; but this place is already in the process of converting into a bubble based on relativity, for the sake of serving the desire of international contemporary hedonism and ignorance. The “objectivation” of a city. YouknowwhatImean.

I truly believe that places live from a natural dynamic of exchange, but what has been happening through the past years, is in many ways a one-way-rolling-sphere and therefore, this could represent a one-way-ticket to the metropolis and its inhabitants.

Contrary to this, and in a more individual scale, when coming here persuaded of concrete projects, the city embraces you, and that is very comfortable. That is what in my opinion provides many with a home as an actor in the cultural landscape.

But yes, all in all, the positive ph(f)ases that Berlin provides within and throughout its web are difficult to comprehend; it is not that simple to host so many like-minded individuals.

And speaking of the culture here, can you tell us a bit about oqko? What are its goals; how did it come together? Any other artists we should know?

oqko was formed back in 2015 by its other three members (Paolo Combes, Hugo Esquinca and Ástvaldur Thorisson) with the vision of running a gallery. I arrived a bit later, with the release of Shortcuts. Right within that process I felt I could start contributing to the journey. Ever since, it has become a second home and a breeding ground for ideas and inquisitiveness.

oqko tries to act in a more global way (in many senses). We are actually turning now into broader fields within investigative and editorial work, some sort of actual design studio questioning the formats when releasing music and hosting events, attended by 5 to 200 people.

Our commitment is now focused on the exploration of intersections between disciplines, sciences and (what I call) dysutopias.

Personally I see oqko, in the long term, as an “alternative institute exploring the phenomena of the now.” It is a long and intricate way to go, but we are trying to get there.

This month we are releasing ‘Nocturne Works’ by Swiss graphic designer and melancholic sound twiddler Romain Ioannone – a proto-botanical approach pairing his short compositions on cassette with the seeds of the Ipomoea alba, aka Moonflower.

Later on in September, within the frame of oqko’s second anniversary, we are starting to develop a sound installation in one of the former Soviet astronomical facilities in Armenia. More information is coming soon.

Another interesting project coming out at the end of the year is one of HMOT’s linguistic studies, accompanied by a soundtrack of five pieces. I can see the Siberian artist delivering syncretic knowledge about modern Russian and blasting modular synthesis.

And just before Anthropology of AmnesiA we have the remix album of astvaldur’s first album. Siete Catorce and Oly from NAAFI are involved as is Dis Fig from Purple Tape Pedigree and some of our other close friends from oqko and beyond.

So, you’re welcome to catch us at one of our events to discover our own work and that of other artists affiliated with http://www.oqko.org

Shortcuts was also visual; are there visual aspirations or connections to be made here? How will the acousmatic listening sessions work?

Shortcuts was an album for which artists were commissioned to articulate the visual language for (mostly short) compositions of mine. In the case of Anthropology of AmnesiA, I sincerely hope not to evoke a single image at all. It inherits the tradition of “deep listening”, an exercise difficult to achieve. This piece is committed to actively focusing one’s attention on the sound and the storytelling while being (physically) passive. So the acousmatic listening sessions you mention are planned to take place in different settings under diverse circumstances, in order to explore the relation between the content of the album and the surrounding conditions provided.

A situation in which the vinyl, the record player and the sound system will be the main characters on stage, and the stage itself resides in the environment.

The release event, and therefore the first listening session, is going to be hosted on September 15th in Yerevan as part of the Triennial of Contemporary Art in Armenia. The exact dates for the following sessions in other cities are going to be announced soon.

Can you tell us a bit about your production setup? What do you use to compose soundscapes? What do you use when you play live? (It strikes me the material here could be composed and recomposed in a lot of permutations live.)

I have a studio with a variety of acoustic and electronic instruments, nothing out of this world though; I was never obsessed with a specific piece of gear or instrument. I rather give every single object able to produce sound a chance to express itself and explore its possibilities. I am more amazed by the endless options one has to generate sonic output with pretty much anything. Nevertheless, I am pursuing the utopian wish of one day having a Museum of World Instruments, with the option for everyone to play and record the instruments. That is why I always try to bring an autochthonous instrument back home from the places I visit and use it at least once. I also ask friends to do that for me, so please understand this as an invitation to send one over, haha. When it comes to the live shows, it always depends on what and where I am going to play, but most of the time it starts with a laptop, the Nord Modular G2 and a MIDI controller as a basic setup.

I am looking forward to exploring a form of re-composition, or an arrangement for strings, percussion and choir, for a live version of Anthropology of AmnesiA. After having worked with a full orchestra a few years ago, and with a jazz ensemble and two choirs for a symphony, the idea of expanding the performance modus has been pursuing me.

While writing the music for that project, a fellow composer told me: “My friend, do you know what the problem with symphonic music is? …. It is that you get addicted to (writing) it.” Unfortunately, he is right. Ever since, almost everything I have been committed to in terms of music involves at least one classical instrument.



With Japan’s latest Vocaloid characters, another song from the future

It’s a cyber-technological future you can live now: a plug-in using sophisticated samples and rules that can sing like a Japanese pop star.

Yamaha has announced this week the newest voices for Vocaloid, their virtual singing software. This time, the characters are drawn from a (PS Vita) Sony video game property:

The main characters of the PS Vita games Utagumi 575 and Miracle Girls Festival, as well as the anime Go! Go! 575, Azuki Masaoka (voice actress Yuka Ohtsubo), have finally been made into VOCALOID Voice Banks!


Here’s what those new characters sound like:

And the announcement:

Announcing the debut of two new female Japanese VOCALOID4 Voice Banks

The packs themselves run about 9000 Yen, or roughly 80 US Dollars.

Perhaps this is an excuse to step back and consider what this is about, again. (Well, I’m taking it as one.)

To the extent that pop music is always about making a human more than real, Japan embraces a hyperreal artificiality in their music culture, so it’s not surprising technology would follow. Even given that, it seems the success of Yamaha’s Vocaloid software caught the developers by surprise, as the tool earned a massive fanbase. And while extreme AutoTune effects have fallen out of favor in the west, it seems Japan hasn’t lost its appetite for this unique sound – nor the cult following of aficionados that has grown outside the country.

Vocaloid isn’t really robotic – it uses extensive, detailed samples of a real human singer – but the software is capable of pulling and stretching those samples in ways that defy the laws of human performance. That is, this is to singing as the drum machine is to drumming.

That said, if you go out and buy a conventional vocal sample library, the identities of the singers are relatively disguised. Not so with a Vocaloid sample bank. The fictional character is detailed down to her height in centimeters, her backstory … even her blood type. (Okay, if you know the blood type of a real pop star, that’s a little creepy – but somehow I can imagine fans of these fictional characters gladly donating blood if called upon to do so.)

Lest this all seem to be fantasy, equal attention is paid to the voice actors and their resumes.

And then there’s the software. Vocaloid is one of the most complex virtual instruments on the market. There’s specific integration with Cubase, obviously owing to Yamaha’s relationship to Steinberg, but also having to do with the level of editing required to get precise control over Vocaloid’s output. And it is uniquely Japanese: while Yamaha has attempted to ship western voices, Japanese users have told me the whole architecture of Vocaloid is tailored to the particular nuances of Japanese inflection and pitch. Vocaloid is musical because the Japanese language is musical in such a particular way.

All of this has given rise to a music subculture built around the software and vocal characters that live atop the platform. That naturally brings us to Hatsune Miku, a fictional singer personality for Vocaloid whose very name is based on the words for “future” and “sound.” She’s one of a number of characters that have grown out of Vocaloid, but has seen the greatest cultural impact both inside and outside Japan.

Of course, ponder that for a second: something that shipped as a sound library product has taken on an imagined life as a pop star. There’s not really any other precedent for that in the history of electronic music … so far. No one has done a spinoff webisode series about the Chorus 1 preset from the KORG M1. (Yet. Please. Make that happen. You know it needs to.)

Hatsune Miku has a fanbase. She’s done packed, projected virtual concerts, via the old Pepper’s Ghost illusion (don’t call it a hologram).

And you get things like this:

Though with Hatsune Miku alone (let alone Vocaloid generally), you can go down a long, long, long rabbit hole of YouTube videos showing extraordinary range of this phenomenon, as character and as instrumentation.

In a western-Japanese collaboration, LaTurbo Avedon, Laurel Halo, Darren Johnston, Mari Matsutoya and Martin Sulzer (and other contributors) built their own operetta/audiovisual performance around Hatsune Miku, premiered as a joint presentation of CTM Festival and Transmediale here in Berlin in 2016. (I had the fortune of sitting next to a cosplaying German math teacher, a grown man who had convincingly made himself a physical manifestation of her illustrated persona – she sat on the edge of her seat enraptured by the work.)

I was particularly struck by Laurel Halo’s adept composition for Hatsune Miku – in turns lyrical and angular, informed by singing idiom and riding imagined breath, but subtly exploiting the technology’s potential. Sprechstimme and prosody for robots. Of all the various CTM/Transmediale commissions, this is music I’d want to return to. And that speaks to possibilities yet unrealized in the age of the electronic voice. (Our whole field, indeed, owes its path to the vocoder, to Daisy Bell, to the projected vocal quality of a Theremin or the monophonic song of a Moog.)

“Be Here Now” mixed interviews and documentary footage with spectacle and song; some in the audience failed to appreciate that blend, seen before in works like the Steve Reich/Beryl Korot opera The Cave. And some Hatsune Miku fans on the Internet took offense at their character being used in a way removed from her usual context, even though the license attached to her character provides for reuse. But I think the music holds up – and I personally enjoy this pop deconstruction as much as I do the tunes racking up the YouTube hits. See what you think:

All of this makes me want to revisit the Vocaloid software – perhaps a parallel review with a Japanese colleague. (Let’s see who’s up for it.)

After all, there’s no more human expression than singing – and no more emotional connection to what a machine is than when it sings, too.

More on the software, with an explanation of how it works (and why you’d want it, or not):



This cybernetic synth contains a brain grown from the inventor’s cells

Digital? Ha. Analog? Oh, please. Biological? Now you’re talking.

The core of this synthesizer was grown in a lab from actual living cells sliced right out of its creator. Skin cells are transformed into stem cells which then form a neural network – one that exists not in code, but in actual living tissue.

Now, in comparison to your brain (billions of neurons and a highly sophisticated interactive structure), this handful of petri dish neurons wired into some analog circuits is impossibly crude. It signifies your brain sort of in the way one antenna on an ant signifies the solar system. But philosophically, the result is something radically different from the technological world to which we’re accustomed. This is a true analog-biological instrument. It produces enormous amounts of data. It learns and responds, via logic that’s in cells instead of in a processor chip. Sci-fi style, biological circuitry and analog circuitry are blended with one another – “wet-analogue,” as the creator dubs it.

And for any of you who hope to live on as a brain in a jar in a Eurorack, well, here’s a glimpse of that.

Artist Guy Ben-Ary comes to Berlin this week to present his invention, the project of a highly collaborative inter-disciplinary team. And “cellF” – pronounced “self,” of course – will play alongside other musicians, for yet an additional human element. (This week, you get Schneider TM on guitar, and Stine Janvin Motland singing.)

There are two ways to think of this: one is as circuitry and cell structures mimicking the brain, but another is this biological art as a way of thinking about the synthesizer as invention. The “brain” lives inside a modular synth, and its own structures of neurons are meant in some way as homage to the modular itself.


Whether or not cellF’s musical style is to your liking, the biological process here is astounding on its own – and lets the artist use as his medium some significant developments in cell technology, ones that have other (non-artistic) applications to the future of healing.

The cells themselves come from a skin biopsy, those skin cells then transformed into stem cells via a ground-breaking technique: induced pluripotent stem cell (iPSC) technology.

Given the importance of skin cells to research and medical applications, that’s a meaningful choice. The network itself comprises roughly 100,000 cells – which sounds like a lot, but not in comparison to the 100 billion neurons in your brain. The interface is crude, too – it’s just an 8×8 electrode grid. But even so, Guy and his team have created a representation of the brain in relationship to analog circuitry. It’s just a symbol, in other words – but it’s a powerful one.

Of course, the way you wire your brain into a modular synthesizer when you use it is profoundly more subtle. But that also seems interesting in these sorts of projects: they provide a mirror on our other interactions, on understanding the basic building blocks of how biology and our own body work.

They also suggest artistic response as a way of art and science engaging one another. Just having those conversations can elevate the level of mutual understanding. And that matters, as our human species faces potentially existential challenges.



It also allows artistic practice to look beyond just the ego, beyond even what’s necessarily human. CTM curator Jan Rohlf talks to CDM about the “post-human” mission of these events this week.

For me personally, the underlying and most interesting question is how we can conceptualize and envision something like post-human music. Of course, humans long ago began to appreciate non-human-made sounds as music – for example, birdsong, insects, water, and so on. Nowadays we can add to this list generative algorithms and all kinds of sound-producing machines, or brain-wave music and the like. But the question always is, how do we define music? Is this all really music? Can it be music even if there is no intentional consciousness behind it that creates the sounds with the intent of making music? It is a blurry line, I guess. Animals can appreciate sounds and enjoy them. So we might say that they also participate in something that can be called music-making. But machines? At this stage?

The point is, to have the intention to make music, you need not only some kind of apparatus that creates sounds, but also a mind that has the intention to interpret the sounds as music. Music is experiential and subjective. There is a quote from Luciano Berio that captures this nicely: “Music is everything that one listens to with the intention of listening to music.”

Following this, we would really need an artificial or non-human consciousness that appreciates music and listens to sound with the intent of listening to music. Only then could we speak of post-human music.

Anyhow, thinking of the post-human as a way to rethink the position we humans have in this world, it still makes sense to call such artistic experiments post-human music. They contribute to a shift of perspective in which we humans are no longer the pivot or center of the world, but one element among many equal elements, living or non-living, human or non-human, that are intensely interconnected.


Robert Henke, finding beauty in ever-iterating work with lasers

Robert Henke in his post-Ableton life has continued to see his stock rise on the media art scene.

But in some ways, that’s a funny thing. You’ll very often see Robert in one of two guises – as club act, or large-scale AV event. Yet the very thing that makes his style so distinctive is somehow the opposite of what you normally expect from those arenas. Robert’s approach is meticulous, detail-oriented, compulsive. In some sense, I think that’s what makes it scale. Rather than crank the volume, push emotions, and embrace spectacle (in the AV/concert) and the visceral (in the club), what you get is the surgically precise, artful use of those settings.

Now, it’s just my own personal taste, but I find that I can relate to Robert’s work emotionally most in the smaller scale works – the ones at the margins, the gallery performance, the sketch. These are the miniatures to the full-canvas, big budget numbers.

“Spline” isn’t physically small, but it is one of these marginalia in comparison to the flagship Lumiere. And talk about scaling: it’s hypnotic to watch even just in this short video.

It may add the element those lasers were missing, in all their precision – some organic rough edges. A trip into Spline promises to feel like a field trip to an alien aquarium.

Have a watch/listen:


120 meters of thin fabric are suspended from the ceiling to form a curved curtain. Four lasers in the corners of the room each project sixteen sharp beams of light onto it. The curtain’s shape has been calculated using the mathematical principle of spline interpolation. By tracing the surface with the laser beams, a complex geometric figure of 64 intersecting lines emerges, built of light and fog. Movements are synchronised with sonic events. Sound and lasers are controlled in realtime by an algorithmic process, creating an infinite number of variations over the course of the exhibition period.
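The curtain description mentions spline interpolation as the mathematical principle behind the curve. As a rough illustration only – the control points and function name below are invented for the example, not taken from Henke’s actual software – a natural cubic spline through a handful of anchor points can be computed in plain Python:

```python
def natural_cubic_spline(xs, ys):
    """Interpolate the points (xs, ys) with a natural cubic spline.
    xs must be strictly increasing; returns a callable f(x)."""
    n = len(xs) - 1
    h = [xs[i + 1] - xs[i] for i in range(n)]
    # Right-hand side of the tridiagonal system for the second derivatives.
    alpha = [0.0] * (n + 1)
    for i in range(1, n):
        alpha[i] = 3.0 * ((ys[i + 1] - ys[i]) / h[i] - (ys[i] - ys[i - 1]) / h[i - 1])
    # Forward sweep ("natural" boundary: second derivative is zero at both ends).
    l = [1.0] * (n + 1)
    mu = [0.0] * (n + 1)
    z = [0.0] * (n + 1)
    for i in range(1, n):
        l[i] = 2.0 * (xs[i + 1] - xs[i - 1]) - h[i - 1] * mu[i - 1]
        mu[i] = h[i] / l[i]
        z[i] = (alpha[i] - h[i - 1] * z[i - 1]) / l[i]
    # Back-substitution for the cubic coefficients of each segment.
    c = [0.0] * (n + 1)
    b = [0.0] * n
    d = [0.0] * n
    for j in range(n - 1, -1, -1):
        c[j] = z[j] - mu[j] * c[j + 1]
        b[j] = (ys[j + 1] - ys[j]) / h[j] - h[j] * (c[j + 1] + 2.0 * c[j]) / 3.0
        d[j] = (c[j + 1] - c[j]) / (3.0 * h[j])

    def f(x):
        # Locate the segment containing x, clamping to the end segments.
        j = max(0, min(n - 1, next((k for k in range(n) if x < xs[k + 1]), n - 1)))
        dx = x - xs[j]
        return ys[j] + b[j] * dx + c[j] * dx * dx + d[j] * dx ** 3

    return f

# Hypothetical control points (in meters) for a curtain's floor plan.
curve = natural_cubic_spline([0.0, 1.0, 2.0, 3.0], [0.0, 1.0, 0.0, 1.0])
```

Sampling `curve(x)` densely between the endpoints yields a smooth path that passes exactly through the control points – the same principle, at whatever scale and in however many dimensions the installation actually uses.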

This week also coincides with news about another titan of German media art, Carsten Nicolai, who has again separated his brand/label (Noton) from what had been raster-noton. But it’s worth saying that Robert is a unicorn in this scene, essentially a one-man studio all his own, aided by an assistant or two and joining in collaborations (as with Christopher Bauder), but insisting on writing his own code and getting involved in every element of engineering. That’s not a criticism of Carsten – on the contrary, a solo alva noto show is a great example of his own ability to jam in any setting. But since there is limited room in the world to start something like Noton, it’s worth considering that even Robert Henke’s biggest works still have his literal fingerprints on them, not just his aesthetic ones.

And that part gives me hope. The honest truth is, an emerging media artist simply isn’t going to have access to big resources. The whole medium is actually doomed if we assume the biggest budget and the greatest technical wizardry and the largest scale always wins.

But instead of looking to that as the model, an up-and-coming artist might look at the fact that a Robert Henke piece can be self-coded. It could use, in place of those fancy lasers, projection and still work compositionally. It could be as effective on a 2-meter wall as in a big venue.

And so long as we appreciate those elements of the medium, anyone can be a star.



And from these smaller sketches, Robert continues to build the flagship into a magnum opus, each a new version – the laser AV equivalent of Beethoven’s repeatedly reworked Leonore overtures.

The best way to see the state-of-the-art evolution there will be through an event for which CDM is media partner, along with our friends at CTM Festival. (Like-minded, like-acronymed, we.)

Lumière II.

Is artistic refactoring a thing? It is now. Here’s Robert on the process that has led to Lumière III, the latest version we witness:

The desire to create a third iteration came not so much from a perceived problem with the second one as from an abundance of new ideas generated during the performances, and from the urge to update the graphics software so that it could also drive upcoming laser installations. The Fall installation presented in 2016 already made use of the new software, and the Spline installation in March 2017 will also rely on it, though it requires a further refinement which has not been integrated yet. That technical aspect is essential, as it provides the means to create both more complex concert works and more complex installations without reinventing the wheel each time. A further goal for Lumière III was to focus more on sound design, so the sound engine also received a major update.

The complete software package that now drives Lumière, both visually and sonically, became so complex that writing documentation for it became an essential part of the work. Lumière III will probably receive some minor updates and changes after the initial series of performances, and is planned to tour in 2017–2018.

Read up on the whole history on Robert’s site:

And if you’re in Berlin, Technosphärenklänge #3 arrives on Friday the 12th of May at the Haus der Kulturen der Welt. We’ll be bringing you more on the other partners in this event, as well.

Technosphärenklänge #3


The post Robert Henke, finding beauty in ever-iterating work with lasers appeared first on CDM Create Digital Music.

What if you could touch and feel a score?

What if scores could be touched and felt instead of only read? We’ve just come from a deep, far-ranging discussion with artist Enrique Tomás, a researcher at the Interface Culture Lab in Linz. It’s part of Enrique’s residency with CTM Festival and ENCAC – the European Network for Contemporary Audiovisual Creation, which also supports some of our work. And it’s presented as part of another of our MusicMakers hacklabs at CTM Festival. It’s worth sharing some thoughts already.

One of his more compelling illustrations of this was his PhD project, tangible scores:

Credits: Enrique Tomás – PhD at Interface Culture Lab – Kunstuniversität Linz
Supervised by Prof. Martin Kaltenbrunner
November 2013
A “Tangible Score” is a tactile interface for musical expression that incorporates a score in its physical shape, surface structure or spatial configuration.
Using sound as a continuous input signal, both synthesis and control are available simultaneously through direct manipulation on the engraved patterns of the physical score.
Every interface is conceived from a different graphical score that still represents a musical idea, but has also been specially designed to provide a diverse palette of acoustic signals when touched. More importantly, the tactile scores define and propose specific gestural behaviors through the different affordances and constraints of the object in front of the performer.
Sound is generated through a polyphonic concatenative synthesis driven by a real-time analysis and classification of input signal spectra. Each of the scores is loaded with a specific sound corpus that defines its sonic identity.
Thus, “Tangible Score” provides implicit visual and haptic feedback in addition to its core sonic functionality, making it intuitive and learnable, but also suitable as an interface for musical improvisation and sonic exploration.

Project page
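The synthesis described above rests on a simple core move: analyze the incoming signal, then pick the corpus grain whose spectral features best match it. As a toy sketch of that selection step (the grain names, corpus, and use of a single spectral-centroid feature are invented for illustration; Tomás’s system classifies full input spectra in real time):

```python
# Toy illustration of concatenative synthesis grain selection:
# match an incoming frame's spectral centroid to the nearest corpus grain.
import math

def spectral_centroid(frame, sr=44100):
    """Centre of mass of the magnitude spectrum (naive DFT for clarity)."""
    n = len(frame)
    mags = []
    for k in range(n // 2):
        re = sum(frame[i] * math.cos(2 * math.pi * k * i / n) for i in range(n))
        im = -sum(frame[i] * math.sin(2 * math.pi * k * i / n) for i in range(n))
        mags.append(math.hypot(re, im))
    total = sum(mags) or 1.0
    return sum(k * sr / n * m for k, m in enumerate(mags)) / total

def pick_grain(input_frame, corpus):
    """Return the corpus grain whose precomputed centroid best matches the input."""
    target = spectral_centroid(input_frame)
    return min(corpus, key=lambda g: abs(g["centroid"] - target))

# Hypothetical corpus: grains tagged with precomputed centroids (Hz).
corpus = [
    {"name": "low_rumble", "centroid": 300.0},
    {"name": "mid_scrape", "centroid": 2000.0},
    {"name": "high_hiss", "centroid": 8000.0},
]
```

Feed it a bright, hissy touch and it answers with a bright grain; a dull thud pulls a dark one. That mapping from gesture to sonic identity is what lets each engraved score feel like its own instrument.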

There’s also a version of his talk from IRCAM; we’ll be sharing some more investigation here on CDM soon, I hope (and have audio of the Berlin session).


But some stand-out issues for me were his thoughts on rethinking representation in musical interfaces, and on returning to the body as an integral part of expression and thought itself. (That included examples from the likes of choreographer William Forsythe – bringing us full circle to some other CDM connections, as Forsythe’s work has been deeply involved in investigations of dance, technology, and embodiment. Oh – plus we all have bodies. So there’s that.)

It also says a lot about the fundamentals of performance interactions – whether you’re just practicing your finger drumming on pads or building an entirely new interface. It says that part of what we’re doing is exploring our thoughts and emotions through our body – and that challenge requires new collaborations, new experimentation, and very often modifying or constructing new interfaces and techniques. That cuts to the heart of why we’re here in Berlin for another hacklab.

If you have projects you’d like us to see, or questions you’re pondering, do share. And thanks to our hosts at CTM Festival and Native Instruments as we venture into unexplored territory yet again. (Come visit us if you’re in town.)

Finally, some of Enrique’s music — which lately has been turning radios into instruments:

On this album, Enrique Tomás appropriates SDR devices (software-defined radios) and uses them as electronic music instruments. Through active exploration of the radio spectrum (1 MHz – 3 GHz) at various European locations, Tomás builds artificial soundscapes with extreme ranges of frequencies and amplitudes.
Originally composed for a multichannel setup, here we offer you the stereo version.

Original recordings made in Linz, Madrid and Cambridge (UK).
Mixed and mastered in Linz at Berisha’s Studios.
First Performed: Salzamt Linz (pre-release) and STWST Linz (release)

All rights reserved – Enrique Tomás
released October 18, 2016

The post What if you could touch and feel a score? appeared first on CDM Create Digital Music.