Azure Kinect promises new motion, tracking for art

Gamers’ interest may come and go, but artists are always exploring the potential of computer vision for expression. Microsoft this month has resurrected the Kinect, albeit in pricey, limited form. Let’s fit it to the family tree.

Time flies: musicians and electronic artists have now had access to readily available computer vision since the turn of this century. That initially looked like webcams, paired with libraries like the free OpenCV (still a viable option), and later repurposed gaming devices from Sony and Microsoft platforms.
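
If you’ve never gone the webcam route, the classic trick is frame differencing – compare each new frame to the last one and treat the amount of change as a crude motion signal. Here’s a minimal sketch of that idea with OpenCV’s Python bindings, assuming a default webcam at index 0 – illustrative only, not any particular artist’s code:

    import cv2

    cap = cv2.VideoCapture(0)                 # default webcam
    ok, previous = cap.read()
    previous = cv2.cvtColor(previous, cv2.COLOR_BGR2GRAY)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        diff = cv2.absdiff(gray, previous)    # how much changed since the last frame
        motion = diff.mean()                  # one crude "amount of movement" number
        print(motion)                         # map this to sound, video, light...
        cv2.imshow("difference", diff)
        previous = gray
        if cv2.waitKey(1) == 27:              # Esc to quit
            break

    cap.release()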

And then came Kinect. Kinect was a darling of live visual projects and art installations, because of its relatively sophisticated skeletal tracking and various artist-friendly developer tools.

History time

A full ten years ago, I was writing about the Microsoft project and its interactions, in its first iteration as the pre-release Project Natal. Xbox 360 support followed in 2010, Windows support in 2012 – while digital artists quickly hacked in Mac (and rudimentary Linux) support. Adoption by artists in music and digital media followed soon after.

For those of you just joining us, Kinect shines infrared light at a scene and takes an infrared image (so it can work irrespective of other lighting), which it converts into a 3D depth map of the scene. From that depth image, Microsoft’s software can also track the skeletons of one or two people, which lets you respond to the movement of bodies. Microsoft and partner PrimeSense weren’t the only ones to try this scheme, but they were the ones to ship the most units and attract the most developers.
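
If you want to poke at one of those depth images yourself, the open libfreenect driver (mentioned below) has a Python wrapper that hands you raw depth frames from the original cameras. A minimal sketch, assuming libfreenect, its Python bindings, numpy, and OpenCV are installed – this is the open-driver route, not Microsoft’s SDK:

    import freenect
    import numpy as np
    import cv2

    while True:
        depth, _timestamp = freenect.sync_get_depth()        # 11-bit depth value per pixel
        # Squash the 11-bit values down to 8 bits so the depth map is viewable as grayscale.
        view = (np.clip(depth, 0, 1023) >> 2).astype(np.uint8)
        cv2.imshow("kinect depth", view)
        if cv2.waitKey(1) == 27:                             # Esc to quit
            break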

We’re now on the third major revision of the camera hardware.

2010: Original Kinect for Xbox 360. The original. Proprietary connector with breakout to USB and power. These devices are far more common, as they were cheaper and shipped more widely. Despite the name, they do work with open drivers on the respective desktop systems.

2012: Kinect for Windows. Looks and works almost identically to Kinect for 360, with some minor differences (near mode).

Raw use of depth maps and the like for the above yielded countless music videos, and the skeletal tracking even more numerous and typically awkward “wave your hands around to play the music” examples.

Here’s me with a quick demo for the TED organization, preceded by some discussion of why I think gesture matters. It’s… slightly embarrassing, only in that it was produced on an extremely tight schedule, and I think the creative exploration of what I was saying about gesture just wasn’t ready yet. (Not only had I not quite caught up, but camera tech like what Microsoft is shipping this year is far better suited to the task than the original Kinect camera was.) But the points I’m making here have some fresh meaning for me now.

2013: Kinect for Xbox One. Here’s where things got more interesting – because of a major hardware upgrade, these cameras are far more effective at tracking and yield greater performance.

  • Active IR tracking in the dark
  • Wider field of vision
  • 6 skeletons (people) instead of two
  • More tracking features, with additional joints and creepier features like heart rate and facial expression
  • 1080p color camera
  • Faster performance/throughput (which was key to more expressive results)

Kinect One, the second camera (confusing!), definitely allowed more expressive applications. One high point for me was the simple but utterly effective work of Chris Milk and team, “The Treachery of Sanctuary.”

And then it ended. Microsoft unbundled the camera from Xbox One, meaning developers couldn’t count on gamers owning the hardware, and quietly discontinued the last camera at the end of October 2017.

Everything old is new again

I have mixed feelings – as I’m sure you do – about these cameras, even with the later results on Kinect One. For gaming, the devices were abandoned – by gamers, by developers, and by Microsoft as the company ditched the Xbox strategy. (Parallel work at Sony didn’t fare much better.)

It’s hard to keep up with consumer expectations. By implying “computer vision,” any such technology has to compete with your own brain – and your own brain is really, really, really good. “Sensors” and “computation” are all merged in organic harmony, allowing you to rapidly detect the tiniest nuance. You can read a poker player’s tell in an instant, while Kinect will lose the ability to recognize that your leg is attached to your body. Microsoft launched Project Natal talking about seeing a ball and kicking a ball, but… you can do that with a real ball, and you really can’t do that with a camera, so they quite literally got off on the wrong foot.

It’s not just gaming, either. On the art side, the tendency of these cameras to inspire the same demos over and over again – yet another magic mirror – might well be their downfall.

So why am I even bothering to write this?

Simple: the existing, state-of-the-art Kinect One camera is now available on the used market for well under a hundred bucks – for less than the cost of a mid-range computer mouse. Microsoft’s gaming business whims are your budget buy. The computers to process that data are faster and cheaper. And the software is more mature.

So while digital art has long been driven by novelty … who cares? Actual music and art making requires practice and maturity, of both tools and artist. It takes time. So, oddly, while creative specialists were ahead of the curve on these sorts of devices, those same communities might well do their real innovating in the lagging cycle of the same technology.

And oh yeah – the next generation looks very powerful.

Kinect: The Next Generation

Let’s get the bad news out of the way first: the new Kinect is both more expensive ($400) and less available (launching only in the US and China… in June). Ugh. And that continues Microsoft’s trend here of starting with general purpose hardware for mass audiences and working up to … wait, working up to increasingly expensive hardware for smaller and smaller groups of developers.

That is definitely backwards from how this is normally meant to work.

But the good news here is unexpected. Kinect was lost, and now is found.

The safe bet was that Microsoft would just abandon Kinect after the gaming failure. But to the company’s credit, they’ve pressed on, with some clear interest in letting developers, researchers, and artists decide what this thing is really for. Smart move: those folks often come up with inspiration that doesn’t fit the demands of the gaming industry.

So now Kinect is back, dubbed Azure Kinect – Microsoft is also hell-bent on turning Azure “cloud services” into a catch-all solution for all things, everywhere.

And the hardware looks … well, kind of amazing. It might be described as a first post-smartphone device. Say what? Well, now that smartphones have largely finalized their sensing capabilities, they’ve oddly left the arena open to other tech defining new areas.

For a really thorough write-up, you’ll want to read this run-down:


All you need to know on Azure Kinect
[The Ghost Howls, a VR/tech blog, see also a detailed run-down of HoloLens 2 which also just came out]

Here are the highlights, though. Azure Kinect is the child of Kinect and HoloLens. It’s a VR-era sensor, but standalone – which is perfect for performance and art.

Fundamentally, the formula is the same – depth camera, conventional RGB camera, some microphones, additional sensors. But now you get more sensing capabilities and substantially beefed-up image processing.

  • 1MP depth camera (not 640×480) – straight off of HoloLens 2, Microsoft’s augmented reality platform
  • Two modes: wide and narrow field of view
  • 4K RGB camera (with standard USB camera operation)
  • 7-microphone array
  • Gyroscope + accelerometer

And it connects either by USB-C (which can also be used for power) or as a standalone camera with a “cloud connection.” (You know, I’m pretty sure that means it has a wifi radio, but oddly all the tech reporters who talked to Microsoft bought the “cloud” buzzword, and no one says it outright. I’ll double-check.)

Also, Microsoft now supports both Windows and Linux (Ubuntu 18.04 + OpenGL v4.4).

Downers: 30 fps operation, limited range.

Something something, hospitals or assembly lines, Azure services, something that looks like an IBM / Cisco ad:

That in itself is interesting. Artists using the same thing as gamers sort of … didn’t work well. But artists using the same tool as an assembly line is something new.

And here’s the best part for live performance and interaction design – you can freely combine as many cameras as you want, and sync them without any weird tricks.

All in all, this looks like it might be the best networked camera, full stop, let alone best for tracking, depth sensing, and other applications. And Microsoft are planning special SDKs for the sensor, body tracking, vision, and speech.

Also, the fact that it doesn’t plug into an Xbox is a feature, not a bug to me – it means Microsoft are finally focusing on the more innovative, experimental uses of these cameras.

So don’t write off Kinect now. In fact, with Kinect One so cheap, it might be worth picking one up and trying Microsoft’s own SDK just for practice.

Azure Kinect DK preorder / product page

aka.ms/kinectdocs

Detroit techno, the 90s comic book – and epic new DJ T-1000 techno

In 1992, Alan Oldham aka DJ T-1000 imagined the epic saga of techno and Detroit as a trippy futuristic comic – and it’s prescient today. Plus, Alan’s got a banging new EP that you shouldn’t miss.

I’ve been meaning to share this since I first spotted it in a German-language article, so there’s no time like the present.

Alan was “Minister of Information” for Underground Resistance, as well as making his name as one of the all-time album cover greats with sexy, futuristic work for the likes of Transmat, Derrick May’s legendary imprint. Now, everything in Detroit is in vogue again, but this push and pull between Europe (aka, where the actual techno market is) and Detroit (where it started) is so clear in 1992 that this comic could almost have been posted now.

The setting was a release by pre-minimal Richie Hawtin as F.U.S.E., on Richie’s own Plus 8 Records. Bonus: that release came with a flexi disc and a comic. The comic stands out either way, not least for the presence of a futuristic supercomputer sequencer, a bit of a cross between a massive step sequencer, Deep Thought, and the Borg. Plus it’s great fun imagining UR’s LFO, Daniel Bell (aka DBX of “I’m losing control” fame), and Jochem Paap (Speedy J) as comic superheroes. Yeah, I’d see that Marvel movie.

At the very least, someone needs to make this sequencer.

Nerdcore did the honors and scanned the whole thing, if you need some techno comic reading:

https://nerdcore.de/2017/01/10/f-u-s-e-overdrive-flexidisc-comic/

But Alan deserves credit for his music as well as his graphic art, running those careers as he does in parallel. And his latest, “Message Discipline” EP as DJ T-1000 is a welcome shot of adrenaline in the electronic releases of the fall. It’s clear, focused, aggressive but perpetually bouncy – a blast of fresh sound at a time when so many releases are overthought, over-effected, and muddled in an attempt to shroud the dancing in layers of gloom.

Direct and concise, this is the sound of someone with real confidence in the genre. It’s four perfect cuts.

That’s interesting to me because we got a chance to gain some insight into Alan’s process, and it was very much about getting straight to that groove. So I’m not just here to shower words on this release; I’m sharing it partly because I imagine it might help people trying to find their own voice in dance music.

Grab it on Bandcamp:

https://djt1000.bandcamp.com/album/message-discipline-ep

Previously:

Cues: Detroit innovator Alan Oldham talks to us about techno, creation

More on his site:

http://www.alanoldham.com/

A marvelous sound machine inspired by a Soviet deep drilling project

Deep in the Arctic Circle, the USSR was drilling deeper into the Earth than anyone before. One artist has combined archaeology and invention to bring its spirit back in sound.

Meet SG-3 (СГ-3) — the Kola Superdeep Borehole. You know when kids would joke about digging a hole to China? Well, the USSR’s borehole got to substantial depths – 12,262 m (over 40,000 ft) at the time of the USSR’s collapse.

The borehole was so epic – and the Soviets so secretive – that it has inspired legends of seismic weapons and even demonic drilling. (A YouTube search gets really interesting – like some people who think the Soviets actually drilled into the gates to Hell.)

Artist Dmitry Morozov – ::vtol:: – evokes some of that quality while returning to the actual evidence of what this thing really did. And what it did is already spectacular – he compares the scale of the project to launching humans into space (well, sort of in the opposite direction).

Watch:

vtol’s installation 12262 is the perfect example of how sound can be made material, and how digging into history can produce futuristic, post-contemporary speculative objects.

The two stages:

Archaeology. Dima absorbed SG-3’s history and lore, and spent years buying up sample cores at auctions as they were sold off. And twice he visited the remote, ruined site himself – once in 2016, and then back in July with his drilling machine. He even located a punched data tape from the site, though of course it’s difficult to know what it contains. (The investigation began with the Dark Ecology project, a three-year curatorial/research/art project bringing together partners from Norway, Russia, and across Europe, and still bearing this sort of fascinating fruit.)

Invention: The installation itself is a kinetic sound instrument, reading the coded information from the punch tape and operating miniature drilling operations, working on actual core samples. The sounds you hear are produced mechanically and acoustically by those drills.

As usual, Dima lists his cooking ingredients, though I think the sum is uniquely more than these individual parts. It’s, as he describes it, a poetic, kinetic meditation, evocative both intellectually and spiritually. That said, the parts:

soft:

– pure data
– max/msp

hard:

– stepper motors x5 + 2
– dc-motors x5
– arduino mega
– lcd monitor
– custom electronics
– 5 piezo microphones
– 2 channel sound system

Details:
Commissioned by NCCA-ROSIZO (National Centre for Contemporary Arts), specially for the TECHNE “Prolog” exhibition, Moscow, 2018.
Curators: Natalia Fuchs, Antonio Geusa. Producer: Dmitry Znamenskiy.

The work was also a collaboration with Gallery Ch9 (Ч9) in Murmansk. That’s itself something of an achievement; it’s hard enough to find media art galleries in major cities, let alone remote Russia. (That’s far enough northwest in Russia that most of Finland and all of Sweden are south of it.)

But the alien-looking object also got its own trip to the site, ‘performing’ at the location.

It’s appropriate that would happen in Russia. Cosmism visionary Nikolai Fyodorovich Fyodorov and his ideas about creating immortality by resurrecting ancestors may seem bizarre today. But translate that to media art, which threatens to become stuck in time when not informed by history. (Those who do not learn from history are doomed to make installation art that looks like it came from a mid-1990s Ars Electronica or Transmediale, forever, I mean.) To be truly futuristic, media art has to have a deep understanding of technology’s progression, its workings, and all the moments in the past that were themselves ahead of their time. That is, maybe we have to dig deep into the ground beneath us, dig up our ancestors, and construct the future atop that knowledge.

At Spektrum Berlin this weekend, there’s also a “materiality of sound” project. Fellow Moscow-based artist Andrey Smirnov will create an imaginative new performance inspired by Theremin’s infamous KGB listening device of the 1940s – also new art fabricated from Soviet history – joined by a lineup of other artists exploring similar themes, making sound material and kinetic. (Evelina Domnitch and Dmitry Gelfand, Sonolevitation, Camera Lucida, Eleonora Oreggia aka Xname share the bill.)

To me, these two themes – materiality, drawing from kinetic, mechanical, optical, and acoustic techniques (and not just digital and analog), and archaeological futurism, employing deep historical inquiry that is in turn re-contextualized in forward-thinking, speculative work – offer tremendous possibility. They sound like more than just a zeitgeist-friendly buzzword (yeah, I’m looking at you, blockchain). They sound like something to which artists might even be happy to devote lifetimes.

For another virtual trip to the borehole, here’s Rosa Menkman’s film on a soundwalk at the site in 2016.

Related (curator Natalia Fuchs, interviewed before, also curated this work):

Between art tech and techno, past and future, a view from Russia

And on the kinetic-mechanical topic:

Watch futuristic techno made by robots – then learn how it was made

Full project details:

http://vtol.cc/filter/works/12262

This light sculpture plays like an instrument, escaped from Tron

Espills is a “solid light dynamic sculpture,” made of laser beams, laser scanners, and robotic mirrors. And it makes a real-life effect that would make Tron proud.

The work, made public this month but part of ongoing research, is the creation of multidisciplinary Barcelona-based AV team Playmodes. And while large-scale laser projects are becoming more frequent in audiovisual performance and installation, this one is unique both in that it’s especially expressive and in that it’s a heavily DIY project. So while dedicated vendors make sophisticated, expensive off-the-shelf solutions, the Playmodes crew went a bit more punk and designed and built many of their own components. That includes robotic mirrors, light drawing tools, synths, scenery, and even the laser modules. They hacked into existing DMX light fixtures, replacing the lamps with mirrors. They constructed their own microcontroller solutions for controlling the laser diodes via Art-Net and DMX.

And, oh yeah, they have their own visual programming framework, OceaNode, a kind of home-brewed solution for imagining banks of modulation as oscillators, a visual motion synth of sorts.

It’s in progress, so this is not a TouchDesigner rival so much as an interesting homebrew project, but you can toy around with the open source software. (Looks like you might need to do some work to get it to build on your OS of choice.)

https://github.com/playmodesStudio/ofxoceanode

Typically, too, visual teams work separately from music artists. But here – adding to the synesthesia you feel as a result – they coupled laser motion directly to sound, building their own synth engine in Reaktor. (OceaNode sends control signals to Reaktor via the latter’s now much-improved OSC implementation.)
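
To give a sense of how simple that kind of coupling can be, here’s a sketch of streaming a control signal over OSC in Python with the python-osc library. To be clear, this isn’t Playmodes’ actual code – the address and port below are made up, and whatever you set in Reaktor’s OSC preferences is what counts:

    import math
    import time
    from pythonosc.udp_client import SimpleUDPClient

    # Send to whatever host/port Reaktor (or any OSC-capable synth) is listening on.
    client = SimpleUDPClient("127.0.0.1", 9000)

    start = time.time()
    while True:
        t = time.time() - start
        value = 0.5 + 0.5 * math.sin(2 * math.pi * 0.25 * t)   # slow sine LFO, range 0..1
        client.send_message("/espills/lfo1", value)             # hypothetical OSC address
        time.sleep(0.02)                                        # ~50 control updates a second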

They hacked that synth engine together from Santiago Vilanova’s PolyComb – a beautiful-sounding set of resonating tuned oscillators (didn’t know this one, now playing!):

https://www.native-instruments.com/es/reaktor-community/reaktor-user-library/entry/show/9717/

Oh yeah, and they made a VST plug-in to send OSC from Reaper, so they can automate OSC envelopes using the Reaper timeline.

OceaNode, visual programming software, also a DIY effort by the team.

… and the DIY OSC VST plug-in, to allow easy automation from a DAW (Reaper, in this case).

It’s really beautiful work. You have to notice that the artists making best use of laser tech – see also Robert Henke and Christopher Bauder here in Berlin – are writing some of their own code, in order to gain full control over how the laser behaves.

I think we’ll definitely want to follow this work as it evolves. And if you’re working in similar directions, let us know.

The new Jaguar cars sound like spaceships, thanks to Richard Devine

Music, film/TV, games… yes. But another frontier is opening for sound design you might not expect: cars. That has led automaker Jaguar to sound designer Richard Devine, and that in turn means when this Jag accelerates, it sounds like it’s headed into hyperdrive, bound for the outer rim.

Sounds will be another differentiation point of the auto brand experience, a way to set luxury vehicles apart, it’s true. But when it comes to engine noise, there is actually a safety issue. Fully electric cars don’t make the noise that internal combustion engines do, which means you can’t hear them coming – which makes them dangerous.

The cool thing is, manufacturers are finally beginning to consider aesthetics in sound design. And in a world that’s flooded with repetitions of the Windows startup sound, that Nokia theme tune (only mostly driven away by the iPhone), horrible sirens, beeps, and whatnot, this couldn’t come a moment too soon.

Richard Devine has been doing sound design across various industries, from sounds used in films to strange presets you find lurking in your plug-ins (as well as making some great music himself). Now at last he can share publicly that he did sound for the mighty Jaguar, and its all-electric I‑PACE car.

The design team at Jag get to crow about their work in a company blog post:
https://www.jaguar.co.uk/about-jaguar/jaguar-stories/i-pace-design-secrets.html

Here’s how the external sound system works:

The engine acceleration noise is cool, and with good reason – this car may be ecologically minded, but it also does 0 to 60 in 4.5 seconds. (I’m not advertising for Jaguar, though… uh, hey Jag, I accept money. And automobiles. Be in touch.)

Engage:

Iain Suffield, Acoustics Technical Specialist at Jaguar:
“We have taken a completely blank canvas and worked with electronic musician and sound designer Richard Devine to interpret the design language of the vehicle, to create building blocks of sound we can craft into the I-PACE.”

And they’ve worked on every aspect of the sound: “The Stop/Start noise of the motors, the audible vehicle alert system, the dynamic driving sounds all have been designed completely from scratch.”

From the outside, the car hums. Inside the cabin, you get different sound sets to reward you as you engage “dynamic” mode, and there is manual customization. (Yes, your car has sound sets. I’m waiting until I can drive a car that looks like a LADA on the outside but sounds like the Enterprise-D on the inside. I’ll keep dreaming.)

You can expect major car companies to enlist these sorts of sound departments more frequently, along with other manufacturers of various products keen to engage customers. And since these teams are developing internally, as well as hiring outside creative talent as with Richard Devine, that means more opportunities for music producers and audio engineers.

So the next time you’re obsessing over getting a sound right and layering instead of just dialing in a preset the easy way, think of it as a career investment. It worked for Richard.

Previously on CDM, German maker Audi following a similar path:

Designing the Sound of a Real Car: An Audi, from Silence to Noise [Video]

Plus a homebrewed solution for bicycles:

Velosynth: Bicycle-Mounted Synth is Open Source, Hackable, Potentially Useful

Hacking and 3D printing the future of violins, in a growing community

Violins: they’re often the first example people cite when talking about traditional acoustic instruments. But using new pickup techniques and rapid prototyping, that could be about to change.

violinmakers.org is a community for this new kind of digital age luthier – a place to discuss 3D printing and magnetic pickup possibilities and electric violin fabrication, rather than gut strings and wood carving.

Community member Guy Sheffer spoke recently about why this matters. All that legacy of instrument building has perfected acoustic violins, but electric violins remain crude. As Guy writes: “The challenge is, that while modern instruments have been developing effects and new sounds, acoustic violins have been acoustic for the past 400 years.”

Post about why I set up this community

While exploring new frontiers, then, these hacker-luthiers need a place to discuss their experimental craft. Enter violinmakers:

https://violinmakers.org/

There’s already some cool stuff there: open source, 3D-printable electric violins and files for Thingiverse, the repository of 3D printing files. (This is way better than 3D printing guns, obviously.)

Post your designs here

Guy has also shared his own spaced-out, trippy first build, logging the whole process. Yeah, you might as well combine your 3D printed electric violin with some airbrush work, no?

Guy’s own first build. 3D printing + custom paint job. (Now you just need a tour van to match… maybe some custom-built electric, not just an old Ford.)

It’s also worth checking out the open synth platform Guy is using, the Raspberry Pi-based Zynthian. That’s suggestive of a new potential sound source to match the new physical instrument:

http://zynthian.org/

Open sourcing in this case has important implications: it allows this new generation of builders to do what the acoustic makers did generations before, constantly improving and adjusting features like the chin rest or bridge.

There’s clearly a lot of innovation that could happen in acoustic instruments and derivatives – innovation that has often failed to happen because designs are not only conservative, but stuck in very specific modes, and because markets and technologies haven’t developed to serve potential evolution. But it could be that now is the moment. For a past look at my own instrument of choice, the piano, see the separate stories I’ve done on that (including an interview with David Klavins, who will talk passionately about why he wants to see the grand piano evolve past the Steinway Model D):

These piano breakthroughs changed music forever

Acoustic Revelation: Inside the Una Corda, the 100kg, 21st Century Piano Built for Nils Frahm

I’d love to hear more. Got experience with 3D printing, pickups … on violins or other instruments? Do let us know.

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

Machine learning is presented variously as nightmare and panacea, gold rush and dystopia. But a group of artists hacking away at CTM Festival earlier this year did something else with it: they humanized it.

The MusicMakers Hacklab continues our collaboration with CTM Festival, and this winter I co-facilitated the week-long program in Berlin with media artist and researcher Ioann Maria (born in Poland, now in the UK). Ioann has long brought critical speculative imagination to her work (meaning, she gets weird and scary when she has to), as well as being able to wrangle large groups of artists and the chaos the creative process produces. Artists are a mess – as they need to be, sometimes – and Ioann can keep them comfortable with that and moving forward. No one could have been more ideal, in other words.

And our group delved boldly into the possibilities of machine learning. Most compellingly, I thought, these ritualistic performances captured a moment of transformation for our own sense of being human, as if folding this technological moment in against itself to reach some new witchcraft, to synthesize a new tribe. If we were suddenly transported to a cave with flickering electronic light, my feeling was that this didn’t necessarily represent a retreat from tech. It was a way of connecting some long human spirituality to the shock of the new.

This wasn’t just about speculating about what AI would do to people, though. Machine learning applications were turned into interfaces, making the interaction between gestures and machines clearer. The free, artist-friendly Wekinator was a popular choice. That stands in contrast to corporate-funded AI and how that’s marketed – which is largely as a weird consumer convenience. (Get me food reservations tonight without me actually talking to anyone, and then tell me what music to listen to and who to date.)
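
For the curious, the Wekinator workflow is mostly OSC plumbing: stream input features in, train a mapping in its GUI, listen for the mapped outputs coming back. Here’s a rough sketch using the python-osc library – the ports and addresses are Wekinator’s usual defaults as I remember them (6448 in, 12000 out, /wek/inputs and /wek/outputs), but treat them as assumptions and check your own setup:

    from pythonosc.udp_client import SimpleUDPClient
    from pythonosc.dispatcher import Dispatcher
    from pythonosc.osc_server import BlockingOSCUDPServer

    wek = SimpleUDPClient("127.0.0.1", 6448)      # assumed Wekinator input port

    def send_features(x, y, pressure):
        # Three example input features (hand position + pressure, say), sent as floats.
        wek.send_message("/wek/inputs", [float(x), float(y), float(pressure)])

    def on_outputs(address, *values):
        # Whatever outputs you configured in Wekinator arrive here;
        # map them to synth parameters, lights, motors, video...
        print(address, values)

    dispatcher = Dispatcher()
    dispatcher.map("/wek/outputs", on_outputs)
    server = BlockingOSCUDPServer(("127.0.0.1", 12000), dispatcher)  # assumed output port

    send_features(0.2, 0.8, 0.5)    # in practice you’d stream these from a sensor loop
    server.serve_forever()          # block and listen for the mapped outputs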

Here, instead, artists took machine learning algorithms and made them another raw material for creating instruments. This was AI getting the machines to better enable performance traditions. And this is partly our hope in who we bring to these performance hacklabs: we want people with experience in code and electronics, but also performance media, musicology, and culture, in various combinations.

(Also spot some kinetic percussion in the first piece, courtesy dadamachines.)

Check out the short video excerpt or scan through our whole performance documentation. All documentation courtesy CTM Festival – thanks. (Photos: Stefanie Kulisch.)

Big thanks to the folks who give us support. The CTM 2018 MusicMakers Hacklab was presented with Native Instruments and SHAPE, which is co-funded by the Creative Europe program of the European Union.

Full audio (which makes for nice sort of radio play, somehow, thanks to all these beautiful sounds):

Full video:

2018 participants – all amazing artists, and ones to watch:

Adrien Bitton
Alex Alexopoulos (Wild Anima)
Andreas Dzialocha
Anna Kamecka
Aziz Ege Gonul
Camille Lacadee
Carlo Cattano
Carlotta Aoun
Claire Aoi
Damian T. Dziwis
Daniel Kokko
Elias Najarro
Gašper Torkar
Islam Shabana
Jason Geistweidt
Joshua Peschke
Julia del Río
Karolina Karnacewicz
Marylou Petot
Moisés Horta Valenzuela AKA ℌEXOℜℭℑSMOS
Nontokozo F. Sihwa / Venus Ex Machina
Sarah Martinus
Thomas Haferlach

https://www.ctm-festival.de/archive/festival-editions/ctm-2018-turmoil/transfer/musicmakers-hacklab/

http://ioannmaria.com/

For some of the conceptual and research background on these topics, check out the Input sessions we hosted. (These also clearly inspired, frightened, and fired up our participants.)

A look at AI’s strange and dystopian future for art, music, and society

Minds, machines, and centralization: AI and music

A look at AI’s strange and dystopian future for art, music, and society

Machine learning and new technologies could unlock new frontiers of human creativity – or they could take humans out of the loop, ushering in a new nightmare of corporate control. Or both.

Machine learning, the field of applying neural networks to data analysis, unites a range of issues from technological to societal. And audio and music are very much at the center of the transformative effects of these technologies. Commonly dubbed (partly inaccurately) “artificial intelligence,” they suggest a relationship between humans and machines, individuals and larger state and corporate structures, far beyond what has existed traditionally. And that change has gone from far-off science fiction to a reality that’s very present in our homes, our lives, and of course the smartphones in our pockets.

I had the chance to co-curate with CTM Festival a day of inputs from a range of thinkers and artist/curators earlier this year. Working with my co-host, artist and researcher Ioann Maria, we packed a day full of ideas and futures both enticing and terrifying. We’ve got that full afternoon, even including audience discussion, online for you to soak in.

Me, with Moritz, pondering the future. Photo: CTM Festival / Isla Kriss.

And there are tons of surprises. There are various terrifying dystopias, with some well-reasoned arguments for why they might actually come to fruition (or evidence demonstrating these scenarios are already in progress). There are more hopeful visions of how to get ethics, and humans, back in the loop. There are surveys of artistic responses.

All of this kicked off our MusicMakers Hacklab at CTM Festival, which set a group of invited artists on collaborative, improvisatory explorations of these same technologies as applied to performance.

These imaginative and speculative possibilities become not just idle thoughts, but entertaining and necessary explorations of what might be soon. This is the Ghost of Christmas Yet-to-Come, if a whole lot more fun to watch, here not just to scare us, but to spur us into action and invention.

Let’s have a look at our four speakers.

Machine learning and neural networks

Moritz Simon Geist: speculative futures

Who he is: Moritz is an artist and researcher; he joined us for my first-ever event for CTM Festival with a giant robotic 808, but he’s just as adept at researching history and future.

Topics: Futurism, speculation, machine learning and its impact on music, body enhancement and drugs

Takeaways: Moritz gives a strong introduction to style transfer and other machine learning techniques, then jumps into speculating on where these could go in the future.

In this future, remixes and styles and timbres might all become separate from a more fluid creativity – but that might, in turn, dissolve artistic value.

“In the future … music will not be conceived as an art form any more.” – Moritz Simon Geist

Then, Moritz goes somewhere else entirely – dreaming up speculative drugs that could transform humans, rather than only machines. (The historical basis for this line of thought: Alexander Shulgin and his drug notebooks, which might even propose a drug that transforms perception of pitch.)

Moritz imagines an “UNSTYLE” plug-in that can extract vocals – then change genre.

What if self-transformation – or even fame – were in a pill?

Gene Cogan: future dystopias

Who he is: An artist/technologist who works with generative systems and their overlap with creativity and expression. Don’t miss Gene’s expansive open source resource for code and learning, machine learning for artists.

Topics: Instrument creation, machine learning – and eventually AI’s ability to generate its own music

Takeaways: Gene’s talk began with “automation of songwriting, production, and curation” as a topic – but tilted enough toward dystopia that he changed the title.

“This is probably going to be the most depressing talk.”

In a more hopeful vision, he presented the latest work of Snyderphonics – instruments that train themselves as musicians play, rather than only the other way around.

He turned to his own work in generative models and artistic works like his Donald Trump “meat puppet,” but presented a scary image of what would happen if eventually analytic and generative machine learning models combined, producing music without human involvement:

“We’re nowhere near anything like this happening. But it’s worth asking now, if this technology comes to fruition, what does that mean about musicians? What is the future of musicians if algorithms can generate all the music we need?”

References: GRUV, a generative model for producing music

WaveNet, the DeepMind tech being used by Google for audio

Sander Dieleman’s content-based recommendations for music

Gene presents – the death of the human musician.

Wesley Goatley: machine capitalism, dark systems

Who he is: A sound artist and researcher in “critical data aesthetics,” plumbing the meaning of data from London in his own work and as a media theorist

Topics: Capitalism, machines, aesthetics, Amazon Echo … and what they may all be doing to our own agency and freedom

Takeaways: Wesley began with “capitalism at machine-to-machine speeds,” then showed how this informs systems that, hidden away from criticism, can enforce bias and power. In particular, he pitted claims like “it’s not Minority Report – it’s science; it’s math!” against the realities of how these systems were built – by whom, for whom, and with what reason.

“You are not working them; they are working you.”

As companies like Amazon and Google extend control, under the banner of words like “smart” and “ecosystem,” Wesley argues, what they’re really building is “dark systems”:

“We can’t get access or critique; they’re made in places that resemble prisons.”

The issue then becomes signal-to-noise. Data isn’t really ever neutral, so the position of power lets a small group of people set an agenda:

“[It] isn’t a constant; it’s really about power and space.”

Wesley on dark connectionism, from economics to design. Photo: CTM Festival / Isla Kriss.

Deconstructing an Amazon Echo – and data and AI as echo chamber. Photo: CTM Festival / Isla Kriss.

What John Cage can teach us: silence is never neutral, and neither is data.

Estela Oliva: digital artists respond

Who she is: Estela is a creative director / curator / digital consultant, an anchor of London’s digital art scene, with work on Alpha-ville Festival, a residency at Somerset House, and her new Clon project.

Topics: Digital art responding to these topics, in hopeful and speculative and critical ways – and a conclusion to the dystopian warnings woven through the afternoon.

Takeaways: Estela grounded the conclusion of our afternoon in a set of examples from across digital arts disciplines and perspectives, showing how AI is seen by artists.

Works shown:

Terence Broad and his autoencoder

Sougwen Chung and Doug, her drawing mate

https://www.bell-labs.com/var/articles/discussion-sougwen-chung-about-human-robotic-collaborations/

Marija Bozinovska Jones and her artistic reimaginings of voice assistants and machine training:

Memo Akten’s work (also featured in the image at top), “you are what you see”

Archillect’s machine-curated feed of artwork

Superflux’s speculative project, “Our Friends Electric”:

OUR FRIENDS ELECTRIC

Estela also found dystopian possibilities – as bias, racism, and sexism are echoed in the automated machines. (Contrast, indeed, the machine-to-machine amplification of those worst characteristics with the more hopeful human-machine artistic collaborations here, perhaps contrasting algorithmic capitalism with individual humanism.)

But she also contrasted that with more emotionally intelligent futures, especially with the richness and dimensions of data sets:

“We need to build algorithms that represent our values better – but I’m just worried that unless we really talk about it more seriously, it’s not going to happen.”

Estela Oliva, framed by Memo Akten’s work. Photo: CTM Festival / Isla Kriss.

It was really a pleasure to put this together. There’s obviously a deep set of topics here, and ones I know we need to continue to cover. Let us know your thoughts – and we’re always glad to share in your research, artwork, and ideas.

Thanks to CTM Festival for hosting us.

https://www.ctm-festival.de/news/

This free B-side from Machinedrum is the perfect thing for the solstice

Ready to raise your energy level and channel your higher self on the longest day of the year? (Or the shortest day of the year, if you’re in the southern hemisphere?)

Machinedrum (the artist, not the Elektron box) has quietly released the track that has the perfect vibe for that – even if you didn’t spot the track name. And it’s now a free download. It’s short, but otherwise sounds as much hit single as b-side, warm, friendly, and uncomplicated – genius.

You might put it on repeat as instant anti-depressant. Enjoy!

THE
HIGHER SELF
OF A PERSON
IS SEEN AS
THE
DIVINE SPARK
WITHIN WE ARE
1 W/ GOD

B-Side from the album “Human Energy”

Vocals : Daisy

And oh yeah, catch that whole album:

https://open.spotify.com/embed/user/machine-drum/playlist/2WlTPLCSOi3XoRVeHyVUeV

Want to see Travis in person? He’s got a busy tour schedule for the USA and Europe, from Slovakia to North Carolina:

http://machinedrum.net/

Here’s all that new Roland stuff in one place, even accordions

It was called “909 day.” It was on the ninth of September. And it included a new 909 product. So far, so good. But Roland’s 909 day stops making sense around there. It launched over 30 products, many of them unrelated, over 24 hours. “909 Day” saw new … accordions. Also, record players that said 909 on them. There were four continents, and a marathon Web stream that would have taken 24 hours to watch, sometimes switching between Japanese and English. In years of covering this business, I’ve never seen anything like it. But before you blow this off, there was some cool stuff in there – depending on whether you play the accordion or sax, for instance. Let’s make sense of it.

The new PLUG-OUT flagship

Roland’s SYSTEM-1 was actually in some ways the most interesting of the first AIRA offerings, not so much because of its PLUG-OUT technology, but because its default mode was a genuinely new synth. So, yes, it was built on ACB – their name for their proprietary component modeling techniques – but it was alive and weird and wonderful. The problem is, PLUG-OUT had some growing pains with drivers when it shipped, and the build quality on the SYSTEM-1 was to me really poor.

I hope the SYSTEM-8, a keyboard version, improves on the original. Roland is promising updated ACB – as with the Boutique line, they’ve been tweaking all their models. It does seem it has a better keybed, which to me was the deal killer on SYSTEM-1.

Because it works with “PLUG-OUT,” you can load different software models of instruments (purchased separately) … a bit like plugging in expansion cards in Roland products of yore. JUPITER-8 and JUNO-106 are included, or add SH-2, SH-101, PROMARS, and others.

The good news: Roland has smartly added more hands-on controls, and the excellent step sequencing from the AIRA TR, plus chord memory. That makes this look like a lot of fun to play, and I do believe they can make interesting ACB-based instruments.

The bad news: it’s still green. And at US$1499, I have to wonder who it’s for. Here’s the thing: this would make loads of sense to people who hate computers, because they can swap in whatever model they want and play standalone. The only problem is, if this works like the SYSTEM-1, you can only have one model at a time. So you need the computer to swap models. And accordingly, Roland is marketing PLUG-OUT and SYSTEM-8 to people who love computers, but also don’t want to just use a controller and software – except that they also want to buy this as a controller.

So, it’s a keyboard for people who love but also hate computers. Somehow that describes me, actually, but I still can’t quite make sense of it. More research needed; stay tuned.

For futuristic wind players

By far, the product I expected least is something called the Aerophone. It’s a kind of digital wind instrument with sax fingerings. That puts it in the category of things like the AKAI EWI series and digital brass instruments, but with some important differences.

First, if you’ve played a sax, you can pick this up and play it right away without any new fingerings. Second, it comes with all the sounds ready to go – clarinet, flute, oboe, trumpet, violin, plus all the sax parts, and you can layer those.

I think wind instruments are as natural a match for electronic performance as keyboards are, so to me, this is a great development.

Between the AKAI EWI series and the Aerophone, there’s something that works for you. I think Roland has an edge for built-in sounds and sax players, while the AKAI remains a more flexible controller (especially as it’s wireless). Fingering matters – check out this page for a discussion of all the different ways you can adjust fingering to the EWI (apart from standard woodwind fingering).

A visual inspection shows what I mean: the EWI looks like a clarinet, and the Aerophone looks like a saxophone … from another planet.

That Aerophone also looks freakin’ huge, a bit like a keytar for the mouth, but … I’m still interested to see it.

Oh yeah, and synth heads can make fun of these all they want. There’s a whole world of instrumentalists out there, though, and they’re a lot larger than the synth community.

Meanwhile, in Japan:

For accordionists

Yes, there’s a new V-Accordion – the FR-4x. That’s Roland getting under the $4000 mark. Uh, for everyone who thinks Roland is just cheap stuff made in Asia, this is a flagship accordion of the future built in Castelfidardo, Italy – seriously. There are people in Italy building MIDI accordions. We live in a wonderful age.

Otherwise, it’s a V-Accordion — just a bit cheaper, lighter, and more portable.

Accordions are cool. No, seriously. Don’t believe me? Watch – Finnish style accordion rock:

You can actually tell a lot about the traditional side of Roland looking at this thing. There’s a similar philosophy as to other instrument categories: load this with sounds, let it run on batteries, meet different genre needs, and offer things like USB playback to keep up with the Internet.

Ernie Rideout, former editor of Keyboard magazine, is actually an accordion player and reviewed the very first V-Accordion. Again, make fun all you like: there’s a lot of accordion music in the world.

I’ve seen Matmos play Whirlpool washing machines in Berghain, so … I’d also like to see someone make techno with a V-Accordion, quite frankly.

It’s available in all black finish, so it won’t get turned away at the door.

For pianists

The funny thing is, in the midst of something called “909 day,” Roland not-so-quietly made an aggressive play for the home digital piano market – one that pits it head to head with Yamaha.

There are some interesting features here. Roland’s progressive hammer action has improved a lot lately, and SuperNATURAL has upped its sound game. These are premium-priced digital pianos, but you get a compelling offering.

More gee-whiz stuff lately involves Bluetooth MIDI, which allows integration with tablets and hands-free page turning. There are also interactive features for immersive sound, interactive metronomes, and more. I really wonder what the experience of growing up with this stuff is; it’s the one and only case where I really don’t feel digitally native.

The new lineup starts at US$1500.

For drummers

The latest model V-Drums actually represent a big upgrade. The sound module is all new, and does what Roland is doing elsewhere – from circuits to acoustic instruments – bundling together a bunch of proprietary modeling stuff that’s meant to make things more playable.

What’s nice here is, this isn’t just a black box of presets. You can customize heads and shells, choose mics and ambience, add compressor and EQ, and even save and compare snapshots.

In other words, what the V-Drums represents is the latest attack on computer software. To my mind, I can’t see a whole lot of benefit to a drummer working with soft synths, because they can get similar power in a drum kit – and they need the hardware anyway. (Film composers and whatnot are another matter, more likely to customize the software.)

Oh yeah, and – interesting to CDM readers, you can even load your own samples on SD card for the first time.

New sensors, new snare and ride pads – technologically speaking, the V-Drums might be the most sophisticated of the new Roland announcements, even if it was the one that got the least attention. (Create Drum Music? Dunno… somewhere. Me in an alternate universe, possibly. I hope I also have a goatee there and that I’m totally evil.)

For guitarists

The first BOSS announcement is easy to sum up, because you start with the price – BOSS are making a US$199 version of their GT effects processor. I almost don’t care what it does; I think that’s going to be competitive with other offerings, partly because of the name (and the GT has a lot of sonic experience in it). The looks are nothing to write home about, but on the upside, they’re also not confusing, which may be the key here.

More interesting are the BOSS Katana amps, BOSS’ big new amp play. It’s interesting because the line integrates BOSS effects, which I think is meaningful to a lot of people. And Roland are also playing up their Japanese-ness, using script and name to evoke quality, which I think is smart marketing. Here’s where things get Roland-y, though – you control the head via MIDI? Customize the effects via software? Head-scratcher for me, but, well, different, at least.

For cajon players

Here’s why I love Roland. Their ELCajon EC-10 was an acoustic cajon (a percussion instrument) that layers electronic sounds – basically, the best busking product I’ve seen in years. Now, it’s available as a standalone module, so you can use it with your existing cajon.

Now, there’s just no excuse for anyone not to have some cajones.

Sorry. I’m really sorry for that. There were a lot of announcements. I lost some sleep. Moving on.

It’s still cool, even if I’m lame.

Roland is putting “909” on things

I’ve saved these for last, because… they appear to be bog-standard products that Roland has just rebranded, and are otherwise not worth mentioning. Still, Roland: if you’re doing mixers and turntables, I really would like to buy 909 sneakers and underwear. Consider.

And a video switcher

But I’ll put that in another article. Okay, so not quite everything fits here.

This is of course on top of the Boutique and DJ ranges we’ve already covered.

I’ll say this, though: Roland is coming for the market, in an aggressive way I haven’t seen for a long time. Combine this with their DJ partnership with Serato and the Roland Product Group’s more particular approach to our electronic production market, and Roland seem to be on their game again.
