Max TV: go inside Max 8’s wonders with these videos

Max 8 – and by extension the latest Max for Live – offers some serious powers to build your own sonic and visual stuff. So let’s tune in some videos to learn more.

The major revolution in Max 8 – and a reason to look again at Max even if you’ve lapsed for some years – is really MC. It’s “multichannel,” so it has significance in things like multichannel speaker arrays and spatial audio. But even that doesn’t do it justice. By transforming the architecture of how Max treats multiple, well, things, you get a freedom in sketching new sonic and instrumental ideas that’s unprecedented in almost any environment. (SuperCollider’s bus and instance system is capable of some feats, for example, but it isn’t as broad or intuitive as this.)

The best way to have a look at that is via a video from Ableton Loop, where the creators of the tech talk through how it works and why it’s significant.

Description [via C74’s blog]:

In this presentation, Cycling ’74’s CEO and founder David Zicarelli and Content Specialist Tom Hall introduce us to MC – a new multi-channel audio programming system in Max 8.

MC unlocks immense sonic complexity with simple patching. David and Tom demonstrate techniques for generating rich and interesting soundscapes that they discovered during MC’s development. The video presentation touches on the psychoacoustics behind our recognition of multiple sources in an audio stream, and demonstrates how to use these insights in both musical and sound design work.

The patches aren’t all ready for download (hmm, some cleanup work being done?), but watch this space.

If that’s got you in the learning mood, there are now a number of great video tutorials up for Max 8 to get you started. (That said, I also recommend the newly expanded documentation in Max 8 for more at-your-own-pace learning, though this is nice for some feature highlights.)

dude837 has an aptly-titled “delicious” tutorial series covering both musical and visual techniques – and the dude abides, skipping directly to the coolest sound stuff and best eye candy.

Yes to all of these:

There’s a more step-by-step set of tutorials by dearjohnreed (including the basics of installation, so really hand-holding from step one):

For developers, the best thing about Max 8 is likely the new Node features. And this means the possibility of wiring musical inventions into the Internet as well as applying some JavaScript and Node.js chops to anything else you want to build. Our friends at C74 have the hook-up on that:

Suffice to say that also could mean some interesting creations running inside Ableton Live.

It’s not a tutorial, but on the visual side, Vizzie is also a major breakthrough in the software:

That’s a lot of looking at screens, so let’s close out with some musical inspiration – and a reminder of why doing this learning can pay off later. Here’s Second Woman, a favorite of mine, at LA’s excellent Bl__K Noise series:

Exploring machine learning for music, live: Gamma_LAB AI

AI in music is as big a buzzword as in other fields. So now’s the time to put it to the test – to reconnect to history, human practice, and context, and see what holds up. That’s the goal of Gamma_LAB AI in St. Petersburg next month. An open call is running now.

Machine learning and AI have trended so fast that disconnects have opened up between genres and specializations. Mathematicians or coders may get going on ideas without checking whether they work with musicians or composers or musicologists – and the other way around.

I’m excited to join Gamma_LAB AI as one of the hosts, partly because it brings together all those possible disciplines, puts international participants in an intensive laboratory, and then shares the results in one of the summer’s biggest festivals for new electronic music and media. We’ll make some of those connections because those people will finally be together in one room, and eventually on one live stage. That investigation can be critical and skeptical, and can diverge from clichéd techniques – the environment is wide open and packed with skills from an array of disciplines.

Natalia Fuchs, co-producer of GAMMA Festival, founder of ARTYPICAL and media art historian, is curating Gamma_LAB AI. The lab will run in May in St. Petersburg, with an open call due this Monday April 8 (hurry!), and then there will be a full AI-stage as a part of Gamma Festival.

Image: Helena Nikonole.

Invited participants will delve into three genres – baroque, jazz, and techno. The idea is not just a bunch of mangled generative compositions, but a broad look at how machine learning could analyze deep international archives of material in these fields, and how the work might be used creatively as an instrument or improviser. We expect participants with backgrounds in musicianship and composition as well as in coding, mathematics, and engineering – and people in between – along with researchers and theorists.

To guide that work, we’re working to set up collaboration and confrontation between historical approaches and today’s bleeding-edge computational work. Media artist Helena Nikonole is the Lab’s conceptual artist. She will bring her interests in connecting AI with new aesthetics and media, having exhibited everywhere from ZKM to CTM to Garage Museum of Contemporary Art. Dr. Konstantin Yakovlev joins as one of Russia’s leading mathematicians and computer scientists working at the forefront of AI, machine learning, and smart robotics – meaning we’re guaranteed some of the top technical talent. (Warning: crash course likely.)

Russia has an extraordinarily rich culture of artistic and engineering exploration, in AI as elsewhere. Some of that work was seen recently at Berlin’s CTM Festival exhibition. Helena for her part has created work that, among others, applies machine learning to unraveling the structure of birdsong (with a bird-human translator perhaps on the horizon), and hacked into Internet-connected CCTV cameras and voice synthesis to meld machine learning-generated sacred texts with … well, some guys trapped in an elevator. See below:

Bird Language

deus X mchn

I’m humbled to get to work with them and in one of the world’s great musical cities, because I hope we also get to see how these new models relate to older ones, and where gaps lie in music theory and computation. (We’re including some musicians/composers with serious background in these fields, and some rich archives that haven’t been approached like this ever before.)

I came from a musicology background, so I see in so-called “AI” a chance to take musicology and theory closer to the music, not further away. Google recently presented a Bach “doodle” – more on that soon, in fact – with the goal of replicating some details of Bach’s composition. To those of us with a music theory background, some of the challenges of doing that are familiar: analyzing music is different from composing it, even for the human mind. To me, part of why it’s important to attempt working in this field is that there’s a lot to learn from mistakes and failures.

It’s not so much that you’re making a robo-Bach – any more than your baroque theory class will turn all the students into honorary members of the extended Bach family. (Send your CV to your local Lutheran church!) It’s a chance to find new possibilities in this history we might not have seen before. And it lets us test (and break) our ideas about how music works with larger sets of data – say, all of Bach’s cantatas at once, or a set of jazz transcriptions, or a library full of nothing but different kick drums, if you like. This isn’t so much about testing “AI,” whatever you want that to mean – it’s a way to push our human understanding to its limits.

Oh yes, and we’ll definitely be pushing our own human limits – in a fun way, I’m sure.

A small group of participants will be involved in the heart of St. Petersburg from May 11-22, with time to investigate and collaborate, plus inputs (including at the massive Planetarium No. 1).

But where this gets really interesting – and do expect to follow along here on CDM – is that we will wind up in July with an AI mainstage at the globally celebrated Gamma Festival. Artist participants will create their own AI-inspired audiovisual performances and improvisations, acoustic and electronic hybrids, and new live scenarios. The finalists will be invited to the festival and fully covered in terms of expenses.

So just as I’ve gotten to do with partners at CTM Festival (and recently with southeast Asia’s Nusasonic), we’re making the ultimate laboratory experiment in front of a live audience. Research, make, rave, repeat.

The open call deadline is fast approaching if you think you might want to participate.

Facebook event
http://gammafestival.ru/english

To apply:
Participation at GAMMA_LAB AI is free for the selected candidates. Send a letter of intent and portfolio to aiworkshop@artypical.com by end of day April 8, 2019. Participants have to bring personal computers of sufficient capacity to work on their projects during the Laboratory. Transportation and living expenses during the Laboratory are paid by the participants themselves. The organizers provide visa support, as well as the travel of the best Lab participants to GAMMA festival in July.

Premiere: rituals of sound and rhythm in the latest from Mexico’s FAX

The Changing Landscape is the latest mystical outing from Mexican ambient/experimental electronic master FAX. And to launch into that world, we have a video that’s liquid, glitchy, a post-digital mind trip.

Let’s watch the music video, created by Hirám López:

Fax, aka Rubén Alonso Tamayo, is the epitome of a long-term artist. He’s got multiple decades of music to his name, spanning from dancefloor to far-out experimental soundscape, but always imbued with craft and thought. Ideally, you’ll get to hear Fax’s work in person – live, he creates earthquakes of sound and transports audiences to other planes. (I was lucky enough to catch him in Mexico City for the edition of MUTEK there.)

The artist, a native of Mexicali, Baja California, is also a hub of activity in Mexico, across visual and sonic media. So for The Changing Landscape, we get free-flowing, spontaneous journeys full of the percussion work of Yamil Rezc.

The landscapes are organized into a diverse progression of “lands,” variations on a theme and instrumentation. “Land I” opens with a squelchy, exposed bassline before breaking into a gentle, jazzy jam. “Land II” is a stuttering, irregular ambient world, drums and piano idly ambling in stumbles over top waves of fuzzy pads. “Land IV” is more futuristic, pulsing synths glistening as noise crests and breaks across the stereo field. “Land V” crackles and cycles in some final parting ritual.

“Land III,” for which we get this video premiere, is clearly a highlight, an esoteric inner sanctum of the album, digital odd angles against a melancholy dialog of pad and bass.

FAX, photo by Braulio Lam.

Like the label he co-founded, Static Discos, FAX works along borders of geography and medium. As is often the case, the personnel here come from that Mexican border town, Mexicali. And visual collaborator Hirám López tunes into the trance-like, surreal-ultrareal quality of the work, writing:

FAX’s atmospheres and musical progressions submerged me in a hypnotic trance that I had to capture. Land III was an experimentation exercise, where the human collages of Jung Sing were distorted to mix these characters even more through the aesthetics of the glitch. I used Adobe After Effects to replicate a series of visual alterations that bad coding can cause in today’s tech devices, based on the musical figures to give them a synchronized intention.

It’s all subtle, as is the music – the effect just disrupting the surface, a direct analog to the sonic approach in the album. As they write:

“Displacement mapping” was the technique that Hirám López used the most; it allows you to alternate pixel positions from a high-contrast image, where the brightness intensity determines how the superimposed pixels on that image or map will move. López’s method consisted of using several layers of this effect on Jung’s illustrations, placing keyframes and expressions (code that detects audio and converts it into a numeric value) that moved the distortion map along the x and y axes, in sync with the music. Under the concept of permanence of the disturbance – as a ghostly trace of the previous or later character – the “datamoshing” effect created dynamic transitions with this same tool. Due to its hypnotic effect, the waves and tunnels created with various plugins, including “Ripple” and “Radio Waves,” were very helpful for depth simulation, as were the repetition of the illustrations and the Mandelbrot-type fractals to emphasize the trance.

Also, “masking” allowed López to cut out some elements from the characters in order to extend their fragmentation, again as a resource based on musical sync and especially on visual composition.
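
To make the displacement idea concrete, here’s a rough browser-side sketch of the same principle – the brightness of a map image deciding how far to shift pixels, scaled by an audio level. This is an editorial illustration of the technique, not López’s actual After Effects setup; the canvas names and the `level` parameter are placeholders.

```javascript
// Minimal displacement-mapping sketch (assumed names: srcCanvas holds the
// illustration, mapCanvas holds a high-contrast map, level is 0..1 audio amplitude).
function displace(srcCanvas, mapCanvas, level) {
  const w = srcCanvas.width, h = srcCanvas.height;
  const src = srcCanvas.getContext('2d').getImageData(0, 0, w, h);
  const map = mapCanvas.getContext('2d').getImageData(0, 0, w, h);
  const out = new ImageData(w, h);
  const maxShift = 40 * level; // the audio level scales how strong the distortion gets

  for (let y = 0; y < h; y++) {
    for (let x = 0; x < w; x++) {
      const i = (y * w + x) * 4;
      // Brightness of the map pixel decides how far to shift the sampled source pixel.
      const bright = (map.data[i] + map.data[i + 1] + map.data[i + 2]) / (3 * 255);
      const dx = Math.round((bright - 0.5) * 2 * maxShift);
      const sx = Math.min(w - 1, Math.max(0, x + dx));
      const j = (y * w + sx) * 4;
      out.data[i] = src.data[j];
      out.data[i + 1] = src.data[j + 1];
      out.data[i + 2] = src.data[j + 2];
      out.data[i + 3] = 255;
    }
  }
  return out; // draw with ctx.putImageData(out, 0, 0)
}
```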

The full album is out on Bandcamp and other services from Static Discos.

Official release page:

http://staticdiscos.com/sta097/

For more – a mix from last year on the Dimension Series from the label:

Take a 3D trip into experimental turntablism with V-A-C Moscow, Shiva Feshareki

Complex music conjures up radical, fluid architectures, vivid angles – why not experience those spatial and rhythmic structures together? Here’s insight into a music video this week in which experimental turntablism and 3D graphics collide.

And collide is the right word. Sound and image are all hard edges, primitive cuts, stimulating corners.

Shiva Feshareki is a London-born composer and turntablist; she’s also got a radio show on NTS. With a research specialization in Daphne Oram (there’s a whole story there, even), she’s made a name for herself as one of the world’s leading composers working with turntables as a medium, playing the likes of the Royal Albert Hall with the London Contemporary Orchestra. Her sounds are themselves often spatial and architectural, too – not just taking over art spaces, but working with spatial organization in her compositions.

That makes a perfect fit with the equally frenetic jump cuts and spinning 3D architectures of visualist Daniel James Oliver Haddock. (He’s a man with so many dimensions they named him four times over.)

NEW FORMS, her album on Belfast’s Resist label, explores the fragmented world of “different social forms,” a cut-up analog to today’s sliced-up, broken society. The abstract formal architecture, then, has a mission. As she writes in the liner notes: “if I can demonstrate sonically how one form can be vastly transformed using nothing other than its own material, then I can demonstrate this complexity and vastness of perspective.”

You can watch her playing with turntables and things around and atop turntables on Against the Clock for FACT:

And grab the album from Bandcamp:

Shiva herself works with graphical scores, which are interpreted in the album art by artist Helena Hamilton. Have a gander at that edition:

But since FACT covered the sound side of this, I decided to snag Daniel James Oliver Haddock. Daniel also wins the award this week for “quickest to answer interview questions,” so hey kids, experimental turntablism will give you energy!

Here’s Daniel:

The conception formed out of conversations with Shiva about the nature of her work and the ways in which she approaches sound. She views sound as these unique 3D structures which can change and be manipulated. So I wanted to emulate that in the video. I also was interested in the drawings and diagrams that she makes to plan out different aspects of her performances, mapping out speakers and sound scapes, I thought they were really beautiful in a very clinical way so again I wanted to use them as a staging point for the 3D environments.

I made about 6 environments in cinema 4d which were all inspired by these drawings. Then animated these quite rudimentary irregular polyhedrons in the middle to kind of represent various sounds.

Her work usually has a lot of sound manipulation, so I wanted the shapes to change and have variables. I ended up rendering short scenes in different camera perspectives and movements and also changing the textures from monotone to colour.

After all the Cinema 4d stuff, it was just a case of editing it all together! Which was fairly labour intensive, the track is not only very long but all the sounds have a very unusual tempo to them, some growing over time and then shortening, sounds change and get re-manipulated so that was challenging getting everything cut well. I basically just went through second by second with the waveforms and matched sounds by eye. Once I got the technique down it moved quite quickly. I then got the idea to involve some found footage to kind of break apart the aesthetic a bit.

Of course, there’s a clear link here to Autechre’s Gantz Graf music video, the ur-video of all 3D music videos that followed. But then, there’s something really delightful about seeing those rhythms visualized when they’re produced live on turntables. The VJ in me just really wants to see the visuals as a live performance. (Well, and to me, that’s easier to produce than the Cinema 4D edits!)

But it’s all a real good time at this audio/visual synesthesia experimental disco.

More:

Watch experimental turntablist Shiva Feshareki’s ‘V-A-C Moscow’ video [FACT]

https://www.shivafeshareki.co.uk/

https://resistbelfast.bandcamp.com/album/new-forms

Resist label

Azure Kinect promises new motion, tracking for art

Gamers’ interest may come and go, but artists are always exploring the potential of computer vision for expression. Microsoft this month has resurrected the Kinect, albeit in pricey, limited form. Let’s fit it to the family tree.

Time flies: musicians and electronic artists have now had access to readily available computer vision since the turn of this century. That initially looked like webcams, paired with libraries like the free OpenCV (still a viable option), and later repurposed gaming devices from Sony and Microsoft platforms.

And then came Kinect. Kinect was a darling of live visual projects and art installations, because of its relatively sophisticated skeletal tracking and various artist-friendly developer tools.

History time

A full ten years ago, I was writing about the Microsoft project and its interaction possibilities, in its first iteration as the pre-release Project Natal. Xbox 360 support followed in 2010, Windows support in 2012 – while digital artists quickly hacked in Mac (and rudimentary Linux) support. Artists in music and digital media quickly followed.

For those of you just joining us, Kinect shines infrared light at a scene and takes an infrared image (so it can work irrespective of other lighting), which it converts into a 3D depth map of the scene. From that depth image, Microsoft’s software can also track the skeleton image of one or two people, which lets you respond to the movement of bodies. Microsoft and partner PrimeSense weren’t the only ones to try this scheme, but they were the ones to ship the most units and attract the most developers.
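
If you want a feel for what that depth data looks like in code, here’s a minimal sketch that turns a raw depth frame into a grayscale image. It assumes you’ve already pulled a frame of per-pixel distances (in millimeters) out of whichever driver or library you’re using – the variable names here are placeholders, not any particular SDK’s API.

```javascript
// Minimal sketch: raw Kinect-style depth frame -> grayscale ImageData.
// Assumes `depth` is a Uint16Array of per-pixel distances in millimeters.
function depthToGray(depth, width, height, nearMM = 500, farMM = 4500) {
  const img = new ImageData(width, height);
  for (let i = 0; i < depth.length; i++) {
    const d = depth[i];
    // Map the useful range to 0..255 (nearer = brighter); 0 (no reading) stays black.
    const v = d === 0 ? 0 :
      255 - Math.min(255, Math.max(0, ((d - nearMM) / (farMM - nearMM)) * 255));
    const o = i * 4;
    img.data[o] = img.data[o + 1] = img.data[o + 2] = v;
    img.data[o + 3] = 255;
  }
  return img; // draw with ctx.putImageData(img, 0, 0)
}
```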

We’re now on the third major revision of the camera hardware.

2010: Kinect for Xbox 360. The original. Proprietary connector with a breakout to USB and power. These devices are far more common, as they were cheaper and shipped more widely. Despite the name, they do work with open drivers on desktop systems.

2012: Kinect for Windows. Looks and works almost identically to Kinect for 360, with some minor differences (near mode).

Raw use of depth maps and the like yielded countless music videos, and the skeletal tracking yielded even more numerous – and typically awkward – “wave your hands around to play the music” examples.

Here’s me with a quick demo for the TED organization, preceded by some discussion of why I think gesture matters. It’s… slightly embarrassing, only in that it was produced on an extremely tight schedule, and I think the creative exploration of what I was saying about gesture just wasn’t ready yet. (Not only had I not quite caught up, but camera tech like what Microsoft is shipping this year is far better suited to the task than the original Kinect camera was.) But the points I’m making here have some fresh meaning for me now.

2013: Kinect for Xbox One. Here’s where things got more interesting – because of a major hardware upgrade, these cameras are far more effective at tracking and yield greater performance.

  • Active IR tracking in the dark
  • Wider field of vision
  • 6 skeletons (people) instead of two
  • More tracking features, with additional joints and creepier features like heart rate and facial expression
  • 1080p color camera
  • Faster performance/throughput (which was key to more expressive results)

Kinect One, the second camera (confusing!), definitely allowed more expressive applications. One high point for me was the simple but utterly effective work of Chris Milk and team, “The Treachery of Sanctuary.”

And then it ended. Microsoft unbundled the camera from Xbox One, meaning developers couldn’t count on gamers owning the hardware, and quietly discontinued the last camera at the end of October 2017.

Everything old is new again

I have mixed feelings – as I’m sure you do – about these cameras, even with the later results on Kinect One. For gaming, the devices were abandoned – by gamers, by developers, and by Microsoft as the company ditched the Xbox strategy. (Parallel work at Sony didn’t fare much better.)

It’s hard to keep up with consumer expectations. By implying “computer vision,” any such technology has to compete with your own brain – and your own brain is really, really, really good. “Sensors” and “computation” are all merged in organic harmony, allowing you to rapidly detect the tiniest nuance. You can read a poker player’s tell in an instant, while Kinect will lose the ability to recognize that your leg is attached to your body. Microsoft launched Project Natal talking about seeing a ball and kicking a ball, but… you can do that with a real ball, and you really can’t do that with a camera, so they quite literally got off on the wrong foot.

It’s not just gaming, either. On the art side, the very potential of these cameras to make the same demos over and over again – yet another magic mirror – might well be their downfall.

So why am I even bothering to write this?

Simple: the existing, state-of-the-art Kinect One camera is now available on the used market for well under a hundred bucks – for less than the cost of a mid-range computer mouse. Microsoft’s gaming business whims are your budget buy. The computers to process that data are faster and cheaper. And the software is more mature.

So while digital art has long been driven by novelty … who cares? Actual music and art making requires practice and maturity of both tools and artist. It takes time. So oddly while creative specialists were ahead of the curve on these sorts of devices, the same communities might well innovate in the lagging cycle of the same technology.

And oh yeah – the next generation looks very powerful.

Kinect: The Next Generation

Let’s get the bad news out of the way first: the new Kinect is both more expensive ($400) and less available (launching only in the US and China… in June). Ugh. And that continues Microsoft’s trend here of starting with general purpose hardware for mass audiences and working up to … wait, working up to increasingly expensive hardware for smaller and smaller groups of developers.

That is definitely backwards from how this is normally meant to work.

But the good news here is unexpected. Kinect was lost, and now is found.

The safe bet was that Microsoft would just abandon Kinect after the gaming failure. But to the company’s credit, they’ve pressed on, with some clear interest in letting developers, researchers, and artists decide what this thing is really for. Smart move: those folks often come up with inspiration that doesn’t fit the demands of the gaming industry.

So now Kinect is back, dubbed Azure Kinect – Microsoft is also hell-bent on turning Azure “cloud services” into a catch-all solution for all things, everywhere.

And the hardware looks … well, kind of amazing. It might be described as a first post-smartphone device. Say what? Well, now that smartphones have largely finalized their sensing capabilities, they’ve oddly left the arena open to other tech defining new areas.

For a really good write-up, you’ll want to read this great run-down:


All you need to know on Azure Kinect
[The Ghost Howls, a VR/tech blog, see also a detailed run-down of HoloLens 2 which also just came out]

Here are the highlights, though. Azure Kinect is the child of Kinect and HoloLens. It’s a VR-era sensor, but standalone – which is perfect for performance and art.

Fundamentally, the formula is the same – depth camera, conventional RGB camera, some microphones, additional sensors. But now you get more sensing capabilities and substantially beefed-up image processing.

  • 1MP depth camera (not 640×480) – straight off of HoloLens 2, Microsoft’s augmented reality platform
  • Two modes: wide and narrow field of view
  • 4K RGB camera (with standard USB camera operation)
  • 7-microphone array
  • Gyroscope + accelerometer

And it connects either by USB-C (which can also be used for power) or as a standalone camera with a “cloud connection.” (You know, I’m pretty sure that means it has a wifi radio, but oddly all the tech reporters who talked to Microsoft bought the “cloud” buzzword and no one says so outright. I’ll double-check.)

Also, now Microsoft supports both Windows and Linux. (Ubuntu 18.04 + OpenGL v 4.4).

Downers: 30 fps operation, limited range.

Something something, hospitals or assembly lines, Azure services, something that looks like an IBM / Cisco ad:

That in itself is interesting. Artists using the same thing as gamers sort of … didn’t work well. But artists using the same tool as an assembly line is something new.

And here’s the best part for live performance and interaction design – you can freely combine as many cameras as you want, and sync them without any weird tricks.

All in all, this looks like it might be the best networked camera, full stop, let alone best for tracking, depth sensing, and other applications. And Microsoft are planning special SDKs for the sensor, body tracking, vision, and speech.

Also, the fact that it doesn’t plug into an Xbox is a feature, not a bug to me – it means Microsoft are finally focusing on the more innovative, experimental uses of these cameras.

So don’t write off Kinect now. In fact, with Kinect One so cheap, it might be worth picking one up and trying Microsoft’s own SDK just for practice.

Azure Kinect DK preorder / product page

aka.ms/kinectdocs

A free, shared visual playground in the browser: Olivia Jack talks Hydra

Reimagine pixels and color, melt your screen live into glitches and textures, and do it all for free on the Web – as you play with others. We talk to Olivia Jack about her invention, live coding visual environment Hydra.

Inspired by analog video synths and vintage image processors, Hydra is open, free, collaborative, and all runs as code in the browser. It’s the creation of US-born, Colombia-based artist Olivia Jack. Olivia joined our MusicMakers Hacklab at CTM Festival earlier this winter, where she presented her creation and its inspirations, and jumped in as a participant – spreading Hydra along the way.

Olivia’s Hydra performances are explosions of color and texture, where even the code becomes part of the aesthetic. And it’s helped take Olivia’s ideas across borders, both in the Americas and Europe. It’s part of a growing interest in the live coding scene, even as that scene enters its second or third decade (depending on how you count), but Hydra also represents an exploration of what visuals can mean and what it means for them to be shared between participants. Olivia has rooted those concepts in the legacy of cybernetic thought.

Oh, and this isn’t just for nerd gatherings – her work has also lit up one of Bogota’s hotter queer parties. (Not that such things need be thought of as a binary, anyway, but in case you had a particular expectation about that.) And yes, that also means you might catch Olivia at a JavaScript conference; I last saw her back from making Hydra run off solar power in Hawaii.

Following her CTM appearance in Berlin, I wanted to find out more about how Olivia’s tool has evolved and its relation to DIY culture and self-fashioned tools for expression.

Olivia with Alexandra Cardenas in Madrid. Photo: Tatiana Soshenina.

CDM: Can you tell us a little about your background? Did you come from some experience in programming?

Olivia: I have been programming now for ten years. Since 2011, I’ve worked freelance — doing audiovisual installations and data visualization, interactive visuals for dance performances, teaching video games to kids, and teaching programming to art students at a university, and all of these things have involved programming.

Had you worked with any existing VJ tools before you started creating your own?

Very few; almost all of my visual experience has been through creating my own software in Processing, openFrameworks, or JavaScript rather than using software. I have used Resolume in one or two projects. I don’t even really know how to edit video, but I sometimes use [Adobe] After Effects. I had no intention of making software for visuals, but started an investigative process related to streaming on the internet and also trying to learn about analog video synthesis without having access to modular synth hardware.

Alexandra Cárdenas and Olivia Jack @ ICLC 2019:

In your presentation in Berlin, you walked us through some of the origins of this project. Can you share a bit about how this germinated, what some of the precursors to Hydra were and why you made them?

It’s based on an ongoing investigation of:

  • Collaboration in the creation of live visuals
  • Possibilities of peer-to-peer [P2P] technology on the web
  • Feedback loops

Precursors:

A significant moment came as I was doing a residency in Platohedro in Medellin in May of 2017. I was teaching beginning programming, but also wanted to have larger conversations about the internet and talk about some possibilities of peer-to-peer protocols. So I taught programming using p5.js (the JavaScript version of Processing). I developed a library so that the participants of the workshop could share in real-time what they were doing, and the other participants could use what they were doing as part of the visuals they were developing in their own code. I created a class/library in JavaScript called pixel parche to make this sharing possible. “Parche” is a very Colombian word in Spanish for group of friends; this reflected the community I felt while at Platohedro, the idea of just hanging out and jamming and bouncing ideas off of each other. The tool clogged the network and I tried to cram too much information in a very short amount of time, but I learned a lot.

I was also questioning some of the metaphors we use to understand and interact with the web. “Visiting” a website is exchanging a bunch of bytes with a faraway place and routed through other far away places. Rather than think about a webpage as a “page”, “site”, or “place” that you can “go” to, what if we think about it as a flow of information where you can configure connections in realtime? I like the browser as a place to share creative ideas – anyone can load it without having to go to a gallery or install something.

And I was interested in using the idea of a modular synthesizer as a way to understand the web. Each window can receive video streams from and send video to other windows, and you can configure them in real time using WebRTC (realtime web streaming).

Here’s one of the early tests I did:

https://vimeo.com/218574728
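
Ed.: for anyone curious what that window-to-window plumbing involves, here’s a bare-bones sketch of the browser APIs in play – getUserMedia plus RTCPeerConnection – with the signaling channel left abstract. It’s an editorial illustration, not Olivia’s library.

```javascript
// One window sends its camera stream; another receives it and can use the
// <video> element as a texture. `signal` stands in for any signaling channel
// (a websocket, a peer-to-peer layer, etc.) that relays offers/answers/candidates.
async function startSending(signal) {
  const pc = new RTCPeerConnection();
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  stream.getTracks().forEach(track => pc.addTrack(track, stream));

  pc.onicecandidate = e => { if (e.candidate) signal.send({ candidate: e.candidate }); };
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  signal.send({ offer }); // the receiving window answers over the same channel
  return pc;
}

function startReceiving(pc, videoElement) {
  // In the receiving window, attach whatever stream arrives to a <video> tag.
  pc.ontrack = e => { videoElement.srcObject = e.streams[0]; };
}
```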

I really liked this philosophical idea you introduced of putting yourself in a feedback loop. What does that mean to you? Did you discover any new reflections of that during our hacklab, for that matter, or in other community environments?

It’s processes of creation, not having a specific idea of where it will end up – trying something, seeing what happens, and then trying something else.

Code tries to define the world using specific set of rules, but at the end of the day ends up chaotic. Maybe the world is chaotic. It’s important to be self-reflective.

How did you come to developing Hydra itself? I love that it has this analog synth model – and these multiple frame buffers. What was some of the inspiration?

I had no intention of creating a “tool”… I gave a workshop at the International Conference on Live Coding in December 2017 about collaborative visuals on the web, and made an editor to make the workshop easier. Then afterwards people kept using it.

I didn’t think too much about the name but [had in mind] something about multiplicity. Hydra organisms have no central nervous system; their nervous system is distributed. There’s no hierarchy of one thing controlling everything else, but rather interconnections between pieces.

Ed.: Okay, Olivia asked me to look this up and – wow, check out nerve nets. There’s nothing like a head, let alone a central brain. Instead, the aquatic creatures in the genus Hydra have senses and neurons essentially as one interconnected network, with cells that detect light and touch forming a distributed sensory awareness.

Most graphics abstractions are based on the idea of a 2d canvas or 3d rendering, but the computer graphics card actually knows nothing about this; it’s just concerned with pixel colors. I wanted to make it easy to play with the idea of routing and transforming a signal rather than drawing on a canvas or creating a 3d scene.

This also contrasts with directly programming a shader (one of the other common ways that people make visuals using live coding), where you generally only have access to one frame buffer for rendering things to. In Hydra, you have multiple frame buffers that you can dynamically route and feed into each other.
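
Ed.: to make the buffer-routing idea concrete, here’s a small example in Hydra’s own live-coding syntax – treat it as a sketch in the spirit of the editor’s built-in examples rather than anything from Olivia’s performances.

```javascript
// Routing Hydra's frame buffers into each other:
osc(10, 0.1, 0.8).out(o0)           // an oscillator pattern rendered to buffer o0
src(o0)                              // read buffer o0 back in...
  .modulate(noise(3), 0.2)           // ...warp its coordinates with a noise field
  .rotate(0.1)
  .out(o1)                           // write the result to a second buffer
src(o1).blend(src(o0), 0.5).out(o2)  // mix both buffers into a third
render(o2)                           // show buffer o2 on screen
```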

MusicMakers Hacklab in Berlin. Photo: Malitzin Cortes.

Livecoding is of course what a lot of people focus on in your work. But what’s the significance of code as the interface here? How important is it that it’s functional coding?

It’s inspired by [Alex McLean’s sound/music pattern environment] TidalCycles — the idea of taking a simple concept and working from there. In Tidal, the base element is a pattern in time, and everything is a transformation of that pattern. In Hydra, the base element is a transformation from coordinates to color. All of the other functions either transform coordinates or transform colors. This directly corresponds to how fragment shaders and low-level graphics programming work — the GPU runs a program simultaneously on each pixel, and that receives the coordinates of that pixel and outputs a single color.

I think immutability in functional (and declarative) coding paradigms is helpful in live coding; you don’t have to worry about mentally keeping track of a variable and what its value is or the ways you’ve changed it leading up to this moment. Functional paradigms are really helpful in describing analog synthesis – each module is a function that always does the same thing when it receives the same input. (Parameters are like knobs.) I’m very inspired by the modular idea of defining the pieces to maximize the amount that they can be rearranged with each other. The code describes the composition of those functions with each other. The main logic is functional, but things like setting up external sources from a webcam or live stream are not at all; JavaScript allows mixing these things as needed. I’m not super opinionated about it, just interested in the ways that the code is legible and makes it easy to describe what is happening.

What’s the experience you have of the code being onscreen? Are some people actually reading it / learning from it? I mean, in your work it also seems like a texture.

I am interested in it being somewhat understandable even if you don’t know what it is doing or that much about coding.

Code is often a visual element in a live coding performance, but I am not always sure how to integrate it in a way that feels intentional. I like using my screen itself as a video texture within the visuals, because then everything I do — like highlighting, scrolling, moving the mouse, or changing the size of the text — becomes part of the performance. It is really fun! Recently I learned about prepared desktop performances and related to the live-coding mantra of “show your screens,” I like the idea that everything I’m doing is a part of the performance. And that’s also why I directly mirror the screen from my laptop to the projector. You can contrast that to just seeing the output of an AV set, and having no idea how it was created or what the performer is doing. I don’t think it’s necessary all the time, but it feels like using the computer as an instrument and exploring different ways that it is an interface.

The algorave thing is now getting a lot of attention, but you’re taking this tool into other contexts. Can you talk about some of the other parties you’ve played in Colombia, or when you turned the live code display off?

Most of my inspiration and references for what I’ve been researching and creating have been outside of live coding — analog video synthesis, net art, graphics programming, peer-to-peer technology.

Having just said I like showing the screen, I think it can sometimes be distracting and isn’t always necessary. I did visuals for Putivuelta, a queer collective and party focused on diasporic Latin club music and wanted to just focus on the visuals. Also I am just getting started with this and I like to experiment each time; I usually develop a new function or try something new every time I do visuals.

Community is such an interesting element of this whole scene. So I know with Hydra so far there haven’t been a lot of outside contributions to the codebase – though this is a typical experience of open source projects. But how has it been significant to your work to both use this as an artist, and teach and spread the tool? And what does it mean to do that in this larger livecoding scene?

I’m interested in how technical details of Hydra foster community — as soon as you log in, you see something that someone has made. It’s easy to share via twitter bot, see and edit the code live of what someone has made, and make your own. It acts as a gallery of shareable things that people have made:

https://twitter.com/hydra_patterns

Although I’ve developed this tool, I’m still learning how to use it myself. Seeing how other people use it has also helped me learn how to use it.

I’m inspired by work that Alex McLean and Alexandra Cardenas and many others in live coding have done on this — just the idea that you’re showing your screen and sharing your code with other people to me opens a conversation about what is going on, that as a community we learn and share knowledge about what we are doing. Also I like online communities such as talk.lurk.org and streaming events where you can participate no matter where you are.

I’m also really amazed at how this is spreading through Latin America. Do you feel like there’s some reason the region has been so fertile with these tools?

It’s definitely influenced me rather than the other way around, getting to know Alexandra [Cardenas’] work, Esteban [Betancur, author of live coding visual environment Cine Vivo], rggtrn, and Mexican live coders.

Madrid performance. Photo: Tatiana Soshenina.

What has the scene been like there for you – especially now living in Bogota, having grown up in California?

I think people are more critical about technology and so that makes the art involving technology more interesting to me. (I grew up in San Francisco.) I’m impressed by the amount of interest in art and technology spaces such as Plataforma Bogota that provide funding and opportunities at the intersection of art, science, and technology.

The press lately has fixated on live coding or algorave but maybe not seen connections to other open source / DIY / shared music technologies. But – maybe now especially after the hacklab – do you see some potential there to make other connections?

To me it is all really related, about creating and hacking your own tools, learning, and sharing knowledge with other people.

Oh, and lastly – want to tell us a little about where Hydra itself is at now, and what comes next?

Right now, it’s improving documentation and making it easier for others to contribute.

Personally, I’m interested in performing more and developing my own performance process.

Thanks, Olivia!

Check out Hydra for yourself, right now:

https://hydra-editor.glitch.me/

Previously:

Inside the livecoding algorave movement, and what it says about music

Magical 3D visuals, patched together with wires in browser: Cables.gl

Apple’s latest Macs have a serious audio glitching bug

Apple has a serious, apparently unresolved bug that causes audio glitches with external devices across all its latest Macs, thanks to the company’s own software and custom security chip. The only good news: there is a workaround.

Judging by bug reports online, the impacted machines are all the newest computers – those with Apple’s own T2 security chip:

  • iMac Pro
  • Mac mini models introduced in 2018
  • MacBook Air models introduced in 2018
  • MacBook Pro models introduced in 2018

The T2, in Apple’s words, “is Apple’s second-generation, custom silicon for Mac. By redesigning and integrating several controllers found in other Mac computers—such as the System Management Controller, image signal processor, audio controller, and SSD controller—the T2 chip delivers new capabilities to your Mac.”

The problem is, it appears that this new chip has introduced glitches on a wide variety of external audio hardware from across the pro audio industry, thanks to a bug in Apple’s software. When your Mac updates its system clock, dropouts and glitches appear in the audio stream. (Any hardware with a non-default clock source appears to be impacted. It’s a good bet that any popular external audio interface may exhibit the problem.)

The workaround is fairly easy: switch off “Set date and time automatically” in System Preferences.

More:
https://www.reddit.com/r/apple/comments/anvufc/psa_2018_macs_with_t2_chip_unusable_with_external/

https://discussions.apple.com/thread/8509051

https://www.logicprohelp.com/forum/viewtopic.php?t=138992

https://www.gearslutz.com/board/music-computers/1232030-usb-audio-glitches-macbook-pro-2018-a.html

https://openradar.appspot.com/46918065

But more alarming is that this is another serious quality control fumble from Apple. The value proposition with Apple has always been that the company’s control over its own hardware, software, and industrial engineering meant a more predictable product. But when Apple botches the quality of its own products and doesn’t test creative audio and video use cases, that value case quickly flips. You’re sacrificing choice and paying a higher price for a product that’s actually worse.

Apple’s recent Mac line has also come under fire for charging a premium price while sacrificing things users want (like NVIDIA graphics cards, affordable internal storage, or extra ports) – and, on the new thin MacBook and MacBook Pro lines, for keyboard reliability issues.

Before Windows users start gloating, of course, PCs can have reliability issues of their own. They’re just distributed across a wider range of vendors – which is part of the reason some musicians sought out Apple in the first place.

Regardless, Apple needs to test and address these kinds of issues. Apple’s iPad Pro line is fantastic and essentially unchallenged, thanks to its unique software ecosystem and the weakness of low-cost PC and Android tablet options. But the Mac has to compete with increasingly impressive PC laptops and desktop machines at lower prices, and with a Windows operating system that has improved its audio plumbing (to say nothing of the fact that Linux now lets you run tools like Bitwig Studio and VCV Rack). And that’s why competition is a good thing – you might be happier with a different choice.

Anyway, if you do have one of these machines, let us know if you’ve been having trouble with this issue and if this workaround (hopefully) solves your problem.

Live compositions on oscilloscope: nuuun, ATOM TM

The Well-Tempered vector rescanner? A new audiovisual release finds poetry in vintage video synthesis and scan processors – and launches a new AV platform for ATOM TM.

nuuun, a collaboration between Atom™ (raster, formerly Raster-Noton) and Americans Jahnavi Stenflo and Nathan Jantz, have produced a “current suite.” These are all recorded live – sound and visuals alike – in Uwe Schmidt’s Chilean studio.

Minimalistic, exposed presentation of electronic elements is nothing new to the Raster crowd, who are known for bringing this raw aesthetic to their work. You could read that as part punk aesthetic, part fascination with visual imagery, rooted in the collective’s history in East Germany’s underground. But as these elements cycle back, now there’s a fresh interest in working with vectors as medium (see link below, in fact). As we move from novelty to more refined technique, more artists are finding ways of turning these technologies into instruments.

And it’s really the fact that these are instruments – a chamber trio, in title and construct – that’s essential to the work here. It’s not just about the impression of the tech, in other words, but the fact that working on technique brings the different media closer together. As nuuun describe the release:

Informed and inspired by Scan Processors of the early 1970’s such as the Rutt/Etra video synthesizer, “Current Suite No.1” uses the oscillographic medium as an opportunity to bring the observer closer to the signal. Through a technique known as “vector-rescanning”, one can program and produce complex encoded wave forms that can only be observed through and captured from analog vector displays. These signals modulate an electron-beam of a cathode-ray tube where the resulting phosphorescent traces reveal a world of hidden forms. Both the music and imagery in each of these videos were recorded as live compositions, as if they were intertwined two-way conversations between sound and visual form to produce a unique synesthetic experience.

Even with lots of prominent festivals, audiovisual work – and putting visuals on equal footing with music – still faces an uphill battle. Online music distribution isn’t really geared for AV work; it’s not even obvious how audiovisual work is meant to be uploaded and disseminated apart from channels like YouTube or Vimeo. So it’s also worth noting that Atom™ is promising that NN will be a platform for more audiovisual work. We’ll see what that brings.

Of course, Carsten Nicolai (aka Alva Noto) and NOTON already have a rich fine art / high-end media art career going, and the “raster-media” launched by Olaf Bender in 2017 describes itself as a “platform – a network covering the overlapping border areas of pop, art, and science.” We’ve at least seen raster continue to present installations and other works, extending their footprint beyond the usual routine of record releases.

There’s perhaps not a lot that can be done about the fleeting value of music in distribution, but then music has always been ephemeral. Let’s look at it this way – for those of us who see sound as interconnected with image and science, any conduit to that work is welcome. So watch this space.

For now, we’ve got this first release:

http://atom-tm.com/NN/1/Current-Suite-No-IVideo/

Previously:

Vectors are getting their own festival: lasers and oscilloscopes, go!

In Dreamy, Electrified Landscapes, Nalepa ‘Daytime’ Music Video Meets Rutt-Etra

In gorgeous ETHER, a handmade micro lens brings cymatics closer

Sound is physical, but we don’t often get to see that physicality. In this gorgeous video for Thomas Vaquié, directed by Nico Neefs, those worlds of vibrations explode across your screen. It’s the latest release from ANTIVJ, and it’s spellbinding.

The sounds really do generate the visuals here, from generating terrain from an analysis of the waveform to revealing footage of metal powder animated by sonic vibrations. A self-made micro lens provides the optics.

https://www.youtube.com/watch?v=aK0BXH7zu-M

Everything in this video was made using the sound waves of the track Ether.
Equipped with a home-made micro lens, a camera travels inside physical representations of the musical composition, from a concrete mountain built from the spectrogram of the music, to eruptions of metal powder caused by rhythmic impulsions.

(Impulsion is a word; look it up! I had to do so.)
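
If you want to play with the underlying idea – audio analysis becoming terrain – here’s a minimal browser sketch that accumulates Web Audio frequency snapshots into rows of a heightmap. It only illustrates the analysis step; the film itself was built from an offline spectrogram and a physical mold.

```javascript
// Spectrogram-to-heightmap sketch: rows = time, columns = frequency bands.
const audioCtx = new AudioContext();
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 1024;

// Connect a source, e.g.:
// audioCtx.createMediaElementSource(audioElement).connect(analyser);

const bins = analyser.frequencyBinCount; // 512 frequency bands
const heightmap = [];                    // each entry is one time slice

function captureRow() {
  const row = new Uint8Array(bins);
  analyser.getByteFrequencyData(row);    // 0..255 per band = terrain height
  heightmap.push(row);
  requestAnimationFrame(captureRow);
}
captureRow();
```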

Still from the video.

Nico Neefs is the director, working with images he created with Corentin Kopp. It’s set to music from Belgian producer Thomas Vaquié’s new album Ecume, on Antivj Recordings. That imprint has for over a decade been a label for audiovisual creations across media – release, installation, performance. Simon Geilfus developed the tool for visualization.

They’ve employed the same techniques to make a very attractive physical release. The image you see in the artwork is cast from a concrete mold. For a limited edition box set, they’re producing 33cm x 33cm plates cast from that mold in dark resin. And it’s ready to mount to a wall if you choose; hardware included. Or if you feel instead like you own enough things, there’s a digital edition.

Ultra-limited handmade physical release.

Concrete mold.

Concrete mold; detail.

The whole album is beautiful; I’m especially fond of the bell-like resonances in the opening piece. It’s a sumptuous, sonic environment, full of evocative sound designs that rustle and ring in easy, organic assemblies, part synthetic, part string. Those then break into broken, warped grooves that push forward. (Hey, more impulsion – like a horse.)

The music was repurposed from installations and art contexts:

These are all derivations of compositions for site-specific and installation projects, the original pieces having been created as a response to place and space, to light and architecture, to code and motion. Now separated and transformed from their original context, the music takes on an independent existence in these new realisations.

That does lend the whole release an environmental quality – spaces you can step in and out of – but it’s nonetheless emotionally present. There’s impact, listening top to bottom, enough so that you might not immediately assume the earlier context. And the release is fully consistent and coherent as a whole. (It is very possible you heard an installation here or there. Vaquié has produced compositions for Centre Pompidou-Metz, for the Old Port of Montreal’s metallic conveyor tower, in Songdo, South Korea, at Oaxaca’s ethnobotanical gardens, and at Hala Stulecia, Poland’s huge concrete dome.)

And there’s thoroughly fine string writing throughout – with a sense that strings and electronic media are always attuned to one another.

Cover artwork.

Thomas Vaquié.

Poetic explanation accompanies the album:

Ether embodies the world that exists above the skies.
It is the air that the gods breathe.
It is that feeling of dizziness,
that asphyxiation that we feel when faced with immensity.

Full video credits:

Music by Thomas Vaquié
Video directed by Nico Neefs
Images by Nico Neefs & Corentin Kopp
Edit & Post-production by Nico Neefs
Video produced by Charles Kinoo for Less Is More Studio and Thomas Vaquié
Filmed at BFC Studio, Brussels 2018.

More, including downloads / physical purchases:

https://thomasvaquie.bandcamp.com/

Plus:
www.thomasvaquie.com
www.antivj.com

SPIRALALALA transforms a spiral staircase into a vocal vortex

Just when you’re bored with digital media installations, something happens that gets you back to childlike wonder mode. And a magical staircase is a pretty good way to do that.

The team at Poland’s panGenerator have been on a tear lately. This time, they took a grand spiral staircase and imagined what would happen if you could make your voice a kinetic part of the architecture. It’s way better than just shouting your echo at a wall.

It’s also a great example of how spatial sound and architecture can interact, making the normally static structures of an environment more dynamic. This is the sort of interactive architecture we’re routinely promised, but now you see/hear it actually working. Each floor gets its own audio, so the sound seems to descend with the ball. Custom built gates with infrared sensors and radio modules complete the illusion by transforming the sound accordingly.
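
As a rough illustration of that per-floor spatialization – not the installation’s actual code – here’s a sketch of crossfading five speaker gains from the ball’s position along the track.

```javascript
// Five gains, one per floor, crossfaded from the ball's position
// (0 = top floor, 4 = bottom). Assumes `gains` is an array of five Web Audio
// GainNode objects, each routed to one floor's speaker.
function updateFloorGains(gains, ballPosition) {
  gains.forEach((gain, floor) => {
    // Each speaker is loudest when the ball is at its floor, fading with distance.
    const distance = Math.abs(ballPosition - floor);
    gain.gain.value = Math.max(0, 1 - distance);
  });
}

// e.g. as the gate sensors report the ball passing floor 2.3:
// updateFloorGains(gains, 2.3);
```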

It’s neo-baroque sonic trompe l’oeil, made with digital technology. The digital transformations of the sound, mapped to the actual kinetic movement of the ball, mix virtual and real.

The artists:

Krzysztof Cybulski
Krzysztof Goliński
Jakub Koźniewski

What we got most recently from the same Warszawa-based crew:

The retro-futuristic Apparatum draws from Polish electronic music history

Details:

During MDF Festival we’ve changed the iconic spiral staircase of the Szczecin Philharmonic into a 35m-long / 15m-high spatial voice-transforming instrument.

The audience has been invited to experiment with various spatialised sound effects applied to their vocalisations that were synchronised with the movement of the balls falling along 35m long track. The interaction starts with insertion of the ball into the microphone. Then recording starts and after the recorded sound stops the ball is released to slide down along the track.

Thanks to custom built gates with infrared sensors and radio modules the sound transformations applied to the recording were synchronised with the current speed and position of the ball. The light trail following the ball has also been created thanks to the sensors and microcontrollers measuring the speed of the ball passing the gates.

Since we were using five speakers – one per floor, we were also able to achieve spatialisation of the sound creating the illusion of the sound “falling” with the ball. As a finishing touch we’ve also used simple projection mapping synchronised with the motion of the ball to make the whole thing more visible for the people standing in the lobby of the Philharmonic.

In the end we’ve created a playful and engaging audience-driven audiovisual performance that exemplifies our vision for integrating new media art practice with architecture and breathing the life into static form thanks to digital technology.

——

VIDEO CREDITS

DOP – Hola Hola Film – holaholafilm.pl
VIDEO EDITING & POSTPRODUCTION – Jakub Koźniewski
SOUND EDITING – Krzysztof Cybulski
VIDEO SOUNDTRACK – Maciek Dobrowolski – mdobrowolski.com
VOICE – Jona Ardyn – jonaardyn.pl

SPECIAL THANKS

Paulina Stok-Stocka
Barbara Kinga Majewska
Tomasz Midzio
Maciej Kalczyński

—–

pangenerator.com/
mdf.filharmonia.szczecin.pl/
https://filharmonia.szczecin.pl/en

More:

http://pangenerator.com/projects/spiralalala/
