dadamachines doppler is a new platform for open music hardware

The new doppler board promises to meld the power of an FPGA and a microcontroller with the accessibility of environments like Arduino. And the founder is so confident this could lead to new inventions that he’s creating a “label” to help share your ideas.

doppler is a small, €39 development board packing both an ARM microcontroller and an FPGA. It could be the basis of music controllers, effects, synths – anything you can make run on those chips.

If this appeals to you, we’ve even got a CDM-exclusive giveaway for inventors with ideas. (Now, end users, this may all go over your head but … rest assured the upshot for you should be, down the road, more cool toys to play with. Tinkerers, developers, and people with a dangerous appetite for building things – read on.)

But first – why include an FPGA on a development board for music?

The pitch for FPGA

The FPGA is a powerful but rarified circuit. The idea is irresistible: imagine a circuit that could be anything you want it to be, rewired as easily as software. That’s roughly what an FPGA is – a big bundle of programmable logic blocks and memory blocks. You get all of that computational power at comparatively low cost, with the flexibility to adapt to a number of tasks. The upshot: you get something that performs like dedicated, custom-designed hardware, but that can be reconfigured on the fly – and with terrific real-time performance.

This works well for music and audio applications, because FPGAs do well in “close to the metal,” high-performance contexts. And we’ve even seen them used in some music gear. (Teenage Engineering was an early FPGA adopter, with the OP-1.) The challenge has always been configuring this hardware for use, which could easily scare off even some hardware developers.

For more on why open FPGA development is cool, here’s a (nerdy) slide deck: https://fpga.dev/oshug.pdf

Now, all of what I’ve just said is a little hard to envision. Wouldn’t it be great if, instead of that abstract description, you could fire up the Arduino development environment, upload some cool audio code, and have it running on an FPGA?

doppler, on a breadboard connected to other stuff so it starts to get more musically useful. Future modules could also make this easier.

doppler: easier audio FPGA

doppler takes that FPGA power, and combines it with the ease of working with environments like Arduino. It’s a chewing gum-sized board with both a familiar ARM microcontroller and an FPGA. This board is bare-bones – you just get USB – but the development tools have been set up for you, and you can slap this on a breadboard and add your own additions (MIDI, audio I/O).

The project is led by Johannes Lohbihler, dadamachines founder, along with engineer and artist Sven Braun.

dadamachines also plan some other modules to make it easier to add other stuff us music folks might like. Want audio in and out? A mic preamp? MIDI connections? A display? Controls? Those could be breakout boards, and it seems Johannes and dadamachines are open to ideas for what you most want. (In the meantime, of course, you can lay out your own stuff, but these premade modules could save time when prototyping.)

Full specs of the tiny, core starter board:

– 120 MHz ARM Cortex-M4F MCU with FPU, 512 KB flash (Microchip ATSAMD51G19A)
– FPGA: 5000 LUT, 1 Mbit RAM, 6 DSP cores, OSC, PLL (Lattice ICE40UP5K)
– Arduino IDE compatible
– Breadboard friendly (DIL48)
– Micro USB
– Power over USB or external via pin headers
– VCC 3.5 V … 5.5 V
– All GPIO pins at 3.3 V logic level
– 1 LED connected to the SAMD51
– 4 x 4 LED matrix (connected to the FPGA)
– 2 user buttons (connected to the FPGA)
– AREF solder jumper
– I2C (needs external pull-up), SPI, QSPI pins
– 2 DAC pins, 10 ADC pins
– Full open source toolchain
– SWD programming pin headers
– Double-press reset to enter the bootloader
– UF2 bootloader with firmware upload via simple USB stick mode

See also the quickstart PDF.

I’ve focused on the FPGA powers here, because those are the new ones, but the microcontroller side brings compatibility with existing libraries, which lets you combine some very useful features.

So, for instance, there’s USB host capability, which allows connecting all sorts of input devices, USB MIDI gadgets, and gaming controllers. See:

https://github.com/gdsports/USB_Host_Library_SAMD

That frees up the FPGA to do audio only. Flip it around the other way, and you can use the microcontroller for audio, while the FPGA does … something else. The Teensy audio library will work on this chip, too – meaning a bunch of Adafruit instructional content will be useful here (see the sketch after these links):

https://learn.adafruit.com/synthesizer-design-tool?view=all

https://github.com/adafruit/Audio/
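For a taste, here’s a minimal sketch using that library’s objects: a sawtooth oscillator routed to the on-chip DAC. The object names are the Teensy/Adafruit Audio library’s own; whether AudioOutputAnalog maps cleanly to the doppler’s DAC pins is an assumption to verify against the board documentation.

#include <Audio.h>   // Teensy/Adafruit Audio library

AudioSynthWaveform osc;              // basic waveform generator
AudioOutputAnalog  dac;              // on-chip DAC output (pin mapping: see board docs)
AudioConnection    patch(osc, dac);  // route oscillator -> DAC

void setup() {
  AudioMemory(12);                   // reserve blocks for audio processing
  osc.begin(WAVEFORM_SAWTOOTH);
  osc.frequency(220);                // A3
  osc.amplitude(0.5);
}

void loop() {}                       // audio renders in the background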

doppler is fully open source hardware, with open firmware and code samples, so it’s designed to be easy to integrate into a finished product – even one you might sell commercially.

The software examples for now are mainly limited to configuring and using the board, so you’ll still need to bring your own code for doing something useful. But you can add the doppler as an Arduino library and access even the FPGA from inside the Arduino environment, which expands this to a far wider range of developers.

Look, ma, Arduino!

In a few steps, you can get up and running with the development environment, on any OS. You’ll be blinking lights and even using a 4×4 matrix of lights to show characters, just as easily as you would on an Arduino board – only you’re using an FPGA.

Getting to that stage is lunch break stuff if you’ve at least worked with Arduino before:

https://github.com/dadamachines/doppler
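To give a sense of scale: the “hello world” here really is ordinary Arduino code. A minimal sketch along these lines is all it takes (with LED_BUILTIN standing in for the board’s SAMD51-connected LED, an assumption to check against the doppler pinout):

void setup() {
  pinMode(LED_BUILTIN, OUTPUT);      // the doppler board package defines the pin mapping
}

void loop() {
  digitalWrite(LED_BUILTIN, HIGH);   // LED on
  delay(500);
  digitalWrite(LED_BUILTIN, LOW);    // LED off
  delay(500);
}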

Dig into the firmware, and you can see, for instance, some I/O and a synth working. (This is in progress, it seems, but you get the idea.)

https://github.com/dadamachines/doppler-FPGA-firmware

And lest you think this is going to be something esoteric only for experienced embedded hardware developers, part of the reason it’s so accessible is that Johannes is working with Sven Braun. Sven is, among other things, the developer of the iOS apps zmors synth and modular – so you get something that’s friendly to app developers.

doppler in production…

A label for hardware, platform for working together

Johannes tells us there’s more to this than just tossing an open source board out into the world – dadamachines is also inviting collaborations. They’ve made doppler a kind of calling card for working together, as well as a starting point for building new hardware ideas, and are suggesting Berlin-based dadamachines as a “label” – a platform to develop and release those ideas as products.

There are already some cool, familiar faces playing with these boards – think Meng Qi, Tom Whitwell of Music Thing, and Ornament & Crime.

Johannes and dadamachines already have a proven hardware track record, having brought a product from Kickstarter funding to manufacturing with the automat. It’s an affordable device that makes it easy to connect physical, “robotic” outputs (like solenoids and motors). (New hardware, a software update, and more are planned for that, too, by the way.) And of course, part of what you get in doing that kind of hardware is a lot of amassed experience.

We’ve seen fertile open platforms before – Arduino and Raspberry Pi have each created their own ecosystems of both hardware and education. But this suggests working even more closely – pooling space, time, manufacturing, distribution, and knowledge together.

This might be skipping a few steps – even relatively experienced developers may want to see what they can do with this dev board first. But it’s an interesting long-range goal that Johannes has in mind.

Want your own doppler; got ideas?

We have five doppler boards to give away to interested CDM readers.

Just tell dadamachines what you want to make, or connect, or use, and email that idea to:

cdm@dadamachines.com

dadamachines will pick five winners to get a free board sent to them. (Johannes says he wants to do this by lottery, but I’ve said if there are five especially good ideas or submissions, he should… override the randomness.)

And stay tuned here, as I hope to bring you some more stuff with this soon.

For more:

https://forum.dadamachines.com/

https://dadamachines.com/product/doppler/


Flash Sale: Save 40% off Syntorial training software & synth plugin


Plugin Boutique has launched a sale on Syntorial, the video game-like training software by Audible Genius that teaches you how to program synth patches by ear. Syntorial includes lesson packs for popular synths such as Xfer Serum, LennarDigital Sylenth1 and Native Instruments Massive. With almost 200 lessons, combining video demonstrations with interactive challenges, you’ll get […]


TidalCycles, free live coding environment for music, turns 1.0

Live coding environments are free, run on the cheapest hardware as well as the latest laptops, and offer new ways of thinking about music and sound that are leading a global movement. And one of the leading tools of that movement just hit a big milestone.

This isn’t just about a nerdy way of making music. TidalCycles is free, and tribes of people form around using it. Just as important as how impressive the tool may be, the results are spectacular and varied.

There are some people who take on live coding as their primary instrument – some who hadn’t used even computers or electronic music production tools before, let alone whole coding environments. But I think these tools are worth a look even if you don’t envision yourself projecting code onstage as you type live. TidalCycles in particular had its origins not in computer science, but in creator Alex McLean’s research into rhythm and cycle. It’s a way of experiencing a musical idea as much as it is a particular tool.

TidalCycles has been one of the more popular tools, because it’s really easy to learn and musical. The one downside is a slightly convoluted install process, since it’s built on SuperCollider, as opposed to tools that now run in a Web browser. On the other hand, the payoff for that added work is that you’ll never outgrow TidalCycles itself – you can move to SuperCollider’s wider array of tools if you choose.

New in version 1.0 is a whole bunch of architectural improvements that really make the environment feel mature. And there’s one major addition: controller input means you can play TidalCycles like an instrument, even without coding as you perform:
New functions
Updated innards
New ways of combining patterns
Input from live controllers
The ability to set tempo with patterns

Maybe just as important as the plumbing improvements, you also get expanded documentation and an all-new website.

Check out the full list of changes:

https://tidalcycles.org/index.php/Changes_in_Tidal_1.0.0

You’ll need to update some of your code as there’s been some renaming and so on.

But the ability to input OSC and MIDI is especially cool, not least because you can now “play” all the musical, rhythmic stuff TidalCycles does with patterns.

There’s enough musicality and sonic power in TidalCycles that it’s easy to imagine some people will take advantage of the live coding feedback as they create a patch, but play more in a conventional sense with controllers. I’ll be honest; I couldn’t quite wrap my head around typing code as the performance element in front of an audience. And that makes some sense; some people who aren’t comfortable playing actually find themselves more comfortable coding – and those people aren’t always programmers. Sometimes they’re non-programmers who find this an easier way to express themselves musically. Now, you can choose, or even combine the two approaches.

Also worth saying – TidalCycles has happened partly because of community contributions, but it’s also the work primarily of Alex himself. You can keep him doing this by “sending a coffee” – TidalCycles works on the old donationware model, even as the code itself is licensed free and open source. Do that here:

http://ko-fi.com/yaxulive#

While we’ve got your attention, let’s look at what you can actually do with TidalCycles. Here’s our friend Miri Kat with her new single out this week, the sounds developed in that environment. It’s an ethereal, organic trip (the single is also on Bandcamp):

We put out Miri’s album Pursuit last year, not really having anything to do with it being made in a livecoding environment so much as I was in love with the music – and a lot of listeners responded the same way:

For an extended live set, here’s Alex himself playing in November in Tokyo:

And Alexandra Cardenas, one of the more active members of the TidalCycles scene, played what looked like a mind-blowing set in Bogota recently. On visuals is Olivia Jack, who created vibrant, eye-searing goodness in the live coding visual environment of her own invention, Hydra. (Hydra works in the browser, so you can try it right now.)

Unfortunately there are only clips – you had to be there – but here’s a taste of what we’re all missing out on:

See also the longer history of Tidal

It’ll be great to see where people go next. If you haven’t tried it yet, you can dive in now:

https://tidalcycles.org/

Image at top: Alex, performing as part of our workshop/party Encoded in Berlin in June.


Jlin, Holly Herndon, and ‘Spawn’ find beauty in AI’s flaws

Musicians don’t just endure technology when it breaks. They embrace the broken. So it’s fitting that Holly Herndon’s team have produced a demonic spawn of machine learning algorithms – and that the results are wonderful.

The new music video for the Holly Herndon + Jlin collaboration has been making the rounds online, so you may have seen it already:

But let’s talk about what’s going on here. Holly is continuing a long-running collaboration with producer Jlin, here joined by technologist Mat Dryhurst and coder Jules LaPlace. (The music video itself is directed by Daniel Costa Neves with software developer Leif Ryge, employing still more machine learning techniques to merge the two artists’ faces.)

Machine learning processes are being explored in different media in parallel – characters and text; images; and sound, voice, and music. But the results can be all over the place. And ultimately, humans are the last stage. We judge the results of the algorithms, project our own desires and fears onto what they produce, and imagine anthropomorphic intents and characteristics.

Sometimes errors like over-fitting then take on a personality all their own – even as mathematically sophisticated results fail to inspire.

But that’s not to say these reactions aren’t just as real. And part of what may make the video “Godmother” compelling is not just the buzzword of AI, but the fact that it genuinely sounds different.

The software ‘Spawn,’ developed by Ryge working with the team, is a machine learning-powered encoder. Herndon and company have anthropomorphized that code in their description, but that itself is also fair – not least because the track is composed in such a way as to suggest a distinct vocalist.

I love Holly’s poetic description below, but I think it’s also important to be precise about what we’re hearing. That is, we can talk about the evocative qualities of an oboe, but we should definitely still call an oboe an oboe.

So in this case, I confirmed with Dryhurst what I was hearing. The analysis stage employs neural network style transfers – some links on that below, though LaPlace and the artists here did make their own special code brew. And then they merged that with a unique vocoder – the high-quality WORLD vocoder. That is, they feed a bunch of sounds into the encoder, and get some really wild results.

And all of that in turn makes heavy use of the unique qualities of Jlin’s voice, Holly’s own particular compositional approach and the arresting percussive take on these fragmented sounds, Mat’s technological sensibilities, LaPlace’s code, a whole lot of time spent on parameters and training and adaptation…

Forget automation in this instance. All of this involves more human input and more combined human effort than any conventionally produced track would.

Is it worth it? Well, aesthetically, you could make comparisons to artists like Autechre, but then you could do that with anything with mangled sample content in it. And on a literal level, the result is the equivalent of a mangled sample. The results retain recognizable spectral components of the original samples, and they add a whole bunch of sonic artifacts which sound (correctly, really) ‘digital’ and computer-based to our ears.

But it’s also worth noting that what you hear is particular to this vocoder technique and especially to audio texture synthesis and neural network-based style transfer of sound. It’s a commentary on 2018 machine learning not just conceptually, but sonically: what you hear sounds the way it does because of the state of that tech.

And that’s always been the spirit of music. The peculiar sound and behavior of a Theremin says a lot about how radios and circuits respond to a human presence. Vocoders have ultimately proven culturally significant for their aesthetic peculiarities even if their original intention was encoding speech. We respond to broken circuits and broken code on an emotional and cultural level, just as we do acoustic instruments.

In a blog post that’s now a couple of years old – ancient history in machine learning terms, perhaps – Dmitry Ulyanov and Vadim Lebedev acknowledged that some of the techniques they used for “audio texture synthesis and style transfer” used a technique intended for something else. And they implied that the results didn’t work – that they had “stylistic” interest more than functional ones.

Dmitry even calls this a partial failure: “I see a slow but consistent interest increase in music/audio by the community, for sure amazing things are just yet to come. I bet in 2017 already we will find a way to make WaveNet practical but my attempts failed so far :)”

Spoiler – that hasn’t really happened in 2017 or 2018. But “failure” to be practical isn’t necessarily a failure. The rising interest has been partly in producing strange results – again, recalling that the vocoder, Theremin, FM synthesis, and many other techniques evolved largely because musicians thought the sounds were cool.

But this also suggests that musicians may uniquely be able to cut through the hype around so-called AI techniques. And that’s important, because these techniques are assigned mystical powers, Wizard of Oz-style.

Big corporations can only hype machine learning when it seems to be magical. But musicians can hype up machine learning even when it breaks – and knowing how and when it breaks is more important than ever. Here’s Holly’s official statement on the release:

For the past two years, we have been building an ensemble in Berlin.

One member is a nascent machine intelligence we have named Spawn. She is being raised by listening to and learning from her parents, and those people close to us who come through our home or participate at our performances.

Spawn can already do quite a few wonderful things. ‘Godmother’ was generated from her listening to the artworks of her godmother Jlin, and attempting to reimagine them in her mother’s voice.

This piece of music was generated from silence with no samples, edits, or overdubs, and trained with the guidance of Spawn’s godfather Jules LaPlace.

In nurturing collaboration with the enhanced capacities of Spawn, I am able to create music with my voice that far surpass the physical limitations of my body.

Going through this process has brought about interesting questions about the future of music. The advent of sampling raised many concerns about the ethical use of material created by others, but the era of machine legible culture accelerates and abstracts that conversation. Simply through witnessing music, Spawn is already pretty good at learning to recreate signature composition styles or vocal characters, and will only get better, sufficient that anyone collaborating with her might be able to mimic the work of, or communicate through the voice of, another.

Are we to recoil from these developments, and place limitations on the ability for non-human entities like Spawn to witness things that we want to protect? Is permission-less mimicry the logical end point of a data-driven new musical ecosystem surgically tailored to give people more of what they like, with less and less emphasis on the provenance, or identity, of an idea? Or is there a more beautiful, symbiotic, path of machine/human collaboration, owing to the legacies of pioneers like George Lewis, that view these developments as an opportunity to reconsider who we are, and dream up new ways of creating and organizing accordingly.

I find something hopeful about the roughness of this piece of music. Amidst a lot of misleading AI hype, it communicates something honest about the state of this technology; it is still a baby. It is important to be cautious that we are not raising a monster.

– Holly Herndon

Some interesting code:
https://github.com/DmitryUlyanov/neural-style-audio-tf

https://github.com/JeremyCCHsu/Python-Wrapper-for-World-Vocoder

Go hear the music:

http://smarturl.it/Godmother

Previously, from the hacklab program I direct, talks and a performance lab with CTM Festival:

What culture, ritual will be like in the age of AI, as imagined by a Hacklab

A look at AI’s strange and dystopian future for art, music, and society

I also wrote about machine learning:

Minds, machines, and centralization: AI and music


The guts of Tracktion are now open source for devs to make new stuff

Game developers have Unreal Engine and Unity Engine. Well, now it’s audio’s turn. Tracktion Engine is an open source engine based on the guts of a major DAW, but created as a building block developers can use for all sorts of new music and audio tools.

You can build new music apps not only for Windows, Mac, and Linux (including embedded platforms like Raspberry Pi), but iOS and Android, too. And while developers might go create their own DAW, they might also build other creative tools for performance and production.

The tutorials section already includes examples for simple playback, independent manipulation of pitch and time (meaning you could conceivably turn this into your own DJ deck), and a step sequencer.

We’ve had an open source DAW for years – Ardour. But this is something different – it’s clear the developers have created this with the intention of producing a reusable engine for other things, rather than just dumping the whole codebase for an entire DAW.

Okay, my Unreal and Unity examples are a little optimistic – those are friendly to hobbyists and first-time game designers. Tracktion Engine definitely needs you to be a competent C++ programmer.

But the entire engine is delivered as a JUCE module, meaning you can drop it into an existing project. JUCE has rapidly become the go-to for reasonably painless C++ development of audio tools across plug-ins and operating systems and mobile devices. It’s huge that this is available in JUCE.
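To make that concrete, here’s a rough sketch of what playback looks like. Class and function names follow my reading of the engine’s tutorial examples, so treat them as assumptions and check the repo’s examples folder for the current API:

#include <tracktion_engine/tracktion_engine.h>
namespace te = tracktion_engine;

// Sketch only: a real app would run this inside a JUCE application,
// with a message loop and audio device setup.
void playEdit (const juce::File& editFile)
{
    te::Engine engine { "CDMDemo" };                      // one Engine per app
    auto edit = te::loadEditFromFile (engine, editFile);  // assumed helper name
    edit->getTransport().play (false);                    // start the transport
}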

Even if you’re not a developer, you should still care about this news. It could be a sign that we’ll see more rapid development, allowing music-loving developers to try out new ideas, both in software and in hardware with JUCE-powered software under the hood. And I think with this idea out there, if it doesn’t deliver, it may spur someone else to try the same notion.

I’ll be really interested to hear if developers find this is practical in use, but here’s what they’re promising developers will be able to use from their engine:

A wide range of supported platforms (Windows, macOS, Linux, Raspberry Pi, iOS and Android)
Tempo, key and time-signature curves
Fast audio file playback via memory mapping
Audio editing including time-stretching and pitch shifting
MIDI with quantisation, groove, MPE and pattern generation
Built-in and external plugin support for all the major formats
Parameter adjustments with automation curves or algorithmic modifiers
Modular plugin patching Racks
Recording with punch, overdub and loop modes along with comp editing
External control surface support
Fully customizable rendering of arrangements

The licensing is also stunningly generous. The code is under a GPLv3 license – meaning if you’re making a GPLv3 project yourself (artists included), you can use the engine freely under that open source license.

But even commercial licensing is wide open. Educational projects get forum support and have no revenue limit whatsoever. (I hope that’s a cue to academic institutions to open up some of their licensing, too.)

Personal projects are free, too, with revenue up to US$50k. (Not to burst anyone’s bubble, but many small developers are below that threshold.)

For $35/mo, with a minimum 12 month commitment, “indie” developers can make up to $200k. Enterprise licensing requires getting in touch, and then offers premium support and the ability to remove branding. They promise paid licenses by next month.

Check out their code and the Tracktion Engine page:

https://www.tracktion.com/develop/tracktion-engine

https://github.com/Tracktion/tracktion_engine/

I think a lot of people will be excited about this, enough so that … well, it’s been a long time. Let’s Ballmer this.


Cycling ’74 releases Max 8 incl. multi-channel MC & performance improvements

Cycling ’74 has announced the release of Max 8, a major upgrade to the visual programming software. Max 8 includes MC, allowing for objects and patch cords to contain multiple audio channels by simply typing mc. before the name of any MSP object. It also comes with speed improvements of up to 2x on Mac […]

Watch this $30 kit turn into all these other synthesizers

DIY guru Mitch Altman has been busy expanding ArduTouch, the $30 kit board he designed to teach synthesis and coding. And now you can turn it into a bunch of other synths – with some new videos to show you how that works.

You’ll need to do a little bit of tinkering to get this working – though for many, of course, that’ll be part of the fun. So you solder together the kit, which includes a capacitive touch keyboard (as found on instruments like the Stylophone) and speaker. That means once the soldering is done, you can make sounds. To upload different synth code, you need a programmer cable and some additional steps.

Where this gets interesting is that the ArduTouch is really an embedded computer – and what’s wonderful about computers is, they transform based on whatever code they’re running.

ArduTouch is descended from the Arduino project, which in turn was the embedded hardware coding answer to desktop creative coding environment Processing. And from Processing, there’s the idea of a “sketch” – a bit of code that represents a single idea. “Sketching” was vital as a concept to these projects as it implies doing something simpler and more elegant.

For synthesis, ArduTouch is collecting a set of its own sketches – simple, fun digital signal processing creations that can be uploaded to the board. You get a whole collection of these, including sketches that are meant to serve mainly as examples, so that over time you can learn DSP coding. (The sketches are mostly the creation of Mitch’s friend, Bill Alessi.) Because the ArduTouch itself is cloned from the Arduino UNO, it’s also fully compatible both with UNO boards and the Arduino coding environment.
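For a feel of what an ArduTouch sketch looks like, here’s the bare skeleton, modeled on the empty-synth example in the library’s documentation. The macro and function names are my recollection of that API, so verify them against the repo before building:

#include "ArduTouch.h"                        // the library's master header

about_program( EmptySynth, 0.1 )              // name/version macro for the sketch

class EmptySynth : public Synth               // your synth is a Synth subclass
{
   // real sketches override methods here to render audio
   // and respond to the touch keyboard and buttons
};

EmptySynth mySynth;

void setup() { ardutouch_setup( &mySynth ); } // bind the synth to the board I/O
void loop()  { ardutouch_loop(); }            // run the audio and event loop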

Mitch has been uploading videos and descriptions (and adding new synths over time), so let’s check them out:

Thick is a Minimoog-like, playable monosynth.

Arpology is an “Eno-influenced” arpeggiator/synth combo with patterns, speed, major/minor key, pitch, and attack/decay controls, plus a J.S. Bach-style generative auto-play mode.

Beatitude is a drum machine with multiple parts and rhythm track creation, plus a live playable bass synth.

Mantra is a weird, exotic-sounding sequenced drone synth with pre-mapped scales. The description claims “it is almost impossible to play something that doesn’t sound good.” (I initially read that backwards!)

Xoid is a raucous synth with frequency modulation, ratio, and XOR controls. Actually, this very example demonstrates just why ArduTouch is different – like, you’d probably not want to ship Xoid as a product or project on its own. But as a sketch – and something strange to play with – it’s totally great.

DuoPoly is also glitchy and weird, but represents more of a complete synth workstation – and it’s a grab-bag demo of all the platform can do. So you get Tremolo, Vibrato, Pitch Bend, Distortion Effects, Low Pass Filter, High Pass Filter, Preset songs/patches, LFOs, and other goodies, all crammed onto this little board.

They’ve made some different oddball preset songs in there, too:

Platinum hit, this one:

This one, it sounds like we hit a really tough cave level in Metroid:

Open source hardware, kits available for sale:

https://cornfieldelectronics.com/cfe/projects.php#ardutouch

https://github.com/maltman23/ArduTouch


Vectors are getting their own festival: lasers and oscilloscopes, go!

It’s definitely an underground subculture of audiovisual media, but lovers of graphics made with vintage displays, analog oscilloscopes, and lasers are getting their own fall festival to share performances and techniques.

Vector Hack claims to be “the first ever international festival of experimental vector graphics” – a claim that is, uh, probably fair. And it’ll span two cities, starting in Zagreb, Croatia, but wrapping up in the Slovenian capital of Ljubljana.

Why vectors? Well, I’m sure the festival organizers could come up with various answers to that, but let’s go with because they look damned cool. And the organizers behind this particular effort have been spitting out eyeball-dazzling artwork that’s precise, expressive, and unique to this visceral electric medium.

Unconvinced? Fine. Strap in for the best. Festival. Trailer. Ever.

Here’s how they describe the project:

Vector Hack is the first ever international festival of experimental vector graphics. The festival brings together artists, academics, hackers and performers for a week-long program beginning in Zagreb on 01/10/18 and ending in Ljubljana on 07/10/18.

Vector Hack will allow artists creating experimental audio-visual work for oscilloscopes and lasers to share ideas and develop their work together alongside a program of open workshops, talks and performances aimed at allowing young people and a wider audience to learn more about creating their own vector based audio-visual works.

We have gathered a group of fifteen participants all working in the field from a diverse range of locations including the EU, USA and Canada. Each participant brings a unique approach to this exciting field and it will be a rare chance to see all their works together in a single program.

Vector Hack festival is an artist-led initiative organised with support from Radiona.org/Zagreb Makerspace as a collaborative international project alongside Ljubljana’s Ljudmila Art and Science Laboratory and Projekt Atol Institute. It was conceived and initiated by Ivan Marušić Klif and Derek Holzer with assistance from Chris King.

Robert Henke is featured, naturally – the Berlin-based artist and co-founder of Ableton and Monolake has spent the last years refining his skills in spinning his own code to control ultra-fine-tuned laser displays. But maybe what’s most exciting about this scene is discovering a whole network of people hacking into supposedly outmoded display technologies to find new expressive possibilities.

One person who has helped lead that direction is festival initiator Derek Holzer. He’s finishing a thesis on the topic, so we’ll get some more detail soon, but anyone interested in this practice may want to check out his open source Pure Data library. The Vector Synthesis library “allows the creation and manipulation of vector shapes using audio signals sent directly to oscilloscopes, hacked CRT monitors, Vectrex game consoles, ILDA laser displays, and oscilloscope emulation software using the Pure Data programming environment.”

https://github.com/macumbista/vectorsynthesis
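The underlying trick is simple enough to sketch outside Pd, too: treat the left audio channel as X deflection and the right as Y. This little C++ program writes raw 16-bit stereo PCM that draws a Lissajous figure on any oscilloscope in X/Y mode (sample rate, level, and the 2:3 frequency ratio are all arbitrary choices):

#include <cmath>
#include <cstdint>
#include <fstream>

int main()
{
    const double pi = 3.141592653589793;
    const double sampleRate = 48000.0;    // match your audio interface
    const double fx = 200.0, fy = 300.0;  // 2:3 ratio -> a stable Lissajous figure
    const long   frames = static_cast<long>(sampleRate * 5.0);  // 5 seconds

    std::ofstream out("lissajous.raw", std::ios::binary);  // headerless stereo PCM
    for (long i = 0; i < frames; ++i)
    {
        const double  t = i / sampleRate;
        const int16_t x = static_cast<int16_t>(30000 * std::sin(2 * pi * fx * t));
        const int16_t y = static_cast<int16_t>(30000 * std::sin(2 * pi * fy * t));
        out.write(reinterpret_cast<const char*>(&x), sizeof x);  // left  = X
        out.write(reinterpret_cast<const char*>(&y), sizeof y);  // right = Y
    }
    return 0;
}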

The results are entrancing – organic and synthetic all at once, with sound and sight intertwined (both in terms of control signal and resulting sensory impression). That is itself perhaps significant, as neurological research reveals that these media are experienced simultaneously in our perception. Here are just two recent sketches for a taste:

They’re produced by hacking into a Vectrex console – an early-80s consumer game console that used vector signals to manipulate a cathode ray screen. From Wikipedia, here’s how it works:

The vector generator is an all-analog design using two integrators: X and Y. The computer sets the integration rates using a digital-to-analog converter. The computer controls the integration time by momentarily closing electronic analog switches within the operational-amplifier based integrator circuits. Voltage ramps are produced that the monitor uses to steer the electron beam over the face of the phosphor screen of the cathode ray tube. Another signal is generated that controls the brightness of the line.

Ted Davis is working to make these technologies accessible to artists, too, by developing a library for coding-for-artists tool Processing.

http://teddavis.org/xyscope/

Oscilloscopes, ready for interaction with a library by Ted Davis.

Ted Davis.

Here’s a glimpse of some of the other artists in the festival, too. It’s wonderful to watch new developments in the post-digital age, as artists produce work that innovates through deeper excavation of technologies of the past.

Akiras Rebirth.

Alberto Novell.

Vanda Kreutz.

Stefanie Bräuer.

Jerobeam Fenderson.

Hrvoslava Brkušić.

Andrew Duff.

More on the festival:
https://radiona.org/
https://wiki.ljudmila.org/Main_Page

http://vectorhackfestival.com/


Creative software can now configure itself for control, with OSC

Wouldn’t it be nice if, instead of manually assigning every knob and parameter, software was smart enough to configure itself? Now, visual software and OSC are making that possible.

Creative tech has been moving forward lately thanks to a new attitude among developers: want something cool? Do it. Open source and/or publish it. Get other people to do it, too. We’ve seen that as Ableton Link transformed wireless sync across iOS and desktop. And we saw it again as software and hardware makers embraced more expression data with MIDI Polyphonic Expression. It’s a way around “chicken and egg” worries – make your own chickens.

Open Sound Control (OSC) has for years been a way of getting descriptive, high-resolution data around. It’s mostly been used in visual apps and DIY audiovisual creations, with some exceptions – Native Instruments’ Reaktor has a nice implementation on the audio side. But what it was missing was a way to query those descriptive messages.

What would that mean? Well, basically, the idea would be for you to connect a new visual app or audio tool or hardware instrument and interactively navigate and assign parameters and controls.

That can make tools smarter and auto-configuring. Or to put it another way – no more typing in the names of parameters you want to control. (MIDI is moving in a similar direction, if via a very different structure and implementation, with something called MIDI-CI or “Capability Inquiry.” It doesn’t really work the same way, but the basic goal – and, with some work, the end user experience – is more or less the same.)

OSC queries are something I’ve heard people talk about for almost a decade. But now we have something real you can use right away. Not only is there a detailed proposal for how to make the idea work, but the visual tools VDMX, MadMapper, and Mitti all have support now, and there’s an open source implementation for others to follow.

Vidvox (makers of VDMX) have led the way, as they have with a number of open source ideas lately. (See also: a video codec called Hap, and an interoperable shader standard for hardware-accelerated graphics.)

Their implementation is already in a new build of VDMX, their live visuals / audiovisual media software:

https://docs.vidvox.net/vdmx_b8700.html

You can check out the proposal on their site:

https://github.com/vidvox/oscqueryproposal

Plus there’s a whole dump of open source code. Developers on the Mac get a Cocoa framework that’s ready to use, but you’ll find some code examples that could be very easily ported to a platform / language of your choice:

https://github.com/Vidvox/VVOSCQueryProtocol

There’s even an implementation that provides compatibility in apps that support MIDI but don’t support OSC (which is to say, a whole mess of apps). That could also be a choice for hardware and not just software.

They’ve even done this in-progress implementation in a browser (though they say they will make it prettier):

Here’s how it works in practice:

Let’s say you’ve got one application you want to control (like some software running generative visuals for a live show), and then another tool – or a computer with a browser open – connected on the same network. You want the controller tool to map to the visual tool.

Now, the moment you open the right address and port, all the parameters you want in the visual tool just show up automatically, complete with widgets to control them.

And it’s (optionally) bi-directional. If you change your visual patch, the controls update.

In VDMX, for instance, you can browse parameters you want to control in a tool elsewhere (whether that’s someone else’s VDMX rig or MadMapper or something altogether different):

And then you can take the parameters you’ve selected and control them via a client module:

All of this is stored as structured data – JSON files, if you’re curious. But this means you could also save and assign mappings from OSC to MIDI, for instance.
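For the curious, a reply to a query on a host’s root node looks roughly like this (abridged and illustrative, with attribute names like FULL_PATH, CONTENTS, TYPE, ACCESS, VALUE, and RANGE from the proposal, and a made-up /brightness parameter):

{
  "FULL_PATH": "/",
  "CONTENTS": {
    "brightness": {
      "FULL_PATH": "/brightness",
      "DESCRIPTION": "a hypothetical example parameter",
      "TYPE": "f",
      "ACCESS": 3,
      "VALUE": [ 0.5 ],
      "RANGE": [ { "MIN": 0.0, "MAX": 1.0 } ]
    }
  }
}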

Another example: you could have an Ableton Live file with a bunch of MIDI mappings. Then you could, via experimental code in the archive above, read that ALS file, and have a utility assign all those arbitrary MIDI CC numbers to automatically-queried OSC controls.

Think about that for a second: then your animation software could automatically be assigned to trigger controls in your Live set, or your live music controls could automatically be assigned to generative visuals, or an iPad control surface could automatically map to the music set when you don’t have your hardware controller handy, or… well, a lot of things become possible.

We’ll be watching OSCquery. But this may be of enough interest to developers to facilitate some discussion here on CDM to move things forward.

Follow Vidvox:

https://vdmx.vidvox.net/blog

And previously, watching MIDI get smarter (smarter is better, we think):

MIDI evolves, adding more expressiveness and easier configuration

MIDI Polyphonic Expression is now a thing, with new gear and software

Plus an example of cool things done with VDMX, by artist Lucy Benson:


Apple to open source, cross-platform GPU tech: drop dead?

Apple’s decision to shift to its own proprietary tech for accessing modern GPUs could hurt research, education, and pro applications on their platform.

OpenGL and OpenCL are the industry-standard specifications for writing code that runs on graphics architectures, for graphics and general-purpose computation, including everything from video and 3D to machine learning.

This is relevant to an ongoing interest on this site – those technologies also enable live visuals (including for music), creative coding, immersive audiovisual performance, and “AI”-powered machine learning experiments in music and art.

OpenGL and OpenCL, while sometimes arcane technologies, enable a wide range of advanced, cross-platform software. They’re also joined by a new industry standard, Vulkan. Cross-platform code is growing, not shrinking, as artists, researchers, creative professionals, experimental coders, and other communities contribute new generations of software that work more seamlessly across operating systems.

And Apple has just quietly blown off all those groups. From the announcement to developers regarding macOS 10.14:

Deprecation of OpenGL and OpenCL

Apps built using OpenGL and OpenCL will continue to run in macOS 10.14, but these legacy technologies are deprecated in macOS 10.14. Games and graphics-intensive apps that use OpenGL should now adopt Metal. Similarly, apps that use OpenCL for computational tasks should now adopt Metal and Metal Performance Shaders.

They’re also deprecating OpenGL ES on iOS, with the same logic.

Metal is fine technology, but it’s specific to iOS and macOS. It’s not open, and it won’t run on other platforms.

Describing OpenGL and OpenCL as “legacy” is indeed fine. But as usual, the issue with Apple is an absence of information, and that’s what’s problematic. Questions:

Does this mean OpenGL apps will stop working? This is actually the big question. “Deprecation” in the case of QuickTime did eventually mean Apple pulled support. But we don’t know if it means that here.

(One interesting angle for this is, it could be a sign of more Apple-made graphics hardware. On the other hand, OpenGL implementations were clearly a time suck – and Apple often lagged major OpenGL releases.)

What about support for Vulkan? Apple are a partner in the Khronos Group, which develops this industry-wide standard. It isn’t in fact “legacy,” and it’s designed to solve the same problems as Metal does. Is Metal being chosen over Vulkan?

Cook’s 2018 Apple seems to be far more interested in showcasing proprietary developer APIs. Compare the early Jobs era, which emphasized cross-platform standards (OpenGL included). Apple has an opportunity to put some weight behind Vulkan – if not at WWDC, fair enough, but at some other venue?

What happens on the Web? Cross-platform here is even more essential, since your 3D or machine learning code for a browser needs to work in multiple scenarios.

Transparency and information might well solve this, but for now we’re a bit short on both.

Metal support in Unity. Frameworks like Unity may be able to smooth out platform differences for developers (including artists).

A case for Apple pushing Metal

First off, there is some sense to Apple’s move here. Metal – like DirectX on Windows or Mantle from AMD – is a lower-level language for addressing the graphics hardware. That means less overhead, higher performance, and extra features. It suggests Apple is pushing their mobile platforms in particular as an option for higher-end games. We’ve seen gaming companies Razer and Asus create Android phones that have high-end specs on paper, but without a low-level API for graphics hardware or a significant installed base, those are more proof of concept than they are useful as game platforms.

And Apple does love to deprecate APIs to force developers onto the newest stuff. That’s why so often your older OS versions are so quickly unsupported, even when developers don’t want to abandon you.

On mobile, Apple never implemented OpenCL in the first place. And there’s arguably a more significant gap between OpenGL ES and something like Metal for performance.

Another business case: Apple may be trying to drive a wedge in development between iOS and Android, to ensure more iOS-only games and the like. Since they can’t make platform exclusives the way something like a PlayStation or Nintendo Switch or Xbox can, this is one way to do it.

And it seems Apple is moving away from third-party hardware vendors, meaning they control both the spec here and the chips inside their devices.

But that doesn’t automatically make any of this more useful to end users and developers, who reap benefits from cross-platform support. It significantly increases the workload on Apple to develop APIs and graphics hardware – and to encourage enough development to keep up with competing ecosystems. So there’s a reason for standards to exist.

Vulkan offers some of the low-level advantages of Metal (or DirectX) … but it works cross-platform, even including Web contexts.

Pulling out of an industry standard group

The significant factor here about OpenGL generally is, it’s not software. It’s a specification for an API. And for the moment, it remains the industry standard specification for interfacing with the GPU. Unlike their move to embrace new variations of USB and Thunderbolt over the years, or indeed the company’s own efforts in the past to advance OpenGL, Apple isn’t proposing an alternative standard. They’re just pulling out of a standard the entire industry supports, without any replacement.

And this impacts a range of cross-platform software, open source software, and the ability to share code and research across operating systems, including but not limited to:

Video editing
Post production
Generative graphics
Digital art
VJing and live visual software
Creative coding
Machine learning and neural network tools

Cross-platform portability for those use cases meets a significant set of needs. Educators wanting to teach shader writing now face students with Apple hardware having to use a different language, for example. Gamers wanting access to the largest possible library – as on services like Steam – will now likely see more platform-exclusive titles on Apple hardware instead. And pros wanting access to specific open source, high-end video tools… well, here’s yet another reason to switch to Windows or Linux.

This doesn’t so much impact developers who rely on existing libraries that target Metal specifically. So, for instance, developing in the Unity Game Engine means your creation can use Metal on Apple platforms and OpenGL elsewhere. But because of the size of the ecosystem here, that won’t be the case for a lot of other use cases.

And yeah, I’m serious about Linux as a player here. As Microsoft and Apple continue to emphasize consumers over pros, cramming huge updates over networks and trying to foist them on users, desktop Linux has quietly gotten a lot more stable. For pro video production, post production, 3D, rendering, machine learning, research, and – even a growing niche of people working in audio and music – Linux can simply out-perform its proprietary relatives and save money and time.

So what happened to Vulkan?

Apple could have joined with the rest of the industry in supporting a new low-level API for computation and graphics. That standard is now doubly important as machine learning technology drives new ideas across art, technology, and society.

https://www.khronos.org/vulkan/

And apart from the value of it being a standard, Apple would break with important hardware partners here at their own peril. Yes, Apple makes a lot of their own hardware under the hood – but not all of it. Will they also make a move to proprietary graphics chips on the Mac, and will those keep up with PC offerings? (There is currently a Vulkan SDK for Mac. It’s unclear exactly how it will evolve in the wake of this decision.)

ExtremeTech have a scathing review of the situation. It’s a must-read, as it clearly breaks down the different pipelines and specs and how they work. But it also points out that Apple have tended to lag not just in hardware adoption but in their in-house support efforts. That suggests you generally get an advantage from being on Windows or Linux:

Apple brings its Metal API to OS X 10.11, kicks Vulkan to the curb

Updated: Yes, of course you can run Vulkan atop Metal, via MoltenVK. In fact, here’s a demo from 2016. (Thanks, George Toledo!)

https://moltengl.com/moltenvk/

https://github.com/KhronosGroup/MoltenVK

That’s little comfort for longer-range backwards compatibility with “legacy” OpenGL, but it does bode reasonably well for the future. And, you know … fish tornadoes.

(Side note: that’s not just any fish tornado. The credit goes to Robert Hodgin, the creative coding artist aka flight404, responsible for many, many generative demos over the years – including a classic iTunes visualizer.)

Fragmentation or standards

Let’s be clear – even with OpenGL and OpenCL, there’s loads of fragmentation in the fields I mention, from hardware to firmware to drivers to SDKs. Making stuff work everywhere is messy.

But users, researchers, and developers do reap real benefits from cross-platform standards and development. And Metal alone clearly doesn’t provide that.

Here’s my hope: I hope that while deprecating OpenGL/CL, Apple does invest in Vulkan and its existing membership in Khronos Group (the industry consortium that supports that API as well as OpenGL). Apple following up this announcement with some news on Vulkan and cross-platform support – and how the transition to that and Metal would work – could turn the mood around entirely.

Apple’s reputation may be proprietary, but this is also the company that pushed USB and Thunderbolt, POSIX and WebKit, that used a browser to sell its first phone, and that was a leading advocate (ironically) for OpenGL and OpenCL.

As game directors and artists and scientists and thinkers all explore the possibilities of new graphics hardware, from virtual reality to artificial intelligence, we have some real potential ahead. The platforms that will win I think will be the ones that maximize capabilities and minimize duplication of effort.

And today, at least, Apple are leaving a lot of those users in the dark about just how that future will work.

I’d love your feedback. I’m ranting here partly because I know a lot of the most interesting folks working on this are readers, so do please get in touch. You know more than I do, and I appreciate your insights.

More:

https://developer.apple.com/macos/whats-new/

https://www.khronos.org/opengl/wiki/FAQ

https://www.khronos.org/vulkan/

https://developer.apple.com/documentation/metalperformanceshaders

… and what this headline is referencing
