Sunday 26 January 2014

Snow In Tokyo - go!

I promised I'd write more about what I'm up to, so:

At the moment I'm working on a really fun little project for my partner in crime, DEADBEAR (although I assume his mum calls him Nick). He makes lovely bleeps, and I offered to add some visuals to the mix.

It definitely took us a while to figure out what we wanted to do (as is always the way with self-directed "let's make something cool" vanity projects), and after a mild lull in activity while I finished some other stuff, my brain has fully whirred into action. My recent interest in live coding has spilled out into this project, which is no bad thing at all - Nick gets a sweet video, and I get to noodle with something that interests me enormously. I'm also using this as a push to get more active in open source; at the moment I'm using Cyril, a live coding environment built on OpenFrameworks. It's very new and completely open source, so hopefully I can contribute there too.



The plan at the moment is to produce a live-coded music video for the new single, Snow In Tokyo - I'm going to be visually exploring a snowflake form, deconstructing it over time and completely glitching out the components into a massive cacophony of visual weirdness, before reconstructing it as the song ends. Ideally it'll be built from hundreds of rapidly coded segments, each sharing the common snowflake form but varying enormously with the music. Every part of the animation will be powered entirely by the music (remember Rez?) - beat detection and things like FFT/frequency isolation work great already.
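To make that arc a bit more concrete, here's a rough single-file sketch in plain OpenFrameworks (the framework Cyril sits on top of) rather than Cyril itself - the track's playback position shapes a "deconstruct" amount that peaks mid-song and drops back to zero by the end. The filename and every number in it are invented for illustration, so don't read it as the actual project code.

```cpp
// Illustrative sketch only: playback position drives a deconstruct/reconstruct arc.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofSoundPlayer track;
    float deconstruct = 0; // 0 = intact snowflake, 1 = fully glitched apart

    void setup() override {
        track.load("snow-in-tokyo.mp3"); // hypothetical path
        track.play();
    }

    void update() override {
        ofSoundUpdate();
        float pos = track.getPosition();  // 0..1 through the song
        deconstruct = sinf(pos * PI);     // peaks mid-song, zero at both ends
    }

    void draw() override {
        ofBackground(0);
        // stand-in visual: six points drift away from the centre as the
        // song builds, then return as it resolves
        ofTranslate(ofGetWidth() / 2, ofGetHeight() / 2);
        for (int i = 0; i < 6; i++) {
            float angle = ofDegToRad(i * 60);
            float radius = 80 + deconstruct * 250;
            ofDrawCircle(cosf(angle) * radius, sinf(angle) * radius, 10);
        }
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```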

To give an indication of how quick and exciting the process is, the sequence below represents about thirty minutes of experimentation once I had the basic snowflake drawn. This is a whole new way of working for me, and feels as close to a jam session with code as I think it's possible to get:








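For anyone wondering what "the basic snowflake" actually involves, it's not much: one arm with a few side branches, drawn six times with a 60 degree rotation between each. Something like this rough OpenFrameworks sketch, where all the lengths and angles are placeholder guesses rather than what I'm actually using:

```cpp
// Illustrative sketch only: a bare six-fold snowflake built from one repeated arm.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    void drawArm(float length) {
        // main spine of one arm
        ofDrawLine(0, 0, length, 0);
        // a few side branches at 60 degrees, shrinking toward the tip
        for (int i = 1; i <= 3; i++) {
            float x = length * i / 4.0f;
            float branch = length * 0.25f * (4 - i) / 4.0f;
            ofDrawLine(x, 0, x + branch * cosf(ofDegToRad(60)),  branch * sinf(ofDegToRad(60)));
            ofDrawLine(x, 0, x + branch * cosf(ofDegToRad(-60)), branch * sinf(ofDegToRad(-60)));
        }
    }

    void draw() override {
        ofBackground(0);
        ofSetColor(255);
        ofTranslate(ofGetWidth() / 2, ofGetHeight() / 2);
        // six-fold symmetry: draw the same arm rotated 60 degrees each time
        for (int i = 0; i < 6; i++) {
            ofPushMatrix();
            ofRotateDeg(i * 60);
            drawArm(200);
            ofPopMatrix();
        }
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```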
This doesn't sound like live coding, and on the surface I suppose it strictly isn't - I'm using the techniques as a tool to create hundreds of visual treatments rather than performing them live. I also like the idea of algorithmically putting the video together, perhaps writing something that judges the craziness of each component and slots it into the right section of the video. That's a way off though, baby steps...
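To sketch what I mean - and this really is just a sketch, none of it exists yet - even something as crude as giving every rendered segment a craziness score (frame-to-frame difference is one obvious candidate) and arranging the calm ones at the ends with the wildest in the middle would be a start. The segment names and scores below are completely made up:

```cpp
// Back-of-envelope idea: order segments so intensity ramps up to the middle and back down.
#include <algorithm>
#include <iostream>
#include <string>
#include <vector>

struct Segment {
    std::string name;
    float craziness; // 0 = gentle, 1 = total glitch chaos
};

int main() {
    std::vector<Segment> segments = {
        {"slow-rotation", 0.1f}, {"shatter", 0.9f},
        {"pulse", 0.4f},         {"strobe-madness", 1.0f},
        {"drift", 0.2f},         {"kaleidoscope", 0.6f},
    };

    // sort calm -> crazy, then alternate ends so the peak lands mid-timeline
    std::sort(segments.begin(), segments.end(),
              [](const Segment& a, const Segment& b) { return a.craziness < b.craziness; });

    std::vector<Segment> timeline(segments.size());
    size_t front = 0, back = segments.size() - 1;
    for (size_t i = 0; i < segments.size(); i++) {
        if (i % 2 == 0) timeline[front++] = segments[i];
        else            timeline[back--]  = segments[i];
    }

    for (const auto& s : timeline)
        std::cout << s.name << " (" << s.craziness << ")\n";
}
```

In practice the scores would presumably come from analysing the rendered frames rather than being typed in by hand, but the ordering idea is the same.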

I've built a couple of prototypes already, just to make sure I can pick out the right parts of the music. In this one, the first three boxes each sample a single frequency band - one low, one high, one mid - and the other two respond to hi-hats and kicks:


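The guts of that prototype boil down to something like the following - again a plain OpenFrameworks sketch rather than the real thing, with guessed band ranges and thresholds. Three boxes follow smoothed low/mid/high spectrum bands, and the kick and hi-hat boxes flash whenever their band's energy jumps well above its own running average:

```cpp
// Illustrative sketch only: spectrum bands plus crude onset detection driving five boxes.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofSoundPlayer track;
    float low = 0, mid = 0, high = 0;  // smoothed band levels
    float kickAvg = 0, hatAvg = 0;     // running averages for onset detection
    bool kick = false, hat = false;

    void setup() override {
        track.load("snow-in-tokyo.mp3"); // hypothetical path
        track.setLoop(true);
        track.play();
    }

    float average(float* s, int from, int to) {
        float sum = 0;
        for (int i = from; i < to; i++) sum += s[i];
        return sum / (to - from);
    }

    void update() override {
        ofSoundUpdate();
        float* spectrum = ofSoundGetSpectrum(32); // 32 FFT bands, index 0 = lowest

        low  = 0.8f * low  + 0.2f * average(spectrum, 0, 4);
        mid  = 0.8f * mid  + 0.2f * average(spectrum, 8, 16);
        high = 0.8f * high + 0.2f * average(spectrum, 24, 32);

        // an onset = this frame's energy spiking well above its recent average
        float kickNow = average(spectrum, 0, 2);
        float hatNow  = average(spectrum, 28, 32);
        kick = kickNow > kickAvg * 1.5f + 0.01f;
        hat  = hatNow  > hatAvg  * 1.5f + 0.01f;
        kickAvg = 0.95f * kickAvg + 0.05f * kickNow;
        hatAvg  = 0.95f * hatAvg  + 0.05f * hatNow;
    }

    void draw() override {
        ofBackground(0);
        // first three boxes scale with their band level
        float bands[3] = { low, mid, high };
        for (int i = 0; i < 3; i++) {
            float h = ofMap(bands[i], 0, 0.3f, 5, 300, true);
            ofSetColor(255);
            ofDrawRectangle(60 + i * 120, ofGetHeight() - h, 100, h);
        }
        // kick and hi-hat boxes just flash on onsets
        ofSetColor(kick ? 255 : 40);
        ofDrawRectangle(60 + 3 * 120, ofGetHeight() - 150, 100, 150);
        ofSetColor(hat ? 255 : 40);
        ofDrawRectangle(60 + 4 * 120, ofGetHeight() - 150, 100, 150);
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```

The spike-above-running-average test is about the crudest onset detector there is, but for flashing a box in time with a kick it does the job.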
This other one simply demonstrates the 32-channel breakdown; any one of those channels could be used to power the video. It's really simple to do, but it'll let me do things like build a section's background from the low, bassy wobble while the upper end drives finer little animations.
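Grabbing those 32 channels really is about as simple as audio-reactive code gets in OpenFrameworks - one call hands you the whole spectrum. A sketch along these lines (the mappings are invented for illustration, not what the prototype does) draws all 32 bands as bars, tints the background from the low end and pulses a few small dots from the top bands:

```cpp
// Illustrative sketch only: 32-band spectrum as bars, lows -> background, highs -> fine detail.
#include "ofMain.h"

class ofApp : public ofBaseApp {
public:
    ofSoundPlayer track;
    float bands[32] = { 0 };

    void setup() override {
        track.load("snow-in-tokyo.mp3"); // hypothetical path
        track.setLoop(true);
        track.play();
    }

    void update() override {
        ofSoundUpdate();
        float* spectrum = ofSoundGetSpectrum(32);
        for (int i = 0; i < 32; i++)
            bands[i] = 0.85f * bands[i] + 0.15f * spectrum[i]; // smooth each band
    }

    void draw() override {
        // low, bassy wobble -> background brightness
        float lowEnergy = (bands[0] + bands[1] + bands[2] + bands[3]) / 4.0f;
        ofBackground(ofMap(lowEnergy, 0, 0.3f, 0, 120, true), 0, 60);

        // all 32 channels as a bar display
        float w = ofGetWidth() / 32.0f;
        ofSetColor(255);
        for (int i = 0; i < 32; i++) {
            float h = ofMap(bands[i], 0, 0.3f, 2, ofGetHeight() * 0.5f, true);
            ofDrawRectangle(i * w, ofGetHeight() - h, w - 2, h);
        }

        // upper-end channels -> finer little animations (dots whose size follows each band)
        for (int i = 24; i < 32; i++) {
            float r = ofMap(bands[i], 0, 0.1f, 0, 15, true);
            ofDrawCircle(ofGetWidth() * (i - 24 + 0.5f) / 8.0f, 100, r);
        }
    }
};

int main() {
    ofSetupOpenGL(1024, 768, OF_WINDOW);
    ofRunApp(new ofApp());
}
```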



This project has also led me to say an emphatic "yeah!" to an offer from the Manchester Girl Geeks: in July they're doing some sort of music generation/experimentation event, and I'm going to be live-coding some visuals for it. Most excitingly, some of the audio is built from sources like the Large Hadron Collider and something in deep space too. Should be ace.

Tons more to come on this.
