I’ve been thinking about Sonification (it has a definition on Wikipedia, even if it isn’t a real word) of data for some time. My first experiment used this blog: a hidden iframe on the main page opened a page on a web server running off my laptop, and these requests were then converted to audio and played out of the laptop’s speaker. I’m interested in the concept of listening to data in general, and web / internet data in particular is especially intriguing. Similarly, I’ve recently become inspired by autonomous artworks: things that do their own thing without intervention and, better still, do something entirely unpredictable.
My research has progressed, and now rather than using a simple PC speaker my software outputs MIDI information. That can then be interpreted by all manner of other software, or even hardware synthesisers, to actually turn the data into sound. I’ve also stopped using this blog as the data source; instead I’ve obtained a month’s worth of web server log data, which has given me about 7 million records to use as my data set. Because the software plays the data back at the rate it was collected, the amount of time it takes to “play” equals the amount of time the data took to collect. So the month’s worth of data I’m currently using would actually take a month to listen to. Here are a few excerpts from my current set up.
The “rules” that the software adheres to are as follows:
MIDI note is determined by summing the segments of the client’s IP address, then integer-dividing by 8. The reasoning behind this is that there are 128 possible MIDI notes, and the sum of the four IP address segments can be at most 4 × 255 = 1020; dividing by 8 therefore always yields a valid MIDI note in the 0–127 range.
Length of note is determined by the time between successive web requests. This means that in busy periods the software produces lots and lots of notes, whereas at quiet times (in the middle of the night) very few notes are played.
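The two rules above can be sketched in a few lines. This is my reconstruction (the actual software is in VB and emits MIDI; here the hypothetical helpers just compute note numbers and durations):

```python
from datetime import datetime

def ip_to_midi_note(ip: str) -> int:
    """Sum the four octets of an IPv4 address and integer-divide by 8.

    The sum is at most 4 * 255 = 1020, so the result is always a
    valid MIDI note in the 0-127 range.
    """
    return sum(int(octet) for octet in ip.split(".")) // 8

def note_length(prev_request: datetime, this_request: datetime) -> float:
    """Note duration in seconds = gap between consecutive requests."""
    return (this_request - prev_request).total_seconds()

# Example: (192 + 168 + 0 + 10) // 8 = 46
note = ip_to_midi_note("192.168.0.10")
# Two log entries four seconds apart -> a four-second note
length = note_length(datetime(2008, 1, 1, 0, 0, 0),
                     datetime(2008, 1, 1, 0, 0, 4))
```

In a quiet period the gaps (and so the notes) stretch out; in a busy period the notes come thick and fast, which matches what you hear in the excerpts.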
In time I’d like to develop the system further, exploring other parameters with which to modulate aspects of the synthesis. One idea is to look at the geographical location of the client and alter the sound accordingly in some way; this could work very well with a multi-speaker set up. Plugging the system into live log data would also be really exciting.
A further development would introduce a visual element to the software: illuminating various screen sections according to the IP address being processed, in the same way as the audio. I haven’t looked into that as yet, though! It would probably mean transferring the MIDI processing code from VB into Processing; no bad thing, methinks.
If you run a website, or put a web server online, it shouldn’t take long before you start getting hits. Most of the hits are from automated bots, but you get them all the same. Following on from my previous post, I thought it might be interesting to have a custom web server that produced sound directly from the HTTP requests it receives, whilst still delivering the HTML content to the requestor.
– Update: I’ve just hacked together a web server that will make my laptop beep every time there is a hit on it. So, if you’re reading this, make my PC beep by clicking here.
I’m going to explore the audification of data. It would be fantastic if I could get ‘inside’ a data set by audifying it. Daniel Cummerow’s work with algorithmic music is really fantastic and revealing, and working with mathematics is attractive. I may take maths as the starting point – it’s easy to transform maths into sound; they go hand in hand – but ultimately it would be nice to have some kind of more universal generator that can take in any database and, with minimal interference, produce a sound work.
One possible approach would be to use web activity as the data source.
How about a website that produces a tone? The frequency of the tone could be determined by the number of visitors – or some other factor that is affected by the visitation. Java… Get Sam to help!
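One way the visitor-count-to-tone mapping could work; this is a purely illustrative sketch, and the base frequency, step size, and cap are all numbers I’ve made up:

```python
def visitors_to_freq(visitors: int,
                     base_hz: float = 220.0,
                     step_hz: float = 10.0,
                     max_hz: float = 4000.0) -> float:
    """Map a visitor count to an audible frequency.

    Each visitor raises the tone by step_hz above a base pitch,
    capped at max_hz so a traffic spike stays listenable.
    """
    return min(base_hz + visitors * step_hz, max_hz)

# No visitors: a quiet 220 Hz drone; ten visitors: 320 Hz; a flood: capped
freqs = [visitors_to_freq(n) for n in (0, 10, 1_000_000)]
```

A logarithmic mapping (doubling the visitors raises the pitch by a fixed interval) might sound more musical, but the linear version is the simplest place to start.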
I’ve just returned from a trip to London, with mixed fortunes. I didn’t do quite what I wanted, but I’ve come back inspired. It’s culminated in my mind being full of art stuff that I want to do – at some point. Twitter is going to be my starting point, as a dataset to work with. Anyhow, this is the story.
I failed to go to a party at Lo Recordings, which Leo invited me to, which was annoying. As it happened I made it to Old Street and was waiting for a bus there when my mobile phone died, taking with it the address I was going to, the contact numbers of the people there, and any chance of finding the place. So I took the tube back to Fulham, where I was staying, effectively making a two-hour round trip to nowhere. Making matters worse, travelling between Fulham (west) and central & east London made it impossible for me to go and see the first ever Starting Teeth gig – another annoyance.
On the plus side, I briefly met both members of Starting Teeth and Nathan Fake in some bar on Brick Lane. And indeed, it was the first time I’d ever been to Brick Lane so that was cool too….
After various discussions, one with my tutor Jane Brake and one with superfly superstar Sam Jeffers, I’ve begun trying to further formalise my understanding of the implications of data – specifically the data generated by my digital artworks. Most of my practice so far has been fairly ‘happy go lucky’ in a lot of ways. Mostly I’ve been interested in creating things purely for the sake of creating them – and I’m more than happy to stand by that point of view. Even if one’s creative output doesn’t broach a political subject, or doesn’t directly evoke an intense emotional response in the audience, that does not intrinsically diminish its value. However, what I’ve finally realised is that a better understanding of some of the constructs I’m working with – the Web, the network effect, data, and people – will allow me to produce “better” work. At the very least, it can’t hurt!