I’ve been thinking about sonification (it has a definition on Wikipedia, even if it isn’t a real word) of data for some time. My first experiment used this blog: a hidden iframe on the main page opened a page on a web server running off my laptop, and these requests were then converted to audio and played out of the laptop’s speaker. I’m interested in the concept of listening to data in general, and web / internet data in particular is especially intriguing. Similarly, I’ve recently become inspired by autonomous artworks: things that do their own thing without intervention and, better still, do something entirely unpredictable.
My research has progressed, and rather than using a simple PC speaker my software now outputs MIDI information. That can then be interpreted by all manner of other software, or even hardware synthesisers, to actually turn the data into sound. I’ve also stopped using this blog as the data source; instead I’ve obtained a month’s worth of web server log data, which gives me about 7 million records to use as my data set. Because of the way the software processes the data, the amount of time it takes to “play” is equal to the amount of time the data took to collect. So the month’s worth of data I’m currently using would take a month to listen to. Here are a few excerpts from my current set up.
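As a sketch of that one-to-one playback idea, assuming the request timestamps have already been parsed out of the log (this is my own Python illustration, not the actual VB code):

```python
import datetime

def replay_gaps(timestamps):
    """Yield the number of seconds to wait before emitting each
    event, so that playback takes exactly as long as the data
    took to collect."""
    previous = None
    for ts in timestamps:
        yield 0.0 if previous is None else (ts - previous).total_seconds()
        previous = ts

# Three requests logged over five seconds replay over five seconds.
stamps = [datetime.datetime(2009, 1, 1, 0, 0, s) for s in (0, 2, 5)]
print(list(replay_gaps(stamps)))  # [0.0, 2.0, 3.0]
```

A real player would sleep for each gap before sending the corresponding MIDI event.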
The “rules” that the software adheres to are as follows:
- MIDI note is determined by summing the four segments of the client’s IP address, then dividing by 8. The reasoning behind this is that there are 128 possible MIDI notes, and the sum of the segments ranges from 0 to 1020 (four segments of 0–255 each), so integer division by 8 always yields a value between 0 and 127; this calculation will always provide a valid MIDI note.
- Length of note is determined by the length of time between web requests. This means that in busy periods the software produces lots and lots of notes, whereas at quiet times (in the middle of the night, say) very few notes are played.
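The two rules above can be sketched like this (a Python illustration of the arithmetic only; the real software is written in VB, and the note-length handling here is simplified):

```python
def midi_note(ip):
    """Sum the four segments of a dotted-quad IP address and
    integer-divide by 8. The sum ranges from 0 to 1020, so the
    result is always a valid MIDI note (0-127)."""
    return sum(int(part) for part in ip.split(".")) // 8

def note_length(gap_seconds):
    """Note length simply follows the gap until the next request:
    busy periods give short, dense notes; quiet periods give long,
    sparse ones. (Simplified: no upper cap on duration.)"""
    return gap_seconds

print(midi_note("192.168.0.1"))      # (192+168+0+1) // 8 = 45
print(midi_note("255.255.255.255"))  # 1020 // 8 = 127
```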
In time I’d like to develop the system further, exploring other parameters with which to modulate aspects of the synthesis. One idea is to look at the geographical location of the client and alter the sound accordingly in some way; this could work very well with a multi-speaker set up. Plugging the system into live log data would also be really exciting.
A further development would introduce a visual element to the software, illuminating various screen sections according to the IP address processing, just like the audio. I haven’t looked into that yet though! It would probably mean transferring the MIDI processing code from VB into Processing; no bad thing, methinks.