The following post was written on the 15th November but due to an unforeseen event that involved my laptop’s hard drive dying, I wasn’t able to post it on here until now. The reason for the delay also strangely ties in with the theme of this post…
OUR SECOND WORKSHOP:
We felt the need to have another workshop day before settling on the direction we want to take and what we would like to make for this collaboration. We decided to look at programming sensors in this second workshop, and it turned out to be a full-on day of fixing problems and getting things to function properly rather than experimenting with the technology.
Mike showed me what he had lying around in his workshop from previous projects and laid out what was, to me, a plethora of wires, transistors, things called breadboards (http://en.wikipedia.org/wiki/Breadboard), chips, sensors and other electronic bits. He demonstrated how these could be set up, wiring up a distance sensor and connecting it to an LED. He then showed me how to program it on the computer so that when something is near the sensor, the light comes on. We later swapped the LED for a motor, so that the motor would turn on instead. So far things were running pretty smoothly.
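For anyone curious what this looks like in practice, here is a minimal sketch of the idea in Arduino-flavoured C++. The pin numbers and the choice of an HC-SR04 ultrasonic sensor are my assumptions for illustration, not a record of Mike’s exact setup:

```cpp
// Turn on an LED when something comes near the distance sensor.
// Assumed parts: HC-SR04 ultrasonic sensor, LED on pin 13.
const int trigPin = 9;
const int echoPin = 10;
const int ledPin  = 13;  // swap this for a motor driver pin to run a motor instead

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(ledPin, OUTPUT);
}

void loop() {
  // Fire a 10-microsecond pulse and time the echo to estimate distance.
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);

  long duration = pulseIn(echoPin, HIGH);      // echo time in microseconds
  float distanceCm = duration * 0.034 / 2.0;   // speed of sound, there and back

  // Light the LED when something is within roughly 20 cm.
  digitalWrite(ledPin, distanceCm < 20 ? HIGH : LOW);
  delay(50);
}
```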
Looking at the code took me back to my programming days at university ages ago (I studied Computer Visualisation and Animation and did a year of Computer Science before that). Perhaps using this technology would enable me to merge programming into my practice and see it in a different light. Previously everything I did in computer graphics lived on the screen, and for creating artworks I usually prefer something more tactile, something “real” and not experienced only on a screen, hence my switch to drawing on paper. What Mike showed me here was different: it uses programming to control something physical, and that makes it far more interesting to me.
Previously we had discussed the idea of a mutable canvas, a drawing surface that alters depending on the location of the artist. We decided to quickly prototype a crude version of this by simply having the surface rotate when the artist is near a certain part of it.
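To make the rotating-surface behaviour concrete, a sketch along these lines would do it. Again the sensor, the transistor-driven motor pin and the 30 cm threshold are illustrative assumptions rather than our actual wiring:

```cpp
// Crude mutable canvas: spin the motor (and so the surface) while the
// artist is within range of the distance sensor.
const int trigPin  = 9;
const int echoPin  = 10;
const int motorPin = 5;  // PWM pin driving the motor through a transistor

void setup() {
  pinMode(trigPin, OUTPUT);
  pinMode(echoPin, INPUT);
  pinMode(motorPin, OUTPUT);
}

float readDistanceCm() {
  digitalWrite(trigPin, LOW);
  delayMicroseconds(2);
  digitalWrite(trigPin, HIGH);
  delayMicroseconds(10);
  digitalWrite(trigPin, LOW);
  long duration = pulseIn(echoPin, HIGH, 30000);  // give up after 30 ms
  return duration * 0.034 / 2.0;                  // 0 if nothing was detected
}

void loop() {
  float d = readDistanceCm();
  // Rotate the canvas at half speed while the artist is within ~30 cm.
  analogWrite(motorPin, (d > 0 && d < 30) ? 128 : 0);
  delay(100);
}
```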
We painted blackboard paint onto the surface so we could draw on it with chalk; I guess it retains the ephemeral theme we both like. Whilst waiting for the paint and glue to dry we looked at another sensor, the 3-axis accelerometer, which measures movement in three dimensions and is what the Wii controller uses. To demonstrate what it does, Mike hooked it up to an RGB LED so that it would change colour as the sensor moved in different directions. This was something Mike had done before, and he had code for it from a previous project. The thing about technology is that sometimes it simply decides not to work and you have no idea why, even though everything you did worked the last time. Unfortunately this happened to us, and finding the source of the problem took a huge amount of time, but luckily we did get it working in the end. Seeing the accelerometer made us think of wearable devices, perhaps something the artist could wear whilst drawing. The data could then generate another piece of work, adding another dimension to the drawing process.
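A rough sketch of that accelerometer-to-colour mapping is below, assuming an analogue 3-axis accelerometer such as the ADXL335 and a common-cathode RGB LED on three PWM pins; Mike’s original code from his previous project may well have looked quite different:

```cpp
// Map a 3-axis accelerometer onto an RGB LED so that tilting the sensor
// shifts the colour. Assumed parts: ADXL335 on A0-A2, RGB LED on 9/10/11.
const int xPin = A0, yPin = A1, zPin = A2;
const int rPin = 9, gPin = 10, bPin = 11;  // PWM-capable pins

void setup() {
  pinMode(rPin, OUTPUT);
  pinMode(gPin, OUTPUT);
  pinMode(bPin, OUTPUT);
}

void loop() {
  // Each axis reads roughly 260-400 on a 5 V ADC with a 3.3 V ADXL335;
  // map that range onto 0-255 brightness, one colour channel per axis.
  int r = map(analogRead(xPin), 260, 400, 0, 255);
  int g = map(analogRead(yPin), 260, 400, 0, 255);
  int b = map(analogRead(zPin), 260, 400, 0, 255);

  analogWrite(rPin, constrain(r, 0, 255));
  analogWrite(gPin, constrain(g, 0, 255));
  analogWrite(bPin, constrain(b, 0, 255));
  delay(20);
}
```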
Time passed, the paint and glue had dried, and we returned to our crude mutable canvas to attach the moving part to the motor and the distance sensor. I guess this workshop day was not a day for technology: what we had set up earlier with the distance sensor failed to work, and we ended up spending yet more time trying to figure out why. We checked the wiring and the code, tested the components individually, and everything seemed correct; it should have been working. Tired and frustrated, we decided to call it a day. I am sure we will get it to work once we feel less defeated by technology.
POST WORKSHOP REFLECTION:
Before the second workshop Mike and I did more research on drawbots and found more existing examples:
http://www.finkbuilt.com/blog/kids-art-bot/
http://42bots.com/competitions/project-2-line-following-bot-tests/
https://www.youtube.com/watch?v=Cb29RfdY1L0
We played with the idea of creating automated drawing tools in the last workshop and saw other examples at the Kinetica Art Fair, noticing how most of them are predetermined machines. Having also looked at the less predetermined drawing bots in the examples online, I wonder whether they would generate anything more interesting than an emulation of someone drawing random lines. We knew that to develop this idea further we would need to make the bots more advanced, and we agreed that making bots is not that interesting unless they interact with the artist in some way, perhaps responding to the lines the artist draws by adding to and/or erasing them.
Reflecting on both workshops and our discussions so far, I find myself more drawn to the direction of using sensors, perhaps picking up signals from the artist and doing something interesting with the data in real time, i.e. as the artist creates. It makes me think of translation: the conversion of one thing, such as movement, into something else. I like the idea of translation and will think more about it and share it with Mike. I am more drawn to this direction than to the bots, partly because I am unsure why we would add bots at all: why do we need automated drawing tools? And are we trying to develop tools in this collaboration, or to make an artwork, or both? For me, creating a work using the data/signals picked up during the process of creating would be more interesting. Despite the day’s technological failures, I am still happy to look further into using and programming sensors and to arrive at a more finalised idea of what to make.