Archive for November, 2009

After testing the accelerometer on the Arduino, I decided to move forward with the plan to take it wireless, since my project depends on freedom of movement for jumps and spins in order to generate visual effects on a screen. I spoke to a few people who had experience with wireless devices, and it was highly recommended that I go with XBee over BlueSMiRF, since the latter had caused some unknown/unexpected problems.

Ted Hayes, aka t3db0t, recommended that I get the following from SparkFun:

I also got a little AA battery holder to connect to the LilyPad.

Once all the items arrived, I downloaded and reviewed all the datasheets and manuals…and started to feel a heart attack coming on, so I went straight to Ted, who helped me cut to the chase in configuring the XBees.  I attached the XBee receiver to the USB board and wired up the XBee sender to the accelerometer on my Arduino as shown in the photo: the X value from the accelerometer was wired to the AD0 input on the XBee, Y to AD1, and Z to AD2; VREF on the XBee was wired to power; the 3V pin on the accelerometer was also wired to power; and the power came from the Arduino’s 3.3V pin.


We then went over to the Windows computer in the CommsLab room.  Ted logged in and assigned me a PAN ID on the in-house wiki.  Next we launched X-CTU to update the firmware and set the pins for the ADC (Analog to Digital Converter) on the XBee sender. The sender was assigned an ID of “1” and the receiver was set to “0”.  I then happily saw a green light, which indicated that the sender and receiver were in communication.  Fabulous.
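I didn’t write down the exact register values we entered, but a typical Series 1 ADC setup in X-CTU’s terminal looks roughly like the following. The PAN ID shown is a placeholder (ours came from the in-house wiki), and the sample rate is illustrative:

```
ATID 1234    PAN ID (placeholder value)
ATMY 1       16-bit address of the sender ("1")
ATDL 0       destination: the receiver ("0")
ATD0 2       pin AD0 -> analog input (X)
ATD1 2       pin AD1 -> analog input (Y)
ATD2 2       pin AD2 -> analog input (Z)
ATIR 14      send one I/O sample every 0x14 (20) ms
ATWR         write the settings to flash
```

The receiver gets the mirror-image addressing (ATMY 0, ATDL 1) and the same PAN ID.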

We then created a Processing sketch and pasted the XBee Packet Reader code into it.  This utilized serialEvent and parseData functions and drew a graph based on the accelerometer values.  After testing this and getting graph readings, I felt confident enough to move to the next stage: getting the XBee sender and accelerometer off the breadboard and soldered to the LilyPad.
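The parsing side can be sketched in plain Java (the language underneath Processing). This is an approximation of what that kind of code does with a Series 1 I/O-sample frame (API type 0x83), assuming only ADC0–ADC2 are enabled and no digital pins; it is not the original XBee Packet Reader code:

```java
// Parse one XBee Series 1 I/O-sample API frame (type 0x83), assuming three
// ADC channels (AD0-AD2) and no digital channels are enabled.
public class XBeeFrameParser {

    /** Returns the {X, Y, Z} ADC readings, or null if the packet is invalid. */
    public static int[] parseIOSample(int[] packet) {
        if (packet.length < 4 || packet[0] != 0x7E) return null;  // start delimiter
        int length = (packet[1] << 8) | packet[2];                // frame-data length
        if (packet.length != length + 4) return null;
        int sum = 0;                                              // data bytes + checksum
        for (int i = 3; i < packet.length; i++) sum += packet[i];
        if ((sum & 0xFF) != 0xFF) return null;                    // checksum must total 0xFF
        if (packet[3] != 0x83) return null;                       // 16-bit-address I/O sample
        // Frame data layout: type, srcHi, srcLo, RSSI, options,
        // sample count, channel-mask hi, channel-mask lo, then samples.
        int idx = 11;                                             // first ADC sample byte
        int[] adc = new int[3];
        for (int ch = 0; ch < 3; ch++) {
            adc[ch] = (packet[idx] << 8) | packet[idx + 1];       // 10-bit value, MSB first
            idx += 2;
        }
        return adc;
    }
}
```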

Step 1: Testing the accelerometer via Arduino and Processing.

I wired up the triple axis accelerometer (that I purchased from the NYU Computer Store) as shown in the following photo (sorry for the blurriness):

Accelerometer wiring

In the Arduino code, I created three int “sensorValue” variables and three int variables for the X, Y and Z input pins.  I created an “establishContact” function, which is called in setup; the loop reads the values from the X, Y and Z inputs via analogRead and assigns them to the corresponding “sensorValue.”

In Processing, I created a standard “serialEvent” function with the “Serial myPort” parameter that reads the incoming data as a string and creates an array of values corresponding to the X, Y and Z values.  For the first Processing sketch, I mapped the X, Y and Z values separately to red, green and blue variables (code example: “r = map(sensors[0], 340,415,1,255);”).  I used the openGL library, and in the draw loop I created a simple ellipse that rotates around the Z axis, with fill colors that change depending on the position of the accelerometer.  Check out the results in this video:

Accelerometer testing, version 1 from N Z on Vimeo.
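The map() calls above are just linear interpolation. Pulled out into plain Java, the color mapping looks like this; the 340–415 input range is the one from my sketch, and note that readings outside it would run past 1 or 255 unless constrained:

```java
// Processing's map() re-implemented in plain Java, applied to turning raw
// accelerometer readings into red/green/blue components.
public class SensorColor {

    /** Linear re-scaling of value from [start1, stop1] to [start2, stop2]. */
    static float map(float value, float start1, float stop1,
                     float start2, float stop2) {
        return start2 + (stop2 - start2) * ((value - start1) / (stop1 - start1));
    }

    /** X, Y, Z readings become red, green, blue, as in the sketch. */
    static float[] toRGB(int[] sensors) {
        float r = map(sensors[0], 340, 415, 1, 255);
        float g = map(sensors[1], 340, 415, 1, 255);
        float b = map(sensors[2], 340, 415, 1, 255);
        return new float[] { r, g, b };
    }
}
```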

I ended up creating another Processing sketch because I wanted to see more directly how the motions affected an object.  Using openGL again, I simply created a 3D box, rotated slightly on the Y axis, and mapped the X, Y and Z values to the X, Y and Z coordinates of the box.   Here is the sketch:

Accelerometer testing, version 2 from N Z on Vimeo.

I spent the weekend of November 7th agonizing over ideas for a final project.  For ICM, I entertained the idea of doing more with the butterfly movements and morphing the butterfly into other forms (the Pcomp part involved either a balance board or wand), but then I started to think along the lines of producing something that could be applied to one of two projects that I’m working on outside of ITP: a documentary on adult competitive figure skaters and a website related to vegan raw food preparation.  I thought of games and data visualizations (both of which I ruled out for now but might try to do over the winter break).  I ended up settling on an idea related to skating: namely the creation of a choreography tool for figure skaters.

The short and free/long programs that figure skaters have to perform in tests and competitions are judged in a number of ways, one of which involves coverage of the ice (i.e. making the best use of the dimensions of the rink*).   I believe it would be helpful for skaters, during the choreography phase of putting a program together, to see a visual representation of their choreography and patterns on the ice with indications of all the required elements (spins, jumps, footwork, etc).  This could be rendered after a skater is videotaped or while they are skating – the output would be a Processing sketch that could later be played back to the music.

For ITP, I will tweak this idea to enable a person to create a “digital painting” based on their movements.  The person will put on a belt or vest containing a wireless accelerometer that will measure movements like spinning and jumping, each of which will be rendered distinctly in a Processing sketch.  They will perform their movements in front of a camera – their position within the space will be mapped to x and y positions of an object in a Processing sketch.   There is more that I’d like to incorporate in terms of sensors (heart-rate monitor) and Processing effects (adding color and style palettes and sound) but I’ll elaborate on those later, if I’m able to get what I’ve already mentioned done in time.

I’ve already begun to look at examples and log Processing effects that I’d like to incorporate for tracings, spins and jumps and have purchased and tested an accelerometer.  I’ll detail progress in subsequent posts.

* samples of required patterns for “moves in the field” tests


I created a form in PHP that utilized a simple text box, radio button and checkbox; when the user clicks the “submit” button, it “posts” the data to another PHP form and simultaneously writes to a “data.txt” file in the “data” folder.  (I did not use “get” because it was drilled into me, when I took my webdev class in the spring, never to use it because of security problems.)  I wanted to enable the user to create an avatar, and because I’m working on a documentary on figure skaters, I wanted the avatar to be a skater.  I started to draw a body and outfit and quickly realized that it would take way too long for this assignment (plus I’m not good at drawing), so I chose some photos of skaters online instead.  The Processing sketch loads the “data.txt” file and then splits the data into individual strings – this part is working fine now, thanks to help from Dan.  I then created buttons with conditional statements to map the user input, based on gender and whether the user chooses “medal”, to the appropriate image (there are four options).  This works fine locally but isn’t updating properly in the browser…I’ll aim to troubleshoot this tomorrow.   CLICK HERE to go to the PHP form.
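The selection logic itself is tiny: gender plus the “medal” checkbox picks one of four images. A sketch of that mapping in plain Java (file names and method names here are illustrative, not the actual sketch’s code):

```java
// Avatar selection: two boolean form inputs index into four skater images.
public class AvatarChooser {

    static final String[] IMAGES = {
        "skater_f.jpg",        // female, no medal
        "skater_f_medal.jpg",  // female, medal
        "skater_m.jpg",        // male, no medal
        "skater_m_medal.jpg"   // male, medal
    };

    /** Maps the two form inputs to an index into IMAGES. */
    static int imageIndex(boolean isMale, boolean hasMedal) {
        return (isMale ? 2 : 0) + (hasMedal ? 1 : 0);
    }
}
```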

The media controller project that I worked on with Baowen Wong, aka Bo, was inspired by our mutual interest in working with images and my appreciation of the 3D animated movie Coraline.  I latched onto the concept of the tunnel from the film as a portal to a virtual world where “reality” was not as it initially appeared.  I wanted users to look through a “tunnel” and see, each time, a different image of characters from the film looking back at them.  I also wanted users to engage with the tunnel as an object and use it to change what they saw.  In this prototype, if the user moves the tunnel to the right, the image’s transparency changes to reveal the “lost souls” at the core of the movie.  If the user pushes the tunnel into the screen, the pixels of the image push back and explode toward the viewer, which simulates an effect towards the end of the movie when Coraline’s alternate dream reality becomes more and more nightmarish and her environment begins to visually dissolve in pieces before her eyes.  Check out a demo that was recorded in the Pcomp lab the day before we presented it in class:

Pcomp Week 9: Media Controller Project from N Z on Vimeo.

How it works:

Our prototype tunnel had just two sides (so that people could see the screen effects) and was made simply of black poster board attached to a computer monitor that Bo checked out from the equipment room (see note below on the tunnel we originally made).  We made a simple frame for the viewer end of the tunnel, covered it with gaffer tape, and attached a photocell sensor to the outside of the frame, with the idea that the viewer’s head would block it and thus trigger the image change on the screen.  This didn’t quite work, so the viewer had to cover the photocell with their hands – which is still somewhat natural, since one often cups one’s hands around one’s eyes to block out light when peering through some sort of tunnel or hole.

We also attached a flex sensor to the left side of the poster-board flap (out of frame, but indicated by the red and green wires; the blue/yellow ones were attached to the photocell).  The values were mapped in Processing such that when the flap was moved to the right, the current image’s transparency setting would drop and reveal the “lost soul” photo already loaded underneath.  The flex sensor also measured the change in values when that left flap bent toward the monitor, and that variable affected the “z” value of the “explode pixels” function in Processing.

The overall code was based on the serial labs, which we had just completed.  The three variables from the two analog sensors were set up as follows in the “serialEvent” section:

  • flexbendin = map(sensors[0], 260,200,0,width);
  • flexright = map(sensors[0], 290,350,0,100);
  • photocell = map(sensors[1], 170,630,50,100);

The “flexbendin” variable was utilized in the “explode pixels” function in this line:

  • float z = (flexbendin/(float)width) * brightness(images[imageIndex].pixels[loc])- 100.0;
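Pulled out of the sketch, that depth calculation is a small pure function: the brighter the pixel, the farther it flies out, scaled by how far the flap is bent (flexbendin runs from 0 up to the sketch width):

```java
// The per-pixel "explode" depth from the sketch, as a standalone function.
// flexbendin is the mapped flex-sensor value (0..width); brightness is the
// pixel's brightness (0..255), as returned by Processing's brightness().
public class ExplodeDepth {

    static float explodeZ(float flexbendin, float width, float brightness) {
        return (flexbendin / width) * brightness - 100.0f;
    }
}
```

With the flap at rest every pixel sits at -100; at full bend, a fully bright pixel pushes out to +155.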

The photocell variable was called in draw, along with a simple function called “changeImage”:

  • if (photocell<50){viewing = true;} else {viewing = false;}
  • if (viewing != lastViewing){if (viewing == false) changeImage();}
  • lastViewing = viewing;
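The point of the lastViewing comparison is edge detection: changeImage() fires once, when the hands come off the photocell, rather than on every frame the sensor is covered. A minimal stand-alone version of that pattern, with changeImage() stubbed out as a counter:

```java
// Edge-detection pattern from the draw loop: trigger changeImage() only on
// the transition from "viewing" (photocell covered) back to "not viewing".
public class ViewerTrigger {

    boolean lastViewing = false;
    int imageIndex = 0;   // stands in for the sketch's image counter

    /** Call once per frame; covered readings drop the mapped value below 50. */
    void update(float photocell) {
        boolean viewing = photocell < 50;
        if (viewing != lastViewing && !viewing) changeImage();
        lastViewing = viewing;
    }

    void changeImage() { imageIndex++; }   // the real one swaps the loaded image
}
```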

Corey Menscher helped me troubleshoot the boolean button code and how that turned the effects on and off.  The other two variables also utilized their own boolean buttons to turn on and off the transparency/tint effect and the exploding pixels effect.

Bo worked on a whole sound element using the Minim sound library for Processing.  She chose a variety of sounds from the library that matched the mood of the images, and we ultimately chose to integrate just one haunting sound to go with the appearance of the lost souls.  The “song.play();” and “song.close();” calls were incorporated into the “flexright” conditional statement.  All of the code was correct, but the sound didn’t play when we presented the video.  I learned afterward that Minim doesn’t work properly with the version of Processing that I’m using (version 0135 – I’m using this because of an unknown Java runtime error that comes up with the most current version of the application on my computer; Dan Shiffman knows about this but was not able to troubleshoot it.  I need to upgrade my OS from Tiger to at least Leopard…but I’ll do that during winter break.)

The H-Bridge Lab builds on the Transistor Lab by inserting an H-bridge on the breadboard so that the direction of the motor can be controlled by a switch.  Note the addition of a capacitor.  My little motor seemed to like this wiring better than the transistor-only one.


Pcomp Week 8 Lab: HBridge from N Z on Vimeo.

The Transistor Lab shows us how to control a higher-current DC load, like a motor, from the microcontroller.  The key ingredient is the TIP120 transistor.  I used a potentiometer to control the small DC motor that came in our kit; I wired it up as shown, and below is the video:


Pcomp Week 8 Lab: Transistor from N Z on Vimeo.