Category: S4: SPRING 2011


For my 3DSaV final, I partnered up with artist/composer/designer Matt Ganucheau. We wanted to create a dynamic set of 3D shapes that would move according to our hand movements and also according to our voices.

We decided we wanted to make use of the ofBox example (for creating the 3D shapes) that is part of the beta version of openFrameworks 007.  We also needed the skeleton-tracking code, particularly the “Air Accordion” example that I’ve been working with, which I’d gotten from Eric Mika.  So the first step in our project required merging code between versions.  Getting OpenNI to work with that ofBox example was quite difficult, because there was a code/file problem with that OF 007 project example.  Zach helped me troubleshoot this by copying the project file from an example that did work (the “easyCam” one) into the folder for ofBox, then renaming the project file and deleting the bad one.  Kyle helped me troubleshoot the OpenNI part: I had to copy the library (the “lib” folder inside the “openni” folder) into the bin/data folder inside our new OF 007 project folder.   The Air Accordion example uses the “ofx3dGraphics” files that Zach wrote for our class, so I had to copy the .cpp and .h files, but because OF 007 redefines some of those functions, I had to change all references of ofPoint to ofVec3f and remove all references to ofLine and ofSphere within the ofx3dGraphics files.
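To illustrate the kind of change this required, here's a minimal sketch of a hypothetical helper inside ofx3dGraphics (the name drawLine3d and its body are my own illustration, not the actual addon code): the ofPoint arguments become ofVec3f, and the drawing goes through raw OpenGL calls so the addon no longer collides with the ofLine and ofSphere that OF 007 defines in its core.

    #include "ofMain.h"

    // Hypothetical ofx3dGraphics-style helper, updated for OF 007.
    // Before: void drawLine3d(ofPoint a, ofPoint b);  (clashed with core ofLine)
    // After: take ofVec3f and draw directly with OpenGL.
    void drawLine3d(const ofVec3f & a, const ofVec3f & b){
        glBegin(GL_LINES);
        glVertex3f(a.x, a.y, a.z);
        glVertex3f(b.x, b.y, b.z);
        glEnd();
    }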

Once we got our project file running in OF 007, we changed ofBox to ofSphere; removed the line width and the image texture; changed all of the variables for movement speed, cloud size, number of spheres, and spacing; added smoothing formulas to the movement speed and spacing; and added OpenGL lighting and blending. Matt brought the ofxOsc addon into the project to make use of his MaxMSP patch, which allows microphone input to be recorded and used in the code. Matt will cover the details of the audio code additions in his blog post and will add a cleaned-up version of the code to his github account. Here’s the zip file of the code that we used for our video documentation (because of file size limitations on this blog, I had to delete the openni folder in the bin/data folder, as well as the following addon folders: ofxOpenCv, ofxOpenNI, ofxOsc, ofxVectorGraphics, ofxVectorMath): MG-NZ_Final_3-30_blog
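As a rough illustration of the smoothing and GL state changes described above, here's a minimal sketch; the function and variable names (drawSphereCloud, smoothedSize, targetSize) are mine, not the actual project code.

    #include "ofMain.h"

    // Illustrative only: ease a value toward its target (simple low-pass
    // smoothing) and draw the sphere cloud with lighting and additive blending.
    void drawSphereCloud(vector<ofVec3f> & spheres, float & smoothedSize, float targetSize){
        float smoothing = 0.95f;   // closer to 1.0 = slower, smoother response
        smoothedSize = smoothedSize * smoothing + targetSize * (1.0f - smoothing);

        glEnable(GL_DEPTH_TEST);
        glEnable(GL_LIGHTING);
        glEnable(GL_LIGHT0);
        glEnable(GL_BLEND);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE);

        for(int i = 0; i < (int)spheres.size(); i++){
            ofSphere(spheres[i].x, spheres[i].y, spheres[i].z, smoothedSize);
        }

        glDisable(GL_LIGHTING);
        glDisable(GL_BLEND);
    }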

While on Spring Break vacation, I had the pleasure of swimming in a bioluminescent bay, and I’ve been wanting to recreate that experience, since it’s extremely difficult to photograph or videotape it. See the video below for a first attempt to visualize this (the video is a screen capture of an openFrameworks application with input from a Kinect). The video also served as a test for my final project…

For the final project in my 3DSaV class, I partnered with Matt Ganucheau, and he said he wanted to work with particles. So while he was working on the sound input code, I merged the “Binned Particle System” example from class with the “Air Accordion” example that Eric Mika worked on (which incorporates ofxOpenNI and other addons). I mapped the “addRepulsionForce” to my hands, and drew the depth image so you can see the color changes that correlate to my proximity to the Kinect. The code is here (you’ll have to copy the “openni” folder, which in turn contains the “config” and “lib” folders, into the “data” folder inside the “bin” folder of the project in order to run this): SwimmingThroughParticles
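Roughly, the hand-to-particle mapping looks like the snippet below. The addRepulsionForce call follows the class's binned particle system example, but the exact argument list, the radius/strength values, and the handPositions vector are assumptions on my part.

    // In testApp::update(): push particles away from each tracked hand.
    // handPositions holds the hands' projected (screen-space) coordinates
    // from ofxOpenNI; the addRepulsionForce signature follows the class example.
    for(int i = 0; i < (int)handPositions.size(); i++){
        particleSystem.addRepulsionForce(handPositions[i].x, handPositions[i].y,
                                         100,    // radius in pixels (assumed)
                                         4.0f);  // strength (assumed)
    }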

Swimming through Particles from Nisma Z on Vimeo.

In anticipation of a beach-based Spring Break, and as a test for a final project idea, I created this openFrameworks / Kinect application that let me play with and stretch/compress a beach photo between my hands and over my body. The code is based on the “Air Accordion” example that I got from Eric Mika. I loaded one of my old beach photos and mapped its x/y position and width/height to the x/y positions of my hands.
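The mapping itself is only a few lines; here's a sketch of the idea (leftHand, rightHand, and beachPhoto are illustrative names, not the exact variables from the project):

    #include "ofMain.h"

    ofVec3f leftHand, rightHand;   // projected hand positions from the skeleton tracker
    ofImage beachPhoto;            // loaded once in setup()

    // Draw the photo so it stretches/compresses between the two hands.
    void drawBeachPhoto(){
        float x = MIN(leftHand.x, rightHand.x);
        float y = MIN(leftHand.y, rightHand.y);
        float w = fabs(rightHand.x - leftHand.x);
        float h = fabs(rightHand.y - leftHand.y);
        beachPhoto.draw(x, y, w, h);
    }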

I wanted to be able to measure the distance between points in 3D rather than just 2D, so Eric wrote a function called “dist3D(ofPoint a, ofPoint b)” based on a formula I found online. I used this in the variables I created for distance3D, lastDistance3D, and velocity3D, and then used the velocity3D variable to try to control the playback of a sound file. I also wanted to be able to rotate the photo in 3D and “extrude” the pixels, but I was not able to get this working. I also tried drawing a box and a sphere with the photo texture-mapped onto them, but those attempts didn’t work either. I deleted all the commented-out code that didn’t work, so the file inserted into this post is clean. (You’ll have to copy the “openni” folder, which in turn contains the “config” and “lib” folders, into the “data” folder inside the “bin” folder of the project in order to run this.) GoingToTheBeach
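For reference, the 3D distance is just the Euclidean formula applied to all three axes; this is my reconstruction of what dist3D computes, not Eric's exact code.

    #include "ofMain.h"

    // Euclidean distance between two 3D points.
    float dist3D(ofPoint a, ofPoint b){
        float dx = b.x - a.x;
        float dy = b.y - a.y;
        float dz = b.z - a.z;
        return sqrtf(dx * dx + dy * dy + dz * dz);
    }

    // In update(): velocity3D measures how quickly the two points approach or separate.
    // distance3D     = dist3D(leftHand, rightHand);
    // velocity3D     = distance3D - lastDistance3D;
    // lastDistance3D = distance3D;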

Last week in class we learned about three types of shaders: fragment, vertex, and geometry. Zach and Kyle created an example called “Depth Visualizer DOF (Depth of Field)”, and today I worked with Matt Ganucheau to experiment with applying shaders to the depth visualizer. We found this helpful reference online, and tried out different shaders in the fragment and vertex files.  To keep the openFrameworks example simple, we just changed the existing “DOFCloud.frag” and “DOFCloud.vert” files rather than making new ones and then referencing them.  In order to apply the shader to the point cloud in 3D, we needed to map the “varying vec4” variable to “gl_Position” in the vertex file, since gl_Position is the gl_Vertex point multiplied by the 3D “gl_ModelViewProjectionMatrix”.

I wanted to make the shader change in relation to the depth, so Zach said I needed to create a “zscale” variable in the fragment file. I tried various formulas with this, but wasn’t getting the results that I wanted. For now, the shader changes color based on the size of the sine wave that I applied to the zscale variable. Check out the attached code. In the testApp file, I commented out the “glBlendFunc(GL_SRC_ALPHA, GL_ONE);” line in order to better see what the shader is doing to the point cloud. (Also of note in the code: in a previous assignment, I wrote some simple code to test whether a cube drawn to the screen has been “hit” by a 3D object detected by the Kinect; all of this is commented out in order to focus on the shader.)
NZ-MG_DiscoShader-DV
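The depth-driven zscale and the sine-wave coloring live in the GLSL files themselves, but for context, here's a rough sketch of the C++ side that wraps the point-cloud draw in the shader; the dofShader name and the exact loading call are illustrative rather than copied from the example.

    // In testApp.h (illustrative): ofShader dofShader;

    // setup(): load the existing vertex/fragment pair by their shared basename.
    dofShader.load("DOFCloud");            // picks up DOFCloud.vert + DOFCloud.frag

    // draw():
    // glBlendFunc(GL_SRC_ALPHA, GL_ONE);  // commented out so the shader's effect is easier to see
    dofShader.begin();
    drawPointCloud();                      // every vertex runs through DOFCloud.vert,
                                           // every fragment through DOFCloud.frag
    dofShader.end();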

Our assignment was to create a 3D region within the scene and activate it. Figuring out the math for the x/y/z points in relation to the virtual camera took some experimenting, so I simply created a cube in this openFrameworks example (based on the DepthVisualizer example that Zach and Kyle provided) that can be activated by anyone detected within the near/far threshold via the Kinect.
DepthVisualizerActivateCube
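A rough sketch of the hit test (the cube bounds, point names, and the idea of passing in the point cloud as a vector are all illustrative):

    #include "ofMain.h"

    // Returns true if any Kinect point (already filtered by the near/far
    // threshold) falls inside an axis-aligned cube in point-cloud space.
    bool cubeIsActivated(const vector<ofVec3f> & cloudPoints,
                         const ofVec3f & cubeMin, const ofVec3f & cubeMax){
        for(int i = 0; i < (int)cloudPoints.size(); i++){
            const ofVec3f & p = cloudPoints[i];
            if(p.x > cubeMin.x && p.x < cubeMax.x &&
               p.y > cubeMin.y && p.y < cubeMax.y &&
               p.z > cubeMin.z && p.z < cubeMax.z){
                return true;   // someone is inside the cube
            }
        }
        return false;
    }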

This openFrameworks project is based on the Depth Visualizer example that Kyle gave us.  Our assignment was to create a bounding box in 3D around our target objects, along with centroids, similar to the Computer Vision example with centroids that Kyle also provided.

Zach explained that every pixel of the “depthImage” in the Depth Visualizer example represents an x, y and z value, and that we just needed to iterate through all the pixels, add up the x, y and z values, and divide by the number of pixels to get the average position, which is the centroid.

Mike Knuepfel helped me with the creation of the bounding box, as well as the blob centroid and bounding box center.  All of this is handled in the “drawPointCloud” function. We created a boolean called “firstTime” (for the first pass through the pixels contained within cameraHeight and cameraWidth) and set it to true.  We also created local longs for “totalX”, “totalY”, and “totalZ” (all set to 0) and “pointTotal” (set to 1).  Within the pixel loop, we created a local int for the “z” value, equal to “depthPixels[i]”.  We then added each pixel’s x, y, and z values to totalX, totalY, and totalZ and incremented pointTotal.  An “if” statement followed, setting the initial min/max values for x, y, and z and changing the firstTime boolean to false.  A series of “if” statements then set the min and max values for each of x, y, and z.  After the “glEnd();”, we set the centroid x, y, z values to the total for each divided by pointTotal, and also computed the centers of the bounding box (each min value plus the difference between the max and min divided by 2).  We then drew the centroid (I colored mine blue) and the bounding box center (colored red) with GL_POINTS.  Finally, we drew the lines of the bounding box (actually a “boxoid” because this is 3D) by first drawing the front and back planes and then drawing the sides, all with GL_LINES.  (Eric Mika mentioned afterward that we could use “glutWireCube” for this.)
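Putting that sequence together, here is a condensed sketch of the centroid / bounding-box pass in drawPointCloud. The variable names follow the description above, but this is a reconstruction rather than the exact project code (for instance, skipping zero-depth pixels is my own addition).

    bool firstTime = true;
    long totalX = 0, totalY = 0, totalZ = 0, pointTotal = 1;
    int minX = 0, minY = 0, minZ = 0, maxX = 0, maxY = 0, maxZ = 0;

    glBegin(GL_POINTS);
    for(int y = 0; y < cameraHeight; y++){
        for(int x = 0; x < cameraWidth; x++){
            int i = y * cameraWidth + x;
            int z = depthPixels[i];
            if(z == 0) continue;                 // skip pixels with no depth reading

            glVertex3f(x, y, z);

            totalX += x;  totalY += y;  totalZ += z;  pointTotal++;

            if(firstTime){                       // seed the min/max with the first point
                minX = maxX = x;  minY = maxY = y;  minZ = maxZ = z;
                firstTime = false;
            }
            if(x < minX) minX = x;  if(x > maxX) maxX = x;
            if(y < minY) minY = y;  if(y > maxY) maxY = y;
            if(z < minZ) minZ = z;  if(z > maxZ) maxZ = z;
        }
    }
    glEnd();

    // centroid = average position; box center = midpoint of the min/max extents
    float centroidX = totalX / (float)pointTotal;
    float centroidY = totalY / (float)pointTotal;
    float centroidZ = totalZ / (float)pointTotal;
    float centerX = minX + (maxX - minX) / 2.0f;
    float centerY = minY + (maxY - minY) / 2.0f;
    float centerZ = minZ + (maxZ - minZ) / 2.0f;

    // draw the centroid (blue) and the bounding box center (red)
    glPointSize(10);
    glBegin(GL_POINTS);
    glColor3f(0, 0, 1);  glVertex3f(centroidX, centroidY, centroidZ);
    glColor3f(1, 0, 0);  glVertex3f(centerX, centerY, centerZ);
    glEnd();

    // ...then the front plane, back plane, and connecting sides of the box
    // with GL_LINES (or a scaled/translated glutWireCube).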

The Depth Visualizer example draws both the “depthImage” and the “pointCloud” to the screen, which can be distracting to look at. So to simplify, I commented out the drawing of the depthImage and adjusted the “ofTranslate” code to center the drawing of the pointCloud.

I wanted to spice it up, so I added the project files from the “opticalFlow with Lines” example and drew that to the screen in red.  I also created variables for the differences between the centroids and the bounding box centers and took their absolute values.  I then based the rotation of the pointCloud on these differences, so that it would dynamically change in relation to the movement of the subject being tracked.  I set the near/far threshold of the pointCloud via the controlPanel and then made the recording embedded below. Here’s the code: DepthVisualizer_BoundingBoxHW
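The difference-driven rotation is only a few lines; here's a sketch (the 2.0 scale factor is illustrative, not the value from the project):

    // Rotate the point cloud based on how far the blob centroid has drifted
    // from the bounding-box center.
    float diffX = fabs(centroidX - centerX);
    float diffY = fabs(centroidY - centerY);

    ofPushMatrix();
    ofRotateX(diffY * 2.0);   // vertical drift tilts the cloud
    ofRotateY(diffX * 2.0);   // horizontal drift spins it
    drawPointCloud();
    ofPopMatrix();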

It’s one thing to question and challenge existing conventions in art, and an entirely different thing to form a whole new art movement that was “a violent and cynical cry which displayed our sense of rebellion, our deep-rooted disgust, our haughty contempt for vulgarity, for academic and pedantic mediocrity, for the fanatical worship of all that is old and worm-eaten.”  I found much of the intensely passionate language in the Technical Manifesto of Futurist Painting amusing, even though some of the words used in the numbered manifesto section remind me of Tea Party lingo: “despised”, “tyranny”, “demolish”, “harmful”, “madman”, “destroy”.  I also found many statements paradoxical in that they were rebelling against the rules of art while insisting on dogmatic new rules, for instance: “Divisionism, for the modern painter, must be an innate complementariness which we declare to be essential and necessary.”  Overall, I think this manifesto placed too much emphasis on what the artists were fighting against (demanding, for instance, the “total suppression of the nude in painting”) and not enough on what they wanted to create.

Of course there are many statements and ideas in this manifesto that I find inspiring and that I want to integrate into my work: that humans are not opaque and should be represented as part of a larger environment; that non-humans and inanimate objects are worthy of the same degree of representation in art; that everything we see with our eyes is in constant flux; and that there is a “universal vibration” that we’re all part of.  This phrase, about the light and colors of human skin, is beautiful: “green, blue and violet dance upon it with untold charms, voluptuous and caressing”.  It reminds me of Impressionism in 19th-century European art, along with the ways in which lighting can transform the human image on video or film.  In terms of non-humans being worthy of artistic representation, I certainly feel this way about the hermit crabs that I have been documenting; they have transformed in my eyes from vivarium bio-sensors to fascinating creatures with distinct personalities and behaviors I can relate to (sharing, watching out for each other, exploring, and even showing off).  “On account of the persistency of an image upon the retina, moving objects constantly multiply themselves; their form changes like rapid vibrations, in their mad career.”  The truth of our ever-changing reality makes me think of kinetic sculptures, video art, and persistence-of-vision displays, all of which I’m interested in exploring and combining.  Lastly, the universal vibration makes me think of “new age” artwork that attempts to depict energy and divine forces.  I tried to create “energy ripples” in video last year but didn’t quite end up with the effect I wanted.  “Your eyes, accustomed to semi-darkness, will soon open to more radiant visions of light.”  Somewhat cheesy and presumptuous, but the idea of creating work that helps others peel away the filters of expectation and preconceived notions from their eyes pushes me forward.

Our first assignment for the 3D Sensing and Visualization class was to make our own 3D scanner. I was in a group with Kevin, Zach, and Frankie. Kevin wanted to try a structured light hack, so we created a webcam that only sees infrared light (we took apart a webcam and replaced the IR-blocking filter over the lens with an exposed piece of film to let only IR light through), and we created an IR light source (three IR LEDs wired together).
Kevin wrote the code in Processing (_3dsav_week1) and created the video embedded below: