Archive for February, 2011

Our assignment was to create a 3D region within the scene and activate it. Figuring out the math for the x/y/z points in relation to the virtual camera took some experimenting, so I simply created a cube in this openFrameworks example (based on the DepthVisualizer example that Zach and Kyle provided) that could be activated by anyone the Kinect detected within the near/far threshold.

This openFrameworks project is based on the Depth Visualizer example that Kyle gave us. Our assignment was to create a bounding box in 3D around our target objects, along with centroids, in a similar manner to the Computer Vision centroids example that Kyle also provided.

Zach explained that every pixel of the “depthImage” in the Depth Visualizer example represents an x, y and z value, and that we just needed to iterate through all the pixels, add up the x, y and z values, and divide each total by the number of pixels; that average is the centroid.

Mike Knuepfel helped me with the creation of the bounding box, as well as the blob centroid and bounding box center. All of this is handled in the “drawPointCloud” function. We created a boolean called “firstTime” [for iterating through the pixels contained within cameraHeight and cameraWidth] and set it to “true”. We also created local longs for “totalX”, “totalY” and “totalZ” [all set to 0] and “pointTotal” [set to 1]. Within the pixel loop, we created a local int for the “z” value, equal to “depthPixels[i]”. Then we added the x, y and z values to totalX, totalY and totalZ and incremented pointTotal. An “if” statement followed, setting the initial min/max values for x, y and z and changing the firstTime boolean to false. A series of “if” statements then updates the min and max values for each of x, y and z.

After the “glEnd();”, we set the centroid x, y and z values to the total for each, divided by pointTotal, and also the centers of the bounding box (the min value plus the difference between the max and min values, divided by 2). We then drew the centroid (I colored mine blue) and the bounding box center (colored red) with GL_POINTS. Finally, we drew the lines of the bounding box (actually a “boxoid”, because this is 3D) by first drawing the front and back planes and then the sides, all with GL_LINES. (Eric Mica mentioned afterward that we could use “glutWireCube” for this.)

The Depth Visualizer example draws both the “depthImage” and the “pointCloud” to the screen, which can be distracting to look at. So to simplify, I commented out the drawing of the depthImage and adjusted the “ofTranslate” code to center the drawing of the pointCloud.

I wanted to spice it up, so I added the project files from the “opticalFlow with Lines” example and drew that to the screen in red. I also created variables for the differences between the centroids and the bounding box centers and took their absolute values. I then based the rotation of the pointCloud on these differences, so that it would change dynamically in relation to the movement of the subject being tracked. Here’s the code: DepthVisualizer_BoundingBoxHW I set the near/far threshold of the pointCloud via the controlPanel and then made this recording:

It’s one thing to question and challenge existing conventions in art, and an entirely different thing to form a whole new art movement that was “a violent and cynical cry which displayed our sense of rebellion, our deep-rooted disgust, our haughty contempt for vulgarity, for academic and pedantic mediocrity, for the fanatical worship of all that is old and worm-eaten.” I found much of the intensely passionate language in the Technical Manifesto of Futurist Painting amusing, even though some of the words they used in the numbered manifesto section remind me of Tea Party lingo: “despised”, “tyranny”, “demolish”, “harmful”, “madman”, “destroy”. I also found many statements paradoxical in that they were rebelling against the rules of art while insisting on dogmatic new rules, for instance: “Divisionism, for the modern painter, must be an innate complementariness which we declare to be essential and necessary.” Overall, I think this manifesto placed too much emphasis on what the artists were fighting against (demanding, for instance, the “total suppression of the nude in painting”) and not enough on what they wanted to create.

Of course there are many statements and ideas in this manifesto that I find inspiring and that I want to integrate in my work: that humans are not opaque and should be represented as part of a larger environment; that non-humans or inanimate objects are worthy of the same degree of representation in art; that everything we see with our eyes is in constant flux; and that there is a “universal vibration” that we’re all part of. This phrase is beautiful, in relation to the light and colors of human skin: “green, blue and violet dance upon it with untold charms, voluptuous and caressing”. It reminds me of Impressionism in 19th-century European art, along with the ways in which lighting can transform the human image on video or film. In terms of non-humans being worthy of artistic representation, I certainly feel this way about the hermit crabs that I have been documenting; they have transformed in my eyes from vivarium bio-sensors to fascinating creatures with distinct personalities and behaviors I can relate to (sharing, watching out for each other, exploring, and even showing off).

“On account of the persistency of an image upon the retina, moving objects constantly multiply themselves; their form changes like rapid vibrations, in their mad career.” The truth of our ever-changing reality makes me think of kinetic sculptures, video art, and persistence-of-vision displays, all of which I’m interested in exploring and combining. Lastly, the universal vibration makes me think of “new age” artwork that attempts to depict energy and divine forces. I tried to create “energy ripples” in video last year but didn’t quite end up with the effect I wanted. “Your eyes, accustomed to semi-darkness, will soon open to more radiant visions of light.” Somewhat cheesy and presumptuous, but the idea of creating work that helps others peel away filters of expectations and preconceived notions from their eyes pushes me forward.

Our first assignment for the 3D Sensing and Visualization class was to make our own 3D scanner. I was in a group with Kevin, Zach and Frankie. Kevin wanted to try a structured light hack, so we created a webcam that only sees infrared light (we took apart a webcam and replaced the IR-blocking filter over the lens with an exposed piece of film, to let only IR light through), and we created an IR light source (we wired 3 IR LEDs together).
Kevin wrote the code in Processing: _3dsav_week1 and created the video embedded below: