Archive for March, 2011


For my 3DSaV final, I partnered up with artist/composer/designer Matt Ganucheau. We wanted to create a dynamic set of 3D shapes that would move in response to both our hand movements and our voices.

We decided to build on the ofBox example (for creating the 3D shapes) that is part of the beta version of openFrameworks 007. We also needed the skeleton-tracking code, particularly the “Air Accordion” example that I’ve been working with, which I’d gotten from Eric Mika. So the first step in our project was merging code between OF versions. Getting OpenNI to work with the ofBox example was quite difficult, because the project file that shipped with that OF 007 example was broken. Zach helped me troubleshoot this by copying the project file from an example that did work (the “easyCam” one) into the ofBox folder, then renaming the new project file and deleting the bad one. Kyle helped me troubleshoot the OpenNI part: I had to copy the library (the “lib” folder inside the “openni” folder) into the bin/data folder of our new OF 007 project. The Air Accordion example uses the “ofx3dGraphics” files that Zach wrote for our class, so I had to copy those .cpp and .h files over; but because OF 007 redefines some functions, I also had to change all references to ofPoint into ofVec3f and remove all references to ofLine and ofSphere within the ofx3dGraphics files, since OF 007 now provides its own versions of both.
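For reference, the flavor of those edits was roughly as follows; the function below is a made-up stand-in, not an actual ofx3dGraphics function:

    #include "ofMain.h"

    // OF 006-era version (ofPoint parameters, plus a local ofLine()
    // helper that now collides with the one OF 007 defines itself):
    //
    //   void drawSegment(const ofPoint & a, const ofPoint & b){
    //       ofLine(a, b);   // the local helper, removed for OF 007
    //   }

    // OF 007-era version: ofVec3f instead of ofPoint, and the built-in
    // global ofLine() instead of the removed local helper
    void drawSegment(const ofVec3f & a, const ofVec3f & b){
        ofLine(a.x, a.y, a.z, b.x, b.y, b.z);
    }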

Once we got our project running in OF 007, we changed ofBox to ofSphere; removed the line width and the image texture; changed all of the variables for movement speed, cloud size, number of spheres, and spacing; added smoothing formulas to the movement speed and spacing; and added OpenGL lighting and blending. Matt brought the ofxOsc addon into the project to make use of his MaxMSP patch, which records microphone input and makes it available to the code. Matt will cover the details of the audio additions in his blog post and will add a cleaned-up version of the code to his github account. Here’s the zip file of the code that we used for our video documentation (because of file-size limitations on this blog, I had to delete the openni folder inside bin/data, along with the following addon folders: ofxOpenCv, ofxOpenNI, ofxOsc, ofxVectorGraphics, ofxVectorMath): MG-NZ_Final_3-30_blog
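To give a sense of those two pieces, here is a minimal sketch of the smoothing and of an ofxOsc receiver like the one Matt set up; the port, the OSC address, and the variable names are my placeholders, not our actual values:

    // in testApp.h (illustrative member variables)
    ofxOscReceiver receiver;
    float targetSpeed, sphereSpeed;

    // in testApp::setup()
    receiver.setup(12345);                      // port must match the MaxMSP patch

    // in testApp::update()
    while(receiver.hasWaitingMessages()){       // drain the OSC queue from Max
        ofxOscMessage m;
        receiver.getNextMessage(&m);
        if(m.getAddress() == "/mic/level"){     // hypothetical address
            targetSpeed = m.getArgAsFloat(0);
        }
    }
    // exponential smoothing: ease toward the target instead of jumping to it
    float smoothing = 0.9f;                     // closer to 1.0 = smoother, slower
    sphereSpeed = smoothing * sphereSpeed + (1.0f - smoothing) * targetSpeed;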

While on Spring Break vacation, I had the pleasure of swimming in a bioluminescent bay, and I’ve been wanting to recreate that experience, since it’s extremely difficult to photograph or videotape it. See the video below for a first attempt to visualize this (the video is a screen capture of an openFrameworks application with input from a Kinect). The video also served as a test for my final project…

For the final project in my 3DSaV class, I partnered with Matt Ganucheau, who wanted to work with particles. So while he worked on the sound-input code, I merged the “Binned Particle System” example from class with the “Air Accordion” example that Eric Mika worked on (which incorporates ofxOpenNI and other addons). I mapped “addRepulsionForce” to my hands, and drew the depth image so you can see the color changes that correlate with my proximity to the Kinect. The code is here (to run it, you’ll have to copy the “openni” folder, which in turn contains the “config” and “lib” folders, into the “data” folder inside the project’s “bin” folder): SwimmingThroughParticles
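Roughly what that mapping looks like (the hand accessors are placeholders for however the Air Accordion exposes the skeleton joints, and the radius and strength numbers are arbitrary):

    // inside testApp::update(), schematically
    ofPoint leftHand  = getLeftHandPosition();   // placeholder accessor
    ofPoint rightHand = getRightHandPosition();  // placeholder accessor

    // each hand pushes nearby particles away from it
    particleSystem.addRepulsionForce(leftHand.x,  leftHand.y,  100, 5);
    particleSystem.addRepulsionForce(rightHand.x, rightHand.y, 100, 5);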

Swimming through Particles from Nisma Z on Vimeo.

In anticipation of a beach-based Spring Break, and as a test for a final-project idea, I created this openFrameworks / Kinect application, which let me play with a beach photo, stretching and compressing it between my hands and over my body. The code is based on the “Air Accordion” example that I got from Eric Mika. I loaded one of my old beach photos and mapped its x/y position and width/height to the x/y positions of my hands.
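One way to write that mapping (this is a reconstruction for illustration, not the code in the attached file; treating the hands as a bounding box is my assumption):

    // inside testApp::draw(), schematically: the photo fills the
    // rectangle spanned by the two hands, so moving them apart
    // stretches it and bringing them together compresses it
    float x = MIN(leftHand.x, rightHand.x);
    float y = MIN(leftHand.y, rightHand.y);
    float w = fabs(rightHand.x - leftHand.x);
    float h = fabs(rightHand.y - leftHand.y);
    beachPhoto.draw(x, y, w, h);   // beachPhoto is an ofImage loaded in setup()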

I wanted to be able to measure the distance between points in 3D rather than just 2D, so Eric wrote a function called “dist3D(ofPoint a, ofPoint b)” based on a formula I found online. I used it in the variables I created for distance3D, lastDistance3D, and velocity3D, and used the velocity3D variable to try to control the playback of a sound file. I also wanted to rotate the photo in 3D and “extrude” the pixels, but I was not able to get this working. I also tried drawing a box and a sphere with the photo texture-mapped onto them, but those attempts didn’t work either. I deleted all the commented-out code that didn’t work, so the file inserted into this post is clean. (To run it, you’ll have to copy the “openni” folder, which in turn contains the “config” and “lib” folders, into the “data” folder inside the project’s “bin” folder.) GoingToTheBeach
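The formula in question is just the Pythagorean distance extended to three dimensions; here is a sketch of the function and the per-frame bookkeeping, using the variable names above:

    // straight-line distance between two 3D points
    float dist3D(ofPoint a, ofPoint b){
        float dx = b.x - a.x;
        float dy = b.y - a.y;
        float dz = b.z - a.z;
        return sqrt(dx * dx + dy * dy + dz * dz);
    }

    // each frame: current hand separation, and how fast it is changing
    distance3D     = dist3D(leftHand, rightHand);
    velocity3D     = distance3D - lastDistance3D;   // change since last frame
    lastDistance3D = distance3D;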

Last week in class we learned about three types of shaders: Fragment, Vertex, and Geometry. Zach and Kyle created an example called “Depth Visualizer DOF (Depth of Field),” and today I worked with Matt Ganucheau to experiment with applying shaders to the depth visualizer. We found this helpful reference online and tried out different shaders in the fragment and vertex files. To keep the openFrameworks example simple, we just changed the existing “DOFCloud.frag” and “DOFCloud.vert” files rather than making new ones and referencing them. In order to apply the shader to the point cloud in 3D, we needed to copy “gl_Position” into a “varying vec4” in the vertex file; gl_Position is the result of multiplying the gl_Vertex points by the 3D “gl_ModelViewProjectionMatrix”, and the varying is what carries that transformed position over to the fragment shader.
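A minimal sketch of that vertex-shader arrangement (not the actual DOFCloud.vert, which does more than this):

    varying vec4 pos;   // carries the transformed position to the fragment shader

    void main(){
        // project the vertex with the model-view-projection matrix
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        pos = gl_Position;           // the "varying vec4" mentioned above
        gl_FrontColor = gl_Color;    // pass the vertex color through unchanged
    }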

I wanted the shader to change in relation to depth, so Zach said I needed to create a “zscale” variable in the fragment file. I tried various formulas with it, but wasn’t getting the results that I wanted. For now, the shader changes color based on the sine wave I applied to the zscale variable. Check out the attached code. In the testApp file, I commented out the “glBlendFunc(GL_SRC_ALPHA, GL_ONE);” line in order to better see what the shader is doing to the point cloud. (Also to note in the code: in a previous assignment, I wrote some simple code to test whether a cube drawn to screen has been “hit” by a 3D object detected by the Kinect; all of that is commented out here in order to focus on the shader.)
NZ-MG_DiscoShader-DV
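For flavor, here is a hypothetical fragment shader along the lines described above; the constants are guesses and the real DOFCloud.frag differs:

    varying vec4 pos;   // written by the vertex shader

    void main(){
        float zscale = pos.z / 500.0;               // depth-dependent value (scale is a guess)
        float s = 0.5 + 0.5 * sin(zscale * 10.0);   // sine wave on zscale, remapped to 0..1
        // the color drifts between two hues as depth changes
        gl_FragColor = vec4(s, 0.3, 1.0 - s, 1.0);
    }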