
Out of Reach: Phase 2

In approaching the second phase of this project, the problem I wanted to solve was allowing interaction with more than one viewer.  Square One worked with only one viewer in front of the screen, and if another person entered the Kinect’s view, the functionality could often break.  More than that, an outsider watching someone else interact with the piece would not have the sense of depth that the individual had.  They could understand what the piece was trying to do, but not experience it themselves from their position.

 

Therefore, in developing the concept of Side Effect, I will be using the Kinect’s ability to recognize more than one viewer along with its ability to recognize hand gestures.  Since the creation of depth requires a very personalized point of view, this project does not try to work for multiple viewers at once; instead, it aims to be discernible only from the point of view of the individual viewer, with the ability to switch to a different viewer depending on hand gestures.  As a viewer in range of the Kinect closes their hand, the image changes to match their view; it then switches to a different viewer when they release their hand.

2018-10-25 (4).png

In order to accomplish this, I first selected the right-hand-closed data from the Kinect.  A different numerical value was assigned to “no hand closed”, “1st person’s hand closed”, and “2nd person’s hand closed” so that a Switch could present a different state depending on the input.
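
In rough terms, the mapping looks something like the Python sketch below.  The operator and channel names ('kinect1', 'p1/hand_r_closed', 'p2/hand_r_closed') are placeholders rather than the exact names in my network, and the Kinect CHOP's channel layout can differ depending on its settings.  The returned value is what feeds the Switch's Index parameter.

```python
# Sketch: collapse the per-person "right hand closed" readings into one state value.
# 'kinect1' and the channel names are placeholders, not the exact network names.

def hand_state():
    kinect = op('kinect1')
    p1 = kinect.chan('p1/hand_r_closed')   # 1 while person 1's right hand is closed
    p2 = kinect.chan('p2/hand_r_closed')   # 1 while person 2's right hand is closed

    if p1 is not None and p1.eval() > 0.5:
        return 1    # first viewer has "grabbed" the piece
    if p2 is not None and p2.eval() > 0.5:
        return 2    # second viewer has "grabbed" the piece
    return 0        # no hand closed: open state
```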

2018-10-25 (6).png
2018-10-25 (5).png

In order for the Switch to prompt the change in the state of the art piece, Python was needed: when the state value changed, a script would refresh the network and move the Switch to the new state.
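
The Python side can be as small as a CHOP Execute DAT watching that state channel.  A minimal sketch, with 'switch1' standing in for my actual operator names and the per-state refresh left as a comment:

```python
# Sketch of a CHOP Execute DAT callback watching the hand-state channel.
# 'switch1' is a placeholder name for the Switch that selects the piece's state.

def onValueChange(channel, sampleIndex, val, prev):
    state = int(round(val))          # 0 = open state, 1 = viewer 1, 2 = viewer 2
    op('switch1').par.index = state  # jump the Switch to the matching input

    if state != int(round(prev)):
        # any refresh/reset logic for the newly active state would go here
        pass
    return
```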

2018-11-06 (1).png

I also worked on creating an open state, a setting that would run when no one was currently interacting with the piece.  The end goal was to create a visual that would prompt someone who approached the piece to close their fist.  To do this, I wanted to use a particle effect to create a visual that the user would want to “grab” at.  I liked the idea of the viewer reaching out to the piece, and then having it specifically react to them.

 

To create this effect, I connected an LFO CHOP to the movements of a sphere with particle effects.  This allowed for constant movement without any viewer data as an input.  I also linked the LFO data to the opacity, so that the image would fade in and out.  The intention was to have both the surprise of its appearance and to present it as something to “catch.”  It will require testing with a viewer to determine whether this image has the intended impact when projected.
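
Wired up, this amounts to a couple of parameter expressions driven by the LFOs.  A rough sketch, assuming placeholder operator names ('lfo1', 'lfo2', 'geo_sphere', 'level1') and the LFO CHOP's default 'chan1' channel:

```python
# Sketch: expressions for the idle-state visual, driven by LFO CHOPs.
# These live in the parameter fields (expression mode) so they update every frame.
# 'lfo1', 'lfo2', 'geo_sphere', and 'level1' are placeholder operator names.

# Geometry COMP 'geo_sphere' > Translate Y: constant drift with no viewer input
op('lfo1')['chan1'] * 0.4

# Level TOP 'level1' > Opacity: fade the image in and out, as something to "catch"
0.5 + 0.5 * op('lfo2')['chan1']
```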

The next step was to develop the image that would display when the user interacted with the piece.  For this step, I looked into anamorphic images.  These are 2-D images that are stretched so that, from a particular vantage point, they give the viewer the illusion of depth.

 

For simplicity’s sake, I used the image of a funnel.  The layers required were relatively easy to plan out.  I developed textures using Paint Tool SAI to keep the piece engaging despite its simplicity.  The image was also separated into layers so that the different sections could move relative to one another, adding another layer of depth.

sideeffect_anamorphic.png

Once the anamorphic image was implemented in TouchDesigner, a solution had to be found for how the image should rotate.  For the illusion to work, the piece has to move with the viewer.  This required some basic trigonometry: plotting out quadrants from the Kinect and adjusting the x/y/z inputs to match the space that the viewer would be occupying.
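
At its core this is an inverse-tangent problem: the viewer's left/right offset and distance from the Kinect give an angle, and that angle drives the rotation of the image.  A minimal sketch of the math, assuming placeholder names for the Kinect CHOP's skeleton channels ('p1/head:tx', 'p1/head:tz') and for the geometry being rotated:

```python
import math

# Sketch: rotate the anamorphic image toward the viewer.
# 'kinect1', 'geo_anamorphic', and the head-position channel names are placeholders.

def viewer_angle():
    kinect = op('kinect1')
    x = kinect['p1/head:tx'].eval()   # left/right offset from the sensor
    z = kinect['p1/head:tz'].eval()   # distance out from the sensor

    # atan2 handles all four quadrants, so the angle stays correct
    # whichever side of the Kinect the viewer stands on.
    return math.degrees(math.atan2(x, z))

# e.g. drive the rotation of the image's geometry each frame
op('geo_anamorphic').par.ry = viewer_angle()
```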

slide2.png

Since the Kinect’s reading of a “closed hand” is often inconsistent, I used a Filter CHOP to smooth out the data feeding the switch to the anamorphic image.

 

I haven’t found a perfect setting for the Filter, unfortunately.  If the data isn’t filtered, the image constantly flickers, but if the Filter is too strong, the piece does not react promptly to a closed hand and leaves the user fighting for a reaction.
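
One possible middle ground, which I have not settled on, would be to keep the Filter light and add a short confirmation window in Python: the state only switches once the reading has held steady for a handful of frames, so brief dropouts are ignored without adding much lag.  A sketch of that idea using an Execute DAT, with placeholder operator names ('handstate1', 'switch1'):

```python
# Sketch: a frame-count confirmation as an alternative to heavy filtering.
# 'handstate1' (the CHOP carrying the 0/1/2 state) and 'switch1' are placeholders.

CONFIRM_FRAMES = 6     # how long a reading must hold before the state changes
pending = {'state': 0, 'count': 0}

def onFrameStart(frame):
    reading = int(round(op('handstate1')['state'].eval()))

    if reading == pending['state']:
        pending['count'] += 1
    else:
        pending['state'] = reading
        pending['count'] = 1

    if pending['count'] >= CONFIRM_FRAMES:
        op('switch1').par.index = reading
    return
```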

slide1.png

The remaining issue to solve is prompting the transition animation.

 

So that the adjustment between the (no user) particle effect and the anamorphic image isn’t jarring, an animation has been added to scale the anamorphic image up from zero to one when switching between states.

 

To accomplish this, a Button was linked to a Trigger CHOP.  In theory, this setup works, but in practice it is largely inconsistent because the Trigger has to reset before it can be prompted again.
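
One alternative that would sidestep the reset problem entirely, sketched below, is to skip the Trigger and reuse the filtering idea: run the 0/1 “someone is interacting” channel through a Filter (or Lag) CHOP and use the resulting ramp directly as the image’s scale, since a filtered step never needs to be re-armed.  'filter_scale' and 'geo_anamorphic' are placeholder operator names.

```python
# Sketch: scale the anamorphic image with a filtered on/off channel instead of a Trigger.
# 'filter_scale' smooths a 0/1 "active" channel into a ramp; names are placeholders.

# Geometry COMP 'geo_anamorphic' > Uniform Scale (expression mode):
op('filter_scale')['chan1']
```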

2018-12-06.png

Final Assessment:

 

This project has successfully resolved issues from the first phase of the Out of Reach project, allowing interaction with multiple viewers and bringing the piece away from the screen.  It has also led to other observations.  If I continue developing technology to give the user true immersion with a 2-D piece, it will require a study of techniques and angle calculations so that the image can properly adjust.  The current image cannot fully account for the viewer’s position and height, and therefore the actual depth effect is largely inconsistent.

 

I am interested in studying illusion effects from before the development of the computer, when specific calculations had to be done by hand, and in seeing whether that way of thinking can be combined with modern technology to create a very new and different experience.

Resources:


TouchDesigner. (2014, April 12). Retrieved December 6, 2018, from https://matthewragan.com/teaching-resources/touchdesigner/


Polar and Cartesian Coordinates. (n.d.). Retrieved December 6, 2018, from https://www.mathsisfun.com/polar-cartesian-coordinates.html


Anamorphic perspective - optical and catoptric perspective. (n.d.). Retrieved December 6, 2018, from https://www.roserushbrooke.com/anamorphic-quilts/anamorphic-perspective.html


math - Inverse of Tan in python (tan-1). (n.d.). Retrieved December 6, 2018, from https://stackoverflow.com/questions/10057854/inverse-of-tan-in-python-tan-1
