Experiment_01

Camera Feed

For my first experiment, in order to create a strong moment of nonsense, I want to connect the virtual world with reality, playing with perception and creating conflict. One way to do this is to import the camera feed from the mobile phone. This way, the user can not only see the real world but also experience illogical, contradictory events in the virtual world that are triggered by the real one.

Possible scenarios

  • scenario_01a → Contradict reality
  • scenario_01b → Think everyone is a monkey
  • scenario_01c → Encourage saying hi
  • scenario_01d → Focus enhancement


Real + Virtual Mashup

[Image: camFeed]

After talking with professor Shawn Van Every, I decided to use the browser instead of a native app as the platform first and test the limitations of the browser and JavaScript. With HTML5 and the Chrome browser, I can get the camera feed from a mobile phone just like the webcam of a laptop, and the getUserMedia / MediaStreamTrack functions of the WebRTC API let users choose a camera and set up constraints as they want. Below is the gist of it:

  • Get the user-facing or rear camera feed into a video element
  • Draw the video into a canvas; use the canvas as the texture for a plane geometry
  • Translate the geometry itself, but keep the origin of the plane mesh (geometry + material = mesh) at the center
  • Rotate the mesh as the camera rotates (i.e. with the user’s head rotation); a sketch of the whole flow follows this list
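
To make those steps concrete, here is a minimal sketch of the flow, assuming Three.js is loaded and using placeholder names and sizes (eyeScreen, the plane dimensions, the 5-unit distance); it is not the exact project code:

```javascript
// Minimal sketch of the camera-feed-to-texture flow described above.
// Assumes Three.js is loaded; plane size/distance and names like eyeScreen are illustrative.

var scene = new THREE.Scene();
var camera = new THREE.PerspectiveCamera(75, window.innerWidth / window.innerHeight, 0.1, 100);
scene.add(camera);

// 1. Get the rear ("environment") camera into a <video> element;
//    facingMode: 'user' would pick the front camera instead.
var video = document.createElement('video');
video.autoplay = true;
navigator.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
  .then(function (stream) { video.srcObject = stream; })
  .catch(function (err) { console.error('getUserMedia failed:', err); });

// 2. Draw the video into a canvas and use that canvas as a texture.
var canvas = document.createElement('canvas');
var ctx = canvas.getContext('2d');
var texture = new THREE.CanvasTexture(canvas);

// 3. Translate the geometry itself so the visible plane sits in front of the viewer,
//    while the mesh origin stays at the camera center (the rotation pivot).
var geometry = new THREE.PlaneGeometry(4, 3);
geometry.translate(0, 0, -5);
var eyeScreen = new THREE.Mesh(geometry, new THREE.MeshBasicMaterial({ map: texture }));

// 4. Parent the mesh to the camera so it follows the user's head rotation.
camera.add(eyeScreen);

// Call this each frame: copy the latest video frame into the canvas and flag the texture dirty.
function updateFeed() {
  if (video.readyState === video.HAVE_ENOUGH_DATA) {
    canvas.width = video.videoWidth;
    canvas.height = video.videoHeight;
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    texture.needsUpdate = true;
  }
}
```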


Computer Vision on Phone

[Image: camFeed_cv]

Nonsense is built on sense, so in order to make nonsense in the virtual world, the V World needs to know what’s going on in the real world. My first attempt is to use computer vision to analyze the images captured from the camera. Below are the computer vision JS libraries I found:

  • https://github.com/inspirit/jsfeat
  • http://trackingjs.com/
  • https://github.com/auduno/clmtrackr (face)
  • https://github.com/auduno/headtrackr (good for face)
  • https://github.com/sightmachine/simplecv-js
  • https://github.com/peterbraden/node-opencv
  • https://cloudcv.io/

Issue #1 – Currently I use jsfeat to grayscale the footage first and then find the bright areas pixel by pixel, which is obviously slow. The next step will be to try the combination of OpenCV and Node.js (thanks to Pedro) and see whether “perform CPU-intense image processing routines in the cloud, let Node.js server handle client requests and call C++ back-end” will optimize the performance or not.
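
For reference, here is a rough sketch of that jsfeat pass, assuming the camera frame has already been drawn into the 2D canvas from the previous step; the brightness threshold is arbitrary:

```javascript
// Grayscale the current frame with jsfeat, then scan every pixel for bright spots.
// Assumes 'canvas' and 'ctx' hold the latest camera frame; 'threshold' is arbitrary (0-255).

function findBrightPixels(threshold) {
  var w = canvas.width, h = canvas.height;
  var imageData = ctx.getImageData(0, 0, w, h);

  // Single-channel 8-bit matrix to hold the grayscale image.
  var img_u8 = new jsfeat.matrix_t(w, h, jsfeat.U8_t | jsfeat.C1_t);
  jsfeat.imgproc.grayscale(imageData.data, w, h, img_u8);

  var bright = [];
  for (var i = 0; i < img_u8.data.length; i++) {
    if (img_u8.data[i] > threshold) {
      bright.push({ x: i % w, y: Math.floor(i / w) });
    }
  }
  return bright; // this per-pixel loop on every frame is the slow part
}
```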

Issue #2 – I still have to figure out how to translate a pixel location from the canvas to the 3D world, since letting eyeScreen rotate with the camera (the head) makes everything more complicated.
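
One direction that might work (a sketch, not a solved implementation): express the canvas pixel in the plane’s local coordinates, then let Three.js map that local point to world space with localToWorld, so the mesh’s current rotation is accounted for automatically. The planeWidth / planeHeight / planeDistance parameters here are assumed to match the plane geometry from the earlier sketch:

```javascript
// Possible approach to Issue #2: canvas pixel -> plane-local point -> world position.
// Because localToWorld uses eyeScreen's current transform, the head/camera rotation
// that makes eyeScreen rotate is handled for free.
// planeWidth, planeHeight, and planeDistance must match the plane geometry (assumed values).

function pixelToWorld(px, py, planeWidth, planeHeight, planeDistance) {
  // Canvas (0, 0) is the top-left corner; the plane's local (0, 0) is its center.
  var localX = (px / canvas.width - 0.5) * planeWidth;
  var localY = (0.5 - py / canvas.height) * planeHeight;
  var localPoint = new THREE.Vector3(localX, localY, -planeDistance);

  eyeScreen.updateMatrixWorld();
  return eyeScreen.localToWorld(localPoint); // world-space position of that pixel
}
```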