This week I did several small tests, mainly tech stuff, and I found out that it's really not easy to come up with a good scenario.
Wireless Wooo_v2
Realized that only the laptop needs to connect to itpsandbox to be the server; all the phones can connect to nyu wifi and access the laptop's server by going to the laptop's IP address. The whole world can access my laptop if they know my IP address! So crazy!! Mind blown!!!
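For reference, here's a minimal sketch of the laptop side, assuming a Node.js server with Express (the framework choice and port 8080 are just illustrative assumptions):

```javascript
// Minimal static file server on the laptop (a sketch; Express and port 8080
// are illustrative assumptions). Phones on the network then open
// http://<laptop-ip>:8080 in a browser.
const express = require('express');
const app = express();

app.use(express.static('public')); // serve the sketch files from ./public

app.listen(8080, () => {
  console.log('Listening on port 8080');
});
```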
Virtual Arm
Tried to figure out how to get 360° rotation from the accelerometer and magnetometer of the Bluetooth TI SensorTag.
So far I can get pitch and roll from the accelerometer and yaw from the magnetometer, but it behaves kind of weirdly when using all three at the same time. It works fine if I just use pitch and yaw.
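For reference, this is roughly the standard formulation I'm working from (a sketch; axis signs depend on how the SensorTag is oriented, so the signs here are assumptions and may need flipping):

```javascript
// Pitch & roll from the accelerometer, tilt-compensated yaw from the
// magnetometer. Axis conventions vary per sensor, so the signs here
// are assumptions. All angles are in radians.
function getOrientation(ax, ay, az, mx, my, mz) {
  const roll  = Math.atan2(ay, az);
  const pitch = Math.atan2(-ax, Math.sqrt(ay * ay + az * az));

  // Rotate the magnetometer reading back onto the horizontal plane
  // before taking the heading, otherwise yaw drifts as you tilt.
  const bx = mx * Math.cos(pitch)
           + my * Math.sin(roll) * Math.sin(pitch)
           + mz * Math.cos(roll) * Math.sin(pitch);
  const by = my * Math.cos(roll) - mz * Math.sin(roll);
  const yaw = Math.atan2(-by, bx);

  return { pitch, roll, yaw };
}
```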
Bluetooth data transfer is not as free as serial communication. Might have to switch to XBee/radio? :/
W/ projector!
I bought a SlimPort micro-USB to HDMI adapter cable and tested it with projectors. It didn't work with the Samsung Pico projector (HDMI to VGA to VGA 15-pin), which is very small, but it worked with the medium-sized ones (HDMI to HDMI)!
Control Room 2.0
Following the new strategy (get one done in detail and change it based on feedback, while experimenting with the others freely), I chose Control Room as my target mask, since it already had a physical form.
Things planned to change:
Daily use
How can it integrate into daily life? Use it in front of the laptop? Use it outside the home?
–> Only one window is the remote view; the others are all local views
–> Portable home: always connected to your home, so you can see/talk with family members all the time, and your pets too
–> An open-field option as a relaxing escape for daily use
Based on feedback from user testing, people want to see the real world more (= see more windows, e.g. being able to see their hands if they want)
–> More windows to see more clearly
More comfortable (suggestion from teacher Despina)
Smell good –> To do
Easy to put on, pillow in the back? –> To do
Aim for it to be worn for a longer time, since it represents Home
Interactions
Self: based on where your head turns, the things in the house react to it (see the sketch after this list)
Others
Open the door –> explode the house
Duet
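A sketch of one way to wire up the "Self" reaction: raycast from the center of the view with Three.js and let whatever the head points at react (houseObjects and the react() callback are hypothetical names):

```javascript
// Raycast from the center of the screen along the head direction and let
// whatever the user looks at react. `houseObjects` and `.react()` are
// hypothetical stand-ins for the reactive things in the house.
const raycaster = new THREE.Raycaster();
const center = new THREE.Vector2(0, 0); // middle of the screen in NDC

function checkGaze(camera, houseObjects) {
  raycaster.setFromCamera(center, camera);
  const hits = raycaster.intersectObjects(houseObjects);
  if (hits.length > 0 && hits[0].object.userData.react) {
    hits[0].object.userData.react(); // e.g. the TV turns on, the cat meows
  }
}
```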
Scenario
T_T so difficult
Tech
Quaternion & Euler
“Ahhhhhhhhhhhhhhhhhhhh!!!! TvT” –> famous Fxck & Yeah moment
Spent almost three hours trying to rotate the body with a quaternion while only changing rotation.y, trying to deal with something that looks like this, and then found out that Three.js has an Euler function that can just convert a quaternion to Euler angles for you. You just need the source quaternion and the order of axes.
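For reference, the conversion boils down to this (`quaternion` and `body` stand for the source quaternion and the rotated object; the 'YXZ' order is my assumption, use whatever your rig needs):

```javascript
// Convert a quaternion to Euler angles with Three.js, then apply only
// the yaw to the body. 'YXZ' is a common order for head rotation.
const euler = new THREE.Euler().setFromQuaternion(quaternion, 'YXZ');
body.rotation.y = euler.y;
```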
It was suggested to test different ways to maximize performance: set a benchmark and see which part costs the most.
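e.g. something as simple as console.time around each stage (the labels here are just examples):

```javascript
// Rough benchmarking sketch: time each stage per frame to find the bottleneck.
console.time('grayscale');
// ... grayscale stage ...
console.timeEnd('grayscale');

console.time('detect');
// ... detection stage ...
console.timeEnd('detect');
```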
Face detection
Use a small canvas for executing face detection, and display the image with a big canvas.
From the test pics below you can see that it's much better to analyze with a smaller canvas, but there's no big difference between displaying a big or small canvas, so for better resolution it seems OK to display the bigger canvas.
[Test pics: bigCanvasAnalyze, bigCanvasDisplay, smallCanvasDisplay]
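A minimal sketch of the analyze-small / display-big split (the `detect(canvas)` call, and the `video` and `bigCanvas` elements, are hypothetical stand-ins for the real detector and page):

```javascript
// Draw the video full-size for display, but run detection on a low-res copy,
// then scale the results back up. `detect(canvas)` is a hypothetical
// stand-in for the face detection call.
const small = document.createElement('canvas');
small.width = 160;
small.height = 120;
const sctx = small.getContext('2d');
const bctx = bigCanvas.getContext('2d'); // the full-size canvas on the page
const scale = bigCanvas.width / small.width;

function tick() {
  bctx.drawImage(video, 0, 0, bigCanvas.width, bigCanvas.height);
  sctx.drawImage(video, 0, 0, small.width, small.height);
  detect(small).forEach((face) => {
    // scale the detection results back up to the display canvas
    bctx.strokeRect(face.x * scale, face.y * scale,
                    face.width * scale, face.height * scale);
  });
  requestAnimationFrame(tick);
}
tick();
```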
Greyscale + Blur
Multiple-face detection works, yet it's not stable due to changing lightness
Collect personal moments, like taking photos –> they accompany you all the time to comfort you when confronting the unknown
Affecting virtual world
Wireless Wooo
Thanks to Andy's advice on hooking up the localhost of my laptop through itpsandbox wifi, now I can run code on a mobile phone using the laptop as the server! (P.S. It also works at home; it's just that at NYU, because of the security issue, using ITPSANDBOX is needed.)
*Note* It's not advised to run the server elsewhere (e.g. Heroku, DigitalOcean) because it takes more time to transfer the data back and forth. Localhost on the laptop is the best option for a proof of concept!
Paper Mache
–> Decided to do it after finalizing the virtual content.
For my first experiment, in order to create a strong nonsense moment, I want to connect the virtual world with reality, so as to play with perception and create conflict. One way to do this is importing the camera feed from the mobile phone. This way, the user can not only see the real world, but also experience illogical, contradictory events in the virtual world, triggered by the real world.
Possible scenario
–> Contradict reality –> Think everyone is a monkey –> Encourage saying Hi –> Focus enhancement
Real + Virtual Mashup
After talking with professor Shawn Van Every, I decided to use the browser instead of an app as the platform first, and test the limitations of the browser and JavaScript. With flexible HTML5 and the Chrome browser, I can get the camera feed from a mobile phone just like from the webcam of a laptop, and the getUserMedia & MediaStreamTrack functions of the WebRTC API allow users to choose a camera and set up constraints as they want. Below are the gists of it (see the sketch after this list):
Get the user/rear camera into a video element
Put it into a canvas; use the canvas as the texture for a plane geometry
Translate the position of the geometry but keep the center of the plane mesh (geometry + material = mesh) at the center
Rotate the mesh as the camera rotates (== user's head rotation)
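Here's a minimal sketch of the first two steps plus the per-frame update (getUserMedia shown in its promise-based form; the canvas size and the existing `scene` are assumptions):

```javascript
// 1. Get the rear camera into a <video> element.
const video = document.createElement('video');
video.autoplay = true;
navigator.mediaDevices.getUserMedia({ video: { facingMode: 'environment' } })
  .then((stream) => { video.srcObject = stream; });

// 2. Draw the video into a canvas and use that canvas as a plane's texture.
const canvas = document.createElement('canvas');
canvas.width = 640;
canvas.height = 480;
const ctx = canvas.getContext('2d');

const texture = new THREE.Texture(canvas);
const eyeScreen = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 3),
  new THREE.MeshBasicMaterial({ map: texture })
);
scene.add(eyeScreen); // `scene` assumed to exist already

// Per frame: copy the latest video frame in and flag the texture as dirty.
function updateFeed() {
  if (video.readyState >= video.HAVE_CURRENT_DATA) {
    ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
    texture.needsUpdate = true;
  }
}
```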
Computer Vision on Phone
Nonsense is built on sense, so in order to make nonsense in the virtual world, the V World needs to know what's going on in the real world. My first attempt is using computer vision to analyze the image captured from the camera. Below are the computer vision JS libraries I found:
https://github.com/inspirit/jsfeat
http://trackingjs.com/
https://github.com/auduno/clmtrackr (face)
https://github.com/auduno/headtrackr (good for face)
https://github.com/sightmachine/simplecv-js
https://github.com/peterbraden/node-opencv
https://cloudcv.io/
Issue #1 – Currently I use jsfeat to grayscale the footage first, and then find the bright areas pixel by pixel. It's obviously slow. The next step will be trying the combination of OpenCV and Node.js (thanks to Pedro), to see whether "performing CPU-intense image processing routines in the cloud, letting the Node.js server handle client requests and call the C++ back-end" will optimize the performance or not.
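Roughly, the current slow version boils down to this (the brightness threshold is arbitrary; `ctx`, `w`, and `h` come from the analysis canvas):

```javascript
// Grayscale with jsfeat, then scan every pixel for bright spots.
// This runs on the CPU every frame, which is why it's slow.
const imageData = ctx.getImageData(0, 0, w, h);
const gray = new jsfeat.matrix_t(w, h, jsfeat.U8_t | jsfeat.C1_t);
jsfeat.imgproc.grayscale(imageData.data, w, h, gray);

const bright = [];
for (let y = 0; y < h; y++) {
  for (let x = 0; x < w; x++) {
    if (gray.data[y * w + x] > 200) { // arbitrary brightness threshold
      bright.push({ x: x, y: y });
    }
  }
}
```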
Issue #2 – Have to figure out how to translate a pixel location from the canvas to the 3D world, since letting eyeScreen rotate with the camera (head) makes everything complicated.
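One possible direction (a sketch I haven't verified, not a solution): unproject the canvas pixel through the same camera that carries the head rotation, so the rotation is accounted for automatically:

```javascript
// Map a canvas pixel to a world-space point at a chosen depth by unprojecting
// through the rotating camera. `depth` = how far in front of the camera
// to place the point.
function pixelToWorld(px, py, canvasW, canvasH, camera, depth) {
  const ndc = new THREE.Vector3(
    (px / canvasW) * 2 - 1,   // canvas x -> normalized device coords
    -(py / canvasH) * 2 + 1,  // canvas y -> NDC (y is flipped)
    0.5
  );
  ndc.unproject(camera);      // NDC -> a point on the camera's view ray
  const dir = ndc.sub(camera.position).normalize();
  return camera.position.clone().add(dir.multiplyScalar(depth));
}
```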