David Bowie Is – AR Exhibition

David Bowie Is – AR Exhibition is an AR project by The David Bowie Archive, Sony Music Entertainment (Japan) Inc., and Planeta. In this post, I jotted down some notes from my work as Lead Developer, under Director Nick Dangerfield, Art Director Seth Tillett, and Tech Director Dan Brewster.

Paper Interaction

The physical exhibition has a lot of 2D materials, and when browsing them in a mobile app there is a subtle balance we want to maintain, between efficiency and AR-ness. We tried several approaches, like shuffling and fading in/out, but ultimately it feels most natural to see the materials lying on or leaning against some sort of surface.

And just as you would expect and do in real life, we made the 2D materials move with your fingertip.

The effect works especially well on small paper cutouts.
(Note – all videos in this post are direct screen recordings on my old iPhone 6s)
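Under the hood, a drag like this can be as simple as raycasting the touch against the table plane. A minimal Unity sketch of the idea (class and field names are mine, not the production code):

```csharp
using UnityEngine;

// Minimal sketch: drag a "paper" artifact along the surface it rests on.
// Raycasts the touch against the table plane and moves the paper to the hit point.
public class PaperDrag : MonoBehaviour
{
    public Camera arCamera;   // the AR camera rendering the scene
    public Transform paper;   // the 2D artifact being dragged

    // the detected table surface; assumed to sit at y = 0 in this sketch
    Plane tablePlane = new Plane(Vector3.up, Vector3.zero);

    void Update()
    {
        if (Input.touchCount == 0) return;
        Touch touch = Input.GetTouch(0);
        if (touch.phase != TouchPhase.Began && touch.phase != TouchPhase.Moved) return;

        Ray ray = arCamera.ScreenPointToRay(touch.position);
        float distance;
        if (tablePlane.Raycast(ray, out distance))
        {
            // keep the paper flat on the table, following the fingertip
            paper.position = ray.GetPoint(distance);
        }
    }
}
```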

And since this is tabletop AR, some 2D artifacts need to be stacked in order for everything to fit, so we needed a way to shuffle through them.

After testing, we found that returning the focused artifact to the front of the stack after it is sent back feels the most natural and intuitive.

Paper stack shuffle.
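Conceptually, the shuffle is just reordering the papers and restacking them a hair apart. A hypothetical sketch of that cycling (simplified, not the production code):

```csharp
using System.Collections.Generic;
using UnityEngine;

// Minimal sketch: cycle a stack of papers, sending the front one to the back
// and re-laying each sheet so the focused artifact sits on top.
public class PaperStack : MonoBehaviour
{
    public List<Transform> papers = new List<Transform>(); // front of stack = index 0
    public float layerHeight = 0.001f; // 1 mm between sheets to avoid z-fighting

    public void SendFrontToBack()
    {
        if (papers.Count < 2) return;
        Transform front = papers[0];
        papers.RemoveAt(0);
        papers.Add(front);
        Restack();
    }

    void Restack()
    {
        // index 0 sits highest, so it reads as the focused artifact
        for (int i = 0; i < papers.Count; i++)
        {
            Vector3 p = papers[i].localPosition;
            papers[i].localPosition = new Vector3(p.x, (papers.Count - 1 - i) * layerHeight, p.z);
        }
    }
}
```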

Scene Change Masks

To pack the whole physical show into a tabletop AR mobile app, it is important to find a way to navigate its rich content.

It includes over 400 high-resolution captures of David Bowie’s costumes, sketches, handwritten lyrics, notes, music videos and original works of art, presented in striking arrangements and immersive settings, as well as dozens of never-before-seen items, including archival videos, drawings, photographs, and notes.

davidbowieisreal.com

We decided to condense each section of the show into a digestible diorama, and to make the dioramas navigable through a map.

And within each diorama, we needed to make some sub-scene changes as well to accommodate all the content.

For this, I used a lot of invisible 3D masks.

A mask writes to the depth buffer to occlude content and prevent it from being shown. It is a simple but effective graphics trick, and more can be found here.

For example, in Hunger City, two masks are placed at the sides of the stage to hide the off-stage content.
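In Unity's built-in pipeline, the usual way to make such a mask is a shader that writes depth but no color, rendered just before regular geometry. A minimal sketch along those lines (not the project's exact shader):

```shaderlab
Shader "Custom/InvisibleDepthMask"
{
    SubShader
    {
        // render before regular geometry so the depth buffer is filled first
        Tags { "Queue" = "Geometry-10" }
        ColorMask 0   // draw no color at all
        ZWrite On     // but still write depth, occluding whatever is behind

        Pass {}
    }
}
```

Any object wearing this material stays invisible itself while hiding everything rendered behind it, which is exactly what the off-stage masks need.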

Scene Making

Under the guidance of the Director and Art Director, and with feedback from the whole development team, I built the AR scenes, including: Early Influences, Space Oddity, Cultural Influences, Songwriting, Recording, Characters, Collaborations, Life On Mars, Ziggy Stardust, Hunger City, Stage and Screen, New York, and Black & White Years. Below are in-app screen recordings of some of those scenes.

Early Influences

Hunger City

Black and White Years

Stage and Screen

New York

Ziggy Stardust

We Dwell Below

I was contacted by Gabrielle Jenks, Director of Abandon Normal Devices, in May 2017 about creating a VR experience for their biennial festival, which took place in September 2017 in Castleton, a village in the Peak District National Park in Derbyshire, UK, with tons of caves! Together with Planeta, we created something crazy and unusual that definitely pushes the boundaries of VR.

Concept

Inspired by the cave theme, I had the idea of transforming normal people into cave dwellers: from an ordinary, clean, constrained person to a bizarre, messy, loose wilder who acts freely! I imagine it's the process of traversing the underground together, doing weird things together, that makes the bonds between participants and shapes who they become.

So we settled on the idea of transforming participants into cave dwellers, but what weird behavior could we ask them to perform?

Around that time, I was developing a Chewing Device for Google Cardboard for my Daily Life VR Ch. 2 Eat (see my Instructables here!). The Chewing Device is an analog mechanism that brings real-life “chewing data” into the virtual world, in order to reenact the chewing behavior there. The mouth can apparently do a lot of things; what if, besides eating, it could also dig the ground? The image of people “chewing at the floor” together is so ridiculous that we all agreed it was worth experimenting with.

Analog chewing device in action with real food!

Process: Interaction

Since we are bringing in chewing data, we want to make the most of mouth/jaw movement. First of all, the mouth is the only means of traveling down through the ground: you literally open and close your mouth close to the floor, a.k.a. digging the ground.
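A rough sketch of how such a dig check can work in Unity (the mouthOpen flag stands in for the actual Chewing Device input, which is wired up elsewhere):

```csharp
using UnityEngine;

// Minimal sketch: descend when the player chews near the floor.
// mouthOpen is fed by the Chewing Device input; the wiring is hypothetical here.
public class DigWithMouth : MonoBehaviour
{
    public Transform head;            // the HMD transform
    public bool mouthOpen;            // set each frame from the jaw sensor
    public float digRange = 0.6f;     // how close the mouth must be to the ground
    public float digPerChew = 0.05f;  // meters descended per chew cycle

    bool wasOpen;

    void Update()
    {
        bool chewed = wasOpen && !mouthOpen; // a full open -> close cycle
        wasOpen = mouthOpen;
        if (!chewed) return;

        // only dig if the head is actually close to the ground
        RaycastHit hit;
        if (Physics.Raycast(head.position, Vector3.down, out hit, digRange))
        {
            // move the whole rig down, as if the floor were being eaten away
            transform.position += Vector3.down * digPerChew;
        }
    }
}
```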

We encourage participants to go wild and loose, so in the experience participants hear their own voices with an echo. The voice volume also generates gibberish, inspired by the 32 categorized symbols used in the Ice Age (source).
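A sketch of the volume-to-symbol idea, assuming the microphone is already routed into an AudioSource; the symbol set, thresholds, and mapping are placeholders, not the project's values:

```csharp
using UnityEngine;

// Minimal sketch: estimate voice loudness and emit one of N gibberish symbols.
// Assumes voiceSource is playing a Microphone.Start(...) clip.
public class GibberishVoice : MonoBehaviour
{
    public AudioSource voiceSource;
    public Sprite[] symbols;          // e.g. 32 categorized cave signs
    public float threshold = 0.05f;   // minimum loudness to emit a symbol
    public float interval = 0.3f;     // seconds between symbols

    readonly float[] samples = new float[256];
    float nextEmitTime;

    void Update()
    {
        // root-mean-square of the latest output buffer ~ current loudness
        voiceSource.GetOutputData(samples, 0);
        float sum = 0f;
        for (int i = 0; i < samples.Length; i++) sum += samples[i] * samples[i];
        float rms = Mathf.Sqrt(sum / samples.Length);

        if (rms > threshold && Time.time >= nextEmitTime && symbols.Length > 0)
        {
            nextEmitTime = Time.time + interval;
            // louder voice picks a different symbol; purely illustrative mapping
            int index = Mathf.Clamp(Mathf.FloorToInt(rms * 4f * symbols.Length), 0, symbols.Length - 1);
            Emit(symbols[index]);
        }
    }

    void Emit(Sprite symbol)
    {
        // instantiate a billboard sprite near the avatar's mouth (omitted here)
    }
}
```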

With the mouth, participants can also eat things (meat) and puke things (hairballs, rocks, etc.). And when a participant opens their mouth while aiming at another participant, they puke hearts. Surprisingly, that makes opening your mouth a major way to communicate! I'm so thrilled with this non-verbal communication, and really think it helps shape the behavior to be raw and primitive, and fun :)

Also, body contact is something we love to encourage, so whenever participants touch each other, a random “twig” is generated and planted on the body at the contact point.

Process: Content

Avatar – I used the FinalIK library to create the VR avatars, and the result is quirky but at the same time very effective. The avatar looks like a possessed doll, and knowing there is a real human being inside it is a very strange, strong feeling. Throughout the journey, participants transform (both visually and mentally) from normal humans into wilders, so the shape and texture of the avatar change with every level down as well.

As for the environments, that's the part I probably spent the least time on because of the tight timeline. Luckily, with Dan Brewster's and Nick Dangerfield's advice and encouragement, we settled on a heavy, hand-drawn textured style.

before

after

Technology

We Dwell Below roughly involves four parts: 1) the Chewing Device, 2) a multi-user (networking) system in Unity, 3) costumes, and 4) the story.

Chewing Device

For this project, I needed to completely redesign the device used with Google Cardboard + web browser so it would work with HTC Vive + Unity3D. I laser-cut the base and, in the end, used leather for a strong strap. A circular pressure sensor (FSR) provides the input.

I made the prototype with an FSR sensor and an Arduino, and Joseph Saavedra helped perfect the hardware design and fabrication! The final version uses a Teensy board. The wonderful chin strap was made by Kelsey LaSeur.
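On the Unity side, the pressure readings can be pulled in over a serial port. A minimal sketch, assuming the Teensy prints one raw FSR value per line; the port name, baud rate, and line format are my placeholders, not the real protocol:

```csharp
using System.IO.Ports;   // requires the full .NET API level in Player Settings
using UnityEngine;

// Minimal sketch: read FSR pressure values sent by the Teensy over serial.
public class ChewSerial : MonoBehaviour
{
    public float pressure;   // normalized 0..1 jaw pressure, updated each frame
    SerialPort port;

    void Start()
    {
        port = new SerialPort("/dev/tty.usbmodem1", 9600); // placeholder port name
        port.ReadTimeout = 10;                             // don't block the frame
        port.Open();
    }

    void Update()
    {
        try
        {
            // assume the Teensy prints one raw 0..1023 reading per line
            int raw = int.Parse(port.ReadLine());
            pressure = raw / 1023f;
        }
        catch (System.Exception)
        {
            // no complete line this frame; keep the previous value
        }
    }

    void OnDestroy()
    {
        if (port != null && port.IsOpen) port.Close();
    }
}
```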

Multi-user / Networking VR in Unity

This is really a pain in the butt lol

I've spent a lot of time on the Unity Networking system. The documentation has improved a lot, but it was still quite limited in 2017. WebSockets are so much clearer and more transparent by comparison!

According to the latest news, Unity is going to rewrite its whole approach to networking (2019), so I guess none of my effort can be reused anymore hahaha (cry cry).

Still, I can share some thoughts on multi-player VR: it's really fun to be able to interact with other people both virtually and physically. It is still awkward to interact with strangers, but with people you know, it feels like going on a great trip together.
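For reference, even though UNet is on its way out, the basic HLAPI pattern I leaned on looks roughly like this; a simplified sketch, not the production code:

```csharp
using UnityEngine;
using UnityEngine.Networking;

// Minimal UNet sketch (the now-deprecated HLAPI): each player syncs its HMD pose
// so everyone sees everyone else's avatar head move.
public class NetworkedHead : NetworkBehaviour
{
    [SyncVar] Vector3 headPosition;    // replicated from server to all clients
    [SyncVar] Quaternion headRotation;

    public Transform localHmd;         // assigned only on the local player
    public Transform avatarHead;       // the visible head of this player's avatar

    void Update()
    {
        if (isLocalPlayer)
        {
            // push our real HMD pose up to the server
            CmdSetHead(localHmd.position, localHmd.rotation);
        }
        else
        {
            // remote players: follow the replicated pose
            avatarHead.SetPositionAndRotation(headPosition, headRotation);
        }
    }

    [Command]
    void CmdSetHead(Vector3 pos, Quaternion rot)
    {
        headPosition = pos;
        headRotation = rot;
    }
}
```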

Costumes

AND Festival suggested manufacturing some elements of the scenography in wearable material, like my project Brain, and we went a bit far :) Based on the cave theme and the idea of participants performing some sort of tribal ritual together, costume designer Kelsey designed four big coats and headpieces that are easy to put on and take off. All 100% handmade!

To block the infrared tracking of the HTC Vive sensors as little as possible, we used mostly mesh fabric for the headwear.

For the 2018 Newcastle tour, we modified the helmets to make them stronger, with more defined face outlines. I used steel wire to make different eye contours.

Final Installation

I am very lucky that Planeta, where I work, was interested in co-producing the piece; otherwise this would never have happened in such a short amount of time! We were all working hard until the last minute, in a stone cottage near the installation site.

The project turned out to be a huge hit. Thanks to AND Festival's arrangements, we had a tent, which made the piece look very secretive and seductive ;) The costumes and the weird Chewing Device behaviors also made the experience not just interesting to try, but entertaining to watch.

We Dwell Below - AND Festival 2017

In 2018, we brought the project back to England, this time for a two-day installation at The Great North Museum: Hancock in Newcastle. The participants were mostly kids this time! :D

We Dwell Below at Hancock Museum (2018)

Tzina: Symphony of Longing

Tzina: Symphony of Longing is a WebVR documentary about the people who inhabited Tzina Dizengoff Square, talking about their lives, love, and loneliness. The square was demolished at the beginning of 2017.

Created and directed by Shirin Anlen, the project invites the viewer on a physical walk to explore the square, and to visit and hear the characters at different times of the day. It was made by Shirin, Ziv Schneider, Or Fleisher, Avner Peled, and me. Read more here: shirin.works/tzina-symphony-of-longing

I was invited to join the team in summer 2016 to create the background animations for the character interviews, which were captured with DepthKit. It was a pleasant process discussing with director Shirin, going from her understanding of the people she interviewed and her vision to my interpretation and visualization.

For me, it was an interesting yet challenging task to keep the interpretation balanced. I learned how to intrigue viewers without overshadowing the monologues, and I am really happy with the magical feeling of the result. I may have purposefully highlighted the lovelier moments of each character's story, and I hope I did a not-bad job of interpreting their lives in the end :)

 

Character sketches


Process Stills

Animation Extracts

Virtual Reality Tour of the Met

For my internship during the Spring 2014 semester at the Media Lab of The Metropolitan Museum of Art, I hooked up:

    1. 3D models of the Met from the Architecture Department
    2. the official Audio Guide
    3. 3D models of art pieces in the Greek and Roman galleries, made by photo-based 3D scanning
    4. Unity as the game engine
    5. the Oculus Rift virtual reality head-mounted display as the controller

and created an immersive virtual reality tour of the Met!

With the Oculus Rift, users can wander around the museum, listening to the audio guide and admiring art pieces, walk upstairs, watch butterflies, get blocked by a huge bowl, and go inside the surreal mash-up models (credits to Decho for the horse and Rui for the uncolored triangles).

IDEA

With a background as a VFX artist in 3D animation and post-production, I have always been interested in 3D and how it can be made interactive in creative ways. Once I got the chance to intern at the Media Lab of the Met and learned we could access the museum's 3D models, I wanted to use the Oculus Rift to walk inside a fantasy version of the Met and enjoy the immersive experience of the space.

 

PROJECT_DEVELOPMENT

Virtual Met Museum –> Fantasy Experiment –> Art piece + Audio Guide

 

BASIC_SETUP_HOW_TO

First of all, there is tons of basic knowledge about Unity here. And how to set up a project from scratch, here.

 

✓ Import BIM 3D models into Unity

Basically, just put the .fbx file into the Assets folder of the project you just created. It's not too complicated, but there's one thing you should be aware of: the SCALE. It's good practice to set the scale correctly in the modeling application before importing the model into Unity; the associated details are described below, with a small import-scale sketch after the list:

  • 1 Unity unit = 1m
  • the fewer GameObjects the better. Also, use 1 material if you can
  • useful link: wiki unity3d
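If a model still comes in at the wrong size, an AssetPostprocessor can enforce the scale at import time; a small editor sketch (the scale factor is just an example):

```csharp
using UnityEditor;

// Editor-only sketch (place in an Editor folder): force a uniform import scale
// on incoming models, for when the export doesn't match Unity's 1 unit = 1 m.
public class ModelScaleImporter : AssetPostprocessor
{
    void OnPreprocessModel()
    {
        ModelImporter importer = (ModelImporter)assetImporter;
        importer.globalScale = 0.001f; // example: source file authored in millimeters
    }
}
```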

 

✓ Oculus Rift Plugin Setup in Unity3D

Just follow the clear instructions on YouTube!

 

✓ Add colliders to meshes

To prevent the player from walking through meshes (e.g. walls, stairs), we need to add a Collider component to the models. The steps are below, with a batch-adding sketch after them:

  • select the model
  • go to the Inspector
  • Add Component –> Physics –> Box Collider or Mesh Collider
  • a Mesh Collider fits the geometry more precisely than a Box Collider, but is also more expensive to use.

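Doing this by hand for a whole museum is tedious, so a tiny script can batch it; a minimal sketch:

```csharp
using UnityEngine;

// Minimal sketch: add a MeshCollider to every mesh under this object,
// so the player can't walk through walls, stairs, etc.
public class AddCollidersToChildren : MonoBehaviour
{
    void Awake()
    {
        foreach (MeshFilter mf in GetComponentsInChildren<MeshFilter>())
        {
            if (mf.GetComponent<Collider>() == null)
                mf.gameObject.AddComponent<MeshCollider>();
        }
    }
}
```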

 

✓ Occlusion Culling

This means that things you aren't looking at aren't rendered, so the game runs faster.

  •  geometry must be broken into sensibly sized pieces.
    • if you have one object that contains all the furniture, either the whole set or none of it will be culled.
  • tag all scene objects that you want to take part in occlusion as Occluder Static in the Inspector.
  • Bake!
  • useful link: unity3d manual

 

✓ Import 3D-Scanned Models from 123D Catch

  • Take about 20 photos around the object you want to 3D scan (a full 360 degrees!).
  • Upload the photos to 123D Catch.
  • Yeah, now you'll have both an .obj model file and a texture file!
  • Just download the files and drag the whole folder into the Assets folder of your Unity project!

 

POSSIBILITIES

  • Improved accessibility for people who can't visit the museum in person.
  • Installation design simulation.

 

THANKS_TO

It was a really good experience interning at the Media Lab of the Met. I knew I wanted to keep working in 3D and to step into the virtual reality world with the Oculus Rift, and it was a great match that I could take this topic on as my own project while also meeting the needs of the Met! From this internship I gained valuable resources from the museum, and got to know amazing mentors and colleagues at the Lab. This project led me into the world of virtual reality, and I'm glad and thankful to have been a Spring 2014 intern at the Media Lab of The Metropolitan Museum of Art.