Augmented Reality Elective

Purpose and Brief

This elective builds onto my main year-long project: an Augmented Reality art book.

An exploration of how augmented reality can be used within an art book to bring static images to life.

  • To create my own Augmented Reality art book, themed around the original dark versions of fairy tales.

Realistically I can’t create a whole art book within this elective, so I will specifically explore the Augmented Reality process and make it work for just one piece of artwork.

Ideas

  • Incorporating motion capture?
  • Augmented Reality in 2D?
  • Augmented Reality in 3D?

Research

2D AR:

3D AR:

 

Tests: Before I started anything on my project, I wanted to test whether the software would actually work and whether I would run into any issues along the way. Vuforia with Unity seemed the easiest to work with and had many resources online compared to other Augmented Reality tools.

Unity Vuforia Process:


In Unity 3D, using the Vuforia plugin, I was able to attach both 2D and 3D objects to an image-tracked target. Technically, though, the 2D is not really 2D: it is a 3D plane with an image attached to it.

 

For my year-long capstone project, I had decided I would not be incorporating 3D elements or motion capture. Rather, I would focus on the 2D aspect of Augmented Reality, much like the EyeJack app, where 2D Augmented Reality is overlaid on top of illustrations.

Possible scenes to test:


Chosen Scene: Little Red Riding Hood

I have chosen to explore the first page of my Augmented Reality illustration book: Little Red Riding Hood.

Mood Board:

[Images: mood board]

Layout/Composition Exploration:

WIP:

I drew up the illustrations in Photoshop, along with the images required for the Augmented Reality. I then imported all the images into Premiere Pro to make a video for Unity/Vuforia to read. I included audio, but the audio would play on start-up of the app rather than when the image target had been detected, so I split the audio and video into two separate sequences. Following this tutorial (https://developer.vuforia.com/forum/faq/unity-how-can-i-play-audio-when-targets-get-detected), I created an audio source attached to the image target. I then attached the audio file (which must be in mp3 format) to the audio source and explored the 3D sound settings.

[Screenshot: audio source settings]

The “Play on Awake” setting must be checked on, and the Volume Rolloff was interesting to explore: the audio’s volume falls off the further you are from the physical image target in real life. The script suggested by the previous tutorial did not work, at least for my version of Unity, but I found another that did (https://stackoverflow.com/questions/36924828/play-audio-when-model-renders-in-vuforia-for-unity3d). This one modifies the “DefaultTrackableEventHandler” Vuforia script so that when the image target can no longer be detected/tracked by the camera, the audio stops playing along with the image disappearing, as it was extremely strange hearing audio while the image was not being tracked.

[Screenshots: the modified DefaultTrackableEventHandler setup]

After this, everything I needed to do in Unity was complete; it functioned as I wanted with my webcam in the Unity game test, so all that remained was to export the build to my phone.

[Screenshot: the AR working in the Unity webcam test]

Next, I wanted to export the build to my Android phone so that I could view the AR on more than just the webcam. This process was rather long and required several other applications to be downloaded, such as the Android SDK and the Java Development Kit. I followed this tutorial step by step to build to Android: https://unity3d.com/learn/tutorials/topics/mobile-touch/building-your-unity-game-android-device-testing. I had to install packages in the Android SDK Manager such as “Android SDK Tools”, “Android SDK Platform-tools”, “Android SDK Build-tools” and “SDK Platform”. I then had to adjust the Build/Player Settings within Unity; getting the naming right was important for it to export correctly. Company Name, Product Name and the Identification all needed to be changed, and the Identification had to be in a specific format such as “com.Sitepoint.ARDemo”. When exporting the build, I ran into multiple issues. Firstly, the root folders of the Android SDK and JDK needed to be pointed to in the Unity Preferences. Next, there was a problem with the scripts, saying “Found plugins with same names and architectures”; I had to delete a duplicate of one of the Vuforia scripts that I had accidentally imported twice under two different folders.

After these issues were fixed I was able to transfer the build to my Android phone, but then I encountered a major issue: the build did not detect my tracking marker image, and the camera looked out of focus. I looked at the camera focus issue first, as this could have been why the marker wasn’t being detected. The camera could not focus while the application was open, yet it was not the phone camera itself at fault, as every other application that used the camera functioned properly. I then tried adding a script to the AR Camera in Unity which was supposed to fix the auto-focus issue.

[Screenshots: camera auto-focus script]

However, I found no visible difference when I rebuilt to my phone, and the camera still refused to detect the image and overlay the AR on it, even though it worked perfectly with my webcam in Unity.

I searched Google and all of the Vuforia/Unity/Android forums but could not find anything relevant to solve my issue. I am left wondering where the problem occurs, and whether it is specific to exporting Vuforia builds to Android phones (or to a particular software update).

In the end, I was unable to successfully export the build from my Mac to my Android phone.

Motion Capture: Crowd Simulation Games Course

Research Question

How effectively does a simulated crowd interact with a given environment through the use of dynamics and simulations?

I wanted to create a crowd-simulation games course where the “crowd” runs through a specified path with obstacles. I will be researching various game-course designs, drawing inspiration from anime and game shows such as Deadman Wonderland and Wipeout, as well as various videos on YouTube.

Design of Games Course Research

[Images: Wipeout and other game-course references]

Obstacles

  1. Swinging balls
  2. Ramp
  3. Blowing Wind Machine
  4. Moving Floor
  5. Flowing Bridges
  6. Maze

 

Concept and Storyboard:

[Images: concept and storyboard sketches]

Software Possibilities: Maya and/or Unreal Engine for the games course. (Edit) After much thought, the optimal choice is to just use Maya instead of Unreal: there is no real need for Unreal, as I am not trying to control the crowd simulation as a player, I just need to view it through a camera. I also decided to stick with software I have used before and develop my skills in it further, as I haven’t done much modelling.

Learning Process Documentation:

  • Modelled the games-course environment with polygon primitives. [Screenshot]
  • NCloth Simulation
    I created a simple cloth simulation that makes the cloth bounce up and down, giving the scene more life and detail.
  • Wind Simulation
    I made a small wind simulation as part of the obstacle course, to blow the running crowd over and to experiment with dynamics.
  • Ball Swing
    I experimented with attaching a swinging ball to the obstacle course through various techniques, such as making a small rig and animating it, or using a nail constraint. Animating it would have meant hand-key-framing a natural-looking swing, and I am not very good at animation, so the nail constraint seemed the easier option and I stuck with that. After creating the nail constraint for the sphere, I parented a long polygon primitive to the sphere to make it look like a swing (a scripted sketch of this setup follows this list).
    Tutorial used: https://www.youtube.com/watch?v=xuJoSsOZNfs [Screenshot]
  • Due to time constraints, I cut the obstacle course short and excluded the other obstacles I had planned. There are already many dynamic obstacles for the Original Agents to interact with.
  • Crowd Simulation
    I worked together with Rebecca Hand on the shoot, clean-up and crowd-simulation parts of the project. The crowd simulation required us to motion-capture shoot data, which we then cleaned up in Cortex and MotionBuilder. Afterwards we imported it into Maya and explored the Trax Editor to loop the animation. This was a little difficult, since our motion-captured actor was moving forward, so when looped he would teleport back to the starting location to begin his running animation again. To fix this we first tried key-framing the motion into place by putting a locator on his nose and key-framing his motion in relation to the locator. This did not work, so we instead locked the translation of the x/z motion in MotionBuilder’s character settings. This worked, and I was able to begin looping my running animation in Maya (Animation Editors > Trax Editor). We played around with this to loop the animation and to create a faster running cycle as well as the normal cycle. When we tried to create a slow jog cycle, however, the looped animation drifted upwards into the air after each loop.
  • After all this, we started to explore Miarmy, following a number of tutorials along the way. First we made our character into an Original Agent. We then had to edit the bounding boxes that control his collisions and so on, and we plugged the “action” into the Original Agent with no trouble. However, after all the work put into getting the Original Agents to perform the motion-captured action, I ran into further issues. I tried adding collision and then dynamics decision nodes to the Original Agents so that an agent would react and fall over when interacting with another object, but the collisions and dynamics did not function properly: the agents would glitch out whenever collision decisions and dynamics nodes were attached. We ran through many tutorials trying to troubleshoot this, including going all the way back to fix our bounding boxes, but that was not the issue. After many days of trial and error, we gave up on Miarmy’s collisions and dynamics.
  • Therefore, I changed my idea: I dropped collisions and focused on the aspects of crowd simulation that do not require dynamics. I still wanted to do some sort of games course, so I decided to make a maze for the crowd to run through. This would still use motion capture and crowd simulation, but with collision-avoidance logic applied to the simulation instead.
  • I used the tutorial below to quickly build up the maze assets: I created a plane, set the plane’s texture to a maze image, and clicked Modify > Convert > Texture to Geometry. After that I used Edit Mesh > Extrude to raise the maze walls (a scripted sketch follows this list).
    https://www.youtube.com/watch?v=vwuQmvGybyA
  • After putting more thought into this and into how much time I had left (less than a week), I decided not to do crowd simulation at all. Crowd simulation is, obviously, a simulation, and has to be simulated/rendered from frame 1, so I would not be able to transfer my files and render different parts of the sequence on different computers, which slows down render time considerably. I would probably not have had enough time to render and edit it all together by the due date.
  • I thought about possible projects that I could achieve within the timespan I had left. I saw an interesting video that inspired me to do something similar with my motion-captured people. So, using just the motion-captured run-cycle loop I had created earlier, I came up with this running-maze idea, which still incorporates a games-course aspect. Instead of the conventional maze with people lost inside it, I decided the people could be trapped on top of the walls, falling to their deaths, rather than inside and lost.
  • A problem that quickly arose, however, was inter-mesh penetration. In the crowd simulation there was a logic node that made the crowd walk on top of the terrain; without it, I just had to do my best to eyeball the characters’ motion against the maze and match them up.
  • I attached my motion-captured animation to a motion path (a scripted sketch follows this list):
    Draw the motion path by selecting the magnet icon beside “no live surface” while the maze is selected, so the maze becomes live and the path can be drawn magnetically onto the mesh rather than floating in 3D space.
    Shift-select the character first, then the path, then Animation > Constrain > Motion Paths > Attach to Motion Path.
    In the Channel Box there will now be a motionPath input under the character’s master control; clicking on it lets me adjust the character’s rotations and so on.
  • [Image] After setting all this up, I was able to start setting up camera shots, shaders and textures. As seen in my idea generation/mood board, I wanted a very minimalistic and monochromatic look for the scene, using just white/grey/black. The characters in the scene needed a bit more “pop”, so I made them shinier and more reflective with materials like gold, chrome and glass. The maze looked very dull and plain when I tried the concrete look from my mood board; this was not visually interesting, so instead I made the material highly reflective, which made the scene much more dynamic.
  • Experiments with lighting. [Render captures]
  • The top one has the dome light’s exposure set to 3.25, while the bottom is set to 1. The top creates a nice gradient around the character, and better reflections/shadows too. [Image]
  • The top half of the body is without directional-light exposure, and the bottom half is with it. The bottom half is too shiny/reflective and over-exposed, even with the light intensity turned way down, so I opted for the more matte-looking finish rather than shine.
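
For reference, the ball-swing setup above can be approximated in script. This is a minimal maya.cmds sketch, assuming illustrative names and values (swingBall, swingArm, the pivot position); in practice I did the same thing through the Soft/Rigid Bodies menu:

```python
import maya.cmds as cmds

# Illustrative recreation of the nail-constrained swinging ball.
ball = cmds.polySphere(name='swingBall', radius=1)[0]
cmds.move(4, 6, 0, ball)  # offset from the pivot so the ball has somewhere to swing from

# The ball must be an active rigid body before it can be nail-constrained.
cmds.rigidBody(ball, active=True)
cmds.constrain(ball, nail=True, position=(0, 12, 0))  # the point the ball hangs from

# Gravity drives the swing.
field = cmds.gravity(magnitude=9.8)[0]
cmds.connectDynamic(ball, fields=field)

# The long primitive parented under the ball reads as the swing's arm.
arm = cmds.polyCube(name='swingArm', width=0.2, height=6, depth=0.2)[0]
cmds.parent(arm, ball)
```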
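
The maze build can be sketched the same way. Here I assume the Texture to Geometry conversion has already produced a mesh (called mazeGeo here) and that the wall regions fall in a known face range; both names are hypothetical:

```python
import maya.cmds as cmds

# 'mazeGeo' stands in for the mesh produced by Modify > Convert > Texture to Geometry,
# which splits the plane into wall-coloured and floor-coloured face regions.
cmds.select('mazeGeo.f[0:499]')             # hypothetical face range covering the wall regions
cmds.polyExtrudeFacet(localTranslateZ=3.0)  # pull the wall faces up to form the maze walls
```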
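
The motion-path attachment maps directly onto Maya's pathAnimation command. A sketch with hypothetical names for the master control and for the curve drawn on the live maze surface:

```python
import maya.cmds as cmds

# 'master_ctrl' = the character's master control; 'mazePath' = the curve drawn
# on the live maze mesh (both names are illustrative).
cmds.pathAnimation('master_ctrl', curve='mazePath',
                   fractionMode=True,           # u-value runs 0-1 along the curve
                   follow=True,                 # rotate the control to face along the path
                   followAxis='z', upAxis='y',
                   startTimeU=1, endTimeU=400)  # animate across frames 1-400
# A motionPath node now appears in the Channel Box under the master control,
# where rotations and offsets can be adjusted.
```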

FIGURATIVE REFLECTIVITY

Brief

An open brief to create a 30-second motion-capture sequence which explores alternative, non-figurative and experimental visualisations of movement and the body in motion. This is an opportunity to explore further the possibilities of creating a visual sequence using motion capture. You are encouraged to take a more experimental approach by exploring alternative visualisations of the body in motion: dance, sport, mime, etc. This might include the use of geometric shapes, particle effects, fire, smoke effects and dynamic simulations. The use of the virtual camera as an element can also be explored.

Initial Ideas

  • My friend Kayla has a background in dance, ballet and choreography, and could potentially be our motion-capture actor.

I decided I wanted a dance of some sort: either an abstract dance or some set choreography.

Abstract Dance – https://www.youtube.com/watch?v=RVVI1uOZ5p8

Choreography – https://www.youtube.com/watch?v=AO5W1zBHPKg

Fire Dance Anime – https://www.youtube.com/watch?v=LPi-38oUtC8

Fire Particle Effect Following – https://www.youtube.com/watch?v=kaRGazSwt7c

Particle Effect Fire Constant – https://www.youtube.com/watch?v=3hqkEbVQwIg

  • I would like to use various particle, fire and smoke effects to go with my dance, whether these effects pop up after a certain movement or follow behind/attach somewhere to the rig.
  • The rig might just be built from simple geometric objects that resemble a human form, e.g. a circle for the head, a cone for the body (for a female figure), cylinders for the arms, etc. Or it could be all one object, e.g. all balls, or even a basic character rig.
  • The shapes could bloat up or shrink for figurative dancing; certain parts could enlarge or reduce in size during the dance.

Further Ideas

After Greg posted some videos of various motion-captured abstract visualisations on the AUT motion capture page, I decided to rethink my initial concepts. I became interested in these moving digital-sculpture-type particles, and in dynamic particles that follow the initial movement and remain for a period of time before fading away. I want to create something similar, but with circular particles and their movement rather than lined particles.

Human Movement Converted Into Digital Sculptures

The Physics of Kung Fu Brought to Life Through Motion Capture Visualizations

Asphyxia: A Striking Fusion of Dance and Motion Capture Technology


 

Production Diary

Date Action Taken
24/03/16: Assignment brief given and introduced.
31/03/16: Researched possible dances and documented them.
11/04/16: Possible ideas generated and documented.
25/04/16: Motion capture group formed; personal and possible ideas discussed with Greg.
02/04/16: Sorted out an actor and planned the time to shoot with her.
06/04/16: Equipment needed for the motion capture shoot prepared. Double-checked the actor was available for the shoot.
08/04/16: Motion capture shoot.
15/04/16: Cortex clean-up.
18/04/16: Cortex clean-up continued.
06/05/16: Finished all the Cortex clean-up.
06/05/16: Began solving the data in MotionBuilder.
13/05/16: Began exploring various particle effects and ways to display the movement, e.g. abstract shapes, a skeleton, or a polygon human figure.
20/05/16: Finalised the idea of using spherical shapes attached to locators to form the movement and shape.
27/05/16: Attached particle effects.
01/06/16: Inserted lighting and cameras. Rendered.
02/06/16: Began and finished final editing of the sequence, including research and editing of the background soundtrack.

Work in Progress

[Screenshot]

Imported the locators to experiment with attaching an actor to them.

[Screenshot]

Attached a character to the mocap data, but the movements seemed awkward and didn’t give the elegant effect I wanted to capture.

[Screenshot]

I also experimented with building the character out of various polygons and attaching the movement to those; however, the effect was still quite stiff and rigid.

[Screenshots]

I decided to go with a more abstract idea, so I used spheres and applied a texture to create a bubbly, reflective marble look and to carve out the form of the dancer. These were all individually assigned to each mocap marker/locator. The locators had to be imported into Maya as an FBX to be readable as well.

[Screenshot]

I added particles by adding an emitter to a few objects, in this case the spheres in the scene. I played around with the lifespan, size and gravity of the particles, as well as the different particle render types, shapes and sizes.

[Screenshot]
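
Roughly in script form, this particle setup looks like the following maya.cmds sketch (object names are illustrative; I actually set everything up through the Dynamics menus and Attribute Editor):

```python
import maya.cmds as cmds

# Emit from one of the marker spheres ('sphere1' is illustrative).
cmds.select('sphere1')
cmds.emitter(type='omni', rate=100, speed=0.5, name='sphereEmitter')

# Create a particle object and wire it to the emitter.
cmds.particle(name='danceParticles')
cmds.connectDynamic('danceParticles', emitters='sphereEmitter')

# Lifespan, render type and size, roughly as explored in the Attribute Editor.
cmds.setAttr('danceParticlesShape.lifespanMode', 1)        # constant lifespan
cmds.setAttr('danceParticlesShape.lifespan', 2.0)          # seconds
cmds.setAttr('danceParticlesShape.particleRenderType', 4)  # render as spheres

# Gravity pulls the particles downward.
field = cmds.gravity(magnitude=4.0)[0]
cmds.connectDynamic('danceParticles', fields=field)
```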

Once I was happy with the overall adjustments to the particles and their lifespans/sizes, I worked on their colour. I wanted the colour to change at various stages of the dance, to show progression and change. This was done by applying a material, changing its colour, and key-framing each point where I wanted it to change.

[Screenshots]
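
In script form, the colour progression amounts to keying the material’s colour channels at each stage of the dance (the material name, colours and frame numbers here are hypothetical):

```python
import maya.cmds as cmds

mtl = 'particleMtl'  # stands in for the material applied to the particles

# Starting colour at frame 1 (pale blue-white), keyed per channel.
for ch, v in zip('RGB', (0.9, 0.9, 1.0)):
    cmds.setKeyframe(mtl, attribute='color' + ch, value=v, time=1)

# Shift to pink at frame 150 for the next section of the dance.
for ch, v in zip('RGB', (0.9, 0.3, 0.5)):
    cmds.setKeyframe(mtl, attribute='color' + ch, value=v, time=150)
```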

To finish, I made a box/room by creating a few planes and standing them up in a box formation, and set their surfaces to 1.00 reflectivity to create a very reflective room. This produced a nice reflection behind the dancer where her form can be seen, but the particles are not reflected, which gives it an abstract metaphor: the camera sees both reality and what the “dancer” is imagining. The dancer imagines she is elegant, flowing, with particles flying about; in reality she is just dancing normally, as seen in the reflection.

[Screenshot]
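
A minimal sketch of the reflective room (one wall shown; names and sizes are illustrative). The key step is pushing the material’s reflectivity to 1.00:

```python
import maya.cmds as cmds

# One wall of the room: a plane stood upright.
wall = cmds.polyPlane(name='roomWall', width=20, height=20)[0]
cmds.setAttr(wall + '.rotateX', 90)

# A fully reflective Blinn so the dancer's form shows in the walls.
mtl = cmds.shadingNode('blinn', asShader=True, name='wallMtl')
cmds.setAttr(mtl + '.reflectivity', 1.0)
cmds.select(wall)
cmds.hyperShade(assign=mtl)
```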