Getting Augmented with Vuforia: Guide for PlayMode (Unity plugin)

 


Here’s a post on how to install the Vuforia plugin for Unity, along with the results of my first basic run using ImageTargets in PlayMode.

I’m currently using an Alienware 15 R3 running Windows 10.


 

 

Step 1: Download the plugin.

The first thing to do once you have opened up Unity and started a new project (…sigh, I’ll wait for you to catch up…go on) is to download the free Vuforia plugin, which you can get from the Asset Store. But to save you precious mouse-clicks and roaming, here it is… You’re welcome.

https://www.assetstore.unity3d.com/en/#!/content/74050

Select -> Download -> Import.

Or you can get it from the Vuforia website:

https://developer.vuforia.com/downloads/sdk

Once you have this in your project folder you are pretty much halfway there! No, seriously… It should look like this.

[Screenshot: the Vuforia Unity extension imported into the project]

 

Step 2: Create a license key

So, in order to use this plugin you need to register your app and get a license key, which you will then need to enter into your Unity project. You do this by first going to the Vuforia developer website and creating an account.

Once you’ve been verified you can access the ‘Develop’ page. You’ll see a tab that reads ‘License Manager’ – click that, then select ‘Add License Key.’

Choose your project type and fill in the details, then voilà! You have a license. If only driving were that easy…

To get the key into your project, select your app, which should now be listed on the License Manager page, and copy the block of text under ‘License Key.’


Now return to Unity and type ‘VuforiaConfigure’ into the search bar of your Assets folder. A Unity asset should appear (with the Unity icon). Click on it and, in the Inspector, paste the key into the text field under ‘App License Key.’

Still with me?

Take a small water break and come back…

Step 3: Adding Targets

Okay! So now we can move on to adding our Image Targets. This will be the image that your program will recognize and make your model appear on top of.

Next to the ‘License Manager’ tab, you should also see a ‘Target Manager’ tab. Yes, you guessed it, click on that.


From there – similarly to creating a license – click on ‘Add Database’ (this will be the database of your targets; you can create multiple databases). You’ll be asked to name your database and also choose its type. There are three types: VuMark, Device, and Cloud.

  • VuMark:
    • Similar to Device, but allows for the subtle integration of AR targets that can store both data and AR experiences. You can have the same design on multiple objects yet still give each object its own unique data/experience based on its instance ID. It’s also customisable.
  • Device:
    • Standard. Stores the image/object that you want to use as a Target. This will enable you to initialize AR experiences with the Unity plugin.
  • Cloud:
    • Allows for easy editing of Targets – ‘real-time, dynamically changing content’, as they say on the website. Host and manage image targets online.

 

Well, I selected Device. Once that was done, select the database from the ‘Target Manager’ page and then choose which image you want as the target. You also have to add the dimensions (in Unity scene units… meters, I believe).

After that has been created you can download the database, which comes packaged as a Unity asset that you can import straight into your project.


Step 4: Setting up the project.

Okay so now we have all our tools in place. We can begin our first AR ImageTarget App!

First, delete the Main Camera from your scene and add the Vuforia ‘ARCamera’ prefab, which is provided in your Assets folder.

Then add the ‘ImageTarget’ prefab to your scene. This will handle the image target you created and the appearance of your model.

You should see a script attached called ‘Image Target Behaviour.’ Under ‘Database’, select the database you created on the website and imported into your project (mine is ‘TargetTest’). If the image isn’t picked automatically, select it in the ‘Image Target’ parameter below.

 


You should now see the image material added as a component at the bottom. Don’t worry if it still shows the cube map; it will still work. (Well, it should… fingers crossed.)
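If you want a script of your own to react when the target is found or lost (say, to play a sound or start an animation), you can hook into the Image Target’s tracking events. Below is only a minimal sketch modelled on the plugin’s own DefaultTrackableEventHandler sample – the class and method names are from the Vuforia Unity API as I understand it, so double-check them against the version you imported.

—————-

using UnityEngine;
using Vuforia;

// Minimal sketch: attach to the ImageTarget GameObject, next to Image Target Behaviour.
// Modelled on the plugin's DefaultTrackableEventHandler; verify the API names in your version.
public class MyTargetHandler : MonoBehaviour, ITrackableEventHandler
{
    private TrackableBehaviour mTrackableBehaviour;

    void Start()
    {
        mTrackableBehaviour = GetComponent<TrackableBehaviour>();
        if (mTrackableBehaviour != null)
            mTrackableBehaviour.RegisterTrackableEventHandler(this);
    }

    // Called by Vuforia whenever the tracking status of this target changes.
    public void OnTrackableStateChanged(TrackableBehaviour.Status previousStatus,
                                        TrackableBehaviour.Status newStatus)
    {
        bool found = newStatus == TrackableBehaviour.Status.DETECTED ||
                     newStatus == TrackableBehaviour.Status.TRACKED ||
                     newStatus == TrackableBehaviour.Status.EXTENDED_TRACKED;

        Debug.Log(found ? "Target found!" : "Target lost!");

        // Show or hide everything parented under the ImageTarget (e.g. your model).
        foreach (Renderer r in GetComponentsInChildren<Renderer>(true))
            r.enabled = found;
    }
}

—————-

Drop it on the ImageTarget and keep an eye on the Console while you wave the printed image in front of the webcam.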

Step 4 1/2: Check the right boxes!

Really important. Go back to the ‘VuforiaConfigure’ asset and make sure that under ‘Datasets’ your imported dataset is selected (Load ——- database). Once that box has been checked, another will appear called ‘Activate.’ Check that box too!

Then, as we are running in PlayMode (for now), make sure that under ‘Webcam’ your preferred camera has been selected from the ‘Camera Device’ options and that ‘Disable Vuforia PlayMode’ is unchecked.

 

Note: I’ve found that if more than two datasets are checked it won’t run. I don’t know if this is a bug…
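(For completeness: the checkboxes are the easy route, but the same thing can be done from a script at runtime. The sketch below is based on how I understand the dataset API in this version of the plugin – the ObjectTracker/DataSet calls are assumptions lifted from the plugin’s own dataset-loading samples, and ‘TargetTest’ is just my database name – so treat it as a rough guide rather than gospel.)

—————-

using UnityEngine;
using Vuforia;

// Rough sketch: load and activate a device database from code instead of ticking
// the 'Load ... database' / 'Activate' boxes. The API calls are assumptions based
// on the plugin's dataset-loading samples; ideally run this after Vuforia has initialised.
public class LoadMyDataSet : MonoBehaviour
{
    public string dataSetName = "TargetTest"; // my database name – replace with yours

    void Start()
    {
        ObjectTracker tracker = TrackerManager.Instance.GetTracker<ObjectTracker>();
        if (tracker == null) return;

        DataSet dataSet = tracker.CreateDataSet();
        if (dataSet.Load(dataSetName))
        {
            tracker.Stop();                   // the tracker must be stopped while activating
            tracker.ActivateDataSet(dataSet);
            tracker.Start();
        }
        else
        {
            Debug.LogError("Could not load dataset: " + dataSetName);
        }
    }
}

—————-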


Step 5: Add your model

Now add your model as a child GameObject of the ImageTarget in the Hierarchy. You can do this by dragging and dropping the model asset from your Assets folder onto the ‘ImageTarget’ entry in the Hierarchy. ‘ImageTarget’ should now expand to show the model indented within it (mine is ‘butterfly’).
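(If you’d rather do the parenting from code – say you want to spawn the model at runtime – it’s just a matter of setting the model’s parent transform to the ImageTarget. A tiny sketch, with a hypothetical ‘modelPrefab’ standing in for whatever model you use:)

—————-

using UnityEngine;

// Tiny sketch: spawn a model prefab and parent it under the ImageTarget at runtime.
// This is equivalent to dragging the model onto 'ImageTarget' in the Hierarchy.
public class AttachModelToTarget : MonoBehaviour
{
    public GameObject modelPrefab; // e.g. the butterfly – assign in the Inspector
    public Transform imageTarget;  // drag the ImageTarget here in the Inspector

    void Start()
    {
        GameObject model = Instantiate(modelPrefab);
        model.transform.SetParent(imageTarget, false); // keep local position/rotation/scale
        model.transform.localPosition = Vector3.zero;  // sit the model on the target
    }
}

—————-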


Step 6: Print & Run

Once all that has been done, print off the image you’ve set as a Target and you’re ready to go! Press play and hopefully you should see your model!

Note: I’ve found it doesn’t matter if the image is in black and white.

Update:

Now deployed to Samsung Galaxy S7


Playhubs: Empathy, Emotion and Body Language

http://playhubs.com/event/playhubs-presents-empathy-emotion-and-body-language-in-vr-lessons-learned-from-15-years-of-research/

So on the 7th of March I attended an interesting talk at Playhubs in Somerset House. First I must say that the building itself is a lovely place, very grand and spacious, with great pubs and even an ice rink.

The talk was by my supervisors Marco Gillies and Sylvia Pan on Virtual Agents in Virtual Reality and the expression of emotion, empathy and body language.

It was a very interesting talk, ranging from the importance of virtual agents and their use in different applications and studies to the new techniques being utilised to create realistic movement. One of the techniques that grabbed me was the use of actors and actresses to replicate realistic body language and gestures. It makes sense if you think about it: if I needed someone to show an emotion on cue, I wouldn’t entrust it to a random individual; it wouldn’t be realistic and might even come out awkward and mixed with other emotions (such as uncertainty as to whether they’re doing it right and, knowing my friends, annoyance that they had to do it at all). Then there is the obvious flaw that humans tend to overthink things.

 

Actors are trained to snap into a psychological state at will, and those with experience in theatre know how to express a certain emotion clearly to a mixed crowd, if in an exaggerated form. Though one can’t forget about the other issues, such as the cultural influence on gestures and displays of emotion: it’s hard to generalise and choose an action that will universally be accepted as a certain state and expect everyone to receive it in its intended way. Then there are also micro-expressions, which arguably ‘speak louder than words’ and are what our bodies react to and acknowledge on a subconscious level(?). How do we strike a balance between that and expressing clear, distinctive emotion? Is it even possible to marry them to have a real encounter with a virtual agent? Does this affect plausibility?

I’ll probably add more thoughts to this later.

 

Hmm…

Anyway here are the images from the night!

 


IGGI AI GAME MODULE

In mid-February, the IGGI cohort and I began our first Game AI module of the program. It covered the fundamental methods currently used to build game AI. This includes, but is by no means limited to, Procedural Content Generation (PCG), Behaviour Trees, Steering Behaviours, Finite State Machines and A*.

Being interested in the behavioural aspect of Game AI (and, quite frankly, wanting to keep things clean and simple), I decided to try to develop a Finite State Machine in Unity and show some steering behaviours for a few virtual characters.

Now, to begin, I must say I’m in love with these two textbooks in particular (mainly because they got me through this module and explained a lot of the difficult concepts in a simple, broken-down manner). If you are just starting out in Game AI, these are the books to grab as references, or if you’re looking for good examples to start off from.

One is Game AI Pro, edited by Steve Rabin, and the other is Programming Game AI by Example by Mat Buckland.

I first wanted to make a snooping game based on my submission to the Goldsmiths Global Game Jam. It was a horror-type game where the player had to avoid Lady Macbeth and reach the end of the building without getting caught. However, after a brief visit with a friend’s dog, I decided to do a dog simulation instead.

The concept behind this application of FSMs and steering behaviours is a group of dogs playing alone in a house, being watched on camera by the ‘owners,’ who have installed a ‘Dog Camera.’ The dogs transition between two states, Chase and Play, which use the two steering behaviours Arrive and Wander respectively. When in the Chase state (Arrive), the dogs move in unison towards a bone object placed in the scene. When one of the dogs collides with the bone object, the dogs switch to the Play state (Wander). In this state the dogs wander around the room randomly, exploring and interacting with the objects in the room.

Added functionality is the ability to change the camera perspective, and therefore the freedom to choose which dog to focus on. The main default camera shows all camera perspectives, but by using specific key inputs the camera can be switched to follow one dog’s movement in the scene and get a closer look at its behaviour. There is also the ability to mute and un-mute the sound of the dogs barking in the scene.
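For anyone curious what this looks like in code, here’s a stripped-down sketch of the idea (not the actual project code): each dog flips between a Chase state that uses Arrive steering towards the bone, and a Play state that uses Wander. The ‘bone’ reference and the tuning values are placeholders.

—————-

using UnityEngine;

// Stripped-down sketch of the dog FSM: Chase (Arrive towards the bone) <-> Play (Wander).
// Not the actual project code; 'bone' and the tuning values are placeholders.
public class DogBehaviour : MonoBehaviour
{
    enum State { Chase, Play }

    public Transform bone;            // the bone object in the scene
    public float maxSpeed = 3f;
    public float arriveRadius = 2f;   // start slowing down within this distance
    public float wanderJitter = 1.5f; // how erratically the wander heading changes

    State state = State.Chase;
    Vector3 wanderDirection;

    void Update()
    {
        Vector3 velocity = (state == State.Chase) ? Arrive(bone.position) : Wander();
        transform.position += velocity * Time.deltaTime;
        if (velocity.sqrMagnitude > 0.001f)
            transform.forward = velocity.normalized; // face the direction of travel
    }

    // Arrive: full speed when far from the target, easing to a stop as the dog reaches it.
    Vector3 Arrive(Vector3 target)
    {
        Vector3 toTarget = target - transform.position;
        float distance = toTarget.magnitude;
        float speed = maxSpeed * Mathf.Clamp01(distance / arriveRadius);
        return (distance > 0.001f) ? toTarget / distance * speed : Vector3.zero;
    }

    // Wander: keep a heading and randomly jitter it each frame for aimless exploration.
    Vector3 Wander()
    {
        wanderDirection += new Vector3(Random.Range(-1f, 1f), 0f, Random.Range(-1f, 1f))
                           * wanderJitter * Time.deltaTime;
        wanderDirection = Vector3.ClampMagnitude(wanderDirection, 1f);
        return wanderDirection * maxSpeed * 0.5f;
    }

    // Touching the bone flips this dog into the Play state
    // (in the project the whole pack switches together).
    void OnCollisionEnter(Collision collision)
    {
        if (state == State.Chase && collision.transform == bone)
            state = State.Play;
    }
}

—————-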

 

Watch the video here:

Global Game Jam 2016!

So a few members of IGGI, along with myself, took part in the 2016 Global Game Jam, held at Goldsmiths, University of London, in South East London (aka my university ;D). It was a tiring but fun three-day adventure trying to create a game on the theme of ‘ritual’ whilst trying not to overload on the countless piles of junk food made available to all participants of the Jam.

Being the VR lover that I am, I teamed up with fellow VR enthusiast and 3D artist/programmer Jing Chun Tan to create a horror VR game based on Shakespeare’s Macbeth (which I’m a huge fan of).

Here’s the link to the game jam page and a few snippets from our yet-to-be-completed game.

http://globalgamejam.org/2016/games/curse-macbeth

Setting up the Kinect on OSX (El Capitan)


With Apple buying out PrimeSense (thanks, Apple…), installing the Kinect on OS X has become a little fiddly. Here’s a step-by-step guide to getting it up and running.

 

Disable System Integrity Protection

System Integrity Protection (SIP) is a default security measure introduced by Apple from OS X 10.11 onward. This ‘rootless’ feature protects OS X from compromise by malicious code by locking down specific system-level locations in the file system, which prevents the user from making changes to those locations even via sudo commands. So, in order for us to proceed, we need to turn it off.

  • Restart your Mac in Recovery mode
    • Restart your Mac holding down Cmd-R
  • Find Terminal in the Utilities menu and type in the following: csrutil disable
  • Restart your Mac

Great! Easy start. Now we will be able to have full access.

Download and install MacPorts

http://www.macports.org/install.php

 

Install Dependencies

  • First we have to install a few libraries (if you haven’t got them already) in order to get the USB port on your Mac working with the Kinect.
  • Go into Terminal and type:

—————-

sudo port install libtool

—————-

  • Now restart your Mac
  • Next, install the development version of libusb. Type into Terminal:

—————-

sudo port install libusb-devel +universal

—————-

  • Once it’s been installed, restart your Mac once again.

Install OpenNI

  • (Optional) Create a Kinect directory in Home to place all applications you’ll need to run the Kinect on the Mac.
  • Open up Terminal and type in:

—————–

mkdir ~/Kinect

cd ~/Kinect

—————–

  • As the download page on the PrimeSense website is not working, here’s a link to the OpenNI unstable release. Do not try to download the OpenNI v2 beta, as it relies solely on the Microsoft Kinect SDK, which we cannot use. The version we are going to use is the OpenNI SDK (v1.5.7.10):
  • https://mega.nz/#!yJwg1DJS!uJiLY4180QGXjKp7sze8S3eDVU71NHiMrXRq0TA7QpU
  • Move the Zip file to your Kinect folder and double-click to uncompress and reveal the SDK folder.
  • Open Terminal and navigate to the OpenNI SDK folder.
  • Once in the folder, type:

—————-

sudo ./install.sh

—————-

Install SensorKinect

  • First type this command in Terminal to prevent errors when installing SensorKinect.

—————

sudo ln -s /usr/local/bin/niReg /usr/bin/niReg

—————

  • Go to this Github repository and click Download ZIP.
  • https://github.com/avin2/SensorKinect
  • Move the Zip to the Kinect folder and uncompress it
  • Navigate to the SensorKinect093-Bin-MacOSX-v5.1.2.1.tar file inside the Bin folder and uncompress it.
  • Open Terminal and navigate to the SensorKinect093-Bin-MacOSX folder
  • Install by typing the following command:

————–

sudo ./install.sh

————-

  • It will prompt you to enter your password
  • If it works, it will install the PrimeSense sensor


Install NiTE

  • Last thing to install. Go here and download NiTE-Bin-MacOSX-v1.5.2.21.tar.zip
  • Add this file to your Kinect folder and uncompress it
  • Go into Terminal and navigate to the NiTE folder
  • Install NiTE by typing in the following command:

————-

sudo ./install.sh

————-

Once that is done, you’ve pretty much finished! Now try and run some examples!

  • Plug in the Kinect
  • Copy the sample XML files from NiTE/Data over to the Data folder in SensorKinect
  • Open Terminal and navigate to NiTE/Samples/Bin/x64-Release
  • Run the first Demo by typing in the following command:

————

./Sample-PointViewer

————

If everything is set up correctly, a new window should pop up and display a tracking demo!

(Note: you might want to re-enable SIP afterwards by repeating the Recovery-mode steps with csrutil enable.)