Playhubs: Empathy, Emotion and Body Language

http://playhubs.com/event/playhubs-presents-empathy-emotion-and-body-language-in-vr-lessons-learned-from-15-years-of-research/

So on the 7th of March, I attended an interesting talk at Playhubs in Somerset House. First, I must say that the building itself is a lovely place, very grand and spacious, with great pubs and even an ice rink.

The talk was by my supervisors Marco Gillies and Sylvia Pan on Virtual Agents in Virtual Reality and the expression of emotion, empathy and body language.

It was a fascinating talk, ranging from the importance of virtual agents and their use in different applications and studies, to the newer techniques being used to create realistic movement. One technique that grabbed me was the use of actors and actresses to capture realistic body language and gestures. It makes sense if you think about it: if I needed someone to show an emotion on cue, I wouldn't entrust it to a random individual; it wouldn't be realistic and might even come out awkward and mixed with other emotions (such as uncertainty about whether they're doing it right and, knowing my friends, annoyance at having to do it at all). Then there's the obvious human flaw that we tend to overthink things.

 

Actors are trained to snap into a psychological state on a whim, and those with theatre experience know how to express a particular emotion clearly to a mixed crowd, even if in an exaggerated form. Still, one can't forget about other issues, such as cultural influence on gestures and displays of emotion: it's hard to generalise and choose an action that will be universally accepted as expressing a certain state, and expect everyone to receive it in its intended way. Then there are micro-expressions, which arguably 'speak louder than words' and which our bodies react to and acknowledge on a subconscious level. How do we strike a balance between that and expressing clear, distinctive emotion? Is it even possible to marry the two and have a real encounter with a virtual agent? Does this affect plausibility?

I’ll probably add more thoughts to this later.

 

Hmm…

Anyway, here are the images from the night!

 


IGGI AI GAME MODULE

In mid-February, the IGGI cohort and I began the first Game AI module of the programme. It covered the fundamental methods used to build game AI today, including (but by no means limited to) Procedural Content Generation (PCG), Behaviour Trees, Steering Behaviours, Finite State Machines and A*.

Being interested in the behavioural side of game AI (and, quite frankly, wanting to keep things clean and simple), I decided to develop a Finite State Machine in Unity and demonstrate some steering behaviours on a few virtual characters; a minimal sketch of the idea is below.
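
To be clear about what that means in practice, the skeleton of an FSM in Unity is just an enum of states and a switch statement in Update(). The sketch below is purely illustrative, with placeholder state names and transition checks rather than my actual project code.

```csharp
using UnityEngine;

// A minimal FSM skeleton in Unity: an enum of states and a switch in
// Update(). State names and transition conditions here are placeholders.
public class SimpleFSM : MonoBehaviour
{
    enum State { Idle, Moving }

    State current = State.Idle;

    void Update()
    {
        switch (current)
        {
            case State.Idle:
                // Behaviour for this state would run here each frame.
                if (ShouldStartMoving())
                    current = State.Moving;
                break;

            case State.Moving:
                // Steering code would run here each frame.
                if (ShouldStop())
                    current = State.Idle;
                break;
        }
    }

    // Placeholder transition checks, just to make the skeleton runnable.
    bool ShouldStartMoving() { return Input.GetKeyDown(KeyCode.Space); }
    bool ShouldStop()        { return Input.GetKeyUp(KeyCode.Space); }
}
```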

Now, to begin, I must say I'm in love with two textbooks in particular (mainly because they got me through this module and explained a lot of the difficult concepts in a simple, broken-down manner). If you are just starting out in game AI, these are the books to grab as references, or if you're looking for good examples to start from.

One is called Game AI Pro by Steven Rabin,


and the other is Programming Game AI by Example by Mat Buckland.


I first wanted to make a snooping game based on my submission to the Goldsmiths Global Game Jam, a horror-type game where the player had to avoid Lady Macbeth and reach the end of the building without getting caught. However, after a brief visit with a friend's dog, I decided to do a dog simulation instead.


The concept behind this application of FSMs and steering behaviours is a group of dogs playing alone in the house while being watched on camera by the 'owners', who have installed a 'Dog Camera'. The dogs transition between two states, Chase and Play, which use the Arrive and Wander steering behaviours respectively. In the Chase state (Arrive), the dogs move in unison towards a bone object placed in the scene. When one of the dogs collides with the bone, they all switch to the Play state (Wander) and wander around the room at random, exploring and interacting with the objects in it. A rough sketch of the two steering behaviours is given below.

Added functionality is the ability to change the camera perspective and so choose which dog to focus on. The default camera shows all perspectives, but specific key inputs switch the camera to follow a single dog's movement through the scene for a closer look at its behaviour. There is also the ability to mute and un-mute the sound of the dogs barking in the scene.
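
For anyone curious about what the two steering behaviours actually do, here is a rough sketch in Unity C#, loosely following Buckland's formulations rather than my exact project code; the field names, the "Bone" target reference and the tuning numbers are made up for the example.

```csharp
using UnityEngine;

// Rough sketch of the Arrive and Wander steering behaviours, loosely
// following Buckland's "Programming Game AI by Example". Field names
// and tuning values are illustrative, not taken from the actual project.
public class DogSteering : MonoBehaviour
{
    public Transform bone;               // target for Arrive (assumed name)
    public bool chasing = true;          // true = Chase/Arrive, false = Play/Wander

    public float maxSpeed = 3f;
    public float deceleration = 2f;      // Arrive: larger = gentler stop
    public float wanderRadius = 1f;      // Wander: radius of the wander circle
    public float wanderDistance = 2f;    // Wander: circle's offset ahead of the dog
    public float wanderJitter = 0.25f;   // Wander: random displacement per frame

    Vector3 velocity;
    Vector3 wanderTarget = Vector3.forward;

    void Update()
    {
        Vector3 steering = (chasing && bone != null) ? Arrive(bone.position) : Wander();
        velocity = Vector3.ClampMagnitude(velocity + steering * Time.deltaTime, maxSpeed);
        transform.position += velocity * Time.deltaTime;
        if (velocity.sqrMagnitude > 0.001f)
            transform.forward = velocity.normalized;   // face the direction of travel
    }

    // Arrive: head towards the target, slowing down as the distance shrinks.
    Vector3 Arrive(Vector3 target)
    {
        Vector3 toTarget = target - transform.position;
        float distance = toTarget.magnitude;
        if (distance < 0.01f) return -velocity;        // brake when we get there

        float speed = Mathf.Min(distance / deceleration, maxSpeed);
        Vector3 desired = toTarget * (speed / distance);
        return desired - velocity;
    }

    // Wander: jitter a point on a small circle projected in front of the agent
    // and steer towards it, which gives a natural-looking random walk.
    Vector3 Wander()
    {
        wanderTarget += new Vector3(Random.Range(-1f, 1f), 0f, Random.Range(-1f, 1f)) * wanderJitter;
        wanderTarget = wanderTarget.normalized * wanderRadius;

        Vector3 targetWorld = transform.position
                            + transform.forward * wanderDistance
                            + transform.TransformDirection(wanderTarget);
        Vector3 desired = (targetWorld - transform.position).normalized * maxSpeed;
        return desired - velocity;
    }
}
```

In the actual project the `chasing` flag would be flipped by the FSM when a dog reaches the bone, so the whole pack drops from Arrive into Wander together.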

 

Watch the video here: