The Virtual Persona Application

While case study 3 of Sub Project B is taking shape after the last workshop, there is also progress in finalising the result of the previous case study. The Virtual Persona application has been cleaned up and made ready for use within the company. In addition to a code clean-up and proper documentation, a Developer’s Guide and a User Guide are being written to support adoption of the application by the company.

There are also plans to release a more generic version of the virtual environment that can be used by anyone to create their own Virtual Persona application.

The Virtual Persona application after a code clean-up in Blender.

Virtual Personas: Expert Review

On May 24 an expert review session was organised with the company involved in case study 2 of REPAR Sub Project B. The aim of this expert review was to present the current state of the VR application prototype, and discuss the approach for the final test session.

The session started with a quick introduction to the topic of virtual personas, and proceeded with a proposal for the outline of the test session and a demonstration of the VR application. The application itself was projected on a large screen in front of the participants, and participants were invited to try out the application themselves.

The envisioned use of the Virtual Personas application in a group meeting.

Issues

This expert review identified the following issues in the current application and the proposed approach for the test session.

  1. The current application does not provide sufficiently recognisable personas or characters. The visual representations (the avatars) are too similar to each other, as they only differ in clothing. In addition to more distinctive avatars, introductory videos of the personas should be shown to session participants.
  2. In order to act out scenarios or storyboards, there should be additional ‘modes’ for the personas. The current application allows a persona to sit, lie down or walk, but it should also be possible to act out ‘reading a book’ or ‘watching TV’ (see the sketch after this list).
  3. During the test session the use of the application should be embedded between pre-application and post-application activities.
    1. Before using the application, participants should already have thought about what kind of scenario they want to act out in the virtual environment. This could, for instance, be achieved by letting them write or sketch initial scenarios.
    2. After using the application to act out the scenarios with different personas, the participants should be able to store, review and possibly edit the resulting scenarios. The scenarios could be recorded as videos, but ideally they would remain editable afterwards; that way, refined concepts could be placed back into a scenario after several design iterations.
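
As a first thought on issue 2, the additional persona ‘modes’ could be organised as a simple state machine that triggers the matching animation when a mode is entered. The sketch below is purely illustrative; all names are hypothetical and not part of the current application.

    # Minimal sketch of persona action 'modes' as a simple state machine.
    # All names are hypothetical; a real implementation would trigger the
    # corresponding avatar animation whenever a new mode is entered.

    class Persona:
        # Existing modes plus the ones requested in the expert review.
        MODES = {"idle", "sit", "lie_down", "walk", "read_book", "watch_tv"}

        def __init__(self, name):
            self.name = name
            self.mode = "idle"

        def set_mode(self, mode):
            if mode not in self.MODES:
                raise ValueError("unknown mode: %s" % mode)
            print("%s: %s -> %s" % (self.name, self.mode, mode))
            self.mode = mode  # here the matching animation would start

    # Example: acting out a short 'evening at home' scene.
    anna = Persona("Anna")
    anna.set_mode("sit")
    anna.set_mode("read_book")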

These issues are to be resolved before the final test session.


Virtual Personas

The second case study of Sub Project B (VR) reached a first milestone. After various VR applications were discussed in a previous meeting, a functional prototype of the Virtual Personas application was demonstrated to the company. The Virtual Persona application allows designers (engineers, management, marketing, etc.) to see and control virtual avatars of existing personas in current and future use scenarios. The application intends to bring personas ‘alive’ in a virtual environment (instead of just describing them on paper) and lets designers experience scenarios from a persona’s point of view. A persona can include physical user characteristics (e.g. weight, height, age) but also behavioural characteristics, such as the level of knowledge of electronics, the level of intelligence, etc.
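
To give an idea of what such a persona could contain, the sketch below bundles physical and behavioural characteristics into one record. The field names and values are invented for illustration and are not the application’s actual data model.

    # Illustrative only: one way to bundle a persona's physical and
    # behavioural characteristics. Field names and values are invented.

    from dataclasses import dataclass

    @dataclass
    class Persona:
        name: str
        age: int
        height_cm: float            # physical characteristics
        weight_kg: float
        electronics_knowledge: int  # behavioural: 1 (novice) .. 5 (expert)
        description: str            # short narrative for the introduction

    driver = Persona(
        name="Hans", age=52, height_cm=184, weight_kg=92,
        electronics_knowledge=2,
        description="Long-haul driver, 25 years of experience, avoids gadgets.",
    )
    print(driver)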

The virtual avatar (outside the truck) can be controlled with a Kinect interface

Prototype

The prototype of this application consists of the following elements:

  1. A virtual world (a virtual city with roads, highways, etc. was created for this specific case study)
  2. Several virtual avatars, representing existing personas
  3. A motion capture interface that allows the avatars to be controlled by designers (using a Kinect; see the sketch below)
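
The third element boils down to mapping tracked Kinect joints onto the avatar’s bones every frame. The sketch below fakes the tracking data, since the real application reads it from a Kinect driver; all names are illustrative, not the actual implementation.

    # Sketch of the mapping step between tracked Kinect joints and avatar
    # bones. The tracking frame is faked here; the real application reads
    # joint positions from a Kinect driver.

    KINECT_TO_BONE = {
        "head": "Head",
        "left_hand": "Hand.L",
        "right_hand": "Hand.R",
        "torso": "Spine",
    }

    def apply_pose(avatar_bones, joint_positions):
        """Copy tracked joint positions onto the matching avatar bones."""
        for joint, position in joint_positions.items():
            bone = KINECT_TO_BONE.get(joint)
            if bone is not None:
                avatar_bones[bone] = position  # real code would set bone transforms

    # One faked frame of tracking data (x, y, z in metres).
    frame = {"head": (0.0, 1.7, 0.0), "left_hand": (-0.4, 1.1, 0.3)}
    bones = {}
    apply_pose(bones, frame)
    print(bones)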

A short video demonstrating the prototype is available here.

The setup is envisioned to be used by a group (5-10) of designers in early-stage design meetings. The application enables the designers to act out scenarios themselves (using the motion tracking), but at the same time forces them to act and think from the perspective of a particular persona. As such, it can be useful for discussing questions like “what would persona 3 do with this new concept?” or “will this particular concept work with all our personas?”. Though it will probably not result in direct design solutions, it should help the designers think from a user’s point of view and, as a result, trigger discussions in the design meeting.

Next Steps

The prototype demonstration resulted in useful feedback and pointers for future work. Firstly, the personas need a sufficient introduction before the tool is used. Right now, the virtual avatars are simply dropped into the virtual world without any introduction. A short movie or animation should briefly introduce the primary characteristics of each avatar. Secondly, the current method of interacting with the avatar (motion tracking) may not be the most useful one. Alternatively, some of the avatar actions could be performed with a mouse by a ‘session operator’. For the upcoming test sessions, a hybrid interface (e.g. mouse and Kinect) will be implemented, so that the difference between these two interfaces can be experienced.
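
A possible shape for that hybrid interface is to hide both input methods behind one common interface, so a session can switch between Kinect control and operator-driven mouse control at any time. The sketch below is an assumption about the design, with hypothetical names.

    # Sketch of the planned hybrid interface: mouse and Kinect as
    # interchangeable input sources behind a common interface.
    # All names are hypothetical.

    class InputSource:
        def get_command(self):
            raise NotImplementedError

    class KinectSource(InputSource):
        def get_command(self):
            # In the real application this would come from skeleton tracking.
            return ("walk", {"direction": "forward"})

    class MouseSource(InputSource):
        def get_command(self):
            # A 'session operator' would pick an action from a menu instead.
            return ("sit", {})

    def drive_avatar(avatar, source):
        action, params = source.get_command()
        print("avatar %s performs %s %s" % (avatar, action, params))

    drive_avatar("persona-3", KinectSource())
    drive_avatar("persona-3", MouseSource())  # the operator takes over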

In the two upcoming meetings, a pre-test with a refined version of the prototype will be carried out, followed by a real design session involving a larger group of designers/engineers and a relevant test case.

VR Development: Toolkits

A paper by S.P. Smith and D.J. Duke (Eurographics 2000, vol. 19, nr. 3) discusses a common issue with developing VR applications.

[...] environments encourage customising a virtual environment’s design while rapid prototyping within the confines of a toolkit’s capabilities. Thus the choice of the technology and its associated support has been made independent of the end-use requirements of the final system.

According to the authors, because there are no clear guidelines for developing VR applications, development often follows an exploratory approach: a prototype is developed using a particular toolkit, and customisations are made by quickly evaluating intermediate versions of the application. Because the toolkit has already been chosen at that stage, its capabilities limit the capabilities of the final application, leading to biased results. For example, an application may turn out to be useless due to a lack of visual realism, simply because the toolkit didn’t support advanced graphics. Furthermore, this iterative and explorative development approach does not allow developers (or end users) to evaluate design alternatives; trade-offs are guided by the toolkit’s possibilities instead of, for instance, usability, robustness or maintainability.

The paper proposes an alternative approach that starts with defining VR-specific design requirements, such as the required level of realism, object behaviour and user interaction. These are used to create a pre-implementation design that acts as a set of criteria for choosing a development toolkit. If needed, a toolkit can be modified (extended, improved) to fit the design requirements, or an alternative toolkit can be chosen.
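
One way to make that concrete is to score candidate toolkits against the weighted requirements from the pre-implementation design, instead of committing to a toolkit first. The requirements, weights and scores below are entirely made up for illustration.

    # Invented example: using pre-implementation design requirements as
    # explicit, weighted criteria for toolkit selection.

    requirements = {            # requirement -> weight (importance)
        "visual_realism": 3,
        "tracking_support": 5,
        "rapid_prototyping": 4,
    }

    toolkits = {                # toolkit -> score per requirement (0..5)
        "Toolkit A": {"visual_realism": 5, "tracking_support": 1, "rapid_prototyping": 3},
        "Toolkit B": {"visual_realism": 3, "tracking_support": 4, "rapid_prototyping": 5},
    }

    def weighted_score(scores):
        return sum(requirements[r] * scores[r] for r in requirements)

    for name, scores in toolkits.items():
        print(name, weighted_score(scores))
    # The best-scoring toolkit is chosen, and extended where it falls short.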

Alternative approach for VR application development (From Smith and Duke, 2000)

In a way this paper reflects my opinion about ‘application before tool’: first determine what to create, then find out how to do it. The same goes for development: don’t let toolkits limit the application. However, a problem with gathering requirements for VR applications is that there is no common understanding of what VR is. I therefore used the VR demonstration session to show examples and trigger users to come up with requirements. Nevertheless, the points made in this paper should definitely be taken into account during application development in the case studies.

VR Development Frameworks

Some years ago (in the late ’90s) it became apparent that the acceptance of VR was obstructed by the lack of a uniform VR application framework. Such a framework would be needed to control and operate VR applications, but also to allow normal people (non-developers, I suppose) to build them. Unfortunately this opportunity (I think there is a need for such a framework) was spotted by a lot of researchers, resulting in a huge number of projects trying to come up with one. Even more unfortunately, there is no clear winner, and most of the projects ended up dead(ish). So here’s an overview.

OpenTracker

OpenTracker is an open (LGPL) project providing a modular, easy-to-customise framework (via XML configuration files) for setting up tracking applications. It supports an impressive array of hardware. Its current state is ‘deadish’: the site appears operational, but the information is outdated.
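
The core idea behind OpenTracker is a configurable data-flow graph in which tracking events travel from source nodes through filter nodes to sinks, with the XML file describing how the nodes are wired together. The Python sketch below illustrates that concept only; it is not OpenTracker’s actual API.

    # Conceptual sketch of a tracking data-flow graph: events flow from a
    # source node through a filter to a sink. Not OpenTracker's real API.

    class Node:
        def __init__(self, child=None):
            self.child = child

        def push(self, event):
            if self.child:
                self.child.push(event)

    class FakeTrackerSource(Node):
        def emit(self):
            self.push({"x": 0.12, "y": 1.50, "z": 0.30})  # one sample

    class ScaleFilter(Node):
        def push(self, event):
            scaled = {k: v * 100 for k, v in event.items()}  # metres -> cm
            super().push(scaled)

    class PrintSink(Node):
        def push(self, event):
            print("tracked:", event)

    # The equivalent of wiring nodes together in the XML configuration.
    pipeline = FakeTrackerSource(ScaleFilter(PrintSink()))
    pipeline.emit()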

MRToolkit

MRToolkit (1993) is dead, but still referred to in papers and software. Google cached some outdated info. Like any VR framework, the MR Toolkit provides the glue between applications and hardware:

The MR Toolkit simplifies the development of VR applications by providing standard facilities required by a wide range of VR user interfaces. These facilities include support for distributed computing, head-mounted displays, room geometry management, performance monitoring, hand input devices, and sound feedback. (From the original paper).

The MR Toolkit is mentioned by the OpenTracker paper as an aged but still useful starting point for modern VR frameworks and applications. Other than that most references are gone.

VR Juggler

VR Juggler consists of a suite of applications, including device managers and application development tools. It is supposed to be a flexible and cross-platform development suite, supporting simple (desktop) VR as well as more complex CAVE setups. The software on the website is quite old (2008), but the community appears active, and VR Juggler is mentioned in several active research projects.

Todo

Some others are frequently mentioned, but still have to be investigated:

  • DVise (Later became Division’s 3D modeling stuff?)
  • VRPN, VR Peripheral Network
  • Sketchify of course, check integration with VR hardware
  • ARToolkit, for augmented reality, seems to be alive

VR in Blender

Initial experiments show that Blender is a feasible platform for certain VR applications. I’ve been using OpenCV to hook up a webcam and face tracking to Blender’s 3D viewports. The results are promising: an effective and cost-efficient 3D display. The good thing is that Blender is very open and allows for Python scripting. As I’m familiar with both, I suppose this is an interesting lead for initial VR prototyping.
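
The basic tracking loop is roughly the sketch below, assuming a recent OpenCV Python build (with the stock Haar frontal-face detector) and a webcam on device 0; forwarding the head position into Blender’s viewport is only indicated in a comment.

    # Rough sketch of the webcam face-tracking loop with OpenCV. The
    # normalised head position would be forwarded to Blender to offset
    # the 3D viewport camera; that last step is omitted here.

    import cv2

    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cam = cv2.VideoCapture(0)

    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.3, minNeighbors=5)
        if len(faces):
            x, y, w, h = faces[0]
            # Head position normalised to 0..1; this would drive the camera.
            head = ((x + w / 2) / frame.shape[1], (y + h / 2) / frame.shape[0])
            print("head at", head)
        if cv2.waitKey(1) == 27:  # Esc quits
            break

    cam.release()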

Related to this, several people are working on VR in Blender. This blog used OpenCV and a webcam to control the Blender Game Engine. This very old post on BlenderNation mentions the use of Blender in a CAVE environment (status unknown). And of course Blender also supports the WiiMote.