The second case study of subproject B (VR) has reached a first milestone. After various VR applications were discussed in a previous meeting, a functional prototype of the Virtual Personas application was demonstrated to the company. The Virtual Personas application allows designers (engineers, management, marketing, etc.) to see and control virtual avatars of existing personas in current and future use scenarios. The application aims to bring personas ‘alive’ in a virtual environment (instead of merely describing them on paper) and lets designers experience scenarios from a persona’s point of view. A persona can include physical user characteristics (e.g. weight, height, age) as well as behavioural characteristics, such as the level of knowledge of electronics, the level of intelligence, etc.
The prototype of this application consists of the following elements:
- A virtual world (a virtual city with roads, highways, etc. was created for this specific case study)
- Several virtual avatars, representing existing personas
- A motion capture interface that allows the avatars to be controlled by designers; the virtual avatar (outside the truck) can be controlled with a Kinect
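To illustrate how such a motion capture interface might drive a persona avatar, the sketch below retargets tracked joint positions onto an avatar scaled to the persona's stored height. This is a minimal, hypothetical example and not the project's actual code: the joint names, the `Persona` fields, and the uniform height scaling are all assumptions.

```python
# Hypothetical sketch: retargeting tracked skeleton joints onto a persona
# avatar. Joint names and uniform height scaling are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Persona:
    name: str
    height_m: float  # physical characteristic stored with the persona


# Assumed height of the designer being tracked by the motion capture system.
TRACKED_HEIGHT_M = 1.80


def retarget(joints: dict, persona: Persona) -> dict:
    """Scale tracked joint positions so the avatar matches the persona's height."""
    s = persona.height_m / TRACKED_HEIGHT_M
    return {name: (x * s, y * s, z * s) for name, (x, y, z) in joints.items()}
```

In a real setup, `joints` would be filled each frame from the Kinect's skeletal tracking; here it is just a plain dictionary so the scaling logic can be shown in isolation.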
A short video demonstrating the prototype is available here.
The setup is envisioned to be used by a group of 5–10 designers in early-stage design meetings. The application enables the designers to act out scenarios themselves (using the motion tracking), but at the same time forces them to act and think from the perspective of a particular persona. As such, it can be useful for discussing questions like “what would persona 3 do with this new concept?”, or “will this particular concept work with all our personas?”. Though it will probably not result in direct design solutions, it should help the designers think from a user’s point of view, and thereby trigger discussions in the design meeting.
The prototype demonstration resulted in useful feedback and pointers for future work. Firstly, the personas need a sufficient introduction before the tool is used: right now, the virtual avatars are simply dropped into the virtual world without any introduction. A short movie or animation should briefly introduce the primary characteristics of each avatar. Secondly, the current method of interacting with the avatar (motion tracking) may not be the most useful one. Alternatively, some of the avatar actions could be performed with a mouse by a ‘session operator’. For the upcoming test sessions, a hybrid interface (e.g. mouse and Kinect) will be implemented so that the difference between these two interfaces can be experienced.
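One way such a hybrid interface could work is a simple dispatcher that routes input events to the avatar only from the currently active interface, with the session operator switching modes. The sketch below is an assumption about how this might be structured, not the planned implementation; the mode names and handler shape are invented for illustration.

```python
# Hypothetical sketch of a hybrid Kinect/mouse input dispatcher for one avatar.
# Mode names ("kinect", "mouse") and the event shape are illustrative only.
class AvatarController:
    def __init__(self):
        self.mode = "kinect"   # default: designer drives the avatar via motion tracking
        self.last_action = None

    def set_mode(self, mode: str) -> None:
        """Called by the session operator to switch the active interface."""
        if mode not in ("kinect", "mouse"):
            raise ValueError(f"unknown input mode: {mode}")
        self.mode = mode

    def handle(self, source: str, action: str) -> bool:
        """Apply an input event only if it comes from the active interface."""
        if source != self.mode:
            return False       # events from the inactive interface are ignored
        self.last_action = action
        return True
```

Keeping both interfaces connected but gating them behind a single active mode would make it easy to compare the two interaction styles within one test session, which is what the upcoming sessions intend to do.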
In the two upcoming meetings, a pre-test with a refined version of the prototype will be carried out, followed by a real design session involving a larger group of designers/engineers and a relevant test case.