Sub project B is currently investigating the possibilities of mixed reality. Mixed reality typically combines a virtual and a real part. With augmented reality, for instance, virtual information is added to real information (usually a video stream). But it is equally possible to mix two sources of virtual information.
Imagine a user interface designer who wants to see how his new GUI idea fits on a future product (e.g. the dashboard of a new car or the user interface of industrial machinery). Here we can use mixed reality to let GUI designers insert their GUI prototype (which may be an interactive Flash prototype, a simple sketch or a Sketchify file) into a virtual environment, where a test user can use the GUI on a virtual product.
A first implementation of this idea is shown in the short movie below. On the left, you can see a 3D virtual environment (a simple 3D object in Blender). On the right, a window pops up, showing an arbitrary traditional 2D application (in this case a web browser). This application is rendered in real time on the cube in the virtual environment. When we browse in the web browser, the cube immediately shows the same output. The other way around also works: the browser can be controlled from the virtual environment. To imagine a more useful application, replace the web browser with a GUI prototype, and the 3D cube with a soda machine, a printer or a car dashboard!
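The input path (controlling the 2D application from inside the 3D scene) essentially comes down to mapping the texture coordinates of the point the user clicks on the 3D object back to pixel coordinates in the 2D application's window. A minimal sketch of that mapping, with illustrative names and resolutions rather than the actual implementation:

```python
def uv_to_pixel(u, v, width, height):
    """Map texture (UV) coordinates in [0, 1] to window pixel coordinates.

    V is flipped because texture coordinates conventionally have their
    origin at the bottom-left, while window coordinates start top-left.
    """
    x = int(round(u * (width - 1)))
    y = int(round((1.0 - v) * (height - 1)))
    return x, y

# A click on the bottom-left corner of the cube's textured face maps to
# the bottom-left pixel of an 800x600 application window:
print(uv_to_pixel(0.0, 0.0, 800, 600))
```

The same mapping, run in reverse, tells the renderer where on the object a given framebuffer update should appear.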
The implementation presented here uses Virtual Network Computing (VNC), a standard and widely available technique for sharing software applications; you can even share an entire desktop over a network. This allows other people to watch your desktop, but also to control mouse and keyboard input. The same trick is used here: the web browser is a 'shared' application that sends its visual output to the 3D environment and receives mouse/keyboard controls back from it. This way, the application can be rendered on the cube (or any 3D object), and mouse input can be sent from the 3D environment back to the 2D application.
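The idea can be sketched as two channels: framebuffer updates flowing from the shared application to any number of viewers (here, the 3D renderer updating the cube's texture), and pointer events flowing back the other way. The toy model below illustrates only this structure; all class and method names are hypothetical, and a real implementation would speak the RFB protocol to an actual VNC server instead:

```python
class SharedApplication:
    """Toy model of the VNC idea: visual output goes out to viewers,
    pointer/keyboard events come back in. Names are illustrative only."""

    def __init__(self, width, height):
        self.width, self.height = width, height
        self.framebuffer = [[0] * width for _ in range(height)]
        self.viewers = []      # e.g. the 3D renderer's texture updater
        self.event_log = []    # input received from the virtual environment

    def on_update(self, callback):
        """Register a viewer to be notified of framebuffer changes."""
        self.viewers.append(callback)

    def draw(self, x, y, value):
        """The 2D application repaints a pixel; every viewer is notified."""
        self.framebuffer[y][x] = value
        for notify in self.viewers:
            notify(x, y, value)

    def pointer_event(self, x, y, pressed):
        """A mouse event sent back from the 3D environment."""
        self.event_log.append((x, y, pressed))


# The renderer keeps its own copy of the texture in sync:
app = SharedApplication(4, 4)
texture = [[0] * 4 for _ in range(4)]

def mirror(x, y, value):
    texture[y][x] = value

app.on_update(mirror)
app.draw(1, 2, 255)            # the "browser" repaints a pixel...
app.pointer_event(1, 2, True)  # ...and a click arrives from the 3D scene
print(texture[2][1], app.event_log)
```

The design choice worth noting is that the application neither knows nor cares whether its viewer is a flat window or a texture on a 3D object: that symmetry is exactly what makes VNC a convenient transport for this kind of mixed reality.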