A wide range of devices is nowadays capable of displaying 3D computer graphics. Even devices as small as phones can render rather impressive scenes. PCs can be connected to multiple large flat screens, offering a much greater field of view than the small screen of a phone. Special
monitors can display 3D graphics with the help of techniques such as stereoscopy. However, conventional stereoscopy requires the user to wear special glasses. An improved method called autostereoscopy achieves the same effect without the glasses: the picture no longer appears flat. Alternatively, the glasses can be turned into a head-mounted display, which no longer requires a separate monitor. All these technologies, however, share one flaw: the user's freedom of movement is very limited. The user is essentially required to remain at a fixed position, and looking around requires special handling. With the help of a tracking system, only comparatively small head movements can be registered; movements beyond a certain distance are difficult to handle. The viewing angle is very limited as well: depending on the set-up, the head can usually only be tilted slightly.
It is impossible to look at an object from a completely different angle, for example from behind. Hence, an alternative way of moving around in the scene has to be implemented. An obvious choice is mouse and keyboard, as almost every computer is equipped with them. Computer gamers are familiar
with this kind of interaction with a virtual scene. When a key is pressed, the player's character moves in the corresponding direction, while the mouse simulates the movements of the head and is used to look around. Despite being a very precise and efficient way of navigating 3D space,
it lacks a certain degree of realism: the gamer sits still most of the time, and only the hands move. This makes it very difficult to become immersed in the 3D scene. The Institute of Computer Graphics and Knowledge Visualization at TU Graz has the means to provide a fully immersive 3D
experience. DAVE, a virtual environment using multiple projectors and an optical tracking system, enables the user to truly move around in a virtual scene. It is possible to look at an object from different angles by just walking a few steps. This results in a whole new way of interacting with the
scene. Many different applications can benefit from these capabilities. However, there are also certain limitations, such as limited input precision and the fact that the user stands in the middle of an empty room, without access to complex input devices.
This thesis addresses these and other limitations and describes the design and implementation of an application for creating meshes in such a virtual environment. By utilising the capabilities of the DAVE, the user is meant to feel fully immersed, hence the name Immersive Mesh Creator. A selection of tools is developed, enabling the user to quickly draw and sculpt objects in an intuitive way.