About me

Dieter Schmalstieg is Alexander von Humboldt Professor of Visual Computing at the University of Stuttgart, Germany. He is also an adjunct professor at the Institute of Visual Computing at Graz University of Technology, Austria. His current research interests are augmented reality, virtual reality, computer graphics, visualization, and human-computer interaction. He received his Dipl.-Ing. (1993), Dr. techn. (1997), and Habilitation (2001) degrees from Vienna University of Technology. He is the author or co-author of over 400 peer-reviewed scientific publications with over 30,000 citations and over twenty best paper awards and nominations. His current and past organizational roles include associate editor in chief of IEEE Transactions on Visualization and Computer Graphics, associate editor of Frontiers in Robotics and AI, member of the steering committee of the IEEE International Symposium on Mixed and Augmented Reality, chair of the EUROGRAPHICS working group on Virtual Environments, and key researcher of the K-Plus Competence Centers VRVis (Vienna) and Know-Center (Graz). In 2002, he received the START career award presented by the Austrian Science Fund. In 2008, he founded the Christian Doppler Laboratory for Handheld Augmented Reality. In 2012, he received the IEEE Virtual Reality Technical Achievement Award, and, in 2020, the IEEE ISMAR Career Impact Award. He was elected a Fellow of the IEEE, a member of the Young Curia of the Austrian Academy of Sciences, a member of the Academia Europaea, and a member of the IEEE VGTC Virtual Reality Academy.

Team

VISUS, University of Stuttgart | Institute of Visual Computing, Graz

Current research areas

Real-time graphics

This research focuses on scalable graphics algorithms for rendering large and complex scenes at real-time frame rates. We investigate novel approaches to level of detail, image-based rendering, potentially visible sets, frame extrapolation, and geometry compression. We are particularly interested in rendering methods suitable for streaming rendering and virtual reality displays.
Disocclusion buffer for potentially visible sets
NeuralPVS for potentially visible set prediction
Laced wires for geometry decompression on GPU
Trim regions for potentially visible set generation
Temporally adaptive shading atlas for incremental shading
Shading atlas streaming for virtual reality rendering
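To illustrate the general idea behind potentially visible sets (a sketch only, not the algorithms from the projects above): a preprocessing step stores, for each view cell of the scene, the set of objects that may be visible from anywhere inside that cell, so the renderer only submits that set at run time. All object names, cell sizes, and data below are hypothetical.

```python
# Minimal potentially-visible-set (PVS) culling sketch (illustrative only).
# Preprocessing maps each view cell to the IDs of potentially visible objects;
# at run time, we look up the camera's cell and draw only that set.

def cell_for_position(pos, cell_size=10.0):
    """Map a 3D camera position to a discrete view-cell key."""
    return tuple(int(c // cell_size) for c in pos)

# Hypothetical precomputed PVS: view cell -> potentially visible object IDs.
pvs = {
    (0, 0, 0): {"terrain", "house", "tree"},
    (1, 0, 0): {"terrain", "tree", "bridge"},
}

def visible_objects(camera_pos):
    """Return the conservative set of objects to submit for rendering."""
    cell = cell_for_position(camera_pos)
    return pvs.get(cell, set())  # unknown cell: nothing precomputed

print(sorted(visible_objects((12.0, 3.0, 4.0))))  # objects for cell (1, 0, 0)
```

The lookup is conservative: it may return objects that turn out to be occluded for the exact camera pose, but it never omits a visible one, which is what makes it safe for culling.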
Situated visualizations

Situated visualization uses augmented reality displays to present information that is dynamically derived from the user's physical environment. It provides guidance, instructions, and context in everyday situations. We investigate when, where, and how to visualize such information, and what information to visualize.
Situated Brushing and Linking in VR and AR
CECILIA: embedding visualizations in existing games
AR assembly guidance with error management
guitARhero: augmented reality for guitar learning
AR visualization patterns for reusable situated visualizations
RagRug: a unified development framework for situated analytics
Photorealistic augmented reality

Ideally, virtual objects displayed in augmented reality would be indistinguishable from real objects. This grand challenge requires capturing the entire image formation process in reality and simulating its virtual counterpart. It encompasses reality capture, light transport, camera simulation, and much more.
HandLight
Neural Bokeh
Learning lightprobes
Image-based modeling and rendering

Capturing the geometry and appearance of the real world and synthesizing novel views from images are key techniques for compelling extended reality applications, such as telepresence or mediated reality. In this work, we investigate emerging scene representations, such as light fields and radiance fields.
Sorted Opacity Fields
AAA Gaussians
VR Photo Inpainting
Good Keyframes to Inpaint
InpaintFusion: RGB-D Inpainting
MR Light Fields
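As background on radiance fields (a generic sketch, not code from the projects above): novel views are rendered by compositing density and color samples along each camera ray with the standard emission-absorption model. The sample values below are made up.

```python
# Front-to-back volume compositing along one ray, the core of radiance-field
# rendering (illustrative only; scalar "color" stands in for RGB).
import math

def composite(samples, delta):
    """samples: list of (density, color) per ray step; delta: step length.
    Returns (accumulated color, accumulated opacity) along the ray."""
    color, transmittance = 0.0, 1.0
    for density, c in samples:
        alpha = 1.0 - math.exp(-density * delta)  # opacity of this step
        color += transmittance * alpha * c        # emission weighted by visibility
        transmittance *= 1.0 - alpha              # light surviving to the next step
    return color, 1.0 - transmittance

# A ray crossing empty space, then a dense region of brightness 0.8:
samples = [(0.0, 1.0), (5.0, 0.8), (5.0, 0.8)]
print(composite(samples, delta=0.5))
```

Because the dense samples absorb almost all light, the accumulated color approaches 0.8 and the opacity approaches 1, regardless of what lies behind them.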
3D reconstruction and authoring

Augmented reality applications require new types of multimedia content to deliver convincing instructions and guidance. This content is generated using techniques from semantic and parameterized reconstruction, combined with procedural and computer-assisted 3D authoring.
Subsurface infrastructure reconstruction
AuthXR: An immersive authoring system for industrial procedures
IntelliCap: Intelligent Guidance for Consistent View Sampling
Model-Free Authoring by Demonstration of Assembly Instructions
Authoring of AR surface instructions
Technical documentation retargeting to AR
Localization and tracking

Wide-area localization and tracking is a key enabling technology for augmented reality. Our research investigates scalable scene descriptors, sensor fusion, and multimodal localization techniques.
Change-Resilient Localization
Bag of Wor(l)d Anchors
Compact World Anchors
HoloLens stereo tracking
TrackCap
VR Upper Body Pose
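The basic principle of descriptor-based localization can be sketched as follows (a toy illustration, not the methods of the projects above): a query descriptor computed from the current camera image is matched against stored world anchors, and the best match yields a pose estimate. Descriptors, anchors, and poses below are invented.

```python
# Toy descriptor-based localization sketch (illustrative only):
# match a query descriptor to stored world anchors by cosine similarity.
import math

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical anchor database: (descriptor, stored position).
anchors = [
    ((1.0, 0.0, 0.2), (0.0, 0.0, 0.0)),  # "lobby" anchor
    ((0.1, 1.0, 0.0), (5.0, 0.0, 2.0)),  # "corridor" anchor
]

def localize(query):
    """Return the position of the anchor most similar to the query descriptor."""
    return max(anchors, key=lambda a: cosine(query, a[0]))[1]

print(localize((0.2, 0.9, 0.1)))  # nearest anchor: corridor
```

Real systems use high-dimensional learned descriptors, approximate nearest-neighbor search, and geometric verification on top of this lookup, but the retrieval step has the same shape.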
Immersive displays

Conventional XR displays generate only certain types of depth cues and fall short of a hypothetical ultimate display. Our research on immersive display technology addresses some of the limitations of existing XR displays using light field approximation and focal cue synthesis.
Gaze-Contingent Layered Optical See Through
Off-Axis Layered See-Through Head Mounted Display
Video See Through Display with Focal Cues