[Repost] Massive Model Rendering Techniques

Massive Model Rendering Techniques

Andreas Dietrich, Enrico Gobbetti, Sung-Eui Yoon

Abstract

We present an overview of current real-time massive model visualization technology, with the goal of providing readers with a high-level understanding of the domain, as well as with pointers to the literature.

I. INTRODUCTION

Interactive visualization and exploration of massive 3D models is a crucial component of many scientific and engineering disciplines and is becoming increasingly important for simulations, education, and entertainment applications such as movies and games. In all of these fields we are observing a data explosion, i.e., the quantity of information is increasing exponentially. Typical sources of rapidly increasing massive data include the following:

• Large-scale engineering projects. Today, complete aircraft, ships, cars, etc. are designed purely digitally. Usually, many geographically dispersed teams are involved in such a complex process, creating thousands of different parts that are modeled at the highest possible accuracy. For example, the Boeing 777 airplane seen in Figure 1a consists of more than 13,000 individual parts.

• Scientific simulations. Numerical simulations of natural real-world effects can produce vast amounts of data that need to be visualized in order to be scientifically interpreted. Examples include nuclear reactions, jet engine combustion, and fluid dynamics, to mention a few. Increased numerical accuracy as well as faster computation can lead to datasets of gigabyte or even terabyte size (Figure 1b).

• Acquisition and measuring of real-world objects. Apart from modeling and computing geometry, scanning of real-world objects is a common way of acquiring model data. Improvements in measuring equipment allow scanning in the sub-millimeter accuracy range, which can result in millions to billions of samples per object (Figure 1c).

• Modeling natural environments. Natural landscapes contain an incredible amount of visual detail. Even for a limited field of view, hundreds of thousands of individual plants might be visible. Moreover, plants are themselves made of highly complex structures, e.g., countless leaves, complicated branching, wrinkled bark, etc. Even modeling only some of these effects can produce excessive quantities of data. For example, the landscape model depicted in Figure 1d covers “only” a square area of 82 km × 82 km.

Handling such massive models presents important challenges to developers. This is particularly true for highly interactive 3D programs, such as visual simulations and virtual environments, with their inherent focus on interactivity, low latency, and real-time processing.

In the last decade, the graphics community has witnessed tremendous improvements in the performance and capabilities of computing and graphics hardware. The question therefore naturally arises whether such a performance boost turns rendering performance problems into memories of the past. A single standard dual-core 3 GHz Opteron processor delivers roughly 20 GFlops, the PlayStation 3’s Cell processor delivers about 180 GFlops, and recent GPUs, now fully programmable, provide around 340 GFlops. With the increased application of hardware parallelism, e.g., in the form of multi-core CPUs or multi-pipe GPUs, these performance improvements, which tend to follow, and even outpace, Gordon Moore’s exponential growth prediction, seem set to continue for the near future. For instance, Intel has already announced an 80-core processor capable of TeraFlop performance. Despite this observed and continuing increase in computing and graphics processing power, it is nevertheless clear to the graphics community that one cannot simply rely on hardware developments to cope with arbitrary data sizes within the foreseeable future. This is not only because the increased computing power also allows users to produce more and more complex datasets, but also because memory bandwidth grows at a significantly slower rate than processing power and becomes the major bottleneck when dealing with massive datasets.
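
To see why memory bandwidth, rather than raw arithmetic throughput, is the limiting factor for massive models, consider a rough back-of-the-envelope estimate. The 340 GFlops figure is taken from the paragraph above; the 50 GB/s memory bandwidth is an assumption chosen as typical for GPUs of that generation, not a number from the article:

$$\frac{340\ \text{GFlop/s}}{(50\ \text{GB/s}) \,/\, (4\ \text{bytes per float})} \approx 27\ \text{floating-point operations per float fetched from memory}$$

Under these assumptions, any computation that touches memory more often than roughly once every couple of dozen arithmetic operations is bandwidth-bound, and once a model no longer fits in video or main memory, disk and network transfer rates, which are orders of magnitude lower still, become the real limit.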

As a result, massive datasets cannot be interactively rendered by brute-force methods. To overcome this limitation, researchers have proposed a wide variety of output-sensitive rendering algorithms, i.e., rendering techniques whose runtime and memory footprint are proportional to the number of image pixels rather than to the total model complexity. In addition to out-of-core data management, needed to handle datasets larger than main memory or to let applications explore data stored on remote servers, these methods require the integration of techniques that filter out, as efficiently as possible, the data that does not contribute to a particular image.
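
The core idea of an output-sensitive traversal can be sketched in a few lines. The following C++ fragment is only an illustrative sketch under assumed types and thresholds (the Node, Sphere, and Camera structures, the one-pixel tolerance, and the simplified frustum test are all invented for this example), not code from any of the systems discussed here. It combines view-frustum culling with an error-driven level-of-detail cut over a bounding-volume hierarchy, so the traversal cost tracks the rendered image rather than the full model:

```cpp
// Illustrative sketch of output-sensitive rendering via hierarchical
// visibility and detail culling. Nodes outside the view frustum are skipped,
// and nodes whose projected geometric error falls below one pixel are drawn
// from a coarse proxy instead of being refined further, so the work done
// scales with what is visible on screen rather than with total model size.
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <vector>

struct Sphere { float x, y, z, radius; };           // bounding volume

struct Node {
    Sphere bound;                                    // bounds all geometry below this node
    float  geometricError;                           // object-space error of this node's proxy
    std::vector<Node> children;                      // empty => leaf with full-resolution geometry
};

struct Camera {
    float x, y, z;                                   // position, looking down +z
    float pixelsPerRadian;                           // ~ screenHeight / fieldOfView
};

// Stand-in for a real sphere/frustum test against six planes: this stub only
// rejects spheres that lie entirely behind the camera.
bool intersectsFrustum(const Sphere& s, const Camera& c) {
    return (s.z + s.radius) > c.z;
}

// Screen-space size (in pixels) of this node's object-space error.
float projectedError(const Node& n, const Camera& c) {
    float dx = n.bound.x - c.x, dy = n.bound.y - c.y, dz = n.bound.z - c.z;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
    return c.pixelsPerRadian * n.geometricError / std::max(dist, 1e-3f);
}

void drawProxy(const Node&) { std::puts("draw coarse proxy"); }
void drawLeaf (const Node&) { std::puts("draw full-resolution leaf"); }

// Cost is proportional to the nodes that survive the two culling tests,
// i.e. roughly to the rendered image, not to the size of the full model.
void renderOutputSensitive(const Node& n, const Camera& cam, float pixelTolerance) {
    if (!intersectsFrustum(n.bound, cam)) return;     // visibility culling
    if (n.children.empty()) { drawLeaf(n); return; }  // finest available data reached
    if (projectedError(n, cam) <= pixelTolerance) {   // detail culling (LOD)
        drawProxy(n);
        return;
    }
    for (const Node& child : n.children)
        renderOutputSensitive(child, cam, pixelTolerance);
}

int main() {
    Node root{{0.0f, 0.0f, 0.0f, 10.0f}, 1.0f, {}};
    Camera cam{0.0f, 0.0f, -50.0f, 1000.0f};
    renderOutputSensitive(root, cam, /*pixelTolerance=*/1.0f);
}
```

In a real system, the same traversal would additionally trigger asynchronous, out-of-core loading of the nodes it decides to refine, which is where the data management techniques of Section IV come into play.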

This article provides an overview of current massive model rendering technology, with the goal of providing readers with a high-level understanding of the domain, as well as with pointers to the literature. The main focus will be on rendering of large static polygonal models, which are by far the current main test case for massive model visualization. We will first discuss the two main rendering techniques (Section II) employed in rendering massive models: rasterization and ray tracing. We will then illustrate how rendering complexity can be reduced by employing appropriate data structures and algorithms for visibility or detail culling, as well as by choosing alternate graphics primitive representations (Section III). We will further focus on data management (Section IV) and parallel processing issues (Section V), which are increasingly important on current architectures. The article concludes with an overview of how the various techniques are integrated into representative state-of-the-art systems, and a discussion of the benefits and limitations of the various approaches (Section VII).

Please download the original paper to read the full article.
