whose performance and memory usage are independent of the terrain's size,
and which is very short and simple
[PAGE UNDER CONSTRUCTION: thank You for Your patience]
Algorithm by: Simon Hasur
Paper by: Simon Hasur and Niso R.
Date: Nov 2014 - Apr 2015
motto of the topic:
To find the Job, use Optimism: an Algorithm which relies on Dynamic Level of Detail
to give more detail near good things and less detail near bad things.
The Job is somewhere near a site with the highest level of detail.
We are going to present to the Reader a terrain-rendering Algorithm, in a very short and simple yet complete way. By terrain-rendering algorithm we essentially mean a polygonal surface-approximation method: one which allows approximating more or less brutally near different sites of the surface, and which can change dynamically how that more-or-less brutal approximation is distributed across the surface (a triangulated surface in this case).
To avoid confusion, the algorithms nerd among the authors gave the Algorithm a name: New Terrain System. Not much fantasy in there, but that was the name of the experimental software package while it was under development and experimentation, so there was no point in inventing another name for it and creating confusion.
We will, however, also deal with a variant of that algorithm, called Simplified Terrain System - which is much the same, but overall a bit shorter and simpler; the name was coined to point that out. We will start with that one, so don't worry about differences and similarities between the 2 variants: they will resolve almost seamlessly when it's time for it.
Let's get started.
The Algorithm presented provides advanced rendering (a very large terrain with full draw-distance), but it's very simple. It's also computationally very light. And most importantly, in the case of this Algorithm the terrain can be arbitrarily large.
It was established through heavy research precisely in order to provide a rendering system that is advanced, but that is also very simple to explain and to understand. And one that has a very short sourcecode when implemented as a full computer-program in, for example, the C++ programming language.
That's why we intend this to become one of the most adequately documented terrain- and surface-rendering algorithms; which, in this case, coincides with surface-approximation too (this is actually a quite deep concept which shall be addressed in a future paper, a sort of complement to this one). With an overall not too complicated Algorithm, one which can be implemented in no more than 20 pages of, say, C++ sourcecode, this is easier to do. There are lots of rendering algorithms out there, but very few of them are well-documented in fact... if documented at all.
The implementation of the Algorithm as a computer-program is provided in full, written in the C++ programming language, together with some more or less downgraded variants to make it easier to understand and to use. It can also easily be re-used as a library.
All can be downloaded from here:
Where the 4 packages relevant for this paper are:
|order of appearance||item's name||description|
|0||all the items in the folder for_documentation||Download this first; it amounts to a few kilobytes. It's a series of downgraded versions of the final implementation provided by the packages listed below. This is the most crucial part of the documentation! You will get along with this small 'kit' alone for the whole part 1 of the paper. Some programs here are graphical (they use the SDL library for graphics), while some are text-based (and use only the Standard C and C++ library), so they are very easy to compile on any system, including Windows, without clarity and usability being compromised by any messy 'prerequisites'. Some of these create SVG files with fprintf(), so there's a lot of fun anyway.|
|1||"terrain_system_smartss3_release1_SMALLFILES"||this is mostly to illustrate the concepts|
|2||"terrain_system_smartss3_release1_MULTITHREAD_SMALLFILES"||this is more industrial-strength stuff: for multi-processor computers this is better because it is multi-threaded|
|3||"simplified_terrain_system_MULTITHREAD_SMALLFILES"||a crucial resource to illustrate the concepts, but it has some interesting properties also on its own: although it cheats a bit, it's faster, so it's very interesting for use in very lightweight programs. This goes both for the multi-threaded and the simple variant. By default, it is set to multi-threaded. To make it non-multithreaded, just change these lines in the class_Terrain_new.cpp file, like this:
resul = triangulate_PROGRESSIVE_blocks_according_to_detail() ; // NON-MULTITHREAD (SIMPLER and better for single-core computers)
// resul = triangulate_PROGRESSIVE_blocks_multithread() ; // MULTITHREAD: more complicated but better for multicore computers.
The way this variant 'cheats', so to speak, actually casts some very interesting light on the properties of polygonally approximated surfaces.
And of course some videos showing footage of most of the above-mentioned programs:
As said a little before, this paper can be read and understood by anyone.
We also tried to make it funny. It is particularly indicated if You play 3D videogames, or if You like geography, cartography, or topics such as planet-exploration and mapping.
If instead You are a reader involved in programming/software-development at any level, let's state it so:
While instead if You need a terrain rendering method to use right out of the box, because You want to quickly make a 3D flight simulator, a 3D adventure game, or a planet-exploration game, this article and the accompanying sourcecode may, we think, be good for You too. As it is fine also if You are interested in geography, 3D rendering, techniques to approximate surfaces, or in Algorithms in general.
First, let's show the thing a little bit, then let us introduce it gently.
Fig.1: the terrain rendering Algorithm at work. a) solid visualization ; b) wireframe visualization... the very characteristic triangulation scheme is clearly visible. Its main strength is its ability to accumulate detail, so to speak, around the sorts of edge-lines which naturally arise on a naturally-shaped terrain in a more or less pronounced way, depending on the specific terrain and on the current camera-shot.
This Algorithm uses the triangulation-scheme pioneered by Peter Lindstrom and co-workers, published in 1996; later developed further by P. Lindstrom himself together with Valerio Pascucci, published in 2001 and 2002.
The new Algorithm was established by trying to work out something that uses that triangulation-scheme, but does all the rest in a way as simple as possible, and in a way such that the memory usage is independent of the terrain's size. The new Algorithm is really simple and does all that it was expected to do: its typical memory usage is at most 16 MB, while its speed is directly proportional (linear) to the total amount of detail which happens to be in the current camera-shot. That's a really low profile as to system requirements; even if the overall speed is not excellent, it is still very good.
After the New Terrain System Algorithm was established, the algorithms nerd among us accidentally discovered a variant of that Algorithm: the Simplified Terrain System. That one is even simpler and also faster, even if it does the job by literally cheating during one of the stages of the overall procedure. Since all the other concepts hold the same for both algorithms anyway, first we will present the simplified variant, and then the serious one. You will probably be surprised by how much softer the frontal encounter with the "serious" Algorithm will be this way.
The most crucial concepts will be introduced gradually enough, while complementary things may be introduced suddenly. But don't worry... if something is not clear at the beginning, by the end doubts will certainly disappear. That's why this paper has lots of figures and is also somewhat lengthy: so as to allow going through the most crucial concepts not only once, but twice. First in a simpler way, and the second time in a slightly more involved way.
To finally put the stuff together into the full Algorithm.
For example, let's take our previous statement which says that the terrain can be arbitrarily large, and let's add some more involved things to it. So.
A typical value could be a terrain made of 8k x 8k height-samples, but 16k x 16k or 32k x 32k are fine too. There is no intrinsic limit, as long as there is enough room on the hard-disk to store a bitmap file holding all that information. For simplicity, we assume the usage of bitmap image-files, because they can be easily accessed in full streaming, just like a text file, with a simple function from the standard C/C++ library such as fscanf() ;
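As a concrete sketch of what full streaming means here, the following function reads a single height-sample out of a plain-text ('P2') PGM file with fscanf(), without ever loading the whole image into memory. The file format and the function name are our assumptions for illustration; the actual package may lay its files out differently.

```cpp
#include <cstdio>

// Read one height sample from an ASCII "P2" PGM file, streaming with fscanf():
// the whole terrain never has to reside in memory at once.
// (Illustrative sketch; not the package's actual code.)
long read_height_sample( const char *filename, long x, long y )
{
    FILE *f = fopen( filename, "r" ) ;
    if( f == NULL ) return -1 ;

    long width = 0, height = 0, maxval = 0 ;
    if( fscanf( f, "P2 %ld %ld %ld", &width, &height, &maxval ) != 3 )
    {
        fclose( f ) ;
        return -1 ;
    }

    long sample = -1 ;
    // skip samples up to the one wanted, reading one number at a time:
    for( long i = 0 ; i <= y * width + x ; i++ )
        if( fscanf( f, "%ld", &sample ) != 1 ) break ;

    fclose( f ) ;
    return sample ;
}
```

The memory usage of this access pattern is constant, whatever the size of the file: exactly the quality the Algorithm aims at.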
The fact of having no intrinsic limit is a very important quality, since a really appealing algorithm should not put limits on the size of the dataset on which it works. Just think of a simple number-sorting algorithm: the most practical ones, however simple they are, can sort any large set. This is because they store only a pair of numbers at a time, so that they can be tuned to sort the numbers at constant memory usage, by sorting the stuff in place (e.g. when the numbers are in a large text-file).
With terrain-rendering algorithms this quality is more difficult to attain, but this algorithm attains it:
that's why it took something like 2 years of heavy research to establish it.
So, enjoy it!
Said very roughly, the main idea behind this Algorithm is that whether we want a view-dependent decision of what detail is included in the triangulated surface, or we just use some rough criteria like putting smaller blocks near the camera and bigger blocks farther from it, it is still Dynamic Level of Detail.
The view-dependent thing is certainly better, but the idea is that the main purpose is not to provide at each instant (like 30 frames per second) all and only the details actually visible from the current point-of-view with a given perspective projection of points to screen, but rather to gradually vary the detail so as to arrive quickly at the right amount of detail in the right sites of the terrain as the camera moves around: or at least to approach the ideally right detail.
Let's state it in another way: what we want is an Algorithm which provides most of the relevant Detail by the time we get close to the site we want to inspect, and while we approach it. The Detail must change quickly, but need not always be the ideal set of details visible. It must be mostly the right set... a set containing most of the relevant stuff. This Algorithm exploits this idea of Dynamic Level of Detail as much as possible, to do a good job in a simple and very fast way. That is, to always progressively approach the ideally right level of detail: where progressively means that it does a little change at a time, but in a context in which those little changes, if iterated many times, allow reaching any configuration of how detail is distributed across the terrain being rendered.
The basic functionalities for providing DLOD, as implemented in this Algorithm, literally provide the language for expressing and implementing the advanced features which provide View-Dependent Continuous DLOD. A V-D CLOD system simply means that ideally it is able to provide a completely seamless transition of LOD as seen by the observer: that is, no pop-up of detail and similar visual artifacts; this because the LOD to transition to can be determined exactly. In fact a rendering method's purpose is not merely that of storing data (in our case, data about a terrain in the form of a heightmap, or more advanced things like the one explained in one of the sections dedicated to advanced topics). So, CLOD is usually a synonym of advanced, industrial-quality methods. In order to understand clearly and precisely the advanced features which make the difference between a generic DLOD system and a V-D CLOD system, we need to understand the way this Algorithm implements, or let's say approaches, the problem of DLOD. Let's explain it simply and concretely.
The whole procedure is a conceptually endless repetition of a single program cycle. This is a program cycle (figure).
According to how the DLOD is implemented in this Algorithm, at each cycle there exists a
configuration: a set of blocks, covering a square area (figure).
This is a square area (later on that will be our terrain or polygonal surface, so to speak):
( fig.3: a big square area. )
This, instead, is a sequence of different coverages obtained by performing various subdivisions on it:
( fig.4: block subdivision sequence )
Here is an example of a more elaborate subdivision: look at the figure!
( fig.5: simple graphical representation of a subdivision. )
Refresh the page to get a different one!
And there is absolutely no such thing as a record of what there used to be in the cycle before and
what there is going to be after. What exists is the current blockset. Thus we have a vector containing our blocks.
At the very beginning, when the program is started, according to this core system there exists one big block: this is block 0. For now don't worry about the numbers: just let us represent each block with it.
So, our vector ends here.
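This starting state can be sketched in a few lines. The Block structure and the function name here are our own illustrative inventions; the package uses its own class names.

```cpp
#include <vector>

// A block: a square sub-area of the terrain, identified by a quad-tree ID.
// (Names are ours, for illustration; the package uses its own class.)
struct Block
{
    long id ;      // quad-tree ID: 0 is the one big block covering everything
    double x, y ;  // position of the block's corner, in terrain units
    double size ;  // side length of the square block
} ;

// the current blockset: at program start it holds only block 0
std::vector<Block> make_initial_blockset( double terrain_size )
{
    std::vector<Block> blocks ;
    Block b ;
    b.id = 0 ;
    b.x = 0.0 ;
    b.y = 0.0 ;
    b.size = terrain_size ;
    blocks.push_back( b ) ;
    return blocks ;
}
```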
Now, if we want to change the LOD somewhere, let's think of it as the attempt to include some point in the mesh, as far as really "including" it is possible: because we think that that point contributes some significant information. So, our point is here for example (fig.5),
and there's a little procedure which checks whether this point is within the area of any of the currently existing blocks; if this test is positive (which is our case), that block is replaced with its 4 children-blocks.
And so the parent block exists no more: as we have added its 4 children, the parent is deleted.
And we are here (fig):
We have arrived at the end of the cycle. And it starts again.
Now what exists in our rendering (just forget about what there used to be before), our scenario, is 4 blocks in the vector "blocks". And they are these 4 blocks:
which we can sort according to these 'ID numbers', if we wanted to:
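The level-down operation just described can be sketched like this, again with our own illustrative names (the package's actual routine differs in detail):

```cpp
#include <vector>

// (same illustrative Block as before)
struct Block { long id ; double x, y, size ; } ;

// Replace blocks[i] with its 4 children: the parent's slot is overwritten by
// child 1 and the other 3 children are appended, so the parent exists no more.
void block_level_down( std::vector<Block> &blocks, long i )
{
    Block parent = blocks[i] ;
    double h = parent.size / 2.0 ;

    Block c ;
    c.size = h ;

    c.id = 4*parent.id + 1 ;  c.x = parent.x     ;  c.y = parent.y     ;  blocks[i] = c ;
    c.id = 4*parent.id + 2 ;  c.x = parent.x + h ;  c.y = parent.y     ;  blocks.push_back( c ) ;
    c.id = 4*parent.id + 3 ;  c.x = parent.x     ;  c.y = parent.y + h ;  blocks.push_back( c ) ;
    c.id = 4*parent.id + 4 ;  c.x = parent.x + h ;  c.y = parent.y + h ;  blocks.push_back( c ) ;
}
```

Note that nothing is recorded about the parent: after the call, only the current blockset exists.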
If we had scheduled block 2 from the blockset, then we would have:
void block_level_down( class std::vector
level_up_all() ; // this does a brutal level-up everywhere possible... adding conditions inside allows customizing what to leave there.
Combining a stage of detail addition and a stage of detail subtraction, we can create a dynamic balance of detail addition and subtraction: thanks to this, we can quickly arrive at any configuration, or let's say blockset.
Detail subtraction is a bit more delicate than detail addition, as is the very balance between the 2 operations itself: in fact quadruplets occur always (but not only) in areas where a block level-down has just occurred; so, doing it inadvertently may accidentally lead to subtracting detail from all those areas, cancelling the effect of the detail additions previously done.
According to this, for simplicity let's assume we subtract detail from everywhere it is possible, except in a circular area within which the camera is assumed to stay (to make this more practical, let's assume also that the radius of this circular area is nearly half of the height at which the camera is placed).
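That rule can be sketched as a one-line predicate; the names and the exact radius are our assumptions, not the package's code:

```cpp
#include <cmath>

// May a block at (bx, by) lose detail (take part in a level-up)?
// Yes, unless it lies inside the circular "protected" area around the camera,
// whose radius we take to be about half the camera's height above the terrain.
// (Illustrative sketch; names and the exact radius are our assumptions.)
bool block_may_level_up( double bx, double by,
                         double cam_x, double cam_y, double cam_height )
{
    double radius = cam_height / 2.0 ;
    double dx = bx - cam_x ;
    double dy = by - cam_y ;
    return std::sqrt( dx*dx + dy*dy ) > radius ;
}
```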
[FIGURE: figure not done yet. Thank You for Your patience ]
So far, anyway, this only produces a coverage of the surface: one which has no particular qualities. It is a bit too primitive yet, because it's hard to really define a LOD near some area in it, since there can be 2 areas next to each other covered by blocks of very different sizes.
While instead it would be nice (and soon we will introduce why it is necessary for most practical needs) to have a coverage in which blocks next to each other can have at most 1 level of difference in the nodes of the quad-tree they belong to; that is, to have sides either equally long as the nearby one's, or twice as long, or half as long.
Hasur's theorem 4 provides a simple way to arrive at such blocksets, and guarantees the correctness of that way. Given a certain legal blockset (start from one big block to be sure it is one), it says, at each step of detail addition and subtraction, which blocks can undergo a level-down operation, which block-quadruplets can undergo a level-up operation, and which cannot; in order to prevent violating the requirement of always having a 'nice' blockset. That is sufficient to arrive at a fully working algorithm, which, by the way, cheats a bit because it puts a bit too much limitation on how the LOD can vary from time to time. It's not too ugly, but sometimes LOD variations are a bit sudden, while other times it's just too "lazy", so to speak. It's quite funny, however.
At the end we will of course also give the version of the Algorithm which does not 'cheat', by replacing the use of theorem 4 with theorem 1, and adding one more stage to the 3 stages of the 'cheating' version. In that version, at each stage, any block can undergo a level-down operation, and any quadruplet can undergo a level-up operation.
Anyway, don't think negatively! There is a total of only 3 theorems mentioned. One of them is very simple, and for the other 2, in order to understand the overall Algorithm it's sufficient anyway to just trust them and use the final procedures provided by them.
The requirement of always having a blockset of the afore-mentioned 'nice' kind is also good (and actually vitally important) because such a 'nice' blockset is easy to turn into a nice, fully connected triangulated surface: it's sufficient to triangulate each block according to a simple scheme introduced later (which we preview here, anyway). In fact we want to get along with a nice, fixed-scheme triangulation, in which 1 block is made up of at most a few triangles, according to a nice, regular and simple scheme.
Since we now have an area completely covered by triangles which are fully connected (area fully covered but no superpositions occurring at all), we can assign any height to the points included in this triangulated network of points: and so we have a triangulated 3D surface (figure)!
All we know about the surface is the heightmap. So, as we change the blockset (a legal one) from which it is obtained, and as we triangulate it, we have different polygonal approximations of the same surface. As we change the blockset over time, triangulate it and visualize the triangles, we have a surface represented with varying LOD. For example, here is a pair of screenshots of the previous landscape rendered by the program, but a few moments later; a few moments during which the camera managed to move a bit ahead: the polygonal approximation of the surface here is a bit different, and some parts of the surface are more detailed, and some less detailed, than in the previous shot.
We have come to a very important point here. Let's take some space for a break, to deepen some concepts.
Now that we have come this far, let's clarify some concepts. The condition of a 'nice' blockset is vital because without it, the biggest problem is that our coverage of the big square area (which is our terrain, or surface to approximate polygonally) is made of squares, and not triangles. And a generic random blockset in most cases can't be used to produce a nicely covered, connected triangulated surface. Although it is possible to think of some creative way to triangulate the single squares so that they eventually produce a triangulated surface in which sides are nicely connected without leaving void spaces in between 2 triangles (some people call these T-junctions), that would be complicated, and most of the single blocks would have complicated and ugly triangulations.
The most widely used technical term for "triangulated surface in which sides are nicely connected without leaving void spaces in between 2 triangles" is:
"fully connected, triangulated surface".
That's all about dynamic LOD as implemented by the Algorithm presented here. If you have any doubts, think of a single point being dragged around the area of the terrain: the details vanish progressively outside the circular area (figure).
To generalize it, think that there may be more points around which to narrow detail: that's all. Fine. We have introduced the way Dynamic Level of Detail is implemented in our Algorithm. Let's refine the thing a bit.
In this Algorithm, the Dynamic LOD is achieved by adding more detail in some areas,
and by subtracting detail from some other areas.
In one program cycle (one cycle of this Algorithm) there is 1 stage of detail addition and one of detail subtraction. Of course, many areas (blocks) can be operated on during one of the above-mentioned stages.
Clearly, any distribution of detail could also be obtained by only adding detail and restarting always from 0 (1 big block): but that would not be practical, and would fail to concretize the idea of Dynamic LOD in a really meaningful way.
So far, anyway, no rule occurs to prevent the reciprocal cancellation of effects: it could happen to add detail in an area, and then to subtract that same detail... for now, let it be so. It's not a problem at all... just an occasional waste of resources. The full rendering Algorithm features a rule to avoid that waste: it's introduced in part b) of this introduction.
Blocks are stored in a vector, intended as a row of containers all of the same type (figure):
( fig. : A generic vector: a row of N generic containers. )
Refresh the page to get a different vector!
dividing further ( block level-down ):

             ____# child 1: 4*parent + 1
            /____# child 2: 4*parent + 2
 parent ---<
            \____# child 3: 4*parent + 3
             ____# child 4: 4*parent + 4

merging ( block-quadruplet level-up ):

 # child (1)____
 # child (2)____\
 # child (3)____ >--- parent: (child-1)/4
 # child (4)____/
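The ID arithmetic of the scheme above fits in two one-line functions (our own names, for illustration):

```cpp
// Quad-tree ID arithmetic, exactly as in the scheme above:
// children of block p are 4p+1 .. 4p+4, and the parent of child c is (c-1)/4
// (integer division). Block 0 is the one big block.
long child_id( long parent, int which /* 1..4 */ )
{
    return 4*parent + which ;
}

long parent_id( long child )
{
    return ( child - 1 ) / 4 ;
}
```

Note that all 4 children of one parent map back to the same parent ID, which is what makes recognizing quadruplets for the level-up operation easy.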
 _________
| 1  | 2  |
|____|____|
| 3  | 4  |
|____|____|
Triangulating the blocks is easy. Each block is triangulated so as to match its smaller neighbours. A neighbour of the same size clearly does not produce a match, because their sides match perfectly as they are.
So. If the blocks' vector is sorted according to the QID number, we clearly go from the largest to the smallest blocks: it's clear that by doing the matching triangulation on all blocks taken singularly, we arrive at a triangulation which is always good. This is to make it more convincing... but of course it works also in the case in which blocks are not sorted according to their QID number (that is, in this case, according to their size). This fact does not actually deserve a formulation as a theorem, since it's one of the very grounding ideas behind the whole Algorithm. We could say, in fact, that the whole method is built around the attempt to exploit a simple but flexible triangulation scheme, which allows, so to say, bridging orders of magnitude as soon as we put some creativity into the use we make of it (figure).
As regards the method for finding the matches for triangulation, it's sufficient to think of a simple positional search for the neighbour blocks: I have a block, and I check all the other blocks to see if the area of any of them (eventually, if there is one, there is also another) encloses a probe point, as in the scheme:
(PROVISIONAL FIGURE: thank You for Your patience)

check right:
 _______  ___
|       ||   |
|    -->||_*_|
|    -->|| * |
|_______||___|

check also left:
 ___  _______  ___
|   ||       ||   |
|_*_||<--    ||___|
| * ||<--    ||   |
|___||_______||___|

check also up:
      ___ ___
     |   |   |
 ___ |_*_|_*_| ___
|   || ^   ^ ||   |
|___|| ^   ^ ||___|
|   ||       ||   |
|___||_______||___|

check also down:
 ___  _______  ___
|   ||       ||   |
|___|| V   V ||___|
|   ||       ||   |
|___||_v___v_||___|
     | * | * |
     |___|___|

Then, apply the triangulation scheme according to the matches. NOTE: at the extreme sides of the terrain, as one could expect, by definition there is no match. The extreme sides of the terrain are neutral, so to speak.
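The positional search can be sketched like this: we probe a point just beyond one side of the block and look for the block whose area encloses it. Names are ours and only the right-side case is shown; the same probe works for left, up and down.

```cpp
#include <vector>

struct Block { long id ; double x, y, size ; } ;

// Positional search for the neighbour at the right side of blocks[i]:
// probe a point just beyond the right side and return the index of the block
// whose area encloses it, or -1 at the extreme side of the terrain.
// (Illustrative sketch; assumes a 'nice' blockset, max 1 level of difference.)
long find_right_neighbour( const std::vector<Block> &blocks, long i )
{
    double px = blocks[i].x + blocks[i].size * 1.25 ;  // just beyond the side
    double py = blocks[i].y + blocks[i].size * 0.25 ;

    for( long k = 0 ; k < (long) blocks.size() ; k++ )
    {
        if( k == i ) continue ;
        if( px >= blocks[k].x && px < blocks[k].x + blocks[k].size &&
            py >= blocks[k].y && py < blocks[k].y + blocks[k].size )
            return k ;
    }
    return -1 ;  // no match: we are at the extreme side of the terrain
}
```

A linear scan is the simplest thing that works; nothing stops one from replacing it with something smarter once the idea is clear.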
 ____
|   /|
|  / |
| /  |
|/___|

basic triangulation of a block with 0 matches at sides, into 2 triangles ; or into 4 triangles (figure):
 ____
|\  /|
| \/ |
| /\ |
|/__\|

alternative triangulation of a block with 0 matches at sides. This one includes a central point in the triangulation. That is optional: its usefulness depends on the use of the rendering system.
A terrain is a subdivision of a square area into a set of smaller blocks of various sizes; a set of blocks in which adjacent blocks' sides can be equally long as the neighbour's, half as long, or twice as long;
A set of blocks, which are then triangulated so as to form a fully connected, triangulated surface.
The set of blocks of the kind mentioned above varies from time to time, so producing a polygonal approximation of the terrain in which the distribution of more detailed and less detailed areas varies over time according to some criteria.
That's how this Algorithm concretizes the concept of Dynamic Level of Detail. Let's see the cycle. It is made of 3 stages: we have outlined all of them so far, one by one. So, here is an overview of the 3 stages:
1. detail addition;
2. detail subtraction;
3. triangulation of blocks;
(and then graphical visualization of the final result, of course)
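Put together, one program cycle can be sketched as a skeleton, with stubs standing for the 3 stages just listed (all names are ours; the real implementation fills the stubs in):

```cpp
// One cycle of the Algorithm, as a skeleton: the three stages named above,
// repeated conceptually forever. (Stub functions; all names are ours.)
static int cycles_run = 0 ;

void stage_detail_addition()    { /* level-down the blocks enclosing the points of interest */ }
void stage_detail_subtraction() { /* level-up the quadruplets far from the camera */ }
void stage_triangulation()      { /* triangulate each block so as to match its smaller neighbours */ }

void run_one_cycle()
{
    stage_detail_addition() ;
    stage_detail_subtraction() ;
    stage_triangulation() ;
    // ... and then graphical visualization of the final result
    cycles_run++ ;
}
```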