
New Terrain System

A Very Simple View-Dependent Terrain-Rendering Algorithm

whose performance and memory usage are independent of the terrain's size, and which is altogether very short and simple
[PAGE UNDER CONSTRUCTION: thank You for Your patience]

Algorithm by: Simon Hasur
Paper by: Simon Hasur and Niso R.
Date: Nov 2014 - Apr 2015


motto of the topic:
To find the Job, use Optimism: an Algorithm which relies on Dynamic Level of Detail
to give more detail near good things and less detail near bad things.
The Job is somewhere near a site with the highest level of detail.

We are going to present to the Reader a terrain-rendering Algorithm, in a very short and simple yet complete way. By terrain-rendering algorithm we essentially mean a polygonal surface-approximation method: one which allows approximating more or less brutally near different sites of the surface, and which can change dynamically how the more-or-less brutal approximation is distributed across the surface (a triangulated surface in this case).

To avoid confusion, the algorithms' nerd among the authors gave a name to the Algorithm: New Terrain System. Not much fantasy in there, but that was the name of the experimental software package while it was under development and experimentation, so there was no point in inventing another name for it and creating confusion.
We will, however, also deal with a variant of that algorithm, called Simplified Terrain System: it is much the same, but overall a bit shorter and simpler, and the name was coined to point that out. We will start with that one, so don't worry about differences and similarities between the 2 variants. They will resolve almost seamlessly when it's time for it.

Let's get started.

The Algorithm presented provides advanced rendering (very large terrain with full draw-distance), but it's very simple. It's also computationally very light. And most importantly, with this Algorithm the terrain can be arbitrarily large.

It was established through heavy research precisely in order to provide a rendering system that is advanced, but also very simple to explain and to understand; and one that has a very short sourcecode when implemented as a full computer-program, in, for example, the C++ programming language.

That's why we intend this to become one of the most adequately documented terrain- and surface-rendering algorithms; which, in this case, coincides with surface-approximation too (this is actually a quite deep concept which shall be addressed in a future paper, a sort of complement to this one). With an overall not too complicated Algorithm, one which can be implemented in no more than 20 pages of, say, C++ sourcecode, this is easier to do. There are lots of rendering algorithms out there, but in fact very few of them are well-documented... if documented at all.

The implementation of the Algorithm as a computer-program is provided fully, and it's written in the C++ programming language. Together with some more-or-less downgraded variants to make it easier to understand and to use. It can also easily be re-used as a library.
All can be downloaded from here:


Where the 4 packages relevant for this paper are:

order of appearance ; item's name ; description

0 ; all the items in the folder "for_documentation" ; Download this first: it amounts to a few kilobytes. It's a series of down-graded versions of the final implementation provided by the packages listed below. This is the most crucial part of the documentation! You will get along with this small 'kit' alone for the whole part 1 of the paper. Some programs here are graphical (they use the SDL library for graphics), while some are text-based (and use only the Standard C and C++ library), so it's very easy to compile on any system, including Windows, and to avoid compromising clarity and usability with any messy 'prerequisites'. Some of these create SVG files with fprintf(), so there's a lot of fun anyway.

1 ; "terrain_system_smartss3_release1_SMALLFILES" ; this is mostly to illustrate the concepts.

2 ; "terrain_system_smartss3_release1_MULTITHREAD_SMALLFILES" ; this is more industrial-strength stuff: for multi-processor computers this is better because it is multi-threaded.

3 ; "simplified_terrain_system_MULTITHREAD_SMALLFILES" ; a crucial resource to illustrate the concepts, but it has some interesting properties also on its own: although it cheats a bit, it's faster, so it's very interesting for use in very lightweight programs. This goes both for the multi-threaded and the simple variant. By default, it is set to multi-threaded. To make it non-multithreaded, just change these lines in the class_Terrain_new.cpp file, like this:

resul = triangulate_PROGRESSIVE_blocks_according_to_detail() ; // NON-MULTITHREAD (SIMPLER and better for single-core computers)
// resul = triangulate_PROGRESSIVE_blocks_multithread() ; // MULTITHREAD: more complicated but better for multicore computers.

The way this variant 'cheats', so to speak, actually casts some very interesting light on the properties of polygonally approximated surfaces.

And of course some videos showing footage of most of the above-mentioned programs:
of course they also feature some explanation along with the footage, so as to make them more meaningful and human.

As said a little before, this paper can be read and understood by anyone.
We also tried to make it funny. It is particularly indicated if You play 3D videogames, or if You like geography, cartography, or topics such as planet-exploration and mapping.
If instead You are a reader involved in programming/software-development at any level, let's state it so:

If You need a terrain-rendering method to use right out-of-the-box because You want to quickly make a 3D flight simulator, a 3D adventure game, or a planet-exploration game, this article and the attached sourcecode may, we think, be good for You too. As it is fine also if You are interested in geography, 3D rendering, techniques to approximate surfaces, or if You are interested in Algorithms.

First, let's show the thing a little bit, then let us introduce it gently.

Fig.1: the terrain rendering Algorithm at work. a) solid visualization ; b) wireframe visualization... the very characteristic triangulation scheme is clearly visible. Its main strength is its ability to accumulate detail, so to speak, around the sorts of edge-lines which naturally arise on a naturally-shaped terrain in a more or less pronounced way, depending on the specific terrain and on the current camera-shot.

This Algorithm uses the triangulation-scheme pioneered by Peter Lindstrom and co-workers, published in 1996; later developed further by P. Lindstrom himself together with Valerio Pascucci, published in 2001 and 2002.
The new Algorithm was established by trying to work out something that uses that triangulation-scheme, but does all the rest in a way as simple as possible, and in a way that the memory usage is independent of the terrain's size. The new Algorithm is really simple and does all it was expected to do: its typical memory usage is at most 16 MB; the speed instead is directly proportional (linear) to the total amount of detail which happens to be in the current camera-shot. That's a really low profile as to system requirements, even if the overall speed is not excellent - though still very good.

After the New Terrain System Algorithm was established, the algorithms' nerd among us accidentally discovered also a variant of that Algorithm: the Simplified Terrain System. That one is even simpler and also faster, even if it does the job by literally cheating during one of the stages of the overall procedure. Since all the other concepts hold the same anyway for both algorithms, first we will present the simplified variant, and then the serious variant. Probably You will be surprised by how much softer the frontal encounter with the "serious" Algorithm will be this way.

The most crucial concepts will be introduced gradually enough, while complementary things may be introduced suddenly. But don't worry... if something is not clear at the beginning, by the end doubts will certainly disappear. That's why this paper has lots of figures and is also somewhat lengthy: so as to allow going through the most crucial concepts not only once, but twice. First in a simpler way, and the second time in a slightly more involved way.
To finally put the stuff together into the full Algorithm.

For example, let's take our previous statement which says that the terrain can be arbitrarily large, and let's add some more involved things to it. So.

A typical value could be a terrain made of 8k x 8k height-samples, but 16k x 16k or 32k x 32k are fine too. There is no intrinsic limit, as long as there is enough room on the hard-disk to store a bitmap file holding all that information. For simplicity, we assume the usage of bitmap image-files. This is because they can be easily accessed in full streaming, just like a text file with a simple function from the standard C/C++ library such as fscanf() ;

But the fact of having no intrinsic limit is a very important quality, since a really appealing algorithm should not put limits on the size of the dataset on which it works. Just think of a simple number-sorting algorithm: the most practical ones, however simple they are, can sort any large set. This is because they store only a pair of numbers at a time, so that they can be tuned to sort the numbers at a constant memory usage, by sorting the stuff in place (e.g. when the numbers are in a large text-file).
With terrain-rendering algorithms this quality is more difficult to attain, but this algorithm attains it:
that's why it took something like 2 years of heavy research to establish it.

So, enjoy it!

a) The Basics

Said very roughly, the main idea behind this Algorithm is that whether we want a view-dependent decision of what detail is included in the triangulated surface, or we just use some rough criteria like putting smaller blocks near the camera and bigger blocks farther from it, it is still Dynamic Level of Detail.

The view-dependent thing is certainly better, but the idea is that the main purpose is not to provide each instant (like 30 frames per second) all and only the details actually visible from the current point-of-view with a given perspective projection of points to screen, but rather to gradually vary the detail in a way to arrive quickly at the right amount of detail in the right sites of the terrain as the camera moves around: or at least to approach the ideally right detail.

Let's state it another way: what we want is an Algorithm which provides most of the relevant Detail by the time we get close to the site we want to inspect, and while we approach it. The Detail must change quickly, but need not always be the ideal set of visible details. It must be mostly the right set... a set containing most of the relevant stuff. This Algorithm exploits this idea of Dynamic Level of Detail as much as possible, to do a good job in a simple and very fast way. That is, to always progressively approach the ideally right level of detail: where progressively means that it does a little change at a time, but in a context in which those little changes, if iterated many times, allow reaching any configuration of how detail is distributed across the terrain being rendered.

The basic functionalities for providing DLOD, as implemented in this Algorithm, provide literally the language for expressing and implementing the advanced features which provide View-Dependent Continuous DLOD. A V-D CLOD system simply means that, ideally, it is able to provide a completely seamless transition of LOD as seen by the observer: no pop-up of detail and suchlike visual artifacts, because the LOD to transition to can be determined exactly. In fact a rendering method's purpose is not merely that of storing data: in our case, data about a terrain in the form of a heightmap, or more advanced things like the one explained in one of the sections dedicated to advanced topics. So, CLOD is usually a synonym of advanced, industrial-quality methods. In order to understand clearly and precisely the advanced features which make the difference between a generic DLOD system and a V-D CLOD system, we need to understand the way this Algorithm implements, or let's say approaches, the problem of DLOD. Let's explain it simply and concretely.

The whole procedure is a conceptually endless repetition of a single program cycle. This is a program cycle (figure).


According to how the DLOD is implemented in this Algorithm, at each cycle there exists a configuration: a set of blocks, covering a square area (figure).

This is a square area (later on that will be our terrain or polygonal surface, so to speak):

( fig.3: a big square area. )

This, instead, is a sequence of different coverages obtained by performing various subdivisions on it:

( fig.4: block subdivision sequence )

Here is an example of a more elaborate subdivision: look at the figure!

( fig.5: simple graphical representation of a subdivision. )

Refresh the page to get a different one!

And there is absolutely no such thing as a record of what there used to be in the cycle before, or of what there is going to be after. What exists is the current blockset. Thus we have a vector containing our blocks.


At the very beginning when the program is started, according to this core system, there exists one big block: this is block 0. For now don't worry about the numbers: just let us represent each block with it.


So, our vector ends here.

Now, if we want to change the LOD somewhere, let's think of it as the attempt to include some point in the mesh, as far as really "including" it is possible: because we think that that point contributes some significant information. So, our point is here for example (fig.5),


and there's a little procedure which checks if this point is within the area of any of the currently existing blocks; if the test is positive (which is our case), that block is replaced with its 4 children-blocks.
And so, the parent block exists no more: as we have added its 4 children, the parent is deleted.

And we are here (fig):

We have arrived at the end of the cycle. And it starts again. Now what exists in our rendering (just forget about what there used to be before), our scenario, are the 4 blocks in the vector "blocks". And they are these 4 blocks:


Fine. We want to ensure that the point about which we have been speaking before is included in the mesh: and so, since these 4 blocks exist, each of them is checked for matching the area: is the point within the area of block | 1 | ? - yes: so this block is scheduled to be replaced with its 4 children.
And well, for the other blocks it's | 2 | : negative, | 3 | : negative, | 4 | : negative. Then this block | 1 | is taken and replaced by its 4 children; and the parent-block is deleted.

At this point, what exists in our rendering, our scenario, are these blocks:


which we can sort according to these 'ID numbers', if we wanted to:


The little figure below provides an alternative example:

If we had scheduled block 2 from the 


blockset, then we would have: 


And so on; at some point blocks may not be divided anymore, because we put a limit, for example, on how small a block can be in the extreme case, or because the point falls exactly on one of the corners of an existing block. The procedure itself, of course, could go to infinitely small blocks: and this is good because we are sure that, provided enough memory, we can give as much detail as necessary.
At this point, it's in place to discuss a bit what information is needed to represent a block geometrically:

1. side-length, called "stride" (a floating-point number, e.g. 0.25 or 3.45987) ;
2. x,y coordinates of the lower-left corner (floating-point numbers, same story) ;

we will add the other few data as the regarding topics are introduced. This little discussion on the geometric representation of a block is useful to outline the contrast with the fact that until now, in all the procedure, we have been almost exclusively referring to elements of a vector, where each element had a number identifying it. Such an ID number exists and derives from the fact that quadrant divisions create scales, which are orders of magnitude (half of half of half of... etc.): an intrinsically hierarchic structure. We will discuss this in more detail a little bit later. But this is to justify that we have a MOST basic piece of info:

0. ID number (an integer number, e.g. 5 or 41) ;

Very fine. This is the concept of DLOD as it is implemented here, in this Algorithm. And of course, just as we could include 1 point in the mesh by progressively narrowing the blockset around that point, so we could include more points. We just have to do this check for more points, and schedule the blocks which need to be replaced by their 4 children.
And then, replace them.

From now on, we will call the operation of replacing a block with its 4 children "block level-down", or simply "a level-down operation done on a block"... or just say that we level-down a block.


An operation which consists in the following operation on the blockset-vector:


In the C++ sourcecode of all programs provided with this paper, this is implemented in the function

void block_level_down( std::vector<Block> *blocks, int i /* which block to level-down? */ ) ;

The following little program (fully text-based and very, very simple and short) relies on a minimal version of this, while all other programs use an optimized version. However minimal, this little program lets You experiment with the concepts outlined so far, and get a bit familiar with the way we concretely implement the use of a simple vector to store the current blockset:


Even if here only 1 block at a time can undergo level-down, it should be sufficient to get the idea. Should You have any doubts about the concepts we discuss further on, You can always return to this little program to restart from a very solid basis and clarify all the further developments. For every little advance there is a little program implementing what has been outlined up to the point where it's proposed. So You can take a break after each one is introduced. So, let's take a break!

So far, so fine. But for the sake of generality and practical mentality, we would like to progress to being able to perform level-down on multiple blocks in one run. So... let's explain how to do this.

We first schedule all matches and then level-down the scheduled blocks, because the check must be performed on a static set: dynamically changing sets are not a good thing to work with, in fact.

Said in short, if
a) we schedule blocks to level-down ;
b) we level-down the previously scheduled blocks ;
we guarantee determinism: a sort of guarantee for simplicity and predictability.

More on this later. For now, let us just assume that we know for sure which blocks we want to level-down. Just think of those ID numbers we have been using until now in the figures. As we will introduce later, blocks in the vector holding the blockset don't get 'messed up', because each block is unique. Moreover, thanks to this uniqueness, they can also be assumed to be sorted, so that as You go through them and level-down those which You prefer, everything always goes fine.

All what we have outlined so far, is implemented in the following little C++ program:


a little program which relies only on the standard C++ library and, besides some basic text-output, also creates a little SVG file with the standard
fprintf(...) ;
function from the Standard Library, to let You see the final result also graphically. Some of the figures of this very paper were created with this program, in fact.
In addition, the following program


adds true graphics to the above one, using the SDL library to open a graphical window and draw the pixels of the final image into it. This one, anyway, still requires the presence of a text-terminal to be operated. Fully graphical, real-time and mouse-driven variants will be presented later, as soon as we can put a bit more content into it. If You find it more comfortable to use a graphical output system other than SDL, You can easily change the program accordingly. If all else should fail, under a *nix-like system, the X11 (a.k.a. X Window System) library is always a working option. Also ascii-art can be. Later, in part b) of the paper, we will anyway progress to using also OpenGL for the 3D graphics, even if a down-graded software-rendering variant will also be given for those who are not familiar with it or don't want to use it.

Now. Adding detail is fine; but subtracting detail is also needed. The principle is similar, but a bit more tricky: essentially, if there are 4 sibling blocks, we can replace them with their parent-block (figure).


So, we define the operation of block-quadruplet level-up (figure)


on the vector holding our current blockset.

But since there are not sibling blocks everywhere, subtracting detail is not possible everywhere;
even if it is always possible in at least 1 place.

Moreover, while it always makes sense to add detail somewhere in the blockset, it only makes sense to subtract detail from wherever there occur 4 sibling blocks.

We can subtract detail from there.

Of course, it also makes sense to brutally subtract detail from wherever possible:
reiterating this operation enough many times would lead to having 1 big block present.

In the C++ sourcecode of the program implementing this, it is done by the following function:

level_up_all() ; // this does brutal level-up everywhere possible... adding conditions inside, allows to customize what to leave there.

Combining a stage of detail addition and a stage of detail subtraction, we can create a dynamic balance of detail addition and subtraction: thanks to this, we can quickly arrive at any configuration, or let's say blockset.

Detail subtraction is a bit more delicate than detail addition, as is the very balance between the 2 operations itself: in fact quadruplets always occur - but not only - in areas where a block level-down has just occurred; so, doing it inadvertently may accidentally subtract detail from all those areas, cancelling the effect of the detail additions previously done.
According to this, for simplicity let's assume we subtract detail from everywhere it is possible, except in a circular area within which the camera is assumed to stay (to make this more practical, let's assume also that the radius of this circular area is nearly half of the height at which the camera is placed).

[FIGURE: figure not done yet. Thank You for Your patience ]

So far, anyway, this only produces a coverage of the surface: one which has no particular qualities. It is still a bit too primitive, because it's hard to really define a LOD near some area in it, since there can be 2 areas next to each other covered by blocks of very different sizes.

Instead, it would be nice (and soon we introduce why it is necessary for most practical needs) to have a coverage in which blocks next to each other can have at most 1 level of difference between the nodes of the quad-tree they belong to; that is, to have sides either equally long as the nearby one, or twice as long, or half as long.

Hasur's theorem 4 provides a simple way to arrive at such blocksets, and guarantees the correctness of that way. Given a certain legal blockset (start from one big block to be sure it is), it says, at each step of detail addition and subtraction, which blocks can undergo a level-down operation, which block-quadruplets can undergo a level-up operation, and which cannot; in order to prevent violating the requirement of always having a 'nice' blockset. That is sufficient to arrive at a fully working algorithm which, by the way, cheats a bit, because it puts a bit too much of a limitation on how the LOD can vary from time to time. It's not too ugly, but sometimes LOD variations are a bit sudden, other times instead it's just too "lazy" - so to speak. It's quite funny, however.
At the end we will give, of course, also the version of the Algorithm which does not 'cheat'; by replacing the use of theorem 4 with theorem 1, and adding one more stage to the 3 stages of the 'cheating' version. In that version, at each stage, any block can undergo a level-down operation, and any quadruplet can undergo a level-up operation.
Anyway, don't think negatively! - there's a total of only 3 theorems mentioned. One of them is very simple, and for the other 2, in order to understand the overall Algorithm, it's anyway sufficient to just trust them and use the final procedures they provide.

The requirement of always having a blockset of the afore-mentioned 'nice' kind is also good (and actually vitally important) because such a 'nice' blockset is easy to turn into a nice, fully connected triangulated surface: it's sufficient to triangulate each block according to a simple scheme introduced later (which we preview here, anyway). In fact, we want to get along with a nice, fixed-scheme triangulation, in which 1 block is made up of at most a few triangles, according to a nice, regular and simple scheme.


Since we now have an area completely covered by triangles which are fully connected (area fully covered, but with no superpositions occurring at all), we can assign any height to the points included in this triangulated network of points: and so we have a triangulated 3D surface (figure)!


All we know about the surface is the heightmap. So, as we change the blockset (a legal one) from which it is obtained, and as we triangulate it, we have different polygonal approximations of the same surface. As we change the blockset over time, triangulate it and visualize the triangles, we have a surface represented with varying LOD. For example, here is a pair of screenshots of the previous landscape rendered by the program, but a few moments later; a few moments during which the camera managed to move a bit ahead: the polygonal approximation of the surface here is a bit different, and some parts of the surface are more detailed, some less detailed, than in the previous shot.


We have come to a very important point here. Let's take some space for a break, to deepen some concepts.

Now that we have come this far, let's clarify some concepts. The condition of a 'nice' blockset is vital because without it, the biggest problem is that our coverage of the big square area - which is our terrain, or surface to approximate polygonally - is made of squares, and not triangles. And a generic random blockset in most cases can't be used to produce a nicely covered, connected triangulated surface. It is possible to think of some creative way to triangulate the single squares so that they eventually produce a triangulated surface in which sides are nicely connected without leaving void spaces in between 2 triangles (some people call these T-junctions); but that would be complicated, and most of the single blocks would have complicated and ugly triangulations.
The most widely used technical term for "triangulated surface in which sides are nicely connected without leaving void spaces in between 2 triangles" is:
"fully connected, triangulated surface".

That's all about dynamic LOD as implemented by the Algorithm presented here. If You have any doubts, think of a single point being dragged around the area of the terrain: the details vanish progressively outside the circular area (figure).


To generalize it, think that there may be more points around which to narrow detail: that's all. Fine. We have introduced the way Dynamic Level of Detail is implemented in our Algorithm. Let's refine the thing a bit.
In this Algorithm, the Dynamic LOD is achieved by adding more detail in some areas,

and by subtracting detail from some other areas.


In one program cycle (one cycle of this Algorithm) there is 1 stage of detail addition, and one of detail subtraction. Of course, many areas - blocks - can be operated on during one of the above-mentioned stages.
Clearly, any distribution of detail could also be obtained by only adding detail and always restarting from 0 (1 big block): but that would not be practical, and would fail to concretize the idea of Dynamic LOD in a really meaningful way.
So far, anyway, no rule occurs to prevent reciprocal cancellation of effects: it could happen that we add detail in an area, and then subtract that same detail... for now, let it be so. It's not a problem at all... just an occasional waste of resources. The full rendering Algorithm features a rule to avoid that waste: it's introduced in part b) of this introduction.
Blocks are stored in a vector, intended as a row of containers all of the same type (figure):

( fig. : A generic vector: a row of N generic containers. )

Refresh the page to get a different vector!

New container slots can be made for inserting new elements, and slots can be deleted.
When a block is replaced by its 4 children, 4 blocks are added to the vector and the parent's slot is deleted. When a quadruplet of sibling blocks is replaced by their newly created parent, the parent block is added to the vector, and the 4 blocks which are its children are deleted.
That should suffice as an introduction to how the situation is represented and stored by the Algorithm.

At the beginning, there is 1 big block covering the whole big square area.
That big square area is the terrain.
Then this cycle, repeated ideally endlessly many times, provides a coverage of that surface: a coverage varying over time.

Now that we are a bit more in business with the basic concepts, we can say, as an extra, that a coverage is nothing but a subdivision. And at the heart of the whole Algorithm is the idea that a subdivision, even if it is tied to the concept of hierarchy, can be stored in a simple vector, since we don't need to store a full tree-structure. In a subdivision, we only have the leaf-nodes of the tree-structure which has built up as we came to that particular subdivision by dividing and dividing this or that part again. And since a subdivision is absolutely unambiguous, it's all easy. We are fine with a vector storing the ID numbers of each of the leaf-nodes of that tree-structure. The ID numbers are unambiguous too, because of the way we subdivide further, or merge 4 sibling blocks which we think are too small.

dividing further ( block level-down          ):

             ____# child 1: 4*parent + 1
            /____# child 2: 4*parent + 2
parent ---<  ____# child 3: 4*parent + 3
            \____# child 4: 4*parent + 4

merging          ( block-quadruplet level-up ):

# child (1)____
# child (2)____\
# child (3)____ >--- parent: (child-1)/4
# child (4)____/

The only thing to be careful about is to choose a geometric arrangement for the 4 children, and stick to that convention!

For example, I usually use this convention:
| 1  | 2  |
| 3  | 4  |

Triangulating the blocks is easy. Each block is triangulated so as to match its smaller neighbours. A neighbour of the same size clearly does not produce a match, because their sides match perfectly as they are.
So. If the blocks' vector is sorted according to the QID number, we clearly go from the largest to the smallest blocks: it's clear that by doing the matching triangulation on all blocks taken singularly, we arrive at a triangulation which is always good. This is to make it more convincing... but of course it works also in the case in which the blocks are not sorted according to their QID number (that is, in this case, according to their size). This fact does not actually deserve formulation as a theorem, since it's one of the very grounding ideas behind the whole Algorithm. We could say, in fact, that the whole method is built around the attempt to exploit a simple but flexible triangulation scheme, which allows - say it so - bridging orders of magnitude as soon as we put some creativity into the use we make of it (figure).


As regards the method for finding the matches for triangulation, it's sufficient to think of a simple positional search for the neighbour blocks: I have a block, and I check all the other blocks to see whether there is any of them (and if there is one smaller neighbour on a side, there is also a second) whose area encloses a probe point, as in the scheme:


(PROVISIONAL FIGURE: thank You for Your patience)

check right:
      _______  ___ 
     |       ||   |   
     |    -->||_*_|
     |    -->|| * |   

check also left:
 ___  _______  ___
|   ||       ||   |   
|_*_||<--    ||___|
| * ||<--    ||   |   

check also up:

      ___ ___
     |   |   |
 ___ |_*_|_*_| ___
|   || ^   ^ ||   |   
|___|| ^   ^ ||___|
|   ||       ||   |   

check also down:

 ___  _______  ___
|   ||       ||   |
|___|| V   V ||___|
|   || V   V ||   |
      ___ ___
     | * | * |
     |___|___|

Then, apply the triangulation scheme according to the matches.
NOTE: at the extreme sides of the terrain, as one could expect, there is by definition no match. The extreme sides of the terrain are neutral - so to speak.
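The positional check above can be sketched as follows, here only for the "check right" case. This is a naive sketch under conventions of my own choosing (blocks stored as `(x, y, size)` tuples with the origin at the block's lower-left corner; none of these names come from the original package):

```python
def encloses(block, px, py):
    """True if the point (px, py) lies inside the block's square area."""
    x, y, size = block
    return x <= px < x + size and y <= py < y + size

def smaller_right_neighbours(block, blocks):
    """Probe two points just right of the block, at 1/4 and 3/4 of its height,
    and collect the smaller blocks (if any) that enclose them.
    At the extreme side of the terrain, no block encloses the probes: no match."""
    x, y, size = block
    probes = [(x + size, y + size * 0.25),
              (x + size, y + size * 0.75)]
    hits = []
    for px, py in probes:
        for other in blocks:
            if other is not block and other[2] < size and encloses(other, px, py):
                hits.append(other)
    return hits
```

Checking left, up and down works the same way, with the probe points moved to the other three sides. Scanning all blocks for every probe is what makes this version O(N²).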


It works, but that's just to get the idea. Moreover, when a block has no matches at all, it can either be divided into 2 large triangles (figure)

|   /|
|  / |
| /  |

basic triangulation of a block with 0 matches at sides.

or into 4 triangles (figure):
|\  /|
| \/ |
| /\ |

alternative triangulation of a block with 0 matches at sides.
this one includes a central point in the triangulation. 
That is optional: its usefulness depends on the intended use of the rendering system.

where in the second case, the triangulation will include a point located at the center of the block. The inclusion of that central point is optional, of course. If it makes You uncomfortable, forget it and choose the basic triangulation as a convention. Anyway, we will return to it later, in part b).
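The two zero-match triangulations can be written down directly. Again a sketch with conventions of my own (corners listed counter-clockwise from the lower-left; each triangle is a triple of vertices):

```python
def triangulate_basic(x, y, size):
    """Split the block into 2 large triangles along one diagonal."""
    a, b = (x, y), (x + size, y)                 # lower corners
    c, d = (x + size, y + size), (x, y + size)   # upper corners
    return [(a, b, c), (a, c, d)]

def triangulate_with_centre(x, y, size):
    """Split the block into 4 triangles sharing the optional central point."""
    a, b = (x, y), (x + size, y)
    c, d = (x + size, y + size), (x, y + size)
    m = (x + size / 2, y + size / 2)             # the central point
    return [(a, b, m), (b, c, m), (c, d, m), (d, a, m)]
```

When a side does have matches, the same idea extends by inserting the midpoints of the matched sides as extra vertices; the two functions above are only the 0-match base cases.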

Very fine. Let's return for a moment to the checking method. Of course, the one presented earlier is a very primitive way of doing the check: firstly because it does really lots of checks: if we have N blocks to triangulate, the number of checks to do is proportional to N², which is a bit high. Moreover, this way of doing the check is not very essential either, because it works with points at specific positions and checks for area-matches.

The basic operations of block level-down and block-quadruplet level-up, on the other hand, take place on elements of a subdivision, so we would like to operate as much as possible on the blockset's vector, or on a tree-structure whose leaf-nodes feature in it.
And even worse, positions involve units of measurement... and earlier we said that formulations which do not involve units of measurement are usually better.
Since the subdivision made up by our blockset is stored as a vector of QID numbers, it would be more appropriate to devise an abstract operation which:
1. finds the QIDs of the 2 neighbour blocks, and
2. does a binary search in the blockset's vector to see if the block is there.

The binary search takes something between 5 and 31 steps (it is logarithmic in the vector's length), so it can be considered a constant-time task; we have to do 4 such checks for each of the N blocks, but that's fine... in fact such an abstract operation would make the whole thing a linear-time task!
Hasur's theorem 2 guarantees that this is possible, and provides a simple procedure which gives the QIDs to check for. We will introduce the theorem later.
For binary-search to work, the vector holding the blockset must be kept always sorted according to the QIDs, but this is not a big issue... it does not take so many operations.
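With the blockset kept sorted, the membership test is just a binary search. A minimal sketch using Python's standard `bisect` module; the neighbour-QID computation itself comes from Hasur's theorem 2, which is introduced later, so here the QID to look for is taken as a given input:

```python
import bisect

def contains_qid(sorted_qids, qid):
    """Binary search: is `qid` present in the sorted blockset vector?"""
    i = bisect.bisect_left(sorted_qids, qid)
    return i < len(sorted_qids) and sorted_qids[i] == qid

# Leaf QIDs of some subdivision, kept sorted:
blockset = [1, 2, 3, 17, 18, 19, 20]
assert contains_qid(blockset, 17)
assert not contains_qid(blockset, 4)
```

Keeping the vector sorted after every split or merge is cheap, since the 4 children of a block are consecutive QIDs and can be inserted as one contiguous run.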

Very fine. Now we can merge everything we have said so far into one definition of a terrain rendered with dynamically changing LOD:

A terrain is a subdivision of a square area into a set of smaller blocks of various sizes; a set of blocks in which adjacent blocks' sides can be either half as long as the neighbour's, or twice as long;
A set of blocks, which are then triangulated so as to form a fully connected, triangulated surface.

The set of blocks of the kind mentioned above varies from time to time, so producing a polygonal approximation of the terrain in which the distribution of more detailed and less detailed areas varies over time according to some criteria.

That's how this Algorithm concretizes the concept of Dynamic Level of Detail. Let's see the cycle. It's all made of 3 stages: we outlined all of them so far, one by one. So, here is an overview of the 3 stages:

1. detail addition;
2. detail subtraction;
3. triangulation of blocks;
(and then graphical visualization of the final result, of course)
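The first two stages of the cycle can be sketched as operations on the sorted QID vector. This is purely illustrative structural code of my own: the `should_split` and `should_merge` predicates stand in for whatever criteria the rendering system uses, and the triangulation stage (stage 3) is omitted:

```python
def add_detail(blockset, should_split):
    """Stage 1: replace each block that should be refined with its 4 children."""
    out = []
    for qid in blockset:
        if should_split(qid):
            out.extend(4 * qid + i for i in (1, 2, 3, 4))
        else:
            out.append(qid)
    return sorted(out)  # keep the vector sorted for binary search

def subtract_detail(blockset, should_merge):
    """Stage 2: replace each complete sibling quadruplet with its parent."""
    out, present, done = [], set(blockset), set()
    for qid in blockset:
        if qid in done:
            continue
        p = (qid - 1) // 4
        siblings = {4 * p + i for i in (1, 2, 3, 4)}
        if qid != 0 and siblings <= present and should_merge(p):
            out.append(p)          # the 4 children level-up into the parent
            done |= siblings
        else:
            out.append(qid)
            done.add(qid)
    return sorted(out)

# One cycle: start from the root, refine it once, then coarsen back.
blockset = [0]
blockset = add_detail(blockset, lambda q: q == 0)    # -> [1, 2, 3, 4]
blockset = subtract_detail(blockset, lambda p: True)  # -> [0]
```

Repeating these stages with criteria that depend on the viewpoint is what moves the high-detail region around the terrain over time.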

[PAGE UNDER CONSTRUCTION: thank You for Your patience]