New Terrain System

What is Terrain Rendering?

A brief introduction to terrain (or surface) rendering

As You might know, one of my favourite topics is advanced surface (or terrain) rendering. So, in case You were wondering what the 'basic' surface rendering looks like instead, well, thanks to the fantastic PHP + SVG graphics technology, I wrote a little script to show it to You (fig.1)!

( fig.1: A simple, triangulated surface. Refresh the page for a different example. )

While there is only one 'basic' method, there are practically countless advanced methods: each has its advantages and disadvantages, so the choice is important.

The great challenge of advanced terrain rendering, besides doing it all fast, lies in the following fact: while with sorting algorithms it's relatively easy to process data accessing only small parts of it at a time (full streaming), with terrain-rendering algorithms this is very tricky. And this is a problem, because while a sorting algorithm typically uses the same small amount of memory regardless of the size of the dataset one wants to sort, with terrain-rendering algorithms this is hard to achieve. And if it cannot be done, there is a limit to the size of the terrain one can render with unlimited draw distance. Because yes, potentially unlimited draw distance is a big attraction. But can it be done or not? What I did in this field was, essentially, to devise a method which achieves a memory usage independent of the terrain's size, and which allows that "full-streaming" access relatively easily. This also proves that the answer to the above question is: AFFIRMATIVE! Isn't that great? I arrived at the method by trying to generalize another one to unlimited draw distance: it was damn hard, but in the end I managed it. And yes, the "generalized" version is simpler than the original one!


video footage: https://youtu.be/2R5sLfQAtMs


For small terrains the 'basic' method is good because it's simple. But an advanced method is needed for large terrains, especially when one also wants to render with very long or even unlimited draw distance. The 'basic' method fails to satisfy these requirements because of its very high triangle count.

If the Reader is interested in advanced methods, there will be a very easy-to-read paper that a friend and I wrote to introduce the algorithm I've dealt with in this field. It covers only one method, but that's it. For other methods, vterrain.org hosts a huge collection of material on the terrain-rendering topic.

Essentially, I chose to create a method based on the triangulation and, in general, the surface-approximation scheme pioneered by Peter Lindstrom and his collaborators (the first method of this kind, published in 1996), because although it's quite complicated compared to other methods, it is a method which literally gets at the heart of the problem of polygonal surface approximation.

And this is important because just as empty space (take a Cartesian reference frame, 2D or 3D, and a polar or spherical coordinate system) has a structure, so do surfaces represented polygonally. In the aforementioned method, a surface has the intrinsic structure of a quadtree of blocks. And that's it! It's hard to deal with, but in the end it's worth it because it provides a true, rock-solid theoretical way of thinking about this problem. It literally and completely solves the problem of approximating, representing, and even thinking about surfaces: but why? Read below to find out!

The key reason is this:
since the surface/terrain is a (partial!) quadtree of blocks, and everything is based on halving existing blocks and merging 4 sibling blocks into a twice-as-big one (per side, of course), this scheme can literally cross (or let's say "embrace") multiple orders of magnitude. Which means that as long as You are fine with lots of detail near the viewer but exponentially decreasing detail farther from it, You can represent even a 1 million x 1 million heightmap with just a few thousand blocks.

I am not a mathematician or an academic figure, but a nerd: so I have no reluctance to tell You how it all works, in 1-2 simple sentences. Let's go!
Recall, as I said a few sentences ago, that according to the original method pioneered by Peter Lindstrom and his collaborators, published in 1996, a surface or terrain is a quadtree of blocks (fig.2).

( fig.2: left: a simple graphical representation of a subdivision ; right: a generic representation of the subdivision's quadtree, displayed so that its nodes coincide with the centers of the subdivision's blocks )

Refresh the page to get a different one!


And this is fine:
but as we know, a tree structure can also be used to represent a subdivision of a segment, of a square (our case), of a cube, or of anything like that. But in many cases it's a sort of waste, because eventually what is needed are only the leaf nodes of the tree structure (NOTE: in our case the tree is usually only partially complete; it's a structure very similar to natural trees... just think that the nodes of our abstract tree always grow either 0 or 4 branches).
It was intuitive to try representing all this with a simple vector (fig.3).


( fig.3: A generic vector: a row of N generic containers. )

Refresh the page to get a different vector!



and get around the problem of keeping a huge tree structure entirely in memory; but in order for the rendering method to be operational, some operations also needed to be feasible on the subdivision represented by the quadtree in its current configuration.
Well... I simply did months of research to confirm or refute the suspicion, and devised my first theorem on polygonal surface approximation: it confirmed and proved that one can perform all the required operations, besides successfully storing the subdivision. It also clearly provided the procedure to do all this. So, it was done!
In short, I removed the quadtree stuff from the original method and derived the first experimental version of my method, whose memory requirement is totally independent of the terrain's size, and depends only on how many blocks there currently are in the rendering (which is, simply, a fully connected, nicely triangulated polygonal surface).
That's how the concept of a terrain of potentially unlimited size comes into play: we can cross orders of magnitude, so... think how large 2 to the 32nd power is... and then take the base-2 logarithm of it: it's a number as small as 32. Useless to make long explanations and examples here: just think about it. Figure 4 illustrates the concept:

( fig.4: simple graphical representation of how the method can cross orders of magnitude of distances, with very few blocks )

Refresh the page to get a different one!


But let's refine the picture a bit: I said earlier that in the first experimental version of my terrain (or surface) rendering algorithm, the memory usage was proportional to the total number of blocks. But the performance depended on roughly the square of the number of blocks, which is not too good. So, after another bit of research, I suspected that this dependence could be brought down to depend linearly (that is, proportionally) on the number of blocks present in the rendering at any given time as the program runs.
So I devised my second theorem on polygonal surface approximation, and it worked! It also made the source code even shorter, since theorem 2 offers a really simple and short solution to one of the annoyances of the overall procedure.
Well, that was the story of how things went; hope You liked it! As soon as the paper presenting the whole algorithm (with nice figures, too) is ready, it will be accessible through the links on this website.
Some provisional material could be this (video attachment n.4 to my curriculum):
http://youtu.be/fPow5pTVtdc
Though I warn that when I made that video explanation, work was still ongoing, and I had not even discovered my second theorem on surface approximation yet. So at the time of that video, the whole outlook was still rather experimental.

And after this, rendering the surface as the camera changes position and angle (orientation, as nerds usually call it) is just a matter of changing the distribution of the detail so as to show, from time to time, only those details which are relevant for that camera shot.

A true theoretical framework for surface approximation, analysis, and representation with dynamic level of detail: with or without approximation.
Even cheating is possible, as is benefiting from clamping the view distance... very flexible, indeed.


The purpose was and is NOT to devise the best terrain (or surface) rendering algorithm in the world (if only because there can be no such thing), but the best-documented one; the one documented in such a way that it is simplest to understand for most people. And this is within reach, because this is already one of the simplest algorithms in the advanced tier, and one with a very short source code (printed on A4 paper, the C++ source code would amount to just about 20 pages): in fact it takes as little as 10 seconds to compile with the "-O3" option for maximum optimization (at the cost of slower compilation). Isn't that great?


So, essentially, at this point the only remaining practical difficulty is how to access the heightmap in a convenient way when it is very large: using a plain bitmap file to store it, and accessing it in full streaming with a 20-line routine relying on the fscanf() function from the standard library of the C/C++ programming language, would be fine... but for terrains larger than 4k x 4k height samples, the file would be too large and also... well, very lengthy to download from a website. Well, this is not the place to discuss these little annoyances, so let's go on. I hope to dedicate some material to this in the future anyway.


So, now that I've managed to solve the problem, You are probably interested in knowing what's next. Next is corroborating the results. In one sentence: implementing the variant which reads the heightmap in full streaming, possibly not from a heightmap stored as a bitmap file... instead possibly using PNG files, as they are compressed.
So I'm going to outline shortly what I'm thinking of doing next: let's go.

Following what I tried to outline in the article about algorithms, their use and importance, in order to find a simple but rock-solid solution, the first thing I did was to investigate the consequences of the algorithm as it is now.
Let's state it shortly: essentially, when a very large terrain with full draw distance has to be rendered, there's the practical problem of accessing the pixels of the heightmap (stored as a simple 16-bit-per-pixel grayscale image, or as an 8-bit-per-channel RGB image in which we put the first 8 bits of each sample into the red channel and the other 8 bits into the green channel).

If it's stored as a bitmap file, there's no problem, since all pixels can be accessed instantly, comfortably reading it from the hard disk just as if it were a plain text file containing a bunch of numbers (with the fscanf() function in the C/C++ programming language).
Moreover, with my algorithm very few pixels have to be read at a time, so access speed is not even a problem. The problem is that storing large heightmaps in a non-compressed format (e.g. bitmap) makes for very large files. And that's not practical; it's fair to aspire to do better.
For an 8k x 8k bitmap we would have a file roughly as large as 200 MB: not a disaster, but for larger terrains this would grow even larger, and that's just not a neat solution.

Well, I first thought of trying to exploit the features of the PNG compressed image file format, but there are quite a few issues: it all seems too tedious to work out.
Up to a 4k x 4k terrain, using the bitmap format is still fine, since it's relatively easy to access in full streaming. But algorithmists are there to search for optimal solutions, as far from second-rate compromises as possible.

So what I'm thinking of doing is working out a personalized compressed format which the algorithm could use directly. Without going into details here, this is quite natural within the context of how the algorithm does its job, and such a file format could also directly support full-streaming access to the data, without any real need to decompress even part of it before accessing it.

Essentially, we would start from a PNG or plain bitmap file, run a preliminary analysis routine over the surface, and create the above-mentioned compressed file.
Then we would simply run the rendering, which this time would use the personalized compressed file.
The compression routine, of course, is relatively easy, and would use an amount of memory independent of the heightmap's size.

I've already started to spare some time here and there to work out this refinement.


date:
10 April 2015
by:
Simon Hasur, a nerd of algorithms.

Article first hosted on the website The Nerd of Algorithms
