The problem is not one of creating shadows. It is all about polygon Z fighting.
In an aircraft, the camera sits at the pilot's eye point.
The closest part of the cockpit is within about 0.1 units (metres) of the eye point.
The pilot has to have a view distance of around 300 km (300,000 units). Yes, water surfaces and sloping shorelines have to be visible at these distances for the view to be realistic.
The ratio of the far plane to the near plane for the frustum is therefore 3,000,000 to 1.
With the Z buffer set to 24-bit mode (the default for Windows programs), the far/near ratio should be no larger than around 5,000 to 1, otherwise Z fighting of the graphics becomes visible. (At 3,000,000 to 1 the terrain polygons swap backwards and forwards badly.)
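Just to put some numbers on it, here is a rough back-of-the-envelope check I use. It is only a sketch: the step formula dz = z*z*(far-near)/(far*near*2^bits) is the usual approximation for a standard hyperbolic (1/z) depth buffer, nothing DarkGDK-specific, and the helper names are my own.
Code:
#include <cmath>
#include <cstdio>

int main()
{
    const double steps = std::pow( 2.0, 24 );   // 24-bit Z buffer -> ~16.7 million distinct depth values

    const double nearP = 0.1;                   // metres
    const double farP  = 300000.0;              // metres

    // Worst-case distance between two adjacent depth values at eye distance z,
    // for a standard hyperbolic (1/z) depth buffer.
    auto depthStep = [&]( double z )
    {
        return z * z * ( farP - nearP ) / ( farP * nearP * steps );
    };

    std::printf( "depth step at  10 km: %.0f m\n", depthStep( 10000.0 ) );
    std::printf( "depth step at 100 km: %.0f m\n", depthStep( 100000.0 ) );
    std::printf( "depth step at 300 km: %.0f m\n", depthStep( 300000.0 ) );
    return 0;
}
With a 0.1 near plane the step size out at terrain distances is tens of metres or more, which is exactly the polygon swapping I am seeing.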
I have tried swapping over from the Z buffer to a linear W buffer to extend the range before Z fighting becomes visible, but that started causing huge problems for close objects inside the cockpit.
To overcome this Z fighting one has to run with something like a 32-bit Z buffer or larger, but almost none of the current video cards support this.
The only solution that tends to be used is to build up the image in 2 steps.
Have one camera rendering from 0.1 to around 200, and another camera rendering from 200 to 300000.
The image from the far camera gets rendered first, and then the image from the near camera is drawn over the top of it.
This approach works very well, but it can only function if one can stop the first rendered image from being erased.
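To make that concrete, here is roughly what the two passes look like in plain OpenGL. This is only a sketch of the idea: setProjection(), drawFarScene() and drawNearScene() are hypothetical stand-ins for whatever the target library provides, not real library calls.
Code:
#include <GL/gl.h>

// Hypothetical stand-ins for whatever the engine/library provides.
void setProjection( float fNear, float fFar );   // rebuild the projection matrix
void drawFarScene();                             // terrain, water, distant objects
void drawNearScene();                            // cockpit and other close geometry

void RenderFrame( void )
{
    // Pass 1: far camera, 200 m .. 300 km frustum. Clear everything and draw.
    setProjection( 200.0f, 300000.0f );
    glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );
    drawFarScene();

    // Pass 2: near camera, 0.1 m .. 200 m frustum.
    // Clear ONLY the depth buffer, so the far image stays in the colour buffer
    // and the near geometry is drawn over the top of it.
    setProjection( 0.1f, 200.0f );
    glClear( GL_DEPTH_BUFFER_BIT );
    drawNearScene();
}
The whole trick is that second glClear: depth only, never colour.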
The proof-of-concept code I am trying to port across to this library was created under "DarkGDK", and I am trying to use the same approach here.
Basically the cameras were created like this:
Code:
void SetupCamera( void )
{
    // Camera variables
    float fHorizontalPixels = 1024.0f;
    float fVerticalPixels   = 768.0f;
    float fAspect           = fHorizontalPixels / fVerticalPixels;
    float fFov              = 1.0f;       // field of view in radians *******
    float fNear             = 0.3f;
    float fMiddle           = 300.0f;     // change-over point between the 2 cameras
    float fFar              = 300000.0f;

    dbMakeCamera( 1 );                          // create the second camera (camera 0 already exists)
    dbSetCameraRange ( 1, fNear, fMiddle );     // camera 1 = near camera
    dbSetCameraRange ( 0, fMiddle, fFar );      // camera 0 = far camera
    dbSetCameraAspect ( 1, fAspect );
    dbSetCameraAspect ( 0, fAspect );
    dbSetCameraFOV ( 1, fFov * 180.0f / 3.14159f );   // dbSetCameraFOV takes degrees
    dbSetCameraFOV ( 0, fFov * 180.0f / 3.14159f );

    /**********************************************************************************************************************
     * Disable the backdrop of the closest camera so we can see the content of the furthest camera behind it.
     * REMEMBER.. cameras create their picture starting from the lowest numbered camera sequentially up to the highest.
     ***********************************************************************************************************************/
    dbBackdropOff( 1 );
}
And the rendering was a simple
Code:
dbSync( ); //Update screen
in the game loop.
Any camera that was active would render.
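For completeness, the whole DarkGDK skeleton looked roughly like this (written from memory, so treat the sync setup calls as approximate):
Code:
#include "DarkGDK.h"

void SetupCamera( void );   // the two-camera setup shown above

void DarkGDK( void )
{
    dbSyncOn();             // take manual control of screen updates
    dbSyncRate( 60 );
    SetupCamera();

    while ( LoopGDK() )
    {
        // ... move the aircraft, terrain and cameras here ...
        dbSync();           // camera 0 (far) renders first, then camera 1 (near) draws over it
    }
}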
The problem with the other library was that it was very slow, and it did not provide a way for the programmer to control the rendering of each individual object.
Because aircraft terrains are so huge, I eventually went to three active cameras: the near and far cameras moved around the terrain creating their images, and then I overwrote that image with a third camera sitting at world(0,0,0) that only rendered the cockpit.
What I want to do now is unwind this third camera and actually reposition the game world every frame, so that ALL the rendering cameras always stay at world(0,0,0).
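A minimal sketch of what I mean by repositioning the world (a floating origin) is below. Vec3, Object and placeObject() are my own stand-ins, not calls from any particular library; the point is just that the absolute positions stay in doubles and only the small render-space offsets are handed to the engine as floats.
Code:
#include <vector>

struct Vec3 { double x, y, z; };        // absolute positions kept in double precision

struct Object
{
    int  id;          // handle in whatever engine is used
    Vec3 worldPos;    // absolute position in the simulation
};

void placeObject( int id, float x, float y, float z );   // hypothetical engine call

void repositionWorld( std::vector<Object>& objects, const Vec3& eyeWorldPos )
{
    // Every frame, shift the whole scene by minus the eye position so the
    // rendering cameras never leave world(0,0,0).
    for ( const Object& obj : objects )
        placeObject( obj.id,
                     static_cast<float>( obj.worldPos.x - eyeWorldPos.x ),
                     static_cast<float>( obj.worldPos.y - eyeWorldPos.y ),
                     static_cast<float>( obj.worldPos.z - eyeWorldPos.z ) );
}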
I hope this explained what I am trying to do.