Hi & thank you again for your help.
Okay, so in the pixel shader I'm doing this:
Code:
outPos = Vec(outTex-0.5,1); // -0.5 centers the projected 2D UV mesh, else it's in the top right.
Vec4 project = Project(outPos);
outVtx = project;
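Just to spell out what that -0.5 shift is doing, here's a minimal C++ sketch (the `Vec2` struct and `centerUV` name are only illustrative, not engine API): a UV in [0,1] gets remapped to [-0.5,0.5], so the mesh sits centered on the origin instead of in the top-right quadrant.

```cpp
#include <cassert>

// Illustrative stand-in type, not the engine's own Vec class.
struct Vec2 { float x, y; };

// Mirrors outTex-0.5 in the shader: remaps a UV in [0,1] to [-0.5,0.5],
// centering the projected 2D mesh on the origin.
Vec2 centerUV(Vec2 uv)
{
    return { uv.x - 0.5f, uv.y - 0.5f };
}
```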
After clicking "Capture", I then call Renderer.capture(Image,resX,resY,IMAGE_F32_4,IMAGE_2D,1,true); and flip the resulting image vertically (as the mesh has flipped Y texture coordinates).
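In case it's useful to compare against, the vertical flip on the CPU side amounts to something like this (a rough sketch assuming a flat float-RGBA buffer with rows stored top-to-bottom; the real captured Image class will have its own accessors):

```cpp
#include <vector>
#include <algorithm>

// Flip a float-RGBA image (4 floats per pixel, as in an IMAGE_F32_4
// capture) vertically, by swapping row y with row height-1-y.
void flipVertical(std::vector<float>& pixels, int width, int height)
{
    const int rowFloats = width * 4; // 4 floats (RGBA) per pixel
    for (int y = 0; y < height / 2; y++)
    {
        std::swap_ranges(pixels.begin() + y * rowFloats,
                         pixels.begin() + (y + 1) * rowFloats,
                         pixels.begin() + (height - 1 - y) * rowFloats);
    }
}
```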
Part of what I'm working on requires reconstructing the mesh from the image (where R/G/B represents X/Y/Z, hence my altering the colours earlier). So in another shader program I simply read the image input in the vertex shader for the mesh. This code works, as I had it near enough working when I generated the projected 2D UV mesh on the CPU.
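For clarity, the reconstruction lookup boils down to something like this (again a sketch with an assumed flat RGBA float buffer, not the actual vertex-shader code):

```cpp
#include <vector>

// Illustrative stand-in type, not the engine's own Vec class.
struct Vec3 { float x, y, z; };

// Each vertex samples its texel in the captured image and uses the
// R/G/B channels directly as its X/Y/Z position.
Vec3 reconstructVertex(const std::vector<float>& pixels, int width,
                       int px, int py)
{
    const int i = (py * width + px) * 4; // 4 floats (RGBA) per pixel
    return { pixels[i + 0], pixels[i + 1], pixels[i + 2] };
}
```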
Here's the result from the GPU:
As you can see, the vertices aren't quite hitting the correct UV space (they nearly are, as you can make out the man). It's very close to being right, but you can see the edge vertices (by the seams) are hitting pure black on the image.
So I'm wondering if I can generate this while somehow ignoring the camera, e.g. by outputting an image directly from the pixel shader rather than doing a capture. Or perhaps you can help me find the right camera settings so I don't get these "near-miss" vertices in UV space. I'll keep playing with the code you suggested in the last post.