I'm trying to generate a real-sized planet with WebGL. After researching, I came to the conclusion that the best way to get around 32-bit float precision issues (since GPUs don't support double precision) is to use 64-bit integer coordinates. I would generate these coordinates on the CPU, then send them as integers to the GPU. After that, I would get the position of the vertices relative to the camera by doing:
vertexPosition - cameraPosition
Since the result is small, I can now safely convert it to 32-bit floating point and render it. All was good until I realised that, because the coordinates are large integers, I can't multiply them by a floating-point model matrix if I want to rotate the planet around its center: once converted, the camera-relative floating-point coordinates are no longer relative to the planet's center.
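To make the setup concrete, here is a minimal sketch of the camera-relative trick described above, assuming coordinates are stored as BigInt (64-bit integers) on the CPU side; the names `vertexPosition` and `cameraPosition` and the sample values are illustrative:

```javascript
// World-space positions as 64-bit integers (BigInt). The magnitudes are
// planet-scale and would lose precision as 32-bit floats.
const vertexPosition = { x: 6371000001n, y: 12n, z: -3n }; // on the surface
const cameraPosition = { x: 6371000000n, y: 10n, z: 0n };  // camera nearby

// Subtract in 64-bit integer space first; the difference is small...
const rel = {
  x: vertexPosition.x - cameraPosition.x,
  y: vertexPosition.y - cameraPosition.y,
  z: vertexPosition.z - cameraPosition.z,
};

// ...so it now fits losslessly in 32-bit floats for the GPU.
const relF32 = new Float32Array([Number(rel.x), Number(rel.y), Number(rel.z)]);
console.log(relF32); // Float32Array [ 1, 2, -3 ]
```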
Is it possible to scale the model matrix by a large value, convert it to integers, multiply it by the position, and then divide by that same large value? I know there must be a solution, but I can't find it.
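Here is a sketch of that fixed-point idea in 2D, to show what I mean: the rotation matrix entries are scaled by a large constant, rounded to integers, multiplied against the 64-bit integer position, and the result is divided by the same constant. The `SCALE` value and all names are just for illustration:

```javascript
// Fixed-point scale factor (2^32), applied to the matrix entries.
const SCALE = 1n << 32n;

// 2D rotation by 90 degrees, entries scaled and rounded to BigInt.
const angle = Math.PI / 2;
const m = [
  BigInt(Math.round(Math.cos(angle) * Number(SCALE))),
  BigInt(Math.round(-Math.sin(angle) * Number(SCALE))),
  BigInt(Math.round(Math.sin(angle) * Number(SCALE))),
  BigInt(Math.round(Math.cos(angle) * Number(SCALE))),
];

const p = { x: 6371000000n, y: 0n }; // point on the planet surface

// Integer matrix-vector multiply, then divide the scale back out.
const rotated = {
  x: (m[0] * p.x + m[1] * p.y) / SCALE,
  y: (m[2] * p.x + m[3] * p.y) / SCALE,
};
console.log(rotated); // { x: 0n, y: 6371000000n } (up to rounding error)
```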
Thanks