So after a lot of digging I found RenderTexture.isCubeMap, only to discover it's obsolete.
Then I found RenderTexture.dimension and, by extension, UnityEngine.Rendering.TextureDimension.Cube, which seems to be what I want.
But I'm not sure how this is used. How are the six faces represented internally when the RenderTexture is created this way? I'm currently writing a compute shader that outputs to a render texture, and I'm not sure how I should write my output so it lands in the cubemap.
So what do I do with this in the compute shader...
RWTexture2D<float4> o_result;
//...
o_result[tex.xy] = float4(outputColor.xyz, 1);
As you can see, this writes to a 2D texture. Is there anything special I need to do to get it working with a cubemap? My first instinct is something like:
RWTexture3D<float4> o_result;
//...
// Where does tex.xyz lie on the outside of the cube? How do I address each face's pixels?
o_result[tex.xyz] = float4(outputColor.xyz, 1);
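My other guess, based on how Unity exposes cubemap faces as discrete slices elsewhere in the API (e.g. Graphics.SetRenderTarget takes a CubemapFace), is that the six faces are bound as six slices of a 2D texture array, addressed by a face index in z. This is a hedged sketch, not something I've confirmed — the kernel name, thread group sizes, and the assumed +X/-X/+Y/-Y/+Z/-Z slice ordering are all my guesses:

```hlsl
// Assumption: a cubemap render texture with random write enabled binds
// to a compute shader like a 2D texture array with 6 slices, one per face.
RWTexture2DArray<float4> o_result;

[numthreads(8, 8, 1)]
void CSMain(uint3 id : SV_DispatchThreadID)
{
    // id.xy = pixel within the face, id.z = face index 0..5
    // (guessing the order matches CubemapFace: +X, -X, +Y, -Y, +Z, -Z)
    float3 outputColor = float3(id) / 128.0; // placeholder color per texel
    o_result[uint3(id.xy, id.z)] = float4(outputColor, 1);
}
```

On the C# side I'd then expect to create the RenderTexture with dimension = TextureDimension.Cube and enableRandomWrite = true, and dispatch with 6 in the z dimension — but that's exactly the part I can't find documented, hence this question.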
If someone knows cubemap RenderTextures but not compute shaders, that's fine too. I'm just very unsure how this all lays out in addressable memory, and RenderTexture.dimension isn't very well documented.