Hi!
I have recently finished working on a terrain generation system that utilizes the GPU to generate the noise for the terrain and calculate the vertices and triangles of each terrain chunk.
Since getting it fully working, I have been trying to optimize it as best I can, and I noticed that calling `ComputeBuffer.GetData()` on a buffer holding a large amount of data can be very costly.
One such buffer stores the triangulation data for a terrain chunk's mesh, which is later used to create the actual mesh for the chunk's `GameObject` in the scene.
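For context, here is a simplified sketch of the readback step I mean (the struct layout, field names, and helper are illustrative rather than my exact code): dispatch writes triangles into a `ComputeBuffer`, then `GetData()` copies them back to the CPU before I build the `Mesh`.

```csharp
using UnityEngine;
using UnityEngine.Rendering;

public class ChunkMeshBuilder : MonoBehaviour
{
    // Matches the triangle struct the compute shader writes (illustrative layout).
    struct Triangle
    {
        public Vector3 a, b, c;
    }

    Mesh BuildChunkMesh(ComputeBuffer triangleBuffer, int triangleCount)
    {
        // This is the expensive part: a synchronous GPU -> CPU copy that stalls
        // the main thread until the GPU has finished writing the buffer.
        Triangle[] triangles = new Triangle[triangleCount];
        triangleBuffer.GetData(triangles);

        // Flatten the triangles into vertex/index arrays for the Mesh.
        var vertices = new Vector3[triangleCount * 3];
        var indices = new int[triangleCount * 3];
        for (int i = 0; i < triangleCount; i++)
        {
            vertices[i * 3 + 0] = triangles[i].a;
            vertices[i * 3 + 1] = triangles[i].b;
            vertices[i * 3 + 2] = triangles[i].c;
            indices[i * 3 + 0] = i * 3 + 0;
            indices[i * 3 + 1] = i * 3 + 1;
            indices[i * 3 + 2] = i * 3 + 2;
        }

        var mesh = new Mesh { indexFormat = IndexFormat.UInt32 };
        mesh.SetVertices(vertices);
        mesh.SetIndices(indices, MeshTopology.Triangles, 0);
        mesh.RecalculateNormals();
        return mesh;
    }
}
```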
So my question is this: is there a way to skip transferring the data from the GPU back to the CPU to generate the mesh? In other words, is there a way to generate the mesh for a `GameObject` in the scene straight from the `ComputeShader` I use to generate the triangles in the first place?
Thank you!