Hello everybody!
I want to explore GPU computation within Unity and have found a variety of ways to do it. When using ComputeShaders in Unity, should I expect much overhead if I dispatch the shader frequently, e.g. inside a for-loop? There are also libraries/wrappers like Cudafy and ManagedCuda available, but I suspect these add significant overhead when a kernel is called repeatedly in a for-loop?