Channel: Questions in topic: "compute shader"

AsyncGPUReadbackRequest crashes my PC

I'm trying to speed up my marching cubes compute shader so I can use it for some sort of real-time mesh editing. I tried to implement AsyncGPUReadbackRequest because ComputeBuffer.GetData() always waits for the compute shader to finish, causing lag whenever I'm editing the mesh. Sadly, my implementation crashes my PC :(. Code:

    shader.SetBuffer(0, "triangles", triangleBuffer);
    shader.SetInt("numPointsPerAxis", numPointsPerAxis);
    shader.SetFloat("isoLevel", isoLevel);
    shader.Dispatch(0, numThreadsPerAxis, numThreadsPerAxis, numThreadsPerAxis);

    var request = AsyncGPUReadback.Request(triangleBuffer);
    while (!request.done)
    {
        yield return new WaitForEndOfFrame();
    }

    Triangle[] tris = new Triangle[numTris];
    request.GetData<Triangle>().CopyTo(tris);
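
For reference, AsyncGPUReadback.Request also accepts a completion callback, which avoids polling the request in a coroutine. Below is a minimal sketch of that pattern; the `Triangle` struct layout, the buffer, and the dispatch counts are placeholders standing in for the marching-cubes setup above, not the original code.

    using Unity.Collections;
    using UnityEngine;
    using UnityEngine.Rendering;

    public class ReadbackSketch : MonoBehaviour
    {
        // Placeholder struct; the real layout must match the compute shader's output.
        struct Triangle { public Vector3 a, b, c; }

        public ComputeShader shader;
        ComputeBuffer triangleBuffer;

        void RequestTriangles(int numThreadsPerAxis)
        {
            shader.Dispatch(0, numThreadsPerAxis, numThreadsPerAxis, numThreadsPerAxis);
            // The callback runs on the main thread once the copy has finished,
            // so no per-frame polling loop is needed.
            AsyncGPUReadback.Request(triangleBuffer, OnTrianglesReadBack);
        }

        void OnTrianglesReadBack(AsyncGPUReadbackRequest request)
        {
            if (request.hasError) { Debug.LogError("GPU readback failed"); return; }
            NativeArray<Triangle> tris = request.GetData<Triangle>();
            // Use tris here; the data is only valid for the duration of this callback.
        }
    }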

Compute Shader Not Working in Build (Linux / OpenGL)

I am unable to get my compute shader to write to a texture in a built version of my project, even though it works in the editor. I made a blank project and added a quad with a script containing the following code.

    public class CSTtest : MonoBehaviour
    {
        public ComputeShader cs;
        private RenderTexture rt;
        private int texWidth = 512;
        private int texHeight = 512;

        void Start()
        {
            Screen.SetResolution(1280, 720, false);
            rt = new RenderTexture(texWidth, texHeight, 0, RenderTextureFormat.ARGB32);
            rt.enableRandomWrite = true;
            rt.Create();
            GetComponent<Renderer>().material.mainTexture = rt;
            cs.SetTexture(0, "Result", rt);
        }

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.G))
            {
                cs.Dispatch(0, texWidth / 8, texHeight / 8, 1);
            }
        }

        void OnDisable()
        {
            rt.Release();
        }
    }

The compute shader just turns the quad red.

    // Each #kernel tells which function to compile; you can have many kernels
    #pragma kernel CSMain

    // Create a RenderTexture with enableRandomWrite flag and set it
    // with cs.SetTexture
    RWTexture2D<float4> Result;

    [numthreads(8,8,1)]
    void CSMain (uint3 id : SV_DispatchThreadID)
    {
        Result[id.xy] = float4(1, 0, 0, 1);
    }

In the editor the quad turns red as expected, but in the built version nothing happens when I dispatch the compute shader. Am I doing something wrong, or is this a bug in Unity? I am running on Linux with editor version 2020.1.2f1, OpenGL 4.5. When I tested the same project on Windows/DX11, the compute shader worked in both the editor and the built version.
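
When a build behaves differently from the editor, it can help to log what the player is actually running on. This is only a diagnostic sketch using standard SystemInfo properties, not a fix for the issue above:

    void Start()
    {
        // Log the graphics API and compute support the built player ended up with.
        Debug.Log($"Graphics API: {SystemInfo.graphicsDeviceType}, " +
                  $"compute shaders supported: {SystemInfo.supportsComputeShaders}");
    }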

How do int textures work in ComputeShaders?

I'm having trouble understanding how to write into a RenderTexture with an integer format (e.g. [RGInt][1]). The following code leads to a completely black texture but by my understanding, it should be yellow:

**C# Script:**

    using UnityEngine;

    public class IntTextureTest : MonoBehaviour
    {
        public RenderTexture renderTexture;
        public ComputeShader computeShader;

        void Start()
        {
            renderTexture = new RenderTexture(1024, 1024, 0, RenderTextureFormat.RGInt);
            renderTexture.enableRandomWrite = true;
            renderTexture.Create();
            computeShader.SetTexture(0, "Result", renderTexture);
            computeShader.Dispatch(0, renderTexture.width / 8, renderTexture.height / 8, 1);
        }
    }
**ComputeShader:**

    #pragma kernel CSMain

    RWTexture2D<int2> Result;

    [numthreads(8, 8, 1)]
    void CSMain(uint3 id : SV_DispatchThreadID)
    {
        Result[id.xy] = int2(0x7fffffff, 0x7fffffff);
    }
I have verified that the texture format is supported using [SystemInfo.SupportsRenderTextureFormat][2] and I tried the same example with a float texture, which worked fine. [1]: https://docs.unity3d.com/ScriptReference/RenderTextureFormat.RGInt.html [2]: https://docs.unity3d.com/ScriptReference/SystemInfo.SupportsRenderTextureFormat.html
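
As a general note, the UAV declaration in HLSL has to use an integer element type that matches the integer render-texture format (for RGInt that would be `RWTexture2D<int2>`); with an untyped declaration the writes end up interpreted as floats. The C# sketch below shows an equivalent way to request the same format through an explicit GraphicsFormat, which can be useful when debugging format issues. The format and API names here are assumptions for illustration, not code from the original post.

    // Sketch: create the integer texture via an explicit GraphicsFormat
    // (R32G32_SInt corresponds to RenderTextureFormat.RGInt).
    using UnityEngine.Experimental.Rendering;

    var desc = new RenderTextureDescriptor(1024, 1024, GraphicsFormat.R32G32_SInt, 0);
    desc.enableRandomWrite = true;
    var rt = new RenderTexture(desc);
    rt.Create();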

GPU Ray tracing: "kernel at index(0) is invalid"

I have been following this tutorial/blog: http://three-eyed-games.com/2018/05/03/gpu-ray-tracing-in-unity-part-1/ I'm stuck on the 'reflections' section. I think I have coded (more like 'copied', let's be realistic) everything that I need to, but when I press play, I get:

    RayTracingShader.compute: Kernel at index (0) is invalid
    UnityEngine.ComputeShader:Dispatch(Int32, Int32, Int32, Int32)

There's more, but here's what I don't understand: Why does it say the kernel is invalid? And why does it say Dispatch(Int32, Int32, Int32, Int32) when the actual dispatch looks like this?

    RayTracingShader.Dispatch(0, threadGroupsX, threadGroupsY, 1);

Have I just left something out and not noticed? I'm just starting to learn how to use compute shaders and I don't really understand them; any help would be greatly appreciated.
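
As a general aside, the error text is reporting the kernel index that was passed to Dispatch (the second line is just the signature of the method that logged it), and a kernel typically shows up as "invalid" when the .compute file failed to compile. A small hedged sketch for guarding the dispatch; the kernel name here is a placeholder:

    // Check the kernel exists before dispatching; a compile error anywhere in the
    // .compute file usually makes every kernel in it invalid.
    if (!RayTracingShader.HasKernel("CSMain"))
    {
        Debug.LogError("Kernel not found - check the compute shader's compile errors in the console.");
        return;
    }
    int kernel = RayTracingShader.FindKernel("CSMain");
    RayTracingShader.Dispatch(kernel, threadGroupsX, threadGroupsY, 1);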

Trying to convert slow code to compute shader

I have a Dictionary with a Vector3 as the key and a custom class as the value (it stores pos/rot/type/etc.). I have a function that starts a "spider" that crawls through the whole 3D grid. It starts at, say, (0,0,0) and can only index the dictionary at the positions up/down/left/right/forward/back relative to its current position. It then moves to each of the positions it found and re-runs itself from each one. For example, if it found "up" and "left", it runs at up (0,1,0) and indexes the six neighbours from there, and runs at left (-1,0,0) and indexes the six neighbours from there.

When the spider has found all the neighbouring blocks, it checks whether that is all of the blocks. If so, it does nothing; otherwise it cuts out the ones it found and makes a new dictionary for them, then re-runs the spider in some of the blocks it hasn't found yet. This lets the user build inside the dictionary, and if they cut a wall or a car in half, it falls into two pieces because of the spiders.

This is quite slow for dictionaries with more than 1K blocks. I am wondering how I can write a compute shader to run multiple spiders at once - I'm thinking around 1K spiders, or the number of blocks in the dictionary if it's less than 1K - but I have no idea how to do this, or how to make it a callable function in my C# code.
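
What's being described is essentially connected-component labelling (flood fill) over a voxel grid. On the GPU this is usually done iteratively: every occupied cell repeatedly adopts the smallest label among its six neighbours until nothing changes. The host-side loop for that could look roughly like the sketch below; the kernel name, the buffer layout, and the idea of flattening the dictionary into a dense grid buffer are all assumptions, not code from the post (`shader` and `gridSize` are assumed fields).

    // Hedged sketch: iterative label propagation on the GPU.
    // Assumes a kernel "PropagateLabels" that, for each occupied cell, replaces its
    // label with the minimum label of its six neighbours and writes changedFlag[0] = 1
    // whenever anything was updated.
    ComputeBuffer labels = new ComputeBuffer(gridSize, sizeof(int));
    ComputeBuffer changedFlag = new ComputeBuffer(1, sizeof(int));
    int kernel = shader.FindKernel("PropagateLabels");
    shader.SetBuffer(kernel, "labels", labels);
    shader.SetBuffer(kernel, "changedFlag", changedFlag);

    int[] flag = new int[1];
    do
    {
        flag[0] = 0;
        changedFlag.SetData(flag);
        shader.Dispatch(kernel, Mathf.CeilToInt(gridSize / 64f), 1, 1);
        changedFlag.GetData(flag); // blocking readback; fine for a sketch
    } while (flag[0] == 1);
    // Afterwards, cells sharing a label belong to the same connected piece.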

Compute shader dispatch large spike in CPU and GPU profiler

Hi everyone, I'm coding a quadtree-based system for rendering large planets (similar to No Man's Sky). Every time I generate a new node in my quadtree, I have to calculate the new vertex and texture data from a noise function. This is done in a compute shader; the data is never read back to the CPU (I think) and is sent directly for rendering. It worked decently well, but I wanted to reduce the load on the GPU and spread the texture calculation over several frames by dividing it into smaller parts that are sequentially sent to the GPU. Unfortunately, this leads to even worse performance and a single-digit framerate. Does anyone know what's going wrong? Shouldn't the GPU load go down when I "divide and conquer" the texture? Here's a picture from the profiler: ![alt text][1]

  [1]: /storage/temp/186769-profiler.jpg

And the code to dispatch the compute shaders:

    public IEnumerator CalculateNodeData()
    {
        InstantiateBuffers();
        ComputeShader NodeComputeShader = planet.NodeComputeShader;
        propertyBlock = new MaterialPropertyBlock();
        int vertexKernel = NodeComputeShader.FindKernel("TerrainNodeVertex");
        int textureKernel = NodeComputeShader.FindKernel("TerrainNodeTexture");

        // Set a bunch of constants and our vertexBuffer and RenderTexture
        NodeComputeShader.SetBuffer(vertexKernel, "vertexBuffer", verticesBuffer);
        NodeComputeShader.SetTexture(textureKernel, "textureBuffer", texture);

        // Calculate the vertices
        NodeComputeShader.Dispatch(vertexKernel, threads, 1, threads);
        yield return new WaitForEndOfFrame();

        // Calculate a part of the texture, one part per frame
        for (int p = 0; p
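
For splitting one large dispatch across frames, a common pattern is to keep the kernel the same and pass a per-slice offset, dispatching only part of the thread groups each frame. A rough sketch of the host side follows; the "sliceOffset" property, the group counts, and the numthreads value of 8 are assumptions rather than code from the post (NodeComputeShader and threads are taken from the snippet above).

    // Hedged sketch: dispatch one horizontal slice of the texture per frame.
    // Assumes the texture kernel adds the int property "sliceOffset" to id.y.
    IEnumerator CalculateTextureInSlices(int textureKernel, int totalGroupsY, int groupsPerSlice)
    {
        for (int y = 0; y < totalGroupsY; y += groupsPerSlice)
        {
            NodeComputeShader.SetInt("sliceOffset", y * 8); // 8 = numthreads in y
            NodeComputeShader.Dispatch(textureKernel, threads, groupsPerSlice, 1);
            yield return null; // continue with the next slice next frame
        }
    }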

How to map AsyncGPUReadbackRequest to the buffer it read from?

How do I map an AsyncGPUReadbackRequest to the buffer it read from - for example, so I can Dispose() the buffer? Let's say I have a few requests, all of them reading from buffers of the same size. How do I identify which buffer is associated with a given readback?
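
One way to do this (a sketch, not the only option) is to capture the buffer in the request's completion callback, so each callback already knows which buffer it belongs to:

    void RequestAndRelease(ComputeBuffer buffer)
    {
        // The lambda closes over `buffer`, so when this particular readback
        // completes we know exactly which buffer it came from.
        AsyncGPUReadback.Request(buffer, request =>
        {
            if (!request.hasError)
            {
                var data = request.GetData<float>(); // element type is an assumption
                // ...use data...
            }
            buffer.Release();
        });
    }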

Render Texture seems to go blank when sent to Compute Shader?

I have a render texture (that's being blitted to from a WebCamTexture, if that's helpful) that works fine when I blit it in turn onto the screen - but when I send it to the compute shader, write what's on it onto a different texture, and blit that new texture to the screen, it's entirely black.

Relevant CPU code:

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    [RequireComponent(typeof(Camera_Handler))]
    public class Gol_Handler_Camera : MonoBehaviour
    {
        public ComputeShader compute;
        public int _width;
        public int _height;
        int width_groups;
        int height_groups;
        RenderTexture _Input;
        RenderTexture _Output;
        RenderTexture _Screen;
        RenderTexture _Video;
        RenderTexture _Video_Save;

        void OnRenderImage(RenderTexture src, RenderTexture dest)
        {
            Graphics.Blit(_Screen, dest);
        }

        void Start()
        {
            _Video = GetComponent<Camera_Handler>().Camera_Render();
            _Video_Save = GetComponent<Camera_Handler>().Camera_Render();
            _width = _Video.width;
            _height = _Video.height;

            _Screen = new RenderTexture(_width, _height, 0);
            _Screen.enableRandomWrite = true;
            _Screen.filterMode = FilterMode.Point;
            _Screen.wrapMode = TextureWrapMode.Clamp;
            _Screen.Create();

            compute.SetInt(width_id, _width);
            compute.SetInt(height_id, _height);
            compute.SetInt(buffer_length_id, _buffer_length);

            compute.SetTexture(0, input_id, _Input);
            compute.SetTexture(0, output_id, _Output);
            compute.SetTexture(0, screen_id, _Screen);
            compute.SetTexture(0, video_id, _Video);
            compute.SetTexture(0, video_save_id, _Video_Save);
            compute.SetBuffer(0, buffer_id, _Buffer);

            compute.SetTexture(4, input_id, _Input);
            compute.SetTexture(4, output_id, _Output);
            compute.SetTexture(4, screen_id, _Screen);
            compute.SetTexture(4, video_id, _Video);
            compute.SetTexture(4, video_save_id, _Video_Save);
            compute.SetBuffer(4, buffer_id, _Buffer);

            width_groups = Mathf.CeilToInt((float)_width / 8f);
            height_groups = Mathf.CeilToInt((float)_height / 8f);
        }

        void Update()
        {
            compute.SetTexture(4, video_id, _Video);
            compute.Dispatch(4, width_groups, height_groups, 1);
        }
    }

Relevant GPU code:

    #pragma kernel Initialise

    uint _width;
    uint _height;
    RWTexture2D<float4> _Video;
    RWTexture2D<float4> _Video_Save;
    RWTexture2D<float4> _Input;
    RWTexture2D<float4> _Output;
    RWTexture2D<float4> _Screen;
    StructuredBuffer _Buffer;
    uint _buffer_length;

    ...

    #pragma kernel GOL
    ...
    #pragma kernel Reset
    ...
    #pragma kernel Video_Input

    [numthreads(8,8,1)]
    void Video_Input (uint3 id : SV_DispatchThreadID)
    {
        _Screen[id.xy] = _Video[id.xy];
    }

If I change the OnRenderImage blit from blitting _Screen to blitting _Video, it works fine, so clearly the _Video render texture contains data prior to being sent over to the shader - but once it's there it seems to behave as if it's completely blank. Ideally I want to be able to alter and use the data from the _Video texture once it's in the compute shader, not just send it back to the screen, but I can't even seem to do that at the moment. Any help would be really appreciated.

How do you pass data from a compute shader to a C# script

I am trying to learn how to use compute shaders to do the heavy lifting in my mesh generation, but I'm struggling to get the most basic thing to work. I'm currently just trying to pass an int array into a shader, get it to alter it, and return it using an RWStructuredBuffer. However, no matter what I try, it only seems to fill the very first position of the returned array. I have got it to return a texture, but it doesn't seem to work using the buffer. This is probably a simple mistake, but I can't seem to find the issue. Thanks.

My C# script:

    using System.Collections;
    using System.Collections.Generic;
    using UnityEngine;

    public class Testing : MonoBehaviour
    {
        [SerializeField] private ComputeShader TestingShader;

        void Start()
        {
            int[] ints = new int[8];
            ComputeBuffer buffer = new ComputeBuffer(8, 8 * sizeof(int));
            buffer.SetData(ints);
            TestingShader.SetBuffer(0, "CBuffer", buffer);
            TestingShader.Dispatch(0, 8, 8, 1);
            buffer.GetData(ints);
            buffer.Dispose();

            for (int i = 0; i < ints.Length; i++)
            {
                Debug.Log(i + " // " + ints[i]);
            }
        }
    }

My compute shader:

    #pragma kernel CSMain

    RWStructuredBuffer<int> CBuffer;

    [numthreads(8,8,1)]
    void CSMain(uint3 id : SV_DispatchThreadID)
    {
        CBuffer[id.x] = 10;
    }
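
For comparison, here is a minimal buffer round-trip sketch in which the stride is the size of one element and a single thread group covers the whole array. It assumes a kernel declared with [numthreads(8,1,1)] and is only an illustration of the ComputeBuffer API, not the poster's code.

    int[] ints = new int[8];
    // count = number of elements, stride = size of ONE element in bytes
    ComputeBuffer buffer = new ComputeBuffer(ints.Length, sizeof(int));
    buffer.SetData(ints);
    TestingShader.SetBuffer(0, "CBuffer", buffer);
    // With [numthreads(8,1,1)], one group of 8 threads covers indices 0..7.
    TestingShader.Dispatch(0, 1, 1, 1);
    buffer.GetData(ints);
    buffer.Release();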

Sharing code between shader and c# - how?

Hi everyone, with the new Unity.Mathematics library we can now write more or less identical code for (compute) shaders and C#. Fantastic! But what would be *really* great is if I could use a single source file for all common code. Maintaining identical code across C# and HLSL would then be trivial (for example, noise functions that can be used in both C# and compute shaders, and in custom Shader Graph nodes). This could easily be done with #include - I could just put all the common code in .cginc files and then #include them in my C# code, e.g.:

    public static class ixMath
    {
        #include "hashFunctions.cginc"
        #include "noiseFunctions.cginc"
    }

Unfortunately, C# doesn't allow #include. Can anyone suggest another solution? How do you all do this?
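
To illustrate the "more or less identical code" point from the first sentence: with `using static Unity.Mathematics.math;` a function body can be written so that it is valid in both C# and HLSL, apart from the surrounding declaration. A small sketch (the hash itself is just an arbitrary example, not from the post):

    using Unity.Mathematics;
    using static Unity.Mathematics.math;

    public static class SharedNoise
    {
        // The body of this function also compiles as HLSL if pasted into a .cginc,
        // since float2, frac, dot and sin exist with the same names in both languages.
        public static float Hash21(float2 p)
        {
            return frac(sin(dot(p, float2(127.1f, 311.7f))) * 43758.5453f);
        }
    }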

Compute Shader Outputting Black

Hey all. Just trying to get my head around compute shaders; I just wanted to set an input texture as the output texture. Whenever I do this, I simply get a black output from the processed texture (code below) that is being rendered to a RawImage UI element. Am I indexing the input image wrong, maybe? Any help would be much appreciated.

Compute shader:

    // Each #kernel tells which function to compile; you can have many kernels
    #pragma kernel CSMain

    // Input texture
    Texture2D<float4> imageInput;

    // Create a RenderTexture with enableRandomWrite flag and set it
    // with cs.SetTexture
    RWTexture2D<float4> Result;

    [numthreads(1,1,1)]
    void CSMain (uint3 id : SV_DispatchThreadID)
    {
        // TODO: insert actual code here!
        Result[id.xy] = imageInput[id.xy];
    }

C# script:

    public class TestCamera : MonoBehaviour
    {
        public RenderTexture testTexture;
        public RenderTexture processedTexture;
        public RawImage testUI;
        public ComputeShader computeShader;
        public GameObject boat;

        // Start is called before the first frame update
        void Start()
        {
            testTexture = GetComponent<Camera>().targetTexture;

            // Set up render texture
            processedTexture = new RenderTexture(256, 256, 24);
            processedTexture.enableRandomWrite = true;
            processedTexture.Create();

            // Dispatch compute shader
            int shaderKernal = computeShader.FindKernel("CSMain");
            computeShader.SetTexture(shaderKernal, "Result", processedTexture);
            computeShader.SetTexture(shaderKernal, "imageInput", testTexture);
            computeShader.Dispatch(shaderKernal, testTexture.width, testTexture.height, 1);
        }

        // Update is called once per frame
        void Update()
        {
            testUI.texture = processedTexture;
            transform.position = new Vector3(boat.transform.position.x, transform.position.y, boat.transform.position.z);
        }
    }

Issue with Compute Shader

Hey, I'm working on converting some code I have to a compute shader, but I'm getting weird results and am not sure why. The code is *supposed* to create a sphere with some deformation for height, but it's as if the shader is only running for every nth vertex. ![here is an image of it](https://i.imgur.com/P2rwPpe.png "planet gen image") I've attached the compute shader and the script that calls it (as .txt files, because that's all you can upload here, apparently). I have a feeling the issue is with my lack of understanding of compute shader threads, but who knows. [link text][1]

  [1]: /storage/temp/189230-terrainface.txt
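
Since thread counts come up in several of these posts, this is the usual relationship between Dispatch and numthreads, written out as a small sketch; the 8x8 group size and the kernel name are placeholders:

    // Total threads = thread groups passed to Dispatch * numthreads in the kernel.
    // For a kernel declared [numthreads(8,8,1)] that should touch every element
    // of a width x height grid, round the group counts up:
    int kernel = computeShader.FindKernel("CSMain");
    int groupsX = Mathf.CeilToInt(width / 8f);
    int groupsY = Mathf.CeilToInt(height / 8f);
    computeShader.Dispatch(kernel, groupsX, groupsY, 1);
    // Inside the kernel, id.xy (SV_DispatchThreadID) then covers the grid, so
    // ids beyond width/height still need an explicit bounds check.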

How to dig ground, remove soil and pour the soil somewhere else?

Hello Unity developers, I am trying to make a simple simulator for an excavator, similar to the one shown in this video: https://www.youtube.com/watch?v=5B4Yyvqfj1w&ab_channel=Veritasium

**Solutions I tried**

1. Tessellate and then modify the vertices in the shader - https://www.youtube.com/watch?v=Sr2KoaKN3mU&ab_channel=PeerPlay The problem is that the original mesh on the CPU doesn't change, as tessellation and vertex modification all happen on the GPU. For me, the original mesh as well as the mesh collider should change so that physics works according to the actual geometry.

2. Deforming a mesh at runtime based on collisions - https://www.youtube.com/watch?v=l_2uGpjBMl4&t=32s&ab_channel=WorldofZero Here, everything happens on the CPU, which I think would be too heavy for the CPU.

Please provide suggestions or changes to the solutions I have already tried; any alternate solution is also welcome. Regarding simulating the soil, I have no idea where to start. Any references or solutions I can start with are really appreciated. Thank you!

Unity Jobs/DOTS Vs Compute Shader

I am a bit confused about the specific use cases for these optimization features. I often hear "parallel processing" on the GPU and "multithreading". What are these best used for? Is one "better" than the other? Sorry for the noob question.

Compute Shader for Calculations? (like, find closest thing ???)

Hi, most people talking about compute shaders say they can calculate more things at once using the GPU. My question is: can I use one to find the closest enemy to the player, something like this?

    foreach (Transform potentialTarget in enemies)
    {
        Vector3 directionToTarget = potentialTarget.position - currentPosition;
        float dSqrToTarget = directionToTarget.sqrMagnitude;
        if (dSqrToTarget < closestDistanceSqr)
        {
            closestDistanceSqr = dSqrToTarget;
            bestTarget = potentialTarget;
        }
    }

Can I use a compute shader to do this (above)? If I can, can you give an example of how to do it?
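
A hedged sketch of how the host side could look if this were moved to the GPU: upload the enemy positions, let a kernel write one squared distance per enemy, read the results back, and pick the minimum on the CPU. The kernel and property names, and the fields `enemies`, `distanceShader` and `player`, are assumptions; and for a handful of enemies the plain C# loop above will almost certainly be faster than the GPU round trip.

    // Assumes a kernel "Distances" declared with
    //   StructuredBuffer<float3> positions; RWStructuredBuffer<float> sqrDistances;
    //   [numthreads(64,1,1)]  writing sqrDistances[id.x].
    Vector3[] enemyPositions = new Vector3[enemies.Count]; // fill from the enemies' transforms
    var posBuffer = new ComputeBuffer(enemyPositions.Length, sizeof(float) * 3);
    var distBuffer = new ComputeBuffer(enemyPositions.Length, sizeof(float));
    posBuffer.SetData(enemyPositions);

    int kernel = distanceShader.FindKernel("Distances");
    distanceShader.SetBuffer(kernel, "positions", posBuffer);
    distanceShader.SetBuffer(kernel, "sqrDistances", distBuffer);
    distanceShader.SetVector("playerPosition", player.position);
    distanceShader.Dispatch(kernel, Mathf.CeilToInt(enemyPositions.Length / 64f), 1, 1);

    float[] sqrDistances = new float[enemyPositions.Length];
    distBuffer.GetData(sqrDistances); // blocking readback

    int best = 0;
    for (int i = 1; i < sqrDistances.Length; i++)
        if (sqrDistances[i] < sqrDistances[best]) best = i;

    posBuffer.Release();
    distBuffer.Release();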

How do you use Mathf.PerlinNoise in a compute shader?

Hello there, I made a terrain generator and a texture generator for the terrain, but the performance wasn't really good. Since the texture is 10x more detailed than the terrain (because I didn't want the terrain itself to be very detailed), I wanted to move my texture script to a compute shader, but as you can see from the question, I couldn't find a way to use Mathf.PerlinNoise in a compute shader. Does someone know a way to use the function? If not, it's not the biggest problem, but then I would have to either:

- first generate the terrain and store the height in a low-resolution image to pass to my compute shader, or
- create a Perlin noise function myself that I can write in both C# and HLSL.

If you need a bit more detail about what I actually want to make, you can view my devlogs at https://youtube.com/playlist?list=PLEN0z45X0Yw9leG_HuM65WyXHnpQLxsCW - devlog number 2 gives the best information about the terrain.
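
Mathf.PerlinNoise is a C#-only API, so it can't be called from HLSL directly. The first option listed above - baking the noise into a texture on the CPU and sampling it in the compute shader - could look roughly like this; the resolution, scale, and property names are placeholders:

    // Bake Mathf.PerlinNoise into a single-channel texture the compute shader can read.
    Texture2D BakeNoiseTexture(int size, float scale)
    {
        var tex = new Texture2D(size, size, TextureFormat.RFloat, false);
        for (int y = 0; y < size; y++)
            for (int x = 0; x < size; x++)
            {
                float h = Mathf.PerlinNoise(x * scale, y * scale);
                tex.SetPixel(x, y, new Color(h, 0f, 0f));
            }
        tex.Apply();
        return tex;
    }

    // Usage: pass it to the shader like any other texture.
    // computeShader.SetTexture(kernel, "heightMap", BakeNoiseTexture(256, 0.05f));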

Process multiple buffers with compute shader

I need to process a dynamic number of buffers using a compute shader. The problem is that I don't know how to do this, and there's sparse information about it online. As the number is dynamic, I can't just declare

    RWStructuredBuffer _Buffer1;
    RWStructuredBuffer _Buffer2;
    RWStructuredBuffer _Buffer3;

At first I tried

    for (int i = 0; i < buffers.Length; i++)
    {
        compute.SetBuffer(0, "_BufferToProcess", buffers[i]);
        compute.Dispatch(0, 64, 1, 1);
    }

but quickly realized that because I'm setting buffers in a loop and Dispatch only schedules work, the shader will only process the last buffer. Then I tried to Instantiate() multiple copies of the shader and set their buffers accordingly, but that didn't work either. I really don't want to merge multiple buffers into a single one, as the data is dynamic, which means I'd need to resize it on every change. Would it be possible for a compute shader to process multiple buffers in one frame? Or is this not a good design to begin with?

Change compute buffer for a set of objects

I went through the example for [DrawMeshInstancedIndirect()][1]. This works fine when I want to display a huge number of objects. Now I want to update only some of the positions in the position buffer shown in the example; the rest should still be rendered with the same positions as before. I don't want to update the entire buffer. How would I do that? Is SetConstantBuffer() with a [MaterialPropertyBlock][2] the right approach?

  [1]: https://docs.unity3d.com/ScriptReference/Graphics.DrawMeshInstancedIndirect.html
  [2]: https://docs.unity3d.com/ScriptReference/MaterialPropertyBlock.html
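
For partially updating an existing ComputeBuffer, SetData has an overload that writes only a range of elements. A small sketch, assuming a `positionBuffer` with a Vector4-per-instance layout like the DrawMeshInstancedIndirect example:

    // Overwrite 16 instances starting at element 100, leaving the rest untouched.
    Vector4[] changed = new Vector4[16];
    // ...fill `changed` with the new positions/scales...
    int computeBufferStartIndex = 100;
    positionBuffer.SetData(changed, 0, computeBufferStartIndex, changed.Length);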

Compute Shader performance is dropping from a single boolean.

Hello, I am trying to make a custom 3D renderer with projection and rasterization, but I have a question about compute shaders. I have a compute shader that checks whether a pixel is inside a triangle and, if so, draws it.

My compute shader code:

    [numthreads(256,4,1)]
    void CSMain (int3 id : SV_DispatchThreadID)
    {
        float2 coords = float2(id.x, id.y);
        float minValue = 1;
        bool shadeIt = false;
        for (int i = 0; i < numTriangles; i += 3)
        {
            float3 v0 = vertices[triangles[i+0]];
            float3 v1 = vertices[triangles[i+1]];
            float3 v2 = vertices[triangles[i+2]];
            float pointInside = isInside(v0.x, v0.y, v1.x, v1.y, v2.x, v2.y, coords.x, coords.y);
            if (pointInside < 0.1f)
            {
                shadeIt = true;
            }
        }
        Result[id.xy] = (shadeIt ? float4(2, 1, 0, 1) : float4(0, 1, 2, 1));
    }

My C# code that runs the shader:

    vertexBuffer = new ComputeBuffer(mesh.vertices.Length, sizeof(float) * 3);
    triangleBuffer = new ComputeBuffer(mesh.triangles.Length, sizeof(int));
    Vector3[] projectedPoints = new Vector3[mesh.vertices.Length];
    int i = 0;
    foreach (Vector3 vec in mesh.vertices)
    {
        projectedPoints[i] = (ProjectPoint(vec) * height) + new Vector3(width/2, height/2, 0);
        projectedPoints[i].z = vec.z;
        i++;
    }
    vertexBuffer.SetData(projectedPoints);
    triangleBuffer.SetData(mesh.triangles);

    renderer.SetTexture(0, "Result", output);
    renderer.SetBuffer(0, "vertices", vertexBuffer);
    renderer.SetBuffer(0, "triangles", triangleBuffer);
    renderer.SetInt("width", width);
    renderer.SetInt("height", height);
    renderer.SetInt("numVertices", mesh.vertices.Length);
    renderer.SetInt("numTriangles", mesh.triangles.Length);
    renderer.SetVector("cameraPosition", camera.position);
    renderer.SetVector("cameraAngles", camera.angles * (3.141f/180));
    renderer.Dispatch(0, threadGroups.x, threadGroups.y, threadGroups.z);
    vertexBuffer.Dispose();
    triangleBuffer.Dispose();

And this works - we can plug in any mesh and it renders it: ![alt text][2]

But as you can see, 1 FPS. Now, if I take out this bit of code (which is important, lol): `shadeIt = true;` then obviously the mesh disappears, but the FPS goes up to 60: ![alt text][1]

Why is this happening, and how can I fix it?

  [1]: /storage/temp/191508-speal.png
  [2]: /storage/temp/191507-shmemal.png

ComputeShader, bad output after "if" condition

Hello everyone, and thanks for reading. I'm doing a marching cubes implementation via compute shader, building the vertices and triangles for my chunk of voxels. I know implementations already exist, but this is for learning purposes. After getting the vertices stored in a ComputeBuffer, I ran into a problem when looking up the triangulation table. The triangulation table is stored in C# as an int[,] of 256 x 16, which I access linearly in my kernel through a StructuredBuffer of int. The problem appears when I look up the table on the GPU, using a uint as the index for the 256 cases, multiplied by 16 (because of how the table I use is laid out - copy-pasted). If I read the uint index itself, it is within the right range (0-255). However, when I index the table with it, I only sometimes read the int actually stored there: if I check the table value with an if (e.g. if (table[index] != -1)), the value I read from the table seems to suffer from bad casting, as if it were truncated or, worse, interpreted as unsigned, giving me results like 1900480380 (just an example); other times it becomes 655535. I'm pretty sure I'm using 32-bit ints, but not completely sure. If I comment out the if statement, the output becomes readable again... It seems that if I read from the StructuredBuffer inside the condition, the data gets corrupted or badly cast.

I'm using an RWStructuredBuffer for the vertices, an AppendBuffer for the triangles, and a StructuredBuffer as input for the triangulation table. On the C# side I retrieve the data as a NativeArray. I can provide some example code. I'm on Linux, Vulkan, probably the Mono backend. Do you have any hints on what may be happening?
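
Since the post describes flattening a 256 x 16 int[,] into a linear StructuredBuffer of int, here is a small sketch of the C# upload side of that; the field names, kernel variable, and shader property name are placeholders, not code from the post:

    // Flatten the 256 x 16 triangulation table row-major and upload it as int.
    int[] flat = new int[256 * 16];
    for (int c = 0; c < 256; c++)
        for (int v = 0; v < 16; v++)
            flat[c * 16 + v] = triangulationTable[c, v];

    var triTableBuffer = new ComputeBuffer(flat.Length, sizeof(int));
    triTableBuffer.SetData(flat);
    marchingCubesShader.SetBuffer(kernel, "triangulation", triTableBuffer);
    // In the kernel, entry v of case c is then triangulation[c * 16 + v].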