The effect of framebuffer or texture sizes on the fragment shader [closed]
Good afternoon everyone!
Can you please tell me if the size of the framebuffer that the fragment shader draws into has any effect on how the shader works?
What I mean: I have created a shader program that performs graphical analysis. It takes primitives as input, as well as a layered texture (TEXTURE_2D_ARRAY) containing "analysis information". I set the framebuffer size to match the texture size, so that while the primitives are being rasterised, the fragment shader can fetch the texel at the same coordinates as the fragment it is currently processing. After fetching the texel, I process it and add the result to SSBO[common_primitive_id] using atomicAdd (with uint).
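For reference, a simplified sketch of the fragment shader described above (binding points, the variable names, and the placeholder processing step are illustrative, not the exact ones from my code):

```glsl
#version 430

layout(binding = 0) uniform sampler2DArray analysisTex;          // the "analysis information" texture
layout(std430, binding = 1) buffer Results { uint results[]; };  // the SSBO

flat in int primitiveId;  // common_primitive_id, passed down from the earlier stages

void main()
{
    // The framebuffer matches the texture size, so gl_FragCoord maps 1:1 onto texels.
    ivec2 texel = ivec2(gl_FragCoord.xy);
    vec4 info = texelFetch(analysisTex, ivec3(texel, 0), 0);  // layer 0 shown for brevity

    uint result = uint(info.r);  // placeholder for the actual analysis step
    atomicAdd(results[primitiveId], result);
}
```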
All this works well, but only up to a certain "critical point". For example, with test data containing a single primitive: if the framebuffer (and layered texture) size is 26760x27000, I read the value 9513 from the SSBO (the analytically correct value is 6869); but if I change the size to 26750x27000 (10 pixels narrower), the SSBO ends up containing the value 466284. This result is completely reproducible and has been confirmed on three different machines with NVIDIA graphics cards.
Does anyone have any quick ideas about what could cause this behaviour?
P.S. I tried atomicAdd(SSBO[id], 1) to count the number of fragment shader invocations, and I saw nothing anomalous there: the count changes only slightly, in line with the framebuffer (and, remember, texture) size. So what causes the chaos inside the shader?