I’m trying to pass an array of bytes to my fragment shader. Originally I tried to use a uniform array, but gave up when I realised you can’t use a variable index to access its elements (what exactly is their use, then?)
I’m now trying to do it with a texture, storing the data in the RGB components. I’m unsure what exactly the texture2D function does when you pass it texture coordinates: does it always use a GL_NEAREST approach, or does it depend on the current filtering state?
That is, is there a chance that my data will be smoothed (and therefore corrupted)?
What do I do, then, if I want a smoothly filtered GL_LINEAR texture, but also need to read exact data from a texture (which, from what you’re saying, would require GL_NEAREST)?
GL_LINEAR does not describe the smoothness of the data stored in the texture. It simply tells OpenGL what to do when you read from a position between texels: GL_LINEAR performs n-dimensional linear interpolation, as opposed to nearest-neighbour, where n is the dimension of your texture (1D, 2D or 3D).
It depends on the resulting size (in pixels) of the polygon you apply the texture to. If, for example, it is a quad with the same on-screen size in pixels as the texture, there is neither magnification nor minification, so the filter does not affect the value the fragment shader reads. Otherwise the filter is applied.
Simple example:
Tex: 512x512 pixels
Viewport: 512x512 pixels
Drawing a quad that fills the viewport results in a 1:1 mapping from texture to screen -> the filter has no effect.
Tex: 256x256 pixels
Viewport: 512x512 pixels
Drawing a quad that fills the viewport results in a 1:2 mapping from texture to screen (magnification). The filter will affect the value the fragment shader reads.