Looking at how Khronos defines layers:
https://www.khronos.org/opengl/wiki/Array_Texture
We have both 3D textures and layers of 2D textures, both of which can
be encoded in our existing `Typed3DBuffer` as depth. Since the GPU API
already supports depth, remove the concept of layers everywhere.
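A toy sketch of the idea (a stand-in for illustration, not the real
`Typed3DBuffer`): a single z-indexed buffer serves both 3D textures and
arrays of 2D textures.

```cpp
#include <cstddef>
#include <vector>

// Toy stand-in for illustration; the real LibSoftGPU implementation differs.
template<typename T>
class Typed3DBuffer {
public:
    Typed3DBuffer(int width, int height, int depth)
        : m_width(width)
        , m_height(height)
        , m_data(static_cast<std::size_t>(width) * height * depth)
    {
    }

    // For a 3D texture, z is the depth coordinate; for an array of 2D
    // textures, the same index selects layer z. One storage type covers
    // both, so a separate "layer" concept is unnecessary.
    T& at(int x, int y, int z_or_layer)
    {
        return m_data[(static_cast<std::size_t>(z_or_layer) * m_height + y) * m_width + x];
    }

private:
    int m_width { 0 };
    int m_height { 0 };
    std::vector<T> m_data;
};
```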
Also pass in `Texture2D::LOG2_MAX_TEXTURE_SIZE` as the maximum number
of mipmap levels, so we do not allocate 999 levels on each `Image`
instantiation.
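For context, a sketch of why a log2 bound suffices; the value 12 below
is an assumption for illustration, not the actual constant:

```cpp
#include <cstdio>

int main()
{
    // Assumed value for illustration; the real constant is
    // Texture2D::LOG2_MAX_TEXTURE_SIZE.
    constexpr unsigned log2_max_texture_size = 12; // i.e. up to 4096x4096

    // A full mip chain for a 2^n x 2^n texture runs from 2^n down to
    // 1x1, which is n + 1 levels -- a small, fixed bound instead of 999.
    unsigned max_mipmap_levels = log2_max_texture_size + 1;
    std::printf("at most %u mipmap levels needed\n", max_mipmap_levels);
    return 0;
}
```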
A GPU (driver) is now responsible for reading and writing pixels from
and to user data. The client (LibGL) is responsible for specifying how
that user data should be interpreted when read and laid out when
written.
This allows us to centralize all pixel format conversion in one class,
`LibSoftGPU::PixelConverter`. For both the input and output image, it
takes a specification containing the image dimensions, the pixel type
and the selection (basically a clipping rect), and converts the pixels
from the input image to the output image.
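A rough, self-contained sketch of the converter's shape, with
illustrative names and only two formats; the actual
`LibSoftGPU::PixelConverter` supports far more:

```cpp
#include <cstddef>
#include <cstdint>

// Illustrative sketch; names and types here are assumptions.
enum class PixelFormat { RGBA8888, BGRA8888 };

struct Selection { int x { 0 }; int y { 0 }; int width { 0 }; int height { 0 }; };

struct ImageSpecification {
    int width { 0 };
    int height { 0 };
    PixelFormat format { PixelFormat::RGBA8888 };
    Selection selection {}; // the clipping rect to read from / write to
};

class PixelConverter {
public:
    PixelConverter(ImageSpecification input, ImageSpecification output)
        : m_input(input)
        , m_output(output)
    {
    }

    // Decode each pixel in the input selection to a canonical RGBA
    // order, then re-encode it at the corresponding position in the
    // output selection.
    void convert(uint8_t const* input_data, uint8_t* output_data) const
    {
        for (int row = 0; row < m_input.selection.height; ++row) {
            for (int column = 0; column < m_input.selection.width; ++column) {
                uint8_t rgba[4];
                decode(pixel_at(input_data, m_input, column, row), m_input.format, rgba);
                encode(rgba, m_output.format, pixel_at(output_data, m_output, column, row));
            }
        }
    }

private:
    // Both example formats are 4 bytes per pixel, hence the "* 4".
    static uint8_t const* pixel_at(uint8_t const* data, ImageSpecification const& spec, int column, int row)
    {
        auto x = spec.selection.x + column;
        auto y = spec.selection.y + row;
        return data + (static_cast<std::size_t>(y) * spec.width + x) * 4;
    }
    static uint8_t* pixel_at(uint8_t* data, ImageSpecification const& spec, int column, int row)
    {
        auto x = spec.selection.x + column;
        auto y = spec.selection.y + row;
        return data + (static_cast<std::size_t>(y) * spec.width + x) * 4;
    }

    static void decode(uint8_t const* source, PixelFormat format, uint8_t rgba[4])
    {
        bool swap = (format == PixelFormat::BGRA8888);
        rgba[0] = source[swap ? 2 : 0];
        rgba[1] = source[1];
        rgba[2] = source[swap ? 0 : 2];
        rgba[3] = source[3];
    }
    static void encode(uint8_t const* rgba, PixelFormat format, uint8_t* destination)
    {
        bool swap = (format == PixelFormat::BGRA8888);
        destination[swap ? 2 : 0] = rgba[0];
        destination[1] = rgba[1];
        destination[swap ? 0 : 2] = rgba[2];
        destination[3] = rgba[3];
    }

    ImageSpecification m_input;
    ImageSpecification m_output;
};
```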
Effectively this means we now support almost all OpenGL 1.5 formats,
and all custom logic has disappeared from:
- `glDrawPixels`
- `glReadPixels`
- `glTexImage2D`
- `glTexSubImage2D`
The new logic is still unoptimized, but on my machine I experienced no
noticeable slowdown. :^)
We were lacking support for default textures (i.e. calling
`glBindTexture` with a `texture` argument of `0`) which caused our
Quake2 port to render red screens whenever a video was playing. Every
texture unit is now initialized with a default 2D texture.
Additionally, we had the concept of a "currently bound target" on our
texture units, which is not how OpenGL wants us to handle targets.
Calling `glBindTexture` should set the texture for the provided target
only, making the bind effectively an alias for future operations on
that same target.
Finally, `glDeleteTextures` should not remove the bound texture from
the target in the texture unit; instead, it should reset that binding
to the default texture.
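A self-contained sketch of the corrected semantics; all names here are
illustrative, not the real LibGL implementation:

```cpp
#include <memory>

struct Texture { };

class TextureUnit {
public:
    TextureUnit()
        : m_default_2d(std::make_shared<Texture>())
        , m_bound_2d(m_default_2d)
    {
        // Every unit starts with a default 2D texture bound, so texture
        // name 0 always has a valid texture to sample from.
    }

    // glBindTexture(GL_TEXTURE_2D, name): only the 2D target's binding
    // changes; other targets on this unit are left untouched.
    void bind_2d(std::shared_ptr<Texture> texture)
    {
        m_bound_2d = texture ? std::move(texture) : m_default_2d;
    }

    // glDeleteTextures: a deleted texture is not simply removed from
    // the target; the binding falls back to the default texture.
    void on_texture_deleted(std::shared_ptr<Texture> const& texture)
    {
        if (m_bound_2d == texture)
            m_bound_2d = m_default_2d;
    }

private:
    std::shared_ptr<Texture> m_default_2d;
    std::shared_ptr<Texture> m_bound_2d;
};
```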
These enums are used to indicate byte alignment when reading pixels
from and writing pixels to textures. The `GL_UNPACK_ROW_LENGTH` value
was reimplemented to support overriding the row width of the source
data.
This sets the length of a row for the image to be transferred, measured
in pixels. When a rectangle narrower than this value is transferred,
the remaining pixels of each row are skipped.
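For example, this is the standard OpenGL way to upload a sub-rectangle
out of a larger client-side image; `get_image_data` is an assumed
helper and GL headers are assumed to be included:

```cpp
// Upload a 16x16 region starting at (x0, y0) of a 256-pixel-wide,
// tightly packed RGBA image.
int const x0 = 10;
int const y0 = 20;
unsigned char const* image = get_image_data(); // assumed helper

// GL_UNPACK_ROW_LENGTH tells GL how many pixels make up one source row,
// so it knows how far to skip to reach the next row of the rectangle.
glPixelStorei(GL_UNPACK_ROW_LENGTH, 256);
glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 16, 16, GL_RGBA, GL_UNSIGNED_BYTE,
    image + (y0 * 256 + x0) * 4);
glPixelStorei(GL_UNPACK_ROW_LENGTH, 0); // restore the default (tightly packed)
```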
This extracts the sampler functionality into its own class. Beginning
with OpenGL 3, samplers are actual objects separate from textures. It
makes sense to do this already, as it also cleans up code organization
quite a bit.
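A minimal sketch of the state such a sampler class might hold, with
illustrative names rather than the real LibSoftGPU API:

```cpp
// Illustrative sketch; the real sampler supports more filters, wrap
// modes and sampling logic.
enum class TextureFilter { Nearest, Linear };
enum class TextureWrapMode { Repeat, Clamp, MirroredRepeat };

class Sampler final {
public:
    void set_min_filter(TextureFilter filter) { m_min_filter = filter; }
    void set_mag_filter(TextureFilter filter) { m_mag_filter = filter; }
    void set_wrap_s(TextureWrapMode mode) { m_wrap_s = mode; }
    void set_wrap_t(TextureWrapMode mode) { m_wrap_t = mode; }

    // Sampling consults this state instead of per-texture settings,
    // mirroring OpenGL 3's separate sampler objects.

private:
    TextureFilter m_min_filter { TextureFilter::Nearest };
    TextureFilter m_mag_filter { TextureFilter::Linear };
    TextureWrapMode m_wrap_s { TextureWrapMode::Repeat };
    TextureWrapMode m_wrap_t { TextureWrapMode::Repeat };
};
```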