We have a new, improved string type coming up in AK (OOM-aware, no null
state), and while it's going to use UTF-8, the name UTF8String is a
mouthful - so let's free up the String name by renaming the existing
class.
Making the old one have an annoying name will hopefully also help with
quick adoption :^)
Destruction of `GL::GLContext` resulted in the destruction of
`GPU::Driver` _before_ the destruction of the allocated textures, which
in turn point to `GPU::Image` objects. Since the destruction of
`GPU::Driver` unloads the shared library, we were trying to invoke
non-existing code.
Fix this by moving `m_driver` up in `GLContext` so that it's last in
line for destruction.
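For reference, this leans on the C++ rule that data members are
destroyed in reverse declaration order. A tiny self-contained
illustration (the types are made up, not the actual GLContext members):

```
#include <cstdio>

struct Driver  { ~Driver()  { puts("Driver destroyed (shared library unloaded)"); } };
struct Texture { ~Texture() { puts("Texture destroyed"); } };

struct GLContext {
    Driver m_driver;   // declared first => destroyed last
    Texture m_texture; // declared after => destroyed first
};

int main()
{
    GLContext context;
    // On scope exit, prints "Texture destroyed", then
    // "Driver destroyed (shared library unloaded)".
}
```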
Propagate errors in places that are already set up to handle them, like
WebGLRenderingContext and the Tubes demo, and convert other callers
to using MUST.
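For context, this uses AK's `ErrorOr` pattern: callers that are set up
for error handling propagate failures with `TRY`, and callers with no
error path assert success with `MUST`. A minimal sketch, where the
`create_texture` factory is hypothetical:

```
#include <AK/Error.h>
#include <AK/NonnullRefPtr.h>
#include <AK/RefCounted.h>
#include <AK/RefPtr.h>
#include <AK/Try.h>

class Texture : public RefCounted<Texture> { };

// Hypothetical fallible factory, for illustration only.
static ErrorOr<NonnullRefPtr<Texture>> create_texture()
{
    return try_make_ref_counted<Texture>();
}

// Callers that can handle failure propagate it with TRY...
static ErrorOr<void> initialize()
{
    auto texture = TRY(create_texture());
    (void)texture;
    return {};
}

// ...while callers without an error path assert success with MUST.
static void draw_demo()
{
    auto texture = MUST(create_texture());
    (void)texture;
}
```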
Each texture unit now has its own texture transformation matrix stack.
Introduce a new texture unit configuration that is synced when changed.
Because we're no longer passing a silly `Vector` when drawing each
primitive, this results in a slight improvement in frames per second :^)
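A rough sketch of the shape of this change, with illustrative names
(the actual LibGL structures differ):

```
#include <AK/Vector.h>
#include <LibGfx/Matrix4x4.h>

// Illustrative only: each unit owns its own matrix stack, and a dirty
// flag lets us sync the configuration to the device once when it
// changes, instead of passing a Vector along with every primitive.
struct TextureUnit {
    Vector<Gfx::FloatMatrix4x4> texture_matrix_stack;
};

struct TextureUnitConfiguration {
    Vector<TextureUnit> units;
    bool is_dirty { false }; // set on change, cleared after syncing to the device
};
```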
This makes it consistent with our other method, `blit_from_color_buffer`,
and paves the way for a third method that will be introduced in one of
the next commits.
`GL_COMBINE` is basically a fixed-function calculator that performs
simple arithmetic on configurable fragment sources. This patch
implements a number of texture env parameters with support for the RGBA
internal format.
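The commit doesn't list the exact parameters implemented here, but a
typical `GL_COMBINE` setup uses the standard OpenGL 1.3 texture env
calls, e.g.:

```
// Modulate the texture color with the result of the previous stage:
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_MODULATE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE1_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND1_RGB, GL_SRC_COLOR);
```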
When compiling with SDL_opengl, all kinds of differences between LibGL
and OpenGL constants and types popped up as redefinition warnings and
errors.
This fixes all LibGL-related warnings when compiling PrBoom+ :^)
The Quake 3 port makes use of this extension to determine a more
efficient multitexturing strategy. Since LibSoftGPU supports it, let's
report the extension in LibGL. :^)
A GPU (driver) is now responsible for reading and writing pixels from
and to user data. The client (LibGL) is responsible for specifying how
the user data must be interpreted or written to.
This allows us to centralize all pixel format conversion in one class,
`LibSoftGPU::PixelConverter`. For both the input and output image, it
takes a specification containing the image dimensions, the pixel type
and the selection (basically a clipping rect), and converts the pixels
from the input image to the output image.
Effectively this means we now support almost all OpenGL 1.5 formats,
and all custom logic has disappeared from:
- `glDrawPixels`
- `glReadPixels`
- `glTexImage2D`
- `glTexSubImage2D`
The new logic is still unoptimized, but on my machine I experienced no
noticeable slowdown. :^)
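A sketch of what such a conversion interface looks like; the names and
fields below are illustrative, not the exact `LibSoftGPU::PixelConverter`
API:

```
// Illustrative sketch, not the actual API.
struct ImageSpecification {
    int width { 0 };
    int height { 0 };
    int pixel_format { 0 };   // e.g. an OpenGL format/type pair
    int selection[4] {};      // x, y, width, height: the clipping rect
};

class PixelConverter {
public:
    PixelConverter(ImageSpecification input, ImageSpecification output)
        : m_input(input)
        , m_output(output)
    {
    }

    // Read pixels from input_data per the input specification and write
    // them, converted, to output_data per the output specification.
    void convert(void const* input_data, void* output_data);

private:
    ImageSpecification m_input;
    ImageSpecification m_output;
};
```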
This prevents us from needing an `sv` suffix, and potentially reduces
the need to run generic code for a single character (`contains`,
`starts_with`, `ends_with` etc. for a `char` become just a length and
equality check).
No functional changes.
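Roughly what the `char` overloads boil down to (a simplified sketch, not
the exact AK implementation):

```
#include <AK/StringView.h>

// ends_with for a char: just a length and equality check.
static bool ends_with(StringView view, char c)
{
    return !view.is_empty() && view[view.length() - 1] == c;
}
```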
Each of these strings would previously rely on StringView's char const*
constructor overload, which would call __builtin_strlen on the string.
Since we now have operator ""sv, we can replace these with much simpler
versions. This opens the door to being able to remove
StringView(char const*).
No functional changes.
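In other words, call sites change along these lines:

```
#include <AK/StringView.h>

// Before: goes through StringView(char const*), which calls __builtin_strlen.
// StringView name = "example";

// After: operator""sv carries the length at compile time.
StringView name = "example"sv;
```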
glCullFace only accepts GL_FRONT, GL_BACK and GL_FRONT_AND_BACK.
We checked if the mode was valid by performing
```
cull_mode < GL_FRONT || cull_mode > GL_FRONT_AND_BACK
```
However, this range also contains GL_LEFT and GL_RIGHT, which we would
accept when we should return a GL_INVALID_ENUM error.
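GL_LEFT (0x0406) and GL_RIGHT (0x0407) sit numerically between GL_FRONT
(0x0404) and GL_FRONT_AND_BACK (0x0408), so the fix is to compare
against the three valid enums explicitly, along the lines of:

```
// Reject everything except the three valid cull modes.
if (cull_mode != GL_FRONT && cull_mode != GL_BACK && cull_mode != GL_FRONT_AND_BACK) {
    // raise GL_INVALID_ENUM (LibGL uses an error macro for this) and bail
    return;
}
```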
This commit implements glClipPlane and its supporting calls, backed
by new support for user-defined clip planes in the software GPU clipper.
This fixes some visual bugs seen in the Quake III port, in which mirrors
would only reflect correctly from close distances.
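For reference, standard usage of the new call looks like:

```
// Clip away everything behind the z = 0 plane (Ax + By + Cz + D >= 0 is kept):
GLdouble const plane[4] = { 0.0, 0.0, 1.0, 0.0 };
glClipPlane(GL_CLIP_PLANE0, plane);
glEnable(GL_CLIP_PLANE0);
```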
Implement (anti)aliased point drawing and anti-aliased line drawing.
Supported through LibGL's `GL_POINTS`, `GL_LINES`, `GL_LINE_LOOP` and
`GL_LINE_STRIP`.
In order to support this, `LibSoftGPU`'s rasterization logic was
reworked. Now, any primitive can be drawn by invoking `rasterize()`,
which takes care of the quad loop and fragment testing logic. Three
callbacks need to be passed (a sketch follows the list):
* `set_coverage_mask`: the primitive needs to provide initial coverage
mask information so fragments can be discarded early.
* `set_quad_depth`: fragments survived stencil testing, so depth values
need to be set so depth testing can take place.
* `set_quad_attributes`: fragments survived depth testing, so fragment
  shading is going to take place. All attributes like color, tex coords
  and fog depth need to be set so alpha testing and, eventually,
  fragment rasterization can take place.
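A minimal sketch of that structure; the types and names are
illustrative, not the actual LibSoftGPU code:

```
// Illustrative stand-in for LibSoftGPU's 2x2 fragment quad.
struct Quad {
    int x { 0 };
    int y { 0 };
    unsigned coverage_mask { 0 }; // one bit per fragment in the quad
    float depth[4] {};
};

// The shared quad loop: primitive-specific callbacks fill in coverage,
// depth and attributes between the fixed fragment tests.
template<typename SetCoverageMask, typename SetQuadDepth, typename SetQuadAttributes>
void rasterize(int min_x, int min_y, int max_x, int max_y,
    SetCoverageMask set_coverage_mask, SetQuadDepth set_quad_depth,
    SetQuadAttributes set_quad_attributes)
{
    for (int y = min_y; y < max_y; y += 2) {
        for (int x = min_x; x < max_x; x += 2) {
            Quad quad { x, y };
            set_coverage_mask(quad);
            if (quad.coverage_mask == 0)
                continue; // no fragment covered: discard early
            // ... stencil testing happens here ...
            set_quad_depth(quad);
            // ... depth testing happens here ...
            set_quad_attributes(quad);
            // ... alpha testing and fragment shading happen here ...
        }
    }
}
```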
As of this commit, there are four instantiations of this function:
* Triangle rasterization
* Points - aliased
* Points - anti-aliased
* Lines - anti-aliased
In order to standardize vertex processing for all primitive types,
things like vertex transformation, lighting and tex coord generation
now take place before clipping.
According to the spec, these calls should be identical to an invocation
of `glVertex2*`, which sets the W-coordinate to 1 by default.
This fixes the credits sequence rendering of Tux Racer.
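That is, per the spec:

```
glVertex2f(x, y);             // is identical to...
glVertex4f(x, y, 0.0f, 1.0f); // ...this: Z defaults to 0, W to 1.
```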
This is the vectorized version of `gl_tex_parameter`, which sets the
parameters of a texture's sampler. We currently support only a single
pname, `GL_TEXTURE_BORDER_COLOR`, which sets the border color returned
when a texture is sampled outside of a mipmap's range.
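Standard usage of the one supported pname:

```
// Sample an opaque red border when coordinates fall outside the mip:
GLfloat const border_color[4] = { 1.0f, 0.0f, 0.0f, 1.0f };
glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border_color);
```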
This adds a virtual base class for GPU devices located in LibGPU.
The OpenGL context now only talks to this device agnostic interface.
Currently the device interface is simply a copy of the existing SoftGPU
interface to get things going :^)
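The shape of the abstraction, sketched with hypothetical member
functions (the real `GPU::Device` declares the full set of operations
LibGL needs):

```
namespace GPU {

// Illustrative sketch of the device-agnostic interface.
class Device {
public:
    virtual ~Device() = default;

    // Hypothetical examples of the operations the context calls into:
    virtual void clear_color_buffer(float r, float g, float b, float a) = 0;
    virtual void draw_primitives(/* primitive type, vertex data, ... */) = 0;
};

}

// LibSoftGPU then implements the interface:
// class Device final : public GPU::Device { ... };
```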
We now support generating top-left submatrices from a `Gfx::Matrix`
and move the normal transformation calculation into
`SoftGPU::Device`. No functional changes.
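For reference, normals transform by the inverse transpose of the
model-view matrix's top-left 3×3 submatrix; roughly as below, where the
method names are assumptions about the `Gfx::Matrix` API:

```
#include <LibGfx/Matrix3x3.h>
#include <LibGfx/Matrix4x4.h>

// Sketch: extract the rotation/scale part and invert-transpose it.
// submatrix_from_topleft is an assumed name for the new helper.
Gfx::FloatMatrix3x3 normal_transform(Gfx::FloatMatrix4x4 const& model_view)
{
    return model_view.submatrix_from_topleft<3>().inverse().transpose();
}
```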
We were normalizing data read from vertex attribute pointers based on
their usage, but there is nothing written about this behavior in the
spec or in man pages.
When we implement `glVertexAttribPointer`, however, the user can
optionally enable normalization per vertex attribute pointer. This
refactors `VertexAttribPointer` to have a `normalize` field so we can
support that future implementation.
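A sketch of the refactored structure; the field names are illustrative,
not the exact LibGL declaration:

```
#include <GL/gl.h>

struct VertexAttribPointer {
    GLint size { 4 };
    GLenum type { GL_FLOAT };
    bool normalize { false }; // when set, map integer data into [0,1] or [-1,1]
    GLsizei stride { 0 };
    void const* pointer { nullptr };
};
```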
This merges GLContext and SoftwareGLContext into a single GLContext
class. Since the hardware abstraction is handled via the GPU device
interface we do not need the virtual base of GLContext anymore. All
context handling functionality from the old GLContext has been moved
into the new version. All methods in GLContext are now non-virtual and
the class is marked `final`.