ladybird/Tests/LibGfx/TestColor.cpp


/*
 * Copyright (c) 2024, Lucas Chollet <lucas.chollet@serenityos.org>
 *
 * SPDX-License-Identifier: BSD-2-Clause
 */
#include <LibGfx/Color.h>
#include <LibTest/TestCase.h>
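// Converting a color that is already pure gray to grayscale should be a no-op.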
TEST_CASE(color)
{
    for (u16 i = 0; i < 256; ++i) {
        auto const gray = Color(i, i, i);
        EXPECT_EQ(gray, gray.to_grayscale());
    }
}
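// from_bgrx() interprets its u32 argument as 0xXXRRGGBB; the unused X byte
// must be ignored.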
TEST_CASE(from_bgrx)
{
    EXPECT_EQ(Color(0x00, 0x00, 0xff), Color::from_bgrx(0x000000ff));
    EXPECT_EQ(Color(0x00, 0xff, 0x00), Color::from_bgrx(0x0000ff00));
    EXPECT_EQ(Color(0xff, 0x00, 0x00), Color::from_bgrx(0x00ff0000));
    EXPECT_EQ(Color(0x00, 0x00, 0xff), Color::from_bgrx(0xff0000ff));
    EXPECT_EQ(Color(0x00, 0xff, 0x00), Color::from_bgrx(0xff00ff00));
    EXPECT_EQ(Color(0xff, 0x00, 0x00), Color::from_bgrx(0xffff0000));
    EXPECT_EQ(Color(0xaa, 0xbb, 0xcc), Color::from_bgrx(0x00aabbcc));
}
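// from_bgra() interprets its u32 argument as 0xAARRGGBB.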
TEST_CASE(from_bgra)
{
    EXPECT_EQ(Color(0x00, 0x00, 0x00, 0xff), Color::from_bgra(0xff000000));
    EXPECT_EQ(Color(0x00, 0x00, 0xff, 0x00), Color::from_bgra(0x000000ff));
    EXPECT_EQ(Color(0x00, 0xff, 0x00, 0x00), Color::from_bgra(0x0000ff00));
    EXPECT_EQ(Color(0xff, 0x00, 0x00, 0x00), Color::from_bgra(0x00ff0000));
    EXPECT_EQ(Color(0xaa, 0xbb, 0xcc, 0xdd), Color::from_bgra(0xddaabbcc));
}
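// from_rgbx() interprets its u32 argument as 0xXXBBGGRR; the unused X byte
// must be ignored.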
TEST_CASE(from_rgbx)
{
    EXPECT_EQ(Color(0x00, 0x00, 0xff), Color::from_rgbx(0x00ff0000));
    EXPECT_EQ(Color(0x00, 0xff, 0x00), Color::from_rgbx(0x0000ff00));
    EXPECT_EQ(Color(0xff, 0x00, 0x00), Color::from_rgbx(0x000000ff));
    EXPECT_EQ(Color(0x00, 0x00, 0xff), Color::from_rgbx(0xffff0000));
    EXPECT_EQ(Color(0x00, 0xff, 0x00), Color::from_rgbx(0xff00ff00));
    EXPECT_EQ(Color(0xff, 0x00, 0x00), Color::from_rgbx(0xff0000ff));
    EXPECT_EQ(Color(0xaa, 0xbb, 0xcc), Color::from_rgbx(0x00ccbbaa));
}
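// from_rgba() interprets its u32 argument as 0xAABBGGRR.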
TEST_CASE(from_rgba)
{
    EXPECT_EQ(Color(0x00, 0x00, 0x00, 0xff), Color::from_rgba(0xff000000));
    EXPECT_EQ(Color(0x00, 0x00, 0xff, 0x00), Color::from_rgba(0x00ff0000));
    EXPECT_EQ(Color(0x00, 0xff, 0x00, 0x00), Color::from_rgba(0x0000ff00));
    EXPECT_EQ(Color(0xff, 0x00, 0x00, 0x00), Color::from_rgba(0x000000ff));
    EXPECT_EQ(Color(0xaa, 0xbb, 0xcc, 0xdd), Color::from_rgba(0xddccbbaa));
}
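// NamedColor::Green should be reproducible from its CIE Lab, XYZ (D50), and XYZ (D65) coordinates.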
TEST_CASE(all_green)
{
    EXPECT_EQ(Color(Color::NamedColor::Green), Color::from_lab(87.8185, -79.2711, 80.9946));
    EXPECT_EQ(Color(Color::NamedColor::Green), Color::from_xyz50(0.385152, 0.716887, 0.097081));
    EXPECT_EQ(Color(Color::NamedColor::Green), Color::from_xyz65(0.357584, 0.715169, 0.119195));
}
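// HSV -> RGB conversion; with_opacity(0.5) should change only the alpha channel.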
TEST_CASE(hsv)
{
    EXPECT_EQ(Color(51, 179, 51), Color::from_hsv(120, 0.714285714, .7));
    EXPECT_EQ(Color(51, 179, 51, 128), Color::from_hsv(120, 0.714285714, .7).with_opacity(0.5));
    EXPECT_EQ(Color(87, 128, 77), Color::from_hsv(108, 0.4, .5));
}
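// HSL -> RGB conversion; out-of-range hues (-300, 400) wrap around modulo 360.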
TEST_CASE(hsl)
{
    EXPECT_EQ(Color(191, 191, 0), Color::from_hsl(-300, 1.0, 0.375));
    EXPECT_EQ(Color(159, 138, 96), Color::from_hsl(400, 0.25, 0.5));
    EXPECT_EQ(Color(159, 96, 128), Color::from_hsl(330, 0.25, 0.5));
    EXPECT_EQ(Color(128, 0, 128), Color::from_hsl(300, 1.0, 0.25));
    EXPECT_EQ(Color(0, 128, 128), Color::from_hsl(180, 1.0, 0.25));
    EXPECT_EQ(Color(128, 239, 16), Color::from_hsl(90, 0.875, 0.5));
    EXPECT_EQ(Color(128, 223, 32), Color::from_hsl(90, 0.75, 0.5));
    EXPECT_EQ(Color(128, 207, 48), Color::from_hsl(90, 0.625, 0.5));
    EXPECT_EQ(Color(128, 191, 64), Color::from_hsl(90, 0.5, 0.5));
    EXPECT_EQ(Color(128, 175, 80), Color::from_hsl(90, 0.375, 0.5));
    EXPECT_EQ(Color(128, 159, 96), Color::from_hsl(90, 0.25, 0.5));
}