Commit cded300
Upload Experimental HLG LUT generator
I think this is all I can currently make by guessing without an actual HDR device. I am asking others with an HDR display to test this in proper HDR-supported software (Resolve etc.) and modify this script to finish it. I am now writing down some of the things about HDR based on my understanding (note this is just my understanding, it might not be super accurate):

1. One of my friends with an HDR-capable MacBook tested Blender 4.0's EDR/HDR support, and it seems to be completely broken; both SDR and HDR are terribly broken in the Blender 4.0 Mac version. I guess this is why Troy was freaking out about the EDR patch, but the official Blender devs don't seem to have access to a MacBook with a built-in P3 EDR screen, so it might be hard for them to realize what is wrong. So maybe don't use Blender for testing this at the moment.

2. Don't trust the video players out there. I asked the friend I mentioned to play the same Rec.2100 PQ encoded video with HDR10 metadata embedded, and he confirmed that different video players each seem to include a unique view transform for HDR content, including QuickTime and the iPhone 15 Pro's built-in video player. I don't yet know which app to trust; maybe just use DaVinci Resolve for now.

3. HDR, in terms of image formation, is weird. If we think of our AgX Base Rec.2020 image as a percentage for a display (what percent of that pixel's maximum emission strength the pixel should emit), the max emission nits value specifically doesn't seem to really matter: you can have a 1000 nits display, and we would multiply our percentage (0.0 to 1.0, or 0% to 100%) by that 1000 nits to get a nits-based value for our image. Therefore, if we follow a percentage-based mentality for HDR, we can do exactly what we have been doing all along: multiply the percentage by the max emission (1000 nits for HLG) and apply an HLG "[0, 1000] nits to [0, 1]" encoding curve. Everything stays the same as our SDR image.

However, this is not how most protocols are implemented. Most implementations, like Resolve, TCAM, etc., seem to implement HDR encoding with an "SDR = 100 nits" assumption. Basically, what we did above is kind of "SDR = 1000 nits"; with the 100 nits assumption, we would calculate the ratio between SDR max (100) and HDR max (1000) and multiply the percentage by that ratio, which in our case scales the percentage down by 10. This is super weird IMHO if we think in percentages again: our image's 18% middle grey is now 1.8%. So ultimately, compared with the SDR image, we are actually implementing a "darkening" of our image. Note there have been controversies around the whole SDR 100 nits thing, as it was derived from a weird notion of "diffuse white" which doesn't exist in a formed image. Resolve also has an SDR = 203 nits implementation, and Troy once mentioned that most actual SDR displays are around 300 nits. I am going with SDR = 100 nits currently, but there is a parameter in the script that you can change.

4. The HDR image needs to match the SDR image's appearance. We cannot do 100% matching, since we don't have a perceptual color model that actually works right now (none of the models can describe the classic red train in an image that only has cyan pixels), but we still need to try our best. For example, given a movie shot where a bunch of characters are looking at something in shock and the camera cuts to a glowing chromatic object, would it make sense for the HDR distribution of the movie to clearly show the chroma-laden details on the glowing object, while the SDR distribution only shows a solid block of white light? It doesn't make sense; therefore, the SDR and HDR images need to match, at least in terms of attenuation's rate of change (i.e. the speed at which things fade to white). There are parameters in the script for tweaking the match. I don't have an HDR device, so I cannot fine-tune the match.

Please, for people who pick this up from here: test this in Resolve and find a setting that matches the AgX Base Rec.2020 image, or, if no possible setting of the current parameters works, re-work the script to make it match. The config for using this LUT can be found here: https://github.com/EaryChow/AgX/tree/HDR-Experimental

With this published, I will probably no longer do any AgX-related development from now on. At least I hope so; I need to focus on my real-life situation. It's unhealthy for me to continue, but I keep finding myself coming back. Maybe it's the inertia left from my devotion to this project over the past roughly two years. Hopefully I can actually stop developing this and focus on my life issues.
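The "SDR = 100 nits" scaling described in point 3 can be sketched as a small worked example. This is a sketch of the math only, not the script's exact code path (the script expresses the same scaling through `colour.models.exponent_function_basic`):

```python
import math

# With a 1000 nits HLG target and the "SDR = 100 nits" assumption,
# the display-linear percentage is scaled down by HDRMax / SDRMax = 10.
SDRMax = 100
HDRMax = 1000
ratio = HDRMax / SDRMax  # 10.0

# Expressed as a power curve anchored at middle grey:
# we want midgrey ** p == midgrey / ratio, so p = log base midgrey of (midgrey / ratio).
midgrey = 0.18
midgrey_offset_power = math.log(midgrey / ratio, midgrey)

# Middle grey lands at 1.8% of max emission, the "darkening" described above.
print(midgrey ** midgrey_offset_power)  # ~0.018
```

Expressing the division as a power curve (rather than a plain multiply) is what lets the script bend the values around middle grey, which is also why the per-channel compensation discussed below becomes necessary.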
1 parent ab7415e commit cded300

File tree

1 file changed: +313 −0 lines


AgXBaseHLG.py

import math
import colour
import numpy
import re
import sigmoid
import argparse
import luminance_compenstation_bt2020 as lu2020
import luminance_compenstation_p3 as lup3

# Log range parameters
midgrey = 0.18
normalized_log2_minimum = -10
normalized_log2_maximum = +6.5
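The log range parameters above define a normalized Log2 encoding: scene-linear values are expressed as stops around middle grey, then remapped from [-10, +6.5] stops to [0, 1]. A minimal sketch of that mapping (an illustration of what `colour.log_encoding(..., function='Log2', ...)` computes for one value, not the library code itself):

```python
import math

def log2_normalized(x, midgrey=0.18, min_exp=-10.0, max_exp=6.5):
    """Map a scene-linear value to [0, 1] as stops around middle grey."""
    stops = math.log2(x / midgrey)
    return (stops - min_exp) / (max_exp - min_exp)

# Middle grey sits at |min| / (max - min), the same value the script
# later computes as x_pivot.
print(log2_normalized(0.18))  # ~0.60606 (10 / 16.5)
```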
# define color space matrices
bt2020_id65_to_xyz_id65 = numpy.array([[0.6369535067850740, 0.1446191846692331, 0.1688558539228734],
                                       [0.2626983389565560, 0.6780087657728165, 0.0592928952706273],
                                       [0.0000000000000000, 0.0280731358475570, 1.0608272349505707]])

xyz_id65_to_bt2020_id65 = numpy.array([[1.7166634277958805, -0.3556733197301399, -0.2533680878902478],
                                       [-0.6666738361988869, 1.6164557398246981, 0.0157682970961337],
                                       [0.0176424817849772, -0.0427769763827532, 0.9422432810184308]])

# inset matrix from Troy's SB2383 script, settings: rotate = [3.0, -1, -2.0], inset = [0.4, 0.22, 0.13]
# link to the script: https://github.com/sobotka/SB2383-Configuration-Generation/blob/main/generate_config.py
# the relevant part is at lines 88 and 89
inset_matrix = numpy.array([[0.856627153315983, 0.0951212405381588, 0.0482516061458583],
                            [0.137318972929847, 0.761241990602591, 0.101439036467562],
                            [0.11189821299995, 0.0767994186031903, 0.811302368396859]])

# outset matrix from Troy's SB2383 script, settings: rotate = [0, 0, 0], inset = [0.4, 0.22, 0.04], used on inverse
# link to the script: https://github.com/sobotka/SB2383-Configuration-Generation/blob/main/generate_config.py
# the relevant part is at lines 88 and 89
outset_matrix = numpy.linalg.inv(numpy.array([[0.899796955911611, 0.0871996192028351, 0.013003424885555],
                                              [0.11142098895748, 0.875575586156966, 0.0130034248855548],
                                              [0.11142098895748, 0.0871996192028349, 0.801379391839686]]))
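Throughout the script, these matrices are applied with `numpy.tensordot(col, M, axes=(0, 1))`, which for a single RGB triplet is just the matrix-vector product `M @ col`. A quick self-contained check (the matrix values below are copied from the Rec.2020-to-XYZ matrix above purely for illustration):

```python
import numpy

M = numpy.array([[0.6369535067850740, 0.1446191846692331, 0.1688558539228734],
                 [0.2626983389565560, 0.6780087657728165, 0.0592928952706273],
                 [0.0000000000000000, 0.0280731358475570, 1.0608272349505707]])
col = numpy.array([0.25, 0.5, 0.75])

# tensordot sums col's axis 0 against M's axis 1: result[i] = sum_j M[i, j] * col[j]
a = numpy.tensordot(col, M, axes=(0, 1))
b = M @ col
print(numpy.allclose(a, b))  # True
```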
# these lines are dependencies from Troy's AgX script
x_pivot = numpy.abs(normalized_log2_minimum) / (
        normalized_log2_maximum - normalized_log2_minimum
)

# define SDR max nits
SDRMax = 100
HDRMax = 1000
HDR_SDR_Ratio = HDRMax / SDRMax
midgrey_offset_power = math.log(0.18 / HDR_SDR_Ratio, 0.18)

# parameters used for compensating for the midgrey offset power curve's per-channel result
# a larger power value results in a more chroma-laden image, a lower value in a less chroma-laden result
# increasing the lower domain limit caps the upper bound of the chroma level; decreasing the upper domain limit caps the lower bound of the chroma level
# todo: use an actual HDR-capable device, test in DaVinci Resolve, and find the setting that matches SDR AgX Base Rec.2020 the most
# I (Eary) don't have an HDR-capable device so I probably won't be the one doing it. Blender's HDR/EDR support in 4.0 seems to be broken, so maybe test this in Resolve.
chroma_mix_power_of_value = 1.3
chroma_mix_value_domain = [0, 1]
# define middle grey
y_pivot = colour.models.eotf_inverse_BT2100_HLG(
    colour.models.exponent_function_basic(midgrey, midgrey_offset_power, 'basicFwd') * HDRMax)

exponent = [0.4, 0.4]
slope = 2.4
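The `y_pivot` definition above relies on colour's `eotf_inverse_BT2100_HLG`, which takes absolute display luminance in nits back to an HLG signal. For reference, the OETF core of BT.2100 HLG (scene-linear [0, 1] to signal [0, 1]) looks like the sketch below; note the full inverse EOTF used by the script additionally inverts the OOTF, which this sketch omits:

```python
import math

# BT.2100 HLG OETF constants
a = 0.17883277
b = 1 - 4 * a
c = 0.5 - a * math.log(4 * a)

def hlg_oetf(e):
    """Scene-linear [0, 1] -> HLG signal [0, 1] (BT.2100 OETF only)."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)
    return a * math.log(12 * e - b) + c

print(round(hlg_oetf(1 / 12), 3))  # 0.5
print(round(hlg_oetf(1.0), 3))     # 1.0
```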
argparser = argparse.ArgumentParser(
    description="Generates an OpenColorIO configuration",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter,
)
argparser.add_argument(
    "-et",
    "--exponent_toe",
    help="Set toe curve rate of change as an exponential power, hello Sean Cooper",
    type=float,
    default=exponent[0],
)
argparser.add_argument(
    "-ps",
    "--exponent_shoulder",
    help="Set shoulder curve rate of change as an exponential power",
    type=float,
    default=exponent[1],
)
argparser.add_argument(
    "-fs",
    "--fulcrum_slope",
    help="Set central section rate of change as rise over run slope",
    type=float,
    default=slope,
)
argparser.add_argument(
    "-fi",
    "--fulcrum_input",
    help="Input fulcrum point relative to the normalized log2 range",
    type=float,
    default=x_pivot,
)
argparser.add_argument(
    "-fo",
    "--fulcrum_output",
    help="Output fulcrum point relative to the normalized log2 range",
    type=float,
    default=y_pivot,
)
argparser.add_argument(
    "-ll",
    "--limit_low",
    help="Lowest value of the normalized log2 range",
    type=float,
    default=normalized_log2_minimum,
)
argparser.add_argument(
    "-lh",
    "--limit_high",
    help="Highest value of the normalized log2 range",
    type=float,
    default=normalized_log2_maximum,
)

args = argparser.parse_args()
# these lines are dependencies from Troy's AgX script


def apply_sigmoid(x):
    sig_x_input = x

    col = sigmoid.calculate_sigmoid(
        sig_x_input,
        pivots=[args.fulcrum_input, args.fulcrum_output],
        slope=args.fulcrum_slope,
        powers=[args.exponent_toe, args.exponent_shoulder],
    )

    return col

def AgX_Base_Rec2020(col, mix_percent):
    # apply lower guard rail
    col = lu2020.compensate_low_side(col)

    # apply inset matrix
    col = numpy.tensordot(col, inset_matrix, axes=(0, 1))

    # record current chromaticity angle
    pre_form_hsv = colour.RGB_to_HSV(col)

    # apply Log2 curve to prepare for sigmoid
    log = colour.log_encoding(col,
                              function='Log2',
                              min_exposure=normalized_log2_minimum,
                              max_exposure=normalized_log2_maximum,
                              middle_grey=midgrey)

    # apply sigmoid
    col = apply_sigmoid(log)

    # linearize
    col = colour.models.exponent_function_basic(col, 1 / math.log(y_pivot, 0.18), 'basicFwd')

    pre_middle_grey_lowering_hsv = colour.models.RGB_to_HSV(col)

    # lower the middle grey, so upon end encoding, the middle grey matches the common "SDR 1.0 = 100 nits" HLG implementation.
    # This "SDR 1.0 = 100 nits" HDR implementation is weird since middle grey ends up being 1.8% of the max emission instead of 18%,
    # but this is how it is done in DaVinci Resolve, OCIO's builtin transform for HLG, etc.
    col = colour.models.exponent_function_basic(col, midgrey_offset_power, 'basicFwd')

    # a hack trying to match HDR and SDR, compensating for the per-channel nature of the additional power curve
    # (the one applied above to match the middle grey of the SDR = 100 nits assumption)
    col = colour.models.RGB_to_HSV(col)

    col[1] = colour.algebra.lerp(
        numpy.clip(pre_middle_grey_lowering_hsv[2] ** chroma_mix_power_of_value, a_min=chroma_mix_value_domain[0],
                   a_max=chroma_mix_value_domain[1]), col[1], pre_middle_grey_lowering_hsv[1], False)
    col = colour.models.HSV_to_RGB(col)

    # record post-sigmoid chroma angle
    col = colour.RGB_to_HSV(col)

    # mix pre-formation chroma angle with post-formation chroma angle
    col[0] = colour.algebra.lerp(mix_percent / 100, pre_form_hsv[0], col[0], False)

    col = colour.HSV_to_RGB(col)

    # apply outset to make the result more chroma-laden
    col = numpy.tensordot(col, outset_matrix, axes=(0, 1))

    col = numpy.clip(col, a_min=0, a_max=1)
    return col


colour.utilities.filter_warnings(python_warnings=True)
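The hue and chroma mixing inside AgX_Base_Rec2020 is plain linear interpolation. As a standalone sketch (a generic lerp for illustration; colour's `colour.algebra.lerp` is what the script actually calls, and this does not assert its exact argument order):

```python
def lerp(t, a, b):
    """Blend a toward b by factor t in [0, 1]."""
    return (1.0 - t) * a + t * b

# With mix_percent = 40, the formed hue is a 40/60 blend of the two
# recorded chromaticity angles.
print(lerp(0.4, 0.0, 1.0))  # 0.4
```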


def main():
    # resolution of the 3D LUT
    LUT_res = 45

    # The mix_percent here is the mixing factor of the pre- and post-formation chroma angle. Specifically, a simple HSV model is used.
    # Mixing, or lerp-ing, the H is a hack that does not fit a first-principles design.
    # I tried other methods but this seems to be the most straightforward way.
    # I just can't bear to see our rotation of primaries, the "flourish", get messed up by a per-channel Notorious Six hue shift.
    # This means that if we rotate red a bit towards orange to counter the Abney effect, the orange then gets skewed to yellow.
    # Then if we apply the rotation in different primaries, as in BT.2020, where BT.709 red is already more orangish in the first place,
    # this gets magnified. Troy's original version has an outset that also includes the inverse rotation, but because the original rotation
    # has already been skewed by the per-channel N6, the outset matrix in his version didn't cancel the rotation. This seemed like such a
    # mess to me, so I decided to take this hacky approach, at least to get the flourish rotation somewhat in control.
    # As a result, my outset matrix now doesn't contain any rotation; otherwise the original rotation can actually be cancelled.
    # The number 40% here is based on personal testing; you can test which number works better if you would like to change it.
    mix_percent = 40

    LUT = colour.LUT3D(name=f'AgX_Formation_Rec2100HLG',
                       # LUT = colour.LUT3D(name=f'AgX_Formation_Rec2100HLG_P3_Limited',
                       # LUT = colour.LUT3D(name=f'No_Guard_Rail_AgX_Formation_Rec2100HLG',
                       size=LUT_res)

    LUT.domain = ([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])
    LUT.comments = [
        f'AgX Base Rec.2100 Formation LUT designed to target the {HDRMax} nits HLG medium with assumption of SDR = {SDRMax} nits',
        f'per-channel chroma offset compensation power value = {chroma_mix_power_of_value}, domain for that mix factor is {chroma_mix_value_domain}',
        f'This LUT expects input to be E-Gamut Log2 encoding from -10 stops to +15 stops',

        # f'AgX Base Rec.2020 Formation LUT designed to be used on Inverse',
        # f'This LUT expects input (output if inverse) to be Rec.2020 Log2 encoding from -10 stops to +6.5 stops',

        f'But the end image formation will be Rec2100-HLG',
        # f'But the end image formation will be Rec2100-HLG with gamut limited to Display P3',
        f' rotate = [3.0, -1, -2.0], inset = [0.4, 0.22, 0.13], outset = [0.4, 0.22, 0.04]',
        f'The image formed has {mix_percent}% per-channel shifts',
        f'DOMAIN_MIN 0 0 0',
        f'DOMAIN_MAX 1 1 1']
    x, y, z, _ = LUT.table.shape

    for i in range(x):
        for j in range(y):
            for k in range(z):
                col = numpy.array(LUT.table[i][j][k], dtype=numpy.longdouble)

                # decode LUT input transfer function (change max to 6.5 when generating the no-guard-rail version)
                col = colour.log_decoding(col,
                                          function='Log2',
                                          min_exposure=-10,
                                          max_exposure=+15,
                                          middle_grey=midgrey)

                # decode LUT input primaries from E-Gamut to Rec.2020 (mute when generating the no-guard-rail version)
                col = numpy.tensordot(col, lu2020.e_gamut_to_xyz_id65, axes=(0, 1))

                col = numpy.tensordot(col, lu2020.xyz_id65_to_bt2020_id65, axes=(0, 1))

                col = AgX_Base_Rec2020(col, mix_percent)

                # P3 lower rail for P3-limited output: leave the lines below muted for full Rec.2020 gamut output,
                # or unmute them if you want to limit output to the P3 gamut.
                # col = numpy.tensordot(col, lu2020.bt2020_id65_to_xyz_id65, axes=(0, 1))
                # col = numpy.tensordot(col, lup3.xyz_id65_to_p3_id65, axes=(0, 1))
                # col = lup3.compensate_low_side(col)
                # col = numpy.tensordot(col, lup3.p3_id65_to_xyz_id65, axes=(0, 1))
                # col = numpy.tensordot(col, lu2020.xyz_id65_to_bt2020_id65, axes=(0, 1))

                # re-encode transfer function
                col = colour.models.eotf_inverse_BT2100_HLG(col * HDRMax)

                col = numpy.clip(col, a_min=0, a_max=1)

                LUT.table[i][j][k] = numpy.array(col, dtype=LUT.table.dtype)
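The LUT input decode above maps the LUT's [0, 1] coordinate back to scene-linear through a Log2 curve spanning -10 to +15 stops around middle grey. A minimal sketch of that decode (an illustration of what `colour.log_decoding(..., function='Log2', ...)` computes for a single value):

```python
def log2_decode(v, midgrey=0.18, min_exp=-10.0, max_exp=15.0):
    """Map a normalized LUT coordinate in [0, 1] back to scene-linear."""
    stops = min_exp + v * (max_exp - min_exp)
    return midgrey * (2.0 ** stops)

# Middle grey sits at (0 - (-10)) / 25 = 0.4 of the LUT domain.
print(round(log2_decode(0.4), 6))  # 0.18
```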
    LUT_name = f"AgX_Base_Rec2100-HLG.cube"
    # LUT_name = f"AgX_Base_Rec2100-HLG_P3_Limited.cube"
    # LUT_name = f"No_GR_AgX_Base_Rec2100-HLG.cube"
    colour.write_LUT(
        LUT,
        LUT_name)
    print(LUT)
    written_lut = open(LUT_name).read()
    written_lut = written_lut.replace('# DOMAIN_', 'DOMAIN_')
    written_lut = written_lut.replace('nan', '0')
    def remove_trailing_zeros(text):
        # regular expression to find numbers in the text
        pattern = r'\b(\d+\.\d*?)(0+)(?=\b|\D)'

        # replace each found number with trailing zeros removed
        def replace_zeros(match):
            # remove trailing zeros and, if no digits remain after the decimal point, remove the point as well
            after_decimal = match.group(1).rstrip('0')
            if after_decimal.endswith('.'):
                after_decimal = after_decimal.rstrip('.')
            return after_decimal

        # split the text into lines and process each line
        lines = text.split('\n')
        modified_lines = []

        for line in lines:
            if not line.startswith('#'):
                modified_lines.append(re.sub(pattern, replace_zeros, line))
            else:
                modified_lines.append(line)  # keep lines starting with #

        # join the modified lines back into text
        result = '\n'.join(modified_lines)
        return result

    written_lut = remove_trailing_zeros(written_lut)

    open(LUT_name, 'w').write(written_lut)
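The remove_trailing_zeros cleanup can be exercised on its own; the snippet below replicates the function's regex and replacement outside the script to show the effect on typical .cube number rows:

```python
import re

# same pattern as in the script: capture digits after the decimal point
# lazily, then one or more trailing zeros ended by a word boundary or non-digit
pattern = r'\b(\d+\.\d*?)(0+)(?=\b|\D)'

def strip_zeros(line):
    def repl(match):
        kept = match.group(1).rstrip('0')
        return kept.rstrip('.') if kept.endswith('.') else kept
    return re.sub(pattern, repl, line)

print(strip_zeros('0.500000 1.250000 2.000000'))  # 0.5 1.25 2
```

This is purely a file-size optimization: "2.000000" collapses to "2" and "0.500000" to "0.5", which matters when a 45^3 LUT writes 91,125 rows of three floats.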


if __name__ == '__main__':
    try:
        main()
    except KeyboardInterrupt:
        pass

0 commit comments