https://streamable.com/cvgiio
You should be able to see the ghosting there; toward the end I show it with HDR off and smooth motion.
I updated the driver to 546.01.
DLSS Frame Generation doesn't work with or without HDR; it's gone now and shows "wrong driver version".
https://imgur.com/a/xKBhTX6
Up to them; I'm only reporting it because it's part of the HDR issue and surfaced with the HDR patch.
It runs fine without it; the game is super well optimized and otherwise runs well, so I don't need it.
Sorry, I don't have a big enough sample size to really call it statistically relevant.
But I calibrate displays for various customers including retail end-users and movie production companies (both colorists and VFX).
Don't take everything I write below as dogma, but I will try to summarize what I've gathered and experienced since I started working with displays and researching the technicals and best practices around 2009. (I don't want to come across as "high-minded", but I am probably among the top ~1000 people around the globe you can reach on this topic: I've spent years reading docs and industry members' remarks, consulted at great length with practicing, perfectionist-minded colorists, did my own experiments at home for two decades, etc.)
The question of the TRC (tone response curve), and even of the technical white point, often ends up being more of a philosophical question than something you can simply grab from a de facto standard.
Considering SDR, Rec709/sRGB is a de facto standard for the R,G,B primaries and somewhat for the white point (*). But that's pretty much it. The TRC is often a mess (which means the industry "best practices" barely overlap with the numbers/equations in the standards).
* CIE 1931 has a very noticeable metameric failure with recent display technologies, including LED-LCD, WOLED, JOLED, QD-OLED, QLED-LCD, etc. Basically, the narrower the peaks on the spectrum plots and the further they lie from the old CRT spectra, the worse it gets visually. And it's not just because spectrophotometers have limited spectral resolution; it's a problem with the CIE 1931 observer curves that are applied to the spectra to get the XYZ / x,y,Y numbers.
I am not sure about other fields, but the movie industry follows and recommends two alternative solutions:
- visually match the white point to a reference-grade CRT (the old cathode ray tube) or PDP (a.k.a. plasma), measure the CIE 1931 x,y value of that on the current-gen display and use it as the calibration target (no offset is applied to the R,G,B primaries; mixed colors may or may not be scaled during the 3DLUT transform, but they usually are, e.g. following the standard ICC "relative colorimetric" intent).
- the "Judd-Vos method" is basically the same as the "eye match" method, except the "eyeballing" part is replaced with measurements based on the Judd-Vos 1978 observer curves (calibrate the reference CRT/PDP to CIE 1931 D65, measure it back with Judd-Vos, calibrate the current-gen master display to those Judd-Vos values, then measure the CIE 1931 coordinates to be used as the technical white point).
/// The second one works well for WOLED and JOLED, not so much for LED-LCD (not even ~sRGB gamut WLED-LCD, especially not RG-LED or QD-LED-LCD). But it's more convenient (you don't have to place the CRT/PDP next to the current-gen master display to be calibrated).
The TRC for SDR was very elusive and still remains somewhat so.
Both sRGB and Rec709 define encode curves. The decode curves are either not directly defined or are casually ignored by content creators in most circumstances (and for good reasons). The root of the problem is how these ancient standards were defined with the assumption that end-users have no real control over the TRC of their display (they just get whatever their CRT display naturally provides without signal processing).
One of the problems is that Rec709 says absolutely nothing about the decode curve. And both a quick visual test and a look at the assumptions behind the encode curve make it clear that the inverse of the encode curve should never be used directly as a decode curve. The Rec709 encode curve is meant to include corrections for the differences between the mastering room/display and the end-user room/display, assuming both displays use a pure-power gamma function but the viewing conditions differ. The designers decided this warrants two modifications during encode: the overall shape of the gamma curve should be offset from 2.2 to 2.4 (given that the end-user has no control over the gamma, this has to be accounted for on the encode end, hence gamma 2.4 on the production end and CRT-native 2.2 on the end-user end), and the near-black TRC should be further offset to preserve "shadow details". But in practice, this was not achieved by using the actual Rec709-defined offset encode curve, but rather by using CRT displays with processing to display the input at pure-power gamma 2.4, with the assumption that the end-user would watch the content on a pure-power 2.2 native-gamma CRT. The shadow-region offset was casually ignored in practice. And this is what "we" (the colorists I know in my country) do in practice today: calibrate master / client-demo displays to pure-power 2.4 and use no encode offset at all (no shadow-region lifting is done). But this assumes the content is to be watched in theaters (great control over projector characteristics and ambient light, plus the colorist can visit an "idle" room in the target cinema and apply a custom "look" to the render they ship to the cinema) or with the Rec1886 TRC at home.
The Rec1886 standard has a built-in offset for the shadow region for LCD-like (low-contrast) displays (this calculates as absolute zero for OLED). Hence, the shadow region is never lifted during encoding (at least for the online/disc render).
-> You probably spotted Rec1886 in that last sentence. That's the supplementary standard for Rec709 (published decades after Rec709 and just before Rec2020) that (re-)defines a decode curve for Rec709. The problem is, many modern displays do have a Rec1886 preset, but it's rarely actual Rec1886 (with the near-black offset); it's usually plain pure-power gamma 2.4. That's fine for high-contrast displays (where the offset is close to nothing or actually zero), not so much for low-contrast LCDs and home projectors.
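For reference, this is the shape of the published Rec1886 decode curve as a minimal C++ sketch (my own illustration, not anything pulled from a display's firmware); it also shows why the built-in near-black offset collapses to zero on OLED:

```cpp
#include <algorithm>
#include <cmath>

// ITU-R BT.1886 EOTF: V is the normalized signal (0..1), Lw/Lb are the display's
// white and black luminance in cd/m^2. With Lb = 0 (OLED), b becomes 0 and the
// curve degenerates to a pure power 2.4 function; with LCD-like black levels,
// b > 0 gives the shadow-region lift discussed above.
double Rec1886Eotf(double V, double Lw, double Lb)
{
    const double gamma  = 2.4;
    const double lwRoot = std::pow(Lw, 1.0 / gamma);
    const double lbRoot = std::pow(Lb, 1.0 / gamma);
    const double a = std::pow(lwRoot - lbRoot, gamma);   // overall gain
    const double b = lbRoot / (lwRoot - lbRoot);          // near-black lift
    return a * std::pow(std::max(V + b, 0.0), gamma);     // output luminance in cd/m^2
}
```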
But applying a shadow-range lift to the final render is worse, because then high-contrast displays at home won't be accurate either (near-black will be boosted, so the people with high-end OLED displays in their home cinema, the ones who actually care, will suffer).
To sum up Rec709/1886:
These (an alternative CRT-like white point and pure-power gamma 2.4) are the recommendations of display manufacturers like FSI (Flanders Scientific), Sony (the broadcast arm), etc., and what the established colorists I know are usually happy with.
Although it gets hard to make small-time, new/hobby colorists understand why I can't just point my probe at their cheap-ish LCD and set it to the standard, but instead jump through hoops with CRT-white, etc. Honestly, I'm not sure what's right for them specifically when their material is not going to be presented in a (home-)cinema but on YouTube, etc. I try my best to explain all of this and ask them to point their mental finger at one possible solution, but I do weight my D&D dice a little to roll 10+ in favor of what "the big guys" use. I admit there is a little bit of personal incentive for me here (once they "grow up", they will remember I always tried to make them "more professional", even if that meant I didn't optimize their current workflow for their current audience). So it is still somewhat of a mess, even "morally" so (as if there weren't enough philosophical issues already in something meant to be pure science, there is also perceived status/money to consider, even for the calibration technician/consultant, who is basically just a "plumber" in the room next to the colorist, producer, etc). But if you tell me plainly that you will do color in a bright room and only upload to YouTube, I will set your master display to pure-power 2.2 and standard D65, because that will be closest to what your audience will see.
Now let's delve into sRGB. It's another archaic standard, so the same things apply: it was designed with CRT displays in mind, with pure-power native 2.2 gamma at the user end and no control in the user's hands over the TRC of the display (no processing in cheap monitors). The only difference is that this standard assumes brighter ambient light on the content creator's side instead of a very dim room (the movie industry assumed mastering is done in a dark room; the PC industry assumed a room that is not overly bright but far from pitch black, more like an "office" than a movie theater). Thus, the overall shape isn't offset as much by the reference encode curve. But the same logic is present: the encode curve was never meant to be used as an inverse decode curve. When you do that, you ignore the shadow-range correction responsible for bridging the difference in ambient light conditions. So there is an encode curve in the standard, but the decode curve was assumed to be pure-power gamma 2.2 (uncontrollable at the user end, since cheap user displays have no processing built in).
Hence, the content is supposed to look "right" at the user's end when it's encoded with the sRGB TRC (with the defined offset applied during render BUT tagged as sRGB -> this will be important later) yet decoded with no processing and watched at pure-power gamma 2.2. This is how the authors of the standard conceived the system (encoded with the offset, decoded with no inverse offset).
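To make the mismatch concrete, here is a minimal sketch (my own illustration) of the two halves of that chain: the piecewise sRGB encode curve from the standard, and the plain pure-power 2.2 decode it assumed at the user end. The small difference between decode(encode(x)) and x, mostly near black, is exactly the burnt-in viewing-condition offset described above:

```cpp
#include <cmath>

// IEC 61966-2-1 (sRGB) encode: linear scene value (0..1) -> signal value (0..1).
double SRgbEncode(double linear)
{
    return (linear <= 0.0031308)
        ? 12.92 * linear                                // linear toe near black
        : 1.055 * std::pow(linear, 1.0 / 2.4) - 0.055;  // offset power segment
}

// What the standard assumed at the consumer end: a "dumb" CRT-like display
// with pure-power gamma 2.2 and no inverse of the piecewise toe.
double AssumedUserDecode(double signal)
{
    return std::pow(signal, 2.2);
}
```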
Now, you might ask why professional content production software uses inverse-offset decoding... That's because the assumption is that, during production, every member of the team responsible for any kind of color/look works with a display that is properly set up for the ambient light conditions AND their software is configured properly to handle the tags/profiles in the content's metadata (remember I asked you not to forget about tagging when you applied an offset to the render, i.e. "burnt the offset into the material", earlier?). At this point, you've probably realized that "burning in" the offset at every single point of production would mean applying the offset multiple times. That is why, during production, once the offset is burnt in (rendered with the standard encode offset applied, e.g. converted from ACES to sRGB), you want to monitor that content with an inverse offset, so that the user offset is not present, and hence nobody else on the production team sees the content with the offset, because that offset is only meant for the end-user, whose display settings and/or ambient light conditions are different from those of the production team.
See where I am going with this...? Had you not tagged and inverted the offset meant for end-users, the next people after you might want to change the look because their displays/conditions are not the same as the offset was meant for. This is why you tag the content's metadata and reverse the offset when needed (when the content is not watched at the user end but still in the studio).
To sum it up: the standard, with its encode offset and its metadata, works fine inside the studio, with professional software doing the inverse encode where necessary for professional displays in a controlled environment. But end-users are assumed (by the standards) not to have the capability of doing inverse encoding (it's not about laziness or cheap software; the standard was designed around the assumption that end-users have absolutely no way of manipulating the content to begin with - that's the reason for burning offsets into the render instead of just tagging everything with proper metadata).
Well, there is a tiny bit of turbulence in there because two important things changed since the sRGB standard was conceived:
- Some end-user software started being aware of content metadata and even offer (or silently do) color management.
- Most end-user displays are no longer "dumb analog CRT", so may or may not have sRGB primaries and native CRT gamma.
These things both made color management on the end-user side possible and even (fair to say) necessary (because you can't just watch sRGB images on your wide-gamut LCD without color management).
Actually, HDR makes it absolutely mandatory because you can't play the game I will describe below.
But there are multiple issues here...
- One is that the (ancient) standards that manufacturers of displays and coders of end-user software will try to tap into here never actually defined the end-user decode curves explicitly (or even made it clear enough that encode offset is not meant to be inverted during decode on the end-user side when end-user gamma is pure-power 2.2).
** Well, actually, there is a small thing to note here: In some software, you will run into the option of "scene reference" (or "scene preference", or just plain "scene") under color management options. That's for preserving the intended look of the tagged content as meant for the end-user (e.g. encode offset NOT inverted, regardless of metadata tags indicating an offset was burnt-in).
- Another issue is that a display manufacturer will prefer to advertise their displays for the masses with "rich colors" (= wide gamut) and crispy brightness (= high peak luminance). What's the point of selling a wide-gamut display to the masses that uses standard-ish "dull" colors and "muted" brightness? "Nobody will buy that sh*t when there is a display next to it that is cheaper and POPS more. I am not stupid. I want the one with the pretty colors. Not that old-looking one with no POP!"
/// This is one of the reasons your consumer-grade display won't be set to sRGB/Rec709 preset by default (out of the box) even if it has one.
But for a very long time, proper color management in the hardware was considered too expensive for consumer-grade displays, so it's simply not present at all. Only HDR made it vital to include (once you have the hardware to handle HDR, doing SDR color management is merely an act of setting up another preset with different parameters and you can't cheap out on the hardware with HDR).
But even display manufacturers that try to position their products for more tech-savvy, conscious customers face an issue. You see... (I went to great lengths above as to why) "modern" displays (as far as SDR goes) are meant to mimic old CRT displays to comply with the old standards.
Thus, your mindful consumer-oriented manufacturer wants to ship you an LCD that mimics a CRT. That's the way to comply with sRGB (as written). This will create the "scene reference" I mentioned above.
If your conscious manufacturer ships the display with an inverse-encode curve, then it is, in reality, not compliant with sRGB. So, this might sound counter-intuitive, but a real sRGB preset on a consumer display is not really sRGB compliant; that mode is only valid for production use. -> Now you might see why YOUR expensive display had a real sRGB mode: because you are a content producer, so it was meant for YOU, not your audience. If you want the "scene reference" at home as intended, you need to use pure-power gamma at home.
But guess how many people, even inside the industry, know and understand all this... How many care about it in practice? Now, what do you expect the end-user to know, understand and care about...?
/// I am fairly new to HDR. I have had an HDR display at home since 2017 and use it as a consumer, but the colorists I know aren't grading for HDR just yet (at least not seriously, only experimentally and/or for small hobby projects). So I can't delve into that as much as I did with SDR. (Well, actually, I am just tired of writing more about SDR, but I guess you are tired of reading it as well [if you haven't given up long ago], so I will stop here. But do contact me in private if you are interested!)
Edit:
Well, actually, I will add a tiny bit about HDR GUI brightness: I mentioned above how displays are marketed to be crispy bright. If your display has an sRGB mode, it's most probably set to 120 nits. And that's for "office lighting".
But regular HDR is either meant to be watched in a dark room, or the end-user display is responsible for applying a day-time offset (something like the "Cinema HOME" or "DolbyVision BrightRoom" or *Day presets on your TV).
This is in contrast with SDR, where the standard assumed the offset must be burnt in. It's the opposite with HDR: the content should be kept as-is, and the end-user display is responsible for applying offsets when necessary, according to its settings, which should match the ambient conditions (like the Cinema Day/BrightRoom presets).
Thus, you don't have to boost the GUI brightness; your TV will boost it for you in the BrightRoom preset. It will only burn your eyes if you are gaming in a dark room with the DarkRoom preset while the game boosts itself for day-time conditions anyway.
HDR is meant to be delivered as absolute values for a dark room, and processed for a bright room at home as necessary.
Unreal Engine is unfortunately stupid enough to take those values as gospel :( A number of UE5 games are shipping with no calibration at all, just taking the fake values Windows pulls out of its butt, and it's always going to look washed out until they stop.
You can use the system-level calibration on Windows 11, but it generates a broken ICC profile: it'll give the right luminance levels, but screws up your native color gamut. I'm really miffed about the whole thing. Microsoft and Epic are both insane if they want this to be the future, with games just having HDR on and no calibration.
It's often very much necessary to boost the luminance of a game's UI in HDR. Unless the UI is completely opaque, it's going to be alpha-blended with the HDR content underneath it, and that content is often brighter than the UI. One approach is to apply a simple tonemap operator like Reinhard to ensure the content underneath never becomes brighter than the UI, but that's often inadequate. The UI either needs to be made opaque, or you have to raise its luminance.
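A minimal sketch of that idea (my own illustration, not any particular engine's compositing code), assuming luminance values in nits: soft-clamp the scene under the translucent UI with a Reinhard-style operator so it can never exceed the UI's paper white, then alpha-blend as usual:

```cpp
// Composite a translucent UI pixel over HDR scene content. Only the region covered
// by UI should go through the soft clamp in practice; this sketch handles one pixel.
float CompositeUiOverHdr(float sceneNits, float uiNits, float uiAlpha, float uiWhiteNits)
{
    // Reinhard-style soft clamp: maps [0, inf) into [0, uiWhiteNits) so the scene
    // behind the UI can never out-shine the UI's paper white.
    const float squashedScene = sceneNits / (1.0f + sceneNits / uiWhiteNits);

    // Ordinary alpha blend of the UI over the squashed scene.
    return uiAlpha * uiNits + (1.0f - uiAlpha) * squashedScene;
}
```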
Unreal Engine is really bad at this; I almost always turn its native HDR off because the UI isn't bright enough to be legible against the scene underneath. I can name a couple of games that originally shipped with HDR whose developers removed it because their UI was so heavily dependent on alpha-blended content.
Super well optimized is the last thing I would call this game. I have a 4090 and I'm playing at 4K 120fps, and the frame rate can drop by as much as 50% depending on the location. Hell, you can be in one area and just looking around will cause it to drop by up to 30%.
I'm enjoying the game; I'm just surprised to see someone call it optimized when it has such a huge range of frame rates.
I would have already looked into VR if the whole HDR problem hadn't happened. Can't make any promises, of course.
Oh that's a very nice find! Looks like HDR and FG have something to do with each other after all. Thanks for reporting it, have repro and will look into it.
First, I appreciate such a detailed writeup! This is excellent information, so thanks for taking time to share it!
The only reason people ever say that HDR is a mess is because they're not aware of how much of an obsolete mess SDR really is under the hood, with the TRC literally being based on the physics of how phosphors happen to behave in CRTs.
What a terrible system in context of modern game development, no wonder nobody follows it and the whole system is broken.
Game dev offices (and private homes, now in the post-COVID work-from-home era) are not Hollywood color grading suites. Devs will also work throughout the day in various lighting conditions, and the scary part is that most of them won't even have fancy displays; they'll just have some off-the-shelf office/consumer displays, a mixed bag, and most of them won't even be calibrated. But with a large enough team and enough chaos, the picture that developers see on average will be close enough to the picture that users see on average.
Using one transfer function to develop like that, and having consumers use a different one, would result in a game where nobody gets to see the creator's intent. So this principle is strictly applicable to color grading studio environments, but even then, sRGB is meant for PC and office use, things like the web. Nobody will be making web content in a color grading studio.
Fun fact: my previous monitor, the Philips 436M6VBPAB, happens to hard-lock its SDR sRGB mode to 460 nits, and while that monitor is kind of trash in my opinion and caused me countless headaches, it is representative of consumer displays not trying very hard to adhere to standards. But yeah, 120 would be a reasonable number to expect there.
I've been preaching this as well. It's quite a problem, and I've always considered it a symptom of SDR being so outdated. Now, with the advent of HDR, making inaccurate, extra-punchy displays will start to backfire much more than it used to, as it's finally possible to have punchiness in the source content itself, which will lead to an overcooked and distasteful image on inaccurate HDR displays. It's probably one of the most important aspects of the whole transition to HDR, along with an unambiguously defined transfer curve that doesn't differ between TVs and PCs.
I'd like to address the way you explained how sRGB is supposed to be used, with regards to its implementation in Windows, as that's a big part of where I see PC standardisation being anchored, and how I professionally use it as a result.
The way Windows implemented its SDR interface in HDR is by following the sRGB standard. While this may be an obvious thing, I went and reverse engineered it myself in the spirit of learning about it and confirming how it works; here's my Google sheet with calculations and measurements if you're interested in seeing how I used the formulas: https://docs.google.com/spreadsheets/d/1m2BEueXiRW4VCBY-JhphIac93Qv6npAFf0bdRGXf_68/edit?usp=sharing
And what this showed is that the SDR interface is indeed sRGB, and is indeed accurate, despite many claims of it looking washed out or being broken (I wanted to understand what's up with that). After measuring my Philips HDR monitor, I did learn that it being washed out was the effect of it just being drastically inaccurate, with an effect that sort of looks like having the sRGB transform applied twice. My conclusion, upon further investigation and after acquiring an OLED, is that bad local dimming implementations are at fault for this, not the Windows HDR implementation. And when I did switch to OLED, and ensured it was accurate using a colorimeter, I found that SDR in HDR looked good; a bit low-contrast, but it's UI, it's supposed to be functional and easy on the eyes, and it mostly looked the way I expected SDR to look.
What this tells me is that Microsoft considers this sRGB implementation in nits to be what everyone using Windows should see by default in SDR, and in SDR interfaces within HDR environments. So whatever else history and various gamma curves and standardisation may play into this, Microsoft in a way defined what this OS should look like, unambiguously, in nits, for both SDR and HDR. And regardless of all the intricacies and considerations, we ultimately want to all be able to see the same image on our displays, given that we are in an average viewing environment for a typical gaming session, which is probably going to be a slightly dim environment on average, but not totally dark or a fully sunlit room.
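In code form, the mapping those measurements suggest looks roughly like this (my interpretation of the behavior, not Microsoft's actual compositor code; the slider value and range are assumptions):

```cpp
#include <cmath>

// An 8-bit SDR value is decoded with the piecewise sRGB curve and scaled so that
// full-scale SDR white lands on the nit level chosen with the "SDR content
// brightness" slider, inside the HDR output signal.
double SdrInHdrNits(double srgb8bit, double sdrWhiteNits /* slider-dependent, e.g. ~80-480 */)
{
    const double v = srgb8bit / 255.0;
    const double linear = (v <= 0.04045)
        ? v / 12.92                               // piecewise sRGB decode, linear toe
        : std::pow((v + 0.055) / 1.055, 2.4);     // piecewise sRGB decode, power segment
    return linear * sdrWhiteNits;                 // 255 -> chosen SDR white level in nits
}
```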
So it really doesn't matter which standard we use in the end; it matters that there is a standard that is official in some way (dictated by Microsoft), and then it's up to users to adhere to it if they care about accuracy, or to buy a punchy monitor if they don't. But the default should be the standard. As soon as defaults in games and applications aren't assuming the same standard, but are assuming pure 2.2 gamma for example, that's just a broken standard that can't be relied upon, and not a situation we should want to be in.
But of course, if everyone is assuming pure 2.2, then that becomes a new standard, and sRGB should be ignored. Problem is that it's chaos out there with regards to this, both in terms of displays and understanding. The only solution I see here to all this is HDR.
I'd be interested to know what you think about the way Microsoft decided to go about this. My guess is that you'll say that they made a huge mistake, forcing sRGB onto everyone in the way that sRGB was never meant to be used outside of color grading studios?
Yeah, and I stand behind that idea. The inability for users to make their HDR screens brighter to accommodate bright rooms is one of the biggest issues in our transition to HDR. It's leading to the creation of a pseudo-standard of slightly brighter HDR, the kind you see in Talos 2. Another force at play here is that the official guideline for grading HDR content might be a bit too conservative as well, catering a bit too much to the limitations that SDR had back in the 80 nit era, with the advice that HDR should be graded to that level of brightness, just with unclipped highlights.
The BT.2408 recommendation of 203 nits for the diffuse white of SDR content intercut with 1000 nit max HDR content does take this into more reasonable territory, as it implies that HDR is graded to the level of brightness that 203 nit SDR would tend to be at. But as the whole of humanity transitions to HDR, I think we'll be seeing people wanting to go higher. The way iPhones and Androids currently record HDR videos is incredibly out of spec with regards to brightness, but it does show that without the ability to increase the baseline brightness of HDR content to suit the environment, the first thing that will happen is that the guideline of how bright HDR should be will be broken. If large companies are already doing it to such an extreme extent, we can only expect the trend to continue, and the way things are shaping up, we'll end up with a pseudo-standard of grading HDR brighter than reference. If we're lucky, people will settle around a norm of what looks good in an average viewing environment. We'll either have to redefine the recommendation to that, or we'll have to massively deploy the ability to increase the base brightness of HDR (for example, adding an additional HDR-only brightness slider to Windows display options), though it might already be too late for that thanks to smartphone companies. See the example at the timestamp here: https://youtu.be/bYS-P2bF2TQ?t=320
No, not by default at least. It only uses the peak brightness from the EDID to pick between the ACES 1000 nit and ACES 2000 nit tone curves, depending on whether the peak is closer to 1000 or 2000 nits. But either of those tone curves is adjustable using a peak brightness parameter, so in practice this doesn't matter at all. When you go to the calibration screen in Talos 2, you'll see the Unreal defaults for all those values, and you'll see that Maximum Brightness will by default be set to 1000 nits and won't be affected by the EDID.
I don't know how accurate the EDID is with monitors, but in theory it should be. My old HDR monitor does seem to report correct numbers. And I know that EDID reporting doesn't work for TVs, and the only way to have a correct EDID with an HDR TV is to be on Windows 11 and override the registry values with the Custom Resolution Utility tool (I'll be switching to 11 for this reason soon).
One of the goals is for everything to just work. We should continue to strive towards this, even if first attempts at it are prone to issues. But I agree, it's crucial to offer manual HDR calibration tools in games until this is ironed out. In an ideal future, calibration would be done once at the level of OS, and users wouldn't be bothered to calibrate HDR for each game individually, but they should be able to, if desired.
PC display calibration software, like DisplayCal, always defaults to a 2.2 gamma in my experience, not sRGB. All monitor reviews I've seen measure against a 2.2 gamma, and sRGB is only mentioned in the context of the colour space, which is indeed the standard that most people know about. Perhaps that's where the confusion comes from.
My LG C1 TV lists the gamma as 2.2 in the settings, but I'm not entirely sure if this is just a proxy for the sRGB gamma or if it really is a true 2.2 power gamma. Also, I don't know what effect the Nvidia RTX 3080 graphics card is having in terms of influencing this. Is it doing any conversion given I am using the TV as a PC monitor?
The trouble is most users don't know whether their monitors use the piecewise sRGB gamma or not, and the manufacturers are not exactly forthcoming on this matter. Instead, people get confused with the sRGB colour space, which is what's used in SDR monitors, and by 'sRGB' that is what most PC gamers like me are thinking of (the percentage coverage of that colour space is often in the monitor specifications, whereas the gamma is not).
So I'm not sure, at the end of the day, whether I should be using the sRGB gamma option in the game or not. Based on the TV's settings, and the fact that it's a TV, I'm led to think I shouldn't be using the sRGB gamma option given the game's instructions, but subjectively sRGB looks less washed out, shadows are darker and the image is more contrasty, which I find pleasing. Therefore I'm not sure what is accurate or what is truly the creator's intent, due to the confusion and lack of consistency around such archaic standards.
Thankfully if HDR is now fixed I will just be able to use that.
(Just as a sidenote the gamma setting was resetting to sRGB on every launch of the game, I haven't tried the latest patch yet to see whether this has been fixed. I reported it in the bugs section).
I can't speak to DisplayCal defaults, but if so that would be a fair point. I tried to quickly check this, but it doesn't seem to be something that I can reliably research in the time I currently have available.
Gamma 2.2 is unfortunately often used as shorthand for sRGB, as the sRGB curve is most closely approximated by that gamma exponent. But using sRGB to refer just to the color primaries is a bit misguided, since sRGB is a definition of both simultaneously. Pure gamma 2.2 is simply not sRGB gamma.
edit: Just saw the latest Monitors Unboxed review and they seem to be reviewing against sRGB gamma: https://www.youtube.com/watch?v=UonGoqrAdMA
The LG C1, or any TV really, does not support sRGB. The only way to get sRGB to show properly on a TV is to use it in HDR, as Windows will render all SDR content with sRGB gamma inside of a larger HDR container. It would also be possible to use an .icc profile to convert one to the other, but at the cost of introducing some quantization artifacts into the shadow areas.
I also use a TV as a monitor. The graphics card has no effect here; neither the graphics card nor the TV does anything about this, even though they are both aware that the TV is connected to a PC.
Absolutely! This is why there is a reference image there, to show the effect that the toggle has. Users should simply use whichever looks better to them. The idea is that it's better to have the option of switching to pure gamma than not to have it. I might go and improve that reference image some more to make the overall effect clearer.
You can say that again.
Absolutely.
It might be interesting to know that within the SDR range, HDR will look like the sRGB version of the game, and that is the "intended look". Though the intended look is on shaky grounds, since much of the team is using random monitors that are probably mostly sRGB, and working in random viewing environments. So at the end of the day, don't stress too much about it: just choose what looks better to you in SDR, or with HDR adjust the calibration sliders to your preference. Keeping the Shadows and Midtones sliders at their default values is the reference, the closest thing to whatever creator's intent all this chaos allows for, if you know your screen to be accurate and are in a dim viewing environment.
Oh wow, thanks for the report! Will look into it.
Official Unreal Engine documentation is outdated on the matter of HDR, mentioning things such as fullscreen being a requirement, as it seemingly hasn't been updated since UE4 and DX11, as well as not mentioning any of the calibration cvars: https://docs.unrealengine.com/5.3/en-US/high-dynamic-range-display-output-in-unreal-engine/
It says there that HDR can be enabled via blueprints, but we of course want to do it from C++ so that it can be triggered directly from the game menu, like every other option.
Next, the documentation lists the following console variables:
1. r.AllowHDR - this one is easy, it just needs to be set to 1 once in DefaultEngine.ini.
2. r.HDR.EnableHDROutput - this one is supposed to toggle HDR on/off
3. r.HDR.Display.OutputDevice - this one specifies which gamma curve is used
4. r.HDR.Display.ColorGamut - this one specifies which color gamut is used
UE4 was really problematic regarding HDR, but some time around the transition to UE5, I noticed that now simply toggling r.HDR.EnableHDROutput works properly in both editor and in the game, and HDR just works and can be easily toggled on and off, as long as it's enabled in Windows.
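For anyone curious, toggling it from game code can look roughly like this (a sketch under the assumption that you drive it through the console variable named above; this is not our actual menu code):

```cpp
#include "HAL/IConsoleManager.h"

// Toggle HDR output by driving r.HDR.EnableHDROutput from C++ (e.g. from a menu handler).
void SetHdrOutputEnabled(bool bEnable)
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(TEXT("r.HDR.EnableHDROutput")))
    {
        // Note: the priority this gets set at matters; as described further below, a cvar
        // stuck at console priority can no longer be overridden by the device profile.
        CVar->Set(bEnable ? 1 : 0);
    }
}
```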
After checking the cvars 3 and 4, I found both of them at their SDR values of 0. So evidently these two cvars no longer matter, as HDR works despite them being at wrong values. This means that internally they either aren't being used any more, or they are being overridden somewhere somehow.
At this point we started getting your reports of HDR not working for some people, and were trying to figure out what factor determines whether HDR works or not.
One thing that was suspicious at the time were some warning messages in the log. But those seemed irrelevant, as they were showing up both on systems where HDR worked and where it didn't, so we kept looking for where the actual difference lies.
Finally, you guys found out the common factor is having a single-monitor setup, at which point we got a repro. Epic said they themselves couldn't repro the issue, indicating the issue is somehow unique to our project, but they pointed us to where in the code to look for issues regarding this.
I quickly set up Visual Studio, learned how C++ debugging works, and started digging (our actual programmers didn't have HDR monitors and were overwhelmed with other issues). I found a piece of code that checks the number of monitors, with an accompanying comment stating that in the case of one display, the CVars that were set up do represent the state of the display. This check is followed by an early return, meaning the rest of the code, meant for the multiple-displays scenario, doesn't get executed.
At the end of that code there is a call to a function that properly sets values for OutDisplayOutputFormat and OutDisplayColorGamut depending on context. So what's going on is that if there's one monitor, these two values are copied from cvars 3 and 4, which are assumed to be holding correct HDR values, and if there's more than one, the cvars are ignored and correct HDR values are created from scratch by that separate function.
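To make that shape easier to picture, here is a paraphrased sketch of the logic as described (my reconstruction with hypothetical names, not the actual Unreal source; only OutDisplayOutputFormat and OutDisplayColorGamut are names taken from the engine code mentioned above):

```cpp
// Hypothetical stand-ins for the real cvar reads and the real "derive from displays" helper.
struct FOutputState { int OutputDevice; int ColorGamut; };

FOutputState DeriveHdrValuesFromDisplays()
{
    // Placeholder for the engine function mentioned above, which derives the proper HDR
    // output device / gamut from the attached displays; the values here are just examples.
    return {3, 2};
}

void ResolveDisplayOutput(int NumMonitors, int CVarOutputDevice, int CVarColorGamut,
                          int& OutDisplayOutputFormat, int& OutDisplayColorGamut)
{
    if (NumMonitors == 1)
    {
        // Single display: trust r.HDR.Display.OutputDevice / r.HDR.Display.ColorGamut as-is,
        // assuming they already hold correct values -> early return, nothing else runs.
        OutDisplayOutputFormat = CVarOutputDevice;
        OutDisplayColorGamut   = CVarColorGamut;
        return;
    }

    // Multiple displays: the cvars are ignored and correct HDR values are built from scratch.
    const FOutputState Derived = DeriveHdrValuesFromDisplays();
    OutDisplayOutputFormat = Derived.OutputDevice;
    OutDisplayColorGamut   = Derived.ColorGamut;
}
```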
It was tempting right there and then to insert the call to that function before the return, so that the CVar values are simply overridden regardless of the number of monitors (and yes, this worked), but changing engine code should always be a last resort, as it complicates updating the engine to a new version later and is generally considered riskier. So I went after the cvars, to find out why they weren't being set to the correct values.
My first thought was... well, if they are at 0, their initial SDR values, and I never changed them, then maybe that's why; maybe in the case of one display they really do need to be set manually, as the official documentation quoted above did not specify exactly which variables need to be manually changed by devs. So I added a function to our HDR toggle menu option that, when HDR is toggled, also sets cvars 3 and 4 to the correct values.
The result? It did nothing, and it actually caused more error messages and warnings to appear in the log file. OK, then what must be happening is that after our function changes those cvars to the correct HDR values, something changes them back to the wrong SDR values again, before they are accessed by the function that assumes they hold HDR values and wants to copy them into OutDisplayOutputFormat and OutDisplayColorGamut.
Taking a closer look at the warning again, it seems we were right with our assumption in the first place. The fact that our manual setting didn't work suggests that we don't need to change the cvars manually after all; it's this SetByDeviceProfile thing that's supposed to be setting them, but something is blocking it. I quickly removed my manual overrides, and sure enough the number of warnings was reduced and the gamut cvar was no longer being mentioned, but the warning for r.HDR.Display.OutputDevice was still there. Ah, then it must be that we are setting it somewhere, somehow, with a priority that doesn't let it be changed by the DeviceProfile afterwards.
A quick code search revealed the sRGB gamma toggle was setting it. Because of course it was, this cvar contains all the gammas, all the SDR and all the HDR gamma curves, and we want to be able to switch between sRGB and pure 2.2 gamma, which is why we have the sRGB toggle, which really just toggles r.HDR.Display.OutputDevice between 0 and 2.
And looking at the way it is being set, it's just using the "Set" command. After learning about how the cvar priority system in Unreal works, I replaced the Set function with SetWithCurrentPriority, so that the cvar is no longer tagged as ECVF_SetByConsole. Now that the cvar was no longer inaccessible to the DeviceProfile, the HDR toggle started working on a single monitor, and there were 15 minutes until the 1.0.2 hotfix was about to start building. After rushing to submit the change while making sure I didn't break anything else, I quickly realized that the Apply button breaks HDR again, but at that point the build was underway and it was too late to stop and restart it. This is why you had to wait one more day until 1.0.3, and why you got the new workaround in the meantime.
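The change itself is essentially one call swapped for another; a sketch of it (with a hypothetical wrapper function around it, the signature assumed to mirror Set, and 0/2 being the two SDR curve values the toggle switches between, as mentioned above):

```cpp
#include "HAL/IConsoleManager.h"

// Called by the SDR "sRGB gamma" toggle in the options menu (sketch, not the actual project code).
void ApplySrgbGammaChoice(bool bUseSRGB)
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(TEXT("r.HDR.Display.OutputDevice")))
    {
        const int32 Value = bUseSRGB ? 0 : 2;   // the two SDR curve values the toggle switches between

        // Before: CVar->Set(Value);            // left the cvar tagged as ECVF_SetByConsole
        CVar->SetWithCurrentPriority(Value);    // keeps the priority low enough for the device profile to override it later
    }
}
```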
Looking into what was going on with the Apply button, it turned out that it was calling SetUsingGammaSRGB(GetUsingGammaSRGB()); in order to apply any change that might have been done to r.HDR.Display.OutputDevice, which essentially means it's setting r.HDR.Display.OutputDevice to either 0 or 2, both of which are SDR values. After testing that this was indeed a necessary piece of code, I ended up wrapping it in an if statement that checks if HDR is on, and only executes it if it is not, by querying the value of r.HDR.EnableHDROutput. And that's the 1.0.3 hotfix.
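In sketch form (SetUsingGammaSRGB / GetUsingGammaSRGB are our project functions mentioned above; the surrounding code here is just an illustration of the guard, not the actual project source):

```cpp
#include "HAL/IConsoleManager.h"

bool GetUsingGammaSRGB();                 // project function (declaration only, for the sketch)
void SetUsingGammaSRGB(bool bUseSRGB);    // project function (declaration only, for the sketch)

void ReapplyGammaSettingOnApply()
{
    IConsoleVariable* CVarHdrOutput =
        IConsoleManager::Get().FindConsoleVariable(TEXT("r.HDR.EnableHDROutput"));

    // Only re-apply the SDR gamma choice when HDR output is off; doing it while HDR is on
    // would push r.HDR.Display.OutputDevice back to an SDR value (0 or 2) and break HDR.
    if (CVarHdrOutput && CVarHdrOutput->GetInt() == 0)
    {
        SetUsingGammaSRGB(GetUsingGammaSRGB());
    }
}
```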
I have since found that pressing Reset to Default inside the HDR calibration menu breaks HDR in the same way if all the values are already at their defaults; that's something I still have to look into. Luckily it's not urgent, since simply toggling HDR off and back on in the game menu fixes it.
I sent feedback to Epic about all this, suggesting that HDR should always be handled the way it is handled in the case of multiple monitors, as that's just way more robust.
And that's why HDR on single-monitor setups worked for the DESORDRE devs but didn't work for us, even though we're on the same engine version: because we dared to have a menu option to toggle between sRGB and pure 2.2 gamma in SDR, seemingly something completely unrelated to HDR.
I am so glad to support your games.