However, XeSS, as Intel claims, will be a combination of machine learning (like DLSS) and spatial upscaling (like AMD's FSR).
The machine learning part is what makes it tricky to implement in Lossless Scaling.
That said, I'm sure the developer will find a way. All in due time: XeSS is still a few months from release, so enjoy FSR until then!
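To illustrate why the spatial half is the easy half for an external tool: a purely spatial upscaler needs nothing but the finished frame. The bilinear filter below is a minimal, hypothetical stand-in (real FSR 1.0 uses an edge-adaptive pass plus sharpening, not plain bilinear), but the input contract is the same: one color buffer in, one scaled buffer out.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical single-channel bilinear upscale, standing in for a
// spatial upscaler. It consumes only the finished frame, which is why
// a tool like Lossless Scaling can apply it from outside the game.
std::vector<uint8_t> upscale_bilinear(const std::vector<uint8_t>& src,
                                      int sw, int sh, int dw, int dh) {
    std::vector<uint8_t> dst(static_cast<size_t>(dw) * dh);
    for (int y = 0; y < dh; ++y) {
        for (int x = 0; x < dw; ++x) {
            // Map the destination pixel center back into source space.
            float fx = std::max(0.0f, (x + 0.5f) * sw / dw - 0.5f);
            float fy = std::max(0.0f, (y + 0.5f) * sh / dh - 0.5f);
            int x0 = static_cast<int>(fx), y0 = static_cast<int>(fy);
            int x1 = std::min(sw - 1, x0 + 1), y1 = std::min(sh - 1, y0 + 1);
            float tx = fx - x0, ty = fy - y0;
            // Blend the four nearest source texels.
            float top = src[y0 * sw + x0] * (1 - tx) + src[y0 * sw + x1] * tx;
            float bot = src[y1 * sw + x0] * (1 - tx) + src[y1 * sw + x1] * tx;
            dst[y * dw + x] =
                static_cast<uint8_t>(top * (1 - ty) + bot * ty + 0.5f);
        }
    }
    return dst;
}
// A temporal/ML upscaler (DLSS, XeSS) additionally needs per-pixel
// motion vectors, depth, and jittered sample history from the engine,
// data an external scaler never sees. That is the tricky part.
```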
The closest I could see to XeSS being applied without input from the game developers themselves is one of three routes: an ENB-style injection on a per-game basis, where an AI model has been trained on that game with a reference library ready to go; a DirectX API-level solution, where Microsoft maintains an ML upscaling library itself and decides, for some reason, to adopt Intel's approach; or Intel having its users opt in to crowdsourced AI training, so everyone running every game sends data back to Intel to feed a control-panel-level enhancement that works better the more collective runtime it has.

I can't imagine that last one happening without a fair amount of additional CPU load/threads and network bandwidth being consumed, plus paltry participation rates, considering Intel doesn't even have a discrete GPU market presence yet. So if anything like this ever occurs, it'll probably be exclusive to Intel products, particularly as Intel's upcoming big-little architecture will be better at absorbing the extra CPU threads without compromising game performance, which primarily leverages the big high-IPC cores. And even then, I can't see it happening in the short term.
TL;DR: don't hold your breath waiting for temporal, and especially ML-based, upscaling methods to be applied by modders or through any kind of generic injector process.
The difference will apparently be just slightly higher frametimes for the DP4a version, but still faster than native rendering.
https://www.hardwaretimes.com/intels-xess-upscaling-tech-works-and-looks-just-like-nvidia-dlss-2-0/
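To make the DP4a point concrete: DP4a is a dot product of four packed 8-bit integers accumulated into a 32-bit result, which is the instruction XeSS's fallback path uses to run its network on hardware without Arc's XMX matrix units. Below is a scalar C++ sketch of the semantics; GPUs execute this as a single instruction (e.g. CUDA exposes it as __dp4a on Pascal and newer), while XMX units push far more INT8 work per cycle, which is roughly where the frametime gap comes from.

```cpp
#include <cstdint>
#include <cstdio>

// Scalar emulation of signed DP4a: dot product of four packed int8
// lanes, accumulated into a 32-bit integer. GPUs do this in one
// instruction; XeSS falls back to it where XMX units are absent.
int32_t dp4a(uint32_t a, uint32_t b, int32_t acc) {
    for (int i = 0; i < 4; ++i) {
        int8_t ai = static_cast<int8_t>(a >> (i * 8));
        int8_t bi = static_cast<int8_t>(b >> (i * 8));
        acc += static_cast<int32_t>(ai) * static_cast<int32_t>(bi);
    }
    return acc;
}

int main() {
    uint32_t weights     = 0x01020304; // int8 lanes: 4, 3, 2, 1
    uint32_t activations = 0x01010101; // int8 lanes: 1, 1, 1, 1
    printf("%d\n", dp4a(weights, activations, 0)); // prints 10
    return 0;
}
```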