I would think that would give a noticeable performance boost if they left the quality the same.
IIRC, it wasn't until RDNA 3 that AMD even dedicated die space to RT hardware, and even then RT is still handled by regular compute resources split off from the main stack. When those get overwhelmed (which happens fast in heavy RT games), the workload bleeds into the main CUs and utterly tanks performance... they're still a few generations behind NV at best, really.
Thus, while HW RT may be faster on some (most?) NV cards thanks to their dedicated RT cores, on AMD systems (which include something like 80m consoles) you'd likely see Cyberpunk-style performance deltas in benchmarks, i.e. basically unplayable performance. Considering the game cannot be played without RTGI, my guess is they went with the option that let the most people actually /play/ the game, even if that shortchanges NV users, whose RT cores go off for cigarettes and beer while their main cores get the snot beaten out of them.
Why not implement both as options? Dunno, but I'm hoping they patch it in at some point.
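For what it's worth, here's a minimal sketch of how an engine could offer both paths: probe the GPU at startup for hardware RT support and fall back to software RTGI when it isn't there. This assumes a Vulkan renderer with the SDK installed; the names GiPath, supportsHardwareRt, and chooseGiPath-style selection logic are made up for illustration, not anything from the actual game.

[code]
// Hypothetical sketch: pick hardware RT GI if any GPU advertises the
// ray-query / ray-tracing-pipeline extensions, otherwise use software RTGI.
#include <vulkan/vulkan.h>
#include <cstring>
#include <iostream>
#include <vector>

enum class GiPath { HardwareRt, SoftwareRt };

// Returns true if the device advertises the extensions needed for HW ray tracing.
static bool supportsHardwareRt(VkPhysicalDevice dev) {
    uint32_t count = 0;
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &count, nullptr);
    std::vector<VkExtensionProperties> exts(count);
    vkEnumerateDeviceExtensionProperties(dev, nullptr, &count, exts.data());

    for (const auto& e : exts) {
        if (std::strcmp(e.extensionName, "VK_KHR_ray_query") == 0 ||
            std::strcmp(e.extensionName, "VK_KHR_ray_tracing_pipeline") == 0)
            return true;
    }
    return false;
}

int main() {
    VkApplicationInfo app{};
    app.sType = VK_STRUCTURE_TYPE_APPLICATION_INFO;
    app.apiVersion = VK_API_VERSION_1_2;

    VkInstanceCreateInfo ci{};
    ci.sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO;
    ci.pApplicationInfo = &app;

    VkInstance instance = VK_NULL_HANDLE;
    if (vkCreateInstance(&ci, nullptr, &instance) != VK_SUCCESS) {
        std::cerr << "No Vulkan instance; defaulting to software RTGI\n";
        return 0;
    }

    uint32_t devCount = 0;
    vkEnumeratePhysicalDevices(instance, &devCount, nullptr);
    std::vector<VkPhysicalDevice> devices(devCount);
    vkEnumeratePhysicalDevices(instance, &devCount, devices.data());

    // Choose HW RT only if at least one GPU supports it; otherwise fall back.
    GiPath path = GiPath::SoftwareRt;
    for (VkPhysicalDevice d : devices) {
        if (supportsHardwareRt(d)) { path = GiPath::HardwareRt; break; }
    }

    std::cout << (path == GiPath::HardwareRt ? "Using hardware RT GI\n"
                                             : "Using software RTGI fallback\n");
    vkDestroyInstance(instance, nullptr);
    return 0;
}
[/code]

The point being: the capability check is cheap, so shipping both and defaulting to whichever the hardware handles best (or exposing it as a graphics option) seems doable, which is why I'm hoping it shows up in a patch.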