Consoles can never do ray-traced GI at a half-decent framerate, which is why Lumen came into play; but Lumen is actually much slower than ray-traced GI on high-end NVIDIA hardware, while also looking worse.
Therefore, as I've stated before, Unreal Engine is not really a PC-focused engine, and it just doesn't make sense for a future where all hardware does path tracing rather than software approximations like Lumen.
Lumen has a hardware-supported mode, and there's nothing stopping devs from using generic DX-based hardware raytracing or NV/AMD-exclusive models. It's up to developers which implementation they wish to support. Unreal Engine 5 also supports DLSS up to version 3.5. So I don't know what you're on about with it being console-first. Neither the PS5 nor the Xbox Series S/X has Nvidia hardware, so why support a PC-only feature like DLSS 3.5 if the engine is console-only?
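To make that concrete: hardware Lumen is a renderer switch, not a separate engine. A minimal sketch of flipping it at runtime, assuming UE5's `r.Lumen.HardwareRayTracing` console variable (cvar name from memory, so verify it against your engine version; the helper function is hypothetical):

```cpp
// Hypothetical helper: toggles Lumen between its software (distance-field)
// tracing path and hardware ray tracing via a UE5 console variable.
#include "HAL/IConsoleManager.h"

static void SetLumenHardwareRT(bool bUseHardware)
{
    // r.Lumen.HardwareRayTracing: 0 = software tracing, 1 = trace against
    // the hardware ray tracing scene on GPUs that support it.
    if (IConsoleVariable* CVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.Lumen.HardwareRayTracing")))
    {
        CVar->Set(bUseHardware ? 1 : 0, ECVF_SetByCode);
    }
}
```

The same variable can be set from config or the in-game console (`r.Lumen.HardwareRayTracing 1`), which is the point: supporting hardware RT is a per-project choice, not an engine limitation.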
But as far as I know, the hardware Lumen mode just extends the software mode, and a fully hardware-based RT renderer is still much faster on competent PCs.
Of course you can do anything you want with the engine as a dev... but its main focus during development was delivering prime graphics on current-gen consoles, and even at that they have failed.
Try running the same titles on a PC with hardware equivalent to either the PS5 or Series X. The efficiency of shared RAM, highly efficient I/O with custom APIs, and a single hardware SoC will eclipse the similarly specced PC's performance.
It's completely disingenuous to compare PCs running 4090s, with 64 GB of DDR5, the latest 13th-gen Intel or Ryzen 7800X3D CPUs, and new PCIe Gen 5 NVMe drives, to consoles that cost less than half of what the 4090 alone costs. And that's ignoring that AMD is a generation behind Nvidia in raytracing efficiency, while right on par with or better than NV in rasterization.
Of course Epic is looking at optimization on console and on PC. Full path-traced raytracing is still at least 3 to 5 hardware generations away for the masses. It's pretty awful even on the highest-end PCs with 4090s, considering you don't get anything resembling a decent framerate without copious amounts of DLSS and frame generation (which doesn't help latency at all) in anything modern employing the tech, like Cyberpunk 2077's raytracing Overdrive mode.
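On the latency point: interpolated frame generation has to hold back the newest rendered frame so it can blend between two real frames, so input lag tracks the real render rate and actually picks up roughly an extra rendered-frame interval. A back-of-the-envelope sketch (the numbers and the two-stage latency model are illustrative assumptions, not measurements):

```cpp
// Illustrative math only: why frame generation raises displayed FPS
// without improving input latency.
#include <cstdio>

int main()
{
    const double renderFps    = 40.0;               // real rendered frames/sec (assumed)
    const double renderMs     = 1000.0 / renderFps; // 25 ms per real frame
    const double presentedFps = renderFps * 2.0;    // ~80 fps shown: one generated
                                                    // frame per real frame

    // Simplified model: one real frame to render, plus one real frame held
    // back for interpolation. Real pipelines have more stages than this.
    const double latencyNoFgMs = renderMs;
    const double latencyFgMs   = renderMs * 2.0;

    std::printf("presented: %.0f fps\n", presentedFps);
    std::printf("latency without frame gen: ~%.0f ms\n", latencyNoFgMs);
    std::printf("latency with frame gen:    ~%.0f ms\n", latencyFgMs);
    return 0;
}
```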
I wouldn't count on NV for much longer, with their dominance in ML/AI now being their mainstay for income. Gaming is just a blip on the radar for them, meaning they can charge as much as they want and do as little as they want with no real impact on their business. Intel hasn't been in the game long enough to say whether they'll stick around. AMD has the console market to rely upon, and it looks like they may be eschewing enthusiast-level hardware for more mass-market stuff.
I have a 6900 XT overclocked to 6950 XT performance, and I didn't have any issues either.
I get pretty good performance in this game at 1440p High or Ultra settings with just FSR 2 set to Quality. Not complaining.
I can't wait for FSR3 to release so everyone can get a bump in performance (if you meet the requirements, that is).
I hate upscalers and frame gen, but it looks like that's where we're at until the next 1-2 generations of GPUs catch up a bit.
That being said, I'm already getting on-par or slightly better FPS than an RTX 4070 Ti at any resolution in UE5 games, native or with FSR at Quality versus the 4070 Ti at DLSS Quality. Obviously the 4070 Ti is a sure win when frame gen is turned on, as FSR3 isn't out yet.
With FSR3 frame gen, even if I only get 20-30 frames more, I'll be beating a 4070 Ti with frame gen at the same settings.
With you having a 6950 XT, you'll be in the same boat lol.
Hopefully FSR3 frame gen gives us a good boost, but even in the worst case where we only get a small one, it looks like we should be right on par with or exceed a 4070 Ti with frame gen at the same settings.
Will be fun to test all this when it's available.
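For reference on what "FSR 2 set to Quality" actually renders: AMD's published FSR 2 presets scale resolution per axis, so Quality at 1440p means an internal render of roughly 1706x960. A quick sketch (the preset factors are AMD's documented values; the rest is mine):

```cpp
// FSR 2 Quality renders at 1/1.5 of the output resolution per axis.
#include <cstdio>

int main()
{
    const int    outW = 2560, outH = 1440; // 1440p output
    const double qualityScale = 1.5;       // FSR 2 "Quality" per-axis factor
    std::printf("internal render: %dx%d\n",
                static_cast<int>(outW / qualityScale),
                static_cast<int>(outH / qualityScale));
    // Other FSR 2 presets: Balanced 1.7x, Performance 2.0x,
    // Ultra Performance 3.0x.
    return 0;
}
```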
So basically, for a specific workload like streaming textures and loading levels, a game can run faster on a console than on a much faster PC...
However, stuff like Lumen and Virtual Shadow Maps just requires raw GPU power, and all the efficiency of the consoles barely matters for that kind of workload.
And you can see that's correct from the fact that the PS5 and XSX run this game at 720p to hit 60 fps most of the time, and they probably aren't even running the PC's Ultra settings.
A 2080 Super, which is already a bit faster than those consoles, will surely run it at mostly 60 fps at 720p as well.
So no, in the end the efficiency doesn't matter for raw performance.
It can, however, matter for stutter, like loading lag when streaming assets, something that happens a lot less on the current consoles.