Lossless Scaling

FN Jan 12 @ 3:19am
Use APU for Frame Gen?
I have an APU+GPU setup with a Ryzen 5600G and an RTX 3050 8GB.
Currently, multi-monitor mode is disabled in the BIOS, which disables the APU for the time being. I wonder if others here have offloaded frame gen through their APUs, and how has it been so far? Thanks
Last edited by FN; Jan 12 @ 3:20am
Showing 1-15 of 29 comments
If the iGPU is off in the BIOS, the APU's graphics are physically off. Nothing else in your system can use it. You gotta turn that shiz on
FN Jan 12 @ 4:14am 
Originally posted by Premises187:
If the iGPU is off in the BIOS, the APU's graphics are physically off. Nothing else in your system can use it. You gotta turn that shiz on
Yeah, I mentioned that it's disabled for the time being. I'm just wondering how it went for other people in terms of performance, etc...
Gizzmoe Jan 12 @ 4:45am 
Try it and find out how it works in your particular case.
Vornir Jan 12 @ 11:02am 
I tried to do that on my laptop, but it just caused constant AMD driver crashes any time I tried to FG on the iGPU (680M). I also get crashes with certain scalers too, which is annoying, but luckily I only really need it to force 16:9 on my 16:10 display.

It works fine on the dGPU, and maybe single-GPU on the iGPU, but my experience with iGPU+dGPU has been horrendous
Last edited by Vornir; Jan 12 @ 11:03am
Offloading LS to a non-primary GPU seems to have inconsistent results for people; it works very well for some, like ass for others.
You may as well try.
Vornir Jan 12 @ 2:43pm 
Just did a bit of testing to see if I can prevent LSFG 3 at 2x (60fps->120fps) from crashing games when running it on an iGPU, so take this with some salt.

My Radeon 680M is driving a 1440p monitor and a 1200p laptop display. When attempting to frame gen with the iGPU, my 680M was hitting a power throttle limit and the FPS was fluctuating pretty wildly at 100% resolution. Lowering it to 40% made it a bit more consistent, but still not great. Going to the minimum of 25% seems to help the 680M keep up with 2x. I would say, at least in my particular experience, a 680M-tier GPU can do LSFG 3 x2 at 25% resolution with a 60fps base fairly consistently, but raising the resolution will quickly tank your frames.

If you add much else on top of the frame generation, like encoding/decoding video or driving other applications, you will likely hit the iGPU's throttle limit
Last edited by Vornir; Jan 12 @ 2:46pm
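To put rough numbers on Vornir's observation, here is an illustrative back-of-envelope sketch. The per-axis interpretation of the resolution scale and the raw pixel-throughput model are my assumptions, not anything LSFG publishes:

```python
# Rough estimate of how much pixel data a frame-gen pass has to chew
# through per second at different resolution scales. Purely illustrative:
# the real workload depends on the algorithm, not just pixel count.

def pixels_per_second(width, height, base_fps, scale):
    """Pixels processed per second for frame-gen input, assuming the
    resolution scale (0-100) applies per axis, so area shrinks with
    the square of the scale."""
    factor = (scale / 100) ** 2
    return int(width * height * factor * base_fps)

full = pixels_per_second(2560, 1440, 60, 100)   # 100% scale on a 1440p panel
for scale in (100, 40, 25):
    load = pixels_per_second(2560, 1440, 60, scale)
    print(f"{scale:>3}% scale: {load / 1e6:6.1f} Mpix/s "
          f"({load / full:.0%} of full-res load)")
```

Crude as the model is, it shows why dropping from 100% to 25% scale cuts the flow workload to roughly a sixteenth, which lines up with the 680M suddenly keeping up at 25%.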
FN Jan 13 @ 1:50am 
Originally posted by Vornir:
Just did a bit of testing to see if I can prevent LSFG 3 at 2x (60fps->120fps) from crashing games when running it on an iGPU, so take this with some salt.

My Radeon 680M is driving a 1440p monitor and a 1200p laptop display. When attempting to frame gen with the iGPU, my 680M was hitting a power throttle limit and the FPS was fluctuating pretty wildly at 100% resolution. Lowering it to 40% made it a bit more consistent, but still not great. Going to the minimum of 25% seems to help the 680M keep up with 2x. I would say, at least in my particular experience, a 680M-tier GPU can do LSFG 3 x2 at 25% resolution with a 60fps base fairly consistently, but raising the resolution will quickly tank your frames.

If you add much else on top of the frame generation, like encoding/decoding video or driving other applications, you will likely hit the iGPU's throttle limit
So based on this, the iGPU is being maxed out, hence the bad performance. I might test it out soon, but for now I'll stick with the dGPU, since using the APU will surely raise CPU temps significantly.
There was a thing a while back where using frame gen from an AMD card with Nvidia's frame gen worked great together. I would say try and see. FSR 3, I think, is universal now though
Really not worth it unless you're already rocking a dual-GPU setup to begin with (e.g. for rendering, etc.)

Tested it myself, and I kinda got it working, but I noticed that you really need a decent entry/mid-range GPU at a minimum to even get decent enough performance out of it. Fully saturated PCIe lanes also have some effect on performance on both GPUs.

Test system I tried it on:
CPU - Ryzen 9 5900X (has 24 PCIe lanes)
Main GPU - RTX 3060, RX6600 & RX6800
Sub GPU (used only for Lossless Scaling) - GTX950, GTX1060 3GB, RX580 8GB, Quadro P400
RAM - 32GB

Combinations I tried:
* RTX3060 + GTX950 - Works, but has terrible LSFG performance; will sometimes get a DX12 error due to the GTX950 not supporting DX12.
* RTX3060 + GTX1060 3GB - Works way better than the GTX950; no DX12 errors, but still has some random performance drops here and there on more demanding games/settings compared to just using LSFG on the main GPU.
* RTX3060 + RX580 - Works; only a slightly better result than the GTX1060 3GB combo.

* RX6600 + GTX950 - Same result as the RTX3060 + GTX950 test.
* RX6600 + GTX1060 - Same result as the RTX3060 + GTX1060 3GB test.
* RX6600 + RX580 - Same result as the RTX3060 + RX580 test.

* RX6800 + RX6600 - Serviceable, but it's just too much of a waste to use an RX6600 as a sub GPU only to render fake frames in Lossless.
* RX6800 + RX580 - Works the same as with the 1060, but is most definitely not worth the ridiculous extra heat + overall power draw (the RX6800 is 200-250 watts + the RX580 8GB, which is 195 watts).
* RX6800 + GTX1060 3GB - Same result as the RTX3060 + GTX1060 3GB combo in terms of Lossless performance.
* RX6800 + GTX950 - Same result as the RTX3060 + GTX950 test.

Finally, tested a Quadro P400 just for the hell of it, to see if it was even possible:
* RX6800 + Quadro P400 - Lossless runs like ♥♥♥♥; only useful for running apps that require CUDA.
* RTX3060 + Quadro P400 - Also runs like ♥♥♥♥; better to just use the main GPU for Lossless.

Overall, I find that it just wasn't worth the extra heat and total system power draw of adding a second dGPU dedicated just to Lossless Scaling.
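On the PCIe saturation point above, here is a rough sketch of the traffic involved. This is my own back-of-envelope: the "every rendered frame is copied uncompressed to the second GPU" model and the usable-bandwidth figures are assumptions, not measured values:

```python
# Back-of-envelope: one-way bandwidth needed to ship uncompressed rendered
# frames from the main GPU to a secondary GPU doing frame generation.

def frame_traffic_gbps(width, height, fps, bytes_per_pixel=4):
    """GB/s of one-way traffic if every rendered frame is copied whole."""
    return width * height * bytes_per_pixel * fps / 1e9

# Approximate usable per-direction PCIe bandwidth (GB/s), after overhead.
# These are ballpark figures, not spec-exact numbers.
PCIE = {"3.0 x4": 3.5, "3.0 x8": 7.0, "3.0 x16": 14.0, "4.0 x16": 28.0}

need = frame_traffic_gbps(2560, 1440, 60)   # 1440p at a 60fps base
for link, bw in PCIE.items():
    print(f"PCIe {link}: {need:.2f} GB/s needed, "
          f"{need / bw:.0%} of ~{bw} GB/s available")
```

Even at 1440p60 the copy traffic alone is under 1 GB/s, so the real pain likely comes when the link is already saturated by the game itself plus display output, which fits the "fully saturated lanes hurt both GPUs" observation.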
Originally posted by Gizzmoe:
Try it and find out how it works in your particular case.
^ This.
It can give you a few fps if your GPU is choking running both LSFG and the game. It can take a few fps/some stability away. And in some cases it can do pretty much nothing fps-wise.
Faster RAM also helps, obviously.
FN Jan 13 @ 3:44am 
Originally posted by Rogal-117:
Really not worth it unless you're already rocking a dual-GPU setup to begin with (e.g. for rendering, etc.)

Tested it myself, and I kinda got it working, but I noticed that you really need a decent entry/mid-range GPU at a minimum to even get decent enough performance out of it. Fully saturated PCIe lanes also have some effect on performance on both GPUs.

Test system I tried it on:
CPU - Ryzen 9 5900X (has 24 PCIe lanes)
Main GPU - RTX 3060, RX6600 & RX6800
Sub GPU (used only for Lossless Scaling) - GTX950, GTX1060 3GB, RX580 8GB, Quadro P400
RAM - 32GB

Combinations I tried:
* RTX3060 + GTX950 - Works, but has terrible LSFG performance; will sometimes get a DX12 error due to the GTX950 not supporting DX12.
* RTX3060 + GTX1060 3GB - Works way better than the GTX950; no DX12 errors, but still has some random performance drops here and there on more demanding games/settings compared to just using LSFG on the main GPU.
* RTX3060 + RX580 - Works; only a slightly better result than the GTX1060 3GB combo.

* RX6600 + GTX950 - Same result as the RTX3060 + GTX950 test.
* RX6600 + GTX1060 - Same result as the RTX3060 + GTX1060 3GB test.
* RX6600 + RX580 - Same result as the RTX3060 + RX580 test.

* RX6800 + RX6600 - Serviceable, but it's just too much of a waste to use an RX6600 as a sub GPU only to render fake frames in Lossless.
* RX6800 + RX580 - Works the same as with the 1060, but is most definitely not worth the ridiculous extra heat + overall power draw (the RX6800 is 200-250 watts + the RX580 8GB, which is 195 watts).
* RX6800 + GTX1060 3GB - Same result as the RTX3060 + GTX1060 3GB combo in terms of Lossless performance.
* RX6800 + GTX950 - Same result as the RTX3060 + GTX950 test.

Finally, tested a Quadro P400 just for the hell of it, to see if it was even possible:
* RX6800 + Quadro P400 - Lossless runs like ♥♥♥♥; only useful for running apps that require CUDA.
* RTX3060 + Quadro P400 - Also runs like ♥♥♥♥; better to just use the main GPU for Lossless.

Overall, I find that it just wasn't worth the extra heat and total system power draw of adding a second dGPU dedicated just to Lossless Scaling.
Damn, that's a benchmark right there, thank you for sharing. I'll try it out soon, but seeing your results, it would probably make things worse rather than better for me, since mine is just an APU lol
Last edited by FN; Jan 13 @ 3:45am
Spook Jan 13 @ 4:46am 
I've tried the same thing with a very, very, very highly overclocked 5700G and a 6900XT, and the results were undesirable, to say the least.

The fact that the 5600G/5700G are nerfed in the PCIe department (limited to PCIe 3.0) probably makes them an even less suitable candidate than a regular Zen 3 part.

Try it, but don't expect miracles. Connecting your monitor to your motherboard should in theory give you better latency and less PCIe traffic.

The fact that your 3050 outputs fewer frames might actually be beneficial in this case: less data to shuffle around.

Don't forget to disable the iGPU again when done. Good luck!
FN Jan 13 @ 5:40am 
Originally posted by Spook:
I've tried the same thing with a very, very, very highly overclocked 5700G and a 6900XT, and the results were undesirable, to say the least.

The fact that the 5600G/5700G are nerfed in the PCIe department (limited to PCIe 3.0) probably makes them an even less suitable candidate than a regular Zen 3 part.

Try it, but don't expect miracles. Connecting your monitor to your motherboard should in theory give you better latency and less PCIe traffic.

The fact that your 3050 outputs fewer frames might actually be beneficial in this case: less data to shuffle around.

Don't forget to disable the iGPU again when done. Good luck!
Thank you for your input, I'll try it when I get the time. It runs great on my dGPU currently, and I am able to record or stream the interpolated frames using OBS
Last edited by FN; Jan 13 @ 5:41am
Does this work similarly to how we used to use an extra Nvidia card for PhysX?
It should reduce latency a lot if it works. I've got a Titan X Pascal lying around, but I'm too lazy these days to try it.
Spook Jan 13 @ 7:09am 
Originally posted by UNLIMITED POWA:
It should reduce latency a lot if it works.
Very unlikely, because you're introducing a new bottleneck and/or extra processing in the form of your PCIe subsystem. If you mean because of reduced strain on your main GPU, I doubt it compensates.

PCIe 5.0 or 6.0 should theoretically improve dual-GPU setups, and gaming-focused iGPUs are slowly getting to the point where they can actually bear the burden of FG.

In the best-case scenario, the two GPUs would communicate with each other without the CPU playing telephone, probably like an SLI/Crossfire bridge did.

No idea if this could be accomplished over PCIe. Maybe via DMA.
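On the latency question, a quick sketch of what a single frame copy across the bus costs. The numbers are mine, assuming an uncompressed 4-bytes-per-pixel 1440p frame and idealized transfer rates:

```python
# How long does it take to move one uncompressed frame across PCIe?
# In the simplest model the frame must cross at least once
# (game GPU -> frame-gen GPU), adding that much latency per frame.

def copy_ms(width, height, link_gbps, bytes_per_pixel=4):
    """Milliseconds to transfer one frame at the given link bandwidth
    (link_gbps is usable bandwidth in GB/s, an idealized figure)."""
    frame_bytes = width * height * bytes_per_pixel
    return frame_bytes / (link_gbps * 1e9) * 1000

for link, bw in (("3.0 x4", 3.5), ("3.0 x16", 14.0), ("4.0 x16", 28.0)):
    print(f"PCIe {link}: ~{copy_ms(2560, 1440, bw):.2f} ms per 1440p frame copy")
```

At roughly 4 ms over a PCIe 3.0 x4 link, one copy already eats about half of an 8.3 ms 120fps frame budget, which is why the link generation and lane count matter so much for this setup.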

Date Posted: Jan 12 @ 3:19am
Posts: 29