Wallpaper Engine

How to let Wallpaper Engine use different GPU?
I have a GTX 1070 as my main gaming GPU and a 1050 Ti as a secondary, and I want Wallpaper Engine to use the 1050 Ti.
Showing 1-6 of 6 comments
Tim  [developer] May 24, 2019 @ 1:37pm 
Wallpaper Engine should always run on the GPU that is connected to your screen(s). This can be said for any software, really. Multi-GPU setups like the one you describe on a desktop, which do not use SLI (or similar), are largely pointless unless you have a specific use case in mind.

If you let Wallpaper Engine render on your 1050 Ti while your screens are connected to your GTX 1070, the image data has to be sent over to your main card, which heavily taxes your computer's performance and does the exact opposite of what you are trying to achieve.

Long story short, don't force-change the GPU that Wallpaper Engine runs on, and consider removing the 1050 Ti from your system unless you use that card for a specific purpose (like using it as a PhysX accelerator).
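Tim's point about the transfer cost can be made concrete with a back-of-the-envelope estimate (a sketch with illustrative numbers, not measurements from Wallpaper Engine): every frame rendered on the secondary card would have to cross the bus to the card that drives the display.

```python
# Rough estimate of the extra bus traffic caused by copying a rendered
# wallpaper frame from one GPU to the other every single frame.
def cross_gpu_traffic_gb_per_s(width, height, fps, bytes_per_pixel=4):
    """GB/s of copy traffic for one full-frame transfer per frame,
    assuming an uncompressed 32-bit RGBA framebuffer."""
    return width * height * bytes_per_pixel * fps / 1e9

# A single 1440p monitor at 60 FPS:
print(cross_gpu_traffic_gb_per_s(2560, 1440, 60))  # ~0.88 GB/s of copy traffic
```

Even this ideal figure ignores synchronization: the main GPU has to wait for each copy before compositing, which is why rendering on the card the screens are attached to is the fast path.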
『 』 May 29, 2019 @ 2:33am 
I would like to have that option as well, because rendering is NOT done on the secondary GPU but on the primary GPU, in my case the 1080 Ti, which sends its already-processed data to my RX 580, which then only displays it. I check the utilization constantly: Wallpaper Engine pegs my 1080 Ti at around 40% all the time, while my other GPU (the RX 580, which my 5 monitors are connected to) is idling and bored, except for very short spikes in low-power mode that aren't even enough to warrant raising the core and memory clocks above 300 MHz, which would be the case if it were rendering anything at all, considering my 1080 Ti goes to its highest P-state because of Wallpaper Engine.
Biohazard  [developer] May 29, 2019 @ 3:06am 
It isn't possible. The Desktop Window Manager is accelerated by your primary GPU, and it hosts the shared texture that displays the desktop, which has to be merged with whatever will be displayed inside of it, like a wallpaper.

Having monitors connected only to a secondary GPU and also running WE on it means the data will be transferred back and forth one more time than in the OP's case. The desktop is just a single texture that covers all monitors; that's how Microsoft made it. I already tried this in the past with scene-based wallpapers by explicitly choosing the GPU. For videos, web and especially application wallpapers it's not even possible to choose a GPU.

With this many monitors the 1080 will likely go into a high power state anyway, depending on the wallpapers and settings. Maybe not if every single wallpaper is a well optimized scene, but with 5+ monitors it won't idle anymore one way or another.
Last edited by Biohazard; May 29, 2019 @ 3:06am
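The "single texture that covers all monitors" can be pictured as the bounding rectangle of all monitor rects in virtual-desktop coordinates (a sketch of the layout only, with hypothetical coordinates; this is not how DWM actually allocates the surface):

```python
def desktop_texture_size(monitors):
    """monitors: list of (x, y, width, height) rects in virtual-desktop
    coordinates. Returns the (width, height) a single texture would
    need in order to cover every monitor."""
    left = min(x for x, _, _, _ in monitors)
    top = min(y for _, y, _, _ in monitors)
    right = max(x + w for x, _, w, _ in monitors)
    bottom = max(y + h for _, y, _, h in monitors)
    return right - left, bottom - top

# A 1440p main monitor with a 1080p monitor directly to its right:
print(desktop_texture_size([(0, 0, 2560, 1440), (2560, 0, 1920, 1080)]))  # (4480, 1440)
```

Because one surface spans every monitor, it can only be "owned" by one adapter, regardless of which card each individual monitor is plugged into.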
『 』 May 29, 2019 @ 4:08am 
Thanks for the fast answer :)

I have two monitors connected to my 1080 Ti plus the 5 on my RX 580, so 7 total.
I only connected two to my 1080 Ti so that it can actually reach a low power state; that's not possible with 3 or 4 monitors connected to it, due to a bug Nvidia still hasn't fixed.

So if I'm understanding this right, it's impossible in this case because it's a wallpaper, and not a fullscreen D3D application or a video rendering thing like OpenCL?
Would that also be the reason why I can't render videos with WE on the 5 monitors connected to my secondary GPU, even though everything else works?

CEF does have some options to "select" a GPU with "--gpu-active-device-id", "--gpu-active-vendor-id", "--gpu-testing-device-id" and "--gpu-testing-vendor-id"; apparently those are the switches that do it. But I'm guessing those won't work, due to DWM still "rendering" the desktop.
Last edited by 『 』; May 29, 2019 @ 4:11am
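For reference, Chromium-style switches like these are passed on the command line, with PCI vendor/device IDs as values (0x10de is Nvidia's vendor ID; the executable name and device ID below are placeholders, and as discussed, DWM would make this ineffective for a wallpaper anyway):

```
some_cef_host.exe --gpu-active-vendor-id=0x10de --gpu-active-device-id=0x1b06
```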
Biohazard  [developer] May 29, 2019 @ 4:40am 
Originally posted by 『 』:
So if I'm understanding this right, it's impossible in this case because it's a wallpaper, and not a fullscreen D3D application or a video rendering thing like OpenCL?
Yes, normal applications are 'top level' windows and they can have their main frontbuffer on any GPU; Windows even selects it automatically. But wallpapers are child windows: the frontbuffer is a single texture that spans all monitors, so it must be shared, and only one card "owns" it.

It's like taking any 3D application and stretching its window across monitors of different GPUs, it will slow down and CPU usage will increase for DWM.

Originally posted by 『 』:
Would that also be the reason why I can't render videos with WE on the 5 monitors connected to my secondary GPU, even though everything else works?

Yes, but also because you mix AMD and Nvidia. I'm 99% sure it would work if you had two different Nvidia cards.

Originally posted by 『 』:
CEF does have some options to "select" a GPU with "--gpu-active-device-id", "--gpu-active-vendor-id", "--gpu-testing-device-id" and "--gpu-testing-vendor-id"; apparently those are the switches that do it. But I'm guessing those won't work, due to DWM still "rendering" the desktop.

That's true. I haven't figured out how to use them properly, or whether Google intends to keep supporting them or may alter them (since they seem to be for testing purposes). But I did try this with scenes, since I have full control over their DirectX devices, and changing the GPU just made things worse, like I described.

For videos I use Media Foundation, and I can't find any way of choosing a device with the topology I currently have anyway; I would probably have to scrap the whole thing and rebuild it from scratch. But figuring this out wouldn't help, since it doesn't even work well for scenes.



For multi-GPU setups, the only things that "can" really work well are SLI/CrossFire or hybrid setups on laptops, since they physically share memory and the desktop texture can be accessed by both cards.
Last edited by Biohazard; May 29, 2019 @ 4:40am
『 』 May 29, 2019 @ 4:49am 
Thank you for that, that helped me fill in the blanks :)

I'll try it at some point with two Nvidia cards, though they have a limit of 4 monitors per card (; ̄Д ̄)

Date Posted: May 24, 2019 @ 1:12pm
Posts: 6