If you let Wallpaper Engine render on your 1050 Ti while your screens are connected to your GTX 1070, the image data has to be copied over to your main card, which heavily taxes your computer's performance and does the exact opposite of what you are trying to achieve.
Long story short, don't force-change the GPU that Wallpaper Engine runs on, and consider removing the 1050 Ti from your system unless you use that card for a specific purpose (like as a PhysX accelerator).
Having monitors connected only to a secondary GPU and also running WE on it means the data gets transferred back and forth one more time than in the OP's case. The desktop is just a single texture that covers all monitors; that's how Microsoft designed it. I already tried this in the past with scene-based wallpapers by explicitly choosing the GPU. For videos, web and especially application wallpapers it's not even possible to choose a GPU.
With this many monitors the 1080 will likely go into a high power state anyway, depending on the wallpapers and settings. Maybe not if every single wallpaper is a well optimized scene, but with 5+ monitors it won't idle anymore one way or another.
I have two monitors connected to my 1080Ti plus the 5 on my RX580, so 7 total.
I only connected two to my 1080 Ti so that it can actually reach a low power state, which isn't possible with 3 or 4 monitors attached to it due to a bug Nvidia still hasn't fixed.
So if I'm understanding this right, then it's impossible in this case because it's a wallpaper and not a fullscreen D3D application nor a video rendering thing like OpenCL?
Would that also be the reason why I can't render videos with WE on my 5 monitors connected to my secondary GPU, even though everything else works?
CEF does have some options to "select" a GPU: "--gpu-active-device-id", "--gpu-active-vendor-id", "--gpu-testing-device-id", "--gpu-testing-vendor-id" apparently are the switches that do it. But I'm guessing those won't work since DWM is still the one "rendering" the desktop.
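For anyone who wants to experiment with those switches, this is roughly how they would be passed in a CEF host application — just a sketch, not Wallpaper Engine's actual code; the class name and the PCI IDs are made-up examples, and I'm not certain whether Chromium expects the "0x" prefix on the values:

```cpp
#include "include/cef_app.h"
#include "include/cef_command_line.h"

class WallpaperCefApp : public CefApp {
 public:
  void OnBeforeCommandLineProcessing(
      const CefString& process_type,
      CefRefPtr<CefCommandLine> command_line) override {
    // Append Chromium's GPU selection switches before the GPU process starts.
    // 0x10de is NVIDIA's PCI vendor ID; the device ID is only an example and
    // has to match the card you actually want to target.
    command_line->AppendSwitchWithValue("gpu-active-vendor-id", "0x10de");
    command_line->AppendSwitchWithValue("gpu-active-device-id", "0x1c82");
    command_line->AppendSwitchWithValue("gpu-testing-vendor-id", "0x10de");
    command_line->AppendSwitchWithValue("gpu-testing-device-id", "0x1c82");
  }

 private:
  IMPLEMENT_REFCOUNTING(WallpaperCefApp);
};
```

Even if CEF accepts them, the composited result still goes through DWM on whatever GPU the monitors are attached to, so this only moves where the rendering happens, not where the frame ends up.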
It's like taking any 3D application and stretching its window across monitors of different GPUs: it will slow down and DWM's CPU usage will increase.
Yes, but also because you mix AMD and Nvidia. I'm 99% sure it would work if you had two different Nvidia cards.
That's true, I haven't figured out how to use them properly, and I don't know whether Google intends to keep supporting them or whether they may change (since they seem to be meant for testing). But I did try this with scenes, since I have full control over their DirectX devices, and changing the GPU just made it worse, like I described.
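For reference, explicitly picking the GPU for a D3D11 device is done by enumerating DXGI adapters and handing the chosen one to D3D11CreateDevice — a generic sketch of the API, not Wallpaper Engine's code, and the adapter index is just an example:

```cpp
#include <windows.h>
#include <d3d11.h>
#include <dxgi.h>
#include <wrl/client.h>

#pragma comment(lib, "d3d11.lib")
#pragma comment(lib, "dxgi.lib")

using Microsoft::WRL::ComPtr;

// Create a D3D11 device on an explicitly chosen adapter. With a non-null
// adapter pointer the driver type must be D3D_DRIVER_TYPE_UNKNOWN.
HRESULT CreateDeviceOnAdapter(UINT adapterIndex, ComPtr<ID3D11Device>& device) {
  ComPtr<IDXGIFactory1> factory;
  HRESULT hr = CreateDXGIFactory1(IID_PPV_ARGS(&factory));
  if (FAILED(hr)) return hr;

  ComPtr<IDXGIAdapter1> adapter;
  hr = factory->EnumAdapters1(adapterIndex, &adapter);  // e.g. 1 = secondary GPU
  if (FAILED(hr)) return hr;

  // Even when this succeeds, every finished frame still has to be copied to
  // the GPU the monitors are attached to, which is the cross-adapter traffic
  // discussed in this thread.
  return D3D11CreateDevice(adapter.Get(), D3D_DRIVER_TYPE_UNKNOWN, nullptr, 0,
                           nullptr, 0, D3D11_SDK_VERSION, &device, nullptr,
                           nullptr);
}
```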
For videos I use Media Foundation, and I can't find any way of choosing a device with the topology I currently have; I would probably have to scrap the whole thing and rebuild it from scratch. But figuring this out wouldn't help anyway, since it doesn't even work well for scenes.
For multi-GPU setups, the only things that "can" really work well are SLI/Crossfire or hybrid setups on laptops, since they physically share memory and the desktop texture can be accessed by both cards.
I'll try it at some point with two Nvidia cards, though they have a limit of 4 monitors per card (; ̄Д ̄)