3DMark
xSOSxHawkens Feb 22, 2019 @ 5:48pm
Mixed Multi-GPU...
As seen in the linked thread below, there are obviously people out there who have the hardware and *want* to try this. In that thread the dev jokes that mixed rigs are only seen in PR test beds, but I have multiple mixed rigs in my house alone.

As it stands now I run a Vega 64 with a 4GB GTX 670 as a dedicated PhysX secondary, and would love to test what kind of performance I could get using the two together. I would also like to test mixed multi-GPU with a Vega/960-4G, with a 960/670, and with Intel HD 3000 and HD 4600 graphics too.

I understand that as of now there are no games supporting these techs aside from Ashes of the Singularity, but I would also argue that you guys added RTX support despite *it* not being widely adopted and despite it *only* being usable by half the market.

When will we see the leading 3D benchmark implement one of the most exciting and forward-thinking features available for 3D rendering using already existing technology? Or, more simply, when will it get with the times and show game devs what potential there actually is in mixed-GPU rendering?...

https://steamcommunity.com/app/223850/discussions/0/366298942110341784/
Last edited by xSOSxHawkens; Feb 22, 2019 @ 5:50pm
UL_Jarnis  [developer] Feb 23, 2019 @ 1:44am 
Not supported. The main reason is that it is hideously complex to implement, and the number of people with this type of setup is a fraction of one percent. You have to understand that mixed multi-GPU requires that you can somehow predict the performance of each different GPU and then split the work among all those GPUs according to their capabilities and performance, while maintaining stutter-free output.

This is WAY more complicated than normal multi-GPU, where you can pretty much guarantee that each GPU has the same performance, so you can simply split the load evenly.
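
To make the prediction problem concrete, here is a rough illustrative sketch in C++ (not 3DMark code; the structure and names are hypothetical) of the proportional rebalancing a mixed setup would need every frame:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch: split a frame's scanlines across GPUs in
// proportion to each GPU's measured throughput. In a real engine the
// weights would need re-estimating every few frames, since the cost of
// each screen region changes as the camera moves.
struct GpuNode {
    double lastFrameMs;  // measured time this GPU took for its slice
    int    scanlines;    // slice height currently assigned to it
};

void Rebalance(std::vector<GpuNode>& gpus, int totalScanlines) {
    // Throughput estimate: scanlines rendered per millisecond.
    std::vector<double> rate(gpus.size());
    double total = 0.0;
    for (std::size_t i = 0; i < gpus.size(); ++i) {
        rate[i] = gpus[i].scanlines / gpus[i].lastFrameMs;
        total += rate[i];
    }
    // Reassign slices proportionally. The slowest GPU still gates the
    // frame, so any estimation error shows up directly as stutter.
    int assigned = 0;
    for (std::size_t i = 0; i + 1 < gpus.size(); ++i) {
        gpus[i].scanlines =
            static_cast<int>(totalScanlines * rate[i] / total);
        assigned += gpus[i].scanlines;
    }
    gpus.back().scanlines = totalScanlines - assigned;  // remainder
}
```

Even this toy version assumes the cost per scanline is uniform across the screen, which it never is in practice.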

Another issue is that current linked-node configurations are rapidly moving towards supporting only two GPUs - this simplifies multi-GPU greatly. Mixed mode would inevitably also mean "supports any number of GPUs", which complicates the code horrendously. There is a very good reason why NVIDIA and AMD have effectively dropped SLI and Crossfire support for more than 2 cards. It is the same reason why those with 3- and 4-way multi-GPU roam the forums complaining about "poor SLI/Crossfire scaling".

On top of that, there is the main reason why we are not doing mixed-mode support: we can't justify the work and cost of implementing something very complicated for setups that make SLI configurations seem dirt common by comparison. You are basically asking for a major engine structure change for an extremely tiny number of people. We actually looked up how common such setups are in our database a while back, and if you disregard configurations where a laptop with a dGPU has been further enhanced with an eGPU (a desktop GPU in a separate box), it was so rare it might as well not exist. And no, dGPU + Intel integrated is not counted here, because the integrated GPU is so slow that you can't do anything useful with it in a mixed-mode setup. The overhead would be more than the gains from having the iGPU do something.

And with dGPU + eGPU, the problem is that the setup would be so hobbled by the narrow path between the GPUs (most eGPUs are attached via a PCIe x1 link) that it isn't a viable setup for mixed-GPU either.

TL;DR: Very complicated to implement, effectively no one has a setup that would use the feature, so it is not worth doing.

On ray tracing: we did not add vendor-specific RTX support. We added DXR support, which is a Microsoft standard and, in the long term, a major feature for real-time rendering. Every other vendor will eventually have to support it.
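
To make "vendor-independent" concrete, here is a minimal sketch of the standard DX12 capability query an application can use to detect DXR, with no NVIDIA-specific API involved (assuming an already-created D3D12 device):

```cpp
#include <d3d12.h>

// A minimal sketch (not 3DMark code) of the vendor-independent DXR
// capability check: a standard DX12 feature query. `device` is assumed
// to be an already-created D3D12 device.
bool SupportsDXR(ID3D12Device* device) {
    D3D12_FEATURE_DATA_D3D12_OPTIONS5 opts5 = {};
    if (FAILED(device->CheckFeatureSupport(D3D12_FEATURE_D3D12_OPTIONS5,
                                           &opts5, sizeof(opts5))))
        return false;
    // TIER_NOT_SUPPORTED means neither the hardware nor the driver can
    // execute DispatchRays; anything at tier 1.0 or above can.
    return opts5.RaytracingTier >= D3D12_RAYTRACING_TIER_1_0;
}
```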
Last edited by UL_Jarnis; Feb 23, 2019 @ 1:45am
xSOSxHawkens Feb 23, 2019 @ 3:18am 
Thanks for the quick reply...

Please help me to understand something though.

DXR is a *Microsoft standard* and is a vendor-independent feature baked into DX12. Because it is not vendor-specific, all you have to add is DXR, not a specific AMD/NVIDIA version (RTX). Instead it's up to the software dev (you) to implement support for the DX12 feature by programming support for it into your code.

I get all that, and I get that it was easy to implement because it's a baked-in feature of DX12...

What I don't get is this:

Mixed multi-GPU is *also* a feature baked into DX12 by MS, and one that was part of its *original* implementation years ago, not just recently added to the spec. It is *also* a feature that requires devs to program for it.

What makes one baked-in, non-vendor-specific feature of DX12 so easy to implement, and yet makes another baked-in feature that is equally *not* vendor-specific hard to implement?...

Sorry if that seems confusing...
xSOSxHawkens Feb 23, 2019 @ 3:31am 
Two further questions.

You mention the iGPU + dGPU case.

I get that Intel HD is gutless, but AMD APUs are not (at least not graphically, though older ones are weak on CPU cores).

AMD APUs, especially newer ones with decent integrated graphics, hold considerably more power than Intel HD, and are on par with low-end dGPUs. They are also *widespread*. Would they not offer potential for performance use?

Lastly, does mixed multi-GPU *have* to split the 3D rendering, or can it be used to split the compute pipelines? I know that a lot of games and software do more on the GPU nowadays than simple rendering - OpenCL and DirectCompute calculations for particles and physics, etc.

We already *know* that when using proprietary PhysX, these calculations can be split onto a second GPU...

Is there any reason mixed multi-GPU could not be used to split off those types of calculations to other processing nodes? I know that my HD 4600 isn't strong, but I *do* know it can run GTA V at 720p, and if it can run the whole game at minimum settings, I think it could probably handle the smoke/particle effects alone while the main GPU handled the rest.

Don't devs already split a large amount of this between multiple nodes (CPU threads 0/1/2/3... and GPU) and then stitch it all back together on the back end, syncing the 3D and game engines for output?... Would it be that much more difficult to break something out (physics using OpenCL, for example) from, say, CPU thread 3 and instead put it on GPU 1 (assuming GPU 0 is primary)?...

I just look at a system like mine and see a lot of wasted potential. I use my iGPU for transcode and streaming work and know it has power to spare, and I hate that unless it's being used for Steam In-Home Streaming, taking 4K from my Vega down to 1080p60, it just sits idle. The same goes for any games that don't use the 670 I have for PhysX...

Most don't have a second dGPU, but most do have a second DirectCompute/OpenCL compute unit available that sits idle and is *much* faster for those types of calculations than x86...
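
Something like this is all I'm picturing, by the way (a rough hypothetical sketch, not code from any real engine, just to show the shape of the idea):

```cpp
#include <d3d12.h>
#include <wrl/client.h>
using Microsoft::WRL::ComPtr;

// Hypothetical sketch of the "PhysX-style" split: a compute-only queue
// on the *second* adapter's device. The dispatch itself is standard
// DX12; the catch is that every result must be copied back over PCIe
// and synchronized with a fence before the primary GPU can consume it.
ComPtr<ID3D12CommandQueue> MakeComputeQueue(ID3D12Device* secondaryDevice) {
    D3D12_COMMAND_QUEUE_DESC desc = {};
    desc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;  // no graphics work here
    ComPtr<ID3D12CommandQueue> queue;
    secondaryDevice->CreateCommandQueue(&desc, IID_PPV_ARGS(&queue));
    return queue;
}
```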
Last edited by xSOSxHawkens; Feb 23, 2019 @ 3:31am
UL_Jarnis  [developer] Feb 23, 2019 @ 4:28am 
Originally posted by xSOSxHawkens:
Thanks for the quick reply...

Please help me to understand something though.

DXR is a *Microsoft standard* and is a vendor-independent feature baked into DX12. Because it is not vendor-specific, all you have to add is DXR, not a specific AMD/NVIDIA version (RTX). Instead it's up to the software dev (you) to implement support for the DX12 feature by programming support for it into your code.

I get all that, and I get that it was easy to implement because it's a baked-in feature of DX12...

DXR is vendor-independent and part of DX12. Every GPU vendor then has to implement DXR support in their drivers. This can be done with hardware acceleration (as RTX cards do) or without (as the Titan V does).

What I don't get is this:

Mixed multi-GPU is *also* a feature baked into DX12 by MS, and one that was part of its *original* implementation years ago, not just recently added to the spec. It is *also* a feature that requires devs to program for it.

What makes one baked-in, non-vendor-specific feature of DX12 so easy to implement, and yet makes another baked-in feature that is equally *not* vendor-specific hard to implement?...

Sorry if that seems confusing...

The main issue, like I said, is the number of users. Roughly no one uses mixed multi-GPU. It is also very hard to implement. Those two together equal "no sense in implementing it".
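
To give a sense of where the complexity starts, here is a hedged sketch (not 3DMark code): DX12 explicit multi-adapter gives you one independent device per adapter, and everything past that point - splitting the work, copying results over PCIe, keeping the GPUs fenced against each other - is left entirely to the application.

```cpp
#include <d3d12.h>
#include <dxgi.h>
#include <wrl/client.h>
#include <vector>
using Microsoft::WRL::ComPtr;

// Sketch of DX12 explicit multi-adapter: the API will enumerate every
// GPU and let you create a *separate, independent* device on each one.
// Nothing is shared automatically; all cross-GPU scheduling and data
// movement is the application's problem.
std::vector<ComPtr<ID3D12Device>> CreateAllDevices() {
    ComPtr<IDXGIFactory1> factory;
    CreateDXGIFactory1(IID_PPV_ARGS(&factory));

    std::vector<ComPtr<ID3D12Device>> devices;
    ComPtr<IDXGIAdapter1> adapter;
    for (UINT i = 0;
         factory->EnumAdapters1(i, &adapter) != DXGI_ERROR_NOT_FOUND;
         ++i) {
        ComPtr<ID3D12Device> device;
        if (SUCCEEDED(D3D12CreateDevice(adapter.Get(),
                                        D3D_FEATURE_LEVEL_11_0,
                                        IID_PPV_ARGS(&device))))
            devices.push_back(device);  // one independent device per GPU
    }
    return devices;
}
```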
UL_Jarnis  [developer] Feb 23, 2019 @ 4:37am 
Originally posted by xSOSxHawkens:
Two further questions.

You mention the iGPU + dGPU case.

I get that Intel HD is gutless, but AMD APUs are not (at least not graphically, though older ones are weak on CPU cores).

AMD APUs, especially newer ones with decent integrated graphics, hold considerably more power than Intel HD, and are on par with low-end dGPUs. They are also *widespread*. Would they not offer potential for performance use?

Possibly, but the number of users with an AMD iGPU plus a dGPU is again very small - much smaller than the Intel case. Also, if we somehow implemented "works with AMD iGPU but not Intel iGPU because Intel is so bad", we'd get called out for being biased in a second. And in either case the practical benefits would be very small, or even negative, due to the overhead of running multiple GPUs.

Lastly, does mixed multi-GPU *have* to split the 3D rendering, or can it be used to split the compute pipelines? I know that a lot of games and software do more on the GPU nowadays than simple rendering - OpenCL and DirectCompute calculations for particles and physics, etc.

We already *know* that when using proprietary PhysX, these calculations can be split onto a second GPU...

Is there any reason mixed multi-GPU could not be used to split off those types of calculations to other processing nodes? I know that my HD 4600 isn't strong, but I *do* know it can run GTA V at 720p, and if it can run the whole game at minimum settings, I think it could probably handle the smoke/particle effects alone while the main GPU handled the rest.

Don't devs already split a large amount of this between multiple nodes (CPU threads 0/1/2/3... and GPU) and then stitch it all back together on the back end, syncing the 3D and game engines for output?... Would it be that much more difficult to break something out (physics using OpenCL, for example) from, say, CPU thread 3 and instead put it on GPU 1 (assuming GPU 0 is primary)?...

Here we run into the issue that 3DMark is supposed to give a figure for how well a piece of hardware performs. If we do not use all the GPUs maximally in mixed multi-GPU, some guy will come and post that we are terrible developers because his iGPU is only at 67% load while the dGPU is maxed, and that we should stop making benchmarks because we're so bad.

Even if the overall difference is that the test runs 0.1 fps slower than it would if the iGPU were 100% loaded.

Even if the number of users actually running this configuration was 0.01%.

And one would still have to implement both use cases - mixed multi-GPU and plain single GPU. To retain good optimization, these would have to be two separate code paths, or the standard single-dGPU case would suffer.

Then you toss in the problem that CPU performance can actually be (slightly) lower when the iGPU is being loaded. Modern Intel desktop CPUs lose about 5-10% of CPU performance when the iGPU is heavily loaded, due to the extra thermal load. That is another complication in any scenario where the iGPU is taken into the mix, resulting in a scenario that no longer reflects how 99.9% of games work (where the iGPU sits idle if you have a dGPU).

I just look at a system like mine and see a lot of wasted potential. I use my iGPU for transcode and streaming work and know it has power to spare, and I hate that unless it's being used for Steam In-Home Streaming, taking 4K from my Vega down to 1080p60, it just sits idle. The same goes for any games that don't use the 670 I have for PhysX...

Most don't have a second dGPU, but most do have a second DirectCompute/OpenCL compute unit available that sits idle and is *much* faster for those types of calculations than x86...

The 3D performance of the iGPU is so poor that it is not really wasted. Streaming is a bit different, due to the dedicated hardware acceleration for encoding in Intel iGPUs. If we made a streaming + 3D benchmark, we'd probably have it as an option. For a pure 3D gaming performance benchmark, it does not make sense.
Last edited by UL_Jarnis; Feb 23, 2019 @ 4:38am