My question to Torzi was a bit off topic as I wanted to understand reprojection for other true VR games (not using Helix Vision). For some it might be a good idea to interpolate every 2nd frame to save performance instead of reducing resolution.
They all do the same thing, but each has its own algorithm to accomplish it. "Interpolation" isn't the right word, since you'd need a future frame to know how to construct the in-between frame. An "estimated" frame is more accurate, since it's using a set of previous frames to 'guess' what the next missing frames will look like.
This happens outside of the VR app that's running, to the point where the app can't even influence how it behaves. It's basically just grabbing frames as they're presented to the headset, and building a running estimate of what future frames should look like.
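The "estimate from previous frames" idea can be shown with a toy sketch. This is NOT the actual motion-smoothing algorithm (real drivers work from motion vectors and headset pose, not raw pixels); it's just a minimal linear extrapolation to illustrate why no future frame is needed. The function name is mine.

```python
# Toy sketch: predict the next frame by assuming each pixel keeps
# changing at the rate observed between the last two frames.
# Illustrative only -- real reprojection uses motion vectors/pose data.

def extrapolate_frame(prev, curr):
    """Linear per-pixel extrapolation: next ~= curr + (curr - prev)."""
    return [
        [2 * c - p for p, c in zip(prow, crow)]
        for prow, crow in zip(prev, curr)
    ]

# Two 2x2 grayscale "frames": brightness rising by 10 per frame.
frame_a = [[100, 100], [100, 100]]
frame_b = [[110, 110], [110, 110]]

print(extrapolate_frame(frame_a, frame_b))  # [[120, 120], [120, 120]]
```

Because it only needs frames that already exist, this kind of estimation can run entirely outside the app, exactly as described above.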
I don't think it's very different from any other image or performance-enhancing technique. I have a cousin who can't live without anti-aliasing and cranks it up to the point where he has to turn other settings down, whereas I don't mind it so much and would rather put the performance elsewhere. I think it just boils down to what you prefer.
There's nothing wrong at all with the way you're doing things... When motion smoothing turns on (or is forced on), it limits the frames that Katanga can present to the headset to 45 FPS, even if it has more available. This happens outside of Katanga's influence. The motion smoothing driver just works in the background and inserts frames into the stream. So it actually makes perfect sense to limit the game to half the refresh rate of the headset if you intend to use motion smoothing. (This is how I've been using other capture apps for a few years now.)
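The arithmetic behind that half-rate cap is simple: the driver presents one real frame plus one estimated frame per pair, so the app only needs half the headset refresh. A small sketch (the function name is mine; neither Katanga nor SteamVR exposes this as an API):

```python
# Why you'd cap the game at half the headset refresh when motion
# smoothing is on: the smoothing driver fills every other slot with
# an estimated frame, so the app only needs refresh/2 real frames.

def app_frame_cap(headset_hz: int, smoothing_on: bool) -> float:
    """FPS the app should target; half rate when smoothing inserts frames."""
    return headset_hz / 2 if smoothing_on else float(headset_hz)

for hz in (80, 90, 120):
    print(hz, "Hz headset ->", app_frame_cap(hz, True), "FPS game cap")
# A 90 Hz headset comes out to the 45 FPS limit described above.
```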
The real issue is that you can't run the game unconstrained without it affecting the VR interface, but I have a feeling this has to do with the implementation of SteamVR, as well as a lack of resource management in Windows when multiple programs want GPU access.
The only way I've been able to max out the game/GPU and NOT have it affect the VR world is in the Desktop view in the WMR Cliffhouse. You can run the game just as you would normally and let it use whatever resources are left, and it still doesn't affect Cliffhouse. This is because MS tied WMR directly into the DWM; they actually modified its code to handle it. I'm betting there's a level of access to the DWM there that just isn't available to anyone except MS.
This is also why I'm interested in seeing where OpenXR goes, as it bypasses any dependence on SteamVR and, in the case of WMR, goes straight to the DWM. I'm also curious whether that will allow other headset brands more direct access, as I'm not sure I really want to stick with WMR for my next headset.
Do you have a new beta with DX12 support?
ci_beta is our primary public beta to share builds right before we roll them over to default. beta_driver is typically experimental/speculative. I usually leave it live only while testing, but must have forgotten to remove it.
After that I plan to add DX12 support. You can already run DX12 using SuperDepth3D_VR+ and BlueSkyDefender's CompanionApp, however. He has a new Super3DMode which gives you full side-by-side using color channel compression.
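For readers unfamiliar with side-by-side output: the general idea is to fit both eye views into a single frame buffer. A minimal half-width SBS sketch is below. This is NOT BlueSkyDefender's actual color-channel compression scheme (which I haven't seen); it only illustrates the generic packing idea, and the function name is made up.

```python
# Generic half-width side-by-side packing: drop every other column of
# each eye view and place the two halves in one buffer. Illustrative
# only -- not Super3DMode's actual color-channel technique.

def pack_sbs(left, right):
    """Pack two equal-size images into one half-width-per-eye SBS frame."""
    return [lrow[::2] + rrow[::2] for lrow, rrow in zip(left, right)]

left = [[1, 2, 3, 4]]   # 1x4 "left eye" image
right = [[5, 6, 7, 8]]  # 1x4 "right eye" image
print(pack_sbs(left, right))  # [[1, 3, 5, 7]]
```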
-------
I've created a new custom build of Reshade (based on 4.9.0) that might be interesting for some people. This is strictly an UNOFFICIAL build of Reshade. I've created this version to do two things:
1) Bring SuperDepth3D into VR via HelixVision.
2) Enable 3D Vision hardware direct access of SuperDepth3D, without needing SBS.
BlueSkyDefender has been helping me here to integrate with SuperDepth3D_VR+ shader, and the results are very good. It's been a fun collaboration.
This is an engineering build, so you can probably expect weirdness, bugs, and confusion. But so far it seems pretty stable. DX11 only, sorry. And to use the 3D Vision connection, you'll need either driver 425.31 or 452.06 with the global driver hack.
Serious flaw at the moment: no Reshade UI is visible when in 3D Vision mode. I'm working on it.
How to install:
1) Download this link for the current build and basic Reshade setup without shaders. https://bo3b.s3.amazonaws.com/SD3D_eng.7z
2) Unzip that into your game directory, next to the game.exe you want to try.
3) Download this link for the HV_LTS version of SuperDepth3D_VR+: https://github.com/BlueSkyDefender/Depth3D/archive/refs/tags/HV_LTS.zip
4) Unzip that file into the reshade-shaders folder, and skip replacing the dummy file.
5) Run the game, which will load custom Reshade and setup for VR and/or 3D Vision. The default settings run HelixVision mode in the shader.
You can edit the included HelixVision shortcut to launch Katanga/mirror app to see it in VR.
If you have 3D Vision enabled, it will convert the output of SuperDepth3D into 3D Vision Direct Mode, and you can use your 3D Vision monitor and glasses. If 3D Vision is turned off, it won't run that part, but VR connection is always active.
If your target game has a 3D Vision profile, you need to remove the exe from the profile with NVidia Inspector, otherwise it will interfere with Direct Mode.
to test it. I am so tired of Virtual Desktop/Steam workflow...
That's an interesting observation about z-3D not having 'living'-feeling NPCs. I will look for that the next time I try these. In terms of world scale and world view, I feel that SD3D is very good.
Do you have any observations or ideas about why they don't feel alive? The haloing/distortion maybe? No pop-out effect?
BTW, if you happen to be using the 3D Vision part of this eng build, it requires a much higher divergence in SD3D to look good, because of the up-close monitor. On the HelixVision screen you can typically get away with 35 or so, but on a monitor I need at least 75-100.
I'm looking forward to testing CP77 in Katanga!
In AC Syndicate via CompanionApp, because of issues with Katanga and my RTX 3000, I increased the divergence/convergence to 125 and it looks like geo-3D, but without the feeling I get in CP77 (vorpX).