This actually seems to work! I had to disable "AMD High Definition Audio Device" to get VMware's "High Definition Audio Device" to work.
I'm at work so I have not been able to test Steam but from the looks of it, it sure looks good. :)
You just saved me from wasting 100 bucks on an EDID device!
That is correct. SVGA is bad for GPU passthrough, especially if you're trying to use dVGA with VMware. I suppose you could use a DVI Detective device to force an EDID over DVI, but IMO that's pretty overkill.
edit: nevermind, looks like you got it figured out without a hardware EDID device. Rock on!
Full disclosure - I haven't done any virtualization with Hyper-V. My suspicion would be that as long as Hyper-V supports GPU passthrough (or even PCI passthrough in general) it should work. I don't know what "Direct Play" is in Hyper-V land, but if that's Microsoft's marketing term for GPU Passthrough then that should work.
I actually have 8.1 Pro on my gaming system and had Hyper-V enabled until the latest Nvidia drivers broke systems with it enabled. I was only just beginning to test Hyper-V, but with the Nvidia issue introduced a few weeks ago in the 337.50 driver (see here: http://techreport.com/news/26333/nvidia-geforce-337-61-beta-hotfix-display-driver-released) I removed it for the sake of my StarCraft playability sanity. ;)
I'll have to play with it!
edit - just played with it. There are no options for even PCI passthrough. Maybe the full-blown Hyper-V Server supports it, but I don't know; the version that ships with Windows 8.1 sure doesn't. It doesn't even support VirtIO, and with just about all other devices performance is pretty poor compared to KVM as well. No wonder Microsoft isn't considered a hypervisor powerhouse. Oh well. I would suggest XenServer, ESXi or KVM as hypervisors for testing the streaming of Steam games. Hyper-V is pretty abysmal.
Now if only OpenELEC would easily support the Steam-client! :)
http://www.bloomberg.com/news/2013-11-22/xbox-one-has-not-one-but-three-operating-systems.html
Actually, it will be able to run any two x86-compatible OSes. I am working on making it both end-user friendly AND open, because you should be able to run whatever you like on your hardware. Android, Mac, whatever.
Anyway, my project is underway and posted on Hackaday.io here:
http://hackaday.io/project/1071-The-HYDRA-multiheaded-virtual-computer
Living room console gaming is only ONE of the many uses for this machine, but I think it's definitely a use worthy of focus.
I posted my method for headless streaming from a VM in the build log, check it out, post comments, maybe even give me a skull if you think my endeavors are worthwhile.
I will also be filling in ALL details on how YOU can construct one yourself. In the spirit of openness.
I run a Xeon 1230v3 with AMD graphics under KVM on Arch Linux and pass both the GPU and its audio device to the guest. There were some kernel boot options I had to set so that PCIe passthrough wouldn't screw with other devices; once I set those, it works no problem.
One thing: if I don't shut down the guest but kill it instead, I have trouble getting the guest's display up again.
I have tested it with two VMs, each with its own GPU and USB controller, and it worked well, though the number of available CPU cores goes down. Performance was 80-90% of native, even when running both at the same time. Since In-Home Streaming came out I haven't tested two streaming VMs, but I see no reason why it wouldn't work, other than bandwidth issues.
for more info on this type of VM setup with KVM and vfio-vga:
https://bbs.archlinux.org/viewtopic.php?id=162768
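The boot options mentioned above are probably along these lines — a hedged sketch only, assuming an Intel board with VT-d and an AMD GPU whose vendor:device IDs (1002:6810 for the GPU, 1002:aab0 for its HDMI audio function) are placeholders, not the poster's actual hardware:

```shell
# Sketch of typical KVM passthrough prep (IDs and paths are assumptions).
# 1. Enable the IOMMU and reserve the GPU + its HDMI audio at boot so the
#    host driver never claims them (goes in /etc/default/grub, then
#    regenerate grub.cfg and reboot):
CMDLINE="intel_iommu=on iommu=pt pci-stub.ids=1002:6810,1002:aab0"
echo "GRUB_CMDLINE_LINUX=\"$CMDLINE\""

# 2. After reboot, hand the reserved functions to vfio-pci for the guest
#    (needs root, so shown here as comments):
#    modprobe vfio-pci
#    echo "1002 6810" > /sys/bus/pci/drivers/vfio-pci/new_id
#    echo "1002 aab0" > /sys/bus/pci/drivers/vfio-pci/new_id
```

The exact options depend on the board (AMD systems use `amd_iommu=on`, and some chipsets additionally need an ACS override patch), so treat this as a template rather than a recipe.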
Although I never got vfio-vga passthrough to work, it works fine for me with pci-assign.
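For what it's worth, the difference between the two approaches is just which QEMU device model claims the card. A minimal sketch of the two invocations, assuming the GPU sits at host PCI address 01:00.0 (an assumption for illustration):

```shell
GPU=01:00.0                              # host PCI address of the card (assumption)
ASSIGN_DEV="pci-assign,host=$GPU"        # legacy KVM device assignment
VFIO_DEV="vfio-pci,host=$GPU,x-vga=on"   # newer VFIO path; x-vga for primary VGA

# What worked for this poster (legacy pci-assign):
echo "qemu-system-x86_64 -enable-kvm -m 8192 -device $ASSIGN_DEV"
# What the Arch thread above describes (vfio-pci):
echo "qemu-system-x86_64 -enable-kvm -m 8192 -device $VFIO_DEV"
```

pci-assign was later deprecated in QEMU in favor of vfio-pci, but at the time of this thread both paths were in active use.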
The normal gaming comp has an Nvidia 670, and the computer I was using has an APU (onboard video card). I had no issues; it runs well.
I can't imagine M$ is doing this any other way. It could be paravirtualized, but it would have to be highly optimized for the GPU they are using, and even then I don't think the performance would be good enough for the Xbox. VMware is also kind of funny about PCI passthrough; I am stuck at 5.0 with a patch that fixes VM autostart, because later versions won't let you pass through onboard devices.
Someone should write a hypervisor that is intended for this specific purpose. It will come eventually, and bare metal installs will go the way of the floppy, zip disks and all the other crap we don't use anymore. The ONLY pc in my house that isn't virtual is my laptop, and I haven't looked back.
If you kill your VM, it probably starts in either safe mode or startup repair, both of which use the fallback graphics adapter. In VMware that's the VMware SVGA adapter, which you can't remove or disable. You can enable VNC access to it with a few .vmx settings and view the MKS console that way.
add to .vmx:
RemoteDisplay.vnc.enabled = "TRUE"
RemoteDisplay.vnc.port = "5901" (enumerate this per VM, and use hostIP:port to connect)
RemoteDisplay.vnc.password = "password" (I don't think this can be blank)
A RemoteDisplay.vnc.key line will be generated automatically, so don't worry about that one.
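The per-VM port scheme above is easy to script. A tiny sketch, where the host IP and the VM index are both made-up values for illustration:

```shell
vm_index=1                   # assumption: first VM on this host
port=$((5900 + vm_index))    # matches RemoteDisplay.vnc.port = "5901" above
# Connect with any VNC client, e.g.:
echo "vncviewer 192.168.1.50:$port"   # host IP is an assumption
```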
The primary reason I use VMware instead of other hypervisors is the small footprint, and my hypervisor is also now completely headless, so ALL 6 video cards in this box can be given to VMs. M$ has got to be doing something similar. I actually "steal" the GPU from ESXi - it boots ESXi, then the screen freezes (when passthrough starts, presumably), the screen goes blank (when the VM starts), and then SteamOS is up.
(Valve required that SteamBoxes boot directly into SteamOS, so this is how I accomplished that)
I had a similar idea when KVM finally integrated virtIO and vfio into its stack. Opensource, available and it works pretty easily with very low overhead. No one has monetized it yet. My idea was to develop some kind of EMU box with different consoles available from a central menu and gamers could pick a system to play. GPU passthrough with one GPU for passthrough, and an Intel integrated graphics for the host console menu screens in 3D.
Of course, getting permission from Sega, Nintendo, Sony, Microsoft, Neo-Geo et. al. would have been impossible, but could you imagine? Sometimes I love capitalism, sometimes I hate it.
As to VMware - they have finally seen the light and support dVGA in multi-user mode using Nvidia's VGX layer. I believe it is being announced and released in the next few months, if not already. Citrix has of course already been using this technology. KVM can't, since the GPL won't allow it unless Nvidia open-sources its code - I think.
Anyway, rock on!
Instead of how it is now, where you have to "prep" the host by logging in with the credentials you intend to use.
I know this should probably be a question to Valve in a new thread, but I wanted to bump this thread as well. Must be a new set of beta testers have driven this past page 9.
Does this sound like an improvement?
I'm still very much interested in seeing if I can use VT-d to pass an iGPU through to a VM to act as the streaming host.