Direct Mode is Live

8 February - Chet
Today’s SteamVR Beta update introduces Direct Mode support for HTC Vive Pre.

SteamVR’s Direct Mode allows direct communication with the Vive’s displays, bypassing the operating system’s usual display path. This results in a more consistent experience for Vive users, as the composited view renders independently of other desktop application windows. For example, flashing displays upon headset detection, desktop applications landing on the wrong display, and fullscreen compositor warnings are all resolved by Direct Mode. Additionally, Direct Mode allows the headset to go into standby, preserving power and display life.

Starting with the most recent SteamVR Beta update, this feature is now available on HTC Vive Pre and will be available on all HTC Vive models going forward.

To enable Direct Mode, just update your drivers to 361.75 (NVIDIA) or 1.0.3.16 (AMD) or newer, then make sure you are running the SteamVR beta. See the Vive Pre Installation Guide for instructions on activating the beta.

Vive Pre has left the building!

5 February - christen


The HTC Vive Pre is now making its way to developers’ studios around the world. If you’ve been working with us to build content using the first version of our dev kit, check your inbox for the key and instructions you’ll use to upgrade that old kit to a new Vive Pre.

Then, while you’re furiously checking your email for a shipping confirmation (read: patiently waiting for its arrival), check out BrandonJLa’s Hover Junkers space requirements video for tips on how to make room for your kit at home in even the tiniest of spaces. And check out our new developer support site for all things Pre, including the new Vive Pre installation guide.


Catching up in October

8 October, 2015 - Chet
A few weeks back we were at Eurogamer Expo (EGX) in Birmingham, demoing the Vive and grabbing the Best of Show award. That was nice, but what was even better was people getting to see the versatility of the Vive. At the show, people played the seated experience of Elite Dangerous, stood with an Xbox controller for Crystal Rift, and of course experienced room scale at the HTC Vive booth.

This versatility point was also made in the talk Confusion in Virtual Reality. If there was one takeaway from that talk, it was that you don’t need a roughly 15ft by 15ft room (for the rest of the world that has fallen to the evils of the metric system, that’s about 4.6m by 4.6m) to enjoy the Vive. It can go up to that size, but it can also go down to a seated experience. The best part: it can do all of that and still give you full 360-degree tracking without occlusion.

Up next for our travels is Paris Games Week, where you can catch a demo – including some new content we will be announcing later this month.

For developers with Vives, we continue to add new content to your game library. We start this week with an updated classic from our original demos – Surgeon Simulator VR, a new version of the build from the GDC demo loop.

Next up is Minigolf VR, which is, as the name states, mini-golf – or putt-putt, or whatever you call the kind of golf that involves hitting a ball through windmills.

Lastly, we are adding the roughest, earliest game to our library: Chunks. Made by Facepunch, it follows their traditional development model – release early, listen to feedback, and update. So these are early days for Chunks; give it a try and post some feedback.

To keep up to date with the Vive truck tour, campus tour, and other events, follow SteamVR and HTC Vive on Twitter.

Photogrammetry in VR - part 1 of 3

21 September, 2015 - Cargo Cult
It's not magic, but...

There's a good chance that you already own a high-end 3D scanner, capable of capturing beautifully lit and textured real-world scenes for virtual reality.

This process, known as photogrammetry, certainly isn't magic, but it's getting close. It involves using images from a conventional digital camera to create three-dimensional scenes, exploiting the differences between photos taken from different positions and angles. There's plenty that will confuse it - and getting decent source material can be a bit of a dark art - but when everything works properly, the results in room-scale VR can be both compelling and unsettling. Being virtually transported to another, real location - and then walking around in it - still feels like something from the far future. Some of the most immersive VR experiences I've had so far have involved scenes captured in this manner.



This article is intended as a general introduction to photogrammetry for VR purposes, and as a compilation of hints and tips I've accumulated. It isn't the definitive guide, but it should help get you started and warn of possible pitfalls along the way. Go forth, and capture the world!

Software

The software I've gained the most experience with is Agisoft PhotoScan, from a small Russian company in Saint Petersburg. While it has more of a GIS background (with plenty of features geared towards surveyors and aerial photography), it has gained quite a following on the more artistic side of computer graphics.

PhotoScan has proved highly configurable and versatile, but it certainly isn't the only software of its type available - alternatives include Autodesk's cloud-based 123D Catch and Memento. Performed locally, the process requires some pretty chunky hardware for particularly advanced scenes, so expect to throw as much memory, CPU and GPU at it as you can afford. If you're careful, you can limit scene complexity to what your hardware can process in a reasonable amount of time. Although the eight-core monster at work is very much appreciated...

You may have heard of photogrammetry being used on the recent indie game The Vanishing of Ethan Carter, for instance, to capture surfaces, props and structures for a relatively traditional 3D art workflow. The developers have posted some excellent general advice online, mainly geared towards capturing individual props - whole-scene capture has some differences.

How it works

The general principle behind photogrammetry involves having at least two photographs from different angles of every point you want three-dimensional information for - it will identify visual similarities and, using maths, figure out where these similar points are located in space. This does mean it is limited to static scenes containing opaque, non-specular surfaces - as mentioned, it's not magic.
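
As a rough illustration of that 'figure out where the points are' step - this is the textbook idea rather than PhotoScan's actual implementation - here's a minimal triangulation sketch in Python. The camera matrices and pixel coordinates are invented for the example.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point seen in two photos.

    P1, P2: 3x4 camera projection matrices (intrinsics * [R|t]).
    x1, x2: (u, v) pixel coordinates of the matched feature in each photo.
    """
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null space of A - recover it via SVD.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]

# Two toy cameras a metre apart, plus one matched feature between them.
K = np.array([[1000.0, 0, 960], [0, 1000.0, 540], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (1160.0, 540.0), (960.0, 540.0)))  # ~[1, 0, 5]
```

Real photogrammetry solves for the camera positions and millions of points simultaneously, but every point ultimately comes from this kind of intersection of rays.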

The software will take your photos and, if there are enough common features between different photos, will automatically calculate all the camera positions. The process is a little like automatically stitching together photos for a panorama, only here you absolutely don't want the camera to stay in one place. Think lots of stereoscopic pairs, although it can work with most potential photo alignments.
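
If you want to see what those 'common features' look like, the sketch below uses OpenCV's ORB detector to match keypoints between two overlapping photos. This is purely illustrative - PhotoScan uses its own detector and matcher internally - and the filenames are placeholders.

```python
import cv2

# Two overlapping photos of the same wall (placeholder filenames).
img1 = cv2.imread("wall_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("wall_02.jpg", cv2.IMREAD_GRAYSCALE)

# Detect keypoints and compute descriptors.
orb = cv2.ORB_create(nfeatures=5000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Brute-force match the descriptors, strongest matches first.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
print(len(matches), "candidate correspondences")

# Save a preview of the best 100 matches as a sanity check.
preview = cv2.drawMatches(img1, kp1, img2, kp2, matches[:100], None)
cv2.imwrite("matches_preview.jpg", preview)
```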

Once it's aligned all the cameras (or not) and generated a sparse point cloud, you then get it to produce a dense point cloud - potentially hundreds of millions of points if you've overdone the quality settings. From that you can generate a mesh - with potentially millions of triangles - then get it to create textures for that mesh. (We've been able to render some surprisingly heavy meshes in VR!)
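
For what it's worth, that whole align / dense cloud / mesh / texture chain can also be driven from PhotoScan's built-in Python console (Pro edition only). The sketch below is from memory of the 1.x scripting API, so treat the exact names and parameters as assumptions that may differ between versions, and the paths as placeholders.

```python
import glob
import PhotoScan  # only available inside PhotoScan Pro's scripting console

doc = PhotoScan.app.document
chunk = doc.addChunk()
chunk.addPhotos(glob.glob("capture/*.jpg"))  # placeholder folder

# Align the cameras and build the sparse point cloud.
chunk.matchPhotos(accuracy=PhotoScan.HighAccuracy,
                  preselection=PhotoScan.GenericPreselection)
chunk.alignCameras()

# Dense cloud, then mesh, then texture.
chunk.buildDenseCloud(quality=PhotoScan.MediumQuality)
chunk.buildModel(surface=PhotoScan.Arbitrary,
                 source=PhotoScan.DenseCloudData,
                 face_count=PhotoScan.HighFaceCount)
chunk.buildUV(mapping=PhotoScan.GenericMapping)
chunk.buildTexture(blending=PhotoScan.MosaicBlending, size=8192)

doc.save("scene.psz")
```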



Hardware

I've been using some relatively humble equipment - a Canon EOS 7D digital SLR for most things, usually with the 17-55mm f/2.8 EF-S lens. For low-light stuff a tripod is really helpful, but I've got some surprisingly good results from just hand-held shots. You want as deep a depth of field as possible (f/11 seems a decent compromise between depth of field and sharpness), with as low an ISO as you can achieve (to stop sensor noise from swamping details).
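
If you want to put numbers on the depth-of-field advice, the standard hyperfocal distance formula is a handy check. The sketch below assumes a circle of confusion of roughly 0.019mm for an APS-C sensor like the 7D's - adjust to taste.

```python
def hyperfocal_mm(focal_length_mm, aperture_f, coc_mm=0.019):
    """Focus here and everything from about half this distance to
    infinity is acceptably sharp (standard hyperfocal formula)."""
    return focal_length_mm ** 2 / (aperture_f * coc_mm) + focal_length_mm

for f in (10, 17, 55):          # focal lengths in mm
    h = hyperfocal_mm(f, 11.0)  # f/11, APS-C assumed
    print(f"{f}mm at f/11: focus at ~{h / 1000:.1f}m, "
          f"sharp from ~{h / 2000:.1f}m to infinity")
```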

Any decent digital SLR with a sharp lens should work well - for full VR scenes, a wide-angle lens can be incredibly helpful, making it far easier to capture the scene with a reasonable number of photos in a limited amount of time, while reducing the chance you've forgotten to get coverage of some vital area. The wider the lens, the trickier it can be to get a decent calibration, however - a Sigma 10-20mm lens of mine suffers from severe inaccuracy in the corners relative to a typical lens model, while a Canon 10-22mm lens is a fair bit better. The 17-55mm lens mentioned earlier, while a fair bit narrower at the wide end, has all its distortion corrected away - giving great all-round results at the expense of more photos being required. (It's also stabilised, making hand-held shots much easier in lower light or with a lower ISO.) Images from GoPros can be used too, but expect to fiddle around with lens calibration a lot.

I've had some fantastic results from a borrowed 14mm cinema lens on an EOS 5D Mark III - quite implausible hardware for general use, but stupendously wide angle (14mm on full-frame!) and with its minimal distortion being fully corrected away.

The process should work with photos taken with different lenses and/or cameras - for a room-scale scan I can do an overall capture with a super-wide-angle lens to make sure I've got full coverage, before going close-up on individual objects. Be sure to minimise the lens differences to those you actually need - if you have a zoom lens, make sure you're at fixed focal lengths (taping down the zoom ring is extremely useful!) or you'll end up with many different 'lenses' with different, more error-prone calibrations. Switching to manual focus and staying at a particular focus distance can also eliminate focus 'breathing'. (Make sure you switch back again afterwards!)
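
One cheap way to catch a drifting zoom ring before you burn hours of processing: skim the EXIF focal lengths of the shoot before importing it. A quick sketch using Pillow - the folder path is a placeholder.

```python
import glob
from collections import Counter

from PIL import Image
from PIL.ExifTags import TAGS

def focal_length(path):
    """Return the EXIF focal length of a photo, or None if it's missing."""
    exif = Image.open(path)._getexif() or {}
    named = {TAGS.get(tag_id, tag_id): value for tag_id, value in exif.items()}
    return named.get("FocalLength")

counts = Counter(focal_length(p) for p in glob.glob("capture/*.jpg"))
print(counts)  # ideally a single focal length - several means the zoom moved
```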

(A note on the fixed focal length issue: over around an hour and a half I captured a fantastically detailed scene in Iceland with my super-wide-angle lens. Since I was shooting in HDR, the camera was on a tripod, and I was moving the tripod and camera for each set of shots. For whatever reason, the zoom ring on the lens slowly drifted from 10mm to 14mm - the tripod meant I wasn't doing my usual hold-zoom-at-widest-with-my-left-hand trick. I managed to rescue the captured scene by creating an unwholesomely large number of lens calibrations in PhotoScan, evenly distributed across the shots - luckily the scene had enough photogrammetric detail in it to make this acceptable - but it still isn't as high quality as it could have been. So yes, even if you know about the potential problem - tape the zoom ring in place anyway. It could save you from a lot of frustration later.)

Continued in part two...

Photogrammetry in VR - part 2 of 3

21 September, 2015 - Cargo Cult
Process

Since photogrammetry works by identifying small details across photos, you want shots to be as sharp and as free from sensor noise as possible. While less important for really noisy organic environments (rocky cliffs scan fantastically well, for example), it can be possible to get decent scans of seemingly featureless surfaces like painted walls if the photos are sharp enough. If there's enough detail in the slight roughness of that painted surface, the system will use it. Otherwise, it'll break up and potentially not scan at all - do expect to perform varying amounts of cleanup on scans. Specular highlights and reflections will horribly confuse it. I've read of people scanning super-shiny objects like cars by sprinkling chalk dust over the surfaces - you won't get good texture data that way, but you will get really good geometry.
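
It can be worth culling soft frames before they ever reach the software. The variance-of-the-Laplacian trick below is a common blur heuristic (nothing to do with PhotoScan itself), and the threshold is something you'd tune for your own camera and scenes.

```python
import glob
import cv2

BLUR_THRESHOLD = 100.0  # tune per camera/scene; lower variance = softer image

for path in sorted(glob.glob("capture/*.jpg")):
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    sharpness = cv2.Laplacian(grey, cv2.CV_64F).var()
    if sharpness < BLUR_THRESHOLD:
        print(f"possibly soft - consider reshooting: {path} ({sharpness:.0f})")
```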

Think of topologically simple, chunky scenes to scan. Old buildings, ruins, tree trunks, rocky landscapes, ancient caves and broken concrete all scan brilliantly - while super-fine details, reflective, shiny or featureless surfaces tend to break down. Do experiment to see what works and what doesn't. Scenes that are interesting in VR can be quite small - lots of layers of parallax and close-up detail can look fantastic, while giant scenes (stadiums, huge canyons and the like) can be oddly underwhelming. Think room-scale!

Scanning the scene is best done in a robotic, capture-everything manner. For a wall, that can involve positioning yourself so the camera direction is perpendicular to the wall, ensuring you have its whole height in the viewfinder. Take a picture, take a step to the side, then repeat until you run out of wall. You can then do the same from a different height or from different angles (remember those stereoscopic pairs), making sure to get plenty of overlap between images when moving to 'new' areas, and taking plenty of photos when going around corners. 'Orbiting' objects of high detail can be useful - make sure to get photos from many angles, with adjacent photos being quite similar in appearance. If anything, aim to capture too much - better that than being clever and missing something. Expect a couple of hundred photos for a detailed scene. (It's fine to use 'Medium' or even 'Low' when generating your dense point cloud - super-fine geometry isn't necessarily what you need for a full scene scan. Dense point clouds with too many points also require phenomenal amounts of memory to build meshes from...)
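
If you'd like a rough rule for how big that 'step to the side' can be, it's just camera geometry: the width of wall in frame is 2 × distance × tan(FOV/2), and successive frames should overlap by well over half. The 70% figure below is my own rule of thumb, not an official number.

```python
import math

def step_size_m(distance_m, horizontal_fov_deg, overlap=0.7):
    """Sideways step between shots of a flat wall for a given overlap."""
    footprint = 2 * distance_m * math.tan(math.radians(horizontal_fov_deg) / 2)
    return footprint * (1 - overlap)

# e.g. standing 3m from a wall with a ~70 degree horizontal field of view
print(f"step roughly {step_size_m(3.0, 70):.2f}m between shots")
```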



Most of the work I've done so far has involved low dynamic range imagery (which makes hand-held photography possible) - it's still possible to capture full lighting from a scene with careful manual exposure settings. Shooting RAW and then dragging the shadows and highlights into view in Lightroom can help a lot here - it's not quite HDR, but it helps. Being able to fix chromatic aberration in software is also extremely useful, removing those separated colours which look so awful in texture maps. Export full-resolution JPEGs for processing, making sure they're all of the same orientation (portrait or landscape) - otherwise the software may decide they're from different cameras. Don't crop, distort or do anything else too drastic. One trick is to pull the highlights and shadows in for the set of images used for photogrammetric purposes, then re-export with more natural lighting for the texture generation stage.



A full high dynamic range workflow is possible in Agisoft PhotoScan. Capture bracketed shots on your camera, then merge them to OpenEXR - you can later have the software export textures to OpenEXR for tonemapping at runtime, giving full, real-world HDR lighting. You do need a tripod, and there's a lot more data involved - but you'll potentially get useful-for-photogrammetry detail from both shadows and highlights. For one scene I captured, geometry was generated from HDR images and the textures from carefully manicured TIFFs from Lightroom. Look into Nurulize's work relating to HDR photogrammetry capture.
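
Merging the bracketed shots doesn't have to happen in Photoshop or Lightroom, either - it can be scripted. A minimal sketch using OpenCV's Debevec merge follows; the filenames and exposure times are placeholders, and writing .exr assumes your OpenCV build has OpenEXR support enabled.

```python
import cv2
import numpy as np

# Bracketed exposures of the same tripod-mounted shot (placeholders).
files = ["bracket_-2ev.jpg", "bracket_0ev.jpg", "bracket_+2ev.jpg"]
times = np.array([1 / 125, 1 / 30, 1 / 8], dtype=np.float32)  # seconds

images = [cv2.imread(f) for f in files]

# Recover the camera response curve, then merge to a linear HDR image.
response = cv2.createCalibrateDebevec().process(images, times)
hdr = cv2.createMergeDebevec().process(images, times, response)

cv2.imwrite("bracket_merged.exr", hdr)  # needs OpenEXR support in OpenCV
```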

If you have something which appears in photos when it shouldn't (for example cars, walking pedestrians, bits of Mars rover, astronauts), it is possible to use image masks to eliminate them from consideration. Unwanted features such as lens flares and raindrops on the lens can also be removed this way.
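
Masks are just ordinary images painted over the regions to ignore - drawn by hand in an image editor, or scripted for rough cases. A toy sketch with OpenCV follows; the filename and rectangle are made up, and the black-means-excluded convention is an assumption worth checking against your PhotoScan version before batch-generating anything.

```python
import cv2
import numpy as np

photo = cv2.imread("rover_042.jpg")  # placeholder filename
h, w = photo.shape[:2]

# Start with an all-white mask (keep everything)...
mask = np.full((h, w), 255, dtype=np.uint8)

# ...then black out a rectangle over the unwanted object. The coordinates
# here are invented; in practice you'd paint or detect the region.
cv2.rectangle(mask, (1200, 800), (2400, 1600), color=0, thickness=-1)

cv2.imwrite("rover_042_mask.png", mask)  # name it to match the photo
```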


Source imagery: NASA / JPL-Caltech / MSSS

Cleanup

One almost-unknown feature in Agisoft PhotoScan that's proved incredibly useful is its ability to project photos on to another mesh. I've cleaned up a number of rough architectural scans by essentially tracing around the noisy geometry in a modelling program, then reimporting the super-clean mesh into PhotoScan as an OBJ. Since I kept the scale and alignment the same, I could get it to reproject the photos on to my new UVs - resulting in low-poly, super-clean models preserving all the light and texture present in the photos. (Things which scan very badly, such as featureless flat surfaces, can be the easiest to model!)

This also gives the opportunity to obtain geometry in different ways. Kinect-based scans, Lidar scans, geometry from CAD models - so long as the scale and shape is appropriate, you'll be able to overlay high-resolution photo-sourced texturing this way. You won't get specular maps without a lot of extra work, but the sensible UVs will make them much easier to produce...



A frustrating thing that can happen when scanning a scene is having a seemingly perfect dense point cloud dissolve away into thin, stringy blobs - or even nothing at all. Office furniture suffers badly from this, as do other thin or detailed surfaces. A workaround can involve generating sections of mesh separately, cleaning each one up, then compositing them all together into a single mesh for final cleanup.

Natural scenes can really benefit from a simple, quick trick I found - putting a sphere or cylinder around the scene, then using PhotoScan's project-to-existing-UVs feature to project imagery on to it. While not necessarily 'correct', the human brain seems to appreciate being fully immersed in colour and detail - an incredibly quick scan I did of a redwood forest became quite compelling this way. Scanned geometry is limited to the ground and tree trunks in the foreground - everything else is flat and 'fake'. For large-scale terrains, a simple-geometry version in the background can look great, with effectively a photosphere for the sky and more distant landscapes.
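
The surrounding sphere can be any simple mesh with sensible UVs - modelled in seconds in a 3D package, or generated with a few lines of script. Here's a rough sketch that writes an inward-facing UV sphere as an OBJ ready to import; the radius and resolution are arbitrary.

```python
import math

def write_backdrop_sphere(path, radius=50.0, stacks=16, slices=32):
    """Write a UV-mapped sphere as an OBJ for use as a scene backdrop."""
    verts, uvs, faces = [], [], []
    for i in range(stacks + 1):
        theta = math.pi * i / stacks            # 0 at the top, pi at the bottom
        for j in range(slices + 1):
            phi = 2 * math.pi * j / slices
            verts.append((radius * math.sin(theta) * math.cos(phi),
                          radius * math.cos(theta),
                          radius * math.sin(theta) * math.sin(phi)))
            uvs.append((j / slices, 1 - i / stacks))
    for i in range(stacks):
        for j in range(slices):
            a = i * (slices + 1) + j + 1        # OBJ indices are 1-based
            b = a + slices + 1
            # Two triangles per quad - flip the winding if the normals
            # end up facing outward rather than into the sphere.
            faces.append((a, a + 1, b))
            faces.append((a + 1, b + 1, b))
    with open(path, "w") as obj:
        obj.writelines(f"v {x} {y} {z}\n" for x, y, z in verts)
        obj.writelines(f"vt {u} {v}\n" for u, v in uvs)
        obj.writelines(f"f {a}/{a} {b}/{b} {c}/{c}\n" for a, b, c in faces)

write_backdrop_sphere("backdrop_sphere.obj")
```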



Continued in part three...