Absolutely! Everything you make with Zephyr can be used for both commercial and non-commercial purposes! You don't owe us any royalties! (But your actors might :P )
Zephyr Lite exports meshes to .ply, .obj and .stl (so you can easily import them in Blender!) while the other versions offer a few other formats. What do you mean by "virtual reality"? I don't think we mention that on our website, but I might be wrong.
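For reference, importing one of those exports into Blender can also be scripted. A minimal sketch, assuming Blender's legacy 2.8x/3.x import operators and a placeholder file path:

```python
# Minimal sketch: importing a Zephyr Lite export into Blender from a script.
# Operator names are the legacy ones from Blender 2.8x-3.x; newer builds
# expose equivalent operators under bpy.ops.wm (e.g. obj_import).
import bpy

# Hypothetical path -- point it at whatever Zephyr exported.
bpy.ops.import_scene.obj(filepath="/path/to/zephyr_export.obj")

# The same idea works for the other Lite formats:
# bpy.ops.import_mesh.ply(filepath="/path/to/zephyr_export.ply")
# bpy.ops.import_mesh.stl(filepath="/path/to/zephyr_export.stl")
```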
You can set a vertex limit when extracting the mesh and the textured mesh. Textures are not unwrapped though, and we're seriously thinking about bringing the UV remapping feature from Pro to Lite. Not a promise, but something we're discussing a lot lately!
The Lite version allows you to use 200 photos (or 200 frames extracted from videos). That usually covers most subjects :) As a reference, the voodoo doll in the video was made from around 60 pics and the full human figure from 170 pics, iirc. I don't have any software to recommend for taking the video (but if you do shoot video, we suggest using 4K when possible; pictures, especially from a DSLR, are much, much better). Just know that Zephyr extracts the frames automatically in an intelligent way (it discards blurred frames and frames that are too similar).
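For illustration, the "discard blurred, discard too similar" idea can be approximated outside Zephyr with OpenCV: a variance-of-Laplacian sharpness test plus a simple frame-difference check. This is only a sketch of the general technique, not Zephyr's actual frame extractor, and the thresholds are arbitrary assumptions:

```python
# Illustration only: pick frames that are sharp and sufficiently different
# from the previously kept frame. Thresholds need tuning per video.
import cv2

def extract_frames(video_path, blur_thresh=100.0, diff_thresh=25.0, max_frames=200):
    cap = cv2.VideoCapture(video_path)
    kept, prev_gray = [], None
    while len(kept) < max_frames:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Sharpness check: low variance of the Laplacian = blurry frame.
        if cv2.Laplacian(gray, cv2.CV_64F).var() < blur_thresh:
            continue
        # Similarity check: skip frames too close to the last kept one.
        if prev_gray is not None and cv2.absdiff(gray, prev_gray).mean() < diff_thresh:
            continue
        kept.append(frame)
        prev_gray = gray
    cap.release()
    return kept
```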
The Steam version includes all 2.x releases, whereas the website version gives you 1 year of updates - meaning that when we release 3.0, if you are still within your 1 year of updates, you get that too. The Steam version has cards ( :P ) and often gets big discounts. The Steam version can be launched offline. The non-Steam version can be launched offline too, but requires regular online checks (i.e. once a month). There are pros and cons to both of them.
It's a perpetual license :)
Hope I answered all your questions; feel free to ask for more details! Also, remember that you can get a free 14-day trial of the Lite version with no registration directly on 3dflow.net (or just ask me for a Pro trial if you're interested in the other features).
I'm not too concerned about UV unwrapping, because once you're in Blender there are tools for marking seams and unwrapping a model, or just doing a smart unwrap without any work at all. In most cases I'll either use the model as-is because it's a real-life object, or I'll be fine unwrapping it and generating new UVs.
I'm not too concerned about the polycount either, since Blender has the Decimate modifier, but it's nice to see that it's possible to set a limit.
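Both of those Blender steps can also be scripted. A minimal bpy sketch, assuming a recent Blender (2.9x/3.x), the imported photogrammetry mesh as the active object, and a placeholder decimation ratio:

```python
# Sketch of the Blender-side cleanup mentioned above: reduce the polycount
# with a Decimate modifier, then do a quick Smart UV Project unwrap.
import bpy

obj = bpy.context.active_object

# Decimate to ~25% of the original face count (the ratio is arbitrary here).
mod = obj.modifiers.new(name="Decimate", type='DECIMATE')
mod.ratio = 0.25
bpy.ops.object.modifier_apply(modifier=mod.name)

# Quick automatic unwrap (defaults are fine for a rough pass).
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
bpy.ops.uv.smart_project()
bpy.ops.object.mode_set(mode='OBJECT')
```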
Is there any upgrade discount for 2.x buyers for 3.x and so on?
MASKING.
Most of our time is being spent taking pictures and getting random garbage for models because we can't mask off the object from the background. I know that a PERFECT set of images, like the demo, doesn't need masking to produce a model... but we've produced ~10 models that are recognizable but are also partially fused with the background and appear to be melting.
Masking is optional in most cases (unless you obviously want to reconstruct something that touches the ground, like the bottom side of the statue seen in the demo, or that is incoherent with the background, such as turntable data). The cherub statue from tutorial #1 is a good example of how to take object pictures without needing masking at all.
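If masking does become necessary, the masks are just per-image black-and-white silhouettes. As a rough illustration (not Zephyr's own masking tool), one way to generate them with OpenCV is to key out a uniform backdrop; the HSV thresholds below are guesses for a greenish surface:

```python
# Rough illustration: build a per-image binary mask by keying out a uniform
# coloured backdrop. Threshold values are assumptions and need tuning.
import cv2
import numpy as np

def make_mask(image_path, mask_path):
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Pixels inside this range are treated as background.
    backdrop = cv2.inRange(hsv, np.array([35, 40, 40]), np.array([85, 255, 255]))
    mask = cv2.bitwise_not(backdrop)  # white = subject, black = background
    # Clean up speckles so the silhouette is a solid blob.
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
    cv2.imwrite(mask_path, mask)
```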
May I ask what your subject is and what camera you're using?
If you can also share the dataset with us (and possibly the .zep), I can give you some more pointers. Obviously you need good pictures, as they're the input data.
Also, if you are using a turntable, try using the autocompute all option; that can really improve the speed of your workflow!
The photos were taken with some random expensive DSLR camera. I don't know what it is, but each photo was around 2 MB, I think.
The dataset is gone because the photos were taking up so much space :D It was 65 photos total at 3 heights / angles of a little black cube, and the software gave us what looked like a smudge of black paint against the green surface.
Is CUDA just not yet optimized for the Titan X? I'm surprised people can use this software in any sort of workflow if top-end GPUs (until the end of this month, anyway) take this long.
The shortest time, with good photos, was about 40 minutes. And that was still 100% CPU and lots of CUDA calls. Impressive software, but sooo expensive (computationally)!
One solution is to use a projector to project a random pattern onto the subject (although it's a pain, since either you lose the texture or you have to take two pictures, projected pattern on and projected pattern off, and switch them after camera orientation).
So it makes sense that you got that "melted" appearance. Another easy solution would be to put a newspaper or something below the bucket, so that camera orientation is easily done, or to use the shape-from-silhouette algorithm (Pro only though, and it requires masking).
It is a very resource-demanding application indeed, and often two (or more =) ) mid-range video cards are actually better than one high-end card (although the Lite version can use only one card, tops - a limitation I'm pushing to remove from the Lite version, but it looks like it's not going to happen soon).
It takes a while to master the acquisition phase of photogrammetry, which is why we're always happy to help and have a look at datasets, so we can give a few pointers on how to reconstruct certain subjects!
http://www.amazon.com/Charging-Charger-Adapter-Activated-Smartphones/dp/B00M26DQBK?ie=UTF8&psc=1&redirect=true&ref_=oh_aui_detailpage_o06_s00
I thought it being simple would make it easier, but I suppose to your software it's an amorphous blob of black.
Do you think lighting would make a difference? If light strikes an object, there's going to be a very defined gradient along the object. Will that help?
Try to avoid direct light (to avoid highlights) and if possible try to "dirty" the surface with something. Some uniform objects can be done, but it is not easy.
If you can share the dataset with me, I can give you some more tailored hints on how to shoot your object!
Cool stuff, but the trial was just lacking enough in the previously mentioned features to leave us unsure within the allotted time.