Liftoff

JL2579 Sep 21, 2017 @ 4:18pm
Enlarged Fisheye Camera Mode
Hey Devs,
I just purchased Liftoff and first off have to say, the sim is amazing. The physics are spot on, and I have the same Vortex drone IRL, so this will definitely help me to finally become a good FPV pilot ;D
It's great that you recently added Fisheye, since the edge distortions of central projection at high FOV are irritating and very different from real FPV lenses. The current fisheye mode has a downside though, in that it introduces black borders at the corners. Could you please add a second Fisheye mode where the Fisheye image is simply enlarged so that there are no black borders anymore? Then the image would feel truly the same as on my 10 inch FPV monitor. Thanks in advance, I hope it's going to be easy to implement!
Showing 1-13 of 13 comments
LuGus Studios  [developer] Sep 21, 2017 @ 11:10pm 
Hi, thanks for the kind words :)

This blog post explains why we have the black border; it's by no means an intentional design decision.
http://www.liftoff-game.com/2017/08/31/update-0-10-12/
JL2579 Sep 22, 2017 @ 12:49am 
I read the post; it explains why you guys added Fisheye, and I do enjoy it for the reasons stated there. I don't think you quite understood what I meant, though, so I made a screenshot comparison here:
https://i.gyazo.com/6c2778f50f0dbfad5eac81fd1343b8ce.jpg

Basically, I know how linear projection works: the center part of a high-FOV image is the same as an image from the same camera at a lower FOV, as highlighted in 1 and 2.
I know that the fisheye in Liftoff is a post-processing effect that squeezes the corners and enlarges the center (hence the noticeable pixelation in the center on low-res screens at high Fisheye FOV) to reduce distortions there, and that this is what adds those black borders.
I just want to be able to toggle to a third Fisheye mode that shows only the highlighted center part of the Fisheye image, like example 4
(or maybe get rid of the current one right away, I don't know if many people are fond of wasted black screen space ;) )
You can see that it is still fisheye (the bars are still curved) and there are no black borders; nevertheless it feels much more natural to me than linear, while still retaining a wide side view.

Last edited by JL2579; Sep 22, 2017 @ 1:13am
JL2579 Sep 22, 2017 @ 1:12am 
By the way, after some testing I noticed that you guys are fixing the horizontal field of view and scaling the vertical part of the image accordingly (Vert- method), while most games use the Hor+ scaling method (fixing the vertical FOV and scaling the horizontal part of the image depending on your screen ratio). As you stated, "We feel this is more intuitive and matches the values shown on actual cameras better."

IMO measuring the diagonal would make the most sense, since the diagonal FOV would not change if you switched to Fisheye and scaled the image as suggested in the previous post!

In case anyone doesn't know what I am talking about, Wikipedia explains it well:
https://en.wikipedia.org/wiki/Field_of_view_in_video_games

Last edited by JL2579; Sep 22, 2017 @ 1:18am
TheCodeHippy  [developer] Sep 22, 2017 @ 2:39am 
Hi JL2579,

I understand what you mean, and this scaling is what we used for a long time during development. However, there are some severe drawbacks to scaling the effect.
Perhaps first some technical background. Our fisheye is a post-processing effect: we start from the image as it would be rendered normally and then deform it. That means the pixels we have in the original are all we can use for the final result.

- Reduced resolution
Normally, fisheye gives the least distortion around the center. However, this is only true if you go from the real world directly to fisheye. We go from a high-FOV image to fisheye. High FOV greatly reduces the size of the middle of the screen, which means fewer pixels to work with.
When we apply our fisheye, the middle of the screen is blown up to get the correct result. This sadly means that things become more "pixelated" (imagine a 10x10 block of pixels blown up to 30x30).
This pixelation is already noticeable at high FOV values. If we were to further blow up the whole image to crop out the black borders, this pixelation would (and did in our development tests) get much worse, to the point of being almost unplayable at 120 degrees FOV.
A solution to this would be to do super-sampling: rendering the initial image at a higher resolution. This would however have a big performance hit, and most of the extra pixels would be cropped out. It's something we can consider in the future, but for now we want the effect to be light-weight.

- Decreased visibility
Odd as it might seem, cropping the image after applying fisheye takes away a lot of the visibility you would expect to gain. In your example you'll notice that in the middle of the screen you'll only see a little bit more in 120 degrees cropped fisheye than with 80 degrees regular rendering. This means you'd be more inclined to go to higher FOV, which would drastically lower the resolution again.

As stated above, there are some solutions, but sadly none of them is perfect. One is to do super-sampling, which would bring a rather large performance overhead. The other is to apply the effect not as post-processing (manipulating the original image) but as some fancy shader magic that distorts the actual shape of objects. This would cause a whole bunch of other issues: straight lines would stay straight instead of curving, and we would have to change every shader in the game.

We want to strike a balance between keeping the effect lightweight and having it still look correct. It could be that we revisit this in the future, but for now we are happy with it.

Edit: As a side note, there is another technique where multiple cameras are used to create a sort of 360-degree view around the drone (Link here[strlen.com]). That information is then stitched together and used to create a fisheye image (going up to 360 degrees and higher -- yes, higher). However, this has the same issue as super-sampling: it would cause a huge performance hit.
Last edited by TheCodeHippy; Sep 22, 2017 @ 2:44am
JL2579 Sep 22, 2017 @ 5:41am 
Hey, thanks for the quick response! I have actually seen that Quake page before; indeed this is fun! All the points you mentioned are valid concerns and I understand that you don't want to implement this yet.

However, I sat down to calculate the exact magnitudes of the distortions and problems. Right now I am assuming that you apply the following transformation in normalized coordinates (x, y in [-1,1]) after rendering a frame in linear projection, where alpha is the FOV:

x' = x* 2 * arctan( sqrt(x²+y²) * tan( alpha / 2) ) / ( alpha * sqrt(x²+y²) )
y' = y* 2 * arctan( sqrt(x²+y²) * tan( alpha / 2) ) / ( alpha * sqrt(x²+y²) )
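
For reference, here is a minimal NumPy sketch of how I imagine this remap could be implemented as a post-processing step, inverting the transformation above for every output pixel. The nearest-neighbour sampling and the square normalized coordinates are just my simplifications; this is not meant to be how Liftoff actually does it:

```python
import numpy as np

def fisheye_warp(img, fov_deg):
    """Warp a rectilinear render into an equidistant fisheye image by
    inverting r' = 2*arctan(r*tan(alpha/2)) / alpha for every output pixel
    (nearest-neighbour sampling; pixels with no source stay black)."""
    h, w = img.shape[:2]
    alpha = np.radians(fov_deg)

    # Normalised destination coordinates in [-1, 1] for every output pixel.
    yy, xx = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    r_dst = np.hypot(xx, yy)

    # Invert the forward mapping: r_src = tan(r_dst * alpha/2) / tan(alpha/2).
    r_src = np.tan(r_dst * alpha / 2) / np.tan(alpha / 2)
    scale = np.divide(r_src, r_dst, out=np.ones_like(r_dst), where=r_dst > 0)
    x_src, y_src = xx * scale, yy * scale

    # Sample the source image; destination pixels with no source (the black
    # corners) are left untouched.
    valid = (r_dst * alpha / 2 < np.pi / 2) & (np.abs(x_src) <= 1) & (np.abs(y_src) <= 1)
    cols = np.clip(np.round((x_src + 1) / 2 * (w - 1)), 0, w - 1).astype(int)
    rows = np.clip(np.round((y_src + 1) / 2 * (h - 1)), 0, h - 1).astype(int)
    out = np.zeros_like(img)
    out[valid] = img[rows[valid], cols[valid]]
    return out
```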

this magnifies the center of the image by

2/alpha * tan( alpha/2)

as plotted here:
https://www.wolframalpha.com/input/?i=plot%5B(2*tan(x*pi%2F360))%2F(x*pi%2F180)+%5D+from+0+to+150

here are some example values:

FOV magnification
80° 1.2
100° 1.37
120° 1.65
130° 1.89
140° 2.25
145° 2.51

A standard analog FPV setup has a resolution of 480p, so a typical Full HD screen is 2.25 times larger; in other words, up to 140° FOV you still get the same effective resolution as with the real-world FPV setup. Setting this as the limit seems reasonable, I guess.
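
In case anyone wants to double-check these numbers, here is a tiny sketch that reproduces the table and the 140° figure; it just evaluates the magnification formula above, nothing Liftoff-specific:

```python
from math import tan, radians

# Centre magnification of the fisheye post-process: m(alpha) = 2/alpha * tan(alpha/2)
for fov in (80, 100, 120, 130, 140, 145):
    alpha = radians(fov)
    print(f"{fov:3d} deg -> {2 / alpha * tan(alpha / 2):.2f}")
# 140 deg gives ~2.25, i.e. the 1080p / 480p line-count ratio used as the limit above.
```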

The corners [+-1,+-1] of the image in the previous transformation get transformed to

s * [+-1,+-1], where s = sqrt(2) * arctan( sqrt(2) * tan( alpha/2 ) ) / alpha

so if we scale the image up by 1/s, the center of the frame gets scaled up by a total of

sqrt(2) * tan ( alpha /2 ) / ( arctan( sqrt(2) * tan( alpha/2 ) ) )

which is plotted here:

https://www.wolframalpha.com/input/?i=plot%5Bsqrt(2)*tan(x*pi%2F360)%2F(arctan(sqrt(2)*tan((x*pi%2F360))))+%5D+from+0+to+150

FOV magnification
80° 1.36
100° 1.63
120° 2.07
125° 2.23
130° 2.42
145° 3.32

If we apply the same limit of 2.25 as before, the maximum allowed FOV becomes 126°, which is definitely good enough for most people; I usually fly with 120, and this is a higher limit than most first-person games or sims allow. So I don't think the additional scaling would really be that bad!
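
And a corresponding sketch for the scaled-up "no black borders" case, which also bisects for the FOV where the 2.25 limit is hit:

```python
from math import sqrt, tan, atan, radians

def mag_noblack(fov_deg):
    """Centre magnification after also scaling the image up so that the corners
    of the fisheye reach the screen corners:
    sqrt(2)*tan(alpha/2) / arctan(sqrt(2)*tan(alpha/2))."""
    t = sqrt(2) * tan(radians(fov_deg) / 2)
    return t / atan(t)

for fov in (80, 100, 120, 125, 130, 145):
    print(f"{fov:3d} deg -> {mag_noblack(fov):.2f}")

# Bisect for the FOV at which the magnification reaches the 2.25 limit (~126 deg).
lo, hi = 100.0, 140.0
while hi - lo > 1e-6:
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if mag_noblack(mid) < 2.25 else (lo, mid)
print(f"2.25 is reached at ~{lo:.1f} deg")
```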

Additionally, you guys could try out the following:

try how an independent per-axis scaling that keeps the corners in the corners looks; I'm not sure whether it ends up totally distorted or not, I can try to simulate it myself later:

x' = 2 * arctan( x * tan( alphax / 2) ) / ( alphax )
y' = 2 * arctan( y * tan( alphay / 2) ) / ( alphay )

where alphay = 2*arctan( aspect_ratio * tan( alphax/2 ) ), aspect_ratio is height/width, and alphax is the horizontal FOV you have set.

Or you could try the following and see if it has much of a performance hit (I don't think so):
Assuming FOV is set to alpha:
For each frame, render a picture at FOV alpha, scale it up by tan(alpha/2)/tan(alpha/4), superimpose another one taken at FOV alpha/2 in the middle, then apply Fisheye. You now have increased resolution at the center by rendering just 2 frames instead of 1. Of course this costs a bit of performance, but since neither the physics engine nor the camera position nor the graphics buffers would have to change, I assume it would cost less than a 20% fps hit. But please feel free to prove me wrong, I am by no means a graphics expert^^
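
If it helps, here is a rough NumPy sketch of that superimposing step, under the assumption that both renders are rectilinear, share the same camera position and image centre, and only differ in FOV (purely illustrative; the nearest-neighbour resize is just to keep it short):

```python
import numpy as np
from math import tan, radians

def upscale_nn(img, new_h, new_w):
    """Nearest-neighbour resize, good enough for a quick test."""
    h, w = img.shape[:2]
    rows = (np.arange(new_h) * h) // new_h
    cols = (np.arange(new_w) * w) // new_w
    return img[rows][:, cols]

def composite_center(wide, narrow, wide_fov_deg, narrow_fov_deg):
    """Paste a narrow-FOV render over the centre of an upscaled wide-FOV render.
    In rectilinear projection the narrow frame covers the central fraction
    tan(narrow/2) / tan(wide/2) of the wide frame (per axis), so the wide image
    is enlarged until that region matches the narrow image pixel-for-pixel.
    The fisheye remap would then be applied to the returned canvas."""
    frac = tan(radians(narrow_fov_deg) / 2) / tan(radians(wide_fov_deg) / 2)
    nh, nw = narrow.shape[:2]
    th, tw = int(round(nh / frac)), int(round(nw / frac))
    canvas = upscale_nn(wide, th, tw)
    top, left = (th - nh) // 2, (tw - nw) // 2
    canvas[top:top + nh, left:left + nw] = narrow
    return canvas
```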
Last edited by JL2579; Sep 22, 2017 @ 9:59am
TheCodeHippy  [developer] Sep 22, 2017 @ 7:24am 
Wow, thanks for the in-depth reply.

I get most of what you are saying, but I have some questions and remarks:

Originally posted by JL2579:
this magnifies the center of the image by
2/alpha * tan( alpha/2)
How did you get to this and how do you define the center of the image? Is it the center pixel, or a small area in the middle?

I agree with the results; however, we're not exactly sure about your conclusion of using 480p as a threshold. One of the issues is that the magnification isn't uniform: it's concentrated in the middle of the screen. This gives a (perceived) worse result than a consistent 480p image. While this is of course partially down to preference, the team here agreed that it wasn't acceptable.

As for your suggestion of superimposing a second image on the center: this is something we have considered, but due to time constraints weren't able to explore.
JL2579 Sep 22, 2017 @ 8:10am 
This is the theoretical magnification right at the center, defined by the derivative
dr'/dr at r = 0
= d( 2*arctan( r * tan( alpha/2 ) ) / alpha )/dr at r = 0
= 2/alpha * tan(alpha/2) * 1/( 1 + r²*tan²(alpha/2) ) at r = 0
= 2/alpha * tan(alpha/2), since d(arctan(x))/dx = 1/(1+x²)
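
If anyone wants to double-check the algebra, a tiny SymPy sketch gives the same result:

```python
import sympy as sp

r, alpha = sp.symbols('r alpha', positive=True)
r_prime = 2 * sp.atan(r * sp.tan(alpha / 2)) / alpha   # the forward mapping from above
print(sp.diff(r_prime, r).subs(r, 0))                  # -> 2*tan(alpha/2)/alpha
```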

I made a little mistake in my previous post which I corrected now.

I have an idea in mind for how to optimize this "2-pass rendering", which I am going to test out: basically, by adjusting the resolution of the 2 images I would superimpose in the method described above. One doesn't need to render both frames at full resolution, since the one with the higher FOV will only be used for the sides and gets compressed there, so it can be rendered at a much lower resolution, drastically reducing the performance cost. One could see it as rendering a slightly lower than normal resolution image for most of the screen and then adding a high-FOV, low-resolution image on the outsides, which is then merged into one fisheye view.
Last edited by JL2579; Sep 22, 2017 @ 8:11am
TheCodeHippy  [developer] Sep 22, 2017 @ 1:00pm 
You seem to be way more comfortable with the mathematical side of this than I am. I'm a bit rusty, but that makes sense, since the derivative would be the rate of change.
Keep us updated on how your test goes. I'm curious if you can get it superimposed seamlessly.

After reading the post about Quake, I gathered that there would be people who are very passionate about FOV and fisheye. Seems I found one :)
Last edited by TheCodeHippy; Sep 22, 2017 @ 1:02pm
JL2579 Sep 24, 2017 @ 8:59am 
Hey, sorry for taking so long to get back to you. After a lot of calculating I can finally present my current solution. I plan to make a video as well, which will explain this in more detail. The following graph shows the results in compact form:

https://i.gyazo.com/8a42a990e898e5a6425e804221b5d2b0.png

What you see there is the relative maximum DPI loss of the fisheye-transformed image (so the minimum is right at the center) for different diagonal FOVs. I used the diagonal FOV in all my calculations since it's the only measure that doesn't change its value when I apply a fisheye transformation without black edges. The method that is currently implemented in Liftoff, as far as I can tell, is "Simple Black".

If you scale this image up to cover the whole screen without black edges, you end up at "Simple Noblack", which is the worst of all methods.

Now my solution to the problem is as follows: we render 2 images instead of 1 in linear projection, while keeping the constraint that the total amount of pixels rendered stays the same. Now we have an additional variable R, the ratio between the size (NOT resolution!!) of the smaller center image and the size of the outer image of the linear projection, before stitching them together and applying the fisheye transformation.

I simplified the equations for optimizing this ratio R for the lowest maximum DPI loss in the image: if the center image is too small, the area right next to it is still close to the center yet has low resolution; if it is too large, the center area itself remains too pixelated. Also notice that this method only makes sense if the FOV is very high; otherwise you are simply rendering the same area twice with half the pixels per image, which obviously doesn't make much sense.

I then used Matlab to numerically find the optimal ratio R. The results using this ratio are called "2Pass Noblack" and "2Pass Black". You can clearly see that this performs much better for high fields of view. Note that for FOVs higher than around 133°, the 2-pass rendering without black borders still has a better resolution than the standard one with black borders.

As it turns out, simply setting R=0.5 is almost equivalent within the region of interest between FOV 120 and 160. Therefore I plotted these lines as well, called "2Pass R=0.5 black" and "2Pass R=0.5 noblack".

The horizontal line is the minimum DPI at the current maximum allowed FOV (horizontal FOV 144° = diagonal FOV 148°). If we apply the same limit, we can now set the maximum FOV to around 160°; or we keep the current limit and improve the resolution in the center of the image by over 30%.


Since this theoretical rambling will probably not convince everybody, I also took the time to render some example screenshots using this method:

http://i.pi.gy/ZQ3R.jpg

The first image 1) is a Full HD screenshot taken at horizontal FOV 142°;
for comparison, I added the same image transformed with the current in-game fisheye transformation as 2).

Next we have 2 considerably lower-resolution images in 3) and 4), which together have the same total amount of pixels as the previous Full HD version, taken at carefully calculated FOVs so that the smaller one contains more detail of the drone in the middle, while the outer one gives us enough side vision.

Then in 5) we can finally see the stitched and "fisheyed" version out of these 2 images using R=0.5

Notice the slightly visible stitching border outlined in red in 6); this is because I couldn't set the FOV values exactly using the arrow keys, since they have continuous values in the game instead of just integer steps. You can also see that the outer side of the image has a much lower resolution than the inside at the stitching boundary.

BTW, I didn't use any antialiasing (or rather was too lazy to program that myself xD), which lets us compare the resolutions more easily.

Finally, I rendered the same image without black borders in 7) and without cutting pixels vertically in 8). I couldn't quite figure out what you guys did in 2); can it be that your transformation is a bit stretched and not actually quite correct ;) ?

9) is just for fun ;^^



Last edited by JL2579; Sep 24, 2017 @ 9:02am
JL2579 Sep 24, 2017 @ 11:36am 
I thought about it a bit more and came to the conclusion that pixel density in the center is actually a lot more important than at the corners, because your focus is typically at the center of the image and the human eye's resolution declines very sharply with distance from the center ( https://xkcd.com/1080/large/ )

This means that using a ratio R of less than 0.5 probably makes the final image look even better. I will come back with some more results soon
Last edited by JL2579; Sep 24, 2017 @ 11:38am
TheCodeHippy  [developer] Sep 25, 2017 @ 6:46am 
Hey JL2579,

Thanks for the extensive write-up. Reading your work here makes me eager to start experimenting again. It's nice to know that this way of improving the effect is possible. I'm curious to see what the performance would be.

We'll be looking into this in the future, but for now there are some other things that have priority. Regarding the camera, we're working on getting rid of our dual-camera setup, which allows us to render both very near and very far away but messes with a lot of post-processing. This will make using your technique somewhat more practical for us in the future.
JL2579 Sep 25, 2017 @ 7:30am 
Hey, thanks for the reply, and don't worry, I was mostly doing this as an educational challenge for myself, and also in case I should ever start programming something similar. Additionally, presenting this on my YouTube channel might be interesting for my viewers. I didn't know that you already have a dual-camera setup, that is cool. BTW, I heard that in earlier versions of the game you guys had fixed x mm lenses one could select instead of adjusting the FOV directly. I just measured the horizontal FOV of my current 2.1mm lens and was surprised to see that the resulting 116° is a lot lower than what you can find in online comparison tables. I assume those all measure the diagonal, which is why it didn't match with your in-game settings previously, as there is a sharp difference between horizontal and diagonal FOV for fisheye lenses. In my case the diagonal was over 160°.

This enormous difference is because there are different kinds of fisheye projections and lenses, as compared over here:

https://en.wikipedia.org/wiki/Fisheye_lens#Mapping_function

The linear projection has the relation R = f*tan(theta), the equidistant one I implemented in the previous post is R = f*theta, and my quadcopter has an R = 2*f*sin(theta/2) lens. I didn't previously know that there are that many different kinds; it might be an interesting read for you! Also, the wiki says that the stereographic R = 2*f*tan(theta/2) "is easily implemented by software."

I am now curious to see what that means. So in case you are considering adding an "equivalent x mm lens" description to the FOV setting, I suggest you go ahead and measure it directly yourself using different lenses and drones, then take the average.
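
To illustrate how much the chosen mapping function matters for such a measurement, here is a small sketch comparing the four projections for a 2.1 mm lens. Note that the sensor size is my assumption (a 1/3-inch sensor, roughly 4.8 x 3.6 mm), so the exact angles will differ from what I measured:

```python
from math import atan, asin, degrees

f = 2.1        # focal length in mm (the lens from the post)
half_w = 2.4   # half sensor width in mm    (assumed 1/3" sensor, 4.8 x 3.6 mm)
half_d = 3.0   # half sensor diagonal in mm

# theta(R): angle from the optical axis that lands at image height R on the sensor.
mappings = {
    "rectilinear    R = f*tan(theta)   ": lambda r: atan(r / f),
    "equidistant    R = f*theta        ": lambda r: r / f,
    "equisolid      R = 2f*sin(theta/2)": lambda r: 2 * asin(r / (2 * f)),
    "stereographic  R = 2f*tan(theta/2)": lambda r: 2 * atan(r / (2 * f)),
}

for name, theta in mappings.items():
    print(f"{name}: horizontal {2 * degrees(theta(half_w)):5.1f} deg,"
          f" diagonal {2 * degrees(theta(half_d)):5.1f} deg")
```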

Happy and Hippy coding!

P.S.: IMO the stereographic (conformal) R = 2*f*tan(theta/2) looks by far the best. So in case you ever revisit this part of your code, take the opportunity and implement this one :)
Last edited by JL2579; Sep 25, 2017 @ 7:48am
TheCodeHippy  [developer] Sep 26, 2017 @ 12:29am 
Thanks! I'll definitely look into this when we revisit the effect.