Stormworks: Build and Rescue

How to use the euler x, euler y and euler z outputs of Physics and Astronomy Sensors (the nitty gritty)
By Kernle Frizzle
If you don't have any experience in 3D coordinate transforms and rotations, then chances are the euler angle outputs make no sense. Hopefully this guide will help you understand what they mean and how to use them.
What the hell is an Euler Angle?
Euler angles, named after the Swiss mathematician Leonhard Euler, are a way to describe an orientation in space using three chained rotations around coordinate axes.

English translation:
It's like roll, compass heading and elevation. You have three separate angle measures, and each describes a different aspect of the orientation.

The theory is that any orientation in 3D space can be described by a sequence of three consecutive rotations, each with reference to one of the three coordinate axes, x, y, and z. Any point in one coordinate system can then be transformed to its relative position in the rotated coordinate system and vice versa by applying those three rotations to the point's coordinates.

For now this is just the theory. I'll do my best to lay out ways to visualize and make sense of it yourself, but if it doesn't click, don't worry. You can just skip it and trust the math at the end.

Visualization:
I ripped this gif from the Wikipedia article on Euler angles.[en.wikipedia.org] It shows how the final orientation is reached by the three individual rotations:
In this case, imagining that the down-left axis is x, the vertical axis is y, and the down-right axis is z, the order of the rotations is y-x'-y''. The 's just represent the fact that each consecutive rotation is done after the previous one; in mathematics, the coordinates of anything that has had a transformation applied are usually written as the original coordinate names plus one ' for each transformation applied.

One thing that is important to note:
The rotations shown in the gif are what is called intrinsic rotation. That means the rotations are performed around the object's own local axes, and every time a rotation is done, the object's orientation gains an extra ' in its notation. However, the physics sensor's Euler angles represent extrinsic rotation, meaning rotation done with reference to the fixed world space's coordinate axes. Extrinsic rotation order does not include 's in its notation, as the reference around which the object is being transformed isn't being transformed itself. Rotations are most easily performed (mathematically) when they are with respect to one of the fixed (blue) axes. All further mentions of rotations will be extrinsic. Thanks to thepackett for the correction.

This idea is probably easiest to visualize if we talk about roll, elevation and azimuth, which I'm guessing you're familiar with. If the idea of rotating around the three fixed axes is confusing, then follow the steps below, otherwise skip to the next heading.

I want you to sit up straight in your chair and look straight ahead.

Your head is now the fixed coordinate system; you are not allowed to move your head in any way.

Imagine a point directly in front of you, height level, one meter away.

You are allowed to rotate the position of the point around your head, but only in three directions:
  • Around your x-axis (pointing directly out of the right side of your head, rotating around this axis would move the point up or down in your vision),

  • Around your y-axis (pointing directly out of the top of your head, rotating around this axis would move the point left or right in your vision), and

  • Around your z-axis (pointing directly out of the front of your head; if the point is on your vision's centerline it will look like it doesn't move, since it has no wings to indicate its roll position, but if the point is slightly off-center, it will rotate around your centerline clockwise or counterclockwise).
Imagine you want to describe the orientation of an aircraft, at zero azimuth to keep it simple. The aircraft is pointed slightly up and rolled slightly to the right.

If you want to move the point in front of your head into the same position as the aircraft, you could do it in two ways:
  • First rotate the dot around the x axis (upward) and then around the z axis (clockwise)

  • First rotate the dot around the z axis (clockwise) and then around the x axis (upward)
Now imagine performing these rotations on the dot in front of you. Using the first way, the dot would rotate upward (good), but then rotating around your z axis, the dot will swing out to the right as it rolls (not good, as the orientation of the dot should be at 0 azimuth). Remember, these rotations are being done around the axes relative to your head, not the dot.

Using the second way, first you would rotate the dot around the z axis, giving it the correct roll value (good), and then you would rotate the dot around the x axis, moving it up and giving it the correct elevation (also good).

This should show you how the order that you apply the rotations is just as important as the size of the rotations themselves. The idea can be extrapolated for azimuth as well, but I'll leave that visualization to you.
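The two orderings above can be checked numerically. This is just a sketch (not from the guide's in-game code) using the rotmat2d() function defined later in this guide, with arbitrary angle values; the sign conventions here are only for illustration:

```lua
-- rotmat2d as defined later in the guide
function rotmat2d(x,y,r)
  return x*math.cos(r)-y*math.sin(r), x*math.sin(r)+y*math.cos(r)
end

-- The dot starts 1 m dead ahead: x=0 (right), y=0 (up), z=1 (forward)
local e, roll = 0.3, 0.5  -- arbitrary elevation and roll angles (radians)

-- Way 1: elevation first (around x), then roll (around z)
local y1, z1 = rotmat2d(0, 1, e)
local x1, yy1 = rotmat2d(0, y1, roll)

-- Way 2: roll first (around z), then elevation (around x)
local x2, y2 = rotmat2d(0, 0, roll)  -- dot is on the centerline: roll does nothing
local yy2, z2 = rotmat2d(y2, 1, e)

-- The two orders land the dot in different places
print(x1, yy1, z1)  -- x1 is nonzero: the dot swung off to the side
print(x2, yy2, z2)  -- x2 is exactly 0: still at zero azimuth
```

Same two rotations, same two angles, different final positions, exactly as the head exercise predicts.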

But wait, the gif example only rotates around the x and y axes, why doesn't it use all three?

At some point, people decided that Euler's angles were a bit too abstract for more practical applications like aeronautics. Two dudes, Peter Guthrie Tait and George H. Bryan, came up with Tait-Bryan angles (creative), which are synonymous with your classic and familiar heading, elevation, bank, or yaw, pitch, roll and such. They use all three axes instead of just two.

Over time, these Tait-Bryan angles have also been called Euler angles, and in that situation, Euler's original angles are called classic Euler angles.

Finally getting to the game itself, the three "euler" angles outputted by the physics and astronomy sensors are actually Tait-Bryan angles in disguise.
The only mystery left is: In what order are these angles applied?
What angles, and in what frame of reference, and in what order, are outputted by the sensors?
The only info we have to go off of is the names of each output: "euler x," "euler y" and "euler z."

It's a pretty safe assumption that the "euler x" angle is the value of the rotation around the x axis, and likewise for the rest. As it turns out, this is indeed the case.

These rotations are with respect to the world itself. (extrinsic)

Important to remember: As with the sensor's x, y and z position outputs, y is the vertical axis and x and z are analogous to the in-game map's x and y.

Stop beating around the bush, what order are these angles performed in?
I don't know about you, but my first instinct would be z-x-y, as this would be identical to what you would expect as roll, elevation and azimuth. First you would perform the roll, then the elevation and lastly the azimuth.

If you tried the steps earlier to visualize these rotations, you should be able to see that this would indeed make the most sense. That is, only if x, y and z are truly trying to describe elevation, azimuth and roll.

This, sadly, is not the case.

After some experimenting, I found the true order.
It's x-y-z.

I guess it should be obvious, being in alphabetical order, but the game gives no indication of this order other than, implicitly, the order of the outputs themselves: channels 4, 5 and 6 respectively.

Now we get to the math.
How do I use these angles in any meaningful way?
As described earlier, the order in which these rotations are performed is x-y-z. If you rotated a coordinate system (starting facing north with no tilt whatsoever) around these axes in this order, the resulting coordinate system would be in the same orientation as the physics or astronomy sensor in-game.

One of the most common instances where you need this kind of coordinate shift is with radars.

The radar component will output enough data to allow you to calculate the x, y and z position of a target with reference to the radar component itself, that data being relative distance, azimuth and elevation. The tricky part comes when you need to describe that target's position relative to the map instead of the radar.
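As a side note, getting from the radar's spherical outputs to relative x, y, z coordinates looks something like this. The helper name radarToXYZ is mine, not the guide's, and it assumes the azimuth and elevation arrive as turns (fractions of a full rotation), which is how most Stormworks components report angles; check your radar's tooltip before trusting the unit conversion:

```lua
-- Hypothetical helper: spherical radar data -> x,y,z relative to the radar.
-- ASSUMPTION: azimuth/elevation are in turns; multiply by 2*pi for radians.
function radarToXYZ(dist, azTurns, elTurns)
  local az = azTurns * 2 * math.pi
  local el = elTurns * 2 * math.pi
  local x = dist * math.cos(el) * math.sin(az)  -- right of the radar
  local y = dist * math.sin(el)                 -- above the radar
  local z = dist * math.cos(el) * math.cos(az)  -- ahead of the radar
  return x, y, z
end

-- A target dead ahead at 100 m comes out as (0, 0, 100)
print(radarToXYZ(100, 0, 0))
```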

If you slap a physics sensor right next to the radar, you can use the euler x, y and z outputs to "decode" these relative positions into their world/map/true positions.

Remember what those "euler" angles represent: the angles something must rotate by, starting from facing dead ahead (north) with reference to the world/map, to end up in the final orientation of the sensor. If you apply those rotations to the target's coordinates relative to the radar, you are in essence rotating that entire coordinate system, and you end up with coordinates relative to the map instead of the radar.

Even though the x-y-z order may be harder to visualize, the actual math to perform these rotations is exactly the same as it would be for any other order.

Important notes:
  • Stormworks uses what is called a "left-handed" coordinate system. This means rotations follow the left-hand rule; make a thumbs-up with your left hand, your fingers curl in the direction of positive rotation for the axis pointing in the direction of your thumb.

  • This means the Euler x, y and z angles are clockwise positive, counterclockwise negative.

  • If you experiment with the angles yourself, you may notice that sometimes they seem to randomly flip from positive to negative or gain an extra pi in their measurement. This is normal. Think of how an artificial horizon or compass quickly flips around when you face straight up in a full loop-de-loop. It is a form of gimbal lock.

Your best friend:
function rotmat2d(x,y,r) return x*math.cos(r)-y*math.sin(r), x*math.sin(r)+y*math.cos(r) end
You will find this subprogram in my massive list of useful subprograms, if you've seen my other lua guide.

This function will take in what are essentially the components of a vector on the x,y plane, as well as an angle measure "r" to rotate that vector by. The function will output x',y', which just means the x and y coordinates after they've been rotated.
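For instance, rotating the unit vector (1, 0) by a quarter turn (an example of my own, not from the guide):

```lua
function rotmat2d(x,y,r)
  return x*math.cos(r)-y*math.sin(r), x*math.sin(r)+y*math.cos(r)
end

-- Rotate the vector (1, 0) by 90 degrees (pi/2 radians)
local x2, y2 = rotmat2d(1, 0, math.pi/2)
-- x2 is ~0 (cos(pi/2) isn't exactly zero in floating point) and y2 is 1
print(x2, y2)
```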

Here is a visual example:
You may notice that this is in 2D rather than 3D.

However, because our Euler angles are locked to their respective axis, you can fully ignore that axis when calculating the rotation.
Skip the context, just show me what to copy and paste into my Lua.
This code will take the coordinates of a target relative to a physics or astronomy sensor and transform them into coordinates relative to the map:
function rotmat2d(x,y,r) return x*math.cos(r)-y*math.sin(r), x*math.sin(r)+y*math.cos(r) end

function onTick()
    ex = input.getNumber(4)
    ey = input.getNumber(5)
    ez = input.getNumber(6)
    targetx, targety, targetz = 24, 5, 72 --These are just random coordinates for a target relative to the physics sensor, likely coming from a radar or some other kind of sonar or laser sensor.
    --This series of rotations will rotate the coordinates of the target according to the Euler angles from the sensor
    targety, targetz = rotmat2d(targety, targetz, ex)
    targetz, targetx = rotmat2d(targetz, targetx, ey)
    targetx, targety = rotmat2d(targetx, targety, ez)
    output.setNumber(1, targetx)
    output.setNumber(2, targety)
    output.setNumber(3, targetz)
end
Strangely, if you've ever dealt with the cross product of 3D vectors before, the order of x, y and z may look familiar. For the cross product the order to remember is 23-32,31-13,12-21, and equating those numbers to x, y and z results in yz-zy,zx-xz,xy-yx, which is almost exactly what you see in the code, yz,zx,xy. Just a weird observation, it isn't important for this, at least I don't think.

This code will do the opposite of the previous code. It will take world coordinates of a target and transform them into relative coordinates to a physics or astronomy sensor:
function rotmat2d(x,y,r) return x*math.cos(r)-y*math.sin(r), x*math.sin(r)+y*math.cos(r) end

function onTick()
    ex = input.getNumber(4)
    ey = input.getNumber(5)
    ez = input.getNumber(6)
    targetx, targety, targetz = 18, 2, 92.1042957 --These are just random coordinates for a target relative to the map
    --This series of rotations will rotate the coordinates of the target according to the Euler angles from the sensor
    targetx, targety = rotmat2d(targetx, targety, -ez)
    targetz, targetx = rotmat2d(targetz, targetx, -ey)
    targety, targetz = rotmat2d(targety, targetz, -ex)
    output.setNumber(1, targetx)
    output.setNumber(2, targety)
    output.setNumber(3, targetz)
end
You can see that in this code, the signs of the rotations are reversed. It is important to remember that the order of the rotations has to also be reversed, as you are "undoing" their orientation.
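A quick round-trip check makes the "reverse the signs and the order" rule concrete. This sketch (my own, with arbitrary angle values) applies the forward x-y-z rotations and then the reversed -z, -y, -x rotations, and gets the original coordinates back:

```lua
function rotmat2d(x,y,r)
  return x*math.cos(r)-y*math.sin(r), x*math.sin(r)+y*math.cos(r)
end

-- Arbitrary sensor angles and a local-space point
local ex, ey, ez = 0.4, -1.1, 2.0
local x, y, z = 24, 5, 72

-- Local -> world: rotate around x, then y, then z
y, z = rotmat2d(y, z, ex)
z, x = rotmat2d(z, x, ey)
x, y = rotmat2d(x, y, ez)

-- World -> local: negate the angles AND reverse the order
x, y = rotmat2d(x, y, -ez)
z, x = rotmat2d(z, x, -ey)
y, z = rotmat2d(y, z, -ex)

print(x, y, z)  -- back to (24, 5, 72) up to floating point error
```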
(bonus) How to extract other orientation data
If you had just the Euler angles from one of the sensors, you can use them to calculate the corresponding pitch, roll and yaw.

The method consists of applying the rotations to vectors representing the local x, y and z axes of the sensor, then, based on the real-world directions of those axes, calculating the different angles.

This code will do just that:
function rotmat2d(x,y,r) return x*math.cos(r)-y*math.sin(r), x*math.sin(r)+y*math.cos(r) end

function onTick()
    ex=input.getNumber(4)
    ey=input.getNumber(5)
    ez=input.getNumber(6)
    i1={x=1,y=0,z=0}
    j1={x=0,y=1,z=0}
    k1={x=0,y=0,z=1}
    i1.y,i1.z=rotmat2d(i1.y,i1.z,ex)
    i1.z,i1.x=rotmat2d(i1.z,i1.x,ey)
    i1.x,i1.y=rotmat2d(i1.x,i1.y,ez)
    j1.y,j1.z=rotmat2d(j1.y,j1.z,ex)
    j1.z,j1.x=rotmat2d(j1.z,j1.x,ey)
    j1.x,j1.y=rotmat2d(j1.x,j1.y,ez)
    k1.y,k1.z=rotmat2d(k1.y,k1.z,ex)
    k1.z,k1.x=rotmat2d(k1.z,k1.x,ey)
    k1.x,k1.y=rotmat2d(k1.x,k1.y,ez)
    a=math.atan(k1.x,k1.z)
    tjz,tjx=rotmat2d(j1.z,j1.x,-a)
    tkz,tkx=rotmat2d(k1.z,k1.x,-a)
    e=math.atan(k1.y,tkz)
    tjz,tjy=rotmat2d(tjz,j1.y,-e)
    tkz,tky=rotmat2d(tkz,k1.y,-e)
    r=math.atan(tjx,tjy)
    output.setNumber(1,a)
    output.setNumber(2,e)
    output.setNumber(3,r)
end
Technically only the j and k normal vectors are needed for this calculation, but I included all three because why not. Maybe you'll need i for something yourself.

Copy and paste this into a lua hooked up directly to a physics or astronomy sensor, and it will give you your a (azimuth, compass), e (elevation, pitch) and r (roll), all in radians of course.

If you need all the data of a physics sensor, but are stuck with an astronomy sensor, use this microcontroller to derive the missing data.
Matrices (transformation without trig)
Rethinking Vector Components
One way of thinking about coordinates is by re-imagining what the coordinate axes really represent. One of the common notations for vectors describes them as a sum of unit vectors (magnitude 1), one pointing in each x,y,z direction, each multiplied by a scalar corresponding to the x,y or z coordinate. In this point of view, the coordinate axes for the vector don't really exist, just the two (or three in 3D) unit vectors which correspond to the directions of each axis.

The actual notation for these unit vectors is i, j and k with little hats on top. Typically, the direction of i, j and k are x, y and z, or in terms of a global coordinate space, (1,0,0), (0,1,0) and (0,0,1). Note the magnitudes of these vectors are all 1, making them unit vectors that can be scaled accordingly, such as v = 2i + 3j - 5k

Now, if a vector is actually just a sum of smaller vectors, the directions of those smaller vectors can be... anything. They can be skewed to point in any direction, and the math of vectors in terms of those unit vectors remains exactly the same (this is what XML editing is all about).

If you define three new unit vectors that correspond to the local axes of a rotated object in space, you can calculate what the local coordinates of any global coordinate or global coordinates of any local coordinate would be with reference to the rotated object.

Once you have these three unit vectors, you can use proj() on each to "flatten" the global coordinates of a point onto each local unit vector, leaving you with three new scaled vectors in the directions of the local unit vectors that, when added together, give back the global coordinates of the point in question. Because these three unit vectors are in the directions of the local axes of the rotated object, the scaling factor for each unit vector is now technically the local coordinate for the point with reference to the object. The same logic holds for converting from local to global; each unit vector is defined in the global reference frame, so simply multiplying that unit vector by a scalar (v = 2i + 3j - 5k) will "extend" that unit vector into the global space. Then add the other two extended unit vectors to it and you're left with the global coordinates for the initial local coordinates, the scalars on each unit vector.
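The "flattening" step above is just a dot product when the axis vector has magnitude 1. A minimal sketch with made-up numbers (this is the idea behind proj(), not the guide's function itself):

```lua
function dot(a,b) return a.x*b.x + a.y*b.y + a.z*b.z end

-- A unit vector for a local axis, tilted 45 degrees in the x,z plane
local s = math.sqrt(0.5)
local axis = {x=s, y=0, z=s}

-- A point in global space
local p = {x=2, y=1, z=4}

-- Because axis has magnitude 1, the projection of p onto it is just
-- dot(p, axis): the local coordinate of p along that axis
local coord = dot(p, axis)
print(coord)  -- (2 + 4) * sqrt(0.5), about 4.243
```

Do the same with the other two local axes and you have all three local coordinates.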

I realized that the megaproj() and megascal() functions defined in the vectors section of my functions guide do the exact same math in the exact same order as a legitimate rotation matrix. Namely:
function megaproj(v,i,j,k) return {x=dot(v,i),y=dot(v,j),z=dot(v,k)} end
function megascal(v,i,j,k) return vadd(vadd(scal(i,v.x),scal(j,v.y)),scal(k,v.z)) end

--or if the multi-input vadd and combined dot/scal are defined:
function megascal(v,i,j,k) return vadd(dot(i,v.x),dot(j,v.y),dot(k,v.z)) end
These functions use other vector functions that you have to define yourself, but to keep it standardized I'd suggest copying them over from the functions guide.

These functions take in a vector v, then either flatten v's coordinates down onto the i, j and k basis vectors for the new orientation (megaproj), or extend that basis' i, j and k vectors into the global frame (megascal). It turns out both of these functions can be represented 1:1 with a single matrix multiplication.
megaproj:
| i.x i.y i.z |   | v.x |
| j.x j.y j.z | × | v.y |
| k.x k.y k.z |   | v.z |
megascal:
| i.x j.x k.x |   | v.x |
| i.y j.y k.y | × | v.y |
| i.z j.z k.z |   | v.z |
Those more familiar with matrix multiplication will notice that the matrix for megascal is the transpose of megaproj, and vice versa. This gives the intuition that one rotation can be undone by applying the transpose of that rotation to the result, in the same way that first converting to global with megascal can then be undone by converting back to local with megaproj (and again, vice versa).

Someone who was more familiar with matrices than me mentioned in the comments that with a rotation matrix, you don't have to worry about rotation order. Once you have the completed matrix, that is 100% the case. However, in order to get those i, j and k basis vectors, you need to either use rotmat2d() or directly figure out the trigonometry yourself for each component, and that process is where rotation order is paramount.

Implementation
(keep in mind I haven't tested this in-game, but I did test it externally compared to the rm2d() methods and it returns the same values)
function scal(v,s) return {x=v.x*s,y=v.y*s,z=v.z*s} end
function vadd(a,b) return {x=a.x+b.x,y=a.y+b.y,z=a.z+b.z} end
function dot(v1,v2) return v1.x*v2.x+v1.y*v2.y+v1.z*v2.z end
function megaproj(v,i,j,k) return {x=dot(v,i),y=dot(v,j),z=dot(v,k)} end
function megascal(v,i,j,k) return vadd(vadd(scal(i,v.x),scal(j,v.y)),scal(k,v.z)) end

function onTick()
    ex=input.getNumber(4)
    ey=input.getNumber(5)
    ez=input.getNumber(6)
    --precalculate the trig to speed things up (probably not necessary)
    cx=math.cos(ex) sx=math.sin(ex)
    cy=math.cos(ey) sy=math.sin(ey)
    cz=math.cos(ez) sz=math.sin(ez)
    --calculate the orientation basis vectors
    i={x=cy*cz,y=cy*sz,z=-sy}
    j={x=sx*sy*cz-cx*sz,y=sx*sy*sz+cx*cz,z=sx*cy}
    k={x=cx*sy*cz+sx*sz,y=cx*sy*sz-sx*cz,z=cx*cy}
    v={x=2,y=6,z=3.7} --some random example input vector
    v1=megascal(v,i,j,k) --converts v from local --> global
    v2=megaproj(v,i,j,k) --converts v from global --> local
end
If you've seen a constructed rotation matrix, then those i, j and k vectors should look very familiar. These are set up to be generated from eulers in the order ex-ey-ez, same as the physics sensor in-game.
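You can sanity-check the closed-form basis vectors against the rotmat2d() method from earlier. This sketch (my own, arbitrary angles) builds the i vector both ways and shows they match:

```lua
function rotmat2d(x,y,r)
  return x*math.cos(r)-y*math.sin(r), x*math.sin(r)+y*math.cos(r)
end

local ex, ey, ez = 0.3, 0.7, -0.2
local cy, sy = math.cos(ey), math.sin(ey)
local cz, sz = math.cos(ez), math.sin(ez)

-- Closed-form i basis vector from the matrix construction
local i = {x=cy*cz, y=cy*sz, z=-sy}

-- Same vector built the long way: rotate (1,0,0) by ex, ey, ez in order
local v = {x=1, y=0, z=0}
v.y, v.z = rotmat2d(v.y, v.z, ex)
v.z, v.x = rotmat2d(v.z, v.x, ey)
v.x, v.y = rotmat2d(v.x, v.y, ez)

print(i.x - v.x, i.y - v.y, i.z - v.z)  -- all ~0
```

The same check works for j and k; the trig products in the closed form are just the three 2D rotations multiplied out.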
Stand-alone function:
function ijkfromeuler(ex,ey,ez)
    cx=math.cos(ex) sx=math.sin(ex)
    cy=math.cos(ey) sy=math.sin(ey)
    cz=math.cos(ez) sz=math.sin(ez)
    i={x=cy*cz,y=cy*sz,z=-sy}
    j={x=sx*sy*cz-cx*sz,y=sx*sy*sz+cx*cz,z=sx*cy}
    k={x=cx*sy*cz+sx*sz,y=cx*sy*sz-sx*cz,z=cx*cy}
    return i,j,k
end
If you're worried about execution time, regular rm2d() does the transformation in ~1E-5 seconds and the above matrix construction does it in ~5E-5 seconds. I would've assumed the opposite, but the interpreter test script doesn't lie. You can make it faster by making everything local, but that might not be very good for the character limit.
Turret 3rd person -> 1st person seat look control
function ijkfromeuler(ex,ey,ez)
    cx=math.cos(ex) sx=math.sin(ex)
    cy=math.cos(ey) sy=math.sin(ey)
    cz=math.cos(ez) sz=math.sin(ez)
    i={x=cy*cz,y=cy*sz,z=-sy}
    j={x=sx*sy*cz-cx*sz,y=sx*sy*sz+cx*cz,z=sx*cy}
    k={x=cx*sy*cz+sx*sz,y=cx*sy*sz-sx*cz,z=cx*cy}
    return i,j,k
end

function dot(a,b)
    if type(a)=="table" and type(b)=="table" then
        return a.x*b.x+a.y*b.y+a.z*b.z
    else
        local a=type(a)=="table" and a or {x=a,y=a,z=a}
        local b=type(b)=="table" and b or {x=b,y=b,z=b}
        return {x=a.x*b.x,y=a.y*b.y,z=a.z*b.z}
    end
end

function mag(v) return math.sqrt(v.x^2+v.y^2+v.z^2) end

function vadd(...)
    local s={x=0,y=0,z=0}
    for k,v in pairs({...}) do s={x=s.x+v.x,y=s.y+v.y,z=s.z+v.z} end
    return s
end

function megaproj(v,i,j,k) return {x=dot(v,i),y=dot(v,j),z=dot(v,k)} end
function megascal(v,i,j,k) return vadd(dot(i,v.x),dot(j,v.y),dot(k,v.z)) end

function onTick()
    ex=input.getNumber(4) --from the physics sensor
    ey=input.getNumber(5)
    ez=input.getNumber(6)
    tf=input.getNumber(15)*math.pi*2
    comp=input.getNumber(17)*-math.pi*2
    lookX=input.getNumber(9)*math.pi*2+comp --whatever channels you're using
    lookY=input.getNumber(10)*math.pi*2+tf
    lookVector={
        x=math.sin(lookX)*math.cos(lookY),
        y=math.sin(lookY),
        z=math.cos(lookX)*math.cos(lookY)
    }
    i,j,k=ijkfromeuler(ex,ey,ez)
    lookVector=megaproj(lookVector,i,j,k)
    newlookX=math.atan(lookVector.x,lookVector.z)
    newlookY=math.asin(lookVector.y)
    output.setNumber(1,newlookX/math.pi/2)
    output.setNumber(2,newlookY/math.pi/2)
end
Vehicle Camera Mode must be set to Free in settings for this to work
This code will calculate what the corresponding 1st person seat look X and Y would be from the current 3rd person seat look and a physics sensor on the same subgrid as the seat. It first ensures the 3rd person looks are true to global by adding the current compass and forward tilt (this only works when your camera setting is set to "free"), then calculates the components of a vector representing the look direction. Then, it transforms that vector to local using megaproj() and calculates the corresponding look X and Y using trig.

This is perfect if you need to control a turret from third person, war thunder style.
Matrices (further)
One property of matrix multiplication involves chaining rotations. If you have multiple rotations you need to perform, you can combine rotation matrices first with multiplication and result in a matrix for the total net rotation. Because we now know that rotation matrices are just rows or columns of unit vectors, we can justify that
(megascal)
| i.x j.x k.x |   | v.x |   | v'.x |
| i.y j.y k.y | × | v.y | = | v'.y |
| i.z j.z k.z |   | v.z |   | v'.z |

and

| i'.x j'.x k'.x |   | v'.x |   | v".x |
| i'.y j'.y k'.y | × | v'.y | = | v".y |
| i'.z j'.z k'.z |   | v'.z |   | v".z |

therefore

| i'.x j'.x k'.x |   | i.x j.x k.x |   | v.x |   | v".x |
| i'.y j'.y k'.y | × | i.y j.y k.y | × | v.y | = | v".y |
| i'.z j'.z k'.z |   | i.z j.z k.z |   | v.z |   | v".z |
resulting in the local-->global transformation from local onto i, j and k, and then from those new local coordinates onto i', j' and k'. When chaining transformations like this, the chain order is just as important as the rotation order is for constructing a matrix.
Visualize:
  1. Imagine a regular x,y,z coordinate in global space
  2. Zoom out to see that this "global" space is actually in the basis of i, j and k
  3. Convert to the true global coordinates (by megascal, matrix columns = i, j and k)
  4. Zoom further out to see that this supposed "global" space is actually in its own basis of 3 other vectors, i', j' and k'
  5. Convert to the true true global coordinates (by another megascal)

With a bit of head scratching, you could convince yourself that:
if you transform the i, j and k basis vectors individually onto the i', j' and k' basis, resulting in basis vectors i", j" and k", then transforming onto the new i", j" and k" basis will have the same effect as first transforming onto i, j and k and then onto i', j' and k'.

If we wanted to transform by those resulting i", j" and k" vectors, we would put them in a matrix like any other trio of basis vectors
| i".x j".x k".x | | v.x | | v".x | | i".y j".y k".y | | v.y | = | v".y | | i".z j".z k".z | | v.z | | v".z |
If chaining rotations results in the same rotation as that single matrix, then logic dictates that
| i'.x j'.x k'.x |   | i.x j.x k.x |   | i".x j".x k".x |
| i'.y j'.y k'.y | × | i.y j.y k.y | = | i".y j".y k".y |
| i'.z j'.z k'.z |   | i.z j.z k.z |   | i".z j".z k".z |
Keep in mind i, j and k are completely independent from i', j' and k'. They represent completely independent rotations and aren't linked in any way other than sharing a variable name.

The physical representation of the i", j" and k" basis vectors leads to the final observation:
| i.x i.y i.z |   | t.x u.x v.x w.x |   | t'.x u'.x v'.x w'.x |
| j.x j.y j.z | × | t.y u.y v.y w.y | = | t'.y u'.y v'.y w'.y |
| k.x k.y k.z |   | t.z u.z v.z w.z |   | t'.z u'.z v'.z w'.z |
A matrix consisting of n columns of arbitrary vectors transformed by a rotation matrix results in a matrix with n columns of individually transformed vectors. You can transform as many vectors as you want at one time with just a single matrix multiplication.
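In Lua table form, "n columns of vectors" is just a loop over a table; one basis, many vectors. A minimal sketch (my own, with a made-up 90-degree-yaw basis) using the same dot/megaproj shape as the guide's code:

```lua
function dot(a,b) return a.x*b.x + a.y*b.y + a.z*b.z end
function megaproj(v,i,j,k) return {x=dot(v,i), y=dot(v,j), z=dot(v,k)} end

-- An example basis: a 90-degree rotation about y, so the world x axis
-- lands on the local z axis
local i = {x=0, y=0, z=-1}
local j = {x=0, y=1, z=0}
local k = {x=1, y=0, z=0}

-- The "columns" of arbitrary vectors: as many targets as you like
local targets = {
  {x=1, y=0, z=0},
  {x=0, y=0, z=1},
  {x=2, y=3, z=4},
}
for n, t in ipairs(targets) do
  local p = megaproj(t, i, j, k)  -- each column transformed individually
  print(n, p.x, p.y, p.z)
end
```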

All of the above logic works for both megascal and megaproj, and pretty much any other linear transformation
and, because megascal will undo megaproj (and vice versa), we can see that
(megaproj)        (megascal)
| i.x i.y i.z |   | i.x j.x k.x |   | v.x |          | v.x |
| j.x j.y j.z | × | i.y j.y k.y | × | v.y |  must =  | v.y |
| k.x k.y k.z |   | i.z j.z k.z |   | v.z |          | v.z |
and therefore
| i.x i.y i.z |   | i.x j.x k.x |          | 1 0 0 |
| j.x j.y j.z | × | i.y j.y k.y |  must =  | 0 1 0 |
| k.x k.y k.z |   | i.z j.z k.z |          | 0 0 1 |

via

| i.x*i.x+i.y*i.y+i.z*i.z  i.x*j.x+i.y*j.y+i.z*j.z  i.x*k.x+i.y*k.y+i.z*k.z |
| j.x*i.x+j.y*i.y+j.z*i.z  j.x*j.x+j.y*j.y+j.z*j.z  j.x*k.x+j.y*k.y+j.z*k.z |
| k.x*i.x+k.y*i.y+k.z*i.z  k.x*j.x+k.y*j.y+k.z*j.z  k.x*k.x+k.y*k.y+k.z*k.z |

=

| dot(i,i) dot(i,j) dot(i,k) |
| dot(j,i) dot(j,j) dot(j,k) |
| dot(k,i) dot(k,j) dot(k,k) |

and because i, j and k are orthogonal unit vectors, this is

| 1 0 0 |
| 0 1 0 |
| 0 0 1 |
and because megascal and megaproj are the transposes of each other
(rotationMatrix) x transpose(rotationMatrix) = identityMatrix
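A quick numeric spot check of that identity, using the basis-vector construction from the earlier sections (my own sketch, arbitrary angles):

```lua
local ex, ey, ez = 0.5, 1.2, -0.8
local cx, sx = math.cos(ex), math.sin(ex)
local cy, sy = math.cos(ey), math.sin(ey)
local cz, sz = math.cos(ez), math.sin(ez)

-- Basis vectors, same construction as ijkfromeuler()
local i = {x=cy*cz,          y=cy*sz,          z=-sy}
local j = {x=sx*sy*cz-cx*sz, y=sx*sy*sz+cx*cz, z=sx*cy}
local k = {x=cx*sy*cz+sx*sz, y=cx*sy*sz-sx*cz, z=cx*cy}

function dot(a,b) return a.x*b.x + a.y*b.y + a.z*b.z end

-- R x transpose(R) entry-by-entry: diagonals are dot(v,v) of unit
-- vectors (~1), off-diagonals are dots of orthogonal vectors (~0)
print(dot(i,i), dot(j,j), dot(k,k))
print(dot(i,j), dot(i,k), dot(j,k))
```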
but that's enough linear algebra for one day.
If you have any questions or if anything needs clarification, please ask in the comments below and I will do my best to answer or explain.

I commend you for making it to this point

Originally posted by Thomas Joseph Grasso:
I did not get my SpaghettiOs, I got spaghetti. I want the press to know this.




Q:
Why is it that for the orientation extraction, the vectors are only rotated in two dimensions at once? Wouldn't they need to be rotated in three?
A:
Because the rotation order for azimuth/elevation/roll is z-x-y, to find the azimuth (y, which is "applied" last), you can straight up use the coordinates of the local z axis, k1, the same way you would find the bearing to a waypoint or a target. When you need to find elevation, you technically could do math.atan(k1.y/math.sqrt(k1.x^2+k1.z^2)) and save some trig, but in this example the local vectors are rotated to be facing "north" to make the calculation just math.atan(k1.y/k1.z). It's then the same process for roll, though roll needs a second reference, j1 (the y vector). The z vector, k1, once rotated back into a position where roll can be calculated, would just be facing {x=0,y=0,z=1} without any way of telling what the roll might be. The y vector j1, however, if rotated the same way as k1, is positioned perfectly for an atan(). That's why to find the a,e and r only the j1 and k1 vectors are rotated in reverse, and one at a time.

If you were wondering why each rotmat2d() only involves two axes, y and z or whatever it might be, that's because each rotation is performed individually by rotating coordinates around one of the x, y or z axes. If you imagine a point being rotated around the y axis, for instance, looking directly down the y axis you would see it rotate around the middle of your reference frame. Looking at it from the side, however, you would see that only the x and z coordinates change, and the y is left the same. There's no need to include the y component when you're rotating around the y axis, and the same goes for x and z. When you use a 3x3 rotation matrix, everything is done in one step, but that matrix itself is made up of those three individual rotations being applied to a single "object" before that "object" can be applied to whatever 3D vector you want to rotate. In the end it arrives at the same outcome.
16 Comments
Kernle Frizzle  [author] Dec 21, 2024 @ 12:33am 
Ah ok gotcha, that makes sense. Technically then the extrinsic x-y-z should be equivalent to the intrinsic z-y'-x''
thepackett Dec 20, 2024 @ 5:53pm 
Hey, quick heads up about something that confused me.

Tait-Bryan angles can be either extrinsic, meaning that the axes of rotation are fixed to some external frame of reference, or intrinsic, meaning that the axes of rotation move with the rotating reference frame.

From my testing, it seems the Tait-Bryan angles given by the physics sensor are extrinsic, meaning their rotation axes are relative to the world's reference frame. I suspect you're already aware of this, since you mentioned that the rotations are with respect to the world, however the notation you used, x-y'-z'', implies intrinsic Tait-Bryan angles instead. The correct notation is x-y-z for extrinsic Tait-Bryan angles.
Kernle Frizzle  [author] Dec 17, 2024 @ 9:07pm 
The hand positions are a good way to remember this kind of thing. If I had to guess why left-handed is more common for rotation, it would be because of the way rotations are fundamentally represented in the pure math. Actual rendering engines and any kind of program that has to work with orientation will use quaternions instead of vectors to more fully represent rotation, which are essentially fancy 4D complex numbers with properties that result in intrinsic left-handedness. 4D linear algebra is too hard to think about though, so I'm going to stick to 3D.
thepackett Dec 17, 2024 @ 6:35pm 
Thanks a bunch for this, trying to figure out the coordinate system and rotation angle order was not something I was looking forward to.

As a fun fact, and to shed some light on why positive rotation around the x axis is pitch down, positive rotation around the y axis is clockwise yaw, etc, I suspect that Stormworks uses a coordinate representation like Unity's, which is a Y up left handed coordinate system. In this system, positive rotation is defined by the left hand rule: Make a thumbs up with your left hand, point the thumb along the axis you're rotating around, and curl your fingers. The direction your fingers are curling is the direction of positive rotation.
Broken Salvo Dec 13, 2024 @ 12:42pm 
What I did was use regular 2D trig to obtain a bearing to target. Then I transformed it with a 2x2 rotation matrix. But it only worked using r, not EZ
Kernle Frizzle  [author] Dec 13, 2024 @ 11:48am 
If he ran into the same issue with the same missile then there could be something wrong with the missile itself, the radar or sensor being positioned weirdly
Broken Salvo Dec 13, 2024 @ 11:41am 
My friend had the same issue doing it independently and he believes the physics sensors output the euler angles slightly wrong.
Maybe its because we use them for missiles so do the gps co ordinates relative to us and not the world but i am not sure.
Kernle Frizzle  [author] Dec 13, 2024 @ 11:30am 
That's very strange... I don't know how it would be possible for a typo in the code to make it still work with a, e and r, other than maybe the order copied over being funky with a lucky sign change
Broken Salvo Dec 13, 2024 @ 11:19am 
The second chunk of code, to transform global coordinates to local
Kernle Frizzle  [author] Dec 13, 2024 @ 10:42am 
Does yours use one of the chunks of code from this guide or did you write up the code yourself?