Aligning the coordinate systems of accelerometer and vehicle

A blog by Sebastian Liem and Erwin Poeze – ViriCiti Labs

ViriCiti provides insights into electric, CNG, diesel, hybrid, and hydrogen vehicles, as well as their charging infrastructure. We can, therefore, provide full insight into energy management, maintenance, route operations, and flexible charging for mixed fleets. Currently, we are working on the service ‘Smart Driving’, a tool that assists drivers in driving in an energy-efficient, comfortable, and safe way. One of the core inputs of Smart Driving is the measurement of the vehicle's acceleration, which is done via the DataHub — ViriCiti’s onboard hardware unit — which has a built-in 3D accelerometer. Before the acceleration signals can be useful to us, however, the coordinate system of the DataHub must be aligned with the vehicle. Only then can we interpret the acceleration in the x-direction (driving direction) as braking or accelerating, and only then can we interpret rotation around the z-direction (downward direction) as the vehicle turning.

The DataHub is installed by our customers and we at ViriCiti do not necessarily know the orientation of the DataHub in relation to the vehicle. The coordinate systems of the DataHub and the vehicle are therefore not necessarily aligned, as depicted in Figure 1.


Figure 1: The vehicle’s coordinate system ($x_v$, $y_v$, $z_v$) is not aligned with that of the DataHub ($x_d$, $y_d$, $z_d$).

To overcome this misalignment, we have developed a method to automatically determine the DataHub’s orientation, enabling us to align its coordinate system with the vehicle’s. The method has two steps: in the first step, we align the z-axes of the DataHub and the vehicle; in the second, we align the x- and y-axes. The principle is the same for both steps: we measure the acceleration in a known vehicle state where we know the forces acting on the vehicle, meaning that we know the acceleration vector in the vehicle’s coordinate system. In the first step, the vehicle must be stationary and positioned on a level surface. In this situation, only gravity affects the vehicle and the acceleration is $a_v = (0, 0, g)$ in the vehicle’s coordinate system. As the DataHub is misaligned, it measures some other acceleration $a_d$. Knowing how $a_v$ is expressed in the coordinate system of the DataHub, we can find a coordinate transformation that aligns the z-axes of the vehicle’s and the DataHub’s coordinate systems.

In the z-aligned coordinate system $x'_d, y'_d, z'_d$ (where $z'_d = z_v$), the x- and y-axes can still be misaligned, but this is solved in the second step. In this step, the vehicle should be braking while driving in a straight line. We then have $a_v = (-a_{x,v}, 0, g)$, which we use to find the coordinate transformation from the z-aligned DataHub coordinate system to the vehicle’s coordinate system. Combining the coordinate transformations from steps 1 and 2, we have a coordinate transformation that aligns the measurements from the DataHub with the vehicle. This allows us to use the acceleration to describe the movements of the vehicle. In the two following sections, we provide details on how each step works.


Figure 2: Step 1, rotation to align the z-axes of the vehicle and the DataHub ($z_v$ and $z_d$). Step 2, rotation around the aligned z-axis to align the x- and y-axes.

Step 1: Using Gravity
In this first step, we need to determine when the vehicle is stationary and standing on a level surface. Determining if the vehicle is stationary is quite straightforward as we have direct access to the vehicle speed. Determining if the vehicle is on a level surface, however, is more difficult, as there is no sensor reading readily available. Instead, we rely on statistics taken from many samples — no city is all uphill after all and with a sufficient number of samples the average models a level surface.
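To make this concrete, here is a minimal sketch of this averaging step in Python/numpy. The array names (`speed`, `acc`), the speed threshold, and the function name are our own illustrative choices, not part of the production pipeline:

```python
import numpy as np

def mean_stationary_acceleration(speed, acc, speed_threshold=0.1):
    """Average accelerometer samples taken while the vehicle is stationary.

    speed : (N,) array of vehicle speeds in m/s (read from the vehicle).
    acc   : (N, 3) array of accelerometer samples in the DataHub frame.

    Over many stops at many locations, slopes average out, so the mean
    approximates the gravity vector as seen by the DataHub.
    """
    stationary = speed < speed_threshold   # mask of samples taken while standing still
    return acc[stationary].mean(axis=0)    # mean measured acceleration a_d
```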

The measured acceleration $a_d$ in the DataHub’s coordinate system, expressed in the vehicle’s coordinate system, should be $a_v = (0, 0, g)$. We can now find a transformation matrix $R$ so that

$$a_v = R \cdot a_d.$$

We use $R$ to denote the matrix because we know it should be a rotation matrix: it should preserve the origin, the norm, and the orientation of the acceleration signal. Note that $R$ doesn’t fully transform the vector from the DataHub’s coordinate system to that of the vehicle. It does, however, align their z-axes or, equivalently, their xy-planes.

There is more than one way to construct a rotation matrix – Euler angles and quaternions are two popular approaches. We found these two methods to be unnecessarily complex and numerically error-prone for our purposes. Instead, we took a step back and used the theory of rotations: $SO(3)$, the group of 3D Euclidean rotations. We will sketch, with no claims of rigor, the method we settled on. $SO(3)$ is a Lie group with corresponding Lie algebra $\mathfrak{so}(3)$. If we can express our rotation in this algebra, we can generate the rotation matrix. To do this we chose the basis $L = [L_x, L_y, L_z]$ for $\mathfrak{so}(3)$:

$$L_x = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{pmatrix}, \quad L_y = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{pmatrix}, \quad L_z = \begin{pmatrix} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}$$
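Written out as a small numpy sketch, these generators and the combination $u \cdot L$ look as follows (the helper name `u_dot_L` is our own):

```python
import numpy as np

# The three basis elements of so(3): generators of rotations about x, y and z.
L_x = np.array([[0, 0,  0],
                [0, 0, -1],
                [0, 1,  0]])
L_y = np.array([[ 0, 0, 1],
                [ 0, 0, 0],
                [-1, 0, 0]])
L_z = np.array([[0, -1, 0],
                [1,  0, 0],
                [0,  0, 0]])

def u_dot_L(u):
    """The element u . L = u_x L_x + u_y L_y + u_z L_z of so(3)."""
    return u[0] * L_x + u[1] * L_y + u[2] * L_z
```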

With this basis we can identify a rotation by an angle $\theta$ around some unit vector $u$ with the element $\theta\, u \cdot L \in \mathfrak{so}(3)$. With this description of the rotation in the Lie algebra, we use the exponential map to generate the actual rotation matrix. The map is defined using the matrix exponential series

$$\mathfrak{so}(3) \to SO(3); \quad \theta\, u \cdot L \mapsto R = e^{\theta\, u \cdot L} = I + \theta\, u \cdot L + \frac{1}{2!}(\theta\, u \cdot L)^2 + \ldots.$$

The infinite series has an analytical solution because $u \cdot L$ is skew-symmetric, meaning $(u \cdot L)^3 = -u \cdot L$. The higher-order terms therefore collapse into one $u \cdot L$ term and one $(u \cdot L)^2$ term. We get

$$R = I + \left[\theta - \frac{\theta^3}{3!} + \frac{\theta^5}{5!} - \ldots\right] u \cdot L + \left[\frac{\theta^2}{2!} - \frac{\theta^4}{4!} + \frac{\theta^6}{6!} - \ldots\right](u \cdot L)^2,$$

which, recognizing the Taylor series of sine and cosine, lets us write

$$R = I + \sin\theta\, (u \cdot L) + (1 - \cos\theta)\, (u \cdot L)^2.$$
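This closed form is Rodrigues’ rotation formula, and it translates directly into a few lines of numpy. A sketch, reusing `u_dot_L` from the snippet above (the function name is our own):

```python
import numpy as np

def rotation_matrix(u, sin_theta, cos_theta):
    """R = I + sin(theta) (u . L) + (1 - cos(theta)) (u . L)^2 for a unit axis u."""
    K = u_dot_L(u)                                       # the skew-symmetric u . L
    return np.eye(3) + sin_theta * K + (1.0 - cos_theta) * (K @ K)
```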

With this formula we can return to our problem of rotating $a_d$ onto $a_v$. We rotate in the plane spanned by the two vectors, i.e. around their cross product $v = a_d \times a_v$. The $\cos\theta$ and $\sin\theta$ can be found using the geometric meaning of the dot and cross products, respectively. We identify

$$u = \frac{v}{\|v\|}, \quad \cos\theta = \frac{a_v \cdot a_d}{\|a_v\|\,\|a_d\|}, \quad \sin\theta = \frac{\|v\|}{\|a_v\|\,\|a_d\|}.$$

And with $a_d$ measured and $a_v$ assumed, we have our rotation matrix $R$, which aligns the z-axis of the DataHub with that of the vehicle.
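Putting step 1 together, a sketch of the whole computation could look as follows, reusing `rotation_matrix` from the previous snippet. The function name, the default value of $g$, and the handling of an already-aligned DataHub are our own assumptions:

```python
import numpy as np

def z_alignment_rotation(a_d, g=9.81):
    """Rotation R such that R @ a_d points along the vehicle's z-axis (0, 0, g).

    a_d is the mean acceleration measured while the vehicle is stationary,
    expressed in the DataHub frame.
    """
    a_v = np.array([0.0, 0.0, g])
    v = np.cross(a_d, a_v)                                   # rotation axis (unnormalised)
    norm_product = np.linalg.norm(a_v) * np.linalg.norm(a_d)
    cos_theta = np.dot(a_v, a_d) / norm_product
    sin_theta = np.linalg.norm(v) / norm_product
    if np.linalg.norm(v) < 1e-9:
        # a_d is already (anti-)parallel to a_v; an upside-down DataHub would
        # need a dedicated 180-degree rotation, not handled in this sketch.
        return np.eye(3)
    u = v / np.linalg.norm(v)
    return rotation_matrix(u, sin_theta, cos_theta)          # Rodrigues' formula from above
```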

Step 2: Braking
With the z-axes aligned, the next step is to align the x- and y-axes. We do this by identifying a scenario where the acceleration in the xy-plane is known to be only in the x-direction. When the vehicle is braking in a straight line, the acceleration is $a_v = (-a_{x,v}, 0, g)$ in the vehicle’s coordinate system. Having measured the acceleration in the DataHub’s coordinate system during the braking event and rotated it into the shared xy-plane of the vehicle and DataHub, we can then find the rotation matrix that completely aligns the DataHub with its vehicle.

To detect a braking event, we simply check whether the speed is decreasing rapidly. While braking, the vehicle must maintain the same driving direction (within some tolerance); to enforce this, we use the circular dispersion of the acceleration samples collected during the braking event. Only events whose dispersion stays below a threshold are accepted, so we can be sure that the vehicle is indeed braking in a sufficiently straight line.
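As an illustration, one standard way to express such a check uses the mean resultant length from circular statistics. The function name, the frame assumption, and the threshold are ours, not the exact production criterion:

```python
import numpy as np

def is_straight_braking(acc_xy, max_dispersion=0.1):
    """Accept a braking event only if the in-plane acceleration keeps a stable direction.

    acc_xy : (N, 2) array of x'- and y'-components (z-aligned frame) during braking.
    The circular dispersion used here is 1 - R_bar, where R_bar is the mean
    resultant length of the sample directions; 0 means all samples point the same way.
    """
    angles = np.arctan2(acc_xy[:, 1], acc_xy[:, 0])
    resultant = np.abs(np.mean(np.exp(1j * angles)))   # mean resultant length R_bar
    dispersion = 1.0 - resultant
    return dispersion < max_dispersion
```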

We combine the samples from a number of braking events into our final measurement $a_d$, with which we find the rotation matrix using the method outlined in step 1.
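A slightly simpler alternative sketch, possible because the z-axes are already aligned after step 1, is to rotate only about the z-axis so that the mean in-plane braking acceleration ends up pointing along $-x$. The function name and this simplification are ours, not the construction used in production:

```python
import numpy as np

def xy_alignment_rotation(a_d_braking):
    """Rotation about the z-axis aligning the mean braking acceleration with (-1, 0)
    in the xy-plane.

    a_d_braking : mean acceleration during braking, already in the z-aligned frame.
    """
    phi_measured = np.arctan2(a_d_braking[1], a_d_braking[0])  # current heading of the deceleration
    phi_target = np.pi                                         # direction of (-1, 0), i.e. braking along -x
    phi = phi_target - phi_measured                            # rotate by this much about z
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])
```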

Results
By composing the rotation matrices from step 1 and step 2, we achieve the alignment of the DataHub and the vehicle. In Figure 3 we see the difference between the acceleration signals of the unaligned DataHub and those of one that is aligned properly. As one would expect, the x- and y-components of the aligned acceleration (blue and orange) are nearly zero when the vehicle is stationary, between timesteps 750 and 1100.


Figure 3: x, y, z acceleration in the coordinate system of the DataHub (left) and the same signals transformed to the vehicle coordinate system (right).
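For illustration, composing and applying the two rotations from the earlier sketches could look like this; the numerical values are made up and the functions are the ones defined above:

```python
import numpy as np

# Hypothetical measurements, for illustration only.
a_d_stationary = np.array([0.3, -0.2, 9.78])   # mean acceleration while parked (DataHub frame)
a_d_braking = np.array([1.1, -2.0, 9.81])      # mean acceleration while braking straight (DataHub frame)

R1 = z_alignment_rotation(a_d_stationary)       # step 1: align the z-axes
R2 = xy_alignment_rotation(R1 @ a_d_braking)    # step 2: align x and y in the z-aligned frame
R = R2 @ R1                                     # full DataHub-to-vehicle rotation

a_d = np.array([0.5, -1.8, 9.9])                # one raw DataHub sample
a_v = R @ a_d                                   # the same sample in the vehicle frame
```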

The process of DataHub alignment is fully automated and currently runs on a selection of vehicles. In the near future we will roll out this new feature to all vehicles equipped with a DataHub, and users will have access to the acceleration signals, which are useful for, e.g., brake tests of new buses. In the meantime, we continue to develop Smart Driving using the acceleration signals.