Which direction am I facing?

May 25, 2016

The scenario

Let’s say Peter is carrying an Android phone. You know which direction he is facing at the beginning. You can also assume that the phone’s orientation in his pocket doesn’t change. Can you record the direction he is facing relative to the start?

This is a problem we faced during our Meerkat project, and one many developers are likely to face with the onset of augmented reality applications. Peter, in our case, was an audience member. He starts off facing the front of a circular stage, standing at its centre while three theatrical performances take place simultaneously around him. The audio he hears at any time depends on the direction he is facing, so we needed to track where he was facing at every moment during the play. The figure below illustrates the setup.

Facing direction tracking

Most Android developers will think that this is an easy task. And they would be right, if any solution would do. The right solution, however, requires you to understand a few things first.

The easy way

The easiest way is to register a listener to the Sensor.TYPE_ORIENTATION sensor. The sensor gives you back the azimuth, pitch and roll – in that order.
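For reference, here is a minimal sketch of that registration (assumed to live inside an Activity’s onCreate; the snippets further down make the same assumption):

```java
// Sketch: listening to the (now deprecated) orientation sensor.
SensorManager sensorManager =
        (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor orientationSensor =
        sensorManager.getDefaultSensor(Sensor.TYPE_ORIENTATION);

SensorEventListener listener = new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        float azimuth = event.values[0]; // degrees, 0 = magnetic north
        float pitch   = event.values[1]; // degrees
        float roll    = event.values[2]; // degrees
        // ... use the angles ...
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
};

sensorManager.registerListener(listener, orientationSensor,
        SensorManager.SENSOR_DELAY_GAME);
```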

These angles are referred to as Cardan angles and sometimes as Euler angles. The illustration below shows the azimuth (or yaw), pitch and roll angles on a phone lying on a desk with the top-end facing northwards. The angles increase clockwise while facing in the direction of the axis.

Phone pitch and roll angles

So the simple solution to this problem would be to get the angles from the sensor, record their values at the start, then subtract those starting values from the sensor readings for the rest of the performance. Right?
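In code, that naive approach might look something like this (a sketch that extends the callback above; the only subtlety handled is the wrap-around at 360°):

```java
private Float initialAzimuth = null; // captured on the first reading

@Override
public void onSensorChanged(SensorEvent event) {
    float azimuth = event.values[0];
    if (initialAzimuth == null) {
        initialAzimuth = azimuth; // remember which way Peter starts off facing
    }
    // Direction relative to the start, normalised to [0, 360)
    float relative = ((azimuth - initialAzimuth) % 360 + 360) % 360;
    // ... map `relative` onto one of the three performances ...
}
```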

The problem

Except the Sensor.TYPE_ORIENTATION sensor was deprecated in Froyo. The documentation just points to SensorManager.getOrientation(float[], float[]), a method that extracts Cardan angles given a rotation matrix. How or where do you get the rotation matrix? Well, the documentation will point you to the SensorManager.getRotationMatrix(float[], float[], float[], float[]) method. This method in turn requires accelerometer sensor values and geomagnetic values. This is where most people start losing interest and possibly just use the deprecated sensor.

The accelerometer values would be the latest values from the accelerometer on the device, while the geomagnetic values would be the latest values from the compass on the same device. The accelerometer is the same sensor that allows the device to change screen orientation whenever you turn it on its side, while the compass returns a vector that points towards magnetic north. Combining the two not only allows your phone to know where down is (from the effect of the pull of gravity on the accelerometer) but also where north is (from the compass), relative to its current orientation.
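Put together, the non-deprecated path looks roughly like this (a sketch, assuming the same callback is registered for both the TYPE_ACCELEROMETER and TYPE_MAGNETIC_FIELD sensors):

```java
private final float[] accelReading = new float[3];
private final float[] magneticReading = new float[3];

@Override
public void onSensorChanged(SensorEvent event) {
    // Keep the latest reading from each of the two sensors.
    if (event.sensor.getType() == Sensor.TYPE_ACCELEROMETER) {
        System.arraycopy(event.values, 0, accelReading, 0, 3);
    } else if (event.sensor.getType() == Sensor.TYPE_MAGNETIC_FIELD) {
        System.arraycopy(event.values, 0, magneticReading, 0, 3);
    }

    float[] rotationMatrix = new float[9];
    float[] orientation = new float[3]; // azimuth, pitch, roll in radians
    if (SensorManager.getRotationMatrix(rotationMatrix, null,
            accelReading, magneticReading)) {
        SensorManager.getOrientation(rotationMatrix, orientation);
        float azimuthDegrees = (float) Math.toDegrees(orientation[0]);
        // ... same subtraction trick as before ...
    }
}
```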

There are many issues to consider when combining values from the two sensors. The accelerometer doesn’t just sense gravity but, as its name suggests, any other accelerations too. These include the accelerations the phone experiences in Peter’s pocket due to any movements Peter might make. The compass, on the other hand, does not always point to magnetic north. It is susceptible to any magnetic fields that might be around the sensor, so anything from other electronic devices (in our experience, even the speaker on the same phone) to large pieces of metal can impact it. The compass can (and many times does) act like a rabid dog, biting onto anything that catches its attention!

It all sounds very grim at this point. How accurate are the values when we combine a noisy sensor with another noisy sensor? Well, it’s worth mentioning that the performance went well and even made it into the news. So it’s definitely possible.

The solution

Well, first and foremost, by incorporating a third sensor: the gyroscope. The gyroscope started appearing in commodity hardware much later than the accelerometer and the compass, so don’t expect to find it in older devices. It senses rotation, reporting the angular velocity around each axis (in degrees per second or radians per second). Very little other than actual rotation affects its measurements. By integrating these angular velocities over time, one can estimate how much the phone’s orientation has changed.
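As an illustration, integrating the gyroscope’s angular velocity around just the z-axis might look like this (a simplified sketch; a real implementation integrates all three axes as a rotation, which is what the rotation-vector sensor described later does for you):

```java
private static final float NANOS_TO_SECONDS = 1.0f / 1_000_000_000.0f;
private long lastTimestamp = 0;
private float headingRadians = 0; // accumulated rotation about the z-axis

@Override
public void onSensorChanged(SensorEvent event) {
    if (event.sensor.getType() != Sensor.TYPE_GYROSCOPE) return;
    if (lastTimestamp != 0) {
        float dt = (event.timestamp - lastTimestamp) * NANOS_TO_SECONDS;
        // values[2] is the angular speed around the z-axis in rad/s
        headingRadians += event.values[2] * dt;
    }
    lastTimestamp = event.timestamp;
}
```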

Some clever readers would ask, “Why not just compute the integral of the gyroscope right from the start?” To them, I say, “It doesn’t work because of the errors incurred while doing numerical integration!” In simple terms: any changes in rotation velocity that occur within the little periods between one sensor reading and the next are not included in the computation of the integral. This loss of information results in an error that builds up over time. You can reduce the error by decreasing the period between sensor readings, but that only shrinks the problem; it never eliminates it. In addition, more sensor readings mean a heavier battery drain and hence a shorter battery life. The animation below (obtained from Wikipedia) illustrates the error in computing the integral of y = x² for different step sizes (periods between samples).

Error in computing the integral of y = x² for different step sizes
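You can see the same effect numerically. A plain left-hand rectangle rule for the integral of x² over [0, 1] (whose exact value is 1/3) gets closer to the true value as the step size shrinks, but the error never quite reaches zero (a standalone sketch, not Android code):

```java
public class IntegrationError {
    public static void main(String[] args) {
        for (int n : new int[] {4, 20, 1000}) {
            double step = 1.0 / n;
            double sum = 0;
            // Left-hand rectangle rule with n rectangles over [0, 1]
            for (int i = 0; i < n; i++) {
                double x = i * step;
                sum += x * x * step;
            }
            System.out.printf("n=%4d  integral~%.5f  error=%.5f%n",
                    n, sum, Math.abs(sum - 1.0 / 3.0));
        }
    }
}
```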

So now what? Well, consider the strengths and weaknesses of the three sensors. The accelerometer senses where down is but is impacted by any (linear) movements. The compass senses where north is but is susceptible to any magnetic fields that might be around the sensor. The gyroscope can sense any changes in orientation but only for short periods of time. Can we combine these three sensors to get an accurate orientation? How?

Well, we start by noticing that, for the accelerometer, the direction of gravity rarely changes (except when the device gets rotated). So linear motions come and go, but the direction of gravity stays the same. This is prime ground for the application of a low-pass filter. What does a low-pass filter do? Given the data from the sensor, it gets rid of the short-lived changes in the data and leaves behind the longer-lived changes. A high-pass filter does the opposite: it keeps the short-lived changes and removes the longer-lived ones. This happens to be what we need for the gyroscope. By combining the long-lived changes from the accelerometer and compass with the shorter-lived changes from integrating the gyroscope values, we get values that are more accurate than any one sensor alone.
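One common way to do this blending is a complementary filter: high-pass the integrated gyroscope, low-pass the accelerometer/compass orientation, and add the two. A toy, one-axis sketch (the 0.98/0.02 weighting is an illustrative choice, and angle wrap-around is ignored):

```java
// gyroRate: angular speed around the z-axis in rad/s (from the gyroscope)
// accelMagHeading: heading in radians from getRotationMatrix()/getOrientation()
// dt: seconds since the previous sample
private float fusedHeading = 0;

private void updateHeading(float gyroRate, float accelMagHeading, float dt) {
    // High-pass component: short-term changes from integrating the gyroscope.
    float gyroHeading = fusedHeading + gyroRate * dt;
    // Low-pass component: the slowly varying accelerometer + compass reference.
    fusedHeading = 0.98f * gyroHeading + 0.02f * accelMagHeading;
}
```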

You will notice that the SensorManager.getRotationMatrix(float[], float[], float[], float[]) method doesn’t do any of this low-pass or high-pass filtering. If you were to code the solution, you would have to do it yourself. That sounds like a lot of work. It’s even more work to get it working properly, since you’ll likely end up needing algorithms like Kalman filters, which are known to few software engineers and understood by even fewer.

The answer

Luckily, as an Android developer, you don’t have to. If your app is targeting Gingerbread and above, all you have to do is register for values from the Sensor.TYPE_ROTATION_VECTOR sensor. This composite sensor gives you back the orientation of the phone after the complicated math is done. Nice, right?
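Registration looks the same as for any other sensor, and the rotation vector it delivers converts straight into a quaternion (a sketch, again assumed to live inside an Activity):

```java
SensorManager sensorManager =
        (SensorManager) getSystemService(Context.SENSOR_SERVICE);
Sensor rotationVector =
        sensorManager.getDefaultSensor(Sensor.TYPE_ROTATION_VECTOR);

sensorManager.registerListener(new SensorEventListener() {
    @Override
    public void onSensorChanged(SensorEvent event) {
        // event.values encodes the device orientation as a rotation vector;
        // this helper turns it into a unit quaternion {w, x, y, z}.
        float[] quaternion = new float[4];
        SensorManager.getQuaternionFromVector(quaternion, event.values);
        // ... use the quaternion ...
    }

    @Override
    public void onAccuracyChanged(Sensor sensor, int accuracy) { }
}, rotationVector, SensorManager.SENSOR_DELAY_GAME);
```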

Well, just one further complication: the Sensor.TYPE_ROTATION_VECTOR sensor gives you unit quaternions and not Cardan angles. This is probably where many readers will go, “What the heck is a quaternion?”

Take a look at the spatial rotation article on Wikipedia and the Quaternion article on Wolfram for a deeper understanding of quaternions. To summarise, they are an extension of complex numbers that, like rotation matrices, allow rotations to be combined continuously, more accurately than Euler angles and without gimbal lock. They are also more efficient, in terms of both processing time and memory, than rotation matrices. You can find all the quaternion operations conveniently coded up in the Apache Math3 library’s Rotation class.
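For instance, the quaternion obtained from the rotation-vector sensor drops straight into a Rotation object (a sketch; the variable names are illustrative):

```java
import org.apache.commons.math3.geometry.euclidean.threed.Rotation;
import org.apache.commons.math3.geometry.euclidean.threed.Vector3D;

// q = {w, x, y, z}, as returned by SensorManager.getQuaternionFromVector()
Rotation deviceRotation = new Rotation(q[0], q[1], q[2], q[3], true /* normalise */);

// Composition, inversion and rotating vectors are all one-liners:
Rotation inverse = deviceRotation.revert();                       // the opposite rotation
Rotation composed = deviceRotation.applyTo(inverse);              // composition (identity here)
Vector3D rotated = deviceRotation.applyTo(new Vector3D(1, 0, 0)); // rotate a vector
```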

Keep in mind that the fixed frame of reference in Android has its x-axis pointing east, its y-axis pointing towards magnetic north and its z-axis pointing upwards. The diagram below (with the globe depicting the Earth) from Android’s developer documentation illustrates this.

Android’s world frame of reference: x-axis east, y-axis magnetic north, z-axis up

Back to Peter

So, back to the original problem: how do you work out which way Peter is facing? Well, first you register a listener on the Sensor.TYPE_ROTATION_VECTOR sensor. Then you capture and save the orientation at the beginning (we called this the calibration phase). And finally, you apply the conjugate of that original orientation to each subsequent reading (to get the orientation relative to the front of the stage) and extract the rotation around the z-axis (i.e. the azimuth).
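A rough sketch of that sequence, using the Apache Math3 Rotation class (this is only illustrative and is not the GitHub snippet linked below; whether the inverse goes on the left or the right of the composition depends on your frame conventions, and error handling is omitted):

```java
import org.apache.commons.math3.geometry.euclidean.threed.Rotation;
import org.apache.commons.math3.geometry.euclidean.threed.RotationOrder;

private Rotation calibration = null; // inverse of the orientation at the start

@Override
public void onSensorChanged(SensorEvent event) {
    float[] q = new float[4];
    SensorManager.getQuaternionFromVector(q, event.values);
    Rotation current = new Rotation(q[0], q[1], q[2], q[3], true);

    if (calibration == null) {
        // Calibration phase: Peter is known to be facing the front of the stage.
        calibration = current.revert();
    }

    // Orientation relative to the starting orientation.
    Rotation relative = current.applyTo(calibration);

    // Rotation around the z-axis, i.e. the azimuth, in radians.
    double azimuth = relative.getAngles(RotationOrder.ZYX)[0];
    // ... map `azimuth` onto the performance Peter is currently facing ...
}
```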

Code Snippet on GitHub


Didn’t I tell you it was easy?

Author: Umran Abdulla, Software Engineer - Android