MoiréTag: Angular Measurement and Tracking with a Passive Marker

Stable, low-cost, and precise visual measurement of directional information has many applications in domains such as virtual and augmented reality, visual odometry, and industrial computer vision. Conventional approaches like checkerboard patterns require careful pre-calibration and can therefore not be operated in snapshot mode. Other optical methods like autocollimators offer very high precision but require controlled environments and are hard to take outside the lab. Non-optical methods like IMUs are low-cost and widely available, but suffer from high drift errors. To overcome these challenges, we propose a novel snapshot method for angular measurement and tracking with Moiré patterns that are generated by binary structures printed on both sides of a glass plate. The Moiré effect amplifies minute angular shifts and translates them into spatial phase shifts that can be readily measured with a camera, effectively implementing an optical Vernier scale. We further extend this principle from a simple phase shift to a chirp model, which allows for full 6D tracking as well as estimation of camera intrinsics such as the field of view. Simulation and experimental results show that the proposed non-contact object tracking framework is computationally efficient and that its average angular accuracy of 0.17° outperforms the state of the art.


INTRODUCTION
We introduce an optical marker to accurately measure angular and directional information. This carefully designed encoded optical element, called MoiréTag, is based on the Moiré effect [Amidror 2009; Gabrielyan 2007; Oster and Nishijima 1963], which occurs when two periodic structures interfere in a multiplicative way. Minute changes in the relative offsets of the two structures manifest themselves as macroscopic phase shifts of the low frequencies in the Moiré pattern. In the two-layer design of MoiréTag, these offsets correspond to the parallax under different observation directions. A subtle shift of one layer over another causes apparent visual movement of the superposed Moiré patterns along different directions. The movement amplification property of the Moiré effect has been widely utilized in measuring instruments for various applications such as microscopes, telescopes, micrometers, and Vernier scales (see Fig. 2 (b)). More recently, works on camera tracking for 3-DoF translations [Banks et al. 2019; Xiao and Zheng 2021] and camera pose estimation for 6-DoF extrinsic parameters [Ning et al. 2022] have also applied the magnification attributes of Moiré patterns. However, the Moiré effect has not been utilized for high-precision angular and directional measurements, which is the main focus of this work. To the best of our knowledge, this is the first work that exhaustively utilizes the Moiré effect's coarse-to-fine frequency properties to achieve high-precision object angular measurements. In addition, we discuss extensions of the basic approach to estimate positional information as well as camera intrinsics from a single snapshot image. Specifically, we make the following contributions:
• We introduce a novel, stable, low-cost, computationally efficient, and high-accuracy object tracking method. It works out of the box without camera calibration.
• We implement the coarse-to-fine scaling structure of a Vernier scale that can measure the subtle motion of an object.
• Our model enables accurate angular estimation over a wide angular range and a long measuring distance. A particular hyperbolic chirp model can additionally recover the capturing distance, the camera's intrinsic parameters, and the field of view (FOV).
• Simulation and experiment show a snapshot angular measurement accuracy of 0.17° as well as a positional accuracy of 3.04.

RELATED WORK
Optical tracking: Checkerboards [Zhang 2000] or fiducial markers such as ArUco [Garrido-Jurado et al. 2014], ARTag [Fiala 2005], STag [Benligiray et al. 2019], AprilTag [2016], or ChromaTag [DeGol et al. 2017] are commonly used for positional tracking of objects or cameras in a variety of settings, including robotics [Song et al. 2017], visual odometry [Harmat et al. 2015], or AR/VR systems. These approaches achieve good accuracy in object position; however, rotational or orientation information can be quite poor and depends heavily on careful calibration of the camera intrinsics, as well as a good distribution of the fiducials over the full field of view. These are undesirable properties for a number of reasons. First, the need for calibration complicates work outside the lab, since the calibration changes with focus or zoom. Second, good coverage of the field of view reduces the ability to track small objects at a larger distance. Other optical methods exist for angular measurements, including, for example, autocollimators [Chen et al. 2017; Geckeler et al. 2012]. These offer superior sensitivity and precision; however, they are even more difficult to deploy outside controlled environments. In this work, we aim to achieve high angular precision while maintaining the simplicity of marker-based tracking. Our approach can be extended to position tracking and simultaneous estimation of intrinsics from a single image.
Deep-learning-based approaches: Deep-learning-based methods [Liu et al. 2020; Morency et al. 2008; Parameshwara et al. 2022; Ruiz et al. 2018] have become popular with the rapid growth of trainable network architectures. These methods strongly depend on scene structure for 3D motion recovery and require huge training datasets with complex training frameworks. It is also difficult for such approaches to measure subtle motion with high accuracy and to deliver a generalized model. Many object tracking systems in AR and VR, such as Microsoft HoloLens [Microsoft. 2018] and Oculus Quest 2 [Technologies. 2020], apply Simultaneous Localization and Mapping (SLAM) [Campos et al. 2021; Saputra et al. 2018; Younes et al. 2017] and visual odometry [Scaramuzza and Fraundorfer 2011; Yang et al. 2022]. They recover the camera pose from camera-captured images. However, the performance of SLAM and visual odometry depends on the environment's lighting conditions, static scene structure, and scene texture [Eger Passos and Jung 2020; Hübner et al. 2020] to extract apparent motion, and the results are unreliable [Saputra et al. 2018] in some scenarios.
Moiré patterns for position and angle measurement: An early example of utilizing the Moiré effect for angular measurement is the Inogon [Lars A. Bergkvist 1983] (see Fig. 2 (c)), a passive visual indicator for steering ships in narrow shipping lanes. Hideyuki et al. [Tanaka et al. 2012, 2014] propose to apply Moiré patterns to improve orientation accuracy under frontal observation and to resolve pose ambiguity.
More recent work explores two layers of line gratings for 3D position measurement [Banks et al. 2019]; however, the approach has a limited working volume, the Moiré patterns are very large compared to the working volume, and the camera has to be frontoparallel to the two planes. Some of these restrictions have later been relaxed [Xiao and Zheng 2021] to allow for tilted camera angles. In comparison to these methods, our method offers thin markers with a small volume that work at a large standoff distance, and it primarily targets angular measurements. In very recent work, Ning et al. [2022] propose to use the Moiré pattern resulting from re-photographing a monitor with a digital camera for 6-DoF pose estimation. This approach requires careful calibration of the specific camera and display combination. With a reported angular error of multiple degrees and a positional error of multiple cm, their accuracy is substantially lower than our results using an uncalibrated camera. For completeness, we would like to mention that the Moiré effect has also found various other applications in imaging and graphics, including for security purposes such as a watermark-like technique [Cheng et al. 2021], a secure QR-based communication method [Pan et al. 2019], and face-spoofing detection [Garcia and de Queiroz 2015]. Another line of work aims to minimize Moiré fringes to improve the quality of images recaptured from display monitors [He et al. 2020; Sun et al. 2018; Yu et al. 2022].

OVERVIEW
The operating principle of MoiréTag is based on the Moiré effect, which is illustrated in Fig. 3: when two square waves with slightly different frequencies superimpose in a multiplicative fashion, the resulting product wave consists of a mixture of different frequencies, including a low base frequency whose phase magnifies and encodes the relative offset between the two signals. In MoiréTag, we generate this Moiré effect using two periodic stripe patterns printed onto the two sides of a transparent glass slide, as depicted in Fig. 1. The parallax corresponding to different observation directions therefore causes an offset between the two stripe patterns, which is encoded and magnified by the Moiré effect and can therefore be measured with a camera.
Since a single Moiré pattern can only encode a limited range of offsets, we propose a coarse-to-fine approach that employs multiple frequencies (Sec. 4), enabling us to achieve both a wide measuring range and high angular accuracy. Although we have derived an analytical model for mapping individual phase measurements to angles, fusing the information provided by the individual frequency patterns is a complex task. This complexity arises from changes in the reliability of each phase measurement with angle, camera standoff distance, camera resolution, motion blur, and varying noise levels (examples are shown in Fig. 13). The effects of all these hidden parameters are difficult to model analytically. We therefore opt instead to train a simple Multi-Layer Perceptron (MLP) to handle the data fusion between the individual raw phase measurements (Sec. 5). Finally, we extend the basic Moiré framework to a hyperbolic chirp model for full 6-DoF measurement as well as camera intrinsic calculation (Sec. 6).

Mapping Between Angles and Spatial Offsets
The physical realization of MoiréTag is to print the two-layer designs from Fig. 7 onto the two sides of a thin glass wafer. We need to account for refraction at the glass-air interface when mapping the measured raw Moiré phase to physical angles. In our prototype, the glass wafer has a thickness of d = 510 µm, and the glass's refractive index is n₁ ≈ 1.46. Considering Snell's law [Shirley 1951], the relationship between the camera viewing angle θ and the offset s between the two superposed Moiré layers is s = d·tan(arcsin(sin θ / n₁)) (see Fig. 4).
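The angle-to-offset mapping above can be sketched numerically. The function name and the default parameter values below are our own (a sketch using the wafer parameters stated in the text, not the authors' code):

```python
import math

def angle_to_offset(theta_deg, d=0.510, n1=1.46):
    """Map the camera viewing angle (degrees) to the lateral offset
    between the two printed layers, accounting for refraction inside
    the glass via Snell's law. d is the wafer thickness (mm, 510 um)."""
    theta = math.radians(theta_deg)
    # Refracted angle inside the glass: sin(theta) = n1 * sin(theta_r)
    theta_r = math.asin(math.sin(theta) / n1)
    # Parallax offset accumulated across the wafer thickness
    return d * math.tan(theta_r)

# A head-on view produces no offset; oblique views produce parallax.
print(angle_to_offset(0.0))   # 0.0
print(angle_to_offset(30.0))  # small positive offset in mm
```

Because of the arcsin, the mapping is monotonic over the physically reachable angular range, so it can be inverted to recover the viewing angle from a measured offset.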

ANGLE MEASUREMENT VIA MOIRÉTAG
In the following, we derive the mathematical model for the phase magnification and encoding due to the Moiré effect. Since we have previously established a straightforward mapping between angles and spatial offsets, we make all derivations with respect to spatial offsets instead of angles for simplicity of notation.

Phase Estimation from Moiré Patterns
The binary patterns consist of an even area proportion of black-and-transparent linear stripes, hence having constant frequencies.
In 1D, each of the two layers is modeled as a square wave s_f(x) with frequency f. The Fourier series expansion of a square wave s_f(x) is

s_f(x) = 1/2 + (2/π) Σ_{k=1,3,5,…} (1/k) sin(2πkf x).    (1)

The product of two square waves s_f(x) and s_{f+ε}(x + Δx) with frequencies f and f + ε generates a superposed binary Moiré pattern, where the offset Δx can be geometrically mapped to the parallax angle between the two layers. The Fourier expansion of the observed Moiré pattern is then given as

s_f(x) · s_{f+ε}(x + Δx) = 1/4 + a cos(2πεx + φ) + (higher-frequency terms),    (2)

where a is a scaling factor 2/π², φ = 2π(f + ε)Δx, and we have made use of the trigonometric identity sin α · sin β = ½[cos(α − β) − cos(α + β)]. We can see that the lowest frequency component of the Moiré pattern corresponds to the frequency difference ε. Many high-frequency terms are also present, but they are either suppressed through limited camera resolution, e.g. at large standoff distances from the pattern, or they can be filtered out post capture. Moreover, the phase offset φ = 2π(f + ε)Δx in the low-frequency term encodes and magnifies the shifted geometric alignment of the grating patterns. The observable (macroscopic) phase shift of the lowest frequency therefore encodes the microscopic offset Δx of the two layers due to parallax. The Moiré design utilizes this effect to achieve high-precision angular measurements.
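The magnification encoded by the low-frequency term is easy to verify numerically. The sketch below (our own, with arbitrarily chosen frequencies) multiplies two binary square waves and reads the phase of the beat frequency off an FFT; a shift of one thousandth of the pattern width produces the predicted phase shift 2π(f + ε)Δx:

```python
import numpy as np

# Two binary square-wave "layers" with frequencies f and f+eps (cycles
# over the unit interval). Shifting one layer by dx shifts the phase of
# the low-frequency Moire component by 2*pi*(f+eps)*dx -- a large,
# easily measured phase shift caused by a microscopic offset.
N, f, eps, dx = 4096, 40, 1, 0.001
x = np.arange(N) / N
layer1 = (np.sin(2 * np.pi * f * x) > 0).astype(float)
layer2 = (np.sin(2 * np.pi * (f + eps) * (x + dx)) > 0).astype(float)
moire = layer1 * layer2                     # multiplicative superposition

spectrum = np.fft.rfft(moire)
measured_phase = np.angle(spectrum[eps])    # phase at the beat frequency
predicted_phase = 2 * np.pi * (f + eps) * dx

print(measured_phase, predicted_phase)      # nearly identical
```

Note that a Δx of 0.1% of the tag width yields a phase of roughly 0.26 rad, i.e. the Moiré pattern visibly moves while the underlying gratings barely shift.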

Phase Unwrapping Design
Unfortunately, the Moiré magnification effect also implies that the measurement range is quite limited due to phase wrapping. This is similar to the phase wrapping problem encountered in time-of-flight 3D cameras. The solution to increase the measurement range is to combine different Moiré patterns with a variety of base frequencies f and frequency differences ε, and then to fuse the phase information obtained from these individual measurements. This approach is also inspired by the coarse-to-fine approach of classical Vernier scales.
In the following we describe how to select these frequency pairs such that a large angular range is achieved and the contrast in the individual Moiré patterns remains sufficiently high for detection from a large variety of observation distances.
Contrast maximization. Since we would like MoiréTag to work over a large range of distances as well as angles, one design criterion is to preserve sufficient contrast in the lowest frequency over a wide range of magnification levels. The exact working volume will depend on factors such as camera resolution and zoom; however, we can optimize the design without considering these factors by analyzing the contrast of the Moiré pattern under different downsampling factors. Within a local area, the maximum intensity of the pattern is 0.5, which occurs when the lines of the two layers align perfectly. On the other hand, if the lines of one layer align with the gaps in the other layer, the local average intensity reaches its minimum of 0. To manually optimize the frequencies, we calculate the average area intensity of the superposed Moiré pattern per pixel and plot the interference intensity variation of each pixel (see the first row in Fig. 5) as the camera location translates horizontally from the left (negative degrees) through the middle to the right (positive degrees). Each colored line in Fig. 5 represents a specific pixel, here 14 pixels in total with a maximum intensity change within the full angular range (from −75° to 75°). Almost all of the pixels cover a wide intensity range, which means that the grated Moiré pattern changes visibly under the projection of a moving camera. We then repeat this procedure until appropriate Moiré pairs are found. When the camera is located at (0, 0, z_w), the camera's rotation matrix is the identity and the camera angle is zero.
Phase unwrapping. The phase φ in the low-frequency term can be taken as a unique signature of the camera's angular position. The distribution of phases from a single Moiré pair repeats over a wide angular range, and the frequency of this repetition depends on the periods of the two Moiré layers. A Moiré pair with long periods has a large unambiguous phase range, while a pair with short periods has a small but finely resolved, repetitive range. We propose to design seven different Moiré pairs with phase repetitiveness from coarse to fine and frequencies from low to high (Fig. 6). Hence, each camera viewing angle corresponds to a unique phase combination. Our coarse-to-fine mutual correction design allows phase correction between each periodic pair (Fig. 1) and guarantees the camera's angular accuracy.
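A minimal sketch of this coarse-to-fine idea (with hypothetical periods; the paper fuses seven Moiré pairs, and uses an MLP rather than this closed-form cascade, see Sec. 5): the coarse pair fixes the integer wrap count of each finer pair, Vernier-style, so the final estimate combines the range of the coarse pair with the resolution of the fine pair.

```python
import numpy as np

def unwrap_coarse_to_fine(phases, periods):
    """Fuse wrapped phases measured against several periods, ordered
    from the coarsest (largest) period to the finest. Each coarser
    estimate resolves the 2*pi ambiguity of the next finer phase."""
    estimate = phases[0] / (2 * np.pi) * periods[0]
    for phi, period in zip(phases[1:], periods[1:]):
        fractional = phi / (2 * np.pi) * period   # offset mod period
        k = round((estimate - fractional) / period)  # wrap count
        estimate = fractional + k * period
    return estimate

periods = [100.0, 10.0, 1.0]                      # coarse -> fine
true_offset = 37.6180
phases = [(2 * np.pi * true_offset / p) % (2 * np.pi) for p in periods]
print(unwrap_coarse_to_fine(phases, periods))     # ~37.618
```

In the noise-free case the cascade recovers the offset exactly; with noise, each coarse estimate only needs to be accurate to within half of the next finer period.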

MoiréTag Layout
Finally, we combine two identical groups of seven Moiré patterns into a marker layout to measure both horizontal and vertical angles (Fig. 7). The Moiré markers furthermore contain four corner markers in the style of QR codes, which allow for the automatic detection and approximate localization of the tags in images. We additionally introduce small bowtie markers that allow for subpixel-accurate localization of the Moiré patterns for accurate phase estimation.
The bowties define a homography that is used to resample each Moiré group into a rectified configuration before further processing.
The front and back layers of the MoiréTag are shown together with the superimposed Moiré effect in Fig. 7. The design fabrication is discussed in Sec. 7.

INFORMATION FUSION BY MLP

Information Fusion
We previously discussed in Sec. 3 that combining information from the individual frequency patterns is a complicated task that involves complex dependencies, such as the capturing distance, which are difficult to model analytically. To address this, we chose a small Multi-Layer Perceptron (MLP) neural network (Fig. 8) to handle the data fusion between the individual raw phase measurements. The MLP takes the 7 raw phase measurements as input. The first layer, two hidden layers, and the output layer have 90, 50, 15, and 1 neurons, respectively. Each layer is followed by a ReLU activation function.
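A forward-pass sketch of this architecture with random, untrained weights. Unlike the description above, the output layer is left linear here so that negative angles remain representable; this is an illustration of the layer sizes, not the trained network:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

# Layer sizes from the paper: 7 raw phases in, 90/50/15 hidden, 1 out.
# Weights are random (He-style init) purely for illustration.
sizes = [7, 90, 50, 15, 1]
params = [(rng.standard_normal((m, n)) * np.sqrt(2.0 / m), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

def predict_angle(phases):
    h = np.asarray(phases, dtype=float)
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:   # ReLU on hidden layers only here,
            h = relu(h)           # so the output can be negative
    return h[0]

print(predict_angle([0.1, 1.2, 2.3, 0.4, 1.5, 2.6, 0.7]))
```

The network is tiny (roughly 6k weights), which is consistent with the paper's claim of a computationally efficient pipeline.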
Given a camera image, we first extract rectified versions of the groups of Moiré patterns. In the rectified image, the spatial frequency of each pattern is independent of the observation conditions, and we can obtain an initial raw phase estimate for each frequency by fitting a sinusoid of the appropriate frequency according to Eq. 2. The resulting 7 raw phase measurements are processed by the MLP to calculate the angle. The neural network is trained on a dataset that is obtained through a combination of simulation and physical data collection. In particular, it is difficult to set up an accurate 3D rotation system for data collection. We therefore simulate different rotation angles, display the resulting MoiréTag images on a monitor, and re-image those scenes with a camera mounted on a translation stage for accurate measurements over a large range of distances. We believe that this process combines the best of both worlds: a relatively simple physical setup with real camera characteristics, and the flexibility of simulation.
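The per-frequency raw phase estimate can be obtained by projecting the rectified 1D intensity profile onto a cosine and sine of the known frequency, which is the least-squares fit for a sinusoid of fixed frequency. This is our own sketch (function name and test signal are hypothetical), not the authors' extraction code:

```python
import numpy as np

def fit_phase(samples, freq):
    """Least-squares phase of a sinusoid of known integer frequency
    (cycles over the profile) in a 1D intensity profile: project onto
    cos/sin and take atan2. DC offset drops out by orthogonality."""
    n = len(samples)
    x = np.arange(n) / n
    c = samples @ np.cos(2 * np.pi * freq * x)
    s = samples @ np.sin(2 * np.pi * freq * x)
    return np.arctan2(-s, c)   # phase phi of cos(2*pi*freq*x + phi)

# Synthetic rectified profile: DC offset + low-frequency Moire term.
x = np.arange(1000) / 1000
profile = 0.25 + 0.1 * np.cos(2 * np.pi * 3 * x + 1.0)
print(fit_phase(profile, 3))   # ~1.0
```

Repeating this fit for each of the seven pattern frequencies yields the raw phase vector that the MLP fuses into an angle.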
For each simulated angular pose we capture 40 images covering a wide distance range. The complete input training dataset consists of phases extracted from 203 × 40 = 8120 captured images. Each extracted Moiré pattern of the captured images contains 7 phases, which correspond to a known angular pose. We use the Adam optimizer to train the MLP network with a learning rate of 5 × 10⁻³ and a batch size of 2800 for 2500 epochs. Fig. 2 in the supplement validates the stability of the proposed model by plotting all the phases from the training dataset. The accuracy with respect to the viewing angle is 0.17°.

Validation with Real Data
The orthogonal design of the two Moiré patterns allows us to measure both roll and pitch. Since the two patterns are identical, we analyze only roll measurement in the experiments. In real-scene experiments, it is hard to quantitatively analyze the precision of the trained model by moving the camera toward the MoiréTag. Hence, we fix the position of the camera while rotating the rotation stage at an arbitrary location to capture the MoiréTag, and we extract the phases of the horizontal pattern for the first angular prediction θ₀. Then we advance the rotation stage in constant increments Δθ and snapshot the target over a wide angular range to collect the experiment dataset. We compare each prediction against θ₀ plus the accumulated Δθ over the whole dataset to calculate the root mean square error. A quantitative evaluation of the trained model's overall accuracy is performed by capturing several groups of angular images at distances ranging from 1 m to 4 m (see Fig. 11 and Fig. 1 in the supplementary). By varying the measuring distance and viewing angle, we obtain an error distribution (see Fig. 12, middle). Fig. 11 shows one of the viewing angle estimation results from 65.20° to −65.28°. The purpose of these large-range experiments is to validate the stability of the trained model. The RMSE over the above wide angular range is 0.1719° (Fig. 12, left), which shows that the model can cover a wide measurement range with high credibility. The overall angular accuracy from multiple real experiments is approximately 0.17°, which is close to the training results. We demonstrate the precision of MoiréTag using minute changes in angle rather than drastic motions. In addition, we fix the target location and rotate the camera to acquire a dataset that shows the practicability of the proposed approach (see the supplemental video).

HYPERBOLIC CHIRP MODEL FOR 6-DOF TRACKING
Hyperbolic chirp. The phase shift analysis presented above assumes that the MoiréTag is observed with an orthographic camera, so that the front and back layers have the same relative scale. This is an accurate model for determining directional information at larger standoff distances. However, if the marker is large compared to the camera distance, the two layers are scaled differently, and, especially at oblique angles, perspective foreshortening results in spatially varying frequencies of the line patterns from each layer.
As a motivating example, consider Fig. 9 (a), which shows an example from our experiments, where the marker has been rotated horizontally relative to the camera by 49.88°. The orange line in Fig. 9 (b) represents a top view of the horizontal Moiré pattern, in which perspective foreshortening causes a spatial variation in the frequency, in this case decreasing from left to right. A frequency that changes over time is known as a chirp. If we extract the horizontal Moiré pattern without rectifying the horizontal ("roll") rotation (see Fig. 10 (b)), the frequencies of the superposed Moiré pattern become inconsistent. We show in the following that this perspective effect can be modeled as a chirp with a hyperbolic change in frequency. The hyperbolic chirp model can be used to obtain positional and angular information as well as camera intrinsics. To derive this model, we consider how the Moiré frequencies from Eq. 2 are mapped from world space to image space under a perspective transformation. The top view of our model in Fig. 9 (b) shows the setup between the camera and the horizontal Moiré pattern (orange), which is extracted without rectifying the "roll" rotation. According to the pinhole camera projection, two coordinate points p and q (negative when sitting on the left side of the mask) on the Moiré pattern are mapped onto a horizontal plane that is parallel to the camera image plane. We assume the z_w-axis of the camera is aligned with the middle of the mask, so the rotation matrix of the camera remains the identity. Hence, the superposed Moiré pattern's period in the image plane, T_I, depends on the projected point positions and the focal length f_c. Combining these relations yields a full representation of the period T_I on the image plane (Eq. 3). Eq. 3 shows that the frequency (1/T_I) of the grated Moiré pattern with respect to the point p is a hyperbolic curve. From the low-frequency term, the hyperbolic chirp model then follows as a function of the coordinate p. To evaluate the overall accuracy of the chirp model, we fix the monochromatic camera on a tilted linear slider and adjust the distance relative to the fixed MoiréTag. We first place the camera close to the marker and then move the camera backward over a certain distance, capturing the MoiréTag at each point. Then we move the camera forward along the same path on the slider. Fig. 4 in the supplementary shows the distance and focal length accuracy.
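The foreshortening-induced chirp is easy to reproduce with a toy pinhole projection (all numbers below are hypothetical, not the paper's setup): equally spaced grating lines on a tilted plane project to image positions with monotonically shrinking spacing, i.e. a spatially varying frequency of the kind the chirp model captures.

```python
import numpy as np

# Top-view sketch: a grating of period P lies on a plane rotated by
# theta, centered at distance z in front of a pinhole camera with
# focal length fc (arbitrary units).
fc, z, P, theta = 1000.0, 500.0, 2.0, np.radians(40)
k = np.arange(-40, 41)                     # grating line indices
xw = k * P                                 # line positions on the marker
# Pinhole projection of the rotated line positions onto the image axis
u = fc * (xw * np.cos(theta)) / (z + xw * np.sin(theta))

spacing = np.diff(u)                       # local image-space period
print(spacing[0], spacing[-1])             # unequal: the frequency chirps
```

The projected spacing decreases monotonically toward the foreshortened side, so the local frequency 1/T rises across the image rather than staying constant; rectifying the pattern, or fitting the chirp directly, recovers the geometry.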

EXPERIMENTS
Prototype fabrication, cameras, and setup: To achieve a compact measuring instrument, we fabricated the prototype MoiréTag on both sides of a 4-inch fused silica wafer using state-of-the-art photolithography techniques. The mask patterns are etched into chromium layers to achieve high absorption. However, since chromium is highly reflective, a 100 nm layer of silicon dioxide (SiO₂) is first deposited by PECVD (plasma-enhanced chemical vapor deposition) on both sides of the wafer. The SiO₂ film works as a light absorber to reduce interreflection between the chromium layers. The vertical and horizontal Moiré patterns are aligned using the front-to-back alignment technique available on the contact aligner (6200∞).
The training data was collected with a monochrome camera from Lucid Lab that contains a Sony IMX392 CMOS sensor. The pixel size of the camera is 3.45 µm and the sensor diagonal is 7.9 mm. In our other experiments, we also used different cameras, including the built-in camera of a DJI Mavic 3 UAV, without retraining the network. The experimental setups used a 180 mm translation stage from Zaber, and the real validation experiments used a rotation stage from Thorlabs.

COMPARISONS OF OBJECT TRACKING
Next, we compare our work to marker-based object tracking methods. Specifically, we chose the classical checkerboard pattern [Zhang 2000], the relatively recent fiducial marker system ArUco [Garrido-Jurado et al. 2014], and the state-of-the-art Moiré-guided method [Ning et al. 2022]. In real experiments, with rotation and translation stages, we measure the translations and rotations separately. We test each method at different capture distances (in steps of Δd = 1 m) to better reflect the overall accuracy. The distance range is from 1.0 to 4.0 meters. All of the tracking methods produce errors to some extent; our method behaves more stably in the experiment and its estimation accuracy is high. To measure translation accuracy, we fix the target pose and put the camera on a translation stage to adjust the distance and position relative to the target. Regarding computational cost, the checkerboard is fast but generates large errors if the intrinsics are estimated from the same image, as in our approach. ArUco requires recovering the camera's intrinsic parameters before tracking. The authors of MoiréBoard [Xiao and Zheng 2021] state that it can achieve good positional accuracy; however, it is limited in its ability to measure angles. Additionally, its Moiré pattern requires adjustments to accommodate varying camera distances and measuring ranges. The comparison results show that our proposed method is competitive with state-of-the-art snapshot position tracking, and offers an order of magnitude improvement in the quality of angle measurements (see Table 1).

APPLICATION SCENARIOS
We show three experimental results to illustrate the application value of the proposed angular measurement system.
Camera angular retrieval. In the first experiment, we use the MoiréTag to retrieve the camera angle on an iPhone 11. We move the camera from approximately the middle of the fixed MoiréTag to the left, then upward by a certain distance, and then to the right. After that, we move the phone in the opposite direction along approximately the same path. Fig. 14 (a) shows the angular movement; the middle of the angular figure illustrates the direction reversal.
Object tracking. In the second and third experiments, we track the angular position of a flying drone with the aid of MoiréTag. The drone is a DJI Mavic 3, which carries a 4/3 CMOS Hasselblad camera. The experimental details are shown in Fig. 14 (b) and (c) (see also the corresponding supplemental video). The results show that we can retrieve the angular variation in both indoor and outdoor flight tests.

CONCLUSION
We present MoiréTag, a novel vision-based marker that uses Moiré patterns for high-accuracy object angular measurement. A mutual correction permutation of the Moiré patterns makes the estimation more robust and extends the working volume. MoiréTag is a scalable design that can cover large capturing distances and a wide range of viewing angles with high sensitivity. Moreover, MoiréTag has a thin form factor, making it applicable in virtually all scenarios where 2D markers are used today, including AR/VR tracking, autonomous vehicle tracking, and visual odometry for UAVs. The unique design for 6-DoF object tracking provides a snapshot procedure without calibration or complex environmental control, in contrast to other state-of-the-art methods. Unlike other object tracking methods, we derive a hyperbolic chirp model to acquire the camera's intrinsic parameters, the capturing distance, and the field of view. We hope our proof of concept can motivate further research on high-precision angular measurement.

Figure 2: (a) Multiplicative combination of two periodic structures with slightly different frequencies introduces a low-frequency component known as the Moiré effect. The phase of this low-frequency component is very sensitive to minor changes in the offset between the two periodic structures. (b) The Moiré effect is similar to the Vernier scale, which combines two scales of slightly different period to provide high-accuracy measurements. (c) The Inogon [Bergkvist and Forsen 1986] is a passive marker that uses the Moiré effect to provide visual feedback on positioning in shipping lanes.

Figure 3: Left: Square wave s_f(x) with period T₁. Middle: s_{f+ε}(x) with period T₂ (top) and s_{f+ε}(x + Δx) with a shift Δx (bottom). Right: A slight shift in one wave results in a significant phase shift of the base frequency (bottom, pink) compared to the one without any shift (top, pink).

Figure 4: (a) We measure the viewing angle by fixing the camera and rotating the stage degree by degree. (b) The relationship between viewing angles and the horizontal Moiré pattern in MoiréTag. (c) The correlation between the translation s and the viewing angle θ.

Figure 5: High pixel contrast and pattern movement. The grated Moiré pattern shows that adjacent pixels generate different intensity values. The intensity plot for each pixel (different colors) on the right shows that a pixel can span a wide intensity range across a large range of camera viewing angles. The bottom Moiré pair shows apparent pattern variation.

Figure 6: Phase distribution of each Moiré pair. The phase value of each edge point illustrates the repetitive character and range of each Moiré pair. The corresponding angular range is from −65° to 65° (see Fig. 2 in the supplementary).

Figure 8: Information fusion framework. From raw camera images we extract rectified Moiré patterns to which we fit the analytical model in order to obtain raw phase measurements. These raw measurements are fused by an MLP to produce the final measured angle. The MLP is trained on data obtained through a combination of a physical setup and simulation.

Table 1: Comparison of RMS angular and positional errors produced by various competing methods in real experiments. The positional accuracy was averaged over a distance range of 1–4 m.