A Localization-Sonar Module for Small, Distributed Robots
Luis E. Navarro-Serment, Christiaan J.J. Paredis and Pradeep K. Khosla
The Institute for Complex Engineered Systems,
The Robotics Institute, and
Department of Electrical and Computer Engineering
Carnegie Mellon University
Pittsburgh, Pennsylvania 15213-3890
{lenscmu, cjp, pkk}@cs.cmu.edu
Abstract--
This paper presents the design of a localization/sonar module for a team of centimeter-scale robots that collaborate to map and explore unknown environments. The module provides the robots with a means of determining their position and detecting the obstacles surrounding them. For localization purposes, the system uses ultrasound to measure the distance from each moving robot to three stationary robots that serve as beacons. From these distance measurements, the position of the robots is derived using a trilateration algorithm. The robot team can move over large distances by using a leap-frogging approach in which different robots serve as beacons at different times.
The localization system estimates the robot positions more accurately than dead reckoning, and yet does not require any landmarks or previously deployed beacons. Similarly, the robots serving as beacons can measure the distance to obstacles by measuring the time of flight of the ultrasonic echoes coming back to them. An array of eight sensors allows the robot to take these measurements from eight different directions simultaneously, thus increasing the exploration speed.
In this article, we present the concept and design of a localization/sonar module used by a team of centimeter-scale robots that collaborate to map and explore unknown environments. The robots, called CMU Millibots, are configured from modular components that include sonar and IR sensors, camera, communication, computation, and mobility modules. Robots with different configurations use their special capabilities collaboratively to accomplish a given task. A typical Millibot is shown in Figure 1. A complete description of the Millibot project can be found in [14].
Autonomous robots within distributed teams usually operate in three different sensing modes. One sensing mode measures internal parameters of the robot, e.g., the position of the driving motors, which provides the necessary information for speed control and dead-reckoning. A second sensing mode perceives the environment in which the robot is operating; obstacle detection is a good example of this type of sensing. The third sensing mode provides information about the state of the team of robots. For instance, a localization system collects position data from each robot of the team and uses it to determine the formation patterns of the group. The localization/sonar module described in this article detects obstacles around the robot, and simultaneously provides information about where the robot is located. This double-mode functionality provides the Millibots with the capability to successfully map and explore unknown areas despite their size.
The exploration and map-building abilities of a robot are strongly dependent on its sensory skills and its capacity to determine its position. When the position of a robot in a given area is known, any information extracted from its sensors can be related to its current viewpoint. A map is built by creating a list of all these position/sensor-reading relationships. Without knowing the position and orientation of the sensors, it becomes impossible to interpret the sensor data in a global frame of reference and integrate it with the data coming from other robots. Moreover, distributed robotic teams require position knowledge to move to predetermined locations, avoid known obstacles, or reposition themselves for maximum sensor efficiency.
Exploration is usually done without having any previous knowledge of the environment. For this reason, the robot must be able to detect obstacles along its way in order to navigate safely through uncharted areas. While navigating, the robots collect information from their sensors from all possible viewpoints. Range sensors are commonly used for navigation in unexplored environments. They allow the robot to avoid obstacles and identify navigable routes. Furthermore, they allow the robot to detect landmarks, which are used for map-building.
Conventional localization and range measurement systems do not offer a viable solution for Millibots. Many robotic systems rely on the Global Positioning System (GPS) and a compass for determining their position and orientation in a global frame of reference [7]. However, due to its large size, limited accuracy, and satellite-visibility requirements, GPS is not appropriate for the small Millibots, which operate mostly indoors. Dead reckoning, another common localization method, generally suffers from accuracy problems due to integration errors and wheel slippage [3]. This is even more pronounced for Millibots, which rely on skid steering, a mechanism for which track slippage is inherent. Finally, localization systems based on landmark recognition [1][9] or map-based positioning [15] require too much computing power and sensing accuracy to be implemented on Millibots.
Commercially available range sensors cannot be used on Millibots, either. Their size and power requirements are incompatible with the Millibots' operating characteristics. Furthermore, conventional sensors are usually specialized units; they cannot easily be combined with other units to increase their functionality. The Millibots are collaborative in nature and rely on a combination of efforts to achieve complete mission capability.
To overcome the problems encountered in the implementation of existing localization methods for a team of Millibots, we have developed a novel method that combines aspects of GPS, landmark-based localization, and dead reckoning [14]. The method uses synchronized ultrasound pulses to measure the distances between all the robots on a team and then determines the relative positions of the robots through trilateration. Similar systems have been developed [8] and are even commercially available; however, they are too large and too expensive for operation on Millibots. Moreover, our system is more flexible because it does not require any fixed beacons with known positions, an important relaxation of the requirements when mapping and exploring unknown environments. The first implementation of our localization system is described in [17].

Similarly, we developed a range sensor around the localization system. The sensor reuses the synchronized ultrasound pulses that are emitted to measure the distances between the robots on a team for localization purposes. The distances to obstacles are obtained by measuring the times of flight of the echoes coming back in eight different directions.
To produce and detect ultrasonic signals, the module is equipped with two arrays of eight ultrasonic transducers each. Both arrays can be seen in Figure 2. The first is a radially symmetric array, which consists of eight piezoelectric transducers distributed in a circular fashion and pointing outward from the robot. It emits ultrasonic signals with 360-degree coverage in the horizontal plane. The transducers are driven simultaneously, which allows the robot to send signals in every direction, thus reaching other robots and obstacles. Two parallel circular plates reduce stray multipath signals, which could cause interference patterns; they also provide physical protection to the transducer elements.
The second array is composed of eight ultrasonic detectors. These detectors are symmetrically arranged around the module, providing 360-degree detection coverage in the horizontal plane simultaneously. Our system uses a threshold detector to measure the time of arrival of ultrasonic signals. This time instant is determined by the moment at which the incoming ultrasonic signal exceeds for the first time a certain reference level. The amplitude of the ultrasonic signal changes with the traveled distance due to beam spreading and attenuation. This results in small measurement errors at low signal-to-noise ratios. We compensate for the above errors by using an experimentally determined calibration equation. While there are many different techniques for getting an accurate time of flight measurement [4][13], we decided to use this method in order to keep our hardware within the constraints of the Millibots.
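As a minimal sketch of this threshold scheme (in Python; the sampled-waveform representation and the linear calibration constants are illustrative assumptions, not our actual firmware):

```python
import numpy as np

def time_of_arrival(samples, sample_period_s, threshold):
    """Return the time at which the signal first exceeds the threshold,
    or None if no echo is detected."""
    crossings = np.flatnonzero(np.abs(samples) > threshold)
    if crossings.size == 0:
        return None
    return crossings[0] * sample_period_s

def calibrated_distance(tof_s, speed_of_sound=343.0, gain=1.0, offset=0.0):
    """Convert a time of flight to a distance. The gain and offset stand in
    for the experimentally determined calibration equation that compensates
    the amplitude-dependent threshold-crossing bias (placeholder values)."""
    return gain * speed_of_sound * tof_s + offset
```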
Despite the fact that electrostatic transducers are usually the family of choice for ultrasonic ranging applications, we decided to use piezoelectric transducers in our design. Electrostatic transducers couple to the air more efficiently than piezoelectric transducers, and have a higher dynamic range. However, they usually operate at very high voltages (150-200 VDC), and are too large for the Millibots. On the other hand, piezoelectric transducers are inexpensive, work at low voltages and fit easily within the power and size budget of the Millibots [5]. Although the piezoelectric transducers have some latency in their response due to mechanical inertia, we were able to obtain reliable and accurate ultrasound detection up to 3m with a resolution of 4mm.

The localization-sonar module was built around an 8-bit RISC microcontroller running at 20 MHz. This controller detects commands received from the team leader, drives the emitter array, and monitors the outputs of the ultrasonic sensors. It also measures the times of flight of the ultrasonic signals, either for localization or for obstacle detection. When a new set of measurements is collected, it sends the new data to the team leader, which updates the position estimates and uses the sonar information for mapping purposes.
The module consumes only 150 mW, drawing its power from a 5 V regulator installed in the mobility platform. All of its analog signal-conditioning circuits were built using single-supply, rail-to-rail stages, which simplifies the design and keeps the component count low.
The module shares the use of the RF transceiver with the main processor (which is usually installed in the middle board). All the Millibots use the same RF channel (418 MHz); the team leader makes sure that no collisions occur by controlling the flow of messages coming from the Millibots.
Reliable distance measurements between robots are essential for the localization system. Therefore, the design of a good 1-D range finder is fundamental. For performance tests, two localization modules were attached to a rail equipped with a distance scale. To determine the range accuracy and precision of the unit, we took 200 measurements and computed the mean value and standard deviation at regular distance intervals. In Figure 3, the measurements are compared to the expected distance, assuming a linear relationship between the distance and the time measurement.
The performance of the ultrasonic range finder is affected by several factors. Some of these factors are a result of the hardware design, while others depend on the environment. As described above, the threshold detector introduces small, amplitude-dependent errors at low signal-to-noise ratios, which we compensate for with an experimentally determined calibration equation.
The measurement process also introduces quantization noise. Our circuit can measure the time of flight with a resolution of 10 µs. Assuming a 20°C air temperature, this quantization interval is equivalent to a distance of 3.43 mm.
In addition to the noise introduced by the measurement system, several environmental factors influence the accuracy of the measurements. Room-temperature drift and temperature gradients influence the distance measurement process because the speed of sound is a function of temperature. One can compensate for changes in room temperature using a temperature probe, but temperature gradients cannot easily be measured. Similarly, one cannot easily account for air turbulence and wind [16]. These effects are more pronounced when the sound travels over longer distances, because it has a larger probability of crossing zones that affect its propagation speed and consequently its time of flight. We are in the process of determining the importance of each of these effects on the overall system performance; however, preliminary tests show very promising results.
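For illustration, room-temperature compensation can be as simple as the following sketch (Python; the linear approximation c ≈ 331.4 + 0.6·T m/s is a standard formula for the speed of sound in air, and the function names are ours):

```python
def speed_of_sound(temp_c):
    """Approximate speed of sound in air (m/s) as a linear function of the
    temperature in degrees Celsius; at 20 C this yields about 343 m/s."""
    return 331.4 + 0.6 * temp_c

def tof_to_distance(tof_s, temp_c):
    """One-way distance from a time of flight, compensated for room
    temperature using a probe reading. Gradients and turbulence along the
    path remain uncompensated, as discussed above."""
    return speed_of_sound(temp_c) * tof_s

# Example: a 10 ms flight at 25 C corresponds to about 3.46 m.
print(tof_to_distance(0.010, 25.0))
```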


The Millibots were designed in a modular fashion. A Millibot is constructed by assembling a set of subsystems ranging from computation to communications to sensors. Even the mobility platform is modular and can be selected based on the terrain of the mission. To support modularity, each of the subsystems has been implemented as a self-contained module, complete with processor and interface circuitry. As shown in Figure 4, each Millibot on the team is configured with a different set of modules, thus forming a heterogeneous group of agents with different capabilities.
The Millibots are semi-autonomous agents. They receive commands from a host computer (the team leader), which coordinates and controls the operation of the group. A common radio communication channel connects the Millibots with the team leader. The Millibots continuously parse every incoming message from the team leader, execute commanded tasks or collect data from their peripheral sensor processors and send data back to their team leader. For a more detailed description of the currently available modules for the Millibots, see [14].
The Millibot localization system is based on trilateration [3], i.e., determination of the position based on distance measurements to known landmarks or beacons [10] [11]. GPS is an example of a trilateration system; the position of a GPS unit on earth is calculated from distance measurements to satellites in space. Similarly, the Millibot localization system determines the position of each robot based on distance measurements to stationary robots with known positions.
The localization system uses ultrasound pulses to measure the distances between robots. Periodically, each Millibot working as a beacon simultaneously emits a radio frequency (RF) pulse and an ultrasonic pulse. As illustrated in Figure 5, the RF pulse, traveling at the speed of light (3×10⁸ m/s), arrives at all receivers almost instantaneously. The ultrasonic pulse, on the other hand, traveling at only 343 m/s (assuming a 20°C air temperature), arrives at each receiver with a delay proportional to its distance from the beacon. Each Millibot measures this delay (Δtᵢ in the figure), using the RF pulse for synchronization, and converts it to a distance measurement by multiplying by the speed of sound. A team leader coordinates the pinging sequence to ensure that beacon signals from multiple robots do not interfere with one another.
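A sketch of the distance computation performed at each receiver (Python; the timestamps and names are illustrative):

```python
SPEED_OF_SOUND = 343.0  # m/s at 20 C

def beacon_distance(rf_arrival_s, ultrasound_arrival_s):
    """Distance to a pinging beacon. The RF pulse travels at ~3e8 m/s, so
    its flight time is negligible at robot scales and it serves purely as
    the synchronization reference; the delay of the ultrasonic pulse is
    therefore proportional to the distance from the beacon."""
    delta_t = ultrasound_arrival_s - rf_arrival_s
    return SPEED_OF_SOUND * delta_t

# Example: an ultrasonic pulse arriving 5.83 ms after the RF pulse
# corresponds to a beacon roughly 2 m away.
print(beacon_distance(0.0, 0.00583))
```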
After all the beacons finish pinging, every Millibot has a set of distance measurements from its current position to each beacon position. This information is sequentially transmitted to the team leader, which determines the actual position of every Millibot. In the future, we plan to calculate the Millibot positions on the local processor of each Millibot. However, currently the processor does not have the necessary computation power to perform these floating-point computations.
When a team of Millibots is first deployed, the robots automatically determine their positions with respect to a local frame of reference. To accomplish this, the team leader collects distance measurements between every pair of robots by pinging each robot's beacon (possibly multiple times, to achieve more accurate distance measurements) and gathering the resulting measurements from all the other robots. The team leader then assigns the position (0,0) to an arbitrarily chosen robot. A second robot is assigned a position on the X-axis. This defines a frame of reference in which the position of all other robots is determined through trilateration. However, based on distance measurements alone, there remains an ambiguity about the sign of the Y-coordinate of each robot. To resolve this ambiguity, the team leader commands one robot to follow a short L-shaped trajectory and redetermines its position through trilateration. If the robot turned to the left, but the assigned coordinate system indicates a right turn, the signs of the Y-coordinates of all robots are reversed.
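The initialization geometry can be sketched as follows (Python; the distances are hypothetical, and the least-squares trilateration shown is one standard formulation, not necessarily the one running on the team leader):

```python
import numpy as np

def trilaterate(beacons, distances):
    """Least-squares 2-D trilateration: subtracting the first beacon's
    circle equation from the others yields a linear system in (x, y)."""
    b = np.asarray(beacons, dtype=float)
    d = np.asarray(distances, dtype=float)
    A = 2.0 * (b[1:] - b[0])
    rhs = d[0]**2 - d[1:]**2 + np.sum(b[1:]**2, axis=1) - np.sum(b[0]**2)
    xy, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    return xy

# Frame initialization with hypothetical measured distances (meters):
d_ab, d_ac, d_bc = 1.00, 0.80, 0.70
pos_a = np.array([0.0, 0.0])             # first robot defines the origin
pos_b = np.array([d_ab, 0.0])            # second robot defines the X-axis
x_c = (d_ac**2 - d_bc**2 + d_ab**2) / (2.0 * d_ab)
y_c = (d_ac**2 - x_c**2) ** 0.5          # sign of y_c is ambiguous; it is
pos_c = np.array([x_c, y_c])             # resolved by the L-shaped test motion
# Subsequent robots are located from three (or more) known beacons:
print(trilaterate([pos_a, pos_b, pos_c], [0.76, 0.42, 0.28]))
```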
An important advantage of the Millibot localization system is that it does not rely on fixed beacons. Instead, a minimum of three Millibots (but not necessarily always the same three) serve as beacons at any time. The Millibots that serve as beacons remain stationary. The other robots can move around in the area that is within reach of the beacons. While they sense the environment, they can determine their position with respect to the current beacons. When the team has explored the area covered by the current beacons, other robots will become stationary and start serving as beacons. In this fashion, the team can move over large areas while maintaining good position estimates as illustrated in Figure 5. This leap-frogging approach allows a team to move forward while always maintaining three stationary beacons in known locations.
In order to avoid numerically ill-conditioned configurations (e.g. collinear beacons), careful planning of the movement sequence is required. The localization algorithm is most accurate when the beacons are at the vertices of an equilateral triangle. When a team moves over a large distance, the beacon that is farthest removed from the goal will be replaced by a Millibot in a position closer to the goal and equidistant to the other two beacons.
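One way to gauge the conditioning of a candidate beacon layout when planning a leap (the metric below is an illustrative assumption, not our planner's actual criterion) is to compare the beacon triangle against an equilateral one:

```python
import numpy as np

def beacon_conditioning(b1, b2, b3):
    """Return a value in [0, 1]: near 0 for nearly collinear beacons
    (ill-conditioned trilateration), 1 for an equilateral layout."""
    b1, b2, b3 = (np.asarray(b, dtype=float) for b in (b1, b2, b3))
    u, v = b2 - b1, b3 - b1
    area = 0.5 * abs(u[0] * v[1] - u[1] * v[0])
    mean_side = (np.linalg.norm(b2 - b1) + np.linalg.norm(b3 - b2)
                 + np.linalg.norm(b1 - b3)) / 3.0
    equilateral_area = np.sqrt(3.0) / 4.0 * mean_side**2
    return area / equilateral_area if equilateral_area > 0 else 0.0

print(beacon_conditioning([0, 0], [1, 0], [0.5, 0.866]))  # ~1.0, well conditioned
print(beacon_conditioning([0, 0], [1, 0], [2.0, 0.01]))   # ~0, nearly collinear
```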

The accuracy of the position estimates gradually deteriorates as the number of leaps increases. We expect several parameters to affect the accuracy, including the number of leaps, the shape of the leap-frogging pattern, and the size of each leap. Careful characterization of these dependencies is the subject of ongoing work.
The most commonly used range sensors in mobile robotic applications are optical range sensors and ultrasonic sonars. Since our first implementation of the localization system already employed ultrasound for range measurement [17], we decided to design an ultrasonic sonar built around the localization sensor.
The Millibot sonar design aims for simplicity, ruggedness, and redundancy. Our main goal was to take advantage of the ultrasonic localization signals to generate echoes and thereby detect obstacles. In this way, the energy employed to emit beacon signals serves a double purpose, and the combined localization/obstacle-detection cycle is completed faster.
We based our sonar design on the time-of-flight method. As described earlier in this section, a robot configured as a beacon emits a set of RF and ultrasonic pulses. This set of pulses is detected by other robots in the area and provides the information needed for localization. In addition, the ultrasonic pulses emitted by the beacon robot are reflected back to it by surrounding objects, producing echoes. These echoes are detected by the array of eight sensors located around the robot. The time of flight of each echo corresponds to twice the distance to the reflecting object. The times of arrival of the echoes are recorded and converted to distances; combined with the position of the sensor in the detector array that measured them, this yields both distance and orientation information about surrounding objects.
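A sketch of this conversion (Python; the even 45-degree sensor spacing matches the array described above, while treating each sensor as a single ray is a simplification of the real beam width):

```python
import math

NUM_SENSORS = 8
SPEED_OF_SOUND = 343.0  # m/s at 20 C

def echo_to_obstacle(sensor_index, echo_tof_s, robot_x, robot_y, robot_heading):
    """Convert an echo time of flight from one of the eight detectors into
    an obstacle position estimate. The echo travels to the obstacle and
    back, so the range is half the round-trip distance."""
    rng = 0.5 * SPEED_OF_SOUND * echo_tof_s
    bearing = robot_heading + sensor_index * (2.0 * math.pi / NUM_SENSORS)
    return (robot_x + rng * math.cos(bearing),
            robot_y + rng * math.sin(bearing))

# Example: sensor 2 (90 degrees from a heading of 0) hears an echo after
# 11.66 ms, placing an obstacle about 2 m to the robot's left.
print(echo_to_obstacle(2, 0.01166, 0.0, 0.0, 0.0))
```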
The team leader merges the information from several robots and generates a single global map to provide a more comprehensive view of the environment to the user. To produce maps of the environment, one of our methods is to build an occupancy grid with a Bayesian update rule. This method allows the combination of sensor readings from different robots and different time instances [18][19]. In an occupancy grid, the environment is divided into homogeneous cells. For each cell, a probability of occupancy is stored: an occupancy value of zero corresponds to a free cell, while a value of one corresponds to a cell occupied by an obstacle. Initially, nothing is known about the environment and all the cells are assigned a value of 0.5 (equally likely to be occupied or free).
The mapping algorithm uses a Bayesian update rule [18]:

$$P(occ(c) \mid s) = \frac{P(s \mid occ(c))\, P(occ(c))}{P(s \mid occ(c))\, P(occ(c)) + P(s \mid \neg occ(c))\, \big(1 - P(occ(c))\big)} \qquad (1)$$

Equation (1) updates the occupancy probability for cell $c$, $P(occ(c) \mid s)$, based on the current sensor reading, $s$, and the a priori probability, $P(occ(c))$. Any sensor that can convert its data into a probability that a particular cell is occupied can be merged into the same map. This means that data generated by a short-range proximity detector can be merged with data from a sonar range module or even a camera.
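A direct transcription of equation (1) (Python; the sensor-model probabilities 0.7 and 0.2 are placeholder values, not our calibrated sonar model):

```python
import numpy as np

def bayes_update(prior, p_reading_given_occ, p_reading_given_free):
    """Update a cell's occupancy probability with equation (1). `prior` is
    P(occ(c)); the two conditionals form the sensor model for the current
    reading."""
    num = p_reading_given_occ * prior
    return num / (num + p_reading_given_free * (1.0 - prior))

grid = np.full((50, 50), 0.5)        # unknown environment: all cells at 0.5
# Hypothetical reading suggesting that cell (10, 12) is occupied:
grid[10, 12] = bayes_update(grid[10, 12], 0.7, 0.2)
print(grid[10, 12])                  # ~0.78, belief shifted toward occupied
```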
To determine the position and orientation of the robots relative to each other, we use a maximum likelihood estimator. If all the distance measurements were perfectly accurate, we could use a simple geometric trilateration algorithm to determine the position of the robots relative to each other. However, measurements are noisy and sometimes missing. As a result, the set of equations resulting from a purely geometric approach is over-constrained and does not always yield a solution. Instead, we use a maximum likelihood estimator that determines the most likely position and orientation of all the robots, given their previous positions and orientations, their movements, and the sonar-based distance measurements.
Assume that we know the position and orientation, $(x_i^t, y_i^t, \theta_i^t)$, of all the robots at time $t$. The question is: how do we determine the positions, $(x_i^{t+1}, y_i^{t+1}, \theta_i^{t+1})$, of the robots at time $t+1$, after they have moved? We can estimate the new positions based on the following information:

Dead reckoning: Since all the Millibots are equipped with encoders, their position at time $t+1$ can be estimated by integrating the encoder signals. This can be further simplified in our case, because the Millibots always move according to “vector commands” (i.e., rotation in place over an angle $\alpha$, followed by a forward straight-line motion over a distance $d$). A stiff controller guarantees that the commanded motion, $(\alpha, d)$, is realized, eliminating the need to query the robot after the motion is completed. In addition to the parameters $\alpha$ and $d$, we assume that the vector command is characterized by the angle $\beta$. As is illustrated in Figure 5, $\beta$ is the angle over which the robot rotates while moving forward. This unplanned rotation is due to wheel slippage and calibration errors in the controller. There is a one-to-one mapping between the incremental motion of the robot, from $(x^t, y^t, \theta^t)$ to $(x^{t+1}, y^{t+1}, \theta^{t+1})$, and the parameters $(\alpha, d, \beta)$:
$$\begin{aligned} x^{t+1} &= x^t + d\,\cos\!\big(\theta^t + \alpha + \tfrac{\beta}{2}\big)\\ y^{t+1} &= y^t + d\,\sin\!\big(\theta^t + \alpha + \tfrac{\beta}{2}\big)\\ \theta^{t+1} &= \theta^t + \alpha + \beta \end{aligned} \qquad (3)$$

where the unplanned rotation $\beta$ is treated as a symmetric offset of the heading along the traveled chord (see the sketch following the next paragraph).
Distance measurements: After all the robots have come to a halt, each robot that moved pings its localization beacon to determine its distance to all the other robots. The resulting distance measurements provide accurate data that allow us to overcome the drift typically encountered in localization algorithms based on dead reckoning alone.
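For concreteness, the dead-reckoning motion model of equation (3) can be sketched as follows (Python; the symmetric-chord treatment of the drift angle β is an approximation of the underlying arc, and the function name is ours):

```python
import math

def vector_command_motion(x, y, theta, alpha, d, beta):
    """Forward motion model for a vector command: rotate in place over
    alpha, then drive distance d while drifting over the unplanned angle
    beta, modeled as a straight chord at heading theta + alpha + beta/2."""
    heading = theta + alpha + 0.5 * beta
    return (x + d * math.cos(heading),
            y + d * math.sin(heading),
            theta + alpha + beta)

# Example: a 0.5 m forward command with a 0.05 rad unplanned drift.
print(vector_command_motion(0.0, 0.0, 0.0, 0.0, 0.5, 0.05))
```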

We have carefully calibrated the motion controller and localization beacon so that, in addition to the nominal measurement, we have an estimate of the corresponding standard deviation. As illustrated in [23], the distribution of the localization data closely resembles a Normal distribution. Assuming that both the dead reckoning data and the distance measurements are normally distributed, we can compute the likelihood of a particular set of measurements occurring for a given robot position:
Dead reckoning: The likelihood that a robot moved over an angle, $\alpha$, and a distance, $d$, given its initial position $(x^t, y^t, \theta^t)$ and final position $(x^{t+1}, y^{t+1}, \theta^{t+1})$ is:

$$L(\alpha, d) = \frac{1}{2\pi\,\sigma_\alpha \sigma_d} \exp\!\left(-\frac{(\alpha - \bar{\alpha})^2}{2\sigma_\alpha^2} - \frac{(d - \bar{d})^2}{2\sigma_d^2}\right) \qquad (4)$$

where $\bar{\alpha}$ and $\bar{d}$ denote the rotation and distance implied by the initial and final positions, and $\sigma_\alpha$ and $\sigma_d$ are the corresponding calibrated standard deviations.

Distance measurements: The likelihood that the measured distance between two robots, $i$ and $j$, is equal to $\hat{d}_{ij}$ is:

$$L(\hat{d}_{ij}) = \frac{1}{\sqrt{2\pi}\,\sigma_{ij}} \exp\!\left(-\frac{\big(\hat{d}_{ij} - d_{ij}\big)^2}{2\sigma_{ij}^2}\right), \quad \text{with } d_{ij} = \sqrt{(x_i^{t+1} - x_j^{t+1})^2 + (y_i^{t+1} - y_j^{t+1})^2} \qquad (5)$$

The total conditional likelihood function, $L_{\mathrm{total}}$, is the product of all the conditional likelihoods introduced above. The most likely robot positions are found by maximizing $L_{\mathrm{total}}$ with respect to the new robot positions $(x_i^{t+1}, y_i^{t+1}, \theta_i^{t+1})$.
The maximum likelihood estimator requires that the initial positions of the robots are known with respect to one another. This requires a slightly modified approach at start up. After collecting distance measurements between all possible robot pairs, a conditional probability density function is defined which only consists of distance measurement terms. In addition, one arbitrary robot is assigned the position (0,0) and a second robot is assigned a position on the X-axis. This defines a frame of reference in which the position of all other robots is determined by maximizing the conditional probability density function. However, based on distance measurements alone, there remains an ambiguity about the sign of the Y-coordinates of each robot. To resolve this ambiguity, the team leader commands one robot to follow a short L-shaped trajectory and recomputes its position. If the robot turned to the left, but the assigned coordinate system indicates a right turn, the signs of the Y-coordinates of all robots are reversed.
The optimization of the conditional probability density function can be formulated as a weighted nonlinear least-squares problem, which we solve using the BFGS nonlinear optimization algorithm [6]. The dead reckoning data provides a good starting point, so that only a few optimization iterations are necessary to reach the optimum. During the initialization stage of the robot team, when no prior information about the robot positions is available, the BFGS algorithm may get stuck in a local minimum. Based on experimentation, we have found that taking the best-out-of-five randomly initialized runs never fails to find the global optimum.
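A condensed sketch of this estimation step (Python with SciPy; orientations are omitted for brevity, and all numeric values, including the standard deviations, are placeholders rather than our calibrated parameters):

```python
import numpy as np
from scipy.optimize import minimize

def neg_log_likelihood(p, prev, moves, dists, sigma_m=0.01, sigma_d=0.005):
    """Negative log of the total conditional likelihood: the Gaussian
    factors of equations (4) and (5) become a sum of squared residuals.
    `p` stacks the (x, y) of all robots; `moves` gives each robot's
    dead-reckoned displacement; `dists` holds measured distances."""
    p = p.reshape(-1, 2)
    cost = 0.0
    for i, delta in moves.items():                 # dead-reckoning terms (4)
        cost += np.sum((p[i] - (prev[i] + delta)) ** 2) / (2 * sigma_m ** 2)
    for i, j, d_meas in dists:                     # distance terms (5)
        cost += (np.linalg.norm(p[i] - p[j]) - d_meas) ** 2 / (2 * sigma_d ** 2)
    return cost

prev = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.8]])
moves = {0: np.zeros(2), 1: np.zeros(2), 2: np.array([0.2, 0.0])}
dists = [(2, 0, 1.06), (2, 1, 0.85)]               # hypothetical beacon pings
x0 = np.array([prev[i] + moves[i] for i in range(3)]).ravel()  # dead reckoning
res = minimize(neg_log_likelihood, x0, args=(prev, moves, dists), method='BFGS')
print(res.x.reshape(-1, 2))                        # most likely positions
```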
To obtain good results with the above algorithm, it is very important to filter the raw measurement data. Even though the sensors are accurate and reliable, they occasionally return false measurements. This occurs, for instance, when the direct path between two robots is obstructed by an obstacle or another robot. The beacon sensors always return the time corresponding to the first incoming ultrasound pulse; in such a case, the first pulse is the result of a multi-path reflection rather than the direct-path pulse. As a result, the measured distance can be significantly larger than the actual distance. A similar error occurs when a multi-path pulse destructively interferes with the direct pulse; in that case, the ultrasonic pulse is not detected at all.
It should be noted that the difference between a good and a bad distance measurement cannot be recognized based on the measurement data alone. Indeed, multiple measurements will all result in the same (possibly erroneous) reading. Erroneous readings can still be rejected, however, based on dead reckoning information. Even though dead reckoning is unreliable when integrated over a long time, for a single robot action, it can provide a reasonable estimate of the robot’s position. By comparing the encoder distance measurements with the distances computed for the estimated positions, it is possible to reliably reject erroneous measurements due to multi-path.
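A sketch of such a gate (Python; the 10 cm tolerance is an assumed value, not our calibrated threshold):

```python
import math

def gate_measurement(d_measured, est_self, est_beacon, tolerance=0.10):
    """Reject a beacon distance that is implausibly LONGER than the distance
    implied by the dead-reckoned position estimates; multipath delays the
    first detected pulse, so these errors are one-sided toward larger values."""
    d_predicted = math.dist(est_self, est_beacon)
    return d_measured if d_measured <= d_predicted + tolerance else None
```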
Furthermore, the accuracy of the algorithm was improved significantly by using more than the minimally required three robot beacons. In our experiments, we used a team of five robots in which, at any time, four served as beacons. The extra distance measurements improve the accuracy of the position estimates, especially, when the direct path to one or more of the robot beacons is obstructed by obstacles. Further accuracy improvements were obtained by pinging each beacon multiple times. Median and mean filtering were then used to significantly reduce the standard deviation of the distance measurement, resulting in a more accurate position estimate.
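The ping filtering can be sketched as follows (Python; the window of three quantization steps around the median is an assumed choice):

```python
import numpy as np

QUANTIZATION_M = 0.00343  # one 10-microsecond timer tick at 343 m/s

def filter_pings(samples):
    """Fuse repeated pings into one estimate: the median discards multipath
    outliers, then the mean of the samples near the median reduces the
    remaining noise and hence the standard deviation of the distance."""
    s = np.asarray(samples, dtype=float)
    med = np.median(s)
    kept = s[np.abs(s - med) <= 3 * QUANTIZATION_M]
    return float(kept.mean()) if kept.size else float(med)

print(filter_pings([1.000, 1.003, 0.997, 1.352, 1.001]))  # outlier rejected
```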
[1] Atiya, S. and Hager, G. 1993. Real-time Vision-based Robot Localization. IEEE Transactions on Robotics and Automation, Vol. 9, No. 6, pp. 785-800.
[2] Bar-Shalom, Y., and Fortmann, T.E., 1988. Tracking and Data Association. Academic Press.
[3] Borenstein, J., Everett, H. R., and Feng, L., 1996. Navigating Mobile Robots: Sensors and Techniques, Wellesley, MA: A. K. Peters, Ltd.
[4] Cai, C., and Regtien, P.L. 1993. Accurate Digital Time-of-Flight Measurement Using Self-Interference. IEEE Transactions on Instrumentation and Measurement, Vol. 42, No. 6, December 1993, pp. 990-994.
[5] Everett, H.R. 1995. Understanding Ultrasonic Ranging Sensors. The Robotics Practitioner. Fall, 1995. pp. 27-38.
[6] Fletcher, R. 1987. Practical Methods of Optimization (second edition). New York, NY: J. Wiley & Sons, Ltd.
[7] Getting, I. A. 1993. The Global Positioning System. IEEE Spectrum, December, pp. 36-47
[8] ISR - IS Robotics, Inc., 1994. “RR-1/BS-1 System for Communications and Positioning - Preliminary Data Sheet.” IS Robotics, Twin City Office Center, Suite 6, 22 McGrath Highway, Somerville, MA 02143, 617-629-0055.
[9] Jenkin, M., Milios, E., Jasiobedzki, P., Bains, N., and Tran, K. 1993. Global Navigation for ARK. Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems, Yokohama, Japan, July 26-30, pp. 2165-2171.
[10] Kleeman, L. 1992. Optimal Estimation of Position and Heading for Mobile Robots Using Ultrasonic Beacons and Dead-Reckoning. Proceedings of the 1992 IEEE International Conference on Robotics and Automation, Nice, France, May 1992, pp. 2582-2587.
[11] Leonard, J. F. and Durrant-Whyte, H. F., 1991. Mobile Robot Localization by Tracking Geometric Beacons. IEEE Transactions on Robotics and Automation. Vol. 7. No. 3. pp. 376-382.
[12] Manolakis, D.E. 1996. Efficient Solution and Performance Analysis of 3-D position Estimation by Trilateration. IEEE Transactions on Aerospace and Electronic Systems. Vol. 32, No. 4. October 1996. pp.1239-1248.
[13] Marioli, D., Narduzzi, C., Offelli, C., Petri, D., Sardini, E., and Taroni, A. 1992. Digital Time-of-flight Measurement for Ultrasonic Sensors. IEEE Transactions on Instrumentation and Measurement. Vol. 41, No. 1. February 1992. pp.93-97.
[14] Navarro-Serment, L.E., Grabowski, R., Paredis, C.J.J., and Khosla, P.K. 1999. Heterogeneous Teams of Modular Robots for Mapping and Exploration. Technical Report ICES-04-07-99, The Institute for Complex Engineered Systems, Carnegie Mellon University, Pittsburgh, PA 15213.
[15] Stuck, E. R., Manz, A., Green, D. A., and Elgazzar, S., 1994. Map Updating and Path Planning for Real-Time Mobile Robot Navigation. 1994 International Conference on Intelligent Robots and Systems (IROS '94). Munich, Germany, Sept. 12-16, pp. 753-760.
[16] Wehn, H. W., and Belanger, P.R. 1997. Ultrasound-Based Robot Position Estimation. IEEE Transactions on Robotics and Automation, Vol. 13, No. 5, October 1997, pp. 682-692.
[17] Navarro-Serment, L.E., Paredis, C.J.J., and Khosla, P.K. 1999. A Beacon System for the Localization of Distributed Robotic Teams. Proceedings of the International Conference on Field and Service Robotics, Pittsburgh, PA, August 29-31, 1999.
[18] Moravec, H. P. 1988. Sensor Fusion in Evidence Grids for Mobile Robots. AI Magazine, Vol. 9, No. 2, pp. 61-74.
[19] Elfes, A. 1989. Using Occupancy Grids for Mobile Robot Perception and Navigation. Computer, vol.22, no. 6, June 1989, pp. 46-57.