ICRS: Robot Localization

One of the biggest problems with precise motion in robotics is localization. Encoders are only so accurate and tend to drift over time. Absolute positioning systems like GPS have error margins measured in meters. The solutions to this are often complicated (e.g. SLAM) or expensive (e.g. RTK GPS). As a cheap and simple alternative, I'm going to attempt to use incremental trilateration to enable the robots to determine their own position relative to their siblings.

Why It’s Important

An accurate map of where the robots are is critical for proper planning of their movements. Without it they could get in each other's way or fall off a cliff! It's also necessary for them to perform tasks that require precise motion, such as moving debris or laying brick. While the robots will also be teleoperated and have an onboard camera, it's hard for the human eye to properly determine depth and estimate measurements from a 2D image.

The most common method for robot positioning is using encoders to measure the distance the wheels have traveled by counting rotations. There are some caveats to this method, however. If a wheel slips, the encoder's accuracy suffers and the robot thinks it has traveled farther than it actually has. Over time the encoder's measurement will drift farther and farther from reality. This problem will only be exacerbated by the rough and slippery terrain that these robots will eventually be operating on. With incremental trilateration, however, the robot will recalculate an absolute position every time it moves, and that absolute measurement won't be susceptible to drift.

Incremental Trilateration

The method I'm proposing for robot localization is based on trilateration, which is the same method GPS uses to determine position. This method, however, will be able to attain greater accuracy with cheaper and less complex hardware. Rather than the speed-of-light radio signals that GPS satellites use for trilateration, I plan on using much slower sound waves to do the same calculations. This makes it possible to do signal detection and calculation with a relatively slow microcontroller instead of high-speed DSPs and custom silicon. I also plan on having an IR pulse at the beginning to synchronize all of the robots before each sound pulse is sent. This sync signal will provide a trigger to the robots acting as base points so that they don't need to be constantly waiting for sound pulses.

Incremental trilateration consists of four steps:

  1. Movement
  2. Sync Transmit
  3. Signal Transmit
  4. Calculation

Initial Conditions

For this method to work, the robots need to start off at set points. This means that the robot master and two of the robots need to start at known locations. Trilateration calculates a fourth point from three known base points, so the absolute positions of those three base points need to be known.

Steps 1-3

[Animation: the movement, sync transmit, and signal transmit steps]

The animation above covers the movement, sync transmit, and signal transmit steps of the process. The big square and two smaller circles on the sides represent the base station and two of the robots at known points.

Movement

In this step the robot that’s currently active moves, either towards a target or to explore. In the example above the robot in front moves along the Y axis from its position directly in front of the base station to an indeterminate position in front and to the right of where it was before.

Sync Transmit

The expanding red circle sent out from the robot represents a sync signal that will be sent from the robot at an unknown position to the three receivers at known locations. In my planned implementation this will be a 40 kHz IR signal. Once the three receiving robots detect this signal, they know to start counting and waiting for the signal transmit. It should be noted that I'm ignoring the travel time of the IR pulse, because the speed of light is so fast that it can be considered insignificant over the small distances that the robots will be moving.

Signal Transmit

Once the sync IR signal has been received, the robots will start counting until they see the sound pulse. I plan on using 40 kHz ultrasonic transducers to handle this and generate an inaudible sound wave. Once the robots see the sound pulse, they will stop counting and save the difference in time between the sync and signal transmissions. Using the speed of sound, they can then calculate the distance to the robot that sent the transmissions.
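
To make the sync and signal steps concrete, here's a minimal Arduino-style sketch of the receiving side. The pin assignments and the assumption that the IR receiver and ultrasonic detector present clean digital edges are mine, not final hardware decisions:

    // Hypothetical pins for a robot acting as a base point.
    const int IR_SYNC_PIN = 2;           // IR receiver output (assumed to go LOW on sync)
    const int ULTRASONIC_PIN = 3;        // detector output (assumed to go HIGH on the pulse)
    const float SPEED_OF_SOUND = 343.0;  // m/s at roughly 20 C; see the temperature note below

    void setup() {
      pinMode(IR_SYNC_PIN, INPUT);
      pinMode(ULTRASONIC_PIN, INPUT);
      Serial.begin(115200);
    }

    void loop() {
      // Wait for the IR sync pulse; its travel time is treated as zero.
      while (digitalRead(IR_SYNC_PIN) == HIGH) {}
      unsigned long syncTime = micros();

      // Count until the ultrasonic pulse is seen.
      while (digitalRead(ULTRASONIC_PIN) == LOW) {}
      unsigned long signalTime = micros();

      // Time of flight in seconds (micros() ticks in 4 us steps).
      float timeOfFlight = (signalTime - syncTime) * 1.0e-6;
      float distance = timeOfFlight * SPEED_OF_SOUND;  // meters

      Serial.println(distance, 4);
    }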

Step 4: Calculation

Once each of the base robots calculates the distance to the moving robot, it can effectively draw a circle around itself with a radius of that distance. Each base robot then knows definitively that the moving robot is somewhere on the edge of its circle.

[Image: Robot 1 calculates the distance.]
[Image: The base station calculates the distance.]
[Image: Robot 2 calculates the distance.]

[Image: the three distance circles overlaid]

Using the circles drawn by each of the base robots and the known direction that the robot traveled in, it can then be determined that the moving robot is sitting at the point where the three circles overlap, as displayed in the image above.
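
The overlap point can be solved for directly. Subtracting the circle equations pairwise eliminates the squared unknowns and leaves two linear equations in the robot's (x, y), which a simple 2x2 solve handles. Here's a sketch of that calculation in plain C++, with the base positions and measured distances as inputs:

    struct Point { float x, y; };

    // Solve for the point whose distances from p1, p2, p3 are r1, r2, r3.
    // Subtracting the circle equations pairwise gives the linear system:
    //   2(x2-x1)x + 2(y2-y1)y = r1^2 - r2^2 + x2^2 - x1^2 + y2^2 - y1^2
    //   2(x3-x1)x + 2(y3-y1)y = r1^2 - r3^2 + x3^2 - x1^2 + y3^2 - y1^2
    // solved here with Cramer's rule.
    Point trilaterate(Point p1, float r1, Point p2, float r2, Point p3, float r3) {
      float a1 = 2 * (p2.x - p1.x), b1 = 2 * (p2.y - p1.y);
      float c1 = r1 * r1 - r2 * r2 + p2.x * p2.x - p1.x * p1.x + p2.y * p2.y - p1.y * p1.y;
      float a2 = 2 * (p3.x - p1.x), b2 = 2 * (p3.y - p1.y);
      float c2 = r1 * r1 - r3 * r3 + p3.x * p3.x - p1.x * p1.x + p3.y * p3.y - p1.y * p1.y;

      float det = a1 * b2 - a2 * b1;  // goes to zero if the base points are collinear
      return { (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det };
    }

One caveat: if the three base points are exactly collinear, the determinant goes to zero and the circles meet at two mirror-image points, which is where the known direction of travel mentioned above is needed to pick the right one.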

Limitations

There are a few limitations to the incremental trilateration method that I’d like to explain and propose solutions to.

Initial Conditions

The first and most inconvenient limitation is that the robots will require known initial conditions. This means that at least three nodes in the localization network need to be at predefined positions; otherwise it's impossible to calculate the position of the moving robot. This makes setup a little harder and introduces accuracy problems if the initial conditions aren't perfect.

Some of this can be mitigated by the fact that each of the nodes can determine its distance to the master node using the sync and signal method. If each robot is placed along a single straight line (which can be considered a line perpendicular to the Y axis at a known value of Y), it can send sync and signal transmissions to the master to determine its X offset, as sketched below.
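
Assuming, hypothetically, that the master sits at the origin and a robot is placed on the line y = y0 on a known side of the master, the measured distance d gives the X offset straight from the Pythagorean theorem:

    #include <cmath>

    // Hypothetical setup helper: master at the origin, robot placed on the
    // line y = y0. d is the distance measured via the sync/signal exchange,
    // and side is +1 or -1 depending on which side of the master the robot
    // was placed on.
    float xOffset(float d, float y0, int side) {
      return side * std::sqrt(d * d - y0 * y0);
    }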

Another possibility would be adding three IR and ultrasonic receivers to the master at predefined locations so that the master itself can act as the three reference points for the moving robot. This introduces some complexity but may ultimately be worth it.

Turn-Based Movement

Another limitation is that in the above scenario with four nodes, only one robot can move at a time, as it needs the three reference points to be stationary. This is less limiting in larger networks: needing three stationary reference points means (N – 3) robots can be moving, with N being the number of robots in the network, so for large values of N the restriction matters less. However, because the IR and ultrasound use the air as a common bus, the actual transmissions will need to be kept to one at a time to prevent collisions.

Accuracy

Accuracy is the biggest concern with this system. Until I test this I won't know the exact accuracy of the system, but there is a lot of variability that can cause problems. The reason I'm using ultrasound for the signal transmission instead of light is that the speed of light is much too fast and the clock speed of microcontrollers is much too low to properly detect the signal over small distances. Ultrasound still has the same limitation, albeit to a much lesser degree.

The Arduino micros() timer has a resolution of four microseconds. Since the speed of sound is 343 m/s, the ultrasonic pulse will travel 0.001372 meters (343 m/s * 4e-6 seconds), or 1.372 millimeters, per increment of the micros() counter. This is only the maximum theoretical resolution, however, since it doesn't take into account things such as digital read or sensor latency. Ultimately the actual resolution is something I'll have to determine experimentally. I'm hoping for 1 cm accuracy in my initial implementation and will have to search for optimizations should that not be immediately achievable.

Another thing to take into account is that the speed of sound changes with temperature. However, this can be corrected by using a temperature sensor to more accurately calculate the speed of sound.
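
A common approximation for that correction (my assumption here; any equivalent formula would work) is a linear fit in the ambient temperature:

    // Approximate speed of sound in dry air. The linear fit
    // v = 331.3 + 0.606 * T (T in degrees Celsius) is a standard
    // approximation; T would come from the temperature sensor.
    float speedOfSound(float tempCelsius) {
      return 331.3 + 0.606 * tempCelsius;
    }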

Next up is planning out and designing the actual hardware!


ICRS: Robot Swarm Design

Before diving into the nitty-gritty of the robot design I wanted to take a moment and lay out a brief description of the network and base robot architecture. Below is a block diagram and description of both the network topology and the configuration of the robots.

Network Topology

[Block diagram: network topology]

The pure swarm approach for a fully modular network would involve a mesh network with no centralized control source. Instead of doing this, I'm opting for a robot swarm with a central control node, much like how a beehive has a queen. And, since robot dancing has not reached bee levels of communication, I'm going to leverage existing technology and use WiFi. This will make it easier to use existing single board computers (e.g. the Raspberry Pi) for robot control instead of making a homebrew control board. Having all of the robots on a single WiFi network will also make it easy to log in remotely to the individual robots for telepresence control.

I also plan on having a single, central node to handle the complex control and collate all of the data provided by the individual swarm robots. This central node will also host the WiFi access point. Having a single master may seem counter-intuitive for a robot swarm, but it will make development and control much easier. A powerful central computer can do complicated operations such as image processing and planning the most efficient paths for each robot to take. The central node can also handle delegation by directing each robot to assume a role in heterogeneous swarms when different tasks need to be handled by the robots. Another benefit of having a single master node is debugging. Having the central node keep track of all of the data will make it easier to access the swarm's status and provide a clear picture of how the system is operating.

Base Robot Configuration

I've tried my best to limit the robots' base design (i.e., the components that will be common to all robots no matter their role or attached modules) to the very minimum. The block diagram above is what I came up with.

Each robot will have three core modules built into the base design: one for power control and distribution, one for motor control, and one for localization. The only exception will be the central node, which won't have the motor control module. The modules will all report to a central CPU. Each of these modules will be intelligent, with its own microcontroller for real-time control and calculations. Having a microcontroller for each module abstracts away the processing required for each module and lets the CPU retrieve processed data and send commands without having to manage every single component.
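
As a rough illustration of that abstraction (the packet layout, opcodes, and bus choice here are purely hypothetical, not a finalized interface), each module could expose a small command/response protocol to the CPU:

    #include <stdint.h>

    // Hypothetical command/response packets exchanged between the CPU and a
    // module's microcontroller (e.g. over UART or I2C). All values are examples.
    enum ModuleId : uint8_t { POWER = 0x01, MOTOR = 0x02, LOCALIZATION = 0x03 };

    struct ModuleCommand {
      uint8_t module;    // which module the command targets
      uint8_t opcode;    // e.g. 0x01 = read status, 0x10 = set motor speed
      int16_t argument;  // command-specific parameter
      uint8_t checksum;  // simple integrity check over the preceding bytes
    };

    struct ModuleResponse {
      uint8_t module;
      uint8_t status;    // 0 = OK, nonzero = module-specific error code
      int16_t data;      // processed result, e.g. a distance in millimeters
      uint8_t checksum;
    };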

I decided on using a full SoC rather than just a microcontroller to help speed up development and to potentially allow for some image processing and other calculations to be done locally. The Pi Zero W seems like the best bet for this at the moment due to its native camera support and on-board WiFi (and the large support community is a huge plus!). Using a full Linux system will also make software design easier, without requiring constant firmware reflashing every time the software changes. It will be simple enough to remote in to each robot for control and status updates over SSH.

I plan on defining the interface and functionality of each module in the next few posts. I also want to outline and explain my method for robot localization in detail. That will be my next post.


ICRS: Details and Goals

Description

Infrastructure and construction robots are a group of modular robotic agents that can work cooperatively to automate various tasks such as construction, inspection, and repair. Each robot is capable of multiple functions, as its capabilities can be modified through the installation of different attachments on the unit. Each base unit is designed to be low cost to minimize the time and monetary investment and to protect against significant losses due to the failure of a single robot. A group of robots can be teleoperated to increase the efficiency of skilled operators or can be given basic directives to automate simple or repetitive tasks.

Problem A: Infrastructure

  • U.S. infrastructure is currently rated as subpar and is continuously degrading (graded D+ by the ASCE)
  • Part of the issue lies in the extreme costs of repairs necessary across the nation (estimated $4.59 trillion needed by 2025 to fix)
  • Failing to make this massive investment would drag on the economy ($3.9 trillion in GDP lost by 2025)
  • Too many problems, too expensive, takes too long

Problem B: Disaster Relief Management

  • Disasters are difficult to recover from and each disaster has unique requirements and problems that need to be addressed
  • Disasters are inherently unpredictable, so disaster preparation would require addressing all possible problems before one occurs, a prospect that can be prohibitively expensive
  • It's also expensive to be purely reactive and ship the required relief materials only after each disaster has occurred
  • Loss of life is worst immediately following a disaster, before relief supplies can reach their destination
  • Massive, repeated shipments can be problematic due to bureaucratic and logistical delays

Problem C: Construction and Landscaping

  • Building homes and structures is expensive and time consuming
  • Machines for construction are expensive and highly specialized, sometimes requiring many different machines for a single project
  • Cheaper, mass-produced materials result in low levels of customization and customer satisfaction, and still require expensive shipment to the destination
  • Heavy earth-moving machinery, which can be expensive or difficult to acquire, is often needed for landscaping projects
  • Using heavy machinery can be difficult and requires additional skills; alternatively, hiring professionals only makes the work more expensive

Problem D: Non-Terrestrial Construction

  • Satellite launches are limited in both volume and weight
  • After the heavy costs associated with initial construction, there are currently no options for repairs in the case of damage
  • Space junk is a problem without a good method of deorbiting debris
  • It's expensive and restrictive to need to send all materials for constructing a habitat on another planet
  • Materials to build habitats exist on other planets, but no machines are currently capable of utilizing them or constructing habitats

Solution

  • Teleoperated robots!
  • Remote operation by skilled workers or semi-autonomous task completion
  • Perform inspection, and eventually construction, more cost-effectively
  • Makes workers more efficient and amplifies their capabilities
  • Increase in safety with cheap, expendable robots performing dangerous work
  • Flexible function using modular attachments means each robot is capable of many different actions
  • Small size means robots can work concurrently in the same location, reducing multiple phases of a project into a single, incremental step
  • Swarm methodology for a group of robots means tasks can be accomplished much more quickly
  • A combination of a large swarm and modularity means specific tasks can be dynamically allocated as requirements shift

Goals

Tier 1

  1. Robot with modular attachments that enable it to serve various functions
  2. Cheap base model that can be produced in quantity and has basic motion, communication, and sensor capabilities
  3. Assignable roles so that broken robots can easily be replaced by spare units or units can be re-assigned based on need
  4. Teleoperation with visual data so that users can get direct feedback about the status of the project’s target
  5. Limited physical intervention required by users
  6. Ability to direct movement and basic tasks to be performed by the robots

Tier 2

  1. Modular attachments can be swapped autonomously
  2. Dynamic role allocation so that the robots automatically determine the best distribution to get a task done efficiently
  3. Coordinated motion so the robots can be given simple direction to accomplish group behavior

Tier 3

  1. Machine learning can automatically flag problematic inspection data
  2. Augmented reality data so that the users can see the project target and the current progress side by side
  3. Advanced autonomy, with the robots having the capability to perform a wide range of tasks with little user control required

Specifications

Sensors

  • Camera for teleoperation and visual data collection
  • GPS
  • IMU
  • Attachment identification

Communication

  • Central communication node to route mesh network packets
  • Master node compiles debug information and swarm state
  • Master establishes swarm requirements and passes them on to individual agents
  • Handled over WiFi with the master node providing the central access point

User Interface

  • Communication through a web interface hosted on the central node
  • Accessible by connecting to the swarm network
  • High level control abilities to direct general motion of the group and assign tasks
  • Lower level control also available for finer motion control
  • Individual robots are selectable so that debug information, robot state, and individual command interfaces are available

Job Allocation

  • Jobs are dynamically allocated and passed on to the individual robots via communication with the master node
  • Robots automatically connect to the attachments required to perform their assigned tasks

Potential Attachments

  • Gripper: Movement of structural material
  • Dumper: Transportation and removal of debris, earth, sand, etc.
  • Arm: Fine manipulation of objects
  • Screwdriver
  • Drill

Milestones

1. First Functional Prototype

  • Single robotic unit
  • Basic direct control interface (command line, GPS coordinates, etc.)
  • Basic motion and movement commands with low accuracy

2. Attachment Prototypes

  • Several basic, modular attachments for the prototype robot so it can perform various functions (e.g. gripper, arm, digger, screwdriver, drill, material transportation)
  • Automatic identification and control of the different attachments

3. Prototype Swarm

  • Multiple units controllable via a single interface
  • Different attachments to show multi-use cooperation
  • Basic direct control interface for individual units as well as group movements
  • Central communication hub to route communication between robots and provide single access point that distributes commands to individual swarm members

4. Survey and Analysis Demo

  • Demonstration of basic survey capabilities using direct control interface and teleoperated swarm
  • Robots coordinate with each other to map out a structure and provide detailed pictures
  • Optional additional sensors for measuring other useful data (e.g. radiation, temperature, vibration)

5. Repair Demo

  • Demonstration of basic repair capabilities using direct control interface and teleoperated swarm
  • Robots are capable of moving repair materials into place and performing the repairs without physical operator intervention
  • Robots can cooperate as a cohesive group to transfer repair materials and/or remove broken material

6. Construction Demo

  • Demonstration of basic construction capabilities using direct control interface and teleoperated swarm
  • Robots are capable of assembling an entirely new structure without physical operator intervention
  • Robots are capable of working together to prepare construction area and build a structure