The design for the board to supply power to both the Pi Zero and robot modules is fairly simple. Its two components are a single-cell LiPo charging circuit based on the MCP73831 and a Pololu adjustable step-up regulator. The charging circuit is straightforward, so I’ll just explain the step-up circuit, which takes the output voltage from the single-cell battery, passes it to the four-pin input side of the step-up regulator, and gets back a boosted voltage from the four-pin output side of the regulator.
I also added four beefy diodes between the regulator and the 5V supply for the system. My reasoning was to allow multiple power boards, and therefore multiple LiPo batteries, to be used in the system. Each regulator is limited to roughly three amps of current, and the other boards, Arduinos, and the Pi Zero take up a fair chunk of that. I was worried that a single regulator wouldn’t be able to supply enough current to the motors, so I added the diodes to let multiple power boards be safely placed in parallel. I also decided to use an adjustable regulator so that its output can be set above 5V to compensate for the diode forward voltage; for example, with a forward drop of roughly 0.5V, setting the regulator to 5.5V yields the desired 5V after the diodes.
The first board I’ll go over is the Motor Board prototype. It’s fairly simple; excluding the Arduino that handles communication and control, it only consists of encoder connectors and a dual H-bridge. My planned use for this prototype will be to control the main drive motors of the robot base module. The schematic and PCB are shown below.
H-Bridge
I’m using an L293D chip for the motor driver. As shown in Figure 10 of the datasheet, each half of the chip can function as a bidirectional motor controller. By driving the chip’s control inputs with PWM signals, the speed of the motors can also be controlled. Unfortunately, each half of the bridge can only handle up to 600mA, which is relatively low, but it’ll be sufficient for controlling the basic motors I plan on using on the prototype robot.
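To make that concrete, something like the following Arduino sketch is what I have in mind for driving one motor through half of the L293D. The pin assignments here are placeholders, not the board’s final pinout; the enable pin just needs to support PWM.

```cpp
// Drive one motor through half of an L293D: two direction inputs plus a
// PWM'd enable pin. Pin numbers are placeholders, not the final pinout.
const int EN1 = 9;  // enable pin (must support PWM)
const int IN1 = 7;  // direction input 1
const int IN2 = 8;  // direction input 2

// speed: -255..255; the sign sets direction, the magnitude sets PWM duty
void setMotor(int speed) {
  if (speed >= 0) {
    digitalWrite(IN1, HIGH);
    digitalWrite(IN2, LOW);
  } else {
    digitalWrite(IN1, LOW);
    digitalWrite(IN2, HIGH);
  }
  analogWrite(EN1, constrain(abs(speed), 0, 255));
}

void setup() {
  pinMode(EN1, OUTPUT);
  pinMode(IN1, OUTPUT);
  pinMode(IN2, OUTPUT);
}

void loop() {
  setMotor(128);   // half speed forward
  delay(2000);
  setMotor(-128);  // half speed reverse
  delay(2000);
}
```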
Encoders
The encoders I’m using are KY-040 rotary encoders, which will measure the number of rotations of the motors and provide feedback to the motor controller. The linked description explains how the encoders work better than I can, but essentially the Arduino will measure the motor’s speed and number of rotations and apply more or less current to drive it to the desired end position. I plan on using a basic PID loop for this, which I will cover in a later post.
In addition to the encoder connectors on the left, I’ve also added a basic debounce circuit in the top left of the schematic. Because these encoders use mechanical switches, they’re subject to mechanical bouncing of the switch contacts, so I plan on using a capacitor as a low-pass filter to absorb these bounces and clean up the encoder signal.
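In software, reading the debounced encoder can then be as simple as an interrupt that compares the two channels on each transition. Here’s a rough sketch, assuming an Uno-class Arduino; the pin choices are placeholders:

```cpp
// Interrupt-driven counting for a KY-040 encoder. CLK must be on an
// external-interrupt pin (2 or 3 on an Uno/Nano).
const int ENC_CLK = 2;  // encoder clock (interrupt pin)
const int ENC_DT  = 3;  // encoder data (direction)

volatile long encoderCount = 0;

void onEncoderTick() {
  // On each CLK edge, DT tells us which direction the shaft moved.
  if (digitalRead(ENC_DT) == digitalRead(ENC_CLK)) {
    encoderCount++;
  } else {
    encoderCount--;
  }
}

void setup() {
  pinMode(ENC_CLK, INPUT_PULLUP);
  pinMode(ENC_DT, INPUT_PULLUP);
  attachInterrupt(digitalPinToInterrupt(ENC_CLK), onEncoderTick, CHANGE);
  Serial.begin(115200);
}

void loop() {
  // The RC filter handles debouncing in hardware, so the ISR stays simple.
  noInterrupts();
  long count = encoderCount;  // copy atomically before printing
  interrupts();
  Serial.println(count);
  delay(100);
}
```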
Before doing the circuit design and PCB layout I wanted to briefly outline the architecture of the core robot modules and the communication between them. The core modules are the ones that will be present on every robot in the swarm and provide the critical functionality required for the robot to operate. To make programming easier and reduce load on the main processor, I’ve decided that each module will have a processor that will intelligently communicate with the main board and abstract away as many unnecessary details as possible.
Main Processor
The main board will be a single board computer that will do all of the “thinking” for the whole robot. It will handle communication to the swarm master and pass information between each of the modules within a robot. Due to the large community and low cost I’ve decided to use the Pi Zero W as the main processor. I’ll be using the GPIO header as the main connector and each module will be connected together through this common header. Each of the submodules will be I2C slaves and will be directed by the Raspberry Pi on the I2C bus. The Pi Zero W also has the benefit of built-in WiFi which further reduces the cost and complexity of the robot.
Power Module
The power module will be fairly dumb, with its only task being to supply power to the whole module stack. For the first iteration I only plan on measuring battery voltage/percentage via the onboard processor, but I may expand these capabilities in the future to include things like power usage, current, battery health, etc.
Motor Controller
The motor controller board will be responsible for the actual movement of the robot. My initial design will have the ability to control two motors as well as encoders to close the loop and verify that the robot has moved where expected. As I mentioned in my previous post, encoders are subject to drift, so they will only be used to verify that each individual move is in the right ballpark. This means that the encoder position will be reset after each move is completed.
I plan on having this module be intelligent enough to handle all coordinated motion without intervention from the main processor board. The Raspberry Pi should be able to send the board an XY position, say “Go here,” and have the motor board handle the rest. The Pi will also be able to query this submodule for status information such as the current position, the state of the move, etc.
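As a rough sketch of what that interface could look like on the motor board’s Arduino, here’s a minimal I2C slave skeleton using the Wire library. The address and the byte-level command format are placeholders I made up for illustration, not a finalized protocol:

```cpp
#include <Wire.h>

const uint8_t I2C_ADDR = 0x10;  // placeholder slave address

volatile int16_t targetX = 0, targetY = 0;    // commanded position
volatile int16_t currentX = 0, currentY = 0;  // updated by the motion loop
volatile bool moveDone = true;

// The Pi writes a target as four bytes: X high, X low, Y high, Y low.
void receiveEvent(int numBytes) {
  if (numBytes >= 4) {
    int16_t xHi = Wire.read();
    int16_t xLo = Wire.read();
    int16_t yHi = Wire.read();
    int16_t yLo = Wire.read();
    targetX = (xHi << 8) | xLo;
    targetY = (yHi << 8) | yLo;
    moveDone = false;
  }
  while (Wire.available()) Wire.read();  // discard anything unexpected
}

// The Pi reads back the current position plus a move-complete flag.
void requestEvent() {
  uint8_t buf[5] = {
    (uint8_t)(currentX >> 8), (uint8_t)currentX,
    (uint8_t)(currentY >> 8), (uint8_t)currentY,
    (uint8_t)moveDone
  };
  Wire.write(buf, 5);
}

void setup() {
  Wire.begin(I2C_ADDR);          // join the bus as a slave
  Wire.onReceive(receiveEvent);  // "go here" commands
  Wire.onRequest(requestEvent);  // status queries
}

void loop() {
  // The PID/encoder motion control would run here, driving the motors
  // toward (targetX, targetY) and setting moveDone on arrival.
}
```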
Localization
Due to the relative complexity of the localization algorithm, as outlined in my previous post, the localization board will be the most complicated. It requires IR LEDs for transmitting the sync signal, IR receivers for receiving other robots’ sync signals, and ultrasonic transducers for sending and receiving localization pings.
This submodule won’t be as independent as the motor controller board due to the inter-robot communication requirements of the localization algorithm, but the interface should be simple. The board will only take one command, to send out a ping, which triggers the sync signal and localization signal on the IR LED and ultrasonic transmitter. Otherwise the board will constantly wait to receive an IR sync signal and will calculate the distance to the transmitting robot based on how long it takes for the localization signal to arrive. The main processor board will then be able to query the localization board for the distance to the transmitting robot.
With the basics of the submodules laid out the next step is designing the circuits and PCBs!
One of the biggest problems with precise motion in robotics is localization. Encoders are only so accurate and tend to drift over time. Absolute positioning systems like GPS have error margins measured in meters. The solutions to this are often complicated (e.g., SLAM) or expensive (e.g., RTK GPS). As a cheap and simple alternative, I’m going to attempt to use incremental trilateration to enable the robots to determine their own position relative to their siblings.
Why It’s Important
An accurate map of where the robots are is critical for proper planning of their movements. Without it they could get in each other’s way or fall off of a cliff! It’s also necessary for them to perform tasks that require precise motion such as moving debris or laying brick. While the robots will also be teleoperated and have an onboard camera, it’s hard for the human eye to properly determine depth and estimate measurements from a 2D image.
The most common method for robot positioning is using encoders to measure the distance the wheels have traveled by counting rotations. There are some caveats to this method, however. If a wheel slips, the encoder’s accuracy suffers and the robot thinks it’s traveled farther than it has. Over time the encoder’s measurement will drift farther and farther from reality. This problem will only be exacerbated by the rough and slippery terrain that these robots will eventually be operating on. With incremental trilateration, however, the robot will recalculate an absolute position every time it moves, and this absolute measurement won’t be susceptible to drift.
Incremental Trilateration
The method I’m proposing for robot localization is based on trilateration, which is the same method GPS satellites use to determine position. This method, however, will be able to attain greater accuracy with cheaper and less complex hardware. Rather than the speed-of-light radio signals that GPS satellites use for trilateration, I plan on using much slower sound waves to do the same calculations. This makes it possible to do signal detection and calculation with a relatively slow microcontroller instead of high-speed DSPs and custom silicon. I also plan on sending an IR blast at the beginning of each measurement to synchronize all of the robots before the sound pulse is sent. This sync signal will provide a trigger to the robots acting as base points so that they don’t always need to be waiting for sound pulses.
Incremental trilateration consists of four steps:
Movement
Sync Transmit
Signal Transmit
Calculation
Initial Conditions
For this method to work the robots need to start off at set points: the robot master and two of the robots must begin at known locations. Trilateration calculates a fourth point from three base points, so the absolute positions of those three base points need to be known.
Steps 1-3
The animation above covers the movement, sync transmit, and signal transmit steps of the process. The big square and two smaller circles on the sides represent the base station and two of the robots at known points.
Movement
In this step the robot that’s currently active moves, either towards a target or to explore. In the example above the robot in front moves along the Y axis from its position directly in front of the base station to an indeterminate position in front and to the right of where it was before.
Sync Transmit
The expanding red circle sent out from the robot represents a sync signal that will be sent from the robot at an unknown position to the three other robots at known locations. In my planned implementation this will be a 40kHz IR signal. Once the three receiving robots detect this signal, they know to start counting and wait for the signal transmission. It should be noted that I’m ignoring the travel time of the IR pulse: the speed of light is fast enough that it’s insignificant over the small distances the robots will be moving.
Signal Transmit
Once the sync IR signal has been received, the robots will start counting until they see the sound pulse. I plan on using 40kHz ultrasonic transducers to handle this and generate an inaudible sound wave. Once the robots detect the sound pulse, they will stop counting and save the time difference between the sync and signal transmissions. Using the speed of sound, they can then calculate the distance to the robot that sent the transmissions.
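Here’s a bare-bones sketch of that receive sequence. I’m assuming active-low digital outputs for both receivers (typical for demodulating IR receiver modules; the ultrasonic side would need an amplifier and comparator to produce a similar clean edge), and I’ve left out timeouts for clarity:

```cpp
// Measure one sync/signal exchange and convert it to distance.
// Pin choices and the active-low outputs are assumptions about the hardware.
const int IR_RX_PIN = 2;            // demodulated 40kHz IR receiver output
const int US_RX_PIN = 3;            // conditioned ultrasonic detector output
const float SOUND_M_PER_S = 343.0;  // speed of sound at ~20C

float measureDistanceMeters() {
  // 1. Wait for the IR sync pulse (IR travel time treated as zero).
  while (digitalRead(IR_RX_PIN) == HIGH) {}
  unsigned long tSync = micros();

  // 2. Wait for the ultrasonic ping to arrive and timestamp it.
  while (digitalRead(US_RX_PIN) == HIGH) {}
  unsigned long tPing = micros();

  // 3. The elapsed time is the ping's time of flight; convert to meters.
  return (tPing - tSync) / 1e6 * SOUND_M_PER_S;
}

void setup() {
  pinMode(IR_RX_PIN, INPUT);
  pinMode(US_RX_PIN, INPUT);
  Serial.begin(115200);
}

void loop() {
  Serial.println(measureDistanceMeters(), 4);  // distance in meters
}
```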
Step 4: Calculation
Once each of the base robots calculates the distance to the moving robot, it can effectively draw a circle around itself whose radius is that calculated distance. Each base robot then knows definitively that the moving robot is somewhere on the edge of its circle.
Robot 1 calculates the distance. The base station calculates the distance. Robot 2 calculates the distance.
Using the circles drawn by each of the base robots and the known direction the robot traveled in, it can then be determined that the moving robot sits at the point where the three circles overlap, as displayed in the image above.
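The math itself is straightforward: subtracting the circle equations pairwise cancels the squared terms and leaves a small linear system. A sketch of that solve in plain C++ (the structure and function names are just for illustration):

```cpp
// Find the moving robot's position from three known base points and the
// three measured distances to it.
struct Point { float x, y; };

Point trilaterate(Point p1, float r1, Point p2, float r2, Point p3, float r3) {
  // Subtracting circle 2 from circle 1, and circle 3 from circle 2,
  // gives two linear equations: A*x + B*y = C and D*x + E*y = F.
  float A = 2 * (p2.x - p1.x);
  float B = 2 * (p2.y - p1.y);
  float C = r1 * r1 - r2 * r2 - p1.x * p1.x + p2.x * p2.x
          - p1.y * p1.y + p2.y * p2.y;
  float D = 2 * (p3.x - p2.x);
  float E = 2 * (p3.y - p2.y);
  float F = r2 * r2 - r3 * r3 - p2.x * p2.x + p3.x * p3.x
          - p2.y * p2.y + p3.y * p3.y;

  // Solve the 2x2 system with Cramer's rule.
  float den = A * E - B * D;
  return Point{ (C * E - B * F) / den, (A * F - C * D) / den };
}
```

Note that the denominator goes to zero when the three base points are collinear; in that degenerate case the circles alone leave two mirror-image solutions, which is where the known direction of travel comes in.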
Limitations
There are a few limitations to the incremental trilateration method that I’d like to explain and propose solutions to.
Initial Conditions
The first and most inconvenient limitation is that the robots will require known initial conditions. This means that at least three nodes in the localization network need to be at predefined positions; otherwise it’s impossible to calculate the position of the moving robot. This makes setup a little harder and introduces accuracy problems if the initial conditions aren’t perfect.
Some of this can be mitigated by the fact that each node can determine its distance to the master node using the sync-and-signal method. If each robot is placed along a single straight line (which can be considered a line perpendicular to the Y axis at a known value of Y), it can send sync and signal transmissions to the master to determine its X offset; for a master at the origin, a measured distance d along that line gives an X offset of √(d² − Y²).
Another possibility would be adding three IR and ultrasonic receivers to the master at predefined locations so that the master itself can act as the three reference points for the moving robot. This introduces some complexity but may ultimately be worth it.
Turn-Based Movement
Another limitation is that in the above scenario with four nodes, only one robot can move at a time, since it needs the three reference points to be stationary. This is less limiting in larger networks: with three reference points required, (N – 3) robots can move at once, where N is the number of robots in the network, so the limitation shrinks as N grows. However, because the IR and ultrasound use the air as a common bus, the actual transmissions will need to be kept to one at a time to prevent collisions.
Accuracy
Accuracy is the biggest concern with this system. Until I test it I won’t know its exact accuracy, but there is a lot of variability that can cause problems. The reason I’m using ultrasound for the signal transmission instead of light is that the speed of light is much too fast, and the clock speed of microcontrollers much too low, to properly detect the signal over small distances. The ultrasound still has the same limitations, albeit to a lesser degree.
The Arduino micros() timer has a resolution of four microseconds. Since the speed of sound is 343 m/s, the ultrasonic pulse travels 0.001372 meters (343 m/s × 4e-6 s), or 1.372 millimeters, per increment of the micros() counter. This is only the maximum theoretical resolution, however, since it doesn’t account for things like digital read or sensor latency. Ultimately the actual resolution is something I’ll have to determine experimentally. I’m hoping for 1cm accuracy in my initial implementation and will have to search for optimizations should that not be immediately achievable.
Another thing to take into account is that the speed of sound changes with temperature. However, this can be corrected by using a temperature sensor to more accurately calculate the speed of sound, as shown here.
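The standard first-order approximation is a one-liner; the temperature reading would come from whatever sensor ends up on the board:

```cpp
// First-order approximation of the speed of sound in dry air.
// Returns m/s; roughly 343 m/s at 20C.
float speedOfSound(float tempC) {
  return 331.3 + 0.606 * tempC;
}
```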
Next up is planning out and designing the actual hardware!
Before diving into the nitty-gritty of the robot design I wanted to take a moment and lay out a brief description of the network and base robot architecture. Below is a block diagram and description of both the network topology and the configuration of the robots.
Network Topology
The pure swarm approach for a fully modular network would involve a mesh network with no centralized control source. Instead of doing this I’m opting for a robot swarm with a central control node, much like how a beehive has a queen. And, since robot dancing has not reached bee levels of communication, I’m going to leverage existing technology and use WiFi. This will make it easier to use existing single board computers (e.g. the Raspberry Pi) for robot control instead of making a homebrew control board. Having all of the robots on a single WiFi network will also make it easy to remotely log in to individual robots for telepresence control.
I also plan on having a single, central node to handle the complex control and collate all of the data provided by the individual swarm robots. This central node will also host the WiFi access point. Having a single master may seem counter-intuitive for a robot swarm, but it will make development and control much easier. A powerful central computer can do complicated operations such as image processing and computing the most efficient paths for each robot to take. The central node can also handle delegation, directing each robot to assume a role in heterogeneous swarms when different tasks need to be handled. Another benefit of a single master node is debugging: having the central node keep track of all of the data will make it easier to access the swarm’s status and provide a clear picture of how the system is operating.
Base Robot Configuration
I’ve tried my best to limit the robot’s base design (i.e., the components that will be common between all robots no matter their role or attached modules) to the very minimum. The block diagram above is what I came up with.
Each robot will have three core modules built in to the base design: one for power control and distribution, one for motor control, and one for localization. The only exception will be the central node which won’t have the motor control module. The modules will all report to a central CPU. Each of these modules will be intelligent with its own microcontroller for real time control and calculations. Having a microcontroller for each module abstracts away the processing required for each module and lets the CPU retrieve processed data and send commands without having to manage every single component.
I decided on using a full SoC rather than just a microcontroller to help speed up development and to potentially allow some image processing and other calculations to be done locally. The Pi Zero W seems like the best bet at the moment due to its native camera support and onboard WiFi (and the large support community is a huge plus!). Using a full Linux system will also make software design easier, without requiring the robot to be constantly retrieved and reflashed every time the software changes. It will be simple enough to remote in to each robot over SSH for control and status updates.
I plan on defining the interface and functionality of each module in the next few posts. I also want to outline and explain my method for robot localization in detail. That will be my next post.
Infrastructure and construction robots are a group of modular robotic agents that can work cooperatively to automate various tasks such as construction, inspection, and repair. Each robot is capable of multiple functions, as its capabilities can be modified through the installation of different attachments on the unit. Each base unit is designed to be low cost to minimize the time and monetary investment and to protect against significant losses due to the failure of a single robot. A group of robots can be teleoperated to increase the efficiency of skilled operators or can be given basic directives to automate simple or repetitive tasks.
Problem A: Infrastructure
U.S. infrastructure is currently rated as subpar and is continuously degrading (graded D+ by the ASCE)
Part of the issue lies in the extreme cost of the repairs needed across the nation (an estimated $4.59 trillion by 2025)
Leaving this investment gap unaddressed would itself drag on the economy (a projected $3.9 trillion GDP loss by 2025)
Too many problems, too expensive, takes too long
Problem B: Disaster Relief Management
Disasters are difficult to recover from and each disaster has unique requirements and problems that need to be addressed
Disasters are inherently unpredictable, so disaster preparation would require addressing all possible problems before they occur, a prospect that can be prohibitively expensive
It’s also expensive to be reactionary and ship required relief materials for every disaster only after the disaster has occurred
Loss of life is worst immediately following a disaster, before relief supplies can reach their destination
Massive, repeated shipments can be problematic due to bureaucratic and logistical delays
Problem C: Construction and Landscaping
Building homes and structures is expensive and time consuming
Machines for construction are expensive and highly specialized, sometimes requiring many different machines for a single project
Cheaper, mass-produced materials result in low levels of customization and customer satisfaction, and still require expensive shipment to the destination
Heavy earth moving machinery is often needed for landscaping projects which can be expensive or difficult to acquire
Using heavy machinery is difficult and requires additional skills; alternatively, hiring professionals only makes the work more expensive
Problem D: Non-Terrestrial Construction
Sending up satellites is limited by both volume and weight
After the heavy costs associated with initial construction, there are currently no options for repairs in the case of damage
Space junk is a problem without a good method of deorbiting debris
Expensive and restrictive to need to send all materials to construct a habitat on another planet
Materials to build habitats exist on other planets, but no machines currently exist that are capable of utilizing them or constructing habitats
Solution
Teleoperated robots!
Remotely operated by skilled workers, or semi-autonomous task completion
Perform inspection, and eventually construction, more cost-effectively
Makes workers more efficient and amplifies their capabilities
Increase in safety with cheap, expendable robots performing dangerous work
Flexible function using modular attachments means each robot is capable of many different actions
Small size means robots can work concurrently in the same location, reducing multiple phases of a project into a single, incremental step
Swarm methodology for a group of robots means tasks can be accomplished much quicker
A combination of a large swarm and the modularity means specific tasks can be dynamically allocated due to shifting requirements
Goals
Tier 1
Robot with modular attachments that enable it to serve various functions
Cheap base model that can be produced in quantity and has basic motion, communication, and sensor capabilities
Assignable roles so that broken robots can easily be replaced by spare units or units can be re-assigned based on need
Teleoperation with visual data so that users can get direct feedback about the status of the project’s target
Limited physical intervention required by users
Ability to direct movement and basic tasks to be performed by the robots
Tier 2
Modular attachments can be swapped autonomously
Dynamic role allocation so that the robots automatically determine the best distribution to get a task done efficiently
Coordinated motion so the robots can be given simple direction to accomplish group behavior
Tier 3
Machine learning can automatically flag problematic inspection data
Augmented reality data so that the users can see the project target and the current progress side by side
Advanced autonomy with the robots having the capabilities to perform a great deal of tasks with little user control required
Specifications
Sensors
Camera for teleoperation and visual data collection
GPS
IMU
Attachment identification
Communication
Central communication node to route mesh network packets
Master node compiles debug information and swarm state
Master establishes swarm requirements and passes them on to individual agents
Handled over WiFi with the master node providing the central access point
User Interface
Communication through a web interface hosted on the central node
Accessible by connecting to the swarm network
High level control abilities to direct general motion of the group and assign tasks
Lower level control also available for finer motion control
Individual robots are selectable so that debug information, robot state, and individual command interfaces are available
Job Allocation
Jobs are dynamically allocated and passed on to the individual robots via communication with the master node
Robots automatically connect to the attachments required to perform their assigned tasks
Potential Attachments
Gripper: Movement of structural material
Dumper: Transportation and removal of debris, earth, sand, etc.
Arm: Fine manipulation of objects
Screwdriver
Drill
Milestones
1. First Functional Prototype
Single robotic unit
Basic direct-control interface (command line, GPS coordinates, etc.)
Basic motion and movement commands with low accuracy
2. Attachment Prototypes
Several basic, modular attachments for the prototype robot so it can perform various functions (e.g. gripper, arm, digger, screwdriver, drill, material transportation)
Automatic identification and control of the different attachments
3. Prototype Swarm
Multiple units controllable via a single interface
Different attachments to show multi-use cooperation
Basic direct-control interface for individual units as well as group movements
Central communication hub to route communication between robots and provide single access point that distributes commands to individual swarm members
4. Survey and Analysis Demo
Demonstration of basic survey capabilities using the direct control interface and a teleoperated swarm
Robots coordinate with each other to map out a structure and provide detailed pictures
Optional additional sensors for measuring other useful data (e.g. radiation, temperature, vibration)
5. Repair Demo
Demonstration of basic repair capabilities using the direct control interface and a teleoperated swarm
Robots are capable of moving repair materials into place and performing the repairs without physical operator intervention
Robots can cooperate as a cohesive group to transfer repair materials and/or remove broken material
6. Construction Demo
Demonstration of basic construction capabilities using the direct control interface and a teleoperated swarm
Robots are capable of assembling an entirely new structure without physical operator intervention
Robots are capable of working together to prepare construction area and build a structure
New project alert! I was throwing around some multidisciplinary project ideas with two Mechanical Engineering friends and we talked about having some sort of robotic swarm for construction, disaster relief, infrastructure inspection, etc. Right now it’s not really a fully formed idea but I did submit it to the Hackaday Prize contest anyway since they’re currently running an “idea” seed funding phase. Here’s the project link. I think this idea has a lot of potential and I’m hoping this can grow into something that’s useful and beneficial to society. More details to follow!
While I’m putting the design for the OpenADR mop module together, I decided to do a quick test of the 3D printed pump I’ll be using to move the water/cleaning solution from the internal reservoir to the floor. The pump I’m planning to use is a 3D printed peristaltic pump from Thingiverse.
For my test setup, I used another of the cheap yellow motors, the same kind that powers the wheels on the main chassis and the brushes on the vacuum module, to drive the pump. I threaded some surgical tubing from a full glass of water, through the pump, and into an empty glass. I then ran the motor off of 5V.
Overall the pump ran great, albeit a little slower than I anticipated. The next step is integrating it into the mop!
In my last post, I described the beginnings of the first module for OpenADR, the vacuum. With the Automation round of the Hackaday Prize contest ending this weekend, though, I decided to start working on a second module, a mop, before perfecting the vacuum module. The market for robotic vacuum cleaners is looking pretty crowded these days, and most of the design kinks have been worked out by the major manufacturers. Robotic mops, on the other hand, are far less common with the only major ones being the Scooba and Braava series by iRobot. Both of these robots seem to have little market penetration at this point, so the jury’s still out on what consumers want in a robotic mop.
I’ve been thinking through the design of this module for a while now. The design for the vacuum module was simple enough; all it required was a roller to disturb dirt and a fan to suck it in. Comparatively, the mop module will be much more complex. I don’t plan on having any strict design goals yet for the mop like I did with the vacuum given that the market is still so new. Instead, I’ll be laying out some basic design ideas for my first implementation.
The basic design I envision is as follows: water/cleaning solution gets pumped from a tank onto the floor, where it mixes with dirt and grime. This dirty liquid is then scrubbed and mopped up with an absorbent cloth. I know that probably sounds fairly cryptic now, but I’ll describe my plans for each stage of this process below.
Water Reservoir
Both the Scooba 450 and Braava Jet have tanks (750mL and 150mL, respectively) that they use to store cleaning solution or water for wetting the floor. The simplest way to add a tank to the mop module would be to integrate one into the module’s 3D printed design that I described in an earlier post. This is a little risky, however, as 3D printed parts can be difficult to make watertight (as evidenced by my struggles with sustainable sculptures). Placing the robot’s electronics and batteries near a reservoir of water has the potential to be disastrous. A much safer bet would be to use a pre-made container or even a cut plastic bottle.
Being an optimist, however, I’d rather take the risk on the 3D printed tank to take advantage of the customizability and integration that it would provide. In the case of the sculptures, I wanted to keep the walls thin and transparent. I won’t have such strict constraints in this case and can use a much more effective sealant to waterproof the tank. And just to be on the safe side, I can include small holes in the bottom of the chassis (i.e., around the tank) near any possible leaks so the water drips out of the robot before it can reach any of the electronics.
Dispensing of Water
The next design decision is determining how to actually get the water from the tank to the floor. While I looked for an easily sourceable water pump, I couldn’t find a cheap one small enough to fit well in the chassis. Luckily there are some absolutely amazing, customizable, 3D printed pumps on Thingiverse that I can use instead!
Disturbing Dirt
The biggest complaint about robot mops seems to be a lack of scrubbing effectiveness, especially on dirt trapped in the grout between tiles. The Braava uses a vibrating cloth pad to perform its scrubbing while the Scooba seems to use one of the brushed rollers from a Roomba. Both of these options seem to be lacking based on users’ reviews; the best option would be scrubbing brushes designed specifically for use with water (rather than the Roomba’s, which are designed to disturb carpet fibers during vacuuming). As with the vacuum module, however, I had a hard time finding bristles or brushes to integrate into my design. Unfortunately, using a roller made of flexible filament (i.e., my solution for the vacuum module) isn’t an option here, since it isn’t capable of the same scrubbing efficacy as a regular mop.
For my first version, I’m just going to use a microfiber cleaning cloth. This has the benefit of being washable and reusable, unlike the cleaning pads on the Braava, and yet I can still achieve some scrubbing functionality by mounting the cleaning cloth to a rotary motor.
Water Recovery
A mop that leaves dirty water on the floor isn’t a very effective mop, so some sort of water and dirt recovery is required. The Scooba uses a vacuum and squeegee to suck the water off of the floor back into a wastewater tank. The Braava’s cleaning pad, on the other hand, serves double duty and acts as both a scrubber and sponge to soak up the dirty water. Both of these options seem perfectly valid, but the Braava’s method seems like an easier implementation for a first revision. It’s also the method that conventional mops use. The microfiber cloth I decided to use for scrubbing can also serve to absorb the water and dirt from the floor.
It’s important to note, however, that using the absorption method for water recovery limits the robot’s water capacity and the amount of floor it can clean. The mop could have a 10L water reservoir, but if the cloth can only absorb 100mL of it, there will still be 9.9L of water left on the floor. The Braava only has a 150mL tank and 150 sq. ft. of range because its cleaning pad can only hold 150mL of water. I’ll have to do some testing on the microfiber cloths I use to determine the maximum capacity of the mop module.
Now that the navigation functionality of the main chassis is mostly up and running, I’ve transitioned to designing modules that will fit into the chassis and give OpenADR all the functions it needs (see my last post). The first module I’ve designed and built is the vacuum, since it’s currently the most popular implementation of domestic robotics on the market. Because this is my first iteration of the vacuum (and because my wife is getting annoyed at the amount of dust and dog hair I’ve left accumulating on the floor “for testing purposes”), I kept the design very simple: just the roller, the body (which doubles as the dust bin), and the fan.
Roller Assembly
The brush assembly is the most complicated aspect of the vacuum. Unable to find an easily sourceable roller on eBay, I opted to design the entire assembly from scratch. I used the same type of plain yellow motors that power the wheels on the main chassis to drive the roller.
The rollers themselves consist of two parts, the brush and the center core. The brush is a flexible sleeve, printed with the same TPU filament used for the navigation chassis’s tires, with spiraling ridges on the outside to disturb the carpet and knock dust and dirt particles loose. The center core is a solid cylinder with a hole on one end for the motor shaft and a smaller protruding cylinder on the other that serves as an axle. One roller is mounted on either side of the module, and both are driven by the motor in the center.
To print the vacuum module, I had to modify the module base design that I described in my last post. I shortened the front, where the brush assembly will go, so that dust is sucked up between the back wall of the main chassis and the front of the vacuum module’s dust bin and deposited inside the bin.
Fan Mounting
For the fan, I’ll be using Sparkfun’s squirrel blower. I plan to eventually build a 3D model of the fan so that it fits snugly in the module, but in the meantime, the blower mount is just a hole in the back of the module where the blower outlet will be inserted and hot-glued into place. In the final version, I will include a slot for a carbon filter in this mount, but given that I’m just working with a hole for the blower outlet in this first version, I cut up an extra carbon filter from my Desk Fume Extractor and taped that to where the air enters the blower to make sure dust doesn’t get inside the fan.
The blower itself is positioned at the top of the dust bin with the inlet (where the air flows in) pointed downwards. Once the blower gets clogged, the vacuum will no longer suck (or will it now suck?), so I positioned the inlet as high as possible on the module to maximize the space for debris in the dust bin before it gets clogged.
Dust Bin
The rest of the module is just empty space that serves as the vacuum’s dust bin. I minimized the number of components inside this dust bin area to reduce the risk of dust and debris causing problems. With the roller assembly placed outside the bin on the front of the module, the only component that will be inside of the dust bin is the blower.
With a rough estimate of the dimensions of the dust bin, the vacuum module has the potential to hold up to 1.7L! This assumes the entire dust bin can be filled, which might not be possible, but it’s still substantially more than the 0.6L of the Roomba 980 and 0.7L of the Neato Botvac.
Future Improvements
There are a few things I’d like to improve in the next version of the vacuum module since this is really just alpha testing still. The first priority is designing a fan mount that fits the blower and provides the proper support. Going hand in hand with this, the filter needs an easily accessible slot to slide in before the fan input (as opposed to the duct tape I am using now).
I also want to design and test several different types of rollers in order to compare their efficiency. The roller I’m using now turned out much stiffer than I’d like, so at the very least I need to redesign it to be more flexible. Alternatively, I could go with something more like the Roomba’s AeroForce rollers, which decrease the cross-sectional area of the air passage and thereby increase the air velocity. These rollers offer better suction and less opportunity for hair to get wrapped around them, but are a little less effective on thicker carpets.
Further, I need to make sure that the dust bin is in fact air-tight so that dust isn’t getting into the main chassis or back onto the floor. I included bolt mounts on the floor of the dust bin to connect the separate pieces together, but I don’t have mounts on the walls of the dust bin, and so I am using tape around the top of the bin to hold the pieces together for now. Since any holes in the dust bin provide opportunity for its contents to leak onto the floor, making sure I have a good seal here is critical. In the future I’d like to redesign these seams so that they are sealed more securely, possibly by using overlapping side walls.
Lastly, the vacuum module needs a lid. For the current version I intentionally left the lid off so that I can see everything while I’m testing. I plan to add a transparent covering to this version for that same purpose (and so dust doesn’t go flying everywhere!). In the final version, the lid will need to provide a good seal and be easily removable so that the dust bin can be emptied.
But before we do all that, let’s test this vacuum!