Design and Development of a Dual-Mode Autonomous/Manual Floor Scrubber Robot

Project Details

My picture

Cleaning floor surfaces is a persistent challenge in buildings, especially high-rise buildings. In this project, an intelligent robot was designed and built that performs surface cleaning autonomously. The project includes the following sub-sections:

• Design and construction of the robot's mechanical components and systems.

• Implementation of the robot's control system.

• Programming the robot.

Several designs were considered for the robot's motion mechanism. In the end, the following two were implemented:

1. Robot motion via a single driven front wheel.

2. Robot motion via two independently driven wheels.

In the first design, the robot initially used three wheels in a tadpole configuration, with Ackermann steering implemented on the two front wheels. However, this arrangement complicated the control system: constraints on the steering angles led to an undesirably large turning radius. I therefore switched to a delta configuration in which the driving and steering forces are produced by the single front wheel alone.
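The turning-radius limitation can be seen from the bicycle-model approximation of Ackermann steering, R = L / tan(δ). The numbers below are illustrative, not the project's actual dimensions:

```python
import math

# Bicycle-model approximation of Ackermann steering:
#   R = L / tan(delta)
# where L is the wheelbase and delta the steering angle.
# The dimensions here are made up for illustration.

def min_turning_radius(wheelbase_m, max_steer_deg):
    return wheelbase_m / math.tan(math.radians(max_steer_deg))

# With a 0.6 m wheelbase and steering limited to 30 degrees:
print(round(min_turning_radius(0.6, 30.0), 2))   # 1.04 (meters)

# A single steerable front wheel can approach 90 degrees of steering,
# so the achievable radius shrinks dramatically:
print(round(min_turning_radius(0.6, 80.0), 2))   # 0.11 (meters)
```

This is why capping the steering angle of an Ackermann front axle directly caps how tightly the robot can turn, while a fully steerable single front wheel avoids that bound.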

Each of these systems has its own strengths and weaknesses. An example of the turning behavior of the first type of robot is shown in a video at the bottom of this page.

My picture

This robot can operate in the following two modes:

• Completely intelligent and operator-independent (autonomous).

• Manual with the presence of an operator.
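One simple way to realize such dual-mode operation is a command arbiter that routes either the planner's or the operator's velocity command to the drive. The sketch below is illustrative, with invented names, not the project's actual code:

```python
# Hypothetical mode arbiter: in manual mode the operator's command
# reaches the wheels; in autonomous mode the planner's command does.

AUTONOMOUS, MANUAL = "autonomous", "manual"

class ModeArbiter:
    def __init__(self):
        self.mode = AUTONOMOUS

    def set_mode(self, mode):
        assert mode in (AUTONOMOUS, MANUAL)
        self.mode = mode

    def select_command(self, planner_cmd, operator_cmd):
        """Return the (linear, angular) velocity command to forward to the drive."""
        return operator_cmd if self.mode == MANUAL else planner_cmd

arbiter = ModeArbiter()
print(arbiter.select_command((0.4, 0.0), (0.0, 0.0)))  # planner wins: (0.4, 0.0)
arbiter.set_mode(MANUAL)
print(arbiter.select_command((0.4, 0.0), (0.1, 0.2)))  # operator wins: (0.1, 0.2)
```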

A view of the chassis design and other electromechanical components of the second type of robot is shown below.

My picture

Since the robot's motive energy is supplied by a number of batteries, efforts were made to keep the electromechanical system light to reduce the robot's energy consumption. To this end, multiple dynamic and static analyses were performed to optimize the chassis and other electromechanical parts of the robot.

My picture

A view of the mechanical parts of these two types of robotic systems is shown in the photos below.

My picture

The movement mechanism of the first robot is shown in the video below. In this video, the robot's movement mechanism is tested manually.

Download Summary

Charging Station and Control Architecture

A charging station was designed for the robot in this project; it provides battery charging and fresh-water refilling, as well as dirty-water discharge. The robot's perception system uses a LiDAR sensor. With this sensor, the robot builds an intelligible picture of its surroundings, including walls and both fixed and moving obstacles.

My picture

Due to the project's space and weight constraints, a mini PC with the following specifications was used as the central processor of these robots.

My picture

The image below shows a schematic of the structure and data flow in the ROS Control system for a scrubber cleaning robot. In this system, the Controller Manager module is responsible for loading, switching, and updating the controllers. Each controller, such as the Navigation and Path Following Controller or the Brush Rotation Intensity and Water Flow Controller, receives commands via ROS Interfaces (such as messages for the target path or cleaning commands).

These controllers then send the motion and operational commands to the robot's hardware through the Hardware Interface layer. At this stage, information is transferred to the physical components like the Wheel Motors, Cleaning Brush Motors, and Water Pump (Actuators) and the Sensors and Wheel Encoders (for determining position and cleaning status). The robot's status feedback (such as current location or dirty water level) is then returned to the system.

This structure allows ROS to provide precise, modular, and configurable control over the cleaning and movement functions of the scrubber robot.
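The read → update → write cycle this architecture implies can be sketched in plain Python. The class names below mirror the diagram's blocks, not the real ros_control C++ API, and the wheel model and gain are invented:

```python
# Schematic sketch of the periodic cycle a controller manager runs:
# read hardware state -> update controllers -> write actuator commands.

class HardwareInterface:
    """Stands in for the layer that talks to motors and encoders."""
    def __init__(self):
        self.encoder_speed = 0.0   # measured wheel speed
        self.motor_command = 0.0   # last command written to the motor

    def read(self):
        return self.encoder_speed

    def write(self, command):
        self.motor_command = command
        # crude plant model: the wheel moves partway toward the command
        self.encoder_speed += 0.5 * (command - self.encoder_speed)

class WheelSpeedController:
    def __init__(self, kp):
        self.kp = kp
        self.target = 0.0

    def update(self, measured):
        # proportional-only control, for clarity
        return self.kp * (self.target - measured)

hw = HardwareInterface()
ctrl = WheelSpeedController(kp=1.0)
ctrl.target = 1.0
for _ in range(20):                     # the manager's update loop
    hw.write(ctrl.update(hw.read()))
print(round(hw.encoder_speed, 2))       # 0.5 -- P-only control leaves steady-state error
```

The residual error in the output is the classic motivation for adding an integral term, which is what the PID controllers in the full architecture provide.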

My picture

In the diagrams below, the left side is the simulation environment, where the scrubber robot model runs inside Gazebo and communicates with ROS through specialized plugins. This allows testing of wheel motion, brush mechanisms, and robot behavior in different environments before working with real hardware.

On the right side is the real scrubber robot hardware, including wheel encoders, navigation sensors, brush motors, suction motors, and water-spray or pump systems. The hardware sends real sensor data to ROS and receives control commands for driving, cleaning functions, and motion updates.

Between simulation and hardware, the hardware_interface layer connects everything to ROS and ros_control, enabling the system to read joint states, send actuator commands, and manage different robot components in a unified structure. At the bottom, the Controller Manager and PID controllers handle precise control of wheel speeds, navigation, and cleaning mechanisms. This architecture ensures that the scrubber robot performs consistently and reliably in both simulation and real-world operation.
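The PID wheel-speed control mentioned above can be illustrated with a discrete PID acting on a simple first-order wheel model. The gains, the time constant, and the model itself are invented for the sketch:

```python
# Discrete PID controller driving a first-order wheel-speed model.
# All numbers here are illustrative, not tuned values from the project.

class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def step(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=2.0, ki=5.0, kd=0.0, dt=0.02)
speed = 0.0
for _ in range(500):                         # 10 s of simulated time
    command = pid.step(1.0, speed)           # target: 1.0 m/s
    speed += (command - speed) * 0.02 / 0.1  # wheel time constant 0.1 s
print(round(speed, 3))   # settles at the 1.0 m/s setpoint
```

Unlike the proportional-only case, the integral term drives the steady-state error to zero, which is what "precise control of wheel speeds" requires in practice.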

My picture

A view of the control interface for the second type of scrubber robot, which handles motion control of the BLDC hub motors, is shown in the photos below.

My picture

The image below shows a 3D simulation environment in Gazebo, where an autonomous floor-cleaning scrubber robot is being tested. The robot is positioned at the center of the simulated room, and a wide blue fan-shaped field represents the robot’s LiDAR sensor coverage. This sensor continuously scans the surrounding area to detect walls, obstacles, and people.

A human model and a vending machine are placed in the environment to evaluate the robot’s obstacle-avoidance and safe navigation capabilities. The LiDAR scan is used for real-time mapping, collision prevention, and path-planning, allowing the scrubber robot to operate safely in indoor environments such as malls, warehouses, or office buildings.

This simulation setup is typically used to test autonomous behaviors (including perception, mapping, and navigation) before deploying the cleaning robot in real-world applications.
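A toy version of the obstacle check such a LiDAR scan enables: find the closest return within a forward-facing cone and compare it against a safety distance. The scan values below are synthetic, not real sensor data:

```python
import math

# Find the smallest LiDAR range within ±half_cone of straight ahead;
# if it is inside the safety bubble, the robot should stop or replan.

def nearest_in_cone(ranges, angle_min, angle_inc, half_cone_rad):
    """Smallest range whose beam angle lies within the forward cone."""
    best = math.inf
    for i, r in enumerate(ranges):
        angle = angle_min + i * angle_inc
        if abs(angle) <= half_cone_rad and r < best:
            best = r
    return best

# 5 beams from -20° to +20°, 10° apart; an "obstacle" at 0.8 m dead ahead.
scan = [3.0, 2.5, 0.8, 2.6, 3.1]
d = nearest_in_cone(scan, math.radians(-20), math.radians(10), math.radians(15))
print(d, d < 1.0)   # 0.8 True -> trigger an avoidance/stop behavior
```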

My picture

The image below shows the RViz visualization environment, a ROS-based tool built with a Qt graphical interface. RViz is used here to monitor the autonomous floor-cleaning scrubber robot during mapping and navigation tests.

The central display shows a 2D occupancy grid map generated from the robot’s LiDAR sensor. Black areas represent obstacles, white areas represent free space, and the red points show live LiDAR reflections. The yellow square marks the robot’s current position.

Qt components (such as the left “Displays” panel, the top toolbars, and configuration windows) allow the operator to interact with real-time robot data, adjust visualization settings, and control the mapping process. This Qt-based interface makes RViz flexible for debugging sensor data, validating LiDAR performance, and analyzing the robot’s navigation behavior. This setup is typically used to evaluate how the scrubber robot builds maps, perceives obstacles, and localizes itself while performing autonomous cleaning tasks in indoor environments.
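The occupancy-grid idea can be sketched compactly: each LiDAR return is projected from the robot's pose into world coordinates, and the corresponding cell is marked occupied. Resolution and poses are invented for the example, and a real mapper would also ray-trace the free cells along each beam:

```python
import math

# Convert LiDAR returns into occupied cells of a sparse 2D grid,
# like the black cells of the occupancy grid RViz displays.

def mark_hits(ranges, angle_min, angle_inc, robot_xy, resolution, grid):
    """Mark the grid cell at the end of each beam as occupied."""
    rx, ry = robot_xy
    for i, r in enumerate(ranges):
        if math.isinf(r):
            continue                      # no return: nothing to mark
        a = angle_min + i * angle_inc
        hx, hy = rx + r * math.cos(a), ry + r * math.sin(a)
        cell = (int(hx // resolution), int(hy // resolution))
        grid[cell] = 1                    # 1 = occupied
    return grid

# Three beams at 0, 90, and 180 degrees; the middle one has no return.
grid = mark_hits([2.0, math.inf, 1.0],
                 angle_min=0.0, angle_inc=math.pi / 2,
                 robot_xy=(0.0, 0.0), resolution=0.5, grid={})
print(sorted(grid))   # [(-2, 0), (4, 0)]
```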

My picture

The videos below show the motion simulation of the first type of robot and its actual turning behavior. The minimum turning radius of this robot is greater than that of the second type.