Heavily coupled to the vision system is SLAM (simultaneous localization and mapping). Without an internal map of its surroundings, the boat's behavior can only be reactive; that is, the vehicle can respond only to sensor measurements (e.g., video, LIDAR) taken at the current instant. To provide a higher level of functionality, the vehicle must fuse past sensor readings with current sensor data and form an estimate of the important features in its environment. With an internal map, the vehicle can perform tasks such as path planning with obstacle avoidance and recursive color estimation for particular objects, and it can maintain more accurate pose estimates. SLAM combines sensor measurements and control inputs to estimate both the robot's pose and the features within its environment. For the purposes of the boat, these features are taken to be buoys in the water. The particular variant of SLAM used in this implementation is an adaptation of FastSLAM.
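The core FastSLAM idea can be illustrated with a deliberately simplified sketch: each particle carries a pose hypothesis plus its own independent Kalman estimate of a landmark (buoy) position. The 1D world, particle count, and noise parameters below are hypothetical and not taken from the actual implementation.

```python
import random, math

N = 100                      # number of particles (illustrative choice)
MOTION_NOISE = 0.05          # std. dev. of motion model noise (assumed)
MEAS_NOISE = 0.2             # std. dev. of measurement noise (assumed)

# Each particle: [pose_x, buoy_mean, buoy_var, weight]
particles = [[0.0, 0.0, 1e6, 1.0] for _ in range(N)]

def step(particles, control, measurement):
    """One predict/update/resample cycle for a 1D world with one buoy."""
    for p in particles:
        # Predict: propagate the pose with a noisy control input.
        p[0] += control + random.gauss(0, MOTION_NOISE)
        # Update: per-particle Kalman filter on the buoy's absolute position.
        predicted = p[1] - p[0]            # expected relative measurement
        innov = measurement - predicted    # innovation
        s = p[2] + MEAS_NOISE ** 2         # innovation variance
        k = p[2] / s                       # Kalman gain
        p[1] += k * innov
        p[2] *= (1 - k)
        # Weight by measurement likelihood.
        p[3] = math.exp(-0.5 * innov ** 2 / s) / math.sqrt(2 * math.pi * s)
    # Resample proportional to weight (low-variance resampling omitted).
    weights = [q[3] for q in particles]
    particles[:] = [list(random.choices(particles, weights=weights)[0])
                    for _ in particles]
    for p in particles:
        p[3] = 1.0

# Simulate: boat moves +1 per step; buoy fixed at x = 5 (toy scenario).
random.seed(0)
for t in range(4):
    true_pose = t + 1.0
    z = 5.0 - true_pose + random.gauss(0, MEAS_NOISE)
    step(particles, 1.0, z)

buoy_est = sum(p[1] for p in particles) / N   # particle-averaged buoy estimate
```

A full FastSLAM implementation generalizes this to 2D poses, range-bearing measurements, per-landmark EKFs with full covariance matrices, and data association across multiple buoys, but the particle-plus-per-landmark-filter factorization is the same.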
The A* path-planning method divides the traversable area into a grid of nodes. Each node serves as an evaluation point at which a cost can be estimated for a potential path. Increasing the number of nodes raises processing time sharply, but too few nodes introduce imprecision in the positional estimates of obstacles and path options, which can lead to collisions.
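A minimal version of grid-based A* can be sketched as follows. The occupancy grid, 4-connected neighborhood, and Manhattan-distance heuristic are illustrative assumptions, not details of the boat's actual planner.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected occupancy grid; 1 = obstacle, 0 = free.

    Returns a list of (row, col) nodes from start to goal, or None
    if no path exists.
    """
    rows, cols = len(grid), len(grid[0])

    def h(a, b):
        # Manhattan distance: admissible for 4-connected unit-cost moves.
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    open_set = [(h(start, goal), start)]   # priority queue of (f-cost, node)
    came_from = {}
    g_cost = {start: 0}
    closed = set()

    while open_set:
        _, node = heapq.heappop(open_set)
        if node == goal:
            # Reconstruct the path by walking parents back to the start.
            path = [node]
            while node in came_from:
                node = came_from[node]
                path.append(node)
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (r + dr, c + dc)
            if 0 <= nb[0] < rows and 0 <= nb[1] < cols and grid[nb[0]][nb[1]] == 0:
                ng = g_cost[node] + 1
                if ng < g_cost.get(nb, float('inf')):
                    g_cost[nb] = ng
                    came_from[nb] = node
                    heapq.heappush(open_set, (ng + h(nb, goal), nb))
    return None

# Example: a 3x3 grid with a wall forcing a detour around the right side.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

The trade-off described above shows up directly here: halving the cell size quadruples the number of grid nodes the search may expand, while overly coarse cells can let an obstacle and a free cell share a node.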