Key Performance Indicators (KPIs)
Key performance indicators (KPIs) are metrics that are used to proxy for system performance.
The ability to accurately determine the position of the turtlebot in a map of its environment.
The response time between sensing control inputs and moving the turtlebot. This includes the ability to communicate between the sensors and the turtlebot. We aim to move the turtlebot within half a second of sensor input.
The ability to move the turtlebot around the field in a desired manner (left, right, forward, backwards). We aim to be able to achieve all basic movements.
The ability to visualise the position of the TurtleBot in space. We aim to visualise the location and velocity of the turtlebot, as well as the control inputs, all within a response time of 0.5 seconds.
The ability to control how fast the turtlebot is moving, and change the velocity with the foot pedal. We aim to linearly control the velocity of the turtlebot, and change the direction of travel with the foot pedal.
Overall we were able to meet the KPIs with some amendments. We managed to localise the turtlebot successfully.
The main complexity in meeting this KPI was finding a model to accurately localise the position of the turtlebot. Given the built-in functionality of ROS and pre-established systems, mapping was a simple process that slotted into our system. However, retrieving information from the map and determining the position of the turtlebot itself proved difficult.

This was mostly due to the nature of how SLAM works: only relative locations between frames are stored, so the locations of objects on the map are hard to determine. That is, there is a vector from the start point on the map to an odometry frame, and a vector from the odometry frame to the base frame of the turtlebot. Once this became known, we tried to simply add these two vectors to produce a result; however, due to instantaneous inaccuracies in the values, the resulting frame became unpredictably distorted and wasn't always linear. To overcome this, we used the built-in ROS transformations to map these vectors to a position relative to the map frame, i.e. the frame the turtlebot originates from. These transformations use Adaptive Monte Carlo Localisation (AMCL), which corrects inaccuracies by combining multiple frames in a Monte Carlo algorithm to statistically estimate the position of the bot.
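The pitfall of adding the frame vectors directly can be sketched in 2D. The poses, numbers, and `compose` helper below are illustrative stand-ins for what ROS's tf/AMCL machinery does for us, not our actual code:

```python
# Why simply adding the two vectors fails: the odom->base vector is
# expressed in the odometry frame, so it must be rotated into the map
# frame before adding. Poses here are (x, y, theta) in 2D.
import math

def compose(parent_to_child, child_to_grandchild):
    """Compose two 2D poses, rotating the second into the first's frame."""
    x1, y1, th1 = parent_to_child
    x2, y2, th2 = child_to_grandchild
    return (x1 + math.cos(th1) * x2 - math.sin(th1) * y2,
            y1 + math.sin(th1) * x2 + math.cos(th1) * y2,
            th1 + th2)

# map -> odom: the odometry frame sits 1 m ahead, rotated 90 degrees
map_to_odom = (1.0, 0.0, math.pi / 2)
# odom -> base: the turtlebot is 1 m "forward" within the odom frame
odom_to_base = (1.0, 0.0, 0.0)

# Naive vector addition ignores the odom frame's rotation:
naive = (map_to_odom[0] + odom_to_base[0],
         map_to_odom[1] + odom_to_base[1])   # (2.0, 0.0) -- wrong

# Proper composition rotates the second vector first:
x, y, th = compose(map_to_odom, odom_to_base)
print(naive, (round(x, 2), round(y, 2)))     # (2.0, 0.0) (1.0, 1.0)
```

Once the odometry frame has any rotation, the naive sum points to the wrong place, which matches the distorted, non-linear behaviour we observed before switching to the ROS transformations.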
The responsiveness was successfully reduced to less than 0.5 seconds as per the KPI. Fortunately the network connection during testing was satisfactory and did not prevent reaching this goal. No optimisation was required to reduce response time.
The TurtleBot was successfully able to move in all directions. We found that at full extension, the foot pedal would sometimes flicker between zero and maximum amplitude, causing the turtlebot to stutter. This was avoided with proper control of the foot pedals during operation. It could also have been somewhat rectified by a moving window filter on the control inputs; however, this would affect the response time of the TurtleBot.
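The moving window filter we considered would look something like the sketch below; the window size and the flickering pedal samples are illustrative:

```python
# A minimal moving-average filter over the last `window` pedal samples.
# Smoothing removes the zero/maximum flicker but delays the output by
# roughly half the window, which is the response-time trade-off noted above.
from collections import deque

class MovingAverage:
    def __init__(self, window=5):
        self.buf = deque(maxlen=window)  # old samples fall off automatically

    def update(self, sample):
        self.buf.append(sample)
        return sum(self.buf) / len(self.buf)

# A pedal flickering between zero and full amplitude at full extension:
flt = MovingAverage(window=5)
smoothed = [flt.update(s) for s in [1.0, 0.0, 1.0, 0.0, 1.0]]
print(smoothed)  # output settles near mid-range instead of stuttering
```

The filtered command never slams between 0 and 1, at the cost of lagging behind the most recent pedal reading.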
The display web app must be able to display feedback from the pedals and the turtlebot, including the x, y and twist of the pedals; the wheel speeds vx and vy; and the angular and linear velocity of the turtlebot. It must also display the turtlebot's position, updating at a rate of at least 10 Hz.
One of the main complexities for the dashboard was getting the right refresh rate. There was work towards a more interesting display, including showing the turtlebot's direction and speed vector. However, this took a fair amount of computational power, which slowed down the update rate of the display; the same problem arose when displaying the location of the turtlebot. In the end, a basic display was chosen to reduce computational complexity and allow a satisfactory refresh rate, while still displaying all the needed information.
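One way to keep the redraw cost bounded, sketched below with simulated integer-millisecond timestamps, is to drop updates that arrive faster than the target rate; this exact class is illustrative, not necessarily the mechanism our dashboard used:

```python
# Throttle redraws to a target rate: updates arriving faster than the
# 10 Hz KPI are simply dropped rather than queued, so a slow redraw
# never builds up a backlog. Real code would pass time.monotonic()
# timestamps; here we simulate arrivals in milliseconds.
class Throttle:
    def __init__(self, rate_hz=10.0):
        self.period_ms = 1000.0 / rate_hz
        self.last_ms = None

    def ready(self, now_ms):
        """Return True if enough time has passed to allow another redraw."""
        if self.last_ms is None or now_ms - self.last_ms >= self.period_ms:
            self.last_ms = now_ms
            return True
        return False

throttle = Throttle(rate_hz=10.0)
# Position messages arriving at 50 Hz (every 20 ms) over one second:
drawn = [t for t in range(0, 1000, 20) if throttle.ready(t)]
print(len(drawn))  # 10 redraws survive -- one every 100 ms
```

Dropping stale position messages is usually fine for a live dashboard, since only the most recent pose matters.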
We were able to control the velocity successfully. The foot pedal inputs were mapped linearly to the turtlebot's velocity commands.
Initially the control node was written in C++ and intended to run natively on the TurtleBot to gain low-level control of the motors. This was intended to increase the maximum velocity of the TurtleBot, since the built-in command-velocity node was limiting. Due to the inability to SSH onto the device, and the risk of disabling the turtlebot3 before other teams were able to use it, the native C++ node was scrapped. It had, however, successfully multithreaded ROS and MQTT concurrently in a thread-safe manner.
Instead, the control node was rewritten in Python as a ROS node on the main computer. One asynchronous coroutine listens to MQTT messages and adds them to a thread safe queue. Another coroutine pops from this queue and publishes corresponding control values.
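A minimal sketch of that two-coroutine pattern, with illustrative stand-ins for the MQTT listener and the ROS publisher (the message values and the sentinel shutdown are invented for the example):

```python
# Producer/consumer bridge on an asyncio.Queue: one coroutine stands in
# for the MQTT subscription, the other for the ROS publishing loop.
import asyncio

async def mqtt_listener(queue):
    # Stand-in for the MQTT callback; real code would await broker messages.
    for pedal_value in [0.2, 0.5, 1.0]:
        await queue.put(pedal_value)
    await queue.put(None)  # sentinel: tell the consumer to shut down

async def ros_publisher(queue, published):
    while True:
        msg = await queue.get()
        if msg is None:
            break
        # Real code would build and publish a velocity command here.
        published.append(msg)

async def main():
    queue = asyncio.Queue()
    published = []
    await asyncio.gather(mqtt_listener(queue), ros_publisher(queue, published))
    return published

print(asyncio.run(main()))  # [0.2, 0.5, 1.0]
```

Because both coroutines run on one event loop and `asyncio.Queue` handles the hand-off, no explicit locking is needed between the MQTT and publishing sides.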
The other main complexity of the velocity control was that the turtlebot switched off when the commanded speed went over the maximum, instead of saturating: when a wheel was commanded too fast, it would simply stop. We therefore created a saturation method on the ROS side of the code, and reduced the maximum speed below the hardware limit, so that the turtlebot functioned properly and did not turn the wrong way when one wheel saturated.
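The saturation idea can be sketched as scaling both wheel commands together whenever either exceeds the cap, which preserves the turn ratio; the 0.2 m/s cap and the sample commands below are illustrative, not our actual limits:

```python
# Scale both wheel commands by the same factor when either exceeds the
# cap. Clamping each wheel independently would change the left/right
# ratio and make the turtlebot turn the wrong way, which is the failure
# mode described above.
def saturate_wheels(v_left, v_right, v_max=0.2):
    biggest = max(abs(v_left), abs(v_right))
    if biggest <= v_max:
        return v_left, v_right          # already within limits
    scale = v_max / biggest             # shrink both wheels together
    return v_left * scale, v_right * scale

print(saturate_wheels(0.3, 0.1))   # left capped to 0.2, 3:1 ratio kept
print(saturate_wheels(0.1, 0.05))  # unchanged, under the cap
```

Running the cap below the true hardware maximum, as we did, leaves headroom so the scaled commands never trip the cut-out.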