PRACTICAL RESULTS

Camera Calibration

The camera is the only sensor used on the robot; it provides the robot's position and orientation within the lane. Camera calibration is the process of estimating the parameters of a camera's lens and image sensor. These parameters are used to improve the quality of captured images by correcting for lens distortion, and to measure object dimensions in world units. A calibrated camera is therefore an essential component in robotics for navigation and 3D scene reconstruction. Calibration involves determining two sets of camera characteristics, known as the intrinsic and extrinsic parameters.

Intrinsic camera calibration of the robot has been performed using the existing calibration software on the Duckiebot, as shown in the following picture.

cameracalibration.png

Wheels Calibration

Once it receives a command from the user, the robot sends two different signals to the right and left motors so that it can travel in a straight line without any drift.

In this project, the wheels are calibrated by driving the robot along a line of approximately 2 meters marked on the tiles.

wheels1.png
To make the robot drive straight, both the gain and the trim parameters must be calibrated. The following steps are used to calibrate them:

  1. Power on the robot and launch the joystick and motor control programs.

  2. Check the wire connections between the motors and the Adafruit Motor HAT.

  3. Place the robot at the center of the line and command it to drive straight for 2 meters using the joystick.

  4. If the robot drifts to the left side of the tape, decrease the trim value; conversely, if it drifts to the right side of the tape, increase the trim value.

  5. Repeat steps 3 and 4 until the robot drives straight along the line.

wheels2.png
wheels3.png
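The gain and trim parameters enter the mapping from commanded body velocities to per-wheel motor commands. The sketch below shows a Duckietown-style mixing for a differential-drive robot; the baseline, wheel radius, and motor constant are illustrative assumptions, not the robot's measured values.

```python
# Hedged sketch of gain/trim mixing for a differential-drive robot.
# baseline, radius, and k are illustrative, not calibrated values.

def wheel_commands(v, omega, gain=1.0, trim=0.0,
                   baseline=0.1, radius=0.0318, k=27.0):
    """Map body velocities (v [m/s], omega [rad/s]) to wheel commands.

    gain scales both wheels equally; trim biases one wheel against
    the other to cancel systematic drift.
    """
    # Wheel angular velocities from differential-drive kinematics.
    omega_r = (v + 0.5 * omega * baseline) / radius
    omega_l = (v - 0.5 * omega * baseline) / radius

    # Per-wheel scaling adjusted by gain and trim.
    k_r = (gain + trim) / k
    k_l = (gain - trim) / k
    return k_l * omega_l, k_r * omega_r

# Driving straight (omega = 0): a positive trim speeds up the right
# wheel, steering the robot left; a negative trim steers it right.
left, right = wheel_commands(v=0.2, omega=0.0, trim=0.05)
print(left, right)
```

This matches the calibration procedure above: a robot that drifts right needs a larger trim (faster right wheel) to straighten out, and one that drifts left needs a smaller trim.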

Lane Following

The Duckietown consists of two layers: the road layer and the signal layer. The road layer is made of interconnected black tiles with lane markings on them, and it alone provides sufficient information for the lane following behavior to function. The lane following behavior is implemented in the following steps:

lanefolllwing.png
  • Illumination compensation: normalizes the RGB values of every pixel in the captured image to reduce the effect of varying lighting conditions.

  • Line detection: detecting road markings in images from the camera node.

  • Ground projection: transforms the extracted line segments from image space to oriented points in 3D world coordinates.

  • Lane localization: estimates the robot's lateral offset (d) and orientation (φ) relative to the lane at time t.

  • Lane controller: PD controller to produce the control signals for the robot to drive in the lane.
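The final step can be sketched as a simple PD law: the controller turns the lateral offset d and heading error φ into an angular-velocity command ω. The gains and the sign convention below are illustrative assumptions, not the Duckiebot's tuned values.

```python
# Illustrative PD lane controller; gains are made-up example values.

def pd_control(d, phi, d_prev, phi_prev, dt,
               k_p_d=-3.0, k_p_phi=-1.5, k_d_d=-0.3, k_d_phi=-0.1):
    """Compute turn rate omega from lateral offset d [m] and heading
    error phi [rad], measured relative to the lane center line."""
    # Finite-difference estimates of the error derivatives.
    d_dot = (d - d_prev) / dt
    phi_dot = (phi - phi_prev) / dt
    # Proportional terms drive the errors to zero;
    # derivative terms damp oscillation around the center line.
    return k_p_d * d + k_p_phi * phi + k_d_d * d_dot + k_d_phi * phi_dot

# Robot 5 cm left of center and angled 0.1 rad left: the controller
# commands a negative (rightward) turn rate to re-center it.
omega = pd_control(d=0.05, phi=0.1, d_prev=0.05, phi_prev=0.1, dt=0.05)
print(omega)
```

On the robot, this ω is combined with a forward velocity v and sent to the wheel-command mixing described in the wheel calibration section.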

DEMO
