Visual Feed-back Control for Unmanned Underwater Vehicles

B.A.A.P Balasuriya*, Motoyuki Takai*, Wan-Chung Lam*, Tamaki Ura** and Yoji Kuroda***
*Postgraduate School, University of Tokyo
**Institute of Industrial Science, University of Tokyo
***Autonomous Mobile Systems Laboratory, Meiji University
Abstract
This paper addresses the use of a vision sensor in the feedback control loop of an Unmanned Underwater Vehicle (UUV). An algorithm is proposed as a solution to the visual tracking and servoing problem for UUVs. Here, the problem of visual tracking is defined as a problem of combining control with computer vision. In the proposed algorithm, a path planner generates a trajectory using the vector displacements of a particular feature in the image. The generated trajectory is followed by the UUV using a Passive Switched Action (PSA) controller. Experimental results from a UUV cable tracking mission demonstrate that the proposed algorithm can be implemented in real time.
1. Introduction
Unmanned Underwater Vehicles (UUVs) can be made flexible and adaptable by incorporating sensory information from multiple sources in the feedback control loop. In this paper, an algorithm is proposed that uses visual data captured by a commercially available charge-coupled device (CCD) camera to dynamically servo a UUV for object tracking. The work in this paper provides a bridge between traditional vision techniques and control systems.
The problem of visual tracking/servoing can be defined as “moving the UUV in such a way that the projection of a target is always at the desired location in the image”. Although the use of vision for dynamically servoing an autonomous system is a well-studied area, existing applications are either for land navigation or for a structured environment, such as a production cell [5]. UUV navigation is more challenging because it involves 6 degrees of freedom [1][2][3].
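Stated more formally (the symbols below are introduced here only for illustration and do not appear in the original text), the tracking objective can be written as driving an image-plane error to zero:

    e(t) = p(t) - p*,   with the control goal  e(t) -> 0  as  t -> infinity,

where p(t) is the projected position of the target in the image at time t and p* is its desired location in the image. The vehicle motion commands are generated from this error.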
In this paper, an algorithm is proposed to navigate a UUV along the course of an underwater cable. The general schematic diagram of the proposed system is shown in Fig.1. The performance of the algorithm is demonstrated by implementing the proposed system in the test-bed robot “The Twin-Burger” [6] shown in Fig.2.
2. Navigation System
As shown in Fig.1, the vision processing unit extracts the cable and its position from the image captured by the CCD camera mounted in front of the UUV. Unlike land or space systems, underwater systems have to carry all their hardware in small pressure hulls. This limits the available hardware and, because of the large amount of data in an image, the image processing introduces some delay into the system. An estimator is used to compensate for this delay. Once the location of the cable in the image (two-dimensional) is determined using image processing techniques, the error with respect to the target location in the image is calculated. This error is then converted into the camera coordinate system (3-dimensional) deriving the nec
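A minimal sketch of the processing steps described above is given below. It assumes a pinhole camera with known focal lengths, a known range from the camera to the cable, and a simple constant-velocity prediction to offset the image-processing delay; these names, values, and the estimator form are illustrative assumptions, not quantities taken from the paper, and the transformation actually used on the vehicle may differ.

    # Hypothetical sketch of the image-error pipeline (illustrative names only).

    def compensate_delay(pos_px, vel_px_per_s, delay_s):
        """Predict the cable position forward in time with a constant-velocity
        model to offset the image-processing delay (assumed estimator form)."""
        return (pos_px[0] + vel_px_per_s[0] * delay_s,
                pos_px[1] + vel_px_per_s[1] * delay_s)

    def image_error(cable_px, target_px):
        """Pixel-space error between the (predicted) cable position and the
        desired location of the cable in the image."""
        return (cable_px[0] - target_px[0], cable_px[1] - target_px[1])

    def to_camera_frame(err_px, fx, fy, range_m):
        """Convert a pixel error to metres in the camera frame, assuming a
        pinhole camera with focal lengths fx, fy (in pixels) and a known
        range from the camera to the cable."""
        return (err_px[0] * range_m / fx, err_px[1] * range_m / fy)

    # Example: cable detected at (400, 260) px, drifting 20 px/s to the right,
    # 0.2 s of processing delay, desired image location (320, 240).
    predicted = compensate_delay((400.0, 260.0), (20.0, 0.0), 0.2)
    err_px = image_error(predicted, (320.0, 240.0))
    err_cam = to_camera_frame(err_px, fx=600.0, fy=600.0, range_m=1.5)
    print(err_px, err_cam)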