
A study on UAV automatic navigation and landing system

February 1st, 2016

Name: Chen Guangjie ID number: 5114FG04-8

SHIMAMOTO Lab

(Professor Shigeru Shimamoto)


ABSTRACT

The use of robotics in search and rescue operations is a topic that is gaining momentum in the scientific community. One aspect of such operations that has yet to be explored is that of wireless communication between the various robotic elements operating in the disaster area. This paper presents a model consisting of an Unmanned Ground Vehicle (UGV) and Unmanned Aerial Vehicle (UAV) formation, in which the UAVs can land on the platform of the UGV to implement joint search and rescue operations. This configuration explores the nature of such communication in the form of a network, with payloads of sensor data, video data and telemetry bursts being sent between the UGV and the UAVs. The UAVs have an automatic flight control and visual identification system that sends video data to the on-board PC on the UGV; when the image recognition system finishes image analysis, it transforms the location information into a control command via the Raspberry Pi system, and the UAVs then judge for themselves how to complete their landing instructions.


Contents

1. Introduction
   1.1 Research Background
   1.2 Purpose of Research

2. System Components
   2.1 Unmanned Aerial Vehicle (UAV)
   2.2 Unmanned Ground Vehicle (UGV)
   2.3 OpenCV (Visual analysis)

3. Composition and preparation of hardware
   3.1 APM-Pixhawk module
       3.11 Pixhawk control system install (hardware)
       3.12 Mission Planner (software)
   3.2 Raspberry Pi
       3.21 Raspberry Pi structure and install
       3.22 Connection and control
       3.23 Communicating via MAVLink
   3.3 Visual identification system
       3.31 Real-time first-person view (FPV) system
       3.32 OpenCV analysis

4. Remote control system installation and configuration
   4.1 Wi-Fi connection settings for Raspberry Pi
   4.2 Remote control system programming

5. Experiment
   5.1 Equipment preparation and experiment condition
   5.2 Experiment process

6. Experiment Result

7. Conclusion and future work

8. References


1. Introduction

1.1 Research Background

The Unmanned Aerial Vehicle (UAV) has been widely used in disaster-rescue assignments and gives us a higher chance of finding more victims. In recent years, many research teams have proposed various programs on how to make UAVs smarter, more efficient, more reliable, and so on. But there are still many limitations of UAVs that should be improved. For example, signal transmission is affected by distance and obstructions, and for underground areas that cannot receive a signal from the air, the success rate of finding survivors is greatly reduced. In order to solve these problems, using an Unmanned Ground Vehicle (UGV) to form a team with the UAV has come to be widely accepted as a reliable way of making up for these shortcomings.

In a cooperative UGV-UAV system, the most challenging problem is to make the vehicles operate autonomously, accept missions from a ground station, and communicate with each other in a distributed manner.

A common theme explored in air-ground teaming missions is how to best leverage the differences between the UAV and UGV platforms. [1] Small sized UAVs may be flying at an altitude of 1000 ft. or more, and therefore have a broad field of view.

They can cover areas quickly and are not constrained to road networks. However, it is often challenging to localize targets exactly from a moving air frame. Furthermore, because they are moving quickly and at altitude, they cannot get a detailed view of the target as easily as a ground vehicle can.

A UGV on the other hand, can utilize better localization capabilities and get up close to take sensor readings of suspected targets. Furthermore, a UGV can carry additional payloads and sensors, beyond the weight and power limits of the UAV, and can easily perform long missions. However, the UGV in this case is restricted to navigation over road networks and does not have a full view of the environment at any given time.

So how to combine their respective advantages and compensate for their disadvantages in a working configuration is a difficult problem.

1.2 Purpose of Research

In order to improve the deficiencies of the UAV-UGV team, and in contrast with the previous mode in which the UAV flies at high altitude while the UGV is on the ground, this paper addresses the situation where we need to search a blind area that the high-altitude UAV has not covered, using a cooperation mode of micro low-altitude UAVs and a UGV to improve the efficiency and precision of ground searching.

In this system, the micro UAVs can greatly extend the reconnaissance range of the UGV, explore unknown regions faster, and send the search data back to the UGV. When the GPS signal is weak, or the UGV is indoors where no GPS signal can be received, how to make the UAVs automatically distinguish the position of the UGV platform and autonomously navigate to the UGV for landing and recharging is the most challenging problem of this paper. We will try to combine visual identification with ROS and other technologies to achieve these functions.

Fig1. System Components


2. System Components

2.1 Unmanned Aerial Vehicle (UAV)

In some cases, a UAV is used to improve the navigation performance of a UGV. For example, tracking and state estimation of a UGV [MacArthur, 2007], [Heppner, 2013], terrain classification and path planning for the UGV [Hudjakov, 2010], or supporting UGV navigation in case of GPS loss [Frietsch, 2008]. [Kim, 2001] uses coordinated control based on a probabilistic approach for UAV and UGV teams employed in pursuit-evasion games. UAVs and UGVs are used for surveillance tasks in [Grocholsky, 2006b] and [Saska, 2012], for cooperative mapping in [Chaimowicz, 2005], and for detection and disposal of mines in [Zawodny, 2005].

But all of these UAV functions are based on pre-set routes, and since the UAV acts on its own, it cannot share information with the UGV immediately. So when the UGV is indoors, those functions will not work. In this case, we need a small-volume UAV instead of a high-altitude type, and we also need real-time communication between the UAV and the UGV.

It is very difficult for the vehicles to judge each other's exact positions when the GPS signal cannot be received. Therefore, we considered using vision recognition to identify the landing platform on the UGV.

The UAV carries a digital camera whose orientation is adjustable by a 3D gimbal. The lens we used is made by Olympus; it has an 84-degree wide field of view and a low level of distortion. When the UAV completes its searching mission, it will return to the place where it took off, the camera will turn downward to search for the UGV position, and the flight mode will change to Hover mode to prepare for landing.

Fig2. Walkera 800series UAV


When the UGV arrives at the blind area, it releases the UAVs from its platform and keeps going to finish its searching mission along a pre-set route. So the UAVs do not know the exact position of the UGV, only its pre-set route. In this situation, the UAVs need to judge whether the UGV is still at the same place.

When the UGV cannot be identified, the program in the UAV controller will automatically decide whether to land or to track the UGV along its pre-set route.

Fig3. Equipment schematic

2.2 Unmanned Ground Vehicle (UGV)

UGV structures may vary from one to another, but in general a UGV consists of the following parts:

Sensors: A ground robot needs to have sensor(s) in order to perceive its surrounding, and thus, permit controlled movement. Sensor accuracy is extremely important for robots that operate in highly unpredictable environments such as the battle field or fires.

Platform: The platform provides locomotion, utility infrastructure and power for the robotic system. The configuration has a strong influence on the level of autonomy and interaction a system will have in an unstructured environment; highly configurable and mobile platforms are typically the best for unstructured terrain.


Control: The level of autonomy and intelligence of the robot depends largely on its control systems, which range from classic algorithmic control to more sophisticated methods such as hierarchical learning, adaptive control, neural networks and multiple-robot collaboration.

Human machine interface: The human machine interface depends on how the robot is controlled. The interface could be a joystick and a monitor control panel in the case of teleoperation, or more advanced, desirable ones such as speech commands from the commander.

Communication: Communication is essential between the UAV and the UGV; the UGV can analyze the data sent from the UAV with its on-board PC and then return flight instructions telling the UAV what to do in the next step. Common communication methods include Wi-Fi and Bluetooth. [2]

System integration: The choice of system level architecture, configuration, sensors and components provide significant synergy within a robotic system. Well-designed robotic systems will become self reliant, adaptable and fault tolerant, thereby increasing the level of autonomy.

In this case, we decided to make a triangular landing sign for the UAV using LED cable, which gives the UAV a striking indication even at night. We put a PC on the UGV instead of a traditional transceiver. The PC processes several kinds of data:

 Real-time video data sent from the UAV. We need to analyze the video data to detect the triangle, and we can confirm that the UAV is directly above the UGV only when more than two interior angles are at 60 degrees.

 UAV and UGV control command data. The route of the UGV is pre-set by the APM system, and the UAV needs feedback control commands from the ground station, so we put the command center (the APM system) on the UGV and record the various data with the PC.

 Data conversion. [3] The data we obtain from the UAV consist only of pictures and video, so we use OpenCV to analyze the video data and infer a vector. The vector is then sent to the Raspberry Pi, where a C++ program converts it into a control command.

Fig4. UGV landing platform


2.3 OpenCV (Visual analysis)

OpenCV (Open Source Computer Vision) is a library of programming functions mainly aimed at real-time computer vision, originally developed by Intel research center in Nizhny Novgorod (Russia), later supported by Willow Garage and now maintained by Itseez. The library is cross-platform and free for use under the open source BSD license.

With OpenCV, the main functions that can be utilized are as follows:

Filtering

Matrix operation

Object Tracking

Segmentation

Calibration

Point feature extraction

Object recognition

Machine learning

Stitching

Computational Photography

GUI (Window display, image file, input and output of video files, camera capture)

Main modules introduction

In OpenCV, the functions for each category are provided in units called "modules" of the library. Application developers link the module libraries that contain the functions they need. The table below gives an overview of the main libraries that make up OpenCV.


Fig5. Overview of the main library modules (module name, library name, summary)

calib3d (opencv_calib3d300.lib): Camera calibration, stereo correspondence point search
core (opencv_core300.lib): Image/matrix data structures, array manipulation, basic drawing, XML and YAML input and output, command-line parser, and other utility functions
features2d (opencv_features2d300.lib): Feature point extraction (ORB, BRISK, FREAK, etc.)
hal (opencv_hal300.lib): Optimization using hardware functions (Hardware Acceleration Layer)
highgui (opencv_highgui300.lib): GUI (such as window display)
imgcodecs (opencv_imgcodecs300.lib): Image file input and output
imgproc (opencv_imgproc300.lib): Filtering, affine transformation, edge detection, Hough detection, color conversion, histogram calculation, labeling, etc.
ml (opencv_ml300.lib): SVM, decision trees, boosting, neural networks, etc.
objdetect (opencv_objdetect300.lib): Object detection (face detection, human body detection, etc.)
photo (opencv_photo300.lib): Image repair, noise removal, HDR (High Dynamic Range) synthesis, image synthesis, etc.
shape (opencv_shape300.lib): Shape matching
stitching (opencv_stitching300.lib): Panorama stitching
superres (opencv_superres300.lib): Super-resolution processing
video (opencv_video300.lib): Optical flow, Kalman filter, background subtraction, etc.
videoio (opencv_videoio300.lib): Input and output of video files, camera capture, etc.
videostab (opencv_videostab300.lib): Camera shake correction (video stabilization)
viz (opencv_viz300.lib): Visualization of 3D data (internally using VTK)


3. Composition and preparation of hardware

3.1 APM- pixhawk module

3.11 Pixhawk control system components

The Pixhawk autopilot system is a complete solution for multi-platform autonomous vehicles, based on the open-source Pixhawk autopilot. The Pixhawk kit includes a power module, I2C splitter, mounting foam, micro-SD card, buzzer, safety button, and required cables for connecting the Pixhawk system. All software solutions run on the Pixhawk autopilot, designed by the open hardware development team in collaboration with 3D Robotics.

Fig6. Raspberry Pi module parts

GPS with Compass

A GPS module provides in-flight positioning data for full autonomy. The GPS with Compass module adds high-performance GPS with improved accuracy over Pixhawk’s internal compass. The GPS ports are connected with the six-position DF13 cable, and the MAG port is connected to the I2C port with the four-position DF13 cable.[4]

Wireless transceiver

Air and ground wireless transceivers enable live data and in-flight interaction with Pixhawk.

The UAV in our lab is an 800-series made by Walkera with a closed-source system. In order to achieve the functions above, we need to customize the whole hardware: take out the original built-in control module, replace the cable connectors, and then connect the Raspberry Pi module to the Pixhawk, so that we can control the UAV through the PC without using a remote radio system.

Fig7.UAV equipment Layout

3.12 Mission planner

Mission Planner is a full-featured ground station application for the ArduPilot open-source autopilot project. It is a ground control station for Plane, Copter and Rover, and is compatible with Windows only. Mission Planner can be used as a configuration utility or as a dynamic control supplement for your autonomous vehicle. Here are just a few things you can do with Mission Planner: [4]

 Load the firmware (the software) into the autopilot (APM, PX4…) that controls your vehicle.

 Setup, configure, and tune your vehicle for optimum performance.

 Plan, save and load autonomous missions into your autopilot with simple point-and-click waypoint entry on Google or other maps.

 Download and analyze mission logs created by your autopilot.

With appropriate telemetry hardware you can:

 Monitor your vehicle’s status while in operation.

 Record telemetry logs which contain much more information than the on-board autopilot logs.


 View and analyze the telemetry logs.

 Operate your vehicle in FPV (first person view)

Because we want the UAV to automatically decide whether to land or to track the UGV along its pre-set route, when setting up Mission Planner on the PC we need to reprogram the software that controls the UAV.

Fig.8 APM software compile environment

Here, we modify the flight-mode protocol from a single mode into several convertible modes. [5] We need the UAV to be able to change its flight mode when it wants to land, so the first step is to give the UAV a judgment function. For example, in the picture below, the takeoff position of the UAV is point 2; we want the UAV to search point 1, and the UGV will keep going after releasing the UAV. When the UAV finishes its search mission, it will return to point 2 using its location history. Then it will judge whether the UGV is still at point 2; if not, it will track the UGV's pre-set route, like the yellow circle. The second step is to reprogram the mode switch. The specific programming method still needs further study.


Fig9. Flight path setting interface

3.2 Raspberry Pi

3.21 Raspberry Pi structure and install

The Raspberry Pi is a series of credit card–sized single-board computers developed in the United Kingdom by the Raspberry Pi Foundation with the intention of promoting the teaching of basic computer science in schools and developing countries.

In addition to the familiar USB, Ethernet and HDMI ports, the Raspberry Pi offers the ability to connect directly to a variety of electronic devices. These include:

 Digital outputs: turn lights, motors, or other devices on or off

 Digital inputs: read an on or off state from a button, switch, or other sensor

 Communication with chips or modules using low-level protocols: SPI, I²C, or serial UART [6]

In order to communicate between the UAV and the UGV, we need to connect the Raspberry Pi to Mission Planner. The Raspberry Pi is a bridge that translates both video data and control command data through our C++ programs, so we should install it first.


Fig10. APM and API connection diagram

Connect the Pixhawk’s Telem 2 port to the RPi’s Ground, TX and RX pins as shown in the image above. The red V+ cable can go to the RPi’s +5V pin. When the hardware connection is finished, the next step is to set up the on-board system. The steps to take are as follows: [7]

 Connecting to RPi with an SSH/Telnet client

 Install the required packages on the Raspberry Pi

 Disable the OS control of the serial port

 Testing the connection

 Configure MAVProxy to always run

 Installing DroneKit on RPi

 Connecting with the Mission Planner


Fig11. APM software net connection

Because the hardware parts have not been purchased yet, the feasibility of these steps still needs to be tested and verified. If successful, the connect button should turn green, as in the picture above.


3.22 Connection and control

By implementing the connection between the APM and the Raspberry Pi, the main objective is to achieve these functions:

 Connection of the APM 2.6 to the Raspberry Pi through UART0.

 Using Wi-Fi to communicate with the APM.

 Creating and running programs to control the APM.

My methods:

 Raspberry Pi Model B+ with a 32 GB microSD card running Raspbian

 APM 2.6 with ArduCopter v3.2.1

 Most communication with the Raspberry Pi was done through SSH over an Ethernet cable from a laptop running Windows 7, using PuTTY.

(*Raspbian is a free operating system based on Debian optimized for the Raspberry Pi hardware.)

To use the open-source system, one must first understand both basic Python and C++. Most programs written for Linux-based systems (like the Raspberry Pi) are written in either Python or C++. Python will be needed to read those programs, and C++ will be used for the programs we write. It is also necessary to learn how to use a text editor in the command window on the Raspberry Pi; there are many choices. For example, type "vimtutor" into the Linux command terminal.

Important Note:

The Telemetry Port (UART0) on the APM and the USB port use the same serial connection, so a MUX (multiplexer) disables the Telemetry Port when USB is connected. Therefore, we need a battery/power supply to power the APM without a USB cable in order to use the UART0 port and connect to the Raspberry Pi.

If we want to keep the Telemetry Port connected to a ground station during flight, we could try the UART2 port on the APM (which may require soldering). I have not done this myself; this method also needs to be tested and verified.

Writing programs

Using my code as an example (there are many ways to do this):

 The beginning of the code

 All message types that are used must be included.

 Having DEFINEs really helps to make the code easier to read.


Fig12. APM software command compile

The first code I wrote was to make my quadcopter take off from the UGV, hover in the air, and then land. For this, the only topics I needed to subscribe to were:

/mavros/state : obtain arming status and current flight mode.

/mavros/vfr_hud : obtain current altitude.

/mavros/rc/in : obtain RC transmitter channel values.

I only needed to publish to one topic in order to fly the UAV:

/mavros/rc/override : overrides RC channels with inputted values.
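A minimal Python (rospy) sketch of the hover portion of such a node is shown below. It is not the thesis' original code: the topic names and message types follow standard mavros, but the throttle channel index, PWM values and target altitude are illustrative assumptions.

    #!/usr/bin/env python
    # Hover-hold sketch using the mavros topics listed above (assumptions noted).
    import rospy
    from mavros_msgs.msg import State, OverrideRCIn, VFR_HUD

    current_state = State()
    current_alt = 0.0

    def state_cb(msg):
        global current_state
        current_state = msg          # arming status and flight mode

    def vfr_cb(msg):
        global current_alt
        current_alt = msg.altitude   # current altitude in metres

    rospy.init_node('takeoff_hover_land')
    rospy.Subscriber('/mavros/state', State, state_cb)
    rospy.Subscriber('/mavros/vfr_hud', VFR_HUD, vfr_cb)
    pub = rospy.Publisher('/mavros/rc/override', OverrideRCIn, queue_size=10)

    rate = rospy.Rate(10)            # publish at 10 Hz
    TARGET_ALT = 5.0                 # hover altitude in metres (assumed)
    THROTTLE = 2                     # channel 3 (index 2) as throttle (assumed)

    while not rospy.is_shutdown():
        cmd = OverrideRCIn()
        cmd.channels = [OverrideRCIn.CHAN_NOCHANGE] * 8
        if current_alt < TARGET_ALT - 0.5:
            cmd.channels[THROTTLE] = 1600   # climb
        elif current_alt > TARGET_ALT + 0.5:
            cmd.channels[THROTTLE] = 1400   # descend
        else:
            cmd.channels[THROTTLE] = 1500   # hold hover
        pub.publish(cmd)
        rate.sleep()

Leaving all other channels at CHAN_NOCHANGE means the override touches only the throttle, which keeps the RC transmitter usable as a safety fallback.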

3.23 Communicating via MAVLink

Here, I’ll introduce how to connect and configure a Raspberry Pi so that it can communicate with a Pixhawk flight controller using the MAVLink protocol over a serial connection. The RPi can then perform additional tasks such as image recognition, which simply cannot be done by the Pixhawk due to the memory required for storing images.

1. Install the required packages on the Raspberry Pi

Log into the RPi board (the default username/password is pi/raspberry) and check that its connection to the internet is working:


After the internet connection is confirmed to be working, install these packages:
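The exact package list appeared only as a screenshot. A reconstruction following the ArduPilot Raspberry Pi companion-computer documentation of that era is below; treat the precise package names as assumptions:

    sudo apt-get update
    sudo apt-get install screen python-wxgtk2.8 python-matplotlib python-opencv python-pip python-numpy python-dev libxml2-dev libxslt-dev
    sudo pip install pymavlink mavproxy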

2. Disable the OS control of the serial port

Use the Raspberry Pi configuration utility for this. Type "sudo raspi-config" at the console:

And in the utility, select “Advanced Options”:

Fig13a. Raspberry Pi install processing

And then “Serial” to disable OS use of the serial connection:

Fig13b. Raspberry Pi install processing

3. Reboot the Raspberry Pi.

4. Test the connection

To test that the RPi and Pixhawk are able to communicate with each other, first ensure the RPi and Pixhawk are powered, then start MAVProxy in a console on the RPi. Once MAVProxy has started, you should be able to type in the following command to display the ARMING_CHECK parameter's value.
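The commands themselves were shown only in screenshots. Following standard MAVProxy usage on the RPi serial port, the sequence is roughly as below; the device name /dev/ttyAMA0, the baud rate and the aircraft name are assumptions:

    mavproxy.py --master=/dev/ttyAMA0 --baudrate 57600 --aircraft MyCopter

    # then, at the MAVProxy prompt:
    param show ARMING_CHECK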


Fig13c. Raspberry Pi install processing

3.3 Visual identification system

3.31 Real-time first-person view (FPV) system

First Person View provides a true pilot’s eye view while flying by placing a video camera and transmitter on the vehicle paired with a receiver and either an LCD or goggles on the ground. An optional OSD (On Screen Display) helps maintain orientation by providing aircraft instrument overlay on the FPV monitor.

The most common frequencies used for video transmission are 900 MHz, 1.2 GHz, 2.4 GHz, and 5.8 GHz. [8] Specialized long-range UHF control systems operate at 433 MHz (for amateur radio licensees only, with two European nations having exclusive allocations for them and secondary usage in much of North America) or 869 MHz, and are commonly used to achieve greater control range, while the use of directional, high-gain antennas increases video range. Sophisticated setups are capable of achieving a range of 20-30 miles or more. In addition to the standard video frequencies, 1.3 GHz and 2.3 GHz have emerged as the common frequencies get more crowded.


Because our UAV is directly controlled by the computer, and the pictures and video are sent to the on-board PC for analysis, we do not need an LCD, only a video cable to connect to the PC.

Connect camera and transmitter to OSD

Connect the black, red, and yellow three-wire cable to the camera and to the three pins on the OSD labeled GND, +12V, and VIN with the black wire connected to GND and the yellow wire connected to VIN. Connect the red, black, and white three-wire cable to the transmitter and to the three pins on the OSD labeled GND, +12V, and VOUT with the black wire connected to GND and the white wire connected to VOUT.

Fig14. RPI video output connection

3.32 OpenCV analysis

The overall system functionality is broken into three parts: autonomous flight, object recognition, and avoidance commands. While in autonomous flight, the aircraft is under the control of the ArduPilot Mega 2.6. Also during flight, the camera is collecting images and sending them to the onboard image processor, which filters and analyzes the images. If the processor detects an object in the images, it will begin to track it. As objects begin moving towards or are already in the flight path of the UAV, they are immediately tracked and upgraded to obstacles. When this happens, the processor sends an avoidance command to the microcontroller, which in turn will calculate the amount of control surface deflection required to move the UAV away from the obstacle.

To utilize computer-based vision for obstacle avoidance, a live video stream from a camera is required. An algorithm for obstacle avoidance was developed through an open-source programming library, OpenCV. The primary function of the algorithm created in OpenCV is to detect and track an object. For this purpose, the computer-based vision algorithm utilizes two methods working simultaneously to obtain points of interest from the camera’s live video stream: motion detection (optical flow) and feature detection, both of which are contained in the OpenCV libraries. [9]


Optical flow is used to detect, track, and predict the location of an object’s motion in a video stream while feature detection is used to detect and track an object’s feature, such as a corner or edge.

Feature detection defines and tracks primary points of interest in an image. This is done through the use of a corner-detection method known as Shi-Tomasi, which is implemented in OpenCV’s function goodFeaturesToTrack. The corner-detection method uses minimum eigenvalues as threshold values for the quality level of a corner. If a corner is below the quality threshold value, that corner is not tracked.

The function assigns a threshold value for the quality of the features and the number of features to track in each frame of a video stream. Before using the function, certain criteria must be known: the number of corners to track, the quality of the corners, and the minimum distance between corners.
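As a minimal illustration (not the thesis code), the three criteria map directly onto the arguments of the OpenCV call; the numeric values here are placeholders:

    import cv2

    # Load one video frame in grayscale; Shi-Tomasi works on single-channel images.
    frame = cv2.imread('frame.jpg', cv2.IMREAD_GRAYSCALE)

    # Arguments: image, number of corners to track, quality level (as a fraction
    # of the strongest corner's score), and minimum distance between corners.
    corners = cv2.goodFeaturesToTrack(frame, 50, 0.01, 10)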

When the OpenCV program finds that the triangle in the picture has more than two 60-degree interior angles, we can confirm that the UAV is directly above the UGV. But there are many limitations to feature detection, such as a complex image background, visible-light refraction, camera lens distortion, etc. So, in order to reduce the deviation caused by these factors, we will do some experiments in the next step:

 Comparing the difficulty of feature detection under different lighting conditions

 Comparing the image resolution with the UAV at different altitudes

 Comparing the image shift amount when the UAV is in the quiescent state and the non-stationary state

By analyzing the error values under different circumstances, we can estimate the approximate error and put the error parameters into the analytic results, so that we can achieve more accurate data analysis.

Straight-line detection using the Hough transform

For line detection, each original point (x, y) in Cartesian coordinates is converted into a two-dimensional polar parameter space with angle θ and distance ρ, related by ρ = x cos θ + y sin θ. For each angle θ and distance ρ, a counter in a memory array (referred to as an accumulator cell or voting cell) is incremented. The (θ, ρ) combination with the maximum number of votes, mapped back to the original Cartesian coordinates, becomes the collection of points closest to a straight line.
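A minimal sketch of this accumulator-based detection with OpenCV (the threshold values are illustrative):

    import cv2
    import numpy as np

    img = cv2.imread('mark.jpg', cv2.IMREAD_GRAYSCALE)
    edges = cv2.Canny(img, 100, 200)        # edge map feeds the accumulator

    # Arguments: edge image, rho resolution (pixels), theta resolution
    # (radians), and the minimum number of votes for a (rho, theta) cell.
    lines = cv2.HoughLines(edges, 1, np.pi / 180, 120)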


Fig15. Indiscriminate straight line detection example


Triangle Mark detection algorithm using the Hough transform

Step 1. First, filter the line segments by their length, relative distance and density. This step eliminates unnecessary information such as debris or background.

Fig16a. Triangle recognition process

Step 2. Push the slopes of all line segments onto a stack, and select the segments whose relative slope differences are 60° and -60°.

Fig16b. Triangle recognition process

(25)

Step 3. Draw the middle lines from each segment toward the center of the triangle, and determine the center point of the target triangle where at least three middle lines intersect.

Fig16c. Triangle recognition process
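Putting the three steps together, a condensed sketch of the detection pipeline is shown below. This is my reconstruction with illustrative thresholds, not the thesis' actual code; in particular, Step 3 is simplified here to the mean of the segment midpoints rather than an explicit middle-line intersection.

    import cv2
    import numpy as np

    def find_triangle_center(gray):
        """Return the estimated triangle-mark center (x, y), or None."""
        edges = cv2.Canny(gray, 100, 200)
        # Probabilistic Hough transform gives finite segments (x1, y1, x2, y2).
        segs = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                               minLineLength=40, maxLineGap=10)
        if segs is None:
            return None
        segs = segs.reshape(-1, 4)

        # Step 2: keep segments whose slope differs from some other segment
        # by about 60 degrees (line direction taken modulo 180 degrees).
        angles = np.degrees(np.arctan2(segs[:, 3] - segs[:, 1],
                                       segs[:, 2] - segs[:, 0]))
        keep = []
        for i in range(len(segs)):
            diffs = np.abs(((angles - angles[i]) + 90) % 180 - 90)
            if np.any(np.abs(diffs - 60) < 10):   # 60 deg +/- 10 deg tolerance
                keep.append(segs[i])
        if len(keep) < 3:
            return None

        # Step 3 (simplified): approximate the triangle center as the mean
        # of the surviving segment midpoints.
        keep = np.array(keep)
        mids = np.column_stack(((keep[:, 0] + keep[:, 2]) / 2.0,
                                (keep[:, 1] + keep[:, 3]) / 2.0))
        return mids.mean(axis=0)    # (x, y) in image coordinates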


4. Remote control system installation and configuration

4.1 Wi-Fi connection settings for Raspberry Pi

Giving the Raspberry Pi network access is, of course, significant and useful. But on an embedded device such as the Raspberry Pi, we do not want to run a resource-hungry GUI application just to connect to Wi-Fi. After all, this is not PC-class equipment; resources are very limited.

Before we start, some preparatory work is needed as follows:

 Prepare a Wi-Fi USB adapter

 A wireless router (here I used a WiMAX pocket Wi-Fi for the experiment)

 Effective network environment

1. Because the Raspberry Pi operating system may not be the latest, first upgrade it. Then shut down the Raspberry Pi.

Fig17a. Wi-Fi connection setting process

2. Plug in the Wi-Fi adapter and then start the Raspberry Pi. One way to configure the network connection is to manually edit the network interface configuration file, using an ordinary file editor (I used vi). Modify the configuration file into DHCP connection mode as follows:

Fig17b. Wi-Fi connection setting process

3. The next step is to provide the Wi-Fi network connection information. We open the WPA configuration file as follows:


Here is my configuration example for reference:

Fig17c. Wi-Fi connection setting process
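The file contents were shown only as a screenshot; a typical /etc/wpa_supplicant/wpa_supplicant.conf entry of the kind being described looks like this (the SSID and passphrase are placeholders):

    # /etc/wpa_supplicant/wpa_supplicant.conf
    network={
        ssid="MyPocketWifi"
        psk="MyPassphrase"
    }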

If we do not want to use DHCP, or we want to set up multiple network connections, some additional settings are needed. The following example is a static IP configuration:
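Again as a reconstruction of the screenshotted file, a static stanza in /etc/network/interfaces on Raspbian of that era would look roughly like this (the addresses are placeholders for a typical home subnet):

    # /etc/network/interfaces excerpt
    iface wlan0 inet static
        address 192.168.1.50
        netmask 255.255.255.0
        gateway 192.168.1.1
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf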

4. Reboot

Fig17d. Wi-Fi connection setting process

Now the internet connection is complete, and we can send data between the ground command center and the UAV without any cable.


4.2 Remote control system programming

1. Install the system onto the SD card.

The Raspberry Pi runs Linux as its operating system and reads the system from the SD card, so we need a Linux system installed on the SD card. It is best to format the SD card first, then download the Raspbian operating system, decompress the Raspbian archive to obtain the .img file, plug the SD card into the computer, and write the image to the card.
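On a Linux or macOS host, the image can be written with dd (on Windows, a tool such as Win32DiskImager is commonly used instead). The device name /dev/sdX below is a placeholder that must be replaced with the actual SD-card device:

    # WARNING: writing to the wrong device destroys its data.
    sudo dd if=raspbian.img of=/dev/sdX bs=4M
    sync    # flush write buffers before removing the card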

Fig18a. Remote control system install process

2. Log into the Raspberry Pi via SSH software.

Our computer does not need a Linux system installed, only SSH software, which is used to log into the Raspberry Pi over the network and control it remotely. Before logging in, we need to find the IP address assigned to the Raspberry Pi.

Fig18b. Remote control system install process


3. Open the SSH software and log into the Raspberry Pi's IP address.

Fig18c. Remote control system install process

Enter the IP address and password

Fig18d. Remote control system install process

If we have logged in successfully, the interface will look like the one below.

Fig18e. Remote control system install process


4. After a successful login, configure the camera parameters: run 'sudo raspi-config' to enter the settings interface and set the camera mode to "Enable Camera".

Fig18f. Remote control system install process

5. Reboot.

6. After the restart, log back into the Raspberry Pi via the SSH software.


5. Experiment

5.1 Equipment preparation and experiment condition

 The data-analysis experiment runs under a Windows environment, combining Visual Studio Community 2013 and OpenCV 2.4.11.

 Considering ease of pattern recognition, a neon tube would be the best choice, but it requires about 1.2 kV per meter of tube, and it is difficult for the UGV, as the carrier, to provide such a high voltage. So, weighing the landing environment, time of day, weather, recognizability and other factors, high-color-rendering tape LED and an 11.1 V LiPo battery were selected.

Fig19. Landing triangle mark material: high color rendering 2835 tape LED

Fig20. 11.1 V LiPo battery


Fig21. Landing triangle mark

Fig22. Equipped camera on UAV

Because we must minimize the takeoff weight of the UAV, we gave up using an SLR digital camera, which can take high-quality photos. Instead, we used a lightweight digital camera made by Nikon that provides sufficiently clear pictures at 5-10 m altitude.


Fig23. UAV and Landing platform

5.2 Experiment process

We prepared two steps to confirm our design. The first step is to test the OpenCV analysis speed and the transformation of a vector into a MAVLink command. The next step is to test the command transmission between the RPi and the APM system, and to let the system judge whether the UGV is moving.

Image processing

We use the Canny algorithm in OpenCV to detect the edges.


Fig24. Photos of the landing triangle mark in light/dark environments

Fig25. Edge detection experimental results

Since the detected edges are not clean straight segments, we need to select the straight lines composed of edge points by using the Hough transform; each is stored in memory as a candidate.

Fig26a. Straight stage detection result


Fig26b. Straight stage detection result

We extract the straight segments according to length, relative distance and density conditions, and eliminate unnecessary information such as debris or background. Then we detect the segments among which at least two lines form a 60°±10° or 120°±10° angle.

Finally, we detect the middle point of the triangle and compare it with a photo shot from directly above; the center-point offset gives a vector coordinate that can be transformed into a MAVLink command.

Manual and automatic command input setting

Because we need to test whether the vision data can be analyzed and sent to the UAV successfully, in the first step we sent a command vector to the UAV manually through the RPi system. The altitude was set at 5 m and the flight speed at a constant 1.5 m/s.

Then we set the image recognition rate to one frame per second and tested the actual command delay and accuracy. Finally, we let the UAV transfer command data automatically to control the flight. Due to space constraints and equipment limitations, the test altitudes were set at 5 m, 10 m and 15 m, and the distances between the UAV and the landing point were 20 m and 30 m.
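As an illustration of the automatic step, a condensed control loop of the kind described could look like the sketch below. This is my reconstruction, not the thesis code: the serial connection string, camera index, channel mapping and gain are all assumptions, and find_triangle_center is the function from the Section 3.32 sketch.

    import time
    import cv2
    from dronekit import connect

    # Connect to the flight controller over the RPi serial port (device assumed).
    vehicle = connect('/dev/ttyAMA0', baud=57600, wait_ready=True)
    cap = cv2.VideoCapture(0)        # on-board camera (index assumed)

    GAIN = 0.5                       # pixels -> RC deviation (illustrative)

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        center = find_triangle_center(gray)   # Section 3.32 sketch
        if center is not None:
            # Offset of the mark from the image centre is the command vector.
            dx = center[0] - frame.shape[1] / 2.0
            dy = center[1] - frame.shape[0] / 2.0
            # Nudge roll (ch 1) and pitch (ch 2) around the neutral 1500 us.
            vehicle.channels.overrides = {'1': int(1500 + GAIN * dx),
                                          '2': int(1500 + GAIN * dy)}
        time.sleep(1.0)              # one recognition cycle per second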


6. Experiment Result

By using the Hough transform, the triangle could be recognized in different lighting environments; despite halos and focus deviation in the photographs, OpenCV could still detect the triangle mark. But the detected middle point of the triangle has some deviation, which may affect the accuracy of the vector data. Because the altitude we set for the UAV is not very high, the influence was not very obvious.

Fig27. Test results in different environment

We compared the difficulty of feature detection under different lighting conditions and the image resolution with the UAV at different altitudes, and also tested the image shift between the quiescent and non-stationary states of the UAV. Because of the image-processing delay and UAV vibration, the returned images were not ideal. The UAV's flight distance always overshot the landing platform, with an accuracy deviation of about 0.3-2 m.


Fig28. Direction judging deviation with different altitude setting

The graph shows the distance-variation curve between the UAV and the landing point, in which the communication and feedback processing can be seen clearly. The data-return processing rate is about one cycle per second. Since the first few meters are far away from the landing site, data transmission and image processing take longer, so there are fluctuations in the distance deviation. In particular, at the last data return, the UAV always flew over the landing point because of the transmission delay. Therefore, we rewrote the program to add a judging function when switching to landing mode, so that the UAV switches to landing mode only when the photo fully matches the position directly above the landing point.

In order to improve the efficiency and accuracy of flight, we tested the accuracy of image processing at different altitudes. In the graph, the x-axis is the time of flight and the y-axis is the image-processing delay; the processing rate can be kept at one cycle per second when the altitude is under about 10 m, so the optimal processing-feedback height is below 10 m.


Fig29. Data transmission and processing delay at different altitude

Comparison with traditional automatic-flight UAVs

In our system, all data processing is done by the PC on the ground, so the takeoff weight of the UAV is much lighter than that of UAVs carrying many sensors and running a ROS system. Many research teams have used ROS with visual identification to achieve automatic flight control, but in that approach the UAV must carry a lot of extra takeoff weight, such as infrared sensors, ultrasonic sensors, laser scanning systems, etc. That not only costs more electricity but also requires heavy computation for real-time mapping, which is why the flight time is very short and the UAV volume is large.

In the graph we can see that most of the takeoff weight is battery; the avionics account for only a small part, and their power consumption is very low. The flight time was not greatly affected by the added equipment weight: although the battery capacity is only 7500 mAh, the flight time reached almost 30 min.


Fig30. Energy consumption in test state

Fig31. Weight of the structures and energy consumption


7. Conclusion and future work

This paper presented the design of a reusable and configurable UAV-UGV research platform for performing joint air to air and air to ground cooperative missions. The system can process sensor data on-board and run high level autonomy algorithms for performing collaborative, autonomous missions. This platform has been field tested in multiple flight configurations, including a demonstration of a cooperative detection and surveillance task involving two UAVs and a UGV.

Future work will involve additional experiments with multiple UAVs and UGVs operating simultaneously and autonomously. For instance, given a set of tasks to accomplish, an area to explore is how the vehicles can negotiate among themselves to divide up the tasks in a way that reduces the overall system resource expenditure. An example of this approach, currently being investigated, is the use of an auction algorithm in which vehicles place bids on tasks to determine task assignments. An area that will be explored further is the cost metric used to generate bids for the auctions. As part of this, cost methods that include true (rather than straight-line) paths, and that include wind and other environmental conditions, will be investigated. Finally, additional target-detection algorithms and sensor configurations are being investigated.


8. References

[1] S. Bayraktar, G. E. Fainekos, and G. J. Pappas (2004). Experimental cooperative control of fixed-wing unmanned aerial vehicles. In 43rd IEEE Conference on Decision and Control.

[2] A. S. Huang, E. Olson, and D. C. Moore (2010). LCM: Lightweight communications and marshalling. Proc. Int. Conf. on Intelligent Robots and Systems (IROS), Taipei, Taiwan.

[3] ROS, http://erlerobot.github.io/erle_gitbook/en/mavlink/ros/roscopter.html

[4] APM-Pixhawk, https://pixhawk.org/choice

[5] https://github.com/diydrones/ardupilot/tree/master/APMrover2

[6] http://www.raspberrypi.org/help/quick-start-guide/

[7] http://answers.ros.org/question/97131/what-is-the-best-way-to-connect-an-apm-controller-to-ros/

[8] D. Windestål, "The FPV Starting Guide", RCExplorer. Retrieved 2 June 2013.

[9] Computer Vision Introduction, http://www.comp.nus.edu.sg/~cs4243/lecture/motion.pdf
