DESIGN AND IMPLEMENTATION OF A UNIFIED
COMMAND AND CONTROL ARCHITECTURE FOR
MULTIPLE COOPERATIVE UNMANNED VEHICLES
UTILIZING COMMERCIAL OFF THE SHELF
COMPONENTS
THESIS
Jeremy Gray, Civilian, Ctr
AFIT-ENV-MS-15-D-048
DEPARTMENT OF THE AIR FORCE
AIR UNIVERSITY
AIR FORCE INSTITUTE OF TECHNOLOGY
Wright-Patterson Air Force Base, Ohio
DISTRIBUTION STATEMENT A
APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.
The views expressed in this document are those of the author and do not reflect the
official policy or position of the United States Air Force, the United States Department
of Defense or the United States Government. This material is declared a work of the
U.S. Government and is not subject to copyright protection in the United States.
AFIT-ENV-MS-15-D-048
DESIGN AND IMPLEMENTATION OF A UNIFIED COMMAND AND
CONTROL ARCHITECTURE FOR MULTIPLE COOPERATIVE UNMANNED
VEHICLES UTILIZING COMMERCIAL OFF THE SHELF COMPONENTS
THESIS
Presented to the Faculty
Department of Systems Engineering and Management
Graduate School of Engineering and Management
Air Force Institute of Technology
Air University
Air Education and Training Command
in Partial Fulfillment of the Requirements for the
Degree of Master of Science in Systems Engineering
Jeremy Gray, Civilian, Ctr, B.S.
December 2015
DISTRIBUTION STATEMENT A
APPROVED FOR PUBLIC RELEASE; DISTRIBUTION UNLIMITED.
AFIT-ENV-MS-15-D-048
DESIGN AND IMPLEMENTATION OF A UNIFIED COMMAND AND
CONTROL ARCHITECTURE FOR MULTIPLE COOPERATIVE UNMANNED
VEHICLES UTILIZING COMMERCIAL OFF THE SHELF COMPONENTS
THESIS
Jeremy Gray, Civilian, Ctr, B.S.
Committee Membership:
Dr. David Jacques, PhD
Chair
Dr. John Colombi, PhD
Member
Maj Brian Woolley, PhD
Member
AFIT-ENV-MS-15-D-048
Abstract
Small unmanned systems provide great military application utility due to their
portable and expendable design. These systems are, however, costly to develop, produce,
and maintain, making it desirable to integrate available commercial off the shelf
(COTS) components. This research investigates the development of a modular unified
command and control (C2) architecture for heterogeneous and homogeneous vehicle
teams to accomplish formation flocking and communication relay scenarios through
the integration of COTS components. In this thesis, a vehicle agnostic architecture
was developed to be applied across different vehicle platforms, different vehicle combinations,
and different cooperative missions. COTS components consisting primarily of
open source hardware and software were integrated and tested based on positional
accuracy, precision, and other qualitative measures. The resulting system successfully
demonstrated formation flocking in three of four vehicle combinations, with the fourth
still demonstrating a leader-follower relationship. The system achieved at best a mean
relative positional error of 0.99m, a standard deviation of 0.44m, and a distance root
mean square of 0.59m. The communication relay scenario was also demonstrated
with two vehicle combinations for both distance and physical obstructions breaking
the C2 link. This system demonstrated the desired capabilities and could easily be
adapted to accomplish others through the use of the flexible architecture.
Acknowledgements
I would like to start off by thanking my research advisor Dr. David Jacques. He
not only provided the technical guidance and mentorship I needed to complete this
research, but also provided me this great opportunity to attend and work at AFIT.
I also want to thank Rick Patton for the technical expertise he provided me
throughout the development of my hardware, for flying all of my research platforms,
and most of all for being a tremendous mentor and friend throughout this process.
Finally and most importantly, I have to acknowledge and thank my fiancee. I
asked her to be my wife shortly after starting AFIT and am reminded daily why I
made that decision by her undying love, support, and patience. You have truly been
my rock throughout this process and without you none of this would be possible. I
love you and cannot wait to spend the rest of my life with you.
Jeremy Gray, Civilian, Ctr
Table of Contents

Abstract
Acknowledgements
List of Figures
List of Tables
List of Abbreviations

I. Introduction
    1.1 Introduction and Motivation
    1.2 Problem Statement
    1.3 Objective
    1.4 Investigative Questions and Methodology
    1.5 Scope
    1.6 Assumptions and Limitations
    1.7 Thesis Outline

II. Literature Review
    2.1 Chapter Overview
    2.2 Unmanned Systems
    2.3 Cooperative Behavior Applications
        Formation Flight
        Communication Relay
        Search and Surveillance
    2.4 Command and Control Architectures
    2.5 State of Practice for COTS, OSH, and OSS
        Autopilot
        Communication
        Ground Control Station
    2.6 Chapter Summary

III. Methodology
    3.1 Chapter Overview
    3.2 Command and Control Architecture Development
        AV-1
        Formation Flight Use Case and OV-1
        Communication Relay Use Case and OV-1
        Architecture Development Method
    3.3 Software Development Procedure
    3.4 Test and Verification Procedure
        Formation Flocking Verification, Relative Accuracy and Precision Tests
        Communication Relay Verification, Relative Accuracy and Precision Tests
        System Latency
    3.5 Chapter Summary

IV. Architecture
    4.1 Chapter Overview
    4.2 Operational Activities
    4.3 System Elements and Functions
    4.4 Chapter Summary

V. Results
    5.1 Chapter Overview
    5.2 Selected Hardware and Software
        Communication System
        Command and Control Software
        Autopilot
        Vehicles
    5.3 Command and Control Software Development
    5.4 Formation Flocking Test Results and Analysis
        UGS Following Multi-Rotor UAS
        Multi-Rotor UAS Following UGS
        Multi-Rotor UAS Following Multi-Rotor UAS
        Fixed Wing UAS Following Fixed Wing UAS
        Formation Flight Analysis
    5.5 Communication Relay Test Results and Analysis
    5.6 Latency Test Results and Analysis
    5.7 Chapter Summary

VI. Conclusion
    6.1 Chapter Overview
    6.2 Conclusion of Research
    6.3 Recommended Future Work

Bibliography

Appendix A: Formation Flocking Leader Vehicle Script
Appendix B: Formation Flocking Follower Vehicle Script
Appendix C: Multi-Vehicle Function Module, as Tested
Appendix D: Communication Relay Remote Vehicle Script
Appendix E: Communication Relay Relay Vehicle Script
Appendix F: Multi-Vehicle Function Module With Fixed Follower Position Calculation
Appendix G: Traxxas EMAXX UGS Pixhawk Parameters
Appendix H: X8 Multi-Rotor UAS Pixhawk Parameters
Appendix I: Super Sky Surfer UAS Pixhawk Parameters
List of Figures

1. DOD Unmanned Systems Roadmap UAS Categories
2. DOD Unmanned Systems Roadmap UGS Categories
3. Pixhawk Autopilot Control Architecture
4. Centralized Communication Network Architecture
5. Decentralized Ad Hoc Communication Network Architecture
6. Decentralized Multi-Group Communication Network Architecture
7. Decentralized Multi-Layer Communication Network Architecture
8. OV-1: Formation Flight
9. OV-1: Communication Relay
10. Communication Relay Test Configuration
11. Latency Stackup Test
SV-1: System Interface Description
Traxxas E-Maxx UGS
3DR X8 Multi-Rotor UAS
Super Sky Surfer Fixed Wing UAS
Follower Commanded Position Calculation Method
39. Multi-Rotor UAS Following UGS Test 3 Vehicle Position
40. Multi-Rotor UAS Following UGS Test 4 Radial Position Error
41. Multi-Rotor UAS Following UGS Test 4 Forward-Right Position Error
42. Multi-Rotor UAS Following UGS Test 4 Vehicle Position
43. Multi-Rotor UAS Following UGS Test 5 Radial Position Error
44. Multi-Rotor UAS Following UGS Test 5 Forward-Right Position Error
45. Multi-Rotor UAS Following UGS Test 5 Vehicle Position
46. Multi-Rotor UAS Following UGS Test 6 Radial Position Error
47. Multi-Rotor UAS Following UGS Test 6 Forward-Right Position Error
48. Multi-Rotor UAS Following UGS Test 6 Vehicle Position
49. Multi-Rotor UAS Following Multi-Rotor UAS Test 1 Radial Position Error
50. Multi-Rotor UAS Following Multi-Rotor UAS Test 1 Forward-Right Position Error
51. Multi-Rotor UAS Following Multi-Rotor UAS Test 1 Vehicle Position
52. Multi-Rotor UAS Following Multi-Rotor UAS Test 2 Radial Position Error
53. Multi-Rotor UAS Following Multi-Rotor UAS Test 2 Forward-Right Position Error
54. Multi-Rotor UAS Following Multi-Rotor UAS Test 2 Vehicle Position
55. Multi-Rotor UAS Following Multi-Rotor UAS Test 3 Radial Position Error
56. Multi-Rotor UAS Following Multi-Rotor UAS Test 3 Forward-Right Position Error
57. Multi-Rotor UAS Following Multi-Rotor UAS Test 3 Vehicle Position
58. Multi-Rotor UAS Following Multi-Rotor UAS Test 4 Radial Position Error
59. Multi-Rotor UAS Following Multi-Rotor UAS Test 4 Forward-Right Position Error
60. Multi-Rotor UAS Following Multi-Rotor UAS Test 4 Vehicle Position
61. Multi-Rotor UAS Following Multi-Rotor UAS Test 5 Radial Position Error
62. Multi-Rotor UAS Following Multi-Rotor UAS Test 5 Forward-Right Position Error
63. Multi-Rotor UAS Following Multi-Rotor UAS Test 5 Vehicle Position
64. Multi-Rotor UAS Following Multi-Rotor UAS Test 6 Radial Position Error
65. Multi-Rotor UAS Following Multi-Rotor UAS Test 6 Forward-Right Position Error
66. Multi-Rotor UAS Following Multi-Rotor UAS Test 6 Vehicle Position
67. Fixed Wing UAS Following Fixed Wing UAS Test 1 Radial Position Error
68. Fixed Wing UAS Following Fixed Wing UAS Test 1 Forward-Right Position Error
69. Fixed Wing UAS Following Fixed Wing UAS Test 1 Vehicle Position
70. Fixed Wing UAS Following Fixed Wing UAS Test 2 Radial Position Error
71. Fixed Wing UAS Following Fixed Wing UAS Test 2 Forward-Right Position Error
72. Fixed Wing UAS Following Fixed Wing UAS Test 2 Vehicle Position
73. Fixed Wing UAS Following Fixed Wing UAS Test 3 Radial Position Error
74. Fixed Wing UAS Following Fixed Wing UAS Test 3 Forward-Right Position Error
75. Fixed Wing UAS Following Fixed Wing UAS Test 3 Vehicle Position
76. Fixed Wing UAS Following Fixed Wing UAS Test 3 Vehicle Position NW Corner
77. Formation Flocking Test Results Summary
78. Multi-Rotor UAS Relaying to UGS Radial Position Error
79. Multi-Rotor UAS Relaying to UGS Vehicle Position
List of Tables

1. AV-1: System Overview and Summary Information
2. Formation Flocking Test Matrix
3. Communication Relay Test Matrix
4. Vehicle Agnostic Activity Descriptions for Each Type of Vehicle
5. System Function Descriptions
6. UGS Following Multi-Rotor UAS Test Parameter Matrix
7. UGS Following Multi-Rotor UAS Test Results
8. Multi-Rotor UAS Following UGS Test Parameter Matrix
9. Multi-Rotor UAS Following UGS Test Results
14. Latency Test Results
List of Abbreviations

C2       Command and Control
DOD      Department of Defense
OSH      Open Source Hardware
OSS      Open Source Software
COTS     Commercial Off the Shelf
UAV      Unmanned Aerial Vehicle
GPS      Global Positioning System
UAS      Unmanned Aerial Systems
UGS      Unmanned Ground Systems
ISR      Intelligence, Surveillance, and Reconnaissance
SUAS     Small Unmanned Aerial Systems
GCS      Ground Control Station
PCB      Printed Circuit Board
MAVLink  Micro Air Vehicle Link
MANET    Mobile Ad Hoc Networks
VANET    Vehicle Ad Hoc Networks
FANET    Flying Ad Hoc Networks
GUI      Graphical User Interface
DODAF    DOD Architecture Framework
OV-5a    Operational Activity Decomposition Tree
OV-5b    Operational Activity Model
SV-4     System Functionality Description
SV-1     Systems Interface Description
SV-5a    Operational Activity to System Function Traceability Matrix
DRMS     Distance Error Root Mean Square
DESIGN AND IMPLEMENTATION OF A UNIFIED COMMAND AND
CONTROL ARCHITECTURE FOR MULTIPLE COOPERATIVE UNMANNED
VEHICLES UTILIZING COMMERCIAL OFF THE SHELF COMPONENTS
I. Introduction
1.1 Introduction and Motivation
In recent years, ongoing conflicts in Southwest Asia have revealed the utility of
unmanned vehicle systems to accomplish a myriad of different missions. Due to this
realization, there is a growing desire and need to execute previously manned missions
with unmanned vehicles. The missions reserved for these unmanned agents are generally
too dull, dirty, dangerous, or difficult for onboard human pilots to complete.
Additionally, the use of Small Unmanned Vehicles allows for a low cost, portable, and
expendable solution for a number of different missions. These systems not only aid
in the execution of missions previously fulfilled by humans, but can now accomplish
more complicated tasks with a greater level of efficiency and effectiveness. Moving forward
from their current capabilities, the Department of Defense states that currently
fielded systems must be expanded to "achieve the levels of effectiveness, efficiency,
affordability, commonality, interoperability, integration, and other key parameters
needed to meet future operational requirements," while minimizing the overall cost
of acquiring and maintaining the system [1].
One method of expanding the capabilities of these systems is to integrate multiple
unmanned vehicles under a unified Command and Control (C2) architecture to
cooperatively execute a common set of missions. Integrating multiple heterogeneous
or homogeneous vehicles allows for an increased number of sensors distributed across
the team, allowing the team to survey a larger area and collect data faster than a lone
agent. Also, with the sensors being distributed across multiple agents, the system
can more easily handle the loss of sensors, allowing it to be more robust. Finally,
with increasing levels of autonomy a single operator should be able to monitor and
control more vehicles, allowing them to accomplish more complex tasks with much
greater ease than multiple single vehicle pilots. Examples of such tasks include close
formation flight with other aircraft, relaying communication around obstacles and
across long distances, and surveying or searching a long perimeter or a large area
for a target of interest. Due to the added data management and processing requirements,
C2 architectures have a much higher level of complexity than single vehicle
architectures. Additionally, the weight, size, and power limitations of the vehicle can
constrain the location of the data processing.
Coinciding with the increased use of unmanned vehicles by the Department of
Defense (DOD), a rise in their use by the civilian population has also been seen.
Civilians use these unmanned vehicles with varying levels of autonomy for different
applications including those related to agriculture, videography, and hobbyist activities.
These communities have driven a rapid evolution of different Open Source Hardware
(OSH), Open Source Software (OSS), and Commercial Off the Shelf (COTS)
products, which are utilized to achieve increased levels of autonomy and capabilities.
These civilian technologies are even being utilized successfully by minimally funded
war fighters of other nations as a means of collecting intelligence in the battle
theater [2]. Though the cited case neutrally affects the United States, the availability
of these technologies does introduce the possibility of someone using them for the
wrong purpose at home or abroad. These scenarios range from invading someone's
privacy using a camera mounted to an Unmanned Aerial Vehicle (UAV), to using an
aircraft to carry a dangerous payload, which could all be accomplished autonomously
from a distance or while the perpetrator escapes. Due to these facts, it is critical
that the capabilities of the technology be fully understood by the DOD to aid in
the mitigation of these types of attacks. An additional need for understanding the
capabilities of these technologies is to reduce the cost of acquiring and maintaining
unmanned systems through the integration of OSS, OSH, and COTS. The integration
of these components can minimize the design work required to develop systems and
reduce the amount of sensitive material contained in the system, allowing it to be
more expendable if lost or captured.
This research focuses on the development and implementation of a C2 architecture
for heterogeneous and homogeneous teams of multiple cooperative unmanned vehicles
through the utilization of COTS products. Additionally, to aid in the mitigation of
attacks at home or abroad, this research further investigates the capabilities COTS
products can provide unmanned systems. The remainder of this chapter is an overview
of the problem to be solved, the objectives of the research, the limitations and scope
of the research, and the key assumptions made.
1.2 Problem Statement
Existing multi-agent systems composed of purely COTS components do not effectively
control the team to accomplish specific missions. Data latency, command
authority, and other factors hinder the system's abilities. Additionally, it is desirable
to utilize COTS components because the development, manufacturing, and
maintenance of proprietary components and software for new systems is a time
consuming and costly process. These systems must also provide interoperable interfaces
that allow for exchangeable component and software modules to increase the
system's overall flexibility and utility.
1.3 Objective
The objective of this research is to develop a multi-agent C2 architecture for
cooperative heterogeneous and homogeneous unmanned vehicles through the implementation
of COTS components, OSH, and OSS. Furthermore, it is desired that the
impact of this implementation be characterized to understand the capabilities of the
system.
1.4 Investigative Questions and Methodology
The questions this research attempts to answer and an overview of the methods
to answer these questions are outlined below.
• What are the desired missions to be accomplished by cooperative multi-agent
systems? A literature review of desired missions for multi-agent cooperative
unmanned systems will be accomplished.

• What are the structure and limitations of existing C2 architectures for cooperative
unmanned vehicles? A literature review of previously developed C2 architectures
for cooperative unmanned vehicles will be accomplished to determine
their structure and limitations.

• What are the mission-specific qualitative and quantitative measures for the
system? Performance measures are established to determine the system's ability
to accomplish each mission.

• How well does this system perform using these performance measures? Tests are
completed to determine the system's ability to meet each baseline requirement.

• What are the effects on system performance due to the utilization of COTS,
OSH, and OSS? The system performance is reviewed to determine how the use
of COTS may have affected the system performance.
1.5 Scope
This research will be limited to the modification of existing Small Unmanned
Vehicle architectures and hardware which are currently used for academic research.
The types of vehicles used will be limited to ground rovers, multi-rotors, and fixed wing
vehicles, with no use of any form of aquatic vehicles. Additionally, all components
utilized for this research will be COTS components, OSS, or OSH. This research
will only focus on the C2 of cooperative unmanned vehicles and not the interaction
between manned and unmanned vehicles. However, there will be a safety pilot in
the loop at all times to ensure the safety of the test team and any observers. Finally,
the missions to be accomplished by the architecture will be limited to the missions
outlined in the next chapter.
1.6 Assumptions and Limitations
For this research, it is assumed that the Global Positioning System (GPS) will
always be available to determine the location of the unmanned vehicles and other
navigation techniques not utilizing GPS will not be utilized. All testing of hardware
and software will be conducted with trained safety pilots. Tests using planes or
multi-rotors will be conducted at Atterbury Army Base, and ground vehicle tests will be
conducted on the Wright Patterson Air Force Base grounds. Testing will not be
conducted during rainy or windy conditions.
1.7 Thesis Outline
In the next chapter, a review of previous research and other works is accomplished
to examine the existing foundational knowledge on the subject of cooperative C2
architectures for multiple vehicles. Chapter 3 outlines the methods of developing the
required C2 architecture through the integration of COTS components and the testing
to be accomplished. Chapter 4 discusses the resulting architecture that is developed
to perform the desired scenarios. Chapter 5 discusses the results of integrating these
components through tests to measure the performance of the system. Finally, Chapter
6 reviews the entirety of this research, discusses conclusions drawn from the research
accomplished, and recommends further work to be accomplished.
II. Literature Review
2.1 Chapter Overview
This section is an overview of previous work, theory, and technical information
pertaining to the implementation of COTS, OSS, and OSH in C2 architectures for
teams of unmanned vehicles.
2.2 Unmanned Systems
Unmanned Aerial Systems (UAS) and Unmanned Ground Systems (UGS) are
systems that include the components and personnel required to control an unmanned
vehicle to perform specific missions. These vehicles are used for a number of different
missions, consisting mainly of Intelligence, Surveillance, and Reconnaissance (ISR)
and strike missions. The goal of using these systems is to reduce the probability of
harm to humans in dull, dirty, or dangerous environments. Small Unmanned Aerial
Systems (SUAS), subcategories of UAS displayed in Figure 1, are limited mainly to
ISR missions and are grouped according to weight, achievable altitude, and speed.
UGS are grouped by capability, as displayed in Figure 2, including ISR and force
protection tasks such as bomb defusing and detonation. The portability and low cost
associated with small unmanned systems allows for deployment by ground troops
with less worry of losing expensive equipment. These systems are operated remotely
by a human usually with limited levels of autonomy, only relying on higher levels of
autonomous behavior for extreme circumstances such as a loss of communication link.
The DOD is, however, investigating the addition of levels of autonomy to decrease
the cost of operating the system and decrease the workload of the pilots [1].
Figure 1. DOD Unmanned Systems Roadmap UAS Categories [1].
Figure 2. DOD Unmanned Systems Roadmap UGS Categories [1].
2.3 Cooperative Behavior Applications
Cooperative C2 architectures allow teams of unmanned vehicles to accomplish
tasks more efficiently and effectively than individual vehicles, or even tasks which
individual vehicles cannot accomplish alone. This section reviews common missions
cooperative teams of vehicles are desired to accomplish.
Formation Flight.
The first task is formation flight, which can be utilized for autonomous air to
air refueling or to utilize the leader's wake to reduce the amount of drag on the
follower and thus reduce fuel usage. The goal of the required C2 architecture is to
minimize the error between the desired position and current position of the follower
relative to the leader. One of the difficulties in accomplishing this task with a team
of fixed wing aircraft is reducing this error while flying in the turbulent flow of the
wingtip vortices. Also, this capability requires a higher controller command frequency
as the desired span between vehicles decreases, limiting the ability to use processing
at the Ground Control Station (GCS) and requiring more processing onboard.
Communication Relay.
Another task requiring a cooperative C2 architecture is the relaying of communication
links. The goal of this task is to provide a communication link from the GCS
through a relaying agent to a remote agent which has a physical object obstructing
its communication link or an obstruction due to distance from the desired point
[2]. Also, this task allows a team of unmanned vehicles to collect and transmit data
from remote sensors which are not networked with the data's desired destination [6].
Such tasks can require that the team have the ability to obtain knowledge of the other
members' positions to avoid mid-air collisions, but they do not require as high of a refresh
rate as formation flight.
Search and Surveillance.
The final task that is improved by or requires a cooperative C2 architecture is the
cooperative persistent wide area surveillance problem. These applications include the
persistent surveillance of an object with an obstructed viewing area (like a building's
front door) and the surveillance of a long border, perimeter, or large area of interest
which cannot be surveyed with a single sensor or agent. The benefit of utilizing
multiple vehicles for these applications is that it distributes the workload across the
vehicles. Again, these tasks can require that the team have the ability to obtain knowledge
about other members' locations to avoid mid-air collisions and the ability to relay data
through other vehicles in a network. Finally, it is an added benefit if the vehicles have
the ability to search intelligently, using information about where other vehicles have
recently searched [7].
2.4 Command and Control Architectures
Cooperative C2 architectures have been successfully implemented on a number
of different research projects. One implementation, for use by the Navy, is the
Low-Cost UAV Swarming Technology (LOCUST) program, which was developed by the Office
of Naval Research. This system utilizes a team of Coyote UAVs, which are launched
from a tube array based launcher. Each UAV can communicate within the group, and
the system as a whole has demonstrated the ability to achieve close formation flight
with up to nine vehicles [8].
Other implementations of formation flight through cooperative C2 architectures
have been demonstrated using the Phastball UAV test bed, which includes a team of
custom SUAS. This test bed UAV is outfitted with a custom printed circuit board
(PCB) autopilot with a GPS rated for 1.5m RMS, a 50Hz mechanical gyroscope, and
4 redundant IMUs which are oversampled to increase the resolution from 14 bit to
18 bit. This platform was used in a flight validation for a multi-UAV framework and
wing wake encounter algorithms. This framework includes a linear quadratic inner
loop controller algorithm which provides the desired trajectories. The flight tests
were conducted using virtual and physical leaders. During testing, this configuration
achieved a mean distance error of 3.43m with a standard deviation of less than
2m for straight portions of the flight path. The mean distance error increased to
approximately 10m with a standard deviation of 3m during turning maneuvers [3].
Another implementation of a cooperative C2 architecture demonstrated by the
Aerospace Controls Laboratory at the Massachusetts Institute of Technology utilized
a flexible test bed architecture which was designed for research in the field of controls.
Each UAV contains a Piccolo autopilot, which controls the vehicle’s stabilization inner
loop controls and waypoint navigation outer loop controls. Each autopilot contains
a GPS with position errors of ± 2m and a transceiver to communicate with another
mated transceiver at a dedicated GCS for each UAV. The GCS allows for outer loop
commands to be sent to the autopilot and processing of formation flight algorithms
to determine these commands. Due to the off board processing, the update rate
of commands from the ground station to the vehicle is 1Hz. The demonstration of
formation flight utilized fixed flight paths for both vehicles, varying their speeds to
reduce the positional error between the UAVs to a 25m by 25m square around the
desired position [9].
As demonstrated by Kingston et al., COTS can also be implemented for other
capabilities such as perimeter surveillance. This demonstration utilized a Kestrel
Autopilot to validate a perimeter surveillance algorithm which allows for growth of
the perimeter and the addition and subtraction of team members on a decentralized
C2 network. A decentralized C2 network is required because the size
of the perimeter could lead the vehicle out of range of the communication link. It is
proposed that the vehicle will collect data while patrolling and dump the information
once it is within range. In application, the system was able to survey the perimeter
cooperatively; however, it is not clearly stated, nor do the autopilots utilized lead to
the conclusion, that the vehicles were operated on a decentralized network.
This research directly follows an investigation into the effects of implementing different
configurations of low cost OSH and OSS for a cooperative C2 architecture. The
latency and relative positional accuracy of each configuration were measured to deter¬
mine the performance of that particular configuration. The first set of configurations
tested utilized Mission Planner as the GCS, while varying the sub-application used
to control multiple vehicles. The sub-applications used are the beta swarm application
and the python scripting application. The python scripting application required the
use of multiple instances of Mission Planner, either on the same or different GCS laptops,
with a hardwired Ethernet connection to pass information between them. The swarm
application has limited capabilities due to its use of fixed formation movements relative
to north as opposed to being relative to the heading of the lead vehicle. The
python scripting on the other hand allows for greater flexibility in the formation of
the vehicles and allows for added levels of autonomy. The results of these tests showed
that the built in beta swarming application was the least latent with an update rate
of approximately 0.4Hz. It was also found that operating both vehicles from the same
ground station produced a higher latency than from individual ground stations, with
the highest refresh rates being 0.3Hz and 0.2Hz, respectively [11].
Additional work at AFIT includes the CUSS architecture, which was developed as
a solution for cooperative surveillance of stationary and moving targets, along with
a solution for the wide area search problem [7]. The architecture incorporated the
Kestrel autopilot and the Virtual Cockpit GCS, which allows for control of multiple
vehicles from a single application. Four BATCAM UAVs were successfully flown
and provided simultaneous video feeds to survey the area. Another architecture
developed at AFIT was the OWL architecture, which was a solution for relaying a
communication link from the ground station to a remote vehicle that is out of range
of the link [5]. To achieve a signal pass through capability on the relay vehicle, the
link was passed through separate transceivers on the relay vehicle, which were each
mated with the GCS and the remote vehicle transceivers. This does not allow for
variation in the number of links due to the increased number of transceivers required
and algorithms required to handle the data management.
As shown through previous work, the ability to C2 a team or teams of homogeneous
vehicles exists. However, as the processing capability migrates away from the
airframe to a central processing location, the system's ability to handle close interactions
with other agents degrades. Due to the latency of transmission between agents,
a similar reaction occurs as the communication network transitions from an ad hoc
network to a centralized network. However, the effects of this migration away from
onboard processing and ad hoc networks are less noticeable during operations when
agents are more highly distributed in space. From this review, it is found that the
use of COTS, OSS, and OSH can and have been used to achieve homogeneous C2
architectures, with the consequence being a lower measure of capability compared to
a system composed of proprietary components.
2.5 State of Practice for COTS, OSH, and OSS
COTS, OSS, and OSH allow for a quick, easy, and cheap solution for countless
problems, with the majority of the time, effort, and cost being associated with the
integration of components with one another or existing proprietary systems. For the
application of unmanned vehicles, the integration of multiple sensors is required to
achieve a more accurate and precise navigation solution versus a standalone sensor.
With the recent expansion in availability of OSH and OSS for small unmanned vehicles,
components are readily available and come pre-integrated with the required
sensors or in a modular form allowing for the addition of sensors to achieve the desired
capabilities. This section reviews the available COTS, OSS, and OSH required
to attain a cooperative unmanned vehicle C2 architecture.
Autopilot.
The autopilot is an onboard control mechanism which contains the inner and
outer loop controllers required to perform stabilized flight and preprogrammed missions.
The major difference between different brands of COTS autopilots is the availability
of information on the autopilot design. Autopilots such as the Pixhawk and
Ardupilot [13] are developed with open source chipsets, where other autopilots, such as
the Piccolo [15] and Kestrel, are developed with custom or non-specified chipsets, or
for specific GCS software.
is to exclusively use the Micro Air Vehicle Link (MAVLink) protocol, which is a
message marshalling library of commands and responses specifically for small air
vehicles [16]. The use of this common communication protocol allows for variations in
the GCS software used, as long as that GCS software communicates using MAVLink.
Also, open source autopilots utilize modular sensors such as GPS, compass, and
airspeed sensors through common ports, allowing for easy integration of required
sensors to achieve a more accurate navigation solution.
Due to their intended use for different RC vehicle platforms, open source autopilots
utilize similar architectures. The Pixhawk autopilot controller architecture depicted
in Figure 3 shows the general structure of these controllers. The autopilot receives and
transmits MAVLink commands and telemetry via radio modems. These commands
determine the outer loop command inputs, parameter settings, and vehicle mode.
The inner loop controller commands are then achieved by monitoring sensor readouts to
determine attitude and position estimates and commanding actuators to maintain
the desired trajectory or flight characteristics. The control laws which govern the
inner and outer loops are determined by the mode of the autopilot, which is set from
the GCS or RC radio. Common modes among open source autopilots include a fully
manual mode, which passes RC commands directly to the actuators, a stabilized
mode, which controls only the inner loop controller to provide stabilized flight, and
a fully autonomous mode, which controls both inner and outer loops to maintain
stable flight and the desired trajectory. Other modes include a guided mode, which
commands the aircraft to fly to a selected point and then loiter at that point.
Figure 3. Pixhawk Autopilot Control Architecture [17].
The tradeoff between the different open source autopilots is their cost, robustness
against failures, position estimation techniques used, and overall capabilities available.
Based on these measures, the current market leader is the Pixhawk. Though a
fraction more expensive than others, this autopilot provides a more robust architecture
due to the optional redundant backup power supplies, GPS receivers, and IMUs.
An Extended Kalman Filter is also utilized to provide a better position estimate in the
presence of noise. Finally, this autopilot provides the user the fullest range of capabilities
compared to other autopilots. These cheap and accessible autopilots, coupled
with a GCS and an RC vehicle platform, allow for near fully autonomous behavior of
the vehicle.
Communication.
The method of transmitting and receiving commands and information from sensors
is a critical design point for all command and control architectures. This design
requirement is dependent on the mission to be accomplished, and is even more demanding
for cooperative unmanned vehicle C2 architectures due to their increased
complexity and high computational demands. There are two general categories of
communication network architectures which allow for certain forms of cooperative
C2.
The first category of communication networks is the centralized network. As
displayed in Figure 4, this architecture allows each agent to communicate to a central
node, which then relays information to the other agents in the team. The three major
drawbacks from this communication structure are the relatively high transmission
latency for communication between vehicles, the constrained communication range,
and the increased vulnerability due to the single point of failure at the central node.
However, this network does allow for a central processing unit, allowing vehicles to
require less onboard processing, thus reducing their total weight.
Figure 4. Centralized Communication Network Architecture [18].
The other communication network category is the decentralized network which has
no centralized node through which all agents must obtain their information, allowing
them to communicate directly or indirectly with other agents or ground stations. A
subcategory of this is the ad hoc network architecture shown in Figure 5, which allows
agents to relay information directly or indirectly to other agents through the network.
In this configuration, a single UAV is used as the backbone link to the ground station,
allowing for a single high power transceiver on the backbone UAV to extend the range
of the team. This is unlike the centralized network which would require each UAV
to have a high power receiver to achieve the same range. This single backbone and
the relative proximity of the agents to each other allows for the majority of the
team to use relatively lighter weight transceivers. Additionally, the optimal path
of the communication link between vehicles can be determined through algorithms
processed on the agent. One downfall of this architecture is the vulnerability of the
system to the loss of the backbone node, which would disconnect all other agents from
the central node. If the team were in close proximity to the central node, the backbone
node is unnecessary, reducing this vulnerability. Another downfall of this architecture
is that as the number of agents increases, the complexity of the network increases and the
available bandwidth decreases. This issue can be mitigated through the use of
other decentralized networks including the multi-group networks and multi-layer ad
hoc network. The multi-group network displayed in Figure 6 allows for centralized
control of each team while maintaining decentralized control within each team, and
the multi-layer network displayed in Figure 7 allows for decentralized control of each
team and decentralized control within each team [19].
Figure 5. Decentralized Ad Hoc Communication Network Architecture [18].
Figure 6. Decentralized Multi-Group Communication Network Architecture [18].
Figure 7. Decentralized Multi-Layer Communication Network Architecture [18].
These decentralized ad hoc networks can be accomplished through the use of
COTS transceivers which run onboard networking protocol algorithms to determine
the desired path to transfer data to its desired location. These components are used
in Mobile Ad Hoc Networks (MANET) for person to person communication, Vehicle
Ad Hoc Networks (VANET) for ground vehicle to ground vehicle communication,
and Flying Ad Hoc Networks (FANET) for flying vehicle to flying vehicle communication.
The major difference between these networks is the node density and mobility. The
node density affects the number of optional paths for the data to be transferred, and
the node mobility affects the rate at which the topology of the network changes [18].
Ground Control Station.
One vital component of a C2 architecture is the central command station or GCS,
which allows the user to command the system to accomplish different tasks and review
information received from the vehicle. The GCS communicates to the autopilot via
mated radio transceivers using the MAVLink protocol to send outer loop controller
commands and change the mode of the autopilot, which changes the control laws
governing the outer and inner loop controllers. The recent explosion of OSS and
OSH has brought forth a number of easily obtainable ground control stations.
The first category of GCS is the PC based application, which communicates with
the vehicle via a radio telemetry link or WiFi. These GCS provide a user friendly
Graphical User Interface (GUI) which allows for near full utilization of the capability
of the autopilot, limited only by the capabilities the open source community
wants in the GCS. A minimal GCS allows users to remotely change flight modes,
plan and send waypoint missions, and monitor telemetry. More advanced GCS such
as Mission Planner [20] allow for the use of python scripts to add levels of autonomy
to the system through logic and computations. Unlike APM Planner 2.0 and
Kestrel's proprietary GCS Virtual Cockpit [13], Mission Planner does not have a robust
method to control multiple vehicles, though it does have a beta version of this capability.
However, Hardy [11] successfully demonstrated the ability to use python scripts to
pass information between multiple instances of Mission Planner to perform cooperative
behavior. The GCS MAVProxy stands out due to its use of the PC operating
system’s command line to send MAVLink protocol commands directly to the vehicle
without the use of the GUI. This ground station is more flexible, modular, and allows
for a higher utilization of the full capability of the autopilot while also containing the
same capabilities of the other major GCS software. MAVProxy does not require the
use of a GUI, which can reduce the expected latency produced by this protocol layer.
Finally, like Mission Planner, this software allows for the use of scripting to achieve
a higher level of autonomy [22].
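As a concrete illustration of this style of GCS scripting, the sketch below uses the pymavlink library to read one vehicle's telemetry and push a guided position target to another. This is only a hedged approximation: the scripts developed in this research run inside Mission Planner rather than as standalone programs, and the connection strings, altitude offset, and message choices here are assumptions made for illustration.

# Illustrative sketch only: reads one vehicle's position and commands another
# toward it over MAVLink using pymavlink (an assumed stand-in for the
# Mission Planner/MAVProxy scripting used in this research).
from pymavlink import mavutil

# Connection strings are placeholders for the mated telemetry radios.
leader = mavutil.mavlink_connection('udp:127.0.0.1:14550')
follower = mavutil.mavlink_connection('udp:127.0.0.1:14560')
leader.wait_heartbeat()
follower.wait_heartbeat()

# Read the leader's latest global position (lat/lon in 1e7 degrees, alt in mm).
msg = leader.recv_match(type='GLOBAL_POSITION_INT', blocking=True)

# Command the follower toward the leader's location at a 20 m altitude offset.
# Type mask 0x0FF8 tells the autopilot to use only the position fields.
follower.mav.set_position_target_global_int_send(
    0,                                   # time_boot_ms (not used here)
    follower.target_system,
    follower.target_component,
    mavutil.mavlink.MAV_FRAME_GLOBAL_RELATIVE_ALT_INT,
    0x0FF8,                              # type mask: position only
    msg.lat, msg.lon,
    msg.relative_alt / 1000.0 + 20.0,    # metres above home
    0, 0, 0, 0, 0, 0, 0, 0)              # velocity, accel, yaw fields (ignored)

In this arrangement the script, like the GCS software it imitates, only supplies outer loop position targets; the autopilot's inner loop controllers remain responsible for stabilization.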
The most recent expansion in GCS software is tablet and phone application based
GCS, which utilize either a WiFi link or a WiFi bridge to a radio telemetry link to
control the vehicle. These applications are more simplistic than their PC counterparts,
providing a more limited number of the autopilot’s built in capabilities. All of these
ground control stations provide a GUI which displays streaming flight data and allows
the ability to set waypoints for autonomous missions. Other advanced applications
can utilize the GPS receiver on the control device to allow for a follow me mode
in which the vehicle follows the ground station, usually while keeping its camera
pointed at the target location. These applications are only for the most basic use of
the autopilot and do not allow for any additional scripting or major modification to
the base capabilities of the autopilot.
2.6 Chapter Summary
In this chapter, a definition, baseline information, and applications for cooperative
unmanned systems were established. Also, previous work pertaining to the develop¬
ment of C2 architectures for multiple cooperative unmanned vehicles was reviewed.
Finally, existing COTS, OSH, and OSS and their capabilities were discussed as they
pertain to these architectures. In the next chapter, a methodology is developed for
designing, implementing, and testing a cooperative C2 architecture for heterogeneous
unmanned vehicles.
III. Methodology
3.1 Chapter Overview
The purpose of this chapter is to articulate the methods followed to develop, test,
and verify the functionality of a cooperative C2 architecture for teams of multiple
vehicles. First, the process of developing the required architecture and software is
described. Then, the test and verification procedures to measure key characteristics
of the design are outlined.
3.2 Command and Control Architecture Development
In this section, the procedure and methods used for developing a C2 architecture
for teams of multiple vehicles are described. For this research, the DOD Architecture
Framework (DODAF) 2.0 will be utilized as the collection of possible architectural
views to create. From this collection, key views will be completed to design and define
the system.
Before discussing the selected views to be created, an executive level summary of
the system’s primary goals, scope, and purpose is developed to scope the architecture
development methods. To accomplish this, the Overview and Summary Information
(AV-1), High-Level Operational Concept Graphic (OV-1), and brief use cases are
created. The AV-1 is an executive level description of the vision, scope, and purpose
of the desired system. Additionally, the use cases describe the activities the primary
and supporting actors and system accomplish during operation. Finally, the vision is
depicted using the OV-1, which visually portrays the operational concept. All of these
views will be developed for the formation flight and communication relay operations.
AV-1.
The AV-1 in Table 1 provides a high level overview and summary of the vision,
scope, and purpose of this system. The vision describes what the desired system is
and how it will work, the scope describes the self-imposed factors that will limit the
development of the system, and the purpose describes why the system is needed to
accomplish the vision.
Table 1. AV-1: System Overview and Summary Information

Architecture Vision: The vision for this system is a unified C2 architecture for
teams of two heterogeneous or homogeneous vehicles that requires only a single
operator. This architecture will allow the operator to control a team of autonomous
vehicles to perform cooperative mission scenarios. These scenarios include performing
formation flocking and performing communication relay. The formation flocking
scenario requires a follower vehicle to autonomously maintain a fixed formation
relative to a lead vehicle. The relaying scenario requires a relay vehicle to maintain a
midpoint and pass information between the GCS and a remote vehicle that is out of
communication range of the GCS.

Scope: For the initial prototype system, teams will be limited to two vehicles. All
hardware and software will be limited to OSH, OSS, and COTS. The development
and testing period is limited to six months.

Purpose: The purpose of this system is to provide a single operator the ability to
control a team of multiple homogeneous and heterogeneous vehicles to accomplish
missions requiring formation flocking or relaying communications through local
vehicles.
Formation Flight Use Case and OV-1.
The following use case describes the actions taken during the scenario of formation
flocking by all actors and system elements. The OV-1 in Figure 8 depicts the vision
for this operation.
Formation Flight Use Case.
An operator desires to have two vehicles cooperatively flock in formation
to complete a mission. The primary actor is the GCS operator.
The operator determines the mission trajectory for the leader vehicle to
complete. The GCS operator saves the mission to the leader vehicle's
autopilot. The GCS operator launches the lead vehicle. The GCS operator
commands the lead vehicle to initialize a loiter maneuver. The
GCS operator launches the follower vehicle. The GCS operator commands
the follower vehicle to initialize a loiter maneuver. The GCS
operator initializes the flocking C2 mode on the GCS. The GCS begins
sending commands to the follower vehicle to continuously stay at the
desired position relative to the leader. The GCS operator commands the
leader vehicle to start the mission. This process ends when the mission
is complete or when the GCS operator commands each vehicle to return
home or to be recovered.
Formation Flight OV-1.
Figure 8. OV-1: Formation Flight.
Communication Relay Use Case and OV-1.
The following use case describes the actions taken during the scenario of communication
relay by all actors and system elements. The OV-1 in Figure 9 depicts the
vision for this operation.
Communication Relay Use Case.
An operator desires to have a relay vehicle pass information between
a remote vehicle and the GCS. The primary actor is the GCS operator.
The operator determines the mission trajectory for the remote vehicle to
complete. The GCS operator saves the mission to the remote vehicle's
autopilot. The GCS operator launches the remote vehicle. The GCS
operator commands the remote vehicle to initialize a loiter maneuver.
The GCS operator launches the relay vehicle. The GCS operator commands
the relay vehicle to initialize a loiter maneuver. The GCS operator
initializes the relay C2 mode on the GCS. The GCS begins sending
commands to the relay vehicle to continuously stay at the desired position
relative to the remote vehicle. The GCS operator commands the
remote vehicle to start the mission. This process ends when the mission
is complete or when the GCS operator commands each vehicle to return
home or to be recovered.
Communication Relay OV-1.
Figure 9. OV-1: Communication Relay.
Architecture Development Method.
The actions described in the use case summaries are decomposed into lower level
operational activities in the Operational Activity Decomposition Tree (OV-5a). To
aid in this decomposition, an Operational Activity Model (OV-5b) will be developed
based on the logic described in each use case. The actors and high level components
of the system depicted as swim lanes in the OV-5b are then decomposed into sub
components and the functions they accomplish in a System Functionality Description
(SV-4). These sub components are then grouped based on their physical location in
each high level component of the system, which is depicted in the Systems Interface
Description (SV-1). Finally, the leaf level functions depicted in the SV-4 will be
traced to the leaf level operational activities of the OV-5a in an Operational Activity
to System Function Traceability Matrix (SV-5a) to demonstrate concordance between
architectural views [23].
3.3 Software Development Procedure
In this section, the procedure for developing the software for a C2 architecture
for teams of multiple vehicles is discussed. Based on the use cases described in the
previous section, a controller is required for processing information from vehicle 1
and sending commands to vehicle 2 from the GCS. These controllers will be able to
command a follower vehicle to maintain a desired offset position relative to the leader
and command a relay vehicle to maintain a midpoint position between the GCS and
a remote vehicle. The commanded position calculations used in this controller for each
scenario are developed in the results and analysis chapter.
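As a rough flat-earth sketch of the two behaviors described above, the following Python fragment computes a follower's offset target from a leader's position and heading and a relay vehicle's midpoint target; this is illustrative only and is not the commanded position calculation developed later in the results and analysis chapter. The example coordinates and offsets are assumed values.

# Minimal illustrative sketch of the two commanded-position calculations,
# assuming a flat-earth approximation; the calculation actually used by the
# system is developed in the results and analysis chapter.
import math

EARTH_RADIUS_M = 6378137.0

def offset_position(lat, lon, heading_deg, offset_radius_m, offset_angle_deg):
    """Point offset_radius_m from (lat, lon) at offset_angle_deg relative to heading."""
    bearing = math.radians(heading_deg + offset_angle_deg)
    d_north = offset_radius_m * math.cos(bearing)
    d_east = offset_radius_m * math.sin(bearing)
    d_lat = math.degrees(d_north / EARTH_RADIUS_M)
    d_lon = math.degrees(d_east / (EARTH_RADIUS_M * math.cos(math.radians(lat))))
    return lat + d_lat, lon + d_lon

def midpoint(lat1, lon1, lat2, lon2):
    """Approximate midpoint between the GCS and the remote vehicle."""
    return (lat1 + lat2) / 2.0, (lon1 + lon2) / 2.0

# Example: follower commanded 15 m directly behind a leader heading due north.
print(offset_position(39.78, -86.12, 0.0, 15.0, 180.0))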
3.4 Test and Verification Procedure
This section outlines the system test and verification methods performed. The
test methods described are developed to measure the latency and position accuracy
of the system, while the verification methods are developed to validate the system’s
ability to accomplish the operations outlined in the previous section.
Formation Flocking Verification, Relative Accuracy and Precision Tests.
The first set of tests verifies the system’s ability to perform formation flocking and
measures the relative accuracy and precision achieved during formation flocking. The
verification of the system's capability to perform formation flocking is based on qualitative
pass/fail measures. The system successfully demonstrates formation flocking
if both vehicles demonstrate a leader-follower relationship and the follower displays
the tendency to stay near the desired position. The leader-follower relationship is also
demonstrated by the follower attempting to operate in the same operating region as
the leader and the follower maneuvering in the same direction, clockwise or
counterclockwise, as the leader.
For this research, accuracy is measured using the Distance Error Root Mean
Square (DRMS), which is the square root of the average squared error between the
follower vehicle's current and desired position at each time step. The equation for
DRMS is shown as Equation (1), where $N$ is the number of time steps, $P_i^*$ is the
$i$th desired position for the follower, and $P_i$ is the $i$th measured position of the follower.

$$\mathrm{DRMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(P_i^* - P_i\right)^2} \qquad (1)$$
The precision is measured using the standard deviation of the measured errors in the
follower’s position relative to the desired position of the follower. The equation for
standard deviation is shown below as Equation (2), where $N$ is the number of time
steps, $P_i^*$ is the $i$th desired position for the follower, $P_i$ is the $i$th measured position of
the follower, and $E()$ is the expected value.

$$\sigma = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left[\left(P_i^* - P_i\right) - E\left(P^* - P\right)\right]^2} \qquad (2)$$
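As a simple illustration of Equations (1) and (2), the short Python sketch below computes both metrics from logged desired and measured follower positions. It assumes the positions have already been reduced to scalar error components in metres (for example the radial error at each time step), which is an assumption made here for brevity rather than part of the thesis post-processing.

# Sketch of the accuracy and precision metrics in Equations (1) and (2),
# applied to logged desired and measured positions expressed in metres.
import math

def drms(desired, measured):
    """Distance error root mean square, Equation (1)."""
    n = len(desired)
    return math.sqrt(sum((d - m) ** 2 for d, m in zip(desired, measured)) / n)

def error_std_dev(desired, measured):
    """Standard deviation of the position errors, Equation (2)."""
    errors = [d - m for d, m in zip(desired, measured)]
    mean_error = sum(errors) / len(errors)
    return math.sqrt(sum((e - mean_error) ** 2 for e in errors) / len(errors))

# Example with four time steps of one-dimensional positions (metres).
desired = [0.0, 1.0, 2.0, 3.0]
measured = [0.4, 1.2, 1.7, 3.5]
print(drms(desired, measured), error_std_dev(desired, measured))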
These tests will be performed using four combinations of three vehicles. These
combinations include a UGS following a multi-rotor UAS, a multi-rotor UAS following
a UGS, a multi-rotor UAS following a multi-rotor UAS, and a fixed wing UAS
following a fixed wing UAS. To aid in the identification of performance characteristics
of the system while accomplishing straight and curved flight paths, the leader in each
test will be commanded to autonomously perform both a box and circle path. The
follower will simultaneously be commanded to perform formation flocking at a specific
offset radius and offset angle. After the test, the flight data for both the leader and
follower will be obtained from telemetry logs stored on the GCS. The error between
the follower position and the desired follower position will then be evaluated from
these telemetry logs.
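As an illustration of how such an error could be resolved against the leader's track during post-processing, the sketch below rotates the follower's position error into a forward-right frame aligned with the leader's heading. The function name and the assumption that positions are already in a local metre frame are illustrative; they are not taken from the thesis analysis code.

# Illustrative sketch (not the thesis post-processing code): resolve the
# follower's position error into the leader's forward-right frame, assuming
# positions are already expressed in a local (east, north) metre frame.
import math

def forward_right_error(leader_heading_deg, follower_xy, desired_xy):
    """Error between follower and desired positions as (forward, right)
    components relative to the leader's heading (degrees clockwise from north)."""
    ex = follower_xy[0] - desired_xy[0]   # east error (m)
    ey = follower_xy[1] - desired_xy[1]   # north error (m)
    h = math.radians(leader_heading_deg)
    forward = ex * math.sin(h) + ey * math.cos(h)
    right = ex * math.cos(h) - ey * math.sin(h)
    return forward, right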
The two tests with teams consisting of a UGS and a multi-rotor UAS test the
system’s ability to perform missions using teams of heterogeneous vehicles. The
tests with teams consisting of two multi-rotor UAS and two fixed wing UAS test
the system’s ability to perform missions using teams of homogeneous vehicles. The
combinations of a fixed wing aircraft and a ground vehicle or a fixed wing aircraft and a
multi-rotor are not tested due to the difference in operating speed between the vehicles,
which would limit the team's ability to maintain formation. The tests comprised of
only multi-rotor or fixed wing aircraft will use an offset in altitude due to the increased
speed of the vehicles and the desire to avoid mid-air collisions. Table 2 displays a test
matrix of the different vehicle combinations being tested.
Table 2. Formation Flocking Test Matrix

                            Follower
Leader               UGS     Multi-Rotor UAS   Fixed Wing UAS
UGS                  No      Yes               No
Multi-Rotor UAS      Yes     Yes               No
Fixed Wing UAS       No      No                Yes
Communication Relay Verification, Relative Accuracy and Precision
Tests.
Two tests will be performed to verify the system is capable of relaying the telemetry
and C2 link between the GCS and a remote vehicle through a relay vehicle. To
accomplish this, communication nodes at the GCS, relay vehicle, and remote vehicle
will be configured on the same network, allowing each to communicate with all nodes
on the network. Additionally, the transmission power will be turned down to the
lowest value. This allows the remote vehicle to more easily move out of the range of
the GCS node, forcing the path of communication through the relay vehicle at the
midpoint. The qualitative measure for this scenario is simply verifying the system
relays the C2 link. Finally, from the flight data logs stored on the GCS, the relative
position accuracy and precision of the follower vehicle will be determined using the
same methods outlined for formation flocking.
The procedure for this test is as follows. The remote vehicles will be moved away
from the GCS in manual mode until the GCS loses the ability to communicate with
the remote node, as indicated by the percent of telemetry packets received in Mission
Planner. The relay vehicle will then be moved in manual mode to the midpoint
between the GCS and remote vehicle until the C2 link between the remote vehicle and
GCS is reestablished. Doing this ensures the path of communication goes through the
relay node to the remote vehicle due to the GCS and remote nodes physical distance
from one another. Once the link is reestablished, the communication relay C2 script
will be started to command the relay vehicle to maintain a midpoint position. The
remote vehicle will then be driven manually in all directions to ensure the relay
vehicle is maintaining a midpoint position. This verifies the system’s ability to relay
the communication and command links between vehicles and maintain a midpoint
position to ensure the link is not lost again. Figure [10] depicts the test setup used to
verify this capability.
Figure 10. Communication Relay Test Configuration.
This test will be performed using two teams, one consisting of two UGS, and the
other consisting of a multi-rotor UAS relaying the C2 link to a remote UGS. Table [3]
details the combinations of vehicles and test performed for this capability.
Table 3. Communication Relay Test Matrix

Remote \ Relay        UGS    Multi-Rotor UAS    Fixed Wing UAS
UGS                   Yes    Yes                No
Multi-Rotor UAS       No     No                 No
Fixed Wing UAS        No     No                 No
System Latency.
The final measure to be tested characterizes the latency within the system. For
this research, latency is defined as the difference between the time information is sent,
to the time the information is received and the desired action occurs. For the case
of formation flight, the total latency is the difference between the time the leader
vehicle sends its telemetry to the GCS, to the time when the follower vehicle receives
the new commanded waypoint. For this system, the total system latency can be
decomposed into sub-latencies (Figure 11). The first sub-latency, denoted t1, is the
difference between the time one vehicle sends telemetry to the GCS and the time
the GCS receives the telemetry. The second sub-latency, denoted t2, is the difference
between the time the GCS receives the telemetry and the time the GCS sends a
command up to the other vehicle. The third and final sub-latency, denoted t3, is
the difference between the time the GCS sends a command up to the other vehicle
and the time the other vehicle receives the command. These sub-latencies will be
measured individually using C2 scripts on the GCS.
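As a simple illustration of this decomposition (not the measurement scripts themselves), the sketch below stamps the three sub-latencies from GCS-side timestamps. The recv_telemetry and send_command callables, and the availability of synchronized transmit and receive timestamps, are assumptions made purely for illustration.

import time

def measure_latencies(recv_telemetry, send_command):
    # recv_telemetry(): hypothetical stand-in that blocks until a telemetry packet
    #   arrives and returns (payload, t_sent), t_sent being the vehicle's transmit time.
    # send_command(cmd): hypothetical stand-in that sends a command and returns the
    #   time the other vehicle reports receiving it.
    payload, t_sent = recv_telemetry()
    t_gcs_rx = time.time()
    t1 = t_gcs_rx - t_sent                  # vehicle-to-GCS telemetry latency
    new_command = payload                   # GCS processing would occur here
    t_gcs_tx = time.time()
    t2 = t_gcs_tx - t_gcs_rx                # GCS processing latency
    t_vehicle_rx = send_command(new_command)
    t3 = t_vehicle_rx - t_gcs_tx            # GCS-to-vehicle command latency
    return t1, t2, t3, t1 + t2 + t3         # sub-latencies and total system latency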
Figure 11. Latency Stackup Test.
3.5 Chapter Summary
This chapter reviewed the development, test, and verification methods used to
ensure the system developed is capable of accomplishing all of the desired tasks.
The following chapters will further develop the required architecture and analyze the
results from the test described above.
IV. Architecture
4.1 Chapter Overview
The purpose of this chapter is to articulate the development of the system architecture to meet the architecture overview information developed in the AV-1, OV-1, and brief use cases from the previous chapter. For this architecture, all elements
are developed to be vehicle agnostic, meaning the elements could apply to ground,
quad rotor, or fixed wing vehicles. This is accomplished for the purpose of allowing
the resulting hardware and software to be applied to any of these types of vehicles
and in teams of any combination of these types of vehicles. Also, this architecture is
developed to be modular, allowing for different missions to be accomplished through
the variation of specific elements.
4.2 Operational Activities
In this section, the operational activities of the architecture are developed. To
accomplish this, an OV-5b is created to outline the logical flow of activities to accomplish each use case. From these activities, a joint OV-5a is created to depict
the decomposition from a high level abstract operation to the desired lower level
operational activities for the system to accomplish.
The OV-5b shown in Figure 12 depicts the flow of activities performed by the GCS
operator, GCS, and each vehicle for both the formation flocking and communication
relay use cases. These use cases are combined into a single activity flow diagram
due to the similarities in the activities performed and to drive the architecture to be
agnostic towards the mission being accomplished. In this view, it is shown that the
GCS operator initializes the use case by performing mission planning operations on
the GCS, with the GCS providing visual information regarding the developed mission.
The GCS operator then commands the GCS to write the mission to the autopilot,
causing the GCS to send a command to store the mission. Vehicle 1 receives this
command and provides a verification receipt to the GCS, which visually displays
the acceptance of the mission. After launching vehicle 1, the vehicle performs an
idle maneuver to allow vehicle 2 to be launched and start performing the desired
cooperative behavior before vehicle 1 proceeds to perform the mission. The follower
will continue to perform the cooperative behavior until the mission is complete, at
which time it will perform an idle maneuver until vehicle 1 is recovered. The flow of
activities ends with vehicle 2 being recovered.
Figure 12. OV-5b: System Operational Activity Diagram.
The activity Perform cooperative activity is further refined in Figure 13 to detail each vehicle's role in the activity. Additionally, this diagram aids in determining the communication structure required to pass vehicle telemetry to the GCS and
commands to each vehicle. This activity involves the GCS software and both vehicles.
The activity Calculate desired vehicle 2 position represents the algorithm performed
by the GCS to determine the commanded go to location sent to vehicle 2 to accomplish each cooperative behavior. This element is the point where the architecture
varies for each cooperative behavior, including formation flight and communication
relay, to allow for different mission controller calculations.
Figure 13. OV-5b: Perform Cooperative Activity.
The activities depicted in both OV-5b views are compiled into the OV-5a shown
in Figure 14. Additional elements shown in blue and outlined in black are derived activities that are required to perform other activities contained in the OV-5b. The Pass telemetry and Pass command activities are derived from the need to pass information and commands to and from the remote vehicle through the relay vehicle
for the communication relay use case. The Maintain steady level movement and Provide navigation measurement activities are derived from the need for the vehicle to stabilize itself and utilize sensors to perform waypoint based navigation. The Maneuver vehicle and Move vehicle activities are derived from the need for the vehicle
to maneuver and propel itself during missions. Finally, the Command idle maneuver
activity is derived from the need for the operator to sequence the launching of each
vehicle by having them perform their idle maneuver to wait for further commands.
Figure 14. OV-5a: Operational Activity Decomposition Tree.
The activities depicted in the OV-5b and OV-5a views are written to be agnostic towards vehicle type to allow the architecture to be applied to different types of vehicles. Table 4 below contains descriptions of how these activities are performed by each type of vehicle.
Table 4. Vehicle Agnostic Activity Descriptions for Each Type of Vehicle

Activity               | Action by UGS                                        | Action by Multi-Rotor UAS                                                                                           | Action by Fixed Wing UAS
Perform Launch         | Start driving.                                       | Initialize propellers. Lift off of ground and hover at designated altitude and position.                            | Perform a hand launch or rolling launch and fly in a helix until the designated altitude is reached.
Perform Recovery       | Start driving.                                       | Transit to the landing site. Hover over landing site. Reduce altitude until vehicle touches down. Stop propellers.  | Transit to the landing site. Reduce altitude and speed. Land the UAS in the landing area.
Perform idle maneuver  | Stop driving.                                        | Stop all movement and hover at the desired position or perform a circular loiter around the location.               | Perform a circular loiter maneuver around the specified location.
Perform go to location | Drive to the location and perform an idle maneuver.  | Fly to location and perform an idle maneuver.                                                                       | Fly to location and perform an idle maneuver.
4.3 System Elements and Functions
The SV-1 is a graphical representation of the physical architecture. This representation displays the allocation of components to sub-systems and the information exchanged between components. This diagram will also aid in ensuring each component can receive the specified data type.
Figure 15 depicts the SV-1 view for the desired system. Each element depicted
is either directly responsible for one or more of the activities described in the OV-5a, or is required to aid other elements in the accomplishment of their activities.
These elements which aid other elements in accomplishing their activities include the
COM port, Monitor, and mouse and keyboard. These specific elements aid both the
operator and communication transceiver with passing commands and information to
and from the GCS computer and GCS software.
From this diagram, the operator provides inputs to the GCS via the keyboard
and mouse, and receives information from the GCS via the monitor. The GCS sends
commands and receives telemetry from each vehicle via the GCS communication
transceiver. Each communication transceiver is on a network with the other communication transceivers. This configuration is required to allow one vehicle to act
as a relay vehicle if the other is out of range of the GCS. This case can be shown
by breaking the information passage between the GCS and vehicle 1. Due to the
networked communication transceivers, information being passed across this link can
still be passed through the vehicle 2 communication transceiver to the GCS. Each
vehicle is identical except for the telemetry and commands sent to the autopilot. This
is displayed this way to show that no processing is accomplished on the vehicle, and
only commands addressed to the vehicle will reach its autopilot. Finally, in each
vehicle, the autopilot provides actuation commands to each component and receives
GPS position information from the GPS receiver.
Figure 15. SV-1: System Interface Description.
The elements of the SV-1 are also defined by the system functions they perform.
These elements are further refined into the system functions they perform in the
SV-4 of Figure 16. Table 5 describes each of these functions based on the system
element that performs the function, the required input, and the resulting output of
each function.
Figure 16. SV-4: System Functionality Description.
Table 5. System Function Descriptions

Function                       | System Element                                            | Input                                                                  | Output
Plan Mission                   | Operator                                                  | Mission specific requirements                                          | Planned mission to be written to an autopilot
Send Command                   | GCS Software                                              | Command to be sent and address of recipient                            | Command is sent to recipient
Write Mission                  | GCS Software                                              | Mission to be written to autopilot                                     | Confirmation that mission was written to autopilot
Perform Calculation            | GCS Software                                              | Required information for calculation to be performed                   | Commanded position to be sent to vehicle
Display Information            | Monitor                                                   | Information from GCS to be displayed                                   | Information is displayed
Receive User Input             | Mouse and Keyboard                                        | User mouse or keyboard input                                           | Command in GCS software
Receive Information            | Communication Transceiver                                 | Information from sender and address of recipient                       | Information is received
Send Information               | Communication Transceiver                                 | Information and address of recipient                                   | Information is sent
Pass Information               | Communication Transceiver, COM Port                       | Information from sender to be passed and the address of the recipient  | Information is passed to recipient
Perform Command                | Autopilot                                                 | Command from GCS                                                       | Vehicle actuator signals
Perform Steady Level Movement  | Autopilot                                                 | Sensor measurements                                                    | Vehicle actuator signals
Provide Navigation Measurement | GPS-Compass                                               | GPS signal and magnetic North signal                                   | Sensor measurement
Move Vehicle                   | Maneuvering Components and Primary Propelling Component   | Vehicle actuator signals                                               | Vehicle movement
To ensure all of the operational activities are being completed by system elements, the SV-5a in Figure 17 is used to map operational activities depicted in the OV-5a to system functions depicted in the SV-4. This view is used to both ensure each activity is able to be performed and ensure there are no extraneous components either completing the same activities or completing non-required activities.
Figure 17. SV-5a: System Function Traceability Matrix.
4.4 Chapter Summary
In this chapter, architectural views were created to articulate the development
of the system architecture to perform the conceptual operations described in the
AV-1, OV-1, and brief use cases. First, multiple OV-5b views were developed to
depict the logical sequence of activities required to perform the formation flocking
and communication relay use cases. An OV-5a was also created to depict all of the
activities depicted in the individual OV-5b views and how they related to the higher
level operation of the system. An SV-1 and SV-4 were then developed to depict the
system elements and functions required to accomplish the activities of the OV-5a and
how these elements interact. Finally, an SV-5a was created to ensure each activity is
being completed by a system function.
V. Results
5.1 Chapter Overview
In this section, the hardware and software selected to fulfill the roles and perform
the functions developed in the previous chapter are discussed. Then, the C2 scripts
to perform the formation flight and communication relay scenarios described in the
previous chapter are developed. Finally, the results of the tests performed to validate
and quantify the abilities of the system are outlined and analyzed.
5.2 Selected Hardware and Software
Communication System.
The Wave Relay MANET communication system was selected as the C2 link
between the GCS and each vehicle. This COTS system utilizes a 2.3 GHz to 2.5 GHz
radio frequency at up to 2.0 Watts to communicate with other nodes on the same IP
network. The routing path for each link is optimized in real time based on the GPS
location of each node and other factors proprietary to the system. Each node has the
capability of routing data from other nodes to the desired IP address. This capability
gives the system the ability to relay the C2 link from the GCS to a remote vehicle
through an intermediate relay vehicle. An IP to TTL converter is used to convert the
information being passed between the IP based Wave Relay node and the TTL based
telemetry port of the autopilot. Additionally, this converter provides the autopilot
with an IP address, allowing the GCS to connect via UDP or a virtual COM port.
Command and Control Software.
The command line based GCS software MAVProxy was selected as this system’s
primary GCS. This open source GCS software utilizes the MAVLink protocol to
communicate with the vehicle’s autopilot through the C2 link. MAVProxy does not
require a GUI like other GCS software, which reduces the processing power required
and allows the GCS to run faster than other GUI based GCS. Also, this GCS software
utilizes modules which increase the capabilities of the baseline GCS software. The
one module used in this system is DroneAPI, which enables the GCS to achieve
higher levels of autonomy through the use of Python 2.7 C2 scripts. These scripts
allow the GCS to send go to here commands to its vehicle, receive telemetry data,
and manipulate this information through the use of the full suite of tools available
in Python 2.7. One downfall of this GCS and module is it does not allow one GCS
and module pair to control more than one vehicle. Due to this, each vehicle requires
its own instance of MAVProxy running DroneAPI. Finally, this GCS is also capable
of sharing the vehicle’s telemetry and command authority with other GCS software.
On initialization, an IP socket is created to pass and receive this information. For
the purposes of meeting requirements outlined in the AFIT Military Flight Release,
Mission Planner is used as a heads up display for each vehicle during testing.
Autopilot.
The Pixhawk autopilot was selected to perform the autopilot element functions
described in the previous chapter. This autopilot is composed of an open source
chip set which utilizes inner loop stability control and outer loop waypoint navigation
control algorithms to allow vehicles to perform autonomous missions. These inner
and outer loop controllers fulfil the required maintain steady level movement function
described in the previous chapter.
One method of waypoint control used by this autopilot is the guided waypoint,
which is a single point that the vehicle will navigate to. Additionally, this waypoint
can be sent from the C2 script to produce an updated navigation path for the vehicle.
The Pixhawk utilizes the MAVLink protocol, commonly used on many different open source ground control stations and autopilots. This protocol allows the autopilot to both receive commands from the GCS and transmit telemetry down to the
GCS. This functionality fulfills the receive command and send telemetry functions
described in the previous chapter.
Another function of this autopilot is the ability to store missions from the GCS and
perform stored missions. Waypoint based missions can be written to the autopilot’s
internal storage, allowing the autopilot to perform these missions when commanded
to by the GCS.
Vehicles.
One of the goals of this system is to have the ability to control any small unmanned
vehicle platform including UGS, multi-rotor UAS, and fixed wing UAS. Three COTS
RC vehicles were chosen to fill these roles and each vehicle was retrofitted with the
autopilot, sensors, and communication node described above. For the autopilot, each
vehicle received a 3DR GPS/Compass. For the communication system, each vehicle
received a GPS receiver, a 2.3 GHz to 2.5 GHz antenna, and an Ethernet to TTL
converter.
For the UGS, as shown in Figure 18, a Traxxas E-Maxx RC truck was retrofitted with a component shelf which sits on the chassis of the vehicle.
Figure 18. Traxxas E-Maxx UGS.
For the UAS multi-rotor aircraft, as shown in Figure 19, a 3DR X8 Octo-copter was used with no modifications to the base airframe.
Figure 19. 3DR X8 Multi-Rotor UAS.
For the fixed wing UAS, as shown in Figure 20, a Banana Hobby Super Sky Surfer was used. For this airframe, the rear control surface servos were moved from
the center fuselage back to the tail, reducing the length and allowable bending of the
wire connecting the servo arm to the surface control limb. Additionally, the Electronic
Speed Control (ESC) was mounted to the bottom of the aircraft to increase the airflow
across it and to reduce the space used in the fuselage.
Figure 20. Super Sky Surfer Fixed Wing UAS.
5.3 Command and Control Software Development
After developing the system architecture and choosing the componentry required,
it was discovered MAVProxy’s DroneAPI module does not have the ability to control
multiple vehicles from one C2 script. Due to this, each vehicle is controlled by its
own instance of MAVProxy and DroneAPI C2 script. This section develops the
methods of calculating the commanded position used to perform formation flocking
and communication relay through the DroneAPI C2 scripts.
Commanding one vehicle to move to a specified point relative to the other vehicle is
accomplished by sending a guided waypoint command from the GCS. This waypoint
command progresses through two control states. The first control state forces the
vehicle to change its ground course to navigate to the waypoint. The second state
forces the vehicle to perform a loiter maneuver at the point. Depending on the type
of vehicle and rules set on the vehicle's autopilot, the vehicle will either stop inside a predesignated radius circle around the point or maneuver through the circle and perform a circle maneuver. In the case of the communication relay, it is desired
the relay vehicle reaches this final state if no new positions are being sent.
In the case of maintaining formation during flight, it is desired the follower not
perform a circle maneuver, as it would force the vehicle out of formation. In the case
of the UGS and multi-rotor, the vehicle can stop in the circle and not inflict position
error. However, the fixed wing aircraft is required to continue moving to maintain
flight and will perform circular loiters, which can induce position error from the plane
flying out of formation. This act of flying out of formation changes both the plane’s
heading and position, possibly also making it more difficult to recover the formation.
Placing the waypoint forward of the desired position makes it less likely that the
follower will reach the waypoint, and therefore reduces the probability of entering
the final control state. An additional benefit of placing the waypoint forward of the
desired position is it forces the follower to cut inside corners and catch up with a
leader vehicle if it is out of range.
To avoid the end state described, the waypoint is placed forward of the desired
position of the follower in the direction of the leaders ground course. The ground
course is used to negate the effects of side slip with the multi-rotor and fixed wing
UAS. Also, altitude will not be taken into account, as it will be fixed allowing each
vehicle to operate freely in its own altitude plane without interference. Figure 21
depicts this method of control and is used to calculate the commanded waypoint. In
this figure, the offset radius (r_offset) is the radial distance from the desired follower position to the leader vehicle's position, the offset angle (theta_offset) is the angular offset from the leader's ground course vector, the L1 offset (L1) is the forward offset along the leader's ground course vector, and the ground course angle (theta_gc) is the angle of the ground course vector of the leader vehicle relative to north.
Figure 21. Follower Commanded Position Calculation Method.
One downside to a constant L1 forward offset is that the follower is always commanded to fly ahead of its desired position. If the lead vehicle stops moving or slows, the follower will fly ahead of its desired position, forcing it out of formation. To mitigate this, the L1 forward offset is defined as a function of the lead vehicle's ground course velocity and a lead time constant, denoted as L1t. The forward offset is now defined as the product of the ground course velocity and the lead time constant. With this function, if the lead vehicle reaches a ground course velocity of zero, the follower will be commanded to maneuver directly to its desired position.
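The geometry described above can be summarized in a short sketch. This is only an illustration of the calculation, not the follower_pos function of Appendix C; it assumes a flat-earth local frame in meters, that the desired follower position trails the leader along its ground course, and that theta_offset is measured from that trailing direction.

import math

def commanded_waypoint(leader_north, leader_east, gc_deg, gc_speed,
                       r_offset, theta_offset_deg, l1t):
    # Return (north, east) of the guided waypoint sent to the follower.
    gc = math.radians(gc_deg)                         # leader ground course from North
    trail = gc + math.pi + math.radians(theta_offset_deg)
    # Desired follower position: r_offset meters from the leader.
    des_n = leader_north + r_offset * math.cos(trail)
    des_e = leader_east + r_offset * math.sin(trail)
    # Forward offset L1 = ground course velocity x lead time constant L1t.
    l1 = gc_speed * l1t
    return (des_n + l1 * math.cos(gc), des_e + l1 * math.sin(gc))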
One python script for each vehicle is required to accomplish this method of control
due to the required use of two instances of MAVProxy. For the case of formation flight,
these scripts consist of one leader server script and one follower client script. For the case of the communication relay scenario, these scripts consist of one remote vehicle server script and one relay vehicle client script.
The purpose of the leader server script is to acquire the position, ground course,
and velocity of the leader vehicle, and provide that information to the follower client
script through a UDP socket. The ground course is calculated using the vehicle’s
velocity in the X, Y, and Z directions relative to the body frame due to DroneAPI’s
inability to obtain the ground course directly from the vehicle’s telemetry stream.
The ground course of the vehicle is then rotated to be relative to North, the vehicle’s
position in latitude and longitude is provided relative to the geodetic frame (WGS-84), and the altitude is provided relative to the launch point of the vehicle. The
python script created to perform this operation is contained in Appendix [A]
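As a minimal sketch of that calculation (the actual leader server script is in Appendix A), the ground course and ground speed can be formed from the horizontal velocity components; north and east velocity components in m/s are assumed here.

import math

def ground_course(v_north, v_east):
    # Ground course in degrees clockwise from North, and ground speed in m/s.
    course = math.degrees(math.atan2(v_east, v_north)) % 360.0
    speed = math.hypot(v_north, v_east)
    return course, speed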
The purpose of the follower client script is to receive the information outlined from
the leader server script through a UDP socket, calculate the next waypoint to send to
the follower, and send the waypoint command to the follower autopilot. The python
script created to receive the leader’s information, call the offset position function,
and send the resulting position to the follower autopilot is contained in Appendix |Bj
The python function created to perform the commanded offset position calculation is
contained in the function followerjpos in Appendix |Cj
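A minimal sketch of the client side of this exchange is shown below. The datagram format, port number, helper names, and loop rate are illustrative assumptions rather than the scripts of Appendices B and F, and the guided go to command is represented by a stub instead of the DroneAPI call.

import json
import socket
import time

def compute_follower_waypoint(leader):
    # Placeholder for the offset calculation (follower_pos in Appendix C).
    return leader["lat"], leader["lon"]

def send_guided_waypoint(lat, lon):
    # Hypothetical stand-in for commanding the follower autopilot through DroneAPI.
    print("goto %.7f, %.7f" % (lat, lon))

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("127.0.0.1", 14650))      # assumed port shared with the leader server script

while True:
    data, _ = sock.recvfrom(1024)    # one leader state update per datagram
    leader = json.loads(data)        # e.g. {"lat": ..., "lon": ..., "gc": ..., "vel": ...}
    lat, lon = compute_follower_waypoint(leader)
    send_guided_waypoint(lat, lon)
    time.sleep(0.5)                  # governed loop rate; the client is run faster than the server (see below)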
The purpose of the remote vehicle server script for communication relay is similar
to the leader server script, with the difference being the information obtained and sent
only includes the position of the remote vehicle. Also, for purposes of determining
the midpoint for the relay vehicle to maintain, on starting the remote vehicle server
script, the vehicle’s first position is obtained and used as the position of the GCS.
This script is contained in Appendix [D]
The purpose of the relay client script is similar to the follower client script, with
the difference being the calculation performed to determine the desired position of the relay vehicle. The relay client script contained in Appendix E receives the remote vehicle's information, calls the position function, and sends the guided waypoint command. The position is determined by finding the midpoint between the GCS and remote vehicle, and is contained in the function relay_pos in Appendix C.
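A minimal sketch of that midpoint calculation is shown below. It is an illustration of the approach, not the relay_pos function of Appendix C, and assumes the operating area is small enough that averaging latitude and longitude directly is acceptable.

def relay_midpoint(gcs_lat, gcs_lon, remote_lat, remote_lon):
    # The relay vehicle is commanded to the midpoint of the GCS and remote positions.
    return (gcs_lat + remote_lat) / 2.0, (gcs_lon + remote_lon) / 2.0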
The rate at which both of these scripts run is governed to control the amount of
information stored in the UDP buffer. It was found during initial tests that if the
server script is run at a higher rate than the client script, the buffer will get filled,
causing the client script to send commands based on old information. For this reason,
the client script is always run at twice the rate of the server to ensure the buffer does
not fill.
5.4 Formation Flocking Test Results and Analysis
In this section, the results of the formation flocking tests outlined in the methodology section are discussed and analyzed. For these analyses, both the absolute position
of each vehicle and the relative position error in meters will be used to display the
collected data. The absolute position displayed in meters at a local level will be used
to identify behavioral traits of the system while performing formation flocking, while
the relative position error is used to quantify the system's abilities. The relative position error is determined and displayed using the radial distance between the follower's position and the follower's desired position relative to the leader. The relative position error of the follower forward and right of the desired position relative to the leader's ground course is also displayed. These values are displayed at a higher level
of precision than the GPS receiver can provide because the measurements are taken
after the combination of the IMU and GPS solutions in the Kalman filter. For all
tests except the fixed wing UAS tests, the position error will be calculated after the
follower has stabilized its position, typically about 50 seconds after the follower script is initiated. For all tests, the lead vehicle performs the specified path counter-clockwise. Also, the GCS and GCS communication node were located on the East edge of
each leader’s path for all tests. All other test specific incidences or interferences will
be outlined in each test’s section.
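A minimal sketch of the forward/right decomposition used in the plots that follow is given below. It assumes the position error has already been expressed as north and east components in meters and simply rotates that error into the leader's ground-course frame; it is an illustration, not the analysis code used for the plots.

import math

def forward_right_error(err_north, err_east, gc_deg):
    # err_* = follower position minus desired position; gc_deg = leader ground course.
    gc = math.radians(gc_deg)
    forward = err_north * math.cos(gc) + err_east * math.sin(gc)
    right = -err_north * math.sin(gc) + err_east * math.cos(gc)
    return forward, right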
UGS Following Multi-Rotor UAS.
The first set of tests performed included a team composed of a UGS in the role of
the follower and a multi-rotor UAS in the role of the leader. This test was performed
as described in the methodology section, with the exception of not performing circular
paths. This change was caused by a time constraint while testing and the inability of
the particular multi-rotor used to perform circular paths. The test parameters listed
in Table 6 were varied with an offset radius of 2m and an offset angle of 0°.
Table 6. UGS Following Multi-Rotor UAS Test Parameter Matrix

Test Number    Flight Path    L1t
1              Box            0 s
2              Box            1 s
3              Box            2 s
Test 1: Box Path, L1t = 0s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 0s forward offset lead time. As shown in Figure 22, after
allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 3.43m with a standard deviation of 2.23m and a DRMS of
3.46m. Additionally, as shown in Figure 23, the system achieved mean forward and
right errors of -0.88m and 0.62m with standard deviations of 3.25m and 2.26m and
DRMS of 2.84m and 1.98m. Figure [24] shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 22. UGS Following Multi-Rotor UAS Test 1 Radial Position Error.
Figure 23. UGS Following Multi-Rotor UAS Test 1 Forward-Right Position Error.
Figure 24. UGS Following Multi-Rotor UAS Test 1 Vehicle Position.
Test 2: Box Path, L1t = 1s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 1s forward offset lead time. As shown in Figure 25, after
allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 1.32m with a standard deviation of 0.83m and a DRMS of
1.40m. Additionally, as shown in Figure 26, the system achieved mean forward and
right errors of 0.19m and 0.22m with standard deviations of 1.28m and 0.85m and
DRMS of 1.16m and 0.79m. Figure [27] shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 25. UGS Following Multi-Rotor UAS Test 2 Radial Position Error.
Figure 26. UGS Following Multi-Rotor UAS Test 2 Forward-Right Position Error.
Figure 27. UGS Following Multi-Rotor UAS Test 2 Vehicle Position.
Test 3: Box Path, L1t = 2s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 2s forward offset lead time. As shown in Figure 28, after
allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 2.97m with a standard deviation of 1.70m and a DRMS of
2.79m. Additionally, as shown in Figure 29, the system achieved mean forward and
right errors of 0.13m and 0.74m with standard deviations of 2.53m and 2.18m and
DRMS of 2.07m and 1.88m. Figure [30] shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from
the leader. During the test it was noted that the UGS would drive ahead of the
multi-rotor and stop, indicating it reached the waypoint and was waiting for a new
command.
Figure 28. UGS Following Multi-Rotor UAS Test 3 Radial Position Error.
Figure 29. UGS Following Multi-Rotor UAS Test 3 Forward-Right Position Error.
Figure 30. UGS Following Multi-Rotor UAS Test 3 Vehicle Position.
Team Analysis.
For this heterogeneous vehicle combination, the L1t value of 1s resulted in the lowest position error for both the accuracy and precision measures. From this, it can be concluded that this team performs straight line paths with a higher level of positional accuracy and precision with this value of L1t. The resulting measurements are summarized in Table 7 below.
Table 7. UGS Following Multi-Rotor UAS Test Results

       Forward Error (m)        Right Error (m)          Position Error (m)
Test   μ      σ      DRMS       μ      σ      DRMS       μ      σ      DRMS
1      -0.88  3.25   2.84       0.64   2.26   1.98       3.43   2.23   3.46
2      -0.19  1.28   1.16       0.22   0.85   0.79       1.32   0.83   1.40
3      -0.13  2.53   2.07       0.74   2.18   1.88       2.97   1.70   2.79
One recurring issue seen during this test is the UGS's inability to maintain a
low enough velocity to not drive past the desired position. This is caused by the
motor and transmission’s inability to provide the higher torque required to move at
slower speeds, which for this specific platform is seen at velocities below 1.5 m/s.
These factors also cause the vehicle to accelerate quickly from a stopped position.
Additionally, the autopilot cannot command the vehicle to reverse or brake, meaning if the vehicle comes to a point it needs to stop at, the autopilot reduces the throttle to zero and the vehicle rolls to a stop. The combination of these three factors causes
the system to induce position error by driving or rolling past the desired positions.
Additional error is accrued due to the method the UGS uses to maneuver, namely performing circular turns to change heading. After the previous issues occur and no
new waypoint is received, the vehicle will perform a turning maneuver to navigate
back to the passed point. This method of maneuvering can also induce errors at
corners, as seen at the North West corner at points 81 and 129 of Figure 24. At these
corners, the vehicle drove through the commanded point, and after turning to the
right to maneuver back to the point, received a new point further down the next leg
of the path. This occurrence causes the vehicle to perform the equivalent of three
right hand turns versus one left hand turn.
These errors are also shown to be induced more at corners, as indicated by the spikes in radial position error for tests 1 and 3. For test 1, three spikes in error occur at
the North West corner and are due to the vehicle performing three right hand turns
instead of performing a single left hand turn. For test 3, these spikes in error are also
induced by the vehicle's inability to perform a single left hand turn at the corners.
Test 2 does not have noticeable spikes in error at corners due to the higher frequency
of spikes across the entire time, which occurs at both turns and straight paths.
Finally, from the accuracy and precision measures for the forward and right errors,
the system has less ability to maintain forward position opposed to right position.
For each test, the standard deviation and DRMS of the right error are on average
0.59m and 0.48m lower than that of the forward error. However, as L1t increases, the forward mean error decreases, with no noticeable trend in either direction for the standard deviation or DRMS. This means that the error ellipse formed by the standard deviation is centered closer to the desired location for higher values of L1t.
Multi-Rotor UAS Following UGS.
The next set of tests performed included a team composed of a multi-rotor UAS in
the role of the follower and a UGS in the role of the leader. This test was performed
as described in the methodology section, with the test parameters listed in Table 8
varied with an offset radius of 2m and an offset angle of 0°. This test was performed
in winds between 8 knots and 11 knots at between 170° and 210°.
Table 8. Multi-Rotor UAS Following UGS Test Parameter Matrix

Test Number    Flight Path    L1t
1              Box            0 s
2              Box            1 s
3              Box            2 s
4              Circle         0 s
5              Circle         1 s
6              Circle         2 s
Test 1: Box Path, L1t = 0s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 0s forward offset lead time. The data collected had a gap
of missing information making up the majority of the first 50s, so the follower was
allotted an additional 50s to stabilize after the gap. As shown in Figure 31, after
allowing the follower to stabilize its position for 50s after the data gap, the system
achieved a mean radial position error of 4.24m with a standard deviation of 2.27m and
a DRMS of 2.27m. Additionally, as shown in Figure 32, the system achieved mean
forward and right errors of -3.19m and -1.16m with standard deviations of 2.93m and
1.89m and DRMS of 2.48m and 1.27m. Figure [33] shows the path taken by both the
leader and the follower, and the desired path of the follower based on the desired
offset from the leader.
Figure 31. Multi-Rotor UAS Following UGS Test 1 Radial Position Error.
Figure 32. Multi-Rotor UAS Following UGS Test 1 Forward-Right Position Error.
Figure 33. Multi-Rotor UAS Following UGS Test 1 Vehicle Position.
Test 2: Box Path, L1t = 1s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 1s forward offset lead time. At time 120s, a large gust of wind blew the lead vehicle off course, causing an abnormally large spike in the error. As shown in Figure 34, after allowing the follower to stabilize its position for 50s and
discounting errors after the gust of wind at time 120s, the system accomplished a
mean radial position error of 1.59m with a standard deviation of 0.97m and a DRMS
of 1.54m. Additionally, as shown in Figure 35, the system achieved mean forward
and right errors of -0.85m and -0.17m with standard deviations of 1.29m and 0.92m
and DRMS of 0.93m and 0.56m. Figure 36 shows the path taken by both the leader
and the follower, and the desired path of the follower based on the desired offset from
the leader.
Figure 34. Multi-Rotor UAS Following UGS Test 2 Radial Position Error.
Figure 35. Multi-Rotor UAS Following UGS Test 2 Forward-Right Position Error.
Figure 36. Multi-Rotor UAS Following UGS Test 2 Vehicle Position.
Test 3: Box Path, L1t = 2s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 2s forward offset lead time. As shown in Figure 37, after
allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 0.92m with a standard deviation of 0.43m and a DRMS of
0.90m. Additionally, as shown in Figure 38, the system achieved mean forward and
right errors of -0.03m and -0.23m with standard deviations of 0.87m and 0.46m and
DRMS of 0.77m and 0.46m. Figure [39] shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 37. Multi-Rotor UAS Following UGS Test 3 Radial Position Error.
Figure 38. Multi-Rotor UAS Following UGS Test 3 Forward-Right Position Error.
Figure 39. Multi-Rotor UAS Following UGS Test 3 Vehicle Position.
Test 4: Circle Path, L1t = 0s.
For this run, a 15m radius circle pattern was performed by the leader vehicle, with
the follower vehicle using a 0s forward offset lead time. During this test, the UGS
leader stopped at approximately 165s due to a malfunction in its waypoint following
operation and the operator had to manually override the truck. The remainder of the
data after this point was removed due to it not allowing an additional 50s for the
follower to stabilize. As shown in Figure 40, after allowing the follower to stabilize
its position for 50s and removing the data after 165s, the system achieved a mean
radial position error of 1.94m with a standard deviation of 0.97m and a DRMS of
1.54m. Additionally, as shown in Figure 41, the system achieved mean forward and
right errors of -0.50m and -0.42m with standard deviations of 1.45m and 1.47m and
DRMS of 1.09m and 1.09m. Figure [42] shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 40. Multi-Rotor UAS Following UGS Test 4 Radial Position Error.
Figure 41. Multi-Rotor UAS Following UGS Test 4 Forward-Right Position Error.
Figure 42. Multi-Rotor UAS Following UGS Test 4 Vehicle Position.
Test 5: Circle Path, L1t = 1s.
For this run, a 15m radius circle pattern was performed by the leader vehicle,
with the follower vehicle using a 1s forward offset lead time. As shown in Figure
43, after allowing the follower to stabilize its position for 50s, the system achieved a
mean radial position error of 1.96m with a standard deviation of 1.72m and a DRMS
of 2.13m. Additionally, as shown in Figure 44, the system achieved mean forward
and right errors of -0.92m and -0.88m with standard deviations of 1.72m and 1.49m
and DRMS of 1.60m and 1.41m. There is a noticeably longer stabilization time for
this test, so after a total of 100s of stabilization the system achieved a mean position
error of 1.01m with a standard deviation of 0.45m and a DRMS of 0.68m. Figure 45
shows the path taken by both the leader and the follower, and the desired path of
the follower based on the desired offset from the leader.
Figure 43. Multi-Rotor UAS Following UGS Test 5 Radial Position Error.
Figure 44. Multi-Rotor UAS Following UGS Test 5 Forward-Right Position Error.
Figure 45. Multi-Rotor UAS Following UGS Test 5 Vehicle Position.
Test 6: Circle Path, L1t = 2s.
For this run, a 15m radius circle pattern was performed by the leader vehicle,
with the follower vehicle using a 2s forward offset lead time. With 50s of stabilization
removed, there are two large plateaued spikes in error caused again by a malfunction
in the leader vehicle’s waypoint following operation, which caused the vehicle to
accelerate quickly after each circuit. The leader (UGS) telemetry stored during the
test showed a significant spike in velocity at times 76s and 138s, which went past the
cruise velocity of 1.5 m/s set on the autopilot and matches the beginning of each
plateau. Figure [46] shows the position error with these plateaus removed up to the
point of the vehicle recovering a stable position, representing the best estimate for the
mean and standard deviation for this configuration with no malfunction in the leader’s
operation. Before removing these errors the system achieved a mean position error
of 6.22m with a standard deviation of 5.2m. After removing these error plateaus, the
mean position error dropped to 1.47m with a standard deviation 1.01m and a DRMS
of 1.34m. Additionally, as shown in Figure [47], the system achieved mean forward and
right errors of -3.50m and 5.28m with standard deviations of 3.89m and 4.15m and
DRMS of 3.03m and 4.18m. This measure does appear to be reasonable based on the
close results of the two previous tests. However, this measure does have a lower level
of confidence due to the manipulation of the data described and the reduced number
of data points available. Figure [48] shows the path taken by both the leader and the
follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 46. Multi-Rotor UAS Following UGS Test 6 Radial Position Error.
Figure 47. Multi-Rotor UAS Following UGS Test 6 Forward-Right Position Error.
Figure 48. Multi-Rotor UAS Following UGS Test 6 Vehicle Position.
Team Analysis.
For this heterogeneous team, an L1t value of 2s with the leader performing a box path resulted in the lowest radial mean position error and standard deviation. For this path, as L1t increased, the mean position error, standard deviation, and DRMS decreased. The opposite was indicated for a circular path, for which an L1t value of 0s produced the best results in the forward and right directions, with only the radial position results varying. For the circular path, as L1t increases, the mean position error, standard deviation, and DRMS increase. The resulting measurements are summarized in Table 9 below.
Table 9. Multi-Rotor UAS Following UGS Test Results

       Forward Error (m)        Right Error (m)          Position Error (m)
Test   μ      σ      DRMS       μ      σ      DRMS       μ      σ      DRMS
1      -3.19  2.93   2.48       -1.16  1.89   1.27       4.24   2.27   2.27
2      -0.85  0.87   0.93       -0.17  0.92   0.56       1.59   0.85   1.08
3      -0.03  0.87   0.77       -0.23  0.46   0.46       0.92   0.43   0.90
4      -0.50  1.45   1.09       -0.42  1.47   1.09       1.94   0.97   1.54
5      -0.92  1.72   1.60       -0.88  1.49   1.41       1.96   1.72   2.13
6      -1.15  2.97   0.97       -1.86  4.06   0.92       1.47   1.01   1.34
Based on the results for the box pattern, this team performs straight line paths
with a higher level of accuracy and precision for higher values of L1t. One possible cause of this is that as L1t increases, the follower will cut inside corners more due to the commanded point being projected further forward from the follower's desired position. Also, with the point projected further, the follower has more time to maneuver to the point, as opposed to closer points which may require the follower to perform more
drastic maneuvers.
The team’s ability to perform curved paths is, however, inversely affected by higher
values of L1t based on the results for the circle pattern. As the value of L1t increases,
the path created by the commanded points becomes larger than the path of the lead
vehicle. This increased path radius causes the follower vehicle to fall behind due to
each vehicle having a common cruise velocity.
Unlike a UGS or fixed wing aircraft, the method the multi-rotor airframe uses to
maneuver allows the vehicle to move in any direction without performing turns. If
the vehicle flies past a waypoint or is commanded to fly to a point off of the current
ground course, it can simply pitch or roll in either direction to maneuver to the point.
Due to this higher level of maneuverability, the vehicle is able to achieve a higher level
of precision and accuracy when acting as the follower vehicle.
One possible cause of error can be seen at the corners of the box pattern for L1t values greater than zero. As the UGS makes the turns, the desired follower position swings outside the box, creating an elbow on each corner. The higher this value gets, the more the elbow protrudes off the path, which can most easily be seen in Figure 39. The added length to the flight path can force the vehicle to fall behind. However,
due to reasons stated above, the follower does cut some corners instead of following
this path, allowing the follower to catch up instead of fall behind on the longer elbow
path.
Finally, from the accuracy and precision measures for the forward and right errors,
the system has less ability to maintain forward position as opposed to right position while performing straight paths, as indicated by the box pattern. For the box tests, the standard deviation and DRMS of the right error are on average 0.61m and 0.63m lower than that of the forward error. Also, as L1t increases, the forward mean error, standard deviation, and DRMS decreased. This means that the error ellipse formed by the standard deviation is centered closer to the desired location for higher values of L1t. The opposite was seen for the loiter tests, which indicate that as this value increases, the center of the error ellipse migrates further away from the desired position.
Multi-Rotor UAS Following Multi-Rotor UAS.
The next set of tests performed included a team composed of two multi-rotor UAS fulfilling the roles of both leader and follower. This test was performed as described in the methodology section, with the test parameters listed in Table 10 varied for an
offset radius of 2m and an offset angle of 45°. The purpose of the 45° offset is to
avoid wind interference on the part of the lower altitude follower from the leader’s
downward thrust. This test was performed in winds at approximately 8 knots at 180°.
Table 10. Multi-Rotor UAS Following Multi-Rotor UAS Test Parameter Matrix

Test Number    Flight Path    L1t
1              Box            0 s
2              Box            1 s
3              Box            2 s
4              Circle         0 s
5              Circle         1 s
6              Circle         2 s
After testing was complete, an error was found in the method of calculating the offset position when using an angular offset and an L1t greater than zero. This error caused the commanded waypoint to not be placed forward of the follower's desired position in the direction of the leader's ground course. This error does not affect the data presented which use an offset of 45° and an L1t equal to zero. Due to this issue, the mean error is expected to be higher than in previous tests, but the standard deviation should still represent the team's ability to hold a consistent formation. The corrected follower script with this issue fixed is contained in Appendix F.
Test 1: Box Path, L1t = 0s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 0s forward offset lead time. As shown in Figure 49, after
allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 3.45m with a standard deviation of 1.11m and a DRMS of
2.97m. Additionally, as shown in Figure 50, the system achieved mean forward and
right errors of -1.76m and 0.60m with standard deviations of 1.58m and 1.62m and
DRMS of 1.94m and 1.41m. Figure [51] shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 49. Multi-Rotor UAS Following Multi-Rotor UAS Test 1 Radial Position Error.
Figure 50. Multi-Rotor UAS Following Multi-Rotor UAS Test 1 Forward-Right Position Error.
Figure 51. Multi-Rotor UAS Following Multi-Rotor UAS Test 1 Vehicle Position.
Test 2: Box Path, L1t = 1s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 1s forward offset lead time. During the test, the safety pilot
of the follower vehicle performed a manual override and moved the vehicle off course.
As shown in Figure 52, after allowing the follower to stabilize its position for 50s
after the manual override, the system achieved a mean radial position error of 3.08m
with a standard deviation of 0.63m and a DRMS of 1.77m. Additionally, as shown in
Figure 53, the system achieved mean forward and right errors of -1.80m and 0.70m
with standard deviations of 1.18m and 0.85m and DRMS of 1.21m and 0.62m. Figure
54 shows the path taken by both the leader and the follower, and the desired path of
the follower based on the desired offset from the leader.
Figure 52. Multi-Rotor UAS Following Multi-Rotor UAS Test 2 Radial Position Error.
Figure 53. Multi-Rotor UAS Following Multi-Rotor UAS Test 2 Forward-Right Position Error.
Figure 54. Multi-Rotor UAS Following Multi-Rotor UAS Test 2 Vehicle Position.
Test 3: Box Path, L_1t = 2s.
For this run, a 15m box pattern was performed by the leader vehicle, with the
follower vehicle using a 2s forward offset lead time. As shown in Figure 55, after
allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 2.85m with a standard deviation of 1.32m and a DRMS of
2.53m. Additionally, as shown in Figure 56, the system achieved mean forward and
right errors of -1.00m and 0.49m with standard deviations of 1.33m and 1.29m and
DRMS of 1.34m and 1.11m. Figure 57 shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 55. Multi-Rotor UAS Following Multi-Rotor UAS Test 3 Radial Position Error.
Figure 56. Multi-Rotor UAS Following Multi-Rotor UAS Test 3 Forward-Right Position Error.
Figure 57. Multi-Rotor UAS Following Multi-Rotor UAS Test 3 Vehicle Position.
Test 4: Circle Path, L_1t = 0s.
For this run, a 10m radius circle pattern was performed by the leader vehicle,
with the follower vehicle using a 0s forward offset lead time. As shown in Figure 58,
after allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 5.08m with a standard deviation of 2.16m and a DRMS of
4.17m. Additionally, as shown in Figure 59, the system achieved mean forward and
right errors of -2.92m and 0.81m with standard deviations of 1.92m and 2.25m and
DRMS of 2.64m and 1.80m. Figure 60 shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 58. Multi-Rotor UAS Following Multi-Rotor UAS Test 4 Radial Position Error.
Figure 59. Multi-Rotor UAS Following Multi-Rotor UAS Test 4 Forward-Right Position Error.
Figure 60. Multi-Rotor UAS Following Multi-Rotor UAS Test 4 Vehicle Position.
Test 5: Circle Path, L_1t = 1s.
For this run, a 10m radius circle pattern was performed by the leader vehicle,
with the follower vehicle using a 1s forward offset lead time. As shown in Figure 61,
after allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 3.94m with a standard deviation of 1.68m and a DRMS of
3.51m. Additionally, as shown in Figure 62, the system achieved mean forward and
right errors of -2.54m and 0.01m with standard deviations of 1.56m and 1.18m and
DRMS of 2.44m and 0.96m. Figure 63 shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 61. Multi-Rotor UAS Following Multi-Rotor UAS Test 5 Radial Position Error.
Figure 62. Multi-Rotor UAS Following Multi-Rotor UAS Test 5 Forward-Right Position Error.
Figure 63. Multi-Rotor UAS Following Multi-Rotor UAS Test 5 Vehicle Position.
Test 6: Circle Path, L_1t = 2s.
For this run, a 10m radius circle pattern was performed by the leader vehicle,
with the follower vehicle using a 2s forward offset lead time. As shown in Figure 64,
after allowing the follower to stabilize its position for 50s, the system achieved a mean
radial position error of 4.47m with a standard deviation of 2.29m and a DRMS of
3.67m. Additionally, as shown in Figure 65, the system achieved mean forward and
right errors of -2.44m and 0.67m with standard deviations of 1.78m and 1.97m and
DRMS of 2.21m and 1.52m. Figure 66 shows the path taken by both the leader and
the follower, and the desired path of the follower based on the desired offset from the
leader.
Figure 64. Multi-Rotor UAS Following Multi-Rotor UAS Test 6 Radial Position Error.
Figure 65. Multi-Rotor UAS Following Multi-Rotor UAS Test 6 Forward-Right Position Error.
Figure 66. Multi-Rotor UAS Following Multi-Rotor UAS Test 6 Vehicle Position.
Team Analysis.
For this homogeneous team, an L_1t value of 1s with the leader performing a box
path resulted in the lowest standard deviations and DRMS of the error, with the
lowest mean error being seen for an L_1t value of 2s. Similarly for a circular path, an
L_1t value of 1s produced the best results for the radial position error. These results
indicate this team performs straight and curved paths with a higher level of precision
and accuracy with this value of L_1t. The resulting measurements are summarized in
Table 11.
Table 11. Multi-Rotor UAS Following Multi-Rotor UAS Test Results

        Forward Error (m)        Right Error (m)          Position Error (m)
Test    μ       σ      DRMS      μ       σ      DRMS      μ       σ      DRMS
1       -1.76   1.58   1.94      0.60    1.62   1.41      3.45    1.11   2.97
2       -1.80   1.18   1.21      0.70    0.85   0.62      3.08    0.63   1.77
3       -1.00   1.33   1.34      0.49    1.29   1.11      2.85    1.32   2.53
4       -2.92   1.92   2.64      -0.81   2.25   1.80      5.08    2.16   4.17
5       -2.54   1.56   2.44      -0.01   1.18   0.96      3.94    1.68   3.51
6       -2.44   1.78   2.21      -0.67   1.97   1.52      4.47    2.29   3.67
As shown in the position plots for each test point, the follower’s ground track
noticeably varied from the desired follower position ground track. One known cause
for this is the error in the method of calculating the commanded follower position
described previously. However, another possible cause of this variation is the combination
of identical vehicle cruise velocities and the difference in length of the desired
follower ground track and leader ground track. Due to the follower’s desired ground
track on the outer circuit being longer than the leader’s inner circuit, the follower
would have to fly at a higher ground speed than the leader around turns to maintain
the desired separation. However, this is partially overcome by the follower cutting
inside corners as the commanded point is placed further to the left after the leader
turns. This allows the follower to gradually regain a similar ground track, as shown
in Figure 51 at the North East corner at point 136. Because the first cause of error
stated does not affect tests with values of L_1t equal to zero, the second cause is likely
producing the variations for both tests 1 and 4. For all other tests, the variation is
likely a combination of both causes.
Based on the previous team’s results, consisting of a multi-rotor UAS following
a UGS leader, and based on how well the vehicles visually maintained separation
during the flight test, it is expected that a team consisting of two multi-rotor UAS
would perform as well or better. This is likely true due to their shared flight
characteristics, which allow the airframes to maneuver in any direction.
Finally, from the accuracy and precision measures for the forward and right errors,
the system has less ability to maintain forward position as opposed to right position
while performing straight paths, as indicated by the box pattern. For the box tests,
the standard deviation and DRMS of the right error are on average 0.11m and 0.45m
closer to zero than those of the forward error. However, as L_1t increases, the
forward mean error decreased. This means that the error ellipse formed by both
standard deviations is centered closer to the desired location for higher values of L_1t.
No correlation of this kind is seen for the loiter tests.
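For reference, the forward and right errors used throughout this analysis can be obtained by projecting the north-east position error onto the leader's ground course. The sketch below (illustrative names, not the thesis' analysis code) shows one way to perform that decomposition.

import numpy as np

def forward_right_error(follower_ne, desired_ne, leader_gc_rad):
    # Position error in local North-East coordinates (m).
    err_n, err_e = np.asarray(follower_ne) - np.asarray(desired_ne)
    c, s = np.cos(leader_gc_rad), np.sin(leader_gc_rad)
    forward = c * err_n + s * err_e    # component along the leader's ground course
    right = -s * err_n + c * err_e     # component 90 deg clockwise from it
    return forward, right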
Fixed Wing UAS Following Fixed Wing UAS.
The next set of tests performed included a team composed of two fixed wing UAS
fulfilling both the leader and follower roles. This test was performed as described
in the methodology section with the exception that the leader did not perform the loiter
pattern due to time constraints. The test parameters listed in Table 12 were varied for an
offset radius of 10m and an offset angle of 0°. This test was performed in winds of
approximately 11 knots at 180°.
Table 12. Fixed Wing UAS Following Fixed Wing UAS Test Parameter Matrix

Test Number    Flight Path    L_1t
1              Box            0 s
2              Box            1 s
3              Box            2 s
Due to the highly oscillatory behavior of the position errors calculated, a 50 second
stabilization time is not used in determining the mean, standard deviation, and DRMS
of the data set.
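As a point of reference, the sketch below (illustrative names, not the thesis' analysis scripts) shows how a stabilization window can be applied to an error time history before computing the mean and standard deviation reported in this chapter; for the fixed wing tests described here the window is simply set to zero. The DRMS measure follows the definition given in the methodology chapter and is not reproduced in this sketch.

import numpy as np

def windowed_error_stats(t, radial_error, settle_time=50.0):
    # Mean and standard deviation of a radial position error time history,
    # ignoring the first settle_time seconds while the follower stabilizes.
    t = np.asarray(t, dtype=float)
    e = np.asarray(radial_error, dtype=float)
    e = e[(t - t[0]) >= settle_time]   # drop the stabilization period
    return e.mean(), e.std()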
Test 1: Box Path, L_1t = 0s.
For this run, a box pattern was performed by the leader vehicle, with the follower
vehicle using a 0s forward offset lead time. As shown in Figure 67, for the full duration
of the test, the vehicle accomplished a mean radial position error of 118.82m with a
standard deviation of 64.96m and a DRMS of 135.38m. Additionally, as shown in
Figure 68, the system achieved mean forward and right errors of -60.57m and 74.31m
with standard deviations of 65.87m and 69.45m and DRMS of 89.42m and 101.65m.
Figure 69 shows the path taken by both the leader and the follower, and the desired
path of the follower based on the desired offset from the leader.
Figure 67. Fixed Wing UAS Following Fixed Wing UAS Test 1 Radial Position Error.
Figure 68. Fixed Wing UAS Following Fixed Wing UAS Test 1 Forward-Right Position Error.
Figure 69. Fixed Wing UAS Following Fixed Wing UAS Test 1 Vehicle Position.
Test 2: Box Path, L_1t = 1s.
For this run, a box pattern was performed by the leader vehicle, with the follower
vehicle using a 1s forward offset lead time. As shown in Figure 70, for the full duration
of the test, the vehicle accomplished a mean radial position error of 104.01m with a
standard deviation of 57.79m and a DRMS of 117.68m. Additionally, as shown in
Figure 71, the system achieved mean forward and right errors of -48.41m and 56.77m
with standard deviations of 72.39m and 58.06m and DRMS of 86.06m and 80.27m.
Figure 72 shows the path taken by both the leader and the follower, and the desired
path of the follower based on the desired offset from the leader.
Figure 70. Fixed Wing UAS Following Fixed Wing UAS Test 2 Radial Position Error.
Figure 71. Fixed Wing UAS Following Fixed Wing UAS Test 2 Forward-Right Position Error.
Figure 72. Fixed Wing UAS Following Fixed Wing UAS Test 2 Vehicle Position.
Test 3: Box Path, L_1t = 2s.
For this run, a box pattern was performed by the leader vehicle, with the follower
vehicle using a 2s forward offset lead time. As shown in Figure 73, for the full duration
of the test, the vehicle accomplished a mean radial position error of 121.78m with a
standard deviation of 62.11m and a DRMS of 139.11m. Additionally, as shown in
Figure 74, the system achieved mean forward and right errors of -60.26m and 74.90m
with standard deviations of 59.07m and 76.17m and DRMS of 83.93m and 106.25m.
Figure 75 shows the path taken by both the leader and the follower, and the desired
path of the follower based on the desired offset from the leader.
Figure 73. Fixed Wing UAS Following Fixed Wing UAS Test 3 Radial Position Error.
Figure 74. Fixed Wing UAS Following Fixed Wing UAS Test 3 Forward-Right Position Error.
Figure 75. Fixed Wing UAS Following Fixed Wing UAS Test 3 Vehicle Position.
Team Analysis.
For this homogeneous team, an L_1t value of 1s resulted in the lowest mean position
error, standard deviation, and DRMS. Given how close the other measures are and the
seemingly erratic flight path performed by the follower vehicle, this result is not
significant. The resulting measurements are summarized in Table 13 below.
Table 13. Fixed Wing UAS Following Fixed Wing UAS Test Results

        Forward Error (m)          Right Error (m)             Position Error (m)
Test    μ        σ       DRMS      μ        σ       DRMS       μ        σ       DRMS
1       -60.57   65.87   89.42     -74.31   69.45   101.65     118.82   64.96   135.38
2       -48.41   72.39   86.06     -56.77   58.06   80.27      104.01   57.79   117.68
3       -60.26   59.07   83.93     -74.90   76.17   106.25     121.78   62.11   139.11
As shown in the three position plots, the follower did not maintain a ground track
that closely resembled the desired follower position’s ground track. However, the
vehicles did demonstrate a leader follower relationship. This relationship is indicated
by the follower turning primarily in the same direction as the leader and operating
in the same vicinity as the leader. Additionally, the ground course of the vehicle at
most points is either pointing in the area of the desired position, as shown at point
208 in Figure 75, or is on a path which will point in the area of the desired position,
as shown at point 115 in Figure 69.
Like the UGS, the fixed wing UAS is at a disadvantage compared to the multi-rotor
because of how it is designed to maneuver. If it is commanded to fly to a point
behind it or off the current ground course, it is required to perform a turn. However,
compared to the UGS, this effect is amplified by the higher speed, larger turn radius, and
the slower rate at which commands are received at the extended distance from the GCS.
One indication of a slow rate of commands received, and one common path error
shared by all three tests, is the lobe that occurs at the North West corner of the leader’s
path. A section of the flight path from Figure 75 containing this lobe is displayed
in Figure 76 with the positions of both vehicles and the commanded position sent to
the follower vehicle shown. Starting at time step 1, the follower vehicle looks to be
turning towards the desired position up to time step 10, at which time it maneuvers
away from the desired flight path. The follower does not start to correct its heading
towards the desired position until after time step 16. This 6s delay is likely caused
by a weak C2 link connection, which did not allow the follower vehicle to receive
an updated command between these points. A weak C2 link is also most likely to
occur at this point, as it is one of the furthest points from the GCS communication
node, resulting in fewer packets received from the GCS. The likely cause of the vehicle
performing a right turn, as opposed to the desired left turn, is that the autopilot was
commanding the vehicle to either maneuver back to a previous waypoint or to perform
a loiter around a waypoint.
Figure 76. Fixed Wing UAS Following Fixed Wing UAS Test 3 Vehicle Position NW Corner.
Another possible cause of error is the method of introducing the vehicle into the
flight path. For this test, the vehicle was not flown into a specific position or in a
specific direction before starting the flocking script. This did not prepare the vehicle to
enter the circuit, and perhaps forced it to take more drastic maneuvers to fly to the
correct position. An example of this can be seen in Figure 72, where the script was
initialized at point 1 with the follower in the North West corner of the path heading
West and the leader in the North East corner heading West. This caused the follower
to perform a tight left hand turn and enter the circuit going the wrong direction. An
example of the effects from a proper entrance can be seen in Figure 75 after point 185.
The vehicle flew into the circuit tangent to the ground track of the leader, allowing
the vehicle to maintain a higher level of precision for approximately 20 seconds as
shown in the error plot.
Formation Flight Analysis.
The system’s ability to control a team of two heterogeneous or homogeneous vehicles
to perform formation flocking was verified. The only exception to this is the
fixed wing aircraft team, which did not fully demonstrate formation flight, but did
demonstrate a leader follower relationship. The resulting mean position errors with
± one standard deviation and ± one DRMS from the mean for each test at each value
of L_1t are outlined in Figure 77 below.
Figure 77. Formation Flocking Test Results Summary. (Panels: Box Test 1: UGS Following Multi-Rotor UAS; Box Test 2: Multi-Rotor UAS Following UGS; Circle Test 2: Multi-Rotor UAS Following UGS; Box Test 3: Multi-Rotor UAS Following Multi-Rotor UAS; Circle Test 3: Multi-Rotor UAS Following Multi-Rotor UAS; Box Test 4: Fixed Wing UAS Following Fixed Wing UAS. Each panel plots the mean error with ±1 standard deviation and ±1 DRMS against L_1t.)
The effect of different L_1t on the position error and standard deviation varied for
each vehicle combination and for each lead vehicle path performed. As expected,
the values of DRMS and standard deviation have a close correlation, meaning both
measures will increase or decrease together. Overall, the best performance was seen
with the multi-rotor UAS following the UGS performing a box pattern with an L_1t value
of 2s. The results of both Box Test 1 and Box Test 2 indicate a higher correlation between
the value of L_1t and the position error due to their tendency to neck down at specific
values. The other tests, however, indicate the value of L_1t has less of an effect on
position error for those scenarios.
For Box Test 1, a minimum position error is indicated to exist around an L_1t value
of 1s because the mean, standard deviation, and DRMS values are at their lowest at
this point. For Box Test 2, as L_1t increases, the mean, standard deviation, and DRMS
decreased by 3.32m, 1.85m, and 1.37m respectively. Unlike its box counterpart,
Circle Test 2 indicates a maximum in position error and standard deviation for an L_1t
value of 1s, with the standard deviation and DRMS decreasing as L_1t approaches
zero and the mean, standard deviation, and DRMS decreasing as L_1t increases above
1s. For the last three tests, there is a less prominent correlation between standard
deviation and L_1t. For Box Test 3, as the value of L_1t increased, the mean position error
decreased by 0.60m. The standard deviation and DRMS values reached a minimum
for this test at an L_1t value of 1s, with increased values at the other values of L_1t.
Circle Test 3 indicates a minimum position error, standard deviation, and DRMS
for an L_1t value of 1s, with increasing mean, standard deviation, and DRMS for other
values of L_1t. Finally, the fixed wing tests indicate almost no correlation between L_1t and
the mean and standard deviation.
One benefit seen with the UGS and multi-rotor UAS in the role of follower is this
control method’s tendency to command the follower to cut inside corners during the
box pattern. This action of cutting corners is caused by the commanded point being
placed down the next leg of the box ahead of the leader after it performs a turn.
This aids in the minimization of error, allowing the follower to take a shorter path
through the corner and catch up. However, with this decrease in forward error comes
an increase in right error due to the follower cutting the corner and going off track.
One common theme across all vehicle teams is the effect of the follower vehicle’s
method of maneuvering on the positional accuracy and precision of the system. As
indicated by the multi-rotor following a UGS performing both box and circle patterns,
the multi-rotor has the best ability to maintain a desired position relative to the
leader. This is due to the airframe’s ability to move in any direction, forwards or
backwards, and its method of loitering, which is to maintain a stationary position.
Both the UGS and fixed wing UAS are required to make turns, which can force them
off the desired path and induce more error. Finally, the fixed wing UAS is at the
biggest disadvantage due to the reasons stated above, the vehicle’s method of loitering, which
requires it to fly in a circle around a point, and its higher operating speed, which
reduces the time the aircraft has to react to a new commanded position.
The biggest issue seen during these tests is the fixed wing UAS’s inability to
maintain an offset. As stated previously, one likely cause is a weak C2 link which
decreases the percent of telemetry and command packets received, which in turn
increases the latency of the system. For this vehicle combination to operate near the
same level as the other teams, the C2 link must be strengthened or the processing
must be moved on board the aircraft to reduce the number of links made.
The final problem seen across all teams is the follower’s lesser ability to maintain a
steady forward error. The forward and right errors for most tests indicated a greater
ability to maintain a more precise and accurate right error than forward error. This
inaccuracy and imprecision is likely due to the lack of command authority over the
ground course velocity of the vehicle. This functionality was not included in the
C2 architecture because the GCS and control module used do not allow for direct
control of the ground course velocity. However, as indicated by the results previously
discussed, the method of control used does allow for increased precision and accuracy
of forward position for varying values of L_1t depending on the team and path taken.
5.5 Communication Relay Test Results and Analysis
In this section, the results of the communication relay tests outlined in the methodology
section are discussed and analyzed. Both the absolute position of each vehicle at
a local level in meters and the relative position error in meters will be used to display
the collected data. The absolute position will be used to identify behavioral traits of
the system while performing communication relay, while the relative position error is
used for quantifying the system’s abilities. The relative position error is determined
and displayed using the radial distance between the relay vehicle’s position and the
relay vehicle’s desired position relative to the remote vehicle and GCS.
Test Results and Analysis.
The communication relay test was performed as specified in the methodology
section with one additional test to demonstrate the system’s ability to relay a C2 link
around obstructions. The first test, which verified the communication nodes would
reestablish a lost link to a remote vehicle, was performed with a team of UGS. All
of the communication nodes, including each node on each vehicle and the GCS node,
were set to their lowest power setting of 40mW. The remote UGS was driven to a
distance of 114m until the C2 link was lost between it and the GCS. The relay vehicle
was then manually driven to a halfway point. After 5 seconds, the remote vehicle’s
C2 link was reestablished with 80% of packets received.
Another test was performed with this team to demonstrate the system’s ability
to relay the C2 link around an obstruction. For this test, the remote vehicle was
driven next to a cement footer of a lamp post in line of sight with the relay vehicle
communication node turned off. At this point the GCS was receiving 90% of telemetry
packets. The remote vehicle was then driven behind the cement footer, out of
line of sight, reducing the percentage of packets received to between 55% and 65%, with
intermittent loss of the C2 link. The relay vehicle was then turned on and driven to
a point where it would have visual line of sight of both the remote vehicle and GCS.
The GCS then started receiving between 80% and 90% of the telemetry packets.
The final test was performed with a team consisting of a UGS in the role of
the remote vehicle and a multi-rotor UAS in the role of a relay vehicle. Again, all
communication nodes were turned down to their lowest setting, and the remote vehicle
was driven away from the GCS until the link on the GCS was lost. The relay python
script was then started on the GCS, and the relay vehicle was commanded to move
to the midpoint between the GCS and the last known position of the remote vehicle.
Once the relay vehicle reached its destination, the link was reestablished with 85%
of the telemetry packets being received. The remote vehicle operator then manually
drove the vehicle around the area of the GCS. The positions of the remote and relay
vehicles are displayed in Figure 79 and the associated error in position between the
relay vehicle and its desired position is displayed in Figure 78. During this test, the
relay vehicle achieved a mean error of 9.76m with a standard deviation of 6.17m and a
DRMS of 11.55m after the vehicle came to a stabilized midpoint position. One point
to note in Figure 78 is the spikes in error seen between data points 300 and 700.
These spikes were caused by a lost link between the relay vehicle and GCS. These
lost links were caused by the remote vehicle driving far enough away to pull the relay
vehicle out of communication range of the GCS.
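The midpoint command described above corresponds to the relay_pos calculation of Appendix C. The short sketch below (the coordinate values are assumed, not taken from the flight test) illustrates how the GCS-side relay script can use that function; the resulting latitude, longitude, and altitude would then be sent to the relay vehicle as a guided point in the same manner as the follower script of Appendix B.

# Illustrative use of the relay_pos function from Appendix C; the positions
# below are assumed values, not flight test data.
from multi_vehicle_toolbox import relay_pos

pos_gcs_llh = [39.780, -84.100, 250.0]   # GCS position [lat(deg), lon(deg), alt(m)]
pos_rem_llh = [39.782, -84.097, 250.0]   # last known remote vehicle position

# Midpoint between the GCS and the remote vehicle's last known position,
# computed in ECEF and returned as lat/lon/alt.
pos_rel_llh = relay_pos(pos_gcs_llh, pos_rem_llh)
print 'commanding relay vehicle to:', pos_rel_llh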
Figure 78. Multi-Rotor UAS Relaying to UGS Radial Position Error.
Figure 79. Multi-Rotor UAS Relaying to UGS Vehicle Position.
5.6 Latency Test Results and Analysis
Results.
Each component of the latency was measured as described in the methodology
section using scripts on the GCS. To measure the telemetry down to GCS time, the
GCS script collected the pitch and roll orientations from the telemetry at 20Hz to
ensure the sampling rate was higher than the rate new telemetry is made available,
which is indicated by multiple measurements of the same value for multiple adjacent
time steps. The difference in time between the first instance of a measurement and
an instance of a new measurement is determined as the telemetry down to GCS
time. The GCS processing time was measured by determining the start time of the
leader flocking script, passing that time through the UDP socket, and calculating
the total run time at the end of the follower flocking script. The command up and
telemetry down time is measured by determining the time from sending an RC channel
PWM command from the GCS to the time the change is seen in the RC channel in
the collected telemetry. These tests provided the resulting values outlined in Table
14 below. The GCS processing time is approximated due to the high speed at which
the GCS processing scripts run.
Table 14. Latency Test Results

Time Measurement           μ           σ
Telemetry Down to GCS      0.240s      0.005s
GCS Processing             < 0.003s    —
Command Up to Vehicle      0.220s      0.006s
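To illustrate the telemetry-latency measurement described above, the sketch below (illustrative names, not the thesis' measurement script) polls an attitude value from the DroneAPI vehicle object faster than new telemetry arrives and records how long each value persists before it changes.

import time

def measure_telemetry_down_time(vehicle, samples=50, poll_hz=20.0):
    # Approximate the telemetry-down latency by timing how long each pitch
    # value persists before a new value is observed in the telemetry.
    periods = []
    last_value = vehicle.attitude.pitch
    last_change = time.time()
    while len(periods) < samples:
        value = vehicle.attitude.pitch
        if value != last_value:        # a new telemetry sample has arrived
            now = time.time()
            periods.append(now - last_change)
            last_value, last_change = value, now
        time.sleep(1.0 / poll_hz)      # sample faster than the telemetry update rate
    return sum(periods) / len(periods)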
Analysis.
The average total time from when one vehicle passes telemetry down to the GCS
to when the follower vehicle reacts to this telemetry is approximately 0.46s based on
the times collected. Based on the standard deviations calculated, this total latency can vary
between 0.493s and 0.427s for ± 3 standard deviations. With a 2.0 Hz update rate
in the worst case, this system performs 400% faster than previous systems utilizing
entirely COTS components and OSS. This is compared to the work of Hardy
[11], wherein Mission Planner’s built-in swarming function was only able to achieve a
maximum of 0.4Hz. This can be attributed to the use of MAVProxy as the primary
GCS software, which as stated before does not require a more computationally taxing
GUI. However, this measurement is for a best case scenario, with all vehicles in close
range of the GCS and GCS transceiver. This does not include the impact of lost
telemetry packets over the communication network. As discussed in the formation
flocking tests for a team of fixed wing UAS, as the fixed wing UAS flew further away
from the GCS, the communication link degraded greatly. This caused slow updates of
new waypoints and caused the vehicle to fly erratically compared to the lead vehicle.
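As a quick check on the figures above, the measured components can be summed to estimate the total latency and the implied command update rate; the back-of-the-envelope arithmetic below uses the Table 14 values and, as in the text, adds three of each measured standard deviation to bound the worst case, which rounds to roughly the 2 Hz rate quoted in the analysis.

# Back-of-the-envelope check using the Table 14 values (close range, GCS processing bounded).
telemetry_down, sigma_down = 0.240, 0.005   # s
command_up, sigma_up = 0.220, 0.006         # s
gcs_processing = 0.003                      # s, upper bound
nominal = telemetry_down + gcs_processing + command_up    # about 0.46 s
worst = nominal + 3 * (sigma_down + sigma_up)             # about 0.49 s
print 'nominal: %.2f s (%.1f Hz), +3 sigma: %.2f s (%.1f Hz)' % (
    nominal, 1.0 / nominal, worst, 1.0 / worst)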
5.7 Chapter Summary
In this chapter, the hardware and software selected to accomplish the functions
of the developed architecture were discussed. The development of the C2 python
scripts to control the vehicle was outlined. Finally, the results from the formation
flight, communication relay, and latency tests were covered and the resulting data
was analyzed.
VI. Conclusion
6.1 Chapter Overview
This section reviews the work accomplished in this research and the conclusions
drawn from this research. Investigative questions outlined in the first chapter will be
revisited to outline the conclusions drawn from each. Recommendations of actions
that should be taken for future work are also outlined.
6.2 Conclusion of Research
In the first chapter, investigative questions were established to guide this research.
These questions are restated below and the conclusions drawn for each are outlined.
What are the desired missions to be accomplished by cooperative multi-agent
systems?
From the literature review, three groups of missions emerged. The first group is
formation flocking, which is the act of controlling two or more vehicles to perform
a mission in formation. The second group of missions completes the communication
relay scenario, which is the act of passing information between a remote vehicle or
sensor to a central GCS through an intermediate relay vehicle. The final group of
missions is comprised of search and surveillance missions, which include wide area or
perimeter searching or surveillance. For this research, the final group was not
investigated as it was the only mission requiring a video system and it was desired the
system be simplified for the time scope of the project. However, the architecture and
system developed could possibly be applied to perform this mission with the addition
of the required video system.
What are the structure and limitations of existing C2 architectures for
cooperative unmanned vehicles?
Existing architectures range from being comprised of a mixture of proprietary
and COTS components and software to being comprised entirely of COTS components
and OSS. All of these architectures have similar structures, with the primary
variation being the location where the processing is being accomplished. Systems
comprised of primarily proprietary components, as shown by Napolitano et al. [3],
have the capability to perform decision making onboard the vehicle, which decreases
the response time dramatically and allows for greater precision, with position errors as
low as 3.43m with a standard deviation less than 2m for fixed wing aircraft. As the
processing migrates from onboard the vehicle to the GCS, as shown by How et al. [9],
the response rate diminishes along with the precision, with distance errors contained
in a 25m box for fixed wing aircraft. Both of these systems utilize a proprietary GCS,
which allows for greater flexibility in the processing of information and sending of
commands. The final architecture examined is structured the same way, but utilizes
entirely COTS components and OSS. With this architecture, shown by Hardy [11],
the update rate further degraded to a maximum of 0.4 Hz for close range vehicles
including UGS and multi-rotor UAS. These results show a degradation in system
performance as more COTS components and OSS are integrated into the system and
as the mission processing unit migrates from onboard to the GCS.
What are the mission-specific qualitative and quantitative measures for
the system?
For this effort, it was decided the desired missions to be accomplished are the
formation flocking and communication relay scenarios based on previously developed
architectures. For these scenarios, it was determined the desired quantitative measures
include both the relative accuracy and precision of the vehicle’s position error.
The DRMS from the desired position was used to measure the relative accuracy of the
system and the standard deviation was used to measure the precision. For formation
flocking, the behavior of the system was used as a qualitative measure to determine
if the system demonstrated a leader follower relationship. For communication relay,
the percentage of telemetry packets received was used to quantitatively measure when the
C2 link was lost, and the quality of the link once it was reestablished through the
relay vehicle. Also, the ability to relay a communication link was used to qualitatively
measure the system’s ability to perform communication relay. Finally, the latency
was measured by timing each component of the system.
How well does this system perform using these performance measures?
The results addressing this question are contained in the results and analysis section.
The best results for formation flocking were seen with a team consisting of a multi-rotor
UAS following a UGS, which achieved a mean position error of 0.99m with a
standard deviation of 0.44m and DRMS of 0.59m. Other results for combinations of
these two vehicles ranged up to a mean position error of 5.08m, a standard deviation
of 2.16m, and a DRMS of 4.17m for separate tests. The worst case, not contained in
the summary above, was the team consisting of two fixed wing UAS, which
resulted in errors two orders of magnitude higher than the other tests. This test did not
demonstrate the ability to perform formation flight, but did demonstrate a leader
follower relationship.
For the communication relay scenario, two tests were accomplished to demonstrate the
system’s ability to perform this scenario. Both tests at similar ranges with different
combinations of multi-rotor UAS and UGS resulted in a similar percentage of packets received,
ranging between 80% and 90% after the link was reestablished. This result was
also seen when relaying the C2 link around a physical obstacle. Finally, the system
achieved a mean error of 9.76m with a standard deviation of 6.17m and a DRMS of
11.55m while maintaining the relay vehicle’s position at the midpoint.
Finally, the measurements from the latency test showed that the process of acquiring
telemetry and sending commands makes up the majority of the latency, with
the GCS processing only taking up a small portion. From these tests, it was found
that the downlink time takes approximately 0.24s with a standard deviation of 0.005s,
the GCS processing time takes less than 0.003s, and the command uplink time takes
0.22s with a standard deviation of 0.006s.
What are the effects on system performance due to the utilization of
COTS, OSH, and OSS?
As indicated by the limitations of existing architectures, systems primarily or completely
composed of COTS components have a tendency to have lower overall performance
than completely proprietary systems. These COTS components are likely
applied to roles that require the intended capabilities of the component be modified
to meet the desired functionality. One example of this can be shown with the autopilot.
On proprietary systems, the trend is to have the algorithms required to perform
the mission be performed onboard the autopilot. This onboard processing, combined
with a proper communication system that allows vehicle-to-vehicle communication,
could allow for a higher control loop frequency. Nonproprietary systems require this
processing to be completed on the GCS, decreasing the control loop frequency. A decreased
control loop frequency results in a longer period of time between commands
being sent to and performed by the autopilot’s outer loop, which increases the time
the vehicle can veer off course and induce position error.
Another trend for completely proprietary systems is to utilize custom GCS software
that allows for full access to necessary measurements from the autopilot. The
GCS software used and other COTS GCS software packages have a limited number
of measurements available for manipulation in scripting modules. This again requires
the extension of the intended capabilities of the software to accomplish the desired
task.
6.3 Recommended Future Work
This research opened many doors for others to investigate new topics pertaining
to this system and to apply new topics through the use of this system or similar
systems. These recommended topics are outlined below.
This system performed relatively well for close range vehicles such as UGS and
multi-rotor UAS. Due to this, it is suggested this system be utilized as a platform for
future research on C2 algorithms for these close operating range vehicles. This system
is not perfect and can still be improved upon. Examples of possible improvements
include further developing the system to utilize a closed loop controller, investigating
other methods of determining the commanded position sent to the follower, and
investigating the addition of more vehicles.
The system as it stands could be modified to accomplish additional cooperative
vehicle tasks which require less positional accuracy. Examples of these tasks include
the wide area search problem, persistent surveillance, and perimeter surveillance.
These tasks could be accomplished by modifying the position calculation functions
contained in the multi_vehicle_toolbox module contained in Appendix C. Also, with
the addition of IP cameras, the communication system could additionally transmit
video down to the GCS to perform vision-based missions.
As shown in the formation flight test for a team of fixed wing UAS, this system
does not provide the capability for these airframes to perform formation flight. One
major cause of this is the airframe’s relatively higher speed and the longer operating
range from the GCS required. These factors increase the latency and increase the
error induced between points. Due to this, it is suggested further investigation be
conducted into moving the processing onboard the vehicle. Utilizing the vehicle to
vehicle communication capability that is already present could reduce the communication
link distance and total system latency. Also, by moving the processing on
board, an outer loop controller could be developed to achieve a higher level of control
authority. This could be achieved by injecting radio control commands from an on
board micro controller into the autopilot input port while the vehicle is in a stabilized
mode. This would utilize the inner loop stabilization while providing a higher rate of
control over the motion of the aircraft.
Measuring the system latency was one of the more difficult tasks during this research.
This is due to the inability to measure time differences between when physical
and computational events occur. It is recommended the methods of measuring system
latency be further investigated. System latency plays a large role in the positional
accuracy of this system and, if better understood, could be used to better predict
factors related to the system and therefore better control the system. Additionally,
the effects of relatively long range communication on system latency have not been
investigated for this or similar systems. This communication latency played a large
role in the fixed wing tests performed, and if better understood, could be beneficial
in future work related to fixed wing cooperative control.
Finally, it is recommended some of the tests performed be retested. The first set
of tests to be accomplished again is the formation flocking tests for a team of two
multi-rotor UAS, due to the error in the commanded position described in the results.
Additionally, the formation flocking test for a team of two fixed wing UAS should also
be retested due to the use of an inadequate antenna, which did not provide a sufficient
C2 link with the aircraft.
Bibliography

1. “Unmanned Systems Integrated Roadmap FY2013-2038,” 2013.

2. Nicholas Lazaredes, “Ukraine’s DIY drone war: Self-taught soldiers facing up to Russian-backed war machine,” 2015.

3. Marcello R Napolitano, Yu Gu, Technical Officer, Curtis E Hanson, and Ms Theresa Stanley, “Cooperative Gust Sensing and Suppression for Aircraft Formation Flight Final Report Cooperative Gust Sensing and Suppression for Aircraft Formation Flight Motivation,” Tech. Rep., NASA, 2012.

4. Yu Gu, Giampiero Campa, Brad Seanor, Srikanth Gururajan, and Marcello R Napolitano, “Autonomous Formation Flight Design and Experiments,” in Aerial Vehicles, Thanh Mung Lam, Ed., chapter 12, pp. 236-258. InTech, 2009.

5. Matthew T. Seibert, Andrew J. Stryker, Jill T. Ward, and Chris T. Wellbaum, “System Analysis and Prototyping for Single Operator Management of Multiple Unmanned Aerial Vehicles Operating Beyond Line of Sight,” M.S. thesis, Air Force Institute of Technology, 2010.

6. Edison Pignaton De Freitas, Tales Heimfarth, Ivayr Farah Netto, Carlos Eduardo Lino, Carlos Eduardo Pereira, Armando Morado Ferreira, Flavio Rech Wagner, and Tony Larsson, “UAV relay network to support WSN connectivity,” in 2010 International Congress on Ultra Modern Telecommunications and Control Systems and Workshops, ICUMT 2010, 2010.

7. Theodore T Diamond, Adam L Rutherford, and Jonathan B Taylor, “Cooperative Unmanned Aerial Surveillance Control System Architecture,” M.S. thesis, Air Force Institute of Technology, 2009.

8. David Smalley, “LOCUST: Autonomous, swarming UAVs fly into the future,” Online, April 2015.

9. Jonathan How, Ellis King, and Yoshiaki Kuwata, “Flight Demonstrations of Cooperative Control for UAV Teams,” in AIAA 3rd “Unmanned Unlimited” Technical Conference, Workshop and Exhibit, 2004.

10. Derek Kingston, Randal W. Beard, and Ryan S. Holt, “Decentralized perimeter surveillance using a team of UAVs,” 2008.

11. Stefan L Hardy, “Implementing Cooperative Behavior & Control Using Open Source Technology Across Heterogeneous Vehicles,” M.S. thesis, Air Force Institute of Technology, 2014.

12. “Pixhawk,” https://pixhawk.org/choice.

13. “Ardupilot 2.6,” http://copter.ardupilot.com/wiki/common-apm25-and-26-overview/.

14. “Piccolo Autopilot,” http://www.cloudcaptech.com/products/auto-pilots.

15. “Kestrel Autopilot,” http://www.lockheedmartin.com/us/products/procerus/kestrel-autopilot.html.

16. “MAVLink Protocol,” http://qgroundcontrol.org/mavlink/start.

17. “Pixhawk Control Architecture,” https://pixhawk.org/dev/architecture.

18. Ilker Bekmezci, Ozgur Koray Sahingoz, and Samil Temel, “Flying Ad-Hoc Networks (FANETs): A survey,” 2013.

19. Jun Li, Yifeng Zhou, and Louise Lamont, “Communication architectures and protocols for networking unmanned aerial vehicles,” in 2013 IEEE Globecom Workshops, GC Wkshps 2013, 2013.

20. “Mission Planner,” http://planner.ardupilot.com/.

21. “APM Planner 2.0,” http://planner2.ardupilot.com/.

22. “MAVProxy,” http://dronecode.github.io/MAVProxy/html/index.html.

23. Department of Defense, “DoD Architecture Framework Version 2.0,” August 2010.
Appendices

Appendix A: Formation Flocking Leader Vehicle Script
#FlockingModeLeader (Jeremy Gray Aug 2015)
# Gets location request from follower and gives the leader's location and heading
#
# Prerequisites:
#   Two (2) instances of MAVProxy are operational
#   Vehicles are connected in both instances of MAVProxy
# Notes:
#   for best results, update system time

import socket
import sys
from droneapi.lib import VehicleMode
from droneapi.lib import Command
from droneapi.lib import mavutil
import numpy as np
import math
import time
from datetime import datetime
from LLA_ECEF_Convert import LLA_ECEF_Convert
from multi_vehicle_toolbox import follower_pos

'''INIT PARAMS'''
freq_control = 4.0   # frequency of control loop, must be < follower, must be float (0.0)
freq_store = 2.0     # frequency of data storage, must be float (0.0)
freq_print = 1.0     # frequency of printed updates, must be float (0.0)
msg_size = 128       # size of msg to be passed

'''DRONEAPI INIT'''
# Get a local APIConnection to the autopilot (from companion computer or GCS).
api = local_connect()

# Create vehicle objects for each vehicle from the APIConnection
v_leader = api.get_vehicles()[0]
print "Leader Vehicle Object Created"

'''DATA FILE INIT'''
timestr = time.strftime("%m-%d-%Y_%H-%M-%S")   # date-time for file name
file_name = 'leader_gcs_tel_' + timestr        # file name appended with date-time
data_file = open(file_name, 'a')               # create txt doc to append to
print 'telemetry file open'

'''CONNECTION INIT'''
# Setup UDP link with follower_client
Port = 50005      # Port to TX/RX to/from follower_client
IP = '127.0.0.1'  # Local Host IP
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # Create UDP socket object
print 'socket created'
address = (IP, Port)

'''MAIN LOOP'''
t_write = 0   # forces first write to occur on start
t_print = 0
print 'starting control loop'
# Current location is loc0, next location is loc1
while not api.exit:
    try:
        # get current time for sleep
        t1 = time.time()

        # get telemetry information
        lat = str(v_leader.location.lat)                 # latitude (deg)
        lon = str(v_leader.location.lon)                 # longitude (deg)
        alt_asl = str(v_leader.location.alt)             # altitude above sea level (m)
        p = float(np.deg2rad(v_leader.attitude.pitch))   # pitch (rad) relative to NEU frame
        r = float(np.deg2rad(v_leader.attitude.roll))    # roll (rad) relative to NEU frame
        y = float(np.deg2rad(v_leader.attitude.yaw))     # yaw (rad) relative to NEU frame
        v_b = v_leader.velocity                          # velocity vector (m/s) relative to body
        t_tel = time.time()                              # time telemetry was received

        # flush data to leader
        v_leader.flush()

        # determine ground course relative to vehicle frame (NED)
        v_b = np.array([[v_b[0], v_b[1], v_b[2]]])
        c_p = np.cos(p); s_p = np.sin(p)
        c_r = np.cos(r); s_r = np.sin(r)
        c_y = np.cos(y); s_y = np.sin(y)
        R_v_b = np.array([[c_p*c_y,               c_p*s_y,               -s_p],
                          [s_r*s_p*c_y - c_r*s_y, s_r*s_p*s_y + c_r*c_y, s_r*c_p],
                          [c_r*s_p*c_y + s_r*s_y, c_r*s_p*s_y - s_r*c_y, c_r*c_p]])
        # rotation transform from vehicle to body
        v_v = np.dot(R_v_b.T, v_b.T)      # velocity vector relative to vehicle (NED) frame
        gc = np.arctan2(v_v[1], v_v[0])   # ground course (rad) relative to North
        v = np.linalg.norm(v_v)           # speed of leader, used to calc L1

        # if v is too slow use yaw (rad) as ground course
        if v < 0.25:
            gc = y

        # build telemetry msg to be a known length (msg_size)
        tel_msg_raw = '%s %s %s %s %s' % (lat, lon, alt_asl, str(float(gc)), str(v))
        tel_msg = msg_size * ' '
        if len(tel_msg_raw) < len(tel_msg):   # pad msg to known length
            n_spaces = len(tel_msg) - len(tel_msg_raw)
            tel_msg = tel_msg_raw + n_spaces * ' '
        else:
            print 'err: udp message exceeds length. Increase msg_size'
            break

        # send leader telemetry to follower over UDP
        s.sendto(str(tel_msg), address)

        # append data w/ unix time on new line of data txt file, if 1/freq_store has passed
        if time.time() - t_write > 1/freq_store:
            msg_data = '%s %s' % (t_tel, tel_msg_raw)
            data_file.write(msg_data + '\n')
            t_write = time.time()

        # print update message
        if time.time() - t_print > 1/freq_print:
            print 'telemetry sent & stored: ' + str(datetime.now().time())
            t_print = time.time()

        # determine sleep time
        t2 = time.time()
        t_remaining = (1/freq_control) - (t2 - t1)
        if t_remaining > 0:   # sleep for remainder of this control cycle
            time.sleep(t_remaining)
        else:                 # the operations in the while loop took too long
            print 'freq_control is too high'

    except KeyboardInterrupt:   # only way to stop the ride
        data_file.close()
        break
    except:
        print "Unexpected error:", sys.exc_info()[0]
        data_file.close()
        break

# exit
s.close()
print 'End of Script'
Appendix B: Formation Flocking Follower Vehicle Script
1 #FlockingModeFollower (Jeremy Gray SEP 2015)
2
#
Gets location
of
leader
vehi
cle and
sets waypoints to make
vehicle follow
at
3
#
a fixed offse
t di
l s tance
4
#
5
#
Prerequisits :
6
#
Two (2) i
nsta
nces o
f MAVProxy are operational
7
#
Vehicles
are
connec
ted in both
instances of MAVProxy
8
#
Notes :
9
i n
#
for best
results, u
pdate
system
time
11
im
port socket
12 import sys
13 import math
14 import time
15 from datetime import datetime
16 import re
17 from numpy import matrix
is import numpy as np
19 from droneapi.lib import VehicleMode , Location, Command, mavutil
20 from LLA_ECEF_Convert import LLA_ECEF_Convert
21 from multi_vehicle_toolbox import follower_pos
22
23
24 ” ’ INIT PARAMS ’ ’ ’
25 ^Follower offset parameters (relative to leader’s body frame)
26 off_ll_s=2 #L1 lead time constant [s] for forward offset waypoint
27 off_r = 2 #radial distance [m] away from leader
28 off.theta = 45 #angle (deg) from —x axis (out of tail) , OCW is ( + )
rotation
29 alt_agl_cmd =10 #alt agl [m] to be commanded, used in guided_pos
30
31 #timing prarameters
32 t_freq=8.0 #control loop frequency, must be slower than follower
and float (0.0)
33 freq_store =2.0 ^frequency of storage of data to disk and must be
float (0.0)
34 freq_print=l ^frequency of print statements (try to reduce this)
35
36 #other
37 msg_size=128 #size of msg to be passed
38
39 ’ ’ ’DRONEAPI INIT ’ ’ ’
4o# Get a local APIConnection to the autopilot (from companion computer or
GCS) .
41 api = local_connect ()
42
43# Create vehicle objects for follower vehicle from the APIConnection
44 v_follower = api . get .vehicles () [0]
45 print ’’Follower Vehicle Object Created”
131
4G
47 ’ ’ ’DATA FILE INIT ’ ’ ’
48 timestr = time . strftime (”%n-%d— 1 ) #date—time for file name
49 f ile _name= ’ f o 11 o w e r _gc s _t e 1 _ + timestr #file name appended with
date time
so data_file = open (file.name , ’a’) ^create txt doc to append to
51 msg_data=’%s %s %s ’ %(off_r , off.theta , off_11 _s )
52 data_file . write (msg.data + ’\n’)
53 print ’telemetry file open’
54
55 ’ ’ ’CONNECTION INIT ’ ’ ’
56 #Setup TCP link with leader_server
57 Port = 50005 # Port to TX/RX to/from leader .server
58 IP = ’127.0.0.1’ #Local Host IP
59 s = socket . socket ( socket . AFTNET, socket .SOCKDGRAM)
60 print ’socket created’
ei s . bind ((IP , Port)) # Connect socket
62 print ’Bound to port ’ + str(Port)
63
64 ’ ’ ’Main Loop ’ ’ ’
65 rc_ch=v_follower . channel.readback
eo t_write=0 ^forces first write to occure on start
67 t_print=0 ^forces first print to occure on start
68 ^Current location is posO , next location is posl
69 print ’starting control loop’
70 while not api . exit :
71 try:
72 #get current time for sleep . . .
73 tl=time . time ()
74
75 if rc„ch[’5’] > 1100: ^MANUAL MODE FAIL SAFE, will not store data
76 v.follower . mode = VehicleMode (” STABILIZE” )
77
78 if time.time() — t.print > 1/freq.print :
79 print ’’Follower Mode Set to Manual” + str ( datetime . now (). time
0 )
so t _print=time . time ()
81
82 time . sleep (0.01)
83
84 else :
85 #read leader tel from udp port
        tel_leader = s.recv(msg_size)   #get "lat(deg) lon(deg) alt(m) gc(rad) v(m/s)"

        #manipulate leader tel to parse out lat, lon, alt, heading/gc, velocity
        pattern = re.compile("[ ]")        #Data pattern (data separated by [ ] i.e. space)
        param = pattern.split(tel_leader)  #split data based on data pattern

        pos_leader = np.array([float(param[0]), float(param[1]), float(param[2])])
        #leader pos [lat(deg) lon(deg) alt(m)]
        heading_l = np.rad2deg(float(param[3]))   #leader ground course (rad)
        v_l = float(param[4])                     #leader velocity (m/s)

        #calculate desired position
        off_ll = off_ll_s * v_l   #forward offset dist. ([m] = [s] * [m/s])
        pos1_f = follower_pos(off_r, off_theta, off_ll,
                              pos_leader, heading_l)   #pos1_f = [lat(deg) lon(deg) alt(m)]

        #Set new follower guided point
        guided_pos = Location(pos1_f[0], pos1_f[1] - 360, alt_agl_cmd, is_relative=True)
        if v_follower.mode != "GUIDED":   #if not already in guided... go guided
            v_follower.mode = VehicleMode("GUIDED")
        v_follower.commands.goto(guided_pos)   #send guided point
        v_follower.flush()                     #flush cmd to follower

        #get telemetry information for storage
        lat = str(v_follower.location.lat)       #latitude (9 bytes CHECK)
        lon = str(v_follower.location.lon)       #longitude (9 bytes CHECK)
        alt_asl = str(v_follower.location.alt)   #altitude above sea level (6 bytes CHECK)
        p = float(np.deg2rad(v_follower.attitude.pitch))   #pitch (rad) of vehicle relative to NEU frame
        r = float(np.deg2rad(v_follower.attitude.roll))    #roll (rad) of vehicle relative to NEU frame
        y = float(np.deg2rad(v_follower.attitude.yaw))     #yaw (rad) of vehicle relative to NEU frame
        v_b = v_follower.velocity                #velocity in x dir relative to body (CHECK)

        #determine gc relative to vehicle frame (NEU)
        v_b = np.array([[v_b[0], v_b[1], v_b[2]]])
        c_r = np.cos(r); s_r = np.sin(r)
        c_p = np.cos(p); s_p = np.sin(p)
        c_y = np.cos(y); s_y = np.sin(y)
        R_v_b = np.array([[c_p*c_y,                c_p*s_y,                -s_p   ],
                          [s_r*s_p*c_y - c_r*s_y,  s_r*s_p*s_y + c_r*c_y,  s_r*c_p],
                          [c_r*s_p*c_y + s_r*s_y,  c_r*s_p*s_y - s_r*c_y,  c_r*c_p]])
        #rotation transform from vehicle to body
        v_v = np.dot(R_v_b.T, v_b.T)      #velocity vector relative to vehicle (NEU) frame
        gc = np.arctan2(v_v[1], v_v[0])   #ground course relative to NEU (CHECK)
        v = np.linalg.norm(v_v)           #velocity of leader, used to calc L1
        t_tel = time.time()

        #if v is too slow use yaw as ground course
        if v < 1:
            gc = y

        v_follower.flush()

        #build telemetry data str
        tel_msg_raw = '%s %s %s %s %s %s %s %s' % (lat, lon, alt_asl, str(float(gc)), str(v),
                                                   str(pos1_f[0]), str(pos1_f[1]), str(pos1_f[2]))

        #append data with unix time on a new line of data txt file
        if time.time() - t_write > 1/freq_store:
            msg_data = '%s %s' % (t_tel, tel_msg_raw)
            data_file.write(msg_data + '\n')
            t_write = time.time()

        #print update message
        if time.time() - t_print > 1/freq_print:
            print 'cmd sent & telemetry stored: ' + str(datetime.now().time())
            t_print = time.time()

        #determine sleep time
        t2 = time.time()
        t_remaining = (1/t_freq) - (t2 - t1)
        if t_remaining > 0:   #sleep for remainder of this control cycle
            time.sleep(t_remaining)
        else:                 #the operations in the while loop took too long
            print 't_freq is too high'

    except KeyboardInterrupt:   #only way to stop the ride
        data_file.close()
        break

    except:
        print "Unexpected error:", sys.exc_info()[0]
        data_file.close()
        break

# exit
s.close()
print 'End of Script'
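The densest step in the listing above is the conversion of the follower's body-frame velocity into the local North-East-Up frame to recover ground course. The standalone sketch below restates just that step so it can be checked in isolation; it is illustrative only, the function name and test values are placeholders rather than part of the flight code, and it assumes the same ZYX (yaw-pitch-roll) Euler convention used in the listing.

import numpy as np

def ground_course_from_body(v_body, roll, pitch, yaw):
    """Rotate a body-frame velocity into the local North-East-Up frame and
    return ground course (rad) measured from North toward East.
    Angles are in radians; convention mirrors the listing (ZYX Euler)."""
    c_r, s_r = np.cos(roll), np.sin(roll)
    c_p, s_p = np.cos(pitch), np.sin(pitch)
    c_y, s_y = np.cos(yaw), np.sin(yaw)
    # Rotation from local (NEU) to body; its transpose maps body -> local
    R_v_b = np.array([[c_p*c_y,               c_p*s_y,               -s_p   ],
                      [s_r*s_p*c_y - c_r*s_y, s_r*s_p*s_y + c_r*c_y,  s_r*c_p],
                      [c_r*s_p*c_y + s_r*s_y, c_r*s_p*s_y - s_r*c_y,  c_r*c_p]])
    v_local = R_v_b.T.dot(np.asarray(v_body))
    return np.arctan2(v_local[1], v_local[0])   # atan2(East, North)

# Example: level attitude, nose pointed due East, 5 m/s forward
print ground_course_from_body([5.0, 0.0, 0.0], 0.0, 0.0, np.deg2rad(90.0))
# expected: approximately pi/2 (1.5708 rad)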
C Appendix C: Multi-Vehicle Function Module, as Tested

'''
multi_vehicle_toolbox.py
Calculations required for multi-vehicle operations
1) flocking follower pos calculation
2) comm relay relay vehicle midpoint pos calc
'''
import numpy as np
from LLA_ECEF_Convert import LLA_ECEF_Convert

def follower_pos(off_r, off_theta, off_ll, loc0_l, heading_l):
    #function description: determines the next desired location of the follower
    #
    #Inputs:  off_r:      radial distance away from leader [m]
    #         off_theta:  angle (deg) from -x axis (out of tail), CCW is (+) rotation
    #         off_ll:     distance the guided point is placed forward of the desired follower location
    #         loc0_l:     location of the leader at current increment of time, in lat lon alt (LLA)
    #         heading_l:  heading of the leader at current increment of time
    #Outputs: loc1_f:     current desired location of the follower

    #Follower loc1 relative to leader body frame
    off_theta += 270   #add 270 deg to make offset relative to east (+X axis for math)
    loc1_f = (off_r - off_ll) * np.array([np.cos(np.deg2rad(off_theta)),
                                          np.sin(np.deg2rad(off_theta)),
                                          0])

    #Follower loc1 relative to Local Level Frame (L, North-East-Up) frame
    cos_h = np.cos(np.deg2rad(heading_l))
    sin_h = np.sin(np.deg2rad(heading_l))
    R_BtoL = np.array([[cos_h,  sin_h, 0],
                       [-sin_h, cos_h, 0],
                       [0,      0,     1]])   #Rotation from body to local
    loc1_f = np.dot(loc1_f, R_BtoL)

    #Follower loc1 from Local Level Frame (L, North-East-Up) to ECEF (E)
    phi = np.deg2rad(loc0_l[0])   #latitude of leader
    la = np.deg2rad(loc0_l[1])    #longitude of leader
    sin_la = np.sin(la)
    cos_la = np.cos(la)
    sin_phi = np.sin(phi)
    cos_phi = np.cos(phi)
    R_LtoE = np.array([[-sin_la, -sin_phi*cos_la, cos_phi*cos_la],
                       [cos_la,  -sin_phi*sin_la, cos_phi*sin_la],
                       [0,        cos_phi,        sin_phi       ]])   #Rotation from local to ecef
    T_LtoE = LLA_ECEF_Convert(np.rad2deg(phi), np.rad2deg(la), loc0_l[2], 'LLAtoECEF')
    loc1_f = np.dot(R_LtoE, loc1_f) + T_LtoE.T

    #Follower Location ECEF to lat lon alt (LLA)
    loc1_f = LLA_ECEF_Convert(loc1_f[0], loc1_f[1], loc1_f[2], 'ECEFtoLLA')

    return loc1_f

def relay_pos(pos_gcs_llh, pos_rem_llh):
    #function description: Calculates the midpoint between the GCS and remote vehicle
    #
    #Inputs:   pos_gcs_llh:  pos of GCS in lat lon hae
    #          pos_rem_llh:  pos of remote vehicle in lat lon hae
    #Outputs:  pos_rel_lla:  calculated pos to send the relay vehicle
    #Notation: Remote vehicle: rem
    #          Relay vehicle:  rel
    #          Ground Control: GCS

    #convert pos of rem & gcs from llh to ecef
    pos_rem_ecef = LLA_ECEF_Convert(pos_rem_llh[0], pos_rem_llh[1], pos_rem_llh[2], 'LLAtoECEF')
    pos_gcs_ecef = LLA_ECEF_Convert(pos_gcs_llh[0], pos_gcs_llh[1], pos_gcs_llh[2], 'LLAtoECEF')

    #calculate midpoint in ecef
    pos_rel_ecef = pos_gcs_ecef + 0.5*(pos_rem_ecef - pos_gcs_ecef)

    #convert pos of rel from ecef to llh
    pos_rel_lla = LLA_ECEF_Convert(pos_rel_ecef[0], pos_rel_ecef[1], pos_rel_ecef[2], 'ECEFtoLLA')

    return pos_rel_lla
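As a usage illustration only, the short sketch below shows how the two functions in this module might be called off-vehicle. The coordinates are arbitrary placeholder values, and the LLA_ECEF_Convert module is assumed to be importable exactly as in the listing.

from multi_vehicle_toolbox import follower_pos, relay_pos

# Leader at an arbitrary test point, heading due North; follower requested
# 15 m out on the -x (tail) axis with no forward (L1) offset.
leader_lla = [39.78, -84.08, 300.0]   # lat (deg), lon (deg), alt (m), placeholder values
desired_follower = follower_pos(off_r=15.0, off_theta=0.0, off_ll=0.0,
                                loc0_l=leader_lla, heading_l=0.0)
print 'follower guided point (LLA):', desired_follower

# Relay midpoint between a GCS and a remote vehicle (placeholder positions)
gcs_lla = [39.780, -84.080, 250.0]
remote_lla = [39.790, -84.070, 320.0]
print 'relay guided point (LLA):', relay_pos(gcs_lla, remote_lla)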
D Appendix D: Communication Relay Remote Vehicle Script
#FlockingModeLeader (Jeremy Gray Aug 2015)
# Gets location request from follower and gives the leader's location and heading
#
# Prerequisites:
#   Two (2) instances of MAVProxy are operational
#   Vehicles are connected in both instances of MAVProxy

import socket
import sys
from droneapi.lib import VehicleMode
from droneapi.lib import Command
from droneapi.lib import mavutil
from numpy import matrix
import numpy as np
import math
import time
from datetime import datetime
from LLA_ECEF_Convert import LLA_ECEF_Convert
from multi_vehicle_toolbox import follower_pos

'''INIT PARAMS'''
t_freq = 5.0       #control loop frequency, must be slower than follower
freq_print = 1.0   #rate of printing updates
msg_size = 128     #size of msg to be passed

'''DRONEAPI INIT'''
# Get a local APIConnection to the autopilot (from companion computer or GCS).
api = local_connect()

# Create vehicle objects for each vehicle from the APIConnection
v_remote = api.get_vehicles()[0]
print "Leader Vehicle Object Created"

'''CONNECTION INIT'''
#Setup UDP link with leader_server
Port = 50005       # Port to TX/RX to/from follower_client
IP = '127.0.0.1'   #Local Host IP
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # Create UDP socket object
print 'socket created'
address = (IP, Port)

'''MAIN LOOP'''
t_write = 0   #forces first write to occur on start
t_print = 0
#Current location is loc0, next location is loc1
while not api.exit:
    try:
        #get current time for sleep...
        t1 = time.time()

        #get telemetry information
        lat = str(v_remote.location.lat)       #latitude (deg)
        lon = str(v_remote.location.lon)       #longitude (deg)
        alt_asl = str(v_remote.location.alt)   #altitude above sea level

        #build telemetry msg to be a known length (msg_size)
        tel_msg_raw = '%s %s %s' % (lat, lon, alt_asl)   #build msg
        tel_msg = msg_size*' '
        if len(tel_msg_raw) < len(tel_msg):   #set msg size to known length
            n_spaces = len(tel_msg) - len(tel_msg_raw)
            tel_msg = tel_msg_raw + n_spaces*' '

        v_remote.flush()
        s.sendto(str(tel_msg), address)
        print 'telemetry sent: ' + str(datetime.now().time())

        #print update message
        if time.time() - t_print > 1/freq_print:
            print 'telemetry sent & stored: ' + str(datetime.now().time())
            t_print = time.time()

        #determine sleep time
        t2 = time.time()
        t_remaining = (1/t_freq) - (t2 - t1)
        if t_remaining > 0:   #sleep for remainder of this control cycle
            time.sleep(t_remaining)
        else:                 #the operations in the while loop took too long
            print 't_freq is too low'

    except KeyboardInterrupt:
        break

    except:
        print "Unexpected error:", sys.exc_info()[0]
        break

# exit
s.close()
print 'End of Script'
E Appendix E: Communication Relay Relay Vehicle Script
#Comm Relay, Relay vehicle client script (Jeremy Gray SEP 2015)
# Gets location of remote vehicle and sets waypoints to make relay vehicle
# transit to a halfway point to relay comm
#
# Prerequisites:
#   Two (2) instances of MAVProxy are operational
#   Vehicles are connected in both instances of MAVProxy

import socket
import sys
import math
import time
from datetime import datetime
import re
import numpy as np
from droneapi.lib import VehicleMode, Location, Command, mavutil
from LLA_ECEF_Convert import LLA_ECEF_Convert
from multi_vehicle_toolbox import relay_pos

'''INIT PARAMS'''
alt_agl_cmd = 10   #alt agl [m] to be commanded, used in guided_pos
t_freq = 10.0      #control loop frequency, must be faster than leader
freq_print = 1.0   #rate of printing updates
msg_size = 128     #size of msg to be passed

'''DRONEAPI INIT'''
# Get a local APIConnection to the autopilot (from companion computer or GCS).
api = local_connect()

# Create vehicle objects for follower vehicle from the APIConnection
v_relay = api.get_vehicles()[0]
print "Follower Vehicle Object Created"

'''CONNECTION INIT'''
#Setup UDP link with leader_server
Port = 50005       # Port to TX/RX to/from leader_server
IP = '127.0.0.1'   #Local Host IP
s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
print 'socket created'
s.bind((IP, Port))   # Bind socket
print 'Bound to port ' + str(Port)

'''GET HOME LOCATION'''
pos_home_lla = np.array([v_relay.location.lat, v_relay.location.lon, v_relay.location.alt])
print "Home WP: %s" % pos_home_lla

'''MAIN LOOP'''
msg_size = 128
rc_ch = v_relay.channel_readback
t_write = 0   #forces first write to occur on start
t_print = 0
#remote vehicle is rem, relay is rel
while not api.exit:
    try:
        if rc_ch['5'] > 1100:
            v_relay.mode = VehicleMode("STABILIZE")
            print "Relay Mode Set to Manual"
            time.sleep(0.25)
        else:
            t1 = time.time()
            tel_rem = s.recv(msg_size)   #read port

            #manipulate Rx data to parse out remote vehicle pos
            pattern = re.compile("[ ]")      #Data pattern (data separated by [ ] i.e. space)
            param = pattern.split(tel_rem)   #split data based on data pattern
            pos_rem_lla = np.array([float(param[0]), float(param[1]), float(param[2])])

            #calculate desired relay position
            pos_rel_lla = relay_pos(pos_home_lla, pos_rem_lla)
            guided_pos = Location(pos_rel_lla[0], pos_rel_lla[1] - 360, alt_agl_cmd, is_relative=True)

            #Set new follower guided point
            if v_relay.mode != "GUIDED":
                v_relay.mode = VehicleMode("GUIDED")
            v_relay.commands.goto(guided_pos)
            v_relay.flush()
            print 'cmd sent: ' + str(pos_rel_lla)

            #print update message
            if time.time() - t_print > 1/freq_print:
                print 'cmd sent & telemetry stored: ' + str(datetime.now().time())
                t_print = time.time()

            #determine sleep time
            t2 = time.time()
            t_remaining = (1/t_freq) - (t2 - t1)
            if t_remaining > 0:   #sleep for remainder of this control cycle
                time.sleep(t_remaining)
            else:                 #the operations in the while loop took too long
                print 't_freq is too low'

    except KeyboardInterrupt:   #only way to stop the ride
        break

##    except:
##        print "Unexpected error:", sys.exc_info()[0]
##        break

# exit
s.close()
print 'End of Script'
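The remote and relay scripts share a very simple datagram framing convention: each telemetry message is a space-separated ASCII string right-padded with spaces to a fixed length of msg_size (128) bytes, then split on single spaces by the receiver. The self-contained sketch below restates that framing without DroneKit or MAVProxy; the function names and coordinate values are placeholders only.

import re

MSG_SIZE = 128   # fixed datagram length used by both scripts

def pack_telemetry(lat, lon, alt_asl):
    # Space-separated fields, right-padded with spaces to a known length
    raw = '%s %s %s' % (lat, lon, alt_asl)
    return raw + (MSG_SIZE - len(raw)) * ' '

def unpack_telemetry(msg):
    # Split on single spaces, as the relay script does, after dropping the padding
    fields = re.compile('[ ]').split(msg.strip())
    return [float(f) for f in fields]

packed = pack_telemetry('39.780123', '-84.082456', '301.2')   # placeholder values
assert len(packed) == MSG_SIZE
print unpack_telemetry(packed)   # [39.780123, -84.082456, 301.2]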
F Appendix F: Multi-Vehicle Function Module With Fixed Follower_Pos Calculation

'''
multi_vehicle_toolbox.py
Calculations required for multi-vehicle operations
1) flocking follower pos calculation
2) comm relay relay vehicle midpoint pos calc
'''
import numpy as np
from LLA_ECEF_Convert import LLA_ECEF_Convert

def follower_pos(off_r, off_theta, off_ll, loc0_l, heading_l):
    #function description: determines the next desired location of the follower
    #
    #Inputs:  off_r:      radial distance away from leader [m]
    #         off_theta:  angle (deg) from -x axis (out of tail), CCW is (+) rotation
    #         off_ll:     distance the guided point is placed forward of the desired follower location
    #         loc0_l:     location of the leader at current increment of time, in lat lon alt (LLA)
    #         heading_l:  heading of the leader at current increment of time
    #Outputs: loc1_f:     current desired location of the follower

    #Follower loc1 relative to leader body frame
    off_theta += 270   #add 270 deg to make offset relative to east (+X axis for math)
    off_theta = np.deg2rad(off_theta)
    loc1_f = off_r*np.array([np.cos(off_theta), np.sin(off_theta), 0]) - off_ll*np.array([1, 0, 0])

    #Follower loc1 relative to Local Level Frame (L, North-East-Up) frame
    cos_h = np.cos(np.deg2rad(heading_l))
    sin_h = np.sin(np.deg2rad(heading_l))
    R_BtoL = np.array([[cos_h,  sin_h, 0],
                       [-sin_h, cos_h, 0],
                       [0,      0,     1]])   #Rotation from body to local
    loc1_f = np.dot(loc1_f, R_BtoL)

    #Follower loc1 from Local Level Frame (L, North-East-Up) to ECEF (E)
    phi = np.deg2rad(loc0_l[0])   #latitude of leader
    la = np.deg2rad(loc0_l[1])    #longitude of leader
    sin_la = np.sin(la)
    cos_la = np.cos(la)
    sin_phi = np.sin(phi)
    cos_phi = np.cos(phi)
    R_LtoE = np.array([[-sin_la, -sin_phi*cos_la, cos_phi*cos_la],
                       [cos_la,  -sin_phi*sin_la, cos_phi*sin_la],
                       [0,        cos_phi,        sin_phi       ]])   #Rotation from local to ecef
    T_LtoE = LLA_ECEF_Convert(np.rad2deg(phi), np.rad2deg(la), loc0_l[2], 'LLAtoECEF')
    loc1_f = np.dot(R_LtoE, loc1_f) + T_LtoE.T

    #Follower Location ECEF to lat lon alt (LLA)
    loc1_f = LLA_ECEF_Convert(loc1_f[0], loc1_f[1], loc1_f[2], 'ECEFtoLLA')

    return loc1_f

def relay_pos(pos_gcs_llh, pos_rem_llh):
    #function description: Calculates the midpoint between the GCS and remote vehicle
    #
    #Inputs:   pos_gcs_llh:  pos of GCS in lat lon hae
    #          pos_rem_llh:  pos of remote vehicle in lat lon hae
    #Outputs:  pos_rel_lla:  calculated pos to send the relay vehicle
    #Notation: Remote vehicle: rem
    #          Relay vehicle:  rel
    #          Ground Control: GCS

    #convert pos of rem & gcs from llh to ecef
    pos_rem_ecef = LLA_ECEF_Convert(pos_rem_llh[0], pos_rem_llh[1], pos_rem_llh[2], 'LLAtoECEF')
    pos_gcs_ecef = LLA_ECEF_Convert(pos_gcs_llh[0], pos_gcs_llh[1], pos_gcs_llh[2], 'LLAtoECEF')

    #calculate midpoint in ecef
    pos_rel_ecef = pos_gcs_ecef + 0.5*(pos_rem_ecef - pos_gcs_ecef)

    #convert pos of rel from ecef to llh
    pos_rel_lla = LLA_ECEF_Convert(pos_rel_ecef[0], pos_rel_ecef[1], pos_rel_ecef[2], 'ECEFtoLLA')

    return pos_rel_lla
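As a quick check of how the corrected body-frame offset differs from the as-tested version in Appendix C, the hedged sketch below evaluates only that first step for a pure trail position (off_theta of 0 deg, i.e. directly off the leader's tail). The numeric values and variable names are illustrative only.

import numpy as np

off_r, off_theta, off_ll = 15.0, 0.0, 5.0    # placeholder offsets [m, deg, m]

# Fixed formulation (this appendix): radial and forward (L1) offsets applied separately
theta = np.deg2rad(off_theta + 270)          # same 270 deg shift used in both listings
loc1_f_fixed = off_r*np.array([np.cos(theta), np.sin(theta), 0]) - off_ll*np.array([1, 0, 0])

# As-tested formulation from Appendix C, shown for comparison
loc1_f_tested = (off_r - off_ll)*np.array([np.cos(theta), np.sin(theta), 0])

print loc1_f_fixed    # approximately [-5., -15., 0.]
print loc1_f_tested   # approximately [ 0., -10., 0.]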
G Appendix G: Traxxas EMAXX UGS Pixhawk Parameters
#NOTE: 8/25/2015 12:26:27 PM
AHRS_C0MP_BETA,0.1
AHRS_EKF_USE,0
AHRS_GPS_GAIN,1
AHRS_GPS_MINSATS,6
AHRS_GPS_USE,1
AHRS_0RIENTATI0N,0
AHRS_RP_P,0.2
AHRS_TRIM_X,0
AHRS_TRIM_Y,0
AHRS_TRIM_Z,0
AHRS_WIND_MAX,0
AHRS_YAW_P,0.2
AUTO_KICKSTART,0
AUTO_TRIGGER_PIN,-1
BATT_AMP_OFFSET,0
BATT_AMP_PERVOLT,17
BATT_CAPACITY,3300
BATT_CURR_PIN,-1
BATT_M0NIT0R,0
BATT_V0LT_MULT,10.1
BATT_V0LT_PIN,-1
BATT2_AMP_0FFSET,0
BATT2_AMP_PERV0L,17
BATT2_CAPACITY,3300
BATT2_CURR_PIN,3
BATT2_M0NIT0R,0
BATT2_V0LT_MULT,10.1
BATT2_V0LT_PIN,2
BRAKING_PERCENT,0
BRAKING_SPEEDERR,3
BRD_PWM_C0UNT,4
BRD_SAFETYENABLE,1
BRD_SBUS_0UT,0
BRD_SER1_RTSCTS,2
BRD_SER2_RTSCTS,2
CAM_DURATI0N,10
CAM_SERV0_0FF,1100
CAM_SERV0_0N,1300
CAM_TRIGG_DIST,0
CAM_TRIGG_TYPE,0
CH7_0PTI0N,1
CLI_ENABLED,0
C0MPASS_AUT0DEC,1
C0MPASS_DEC,-0.09905119
C0MPASS_DEV_ID,73225
C0MPASS_DEV_ID2,131594
C0MPASS_DEV_ID3,0
C0MPASS_EXTERN2,0
C0MPASS_EXTERN3,0
COMPASS_EXTERNAL,1
C0MPASS_LEARN,0
C0MPASS_M0T_X,0
C0MPASS_M0T_Y,0
C0MPASS_M0T_Z,0
C0MPASS_M0T2_X,0
C0MPASS_M0T2_Y,0
C0MPASS_M0T2_Z,0
C0MPASS_M0T3_X,0
C0MPASS_M0T3_Y,0
C0MPASS_M0T3_Z,0
C0MPASS_M0TCT,0
C0MPASS_0FS_X,-101.8812
C0MPASS_0FS_Y,-12.68495
C0MPASS_0FS_Z,-101.4908
C0MPASS_0FS2_X,271.3355
C0MPASS_0FS2_Y,-438.0417
C0MPASS_0FS2_Z,286.0888
C0MPASS_0FS3_X,0
C0MPASS_0FS3_Y,0
C0MPASS_0FS3_Z,0
C0MPASS_0RIENT,0
C0MPASS_0RIENT2,0
C0MPASS_0RIENT3,0
COMPASS_PRIMARY,0
C0MPASS_USE,1
C0MPASS_USE2,1
C0MPASS_USE3,1
CRUISE_SPEED,1.5
CRUISE_THROTTLE,33
EKF_ABIAS_PNOISE,5E-05
EKF_ACC_PNOISE,0.25
EKF_ALT_N0ISE,1
EKF_ALT_S0URCE,1
EKF_EAS_GATE,10
EKF_EAS_N0ISE,1.4
EKF_FALLBACK,1
EKF_FL0W_DELAY,25
EKF_FL0W_GATE,5
EKF_FL0W_N0ISE,0.15
EKF_GBIAS_PNOISE,8E-06
EKF_GLITCH_ACCEL,150
EKF_GLITCH_RAD,15
EKF_GND_GRADIENT,2
EKF_GPS_TYPE,0
EKF_GYR0_PN0ISE,0.015
EKF_HGT_GATE,10
EKF_MAG_CAL,1
EKF_MAG_GATE,3
EKF_MAG_N0ISE,0.05
EKF_MAGB_PNOISE,0.0003
EKF_MAGE_PNOISE,0.0003
EKF_MAX_FL0W,2.5
EKF_P0S_DELAY,220
EKF_P0S_GATE,10
EKF_P0SNE_N0ISE,0.5
EKF_RNG_GATE,5
EKF_VEL_DELAY,220
EKF_VEL_GATE,5
EKF_VELD_N0ISE,0.7
EKF_VELNE_NOISE,0.5
EKF_WIND_PNOISE,0.1
EKF_WIND_PSCALE,0.5
F0RMAT_VERSI0N,16
FS_ACTI0N,2
FS_GCS_ENABLE,0
FS_THR_ENABLE,1
FS_THR_VALUE,910
FS_TIME0UT,5
GCS_PID_MASK,0
GND_ABS_PRESS,98397.73
GND_ALT_0FFSET,0
GND_TEMP,33.29029
GPS_AUTO_SWITCH,1
GPS_INJECT_T0,127
GPS_MIN_DGPS,100
GPS_MIN_ELEV,-100
GPS_NAVFILTER,8
GPS_RAW_DATA,0
GPS_SBAS_M0DE,2
GPS_SBP_LOGMASK,-256
GPS_TYPE,1
GPS_TYPE2,0
INITIAL_M0DE,0
INS_ACC20FFS_X,0
INS_ACC20FFS_Y,0
INS_ACC20FFS_Z,0
INS_ACC2SCAL_X,1
INS_ACC2SCAL_Y,1
INS_ACC2SCAL_Z,1
INS_ACC30FFS_X,0
INS_ACC30FFS_Y,0
INS_ACC30FFS_Z,0
INS_ACC3SCAL_X,0
INS_ACC3SCAL_Y,0
INS_ACC3SCAL_Z,0
INS_ACCEL_FILTER,10
INS_ACC0FFS_X,0
INS_ACC0FFS_Y,0
INS_ACC0FFS_Z,0
INS_ACCSCAL_X,1
INS_ACCSCAL_Y,1
INS_ACCSCAL_Z,1
INS_GYR20FFS_X,0.007782747
INS_GYR20FFS_Y,0.008182988
INS_GYR20FFS_Z,-0.004160143
INS_GYR30FFS_X,0
INS_GYR30FFS_Y,0
INS_GYR30FFS_Z,0
INS_GYRO_FILTER,10
INS_GYR0FFS_X,-0.002938097
INS_GYR0FFS_Y,0.03547082
INS_GYR0FFS_Z,0.006362518
INS_PR0DUCT_ID,5
LEARN_CH,7
L0G_BITMASK,65535
MAG_ENABLE,0
MIS_RESTART,0
MIS_TOTAL,6
MNT_ANGMAX_PAN,4500
MNT_ANGMAX_R0L,4500
MNT_ANGMAX_TIL,4500
MNT_ANGMIN_PAN,-4500
MNT_ANGMIN_R0L,-4500
MNT_ANGMIN_TIL,-4500
MNT_DEFLT_M0DE,3
MNT_JSTICK_SPD,0
MNT_K_RATE,5
MNT_LEAD_PTCH,0
MNT_LEAD_RLL,0
MNT_NEUTRAL_X,0
MNT_NEUTRAL_Y,0
MNT_NEUTRAL_Z,0
MNT_0FF_ACC_X,0
MNT_0FF_ACC_Y,0
MNT_0FF_ACC_Z,0
MNT_0FF_GYR0_X,0
MNT_0FF_GYR0_Y,0
MNT_0FF_GYR0_Z,0
MNT_0FF_JNT_X,0
MNT_0FF_JNT_Y,0
MNT_0FF_JNT_Z,0
MNT_RC_IN_PAN,0
MNT_RC_IN_R0LL,0
MNT_RC_IN_TILT,0
MNT_RETRACT_X,0
MNT_RETRACT_Y,0
MNT_RETRACT_Z,0
MNT_STAB_PAN,0
MNT_STAB_R0LL,0
MNT_STAB_TILT,0
MNT_TYPE,0
M0DE_CH,8
M0DE1,10
M0DE2,0
M0DE3,2
M0DE4,3
M0DE5,10
M0DE6,0
NAVL1_DAMPING,0.9
NAVL1_PERI0D,8
PIVOT_TURN_ANGLE,30
RC1_DZ,30
RC1_MAX,1760
RC1_MIN,1248
RC1_REV,1
RC1_TRIM,1496
RC10_DZ,0
RC10_FUNCTI0N,0
RC10_MAX,1900
RC10_MIN,1100
RC10_REV,1
RC10_TRIM,1500
RC11_DZ,0
RC11_FUNCTI0N,0
RC11_MAX,1900
RC11_MIN,1100
RC11_REV,1
RC11_TRIM,1500
RC12_DZ,0
RC12_FUNCTI0N,0
RC12_MAX,1900
RC12_MIN,1100
RC12_REV,1
RC12_TRIM,1500
RC13_DZ,0
RC13_FUNCTI0N,0
RC13_MAX,1900
RC13_MIN,1100
RC13_REV,1
RC13_TRIM,1500
RC14_DZ,0
RC14_FUNCTI0N,0
RC14_MAX,1900
RC14_MIN,1100
RC14_REV,1
RC14_TRIM,1500
RC2_DZ,30
RC2_FUNCTI0N,0
RC2_MAX,2009
RC2_MIN,1298
RC2_REV,1
RC2_TRIM,1500
RC3_DZ,0
RC3_MAX,2017
RC3_MIN,1120
RC3_REV,1
RC3_TRIM,1529
RC4_DZ,0
RC4_FUNCTION,0
RC4_MAX,2016
RC4_MIN,992
RC4_REV,1
RC4_TRIM,1502
RC5_DZ,0
RC5_FUNCTION,0
RC5_MAX,2017
RC5_MIN,2015
RC5_REV,1
RC5_TRIM,2017
RC6_DZ,0
RC6_FUNCTION,0
RC6_MAX,2017
RC6_MIN,2015
RC6_REV,1
RC6_TRIM,2016
RC7_DZ,0
RC7_FUNCTION,0
RC7_MAX,1146
RC7_MIN,1145
RC7_REV,1
RC7_TRIM,1146
RC8_DZ,0
RC8_FUNCTION,0
RC8_MAX,2017
RC8_MIN,991
RC8_REV,1
RC8_TRIM,2016
RC9_DZ,0
RC9_FUNCTION,0
RC9_MAX,1900
RC9_MIN,1100
RC9_REV,1
RC9_TRIM,1500
RCMAP_PITCH,3
RCMAP_ROLL,1
RCMAP_THROTTLE,2
RCMAP_YAW,4
RELAY_DEFAULT,0
RELAY_PIN,54
RELAY_PIN2,55
RELAY_PIN3,-1
RELAY_PIN4,-1
RNGFND_DEBOUNCE,2
RNGFND_FUNCTION,0
RNGFND_GNDCLEAR,10
RNGFND_MAX_CM,700
RNGFND_MIN_CM,20
RNGFND_OFFSET,0
RNGFND_PIN,-1
RNGFND_PWRRNG,0
RNGFND_RMETRIC,1
RNGFND_SCALING,3
RNGFND_SETTLE,0
RNGFND_STOP_PIN,-1
RNGFND_TRIGGR_CM,100
RNGFND_TURN_ANGL,45
RNGFND_TURN_TIME,1
RNGFND_TYPE,0
RNGFND2_FUNCTION,0
RNGFND2_GNDCLEAR,10
RNGFND2_MAX_CM,700
RNGFND2_MIN_CM,20
RNGFND2_OFFSET,0
RNGFND2_PIN,-1
RNGFND2_RMETRIC,1
RNGFND2_SCALING,3
RNGFND2_SETTLE,0
RNGFND2_STOP_PIN,-1
RNGFND2_TYPE,0
RSSI_PIN,-1
RST_SWITCH_CH,0
SCHED_DEBUG,0
SERIAL0_BAUD,115
SERIAL1_BAUD,57
SERIAL1_PROTOCOL,1
SERIAL2_BAUD,57
SERIAL2_PROTOCOL,1
SERIAL3_BAUD,38
SERIAL3_PR0T0C0L,5
SERIAL4_BAUD,38
SERIAL4_PR0T0C0L,5
SKID_STEER_IN,0
SKID_STEER_OUT,0
SKIP_GYRO_CAL,0
SPEED_TURN_DIST,2
SPEED_TURN_GAIN,1
SPEED2THR_D,0.5
SPEED2THR_I,0.5
SPEED2THR_IMAX,5000
SPEED2THR_P,0.5
SR0_EXT_STAT,2
SR0_EXTRA1,6
SR0_EXTRA2,6
SR0_EXTRA3,1
SR0_PARAMS,10
SR0_P0SITI0N,2
SR0_RAW_CTRL,4
SR0_RAW_SENS,1
SR0_RC_CHAN,2
SR1_EXT_STAT,4
SR1_EXTRA1,4
SR1_EXTRA2,4
SR1_EXTRA3,4
SR1_PARAMS,10
SR1_P0SITI0N,4
SR1_RAW_CTRL,4
SR1_RAW_SENS,4
SR1_RC_CHAN,4
SR2_EXT_STAT,1
SR2_EXTRA1,1
SR2_EXTRA2,1
SR2_EXTRA3,1
SR2_PARAMS,10
SR2_P0SITI0N,1
SR2_RAW_CTRL,1
SR2_RAW_SENS,1
SR2_RC_CHAN,1
SR3_EXT_STAT,1
SR3_EXTRA1,1
SR3_EXTRA2,1
SR3_EXTRA3,1
SR3_PARAMS,10
SR3_P0SITI0N,1
SR3_RAW_CTRL,1
SR3_RAW_SENS,1
SR3_RC_CHAN,1
STEER2SRV_D,0.2
STEER2SRV_FF,0
STEER2SRV_I,0.1
STEER2SRV_IMAX,5000
STEER2SRV_MINSPD,1
STEER2SRV_P,1.5
STEER2SRV_TC0NST,0.75
SYS_NUM_RESETS,213
SYSID_MYGCS,255
SYSID_SW_TYPE,20
SYSID_THISMAV,1
TELEM_DELAY,0
THR_MAX,100
THR_MIN,0
THR_SLEWRATE,100
TURN_MAX_G,1.1
WP_RADIUS,2
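The parameter dumps in Appendices G through I use the comma-separated NAME,VALUE format shown above, with comment lines beginning with '#'. When comparing configurations between vehicles, a small helper such as the hedged sketch below can load a dump into a dictionary for diffing; the file names are placeholders only.

def load_params(path):
    """Parse a NAME,VALUE parameter dump; lines starting with '#' are comments."""
    params = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith('#'):
                continue
            name, value = line.split(',', 1)
            params[name.strip()] = float(value)
    return params

# Example: report parameters that differ between two vehicles (paths are placeholders)
ugv = load_params('emaxx_ugv.param')
uav = load_params('x8_uav.param')
for name in sorted(set(ugv) & set(uav)):
    if ugv[name] != uav[name]:
        print name, ugv[name], uav[name]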
H Appendix H: X8 Multi-Rotor UAS Pixhawk Parameters
#NOTE: 10/15/2015 3:47:42 PM
ACR0_BAL_PITCH,1
ACR0_BAL_R0LL,1
ACR0_EXP0,0.3
ACR0_RP_P,4.5
ACR0_TRAINER,2
ACR0_YAW_P,3
AHRS_C0MP_BETA,0.1
AHRS_EKF_USE,0
AHRS_GPS_GAIN,1
AHRS_GPS_MINSATS,6
AHRS_GPS_USE,1
AHRS_ORIENTATION,0
AHRS_RP_P,0.1
AHRS_TRIM_X,-0.003521048
AHRS_TRIM_Y,0.01564041
AHRS_TRIM_Z,0
AHRS_WIND_MAX,0
AHRS_YAW_P,0.1
ANGLE_MAX,4500
ARMING_CHECK,1
ATC_ACCEL_RP_MAX,72000
ATC_ACCEL_Y_MAX,18000
ATC_RATE_FF_ENAB,1
ATC_RATE_RP_MAX,9000
ATC_RATE_Y_MAX,9000
ATC_SLEW_YAW,1000
BAROGLTCH_ACCEL,1500
BAROGLTCH_DIST,500
BAROGLTCH_ENABLE,1
BATT_AMP_OFFSET,0
BATT_AMP_PERVOLT,17
BATT_CAPACITY,6000
BATT_CURR_PIN,3
BATT_M0NIT0R,4
BATT_V0LT_MULT,10.1
BATT_V0LT_PIN,2
BATT_V0LT2_MULT,1
BATT_V0LT2_PIN,-1
BRD_PWM_C0UNT,4
BRD_SAFETYENABLE,1
BRD_SER1_RTSCTS,2
BRD_SER2_RTSCTS,2
CAM_DURATI0N,10
CAM_SERV0_0FF,1100
CAM_SERV0_0N,1300
CAM_TRIGG_DIST,0
CAM_TRIGG_TYPE,0
CH7_0PT,18
CH8_0PT,0
CHUTE_ALT_MIN,10
CHUTE_ENABLED,0
CHUTE_SERV0_0FF,1100
CHUTE_SERV0_0N,1300
CHUTE_TYPE,0
CIRCLE_RADIUS,1000
CIRCLE_RATE,20
C0MPASS_AUT0DEC,1
C0MPASS_DEC,0
C0MPASS_DEV_ID,73225
C0MPASS_DEV_ID2,131594
C0MPASS_DEV_ID3,0
COMPASS_EXTERNAL,1
C0MPASS_LEARN,0
C0MPASS_M0T_X,0
C0MPASS_M0T_Y,0
C0MPASS_M0T_Z,0
C0MPASS_M0T2_X,0
C0MPASS_M0T2_Y,0
C0MPASS_M0T2_Z,0
C0MPASS_M0T3_X,0
C0MPASS_M0T3_Y,0
C0MPASS_M0T3_Z,0
C0MPASS_M0TCT,0
C0MPASS_0FS_X,-91
C0MPASS_0FS_Y,-28
C0MPASS_0FS_Z,-148
C0MPASS_0FS2_X,-59
C0MPASS_0FS2_Y,111
C0MPASS_0FS2_Z,219
C0MPASS_0FS3_X,0
C0MPASS_0FS3_Y,0
C0MPASS_0FS3_Z,0
C0MPASS_0RIENT,0
COMPASS_PRIMARY,0
C0MPASS_USE,1
DCM_CHECK_THRESH,0.8
EKF_ABIAS_PNOISE,0.0001
EKF_ACC_PN0ISE,0.25
EKF_ALT_N0ISE,1
EKF_CHECK_THRESH,0.8
EKF_EAS_GATE,10
EKF_EAS_N0ISE,1.4
EKF_GBIAS_PNOISE,1E-06
EKF_GLITCH_ACCEL,150
EKF_GLITCH_RAD,15
EKF_GPS_TYPE,0
EKF_GYR0_PN0ISE,0.015
EKF_HGT_GATE,10
EKF_MAG_CAL,1
EKF_MAG_GATE,3
EKF_MAG_N0ISE,0.05
EKF_MAGB_PNOISE,0.0003
EKF_MAGE_PNOISE,0.0003
EKF_P0S_DELAY,220
EKF_P0S_GATE,10
EKF_P0SNE_N0ISE,0.5
EKF_VEL_DELAY,220
EKF_VEL_GATE,6
EKF_VELD_N0ISE,0.7
EKF_VELNE_NOISE,0.5
EKF_WIND_PNOISE,0.1
EKF_WIND_PSCALE,0.5
ESC,0
FENCE_ACTI0N,1
FENCE_ALT_MAX,100
FENCE_ENABLE,0
FENCE_MARGIN,2
FENCE_RADIUS,300
FENCE_TYPE,3
FL0W_ENABLE,0
FLTM0DE1,3
FLTM0DE2,16
FLTM0DE3,3
FLTM0DE4,0
FLTM0DE5,6
FLTM0DE6,2
FRAME,1
FS_BATT_ENABLE,1
FS_BATT_MAH,20
FS_BATT_VOLTAGE,14
FS_GCS_ENABLE,1
FS_GPS_ENABLE,2
FS_THR_ENABLE,1
FS_THR_VALUE,975
GND_ABS_PRESS,98177.3
GND_ALT_OFFSET,0
GND_TEMP,37.70488
GPS_AUTO_SWITCH,1
GPS_HDOP_GOOD,200
GPS_MIN_DGPS,100
GPS_NAVFILTER,8
GPS_TYPE,1
GPS_TYPE2,0
GPSGLITCH_ACCEL,1000
GPSGLITCH_ENABLE,1
GPSGLITCH_RADIUS,200
HLD_LAT_P,1
INAV_TC_XY,2.5
INAV_TC_Z,5
INS_ACC20FFS_X,1.147846
INS_ACC20FFS_Y,1.140518
INS_ACC20FFS_Z,1.171686
INS_ACC2SCAL_X,1.040946
INS_ACC2SCAL_Y,0.9906676
INS_ACC2SCAL_Z,0.9855135
INS_ACC30FFS_X,0
INS_ACC30FFS_Y,0
INS_ACC30FFS_Z,0
INS_ACC3SCAL_X,0
INS_ACC3SCAL_Y,0
INS_ACC3SCAL_Z,0
INS_ACCOFFS_X,-0.004180954
INS_ACCOFFS_Y,-0.1249053
INS_ACCOFFS_Z,-0.1089551
INS_ACCSCAL_X,1.004815
INS_ACCSCAL_Y,0.9986967
INS_ACCSCAL_Z,0.9888687
INS_GYR20FFS_X,-0.003148957
INS_GYR20FFS_Y,0.0173818
INS_GYR20FFS_Z,-0.009323909
INS_GYR30FFS_X,0
INS_GYR30FFS_Y,0
INS_GYR30FFS_Z,0
INS_GYROFFS_X,-0.0135583
INS_GYROFFS_Y,0.03729856
INS_GYROFFS_Z,0.008076278
INS_MPU6K_FILTER,0
INS_PRODUCT_ID,0
LAND_REPOSITION,1
LAND_SPEED,50
LOG_BITMASK,26622
LOITER_LAT_D,0
LOITER_LAT_I,0.5
LOITER_LAT_IMAX,1000
L0ITER_LAT_P,1
L0ITER_L0N_D,0
L0ITER_L0N_I,0.5
L0ITER_L0N_IMAX,1000
L0ITER_L0N_P,1
MAG_ENABLE,1
MIS_RESTART,0
MIS_T0TAL,2
MNT_ANGMAX_PAN,4500
MNT_ANGMAX_R0L,4500
MNT_ANGMAX_TIL,0
MNT_ANGMIN_PAN,-4500
MNT_ANGMIN_R0L,-4500
MNT_ANGMIN_TIL,-9000
MNT_C0NTR0L_X,0
MNT_C0NTR0L_Y,0
MNT_C0NTR0L_Z,0
MNT_JSTICK_SPD,0
MNT_M0DE,3
MNT_NEUTRAL_X,0
MNT_NEUTRAL_Y,0
MNT_NEUTRAL_Z,0
MNT_RC_IN_PAN,0
MNT_RC_IN_ROLL,0
MNT_RC_IN_TILT,6
MNT_RETRACT_X,0
MNT_RETRACT_Y,0
MNT_RETRACT_Z,0
MNT_STAB_PAN,0
MNT_STAB_R0LL,0
MNT_STAB_TILT,0
M0T_SPIN_ARMED,70
MOT_TCRV_ENABLE,1
MOT_TCRV_MAXPCT,93
MOT_TCRV_MIDPCT,52
0F_PIT_D,0.12
0F_PIT_I,0.5
0F_PIT_IMAX,100
0F_PIT_P,2.5
0F_RLL_D,0.12
0F_RLL_I,0.5
0F_RLL_IMAX,100
0F_RLL_P,2.5
PHLD_BRAKE_ANGLE,3000
PHLD_BRAKE_RATE,8
PIL0T_ACCEL_Z,250
PIL0T_VELZ_MAX,250
P0SC0N_THR_H0VER,414
RALLY_LIMIT_KM,0.3
RALLY_TOTAL,0
RATE_PIT_D,0.005
RATE_PIT_I,0.1999
RATE_PIT_IMAX,5000
RATE_PIT_P,0.1999
RATE_RLL_D,0.005
RATE_RLL_I,0.1999
RATE_RLL_IMAX,5000
RATE_RLL_P,0.1999
RATE_YAW_D,0.005
RATE_YAW_I,0.02
RATE_YAW_IMAX,1000
RATE_YAW_P,0.16
RC_FEEL_RP,15
RC_SPEED,490
RC1_DZ,30
RC1_MAX,1931
RC1_MIN,1080
RC1_REV,1
RC1_TRIM,1506
RC10_DZ,0
RC10_FUNCTI0N,0
RC10_MAX,1900
RC10_MIN,1100
RC10_REV,1
RC10_TRIM,1500
RC11_DZ,0
RC11_FUNCTI0N,0
RC11_MAX,1900
RC11_MIN,1100
RC11_REV,1
RC11_TRIM,1500
RC12_DZ,0
RC12_FUNCTI0N,0
RC12_MAX,1900
RC12_MIN,1100
RC12_REV,1
RC12_TRIM,1500
RC13_DZ,0
RC13_FUNCTI0N,0
RC13_MAX,1900
RC13_MIN,1100
RC13_REV,1
RC13_TRIM,1500
RC14_DZ,0
RC14_FUNCTI0N,0
RC14_MAX,1900
RC14_MIN,1100
RC14_REV,1
RC14_TRIM,1500
RC2_DZ,30
RC2_MAX,1933
RC2_MIN,1082
RC2_REV,1
RC2_TRIM,1508
RC3_DZ,30
RC3_MAX,1851
RC3_MIN,1169
RC3_REV,1
RC3_TRIM,1171
RC4_DZ,40
RC4_MAX,1935
RC4_MIN,1084
RC4_REV,1
RC4_TRIM,1509
RC5_DZ,0
RC5_FUNCTI0N,0
RC5_MAX,1892
RC5_MIN,1196
RC5_REV,1
RC5_TRIM,1500
RC6_DZ,0
RC6_FUNCTI0N,0
RC6_MAX,1851
RC6_MIN,1169
RC6_REV,1
RC6_TRIM,1568
RC7_DZ,0
RC7_FUNCTI0N,0
RC7_MAX,1510
RC7_MIN,1084
RC7_REV,1
RC7_TRIM,1510
RC8_DZ,0
RC8_FUNCTI0N,0
RC8_MAX,1900
RC8_MIN,1100
RC8_REV,1
RC8_TRIM,1509
RC9_DZ,0
RC9_FUNCTI0N,7
RC9_MAX,1520
RC9_MIN,1000
RC9_REV,1
RC9_TRIM,1500
RCMAP_PITCH,2
RCMAP_R0LL,1
RCMAP_THROTTLE,3
RCMAP_YAW,4
RELAY_PIN,54
RELAY_PIN2,-1
RNGFND_FUNCTION,0
RNGFND_GAIN,0.8
RNGFND_MAX_CM,700
RNGFND_MIN_CM,20
RNGFND_0FFSET,0
RNGFND_PIN,-1
RNGFND_RMETRIC,1
RNGFND_SCALING,3
RNGFND_SETTLE_MS,0
RNGFND_ST0P_PIN,-1
RNGFND_TYPE,0
RSSI_PIN,-1
RSSI_RANGE,5
RTL_ALT,2000
RTL_ALT_FINAL,0
RTL_L0IT_TIME,5000
SCHED_DEBUG,0
SERIAL0_BAUD,115
SERIAL1_BAUD,57
SERIAL2_BAUD,57
SERIAL2_PR0T0C0L,2
SIMPLE,0
SRO_EXT_STAT,0
SR0_EXTRA1,0
SR0_EXTRA2,0
SR0_EXTRA3,0
SRO_PARAMS,10
SR0_P0SITI0N,0
SR0_RAW_CTRL,0
SR0_RAW_SENS,0
SR0_RC_CHAN,0
SR1_EXT_STAT,0
SR1_EXTRA1,0
SR1_EXTRA2,0
SR1_EXTRA3,0
SR1_PARAMS,0
SR1_P0SITI0N,0
SR1_RAW_CTRL,0
SR1_RAW_SENS,0
SR1_RC_CHAN,0
SR2_EXT_STAT,0
SR2_EXTRA1,0
SR2_EXTRA2,0
SR2_EXTRA3,0
SR2_PARAMS,0
SR2_P0SITI0N,0
SR2_RAW_CTRL,0
SR2_RAW_SENS,0
SR2_RC_CHAN,0
STB_PIT_P,4
STB_RLL_P,4
STB_YAW_P,2.5
SUPER_SIMPLE,0
SYSID_MYGCS,255
SYSID_SW_MREV,120
SYSID_SW_TYPE,10
SYSID_THISMAV,1
TELEM_DELAY,0
TERRAIN_ENABLE,1
TERRAIN_SPACING,100
THR_ACCEL_D,0
THR_ACCEL_I,1
THR_ACCEL_IMAX,800
THR_ACCEL_P,0.5
THR_ALT_P,1
THR_DZ,100
THR_MAX,1000
THR_MID,450
THR_MIN,130
THR_RATE_P,5
TRIM_THR0TTLE,414
TUNE,0
TUNE_HIGH,1000
TUNE_L0W,0
WP_YAW_BEHAVIOR,2
WPNAV_ACCEL,250
WPNAV_ACCEL_Z,100
WPNAV_L0IT_JERK,1000
WPNAV_LOIT_SPEED,200
WPNAV_RADIUS,100
WPNAV_SPEED,200
WPNAV_SPEED_DN,200
WPNAV_SPEED_UP,200
I Appendix I: Super Sky UAS Pixhawk Parameters
#NOTE: 10/15/2015 12:11:10 PM Plane: Skywalker
ACR0_L0CKING,0
ACRO_PITCH_RATE,180
ACR0_R0LL_RATE,180
AFS_AMSL_ERR_GPS,-1
AFS_AMSL_LIMIT,0
AFS_ENABLE,0
AFS_HB_PIN,-1
AFS_MAN_PIN,-1
AFS_MAX_C0M_L0SS,0
AFS_MAX_GPS_LOSS,0
AFS_QNH_PRESSURE,0
AFS_RC_FAIL_MS,0
AFS_TERM_ACTION,0
AFS_TERM_PIN,-1
AFS_TERMINATE,0
AFS_WP_COMMS,0
AFS_WP_GPS_LOSS,0
AHRS_COMP_BETA,0.1
AHRS_EKF_USE,0
AHRS_GPS_GAIN,1
AHRS_GPS_MINSATS,6
AHRS_GPS_USE,1
AHRS_0RIENTATI0N,0
AHRS_RP_P,0.2
AHRS_TRIM_X,0
AHRS_TRIM_Y,0
AHRS_TRIM_Z,0
AHRS_WIND_MAX,0
AHRS_YAW_P,0.2
ALT_CTRL_ALG,0
ALT_H0LD_FBWCM,0
ALT_H0LD_RTL,10000
ALT_MIX,1
ALT_0FFSET,0
ARMING_CHECK,1
ARMING_DIS_RUD,0
ARMING_REQUIRE,0
ARSPD_AUT0CAL,0
ARSPD_ENABLE,1
ARSPD_FBW_MAX,22
ARSPD_FBW_MIN,9
ARSPD_0FFSET,2.130668
ARSPD_PIN,15
ARSPD_RATI0,1.9936
ARSPD_SKIP_CAL,0
ARSPD_TUBE_ORDER,2
ARSPD_USE,0
AUT0_FBW_STEER,0
AUTOTUNE_LEVEL,6
BATT_AMP_OFFSET,0
BATT_AMP_PERVOLT,17
BATT_CAPACITY,3300
BATT_CURR_PIN,3
BATT_M0NIT0R,0
BATT_V0LT_MULT,10.1
BATT_V0LT_PIN,2
BATT2_AMP_0FFSET,0
BATT2_AMP_PERV0L,17
BATT2_CAPACITY,3300
BATT2_CURR_PIN,3
BATT2_M0NIT0R,0
BATT2_V0LT_MULT,10.1
BATT2_V0LT_PIN,2
BRD_PWM_C0UNT,4
BRD_SAFETYENABLE,1
BRD_SER1_RTSCTS,2
BRD_SER2_RTSCTS,2
CAM_DURATI0N,10
CAM_SERV0_0FF,1100
CAM_SERV0_0N,1300
CAM_TRIGG_DIST,0
CAM_TRIGG_TYPE,0
C0MPASS_AUT0DEC,1
C0MPASS_DEC,0
C0MPASS_DEV_ID,73225
C0MPASS_DEV_ID2,131594
C0MPASS_DEV_ID3,0
C0MPASS_EXTERN2,0
C0MPASS_EXTERN3,0
COMPASS_EXTERNAL,1
C0MPASS_LEARN,1
C0MPASS_M0T_X,0
C0MPASS_M0T_Y,0
C0MPASS_M0T_Z,0
C0MPASS_M0T2_X,0
C0MPASS_M0T2_Y,0
C0MPASS_M0T2_Z,0
C0MPASS_M0T3_X,0
C0MPASS_M0T3_Y,0
C0MPASS_M0T3_Z,0
C0MPASS_M0TCT,0
C0MPASS_0FS_X,-0.3281624
C0MPASS_0FS_Y,5.074333
C0MPASS_0FS_Z,-1.829881
C0MPASS_0FS2_X,0.7644874
C0MPASS_0FS2_Y,10.79797
C0MPASS_0FS2_Z,-2.574864
C0MPASS_0FS3_X,0
C0MPASS_0FS3_Y,0
C0MPASS_0FS3_Z,0
COMPASS_ORIENT,0
C0MPASS_0RIENT2,0
C0MPASS_0RIENT3,0
COMPASS_PRIMARY,0
COMPASS_USE,1
COMPASS_USE2,1
C0MPASS_USE3,1
EKF_ABIAS_PNOISE,0.0002
EKF_ACC_PN0ISE,0.5
EKF_ALT_N0ISE,0.5
EKF_EAS_GATE,10
EKF_EAS_N0ISE,1.4
EKF_FALLBACK,1
EKF_FL0W_DELAY,25
EKF_FL0W_GATE,3
EKF_FL0W_N0ISE,0.3
EKF_GBIAS_PNOISE,1E-06
EKF_GLITCH_ACCEL,150
EKF_GLITCH_RAD,20
EKF_GND_GRADIENT,2
EKF_GPS_TYPE,0
EKF_GYR0_PN0ISE,0.015
EKF_HGT_GATE,20
EKF_MAG_CAL,0
EKF_MAG_GATE,3
EKF_MAG_N0ISE,0.05
EKF_MAGB_PNOISE,0.0003
EKF_MAGE_PNOISE,0.0003
EKF_MAX_FL0W,2.5
EKF_P0S_DELAY,220
EKF_P0S_GATE,30
EKF_P0SNE_N0ISE,0.5
EKF_RNG_GATE,5
EKF_VEL_DELAY,220
EKF_VEL_GATE,6
EKF_VELD_NOISE,0.5
EKF_VELNE_NOISE,0.3
EKF_WIND_PNOISE,0.1
EKF_WIND_PSCALE,0.5
ELEV0N_CH1_REV,0
ELEV0N_CH2_REV,0
ELEV0N_MIXING,0
ELEV0N_0UTPUT,0
ELEVON_REVERSE,0
FBWA_TDRAG_CHAN,0
FBWB_CLIMB_RATE,2
FBWB_ELEV_REV,0
FENCE_ACTI0N,0
FENCE_AUTOENABLE,0
FENCE_CHANNEL,0
FENCE_MAXALT,0
FENCE_MINALT,0
FENCE_RET_RALLY,0
FENCE_RETALT,0
FENCE_T0TAL,0
FLAP_1_PERCNT,0
FLAP_1_SPEED,0
FLAP_2_PERCNT,0
FLAP_2_SPEED,0
FLAP_IN_CHANNEL,0
FLAP_SLEWRATE,75
FLAPER0N_0UTPUT,0
FL0W_ENABLE,0
FL0W_FXSCALER,0
FL0W_FYSCALER,0
FLTM0DE_CH,8
FLTM0DE1,10
FLTM0DE2,2
FLTM0DE3,2
FLTM0DE4,2
FLTM0DE5,2
FLTM0DE6,0
FORMAT_VERSION,13
FS_BATT_MAH,0
FS_BATT_VOLTAGE,0
FS_GCS_ENABL,0
FS_LONG_ACTN,0
FS_LONG_TIMEOUT,20
FS_SHORT_ACTN,0
FS_SHORT_TIMEOUT,1.5
GLIDE_SLOPE_MIN,15
GND_ABS_PRESS,98518.57
GND_ALT_OFFSET,0
GND_TEMP,25
GPS_AUTO_SWITCH,1
GPS_MIN_DGPS,100
GPS_MIN_ELEV,-100
GPS_NAVFILTER,8
GPS_SBAS_M0DE,2
GPS_TYPE,1
GPS_TYPE2,0
GROUND_STEER_ALT,0
GROUND_STEER_DPS,90
INS_ACC20FFS_X,1.483593
INS_ACC20FFS_Y,2.251354
INS_ACC20FFS_Z,2.305809
INS_ACC2SCAL_X,1
INS_ACC2SCAL_Y,1
INS_ACC2SCAL_Z,1
INS_ACC30FFS_X,0
INS_ACC30FFS_Y,0
INS_ACC30FFS_Z,0
INS_ACC3SCAL_X,0
INS_ACC3SCAL_Y,0
INS_ACC3SCAL_Z,0
INS_ACC0FFS_X,0.4568798
INS_ACC0FFS_Y,1.079645
INS_ACC0FFS_Z,-0.103158
INS_ACCSCAL_X,1
INS_ACCSCAL_Y,1
INS_ACCSCAL_Z,1
INS_GYR20FFS_X,0.03132736
INS_GYR20FFS_Y,0.04376069
INS_GYR20FFS_Z,0.004333549
INS_GYR30FFS_X,0
INS_GYR30FFS_Y,0
INS_GYR30FFS_Z,0
INS_GYR0FFS_X,0.008482047
INS_GYR0FFS_Y,0.02937811
INS_GYR0FFS_Z,0.0008407365
INS_MPU6K_FILTER,0
INS_PRODUCT_ID,5
INVERTEDFLT_CH,0
KFF_RDDRMIX,0.5
KFF_THR2PTCH,0
LAND_FLAP_PERCNT,0
LAND_FLARE_ALT,3
LAND_FLARE_SEC,2
LAND_PITCH_CD,0
LEVEL_ROLL_LIMIT,5
LIM_PITCH_MAX,2000
LIM_PITCH_MIN,-2500
LIM_R0LL_CD,4500
L0G_BITMASK,65535
MAG_ENABLE,1
MIN_GNDSPD_CM,0
MIS_RESTART,0
MIS_T0TAL,0
MIXING_GAIN,0.5
MNT_ANGMAX_PAN,4500
MNT_ANGMAX_R0L,4500
MNT_ANGMAX_TIL,4500
MNT_ANGMIN_PAN,-4500
MNT_ANGMIN_R0L,-4500
MNT_ANGMIN_TIL,-4500
MNT_C0NTR0L_X,0
MNT_C0NTR0L_Y,0
MNT_C0NTR0L_Z,0
MNT_JSTICK_SPD,0
MNT_LEAD_PTCH,0
MNT_LEAD_RLL,0
MNT_M0DE,0
MNT_NEUTRAL_X,0
MNT_NEUTRAL_Y,0
MNT_NEUTRAL_Z,0
MNT_RC_IN_PAN,0
MNT_RC_IN_R0LL,0
MNT_RC_IN_TILT,0
MNT_RETRACT_X,0
MNT_RETRACT_Y,0
MNT_RETRACT_Z,0
MNT_STAB_PAN,0
MNT_STAB_R0LL,0
MNT_STAB_TILT,0
NAV_C0NTR0LLER,1
NAVL1_DAMPING,0.75
NAVL1_PERI0D,18
0VERRIDE_CHAN,0
PTCH2SRV_D,0.2
PTCH2SRV_I,0.1
PTCH2SRV_IMAX,1500
PTCH2SRV_P,2.25
PTCH2SRV_RLL,1
PTCH2SRV_RMAX_DN,0
PTCH2SRV_RMAX_UP,0
PTCH2SRV_TC0NST,0.5
RALLY_LIMIT_KM,5
RALLY_TOTAL,0
RC1_DZ,30
RC1_MAX,1900
RC1_MIN,1100
RC1_REV,1
RC1_TRIM,1500
RC10_DZ,0
RC10_FUNCTI0N,0
RC10_MAX,1900
RC10_MIN,1100
RC10_REV,1
RC10_TRIM,1500
RC11_DZ,0
RC11_FUNCTION,0
RC11_MAX,1900
RC11_MIN,1100
RC11_REV,1
RC11_TRIM,1500
RC12_DZ,0
RC12_FUNCTI0N,0
RC12_MAX,1900
RC12_MIN,1100
RC12_REV,1
RC12_TRIM,1500
RC13_DZ,0
RC13_FUNCTI0N,0
RC13_MAX,1900
RC13_MIN,1100
RC13_REV,1
RC13_TRIM,1500
RC14_DZ,0
RC14_FUNCTI0N,0
RC14_MAX,1900
RC14_MIN,1100
RC14_REV,1
RC14_TRIM,1500
RC2_DZ,30
RC2_MAX,1900
RC2_MIN,1100
RC2_REV,1
RC2_TRIM,1500
RC3_DZ,30
RC3_MAX,1900
RC3_MIN,1100
RC3_REV,1
RC3_TRIM,1500
RC4_DZ,30
RC4_MAX,1900
RC4_MIN,1100
RC4_REV,1
RC4_TRIM,1500
RC5_DZ,0
RC5_FUNCTI0N,0
RC5_MAX,1900
RC5_MIN,1100
RC5_REV,1
RC5_TRIM,1500
RC6_DZ,0
RC6_FUNCTI0N,0
RC6_MAX,1900
RC6_MIN,1100
RC6_REV,1
RC6_TRIM,1500
RC7_DZ,0
RC7_FUNCTI0N,1
RC7_MAX,1900
RC7_MIN,1100
RC7_REV,1
RC7_TRIM,1500
RC8_DZ,0
RC8_FUNCTI0N,0
RC8_MAX,1900
RC8_MIN,1100
RC8_REV,1
RC8_TRIM,1500
RC9_DZ,0
RC9_FUNCTI0N,0
RC9_MAX,1900
RC9_MIN,1100
RC9_REV,1
RC9_TRIM,1500
RCMAP_PITCH,2
RCMAP_R0LL,1
RCMAP_THROTTLE,3
RCMAP_YAW,4
RELAY_DEFAULT,0
RELAY_PIN,54
RELAY_PIN2,55
RELAY_PIN3,-1
RELAY_PIN4,-1
RLL2SRV_D,0.07
RLL2SRV_I,0.2
RLL2SRV_IMAX,1500
RLL2SRV_P,2.5
RLL2SRV_RMAX,0
RLL2SRV_TC0NST,0.5
RNGFND_FUNCTION,0
RNGFND_LANDING,0
RNGFND_MAX_CM,700
RNGFND_MIN_CM,20
RNGFND_0FFSET,0
RNGFND_PIN,-1
RNGFND_RMETRIC,1
RNGFND_SCALING,3
RNGFND_SETTLE,0
RNGFND_STOP_PIN,-1
RNGFND_TYPE,0
RNGFND2_FUNCTION,0
RNGFND2_MAX_CM,700
RNGFND2_MIN_CM,20
RNGFND2_0FFSET,0
RNGFND2_PIN,-1
RNGFND2_RMETRIC,1
RNGFND2_SCALING,3
RNGFND2_SETTLE,0
RNGFND2_ST0P_PIN,-1
RNGFND2_TYPE,0
RSSI_PIN,-1
RSSI_RANGE,5
RST_MISSI0N_CH,0
RST_SWITCH_CH,0
RTL_AUT0LAND,0
SCALING_SPEED,15
SCHED_DEBUG,0
SERIAL0_BAUD,115
SERIAL1_BAUD,57
SERIAL2_BAUD,57
SERIAL2_PR0T0C0L,1
SKIP_GYR0_CAL,0
SR0_EXT_STAT,2
SR0_EXTRA1,10
SR0_EXTRA2,10
SR0_EXTRA3,2
SR0_PARAMS,10
SR0_P0SITI0N,3
SR0_RAW_CTRL,1
SR0_RAW_SENS,2
SR0_RC_CHAN,2
SR1_EXT_STAT,1
SR1_EXTRA1,1
SR1_EXTRA2,1
SR1_EXTRA3,1
SR1_PARAMS,10
SR1_P0SITI0N,1
SR1_RAW_CTRL,1
SR1_RAW_SENS,1
SR1_RC_CHAN,1
SR2_EXT_STAT,1
SR2_EXTRA1,1
SR2_EXTRA2,1
SR2_EXTRA3,1
SR2_PARAMS,10
SR2_P0SITI0N,1
SR2_RAW_CTRL,1
SR2_RAW_SENS,1
SR2_RC_CHAN,1
STAB_PITCH_DOWN,2
STALL_PREVENTION,1
STEER2SRV_D,0.005
STEER2SRV_I,0.2
STEER2SRV_IMAX,1500
STEER2SRV_MINSPD,1
STEER2SRV_P,1.8
STEER2SRV_TC0NST,0.75
STICK_MIXING,1
SYS_NUM_RESETS,3
SYSID_MYGCS,255
SYSID_SW_TYPE,0
SYSID_THISMAV,1
TECS_CLMB_MAX,5
TECS_HGT_OMEGA,3
TECS_INTEG_GAIN,0.1
TECS_LAND_ARSPD,-1
TECS_LAND_DAMP,0.5
TECS_LAND_SINK,0.25
TECS_LAND_SPDWGT,1
TECS_LAND_TCONST,2
TECS_LAND_THR,-1
TECS_PITCH_MAX,0
TECS_PITCH_MIN,0
TECS_PTCH_DAMP,0
TECS_RLL2THR,10
TECS_SINK_MAX,5
TECS_SINK_MIN,2
TECS_SPD_0MEGA,2
TECS_SPDWEIGHT,1
TECS_THR_DAMP,0.5
TECS_TIME_CONST,5
TECS_VERT_ACC,7
TELEM_DELAY,0
TERRAIN_ENABLE,1
TERRAIN_F0LL0W,0
TERRAIN_L00KAHD,2000
TERRAIN_SPACING,100
THR_FAILSAFE,1
THR_FS_VALUE,950
THR_MAX,75
THR_MIN,0
THR_PASS_STAB,0
THR_SLEWRATE,100
THR_SUPP_MAN,0
THROTTLE_NUDGE,1
TKOFF_FLAP_PCNT,0
TK0FF_R0TATE_SPD,0
TKOFF_TDRAG_ELEV,0
TK0FF_TDRAG_SPD1,0
TKOFF_THR_DELAY,2
TK0FF_THR_MAX,0
TKOFF_THR_MINACC,0
TKOFF_THR_MINSPD,0
TK0FF_THR_SLEW,0
TRIM_ARSPD_CM,1200
TRIM_AUT0,0
TRIM_PITCH_CD,0
TRIM_THR0TTLE,45
VTAIL_0UTPUT,0
WP_L0ITER_RAD,60
WP_MAX_RADIUS,0
WP_RADIUS,90
YAW2SRV_DAMP,0.1
YAW2SRV_IMAX,1500
YAW2SRV_INT,0
YAW2SRV_RLL,0.25
YAW2SRV_SLIP,0