Research Pipelines
Goal: A research pipeline defines the stages of development at a very high level. Each stage is a series of questions, and each stage's questions build on the questions answered in the previous stage. Each requirement has its own research pipeline.
Requirement: Scan QR code, extract location information and information for the pilot (description), send pilot info to the ground station, send location to ZP. A QR-decoding sketch follows the table below.
Personnel: Wenfei, Adam
Stage | Objectives | Guidance |
---|---|---|
A1: High-level systems architecture | Answer: What modules do we want to use to enable this? What data will be exchanged? | Tech Lead will complete this task for confirmation by project members. |
B1: Module implementation plans (exc. communications modules) | Answer: How do we want to implement the logic in these modules? | High-level plans come from the tech lead, but implementation is up to project members. |
B2: Communications development | See comms reqs. | See comms reqs. |
C1: Implementation | No further research questions. | |
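For reference while planning A1/B1, here is a minimal sketch of the QR-scanning step using OpenCV's built-in `QRCodeDetector`. The camera index is a placeholder, and the payload format (how location and pilot description are packed into the code) is an assumption pending the A1 architecture.

```python
# Minimal QR-decoding sketch using OpenCV's built-in detector.
import cv2

def scan_qr(frame):
    """Return the decoded QR payload from a camera frame, or None."""
    detector = cv2.QRCodeDetector()
    data, points, _ = detector.detectAndDecode(frame)
    return data if points is not None and data else None

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)  # placeholder camera index
    ok, frame = capture.read()
    if ok:
        payload = scan_qr(frame)
        if payload:
            # Downstream split (pilot info -> ground station,
            # location -> ZP) happens in the modules defined in A1.
            print("QR payload:", payload)
    capture.release()
```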
Requirement: Detect intruder, highlight, find location, and send to ground station
Personnel: Rey (Owner), Andrew, Mika, Ankit, Ryan
Stage | Objectives | Guidance |
---|---|---|
A1: Set up testing datasets | Answer: Are publicly available datasets sufficient for our requirements? Do: Populate a WARG dataset either using premade datasets or recorded data. Divide into train, validate, and test datasets (see the dataset-split sketch after the table). | The requirements for the testing datasets are simply that they contain images of humans from above at a sufficient altitude. Here are some to get started. We can combine these datasets, but let's find out if any of these can be used. Stanford Drone Dataset: High enough altitude, but may not be sufficiently diverse or consistently shot from directly above. Best bet. UAV-Human: Possibly too low altitude. UCF-ARG (Aerial Images only): May not be sufficiently diverse, but pretty good otherwise. |
A2: Validate geolocation | Answer: How accurate is the geolocation algorithm? | |
A3: Research FOV polygon generation | Answer: Does the existing geolocation algorithm support this? How can we take a frame and IMU data and find out what geographical area the frame covers? How difficult is this? | The current plan is to take the four corners of the frame and apply the geolocation algorithm to them (see the footprint sketch after the table). Talk w/ Ryan to find out if this is sufficient. If geolocation cannot do this, I can’t find any other methods right now, but this paper is our only guidance. If it proves too difficult, we can skip it. |
A4: Implement FramePreProc module | Do: FramePreProc Module Requirements (everything here) | Already in progress |
A5: Finalize autonomy plan to understand what else target detection will support | Answer: Will we have autonomous tracking for the target? Will we map their path? | For Tech Lead |
B1: Research training requirements | Answer: Can a pretrained model be used for this task (e.g. one trained on ImageNet)? Can transfer learning be applied? | The goal should be to set up models pre-trained on some of these weights and apply our validation methodology to them (see the transfer-learning sketch after the table). |
B2: Research geodatabase solutions | Answer: How should we set up a geodatabase that is in-memory and supports polygon and point queries? | Refer to points 1-4 and the references in this document (see the geodatabase sketch after the table). |
C1: Implement TargetMapping module | Do: Follow this document for features | |
C2: Train model (if ImageNet is not sufficient) | Do: Perform training on data from A1 | |
D1: Validate and deploy model | Answer: How well does our model perform? Do: Deploy model into targetAcquisition (unless we choose to not perform in-air control, in which case doing this on the ground again would make sense) | |
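For A1, a minimal dataset-split sketch. The directory layout, the `*.jpg` glob, and the 70/15/15 ratio are assumptions, not decided values.

```python
# Sketch of the A1 train/validate/test split over a pooled image folder.
import random
import shutil
from pathlib import Path

def split_dataset(source_dir, dest_dir, ratios=(0.7, 0.15, 0.15), seed=42):
    images = sorted(Path(source_dir).glob("*.jpg"))
    random.Random(seed).shuffle(images)  # deterministic shuffle
    n = len(images)
    bounds = [int(n * ratios[0]), int(n * (ratios[0] + ratios[1]))]
    splits = {
        "train": images[:bounds[0]],
        "validate": images[bounds[0]:bounds[1]],
        "test": images[bounds[1]:],
    }
    for name, files in splits.items():
        out = Path(dest_dir) / name
        out.mkdir(parents=True, exist_ok=True)
        for f in files:
            shutil.copy(f, out / f.name)

split_dataset("warg_dataset/raw", "warg_dataset/splits")
```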
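For A3, a sketch of the four-corners plan. `geolocate(pixel, imu)` is a hypothetical stand-in for the existing geolocation algorithm; its real interface is exactly what needs confirming with Ryan.

```python
# Sketch of the A3 plan: run the geolocation algorithm on the four frame
# corners and join the results into a ground-footprint polygon.
from shapely.geometry import Polygon

def frame_footprint(width, height, imu_data, geolocate):
    corners = [(0, 0), (width, 0), (width, height), (0, height)]
    # Each corner maps to a (latitude, longitude) estimate on the ground.
    ground_points = [geolocate(corner, imu_data) for corner in corners]
    return Polygon(ground_points)
```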
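For B1, a sketch of the freeze-backbone/replace-head transfer-learning pattern. ResNet-18 with torchvision's ImageNet weights is an arbitrary example, and the classification head is shown only to illustrate the pattern; whether a detection architecture can be handled the same way is the open research question.

```python
# Sketch of B1: reuse ImageNet weights and fine-tune only a new head.
import torch.nn as nn
from torchvision import models

def build_transfer_model(num_classes=2):
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in model.parameters():
        param.requires_grad = False  # freeze the pretrained backbone
    # Replace the classifier head; only these weights will train.
    model.fc = nn.Linear(model.fc.in_features, num_classes)
    return model
```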
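For B2, one candidate shape of an in-memory geo index: shapely's STRtree (shapely 2.x API, where `query()` returns indices). The sample coordinates are placeholders; whether this satisfies the real query load is the B2 question.

```python
# Sketch of an in-memory geo index supporting point and polygon queries.
from shapely.geometry import Point, Polygon
from shapely.strtree import STRtree

detections = [Point(-80.54, 43.47), Point(-80.55, 43.48)]  # sample points
tree = STRtree(detections)

# Polygon query: which stored detections fall inside a frame footprint?
footprint = Polygon([(-80.56, 43.46), (-80.53, 43.46),
                     (-80.53, 43.49), (-80.56, 43.49)])
indices = tree.query(footprint)  # candidate hits by bounding box
hits = [detections[i] for i in indices
        if footprint.contains(detections[i])]  # exact containment check
print(len(hits), "detections inside the footprint")
```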
Requirement: Ground communication needs to be supported for video and data. Video should be streamed down as input to the desktop, and data should be sent down to the desktop.
Personnel: Lucas, Golden, Gary, Justin
Stage | Objectives | Guidance |
---|---|---|
A1: Video protocol validations | Answer: Is the OpenHD protocol capable of streaming video to the PC from either the camera directly or the Jetson (both are ideal)? How does the analog solution compare? Which should we use? | Next step is to plan how we want to validate OpenHD with a setup (see the video-receive sketch after the table). |
A2: PyXbee overview | Answer: How does the pyxbee library work? How can we implement it? | Get started with XBee Python library — Digi XBee Python library 1.5.0 documentation (see the XBee sketch after the table) |
A3: Scope ground station changes | Answer: What changes are we going to have to make to the ground station to support the target highlighting features? | |
B1: Implementation | | |
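For A1, a sketch of a desktop-side receive test, assuming OpenHD delivers an RTP/H.264 stream over UDP (port 5600 here is an assumption about our setup) and that OpenCV is built with GStreamer support.

```python
# Sketch for validating the video downlink on the desktop:
# read the stream with OpenCV's GStreamer backend.
import cv2

PIPELINE = (
    'udpsrc port=5600 caps="application/x-rtp, media=video, '
    'encoding-name=H264" ! rtph264depay ! avdec_h264 ! '
    'videoconvert ! appsink'
)

capture = cv2.VideoCapture(PIPELINE, cv2.CAP_GSTREAMER)
while capture.isOpened():
    ok, frame = capture.read()
    if not ok:
        break
    cv2.imshow("downlink", frame)
    if cv2.waitKey(1) == ord("q"):
        break
capture.release()
cv2.destroyAllWindows()
```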
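For A2, a minimal send/receive sketch against Digi's XBee Python library; the serial port, baud rate, and payload are placeholders for our hardware.

```python
# Minimal send/receive sketch with Digi's XBee Python library (digi-xbee).
from digi.xbee.devices import XBeeDevice

def main():
    xbee = XBeeDevice("/dev/ttyUSB0", 9600)  # placeholder port/baud
    xbee.open()
    try:
        # Print anything received from the aircraft-side radio.
        def on_data(message):
            print(message.remote_device.get_64bit_addr(),
                  message.data.decode(errors="replace"))
        xbee.add_data_received_callback(on_data)

        # Broadcast a test payload to every radio on the network.
        xbee.send_data_broadcast("ground station test")
        input("Listening; press Enter to stop.\n")
    finally:
        xbee.close()

if __name__ == "__main__":
    main()
```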