I am a Ph.D. candidate in Electrical and Computer Engineering in the joint doctoral program between the University of California San Diego and San Diego State University, advised by Prof. Junfei Xie and Prof. Nikolay Atanasov. My research interests include Distributed Computing, Multi-Agent Learning, Deep Learning, and Unmanned Aerial Systems.
I received my M.S. degree in Computer Science from Texas A&M University-Corpus Christi in 2019 and my B.S. degree in Prospecting Technology and Engineering from Yangtze University, China, in 2017.
[05.12.2022] I will start a summer internship at The Boeing Company as an AI researcher.
[11.19.2021] I passed the University Qualifying Exam and became a Ph.D. candidate.
[07.10.2021] One paper was accepted to the journal IEEE Transactions on Network Science and Engineering.
On Batch-Processing Based Coded Computing for Heterogeneous Distributed Computing Systems
Baoqian Wang, Shengli Fu
IEEE Transactions on Network Science and Engineering, 2021
Abstract: Coded distributed computing (CDC) can effectively mitigate unexpected latencies for delay-sensitive computation tasks in distributed computing systems. In this paper, we focus on practical computing systems with heterogeneous computing resources and design a novel CDC approach, called batch-processing based coded computing (BPCC), which exploits the fact that every computing node can obtain some coded results before it completes the whole task. To this end, we first describe the main idea of the BPCC framework and then formulate an optimization problem for BPCC that minimizes the task completion time by configuring the computation load.
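The coding idea that coded computing builds on can be illustrated with a toy example (this is not the BPCC algorithm itself): a matrix-vector product is split across three workers using a (3, 2) systematic parity code, so the result can be decoded from any two workers even if the third straggles. The matrices and the choice of straggler below are made up for illustration.

```python
def encode(A1, A2):
    """Return the three coded sub-matrices sent to the workers."""
    parity = [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(A1, A2)]
    return [A1, A2, parity]          # workers 0, 1, and the parity worker

def matvec(M, x):
    """Plain matrix-vector product, computed locally by each worker."""
    return [sum(m * v for m, v in zip(row, x)) for row in M]

def decode(results):
    """Recover A1 @ x and A2 @ x from any two of the three worker results."""
    y1, y2, yp = results.get(0), results.get(1), results.get(2)
    if y1 is None:                   # worker 0 straggled: y1 = parity - y2
        y1 = [p - b for p, b in zip(yp, y2)]
    elif y2 is None:                 # worker 1 straggled: y2 = parity - y1
        y2 = [p - a for p, a in zip(yp, y1)]
    return y1 + y2

A1, A2 = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
x = [1, 1]
coded = encode(A1, A2)
# Pretend worker 1 is a straggler: only workers 0 and 2 return in time.
partial = {0: matvec(coded[0], x), 1: None, 2: matvec(coded[2], x)}
y = decode(partial)                  # identical to the uncoded [A1; A2] @ x
```

Any single straggler can be tolerated; BPCC additionally returns partial batches of coded results before a node finishes, which this sketch omits.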
Coding for Distributed Multi-Agent Reinforcement Learning
Baoqian Wang,
2021 International Conference on Robotics and Automation (ICRA)
Abstract: This paper aims to mitigate straggler effects in synchronous distributed learning for multi-agent reinforcement learning (MARL) problems. Stragglers arise frequently in a distributed learning system due to various system disturbances, such as slow-downs or failures of compute nodes and communication bottlenecks. To resolve this issue, we propose a coded distributed learning framework, which speeds up the training of MARL algorithms in the presence of stragglers while maintaining the same accuracy as the centralized approach. As an illustration, a coded distributed version of the multi-agent deep deterministic policy gradient (MADDPG) algorithm is developed and evaluated. Different coding schemes, including maximum distance separable (MDS) code, random sparse code, replication-based code, and regular low-density parity-check (LDPC) code, are also investigated. Simulations in several multi-robot problems demonstrate the promising performance of the proposed framework.
Data-Driven Multi-UAV Navigation in Large-Scale Dynamic Environments Under Wind Disturbances
Baoqian Wang,
AIAA Scitech 2021 Forum
Abstract: In the near future, a large number of unmanned aerial vehicles (UAVs) are expected to appear in the airspace. To ensure the safety of the airspace, many daunting technical problems must be tackled, one of which is how to navigate multiple UAVs safely and efficiently in large-scale airspace with both static and dynamic obstacles under wind disturbances. This paper addresses this problem by developing a novel data-driven multi-UAV navigation framework that combines the $A^*$ algorithm with a state-of-the-art deep reinforcement learning (DRL) method. The $A^*$ algorithm generates a sequence of waypoints for each UAV, and the DRL method ensures that the UAV reaches each waypoint in order while satisfying all dynamic constraints and safety requirements. Furthermore, our framework significantly expedites the online planning procedure by offloading most computation offline and limiting online computing to path fine-tuning and dynamic obstacle avoidance. Simulation studies demonstrate the good performance of the proposed framework.
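The waypoint-generation half of the framework can be sketched with a plain grid-based A* search (a simplified stand-in for the planner described above; the grid, start, and goal are invented for illustration):

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = obstacle).

    Returns the list of cells from start to goal, or None if unreachable.
    """
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, [start])]               # (f, g, cell, path)
    seen = set()
    while open_set:
        _, g, cur, path = heapq.heappop(open_set)
        if cur == goal:
            return path
        if cur in seen:
            continue
        seen.add(cur)
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            r, c = cur[0] + dr, cur[1] + dc
            if 0 <= r < len(grid) and 0 <= c < len(grid[0]) and grid[r][c] == 0:
                heapq.heappush(open_set,
                               (g + 1 + h((r, c)), g + 1, (r, c), path + [(r, c)]))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],    # a wall forces a detour around the right side
        [0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

In the actual framework each cell of the returned path would become a waypoint handed to the DRL controller; the DRL half is not sketched here.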
Computing in the Air: An Open Airborne Computing Platform
Baoqian Wang,
IET Communications, 2020
Abstract: In this study, we aim to design an open UAS-based airborne computing platform with advanced onboard computing capability. This platform is designed from three aspects: hardware, software, and applications. In particular, feasible computing hardware for UAS onboard computing is first considered and a prototype is then designed. To enhance the flexibility and programmability of the platform, two key virtualisation techniques are then investigated. Finally, we test the performance of our prototype by executing real UAS onboard computing tasks, the results of which verify the feasibility and potential of the proposed airborne computing platform.
3-D Trajectory Modeling for Unmanned Aerial Vehicles
Baoqian Wang, G. A. Guijarro Reyes, L. R. Garcia Carrillo,
AIAA Scitech 2019 Forum
Abstract: This paper aims to develop a hybrid 3-dimensional (3-D) UAV trajectory modeling framework, which integrates physically-based and data-based (LSTM-based) models. The key idea is to use a physically-based model, which may not perfectly capture the true dynamics of the UAV of interest, to generate a large amount of trajectory data and use these data to train a data-based model. This baseline model is then tuned using a small amount of real flight data to capture the true dynamics of the targeted UAV.
Geolocation Using Video Sensor Measurements
Abstract: In this project, a simple geolocation algorithm using video sensor measurements is implemented on a real robot. In particular, the mobile robot's position in the image frame is obtained through YOLOv3. Given the camera's position in the world frame and its intrinsic parameters, the position of the mobile robot in the world frame can be recovered through the camera projection model. The estimated positions are then compared to the ground-truth positions recorded by a motion capture system.
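The back-projection step can be sketched for the simplest case of a downward-facing camera above a flat floor (the intrinsic values and camera height below are made up; the actual project used calibrated parameters and a general camera pose):

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, h):
    """Invert the pinhole model for points lying on a floor a distance h
    below a downward-facing camera: Z = h, so X = (u - cx) * h / fx and
    Y = (v - cy) * h / fy in the camera-aligned ground frame."""
    X = (u - cx) * h / fx
    Y = (v - cy) * h / fy
    return X, Y

# Hypothetical setup: camera 2 m above the floor, 600 px focal length,
# 640x480 image, detection centered at pixel (470, 240).
X, Y = pixel_to_ground(470, 240, fx=600, fy=600, cx=320, cy=240, h=2.0)
print(X, Y)  # 0.5 0.0 : the robot sits 0.5 m to the right of the optical axis
```

With a tilted camera the same idea applies after rotating the back-projected ray into the world frame and intersecting it with the ground plane.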
Robotics Sensing and Estimation
Abstract: In this course, three projects were completed: stop sign detection using a logistic regression model, particle filter Simultaneous Localization and Mapping (SLAM), and visual-inertial SLAM. The code is written in Python.
Robotics Planning and Learning
Abstract: This course focuses on standard planning algorithms for robotics, such as Dynamic Programming, Dijkstra's algorithm, and A*, as well as basic reinforcement learning algorithms such as value iteration, policy iteration, and Q-learning. In the first project, a dynamic programming algorithm is implemented for Key Pick Up environments. The second project implemented the A* algorithm for path planning in a 3-D environment. Value iteration and policy iteration are used to control a pendulum in project 3. The code is written in Python.
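The value iteration algorithm used in project 3 can be sketched on a toy two-state MDP (the MDP itself is invented for illustration; the course project used the much larger pendulum problem):

```python
def value_iteration(P, R, gamma=0.9, tol=1e-8):
    """Value iteration for a deterministic MDP.

    P[s][a] gives the next state, R[s][a] the reward; iterate the Bellman
    optimality backup V(s) = max_a [ R(s,a) + gamma * V(P(s,a)) ] to a
    fixed point.
    """
    n = len(P)
    V = [0.0] * n
    while True:
        V_new = [max(R[s][a] + gamma * V[P[s][a]] for a in range(len(P[s])))
                 for s in range(n)]
        if max(abs(a - b) for a, b in zip(V, V_new)) < tol:
            return V_new
        V = V_new

# State 0: action 0 stays in 0 (reward 0), action 1 moves to 1 (reward 1).
# State 1: absorbing, both actions give reward 0.
P = [[0, 1], [1, 1]]
R = [[0.0, 1.0], [0.0, 0.0]]
V = value_iteration(P, R)
print(V)  # [1.0, 0.0] : the optimal value of state 0 is the one-step reward
```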
Robot Reinforcement Learning
Abstract: Typical reinforcement learning algorithms are covered in this course. The first project implemented PID controllers for a mobile car and a two-joint link. The second project implemented Q-learning for the Frozen Lake environment. The policy gradient algorithm and deep deterministic policy gradient (DDPG) are implemented in projects 3 and 4, respectively, for a robot arm. The code is written in Python.
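The tabular Q-learning used in the second project can be sketched on a toy 1-D chain (a much smaller stand-in for Frozen Lake; all hyperparameters below are illustrative):

```python
import random

def q_learning(n_states=4, n_episodes=500, alpha=0.5, gamma=0.9, eps=0.2):
    """Tabular Q-learning on a 1-D chain: action 0 moves left (clamped at
    state 0), action 1 moves right; reward 1 on reaching the rightmost,
    terminal state. Epsilon-greedy exploration with rate eps."""
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(n_episodes):
        s = 0
        while s != n_states - 1:
            if random.random() < eps:
                a = random.randrange(2)                      # explore
            else:
                a = max((0, 1), key=lambda a: Q[s][a])       # exploit
            s2 = max(s - 1, 0) if a == 0 else s + 1
            r = 1.0 if s2 == n_states - 1 else 0.0
            # Standard Q-learning update toward the greedy bootstrap target.
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s = s2
    return Q

random.seed(0)
Q = q_learning()
policy = [max((0, 1), key=lambda a: Q[s][a]) for s in range(3)]
print(policy)  # greedy policy: move right in every non-terminal state
```

After training, the learned Q-values discount geometrically with distance from the goal, so the greedy policy always moves right.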