Research Paper | Problem Being Solved | Thought Process | Actions Taken | Analysis and Takeaways | Summary and Resources
---|---|---|---|---|---
1. ArtiBoost: Boosting Articulated 3D Hand-Object Pose Estimation via Online Exploration and Synthesis | Hand-object pose estimation (HOPE) is extremely difficult given the many possible orientations and the dexterity of the human hand. ArtiBoost attempts to address this problem. | ArtiBoost is an online data-enrichment method that constructs a CVV-space from which synthetic hand-object poses are generated through exploration and synthesis. These synthetic samples are then fed into the model alongside real data. | Complex statistics are involved in constructing the CVV-space, but the general idea is to train the model and feed the losses back into the exploration step. | The model outperforms one trained only on real-world hand-object poses. The synthetic poses train the model better when they are more diverse rather than of higher quality. |
2 | Building a local and global 3D map for navigation in autonomous land vehicles (ALVs). | Employ a binocular stereo vision system that uses the parallax between two cameras to calculate depth. | Used a matching algorithm to generate a disparity map, which was transformed into a new coordinate system for 3D map building. | The real-time global 3D map generated could be useful for mapping and navigation. However, it requires two cameras as well as GPS/INS built into the vehicle. | https://www.researchgate.net/publication/224643999_3D_Map_Building_Based_on_Stereo_Vision
3 | Providing ground operators a system to control the drone with a more immersive stereo vision experience. | The ground operators control the UAV using a controller and a VR headset. The drone carries a stereo vision camera that streams real-time, low-latency video to the VR headset. | A stereo vision camera was mounted on the UAV, and an Oculus Rift with a controller was used by the ground operator. Dedicated hardware processed the video feed and relayed it to the ground operator. | The research is very in-depth and could be useful to implement on the WARG UAV. Depth estimation could be performed for data gathering, or live video could be streamed into an FPV-based control system. |
4 | Presenting a stereo vision mapping algorithm that finds safe regions for navigation by detecting objects, inclines, and drop-off points. | A localized map must be generated with annotations describing the robot's surroundings. Essentially, safe and unsafe areas must be identified so that the robot can navigate through 3D space. | Stereo vision is employed to calculate depth. The depth is used to generate a 3D grid, which is then segmented into levels and inclines. Finally, a 2D local safety map is generated for navigating the robot's surroundings. | The research provides valuable insight into the steps we should take when constructing our own 3D vision mapping model. Since it uses only a camera for mapping, the idea is easily transferable to the WARG UAV. | https://web.eecs.umich.edu/~kuipers/papers/Murarka-iros-09.pdf
5 | | | | | |
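
Papers 2 through 4 all rely on converting a stereo disparity map into depth. A minimal sketch of that step, using the standard pinhole stereo relation Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity (the function name and parameter values below are illustrative, not taken from any of the papers):

```python
import numpy as np

def disparity_to_depth(disparity, focal_length_px, baseline_m, eps=1e-6):
    """Convert a disparity map (pixels) into a depth map (meters).

    Uses the standard pinhole stereo relation Z = f * B / d.
    Cells with (near-)zero disparity are assigned infinite depth.
    """
    disparity = np.asarray(disparity, dtype=float)
    depth = np.full(disparity.shape, np.inf)
    valid = disparity > eps
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Example: a 700 px focal length, 0.12 m baseline, and 21 px disparity
# give a depth of 700 * 0.12 / 21 = 4.0 m.
print(disparity_to_depth([[21.0]], 700.0, 0.12))
```

In practice the disparity map itself would come from a stereo matching algorithm (as in paper 2); this sketch only covers the final disparity-to-depth conversion.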
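
Paper 4's idea of segmenting a 3D grid into safe and unsafe regions can be illustrated with a deliberately simplified sketch: label a cell of a 2D height grid unsafe when the height step to any 4-connected neighbor exceeds a threshold, which flags obstacles, steep inclines, and drop-offs. This is a hypothetical toy version, not the paper's actual multi-level segmentation algorithm:

```python
import numpy as np

def safety_map(height_grid, max_step_m=0.15):
    """Label each cell of a 2D height grid as safe (True) or unsafe (False).

    A cell is unsafe if the height difference to any 4-connected neighbor
    exceeds max_step_m. Toy stand-in for a full safety-map segmentation.
    """
    h = np.asarray(height_grid, dtype=float)
    safe = np.ones(h.shape, dtype=bool)
    # Height steps between horizontal and vertical neighbors.
    dx = np.abs(np.diff(h, axis=1)) > max_step_m
    dy = np.abs(np.diff(h, axis=0)) > max_step_m
    # Mark both cells on either side of a too-large step as unsafe.
    safe[:, :-1] &= ~dx
    safe[:, 1:] &= ~dx
    safe[:-1, :] &= ~dy
    safe[1:, :] &= ~dy
    return safe

grid = [[0.0, 0.05, 0.10],
        [0.0, 0.05, 0.60],   # the 0.60 m cell creates steps above 0.15 m
        [0.0, 0.05, 0.10]]
print(safety_map(grid))
```

The resulting boolean map plays the role of the paper's 2D local safety map: a planner would restrict the robot's path to cells marked True.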