
[Enhancement] Geolocation of ZED detections #340

Open
Kai-Shafe opened this issue Oct 22, 2024 · 1 comment
Labels
- 2-Star: Indicates a relatively easy task, requiring some basic skills or knowledge but still accessible.
- cameras: Tasks or issues specifically related to camera components, feeds, or image processing.
- enhancement: Requests for new features or improvements to existing features.

Comments

@Kai-Shafe (Contributor)

Why Is This Enhancement Needed?

Our vision models will be able to provide pixel coordinates for the center of each detected obstacle or object. We need to convert these detections into UTM coordinates for use in pathfinding.

Proposed Solution

We will need to perform coordinate transformations on the relative positions returned by the ZED cameras, then overwrite the coordinate values within the detection handlers.
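As a rough sketch of the transformation, the camera-relative offset (forward/right in the rover's body frame) can be rotated by the rover's heading into east/north offsets and added to the rover's UTM position. The struct and function names below are hypothetical, not the project's actual handler types, and the heading convention (degrees clockwise from true north) is an assumption:

```cpp
#include <cmath>

// Hypothetical type; the real detection handlers may use a different layout.
struct UTMCoordinate
{
    double dEasting;   // meters
    double dNorthing;  // meters
};

// Convert a detection's camera-relative offset (meters forward, meters right)
// into absolute UTM coordinates, given the rover's UTM position and heading
// (assumed degrees clockwise from true north).
UTMCoordinate RelativeToUTM(const UTMCoordinate& stRoverUTM, double dHeadingDeg,
                            double dForward, double dRight)
{
    const double kPi        = 3.14159265358979323846;
    const double dHeadingRad = dHeadingDeg * kPi / 180.0;

    // Rotate the body-frame offset into the east/north frame.
    const double dEastOffset  = dForward * std::sin(dHeadingRad) + dRight * std::cos(dHeadingRad);
    const double dNorthOffset = dForward * std::cos(dHeadingRad) - dRight * std::sin(dHeadingRad);

    return { stRoverUTM.dEasting + dEastOffset, stRoverUTM.dNorthing + dNorthOffset };
}
```

For example, with a heading of 0 degrees (due north), a detection 10 m directly ahead lands 10 m north of the rover; at a heading of 90 degrees it lands 10 m east.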

Additional Context

Research the Active Object design pattern for more details on multi-threading considerations for the handlers.

@Kai-Shafe added the enhancement, 2-Star, and cameras labels Oct 22, 2024
@Kai-Shafe Kai-Shafe moved this to Backlog in URC 2025 - Autonomy Oct 22, 2024

Brenn515 commented Oct 30, 2024

We need to create a struct in the object detector that holds the center point and radius. A good example is the AStar class in src/algorithms/planners/AStar. The important variables in that file are dNorthing, which is the object's northing (height) coordinate, and dEasting, which is the object's easting (width) coordinate; the struct at the bottom of that class is a good model for ours. It's also worth looking at the files in vision/aruco and src/drivers/NavigationBoard for how they handle the vector points. Beyond that, what we need to figure out is how to get the distance and the angle, so we can compute the position of an object seen by the rover.
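A minimal sketch of the struct described above, following the dNorthing/dEasting naming convention from the AStar planner. The struct layout and the MakeDetection helper are hypothetical illustrations of the distance-and-angle idea, not the project's actual API; the bearing convention (radians clockwise from north, relative angle positive to the rover's right) is an assumption:

```cpp
#include <cmath>

// Hypothetical detection struct: center point in UTM plus an approximate radius.
struct ObjectDetection
{
    double dEasting;   // object's center, UTM easting (meters)
    double dNorthing;  // object's center, UTM northing (meters)
    double dRadius;    // approximate object radius (meters)
};

// Given the distance and relative angle to an object (e.g. from ZED depth plus
// the pixel's angular offset) and the rover's UTM pose, fill in the struct.
// dHeadingRad: rover heading, assumed radians clockwise from north.
// dRelAngleRad: angle to object, assumed positive to the rover's right.
ObjectDetection MakeDetection(double dRoverEasting, double dRoverNorthing,
                              double dHeadingRad, double dDistance,
                              double dRelAngleRad, double dRadius)
{
    const double dBearing = dHeadingRad + dRelAngleRad;   // absolute bearing to object
    return { dRoverEasting + dDistance * std::sin(dBearing),
             dRoverNorthing + dDistance * std::cos(dBearing),
             dRadius };
}
```

With a heading of 0 and a relative angle of 0, an object 5 m away sits 5 m due north of the rover, which is a quick sanity check for whichever angle convention the handlers end up using.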

Projects
Status: In Development
Development

No branches or pull requests

3 participants