Hi all,
I am still fairly inexperienced at setting up image pipelines, but for the life of me, I cannot improve this one. We are using two Leopard Imaging IMX577CS 4K cameras with a Jetson Xavier Dev Kit. We are building OpenCV 3.4.6 from source with GStreamer and CUDA and all that jazz. We are using cv_camera on ROS Melodic, which supports BGRx (as well as a bunch of other formats). The goal is to use ArUco markers for fiducial odometry.
I am able to launch with the following command:
"nvarguscamerasrc sensor-id=0 ! video/x-raw(memory:NVMM), width=1920, height=1080, format=NV12, framerate=(fraction)30/1 ! nvvidconv flip-method=0 ! video/x-raw, format=(string)BGRx ! videoconvert ! appsink"
I believe the issue is the videoconvert element near the end of that pipeline. That step makes the pipeline very CPU-bound, which prevents me from running at high resolution, and that in turn hurts how reliably we can read the ArUco markers. However, despite my best efforts and reading Nvidia's GStreamer documentation, I have been unable to do this last conversion step using Nvidia's proprietary (hardware-accelerated) image processing instead. The pipeline crashes no matter what I try.
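For what it's worth, here is a sketch of the kind of variant I have been trying: the same pipeline with videoconvert dropped, so nvvidconv's GPU-side NV12-to-BGRx conversion feeds appsink directly. This assumes the consuming node (cv_camera in our case) can actually accept BGRx frames; the helper function here is just my own way of building the string, not anything from the Nvidia or ROS APIs:

```python
# Hypothetical sketch: build a capture pipeline where nvvidconv does the
# NV12 -> BGRx conversion on the GPU and the CPU-bound videoconvert stage
# is removed entirely. Assumes the downstream consumer accepts BGRx.

def build_pipeline(sensor_id=0, width=1920, height=1080, fps=30):
    """Return a GStreamer pipeline string with no CPU videoconvert stage."""
    return (
        "nvarguscamerasrc sensor-id={sid} ! "
        "video/x-raw(memory:NVMM), width={w}, height={h}, "
        "format=NV12, framerate=(fraction){fps}/1 ! "
        "nvvidconv flip-method=0 ! "
        "video/x-raw, format=(string)BGRx ! "
        "appsink"
    ).format(sid=sensor_id, w=width, h=height, fps=fps)

print(build_pipeline())
```

(Using str.format rather than f-strings since Melodic nodes are often still Python 2.) This is roughly the shape of pipeline that crashes for me, so I may well be missing a required caps filter or queue somewhere.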
Can anyone suggest how to set up a more efficient image pipeline?
Launch file here: https://github.com/TrickfireRobotics/NasaRmc2019/blob/yolo_hall/src/tfr_sensor/launch/fiducial_cam.launch