2020-11-11 Meeting Recap

Hello CV! I'll be making quick written recaps of our meetings after work sessions, unless nothing significant was accomplished. If you ever need to refer to anything we discussed, you can do so here.

The Old Architecture

I briefly went through a description of the old CV architecture, which you can also see on Confluence. If you're reading this, remember three things.

  1. The current CV system has a DeckLinkCapture.cpp class that handles calling the DeckLink SDK for its functionality (start stream, grab frame, etc.). The DeckLinkImport.cpp class calls DeckLinkCapture for this functionality.

  2. We do not need to copy the implementation or architecture exactly; in fact, at this point we need to experiment to find what works. Don't worry about the implementation, worry about the behaviour.

  3. We need to reimplement four capabilities first.

    1. Start video stream

    2. End video stream

    3. Grab frame

    4. Display video stream

For now, implement the above functions using OpenCV's VideoCapture class. Once I figure out a way to pipe GStreamer input to OpenCV, we'll just send it to the VideoCapture API. Read on for more details.

The Tasks

Going forward, I have assigned three of the five tasks to people on the team roster. The remaining tasks are available on a first-come, first-served basis.

  1. (Assigned: Aryan). Reimplement start stream functionality.

  2. (Assigned: Justin). Reimplement stop stream functionality.

  3. (Assigned: Kailash). Reimplement grab frame functionality.

  4. (Tentative: Shrinjay. I'll need some help!) Find a way to pipe video input from GStreamer to OpenCV in Python.

  5. Display video stream.

GIFT-Grab Sucks!

We've come to the conclusion that even on Linux, GIFT-Grab is a hassle, so we'll go with plan B and pipe the video stream from GStreamer to OpenCV. The architecture will look something like this.

For anyone assigned a task, just write your code in Python against OpenCV's VideoCapture API (https://docs.opencv.org/3.4/d8/dfe/classcv_1_1VideoCapture.html#ae38c2a053d39d6b20c9c649e08ff0146). Then we'll handle the video stream side with GStreamer.

I essentially pulled this plan from the following references. Let's use them to build this.

GStreamer DeckLink plugin documentation: https://gstreamer.freedesktop.org/documentation/decklink/decklinkvideosink.html?gi-language=c

This person did the same thing we want to do:

Example of using input from a Blackmagic Decklink or Ultrastudio card in kivy with the use of OpenCV and GStreamer.

This answer says it should work:

https://answers.opencv.org/question/119583/how-does-the-cv2videocapture-class-work-with-gstreamer-pipelines/

Conclusion

That's all for this week, folks! Glad we made this progress. Going forward, we'll be assigning tasks as above, and you'll be responsible for completing them. I'll be happy to assist at any time; reach me on Slack. As we continue building code, we'll be able to structure our classes and functions better. For now, just focus on writing code that gets the job done, ideally wrapped in a function. Looking forward to finally getting some video input!
