Greetings!
For research on event-based vision in the context of autonomous driving, I am currently developing an application that records two EVK4s as well as images from an industrial RGB camera. The application is written in Python and uses the PeriodicFrameGenerationAlgorithm.
Currently we are limited to around 30 MEv/s per camera; if we go beyond that, the recording becomes faulty and not all events or hardware triggers are saved.
Each event camera is started in a separate process that sends 1 frame per second to the PyQt5 GUI. Each process starts its own thread for bias configuration via the UI and also handles the actual recording.
This raises several questions for me:
Is it possible to set the frame rate of the PeriodicFrameGenerationAlgorithm to 0 during recording?
Can I expect performance advantages from switching to the CameraStreamSlicer?
And is this performance to be expected, or is something going very wrong here?
I have attached some code that shows the class for the event cameras.
If anyone has already recorded multiple event cameras simultaneously on the same PC, it would be very helpful to get some insights into how you implemented that and what limitations in terms of event rate you found.
We're using an Ubuntu system with an Intel Core i5-13500H CPU and 16 GB of RAM.
Thanks a lot!