Hello everyone,
I have recently been working on a project where I use an EVK4 event-based camera together with a regular RGB camera to capture two types of data simultaneously. To ensure that both cameras capture a consistent view of the same object surfaces, I applied calibration algorithms to compute the intrinsic and extrinsic parameters of both cameras.
The calibration process provided me with the following parameters:
Intrinsic Parameters:
CameraParameters1
CameraParameters2
Extrinsic Parameters:
RotationOfCamera2
TranslationOfCamera2
Additional Outputs:
FundamentalMatrix
EssentialMatrix
MeanReprojectionError
NumPatterns (number of patterns used for calibration)
WorldPoints (3D coordinates of the calibration points)
WorldUnits (units of the calibration pattern)

While I have successfully obtained these parameters, I haven't been able to find sufficient documentation or examples about cross-modal calibration involving an event-based camera like the EVK4 and an RGB camera.
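For reference, here is roughly how I am converting these outputs (which follow the MATLAB stereo-calibration naming) into OpenCV-style matrices, since I expect to need OpenCV on the event-camera side. All numbers below are placeholders, and I am assuming the usual transpose between MATLAB's row-vector convention and OpenCV's column-vector convention; please correct me if this conversion is wrong.

```python
import numpy as np

# Placeholder numbers only -- replace with the values exported from the calibration.
# CameraParameters1 = EVK4, CameraParameters2 = RGB (my assumption about the ordering).
K1 = np.array([[1700.0,    0.0, 640.0],
               [   0.0, 1700.0, 360.0],
               [   0.0,    0.0,   1.0]])        # CameraParameters1.IntrinsicMatrix.T
dist1 = np.array([-0.10, 0.05, 0.0, 0.0, 0.0])  # [k1, k2, p1, p2, k3]

K2 = np.array([[1400.0,    0.0, 640.0],
               [   0.0, 1400.0, 360.0],
               [   0.0,    0.0,   1.0]])        # CameraParameters2.IntrinsicMatrix.T
dist2 = np.array([-0.08, 0.03, 0.0, 0.0, 0.0])

# MATLAB uses row-vector math: x2 = x1 * RotationOfCamera2 + TranslationOfCamera2,
# so I transpose to get the OpenCV convention X2 = R_cv @ X1 + T_cv.
R_matlab = np.eye(3)                            # RotationOfCamera2 (placeholder)
T_matlab = np.array([60.0, 0.0, 0.0])           # TranslationOfCamera2, in WorldUnits
R_cv = R_matlab.T
T_cv = T_matlab.reshape(3, 1)
```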
My questions are:
How can I load these calibration parameters into the EVK4?
Is there a way to apply these parameters directly to the EVK4 during video capture to ensure alignment with the RGB camera?
How can these parameters be used effectively during shooting?
If loading the parameters into the EVK4 is not possible, what would be the best workflow for applying them during post-processing to ensure alignment between the EVK4 and RGB data? (I sketch below what I currently have in mind for this route.)
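To make the last question more concrete, this is the post-processing route I am currently considering: rectify both streams into a common frame with OpenCV. I am assuming here that both streams are used at the same resolution (the EVK4 is 1280x720, and I would resize or crop the RGB frames to match), since cv2.stereoRectify works with a single image size. K1/dist1, K2/dist2, R_cv and T_cv are the converted parameters from the snippet above.

```python
import cv2
import numpy as np

image_size = (1280, 720)  # (width, height), shared by both streams in this sketch

# Compute rectification transforms and per-camera remapping tables.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    K1, dist1, K2, dist2, image_size, R_cv, T_cv, alpha=0)

map1x, map1y = cv2.initUndistortRectifyMap(K1, dist1, R1, P1, image_size, cv2.CV_32FC1)
map2x, map2y = cv2.initUndistortRectifyMap(K2, dist2, R2, P2, image_size, cv2.CV_32FC1)

# Placeholder frames so the sketch runs; in practice event_frame comes from the
# Metavision sketch below and rgb_frame from the RGB recording.
event_frame = np.zeros((720, 1280), dtype=np.uint8)
rgb_frame = np.zeros((720, 1280, 3), dtype=np.uint8)

rect_events = cv2.remap(event_frame, map1x, map1y, cv2.INTER_NEAREST)
rect_rgb = cv2.remap(rgb_frame, map2x, map2y, cv2.INTER_LINEAR)
```

I am aware that this only aligns the epipolar geometry of the two views; an exact pixel-to-pixel overlay would still depend on scene depth, which is part of why I am asking about the recommended workflow.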
I would greatly appreciate any guidance or references related to this cross-modal calibration workflow, especially with respect to the Prophesee Metavision SDK (Python) or other tools that could help with this setup.
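On the Metavision side, the only thing I have so far is a rough sketch that reads the EVK4 RAW recording with EventsIterator, accumulates events into simple binary frames, and then applies the rectification maps (map1x/map1y) from the snippet above. The file name is just a placeholder. Is this the intended direction, or is there a better-supported path in the SDK?

```python
import numpy as np
import cv2
from metavision_core.event_io import EventsIterator

# Read the EVK4 recording in ~33 ms slices and accumulate each slice into a frame.
mv_iterator = EventsIterator(input_path="recording.raw", delta_t=33333)
height, width = mv_iterator.get_size()

for evs in mv_iterator:
    frame = np.zeros((height, width), dtype=np.uint8)
    if evs.size > 0:
        frame[evs['y'], evs['x']] = 255  # mark pixels that fired events in this slice
    # Apply the rectification map computed from the calibration (previous snippet).
    rect_events = cv2.remap(frame, map1x, map1y, cv2.INTER_NEAREST)
    # ... next step would be matching rect_events with the temporally closest RGB frame ...
```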
Thank you in advance for your help!