EVK2 camera not being detected.
Hello, I am trying to use the EVK2 camera with the attached Dockerfile, but I am experiencing some errors when executing metavision_platform_info. The camera is connected to my laptop, but inside the container, when using this command, I am obtaining this
Prebuilt metavision_dense_optical_flow.exe (Metavision 4.6.2, Win11)
Hello, I am trying to launch the prebuilt metavision_dense_optical_flow.exe C++ example binary on Win11 with Metavision v4.6.2. In the terminal it prints: Instantiating TripletMatchingFlowAlgorithm with radius= 1.5, then the GUI window is shown and quickly
x320 writes bad data to file?
With an x320 ES (which doesn't support the EVT3 format), a patched OpenEB (to support V4L2 devices) can only write RAW files of collected data -- HDF5 files open, then almost instantly close without any errors. If I convert the RAW files to HDF5 with `metavision_file_to_hdf5`
Using the IMX636 + MIPI CSI-2 with Jetson Orin instead of AMD Kria
Does anyone have experience making this work? - Device tree looks like it's available here: https://github.com/prophesee-ai/linux-sensor-drivers - I'm curious if there's a driver available for the Jetson, and if not, if anybody has tips to write one for
RAM usage builds up when EVK4 captures high event rates
Greetings! While building a recording application, I noticed that when the EVK4 faces conditions that result in high event rates being captured, the system memory (RAM) used by the application constantly increases but never decreases
Timestamp Jitter Range
Hi, I came across this website: https://support.prophesee.ai/portal/en/kb/articles/evk-latency. It mentions both jitter and latency. The latency is clearly stated for the IMX636; however, I didn't find any measurement of the jitter. May I ask what
Building Active markers on kria ubuntu: Sophos hangs on the Kria embedded kit (imx636)
Hello, I am trying to get the active markers code to build & run on the Kria Ubuntu image instead of the PetaLinux image. I am finding that building PetaLinux via the build tools is time-consuming and requires a beefy machine, so I thought maybe switching
Missing images in dataset structure for training YOLOV8 model
Hi, I have the following structure for my dataset to train a yolov8 model according to the documentation but I am getting the error: "AssertionError: train: No images found in /home/allen/code/proheese-camera/train_classification_model/train. Supported
Is there a Python version of the Simple Window using C++ Example?
I want to replicate this example using Python because I have very little experience working with C++. https://docs.prophesee.ai/stable/samples/modules/ui/simple_window.html Has someone done this before? Or is this something that Prophesee has provided
Readout Saturation Error - Events Flashing Spontaneously
Hi, I'm currently using the EVK4 Prophesee event camera and am recording events via a ROS 2 subscription into a rosbag. When I try to replay my recording, I notice that spontaneously (and for varying durations), the events on the screen appear
GenX320 on KV260 boot magic number not found
Hello, I am trying to get the GenX320 camera base project working on a KV260 board, using the provided Petalinux image and simply following the instructions in https://docs.prophesee.ai/amd-kria-starter-kit/application/pipeline_setup.html . However, when
train_detection script not training properly (Warning: No boxes were added to the evaluation)
Hi, I am trying to use the train_detection.py script to train the model using the public FRED dataset (https://miccunifi.github.io/FRED/). I put the data in the correct structure, but when I start training, every epoch displays the message: Warning: No
Error trying to use metavision sdk on python
Hello, I have installed Metavision SDK 4.6 and created an anaconda environment with Python 3.9. I am trying to load the metavision packages; however, I always get the same error: "ImportError Traceback (most recent call last) ----> 5 from metavision_core.event_io
Metavision Sparse Optical Flow Questions
Hello, I am currently using the sparse optical flow algorithm and I have a few questions. As an object enters the FOV of the camera, its velocity is tracked as near zero. This is because, as the object moves into view, its "center" is staying apparently
http 404 error in train_detection.py script
Hello, I am trying to run the train_detection.py script in https://docs.prophesee.ai/stable/samples/modules/ml/train_detection.html#chapter-samples-ml-train-detection and I run it with the "toy_problem" path as shown below: python train_detection.py .
Frequency / bias value matching table
Hi, I just found this table that relates bias_hpf and bias_fo in terms of high-pass cut-off frequency and low-pass cut-off frequency, but this is for Gen 3.1. Can you please share the table or the graphs (https://support.prophesee.ai/portal/en/kb/articles/bias-tuning-flow)
metavision_hal python package
Hi, does anyone know where the metavision_hal Python module is supposed to be located? I went through all the installation steps but I can't find that package. I originally got the error that the metavision_core Python module wasn't found, but then I copied
Impact of 10Hz Sync Signal on EVK4 + RGB Camera Synchronization Precision
Hello Prophesee Team, We are currently developing a multi-sensor data acquisition system using the Prophesee EVK4 alongside a standard RGB camera, and we require precise time synchronization between them. I have carefully reviewed the official synchronization
IMX636 internal clock PPM
Hello, I am trying to find information about the expected internal clock accuracy in PPM compared to a "perfect" clock. Online sources give average "standard crystal oscillators" somewhere around 20 PPM, which means around 12 seconds of drift per week. Does
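The 12-seconds-per-week figure can be sanity-checked with a quick calculation: a frequency error of p PPM accumulates p microseconds of drift for every second of elapsed time. A minimal check of the 20 PPM estimate:

```python
def drift_seconds(ppm: float, elapsed_seconds: float) -> float:
    """Drift accumulated by a clock whose frequency error is `ppm` parts per million."""
    return ppm * 1e-6 * elapsed_seconds

week = 7 * 24 * 3600  # 604800 seconds in a week
print(round(drift_seconds(20, week), 3))  # -> 12.096, i.e. ~12 s/week
```

Note that 20 PPM is only a generic crystal-oscillator figure from the question, not a number confirmed for the IMX636.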
Vibration estimation seems to have a weird bottleneck
Hi, I've been using the vibration estimation code that is provided, but I'm having some issues. The vibration estimation works fine even for detecting high frequencies; however, I want to run the detection itself at a higher frequency (so how often the algorithm
Stereo Calibration Depth Mapping
Hi, I am trying to use the metavision_stereo_metavision.py script to create a depth map using two synchronized recordings obtained from the metavision_sync.py script, but when I run the script I get a lot of NonMonotonicTimeHigh and InvalidVectBase errors
Advice to combine spatter_tracking and vibration_estimation algorithms
Good afternoon, I wanted to ask how best to combine the spatter_tracking and vibration_estimation algorithms in one program, so that when I run it, both features appear in the same window and work flawlessly in parallel. I used the scripts provided
Saving the results of the generic tracking sample to a numpy file
I'm attempting to adapt the generic tracking script to allow me to save the data in a numpy file, however, whenever I inspect the data, the timestamps when grouped by object_id are exactly the same for each object. That is, object 10 will always have
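One cause worth checking: if the per-frame results are NumPy views into a buffer that the SDK reuses, every stored slice ends up pointing at the same underlying data, so each object's saved timestamps look identical; copying each slice before appending avoids this. A minimal sketch of grouping saved results by object_id (the field names here are illustrative assumptions, not the sample's actual schema):

```python
import numpy as np

# Hypothetical tracking output: one row per detection (field names for illustration).
dtype = [("object_id", "<i4"), ("t", "<i8"), ("x", "<f4"), ("y", "<f4")]
tracks = np.array(
    [(10, 1000, 5.0, 5.0), (10, 2000, 6.0, 5.5), (11, 1500, 9.0, 1.0)],
    dtype=dtype,
)

# Group timestamps by object_id. If every group shows the same repeated timestamp,
# suspect that views into a reused buffer were stored; append `buf.copy()` instead.
for oid in np.unique(tracks["object_id"]):
    ts = tracks["t"][tracks["object_id"] == oid]
    print(int(oid), ts.tolist())
```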
Recording Application for two EVK4s and an RGB Camera
Greetings! In order to do research on event-based vision in the context of autonomous driving, I am currently working on developing an application that records two EVK4s as well as images from an industrial RGB camera. The application is written
How to set MV_FLAGS_EVT3_UNSAFE_DECODER in Python api
I'm currently utilizing the EventsIterator to read EVT3 files, but I've encountered some errors. I'd like to know how to set the MV_FLAGS_EVT3_UNSAFE_DECODER option in Python so that it can ignore these errors.
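For context, MV_FLAGS_* switches in Metavision HAL are typically read from the process environment rather than passed through the Python API, so one plausible approach (an assumption to verify against your SDK version's docs) is to set the variable before the iterator is created:

```python
import os

# Assumption: MV_FLAGS_EVT3_UNSAFE_DECODER is read from the environment by
# Metavision HAL; it must be set BEFORE the device/EventsIterator is created.
os.environ["MV_FLAGS_EVT3_UNSAFE_DECODER"] = "1"

# Then open the file as usual (commented out here, since it needs the SDK):
# from metavision_core.event_io import EventsIterator
# mv_it = EventsIterator("recording.raw", delta_t=10000)

print(os.environ["MV_FLAGS_EVT3_UNSAFE_DECODER"])
```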
EVK4 Stereo Camera ROS Driver Installation
We would appreciate your support in helping us resolve the ROS driver compatibility issue for our EVK4 stereo camera setup. We are currently working on setting up an event-based stereo vision system using two EVK4 cameras (serial numbers: P50905 and P50898).
Efficiently replaying sync data from sync camera
Dear community, thank you for reading. I am currently working with two EVK4 cameras synchronized with the sync module of the SDK. I recorded some synchronized RAW files and want to play them back in real time, still synchronized. I cannot use the sync module
Interfacing Prophesee USB EVK with RT Linux CRIO
Is it possible to use a USB EVK* with a Linux-based NI cRIO? Ideally I would like to get an event stream into my control hardware, a National Instruments cRIO-9049. The cRIO uses Linux RT (not Ubuntu) and I am not sure if there is a simple way of doing t
How to play back data from raw files in real time?
When playing back data from a file using OpenEB (5.0.0) with the code below (simplified), the data packets are not delivered in real time (usually much faster). Is there anything I'm doing wrong? `const auto cfg = Metavision::FileConfigHints().real_time_playback(true);`
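Independent of what the hints API does, playback can always be paced manually against the wall clock using the recorded timestamps. A generic sketch of that technique in plain Python (not the Metavision API; the packet layout is an assumption for illustration):

```python
import time

def paced_playback(packets, time_key=lambda p: p[0]):
    """Yield (timestamp_us, payload) packets no faster than real time.

    Sleeps until each packet's recorded timestamp (relative to the first
    packet) has elapsed on the wall clock, then yields it.
    """
    start_wall = time.monotonic()
    start_ts = None
    for packet in packets:
        ts_us = time_key(packet)
        if start_ts is None:
            start_ts = ts_us
        target = start_wall + (ts_us - start_ts) / 1e6
        delay = target - time.monotonic()
        if delay > 0:
            time.sleep(delay)
        yield packet

# Example: three packets 50 ms apart take at least 100 ms to replay.
packets = [(0, "a"), (50_000, "b"), (100_000, "c")]
t0 = time.monotonic()
replayed = list(paced_playback(packets))
print(len(replayed), time.monotonic() - t0 >= 0.1)
```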
Extrinsic Calibration Reference Point
Hello everyone, I am currently performing an extrinsic calibration with the EVK4 HD and would like to validate the results of my translation. I have already been able to measure the X (horizontal) and Y (vertical) directions in a CAD model or directly
EventsIterator does not work correctly
Dear Community, I am working on a project involving an event camera and have created a Python class to manage it. One of the class attributes, self.cam, is an initialized HAL device. The class includes a method, continuous_acquire_events, designed to
KV260 GenX320 to Raspberry Pi connection
Can we use the KV260 – GenX320 starter kit directly on a Raspberry Pi using an official Raspberry Pi standard-to-mini camera cable? Thank you very much for your answer !
kv260 + IMX636 not reading any event?
Dear Prophesee community, I have been doing extensive tests (a lot of trial and error) to set up my KV260 with the IMX636, following the quickstart guide line by line... everything seems fine, but when metavision_viewer is launched the screen is black
Regarding embedded linux image for RDK2
Hi all, for an application my Linux image size exceeds the flash partition range as per the default settings in RDK2. In this regard, has anyone tried modifying the partition and flashing the image again? Thanks, Shankar
EVT2.1 format
Hello, I'm trying to understand how the EVT 2.1 format works on GENX320. In order to do so, I generated 2 light edges (positive then negative) with a contrast well above the threshold value. The generated .RAW file should contain data with a sequence
metavision_viewer unable to read camera_config.json correctly in KV260+imx636
This is the printout and the camera_config file. metavision_viewer (4.6.2) works well with the camera_config.json, but after upgrading to 5.0.0 it behaves like this. Was there a change to the format of the camera_config.json file? If so, could
Help Getting Started: Streaming GenX320 Output with STM32F746 (No GUI Needed)
Hi everyone, I'm new to event-based vision and have just started working with a Prophesee GenX320 camera and an STM32F746 microcontroller. My initial goal is simple: Just open the camera and stream its output — no GUI, no visualization — just raw event
Stereo Calibration
Hello, I calibrated my two event cameras in a stereo setup to perform depth mapping, but I am getting poor results, nowhere near the accuracy shown in the sample videos. I was wondering if you could provide the camera setup used for the courtyard stereo
eb-synced-pattern-detection not working in calibration pipeline
Hi, I am trying to calibrate two event cameras to perform depth mapping through a stereo setup. I am passing in my own calibration.json file to the metavision_calibration_pipeline script to extract the intrinsics and extrinsics parameters of each camera.
Camera not detected in Metavision Starter kit KV260 (IMX636)
Dear Prophesee Support Team, I am working with the Prophesee Metavision Starter Kit for AMD Kria KV260 and followed the instructions provided in the Quick Start Guide. After flashing the SD card and booting the board, I attempted to launch Metavision