Relationship between bias and physical threshold of EVK3 HD
According to the EVK3 HD bias document, the physical thresholds of positive and negative events are controlled by bias_diff_on and bias_diff_off, and it is recommended not to change bias_diff. I did some experiments on the 3 bias parameters and found a stronger relationship
SDK 4.6.2, Ubuntu 22.04 Installation: connection timed out
Hello Prophesee Team, I am using https://docs.prophesee.ai/4.6.2/installation/linux.html#chapter-installation-linux to install SDK 4.6.2 on Ubuntu 22.04, but the package manager cannot connect to apt.prophesee.ai:443. user@dell:~/Ubuntu22.04_SDK_462$ uname
Threshold selection for after-processing filters
Hi, I am working with Prophesee event camera and using the following SDK filters in a post-processing pipeline: ActivityNoiseFilterAlgorithm SpatioTemporalContrastAlgorithm (STC) TrailFilterAlgorithm I used the configuration with trail cutting disabled
PYNQ driver for the GenX320 camera
Hello, Since I wasn't able to find a driver that allowed the GenX320 camera to work with the PYNQ framework, I ported the existing drivers to Python for this purpose. You can find the code here. The available example is geared towards my current project,
How to select event data format to generate on GENX320
Hello, I'm using the GENX320 and I'd like to generate events in the EVT2.0 format instead of the default EVT2.1, so that I can use the EVT-to-CSV decoding functions from the SDK. I can't find this information in the datasheet and I can't change the data format in Metavision
ESP for GenX320 CM2 for RaspberryPi5
I have the GenX320 CM2 connected to my RaspberryPi5 via MIPI. I have read in the docs that the GenX320 is supposed to support all Event Signal Processing (ESP) functionality. However, when trying to get the facilities for Anti-Flicker (AFK), Spatio-Temporal-Contrast
No monitor output after quickstart with IMX636 and kv260
I have set up the IMX636 with the Kria board via USB (UART connection), and it is also connected via Ethernet to the router rather than directly to the laptop, since I have no way to do that; nevertheless, I can ssh into the device without issue, or use the serial interface.
Error with pybind11 when running metavision_model_3d_tracking.py (EVK4, Python)
Hello! I’m using an EVK4 event camera with Python and running the example metavision_model_3d_tracking.py together with the “maker” sample. When I run it, the program fails with an error that looks related to the pybind11 module. Could anyone advise how
Using IMX636 Metavision Starter Kit on KV260 Ubuntu
I have a project where we are trying to record with multiple cameras, including the IMX636 Metavision Starter Kit for the Kria KV260 board. The other cameras have been implemented on the Kria KV260 board, but with the Ubuntu image installed on it. We
Using the SDK in a Python venv / micromamba environment
I have an EVK4 and I am trying to use the SDK in a Python virtual environment. - Hardware: Prophesee EVK4 event-based camera - SDK: Prophesee Metavision SDK, compiled specifically for Python 3.10 (the maximum for my Ubuntu version) - Environment: Micromamba
Reconstructing Boot Sequence for GENX320
Dear Community, I am trying to use the GENX320 Starter Kit on an FPGA, which is why I want to reconstruct the boot-up sequence for the MIPI EVT2.1 stream using I2C from the 15-pin adapter. I tried to copy the signals from the Linux drivers (https://github.com/prophesee-ai/linux-sensor-drivers),
Separating the sensor module from the CCAM adapter
I need to mount the EVK3 (VGA) sensor to a cage-mount system. The simplest way I can think of to do this is to separate the sensor and the CCAM adapter board and connect them with a flexible cable. Does any such cable exist?
IMX636 - Anti Flicker Filter (AFK) - Problem
Hello, I am currently working with the IMX636 camera and I am trying to suppress the events generated by the pixels of a computer monitor (60 Hz). My goal is to use the AFK (Anti-Flicker) filter to remove this flickering. To do this, I modified the camera
EVK4 HD Calibration result
Hello, I'm new to event cameras and I followed most of the tutorials in your documentation in order to do good bias tuning and an intrinsic calibration. I used the Metavision calibration pipeline with the chessboard pattern and I managed to get results
Setting a ROI on GENX320MP and masking the hot pixels in that ROI
Hi, I am trying to mask some hot pixels in a given ROI on the GENX320 MP. I looked at this page: https://support.prophesee.ai/portal/en/kb/articles/how-to-mask-the-effect-of-crazy-hot-pixel I have the coordinates of the pixels I'd like to mask. But
Dataset Structure Issue for YOLOv8 Training
I'm encountering problems while training a YOLOv8 model using a custom dataset structure. I've followed the steps outlined in Prophesee YOLOv8 tutorial and used the Prophesee ML labeling tool. I am getting the following error: ValueError: num_samples
What are rows in the pixel architecture
In the YouTube video found on the Event-Based Concepts page, it is mentioned that all pixels from the same row are read simultaneously and their corresponding events will share the same timestamp. I have an EVK4 (IMX636) and the datasheet says the sensor
Establishing a time reference
Hello Prophesee Team, Is it possible to establish a time reference with the API (ideally Python)? For example, if one were to exploit the do_time_shifting functionality, could a timestamp simply be taken as the EventsIterator is called? Or is this too
Licensing T&C's for Kria embedded kit
Hello, I have already put in a support ticket regarding the following. I believe that the licensing T&C's are vague when it comes to demonstration purposes. If I create a binary based on a derivative of the embedded marker application code, how would
Building Active markers on kria ubuntu: Sophos hangs on the Kria embedded kit (imx636)
Hello, I am trying to get the event markers code to build & run on the kria ubuntu image, instead of the petalinux image. I am finding that building petalinux via the build tools is time consuming and requires a beefy machine, so I thought maybe switching
Metavision Studio installation
I recently installed Metavision Studio on my Windows system, but I'm encountering two errors when trying to run the application: The first error states: "Windows doesn't find \share\metavision\apps\metavision_studio\internal\client\Metavision studio."
EVT2.1 format problem in FPGA signal capture
Hi, I am using a Kria KV260 setup with an IMX636 camera, and I am visualizing the active marker with the /opt/metavision/embedded_active_marker_3d_tracking app. The marker position is displayed smoothly on the screen, proof that the signal from the sensor is correctly
How to program on GenX320 built-in risc-v cores?
Hi, I saw that it is possible to flash firmware to the sensor at runtime from the GenX320_STM32_V2.0.0 sample code, e.g. `fw_led_tracking` and `fw_esp_wakeup`. Is there any documentation about this feature, which operations are supported, and how to transfer
How to change camera settings before calling EventsIterator in python
I need some help with the Python interface. I can read the events from the camera with the EventsIterator from metavision_core.event_io. I also know how to change camera settings using Camera from metavision_sdk_stream. However, I am not able to combine both things
Is it possible to get multiple Entra accounts, for JFrog access, using only one camera's serial code?
I started a project a few months ago using the EVK4, now another individual is looking to take over the project. Is it possible for them to create their own Microsoft Entra ID to access Prophesee resources on JFrog?
Streaming Event data as frames to an external GUI based application using Tkinter
Hello, I am having issues using my EVK4 with a GUI based application I have designed to stream the event-camera and a frame-camera (semi) simultaneously. The script is written in python and uses queues to put event frames and regular frames in separate
How to build metavision C code only onto KV260
Hello I am starting with the compilation flow of the embedded version of Prophesee application onto the Kria KV260 with the imx636. The full compilation flow is OK (FPGA hardware, application code metavision-active-marker-3d-tracking and petalinux build).
IMX636 USB camera kit SDK
Is there an SDK for the USB-connected IMX636 EVK? If so, please provide it, thank you.
EVK2 camera not being detected.
Hello, I am trying to use the EVK2 camera with the attached Dockerfile, but I am experiencing some errors when executing metavision_platform_info. The camera is connected to my laptop, but inside the container, when using this command, I am obtaining this
Prebuilt metavision_dense_optical_flow.exe (Metavision 4.6.2, Win11)
Hello, I am trying to launch the prebuilt metavision_dense_optical_flow.exe C++ example binary on Win11 with Metavision v4.6.2. In the terminal it prints: Instantiating TripletMatchingFlowAlgorithm with radius= 1.5, then the GUI window is shown and quickly
x320 writes bad data to file?
With an x320 ES (which doesn't support the EVT3 format), a patched OpenEB (to support v4l2 devices) can only write raw files of collected data -- hdf5 files open and then almost instantly close without any errors. If I convert the raw files to hdf5 with `metavision_file_to_hdf5`
Using the IMX636 + MIPI CSI-2 with Jetson Orin instead of AMD Kria
Does anyone have experience making this work? - Device tree looks like it's available here: https://github.com/prophesee-ai/linux-sensor-drivers - I'm curious if there's a driver available for the Jetson, and if not, if anybody has tips to write one for
RAM usage builds up when EVK4 captures high event rates
Greetings! When building a recording application, I noticed that while the EVK4 faces conditions that result in high event rates being captured, the system memory (RAM) used by the application constantly increases but does not decrease
Timestamp Jitter Range
Hi, I came across this website: https://support.prophesee.ai/portal/en/kb/articles/evk-latency. It mentions jitter and latency here. The latency is clearly mentioned for IMX636; however, I didn't find any measurement about the jitter time. May I ask what
Missing images in dataset structure for training YOLOV8 model
Hi, I have the following structure for my dataset to train a YOLOv8 model according to the documentation, but I am getting the error: "AssertionError: train: No images found in /home/allen/code/proheese-camera/train_classification_model/train. Supported
Is there a Python version of the Simple Window using C++ Example?
I want to replicate this example using Python because I have very little experience working with C++. https://docs.prophesee.ai/stable/samples/modules/ui/simple_window.html Has someone done this before? Or is this something that Prophesee has provided
Readout Saturation Error - Events Flashing Spontaneously
Hi, I'm currently using the EVK4 Prophesee event camera, and am recording events via ros2 subscription into a rosbag recording. When I try to replay my recording, I notice that spontaneously (for different durations too), the events on the screen appear
GenX320 on KV260 boot magic number not found
Hello, I am trying to get the GenX320 camera base project working on a KV260 board, using the provided Petalinux image and simply following the instructions in https://docs.prophesee.ai/amd-kria-starter-kit/application/pipeline_setup.html . However, when
train_detection script not training properly (Warning: No boxes were added to the evaluation)
Hi, I am trying to use the train_detection.py script to train the model using the public FRED dataset (https://miccunifi.github.io/FRED/). I put the data in the correct structure, but when I start training, every epoch displays the message: Warning: No
Error trying to use metavision sdk on python
Hello, I have installed Metavision SDK 4.6 and created an anaconda environment with python 3.9. I am trying to load the metavision packages however I always get the same error "ImportError Traceback (most recent call last) ----> 5 from metavision_core.event_io