Reconstructing Boot Sequence for GENX320
Dear community, I am trying to use the GENX320 Starter Kit on an FPGA, which is why I want to reconstruct the boot-up sequence for the MIPI EVT2.1 stream using I2C from the 15-pin adapter. I tried to copy the signals from the Linux drivers (https://github.com/prophesee-ai/linux-sensor-drivers),
Separating the sensor module from the CCAM adapter
I need to mount the EVK3 (VGA) sensor to a cage-mount system. The simplest way I can think of to do this is to separate the sensor and the CCAM adapter board and connect them with a flexible cable. Does any such cable exist?
IMX636 - Anti Flicker Filter (AFK) - Problem
Hello, I am currently working with the IMX636 camera and I am trying to suppress the events generated by the pixels of a computer monitor (60 Hz). My goal is to use the AFK (Anti-Flicker) filter to remove this flickering. To do this, I modified the camera
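A quick sanity check on the period band such a filter would need to cover, assuming the anti-flicker stage works on a period window around the flicker frequency (the exact AFK registers and mechanism are sensor-specific and not shown here):

```python
def afk_band_us(freq_hz, tolerance=0.1):
    """Return a (min, max) period band in microseconds around a flicker
    frequency, with a fractional tolerance on either side."""
    period_us = 1_000_000 / freq_hz
    return (period_us * (1 - tolerance), period_us * (1 + tolerance))

# A 60 Hz monitor flickers with a period of about 16.7 ms. Note that many
# displays actually flicker at twice the refresh/mains rate, so if 60 Hz
# does not suppress the events, 120 Hz is worth trying as well.
low, high = afk_band_us(60)
```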
No monitor output after quickstart with IMX636 and kv260
I have set up the IMX636 with the Kria board via USB (UART connection), and it is also connected via Ethernet to the router, not directly to the laptop, since I have no way to do that; nevertheless, I can SSH into the device without issue, or use the serial interface.
Using IMX636 Metavision Starter Kit on KV260 Ubuntu
I have a project where we are trying to record with multiple cameras, including the IMX636 Metavision Starter Kit for the Kria KV260 board. The other cameras have been implemented on the Kria KV260 board, but with the Ubuntu image installed on it. We
EVK4 HD Calibration result
Hello, I'm new to event cameras and I followed most of the tutorials in your documentation in order to do good bias tuning and an intrinsic calibration. I used the Metavision calibration pipeline with the chessboard pattern and I managed to get results
Setting a ROI on GENX320MP and masking the hot pixels in that ROI
Hi, I am trying to mask some hot pixels in a given ROI on the GenX320 MP. I looked at this page: https://support.prophesee.ai/portal/en/kb/articles/how-to-mask-the-effect-of-crazy-hot-pixel I have the coordinates of the pixels I'd like to mask. But
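As a stopgap while the on-chip mask is being sorted out, hot pixels can also be filtered on the host after readout. This is only an assumption-level fallback: it does not reduce the sensor-side event rate the way a real digital pixel mask would.

```python
def drop_hot_pixels(events, hot_pixels):
    """Software fallback: discard events from known hot-pixel coordinates.

    events: iterable of (x, y, p, t) tuples; hot_pixels: iterable of (x, y)
    coordinates to suppress. Returns the surviving events as a list.
    """
    hot = set(hot_pixels)
    return [ev for ev in events if (ev[0], ev[1]) not in hot]

# Example with synthetic events: one hot pixel at (1, 1).
events = [(1, 1, 0, 10), (2, 2, 1, 11), (1, 1, 1, 12)]
clean = drop_hot_pixels(events, {(1, 1)})
```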
Dataset Structure for YOLOv8 Training Issue:
I'm encountering problems while training a YOLOv8 model using a custom dataset structure. I've followed the steps outlined in the Prophesee YOLOv8 tutorial and used the Prophesee ML labeling tool. I am getting the following error: ValueError: num_samples
What are rows in the pixels architecture
In the YouTube video found on the Event-Based Concepts page, it is mentioned that all pixels in the same row are read simultaneously and their corresponding events share the same timestamp. I have an EVK4 (IMX636) and the datasheet says the sensor
Establishing a time reference
Hello Prophesee Team, Is it possible to establish a time reference with the API (ideally Python)? For example, if one were to exploit the do_time_shifting functionality, could a timestamp simply be taken as the EventsIterator is called? Or is this too
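One host-side approach, sketched here as pure Python under the assumption that you only need coarse alignment: capture the wall-clock time when the first batch arrives and offset all later sensor timestamps from that anchor. The anchor inherits the USB/driver latency of that first batch, so this is no substitute for hardware synchronization.

```python
import time

class TimeReference:
    """Map sensor-relative timestamps (in µs) to host wall-clock time."""

    def __init__(self):
        self._anchor_host = None  # host epoch time in seconds
        self._anchor_ts = None    # sensor timestamp in µs

    def observe(self, first_event_ts_us):
        # Anchor on the first timestamp seen; later calls are no-ops.
        if self._anchor_host is None:
            self._anchor_host = time.time()
            self._anchor_ts = first_event_ts_us

    def to_host_time(self, ts_us):
        # Convert a sensor timestamp to host epoch seconds.
        return self._anchor_host + (ts_us - self._anchor_ts) * 1e-6
```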
Licensing T&C's for Kria embedded kit
Hello, I have already put in a support ticket regarding the following. I believe that the licensing T&C's are vague when it comes to demonstration purposes. If I create a binary based on a derivative of the embedded marker application code, how would
Building Active markers on kria ubuntu: Sophos hangs on the Kria embedded kit (imx636)
Hello, I am trying to get the event markers code to build & run on the kria ubuntu image, instead of the petalinux image. I am finding that building petalinux via the build tools is time consuming and requires a beefy machine, so I thought maybe switching
Metavision Studio installation
I recently installed Metavision Studio on my Windows system, but I'm encountering two errors when trying to run the application: The first error states: "Windows doesn't find \share\metavision\apps\metavision_studio\internal\client\Metavision studio."
EVT2.1 format problem in FPGA signal capture
Hi, I am using a Kria KV260 setup with an IMX636 camera, and I am visualizing the active marker with the /opt/metavision/embedded_active_marker_3d_tracking app. The marker position is displayed smoothly on the screen, proof that the signal from the sensor is correctly
How to program on GenX320 built-in risc-v cores?
Hi, I saw it is possible to flash firmware to the sensor during runtime from the GenX320_STM32_V2.0.0 sample code, like `fw_led_tracking` and `fw_esp_wakeup`. Is there any documentation about this feature, which operations are supported, and how to transfer
How to change camera settings before calling EventsIterator in python
I need some help with the Python interface. I can read the events from the camera with EventsIterator from metavision_core.event_io. I also know how to change camera settings using Camera from metavision_sdk_stream. However, I am not able to combine both things
Is it possible to get multiple Entra accounts, for JFrog access, using only one camera's serial code?
I started a project a few months ago using the EVK4, now another individual is looking to take over the project. Is it possible for them to create their own Microsoft Entra ID to access Prophesee resources on JFrog?
Streaming Event data as frames to an external GUI based application using Tkinter
Hello, I am having issues using my EVK4 with a GUI-based application I have designed to stream the event camera and a frame camera (semi-)simultaneously. The script is written in Python and uses queues to put event frames and regular frames in separate
How to build metavision C code only onto KV260
Hello I am starting with the compilation flow of the embedded version of Prophesee application onto the Kria KV260 with the imx636. The full compilation flow is OK (FPGA hardware, application code metavision-active-marker-3d-tracking and petalinux build).
IMX636 USB camera kit SDK
Is there an SDK for the IMX636 EVK connected via USB? If so, please share it. Thank you.
EVK2 camera not being detected.
Hello, I am trying to use the EVK2 camera with the attached Dockerfile, but I am experiencing some errors when executing metavision_platform_info. The camera is connected to my laptop, but inside the container, when using this command, I am obtaining this
Prebuilt metavision_dense_optical_flow.exe (Metavision 4.6.2, Win11)
Hello, I am trying to launch the prebuilt metavision_dense_optical_flow.exe C++ example binary on Win11 with Metavision v4.6.2. In the terminal it prints: Instantiating TripletMatchingFlowAlgorithm with radius= 1.5, then the GUI window is shown and quickly
x320 writes bad data to file?
With an x320 ES (which doesn't support the EVT3 format), a patched OpenEB (to support V4L2 devices) can only write raw files of collected data -- HDF5 files open then almost instantly close without any errors. If I convert the raw files to HDF5 with `metavision_file_to_hdf5`
Using the IMX636 + MIPI CSI-2 with Jetson Orin instead of AMD Kria
Does anyone have experience making this work? - Device tree looks like it's available here: https://github.com/prophesee-ai/linux-sensor-drivers - I'm curious if there's a driver available for the Jetson, and if not, if anybody has tips to write one for
RAM usage builds up when EVK4 captures high event rates
Greetings! When building a recording application, I noticed that while the EVK4 faces conditions that result in high event rates being captured, the system memory (RAM) used by the application constantly increases but does not decrease
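A common cause of this pattern is an unbounded producer/consumer queue: at high event rates the camera callback enqueues batches faster than the writer drains them. One hypothetical mitigation (not from the Metavision API, just standard Python) is a bounded drop-oldest buffer, so memory stays capped and you can count how much was shed:

```python
from collections import deque
from threading import Lock

class BoundedEventBuffer:
    """Drop-oldest buffer so memory stays bounded at high event rates.

    If the consumer (disk writer, encoder) cannot keep up, the producer
    overwrites the oldest batches instead of letting RAM grow without
    bound; `dropped` counts how many batches were discarded.
    """

    def __init__(self, max_batches=256):
        self._dq = deque(maxlen=max_batches)  # deque drops oldest when full
        self._lock = Lock()
        self.dropped = 0

    def put(self, batch):
        with self._lock:
            if len(self._dq) == self._dq.maxlen:
                self.dropped += 1
            self._dq.append(batch)

    def get(self):
        with self._lock:
            return self._dq.popleft() if self._dq else None
```

Dropping data is only acceptable for previewing, of course; for lossless recording the fix is a faster consumer (or writing raw to disk and processing offline).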
Timestamp Jitter Range
Hi, I came across this website: https://support.prophesee.ai/portal/en/kb/articles/evk-latency. It mentions jitter and latency here. The latency is clearly mentioned for IMX636; however, I didn't find any measurement about the jitter time. May I ask what
Missing images in dataset structure for training YOLOV8 model
Hi, I have the following structure for my dataset to train a yolov8 model according to the documentation but I am getting the error: "AssertionError: train: No images found in /home/allen/code/proheese-camera/train_classification_model/train. Supported
Is there a Python version of the Simple Window using C++ Example?
I want to replicate this example using Python because I have very little experience working with C++. https://docs.prophesee.ai/stable/samples/modules/ui/simple_window.html Has someone done this before? Or is this something that Prophesee has provided
Readout Saturation Error - Events Flashing Spontaneously
Hi, I'm currently using the EVK4 Prophesee event camera, and am recording events via ros2 subscription into a rosbag recording. When I try to replay my recording, I notice that spontaneously (for different durations too), the events on the screen appear
GenX320 on KV260 boot magic number not found
Hello, I am trying to get the GenX320 camera base project working on a KV260 board, using the provided Petalinux image and simply following the instructions in https://docs.prophesee.ai/amd-kria-starter-kit/application/pipeline_setup.html . However, when
train_detection script not training properly (Warning: No boxes were added to the evaluation)
Hi, I am trying to use the train_detection.py script to train the model using the public FRED dataset (https://miccunifi.github.io/FRED/). I put the data in the correct structure, but when I start training, every epoch displays the message: Warning: No
Error trying to use metavision sdk on python
Hello, I have installed Metavision SDK 4.6 and created an Anaconda environment with Python 3.9. I am trying to load the Metavision packages; however, I always get the same error: "ImportError Traceback (most recent call last) ----> 5 from metavision_core.event_io
Metavision Sparse Optical Flow Questions
Hello, I am currently using the sparse optical flow algorithm and I have a few questions. As an object enters the FOV of the camera, its velocity is tracked as near zero. This is because, as the object moves into view, its "center" stays apparently
http 404 error in train_detection.py script
Hello, I am trying to run the train_detection.py script in https://docs.prophesee.ai/stable/samples/modules/ml/train_detection.html#chapter-samples-ml-train-detection and I run it with the "toy_problem" path as shown below: python train_detection.py .
Frequency / bias value matching table
Hi, I just found a table that relates bias_hpf and bias_fo to the high-pass and low-pass cut-off frequencies, but it is for Gen 3.1. Can you please share the table or the graphs (https://support.prophesee.ai/portal/en/kb/articles/bias-tuning-flow)
metavision_hal python package
Hi, does anyone know where the metavision_hal Python module is supposed to be located? I went through all the installation steps but I can't find that package. I originally got the error that the metavision_core Python module wasn't found, but then I copied
Impact of 10Hz Sync Signal on EVK4 + RGB Camera Synchronization Precision
Hello Prophesee Team, We are currently developing a multi-sensor data acquisition system using the Prophesee EVK4 alongside a standard RGB camera, and we require precise time synchronization between them. I have carefully reviewed the official synchronization
IMX636 internal clock PPM
Hello, I am trying to find information about the expected internal clock accuracy in PPM compared to a "perfect" clock. Online sources give average "standard crystal oscillators" somewhere around 20 PPM, which means around 12 seconds of drift per week. Does
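For reference, the back-of-envelope arithmetic behind the figure quoted above (this only bounds worst-case drift for a given oscillator tolerance; the IMX636's actual spec would have to come from its datasheet):

```python
def drift_seconds(ppm, elapsed_s):
    """Worst-case clock drift in seconds for an oscillator tolerance in PPM."""
    return ppm * 1e-6 * elapsed_s

week = 7 * 24 * 3600  # 604800 seconds
# 20 PPM over one week: 20e-6 * 604800 ≈ 12.1 s, matching the estimate above.
d = drift_seconds(20, week)
```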
Vibration estimation seems to have a weird bottleneck
Hi, I've been using the vibration estimation code that is provided, but I'm having some issues. The vibration estimation works fine, even for detecting high frequencies; however, I want to run the detection itself at a higher frequency (i.e., how often the algorithm
Stereo Calibration Depth Mapping
Hi, I am trying to use the metavision_stereo_metavision.py script to create a depth map using two synchronized recordings obtained from the metavision_sync.py script, but when I run the script I get a lot of NonMonotonicTimeHigh and InvalidVectBase errors