No module named 'metavision_sdk_base_paths_internal'
Hi! When I tried to run the Python tutorials, I hit a problem: No module named 'metavision_sdk_base_paths_internal'. Please see the attached photo below. I found some related files, but they are not named exactly the same as the module above. Could
DVS for Ultra high speed applications (Real time)
Dear Prophesee Team! Problem statement: I want to process the raw event (X, Y, P, T) data with a delta t of 4000 µs (250 FPS) in real time. Any processing (slow or fast) inside the main loop shouldn't affect the input stream of the camera. Below is the main loop
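A minimal sketch of the decoupling this entry asks about: the acquisition callback only enqueues buffers, and the slow work runs on a separate thread, so processing time never back-pressures the camera input. All names here are illustrative, not Prophesee SDK APIs.

```python
import queue
import threading

buf_q = queue.Queue(maxsize=64)
results = []

def on_events(event_buffer):
    """Camera callback: must return immediately, never block the driver thread."""
    try:
        buf_q.put_nowait(event_buffer)
    except queue.Full:
        pass  # drop buffers if the consumer lags behind

def worker():
    while True:
        buf = buf_q.get()
        if buf is None:           # sentinel: shut down
            break
        results.append(sum(buf))  # stand-in for arbitrarily slow processing

t = threading.Thread(target=worker)
t.start()
for i in range(5):
    on_events([i, i + 1])         # simulate 4 ms event slices arriving
buf_q.put(None)
t.join()
print(results)  # [1, 3, 5, 7, 9]
```

The bounded queue makes the drop policy explicit: when processing falls behind, buffers are discarded at the producer side instead of stalling acquisition.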
Xorg blank screen issue: Kria + IMX636
I am trying to run the kv260-psee application with HDMI connected between the Kria and a display monitor. Following the instructions at https://docs.prophesee.ai/amd-kria-starter-kit/application/app_deployment.html, I could power on the sensor but
Using the IMX636 + MIPI CSI-2 with Jetson Orin instead of AMD Kria
Does anyone have experience making this work? - Device tree looks like it's available here: https://github.com/prophesee-ai/linux-sensor-drivers - I'm curious if there's a driver available for the Jetson, and if not, if anybody has tips to write one for
Stereo Calibration
I am trying to add a stage to a stereo calibration pipeline. This stage must receive the two streams of events coming from two Prophesee cameras, filter them, and let through only the events coming from both cameras having the
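One way to sketch the filtering this entry describes is a temporal-coincidence test: keep a stream-A event only if stream B fired within some tolerance. This is a NumPy illustration under that assumption, not a Metavision pipeline stage.

```python
import numpy as np

def coincident_mask(t_a, t_b, tol_us=100):
    """Mask over stream-A timestamps that have at least one stream-B
    timestamp within tol_us microseconds (crude coincidence filter)."""
    t_b = np.sort(t_b)
    idx = np.searchsorted(t_b, t_a)
    # distance to the nearest B event on each side, clipped at the ends
    prev_d = np.abs(t_a - t_b[np.clip(idx - 1, 0, len(t_b) - 1)])
    next_d = np.abs(t_a - t_b[np.clip(idx, 0, len(t_b) - 1)])
    return np.minimum(prev_d, next_d) <= tol_us

t_a = np.array([100, 500, 905])
t_b = np.array([110, 900])
print(coincident_mask(t_a, t_b, tol_us=50))  # [ True False  True]
```

A real stage would apply the mask per spatial neighborhood as well; this only shows the timestamp half of the problem.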
How to read .h5 file format in Metavision 4.4.0
Hello, I am trying to read the DSEC dataset (Download – DSEC (uzh.ch)) with Metavision 4.4.0. The dataset is supposed to be from a Prophesee Gen3 sensor, and the file format provided is .h5. From my understanding, .h5 and .hdf5 have the same properties, and
When to use SDK vs HAL
After reading through the tutorials, it seems to me that many things that can be achieved through the HAL API can also be done through the SDK. For example, opening a camera and adding a callback for events can be done using the HAL (e.g. the HAL Viewer sample)
ModuleNotFoundError: No module named 'metavision_core'
Hello, I have installed the Metavision SDK on Ubuntu 18.04 successfully, and Metavision Studio works well. I have also set up the Anaconda environment according to the instructions at https://docs.prophesee.ai/stable/installation/linux.html. However, when I
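A quick way to diagnose this class of `ModuleNotFoundError` is to ask Python where (or whether) it can find the package at all — if the answer is `None`, the SDK's Python path is not visible to the interpreter in use (e.g. `PYTHONPATH` not set inside the conda environment). Generic Python, not an SDK tool:

```python
import importlib.util

def locate_module(name):
    """Return the file a module would be imported from, or None if unfindable."""
    spec = importlib.util.find_spec(name)
    return getattr(spec, "origin", None) if spec else None

# If this prints None, the current interpreter cannot see the SDK packages.
print(locate_module("metavision_core"))
```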
Duplicate events in the event stream
Hi, I'm using an EVK4 HD for event data collection. I find that there are several duplicate CD events (i.e. events identical in pixel location, timestamp, and polarity) in the event stream. Are these duplicate events redundant, so that I can remove them without any
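If the events are held in a NumPy structured array (the layout is an assumption here, mirroring the (x, y, p, t) fields mentioned above), exact duplicates can be dropped with `np.unique`, which removes identical rows of a structured array:

```python
import numpy as np

# Hypothetical CD event dtype with x, y, polarity, timestamp fields.
ev_dtype = np.dtype([("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")])

events = np.array(
    [(10, 20, 1, 1000), (10, 20, 1, 1000), (11, 20, 0, 1001)],
    dtype=ev_dtype,
)

# np.unique on a structured array removes exact duplicate rows
# (note: it also sorts the result lexicographically by field order).
deduped = np.unique(events)
print(len(events), len(deduped))  # 3 2
```

Since `np.unique` sorts, re-sort by `t` afterwards if the original field order matters for downstream processing.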
Load intrinsic and extrinsic parameters into the EVK4 using Python
Hello everyone, I have recently been working on a project where I use the EVK4 event-based camera along with a regular RGB camera to capture two types of data simultaneously. To ensure that the view and surface of the objects we shoot remain consistent,
Quantification of active pixels/s.
Hi, I would like to know if there is any code from Metavision to calculate the active pixels/second in a recording. I was thinking of converting my recording to frames by using the Frame Generator and then, having frames with a certain accumulation
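Without going through the Frame Generator, one could count distinct active pixel addresses per time window directly on the event arrays. A sketch, assuming plain NumPy arrays of x, y, and microsecond timestamps:

```python
import numpy as np

def active_pixels_per_window(x, y, t, window_us=1_000_000):
    """Number of distinct pixel addresses that fired in each time window."""
    order = np.argsort(t)
    x, y, t = x[order], y[order], t[order]
    bins = t // window_us
    # pack (x, y) into a single integer address so np.unique can count pairs
    addr = x.astype(np.int64) << 16 | y.astype(np.int64)
    counts = {}
    for b in np.unique(bins):
        counts[int(b)] = len(np.unique(addr[bins == b]))
    return counts

x = np.array([5, 5, 6, 5])
y = np.array([7, 7, 7, 7])
t = np.array([100, 200, 300, 1_000_100])
print(active_pixels_per_window(x, y, t))  # {0: 2, 1: 1}
```

Counting event rate instead of active-pixel rate would just be `np.bincount(bins)`; the `np.unique` step is what collapses repeated firings of the same pixel.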
Unable to locate the path to necessary DLLs for Metavision Python bindings
I am trying to follow the tutorial 'RAW File Loading using Python' (RAW File Loading using Python — Metavision SDK Docs 4.4.0 documentation (prophesee.ai)), but when I launch the program, it shows the following error: Unable to locate the path to necessary
Vibration estimation seems to have a weird bottleneck
Hi, I've been using the vibration estimation code that is provided, but I'm having some issues. The vibration estimation works fine even for detecting high frequencies; however, I want to run the detection itself at a higher frequency (so how often the algorithm
Filtering the event video
Hi, after filtering the raw event video using various filtering algorithms as shown in your code examples, is it possible to save the filtered events as a RAW file?
Feature tracking algorithm
Hi, is there a feature tracker specifically made for, or compatible with, a Prophesee event camera? I want to compare such an algorithm to a conventional KLT tracker. Does Metavision or a third-party company provide such an algorithm? Kind regards, Rik van
Skipping events, based on the performance problems
Hello, Prophesee Team! Let's say we have a simple pipeline: auto& cam_stage = p.add_stage(std::make_unique<Metavision::CameraStage>(std::move(cam), event_buffer_duration_ms)); auto& algo_stage = p.add_stage(std::make_unique<AlgoStage>(), cam_stage); auto&
No module named 'metavision_sdk_base_paths_internal'
Hi, Prophesee team. I want to install the Metavision SDK on Ubuntu 22.04, and I installed it following the installation document. But when I try to run the Python tutorials, I hit a problem: No module named 'metavision_sdk_base_paths_internal'. My Python version
Error while compiling metavision_sdk_get_started
Hi, I get an error while trying to run metavision_sdk_get_started.py. PS C:\Users\ahamm\Documents\MA4\Project_event_camera\Code\PROPHESEE\core> python metavision_sdk_get_started.py Traceback (most recent call last): File "C:\Users\ahamm\Documents\MA4\Project_event_camera\Code\PROPHESEE\core\metavision_sdk_get_started.py",
Event Frame GPU Loading Sample using C++
Hello, I am trying to run the Event Frame GPU Loading Sample using C++ and I am getting an unexpected error. I installed CUDA 11.7, and I am running on Windows. I am following the steps stated here: Event Frame GPU Loading Sample using C++ — Metavision
Event camera acquisition on Jetson Orin
Hello, I would be interested in making event camera recordings with a Jetson Orin: 945-13730-0005-000 | NVIDIA Jetson AGX Orin Developer Kit | RS (rs-online.com) and an Event Camera Evaluation Kit 4 HD IMX636 Prophesee-Sony. How can
Extrinsic Calibration Issue- Unable to detect LEDs
Hi, I've been trying to use an Arduino 2560 to blink 2 LEDs at different frequencies (150 and 200 Hz) - code attached below. However, it seems the LEDs aren't being detected by the Prophesee EVK4 camera (bias settings are at default, recording attached) when
LIBUSB_TRANSFER_ERROR after running for sometime
Camera: EVK4. Environment: Ubuntu 22.04, OpenEB 4.6.2, libusb 1.0.0. Problem: We run the USB camera for long periods, but sometimes get the following messages: [HAL][ERROR] ErrTransfert [HAL][ERROR] LIBUSB_TRANSFER_ERROR [HAL][ERROR] ErrTransfert [HAL][ERROR]
OpenEB 4.3.0 on Ubuntu 20.04, compiling C++ examples
Hello Prophesee Team, I have compiled OpenEB 4.3.0 on arm64 Ubuntu 20.04 using the guide https://docs.prophesee.ai/4.3.0/installation/linux_openeb.html and have used Option 1 - working from the build folder. The test suite (ctest -C Release) runs without errors.
Are Metavision SDK CMake files missing from the KV260 image
Are the Metavision SDK CMake (.cmake) files missing from the KV260 PetaLinux image? Same question for OpenCV. I am attempting to build some of the examples, and CMake cannot find MetavisionSDKConfig.cmake. I have attempted to locate any CMake files
Questions about 3d model tracking
Hello, I'm learning to use the metavision_model_3d_tracking program. Because the 3d model should be in JSON format, I have questions about this data format. The program needs an "object_init_pose.json" file and an "object.json" file. However, if I convert
Installation of free Metavision sdk 4.6.2
metavision-sdk-analytics-bin : Depends: libopencv-core4.5d (>= 4.5.4+dfsg) but it is not installable Depends: libopencv-highgui4.5d (>= 4.5.4+dfsg) but it is not installable Depends: libopencv-imgproc4.5d (>= 4.5.4+dfsg) but it is not installable Depends:
There is a very obvious display delay when developing our data collection software
While developing our data collection software with five EVK4 cameras, we ran into a display problem. We have contacted the local Prophesee support, and they told us to test the provided sample code (metavision\sdk\cv\samples\metavision_noise_filtering).
MVTec HALCON Camera Not recognized evk4
Following https://support.prophesee.ai/portal/en/kb/articles/mvtec-halcon-acquisition-interface: the HALCON version I downloaded is 22.11, and the camera is not recognized. I am on Metavision SDK 4.1. My tests pass, but the software still cannot recognize the EVK4 camera. Do I need to install another plugin?
Issues of EVT2 RAW File encoder
Hi, when I used metavision_evt2_raw_file_encoder to encode a CSV file into an EVT2-format RAW file, I found that the timestamps of the events in the RAW file were different from those in the original CSV file. There is a difference of tens of microseconds
Sparse Optical Flow in C++ Speed information
To extract the speed information using metavision_sparse_optical_flow, I changed some lines of code as described on the website, adding the function void writeCSV. This function takes the flow output and writes it out in CSV format. I try to run the
Converting Raw Files to Video - Questions regarding FPS
In the Metavision Studio application, you can export videos at varying frame rates. Regardless of the frame rate chosen, the resulting video is rendered at 30 FPS. This results in a slow/fast motion effect depending on whether the FPS is more or less
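If the behavior described in this entry is accurate (frames generated at the chosen export rate, container always played at 30 FPS), the slow/fast effect follows from simple arithmetic:

```python
def playback_speed_factor(export_fps, container_fps=30.0):
    """Apparent playback speed when frames generated at export_fps
    are stored in a container that always plays at container_fps."""
    return container_fps / export_fps

# 250 FPS export played back at 30 FPS -> 0.12x (slow motion)
print(playback_speed_factor(250))  # 0.12
# 10 FPS export played back at 30 FPS -> 3.0x (fast motion)
print(playback_speed_factor(10))   # 3.0
```

Real-time playback therefore requires the export frame rate to match the container frame rate, or a later re-timing of the video.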
ROS node with standard CPU architecture
Hello, I am trying to use the EVK4 to make a ROS node for a larger system. I am looking to build the ROS node on a PC with a traditional x86 Intel-i7 processor. I saw on the ROS node website (GitHub - prophesee-ai/prophesee_ros_wrapper: ROS driver for
noise filtering post recording
I have an object that has an intrinsic blinking that can be captured with the EVK3, but the blinking is excited by a vibrating light. The vibrations have a known frequency, and I'm wondering if there is any way to filter this frequency after recording
Why do the OFF events come first in a moving object?
Hello, in this example the spinner end is spinning from left to right (clockwise). Why are the first events OFF events? The contour of ON events also does not look round, so I was wondering how the camera records the movement.
Camera connection error: [SERVER] - stderr - [HAL][ERROR] Unable to open device
I connected an IMX636 to my computer, ran Metavision Studio (SDK 3.1.2) and clicked "open a camera". However, cmd displays the error message "[SERVER] - stderr - [HAL][ERROR] Unable to open device". This error is reported a total of 4 times,
Optical Flow Center-X and Center-Y always 0
I'm running into a strange problem: I am trying to process optical flow data and plot position over time using the center_x and center_y values. For some reason, the center_x and center_y values are all essentially zero, with some floating
Has anyone implemented a driver for the event camera GENX320-CM2 on Raspberry Pi?
Hello, has anyone implemented a driver for the GENX320-CM2 event camera on the Raspberry Pi? If so, could you share some sample driver code for my reference? I have recently been trying to drive an event camera on an RK3568, but
Sparse Optical Flow
Is there a publication where I can read how the algorithm used for sparse optical flow computation works?
Request for Information on Video Deblurring Technique from Prophesee YouTube Demo
Hi everyone, I recently watched a video on Prophesee's official YouTube channel (link) demonstrating video deblurring. It showcased a method combining a CMOS camera's output with an event sensor's stream. Unfortunately, I couldn't find any related samples
Recording events straight to csv
While digging through the documentation, examples, and source code, I found a rather poorly documented feature of the python HAL API, and I am wondering if it works the way I think it does. I have tried a few examples and had some promising results, but