Using OpenEB expelliarmus to decode data
Hi Community, I was trying to decode the raw files from the camera using expelliarmus and put the data into a pandas df. What is the timestamp scale from it? When I use the raw-to-CSV Python scripts I get different results and the timescale
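For context on the timestamp question: Metavision RAW timestamps are expressed in microseconds since the start of the stream, so a decoded `t` column can be converted by treating it as µs. A minimal sketch with a synthetic structured array (the field names and dtypes are assumptions for illustration, not necessarily what expelliarmus returns):

```python
import numpy as np
import pandas as pd

# Hypothetical decoded chunk; field names/dtypes are assumptions for illustration.
events = np.array(
    [(0, 10, 20, 1), (1_000_000, 11, 21, 0)],
    dtype=[("t", "<i8"), ("x", "<u2"), ("y", "<u2"), ("p", "<i2")],
)
df = pd.DataFrame(events)
# If t is microseconds since stream start, dividing by 1e6 gives seconds:
df["t_s"] = df["t"] / 1e6
# or a pandas timestamp relative to an epoch of your choosing:
df["t_dt"] = pd.to_datetime(df["t"], unit="us")
```

Comparing the decoded `t` column against the raw-to-CSV output under this µs assumption should reveal whether the discrepancy is just a unit-scale difference.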
Converting Raw Files to Video - Questions regarding FPS
In the Metavision Studio application, you can export videos at varying frame rates. Regardless of the frame rate chosen, the resulting video is rendered at 30 FPS. This results in a slow/fast motion effect depending on whether the FPS is more or less
Active pixels behind the moving object - Pixel latency
Hi, I'm doing an experiment on a rotating black dot on a white background, with backlight to maximize contrast. In all cases I get these ON events after the movement of the dot, which in this case is going clockwise; this actually looks like a tail
Assistance Needed in Creating Dataset from Golf Shot Recordings
Hello, I'm currently working on a university project where I've encountered difficulties in creating a dataset. My project involves analyzing golf shots, for which I've recorded a large quantity of strikes, specifically focusing on the moment of impact
noise filtering post recording
I have an object that has an intrinsic blinking that can be captured with the EVK3, but the blinking is excited by a vibrating light. The vibrations have a known frequency, and I'm wondering if there is any way to filter this frequency after recording
The quickest recording of a range of events
Hello Prophesee Team, I want to write a certain amount of events (200 ms, several pipeline slices) before a dynamically calculated point in time. I always save the last 5 pointers to event buffers in a deque, and use them to write the events when I need them. WriteCSV()
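The deque approach described above can be sketched as follows, with synthetic numpy buffers standing in for SDK event buffers (the buffer layout, field names, and CSV format here are assumptions for illustration):

```python
from collections import deque
import csv
import numpy as np

MAX_SLICES = 5  # keep only the last 5 event-buffer slices

recent = deque(maxlen=MAX_SLICES)  # older slices fall out automatically

def on_buffer(buf):
    # Copy: SDK-owned buffers are typically reused after the callback returns.
    recent.append(np.copy(buf))

def write_last_events_csv(path):
    # Concatenate the retained slices and dump them to CSV.
    events = np.concatenate(list(recent))
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["x", "y", "p", "t"])
        writer.writerows(events.tolist())

# Synthetic usage: 7 slices pushed, only the last 5 are retained.
dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
for i in range(7):
    on_buffer(np.array([(i, i, 1, i * 1000)], dtype=dtype))
write_last_events_csv("last_events.csv")
```

The copy inside the callback matters: holding raw pointers to buffers the SDK may recycle is a common source of corrupted output.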
Pipelines Supported Algorithms
Hello, I was wondering what algorithms are supported as part of the data pipeline system? We are looking at using this to test out multi-threading by converting the Event Frame Generation Sample into one using a pipeline approach. From what I have been
Unable to locate the path to necessary DLLs for Metavision Python bindings
I am trying to follow the tutorial 'RAW File Loading using Python' (RAW File Loading using Python — Metavision SDK Docs 4.4.0 documentation (prophesee.ai)), but when I launch the program, it shows the following error: Unable to locate the path to necessary
How to read camera data using fpga
We hope to use an FPGA to read the camera data instead of the SDK. Is there any relevant information or documentation?
Getting GPU device error when running the detection pipeline
Hi team, I am trying to run the detection pipeline on my system. The command I am entering is python3 sdk/ml/python_samples/detection_and_tracking_pipeline/detection_and_tracking_pipeline.py --object_detector_dir ~/Documents/pre-trained_models/metavision_sdk_3.x/red_event_cube_05_2020/
Data transmission & Processor
Hello, According to the data announced on your site, the equivalent temporal precision of the event technology is over 10,000 fps. Is this value fixed, or can it decrease in the most unfavorable cases? For example, we have a VGA EVK1 with a resolution
No plugin found error on Jetson orin nano
Hi, I successfully compiled OpenEB 4.3.0 on my Jetson Orin Nano (arm64 Cortex, Ubuntu 20.04) and launched 'metavision_viewer' from the 'build' folder. I plugged my EVK3 into the Jetson, but I get the warnings 'no plugin found' and 'Metavision
LIBUSB_ERROR_TIMEOUT in Ubuntu (VMware Virtual Machine)
Hi! When I launch this example in VMware Virtual Machine - Ubuntu OS, I receive this error: "[HAL][ERROR] Error while opening from plugin: hal_plugin_gen31_evk3 [HAL][ERROR] Integrator: Prophesee [HAL][ERROR] Device discovery: Metavision::TzCameraDiscovery
python SDK samples problem
Dear All, I'm trying to interface with an EB vision camera by Imago (Linux image 1.3b) using Python and Metavision 2.3.1 on Windows 10. As far as I understand, the HAL plugin provided by Imago is linked against Metavision 2.3.1, so I believe I'm stuck
Mv_iter stuck when no events occurred
Hello, I am using the Metavision recorder (https://support.prophesee.ai/portal/en/kb/articles/how-to-record-data-with-metavision-recorder), and noticed that when no events are received from the camera (for example, after using a small ROI and strict biases),
Metavision interface for Mvtec Halcon
There is a problem installing the MVTec HALCON acquisition interface for the Prophesee camera. The latest version of your SDK doesn't match your DLL file for the interface. These attachments work only for version 4.1 and not for 4.3. Is it possible to
Overheating Gen4.1 sensor
Hi, I've been doing experiments with the camera where the event rate is always around 100 Mev/s. After around 10 minutes of use, the camera starts overheating and the Metavision Studio app stops suddenly. Could I have your opinion on this, please? Since
Discrepancy viewing vs processing raw file
The code below is meant to produce 2 black and white images, one for positive events and one for negative events event_iterator.reader.reset() # Reset the reader event_iterator.start_ts = int(1e6) event_iterator.delta_t = int(1e6) imgp = np.zeros((height,
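A self-contained version of what that snippet appears to be doing, splitting one event slice into a positive-event image and a negative-event image (the sensor resolution, event values, and field names here are assumptions for illustration):

```python
import numpy as np

height, width = 480, 640  # assumed sensor resolution

# Hypothetical decoded events for one time window: (x, y, polarity).
events = np.array(
    [(10, 5, 1), (11, 5, 0), (10, 6, 1)],
    dtype=[("x", "<u2"), ("y", "<u2"), ("p", "<i2")],
)

imgp = np.zeros((height, width), dtype=np.uint8)  # positive events
imgn = np.zeros((height, width), dtype=np.uint8)  # negative events
pos = events["p"] == 1
# Note the (row, col) = (y, x) indexing order: a common source of
# discrepancies between a viewer and hand-rolled processing code.
imgp[events["y"][pos], events["x"][pos]] = 255
imgn[events["y"][~pos], events["x"][~pos]] = 255
```

If the viewer and this kind of script disagree, checking the (y, x) indexing order and the start_ts/delta_t window alignment is usually the first step.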
Recording events straight to csv
While digging through the documentation, examples, and source code, I found a rather poorly documented feature of the Python HAL API, and I am wondering if it works the way I think it does. I have tried a few examples and had some promising results, but
Strange Behavior recording raw files in quick succession
Hello Again, I apologize if I am spamming this forum; I've gone through the documentation, but I'm still running into some strangeness I cannot explain. I am trying to record several raw files in quick succession. I'm doing this with the HAL Python library
Event camera acquisition on Jetson Orin
Hello, I would be interested in making event camera recordings with a Jetson Orin (NVIDIA Jetson AGX Orin Developer Kit, RS part 945-13730-0005-000) and an Event Camera Evaluation Kit 4 HD IMX636 Prophesee-Sony. How can
Issue with Importing 'EventBbox' in Metavision SDK 4.1.0 While Using Machine Learning Labeling Tool
Hello, I am currently working with the Machine Learning Labeling Tool. While executing the 'label_tracking.py' file, I encountered an import error stating that 'EventBbox' could not be imported from 'metavision_sdk_ml'. Could you kindly assist me in resolving
Help understanding the difference in response between positive and negative events.
Hello everyone, I've been working with an EVK3 as part of my PhD thesis. In order to quantify the effect that the ON and OFF biases have on the response of the camera, I have set up a simple experiment that uses the camera to measure responses to a pulsing
Inference Pipeline of Detection and Tracking using C++
Hi! I downloaded the LibTorch 1.13.1 for CUDA 11.7 folder. I'm trying to compile this example: Inference Pipeline of Detection and Tracking using C++ — Metavision SDK Docs 4.0.1 documentation (prophesee.ai). When I ran this line: cmake .. -DCMAKE_PREFIX_PATH=`LIBTORCH_DIR_PATH`
Image generated by accumulating events over 40 µs is aliased on Gen4
Hi, I was trying to capture a really fast event that lasts a few microseconds. After recording, I tried plotting the image by accumulating events in a 40 µs time window. I see that the images are getting formed row-wise. I have attached snapshots of the
3~4ms temporal interval between event callbacks
I'm testing the speed of callbacks using the following code (the whole code is based on your HAL sample):
if (i_cddecoder) {
    // Register a lambda function to be called on every CD event
    auto start_t = std::chrono::system_clock::now();
Event rate plotting
Hello, I recently viewed the demo video on your website that showcases plotting the event rate alongside real-time video. You can reference the video at: https://youtu.be/fL_kOaDPzjQ. I've been trying to replicate this feature and was searching for the
What is the correct way to handle the device object lifetime
I have written a class that wraps the RawReader class. I instantiate the reader as described in the documentation by first creating a device, setting all of the hal properties for the device, creating the reader from the device, and returning the reader.
Bias Bandwidth
Hi, As far as I understand, the bias bandwidth is a parameter that can be changed to avoid high- or low-lighting fluctuations, but it will also affect the bandwidth of the camera, which restricts the maximum events per second that can be processed.
Minimum accumulation time to visualize
Hello, As you know, the accumulation time is the variable used with events to approximate the correspondence to a certain frame rate. I would like to know the minimum accumulation time that I can use to visualize the
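On the accumulation-time question above: rendering one frame per accumulation window implies an equivalent frame rate of 1 / accumulation_time, so a 10 ms window corresponds to roughly 100 FPS. A trivial helper (the function name is made up for illustration):

```python
def equivalent_fps(accumulation_time_us):
    """Frame rate implied by rendering one frame per accumulation window."""
    return 1e6 / accumulation_time_us

print(equivalent_fps(10_000))  # 10 ms window -> 100.0
print(equivalent_fps(40))      # 40 us window -> 25000.0
```

This only gives the nominal correspondence; the practically usable minimum window also depends on event rate and display pipeline, which the SDK documentation covers.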
Accumulation time below 100us tracking_spatter.py
Hi, I'm doing some experiments to evaluate the performance of the EVK2, and for that I'm tracking some dots that are rotating. I used the tracking_spatter.py algorithm from Prophesee, where I need to set the accumulation time, which is also going to be
Pipeline stage that outputs the coordinates of undistorted events
Hello. I am referring to the sample code (https://docs.prophesee.ai/stable/samples/modules/cv/undistortion.html#chapter-samples-cv-undistortion) and, using pipeline processing of events, I am trying to correct the distortion of the image output from the
What is the default bias value of IMX636? And how to calculate the events based on the bias?
We read the default biases with the provided software and obtained default pos_thres = 102%, neg_thres = 73%. It's very curious. If we set the intensities of two images as I1 and I2, we can get a positive event only if I2 - I1 > 102% * I1. That seems unreasonable.
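One caveat on the formula in that question: event-camera contrast thresholds are conventionally defined in the log-intensity domain, i.e. an ON event fires when log(I2) − log(I1) exceeds a threshold, not when I2 − I1 > θ·I1 linearly; the percentages a bias tool reports may also be relative to a nominal bias setting rather than to intensity. A sketch of the common log-domain model (an idealized textbook pixel, not necessarily Prophesee's exact definition):

```python
import math

def ideal_pixel_event(i1, i2, theta_on, theta_off):
    """Idealized DVS pixel: +1 / -1 when the log-intensity change
    crosses the ON / OFF threshold, else 0 (no event)."""
    dlog = math.log(i2) - math.log(i1)
    if dlog >= theta_on:
        return 1
    if dlog <= -theta_off:
        return -1
    return 0

# A 25% linear contrast step corresponds to log(1.25) in the log domain:
theta = math.log(1.25)
```

Under this model a 102% linear reading would be implausible, which supports interpreting the tool's percentages as relative to a nominal threshold setting.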
Band Pass filter bias units
I have an EVK3 and I'm trying to understand the units of the fall-off and high-pass filter settings. I'm hoping I can use this to target a specific frequency of motion if I know the expected frequency ahead of time. For instance, suppose I have a light
Relation of absolute bias values of Gen4.1 and IMX636
Hello, I read in the forum: "Setting the logging level to TRACE as described here : https://docs.prophesee.ai/3.0.1/faq.html#how-can-i-change-the-logging-level-at-runtime should help you to see absolute bias values on your Gen4.2 sensor (IMX636) and the
Where to get EVK4's Sensor Characterization?
Hi, Prophesee! I now have an EVK4, but I need to know its sensor characterization, such as latency, shot noise rate, threshold, etc. At this link, https://support.prophesee.ai/portal/en/kb/prophesee-1/metavision-sensing/sensors, I found the documents for
How to save files (of my EVK4) by Python API?
Hello, I can use `metavision_core.event_io.EventsIterator` to capture event signals, but how can I save them to a RAW or HDF5 file with the Python API?
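Absent a dedicated writer in a given SDK version, one generic fallback (a workaround, not an official Metavision API) is to accumulate the numpy event chunks the iterator yields and save them yourself, e.g. as an .npz file. Synthetic chunks stand in for the iterator output here; the field names and dtypes are assumptions:

```python
import numpy as np

# Hypothetical chunks, as an EventsIterator-style loop would yield them.
dtype = [("x", "<u2"), ("y", "<u2"), ("p", "<i2"), ("t", "<i8")]
chunks = [
    np.array([(10, 20, 1, 0)], dtype=dtype),
    np.array([(11, 21, 0, 1000)], dtype=dtype),
]

# Concatenate and persist; structured numeric arrays round-trip cleanly.
events = np.concatenate(chunks)
np.savez_compressed("events.npz", events=events)

# Reload later:
loaded = np.load("events.npz")["events"]
```

This keeps the full event stream but loses the RAW container's metadata, so it is only a stopgap if your SDK version lacks a native recording call.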
Stereo Calibration
I am trying to add a stage to a stereo calibration pipeline. This stage must receive two streams of events coming from two Prophesee cameras, filter them, and let through only the events coming from both cameras having the
Stage as class member
Hello, Prophesee Team! Small question, I have searched all examples, and everywhere you have: Metavision::Pipeline p(true); auto &cam_stage = p.add_stage(std::make_unique<Metavision::CameraStage>(std::move(camera), ebd_ms)); but how would you properly
Feature tracking algorithm
Hi, Is there a feature tracker specifically made or compatible with a prophesee event camera? I want to compare such an algorithm to a conventional KLT tracker. Does metavision or a third-party company provide such an algorithm? Kind regards, Rik van