Bypass bias range check in Python
Hi all, do you know how we can bypass the bias range check with the Python API? Please provide an example; the documentation has none and I am lost.
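A rough starting point, assuming your SDK version exposes the bypass on the HAL DeviceConfig; the method name enable_biases_range_check_bypass is an assumption to verify against your metavision_hal Python API reference:

```python
# Minimal sketch (verify that enable_biases_range_check_bypass exists in your bindings).
from metavision_hal import DeviceDiscovery, DeviceConfig

config = DeviceConfig()
config.enable_biases_range_check_bypass(True)   # lift the recommended-range check

device = DeviceDiscovery.open("", config)       # "" = first available camera
ll_biases = device.get_i_ll_biases()
ll_biases.set("bias_diff_off", 120)             # example value outside the default range
print(ll_biases.get_all_biases())
```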
Error while compiling metavision_sdk_get_started
Hi, I have an error while trying to compile the file metavision_sdk_get_started. PS C:\Users\ahamm\Documents\MA4\Project_event_camera\Code\PROPHESEE\core> python metavision_sdk_get_started.py Traceback (most recent call last): File "C:\Users\ahamm\Documents\MA4\Project_event_camera\Code\PROPHESEE\core\metavision_sdk_get_started.py",
Dense Optical Flow Estimation - Speed Values
Hello, I am using the dense optical flow code (https://docs.prophesee.ai/stable/samples/modules/cv/dense_flow_py.html) to estimate the speed and position of water drops while spraying. I am using the triplet matching method. A part of the results
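For the speed part, a hedged sketch: the flow events carry 'vx' / 'vy' fields in pixels per second, so converting to a physical speed only needs a scene-dependent pixel-to-metre scale (the value below is a placeholder for your optics and working distance):

```python
import numpy as np

METERS_PER_PIXEL = 50e-6   # placeholder: depends on lens and distance to the spray

def speeds_m_per_s(flow_events, meters_per_pixel=METERS_PER_PIXEL):
    """Speed magnitude of each flow event in m/s."""
    v_px = np.hypot(flow_events['vx'], flow_events['vy'])   # pixels / s
    return v_px * meters_per_pixel
```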
Different definitions of contrast
Hello, I was wondering why contrast is defined differently for the IMX636 sensor and the GENX320 (the two datasheets give different formulas). If we want to plot the S-curve, what influences the choice between plotting pixel response probability as a function of log-contrast
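For reference (general definitions, not taken from either datasheet): the two conventions usually in play are linear (Weber) contrast and log contrast, related by

```latex
C_{\mathrm{lin}} = \frac{I_1 - I_0}{I_0},
\qquad
C_{\mathrm{log}} = \ln\frac{I_1}{I_0} = \ln\!\left(1 + C_{\mathrm{lin}}\right),
```

so the same S-curve can be plotted against either axis, only with a nonlinear relabelling of the contrast scale.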
How to select event data format to generate on GENX320
Hello, I'm using GENX320 and I'd like to generate events in the EVT 2.0 format instead of the default EVT 2.1, so that I can use the EVT-to-CSV decoding functions from the SDK. I can't find this information in the datasheet and I can't change the data format in Metavision
How to use I_ROI in Python
I am trying to do some blob detection and then use that to set I_ROI. Can you help me with this approach and the code?
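A rough sketch of the ROI part; the facet names (get_i_roi, I_ROI.Window, set_window, enable) are taken from the HAL API and assumed to match your Python bindings:

```python
from metavision_hal import DeviceDiscovery, I_ROI

device = DeviceDiscovery.open("")   # first available camera
i_roi = device.get_i_roi()

# x, y, width, height would come from your blob-detection step (placeholder values here)
x, y, w, h = 320, 240, 100, 80
i_roi.set_window(I_ROI.Window(x, y, w, h))
i_roi.enable(True)
```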
Sample not running in new version
Hello, with the new Metavision SDK release (4.6) I am unable to open samples (e.g. metavision_time_surface.py), but with version 4.53 there is no problem. The following error is shown: ImportError: DLL load failed while importing _errors: The specified
Quantifying Signal Strength
I'm trying to quantify the strength of a signal at different accumulation periods, from 100us up. Typically I would use SNR for this, but in the case of an event sensor, that metric is mostly irrelevant. Can someone recommend a strong candidate for quantifying
Why do the OFF events come first in a moving object?
Hello, in this example the spinner end is spinning from left to right (clockwise). Why are the first events OFF events? The contour of ON events also does not look round, so I was wondering how the camera records the movement.
Ask for help
Hello, I would like to ask: what is the difference between setting bias_diff_off to -20 and to 20?
How is equivalent frame rate calculated?
Hello and good day, as far as I know frame rate is not a defined parameter for event-based cameras, but is there a formula to estimate how fast a change in the scene can be detected? On Prophesee's website, more than 10000 fps is mentioned as equivalent temporal
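A common back-of-the-envelope relation (not an official Prophesee formula) is simply the reciprocal of the temporal resolution:

```latex
f_{\mathrm{eq}} \approx \frac{1}{\Delta t},
\qquad
\Delta t = 100\,\mu\mathrm{s} \;\Rightarrow\; f_{\mathrm{eq}} \approx 10\,000\ \mathrm{fps}.
```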
"Passthrough Not Supported, GL Disabled, ANGLE Error in Prophesee on Windows"
Dear Sir or Madam, I'm encountering an issue when running Prophesee on my Windows machine. The error message I receive is: ``` C:\Program Files\Prophesee>[11660:0516/100937.361:ERROR:gpu_init.cc(486)] Passthrough is not supported, GL is disabled, ANGLE
Flickering bars on event stream
I have a Prophesee EVK4 camera and I'm noticing a strange occurrence when attempting to record a stream with the camera face down. There is constant bar flickering across the stream. Most other orientations are completely fine; it only falls apart with
Request camera information
Hello, the picture shows my camera's information. I would like to know what the unit of this camera's contrast threshold is, and what the range of each parameter is. Thank you.
EVT3 NonMonotonicTimeHigh error happened
Hi all, I'm trying to use four EVT3 cameras on one PC. I use Metavision Studio version 4.5 to open the four EVT3 cameras. Sometimes a NonMonotonicTimeHigh error occurs; it happens randomly and not always on the same camera. I found this document on the Prophesee website,
GenericProducerAlgorithm::is_done() does not change status automatically
Hello, I use Metavision::Camera for reading from a file or a camera, and GenericProducerAlgorithm to get events (in the same way as in the metavision_player example). Now I want to close the application when all events have been read from the file and processed
RCT value of GENX320
Hello, I have a question regarding the RCT (Ramp Contrast Threshold) value of the GENX320 from the datasheet. I don't understand how RCT = 50% if NCT = 30%. Let's suppose that NCT = 30%. Using the values of the experimental setup for the RCT measurement, if
Using Prophesee camera for paint thickness estimation
Hello, I want to use the Prophesee camera, EVK4, for estimating paint thickness while spraying. Is it possible to measure the size and velocity of the tiny paint particles using your camera? Is there any code or instruction that I can use? Thank
Multiple DVS synchronization (more than 1 slave)
Hello everyone, Our team plans to synchronize more than one slave DVS (with one master and three slaves). We've followed the recommendations for multiple DVS synchronization, but during testing, we found that only one slave is precisely synced with the
ColorPalette in metavision_simple_viewer.py
Hi all! When I change the ColorPalette in metavision_simple_viewer.py I get the following error (code lines 55, 56). At the same time, ColorPalette with Light, Dark and CoolWarm works. How can I handle this error? Best regards, Rafael.
Why does the sensor detect changes better at higher initial light levels?
Hello, I have a question regarding CDP (Contrast Detection Probability) graphs and contrast sensitivity, Why does the sensor have a higher chance of detecting the changes (contrast) when the initial light level is brighter? What is the reasoning or mechanism
Reduce resolution of a recording
Hello, I am running some recorded .raw data with 1280×720 resolution and I would like to make some comparisons of how this would work at a 640×480 resolution. I have been searching for how to do this and I have seen the possibility of applying an ROI filter
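One naive sketch of a spatial downscale on the decoded events (file name is a placeholder; halving 1280×720 gives 640×360, so a true 640×480 comparison would additionally need cropping or resampling):

```python
from metavision_core.event_io import EventsIterator

mv_iterator = EventsIterator("recording_1280x720.raw", delta_t=10000)

for evs in mv_iterator:
    if evs.size == 0:
        continue
    low_res = evs.copy()
    low_res['x'] //= 2   # integer-divide the pixel coordinates
    low_res['y'] //= 2
    # note: several events can now land on the same pixel and timestamp;
    # deduplicate or rate-limit them if that matters for the comparison
    # ... feed `low_res` into the processing you want to compare
```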
Delayed response of events in low-light conditions
Hi, my experiment consists of a black rotating dot with a background light to generate contrast. When I keep all conditions the same, only changing the intensity of the background light, I notice that there is a time delay or latency (I don't
High latency in generating negative events
Hello, In an extremely dark environment, a laser creates a point light source on the IMX636 and moves at a constant speed. According to the imaging principle of an event-based camera, the time for the point light source to generate positive and negative
Measuring pixel array current on GENX320
Hello, is there a pin on the GENX320 EVK3 kit where it is possible to measure the total current generated by the photodiodes of the pixel matrix? I'm trying to measure the dark current of a pixel's photodiode. For that, the closest I got to the value of
Help! How do I deal with this problem?
OS: Windows 11 Professional (23H2). Error: Metavision Studio internal error, code 3221225785, signal null
Video writing for spatter tracking
Hello, I am using the metavision_spatter_tracking.py code with these parameters: `python metavision_spatter_tracking.py -i recording.raw --cell-width 7 --cell-height 7 --activation-ths 10 --accumulation-time-us 50 -f 3 -o video`. I want to save the video
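Independently of the sample's own command-line options, one generic way to get an .avi out is to render frames with PeriodicFrameGenerationAlgorithm and write them with OpenCV; a rough sketch, with constructor keywords assumed from the Python samples:

```python
import cv2
from metavision_core.event_io import EventsIterator
from metavision_sdk_core import PeriodicFrameGenerationAlgorithm

mv_iterator = EventsIterator("recording.raw", delta_t=10000)
height, width = mv_iterator.get_size()

fps = 25
writer = cv2.VideoWriter("video.avi", cv2.VideoWriter_fourcc(*"MJPG"), fps, (width, height))

frame_gen = PeriodicFrameGenerationAlgorithm(sensor_width=width, sensor_height=height, fps=fps)
frame_gen.set_output_callback(lambda ts, frame: writer.write(frame))  # write every generated frame

for evs in mv_iterator:
    frame_gen.process_events(evs)

writer.release()
```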
How to record in stereo?
Hello. I'm connecting 2 event cameras in stereo. I already did the synchronization using the metavision_sync.py algorithm, but I couldn't find out how to start recording with both cameras at the same time. I used metavision_simple_recorder.py but
When saturation takes place, do the remaining columns become more sensitive?
Hi, I have been doing an experiment to better understand the behavior of the camera when saturation takes place, and my results show that the higher the saturation, the higher the average event rate per row. My experiment consists of several points
Writing to a GENX320 register
Hi! I'm trying to measure the quantum efficiency of the GENX320. I'm trying to follow the same procedure as in the related article https://support.prophesee.ai/portal/en/kb/articles/qe-estimation As the article mentions, quantum efficiency can be determined
OpenEB: metavision_player (.cpp example) doesn't write video in .avi
Hi! I compiled OpenEB on a Jetson platform (Ubuntu 20.04) according to the manual: https://docs.prophesee.ai/stable/installation/linux_openeb.html#chapter-installation-linux-openeb I need to save the recorded video from the EVK3 event camera in .avi format: ~/openeb/sdk/modules/core/cpp/samples/metavision_player/
How to get symmetry between bias_diff_off and bias_diff_on on IMX636
Hi, in the biases section of the Metavision SDK docs there is a note that it is possible to get symmetry between "bias_diff_off" and "bias_diff_on" by setting those two bias parameters to the same value. I want to know whether "the same value" refers to the current value
How to improve the writing speed of CSV files?
```python
# Define a thread function to write to the CSV
def write_to_csv(event_queue, output_file):
    with open(output_file, 'w', newline='') as csv_file:
        csv_writer = csv.writer(csv_file)
        while True:
            try:
                # Get events
```
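The row-by-row csv.writer loop is usually the bottleneck; writing one whole event buffer per call is a common speed-up. A hedged sketch of that variant, assuming the queue is fed with the structured numpy arrays produced by EventsIterator (fields 'x', 'y', 'p', 't'):

```python
import queue
import numpy as np

def write_to_csv(event_queue, output_file):
    with open(output_file, 'w') as f:
        while True:
            try:
                evs = event_queue.get(timeout=1.0)
            except queue.Empty:
                break                      # simplistic stop condition; a sentinel is cleaner
            if evs is None:                # optional explicit sentinel from the producer
                break
            if evs.size == 0:
                continue
            # one vectorized write per buffer instead of one csv.writer call per event
            np.savetxt(f, np.column_stack((evs['x'], evs['y'], evs['p'], evs['t'])),
                       fmt='%d', delimiter=',')
```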
How do I output recorded events directly to a .csv file instead of a .raw file
I'm using Python. I saw the simple_recorder routine and the raw_to_csv routine in the Python examples, but I had a problem when I tried to combine the functionality of these two routines (to display the camera contents in a window and output a CSV file
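A rough sketch of one way to combine the two, displaying with OpenCV instead of the SDK UI window to keep it short; API names are taken from the Python samples, and the file name and column layout are placeholders:

```python
import csv
import cv2
from metavision_core.event_io import EventsIterator
from metavision_sdk_core import PeriodicFrameGenerationAlgorithm

mv_iterator = EventsIterator("", delta_t=10000)       # "" = first available live camera
height, width = mv_iterator.get_size()

frame_gen = PeriodicFrameGenerationAlgorithm(sensor_width=width, sensor_height=height, fps=25)
frame_gen.set_output_callback(lambda ts, frame: cv2.imshow("events", frame))

with open("events.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["x", "y", "p", "t"])
    for evs in mv_iterator:
        frame_gen.process_events(evs)                 # updates the display
        writer.writerows(zip(evs['x'], evs['y'], evs['p'], evs['t']))
        if cv2.waitKey(1) & 0xFF == ord('q'):          # press q to stop
            break
cv2.destroyAllWindows()
```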
Dynamic ROI
Hi guys, I am trying to solve the following problem and am stuck: I have detected some regions of interest. In addition to the position, I have recorded the corresponding timestamp. Now I want to dynamically extract these regions from the existing RAW file
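One offline sketch of that extraction: describe each ROI as a (t_start_us, t_end_us, x0, y0, x1, y1) tuple and mask the decoded events; the ROI list and file name below are placeholders.

```python
import numpy as np
from metavision_core.event_io import EventsIterator

rois = [(0, 100_000, 200, 150, 260, 210)]   # placeholder ROI windows

mv_iterator = EventsIterator("recording.raw", delta_t=10000)
kept = []
for evs in mv_iterator:
    for t0, t1, x0, y0, x1, y1 in rois:
        mask = ((evs['t'] >= t0) & (evs['t'] < t1) &
                (evs['x'] >= x0) & (evs['x'] < x1) &
                (evs['y'] >= y0) & (evs['y'] < y1))
        if mask.any():
            kept.append(evs[mask])

events_in_rois = np.concatenate(kept)  # structured array of all events inside the ROIs
```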
Access to the Introspect Player
Hi there, I found out about the Introspect Player shown in the bias tuning video: https://support.prophesee.ai/portal/en/kb/articles/bias-tuning-region-of-interest-roi-31-8-2023 It would be very useful for our research if we could access it. I was wondering
Problem generating Histo or Diff raw files
Hello, I am running "event frame gpu loading sample" by following the exact instructions given: First, compile the sample as described in this tutorial. Then, you can use the metavision_event_frame_generation sample to generate event frame RAW files in
Event Frame GPU Loading Sample using C++
Hello, I am trying to run the Event Frame GPU Loading Sample using C++ and I am getting an unexpected error. I installed CUDA 11.7 and I am running on Windows. I am following the steps as stated here: Event Frame GPU Loading Sample using C++ — Metavision
Duplicate events in the event stream
Hi, I'm using the EVK4 HD for event data collection. I find that there are several duplicate CD events (i.e. events identical in pixel location, timestamp and polarity) in the event stream. Are these duplicate events redundant, so that I can remove them without any
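A quick numpy-only sketch for counting and dropping exact duplicates in a decoded event buffer; whether removing them is harmless for your downstream processing is a separate question:

```python
import numpy as np

def drop_duplicates(evs):
    """Remove events that are identical in every field (x, y, p, t)."""
    unique_evs, counts = np.unique(evs, return_counts=True)
    print(f"{int((counts - 1).sum())} duplicate events found")
    # np.unique sorts lexicographically by field, so restore chronological order:
    return unique_evs[np.argsort(unique_evs['t'], kind='stable')]
```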
Some routines do not run and cannot find the corresponding module
When I tried to run the metavision_sdk_get_started file, I found that some of the required modules could not be found (see the image for details). My Python version is 3.8 and I'm using the Windows version. Can you tell me what I should do to successfully