Sparse Optical Flow in C++: Speed Information
To extract the speed information using metavision_sparse_optical_flow, I changed some lines of code as described on the website, using the function void writeCSV. This function takes the flow_output and writes it out in CSV format. I try to run the
Issues with the EVT2 RAW File Encoder
Hi, when I used metavision_evt2_raw_file_encoder to encode a CSV file into an EVT2-format RAW file, I found that the timestamps of the events in the RAW file were different from those in the original CSV file. There is a difference of tens of microseconds
Accumulate events between triggers
Hi, just wondering if it is possible to accumulate events between triggers into frames. For example, when an ON trigger fires, accumulate events until the next trigger. Thanks!
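A minimal sketch of this gating logic, assuming the CD events and the external trigger timestamps have already been loaded as numpy arrays (for example through the SDK's Python event readers); the field names 'x', 'y', 't' are assumptions for illustration, not the exact API:

import numpy as np

def frames_between_triggers(events, trigger_ts, height, width):
    # events: structured array with 'x', 'y', 't' fields (t in microseconds, time-sorted)
    # trigger_ts: sorted 1-D array of trigger timestamps in microseconds
    frames = []
    for t_start, t_end in zip(trigger_ts[:-1], trigger_ts[1:]):
        chunk = events[(events['t'] >= t_start) & (events['t'] < t_end)]
        frame = np.zeros((height, width), dtype=np.uint8)
        frame[chunk['y'], chunk['x']] = 255   # mark every pixel that fired in the interval
        frames.append(frame)
    return frames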
OPENEB 4.3.0 Pipeline Jetson Xavier Performance
Hello Prophesee Team, I successfully compiled OPENEB 4.3.0 on arm64 Ubuntu 20.04 using the guide provided at https://docs.prophesee.ai/4.3.0/installation/linux_openeb.html Additionally, I managed to compile and run my C++ application, which utilizes the
Error while finding modules in Python samples
I have an issue while trying to convert one of the samples with the provided "metavision_file_to_csv". This is what I input in the command window: py metavision_raw_to_csv.py -i spinner.raw. While locating the different functions and modules this calls,
Error when installing extra libraries
I am re-installing the whole Metavision SDK and I am checking that all the extra libraries are installed. When I run this command: python -m pip install "opencv-python==4.5.5.64" "sk-video==1.1.10" "fire==0.4.0" "numpy==1.23.4" "h5py==3.7.0" pandas scipy the
Camera connection error: [SERVER] - stderr - [HAL][ERROR] Unable to open device
I connected an IMX636 to my computer, ran Metavision Studio (SDK 3.1.2) and clicked "Open a camera". However, cmd displays the ERROR message "[SERVER] - stderr - [HAL][ERROR] Unable to open device". This error is reported a total of 4 times,
Dataset Structure Issue for YOLOv8 Training
I'm encountering problems while training a YOLOv8 model using a custom dataset structure. I've followed the steps outlined in the Prophesee YOLOv8 tutorial and used the Prophesee ML labeling tool. I am getting the following error: ValueError: num_samples
How to read .h5 file format in Metavision 4.4.0
Hello, I am trying to read the DSEC dataset (Download – DSEC (uzh.ch)) with Metavision 4.4.0. The dataset is supposed to be from a Prophesee Gen3 sensor, and the file format provided is .h5. From my understanding .h5 and .hdf5 have the same properties, and
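For what it's worth, Metavision's own HDF5 event files use a different internal layout than DSEC's .h5, so a third-party file like this usually has to be read directly. A small h5py sketch, assuming the DSEC layout with /events/{x,y,t,p} datasets and a /t_offset scalar (verify the actual structure with h5py or HDFView first); the file name is hypothetical:

import h5py
import numpy as np

with h5py.File("events.h5", "r") as f:
    x = f["events/x"][:]
    y = f["events/y"][:]
    p = f["events/p"][:]
    # DSEC stores relative timestamps plus a global offset (assumed layout)
    t = f["events/t"][:].astype(np.int64) + int(f["t_offset"][()])
print(len(x), t[0], t[-1])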
Metavision SDK version.
Hello, to work with my EVK2 Prophesee Evaluation Kit camera (Gen41), which version of the Metavision SDK should I install? Can I have the link, please? Thank you. Have a great day.
Multi camera sync connection
I've connected two cameras by aligning the sync out and sync in as shown in the following photo. However, upon consulting the schematic for connecting the devices, I noticed that two sync ins are originating from a single sync out. In such a case, would
FPGA not properly configured
When trying to open my EVK1 with Metavision Studio 3.1 I get the following error message in the console: "FPGA not properly configured". I have never changed any settings or sent any FPGA commands. Does anybody have an idea what happened or how I can
Capturing events between event triggers
I need a way to generate frames using only events between 2 consecutive event triggers. For example, consider the diagram below, where dashes represent CD events, and pipes represent trigger events. I need to be able to ignore events labeled "x" and use
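One way to sketch this kind of gating offline, assuming events and triggers are available as time-sorted numpy structured arrays with a 't' field (and 'p' for the trigger polarity) and that triggers alternate in polarity, rising opening the window and falling closing it; the field names are assumptions:

import numpy as np

def gate_events(events, triggers):
    # Keep only CD events between a rising trigger (p=1) and the next falling one (p=0);
    # events outside these windows (the "x" events) are dropped.
    keep = np.zeros(len(events), dtype=bool)
    open_t = None
    for trig in triggers:
        if trig['p'] == 1:                                   # gate opens
            open_t = trig['t']
        elif trig['p'] == 0 and open_t is not None:          # gate closes
            keep |= (events['t'] >= open_t) & (events['t'] < trig['t'])
            open_t = None
    return events[keep]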
Whether new events are generated during readout latency
Hello. I use the EVK4 event camera for some high activity cases, such as recording the rapid movement of many particles. In this case, due to the high number of particles, the timestamps of the generated events will have a maximum delay of 60 microseconds.
Recommendation for a Frame-based Camera for Integration with DVS
Hi, I plan to sync a frame-based camera with the DVS. According to the guideline, the frame-based camera should output its sync-out signal to the DVS trigger-in. My question is: do you have any recommendation for a frame-based camera that has a dedicated port for the sync-out signal?
PyTorch version for detection_and_tracking_pipeline.py
Hi, I have encountered an issue while running detection_and_tracking_pipeline.py. Could you please advise which version of PyTorch I should install? It seems that the version I have installed is incorrect. My graphics card is NVIDIA 4060.
Request for Information on Video Deblurring Technique from Prophesee YouTube Demo
Hi everyone, I recently watched a video on Prophesee's official YouTube channel (link) demonstrating video deblurring. It showcased a method combining a CMOS camera's output with an event sensor's stream. Unfortunately, I couldn't find any related samples
get_illumination returning -1
I'm trying to compare the sensor lux readout to the events/sec at different bias settings, but the get_illumination readout from the I_monitoring interface only returns -1. Am I missing something about how to get this information? Thanks, Rich
How to calibrate?
Hey, how do I use the calibration command? I am running `metavision_mono_calibration_recording` with the blinking checkerboard HTML page on my iPad. The right-side window of the application (black screen) is empty. I am not even seeing any pattern
Quantification of active pixels/s.
Hi, I would like to know, please, if there is any code from Metavision to calculate the active pixels/second in a recording. I was thinking of converting my recording to frames using the Frame Generator and then, having frames with a certain accumulation
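If going through frames turns out to be unnecessary, a direct numpy sketch that counts distinct firing pixels per fixed window could look like this (field names 'x', 'y', 't' assumed, timestamps in microseconds):

import numpy as np

def active_pixels_per_window(events, width, window_us=1_000_000):
    # Number of distinct pixels that fired at least once in each consecutive window (default 1 s)
    bins = (events['t'] - events['t'][0]) // window_us
    lin = events['y'].astype(np.int64) * width + events['x']   # linear pixel index
    return np.array([np.unique(lin[bins == b]).size for b in np.unique(bins)])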
OPENEB 4.3.0 on Ubuntu 20.04, compiling C++ examples
Hello Prophesee Team, I have compiled OPENEB 4.3.0 on arm64 Ubuntu 20.04 using the guide https://docs.prophesee.ai/4.3.0/installation/linux_openeb.html and have used Option 1 (working from the build folder). The test suite (ctest -C Release) runs without errors.
MVTec HALCON: EVK4 Camera Not Recognized
Following https://support.prophesee.ai/portal/en/kb/articles/mvtec-halcon-acquisition-interface: the HALCON version I downloaded is 22.11 and the camera is not recognized. I am using SDK 4.1 (MV SDK 4.1). After testing, the software still cannot recognize the EVK4 camera. Do I need to install any additional plugins?
Could not find a configuration file for package "MetavisionSDK"
When compiling the C++ sample "metavision_sdk_get_started.cpp" with the cmake command, the following error occurs: -- Selecting Windows SDK version
EVK4 measurement error information
I want to track an object using a Kalman filter, and among the variables there is an item called the variance of the sensor's measurement error, which can be provided by the manufacturer. Where can I request it? My model is the EVK4.
OPENEB 4.3.0 Compiling Problems
I was compiling openeb-4.3.0 on my Jetson and was at the 'cmake .. -DCMAKE_BUILD_TYPE=Release -DBUILD_TESTING=OFF' step when this error occurred: CMake Warning at sdk/modules/core/cpp/samples/metavision_sdk_get_started/CMakeLists.txt:20 (add_executable):
After cutting the RAW CSV text, the time is inconsistent. What is the cause? Thanks.
https://docs.prophesee.ai/stable/samples/modules/driver/file_info.html#chapter-samples-driver-file-info Attached is the same time as the previous one (the same time record). The original RAW text has a lot of points; the more you have written, the more
Reset timestamps for multi-cam sync
Hello - I am utilizing 4 SilkyEvCam modules in a stereo setup. I have configured one of the cameras as the master and the other three as slaves in software. I then connect the sync out of the master to the sync in of each slave. Distances
Background information/reference on analytics tracking schema
Hello, I am looking for information or literature describing the tracking algorithms used in the Analytics toolbox: Generic Tracking using Python — Metavision SDK Docs 4.4.0 documentation (prophesee.ai). I am specifically interested in the approaches
Frequency Histogram
Is there any existing algorithm for determining the frequency component at every x,y location? Similar to the histogram, except instead of time bins, they are divided into frequency bins. If not, can anyone advise on how I might adjust an existing algorithm
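One rough way to build such a per-pixel frequency map, assuming the recording is short enough to bin into a (time, height, width) cube, is to FFT each pixel's event-count signal along time and keep the dominant frequency; a numpy sketch with assumed field names:

import numpy as np

def dominant_frequency_map(events, height, width, bin_us=1000):
    # Bin events per pixel over time, FFT along time, return dominant frequency (Hz) per pixel.
    # Memory grows with recording length: the cube has shape (n_bins, height, width).
    t = events['t'] - events['t'][0]
    n_bins = int(t[-1] // bin_us) + 1
    cube = np.zeros((n_bins, height, width), dtype=np.float32)
    np.add.at(cube, (t // bin_us, events['y'], events['x']), 1.0)
    spectrum = np.abs(np.fft.rfft(cube - cube.mean(axis=0), axis=0))   # remove DC before picking the peak
    freqs = np.fft.rfftfreq(n_bins, d=bin_us * 1e-6)
    return freqs[np.argmax(spectrum, axis=0)]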
Negative bias_diff_off and bias_diff_on values of IMX636. What do they mean?
I have set the following biases on the IMX636 sensor: bias_diff_off = -10 and bias_diff_on = -50. I have read the documentation and would expect to get more OFF events with these settings. However, I am getting more ON events. Why is that? I am still not
Using OpenEB expelliarmus to decode data
Hi community, I was trying to decode the RAW files from the camera using expelliarmus and put the data into a pandas DataFrame. What is the timestamp scale from it? When I use the RAW-to-CSV Python scripts I get different results, and the timescale
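For reference, the basic expelliarmus usage looks roughly like this (the encoding argument must match the file, e.g. "evt2" or "evt3", and the file name here is hypothetical); the 't' field it returns should be in microseconds:

import pandas as pd
from expelliarmus import Wizard

wizard = Wizard(encoding="evt3")            # must match the RAW file's encoding
arr = wizard.read("recording.raw")          # structured array with 't', 'x', 'y', 'p'
df = pd.DataFrame(arr)
print(df.head())
print(df["t"].min(), df["t"].max())         # compare this range with the CSV script's output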
Converting Raw Files to Video - Questions regarding FPS
In the Metavision Studio application, you can export videos at varying frame rates. Regardless of the frame rate chosen, the resulting video is rendered at 30 FPS. This results in a slow/fast motion effect depending on whether the FPS is more or less
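If that is what is happening, the apparent speed factor is simply the ratio between the playback rate and the export rate; a small illustration with a hypothetical export setting:

export_fps = 120                      # hypothetical frame rate chosen at export time
playback_fps = 30                     # rate the resulting video is rendered at
speed = playback_fps / export_fps
print(f"Playback runs at {speed:.2f}x real time")   # 0.25x here, i.e. slow motion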
Active pixels behind the moving object - Pixel latency
Hi, I'm doing an experiment with a rotating black dot on a white background with backlight to maximize contrast. In all cases I get these ON events after the movement of the dot, which in this case is going clockwise; this actually looks like a tail
Assistance Needed in Creating Dataset from Golf Shot Recordings
Hello, I'm currently working on a university project where I've encountered difficulties in creating a dataset. My project involves analyzing golf shots, for which I've recorded a large quantity of strikes, specifically focusing on the moment of impact
noise filtering post recording
I have an object that has an intrinsic blinking that can be captured with the EVK3, but the blinking is excited by a vibrating light. The vibrations have a known frequency, and I'm wondering if there is any way to filter this frequency after recording
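One offline idea, offered only as a sketch: since the nuisance frequency f is known, drop events whose interval since the previous event at the same pixel matches the period 1/f within a tolerance. Field names are assumed, timestamps in microseconds, events sorted by time:

import numpy as np

def drop_periodic_events(events, width, freq_hz, tol=0.1):
    # Drop events whose same-pixel inter-event interval is within tol of 1/freq_hz.
    period_us = 1e6 / freq_hz
    lin = events['y'].astype(np.int64) * width + events['x']
    last_t = {}                                   # last event time seen per pixel
    keep = np.ones(len(events), dtype=bool)
    for i, (px, t) in enumerate(zip(lin, events['t'])):
        prev = last_t.get(px)
        if prev is not None and abs((t - prev) - period_us) < tol * period_us:
            keep[i] = False
        last_t[px] = int(t)
    return events[keep]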
The quickest way to record a range of events
Hello Prophesee Team, I want to write some amount of events (200 ms, several pipeline slices) before a dynamically calculated point in time. I am always saving the last 5 pointers to event buffers in a deque, and use it to write events when I need to. WriteCSV()
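A sketch of that buffering pattern, written in Python for brevity (the original pipeline is presumably C++): keep copies of the last few event buffers in a deque and dump them in one binary write once the cut-off time is known; writing .npy (or reusing the SDK's own file writers) is generally much faster than formatting every event as CSV text.

from collections import deque
import numpy as np

history = deque(maxlen=5)                   # oldest buffers are discarded automatically

def on_events(ev_buffer):
    # Copy the buffer: the producer may reuse or invalidate it after the callback returns
    history.append(np.array(ev_buffer, copy=True))

def dump(path, t_cut):
    # Write all buffered events with t < t_cut in a single binary write
    events = np.concatenate(list(history))
    np.save(path, events[events['t'] < t_cut])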
Pipelines Supported Algorithms
Hello, I was wondering what algorithms are supported as part of the data pipeline systems? We are looking at using this to test out multi-threading by converting the Event Frame Generation sample into one using a pipeline approach. From what I have been
How to read camera data using an FPGA
We hope to use an FPGA to read the camera data instead of the SDK. Is there any relevant information or documentation?
Getting GPU device error when running the detection pipeline
Hi team, I am trying to run the detection pipeline on my system. The command I am entering is python3 sdk/ml/python_samples/detection_and_tracking_pipeline/detection_and_tracking_pipeline.py --object_detector_dir ~/Documents/pre-trained_models/metavision_sdk_3.x/red_event_cube_05_2020/
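Before looking at the pipeline itself, it may be worth confirming that PyTorch actually sees a CUDA device in the environment the script runs in; a quick check with standard PyTorch calls:

import torch

print(torch.__version__)               # installed PyTorch version
print(torch.version.cuda)              # CUDA version PyTorch was built against (None for CPU-only builds)
print(torch.cuda.is_available())       # False means the pipeline cannot get a GPU device
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))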
Data transmission & Processor
Hello, according to the data announced on your site, the equivalent temporal precision of the event technology is over 10,000 fps. Is this value fixed, or can it decrease in the most unfavorable cases? For example, we have a VGA EVK1 with a resolution
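As a back-of-the-envelope illustration of why the equivalent frame rate is scene dependent (all numbers below are purely hypothetical and not sensor specifications): the per-pixel update rate is roughly the sustainable event rate divided by the number of simultaneously active pixels.

peak_event_rate = 50e6          # events/s the readout can sustain (hypothetical)
active_pixels = 100_000         # pixels firing at once in a busy scene (hypothetical)
per_pixel_rate = peak_event_rate / active_pixels
print(f"~{per_pixel_rate:.0f} updates/s per active pixel")   # far lower in busy scenes than in sparse ones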