Converting Raw Files to Video - Questions regarding FPS

In the Metavision Studio application, you can export videos at varying frame rates. Regardless of the frame rate chosen, the resulting video is rendered at 30 FPS. This results in a slow-motion or fast-motion effect depending on whether the chosen FPS is higher or lower than 30. I fully understand why this is, and that it is impractical (impossible) to display 10k FPS in real time on consumer hardware.
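To make the relationship explicit (as I understand it, and using illustrative variable names), the apparent slow-down is just the ratio between the chosen generation frame rate and the fixed 30 FPS playback rate:

    # Slow-down factor when frames generated at `fps` are played back at 30 FPS
    slowdown = fps / 30.0                          # e.g. 10_000 / 30 ≈ 333.3
    playback_seconds = recorded_seconds * slowdown # illustrative only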

My question is how I can achieve the same effect in code. Here is the basic concept:

import cv2
from os import path
from tqdm import tqdm

# Metavision SDK imports (module paths assumed; adjust to your SDK version)
from metavision_sdk_core import PeriodicFrameGenerationAlgorithm, ColorPalette
from metavision_sdk_cv import AntiFlickerAlgorithm

import helpers  # local module providing load_evts_from_file and FilteredFrameGenerator


def main():
    '''
    Process and convert input raw files into output mp4 videos
    '''
    # Collecting necessary inputs from the user to customize the processing
    raw_files = input("Raw File(s): ")
    outdir = input("Output directory: ")
    fps = float(input("FPS: "))
    frequencies = input("Specify the frequency range separated by a comma: ").split(",")
    # Extract the min and max frequencies from the user input
    # Required to work with AntiFlickerAlgorithm
    (min_freq, max_freq) = [int(f.strip()) for f in frequencies]

    # Splitting raw file names for individual processing.
    raw_files = raw_files.split(',')

    # Initializing an anti-flicker algorithm to reduce visual flickering in
    # the output, crucial for high-quality video output.
    flicker = AntiFlickerAlgorithm(800, 600, 7, min_freq, max_freq, 15000)

    # Setting up a frame generation algorithm to convert event data into frames,
    # with parameters tailored to the sensor and desired video properties.
    event_frame_gen = PeriodicFrameGenerationAlgorithm(
        sensor_width=800,
        sensor_height=600,
        fps=fps,
        accumulation_time_us=int(1e6 // fps),
        palette=ColorPalette.Dark
    )

    # Choosing the MP4 format for the output video for wide compatibility
    # and a good balance of quality and file size.
    fourcc = cv2.VideoWriter_fourcc(*'MP4V')

    # Processing each raw file to generate a video: the loop handles each file
    # individually to create separate videos for each.
    for infile in raw_files:
        # Generating a unique filename for each output video,
        # incorporating filtering parameters for easy identification.
        outfile = path.join(
            outdir,
            path.basename(infile) +
            f"_vendor_filter_{fps}fps_freq_filter_{min_freq}Hz-{max_freq}Hz.mp4"
        )
        # Setting up a video writer to save the generated frames into a video file.
        video_writer = cv2.VideoWriter(outfile, fourcc, fps, (800, 600))
        # Linking the frame generation algorithm to the video writer
        # to enable direct writing of frames.
        event_frame_gen.set_output_callback(lambda _, f: video_writer.write(f))

        # Loading event data from the input file,
        # with a cap on the duration for processing efficiency.
        it = helpers.load_evts_from_file(infile, max_duration=int(5e6))
        # Integrating the flicker reduction into the frame generation process.
        filtered_frame_gen = helpers.FilteredFrameGenerator(event_frame_gen, flicker)

        # Iterating through the events, processing them to generate and
        # write frames, ensuring efficient handling of event data.
        for evs in tqdm(it):
            if evs.size == 0:
                continue  # Skipping empty event sets
            filtered_frame_gen.process_events(evs)

        # Finalizing the video file by releasing the video writer resources.
        video_writer.release()


if __name__ == "__main__":
    main()

It works to the extent that it creates a video, but it doesn't produce the same change in motion. For instance, if I render a raw file at 10k FPS, the slow-down factor is 10k/30 ≈ 333.33, so a 5-second recording should take approximately 28 minutes to play back. That is exactly what happens when I use the Metavision Studio app. How can I achieve the same thing here?
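My working guess (not confirmed) is that the difference comes from the playback rate baked into the MP4 container: Studio appears to generate frames at the requested FPS in event time but write the file at a fixed 30 FPS playback rate, whereas my code opens the VideoWriter at the same high FPS. Reusing the variable names from the snippet above, the change I have in mind would look something like this:

    # Assumption: frames are still generated every 1/fps seconds of *event* time,
    # but the container advertises a fixed 30 FPS playback rate, so a recording
    # captured at 10k FPS plays back ~333x slower.
    PLAYBACK_FPS = 30.0
    video_writer = cv2.VideoWriter(outfile, fourcc, PLAYBACK_FPS, (800, 600))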
