Build your Event-based Application 5/5: Example Use-Cases

Let’s review two use-cases in light of all the information provided in this document:

  • Welding monitoring in a factory

  • Gait analysis with Active Markers

Welding monitoring

Welding is a fabrication process whereby two or more parts are fused together by means of heat, pressure, or both, forming a joint as the parts cool. In the selected use-case, a vertical arm providing the filler moves down to reach two metal plates that are to be welded together. The welding process produces spatters of molten metal, whose angular distribution provides information on process quality: if one direction is favored, it might mean that the fusion point is not well positioned, that the weld failed, etc.

Figure 1. Welding setup with event-based camera monitoring spatters

Let’s define the welding monitoring system:

  • The system will run on the power grid (no consumption constraint).

  • It needs to be integrated on a monitoring bench which contains a desktop computer.

  • There are no strong constraints on memory or storage.

  • There is no need for high data-rate transmission: processing is done locally on the computer. Processed information might be transferred for further analysis or decision making.

  • Latency is not a critical issue.

  • The phenomenon to observe is the spatters produced by the welding process. They are quite fast (~1-5 m/s) but not extremely fast relative to the sensor’s capabilities. They move through 3D space, starting from the welding point, and dozens can be present at the same time. We need to detect and track them in order to compute statistics on their angular distribution.

  • No other moving parts are in the FOV, and since spatters are incandescent, no additional lighting is necessary.

  • The environment is indoors, so the system is not subject to wind, fog, rain, etc. The welding arm produces heat only very locally, so neither the camera nor the sensor is exposed to high temperatures.

Figure 2. Screenshot of a welding monitoring process observed with an EVK4 (integration time of 10 ms)

Use-case analysis:

  • Focus: The camera can be placed approximately 50 cm from the welding point. The ideal depth of field would be a couple dozen centimeters. An 8 mm lens can be chosen; a quick sanity check of the resulting field of view is sketched below. The calibration module of the Metavision SDK can be used to set the focus correctly for this distance.
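
As a quick check of the lens choice, the pinhole model gives the horizontal field of view at the working distance. The sensor width below assumes an EVK4 (IMX636: 1280 pixels at a 4.86 µm pitch); adapt it to your actual sensor.

# Pinhole approximation of the horizontal field of view at the working distance.
# Sensor width assumes an EVK4 (IMX636: 1280 px * 4.86 um pitch, ~6.22 mm).
sensor_width_mm = 1280 * 4.86e-3
focal_length_mm = 8.0    # chosen lens
distance_mm = 500.0      # camera ~50 cm from the welding point

fov_mm = sensor_width_mm * distance_mm / focal_length_mm
print(f"Horizontal FOV at {distance_mm / 10:.0f} cm: ~{fov_mm / 10:.0f} cm")  # ~39 cm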

  • Calibration: The angular distribution of spatters is the tracked metric for this use-case. Since the lens introduces distortion, the camera needs to be calibrated so that the data can be undistorted. The Metavision SDK calibration module can again be used for this purpose; the undistortion step itself is sketched below.
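
Once intrinsics are known, event coordinates can be undistorted before computing angles. Here is a minimal sketch using OpenCV, where the camera matrix and distortion coefficients are placeholder values standing in for your calibration results:

import numpy as np
import cv2

# Placeholder intrinsics and distortion coefficients; use the values
# produced by your calibration instead.
K = np.array([[1700.0, 0.0, 640.0],
              [0.0, 1700.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([-0.25, 0.1, 0.0, 0.0, 0.0])  # k1, k2, p1, p2, k3

def undistort_events(xs, ys):
    """Map raw event pixel coordinates to undistorted pixel coordinates."""
    pts = np.stack([xs, ys], axis=-1).astype(np.float32).reshape(-1, 1, 2)
    # P=K re-projects the normalized, undistorted points back to pixels.
    return cv2.undistortPoints(pts, K, dist, P=K).reshape(-1, 2)

corrected = undistort_events(np.array([10, 640]), np.array([20, 360]))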

  • Bias tuning: A factory is generally well lit so that processes can be conducted efficiently, which makes this a bright scene. Little noise should be generated, but bias tuning can still help concentrate on the moving spatters. Increasing bias_diff_on and bias_diff_off, as well as bias_hpf, can help (for instance 30, 20 and 50). As an example, here are the biases used to generate the previous image (a sketch for applying biases programmatically follows the listing):

0     % bias_diff
190   % bias_diff_off
30    % bias_diff_on
0     % bias_fo
120   % bias_hpf
0     % bias_refr
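
Biases such as the ones listed above can be applied at runtime through the HAL Python bindings. A minimal sketch, assuming a recent Metavision SDK where the I_LL_Biases facility is available (valid bias names and ranges depend on the sensor generation):

from metavision_hal import DeviceDiscovery

# Open the first available camera (an empty string means "first device").
device = DeviceDiscovery.open("")

# I_LL_Biases exposes per-bias get/set; inspect get_all_biases() to see
# which biases your sensor generation supports and their current values.
ll_biases = device.get_i_ll_biases()
for name, value in {"bias_diff_off": 190, "bias_diff_on": 30, "bias_hpf": 120}.items():
    ll_biases.set(name, value)
print(ll_biases.get_all_biases())
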
  • Synchronization/Triggers: No synchronization is needed, but a trigger signal could be sent when the welding process starts and finishes, to associate the corresponding events; see the sketch below.
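
A minimal sketch of enabling the external trigger input, assuming a recent Metavision SDK; the channel enumeration is an assumption that varies across SDK versions and camera models (older versions take an integer channel index). Trigger pulses then appear in the stream as external trigger events carrying sensor timestamps:

import metavision_hal
from metavision_hal import DeviceDiscovery

device = DeviceDiscovery.open("")

# Enable the external trigger input; a pulse from the welding controller
# is timestamped on the sensor clock, marking process start and stop.
trigger_in = device.get_i_trigger_in()
trigger_in.enable(metavision_hal.I_TriggerIn.Channel.MAIN)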

  • ROI: In the scene, there are three types of regions where events are produced but do not correspond to spatters. Event generation can be blocked in all of these locations by selecting adequate sensor ROIs (see the sketch after Figure 3):

    • The first one is the central region around the welding point, where the spatters are generated. It emits a burst of light that masks anything of interest.

    • Further from the welding point, several reflective parts are visible in the FOV.

    • Finally, lens flare is also visible.

Figure 3. Example of ROIs to filter out for event generation
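
The regions above can be suppressed directly on the sensor. A minimal sketch using the I_ROI facility in RONI ("region of non-interest") mode, assuming a recent Metavision SDK (older versions expose a different ROI interface); the window coordinates are placeholders to adapt to the actual welding point, reflections and flare locations:

from metavision_hal import DeviceDiscovery, I_ROI

device = DeviceDiscovery.open("")

# RONI mode blocks event generation *inside* the windows instead of
# keeping only the events inside them.
roi = device.get_i_roi()
roi.set_mode(I_ROI.Mode.RONI)
roi.set_windows([
    I_ROI.Window(600, 320, 80, 80),   # welding point (placeholder coordinates)
    I_ROI.Window(200, 100, 60, 40),   # a reflective part (placeholder)
])
roi.enable(True)
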
  • ESP: No flickering → AFK OFF. STC ON, to filter out noise. The event rate can be measured while welding occurs; if it is too high, ERC can be enabled to limit it (it was not enabled in the recording above). There is no privileged direction and the event rate might be high during the process, so EVT3.0 can be used. A sketch of the corresponding filter configuration follows.
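
A minimal sketch of this filter configuration through the HAL facilities, assuming a recent Metavision SDK; the STC threshold and ERC cap below are illustrative values, not recommendations:

from metavision_hal import DeviceDiscovery, I_EventTrailFilterModule

device = DeviceDiscovery.open("")

# STC keeps events confirmed by a second event of the same polarity at
# the same pixel within the threshold window, removing isolated noise.
stc = device.get_i_event_trail_filter_module()
stc.set_type(I_EventTrailFilterModule.Type.STC_CUT_TRAIL)
stc.set_threshold(10000)  # microseconds (illustrative)
stc.enable(True)

# ERC caps the sensor output rate; enable it only if the rate measured
# during welding exceeds what downstream processing can absorb.
erc = device.get_i_erc_module()
erc.set_cd_event_rate(20_000_000)  # 20 Mev/s (illustrative)
erc.enable(True)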

  • Output mode: We want to detect spatters and track them "continuously", so raw events should be used to take advantage of their fine time granularity; a reading sketch follows.
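
A minimal sketch of consuming the raw events and accumulating the angular distribution around the welding point. The file name and welding-point pixel are placeholders, and a per-event angle histogram is only a crude proxy for the per-spatter statistics a real tracking pipeline would produce:

import numpy as np
from metavision_core.event_io import EventsIterator

WELD_X, WELD_Y = 640, 360  # hypothetical welding-point pixel
angles = []

# EventsIterator yields numpy structured arrays with x, y, p, t fields,
# here in 10 ms slices.
for events in EventsIterator("welding_record.raw", delta_t=10000):
    if events.size == 0:
        continue
    dx = events["x"].astype(np.float64) - WELD_X
    dy = events["y"].astype(np.float64) - WELD_Y
    angles.append(np.arctan2(dy, dx))

hist, _ = np.histogram(np.concatenate(angles), bins=36, range=(-np.pi, np.pi))
print(hist)  # event counts per 10-degree sector around the welding point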

  • Lighting: Incandescent spatters inherently provide high contrast with the background, so no additional lighting should be necessary for this application.

  • Light filtering: Spatters produce light in the visible spectrum, so no filter will be used.

  • Sensor orientation: Spatters move mostly along the welding plane and much less vertically, so the longest sensor dimension should be aligned with that plane. If the camera is mounted on the welding arm and pointing downwards, no direction is favored a priori. If the camera is placed on the side instead, it should be kept parallel to the welding plane.

  • Camera movement: If the camera is placed on the robotic arm, the arm’s movement and vibrations might generate more events than appropriate. Depending on trials, it might be preferable to mount the camera off the arm, on the side of the welding setup, oriented as close to "on top of the welding point" as possible so as not to favor any direction.

Gait analysis with Active Markers

Gait analysis consists in monitoring the limbs of both the upper and lower body to analyze how they work together to enable an individual’s movement. To perform gait analysis, it is necessary to track the limbs’ 3D positions.

Typically, OptiTrack™ systems are used. Such a system consists of a number of IR projectors and cameras placed all around a room: the projected IR light is reflected by markers placed on the subject’s body and captured by the IR cameras. The setup is heavy and expensive, requires calibration, and cannot be moved.

The Active Marker principle can provide a less expensive and more flexible solution, using a stereo event-based setup.

Figure 4. Gait monitoring setup with 4 event-based cameras

Description:

  • The system will run on a desktop computer (no power consumption constraint).

  • There are no strong constraints on memory or storage.

  • There is no need for high data-rate transmission: processing is done locally on the computer.

  • Latency is important to accurately track fast-blinking LEDs.

  • Two event-based cameras are used to detect a set of markers composed of blinking IR LEDs, compute the 3D pose of each marker, and build a numerical avatar reproducing the pose of the subject’s body. A second pair can be used to observe both sides of the subject simultaneously.

  • The subject’s body is also in the FOV and will generate events from the movement of its limbs.

  • The environment is indoors, so the system is not subject to wind, fog, rain, etc.

Figure 5. Gait monitoring example (focus on upper body here). [LEFT] Observed scene (tracked LEDs as green cross) [RIGHT] Reconstructed 3D pose (tracked LEDs as spheres)

Use-case analysis:

  • Focus: To track a walking subject, the first camera pair could be placed 2-3 m to the right of the subject, and the second one to the left. The ideal depth of field would be around 1 m. A 5 mm or 8 mm lens can be chosen, depending on the distance to the walker. The focus then needs to be set; the calibration module of the Metavision SDK can be used for this purpose.

  • Calibration: 3D poses need to be computed from the setup, and camera pairs are used, so all 4 cameras require intrinsic calibration, as well as a global extrinsic calibration across the 4 cameras. Once each camera has a projection matrix, marker positions can be triangulated, as sketched below.
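
A minimal triangulation sketch with OpenCV, where the projection matrices and the LED pixel positions are placeholder values standing in for your calibration and detection results:

import numpy as np
import cv2

# Projection matrices P = K [R | t] of one calibrated camera pair
# (placeholder values; use the matrices from your calibration).
K = np.array([[1700.0, 0.0, 640.0],
              [0.0, 1700.0, 360.0],
              [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])  # 20 cm baseline

# Undistorted pixel positions of the same LED in both cameras (2xN arrays).
pt1 = np.array([[640.0], [360.0]])
pt2 = np.array([[504.0], [360.0]])

# cv2.triangulatePoints returns homogeneous 4xN coordinates.
X_h = cv2.triangulatePoints(P1, P2, pt1, pt2)
X = (X_h[:3] / X_h[3]).ravel()  # 3D marker position, here ~2.5 m away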

  • Bias tuning: Active Marker biases are used to filter out events that do not correspond to blinking LEDs. In particular, a high bias_hpf value filters out slow changes and keeps only fast ones. Increasing bias_diff_off and bias_diff_on also helps filter out noise and generate only relevant events:

0     % bias_diff
180   % bias_diff_off
60    % bias_diff_on
30    % bias_fo
140   % bias_hpf
0     % bias_refr
  • Synchronization/Triggers: The cameras of each pair need to be synchronized so that their event streams can be associated and a 3D pose computed per pair. Synchronizing the two pairs with each other might also be necessary to combine the 3D poses computed by both stereo setups; a master/slave sketch follows.
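
A minimal master/slave sketch for one pair, assuming a recent Metavision SDK where the I_CameraSynchronization facility is available (older versions expose the same modes through a different facility) and placeholder serial numbers; the sync line must also be physically wired between the cameras:

from metavision_hal import DeviceDiscovery

# Open both cameras of one pair by serial number (placeholders).
master = DeviceDiscovery.open("serial_of_master")
slave = DeviceDiscovery.open("serial_of_slave")

# The master drives the sync line and the slave timestamps its events
# against it, so both streams share a single time base.
master.get_i_camera_synchronization().set_mode_master()
slave.get_i_camera_synchronization().set_mode_slave()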

  • ROI: No ROI is a priori required.

  • ESP: No flickering → AFK OFF. STC OFF, to accurately track the fast blinking. No ERC either, as it could prevent correct LED ID decoding if ID-related events were filtered out. There is no privileged direction and the event rate should not be too high, so EVT2.0 can be used.

  • Output mode: Raw events are necessary for the Metavision Active Marker use-case, to preserve the fine temporal granularity of events.

  • Lighting: Bias tuning makes the sensor respond only to very fast changes. In this use-case, the LEDs blink in the IR spectrum, so other fast-blinking IR light sources should be avoided. No additional lighting is necessary, since it is the light variation itself that is tracked.

  • Light filtering: An IR-pass filter could be added to further suppress motion events, but the Active Marker biases should already filter out most (if not all) of them.

  • Sensor orientation: No privileged direction. Sensor orientation doesn’t matter.

  • Camera movement: The cameras are fixed, on tripods for instance. The cameras of each pair should be rigidly mounted together, to avoid repeating the calibration process over and over. Finally, the pairs should also be fixed relative to each other for the same reason.

