Optical flow cores
Optical flow computation estimates the two-dimensional motion field projected by the scene onto the image plane of a camera, using a sequence of captured frames. In general, we assume brightness constancy between consecutive frames and, usually, smooth motion and small displacements.
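These assumptions can be checked numerically. The Python sketch below (a software illustration, not part of the cores; the test pattern, frequencies, and displacement are illustrative choices) translates a smooth image by a sub-pixel displacement and evaluates the linearized brightness-constancy residual I_x·u + I_y·v + I_t, which should be close to zero for small motions.

```python
import numpy as np

# Sketch: verify the linearized brightness-constancy constraint
#   Ix*u + Iy*v + It ~= 0
# on a smooth pattern translating with a small displacement (u, v).
# Pattern, frequencies, and displacement are illustrative choices.

H, W = 64, 64
y, x = np.mgrid[0:H, 0:W].astype(float)
u, v = 0.3, -0.2  # small sub-pixel displacement (model assumption)

def img(t):
    # Smooth intensity pattern translating with velocity (u, v)
    return np.sin(0.2 * (x - u * t)) * np.cos(0.15 * (y - v * t))

I0, I1 = img(0.0), img(1.0)

# Central-difference spatial derivatives, frame difference in time
Ix = np.gradient(I0, axis=1)
Iy = np.gradient(I0, axis=0)
It = I1 - I0

residual = Ix * u + Iy * v + It
print(np.abs(residual[2:-2, 2:-2]).max())  # small for sub-pixel motion
```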
Optical flow is a low-level vision feature widely used in many applications, for instance motion in depth, structure from motion, or ego-motion estimation. Its potential applications also encompass video stabilization, object tracking, segmentation, and active vision, all of which are useful in a wide range of fields such as autonomous robot navigation, video surveillance, and driving assistance systems. Some results of our implementation are shown in Fig 1.
As mentioned in Image-processing-core-library, all the work is supported by several publications in international scientific journals. However, the main difference is that here we include only the cores for the single-scale computation, not for the whole coarse-to-fine process. The reason is that the coarse-to-fine structure makes intensive use of memory and is therefore platform-dependent in our case.
Fig 1. Optical flow estimation for some examples of the Middlebury dataset.
Gradient-based model
The first implementation is based on the well-known Lucas and Kanade (L&K) algorithm [1][2]. Estimating optical flow (2D data) from the brightness-constancy assumption alone is an ill-posed problem, so additional assumptions are needed to compute it. The L&K approach is one of the most accurate, computationally efficient, and widely used methods for optical flow computation. It is a local gradient-based method, because the estimation is based on the image derivatives, and it solves the optical flow estimation problem by additionally assuming the flow to be constant within a local neighborhood. The hardware architecture of the core for the estimation of optical flow is shown in Fig 2. More details on the hardware implementation can be found in [4] and in img-proc-core-lib:Papers.
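As a software reference for the single-scale L&K method (a Python sketch of the algorithm, not the Handel-C hardware pipeline; window size, threshold, and test pattern are illustrative), each pixel's flow is obtained by solving the 2x2 least-squares system accumulated from the image derivatives over a local window:

```python
import numpy as np

def lucas_kanade(I0, I1, win=7, tau=1e-4):
    """Single-scale Lucas-Kanade: dense flow from image derivatives.

    Per pixel, solves the 2x2 normal equations
      [sum Ix*Ix  sum Ix*Iy] [u]   [-sum Ix*It]
      [sum Ix*Iy  sum Iy*Iy] [v] = [-sum Iy*It]
    accumulated over a win x win neighborhood.
    """
    Ix = np.gradient(I0, axis=1)
    Iy = np.gradient(I0, axis=0)
    It = I1 - I0
    r = win // 2
    H, W = I0.shape
    flow = np.zeros((H, W, 2))
    for i in range(r, H - r):
        for j in range(r, W - r):
            ix = Ix[i - r:i + r + 1, j - r:j + r + 1].ravel()
            iy = Iy[i - r:i + r + 1, j - r:j + r + 1].ravel()
            it = It[i - r:i + r + 1, j - r:j + r + 1].ravel()
            A = np.array([[ix @ ix, ix @ iy],
                          [ix @ iy, iy @ iy]])
            b = -np.array([ix @ it, iy @ it])
            # Skip ill-conditioned neighborhoods (aperture problem)
            if np.linalg.det(A) > tau:
                flow[i, j] = np.linalg.solve(A, b)
    return flow

# Illustrative check: a smooth pattern translated by a known shift
y, x = np.mgrid[0:64, 0:64].astype(float)
u_true, v_true = 0.4, -0.3
I0 = np.sin(0.25 * x) * np.cos(0.2 * y)
I1 = np.sin(0.25 * (x - u_true)) * np.cos(0.2 * (y - v_true))
flow = lucas_kanade(I0, I1)
valid = np.any(flow != 0, axis=2)
# Median flow over valid pixels, close to (0.4, -0.3)
print(np.median(flow[..., 0][valid]), np.median(flow[..., 1][valid]))
```

A real-time hardware version replaces the per-pixel loop with a streaming pipeline of separable convolutions and fixed-point arithmetic, but the estimation step is the same 2x2 solve.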
Fig 2. Scheme of the architecture for the optical flow estimation core.
Phase-based model
The second implementation is based on the phase-based algorithm of Pauwels and Van Hulle [3]. The cores are able to process 640x480 images at frame rates of up to 32 fps for the Lucas-Kanade algorithm. The cores are implemented in Handel-C and detailed in [4] and [5].
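The principle behind phase-based estimation can be illustrated in software (a 1-D Python sketch, not the Handel-C cores; filter parameters and signals are illustrative, and the actual model uses a multi-orientation 2-D Gabor filter bank). For a translating signal, the phase of a complex Gabor response shifts linearly with the displacement, so the displacement can be recovered from the temporal phase difference divided by the spatial phase gradient, which near the filter's tuning frequency is approximately the center frequency omega0:

```python
import numpy as np

def gabor_response(signal, omega0=0.5, sigma=6.0):
    # Complex Gabor filter: Gaussian envelope times complex carrier
    n = np.arange(-3 * int(sigma), 3 * int(sigma) + 1)
    gabor = np.exp(-n**2 / (2 * sigma**2)) * np.exp(1j * omega0 * n)
    return np.convolve(signal, gabor, mode='same')

x = np.arange(256).astype(float)
d = 0.7  # true sub-pixel displacement (illustrative)
s0 = np.sin(0.5 * x) + 0.3 * np.sin(0.23 * x)
s1 = np.sin(0.5 * (x - d)) + 0.3 * np.sin(0.23 * (x - d))

r0, r1 = gabor_response(s0), gabor_response(s1)

# Temporal phase difference, wrapped to [-pi, pi]
dphi_t = np.angle(r1 * np.conj(r0))
# Spatial phase derivative approximated by the tuning frequency omega0
v = -dphi_t / 0.5
print(np.median(v[30:-30]))  # close to the true displacement d
```

Because phase is largely insensitive to contrast changes, this scheme tends to be more robust than gradient-based constancy under varying illumination, at the cost of a larger filter bank.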
Contacts
- Javier Diaz, Seven Solutions
- Matteo Tomasi, Harvard University
- Francisco Barranco, University of Maryland and University of Granada
Project Status
Date | Release
---|---
10/02/2014 | v1.0 Release
References
[1] B. D. Lucas and T. Kanade, "An iterative image registration
technique with an application to stereo vision," in Proc. 7th Int. Joint
Conference on Artificial Intelligence, vol. 2, pp. 674-679, 1981.
[2] J. Y. Bouguet. Pyramidal Implementation of the Lucas Kanade
Feature Tracker.
[3] K. Pauwels, M. M. V. Hulle: Optic flow from unstable sequences
through local velocity constancy maximization, Image Vision Comput.
27(5): 579-587 (2009)
[4] F. Barranco, M. Tomasi, J. Diaz, M. Vanegas, E. Ros: Parallel
Architecture for Hierarchical Optical Flow Estimation Based on FPGA.
IEEE Trans. VLSI Syst. 20(6): 1058-1067 (2012)
[5] M. Tomasi, M. Vanegas, F. Barranco, J. Diaz, E. Ros:
High-Performance Optical-Flow Architecture Based on a Multi-Scale,
Multi-Orientation Phase-Based Model. IEEE Trans. Circuits Syst. Video
Techn. 20(12): 1797-1807 (2010)