To quantify locomotory behavior, tools for determining the location and shape of an animal’s body are a first requirement. Video recording is a convenient technology to store raw movement data, but extracting body coordinates from video recordings is a nontrivial task. The algorithm described in this paper solves this task for videos of leeches or other quasi-linear animals in a manner inspired by the mammalian visual processing system: the video frames are fed through a bank of Gabor filters, which locally detect segments of the animal at a particular orientation.
The algorithm assumes that the image location with maximal filter output lies on the animal’s body and traces its shape out in both directions from there. The algorithm successfully extracted location and shape information from video clips of swimming leeches, as well as from still photographs of swimming and crawling snakes. A Matlab implementation with a graphical user interface is available online, and should make this algorithm conveniently usable in many other contexts.
When fast-moving animals are recorded, the delay between the acquisition of the odd and even scan lines of an interlaced video frame results in displacements between the subimages formed by those two sets of lines. Therefore, the first step of preprocessing is to separate the subimages, i.e., to de-interlace (step 1 in Fig. 1). The first subimage is constructed by interpolating between the odd scan lines of the stored VHS image; the second by interpolating between the even scan lines. The net result is a sequence of images at 60 frames per second with 640 × 480 resolution.
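This de-interlacing step can be sketched as follows. The paper's implementation is in Matlab; this is an illustrative Python/NumPy version, and the function and variable names are ours, not from the paper:

```python
import numpy as np

def deinterlace(frame):
    """Split one interlaced frame into two de-interlaced subimages.

    The first subimage keeps the odd scan lines, the second the even
    ones; in each, the missing lines are filled by linear interpolation
    between the neighboring kept lines (edge rows are copied from the
    nearest kept line).  Sketch of step 1 in Fig. 1; details are ours.
    """
    h, _ = frame.shape
    subimages = []
    for parity in (1, 0):                  # 1: odd lines first, 0: even
        sub = frame.astype(float).copy()
        for r in range(1 - parity, h, 2):  # rows not in this field
            above = frame[r - 1] if r > 0 else frame[r + 1]
            below = frame[r + 1] if r < h - 1 else frame[r - 1]
            sub[r] = 0.5 * (above.astype(float) + below.astype(float))
        subimages.append(sub)
    return subimages  # two subimages per frame: 30 fps video -> 60 fps
```

Because each field is captured at a different instant, treating the two interpolated subimages as separate frames doubles the effective temporal resolution at the cost of halved vertical detail.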
With that average longitudinal shift removed, Wormfinder’s reports of head and tail location differed by more than 5% of animal length in only 10 and 17 frames respectively (0.85% and 1.45%; Fig. 3a). In contrast, the reference algorithm’s output for heads and tails was off by more than 5% in as many as 99 and 109 frames respectively (8.4% and 9.3%), and by 100% of animal length or more in 9 and 17 frames.
A Matlab (The Mathworks, Natick, MA) implementation of the Wormfinder algorithm with a graphical user interface (Fig. 5) is available from the authors’ website. The user interface allows importing single images, image sequences, and movie files. Detection parameters can be specified, and individual frames can be annotated to prevent spurious detection in unusually noisy areas. After detection, the results for each frame can be shown as an animation, and corrections can be made as necessary by modifying detection parameters and reprocessing individual frames or all frames.
We have described an algorithm for detecting the location and body shape of leeches and other quasi-linear animals in video sequences based on determining directionality for every pixel of the image, and tracing out the animal starting from the point where directionality is strongest. Directionality was computed in a manner inspired by the architecture of the mammalian visual system: using a bank of Gabor filters. The algorithm successfully located leeches in over 95% of video frames, without tuning any parameters for individual video clips.
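The directionality computation can be sketched as follows. Again this is an illustrative Python/NumPy version rather than the paper's Matlab implementation, and all parameter values (wavelength, envelope width, kernel size, number of orientations) are assumptions, not the ones used in the paper:

```python
import numpy as np

def gabor_kernel(theta, wavelength=8.0, sigma=4.0, size=21):
    """Even (cosine-phase) Gabor kernel at orientation theta (radians).

    The carrier oscillates perpendicular to the bar direction the
    kernel prefers, so the theta = 0 kernel responds best to a
    vertical bar.  Parameter values here are illustrative only.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along carrier
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def local_direction(image, row, col, n_orient=8):
    """Index and strength of the best-matching kernel in a bank of
    n_orient orientations, by direct correlation at one pixel."""
    half = gabor_kernel(0.0).shape[0] // 2
    patch = image[row - half:row + half + 1, col - half:col + half + 1]
    responses = [abs(np.sum(patch * gabor_kernel(k * np.pi / n_orient)))
                 for k in range(n_orient)]
    return int(np.argmax(responses)), float(max(responses))
```

Applying the full filter bank at every pixel (in practice via FFT-based convolution) yields the directionality map; tracing then begins at the pixel whose strongest filter response is largest over the whole image.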
It also successfully located snakes in two natural scenes, despite the substantial visual clutter in these images. We expect that it would perform equally well on still images or movie clips of other similarly shaped animals such as eels or lampreys. The described approach offers significant advantages over the use of beads stitched to the body wall: First, it does not require surgery, simplifying experimental procedures and eliminating any concerns that the observed behavior is distorted by the observation method.
Second, it allows measurement along the entire body of the animal rather than at only a few discrete locations. By validating results against human judgment, we determined that in nearly all cases Wormfinder produced smaller errors than a brightness-threshold-based skeletonization method used as a reference algorithm, and that its worst-case behavior was better by an order of magnitude. Wormfinder required no fine-tuning of parameters and was robust against background clutter in the image.
Source: University of California
Authors: Daniel A. Wagenaar | William B. Kristan