Object Detection

The operator detects objects, such as ships on the sea surface, from SAR imagery.

Major Processing Steps

The object detection operation consists of the following four major steps:

  1. Pre-processing: Calibration is applied to the source image to make the subsequent pre-screening easier and more accurate.
  2. Land-sea masking: A land-sea mask is generated to ensure that detection is focused only on the area of interest.
  3. Pre-screening: Objects are detected with a Constant False Alarm Rate (CFAR) detector.
  4. Discrimination: False alarms are rejected based on object dimensions.

For details of calibration, the reader is referred to the Calibration operator.  Here it is assumed that the calibration pre-processing step has been performed before applying object detection.

For details of land-sea mask generation, the reader is referred to the Create Land Mask operator. 

Two-Parameter Constant False Alarm Rate (CFAR) Detector

The detector used in the pre-screening operation is the two-parameter constant false alarm rate (CFAR) detector. The basic idea is to search for pixels that are unusually bright compared to the pixels in the surrounding area.

Let xt be the pixel under test and T be a given threshold. The detection criterion can then be expressed as

    xt > T

Let f(x) be the ocean clutter probability density function, with x ranging over the possible pixel values. The probability of false alarm (PFA) is then given by

    PFA = ∫_T^∞ f(x) dx

and the above detection criterion is equivalent to the criterion below:

    ∫_xt^∞ f(x) dx < PFA

If a Gaussian distribution is assumed for the ocean clutter, the above detection criterion can be further expressed as

    xt > μb + t·σb

where μb is the background mean, σb is the background standard deviation, and t is a detector design parameter computed from the PFA by the following equation:

    PFA = ∫_t^∞ (1/√(2π)) exp(−x²/2) dx = (1/2)·erfc(t/√2)

The valid PFA value is in the range [0, 1].
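
For illustration only (this is not part of the toolbox), the design parameter t can be obtained from a desired PFA by inverting the Gaussian tail relation above, for example with NumPy/SciPy:

```python
import numpy as np
from scipy.special import erfcinv

def design_parameter(pfa):
    """Return t such that 0.5 * erfc(t / sqrt(2)) equals the given PFA."""
    return np.sqrt(2.0) * erfcinv(2.0 * pfa)

print(design_parameter(1e-6))  # about 4.75 standard deviations above the background mean
```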

In the actual implementation of the two-parameter CFAR detector, the setup shown in Figure 1 is employed. The target window contains the pixel under test, the background “ring” contains the pixels used to estimate the underlying background statistics, and the guard “ring” separates the target window from the background ring so that no pixels of an extended target are included in the background ring. The background mean μb and standard deviation σb used in the criterion are estimated from the pixels in the background ring.

If the target window contains more than one pixel, the operator uses the following detection criterion:

    μt > μb + t·σb

where μt is the mean value of the pixels in the target window. In this case, t should be replaced by t·√n (where n is the number of pixels in the target window) in the PFA calculation, since the mean of n independent clutter pixels has standard deviation σb/√n.
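
As a minimal illustrative sketch (not the operator's source code), the multi-pixel test can be written as follows, assuming Gaussian, independent clutter pixels; the function name and the NumPy/SciPy usage are choices made here for illustration:

```python
import numpy as np
from scipy.special import erfcinv

def cfar_detect(target_pixels, background_pixels, pfa):
    """Two-parameter CFAR test: detect when mu_t > mu_b + t * sigma_b."""
    n = target_pixels.size
    mu_t = target_pixels.mean()
    mu_b = background_pixels.mean()
    sigma_b = background_pixels.std()
    # PFA = 0.5 * erfc(t * sqrt(n) / sqrt(2))  =>  t = sqrt(2 / n) * erfcinv(2 * PFA)
    t = np.sqrt(2.0 / n) * erfcinv(2.0 * pfa)
    return mu_t > mu_b + t * sigma_b
```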

Adaptive Threshold Algorithm

The object detection is performed in an adaptive manner by the Adaptive Thresholding operator. For each pixel under test, three windows surround it: the target window, the guard window, and the background window (see Figure 1).

Normally, the target window size should be about the size of the smallest object to detect, the guard window size should be about the size of the largest object, and the background window size should be large enough to estimate the local statistics accurately.

The operator slides these windows across the image and, for each pixel under test, applies the detection criterion described above using the background statistics estimated from the background window.

Figure 1. Window setup for adaptive thresholding algorithm.
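
To make the window setup in Figure 1 concrete, the sketch below shows a brute-force version of the adaptive thresholding loop. It is a simplified illustration, not the Adaptive Thresholding operator itself: window sizes are given here in pixels rather than meters, windows are square, and image borders are simply skipped.

```python
import numpy as np
from scipy.special import erfcinv

def adaptive_threshold(image, target_size, guard_size, background_size, pfa):
    """Return a boolean detection mask; the sizes are odd numbers of pixels
    with target_size <= guard_size <= background_size."""
    half_t, half_g, half_b = target_size // 2, guard_size // 2, background_size // 2
    n = target_size * target_size
    t = np.sqrt(2.0 / n) * erfcinv(2.0 * pfa)

    mask = np.zeros(image.shape, dtype=bool)
    rows, cols = image.shape
    for r in range(half_b, rows - half_b):
        for c in range(half_b, cols - half_b):
            target = image[r - half_t:r + half_t + 1, c - half_t:c + half_t + 1]
            background = image[r - half_b:r + half_b + 1,
                               c - half_b:c + half_b + 1].astype(float)
            # Blank out the guard area so that pixels of an extended target
            # do not contaminate the background statistics.
            background[half_b - half_g:half_b + half_g + 1,
                       half_b - half_g:half_b + half_g + 1] = np.nan
            mu_b = np.nanmean(background)
            sigma_b = np.nanstd(background)
            if sigma_b > 0 and target.mean() > mu_b + t * sigma_b:
                mask[r, c] = True
    return mask
```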

Discrimination

The discrimination operation is conducted by the Object Discrimination operator. During this operation, false detections are eliminated based on simple target measurements.

  1. The operator first groups contiguous detected pixels into clusters.
  2. Then the width and length of each cluster are extracted.
  3. Finally, based on these measurements and the user-specified discrimination criteria, clusters that are too big or too small are eliminated (a simplified sketch of these steps is shown below).
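
The following is a minimal sketch of these three steps using SciPy's connected-component labelling. It works in pixels rather than meters and is meant only to illustrate the idea, not the Object Discrimination implementation:

```python
import numpy as np
from scipy import ndimage

def discriminate(detection_mask, min_size, max_size):
    """Keep only clusters whose width and length lie within [min_size, max_size] (in pixels)."""
    labels, num_clusters = ndimage.label(detection_mask)   # step 1: cluster contiguous pixels
    keep = np.zeros_like(detection_mask, dtype=bool)
    for label, box in enumerate(ndimage.find_objects(labels), start=1):
        length = box[0].stop - box[0].start                 # step 2: extract cluster dimensions
        width = box[1].stop - box[1].start
        if min_size <= min(width, length) and max(width, length) <= max_size:
            keep[box] |= labels[box] == label               # step 3: keep clusters within the limits
    return keep
```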

Parameters Used

For the Adaptive Thresholding operator, the following parameters are used (see Figure 2):

  1. Target Window Size (m): The target window size in meters. It should be set to the size of the smallest target to detect.
  2. Guard Window Size (m): The guard window size in meters. It should be set to the size of the largest target to detect.
  3. Background Window Size (m): The background window size in meters. It should be far larger than the guard window size to ensure accurate calculation of the background statistics.
  4. PFA (10^(-x)): Here the user enters a positive number x, and the PFA value is computed as 10^(-x). For example, if the user enters x = 6, then PFA = 10^(-6) = 0.000001 (see the sketch below).
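
For illustration only, the sketch below shows how these dialog inputs might map onto the quantities the detector works with. The meter-to-pixel conversion via the product's pixel spacing and the rounding up to an odd window size are assumptions made for this sketch, not taken from the operator's source code:

```python
import math

def window_size_in_pixels(size_in_meters, pixel_spacing_in_meters):
    """Convert a window size in meters to an odd number of pixels (assumed convention)."""
    pixels = max(1, math.ceil(size_in_meters / pixel_spacing_in_meters))
    return pixels if pixels % 2 == 1 else pixels + 1

x = 6                     # value entered in the PFA (10^(-x)) field
pfa = 10.0 ** (-x)        # PFA = 10^(-6) = 0.000001
print(window_size_in_pixels(50, 10))  # e.g. a 50 m window at 10 m pixel spacing -> 5 pixels
```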


                           Figure 2. Adaptive Thresholding Operator dialog box.


For the Object Discrimination operator, the following parameters are used (see Figure 3):

  1. Minimum Target Size (m): Targets with dimensions smaller than this threshold are eliminated.

  2. Maximum Target Size (m): Targets with dimensions larger than this threshold are eliminated.


                             Figure 3. Object Discrimination Operator dialog box.

Visualize Detected Objects

To view the object detection results, the following steps should be followed:

  1. Bring up the image.
  2. Go to the Layer Manager and add the layer called "Object Detection Results".

The detected objects will be circled on top of the image view (see the example in the figure below). An Object Detection Report will also be produced in XML format in the .s1tbx/log folder.

                                                Figure 4. Object Detection Results overlaid on the image.

Considerations When Running with Graph Builder

The object detection can also be run in the Graph Builder. However, if it is run together with other operators, such as terrain correction as in the graph shown below, the detection results get lost.

This is because the detection results are saved into vectors only at the end of processing. However, the terrain correction operator (or any other downstream operator) tries to copy the vectors from its source product before processing has even started. At that moment, the vectors in the output product of object detection are still empty, and that is why no vectors get copied. If the object detection and terrain correction are run in two separate graphs, there is no such problem.
