
Image Retrieval: Color Coherence Vector

Tarek is a software engineer who enjoys implementing side research projects and blogging about them.

Content-Based Image Retrieval System

Introduction and a brief literature history

Content-based image retrieval (CBIR) is the field concerned with retrieving images based on their actual visual content, not on any textual or meta data attached to them. The job of extracting the right features from an image is done by an image descriptor. One important use case for any image descriptor is the ability to use its generated features to measure the similarity between images.

In this post, we are going to talk about one of the commonly used techniques in image retrieval: the Color Coherence Vector. It is an image descriptor (more specifically, a color descriptor) that extracts color-related features from an image, which can then serve as a low-dimensional representation of that image.

Two of the simplest color descriptors are the Global Color Histogram (GCH) and the Local Color Histogram (LCH). Both are based on computing the color histogram of the image. The difference is that GCH computes the color histogram for the whole image and uses this frequency table as a low-dimensional representation of the image, while LCH first partitions the image into blocks, computes a separate color histogram for each block, and uses the concatenation of these local histograms as the low-dimensional representation.

Due to the sparsity of the resulting color-histogram representation, some papers (like "Local vs. Global Histogram-Based Color Image Clustering") suggest applying Principal Component Analysis (a method used for dimensionality reduction, keeping only the most useful features) to the output color histograms.

However, these methods have some clear issues. For example, GCH doesn't encode any information about the spatial distribution of colors in the image. LCH performs much better than GCH because it overcomes this problem to some extent, but it is still not robust to small variations such as image rotations and flips.

Now we’ll discuss a more useful, yet still fast, color descriptor that is capable of encoding information about the spatial distribution of colors: the Color Coherence Vector (CCV).

Color Coherence Vector

Color Coherence Vector (CCV) is a more complex method than the color histogram. It works by classifying each pixel as either coherent or incoherent: a coherent pixel is part of a big connected component (CC), while an incoherent pixel is part of a small one. A crucial step for this method to work is defining the criterion by which we decide whether a connected component is big or not.

How are features extracted in CCV?

The following steps build a low-dimensional representation of the image.

  1. Blur the image (by replacing each pixel’s value with the average value of the 8 adjacent pixels surrounding it).
  2. Quantize the color space (the image’s colors) into n distinct colors.
  3. Classify each pixel as either coherent or incoherent. This is computed by:
    • Finding the connected components for each quantized color.
    • Determining the value of tau (tau is a user-specified threshold, normally about 1% of the image’s size). Any connected component with a number of pixels greater than or equal to tau has its pixels classified as coherent; otherwise they are incoherent.
  4. For each color, compute two values (C and N).
    • C is the number of coherent pixels.
    • N is the number of incoherent pixels.

    Clearly, the sum of C and N over all colors equals the total number of pixels in the image. A minimal code sketch of these steps follows below.
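
To make these steps concrete, here is a minimal Python sketch of the pipeline for a single-channel image, using SciPy's connected-component labeling. The function name, the 3x3 mean blur, and the equal-width quantization are my own simplifications for illustration, not the exact implementation from the paper.

    import numpy as np
    from scipy import ndimage

    def color_coherence_vector(gray_img, n_colors=3, tau=None):
        # A sketch of CCV extraction for a single-channel image.
        img = gray_img.astype(float)

        # Step 1: blur each pixel with the mean of its 3x3 neighborhood.
        img = ndimage.uniform_filter(img, size=3)

        # Step 2: quantize the color space into n_colors equal-width bins.
        edges = np.linspace(img.min(), img.max() + 1e-9, n_colors + 1)
        quantized = np.digitize(img, edges[1:-1])  # values in 0 .. n_colors-1

        # Tau defaults to 1% of the image size, as suggested above.
        if tau is None:
            tau = max(1, int(0.01 * quantized.size))

        ccv = []
        for color in range(n_colors):
            # Step 3: find the connected components of this color.
            mask = quantized == color
            labels, n_cc = ndimage.label(mask)
            sizes = ndimage.sum(mask, labels, index=np.arange(1, n_cc + 1))

            # Step 4: pixels in components of size >= tau are coherent (C),
            # the remaining pixels of this color are incoherent (N).
            coherent = int(sum(s for s in sizes if s >= tau))
            incoherent = int(mask.sum()) - coherent
            ccv.append((coherent, incoherent))
        return ccv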

Let’s take this example to describe the steps of the algorithm concretely.
Assume that the image has 30 unique colors.

[Figure: the example image, containing 30 unique colors]

Now we’ll quantize the colors into only three buckets (0–9, 10–19, 20–29). This quantization essentially merges similar colors into a single representative color.
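
For this toy example, since the original colors are the integers 0 through 29, the quantization can be as simple as integer division (a hypothetical helper just for illustration):

    def quantize(color):
        # Map an original color value (0..29) to one of three buckets (0, 1, 2).
        return color // 10

    print(quantize(7), quantize(13), quantize(25))  # prints: 0 1 2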

[Figure: the example image after quantizing its colors into three buckets]

Assuming that our tau is 4:

For color 0 we have 2 connected components (8 coherent pixels).

For color 1 we have 1 connected component (8 coherent pixels).

For color 2 we have 2 connected components (6 coherent pixels and 3 incoherent pixels).

So finally, our feature vector is:

Color 0: C = 8, N = 0
Color 1: C = 8, N = 0
Color 2: C = 6, N = 3

Defining a distance function

The purpose of having a distance function is to quantify the dissimilarity between any two images. It complements the color descriptor: for example, the descriptor can extract features for all images and store them in a database, and then during the retrieval phase the distance function is used to return the images with the minimum distance to the query image.

In order to build a distance function for CCV, we use the computed coherence features (C and N for each color) to compare any two images (let's call them a and b in the following equation), where:

Ci(a) : number of coherent pixels of color i in image a.

Ni(a) : number of incoherent pixels of color i in image a.

d(a, b) = Σ over all colors i of ( |Ci(a) − Ci(b)| + |Ni(a) − Ni(b)| )
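
Given two feature vectors in the list-of-(C, N)-pairs form produced by the sketch above, this distance is just a sum of absolute differences. The helper name ccv_distance and the second feature vector below are made up for illustration:

    def ccv_distance(ccv_a, ccv_b):
        # L1 distance between two CCVs, each a list of (C, N) pairs per color.
        return sum(abs(ca - cb) + abs(na - nb)
                   for (ca, na), (cb, nb) in zip(ccv_a, ccv_b))

    a = [(8, 0), (8, 0), (6, 3)]   # the feature vector from the example above
    b = [(6, 2), (8, 0), (5, 4)]   # a hypothetical second image
    print(ccv_distance(a, b))      # 2 + 2 + 0 + 0 + 1 + 1 = 6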

Drawbacks of Color Coherence Vector

Now we have seen that the Color Coherence Vector method encodes some information about the spatial distribution of colors through its coherence component. But this method has some drawbacks, and the remaining part of this post will discuss two main ones.

Coherent pixels in CCV represent pixels that belong to big, noticeable components in the image. However, the CCV only records how many such pixels there are, not how many components they form: if we merged two of these big components into one, we would end up with a single bigger component whose pixel count equals the total of the two original components, and the CCV would stay the same.

To make this clear, let's look at these pictures (assuming tau equals 8).

[Figure: two different example images (tau = 8) that produce the same CCV]

Although they are different pictures, they have the same CCV.
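
A quick numerical illustration of this drawback, using SciPy's labeling on two toy binary masks for a single quantized color: one mask contains two separate 9-pixel components, the other a single merged 18-pixel component, yet with tau = 8 both produce the same (C, N) pair. The helper name coherent_incoherent is my own:

    import numpy as np
    from scipy import ndimage

    def coherent_incoherent(mask, tau=8):
        # Count (coherent, incoherent) pixels for one quantized color.
        labels, n_cc = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=np.arange(1, n_cc + 1))
        coherent = int(sum(s for s in sizes if s >= tau))
        return coherent, int(mask.sum()) - coherent

    # Image A: two separate 3x3 blocks (two components of 9 pixels each).
    a = np.zeros((6, 10), dtype=bool)
    a[0:3, 0:3] = True
    a[3:6, 7:10] = True

    # Image B: one merged 3x6 block (a single component of 18 pixels).
    b = np.zeros((6, 10), dtype=bool)
    b[0:3, 2:8] = True

    print(coherent_incoherent(a))  # (18, 0)
    print(coherent_incoherent(b))  # (18, 0) -- different layouts, same entry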

It might seem that this problem could be solved by adjusting the threshold tau, but tuning it is not trivial: in many cases you will have to choose between several thresholds, none of which completely captures the difference between big and small components across your image dataset.

Another problem we may encounter concerns the positions of these noticeable connected components relative to each other.

The following pictures have the same CCV but different appearances:

[Figure: two images whose components are positioned differently but share the same CCV]

There are many solutions to this problem. For example, adding another dimension to the feature vector that captures the components’ positions relative to each other may break these ties. The paper "An Improved Color Coherence Vector Method for CBIR" describes this approach.

Here is the link to the CCV paper in case you would like a more detailed academic description of the method. I hope this post was beneficial to you. Lastly, you can find my Matlab implementation of CCV on GitHub (ColorCoherenceVector Code).

© 2013 Tarek Mamdouh
