This page describes a rudimentary experiment with the aim of providing a proof of concept of the colour recovery method proposed here. C++ code was written to process an HD (1920x1080 pixels) digital sample of a film recording sequence, attempting to separate and decode the chroma signal embedded in the frames. This was by no means a full-blown implementation of the method, and a number of corners were cut. Despite this, the experiment did produce results that showed a certain amount of promise. The image below is taken from one of the better resulting frames from the first run of the experiment.

[Image: a-browne_frame126_sat-col-asp_adj_small.jpg]

The colour saturation of the above image has been enhanced (courtesy of James Insell) from the frame actually produced by the experiment. The reduced size at which the image is reproduced here also helps to conceal a multitude of sins, at least some of which are artefacts due to the short cuts that were taken in implementing the process. For comparison, an image taken from a colour video tape of the same programme can be found here.

There are two parts to the proposed colour recovery method: luminance/chroma separation and chroma decoding. Some notes on how each of these were implemented, and the degree to which they were successful, follow.

Luminance/Chroma Separation


The proposed method of luminance/chroma separation utilises the fact that the chroma signal in a particular frame will be the inverse of the chroma signal in the next frame but one, for all points of the original colour picture that are unchanged between the two frames. Taking the average values of luminance at each such point should therefore give the value of luminance for the original picture, since the contributions from the chroma signals will cancel each other out. In the proposed method a sequence of four frames is used. The average values of luminance at each point in the first and third frames are compared with the average values at the equivalent points in the second and fourth frames. If they are the same, it is very likely that the luminance and colour of the point in the original picture did not change across those four frames, and the average value therefore corresponds to the original luminance.

The experiment attempted to implement this method simply by taking sequences of four frames and comparing the averages as described above for each pixel in the frames. (No attempt was made in this process to track the original picture line structure.) This did not work very well: it yielded only a fairly limited number of pixels where the average values were the same across the four frames, even for visibly static parts of the picture. Each frame other than those near the extreme ends of the sequence actually forms part of four different four-frame sequences, but this did not increase the pixel yield nearly enough for the method to be considered successful.

I think a significant contributory factor to the failure of this approach is likely to be sampling position. Even if the film recording is extremely stable, the positions at which each digital sampling of the luminance takes place are very unlikely to be precisely the same in each frame, so corresponding pixels in different frames are unlikely to contain values of the continuously varying chroma signal from precisely corresponding points.

For the purposes of the experiment an error margin was introduced when comparing the averages; to produce the image above, a difference in pixel luminance value of anything up to 15 was permitted. This obviously has a degrading effect on the resulting image and on the recovered colour, and even with this error margin there were still pixels scattered throughout each frame where luminance/chroma separation was not achieved. These failed pixels were set to black in the resulting frame for the purposes of the experiment, although better visual results would have been achieved by interpolating values for them from neighbouring pixels.
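
The per-pixel test described above can be sketched as follows. This is a minimal illustration, not the actual experimental code; the function name and layout are invented, and only a single pixel position is considered:

```cpp
#include <cmath>
#include <optional>

// Sketch of the four-frame luminance/chroma separation tried in the
// experiment. f1..f4 are the luminance values of one pixel position in
// four consecutive frames. The chroma contribution inverts between
// frames two apart, so for a static pixel the averages of frames 1+3
// and frames 2+4 should both equal the original luminance. If the two
// averages agree to within the error margin (15 in the experiment),
// their mean is taken as the separated luminance; otherwise separation
// fails at this pixel (the experiment painted such pixels black).
std::optional<double> separateLuma(double f1, double f2, double f3, double f4,
                                   double margin = 15.0)
{
    const double avg13 = (f1 + f3) / 2.0;  // chroma cancels if picture static
    const double avg24 = (f2 + f4) / 2.0;
    if (std::abs(avg13 - avg24) > margin)
        return std::nullopt;               // separation failed at this pixel
    return (avg13 + avg24) / 2.0;
}
```

For example, a static pixel of luminance 100 carrying a chroma excursion of +/-10 in alternate frame pairs (110, 95, 90, 105) separates cleanly to 100, while a moving pixel (200, 50, 100, 50) fails the margin test.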

In conclusion, the proposed method of luminance/chroma separation may not be viable in practice and alternatives such as image notch filters may have to be investigated. With such alternatives there also needs to be an accompanying method for detecting static parts of the picture.
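
One such alternative might be sketched as follows. This is purely hypothetical and was not part of the experiment: a crude one-dimensional comb/notch along each pixel row, which averages each sample with the sample half a subcarrier cycle earlier so that the chroma patterning cancels. At 1920 pixels and roughly 230.5 subcarrier cycles per line, the half-cycle spacing is about 4.16 pixels, so the delayed sample is linearly interpolated:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Hypothetical one-dimensional notch along a pixel row (not part of
// the experiment). Averaging each sample with the sample half a
// chroma cycle earlier cancels a sinusoid at the subcarrier
// frequency; the fractional delay is handled by linear interpolation.
std::vector<double> notchRow(const std::vector<double>& row)
{
    const double halfCycle = 1920.0 / (4433618.75 * 52e-6) / 2.0; // ~4.16 px
    std::vector<double> out(row);
    const int k = static_cast<int>(halfCycle);
    const double frac = halfCycle - k;
    for (std::size_t i = k + 1; i < row.size(); ++i) {
        // Sample at position i - halfCycle, linearly interpolated.
        const double delayed = row[i - k] * (1.0 - frac)
                             + row[i - k - 1] * frac;
        out[i] = (row[i] + delayed) / 2.0;
    }
    return out;
}
```

Such a filter would of course soften genuine luminance detail near the subcarrier frequency, which is one reason an accompanying static-picture detector would still be needed.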

Chroma Decoding


This was the more successful part of the experiment, and obviously the more important part! The proposed method depends on identifying points in two successive frames where the absolute values of the chroma signal are the same in both frames. The U and V values can then be taken from the values of the chroma signal at points lying exactly halfway between these identified points.

Each cycle of the chroma signal contains four points where the absolute values of the signal will match the absolute values of the signal at corresponding points in a neighbouring frame. As the visible part of a television picture line is displayed in 52 microseconds and the chroma signal frequency is 4433618.75 cycles per second, each line will contain approximately 230.5 cycles. At a resolution of 1920 pixels per line, a quarter of a cycle will therefore be a little over two pixels in length. (The fact that the image on a film recording is likely to be slightly enlarged and cropped will also have some effect on this figure.) The points at which colour values can be obtained as described above therefore fall at roughly one in every two pixels, giving restoration of colour to approximately every second pixel on the line.
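
The figures quoted above follow directly from the constants in the text and can be checked with a couple of lines:

```cpp
// 52 microseconds of visible line at the PAL subcarrier frequency of
// 4433618.75 Hz gives roughly 230.5 subcarrier cycles per line; at
// 1920 pixels per line a quarter of a cycle therefore spans a little
// over two pixels.
const double kSubcarrierHz  = 4433618.75;   // PAL colour subcarrier
const double kActiveLineSec = 52e-6;        // visible part of a line

const double cyclesPerLine        = kSubcarrierHz * kActiveLineSec; // ~230.55
const double pixelsPerQuarterCycle = 1920.0 / cyclesPerLine / 4.0;  // ~2.08
```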

In the experiment, an additional mathematical relationship was exploited to restore colour to more pixels. Taking two successive identified points where the absolute values of the chroma signal in both frames are the same, if both these points had the same colour in the original colour picture then the sum and difference of the chroma signals at these points should yield

[equation image]

and

[equation image]
However, other approaches such as interpolation might work at least as well.

As in the signal separation part of the experiment, no attempt was made to track the picture line structure. Instead each pixel row of the frame was treated as though it were tracking a picture line and hence a chroma signal, on the basis that some of the time this will be true - or at least approximately true, as the recording actually has a noticeable degree of tilt to it. Tracking the line structure doesn't look as though it would be that straightforward an exercise, and the problem is probably made more difficult with this particular recording because large parts of the picture background are near black. However, as line structure appears to be visible to the human eye elsewhere in the picture, it should be possible to devise a software method of tracking it if need be.

Simple linear interpolation of the "signals" was used to identify (approximately) the sub-pixel points where the absolute signal values coincide, and to infer the value of the signals (and hence U and V) at the intermediate points. Again this is an approximation, although possibly a reasonable one, all things considered. A more sophisticated interpolation technique might not make a great deal of difference.

No attempt was made in the experiment to validate whether what was being tracked at any time was a valid chroma signal, rather than, say, small variations in luminance within the dark parts of the picture or variations within the areas between lines. This led to a large number of pixels of spurious false colour appearing in the results, but it should be possible to avoid this effect in a proper implementation.

As discussed in the description of the method, although the U and V values obtained will be correct in magnitude, we need to determine what sign they should have. As far as I can see, the only way to do this is to identify by trial and error the correct initial choice out of the four possible combinations of signs, after which the signs should be determinable by position (i.e. by frame, line within the frame, and the count of identified points reached within the line, or something to that effect). Doing this would probably involve tracking the line structure, at least to a limited degree. For the purposes of the experiment, no attempt was made to do this; instead all recovered U and V values were given the same sign throughout the frame. As luck would have it, the combination of negative U and positive V happened to be a good fit for the most prevalent colours in the source material (there is not much blue in the picture, for example), and it is this result that is shown.
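
The ambiguity can be made concrete with a small sketch (names invented for illustration): each recovered magnitude pair admits four sign combinations, and the experiment simply fixed one of them for the whole frame.

```cpp
#include <array>
#include <cmath>
#include <utility>

// Decoding yields only the magnitudes of U and V, so each recovered
// pair admits four candidate sign combinations. A full implementation
// would pick the correct combination once by trial and error and then
// derive subsequent signs from position.
std::array<std::pair<double, double>, 4> signCandidates(double u, double v)
{
    const double au = std::fabs(u), av = std::fabs(v);
    return {{ { au,  av}, { au, -av}, {-au,  av}, {-au, -av} }};
}

// The fixed combination used for the frame shown: negative U, positive V.
std::pair<double, double> experimentChoice(double u, double v)
{
    return { -std::fabs(u), std::fabs(v) };
}
```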

No attempt was made to address the issues of geometrical distortion of the picture (and it may be this that is responsible for the notable colour error towards the upper left corner). It is conceivable that the identification of the points where the absolute values of the signals coincide could be used to help correct geometrical distortion (since they occur at known time and hence spatial intervals) but the practicality of this has not been investigated.

Given all the above considerations, it is perhaps surprising that the colour recovery part of the experiment worked as well as it did. Although the results so far are in need of considerable improvement, this does suggest that there should be merit in taking this approach further.

Andrew Browne - 29/01/2008