Colour recovery - using existing material as a reference

This method is simple, but possibly effective.

When attempting to recover colour from the "Top Of The Pops" FR (film recording) image on this website, I produced an image containing concentric rings of colour: alternate rings contained good colour, with full U/V inversion at the mid-points between them.

I suspect these are caused by the geometric distortions in the image, since these will stretch the active-line subcarrier, thus increasing its wavelength. Therefore your demodulating wave will shift in and out of phase with the active-line chroma. This will cause a continuous rotation of the U and V reference axes; however, if you compensate for this, the phase angles will still be correct, and therefore you are still retrieving useful colour information.
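
The phase-drift mechanism can be sketched numerically. This is a minimal illustration, not the actual decoder: the sample rate, the 0.1% stretch factor, and the crude moving-average low-pass filter are all assumptions; only the PAL subcarrier frequency is real.

```python
import numpy as np

fs = 13.5e6            # assumed sample rate, Hz
fsc = 4.43361875e6     # PAL subcarrier frequency, Hz
stretch = 1.001        # hypothetical 0.1% geometric stretch of the line
t = np.arange(1024) / fs

# Chroma carrying a constant colour (U=1, V=0), but on a stretched subcarrier.
chroma = np.cos(2 * np.pi * fsc * stretch * t)

# Demodulate against the nominal (unstretched) carrier.
u = 2 * chroma * np.cos(2 * np.pi * fsc * t)
v = 2 * chroma * np.sin(2 * np.pi * fsc * t)

# Low-pass filter (here just a moving average) leaves a U/V vector whose
# phase angle grows steadily along the line - the axis rotation described
# above.
k = 64
u_f = np.convolve(u, np.ones(k) / k, mode='valid')
v_f = np.convolve(v, np.ones(k) / k, mode='valid')
phase = np.degrees(np.arctan2(v_f, u_f))
print(phase[0], phase[-1])   # phase drifts steadily along the line
```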

The spacing of the concentric rings indicates how fast the demodulating wave is shifting out of phase with the active line, and therefore it tells you how distorted the geometry is at that point. Thus you could use this information to model a deformation which would reverse the geometric distortion over the frame.
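
The relationship between ring spacing and local stretch can be written down directly. This is a back-of-envelope sketch with a made-up ring period; only the subcarrier frequency is real.

```python
# If the demodulated U/V phase drifts by one full turn over one ring period
# d (in seconds of active line), the local subcarrier frequency error is
# 1/d, and the fractional geometric stretch is that error over fsc.
fsc = 4.43361875e6          # PAL subcarrier, Hz
ring_period_s = 5e-6        # hypothetical: one ring every 5 us of active line
freq_error = 1.0 / ring_period_s        # Hz
stretch = freq_error / fsc              # fractional stretch at that point
print(stretch)              # ~4.5% in this made-up example
```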

Rather than doing this, I thought it would be easier to simply try to compensate for the varying rotations of the U and V axes in phase space.

By comparing the concentric ring image with the existing colour reference image, I produced a mask in UV space identifying how much hue and saturation correction needs to be applied to each pixel to get back to the correct colours. Hence you could produce a mask from a frame that exists in both colour VT and b/w FR formats, and use this mask to correct portions of the footage that exist only in b/w.

N.B. The saturation information in the concentric ring image will be wrong, since the rings of bad colour are mostly not sampling the peak values of the U and V carriers. Therefore they will be under-saturated. However the relative amplitudes of U and V should still be correct (allowing for the phase rotation caused by the frequency drift), and therefore the UV mask should bring them back to the correct hue and boost them back to the correct saturation.
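
One way to represent such a mask - a sketch with toy arrays, not the author's actual tooling - is to treat each pixel's (U, V) pair as a complex number U + jV. The mask is then the complex ratio reference/recovered, which encodes both the rotation (hue error) and the gain (saturation loss) needed at that pixel:

```python
import numpy as np

rng = np.random.default_rng(0)
h, w = 4, 6                               # toy frame size

# Hypothetical reference frame in UV space, one complex number per pixel.
uv_reference = rng.normal(size=(h, w)) + 1j * rng.normal(size=(h, w))

# Simulate the recovered frame: rotated and under-saturated versions of the
# reference, with the error varying across the frame (the "rings").
angle = np.linspace(0, np.pi, h * w).reshape(h, w)
gain = 0.5 + 0.4 * np.cos(angle)
uv_recovered = uv_reference * gain * np.exp(-1j * angle)

# The mask: applying it to the recovered frame restores the reference.
eps = 1e-12                               # avoid division by zero
mask = uv_reference / (uv_recovered + eps)
uv_corrected = uv_recovered * mask
print(np.allclose(uv_corrected, uv_reference, atol=1e-6))
```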

The patterning almost certainly modulates over the PAL four-frame sequence, but should come back into phase on every fourth frame. Hence you will need to produce four masks, and create a modulating UV correction filter.
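
The modulating filter amounts to cycling through the four masks with the frame counter - frame n gets mask n mod 4. A minimal sketch, with hypothetical toy masks standing in for the real ones:

```python
import numpy as np

# Four toy masks (in practice, the four UV correction masks derived above).
masks = [np.full((2, 2), np.exp(1j * k * np.pi / 8)) for k in range(4)]

def correct_frame(uv_frame, frame_number, masks):
    """Apply the mask for this frame's position in the PAL 4-frame cycle."""
    return uv_frame * masks[frame_number % 4]

frame = np.ones((2, 2), dtype=complex)
corrected = [correct_frame(frame, n, masks) for n in range(8)]
# Frames 0 and 4 get the same mask, as do 1 and 5, and so on.
```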

This method should produce correct colour without the need for a deformation transform to recover the original geometry.

To produce a good mask, one needs to align the VT to the FR image as closely as possible, using the same deformation employed by the Restoration Team to combine 525-line colour VTs with 625-line FRs. Tedious, but you should only need to do this once, then use the same 4-mask sequence for the whole of the footage. (N.B. The patterning modulates in a predictable way, so you can use the first mask to generate the other three.)

If no colour reference material exists, you could create some by recolourising one frame by hand, based on photographs or just guesswork. Obviously it wouldn't be as accurate, but it might bear some fruit.

Another option is to use the recolourised image itself to generate the mask. I noticed that as I adjusted the sample-rate or frequency parameters, the concentric rings rippled outwards across the frame, tracing out good colour across the entire image.

So you could composite these together to get a frame of good colour. Or use them as a guide to recolourise a frame by hand. Then use that frame as your reference.
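
One plausible compositing rule - an assumption on my part, not the author's stated method - is to keep, for each pixel, the sample from whichever decode is most saturated there, since the rings of good colour are the saturated ones:

```python
import numpy as np

rng = np.random.default_rng(1)
h, w = 4, 5
true_uv = rng.normal(size=(h, w)) + 1j * rng.normal(size=(h, w))

# Three simulated decodes, each attenuating a different portion of the
# frame (standing in for the rings of under-saturated colour, which move
# as the decoder parameters are adjusted).
decodes = []
for shift in (0.0, 1.0, 2.0):
    weight = 0.1 + 0.9 * (np.abs(np.arange(w) - shift * 2) < 2).astype(float)
    decodes.append(true_uv * weight)

stack = np.stack(decodes)                   # (n_decodes, h, w)
best = np.argmax(np.abs(stack), axis=0)     # most saturated source per pixel
composite = np.take_along_axis(stack, best[None], axis=0)[0]
```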

Obviously the reference frame needs to have saturated colour across the whole image; otherwise you would have to composite two (or more) frames that between them do, and do the same with the recolourised frames you're comparing them to. Hopefully you could find a suitable single frame, though.

A suitable frame for "Dr. Who" might be captured from the opening titles, since there is lots of saturated colour. I had hoped these colours were the same on every episode, but apparently not. However, a hue adjustment by eye could possibly compensate for this?

Drawbacks of this method: I'm not sure how the vertical inter-mingling of scan lines will impact the demodulation. In order to demodulate the footage, one needs to rescale the HD back to standard definition, and I envisaged doing this in the analogue domain. If two scan lines have become merged and superimposed, then the colour information retrieved would be junk, since too many subcarrier wave-fronts would be interfering with each other. I suspect it's more likely, though, that you will get part of one scan line, then part of another, and so on; if this is the case then the chroma information would still be meaningful, and the masking technique should still work.

In regions of the frame which fall in between scan lines there will be no chroma information, and hence the colours of the reference frame will be superimposed across all frames of the footage. This is obviously not what you want, but I'm assuming these regions will be so small that you won't notice them.

The method probably works best with PAL-Simple decoding rather than PAL-Delay. PAL-Simple produces a much more obvious rainbow of colours within the concentric rings, as one would expect. PAL-Delay is more likely to throw up false colours, as it's probably averaging the wrong lines.

Alex Weidmann