The basic technique

A schematic representation of the ConvPhot algorithm.
  1. Two objects are clearly detected and separated in the high resolution detection image (blue, solid-thin line). The same two objects are blended in the low resolution measure image (red, solid-thick line) and have quite different colors.
  2. The two objects are isolated in the high resolution detection image and are individually smoothed to the PSF of the measure image, to obtain the ``model'' images.
  3. The intensity of each object is scaled to match the global profile of the measure image. The scaling factors are found with a global χ² minimization over all the object pixels.

The technique that we discuss here was first described and adopted by the Stony-Brook group to optimize the analysis of the J, H and K images of the Hubble Deep Field North. The method is described in Fernandez-Soto et al (1999), and the resulting catalog has been used in several scientific analyses of the HDF-N, both by the Stony-Brook group (e.g. Lanzetta et al, ...) and by other groups (e.g. Fontana et al 1999). More recently, a conceptually similar method has been developed within the GOODS survey (Papovich et al 2003) to deal with the similar problems existing in the GOODS data set.

Although in both these cases the method has been applied to the combination of HST and ground-based near-IR data, the procedure is much more general and can be applied to any combination of data sets, provided that the following assumptions are satisfied:

  1. A high resolution image (hereafter named ``detection image'') is available, that is used to detect objects and isolate their area;
  2. Colors are to be measured in a lower resolution image (hereafter named ``measure image'');
  3. The PSF in both images is accurately estimated, and a kernel has been obtained to smooth the ``detection image'' to the PSF of the ``measure image''.
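
Assumption 3 can be illustrated with the idealised case of Gaussian PSFs, where the matching kernel is itself a Gaussian with σ_k² = σ_meas² − σ_det². This toy sketch (not part of ConvPhot itself; the names and grid size are illustrative, and real kernels are derived from measured PSFs) simply checks that convolving the detection PSF with such a kernel reproduces the measure PSF:

```python
# Toy illustration of the PSF-matching kernel, assuming Gaussian PSFs.
import numpy as np
from scipy.signal import fftconvolve

def gaussian(sigma, size=31):
    """Normalized 2-D Gaussian on a size x size grid."""
    y, x = np.mgrid[-(size // 2):size // 2 + 1, -(size // 2):size // 2 + 1]
    g = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return g / g.sum()

sigma_det, sigma_meas = 1.5, 3.0          # illustrative PSF widths (pixels)
# Kernel that degrades the detection PSF to the measure PSF.
kernel = gaussian(np.sqrt(sigma_meas**2 - sigma_det**2))
smoothed = fftconvolve(gaussian(sigma_det), kernel, mode="same")
# `smoothed' now closely approximates gaussian(sigma_meas).
```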

Conceptually, the method is quite straightforward, and can be best understood by looking at Fig.XXX. In the upper panel, we plot the case of two objects that are clearly detected in the ``detection image'' but severely blended in the ``measure'' one. The procedure followed by ConvPhot consists of the following steps:

  1. Each object is extracted from the ``detection image'', making use of the parameters and area defined by SExtractor: in practice, we use the ``segmentation'' image produced by SExtractor, which is obtained from the isophotal area. Since the latter is typically an underestimate of the actual object size, a program named dilate can be used to expand the SExtractor area by an amount fDIL proportional to the object size.
  2. Each object is (individually) filtered to match the ``measure'' PSF and normalized to unit total flux: we refer to the resulting thumbnails as the ``model profiles'' of the objects.
  3. The intensity of each ``model'' object is then scaled in order to match the intensity of the objects in the ``measure'' image.
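
Steps 1 and 2 above can be sketched as follows. This is only an illustrative reimplementation, assuming a SExtractor-style segmentation map (pixels of object i carry the value i, background is 0) and a precomputed matching kernel; the function name and the simple isotropic dilation stand in for the actual dilate program:

```python
# Sketch of steps 1-2: build unit-flux, PSF-matched "model profiles"
# from the detection image and its segmentation map.
import numpy as np
from scipy.ndimage import binary_dilation
from scipy.signal import fftconvolve

def model_profiles(detection, segmentation, kernel, n_dilate=1):
    """Return one unit-flux, PSF-matched model image per object."""
    models = []
    for obj_id in np.unique(segmentation)[1:]:      # skip background (0)
        mask = segmentation == obj_id
        # Expand the (typically underestimated) isophotal area,
        # mimicking the role of the `dilate' program.
        mask = binary_dilation(mask, iterations=n_dilate)
        cutout = np.where(mask, detection, 0.0)
        # Smooth to the PSF of the measure image with the given kernel.
        model = fftconvolve(cutout, kernel, mode="same")
        models.append(model / model.sum())          # normalize to unit flux
    return models
```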

The free parameter for this scaling (which we name Fi) is computed with a χ² minimization over all the pixels of the images, and all objects are fitted simultaneously to take into account the effects of blending between nearby objects. Conceptually, this approach is somewhat similar to the line fitting of complex metal absorption systems in high-redshift quasars (Fontana & Ballester 1996). Although in practical cases the number of free parameters (which is equal to the number of identified objects) can be quite large, the resulting linear system is very sparse (Fernandez-Soto 1999) and the minimization can be performed quite efficiently using standard techniques.
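Since the model is linear in the fluxes, minimizing χ² = Σ_p (I_p − Σ_i Fi P_i,p)² / σ_p² reduces to the normal equations A F = b, with A_ij = Σ_p P_i,p P_j,p / σ_p² and b_i = Σ_p P_i,p I_p / σ_p²; A_ij vanishes whenever two profiles share no pixels, which is why the system is sparse. A minimal sketch of this solver (illustrative names, not the ConvPhot implementation) could read:

```python
# Sketch of step 3: solve for the scaling factors F_i by a global
# chi^2 minimization over all pixels, fitting all objects at once.
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import spsolve

def fit_fluxes(models, measure, sigma):
    """Least-squares fluxes F_i minimizing chi^2 over all pixels."""
    n = len(models)
    w = 1.0 / sigma**2                      # per-pixel weights
    A = np.zeros((n, n))
    b = np.zeros(n)
    for i, Pi in enumerate(models):
        b[i] = np.sum(w * Pi * measure)
        for j, Pj in enumerate(models):
            A[i, j] = np.sum(w * Pi * Pj)   # zero if i and j never overlap
    # The normal matrix is sparse for non-overlapping objects.
    return spsolve(csr_matrix(A), b)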

As can be appreciated from the example plotted in Fig.XXX, the main advantage of the method is that it relies on the accurate spatial and morphological information contained in the ``detection'' image to measure colors in relatively crowded fields, even when the colors of blended objects are markedly different.

Still, the method relies on a few assumptions that must be well understood and taken into account. In particular, the morphology and positions of the objects should not change significantly between the two bandpasses. Also, the depth and central bandpass of the ``detection'' image must ensure that most of the objects detected in the ``measure'' image are contained in the catalog. Finally, objects that are deeply blended in the measure image should be well separated in the detection image.

In practice, it is unlikely that all these conditions are satisfied in real cases. In the case of the match between ACS and ground-based Ks images, for instance, very red objects may be detected in the Ks band with no counterpart in the optical images, and some morphological change is expected due to the increasing contribution of the bulge in the near-IR bands. Also, when the pixel size of the ``measure'' image (i.e. ISAAC or VIMOS) is much larger than that of the ``input'' one (ACS z in our case), the intrinsic limits on the accuracy of image alignment may lead to systematic errors.

We will show below that these effects may be minimized or corrected with reasonable accuracy: for this purpose, we have included in the code several options and fine-tuning parameters to minimize the systematics involved.
