Abstract
For multidimensional band-limited functions, the Nyquist density is defined as the density corresponding to maximally packed spectral replications. Because of the shape of the support of the spectrum, however, sampling multidimensional functions at Nyquist densities can leave gaps among these replications. In this paper we show that, when such gaps exist, the image samples can periodically be deleted or decimated without information loss. The result is an overall lower sampling density. Recovery of the decimated samples from the remaining samples is a linear interpolation process. The interpolation kernels can generally be obtained in closed form. The interpolation noise level resulting from noisy data is related to the decimation geometry: the greater the clustering of the decimated samples, the higher the interpolation noise level.
© 1990 Optical Society of America
INTRODUCTION
For a band-limited function of any dimension, the Nyquist density is defined as the periodic sampling density that results in the most densely packed replications of the function’s spectrum.[1] Thus, for one-dimensional (1-D) functions band limited in the low-pass sense, the Nyquist density (or rate) is equal to the extent of the support of the function’s spectrum. The samples’ spectrum is maximally packed at this rate, and there are no gaps among the spectral replications. Because of the more complicated geometry of the spectrum’s support, this is not generally true for functions of higher dimension. The packing of the replications is determined not only by the area of the support but also by its shape. Consider, for example, a two-dimensional (2-D) circularly band-limited function. The Nyquist density is the density corresponding to a hexagonal replication pattern (Fig. 1).[2] Clearly, gaps exist among the replications. We will show that, if gaps exist among the replications, then the samples are linearly dependent. If the samples are linearly dependent, the function is said to be oversampled. Hence a multidimensional band-limited function can be oversampled even when it is sampled at its Nyquist density.
Marks[1] showed that, if gaps exist among the spectral replications, a subset of samples can be deleted and restored by a linear combination of those remaining. In this paper we extend this result to deletion of an infinite number of samples. Specifically, given a sample set whose spectrum contains gaps, we show that samples can periodically be deleted or decimated. The sampling density, equal to the original uniform density minus the deletion density, is therefore reduced. If the original sample set is obtained by sampling the function at its Nyquist density, decimation will reduce the density to below that of Nyquist. The deleted samples can be restored by a linear interpolation of those samples remaining. The required interpolation kernels can be evaluated in closed form. The noise sensitivity of this process is also investigated.
PRELIMINARY DESCRIPTIONS
Before we proceed to the contribution of this paper, a brief review of the multidimensional sampling theorem and related work is in order.
Multidimensional Sampling Theorem
The multidimensional sampling theorem, initially presented by Petersen and Middleton,[3] is a direct extension of the Shannon sampling theorem.[4] Let {x(t) | t = (t1, t2, …, tN)^T} be an N-dimensional signal (the superscript T denotes transposition). The Fourier transform of x(t) is
X(ν) = ∫ x(t) exp(−j2πν^T t) dt,
where ν = (ν1, ν2, …, νN)^T and the integration is over all of N-dimensional space. If X(ν) is zero outside a support region, x(t) is said to be band limited. When x(t) is sampled with a regular sampling geometry, the spectrum of the sampled signal replicates periodically. Let {vn | 1 ≤ n ≤ N} be the N sampling vectors, and let ks be an offset vector. These N + 1 vectors constitute the sampling geometry for sampling x(t). In particular, the sampling geometry is denoted by the matrix–vector pair [V, ks]. Let
V = [v1 v2 ⋯ vN]
be the sampling matrix whose columns are the N sampling vectors.
With no loss of generality, we let ks be the zero vector. The replicated spectrum of the samples follows straightforwardly from the Poisson sum formula:
X̂(ν) = D Σ_n X(ν − Un),
where U is the periodicity matrix, D is the sampling density,
D = 1/|V|,     (5)
and we have adopted the notation |U| to denote the determinant of the matrix U. The N column vectors of U, referred to as the replication vectors, are obtained from
U = (V^T)^(−1).     (6)
Clearly, we require that V be full rank or, equivalently, that the N sampling vectors be linearly independent. An example of a 2-D sampling geometry is shown in Fig. 2. A period of the samples’ spectrum, referred to as a cell, has an area equal to |U| = D.
If there is no aliasing in X̂(ν), that is, if the spectral replications do not overlap, then x(t) can be recovered exactly from its samples.
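These relationships are easy to check numerically. The sketch below (Python with numpy assumed; the matrices are illustrative choices, not reproduced from the paper’s figures) computes the periodicity matrix U = (V^T)^(−1) and the sampling density D = 1/|V| for a rectangular and a hexagonal geometry suited to a unit-diameter circular passband:

```python
# A numerical check of U = (V^T)^(-1) and D = 1/|V| (numpy assumed; matrices illustrative).
import numpy as np

def periodicity_matrix(V):
    """Replication (periodicity) matrix of the samples' spectrum."""
    return np.linalg.inv(V.T)

def sampling_density(V):
    """Samples per unit area: D = 1/|det V| = |det U|."""
    return 1.0 / abs(np.linalg.det(V))

# Rectangular sampling of a unit-diameter circular passband: replications on a unit grid.
V_rect = np.eye(2)

# Hexagonal sampling of the same passband: specify the replication vectors
# (centers of maximally packed unit-diameter circles) and recover V from U.
U_hex = np.array([[1.0, 0.5],
                  [0.0, np.sqrt(3) / 2]])
V_hex = np.linalg.inv(U_hex).T        # since U = (V^T)^(-1)

print(sampling_density(V_rect))       # 1.0
print(sampling_density(V_hex))        # 0.8660... = sqrt(3)/2, the Nyquist density of Fig. 1
```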
For all 1-D low-pass band-limited functions, gaps cease to exist when the function is sampled at the Nyquist rate. This, however, is not generally true in higher dimensions. In our running example, because of the circular shape of the support, gaps still exist when the function is sampled at the Nyquist density. In the following subsection we discuss the relationship between sample interdependency and gaps in the samples’ spectrum.
Sample Interdependency
If gaps exist among the spectral replications, then the samples {x(Vn)} are linearly dependent. This relationship can easily be demonstrated for 1-D band-limited signals. Let x(t) be a 1-D signal whose spectrum, X(ν), is identically zero only for |ν| ≥ λ. Sampling x(t) above the Nyquist rate corresponds to a sampling frequency 2S > 2λ. The samples’ spectrum,
X̂(ν) = 2S Σ_n X(ν − 2Sn),
then contains gaps of width 2S − 2λ between adjacent replications, and the samples are linearly dependent. As shown in Fig. 1, gaps can exist among the spectral replications in higher dimensions even when the signal is sampled at the Nyquist density; hence the samples are linearly dependent. A signal is oversampled if its samples are linearly dependent. Thus multidimensional band-limited signals may be oversampled even if they are sampled at Nyquist densities. Exceptions are cases in which the support is a cell (i.e., the spectral replications tile the frequency space with no gaps).
Sample Deletion and Restoration
Given a sample set that is linearly dependent, one can reduce dependency by deleting some samples. Indeed, Marks showed that if a signal is oversampled, a finite subset of samples can be deleted and restored as a linear combination of those remaining.
Let ℳ denote the set of indices of the M deleted samples located at the points {Vm|m ∈ ℳ}. We can thus rewrite Eq. (10) as follows:
By evaluating Eq. (11) at the M locations of the deleted samples, we obtain a system of M equations. This system of equations can be put into matrix form,
where xℳ is an M-dimensional vector whose entries are the deleted samples, xR is a vector whose entries are the remaining samples, and Hℳ is a square matrix of dimension M. One can show that Hℳ is always invertible.[6] The deleted samples can thus be restored by a linear combination of the remaining samples.
The quality of restoration of the deleted samples depends on the condition of Hℳ, which, in turn, depends on three factors: (1) the distance among the deleted samples, (2) the uniformity of the deletion geometry,[7],[8] and (3) the total number of deleted samples. Ching[9] performed an extensive empirical examination of the effect of these three factors on the quality of sample restoration.
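As a concrete illustration of this finite restoration, the following 1-D sketch (an analog of the idea, not the paper’s multidimensional equations; Python with numpy assumed) oversamples a band-limited signal, deletes three samples, and recovers them by solving a small linear system assembled from a window of the remaining samples:

```python
# 1-D analog of finite sample deletion and restoration (numpy assumed).
import numpy as np

W = 0.25                 # one-sided bandwidth
T = 1.0                  # sampling interval: rate 1/T = 1 > 2W, so the signal is oversampled
r = 2 * W * T            # r < 1 quantifies the oversampling

n = np.arange(-2000, 2001)        # finite window of sample indices (truncation of the full set)
x = np.sinc(2 * W * n * T)        # test signal x(t) = sinc(2Wt), band limited to W

deleted = np.array([-1, 0, 1])    # indices of the deleted samples
mask = ~np.isin(n, deleted)       # the remaining samples

# For band-limited x and r < 1:  x(mT) = r * sum_n sinc(r*(m - n)) * x(nT).
# Splitting the sum into deleted and remaining indices gives (I - A) x_del = b.
A = r * np.sinc(r * (deleted[:, None] - deleted[None, :]))
b = r * np.sinc(r * (deleted[:, None] - n[mask][None, :])) @ x[mask]
restored = np.linalg.solve(np.eye(len(deleted)) - A, b)

print(restored)                       # close to the true deleted samples
print(np.sinc(2 * W * deleted * T))   # true values, ~ [0.637, 1.000, 0.637]
```

In this sketch the conditioning of the system, and hence the restoration error, degrades as the deleted samples become more clustered or as the oversampling factor r approaches 1, in line with the three factors listed above.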
SAMPLING DECIMATION (PERIODIC DELETION)
In this section we present a second deletion scheme wherein samples are periodically deleted or decimated. Since the samples are deleted periodically, the overall sampling density is reduced. We will show that, as long as gaps exist among the spectral replications, samples can thus be decimated.
Decimation Lattice
Using the notation of sampling geometry, we denote the decimation geometry by [Vd, kd]. The vector kd is an integer vector that determines the offset of the decimation lattice from the origin. In particular, the decimation matrix, Vd, is an integer-weighted combination of the sampling vectors. The decimation matrix can therefore be written as
Vd = [vd1 vd2 ⋯ vdN] = VM,
where the vdi’s are the N decimation vectors and M is an N × N nonsingular matrix of integers. The locations of the deleted samples are the lattice points {Vdn + Vkd} = {V(Mn + kd)}. With reference to Eq. (5), the decimation density is equal to
Dd = 1/|Vd| = D/|M|,
where D is the sampling density. The decimation ratio, rd, defined as the fraction of the samples deleted, is the ratio of Dd to D:
rd = Dd/D = 1/|M|.
As an example, the lattice defined in Fig. 2 is redrawn in Fig. 6. The open circles form the deletion lattice. A parallelepiped constructed by the N decimation vectors forms a decimation period. Let L be the number of samples enclosed within a decimation period. A decimation period, corresponding to our running example, is outlined as a dashed-line parallelogram in Fig. 6(a), in which the period contains L = 4 points. Obviously, L = |M| = 1/rd.
Let Lj be the length of the jth side of the parallelepiped-shaped decimation period. Then
L = L1L2 ⋯ LN.
To produce a structure analogous to the polyphase structure in multirate digital signal processing,[10] we divide the sample set {x(Vn)} into L subgroups,
xi(Vdn) = x[V(Mn + ki)],   i = 0, 1, …, L − 1,
where ki is an integer vector (which is also a mixed-radix representation of the integer i). The radix of the jth element of ki is Lj. The original sample set is the union of the L sample subgroups, {x(Vn)} = ∪i {xi(Vdn)}.
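The subgroups above are exactly the L cosets of the sublattice generated by M within the integer index lattice. A short sketch of this partitioning (Python with numpy assumed; M = 2I is an assumed illustration, and no attempt is made to reproduce the paper’s mixed-radix ordering of the ki’s):

```python
import numpy as np

def coset_representatives(M):
    """Integer vectors k inside the half-open parallelepiped {M f : f in [0, 1)^2}."""
    corners = M @ np.array([[0, 1, 0, 1],
                            [0, 0, 1, 1]])
    lo, hi = corners.min(axis=1), corners.max(axis=1)
    reps = []
    for n1 in range(int(lo[0]), int(hi[0]) + 1):
        for n2 in range(int(lo[1]), int(hi[1]) + 1):
            f = np.linalg.solve(M, np.array([n1, n2], float))
            if np.all(f >= -1e-9) and np.all(f < 1 - 1e-9):
                reps.append((n1, n2))
    return reps                            # exactly |det M| of them

def subgroup_of(n, M):
    """Coset representative k of a sample index n, i.e., which subgroup x(Vn) joins."""
    f = np.linalg.solve(M, np.asarray(n, float))
    return tuple(int(v) for v in np.asarray(n) - (M @ np.floor(f)).astype(int))

M = 2 * np.eye(2, dtype=int)               # assumed decimation matrix, L = |M| = 4
print(coset_representatives(M))            # [(0, 0), (0, 1), (1, 0), (1, 1)]
print(subgroup_of([5, -2], M))             # (1, 0): x(V[5, -2]) belongs to subgroup k = (1, 0)
```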
The periodicity matrix corresponding to Vd is
Ud = (Vd^T)^(−1) = U(M^T)^(−1).
For convenience, we will refer to the cells corresponding to Ud as subcells. Naturally, the area of a subcell, denoted |Ud|, is equal to the decimation density Dd.
First-Order Decimation and Restoration
If a subcell is totally subsumed in a gap of the samples’ spectrum, then one of the L subgroups can be deleted by using the decimation geometry [Vd, kα], where α is the index of the deleted subgroup. The deleted samples can be restored from the samples that remain. We can prove this as follows.
Let {xα(Vdn)} be the deleted subgroup:
xα(Vdn) = x[V(Mn + kα)].
The spectrum of this subgroup is denoted X̂α(ν).
In summary, if there exists a subcell subsumed in a gap, a subgroup of samples can be decimated, and the decimated samples can be restored from those remaining.
Examples
Here we give three examples of first-order decimation for 2-D cases of circular spectral support. A monochromatic coherent or incoherent image of an object of finite extent obtained through an optical imaging system with a circular pupil, for example, has such spectral support.[11] In our example, the circle’s diameter is set to 1.
Rectangular Sampling
If the sampling geometry is limited to a rectangular lattice, the minimum sampling density corresponds to a sampling matrix of
V = I,
the 2 × 2 identity matrix, so that D = 1. The samples’ spectrum is a rectangular array of circles packed as shown in Fig. 7.
Example 1
By setting
we obtain a decimation geometry that produces a decimation ratio of 1/9. The kα = 0 case is shown in Fig. 8, in which the dashed line outlines a decimation period. From Eq. (6), the corresponding periodicity matrix is obtained. As shown in Fig. 9, this periodicity matrix produces a subcell in the shape of a distorted hexagon. This distorted hexagonal subcell is totally subsumed in the gaps among the spectral replications when it is centered at (1/2, 1/2). The interpolation kernel to restore the decimated samples is derived, by using Eq. (30), as
where hinc(t1, t2) is the inverse transform of the unit-amplitude hexagon shown in Fig. 10.
The phase term in Eq. (31) shifts the hexagonal hinc function into the gap region, as shown in Fig. 9. We found the work of Rajan[12] useful in this computation.
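The kernel is essentially the inverse Fourier transform of the unit-amplitude hexagonal region placed in the gap (up to a normalizing constant). When a closed form is inconvenient, that inverse transform can also be approximated by direct numerical integration, as in the sketch below (Python with numpy and matplotlib assumed; the regular hexagon centered at (1/2, 1/2) is only a stand-in for the distorted hexagon of Fig. 10, whose exact vertices are not reproduced here):

```python
import numpy as np
from matplotlib.path import Path

# Assumed stand-in region: a regular hexagon of circumradius 0.25 centered at (1/2, 1/2).
ang = np.pi / 3 * np.arange(6)
hexagon = Path(0.5 + 0.25 * np.column_stack([np.cos(ang), np.sin(ang)]))

# Sample the frequency plane finely and keep the points inside the region.
nu = np.linspace(0.0, 1.0, 401)
dnu = nu[1] - nu[0]
N1, N2 = np.meshgrid(nu, nu)
inside = hexagon.contains_points(np.column_stack([N1.ravel(), N2.ravel()])).reshape(N1.shape)

def kernel(t1, t2):
    """Riemann-sum approximation of h(t) = integral over the region of exp(j*2*pi*(nu . t))."""
    phase = np.exp(2j * np.pi * (N1[inside] * t1 + N2[inside] * t2))
    return phase.sum() * dnu * dnu

print(kernel(0.0, 0.0).real)        # ~ area of the hexagon (3*sqrt(3)/2 * 0.25**2 ~ 0.162)
print(abs(kernel(3.0, 0.0)))        # magnitude well below the value at the origin
```

Because the region is centered at (1/2, 1/2), the numerically computed kernel automatically carries the phase factor exp[jπ(t1 + t2)] that shifts the hinc-type function into the gap, which is the role of the phase term discussed above.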
Example 2
A 1:8 deletion ratio for the same set of samples can be obtained by choosing
The kα = 0 case is shown in Fig. 11. The subcell corresponding to the periodicity matrix is as shown in Fig. 12. The diamond-shaped subcell is shown to be subsumed in the gap when it is centered at (1/2, 1/2), and we compute the interpolation function accordingly.
Effects of Truncating
To illustrate the effects of truncation for both examples 1 and 2, we use the signal x(t1, t2) = jinc(t1, t2):
where J1(x) is the Bessel function of the first kind of order 1.[13] The jinc function is also known as the sombrero function.[14] The spectrum of this signal is the circle 4π rect[(u1² + u2²)^(1/2)/2]. The value of the sample at the origin, x(0) = 1, was estimated by using windows of various extents. Specifically, the estimations of the decimated samples are
where M is the number of layers used in the estimation. The window extents for the continuation of example 1, for a total of M = 3 layers, are shown in Fig. 13, where M = 1 corresponds to 4 decimation periods, M = 2 corresponds to 16 decimation periods, and M = 3 corresponds to 36 decimation periods.
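The test signal used in this truncation study can be generated as follows (Python with numpy and scipy assumed; the normalization shown is Gaskill’s sombrero convention, chosen as an assumption so that jinc(0, 0) = 1 as stated above, with spectral support on the unit-diameter circle):

```python
import numpy as np
from scipy.special import j1

def jinc(t1, t2):
    """jinc(t1, t2) = 2 J1(pi*rho) / (pi*rho), rho = sqrt(t1^2 + t2^2); jinc(0, 0) = 1."""
    rho = np.hypot(np.asarray(t1, float), np.asarray(t2, float))
    with np.errstate(invalid="ignore", divide="ignore"):
        val = 2.0 * j1(np.pi * rho) / (np.pi * rho)
    return np.where(rho == 0.0, 1.0, val)

# Unit-spaced rectangular samples of the test signal (the sampling grid of examples 1 and 2).
t = np.arange(-8, 9, 1.0)
T1, T2 = np.meshgrid(t, t)
samples = jinc(T1, T2)
print(float(samples[8, 8]))        # 1.0, the sample at the origin that is deleted and re-estimated
```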
Sampling Below the Nyquist Density: First-Order Decimation of Hexagonal Sampling
We have seen that a circularly band-limited function is sampled at the Nyquist density by using a hexagonal sampling geometry. For a unit-diameter circle, the sampling matrix corresponding to this hexagonal sampling geometry gives a sampling density of D = 0.866. The repetition pattern of the samples’ spectrum is shown in Fig. 1. If samples can be decimated from this sample set, the function will be sampled below the Nyquist density.
Example 3
Given the hexagonal sampling geometry, we can achieve a 1:64 deletion ratio by adopting a decimation matrix of Vd = 8V (Fig. 16). Thus we obtain the periodicity matrix
Ud = (Vd^T)^(−1) = U/8.
Within a parallelogram cell, we can fit 64 similarly shaped subcells, one of which is subsumed in a gap (Fig. 17). The corresponding interpolation kernel is
The overall sampling density in this example is reduced below the Nyquist density.
Starting from the rectangular sampling geometry, examples 1 and 2 reduce the density to 0.8889 and 0.8750, respectively. The hexagonal sampling geometry yields the Nyquist density of 0.866, and example 3 reduces the density to 0.8525, which is 0.0135 below that of Nyquist.
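A quick arithmetic check of the densities quoted here (plain Python):

```python
D_rect = 1.0                     # minimum rectangular sampling density (examples 1 and 2)
D_hex = 3 ** 0.5 / 2             # hexagonal (Nyquist) density, 0.8660...

print(D_rect * (1 - 1 / 9))      # example 1: 0.8889
print(D_rect * (1 - 1 / 8))      # example 2: 0.8750
print(D_hex * (1 - 1 / 64))      # example 3: 0.8525
print(D_hex / 64)                # 0.0135, the amount by which example 3 falls below Nyquist
```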
Second-Order Decimation and Restoration
We now extend the result of first-order decimation to second order. If a subcell is small enough that two identically oriented nonoverlapping subcells are totally enclosed within a gap of a single spectral period, then two sample subgroups can be deleted and restored from the remaining L − 2 sample subgroups. The sampling density is thereby reduced by the fraction 2/L. We present this extension by showing that, in example 3, one more subcell can be accommodated within another gap. Therefore an additional sample subgroup can be deleted in example 3 with no loss of information.
Example 4
Continuing with example 3, we can easily position a second subcell located at (1, 1/2), as shown in Fig. 18. We denote this cell
Both Eqs. (37), however, are not defined over the same domain. Nevertheless, the kernels can be solved with the addition of some extra steps. By recognizing that the two cells
Unfortunately, d in this example is not an integer-weighted sum of the two periodicity vectors. We found that
In order to apply the results above, we need some additional steps. First, we divide the cells
Recall the procedure starting at Eq. (43). By substituting each vector dk into Eq. (41), we obtain four matrix equations, each corresponding to a partition
Now we evaluate Eq. (47) at each partition. Since all the partitions are similar parallelograms that differ only in scale and location, Eq. (47) can be evaluated over one parallelogram region, say, a subcell. We then evaluate hk(t) for every k by applying the corresponding scaling and shifting operations. Specifically, we first evaluate Eq. (47) over a subcell with its lower left-hand corner located at the origin. The solution of this integration is denoted as a general solution k(t1, t2), which is found to be
We consider the scaling operation first. The scaling between each partition and the reference subcell is represented by a scaling matrix, denoted Sk. Since each partition is in the same orientation as the subcell, Sk is a diagonal matrix,
Sk = diag(Sk1, Sk2),
where Sk1 and Sk2 are the scaling factors in the directions of ud1 and ud2, respectively. By incorporating the shifting vector, we obtain the relationship
where rk is the displacement vector of the partition relative to the reference subcell. By applying Rajan’s result,[12] we obtain
where k(t) is the function expressed in Eq. (48). The values of Sk and rk for each partition are listed in Table 1. Finally, we substitute the corresponding values listed in the table into Eq. (49); the result is then substituted into Eq. (46). We obtain the following interpolation kernels for restoring the two deleted sample subgroups:
In this example we obtain a sampling density of 0.8389, the lowest so far in all the examples.
A sufficient condition for the existence of the solution to restore the deleted samples is that all the matrices Hdk in Eq. (45) are nonsingular. One can show, for example, that a solution exists for the case of k1 = (0, 0)^T and k2 = (1, 1)^T. If k2 is changed to (1, 0)^T, the condition is violated, and therefore no solution exists for this decimation geometry. In contrast to first-order decimation, second-order decimation does not guarantee that solutions exist for all possible decimation geometries.
Multiple-Order Decimation and Restoration
The interpolation kernels in Eq. (46) have a general form of
By comparing this with Eq. (46), we obtain
In this form, the kernels are determined by the functions Ci,j(d), which are obtained by inverting the matrix Hd in Eq. (41). The values of the sk’s and rk’s can be determined immediately after these factors are known.
With this approach, we consider extending the decimation to higher orders. By the same reasoning, if a subcell is small enough that q > 2 identically oriented, nonoverlapping subcells are totally enclosed within gaps of a single spectral period, then q of the L sample subgroups can be deleted and restored from the L − q remaining sample subgroups. Specifically, let {udi | i = 1, …, N} be the N periodicity vectors, and let the q subcells, indexed j = 1, …, q, be those enclosed within the gap regions. If we choose one of them to be the reference subcell, the locations of the other q − 1 subcells are identified by the following displacement notation:
where ⊕ denotes an offset by the indicated vector. After the location of every subcell is identified, the next step is consideration of the partitioning of the subcells. In general, each subcell introduces an extra partitioning boundary in every udi direction. For q subcells in an N-dimensional space, then, the total number of partitions is q^N. After the partitioning, we have, for every partition in the reference subcell, a corresponding partition in every jth subcell such that
where dj,k is the displacement vector of the kth partition in the jth subcell relative to the kth partition in the reference subcell, and pj,k is an integer vector. Also, the dimensions of each partition are obtained through a scaling of the reference subcell. The scaling factor in every udi direction of the kth partition is denoted sk,i, i = 1, …, N. The displacement vectors, dj,k, and the scaling factors, sk, can be obtained geometrically.
Let ℳ be the set of indices of the q decimated sample subgroups. In a form similar to Eq. (41), we obtain, in every partition,
where the matrix Hd,k has the entries
Also, we let
The interpolation kernels follow:
where ai,j,k is obtained from the inversion of the matrix Hd,k, and the vector rk is the displacement vector of the kth partition relative to the reference subcell. The cardinal series is then
Again, a sufficient condition for the existence of a solution is that all q^N matrices, Hd,k, be nonsingular. As is evident from the derivation, the complexity increases with the order of decimation and with the dimension of the function.
EFFECTS OF DATA NOISE
We now consider the effects of data noise on the recovery of the decimated samples. The data noise is modeled as discrete white noise.[1] The discrete-white-noise model was used previously to represent the effect of quantization error[15] and the effect of sampling position jitter.[16] We can show that the interpolation noise level (INL) is the data noise level multiplied by an amplification ratio, which is referred to as the noise-level amplification ratio (NLAR).
Interpolation Noise Level of First-Order Decimation
Let {ξ(Vn) | n ∉ ℳ}, the data noise, be a zero-mean stationary discrete white process,
where E denotes expectation, the data noise level is the variance of the process, and δ[·] is the Kronecker delta (note the square brackets). Let the interpolation noise be η(t). With reference to Eq. (29), the interpolation noise at the decimation lattice points is
Again, we let the offset vector kα = 0. Since the data noise is stationary and the deletion is periodic, the restoration noise statistics will be the same at every point. Thus it suffices to examine the restoration noise at the origin only. Squaring both sides of Eq. (55) and taking the expectation gives the expression for the INL:
The sum inside the large brackets can be evaluated by Parseval’s theorem. For every r from 0 to L − 1,
where Fd(ν) is the Fourier spectrum of the kernel fd(t). With reference to Eq. (30),
By substituting this value into Eq. (57), and because the area of the subcell is equal to Dd, we obtain
and the interpolation noise level in Eq. (56) follows.
The ratio of the INL to the data noise level, defined as the NLAR, is equal to L − 1. Clearly, this ratio is the same regardless of the sampling geometry and the decimation geometry. The NLAR’s for examples 1–3 of the first-order sample decimation are therefore 8, 7, and 63, respectively.
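In other words, for first-order decimation the interpolation noise level is simply (L − 1) times the data noise level, independent of the particular geometry. A trivial check for the three examples (plain Python; the data noise level is an assumed illustrative value):

```python
import math

data_noise_level = 1e-4                          # assumed data (quantization/jitter) noise level
for name, L in (("example 1", 9), ("example 2", 8), ("example 3", 64)):
    nlar = L - 1                                 # NLAR = L - 1 for first-order decimation
    print(name, "NLAR =", nlar, "INL =", nlar * data_noise_level,
          "(%.1f dB amplification)" % (10 * math.log10(nlar)))
```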
Interpolation Noise Level of Multiple-Order Decimation
For multiple-order decimation, the INL’s for different decimation subgroups are different. Briefly stated, the NLAR for the ith subgroup is
For first-order decimation, the coefficients {ai,j,k} and {sk,n} are equal to 1, and therefore Eq. (59) reduces to Eq. (58). For the multiple-order case, the first set of coefficients generally differs from subgroup to subgroup, and so do the NLAR’s. For our example 4, the NLAR’s for the two subgroups, k1 = (0, 0)^T and k2 = (1, 1)^T, are identical and equal to 36.3012.
Normally, the magnitudes of the ai,j,k’s are higher, and hence the NLAR’s are higher, in cases in which the deleted subgroups are more densely clustered.[7],[8] Such an effect of the NLAR versus the varying degrees of clustering was demonstrated by Yen[8] and by Marks and Radbel.[17]–[19] We therefore need an algorithm to search for the combination of decimated subgroups for which the NLAR’s are the lowest and most uniform of all possible combinations. The NLAR will generally be lower for higher-order decimation because the number of terms in Eq. (59) decreases with the order of decimation. Such an effect can be seen from the different NLAR’s in examples 3 and 4.
CONCLUSION
In this paper we have demonstrated a technique for sampling certain multidimensional band-limited functions directly below the Nyquist density. The technique is implemented by sample decimation and can be applied to band-limited functions of arbitrary dimension and arbitrary support using arbitrary nonaliasing sampling geometries. The decimated samples are restored by discrete interpolation, and the interpolation kernels can in many cases be obtained in closed form. The INL caused by interpolation from noisy data was also analyzed. The magnitude of the INL decreases with the order of decimation and with the uniformity of the decimation lattice.
REFERENCES
1. R. J. Marks II, “Multidimensional-signal sample dependency at Nyquist densities,” J. Opt. Soc. Am. A 3, 268–273 (1986).
2. D. E. Dudgeon and R. M. Mersereau, Multidimensional Digital Signal Processing (Prentice-Hall, Englewood Cliffs, N.J., 1984).
3. D. P. Petersen and D. Middleton, “Sampling and reconstruction of wave-number limited functions in N-dimensional Euclidean spaces,” Inf. Control 5, 279–323 (1962).
4. C. E. Shannon, “Communication in the presence of noise,” Proc. IRE 37, 10–21 (1949).
5. E. Dubois, “The sampling and reconstruction of time-varying imagery with application in video systems,” Proc. IEEE 73, 502–522 (1985).
6. K. F. Cheung, “Image sampling density below that of Nyquist,” Ph.D. dissertation (Department of Electrical Engineering, University of Washington, Seattle, Wash., 1988).
7. D. S. Chen and J. P. Allebach, “Analysis of error in reconstruction of two-dimensional signals from irregularly spaced samples,” IEEE Trans. Acoust. Speech Signal Process. ASSP-35, 173–180 (1987).
8. J. L. Yen, “On the nonuniform sampling of bandwidth limited signals,” IRE Trans. Circuit Theory CT-3, 251–257 (1956).
9. H. K. Ching, “Truncation effects in the estimation of two-dimensional continuous bandlimited signals,” master’s thesis (Department of Electrical Engineering, University of Washington, Seattle, Wash., 1985).
10. R. E. Crochiere and L. R. Rabiner, Multirate Digital Signal Processing (Prentice-Hall, Englewood Cliffs, N.J., 1983).
11. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, 1968).
12. P. K. Rajan, “A study on the properties of multidimensional Fourier transforms,” IEEE Trans. Circuits Syst. CAS-31, 748–750 (1984).
13. R. N. Bracewell, The Fourier Transform and Its Applications (McGraw-Hill, New York, 1965).
14. J. D. Gaskill, Linear Systems, Fourier Transforms, and Optics (Wiley, New York, 1978).
15. A. V. Oppenheim and R. W. Schafer, Digital Signal Processing (Prentice-Hall, Englewood Cliffs, N.J., 1975).
16. A. Papoulis, “Error analysis in sampling theory,” Proc. IEEE 54, 947–955 (1966).
17. R. J. Marks II and D. Radbel, “Error of linear estimation of lost samples in an oversampled bandlimited signal,” IEEE Trans. Acoust. Speech Signal Process. ASSP-32, 648–654 (1984).
18. D. Radbel, “Noise and truncation effects in the estimation of sampled bandlimited signals,” master’s thesis (Department of Electrical Engineering, University of Washington, Seattle, Wash., 1983).
19. D. Radbel and R. J. Marks II, “An FIR estimation filter based on the sampling theorem,” IEEE Trans. Acoust. Speech Signal Process. ASSP-33, 455–460 (1985).