Introduction
During amplifier design and optimization, as in any search process, it is necessary to choose an initial starting location. In cases where the optimization search locations are known in advance (dubbed "simultaneous search" by [1]), the choice of starting point has no impact on the final result. Examples of such search approaches include exhaustive searches and full amplifier load-pulls. However, significant time savings can be achieved by restricting the search to a specific region of the search space, provided the chosen region contains the desired optimum.
For techniques that select additional search locations based on previous measurements (dubbed "sequential search" by [1]), the choice of start location can yield significant speed improvements. Choosing a starting location near the final result typically allows the search to converge to the optimum faster than from a more distant location.
In order to choose a good starting location, it is helpful to have at least a sparse or estimated understanding of the underlying search space. The common load-pull technique of performing consecutive load-pulls with increasing density utilizes the sparse understanding generated by previous iterations to select a region of focus for the next iteration [2]. For real-time circuit optimization techniques, the conjugate of the device under test's (DUT) output reflection coefficient is considered a good starting location [3]. Absent a priori knowledge of the DUT, all possible locations are equally likely to be the optimum, so beginning at the center of the Smith Chart minimizes the average expected distance to the device optimum. For search spaces where convexity, and by extension convergence, is not guaranteed, "Sarvin's method" of [4] has demonstrated the ability to select a good starting location by testing a few locations spread throughout the load tuner search space and choosing the best-performing one, on the premise that this point will be close enough to the optimum for the search to converge.
This work presents an alternative method for estimating the underlying amplifier performance across the complete search space by employing deep learning image completion with a generative neural network. The objective is to extrapolate a complete evaluation using only measured samples from a small region, as illustrated in Fig. 1. The optimum of this estimated search space can be used in lieu of some other optimization technique, or refined by applying a more traditional search technique starting from this optimum, in contrast to the method of [4], which only provides feedback on directly measured points.
Extrapolation Method
To achieve the partial load-pull extrapolation results of this paper, a gradient-based image completion technique is applied to a generative adversarial network (GAN) trained on known load-pull contours. GANs consist of two adversarial neural networks trained with opposing goals [5]. The first network (discriminator) is trained to classify instances of data as either belonging to some defined dataset or not belonging to the dataset. The second network (generator) is trained to utilize random input values to produce instances of data that are misclassified by the discriminator. These networks are trained in an alternating fashion to allow each to adapt to weaknesses in the other's behavior. Once a GAN is successfully trained on a specific dataset, the generator can be used to synthesize elements that belong to the dataset, despite not being in the explicit subset used for training. Given a functional GAN, image completion can be performed by searching for an input to the generator that produces a result that closely agrees with the known portion of the image undergoing completion. As the GAN is pre-trained prior to image completion, this approach provides a good basis for quickly evaluating an unknown device.
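For illustration, the alternating training scheme described above can be sketched as follows (a minimal TensorFlow 2 example; the networks, optimizers, and latent dimension are placeholders rather than the configuration used in this work):

```python
# Minimal sketch of one alternating GAN training step (TensorFlow 2).
# 'gen', 'disc', and the optimizers are placeholder tf.keras objects.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)

def gan_train_step(gen, disc, gen_opt, disc_opt, real, latent_dim=100):
    z = tf.random.normal([tf.shape(real)[0], latent_dim])

    # Discriminator update: classify real data as 1 and generated data as 0.
    with tf.GradientTape() as tape:
        fake = gen(z, training=True)
        real_logits = disc(real, training=True)
        fake_logits = disc(fake, training=True)
        d_loss = (bce(tf.ones_like(real_logits), real_logits)
                  + bce(tf.zeros_like(fake_logits), fake_logits))
    disc_opt.apply_gradients(zip(tape.gradient(d_loss, disc.trainable_variables),
                                 disc.trainable_variables))

    # Generator update: adjust the generator so the refreshed discriminator
    # labels its output as real.
    with tf.GradientTape() as tape:
        logits = disc(gen(z, training=True), training=True)
        g_loss = bce(tf.ones_like(logits), logits)
    gen_opt.apply_gradients(zip(tape.gradient(g_loss, gen.trainable_variables),
                                gen.trainable_variables))
    return d_loss, g_loss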
A. Implemented GAN Architecture
This work utilizes a variation of the traditional GAN architecture known as the Wasserstein GAN (WGAN) [6]. This architecture uses the Wasserstein distance (also called the Earth Mover's distance, representing the amount of work required to transfer the mass of one probability distribution to another) as the loss metric when training the generator and critic (the WGAN discriminator) networks. This distance metric avoids the metric saturation that can occur when one of the networks performs too well (such as if the critic is never wrong). Avoiding this saturation greatly improves the stability and robustness of the training process by ensuring that the gradients used to update the generator do not vanish even if the critic becomes highly accurate. The WGAN system used in this work is trained according to the approach presented by [7], utilizing gradient penalties (WGAN-GP) as opposed to the weight-clipping method of the initial WGAN architecture [6]. The implementation (in TensorFlow 2.0), including network layer topology, is adapted from [8].
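A minimal sketch of the WGAN-GP loss terms, following [7], is shown below; the penalty weight of 10 is the value suggested in [7], and the network handles are placeholders:

```python
# Sketch of the WGAN-GP losses of [7]: Wasserstein critic loss plus a penalty
# that keeps the critic's gradient norm near 1 along real/fake interpolates.
import tensorflow as tf

GP_WEIGHT = 10.0  # penalty coefficient suggested in [7]

def gradient_penalty(critic, real, fake):
    # Evaluate the critic gradient at random interpolates between real and
    # generated images and penalize deviation of its norm from 1.
    eps = tf.random.uniform([tf.shape(real)[0], 1, 1, 1], 0.0, 1.0)
    interp = eps * real + (1.0 - eps) * fake
    with tf.GradientTape() as tape:
        tape.watch(interp)
        scores = critic(interp, training=True)
    grads = tape.gradient(scores, interp)
    norms = tf.sqrt(tf.reduce_sum(tf.square(grads), axis=[1, 2, 3]) + 1e-12)
    return tf.reduce_mean(tf.square(norms - 1.0))

def critic_loss(critic, real, fake):
    # The critic maximizes (real score - fake score); written as a minimization.
    return (tf.reduce_mean(critic(fake, training=True))
            - tf.reduce_mean(critic(real, training=True))
            + GP_WEIGHT * gradient_penalty(critic, real, fake))

def generator_loss(critic, fake):
    # The generator maximizes the critic's score on generated images.
    return -tf.reduce_mean(critic(fake, training=True))
```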
B. Image Completion Process
The trained WGAN system is then extended for image completion using the techniques of [9]. Although [9] originally utilizes a Deep Convolutional GAN (DCGAN) architecture, the image completion process readily translates to other GAN architectures, such as WGAN.
The goal of image completion is to find an input $\hat{z}$ to the generator $G$ whose output agrees with the known portion of the image being completed. Two loss metrics are used to determine the quality of a candidate input $z$. Contextual loss measures how closely the generated image matches the known pixels of the partial image $y$, selected by the binary mask $M$:

$$\mathcal{L}_{\mathrm{contextual}}(z) = \lVert M \odot G(z) - M \odot y \rVert_1 \qquad (1)$$

Perceptual loss describes how closely the generated image resembles members of the trained dataset according to the critic network $C$:

$$\mathcal{L}_{\mathrm{perceptual}}(z) = -C(G(z)) \qquad (2)$$

These two losses are combined with a hyperparameter $\lambda$ into the total completion loss

$$\mathcal{L}(z) = \mathcal{L}_{\mathrm{contextual}}(z) + \lambda\,\mathcal{L}_{\mathrm{perceptual}}(z) \qquad (3)$$

which is minimized over $z$ by gradient descent to obtain $\hat{z}$.
Note that the method of [9] combines the generated and partial images, utilizing the mask to remove part of the generated image and replace it with the original partial image. This work forgoes this step, using only the generated image as the final output.
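The completion search can then be sketched as follows, implementing the gradient descent on $z$ with the losses of (1)-(3); the hyperparameter value, optimizer settings, and network handles are illustrative assumptions rather than the values used in this work:

```python
# Sketch of the completion search: gradient descent on the generator input z
# (not on the network weights) to minimize the combined loss (3).
import tensorflow as tf

LAMBDA = 0.1  # assumed perceptual-loss weight

def complete(gen, critic, y, mask, latent_dim=100, steps=1000, lr=0.01):
    # y: partial load-pull image; mask: 1 at measured pixels, 0 elsewhere.
    z = tf.Variable(tf.random.normal([1, latent_dim]))
    opt = tf.keras.optimizers.Adam(lr)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            g = gen(z, training=False)
            contextual = tf.reduce_sum(tf.abs(mask * (g - y)))       # (1)
            perceptual = -tf.reduce_mean(critic(g, training=False))  # (2)
            loss = contextual + LAMBDA * perceptual                  # (3)
        opt.apply_gradients([(tape.gradient(loss, z), z)])
    # Per Section II-B, the generated image itself is the final output; the
    # final loss value can be used to flag failed completions.
    return gen(z, training=False), loss
```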
C. Training Dataset Generation
The data used for this project is generated in MATLAB by simulating the output power contours for randomly generated sets of amplifier S-Parameters. As such, these contours represent linear device performance. This approach is used in lieu of large-signal device behavior because of the need for a large dataset during training and the relative ease of generating S-Parameters compared to simulating non-linear models. Datasets for large-signal operation could be produced by performing load-pull simulations using existing nonlinear device models across a variety of settings (frequency, bias conditions, input power, etc.).
Given a set of amplifier S-Parameters and source and
load impedances, the amplifier output power can be calculated using the
methods of Gonzalez [11]. The magnitude bounds for the amplifier S-Parameters are included in Table I.
These bounds were selected to avoid generating potentially unstable amplifiers. Only unconditionally stable amplifiers are used for training and evaluation to avoid unbounded performance values, as determined by the Rollett stability conditions $K > 1$ and $|\Delta| < 1$, where

$$K = \frac{1 - |S_{11}|^2 - |S_{22}|^2 + |\Delta|^2}{2\,|S_{12}S_{21}|}, \qquad \Delta = S_{11}S_{22} - S_{12}S_{21}.$$
The source impedance was held fixed for all simulated load-pulls.
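The dataset generation just described can be illustrated with the following NumPy sketch (the actual implementation is in MATLAB; the Table I magnitude bounds are omitted, and a source matched to the reference impedance, $\Gamma_S = 0$, is assumed here for illustration):

```python
# NumPy sketch of the training-data generation. Output power for a linear
# device at fixed available input power is proportional to the transducer
# gain G_T of Gonzalez [11].
import numpy as np

def is_unconditionally_stable(S):
    # Rollett conditions: K > 1 and |Delta| < 1.
    delta = S[0, 0] * S[1, 1] - S[0, 1] * S[1, 0]
    K = ((1 - abs(S[0, 0])**2 - abs(S[1, 1])**2 + abs(delta)**2)
         / (2 * abs(S[0, 1] * S[1, 0])))
    return K > 1 and abs(delta) < 1

def transducer_gain(S, gamma_L, gamma_S=0.0):
    # Transducer power gain G_T for a two-port described by S-parameters.
    num = (1 - abs(gamma_S)**2) * abs(S[1, 0])**2 * (1 - np.abs(gamma_L)**2)
    den = np.abs((1 - S[0, 0] * gamma_S) * (1 - S[1, 1] * gamma_L)
                 - S[0, 1] * S[1, 0] * gamma_S * gamma_L)**2
    return num / den

def loadpull_image(S, n=32):
    # Evaluate gain over an n-by-n grid of load reflection coefficients;
    # pixels outside the unit Smith Chart are left at zero.
    re, im = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
    gamma_L = re + 1j * im
    gain = transducer_gain(S, gamma_L)
    return np.where(np.abs(gamma_L) <= 1, gain, 0.0)
```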
The resulting dataset was pre-processed by standardizing to zero mean and unit variance and applying sigmoidal normalization with a hyperbolic tangent function, as recommended by [12], such that

$$x' = \tanh\!\left(\frac{x - \mu}{\sigma}\right) \qquad (4)$$

where $\mu$ and $\sigma$ are the dataset mean and standard deviation; this constrains all normalized pixel values to the interval $(-1, 1)$.
For computational efficiency, each load-pull was rendered as a 32×32 pixel greyscale image for use in the load-pull extrapolation system.
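The pre-processing of (4) amounts to a short function (a sketch assuming the statistics are computed over the full dataset):

```python
# Pre-processing per (4): standardize to zero mean / unit variance over the
# dataset, then apply tanh so every pixel value lies in (-1, 1).
import numpy as np

def preprocess(images):
    mu, sigma = images.mean(), images.std()
    return np.tanh((images - mu) / sigma)
```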
Results
The load-pull extrapolation system was trained using 100,000 simulated amplifier load-pulls. The resulting system was then tested by presenting incomplete load-pulls dictated by the mask $M$ of (1). Three mask sizes were selected such that known values were given only for the region of the Smith Chart where $|\Gamma_L|$ does not exceed the mask radius.
Each mask was then tested with 100 different simulated amplifier load-pulls. The prediction performance for each set is summarized in Table II.
As expected, the extrapolation system performs better, on average, when given a larger initial load-pull sample (the 0.25 mask case). However, it still performs remarkably well even when given only nine measurements (0.05 mask) as a prediction basis, including in its estimates of the location and value of the device optimum.
In some cases, the prediction fails to converge to a reasonable set of contours for the given data, such as in Fig. 5. Such failures are indicated by large values of the final completion loss (3), allowing them to be flagged automatically.
Conclusion
The ability to extrapolate device performance from partial load-pull datasets utilizing deep learning image completion techniques has been demonstrated. Such an approach can usefully estimate the location and value of optimum device performance from as few as nine sample points near the center of the Smith Chart, providing a well-informed starting location for subsequent optimization.
The authors are optimistic that this technique can be applied to large-signal operation, as similar deep learning approaches have achieved great success in much more complex applications, such as generation of full-color photographs [7] and completion of partial electron microscopy [13]. Additionally, the ability of neural networks to learn and extrapolate performance of an individual large-signal microwave device has been previously demonstrated [14], which suggests favorable outcomes in applying the techniques of this paper to this class of devices in general.
Acknowledgment
This work has been funded by the Army Research Laboratory (Grant No. W911NF-16-2-0054). The views and opinions expressed do not necessarily represent the opinions of the U.S. Government. The authors are grateful to John Clark of the Army Research Laboratory for assistance in development of this paper.