Crucial transistor performance information for amplifier design is often gathered through an extensive process that includes nonlinear measurement and characterization techniques, such as load-pull measurements. Performing load-pull measurements to characterize the maximum power, efficiency, or linearity performance of a device over changing load impedance often requires the measurement of many preselected impedance points. To reduce the number of measurements required in the design process, designers often rely on software nonlinear device models created from measurements over a subset of possible device configurations. Meanwhile, in reconfigurable power amplifiers, the optimum load impedance must be found through real-time measurements, which must be minimized to allow fast adaptivity. Both traditional and real-time design scenarios can benefit from techniques that are able to extract more information from fewer data points, either by reducing the size of load-pull measurement datasets, accelerating model development, or improving the rate of real-time optimization.
The ability of neural networks to learn and extrapolate the performance of an individual large-signal microwave device from a measured dataset has been previously demonstrated in the literature. Such approaches often seek to train an individual network to model a single device for simulation and design purposes, accelerating system development. However, given the computational cost typically associated with training neural networks, these methods may struggle when applied to real-time optimization problems.
Instead of using neural networks, many other approaches for impedance optimization have been explored in the literature, including locating constant power contours and proceeding toward increased performance, fitting measured data to an equation and solving for the optimal value, genetic algorithms controlling a hybrid varactor/MEMS switch tuner, and alternately searching for the optimal reflection phase while gradually increasing the reflection magnitude. However, Barkate et al. demonstrated that gradient-based methods perform well for real-time circuit optimization when compared with other typical algorithms.
Rather than train neural networks to model a device (i.e., predict the output of a device given some input), it is possible to instead train a network to directly predict measured performance characteristics, such as output power or gain, from sample measurements at a small number of configurations. As an example, consider the proposed problem shown in Fig. 1. Here, a subset of a device’s load impedance gain contours has already been evaluated, and it is desired to obtain the complete set of load-pull contours without additional measurement.
Saini et al. previously demonstrated a form of load-pull extrapolation by generating polyharmonic distortion behavioral models for the device in question, using as few as 15 measurements. It is possible to reduce the required information even further by instead applying image completion techniques from deep learning to extrapolate the unknown portions of the load-pull contours, as shown in Fig. 1. Image completion techniques have been successfully applied to complicated datasets, including full-color photographs and partial electron microscopy images.
In a preceding conference paper, we demonstrated the application of image completion for the simple case of simulated linear amplifier devices. The remainder of this article is organized as follows.
Section II provides an overview of the load-pull extrapolation method providing the basis for this work and introduces a novel family of circuit optimization algorithms built upon the iterative application of load-pull extrapolation. Section III then shows the performance of this optimization algorithm for live measurement data and compares the results to an existing, gradient-based method. Section IV provides a more detailed look at the quality of the individual extrapolations completed in the experiments of Section III. Finally, Section V presents conclusions from this work.
The load-pull extrapolation and circuit optimization techniques presented here are built upon the deep image completion method previously demonstrated for simulated, linear devices. For clarity, a brief overview of the approach follows.
To extrapolate full load-pull contours from a partial dataset, performance measurements from the known impedances are treated as an image for the purposes of image completion. The term "image completion" derives from the original application of this approach to photographs: the "images" here correspond to performance data plotted on the Smith chart, while "image completion" corresponds to filling in the unknown portions of these data.
The image completion process applies a gradient search to a generative adversarial network (GAN) trained on known load-pull contours. GANs utilize two networks pitted against each other with opposing goals. In this article, the discriminator network is trained to determine whether an image is a member of the training set of load-pull contours or a false, generated image. Meanwhile, the generator network is trained to produce load-pull images that meet the criteria learned by the discriminator, resulting in misclassification. By iteratively training these networks (and taking care to avoid overfitting), the discriminator learns to recognize valid load-pull contours and the generator learns to produce them. The operation of these networks is shown in Fig. 2, and their relationship to one another is described in Fig. 3.
Given a well-trained GAN, load-pull extrapolation can be achieved by performing a gradient search on the inputs to the generator network, searching for an input that causes the generator to produce an image consistent with the known measured data.
We use a Wasserstein GAN with gradient penalty (WGAN-GP). The Wasserstein GAN architecture reduces loss metric saturation, provides a more stable and robust training process, and is less susceptible to vanishing gradients and mode collapse. Because the WGAN-GP architecture uses the Wasserstein distance to quantify how far the generated image is from the expected set (as opposed to categorizing the generated image as belonging to the set), the discriminator network is typically referred to as the "critic" network instead. The network implementation and layer topology for this article are the same as in our previous work. Figs. 4 and 5 illustrate the resulting network structure for the WGAN-GP generator and critic networks used in this article.
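As a concrete illustration of the gradient penalty term, consider the following minimal numpy sketch (not the convolutional critic of this article, which requires automatic differentiation): for a toy linear critic f(x) = w·x, the gradient with respect to the input is simply w everywhere, so the penalty drives ‖w‖ toward 1 at points interpolated between real and generated images.

```python
import numpy as np

def gradient_penalty(w, x_real, x_fake, lam=10.0, rng=None):
    """WGAN-GP penalty for a toy *linear* critic f(x) = w @ x.

    For a linear critic the input-gradient is w at every point, so the
    penalty reduces to lam * (||w|| - 1)^2 at each interpolated sample.
    A deep critic needs autodiff to obtain the same quantity.
    """
    rng = rng or np.random.default_rng(0)
    eps = rng.uniform(size=(x_real.shape[0], 1))      # per-sample mixing ratio
    x_hat = eps * x_real + (1.0 - eps) * x_fake       # interpolated images
    grad = np.broadcast_to(w, x_hat.shape)            # d f / d x_hat = w
    grad_norm = np.linalg.norm(grad, axis=1)          # one gradient norm per sample
    return lam * np.mean((grad_norm - 1.0) ** 2)
```

When ‖w‖ = 1 the penalty vanishes, mirroring the 1-Lipschitz constraint the full WGAN-GP critic is pushed toward.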
These networks have been trained using 100 000 sets of simulated output power contours for linear amplifier devices, represented as 32 × 32 pixel images.
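The mapping between a measured reflection coefficient and a pixel in such an image can be sketched as follows (a 32 × 32 grid spanning the unit Smith chart is assumed, which yields the roughly 0.06-unit pixel width quoted later in this article; function names are illustrative):

```python
import numpy as np

N = 32                       # image is N x N pixels
PIXEL_WIDTH = 2.0 / N        # unit circle spans [-1, 1]; ~0.0625 units per pixel

def gamma_to_pixel(gamma, n=N):
    """Map a complex reflection coefficient to (row, col) image indices."""
    col = int(np.clip((gamma.real + 1.0) / 2.0 * n, 0, n - 1))
    row = int(np.clip((gamma.imag + 1.0) / 2.0 * n, 0, n - 1))
    return row, col

def pixel_to_gamma(row, col, n=N):
    """Map pixel indices back to the reflection coefficient at the pixel center."""
    re = (col + 0.5) / n * 2.0 - 1.0
    im = (row + 0.5) / n * 2.0 - 1.0
    return complex(re, im)
```

Pixels outside the unit circle carry no physical passive load impedance and would be masked out in practice.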
For image completion, this article uses a slightly modified version of the algorithm in our previous work. This approach provides the known data, specified as a masked image, and searches for a generator input whose output matches the known pixels. For loss minimization, we use the Adam optimizer with a learning rate of 0.01.
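The latent search can be sketched with a toy linear "generator" standing in for the trained network (the real generator is a deep convolutional network; W, z, and the step count here are illustrative, while the 0.01 learning rate follows the text). Adam descends on the generator input z so that the generated image matches the known, unmasked pixels.

```python
import numpy as np

def complete_image(W, y, mask, steps=2000, lr=0.01):
    """Gradient search over latent z for a toy linear generator G(z) = W @ z.

    y    : flattened target image (only entries where mask == 1 are trusted)
    mask : 1 for known pixels, 0 for unknown
    Returns the completed image G(z*) after Adam minimization of the
    masked reconstruction loss ||mask * (G(z) - y)||^2.
    """
    rng = np.random.default_rng(0)
    z = rng.standard_normal(W.shape[1])       # random starting latent vector
    m, v = np.zeros_like(z), np.zeros_like(z)
    b1, b2, eps = 0.9, 0.999, 1e-8
    for t in range(1, steps + 1):
        resid = mask * (W @ z - y)            # error on known pixels only
        grad = 2.0 * W.T @ resid              # analytic gradient w.r.t. z
        m = b1 * m + (1 - b1) * grad          # Adam first moment
        v = b2 * v + (1 - b2) * grad ** 2     # Adam second moment
        z -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return W @ z
```

Because the output is constrained to lie in the generator's range, matching the known pixels also fills in plausible values for the masked pixels, which is the mechanism the load-pull extrapolation relies on.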
The general process flow for a circuit optimization algorithm using only load-pull extrapolation to drive exploration of the search space is shown in Fig. 6. For each of the given steps in the algorithm, different variations are possible.
The algorithm implementation used in this article is built on a simple "maximum addition" technique, where the best predicted impedance from the previous load-pull extrapolation is chosen for measurement, and its performance is added to the dataset used for the next extrapolation. The initial point is chosen at random from one of the four pixels closest to 50 Ω at the center of the Smith chart.
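The maximum addition loop can be sketched as follows, with an ordinary least-squares quadratic surface standing in for the GAN-based extrapolator (the grid size, seed points, and power surface are illustrative; the real system uses the image completion described above together with live tuner measurements):

```python
import numpy as np

N = 32
OPT = (20, 12)                                  # toy surface optimum (pixel)

def measure(pix):
    """Stand-in for tuning to an impedance and measuring output power (dB)."""
    r, c = pix
    return -((r - OPT[0]) ** 2 + (c - OPT[1]) ** 2) / 50.0

def extrapolate(data):
    """Toy extrapolator: least-squares quadratic surface through the
    measured pixels, standing in for the GAN image completion."""
    pts = list(data)
    A = np.array([[1, r, c, r * r, c * c, r * c] for (r, c) in pts])
    coef, *_ = np.linalg.lstsq(A, np.array([data[p] for p in pts]), rcond=None)
    rr, cc = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    feats = np.stack([np.ones_like(rr), rr, cc, rr * rr, cc * cc, rr * cc], axis=-1)
    return feats @ coef                         # predicted power image

def maximum_addition_search(seed_points):
    """Measure the predicted best impedance after each extrapolation until
    the prediction repeats (the convergence check used in this sketch)."""
    data = {p: measure(p) for p in seed_points}
    while True:
        best_flat = np.argmax(extrapolate(data))
        best = tuple(int(i) for i in np.unravel_index(best_flat, (N, N)))
        if best in data:                        # repeated prediction: converge
            return best, len(data)
        data[best] = measure(best)              # "maximum addition" step
```

On this well-behaved toy surface the loop finds the optimum after measuring only one point beyond the seeds; the measured results reported later show the GAN-driven version typically needs eight or fewer impedances.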
This maximum addition technique is well suited for systems where the time cost of performing a load-pull extrapolation is low relative to the time required to tune to and evaluate a new impedance value. However, if this situation is reversed (i.e., it is faster to tune and measure than to perform extrapolations), it may be beneficial to evaluate multiple impedances per extrapolation. For example, the eight pixels surrounding a predicted optimal impedance could be evaluated in addition to the optimal impedance. This also permits a more sophisticated convergence check that looks for the presence of a local maximum in performance. Given the typically convex nature of output power contours on the Smith chart, any local maximum would be assumed to be the global maximum without requiring an additional search.
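The neighborhood convergence check described above can be sketched as follows (`measure` is a placeholder for a live tuner measurement at the given pixel; the 3 × 3 neighborhood matches the eight surrounding pixels in the text):

```python
def is_local_maximum(candidate, measure, n=32):
    """Measure the (up to) eight pixels surrounding a predicted optimum and
    report whether the candidate outperforms all of its neighbors.

    Given the typically convex output power contours on the Smith chart,
    a local maximum found this way is treated as the global optimum.
    """
    r, c = candidate
    p_cand = measure(candidate)
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue                          # skip the candidate itself
            nr, nc = r + dr, c + dc
            if 0 <= nr < n and 0 <= nc < n and measure((nr, nc)) > p_cand:
                return False                      # a neighbor does better
    return True
```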
III. Demonstration of Circuit Optimization via Load-Pull Extrapolation
A. Test Configuration
The circuit optimization via load-pull extrapolation technique of Section II has been tested and demonstrated in measurement using the Skyworks 65017-70LF InGaP packaged amplifier with a bias voltage of 7 V. For each frequency under test, the Skyworks amplifier was power swept with a 50-Ω load.
In this simple load-pull setup, the harmonic terminations of the device under test were not controlled or known. Stancliff and Poulin and Colantonio et al. demonstrated the use of harmonic load-pull measurements in device design and optimum termination selection; the harmonic terminations of a device impact both performance and contour shape. In this situation, since the harmonic terminations generally vary as the fundamental load reflection coefficient is varied, the contours are expected to differ from those for fixed, known harmonic terminations. This may make it even more difficult to accurately predict the device performance in most modeling cases. However, in this case, an image processing approach is used to generate the contours, and it is limited to extrapolating from what it can "see" in the limited measurement dataset, irrespective of harmonic termination impacts.
All image completions in this section are run using a computer with an NVIDIA GeForce RTX 2070 Super GPU, AMD Ryzen 3700X CPU, and 32 GB of DDR4-3200 RAM, with an average image completion time of 1.1 s. Measurements are acquired using custom MATLAB software with a traditional load-pull system (Maury Microwave passive automated load-pull tuner and Keysight Technologies signal generator and power meter).
B. Measurement Results
The circuit optimization technique was tested at 21 frequencies between 2 and 4 GHz, spaced at 100-MHz intervals. Fifty searches were performed at each frequency (a total of 1050 searches). The results from several example searches at various frequencies are shown in Figs. 7–9. Each figure shows the similarity between the real and final predicted contours, as well as the true optimal impedance and the optimized impedance. "Original Data" represents the real contours, obtained for comparison through traditional load-pull measurement. "Final Extrapolated Data" represents the extrapolated load-pull contours based on the complete collection of "Measured Points." The "Final Predicted Optimum Impedance" is the optimum point extrapolated from all measured data, while the "Search Optimum Impedance" is the best-performing impedance actually measured over the course of the search; because one is extrapolated and the other measured, these points are often different. In all three of these examples, the selected maximum power varies from the optimum power by less than 0.1 dB, with eight or fewer impedances evaluated. This is a significant improvement over traditional load-pull measurements and impedance optimization algorithms, given both the accuracy and the small number of required measurements.
A complete overview of the achieved performance (compared to load-pull maximum) and elapsed time for all 1050 trials is shown in Fig. 10. To accurately compare the search performance against the maximum power reported by the load-pull for each frequency, the load-pull performance (obtained from the traditional load-pull measurements) for the final search impedance is used in place of the power reported at the end of the search. This eliminates the impact of small variations in measured power that occur over the significant time needed to run all 1050 trials.
Fig. 10 shows that the search algorithm is generally successful, converging to within 0.5 dB of the optimal performance 95% of the time (1001 out of 1050 trials). Of these trials, the search converges within 60.5 s on average, with 95% of these searches converging within 84 s. In addition, more than half the searches obtain performance within 0.1 dB of the true optimum (547 out of 1050 trials). Instances that miss the optimal impedance by a significant margin generally obtained an early repeated maximum predicted impedance, leading to early search convergence, such as the result shown in Fig. 11. This can be mitigated by requiring a minimum number of measured points prior to convergence and selecting additional impedances to evaluate by some other means when an early repeat does occur. However, imposing a minimum number of measurements also imposes a floor on how quickly the search is allowed to converge.
Compared to a previously reported load-pull gradient search, this algorithm is able to converge much faster with significantly fewer measurements. That gradient search proceeds by beginning at a candidate point, measuring two neighboring points around the candidate, and then proceeding a search distance in the direction of steepest ascent. The search distance is decreased when the next candidate performs more poorly than its predecessor, and the search stops when the search distance is decremented below a defined resolution distance. Histograms of the elapsed times for the 14 gradient search trials and the 1050 extrapolation search trials of this work are shown in Fig. 12; for each algorithm, the percentage of its total searches finishing in a given time is plotted. While elapsed times are not reported for each individual gradient search, we have estimated the timing by applying the average tuning and measurement time for the traditional load-pull system to the reported number of performance queries (assuming that the computation time required for the gradient search is negligible). By this standard, the gradient searches converge in an average time of 166.7 s, nearly double the 95th percentile time for the extrapolation searches that achieve within 0.5 dB of the optimal performance. The extrapolation search's faster convergence comes at a cost of consistency: the total variation in final impedance for the gradient search is reported to be on the order of half of a single pixel in the load-pull extrapolation search (each pixel is roughly 0.06 units wide on the Smith chart). Note that it may be possible to improve the accuracy of the load-pull extrapolation algorithm by training on a collection of nonlinear data, as opposed to the simulated, linear data used in this article, or by increasing the pixel density of the load-pull images.
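For comparison, the referenced gradient search can be sketched as follows (the power surface, step sizes, and finite-difference offsets are illustrative; the published method is a hardware measurement loop, with each `power` call corresponding to tuning and measuring):

```python
def gradient_search(power, start=0j, step=0.2, resolution=0.01, delta=0.01):
    """Steepest-ascent impedance search as described in the text: estimate
    the gradient from two neighboring measurements, step along it, halve
    the step when the new candidate is worse, and stop once the step size
    falls below the resolution distance.  Returns (final impedance,
    number of performance queries)."""
    cand, p_cand, n_meas = start, power(start), 1
    while step >= resolution:
        g_re = (power(cand + delta) - p_cand) / delta        # finite differences
        g_im = (power(cand + 1j * delta) - p_cand) / delta
        n_meas += 2
        grad = complex(g_re, g_im)
        if abs(grad) == 0:
            break
        trial = cand + step * grad / abs(grad)               # unit-gradient step
        p_trial = power(trial)
        n_meas += 1
        if p_trial > p_cand:
            cand, p_cand = trial, p_trial                    # accept the step
        else:
            step /= 2.0                                      # worse: shrink step
    return cand, n_meas
```

Counting `n_meas` makes the cost comparison in the text concrete: every gradient estimate and every trial point is a separate tune-and-measure cycle, which is why the extrapolation-driven search needs far fewer measurements.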
IV. Analysis of Nonlinear Load-Pull Extrapolation Quality
Following up on the results previously reported for linear devices, this section considers the quality of the load-pull extrapolations performed as part of the searches in Section III. Over the course of those searches, extrapolations were performed using anywhere from 1 to 14 measurements. However, only one extrapolation each used 13 and 14 measurements, and only five extrapolations used 12 measurements. Figs. 13 and 14 therefore show statistics for the error in the predicted power and optimal impedance for all input dataset sizes with at least ten extrapolations. Table I provides a numeric summary of these results for comparison with the simulated, linear results.
Comparing the results in Table I with the previously reported linear results, the extrapolations based on at least seven measurements perform comparably to the linear results with 81 measurements grouped around the center of the Smith chart, based on the median error metrics.
V. Conclusion
The ability to extrapolate nonlinear characteristics of amplifier output power versus reflection coefficient (load-pull) from partial datasets utilizing deep learning image completion techniques has been demonstrated. This approach has produced good predictive ability, with median errors summarized in Section IV.