Conditional generative adversarial networks (cGANs) have gained considerable attention in recent years due to their class-wise controllability and superior quality on complex generation tasks. This paper introduces a simple yet effective approach to improving cGANs by estimating the discrepancy between the data distribution and the model distribution on given samples. The proposed measure, coined the gap of log-densities (GOLD) estimator, provides an effective self-diagnosis metric for cGAN training and can be computed efficiently from the two branches of the cGAN discriminator. We propose three applications of the GOLD estimator: example re-weighting, rejection sampling, and active learning, which improve the training, inference, and data selection of cGANs, respectively. We demonstrate that the proposed methods outperform corresponding baselines for all three applications on several image datasets.
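To make the core idea concrete, here is a minimal, hypothetical sketch (not the paper's exact formulation) of how a log-density gap can be read off a discriminator. It relies on the standard identity that an optimal GAN discriminator satisfies D(x) = p_data(x) / (p_data(x) + p_model(x)), so the logit of D(x) estimates log p_data(x) - log p_model(x); the function names and the downstream re-weighting and rejection-sampling uses are illustrative assumptions.

```python
import numpy as np

def log_density_gap(d_prob):
    """Estimate log p_data(x) - log p_model(x) from discriminator outputs.

    d_prob: discriminator probabilities D(x) in (0, 1). For an optimal
    discriminator, logit(D(x)) equals the gap of log-densities.
    """
    d = np.clip(np.asarray(d_prob, dtype=float), 1e-7, 1 - 1e-7)
    return np.log(d) - np.log(1.0 - d)

def importance_weights(d_prob):
    """Illustrative example re-weighting: samples the model under-covers
    (large positive gap) receive larger weights, normalized to mean 1."""
    w = np.exp(log_density_gap(d_prob))
    return w / w.mean()

def rejection_accept_prob(d_prob):
    """Illustrative rejection sampling: accept generated samples with
    probability proportional to exp(gap), capped by the batch maximum."""
    g = log_density_gap(d_prob)
    return np.exp(g - g.max())
```

For example, `log_density_gap(0.5)` is 0 (the discriminator cannot tell the sample apart, so the two densities agree there), while `log_density_gap(0.75)` is `log(3)`, indicating the sample is three times more likely under the data distribution than under the model.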