Gradient algorithms for designing predictive vector quantizers

Pao Chi Chang, Robert M. Gray

Research output: Contribution to journal › Article › peer-review

79 Scopus citations


A predictive vector quantizer (PVQ) is a vector extension of a predictive quantizer. It consists of two parts: a conventional memoryless vector quantizer (VQ) and a vector predictor. Two gradient algorithms for designing a PVQ are developed in this paper: the steepest descent (SD) algorithm and the stochastic gradient (SG) algorithm. Both improve the quantizer and the predictor jointly in the sense of minimizing the distortion as measured by the average mean-squared error. The two design approaches differ in the update period and the step size used in each iteration to adjust the codebook and predictor: the SG algorithm updates once for each input training vector and uses a small step size, while the SD algorithm updates only once over a long period, possibly one pass over the entire training sequence, and uses a relatively large step size. Code designs and tests are simulated for both Gauss-Markov sources and for sampled speech waveforms, and the results are compared to codes designed using techniques that attempt to optimize only the quantizer for the predictor and not vice versa.
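The closed-loop SG design described in the abstract can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the first-order Gauss-Markov source, the step sizes, the codebook size, and the linear-predictor parameterization `A` are all assumptions chosen for demonstration. Each training vector triggers one small-step update of the winning codeword and of the predictor matrix, following the gradient of the squared quantization error.

```python
import numpy as np

rng = np.random.default_rng(0)

dim, codebook_size = 2, 8
step_cb, step_pred = 0.05, 0.01   # small SG step sizes (illustrative values)

# Hypothetical first-order Gauss-Markov training source (an assumption,
# standing in for the Gauss-Markov and speech sources used in the paper).
n_train = 5000
x = np.zeros((n_train, dim))
for t in range(1, n_train):
    x[t] = 0.9 * x[t - 1] + rng.normal(size=dim)

codebook = rng.normal(scale=0.1, size=(codebook_size, dim))  # residual codewords
A = np.zeros((dim, dim))  # linear vector predictor (assumed parameterization)

x_hat_prev = np.zeros(dim)
total_err = 0.0
for t in range(n_train):
    pred = A @ x_hat_prev                 # predict from previous reconstruction
    e = x[t] - pred                       # prediction residual to be quantized
    i = np.argmin(((codebook - e) ** 2).sum(axis=1))  # nearest-codeword search
    q = codebook[i]
    x_hat = pred + q                      # closed-loop reconstruction
    d = e - q                             # quantization error for this vector
    # SG step: one small update per training vector, as in the abstract.
    codebook[i] += step_cb * d                 # move winning codeword toward residual
    A += step_pred * np.outer(d, x_hat_prev)   # gradient step on squared error w.r.t. A
    x_hat_prev = x_hat
    total_err += (d ** 2).sum()

print(total_err / n_train)  # average mean-squared error over the training run
```

The SD variant would instead accumulate these gradients over a full pass through the training sequence and apply one larger update at the end of the pass.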

Original language: English
Pages (from-to): 679-690
Number of pages: 12
Journal: IEEE Transactions on Acoustics, Speech, and Signal Processing
Issue number: 4
State: Published - Aug 1986

