TY - JOUR

T1 - Gradient algorithms for designing predictive vector quantizers

AU - Chang, Pao Chi

AU - Gray, Robert M.

PY - 1986/8

Y1 - 1986/8

N2 - A predictive vector quantizer (PVQ) is a vector extension of a predictive quantizer. It consists of two parts: a conventional memoryless vector quantizer (VQ) and a vector predictor. Two gradient algorithms for designing a PVQ are developed in this paper: the steepest descent (SD) algorithm and the stochastic gradient (SG) algorithm. Both have the property of improving the quantizer and the predictor in the sense of minimizing the distortion as measured by the average mean-squared error. The differences between the two design approaches are the period and the step size used in each iteration to update the codebook and predictor. The SG algorithm updates once for each input training vector and uses a small step size, while the SD algorithm updates only once per long period, possibly one pass over the entire training sequence, and uses a relatively large step size. Code designs and tests are simulated for both Gauss-Markov sources and for sampled speech waveforms, and the results are compared to codes designed using techniques that attempt to optimize only the quantizer for the predictor and not vice versa.

UR - http://www.scopus.com/inward/record.url?scp=0001659837&partnerID=8YFLogxK

U2 - 10.1109/TASSP.1986.1164905

DO - 10.1109/TASSP.1986.1164905

M3 - Journal article

AN - SCOPUS:0001659837

VL - 34

SP - 679

EP - 690

JO - IEEE Transactions on Acoustics, Speech, and Signal Processing

JF - IEEE Transactions on Acoustics, Speech, and Signal Processing

SN - 0096-3518

IS - 4

ER -