Publication Date: 7/10/2018
Event: ICML 2018
Reference: pp. 1-10, 2018
Authors: Ting Chen, UCLA; Martin Renqiang Min, NEC Laboratories America, Inc.; Yizhou Sun, UCLA
Abstract: Conventional embedding methods directly associate each symbol with a continuous embedding vector, which is equivalent to applying a linear transformation based on a one-hot encoding of the discrete symbols. Despite its simplicity, such an approach yields a number of parameters that grows linearly with the vocabulary size and can lead to overfitting. In this work, we propose a much more compact K-way D-dimensional discrete encoding scheme to replace the one-hot encoding. In the proposed KD encoding, each symbol is represented by a D-dimensional code with a cardinality of K, and the final symbol embedding vector is generated by composing the code embedding vectors. To learn semantically meaningful codes end-to-end, we derive a relaxed discrete optimization approach based on stochastic gradient descent, which can be generally applied to any differentiable computational graph with an embedding layer. In our experiments with various applications ranging from natural language processing to graph convolutional networks, the total size of the embedding layer can be reduced by up to 98% while achieving similar or better performance.
Publication Link: https://proceedings.mlr.press/v80/chen18g.html
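The abstract describes composing each symbol's embedding from D code embedding vectors, each drawn from a table of K entries. Below is a minimal PyTorch sketch of that composition step under illustrative assumptions: the codes are random and fixed (the paper learns them end-to-end via relaxed discrete optimization), the composition is a simple summation, and the hyperparameter values (K=32, D=8, embed_dim=128) are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn

class KDEmbedding(nn.Module):
    """Sketch of a K-way D-dimensional (KD) code embedding layer."""

    def __init__(self, vocab_size, K=32, D=8, embed_dim=128):
        super().__init__()
        # One D-dimensional discrete code per symbol, each entry in [0, K).
        # Random and frozen here; the paper learns these codes end-to-end.
        self.register_buffer("codes", torch.randint(0, K, (vocab_size, D)))
        # K*D shared code embedding vectors replace the vocab_size*embed_dim table.
        self.code_embeddings = nn.Embedding(K * D, embed_dim)
        self.K, self.D = K, D

    def forward(self, symbol_ids):
        codes = self.codes[symbol_ids]                        # (..., D)
        # Offset each code dimension into its own block of K embedding rows.
        offsets = torch.arange(self.D, device=codes.device) * self.K
        vecs = self.code_embeddings(codes + offsets)          # (..., D, embed_dim)
        return vecs.sum(dim=-2)                               # compose by summation

emb = KDEmbedding(vocab_size=10000)
print(emb(torch.tensor([1, 42, 999])).shape)  # torch.Size([3, 128])
```

With these placeholder numbers, the continuous parameters shrink from vocab_size x embed_dim (1,280,000) to K x D x embed_dim (32,768) plus the integer codes, which illustrates the kind of compression the abstract reports.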