Background
Deep Learning in NLP (Part 1): Word Vectors and Language Models
CS224n: Natural Language Processing with Deep Learning
Schedule and Syllabus
http://web.stanford.edu/class/cs224n/syllabus.html
Skip-Gram Model – Word2Vec
Efficient Estimation of Word Representations in Vector Space
https://arxiv.org/pdf/1301.3781.pdf
Distributed Representations of Words and Phrases and their Compositionality
https://arxiv.org/pdf/1310.4546.pdf
Word2Vec Tutorial – The Skip-Gram Model
http://mccormickml.com/2016/04/19/word2vec-tutorial-the-skip-gram-model/
Softmax Regression (used in the Skip-Gram model; see the sketch below)
http://ufldl.stanford.edu/tutorial/supervised/SoftmaxRegression/
Word2Vec Experiments on Chinese and English Wikipedia Corpora
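
As a bridge between the Skip-Gram papers and the softmax regression tutorial above, here is a minimal NumPy sketch of one training step of Skip-Gram with a full softmax output. All names and sizes are illustrative assumptions, not code from those references; practical implementations (including the Wikipedia experiments above) normally replace the full softmax with hierarchical softmax or negative sampling for speed.

import numpy as np

def softmax(z):
    z = z - z.max()                       # subtract the max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def skipgram_step(W_in, W_out, center, context, lr=0.1):
    """One SGD step on a single (center, context) word pair, full-softmax objective."""
    v_c = W_in[center]                    # center-word vector, shape (d,)
    scores = W_out @ v_c                  # dot products with every output vector, shape (V,)
    p = softmax(scores)                   # predicted distribution over context words
    loss = -np.log(p[context])            # cross-entropy, i.e. the softmax-regression loss

    err = p.copy()
    err[context] -= 1.0                   # gradient of the loss w.r.t. the scores
    grad_in = W_out.T @ err               # gradient w.r.t. the center-word vector
    W_out -= lr * np.outer(err, v_c)      # update all output (context) vectors in place
    W_in[center] -= lr * grad_in          # update the center-word vector in place
    return loss

# Toy usage with made-up sizes: a 10-word vocabulary and 5-dimensional vectors.
rng = np.random.default_rng(0)
V, d = 10, 5
W_in = rng.normal(scale=0.1, size=(V, d))
W_out = np.zeros((V, d))
for _ in range(100):
    loss = skipgram_step(W_in, W_out, center=3, context=7)
print(loss)                               # decreases as word 3 learns to predict word 7
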
GloVe Model
GloVe: Global Vectors for Word Representation
http://www-nlp.stanford.edu/pubs/glove.pdf
http://nlp.stanford.edu/projects/glove/
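
The GloVe references above define a weighted least-squares objective over the word co-occurrence matrix. Below is a minimal sketch of that loss in NumPy; the weighting constants (x_max = 100, alpha = 3/4) are the values reported in the paper, while the array names and toy sizes are illustrative assumptions rather than code from the GloVe project.

import numpy as np

def glove_weight(x, x_max=100.0, alpha=0.75):
    # Weighting f(X_ij): caps the influence of very frequent co-occurrences.
    return np.where(x < x_max, (x / x_max) ** alpha, 1.0)

def glove_loss(X, W, W_tilde, b, b_tilde):
    # J = sum over X_ij > 0 of f(X_ij) * (w_i . w~_j + b_i + b~_j - log X_ij)^2
    mask = X > 0                                        # only observed co-occurrences count
    logX = np.log(X, where=mask, out=np.zeros_like(X))
    pred = W @ W_tilde.T + b[:, None] + b_tilde[None, :]
    residual = np.where(mask, pred - logX, 0.0)
    return np.sum(glove_weight(X) * residual ** 2)

# Toy usage: a 6-word vocabulary with 4-dimensional vectors and fake counts.
rng = np.random.default_rng(0)
V, d = 6, 4
X = rng.integers(0, 5, size=(V, V)).astype(float)      # fake co-occurrence matrix
W, W_tilde = rng.normal(size=(V, d)), rng.normal(size=(V, d))
b, b_tilde = np.zeros(V), np.zeros(V)
print(glove_loss(X, W, W_tilde, b, b_tilde))
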
Stanford Deep Learning and Natural Language Processing, Lecture 2: Word Vectors