Word embeddings are both a hot research topic and a useful tool for NLP practitioners, as they provide the representations used in many intermediate tasks, like part-of-speech tagging, syntactic parsing, or named entity recognition, as well as in end-to-end tasks like text classification, sentiment analysis, and question answering.
The recent wave of attention to this topic started in 2013, when the original word2vec paper was published at NIPS along with an efficient and scalable implementation, but a great deal of research had already been carried out on the topic since the 1950s in computer science, cognitive science, and computational linguistics.
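As a quick taste of what these methods look like in practice, the sketch below trains a tiny word2vec model on a toy corpus and queries it for similar words. It assumes the gensim library, which is not part of the talk material itself, and the corpus and parameters are purely illustrative.

```python
# Minimal sketch: training and querying a word2vec model with gensim.
# The corpus and hyperparameters are toy values for illustration only.
from gensim.models import Word2Vec

# Each document is a list of tokens.
corpus = [
    ["word", "embeddings", "capture", "distributional", "semantics"],
    ["similar", "words", "appear", "in", "similar", "contexts"],
    ["embeddings", "are", "dense", "low", "dimensional", "vectors"],
]

# Train a small skip-gram model (sg=1).
model = Word2Vec(corpus, vector_size=50, window=2, min_count=1, sg=1, epochs=50)

# Each word is now a dense vector usable as a feature in downstream tasks.
vector = model.wv["embeddings"]                      # numpy array of shape (50,)
print(model.wv.most_similar("embeddings", topn=3))   # nearest neighbours in the embedding space
```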
The historical part of the talk will focus on this body of work, with the aim of distilling ideas and lessons learned that many practitioners and machine learning researchers may not be aware of.
The second part of the talk will focus on recent developments and novel methods, highlighting interesting directions currently being explored, like the compositionality of meaning, representing words as probability distributions, and learning representations of knowledge graphs.