In text generation we build a model trained on sequences of text.
The model learns to map inputs to outputs, and thereby to answer questions, complete text, translate, and so on.
The model is composed of an encoder and a decoder; we start from long short-term memory (LSTM) models and evolve into transformers.
There are different procedures for preprocessing and parsing the text so that the data can be fed into the model.
We use a text preprocessing routine to clean and simplify the text.
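A minimal sketch of such a routine, assuming plain Python with a simple lowercase/regex cleanup (the exact cleaning steps are an assumption, not the original routine):

```python
import re

def preprocess(text):
    """Clean and simplify raw text: lowercase, strip punctuation, collapse spaces."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", " ", text)   # drop punctuation and symbols
    text = re.sub(r"\s+", " ", text).strip()   # collapse repeated whitespace
    return text.split()                        # return the list of words/lemmas

lemmas = preprocess("The model, trained on text, learns to answer!")
# ['the', 'model', 'trained', 'on', 'text', 'learns', 'to', 'answer']
```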
After preprocessing the text we create a corpus from the remaining lemmas and choose a vocabulary. Usually the vocabulary is built from the most frequent words, up to a maximum vocabulary size.
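A sketch of the vocabulary selection, assuming a frequency-ranked cutoff (the maximum size and the special tokens are illustrative):

```python
from collections import Counter

def build_vocab(corpus, max_size=10000):
    """Keep the most frequent words up to max_size, reserving ids for padding and unknowns."""
    counts = Counter(word for document in corpus for word in document)
    vocab = {"<pad>": 0, "<unk>": 1}
    for word, _ in counts.most_common(max_size - len(vocab)):
        vocab[word] = len(vocab)
    return vocab

vocab = build_vocab([["the", "model", "the", "text"], ["the", "answer"]], max_size=5)
# {'<pad>': 0, '<unk>': 1, 'the': 2, 'model': 3, 'text': 4}
```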
For each model we select a maximum number of lemmas and assign a token to each word based on its occurrence in the training data set.
A token can be a character, a word, a bag of words, and so on. To reduce the token dimension we can introduce semantic relationships between lemmas, as in word2vec.
The tokens need to be reshaped into the form the model expects, usually by adding an extra dimension for batching.
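A sketch of the tokenization and reshaping, reusing the vocabulary built above and assuming a fixed sequence length:

```python
import numpy as np

def tokenize(words, vocab):
    """Map each word to its id, falling back to <unk> for out-of-vocabulary words."""
    return [vocab.get(w, vocab["<unk>"]) for w in words]

def to_batch(sequences, seq_len):
    """Pad/truncate each token sequence and stack into a (batch, seq_len) array."""
    batch = np.zeros((len(sequences), seq_len), dtype=np.int64)  # 0 is <pad>
    for i, seq in enumerate(sequences):
        seq = seq[:seq_len]
        batch[i, :len(seq)] = seq
    return batch

ids = tokenize(["the", "model", "translates"], vocab)   # unknown word -> <unk>
x = to_batch([ids], seq_len=8)                          # shape (1, 8): batch dimension added
```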
We need to save a consistent function that preprocesses the text, selects the words for the vocabulary, tokenizes, and reshapes the data.
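The whole chain can be saved as a single function so the same steps are applied at training and inference time; a short sketch reusing the helpers above:

```python
def prepare(texts, vocab, seq_len=128):
    """Preprocess, tokenize and reshape a list of raw texts into one model-ready batch."""
    return to_batch([tokenize(preprocess(t), vocab) for t in texts], seq_len)

batch = prepare(["The model answers questions.", "It can also translate."], vocab)
```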
We use and compare different models.
Different models try to learn natural-language sequences, starting with long short-term memory: we stack two LSTM layers of 256 units each and train the model to predict the next character. Results are usually poor and lack semantic consistency.
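A minimal sketch of such a character-level model, assuming TensorFlow/Keras and a character vocabulary of illustrative size:

```python
import tensorflow as tf

n_chars = 100   # assumed size of the character vocabulary

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(n_chars, 64),                 # character embedding
    tf.keras.layers.LSTM(256, return_sequences=True),       # first LSTM layer
    tf.keras.layers.LSTM(256),                               # second LSTM layer
    tf.keras.layers.Dense(n_chars, activation="softmax"),    # next-character distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
# model.fit(x, y, ...)  where y holds the id of the character following each input window
```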
Transformers focus on attention maps and flexibly learn the cross-correlations between words and sequences. Transformers also include a positional embedding that helps the model learn grammatical features.
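A numpy sketch of the two ingredients mentioned here: scaled dot-product attention, whose softmaxed scores form the attention map, and the sinusoidal positional encoding of the original transformer (both are generic formulations, not the exact layers of the models compared later):

```python
import numpy as np

def attention(q, k, v):
    """Scaled dot-product attention; `weights` is the attention map."""
    scores = q @ k.T / np.sqrt(k.shape[-1])
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v, weights

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional embedding added to the token embeddings."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000.0, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

out, attn_map = attention(np.random.rand(5, 16), np.random.rand(5, 16), np.random.rand(5, 16))
pe = positional_encoding(seq_len=5, d_model=16)   # shape (5, 16)
```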
With BERT we show an extended example of the architecture with its characteristic features.
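For illustration, a pretrained BERT can be loaded with the Hugging Face `transformers` library; the checkpoint name below is the standard public one, not necessarily the one used in the extended example:

```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("The model completes this sentence.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (1, number_of_tokens, 768)
```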
In word2vec we use a shallow neural network to capture similarity between lemmas and reduce the dimensionality of the text input.
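A sketch using gensim's Word2Vec (assuming gensim 4.x) on a toy corpus preprocessed with the routine above; parameters are illustrative:

```python
from gensim.models import Word2Vec

sentences = [preprocess(t) for t in ["the model answers questions",
                                     "the model completes text",
                                     "the network translates sentences"]]

w2v = Word2Vec(sentences, vector_size=50, window=3, min_count=1, epochs=50, seed=0)
print(w2v.wv.most_similar("model", topn=3))   # nearest lemmas in the embedding space
```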
We analyze different documents and create 2D and 3D plots to show the dimensionality reduction and the clustering of words.
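A sketch of the reduction for plotting, assuming scikit-learn's PCA and matplotlib; the 3D version would use `n_components=3` and a 3D axis instead:

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

words = list(w2v.wv.index_to_key)      # vocabulary of the word2vec model above
vectors = w2v.wv[words]                # (n_words, vector_size)

coords = PCA(n_components=2).fit_transform(vectors)
plt.scatter(coords[:, 0], coords[:, 1])
for word, (x, y) in zip(words, coords):
    plt.annotate(word, (x, y))         # label each point with its word
plt.show()                             # related words cluster together
```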
[Figures: 2D and 3D word-embedding plots of private conversations; the 3D representation shows a more complex structure.]
We use the routine to load example files and test the different models.