text generation

In text generation we build a model trained on sequences of text.

The model learns to map inputs to outputs and can then answer questions, complete text, translate…

The model is composed of an encoder and a decoder; we start from long short-term memory (LSTM) models and move on to transformers.

There are different procedures to preprocess and parse the text so that the data can be fed into the model.

text preprocessing

We use a text preprocessing routine to clean and simplify the text.
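A minimal sketch of such a routine, assuming lowercasing, punctuation removal and a small illustrative stop-word list (the real routine may rely on a library such as NLTK or spaCy; `clean_text` and `STOP_WORDS` are illustrative names):

```python
import re

# illustrative stop-word list; the actual routine may use a full library list
STOP_WORDS = {"the", "a", "an", "and", "or", "of", "to", "in", "is", "it"}

def clean_text(text):
    """Lowercase, strip punctuation and digits, drop stop words."""
    text = text.lower()
    text = re.sub(r"[^a-z\s]", " ", text)   # keep letters only
    return [w for w in text.split() if w not in STOP_WORDS]

print(clean_text("The model learns to complete text, translate..."))
# ['model', 'learns', 'complete', 'text', 'translate']
```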

After preprocessing the text we create a corpus from the remaining lemmas and choose a vocabulary. Usually the vocabulary contains the most frequent words, up to a maximum vocabulary size.
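For example, the vocabulary can be built from word counts over the whole corpus, keeping only the most frequent lemmas; a sketch where the `max_vocab` cutoff is an assumption:

```python
from collections import Counter

def build_vocab(corpus_lemmas, max_vocab=10000):
    """Keep the most frequent lemmas, up to max_vocab entries."""
    counts = Counter(corpus_lemmas)
    # index 0 is reserved for padding / out-of-vocabulary words
    return {word: idx + 1
            for idx, (word, _) in enumerate(counts.most_common(max_vocab))}
```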

tokenization

For each model we select a maximum number of lemmas and create a token for each word depending on its frequency in the training data set.

A token can be a character, a word, a bag of words… To reduce the token dimension we can introduce semantic relationships between lemmas, as in word2vec.

The tokens need to be reshaped into the format the model expects, usually by adding an extra dimension for batching.
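A sketch of a simple word-level tokenizer that maps lemmas to vocabulary indices, pads to a fixed length and adds a batch dimension (the sequence length and the padding index 0 are assumptions):

```python
import numpy as np

def tokenize(lemmas, vocab, seq_len=32):
    """Map lemmas to integer ids (0 = unknown/padding), pad/truncate to seq_len."""
    ids = [vocab.get(w, 0) for w in lemmas][:seq_len]
    ids += [0] * (seq_len - len(ids))
    # add a leading batch dimension -> shape (1, seq_len)
    return np.array(ids, dtype=np.int64)[None, :]
```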

We need to save a consistent function that preprocesses the text, selects the words for the vocabulary, tokenizes and reshapes the data, so the same transformation can be applied at training and inference time.
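One simple way to keep the pipeline consistent is to persist the vocabulary and reuse the same cleaning and tokenization functions; a minimal sketch using JSON (the file name is arbitrary):

```python
import json

def save_vocab(vocab, path="vocab.json"):
    with open(path, "w") as f:
        json.dump(vocab, f)

def load_vocab(path="vocab.json"):
    with open(path) as f:
        return json.load(f)

# at inference time: same clean_text -> tokenize with the reloaded vocabulary
# tokens = tokenize(clean_text(raw_text), load_vocab())
```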

models

We use and compare different models.

Different models try to learn natural language sequences, starting with long short-term memory: two stacked LSTM layers of 256 units, where the model learns to predict the next character. Results are usually poor and lack semantic consistency.
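A minimal Keras sketch of such a character-level model with two stacked 256-unit LSTM layers predicting the next character; the vocabulary size, embedding size and other hyperparameters are illustrative:

```python
import tensorflow as tf

n_chars = 64   # illustrative character-vocabulary size

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(n_chars, 64),
    tf.keras.layers.LSTM(256, return_sequences=True),
    tf.keras.layers.LSTM(256),
    tf.keras.layers.Dense(n_chars, activation="softmax"),  # next-character distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```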

Transformers focus on attention maps and flexibly learn the cross-correlation between words and sequences. Transformers have a built-in positional embedding that helps them learn grammatical features.
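The two ingredients can be sketched in a few lines of NumPy: the sinusoidal positional encoding of the original transformer paper and the scaled dot-product attention that produces the attention map (shapes and sizes here are illustrative, not the repository's implementation):

```python
import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal positional encoding, shape (seq_len, d_model)."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))

def scaled_dot_product_attention(q, k, v):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = q.shape[-1]
    scores = q @ k.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # softmax -> attention map
    return weights @ v, weights
```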

With BERT we show an extended example of a transformer architecture with its characteristic features.
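As an illustration of how such a pretrained model can be loaded and inspected, a sketch using the HuggingFace transformers library with the standard `bert-base-uncased` checkpoint (the repository's setup may differ):

```python
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased")

inputs = tokenizer("Text generation with transformers.", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (batch, tokens, hidden size = 768)
```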

generative results

We use the routine to load example files and test the different models.

word2vec

In word2vec we use a shallow neural network to learn similarity between lemmas and reduce the dimensionality of the text input.
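A sketch of training such a model with the gensim library (gensim 4.x API; the toy sentences and parameters are illustrative):

```python
from gensim.models import Word2Vec

sentences = [["hello", "how", "are", "you"],
             ["fine", "thanks", "and", "you"]]     # lists of lemmas per line

model = Word2Vec(sentences, vector_size=100, window=5, min_count=1, sg=1)  # skip-gram
print(model.wv.most_similar("you", topn=3))        # nearest lemmas in the embedding space
```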

We analyze different documents and create 2- and 3-dimensional plots to show the dimension reduction and the clustering of words. In practice three dimensions are too few for an effective embedding of text.
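A sketch of how the word vectors can be projected to two dimensions and plotted, assuming scikit-learn's PCA and matplotlib, with `model` being the trained Word2Vec model above:

```python
import matplotlib.pyplot as plt
from sklearn.decomposition import PCA

words = list(model.wv.index_to_key)              # vocabulary of the trained model
coords = PCA(n_components=2).fit_transform(model.wv[words])

plt.scatter(coords[:, 0], coords[:, 1])
for word, (x, y) in zip(words, coords):
    plt.annotate(word, (x, y))
plt.show()
```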

vec2d private conversations

A 3D representation shows a more complex structure.

vec3d private conversations