NLP model creation and training¶
The main thing here is RNNLearner. There are also some utility functions to help create and update text models.
Quickly get a learner¶
The model used is given by arch and config. It can be:

- an AWD_LSTM (Merity et al.)
- a Transformer decoder (Vaswani et al.)
- a TransformerXL (Dai et al.)
They each have a default config for language modelling stored in {lower_case_class_name}_lm_config, which you can modify if you want to change the default parameters. At this stage, only the AWD LSTM and Transformer support pretrained=True, but we hope to add more pretrained models soon. drop_mult is applied to all the dropout weights of the config, and learn_kwargs are passed to the Learner initialization.
If your data is backward, the pretrained model downloaded will also be a backward one (only available for AWD_LSTM).
from fastai.text import *   # brings in untar_data, URLs, TextLMDataBunch, AWD_LSTM, ...

path = untar_data(URLs.IMDB_SAMPLE)
data = TextLMDataBunch.from_csv(path, 'texts.csv')
learn = language_model_learner(data, AWD_LSTM, drop_mult=0.5)
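For instance, here is a minimal sketch of tweaking the default AWD_LSTM config before creating the learner (this assumes awd_lstm_lm_config is exposed by fastai.text, as in fastai v1; the values chosen are only illustrative):

config = awd_lstm_lm_config.copy()   # start from the default LM config
config['emb_sz'] = 300               # e.g. a smaller embedding size
# changing the architecture means the pretrained weights no longer fit
learn = language_model_learner(data, AWD_LSTM, config=config, drop_mult=0.5, pretrained=False)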
Here again, the backbone of the model is determined by arch and config. The input texts are fed into that model in chunks of length bptt and only the last max_len activations are considered. This gives us the backbone of our model. The head then consists of:
- a layer that concatenates the final outputs of the RNN with the maximum and average of all the intermediate outputs (on the sequence length dimension),
- blocks of (nn.BatchNorm1d, nn.Dropout, nn.Linear, nn.ReLU) layers.
The blocks are defined by the lin_ftrs and drops arguments. Specifically, the first block has a number of inputs inferred from the backbone arch, the last one has a number of outputs equal to data.c (the number of classes of the data), and the intermediate blocks have their numbers of inputs/outputs determined by lin_ftrs (each block has a number of inputs equal to the number of outputs of the previous one). The dropouts all have the same value ps if you pass a float, or the corresponding values if you pass a list. The default is an intermediate hidden size of 50 (which makes two blocks model_activation -> 50 -> n_classes) with a dropout of 0.1.
path = untar_data(URLs.IMDB_SAMPLE)
data = TextClasDataBunch.from_csv(path, 'texts.csv')
learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5)
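For example, a sketch of customizing the head (assuming the lin_ftrs and ps arguments of text_classifier_learner, as in fastai v1): two intermediate blocks of sizes 100 and 50 with dropouts 0.2 and 0.1.

learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5,
                                lin_ftrs=[100, 50], ps=[0.2, 0.1])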
Handles the whole creation of a Learner from data and a model for text data, using a certain bptt. The split_func is used to properly split the model into different groups for gradual unfreezing and differential learning rates. Gradient clipping of clip is optionally applied. alpha and beta are passed to create an instance of RNNTrainer. This can be used for a language model or an RNN classifier. It also handles the conversion of weights from a pretrained model as well as saving or loading the encoder.
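As an illustration, a minimal sketch of the usual workflow that relies on this encoder handling (save_encoder/load_encoder as in fastai v1; data_lm and data_clas are assumed to be language-model and classification data bunches sharing the same vocabulary):

learn_lm = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.5)
learn_lm.fit_one_cycle(1, 1e-2)       # fine-tune the language model
learn_lm.save_encoder('ft_enc')       # save only the encoder weights

learn_clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn_clas.load_encoder('ft_enc')     # reuse the fine-tuned encoder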
If ordered=True, returns the predictions in the order of the dataset; otherwise they will be ordered by the sampler (from the longest text to the shortest). The other arguments are passed to Learner.get_preds.
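For instance, a sketch of getting predictions aligned with the dataset order on the validation set:

# ordered=True re-sorts the predictions to match the dataset order
preds, targets = learn.get_preds(ds_type=DatasetType.Valid, ordered=True)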
The darker the word-shading in the example below, the more it contributes to the classification. Results here are without any fitting. After fitting to an acceptable accuracy, this class can show you what is being used to produce the classification of a particular case.
import matplotlib.cm as cm
txt_ci = TextClassificationInterpretation.from_learner(learn)
test_text = "Zombiegeddon was perhaps the GREATEST movie i have ever seen!"
txt_ci.show_intrinsic_attention(test_text,cmap=cm.Purples)
You can also view the raw attention values with .intrinsic_attention(text).
txt_ci.intrinsic_attention(test_text)[1]
Create a tabulation showing the first k texts in top_losses along with their prediction, actual, loss, and probability of the actual class. max_len is the maximum number of tokens displayed. If max_len=None, it will display all tokens.
txt_ci.show_top_losses(5)
Loading and saving¶
Opens the weights in wgts_fname in self.model_dir and the dictionary in itos_fname, then adapts the pretrained weights to the vocabulary of the data. The two files should be in the models directory of learner.path.
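For example, a sketch with placeholder file names, following the description above (both files are assumed to sit in the learner's model directory):

# 'lstm_wgts.pth' and 'lstm_itos.pkl' are hypothetical names for the
# pretrained weights and vocabulary files
learn.load_pretrained('lstm_wgts.pth', 'lstm_itos.pkl')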
Utility functions¶
Uses the dictionary stoi_wgts (mapping of word to id) of the weights to map them to a new dictionary itos_new (mapping id to word).
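A minimal sketch of the intended use (file names are placeholders; the old vocabulary is loaded to build the word-to-id mapping, then the weights are adapted to the new vocabulary):

import pickle, torch
old_itos = pickle.load(open('pretrained_itos.pkl', 'rb'))    # old id -> word
old_stoi = {w: i for i, w in enumerate(old_itos)}            # old word -> id
wgts = torch.load('pretrained_wgts.pth', map_location='cpu')
new_wgts = convert_weights(wgts, old_stoi, data.vocab.itos)  # adapt to the new vocab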
Get predictions¶
If no_unk=True, the unknown token is never picked. Words are sampled randomly following the distribution of probabilities returned by the model. If min_p is not None, that value is the minimum probability for a word to be considered in the pool of candidates. Lowering temperature will make the texts less randomized.
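For example, a minimal sketch of text generation with a trained language-model learner:

# sample 50 words after the prompt, skipping the unknown token, with a
# lowered temperature and a minimum probability cutoff
learn.predict("I liked this movie because", n_words=50,
              no_unk=True, temperature=0.75, min_p=0.01)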
Basic functions to get a model¶
This model uses an encoder taken from arch, built with config. The encoder is fed the sequence in successive chunks of size bptt and we only keep the last max_seq outputs for the pooling layers.
The decoder uses a concatenation of the last outputs, a MaxPooling of all the outputs and an AveragePooling of all the outputs. It then uses a list of BatchNorm, Dropout, Linear, ReLU blocks (with no ReLU in the last one), using a first layer size of 3*emb_sz then following the numbers in n_layers. The dropout probabilities are read from drops.
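For instance, a sketch of building this classifier model directly (assuming the get_text_classifier signature of fastai v1), taking the vocabulary size and number of classes from an existing TextClasDataBunch:

model = get_text_classifier(AWD_LSTM, len(data.vocab.itos), data.c,
                            bptt=70, max_len=1400, drop_mult=0.5)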
Note that the model returns a list of three things: the actual output is the first, and the other two are the intermediate hidden states before and after dropout (used by the RNNTrainer). Most loss functions expect one output, so you should use a Callback to remove the other two if you're not using RNNTrainer.
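If you wire up such a model yourself without RNNTrainer, here is a minimal sketch of a callback that keeps only the actual output, reusing the model built above (callback API as in recent fastai v1; the class name is just illustrative):

class KeepFirstOutput(LearnerCallback):
    "Drop the intermediate hidden states so the loss only sees the real output."
    def on_loss_begin(self, last_output, **kwargs):
        return {'last_output': last_output[0]}

learn = Learner(data, model, callback_fns=[KeepFirstOutput])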