from fastai.gen_doc.nbdoc import *
from fastai.text import *
The main thing here is RNNLearner. There are also some utility functions to help create and update text models.
show_doc(language_model_learner)
language_model_learner[source][test]
language_model_learner(data:DataBunch, arch, config:dict=None, drop_mult:float=1.0, pretrained:bool=True, pretrained_fnames:OptStrTuple=None, **learn_kwargs) → LanguageLearner
Create a Learner with a language model from data and arch.
The model used is given by arch and config. It can be:
AWD_LSTM (Merity et al.)
Transformer decoder (Vaswani et al.)
TransformerXL (Dai et al.)
Each of these has a default config for language modeling stored in {lower_case_class_name}_lm_config, which you can copy and modify to change the default parameters. At this stage, only the AWD LSTM supports pretrained=True, but we hope to add more pretrained models soon. drop_mult is applied to all the dropout weights of the config, and learn_kwargs are passed to the Learner initialization.
jekyll_note("Using QRNN (change the flag in the config of the AWD LSTM) requires to have cuda installed (same version as pytorch is using).")
path = untar_data(URLs.IMDB_SAMPLE)
data = TextLMDataBunch.from_csv(path, 'texts.csv')
learn = language_model_learner(data, AWD_LSTM, drop_mult=0.5)
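To change the defaults, copy the corresponding config dict and edit it before passing it in. A minimal sketch (the n_hid value is purely illustrative; changing sizes means the pretrained weights no longer fit, hence pretrained=False):
config = awd_lstm_lm_config.copy()
config['n_hid'] = 512  # hypothetical hidden size, for illustration only
learn = language_model_learner(data, AWD_LSTM, config=config, pretrained=False, drop_mult=0.5)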
show_doc(text_classifier_learner)
text_classifier_learner[source][test]
text_classifier_learner(data:DataBunch, arch:Callable, bptt:int=70, max_len:int=1400, config:dict=None, pretrained:bool=True, drop_mult:float=1.0, lin_ftrs:Collection[int]=None, ps:Collection[float]=None, **learn_kwargs) → TextClassifierLearner
Create a Learner with a text classifier from data and arch.
Here again, the backbone of the model is determined by arch and config. The input texts are fed into that model in chunks of length bptt, and only the last max_len activations are considered. This gives us the backbone of our model. The head then consists of:
a layer that concatenates the final outputs of the backbone with the maximum and the average of all the intermediate outputs (in the sequence-length dimension),
blocks of (nn.BatchNorm1d, nn.Dropout, nn.Linear, nn.ReLU) layers.
The blocks are defined by the lin_ftrs and ps arguments. Specifically, the first block has a number of inputs inferred from the backbone arch, the last one has a number of outputs equal to data.c (which contains the number of classes of the data), and the intermediate blocks have a number of inputs/outputs determined by lin_ftrs (of course, a block has a number of inputs equal to the number of outputs of the previous block). The dropouts all take the same value ps if you pass a float, or the corresponding values if you pass a list. The default is an intermediate hidden size of 50 (which makes two blocks model_activation -> 50 -> n_classes) with a dropout of 0.1.
path = untar_data(URLs.IMDB_SAMPLE)
data = TextClasDataBunch.from_csv(path, 'texts.csv')
learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5)
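For instance, a sketch of a bigger head with two intermediate blocks and per-block dropouts (the sizes and probabilities here are purely illustrative):
learn = text_classifier_learner(data, AWD_LSTM, drop_mult=0.5, lin_ftrs=[256, 64], ps=[0.2, 0.1])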
show_doc(RNNLearner)
class RNNLearner[source][test]
RNNLearner(data:DataBunch, model:Module, split_func:OptSplitFunc=None, clip:float=None, alpha:float=2.0, beta:float=1.0, metrics=None, **learn_kwargs) :: Learner
Basic class for a Learner in NLP.
Handles the creation of a Learner from data and a model for text data using a certain bptt. The split_func is used to properly split the model into different groups for gradual unfreezing and differential learning rates. Gradient clipping of clip is optionally applied. alpha and beta are passed to create an instance of RNNTrainer. Can be used for a language model or an RNN classifier. It also handles the conversion of weights from a pretrained model as well as saving or loading the encoder.
show_doc(RNNLearner.get_preds)
get_preds[source][test]
get_preds(ds_type:DatasetType=<DatasetType.Valid: 2>, with_loss:bool=False, n_batch:Optional[int]=None, pbar:Union[MasterBar,ProgressBar,NoneType]=None, ordered:bool=False) → List[Tensor]
Return predictions and targets on the valid, train, or test set, depending on ds_type.
If ordered=True, returns the predictions in the order of the dataset; otherwise they are ordered by the sampler (from the longest text to the shortest). The other arguments are passed to Learner.get_preds.
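For example, to get validation predictions aligned with the dataset order:
preds, targets = learn.get_preds(ds_type=DatasetType.Valid, ordered=True)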
show_doc(RNNLearner.load_encoder)
load_encoder[source][test]
load_encoder(name:str)
Load the encoder name from the model directory.
show_doc(RNNLearner.save_encoder)
save_encoder[source][test]
save_encoder(name:str)
Save the encoder to name inside the model directory.
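A typical ULMFiT-style sketch: save the fine-tuned language-model encoder, then load it into a classifier built on data with the same vocabulary (data_clas and the name 'ft_enc' are assumptions for illustration):
learn.save_encoder('ft_enc')                                       # from the language-model learner
learn_clas = text_classifier_learner(data_clas, AWD_LSTM, drop_mult=0.5)
learn_clas.load_encoder('ft_enc')                                  # reuse the fine-tuned encoder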
show_doc(RNNLearner.load_pretrained)
load_pretrained[source][test]
load_pretrained(wgts_fname:str, itos_fname:str, strict:bool=True)
Load a pretrained model and adapt it to the data vocabulary.
Opens the weights in wgts_fname and the dictionary in itos_fname, then adapts the pretrained weights to the vocabulary of the data. The two files should be in the models directory of learner.path.
show_doc(convert_weights)
convert_weights[source][test]
convert_weights(wgts:Weights, stoi_wgts:Dict[str,int], itos_new:StrList) → Weights
Convert the model wgts to go with a new vocabulary.
Uses the dictionary stoi_wgts (mapping word to id) of the weights to map them to a new dictionary itos_new (mapping id to word).
show_doc(LanguageLearner, title_level=3)
class LanguageLearner[source][test]
LanguageLearner(data:DataBunch, model:Module, split_func:OptSplitFunc=None, clip:float=None, alpha:float=2.0, beta:float=1.0, metrics=None, **learn_kwargs) :: RNNLearner
Subclass of RNNLearner for predictions.
show_doc(LanguageLearner.predict)
predict[source][test]
predict(text:str, n_words:int=1, no_unk:bool=True, temperature:float=1.0, min_p:float=None, sep:str=' ', decoder='decode_spec_tokens')
Return the n_words that come after text.
If no_unk=True, the unknown token is never picked. Words are sampled randomly from the probability distribution returned by the model. If min_p is not None, it is the minimum probability for a word to be considered in the pool of candidates. Lowering temperature makes the texts less randomized.
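For example (the prompt and sampling values are arbitrary):
learn.predict("This movie is", n_words=10, temperature=0.8)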
show_doc(LanguageLearner.beam_search)
beam_search[source][test]
beam_search(text:str, n_words:int, no_unk:bool=True, top_k:int=10, beam_sz:int=1000, temperature:float=1.0, sep:str=' ', decoder='decode_spec_tokens')
Return the n_words that come after text using beam search.
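For example (with a smaller beam_sz than the default, to keep memory use down):
learn.beam_search("This movie is", n_words=10, top_k=10, beam_sz=100)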
show_doc(get_language_model)
get_language_model[source][test]
get_language_model(arch:Callable, vocab_sz:int, config:dict=None, drop_mult:float=1.0)
Create a language model from arch and its config, maybe pretrained.
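For example, to build a raw (untrained) AWD_LSTM language model over the vocabulary of data:
model = get_language_model(AWD_LSTM, vocab_sz=len(data.vocab.itos))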
show_doc(get_text_classifier)
get_text_classifier[source][test]
get_text_classifier(arch:Callable, vocab_sz:int, n_class:int, bptt:int=70, max_len:int=1400, config:dict=None, drop_mult:float=1.0, lin_ftrs:Collection[int]=None, ps:Collection[float]=None, pad_idx:int=1) → Module
Create a text classifier from arch and its config, maybe pretrained.
This model uses an encoder taken from arch with config. The encoder is fed the sequence in successive chunks of size bptt, and we only keep the last max_len outputs for the pooling layers.
The decoder uses a concatenation of the last outputs, a max pooling of all the outputs and an average pooling of all the outputs. It then uses a list of BatchNorm, Dropout, Linear, ReLU blocks (with no ReLU in the last one), using a first layer size of 3*emb_sz then following the numbers in lin_ftrs. The dropout probabilities are read from ps.
Note that the model returns a list of three things: the actual output comes first, and the other two are the intermediate hidden states before and after dropout (used by the RNNTrainer). Most loss functions expect one output, so you should use a Callback to remove the other two if you're not using RNNTrainer.
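A sketch of building the classifier directly (remember the raw model returns a list of three tensors, so the actual predictions are the first element):
model = get_text_classifier(AWD_LSTM, vocab_sz=len(data.vocab.itos), n_class=data.c)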
show_doc(MultiBatchEncoder.forward)
forward[source][test]
forward(input:LongTensor) → Tuple[Tensor,Tensor]
Defines the computation performed at every call. Should be overridden by all subclasses.
Note: although the recipe for the forward pass needs to be defined within this function, one should call the Module instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
show_doc(LanguageLearner.show_results)
show_results[source][test]
show_results(ds_type=<DatasetType.Valid: 2>, rows:int=5, max_len:int=20)
Show rows examples of predictions on the ds_type dataset.
show_doc(MultiBatchEncoder.concat)
concat[source][test]
concat(arrs:Collection[Tensor]) → Tensor
Concatenate the arrs along the batch dimension.
show_doc(MultiBatchEncoder)
class MultiBatchEncoder[source][test]
MultiBatchEncoder(bptt:int, max_len:int, module:Module, pad_idx:int=1) :: Module
Create an encoder over module that can process a full sentence.
show_doc(decode_spec_tokens)
decode_spec_tokens[source][test]
decode_spec_tokens(tokens)
Decode the special tokens (like xxmaj or xxup) in tokens, reversing the rules applied during tokenization.
show_doc(MultiBatchEncoder.reset)
reset[source][test]
reset()
Reset the hidden state of the underlying module.