from fastai import * # Quick access to most common functionality
from fastai.text import * # Quick access to NLP functionality
An example of creating a language model and then transferring it to a classifier.
path = untar_data(URLs.IMDB_SAMPLE)
path
PosixPath('/home/ubuntu/.fastai/data/imdb_sample')
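A quick look at what untar_data downloaded (a minimal sketch using plain pathlib rather than any fastai helper, since helpers vary by release):
for f in sorted(path.iterdir()): print(f.name)  # expect train.csv and classes.txt, used below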
Open and view the independent and dependent variables:
df = pd.read_csv(path/'train.csv', header=None)
df.head()
|   | 0 | 1 |
|---|---|---|
| 0 | 0 | Un-bleeping-believable! Meg Ryan doesn't even ... |
| 1 | 1 | This is a extremely well-made film. The acting... |
| 2 | 0 | Every once in a long while a movie will come a... |
| 3 | 1 | Name just says it all. I watched this movie wi... |
| 4 | 0 | This movie succeeds at being one of the most u... |
classes = read_classes(path/'classes.txt')
classes[0], classes[1]
('negative', 'positive')
Create one DataBunch for the language model and one for the classifier:
data_lm = TextLMDataBunch.from_csv(path)
data_clas = TextClasDataBunch.from_csv(path, vocab=data_lm.train_ds.vocab)
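Since the classifier is built with the language model's vocab, both DataBunches share the same token-to-index mapping. We can peek at it like this (a sketch; the vocab's itos attribute is standard in fastai 1.x, but exact attribute paths can differ between releases):
vocab = data_lm.train_ds.vocab
len(vocab.itos)   # vocabulary size
vocab.itos[:10]   # most frequent tokens, plus special tokens such as xxunk and xxpad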
We'll fine-tune the language model. fast.ai has a pre-trained English model available that we can download; we just have to specify it like this:
learn = RNNLearner.language_model(data_lm, pretrained_model=URLs.WT103)
learn.unfreeze()
learn.fit(2, slice(1e-4,1e-2))
Total time: 00:44

| epoch | train loss | valid loss | accuracy | time |
|---|---|---|---|---|
| 1 | 4.921718 | 4.155864 | 0.245985 | 00:22 |
| 2 | 4.640784 | 4.090212 | 0.252899 | 00:22 |
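Depending on the fastai release, the language-model learner exposes a predict method that samples a continuation of a prompt; a minimal sketch, assuming that API is available (n_words is the number of tokens to generate):
learn.predict("This movie was", n_words=10)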
Save our language model's encoder:
learn.save_encoder('enc')
Fine-tune the encoder to create a classifier:
learn = RNNLearner.classifier(data_clas)
learn.load_encoder('enc')
learn.fit(3, 1e-3)
Total time: 00:46

| epoch | train loss | valid loss | accuracy | time |
|---|---|---|---|---|
| 1 | 0.686322 | 0.669499 | 0.635000 | 00:15 |
| 2 | 0.651778 | 0.617119 | 0.715000 | 00:15 |
| 3 | 0.654797 | 0.593843 | 0.695000 | 00:15 |
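Finally, we can try the classifier on a new review. A minimal sketch, assuming a fastai 1.x release where Learner.predict accepts raw text for a text classifier and returns the predicted class, its index, and the class probabilities (the example sentence is just an illustration):
pred_class, pred_idx, probs = learn.predict("I really enjoyed this film, the acting was superb.")
pred_class, probs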