from fastai.gen_doc.nbdoc import *
from fastai.train import *
from fastai.vision import *
These methods are automatically added to all Learner objects created after importing this module. They provide convenient access to a number of callbacks, without requiring them to be manually created.
show_doc(fit_one_cycle)
fit_one_cycle[source]
fit_one_cycle(learn:Learner, cyc_len:int, max_lr:Union[float, Collection[float], slice]=*slice(None, 0.003, None)*, moms:Point=*(0.95, 0.85)*, div_factor:float=*25.0*, pct_start:float=*0.3*, wd:float=*None*, callbacks:Optional[Collection[Callback]]=*None*, tot_epochs:int=*None*, start_epoch:int=*1*)
Fit a model following the 1cycle policy.
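For example, assuming learn is a Learner like the MNIST one built further down, a one-cycle run over three epochs with an illustrative peak learning rate (the value is just an example, not a recommendation) looks like this:
learn.fit_one_cycle(3, max_lr=1e-3)  # 3 epochs; the LR is warmed up to 1e-3 then annealed by the 1cycle schedule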
show_doc(one_cycle_scheduler)
one_cycle_scheduler[source]
one_cycle_scheduler(lr_max:float, **kwargs:Any) → OneCycleScheduler
Instantiate a OneCycleScheduler with lr_max.
See OneCycleScheduler for details.
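As a rough, hedged sketch (assuming the returned factory is called with the Learner to build the callback, like other callback_fns), you could attach the schedule to a plain fit call:
sched = one_cycle_scheduler(1e-2)(learn)   # assumption: calling the factory with learn builds the OneCycleScheduler
learn.fit(1, callbacks=[sched])            # one epoch driven by the 1cycle schedule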
show_doc(lr_find)
See LRFinder for details.
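A typical usage pattern, assuming learn already exists, is to run the finder and then inspect the recorded losses against learning rate:
learn.lr_find()        # run the learning rate range test
learn.recorder.plot()  # plot loss vs. learning rate to help pick a good max_lr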
show_doc(to_fp16)
See MixedPrecision for details.
show_doc(to_fp32)
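For instance, assuming learn exists and a GPU with half-precision support is available, you can switch the learner to mixed precision and back:
learn = learn.to_fp16()  # train in mixed precision
learn = learn.to_fp32()  # convert the model back to full precision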
show_doc(mixup)
mixup[source]
mixup(learn:Learner, alpha:float=*0.4*, stack_x:bool=*False*, stack_y:bool=*True*) → Learner
Add mixup https://arxiv.org/abs/1710.09412 to learn.
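As a quick illustration (using the default alpha and the MNIST data built below), mixup returns the Learner, so it can be chained:
learn = create_cnn(data, models.resnet18, metrics=accuracy).mixup(alpha=0.4)  # wrap training with the MixUp callback
learn.fit_one_cycle(1)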
See MixUpCallback for more details.
show_doc(ClassificationInterpretation)
class ClassificationInterpretation[source]
ClassificationInterpretation(data:DataBunch, probs:Tensor, y_true:Tensor, losses:Tensor, ds_type:DatasetType=*<DatasetType.Valid: 2>*)
Interpretation methods for classification models.
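A short sketch of how it is typically built from a trained learner (this assumes learn has already been fitted on the MNIST sample below):
interp = ClassificationInterpretation.from_learner(learn)  # gather predictions and losses on the validation set
interp.plot_confusion_matrix()
interp.plot_top_losses(9)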
We'll show examples below using our MNIST sample. As usual, the on_something methods are called directly by the fastai library; you don't need to call them yourself.
path = untar_data(URLs.MNIST_SAMPLE)
data = ImageDataBunch.from_folder(path)
show_doc(ShowGraph, title_level=3)
class ShowGraph[source]
ShowGraph(learn) ::LearnerCallback
Update a graph of learner stats and metrics after each epoch.
learn = create_cnn(data, models.resnet18, metrics=accuracy, callback_fns=ShowGraph)
learn.fit(3)

show_doc(ShowGraph.on_epoch_end)
on_epoch_end[source]
on_epoch_end(n_epochs:int, last_metrics:MetricsList, **kwargs) → bool
If we have last_metrics, plot them in our pbar graph.
show_doc(GradientClipping)
class GradientClipping[source]
GradientClipping(learn:Learner, clip:float=*0.0*) ::LearnerCallback
Gradient clipping during training.
learn = create_cnn(data, models.resnet18, metrics=accuracy,
                   callback_fns=partial(GradientClipping, clip=0.1))
learn.fit(1)
| epoch | train_loss | valid_loss | accuracy |
|---|---|---|---|
| 1 | 0.131133 | 0.078190 | 0.973013 |
show_doc(GradientClipping.on_backward_end)
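As a rough sketch of what clipping at the end of the backward pass amounts to (not the library's source), the gradients can be rescaled with PyTorch's clip_grad_norm_ just before the optimizer step:
# Hedged sketch: rescale gradients so their global norm is at most clip.
import torch.nn as nn
def clip_after_backward(model, clip=0.1):
    if clip:
        nn.utils.clip_grad_norm_(model.parameters(), clip)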
show_doc(BnFreeze)
class BnFreeze[source]
BnFreeze(learn) ::LearnerCallback
Freeze moving average statistics in all non-trainable batchnorm layers.
For batchnorm layers where requires_grad==False, you generally don't want to update their moving average statistics, so that the model's statistics don't drift out of sync with its pre-trained weights. Adding this callback automates the freezing of those statistics (internally, it calls eval on these layers).
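A minimal sketch of the idea (not the library's implementation): walk the model and put every non-trainable batchnorm layer in eval mode so its running statistics stop updating.
# Hedged sketch of freezing batchnorm statistics for non-trainable layers.
import torch.nn as nn
bn_types = (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)
def set_bn_eval(model):
    for module in model.modules():
        if isinstance(module, bn_types):
            params = list(module.parameters())
            if params and not params[0].requires_grad:
                module.eval()  # stop updating running mean/var for this frozen layer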
learn = create_cnn(data, models.resnet18, metrics=accuracy, callback_fns=BnFreeze)
learn.fit(1)
| epoch | train_loss | valid_loss | accuracy |
|---|---|---|---|
| 1 | 0.132564 | 0.078910 | 0.972031 |
show_doc(BnFreeze.on_epoch_begin)
on_epoch_begin[source]
on_epoch_begin(**kwargs:Any)
Put bn layers in eval mode just after model.train().