pystiche_papers.johnson_alahi_li_2016

| Title | Perceptual Losses for Real-Time Style Transfer and Super-Resolution |
|---|---|
| Authors | Justin Johnson, Alexandre Alahi, and Fei-Fei Li |
| Citation | [JAL2016] |
| Reference implementation | |
| Variant | Model optimization |
| Content loss | |
| Style loss | |
| Regularization | |
Behavioral changes

The following parts are affected:

- content_transform()
- decoder()
- GramLoss
- training()
- stylization()

Hyper parameters

- content_loss()
- style_loss()
- regularization()
- content_transform()
- style_transform()
- batch_sampler()
API

- pystiche_papers.johnson_alahi_li_2016.content_transform(impl_params=True, hyper_parameters=None)
Content image transformation from [JAL2016].
- Parameters
impl_params (bool) – Switch the behavior and hyper-parameters between the reference implementation of the original authors and what is described in the paper. For details see here. Additionally, if True, appends the preprocessor() as a last transformation step.
hyper_parameters (Optional[HyperParameters]) – If omitted, hyper_parameters() is used.
- Return type
- pystiche_papers.johnson_alahi_li_2016.style_transform(hyper_parameters=None)
Style image transformation from [JAL2016].
- Parameters
hyper_parameters (Optional[HyperParameters]) – If omitted, hyper_parameters() is used.
- Return type
- pystiche_papers.johnson_alahi_li_2016.images()
- Return type
- pystiche_papers.johnson_alahi_li_2016.dataset(root, impl_params=True, transform=None)
- Return type
ImageFolderDataset
- pystiche_papers.johnson_alahi_li_2016.batch_sampler(data_source, hyper_parameters=None)
Batch sampler from [JAL2016].
- Parameters
data_source (Sized) – Dataset to sample from.
hyper_parameters (Optional[HyperParameters]) – If omitted, hyper_parameters() is used.
- Return type
- pystiche_papers.johnson_alahi_li_2016.image_loader(dataset, hyper_parameters=None, pin_memory=True)
- Return type
- pystiche_papers.johnson_alahi_li_2016.content_loss(impl_params=True, multi_layer_encoder=None, hyper_parameters=None)
Content loss from [JAL2016].
- Parameters
impl_params (bool) – Switch the behavior and hyper-parameters between the reference implementation of the original authors and what is described in the paper. For details see here.
multi_layer_encoder (Optional[MultiLayerEncoder]) – Pretrained MultiLayerEncoder. If omitted, the default multi_layer_encoder() is used.
hyper_parameters (Optional[HyperParameters]) – If omitted, hyper_parameters() is used.
- Return type
- class pystiche_papers.johnson_alahi_li_2016.GramLoss(encoder, impl_params=True, **gram_op_kwargs)
Gram loss from [JAL2016].
- Parameters
encoder (Encoder) – Encoder used to encode the input.
impl_params (bool) – If True, normalize the Gram matrix additionally by the number of channels.
**gram_op_kwargs – Additional parameters of a pystiche.loss.GramLoss.
See also
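The extra channel normalization can be sketched in plain Python. The 2x4 (channels x pixels) feature map below is made up for illustration and is not taken from the paper:

```python
def gram_matrix(features, extra_channel_norm):
    # features: list of C rows, each with N values (a C x N feature map)
    num_channels = len(features)
    num_pixels = len(features[0])
    # G[i][j] = sum_k features[i][k] * features[j][k]
    gram = [
        [sum(a * b for a, b in zip(row_i, row_j)) for row_j in features]
        for row_i in features
    ]
    # Both variants normalize by the number of pixels; with
    # extra_channel_norm=True (mirroring impl_params=True) the matrix
    # is additionally divided by the number of channels.
    norm = num_pixels * (num_channels if extra_channel_norm else 1)
    return [[value / norm for value in row] for row in gram]

features = [[1.0, 2.0, 3.0, 4.0], [0.5, 1.0, 1.5, 2.0]]
paper = gram_matrix(features, extra_channel_norm=False)
impl = gram_matrix(features, extra_channel_norm=True)
# With two channels, every entry of `impl` is half the corresponding
# entry of `paper`.
```
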
- pystiche_papers.johnson_alahi_li_2016.style_loss(impl_params=True, multi_layer_encoder=None, hyper_parameters=None)
Style loss from [JAL2016].
- Parameters
impl_params (bool) – Switch the behavior and hyper-parameters between the reference implementation of the original authors and what is described in the paper. For details see here.
multi_layer_encoder (Optional[MultiLayerEncoder]) – Pretrained MultiLayerEncoder. If omitted, the default multi_layer_encoder() is used.
hyper_parameters (Optional[HyperParameters]) – If omitted, hyper_parameters() is used.
- Return type
- class pystiche_papers.johnson_alahi_li_2016.TotalVariationLoss(**total_variation_op_kwargs)
Total variation loss from [LW2016].
- Parameters
**total_variation_op_kwargs – Additional parameters of a pystiche.loss.TotalVariationLoss.
In contrast to pystiche.loss.TotalVariationLoss, the score is calculated with the squared error (SE) instead of the mean squared error (MSE).
See also
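The SE vs. MSE distinction can be sketched on a tiny grayscale "image" in plain Python; the 2x3 pixel grid below is made up for illustration:

```python
def tv_terms(image):
    # Squared differences between horizontally and vertically
    # neighboring pixels.
    terms = []
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            if c + 1 < cols:
                terms.append((image[r][c + 1] - image[r][c]) ** 2)
            if r + 1 < rows:
                terms.append((image[r + 1][c] - image[r][c]) ** 2)
    return terms

image = [[0.0, 1.0, 1.0], [2.0, 3.0, 5.0]]
terms = tv_terms(image)
se = sum(terms)                # plain sum of squared differences (SE)
mse = sum(terms) / len(terms)  # averaged over the number of terms (MSE)
```

The two scores differ only by a constant factor (the number of neighbor pairs), which effectively rescales the regularization weight.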
- pystiche_papers.johnson_alahi_li_2016.regularization(hyper_parameters=None)
Regularization from [JAL2016].
- Parameters
hyper_parameters (Optional[HyperParameters]) – If omitted, hyper_parameters() is used.
- Return type
- pystiche_papers.johnson_alahi_li_2016.perceptual_loss(impl_params=True, multi_layer_encoder=None, hyper_parameters=None)
Perceptual loss comprising content and style loss as well as a regularization.
- Parameters
impl_params (bool) – Switch the behavior and hyper-parameters between the reference implementation of the original authors and what is described in the paper. For details see here.
multi_layer_encoder (Optional[MultiLayerEncoder]) – Pretrained MultiLayerEncoder. If omitted, the default multi_layer_encoder() is used.
hyper_parameters (Optional[HyperParameters]) – If omitted, hyper_parameters() is used.
- Return type
- pystiche_papers.johnson_alahi_li_2016.encoder(instance_norm=True)
Encoder part of the Transformer from [JAL2016].
- Parameters
instance_norm (bool) – If True, use InstanceNorm2d rather than BatchNorm2d as described in the paper. In addition, the number of channels of the convolution layers is reduced by half.
- Return type
SequentialModule
- pystiche_papers.johnson_alahi_li_2016.decoder(impl_params=True, instance_norm=True)
Decoder part of the Transformer from [JAL2016].
- Parameters
impl_params (bool) – If True, the output of the transformer is not externally pre-processed before being fed into the perceptual_loss(). Since this step is necessary to get meaningful encodings from the multi_layer_encoder(), the pre-processing transform has to be learned within the output layer of the decoder. To make this possible, 150 * tanh(input) is used as the activation in contrast to the (tanh(input) + 1) / 2 given in the paper.
instance_norm (bool) – If True, use InstanceNorm2d rather than BatchNorm2d as described in the paper. In addition, the number of channels of the convolution layers is reduced by half.
- Return type
SequentialModule
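The difference between the two output activations is easy to see numerically: the paper's activation confines the output to (0, 1), while the reference implementation's variant spans roughly (-150, 150), leaving room for the decoder to learn the pre-processing itself:

```python
import math

def paper_activation(x):
    # Activation given in the paper: maps any input into (0, 1).
    return (math.tanh(x) + 1) / 2

def impl_activation(x):
    # Activation of the reference implementation: maps any input
    # into (-150, 150).
    return 150 * math.tanh(x)

samples = [-5.0, -1.0, 0.0, 1.0, 5.0]
paper_out = [paper_activation(x) for x in samples]
impl_out = [impl_activation(x) for x in samples]
```
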
- class pystiche_papers.johnson_alahi_li_2016.Transformer(impl_params=True, instance_norm=True, init_weights=True)
- pystiche_papers.johnson_alahi_li_2016.transformer(style=None, framework='pystiche', impl_params=True, instance_norm=True)
Pretrained transformer from [JAL2016].
- Parameters
style (Optional[str]) – Style the transformer was trained on. Can be one of the styles given by images(). If omitted, the transformer is initialized with random weights according to the procedure used by the original authors.
framework (str) – Framework that was used to train the transformer. Can be one of "pystiche" (default) and "luatorch".
impl_params (bool) – If True, use the parameters used in the reference implementation of the original authors rather than what is described in the paper.
instance_norm (bool) – If True, use InstanceNorm2d rather than BatchNorm2d as described in the paper.
For framework == "pystiche" all combinations of parameters are available.
The weights for framework == "luatorch" were ported from the reference implementation (impl_params is True) of the original authors. See https://download.pystiche.org/models/LICENSE for licensing details. The following combinations of parameters are available:

| style | instance_norm=True | instance_norm=False |
|---|---|---|
| "candy" | x | |
| "composition_vii" | | x |
| "feathers" | x | |
| "la_muse" | x | x |
| "mosaic" | x | |
| "starry_night" | | x |
| "the_scream" | x | |
| "the_wave" | | x |
| "udnie" | x | |
- Return type
- pystiche_papers.johnson_alahi_li_2016.training(content_image_loader, style_image, impl_params=True, instance_norm=None, hyper_parameters=None, quiet=False)
Training a transformer for the NST.
- Parameters
content_image_loader (DataLoader) – Content images used as input for the transformer.
style_image (Union[str, Tensor]) – Style image on which the transformer should be trained. If str, the image is read from images().
impl_params (bool) – If True, uses the parameters used in the reference implementation of the original authors rather than what is described in the paper. For details see below.
instance_norm (Optional[bool]) – If True, use InstanceNorm2d rather than BatchNorm2d as described in the paper. If omitted, defaults to impl_params.
hyper_parameters (Optional[HyperParameters]) – If omitted, hyper_parameters() is used.
quiet (bool) – If True, no information is logged during the optimization. Defaults to False.
If impl_params is True, an external preprocessing of the images is used.
- Return type
- pystiche_papers.johnson_alahi_li_2016.stylization(input_image, transformer, impl_params=True, instance_norm=None, framework='pystiche')
Transforms an input image into a stylised version using the transformer.
- Parameters
input_image (Tensor) – Image to be stylised.
transformer (Union[Module, str]) – Pretrained transformer for style transfer or the style to load a pretrained transformer with transformer().
impl_params (bool) – If True, uses the parameters used in the reference implementation of the original authors rather than what is described in the paper. For details see below.
instance_norm (Optional[bool]) – If True, use InstanceNorm2d rather than BatchNorm2d as described in the paper. If omitted, defaults to impl_params.
framework (str) – Framework that was used to train the transformer. Can be one of "pystiche" (default) and "luatorch". This only has an effect if a pretrained transformer is loaded.
- Return type
- pystiche_papers.johnson_alahi_li_2016.hyper_parameters()
Hyper parameters from [JAL2016].
- Return type
- pystiche_papers.johnson_alahi_li_2016.preprocessor(impl_params=True)
Preprocessor from [JAL2016].
- Parameters
impl_params (bool) – If True, the input is preprocessed for models trained with the Caffe framework. If False, the preprocessor performs the identity operation.
See also
pystiche.enc.CaffePreprocessing
- Return type
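Caffe-style preprocessing is commonly implemented as: scale RGB values from [0, 1] to [0, 255], subtract per-channel ImageNet means, and reorder the channels to BGR. The sketch below illustrates this on a single pixel; the exact mean values are an assumption and are not taken from this documentation:

```python
# Assumed per-channel ImageNet means, as commonly used with
# Caffe-trained models (RGB order).
IMAGENET_MEAN_RGB = (123.675, 116.28, 103.53)

def caffe_preprocess(pixel_rgb):
    # pixel_rgb: a single (r, g, b) pixel with values in [0, 1]
    scaled = [channel * 255.0 for channel in pixel_rgb]
    centered = [value - mean for value, mean in zip(scaled, IMAGENET_MEAN_RGB)]
    return centered[::-1]  # RGB -> BGR

pixel = (0.5, 0.5, 0.5)
processed = caffe_preprocess(pixel)
```
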
- pystiche_papers.johnson_alahi_li_2016.postprocessor(impl_params=True)
Postprocessor from [JAL2016].
- Parameters
impl_params (bool) – If True, the input is postprocessed from models trained with the Caffe framework. If False, the postprocessor performs the identity operation.
See also
pystiche.enc.CaffePostprocessing
- Return type