Texture Networks: Feed-forward Synthesis of Textures and Stylized Images

Dmitry Ulyanov, Vadim Lebedev, Andrea Vedaldi, Victor Lempitsky
Proceedings of The 33rd International Conference on Machine Learning, PMLR 48:1349-1357, 2016.

Abstract

Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their method requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
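To make "moving the computational burden to a learning stage" concrete, here is a minimal, illustrative PyTorch sketch (not the authors' released code) of the idea described above: a small feed-forward generator is trained once against a Gatys-style Gram-matrix loss computed on pretrained VGG-19 features, after which new texture samples cost only a single forward pass. The generator architecture, chosen VGG layer indices, hyper-parameters, and the single-scale noise input are assumptions for illustration; the paper's multi-scale generator and exact loss weighting differ.

import torch
import torch.nn as nn
import torchvision

def gram(feats):
    # Gram matrix of feature maps: (B, C, H, W) -> (B, C, C), normalized.
    b, c, h, w = feats.shape
    f = feats.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

class TextureGenerator(nn.Module):
    # Toy feed-forward generator: maps a noise image to an RGB texture sample.
    # Layer widths and normalization are illustrative, not the paper's design.
    def __init__(self, width=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, width, 3, padding=1), nn.InstanceNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.InstanceNorm2d(width), nn.ReLU(),
            nn.Conv2d(width, 3, 3, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        return self.net(z)

# Frozen pretrained VGG-19 used only as a loss network (ImageNet
# preprocessing omitted for brevity).
vgg = torchvision.models.vgg19(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)
style_layers = [1, 6, 11, 20]  # assumed relu1_1 .. relu4_1 indices

def vgg_features(x):
    feats, h = [], x
    for i, layer in enumerate(vgg):
        h = layer(h)
        if i in style_layers:
            feats.append(h)
    return feats

texture = torch.rand(1, 3, 256, 256)            # stand-in for the example texture
target_grams = [gram(f) for f in vgg_features(texture)]

gen = TextureGenerator()
opt = torch.optim.Adam(gen.parameters(), lr=1e-3)
for step in range(1000):                        # the slow part happens once, offline
    z = torch.rand(4, 3, 256, 256)              # fresh noise each iteration
    loss = sum(((gram(f) - g) ** 2).mean()
               for f, g in zip(vgg_features(gen(z)), target_grams))
    opt.zero_grad(); loss.backward(); opt.step()

sample = gen(torch.rand(1, 3, 512, 512))        # synthesis is now one forward pass

Because the generator is fully convolutional, feeding it a larger noise image at test time yields a correspondingly larger texture sample, which is how "arbitrary size" output is obtained without retraining.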

Cite this Paper


BibTeX
@InProceedings{pmlr-v48-ulyanov16,
  title = {Texture Networks: Feed-forward Synthesis of Textures and Stylized Images},
  author = {Ulyanov, Dmitry and Lebedev, Vadim and Vedaldi, Andrea and Lempitsky, Victor},
  booktitle = {Proceedings of The 33rd International Conference on Machine Learning},
  pages = {1349--1357},
  year = {2016},
  editor = {Balcan, Maria Florina and Weinberger, Kilian Q.},
  volume = {48},
  series = {Proceedings of Machine Learning Research},
  address = {New York, New York, USA},
  month = {20--22 Jun},
  publisher = {PMLR},
  pdf = {http://proceedings.mlr.press/v48/ulyanov16.pdf},
  url = {https://proceedings.mlr.press/v48/ulyanov16.html},
  abstract = {Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their method requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.}
}
Endnote
%0 Conference Paper %T Texture Networks: Feed-forward Synthesis of Textures and Stylized Images %A Dmitry Ulyanov %A Vadim Lebedev %A Andrea Vedaldi %A Victor Lempitsky %B Proceedings of The 33rd International Conference on Machine Learning %C Proceedings of Machine Learning Research %D 2016 %E Maria Florina Balcan %E Kilian Q. Weinberger %F pmlr-v48-ulyanov16 %I PMLR %P 1349--1357 %U https://proceedings.mlr.press/v48/ulyanov16.html %V 48 %X Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their method requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions.
RIS
TY - CPAPER TI - Texture Networks: Feed-forward Synthesis of Textures and Stylized Images AU - Dmitry Ulyanov AU - Vadim Lebedev AU - Andrea Vedaldi AU - Victor Lempitsky BT - Proceedings of The 33rd International Conference on Machine Learning DA - 2016/06/11 ED - Maria Florina Balcan ED - Kilian Q. Weinberger ID - pmlr-v48-ulyanov16 PB - PMLR DP - Proceedings of Machine Learning Research VL - 48 SP - 1349 EP - 1357 L1 - http://proceedings.mlr.press/v48/ulyanov16.pdf UR - https://proceedings.mlr.press/v48/ulyanov16.html AB - Gatys et al. recently demonstrated that deep networks can generate beautiful textures and stylized images from a single texture example. However, their method requires a slow and memory-consuming optimization process. We propose here an alternative approach that moves the computational burden to a learning stage. Given a single example of a texture, our approach trains compact feed-forward convolutional networks to generate multiple samples of the same texture of arbitrary size and to transfer artistic style from a given image to any other image. The resulting networks are remarkably light-weight and can generate textures of quality comparable to Gatys et al., but hundreds of times faster. More generally, our approach highlights the power and flexibility of generative feed-forward models trained with complex and expressive loss functions. ER -
APA
Ulyanov, D., Lebedev, V., Vedaldi, A., & Lempitsky, V. (2016). Texture Networks: Feed-forward Synthesis of Textures and Stylized Images. Proceedings of The 33rd International Conference on Machine Learning, in Proceedings of Machine Learning Research 48:1349-1357. Available from https://proceedings.mlr.press/v48/ulyanov16.html.