The introduction of convolutional neural networks (CNNs) has led to dramatic improvements in computer vision tasks, including visual recognition [1,2], understanding image aesthetics [3,4], and extracting perceptions of urban neighbourhoods [5,6]. In this workshop we will not only introduce you to CNNs, but also teach you how to construct a CNN model to classify your own image dataset.
Data scientists might hesitate to use deep learning in their own research, as deep learning models tend to require vast quantities of training data to become useful. However, transfer learning is a technique that lets you train your own CNN models even with a limited amount of training data. With transfer learning, you take an existing CNN pretrained on a large dataset, such as ImageNet (http://www.image-net.org/), and fine-tune it for a related task. In our own past research, we have used the Places CNN trained on the Places2 dataset (a repository of 8 million scene photographs) to create new CNNs to predict the scenicness of an image and to classify images of urban environments for various design features.
While our approaches are specific to urban data analysis, transfer learning can be used to create CNNs for a variety of tasks that utilise image data. In this workshop we will use PyTorch to show you how to work with CNNs, including using an existing CNN to classify an image as well as using transfer learning to create your own CNN models.
Beginner-to-intermediate knowledge of Python; a basic understanding of what deep learning is.
- Donahue J, Jia Y, Vinyals O, Hoffman J, Zhang N, Tzeng E, Darrell T. 2014 DeCAF: a deep convolutional activation feature for generic visual recognition. In Int. Conf. in Machine Learning, Beijing, China, 21–16 June 2014, vol. 32, pp. 647–655.
- Sharif Razavian A, Azizpour H, Sullivan J, Carlsson S. 2014 CNN features off-the-shelf: an astounding baseline for recognition. In Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition Workshops, Columbus, OH, 23–28 June 2014, pp. 806–813.
- Tan Y, Tang P, Zhou Y, Luo W, Kang Y, Li G. 2017 Photograph aesthetical evaluation and classification with deep convolutional neural networks. Neurocomputing 228, 165–175. (doi:10.1016/j.neucom.2016.08.098)
- Lu X, Lin Z, Jin H, Yang J, Wang JZ. 2015 Rating image aesthetics using deep learning. IEEE Trans. Multimedia 17, 2021–2034. (doi:10.1109/TMM.2015.2477040)
- De Nadai M, Vieriu RL, Zen G, Dragicevic S, Naik N, Caraviello M, Hidalgo CA, Sebe N, Lepri B. 2016 Are safer looking neighborhoods more lively? A multimodal investigation into urban life. In Proc. of the 2016 ACM on Multimedia Conf., Amsterdam, The Netherlands, 15–19 October 2016, pp. 1127–1135.
- Dubey A, Naik N, Parikh D, Raskar R, Hidalgo CA. 2016 Deep learning the city: quantifying urban perception at a global scale. In European Conf. on Computer Vision, Amsterdam, The Netherlands, 8–16 October 2016, pp. 196–212.
- Zhou B, Khosla A, Lapedriza A, Torralba A, Oliva A. 2016 Places: an image database for deep scene understanding. (https://arxiv.org/abs/1610.02055).
- Seresinhe CI, Preis T, Moat HS. 2017 Using deep learning to quantify the beauty of outdoor places. R. Soc. Open Sci. 4, 170170. (doi:10.1098/rsos.170170)
- Law S, Shen Y, Seresinhe C. 2017 An application of convolutional neural network in street image classification: the case study of London. In Proc. of the 1st Workshop on Artificial Intelligence and Deep Learning for Geographic Knowledge Discovery, Redondo Beach, CA, 7–10 October 2017, pp. 5–9.
- Paszke A, Chintala S, Collobert R, Kavukcuoglu K, Farabet C, Bengio S, Melvin I, Weston J, Mariethoz J. 2017 PyTorch: tensors and dynamic neural networks in Python with strong GPU acceleration. http://pytorch.org/