E-Book, Englisch, 746 Seiten
Ayyadevara / Reddy Modern Computer Vision with PyTorch
2. Auflage 2024
ISBN: 978-1-80324-093-0
Verlag: De Gruyter
Format: PDF
Kopierschutz: 1 - PDF Watermark
A practical roadmap from deep learning fundamentals to advanced applications and Generative AI
Whether you are a beginner or are looking to progress in your computer vision career, this book guides you through the fundamentals of neural networks (NNs) and PyTorch and how to implement state-of-the-art architectures for real-world tasks.
The second edition of Modern Computer Vision with PyTorch is fully updated to explain and provide practical examples of the latest multimodal models, CLIP, and Stable Diffusion.
You'll discover best practices for working with images, tweaking hyperparameters, and moving models into production. As you progress, you'll implement various use cases for facial keypoint recognition, multi-object detection, segmentation, and human pose detection. This book provides a solid foundation in image generation as you explore different GAN architectures. You'll leverage transformer-based architectures like ViT, TrOCR, BLIP2, and LayoutLM to perform various real-world tasks and build a diffusion model from scratch. Additionally, you'll utilize foundation models' capabilities to perform zero-shot object detection and image segmentation. Finally, you'll learn best practices for deploying a model to production.
By the end of this deep learning book, you'll confidently leverage modern NN architectures to solve real-world computer vision problems.
Table of Contents
- Artificial Neural Network Fundamentals
- PyTorch Fundamentals
- Building a Deep Neural Network with PyTorch
- Introducing Convolutional Neural Networks
- Transfer Learning for Image Classification
- Practical Aspects of Image Classification
- Basics of Object Detection
- Advanced Object Detection
- Image Segmentation
- Applications of Object Detection and Segmentation
- Autoencoders and Image Manipulation
- Image Generation Using GANs
- Advanced GANs to Manipulate Images
- Combining Computer Vision and Reinforcement Learning
- Combining Computer Vision and NLP Techniques
- Foundation Models in Computer Vision
- Applications of Stable Diffusion
- Moving a Model to Production
- Appendix
1 Artificial Neural Network Fundamentals
An Artificial Neural Network (ANN) is a supervised learning algorithm that is loosely inspired by the way the human brain functions. Just as neurons in the brain are connected and activated, a neural network takes an input and passes it through a function, causing certain subsequent neurons to activate and thereby producing the output.
There are several standard ANN architectures. The universal approximation theorem states that we can always find a large enough neural network architecture with a set of weights that approximates the desired output for any given input to arbitrary accuracy. This means that for a given dataset/task, we can create an architecture and keep adjusting its weights until the ANN predicts what we want it to predict. Adjusting the weights until the ANN learns a given task is called training the neural network. The ability to train customized architectures on large datasets is how ANNs have gained prominence in solving a wide variety of tasks.
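The idea of training — nudging weights until predictions match targets — can be sketched in a few lines of plain Python (no PyTorch yet; the data and learning rate below are made up purely for illustration):

```python
# Training in miniature: adjust a single weight w so that the
# prediction w * x matches the target y for every (x, y) pair.
def train_single_weight(xs, ys, lr=0.01, epochs=100):
    w = 0.0  # start from an arbitrary weight
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            pred = w * x
            grad = 2 * (pred - y) * x  # derivative of squared error w.r.t. w
            w -= lr * grad             # nudge w to reduce the error
    return w

# The data follows y = 2x, so the learned weight should approach 2.0
w = train_single_weight([1, 2, 3, 4], [2, 4, 6, 8])
```

Real networks have millions of weights, but the loop is conceptually the same: predict, measure the error, and adjust each weight in the direction that reduces it — which is what the feedforward and backpropagation sections of this chapter build up to.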
One of the prominent tasks in computer vision is recognizing the class of the object present in an image. ImageNet (https://www.image-net.org/challenges/LSVRC/index.php) hosted an annual competition to identify the class of objects present in an image. The reduction in classification error rate over the years is as follows:
Figure 1.1: Classification error rate in ImageNet competition (source: https://www.researchgate.net/publication/331789962_Basics_of_Supervised_Deep_Learning)
The year 2012 was when a neural network (AlexNet) won the ImageNet competition. As you can see from the preceding chart, there was a considerable reduction in error from 2011 to 2012, achieved by leveraging neural networks. Since then, with deeper and more complex neural networks, the classification error kept falling and has since surpassed human-level performance.
Not only have neural networks reached human-level performance in image classification (and related tasks like object detection and segmentation), but they have also enabled a completely new set of use cases. Generative AI (GenAI) leverages neural networks to generate content in multiple ways:
- Generating images from input text
- Generating novel custom images from input images and text
- Leveraging content from multiple input modalities (image, text, and audio) to generate new content
- Generating video from text/image input
This gives a solid motivation for us to learn and implement neural networks for our custom tasks, where applicable.
In this chapter, we will create a very simple architecture on a simple dataset and mainly focus on how the various building blocks (feedforward, backpropagation, and learning rate) of an ANN help in adjusting the weights so that the network learns to predict the expected outputs from given inputs. We will first learn, mathematically, what a neural network is, and then build one from scratch to have a solid foundation. Then we will learn about each component responsible for training the neural network and code them as well. Overall, we will cover the following topics:
- Comparing AI and traditional machine learning
- Learning about the ANN building blocks
- Implementing feedforward propagation
- Implementing backpropagation
- Putting feedforward propagation and backpropagation together
- Understanding the impact of the learning rate
- Summarizing the training process of a neural network
All code snippets within this chapter are available in the GitHub repository at https://bit.ly/mcvp-2e.
We strongly recommend you execute code using the Open in Colab button within each notebook.
Comparing AI and traditional machine learning
Traditionally, systems were made intelligent by using sophisticated algorithms written by programmers. For example, say you are interested in recognizing whether a photo contains a dog or not. In the traditional Machine Learning (ML) setting, an ML practitioner or a subject matter expert first identifies the features that need to be extracted from images. Then they extract those features and pass them through a well-written algorithm that deciphers the given features to tell whether the image is of a dog or not. The following diagram illustrates this idea:
Figure 1.2: Traditional Machine Learning workflow for classification
Take the following samples:
Figure 1.3: Sample images to generate rules
From the preceding images, a simple rule might be that if an image contains three black circles aligned in a triangular shape, it can be classified as a dog. However, this rule would fail against this deceptive close-up of a muffin:
Figure 1.4: Image on which simple rules can fail
Of course, this rule also fails when shown any image other than a close-up of a dog's face. Naturally, therefore, the number of manual rules we would need to create for accurate classification can grow exponentially, especially as images become more complex. The traditional approach thus works well in a very constrained environment (say, a passport photo, where all the dimensions are constrained within millimeters) and badly in an unconstrained environment, where every image varies a lot.
We can extend the same line of thought to any domain, such as text or structured data. In the past, if someone was interested in programming to solve a real-world task, it became necessary for them to understand everything about the input data and write as many rules as possible to cover every scenario. This is tedious and there is no guarantee that all new scenarios would follow said rules.
However, by leveraging ANNs, we can do this in a single step.
Neural networks provide the unique benefit of combining feature extraction and classification/regression in a single step, with little manual feature engineering. Both of these subtasks require only labeled data (for example, which pictures are dogs and which are not) and a neural network architecture. No human needs to hand-craft rules to classify an image, which removes the majority of the burden that traditional techniques impose on the programmer.
Notice that the main requirement is that we provide a considerable number of examples for the task at hand. For example, in the preceding case, we need to provide multiple dog and non-dog pictures to the model so that it learns the features. A high-level view of how neural networks are leveraged for the task of classification is as follows:
Figure 1.5: Neural network based approach for classification
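To make the contrast concrete, here is a toy sketch in plain Python. It assumes a single hand-picked feature per image (say, some score between 0 and 1); the feature values and labels below are invented for illustration. The rule-based function hard-codes a threshold, while the learned classifier — a single sigmoid neuron — discovers its own decision boundary purely from labeled examples:

```python
import math

def rule_based(feature):
    # Traditional approach: a programmer hard-codes the decision rule
    return 1 if feature > 0.5 else 0

def learn_classifier(features, labels, lr=0.5, epochs=500):
    # Learning approach: a single neuron finds its own threshold
    # from labeled data alone
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(features, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid prediction
            w -= lr * (p - y) * x                 # gradient updates for
            b -= lr * (p - y)                     # the cross-entropy loss
    return lambda x: 1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0

feats = [0.1, 0.2, 0.3, 0.7, 0.8, 0.9]  # toy feature values
labels = [0, 0, 0, 1, 1, 1]             # 1 = dog, 0 = not a dog
clf = learn_classifier(feats, labels)
```

The learned classifier arrives at roughly the same boundary as the hand-written rule, but nobody had to specify it — only the labels were provided.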
Now that we have gained a very high-level overview of the fundamental reason why neural networks perform better than traditional computer vision methods, let’s gain a deeper understanding of how neural networks work throughout the various sections in this chapter.
Learning about the ANN building blocks
An ANN is a collection of tensors (weights) and mathematical operations, arranged in a way that loosely replicates the functioning of a human brain. It can be viewed as a mathematical function that takes one or more tensors as inputs and predicts one or more tensors as outputs. The arrangement of operations that connects these inputs to outputs is referred to as the architecture of the neural network. We can customize the architecture based on the task at hand — that is, based on whether the problem involves structured (tabular) or unstructured (image, text, audio) data, which determines the input and output tensors.
An ANN is made up of the following:
- Input layers: These layers take the independent variables as input.
- Hidden (intermediate) layers: These layers connect the input and output layers, transforming the input data along the way. Each hidden layer contains nodes (the units/circles in the following diagram) that map their input values into higher- or lower-dimensional representations. Activation functions applied to the node values enable the network to learn more complex representations.
- Output layer: This layer produces the values that the input variables are expected to result in when passed through the network.
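These three building blocks can be sketched directly. The following is a minimal forward pass in plain Python (PyTorch equivalents come later in the book); the layer sizes and random weights are arbitrary, chosen only to show the input-to-hidden-to-output flow:

```python
import random

def dense(inputs, weights, biases):
    # One fully connected layer: each node computes a weighted
    # sum of all its inputs plus a bias term.
    return [sum(w * x for w, x in zip(ws, inputs)) + b
            for ws, b in zip(weights, biases)]

def relu(values):
    # Activation function applied to the hidden-node values
    return [max(0.0, v) for v in values]

def forward(x):
    random.seed(0)  # untrained network: weights are just random numbers
    w1 = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(4)]  # input (3) -> hidden (4)
    b1 = [0.0] * 4
    w2 = [[random.uniform(-1, 1) for _ in range(4)]]                    # hidden (4) -> output (1)
    b2 = [0.0]
    hidden = relu(dense(x, w1, b1))  # hidden layer with activation
    return dense(hidden, w2, b2)     # output layer

output = forward([1.0, 2.0, 3.0])  # one prediction from three input variables
```

Since the weights are random, the output is meaningless until training adjusts them — which is exactly what the rest of this chapter covers.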
With this in mind, the typical structure of a neural network is as follows:
Figure 1.6: Neural network structure
The number of nodes (circles in the preceding diagram) in the output layer depends on the task at hand and whether we are trying to...




