Computer Vision (Part A)


In this section of the course, we will embark on an exciting journey into the world of computer vision and its relationship with AI. Computer vision is a field of study that focuses on teaching machines to perceive, analyse, and understand visual information, much like the human visual system.

Computer vision is closely intertwined with AI, as it enables machines to interpret and make sense of visual data, facilitating a wide range of applications across various industries. By leveraging sophisticated algorithms and machine learning techniques, computer vision systems can extract meaningful insights from images and videos, paving the way for advancements in autonomous systems, robotics, medical imaging, surveillance, and much more.

By working through this topic, you will gain a comprehensive understanding of the foundations of computer vision, see real-world examples of its applications, and delve into essential tasks such as object detection, classification, and localisation. You will also explore the training and testing process, the significance of training and testing instances, and cutting-edge advancements in vision recognition and image captioning systems. Prepare to unveil the remarkable capabilities of computer vision as we unlock its potential in understanding and interpreting visual information!


Computer vision is a specialized branch of AI that deals specifically with visual perception and understanding. It involves developing algorithms and techniques that allow computers to extract meaningful insights from images and videos, recognize objects and patterns, understand scenes, and make intelligent decisions based on visual data.

Computer vision complements AI methodologies by incorporating visual data as an input source for intelligent systems. Just as humans rely on their vision to understand and navigate the world, computer vision enables machines to perceive and interpret the visual world around them. It allows machines to go beyond simple pixel-level analysis and extract higher-level information, such as identifying objects, understanding spatial relationships, and recognizing complex patterns.

By integrating computer vision into AI frameworks, machines can leverage visual data to enhance decision-making processes and perform tasks that require visual understanding. For example, in autonomous vehicles, computer vision systems enable the vehicle to detect and recognize pedestrians, traffic signs, and other vehicles, allowing it to make informed decisions while navigating the roads.

Computer vision also contributes to AI by bridging the gap between the physical world and digital data, by enabling machines to interact with visual content and extract valuable information. This has applications in various domains, including healthcare, where computer vision assists in medical image analysis and diagnosis, or in surveillance systems, where it aids in identifying and tracking objects of interest in video feeds.

Watch the video below to find out more about computer vision.

In the first topic, we looked at how computer vision is being used in the real world through facial recognition, autonomous vehicles, and more. Let's look at some more ways computer vision is being used.

Medical imaging 

Computer vision plays a vital role in medical imaging, aiding in the diagnosis and treatment of various conditions. From analysing X-rays and MRIs to detecting abnormalities in mammograms, computer vision algorithms assist healthcare professionals in detecting diseases and providing more accurate diagnoses. The benefits include improved early detection, faster diagnosis, and enhanced treatment planning. However, challenges remain in terms of standardization, reducing false positives/negatives, and addressing ethical concerns surrounding patient data privacy.

Augmented reality

Computer vision forms the backbone of augmented reality (AR) applications, which overlay digital content onto the real world. By accurately tracking the user's environment and recognizing objects, AR systems can seamlessly blend virtual elements with the physical world. This technology finds applications in gaming, education, interior design, and more. The benefits include immersive experiences, enhanced visualization, and interactive learning. Improvements are needed in real-time tracking, occlusion handling, and precise object recognition to further enhance AR applications.

Surveillance systems

Computer vision is extensively used in surveillance systems for detecting and monitoring activities in public spaces, buildings, and critical infrastructure. By analysing video feeds, these systems can identify suspicious behaviour, track objects, and provide real-time alerts to security personnel. The benefits include enhanced security, crime prevention, and quick response to potential threats. However, issues related to privacy, false alarms, and the ethical use of surveillance technologies require ongoing consideration and refinement.

Object detection, classification, and localisation are essential tasks in computer vision that involve analysing images or videos to identify and understand the objects present within them. Let's explore each of these tasks in detail:

Object detection

Object detection refers to the process of identifying and localizing specific objects within images or videos. It involves detecting the presence of objects and drawing bounding boxes around them to indicate their location. Object detection is a fundamental task in computer vision as it enables machines to recognize and locate multiple objects simultaneously.

Object detection algorithms typically employ a combination of techniques such as feature extraction, pattern recognition, and machine learning. These algorithms analyse visual data and search for distinctive features or patterns that correspond to different objects. The output of object detection is a set of bounding boxes along with the class labels of the detected objects.
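To make this concrete, here is a minimal, illustrative sketch using torchvision's pretrained Faster R-CNN detector (a different detector family from the YOLO model shown in the video below, but with the same kind of output: boxes, labels, and scores). The input file name is a placeholder; any RGB image will do.

```python
# A minimal detection sketch using torchvision's pretrained Faster R-CNN.
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.transforms.functional import to_tensor
from PIL import Image

model = fasterrcnn_resnet50_fpn(weights="DEFAULT")  # pretrained on COCO
model.eval()

image = Image.open("street.jpg").convert("RGB")  # placeholder input image
with torch.no_grad():
    predictions = model([to_tensor(image)])

# Each prediction holds bounding boxes, class labels, and confidence scores.
for box, label, score in zip(predictions[0]["boxes"],
                             predictions[0]["labels"],
                             predictions[0]["scores"]):
    if score > 0.5:  # keep reasonably confident detections
        print(f"class {label.item()} at {box.tolist()} (score {score:.2f})")
```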

Watch the video below to find out more about object detection using Python and YOLO.

Object classification

Object classification involves categorizing objects into predefined classes or categories. Once an object has been detected, object classification algorithms analyse the visual features of the object and assign it to a specific class or category. Common approaches for object classification include traditional machine learning techniques, such as support vector machines (SVM) and decision trees, as well as deep learning methods, including convolutional neural networks (CNN).
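As a small, runnable taste of the traditional machine learning approach, the sketch below trains an SVM classifier on scikit-learn's built-in digits dataset (8x8 grayscale images of handwritten digits), treating raw pixel intensities as features. The hyperparameter values are illustrative, not tuned.

```python
# A classification sketch: an SVM assigns each image to one of ten classes.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

clf = SVC(kernel="rbf", gamma=0.001)  # pixel intensities as features
clf.fit(X_train, y_train)             # learn the class boundaries
print("test accuracy:", clf.score(X_test, y_test))
```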

Object classification plays a crucial role in various applications, such as image recognition, content-based image retrieval, and autonomous systems. By accurately classifying objects, machines can understand the visual content and make informed decisions based on the recognized objects' characteristics.

Object localisation

Object localisation aims to precisely determine the position and extent of objects within visual data. While object detection provides bounding boxes around the objects, object localisation focuses on determining the precise boundaries or contours of the objects. This allows for a more detailed understanding and analysis of the objects' spatial information.

Localisation algorithms often utilize techniques such as edge detection, contour extraction, or semantic segmentation to precisely outline the objects within images or videos. The output of object localisation provides more accurate spatial information, enabling machines to understand the exact position and shape of the objects.
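The following sketch illustrates the idea with OpenCV's edge detection and contour extraction on a synthetic image; the drawn rectangle stands in for a real object.

```python
# A localisation sketch: outline an object's contour with OpenCV.
import cv2
import numpy as np

# Synthetic test image: a white rectangle on a black background.
image = np.zeros((200, 200), dtype=np.uint8)
cv2.rectangle(image, (50, 60), (150, 140), 255, thickness=-1)

edges = cv2.Canny(image, 100, 200)                 # edge detection
contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                               cv2.CHAIN_APPROX_SIMPLE)

x, y, w, h = cv2.boundingRect(contours[0])         # tight bounding box
print(f"object located at x={x}, y={y}, width={w}, height={h}")
```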

Learning Activity

Research the term semantic segmentation to start building your knowledge.

The tasks of object detection, classification, and localisation are closely related and often interconnected. Object detection is a prerequisite for object classification and localisation, as it provides the initial identification and location of objects within visual data. Object classification helps in assigning semantic labels to the detected objects, while object localisation provides more precise spatial information.

These tasks have numerous applications across various domains. For example, in autonomous driving, object detection, classification, and localisation are crucial for identifying and tracking vehicles, pedestrians, and traffic signs, enabling the vehicle to make informed decisions. In surveillance systems, these tasks aid in detecting and recognising objects of interest, facilitating security monitoring.

Continued advancements in computer vision algorithms, along with the integration of deep learning techniques, have significantly improved the accuracy and efficiency of object detection, classification, and localisation. However, challenges remain in handling occlusion, scale variations, complex scenes, and real-time processing. Ongoing research and development efforts are focused on improving the robustness, speed, and accuracy of these tasks, enabling machines to better understand and interact with the visual world.

The training and testing process is a crucial aspect of developing computer vision programs. The steps are described below.

Training dataset

To train a computer vision model, a comprehensive training dataset is required. This dataset consists of a large collection of labelled images or videos that represent the visual patterns or objects the model needs to learn. The training dataset should include both positive and negative examples. Let's consider this in the context of facial detection.

Positive examples

These are images or videos that contain the objects or features the system needs to identify. For example:

  1. Faces of young children of different cultural backgrounds.
  2. Faces of young people of different cultural backgrounds.
  3. Faces of elderly people of different cultural backgrounds.
  4. Faces of people with coverings (hijab, sunglasses, mask, beard).

Negative examples

These are images or videos that represent elements or backgrounds that the system should disregard. For example:

  1. Images of empty streets.
  2. Images of empty parks.
  3. Images of empty rooms.
  4. Images of animals.
  5. An image of eggs.
  6. An image of children's toys.

Including both positive and negative examples is essential because it enables the machine learning algorithm to learn to differentiate between different visual patterns. By presenting the model with a diverse range of positive and negative instances, it can learn the distinctive features and characteristics of the objects of interest. Once the training dataset is prepared, the next step is to train the computer vision model.

Training the model

Training the model involves feeding the labelled training data into the model and allowing it to learn the underlying patterns and relationships between the visual features and the corresponding labels.

During training, the model adjusts its internal parameters and learns to make accurate predictions based on the input data. The process involves optimization techniques, such as backpropagation and gradient descent, to iteratively update the model's parameters and minimize the difference between its predictions and the ground truth labels.

The training process continues for multiple iterations or 'epochs', gradually improving the model's ability to recognize and classify objects based on the visual features present in the training dataset. Once the model is trained, its performance on unseen data must be evaluated.
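The sketch below shows what this looks like in practice: a toy PyTorch training loop over dummy data. The architecture, learning rate, and random tensors are placeholders chosen only to illustrate epochs, backpropagation, and gradient descent.

```python
# A minimal training-loop sketch showing epochs, backpropagation,
# and gradient descent (SGD) on dummy data.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))  # toy classifier
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # gradient descent

# Dummy batch standing in for a labelled training set.
inputs = torch.rand(16, 3, 64, 64)
targets = torch.randint(0, 2, (16,))

for epoch in range(5):                    # one pass over the data per epoch
    optimizer.zero_grad()                 # clear old gradients
    loss = criterion(model(inputs), targets)
    loss.backward()                       # backpropagation
    optimizer.step()                      # update the model's parameters
    print(f"epoch {epoch + 1}: loss {loss.item():.4f}")
```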

Testing the model

Evaluating the model is done using a separate testing dataset that is distinct from the training data. The testing dataset contains images or videos that the model has not seen during training.

During testing, the model makes predictions on the testing data, and its performance is measured by comparing its predictions with the ground truth labels. Common evaluation metrics include accuracy, precision, recall, and F1-score.
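These metrics are typically computed with an off-the-shelf library rather than by hand. The sketch below uses scikit-learn with made-up predictions and ground-truth labels, purely for illustration.

```python
# Computing the usual evaluation metrics, given the model's predictions
# and the ground-truth labels of the testing dataset.
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground truth (e.g. face / no face)
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions on the test set

print("accuracy: ", accuracy_score(y_true, y_pred))
print("precision:", precision_score(y_true, y_pred))
print("recall:   ", recall_score(y_true, y_pred))
print("F1-score: ", f1_score(y_true, y_pred))
```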

Testing the model on unseen data helps determine its generalization ability, i.e., how well it can perform on new, unseen examples. It provides an objective measure of the model's performance and helps identify any issues, such as overfitting or underfitting.
  • Overfitting: when the model performs well on the training data but poorly on the testing data, failing to generalize to new instances.
  • Underfitting: when the model is too simple to capture the underlying patterns in the data, performing poorly on both the training and testing data.

Iterative refinement

Based on the model's performance during testing, further iterations of training and testing may be required to improve its accuracy and generalization. This iterative refinement process involves adjusting various aspects of the model, such as its architecture, hyperparameters, or training strategy, to enhance its performance.

By analysing the model's strengths and weaknesses, developers can identify areas for improvement and apply techniques such as data augmentation, regularization, or advanced network architectures to enhance the model's capabilities.

It's important to note that the training and testing process in computer vision can be resource-intensive, requiring significant computational power and labelled datasets. Additionally, careful consideration should be given to issues such as bias in the training data, ensuring diversity and representativeness, and ethical considerations surrounding the use of data.

Overall, the training and testing process is an iterative and critical component of developing computer vision programs. By using comprehensive training datasets, training the model, and evaluating its performance on testing data, developers can enhance the model's ability to learn and differentiate between visual patterns, leading to more accurate and reliable computer vision systems.

Watch the video below to find out more about the difference between training and testing data in machine learning:

Training and testing instances play a crucial role in the development of computer vision systems. They are used to provide labelled data to the system, allowing it to learn and generalise from the provided examples.

Training instances

Training instances refer to the labelled data used to train the computer vision system. These instances typically consist of images or videos along with corresponding labels or annotations that indicate the objects or patterns of interest. The system learns from these instances to recognize objects, detect patterns, and make accurate predictions.

The quality and diversity of training instances have a direct impact on the system's learning and generalization capabilities. A comprehensive and well-curated training dataset is essential for teaching the system the visual characteristics, variations, and relationships associated with the objects it needs to recognize. The more varied and representative the training instances are, the better the system can learn to handle different scenarios and generalize its knowledge to unseen data.

During the training process, the system analyses the training instances, extracts relevant features, and learns the underlying patterns and relationships between the visual data and their labels. Machine learning techniques, such as deep neural networks, are commonly used to train computer vision systems by optimizing their internal parameters based on the provided training instances.
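In a deep learning framework, a training instance is simply a (sample, label) pair served up by a dataset object. The sketch below shows this packaging in PyTorch; the face-detection labels and random tensors are placeholders standing in for real annotated images.

```python
# A sketch of how labelled training instances are typically packaged
# for a deep learning framework, here as a PyTorch Dataset.
import torch
from torch.utils.data import Dataset, DataLoader

class FaceDataset(Dataset):           # hypothetical dataset for illustration
    def __init__(self, images, labels):
        self.images = images          # tensors of shape (3, H, W)
        self.labels = labels          # 1 = face present, 0 = no face

    def __len__(self):
        return len(self.images)

    def __getitem__(self, idx):
        # One training instance: a visual sample and its ground-truth label.
        return self.images[idx], self.labels[idx]

# Dummy data standing in for real labelled images.
images = [torch.rand(3, 64, 64) for _ in range(8)]
labels = [1, 0, 1, 1, 0, 0, 1, 0]
loader = DataLoader(FaceDataset(images, labels), batch_size=4, shuffle=True)
```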

Testing instances

Testing instances, also known as evaluation instances, are used to assess the performance and generalization ability of the trained computer vision system. These instances differ from the training data and are typically unseen by the system during the training phase. The purpose of testing instances is to evaluate how well the system can apply its learned knowledge to new, unfamiliar data.

Testing instances are crucial for measuring the system's performance and identifying any issues such as overfitting or underfitting. By evaluating the system on diverse and representative testing instances, developers can assess its accuracy, robustness, and ability to handle real-world scenarios.

It is important to note that the testing instances should be carefully selected to represent the distribution and challenges of the real-world data the system will encounter. The testing dataset should cover a range of variations, such as different:

  • lighting conditions
  • viewpoints
  • scales
  • occlusions.


This ensures that the system's performance is evaluated in a realistic and comprehensive manner.

Importance of sufficient instances

Both the training and testing datasets should have a sufficient number of instances to effectively train and evaluate the computer vision system. Insufficient instances may result in poor generalization, limited coverage of variations, and an increased risk of overfitting. A larger dataset allows the system to learn from a more diverse range of examples, improving its ability to handle different situations and increasing its overall accuracy.

Additionally, the instances should be carefully labelled or annotated to provide accurate ground truth information. Proper labelling ensures that the system learns from correct information and helps evaluate its performance during testing accurately.

Data augmentation

In some cases, the available training instances may be limited, making it challenging to train a robust computer vision system. In such situations, data augmentation techniques can be used to artificially increase the diversity and size of the training dataset. Data augmentation involves applying various transformations, such as rotation, scaling, flipping, or adding noise to the existing instances to create new, augmented training examples. This helps improve the system's ability to generalize and handle different variations and conditions.
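The sketch below shows how such transformations might be chained with torchvision; the specific transforms and parameter values are illustrative, and the input file name is a placeholder.

```python
# A data-augmentation sketch: each pass over the data sees a randomly
# rotated, flipped, colour-jittered, and rescaled variant of the image.
from torchvision import transforms
from PIL import Image

augment = transforms.Compose([
    transforms.RandomRotation(degrees=15),                 # small random rotation
    transforms.RandomHorizontalFlip(p=0.5),                # mirror half the time
    transforms.ColorJitter(brightness=0.2),                # lighting variation
    transforms.RandomResizedCrop(224, scale=(0.8, 1.0)),   # scale variation
    transforms.ToTensor(),
])

image = Image.open("face.jpg").convert("RGB")   # placeholder training image
augmented = augment(image)                      # a new, augmented instance
```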

Watch the video below to find out more about the different datasets:

Vision recognition refers to the ability of machines to understand and interpret visual information accurately. It involves the recognition and classification of objects, scenes, or patterns within images or videos. Vision recognition systems use various computer vision techniques, such as feature extraction, object detection, and deep learning algorithms, to analyse visual data and make sense of the underlying content.

The goal of vision recognition is to enable machines to identify and categorize objects or scenes with a high level of accuracy, similar to how humans perceive and recognize visual information. This technology finds applications in various domains, including autonomous vehicles, surveillance systems, medical imaging, and augmented reality.

Vision recognition systems utilize machine learning algorithms trained on large datasets to learn the visual features and patterns associated with different objects or scenes. Deep learning architectures, such as CNNs, have shown remarkable success in achieving state-of-the-art performance in vision recognition tasks.
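As an illustration, the sketch below classifies a single image with a ResNet-18 CNN pretrained on ImageNet via torchvision; the input file name is a placeholder.

```python
# A vision-recognition sketch: classify an image with a pretrained CNN.
import torch
from torchvision.models import resnet18, ResNet18_Weights
from PIL import Image

weights = ResNet18_Weights.DEFAULT
model = resnet18(weights=weights)     # CNN pretrained on ImageNet
model.eval()

preprocess = weights.transforms()                 # matching preprocessing
image = Image.open("cat.jpg").convert("RGB")      # placeholder input image
with torch.no_grad():
    logits = model(preprocess(image).unsqueeze(0))

probs = logits.softmax(dim=1)[0]
top = probs.argmax().item()
print(weights.meta["categories"][top], f"({probs[top]:.2%})")
```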

By harnessing the power of vision recognition, machines can perform tasks such as object recognition, image classification, facial recognition, and scene understanding, contributing to advancements in fields like robotics, automation, and human-computer interaction.

Watch the video below to find out more about how computers learn to recognise objects.

Image captioning systems combine computer vision with natural language processing (NLP) to generate descriptive captions or textual explanations for images. These systems aim to bridge the gap between visual perception and natural language understanding, allowing machines to describe visual content in a human-like manner.

Image captioning involves two main components: visual feature extraction and language generation.

  • In the visual feature extraction step, computer vision algorithms process the input image to extract relevant visual features. These features represent the key elements, objects, or patterns present in the image.
  • In the language generation step, the extracted visual features are fed into an NLP model, typically a recurrent neural network (RNN) or transformer-based architecture. The model combines the visual features with contextual information to generate coherent, descriptive sentences that accurately describe the content of the image, as sketched below.
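One common way to wire these two components together is a CNN encoder feeding an RNN decoder. The skeleton below is a minimal, untrained sketch of that pattern, assuming a toy vocabulary and placeholder dimensions; it is illustrative, not a reference implementation.

```python
# A bare-bones captioning skeleton: a CNN encoder extracts visual features,
# and an RNN (LSTM) decoder scores the next word at each step.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class CaptionModel(nn.Module):  # hypothetical model for illustration
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        cnn = resnet18(weights=None)  # visual feature extractor (untrained here)
        self.encoder = nn.Sequential(*list(cnn.children())[:-1])  # drop classifier head
        self.project = nn.Linear(512, embed_dim)   # map image features to word space
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.decoder = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.to_vocab = nn.Linear(hidden_dim, vocab_size)

    def forward(self, images, captions):
        feats = self.encoder(images).flatten(1)       # (batch, 512) image features
        feats = self.project(feats).unsqueeze(1)      # image acts as the "first token"
        tokens = self.embed(captions)                 # embed the caption words
        sequence = torch.cat([feats, tokens], dim=1)  # image first, then words
        out, _ = self.decoder(sequence)
        return self.to_vocab(out)                     # next-word scores per step

# Dummy batch: 2 images and 12-token captions from a 1,000-word toy vocabulary.
model = CaptionModel(vocab_size=1000)
images = torch.rand(2, 3, 224, 224)
captions = torch.randint(0, 1000, (2, 12))
print(model(images, captions).shape)  # torch.Size([2, 13, 1000])
```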

Image captioning systems find applications in areas such as image indexing and retrieval, content-based image search, and accessibility tools for visually impaired individuals. They enable machines to understand and communicate visual information effectively, opening up new possibilities for human-machine interaction.

Integration of vision recognition and image captioning systems

The integration of vision recognition and image captioning systems showcases the synergy between computer vision and other AI disciplines, such as NLP. By combining the understanding of visual content with the generation of textual descriptions, these systems enhance the capabilities of machines to comprehend and interpret the visual world in a more holistic manner.

It is worth mentioning that vision recognition and image captioning systems are active areas of research and development. Ongoing advancements in deep learning, neural networks, and multimodal learning continue to improve the accuracy, robustness, and naturalness of these systems. Researchers are continuously exploring innovative techniques and architectures to enhance vision recognition and image captioning capabilities.

In summary, vision recognition and image captioning systems represent the integration of computer vision with other AI disciplines, enabling machines to understand and describe visual content accurately. These technologies have a wide range of applications and contribute to the advancement of fields such as robotics, automation, image search, and accessibility tools.

Learning Activity

Use information from this course and your own research to answer the following questions. Write approximately 50-100 words for each answer in the forum 'Learning Activity 2: Computer Vision'. 

  1. What is artificial intelligence?
  2. What is computer vision?
  3. What are some of the applications where computer vision is required?
  4. What is the training process?
  5. Why is testing important?