Getting Started with OpenCV CUDA Module

Last Updated : 20 Jun, 2024

OpenCV is a well-known open-source computer vision library, widely used for computer vision and image processing projects. CUDA (Compute Unified Device Architecture), introduced by NVIDIA in 2006, is a parallel computing platform and application programming interface (API) that lets software use NVIDIA graphics processing units (GPUs) for faster general-purpose processing. OpenCV's CUDA module exposes this platform through familiar OpenCV interfaces, so developers can carry out computationally intensive image and video processing much faster than with purely CPU-based methods.

The OpenCV CUDA module does not require prior knowledge of CUDA, as it is designed for ease of use. Still, it helps to understand the cost of various operations, what a GPU does and how it works, and which data formats it prefers.

Why do we use OpenCV's CUDA module?

People choose the CUDA module in OpenCV for their computer vision tasks because of the significant performance improvement it offers. By leveraging GPU acceleration, real-time applications and large datasets can be processed quickly. This is especially useful in fields such as robotics, autonomous vehicles, medical imaging, and surveillance systems.

This guide will walk you through the following key areas:

  1. Installing OpenCV with CUDA.
  2. Prerequisites to meet before installing OpenCV with CUDA.
  3. Available functionality in OpenCV’s CUDA module.
  4. A detailed discussion of the GpuMat class for GPU data storage.
  5. Methods of efficiently transferring data between CPU and GPU.
  6. How to use multiple GPUs for better performance.
  7. Finally, a demo showing complete usage of the CUDA module within OpenCV.

Prerequisites:

Install OpenCV with CUDA support: First, confirm that OpenCV is installed on your system with CUDA support enabled. This typically involves building OpenCV from source with the appropriate CUDA options configured.

Install CUDA and cuDNN: You should also confirm that CUDA and cuDNN are installed on your computer. Follow the installation instructions provided to set up these tools properly; you can download CUDA from the NVIDIA website.

Supported Modules:

The OpenCV CUDA module supports a wide range of functionalities, including:

  1. Core Operations: Basic arithmetic, logical operations, and matrix manipulations.
  2. Image Processing: Filters, transformations, and image analysis tools.
  3. Object Detection: Pre-trained models and custom implementations for detecting objects.
  4. Machine Learning: Accelerated algorithms for training and inference.
  5. Video Analysis: Efficient video capture, processing, and analysis.
  6. Feature Detection and Description: Keypoint detection and feature extraction algorithms.

Installing OpenCV with CUDA Support

Using Pre-built Binaries:

Most pre-built OpenCV packages are compiled without CUDA support, so building from source (described below) is usually necessary. Still, it is worth checking what your platform offers:

For Windows:

  • The official opencv-python and opencv-contrib-python packages on PyPI do not include CUDA support; look for community-maintained CUDA-enabled builds instead.
  • Follow the installation instructions provided with whichever package you choose.

For Linux:

  • Check whether your distribution's package manager (e.g., apt, yum) offers OpenCV packages with CUDA support; the standard packages (e.g., sudo apt-get install libopencv-dev python3-opencv on Ubuntu) are typically CPU-only.

For macOS:

  • NVIDIA no longer supports CUDA on recent versions of macOS, so the Homebrew package (brew install opencv) is CPU-only; building the CUDA module is only possible on older systems with NVIDIA hardware.

Building from Source

Download the OpenCV source code from the official repository (https://github.com/opencv/opencv). In OpenCV 4.x, the CUDA modules live in the companion opencv_contrib repository, so clone that as well. Install the required dependencies (e.g., CMake, the CUDA Toolkit, and development libraries). Create a build directory and navigate to it: mkdir build && cd build.

Configure the build system with CUDA support enabled, pointing CMake at the opencv_contrib modules:

cmake -D WITH_CUDA=ON -D CUDA_ARCH_BIN=<target_architecture> -D CUDA_ARCH_PTX=<target_architecture> -D WITH_CUBLAS=1 -D OPENCV_EXTRA_MODULES_PATH=<path_to_opencv_contrib>/modules ..

Replace <target_architecture> with the compute capability of your GPU (e.g., 7.5 for an RTX 2080) and <path_to_opencv_contrib> with the directory where you cloned opencv_contrib.

Build and Install OpenCV

make -j<num_cores>
sudo make install

Setting Up Your Development Environment

Environment Variables:

  • Set the CUDA_PATH environment variable to the CUDA installation directory.
  • Add the OpenCV library directories to your system's library path (e.g., LD_LIBRARY_PATH on Linux, DYLD_LIBRARY_PATH on macOS).

IDE Configuration:

For Visual Studio (Windows):

  • Add the OpenCV include directories to your project's include paths.
  • Link against the required OpenCV libraries (e.g., opencv_world<version>.lib, or individual module libraries such as opencv_cudaarithm<version>.lib and opencv_cudafilters<version>.lib).
  • Set the CUDA configuration in your project settings.

For other IDEs (e.g., Eclipse, Xcode):

  • Add the OpenCV include directories to your project's include paths.
  • Link against the required OpenCV libraries.
  • Configure the CUDA settings as per your IDE's documentation.

Build System Configuration

For CMake-based projects:

  • Use the find_package(OpenCV REQUIRED) command to locate the OpenCV installation.
  • Link against the required OpenCV libraries via ${OpenCV_LIBS}; when OpenCV is built with CUDA, this variable already includes the CUDA module libraries.

For Make-based projects:

  • Update your makefiles to include the OpenCV include directories and link against the required libraries.

After setting up your development environment, you should be ready to start writing code that utilizes the OpenCV CUDA module.
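Putting the CMake advice above together, a minimal CMakeLists.txt for a CUDA-enabled OpenCV project might look like this (a sketch; the project and source file names are placeholders):

```cmake
cmake_minimum_required(VERSION 3.10)
project(opencv_cuda_demo)

# Locates OpenCVConfig.cmake from the installed build
find_package(OpenCV REQUIRED)

add_executable(opencv_cuda_demo main.cpp)
target_include_directories(opencv_cuda_demo PRIVATE ${OpenCV_INCLUDE_DIRS})
# When OpenCV was built with WITH_CUDA=ON, ${OpenCV_LIBS}
# already includes the CUDA module libraries
target_link_libraries(opencv_cuda_demo PRIVATE ${OpenCV_LIBS})
```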

Basic Block - GpuMat

The OpenCV CUDA module is built around the GpuMat class. In contrast to Mat, which stores images and matrices in CPU (host) memory, GpuMat is made strictly for storing data in GPU (device) memory.

Creating and Using GpuMat

Python
import cv2
import numpy as np

# Check if GPU support is available
if not cv2.cuda.getCudaEnabledDeviceCount():
    print("CUDA-enabled GPU not found. Exiting...")
    exit()

# Load an image using OpenCV
image = cv2.imread('image.jpg', cv2.IMREAD_COLOR)

# Check if image is loaded
if image is None:
    print("Error loading image. Exiting...")
    exit()

# Upload the image to the GPU
gpu_image = cv2.cuda_GpuMat()
gpu_image.upload(image)

# Perform a Gaussian blur on the image using the GPU
gaussian_filter = cv2.cuda.createGaussianFilter(
    cv2.CV_8UC3, cv2.CV_8UC3, (15, 15), 0)
gpu_blurred_image = gaussian_filter.apply(gpu_image)

# Download the result back to the CPU
blurred_image = gpu_blurred_image.download()

# Display the original and blurred images
cv2.imshow('Original Image', image)
cv2.imshow('Blurred Image', blurred_image)

cv2.waitKey(0)
cv2.destroyAllWindows()

CPU/GPU Data Transfer

Efficient data transfer between the CPU and GPU is essential for performance, since a transfer over the PCIe bus can easily cost more than the computation it feeds. Here are some tips:

  1. Minimize transfers: Keep intermediate results on the GPU and download only the final output, because every CPU-GPU transfer adds latency.
  2. Batching: Transfer data in batches to increase GPU utilization.
  3. Pinned memory: Use pinned (page-locked) host memory for faster, asynchronous transfers.

Utilizing Multiple GPUs

OpenCV's CUDA module supports multiple GPU configurations. When using more than one GPU, the active device must be selected (e.g., with cv2.cuda.setDevice) before each GPU's work is issued.

Example for using multiple GPUs:

This example splits an image into parts, processes each part on a different GPU, and then combines the results.

Python
import cv2
import numpy as np


def process_on_gpu(image_part, gpu_id):
    # Set the GPU device
    cv2.cuda.setDevice(gpu_id)

    # Upload the image part to the GPU
    gpu_image = cv2.cuda_GpuMat()
    gpu_image.upload(image_part)

    # Perform Gaussian blur on the GPU
    gaussian_filter = cv2.cuda.createGaussianFilter(
        cv2.CV_8UC1, cv2.CV_8UC1, (15, 15), 0)
    gpu_blurred_image = gaussian_filter.apply(gpu_image)

    # Download the result back to the host
    result_image = gpu_blurred_image.download()

    return result_image


# Load an image in grayscale
image = cv2.imread('image.jpg', cv2.IMREAD_GRAYSCALE)

# Check if the image is loaded successfully
if image is None:
    print("Error: Could not load the image.")
    exit()

# Split the image into two parts for two GPUs
height, width = image.shape
half_height = height // 2

image_part1 = image[:half_height, :]
image_part2 = image[half_height:, :]

# Process each part on different GPUs
result_part1 = process_on_gpu(image_part1, 0)  # GPU 0
result_part2 = process_on_gpu(image_part2, 1)  # GPU 1

# Combine the results
combined_result = np.vstack((result_part1, result_part2))

# Display the result
cv2.imshow('Blurred Image', combined_result)
cv2.waitKey(0)
cv2.destroyAllWindows()

Sample Demo

Here is a sample demo that shows the OpenCV CUDA module in action:

Python
import cv2
import numpy as np
import time


def main():
    # Check if CUDA is available
    if not cv2.cuda.getCudaEnabledDeviceCount():
        print("CUDA is not available. Please ensure you have a CUDA-capable GPU and the correct drivers installed.")
        return

    # Initialize CUDA device
    device_count = cv2.cuda.getCudaEnabledDeviceCount()
    print(f"Number of CUDA-capable GPUs: {device_count}")

    # Read the image
    image_path = 'path_to_your_image.jpg'
    image = cv2.imread(image_path)
    if image is None:
        print("Error: Image not found.")
        return

    # Function to perform GPU processing on an image
    def process_image_on_gpu(image, device_id):
        cv2.cuda.setDevice(device_id)

        # Upload the image to the GPU
        gpu_image = cv2.cuda_GpuMat()
        gpu_image.upload(image)

        # Apply Gaussian blur on the GPU
        gpu_blur = cv2.cuda.createGaussianFilter(
            gpu_image.type(), gpu_image.type(), (15, 15), 0)
        gpu_blurred_image = gpu_blur.apply(gpu_image)

        # Download the result back to the CPU
        result_image = gpu_blurred_image.download()
        return result_image

    # Measure performance
    start_time = time.time()

    # Process image on GPU 0
    result_image_gpu0 = process_image_on_gpu(image, 0)

    # If multiple GPUs are available, process on GPU 1 as well
    if device_count > 1:
        result_image_gpu1 = process_image_on_gpu(image, 1)

    total_gpu_time = time.time() - start_time

    # Display processing time
    print(f"Total GPU Processing Time: {total_gpu_time:.6f} seconds")

    # Display the original and blurred images
    cv2.imshow('Original Image', image)
    cv2.imshow('Blurred Image (GPU 0)', result_image_gpu0)
    if device_count > 1:
        cv2.imshow('Blurred Image (GPU 1)', result_image_gpu1)
    cv2.waitKey(0)
    cv2.destroyAllWindows()


if __name__ == "__main__":
    main()

This uploads the image to the GPU, applies Gaussian blur using the GPU, measures the processing time, and downloads the result back to the CPU.

Expected Output:

Number of CUDA-capable GPUs: 2
Total GPU Processing Time: 0.045123 seconds

If everything is set up correctly, you should see the device count and GPU processing time printed in the console, and the original and blurred images displayed in separate windows. Timing the same blur with cv2.GaussianBlur on the CPU makes a useful comparison: for large images, the GPU time should be significantly lower, demonstrating the performance benefit of the CUDA module in OpenCV.

Conclusion

Using the CUDA module, processing time can be significantly lower than with purely CPU-based code, and that performance benefit translates into substantial time and cost savings.

With this guide, you are ready to apply the knowledge you have gained and bring the power of CUDA to your OpenCV projects, resulting in faster, more efficient, and easier-to-scale computer vision solutions.

