In this blog post, we’ll explore how GPU technology is being used in various industries, including science, healthcare, and artificial intelligence, beyond gaming graphics, and what changes it will bring in the future.
In recent years, the gaming market has experienced explosive growth, and graphics technology has advanced with it. The latest games render at over 144 frames per second in 4K or higher resolutions, with visuals so realistic they are almost indistinguishable from reality. Graphics cards play a crucial role in running these high-spec games smoothly, and their performance directly determines a game’s visual quality. At the heart of a graphics card is a chip called the GPU (Graphics Processing Unit), which handles complex graphics calculations such as rendering 3D models. Rendering is the process of converting 3D models or scene data into the images displayed on screen. The better the GPU’s performance, the more realistic and detailed the rendered image can be. Because GPUs are specialized for rendering workloads, they can process complex scenes quickly and efficiently.
Originally designed for graphics processing, the GPU rapidly generates the images in the frame buffer that are output to the screen. Today, GPUs are found in desktop computers, laptops, game consoles, and even smartphones, accelerating a wide range of applications. Although GPUs and central processing units (CPUs) have similar names, they compute in very different ways. A CPU is a serial processor, executing instructions one after another, and is typically built from a handful of high-performance cores, often 4 to 8. A GPU, by contrast, is a parallel processor: it uses thousands of simpler cores to perform many calculations simultaneously, a structure that lets it work through complex graphics calculations quickly.
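The difference between serial and parallel execution can be sketched in plain Python. Nothing below runs on a real GPU; the `kernel` function simply stands in for the small program a GPU would run on every data element at once:

```python
# Contrast serial (CPU-style) and data-parallel (GPU-style) processing.
# Pure Python; the "kernel" mimics the per-element program a GPU core runs.

def kernel(pixel):
    """The same small operation a GPU would apply to every element at once."""
    return min(255, pixel * 2)  # e.g. double the brightness, clamped to 255

pixels = [10, 60, 130, 200]

# CPU style: one instruction stream walks the data in sequence.
serial = []
for p in pixels:
    serial.append(kernel(p))

# GPU style (conceptually): every element is independent, so the same
# kernel can be applied to all of them simultaneously. map() expresses
# that independence, even though Python runs it on one core here.
parallel = list(map(kernel, pixels))

assert serial == parallel == [20, 120, 255, 255]
```

The point is not speed (this toy runs serially either way) but the structure: because no element depends on another, the work can be split across thousands of cores with no coordination.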
For everyday tasks, the processing power of a CPU is usually sufficient. Document editing, web browsing, and simple calculations can all be handled by a CPU alone without any issues. The situation changes, however, with demanding workloads such as gaming or 3D rendering. In games, complex scenes such as vehicles colliding and throwing off debris, buildings collapsing, and bombs exploding require physics and particle-effect calculations to run simultaneously, and all of these elements must be rendered realistically. If the CPU had to handle every one of these calculations alone, the computational load would spike, degrading performance and producing frame rates and image quality that fall short of expectations. This is where the GPU comes in: by taking over the graphics calculations with its parallel processing capabilities, it reduces the CPU’s workload and enables much faster, more vivid rendering.
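The physics side of such a scene is naturally data-parallel: each particle’s update depends only on its own position and velocity. A minimal Euler-integration sketch in Python, where `step` is the per-particle work a GPU could run across thousands of cores at once (the constants and particle values are illustrative):

```python
# Each particle's update depends only on its own state, so a GPU can
# integrate thousands of particles in parallel. Simple Euler integration:

GRAVITY = -9.8  # m/s^2, acting on the y axis (illustrative)
DT = 0.1        # timestep in seconds (illustrative)

def step(particle):
    """Advance one particle by one timestep; a GPU runs this per core."""
    x, y, vx, vy = particle
    vy += GRAVITY * DT                       # apply gravity to velocity
    return (x + vx * DT, y + vy * DT, vx, vy)  # move by updated velocity

# (x, y, vx, vy) for a few debris fragments
particles = [(0.0, 10.0, 1.0, 0.0), (5.0, 10.0, -2.0, 3.0)]
particles = [step(p) for p in particles]  # one simulation step for all
```

Because `step` never reads another particle’s state, the list comprehension maps directly onto a parallel kernel launch.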
While GPUs are fundamentally specialized for graphics, their parallel processing capabilities are increasingly applied to general-purpose computation as well. The technology that made this possible is GPGPU (General-Purpose computing on Graphics Processing Units). GPGPU offloads the parallelizable portions of ordinary applications to the GPU, greatly expanding its versatility. This technique was once confined to specialized scientific and engineering software, but it is now common in everyday environments such as media players and video conversion programs. Video codec conversion and real-time filter application, for example, run much faster on a GPU, and GPUs have since become essential in the fields of artificial intelligence (AI) and machine learning.
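The classic first example of GPGPU is vector addition, where each GPU thread adds one pair of elements identified by its thread index. The sketch below emulates that launch model in plain Python; the `tid` loop stands in for thousands of hardware threads, and no GPU API is actually used:

```python
# GPGPU's "hello world" is vector addition: each thread handles exactly
# one element, selected by its thread id. A plain-Python emulation:

def vector_add_kernel(tid, a, b, out):
    """Body of a GPGPU-style kernel: thread `tid` adds one element pair."""
    out[tid] = a[tid] + b[tid]

a = [1.0, 2.0, 3.0, 4.0]
b = [10.0, 20.0, 30.0, 40.0]
out = [0.0] * len(a)

# A GPU would launch len(a) threads at once; here we emulate the launch
# serially. The key property: no iteration depends on any other.
for tid in range(len(a)):
    vector_add_kernel(tid, a, b, out)

assert out == [11.0, 22.0, 33.0, 44.0]
```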
In particular, with the development of GPU programming platforms and frameworks such as OpenCL and CUDA, parallel processing on GPUs has grown increasingly diverse and sophisticated. One of the most prominent non-graphics uses of GPUs is cryptocurrency mining. Mining is the process of verifying blockchain transaction records and generating new blocks, and miners who complete this process successfully receive a certain amount of cryptocurrency as a reward.
Initially, mining difficulty was low enough that CPUs alone sufficed. As difficulty rose, however, far more computing power was needed, and miners turned to GPUs, whose parallel computing capabilities suit the workload. GPU-based mining rigs soon became widespread, at times causing shortages of graphics cards.
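Proof-of-work mining itself reduces to a brute-force search, which is why parallel hardware dominates it. A simplified Python sketch using SHA-256, with a deliberately low difficulty so it finishes instantly (real networks use vastly higher difficulty and different target rules):

```python
import hashlib

# Simplified proof-of-work: find a nonce such that SHA-256(block + nonce)
# starts with `difficulty` zero hex digits. Real miners test billions of
# nonces per second, and each test is independent -- ideal parallel work.

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force the smallest nonce meeting the difficulty target."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

nonce = mine("example block", difficulty=3)  # toy difficulty for the demo
digest = hashlib.sha256(f"example block{nonce}".encode()).hexdigest()
assert digest.startswith("000")
```

Since every nonce can be tested without knowing the result of any other, the search splits cleanly across thousands of GPU cores, each probing its own range of nonces.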
Now, let’s examine the primary rendering techniques handled by GPUs. First, anti-aliasing reduces the “staircase effect” that appears along the edges of images: on a low-resolution screen, the pixels of diagonal or curved edges look jagged, like stair steps. One of the most common solutions is 4X Supersampling Anti-Aliasing (SSAA). This method renders the scene at four times the pixel count, computing four samples for each final pixel, and then averages the samples back down to the display resolution. Its drawback is the heavy computational cost, which can strain the GPU. Multi-Sample Anti-Aliasing (MSAA) addresses this by taking extra samples only where the edges of 3D objects cross a pixel, rather than across the entire screen, striking a balance between performance and image quality.
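The downsampling step of 4X supersampling can be sketched directly: render at twice the width and height (four samples per final pixel), then average each 2×2 block. A minimal Python illustration on a hard diagonal edge:

```python
# 4X SSAA's resolve step: average each 2x2 block of a high-res render
# into one output pixel, turning hard edges into intermediate shades.

def downsample_2x(image):
    """Average each 2x2 block of a 2x-resolution image into one pixel."""
    h, w = len(image), len(image[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            total = (image[y][x] + image[y][x + 1]
                     + image[y + 1][x] + image[y + 1][x + 1])
            row.append(total / 4)
        out.append(row)
    return out

# A hard diagonal edge rendered at 2x resolution (0 = black, 1 = white)
hi_res = [
    [1, 1, 1, 1],
    [0, 1, 1, 1],
    [0, 0, 1, 1],
    [0, 0, 0, 1],
]
lo_res = downsample_2x(hi_res)
# Edge pixels land on intermediate grays, softening the staircase effect.
assert lo_res == [[0.75, 1.0], [0.0, 0.75]]
```

The cost is visible in the numbers: sixteen shaded samples to produce four output pixels, which is exactly the 4X overhead that MSAA avoids on pixels interior to a surface.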
Next are Ambient Occlusion (AO) and Ray Tracing (RT). Both are rendering techniques concerned with light, but they differ in approach. Ambient occlusion approximates the way brightness falls off in places light has difficulty reaching, such as gaps, edges, and corners of objects; this adds natural soft shadowing that enhances depth and realism. Ray tracing, on the other hand, renders light reflection, refraction, and shadows more precisely. It comes in two forms: forward ray tracing, which traces rays outward from the light source, and backward ray tracing, which traces rays from the viewer’s eye back into the scene. Forward tracing models natural phenomena more faithfully but is inefficient, because it traces many rays that never reach the eye. Backward tracing follows only the rays along the viewing direction, making it far more computationally efficient. The heavy cost of ray tracing once made real-time use impractical, but recent GPUs such as NVIDIA’s RTX series, equipped with dedicated RT cores, can now process it efficiently. As a result, realistic reflections and shadows are achievable in real-time game environments.
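The core primitive of backward ray tracing is the ray–object intersection test: a ray is fired from the camera through a pixel, and the renderer asks what it hits first. A minimal Python sketch of ray–sphere intersection, using the standard quadratic-formula derivation (the scene values are illustrative):

```python
import math

# Backward ray tracing fires rays from the camera through each pixel.
# The basic test is ray-sphere intersection: solve |o + t*d - c|^2 = r^2
# for the nearest t > 0 along the ray.

def ray_sphere(origin, direction, center, radius):
    """Return distance t to the nearest hit, or None if the ray misses."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# Camera at the origin looking down +z at a unit sphere 5 units away
assert ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1) == 4.0
# A ray pointing straight up misses it
assert ray_sphere((0, 0, 0), (0, 1, 0), (0, 0, 5), 1) is None
```

A real renderer runs this test (against many objects, via acceleration structures) for millions of rays per frame, one per pixel plus reflection and shadow rays, which is the workload RT cores accelerate in hardware.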
Current-generation GPUs can stably render over 60 frames per second at 1920×1080 resolution (Full HD) even with physical effects, lighting effects, and particle effects applied. However, maintaining the same quality at 4K UHD (3840×2160) resolution while achieving over 60 frames per second still requires significant computational resources. As more gamers and users demand higher resolution screens, the demand for high-performance GPUs is expected to continue to grow.
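The arithmetic behind that demand is straightforward: shading work scales with pixels per frame times frames per second, and 4K has exactly four times the pixels of Full HD:

```python
# Pixel throughput at 60 fps: why 4K is so much more demanding than Full HD.

fhd = 1920 * 1080 * 60   # Full HD at 60 fps
uhd = 3840 * 2160 * 60   # 4K UHD at 60 fps

assert fhd == 124_416_000   # ~124 million pixels shaded per second
assert uhd == 497_664_000   # ~498 million pixels shaded per second
assert uhd == 4 * fhd       # 4K needs exactly 4x the pixel throughput
```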
In this way, GPUs have evolved from simple graphics-only devices to become core computing devices in a wide range of fields, including gaming, science, engineering, artificial intelligence, and cryptocurrency mining. Recently, high-performance GPUs have been integrated into smartphones, enabling high-quality graphics processing in mobile environments such as photo processing, gaming, and video editing. Next-generation smartphones and computer devices are also expected to see significant performance improvements alongside advancements in GPU technology. GPU technology continues to evolve, and its potential remains vast. GPUs will be at the heart of the virtual reality, artificial intelligence, and high-resolution video technologies we will encounter in the future.