Symmetric Multiprocessing (SMP) and Asymmetric Multiprocessing (AMP)

Introduction

With multicore processors, different techniques can be deployed to improve overall system performance. There are two main types of multiprocessing: Symmetric Multiprocessing (SMP) and Asymmetric Multiprocessing (AMP).

In SMP, all processors or CPU cores are considered equal and share the same system resources, such as the operating system, the address space of main memory, and the input/output (I/O) or peripheral devices. In AMP, by contrast, each processor or CPU core has its own software and processes applications independently of the other processors or cores.

Typically, SMP is used when applications need more CPU power to handle their workloads, because SMP systems can execute tasks in parallel, which reduces overall processing time and increases system throughput. AMP, on the other hand, offers simplicity in design and implementation and can still improve overall system performance and efficiency.

Using SMP

Symmetric multiprocessing is typically used in high-end computing environments that require large amounts of computing power to perform application tasks and processes, and most multiprocessor systems use the SMP architecture. SMP is most useful for timesharing and multithreaded systems. Timesharing is the distribution of computing resources to multiple users simultaneously. Multithreading is a feature of the central processing unit (CPU) that allows a single process to perform multiple tasks simultaneously; more specifically, it allows multiple threads of instructions to execute independently while sharing the same processing resources. Timesharing operating systems benefit from SMP because computing resources can be distributed among multiple users and multiple processes can run in parallel on different processing units. Multithreaded workloads benefit for the same reason: SMP divides the threads among the available processors. However, SMP generally offers little benefit on PCs or with applications that have not been written for multithreaded programming. Applications and programs must be designed with multithreading in mind so that their threads can be scheduled on different parallel processors.
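As a minimal sketch (assuming a Linux system with the GNU toolchain and pthreads), the following program creates a few threads and has each one report which CPU core the kernel scheduled it on; on an SMP machine the threads are typically spread across different cores. The thread count of 4 is an arbitrary choice for illustration.

```c
/* Minimal sketch: a multithreaded program on an SMP Linux system.
 * Each thread reports which CPU core the kernel scheduled it on. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NUM_THREADS 4   /* arbitrary illustrative thread count */

static void *worker(void *arg)
{
    long id = (long)arg;
    /* sched_getcpu() returns the CPU this thread is currently running on */
    printf("thread %ld running on CPU %d\n", id, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t threads[NUM_THREADS];

    for (long i = 0; i < NUM_THREADS; i++)
        pthread_create(&threads[i], NULL, worker, (void *)i);

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(threads[i], NULL);

    return 0;
}
```

Built with gcc -pthread, the output usually shows the threads distributed across several cores, which is exactly the behavior SMP scheduling is designed to provide.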

Sharing Memory Space and I/O Across CPUs with SMP

Symmetric Multiprocessing in Single OS Systems – Linux Utilizes Multiple CPU Cores

Most often, a computing system runs a single operating system, like Linux. In SMP, the operating system can take advantage of the multiple processors or CPU cores to run the various applications of the system.
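As a small illustration (assuming Linux and glibc), a program can ask the single OS image how many CPU cores it has available for scheduling; on an SMP system the kernel reports all the cores it manages as one shared pool.

```c
/* Minimal sketch: asking a single Linux OS image how many CPU cores
 * it can schedule work on. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long online = sysconf(_SC_NPROCESSORS_ONLN);  /* cores currently online */
    long total  = sysconf(_SC_NPROCESSORS_CONF);  /* cores configured in total */

    printf("online CPU cores: %ld (of %ld configured)\n", online, total);
    return 0;
}
```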

Symmetric Multiprocessing in Hypervisor Solutions – Real-time Linux Utilizes Multiple CPU Cores

Deploying multiple operating systems on a single computing system is becoming more and more common, especially for industrial applications. In a hypervisor solution using SMP, multiple different operating systems (for example Windows + real-time Linux) can each utilize a group of multiple processors or CPU cores independently from one another.

Using AMP

Asymmetric multiprocessing provides an execution environment very similar to that of a traditional single-processor system: it offers a relatively easy way to port legacy code and a direct mechanism for controlling CPU usage. In most cases you can work with standard debugging tools and techniques. AMP systems can be either homogeneous (each CPU running the same type and version of operating system) or heterogeneous (each CPU running a different operating system or a different version of the same operating system). Furthermore, AMP is most likely to be used when different CPU architectures are optimal for specific activities, such as a Digital Signal Processor (DSP) and a microcontroller (MCU). In an AMP system, there is an opportunity to deploy a different operating system on each processor or CPU core.

If your operating system supports a particular distributed programming model, you can take full advantage of multiple CPUs in a homogeneous environment. Applications that run on a specific CPU can communicate transparently with applications and system services (for example, protocol stacks, device drivers, etc.) on other CPUs, without the high CPU load imposed by traditional inter-processor communication. In heterogeneous systems, you must either choose two operating systems that share a common infrastructure (most commonly IP-based) or implement a proprietary scheme for inter-processor communication. The operating systems should also provide mechanisms for accessing shared hardware components to help avoid resource conflicts. With AMP, the shared hardware resources used by applications have to be divided up between the CPUs. Resources such as peripherals, physical memory, and interrupt handling are typically allocated statically at boot time, because allocating them dynamically would require complex coordination between the CPUs. In an AMP system, a process always runs on the same CPU, even when other CPUs are idle; as a result, one CPU can be starved while another is overloaded. To address this issue, some systems allow applications to migrate dynamically from one CPU to another. However, this can involve complex checking of state information and can disrupt service if an application is stopped on one CPU and restarted on another. Moreover, if the CPUs are running different operating systems, such a migration is difficult, if not impossible.
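As a rough sketch of the static-assignment idea (assuming Linux and glibc; the CPU index 1 is an arbitrary illustrative choice), a process can be bound to a single core with sched_setaffinity() so that it always runs on that core regardless of what the other cores are doing, much like the fixed CPU assignment of an AMP configuration.

```c
/* Rough sketch: statically binding the current process to one CPU core,
 * mimicking the fixed core assignment used in AMP-style designs. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    CPU_SET(1, &set);   /* restrict this process to CPU core 1 (illustrative) */

    if (sched_setaffinity(0, sizeof(set), &set) != 0) {
        perror("sched_setaffinity");
        return 1;
    }

    printf("now pinned to CPU %d\n", sched_getcpu());
    return 0;
}
```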

Partitioning Memory and I/O on Different CPUs with AMP

Asymmetric Multiprocessing in Hypervisor Solutions – Multiple Linux instances

In a hypervisor solution running multiple Linux operating systems, the hypervisor can partition the hardware so that each instance of the Linux OS is assigned to a specific CPU core with specific resources.

Asymmetric Multiprocessing in Hypervisor Solutions – Multiple Real-time Linux Instances

In a real-time hypervisor solution with multiple real-time Linux operating systems, the hypervisor must partition the hardware and ensure that each instance of real-time Linux is assigned a specific CPU core and specific resources.