
Article published on Mon, 11 Dec 2023 21:13:48 GMT

Written by Bart Nijbakker


Have you ever wondered how a computer with limited resources is able to do so many tasks at once? (hint: by sharing time)


How Computers Do Multitasking

Our modern computers are incredible machines. They can process insane amounts of data, sometimes orders of magnitude faster than humans. But although we keep making processors faster and give them more and more cores, it is almost never enough for true multitasking. So what are some of the tricks we use to make our limited processing power exceed its apparent limitations?

0. The basics

Firstly, let me explain some of the terms used in this article. I will use analogies to describe their functions, so these descriptions should not be considered technically accurate.

- A core is like a single standalone worker inside the processor: a unit that can carry out one stream of instructions on its own.
- A thread is like a to-do list handed to such a worker: a sequence of instructions that a core works through.
- A process is a running program, which may consist of one or more threads.

Now that these concepts are explained, let's look at the most obvious form of multitasking.

1. More cores!

Quite obvious, if you read the definition. If a core is like a single standalone processing unit, why not make more of them? This is exactly what computer manufacturers do, especially since the speed of a single core is harder and harder to increase due to physical limitations.

Having multiple cores, in combination with the concept of threads, leads to the following:

2. Multithreading

As the name implies, this method lets multiple threads run on multiple cores (or take turns on a single core) to speed up the work as a whole.

This is a divide-and-conquer type of method. Big, complicated operations can often be divided into small tasks which complete quickly and can be run simultaneously.

This is likely the truest form of multitasking, as multiple threads are actually run next to one another.
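To make this concrete, here is a minimal sketch in Python (my own, not from any particular program) that divides one big sum into chunks and hands them to a pool of worker threads. One caveat: in CPython, threads doing pure computation do not truly run in parallel because of the global interpreter lock, so treat this as an illustration of the divide-and-conquer structure rather than a guaranteed speedup.

```python
from concurrent.futures import ThreadPoolExecutor

def partial_sum(chunk):
    # each worker handles one small piece of the big task
    return sum(chunk)

numbers = list(range(1_000_000))
# split the big job into four smaller ones
chunks = [numbers[i:i + 250_000] for i in range(0, len(numbers), 250_000)]

with ThreadPoolExecutor(max_workers=4) as pool:
    # the four partial sums can be worked on side by side
    total = sum(pool.map(partial_sum, chunks))

print(total)  # same answer as summing the whole list in one go
```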

An added benefit of multithreading

By using multithreading, software can improve the user experience. A program can split off its time-consuming tasks and keep them separate from important real-time work like maintaining the user interface. Because the interface never has to wait on the heavy work, it stays responsive and does not freeze as easily!
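As a small illustrative sketch (the names are my own), a worker thread can handle a slow job while the main thread, playing the role of the user interface, keeps responding:

```python
import threading
import queue
import time

results = queue.Queue()

def slow_task():
    # a time-consuming job, running off the main thread
    time.sleep(0.2)
    results.put("done")

worker = threading.Thread(target=slow_task)
worker.start()

# meanwhile the "user interface" (the main thread) keeps responding
while results.empty():
    time.sleep(0.05)  # stand-in for redrawing the screen

worker.join()
outcome = results.get()
print("slow task finished:", outcome)
```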

3. Time-sharing

A single thread may not need the continuous attention of the CPU; for example, it may be waiting for data to arrive from a slow hard drive. During this time, another part of the process (or even another program entirely) can take over and do its job instead.

This method is essential because of the limited number of cores. Modern CPUs can have dozens of cores, but that was not always the case: early processors had only a single core available! That made any form of multitasking quite difficult unless time-sharing was used.
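A minimal way to see time-sharing in action is Python's asyncio event loop: a single thread on a single core runs two tasks that hand over the CPU whenever they would otherwise be waiting. This sketch is purely illustrative; real operating systems share time between whole processes in a similar spirit.

```python
import asyncio

order = []

async def task(name):
    for _ in range(3):
        order.append(name)
        # "waiting for data": the task yields the CPU so another can run
        await asyncio.sleep(0)

async def main():
    # one thread, one core: the event loop shares time between both tasks
    await asyncio.gather(task("A"), task("B"))

asyncio.run(main())
print(order)  # the two tasks take turns instead of running back to back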

4. Suspending

As mentioned under time-sharing, a process is not always active. Sometimes it may need to wait for a longer time, in which case it will suspend itself. This means the process will "go to sleep", and only wake up once it can move on. This frees up precious CPU time for other tasks to continue.
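Here is a small sketch of suspending, using Python's threading.Event: the consumer thread goes to sleep, consuming no CPU time, until it is explicitly woken up. The names are illustrative only.

```python
import threading
import time

data_ready = threading.Event()
received = []

def consumer():
    # the thread "goes to sleep" here, using no CPU time,
    # until someone signals that it can move on
    data_ready.wait()
    received.append("woke up and handled the data")

t = threading.Thread(target=consumer)
t.start()

time.sleep(0.1)   # other work happens while the consumer sleeps
data_ready.set()  # wake the sleeping thread
t.join()
print(received[0])
```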

5. Pipelining

Another way to speed up the work is by dedicating certain parts of a core to specific tasks. This forms a pipeline, much like a production line in a factory. Each part of the pipeline performs its own job, which allows multiple tasks to be in progress at once, in order.
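CPU pipelining happens inside the hardware of a core, but the same production-line idea can be sketched in software. In this illustrative Python example (my own construction), three stages run as threads connected by queues, each doing its own job as items flow past:

```python
import threading
import queue

DONE = object()  # sentinel marking the end of the stream
q1, q2 = queue.Queue(), queue.Queue()
finished = []

def stage_fetch():
    # stage 1: produce work items
    for n in [1, 2, 3]:
        q1.put(n)
    q1.put(DONE)

def stage_compute():
    # stage 2: transform items as they arrive, like a factory station
    while (item := q1.get()) is not DONE:
        q2.put(item * 10)
    q2.put(DONE)

def stage_store():
    # stage 3: collect the results; all three stages run at once
    while (item := q2.get()) is not DONE:
        finished.append(item)

threads = [threading.Thread(target=s)
           for s in (stage_fetch, stage_compute, stage_store)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(finished)
```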

6. Multiple threads on a single core

Cores sometimes work faster than the rate at which work is delivered to them. This forces them to wait, wasting precious time. Adding a second "lane" over which tasks are sent lets the core wait less and work more; the technique is known as simultaneous multithreading (SMT). This explains why a CPU may advertise 6 cores but 12 threads, for example.

7. Bonus: extreme multithreading in GPUs

Not every computation is made equal. Some tasks (such as graphics processing or AI training) require such immense effort that splitting them into many small pieces becomes extremely effective.

This is why the special Graphics Processing Unit (GPU) was invented. GPUs often have hundreds or thousands of tiny cores. Although less powerful on their own, these cores can do certain tasks very well.

Besides having more cores, GPUs are different in another way: they use a different computing method. CPU cores traditionally work in a SISD fashion (Single Instruction, Single Data), which makes them very effective at sequential tasks. GPUs use SIMD (Single Instruction, Multiple Data): many cores execute the same calculation on multiple pieces of data at once. This is further explained in this SuperUser answer.
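Real SIMD happens inside hardware registers, so it cannot be shown directly in plain Python, but the idea can be mirrored conceptually: instead of performing one addition per step (SISD-style), we express a single "add" operation applied to every lane of data at once. This is only an analogy, not actual SIMD execution:

```python
# SISD: one instruction handles one piece of data at a time
a = [1, 2, 3, 4]
b = [10, 20, 30, 40]

sisd_result = []
for x, y in zip(a, b):
    sisd_result.append(x + y)   # each addition is its own step

# SIMD, conceptually: one "add" applied to every lane at once
# (real SIMD executes in hardware registers; this only mirrors the idea)
simd_result = [x + y for x, y in zip(a, b)]

print(simd_result)
```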

Conclusion

Hopefully these points have shed some light on the hidden magic of computer multitasking. It should be mentioned that there are more methods than those covered here, each more creative and brilliant than the last. If you know of any worth adding, feel free to send them in.


Web design and content © 2024 Bart Nijbakker.
This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.