All replies
A multitasking operating system divides the available processor time among the processes or threads that need it. The system is designed for preemptive multitasking; it allocates a processor time slice to each thread it executes. The currently executing thread is suspended when its time slice elapses, allowing another thread to run. When the system switches from one thread to another, it saves the context of the preempted thread and restores the saved context of the next thread in the queue.
The length of the time slice depends on the operating system and the processor. Because each time slice is small (approximately 20 milliseconds), multiple threads appear to be executing at the same time. On multiprocessor systems this is actually the case, because runnable threads are distributed among the available processors. However, you must use caution when creating multiple threads in an application, because system performance can degrade if there are too many threads.
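The behavior described above can be sketched with a small Python program (names such as `worker` are illustrative, not from any particular API). Two threads are made runnable at the same time; the scheduler divides processor time between them, suspending one and resuming the other at time-slice boundaries, and both eventually finish their work:

```python
import threading

# Shared dictionary to collect each worker's result.
results = {}

def worker(name, iterations):
    # A stand-in for real work; while this loop runs, the scheduler
    # may preempt the thread at any time-slice boundary, save its
    # context, and run another thread.
    count = 0
    for _ in range(iterations):
        count += 1
    results[name] = count

threads = [
    threading.Thread(target=worker, args=("a", 100000)),
    threading.Thread(target=worker, args=("b", 100000)),
]

for t in threads:
    t.start()   # both threads are now runnable; processor time is shared
for t in threads:
    t.join()    # wait until each thread has used its final time slice

print(results)  # both workers completed their full iteration count
```

Because the operating system handles the preemption and context switching, the program itself contains no scheduling logic; the two workers simply appear to run at the same time.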
It should be noted that this technique is not new. Even back in the 80s, machines like the Commodore 64 (my first computer) implemented a basic form of multitasking.
Its basic function is to make it appear that several things are happening at the same time, even though the processor can only execute code from a single thread at any given instant. Early computers such as the C64, the Spectrum, and others had relatively slow processors, so this technique was only used for basic tasks such as reading the keyboard and joysticks.
As processors became faster, it became possible to run more than two threads that appear to execute simultaneously; indeed, modern CPUs and operating systems can handle a thousand or more such threads. Another development has been dynamic time-slice allocation. The C64 switched between its two threads every 60th of a second, and that rate was fixed. Modern systems use time slices more flexibly: instead of switching threads on every slice, a thread can execute for several consecutive time-slice periods (up to a certain limit). With the advent of true multi-core processors, each core can work on related (or even unrelated) threads simultaneously, giving systems an apparent performance boost (if the OS supports it).