[Note: This document is from MSDN, and is currently available at:
http://msdn.microsoft.com/library/sdkdoc/winbase/prothred_0n03.htm
I present it here only in case MS moves it. Sorry, the links in this version don't
work, but it's good for printing.]
Each process provides the resources needed to execute a program. A process has a virtual address space, executable code, data, object handles, environment variables, a base priority, and minimum and maximum working set sizes. Each process is started with a single thread, often called the primary thread, but can create additional threads from any of its threads.
All threads of a process share its virtual address space and system resources. In addition, each thread maintains exception handlers, a scheduling priority, and a set of structures the system will use to save the thread context until it is scheduled. The thread context includes the thread's set of machine registers, the kernel stack, a thread environment block, and a user stack in the address space of the thread's process.
Windows NT/2000 and Windows 95/98 support preemptive multitasking, which creates the effect of simultaneous execution of multiple threads from multiple processes. On a multiprocessor computer, Windows NT/2000 can simultaneously execute as many threads as there are processors on the computer.
This overview discusses the following topics:
A multitasking operating system divides the available processor time among the processes or threads that need it. The system is designed for preemptive multitasking; it allocates a processor time slice to each thread it executes. The currently executing thread is suspended when its time slice elapses, allowing another thread to run. When the system switches from one thread to another, it saves the context of the preempted thread and restores the saved context of the next thread in the queue.
The length of the time slice depends on the operating system and the processor. Because each time slice is small (approximately 20 milliseconds), multiple threads appear to be executing at the same time. This is actually the case on multiprocessor systems, where the executable threads are distributed among the available processors. However, you must use caution when using multiple threads in an application, because system performance can decrease if there are too many threads.
To the user, the advantage of multitasking is the ability to have several applications open and working at the same time. For example, a user can edit a file with one application while another application is recalculating a spreadsheet.
To the application developer, the advantage of multitasking is the ability to create applications that use more than one process and to create processes that use more than one thread of execution. For example, a process can have a user interface thread that manages interactions with the user (keyboard and mouse input), and worker threads that perform other tasks while the user interface thread waits for user input. If you give the user interface thread a higher priority, the application will be more responsive to the user, while the worker threads use the processor efficiently during the times when there is no user input.
There are two ways to implement multitasking: as a single process with multiple threads or as multiple processes, each with one or more threads. An application can put each thread that requires a private address space and private resources into its own process, to protect it from the activities of other process threads.
A multithreaded process can manage mutually exclusive tasks with threads, such as providing a user interface and performing background calculations. Creating a multithreaded process can also be a convenient way to structure a program that performs several similar or identical tasks concurrently. For example, a named pipe server can create a thread for each client process that attaches to the pipe. This thread manages the communication between the server and the client. Your process could use multiple threads to accomplish the following tasks:
It is typically more efficient for an application to implement multitasking by creating a single, multithreaded process, rather than creating multiple processes, for the following reasons:
The Win32 API also provides alternative methods that can be used in the place of multithreading. The most significant of these methods are asynchronous input and output (I/O), I/O completion ports, asynchronous procedure calls (APC), and the ability to wait for multiple events.
A single thread can initiate multiple time-consuming I/O requests that can run concurrently using asynchronous I/O. Asynchronous I/O can be performed on files, pipes, and serial communication devices. For more information, see Synchronization and Overlapped Input and Output.
A single thread can block its own execution while waiting for any one or all of several events to occur. This is more efficient than using multiple threads, each waiting for a single event, and more efficient than using a single thread that consumes processor time by continually checking for events to occur. For more information, see Wait Functions.
The recommended guideline is to use as few threads as possible, thereby minimizing the use of system resources. This improves performance. Multitasking has resource requirements and potential conflicts to be considered when designing your application. The resource requirements are as follows:
Providing shared access to resources can create conflicts. To avoid them, you must synchronize access to shared resources. This is true for system resources (such as communications ports), resources shared by multiple processes (such as file handles), or the resources of a single process (such as global variables) accessed by multiple threads. Failure to synchronize access properly (in the same or in different processes) can lead to problems such as deadlock and race conditions. The Win32 API provides a set of synchronization objects and functions you can use to coordinate resource sharing among multiple threads. For more information about synchronization, see Synchronizing Execution of Multiple Threads. Reducing the number of threads makes it easier and more effective to synchronize resources.
A good design for a multithreaded application is the pipeline server. In this design, you create one thread per processor and build queues of requests for which the application maintains the context information. A thread would process all requests in a queue before processing requests in the next queue.
The system scheduler controls multitasking by determining which of the competing threads receives the next processor time slice. The scheduler determines which thread runs next using its scheduling priority.
This section discusses the following topics:
Each thread is assigned a scheduling priority. The priority levels range from zero (lowest priority) to 31 (highest priority). Only the zero-page thread, a system thread responsible for zeroing free pages, can have a priority of zero.
The priority of each thread is determined by the priority class of its process and the priority level of the thread within that class. The priority class and priority level are combined to form the base priority of the thread.
Each process belongs to one of the following priority classes:
IDLE_PRIORITY_CLASS
BELOW_NORMAL_PRIORITY_CLASS
NORMAL_PRIORITY_CLASS
ABOVE_NORMAL_PRIORITY_CLASS
HIGH_PRIORITY_CLASS
REALTIME_PRIORITY_CLASS
Windows NT 5.0: Windows NT 5.0 introduces BELOW_NORMAL_PRIORITY_CLASS and ABOVE_NORMAL_PRIORITY_CLASS.
By default, the priority class of a process is NORMAL_PRIORITY_CLASS. Use the CreateProcess function to specify the priority class of a child process when you create it. If the calling process is IDLE_PRIORITY_CLASS or BELOW_NORMAL_PRIORITY_CLASS, the new process will inherit this class. Use the GetPriorityClass function to determine the current priority class of a process and the SetPriorityClass function to change the priority class of a process.
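As a minimal sketch (not part of the original article), a process could lower its own priority class while performing background work and then restore it; GetCurrentProcess returns a pseudo handle to the calling process:

```c
#include <windows.h>

// Sketch: temporarily drop this process to IDLE_PRIORITY_CLASS for a
// background task, then restore the previous priority class.
void DoBackgroundWork(void)
{
    HANDLE hProcess = GetCurrentProcess();      // pseudo handle to this process
    DWORD dwOldClass = GetPriorityClass(hProcess);

    if (dwOldClass != 0 && SetPriorityClass(hProcess, IDLE_PRIORITY_CLASS))
    {
        // ... perform the low-priority background work here ...

        SetPriorityClass(hProcess, dwOldClass); // restore the original class
    }
}
```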
Processes that monitor the system, such as screen savers or applications that periodically update a display, should use IDLE_PRIORITY_CLASS. This prevents the threads of this process, which do not have high priority, from interfering with higher priority threads.
Use HIGH_PRIORITY_CLASS with care. If a thread runs at the highest priority level for extended periods, other threads in the system will not get processor time. If several threads are set at high priority at the same time, the threads lose their effectiveness. The high-priority class should be reserved for threads that must respond to time-critical events. If your application performs one task that requires the high-priority class while the rest of its tasks are normal priority, use SetPriorityClass to raise the priority class of the application temporarily; then reduce it after the time-critical task has been completed. Another strategy is to create a high-priority process that has all of its threads blocked most of the time, awakening threads only when critical tasks are needed. The important point is that a high-priority thread should execute for a brief time, and only when it has time-critical work to perform.
You should almost never use REALTIME_PRIORITY_CLASS, because this interrupts system threads that manage mouse input, keyboard input, and background disk flushing. This class can be appropriate for applications that "talk" directly to hardware or that perform brief tasks that should have limited interruptions.
The following are the priority levels within each priority class:
THREAD_PRIORITY_IDLE
THREAD_PRIORITY_LOWEST
THREAD_PRIORITY_BELOW_NORMAL
THREAD_PRIORITY_NORMAL
THREAD_PRIORITY_ABOVE_NORMAL
THREAD_PRIORITY_HIGHEST
THREAD_PRIORITY_TIME_CRITICAL
All threads are created using THREAD_PRIORITY_NORMAL. This means that the thread priority is the same as the process priority class. After you create a thread, use the SetThreadPriority function to adjust its priority relative to other threads in the process.
A typical strategy is to use THREAD_PRIORITY_ABOVE_NORMAL or THREAD_PRIORITY_HIGHEST for the process's input thread, to ensure that the application is responsive to the user. Background threads, particularly those that are processor intensive, can be set to THREAD_PRIORITY_BELOW_NORMAL or THREAD_PRIORITY_LOWEST, to ensure that they can be preempted when necessary. However, if you have a thread waiting for another thread with a lower priority to complete some task, be sure to block the execution of the waiting high-priority thread. To do this, use a wait function, a critical section, or the Sleep, SleepEx, or SwitchToThread function. This is preferable to having the thread spin in a loop; otherwise, the process may become deadlocked, because the thread with the lower priority is never scheduled.
To determine the current priority level of a thread, use the GetThreadPriority function.
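For illustration, the following sketch (not from the original article) lowers a worker thread's priority relative to the rest of the process and reads the setting back; hWorker is assumed to be a handle returned by CreateThread:

```c
#include <windows.h>

// Sketch: lower a worker thread's priority so it is easily preempted,
// then read the current priority back for verification.
void LowerWorkerPriority(HANDLE hWorker)   // hWorker: handle from CreateThread
{
    int nPriority;

    if (!SetThreadPriority(hWorker, THREAD_PRIORITY_BELOW_NORMAL))
    {
        // handle the error, for example by calling GetLastError()
        return;
    }

    nPriority = GetThreadPriority(hWorker);
    // nPriority is now THREAD_PRIORITY_BELOW_NORMAL unless another
    // thread has changed it in the meantime.
}
```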
The base priority of a thread is determined by both the priority class of its process and the thread's priority level; the two are combined to form the base priority of each thread.
The following table shows the base priority levels for combinations of priority class and priority value.
Base priority | Process priority class | Thread priority level |
---|---|---|
1 | IDLE_PRIORITY_CLASS | THREAD_PRIORITY_IDLE |
1 | BELOW_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_IDLE |
1 | NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_IDLE |
1 | ABOVE_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_IDLE |
1 | HIGH_PRIORITY_CLASS | THREAD_PRIORITY_IDLE |
2 | IDLE_PRIORITY_CLASS | THREAD_PRIORITY_LOWEST |
3 | IDLE_PRIORITY_CLASS | THREAD_PRIORITY_BELOW_NORMAL |
4 | IDLE_PRIORITY_CLASS | THREAD_PRIORITY_NORMAL |
4 | BELOW_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_LOWEST |
5 | IDLE_PRIORITY_CLASS | THREAD_PRIORITY_ABOVE_NORMAL |
5 | BELOW_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_BELOW_NORMAL |
5 | Background NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_LOWEST |
6 | IDLE_PRIORITY_CLASS | THREAD_PRIORITY_HIGHEST |
6 | BELOW_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_NORMAL |
6 | Background NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_BELOW_NORMAL |
7 | BELOW_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_ABOVE_NORMAL |
7 | Background NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_NORMAL |
7 | Foreground NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_LOWEST |
8 | BELOW_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_HIGHEST |
8 | NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_ABOVE_NORMAL |
8 | Foreground NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_BELOW_NORMAL |
8 | ABOVE_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_LOWEST |
9 | NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_HIGHEST |
9 | Foreground NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_NORMAL |
9 | ABOVE_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_BELOW_NORMAL |
10 | Foreground NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_ABOVE_NORMAL |
10 | ABOVE_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_NORMAL |
11 | Foreground NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_HIGHEST |
11 | ABOVE_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_ABOVE_NORMAL |
11 | HIGH_PRIORITY_CLASS | THREAD_PRIORITY_LOWEST |
12 | ABOVE_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_HIGHEST |
12 | HIGH_PRIORITY_CLASS | THREAD_PRIORITY_BELOW_NORMAL |
13 | HIGH_PRIORITY_CLASS | THREAD_PRIORITY_NORMAL |
14 | HIGH_PRIORITY_CLASS | THREAD_PRIORITY_ABOVE_NORMAL |
15 | HIGH_PRIORITY_CLASS | THREAD_PRIORITY_HIGHEST |
15 | HIGH_PRIORITY_CLASS | THREAD_PRIORITY_TIME_CRITICAL |
15 | IDLE_PRIORITY_CLASS | THREAD_PRIORITY_TIME_CRITICAL |
15 | BELOW_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_TIME_CRITICAL |
15 | NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_TIME_CRITICAL |
15 | ABOVE_NORMAL_PRIORITY_CLASS | THREAD_PRIORITY_TIME_CRITICAL |
16 | REALTIME_PRIORITY_CLASS | THREAD_PRIORITY_IDLE |
22 | REALTIME_PRIORITY_CLASS | THREAD_PRIORITY_LOWEST |
23 | REALTIME_PRIORITY_CLASS | THREAD_PRIORITY_BELOW_NORMAL |
24 | REALTIME_PRIORITY_CLASS | THREAD_PRIORITY_NORMAL |
25 | REALTIME_PRIORITY_CLASS | THREAD_PRIORITY_ABOVE_NORMAL |
26 | REALTIME_PRIORITY_CLASS | THREAD_PRIORITY_HIGHEST |
31 | REALTIME_PRIORITY_CLASS | THREAD_PRIORITY_TIME_CRITICAL |
The scheduler maintains a queue of executable threads for each priority level. These are known as ready threads. When a processor becomes available, the system performs a context switch. The steps in a context switch are:
Threads that are suspended or blocked (for example, waiting for I/O to complete or for a synchronization object to become signaled) are not ready threads. Until such threads become ready to run, the scheduler does not allocate any processor time to them, regardless of their priority.
The most common reasons for a context switch are that the running thread's time slice has elapsed, that a thread with a higher priority has become ready to run, or that the running thread needs to wait. When a running thread needs to wait, it relinquishes the remainder of its time slice.
Each thread has a dynamic priority. This is the priority the scheduler uses to determine which thread to execute. Initially, a thread's dynamic priority is the same as its base priority. The system can boost and lower the dynamic priority, to ensure that it is responsive and that no threads are starved for processor time. The system does not boost the priority of threads with a base priority level between 16 and 31. Only threads with a base priority between 0 and 15 receive dynamic priority boosts.
The system boosts the dynamic priority of a thread to enhance its responsiveness as follows.
Windows NT/2000: The user can control the boosting of processes that use NORMAL_PRIORITY_CLASS through the System control panel application.
Windows NT/2000: You can disable the priority-boosting feature by calling the SetProcessPriorityBoost or SetThreadPriorityBoost function. To determine whether this feature has been disabled, call the GetProcessPriorityBoost or GetThreadPriorityBoost function.
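A minimal sketch (Windows NT/2000 only, not from the original article) of disabling and then confirming dynamic priority boosts for the calling thread:

```c
#include <windows.h>

// Sketch (Windows NT/2000): turn off dynamic priority boosting for the
// calling thread and confirm the setting.
void DisableBoostForCurrentThread(void)
{
    HANDLE hThread = GetCurrentThread();        // pseudo handle to the current thread
    BOOL bDisabled = FALSE;

    if (SetThreadPriorityBoost(hThread, TRUE))  // TRUE = disable boosting
    {
        if (GetThreadPriorityBoost(hThread, &bDisabled))
        {
            // bDisabled is now TRUE: the scheduler will not boost this
            // thread's dynamic priority.
        }
    }
}
```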
After raising a thread's dynamic priority, the scheduler reduces that priority by one level each time the thread completes a time slice, until the thread drops back to its base priority. A thread's dynamic priority is never less than its base priority.
Priority inversion occurs when two or more threads with different priorities are in contention to be scheduled. Consider a simple case with three threads: thread 1, thread 2, and thread 3. Thread 1 is high priority and becomes ready to be scheduled. Thread 2, a low-priority thread, is executing code in a critical section. Thread 1, the high-priority thread, begins waiting for a shared resource from thread 2. Thread 3 has medium priority. Thread 3 receives all the processor time, because the high-priority thread (thread 1) is waiting for shared resources from the low-priority thread (thread 2). Thread 2 won't leave the critical section, because it does not have the highest priority and won't be scheduled.
Windows NT uses a symmetric multiprocessing (SMP) model to schedule threads on multiple processors. With this model, any thread can be assigned to any processor. Therefore, scheduling threads on a computer with multiple processors is similar to scheduling threads on a computer with a single processor. However, the scheduler has a pool of processors, so that it can schedule threads to run concurrently. Scheduling is still determined by thread priority. However, on a multiprocessor computer, you can also affect scheduling by setting thread affinity and thread ideal processor, as discussed here.
Thread affinity forces a thread to run on a specific subset of processors. Use the SetProcessAffinityMask function to specify thread affinity for all threads of the process. To set the thread affinity for a single thread, use the SetThreadAffinityMask function. The thread affinity must be a subset of the process affinity. You can obtain the current process affinity by calling the GetProcessAffinityMask function.
Setting thread affinity should generally be avoided, because it can interfere with the scheduler's ability to schedule threads effectively across processors. This can decrease the performance gains produced by parallel processing. An appropriate use of thread affinity is testing each processor.
When you specify a thread ideal processor, the scheduler runs the thread on the specified processor when possible. Use the SetThreadIdealProcessor function to specify a preferred processor for a thread. This does not guarantee that the ideal processor will be chosen, but provides a useful hint to the scheduler.
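The following sketch (illustrative only, not from the original article) restricts a thread to the second processor, provided the process affinity mask permits it, and suggests that processor as the thread's ideal processor:

```c
#include <windows.h>

// Sketch: bind a thread to processor 1 (the second CPU) if the process
// affinity mask permits it, and hint that processor as the ideal one.
void BindThreadToSecondProcessor(HANDLE hThread)
{
    DWORD_PTR dwProcessMask = 0, dwSystemMask = 0;
    DWORD_PTR dwDesiredMask = 1 << 1;                  // bit 1 = second processor

    if (!GetProcessAffinityMask(GetCurrentProcess(), &dwProcessMask, &dwSystemMask))
        return;

    if ((dwProcessMask & dwDesiredMask) == 0)
        return;                                        // not allowed for this process

    SetThreadAffinityMask(hThread, dwDesiredMask);     // returns the previous mask, 0 on failure
    SetThreadIdealProcessor(hThread, 1);               // a hint only, not a guarantee
}
```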
Each process is started with a single thread, but can create additional threads from any of its threads.
This section discusses the following topics:
The CreateThread function creates a new thread for a process. The creating thread must specify the starting address of the code that the new thread is to execute. Typically, the starting address is the name of a function defined in the program code. This function takes a single parameter and returns a DWORD value. A process can have multiple threads simultaneously executing the same function.
The following example demonstrates how to create a new thread that executes the locally defined function, ThreadFunc.
```c
DWORD WINAPI ThreadFunc( LPVOID lpParam )
{
    char szMsg[80];

    wsprintf( szMsg, "ThreadFunc: Parameter = %d\n", *(DWORD*)lpParam );
    MessageBox( NULL, szMsg, "Thread created.", MB_OK );
    return 0;
}

VOID main( VOID )
{
    DWORD dwThreadId, dwThrdParam = 1;
    HANDLE hThread;

    hThread = CreateThread(
        NULL,              // no security attributes
        0,                 // use default stack size
        ThreadFunc,        // thread function
        &dwThrdParam,      // argument to thread function
        0,                 // use default creation flags
        &dwThreadId );     // returns the thread identifier

    // Check the return value for success.
    if (hThread == NULL)
        ErrorExit( "CreateThread failed." );   // ErrorExit is an application-defined error handler

    CloseHandle( hThread );
}
```
For simplicity, this example passes a pointer to a DWORD value as an argument to the thread function. This could be a pointer to any type of data or structure, or it could be omitted altogether by passing a NULL pointer and deleting the references to the parameter in ThreadFunc.
It is risky to pass the address of a local variable if the creating thread exits before the new thread, because the pointer becomes invalid. Instead, either pass a pointer to dynamically allocated memory or make the creating thread wait for the new thread to terminate. Data can also be passed from the creating thread to the new thread using global variables. With global variables, it is usually necessary to synchronize access by multiple threads. For more information about synchronization, see Synchronizing Execution of Multiple Threads.
In processes where a thread might create multiple threads to execute the same code, it is inconvenient to use global variables. For example, a process that enables the user to open several files at the same time can create a new thread for each file, with each of the threads executing the same thread function. The creating thread can pass the unique information (such as the file name) required by each instance of the thread function as an argument. You cannot use a single global variable for this purpose, but you could use a dynamically allocated string buffer.
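One sketch of this pattern (the FILE_PARAM structure and the function names are illustrative, not part of the Win32 API) allocates a separate parameter block on the heap for each thread so that no global variable needs to be shared:

```c
#include <windows.h>

// Illustrative per-thread parameter block; each thread gets its own copy.
typedef struct _FILE_PARAM {
    char szFileName[MAX_PATH];
} FILE_PARAM;

DWORD WINAPI FileWorker(LPVOID lpParam)
{
    FILE_PARAM *pParam = (FILE_PARAM *)lpParam;

    // ... process pParam->szFileName ...

    HeapFree(GetProcessHeap(), 0, pParam);   // the thread frees its own block
    return 0;
}

HANDLE StartFileWorker(const char *pszFileName)
{
    DWORD dwThreadId;
    HANDLE hThread;
    FILE_PARAM *pParam = (FILE_PARAM *)HeapAlloc(GetProcessHeap(), 0, sizeof(FILE_PARAM));

    if (pParam == NULL)
        return NULL;

    lstrcpynA(pParam->szFileName, pszFileName, MAX_PATH);

    hThread = CreateThread(NULL, 0, FileWorker, pParam, 0, &dwThreadId);
    if (hThread == NULL)
        HeapFree(GetProcessHeap(), 0, pParam);   // thread never started; free here
    return hThread;
}
```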
The creating thread can use the arguments to CreateThread to specify the following: the security attributes for the new thread's handle, the initial size of the thread's stack, the starting address of the thread function and the argument passed to it, the creation flags (such as CREATE_SUSPENDED), and a variable that receives the thread identifier.
You can also create a thread by calling the CreateRemoteThread function. This function is used by debugger processes to create a thread that runs in the address space of the process being debugged.
Each new thread receives its own stack space, consisting of both committed and reserved memory. By default, each thread uses 1 MB of reserved memory and one page of committed memory. The system commits additional one-page blocks from the reserved stack memory as needed, until the stack cannot grow any farther. To specify a different default stack size, use the STACKSIZE statement in the module definition (.DEF) file. Your linker may also support a command-line option for setting the stack size. For more information, see the documentation included with your linker.
To increase the amount of stack space which is to be initially committed for a thread, specify the value in the dwStackSize parameter of the CreateThread function. This value is rounded to the nearest page and used to set the initial size of the committed memory. The call to CreateThread will fail if there is not enough memory to commit the number of bytes you request. If the dwStackSize value is smaller than the default size, the new thread uses the same size as the thread that created it.
The stack is freed when the thread terminates.
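A brief sketch (not from the original article), assuming a thread that needs more than the default one page of committed stack at start-up:

```c
#include <windows.h>

// Sketch: request 64 KB of initially committed stack for the new thread.
// If the system cannot commit that much memory, CreateThread fails.
HANDLE CreateDeepStackThread(LPTHREAD_START_ROUTINE pfnStart, LPVOID pvArg)
{
    DWORD dwThreadId;
    return CreateThread(NULL,
                        64 * 1024,     // dwStackSize: initial committed stack
                        pfnStart,
                        pvArg,
                        0,
                        &dwThreadId);
}
```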
When a new thread is created by the CreateThread or CreateRemoteThread function, a handle to the thread is returned. By default, this handle has full access rights and, subject to security access checking, can be used in any of the functions that accept a thread handle. This handle can be inherited by child processes, depending on the inheritance flag specified when it is created. The handle can be duplicated by DuplicateHandle, which enables you to create a thread handle with a subset of the access rights. The handle is valid until closed, even after the thread it represents has been terminated.
The CreateThread and CreateRemoteThread functions also return an identifier that uniquely identifies the thread throughout the system. A thread can use the GetCurrentThreadId function to get its own thread identifier. The identifiers are valid from the time the thread is created until the thread has been terminated.
Windows 2000: If you have a thread identifier, you can get the thread handle by calling the OpenThread function. OpenThread enables you to specify the handle's access rights and whether it can be inherited.
Windows NT 4.0 and earlier, Windows 95/98: The Win32 API does not provide a way to get the thread handle from the thread identifier. If the handles were made available this way, the owning process could fail because another process unexpectedly performed an operation on one of its threads, such as suspending it, resuming it, adjusting its priority, or terminating it. Instead, you must request the handle from the thread creator or the thread itself.
A thread can use the GetCurrentThread function to retrieve a pseudo handle to its own thread object. This pseudo handle is valid only for the calling process; it cannot be inherited or duplicated for use by other processes. To get the real handle to the thread, given a pseudo handle, use the DuplicateHandle function.
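A minimal sketch (not from the original article) of converting the pseudo handle into a real handle that other threads can use later; the caller must close the returned handle:

```c
#include <windows.h>

// Sketch: obtain a real (duplicated) handle to the calling thread.
// The caller must close the returned handle with CloseHandle.
HANDLE GetRealCurrentThreadHandle(void)
{
    HANDLE hRealHandle = NULL;

    if (!DuplicateHandle(GetCurrentProcess(),
                         GetCurrentThread(),   // pseudo handle as the source
                         GetCurrentProcess(),  // duplicate into this process
                         &hRealHandle,
                         0,                    // ignored with DUPLICATE_SAME_ACCESS
                         FALSE,                // not inheritable
                         DUPLICATE_SAME_ACCESS))
    {
        return NULL;
    }
    return hRealHandle;
}
```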
A thread can suspend and resume the execution of another thread using the SuspendThread and ResumeThread functions. While a thread is suspended, it is not scheduled for time on the processor.
The SuspendThread function is not particularly useful for synchronization because it does not control the point in the code at which the thread's execution is suspended. However, you might want to suspend a thread in a situation where you are waiting for user input that could cancel the work the thread is performing. If the user input cancels the work, have the thread exit; otherwise, call ResumeThread.
If a thread is created in a suspended state (with the CREATE_SUSPENDED flag), it does not begin to execute until another thread calls ResumeThread with a handle to the suspended thread. This can be useful for initializing the thread's state before it begins to execute. See Using a Multithreaded Multiple Document Interface Application for an example that uses this method to modify the thread's priority before it can run. Suspending a thread at creation can be useful for one-time synchronization, because this ensures that the suspended thread will execute the starting point of its code when you call ResumeThread.
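As a sketch (the function name is illustrative), a thread can be created suspended so that its priority is set before it ever runs:

```c
#include <windows.h>

// Sketch: create the thread suspended, adjust its priority, then let it run.
HANDLE StartLowPriorityThread(LPTHREAD_START_ROUTINE pfnStart, LPVOID pvArg)
{
    DWORD dwThreadId;
    HANDLE hThread = CreateThread(NULL, 0, pfnStart, pvArg,
                                  CREATE_SUSPENDED,   // do not run yet
                                  &dwThreadId);
    if (hThread == NULL)
        return NULL;

    SetThreadPriority(hThread, THREAD_PRIORITY_BELOW_NORMAL);
    ResumeThread(hThread);   // the thread now starts at its entry point
    return hThread;
}
```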
A thread can temporarily yield its execution for a specified interval by calling the Sleep or SleepEx function. This is particularly useful in cases where the thread responds to user interaction, because it can delay execution long enough to allow users to observe the results of their actions. During the sleep interval, the thread is not scheduled for time on the processor.
The SwitchToThread function is similar to Sleep and SleepEx, except that you cannot specify the interval. SwitchToThread allows the thread to give up its time slice.
To avoid race conditions and deadlocks, it is necessary to synchronize access by multiple threads to shared resources. Synchronization is also necessary to ensure that interdependent code is executed in the proper sequence.
The Win32 API provides a number of objects whose handles can be used to synchronize multiple threads. These objects include:
The state of each of these objects is either signaled or not signaled. When you specify a handle to any of these objects in a call to one of the wait functions, the execution of the calling thread is blocked until the state of the specified object becomes signaled.
Some of these objects are useful in blocking a thread until some event occurs. For example, a console input buffer handle is signaled when there is unread input, such as a keystroke or mouse button click. Process and thread handles are signaled when the process or thread terminates. This allows a process, for example, to create a child process and then block its own execution until the new process has terminated.
Other objects are useful in protecting shared resources from simultaneous access. For example, multiple threads can each have a handle to a mutex object. Before accessing a shared resource, the threads must call one of the wait functions to wait for the state of the mutex to be signaled. When the mutex becomes signaled, only one waiting thread is released to access the resource. The state of the mutex is immediately reset to not signaled so any other waiting threads remain blocked. When the thread is finished with the resource, it must set the state of the mutex to signaled to allow other threads to access the resource.
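A minimal sketch of this mutex pattern (the shared resource itself is omitted; g_hMutex is an illustrative global):

```c
#include <windows.h>

HANDLE g_hMutex;   // created once during initialization, for example:
                   // g_hMutex = CreateMutex(NULL, FALSE, NULL);

// Sketch: serialize access to a shared resource with a mutex.
void UseSharedResource(void)
{
    DWORD dwWait = WaitForSingleObject(g_hMutex, INFINITE);
    if (dwWait == WAIT_OBJECT_0)
    {
        // ... access the shared resource here ...

        ReleaseMutex(g_hMutex);   // set the mutex back to signaled
    }
}
```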
For the threads of a single process, critical-section objects provide a more efficient means of synchronization than mutexes. A critical section is used like a mutex to enable one thread at a time to use the protected resource. A thread can use the EnterCriticalSection function to request ownership of a critical section. If it is already owned by another thread, the requesting thread is blocked. A thread can use the TryEnterCriticalSection function to request ownership of a critical section, without blocking upon failure to obtain the critical section. After it receives ownership, the thread is free to use the protected resource. The execution of the other threads of the process is not affected unless they attempt to enter the same critical section.
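The equivalent pattern with a critical section, again as a sketch with an illustrative global:

```c
#include <windows.h>

CRITICAL_SECTION g_cs;   // call InitializeCriticalSection(&g_cs) once at start-up
                         // and DeleteCriticalSection(&g_cs) at shutdown.

// Sketch: protect a resource shared only by threads of this process.
void UseProcessLocalResource(void)
{
    EnterCriticalSection(&g_cs);   // blocks if another thread owns the section

    // ... access the shared resource here ...

    LeaveCriticalSection(&g_cs);
}
```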
The WaitForInputIdle function makes a thread wait until a specified process is initialized and waiting for user input with no input pending. Calling WaitForInputIdle can be useful for synchronizing parent and child processes, because CreateProcess returns without waiting for the child process to complete its initialization.
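A sketch (not from the original article) of starting a child process and waiting until it has finished initializing; the command line shown is only an example, and the code assumes an ANSI build:

```c
#include <windows.h>

// Sketch: start a child process and wait until it is ready for input.
BOOL StartChildAndWait(void)
{
    STARTUPINFO si;
    PROCESS_INFORMATION pi;
    char szCmdLine[] = "child.exe";   // illustrative command line

    ZeroMemory(&si, sizeof(si));
    si.cb = sizeof(si);
    ZeroMemory(&pi, sizeof(pi));

    if (!CreateProcess(NULL, szCmdLine, NULL, NULL, FALSE, 0, NULL, NULL, &si, &pi))
        return FALSE;

    // Returns when the child is initialized and waiting for input with no
    // input pending, or after 10 seconds, whichever comes first.
    WaitForInputIdle(pi.hProcess, 10000);

    CloseHandle(pi.hThread);
    CloseHandle(pi.hProcess);
    return TRUE;
}
```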
For more information, see Synchronization.
To enhance performance, access to graphics device interface (GDI) objects (such as palettes, device contexts, regions, and the like) is not serialized. This creates a potential danger for processes that have multiple threads sharing these objects. For example, if one thread deletes a GDI object while another thread is using it, the results are unpredictable. This danger can be avoided simply by not sharing GDI objects. If sharing is unavoidable (or desirable), the application must provide its own mechanisms for synchronizing access. For more information about synchronizing access, see Synchronizing Execution of Multiple Threads.
All threads of a process share the virtual address space and the global variables of that process. The local variables of a thread function are local to each thread that runs the function. However, the static or global variables used by that function have the same value for all threads. With thread local storage (TLS), you can create a unique copy of a variable for each thread. Using TLS, one thread allocates an index that can be used by any thread of the process to retrieve its unique copy.
Use the following steps to implement TLS:
The constant TLS_MINIMUM_AVAILABLE defines the minimum number of TLS indexes available in each process. This minimum is guaranteed to be at least 64 for all systems.
It is ideal to use TLS in a DLL. Perform the initial TLS operations in the DllMain function in the context of the process or thread attaching to the DLL. When a new process attaches to the DLL, call TlsAlloc in the entry-point function to allocate a TLS index for that process. Then store the TLS index in a global variable that is private to each attached process. When a new thread attaches to the DLL, allocate dynamic memory for that thread in the entry-point function, and use TlsSetValue with the TLS index from TlsAlloc to save private data to the index. Then you can use the TLS index in a call to TlsGetValue to access the private data for the calling thread from within any function in the DLL. When a process detaches from the DLL, call TlsFree.
For an example illustrating the use of thread local storage, see Using Thread Local Storage.
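Outside of a DLL, the same sequence can be sketched directly in application code (the helper function names are illustrative):

```c
#include <windows.h>

DWORD g_dwTlsIndex = TLS_OUT_OF_INDEXES;

// Sketch: allocate one TLS index for the process, then let each thread
// store and retrieve its own private pointer through that index.
BOOL InitTls(void)                       // call once, before threads start
{
    g_dwTlsIndex = TlsAlloc();
    return (g_dwTlsIndex != TLS_OUT_OF_INDEXES);
}

void SetThreadData(LPVOID pvData)        // called by each thread for its own data
{
    TlsSetValue(g_dwTlsIndex, pvData);
}

LPVOID GetThreadData(void)               // returns the calling thread's private copy
{
    return TlsGetValue(g_dwTlsIndex);
}

void CleanupTls(void)                    // call once, after all threads finish
{
    TlsFree(g_dwTlsIndex);
}
```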
Any thread can create a window. The thread that creates the window owns the window and its associated message queue. Therefore, the thread must provide a message loop to process the messages in its message queue. In addition, you must use MsgWaitForMultipleObjects or MsgWaitForMultipleObjectsEx in that thread, rather than the other wait functions, so that it can process messages. Otherwise, the system can become deadlocked when the thread is sent a message while it is waiting.
The AttachThreadInput function can be used to allow a set of threads to share the same input state. By sharing input state, the threads share their concept of the active window. By doing this, one thread can always activate another thread's window. This function is also useful for sharing focus state, mouse capture state, keyboard state, and window Z-order state among windows created by different threads whose input state is shared.
A thread executes until one of the following events occurs: the thread function returns, the thread calls the ExitThread function, the thread is terminated by a call to TerminateThread, or the thread's process is terminated by a call to ExitProcess or TerminateProcess.
The GetExitCodeThread function returns the termination status of a thread. While a thread is executing, its termination status is STILL_ACTIVE. When a thread terminates, its termination status changes from STILL_ACTIVE to the exit code of the thread. The exit code is either the value specified in the call to ExitThread, ExitProcess, TerminateThread, or TerminateProcess, or the value returned by the thread function.
When a thread terminates, the state of the thread object changes to signaled, releasing any other threads that had been waiting for the thread to terminate. For more about synchronization, see Synchronizing Execution of Multiple Threads.
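A sketch of waiting for a thread to terminate and then retrieving its exit code:

```c
#include <windows.h>

// Sketch: block until the thread object becomes signaled, then read the
// thread's exit code.
DWORD WaitForThreadExit(HANDLE hThread)
{
    DWORD dwExitCode = STILL_ACTIVE;

    WaitForSingleObject(hThread, INFINITE);   // signaled when the thread terminates
    GetExitCodeThread(hThread, &dwExitCode);
    return dwExitCode;
}
```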
If a thread is terminated by ExitThread, the system calls the entry-point function of each attached DLL with a value indicating that the thread is detaching from the DLL (unless you call the DisableThreadLibraryCalls function). If a thread is terminated by ExitProcess, the DLL entry-point functions are invoked once, to indicate that the process is detaching. DLLs are not notified when a thread is terminated by TerminateThread or TerminateProcess. For more information about DLLs, see Dynamic-Link Libraries.
Warning The TerminateThread and TerminateProcess functions should be used only in extreme circumstances, since they do not allow threads to clean up, do not notify attached DLLs, and do not free the initial stack. The following steps provide a better solution:
The GetThreadTimes function obtains timing information for a thread. It returns the thread creation time, how much time the thread has been executing in kernel mode, and how much time the thread has been executing in user mode. These times do not include time spent executing system threads or waiting in a suspended or blocked state. If the thread has exited, GetThreadTimes returns the thread exit time.
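A brief sketch of reading these times; the FILETIME values are expressed in 100-nanosecond units:

```c
#include <windows.h>

// Sketch: query how long a thread has spent executing in kernel and user mode.
BOOL QueryThreadTimes(HANDLE hThread)
{
    FILETIME ftCreation, ftExit, ftKernel, ftUser;

    if (!GetThreadTimes(hThread, &ftCreation, &ftExit, &ftKernel, &ftUser))
        return FALSE;

    // ftKernel and ftUser hold elapsed execution times in 100-nanosecond
    // units; ftExit is meaningful only after the thread has exited.
    return TRUE;
}
```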
Windows NT/Windows 2000 security enables you to control access to thread objects. For more information about security, see Access-Control Model.
You can specify a security descriptor for a thread when you call the CreateProcess, CreateProcessAsUser, CreateProcessWithLogonW, CreateThread, or CreateRemoteThread function. To retrieve a thread's security descriptor, call the GetSecurityInfo function. To change a thread's security descriptor, call the SetSecurityInfo function.
The handle returned by the CreateThread function has THREAD_ALL_ACCESS access to the thread object. When you call the GetCurrentThread function, the system returns a pseudohandle with the maximum access that the thread's security descriptor allows to the caller.
The valid access rights for thread objects include the DELETE, READ_CONTROL, SYNCHRONIZE, WRITE_DAC, and WRITE_OWNER standard access rights, in addition to the following thread-specific access rights.
Value | Meaning |
---|---|
SYNCHRONIZE | A standard right required to wait for the thread to exit. |
THREAD_ALL_ACCESS | Specifies all possible access rights for a thread object. |
THREAD_DIRECT_IMPERSONATION | Required for a server thread that impersonates a client. |
THREAD_GET_CONTEXT | Required to read the context of a thread using GetThreadContext. |
THREAD_IMPERSONATE | Required to use a thread's security information directly without calling it by using a communication mechanism that provides impersonation services. |
THREAD_QUERY_INFORMATION | Required to read certain information from the thread object. |
THREAD_SET_CONTEXT | Required to write the context of a thread. |
THREAD_SET_INFORMATION | Required to set certain information in the thread object. |
THREAD_SET_THREAD_TOKEN | Required to set the impersonation token for a thread. |
THREAD_SUSPEND_RESUME | Required to suspend or resume a thread. |
THREAD_TERMINATE | Required to terminate a thread. |
You can request the ACCESS_SYSTEM_SECURITY access right to a thread object if you want to read or write the object's SACL. For more information, see Access-Control Lists (ACLs) and SACL Access Right.
A child process is a process that is created by another process, called the parent process.
This section discusses the following topics:
[... many more pages, but this is all I needed at the time...]