Understanding Basic Multithreading Concepts

Concurrency and Parallelism

In a multithreaded process on a single processor, the processor can switch execution resources between threads, resulting in concurrent execution. In the same multithreaded process in a shared-memory multiprocessor environment, each thread in the process can run on a separate processor at the same time, resulting in parallel execution. When the process has no more threads than there are processors, the threads support system, in conjunction with the operating environment, ensures that each thread runs on a different processor. For example, in a matrix multiplication that uses the same number of threads as processors, each thread (and each processor) computes a row of the result.

Looking at Multithreading Structure

Traditional UNIX already supports the concept of threads: each process contains a single thread, so programming with multiple processes is programming with multiple threads. But a process is also an address space, and creating a process involves creating a new address space. Creating a thread is much less expensive than creating a new process, because the newly created thread uses the current process address space. The time it takes to switch between threads is much less than the time it takes to switch between processes, partly because switching between threads does not involve switching between address spaces. Communicating between the threads of one process is simple because the threads share everything, the address space in particular. So, data produced by one thread is immediately available to all the other threads.

The interface to multithreading support is through a subroutine library: libpthread for POSIX threads, and libthread for Solaris threads. Multithreading provides flexibility by decoupling kernel-level and user-level resources.

User-Level Threads

Threads are the primary programming interface in multithreaded programming. User-level threads [User-level threads are named to distinguish them from kernel-level threads, which are the concern of systems programmers only. Because this book is for application programmers, kernel-level threads are not discussed.] are handled in user space and avoid kernel context-switching penalties. An application can have hundreds of threads and still not consume many kernel resources. How many kernel resources the application uses is largely determined by the application itself.

Threads are visible only from within the process, where they share all process resources, such as the address space, open files, and so on. The following state is unique to each thread:

- Thread ID
- Register state (including PC and stack pointer)
- Stack
- Signal mask
- Priority
- Thread-private storage

Because threads share the process instructions and most of the process data, a change in shared data by one thread can be seen by the other threads in the process. When a thread needs to interact with other threads in the same process, it can do so without involving the operating environment.

By default, threads are very lightweight. But, to get more control over a thread (for instance, to control its scheduling policy more precisely), the application can bind the thread. When an application binds threads to execution resources, the threads become kernel resources (see “System Scope (Bound Threads)” for more information).
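As a concrete illustration of binding, the following minimal sketch requests system contention scope for a thread through the POSIX thread attributes interface. This example was written for this discussion rather than taken from the guide; the worker() function is a hypothetical placeholder.

/*
 * Minimal sketch: create a thread with system contention scope
 * (a bound thread in Solaris terms).  The worker() body is a
 * hypothetical placeholder.
 */
#include <pthread.h>
#include <stdio.h>

static void *worker(void *arg)          /* hypothetical thread body */
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t      tid;

    pthread_attr_init(&attr);
    /* Request system scope: the thread is scheduled by the kernel,
       which on Solaris corresponds to binding the thread to an LWP. */
    if (pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM) != 0)
        fprintf(stderr, "system contention scope not supported\n");

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}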
To summarize, user-level threads are:

- Inexpensive to create, because they do not need to create their own address space. They are bits of virtual memory that are allocated from your address space at run time.
- Fast to synchronize, because synchronization is done at the application level, not at the kernel level.
- Easily managed by the threads library, either libpthread or libthread.

Lightweight Processes

The threads library uses underlying threads of control, called lightweight processes (LWPs), that are supported by the kernel. You can think of an LWP as a virtual CPU that executes code or system calls. You usually do not need to concern yourself with LWPs to program with threads. The information about LWPs here is provided as background so that you can understand the differences in scheduling scope, described in “Process Scope (Unbound Threads)”.

Note – The LWPs in the Solaris 2, Solaris 7, and Solaris 8 operating environments are not the same as the LWPs in the SunOS(TM) 4.0 LWP library, which are not supported in the Solaris 2, Solaris 7, and Solaris 8 operating environments.

Much as the stdio library routines such as fopen() and fread() use the open() and read() functions, the threads interface uses the LWP interface, and for many of the same reasons. Lightweight processes (LWPs) bridge the user level and the kernel level. Each process contains one or more LWPs, each of which runs one or more user threads. (See Figure 1-1.) The creation of a thread usually involves just the creation of some user context, but not the creation of an LWP.

Figure 1-1 User-level Threads and Lightweight Processes
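Returning to the matrix-multiplication example from “Concurrency and Parallelism”, the sketch below creates one POSIX thread per row of the result; with default (unbound) threads, each pthread_create() call usually creates only user context, as described above. The 3x3 matrices, the array names, and the compute_row() function are illustrative choices made for this example and do not come from the guide.

/*
 * Minimal sketch: matrix multiplication with one thread per row of
 * the result.  The small fixed-size matrices are illustrative only.
 */
#include <pthread.h>
#include <stdio.h>

#define N 3

static double a[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
static double b[N][N] = {{1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
static double c[N][N];

static void *compute_row(void *arg)
{
    int i = *(int *)arg;                /* row index for this thread */

    for (int j = 0; j < N; j++) {
        c[i][j] = 0.0;
        for (int k = 0; k < N; k++)
            c[i][j] += a[i][k] * b[k][j];
    }
    return NULL;
}

int main(void)
{
    pthread_t tid[N];
    int       row[N];

    for (int i = 0; i < N; i++) {       /* one thread per result row */
        row[i] = i;
        pthread_create(&tid[i], NULL, compute_row, &row[i]);
    }
    for (int i = 0; i < N; i++)
        pthread_join(tid[i], NULL);

    for (int i = 0; i < N; i++) {
        for (int j = 0; j < N; j++)
            printf("%6.1f ", c[i][j]);
        printf("\n");
    }
    return 0;
}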