Algorithms and Data Structures Assignment Help | ADS103 Assessment 3: Programming Assignment 2

This is a Canadian operating-systems assignment written in C.

 

Requirements

Implement a solution to the Traffic problem in a C program called traffic.c; start from the provided file.

Use mutexes for mutual exclusion and condition variables for thread signalling.

Create N threads and assign each a randomly chosen direction. Threads should loop attempting to enter the street a large, fixed number of times. When a thread is in the street, it should call uthread_yield() a total of N times and then exit the street. It should then call uthread_yield() at least another N times before attempting to enter the street again. The program terminates when every thread has entered the street the specified number of times. Experiment with different values of N, starting with small numbers while debugging and ending with a number that is at least twenty.
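As a rough sketch, the per-thread loop might look like the following. NUM_ITERATIONS and the helpers enterStreet()/leaveStreet() are placeholders for whatever the provided starter file defines; only uthread_yield() is taken from the assignment text.

void *car(void *arg) {
    int direction = rand() % 2;               // randomly chosen direction, fixed for this thread
    for (int i = 0; i < NUM_ITERATIONS; i++) {
        enterStreet(direction);               // blocks (mutex + condition variable) until entering is safe
        for (int j = 0; j < N; j++)           // stay in the street for N yields
            uthread_yield();
        leaveStreet(direction);               // update shared state and signal waiting cars
        for (int j = 0; j < N; j++)           // yield at least N more times before trying again
            uthread_yield();
    }
    return NULL;
}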

Testing

Test your program with N=20 and each thread performing at least 100 iterations. Use assert statements to ensure that the two street-occupancy constraints are never violated. Count the number of times that each of the following occupancy conditions occurs: one East, two East, three East, one West, two West, and three West. Print these numbers when the program terminates.
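For example (with <assert.h> included), the checks and counters could live in the critical section that runs just after a car has entered the street. The names below (enteringEast, enteringWest, MAX_OCCUPANCY, occupancyCount) are illustrative placeholders, not the provided starter code:

// called with the lock held, immediately after a car has entered the street
assert(enteringEast == 0 || enteringWest == 0);        // constraint 1: only one direction at a time
assert(enteringEast + enteringWest <= MAX_OCCUPANCY);  // constraint 2: at most three cars in the street
if (enteringEast > 0)
    occupancyCount[EAST][enteringEast]++;              // tallies one East, two East, three East
else
    occupancyCount[WEST][enteringWest]++;              // tallies one West, two West, three West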

Implement a counter that is incremented each time a thread enters the street. For each thread entering the street, record the value of the counter when it starts waiting and the value when it enters the street.

Subtract these two numbers to determine the thread’s waiting time and record this information in a histogram like this:

 

if (waitingTime < WAITING_HISTOGRAM_SIZE)
    waitingHistogram[waitingTime]++;
else
    waitingHistogramOverflow++;

 

Print the histogram and the overflow bucket when the program terminates. If you access the histogram or other test data from multiple threads, be sure to guarantee mutual exclusion for these critical sections.
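One possible shape for this bookkeeping, assuming a hypothetical global entryTicker counter and assuming the calling code already holds the lock that protects the shared test data:

void recordWaitingTime(int waitingTime) {
    // caller must hold the mutex protecting the histogram
    if (waitingTime < WAITING_HISTOGRAM_SIZE)
        waitingHistogram[waitingTime]++;
    else
        waitingHistogramOverflow++;
}

// inside the enter-street path, still under the lock:
int start = entryTicker;                  // counter value when this car starts waiting
/* ... wait until this car may enter ... */
entryTicker++;                            // one more car has entered the street
recordWaitingTime(entryTicker - start);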

In this part of the assignment you will implement the same traffic simulation as in the previous question, but using semaphores as the only synchronization primitive.

Copy your traffic.c file from part 1 into a new file called traffic_sem.c. Now modify your implementation to replace all mutexes and condition variables with semaphores. Alternatively, start with the provided traffic_sem.c file and implement the same functionality as your traffic.c file.
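The key idea is that a semaphore initialized to 1 can stand in for a mutex, while a semaphore initialized to 0 can stand in for the signalling that condition variables provided. A minimal sketch using POSIX semaphores purely for illustration (the provided starter code may supply its own semaphore type with a different API):

#include <semaphore.h>

sem_t mutex;          // initialized to 1: sem_wait/sem_post behave like lock/unlock
sem_t waitingEast;    // initialized to 0: eastbound cars block here until they are released

void initSemaphores(void) {
    sem_init(&mutex, 0, 1);
    sem_init(&waitingEast, 0, 0);
}

void exampleEnter(void) {
    sem_wait(&mutex);            // enter the critical section protecting the street state
    /* inspect and update shared state */
    sem_post(&mutex);            // leave the critical section

    sem_wait(&waitingEast);      // block until another thread does sem_post(&waitingEast)
}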

In previous questions you implemented the Traffic problem with condition variables and with semaphores. In both cases, you printed a histogram of thread waiting times and were able to observe how fair each implementation was.

You will notice that no matter how hard you try to make the condition-variable implementation fair, if you have enough cars trying to get into the street at the same time, you can’t make it fair for everyone. You will see that cars occasionally end up waiting much longer than it seems they should. The problem is that there is an inherent unfairness in wait. When a thread blocked in wait is woken up (by a call to signal), a race may ensue between the awoken thread re-entering the critical section as it returns from wait and a new thread attempting to enter (e.g., a car coming from the cross-street). Resolving this unfairness is tricky and not necessary for this assignment.

To see what is happening in this case, let’s assume there is a long queue of cars waiting on a condition variable. When signal is called, indicating that the street can accommodate another car, the thread that has been waiting the longest is awoken. This is fair, because the condition-variable waiter queue is a FIFO. However, if some other thread that has not been waiting at all (or has been waiting on the mutex lock queue instead) is, at this very moment, trying to get into the critical section, and it beats the awoken thread to the mutex lock and the critical section, then it may take that thread’s position in the street, bypassing the awoken thread and every thread on the waiter queue. When this happens, the awoken thread must wait again, and it does so by moving all the way to the back of the waiter queue.
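In code, the race looks roughly like this; POSIX names are used purely for illustration, and mayEnter()/enterStreet() are hypothetical helpers:

pthread_mutex_lock(&lock);
while (!mayEnter(direction))           // must re-check: the condition may be false again on wake-up
    pthread_cond_wait(&cond, &lock);   // releases the lock, sleeps, then re-acquires the lock
// Between being signalled and re-acquiring the lock on the line above, a thread that was
// never on the waiter queue can grab the lock first, see room in the street, and take the
// spot, sending this thread back into pthread_cond_wait.
enterStreet(direction);
pthread_mutex_unlock(&lock);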

The purpose of the uthread_yield() loop after exiting the street is to minimize how often this situation occurs. You won’t see it happen often, but it will happen often enough that a few threads occasionally end up waiting a very long time to get into the street. You might experiment with calling uthread_yield() more (or fewer) times after leaving the street and see how this affects fairness.

Now notice what happens to the unfairness problem in the semaphore implementation. Describe the differences you see and explain why the results are different.

A thread pool is a design pattern created to exploit the concurrency of threads without having application code ever call thread_create directly. It is based on maintaining a stable pool of worker threads that are ready to execute tasks as soon as new tasks become available, blocking if there are no tasks to execute. The number of threads in the pool is typically chosen based on a combination of the available resources and the load of tasks to be performed.

One benefit of a thread pool over creating a new thread for each task is that thread creation and destruction overhead is restricted to the initial creation of the pool, which may result in better performance and better system stability. Creating and destroying a thread and its associated resources can be an expensive process in terms of time. Resources like database and network connections may also take many CPU cycles to drop and re-establish, and can be maintained more efficiently by associating them with a thread that lives over the course of more than one transaction.

Thread pools also solve an important problem faced by thread systems by placing a cap on the maximum number of concurrently running threads. You saw in Assignment 9 why having such a cap can be important. In applications that call thread_create directly, the total number of threads tends to be a function of the amount of parallel work available at any given time. However, the total number of threads should really be capped based on, for example, the hardware resources available. With a thread pool, applications submit as much work as they have, and the pool determines, independently, how many threads it will create to run those tasks.

Thread pools (or similar concepts, like server farms) are often used in cloud computing; for example, PrairieLearn uses a pool of workers to handle requests for automatic code grading. If the number of submissions is smaller than the number of workers, then external grading jobs can be executed immediately, but most workers will remain idle (or blocked) most of the time. If, however, a large number of submissions is received, over and above the number of workers available for grading, then some submissions may have to wait until a worker becomes available to run the grading job.

In this assignment you will implement a simple version of a thread pool, with a fixed number of worker threads. Your implementation will be able to schedule individual tasks in the form of function pointers, which will be executed by worker threads as they become available.
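As a point of reference only, a minimal thread pool might be structured like the sketch below, using POSIX threads and a fixed-size circular task queue. Every name here (tpool_t, tpool_create, tpool_submit, worker) is illustrative rather than the interface your starter code will define, and shutdown and error handling are omitted for brevity.

#include <pthread.h>
#include <stdlib.h>

#define QUEUE_SIZE 64

typedef struct {
    void (*fn)(void *);            // a task is a function pointer plus its argument
    void *arg;
} task_t;

typedef struct {
    task_t queue[QUEUE_SIZE];
    int head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t not_empty;      // signalled when a task is added
    pthread_cond_t not_full;       // signalled when a task is removed
    pthread_t *workers;
    int num_workers;
} tpool_t;

static void *worker(void *arg) {
    tpool_t *p = arg;
    for (;;) {
        pthread_mutex_lock(&p->lock);
        while (p->count == 0)                      // block until a task is available
            pthread_cond_wait(&p->not_empty, &p->lock);
        task_t t = p->queue[p->head];
        p->head = (p->head + 1) % QUEUE_SIZE;
        p->count--;
        pthread_cond_signal(&p->not_full);
        pthread_mutex_unlock(&p->lock);
        t.fn(t.arg);                               // run the task outside the lock
    }
    return NULL;
}

tpool_t *tpool_create(int num_workers) {
    tpool_t *p = calloc(1, sizeof(*p));
    pthread_mutex_init(&p->lock, NULL);
    pthread_cond_init(&p->not_empty, NULL);
    pthread_cond_init(&p->not_full, NULL);
    p->num_workers = num_workers;
    p->workers = malloc(num_workers * sizeof(pthread_t));
    for (int i = 0; i < num_workers; i++)
        pthread_create(&p->workers[i], NULL, worker, p);
    return p;
}

void tpool_submit(tpool_t *p, void (*fn)(void *), void *arg) {
    pthread_mutex_lock(&p->lock);
    while (p->count == QUEUE_SIZE)                 // block the submitter if the queue is full
        pthread_cond_wait(&p->not_full, &p->lock);
    p->queue[p->tail] = (task_t){ fn, arg };
    p->tail = (p->tail + 1) % QUEUE_SIZE;
    p->count++;
    pthread_cond_signal(&p->not_empty);
    pthread_mutex_unlock(&p->lock);
}

In this sketch, a caller would create the pool once (e.g., tpool_t *p = tpool_create(4);) and then call tpool_submit(p, someTask, someArg) for each unit of work; idle workers block on not_empty and pick up tasks as they become available.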