
Algorithms and Analysis COSC 2123  
Assignment 1: Process Scheduler  
Assessment Type: Group assignment. Submit online via Canvas Assignments.
Assessment Task 3: Assignment 1: Process Scheduler. Marks awarded for meeting requirements as closely as possible. Clarifications/updates may be made via announcements/relevant discussion forums.
Due Date: Week 6, Wednesday 8th April 2020, 11:59pm
There are three key objectives for this assignment:

- Understand how a real problem can be mapped to various representations and their implementations.
- Use a number of fundamental data structures to implement the runqueue abstract data type.
- Evaluate and contrast the performance of the data structures with respect to different usage scenarios and input data.
Learning Outcomes  
This assignment assesses learning outcomes CLO 2, 4 and 5 of the course. Please refer
to the course guide for the relevant learning outcomes.
Scheduling is the process of arranging, controlling and optimizing tasks and workloads. In
computing, scheduling concerns which tasks are assigned to the resources that complete
them. Tasks may be virtual computation elements such as threads, processes
or data flows, which are in turn scheduled onto hardware resources such as processors,
network links or expansion cards [2]. Scheduling is an intrinsic part of the execution
model of a computer system: it makes computer multitasking possible with
a single central processing unit (CPU). For more detailed information about scheduling,
please refer to [2].
A process scheduler carries out the scheduling activity in an operating system. It
arranges active processes in a runqueue. The runqueue represents a timeline for the
scheduled processes. For example, the Completely Fair Scheduler (CFS) is a process
scheduler introduced in the 2.6.23 (October 2007) release of the Linux kernel. Schedulers like the
CFS keep track of the time (in nanoseconds) a process has executed on the processor, called
vruntime. In this assignment, we will implement and analyse CFS-style schedulers
that keep track of the vruntime of each process. Processes that have received a small amount of
processor time are boosted to the front of the runqueue to get the processor, while those that
have received more time are held back [1]. When the scheduler is invoked to run a new process, the
process with the lowest vruntime is chosen from the runqueue and sent for execution. Then:

- If the process simply completes execution, it is removed from the runqueue.
- Otherwise, if it is interrupted for some reason, it is reinserted into the runqueue based
  on its new vruntime. If multiple processes have the same vruntime, they
  follow a First-In-First-Out (FIFO) order.
Schedulers of this type commonly require an efficient data structure implementation of
the runqueue to schedule new processes, delete processes and track process information, as well as
high-performance code to generate the schedules. Despite the name, the runqueue need not be
implemented in the traditional way, e.g., as an array. In this assignment, we define the
runqueue as an abstract data type similar to a PriorityQueue and consider the following
two representations:

- The Sequential Representation
- The Tree Representation
We will implement both representations for the runqueue and evaluate how well they  
perform when representing some given runqueues and compute the average speed of  
various operations.  
The assignment is broken up into a number of tasks, to help you progressively complete  
the project.  
4.1 Task A: Implement the runqueue representations and their operations (8 marks)
In this task, you will implement the runqueue using linear and tree representations,  
respectively. Each representation will be implemented by various data structures. Each  
element of the runqueue represents a process to be scheduled, containing a process label,  
and its vruntime.  
4.1.1 Data Structure Details
In this assignment, each element of the runqueue (i.e., process) will be represented with  
an abstract data type Proc that consists of the following information:
- procLabel – a string label of the process, i.e., the unique identifier of a process.
  The format of the process label starts with letter P followed by digits, e.g., P1,
  P240, P1000, etc.
- vt – vruntime of the process. For the sake of simplicity, we use time units (positive
  integers) to represent vruntime in this assignment.
The priority of each element/process is determined by its vruntime. A process with a
shorter vruntime has higher priority, i.e., it should be dequeued/removed before
other processes that have a longer vruntime.
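As a concrete illustration, the Proc element might be sketched in Java as follows. The field names procLabel and vt come from the spec above; the constructor, accessors and the hasHigherPriorityThan helper are illustrative assumptions, not part of the supplied skeleton:

```java
// Minimal sketch of the Proc abstract data type described above.
// procLabel and vt follow the assignment spec; everything else is an assumption.
class Proc {
    private final String procLabel; // unique identifier, e.g. "P1"
    private final int vt;           // vruntime, in positive integer time units

    Proc(String procLabel, int vt) {
        this.procLabel = procLabel;
        this.vt = vt;
    }

    String getProcLabel() { return procLabel; }
    int getVt() { return vt; }

    // A process with a strictly smaller vt has higher priority; equal vt values
    // are broken by FIFO order, which the runqueue itself must maintain
    // (insert new equal-vt processes after existing ones).
    boolean hasHigherPriorityThan(Proc other) {
        return this.vt < other.vt;
    }
}
```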
The runqueue can be implemented using a number of data structures. In this assignment, you are to implement the runqueue using the following representations and data structures:

Sequential Representation
- using a 1D Ordered Array
- using an Ordered Linked-list

Tree Representation
- using a Binary Search Tree (BST)
Note that you must program your own implementation and not use existing libraries
of any kind, e.g., the PriorityQueue, LinkedList, Set, MultiSet or Tree types of data
structures in java.util or any other library. You must implement your own nodes and
methods to handle the operations. If you use java.util or other implementations from
libraries, this will be considered an invalid implementation and attract 0 marks for
that data structure.
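For instance, enqueueing into the 1D ordered array representation can be done with a hand-rolled array and element shifting, with no java.util collections. The class and method names below are illustrative assumptions, not the supplied skeleton's API:

```java
// Illustrative sketch of enqueue/dequeue for the 1D ordered array representation.
// The array, shifting and resizing are hand-rolled; no java.util types are used.
class OrderedArraySketch {
    private String[] labels = new String[4];
    private int[] vts = new int[4];
    private int size = 0;

    // Insert keeping ascending vruntime order. Equal vruntimes keep FIFO
    // order because the strict comparison stops shifting at equal values,
    // so the new process lands after all existing equal-vt processes.
    void enqueue(String procLabel, int vt) {
        if (size == labels.length) grow();
        int i = size;
        while (i > 0 && vts[i - 1] > vt) {
            labels[i] = labels[i - 1];
            vts[i] = vts[i - 1];
            i--;
        }
        labels[i] = procLabel;
        vts[i] = vt;
        size++;
    }

    // Remove and return the highest-priority (front) label, or "" if empty.
    String dequeue() {
        if (size == 0) return "";
        String front = labels[0];
        for (int i = 1; i < size; i++) {
            labels[i - 1] = labels[i];
            vts[i - 1] = vts[i];
        }
        size--;
        return front;
    }

    private void grow() {
        String[] nl = new String[labels.length * 2];
        int[] nv = new int[vts.length * 2];
        for (int i = 0; i < size; i++) { nl[i] = labels[i]; nv[i] = vts[i]; }
        labels = nl;
        vts = nv;
    }
}
```

Note that insertion and deletion are both linear-time here because of the shifting; this is exactly the kind of cost Task B asks you to measure.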
4.1.2 Operations Details
Operations to perform on the implemented runqueue abstract data type are specified on  
the command line. They are in the following format:  
<operation> [arguments]
where operation is one of {EN, DE, FP, RP, PT, ST, PA, Q} and arguments is for optional  
arguments of some of the operations. The operations take the following form:  
EN <procLabel> <vt> – Enqueue: create and add a process (with label procLabel
and vruntime vt) into the runqueue. There is no output for this operation.
DE – Dequeue: delete the process with the highest priority (i.e., shortest vruntime)  
from the runqueue. Processes that have the same vruntime follow the FIFO order.  
If the runqueue is not empty, the output should be the label of the deleted process.  
Otherwise, it outputs an empty string.  
FP <procLabel> – Find the process with the given label procLabel in the runqueue.
The output should be true if the search is successful, otherwise false (i.e., the  
process does not exist).  
RP <procLabel> – Remove the process with the given label procLabel from the run-
queue. The output should be true if successfully deleted, otherwise false (i.e., the
process does not exist).
PT <procLabel> – Calculate the total vruntime of the preceding processes of the  
process labelled as procLabel. Preceding processes are the processes that will be  
executed before the given process. If the process with procLabel does not exist in  
the runqueue, return -1; otherwise, the output is the calculated total vruntime.  
ST <procLabel> – Calculate the total vruntime of the succeeding processes of the  
process labelled as procLabel. Succeeding processes are the processes that will be  
executed after the given process. If the process with procLabel does not exist in  
the runqueue, return -1; otherwise, the output is the calculated total vruntime.  
PA – Print the labels of all the processes in the runqueue in ascending order of their
vruntime, i.e., the timeline. Please refer to the following example for the exact
format of the output. If the runqueue is empty, this operation prints an empty line.
Q – Quits the program.
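The dispatch of these operations might be sketched as below. The supplied RunqueueTester already performs this role, so this is only an illustration of the command protocol; the handler calls in the comments are hypothetical method names, not the skeleton's actual API:

```java
import java.util.Scanner;

// Hedged sketch of how the operation format above could be dispatched.
class CommandLoopSketch {
    // Tokenise one command line: tokens[0] is the operation code,
    // the remaining tokens are its optional arguments.
    static String[] parse(String line) {
        return line.trim().split("\\s+");
    }

    public static void main(String[] args) {
        // Scanner is plain I/O, not one of the banned java.util data structures.
        Scanner in = new Scanner(System.in);
        while (in.hasNextLine()) {
            String[] t = parse(in.nextLine());
            switch (t[0]) {
                case "EN": /* enqueue(t[1], Integer.parseInt(t[2])); */ break;
                case "DE": /* System.out.println(dequeue()); */ break;
                case "FP": /* System.out.println(findProcess(t[1])); */ break;
                case "RP": /* System.out.println(removeProcess(t[1])); */ break;
                case "PT": /* System.out.println(precedingProcessTime(t[1])); */ break;
                case "ST": /* System.out.println(succeedingProcessTime(t[1])); */ break;
                case "PA": /* printAllProcesses(); */ break;
                case "Q":  return; // quit the program
                default:   System.err.println("Unknown operation: " + t[0]);
            }
        }
    }
}
```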
As an example of the operations, consider an initial empty runqueue and the following  
list of operations (inputs) and their associated outputs:  
In : EN P2 2
Out:
In : EN P1 1
Out:
In : EN P3 2
Out:
In : DE
Out: P1
In : DE
Out: P2
In : FP P2
Out: false
In : RP P2
Out: false
In : EN P1 1
Out:
In : EN P2 4
Out:
In : EN P4 4
Out:
In : EN P5 5
Out:
In : PA
Out: P1 P3 P2 P4 P5
In : PT P2
Out: 3
In : ST P2
Out: 9
In : FP P2
Out: true
In : RP P2
Out: true
In : PT P2
Out: -1
In : ST P2
Out: -1
In : PA
Out: P1 P3 P4 P5
4.1.3 Testing Framework
We provide Java skeleton code (see Table 1) to help you get started and test the correct-  
ness of your code. You may add your own Java files to your final submission, but please  
ensure that they work with the supplied framework (see below).  
- Code for testing (details see below). No need to modify this file.
- Interface for Runqueue. All implementations should implement the Runqueue class
  defined in this file. Read the javaDoc of each method carefully and do not modify
  this file.
- Code that implements the sequential representation of runqueue using a 1D ordered
  array. Complete the implementation (implement the parts labelled “Implement me!”).
- Code that implements the sequential representation of runqueue using an ordered
  linked list. Complete the implementation (implement the parts labelled “Implement me!”).
- Code that implements the tree representation of runqueue using a binary search tree.
  Complete the implementation (implement the parts labelled “Implement me!”).

Table 1: Table of Java files.
We provide the Java file that helps with automating testing.
Once you have completed the implementations of the runqueues and compiled the necessary files,
you can test your implementation using the RunqueueTester in either interactive or
non-interactive mode:

Interactive mode reads operation commands from stdin, then executes them on the
selected runqueue implementation:

java RunqueueTester <impl>

where <impl> is one of the implementations, <impl> = [array | linkedlist | …].
For example:

java RunqueueTester array

The above command executes the RunqueueTester with the 1D ordered array im-
plementation of the runqueue in interactive mode. Please refer to the input/
output example in Section 4.1.2.
Non-interactive mode reads input files of operations (such as the example above). These
are fed into the Java framework, which calls your implementations. The outputs
resulting from any print operations are stored. The format of the command call
for the non-interactive mode is as follows:

java RunqueueTester <impl> inputfile_name outputfile_name

For example:

java RunqueueTester array output.out

The above command executes the RunqueueTester with the 1D ordered array im-
plementation of the runqueue. It reads the commands from the specified input file and stores
all the output to output.out. We provide two pairs of input and expected output
files for your testing and examination.
For our evaluation of the correctness of your implementation, we will use the same
Java file for automated testing, as well as input/expected output files of
the same format as the provided examples. To avoid unexpected failures, please do not
change this file. If you wish to use the Java file for your timing
evaluation, make a copy: use the unaltered script to test the correctness of your
implementations, and modify the copy for your timing evaluation.
We will run the supplied test file on your implementation on the university’s core  
teaching servers. If you develop on your own machines, please ensure your code  
compiles and runs on these machines. You don’t want to find out last minute that  
your code doesn’t compile on these machines. If your code doesn’t run on these  
machines, we unfortunately do not have the resources to debug each one and cannot  
award marks for testing.  
All submissions should compile with no warnings on OpenJDK 1.8 – this is the  
default javac/java version on the Core teaching servers.  
4.2 Task B: Evaluate your Data Structures (7 marks)
In this second task, you will evaluate your three implemented structures both theoret-
ically and empirically in terms of their time complexities for the different operations
and different use case scenarios. Scenarios arise from the possible use cases of a process
scheduler.
Write a report on your analysis and evaluation of the different implementations. Con-
sider and recommend in which scenarios each type of implementation would be most
appropriate. The report should be 6 pages or less, in font size 12. See the assessment
rubric (Appendix A) for the criteria we are seeking in the report.
4.2.1 Use Case Scenarios
There are many possibilities, but for this assignment, consider the following scenarios:  
Scenario 1 – Growing runqueue (Enqueue). In this scenario, the runqueue grows
as more and more processes are added to it. You are to evaluate the performance
of your implementations in terms of the enqueue operation (EN).

Scenario 2 – Shrinking runqueue (Dequeue). In this scenario, the runqueue
shrinks as more and more processes are dequeued from it. You are to evaluate the
performance of your implementations in terms of the dequeue operation (DE).

Scenario 3 – Calculating the total vruntime of preceding processes. In this sce-
nario, assume the runqueue is not changing, but the important operation of calculating
the total vruntime of the preceding processes of a given process is required. You are to
evaluate the performance of your implementations in terms of the PT operation.
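As one hedged illustration for Scenario 3, PT on the ordered linked-list representation can be computed with a single left-to-right walk of the list. The Node fields and method name below are assumptions for illustration, not the supplied skeleton's API:

```java
// Hedged sketch of PT (total vruntime of preceding processes) on the
// ordered linked-list representation.
class PrecedingTimeSketch {
    static class Node {
        String procLabel;
        int vt;
        Node next;
        Node(String procLabel, int vt) { this.procLabel = procLabel; this.vt = vt; }
    }

    // Walk the list (already in ascending vruntime / FIFO order) and sum the
    // vruntimes seen before the target; return -1 if the label is absent.
    static int precedingProcessTime(Node head, String procLabel) {
        int total = 0;
        for (Node cur = head; cur != null; cur = cur.next) {
            if (cur.procLabel.equals(procLabel)) return total;
            total += cur.vt;
        }
        return -1;
    }
}
```

On the BST representation the same result needs an in-order traversal up to the target node, which is a useful point of comparison for this scenario.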
4.2.2 Theoretical Analysis
First, you are to conduct a theoretical analysis of the performance. Given a runqueue
of size n, report the best-case and worst-case running time estimates, and the asymptotic
complexity (Big O), of your implementations for each of the scenarios outlined
above. You will also need to provide an example scenario and an explanation for each of
the best-case and worst-case running time estimates, e.g., using a runqueue with a small
number (e.g., 5) of elements/processes. Put the results in a table of the following format:
Theoretical Analysis

             Best Case                    Worst Case                   Big O
Scenario 1   [time estimation]            [time estimation]            [asymptotic complexity]
             [example and explanation]    [example and explanation]
Scenario 2   [time estimation]            [time estimation]            [asymptotic complexity]
             [example and explanation]    [example and explanation]
Scenario 3   [time estimation]            [time estimation]            [asymptotic complexity]
             [example and explanation]    [example and explanation]
4.2.3 Empirical Analysis
Typically, you would use real usage data to evaluate your data structures. However, for this
assignment, you will write data generators to enable testing over the different scenarios.

Data Generation

The time performance of the above-mentioned scenarios and operations can vary sig-
nificantly depending on the structure and elements/processes of the initial runqueue.
We provide an artificial dataset of about 5000 processes, each with a label
(string) and a vruntime (integer number), in a file called “processes.txt”. Each line of this
file stores a process in the format: label,vruntime
To generate runqueues with different initial structures and elements, you may want
to generate a series of add process operations (EN) to grow the runqueue to the desired
size, by either:

- randomly choosing processes from the file we supplied (“processes.txt”); or
- writing a process generator to generate new processes. For example, for each pro-
  cess, generate a unique string identifier/process label and a random integer (i.e.,
  vruntime) uniformly sampled within a fixed range, e.g., [1, 100].
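The second option could be sketched as follows. The class and method names are assumptions; only the EN command format and the [1, 100] range come from the text above:

```java
import java.util.Random;

// Hedged sketch of a process generator: unique labels P1, P2, ... with
// vruntimes uniformly sampled from [1, 100]. The EN command strings it
// produces can be fed straight into the tester's input files.
class ProcessGeneratorSketch {
    private final Random rand;
    private int nextId = 1;

    // A fixed seed makes a generated dataset reproducible across runs.
    ProcessGeneratorSketch(long seed) { rand = new Random(seed); }

    // Generate one "EN <procLabel> <vt>" command with a fresh label.
    String nextEnqueueCommand() {
        int vt = 1 + rand.nextInt(100); // uniform in [1, 100]
        return "EN P" + (nextId++) + " " + vt;
    }
}
```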
Note, whichever method you decide to use, remember to generate runqueues of different
sizes to evaluate on (e.g., 10, 50, 100, 500, …, 1000, 5000). Due to the randomness of the
data, you may wish to generate a significant number of datasets with the same parameter
settings (same size and scenario) and take the average across a number of runs. In your
analysis, you should evaluate each of your representations and data structures in terms
of the different scenarios outlined above. Hence, we advise you to get started on this part
relatively early.
Report Structure  
As a guide, the report could contain the following sections:  
Theoretical analysis on running time and complexities of the different data structure  
implementation as outlined in Section 4.2.2.  
Explain your data generation and experimental setup. Things to include are (brief)
explanations of the generated data you decide to evaluate on, the complexity pa-
rameters you tested on, a description of how the scenarios were generated (a paragraph
and perhaps a figure or high-level pseudocode suffice), and which approach(es) you
decided to use for measuring the timing results.
Evaluation of the data structures using the generated data. Analyse, compare  
and discuss your results across different complexity parameters, representations  
and scenarios. Provide your explanation on why you think the results are as you  
observed. You may consider using the known theoretical time complexities of the  
operations of each data structure to help in your explanation.  
For some particular data structures like the Binary Search Tree (BST), you will also
need to suggest possible improvements to reduce running time in different scenarios.
To do that, you will need to understand well the complexity of various
operations on a BST, and be able to research beyond it.
Summarise your analysis as recommendations, e.g., “for this data scenario
with this parameter setup, I recommend using this data structure because…”. We
suggest you refer to your previous analysis for help.
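One hedged way to measure the timing results mentioned above is to average System.nanoTime() readings over repeated runs; the interface and method names here are illustrative assumptions:

```java
// Hedged sketch of a timing harness: grow a runqueue to size n elsewhere,
// then average wall-clock measurements of the operation under test.
class TimingSketch {
    interface Timeable { void run(); }

    // Average wall-clock nanoseconds of op over reps runs.
    static double averageNanos(Timeable op, int reps) {
        long total = 0;
        for (int r = 0; r < reps; r++) {
            long start = System.nanoTime();
            op.run();
            total += System.nanoTime() - start;
        }
        return (double) total / reps;
    }
}
```

In practice you would also warm up the JVM (run the operation a few times before timing, so JIT compilation does not distort early runs) and use inputs large enough that a single run is well above the timer's resolution.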
Clarification to Specifications
Please periodically check the assignment FAQ for further clarifications about specifica-  
tions. In addition, the lecturer will go through different aspects of the assignment each  
week, so even if you cannot make it to the lectures, be sure to check the course material  
page on Canvas to see if there are additional notes posted.  
Team Structure  
This assignment is designed to be done in pairs (groups of two). If you have difficulty
in finding a partner, post on the discussion forum or contact your lecturer. Any issues
(e.g., work division or workload) that arise within the team should be resolved within
the team if possible; if this is not possible, then this should be brought to the attention
of the course coordinators as soon as possible. Marks are awarded to the individual team
members according to the contributions made towards the final work. Please submit
what percentage contribution each partner made to the assignment (a contribution sheet will be made
available for you to fill in), and include this sheet in your submission. The contributions
of your group should add up to 100%. If the contribution percentages are not 50-50, the
partner with less than 50% will have their marks reduced. Suppose student A has contribution
X%, student B has contribution Y%, X > Y, and the group is given a group mark
of M. Student A will get M for Assignment 1, but student B will get a mark reduced in
proportion to their lower contribution.
Teams need to be formed using the self sign up tool in Canvas.  
The final submission will consist of four parts:  
Your Java source code of your implementations. Your source code should be
placed in a flat structure, i.e., all the files should be in the same direc-
tory/folder, and that directory/folder should be named Assign1-<partner 1
student number>-<partner 2 student number>. Specifically, if your student num-
bers are s12345 and s67890, then all the source code files should be in the folder
Assign1-s12345-s67890.
Your written report for part B in PDF format, called “assign1.pdf”. Place this  
pdf within the Java source file directory/folder, e.g., Assign1-s12345-s67890.  
Your data generation code. Create a sub-directory/sub-folder called “generation”  
within the Java source file directory/folder. Place your generation code within that  
sub-folder. We will not run the code, but will examine their contents.  
Your group’s contribution sheet. See the ‘Team Structure’ section above for
more details. This sheet should also be placed within your Java source file folder.
Then, the Java source file folder (and the files within) should be zipped up and named
Assign1-<partner 1 student number>-<partner 2 student number>.zip. E.g.,
if your student numbers are s12345 and s67890, then your submission file should be
called Assign1-s12345-s67890.zip, and when we unzip that zip file, all the
submission files should be in the folder Assign1-s12345-s67890.
The project will be marked out of 15. Late submissions will incur a deduction of 1.5  
marks per day, and no submissions will be accepted 5 days beyond the due date.  
The assessment in this project will be broken down into two parts. The following  
criteria will be considered when allocating marks.  
Implementation (8/15):  
Your implementation will be assessed on the different implemented data structures,
respectively, and on the number of tests it passes in our automated testing.
While the emphasis of this project is not programming, we would like you to main-  
tain decent coding design, readability and commenting, hence these factors will  
contribute towards your marks.  
Report (7/15):  
The marking sheet in Appendix A outlines the criteria that will be used to guide the
marking of your evaluation report. Use the criteria and the suggested report structure
(Section 4) to inform you of how to write the report.
Late Submission Penalty: Late submissions will incur a 10% penalty on the total  
marks of the corresponding assessment task per day or part of day late. Submissions  
that are late by 5 days or more are not accepted and will be awarded zero, unless special  
consideration has been granted. Granted Special Considerations with new due date set  
after the results have been released (typically 2 weeks after the deadline) will automat-  
ically result in an equivalent assessment in the form of a practical test, assessing the  
same knowledge and skills of the assignment (location and time to be arranged by the  
coordinator). Please ensure your submission is correct (all files are there, it compiles, etc.);
re-submissions after the due date and time will be considered late submissions. The
core teaching servers and Canvas can be slow, so please double-check that you have
your assignment done and submitted a little before the submission deadline to avoid
submitting late.
Assessment declaration: By submitting this assessment, all the members of the group
agree to the assessment declaration.
Academic integrity and plagiarism (standard warning)  
Academic integrity is about honest presentation of your academic work. It means ac-  
knowledging the work of others while developing your own insights, knowledge and ideas.  
You should take extreme care that you have:  
Acknowledged words, data, diagrams, models, frameworks and/or ideas of others  
you have quoted (i.e. directly copied), summarised, paraphrased, discussed or men-  
tioned in your assessment through the appropriate referencing methods  
Provided a reference list of the publication details so your reader can locate the  
source if necessary. This includes material taken from Internet sites. If you do not  
acknowledge the sources of your material, you may be accused of plagiarism because  
you have passed off the work and ideas of another person without appropriate  
referencing, as if they were your own.  
RMIT University treats plagiarism as a very serious offence constituting misconduct.  
Plagiarism covers a variety of inappropriate behaviours, including:

- Failure to properly document a source
- Copying material from the internet or databases
- Collusion between students
For further information on our policies and procedures, please refer to the following:  
Getting Help
There are multiple venues to get help. There are weekly consultation hours (see Canvas  
for time and location details). In addition, you are encouraged to discuss any issues you  
have with your Tutor or Lab Demonstrator. We will also be posting common questions on  
the assignment 1 Q&A section on Canvas and we encourage you to check and participate  
in the discussion forum on Canvas. However, please refrain from posting solutions,  
particularly as this assignment is focused on algorithmic and data structure design.  
A Marking Guide for the Report  
Design of Evaluation (Maximum = 1.5 marks)

1.5 marks – Data generation is well designed, systematic and well explained. All
suggested scenarios, data structures and a reasonable range of runqueue sizes were
evaluated. Each type of test was run over a number of runs and results were averaged.

1 mark – Data generation is reasonably designed, systematic and explained. There is
at least one obvious missing suggested scenario, data structure or reasonable runqueue
size. Each type of test was run over a number of runs and results were averaged.

0.5 mark – Data generation is somewhat adequately designed, systematic and
explained. There are several obvious missing suggested scenarios, data structures or
reasonable runqueue sizes. Each type of test may only have been run once.

0 marks – Data generation is poorly designed, systematic and explained. There are
many obvious missing suggested scenarios, data structures or reasonable runqueue
sizes. Each type of test has only been run once.

Analysis of Results (Maximum = 4 marks)

4 marks – Analysis is thorough and demonstrates understanding and critical analysis.
Well-reasoned explanations and comparisons are provided for all the data structures,
scenarios and different input sizes of the runqueue. All analysis, comparisons and
conclusions are supported by empirical evidence and theoretical complexities.
Well-reasoned recommendations are given.

3 marks – Analysis is reasonable and demonstrates good understanding and critical
analysis. Adequate comparisons and explanations are made and illustrated with most
of the suggested scenarios and input sizes of the runqueue. Most analysis and
comparisons are supported by empirical evidence and theoretical analysis. Reasonable
recommendations are given.

2 marks – Analysis is adequate and demonstrates some understanding and critical
analysis. Some explanations and comparisons are given and illustrated with one or two
scenarios and sizes of the runqueue. A portion of the analysis and comparisons are
supported by empirical evidence and theoretical analysis. Adequate recommendations
are given.

1 mark – Analysis is poor and demonstrates minimal understanding and critical
analysis. Few explanations or comparisons are made, illustrated with one scenario and
size setting. Little analysis and comparison is supported by empirical evidence and
theoretical analysis. Poor or no recommendations are given.

Report Clarity (Maximum = 1.5 marks)

1.5 marks – Very clear, well structured and accessible report; an undergraduate
student can pick up the report and understand it with no difficulty.

1 mark – Clear and structured for the most part, with a few unclear minor sections.

0.5 mark – Generally clear and well structured, but there are notable gaps and/or
unclear sections.

0 marks – The report is unclear on the whole and the reader has to work hard to
understand.
[1] Completely fair scheduler.
[2] Scheduling (computing).

