Python Assignment | FIT5196-SSB-2021 assessment 1

This Python assignment involves extracting data from text files and pre-processing it.

FIT5196-SSB-2021 assessment 1

Task 1: Parsing Text Files (55%)
This assessment touches the very first step of analyzing textual data, i.e., extracting data from
semi-structured text files. Each student is provided with a dataset that contains information about
COVID-19 related tweets (please find Dataset_a1_part1.zip on Moodle). The zip file contains
140 text files. Each text file contains information about tweets, i.e., the “id”, “text”, and “created_at”
attributes. Your task is to extract the data and transform it into XML format with the
following elements:
1. id: a 19-character identifier consisting of digits and letters.
2. text: the actual tweet.
3. created_at: the date and time at which the tweet was created.
The XML file must have the same structure as the one in the sample folder. Please note that, as we are
dealing with large datasets, manual checking of the outputs is impossible: the output files
will be processed and marked automatically. Therefore, any deviation from the XML
structure of sample.xml (e.g., wrong key names caused by different spelling or
different upper/lower case, wrong hierarchy, not handling the XML special characters, …)
will result in a zero for the output mark, as the marking script would fail to load your file.
(Hint: run your code on the provided example and make sure that it produces exactly the same
output as the sample output. You can also use the “xmltodict” package to make sure that your
XML is loadable; a minimal check is sketched after the constraints below.) Besides the XML
structure, the following constraints must also be satisfied:
1. The “id”s must be unique, so if there are multiple instances of the same tweet, you must
keep only one of them in your final XML file.
2. Non-English tweets must be filtered out of the dataset, and the final XML should
only contain tweets in English. For the sake of consistency, you must use
the langid package to classify the language of a tweet.
3. The re, os, and langid packages are the only Python packages that you are allowed
to use for Task 1 of this assessment (e.g., “pandas” is not allowed!); any other package
that needs to be “import”ed before use is not allowed. A rough skeleton that respects these
constraints is sketched at the end of this task.
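
For example, a minimal loadability check along the lines of the hint above might look like the sketch below (the output file name is an assumption). Since no XML library is on the allowed package list for Task 1, the helper also shows one way to escape the XML special characters by hand:

```python
def escape_xml(text):
    """Escape the five XML special characters; '&' must be replaced first."""
    return (text.replace("&", "&amp;")
                .replace("<", "&lt;")
                .replace(">", "&gt;")
                .replace('"', "&quot;")
                .replace("'", "&apos;"))

# Validation only (not part of the Task 1 pipeline itself):
# xmltodict.parse raises an ExpatError if the file is not well-formed XML.
import xmltodict

with open("output.xml", encoding="utf-8") as f:  # hypothetical output file name
    xmltodict.parse(f.read())
print("XML is loadable")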
The output and the documentation will be marked separately in this task, and each carries its own
mark.
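
To make the constraints above concrete, a rough skeleton is sketched below. The directory name and the extraction pattern are assumptions; the real pattern must be written against the actual layout of the 140 text files:

```python
import os
import re
import langid

data_dir = "Dataset_a1_part1"  # hypothetical folder name after unzipping
tweets = {}                    # id -> (text, created_at); dict keys keep ids unique

for fname in os.listdir(data_dir):
    if not fname.endswith(".txt"):
        continue
    with open(os.path.join(data_dir, fname), encoding="utf-8") as f:
        raw = f.read()
    # Hypothetical pattern -- adapt it to the real file layout.
    pattern = r'"id":"(.+?)".*?"text":"(.+?)".*?"created_at":"(.+?)"'
    for tweet_id, text, created_at in re.findall(pattern, raw, re.DOTALL):
        # langid.classify returns a (language, score) tuple, e.g. ('en', -54.4)
        if tweet_id not in tweets and langid.classify(text)[0] == "en":
            tweets[tweet_id] = (text, created_at)
```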

Task 2: Text Pre-Processing (45%)
This assessment touches on the next step of analyzing textual data, i.e., converting the extracted
data into a proper format. In this assessment, you are required to write Python code to preprocess
a set of tweets and convert them into numerical representations (which are suitable for input into
recommender-system/information-retrieval algorithms).
The dataset that we provide contains 30+ days of COVID-19 related tweets (from late March to
mid-July 2020). Please find Dataset_a1_part2.zip on Moodle. The zip file contains
tweet_dataset.xlsx and stopwords_en.txt. The Excel file contains 20+ sheets. Your task is to
extract and transform the information in the Excel file by performing the following tasks:
1. Generate the corpus vocabulary with the same structure as sample_vocab.txt. Please
note that the vocabulary must be sorted alphabetically.
2. For each day (i.e., each sheet in your Excel file), calculate the top 100 most frequent unigrams
and the top 100 most frequent bigrams according to the structure of sample_100uni.txt and
sample_100bi.txt. If you have fewer than 100 bigrams for a particular day, just include the
top-n bigrams for that day (n < 100).
3. Generate the sparse representation (i.e., the doc-term matrix) of the Excel file according to the
structure of sample_countVec.txt.
These sample txt files and sample.xlsx are compressed into the Task2_Sample_Files.zip file,
which can be downloaded from Moodle.
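
Because Task 2 places no package restrictions (see the note after the step list below), the workbook can be loaded in one call, e.g. with pandas. A minimal sketch; the "text" column name is an assumption about the sheet layout:

```python
import pandas as pd
import langid

# sheet_name=None loads every sheet into a dict: sheet name -> DataFrame,
# i.e. one entry per day.
sheets = pd.read_excel("tweet_dataset.xlsx", sheet_name=None)

days = {}
for day, df in sheets.items():
    texts = df["text"].astype(str)  # "text" column name is an assumption
    days[day] = [t for t in texts if langid.classify(t)[0] == "en"]
```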
Please note that the following steps must be performed (not necessarily in the same order) to
complete the assessment.
1. Using the “langid” package, keep only the tweets that are in English.
2. Word tokenization must use the following regular expression: “[a-zA-Z]+(?:[-'][a-zA-Z]+)?”.
3. The context-independent and context-dependent (with the threshold set to more than
24 days) stop words must be removed from the vocab. The provided context-independent
stop words list (i.e., stopwords_en.txt) must be used.
4. Tokens must be stemmed using the Porter stemmer.
5. Rare tokens (with the threshold set to less than 2 days) must be removed from the
vocab.
6. Create the sparse matrix using CountVectorizer (a sketch combining steps 2-7
follows this list).
7. Tokens with a length of less than 3 must be removed from the vocab.
8. The first 200 meaningful bigrams (i.e., collocations) must be included in the vocab using
the PMI measure (see the nltk sketch after the package note below).
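
One possible sketch combining steps 2-7 follows. The order in which stemming, filtering, and counting are applied is a design decision the brief leaves open, so treat this only as a starting point; the `days` dict comes from the hypothetical loading sketch above:

```python
import re
from nltk.stem import PorterStemmer
from sklearn.feature_extraction.text import CountVectorizer

TOKEN_RE = r"[a-zA-Z]+(?:[-'][a-zA-Z]+)?"  # the required tokenization pattern
stemmer = PorterStemmer()

with open("stopwords_en.txt", encoding="utf-8") as f:
    stopwords = set(f.read().split())

def tokenize(text):
    # Lowercasing before matching is an assumption.
    return [t for t in re.findall(TOKEN_RE, text.lower())
            if t not in stopwords and len(t) >= 3]

# Tokens per day; `days` maps sheet name -> list of English tweets.
day_tokens = {day: [t for tweet in tws for t in tokenize(tweet)]
              for day, tws in days.items()}

# Document frequency measured in days, for the two day-based thresholds:
# drop context-dependent stop words (> 24 days) and rare tokens (< 2 days).
day_freq = {}
for toks in day_tokens.values():
    for t in set(toks):
        day_freq[t] = day_freq.get(t, 0) + 1
kept = {t for t, n in day_freq.items() if 2 <= n <= 24}

vocab = sorted({stemmer.stem(t) for t in kept})  # alphabetically sorted vocab

# CountVectorizer with a fixed vocabulary yields the doc-term matrix;
# the analyzer reuses the same tokenizer and stemmer.
vectorizer = CountVectorizer(vocabulary=vocab,
                             analyzer=lambda doc: [stemmer.stem(t)
                                                   for t in tokenize(doc)
                                                   if t in kept])
count_matrix = vectorizer.fit_transform(" ".join(toks)
                                        for toks in day_tokens.values())
```

Bigrams (step 8) are not included in this vocab; one way to find them is sketched after the package note below.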
Please note that you are allowed to use any Python packages as you see fit to complete
Task 2 of this assessment. The output and the documentation will be marked separately in this
task, and each carries its own mark.
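
For step 8, nltk's collocation utilities are one common choice. A minimal sketch, assuming `day_tokens` from the sketch above:

```python
from nltk.collocations import BigramAssocMeasures, BigramCollocationFinder

# Flat token stream over the whole corpus (an assumption; collocations could
# also be computed per day, depending on your reading of the brief).
all_tokens = [t for toks in day_tokens.values() for t in toks]

finder = BigramCollocationFinder.from_words(all_tokens)
bigram_measures = BigramAssocMeasures()
top_200 = finder.nbest(bigram_measures.pmi, 200)  # first 200 bigrams by PMI
```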

