
CSC242 Intro to AI Project 4: Learning

In this project you will have the opportunity to implement and evaluate one or more machine learning methods. This is a huge area of application and research, and is well covered in upper-level Computer Science courses. It's also nearing the end of the term. So we can really only scratch the surface with this project. But remember, the more you do and the more you do yourself, the more you'll learn.

For the project, you MUST implement linear classifiers as described in AIMA Sec. 18.6 and in class. Linear classifiers are the basis of neural networks (AIMA 18.7), so for extra credit (max 20%) you may also implement neural networks. Both aspects of the project are described in detail below.

For this project, you MUST also produce a short report describing what you did and presenting the results of your learning program(s). The requirements for this are also detailed below.

Finally, if you're looking for information about machine learning on the Internet, be very careful NOT to look at code... especially code from this course...

Linear Classifiers

Linear classifiers are well covered in AIMA Sect. 18.6.
Think about what it takes to represent a linear classifier and the data used to train it in a computer program. THINK ABOUT IT NOW.

Ok. I hope you thought about things like input vectors, outputs, weight vectors, and update rules. None of these are hard to implement, but you should think through the design before jumping in with unstructured code.
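For concreteness, here is one minimal sketch of such a representation in Python. The names (Example, LinearClassifier, adjust) are purely illustrative and are not the API of the provided code; your own design may well differ.

    class Example:
        """One labeled training example: an input vector plus its 0/1 class label."""
        def __init__(self, inputs, label):
            self.inputs = inputs   # feature values x_1 .. x_n
            self.label = label     # desired output y (0 or 1)

    class LinearClassifier:
        """A weight vector w_0 .. w_n, where w_0 is the bias weight for a dummy input of 1."""
        def __init__(self, num_inputs):
            self.weights = [0.0] * (num_inputs + 1)

        def weighted_sum(self, inputs):
            # w . x, with the implicit dummy input x_0 = 1 for the bias weight
            return self.weights[0] + sum(w * x for w, x in zip(self.weights[1:], inputs))

        def adjust(self, inputs, delta):
            # Shared weight update: w_0 <- w_0 + delta, w_i <- w_i + delta * x_i
            self.weights[0] += delta
            for i, x in enumerate(inputs):
                self.weights[i + 1] += delta * x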

You will learn the most if you develop your implementation yourself. If you need a little help getting started, look at the documentation for the code I have provided from package “lc”. It will suggest some classes and give you their APIs. That may be enough for you to write the code. But if you need a bit more help, you can build off the code I have provided. You will need to implement the crucial classes and/or methods for actually learning the linear classifiers.


You MUST implement both a perceptron classifier (18.6.3) and a logistic classifier (18.6.4). Almost all of the code can be shared if you design it right. You should be able to replicate something like the textbook results for the earthquake problem (Figures 18.15, 18.16, and 18.18).
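To see how much code can be shared, compare the two update rules side by side. The sketch below (again illustrative Python, building on the Example and LinearClassifier classes sketched earlier, not the provided API) follows the update rules in AIMA 18.6.3 and 18.6.4: the two classifiers differ only in how the prediction is computed and in the extra derivative factor h(1 - h) in the logistic rule.

    import math

    class PerceptronClassifier(LinearClassifier):
        """Hard-threshold linear classifier (AIMA 18.6.3)."""
        def predict(self, inputs):
            return 1.0 if self.weighted_sum(inputs) >= 0.0 else 0.0

        def update(self, example, alpha):
            # Perceptron rule: w_i <- w_i + alpha * (y - h_w(x)) * x_i
            error = example.label - self.predict(example.inputs)
            self.adjust(example.inputs, alpha * error)

    class LogisticClassifier(LinearClassifier):
        """Soft-threshold (logistic) linear classifier (AIMA 18.6.4)."""
        def predict(self, inputs):
            return 1.0 / (1.0 + math.exp(-self.weighted_sum(inputs)))

        def update(self, example, alpha):
            # Logistic rule: same shape, plus the derivative factor h_w(x) * (1 - h_w(x))
            h = self.predict(example.inputs)
            self.adjust(example.inputs, alpha * (example.label - h) * h * (1.0 - h))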

Demonstrate your program on the earthquake data (both clean and noisy datasets) and the “house votes” dataset (“numerical” version), all of which are provided in our code bundle. Feel free to try other datasets also. See below for details regarding your report.
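A training and evaluation loop for those experiments might look like the sketch below, which builds on the classifier sketches above. The decaying learning-rate schedule alpha(t) = 1000/(1000 + t) is the one the textbook uses for its earthquake curves, but the file name and the assumption that each line holds comma-separated feature values with the class label last are guesses on my part; check the actual files in the code bundle and adjust.

    import random

    def load_examples(path):
        # ASSUMPTION: one example per line, comma-separated numbers, class label (0/1) last.
        examples = []
        with open(path) as f:
            for line in f:
                values = [float(v) for v in line.split(",") if v.strip()]
                if values:
                    examples.append(Example(values[:-1], values[-1]))
        return examples

    def train(classifier, examples, steps=5000):
        # One randomly chosen example per step, with a decaying learning rate.
        for t in range(1, steps + 1):
            alpha = 1000.0 / (1000.0 + t)
            ex = random.choice(examples)
            classifier.update(ex, alpha)

    def accuracy(classifier, examples):
        # Proportion classified correctly, thresholding predictions at 0.5 so the
        # same measure works for both the perceptron and the logistic classifier.
        correct = sum(1 for ex in examples
                      if (classifier.predict(ex.inputs) >= 0.5) == (ex.label >= 0.5))
        return correct / len(examples)

    # Hypothetical usage (the file name is a placeholder):
    # data = load_examples("earthquake-clean.csv")
    # lc = LogisticClassifier(num_inputs=len(data[0].inputs))
    # train(lc, data)
    # print(accuracy(lc, data))

Plotting accuracy against the number of weight updates as training proceeds will give you learning curves comparable to the textbook's figures.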

Neural Networks

Neural networks are covered in AIMA Section 18.7. It does cover all the important definitions for both single-layer and multi-layer feed-forward networks, and it provides the algorithm for backpropagation in multi-layer networks (Fig. 18.24). That said, the presentation is very concise. So if you choose to implement this type of learner, be prepared to do some thinking and/or additional research as you develop and evaluate your system.

It isn’t hard to think about what you need to represent a neural network in a computer program. THINK ABOUT IT NOW.

Ok. I hope you thought about “units,” layers, connections, weights, activation functions, inputs, and outputs. Remember that a neural network is simply a graph of linear classifiers (typically using a logistic threshold). It is not hard to design classes incorporating these elements. However, I suggest that you understand how the backpropagation algorithm works before you lock in your design. In particular, note that it requires that you be able to go both forward and backward through the layers of your networks, even though the network is “feed-forward.”
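One illustrative way to put these pieces together (again a Python sketch, not the API of the provided nn package) is to store a fully connected feed-forward network as a list of layers, each layer a list of per-unit weight vectors, with the forward pass saving every layer's activations so they are available when you later go backward.

    import math
    import random

    def logistic(x):
        """The logistic (sigmoid) activation function."""
        return 1.0 / (1.0 + math.exp(-x))

    def make_network(layer_sizes):
        """Random initial weights for a fully connected feed-forward network.

        layer_sizes is e.g. [n_inputs, n_hidden, n_outputs].  The result is
        indexed as network[l][j][i]: the weight into unit j of layer l+1 from
        unit i of layer l, with index 0 of each unit's weight vector reserved
        for the bias weight (a dummy input fixed at 1).
        """
        return [[[random.uniform(-0.5, 0.5) for _ in range(layer_sizes[l] + 1)]
                 for _ in range(layer_sizes[l + 1])]
                for l in range(len(layer_sizes) - 1)]

    def forward(network, inputs):
        """Forward pass: return the activations of every layer, input layer included."""
        activations = [list(inputs)]
        for layer in network:
            prev = activations[-1]
            activations.append([
                logistic(unit[0] + sum(w * a for w, a in zip(unit[1:], prev)))
                for unit in layer])
        return activations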

As always, you will learn the most if you develop your implementation yourself. And in this case, if you need a little help, I’m sorry but the code I can provide is less useful. You are welcome to look at the documentation for the package “nn”. That will give you some suggestions for classes and APIs. And if you need more help, you can build off the code I have provided. You will need to implement the crucial classes and/or methods for actually learning the linear classifiers in nn.core.

If you choose to do this (for extra credit), you MUST implement a multi-layer network with hidden units. A single-layer network is essentially a set of one or more linear classifiers. A multi-layer network is a “true” neural network that must be trained using backpropagation.
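Here is a sketch of that training step, in the spirit of the BACK-PROP-LEARNING pseudocode (AIMA Fig. 18.24) and building on the make_network/forward/logistic sketch above: it runs the forward pass, propagates deltas backward from the output layer through the hidden layers, and then adjusts every weight.

    def backprop_update(network, inputs, targets, alpha):
        """One stochastic-gradient step of backpropagation on a single example.

        Uses logistic/forward/make_network from the sketch above; 'targets' is
        the desired output vector for this example.
        """
        activations = forward(network, inputs)

        # Output-layer deltas: g'(in_j) * (y_j - a_j), where g'(in) = a * (1 - a)
        # for the logistic activation.
        outputs = activations[-1]
        deltas = [[a * (1.0 - a) * (y - a) for a, y in zip(outputs, targets)]]

        # Propagate deltas backward through the hidden layers.
        for l in range(len(network) - 1, 0, -1):
            next_deltas = deltas[0]
            hidden_deltas = []
            for i, a in enumerate(activations[l]):
                downstream = sum(network[l][j][i + 1] * next_deltas[j]
                                 for j in range(len(network[l])))
                hidden_deltas.append(a * (1.0 - a) * downstream)
            deltas.insert(0, hidden_deltas)

        # Adjust every weight: w <- w + alpha * a_i * delta_j
        # (index 0 is the bias weight, whose "input" is the dummy value 1).
        for l, layer in enumerate(network):
            prev_acts = activations[l]
            for j, unit in enumerate(layer):
                unit[0] += alpha * deltas[l][j]
                for i, a in enumerate(prev_acts):
                    unit[i + 1] += alpha * a * deltas[l][j]

Training is then a matter of repeating this update over the training examples for many epochs with a small or decaying alpha; with no hidden layer, the same code reduces to the logistic classifiers from the first part of the project.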