A Few sknn Examples
Regression
Assuming your data takes the form of numpy.ndarray arrays stored in the variables X_train and y_train, you can train a sknn.mlp.Regressor neural network. The input and output arrays are continuous values in this case, but it’s best if you normalize or standardize your inputs to the [0..1] or [-1..1] range. (See the sklearn Pipeline example below.)
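A minimal sketch based on the sknn.mlp API; the hidden layer size and learning rate here are illustrative choices, not values from the original:

    from sknn.mlp import Regressor, Layer

    # X_train and y_train are continuous-valued numpy.ndarray inputs/targets.
    nn = Regressor(
        layers=[
            Layer("Rectifier", units=100),  # hidden layer; size is an assumption
            Layer("Linear")],               # linear output suits continuous targets
        learning_rate=0.02,
        n_iter=10)
    nn.fit(X_train, y_train)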
This will train the regressor for 10 epochs (specified via the n_iter parameter). The layers parameter determines how the neural network is structured, and learning_rate sets the learning rate; see the sknn.mlp.Layer documentation for supported layer types and their parameters.
Then you can use the trained network to make predictions as follows:
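For example, where X_example is a new array with the same number of features as X_train (the variable name is illustrative):

    # Inputs must be scaled the same way as the training data.
    y_example = nn.predict(X_example)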
Classification
If your data in numpy.ndarray form contains integer labels as outputs, you can train a sknn.mlp.Classifier neural network to classify it with the following snippet:
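A sketch along the same lines; the Rectifier hidden layer and learning rate are illustrative, while the Softmax output and 25 iterations follow the description below:

    from sknn.mlp import Classifier, Layer

    # y_train contains integer class labels.
    nn = Classifier(
        layers=[
            Layer("Rectifier", units=100),  # hidden layer; size is an assumption
            Layer("Softmax")],              # Softmax output for classification
        learning_rate=0.001,
        n_iter=25)
    nn.fit(X_train, y_train)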
It’s a good idea to normalize or standardize your data in this case too, for example using the sklearn Pipeline shown below. The code here will train for 25 iterations. Note that a Softmax output layer activation is used here, and it’s recommended as a default for classification problems.
For multi-label classification, simply fit using a y array of integers that has multiple dimensions, e.g. shape (N, 3) for three different classes, and make sure the last layer is Sigmoid instead, as in the sketch below.
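A minimal sketch of that variant, reusing the classifier setup above with a Sigmoid output layer (the hidden layer is again an illustrative choice):

    # y_train has shape (N, 3): one column of 0/1 labels per class.
    nn = Classifier(
        layers=[
            Layer("Rectifier", units=100),
            Layer("Sigmoid")],  # Sigmoid output for independent per-class labels
        n_iter=25)
    nn.fit(X_train, y_train)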
Convolution
A technique for working with images.
When working with images as inputs, either in 2D (greyscale) or 3D (RGB) form stored in numpy.ndarray, you can use convolution to train a neural network with shared weights. Here’s an example of how classification would work:
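A sketch matching the description below, eight 3x3 kernels feeding a Softmax output; the learning rate and iteration count are illustrative:

    from sknn.mlp import Classifier, Convolution, Layer

    nn = Classifier(
        layers=[
            # 8 shared-weight 3x3 kernels, each producing its own output channel
            Convolution("Rectifier", channels=8, kernel_shape=(3, 3)),
            Layer("Softmax")],
        learning_rate=0.02,
        n_iter=5)
    nn.fit(X_train, y_train)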
The neural network here is trained with eight kernels of shared weights in a 3x3 matrix, each outputting to its own channel. The rest of the code remains the same, but see the sknn.mlp.Layer documentation for supported convolution layer types and parameters.
Per-Sample Weighting
A small trick for when the classes in the training set are unbalanced.
When training a classifier with data that has unbalanced labels, it’s useful to adjust the weight of the different training samples to prevent bias. This is achieved via a feature called masking. You can specify the weights of each training sample when calling the fit() function.
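A sketch matching the weights described below (w_train is an illustrative variable name):

    # Weight samples of class 0 by 1.2 and class 1 by 0.8 to counter imbalance.
    w_train = (y_train == 0) * 1.2 + (y_train == 1) * 0.8

    nn.fit(X_train, y_train, w_train)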
In this case there are two classes: class 0 is given weight 1.2, and class 1 weight 0.8. This feature also works for regressors.
sklearn Pipeline
A recommended normalization tool.
Typically, neural networks perform better when their inputs have been normalized or standardized. Using scikit-learn’s pipeline support is an obvious choice for this.
Here’s how to set up such a pipeline with a multi-layer perceptron as the classifier:
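A sketch using scikit-learn’s Pipeline and MinMaxScaler; the single-Softmax-layer classifier is a minimal illustrative choice:

    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import MinMaxScaler
    from sknn.mlp import Classifier, Layer

    pipeline = Pipeline([
        # Rescale every feature into the [0, 1] range before training.
        ('min/max scaler', MinMaxScaler(feature_range=(0.0, 1.0))),
        ('neural network', Classifier(layers=[Layer("Softmax")], n_iter=25))])
    pipeline.fit(X_train, y_train)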