
Building a BP Neural Network with Python and NumPy
Overview:
This resource builds a compact BP (backpropagation) neural network entirely with the NumPy library. Because the task is a regression problem rather than a classification problem, the activation function of the output layer is set to f(x) = x (the identity function). To avoid redundancy, the detailed working principles of BP neural networks are not repeated here. The code skeleton of the network is as follows:
```python
import numpy as np


class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set the number of nodes in the input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes
        self.learning_rate = learning_rate
        # Initialize weights and biases randomly.
        self.wb1 = np.random.rand(self.input_nodes, self.hidden_nodes)
        self.b1 = np.random.rand(1, self.hidden_nodes)
        self.wb2 = np.random.rand(self.hidden_nodes, self.output_nodes)
        self.b2 = np.random.rand(1, self.output_nodes)

    def feedforward(self, X):
        # Hidden layer: a1 = ReLU(X . W1 + b1)
        self.z1 = np.dot(X, self.wb1) + self.b1
        self.a1 = self._relu(self.z1)
        # Output layer: z2 = a1 . W2 + b2 with the identity activation
        # f(x) = x, since this is a regression task rather than classification.
        self.z2 = np.dot(self.a1, self.wb2) + self.b2
        return self._linear(self.z2)

    def _relu(self, z):
        # ReLU (rectified linear unit): f(x) = max(0, x)
        return np.maximum(0, z)

    def _linear(self, z):
        # Identity function: f(x) = x, used for the regression output layer.
        return z
```
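The snippet above covers only the forward pass. As a rough illustration of what the backward pass could look like for this architecture (ReLU hidden layer, identity output), the sketch below adds a hypothetical `train_step` helper that performs one gradient-descent update under a mean-squared-error loss. The function name, the choice of MSE, the plain gradient-descent update, and the toy data in the usage example are assumptions for illustration and are not part of the original resource.

```python
import numpy as np

# Assumes the NeuralNetwork class defined above is in scope.

def train_step(nn, X, y):
    """One hypothetical gradient-descent step on a batch.

    X: array of shape (N, input_nodes), y: array of shape (N, output_nodes).
    Uses a mean-squared-error loss; returns the batch MSE.
    """
    n_samples = X.shape[0]
    y_hat = nn.feedforward(X)  # also caches nn.z1 and nn.a1

    # Output layer: identity activation + MSE gives the error term (y_hat - y).
    delta2 = (y_hat - y) / n_samples
    grad_wb2 = np.dot(nn.a1.T, delta2)
    grad_b2 = np.sum(delta2, axis=0, keepdims=True)

    # Hidden layer: propagate the error back through W2 and the ReLU derivative.
    delta1 = np.dot(delta2, nn.wb2.T) * (nn.z1 > 0)
    grad_wb1 = np.dot(X.T, delta1)
    grad_b1 = np.sum(delta1, axis=0, keepdims=True)

    # Plain gradient-descent parameter update.
    nn.wb2 -= nn.learning_rate * grad_wb2
    nn.b2 -= nn.learning_rate * grad_b2
    nn.wb1 -= nn.learning_rate * grad_wb1
    nn.b1 -= nn.learning_rate * grad_b1

    return np.mean((y_hat - y) ** 2)


if __name__ == "__main__":
    # Hypothetical toy usage: fit y = 2x on a handful of points.
    nn = NeuralNetwork(input_nodes=1, hidden_nodes=8, output_nodes=1, learning_rate=0.01)
    X = np.linspace(0, 1, 20).reshape(-1, 1)
    y = 2 * X
    for epoch in range(500):
        loss = train_step(nn, X, y)
```

One consequence of choosing the identity output activation for regression is visible here: with an MSE loss, the output-layer error term reduces to the simple difference (y_hat - y), which keeps the backward pass short.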


