* ability to specify a per-layer activation function
* tests for the new per-layer activation support
* appease the style CI by fixing a whitespace issue
* more flexible addition of layers: a developer can now pass a Layer object in manually (see the sketch after this list)
* new test for passing a Layer object to the MLP constructor
* documentation for the added MLP functionality
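
A minimal sketch of the constructor flexibility described above, assuming a hypothetical Python-style API; the `MLP` and `Layer` names, signatures, and activation functions here are illustrative assumptions, not the project's actual classes:

```python
from typing import Callable, List, Union
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def relu(x: float) -> float:
    return max(0.0, x)


class Layer:
    # Hypothetical Layer: carries its own activation function so the
    # network can use a different activation per layer.
    def __init__(self, size: int, activation: Callable[[float], float] = sigmoid):
        self.size = size
        self.activation = activation


class MLP:
    def __init__(self, layers: List[Union[int, Layer]]):
        # Accept either plain sizes (which get the default activation)
        # or pre-built Layer objects passed in manually.
        self.layers = [
            layer if isinstance(layer, Layer) else Layer(layer)
            for layer in layers
        ]


# Plain sizes and a manually constructed Layer can be mixed freely:
net = MLP([4, Layer(8, activation=relu), 3])
```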
- Backpropagation now uses each neuron's activation function derivative instead of a hardcoded sigmoid derivative (see the first sketch after this list)
- Added missing activation function derivatives
- Sigmoid is forced for the output layer
- Updated the ThresholdedReLU default threshold to 0, so it acts as a standard ReLU (see the second sketch after this list)
- Unit tests for derivatives
- Unit tests for classifiers using different activation functions
- Added missing docs
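
A minimal sketch of the backpropagation change described above, assuming a hypothetical lookup of each neuron's own derivative by activation name; all names and signatures here are illustrative, not the project's actual code:

```python
import math


def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))


def sigmoid_derivative(x: float) -> float:
    s = sigmoid(x)
    return s * (1.0 - s)


def relu_derivative(x: float) -> float:
    return 1.0 if x > 0 else 0.0


# Hypothetical registry mapping each activation to its derivative.
DERIVATIVES = {
    "sigmoid": sigmoid_derivative,
    "relu": relu_derivative,
}


def hidden_delta(activation_name: str, pre_activation: float,
                 downstream_error: float) -> float:
    # Before the change, this step would always apply sigmoid_derivative;
    # now it applies the derivative of the neuron's configured activation.
    return downstream_error * DERIVATIVES[activation_name](pre_activation)
```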
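
And a short illustration of why a ThresholdedReLU with the new default threshold of 0 behaves as a standard ReLU (a hypothetical standalone function, not the project's implementation):

```python
def thresholded_relu(x: float, threshold: float = 0.0) -> float:
    # Outputs x only when x exceeds the threshold, else 0.
    # With the new default threshold of 0 this is exactly ReLU:
    # x for x > 0, and 0 otherwise.
    return x if x > threshold else 0.0
```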