How to build Neural Networks in Model
In a recent internal discussion, we came across the question of whether it is possible to use more than one hidden layer in the "Shallow Neural Net" node from our Machine Learning elements in Model. So we decided to show you how this can be achieved.
Building Neural Networks in Model
The difference between a shallow neural network and a deep neural network is the number of layers between the input and output layers. A shallow neural network only has one hidden layer, while a deep neural network can have many.
On Pyramid, you can build both types in Model.
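To make the distinction concrete, here is a minimal sketch (not tied to Pyramid; the architectures and the helper name are purely illustrative) showing that the only structural difference is the number of hidden layers between input and output:

```python
# Each list describes a network as layer widths: input, hidden layer(s), output.
shallow = [8, 16, 1]         # input -> one hidden layer -> output
deep    = [8, 32, 16, 8, 1]  # input -> three hidden layers -> output

def hidden_layers(architecture):
    """Count the layers between the input and output layers."""
    return len(architecture) - 2

print(hidden_layers(shallow))  # 1
print(hidden_layers(deep))     # 3
```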
Shallow Neural Network
- To create a shallow neural network, attach the Shallow Neural Net node to your prepped data (assuming splitting, imputation, and other transformations have already been performed).
- Then specify the inputs and output classifier (target) of the model in the properties panel.
- Choose the running process type:
a. The running process type determines how much of the data is used to train the model. Fast uses 20% of the data, Accurate uses 90%, and Custom allows users to enter a percentage of their choice.
- Users also have the option to save the trained model, to be applied to a different dataset at a later stage for testing or scoring.
a. If the output of the model does not need to be saved, the option to “set as target” enables users to create the model without generating a dataset. This is useful during training.
- After you have run your models, you will see the model’s performance scores.
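The running process types above boil down to a train/test split at a given percentage. As a rough sketch (the function and argument names are hypothetical, not Pyramid's API), the behavior described could look like this:

```python
import random

def train_split(rows, process_type="fast", custom_pct=None):
    """Split rows into (train, rest) using the described percentages:
    'fast' -> 20% for training, 'accurate' -> 90%, anything else -> custom_pct."""
    pct = {"fast": 20, "accurate": 90}.get(process_type, custom_pct)
    rows = rows[:]           # copy so the caller's data is untouched
    random.shuffle(rows)     # randomize before splitting
    cut = int(len(rows) * pct / 100)
    return rows[:cut], rows[cut:]

data = list(range(100))
train, rest = train_split(data, "accurate")
print(len(train))  # 90
```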
Deep Neural Network
The steps to create a Deep Neural Network are largely similar, but there are a few extra options to tweak: namely the number of hidden layers, the number of epochs, and the dropout percentage.
- For a Deep Neural Network, use the TensorFlow node.
- Like the Shallow Neural Network, specify your settings:
a. Select the inputs and output.
b. Choose the running process type.
c. Specify saving options.
- Specify the number of hidden layers you want between your input and output layers.
a. The more hidden layers, the more complex and flexible the neural network.
b. Specify how many neurons are in each hidden layer, separated by commas.
- Choose the number of epochs.
a. 1 epoch means the full dataset is passed through the neural network once.
b. Increasing this number can help you find an optimal model, but setting it too high can result in overfitting.
- Choose the dropout percentage.
a. Dropout refers to ignoring a percentage of randomly selected neurons in your input and hidden layers at each update during training.
b. It’s a technique to prevent overfitting.
- Model scores are available once the model is run.
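To show what hidden layers and dropout mean mechanically, here is a minimal NumPy sketch of a forward pass through a deep network with inverted dropout. This is an illustration of the technique only, not Pyramid's or TensorFlow's implementation; the layer sizes and function names are made up:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

def forward(x, weights, dropout=0.2, training=True):
    """Pass x through the hidden layers, randomly zeroing a fraction of
    neurons during training (inverted dropout scales the survivors so the
    expected activation stays the same at test time)."""
    a = x
    for w in weights[:-1]:                       # hidden layers
        a = relu(a @ w)
        if training and dropout > 0:
            mask = rng.random(a.shape) >= dropout
            a = a * mask / (1.0 - dropout)
    return a @ weights[-1]                       # linear output layer

# Illustrative architecture: 4 inputs, two hidden layers of 8 neurons, 1 output.
layer_sizes = [4, 8, 8, 1]
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]

x = rng.normal(size=(5, 4))     # a batch of 5 rows
out = forward(x, weights, dropout=0.2)
print(out.shape)                # (5, 1)
```

An epoch, in these terms, is one such pass (plus a backward pass) over the full training set; running more epochs repeats the loop over the same data.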
Shallow vs Deep
An advantage of a shallow neural network is that it is less computationally intensive. However, it may not be able to capture more complex patterns in your data, so it can be less accurate.
A deep neural network is much more flexible and will be able to find those complex patterns, but it also comes with its own challenges. Since it is more complex, it is much more computationally expensive. It has more hyperparameters to experiment with, and it is also more prone to overfitting.
So, which one should be used?
That depends on the data. Try out both and see which one is more suitable.
The general rule for machine learning is: if models are performing similarly, always stick to the simpler model. Or as my professor used to say, “don’t use a bulldozer to hammer a nail”.