Shallow learning learns the parameters of the model directly from the features of the training examples.
Traditional ML is shallow: it relies on manually engineered features (Feature Engineering) and relatively simple models such as linear/logistic regression and decision trees.
When compared to Deep Learning,
- Training shallow models is computationally cheaper.
- Shallow models perform well on small/medium-sized datasets.
- They struggle with complex patterns and unstructured data (e.g. images, audio).
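A minimal sketch of this shallow workflow, assuming scikit-learn is available: a hand-crafted feature feeding a simple linear model on a small, synthetic tabular dataset. The data and the engineered "area" feature are illustrative placeholders, not part of the original note.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy tabular data: two raw columns (e.g. length and width of some object).
raw = rng.uniform(0.5, 5.0, size=(500, 2))
labels = (raw[:, 0] * raw[:, 1] > 6.0).astype(int)  # class depends on the area

# Feature Engineering: add a hand-crafted "area" feature that the linear
# model could not derive on its own from the raw columns.
features = np.column_stack([raw, raw[:, 0] * raw[:, 1]])

X_train, X_val, y_train, y_val = train_test_split(
    features, labels, test_size=0.2, random_state=0
)

# Shallow model: cheap to train, works well on this small dataset.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", accuracy_score(y_val, model.predict(X_val)))
```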
# Shallow Learning Strategy
1. Define a performance metric P and a Baseline B.
2. Shortlist learning algorithms #ml/algorithms .
3. Choose a Hyperparameter Tuning strategy T.
4. Pick a learning algorithm A.
5. Pick a combination H of hyperparameter values for algorithm A using strategy T.
6. Use the training set to train a model M using algorithm A parametrized with hyperparameter values H.
7. Use the Validation Dataset to calculate the value of metric P for model M.
8. Decide:
	- If there are still untested hyperparameter values, pick another combination H of hyperparameter values using strategy T and go back to step 6.
	- Otherwise, pick a different learning algorithm A and go back to step 5, or proceed to step 9 if there are no more learning algorithms to try.
9. Return the model for which the value of metric P is maximized.
10. Compare the performance of the returned model to baseline B.
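A minimal sketch of the strategy above, assuming scikit-learn and grid search as the tuning strategy T. The dataset, the algorithm shortlist, the hyperparameter grids, and the majority-class baseline are illustrative placeholders, not a prescribed setup.

```python
from itertools import product

from sklearn.datasets import load_breast_cancer
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score          # performance metric P
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)

# Baseline B: a majority-class predictor.
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
baseline_score = accuracy_score(y_val, baseline.predict(X_val))

# Shortlisted algorithms with their hyperparameter grids (tuning strategy T = grid search).
shortlist = {
    LogisticRegression: {"C": [0.1, 1.0, 10.0], "solver": ["liblinear"], "max_iter": [1000]},
    DecisionTreeClassifier: {"max_depth": [3, 5, None]},
}

best_model, best_score = None, float("-inf")
for algorithm, grid in shortlist.items():                    # pick a learning algorithm A
    keys, values = zip(*grid.items())
    for combo in product(*values):                           # pick a combination H via T
        params = dict(zip(keys, combo))
        model = algorithm(**params).fit(X_train, y_train)    # train M with A and H
        score = accuracy_score(y_val, model.predict(X_val))  # metric P on the validation set
        if score > best_score:
            best_model, best_score = model, score

# Return the best model and compare it to the baseline.
print(f"best model: {best_model}")
print(f"validation P = {best_score:.3f} vs baseline B = {baseline_score:.3f}")
```

Note that this sketch keeps the outer loop over algorithms and the inner loop over hyperparameter combinations explicit to mirror steps 4–8; in practice a helper such as scikit-learn's GridSearchCV could replace the inner loop.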