Once upon a time in the realm of machine learning, decision trees ruled the land. They were revered for their simplicity and interpretability, but as the kingdom’s datasets grew larger and more complex, decision trees started to show their limitations. These trees were too focused, greedily committing to the single best split at every node, leaving no room for exploration or randomness and making them prone to overfitting.
But fear not, for in the enchanted forest of ML algorithms, a modified version of decision trees emerged: Extra Trees, short for Extremely Randomized Trees! They brought with them a touch of unpredictability, injecting randomness into the tree construction process. By doing so, they introduced a refreshing twist to the traditional decision tree algorithm.
The Element of Randomness
Extra Trees realized that sometimes, rules are meant to be broken. They believed that exploring different possibilities and considering various perspectives could lead to better predictions. So, during the construction of each decision tree, they conspired to use random subsets of the training data.
Imagine a scenario where the kingdom’s dataset contains 1,000 samples. Instead of using all 1,000 samples to build every tree, Extra Trees cunningly select a subset for each one, for instance 700 samples. By doing this, they introduce a hint of randomness into the decision-making process.
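In scikit-learn terms, this kind of per-tree sampling corresponds to the bootstrap and max_samples parameters of ExtraTreesClassifier (which, it is worth noting, trains every tree on the full dataset by default, so the sampling must be switched on explicitly). Here is a minimal sketch, assuming a synthetic stand-in for the kingdom’s 1,000-sample dataset:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# Hypothetical stand-in for the kingdom's dataset: 1,000 samples, 20 features.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

forest = ExtraTreesClassifier(
    n_estimators=100,
    bootstrap=True,    # draw a random sample of the data for each tree
    max_samples=0.7,   # roughly 700 of the 1,000 samples per tree
    random_state=42,
)
forest.fit(X, y)
print(f"Training accuracy: {forest.score(X, y):.3f}")
```

All parameter values here are illustrative, not recommendations.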
But their randomness doesn’t stop there! Extra Trees also decide to consider random subsets of the available features at each split.
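In scikit-learn, the size of that per-split feature subset is controlled by max_features. A short, self-contained sketch, again assuming the same synthetic dataset as above:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier

# The same hypothetical 1,000-sample, 20-feature dataset as before.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

forest = ExtraTreesClassifier(
    n_estimators=100,
    max_features="sqrt",  # evaluate ~sqrt(20) ≈ 4 randomly chosen features per split
    random_state=42,
)
forest.fit(X, y)
```

Restricting each split to a handful of random features decorrelates the trees in the ensemble, which is a large part of why the forest generalizes better than any single, fully deterministic tree.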