This is the automated hyperparameter tuning model. Here is the full code of my Optuna study; I will explain each line in detail afterwards:

```python
import optuna
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error


def objective(trial):
    n_estimators = trial.suggest_int('n_estimators', 100, 1000)
    max_depth = trial.suggest_int('max_depth', 10, 50)
    min_samples_split = trial.suggest_int('min_samples_split', 2, 32)
    min_samples_leaf = trial.suggest_int('min_samples_leaf', 1, 32)
    max_features = trial.suggest_categorical('max_features', ['sqrt', 'log2'])
    criterion = trial.suggest_categorical(
        'criterion', ["squared_error", "absolute_error", "friedman_mse", "poisson"])

    model = RandomForestRegressor(
        n_estimators=n_estimators,
        max_depth=max_depth,
        min_samples_split=min_samples_split,
        min_samples_leaf=min_samples_leaf,
        max_features=max_features,
        criterion=criterion,
        random_state=21,
    )

    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)

    # metric to optimize
    score = mean_squared_error(y_test, y_pred)
    return score


study = optuna.create_study(direction='minimize',
                            sampler=optuna.samplers.RandomSampler(seed=42))
study.optimize(objective, n_trials=200)

# Print the best parameters found
print("Best trial:")
trial = study.best_trial
print("Value: {:.4f}".format(trial.value))
print("Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
```

Let’s break down the `objective` function of our Optuna study line by line:

`def objective(trial):`

## 1. Function Definition:

Defines the objective function, named **`objective`**, which takes a single argument, **`trial`**. This function computes and returns the metric to be optimized during the hyperparameter tuning process.

`n_estimators = trial.suggest_int('n_estimators', 100, 1000)`

## 2. Hyperparameter Suggestion — `n_estimators`:

Suggests an integer value for the hyperparameter **‘n_estimators’** in the range from 100 to 1000 (inclusive). This value sets the number of trees in the Random Forest model.

`max_depth = trial.suggest_int('max_depth', 10, 50)`

## 3. Hyperparameter Suggestion — `max_depth`:

Suggests an integer value for the hyperparameter **‘max_depth’** in the range from 10 to 50 (inclusive). This parameter represents the maximum depth of the individual trees in the Random Forest.

`min_samples_split = trial.suggest_int('min_samples_split', 2, 32)`

## 4. Hyperparameter Suggestion — `min_samples_split`:

Suggests an integer value for the hyperparameter **‘min_samples_split’** in the range from 2 to 32 (inclusive). It determines the minimum number of samples required to split an internal node in a tree.

`min_samples_leaf = trial.suggest_int('min_samples_leaf', 1, 32)`

## 5. Hyperparameter Suggestion — `min_samples_leaf`:

Suggests an integer value for the hyperparameter **‘min_samples_leaf’** in the range from 1 to 32 (inclusive). It sets the minimum number of samples required to be at a leaf node.

`max_features = trial.suggest_categorical('max_features', ['sqrt', 'log2'])`

## 6. Hyperparameter Suggestion — `max_features`:

Suggests a categorical value for the hyperparameter **‘max_features’**, choosing either **‘sqrt’** or **‘log2’**. This parameter controls the number of features to consider when looking for the best split.

`criterion = trial.suggest_categorical('criterion', ["squared_error", "absolute_error", "friedman_mse", "poisson"])`

## 7. Hyperparameter Suggestion — `criterion`:

Suggests a categorical value for the hyperparameter **‘criterion’**, choosing one of **[“squared_error”, “absolute_error”, “friedman_mse”, “poisson”]**. This parameter specifies the function to measure the quality of a split.

```python
model = RandomForestRegressor(
    n_estimators=n_estimators,
    max_depth=max_depth,
    min_samples_split=min_samples_split,
    min_samples_leaf=min_samples_leaf,
    max_features=max_features,
    criterion=criterion,
    random_state=21,
)
```

## 8. Random Forest Model Initialization:

Initializes a **`RandomForestRegressor`** model with the hyperparameters suggested by Optuna, as well as a specified random state for reproducibility.

```python
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
```

## 9. Model Training and Prediction:

Fits the model to the training data **(`X_train` and `y_train`)** and predicts the target variable for the test data **(`X_test`)**.

```python
# metric to optimize
score = mean_squared_error(y_test, y_pred)
```

## 10. Metric Calculation:

Calculates the **mean squared error** between the true test labels **(`y_test`)** and the predicted values **(`y_pred`)**. This metric will be optimized during the hyperparameter tuning process.

`return score`

## 11. Return Score:

Returns the calculated score, which Optuna will attempt to minimize since the direction is set to **`'minimize'`**.
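As an aside, a single train/test split can give a noisy objective value. A hedged variant, sketched below with a synthetic dataset standing in for the article’s `X_train`/`y_train`, averages the mean squared error over cross-validation folds instead:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the article's training data (assumption:
# the real X_train / y_train would be used instead).
X_train, y_train = make_regression(n_samples=200, n_features=10,
                                   noise=0.1, random_state=21)

model = RandomForestRegressor(n_estimators=50, random_state=21)

# 'neg_mean_squared_error' is negated so that higher is better;
# flip the sign back to get a positive MSE to minimize.
scores = -cross_val_score(model, X_train, y_train,
                          cv=3, scoring='neg_mean_squared_error')
score = scores.mean()
```

Returning `score` from the objective would then make Optuna minimize the cross-validated error rather than the error on one particular split.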

`study = optuna.create_study(direction='minimize', sampler=optuna.samplers.RandomSampler(seed=42))`

## 12. Study Initialization:

Initializes an Optuna study for optimization with a random sampler and a specified random seed. The direction is set to minimize, indicating that the objective function should be minimized.

`study.optimize(objective, n_trials=200)`

## 13. Optimization Process:

Optimizes the **`objective`** function with **200 trials**, attempting to find the set of hyperparameters that **minimizes the mean squared error**.

```python
print("Best trial:")
trial = study.best_trial
print("Value: {:.4f}".format(trial.value))
```

## 14. Print Best Trial Information:

Prints information about the best trial found by Optuna, including the best objective value **(mean squared error)** achieved.

```python
print("Params: ")
for key, value in trial.params.items():
    print("    {}: {}".format(key, value))
```

## 15. Print Best Hyperparameters:

Prints the hyperparameters associated with the best trial found by Optuna. This includes the values of **‘n_estimators’, ‘max_depth’, ‘min_samples_split’, ‘min_samples_leaf’, ‘max_features’,** and **‘criterion’**.

Executing the Optuna study with 200 trials is a computationally intensive operation. On a system with 16 GB of RAM and an SSD, the entire process took approximately 4 minutes to complete. The results of this tuning run are as follows:

To assign the parameters of our next Random Forest model precisely and **avoid the risk of inadvertent errors**, we capture the best parameters found by the Optuna study in a variable named **‘best_params’**:

`best_params = study.best_params`
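Since `study.best_params` is a plain dictionary whose keys match the names used in the `suggest_*` calls, it can be unpacked straight into the model constructor. A sketch with a hypothetical `best_params` dict standing in for the study’s actual result:

```python
from sklearn.ensemble import RandomForestRegressor

# Hypothetical best_params dict standing in for study.best_params;
# the keys match the search space defined in objective() above.
best_params = {
    'n_estimators': 300,
    'max_depth': 20,
    'min_samples_split': 4,
    'min_samples_leaf': 2,
    'max_features': 'sqrt',
    'criterion': 'squared_error',
}

# Unpack the tuned values straight into the final model, keeping
# the same random_state as in the objective for reproducibility.
final_model = RandomForestRegressor(**best_params, random_state=21)
```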

However, before proceeding to build the next Random Forest model with these optimized parameters, let’s dive into the visualization of Optuna graphs. These visualizations will provide insights into the optimization process, allowing us to better comprehend the evolution of hyperparameter tuning throughout the study.