Random Forests

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# create sample data
X = np.array([[1, 2], [2, 3], [3, 3], [2, 1], [3, 2]])
y = np.array([1, 1, 1, -1, -1])

# create Random Forest object with 100 decision trees
rfc = RandomForestClassifier(n_estimators=100)

# fit the Random Forest model to the data
rfc.fit(X, y)

# predict the output for a new input
X_new = np.array([[4, 4]])
y_new = rfc.predict(X_new)

print('Prediction for X = [4, 4]: ', y_new)

This code creates a sample dataset X and y, where X represents the input features and y represents the target variable for binary classification. A Random Forest object rfc is then created using scikit-learn's RandomForestClassifier class with the n_estimators=100 argument, indicating that the forest should consist of 100 decision trees. The fit() method is called to fit the model to the data.

Finally, we use the predict() method to predict the output for the new input X_new and print the result. Since this is a binary classification problem, the predicted output will be either 1 or -1. Note that a Random Forest can also be used for regression problems via the RandomForestRegressor class, and its hyperparameters can be tuned with techniques such as Grid Search or Random Search; both are sketched below.
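
For example, a minimal regression sketch might look like the following. The continuous target values in y_reg are made up purely for illustration.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# sample data with a continuous target (values are illustrative only)
X = np.array([[1, 2], [2, 3], [3, 3], [2, 1], [3, 2]])
y_reg = np.array([1.5, 2.0, 2.5, 0.5, 1.0])

# fit a Random Forest regressor with 100 trees
rfr = RandomForestRegressor(n_estimators=100)
rfr.fit(X, y_reg)

# predict a continuous value for a new input
print('Regression prediction for X = [4, 4]: ', rfr.predict(np.array([[4, 4]])))

For hyperparameter tuning, one option is scikit-learn's GridSearchCV, which exhaustively evaluates a grid of candidate settings with cross-validation. The parameter grid and the cv=2 setting below are arbitrary choices made to keep the sketch runnable on the tiny dataset above.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X = np.array([[1, 2], [2, 3], [3, 3], [2, 1], [3, 2]])
y = np.array([1, 1, 1, -1, -1])

# candidate hyperparameter values (chosen arbitrarily for illustration)
param_grid = {
    'n_estimators': [50, 100, 200],
    'max_depth': [None, 2, 4],
}

# exhaustive search over the grid with 2-fold cross-validation
# (cv=2 because this toy dataset has only a few samples per class)
grid = GridSearchCV(RandomForestClassifier(), param_grid, cv=2)
grid.fit(X, y)

print('Best parameters: ', grid.best_params_)
print('Best cross-validation accuracy: ', grid.best_score_)

RandomizedSearchCV can be used in the same way when the grid is too large to search exhaustively; it samples a fixed number of parameter combinations instead of trying them all.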