
Ensemble Classifiers Based on Random Forest to Improve Accuracy for Prediction

Author(s):

Rupali Lohar, Lord Krishna College of Technology, Indore; Vijay Kumar Verma, Lord Krishna College of Technology, Indore

Keywords:

Random Forest, Ensemble Classifiers

Abstract

Random Forests are an ensemble learning method (sometimes viewed as a form of nearest neighbor predictor) for classification and regression that constructs a number of decision trees at training time and outputs the class that is the mode of the classes predicted by the individual trees. A Random Forest is a combination of tree predictors, where each tree depends on the values of a random vector sampled independently and with the same distribution for all trees in the forest. The basic principle is that a group of “weak learners” can come together to form a “strong learner”. Random Forests are a useful tool for making predictions because, by the law of large numbers, they do not overfit; introducing the right kind of randomness makes them accurate classifiers and regressors. Single decision trees often suffer from high variance or high bias. Random Forests attempt to mitigate both problems by averaging, finding a natural balance between the two extremes. Because Random Forests have few parameters to tune and can be used with default parameter settings, they are a simple tool to apply when no model is available, or to produce a reasonable model quickly and efficiently. In this paper we propose an efficient ensemble classifier based on random forest using boosting and bagging.
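The sketch below is a minimal, illustrative example (not the authors' implementation) of the idea described in the abstract: combining a random forest with bagged and boosted decision trees through majority voting. The dataset, estimator counts, and voting scheme are assumptions made only for demonstration, using scikit-learn's standard ensemble classes.

```python
# Illustrative sketch: an ensemble built around a random forest,
# combined with bagging and boosting via majority (hard) voting.
# Dataset and hyperparameters are arbitrary assumptions for the example.
from sklearn.datasets import load_iris
from sklearn.ensemble import (RandomForestClassifier, BaggingClassifier,
                              AdaBoostClassifier, VotingClassifier)
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Random forest: many decision trees, each grown on a bootstrap sample with
# random feature selection; the predicted class is the mode of the trees' votes.
rf = RandomForestClassifier(n_estimators=100, random_state=0)

# Bagging: decision trees (the default base estimator) fitted on bootstrap
# resamples of the training data, then averaged by voting.
bagging = BaggingClassifier(n_estimators=50, random_state=0)

# Boosting: shallow trees fitted sequentially, each focusing on the
# examples the previous trees misclassified.
boosting = AdaBoostClassifier(n_estimators=50, random_state=0)

# Ensemble of ensembles: combine the three learners by majority voting.
ensemble = VotingClassifier(
    estimators=[("rf", rf), ("bag", bagging), ("boost", boosting)],
    voting="hard")

print("Ensemble CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```

Majority voting is only one way to merge the base learners; weighted or soft (probability-averaged) voting could equally be used depending on the design chosen in the paper.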

Other Details

Paper ID: IJSRDV8I80031
Published in: Volume: 8, Issue: 8
Publication Date: 01/11/2020
Page(s): 61-65
