Wrapper approaches to feature selection are widely used to select a small subset of relevant features from a dataset. However, a wrapper relies on a single classifier to guide the search, and because each classifier has its own biases, different classifiers can select very different features. To overcome the biases of individual classifiers, we propose Consensus Feature Selection (CFS), which combines several classifiers for feature selection. The choice of classifiers to include in a combination is therefore important, so we investigate how the number and nature of the classifiers influence both the number of features selected and the classification accuracies those features yield. Regarding the number of classifiers, combinations of few classifiers selected more features, whereas combinations of many classifiers selected fewer; three-classifier combinations selected the features that led to the highest accuracies. Regarding the nature of the classifiers, decision trees identified the largest number of features and Bayesian classifiers the smallest, yet the features selected by Bayesian classifiers led to higher accuracies than those selected by the other classifiers.
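The consensus step described above, combining the feature subsets chosen by several per-classifier wrapper runs, can be sketched as follows. This is a minimal illustration only: it assumes each classifier's wrapper search has already produced a set of selected feature indices, and the majority-vote threshold (`min_votes`) is our assumption, not a detail taken from the text.

```python
from collections import Counter

def consensus_features(selections, min_votes=2):
    """Combine per-classifier feature subsets by majority vote.

    selections: list of sets, one per classifier, each holding the
        feature indices that classifier's wrapper search selected.
    min_votes: a feature is kept only if at least this many
        classifiers selected it (threshold is an assumption).
    """
    votes = Counter(f for sel in selections for f in sel)
    return {f for f, n in votes.items() if n >= min_votes}

# Hypothetical wrapper selections from three classifiers.
tree_sel = {0, 2, 3, 5}   # decision tree: tends to select many features
knn_sel = {2, 3, 7}
bayes_sel = {2, 5}        # Bayesian classifier: tends to select few

print(sorted(consensus_features([tree_sel, knn_sel, bayes_sel])))
# → [2, 3, 5]  (features chosen by at least two of the three)
```

In practice each input set would come from a wrapper search (e.g. sequential forward selection) run with a different base classifier; only the combination rule is shown here.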