Classifier fusion combines multiple classification decisions in order to improve classification performance. While various classifier fusion algorithms have been proposed in the literature, average fusion is almost always selected as the baseline for comparison. Little work has been done on exploring the potential of average fusion and proposing a stronger baseline. In this paper we empirically investigate the behavior of soft labels and classifiers in average fusion. We find that, with proper sampling of soft labels and classifiers, average fusion performance can be evidently improved. This result establishes sampling-based average fusion as a better baseline; that is, a newly proposed classifier fusion algorithm should at least outperform this baseline in order to demonstrate its effectiveness. © 2014 Jian Hou et al.
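The idea in the abstract can be sketched as follows: each classifier outputs a soft-label vector (class probabilities), average fusion takes the element-wise mean of those vectors, and the sampling variant fuses only a randomly drawn subset of classifiers. This is a minimal illustrative sketch, not the authors' implementation; the function names and the uniform-sampling choice are assumptions.

```python
import random

def average_fusion(soft_labels):
    """Element-wise mean of a list of soft-label vectors, one per classifier."""
    n = len(soft_labels)
    k = len(soft_labels[0])
    return [sum(v[i] for v in soft_labels) / n for i in range(k)]

def sampled_average_fusion(soft_labels, m, seed=0):
    """Fuse a random subset of m classifiers' soft labels.

    Illustrative only: the paper's actual sampling scheme for soft labels
    and classifiers may differ from this simple uniform draw.
    """
    rng = random.Random(seed)
    subset = rng.sample(soft_labels, m)
    return average_fusion(subset)
```

The final predicted class would then be the arg-max of the fused vector; a fused vector remains a valid probability distribution whenever each input vector sums to one.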
|Title:||Sampling based average classifier fusion|
|Internal authors:||KARIMI, HAMID REZA|
|Publication date:||2014|
|Journal:||MATHEMATICAL PROBLEMS IN ENGINEERING|
|Appears in categories:||01.1 Journal article|