Classifier ensembles have been one of the main topics of interest in the neural networks, machine learning and pattern recognition communities during the past fifteen years [21,28,16,17,26,36,27,23,11]. They are currently among the state-of-the-art techniques for the design of classification systems and, in many applications, an effective alternative to the traditional approach based on designing a single, monolithic classifier. Broadly speaking, two main choices have to be made in the design of a classifier ensemble: how to generate the individual classifiers and how to combine them. Two main approaches have emerged to deal with these design steps: coverage optimisation, focused on generating an ensemble of classifiers as complementary as possible, which are then fused with simple combining rules, and decision optimisation, focused on finding the most effective combining rule to best exploit a given classifier ensemble. One of the most studied and widely used combining rules, especially in the former approach, is the linear combination of classifier outputs. Linear combiners are often used for neural network ensembles, given that neural networks provide continuous outputs. The simplicity of linear combiners and their continuous nature have favoured the development of analytical models of the performance of ensembles of predictors, both for regression problems and for the relatively more complex case of classification problems.
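As a minimal sketch of the linear combining rule mentioned above (an illustration, not the chapter's own method: the function name, shapes and uniform-weight default are assumptions), the continuous outputs of the individual classifiers are simply averaged, optionally with non-uniform weights:

```python
import numpy as np

def linear_combiner(outputs, weights=None):
    """Combine classifier outputs by a (weighted) linear rule.

    outputs: array of shape (n_classifiers, n_samples, n_classes)
             holding continuous scores, e.g. neural-network posteriors.
    weights: optional array of shape (n_classifiers,); uniform if None.
    Returns combined scores of shape (n_samples, n_classes).
    """
    outputs = np.asarray(outputs, dtype=float)
    n = outputs.shape[0]
    if weights is None:
        weights = np.full(n, 1.0 / n)     # simple averaging rule
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()     # normalise weights to sum to 1
    # Weighted sum over the classifier axis
    return np.tensordot(weights, outputs, axes=1)

# Hypothetical example: three classifiers, two samples, two classes
scores = linear_combiner([
    [[0.9, 0.1], [0.4, 0.6]],
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.7, 0.3], [0.5, 0.5]],
])
predictions = scores.argmax(axis=1)  # final class decision per sample
```

With uniform weights this reduces to the simple average rule; decision-optimisation approaches would instead tune `weights` on validation data.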
|Title:||Bayesian Linear Combination of Neural Networks|
|Publication date:||2009|
|Type:||2.1 Contribution in a volume (Chapter or Essay)|