Boosting in the PAC Learning Model

Ricardo Mendoza

Abstract


We present a review of the idea of Boosting in the PAC learning model. In addition, we review the first practical Boosting method, Adaptive Boosting (AdaBoost), detailing the theoretical guarantees on the convergence of its error and exploring the important concept of margin.
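As a concrete illustration of the AdaBoost procedure mentioned in the abstract, the following is a minimal sketch using one-dimensional threshold "stumps" as weak learners. All function names and the toy data are illustrative, not taken from the article; the re-weighting and weak-learner weights follow the standard formulation.

```python
# Minimal AdaBoost sketch: weak learners are 1-D threshold stumps.
# Illustrative only; not the article's implementation.
import math

def stump_predict(threshold, sign, x):
    """Predict +sign for x >= threshold, -sign otherwise."""
    return sign if x >= threshold else -sign

def train_adaboost(xs, ys, rounds):
    n = len(xs)
    w = [1.0 / n] * n                 # uniform initial distribution D_1
    ensemble = []                     # list of (alpha, threshold, sign)
    for _ in range(rounds):
        # Choose the stump with minimum weighted training error.
        best = None
        for t in sorted(set(xs)):
            for sign in (+1, -1):
                err = sum(wi for wi, x, y in zip(w, xs, ys)
                          if stump_predict(t, sign, x) != y)
                if best is None or err < best[0]:
                    best = (err, t, sign)
        err, t, sign = best
        err = max(err, 1e-10)         # guard against division by zero
        alpha = 0.5 * math.log((1 - err) / err)   # weak learner's weight
        ensemble.append((alpha, t, sign))
        # Re-weight: misclassified points gain weight, then normalize.
        w = [wi * math.exp(-alpha * y * stump_predict(t, sign, x))
             for wi, x, y in zip(w, xs, ys)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Weighted majority vote of the chosen stumps."""
    score = sum(a * stump_predict(t, s, x) for a, t, s in ensemble)
    return 1 if score >= 0 else -1
```

On this kind of toy data a single stump may already separate the classes; on harder data the re-weighting step forces later stumps to concentrate on the points the current ensemble misclassifies, which is the mechanism behind both the training-error bound and the margin analysis.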


Keywords


machine learning; PAC model; VC dimension


References


Schapire, R.E.: The strength of weak learnability. Machine Learning. 5, 197–227 (1990).

Schapire, R.E., Freund, Y., Bartlett, P., Lee, W.S.: Boosting the margin: A new explanation for the effectiveness of voting methods. The Annals of Statistics. 26, 1651–1686 (1998).

Maclin, R., Opitz, D.: Popular ensemble methods: An empirical study (2011). Retrieved from: http://arxiv.org/pdf/1106.0257.

Rudin, C., Daubechies, I., Schapire, R.E.: On the dynamics of boosting. NIPS Proceedings. (2003).

Schapire, R.E.: The boosting approach to machine learning: An overview. Lecture Notes in Statistics. Springer Verlag. 149–172 (2003).

Freund, Y., Schapire, R., Abe, N.: A short introduction to boosting. Journal of Japanese Society For Artificial Intelligence. 14, 1612 (1999).

Sebastiani, F.: Machine learning in automated text categorization. ACM computing surveys (CSUR). 34, 1–47 (2002).

Long, P.M., Servedio, R.A., Anderson, R.N., Boulanger, A.: Systems and methods for martingale boosting in machine learning. Google Patents (2011).

Godec, M., Grabner, H., Leistner, C., Bischof, H.: Speeding Up Semi-Supervised On-line Boosting for Tracking. (2010).

Chen, K., Wang, S.: Semi-supervised learning via regularized boosting working on multiple semi-supervised assumptions. IEEE Transactions on Pattern Analysis and Machine Intelligence. 33, 129–143 (2011).

Schapire, R.E., Freund, Y.: Boosting: Foundations and Algorithms. MIT Press (2012).

Valiant, L.G.: A theory of the learnable. Communications of the ACM. 27, 1134–1142 (1984).

Pitt, L., Valiant, L.G.: Computational limitations on learning from examples. Journal of the ACM (JACM). 35, 965–984 (1988).

Kearns, M., Mansour, Y., Ng, A.Y., Ron, D.: An experimental and theoretical comparison of model selection methods. Machine Learning. 27, 7–50 (1997).

Haussler, D.: Probably Approximately Correct Learning. Proceedings of the Eighth National Conference on Artificial Intelligence (1990).

Vapnik, V., Chervonenkis, A.: Uniform convergence of frequencies of occurrence of events to their probabilities. Dokl. Akad. Nauk SSSR. pp. 915–918 (1968).

Blumer, A., Ehrenfeucht, A., Haussler, D., Warmuth, M.K.: Learnability and the Vapnik-Chervonenkis dimension. Journal of the ACM (JACM). 36, 929–965 (1989).

Pestov, V.: PAC learnability versus VC dimension: a footnote to a basic result of statistical learning. The 2011 International Joint Conference on Neural Networks (IJCNN). pp. 1141–1145 (2011).

Freund, Y., Schapire, R.: A decision-theoretic generalization of on-line learning and an application to boosting. Computational learning theory. pp. 23–37 (1995).

Rudin, C., Daubechies, I., Schapire, R.E.: The dynamics of AdaBoost: Cyclic behavior and convergence of margins. The Journal of Machine Learning Research. 5, 1557–1595 (2004).

Merler, S., Caprile, B., Furlanello, C.: Parallelizing AdaBoost by weights dynamics. Computational Statistics & Data Analysis. 51, 2487–2498 (2007).

Schapire, R.E.: The convergence rate of AdaBoost. The 23rd Conference on Learning Theory, open problem (2010).

Blanchard, G., Lugosi, G., Vayatis, N., others: On the rate of convergence of regularized boosting classifiers. The Journal of Machine Learning Research. 4, 861–894 (2003).

Reyzin, L., Schapire, R.E.: How boosting the margin can also boost classifier complexity. Proceedings of the 23rd international conference on Machine learning. pp. 753–760 (2006).

Rosset, S., Zhu, J., Hastie, T.: Boosting as a regularized path to a maximum margin classifier. The Journal of Machine Learning Research. 5, 941–973 (2004).

Rudin, C., Cortes, C., Mohri, M., Schapire, R.: Margin-based ranking meets boosting in the middle. Learning Theory. 63–78 (2005).

Koltchinskii, V., Panchenko, D.: Empirical Margin Distributions and Bounding the Generalization Error of Combined Classifiers. Annals of Statistics. 1–50 (2002).

Barak, O., Rigotti, M.: A simple derivation of a bound on the perceptron margin using singular value decomposition. Neural computation. 23, 1935–1943 (2011).

Shen, C., Li, H.: Boosting through optimization of margin distributions. IEEE Transactions on Neural Networks. 21, 659–666 (2010).

Agapitos, A., O’Neill, M., Brabazon, A., Theodoridis, T.: Maximum margin decision surfaces for increased generalisation in evolutionary decision tree learning. Genetic Programming. 61–72 (2011).

Freund, Y., Mason, L.: The alternating decision tree learning algorithm. Proceedings of the Sixteenth International Conference on Machine Learning. pp. 124–133 (1999).

Burges, C.J.C.: A tutorial on support vector machines for pattern recognition. Data mining and knowledge discovery. 2, 121–167 (1998).

Smola, A.J., Schölkopf, B.: Learning with kernels. MIT Press (1998).




DOI: http://dx.doi.org/10.15765/e.v3i3.413
