
Book, English, 428 pages, format (W × H): 155 mm × 235 mm, weight: 1380 g

Series: Lecture Notes in Artificial Intelligence

Jantke / Yokomori / Kobayashi

Algorithmic Learning Theory

4th International Workshop, ALT '93, Tokyo, Japan, November 8-10, 1993. Proceedings
1993
ISBN: 978-3-540-57370-8
Publisher: Springer Berlin Heidelberg


This volume contains all the papers presented at the Fourth International Workshop on Algorithmic Learning Theory (ALT '93), held in Tokyo in November 1993. In addition to three invited papers, 29 papers were selected from 47 submitted extended abstracts. The ALT workshops, held annually since 1990 and sponsored by the Japanese Society for Artificial Intelligence, focus on theories of machine learning and on the application of such theories to real-world learning problems. The volume is organized into parts on inductive logic and inference, inductive inference, approximate learning, query learning, explanation-based learning, and new learning paradigms.


Target audience


Research

Further information & material


- Identifying and using patterns in sequential data
- Learning theory toward Genome Informatics
- Optimal layered learning: A PAC approach to incremental sampling
- Reformulation of explanation by linear logic toward logic for explanation
- Towards efficient inductive synthesis of expressions from input/output examples
- A typed λ-calculus for proving-by-example and bottom-up generalization procedure
- Case-based representation and learning of pattern languages
- Inductive resolution
- Generalized unification as background knowledge in learning logic programs
- Inductive inference machines that can refute hypothesis spaces
- On the duality between mechanistic learners and what it is they learn
- On aggregating teams of learning machines
- Learning with growing quality
- Use of reduction arguments in determining Popperian FIN-type learning capabilities
- Properties of language classes with finite elasticity
- Uniform characterizations of various kinds of language learning
- How to invent characterizable inference methods for regular languages
- Neural Discriminant Analysis
- A new algorithm for automatic configuration of Hidden Markov Models
- On the VC-dimension of depth four threshold circuits and the complexity of Boolean-valued functions
- On the sample complexity of consistent learning with one-sided error
- Complexity of computing Vapnik-Chervonenkis dimension
- ε-approximations of k-label spaces
- Exact learning of linear combinations of monotone terms from function value queries
- Thue systems and DNA — A learning algorithm for a subclass
- The VC-dimensions of finite automata with n states
- Unifying learning methods by colored digraphs
- A perceptual criterion for visually controlling learning
- Learning strategies using decision lists
- A decomposition based induction model for discovering concept clusters from databases
- Algebraic structure of some learning systems
- Induction of probabilistic rules based on rough set theory


