Tadej Janež (2008) Active learning with planning of experiments. EngD thesis.
In machine learning, active learning is increasingly widely used, especially for problems where we have an enormous number of unlabeled examples whose labeling is expensive or time-consuming. In such cases, we can use active learning methods that try to build a good prediction model from as few labeled examples as possible. A new contribution of this thesis is the use of active learning in time- and space-bounded domains. An example of such a domain is an autonomous learning robot that performs experiments to build its own model of the world; to do so, it has to consider physical restrictions when choosing new learning examples. After an overview of standard active learning methods, I evaluate them on various experimental domains. I then explain qualitative modeling in a robotic domain, where the robot learns its model by experimentation, and describe how active learning methods can be adapted for planning of experiments. Their aim is to give the robot the ability to plan its actions so that it builds better models of the robotic world more quickly. The proposed methods were evaluated experimentally in a simple robotic world containing only a robot and a ball. It turned out that planning of experiments can indeed help to build better models of the robotic world more quickly; however, a standard active learning method that does not use planning also performed very well on the selected set of test examples. Finally, I describe possibilities for future development of the proposed methods.
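To illustrate the core idea the abstract describes (not code from the thesis), here is a minimal sketch of pool-based active learning with uncertainty sampling on an assumed toy problem: the hidden concept is a 1-D threshold, and querying the unlabeled point the current model is least sure about reduces the number of labels needed, much like a binary search. All names and the threshold value are illustrative assumptions.

```python
# Illustrative sketch of pool-based active learning with uncertainty
# sampling (toy example, not the thesis's method). The "model" is a 1-D
# threshold classifier; querying the most uncertain point amounts to a
# binary search over the unlabeled pool.
import random

def true_label(x, threshold=0.62):
    # Hidden concept the learner tries to recover; labeling plays the
    # role of the expensive oracle (or the robot's physical experiment).
    return x > threshold

def fit_threshold(labeled):
    # Estimate the boundary as the midpoint between the highest known
    # negative and the lowest known positive example.
    neg = [x for x, y in labeled if not y]
    pos = [x for x, y in labeled if y]
    return ((max(neg) if neg else 0.0) + (min(pos) if pos else 1.0)) / 2

def active_learn(pool, budget):
    labeled = []
    for _ in range(budget):
        t = fit_threshold(labeled)
        # Uncertainty sampling: query the unlabeled point closest to the
        # current decision boundary, i.e. the one the model is least sure about.
        x = min(pool, key=lambda p: abs(p - t))
        pool.remove(x)
        labeled.append((x, true_label(x)))
    return fit_threshold(labeled)

random.seed(0)
pool = [random.random() for _ in range(1000)]
estimate = active_learn(pool, budget=10)
# estimate lands close to the hidden threshold 0.62 after only 10 queries,
# far fewer labels than random sampling would typically need.
print(estimate)
```

With a budget of 10 queries the interval of uncertainty roughly halves at each step, so the learner recovers the hidden threshold with only a handful of labels; this is the "good model from as few labeled examples as possible" behavior that the thesis then extends with planning under physical restrictions.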