Jaka Demšar (2012) Explanation of predictive models and individual predictions in incremental learning. EngD thesis.
The use of machine learning for decision support is ever more prevalent. We can increase trust in computer-generated decisions by explaining them with feature value contributions, which provide additional insight into the concepts underlying the problem domain. While a generalised explanation method for static data sets has already been developed, we still face the problem of explaining examples and decision models built on data streams, which demand limited resource use, incremental learning models, and concept drift detection. We present a novel generalised method for explaining incremental learning models and individual instances. We derive our solution from existing incremental machine learning techniques and from the static, game-theory-based explanation method. We also develop a novel concept drift visualization method. The solution is tested on several datasets and compared to the static method. Results are visualised and analysed with a similarity measure. We conclude that the proposed method successfully explains incremental learning models and individual instances while outperforming the static method. Visualization proves to be a valuable technique for presenting the concepts behind data streams.
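The game-theory-based explanation the abstract refers to assigns each feature a contribution to an individual prediction, commonly approximated by Monte Carlo sampling over feature subsets (Shapley values). The following is a minimal sketch of that sampling scheme, not the thesis's implementation; the function names, the toy model, and the background dataset are all illustrative assumptions.

```python
import random

def shapley_contribution(predict, instance, data, feature, n_samples=2000, seed=0):
    """Monte Carlo estimate of one feature's contribution to predict(instance).

    predict  -- the model's prediction function (list of feature values -> number)
    instance -- the example being explained (list of feature values)
    data     -- background dataset used to supply values for "absent" features
    feature  -- index of the feature whose contribution is estimated
    """
    rng = random.Random(seed)
    n_features = len(instance)
    total = 0.0
    for _ in range(n_samples):
        # A random permutation decides which features "precede" the target feature.
        perm = list(range(n_features))
        rng.shuffle(perm)
        pos = perm.index(feature)
        background = rng.choice(data)
        # x_with: instance values for the target feature and its predecessors,
        # background values elsewhere; x_without: same, but the target feature
        # also takes its background value.
        x_with = list(background)
        for j in perm[:pos + 1]:
            x_with[j] = instance[j]
        x_without = list(x_with)
        x_without[feature] = background[feature]
        total += predict(x_with) - predict(x_without)
    return total / n_samples

# Illustrative usage with a toy linear model: the estimated contribution of
# feature 0 reflects how much its value shifts the prediction.
model = lambda x: 3 * x[0] + x[1]
phi = shapley_contribution(model, instance=[1, 0],
                           data=[[0, 0], [0, 1], [0, -1]], feature=0)
```

For a linear model the estimate converges to the feature's weight times its deviation from the background average, which is what makes the sampling approximation easy to sanity-check.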