Deakin University

Knowledge extraction from a mixed transfer function artificial neural network

Journal contribution
Posted on 2006-01-01, authored by I Khan, Yakov Frayman, Saeid Nahavandi
One of the main problems with Artificial Neural Networks (ANNs) is that their results are not intuitively clear. For example, the commonly used hidden neurons with a sigmoid activation function can approximate any continuous function, including linear ones, but the coefficients (weights) of this approximation are rather meaningless. To address this problem, the current paper presents a novel kind of neural network that uses transfer functions of various complexities, in contrast to the mono-transfer functions used in sigmoid and hyperbolic tangent networks. The presence of transfer functions of various complexities in a Mixed Transfer Function Artificial Neural Network (MTFANN) allows easy conversion of the full model into a user-friendly equation format (similar to that of linear regression) without any pruning or simplification of the model. At the same time, MTFANN maintains a generalization ability similar to that of mono-transfer function networks in a global optimization context. The performance and knowledge extraction of MTFANN were evaluated on a realistic simulation of the Puma 560 robot arm and compared to sigmoid, hyperbolic tangent, linear and sinusoidal networks.
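The idea in the abstract can be sketched in a few lines: a single hidden layer whose neurons use activations of differing complexity (linear, sinusoidal, hyperbolic tangent), rendered as a regression-style equation with no pruning. This is a minimal illustrative sketch of the concept only — the weights, function names, and layout below are assumptions, not the authors' actual MTFANN architecture or training procedure.

```python
import math

# One (name, function) pair per activation "complexity"; hidden neuron i
# cycles through this list. The linear neuron's term reads directly like
# a linear-regression coefficient in the extracted equation.
ACTIVATIONS = [
    ("", lambda z: z),        # linear
    ("sin", math.sin),        # sinusoidal
    ("tanh", math.tanh),      # hyperbolic tangent
]

def predict(x, hidden_w, hidden_b, out_w, out_b):
    """Forward pass of a one-hidden-layer network with mixed activations."""
    y = out_b
    for i, (w, b) in enumerate(zip(hidden_w, hidden_b)):
        _, fn = ACTIVATIONS[i % len(ACTIVATIONS)]
        z = sum(wj * xj for wj, xj in zip(w, x)) + b
        y += out_w[i] * fn(z)
    return y

def to_equation(var_names, hidden_w, hidden_b, out_w, out_b):
    """Render the full (unpruned) model as a human-readable equation."""
    terms = []
    for i, (w, b) in enumerate(zip(hidden_w, hidden_b)):
        name, _ = ACTIVATIONS[i % len(ACTIVATIONS)]
        inner = " + ".join(f"{wj:g}*{v}" for wj, v in zip(w, var_names))
        inner += f" + {b:g}"
        body = f"({inner})" if not name else f"{name}({inner})"
        terms.append(f"{out_w[i]:g}*{body}")
    return "y = " + " + ".join(terms) + f" + {out_b:g}"

# Tiny hand-set example: two inputs, three hidden neurons.
W = [[1.0, 0.5], [0.3, -0.2], [0.8, 0.1]]
B = [0.0, 0.1, -0.5]
V = [2.0, 1.0, 0.5]
C = 0.25
print(to_equation(["x1", "x2"], W, B, V, C))
print(predict([1.0, 2.0], W, B, V, C))
```

Because every hidden neuron appears as one named term, the printed equation is the whole model — which is the knowledge-extraction property the abstract claims, in contrast to a pure-sigmoid network where every term is an opaque sigmoid.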

History

Journal

Journal of Advanced Computational Intelligence and Intelligent Informatics

Volume

10

Issue

3

Pagination

295-301

Publisher

Fuji Technology Press Ltd

Location

Tokyo, Japan

ISSN

1343-0130

Language

eng

Publication classification

C1 Refereed article in a scholarly journal