Imperial College Press, 2007, 322 pp.

The area of neural computing discussed in this book combines techniques from classical optimization, statistics, and information theory. Neural networks were once widely called artificial neural networks, a name that reflected the emerging technology's ties to artificial intelligence. The topic captivated computer scientists, engineers, and mathematicians alike: the appeal of an adaptive system, or universal function approximator, proved compelling to researchers and engineers, and the backpropagation training algorithm was for a time among the most popular keywords at engineering conferences. The field has an interesting history dating back to the late fifties, which saw the advent of the Mark I Perceptron. The truly intriguing part of that history began in the sixties, when Minsky and Papert's book Perceptrons discredited the early neural research. The late eighties are well remembered by all neural researchers, because that is when neural network research was reinstated and repositioned. From the nineties into the new millennium the topic flourished, with applications stretching from rigorous mathematical proof to the physical sciences and even business. Now that the theoretical background is better understood, researchers tend to use the term neural networks rather than artificial neural networks. Volumes of research literature have been published on new developments in neural theory and applications, and many treatments approach the topic either very mathematically or very practically. To most users, however, including students and engineers, the main issues remain how to employ an appropriate neural network learning algorithm and how to select a model for a given physical problem.
This book, written from a more applied perspective, provides thorough discussions of neural network learning algorithms and their related issues. We strive for balance in covering the major topics of neurocomputing, from learning theory, learning algorithms, and network architecture to applications. The book starts from the fundamental building block, the neuron, and the earliest neural network model, the McCulloch and Pitts model. We first treat the learning concept through the well-known regression problem, which shows how the idea of data fitting can explain the fundamental concept of neural learning. We use an error convex surface to illustrate the optimization concept behind a learning algorithm; this is important because it shows readers that a neural learning algorithm is nothing more than a high-dimensional optimization problem. One of the beauties of the neural network is that, as a soft computing approach, the choice of model structure and initial settings may have no noticeable effect on the final solution. But the neural learning process also suffers from being slow and from getting stuck in local minima, especially when it must handle a rather complex problem. These are the two main issues addressed in the later chapters of this book. We study the neural learning problem from a new perspective and offer several modified algorithms to enhance learning speed and convergence ability. We also show that the initialization of a network has a significant effect on learning performance, and we then discuss and elaborate different initialization methods.
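The view of neural learning as descent over an error surface can be illustrated with plain gradient descent on a least-squares regression fit. The following is a minimal hypothetical sketch (in Python rather than the MATLAB used by the book's software package) for a single linear "neuron" ŷ = w·x + b; the data, learning rate, and iteration count are illustrative assumptions, not taken from the book:

```python
import numpy as np

# Toy data: samples of y = 2x + 1 with a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 50)
y = 2.0 * x + 1.0 + 0.05 * rng.standard_normal(50)

w, b = 0.0, 0.0   # initial settings; their choice influences convergence speed
lr = 0.5          # learning rate

for _ in range(200):
    err = (w * x + b) - y
    # Gradients of the mean-squared error surface with respect to (w, b).
    grad_w = 2.0 * np.mean(err * x)
    grad_b = 2.0 * np.mean(err)
    w -= lr * grad_w
    b -= lr * grad_b

print(w, b)  # converges toward the true parameters (2.0, 1.0)
```

Because the mean-squared error of a linear model is a convex bowl in (w, b), descent reliably finds the minimum here; the slowness and local-minima problems discussed in the later chapters arise when the surface belongs to a multilayer network and is no longer convex.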
Later chapters of the book deal with basis function networks, self-organizing maps, and feature selection, topics that are interesting and useful to most engineering and science researchers. The self-organizing map (SOM) is the most widely used unsupervised neural network; it is useful for clustering, dimensionality reduction, and classification, and it differs markedly from the feedforward neural network in both architecture and learning algorithm. This book provides thorough discussions and newly developed extended algorithms for readers to use. Classification and feature selection are discussed in Chapter 6. We include this topic because bioinformatics has recently become a very important research area: gene selection by computational methods and computational cancer classification have become twenty-first-century research problems. The book provides a detailed discussion of feature selection and of how different methods can be applied to gene selection and cancer classification. We hope this book will provide useful and inspiring information to readers. A number of software algorithms written in MATLAB are available for readers to use. Although the authors have gone through the book several times checking for typos and errors, we would appreciate readers notifying us of any they find.

Introduction
Learning Performance and Enhancement
Generalization and Performance Enhancement
Basis Function Networks for Classification
Self-organizing Maps
Classification and Feature Selection
Engineering Applications