Binary Neural Networks


Book Description

Deep learning has achieved impressive results in image classification, computer vision, and natural language processing. To achieve better performance, deeper and wider networks have been designed, which increases the demand for computational resources. The number of floating-point operations (FLOPs) has grown dramatically with these larger networks, and this has become an obstacle to deploying convolutional neural networks (CNNs) on mobile and embedded devices. In this context, Binary Neural Networks: Algorithms, Architectures, and Applications focuses on CNN compression and acceleration, which are important topics for the research community. We describe numerous methods, including parameter quantization, network pruning, low-rank decomposition, and knowledge distillation. More recently, to reduce the burden of handcrafted architecture design, neural architecture search (NAS) has been used to automatically build neural networks by searching over a vast architecture space. The book also introduces NAS and binary NAS and demonstrates their superiority and state-of-the-art performance in various applications, such as image classification and object detection. We also describe extensive applications of compressed deep models to image classification, speech recognition, object detection, and tracking. These topics can help researchers better understand the usefulness and potential of network compression in practical applications. Interested readers should have basic knowledge of machine learning and deep learning to follow the methods described in this book. Key features: reviews recent advances in CNN compression and acceleration; elaborates on recent advances in binary neural network (BNN) technologies; and introduces applications of BNNs in image classification, speech recognition, object detection, and more.
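As a rough illustration of the parameter-quantization idea mentioned above, the sketch below binarizes a weight tensor to {-1, +1} with a single scaling factor, in the spirit of sign-based weight binarization commonly used in BNNs; the function name and the NumPy setting are illustrative assumptions, not code from the book.

```python
import numpy as np

def binarize_weights(w):
    """Approximate a real-valued weight tensor by alpha * sign(w),
    a common sign-based quantization used in binary neural networks."""
    alpha = np.mean(np.abs(w))            # per-tensor scaling factor
    b = np.where(w >= 0, 1.0, -1.0)       # sign(w), mapping zeros to +1
    return b, alpha

# Example: binarize a small weight matrix and inspect the approximation error
w = np.array([[0.4, -0.2], [-0.7, 0.1]])
b, alpha = binarize_weights(w)
w_hat = alpha * b                         # binary weights rescaled for inference
print(np.abs(w - w_hat).mean())
```

Storing only the signs plus one scale per tensor is what makes such networks attractive for memory- and FLOP-constrained devices.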




Neural Information Processing: Research and Development


Book Description

The field of neural information processing has two main objectives: investigating the functioning of biological neural networks and using artificial neural networks to solve real-world problems. Even before the resurgence of the field of artificial neural networks in the mid-1980s, researchers had attempted to explore the engineering of human brain function. Since then, we have seen the emergence of a large number of neural network models and their successful application to real-world problems. This volume presents a collection of recent research and developments in the field of neural information processing. The book is organized into three parts: (1) architectures, (2) learning algorithms, and (3) applications. Artificial neural networks consist of simple processing elements called neurons, which are connected by weights. The number of neurons and how they are connected to each other defines the architecture of a particular neural network. Part 1 of the book has nine chapters, presenting recent neural network architectures derived either to mimic aspects of human brain function or to solve real-world problems. Muresan provides a simple neural network model, based on spiking neurons that make use of shunting inhibition, which is capable of resisting small-scale changes of the stimulus. Hoshino and Zheng simulate a neural network of the auditory cortex to investigate the neural basis for encoding and perception of vowel sounds.
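To make the notion of an architecture concrete, here is a minimal sketch, assuming a plain feedforward network in NumPy (layer sizes and names are illustrative and not taken from the book): the architecture is determined entirely by how many neurons sit in each layer and which weights connect them.

```python
import numpy as np

# Minimal sketch: an architecture is fixed by the list of layer sizes;
# the weight matrices connect neurons in adjacent layers.
def init_network(layer_sizes, seed=0):
    rng = np.random.default_rng(seed)
    return [rng.normal(0.0, 0.1, size=(n_in, n_out))
            for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]

def forward(weights, x):
    for w in weights:
        x = np.tanh(x @ w)   # each neuron passes a weighted sum of its inputs through a nonlinearity
    return x

net = init_network([4, 8, 2])       # 4 inputs, one hidden layer of 8 neurons, 2 outputs
y = forward(net, np.ones((1, 4)))
print(y.shape)                      # (1, 2)
```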




Neural Networks


Book Description




Multi-Valued and Universal Binary Neurons


Book Description

Multi-Valued and Universal Binary Neurons deals with two new types of neurons: multi-valued neurons and universal binary neurons. These neurons are based on complex-number arithmetic and are hence much more powerful than the typical neurons used in artificial neural networks. Networks built from such neurons therefore exhibit broad functionality: they can not only realise threshold input/output maps but can also implement any arbitrary Boolean function. Two learning methods are presented whereby these networks can be trained easily. The broad applicability of these networks is demonstrated by several case studies in different fields of application: image processing, edge detection, image enhancement, super-resolution, pattern recognition, face recognition, and prediction. The book is partitioned into three almost equally sized parts: a mathematical study of the unique features of these new neurons, learning in networks of such neurons, and applications of such neural networks. Most of this work was developed by the first two authors over a period of more than 10 years and was previously available only in the Russian literature. With this book we present the first comprehensive treatment of this important class of neural networks in the open Western literature. Multi-Valued and Universal Binary Neurons is intended for anyone with a scholarly interest in neural network theory, applications, and learning. It will also be of interest to researchers and practitioners in the fields of image processing, pattern recognition, control, and robotics.
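As a rough sketch of the core idea, assuming the usual formulation in which inputs and outputs are k-th roots of unity and the activation selects a root of unity from the argument of the complex weighted sum (the code and names below are illustrative, not the authors' own implementation):

```python
import numpy as np

def mvn_activation(z, k):
    """Map a complex weighted sum z onto one of the k-th roots of unity,
    chosen by the angular sector that the argument of z falls into
    (a rough sketch of the multi-valued neuron's activation)."""
    angle = np.angle(z) % (2 * np.pi)                 # argument of z in [0, 2*pi)
    sector = int(np.floor(k * angle / (2 * np.pi)))   # index of the angular sector
    return np.exp(2j * np.pi * sector / k)

def mvn_output(weights, inputs, k):
    # weights[0] acts as the bias; inputs are expected to be k-th roots of unity
    z = weights[0] + np.dot(weights[1:], inputs)
    return mvn_activation(z, k)

# Example with k = 4, i.e. inputs drawn from {1, i, -1, -i}
w = np.array([0.1 + 0.0j, 1.0 + 0.5j, -0.3 + 1.0j])
x = np.array([1.0 + 0.0j, 0.0 + 1.0j])
print(mvn_output(w, x, k=4))
```

Because the output depends only on the angle of the weighted sum, a single such neuron can separate input space into sectors rather than just two half-planes, which is where the extra expressive power comes from.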




Artificial Neural Networks and Machine Learning -- ICANN 2013


Book Description

This book constitutes the proceedings of the 23rd International Conference on Artificial Neural Networks, ICANN 2013, held in Sofia, Bulgaria, in September 2013. The 78 papers included in the proceedings were carefully reviewed and selected from 128 submissions. The papers focus on the following topics: neurofinance, graphical network models, brain-machine interfaces, evolutionary neural networks, neurodynamics, complex systems, neuroinformatics, neuroengineering, hybrid systems, computational biology, neural hardware, bioinspired embedded systems, and collective intelligence.




Embedded Deep Learning


Book Description

This book covers algorithmic and hardware implementation techniques to enable embedded deep learning. The authors describe synergetic design approaches at the application, algorithm, computer-architecture, and circuit levels that help achieve the goal of reducing the computational cost of deep learning algorithms. The impact of these techniques is demonstrated in four silicon prototypes for embedded deep learning. The book gives a wide overview of effective solutions for energy-efficient neural networks on battery-constrained wearable devices; discusses the optimization of neural networks for embedded deployment at all levels of the design hierarchy (applications, algorithms, hardware architectures, and circuits), supported by real silicon prototypes; elaborates on how to design efficient convolutional neural network processors, exploiting parallelism, data reuse, sparse operations, and low-precision computations; and supports the introduced theory and design concepts with four real silicon prototypes. The implementation and achieved performance of these physical realizations are discussed in detail to illustrate and highlight the introduced cross-layer design concepts.
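As a small, generic illustration of the low-precision idea mentioned above, the sketch below quantizes a weight tensor to 8-bit integers with a single scale factor; the scheme and names are assumptions made for illustration, not the hardware-specific formats discussed in the book.

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: x is approximated by scale * q,
    with q stored in 8 bits. A generic example of low-precision computation."""
    max_abs = float(np.max(np.abs(x)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(0).normal(size=(3, 3)).astype(np.float32)
q, scale = quantize_int8(w)
w_hat = scale * q.astype(np.float32)        # dequantized approximation
print(np.abs(w - w_hat).max())              # quantization error stays small
```

Replacing 32-bit floating-point weights and activations with such narrow integer formats is one of the main levers for reducing energy per operation on embedded hardware.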




Advances in Neural Networks - ISNN 2007


Book Description

The three-volume set LNCS 4491/4492/4493 constitutes the refereed proceedings of the 4th International Symposium on Neural Networks, ISNN 2007, held in Nanjing, China, in June 2007. The 262 revised long papers and 192 revised short papers presented were carefully reviewed and selected from a total of 1,975 submissions. The papers are organized in topical sections on neural fuzzy control, neural networks for control applications, adaptive dynamic programming and reinforcement learning, neural networks for nonlinear systems modeling, robotics, stability analysis of neural networks, learning and approximation, data mining and feature extraction, chaos and synchronization, neural fuzzy systems, training and learning algorithms for neural networks, neural network structures, neural networks for pattern recognition, SOMs, ICA/PCA, biomedical applications, feedforward neural networks, recurrent neural networks, neural networks for optimization, support vector machines, fault diagnosis/detection, communications and signal processing, image/video processing, and applications of neural networks.




Neural Networks and Deep Learning


Book Description

Dr. K. Saravanan, Assistant Professor, Department of Mathematics, Shree Amirtha College of Education, Namakkal, Tamil Nadu, India. Dr. O. Nethaji, Assistant Professor, PG and Research Department of Mathematics, Kamaraj College, Manonmaniam Sundaranar University, Thoothukudi, Tamil Nadu, India. Mrs. V. Suganthi, Assistant Professor, Department of Computer Science, C.T.T.E College for Women, University of Madras, Chennai, Tamil Nadu, India. Dr. Sangeetha Rajendran, Assistant Professor, Department of Computer Science, Mangayarkarasi College of Arts and Science for Women, Madurai, Tamil Nadu, India. Dr. P. Murugabharathi, Guest Faculty, Mother Teresa Women's University Research and Extension Centre, Chennai, Tamil Nadu, India.




Neural Networks


Book Description

Neural networks are a computing paradigm that is attracting increasing attention among computer scientists. In this book, theoretical laws and models previously scattered across the literature are brought together into a general theory of artificial neural nets. Always with a view to biology, and starting with the simplest nets, it is shown how the properties of models change when more general computing elements and net topologies are introduced. Each chapter contains examples, numerous illustrations, and a bibliography. The book is aimed at readers who seek an overview of the field or who wish to deepen their knowledge. It is suitable as a basis for university courses in neurocomputing.