Jun 30, 2018
by Jan Žegklitz; Petr Pošík

We propose a new type of leaf node for use in Symbolic Regression (SR) that performs linear combinations of feature variables (LCF). These nodes can be handled in three different modes -- an unsynchronized mode, where all LCFs are free to change on their own, a synchronized mode, where LCFs are sorted into groups in which they are forced to be identical throughout the whole individual, and a globally synchronized mode, which is similar to the previous mode but the grouping is done across the...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1704.05134

Jun 29, 2018
by Surafel Luleseged Tilahun; Jean Medard T Ngnotchouye

Firefly algorithm is a swarm-based metaheuristic algorithm inspired by the flashing behavior of fireflies. It is effective and easy to implement, and it has been tested on different problems from different disciplines and found to be effective. Even though the algorithm was proposed for optimization problems with continuous variables, it has been modified and used for problems with non-continuous variables, including binary- and integer-valued problems. In this paper a detailed review...
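The movement rule at the heart of the algorithm can be sketched as follows (a minimal illustration, not the authors' code; the parameters `beta0`, `gamma`, and `alpha` follow the common firefly-algorithm convention and their values here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_step(x_i, x_j, beta0=1.0, gamma=1.0, alpha=0.2):
    """One movement step of firefly i toward a brighter firefly j.
    Attractiveness decays with distance: beta = beta0 * exp(-gamma * r^2)."""
    r2 = np.sum((x_j - x_i) ** 2)
    beta = beta0 * np.exp(-gamma * r2)
    # attraction term plus a small random walk for exploration
    return x_i + beta * (x_j - x_i) + alpha * (rng.random(x_i.shape) - 0.5)

x_i, x_j = np.zeros(2), np.ones(2)
moved = firefly_step(x_i, x_j, alpha=0.0)  # alpha=0: pure attraction
```

With `alpha = 0` the step is deterministic and strictly reduces the distance to the brighter firefly.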

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1602.07884

Jun 29, 2018
by Artem Chernodub; Dimitri Nowicki

We propose a novel activation function that implements piece-wise orthogonal non-linear mappings based on permutations. It is straightforward to implement, computationally efficient, and has low memory requirements. We tested it on two toy problems for feedforward and recurrent networks, where it shows performance similar to tanh and ReLU. The OPLU activation function ensures norm preservation of the backpropagated gradients, so it is potentially well suited for the training of deep, extra...
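A permutation-based, norm-preserving activation of this kind can be sketched as a pairwise max/min swap (an illustration of the general idea, assuming the pairwise form; not necessarily the authors' exact definition):

```python
import numpy as np

def oplu(x):
    """Sketch of an OPLU-style activation: pair up consecutive units and
    output (max, min) for each pair. This is a data-dependent permutation
    of the input, so the vector norm is preserved exactly."""
    x = np.asarray(x, dtype=float)
    assert x.size % 2 == 0, "needs an even number of units"
    a, b = x[0::2], x[1::2]
    out = np.empty_like(x)
    out[0::2] = np.maximum(a, b)
    out[1::2] = np.minimum(a, b)
    return out

v = np.array([3.0, -1.0, 2.0, 5.0])
y = oplu(v)  # -> [3., -1., 5., 2.]
```

Because the mapping only reorders coordinates, backpropagated gradient norms pass through unchanged, which is the property the abstract highlights.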

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1604.02313

Jun 29, 2018
by Chun Liu; Andreas Kroll

The performance of different mutation operators is usually evaluated in conjunction with specific parameter settings of genetic algorithms and target problems. Most studies focus on the classical genetic algorithm with different parameters or on solving unconstrained combinatorial optimization problems such as the traveling salesman problem. In this paper, a subpopulation-based genetic algorithm that uses only mutation and selection is developed to solve multi-robot task allocation problems....

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1606.00601

Jun 29, 2018
by Wen-Bo Du; Wen Ying; Gang Yan; Yan-Bo Zhu; Xian-Bin Cao

PSO is a widely recognized optimization algorithm inspired by social swarms. In this brief we present heterogeneous strategy particle swarm optimization (HSPSO), in which a proportion of particles adopt a fully informed strategy to enhance convergence speed while the rest are singly informed to maintain diversity. Our extensive numerical experiments show that the HSPSO algorithm is able to obtain satisfactory solutions, outperforming both PSO and the fully informed PSO. The evolution...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1608.00138

Jun 29, 2018
by J. Fischer; S. Lackner

Recurrent Bistable Gradient Networks are attractor-based neural networks characterized by bistable dynamics of each single neuron. Coupled together using linear interaction determined by the interconnection weights, these networks no longer suffer from spurious states or very limited capacity. Vladimir Chinarov and Michael Menzinger, who invented these networks, trained them using Hebb's learning rule. We show that this way of computing the weights leads to unwanted behaviour and...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1608.08265

Jun 30, 2018
by Yimin Yang; Q. M. Jonathan Wu; Guangbin Huang; Yaonan Wang

According to conventional neural network theories, the representational power of single-hidden-layer feedforward neural networks (SLFNs) rests on the parameters of the weighted connections and hidden nodes. SLFNs are universal approximators when at least the network parameters, including the hidden-node parameters and output weights, are adjustable. Unlike the above neural network theories, this paper indicates that in order to let SLFNs work as universal approximators, one may simply calculate the hidden node parameter...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1405.1445

Jun 30, 2018
by Michiel Hermans; Michaël Burm; Joni Dambre; Peter Bienstman

Machine learning algorithms, and neural networks in particular, are arguably experiencing a revolution in terms of performance. Currently, the best systems we have for speech recognition, computer vision and similar problems are based on neural networks, trained using the half-century-old backpropagation algorithm. Despite the fact that neural networks are a form of analog computer, they are still implemented digitally for reasons of convenience and availability. In this paper we demonstrate...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1407.6637

Jun 30, 2018
by Wojciech Zaremba; Ilya Sutskever; Oriol Vinyals

We present a simple regularization technique for Recurrent Neural Networks (RNNs) with Long Short-Term Memory (LSTM) units. Dropout, the most successful technique for regularizing neural networks, does not work well with RNNs and LSTMs. In this paper, we show how to correctly apply dropout to LSTMs, and show that it substantially reduces overfitting on a variety of tasks. These tasks include language modeling, speech recognition, image caption generation, and machine translation.
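The core recipe described here (applying dropout only to the non-recurrent, layer-to-layer connections, never to the recurrent state) can be illustrated on a simplified tanh RNN cell; this is a sketch of the idea, not the paper's LSTM code, and the helper names are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

def dropout(x, p):
    """Inverted dropout: zero units with probability p, rescale the rest."""
    mask = (rng.random(x.shape) >= p) / (1.0 - p)
    return x * mask

def rnn_layer_step(x_t, h_prev, W_x, W_h, p_drop=0.5):
    """One step of a simple tanh RNN layer. Dropout is applied to the
    input coming up from the layer below (non-recurrent connection),
    while the recurrent state h_prev is left untouched so information
    can flow unperturbed through time."""
    x_t = dropout(x_t, p_drop)                 # non-recurrent: dropped
    return np.tanh(W_x @ x_t + W_h @ h_prev)   # recurrent path intact

h0 = rnn_layer_step(np.ones(4), np.zeros(3), np.zeros((3, 4)), np.eye(3))
h1 = rnn_layer_step(np.ones(4), np.ones(3), np.zeros((3, 4)), np.eye(3))
```

With zero input weights the output depends only on the recurrent path, which dropout never perturbs; that is exactly the invariant the regularization relies on.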

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1409.2329

Jun 30, 2018
by Malte Probst; Franz Rothlauf; Jörn Grahl

Estimation of Distribution Algorithms (EDAs) require flexible probability models that can be efficiently learned and sampled. Restricted Boltzmann Machines (RBMs) are generative neural networks with these desired properties. We integrate an RBM into an EDA and evaluate the performance of this system in solving combinatorial optimization problems with a single objective. We assess how the number of fitness evaluations and the CPU time scale with problem size and with problem complexity. The...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1411.7542

Jun 30, 2018
by Alexander Hagg; Maximilian Mensing; Alexander Asteroth

Neuroevolution methods evolve the weights of a neural network, and in some cases the topology, but little work has been done to analyze the effect of evolving the activation functions of individual nodes on network size, which is important when training networks with a small number of samples. In this work we extend the neuroevolution algorithm NEAT to evolve the activation function of neurons in addition to the topology and weights of the network. The size and performance of networks produced...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1703.07122

Jun 30, 2018
by Ozgur Yilmaz

We introduce a novel framework of reservoir computing in which a cellular automaton is used as the reservoir of dynamical systems. Input is randomly projected onto the initial conditions of the automaton cells, and nonlinear computation is performed on the input by applying a rule in the automaton for a period of time. The evolution of the automaton creates a space-time volume of the automaton state space, and this volume is used as the reservoir. The proposed framework is capable of long short-term memory and...
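The reservoir construction can be sketched with an elementary (one-dimensional, binary) cellular automaton; the rule number, lattice size, and step count below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def ca_evolve(state, rule, steps):
    """Evolve a binary elementary CA under a Wolfram rule number (0-255).
    The stacked states over time form the 'space-time volume' that serves
    as the reservoir read-out features."""
    table = [(rule >> i) & 1 for i in range(8)]  # rule lookup table
    history = [state.copy()]
    for _ in range(steps):
        left = np.roll(state, 1)    # periodic left neighbor
        right = np.roll(state, -1)  # periodic right neighbor
        idx = 4 * left + 2 * state + right
        state = np.array([table[i] for i in idx], dtype=int)
        history.append(state.copy())
    return np.stack(history)        # shape: (steps + 1, n_cells)

init = np.zeros(11, dtype=int)
init[5] = 1                         # input "projected" onto initial state
volume = ca_evolve(init, rule=90, steps=5)
```

A linear read-out trained on the flattened `volume` would play the role of the reservoir-computing output layer.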

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1410.0162

Jun 27, 2018
by Nikhil Padhye; Pulkit Mittal; Kalyanmoy Deb

Evolutionary Algorithms (EAs) are being routinely applied for a variety of optimization tasks, and real-parameter optimization in the presence of constraints is one such important area. During constrained optimization EAs often create solutions that fall outside the feasible region; hence a viable constraint-handling strategy is needed. This paper focuses on the class of constraint-handling strategies that repair infeasible solutions by bringing them back into the search space and explicitly...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1504.04421

Jun 28, 2018
by James J. Q. Yu; Albert Y. S. Lam; Victor O. K. Li

A newly proposed chemical-reaction-inspired metaheuristic, Chemical Reaction Optimization (CRO), has been applied to many optimization problems in both discrete and continuous domains. To alleviate the effort in tuning parameters, this paper reduces the number of optimization parameters in canonical CRO and develops an adaptive scheme to evolve them. Our proposed Adaptive CRO (ACRO) adapts better to different optimization problems. We perform simulations with ACRO on a widely-used benchmark of...

Topics: Computing Research Repository, Neural and Evolutionary Computing

Source: http://arxiv.org/abs/1507.02492

Jun 28, 2018
by Murilo Zangari de Souza; Roberto Santana; Aurora Trinidad Ramirez Pozo; Alexander Mendiburu

Evolutionary algorithms based on modeling the statistical dependencies (interactions) between the variables have been proposed to solve a wide range of complex problems. These algorithms learn and sample probabilistic graphical models able to encode and exploit the regularities of the problem. This paper investigates the effect of using probabilistic modeling techniques as a way to enhance the behavior of MOEA/D framework. MOEA/D is a decomposition based evolutionary algorithm that decomposes a...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1511.05625

Jun 28, 2018
by Jun He

An important question in evolutionary computation is how good the solutions produced by evolutionary algorithms are. This paper aims to provide an analytic analysis of solution quality in terms of the relative approximation error, defined as the difference between 1 and the approximation ratio of the solution found by an evolutionary algorithm. Since evolutionary algorithms are iterative methods, the relative approximation error is a function of generations. With the help of matrix analysis, it is...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1511.03483

Jun 28, 2018
by Emre O. Neftci; Bruno U. Pedroni; Siddharth Joshi; Maruan Al-Shedivat; Gert Cauwenberghs

Recent studies have shown that synaptic unreliability is a robust and sufficient mechanism for inducing the stochasticity observed in cortex. Here, we introduce Synaptic Sampling Machines, a class of neural network models that uses synaptic stochasticity as a means to Monte Carlo sampling and unsupervised learning. Similar to the original formulation of Boltzmann machines, these models can be viewed as a stochastic counterpart of Hopfield networks, but where stochasticity is induced by a random...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1511.04484

Jun 30, 2018
by Ronald Hochreiter; Christoph Waldhauser

In this paper, we apply genetic algorithms to the field of electoral studies. Forecasting election results is one of the most exciting and demanding tasks in the area of market research, especially due to the fact that decisions have to be made within seconds on live television. We show that the proposed method outperforms currently applied approaches and thereby provide an argument to tighten the intersection between computer science and social science, especially political science, further....

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1401.4674

Jun 29, 2018
by Thomas Schmickl; Payam Zahadat; Heiko Hamann

In evolutionary robotics an encoding of the control software, which maps sensor data (input) to motor control values (output), is shaped by stochastic optimization methods to complete a predefined task. This approach is assumed to be beneficial compared to standard methods of controller design in those cases where no a priori model is available that could help to optimize performance. Also for robots that have to operate in unpredictable environments, an evolutionary robotics approach is...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1609.07722

Jun 29, 2018
by Akhilesh Jaiswal; Sourjya Roy; Gopalakrishnan Srinivasan; Kaushik Roy

The efficiency of the human brain in performing classification tasks has attracted considerable research interest in brain-inspired neuromorphic computing. Hardware implementations of a neuromorphic system aim to mimic the computations in the brain through interconnections of neurons and synaptic weights. A leaky integrate-and-fire (LIF) spiking model is widely used to emulate the dynamics of neuronal action potentials. In this work, we propose a spin-based LIF spiking neuron using the...
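The generic discrete-time LIF dynamics the abstract refers to can be sketched as follows (a standard textbook model, not the spin-device implementation proposed in the paper; the threshold and leak values are illustrative):

```python
def lif_simulate(input_current, v_th=1.0, leak=0.9, v_reset=0.0):
    """Leaky-integrate-and-fire dynamics: each step the membrane potential
    leaks, integrates the input current, and emits a spike followed by a
    reset whenever it crosses the threshold."""
    v, spikes, trace = v_reset, [], []
    for i_t in input_current:
        v = leak * v + i_t      # leak, then integrate input
        if v >= v_th:
            spikes.append(1)    # fire...
            v = v_reset         # ...and reset
        else:
            spikes.append(0)
        trace.append(v)
    return spikes, trace

spikes, _ = lif_simulate([0.4] * 10)  # constant drive -> regular spiking
```

Under constant sub-threshold drive the neuron charges over several steps and fires periodically, which is the behavior a hardware LIF device must reproduce.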

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1609.09158

Jun 30, 2018
by Ke Li; Kalyanmoy Deb; Xin Yao

Most existing studies on evolutionary multi-objective optimization focus on approximating the whole Pareto-optimal front. Nevertheless, rather than the whole front, which demands too many points (especially in a high-dimensional space), the decision maker might only be interested in a partial region, called the region of interest. In this case, solutions outside this region can add noise to the decision-making procedure. Even worse, there is no guarantee that we can find the preferred solutions...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1701.05935

Jun 29, 2018
by Saba Emrani; Hamid Krim

We propose a geometric model-free causality measure based on multivariate delay embedding that can efficiently detect linear and nonlinear causal interactions between time series with no prior information. We then exploit the proposed causal interaction measure in real MEG data analysis. The results are used to construct effective connectivity maps of brain activity to decode different categories of visual stimuli. Moreover, we discovered that the MEG-based effective connectivity maps as a...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1607.07078

Jun 29, 2018
by Jun Haeng Lee; Tobi Delbruck; Michael Pfeiffer

Deep spiking neural networks (SNNs) hold great potential for improving the latency and energy efficiency of deep neural networks through event-based computation. However, training such networks is difficult due to the non-differentiable nature of asynchronous spike events. In this paper, we introduce a novel technique, which treats the membrane potentials of spiking neurons as differentiable signals, where discontinuities at spike times are only considered as noise. This enables an error...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1608.08782

Jun 29, 2018
by Jaekoo Lee; Hyunjae Kim; Jongsun Lee; Sungroh Yoon

Graphs provide a powerful means for representing complex interactions between entities. Recently, deep learning approaches are emerging for representing and modeling graph-structured data, although the conventional deep learning methods (such as convolutional neural networks and recurrent neural networks) have mainly focused on grid-structured inputs (image and audio). Leveraged by the capability of representation learning, deep learning based techniques are reporting promising results for...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1611.04687

Jun 30, 2018
by Eric O. Scott; Kenneth A. De Jong

We introduce a genetic programming method for solving multiple Boolean circuit synthesis tasks simultaneously. This allows us to solve a set of elementary logic functions twice as easily as with a direct, single-task approach.

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1702.02217

Jun 29, 2018
by Joachim Ott; Zhouhan Lin; Ying Zhang; Shih-Chii Liu; Yoshua Bengio

Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many machine learning tasks, but their demands on resources in terms of memory and computational power are often high. Therefore, there is great interest in optimizing the computations performed with these models, especially when considering the development of specialized low-power hardware for deep networks. One way of reducing the computational needs is to limit the numerical precision of the network weights and biases, and this...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1611.07065

Jun 26, 2018
by James J. Q. Yu; Victor O. K. Li

The growing complexity of real-world problems has motivated computer scientists to search for efficient problem-solving methods. Metaheuristics based on evolutionary computation and swarm intelligence are outstanding examples of nature-inspired solution techniques. Inspired by the social spiders, we propose a novel Social Spider Algorithm to solve global optimization problems. This algorithm is mainly based on the foraging strategy of social spiders, utilizing the vibrations on the spider web...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1502.02407

Jun 28, 2018
by Stéphane Doncieux; Jean Liénard; Benoît Girard; Mohamed Hamdaoui; Joël Chaskalovic

Computational models are of increasing complexity and their behavior may in particular emerge from the interaction of different parts. Studying such models then becomes increasingly difficult, and there is a need for methods and tools supporting this process. Multi-objective evolutionary algorithms generate a set of trade-off solutions instead of a single optimal solution. The availability of a set of solutions that have the specificity to be optimal relative to carefully chosen objectives...

Topics: Computing Research Repository, Neural and Evolutionary Computing

Source: http://arxiv.org/abs/1507.06877

Jun 26, 2018
by James J. Q. Yu; Victor O. K. Li; Albert Y. S. Lam

Optimization techniques are frequently applied in science and engineering research and development. Evolutionary algorithms, as a kind of general-purpose metaheuristic, have been shown to be very effective in solving a wide range of optimization problems. A recently proposed chemical-reaction-inspired metaheuristic, Chemical Reaction Optimization (CRO), has been applied to solve many global optimization problems. However, the functionality of the inter-molecular ineffective collision operator...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1502.00197

Jun 29, 2018
by Charles Siegel; Jeff Daily; Abhinav Vishnu

We present novel techniques to accelerate the convergence of Deep Learning algorithms by conducting low overhead removal of redundant neurons -- apoptosis of neurons -- which do not contribute to model learning, during the training phase itself. We provide in-depth theoretical underpinnings of our heuristics (bounding accuracy loss and handling apoptosis of several neuron types), and present the methods to conduct adaptive neuron apoptosis. Specifically, we are able to improve the training time...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1610.00790

Jun 26, 2018
by Jun He; Yong Wang; Yuren Zhou

Multi-objective optimisation is regarded as one of the most promising ways for dealing with constrained optimisation problems in evolutionary optimisation. This paper presents a theoretical investigation of a multi-objective optimisation evolutionary algorithm for solving the 0-1 knapsack problem. Two initialisation methods are considered in the algorithm: local search initialisation and greedy search initialisation. Then the solution quality of the algorithm is analysed in terms of the...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1502.03699

Jun 27, 2018
by Jascha A. Schewtschenko

Kohonen's Self-Organizing Maps (SOMs) have proven to be a successful data-reduction method to identify the intrinsic lower-dimensional sub-manifold of a data set that is scattered in the higher-dimensional feature space. Motivated by the possibly non-Euclidean nature of the feature space and of the intrinsic geometry of the data set, we extend the definition of classic SOMs to obtain the General Riemannian SOM (GRiSOM). We additionally provide an implementation as a proof-of-concept for...

Topics: Computing Research Repository, Neural and Evolutionary Computing

Source: http://arxiv.org/abs/1505.03917

Jun 28, 2018
by David Howard; Larry Bull; Pier-Luca Lanzi

Learning Classifier Systems (LCS) are population-based reinforcement learners that were originally designed to model various cognitive phenomena. This paper presents an explicitly cognitive LCS by using spiking neural networks as classifiers, providing each classifier with a measure of temporal dynamism. We employ a constructivist model of growth of both neurons and synaptic connections, which permits a Genetic Algorithm (GA) to automatically evolve sufficiently-complex neural structures. The...

Topics: Computing Research Repository, Neural and Evolutionary Computing

Source: http://arxiv.org/abs/1508.07700

Jun 28, 2018
by Subhrajit Roy; Arindam Basu

In this article, we propose a novel Winner-Take-All (WTA) architecture employing neurons with nonlinear dendrites and an online unsupervised structural plasticity rule for training it. Further, to aid hardware implementations, our network employs only binary synapses. The proposed learning rule is inspired by spike time dependent plasticity (STDP) but differs for each dendrite based on its activation level. It trains the WTA network through formation and elimination of connections between...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1512.01314

Jun 29, 2018
by Hojjat Salehinejad

The advantage of recurrent neural networks (RNNs) in learning dependencies between time-series data has distinguished RNNs from other deep learning models. Recently, many advances have been proposed in this emerging field. However, there is a lack of a comprehensive review of memory models in RNNs in the literature. This paper provides a fundamental review of RNNs and the long short-term memory (LSTM) model, and then surveys recent advances in different memory enhancements and learning...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1602.04335

Jun 29, 2018
by Paweł B. Myszkowski; Marek E. Skowroński; Łukasz P. Olech; Krzysztof Oślizło

In this paper a Hybrid Ant Colony Optimization (HAntCO) approach to solving the Multi-Skill Resource-Constrained Project Scheduling Problem (MS-RCPSP) is presented. We propose a hybrid approach that links classical heuristic priority rules for project scheduling with Ant Colony Optimization (ACO). Furthermore, a novel approach for updating the pheromone value is proposed, based on both the best and worst solutions stored by the ants. The objective of this paper is to research the usability...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1603.08538

Jun 29, 2018
by Andrew Pulver; Siwei Lyu

Previous RNN architectures have largely been superseded by LSTM, or "Long Short-Term Memory". Since its introduction, there have been many variations on this simple design. However, it is still widely used and we are not aware of a gated-RNN architecture that outperforms LSTM in a broad sense while still being as simple and efficient. In this paper we propose a modified LSTM-like architecture. Our architecture is still simple and achieves better performance on the tasks that we tested...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1605.01988

Jun 29, 2018
by Leonard Johard; Lukas Breitwieser; Alberto Di Meglio; Marco Manca; Manuel Mazzara; Max Talanov

This paper is a brief update on developments in the BioDynaMo project, a new platform for computer simulations for biological research. We will discuss the new capabilities of the simulator, important new concepts in simulation methodology, as well as its numerous applications to the computational biology and nanoscience communities.

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1608.01818

Jun 29, 2018
by Bin Liu; Shi Cheng; Yuhui Shi

In this paper, we are concerned with a branch of evolutionary algorithms termed estimation of distribution algorithms (EDAs), which have been successfully used to tackle derivative-free global optimization problems. For existing EDAs, it is common practice to use a Gaussian distribution or a mixture of Gaussian components to represent the statistical properties of the promising solutions found so far. Observing that the Student's t distribution has heavier and longer tails than the Gaussian,...
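The idea can be sketched with a toy EDA generation that fits location and scale to the elite solutions and then samples from a Student's t distribution instead of a Gaussian (illustrative only; the univariate marginals, `n_elite`, and `df` are assumptions, not the paper's algorithm or settings):

```python
import numpy as np

rng = np.random.default_rng(2)

def eda_t_step(population, fitness, n_elite, df=3.0):
    """One generation of a toy univariate-marginal EDA: estimate location
    and scale from the elite (lowest-fitness) solutions, then sample the
    next population from a heavier-tailed Student's t distribution to
    encourage exploration beyond what a Gaussian would allow."""
    elite = population[np.argsort(fitness)[:n_elite]]   # minimization
    mu = elite.mean(axis=0)
    sigma = elite.std(axis=0) + 1e-12                   # avoid collapse
    n, d = population.shape
    return mu + sigma * rng.standard_t(df, size=(n, d))

pop = rng.normal(size=(20, 2)) * 5.0
fit = np.sum(pop ** 2, axis=1)        # sphere objective
new_pop = eda_t_step(pop, fit, n_elite=5)
```

Lower `df` makes the tails heavier; as `df` grows the t distribution approaches the Gaussian used by standard EDAs.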

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1608.03757

Jun 28, 2018
by Nicholas Léonard; Sagar Waghmare; Yang Wang; Jin-Hwa Kim

The rnn package provides components for implementing a wide range of Recurrent Neural Networks. It is built within the framework of the Torch distribution for use with the nn package. The components have evolved through 3 iterations, each adding to the flexibility and capability of the package. All component modules inherit from either the AbstractRecurrent or AbstractSequencer classes. Strong unit testing, continued backwards compatibility and access to supporting material are the principles followed...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1511.07889

Jun 28, 2018
by Hojjat Salehinejad; Shahryar Rahnamayan; Hamid R. Tizhoosh

The differential evolution (DE) algorithm suffers from high computational time due to the slow nature of its evaluation. In contrast, micro-DE (MDE) algorithms employ a very small population size, which can converge faster to a reasonable solution. However, these algorithms are vulnerable to premature convergence as well as a high risk of stagnation. In this paper, a MDE algorithm with a vectorized random mutation factor (MDEVM) is proposed, which utilizes the benefit of a small population size while...
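The vectorized-mutation-factor idea can be sketched for the common DE/rand/1 scheme (an illustration, not the authors' code; the factor range is an assumption):

```python
import numpy as np

rng = np.random.default_rng(4)

def de_mutation_vectorized_F(pop, f_low=0.5, f_high=1.0):
    """DE/rand/1 mutation with a per-dimension random mutation factor:
    each dimension of each donor vector gets its own F drawn uniformly
    from [f_low, f_high], injecting extra diversity into a micro-sized
    population that would otherwise stagnate."""
    n, d = pop.shape
    donors = np.empty_like(pop)
    for i in range(n):
        candidates = [j for j in range(n) if j != i]
        r1, r2, r3 = rng.choice(candidates, 3, replace=False)
        F = rng.uniform(f_low, f_high, size=d)  # one factor per dimension
        donors[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return donors

pop = rng.normal(size=(5, 3))   # micro-population of 5 individuals
donors = de_mutation_vectorized_F(pop)
```

Classical DE would use a single scalar F per donor (or per generation); drawing one per dimension is what "vectorized" refers to here.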

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1512.07980

Jun 29, 2018
by Joachim Ott; Zhouhan Lin; Ying Zhang; Shih-Chii Liu; Yoshua Bengio

Recurrent Neural Networks (RNNs) produce state-of-the-art performance on many machine learning tasks, but their demands on resources in terms of memory and computational power are often high. Therefore, there is great interest in optimizing the computations performed with these models, especially when considering the development of specialized low-power hardware for deep networks. One way of reducing the computational needs is to limit the numerical precision of the network weights and biases. This has...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1608.06902

Jun 29, 2018
by Farkhondeh Kiaee; Christian Gagné; Mahdieh Abbasi

The storage and computation requirements of Convolutional Neural Networks (CNNs) can be prohibitive for exploiting these models over low-power or embedded devices. This paper reduces the computational complexity of the CNNs by minimizing an objective function, including the recognition loss that is augmented with a sparsity-promoting penalty term. The sparsity structure of the network is identified using the Alternating Direction Method of Multipliers (ADMM), which is widely used in large...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1611.01590

Jun 29, 2018
by Anton Eremeev

The paper is devoted to upper bounds on the expected first hitting times of the sets of local or global optima for non-elitist genetic algorithms with very high selection pressure. The results of this paper extend the range of situations where the upper bounds on the expected runtime are known for genetic algorithms and apply, in particular, to the Canonical Genetic Algorithm. The obtained bounds do not require the probability of fitness-decreasing mutation to be bounded by a constant less than...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1606.05784

Jun 30, 2018

by
Alex Graves; Greg Wayne; Ivo Danihelka

We extend the capabilities of neural networks by coupling them to external memory resources, which they can interact with by attentional processes. The combined system is analogous to a Turing Machine or Von Neumann architecture but is differentiable end-to-end, allowing it to be efficiently trained with gradient descent. Preliminary results demonstrate that Neural Turing Machines can infer simple algorithms such as copying, sorting, and associative recall from input and output examples.
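The attentional interaction with external memory can be sketched as content-based addressing: a softmax over similarities between a key and each memory row, yielding a differentiable weighted read. This is a simplified illustration (it omits the Neural Turing Machine's location-based addressing and learned parameters):

```python
import numpy as np

def content_read(memory, key, beta=1.0):
    """Read from memory by soft attention over cosine similarities.
    `beta` sharpens the focus; the read is differentiable in all inputs."""
    norms = np.linalg.norm(memory, axis=1) * np.linalg.norm(key) + 1e-8
    sim = memory @ key / norms        # cosine similarity per memory row
    w = np.exp(beta * sim)
    w /= w.sum()                      # softmax attention weights
    return w @ memory                 # weighted combination of rows

M = np.array([[1.0, 0.0], [0.0, 1.0]])
r = content_read(M, np.array([1.0, 0.0]), beta=10.0)  # focuses on row 0
```

Because every step is smooth, gradients flow through the read weights into the controller, which is what makes the combined system trainable end-to-end with gradient descent.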

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1410.5401

Jun 30, 2018

by
Norbert Michael Mayer

Recurrent networks with transfer functions that satisfy Lipschitz continuity with constant K=1 can be echo state networks if certain limitations on the recurrent connectivity are applied. It has been shown that it is sufficient for the largest singular value of the recurrent connectivity matrix to be smaller than 1. The main achievement of this paper is a proof of the conditions under which the network is an echo state network even if the largest singular value is one. It turns out that in this critical case the...
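The sufficient condition mentioned (largest singular value of the recurrent matrix below 1) is commonly enforced in practice by rescaling the weight matrix. A minimal sketch, with the target value 0.95 chosen arbitrarily for illustration:

```python
import numpy as np

def scale_to_singular_value(W, target=0.95):
    """Rescale W so its largest singular value equals `target` (< 1),
    which suffices for the echo state property when the transfer
    function is 1-Lipschitz (e.g. tanh)."""
    sigma_max = np.linalg.norm(W, ord=2)  # spectral norm = largest singular value
    return W * (target / sigma_max)

rng = np.random.default_rng(0)
W = rng.standard_normal((50, 50))
W_esn = scale_to_singular_value(W)
```

The paper's contribution concerns the boundary case where `target` is exactly 1, which this sufficient condition alone does not cover.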

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1411.6757

Jun 26, 2018

by
James J. Q. Yu; Albert Y. S. Lam; Victor O. K. Li

The set covering problem (SCP) is a representative combinatorial optimization problem with many practical applications. This paper investigates the development of an algorithm to solve the SCP by employing chemical reaction optimization (CRO), a general-purpose metaheuristic. The algorithm is tested on a wide range of benchmark SCP instances. The simulation results indicate that it gives outstanding performance compared with other heuristics and metaheuristics in solving the SCP.
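For context on the problem itself, the SCP asks for a minimum-cost selection of subsets whose union covers a universe of elements. The classical greedy heuristic below illustrates the problem; it is a well-known baseline, not the CRO algorithm of the paper:

```python
def greedy_set_cover(universe, subsets, costs):
    """Classical greedy heuristic for the set covering problem: repeatedly
    pick the subset with the lowest cost per newly covered element."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (i for i in range(len(subsets)) if subsets[i] & uncovered),
            key=lambda i: costs[i] / len(subsets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4}, {4, 5}]
picked = greedy_set_cover(U, S, costs=[1.0, 1.0, 1.0, 1.0])
```

Metaheuristics such as CRO explore the solution space more broadly than this one-pass greedy rule, which is how they can escape its known approximation gap.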

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1502.00199

Jun 28, 2018

by
K. Eswaran; Vishwajeet Singh

In this paper we introduce a new method that employs the concept of "Orientation Vectors" to train a feed-forward neural network; it is suitable for problems involving large dimensions and characteristically sparse clusters. The new method does not become NP-hard as the problem size increases. We `derive' the method by starting from Kolmogorov's method and then relaxing some of its stringent conditions. We show that for most classification problems three layers are sufficient and the...

Topics: Computing Research Repository, Neural and Evolutionary Computing

Source: http://arxiv.org/abs/1509.05177

Jun 29, 2018

by
Soheil Hashemi; Nicholas Anthony; Hokchhay Tann; R. Iris Bahar; Sherief Reda

Deep neural networks are gaining in popularity as they generate state-of-the-art results for a variety of computer vision and machine learning applications. At the same time, these networks have grown in depth and complexity in order to solve harder problems. Given the limited power budgets dedicated to these networks, the importance of low-power, low-memory solutions has been stressed in recent years. While a large number of dedicated hardware designs using different precisions has...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1612.03940

Jun 30, 2018

by
Sadique Sheik; Somnath Paul; Charles Augustine; Gert Cauwenberghs

Several learning rules for synaptic plasticity that depend on either spike timing or internal state variables have been proposed in the past, imparting varying computational capabilities to Spiking Neural Networks. Due to design complications, these learning rules are typically not implemented on neuromorphic devices, leaving the devices capable only of inference. In this work we propose a unidirectional post-synaptic-potential-dependent learning rule that is triggered only by pre-synaptic...

Topics: Neural and Evolutionary Computing, Computing Research Repository

Source: http://arxiv.org/abs/1701.01495