Final Call for Papers: SEAL 2014
Posted by Kay Chen Tan and updated by Mengjie Zhang
********* FINAL CALL FOR PAPERS **********
The 10th International Conference on Simulated Evolution and Learning (SEAL 2014)
15-18 December 2014, Dunedin, New Zealand
======= New and Final Grace Period for Submissions =======
The paper submission deadline (28 July 2014) has passed with a pleasing number of submissions. Due to a number of
requests, a one-week "grace period" now applies: the submission site will remain open until 4 August 2014 to allow new
submissions, and earlier submissions can also be revised and resubmitted.
1. Three Keynote speakers have been finalised: Prof Xin Yao from University of Birmingham; Prof Kay Chen Tan from
National University of Singapore; and Prof Zbigniew Michalewicz from University of Adelaide.
2. Three special sessions have been organised: (1) Evolutionary Feature Reduction; (2) Evolutionary Machine Learning;
and (3) Evolutionary Scheduling and Combinatorial Optimisation.
3. Six (free) Tutorials have been accepted, including Evolving and Designing Neural Network Ensembles Effectively (by
Professor Xin Yao), How to develop a killer EC-based application? (by Professor Zbigniew Michalewicz), Parameterized
Complexity Analysis of Bio-Inspired Computing (by Professor Frank Neumann), Evolutionary Multi-objective and Many-
Objective Optimisation (by Hernan Aguirre), Estimation of Distribution Algorithms and Probabilistic Modelling in
Evolutionary Computation (by Marcus Gallagher), and Monte Carlo Tree Search and Evolutionary Enhancements (by
Simon Lucas, United Kingdom).
4. Selected papers will be invited for further revision and extension for possible publication in a special issue of two SCI
journals after further review: Genetic Programming and Evolvable Machines (GPEM, Springer, Impact Factor 1.333) and
Soft Computing (Springer, Impact Factor 1.124).
=======================
Evolution and learning are two fundamental forms of adaptation. SEAL 2014 is the tenth biennial conference in the highly
successful series that aims at exploring these two forms of adaptation and their roles and interactions in adaptive
systems. The conference strongly encourages cross-fertilization between evolutionary learning and other machine
learning approaches, such as neural network learning, reinforcement learning, decision tree learning, and fuzzy system
learning. The other major theme of the conference is optimization by evolutionary approaches or hybrid
evolutionary approaches.
General Chairs:
Prof. Mengjie Zhang (Victoria University of Wellington) Assoc. Prof. Peter Whigham (University of Otago)
Programme Chairs:
Dr. Grant Dick (University of Otago) Dr. Will Browne (Victoria University of Wellington)
Technical Co-Chairs:
Prof. Lam Thu Bui (LQDTU Vietnam)
Prof. Hisao Ishibuchi (Osaka Prefecture University)
Prof. Yaochu Jin (University of Surrey)
Assoc. Prof. Xiaodong Li (RMIT, Australia)
Prof. Yuhui Shi (Xi’an Jiaotong-Liverpool University)
Assoc. Prof. Pramod Singh (IIITM, Gwalior)
Prof. Kay Chen Tan (National University of Singapore)
Prof. Ke Tang (University of Science and Technology of China)
International Advisory Board:
Prof. Hussein Abbass (ADFA, Australia)
Prof. Carlos A. Coello Coello (CINVESTAV-IPN, Mexico)
Prof. Kalyanmoy Deb (IIT Kanpur)
1/8/2014 Final Call for Papers: SEAL 2014 | IEEE Computational Intelligence Society… 2/5
Prof. Garry Greenwood (Portland State University)
Prof. Jong-Hwan Kim (KAIST, Korea)
Prof. Bob McKay (Soul National University)
Prof. Zbigniew Michalewicz (University of Adelaide)
Prof. Lipo Wang (Nanyang Technological University, Singapore)
Prof. Xin Yao (University of Birmingham)
Local Organising Chairs:
Mrs. Heather Cooper (University of Otago)
Mr. Stephen Hall-Jones (University of Otago)
Tutorial Chair:
Dr. Mark Johnston (Victoria University of Wellington)
Special Session Chair:
Dr. Aaron Chen (Victoria University of Wellington)
Publicity Chairs:
Prof. Jing Liu (Xidian University, China)
Dr. Kourosh Neshatian (University of Canterbury, NZ)
Dr. Andy Song (RMIT University, Australia)
Assoc. Prof. Nguyen Xuan Hoai (Hanoi University, Vietnam)
Grace Period due: 4 August 2014 (absolutely final)
Paper submission due: 28 July 2014 (Extended)
Acceptance notification due: 29 August 2014
Camera ready due: 16 September 2014
Conference sessions: 15-18 December 2014
All papers should be submitted in PDF format via electronic submission at the SEAL 2014 conference submission site (via
EasyChair).
The submitted papers must represent original works, and must not have been accepted for publication elsewhere or be
under review for another conference or journal.
All accepted papers that are presented at the conference will be included in the conference proceedings, to be published in
Lecture Notes in Computer Science (LNCS) by Springer, typically indexed by EI, DBLP, and ISI Proceedings/ISTP.
In addition, selected papers will be invited for further revision and extension for possible publication in a special issue of
two SCI journals after further review: Genetic Programming and Evolvable Machines (GPEM, Springer, Impact Factor
1.333) and Soft Computing (Springer, Impact Factor 1.124).
Special Session 1: Evolutionary Feature Reduction
Large numbers of features/attributes are often problematic in machine learning and data mining. They lead to conditions
known as "the curse of dimensionality". Feature reduction aims to solve this problem by selecting a small number of
original features or constructing a smaller set of new features. Feature selection and construction are challenging tasks
due to the large search space and feature interaction problems. Recently, there has been increasing interest in using
evolutionary computation approaches to solve these problems.
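The search described above can be sketched as a simple genetic algorithm over bit-string feature masks. The sketch below is purely illustrative and not from the call itself: the toy fitness (correlation of the summed selected features with the target), parameter values, and data are all assumptions made for the example.

```python
# A minimal sketch of evolutionary feature selection with a bit-string GA.
# The filter-style fitness and all parameter values are illustrative only.
import random

random.seed(0)

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

def fitness(mask, data, target, alpha=0.01):
    """Correlation of the sum of selected features with the target,
    minus a small penalty per selected feature."""
    chosen = [i for i, bit in enumerate(mask) if bit]
    if not chosen:
        return -1.0
    combined = [sum(row[i] for i in chosen) for row in data]
    return abs(correlation(combined, target)) - alpha * len(chosen)

def evolve(data, target, n_features, pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(n_features)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, data, target), reverse=True)
        elite = pop[:pop_size // 2]            # keep the better half
        children = []
        for _ in range(pop_size - len(elite)):
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, n_features)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(n_features)        # single-bit mutation
            child[i] ^= 1
            children.append(child)
        pop = elite + children
    return max(pop, key=lambda m: fitness(m, data, target))

# Toy data: 10 features, but only features 0 and 1 drive the target.
data = [[random.random() for _ in range(10)] for _ in range(200)]
target = [row[0] + row[1] for row in data]
best = evolve(data, target, n_features=10)
```

On this toy problem the GA reliably recovers the two informative features while discarding the noise features, illustrating how selection pressure navigates the exponential space of feature subsets.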
The theme of this special session is the use of evolutionary computation for feature reduction, covering ALL different
evolutionary computation paradigms including evolutionary algorithms, swarm intelligence, learning classifier systems,
harmony search, artificial immune systems, and cross-fertilization of evolutionary computation and other techniques such
as neural networks, and fuzzy and rough sets. This special session aims to investigate both new theories and methods
for feature reduction across different evolutionary computation paradigms, and applications of evolutionary computation
to feature reduction. Authors are invited to submit their original and unpublished work to this special session.
Topics of interest include but are not limited to:
* Feature ranking/weighting
* Feature subset selection
* Dimensionality reduction
* Feature construction
* Filter, wrapper, and embedded feature selection
* Hybrid feature selection
* Feature reduction for both supervised and unsupervised learning
* Multi-objective feature reduction
* Feature reduction with imbalanced data
* Analysis on evolutionary feature reduction methods
* Real-world applications of evolutionary feature reduction, e.g. gene analysis and bio-marker detection
Bing Xue
School of Engineering and Computer Science,
Victoria University of Wellington
Kourosh Neshatian
Computer Science and Software Engineering
College of Engineering
University of Canterbury
Special Session 2: Evolutionary Machine Learning
Machine learning and evolutionary computation are two major fields of computational intelligence. They share many
fundamental similarities and are frequently explored together to tackle complex, large-scale, and dynamic learning
problems under various sources of uncertainty.
This special session will cover a broad range of topics related to evolutionary machine learning, including novel learning
algorithms and their innovative applications. We will focus on both theoretical and practical research in this field. The aim
is to show how the global search performed by evolutionary methods can complement the local search of
non-evolutionary methods, and how the combination of the two can improve learning effectiveness and performance
within a wide range of clustering, classification, regression, prediction, and control tasks.
Topics of interest include, but are not limited to:
* Learning Classifier Systems
* Genetic Programming (GP) and its application to machine learning tasks
* Evolutionary ensembles
* Neuroevolution and its application to machine learning tasks
* Genetic fuzzy systems
* Hyper-parameter tuning with evolutionary methods
* Theoretical analysis of evolutionary learning algorithms
* Interesting practical applications
* Advanced computing platforms for evolutionary machine learning
* Other Genetics-Based Machine Learning: hybrid learning systems combining evolutionary techniques with machine
learning methods
Aaron Chen
School of Engineering and Computer Science
Victoria University of Wellington
Will Browne
School of Engineering and Computer Science, Victoria University of Wellington
Special Session 3: Evolutionary Scheduling and Combinatorial Optimisation
Evolutionary Scheduling and Combinatorial Optimization is an active research area in both Artificial Intelligence and
Operations Research due to its applicability and interesting computational aspects. Evolutionary techniques are suitable
for these problems since they are highly flexible in terms of handling constraints, dynamic changes and multiple
conflicting objectives.
This special session focuses on both theoretical and practical aspects of Evolutionary Scheduling and Combinatorial
Optimization. Examples of evolutionary methods include genetic algorithms, genetic programming, evolution strategies,
ant colony optimisation, particle swarm optimisation, evolutionary hyper-heuristics, and memetic algorithms.
Topics of interest include, but are not limited to:
* Production scheduling
* Timetabling
* Vehicle routing
* Transport scheduling
* Grid/cloud scheduling
* Project scheduling
* 2D/3D strip packing
* Space allocation
* Multi-objective scheduling
* Multiple interdependent decisions
* Automated heuristic design
* New real-world and innovative applications
Su Nguyen
Victoria University of Wellington
New Zealand
Mengjie Zhang
Victoria University of Wellington
New Zealand
Kay Chen Tan
National University of Singapore
Tutorial 1: Evolving and Designing Neural Network Ensembles Effectively (by Professor Xin Yao, University of Birmingham).
This tutorial starts with an overview of different evolutionary approaches to learn the weights, architectures and learning
rules of neural networks. However, monolithic neural networks become too complex to train and evolve for large and
complex problems. It is often better to design a collection of simpler neural networks that work collectively and
cooperatively to solve a large and complex problem. The key issue here is how to design such a collection automatically so
that it has the best generalisation. This tutorial next describes the motivation of evolving neural network ensembles and
explains the potential links between evolving a diverse population of neural networks and designing a neural network
ensemble. Negative correlation learning is introduced as an example to illustrate such a link. Inspired by negative
correlation learning and evolving ensembles, several improved ensemble learning algorithms, including multi-objective
ensemble learning, are also introduced. Some application examples are given. Finally, the tutorial ends with some recent
ensemble approaches to online learning, class imbalance learning and semi-supervised learning.
Tutorial 2: How to develop a killer EC-based application? (by Professor Zbigniew Michalewicz, University of Adelaide).
The talk is based on 14 years of industry experience. In particular, we will talk about some EC-based applications
developed at SolveIT Software that allowed the business to grow from zero to almost 180 employees and $20 million in
revenue before it was sold to Schneider Electric. Because of these applications, SolveIT Software became the 3rd
fastest-growing company in Australia in 2012, as ranked by Deloitte; the company won numerous awards, and counted
among its customers some of the largest corporations in the world, including Rio Tinto, BHP Billiton, and Xstrata.
In this tutorial we will focus on a few features of decision-support software that make the applications "irresistible". We
will discuss concepts of adaptive business intelligence, dynamic environments, what-if scenarios, trade-off analysis,
strategic optimisation, interfaces, and global optimisation in the context of multi-silo problems. The talk will be illustrated
by a few pieces of software.
Tutorial 3: Parameterized Complexity Analysis of Bio-Inspired Computing (by Associate Professor Frank Neumann,
University of Adelaide).
In real applications, problem inputs are typically structured or restricted in some way. Evolutionary algorithms and other
bio-inspired algorithms can sometimes exploit such extra structure, while in some cases it can be problematic. In any
case, from a theoretical perspective, little is understood about how different structural parameters affect the running time
of such algorithms.
In this tutorial we present techniques from the new and thriving field of parameterized complexity theory. These
techniques allow for a rigorous understanding of the influence of problem structure on the running time of evolutionary
algorithms on NP-hard combinatorial optimization problems. We show how these techniques allow one to decompose
algorithmic running time as a function of both problem size and additional parameters. In this way, one can attain a more
detailed description of what structural aspects contribute to the exponential running time of EAs applied to solving hard
problems.
After a general introduction to the computational complexity analysis of bio-inspired computation, we will present
detailed and thorough parameterised results for bio-inspired computing on problems such as the traveling salesperson
problem and makespan scheduling. We will also outline directions for future research and discuss some open questions.
Tutorial 4: Advances in Evolutionary Many-objective Optimization (by Associate Professor Hernan Aguirre, Shinshu
University, Japan).
Multi-objective evolutionary algorithms (MOEAs) are widely used in practice for solving multi-objective design and
optimization problems. Historically, most applications of MOEAs have dealt with two and three objective problems, leading
to the development of several evolutionary approaches that work successfully in these low dimensional objective spaces.
Recently, there has been growing interest in industry in solving many-objective optimization problems, where the number of
objective functions to optimize simultaneously is more than three. However, conventional MOEAs were not designed to
cope with the challenges imposed by many-objective optimization and scale up poorly with the number of objectives of
the problem. The development of robust, scalable, many-objective optimizers is an ongoing effort and a promising line of
research. Critical to the development of such algorithms is an understanding of fundamental features of many-objective
landscapes and the interaction between selection, variation, and population size to appropriately support the evolutionary
search in high-dimensional spaces.
This tutorial aims at giving an introduction to evolutionary many-objective optimization, discussing important
characteristics of many-objective landscapes and relating them to working principles, performance and behavior of the
optimizers. Some of the recent research results will be presented in some detail, emphasizing real-world applications of
many-objective algorithms. More specifically, the tutorial will
(i) introduce the basic principles of multi-objective evolutionary algorithms, (ii) show scalability issues of conventional
multi-objective optimizers when applied to many-objective problems, (iii) introduce important features of many-objective
landscapes and show the effectiveness of selection and variation operators when the characteristics of the many-objective
landscapes are taken into account, (iv) show the effects of population size, (v) present a general overview of the
approaches to many-objective optimization, together with their state-of-the-art algorithms and techniques, (vi) discuss
real-world applications of evolutionary many-objective algorithms, and (vii) present open questions throughout the tutorial
that can serve for all participants as a starting point for future research and/or discussions during the conference.
Tutorial 5: Estimation of Distribution Algorithms and Probabilistic Modelling in Evolutionary Computation (by A/Prof
Marcus Gallagher, University of Queensland, Australia).
Estimation of Distribution Algorithms (EDAs) are a class of evolutionary algorithms that utilize probabilistic modelling and
learning techniques to drive the stochastic search process for solving optimization problems. In recent years, EDAs have
emerged as a significant class of algorithms, with numerous algorithms proposed for both discrete and continuous spaces.
This tutorial provides an introduction to the fundamental concepts and principles of EDAs, reviews existing and
state-of-the-art algorithms, and discusses current research directions.
The main topics to be covered are:
** Part I:
* Introduction and Origins of EDAs
* Background: probability density estimation and learning, optimization
* Simple continuous and discrete EDAs
** Part II:
* Dependency modelling and advanced EDA models
* Performance Results and Relationship to other algorithms
* Current topics: Natural Gradient, other explicit modelling techniques
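The core EDA loop (sample from a probabilistic model, select the fittest, re-estimate the model) can be sketched with a simple univariate model in the style of UMDA. The sketch below is illustrative and not taken from the tutorial; the OneMax test problem, clamping bounds, and parameter values are all assumptions for the example.

```python
# A minimal sketch of a univariate EDA (UMDA-style) on the OneMax problem.
# All parameter values are illustrative, not from the tutorial.
import random

random.seed(1)

def umda(n_bits=20, pop_size=50, n_select=20, generations=30):
    # Start from a uniform marginal model: p[i] = P(bit i == 1).
    p = [0.5] * n_bits
    best = None
    for _ in range(generations):
        # Sample a population from the product-of-Bernoullis model.
        pop = [[1 if random.random() < p[i] else 0 for i in range(n_bits)]
               for _ in range(pop_size)]
        # Select the fittest individuals (OneMax: maximise number of ones).
        pop.sort(key=sum, reverse=True)
        selected = pop[:n_select]
        if best is None or sum(pop[0]) > sum(best):
            best = pop[0]
        # Re-estimate each marginal from the selected set, clamping the
        # probabilities away from 0/1 (a common safeguard against
        # premature convergence).
        for i in range(n_bits):
            freq = sum(ind[i] for ind in selected) / n_select
            p[i] = min(max(freq, 0.05), 0.95)
    return best

best = umda()
```

Replacing the independent marginals with a dependency model (trees, Bayesian networks, Gaussians in continuous spaces) yields the more advanced EDAs covered in Part II.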
Tutorial 6: Monte Carlo Tree Search and Evolutionary Enhancements (by Prof Simon Lucas, University of Essex, UK).
Monte Carlo tree search (MCTS) is a powerful search method that combines the precision of tree search with the
generality of random sampling. It has received considerable interest due to its outstanding success in challenging
board games such as Go and Hex, but has also proved to be a leading method in many other games and some
applications beyond games.
In this tutorial I will cover the basics of the algorithm starting with flat Monte Carlo (no tree), then show the benefits of
building a tree, and the standard ways of balancing exploration versus exploitation using the Upper Confidence Bounds for
Trees (UCT) formula. Despite the theoretical appeal of UCT, many MCTS programs rely more heavily on heuristics, so
examples of these are also included. I'll then explore some cases, such as real-time games and control problems, where
standard MCTS can perform poorly, and show ways in which evolution can be used to tune the algorithm to achieve good
performance.
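The exploration-versus-exploitation balance mentioned above can be sketched with the standard UCT selection rule. The snippet below is a minimal illustration, not code from the tutorial; the child statistics and the choice of exploration constant are assumptions for the example.

```python
# A minimal sketch of UCT (Upper Confidence Bounds for Trees) child
# selection. The toy statistics are illustrative only.
import math

def uct_value(child_value_sum, child_visits, parent_visits, c=math.sqrt(2)):
    """UCB1-for-Trees score: mean reward plus an exploration bonus."""
    if child_visits == 0:
        return float("inf")  # always try unvisited children first
    exploit = child_value_sum / child_visits
    explore = c * math.sqrt(math.log(parent_visits) / child_visits)
    return exploit + explore

def select_child(children, parent_visits):
    """Pick the child maximising the UCT score."""
    return max(children,
               key=lambda ch: uct_value(ch["value"], ch["visits"],
                                        parent_visits))

# Two toy children: one well-explored and strong, one barely explored.
children = [{"name": "a", "value": 30.0, "visits": 50},
            {"name": "b", "value": 1.0, "visits": 2}]
picked = select_child(children, parent_visits=52)
```

Here the rarely visited child wins despite its lower mean reward, because its exploration bonus dominates; as its visit count grows, the bonus shrinks and exploitation takes over.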
The tutorial will include many demonstrations to help explain the key points, and snippets of code / pseudocode will be
explained to provide a practical understanding of the algorithm. A complete implementation in a high level language will
also be provided so that delegates can take away some working programs ready to apply to their own problems.
All inquiries about the conference should go to Dr Grant Dick or Prof Mengjie Zhang.