
Tutorials

List of Tutorials

Introductory Tutorials

A Practical Guide to Experimentation
  • Nikolaus Hansen INRIA Saclay, France
Evolution of Neural Networks
  • Risto Miikkulainen University of Texas at Austin
Evolutionary Computation and Games
  • Julian Togelius New York University
  • Sebastian Risi IT University of Copenhagen
  • Georgios N. Yannakakis IT University of Copenhagen, Denmark
Evolutionary Computation: A Unified Approach
  • Kenneth A. De Jong George Mason University, Fairfax, VA
Evolutionary Multiobjective Optimization
  • Dimo Brockhoff Inria Saclay - Ile-de-France and CMAP, Ecole Polytechnique, France
Hyper-heuristics
  • Daniel R. Tauritz Missouri University of Science and Technology
  • John R. Woodward University of Stirling, UK
Introducing Learning Classifier Systems: Rules that Capture Complexity
  • Ryan Urbanowicz University of Pennsylvania, USA
  • Danilo Vasconcellos Vargas Kyushu University
Introduction to Genetic Programming
  • Una-May O'Reilly Massachusetts Institute of Technology, US
NEW Introductory Mathematical Programming for EC
  • Ofer Shir Tel-Hai College
Introductory Statistics for EC: A Visual Approach
  • Mark Wineberg
Model-Based Evolutionary Algorithms
  • Dirk Thierens Utrecht University, The Netherlands
  • Peter A.N. Bosman Centre for Mathematics and Computer Science, The Netherlands
NEW Neuroevolution for Deep Reinforcement Learning Problems
  • David Ha Google Brain
Representations for Evolutionary Algorithms
  • Franz Rothlauf Johannes Gutenberg-Universität Mainz
Runtime Analysis of Evolutionary Algorithms: Basic Introduction
  • Per Kristian Lehre University of Birmingham, UK
  • Pietro S. Oliveto University of Sheffield, UK
NEW Search-based Test Optimization for Software Systems
  • Shaukat Ali Simula Research Laboratory, Norway
  • Tao Yue Simula Research Laboratory and University of Oslo, Norway
NEW Search-Maps: Visualising and Exploiting the Global Structure of Computational Search Spaces
  • Gabriela Ochoa University of Stirling, UK
  • Nadarajen Veerapen University of Stirling, UK
NEW Shift Your Research & Laboratory into Higher Gear with 3 Shift Skills & 4 Smooth Rules
  • David E. Goldberg ThreeJoy Associates, Big Beacon, and University of Illinois (Emeritus)
Theory for Non-Theoreticians
  • Benjamin Doerr Ecole Polytechnique

Advanced Tutorials

Adaptive Parameter Choices in Evolutionary Computation
  • Carola Doerr CNRS & Universite Pierre et Marie Curie (Paris 6), France
CMA-ES and Advanced Adaptation Mechanisms
  • Youhei Akimoto Shinshu University, Japan
  • Nikolaus Hansen INRIA Saclay, France
Constraint-Handling Techniques used with Evolutionary Algorithms
  • Carlos Artemio Coello CINVESTAV-IPN, Mexico
NEW Decomposition Multi-Objective Optimization: Current Developments and Future Opportunities
  • Ke Li University of Exeter, UK
  • Qingfu Zhang City University of Hong-Kong
NEW Evolutionary Computation for Digital Art
  • Frank Neumann University of Adelaide, Australia
  • Aneta Neumann University of Adelaide
NEW Evolutionary Reinforcement Learning: General Models and Adaptation
  • Danilo Vasconcellos Vargas Kyushu University
Expressive Genetic Programming: Concepts and applications
  • Lee Spector Hampshire College, USA
  • Nic McPhee University of Minnesota
Next Generation Genetic Algorithms
  • Darrell Whitley Colorado State University, USA
Particle Swarm Optimization: A Guide to Effective, Misconception Free, Real World Use
  • Andries Engelbrecht University Of Pretoria, South Africa
  • Christopher Cleghorn University of Pretoria, South Africa
NEW Promoting Diversity in Evolutionary Optimization: Why and How
  • Giovanni Squillero Politecnico di Torino
  • Alberto Tonda UMR 782 GMPA, INRA, Thiverval-Grignon, France
Sequential Experimentation by Evolutionary Algorithms
  • Ofer Shir Tel-Hai College
  • Thomas Bäck Leiden Institute of Advanced Computer Science
Simulation Optimization
  • Jürgen Branke Warwick Business School
Solving Complex Problems with Coevolutionary Algorithms
  • Krzysztof Krawiec Poznan University of Technology, Poland
  • Malcolm Heywood Dalhousie University
Visualization in Multiobjective Optimization
  • Bogdan Filipic Jozef Stefan Institute, Ljubljana, Slovenia
  • Tea Tušar Jožef Stefan Institute, Ljubljana, Slovenia

Specialized Tutorials

Automated Offline Design of Algorithms
  • Manuel López-Ibáñez University of Manchester, UK
  • Thomas Stützle IRIDIA laboratory, ULB, Belgium
NEW Bio-Inspired Approaches to Anomaly and Intrusion
  • Luis Martí Universidade Federal Fluminense
  • Marc Schoenauer INRIA Saclay Île-de-France
Evolutionary Algorithms in the cloud
  • JJ Merelo CITIC U. Granada
NEW Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition
  • Mengjie Zhang Victoria University of Wellington
  • Stefano Cagnoni Universita' degli Studi di Parma, Italy
Evolutionary Computation for Feature Selection and Feature Construction
  • Bing Xue Victoria University of Wellington
  • Mengjie Zhang Victoria University of Wellington
Evolutionary Robotics
  • Nicolas Bredeche Université Pierre et Marie Curie
  • Stéphane Doncieux Sorbonne Université
  • Jean-Baptiste Mouret Inria
Medical Applications of Evolutionary Computation
  • Stephen L. Smith University of York, UK
NEW Theory of Estimation-of-Distribution Algorithms
  • Carsten Witt Technical University of Denmark

Introductory Tutorials

A Practical Guide to Experimentation

Many evolutionary algorithms, in particular those we use in practice, and many if not most relevant optimization problems are not amenable to comprehensive theoretical analysis. For this reason, experimentation plays a central role in evolutionary computation and comes in three main flavours: explorative experiments, systematic experimentation, and real-world problem solving. We use explorative experiments to investigate algorithms, to improve them, or to improve our understanding of them. We use systematic experimentation to assess the performance of algorithms and for benchmarking. In this tutorial, we cover the first two flavours, which are somewhat complementary in their objectives and, to some extent, in their approach, but we also touch upon the third. The tutorial emphasises ways to make experimentation as effective and valuable as possible and to prevent common pitfalls. Attendees will learn how to make quick yet meaningful experiments, how to present, analyse and interpret experimental results, and how to measure performance. We introduce run-length distributions (data profiles), performance profiles, and scale-up studies, and connect success probabilities with restarts. Furthermore, the number of trials, the role of invariance, statistical significance, and the most common mistakes when conducting experimental studies are touched upon.
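The connection between success probabilities and restarts can be illustrated with a short simulation (illustrative Python, not tutorial material): with per-run success probability p, the number of runs until success is geometric, so the expected cost of a restart strategy is roughly the per-run cost divided by p.

```python
import random

def run_once(p_success, cost):
    """One independent run of a randomized algorithm: succeeds with
    probability p_success and always spends `cost` evaluations."""
    return random.random() < p_success, cost

def run_with_restarts(p_success, cost, max_restarts):
    """Restart until success; return (solved, total evaluations spent)."""
    total = 0
    for _ in range(max_restarts):
        ok, spent = run_once(p_success, cost)
        total += spent
        if ok:
            return True, total
    return False, total

random.seed(1)
p, cost = 0.2, 1000
trials = [run_with_restarts(p, cost, 200) for _ in range(10_000)]
mean_cost = sum(t for ok, t in trials if ok) / sum(ok for ok, _ in trials)
print(round(mean_cost))  # close to cost / p = 5000
```

This is one reason the tutorial connects the two quantities: a low per-run success probability is not a problem per se, as long as runs are cheap enough to restart.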

Nikolaus Hansen

Nikolaus Hansen is a research scientist at INRIA, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before joining INRIA, he worked on evolutionary computation, genomics and computational science at the Technical University Berlin, the InGene Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

Evolution of Neural Networks

Evolution of artificial neural networks has recently emerged as a powerful technique in two areas. First, while the standard value-function based reinforcement learning works well when the environment is fully observable, neuroevolution makes it possible to disambiguate hidden state through memory. Such memory makes new applications possible in areas such as robotic control, game playing, and artificial life. Second, deep learning performance depends crucially on the network architecture and hyperparameters. While many such architectures are too complex to be optimized by hand, neuroevolution can be used to do so automatically. As a result, deep learning can be scaled to utilize more of the available computing power, and extended to domains combining multiple modalities. In this tutorial, I will review (1) neuroevolution methods that evolve fixed-topology networks, network topologies, and network construction processes, (2) ways of combining gradient-based training with evolutionary methods, and (3) applications of these techniques in control, robotics, artificial life, games, image processing, and language.
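The first family of methods, evolving the weights of a fixed-topology network, can be sketched in a few lines. The 2-2-1 tanh network, XOR task, and elitist (1+lambda) scheme below are illustrative assumptions for this sketch, not code from the tutorial.

```python
import math
import random

# XOR: the classic task that a network with hidden units can solve.
XOR = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    # w[0:6]: input->hidden weights and biases; w[6:9]: hidden->output.
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def error(w):
    return sum((forward(w, x) - y) ** 2 for x, y in XOR)

random.seed(0)
parent = [random.gauss(0, 1) for _ in range(9)]
for _ in range(2000):
    # Gaussian mutation of all nine weights; keep the parent unless a child improves.
    children = [[g + random.gauss(0, 0.3) for g in parent] for _ in range(8)]
    best = min(children, key=error)
    if error(best) < error(parent):
        parent = best

print(round(error(parent), 3))  # squared error after evolution
```

Topology-evolving methods such as NEAT extend this idea by also mutating the network structure, which is where much of the material in the tutorial goes beyond this sketch.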

Risto Miikkulainen

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and a Fellow at Sentient Technologies, Inc. He received an M.S. in Engineering from the Helsinki University of Technology, Finland, in 1986, and a Ph.D. in Computer Science from UCLA in 1990. His recent research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing, and self-organization of the visual cortex; he is an author of over 370 articles in these research areas. He is an IEEE Fellow, member of the Board of Governors of the Neural Network Society, and an action editor of Cognitive Systems Research and IEEE Transactions on Computational Intelligence and AI in Games.

Evolutionary Computation and Games

In recent years, research in AI for Games and Games for AI has enjoyed rapid progress and a sharp rise in popularity. In this field, various kinds of AI algorithms are tested on benchmarks based on e.g. board games and video games, and new AI-based solutions are developed for problems in game development and game design. This tutorial will give an overview of key research challenges and methods of choice in evolutionary computation applied to games. The tutorial is divided in two parts, where the second part builds on methods and results introduced in the first part. The first part will focus on evolutionary computation methods for playing games, including neuroevolution and evolutionary planning. The second part will focus on the use of evolutionary computation for game testing and procedural content generation.

Julian Togelius

Julian Togelius is an Associate Professor in the Department of Computer Science and Engineering at New York University, and co-director of the NYU Game Innovation Lab. He works on all aspects of computational intelligence and games and on selected topics in evolutionary computation and evolutionary reinforcement learning. His current main research directions involve search-based procedural content generation in games, game adaptation through player modelling, automatic game design, and fair and relevant benchmarking of game AI through competitions. He is an author of a recent textbook on Procedural Content Generation in Games and an upcoming textbook on Artificial Intelligence and Games. Togelius holds a BA from Lund University, an MSc from the University of Sussex, and a PhD from the University of Essex.

Sebastian Risi

Sebastian Risi is an Associate Professor at the IT University of Copenhagen where he is part of the Center for Computer Games Research and the Robotics, Evolution and Art Lab (REAL). His interests include computational intelligence in games, neuroevolution, evolutionary robotics and human computation. Risi completed his PhD in computer science from the University of Central Florida. He has won several best paper awards at GECCO, EvoMusArt, IJCNN, and the Continual Learning Workshop at NIPS for his work on adaptive systems, the HyperNEAT algorithm for evolving complex artificial neural networks, and music generation.

Georgios N. Yannakakis

Evolutionary Computation: A Unified Approach

The field of Evolutionary Computation has experienced tremendous growth over the past 20 years, resulting in a wide variety of evolutionary algorithms and applications. The result poses an interesting dilemma for many practitioners in the sense that, with such a wide variety of algorithms and approaches, it is often hard to see the relationships between them, assess strengths and weaknesses, and make good choices for new application areas. This tutorial is intended to give an overview of a general EC framework that helps compare and contrast approaches, encourages crossbreeding, and facilitates intelligent design choices. The use of this framework is then illustrated by showing how traditional EAs can be compared and contrasted with it, and how new EAs can be effectively designed using it. Finally, the framework is used to identify some important open issues that need further research.

Kenneth A. De Jong

Kenneth A. De Jong is a senior and well-known researcher in the EC community with a rich and diverse research profile. De Jong's research interests include genetic algorithms, evolutionary computation, machine learning, and adaptive systems. He is currently involved in research projects on the development of new evolutionary algorithm (EA) theory, the use of EAs as heuristics for NP-hard problems, and the application of EAs to the problem of learning task programs in domains such as robotics, diagnostics, navigation and game playing. Support for these projects is provided by NSF, DARPA, ONR, and NRL. He is an active member of the Evolutionary Computation research community and has been involved in organizing many of the workshops and conferences in this area. He is the founding editor-in-chief of the journal Evolutionary Computation (MIT Press), and a member of the board of ACM SIGEVO.

Evolutionary Multiobjective Optimization

Many optimization problems are multiobjective in the sense that multiple, conflicting criteria need to be considered simultaneously. Due to the conflict between these objectives, usually no single optimum solution exists. Instead, the result is a set of so-called Pareto-optimal solutions: solutions for which no other solution has better function values in all objectives. In practice, Evolutionary Multiobjective Optimization (EMO) algorithms are widely used for solving these multiobjective optimization problems. Being stochastic blackbox optimizers, EMO approaches allow problems with nonlinear, non-differentiable, or noisy objective functions to be tackled. By inherently working on sets of solutions, they allow the Pareto-optimal set to be computed or approximated in a single algorithm run, as opposed to classical techniques from the multi-criteria decision making (MCDM) field, which aim for single solutions. Defining problems in a multiobjective way has two further advantages: (1) the set of (Pareto-optimal) solutions may reveal design principles shared by different solutions (innovization), and (2) some single-objective problems become easier to solve with randomized search heuristics if additional objectives are added (multiobjectivization). Within this tutorial, a comprehensive introduction to the EMO field will be given and selected research results will be presented in more detail. More specifically, we are going to (i) introduce the basic principles of EMO algorithms in comparison to classical solution-based approaches, (ii) show a few practical examples which motivate the use of EMO in terms of the mentioned innovization and multiobjectivization principles, and (iii) present a general overview of state-of-the-art algorithms and techniques. Moreover, we will present important recent research results in areas such as preference articulation, performance assessment, and benchmarking.
Though classified as introductory, this tutorial is intended for both novices and people familiar with EMO. Those without experience will learn about the foundations of multiobjective optimization and the basic working principles of state-of-the-art EMO algorithms. The recent results and open questions, highlighted after each chapter of the tutorial, can stimulate future research and/or discussions during the conference.
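The notion of Pareto dominance that underlies all EMO algorithms can be made concrete in a few lines (an illustrative sketch, assuming minimization of both objectives):

```python
def dominates(a, b):
    """a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Two conflicting objectives to minimize: no single best point exists.
pts = [(1, 5), (2, 3), (3, 4), (4, 1), (5, 5)]
print(pareto_front(pts))  # [(1, 5), (2, 3), (4, 1)]
```

Everything an EMO algorithm does (fitness assignment, archiving, indicator computation) is ultimately built on this relation between objective vectors.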

Dimo Brockhoff

Dimo Brockhoff received his diploma in computer science from University of Dortmund, Germany in 2005 and his PhD (Dr. sc. ETH) from ETH Zurich, Switzerland in 2009. Afterwards, he held two postdoctoral research positions in France at Inria Saclay Ile-de-France (2009-2010) and at Ecole Polytechnique (2010-2011) before joining Inria in November 2011 as a permanent researcher (first in its Lille - Nord Europe research center and since October 2016 in the Saclay - Ile-de-France center). His research interests are focused on evolutionary multiobjective optimization (EMO), in particular on theoretical aspects of indicator-based search and on the benchmarking of blackbox algorithms in general.

Hyper-heuristics

The automatic design of algorithms has been an early aim of both machine learning and AI, but has proved elusive. The aim of this tutorial is to introduce hyper-heuristics as a principled approach to the automatic design of algorithms. Hyper-heuristics are metaheuristics applied to a space of algorithms; i.e., any general heuristic method of sampling a set of candidate algorithms. In particular, this tutorial will demonstrate how to mine existing algorithms to obtain algorithmic primitives for the hyper-heuristic to compose new algorithmic solutions from, and to employ various types of genetic programming to execute the composition process; i.e., the search of program space. This tutorial will place hyper-heuristics in the context of genetic programming - which differs in that it constructs solutions from scratch using atomic primitives - as well as genetic improvement - which takes a program as starting point and improves on it (a recent direction introduced by William Langdon). The approach proceeds from the observation that it is possible to define an invariant framework for the core of any class of algorithms (often by examining existing human-written algorithms for inspiration). The variant components of the algorithm can then be generated by genetic programming. Each instance of the framework therefore defines a family of algorithms. While this allows searches in constrained search spaces based on problem knowledge, it does not in any way limit the generality of this approach, as the template can be chosen to be any executable program and the primitive set can be selected to be Turing-complete. Typically, however, the initial algorithmic primitive set is composed of primitive components of existing high-performing algorithms for the problems being targeted; this more targeted approach very significantly reduces the initial search space, resulting in a practical approach rather than a mere theoretical curiosity. 
Iterative refining of the primitives allows for gradual and directed enlarging of the search space until convergence. This leads to a technique for mass-producing algorithms that can be customised to the context of end-use. This is perhaps best illustrated as follows: typically a researcher might create a travelling salesperson problem (TSP) algorithm by hand. When executed, this algorithm returns a solution to a specific instance of the TSP. We will describe a method that generates TSP algorithms that are tuned to representative instances of interest to the end-user. This method has been applied to a growing number of domains including: data mining/machine learning, combinatorial problems including bin packing (on- and off-line), Boolean satisfiability, job shop scheduling, exam timetabling, image recognition, black-box function optimization, wind-farm layout, and the automated design of meta-heuristics themselves (from selection and mutation operators to the overall meta-heuristic architecture). This tutorial will provide a step-by-step guide which takes the novice through the distinct stages of automatic design. Examples will illustrate and reinforce the issues of practical application. This technique has repeatedly produced results which outperform their manually designed counterparts, and a theoretical underpinning will be given to demonstrate why this is the case. Automatic design will become an increasingly attractive proposition, as the cost of human design will only increase in line with inflation while the speed of processors increases in line with Moore's law, making it especially appealing for industrial application. Basic knowledge of genetic programming will be assumed.
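A selection hyper-heuristic in its simplest form can be sketched as follows. The toy OneMax task, the three low-level heuristics, and the success-rate credit scheme are made up for illustration; they are not code from the tutorial.

```python
import random

# Three low-level heuristics operating on a bitstring (list of 0/1).
def flip_one(s):
    i = random.randrange(len(s))
    return s[:i] + [1 - s[i]] + s[i + 1:]

def flip_two(s):
    return flip_one(flip_one(s))

def set_one(s):
    i = random.randrange(len(s))
    return s[:i] + [1] + s[i + 1:]

heuristics = [flip_one, flip_two, set_one]
score = [1.0] * len(heuristics)          # optimistic initial credit

random.seed(3)
s = [0] * 30                             # OneMax: maximize the number of ones
fitness = sum(s)
for _ in range(2000):
    # The hyper-heuristic searches the space of heuristics, not solutions:
    # it picks which low-level heuristic to apply, weighted by past success.
    h = random.choices(range(len(heuristics)), weights=score)[0]
    cand = heuristics[h](s)
    if sum(cand) >= fitness:             # accept non-worsening moves
        if sum(cand) > fitness:
            score[h] += 1.0              # credit the successful heuristic
        s, fitness = cand, sum(cand)

print(fitness)
```

The generative hyper-heuristics emphasised in the tutorial go a step further: instead of choosing among fixed heuristics, genetic programming composes new ones from algorithmic primitives.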

Daniel R. Tauritz

Daniel R. Tauritz is an Associate Professor in the Department of Computer Science at the Missouri University of Science and Technology (S&T), a contract scientist for Sandia National Laboratories, a former Guest Scientist at Los Alamos National Laboratory (LANL), the founding director of S&T's Natural Computation Laboratory, and founding academic director of the LANL/S&T Cyber Security Sciences Institute. He received his Ph.D. in 2002 from Leiden University for Adaptive Information Filtering employing a novel type of evolutionary algorithm. He served previously as GECCO 2010 Late Breaking Papers Chair, GECCO 2012 & 2013 GA Track Co-Chair, GECCO 2015 ECADA Workshop Co-Chair, GECCO 2015 MetaDeeP Workshop Co-Chair, GECCO 2015 Hyper-heuristics Tutorial co-instructor, and GECCO 2015 CBBOC Competition co-organizer. For several years he has served on the GECCO GA track program committee, the Congress on Evolutionary Computation program committee, and a variety of other international conference program committees. His research interests include the design of hyper-heuristics and self-configuring evolutionary algorithms and the application of computational intelligence techniques in cyber security, critical infrastructure protection, and program understanding. He was granted a US patent for an artificially intelligent rule-based system to assist teams in becoming more effective by improving the communication process between team members.

John R. Woodward

John R. Woodward is a lecturer at the University of Stirling, within the CHORDS group (http://chords.cs.stir.ac.uk/) and is employed on the DAASE project (http://daase.cs.ucl.ac.uk/), and for the previous four years was a lecturer with the University of Nottingham. He holds a BSc in Theoretical Physics, an MSc in Cognitive Science and a PhD in Computer Science, all from the University of Birmingham. His research interests include Automated Software Engineering, particularly Search Based Software Engineering, Artificial Intelligence/Machine Learning and in particular Genetic Programming. He has over 50 publications in Computer Science, Operations Research and Engineering which include both theoretical and empirical contributions, and has given over 100 talks at international conferences and as an invited speaker at universities. He has worked in industrial, military, educational and academic settings, and has been employed by EDS, CERN, the RAF and three UK universities.

Introducing Learning Classifier Systems: Rules that Capture Complexity

In the context of evolutionary machine learning, rule-based machine learning (RBML) algorithms are an often overlooked class of algorithms with flexible features employing an alternative paradigm of piece-wise modeling that sets them apart from other strategies, particularly with respect to modeling complexity. This tutorial will introduce the RBML paradigm and three of the most successful methodological RBML families, i.e. Learning Classifier Systems (LCS), Association Rule Learning (ARL) and Artificial Immune Systems (AIS). Particular focus will be placed on learning classifier systems (LCS), or more specifically Michigan-style LCSs, being the most prevalent and foundational of the RBML algorithms. Michigan-style LCSs combine the global search of evolutionary algorithms with the local optimization of reinforcement or supervised learning. Michigan-style LCS algorithms uniquely distribute learned patterns over a collaborative population of individually interpretable (IF: THEN) rules. This allows the algorithm to flexibly and effectively describe complex and diverse problem spaces found in behavior modeling, function approximation, classification, prediction, and data mining. This tutorial will improve and expand upon the one Dr. Urbanowicz presented last year, adding content on complex representations and reinforcement learning systems (a specialty of Dr. Vargas, who is co-presenting this tutorial), and will draw from an animated LCS video tutorial Dr. Urbanowicz recently produced and made available on YouTube (https://www.youtube.com/watch?v=CRge_cZ2cJc) as well as from a recent introductory textbook authored by Dr. Urbanowicz and Dr. Browne.
We will focus on presenting an accessible, gentle introduction to how learning classifier systems and related rule-based machine learning algorithms work, and what types of problems they have been or could be applied to, discussing their advantages and drawbacks with respect to other machine learning approaches. As time allows, we will end the tutorial with a practical application demo of working LCS software adapted to interpretably model and make accurate predictions on noisy, real-world bioinformatics data. The overall goal of this tutorial is to increase awareness of RBML algorithms as not only a viable, but often advantageous alternative to other machine learning approaches, as well as to promote a fundamental understanding of how they work, and where they can be applied. While ExSTraCS will be the focus of the demonstration, this tutorial will review and contrast many of the other major LCS algorithms available in the literature, as well as discuss statistical analysis, interpretation, and data visualization strategies for knowledge discovery.
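The core mechanics of a Michigan-style rule population can be illustrated in a few lines: conditions use a ternary alphabet where '#' means "don't care", matching rules form the match set [M], and the matched rules vote on an action. All names here are illustrative, not ExSTraCS's API.

```python
def matches(condition, state):
    """A rule condition matches a binary state string if every non-wildcard
    position agrees ('#' matches anything)."""
    return all(c == '#' or c == s for c, s in zip(condition, state))

# A tiny population of (IF condition, THEN action) rules.
population = [
    ('1#0', 1), ('10#', 0), ('###', 1), ('0#1', 0),
]

state = '100'
match_set = [(c, a) for c, a in population if matches(c, state)]

# Simple majority vote over the match set (real LCSs weight this vote by
# rule fitness and prediction statistics).
votes = {}
for _, action in match_set:
    votes[action] = votes.get(action, 0) + 1
print(match_set, max(votes, key=votes.get))
```

The evolutionary component then discovers new rules (e.g. by generalizing conditions with wildcards), which is how the population as a whole comes to cover a complex problem space piece-wise.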

Ryan Urbanowicz

Dr. Urbanowicz’s research is focused on bioinformatics, machine learning, epidemiology, data mining, and the development of a new learning classifier system that is maximally functional, accessible, and easier to use and interpret. He has written one of the most cited and well-regarded reviews of the Learning Classifier System research field as well as 12 additional peer-reviewed LCS research papers, has co-chaired the International Workshop on Learning Classifier Systems for the past 4 years, and has recently published a new open-source learning classifier system algorithm implemented in Python, called ExSTraCS. He has also given several invited introductory lectures on LCS algorithms in addition to co-presenting this tutorial in 2013. Dr. Urbanowicz received a Bachelor's and Master's in Biological Engineering from Cornell University, as well as a PhD in Genetics from Dartmouth College. Currently he is a post-doctoral researcher in the Geisel School of Medicine, about to transition to a new research position at the University of Pennsylvania, USA.

Danilo Vasconcellos Vargas

Danilo Vasconcellos Vargas is an Assistant Professor at the Faculty of Information Science and Electrical Engineering, Kyushu University. He received the M. Eng. and Ph.D degrees from Kyushu University. His thesis was about employing a new concept of fitness to both machine learning and optimization. His current research interests focus on general learning systems which include research in evolutionary algorithms, neural networks, learning classifier systems and their applications.
In his most recent work, he developed a unified neural model integrating most neural network features from the literature into one representation. With this powerful representation it was possible to evolve the topology and weights of the network to learn a wide variety of problem classes.

Introduction to Genetic Programming

Genetic programming emerged in the early 1990s as one of the most exciting new evolutionary algorithm paradigms. It has rapidly grown into a thriving area of research and application. While sharing the evolutionary inspired algorithm principles of a genetic algorithm, it differs by exploiting an executable genome. Genetic programming evolves a 'program' to solve a problem rather than a single solution. This tutorial introduces the basic genetic programming framework. It explains how the powerful capability of genetic programming is derived from modular algorithmic components: executable representations such as an abstract syntax tree, variation operators that preserve syntax and explore a variable-length, hierarchical solution space, appropriately chosen programming functions and fitness function specification.
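The executable-genome idea can be made concrete with a minimal sketch (illustrative code, not the tutorial's): a program is a nested tuple forming an abstract syntax tree, evaluated recursively, with fitness measured against a target function.

```python
import random

def evaluate(tree, x):
    """Recursively evaluate a program tree: a terminal is 'x' or a constant,
    an internal node is (operator, left subtree, right subtree)."""
    if tree == 'x':
        return x
    if isinstance(tree, (int, float)):
        return tree
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return {'+': a + b, '-': a - b, '*': a * b}[op]

def random_tree(depth):
    """Grow a random program, shrinking toward terminals with depth."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(['x', 1, 2])
    return (random.choice('+-*'), random_tree(depth - 1), random_tree(depth - 1))

def fitness(tree):
    """Squared error against the target x*x + 1 on a few sample points."""
    return sum((evaluate(tree, x) - (x * x + 1)) ** 2 for x in range(-3, 4))

random.seed(7)
target = ('+', ('*', 'x', 'x'), 1)   # the program x*x + 1
print(evaluate(target, 3), fitness(target))  # 10 0
```

Syntax-preserving variation operators then act directly on these trees, e.g. subtree crossover swaps a randomly chosen subtree between two parents, which is what lets GP explore a variable-length, hierarchical solution space.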

Una-May O'Reilly

Una-May O'Reilly is leader of the AnyScale Learning For All (ALFA) group at CSAIL. ALFA focuses on scalable machine learning, evolutionary algorithms, and frameworks for large scale knowledge mining, prediction and analytics. The group has projects in clinical medicine knowledge discovery: arterial blood pressure forecasting and pattern recognition, diuretics in the ICU; wind energy: turbine layout optimization, resource prediction, cable layout; and MOOC Technology: MoocDB, student persistence and resource usage analysis.

Her research is in the design of scalable Artificial Intelligence systems that execute on a range of hardware systems: GPUs, workstations, grids, clusters, clouds and volunteer compute networks. These systems include machine learning components such as evolutionary algorithms (e.g. genetic programming, genetic algorithms and learning classifiers), classification, non-linear regression, and forecasting algorithms. They span the interpretation and analysis of raw data, through inference on conditioned exemplar data, to the deployment and evaluation of learned “algorithmic machines” in the original application context.

Una-May received the EvoStar Award for Outstanding Achievements in Evolutionary Computation in Europe in 2013. She is a Junior Fellow (elected before age 40) of the International Society of Genetic and Evolutionary Computation, now ACM Sig-EVO. She now serves as Vice-Chair of ACM SigEVO. She served as chair of the largest international Evolutionary Computation Conference, GECCO, in 2005. She has served on the GECCO business committee, co-led the 2006 and 2009 Genetic Programming: Theory to Practice Workshops and co-chaired EuroGP, the largest conference devoted to Genetic Programming. In 2013, with Anna Esparcia, Anniko Ekart and Gabriela Ochoa she inaugurated the Women@GECCO meeting and chairs the group. She is the area editor for Data Analytics and Knowledge Discovery for Genetic Programming and Evolvable Machines (Kluwer), and editor for Evolutionary Computation (MIT Press), and action editor for the Journal of Machine Learning Research.

Una-May has a patent for an original genetic algorithm technique applicable to internet-based name suggestions. She holds a B.Sc. from the University of Calgary, and a M.C.S. and Ph.D. (1995) from Carleton University, Ottawa, Canada. She joined the Artificial Intelligence Laboratory, MIT as a Post-Doctoral Associate in 1996.

NEW Introductory Mathematical Programming for EC

Global optimization of complex models has been for several decades approached by means of formal algorithms as well as Mathematical Programming (MP) (often branded as Operations Research, yet strongly rooted in Theoretical CS), and simultaneously has been treated by a wide range of dedicated heuristics (frequently under the label of Soft Computing) – where EC resides. The former is applicable when explicit modeling is available, whereas the latter is typically utilized for simulation- or experimentation-based optimization (but also applicable for explicit models). These two branches complement each other, yet are practically studied under two independent CS disciplines. It is widely recognized nowadays that EC scholars become stronger, better-equipped researchers when obtaining knowledge on this so-called "optimization complement". In other words, the claim that EC education can do without basic MP is untenable at present times, and this tutorial aims at bridging the gap for our community's scholars. The tutorial comprises three parts, aiming to introduce basic MP for an EC audience. The first part introduces the fundamentals of MP. It overviews mathematical optimization and outlines its taxonomy when classified by the underlying model: convex optimization (linear programs (pure-LP) and non-linear programs), versus combinatorial optimization (integer and mixed-integer linear programs ((M)ILP), integer quadratic programs (IQP)). It then discusses some of the theoretical aspects, such as polyhedra and the duality theorem. The second part focuses on MP in practice. The tutorial presents the principles of MP modeling, with emphasis on the roles of constraints and auxiliary/dummy decision variables in MP. It is compared to equivalent modeling for EC heuristics, which operate differently with respect to such components.
It then covers selected algorithms for the major classes of problems (Dantzig's Simplex for pure-LP, the Ellipsoid method for convex models, and Branch-and-Bound for ILP). The third part constitutes an interactive demo session of problem-solving. We plan to model, in a precept style, several ILP benchmark problems using ILOG Studio's OPL, and then to demonstrate how rapidly they are solved by a state-of-the-art solver, namely IBM's CPLEX.
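To give a flavour of ILP modeling, the sketch below encodes a toy 0/1 knapsack instance (all item data are invented for illustration): binary decision variables, a linear objective, and a linear capacity constraint. Where the tutorial's demo hands such a model to a solver like CPLEX, which would apply branch-and-bound, this sketch simply enumerates all assignments, which is only viable for very small instances.

```python
from itertools import product

# Toy 0/1 knapsack as an ILP: maximize sum(v_i * x_i)
# subject to sum(w_i * x_i) <= capacity, with x_i in {0, 1}.
values = [10, 13, 7, 8]      # hypothetical item values
weights = [5, 6, 4, 3]       # hypothetical item weights
capacity = 10

best_value, best_x = 0, None
for x in product([0, 1], repeat=len(values)):
    weight = sum(w * xi for w, xi in zip(weights, x))
    if weight <= capacity:                       # feasibility: the capacity constraint
        value = sum(v * xi for v, xi in zip(values, x))
        if value > best_value:
            best_value, best_x = value, x

print(best_value, best_x)    # optimal value and the 0/1 assignment achieving it
```

A real solver explores the same feasible set implicitly, pruning subtrees whose LP relaxation bound cannot beat the incumbent.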

Ofer Shir

Ofer Shir is a Senior Lecturer at the Computer Science Department in Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel, and both MSc and PhD in Computer Science from Leiden University, The Netherlands (PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz – where he specialized in computer science aspects of experimental quantum systems. He then joined IBM-Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current fields of interest encompass Statistical Learning in Theory and in Practice, Experimental Optimization, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Information.

Introductory Statistics for EC: A Visual Approach

In the past, it was common to see papers published in Evolutionary Computation conferences that presented experimental results without any statistical analysis. Fortunately, it is now becoming widely accepted that experimental results lacking proper statistical analysis must be considered anecdotal at best, or even wholly inaccurate. In many areas within EC, and especially at GECCO, if your paper does not include a proper statistical analysis it will be rejected. Yet many researchers in the GEC field still exhibit an unfortunate lack of statistical rigor when performing and analyzing experiments. This tutorial, a staple at past GECCOs, aims at providing the basic statistical framework that all experimenters in the EC field should follow. It covers introductory topics in statistics such as the t-test, confidence intervals, and non-parametric statistics, and, time permitting, will touch on multivariate and multifactorial analysis such as the Analysis of Variance (ANOVA) and regression analysis (both linear and polynomial). The tutorial is presented at an introductory level and takes a very visual approach to statistics, relying on graphics and animation to provide an intuitive understanding of the subject instead of the traditional equations, which cater only to the initiated. In other words, it is meant for any EC researcher, independent of their mathematical training, who wants to compare their newly designed EC system against established EC systems to see if there is an improvement, or who wants to determine which EC system performs better on their chosen problem; i.e. nearly everyone. It is vital, if our community is to be taken seriously, for us to continue to educate ourselves (especially new graduate students) in the statistical techniques that are considered de rigueur for experiments performed in any field, such as ours, that is stochastic in nature.
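As a small taste of the material, the sketch below computes Welch's t-statistic (the unequal-variance two-sample t-test) from scratch; the two samples of "final fitness values" and the variant comparison they stand for are invented for illustration.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t-statistic and degrees of freedom for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)   # sample variances (n-1 denominator)
    se2 = va / na + vb / nb             # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

# Hypothetical best-fitness values from 5 independent runs of two EA variants
runs_a = [0.91, 0.88, 0.92, 0.90, 0.89]
runs_b = [0.85, 0.83, 0.87, 0.84, 0.86]
t, df = welch_t(runs_a, runs_b)
```

With these invented samples the statistic comes out to t = 5.0 with 8 degrees of freedom; one would then compare t against the t-distribution with df degrees of freedom to obtain a p-value.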

Mark Wineberg

Prof. Wineberg has been actively researching the field of Evolutionary Computation (EC) and Genetic Algorithms (GA) since 1993 while he was still a graduate student at Carleton University, Ottawa, Canada. Over the years he has published on various topics including: the intersection of Genetic Algorithms and Genetic Programming, enhancing the GA for improved behavior in dynamic environments through specialized multiple populations, and exploring the concept of distances and diversity in GA populations. He is currently continuing his research in dynamic environments and studying the properties that allow one population to evolve more readily than another. Prof. Wineberg also teaches an undergraduate course at Guelph on computer simulation and modeling of discrete stochastic systems, which emphasizes proper statistical analysis, as well as a graduate course on experimental design and analysis for computer science, which is an outgrowth of the statistical analysis tutorial given at GECCO.

Model-Based Evolutionary Algorithms

In model-building evolutionary algorithms the variation operators are guided by the use of a model that conveys as much problem-specific information as possible so as to best combine the currently available solutions and thereby construct new, improved, solutions. Such models can be constructed beforehand for a specific problem, or they can be learnt during the optimization process. Well-known types of algorithms of the latter type are Estimation-of-Distribution Algorithms (EDAs) where probabilistic models of promising solutions are built and subsequently sampled to generate new solutions. Although EDAs are the best known example, other approaches also exist, such as the Optimal Mixing EAs (e.g. the Linkage Tree Genetic Algorithm). In general, replacing traditional crossover and mutation operators by building and using models enables the use of machine learning techniques for automatic discovery of problem regularities and exploitation of these regularities for effective exploration of the search space. Using machine learning in optimization enables the design of optimization techniques that can automatically adapt to the given problem. This is an especially useful feature when considering optimization in a black-box setting. Successful applications include Ising spin glasses in 2D and 3D, graph partitioning, MAXSAT, feature subset selection, forest management, groundwater remediation design, telecommunication network design, antenna design, and scheduling. This tutorial will provide an introduction and an overview of major research directions in this area.
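A minimal sketch of the EDA idea described above: a univariate EDA (UMDA-style, with illustrative parameter choices) on the OneMax toy problem. Each generation it samples a population from a product distribution over bits, selects the best solutions, and re-estimates the per-bit marginals from them, replacing crossover and mutation with model building and sampling.

```python
import random

def umda_onemax(n=20, pop=60, sel=20, gens=100, seed=1):
    """Univariate EDA on OneMax: fit independent per-bit marginals to the
    selected solutions, then sample new solutions from that model."""
    rng = random.Random(seed)
    p = [0.5] * n                       # marginal probability of a 1 at each position
    for _ in range(gens):
        population = [[1 if rng.random() < p[i] else 0 for i in range(n)]
                      for _ in range(pop)]
        population.sort(key=sum, reverse=True)      # OneMax fitness = number of 1s
        selected = population[:sel]
        # model building: re-estimate marginals from the selected solutions,
        # with margins keeping probabilities away from 0 and 1
        p = [min(0.98, max(0.02, sum(ind[i] for ind in selected) / sel))
             for i in range(n)]
        if any(sum(ind) == n for ind in selected):
            return True                              # optimum found
    return False

solved = umda_onemax()
```

OneMax has no variable dependencies, so a univariate model suffices; for problems with linkage, the more expressive models named above (e.g. the linkage trees of the LTGA) become necessary.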

Dirk Thierens

Dirk Thierens is affiliated with the Department of Information and Computing Sciences at Utrecht University, the Netherlands, where he is teaching courses on Evolutionary Computation and on Computational Intelligence. He has been involved in genetic algorithm research since 1990. His current research interests are mainly focused on the design and application of model learning techniques to improve evolutionary search. Dirk is (has been) a member of the Editorial Board of the journals Evolutionary Computation, Evolutionary Intelligence, IEEE Transactions on Evolutionary Computation, and a member of the program committee of the major international conferences on evolutionary computation. He was elected member of the SIGEVO ACM board and contributed to the organization of several GECCO conferences: workshop co-chair (2003, 2004), track (co-)chair (2004, 2006, 2014), and Editor-in-Chief (2007).

Peter A.N. Bosman

Peter A. N. Bosman is a senior researcher in the Life Sciences research group at the Centrum Wiskunde & Informatica (CWI) (Centre for Mathematics and Computer Science) located in Amsterdam, the Netherlands. Peter was formerly affiliated with the Department of Information and Computing Sciences at Utrecht University, where he also obtained both his MSc and PhD degrees in Computer Science, more specifically on the design and application of estimation-of-distribution algorithms (EDAs). He has (co-)authored over 90 refereed publications on both algorithmic design aspects and real-world applications of evolutionary algorithms. At the GECCO conference, Peter has previously been track (co-)chair (EDA track, 2006, 2009), late-breaking-papers chair (2007), (co-)workshop organizer (OBUPM workshop, 2006; EvoDOP workshop, 2007; GreenGEC workshop, 2012-2014), (co-)local chair (2013) and general chair (2017).

NEW Neuroevolution for Deep Reinforcement Learning Problems

In recent years, there has been a resurgence of interest in reinforcement learning (RL), particularly in the deep learning community. While much of the attention has been focused on value-function learning approaches (e.g. Q-learning) or estimated policy-gradient approaches to train neural-network policies, little attention has been paid to Neuroevolution (NE) for policy search, and the larger research community may have forgotten about previous successes of Neuroevolution. Some of the most challenging reinforcement learning problems are those where reward signals are sparse and noisy. For many of these problems, we only know the outcome at the end of the task, such as whether the agent wins or loses, whether the robot arm picks up the object or not, or whether the agent has survived. Since NE requires only the final cumulative reward that an agent receives at the end of its rollout in an environment, these are the types of problems where NE may have an advantage over traditional RL methods. In this tutorial, I show how Neuroevolution can be successfully applied to Deep RL problems to help find a suitable set of model parameters for a neural network agent. Using popular modern software frameworks for RL (TensorFlow, OpenAI Gym, pybullet, roboschool), I will apply NE to continuous control robotic tasks and show that we can obtain very good results controlling bipedal robot walkers, a Kuka robot arm for grasping tasks, the Minitaur robot, and various existing baseline locomotion tasks common in the Deep RL literature. I will show that NE can even obtain state-of-the-art results over Deep RL methods, and highlight ways to use NE that can lead to more stable and robust policies compared to traditional RL methods. I will also describe how to incorporate NE techniques into existing RL research pipelines, taking advantage of distributed processing on Cloud Compute.
I will also discuss how to combine techniques from deep learning, such as the use of deep generative models, with Neuroevolution to solve more challenging Deep Reinforcement Learning problems that rely on high-dimensional video inputs for continuous robotics control, or for video game simulation tasks. We will look at combining model-based reinforcement learning approaches with Neuroevolution to tackle these problems, using TensorFlow, OpenAI Gym, and pybullet environments.
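To make the NE idea concrete without a full RL stack, the sketch below evolves the nine weights of a tiny fixed-topology network (2-2-1, tanh units) with a simple elitist evolution strategy. Each candidate is judged only by a single scalar score, just as NE needs only an episode's cumulative reward; the task (XOR) and all hyperparameters here are illustrative stand-ins for a real control environment.

```python
import math
import random

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def forward(w, x):
    """Tiny 2-2-1 tanh network; w holds its 9 weights and biases."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

def score(w):
    # one scalar per candidate (higher is better), like an episode's return
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

def evolve(seed=3, pop=40, gens=300, sigma=0.4):
    """Elitist Gaussian-perturbation ES over the weight vector."""
    rng = random.Random(seed)
    best = [rng.uniform(-1, 1) for _ in range(9)]
    for _ in range(gens):
        children = [[g + rng.gauss(0, sigma) for g in best] for _ in range(pop)]
        children.append(best)                    # elitism: keep the parent
        best = max(children, key=score)
    return best

best = evolve()
```

Scaling this pattern up (larger networks, parallel fitness evaluation of the population) is essentially the distributed-ES recipe the tutorial applies to locomotion and grasping tasks.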

David Ha

David Ha is a Research Scientist at Google Brain. His research interests include Recurrent Neural Networks, Creative AI, and Evolutionary Computing. Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he co-ran the fixed-income trading business in Japan. He obtained undergraduate and graduate degrees in Engineering Science and Applied Math from the University of Toronto.

Representations for Evolutionary Algorithms

Successful and efficient use of evolutionary algorithms (EA) depends on the choice of the genotype, the problem representation (mapping from genotype to phenotype) and on the choice of search operators that are applied to the genotypes. These choices cannot be made independently of each other. The question whether a certain representation leads to better performing EAs than an alternative representation can only be answered when the operators applied are taken into consideration. The reverse is also true: deciding between alternative operators is only meaningful for a given representation. In EA practice one can distinguish two complementary approaches. The first approach uses indirect representations where a solution is encoded in a standard data structure, such as strings, vectors, permutations, or trees, and standard off-the-shelf search operators are applied to these genotypes. This is for example the case in standard genetic algorithms, evolution strategies, and some genetic programming approaches like grammatical evolution or cartesian genetic programming. To evaluate the solution, the genotype needs to be mapped to the phenotype space. The proper choice of this genotype-phenotype mapping is important for the performance of the EA search process. The second approach, the direct representation, encodes solutions to the problem in its most 'natural' space and designs search operators to operate on this representation. Research in the last few years has identified a number of key concepts to analyse the influence of representation-operator combinations on EA performance. Relevant concepts are the locality and redundancy of representations. Locality is a result of the interplay between the search operator and the genotype-phenotype mapping. Representations have high locality if the application of variation operators results in new solutions similar to the original ones. 
Representations are redundant if the number of possible genotypes exceeds the number of phenotypes. Redundant representations can lead to biased encodings if some phenotypes are on average represented by a larger number of genotypes, or if search operators favor some kinds of phenotypes. The tutorial gives a brief overview of existing guidelines for representation design, illustrates the different aspects of representations, gives a brief overview of models describing the different aspects, and illustrates the relevance of these aspects with practical examples. It is expected that the participants have a basic understanding of EA principles.
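The locality notion can be made concrete by comparing standard binary and reflected Gray encodings of the integers: under Gray code, every pair of neighbouring phenotypes (integers differing by 1) has genotypes at Hamming distance 1, whereas standard binary encoding has "Hamming cliffs" such as 7 → 8 (0111 → 1000). The sketch below demonstrates this on 4-bit encodings.

```python
def to_binary(n, bits):
    """Standard binary encoding of integer n as a list of bits (MSB first)."""
    return [(n >> i) & 1 for i in reversed(range(bits))]

def to_gray(n, bits):
    """Reflected Gray encoding: n XOR (n >> 1), written in binary."""
    return to_binary(n ^ (n >> 1), bits)

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

BITS = 4
# genotypic distance between the encodings of consecutive integers 0..15
binary_steps = [hamming(to_binary(i, BITS), to_binary(i + 1, BITS))
                for i in range(15)]
gray_steps = [hamming(to_gray(i, BITS), to_gray(i + 1, BITS))
              for i in range(15)]
```

Every entry of gray_steps is 1, while binary_steps peaks at 4 for the 7 → 8 transition; which encoding helps the search, however, still depends on the variation operators applied to the genotypes, as discussed above.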

Franz Rothlauf

Franz Rothlauf received a Diploma in Electrical Engineering from the University of Erlangen, Germany, a Ph.D. in Information Systems from the University of Bayreuth, Germany, and a Habilitation from the University of Mannheim, Germany, in 1997, 2001, and 2007, respectively.

Since 2007, he has been Professor of Information Systems at the University of Mainz. He has published more than 90 technical papers in the context of planning and optimization, evolutionary computation, e-business, and software engineering, co-edited several conference proceedings and edited books, and is author of the books "Representations for Genetic and Evolutionary Algorithms" and "Design of Modern Heuristics". Since 2013, he has been Academic Director of the Executive MBA program at the University of Mainz.

His main research interests are the application of modern heuristics in planning and optimization systems. He is a member of the Editorial Boards of the Evolutionary Computation Journal (ECJ) and Business & Information Systems Engineering (BISE). Since 2007, he has been a member of the Executive Committee of ACM SIGEVO, and he has served as its treasurer since 2011. He has been an organizer of many workshops and tracks on heuristic optimization issues, chair of EvoWorkshops in 2005 and 2006, co-organizer of the European workshop series on "Evolutionary Computation in Communications, Networks, and Connected Systems", co-organizer of the European workshop series on "Evolutionary Computation in Transportation and Logistics", and co-chair of the program committee of the GA track at GECCO 2006. He was conference chair of GECCO 2009.

Runtime Analysis of Evolutionary Algorithms: Basic Introduction

Evolutionary algorithm theory has studied the time complexity of evolutionary algorithms for more than 20 years. Different aspects of this rich and diverse research field were presented in three different advanced or specialized tutorials at last year's GECCO. This tutorial presents the foundations of this field. We introduce the most important notions and definitions used in the field and consider different evolutionary algorithms on a number of well-known and important example problems. Through a careful and thorough introduction of important analytical tools and methods, including fitness-based partitions, typical events and runs and drift analysis, by the end of the tutorial the attendees will be able to apply these techniques to derive relevant runtime results for non-trivial evolutionary algorithms. Moreover, the attendees will be fully prepared to follow the more advanced tutorials that cover more specialized aspects of the field, including the new advanced runtime analysis tutorial on realistic population-based EAs. To assure the coverage of the topics required in the specialised tutorials, this introductory tutorial will be coordinated with the presenters of the more advanced ones. In addition to custom-tailored methods for the analysis of evolutionary algorithms we also introduce the relevant tools and notions from probability theory in an accessible form. This makes the tutorial appropriate for everyone with an interest in the theory of evolutionary algorithms without the need to have prior knowledge of probability theory and analysis of randomized algorithms. The last four editions of this tutorial at GECCO 2013, GECCO 2014, GECCO 2015, and PPSN 2016 attracted well over 50 participants each. The tutorial will be based on the 'theoretical analysis of stochastic search heuristics' chapter of the forthcoming Springer Handbook of Heuristics.
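As a taste of what the tutorial's tools predict: fitness-based partitions show that the expected optimisation time of the (1+1) EA with mutation rate 1/n on OneMax is Θ(n log n), roughly e·n·ln n. The sketch below measures this empirically on a small instance; the instance size, seed, and number of repetitions are arbitrary illustrative choices.

```python
import random

def one_plus_one_ea(n, rng):
    """(1+1) EA with standard bit mutation (rate 1/n) on OneMax.
    Returns the number of fitness evaluations until the optimum is found."""
    x = [rng.randint(0, 1) for _ in range(n)]
    evals = 1
    while sum(x) < n:
        y = [bit ^ (rng.random() < 1.0 / n) for bit in x]  # flip each bit w.p. 1/n
        evals += 1
        if sum(y) >= sum(x):                               # accept if not worse
            x = y
    return evals

rng = random.Random(42)
n = 50
avg_runtime = sum(one_plus_one_ea(n, rng) for _ in range(10)) / 10
# e * n * ln(n) is about 532 for n = 50, so the average should be of that order
```

Repeating the experiment for several values of n and plotting the averages against n·ln n is a simple way to see the theoretical bound at work.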

Per Kristian Lehre

Per Kristian Lehre is a Senior Lecturer at the University of Birmingham, UK.

He received MSc and PhD degrees in Computer Science from the Norwegian University of Science and Technology (NTNU). After finishing his PhD in 2006, he held postdoctoral positions in the School of Computer Science at the University of Birmingham and at the Technical University of Denmark. From 2011, he was a Lecturer in the School of Computer Science at the University of Nottingham, until 2017, when he returned to Birmingham.

Dr Lehre's research interests are in theoretical aspects of nature-inspired search heuristics, in particular, runtime analysis of population-based evolutionary algorithms. His research has won several best paper awards, including at GECCO (2013, 2010, 2009, 2006), ICSTW (2008), and ISAAC (2014). He is editorial board member of Evolutionary Computation, and associate editor of IEEE Transactions on Evolutionary Computation. He was the coordinator of the successful 2M euro EU-funded project SAGE which brought together the theory of evolutionary computation and population genetics.

Pietro S. Oliveto

Pietro S. Oliveto is currently a Vice-Chancellor Fellow at the University of Sheffield, UK, and has recently been awarded an EPSRC Early Career Fellowship which he will start in March 2015. He received the Laurea degree and PhD degree in computer science from the University of Catania, Italy, in 2005 and from the University of Birmingham, UK, in 2009, respectively. From October 2007 to April 2008, he was a visiting researcher of the Efficient Algorithms and Complexity Theory Institute at the Department of Computer Science of the University of Dortmund, where he collaborated with Prof. Ingo Wegener's research group.

His main research interest is the time complexity analysis of randomized search heuristics for combinatorial optimization problems. He has published several runtime analysis papers on Evolutionary Algorithms (EAs), Artificial Immune Systems (AIS) and Ant Colony Optimization (ACO) algorithms for classical NP-hard combinatorial optimization problems, a review paper of the field of time complexity analysis of EAs for combinatorial optimization problems, and a book chapter containing a tutorial on the runtime analysis of EAs. He has won best paper awards at the GECCO 2008, ICARIS 2011 and GECCO 2014 conferences, and received best paper nominations at CEC 2009 and ECTA 2011.

Dr. Oliveto has given tutorials on the runtime complexity analysis of EAs at WCCI 2012, CEC 2013, GECCO 2013, WCCI 2014 and GECCO 2014. He is part of the Steering Committee of the annual workshop on Theory of Randomized Search Heuristics (ThRaSH), IEEE member and Chair of the IEEE CIS Task Force on Theoretical Foundations of Bio-inspired Computation.

NEW Search-based Test Optimization for Software Systems

Test optimization is a crucial activity to select, generate, minimize, or prioritize test cases so that software systems can be tested cost-effectively. Test optimization is a complex problem that often requires balancing contradictory objectives of cost (e.g., test execution time), effectiveness (e.g., fault detection ability), and efficiency (fault detection per execution time). Search algorithms, both single-objective (e.g., Genetic Algorithms) and multi-objective (e.g., NSGA-II), have been extensively used for solving a variety of test optimization problems in diverse contexts. This tutorial will provide an introduction to test optimization with both single- and multi-objective search algorithms, focusing on a variety of real industrial test selection, test minimization, and test prioritization problems. The tutorial will address the following questions: 1) How to optimally encode test optimization problems for search algorithms? 2) How to define effective fitness functions that can guide search algorithms to find optimal solutions cost-effectively? 3) How to pick a suitable search algorithm for a given test optimization problem? 4) How to select and configure proper parameter settings for the selected search algorithms? 5) How to choose appropriate quality indicators to assess the quality of solutions generated by multi-objective algorithms? 6) How to choose appropriate statistical tests to empirically evaluate search algorithms for a particular test optimization problem? The tutorial will include interactive exercises on the aspects mentioned above. In addition, test optimization tools developed in our team will be demonstrated during the tutorial. Finally, we will provide a short introduction to extending these tools to other contexts.
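A concrete flavour of the encoding and fitness-definition questions, with invented data: represent a test suite as a bit-string (one bit per test case) and scalarize requirement coverage against execution cost in a single fitness function, here searched with a simple hill climber as a stand-in for the GA and NSGA-II runs discussed in the tutorial. All test names, coverage sets, costs, and weights below are hypothetical.

```python
import random

# Hypothetical data: requirements covered by each test case, and its cost
COVERAGE = {
    "t1": {"r1", "r2"}, "t2": {"r2", "r3"}, "t3": {"r3"},
    "t4": {"r1", "r4"}, "t5": {"r4", "r5"}, "t6": {"r5"},
}
COST = {"t1": 4, "t2": 3, "t3": 1, "t4": 5, "t5": 2, "t6": 1}
TESTS = sorted(COVERAGE)                 # fixed gene order: t1..t6

def fitness(bits):
    """Scalarized objective: reward covered requirements, penalize cost.
    The weight of 10 per requirement is an illustrative choice."""
    chosen = [t for t, b in zip(TESTS, bits) if b]
    covered = set().union(*(COVERAGE[t] for t in chosen)) if chosen else set()
    return 10 * len(covered) - sum(COST[t] for t in chosen)

def hill_climb(seed, iters=500):
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in TESTS]
    for _ in range(iters):
        cand = bits[:]
        cand[rng.randrange(len(bits))] ^= 1      # flip one selection bit
        if fitness(cand) >= fitness(bits):
            bits = cand
    return bits

best = max((hill_climb(seed) for seed in range(5)), key=fitness)
```

A multi-objective formulation would instead keep coverage and cost as separate objectives and return a Pareto front, which is where quality indicators come into play.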

Shaukat Ali

Shaukat Ali is currently a senior research scientist at Simula Research Laboratory, Norway. His research focuses on devising novel methods for Verification and Validation (V&V) of large-scale highly connected software-based systems, e.g., Cyber-Physical Systems (CPSs). He has led several basic research, research-based innovation, and innovation projects in the capacity of PI/Co-PI related to Model-based Testing (MBT), Search-Based Software Engineering, and Model-Based System Engineering. He has rich experience working in several countries, including the UK, Canada, Norway, and Pakistan. Shaukat has been on the program committees of several international conferences (e.g., MODELS, ICST, GECCO, SSBSE) and has also served as a reviewer for several software engineering journals (e.g., TSE, IST, SOSYM, JSS, TEVC). He is actively involved in the organization of international workshops, conferences, and special issues of journals. He is the general chair of the SSBSE 2019 conference. He is also actively participating in defining international standards on software modeling in the Object Management Group (OMG), notably a new standard on Uncertainty Modeling.

Tao Yue

Tao Yue is a chief research scientist at Simula Research Laboratory, Oslo, Norway. She received her Ph.D. from the Department of Systems and Computer Engineering at Carleton University, Ottawa, Canada, in 2010. Before that, she was an aviation engineer and system engineer for seven years. She has nearly 20 years of experience conducting industry-oriented research with a focus on Model-Based Engineering (MBE) in various application domains such as Avionics, Maritime and Energy, Communications, Automated Industry, and Healthcare, in several countries including Canada, Norway, and China. Her present research area is software engineering, with specific interests in Requirements Engineering, MBE, Model-based Testing, Uncertainty-wise Testing, Uncertainty Modeling, Search-based Software Engineering, Empirical Software Engineering, and Product Line Engineering, with a particular focus on large-scale software systems such as Cyber-Physical Systems. Dr. Yue is on the steering committee of MODELS 2018, serves as a PC co-chair of MODELS 2019, and has been on the program and organization committees of several international conferences (e.g., MODELS, RE, SPLC). She is also on the editorial board of Empirical Software Engineering. Dr. Yue is leading the standardization effort of Uncertainty Modeling at OMG and is also actively participating in defining international standards in the Object Management Group (OMG) such as SysML and UTP.

NEW Shift Your Research & Laboratory into Higher Gear with 3 Shift Skills & 4 Smooth Rules

The world of research has become increasingly competitive. Larger numbers of researchers chase scarcer funds to compete for fewer positions at increasingly selective universities & research-friendly firms. Succeeding in such an environment requires attention to developing both your technical chops and what have been called "soft skills"; but in an increasingly competitive environment, the technical skill set becomes expected—almost a commodity—and the differentiators of success shift to the soft side. In this highly interactive, hands-on tutorial session, GECCO co-founder and GEC pioneer Dave Goldberg shares key shift (don't-call-them-soft) skills learned from his years as founder of the well-remembered Illinois Genetic Algorithms Laboratory (IlliGAL) and from his second career as a leadership coach and educational transformation expert. In particular, the session draws on early AI work of Winograd & Flores to apply lessons from the philosophy of language & speech acts to the creation and execution of an effective research agenda and laboratory. Interactive exercises in the skills of noticing, listening, and questioning help drive key distinctions home, and their importance is emphasized with Schön's distinction between technical rationality and conversation-in-action. The session concludes with a short list of simple rules to help you smoothly create a memorable, collaborative research environment wherever you work.

David E. Goldberg

David E. Goldberg (Dave) is a trained civil engineer (Michigan, 75, 76, 83) in hydraulics and hydrology, a registered engineer (PA), and a trained leadership coach (Georgetown, 2011). He taught engineering at Michigan, Alabama, and Illinois for 26 years, and as an academic, was known for his work in artificial intelligence, particularly genetic algorithms and evolutionary algorithms, amassing an h-index h=102, including 5 authored texts and a number of edited volumes, including the highly cited Genetic Algorithms in Search, Optimization, and Machine Learning (Addison-Wesley, 1989). During his career he has co-founded a Silicon Valley startup (www.sharethis.com), 3 academic conferences, including one combining philosophy & engineering, and an educational transformation incubator (iFoundry at UIUC). He now heads www.ThreeJoy.com, a coaching & change leadership firm for higher education, and www.BigBeacon.org, a 501(c3) non-profit corporation devoted to transforming higher education.

In 2010, Dave resigned his tenure and distinguished professorship at the University of Illinois to help transform higher education in alignment with the creativity imperative of the 21st century. Specifically, he traveled to Asia, South America, Europe, and back to North America to unlock the keys to authentic transformation and thereby unleash a new generation of students, faculty, and educational leaders. Today, Dave travels the globe to help co-create more transformative educational institutions and organizations. His most recent book, A Whole New Engineer: The Coming Revolution in Engineering Education, is available in hardcover and e-book formats (www.wholenewengineer.org).

Theory for Non-Theoreticians

This tutorial addresses GECCO attendees who do not or just seldom use mathematical proofs in their everyday research activities. We provide a smooth introduction to the theory of evolutionary computation (EC). Complementing other introductory theory tutorials, we do NOT aim at equipping you with tools or theorems that can be beneficial for your research. Instead, after this tutorial, you will have a clear opinion on: what theory of evolutionary algorithms aims at, how research in theory of evolutionary computation is done, how to interpret statements from the theory literature, what are some important contributions of theory to our field, and what are the current hot topics.

Benjamin Doerr

Benjamin Doerr is a full professor at the Ecole Polytechnique (France). He also is an adjunct professor at Saarland University (Germany). His research area is the theory both of problem-specific algorithms and of randomized search heuristics like evolutionary algorithms. Major contributions to the latter include runtime analyses for evolutionary algorithms and ant colony optimizers, as well as the further development of the drift analysis method, in particular, multiplicative and adaptive drift. In the young area of black-box complexity, he proved several of the current best bounds.

Together with Frank Neumann and Ingo Wegener, Benjamin Doerr founded the theory track at GECCO, served as its co-chair 2007-2009 and served again in 2014. In 2016, he chaired the Hot-off-the-Press track. He is a member of the editorial boards of "Evolutionary Computation", "Natural Computing", "Theoretical Computer Science" and "Information Processing Letters". Together with Anne Auger, he edited the book "Theory of Randomized Search Heuristics".

Advanced Tutorials

Adaptive Parameter Choices in Evolutionary Computation

One of the most challenging problems in solving optimization problems with evolutionary algorithms (EAs) is the choice of the parameters, which allow one to adjust the behavior of the algorithms to the problem at hand. Suitable parameter values need to be found, for example, for the population size, the mutation strength, the crossover rate, the selective pressure, etc. The choice of these parameters can have a crucial impact on the performance of the algorithm and thus needs to be made with care. In the early years of evolutionary computation there was a quest to determine universally "optimal" parameter choices. At the same time, researchers soon realized that different parameter choices can be optimal in different stages of the optimization process: in the beginning of an optimization process, for example, one may want to allow a larger mutation rate to increase the chance of finding the most promising regions of the search space (the "exploration" phase), while later on a small mutation rate guarantees that the search stays focused (the "exploitation" phase). Such dynamic parameter choices are today standard in continuous optimization. Quite surprisingly, however, the situation is much different in discrete optimization, where non-static parameter choices have never played an important role---despite strong evidence (and existing mathematical proofs) that an adaptation of the parameter values to the state of the optimization process can bring significant performance gains. The ambition of this tutorial is to contribute to a paradigm change towards a more systematic use of adaptive parameter choices. To this end, we survey existing techniques to automatically select parameter values on the fly. We will discuss both experimental and theoretical results that demonstrate the unexploited potential of non-static parameter choices. Our tutorial thereby addresses experimentally as well as theory-oriented researchers alike.
No specific background is required to follow this tutorial.
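A minimal sketch of such an on-the-fly choice in discrete search: a (1+1) EA on OneMax whose mutation rate is multiplied by A > 1 after a success and by b < 1 after a failure, a discrete analogue of the classic 1/5-th success rule. The constants and the problem size below are illustrative choices, not tuned recommendations.

```python
import random

def adaptive_ea(n=64, seed=7, A=1.2, b=0.83, max_evals=100000):
    """(1+1) EA on OneMax with a success-based, self-adjusting mutation rate."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    rate = 1.0 / n                      # start at the classic static choice
    evals = 1
    while sum(x) < n and evals < max_evals:
        y = [bit ^ (rng.random() < rate) for bit in x]
        evals += 1
        if sum(y) > sum(x):
            x, rate = y, min(0.5, rate * A)      # success: be bolder
        else:
            rate = max(1.0 / n, rate * b)        # failure: be more careful
    return sum(x) == n, evals

found, evals = adaptive_ea()
```

Logging the rate over time shows the behaviour described above: larger rates while easy improvements abound, decaying towards 1/n as the optimum is approached.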

Carola Doerr

Carola Doerr (Carola.Doerr@mpi-inf.mpg.de, http://www-ia.lip6.fr/~doerr/) is a CNRS researcher at the Université Pierre et Marie Curie (Paris 6). She studied mathematics at Kiel University (Germany, Diploma in 2007) and computer science at the Max Planck Institute for Informatics and Saarland University (Germany, PhD in 2011). From Dec. 2007 to Nov. 2009, Carola Doerr worked as a business consultant for McKinsey & Company, mainly in the area of network optimization, where she used randomized search heuristics to compute more efficient network layouts and schedules. Before joining the CNRS she was a post-doc at the Université Diderot (Paris 7) and the Max Planck Institute for Informatics.

Carola Doerr's research interest is in the theory of randomized algorithms, both in the design of efficient algorithms and in finding a suitable complexity theory for randomized search heuristics. After contributing to the revival of black-box complexity, a theory-guided approach to exploring the limitations of heuristic search algorithms, she recently started a series of works aimed at exploiting insights from the theory of evolutionary computation to design more efficient EAs, in particular ones with a dynamic choice of parameters.

CMA-ES and Advanced Adaptation Mechanisms

The Covariance Matrix Adaptation Evolution Strategy (CMA-ES) is nowadays considered the state-of-the-art continuous stochastic search algorithm, in particular for the optimization of non-separable, ill-conditioned and rugged functions. The CMA-ES consists of different components that adapt the step-size and the covariance matrix separately. This tutorial will focus on CMA-ES and provide the audience with an overview of the different adaptation mechanisms used within CMA-ES and the scenarios where their relative impact is important. We will in particular present the rank-one update, the rank-mu update, and active CMA for the covariance matrix adaptation. Different step-size mechanisms (CSA and TPA), as well as the scenarios where they should be applied, will be discussed. We will address important design principles as well as questions related to parameter tuning that always accompany algorithm design. Input parameters such as the initial mean, the initial step-size, and the population size will be discussed in relation to the ruggedness of functions. Restart strategies that automate the tuning of these input parameters will be presented.
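As a flavor of one of the mechanisms discussed, the sketch below (our own, stdlib-only illustration) implements cumulative step-size adaptation (CSA) inside a minimal (mu/mu, lambda)-ES. The covariance matrix is kept as the identity and the damping term is simplified; the full CMA-ES adds the rank-one and rank-mu covariance updates on top of this scheme.

```python
import math, random

def csa_es(f, x0, sigma0=1.0, iters=400, seed=3):
    """Minimal (mu/mu,lambda)-ES with cumulative step-size adaptation.
    Identity covariance only; damping is simplified to ds = 1 + cs."""
    rng = random.Random(seed)
    n = len(x0)
    lam = 4 + int(3 * math.log(n))        # default population size
    mu = lam // 2
    w = [math.log(mu + 0.5) - math.log(i + 1) for i in range(mu)]
    s = sum(w)
    w = [wi / s for wi in w]              # positive, normalized recombination weights
    mueff = 1.0 / sum(wi * wi for wi in w)
    cs = (mueff + 2) / (n + mueff + 5)    # learning rate for the evolution path
    ds = 1 + cs                           # simplified damping
    chi_n = math.sqrt(n) * (1 - 1 / (4 * n) + 1 / (21 * n * n))
    mean, sigma, path = list(x0), sigma0, [0.0] * n
    for _ in range(iters):
        zs = [[rng.gauss(0, 1) for _ in range(n)] for _ in range(lam)]
        xs = [[mean[j] + sigma * z[j] for j in range(n)] for z in zs]
        order = sorted(range(lam), key=lambda k: f(xs[k]))
        zw = [sum(w[i] * zs[order[i]][j] for i in range(mu)) for j in range(n)]
        mean = [mean[j] + sigma * zw[j] for j in range(n)]
        c = math.sqrt(cs * (2 - cs) * mueff)
        path = [(1 - cs) * path[j] + c * zw[j] for j in range(n)]
        norm = math.sqrt(sum(p * p for p in path))
        # CSA: lengthen the step when the path is longer than a random walk's
        sigma *= math.exp((cs / ds) * (norm / chi_n - 1))
    return mean, sigma

sphere = lambda x: sum(v * v for v in x)
m, s = csa_es(sphere, [3.0] * 5)
```

On the sphere function the mean converges to the optimum while sigma shrinks geometrically, which is exactly the behavior the path-length rule is designed to produce.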

Youhei Akimoto

Youhei Akimoto is an assistant professor at Shinshu University, Japan. He received his diploma (2007) in computer science and his master's degree (2008) and PhD (2011) in computational intelligence and systems science from the Tokyo Institute of Technology, Japan. From 2010 he was also a research fellow of the Japan Society for the Promotion of Science for one year. Afterwards, he joined the TAO group at INRIA, France, for two years as a post-doc. He started working at Shinshu University in 2013. He co-chaired the Continuous Optimization Track at GECCO 2015 and 2016. His research interests include design principles and theoretical analysis of stochastic search heuristics in continuous domains, in particular the Covariance Matrix Adaptation Evolution Strategy.

Nikolaus Hansen

Nikolaus Hansen is a research scientist at INRIA, France. Educated in medicine and mathematics, he received a Ph.D. in civil engineering in 1998 from the Technical University Berlin under Ingo Rechenberg. Before he joined INRIA, he worked on evolutionary computation, genomics and computational science at the Technical University Berlin, the InGene Institute of Genetic Medicine and ETH Zurich. His main research interests are learning and adaptation in evolutionary computation and the development of algorithms applicable in practice. His best-known contribution to the field of evolutionary computation is the so-called Covariance Matrix Adaptation (CMA).

Constraint-Handling Techniques used with Evolutionary Algorithms

Evolutionary Algorithms (EAs), when used for global optimization, can be seen as unconstrained optimization techniques. Therefore, they require an additional mechanism to incorporate constraints of any kind (i.e., inequality, equality, linear, nonlinear) into their fitness function. Although the use of penalty functions (very popular with mathematical programming techniques) may seem an obvious choice, this sort of approach requires a careful fine-tuning of the penalty factors to be used. Otherwise, an EA may be unable to reach the feasible region (if the penalty is too low) or may reach the feasible region quickly but be unable to locate solutions that lie on the boundary with the infeasible region (if the penalty is too severe). This has motivated the development of a number of approaches to incorporate constraints into the fitness function of an EA. This tutorial will cover the main proposals in current use, including novel approaches such as the use of tournament rules based on feasibility, multiobjective optimization concepts, hybrids with mathematical programming techniques (e.g., Lagrange multipliers), cultural algorithms, and artificial immune systems, among others. Other topics such as the importance of maintaining diversity, current benchmarks and the use of alternative search engines (e.g., particle swarm optimization, differential evolution, evolution strategies, etc.) will also be discussed (as time allows).
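The appeal of feasibility-based tournament rules is that, unlike penalty functions, they need no penalty factor at all. The following Python sketch (our own illustration on a made-up toy problem) shows such a comparison rule driving a naive random search:

```python
import random

def deb_compare(a, b):
    """Tournament rule in the spirit of feasibility-based selection:
    feasible beats infeasible; two feasible solutions compare by objective;
    two infeasible ones compare by total constraint violation.
    Each solution is an (objective, violation) pair; returns True if a wins."""
    fa, va = a
    fb, vb = b
    if va == 0 and vb == 0:
        return fa <= fb          # both feasible: better objective wins
    if va == 0:
        return True              # feasible beats infeasible
    if vb == 0:
        return False
    return va <= vb              # both infeasible: smaller violation wins

def evaluate(x):
    """Toy problem: minimize x1 + x2 subject to x1 * x2 >= 1 on [0, 10]^2
    (optimum 2 at x1 = x2 = 1). No penalty factor needs tuning."""
    f = x[0] + x[1]
    g = max(0.0, 1.0 - x[0] * x[1])   # amount of constraint violation
    return f, g

rng = random.Random(0)
best = None
for _ in range(20000):
    cand = evaluate([rng.uniform(0, 10), rng.uniform(0, 10)])
    if best is None or deb_compare(cand, best):
        best = cand
print(best)   # a feasible solution; its objective approaches the optimum of 2
```

The same comparator can replace fitness comparison inside any tournament-selection EA, which is how these rules are typically deployed.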

Carlos Artemio Coello

Carlos Artemio Coello received a BSc in Civil Engineering from the Universidad Autonoma de Chiapas in Mexico in 1991 (graduating Summa Cum Laude). Then, he was awarded a scholarship from the Mexican government to pursue graduate studies in Computer Science at Tulane University (in the USA). He received a MSc and a PhD in Computer Science in 1993 and 1996, respectively. Dr. Coello has been a Senior Research Fellow in the Plymouth Engineering Design Centre (in England) and a Visiting Professor at DePauw University (in the USA). He is currently a full professor at CINVESTAV-IPN in Mexico City, Mexico. He has published over 390 papers in international peer-reviewed journals, book chapters, and conferences. He has also co-authored the book "Evolutionary Algorithms for Solving Multi-Objective Problems", which is now in its Second Edition (Springer, 2007), and has co-edited the book "Applications of Multi-Objective Evolutionary Algorithms" (World Scientific, 2004). His publications currently report over 23,600 citations, according to Google Scholar (his h-index is 61). He has delivered invited talks, keynote speeches and tutorials at international conferences held in Spain, USA, Canada, Switzerland, UK, Chile, Colombia, Brazil, Argentina, China, India, Uruguay and Mexico. Dr. Coello has served as a technical reviewer for over 50 international journals and for more than 100 international conferences and currently serves as associate editor of the journals "Evolutionary Computation", "IEEE Transactions on Evolutionary Computation", "Computational Optimization and Applications", "Applied Soft Computing", and "Pattern Analysis and Applications". He is also a member of the editorial board of "Engineering Optimization". He received the 2007 National Research Award (granted by the Mexican Academy of Science) in the area of "exact sciences" and, since January 2011, he has been an IEEE Fellow for "contributions to multi-objective optimization and constraint-handling techniques."
He is also the recipient of the prestigious "2013 IEEE Kiyo Tomiyasu Award" and of the "2012 National Medal of Science and Arts" in the area of "Physical, Mathematical and Natural Sciences" (this is the highest award that a scientist can receive in Mexico). His current research interests are: evolutionary multiobjective optimization and constraint-handling techniques for evolutionary algorithms.

NEW Decomposition Multi-Objective Optimization: Current Developments and Future Opportunities

Evolutionary multi-objective optimization (EMO) has been a major research topic in the field of evolutionary computation for many years. It is generally accepted that a combination of evolutionary algorithms and traditional optimization methods should form the next generation of multi-objective optimization solvers. As the name suggests, the basic idea of the decomposition-based technique is to transform the original complex problem into simplified subproblem(s) so as to facilitate the optimization. Decomposition methods have been well used and studied in traditional multi-objective optimization. MOEA/D decomposes a multi-objective problem into a number of subtasks, and then solves them in a collaborative manner. MOEA/D provides a very natural bridge between multi-objective evolutionary algorithms and traditional decomposition methods, and has become a commonly used evolutionary algorithmic framework in recent years. Within this tutorial, a comprehensive introduction to MOEA/D will be given and selected research results will be presented in more detail. More specifically, we are going to (i) introduce the basic principles of MOEA/D in comparison with the other two state-of-the-art EMO frameworks, i.e., Pareto- and indicator-based frameworks; (ii) present a general overview of state-of-the-art MOEA/D variants and their applications; (iii) discuss future opportunities for possible further developments. The intended audience of this tutorial can be both novices and people familiar with EMO or MOEA/D. In particular, the tutorial is self-contained: the foundations of multi-objective optimization and the basic working principles of EMO algorithms will be included for those without experience in EMO. Open questions will be posed and highlighted for discussion in the latter part of this tutorial.
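The decomposition idea can be seen in isolation with the Tchebycheff scalarizing function, one of the standard choices in MOEA/D. The sketch below (our own toy example; the bi-objective problem and the grid search are illustrative stand-ins for the evolutionary subproblem solvers) shows how different weight vectors pick out different points of the Pareto front:

```python
def tchebycheff(fvals, weights, ideal):
    """Tchebycheff scalarizing function: MOEA/D turns a multi-objective
    problem into many such single-objective subproblems, one per weight
    vector, and optimizes them collaboratively."""
    return max(w * abs(fv - z) for fv, w, z in zip(fvals, weights, ideal))

def scalarized_best(weights, ideal=(0.0, 0.0)):
    """Grid search on the toy bi-objective problem f1(x) = x^2,
    f2(x) = (x - 2)^2 with x in [0, 2]; every x on this interval is
    Pareto-optimal, and each subproblem selects a different one."""
    xs = [2 * i / 1000 for i in range(1001)]
    return min(xs, key=lambda x: tchebycheff((x * x, (x - 2) ** 2),
                                             weights, ideal))

for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    print(w, round(scalarized_best(w), 3))
```

Weights emphasizing f1 land near x = 0.5, balanced weights near x = 1, and weights emphasizing f2 near x = 1.5, which is how a spread of weight vectors yields a spread along the front.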

Ke Li

Ke Li is a Lecturer (Assistant Professor) in Data Analytics at the Department of Computer Science, University of Exeter. He earned his PhD from the City University of Hong Kong. Afterwards, he spent a year as a postdoctoral research associate at Michigan State University. He then moved to the UK and took up a post as research fellow at the University of Birmingham. His current research interests include evolutionary multi-objective optimization, automatic problem solving, machine learning, and applications in water engineering and software engineering.

Qingfu Zhang

Qingfu Zhang is a Professor at the Department of Computer Science, City University of Hong Kong. His main research interests include evolutionary computation, optimization, neural networks, data analysis, and their applications. He is currently leading the Metaheuristic Optimization Research (MOP) Group in City University of Hong Kong. Professor Zhang is an Associate Editor of the IEEE Transactions on Evolutionary Computation and the IEEE Transactions on Cybernetics. He was awarded the 2010 IEEE Transactions on Evolutionary Computation Outstanding Paper Award. He is on the list of the Thomson Reuters 2016 and 2017 highly cited researchers in computer science. He is an IEEE Fellow.

NEW Evolutionary Computation for Digital Art

Evolutionary algorithms have been used in various ways to create or guide the creation of digital art. In this tutorial we present techniques from the thriving field of biologically inspired art. We show how evolutionary computation methods can be used to enhance artistic creativity and lead to software systems that help users create artistic work. We start by providing a general introduction to the use of evolutionary computation methods for digital art and highlight different application areas. This covers different evolutionary algorithms, including genetic programming, for the creation of artistic images. Afterwards, we discuss evolutionary algorithms that create artistic artwork in the context of image transition and animation. We show how the connection between evolutionary computation methods and a professional artistic approach finds application in digital animation and new media art, and discuss the different steps of involving evolutionary algorithms for image transition in the creation of paintings. Afterwards, we give an overview of the use of aesthetic features to evaluate digital art. The feature-based approach complements the existing evaluation through human judgment/analysis and allows one to judge digital art in a quantitative way. Finally, we outline directions for future research and discuss some open problems. The tutorial will contain various animations to showcase digital art. We also plan to allow the audience to interact with recent computer systems for digital art.
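The image-transition idea can be illustrated in a few lines. The Python sketch below (our own toy reduction, treating "images" as binary pixel arrays) runs a (1+1)-style process in which random pixel flips are kept whenever they do not reduce agreement with the target image; the accepted intermediate states form the frames of a randomized morph.

```python
import random

def image_transition(source, target, seed=7, max_steps=100000):
    """Toy evolutionary image transition on binary 'pixel' arrays:
    random pixel flips are accepted when the agreement with the target
    does not decrease, and each strict improvement is stored as a frame."""
    rng = random.Random(seed)
    n = len(source)
    current = list(source)
    frames = [list(current)]
    match = sum(c == t for c, t in zip(current, target))
    for _ in range(max_steps):
        if match == n:
            break
        # standard bit-flip mutation with rate 1/n
        cand = [1 - c if rng.random() < 1.0 / n else c for c in current]
        m = sum(c == t for c, t in zip(cand, target))
        if m >= match:               # never move away from the target
            if m > match:
                frames.append(list(cand))
            current, match = cand, m
    return frames

src = [0] * 64
dst = [i % 2 for i in range(64)]
frames = image_transition(src, dst)
```

With real images the same scheme operates per pixel on color values and the acceptance criterion can be any distance between images; the evolutionary process then determines the visual character of the transition.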

Frank Neumann

Frank Neumann received his diploma and Ph.D. from the Christian-Albrechts-University of Kiel in 2002 and 2006, respectively. He is a professor and leader of the Optimisation and Logistics Group at the School of Computer Science, The University of Adelaide, Australia. Frank was the general chair of ACM GECCO 2016. With Kenneth De Jong he organised ACM FOGA 2013 in Adelaide, and together with Carsten Witt he has written the textbook "Bioinspired Computation in Combinatorial Optimization - Algorithms and Their Computational Complexity", published by Springer. He is an Associate Editor of the journals "Evolutionary Computation" (MIT Press) and "IEEE Transactions on Evolutionary Computation" (IEEE). In his work, he considers algorithmic approaches, in particular for combinatorial and multi-objective optimization problems, and focuses on theoretical aspects of evolutionary computation as well as high-impact applications in the areas of renewable energy, logistics, and mining.

Aneta Neumann

Aneta Neumann graduated from the Christian-Albrechts-University of Kiel, Germany in computer science and is currently undertaking her postgraduate research at the School of Computer Science, the University of Adelaide, Australia. She was a participant in the SALA 2016 and 2017 exhibitions in Adelaide and has presented invited talks at UCL London, Goldsmiths, University of London, the University of Nottingham and the University of Sheffield in 2016 and 2017. Aneta is a co-designer and co-lecturer for the EdX Big Data Fundamentals course in the Big Data MicroMasters® program. Her main research interest is understanding the fundamental link between bio-inspired computation and digital art.

NEW Evolutionary Reinforcement Learning: General Models and Adaptation

Recently, with the surge of AlphaGo and Deep Q-Learning, reinforcement learning systems have come into the spotlight. Reinforcement learning is a promising field not only because of the achievements of playing Go or Atari games at human level but also because of its paradigm, which can be easily applied to more complex problems where a classical supervised learning approach is infeasible (e.g., when there is no dataset available, or when adaptation is required, among others). Evolutionary computation has two main approaches to reinforcement learning: (a) neuroevolution and (b) learning classifier systems (LCS). This tutorial will present the recent advances of both fields in the light of reinforcement learning, explaining the pros and cons of each one as well as which problems they can solve. More importantly, since the two are not mutually exclusive, this tutorial also aims to demonstrate how they can work together. Neuroevolution is an area of research resulting from the combination of the representation power of artificial neural networks with the optimization capabilities of evolutionary methods. The neuroevolution methods that have been applied to reinforcement learning use a direct policy search approach (i.e., the rewards are summed and the evolutionary system searches for neural network policies that maximize this sum) instead of the value-function approach used in LCS. Particular focus will be placed on recent works that evolve both the topology and the weights of the network. The tutorial will show the differences and achievements of the most popular methods NEAT, CPPN-NEAT and HyperNEAT (an indirect-encoding variation), highlighting how they solve reinforcement learning problems and which problems have been solved. Moreover, it will introduce new approaches that are emerging in this area, such as the recent paper published in IEEE Transactions by Dr. Vargas.
In this paper, higher adaptation and better general learning properties are shown to be possible if a more complex model is used (i.e., a unified neural model). This requires an evolutionary system that can evolve such a complex model (an evolutionary algorithm using a new concept of chromosome spectrum and a new diversity method): Danilo Vasconcellos Vargas and Junichi Murata (2017), “Spectrum-Diverse Neuroevolution with Unified Neural Models”, IEEE Transactions on Neural Networks and Learning Systems, Volume 28, Issue 8, 1759-1773. This tutorial will also present a series of LCS-based reinforcement learning algorithms. LCS falls into the divide-and-conquer approach, where a set of agents with condition-action-prediction rules cooperate and compete in an evolutionary system to solve the problem at hand. Most of the LCS algorithms that can solve reinforcement learning problems are Michigan-style, where all the individuals collectively make up a single solution, and it is the selection and improvement of the parts that drives the method forward. Contrary to neuroevolution algorithms, which use direct policy search, LCS algorithms are mostly value-function approaches in which each agent uses the fitness of the next acting agent to update its own fitness. In fact, there are many variations in how rewards are given to agents as well as in the presence of prediction rules. All these differences will be compared with examples. Moreover, it will be shown how systems can adapt to changes in the environment using flexible representations like Self-Organizing Maps or Novelty Organizing Maps present in the recent paradigm of Self-Organizing Classifiers (this class of algorithms was proposed by Dr. Vargas, with the first algorithm presented at GECCO 2013 and followed by a series of papers about them in both GECCO and CEC).

As time allows, the tutorial will end with a section of open questions to promote a discussion among the participants and increase the possibility of cooperation towards a common objective. This tutorial touches on advanced topics and should ideally be scheduled in the days after both the tutorial introducing LCS (being proposed by Dr. Urbanowicz and Dr. Vargas) and the tutorial that introduces neuroevolution (usually presented by Prof. Miikkulainen). Thus, the main objective of this tutorial is to present and promote an exchange of ideas between two areas inside GECCO as well as to show how they are interrelated from the perspective of reinforcement learning. This should open the doors for a vibrant evolutionary reinforcement learning community to rival the non-evolutionary ones.
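The direct-policy-search side can be sketched compactly: evolve the weight vector of a fixed, tiny network so that its summed "reward" (here, negated error on XOR standing in for an episode return) is maximized. All hyperparameters below are our own illustrative choices; topology evolution as in NEAT is deliberately left out.

```python
import math, random

def forward(w, x):
    """Fixed-topology policy network: 2 inputs, 2 tanh hidden units,
    1 output; w packs its 9 weights and biases."""
    h1 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h2 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h1 + w[7] * h2 + w[8])

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def reward(w):
    # Direct policy search: fitness is the negated summed error over "episodes".
    return -sum((forward(w, x) - y) ** 2 for x, y in CASES)

def evolve(pop=40, gens=400, seed=5):
    """Truncation-selection evolution of the weight vector: keep the best
    quarter, refill with Gaussian-mutated copies of random survivors."""
    rng = random.Random(seed)
    population = [[rng.uniform(-2, 2) for _ in range(9)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=reward, reverse=True)
        parents = population[:pop // 4]
        children = [[g + rng.gauss(0, 0.3) for g in rng.choice(parents)]
                    for _ in range(pop - len(parents))]
        population = parents + children
    return max(population, key=reward)

best = evolve()
```

Note that no value function appears anywhere: only the total reward of a complete rollout is used, which is the defining contrast with the LCS approaches discussed above.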

Danilo Vasconcellos Vargas

Danilo Vasconcellos Vargas is an Assistant Professor at the Faculty of Information Science and Electrical Engineering, Kyushu University. He received the M.Eng. and Ph.D. degrees from Kyushu University. His thesis was about employing a new concept of fitness in both machine learning and optimization. His current research interests focus on general learning systems, including research in evolutionary algorithms, neural networks, learning classifier systems and their applications.
In his most recent work, he developed a unified neural model integrating most neural network features from the literature into one representation. With this powerful representation it was possible to evolve the topology and weights of the network to learn a wide variety of problem classes.

Expressive Genetic Programming: Concepts and applications

The language in which evolving programs are expressed can have significant impacts on the dynamics and problem-solving capabilities of a genetic programming system. In GP these impacts are driven by far more than the absolute computational power of the languages used; just because a computation is theoretically possible in a language, it doesn’t mean it’s readily discoverable or leveraged by evolution. Highly expressive languages can facilitate the evolution of software using, as appropriate, multiple data types, evolved subroutines, evolved control structures, evolved data structures, and evolved modular program and data architectures. In some cases expressive languages can even support the evolution of programs that express methods for their own reproduction and variation (and hence for the evolution of their offspring). This tutorial will present a range of approaches that have been taken for evolving programs in expressive programming languages. We will then provide a detailed introduction to the Push programming language, which was designed specifically for expressiveness in genetic programming systems, and we will demonstrate the use of the PushGP genetic programming system with open source libraries in Python (PyshGP) and Clojure (Clojush). Participants will be invited to install and interact with these libraries during the tutorial. Interleaved with our description and demonstrations of Push and PushGP will be demonstrations of the use of analytical tools such as graph databases and program diff/merge tools to explore ways in which evolved Push programs are actually taking advantage of the language’s expressive features. We will illustrate, for example, the effective use of multiple types and type-appropriate functions, the evolution and modification of code blocks and looping/recursive constructs, and the ability of Push programs to handle multiple, potentially related tasks. 
We will conclude with a brief review of over a decade of Push-based research, including the production of human-competitive results, along with recent enhancements to Push that are intended to support the evolution of complex and robust software systems.
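To convey the flavor of stack-based execution, here is a minimal Push-style interpreter (our own sketch with a single integer stack and made-up instruction names; the real Push language adds typed stacks for floats, booleans, code and exec, which is what gives it its expressiveness):

```python
def run_push(program, int_stack=None):
    """Execute a flat program of integer literals and instruction names
    against one integer stack. As in Push, an instruction that lacks
    its required arguments is simply a no-op, so any program is valid."""
    stack = list(int_stack or [])
    for token in program:
        if isinstance(token, int):
            stack.append(token)            # literals push themselves
        elif token == "int_add" and len(stack) >= 2:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif token == "int_mult" and len(stack) >= 2:
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif token == "int_dup" and stack:
            stack.append(stack[-1])
        # anything else is ignored (no-op)
    return stack

# (3 + 4) squared, computed stack-wise:
print(run_push([3, 4, "int_add", "int_dup", "int_mult"]))  # → [49]
```

The no-op rule is the key design choice: because every token sequence executes without error, mutation and crossover can rearrange programs freely, which is exactly what a GP system needs.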

Lee Spector

Lee Spector is a Professor of Computer Science in the School of Cognitive Science at Hampshire College in Amherst, Massachusetts, and an adjunct professor in the Department of Computer Science at the University of Massachusetts, Amherst. He received a B.A. in Philosophy from Oberlin College in 1984 and a Ph.D. from the Department of Computer Science at the University of Maryland in 1992. His areas of teaching and research include genetic and evolutionary computation, quantum computation, and a variety of intersections between computer science, cognitive science, evolutionary biology, and the arts. He is the Editor-in-Chief of the journal Genetic Programming and Evolvable Machines (published by Springer) and a member of the editorial board of Evolutionary Computation (published by MIT Press). He is also a member of the SIGEVO executive committee and he was named a Fellow of the International Society for Genetic and Evolutionary Computation.

More info: http://hampshire.edu/lspector

Nic McPhee

Next Generation Genetic Algorithms

New developments in Gray Box Optimization make it possible to construct new forms of Genetic Algorithms that do not use random mutation or random recombination. Instead, for certain classes of NP-hard problems (ranging from MAXSAT to the Traveling Salesman Problem), it is possible to exactly compute the location of improving moves (often in constant time), and to use highly efficient forms of greedy deterministic recombination. In some domains, this makes random mutation and random recombination obsolete. Deterministic “Partition Crossover” can be applied to k-bounded pseudo-Boolean optimization problems (such as MAXSAT and NK Landscapes) as well as problems such as the Traveling Salesman Problem (TSP). Partition Crossover locally decomposes a recombination graph into q subgraphs in O(n) time. It can then identify the best of 2^q possible offspring. For example, for q=40, partition crossover returns the best of approximately one trillion possible offspring. If the parents are local optima, the offspring are guaranteed to be locally optimal in the largest hyperplane subspace containing both parents, and offspring are typically also local optima in the full space. This allows partition crossover to directly “tunnel” between local optima, moving directly from local optimum to local optimum. For the TSP, these results can be used to improve the best existing TSP solvers (e.g., LKH and EAX). It can also be used to improve exact Branch and Bound algorithms such as Concorde. For k-bounded pseudo-Boolean optimization problems these new algorithms are able to solve problems with 1 million variables. Other recent results also use a similar form of local decomposition coupled with greedy and deterministic optimization. When applied to multiply constrained scheduling problems, the genetic algorithm is able to solve industrial problems with up to 1 billion variables. These results have the potential to revolutionize evolutionary computation.
The methods are decidedly not black box optimizers, and the new algorithms trivially solve many black box benchmarks: ONEMAX, Leading Ones, Jump functions and Trap functions are solved in O(n) time using only 1 evaluation.
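The core gray-box observation can be shown in a few lines: when f(x) is a sum of k-bounded subfunctions, the effect of flipping bit i depends only on the subfunctions that touch bit i, so improving moves can be located without ever evaluating f as a whole. The sketch below (our own illustration; the hill climber and the ONEMAX encoding are toy stand-ins for the full machinery) exploits exactly this:

```python
import random

def greedy_gray_box(n, subfns, seed=11):
    """Gray-box hill climber for f(x) = sum of g_j over small variable
    subsets. The flip delta of bit i is computed by touching only the
    subfunctions containing i, never by re-evaluating all of f."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    touching = [[] for _ in range(n)]          # which subfunctions see bit i
    for j, (vars_j, g) in enumerate(subfns):
        for v in vars_j:
            touching[v].append(j)

    def delta(i):
        """Change in f caused by flipping bit i, from local information only."""
        d = 0
        for j in touching[i]:
            vars_j, g = subfns[j]
            before = g(tuple(x[v] for v in vars_j))
            x[i] ^= 1
            after = g(tuple(x[v] for v in vars_j))
            x[i] ^= 1
            d += after - before
        return d

    improved = True
    while improved:
        improved = False
        for i in range(n):
            if delta(i) > 0:        # improving move found without a full evaluation
                x[i] ^= 1
                improved = True
    return x

# ONEMAX written as n unary subfunctions; each flip delta is purely local.
n = 50
subfns = [((i,), lambda t: t[0]) for i in range(n)]
x = greedy_gray_box(n, subfns)
```

For bounded k, each bit occurs in O(1) subfunctions on average, which is why deltas can also be maintained incrementally after a flip, giving the constant-time improving-move identification mentioned above.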

Darrell Whitley

Prof. Darrell Whitley has been active in Evolutionary Computation since 1986 and has published more than 200 papers, which have garnered more than 16,000 citations. Dr. Whitley's h-index is 54. He has served as Editor-in-Chief of the journal Evolutionary Computation, and served as Chair of the Governing Board of ACM SIGEVO from 2007 to 2011. He currently serves as an Associate Editor of Artificial Intelligence.

Particle Swarm Optimization: A Guide to Effective, Misconception Free, Real World Use

The main objective of this tutorial will be to inform particle swarm optimization (PSO) practitioners of the many common misconceptions and falsehoods that are actively hindering a practitioner’s successful use of PSO in solving challenging optimization problems. While the behaviour of PSO’s particles has been studied both theoretically and empirically since its inception in 1995, most practitioners unfortunately have not utilized these studies to guide their use of PSO. This tutorial will provide a succinct coverage of common PSO misconceptions, with a detailed explanation of why the misconceptions are in fact false, and how they are negatively impacting results. The tutorial will also provide recent theoretical results about PSO particle behaviour from which the PSO practitioner can now make better and more informed decisions about PSO and, in particular, make better PSO parameter selections. The tutorial will focus on the following important aspects of PSO behaviour: effective control parameter selection; parameter tuning and fair comparison; the influence of social topology on performance; existing theoretical PSO results and what they mean to a PSO practitioner; roaming behaviour of PSO particles; known stability criteria of PSO algorithms; effects of particle stability on PSO’s performance; how to derive new stability criteria for PSO variants and verify them; and whether PSO is a local or a global optimizer. With the knowledge presented in this tutorial a PSO practitioner will gain up-to-date theoretical insights into PSO behaviour and as a result be able to make informed choices when utilizing PSO.
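As a concrete point of reference (our own stdlib-only sketch, not tutorial material), here is a global-best PSO using the widely studied parameter set w = 0.7298, c1 = c2 = 1.49618, which lies inside the theoretically derived stability region, so particles converge rather than roam:

```python
import random

def pso(f, dim=5, particles=20, iters=500, seed=9):
    """Global-best (star-topology) PSO minimizing f. The inertia weight
    and acceleration coefficients below satisfy the standard order-2
    stability condition; swapping in unstable values makes particle
    positions diverge instead of converging."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7298, 1.49618, 1.49618
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [list(p) for p in pos]
    pval = [f(p) for p in pos]
    g = min(range(particles), key=lambda i: pval[i])
    gbest, gval = list(pbest[g]), pval[g]
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:                      # update personal best
                pbest[i], pval[i] = list(pos[i]), v
                if v < gval:                     # update global best
                    gbest, gval = list(pos[i]), v
    return gbest, gval

sphere = lambda x: sum(v * v for v in x)
best, val = pso(sphere)
```

The per-dimension random numbers r1 and r2 (rather than one scalar per particle) follow the standard formulation; conflating the two is one of the misconceptions this kind of tutorial addresses.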

Andries Engelbrecht

Andries Engelbrecht received the Masters and PhD degrees in Computer Science from the University of Stellenbosch, South Africa, in 1994 and 1999 respectively. He is Professor in Computer Science at the University of Pretoria, and serves as Head of the department. He holds the position of South African Research Chair in Artificial Intelligence, and leads the Computational Intelligence Research Group. His research interests include swarm intelligence, evolutionary computation, neural networks, artificial immune systems, and the application of these paradigms to data mining, games, bioinformatics, finance, and difficult optimization problems. He has published over 300 papers in these fields and is author of two books, Computational Intelligence: An Introduction and Fundamentals of Computational Swarm Intelligence.

Christopher Cleghorn

Christopher Cleghorn received his Masters and PhD degrees in Computer Science from the University of Pretoria, South Africa, in 2013 and 2017 respectively. He is a lecturer in Computer Science at the University of Pretoria, and a member of the Computational Intelligence Research Group. His research interests include swarm intelligence, evolutionary computation, and machine learning, with a strong focus on theoretical research. Dr Cleghorn annually serves as a reviewer for numerous international journals and conferences in domains ranging from swarm intelligence and neural networks to mathematical optimization.

NEW Promoting Diversity in Evolutionary Optimization: Why and How

Divergence of character is a cornerstone of natural evolution. By contrast, evolutionary optimization processes are plagued by an endemic lack of diversity: all candidate solutions eventually crowd the very same areas of the search space. Such a lack of speciation was pointed out in the seminal work of Holland in 1975, and is nowadays well known among scholars. It has different effects on different search algorithms, but almost all of them are quite deleterious. The problem is usually labeled with the oxymoron "premature convergence", that is, the tendency of an algorithm to converge toward a point where it was not supposed to converge to in the first place. The scientific literature contains several efficient diversity-preservation methodologies, ranging from general techniques to problem-dependent heuristics. However, the fragmentation of the field and differences in terminology have led to a general dispersion of this important corpus of knowledge across many small, hard-to-track research lines. The tutorial will stem from the 2016 PPSN tutorial and shall include the more recent developments of this vibrant topic.
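One of the oldest diversity-preservation techniques the literature offers is fitness sharing, which we sketch below (our own illustration; the triangular sharing function, the radius and the one-dimensional genotypes are arbitrary choices): each individual's raw fitness is divided by its niche count, so crowded regions are penalized and isolated individuals keep their full fitness.

```python
def shared_fitness(population, raw, radius=1.0, alpha=1.0):
    """Classic fitness sharing: divide each raw fitness by the niche
    count, i.e. the summed sharing-function values over the population.
    Individuals here are plain floats; distance is absolute difference."""
    def sh(d):
        # triangular sharing function: 1 at distance 0, 0 beyond the radius
        return 1 - (d / radius) ** alpha if d < radius else 0.0
    shared = []
    for i, x in enumerate(population):
        niche = sum(sh(abs(x - y)) for y in population)  # includes itself
        shared.append(raw[i] / niche)
    return shared

pop = [0.0, 0.05, 0.1, 3.0]        # three crowded individuals and one loner
raw = [1.0, 1.0, 1.0, 1.0]
sf = shared_fitness(pop, raw)
# the isolated individual at 3.0 retains its full fitness,
# while the crowded ones are discounted
```

Under selection, this discount makes the crowded niche less attractive, which is precisely the pressure toward speciation that plain fitness-proportional selection lacks.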

Giovanni Squillero

Giovanni Squillero received his M.S. and Ph.D. in computer science in 1996 and 2001, respectively. He is an assistant professor at Politecnico di Torino, Torino, Italy. His research interests mix the whole spectrum of bio-inspired metaheuristics with electronic CAD, and selected topics in computational intelligence, games, and multi-agent systems. His activities focus on developing techniques able to achieve "good" solutions while requiring an "acceptable" amount of resources, with main applications in real, industrial problems. Squillero is a member of the IEEE Computational Intelligence Society Games Technical Committee. He organized the EvoHOT workshops on evolutionary hardware optimization techniques, and he is currently a member of the editorial board of Genetic Programming and Evolvable Machines. He is the coordinator of EvoApplications for 2016.
http://www.cad.polito.it/~squillero/cv_squillero.pdf

Alberto Tonda

Alberto Tonda received his PhD in 2010, from Politecnico di Torino, Torino, Italy, with a thesis on real-world applications of evolutionary computation. After post-doctoral experiences on the same topics at the Institut des Systèmes Complexes of Paris and INRIA Saclay, France, he is now a permanent researcher at INRA, the French National Institute for Research in Agriculture and Agronomy. His current research topics include semi-supervised modeling of food processes, and stochastic optimization of processes for the industry.

Sequential Experimentation by Evolutionary Algorithms

This tutorial addresses applications of Evolutionary Algorithms (EAs) to global optimization tasks where the objective function cannot be calculated (no explicit model nor a simulation exists), but rather requires a measurement/assay ("wet experiment") in the real world, e.g., in pharmaceuticals, biocatalyst design, protein expression, or quantum processes, to mention only a few. The use of EAs for experimental optimization is placed in its historical context with an overview of the landmark studies in this area carried out in the 1960s at the Technical University of Berlin. At the same time, statistics-based Design of Experiments (DoE) methodologies, rooted in the late 1950s, constitute a gold standard in existing laboratory practice, and are therefore reviewed as well at an introductory level for an EC audience. The main characteristics of experimental optimization work, in comparison to the optimization of simulated systems, are discussed, and practical guidelines for real-world experiments with EAs are given. For example, experimental problems can constrain the evolution due to overhead considerations, interruptions, changes of variables, missing assays, imposed population sizes, and target assays that have differing evaluation times (in the case of multiple-objective optimization problems). Selected modern-day case studies show the persistence of experimental optimization problems today. These cover experimental quantum systems, combinatorial drug discovery, protein expression, and others. These applications can throw EAs out of their normal operating envelope, and raise research questions in a number of different areas ranging across constrained EAs, multiple-objective EAs, robust and reliable methods for noisy and dynamic problems, and metamodeling methods for expensive evaluations.
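A practical way to structure an EA around slow, interruptible wet-lab evaluations is the ask-and-tell pattern: the optimizer proposes a batch of experiments and later accepts whatever measurements come back. The sketch below is our own illustration of this interface (the (mu+lambda) strategy, fixed mutation strength, and class names are all assumptions, not an established API); note that missing assays can simply be reported as None.

```python
import random

class AskTellEA:
    """Sketch of an ask-and-tell (mu+lambda) evolution strategy. The
    decoupling between proposing candidates (ask) and receiving measured
    values (tell) suits batching, interruptions and slow evaluations."""

    def __init__(self, x0, sigma=0.5, mu=5, lam=10, seed=13):
        self.rng = random.Random(seed)
        self.sigma, self.mu, self.lam = sigma, mu, lam
        self.parents = [(list(x0), None)]   # starting point, not yet measured
        self.pending = []

    def ask(self):
        """Return a batch of candidate experiments to run in the lab."""
        base = [p for p, _ in self.parents]
        self.pending = [[g + self.rng.gauss(0, self.sigma)
                         for g in self.rng.choice(base)]
                        for _ in range(self.lam)]
        return self.pending

    def tell(self, values):
        """Report measured objective values (lower is better). Missing
        assays may be passed as None and are simply discarded."""
        scored = [(x, v) for x, v in zip(self.pending, values) if v is not None]
        pool = [pv for pv in self.parents if pv[1] is not None] + scored
        pool.sort(key=lambda xv: xv[1])      # elitist (mu+lambda) survival
        self.parents = pool[:self.mu]

opt = AskTellEA([2.0, 2.0])
for _ in range(40):
    batch = opt.ask()
    opt.tell([sum(v * v for v in x) for x in batch])   # simulated assay
best = opt.parents[0]
```

Because tell accepts partial results, the same object survives a failed assay or an overnight interruption: the experimenter just calls ask again and continues from the retained parents.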

Ofer Shir

Ofer Shir is a Senior Lecturer at the Computer Science Department in Tel-Hai College, and a Principal Investigator at Migal-Galilee Research Institute – both located in the Upper Galilee, Israel.
Ofer Shir holds a BSc in Physics and Computer Science from the Hebrew University of Jerusalem, Israel, and both an MSc and a PhD in Computer Science from Leiden University, The Netherlands (PhD advisers: Thomas Bäck and Marc Vrakking). Upon his graduation, he completed a two-year term as a Postdoctoral Research Associate at Princeton University, USA (2008-2010), hosted by Prof. Herschel Rabitz, where he specialized in computer science aspects of experimental quantum systems. He then joined IBM Research as a Research Staff Member (2010-2013), which constituted his second postdoctoral term, and where he gained real-world experience in convex and combinatorial optimization as well as in decision analytics.
His current fields of interest encompass Statistical Learning in Theory and in Practice, Experimental Optimization, Scientific Informatics, Natural Computing, Computational Intelligence in Physical Sciences, Quantum Control and Quantum Information.

Thomas Bäck

Thomas Bäck is full professor of computer science at the Leiden Institute of Advanced Computer Science (LIACS), Leiden University, The Netherlands, where he has been head of the Natural Computing group since 2002.

He received his PhD (adviser: Hans-Paul Schwefel) in computer science from Dortmund University, Germany, in 1994, and then worked for the Informatik Centrum Dortmund (ICD) as department leader of the Center for Applied Systems Analysis. From 2000 to 2009, Thomas was Managing Director of NuTech Solutions GmbH and CTO of NuTech Solutions, Inc. He gained ample experience in solving real-life problems in optimization and data mining through working with global enterprises such as BMW, Beiersdorf, Daimler, Ford, Honda, and many others.

Thomas Bäck has more than 200 publications on natural computing, as well as two books on evolutionary algorithms: Evolutionary Algorithms in Theory and Practice (1996) and Contemporary Evolution Strategies (2013). He is co-editor of the Handbook of Evolutionary Computation and, most recently, the Handbook of Natural Computing. He is also an editorial board member and associate editor of a number of journals on evolutionary and natural computing. Thomas received the best dissertation award from the German Society of Computer Science (Gesellschaft für Informatik, GI) in 1995 and is an elected fellow of the International Society for Genetic and Evolutionary Computation for his contributions to the field.

Simulation Optimization

The optimization of parameters of a simulation model is usually denoted as Simulation Optimization. This is a typical problem in the design and optimization of complex systems, where a solution can only be evaluated by means of running a simulation. On the one hand, simulation optimization is black box optimization, which suits evolutionary algorithms well. On the other hand, simulation models are often stochastic, which affects the selection step in evolutionary algorithms. Furthermore, running a simulation is usually computationally expensive, so the number of solutions that can be evaluated is limited. This tutorial will survey various simulation optimization techniques and then explain in more detail what it takes to successfully apply evolutionary algorithms for simulation optimization. It will briefly cover parallelization and surrogate models to reduce the runtime, but then focus in particular on the handling of noise.
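The noise-handling issue raised above can be made concrete with a small sketch. Assuming a toy stochastic "simulation" (a quadratic objective plus Gaussian noise, purely illustrative), the simplest remedy is explicit resampling: average several simulation runs per solution before the selection step.

```python
import random

def noisy_fitness(x, noise_sd=0.5):
    """Stand-in for one stochastic simulation run: true objective plus noise."""
    return -(x - 3.0) ** 2 + random.gauss(0.0, noise_sd)  # true optimum at x = 3

def averaged_fitness(x, samples=10):
    """Explicit resampling: average repeated runs to reduce noise before selection."""
    return sum(noisy_fitness(x) for _ in range(samples)) / samples

def evolve(generations=50, pop_size=20, samples=10):
    population = [random.uniform(-10.0, 10.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection on the averaged (less noisy) fitness estimate
        population.sort(key=lambda x: averaged_fitness(x, samples), reverse=True)
        parents = population[: pop_size // 2]
        population = parents + [p + random.gauss(0.0, 0.3) for p in parents]
    return max(population, key=lambda x: averaged_fitness(x, 50))

random.seed(1)
best = evolve()
print(f"best solution found: {best:.2f}")  # expected to land near x = 3
```

Note the trade-off the tutorial addresses: every extra sample spent on re-evaluation is a simulation run not spent on exploring new solutions, which is why more sophisticated noise-handling schemes allocate samples adaptively.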

Jürgen Branke

Jürgen Branke is Professor of Operational Research and Systems at Warwick Business School, University of Warwick, UK. He has been an active researcher in the field of Evolutionary Computation for over 20 years, has published over 160 papers in peer-reviewed journals and conferences, resulting in an H-Index of 48 (Google Scholar). His main research interests include optimization under uncertainty, simulation-based optimization and multi-objective optimization. Jürgen is Area Editor for the Journal of Heuristics, and Associate Editor for the Evolutionary Computation Journal, IEEE Transactions on Evolutionary Computation, and the Journal on Multi-Criteria Decision Analysis.

Solving Complex Problems with Coevolutionary Algorithms

Coevolutionary algorithms (CoEAs) go beyond the conventional paradigm of evolutionary computation in having the potential to answer questions about what to evolve against (competition) and/or establish how multi-agent behaviours can be discovered (cooperation). Competitive coevolution can be considered from the perspective of discovering tests that distinguish between the performance of candidate solutions. Cooperative coevolution implies that mechanisms are adopted for distributing fitness across more than one individual. In both these variants, the evolving entities engage in interactions that affect all the engaged parties, and result in search gradients that may be very different from those observed in conventional evolutionary algorithms, where fitness is defined externally. This allows CoEAs to model complex systems and solve problems that are difficult or not naturally addressed using conventional evolution. This tutorial will begin by establishing basic frameworks for competitive and cooperative coevolution and noting the links to related formalisms (interactive domains and test-based problems). We will identify the pathologies that potentially appear when assuming such coevolutionary formulations (disengagement, forgetting, mediocre stable states) and present the methods that address these issues. Compositional formulations will be considered in which hierarchies of development are explicitly formed, leading to the incremental complexification of solutions. The role of system dynamics will also be reviewed with regard to providing additional insight into how design decisions regarding, say, the formulation assumed for cooperation, impact the development of effective solutions. We will also present the concepts of coordinate systems and underlying objectives and how they can make search/learning more effective. Other covered developments will include hybridization with local search and relationships to shaping.
The concepts covered in the tutorial will be illustrated with numerous examples and applications, including the cooperative development of programs (Rubik’s Cube, Atari Arcade Learning Environment), competitive coevolution of game strategies (Othello), and machine learning (learning from data streams and object recognition).
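As a minimal illustration of the competitive setting described above, the sketch below uses a simple "number game" (a toy didactic domain, not one of the applications listed): candidates are scored by how many tests they pass, while tests are rewarded for defeating candidates, producing the interaction-driven search gradients the tutorial discusses.

```python
import random

def passes(candidate, test):
    """Toy interaction function: a candidate passes a test if it exceeds it."""
    return candidate > test

def truncation_mutate(pop, fit, size):
    """Keep the fitter half of the population, refill with mutated copies."""
    survivors = sorted(pop, key=fit.get, reverse=True)[: size // 2]
    return survivors + [s + random.gauss(0.0, 0.05) for s in survivors]

def coevolve(generations=100, size=20):
    candidates = [random.uniform(0.0, 1.0) for _ in range(size)]
    tests = [random.uniform(0.0, 1.0) for _ in range(size)]
    for _ in range(generations):
        # Fitness is defined only through interactions with the other population
        cand_fit = {c: sum(passes(c, t) for t in tests) for c in candidates}
        test_fit = {t: sum(not passes(c, t) for c in candidates) for t in tests}
        candidates = truncation_mutate(candidates, cand_fit, size)
        tests = truncation_mutate(tests, test_fit, size)
    return candidates, tests

random.seed(0)
candidates, tests = coevolve()
# The arms race drives both populations upward, well beyond the initial [0, 1] range
print(sum(candidates) / len(candidates))
```

Even this toy exhibits the arms-race dynamic: each population only improves relative to the other, which is also why the pathologies mentioned above (disengagement, forgetting) can arise when the two populations drift apart.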

Krzysztof Krawiec

Krzysztof Krawiec is an Associate Professor in the Institute of Computing Science at Poznan University of Technology, Poland. His primary research areas are genetic programming, semantic genetic programming, and coevolutionary algorithms, with applications in program synthesis, modeling, pattern recognition, and games. Dr. Krawiec co-chaired the European Conference on Genetic Programming in 2013 and 2014 and the GP track at GECCO'16, and is an associate editor of the Genetic Programming and Evolvable Machines journal.

Malcolm Heywood

Malcolm Heywood is a Professor of Computer Science at Dalhousie University, Canada. He has conducted research in genetic programming (GP) since 2000. He has a particular interest in scaling up the tasks that GP can potentially be applied to. His current research attempts to appraise the utility of coevolutionary methods under non-stationary environments as encountered in streaming data applications, and to coevolve agents for single- and multi-agent reinforcement learning tasks. In the latter case the goal is to coevolve behaviours for playing soccer in the RoboSoccer environment (a test bed for multi-agent reinforcement learning). Dr. Heywood is a member of the editorial board for Genetic Programming and Evolvable Machines (Springer). He was a track co-chair for the GECCO GP track in 2014 and a co-chair of the European Conference on Genetic Programming in 2015.

Visualization in Multiobjective Optimization

Multiobjective optimization algorithms usually produce a set of trade-off solutions approximating the Pareto front where no solution from the set is better than any other in all objectives (this is called an approximation set). While there exist many measures to assess the quality of approximation sets, no measure is as effective as visualization, especially if the Pareto front is known and can be visualized as well. Visualization in evolutionary multiobjective optimization is relevant in many aspects, such as estimating the location, range, and shape of the Pareto front, assessing conflicts and trade-offs between objectives, selecting preferred solutions, monitoring the progress or convergence of an optimization run, and assessing the relative performance of different algorithms. This tutorial will provide a comprehensive overview of methods used in multiobjective optimization for visualizing either individual approximation sets resulting from a single algorithm run or multiple approximation sets stemming from repeated runs. The methods will be organized in a recently proposed taxonomy of visualization methods and analyzed according to a methodology for assessing and comparing visualization methods. The methodology uses a list of requirements for visualization methods and benchmark approximation sets in a similar way as performance metrics and benchmark test problems are used for comparing optimization algorithms.
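Before any visualization, the approximation set itself has to be extracted from a set of solutions. A minimal sketch of the underlying Pareto-dominance filter (assuming minimization of all objectives; the function names are illustrative):

```python
def dominates(a, b):
    """True if a Pareto-dominates b (minimization): a is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def approximation_set(solutions):
    """Filter a list of objective vectors down to the mutually non-dominated ones."""
    return [s for s in solutions if not any(dominates(o, s) for o in solutions)]

points = [(1, 5), (2, 2), (4, 1), (3, 3), (5, 5)]
print(approximation_set(points))  # → [(1, 5), (2, 2), (4, 1)]
```

Here (3, 3) and (5, 5) are removed because (2, 2) is at least as good in both objectives; the three remaining vectors are the trade-off solutions that visualization methods then have to convey.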

Bogdan Filipic

Bogdan Filipic is a senior researcher and head of the Computational Intelligence Group at the Department of Intelligent Systems of the Jozef Stefan Institute, Ljubljana, Slovenia, and associate professor of Computer Science at the Jozef Stefan International Postgraduate School. He is an expert in stochastic optimization, evolutionary computation and intelligent data analysis. Recently he has been focusing on parallelization, the use of surrogate models and the visualization of results in evolutionary multiobjective optimization. He is also active in promoting evolutionary computation in practice and has led optimization projects for steel production, car manufacturing and energy distribution. He co-chaired the biennial BIOMA conference from 2004 to 2012, and served as the general chair of PPSN 2014. He was a guest lecturer at the VU University Amsterdam, The Netherlands, in Fall 2014, and gave tutorials on industrial applications of evolutionary algorithms at WCCI 2014 and CEC 2015, and on visualization in multiobjective optimization at GECCO 2016 and CEC 2017.

Tea Tušar

Tea Tušar is a research fellow at the Department of Intelligent Systems of the Jožef Stefan Institute in Ljubljana, Slovenia. She received the BSc degree in Applied Mathematics and the MSc degree in Computer and Information Science from the University of Ljubljana. She was awarded the PhD degree in Information and Communication Technologies by the Jožef Stefan International Postgraduate School for her work on visualizing solution sets in multiobjective optimization. She recently completed a one-year postdoctoral fellowship at Inria Lille in France, where she worked on benchmarking multiobjective optimizers. Her research interests include evolutionary algorithms for single- and multiobjective optimization, with emphasis on visualizing and benchmarking their results and applying them to real-world problems.

She was involved in the organization of a number of workshops at previous GECCOs (BBOB, VizGEC, Women@GECCO and Student Workshop), she proposed and organized the Job Market at GECCO 2017 and held a tutorial on Visualization in Multiobjective Optimization at GECCO 2016.

Specialized Tutorials

Automated Offline Design of Algorithms

Most optimization algorithms, including evolutionary algorithms and metaheuristics, and general-purpose solvers for integer or constraint programming, often have many parameters that need to be properly designed and tuned to obtain the best results on a particular problem. Automatic (offline) algorithm design methods help algorithm users determine the parameter settings that optimize the performance of the algorithm before it is actually deployed. Moreover, automatic offline algorithm design methods may potentially lead to a paradigm shift in algorithm design because they enable algorithm designers to explore much larger design spaces than traditional trial-and-error and experimental design procedures. Thus, algorithm designers can focus on inventing new algorithmic components, combine them in flexible algorithm frameworks, and let the final algorithm design decisions be taken by automatic algorithm design techniques for specific application contexts. This tutorial is structured into two main parts. In the first part, we will give an overview of the algorithm design and tuning problem, review recent methods for automatic algorithm design, and illustrate the potential of these techniques using recent, notable applications from the presenters' and other researchers' work. The second part of the tutorial will focus on a detailed discussion of more complex scenarios, including multi-objective problems, anytime algorithms, heterogeneous problem instances, and the automatic generation of algorithms from algorithm frameworks. The focus of this second part is, hence, on practical but challenging applications of automatic algorithm design. It will also demonstrate how to tackle algorithm design tasks using our irace software (http://iridia.ulb.ac.be/irace), which implements the iterated racing procedure for automatic algorithm design.
We will provide a practical step-by-step guide on using irace for the typical algorithm design scenario.
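The racing idea behind irace can be sketched in a few lines. The following is not irace's actual API: it is a toy racing loop with a hypothetical `run` function standing in for one execution of the algorithm being tuned, and a crude mean-plus-margin elimination rule where irace applies proper statistical tests.

```python
import random
import statistics

def run(config, instance):
    """Hypothetical stand-in for executing a configured algorithm on one
    problem instance; returns a cost (lower is better)."""
    del instance  # instance-specific effects are omitted in this toy
    return abs(config["step"] - 0.3) + random.gauss(0.0, 0.05)  # pretend 0.3 is ideal

def race(configs, instances, margin=0.05):
    """Evaluate surviving configurations instance by instance, discarding any
    whose mean cost is clearly worse than the current best."""
    costs = {i: [] for i in range(len(configs))}
    alive = set(costs)
    for inst in instances:
        for i in alive:
            costs[i].append(run(configs[i], inst))
        means = {i: statistics.mean(costs[i]) for i in alive}
        best = min(means.values())
        # Crude elimination rule; irace uses statistical tests here instead
        alive = {i for i in alive if means[i] <= best + margin}
        if len(alive) == 1:
            break
    return [configs[i] for i in alive]

random.seed(2)
configs = [{"step": s} for s in (0.0, 0.3, 0.6, 0.9)]
print(race(configs, instances=range(20)))  # the config with step 0.3 should win
```

The key budget-saving property is visible even in this sketch: clearly poor configurations are discarded after a few instances, so most evaluations are spent discriminating among the promising ones.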

Manuel López-Ibáñez

Dr. López-Ibáñez is a lecturer in the Decision and Cognitive Sciences Research Centre at the Alliance Manchester Business School, University of Manchester, UK. He received the M.S. degree in computer science from the University of Granada, Granada, Spain, in 2004, and the Ph.D. degree from Edinburgh Napier University, U.K., in 2009. He has published 17 journal papers, 6 book chapters and 36 papers in peer-reviewed proceedings of international conferences on diverse areas such as evolutionary algorithms, ant colony optimization, multi-objective optimization, pump scheduling and various combinatorial optimization problems. His current research interests are experimental analysis and the automatic configuration and design of stochastic optimization algorithms, for single and multi-objective problems. He is the lead developer and current maintainer of the irace software package for automatic algorithm configuration (http://iridia.ulb.ac.be/irace).

Thomas Stützle

Thomas Stützle is a senior research associate of the Belgian F.R.S.-FNRS working at the IRIDIA laboratory of Université libre de Bruxelles (ULB), Belgium. He received the Diplom (German equivalent of M.S. degree) in business engineering from the Universität Karlsruhe (TH), Karlsruhe, Germany in 1994, and his PhD and his habilitation in computer science both from the Computer Science Department of Technische Universität Darmstadt, Germany, in 1998 and 2004, respectively. He is the co-author of two books, "Stochastic Local Search: Foundations and Applications" and "Ant Colony Optimization", and he has published extensively in the wider area of metaheuristics, including 20 edited proceedings or books, 8 journal special issues, and more than 190 journal and conference articles and book chapters, many of which are highly cited. He is associate editor of Computational Intelligence, Swarm Intelligence, and Applied Mathematics and Computation, and on the editorial board of seven other journals including Evolutionary Computation and the Journal of Artificial Intelligence Research. His main research interests are in metaheuristics, swarm intelligence, methodologies for engineering stochastic local search algorithms, multi-objective optimization, and automatic algorithm configuration. For more than a decade he has been interested in automatic algorithm configuration and design methodologies, and he has contributed to some effective algorithm configuration techniques such as F-race, iterated F-race and ParamILS. His 2002 GECCO paper on "A Racing Algorithm For Configuring Metaheuristics" (joint work with M. Birattari, L. Paquete, and K. Varrentrapp) received the 2012 SIGEVO impact award.

NEW Bio-Inspired Approaches to Anomaly and Intrusion Detection

Intrusion detection systems (IDSs) have gained substantial attention because of their high-impact safety and security applications. Two main approaches are used when building such systems: i) misuse-based and ii) anomaly-based detection. While the former focuses on detecting attacks that follow a known pattern or signature, the latter builds a model of the system's nominal behavior and treats all deviating activities as anomalous or intrusions, and therefore provides a more robust solution. Bio-inspired approaches have been proposed to address the problem of anomaly-based intrusion detection, with artificial immune systems (AISs) being the most recognizable of all. However, recent developments in the areas of single- and multi-criterion evolutionary computation and adversarial co-evolutionary modeling and simulation have served as a foundation for novel, better-performing bio-inspired IDSs that have yielded competitive results. This tutorial will present the anomaly detection topic and its peculiarities, and review the current state of the art, departing from classical machine learning approaches, presenting the current state-of-the-art methods, and analyzing how and why those methods have been shown to outperform many of the currently established approaches.

Luis Martí

Luis Martí is Adjunct Professor at the Institute of Computing of the Universidade Federal Fluminense (UFF) in Niterói, Rio de Janeiro, Brazil, where he co-leads the Research on Intelligence and Optimisation (RIO) Group. Luis' research is mainly concerned with evolutionary computation, evolutionary multi-objective optimization, machine learning, neural networks, and related topics. He has experience in those topics at both theoretical and applied levels.

Marc Schoenauer

Evolutionary Algorithms in the Cloud

The main objective of the tutorial is to familiarize participants with basic cloud computing concepts so that, by the end of the tutorial, they are able to use them in the design, development and eventual deployment of evolutionary algorithms or any other metaheuristic in the cloud. Cloud computing comprises a series of technologies that change the way algorithms, and the applications designed to apply and test them, are designed and built. In the case of evolutionary computing or other bio-inspired metaheuristics, we are mainly talking about distributed implementations of evolutionary algorithms, but the changes go deeper and will probably require changes in the shape and design of the algorithm itself. At the end of the day, this tutorial addresses scientific programming in the cloud in general, using examples drawn from our own experience in evolutionary computation. In this tutorial we will walk through the different technologies and methodologies involved in cloud computing, such as microservices, message-based architectures, serverless computing and containers, showing with working examples how they can be applied to evolutionary algorithms. We will also talk about different cloud service providers and how they can be used, for free or at a low price, to carry out experiments in a reproducible and thus more efficient way. Eventually, we will show how details of cloud-native implementations, such as the cost and lifetime of processing units, have to be taken into consideration when designing and implementing algorithms.

JJ Merelo

JJ Merelo is a professor at the University of Granada. He has been involved in evolutionary computation for 20 years and has not missed a PPSN since 2000, including organising PPSN 2002 in Granada. He is the author of Algorithm::Evolutionary, a Perl evolutionary computation library, and has given tutorials at GECCO, PPSN and CEC conferences. He has also been a plenary speaker at ECTA 2013 and IDC 2014.

NEW Evolutionary Computation and Evolutionary Deep Learning for Image Analysis, Signal Processing and Pattern Recognition

The intertwining disciplines of image analysis, signal processing and pattern recognition are major fields of computer science, computer engineering and electrical and electronic engineering, with past and on-going research covering a full range of topics and tasks, from basic research to a huge number of real-world industrial applications. Among the techniques studied and applied within these research fields, evolutionary computation (EC), including evolutionary algorithms, swarm intelligence and other paradigms, is playing an increasingly relevant role. Recently, evolutionary deep learning has also started to attract considerable attention in these fields. The terms Evolutionary Image Analysis and Signal Processing and Evolutionary Computer Vision are more and more commonly accepted as descriptors of a clearly defined research area and family of techniques and applications. This has also been favoured by the recent availability of computing environments such as GPUs and grid/cloud/parallel computing systems, whose architecture and computation paradigm fit EC algorithms extremely well, alleviating the intrinsically heavy computational burden imposed by such techniques and even allowing for real-time applications. The tutorial will introduce the general framework within which Evolutionary Image Analysis, Signal Processing and Pattern Recognition can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include edge detection, segmentation, feature extraction/construction/selection, object tracking, object recognition, motion detection, pattern classification (particularly texture classification and recognition), speech recognition, and biomarker detection.
EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, learning classifier systems, evolutionary multi-objective optimisation as well as memetic/hybrid paradigms. In particular, we take a focus on the use of evolutionary deep learning idea for image analysis. We will show how such EC techniques can be effectively applied to image analysis and signal processing problems and provide promising results.

Mengjie Zhang

Mengjie Zhang is currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, a member of the Faculty of Graduate Research Board at the University, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science.

His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems with application areas of feature selection/construction and dimensionality reduction, computer vision and image processing, job shop scheduling, multi-objective optimisation, and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 400 research papers in refereed international journals and conferences in these areas.

He has been serving as an associate editor or editorial board member for over ten international journals, including IEEE Transactions on Evolutionary Computation, the Evolutionary Computation Journal (MIT Press), IEEE Transactions on Emergent Topics in Computational Intelligence, Genetic Programming and Evolvable Machines (GPEM, Springer), Applied Soft Computing, Natural Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He was the Tutorial Chair for GECCO 2014 and AIS-BIO Track Co-Chair for GECCO 2016. Since 2012, he has been involved in the GECCO, CEC, SSCI, EvoStar, SEAL and PAKDD conferences as a general/program/technical/tutorial/track/special session chair. Since 2007, he has been listed as one of the top ten genetic programming researchers worldwide by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).

Prof Zhang is currently chairing the IEEE CIS Emergent Technologies Technical Committee consisting of over 40 top CI researchers from the five continents and 17 task forces. He is the immediate Past Chair for the IEEE CIS Evolutionary Computation Technical Committee and a member of the IEEE CIS Award Committee. He is also a member of IEEE CIS Intelligent System Applications Technical Committee, a vice-chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, a vice-chair of the Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

Stefano Cagnoni

Stefano Cagnoni graduated in Electronic Engineering at the University of Florence, Italy, where he has been a PhD student and a post-doc until 1997. In 1994 he was a visiting scientist at the Whitaker College Biomedical Imaging and Computation Laboratory at the Massachusetts Institute of Technology. Since 1997 he has been with the University of Parma, where he has been Associate Professor since 2004.

Recent research grants include: co-management of a project funded by the Italian Railway Network Society (RFI) aimed at developing an automatic inspection system for train pantographs; a "Marie Curie Initial Training Network" grant for a four-year research training project in Medical Imaging using Bio-Inspired and Soft Computing; and a grant from "Compagnia di S. Paolo" on "Bioinformatic and experimental dissection of the signalling pathways underlying dendritic spine function".

He was Editor-in-chief of the "Journal of Artificial Evolution and Applications" from 2007 to 2010. Since 1999, he has been chair of EvoIASP, an event dedicated to evolutionary computation for image analysis and signal processing, now a track of the EvoApplications conference. Since 2005, he has co-chaired MedGEC, the workshop on medical applications of evolutionary computation at GECCO. He has co-edited special issues of journals dedicated to Evolutionary Computation for Image Analysis and Signal Processing, and is a member of the Editorial Board of the journals “Evolutionary Computation” and “Genetic Programming and Evolvable Machines”.

He has been awarded the "Evostar 2009 Award", in recognition of the most outstanding contribution to Evolutionary Computation.

Evolutionary Computation for Feature Selection and Feature Construction

In data mining/big data and machine learning, many real-world problems, such as bio-data classification and biomarker detection, image analysis, and text mining, often involve a large number of features/attributes. However, not all the features are essential, since many of them are redundant or even irrelevant, and the useful features are typically not equally important. Using all the features for classification or other data mining tasks typically does not produce good results due to the high dimensionality and the large search space. This problem can be addressed by feature selection, which selects a small subset of the original (relevant) features, or by feature construction, which creates a smaller set of high-level features from the original low-level features. Feature selection and construction are very challenging tasks due to the large search space and feature interaction problems. Exhaustive search for the best feature subset of a given dataset is practically impossible in most situations. A variety of heuristic search techniques have been applied to feature selection and construction, but most of the existing methods still suffer from stagnation in local optima and/or high computational cost. Owing to their global search potential, evolutionary computation techniques such as genetic algorithms, genetic programming, particle swarm optimisation, ant colony optimisation, differential evolution and evolutionary multi-objective optimisation have recently been used for feature selection and construction for dimensionality reduction, and have achieved great success. Many of these methods select/construct only a small number of important features, produce higher accuracy, and generate small models that are easy to understand/interpret and efficient to run on unseen data. Evolutionary computation techniques have now become an important means for handling big dimensionality issues where feature selection and construction are required.
The tutorial will introduce the general framework within which evolutionary feature selection and construction can be studied and applied, sketching a schematic taxonomy of the field and providing examples of successful real-world applications. The application areas to be covered will include bio-data classification and biomarker detection, image analysis and pattern classification, symbolic regression, network security and intrusion detection, and text mining. EC techniques to be covered will include genetic algorithms, genetic programming, particle swarm optimisation, differential evolution, ant colony optimisation, artificial bee colony optimisation, and evolutionary multi-objective optimisation. We will show how such evolutionary computation techniques can be effectively applied to feature selection/construction and dimensionality reduction and provide promising results.
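As a minimal illustration of wrapper-style evolutionary feature selection, the sketch below evolves binary feature masks with a genetic algorithm. The fitness function is a toy surrogate (a real wrapper would train a classifier on the selected features), and the "relevant" feature indices are invented for the example.

```python
import random

RELEVANT = {0, 3, 7}  # invented ground-truth informative features for this toy

def fitness(mask):
    """Toy surrogate for classifier accuracy: reward relevant features,
    penalise subset size (a real wrapper would train a classifier here)."""
    return sum(mask[i] for i in RELEVANT) - 0.1 * sum(mask)

def ga_feature_selection(n_features=10, pop_size=30, generations=40):
    pop = [[random.randint(0, 1) for _ in range(n_features)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]                # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, n_features)     # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(n_features)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

random.seed(3)
best = ga_feature_selection()
print([i for i, bit in enumerate(best) if bit])  # indices of selected features
```

The size penalty in the fitness function implements the trade-off described above: the search is pushed toward subsets that are both small and informative, which in a multi-objective formulation would become two separate objectives.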

Bing Xue

Bing Xue received her PhD degree in 2014 at Victoria University of Wellington, New Zealand. Since May 2015, she has been working as a Lecturer at Victoria University of Wellington. She is with the Evolutionary Computation Research Group at VUW, and her research focuses mainly on evolutionary computation, machine learning and data mining, particularly evolutionary computation for feature selection, feature construction, dimension reduction, symbolic regression, multi-objective optimisation, bioinformatics and big data. Bing is currently leading the strategic research direction on evolutionary feature selection and construction in the Evolutionary Computation Research Group at VUW, and has been organising special sessions and issues on evolutionary computation for feature selection and construction. She is also the Chair of the IEEE CIS Task Force on Evolutionary Computation for Feature Selection and Construction, and a member of the Evolutionary Computation Technical Committee and the Emergent Technologies Technical Committee of the IEEE CIS. She has been serving as a guest editor, associate editor or editorial board member for international journals; as program chair, special session chair, and symposium/special session organiser for a number of international conferences; and as a reviewer for top international journals and conferences in the field.

Mengjie Zhang

Mengjie Zhang is currently Professor of Computer Science at Victoria University of Wellington, where he heads the interdisciplinary Evolutionary Computation Research Group. He is a member of the University Academic Board, a member of the University Postgraduate Scholarships Committee, a member of the Faculty of Graduate Research Board at the University, Associate Dean (Research and Innovation) in the Faculty of Engineering, and Chair of the Research Committee of the Faculty of Engineering and School of Engineering and Computer Science.

His research is mainly focused on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems with application areas of feature selection/construction and dimensionality reduction, computer vision and image processing, job shop scheduling, multi-objective optimisation, and classification with unbalanced and missing data. He is also interested in data mining, machine learning, and web information extraction. Prof Zhang has published over 400 research papers in refereed international journals and conferences in these areas.

He has been serving as an associate editor or editorial board member for over ten international journals including IEEE Transactions on Evolutionary Computation, the Evolutionary Computation Journal (MIT Press), IEEE Transactions on Emergent Topics in Computational Intelligence, Genetic Programming and Evolvable Machines (GPEM, Springer), Applied Soft Computing, Natural Computing, and Engineering Applications of Artificial Intelligence, and as a reviewer for over 30 international journals. He was the Tutorial Chair for GECCO 2014 and AIS-BIO Track Co-Chair for GECCO 2016. Since 2012, he has been involved in the GECCO, CEC, SSCI, EvoStar, SEAL and PAKDD conferences as a general/program/technical/tutorial/track/special session chair. Since 2007, he has been listed as one of the top ten genetic programming researchers worldwide by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).

Prof Zhang is currently chairing the IEEE CIS Emergent Technologies Technical Committee, consisting of over 40 top CI researchers from five continents and 17 task forces. He is the immediate Past Chair of the IEEE CIS Evolutionary Computation Technical Committee and a member of the IEEE CIS Award Committee. He is also a member of the IEEE CIS Intelligent System Applications Technical Committee, a vice-chair of the IEEE CIS Task Force on Evolutionary Feature Selection and Construction, a vice-chair of the Task Force on Evolutionary Computer Vision and Image Processing, and the founding chair of the IEEE Computational Intelligence Chapter in New Zealand.

Evolutionary Robotics

In the same way as evolution shapes animals, is it possible to use artificial evolution to automatically design robots? This attractive vision gave birth to Evolutionary Robotics (ER), an interdisciplinary field that incorporates ideas from Biology, Robotics and Artificial Intelligence. Over the last 20 years, Evolutionary Robotics has explored many research avenues, from using ER models to understand natural evolution to co-designing robots' bodies and brains. This tutorial will give an overview of the various questions addressed in ER, relying on examples from the literature. Past achievements and major contributions, as well as specific challenges in ER, will be described. The tutorial will in particular focus on: what is evolutionary robotics?; fitness function design and the influence of selection pressure; encodings of controllers and morphologies; evolution for physical robots and the reality gap; embodied evolution and collective robotics systems; open questions and future challenges.

Nicolas Bredeche

Nicolas Bredeche is Professeur des Universités (Professor) at Université Pierre et Marie Curie (UPMC, Paris, France). His research is done in the team Architectures and Models for Adaptation and Cognition, within the Institute of Intelligent Systems and Robotics (ISIR, CNRS). His field of research is mostly concerned with Evolutionary Computation and Complex Systems (self-adaptive collective robotic systems, generative and developmental systems, evolutionary robotics). In particular, part of his work is about (a) evolutionary design and/or adaptation of groups of embodied agents and (b) artificial ontogeny, such as multi-cellular developmental systems and self-regulated networks.

Nicolas Bredeche is author of more than 30 papers in journals and major conferences in the field. He has been (or currently is) advisor or co-advisor of six PhD students and has served on program committees and/or as track chair of the major conferences in the field of Evolutionary Computation (incl. GECCO track chair for the Artificial Life and Robotics track in 2012/2013). He is also a member of the French Evolution Artificielle association and has regularly co-organized the French Artificial Evolution one-day seminar (JET) since 2009. He has also organized several international workshops on Evolution, Robotics, and Development of Artificial Neural Networks.

Stéphane Doncieux

Stéphane Doncieux is Professeur des Universités (Professor) in Computer Science at Sorbonne Université, Paris, France. His research is mainly concerned with the use of evolutionary algorithms in the context of optimization or synthesis of robot controllers. He worked in a robotics context to design, for instance, controllers for flying robots, but also in the context of modelling where he worked on the use of multi-objective evolutionary algorithms to optimize and study computational models. More recently, he focused on the use of multi-objective approaches to tackle learning problems like premature convergence or generalization.
He is the head of the AMAC (Architecture and Models of Adaptation and Cognition) research team, with 12 permanent researchers, 3 post-docs and 13 PhD students. Researchers of the team work on different aspects of learning in the context of motion control and cognition, both from a computational neuroscience perspective and a robotics perspective.

Jean-Baptiste Mouret

Jean-Baptiste Mouret is a researcher at Inria, a French research institute dedicated to computer science and mathematics. He was previously an assistant professor (maître de conférences) at ISIR (Institute for Intelligent Systems and Robotics), which is part of Université Pierre et Marie Curie–Paris 6 (UPMC). He is the principal investigator of an ERC grant (ResiBots – Robots with animal-like resilience, 2015-2020) and was the recipient of a French “ANR young researcher” grant (Creadapt – Creative adaptation by Evolution, 2012-2015).

Overall, J.-B. Mouret conducts research that intertwines evolutionary algorithms, neuro-evolution, and machine learning to make robots more adaptive. His work was recently featured on the cover of Nature (Cully et al., 2015) and received the "Outstanding Paper of 2015" award from the Society for Artificial Life (2016), the French "La Recherche" award (2016), the GECCO best paper award (2011, GDS track), and the IEEE CEC "best student paper" award (2009).

J.-B. Mouret obtained a M.S. in computer science from EPITA in 2004, a M.S. in artificial intelligence from the Pierre and Marie Curie University (Paris, France) in 2005 and a Ph.D. in computer science from the same university in 2008. He co-edited two books, authored more than 50 papers in major journals and conferences, and was co-organizer of two conferences and three workshops about bio-inspired robotics and evolutionary robotics.

Medical Applications of Evolutionary Computation

The application of genetic and evolutionary computation to problems in medicine has increased rapidly over the past five years, but there are specific issues and challenges that distinguish it from other real-world applications. Obtaining reliable and coherent patient data, establishing the clinical need and demonstrating value in the results obtained are all aspects that require careful and detailed consideration. My research uses genetic programming (specifically, Cartesian Genetic Programming) in the diagnosis and monitoring of Parkinson's disease, Alzheimer's disease and other neurodegenerative conditions. It is also being used in the early detection of breast cancer through automated assessment of mammograms. I have undertaken clinical studies in the UK (Leeds, Harrogate, Plymouth, Truro, Oxford and Newcastle), USA (UCSF), UAE (Dubai Rashid Hospital), Australia (Monash Medical Center), China (Ruijin Hospital, Shanghai) and Singapore (National Neuroscience Institute). The technology is protected through 3 patent applications and a spinout company marketing four medical devices. The tutorial considers the following topics: 1) Introduction to medical applications of genetic and evolutionary computation and how these differ from other real-world applications, 2) Overview of past work from both a medical and an evolutionary computation point of view, 3) Three case examples of medical applications: (i) diagnosis and monitoring of Parkinson's disease, (ii) detection of breast cancer from mammograms, (iii) cancer screening using Raman spectroscopy, 4) Practical advice on how to get started working on medical applications, including existing medical databases and conducting new medical studies, commercialization and protecting intellectual property, 5) Summary, further reading and links.

Stephen L. Smith

Stephen L. Smith received a BSc in Computer Science and then an MSc and PhD in Electronic Engineering from the University of Kent, UK. He is currently a reader in the Department of Electronics at the University of York, UK.

Stephen's main research interests are in developing novel representations of evolutionary algorithms particularly with application to problems in medicine. His work is currently centered on the diagnosis of neurological dysfunction and analysis of mammograms. Stephen was program chair for the Euromicro Workshop on Medical Informatics, program chair and local organizer for the Sixth International Conference on Information Processing in Cells and Tissues (IPCAT) and guest editor for the subsequent special issue of BioSystems journal. More recently he was tutorial chair for the IEEE Congress on Evolutionary Computation (CEC) in 2009, local organiser for the International Conference on Evolvable Systems (ICES) in 2010 and co-general chair for the Ninth International Conference on Information Processing in Cells and Tissues (IPCAT) in April 2012. Stephen currently holds a Royal Academy of Engineering Enterprise Fellowship.

Stephen is co-founder and organizer of the MedGEC Workshop, which is now in its tenth year. He is also guest editor for a special issue of Genetic Programming and Evolvable Machines (Springer) on medical applications and co-editor of a book on the subject (John Wiley, November 2010).

Stephen is associate editor for the journal Genetic Programming and Evolvable Machines and a member of the editorial board for the International Journal of Computers in Healthcare and Neural Computing and Applications.

Stephen has some 75 refereed publications, is a Chartered Engineer and a fellow of the British Computer Society.

NEW Theory of Estimation-of-Distribution Algorithms

Estimation-of-distribution algorithms (EDAs) are general metaheuristics for optimization that represent a more recent and popular alternative to classical approaches like evolutionary algorithms. In a nutshell, EDAs typically do not directly evolve populations of search points but build probabilistic models of promising solutions by repeatedly sampling and selecting points from the underlying search space. Until recently, however, theoretical knowledge of the working principles of EDAs was relatively limited. In the last few years, significant progress has been made in the theoretical understanding of EDAs. This tutorial provides an up-to-date overview of the most commonly analyzed EDAs and the most important theoretical results in this area, ranging from convergence to runtime analyses. Different algorithms and objective functions, including optimization under uncertainty (i.e., noise), are covered. The tutorial will present typical benchmark functions and tools relevant in the theoretical analysis. It will be demonstrated that some EDAs are very sensitive to their parameter settings and that their runtime can be optimized for very different choices of learning parameters. Altogether, the tutorial will make the audience familiar with the state of the art in the theory of EDAs and the most useful techniques for their analysis. We will conclude with open problems and directions for future research.
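The working principle described above can be illustrated with a minimal sketch (our illustration, not code from the tutorial) of the univariate marginal distribution algorithm (UMDA), one of the most commonly analyzed EDAs: instead of recombining individuals, it maintains a vector of marginal bit frequencies, samples a population from that model, selects the best samples, and re-estimates the frequencies. The objective here is the standard OneMax benchmark; the frequency borders keeping probabilities away from 0 and 1 are a standard device in the theoretical literature to prevent premature fixation.

```python
import random

random.seed(1)

N = 20          # bit-string length
LAMBDA = 50     # offspring population size
MU = 25         # number of selected individuals
MARGIN = 1 / N  # borders keep frequencies in [1/n, 1 - 1/n]

def onemax(x):
    """Benchmark objective: number of one-bits."""
    return sum(x)

def umda(generations=60):
    p = [0.5] * N  # one marginal frequency per bit position
    for _ in range(generations):
        # sample LAMBDA individuals from the product distribution
        pop = [[int(random.random() < p[i]) for i in range(N)]
               for _ in range(LAMBDA)]
        pop.sort(key=onemax, reverse=True)
        selected = pop[:MU]  # truncation selection
        # re-estimate each marginal from the selected individuals
        for i in range(N):
            freq = sum(x[i] for x in selected) / MU
            p[i] = min(1 - MARGIN, max(MARGIN, freq))  # clamp to borders
    return p

p = umda()
print([round(pi, 2) for pi in p])
```

On OneMax the frequencies drift towards the upper border, so the final model samples near-optimal strings with high probability; the sensitivity to the choice of LAMBDA and MU is exactly the kind of effect the runtime analyses covered in the tutorial make precise.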

Carsten Witt

Carsten Witt is an associate professor at the Technical University of Denmark. He received his diploma and Ph.D. in Computer Science from the Technical University of Dortmund in 2000 and 2004, respectively. Carsten's main research interests are the theoretical aspects of randomized search heuristics, in particular evolutionary algorithms, ant colony optimization and particle swarm optimization. He co-organized the biannual Dagstuhl seminar "Theory of Evolutionary Algorithms" in 2008 and 2010 and has given tutorials on the computational complexity of bioinspired search heuristics in combinatorial optimization at several previous GECCOs and other venues. Carsten Witt is a member of the steering committee of the international Theory of Randomized Search Heuristics (ThRaSH) workshop, which he co-organized in 2011, and a member of the editorial boards of Evolutionary Computation and Theoretical Computer Science. Together with Frank Neumann, he has authored the textbook "Bioinspired Computation in Combinatorial Optimization – Algorithms and Their Computational Complexity", published by Springer.