# Genetic Programming for Kernel-based Learning with Co-evolving Subsets Selection

###### Abstract

Support Vector Machines (SVMs) are well-established Machine Learning (ML) algorithms. They rely on the fact that i) linear learning can be formalized as a well-posed optimization problem; ii) non-linear learning can be reduced to linear learning thanks to the kernel trick and the mapping of the initial search space onto a high-dimensional feature space. The kernel is designed by the ML expert and it governs the efficiency of the SVM approach. In this paper, a new approach for the automatic design of kernels by Genetic Programming, called the Evolutionary Kernel Machine (EKM), is presented. EKM combines a well-founded fitness function inspired from the margin criterion, and a co-evolution framework ensuring the computational scalability of the approach. Empirical validation on standard ML benchmarks demonstrates that EKM is competitive with state-of-the-art SVMs with tuned hyper-parameters.

## 1 Introduction

Kernel methods, including the so-called Support Vector Machines (SVMs), are well-established learning approaches with both strong theoretical foundations and successful practical applications [1]. SVMs rely on two main advances in statistical learning. First, the linear supervised machine learning task is set as a well-posed (quadratic) optimization problem. Second, the above setting is extended to non-linear learning via the kernel trick: given a (manually designed) change of representation mapping the initial space onto the so-called feature space, linear hypotheses are characterized in terms of the scalar product in the feature space, or kernel. These hypotheses correspond to non-linear hypotheses in the initial space. Although many specific kernels have been proposed in the literature, designing a kernel well suited for an application domain or a dataset so far remains an art more than a science.

This paper proposes a system, the Evolutionary Kernel Machine (EKM), for the automatic design of data-specific kernels. EKM applies Genetic Programming (GP) [2] to construct symmetric functions (kernels), and optimizes a fitness function inspired from the margin criterion [3]. Kernels are assessed within a Nearest Neighbor classification process [4, 5]. In order to cope with computational complexity, a cooperative co-evolution governs the prototype subset selection and the GP kernel design, while the fitness case subset selection undergoes a competitive co-evolution.

The paper is organized as follows. Section 2 introduces the formal background and notations on kernel methods. Sections 3 and 4 respectively describe the GP representation and the fitness function proposed for the EKM. Scalability issues are addressed in the co-evolutionary framework introduced in Section 5. Results on benchmark problems are given in Section 6. Finally, related works are discussed in Section 7 before concluding the paper in Section 8.

## 2 Formal Background and Notations

Supervised machine learning takes as input a dataset $\mathcal{E} = \{(x_i, y_i),\ i = 1 \ldots n\}$ made of $n$ examples, where $x_i \in X$ and $y_i \in Y$ respectively stand for the description and the label of the $i$-th example. The goal is to construct a hypothesis $h$ mapping $X$ onto $Y$ with minimal generalization error. Only vectorial domains ($X = \mathbb{R}^d$) are considered throughout this paper; further, only binary classification problems ($Y = \{-1, +1\}$) are considered in the rest of this section.

Due to space limitations, the reader is referred to [6] for a comprehensive presentation of SVMs. In the simplest (linearly separable) case, the hyper-plane maximizing the geometrical margin (distance to the closest examples) is constructed. The label associated to an example $x$ is the sign of $h(x)$, with:

$$h(x) = \sum_{i=1}^{n} \alpha_i\, y_i\, \langle x_i, x \rangle + b$$

where $\langle x, x' \rangle$ denotes the scalar product of $x$ and $x'$. Let $\Phi$ denote a mapping from the instance space $X$ onto the feature space, and let the kernel $K$ be defined as:

$$K(x, x') = \langle \Phi(x), \Phi(x') \rangle$$

Under some conditions (the kernel trick), non-linear classifiers on $X$ are constructed as in the linear case, and characterized as $h(x) = \sum_{i=1}^{n} \alpha_i\, y_i\, K(x_i, x) + b$.

Besides SVMs, the kernel trick can be used to revisit all learning methods involving a distance measure. In this paper, the kernel nearest neighbor (Kernel-NN) algorithm [5], which revisits the $k$-nearest neighbors ($k$-NN) algorithm [4], is considered. Given a distance (or dissimilarity) function $d$ defined on the instance space $X$, a set $\mathcal{E}$ of labelled examples and an instance $x$ to be classified, the $k$-NN algorithm: i) determines the $k$ examples closest to $x$ according to $d$; ii) outputs the majority class of these $k$ examples. Kernel-NN proceeds as $k$-NN, where the distance is defined after the kernel (more on this in Section 4).
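The Kernel-NN rule above can be sketched in a few lines (a minimal illustrative sketch, not the authors' implementation; it assumes the kernel-induced dissimilarity $K(x,x) - 2K(x,x') + K(x',x')$ introduced in Section 4, and the helper names are hypothetical):

```python
import numpy as np

def kernel_dissimilarity(K, x, xp):
    """Kernel-induced squared distance in feature space:
    ||Phi(x) - Phi(x')||^2 = K(x,x) - 2 K(x,x') + K(x',x')."""
    return K(x, x) - 2.0 * K(x, xp) + K(xp, xp)

def kernel_nn_classify(K, examples, labels, x, k=3):
    """Kernel k-NN: rank the labelled examples by kernel-induced
    dissimilarity to x and return the majority class among the k closest."""
    dists = [kernel_dissimilarity(K, e, x) for e in examples]
    nearest = np.argsort(dists)[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)
```

With the linear kernel $K(x, x') = \langle x, x' \rangle$, the dissimilarity reduces to the squared Euclidean distance, so `kernel_nn_classify` behaves exactly like the ordinary $k$-NN rule.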

Standard kernels on $\mathbb{R}^d$ include the Gaussian kernel $K_G(x, x') = \exp(-\|x - x'\|^2 / \sigma^2)$ and the polynomial kernel $K_P(x, x') = (\langle x, x' \rangle + c)^k$. It must be noted that additions, multiplications and compositions of kernels are kernels, and therefore the standard SVM machinery can find the optimal value of hyper-parameters (e.g. $\sigma$ or $c$) among a finite set. Quite the opposite, the functional (symbolic) optimization of the kernel $K$ cannot, to our best knowledge, be tackled except by Genetic Programming.
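The closure properties mentioned above (sums and products of kernels are kernels) can be made concrete with a small sketch; the helper names `kernel_sum` and `kernel_product` are illustrative, not part of any library:

```python
import numpy as np

def gaussian(sigma):
    """Gaussian kernel K_G(x, x') = exp(-||x - x'||^2 / sigma^2)."""
    return lambda x, xp: float(np.exp(-np.dot(x - xp, x - xp) / sigma**2))

def polynomial(c, degree):
    """Polynomial kernel K_P(x, x') = (<x, x'> + c)^degree."""
    return lambda x, xp: float((np.dot(x, xp) + c) ** degree)

def kernel_sum(K1, K2):
    """The sum of two kernels is a kernel."""
    return lambda x, xp: K1(x, xp) + K2(x, xp)

def kernel_product(K1, K2):
    """The product of two kernels is a kernel."""
    return lambda x, xp: K1(x, xp) * K2(x, xp)
```

Selecting $\sigma$ or $c$ from a finite grid amounts to evaluating finitely many such closed-form kernels, whereas EKM searches the (infinite) space of symbolic combinations itself.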

## 3 Genetic Programming of Kernels

The Evolutionary Kernel Machine applies GP to determine symmetric functions on $\mathbb{R}^d \times \mathbb{R}^d$ best suited to the dataset at hand. As shown in Table 1, the main difference compared to standard symbolic regression is that terminals are symmetric expressions of $x$ and $x'$ (e.g. $x_i + x'_i$ or $x_i \times x'_i$), enforcing the symmetry of the kernels ($K(x, x') = K(x', x)$).

The initialization of GP individuals is done using a ramped half-and-half procedure [2]. The selection probability of the component-wise terminals (e.g. $x_i + x'_i$ or $x_i \times x'_i$) is divided by $d$ (resp. $d - 1$ for the crossed terminal), where $d$ is the dimension of the initial instance space ($X = \mathbb{R}^d$).

| Name | # args. | Description |
|---|---|---|
| ADD2 | 2 | Addition of two values, $a_1 + a_2$. |
| ADD3 | 3 | Addition of three values, $a_1 + a_2 + a_3$. |
| ADD4 | 4 | Addition of four values, $a_1 + a_2 + a_3 + a_4$. |
| SUB | 2 | Subtraction, $a_1 - a_2$. |
| MUL2 | 2 | Multiplication of two values, $a_1 a_2$. |
| MUL3 | 3 | Multiplication of three values, $a_1 a_2 a_3$. |
| MUL4 | 4 | Multiplication of four values, $a_1 a_2 a_3 a_4$. |
| DIV | 2 | Protected division of $a_1$ by $a_2$. |
| MAX | 2 | Maximum value, $\max(a_1, a_2)$. |
| MIN | 2 | Minimum value, $\min(a_1, a_2)$. |
| EXP | 1 | Exponential value, $e^{a_1}$. |
| POW2 | 1 | Square power, $a_1^2$. |
| $x_i + x'_i$ | 0 | Add the $i$-th components. |
| $x_i \times x'_i$ | 0 | Multiply the $i$-th components. |
| $\max(x_i, x'_i)$ | 0 | Maximum between the $i$-th components. |
| $\min(x_i, x'_i)$ | 0 | Minimum between the $i$-th components. |
| $x_i x'_{i+1} + x'_i x_{i+1}$ | 0 | Crossed multiplication-addition between the $i$-th and $(i+1)$-th components. |
| DOT | 0 | Scalar product of $x$ and $x'$, $\langle x, x' \rangle$. |
| EUC | 0 | Euclidean distance between $x$ and $x'$, $\|x - x'\|$. |
| E | 0 | Ephemeral random constants, generated uniformly at random. |

Table 1: Set of primitives used in the GP kernels.
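To illustrate how such trees denote kernels, a minimal evaluator for a subset of the Table 1 primitives might look as follows (a sketch only: the tuple encoding, the `EPS` threshold for protected division, and the restriction to a few primitives are assumptions, not the authors' implementation):

```python
import numpy as np

EPS = 1e-6  # assumed threshold for protected division

def eval_tree(node, x, xp):
    """Evaluate a GP kernel tree on the pair (x, xp).
    Internal nodes are tuples (op, child1, ...); leaves are either
    the terminal names 'DOT'/'EUC' or numeric ephemeral constants."""
    if isinstance(node, tuple):
        op, *args = node
        vals = [eval_tree(a, x, xp) for a in args]
        if op == "ADD2": return vals[0] + vals[1]
        if op == "SUB":  return vals[0] - vals[1]
        if op == "MUL2": return vals[0] * vals[1]
        if op == "DIV":  return vals[0] / vals[1] if abs(vals[1]) > EPS else 1.0
        if op == "MAX":  return max(vals)
        if op == "MIN":  return min(vals)
        if op == "EXP":  return float(np.exp(vals[0]))
        if op == "POW2": return vals[0] ** 2
        raise ValueError(f"unknown primitive {op}")
    if node == "DOT":
        return float(np.dot(x, xp))
    if node == "EUC":
        return float(np.linalg.norm(x - xp))
    return float(node)  # ephemeral random constant

# Example individual: K(x, x') = exp(0 - ||x - x'||^2), a Gaussian-like kernel.
tree = ("EXP", ("SUB", 0.0, ("POW2", "EUC")))
```

Because every terminal is symmetric in $x$ and $x'$, any tree built from these primitives evaluates to a symmetric function, as required of a kernel.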

Indeed, the kernel functions built from Table 1 might not satisfy Mercer's condition (positive semi-definiteness) required for SVM optimization [6]. However, these kernels will be assessed along a Kernel-NN classification rule [5]; therefore the fact that they are not necessarily positive definite is not a limitation. Quite the contrary, EKM kernels can achieve feature selection; typically, terminals associated to non-informative features should disappear along evolution. The use of EKM for feature selection will be examined in future work.

## 4 Fitness Measure

Every kernel $K$ is assessed after the Kernel-NN classification rule, using the dissimilarity $d_K$ defined as

$$d_K(x, x') = K(x, x) - 2K(x, x') + K(x', x').$$

Given a prototype set $P = \{p_1, \ldots, p_{|P|}\}$ and a training example $(x, y)$, let us assume that $P$ is ordered by increasing dissimilarity to $x$ ($d_K(x, p_1) \leq \ldots \leq d_K(x, p_{|P|})$). Let $r_+(x)$ denote the minimum rank over all prototype examples belonging to the same class as $x$; let $r_-(x)$ denote the minimum rank over all other prototype examples (not belonging to the same class as $x$).

As noted by [3], the quality of the Kernel-NN classification of $x$ can be assessed from $\delta(x) = r_-(x) - r_+(x)$. The higher $\delta(x)$, the more confident the classification of $x$ is, e.g. with respect to perturbations of $x$ or $P$; $\delta(x)$ measures the margin of $x$ with respect to Kernel-NN.

Accordingly, given a prototype set $P$ and a fitness case subset $S$, the fitness function associated to a kernel $K$ is defined from the margins of the fitness cases:

$$\mathcal{F}_{P,S}(K) = \frac{1}{|S|} \sum_{(x, y) \in S} \delta(x).$$

The computation of $\mathcal{F}_{P,S}(K)$ has linear complexity in the number $|P|$ of prototypes and in the number $|S|$ of fitness cases. In a standard setting, $P$ and $S$ would both coincide with the whole training set; however, the resulting quadratic complexity of the fitness computation with respect to the number of training examples is incompatible with the scalability of the approach.
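The margin and fitness computations described above can be sketched as follows (a minimal sketch: the function names are illustrative, and averaging the margins over the fitness cases is one plausible aggregation, not necessarily the paper's exact formula):

```python
import numpy as np

def margin(K, protos, proto_labels, x, y):
    """delta(x) = r_-(x) - r_+(x): minimum rank of an opposite-class
    prototype minus minimum rank of a same-class prototype, where
    prototypes are ranked by increasing kernel-induced dissimilarity to x."""
    d = [K(p, p) - 2.0 * K(p, x) + K(x, x) for p in protos]
    order = np.argsort(d)  # rank 0 = closest prototype
    r_plus = min(r for r, i in enumerate(order) if proto_labels[i] == y)
    r_minus = min(r for r, i in enumerate(order) if proto_labels[i] != y)
    return r_minus - r_plus

def fitness(K, protos, proto_labels, cases, case_labels):
    """Aggregate margin over the fitness cases: O(|P| log |P|) per case."""
    return sum(margin(K, protos, proto_labels, x, y)
               for x, y in zip(cases, case_labels)) / len(cases)
```

A correctly classified, well-separated example yields a large positive margin; an example whose nearest prototype belongs to the opposite class yields a negative one.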

## 5 Tractability Through Co-evolution

EKM scalability is obtained along two directions, by i) reducing the number of prototypes used for classification, and ii) reducing the size of the fitness case subset considered during each generation.

More precisely, a co-evolutionary framework involving three species is considered, as detailed in Figure 1.

The first species includes the GP kernels. The second species includes the prototype subset (fixed-size subsets of the training set), subject to a cooperative co-evolution [7] with the GP kernels. The third species includes the fitness case subset (fixed-size subsets of the training set), subject to a competitive host-parasite co-evolution [8] with the GP kernels.

The prototype species is evolved to find good prototypes, i.e. prototypes that maximize the fitness of the GP kernels. The fitness case species is evolved to find hard, challenging examples, i.e. examples that minimize the kernel fitness. Of course there is a danger that the fitness case subset ultimately captures the noisy examples, as observed in the boosting framework [9] (see Section 6.2).

Both the prototype and fitness case species are initialized using stratified uniform sampling without replacement (the class distribution in the sample is the same as in the whole dataset and all examples are distinct). Both species are evolved using an evolution strategy: in each generation, offspring are generated by uniform stratified replacement of a given fraction of the parent subset, and assessed after the best kernel in the current kernel population. The parent subset is replaced by the best offspring. In each generation, the kernels are assessed after the current prototype and fitness case individuals.
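The generation loop described above can be sketched as follows (an illustrative skeleton only: `fitness` and `mutate_subset` are placeholder callables, the number of offspring is arbitrary, and the GP variation of the kernel species itself is omitted):

```python
def coevolution_step(kernels, P, S, train_set, fitness, mutate_subset,
                     n_offspring=4):
    """One generation of the three-species co-evolution sketch.
    - kernels: GP population, assessed against the current P and S;
    - P (prototypes): cooperative, offspring maximizing fitness survives;
    - S (fitness cases): competitive, offspring minimizing fitness survives."""
    # 1. Assess all GP kernels on the current subsets; keep the best kernel.
    scored = [(fitness(K, P, S), K) for K in kernels]
    best_K = max(scored, key=lambda t: t[0])[1]

    # 2. Cooperative step: the prototype offspring that maximizes the
    #    best kernel's fitness replaces its parent.
    P = max((mutate_subset(P, train_set) for _ in range(n_offspring)),
            key=lambda Pc: fitness(best_K, Pc, S))

    # 3. Competitive step: the fitness case offspring that minimizes the
    #    best kernel's fitness replaces its parent (harder examples win).
    S = min((mutate_subset(S, train_set) for _ in range(n_offspring)),
            key=lambda Sc: fitness(best_K, P, Sc))

    return best_K, P, S
```

The opposite selection pressures in steps 2 and 3 are what make the prototype species cooperative and the fitness case species competitive (host-parasite) with respect to the kernels.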

## 6 Experimental Validation

This section reports on the experimental validation of EKM on a standard set of benchmark problems [10], detailed in Table 2. The system is implemented using the Open BEAGLE framework for evolutionary computation [11] (http://beagle.gel.ulaval.ca).

| Data set | Size | # of features | # of classes | Application domain |
|---|---|---|---|---|
| bcw | 699 | 9 | 2 | Wisconsin's breast cancer: benign vs malignant. |
| bld | 345 | 6 | 2 | BUPA liver disorders: with vs without disorder. |
| bos | 506 | 13 | 3 | Boston housing: median value discretized into three classes. |
| cmc | 1473 | 9 | 3 | Contraceptive method choice: no contraception, short-term, or long-term contraception. |
| ion | 351 | 34 | 2 | Ionosphere radar signal: structure detected vs no structure detected. |
| pid | 768 | 8 | 2 | Pima Indians diabetes: tested negative vs positive for diabetes. |

Table 2: UCI benchmark problems used for the experimental validation.

### 6.1 Experimental Setting

The parameters used in EKM are reported in Table 3. The average evolution time for one run is less than one hour (AMD Athlon 2800+).

On each problem, EKM has been evaluated along the standard 10-fold cross validation methodology. The whole dataset is partitioned into 10 (stratified) subsets; the training set is made of all subsets but one; the best hypothesis learned from this training set is evaluated on the remaining subset, or test set. For each fold, EKM is launched 10 times; the 5 best hypotheses (according to their accuracy on the training set) are assessed on the test set; the reported accuracy is the average, over the 10 folds (as the test set ranges over the 10 subsets of the whole dataset), of the test accuracy of these 5 best hypotheses. In total, EKM is launched 100 times on each problem.
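The stratified partition underlying this protocol can be sketched as below (`stratified_folds` is an illustrative helper, not the authors' code; it distributes each class as evenly as possible across the folds):

```python
import numpy as np

def stratified_folds(labels, n_folds=10, seed=0):
    """Split example indices into n_folds stratified folds: each class
    is spread as evenly as possible, so every fold approximately
    preserves the class distribution of the whole dataset."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(n_folds)]
    for c in np.unique(labels):
        # Shuffle the indices of class c, then deal them out round-robin.
        idx = rng.permutation(np.flatnonzero(labels == c))
        for i, j in enumerate(idx):
            folds[i % n_folds].append(int(j))
    return folds
```

Each of the 10 folds then serves once as the test set while the other 9 form the training set, which is what makes the reported accuracies comparable across algorithms.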

| Parameter | Description and parameter values |
|---|---|
| **GP kernel functions evolution parameters** | |
| Primitives | See Table 1. |
| GP population size | A single population of kernel individuals. |
| Stop criterion | Evolution ends after a fixed number of generations. |
| Replacement strategy | Genetic operations applied following a generational scheme. |
| Selection | Tournament selection with lexicographic parsimony pressure. |
| Crossover | Classical subtree crossover [2]. |
| Standard mutation | Crossover with a random individual. |
| Swap node mutation | Exchange a primitive with another of the same arity. |
| Shrink mutation | Replace a branch with one of its children, removing the mutated branch and the other children subtrees (if any). |
| **Prototype subset selection parameters** | |
| Prototype subset size | 50 examples in a prototype subset. |
| Number of offspring | Fixed number of offspring per generation. |
| Mutation rate | Fraction of the prototype examples replaced in each mutation. |
| **Fitness case subset selection parameters** | |
| Fitness case subset size | Fixed number of examples in a fitness case subset. |
| Number of offspring | Fixed number of offspring per generation. |
| Mutation rate | Fraction of the fitness case examples replaced in each mutation. |

Table 3: EKM parameter settings.

EKM is compared to state-of-the-art algorithms, including the $k$-nearest neighbor and SVMs with Gaussian kernels, similarly assessed using 10-fold cross validation. For $k$-NN, the underlying distance is the Euclidean one, with and without scaling normalization; the number $k$ of neighbors has been varied over a range of values, and the best setting has been kept. For Gaussian SVMs, the Torch3 implementation has been used [12]; the error cost (parameter $C$) has been varied over a range of values, the kernel parameter $\sigma$ has been set to a fixed value, and the best setting has been similarly retained.

### 6.2 Results

Table 4 shows the results obtained by EKM compared with -NN and Gaussian SVM, together with the optimal parameters for the latter algorithms.

| Data set | $k$-NN scaling | $k$-NN train error | $k$-NN test error | SVM best $C$ | SVM train error | SVM test error | EKM train error | EKM best-half test error | Mean size | Average rank |
|---|---|---|---|---|---|---|---|---|---|---|
| bcw | No | | | | | | | | | |
| bld | No | | | | | | | | | |
| bos | Yes | | | | | | | | | |
| cmc | No | | | | | | | | | |
| ion | Yes | | | | | | | | | |
| pid | Yes | | | | | | | | | |

Table 4: Comparative results of $k$-NN, Gaussian SVM, and EKM (10-fold cross validation).

The size of the best GP kernels (last column) shows that no bloat occurred, thanks to the lexicographic parsimony pressure. Each algorithm is found to be the best performing one on half or more of the tested datasets, with frequent ties according to a paired Student's $t$-test.

Typically, the problems where Gaussian SVMs perform well are those where the optimal value for the error cost $C$ is high, suggesting that the noise level in these datasets is high too. Indeed, the fitness case subset selection embedded in EKM might favor the selection of noisy examples, as those are more challenging to the GP kernels. A more progressive selection mechanism, taking into account all kernels in the GP population to better filter out noisy examples and outliers, will be considered in further research.

The $k$-NN outperforms SVM and EKM on the bos problem, where the noise level appears to be very low; indeed, the optimal values found for the number of nearest neighbors and for the error cost $C$ both suggest a low noise level. Still, the fact that the error rate is close to 23% might be explained as the target concept is complex and/or many examples lie close to its frontier. On bcw, the differences between the three algorithms are not statistically significant and the test error rate is about 2%, suggesting that the problem is rather easy.

EKM is found to outperform the other algorithms on bld, demonstrating that the kernel-based dissimilarity can improve on the Euclidean distance, with and without rescaling. Last, EKM behaves like $k$-NN on the pid problem. Further, it must be noted that EKM classifies the test examples using a 50-example prototype set, whereas $k$-NN uses the whole training set (above 300 examples in the bld problem and 690 in the pid problem).

As the well-known No Free Lunch theorem applies to Machine Learning too, no learning method is expected to be universally competent. Rather, the above experimental validation demonstrates that the GP-evolved kernels can improve on standard kernels in some cases.

## 7 Related Works

The most relevant work to EKM is the Genetic Kernel Support Vector Machine (GK-SVM) [13]. GK-SVM similarly uses GP within an SVM-based approach, with two main differences compared to EKM. On one hand, GK-SVM focuses on feature construction, using GP to optimize the mapping $\Phi$ (instead of the kernel $K$). On the other hand, the fitness function used in GK-SVM suffers from a quadratic complexity in the number of training examples. Accordingly, all datasets but one considered in the experiments are small (less than 200 examples). On a larger dataset, the authors acknowledge that their approach does not improve on a standard SVM with well-chosen parameters. Another related work similarly uses GP for feature construction, in order to classify time series [14]. The set of features (GP trees) is further evolved using a GA, where the fitness function is based on the accuracy of an SVM classifier. Most other works related to evolutionary optimization within SVMs (see [15]) actually focus on parametric optimization, e.g. achieving feature selection or tuning some parameters.

Another related work is proposed by Weinberger et al. [16], optimizing a Mahalanobis distance based on the -NN margin criterion inspired from [3] and also used in EKM. However, restricted to linear changes of representation, the optimization problem is tackled by semi-definite programming in [16]. Lastly, EKM is also inspired by the Dynamic Subset Selection first proposed by Gathercole and Ross [17] and further developed by [18] to address scalability issues in EC-based Machine Learning.

## 8 Conclusion

The Evolutionary Kernel Machine proposed in this paper aims to improve kernel-based nearest neighbor classification [5], combining two original aspects. First, EKM implicitly addresses the feature construction problem by designing a new representation of the application domain better suited to the dataset at hand. However, in contrast with [13, 14], EKM takes advantage of the kernel trick, using GP to optimize the kernel function. Secondly, EKM proposes a co-evolution framework to ensure the scalability of the approach and control the computational complexity of the fitness computation. The empirical validation demonstrates that this new approach is competitive with well-founded learning algorithms such as SVM and -NN using tuned hyper-parameters.

A limitation of the approach, also observed in the well-known boosting algorithm [9], is that the competitive co-evolution of kernels and examples tends to favor noisy validation examples. A perspective for further research is to exploit the evolution archive, to estimate the probability for an example to be noisy and achieve a sensitivity analysis. Another perspective is to incorporate ensemble learning, typically bagging and boosting, within EKM. Indeed the diversity of the solutions constructed along population-based optimization enables ensemble learning almost for free.

### Acknowledgments

This work was supported by postdoctoral fellowships from the ERCIM (Europe) and the FQRNT (Québec) to C. Gagné. M. Schoenauer and M. Sebag gratefully acknowledge support by the PASCAL Network of Excellence, IST-2002-506778.

## References

- [1] Shawe-Taylor, J., Cristianini, N.: Kernel Methods for Pattern Analysis. Cambridge University Press, Cambridge, UK (2004)
- [2] Koza, J.R.: Genetic Programming: On the Programming of Computers by Means of Natural Selection. MIT Press, Cambridge (MA), USA (1992)
- [3] Gilad-Bachrach, R., Navot, A., Tishby, N.: Margin based feature selection - theory and algorithms. In: Proc. of the 21st Int. Conf. on Machine Learning. (2004) 43–50
- [4] Duda, R.O., Hart, P.E., Stork, D.G.: Pattern Classification. John Wiley & Sons, Inc., New York (NY), USA (2001)
- [5] Yu, K., Ji, L., Zhang, X.: Kernel nearest neighbor algorithm. Neural Processing Letters 15(2) (2002) 147–156
- [6] Cristianini, N., Shawe-Taylor, J.: An Introduction to Support Vector Machines and Other Kernel-based Learning Methods. Cambridge University Press, Cambridge, UK (2000)
- [7] Potter, M.A., De Jong, K.A.: Cooperative coevolution: An architecture for evolving coadapted subcomponents. Evolutionary Computation 8(1) (2000) 1–29
- [8] Hillis, W.D.: Co-evolving parasites improve simulated evolution as an optimization procedure. Physica D 42 (1990) 228–234
- [9] Freund, Y., Schapire, R.: Experiments with a new boosting algorithm. In: Proc. of the 13th Int. Conf. on Machine Learning. (1996) 148–156
- [10] Newman, D., Hettich, S., Blake, C., Merz, C.: UCI repository of machine learning databases. http://www.ics.uci.edu/~mlearn/MLRepository.html (1998)
- [11] Gagné, C., Parizeau, M.: Genericity in evolutionary computation software tools: Principles and case-study. Int. J. on Artif. Intell. Tools 15(2) (2006) 173–194
- [12] Collobert, R., Bengio, S., Mariéthoz, J.: Torch: a modular machine learning software library. Technical Report IDIAP-RR 02-46, IDIAP (2002)
- [13] Howley, T., Madden, M.G.: The genetic kernel support vector machine: Description and evaluation. Artificial Intelligence Review 24(3–4) (2005) 379–395
- [14] Eads, D., Hill, D., Davis, S., Perkins, S., Ma, J., Porter, R., Theiler, J.: Genetic algorithms and support vector machines for time series classification. In: Applications and Science of Neural Networks, Fuzzy Systems, and Evolutionary Computations V. (2002) 74–85
- [15] Friedrichs, F., Igel, C.: Evolutionary tuning of multiple SVM parameters. Neurocomputing 64 (2005) 107–117
- [16] Weinberger, K.Q., Blitzer, J., Saul, L.K.: Distance metric learning for large margin nearest neighbor classification. In: Neural Information Processing Systems. (2005) 1473–1480
- [17] Gathercole, C., Ross, P.: Dynamic training subset selection for supervised learning in genetic programming. In: Parallel Problem Solving From Nature. (1994) 312–321
- [18] Song, D., Heywood, M.I., Zincir-Heywood, A.N.: Training genetic programming on half a million patterns: an example from anomaly detection. IEEE Transactions on Evolutionary Computation 9(3) (2005) 225–239