Classical neural network learning techniques have primarily focused on optimization in a continuous setting. Early results in the area showed that a wide variety of activation functions can be used to build neural nets representing any function, but this expressive power also invites overfitting. To ameliorate this deficiency, one seeks to restrict the search space of possible functions to a special class which preserves some relevant structure. I will propose a solution to this problem of a quite general nature: use polymorphisms of a relevant discrete relational structure as activation functions. I will give some concrete examples of this, then hint that this specific case is of broader applicability than one might guess.
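As a minimal sketch of the idea (all names here are hypothetical, not taken from the talk): a polymorphism of a relational structure is an operation that preserves its relations coordinatewise. For instance, the ternary median on the chain 0 < 1 < 2 preserves the order relation, so a discrete "neuron" using it as an activation maps related input tuples to related outputs.

```python
from itertools import product

DOMAIN = (0, 1, 2)
# The order relation <= on the chain 0 < 1 < 2, as a set of pairs.
LEQ = {(a, b) for a in DOMAIN for b in DOMAIN if a <= b}

def median(x, y, z):
    """Ternary median: the middle element of the three inputs."""
    return sorted((x, y, z))[1]

def is_polymorphism(op, arity, relation):
    """op preserves relation iff applying op coordinatewise to any
    choice of related pairs again yields a related pair."""
    for rows in product(relation, repeat=arity):
        left = op(*(r[0] for r in rows))
        right = op(*(r[1] for r in rows))
        if (left, right) not in relation:
            return False
    return True

# A one-neuron discrete net: wire three inputs into a median activation.
def discrete_neuron(inputs):
    return median(*inputs)

print(is_polymorphism(median, 3, LEQ))  # True: median is monotone on the chain
print(discrete_neuron((0, 2, 1)))       # 1
```

Restricting activations to such polymorphisms guarantees that every function the net computes preserves the chosen relational structure, which is one way to carve out the restricted function class described above.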

Discrete neural nets and polymorphic learning
Sponsored by the Meyer Fund