1. Add support for Recurrent Cascade-correlation
This would be very useful, but probably fairly involved. The SRN implementation may not mesh well with the Cascade-correlation implementation without a lot of work. There are many occasions where more powerful recurrent neural network training algorithms would be welcome.
2. Make sure that plain backprop works as a replacement for quickprop in the weight-update phase. Also explore implementing an Rprop variant and seeing whether it could be dropped in as a replacement for quickprop in Cascade-correlation. In principle, almost any batch training algorithm for simple feedforward networks should work with Cascade-correlation. Historically, Quickprop was probably chosen because Fahlman also invented it, and it is often far superior to backprop. However, other algorithms may work even better with Cascade-correlation. It would be interesting to measure how much Cascade-correlation is hobbled by using backprop instead of Quickprop; for a fair comparison, the patience and stagnation parameters would need considerable tuning.
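As a starting point for the Rprop idea, here is a minimal sketch of a batch Rprop- update rule in NumPy. This is not conx code; the function name, the state dictionary, and all hyperparameter defaults are assumptions for illustration. The key property is that Rprop adapts a per-weight step size from gradient sign changes and ignores gradient magnitude, which is why it is a plausible drop-in for quickprop's batch updates.

```python
import numpy as np

def rprop_update(weights, grads, state, eta_plus=1.2, eta_minus=0.5,
                 step_min=1e-6, step_max=50.0):
    """One batch Rprop- step: per-weight step sizes adapt to gradient sign
    changes; only the sign of the gradient determines the direction.
    (Hypothetical helper, not part of conx.)"""
    if state is None:
        state = {"step": np.full_like(weights, 0.1),
                 "prev_grad": np.zeros_like(weights)}
    sign_change = state["prev_grad"] * grads
    # Grow the step where the gradient sign is stable, shrink it where it flipped.
    state["step"] = np.where(
        sign_change > 0,
        np.minimum(state["step"] * eta_plus, step_max),
        np.where(sign_change < 0,
                 np.maximum(state["step"] * eta_minus, step_min),
                 state["step"]))
    # After a sign flip, zero the stored gradient so this step is skipped
    # and the next update starts fresh (the "Rprop-" variant).
    grads = np.where(sign_change < 0, 0.0, grads)
    weights = weights - np.sign(grads) * state["step"]
    state["prev_grad"] = grads
    return weights, state
```

For example, minimizing f(w) = w² (gradient 2w) by calling `rprop_update` in a loop drives w toward zero, with the step size growing on the long descent and halving whenever w overshoots.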
3. Add support for candidates with different activation functions in the same candidate pool. Currently, activation functions are a property of an entire layer in conx, which may make this difficult. Still, a mixed-activation pool of candidate units could be quite powerful. Perhaps there is existing research on this worth tracking down.
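To make the mixed-pool idea concrete, here is a hedged sketch of scoring a candidate pool in which each unit carries its own activation function. The correlation score follows the spirit of Fahlman's candidate score S (sum over output units of the absolute covariance between candidate activation and residual error), but this sketch only compares randomly initialized candidates; real Cascade-correlation would also train each candidate by gradient ascent on S. All names here (`candidate_correlation`, `best_candidate`, the activation set) are assumptions for illustration, not conx API.

```python
import numpy as np

def candidate_correlation(candidate_out, residual_errors):
    """Fahlman-style score S: sum over output units of the absolute
    covariance between the candidate's activation and the residual error,
    taken across all training patterns."""
    v = candidate_out - candidate_out.mean()
    e = residual_errors - residual_errors.mean(axis=0)
    return np.abs((v[:, None] * e).sum(axis=0)).sum()

rng = np.random.default_rng(0)

# Hypothetical mixed pool of activation functions for candidate units.
activations = {"sigmoid": lambda x: 1.0 / (1.0 + np.exp(-x)),
               "tanh": np.tanh,
               "gaussian": lambda x: np.exp(-x ** 2)}

def best_candidate(inputs, residual_errors, pool_size=8):
    """Build a pool where each candidate gets random incoming weights AND a
    randomly chosen activation function, then keep the highest-scoring one."""
    best = None
    for _ in range(pool_size):
        name = rng.choice(list(activations))
        w = rng.normal(scale=1.0, size=inputs.shape[1])
        out = activations[name](inputs @ w)
        score = candidate_correlation(out, residual_errors)
        if best is None or score > best[0]:
            best = (score, name, w)
    return best
```

Because candidates in Cascade-correlation are trained independently and only the winner is installed, letting them differ in activation function costs little extra machinery; the open question is whether the per-layer activation assumption elsewhere in conx can accommodate the installed unit.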