Having explained roughly what I want to investigate and why, it's time to jump into the topic.
I said I don't want to explain perceptrons and so on. Nevertheless, I'd like to familiarise you with my visualisation of neurons and their connections:
Of course the dendrites connect via synapses (modelled as weights). We also have a threshold and an activation function. This is a very simple view of the structure, but it covers everything that is usually involved.
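To make that picture a bit more concrete, here is a minimal sketch of such a neuron in Python. It is only an illustration under my own assumptions: the `Neuron` class, the step activation, and the random weights are placeholders, not the actual model I have in mind.

```python
import numpy as np

def step_activation(x):
    """A simple step activation: the neuron fires (1.0) once the threshold is crossed."""
    return 1.0 if x >= 0.0 else 0.0

class Neuron:
    """Toy neuron: dendrites carry inputs through weighted synapses,
    the weighted sum is compared against a threshold and then passed
    through an activation function."""

    def __init__(self, n_inputs, threshold=0.5):
        self.weights = np.random.uniform(-1.0, 1.0, n_inputs)  # synapse strengths
        self.threshold = threshold

    def fire(self, inputs):
        # weighted sum of the dendrite inputs minus the threshold
        net = float(np.dot(self.weights, inputs)) - self.threshold
        return step_activation(net)

# usage: a neuron with three dendrites receiving one stimulus
neuron = Neuron(n_inputs=3)
print(neuron.fire(np.array([0.2, 0.9, 0.4])))
```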
Now I want to show what I think should be the starting point based on the knowledge I have.
So what we have here looks like a normal feed-forward neural network (ff-NN). However, I would also like to take feedback connections and lateral connections into account. In addition, I deliberately left out an output layer, because this network is not supposed to work like a conventional ff-NN in terms of training. Furthermore, propagation happens over time, so the network is able to "remember" sequences, which is vital for the expectations it will build from its stimuli.

The first step will be a simulation toolset in which you can model such neural networks. I won't focus on user interfaces; it will be more of a framework. I plan to focus on scalability and want a 3D simulation view to visualise the propagation loop. I hope this sounds exciting. More is coming soon.
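As a rough sketch of what I mean by time-stepped propagation with feed-forward, lateral, and feedback connections, here is a small Python toy. It is not the framework I am planning; the class name, the random weights, and the tanh activation are just assumptions for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

class RecurrentLayerNet:
    """Toy network: feed-forward weights between layers, lateral weights
    within each layer, and feedback weights from the next layer back.
    Activations are updated one time step at a time, so the state at
    step t depends on the stimuli seen at earlier steps."""

    def __init__(self, layer_sizes):
        self.sizes = layer_sizes
        self.state = [np.zeros(n) for n in layer_sizes]           # per-layer activations
        self.ff = [rng.normal(0, 0.3, (layer_sizes[i + 1], layer_sizes[i]))
                   for i in range(len(layer_sizes) - 1)]           # feed-forward connections
        self.lat = [rng.normal(0, 0.1, (n, n)) for n in layer_sizes]  # lateral connections
        self.fb = [rng.normal(0, 0.1, (layer_sizes[i], layer_sizes[i + 1]))
                   for i in range(len(layer_sizes) - 1)]           # feedback connections

    def step(self, stimulus):
        """One propagation step: every layer is driven by the previous
        time step's activations, so signals travel one layer per step."""
        prev = [s.copy() for s in self.state]
        new = []
        for i, n in enumerate(self.sizes):
            drive = np.zeros(n)
            if i == 0:
                drive += stimulus                                  # external input to the first layer
            else:
                drive += self.ff[i - 1] @ prev[i - 1]              # feed-forward input
            drive += self.lat[i] @ prev[i]                         # lateral input
            if i < len(self.sizes) - 1:
                drive += self.fb[i] @ prev[i + 1]                  # feedback input
            new.append(np.tanh(drive))                             # activation function
        self.state = new
        return self.state

# usage: feed a short stimulus sequence and watch the activity echo through the net
net = RecurrentLayerNet([4, 6, 3])
for t, stim in enumerate([np.ones(4), np.zeros(4), np.zeros(4)]):
    state = net.step(stim)
    print(f"t={t}", [np.round(s, 2) for s in state])
```

Even after the stimulus goes silent, the lateral and feedback connections keep activity circulating for a few steps, which is the kind of "remembering" of sequences I described above.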
Thank you for reading.