How Networks Learn

Stephen Downes explains connectivism as a learning theory in his post.

Explaining why learning occurs has two parts: first, describing what learning is, and second, describing how it happens. Both parts are important.

According to connectivism, learning is the formation of connections in a network. The learning theory, therefore, in the first instance, explains how connections are formed.

The literature describes either actual networks of neurons ('neural networks', such as human or animal brains) or simulations of these networks ('artificial neural networks'), which are created using computers. In both cases, these networks 'learn' by automatically adjusting the set of connections between individual neurons or nodes.

I've presented four major categories of learning theory which describe, specifically and without black boxes, how connections are formed between entities in a network.

The actual physical descriptions of these theories vary from network to network: in human neurons, it's a set of electrical-chemical reactions; in social networks, it's communications between individual people; on computer networks, it's variable values sent to logical objects.
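Downes doesn't give code, but the claim that a network "learns" by adjusting its connections can be sketched in a few lines. The Python below is a minimal, illustrative Hebbian-style update, one classic connection-forming rule rather than a reproduction of the specific categories Downes presents; the node count, learning rate, and random activity patterns are arbitrary assumptions made only for the sketch. Nodes that are active together have the connection between them strengthened, and the resulting weight matrix is the whole of what the network "knows".

```python
import numpy as np

# Minimal Hebbian-style sketch: connections between co-active nodes are
# strengthened. Learning here is nothing but the adjustment of these
# connection weights; no facts or memories are stored explicitly.

rng = np.random.default_rng(0)
n_nodes = 5                                # arbitrary network size
weights = np.zeros((n_nodes, n_nodes))     # connection strengths
learning_rate = 0.1                        # arbitrary step size

def hebbian_step(weights, activations, lr=0.1):
    """Strengthen the connection between every pair of co-active nodes."""
    outer = np.outer(activations, activations)
    np.fill_diagonal(outer, 0.0)           # no self-connections
    return weights + lr * outer

# Repeatedly present patterns of activity; nodes whose activity is
# correlated end up strongly connected — the "formation of connections"
# described above.
for _ in range(100):
    activations = (rng.random(n_nodes) > 0.5).astype(float)
    weights = hebbian_step(weights, activations, learning_rate)

print(np.round(weights, 2))
```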


Downes goes on to compare learning theories at their foundations rather than on some measure of, say, efficiency, acknowledging the incommensurability described by the Kuhn Cycle. This leads him to several likeable but still judgement-laden observations.

Downes writes:

Connectivists see a person learning as a self-managed and autonomous seeker of opportunities to create, interact and have new experiences, where learning is not the accumulation of more and more facts or memories, but the ongoing development of a richer and richer neural tapestry.

Connectivists understand that the essential purpose of education and teaching is not to produce some set of core knowledge in a person, but rather to create the conditions in which a person can become an accomplished and motivated learner in their own right.