Neuronal Transistor

Written by

Come-from-Beyond

Apr 18, 2024

Since this tweet — https://twitter.com/c___f___b/status/1757160176656752693 — I have been looking at the neuron we are using in Qubic mining, trying to understand what I am looking at. It is very simple, yet very powerful. I could call it “just a neuron”, but giving it an appropriate name felt important for a better understanding of its nature. Today something clicked and I finally saw what this neuron is; to get there I had to stare at it for several minutes, imagining I was a four-dimensional being.

Before revealing its name to those who haven’t read the title, I’d like to explain how the neuron functions. It has multiple inputs and outputs, and it can store only three values: 0, +1, and -1. Every time a signal arrives on an input, the value of the signal is added to the stored value, but the result cannot exceed the permitted range of [-1; +1]. The neuron outputs its stored value to its peers; every output fires periodically, with a delay that varies depending on the output’s (virtual) length. All the computational power is hidden in these delays: a bunch of neurons connected to each other can do different things depending on what delays the links between them have.
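
To make this concrete, here is a minimal sketch of such a neuron in C++. The names and the exact firing schedule are my assumptions, not Qubic code: signals clamp the stored value into [-1; +1] on arrival, each link fires every tick here, and the signal arrives after a per-link delay.

```cpp
// A minimal sketch of the neuron described above (names invented for
// illustration): ternary state, saturating addition, delayed delivery.
#include <algorithm>
#include <cstdio>
#include <vector>

struct Neuron {
    int value = 0; // stored value, always in {-1, 0, +1}

    // An incoming signal is added to the stored value, clamped to [-1; +1].
    void receive(int signal) {
        value = std::max(-1, std::min(1, value + signal));
    }
};

// A directed link: the source neuron's stored value is delivered to the
// target after `delay` ticks. All the "computation" lives in these delays.
struct Link {
    int source, target, delay;
};

int main() {
    std::vector<Neuron> neurons(3);
    std::vector<Link> links = {{0, 1, 2}, {1, 2, 1}, {2, 0, 3}};

    // In-flight signals: (ticks remaining, target, value).
    struct Pending { int ticksLeft, target, value; };
    std::vector<Pending> inFlight;

    neurons[0].value = 1; // seed the network

    for (int tick = 0; tick < 10; tick++) {
        // Each link fires once per tick in this sketch; the per-link
        // delay determines when the signal actually arrives.
        for (const Link& l : links)
            inFlight.push_back({l.delay, l.target, neurons[l.source].value});

        // Deliver the signals whose delay has elapsed.
        for (auto it = inFlight.begin(); it != inFlight.end();) {
            if (--it->ticksLeft == 0) {
                neurons[it->target].receive(it->value);
                it = inFlight.erase(it);
            } else {
                ++it;
            }
        }

        printf("tick %d: %+d %+d %+d\n", tick,
               neurons[0].value, neurons[1].value, neurons[2].value);
    }
}
```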

These days most artificial neural network experts believe that some ANNs are better than others because of structural differences. I think they are right: if we take two different kinds of ANNs, one of them will be better than the other because it is less inferior. This inferiority comes from the use of non-universal structures (in the universal structure, each neuron is connected to every other neuron), because the experts don’t know how to train ANNs of universal structure. In Aigarth we use universal ANNs and look for efficient methods of training them; this is what sets us apart from giants like OpenAI.
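
In this sense a universal structure is just a complete directed graph: every neuron is linked to every other neuron, and only the per-link delays distinguish one network from another. A trivial sketch (names invented here, not Aigarth code):

```cpp
// Sketch: a "universal" network is fully connected, so a network is
// entirely characterized by its matrix of link delays.
#include <vector>

std::vector<std::vector<int>> makeUniversalDelays(int n, int defaultDelay) {
    // delays[i][j] is the delay on the link from neuron i to neuron j;
    // 0 on the diagonal stands for "no self-link" in this sketch.
    std::vector<std::vector<int>> delays(n, std::vector<int>(n, defaultDelay));
    for (int i = 0; i < n; i++)
        delays[i][i] = 0;
    return delays;
}
```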

Neuronal transistor — this is what the neuron we use is. If you look at the structure of a universal ANN consisting of such neurons, using time as the 4th dimension, you will see that the structure changes every time a neuron changes its stored value. For example, if this value is +1, then any input carrying +1 will be ignored; it’s like disconnecting a part of the ANN. A few ticks later a value of -1 may arrive, connecting that part back again. Our ANN changes itself while processing input data. This is so cool!
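
Here is that gating effect in miniature (same clamping rule as the sketch above): once the stored value is saturated at +1, further +1 inputs are no-ops, as if that part of the network were disconnected, until an opposite-sign signal reconnects it.

```cpp
// The gating ("transistor") effect: a saturated neuron ignores inputs
// of the same sign, effectively disconnecting that part of the network.
#include <algorithm>
#include <cassert>

int clampTernary(int v) { return std::max(-1, std::min(1, v)); }

int main() {
    int value = 1;                   // stored value saturated at +1
    value = clampTernary(value + 1); // an incoming +1 changes nothing...
    assert(value == 1);              // ...the "switch" is open
    value = clampTernary(value - 1); // a later -1 gets through,
    assert(value == 0);              // reconnecting that input path
}
```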

Now I understand why pigeons are better at multitasking than humans: the structure of a neural network is not as important as its signal propagation properties. We don’t yet have an efficient mathematical apparatus capable of describing constantly changing ANNs, so to train our ANNs we are exploiting a method stolen from nature: evolution.
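
For concreteness, the simplest evolutionary scheme over such a network would mutate the link delays and keep whatever scores better. The sketch below is hypothetical; in particular, the fitness function is a placeholder, not Aigarth’s actual training method.

```cpp
// Hypothetical hill-climbing over link delays: mutate one delay,
// keep the candidate if it scores better.
#include <cstddef>
#include <random>
#include <vector>

using Delays = std::vector<int>;

// Placeholder fitness for illustration only (it just prefers shorter
// delays); a real score would come from running the network on a task.
double fitness(const Delays& d) {
    double total = 0;
    for (int delay : d) total += delay;
    return -total;
}

Delays evolve(Delays best, int generations, int maxDelay) {
    std::mt19937 rng{12345};
    std::uniform_int_distribution<std::size_t> pick(0, best.size() - 1);
    std::uniform_int_distribution<int> newDelay(1, maxDelay);

    double bestScore = fitness(best);
    for (int g = 0; g < generations; g++) {
        Delays candidate = best;
        candidate[pick(rng)] = newDelay(rng); // mutate one link's delay
        double score = fitness(candidate);
        if (score > bestScore) {              // keep improvements only
            best = std::move(candidate);
            bestScore = score;
        }
    }
    return best;
}

int main() {
    Delays initial(6, 3); // 6 links, every delay starting at 3 ticks
    Delays tuned = evolve(initial, 1000, 10);
    (void)tuned;
}
```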

© 2024 Qubic. All Rights Reserved.
