Ring Buffer left shift preserves smallest weight #745

Status: Open. Wants to merge 9 commits into base: master.

Conversation

pabogdan (Member) commented:

Use case: cerebellum model -- though it might affect other users without them realising it.
Context:

============================================================
Number of incoming connections per population:
------------------------------------------------------------
        basket     ->      37424 incoming synapses
        purkinje   ->     103906 incoming synapses
        granule    ->      88484 incoming synapses
        golgi      ->      68937 incoming synapses
        dcn        ->       1213 incoming synapses
        stellate   ->      37638 incoming synapses
        glomerulus ->       1002 incoming synapses
============================================================
Number of synapses per projection:
------------------------------------------------------------
        aa_pc      ->       2259 synapses [exc] with a weight of  0.000020 uS
        bc_pc      ->        179 synapses [inh] with a weight of -0.009000 uS
        pf_pc      ->     101289 synapses [exc] with a weight of  0.075000 uS
        sc_pc      ->        179 synapses [inh] with a weight of -0.008500 uS

Symptom (on master): weights for pf_pc are 0, while some other weights (not shown here) are way off, although not 0.

Outcome (on this PR): pf_pc -> 0.00001526 uS cf. 0.00002000 uS (76.29%)
pf_pc weights are now representable and match the requested values more closely.
(Potential) downside: more weight saturations occur.
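For intuition on that 76.29% figure, here is a worked example (ours, not from the PR; the weight LSB of 2^-16 uS is an assumption that happens to reproduce the reported numbers exactly):

```python
# Assumption: the weight resolution (LSB) after this PR's shift choice
# is 2**-16 uS; a requested weight can only land on that grid.
lsb = 2.0 ** -16                   # 0.0000152587890625 uS
requested = 0.00002000
steps = round(requested / lsb)     # 1.31 steps rounds to 1
representable = steps * lsb
print("{:.8f} uS, {:.2%} of requested".format(
    representable, representable / requested))
# -> 0.00001526 uS, 76.29% of requested
```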

This PR needs more thought, removal of some debug print statements, and probably reverting the IF_COND_EXP default left shift back to 10 rather than 5.

This implementation was done with @rowleya next to me.

@rowleya (Member) left a comment:

I will try to have a look at the suggested changes

```diff
@@ -44,7 +44,7 @@ static inline input_t* input_type_get_input_value(
         input_t* value, input_type_pointer_t input_type, uint16_t num_receptors) {
     use(input_type);
     for (int i = 0; i < num_receptors; i++) {
-        value[i] = value[i] >> 10;
+        value[i] = value[i] >> 5;
```
@rowleya (Member) commented:

This might be a better value; I guess it depends on what the user provides as weight values. The idea here is for the weight to be more precise when using conductance values. For reference, conductance weight is in micro-Siemens while current weight is in nano-Amps, so a scale of ~1000 has some justification. However, the effects of conductance are non-linear, since they depend upon the membrane voltage, so weights that produce similar outputs tend to be rather bigger than a simple division by 1000 would suggest. Perhaps this scaling should be dynamic, depending on the expected sum of weights, in the same way as is done for the weight scaling elsewhere.
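A minimal sketch of that dynamic idea (ours, not the sPyNNaker implementation; the 16-bit weight word and the helper names are assumptions): choose the most fractional bits that still hold the expected maximum summed weight, then quantise weights onto that grid.

```python
import math

def choose_fractional_bits(expected_max_weight_sum):
    # With `bits` fractional bits, a 16-bit word represents values up to
    # (2**16 - 1) / 2**bits in steps of 1 / 2**bits. Spend as many bits
    # as possible on precision while still fitting the expected maximum.
    if expected_max_weight_sum <= 0:
        return 16  # all weights are zero; any scale works
    int_bits = max(0, math.ceil(math.log2(expected_max_weight_sum + 1)))
    return max(0, 16 - int_bits)

def encode_weight(weight, bits):
    # Quantise to the fixed-point grid, saturating at the 16-bit maximum.
    return min(int(round(weight * (1 << bits))), (1 << 16) - 1)
```

For example, an expected summed conductance of 3 uS leaves 14 fractional bits, giving a resolution of about 0.000061 uS per step.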

Comment on lines +547 to +552
```python
        rb_ls = list(max_weight_powers)
        print("=" * 60)
        print("RB left shifts for {:20}".format(application_vertex.label),
              "=", rb_ls)
        print("-" * 60)
        return rb_ls
```
@rowleya (Member) commented:

As mentioned, this printing should be removed. Instead it would make sense to have a report that tells the user what scaling was used and why (i.e. the expected maximum weights and the scaling values chosen).
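A hypothetical report writer along those lines (the name, path handling, and format here are ours, purely illustrative):

```python
def write_rb_shift_report(report_path, label, max_weights, shifts):
    # Append one block per application vertex: the expected maximum
    # weight per synapse type and the left shift chosen for it.
    with open(report_path, "a") as f:
        f.write("Ring buffer left shifts for {}\n".format(label))
        for synapse_type, (w, s) in enumerate(zip(max_weights, shifts)):
            f.write("  synapse type {}: expected max weight {:.6f},"
                    " left shift {}\n".format(synapse_type, w, s))
```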

@rowleya (Member) commented:

This should also possibly warn the user up-front when the requested weight scaling cannot be achieved due to minimum weight representation issues. This would then explain the overflow messages that are likely to be received later.

Additionally, this code could detect and warn the user when it is impossible to represent both the minimum and maximum weight, suggesting that a 16-bit weight is not enough to represent the dynamic range of values requested.
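A sketch of such a warning (ours; the only real constraint expressed is the 16-bit word): if the maximum weight is scaled to fit, the minimum non-zero weight only survives quantisation when max/min does not exceed what 16 bits can span.

```python
import math

def warn_if_range_unrepresentable(min_nonzero_weight, max_weight):
    # With 16-bit weights, once the maximum is scaled to the top of the
    # range, the minimum rounds to at least one LSB only if
    # max / min <= 2**16 - 1.
    if min_nonzero_weight <= 0 or max_weight <= 0:
        return
    dynamic_range = max_weight / min_nonzero_weight
    if dynamic_range > (1 << 16) - 1:
        print("Warning: weights span {:.1f} bits of dynamic range;"
              " a 16-bit weight cannot represent both the minimum and"
              " maximum requested".format(math.log2(dynamic_range)))
```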

@rowleya (Member) commented:

As mentioned elsewhere, we don't have the true minimum weight here yet, so the minimum weight might be needed if this is to report the range between the true minimum and the maximum, rather than the minimum of the maximums.

```diff
@@ -446,6 +446,7 @@ def _get_ring_buffer_to_input_left_shifts(
         weights_signed = False
         rate_stats = [RunningStats() for _ in range(n_synapse_types)]
         steps_per_second = 1000000.0 / machine_timestep
+        min_max_weight = numpy.ones(n_synapse_types) * 2 ** 32
```
@rowleya (Member) commented:

Note that this stores the "minimum of the max weights" rather than the "minimum weight", because the connector code does not provide the latter. That could be added, but it should be noted that:

  • it would have to be the minimum absolute weight, as the maximum is the maximum absolute weight;
  • it would have to account for the fact that the minimum might be 0, which is always representable! Thus it should be the minimum non-zero weight (see the sketch after this list).
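A sketch of that computation (ours; the helper name is hypothetical), taking the minimum over the non-zero absolute weights only:

```python
import numpy

def get_minimum_nonzero_abs_weight(weights):
    # Inhibitory weights are negative, so compare absolute values; zero
    # weights are always representable and must not drag the minimum
    # down to an unrepresentable 0.
    abs_weights = numpy.abs(numpy.asarray(weights, dtype=float))
    nonzero = abs_weights[abs_weights > 0]
    return float(nonzero.min()) if nonzero.size else None
```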

```python
        else:
            return numpy.mean(numpy.abs(self.__weights))

    @overrides(AbstractConnector.get_weight_maximum)
    def get_weight_maximum(self, synapse_info):
        # pylint: disable=too-many-arguments
        if self.__weights is None:
            return numpy.amax(synapse_info.weights)
        return self._get_weight_maximum(self.__weights, len(self.__conn_list))
```
@rowleya (Member) commented:

Line too long (flake8).

@dkfellows dkfellows added this to the 7.0.0 milestone Apr 12, 2021
@dkfellows dkfellows modified the milestones: 7.0.0, 7.1.0 Sep 27, 2023