Neural network #1188
base: main
Conversation
I'm very confused. Let's do a call.
// M' neurons wide and here M is M'/N, L layers tall
pub async fn neural_network<C, S, const M: usize, const N: usize, const MTimesN: usize>(
    ctx: C,
    last_layer_neurons: &[BitDecomposed<AdditiveShare<Boolean, N>>; M],
These are the activations of the last layer of neurons? If so, let's give it a name including that word.
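For reference, a minimal sketch of the packing the "M is M'/N" comment seems to describe, with illustrative names that are not from the PR: a layer of M' = M * N neurons stored as M array entries of N lanes each.

// Hypothetical illustration only: flat neuron index -> (array entry, SIMD lane),
// assuming the layer packs M' = M * N neurons row-major into M entries of N lanes.
const N: usize = 4;
const M: usize = 2; // so M' = M * N = 8 neurons per layer

fn entry_and_lane(flat: usize) -> (usize, usize) {
    (flat / N, flat % N)
}

fn main() {
    assert_eq!(M * N, 8); // M' = 8 neurons per layer
    assert_eq!(entry_and_lane(5), (1, 1)); // flat neuron 5 -> entry 1, lane 1
}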
pub async fn neural_network<C, S, const M: usize, const N: usize, const MTimesN: usize>(
    ctx: C,
    last_layer_neurons: &[BitDecomposed<AdditiveShare<Boolean, N>>; M],
    edge_weights: &[BitDecomposed<AdditiveShare<Boolean, N>>; M],
It's very hard to know how to use this data structure.
    Boolean: FieldSimd<N>,
    AdditiveShare<Boolean, N>: BooleanProtocols<C, N>,
    Boolean: FieldSimd<M>,
    AdditiveShare<Boolean, M>: BooleanProtocols<C, M>,
Why do we need both N and M vectorization support?
{
    // use super::step::MultiplicationStep as Step;
    // for each layer we get M*M vector of edge_weights
    let mut mults = ctx.parallel_join(zip(edge_weights.iter(), last_layer_neurons).enumerate().map(|(i, (edge_weight, neuron))| {
`mults` is not a good name. Maybe `input_edge_activations`?
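For what it's worth, a clear-value sketch of what the parallel_join above computes, with plain integers standing in for secret shares (the values are made up):

fn main() {
    let activations = [1u64, 0, 1, 1]; // previous-layer activations
    let weights = [2u64, 3, 5, 7];     // weights on the edges into the current layer
    // One product per incoming edge; over shares this is the multiplication
    // performed per (edge_weight, neuron) pair.
    let input_edge_activations: Vec<u64> = weights
        .iter()
        .zip(activations.iter())
        .map(|(w, a)| w * a)
        .collect();
    assert_eq!(input_edge_activations, vec![2, 0, 5, 7]);
}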
let mut num = 0;
while mults.len() > 1 {
    // Add each of the mults amongst themselves
    for (a, b) in mults.iter().tuples() {
        let (add_result, _) = integer_add::<_, S, N>(
            ctx.narrow(&TwoHundredFiftySixBitOpStep::Bit(M + num)),
            RecordId::from(num),
            &a,
            &b,
        )
        .await?;
        mults.push(add_result);
        num += 1;
    }
}
Andy already has code that does this (log(n)-depth steps, adding pairs each round and thereby halving the length of the list). Use `pub async fn aggregate_values<'ctx, 'fut, C, OV, const B: usize>(`
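A minimal clear-value sketch of the log(n)-depth reduction that comment describes, using plain u64 instead of shares; tree_reduce is an illustrative name here, not the aggregate_values API itself:

// Each pass sums adjacent pairs, halving the list, so n values need only
// ceil(log2(n)) sequential rounds of additions instead of n - 1.
fn tree_reduce(mut values: Vec<u64>) -> Option<u64> {
    while values.len() > 1 {
        values = values
            .chunks(2)
            .map(|pair| pair.iter().copied().sum())
            .collect();
    }
    values.pop()
}

fn main() {
    assert_eq!(tree_reduce(vec![1, 2, 3, 4, 5]), Some(15));
    assert_eq!(tree_reduce(vec![]), None);
}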
let mut one_cell = mults[0];
while one_cell.len() > 1 {
    let (left, right) = one_cell.split_at((one_cell.len() / 2).try_into().unwrap());
    (one_cell, _) = integer_add::<_, S, N>(
        ctx.narrow(&TwoHundredFiftySixBitOpStep::Bit(M + num)),
        RecordId::FIRST,
        &left,
        &right,
    )
    .await?;
    num += 1;
}
I'm lost. I don't understand what is happening here.
.upgraded_semi_honest((edge_weights, prev_neurons), |ctx, (edge_weights, prev_neurons)| async move {
    let edge_weights1 = BitDecomposed::transposed_from(&edge_weights).unwrap();
    let prev_neurons1 = BitDecomposed::transposed_from(&prev_neurons).unwrap();
    let edge_weights = [edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1.clone(), edge_weights1];
What is happening here?
// for i in 0..M-1 // For going through all layers
// for j in 0..N-1 // Current layer
// for k in 0..N-1 // For previous layer
// neuron(i*N + j) += neuron((i-1)*N + k) * edge_weight(neuron((i)*N + j), neuron((i-1)*N + k))

// M' neurons wide and here M is M'/N, L layers tall
Are these comments in sync with the code?
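For comparison, a clear-value sketch of the pass those comments describe, with f64 values in place of shares and a hypothetical weights[i][j][k] layout (the weight between neuron j of layer i and neuron k of layer i-1); note it starts i at 1, since i = 0 would index a layer before the input:

// Layers of n neurons each, stored flat: neuron j of layer i is neurons[i * n + j].
fn feed_forward(neurons: &mut [f64], weights: &[Vec<Vec<f64>>], layers: usize, n: usize) {
    for i in 1..layers {        // every layer after the input layer
        for j in 0..n {         // each neuron in the current layer
            let mut acc = 0.0;
            for k in 0..n {     // each neuron in the previous layer
                acc += neurons[(i - 1) * n + k] * weights[i][j][k];
            }
            neurons[i * n + j] = acc;
        }
    }
}

fn main() {
    let (layers, n) = (2, 2);
    let mut neurons = vec![1.0, 2.0, 0.0, 0.0]; // layer 0 = inputs, layer 1 = outputs
    // weights[1][j][k]: weight from neuron k of layer 0 into neuron j of layer 1
    let weights = vec![vec![], vec![vec![1.0, 1.0], vec![0.5, -1.0]]];
    feed_forward(&mut neurons, &weights, layers, n);
    assert_eq!(&neurons[2..], &[3.0, -1.5][..]);
}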