fixed epoch_interruption on new wasm instance creation #89
base: main
Conversation
@@ -316,6 +316,7 @@ impl<T: Clone + Send + 'static + std::marker::Sync, U: Clone + Send + 'static +
let lind_manager = child_ctx.lind_manager.clone();
let mut store = Store::new_with_inner(&engine, child_host, store_inner);
store.set_epoch_deadline(1);
add a one line comment here about what this is doing
these comments don't really explain why we would do this or what this is
@@ -556,6 +557,7 @@ impl<T: Clone + Send + 'static + std::marker::Sync, U: Clone + Send + 'static +
let instance_pre = Arc::new(child_ctx.linker.instantiate_pre(&child_ctx.module).unwrap());
let mut store = Store::new_with_inner(&engine, child_host, store_inner);
store.set_epoch_deadline(1);
same
@@ -634,6 +634,10 @@ impl Engine {
self.inner.epoch.fetch_add(1, Ordering::Relaxed);
}

pub fn decrement_epoch(&self) {
also comment this
@@ -424,6 +424,8 @@ impl RunCommand {
thread::sleep(timeout);
engine.increment_epoch();
});
} else if self.run.common.wasm.epoch_interruption.is_some() {
same
this is a good comment
Seems like a pretty straightforward minor change, still some comments for context would be good here for the future.
A very minor PR. It fixes this issue by initializing the epoch deadline when a new wasm instance is created.

This is the initial integration of epoch_interruption into lind's architecture. We plan to use epoch_interruption to implement several features (thread cancellation and signal handling), so having a basic epoch_interruption working here is a good start.

The epoch is a u64 variable, so we could use it as a general counter if we wanted. But since we currently only need the epoch as a way to interrupt the wasm process, it is sufficient to use just 0 and 1 to represent normal and interrupted respectively. So I am setting the epoch deadline to always be 1 here.
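The 0/1 scheme above can be sketched with a std-only toy. This is a minimal illustration of the idea, not wasmtime's or lind's actual types; the `Epoch` struct and its methods are hypothetical names, standing in for the engine's shared epoch counter, `set_epoch_deadline(1)`, and the `increment_epoch`/`decrement_epoch` pair from the diff.

```rust
use std::sync::atomic::{AtomicU64, Ordering};

// Toy stand-in for the engine's shared epoch counter (hypothetical type).
struct Epoch {
    counter: AtomicU64,
    deadline: u64, // the PR always sets this to 1
}

impl Epoch {
    fn new(deadline: u64) -> Self {
        Epoch { counter: AtomicU64::new(0), deadline }
    }

    // Interrupter side: bump the epoch to (or past) the deadline.
    fn increment(&self) {
        self.counter.fetch_add(1, Ordering::Relaxed);
    }

    // Cleanup side: restore the epoch to the "normal" state,
    // analogous to the new decrement_epoch in the diff.
    fn decrement(&self) {
        self.counter.fetch_sub(1, Ordering::Relaxed);
    }

    // Guest side: the check instrumented wasm code performs; true
    // means the instance should be interrupted.
    fn past_deadline(&self) -> bool {
        self.counter.load(Ordering::Relaxed) >= self.deadline
    }
}

fn main() {
    let epoch = Epoch::new(1);        // deadline = 1, as in the PR
    assert!(!epoch.past_deadline());  // epoch 0: normal execution
    epoch.increment();
    assert!(epoch.past_deadline());   // epoch 1: interrupted
    epoch.decrement();
    assert!(!epoch.past_deadline());  // back to normal
    println!("ok");
}
```

With only two meaningful states, increment marks "interrupted" and decrement marks "normal" again, which is why a fixed deadline of 1 is enough for now.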