In some of the benchmarks, parameters are passed to functions as environment variables, but it would make more sense for them to be given as event/request parameters.
For instance, in map-reduce, NUM_MAPPERS (the number of mappers) and NUM_REDUCERS (the number of reducers) are set via environment variables. This creates a problem: on cloud platforms like AWS Lambda, a single set of environment variables is configured per function. So if, say, I need to invoke the map-reduce function twice - once with 4 mappers and once with 8 mappers - I cannot send the invocations simultaneously. I would have to configure the environment variables on the cloud host (NUM_MAPPERS=4), send the first invocation, then reconfigure the environment variables (NUM_MAPPERS=8) and send the second invocation.
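For concreteness, on AWS Lambda this roughly forces a reconfigure-then-invoke cycle like the boto3 sketch below (the function name is a placeholder, not the benchmark's actual deployment):

```python
import json
import boto3

# Sketch of the workaround the current setup forces on AWS Lambda: update the
# environment, wait for the update to land, invoke, then repeat for the next
# mapper count. The two runs cannot overlap because they share a single
# environment configuration.
client = boto3.client("lambda")
for mappers in (4, 8):
    client.update_function_configuration(
        FunctionName="mapreduce-driver",  # placeholder name
        Environment={"Variables": {"NUM_MAPPERS": str(mappers)}},
    )
    client.get_waiter("function_updated").wait(FunctionName="mapreduce-driver")
    client.invoke(FunctionName="mapreduce-driver", Payload=json.dumps({}))
```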
In the case of locally hosted functions, I can make the invocations simultaneously, but I would need to open two separate shell sessions (one with NUM_MAPPERS set to 4, and another with it set to 8) to make them.
The above problem would be solved if such parameters were passed to the function as parameters in the event request, rather than as environment variables.
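As a minimal illustration (Python for brevity; the handler and the defaults are assumptions, not the benchmark's actual code), the function could read these values from the event first and fall back to the environment variable, so existing deployments keep working:

```python
import os

# Request-specific values are taken from the invocation event if present,
# otherwise from the environment variable. The defaults of 4 and 2 are made up.
def handler(event, context):
    num_mappers = int(event.get("NUM_MAPPERS", os.environ.get("NUM_MAPPERS", 4)))
    num_reducers = int(event.get("NUM_REDUCERS", os.environ.get("NUM_REDUCERS", 2)))
    # ... run the map/reduce driver with these per-request values ...
    return {"mappers": num_mappers, "reducers": num_reducers}
```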
The same applies to many parameters in stacking-training, such as BUCKET_NAME (the name of the S3 bucket to use), NumTrainers (the number of trainer functions to invoke) and CONCURRENT_TRAINING (whether to run the trainers serially or concurrently).
What I propose is that we identify which parameters are request-specific rather than function-specific. The function-specific parameters can remain configured through environment variables, while the request-specific ones are passed through event requests.
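With request-specific parameters accepted in the payload, both runs from the earlier example could then be fired off concurrently against a single function configuration, roughly like this (again a boto3 sketch with a placeholder function name):

```python
import json
import boto3

# Two concurrent invocations of the same deployed function, each with its own
# mapper count carried in the event payload; no reconfiguration in between.
client = boto3.client("lambda")
for mappers in (4, 8):
    client.invoke(
        FunctionName="mapreduce-driver",  # placeholder name
        InvocationType="Event",  # asynchronous, so the requests can run concurrently
        Payload=json.dumps({"NUM_MAPPERS": mappers}),
    )
```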