I followed this documentation: https://mlserver.readthedocs.io/en/stable/examples/sklearn/README.html
After running mlserver and following those steps, I get the correct results.
I took a look at MLServer's source code and found where the relevant routes are registered.
After the client sends the HTTP request, the endpoints.infer method is called; inside endpoints.infer, the model is invoked to generate the final result.
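To make my understanding concrete, the flow I observed could be sketched roughly like this (all class and route names below are simplified stand-ins I made up for illustration, not MLServer's actual implementation):

```python
# Simplified stand-ins for the flow: route registration -> handler -> model.
from typing import Callable, Dict


class Model:
    def predict(self, payload: dict) -> dict:
        # Hypothetical model call producing the final result.
        return {"outputs": [sum(payload.get("data", []))]}


class Endpoints:
    def __init__(self, model: Model) -> None:
        self._model = model

    def infer(self, payload: dict) -> dict:
        # Analogue of endpoints.infer: receives the (already parsed)
        # request payload and delegates to the model.
        return self._model.predict(payload)


# Analogue of route registration: a path mapped to its handler.
routes: Dict[str, Callable[[dict], dict]] = {}
endpoints = Endpoints(Model())
routes["/v2/models/{name}/infer"] = endpoints.infer

# Dispatching a request through the registered route.
result = routes["/v2/models/{name}/infer"]({"data": [1, 2, 3]})
```

The part I can trace in the source matches the registration and dispatch above; what I cannot find is the step that turns the raw HTTP body into the typed payload.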
My confusion is as follows:
The endpoints.infer method has a parameter payload of type InferenceRequest, which represents the user's input.
What I want to ask is: where in the code is the HTTP body instantiated into an InferenceRequest object? I can't seem to find the relevant part in the source.
Where is this logic implemented?
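In other words, I am looking for the step sketched below: somewhere the raw JSON body must be parsed and used to construct the typed request object before the handler runs. This is a minimal stdlib-only sketch of that idea; the class and field names here are hypothetical stand-ins, not MLServer's actual types:

```python
import json
from dataclasses import dataclass
from typing import Any, List


@dataclass
class RequestInput:
    # Hypothetical stand-in for a single input tensor in the request body.
    name: str
    shape: List[int]
    datatype: str
    data: List[Any]


@dataclass
class InferenceRequestSketch:
    # Hypothetical analogue of InferenceRequest, built from the parsed body.
    inputs: List[RequestInput]


def parse_body(raw_body: bytes) -> InferenceRequestSketch:
    # The step I am asking about: read the HTTP body, parse the JSON, and
    # instantiate the typed request object that the endpoint's signature
    # declares, before the handler is invoked.
    payload = json.loads(raw_body)
    inputs = [RequestInput(**i) for i in payload["inputs"]]
    return InferenceRequestSketch(inputs=inputs)


body = b'{"inputs": [{"name": "x", "shape": [1, 2], "datatype": "FP32", "data": [1.0, 2.0]}]}'
req = parse_body(body)
```

I assume a web framework performs this deserialization automatically somewhere, but I would like to know where in the codebase it happens.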