project: Support Remote Builds #3945
base: main
Conversation
```
@@ -189,6 +192,13 @@ func (p *dockerProject) Build(
) *async.TaskWithProgress[*ServiceBuildResult, ServiceProgress] {
	return async.RunTaskWithProgress(
		func(task *async.TaskContextWithProgress[*ServiceBuildResult, ServiceProgress]) {
			if serviceConfig.Docker.RemoteBuild {
```
This feels like a place where our existing abstractions break down. I'm not sure what build should do. In the local build model, we do the following:

- Build: run `docker build`, but don't assign a local build tag or remote build tag.
- Package: assign a local tag to the image.
- Deploy: push the local image to the remote registry.

With remote build, we must couple building the image with pushing it to the remote registry, so we can't use this same split.

We could consider having `build` be a no-op in this mode but have `package` build the docker build context `.tar.gz`. That might be useful, as that could be a reasonable thing to provide via `--from-package` during Deploy?
Yeah, it feels like build/package is not supported when using remote build and that it needs to be handled in Deploy.
```
	"github.com/azure/azure-dev/cli/azd/pkg/input"
)

type RemoteBuildHelper struct {
```
I spent a fair amount of time sort of chasing my tail here trying to figure out what the factoring should look like. Originally this logic was on the container app target, but I realized we'd want to re-use it with the AKS target later. I thought about putting it in `ContainerHelper` (and actually had an iteration that did it), but it felt a little strange that that file was mainly focused around docker and this thing would be orthogonal to the rest of the file. It would also pull in the RemoteBuildManager and console as new dependencies.

It felt like keeping them separate was cleaner. It did mean that `RegistryName`, which I wanted to use here, needed to move off of `ContainerHelper`, but it didn't feel so bad to do that.
This change sketches out support for using the ACR remote build feature and uses it in the `containerapp` service target. To use it, add `remoteBuild: true` to the `docker` member of a `service` in `azure.yaml`. Fixes Azure#509
Looks good overall, but I added some feedback that we should consider.

Other questions:

Q: This change still only reduces our dependency on docker for build, but we would still require docker for login and any additional tagging scenarios, right?
```
@@ -158,6 +159,8 @@ func (p *dockerProject) Requirements() FrameworkRequirements {

// Gets the required external tools for the project
func (p *dockerProject) RequiredExternalTools(context.Context) []tools.ExternalTool {
	// TODO(ellismg): When using remoteBuild, we shouldn't need docker. We need the service config here.
```
Yes, I had this same thought. For optional dependencies based on `azure.yaml` configuration, we need to be able to inspect the `ServiceConfig` to know what is required.
I will explore a change that moves this from `RequiredExternalTools(context.Context)` to `RequiredExternalTools(context.Context, project.ServiceConfig)` to get us to a place where we can do this.
```
err := at.remoteBuildHelper.RunRemoteBuild(ctx, serviceConfig, targetResource, func(s string) {
	task.SetProgress(NewServiceProgress(s))
})
```
I feel like we should be consistent with the way we update progress until we move over to a new model? 🤷♂️
Just checking - you mean turning this into our TaskWithProgress pattern and flowing the progress as we do when one Task has to call the other? Happy to do that.
```
previewerWriter := rh.console.ShowPreviewer(ctx,
	&input.ShowPreviewerOptions{
		Prefix:       "  ",
		MaxLineCount: 8,
		Title:        "Docker Output",
	})
err = rh.remoteBuildManager.RunDockerBuildRequestWithLogs(
	ctx, target.SubscriptionId(), target.ResourceGroupName(), registryResourceName, buildRequest, previewerWriter)
rh.console.StopPreviewer(ctx, false)
if err != nil {
	return err
}
```
I feel like the `previewerWriter` should be an optional argument to this function instead of being created internally. This would keep the UX integration more loosely coupled and make this function more portable for other scenarios.
```
imageName := fmt.Sprintf("%s/%s-%s:azd-deploy-%d",
	strings.ToLower(serviceConfig.Project.Name),
	strings.ToLower(serviceConfig.Name),
	strings.ToLower(rh.env.Name()),
	rh.clock.Now().Unix())
```
Instead of manually constructing the image name here, we should look at reusing `containerHelper.RemoteImageTag(...)`, which does all the processing to generate the correct image name based on the service configuration, etc.
```
if serviceConfig.Docker.RemoteBuild {
	err := at.remoteBuildHelper.RunRemoteBuild(ctx, serviceConfig, targetResource, func(s string) {
		task.SetProgress(NewServiceProgress(s))
	})
	if err != nil {
		task.SetError(fmt.Errorf("running remote build: %w", err))
		return
	}
} else {
	// Login, tag & push container image to ACR
	containerDeployTask := at.containerHelper.Deploy(ctx, serviceConfig, packageOutput, targetResource, true)
	syncProgress(task, containerDeployTask.Progress())

	_, err := containerDeployTask.Await()
	if err != nil {
		task.SetError(err)
		return
	}
}
```
What if we moved the concern of checking whether remote build is enabled to inside of `containerHelper.Deploy(...)`? Then both ACA & AKS would inherently be able to take advantage of it, instead of checking directly within each service target.
Correct. However, in the remote build case where you are just building and pushing an image to an ACR, using remote builds allows you to remove the need for both […]. For […]