This repository demonstrates how to use Azure Data Factory with AKS and KEDA to run batch jobs in Azure.
- A Linux machine, Windows Subsystem for Linux (WSL), or Docker Desktop for Windows
- Azure CLI and an Azure subscription
- Terraform 0.12 or later
- kubectl
- Helm
- A virtual network with two subnets: one for the private endpoints and one for Kubernetes
- Private DNS zones for the storage private endpoints (for example `privatelink.blob.core.windows.net`)
- An Azure Container Registry (ACR); a sketch of the `az` commands for these network and registry prerequisites follows this list
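These prerequisites can be created with the Azure CLI before running Terraform. A minimal sketch; the resource names, region, and address ranges are illustrative (only `DevSub_K8S_RG` comes from this walkthrough) and should be adjusted to your environment:

```sh
# Virtual network with the two required subnets (names and CIDRs are illustrative)
az network vnet create -g DevSub_K8S_RG -n batch-vnet \
  --address-prefix 10.0.0.0/16 \
  --subnet-name aks-subnet --subnet-prefix 10.0.0.0/22
az network vnet subnet create -g DevSub_K8S_RG --vnet-name batch-vnet \
  -n pe-subnet --address-prefixes 10.0.4.0/24

# Private DNS zones for the storage private endpoints, linked to the vnet
for zone in privatelink.blob.core.windows.net privatelink.queue.core.windows.net; do
  az network private-dns zone create -g DevSub_K8S_RG -n "$zone"
  az network private-dns link vnet create -g DevSub_K8S_RG -z "$zone" \
    -n batch-vnet-link -v batch-vnet -e false
done

# Container registry for the queue-processor image
az acr create -g DevSub_K8S_RG -n mybatchacr --sku Standard
```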
- az extension add --name aks-preview
- az extension update --name aks-preview
- az login
- az feature register --namespace "Microsoft.ContainerService" --name "AKS-AzureKeyVaultSecretsProvider"
- az feature register --namespace "Microsoft.ContainerService" --name "EnablePodIdentityPreview"
- az feature register --namespace "Microsoft.ContainerService" --name "AKS-OpenServiceMesh"
- az feature register --namespace "Microsoft.ContainerService" --name "DisableLocalAccountsPreview"
- az feature list -o table --query "[?contains(name, 'Microsoft.ContainerService')].{Name:name,State:properties.state}"
- Wait until all of the above features show as `Registered` (see the polling sketch below).
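Registration can take several minutes. A small polling loop, using only the feature names registered above:

```sh
# Block until each preview feature reports "Registered"
for feature in AKS-AzureKeyVaultSecretsProvider EnablePodIdentityPreview \
               AKS-OpenServiceMesh DisableLocalAccountsPreview; do
  until [ "$(az feature show --namespace Microsoft.ContainerService \
        --name "$feature" --query properties.state -o tsv)" = "Registered" ]; do
    echo "waiting for $feature ..."; sleep 30
  done
done
```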
- Update uat.tfvars with values for your environment
- az provider register --namespace Microsoft.ContainerService
- cd infrastructure
- terraform init -backend=true -backend-config="access_key=${access_key}" -backend-config="key=uat.terraform.tfstate"
- terraform plan -out="uat.plan" -var "resource_group_name=DevSub_K8S_RG" -var-file="uat.tfvars"
- terraform apply -auto-approve "uat.plan"
- ./aks-keda-install.sh $SUBSCRIPTION_ID $RG $CLUSTER_NAME $KEDA_IDENTITY $BATCH_IDENTITY
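The install script's five positional arguments can be gathered from the CLI and from Terraform outputs. A sketch, assuming hypothetical output names (check the `infrastructure` config for the real ones):

```sh
SUBSCRIPTION_ID=$(az account show --query id -o tsv)
RG=DevSub_K8S_RG
# The output names below are assumptions about this repo's Terraform config
CLUSTER_NAME=$(terraform output -raw aks_cluster_name)
KEDA_IDENTITY=$(terraform output -raw keda_identity_name)
BATCH_IDENTITY=$(terraform output -raw batch_identity_name)
./aks-keda-install.sh "$SUBSCRIPTION_ID" "$RG" "$CLUSTER_NAME" "$KEDA_IDENTITY" "$BATCH_IDENTITY"
```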
- cd source
- az login
- az acr login -n ${ACR_NAME}
- docker build -f Dockerfile -t ${ACR_NAME}.azurecr.io/queue-processor:${BUILD_ID} .
- docker push ${ACR_NAME}.azurecr.io/queue-processor:${BUILD_ID}
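Put together, with `BUILD_ID` chosen locally (a git SHA or CI build number works) and a final check that the tag reached the registry:

```sh
ACR_NAME=mybatchacr                       # your registry name (assumption)
BUILD_ID=$(git rev-parse --short HEAD)    # any unique tag works
az acr login -n "$ACR_NAME"
docker build -f Dockerfile -t "$ACR_NAME.azurecr.io/queue-processor:$BUILD_ID" .
docker push "$ACR_NAME.azurecr.io/queue-processor:$BUILD_ID"
az acr repository show-tags -n "$ACR_NAME" --repository queue-processor -o table
```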
- cd chart
- Update values.yaml for your environment (at a minimum, the image repository and tag pushed above); an override sketch follows below
- helm upgrade -i batchdemo .
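If you would rather not edit values.yaml by hand, the image coordinates can usually be passed as overrides; the key names here (`image.repository`, `image.tag`) are assumptions about this chart, so check chart/values.yaml for the real ones. Assuming the chart creates KEDA resources, you can then confirm they registered:

```sh
helm upgrade -i batchdemo . \
  --set image.repository="$ACR_NAME.azurecr.io/queue-processor" \
  --set image.tag="$BUILD_ID"

# Verify the KEDA scaler and any resulting pods (resource kinds are assumptions)
kubectl get scaledjobs,scaledobjects
kubectl get pods
```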
TBD
- Update README with additional details