diff --git a/src/content/docs/infrastructure/elastic-container-service-integration/install-ecs-integration.mdx b/src/content/docs/infrastructure/elastic-container-service-integration/install-ecs-integration.mdx index 23ce6c52f59..e9aa046b8dc 100644 --- a/src/content/docs/infrastructure/elastic-container-service-integration/install-ecs-integration.mdx +++ b/src/content/docs/infrastructure/elastic-container-service-integration/install-ecs-integration.mdx @@ -70,7 +70,7 @@ To install using CloudFormation: > 1. Download the task definition example with the sidecar container to be deployed: - ``` + ```sh curl -O https://download.newrelic.com/infrastructure_agent/integrations/ecs/newrelic-infra-ecs-fargate-example-latest.json ``` @@ -89,28 +89,28 @@ One [install option](#install-overview) is using our install script. To use the 1. Download the ECS integration installer: - ``` + ```sh curl -O https://download.newrelic.com/infrastructure_agent/integrations/ecs/newrelic-infra-ecs-installer.sh ``` 2. Add execute permissions to the installer: - ``` + ```sh chmod +x newrelic-infra-ecs-installer.sh ``` 3. Execute it with `-h` to see the documentation and requirements: - ``` + ```sh ./newrelic-infra-ecs-installer.sh -h ``` 4. Check that your AWS profile points to the same region where your ECS cluster was created: - ``` - $ aws configure get region - us-east-1 - - $ aws ecs list-clusters - YOUR_CLUSTER_ARNS - arn:aws:ecs:us-east-1:YOUR_AWS_ACCOUNT:cluster/YOUR_CLUSTER + ```sh + aws configure get region + [output] us-east-1 + [output] + aws ecs list-clusters + [output] YOUR_CLUSTER_ARNS + [output] arn:aws:ecs:us-east-1:YOUR_AWS_ACCOUNT:cluster/YOUR_CLUSTER ``` 5. Execute the installer, specifying your and cluster name. @@ -119,7 +119,7 @@ One [install option](#install-overview) is using our install script. 
To use the id="auto-script-ec2" title="EC2 launch type" > - ``` + ```sh ./newrelic-infra-ecs-installer.sh -c YOUR_CLUSTER_NAME -l YOUR_LICENSE_KEY ``` @@ -127,7 +127,7 @@ One [install option](#install-overview) is using our install script. To use the id="auto-script-external" title="External (ECS Anywhere) launch type" > - ``` + ```sh ./newrelic-infra-ecs-installer.sh -c YOUR_CLUSTER_NAME -l YOUR_LICENSE_KEY -e ``` @@ -136,7 +136,7 @@ One [install option](#install-overview) is using our install script. To use the title="AWS Fargate launch type" > - ``` + ```sh ./newrelic-infra-ecs-installer.sh -f -c YOUR_CLUSTER_NAME -l YOUR_LICENSE_KEY ``` @@ -146,7 +146,7 @@ One [install option](#install-overview) is using our install script. To use the * Download the task definition example with the sidecar container to be deployed: - ``` + ```sh curl -O https://download.newrelic.com/infrastructure_agent/integrations/ecs/newrelic-infra-ecs-fargate-example-latest.json ``` @@ -156,8 +156,8 @@ One [install option](#install-overview) is using our install script. To use the Notice that the just created `NewRelicECSTaskExecutionRole` needs to be used as the task execution role. Policies attached to the role (All launch types): - - NewRelicSSMLicenseKeyReadAccess which enables access to the SSM parameter with the license key. - - AmazonECSTaskExecutionRolePolicy + - `NewRelicSSMLicenseKeyReadAccess` which enables access to the SSM parameter with the license key. + - `AmazonECSTaskExecutionRolePolicy` * Then, you can add the container you want to monitor as a sidecar. @@ -170,17 +170,17 @@ One [install option](#install-overview) is to manually do the steps that are don 1. 
Check that your AWS profile points to the same region where your ECS cluster was created: - ``` - $ aws configure get region - us-east-1 - - $ aws ecs list-clusters - YOUR_CLUSTER_ARNS - arn:aws:ecs:us-east-1:YOUR_AWS_ACCOUNT:cluster/YOUR_CLUSTER + ```sh + aws configure get region + [output] us-east-1 + [output] + aws ecs list-clusters + [output] YOUR_CLUSTER_ARNS + [output] arn:aws:ecs:us-east-1:YOUR_AWS_ACCOUNT:cluster/YOUR_CLUSTER ``` 2. Save your as a Systems Manager (SSM) parameter: - ``` + ```sh aws ssm put-parameter \ --name "/newrelic-infra/ecs/license-key" \ --type SecureString \ @@ -189,15 +189,15 @@ One [install option](#install-overview) is to manually do the steps that are don ``` 3. Create an IAM policy to access the license key parameter: - ``` + ```sh aws iam create-policy \ - --policy-name "NewRelicSSMLicenseKeyReadAccess" \ - --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"ssm:GetParameters\"],\"Resource\":[\"ARN_OF_LICENSE_KEY_PARAMETER\"]}]}" \ - --description "Provides read access to the New Relic SSM license key parameter" + --policy-name "NewRelicSSMLicenseKeyReadAccess" \ + --policy-document "{\"Version\":\"2012-10-17\",\"Statement\":[{\"Effect\":\"Allow\",\"Action\":[\"ssm:GetParameters\"],\"Resource\":[\"ARN_OF_LICENSE_KEY_PARAMETER\"]}]}" \ + --description "Provides read access to the New Relic SSM license key parameter" ``` 4. Create an IAM role to be used as the task execution role: - ``` + ```sh aws iam create-role \ --role-name "NewRelicECSTaskExecutionRole" \ --assume-role-policy-document '{"Version":"2008-10-17","Statement":[{"Sid":"","Effect":"Allow","Principal":{"Service":"ecs-tasks.amazonaws.com"},"Action":"sts:AssumeRole"}]}' \ @@ -205,10 +205,10 @@ One [install option](#install-overview) is to manually do the steps that are don ``` 5. 
Attach the policies `NewRelicSSMLicenseKeyReadAccess`, and `AmazonECSTaskExecutionRolePolicy` to the role: - ``` + ```sh aws iam attach-role-policy \ - --role-name "NewRelicECSTaskExecutionRole" \ - --policy-arn "POLICY_ARN" + --role-name "NewRelicECSTaskExecutionRole" \ + --policy-arn "POLICY_ARN" ``` 6. Choose your launch type for more instructions: @@ -221,18 +221,18 @@ One [install option](#install-overview) is to manually do the steps that are don 1. Download the New Relic ECS integration task definition template file: - ``` + ```sh curl -O https://download.newrelic.com/infrastructure_agent/integrations/ecs/newrelic-infra-ecs-ec2-latest.json ``` 2. Replace the task execution role in the template file with the newly created role: - ``` + ```json "executionRoleArn": "NewRelicECSTaskExecutionRole", ``` 3. Replace the `valueFrom` attribute of the `secret` with the name of the Systems Manager parameter: - ``` - secrets": [ + ```json + "secrets": [ { "valueFrom": "/newrelic-infra/ecs/license-key", "name": "NRIA_LICENSE_KEY" @@ -241,20 +241,20 @@ One [install option](#install-overview) is to manually do the steps that are don ``` 4. Register the task definition file: - ``` + ```sh aws ecs register-task-definition --cli-input-json file://newrelic-infra-ecs-ec2-latest.json ``` 5. 
Create a service with the [daemon scheduling](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) strategy for the registered task: For EC2 launch type: - ``` + ```sh aws ecs create-service --cluster "YOUR_CLUSTER_NAME" --service-name "newrelic-infra" --task-definition "newrelic-infra" --scheduling-strategy DAEMON --launch-type EC2 ``` For EXTERNAL (ECS Anywhere) launch type: - ``` + ```sh aws ecs create-service --cluster "YOUR_CLUSTER_NAME" --service-name "newrelic-infra-external" --task-definition "newrelic-infra" --scheduling-strategy DAEMON --launch-type EXTERNAL ``` @@ -267,7 +267,7 @@ One [install option](#install-overview) is to manually do the steps that are don 1. Download the task definition example with the sidecar container to be deployed: - ``` + ```sh curl -O https://download.newrelic.com/infrastructure_agent/integrations/ecs/newrelic-infra-ecs-fargate-example-latest.json ``` diff --git a/src/content/docs/infrastructure/elastic-container-service-integration/troubleshooting/ecs-integration-troubleshooting-generate-verbose-logs.mdx b/src/content/docs/infrastructure/elastic-container-service-integration/troubleshooting/ecs-integration-troubleshooting-generate-verbose-logs.mdx index 67f4fd73a4a..6cb28b80f97 100644 --- a/src/content/docs/infrastructure/elastic-container-service-integration/troubleshooting/ecs-integration-troubleshooting-generate-verbose-logs.mdx +++ b/src/content/docs/infrastructure/elastic-container-service-integration/troubleshooting/ecs-integration-troubleshooting-generate-verbose-logs.mdx @@ -90,13 +90,13 @@ To get logs via CloudWatch: Read [more about these options](/docs/infrastructure/install-configure-manage-infrastructure/configuration/infrastructure-configuration-settings#verbose). 2. We use a CloudWatch log group called `/newrelic-infra/ecs` to forward the logs to. 
To see if it already exists, run: - ``` + ```sh aws logs describe-log-groups --log-group-name-prefix /newrelic-infra/ecs ``` If a log group exists with that prefix, you'll get this output: - ``` + ```json { "logGroups": [ { @@ -112,19 +112,19 @@ To get logs via CloudWatch: Because this command matches log groups with prefixes, ensure the log group name returned is exactly `/newrelic-infra/ecs`. If the log group doesn't exist, the output will be: - ``` + ```json { "logGroups": [] } ``` 3. If the log group doesn't exist, create it by running: - ``` + ```sh aws logs create-log-group --log-group-name /newrelic-infra/ecs ``` 4. Edit your task definition. In the container definition for the `newrelic-infra` container, add the following `logConfiguration`: - ``` + ```json "logConfiguration": { "logDriver": "awslogs", "options": { @@ -139,41 +139,41 @@ To get logs via CloudWatch: To get all the log streams for a given log group, run this command: - ``` + ```sh aws logs describe-log-streams --log-group-name /newrelic-infra/ecs ``` The following is an example output of a log group with two streams: - ``` - { - "logStreams": [ - { - "logStreamName": "verbose/newrelic-infra/9dfb28114e40415ebc399ec1e53a21b7", - "creationTime": 1586166741197, - "firstEventTimestamp": 1586166742030, - "lastEventTimestamp": 1586173933472, - "lastIngestionTime": 1586175101220, - "uploadSequenceToken": "49599989655680038369205623273330095416487086853777112338", - "arn": "arn:aws:logs:AWS_REGION_OF_YOUR_CLUSTER:YOUR_AWS_ACCOUNT:log-group:/newrelic-infra/ecs:log-stream:verbose/newrelic-infra/9dfb28114e40415ebc399ec1e53a21b7", - "storedBytes": 0 - }, - { - "logStreamName": "verbose/newrelic-infra/f6ce0be416804bc4bfa658da5514eb00", - "creationTime": 1586166745643, - "firstEventTimestamp": 1586166746491, - "lastEventTimestamp": 1586173037927, - "lastIngestionTime": 1586175100660, - "uploadSequenceToken": "49605664273821671319096446647846424799651902350804230514", - "arn": 
"arn:aws:logs:AWS_REGION_OF_YOUR_CLUSTER:YOUR_AWS_ACCOUNT:log-group:/newrelic-infra/ecs:log-stream:verbose/newrelic-infra/f6ce0be416804bc4bfa658da5514eb00", - "storedBytes": 0 - } - ] - } - ``` + ```json + { + "logStreams": [ + { + "logStreamName": "verbose/newrelic-infra/9dfb28114e40415ebc399ec1e53a21b7", + "creationTime": 1586166741197, + "firstEventTimestamp": 1586166742030, + "lastEventTimestamp": 1586173933472, + "lastIngestionTime": 1586175101220, + "uploadSequenceToken": "49599989655680038369205623273330095416487086853777112338", + "arn": "arn:aws:logs:AWS_REGION_OF_YOUR_CLUSTER:YOUR_AWS_ACCOUNT:log-group:/newrelic-infra/ecs:log-stream:verbose/newrelic-infra/9dfb28114e40415ebc399ec1e53a21b7", + "storedBytes": 0 + }, + { + "logStreamName": "verbose/newrelic-infra/f6ce0be416804bc4bfa658da5514eb00", + "creationTime": 1586166745643, + "firstEventTimestamp": 1586166746491, + "lastEventTimestamp": 1586173037927, + "lastIngestionTime": 1586175100660, + "uploadSequenceToken": "49605664273821671319096446647846424799651902350804230514", + "arn": "arn:aws:logs:AWS_REGION_OF_YOUR_CLUSTER:YOUR_AWS_ACCOUNT:log-group:/newrelic-infra/ecs:log-stream:verbose/newrelic-infra/f6ce0be416804bc4bfa658da5514eb00", + "storedBytes": 0 + } + ] + } + ``` 7. From the previous list of log streams, identify the one with the task ID for which you want to retrieve the logs and use the logStreamName in this command: - ``` + ```sh aws logs get-log-events --log-group-name /newrelic-infra/ecs --log-stream-name "LOG_STREAM_NAME" --output text > logs.txt ``` 8. Continue with the [enable verbose logs](#env-variable) instructions. @@ -186,14 +186,14 @@ To enable verbose logs by running a command from the running container: 2. Find the container ID of the New Relic integration container by running the command `docker ps -a`. The name of the container should be `nri-ecs`. 3. Enable verbose logs for a limited period of time by using `newrelic-infra-ctl`. 
Run the command: - ``` + ```sh docker exec INTEGRATION_CONTAINER_ID /usr/bin/newrelic-infra-ctl ``` For more details, see [Troubleshoot the agent](/docs/infrastructure/install-configure-manage-infrastructure/manage-your-agent/troubleshoot-running-agent). 4. Save the logs from the container with the command - ``` + ```sh docker logs INTEGRATION_CONTAINER_ID > logs.txt ``` diff --git a/src/content/docs/infrastructure/elastic-container-service-integration/troubleshooting/ecs-integration-troubleshooting-no-data-appears.mdx b/src/content/docs/infrastructure/elastic-container-service-integration/troubleshooting/ecs-integration-troubleshooting-no-data-appears.mdx index f99a0709ade..418bacc28bc 100644 --- a/src/content/docs/infrastructure/elastic-container-service-integration/troubleshooting/ecs-integration-troubleshooting-no-data-appears.mdx +++ b/src/content/docs/infrastructure/elastic-container-service-integration/troubleshooting/ecs-integration-troubleshooting-no-data-appears.mdx @@ -39,11 +39,11 @@ When interacting with New Relic support, use this method and send the generated 1. Retrieve the information related to the `newrelic-infra` service or the Fargate service that contains a task with a `newrelic-infra` sidecar: - ``` + ```sh aws ecs describe-services --cluster YOUR_CLUSTER_NAME --service newrelic-infra > newrelic-infra-service.json ``` - ``` + ```sh aws ecs describe-services --cluster YOUR_CLUSTER_NAME --service YOUR_FARGATE_SERVICE_WITH_NEW_RELIC_SIDECAR > newrelic-infra-sidecar-service.json ``` 2. The `failures` attribute details any errors for the services. @@ -51,7 +51,7 @@ When interacting with New Relic support, use this method and send the generated 4. The `desiredCount` should match the `runningCount`. This is the number of tasks the service is handling. Because we use the daemon service type, there should be one task per container instance in your cluster. The `pendingCount` attribute should be zero, because all tasks should be running. 5. 
Inspect the `events` attribute of `services` to check for issues with scheduling or starting the tasks. For example: if the service is unable to start tasks successfully, it will display a message like: - ``` + ```json { "id": "5295a13c-34e6-41e1-96dd-8364c42cc7a9", "createdAt": "2020-04-06T15:28:18.298000+02:00", @@ -60,7 +60,7 @@ When interacting with New Relic support, use this method and send the generated ``` 6. In the same section, you can also see which tasks were started by the service from the events: - ``` + ```json { "id": "1c0a6ce2-de2e-49b2-b0ac-6458a804d0f0", "createdAt": "2020-04-06T15:27:49.614000+02:00", @@ -69,13 +69,13 @@ When interacting with New Relic support, use this method and send the generated ``` 7. Retrieve the information related to the task with this command: - ``` + ```sh aws ecs describe-tasks --tasks YOUR_TASK_ID --cluster YOUR_CLUSTER_NAME > newrelic-infra-task.json ``` 8. The `desiredStatus` and `lastStatus` should be `RUNNING`. If the task couldn't start normally, it will have a `STOPPED` status. 9. Inspect the `stopCode` and `stoppedReason`. 
One reason example: a task that couldn't be started because the task execution role doesn't have the appropriate permissions to download the license-key-containing secret would have the following output: - ``` + ```json "stopCode": "TaskFailedToStart", "stoppedAt": "2020-04-06T15:28:54.725000+02:00", "stoppedReason": "Fetching secret data from AWS Secrets Manager in region YOUR_AWS_REGION: secret arn:aws:secretsmanager:YOUR_AWS_REGION:YOUR_AWS_ACCOUNT:secret:NewRelicLicenseKeySecret-Dh2dLkgV8VyJ-80RAHS-fail: AccessDeniedException: User: arn:aws:sts::YOUR_AWS_ACCOUNT:assumed-role/NewRelicECSIntegration-Ne-NewRelicECSTaskExecution-1C0ODHVT4HDNT/8637b461f0f94d649e9247e2f14c3803 is not authorized to perform: secretsmanager:GetSecretValue on resource: arn:aws:secretsmanager:YOUR_AWS_REGION:YOUR_AWS_ACCOUNT:secret:NewRelicLicenseKeySecret-Dh2dLkgV8VyJ-80RAHS-fail-DmLHfs status code: 400, request id: 9cf1881e-14d7-4257-b4a8-be9b56e09e3c", @@ -121,24 +121,24 @@ This means that the IAM role specified using `executionRoleArn` in the task defi 1. Get the execution role of your task: - ``` + ```sh aws ecs describe-task-definition --task-definition newrelic-infra --output text --query taskDefinition.executionRoleArn ``` You can replace the `--task-definition newrelic-infra` with the name of your fargate task that includes the sidecar container. - ``` + ```sh aws ecs describe-task-definition --task-definition YOUR_FARGATE_TASK_NAME --output text --query taskDefinition.executionRoleArn ``` 2. List the policies attached to role: - ``` + ```sh aws iam list-attached-role-policies --role-name YOUR_EXECUTION_ROLE_NAME ``` This should return 3 policies `AmazonECSTaskExecutionRolePolicy`, `AmazonEC2ContainerServiceforEC2Role` and a third one that should grant read access to the . In the following example the policy it's named `NewRelicLicenseKeySecretReadAccess`. 
- ``` + ```json { "AttachedPolicies": [ { @@ -158,7 +158,7 @@ This means that the IAM role specified using `executionRoleArn` in the task defi ``` 3. Retrieve the default policy version: - ``` + ```sh aws iam get-policy-version --policy-arn arn:aws:iam::YOUR_AWS_ACCOUNT:policy/YOUR_POLICY_NAME --version-id $(aws iam get-policy --policy-arn arn:aws:iam::YOUR_AWS_ACCOUNT:policy/YOUR_POLICY_NAME --output text --query Policy.DefaultVersionId) ``` @@ -169,7 +169,7 @@ This means that the IAM role specified using `executionRoleArn` in the task defi id="aws-secrets-manager" title="AWS Secrets Manager" > - ``` + ```json { "PolicyVersion": { "Document": { @@ -194,7 +194,7 @@ This means that the IAM role specified using `executionRoleArn` in the task defi id="aws-systems-manager-parameter-store" title="AWS Systems Manager Parameter Store" > - ``` + ```json { "Version": "2012-10-17", "Statement": [ diff --git a/src/content/docs/infrastructure/elastic-container-service-integration/understand-use-ecs-data.mdx b/src/content/docs/infrastructure/elastic-container-service-integration/understand-use-ecs-data.mdx index 46d0441128f..557769711c2 100644 --- a/src/content/docs/infrastructure/elastic-container-service-integration/understand-use-ecs-data.mdx +++ b/src/content/docs/infrastructure/elastic-container-service-integration/understand-use-ecs-data.mdx @@ -40,12 +40,12 @@ All the events reported from an ECS cluster contain the attributes `ecsClusterNa Here's an example [NRQL query](/docs/query-data/nrql-new-relic-query-language/getting-started/introduction-nrql) that returns the count of containers associated with each Docker image in an ECS cluster named `MyClusterName` created in `us-east-1`: -``` +```sql SELECT uniqueCount(containerId) - FROM ContainerSample - WHERE awsRegion = 'us-east-1' - AND ecsClusterName = 'MyClusterName' - FACET imageName SINCE 1 HOUR AGO +FROM ContainerSample +WHERE awsRegion = 'us-east-1' +AND ecsClusterName = 'MyClusterName' +FACET imageName SINCE 1 
HOUR AGO ``` diff --git a/src/content/docs/infrastructure/elastic-container-service-integration/uninstall-ecs-integration.mdx b/src/content/docs/infrastructure/elastic-container-service-integration/uninstall-ecs-integration.mdx index 14926555529..07255454647 100644 --- a/src/content/docs/infrastructure/elastic-container-service-integration/uninstall-ecs-integration.mdx +++ b/src/content/docs/infrastructure/elastic-container-service-integration/uninstall-ecs-integration.mdx @@ -38,21 +38,21 @@ To uninstall the ECS integration using the installer script: * For EC2 and EXTERNAL launch type: run - ``` - $ ./newrelic-infrastructure-ecs-installer.sh -u -c YOUR_CLUSTER_NAME + ```sh + ./newrelic-infrastructure-ecs-installer.sh -u -c YOUR_CLUSTER_NAME ``` * For Fargate launch type: - ``` - $ ./newrelic-infrastructure-ecs-installer.sh -f -u -c YOUR_CLUSTER_NAME + ```sh + ./newrelic-infrastructure-ecs-installer.sh -f -u -c YOUR_CLUSTER_NAME ``` You only need to execute the command once, regardless of the number of nodes in your cluster. The command will delete the [AWS resources created during the install procedure](/docs/install-ecs-integration#aws-resources). The installer provides a dry run mode that shows you the awscli commands that are going to be executed. The dry run mode for the uninstall process is activated by passing the `-d` flag to the command: -``` -$ ./newrelic-infrastructure-ecs-installer.sh -d -u -c YOUR_CLUSTER_NAME +```sh +./newrelic-infrastructure-ecs-installer.sh -d -u -c YOUR_CLUSTER_NAME ``` ### Manual uninstall @@ -61,61 +61,63 @@ To uninstall manually, you must delete all the [AWS resources](/docs/install-ecs 1. 
Check that your AWS profile points to the same region where your ECS cluster was created: - ``` - $ aws configure get region - us-east-1 - - $ aws ecs list-clusters - YOUR_CLUSTER_ARNS - arn:aws:ecs:us-east-1:YOUR_AWS_ACCOUNT:cluster/YOUR_CLUSTER + ```sh + aws configure get region + [output] us-east-1 + [output] + aws ecs list-clusters + [output] YOUR_CLUSTER_ARNS + [output] arn:aws:ecs:us-east-1:YOUR_AWS_ACCOUNT:cluster/YOUR_CLUSTER ``` 2. Delete the Systems Manager (SSM) parameter that stores the New Relic : - ``` + ```sh aws ssm delete-parameter --name "/newrelic-infra/ecs/license-key" ``` 3. Before deleting the IAM role, you need to detach all of its policies. To get a list of the attached policies: - ``` - aws iam list-attached-role-policies --role-name "NewRelicECSTaskExecutionRole" --output text - --query 'AttachedPolicies[*].PolicyArn' + ```sh + aws iam list-attached-role-policies \ + --role-name "NewRelicECSTaskExecutionRole" \ + --output text \ + --query 'AttachedPolicies[*].PolicyArn' ``` 4. Detach all the policies returned in the previous step from the IAM role: - ``` + ```sh aws iam detach-role-policy --role-name "NewRelicECSTaskExecutionRole" --policy-arn "POLICY_ARN" ``` 5. Delete the IAM role: - ``` + ```sh aws iam delete-role --role-name "NewRelicECSTaskExecutionRole" ``` 6. Delete the IAM policy `NewRelicSSMLicenseKeyReadAccess`, which grants System Manager license key access: - ``` + ```sh aws iam delete-policy --policy-arn "POLICY_ARN" ``` 7. The remaining steps are only for EC2 and EXTERNAL launch type, and not Fargate: 1. Delete the services: - ``` + ```sh aws ecs delete-service --service "newrelic-infra" --cluster "YOUR_CLUSTER_NAME" ``` - ``` + ```sh aws ecs delete-service --service "newrelic-infra-external" --cluster "YOUR_CLUSTER_NAME" ``` 2. 
List the task definition for the `newrelic-infra` family of tasks: - ``` + ```sh aws ecs list-task-definitions \ - --family-prefix newrelic-infra \ - --output text \ - --query taskDefinitionArns + --family-prefix newrelic-infra \ + --output text \ + --query taskDefinitionArns ``` 3. Deregister the tasks: - ``` + ```sh aws ecs deregister-task-definition --task-definition "TASK_DEFINITION_ARN" ``` diff --git a/src/content/docs/infrastructure/host-integrations/host-integrations-list/rabbitmq-monitoring-integration.mdx b/src/content/docs/infrastructure/host-integrations/host-integrations-list/rabbitmq-monitoring-integration.mdx index a731ef2f9cc..55b94993cf2 100644 --- a/src/content/docs/infrastructure/host-integrations/host-integrations-list/rabbitmq-monitoring-integration.mdx +++ b/src/content/docs/infrastructure/host-integrations/host-integrations-list/rabbitmq-monitoring-integration.mdx @@ -63,12 +63,12 @@ To install the RabbitMQ integration, follow the instructions for your environmen 1. Install [the infrastructure agent](/docs/integrations/host-integrations/installation/install-infrastructure-host-integrations/#install), and replace the `INTEGRATION_FILE_NAME` variable with `nri-rabbitmq`. 2. Change the directory to the integrations configuration folder: - ``` + ```sh cd /etc/newrelic-infra/integrations.d ``` 3. Copy the sample configuration file: - ``` + ```sh sudo cp rabbitmq-config.yml.sample rabbitmq-config.yml ``` 4. Edit the `rabbitmq-config.yml` file as described in the [configuration settings](#config). 
@@ -76,7 +76,7 @@ To install the RabbitMQ integration, follow the instructions for your environmen **Example:** - ``` + ```sh sudo cp /etc/newrelic-infra/logging.d/rabbitmq-log.yml.example /etc/newrelic-infra/logging.d/rabbitmq-log.yml ``` @@ -90,12 +90,12 @@ To install the RabbitMQ integration, follow the instructions for your environmen [https://download.newrelic.com/infrastructure_agent/windows/integrations/nri-rabbitmq/nri-rabbitmq-amd64.msi](https://download.newrelic.com/infrastructure_agent/windows/integrations/nri-rabbitmq/nri-rabbitmq-amd64.msi) 2. To install from the Windows command prompt, run: - ``` + ```sh msiexec.exe /qn /i PATH\TO\nri-rabbitmq-amd64.msi ``` 3. In the Integrations directory, `C:\Program Files\New Relic\newrelic-infra\integrations.d\`, create a copy of the sample configuration file by running: - ``` + ```sh cp rabbitmq-config.yml.sample rabbitmq-config.yml ``` 4. Edit the `rabbitmq-config.yml` configuration file using the [configuration settings](#config). @@ -103,13 +103,13 @@ To install the RabbitMQ integration, follow the instructions for your environmen **Command Prompt Example:** - ``` + ```sh rename "C:\Program Files\New Relic\newrelic-infra\logging.d\rabbitmq-log-win.yml.example" rabbitmq-log-win.yml ``` **Powershell Example:** - ``` + ```powershell Rename-Item -Path "C:\Program Files\New Relic\newrelic-infra\logging.d\rabbitmq-log-win.yml.example" -NewName "rabbitmq-log-win.yml" ``` @@ -141,8 +141,7 @@ For an example configuration, see [Example config file](#example-config). The configuration file has common settings applicable to all integrations like `interval`, `timeout`, `inventory_source`. To read all about these common settings refer to our [Configuration Format](/docs/create-integrations/infrastructure-integrations-sdk/specifications/host-integrations-newer-configuration-format/#configuration-basics) document. 
- If you're still using our Legacy configuration/definition files please refer - to this + If you're still using our Legacy configuration/definition files please refer to this [document](/docs/create-integrations/infrastructure-integrations-sdk/specifications/host-integrations-standard-configuration-format/) for help. @@ -155,7 +154,7 @@ In cluster environments, the integration collects metrics from all the cluster w If you're running a cluster environment in Kubernetes, you need to deploy RabbitMQ as a [StatefulSet](https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/), and configure the agent to query all the metrics from the RabbitMQ pod. Set the autodiscovery matching condition [in the config file](https://github.com/newrelic/nri-rabbitmq/blob/master/rabbitmq-config.yml.k8s_sample) to this value: -``` +```yml discovery: command: exec: /var/db/newrelic-infra/nri-discovery-kubernetes @@ -169,13 +168,12 @@ discovery: ```yaml integrations: - - name: nri-rabbitmq - env: - # Integration configuration parameters. - METRICS: true - DISABLE_ENTITIES: true - QUEUES_MAX_LIMIT: "0" - + - name: nri-rabbitmq + env: + # Integration configuration parameters. 
+ METRICS: true + DISABLE_ENTITIES: true + QUEUES_MAX_LIMIT: "0" ``` @@ -208,7 +206,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **HOSTNAME** + `HOSTNAME` @@ -216,7 +214,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - localhost + `localhost` @@ -230,7 +228,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **PORT** + `PORT` @@ -238,7 +236,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - 15672 + `15672` @@ -252,7 +250,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **USERNAME** + `USERNAME` @@ -274,7 +272,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **PASSWORD** + `PASSWORD` @@ -296,7 +294,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **TIMEOUT** + `TIMEOUT` @@ -304,7 +302,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - 30 + `30` @@ -318,7 +316,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **MANAGEMENT_PATH_PREFIX** + `MANAGEMENT_PATH_PREFIX` @@ -340,7 +338,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **USE_SSL** + `USE_SSL` @@ -348,7 +346,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -362,7 +360,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **CA_BUNDLE_DIR** + `CA_BUNDLE_DIR` @@ -384,7 +382,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **CA_BUNDLE_FILE** + `CA_BUNDLE_FILE` @@ -406,7 +404,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **NODE_NAME_OVERRIDE** + `NODE_NAME_OVERRIDE` @@ -428,7 +426,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **CONFIG_PATH** + `CONFIG_PATH` @@ -446,7 +444,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **QUEUES** + `QUEUES` @@ -454,7 +452,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory Example: - ``` + ```yml queues: 
'["myQueue1","myQueue2"]' ``` @@ -470,7 +468,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **QUEUES_REGEXES** + `QUEUES_REGEXES` @@ -478,7 +476,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory Example: - ``` + ```yml queues_regexes: '["queue[0-9]+",".*"]' ``` @@ -498,7 +496,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **EXCHANGES** + `EXCHANGES` @@ -522,7 +520,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **EXCHANGES_REGEXES** + `EXCHANGES_REGEXES` @@ -545,7 +543,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **VHOSTS** + `VHOSTS` @@ -569,7 +567,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **VHOSTS_REGEXES** + `VHOSTS_REGEXES` @@ -593,7 +591,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **METRICS** + `METRICS` @@ -601,7 +599,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -613,7 +611,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **INVENTORY** + `INVENTORY` @@ -621,7 +619,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -633,7 +631,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **EVENTS** + `EVENTS` @@ -641,7 +639,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -653,7 +651,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **DISABLE_ENTITIES** + `DISABLE_ENTITIES` @@ -661,7 +659,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -672,11 +670,11 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **QUEUES_MAX_LIMIT** + `QUEUES_MAX_LIMIT` - Defines the max amount of Queues that can be processed, if this number is reached all queues will be dropped. 
If defined as '0' no limits are applied, this used with DISABLE_ENTITIES=true to avoid, memory increase in the Agent. + Defines the maximum number of queues that can be processed; if this number is reached, all queues are dropped. If set to `0`, no limit is applied. Use this together with `DISABLE_ENTITIES=true` to avoid memory growth in the agent. @@ -700,10 +698,10 @@ Environment variables can be used to control config settings, such as your Our default sample config file includes examples of labels but, as they are not mandatory, you can remove, modify or add new ones of your choice. -``` - labels: - env: production - role: rabbitmq +```yml +labels: + env: production + role: rabbitmq ``` ## Example configuration [#example-config] Here's an example configuration file: id="example-config1" title="BASIC CONFIGURATION WITH SSL" > - ``` + ```yml integrations: - name: nri-rabbitmq env: @@ -748,7 +746,7 @@ Here's an example configuration file: id="example-config2" title="METRICS ONLY" > - ``` + ```yml integrations: - name: nri-rabbitmq env: @@ -779,7 +777,7 @@ Here's an example configuration file: id="example-config3" title="INVENTORY ONLY" > - ``` + ```yml integrations: - name: nri-rabbitmq env: @@ -811,7 +809,7 @@ Here's an example configuration file: id="example-config4" title="EVENTS ONLY" > - ``` + ```yml integrations: - name: nri-rabbitmq env: @@ -838,10 +836,10 @@ Data from this service is reported to an [integration dashboard](/docs/integrati Metrics are attached to these [event types](/docs/using-new-relic/data/understand-data/new-relic-data-types#events-new-relic): -* [RabbitmqVhostSample](#vhostsample) -* [RabbitmqNodeSample](#nodesample) -* [RabbitmqExchangeSample](#exchangesample) -* [RabbitmqQueueSample](#queuesample) +* [`RabbitmqVhostSample`](#vhostsample) +* [`RabbitmqNodeSample`](#nodesample) +* [`RabbitmqExchangeSample`](#exchangesample) +* [`RabbitmqQueueSample`](#queuesample) You can [query this 
data](/docs/using-new-relic/data/understand-data/query-new-relic-data) for troubleshooting purposes or to create custom charts and dashboards. @@ -921,7 +919,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsOpening + `vhost.connectionsOpening` @@ -931,7 +929,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsRunning + `vhost.connectionsRunning` @@ -951,7 +949,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsTotal + `vhost.connectionsTotal` @@ -961,7 +959,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsTuning + `vhost.connectionsTuning` @@ -991,7 +989,7 @@ These attributes are attached to the `RabbitmqNodeSample` event type: - node.averageErlangProcessesWaiting + `node.averageErlangProcessesWaiting` @@ -1005,7 +1003,7 @@ These attributes are attached to the `RabbitmqNodeSample` event type: - Node disk alarm (0 or 1). 0 shows that the alarm is not tripped and 1 shows that the alarm is tripped. In RabbitMQ this is seen as `disk_free_alarm`. + Node disk alarm (`0` or `1`). `0` shows that the alarm is not tripped and `1` shows that the alarm is tripped. In RabbitMQ this is seen as `disk_free_alarm`. @@ -1065,7 +1063,7 @@ These attributes are attached to the `RabbitmqNodeSample` event type: - Host memory alarm (0 or 1). 0 shows that the alarm is not tripped and 1 shows that the alarm is tripped. In RabbitMQ this is seen as `mem_alarm`. + Host memory alarm (`0` or `1`). `0` shows that the alarm is not tripped and `1` shows that the alarm is tripped. In RabbitMQ this is seen as `mem_alarm`. @@ -1115,7 +1113,7 @@ These attributes are attached to the `RabbitmqNodeSample` event type: - Node running (0 or 1). 0 shows that the node is not running and 1 shows that the node is running. In RabbitMQ this is seen as `running`. + Node running (`0` or `1`). 
`0` shows that the node is not running and `1` shows that the node is running. In RabbitMQ this is seen as `running`. @@ -1297,7 +1295,7 @@ These attributes are attached to the `RabbitmqQueueSample` event type: - Rate of messages per queue delivered to clients but not yet acknowledged per second. In RabbitMQ this is seen as `message_stats`.deliver_no_ack_details.rate. + Rate of messages per queue delivered to clients but not yet acknowledged per second. In RabbitMQ this is seen as `message_stats.deliver_no_ack_details.rate`. @@ -1337,7 +1335,7 @@ These attributes are attached to the `RabbitmqQueueSample` event type: - Rate of messages delivered in acknowledgment mode to consumers per queue per second. In RabbitMQ this is seen as `message_stats`.deliver_details.rate. + Rate of messages delivered in acknowledgment mode to consumers per queue per second. In RabbitMQ this is seen as `message_stats.deliver_details.rate`. @@ -1387,7 +1385,7 @@ These attributes are attached to the `RabbitmqQueueSample` event type: - Sum of messages delivered in acknowledgment mode to consumers, in no-acknowledgment mode to consumers, in acknowledgment mode in response to basic.get, and in no-acknowledgment mode in response to basic.get. per queue. In RabbitMQ this is seen as `message_stats.deliver_get`. + Sum of messages delivered in acknowledgment mode to consumers, in no-acknowledgment mode to consumers, in acknowledgment mode in response to `basic.get`, and in no-acknowledgment mode in response to `basic.get` per queue. In RabbitMQ this is seen as `message_stats.deliver_get`. @@ -1397,7 +1395,7 @@ These attributes are attached to the `RabbitmqQueueSample` event type: - Rate per second of the sum of messages delivered in acknowledgment mode to consumers, in no-acknowledgment mode to consumers, in acknowledgment mode in response to basic.get, and in no-acknowledgment mode in response to basic.get per queue. In RabbitMQ this is seen as `message_stats.deliver_get_details.rate`. 
+ Rate per second of the sum of messages delivered in acknowledgment mode to consumers, in no-acknowledgment mode to consumers, in acknowledgment mode in response to `basic.get`, and in no-acknowledgment mode in response to `basic.get` per queue. In RabbitMQ this is seen as `message_stats.deliver_get_details.rate`. @@ -1487,7 +1485,7 @@ Troubleshooting tips: > If you receive this error, it means that the RabbitMQ command line tool, [rabbitmqctl](https://www.rabbitmq.com/rabbitmqctl.8.html), is not in the PATH of the root user. To correct this issue, execute the following command: - ``` + ```sh find -name "rabbitmqctl" export PATH="$PATH: ``` diff --git a/src/content/docs/infrastructure/host-integrations/host-integrations-list/statsd-monitoring-integration.mdx b/src/content/docs/infrastructure/host-integrations/host-integrations-list/statsd-monitoring-integration.mdx index 8b046bd06c3..e0fb3884469 100644 --- a/src/content/docs/infrastructure/host-integrations/host-integrations-list/statsd-monitoring-integration.mdx +++ b/src/content/docs/infrastructure/host-integrations/host-integrations-list/statsd-monitoring-integration.mdx @@ -33,10 +33,11 @@ The integration adheres to the Metric API [requirements and data limits](/docs/d ```sql SELECT count(*) FROM NrIntegrationError - WHERE newRelicFeature ='Metrics' - FACET category, message - LIMIT 100 since 1 day ago +WHERE newRelicFeature = 'Metrics' +FACET category, message +LIMIT 100 SINCE 1 day ago ``` + The integration is available as a Linux container image in [DockerHub](https://hub.docker.com/r/newrelic/nri-statsd/tags) for amd64 and arm64 architectures.
## Install @@ -103,13 +104,13 @@ Here are examples of Kubernetes manifests for deployment and service objects: spec: serviceAccountName: newrelic-statsd containers: - - name: newrelic-statsd - image: newrelic/nri-statsd:latest - env: - - name: NR_ACCOUNT_ID - value: "NEW_RELIC_ACCOUNT_ID" - - name: NR_API_KEY - value: "NEW_RELIC_LICENSE_KEY" + - name: newrelic-statsd + image: newrelic/nri-statsd:latest + env: + - name: NR_ACCOUNT_ID + value: "NEW_RELIC_ACCOUNT_ID" + - name: NR_API_KEY + value: "NEW_RELIC_LICENSE_KEY" ``` **service.yml**: @@ -125,10 +126,10 @@ Here are examples of Kubernetes manifests for deployment and service objects: spec: type: ClusterIP ports: - - name: newrelic-statsd - port: 80 - targetPort: 8125 - protocol: UDP + - name: newrelic-statsd + port: 80 + targetPort: 8125 + protocol: UDP selector: app: newrelic-statsd ``` @@ -211,7 +212,8 @@ In the [install procedure](#install), you run `nri-statsd` with environment vari Indicates address on which to listen for metrics. Default: `:8125`. - From nri-statsd `v2.3.0` (goStatsD `v34.2.1`), connection via Unix Domain Socket (UDS) is supported. Use "metrics-addr=/some/path/newrelic-statsd.socket" instead of "[host]:port" in the configuration. + From nri-statsd `v2.3.0` (goStatsD `v34.2.1`), connection via Unix Domain Socket (UDS) is supported. Use `metrics-addr=/some/path/newrelic-statsd.socket` instead of `[host]:port` in the configuration. 
+ @@ -220,11 +222,11 @@ In the [install procedure](#install), you run `nri-statsd` with environment vari To ensure FedRAMP compliance when using the StatsD integration you must define the following endpoints in the custom configuration: - ``` - address = 'https://gov-insights-collector.newrelic.com/v1/accounts/ $NR_ACCOUNT_ID/events' + ```ini + address = 'https://gov-insights-collector.newrelic.com/v1/accounts/$NR_ACCOUNT_ID/events' ``` - ``` + ```ini address-metrics = 'https://gov-infra-api.newrelic.com/metric/v1' ``` @@ -236,7 +238,8 @@ Here are some examples of customizing configuration by overwriting the default c id="config-example" title="Example of custom configuration" > - ``` + + ```ini # Specify after how long do we expire metrics, default:5m expiry-interval = '1ms' @@ -257,7 +260,7 @@ Here are some examples of customizing configuration by overwriting the default c By default, `nri_statsd` calculates the following for timer metrics: standard deviation, mean, median, sum, lower, and upper bounds for the flush interval. If you want to disable those metrics you can do it by adding a `disabled-sub-metrics` configuration section and set `true` for the ones you want disabled. Here's an example: - ``` + ```ini # disabled-sub-metrics configuration section allows disabling timer sub-metrics [disabled-sub-metrics] # Regular metrics @@ -292,7 +295,7 @@ Here are some examples of customizing configuration by overwriting the default c Example: - ``` + ```ini backends='newrelic' flush-interval='10s' @@ -324,7 +327,7 @@ Here are some examples of customizing configuration by overwriting the default c Example: - ``` + ```yml apiVersion: v1 kind: ConfigMap metadata: @@ -347,14 +350,14 @@ Here are some examples of customizing configuration by overwriting the default c Example: - ``` + ```yml apiVersion: apps/v1 kind: Deployment spec: template: spec: containers: - .... + # .... 
volumeMounts: - mountPath: /etc/opt/newrelic/ name: nri-statsd-config @@ -394,8 +397,9 @@ Here are explanations of these fields: - <metric name> - `string` + `<metric name>` + + _string_ @@ -405,8 +409,9 @@ Here are explanations of these fields: - <value> - `string` + `<value>` + + _string_ @@ -420,8 +425,9 @@ Here are explanations of these fields: - @<sample rate> - `float` + `@<sample rate>` + + _float_ @@ -433,8 +439,9 @@ Here are explanations of these fields: - \#<tags> - `string` + `#<tags>` + + _string_ @@ -459,7 +466,7 @@ Here are the types of metrics and how to format them: ``` counter:4|c - counter:-2|c + counter:-2|c ``` At each flush, the current count is sent and reset to `0`. If the count is not updated, at the next flush it will send the value `0`. You can opt to disable this behavior by setting [`expiry-interval`](#configure) to `1ms`. @@ -531,13 +538,13 @@ You can add tags to your data, which we save as [attributes](/docs/using-new-rel Here's an example that would create two tags: - ``` + ```sh -e TAGS="environment:production region:us" ``` Here's that environment variable used in the [startup command](#install): - ``` + ```sh docker run \ -d --restart unless-stopped \ --name newrelic-statsd \ @@ -566,7 +573,7 @@ You can add tags to your data, which we save as [attributes](/docs/using-new-rel Here's an example [NRQL](/docs/query-data/nrql-new-relic-query-language/getting-started/introduction-nrql) query that includes a custom tag: -``` +```sql SELECT count(*) FROM Metric WHERE environment = 'production' ``` @@ -583,13 +590,13 @@ You can alert on StatsD data using [NRQL alert conditions](/docs/alerts/new-reli First, send this data to New Relic’s StatsD container: - ``` + ```sh echo "prod.test.num:32|g" | nc -v -w 1 -u localhost 8125 ``` Next, create a [NRQL alert condition](/docs/alerts/new-relic-alerts/defining-conditions/create-alert-conditions-nrql-queries) using this query: - ``` + ```sql SELECT latest(prod.test.num) FROM Metric WHERE metricName = 'prod.test.num' ``` @@ -613,7 +620,7
@@ You can alert on StatsD data using [NRQL alert conditions](/docs/alerts/new-reli If a metric with a value above 50 is sent, then an incident is created and notified. The incident is closed automatically after 24 hours. To test that the alert is working, run this command: - ``` + ```sh echo "prod.test.num:60|g" | nc -v -w 1 -u localhost 8125 ``` @@ -623,7 +630,7 @@ You can alert on StatsD data using [NRQL alert conditions](/docs/alerts/new-reli To query your data, you'd use any New Relic [query option](/docs/using-new-relic/data/understand-data/query-new-relic-data). For example, you might run a [NRQL](/docs/query-data/nrql-new-relic-query-language/getting-started/introduction-nrql) query like: -``` +```sql SELECT count(*) FROM Metric WHERE metricName = 'myMetric' and environment = 'production' ``` diff --git a/src/install/python/python-agent-create-config-file.mdx b/src/install/python/python-agent-create-config-file.mdx index 1b2c8319627..acc6bb6f2fe 100644 --- a/src/install/python/python-agent-create-config-file.mdx +++ b/src/install/python/python-agent-create-config-file.mdx @@ -17,18 +17,15 @@ The Python agent needs some basic configurations to get started. Here are two co 1. Go to a working directory where you can store the file, and run the following: - ``` + ```sh newrelic-admin generate-config YOUR_LICENSE_KEY newrelic.ini ``` 2. Edit your `newrelic.ini` and insert values for the following: - ``` - ... + ```ini license_key = INSERT_YOUR_LICENSE_KEY - ... app_name = INSERT_YOUR_APP_NAME - ... ``` 3. Remember where this file is located because you'll use it later in the setup. @@ -40,9 +37,9 @@ The Python agent needs some basic configurations to get started. Here are two co > If you don't use a configuration file, you can use environment variables to set configuration values. 
While you can create a variety of configurations with environment variables, we recommend you start by setting your license key and app name: - ``` + ```sh export NEW_RELIC_LICENSE_KEY=INSERT_YOUR_LICENSE_KEY export NEW_RELIC_APP_NAME=INSERT_YOUR_APP_NAME ``` - \ No newline at end of file + diff --git a/src/install/python/python-agent-download.mdx b/src/install/python/python-agent-download.mdx index 72ba4d46b8d..6d568a16726 100644 --- a/src/install/python/python-agent-download.mdx +++ b/src/install/python/python-agent-download.mdx @@ -10,9 +10,9 @@ Download and install the agent package using one of these options: id="pip" title="pip install (RECOMMENDED)" > - Install the **newrelic** package directly from PyPi by running: + Install the `newrelic` package directly from PyPI by running: - ``` + ```sh pip install newrelic ``` @@ -23,7 +23,7 @@ Download and install the agent package using one of these options: > Run: - ``` + ```sh easy_install newrelic ``` @@ -43,11 +43,11 @@ Download and install the agent package using one of these options: > To obtain the package manually: - 1. Download the appropriate **tar.gz** file from our [download site](https://download.newrelic.com/python_agent/release). - 2. Unpack the **tar.gz** file. + 1. Download the appropriate `tar.gz` file from our [download site](https://download.newrelic.com/python_agent/release). + 2. Unpack the `tar.gz` file. 3.
In the top directory of the unpacked package, install it by running: - ``` + ```sh python setup.py install ``` diff --git a/src/install/python/python-agent-non-web-apps.mdx b/src/install/python/python-agent-non-web-apps.mdx index 904450a6040..d8d70f32e3c 100644 --- a/src/install/python/python-agent-non-web-apps.mdx +++ b/src/install/python/python-agent-non-web-apps.mdx @@ -21,22 +21,22 @@ Keep the following in mind: * If the task to be tracked is running in a background thread of an existing monitored web application process, then initialization of the agent would already be performed so you shouldn't need to repeat this step. * If instrumenting an application that is not also handling web traffic, you won't need to wrap the WSGI application entry point. -To get started, add the following to the beginning of the application script file or module that holds your WSGI entry point. In this example, /some/path/newrelic.ini represents the location of the copy of the config file created earlier. The config file must be readable by your web application. +To get started, add the following to the beginning of the application script file or module that holds your WSGI entry point. In this example, `/some/path/newrelic.ini` represents the location of the copy of the config file created earlier. The config file must be readable by your web application. -``` +```py import newrelic.agent -newrelic.agent.initialize('/some/path/newrelic.ini') -... YOUR_OTHER_IMPORTS +newrelic.agent.initialize('/some/path/newrelic.ini') +# YOUR_OTHER_IMPORTS ``` - Unlike standard Python functionality, the import order matters: the agent package must be imported first. + Unlike standard Python functionality, the import order matters: the agent package must be imported and [initialized](/docs/agents/python-agent/python-agent-api/initialize) first.
### Wrap the task to be monitored [#wrapping] Instead of wrapping the WSGI application entry point, you must wrap any function that performs a background task that you wish to track. For example: -``` +```py import newrelic.agent @newrelic.agent.background_task() @@ -48,7 +48,7 @@ By default the name of the task will be the name of the function the decorator i If you wish to override the task name, it can be supplied as a named argument to the decorator. An alternate group can also be specified in place of the default `Function`: -``` +```py import newrelic.agent @newrelic.agent.background_task(name='database-update', group='Task') @@ -58,7 +58,7 @@ def database_update(): If the name of the task needs to be set dynamically, then it will be necessary to use a context manager object instead. When using a context manager object, it is first necessary to retrieve the application object corresponding to the application data is to be reported against. Leaving out the name of the application when retrieving the application object will result in that corresponding to the default application named in the agent configuration being used. -``` +```py import newrelic.agent def execute_task(task_name): @@ -83,7 +83,7 @@ In this case, one can force registration in one of two ways. The simplest is to If using an agent configuration file, this is done by adding the following entry to the `newrelic` section of the agent configuration file.: -``` +```ini startup_timeout = 10.0 ``` @@ -93,7 +93,7 @@ Note that you should be careful about using this startup timeout for a web appli If necessary, forcing registration of the agent can also be performed in code within the application code as well. 
-``` +```py import newrelic.agent application = newrelic.agent.register_application(timeout=10.0) @@ -125,7 +125,7 @@ In this case it may be necessary to increase the shutdown timeout to ensure that If using an agent configuration file, this is changed by adding an entry: -``` +```ini shutdown_timeout = 2.5 ``` @@ -141,7 +141,7 @@ One common use case is the monitoring of Django management commands. Because it Due to the limitation on what Django management commands can be monitored, you need to add to the agent configuration file a special configuration section `[import-hook:django]`. Under this you need to then provide a space separated list under the setting `instrumentation.scripts.django_admin`: -``` +```ini [import-hook:django] instrumentation.scripts.django_admin = syncdb sqlflush ``` @@ -150,15 +150,15 @@ By default, the startup timeout is automatically specified to be 10.0 seconds wh Once the additional configuration has been specified, you can then run your Django management command wrapped by our `newrelic-admin` wrapper script: -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-python manage.py syncdb ``` ### A simple "Hello, world!" example [#hello-world] -Here's a simple `Hello, world` example with manual initialization: +Here's a simple **Hello, world** example with manual initialization: -``` +```py import newrelic.agent newrelic.agent.initialize('newrelic.ini') #This is required! diff --git a/src/install/python/python-agent-uvicorn-docker.mdx b/src/install/python/python-agent-uvicorn-docker.mdx index b659a1003bf..96e672145cb 100644 --- a/src/install/python/python-agent-uvicorn-docker.mdx +++ b/src/install/python/python-agent-uvicorn-docker.mdx @@ -31,7 +31,7 @@ Insert this configuration file in the same directory as your Python app. 
Create `requirements.txt` (the list of dependencies for Python) by running this command in the directory where your application resides: -``` +```sh pip freeze > requirements.txt ``` @@ -39,9 +39,9 @@ pip freeze > requirements.txt This is the container that runs the agent. -1. In the directory for the New Relic agent (for example, `newrelic`), create this Dockerfile for the base container (the file name is just `Dockerfile`): +1. In the directory for the New Relic agent (for example, `newrelic`), create this `Dockerfile` for the base container (the file name is just `Dockerfile`): - ``` + ```dockerfile FROM python:3.9.14-alpine3.16 RUN pip install --no-cache-dir newrelic @@ -51,58 +51,58 @@ This is the container that runs the agent. 2. Create the container with this command: - ``` + ```sh docker build -t python_newrelic:latest . ``` -3. Change to the directory where your app is located (for example, `src) +3. Change to the directory where your app is located (for example, `src`) ### E. Create a container for your app This is the container that will hold your application. It will work in concert with the base container. Note that it pulls in the base container in the `FROM`. -1. Create the Dockerfile for your app in the same directory as your Python app. +1. Create the `Dockerfile` for your app in the same directory as your Python app. * Change `WORKDIR` to match your directory structure. * Note that we're using `0.0.0.0` instead of something like `127.0.0.1` because you must set a container's main process to bind to the all interface addresses or it will be unreachable from outside the container. 
- ``` + ```dockerfile FROM python_newrelic:latest RUN apk add --no-cache bzip2-dev \ - coreutils \ - gcc \ - libc-dev \ - libffi-dev \ - libressl-dev \ - linux-headers + coreutils \ + gcc \ + libc-dev \ + libffi-dev \ + libressl-dev \ + linux-headers - WORKDIR INSERT_THE_PATH_TO_YOUR_PYTHON_APP + WORKDIR INSERT_THE_PATH_TO_YOUR_PYTHON_APP - COPY requirements.txt ./ - RUN pip install --no-cache-dir -r requirements.txt + COPY requirements.txt ./ + RUN pip install --no-cache-dir -r requirements.txt - COPY . . + COPY . . - EXPOSE 8000 + EXPOSE 8000 - CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"] + CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"] ``` 2. In the directory with your Dockerfile and your app, build the container with this command: - ``` - docker build -t my_python_api . + ```sh + docker build -t my_python_api . ``` ### F. Run the containers [#run-container] Once you have the base New Relic container and a container made for your app, you can run both with this command: -``` +```sh docker run \ --e NEW_RELIC_LICENSE_KEY=INSERT_YOUR_LICENSE_KEY \ --e NEW_RELIC_APP_NAME="docker-fastapi-example" \ --p 8000:8000 -it --rm --name CONTAINER_NAME DOCKER_IMAGE:IMAGE_TAG + -e NEW_RELIC_LICENSE_KEY=INSERT_YOUR_LICENSE_KEY \ + -e NEW_RELIC_APP_NAME="docker-fastapi-example" \ + -p 8000:8000 -it --rm --name CONTAINER_NAME DOCKER_IMAGE:IMAGE_TAG ``` Note the following about this command: diff --git a/src/install/python/python-frameworks/python-agent-cherrypy.mdx b/src/install/python/python-frameworks/python-agent-cherrypy.mdx index 9010cb80d06..33422adea52 100644 --- a/src/install/python/python-frameworks/python-agent-cherrypy.mdx +++ b/src/install/python/python-frameworks/python-agent-cherrypy.mdx @@ -10,7 +10,7 @@ You need to integrate the Python agent with your application so that your app's Instead of manually integrating the call to startup the CherryPy WSGI server in your application code, you might be using PasteDeploy with a 
configuration file like the following: -``` +```ini [server:main] use = egg:PasteScript#cherrypy host = 127.0.0.1 @@ -19,6 +19,6 @@ port = 8080 When you start up your WSGI application, wrap the running of the `paster` command: -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program paster serve production.ini -``` \ No newline at end of file +``` diff --git a/src/install/python/python-frameworks/python-agent-paste.mdx b/src/install/python/python-frameworks/python-agent-paste.mdx index cbd295bece4..2ef531e182d 100644 --- a/src/install/python/python-frameworks/python-agent-paste.mdx +++ b/src/install/python/python-frameworks/python-agent-paste.mdx @@ -10,7 +10,7 @@ You need to integrate the Python agent with your application so that your app's Instead of manually integrating the call to startup the Paste WSGI server in your application code, you might be using PasteDeploy with a configuration file like the following: -``` +```ini [server:main] use = egg:Paste#http host = 127.0.0.1 @@ -19,6 +19,6 @@ port = 8080 When you start up your WSGI application, wrap the running of the `paster` command: -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program paster serve production.ini -``` \ No newline at end of file +``` diff --git a/src/install/python/python-frameworks/python-agent-unsupported-asgi-frameworks.mdx b/src/install/python/python-frameworks/python-agent-unsupported-asgi-frameworks.mdx index 6a47615dfc7..e2e52e355f7 100644 --- a/src/install/python/python-frameworks/python-agent-unsupported-asgi-frameworks.mdx +++ b/src/install/python/python-frameworks/python-agent-unsupported-asgi-frameworks.mdx @@ -14,15 +14,15 @@ It sounds like you're using an unsupported framework, such as Quart. In cases li Since you're not using a supported framework, you won't be able to use our admin script that automatically initializes your app; rather, you must initialize the Python agent manually in your web app code. 
This process involves importing a Python agent package into your app and making a call to initialize the agent. This call modifies your app's import mechanism so that when libraries are imported, the agent listens for the function classes it recognizes. -To get started, add the following to the beginning of the application script file or module that holds your ASGI entry point. In this example, /some/path/newrelic.ini represents the location of the copy of the config file created earlier. The config file must be readable by your web application. +To get started, add the following to the beginning of the application script file or module that holds your ASGI entry point. In this example, `/some/path/newrelic.ini` represents the location of the copy of the config file created earlier. The config file must be readable by your web application. -``` +```py import newrelic.agent -newrelic.agent.initialize('/some/path/newrelic.ini') -... YOUR_OTHER_IMPORTS +newrelic.agent.initialize('/some/path/newrelic.ini') +# YOUR_OTHER_IMPORTS ``` - Unlike standard Python functionality, the import order matters: the agent package must be imported and initialized first. + Unlike standard Python functionality, the import order matters: the agent package must be imported and [initialized](/docs/agents/python-agent/python-agent-api/initialize) first. ### (Advanced) Configuration file overrides [#overrides] @@ -34,11 +34,11 @@ If you want to override configurations, here are some options: id="environment-override" title="Admin script with deployment environment overrides" > - To specify an override in the agent config file that corresponds to a specific deployment environment, supply the environment's name as the second argument to the [`initialize()` function](/docs/agents/python-agent/python-agent-api/initialize). 
If you have installed the Python package into a Python virtual environment, you must add these lines after you have activated or set up **sys.path** to find your virtual environment. + To specify an override in the agent config file that corresponds to a specific deployment environment, supply the environment's name as the second argument to the [`initialize()` function](/docs/agents/python-agent/python-agent-api/initialize). If you have installed the Python package into a Python virtual environment, you must add these lines after you have activated or set up `sys.path` to find your virtual environment. - ``` + ```py import newrelic.agent - newrelic.agent.initialize('/some/path/newrelic.ini', 'staging') + newrelic.agent.initialize('/some/path/newrelic.ini', 'staging') ``` Whenever possible, precede any imports for modules that are going to be instrumented. For some web frameworks this is mandatory. The instrumentation will not work correctly if not placed before all imports that cause code from that framework to be imported. @@ -50,7 +50,7 @@ If you want to override configurations, here are some options: > If you do not use the admin script but still want to use the environment variables `NEW_RELIC_CONFIG_FILE` and `NEW_RELIC_ENVIRONMENT` to configure the agent, you can call the `initialize()` function with no arguments, and they will be read automatically. - ``` + ```py import newrelic.agent newrelic.agent.initialize() ``` @@ -65,15 +65,15 @@ If you are using an unsupported web framework or are constructing an ASGI applic If the ASGI application entry point is a function declared in the file itself, use a decorator: -``` +```py @newrelic.agent.asgi_application() def application(environ, start_response): -... + ... 
``` If the ASGI application entry point is a function or object imported from a different module, wrap it with a wrapper object: -``` +```py import myapp application = myapp.ASGIHandler() @@ -82,4 +82,4 @@ application = newrelic.agent.ASGIApplicationWrapper(application) If a supported web framework is being used, you can still use the decorator or wrapper explicitly if, for example, you want to configure additional ASGI middleware around the supported web framework. This will ensure that execution of all ASGI middleware is also covered by the monitoring done by the agent. -For more information, see the documentation for the [asgi_application()](/docs/python/python-instrumentation-api#asgi_application) and [ASGIApplicationWrapper](/docs/python/python-instrumentation-api#ASGIApplicationWrapper) wrapper. +For more information, see the documentation for the [`asgi_application()`](/docs/python/python-instrumentation-api#asgi_application) and [`ASGIApplicationWrapper`](/docs/python/python-instrumentation-api#ASGIApplicationWrapper) wrapper. diff --git a/src/install/python/python-frameworks/python-agent-unsupported-wsgi-frameworks.mdx b/src/install/python/python-frameworks/python-agent-unsupported-wsgi-frameworks.mdx index 0900259823f..398312df304 100644 --- a/src/install/python/python-frameworks/python-agent-unsupported-wsgi-frameworks.mdx +++ b/src/install/python/python-frameworks/python-agent-unsupported-wsgi-frameworks.mdx @@ -14,15 +14,15 @@ It sounds like you're using an unsupported framework, such as mod_wsgi. In cases Since you're not using a supported framework, you won't be able to use our admin script that automatically initializes your app; rather, you must initialize the Python agent manually in your web app code. This process involves importing a Python agent package into your app and making a call to initialize the agent. 
This call modifies your app's import mechanism so that when libraries are imported, the agent listens for the function classes it recognizes. -To get started, add the following to the beginning of the application script file or module that holds your WSGI entry point. In this example, /some/path/newrelic.ini represents the location of the copy of the config file created earlier. The config file must be readable by your web application. +To get started, add the following to the beginning of the application script file or module that holds your WSGI entry point. In this example, `/some/path/newrelic.ini` represents the location of the copy of the config file created earlier. The config file must be readable by your web application. -``` +```py import newrelic.agent -newrelic.agent.initialize('/some/path/newrelic.ini') -... YOUR_OTHER_IMPORTS +newrelic.agent.initialize('/some/path/newrelic.ini') +# YOUR_OTHER_IMPORTS ``` - Unlike standard Python functionality, the import order matters: the agent package must be imported and initialized first. + Unlike standard Python functionality, the import order matters: the agent package must be imported and [initialized](/docs/agents/python-agent/python-agent-api/initialize) first. ### (Advanced) Configuration file overrides [#overrides] @@ -34,11 +34,11 @@ If you want to override configurations, here are some options: id="environment-override" title="Admin script with deployment environment overrides" > - To specify an override in the agent config file that corresponds to a specific deployment environment, supply the environment's name as the second argument to the [`initialize()` function](/docs/agents/python-agent/python-agent-api/initialize). If you have installed the Python package into a Python virtual environment, you must add these lines after you have activated or set up **sys.path** to find your virtual environment.
+ To specify an override in the agent config file that corresponds to a specific deployment environment, supply the environment's name as the second argument to the [`initialize()` function](/docs/agents/python-agent/python-agent-api/initialize). If you have installed the Python package into a Python virtual environment, you must add these lines after you have activated or set up `sys.path` to find your virtual environment. - ``` + ```py import newrelic.agent - newrelic.agent.initialize('/some/path/newrelic.ini', 'staging') + newrelic.agent.initialize('/some/path/newrelic.ini', 'staging') ``` Whenever possible, precede any imports for modules that are going to be instrumented. For some web frameworks, including Flask, this is mandatory. The instrumentation will not work correctly if not placed before all imports that cause code from that framework to be imported. @@ -50,7 +50,7 @@ If you want to override configurations, here are some options: > If you do not use the admin script but still want to use the environment variables `NEW_RELIC_CONFIG_FILE` and `NEW_RELIC_ENVIRONMENT` to configure the agent, you can call the `initialize()` function with no arguments, and they will be read automatically. - ``` + ```py import newrelic.agent newrelic.agent.initialize() ``` @@ -73,15 +73,15 @@ If you are using an unsupported web framework or are constructing a WSGI applica If the WSGI application entry point is a function declared in the file itself, use a decorator: -``` +```py @newrelic.agent.wsgi_application() def application(environ, start_response): -... + ... 
``` If the WSGI application entry point is a function or object imported from a different module, wrap it with a wrapper object: -``` +```py import myapp application = myapp.WSGIHandler() @@ -90,5 +90,5 @@ application = newrelic.agent.WSGIApplicationWrapper(application) If a supported web framework is being used, you can still use the decorator or wrapper explicitly if, for example, you want to configure additional WSGI middleware around the supported web framework. This will ensure that execution of all WSGI middleware is also covered by the monitoring done by the agent. -For more information, see the documentation for the [wsgi_application()](/docs/python/python-instrumentation-api#wsgi_application) and [WSGIApplicationWrapper](/docs/python/python-instrumentation-api#WSGIApplicationWrapper) wrapper. +For more information, see the documentation for the [`wsgi_application()`](/docs/python/python-instrumentation-api#wsgi_application) and [`WSGIApplicationWrapper`](/docs/python/python-instrumentation-api#WSGIApplicationWrapper) wrapper. diff --git a/src/install/python/python-frameworks/python-agent-web2py.mdx b/src/install/python/python-frameworks/python-agent-web2py.mdx index e0fdb123849..3a903c4d5c4 100644 --- a/src/install/python/python-frameworks/python-agent-web2py.mdx +++ b/src/install/python/python-frameworks/python-agent-web2py.mdx @@ -8,10 +8,10 @@ You need to integrate the Python agent with your application so that your app's ### How to use with runweb2py [#runweb2py-tip] -If you run your web application using the **runweb2py** command, use the following: +If you run your web application using the `runweb2py` command, use the following: -``` -NEW_RELIC_CONFIG_FILE=/some/path/newrelic.ini newrelic-admin run-program runweb2py +```sh +NEW_RELIC_CONFIG_FILE=/some/path/newrelic.ini newrelic-admin run-program runweb2py ``` -`newrelic-admin` is a script that wraps your application startup, so that the agent can monitor your application's major functions. 
For more on running the wrapper script, see [Running the wrapper script](/docs/agents/python-agent/installation-configuration/python-agent-installation#integration). \ No newline at end of file +`newrelic-admin` is a script that wraps your application startup, so that the agent can monitor your application's major functions. For more on running the wrapper script, see [Running the wrapper script](/docs/agents/python-agent/installation-configuration/python-agent-installation#integration). diff --git a/src/install/python/python-servers/python-agent-ajp-wsgi.mdx b/src/install/python/python-servers/python-agent-ajp-wsgi.mdx index 99061605801..1bfa9ac9ac9 100644 --- a/src/install/python/python-servers/python-agent-ajp-wsgi.mdx +++ b/src/install/python/python-servers/python-agent-ajp-wsgi.mdx @@ -8,10 +8,10 @@ You can use the Python agent with AJP in conjunction with [flup](https://pypi.py To integrate the Python agent with your app, run the newrelic-admin script in front of your usual app startup command. The admin script was included when you downloaded the Python agent, and you can use a command like this: -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program YOUR_COMMAND_OPTIONS ``` If you have difficulty with this automatic initialization, you can also try our manual instrumentation steps. To get those instructions, go back to the framework question and select **Unsupported WSGI framework**. 
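The manual wrapper approach referenced above — wrapping the WSGI entry point in an object that observes every request — can be sketched with the standard library alone. Here `TimingWrapper` is a hypothetical stand-in used only to illustrate the pattern; the real class is `newrelic.agent.WSGIApplicationWrapper`:

```python
import time
from wsgiref.util import setup_testing_defaults

def application(environ, start_response):
    # A minimal WSGI app: the kind of entry point the agent wraps.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"Hello"]

class TimingWrapper:
    """Illustrative stand-in for newrelic.agent.WSGIApplicationWrapper:
    wraps a WSGI callable and records how long each request takes."""

    def __init__(self, wrapped):
        self.wrapped = wrapped
        self.timings = []

    def __call__(self, environ, start_response):
        started = time.monotonic()
        try:
            return self.wrapped(environ, start_response)
        finally:
            self.timings.append(time.monotonic() - started)

# Rebind the module-level name, just as the docs do with the real wrapper.
application = TimingWrapper(application)

if __name__ == "__main__":
    environ = {}
    setup_testing_defaults(environ)
    body = application(environ, lambda status, headers: None)
    print(b"".join(body).decode())  # Hello
```

Because the wrapper is itself a WSGI callable, the server (mod_wsgi, uWSGI, Gunicorn, and so on) never needs to know it is there.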
- \ No newline at end of file + diff --git a/src/install/python/python-servers/python-agent-daphne.mdx b/src/install/python/python-servers/python-agent-daphne.mdx index c46f35cc96b..9c65e80b763 100644 --- a/src/install/python/python-servers/python-agent-daphne.mdx +++ b/src/install/python/python-servers/python-agent-daphne.mdx @@ -8,8 +8,8 @@ To integrate the Python agent with your app, you can run the newrelic-admin scri If you start your app with `daphne path_to_app:my_app` and use our Python agent **version 8.0.0.179** or higher, you can use the recommended admin script integration method. The admin script was included when you downloaded the Python agent, and you can use a command like this: -``` -NEW_RELIC_CONFIG_FILE=path/to/newrelic.ini newrelic-admin run-program daphne path_to_app:my_app +```sh +NEW_RELIC_CONFIG_FILE=path/to/newrelic.ini newrelic-admin run-program daphne path_to_app:my_app ``` diff --git a/src/install/python/python-servers/python-agent-fastcgi.mdx b/src/install/python/python-servers/python-agent-fastcgi.mdx index 4b575129f97..ec3aa0943aa 100644 --- a/src/install/python/python-servers/python-agent-fastcgi.mdx +++ b/src/install/python/python-servers/python-agent-fastcgi.mdx @@ -9,7 +9,7 @@ You can use the Python agent with FastCGI in conjunction with [flup](https://pyp Below is an example of an integrated FastCGI/WSGI adapter for flup and the corresponding newrelic-admin startup command: -``` +```py #!/usr/bin/env python import sys @@ -28,7 +28,6 @@ def application(environ, start_response): ret = WSGIServer(application).run() ``` - -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program python app.py -``` \ No newline at end of file +``` diff --git a/src/install/python/python-servers/python-agent-gunicorn.mdx b/src/install/python/python-servers/python-agent-gunicorn.mdx index 7254256f028..3f767041bf0 100644 --- a/src/install/python/python-servers/python-agent-gunicorn.mdx +++ 
b/src/install/python/python-servers/python-agent-gunicorn.mdx @@ -15,17 +15,16 @@ The Python agent supports Gunicorn's: You can use the recommended admin script integration method with Gunicorn. Here's an example of wrapping your startup command using the admin script: -``` -NEW_RELIC_CONFIG_FILE=/PATH/TO/newrelic.ini newrelic-admin run-program gunicorn YOUR_COMMAND_OPTIONS +```sh +NEW_RELIC_CONFIG_FILE=/PATH/TO/newrelic.ini newrelic-admin run-program gunicorn YOUR_COMMAND_OPTIONS ``` You can also export the config file location before starting Gunicorn: -``` -NEW_RELIC_CONFIG_FILE=/PATH/TO/newrelic.ini +```sh +NEW_RELIC_CONFIG_FILE=/PATH/TO/newrelic.ini export NEW_RELIC_CONFIG_FILE - -newrelic-admin run-program gunicorn YOUR_COMMAND_OPTIONS +newrelic-admin run-program gunicorn YOUR_COMMAND_OPTIONS ``` ### Preloading applications @@ -44,4 +43,4 @@ For similar reasons, one should avoid executing code to perform tasks in Gunicor If you have difficulty with this automatic initialization, you can also try our manual instrumentation steps. To get those instructions, go back to the framework question and select **Unsupported WSGI framework**. - \ No newline at end of file + diff --git a/src/install/python/python-servers/python-agent-hypercorn.mdx b/src/install/python/python-servers/python-agent-hypercorn.mdx index 61d8bede59c..f5bb209d3bc 100644 --- a/src/install/python/python-servers/python-agent-hypercorn.mdx +++ b/src/install/python/python-servers/python-agent-hypercorn.mdx @@ -8,10 +8,10 @@ To integrate the Python agent with your app, you can run the newrelic-admin scri If you start your app with `hypercorn path_to_app:my_app` and use our Python agent **version 8.0.0.179** or higher, you can use the recommended admin script integration method. 
The admin script was included when you downloaded the Python agent, and you can use a command like this: -``` -NEW_RELIC_CONFIG_FILE=path/to/newrelic.ini newrelic-admin run-program hypercorn path_to_app:my_app +```sh +NEW_RELIC_CONFIG_FILE=path/to/newrelic.ini newrelic-admin run-program hypercorn path_to_app:my_app ``` If you have difficulty with this automatic initialization, you can also try our manual instrumentation steps. To get those instructions, go back to the framework question and select **Unsupported ASGI framework** or **Unsupported WSGI framework**. - \ No newline at end of file + diff --git a/src/install/python/python-servers/python-agent-mod-wsgi-express.mdx b/src/install/python/python-servers/python-agent-mod-wsgi-express.mdx index 098e2e606c1..09145388b90 100644 --- a/src/install/python/python-servers/python-agent-mod-wsgi-express.mdx +++ b/src/install/python/python-servers/python-agent-mod-wsgi-express.mdx @@ -8,12 +8,12 @@ If you are using `mod_wsgi-express` version 4.1.0 with a WSGI application, you c For example, using the agent with Django may require a command similar to the following: -``` -NEW_RELIC_CONFIG_FILE=newrelic.ini mod_wsgi-express start-server mysite/wsgi.py --with-newrelic +```sh +NEW_RELIC_CONFIG_FILE=newrelic.ini mod_wsgi-express start-server mysite/wsgi.py --with-newrelic ``` For more details, contact the Apache/mod_wsgi author on the [mod_wsgi mailing list](http://code.google.com/p/modwsgi/wiki/WhereToGetHelp?tm=6#Asking_Your). If you have difficulty with this automatic initialization, you can also try our manual instrumentation steps. To get those instructions, go back to the framework question and select **Unsupported ASGI framework**. 
- \ No newline at end of file + diff --git a/src/install/python/python-servers/python-agent-tornado.mdx b/src/install/python/python-servers/python-agent-tornado.mdx index ade2a37dfe7..32fea13dbd4 100644 --- a/src/install/python/python-servers/python-agent-tornado.mdx +++ b/src/install/python/python-servers/python-agent-tornado.mdx @@ -12,10 +12,10 @@ To integrate the Python agent with your app, you can run the newrelic-admin scri You can use the recommended admin script integration method, provided you start your app with `python app.py` and use the Tornado async interface. The admin script was included when you downloaded the Python agent, and you can use a command like this: -``` -NEW_RELIC_CONFIG_FILE=path/to/newrelic.ini newrelic-admin run-python app.py +```sh +NEW_RELIC_CONFIG_FILE=path/to/newrelic.ini newrelic-admin run-python app.py ``` If you have difficulty with this automatic initialization, you can also try our manual instrumentation steps. To get those instructions, go back to the framework question and select **Unsupported WSGI framework**. - \ No newline at end of file + diff --git a/src/install/python/python-servers/python-agent-uvicorn.mdx b/src/install/python/python-servers/python-agent-uvicorn.mdx index d5ef3fd2660..f77d80b26f9 100644 --- a/src/install/python/python-servers/python-agent-uvicorn.mdx +++ b/src/install/python/python-servers/python-agent-uvicorn.mdx @@ -8,10 +8,10 @@ To integrate the Python agent with your app, you can run the newrelic-admin scri If you start your app with `python app.py` and use our Python agent **version 5.20.0.149** or higher, you can use the admin script integration method. 
The admin script was included when you downloaded the Python agent, and you can use a command like this: -``` -NEW_RELIC_CONFIG_FILE=path/to/newrelic.ini newrelic-admin run-program uvicorn path_to_app +```sh +NEW_RELIC_CONFIG_FILE=path/to/newrelic.ini newrelic-admin run-program uvicorn path_to_app ``` If you have difficulty with this automatic initialization, you can also try our manual instrumentation steps. To get those instructions, go back to the framework question and select **Unsupported ASGI framework**. - \ No newline at end of file + diff --git a/src/install/python/python-servers/python-agent-uwsgi.mdx b/src/install/python/python-servers/python-agent-uwsgi.mdx index 84858a7aee3..fb89e8d5973 100644 --- a/src/install/python/python-servers/python-agent-uwsgi.mdx +++ b/src/install/python/python-servers/python-agent-uwsgi.mdx @@ -32,7 +32,7 @@ When using uWSGI you will need to supply certain specific command line options t - By default uWSGI does not enable threading support within the Python interpreter core. This means it is not possible to create background threads from Python code. As the Python agent relies on being able to create background threads, this option is required. This option will be automatically applied if uWSGI is configured for multiple threads using the --threads option. + By default uWSGI does not enable threading support within the Python interpreter core. This means it is not possible to create background threads from Python code. As the Python agent relies on being able to create background threads, this option is required. This option will be automatically applied if uWSGI is configured for multiple threads using the `--threads` option. 
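The reason `--enable-threads` is required can be illustrated with a stdlib-only sketch of a periodic harvest thread — the agent relies on being able to run something like this in the background (illustrative only; the agent's real harvest loop is internal to the package):

```python
import threading
import time

class HarvestThread:
    """Illustrative daemon thread of the kind uWSGI's --enable-threads
    permits: wakes up periodically and flushes buffered data."""

    def __init__(self, interval=0.05):
        self.interval = interval
        self.harvests = 0
        self._stop = threading.Event()
        self._thread = threading.Thread(target=self._run, daemon=True)

    def _run(self):
        # Event.wait doubles as an interruptible sleep.
        while not self._stop.wait(self.interval):
            self.harvests += 1  # stand-in for sending buffered metrics

    def start(self):
        self._thread.start()

    def stop(self):
        self._stop.set()
        self._thread.join()

if __name__ == "__main__":
    h = HarvestThread()
    h.start()
    time.sleep(0.2)
    h.stop()
    print(h.harvests > 0)  # True
```

Without thread support enabled in the embedded interpreter, a thread like this simply never runs, which is why data silently fails to report.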
@@ -52,19 +52,19 @@ When using uWSGI you will need to supply certain specific command line options t If you are starting your WSGI application under uWSGI using a command of the form: -``` +```sh uwsgi --socket /tmp/uwsgi.sock wsgi.py ``` Instead, run: -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program uwsgi --socket /tmp/uwsgi.sock --single-interpreter --enable-threads wsgi.py ``` When doing this, instead of defining the `NEW_RELIC_CONFIG_FILE` environment variable on the same line as executing the command, it can be separately exported and set in the process environment before running the command. -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini export NEW_RELIC_CONFIG_FILE @@ -89,7 +89,7 @@ If you are using a framework for which the Python agent is not automatically wra For example, if you're using an INI configuration file you would have: - ``` + ```ini [uwsgi] socket = /tmp/uwsgi.sock enable-threads = true @@ -99,7 +99,7 @@ If you are using a framework for which the Python agent is not automatically wra Then, you can run: - ``` + ```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program uwsgi --ini uwsgi.ini ``` @@ -110,7 +110,7 @@ If you are using a framework for which the Python agent is not automatically wra > When specifying the WSGI application to be served up by uWSGI, you can give a path to the Python WSGI script file or a direct reference to a module and the contained application. For example, you might use the latter if using Django with the configuration: - ``` + ```ini [uwsgi] socket = /tmp/uwsgi.sock enable-threads = true @@ -125,7 +125,7 @@ If you are using a framework for which the Python agent is not automatically wra Alternatively, you can use the ability of uWSGI to evaluate a small snippet of code in the configuration in order to construct the WSGI application entry point. 
- ``` + ```ini [uwsgi] socket = /tmp/uwsgi.sock enable-threads = true @@ -148,7 +148,7 @@ If you are using a framework for which the Python agent is not automatically wra If you're using a master process, and you are seeing no data being reported for the web application running in the worker processes, you should also use lazy loading mode: - ``` + ```ini [uwsgi] socket = /tmp/uwsgi.sock enable-threads = true diff --git a/src/install/python/python-servers/python-agent-waitress.mdx b/src/install/python/python-servers/python-agent-waitress.mdx index de8ff65b07f..142e83743b9 100644 --- a/src/install/python/python-servers/python-agent-waitress.mdx +++ b/src/install/python/python-servers/python-agent-waitress.mdx @@ -12,7 +12,7 @@ The Python agent provides support for the [Waitress](http://pypi.python.org/pypi You can use the recommended admin script integration method with a command like this: -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program YOUR_COMMAND_OPTIONS ``` @@ -20,7 +20,7 @@ NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program YOUR_COMMAND_OPTIO Instead of manually integrating the call to startup the Waitress WSGI server in your application code, you might be using PasteDeploy with a configuration file like the following: -``` +```ini [server:main] use = egg:waitress#main host = 127.0.0.1 @@ -29,10 +29,10 @@ port = 8080 When you start up your WSGI application, wrap the running of the `paster` command: -``` +```sh NEW_RELIC_CONFIG_FILE=newrelic.ini newrelic-admin run-program paster serve production.ini ``` If you have difficulty with this automatic initialization, you can also try our manual instrumentation steps. To get those instructions, go back to the framework question and select **Unsupported WSGI framework**. 
- \ No newline at end of file + diff --git a/src/install/rabbitmq/ansible/install-ansible.mdx b/src/install/rabbitmq/ansible/install-ansible.mdx index 77320daecf6..ad5940d4675 100644 --- a/src/install/rabbitmq/ansible/install-ansible.mdx +++ b/src/install/rabbitmq/ansible/install-ansible.mdx @@ -20,16 +20,16 @@ headingText: Install the RabbitMQ monitoring integration ```yml --- - name: Install New Relic - hosts: all - roles: + hosts: all + roles: - role: newrelic.newrelic_install - vars: + vars: targets: - - infrastructure - - logs + - infrastructure + - logs tags: - foo: bar - environment: + foo: bar + environment: NEW_RELIC_API_KEY: NEW_RELIC_ACCOUNT_ID: NEW_RELIC_REGION: @@ -37,4 +37,4 @@ headingText: Install the RabbitMQ monitoring integration 4. Customize the required variables. -Go to [Configure the infrastructure agent using Ansible](/docs/infrastructure/install-infrastructure-agent/config-management-tools/configure-infrastructure-agent-using-ansible/) if you need more information. \ No newline at end of file +Go to [Configure the infrastructure agent using Ansible](/docs/infrastructure/install-infrastructure-agent/config-management-tools/configure-infrastructure-agent-using-ansible/) if you need more information. diff --git a/src/install/rabbitmq/whatsNext.mdx b/src/install/rabbitmq/whatsNext.mdx index 5c598109176..5aee2de1f0a 100644 --- a/src/install/rabbitmq/whatsNext.mdx +++ b/src/install/rabbitmq/whatsNext.mdx @@ -40,13 +40,12 @@ discovery: ```yaml integrations: - - name: nri-rabbitmq - env: - # Integration configuration parameters. - METRICS: true - DISABLE_ENTITIES: true - QUEUES_MAX_LIMIT: "0" - + - name: nri-rabbitmq + env: + # Integration configuration parameters. 
+ METRICS: true + DISABLE_ENTITIES: true + QUEUES_MAX_LIMIT: "0" ``` @@ -79,7 +78,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **HOSTNAME** + `HOSTNAME` @@ -87,7 +86,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - localhost + `localhost` @@ -101,7 +100,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **PORT** + `PORT` @@ -109,7 +108,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - 15672 + `15672` @@ -123,7 +122,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **USERNAME** + `USERNAME` @@ -145,7 +144,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **PASSWORD** + `PASSWORD` @@ -167,7 +166,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **TIMEOUT** + `TIMEOUT` @@ -175,7 +174,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - 30 + `30` @@ -189,7 +188,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **MANAGEMENT_PATH_PREFIX** + `MANAGEMENT_PATH_PREFIX` @@ -211,7 +210,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **USE_SSL** + `USE_SSL` @@ -219,7 +218,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -233,7 +232,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **CA_BUNDLE_DIR** + `CA_BUNDLE_DIR` @@ -255,7 +254,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **CA_BUNDLE_FILE** + `CA_BUNDLE_FILE` @@ -277,7 +276,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **NODE_NAME_OVERRIDE** + `NODE_NAME_OVERRIDE` @@ -299,7 +298,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **CONFIG_PATH** + `CONFIG_PATH` @@ -317,7 +316,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **QUEUES** + `QUEUES` @@ -325,7 +324,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory Example: - ``` + ```yml queues: 
'["myQueue1","myQueue2"]' ``` @@ -341,7 +340,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **QUEUES_REGEXES** + `QUEUES_REGEXES` @@ -349,7 +348,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory Example: - ``` + ```yml queues_regexes: '["queue[0-9]+",".*"]' ``` @@ -369,7 +368,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **EXCHANGES** + `EXCHANGES` @@ -393,7 +392,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **EXCHANGES_REGEXES** + `EXCHANGES_REGEXES` @@ -416,7 +415,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **VHOSTS** + `VHOSTS` @@ -439,7 +438,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **VHOSTS_REGEXES** + `VHOSTS_REGEXES` @@ -463,7 +462,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **METRICS** + `METRICS` @@ -471,7 +470,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -483,7 +482,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **INVENTORY** + `INVENTORY` @@ -491,7 +490,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -503,7 +502,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **EVENTS** + `EVENTS` @@ -511,7 +510,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -523,7 +522,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **DISABLE_ENTITIES** + `DISABLE_ENTITIES` @@ -531,7 +530,7 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - false + `false` @@ -542,15 +541,15 @@ The RabbitMQ integration collects both Metrics(M) and Inventory - **QUEUES_MAX_LIMIT** + `QUEUES_MAX_LIMIT` - Defines the max amount of Queues that can be processed, if this number is reached all queues will be dropped. 
If defined as '0' no limits are applied, this used with DISABLE_ENTITIES=true to avoid, memory increase in the Agent. + Defines the maximum number of queues that can be processed; if this number is reached, all queues will be dropped. If defined as `0`, no limits are applied; use this together with `DISABLE_ENTITIES=true` to avoid memory increases in the agent. - 2000 + `2000` @@ -571,9 +570,9 @@ You can further decorate your metrics using labels. Labels allow you to add key/ Our default sample config file includes examples of labels but, as they are not mandatory, you can remove, modify, or add new ones of your choice. ```yml - labels: - env: production - role: rabbitmq +labels: + env: production + role: rabbitmq ``` ## Example configuration [#example-config] @@ -777,7 +776,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsOpening + `vhost.connectionsOpening` @@ -787,7 +786,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsRunning + `vhost.connectionsRunning` @@ -797,7 +796,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsStarting + `vhost.connectionsStarting` @@ -807,7 +806,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsTotal + `vhost.connectionsTotal` @@ -817,7 +816,7 @@ These attributes are attached to the `RabbitmqVhostSample` event type: - vhost.connectionsTuning + `vhost.connectionsTuning` @@ -847,7 +846,7 @@ These attributes are attached to the `RabbitmqNodeSample` event type: - node.averageErlangProcessesWaiting + `node.averageErlangProcessesWaiting` @@ -971,7 +970,7 @@ These attributes are attached to the `RabbitmqNodeSample` event type: - Node running (0 or 1). 0 shows that the node is not running and 1 shows that the node is running. In RabbitMQ this is seen as `running`. + Node running (`0` or `1`). `0` shows that the node is not running and `1` shows that the node is running.
In RabbitMQ this is seen as `running`. @@ -1153,7 +1152,7 @@ These attributes are attached to the `RabbitmqQueueSample` event type: - Rate of messages per queue delivered to clients but not yet acknowledged per second. In RabbitMQ this is seen as `message_stats`.deliver_no_ack_details.rate. + Rate of messages per queue delivered to clients but not yet acknowledged per second. In RabbitMQ this is seen as `message_stats.deliver_no_ack_details.rate`. @@ -1193,7 +1192,7 @@ These attributes are attached to the `RabbitmqQueueSample` event type: - Rate of messages delivered in acknowledgment mode to consumers per queue per second. In RabbitMQ this is seen as `message_stats`.deliver_details.rate. + Rate of messages delivered in acknowledgment mode to consumers per queue per second. In RabbitMQ this is seen as `message_stats.deliver_details.rate`. @@ -1341,9 +1340,9 @@ Troubleshooting tips: id="rabbitmqctl-error" title={<>Error getting local node name: exec: "rabbitmqctl": executable file not found in $PATH} > - If you receive this error, it means that the RabbitMQ command line tool, [rabbitmqctl](https://www.rabbitmq.com/rabbitmqctl.8.html), is not in the PATH of the root user. To correct this issue, execute the following command: + If you receive this error, it means that the RabbitMQ command line tool, [rabbitmqctl](https://www.rabbitmq.com/rabbitmqctl.8.html), is not in the `PATH` of the root user. To correct this issue, execute the following command: - ``` + ```sh find -name "rabbitmqctl" export PATH="$PATH: ``` @@ -1352,4 +1351,4 @@ Troubleshooting tips: ## Check the source code [#source-code] -This integration is open source software. That means you can [browse its source code](https://github.com/newrelic/nri-rabbitmq) and send improvements, or create your own fork and build it. \ No newline at end of file +This integration is open source software. 
That means you can [browse its source code](https://github.com/newrelic/nri-rabbitmq) and send improvements, or create your own fork and build it. diff --git a/src/install/rabbitmq/windows/install-msi.mdx b/src/install/rabbitmq/windows/install-msi.mdx index 7d63824d6c0..5743a078c30 100644 --- a/src/install/rabbitmq/windows/install-msi.mdx +++ b/src/install/rabbitmq/windows/install-msi.mdx @@ -5,6 +5,6 @@ headingText: Download using MSI 1. Download the latest .MSI installer image for the desired integration [from our repository](https://download.newrelic.com/infrastructure_agent/windows/integrations/). 2. In an admin account, run the install script using an absolute path. - ``` + ```sh msiexec.exe /qn /i PATH\TO\nri-rabbitmq-amd64.msi - ``` \ No newline at end of file + ``` diff --git a/src/install/rabbitmq/windows/install-windows.mdx b/src/install/rabbitmq/windows/install-windows.mdx index b5100c80c6d..0ef2879824b 100644 --- a/src/install/rabbitmq/windows/install-windows.mdx +++ b/src/install/rabbitmq/windows/install-windows.mdx @@ -25,7 +25,7 @@ headingText: Install the RabbitMQ monitoring integration **Powershell Example:** - ```shell + ```powershell Rename-Item -Path "C:\Program Files\New Relic\newrelic-infra\logging.d\rabbitmq-log-win.yml.example" -NewName "rabbitmq-log-win.yml" ``` diff --git a/src/install/snowflake/whatsNext.mdx b/src/install/snowflake/whatsNext.mdx index 571584e7959..82eac053fec 100644 --- a/src/install/snowflake/whatsNext.mdx +++ b/src/install/snowflake/whatsNext.mdx @@ -37,14 +37,14 @@ You can send your own custom metrics to New Relic and view that data in a dashbo 4. Add this snippet to the `flex-snowflake-linux.yml` file: ```yml - - name: longestQueries + - name: longestQueries entity: snowflake - # New Relic will capture all your Snowflake metrics when you use `event_type: SnowflakeVirtualWarehouse`. + # New Relic will capture all your Snowflake metrics when you use `event_type: SnowflakeVirtualWarehouse`. 
event_type: SnowflakeVirtualWarehouse custom_attributes: - metric_type: snowflake.query_performance - commands: - - run: YOUR_PATH_TO_DOWNLOADED_BINARY_FILE YOUR_PATH_TO_CLONED_REPOSITORY_DIRECTORY/config.yaml YOUR_PATH_TO_CLONED_REPOSITORY_DIRECTORY/queries/longest_queries.sql + metric_type: snowflake.query_performance + commands: + - run: YOUR_PATH_TO_DOWNLOADED_BINARY_FILE YOUR_PATH_TO_CLONED_REPOSITORY_DIRECTORY/config.yaml YOUR_PATH_TO_CLONED_REPOSITORY_DIRECTORY/queries/longest_queries.sql ``` diff --git a/src/install/vm/deployment/new.mdx b/src/install/vm/deployment/new.mdx index ae9ab287a51..d91cb677909 100644 --- a/src/install/vm/deployment/new.mdx +++ b/src/install/vm/deployment/new.mdx @@ -53,11 +53,11 @@ https://security-ingest-processor.service.newrelic.com/v1/security/import/depend ### BODY: -``` +```json { "serviceApiKey": "", "serviceParams": { - "orgName":"", + "orgName": "", "repositories": ["' \ ---header 'Content-Type: application/json' \ ---data-raw '{ + --header 'Api-Key: ' \ + --header 'Content-Type: application/json' \ + --data-raw '{ "serviceApiKey": "", "serviceParams": { "orgName":"" @@ -79,14 +79,14 @@ curl --location --request POST 'https://security-ingest-processor.service.newrel ### Confirming bulk import activity -When you POST to the v1/security/import/dependabot endpoint, the HTTP response will include a request UUID. For example: +When you POST to the `v1/security/import/dependabot` endpoint, the HTTP response will include a request UUID. For example: -``` +```json {"success":false,"errorMessage":null,"uuid":"4740e3c8-dbc4-46e6-a4b2-a7fb6f918d20"} ``` The request GUID is included in all `Log` data written to NRDB from the import job. These events are written in real time as the import job runs. 
To view the status and output of an import as it runs, use this NRQL query (replacing `YOUR_UUID` with the UUID returned from your HTTP POST): -``` +```sql FROM Log SELECT * WHERE source = 'GitHub Dependabot' AND requestId = 'YOUR_UUID' ``` diff --git a/src/install/vm/deployment/previous.mdx b/src/install/vm/deployment/previous.mdx index f05e131b849..61434428f92 100644 --- a/src/install/vm/deployment/previous.mdx +++ b/src/install/vm/deployment/previous.mdx @@ -35,11 +35,11 @@ https://security-ingest-processor.service.newrelic.com/v1/security/import/depend ### BODY: -``` +```json { "serviceApiKey": "", "serviceParams": { - "orgName":"", + "orgName": "", "repositories": ["' \ ---header 'Content-Type: application/json' \ ---data-raw '{ + --header 'Api-Key: ' \ + --header 'Content-Type: application/json' \ + --data-raw '{ "serviceApiKey": "", "serviceParams": { "orgName":"" @@ -61,14 +61,14 @@ curl --location --request POST 'https://security-ingest-processor.service.newrel ### Confirming bulk import activity -When you POST to the v1/security/import/dependabot endpoint, the HTTP response will include a request UUID. For example: +When you POST to the `v1/security/import/dependabot` endpoint, the HTTP response will include a request UUID. For example: -``` +```json {"success":false,"errorMessage":null,"uuid":"4740e3c8-dbc4-46e6-a4b2-a7fb6f918d20"} ``` The request GUID is included in all `Log` data written to NRDB from the import job. These events are written in real time as the import job runs. 
To view the status and output of an import as it runs, use this NRQL query (replacing `YOUR_UUID` with the UUID returned from your HTTP POST): -``` +```sql FROM Log SELECT * WHERE source = 'GitHub Dependabot' AND requestId = 'YOUR_UUID' ``` diff --git a/src/install/vsphere/default-install-linux.mdx b/src/install/vsphere/default-install-linux.mdx index 4669b5ba949..8cc95dfb1fd 100644 --- a/src/install/vsphere/default-install-linux.mdx +++ b/src/install/vsphere/default-install-linux.mdx @@ -17,9 +17,9 @@ headingText: Configure the vSphere integration 3. Edit the `vsphere-config.yml` file. The following is a basic config file: - ``` + ```yml integrations: - - name: nri-vsphere + - name: nri-vsphere env: # vSphere API connection data (vCenter or ESXi servers) URL: https:///sdk @@ -74,7 +74,6 @@ headingText: Configure the vSphere integration # If the integration takes more than 120s to collect data from vCenter the timeout parameter needs to be increased # to prevent the agent from killing the integration before it finishes. # timeout: 120s - ``` You can find all the config options at the bottom of this doc along with more complex config examples. diff --git a/src/install/vsphere/linux/install-linux.mdx b/src/install/vsphere/linux/install-linux.mdx index 4669b5ba949..24fc25190fc 100644 --- a/src/install/vsphere/linux/install-linux.mdx +++ b/src/install/vsphere/linux/install-linux.mdx @@ -17,18 +17,18 @@ headingText: Configure the vSphere integration 3. Edit the `vsphere-config.yml` file. The following is a basic config file: - ``` + ```yml integrations: - - name: nri-vsphere + - name: nri-vsphere env: # vSphere API connection data (vCenter or ESXi servers) URL: https:///sdk USER: PASS: - + # Collect events data ENABLE_VSPHERE_EVENTS: true - + # Collect vSphere tags ENABLE_VSPHERE_TAGS: true @@ -37,44 +37,43 @@ headingText: Configure the vSphere integration # INCLUDE_TAGS: > # # - + # Collect snapshots' data # ENABLE_VSPHERE_SNAPSHOTS: true - - # Collect performance metrics.
Enabling this feature could overload - # vCenter depending on size of your environment. + + # Collect performance metrics. Enabling this feature could overload + # vCenter depending on size of your environment. # ENABLE_VSPHERE_PERF_METRICS: true - + # Performance metric collection level [1-4]. Be mindful when setting a # higher collection level, as this process triggers significant increase - # of resource usage on vCenter. Levels 3 and 4 should only be used for a - # short period of time. For a more granular selection check the + # of resource usage on vCenter. Levels 3 and 4 should only be used for a + # short period of time. For a more granular selection check the + # vsphere-performance.metrics file. # PERF_LEVEL: 1 - + # Path to the performance metrics config file. This file contains the # performance counters that are going to be collected if available. # PERF_METRIC_FILE: /etc/newrelic-infra/integrations.d/vsphere-performance.metrics - + # Enable if you require SSL validation - # VALIDATE_SSL: true - + # VALIDATE_SSL: true + # Data center location label can be added to all entities in vSphere. # DATACENTER_LOCATION: - + # Proxy configuration can be set up. For more information, see the docs: # https://docs.newrelic.com/docs/integrations/integrations-sdk/file-specifications/integration-configuration-file-specifications-agent-v180 # Uncomment the lines below to add a proxy. # HTTP_PROXY: socks5://YOUR_PROXY_URL:PROXY_PORT # HTTPS_PROXY: socks5://YOUR_PROXY_URL:PROXY_PORT - + # Execution interval. Set a value higher than 20s, as real-time vSphere samples are run every 20s. interval: 60s - + # If the integration takes more than 120s to collect data from vCenter the timeout parameter needs to be increased # to prevent the agent from killing the integration before it finishes. # timeout: 120s - ``` You can find all the config options at the bottom of this doc along with more complex config examples.
diff --git a/src/install/vsphere/users.mdx b/src/install/vsphere/users.mdx index 274bd8533f0..75497559428 100644 --- a/src/install/vsphere/users.mdx +++ b/src/install/vsphere/users.mdx @@ -16,13 +16,13 @@ You now need to create a vSphere user with assigned privileges. Follow the inst For a standalone database, if you use vSphere DB 12c or higher, use `ALTER SESSION` to access the database and manage users and user properties. Do not run this query if your vSphere DB version is lower than 12c. - ``` + ```sql ALTER SESSION set "_vSphere_SCRIPT"=true; ``` Use `CREATE USER` to add a new user to the database. Replace `USER_PASSWORD` with the [new user's password](https://docs.vSphere.com/en/database/vSphere/vSphere-database/12.2/dbseg/keeping-your-vSphere-database-secure.html#GUID-451679EB-8676-47E6-82A6-DF025FD65156). - ``` + ```sql CREATE USER USERNAME IDENTIFIED BY "USER_PASSWORD"; ``` @@ -33,20 +33,20 @@ You now need to create a vSphere user with assigned privileges. Follow the inst For multitenant databases, log in to the root database as an administrator. Use `CREATE USER` to add a new user to the database. The specified username will be a 'common user' and needs to be prefixed with 'c##' as recommended by vSphere. Replace `USER_PASSWORD` with the [new user's password](https://docs.vSphere.com/en/database/vSphere/vSphere-database/12.2/dbseg/keeping-your-vSphere-database-secure.html#GUID-451679EB-8676-47E6-82A6-DF025FD65156). - ``` + ```sql CREATE USER c##USERNAME IDENTIFIED BY "USER_PASSWORD"; ``` Grant permission to the new user to access all container objects (or a specific container by specifying the PDB container name and root container name in 'CONTAINER_DATA'). - ``` + ```sql ALTER USER c##USERNAME SET CONTAINER_DATA=ALL CONTAINER=CURRENT; ``` 2. Grant `CONNECT` privileges to the user: - ``` + ```sql GRANT CONNECT TO USERNAME; ``` @@ -78,7 +78,7 @@ You now need to create a vSphere user with assigned privileges.
Follow the inst Execute the following SQL statements together in one script, or individually: - ``` + ```sql GRANT SELECT ON cdb_data_files TO USERNAME; GRANT SELECT ON cdb_pdbs TO USERNAME; GRANT SELECT ON cdb_users TO USERNAME; @@ -108,6 +108,6 @@ You now need to create a vSphere user with assigned privileges. Follow the inst 4. To collect PDB metrics, grant `gv$con_sysmetric` privileges by running: - ``` + ```sql GRANT SELECT ON gv$con_sysmetric TO USERNAME; ``` diff --git a/src/install/vsphere/whatsNext.mdx b/src/install/vsphere/whatsNext.mdx index 31259cfc1b9..1c7915a3c93 100644 --- a/src/install/vsphere/whatsNext.mdx +++ b/src/install/vsphere/whatsNext.mdx @@ -26,7 +26,7 @@ In addition, with [secrets management](/docs/integrations/host-integrations/inst Events are available on the **Events** page and can be queried via [NRQL](/docs/query-data/nrql-new-relic-query-language/getting-started/introduction-nrql) as `InfrastructureEvent` under `vSphereEvent`. Here is an example of vSphere events data: - ``` + ```json "summary": "User dcui@127.0.0.1 logged out (login time: Tuesday, 14 July, 2020 08:32:09 AM, number of API invocations: 0, user agent: VMware-client/6.5.0)", "vSphereEvent.computeResource": "cluster1", "vSphereEvent.datacenter": "Prod Datacenter", @@ -57,7 +57,7 @@ In addition, with [secrets management](/docs/integrations/host-integrations/inst Tags are available as attributes in the corresponding entity sample as `label.tagCategory:tagName`. - If two tags of the same category are assigned to a resource, they are added to a unique attribute separated by a pipe character. For example: `label.tagCategory:tagName|tagName`2. + If two tags of the same category are assigned to a resource, they are added to a unique attribute separated by a pipe character. For example: `label.tagCategory:tagName|tagName2`.
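When consuming these samples programmatically, the pipe-joined attribute value can be split back into the individual tag names. A minimal illustrative sketch in Python (the category and tag names here are made up, not taken from any real sample):

```python
# A tag attribute value as it might appear on an entity sample when two tags
# of the same category are assigned (hypothetical tag names).
attribute_value = "us-east|us-west"

# Recover the individual tag names by splitting on the pipe separator.
tags = attribute_value.split("|")
print(tags)
```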
Tags can be used to run [NRQL](/docs/query-data/nrql-new-relic-query-language/getting-started/introduction-nrql) queries, filter entities in our [entity explorer](/docs/new-relic-one/use-new-relic-one/ui-data/new-relic-one-entity-explorer), and to create [dashboards](/docs/dashboards/new-relic-one-dashboards/get-started/introduction-new-relic-one-dashboards) and [alerts](/docs/alerts/new-relic-alerts/defining-conditions/create-alert-conditions-nrql-queries). @@ -77,7 +77,7 @@ In addition, with [secrets management](/docs/integrations/host-integrations/inst For example, to only retrieve resources with a tag category `region` and include regions `us` and `eu` use a filter expression like: `region=us region=eu` - ``` + ```yml INCLUDE_TAGS: > region=us region=eu @@ -95,7 +95,7 @@ In addition, with [secrets management](/docs/integrations/host-integrations/inst id="perf-metrics" title="Enable and configure performance metrics (preview)" > - Performance metrics provide a better understanding of the current status of VMware resources and can be collected **in addition** to the metrics collected by default;and included in the samples;described at the bottom of the page. + Performance metrics provide a better understanding of the current status of VMware resources and can be collected **in addition** to the metrics collected by default and included in the samples described at the bottom of the page. All metrics collected are included in the corresponding sample with the `perf.` prefix attached to the name. For example, `net.packetsRx.summation` is collected and sent as `perf.net.packetsRx.summation`. @@ -131,13 +131,13 @@ In addition, with [secrets management](/docs/integrations/host-integrations/inst URL: https:///sdk USER: PASS: - + # Collect events data ENABLE_VSPHERE_EVENTS: true - + # Collect vSphere tags ENABLE_VSPHERE_TAGS: true - + # Execution interval. Set a value higher than 20s, as real-time vSphere samples are run every 20s. 
interval: 120s - name: nri-vsphere @@ -146,13 +146,13 @@ In addition, with [secrets management](/docs/integrations/host-integrations/inst URL: https:///sdk USER: PASS: - + # Collect events data ENABLE_VSPHERE_EVENTS: false - + # Collect vSphere tags ENABLE_VSPHERE_TAGS: false - + # Execution interval. Set a value higher than 20s, as real-time vSphere samples are run every 20s. interval: 300s ``` @@ -1546,7 +1546,7 @@ The vSphere integration provides metric data attached to the following New Relic - Threshold for generated ClusterRecommendations. DRS generates only those recommendations that are above the specified vmotionRate. Ratings vary from 1 to 5. This setting applies to manual, partiallyAutomated, and fullyAutomated DRS clusters. + Threshold for generated ClusterRecommendations. DRS generates only those recommendations that are above the specified `vmotionRate`. Ratings vary from `1` to `5`. This setting applies to manual, `partiallyAutomated`, and `fullyAutomated` DRS clusters. @@ -1596,7 +1596,7 @@ The vSphere integration provides metric data attached to the following New Relic - Flag that dictates whether DRS Behavior overrides for individual virtual machines (ClusterDrsVmConfigInfo) are enabled. + Flag that dictates whether DRS Behavior overrides for individual virtual machines (`ClusterDrsVmConfigInfo`) are enabled. @@ -1606,7 +1606,7 @@ The vSphere integration provides metric data attached to the following New Relic - Specifies the cluster-wide default DRS behavior for virtual machines. You can override the default behavior for a virtual machine by using the ClusterDrsVmConfigInfo object. + Specifies the cluster-wide default DRS behavior for virtual machines. You can override the default behavior for a virtual machine by using the `ClusterDrsVmConfigInfo` object. 
@@ -1686,7 +1686,7 @@ The vSphere integration provides metric data attached to the following New Relic - The policy on what datastores will be used by vCenter Server to choose heartbeat datastores: allFeasibleDs, allFeasibleDsWithUserPreference, userSelectedDs + The policy on what datastores will be used by vCenter Server to choose heartbeat datastores: `allFeasibleDs`, `allFeasibleDsWithUserPreference`, `userSelectedDs` @@ -1714,7 +1714,7 @@ The vSphere integration provides metric data attached to the following New Relic - Tree info for the snapshot. Es: Cluster:Vm:Snapshot1:Snapshot2 + Tree info for the snapshot. For example: `Cluster:Vm:Snapshot1:Snapshot2` @@ -1869,19 +1869,18 @@ The vSphere integration provides metric data attached to the following New Relic > One possible reason for data gaps is that the integration takes too long to collect and process data from vCenter. If the integration exceeds the [timeout](https://docs.newrelic.com/docs/infrastructure/host-integrations/infrastructure-integrations-sdk/specifications/host-integrations-standard-configuration-format/#timeout), which by default is `120s`, the infrastructure agent will kill the integration, and a log message like the following will be printed: ``` shell - level=warn msg="HeartBeat timeout exceeded after 120000000000" integration_name=nri-vsphere + [output] level=warn msg="HeartBeat timeout exceeded after 120000000000" integration_name=nri-vsphere ``` To fix this, you could extend the [timeout](https://docs.newrelic.com/docs/infrastructure/host-integrations/infrastructure-integrations-sdk/specifications/host-integrations-standard-configuration-format/#timeout) parameter in the config file. ``` yaml integrations: - - name: nri-vsphere - env: - # Integration configuration parameters. - - interval: 120s - - timeout: 300s + - name: nri-vsphere + env: + # Integration configuration parameters.
+ + interval: 120s + timeout: 300s ``` diff --git a/src/install/vsphere/windows/install-msi.mdx b/src/install/vsphere/windows/install-msi.mdx index 12a6de02700..73dc70f334b 100644 --- a/src/install/vsphere/windows/install-msi.mdx +++ b/src/install/vsphere/windows/install-msi.mdx @@ -5,6 +5,6 @@ headingText: Download using MSI 1. Download the latest .MSI installer image for the desired integration [from our repository](https://download.newrelic.com/infrastructure_agent/windows/integrations/). 2. In an admin account, run the install script using an absolute path. - ``` + ```sh msiexec.exe /qn /i PATH\TO\vSpheredb.msi - ``` \ No newline at end of file + ``` diff --git a/src/install/vsphere/windows/install-windows.mdx b/src/install/vsphere/windows/install-windows.mdx index c434ab44ee2..27a099976eb 100644 --- a/src/install/vsphere/windows/install-windows.mdx +++ b/src/install/vsphere/windows/install-windows.mdx @@ -5,15 +5,15 @@ headingText: Configure the vSphere integration 1. In the Integrations directory, `C:\Program Files\New Relic\newrelic-infra\integrations.d\`, create a copy of the sample configuration file by running: - ``` + ```sh copy vsphered-config.yml.sample vSpheredb-config.yml ``` 2. Edit the `vspheredb-config.yml` file. The following is a basic Windows config file: - ``` + ```yml integrations: - - name: nri-vsphere + - name: nri-vsphere env: # vSphere API connection data (vCenter or ESXi servers) URL: https:///sdk @@ -25,7 +25,7 @@ headingText: Configure the vSphere integration # Collect vSphere tags ENABLE_VSPHERE_TAGS: true - + # If defined, only resources tagged with any of the tags will be included in the results. # You must also include 'ENABLE_VSPHERE_TAGS' in order for this option to work. # INCLUDE_TAGS: > @@ -35,14 +35,14 @@ headingText: Configure the vSphere integration # Collect snapshots' data # ENABLE_VSPHERE_SNAPSHOTS: true - # Collect performance metrics.
Enabling this feature could overload - # vCenter depending on size of your environment. + # Collect performance metrics. Enabling this feature could overload + # vCenter depending on size of your environment. # ENABLE_VSPHERE_PERF_METRICS: true # Performance metric collection level [1-4]. Be mindful when setting a # higher collection level, as this process triggers significant increase - # of resource usage on vCenter. Levels 3 and 4 should only be used for a - # short period of time. For a more granular selection check the + # of resource usage on vCenter. Levels 3 and 4 should only be used for a + # short period of time. For a more granular selection check the # vsphere-performance.metrics file. # PERF_LEVEL: 1 @@ -51,24 +51,23 @@ headingText: Configure the vSphere integration # PERF_METRIC_FILE: C:\Program Files\New Relic\newrelic-infra\integrations.d\vsphere-performance.metrics # Enable if you require SSL validation - # VALIDATE_SSL: true + # VALIDATE_SSL: true # Data center location label can be added to all entities in vSphere. # DATACENTER_LOCATION: - + # Proxy configuration can be set up. For more information, see the docs: # https://docs.newrelic.com/docs/integrations/integrations-sdk/file-specifications/integration-configuration-file-specifications-agent-v180 # Uncomment the lines below to add a proxy. # HTTP_PROXY: socks5://YOUR_PROXY_URL:PROXY_PORT # HTTPS_PROXY: socks5://YOUR_PROXY_URL:PROXY_PORT - + # Execution interval. Set a value higher than 20s, as real-time vSphere samples are run every 20s. interval: 60s # If the integration takes more than 120s to collect data from vCenter the timeout parameter needs to be increased # to prevent the agent from killing the integration before it finishes. # timeout: 120s - ``` You can find all the config options at the bottom of this doc along with more complex config examples.
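The interval and timeout comments in the config files above encode two timing rules: the execution interval must exceed the 20s real-time sample period, and the timeout must be large enough to cover a full collection. A quick sanity check of a config fragment could be sketched in Python like this (the regex parsing, hostnames, and values are illustrative; a YAML parser would be more robust):

```python
import re

# An illustrative fragment of a vsphere config file (values made up).
config = """
integrations:
  - name: nri-vsphere
    interval: 60s
    timeout: 120s
"""

def seconds(value: str) -> int:
    # Convert a duration such as "60s" into an integer number of seconds.
    return int(value.rstrip("s"))

interval = seconds(re.search(r"interval:\s*(\S+)", config).group(1))
timeout = seconds(re.search(r"timeout:\s*(\S+)", config).group(1))

# The two timing rules from the config comments: the interval must exceed the
# 20s real-time sample period, and the timeout must cover a full collection.
assert interval > 20
assert timeout >= interval
print(interval, timeout)
```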