Discrepancies in UI for resources #508

Open
psjd23 opened this issue Feb 7, 2024 · 4 comments
Labels
bug Something isn't working


psjd23 commented Feb 7, 2024

If your issue relates to the Discovery Process, please first follow the steps described in the implementation guide Debugging the Discovery Component


Describe the bug
After installing the solution in the management account (due to pre-existing technical debt and the absence of a delegated admin account), I've encountered a discrepancy in the visibility of EC2 instances. Despite meeting the prerequisites for AWS Config and setting it to AWS Organizations mode, the Resources table does not reflect all instances accurately.

  • The dashboard indicates 51 EC2 instances in the Development account, but the Resources table shows only 3.
  • Across all accounts, only 15 out of 266 instances are visible.
  • New EC2 instances are displayed correctly, yet approximately 95% of pre-existing ones are missing.
  • Adding an AWS Config authorization from the Development account to the management account did not resolve the issue; new instances were displayed correctly both before and after this change (see the query sketch below for how the per-account counts can be cross-checked).
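
For reference, the per-account numbers can be cross-checked directly against the Config aggregator. This is only a rough boto3 sketch under my setup (aggregator name as in the parameters below); it is not part of the solution itself:

```python
# Rough sketch: count EC2 instances per account in the Config aggregator,
# to compare against what the Workload Discovery Resources table shows.
import json
import boto3

config = boto3.client("config", region_name="us-east-1")

AGGREGATOR = "aws-controltower-ConfigAggregatorForOrganizations"
QUERY = (
    "SELECT accountId, COUNT(*) "
    "WHERE resourceType = 'AWS::EC2::Instance' "
    "GROUP BY accountId"
)

response = config.select_aggregate_resource_config(
    Expression=QUERY,
    ConfigurationAggregatorName=AGGREGATOR,
)

# Each result row is returned as a JSON string.
for row in response["Results"]:
    print(json.loads(row))
```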

To Reproduce
The exact steps to reproduce this behavior are currently unknown.

Expected behavior
All EC2 instances should be accurately displayed in the Resources table.

Screenshots
(four screenshots attached)

Browser (please complete the following information):

  • Chrome
  • Version 121.0.6167.139 (Official Build) (arm64)

Additional context

This issue may not be limited to EC2 instances; however, they are the primary focus of my troubleshooting efforts. I'm using the aws-controltower-ConfigAggregatorForOrganizations aggregator, and AWS Config is enabled. I'm wondering whether omitting the ConfigAggregatorName parameter and letting the solution provision the components it needs would resolve this issue.
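
If it helps, the aggregator configuration and the per-account source status can also be checked programmatically. A small boto3 sketch of what I mean (only a few fields printed):

```python
# Sketch: confirm the Control Tower aggregator is organization-wide and that
# the source accounts/regions have replicated successfully.
import boto3

config = boto3.client("config", region_name="us-east-1")
AGGREGATOR = "aws-controltower-ConfigAggregatorForOrganizations"

aggs = config.describe_configuration_aggregators(
    ConfigurationAggregatorNames=[AGGREGATOR]
)
for agg in aggs["ConfigurationAggregators"]:
    print(agg.get("OrganizationAggregationSource"))

# Replication status per source account/region (paginated via NextToken if
# there are many sources; pagination omitted here for brevity).
status = config.describe_configuration_aggregator_sources_status(
    ConfigurationAggregatorName=AGGREGATOR
)
for source in status["AggregatedSourceStatusList"]:
    print(source["SourceId"], source["AwsRegion"],
          source.get("LastUpdateStatus"), source.get("LastUpdateTime"))
```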

Following the flowchart, there were no spikes above 75%. The OpenSearch average is 30% and Neptune is 15%; Neptune had an initial spike to 70% during its first few minutes of monitored data. No OOM errors were seen in the ECS tasks. The target regions are us-east-1 and us-west-2, which together account for 99% of our resources.
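
For completeness, the CPU figures above can be pulled from CloudWatch rather than eyeballed in the console. A rough sketch (the Neptune instance identifier is a placeholder):

```python
# Sketch: pull Neptune CPUUtilization for the last 24 hours to check for
# sustained spikes above the 75% threshold from the debugging flowchart.
from datetime import datetime, timedelta, timezone
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
now = datetime.now(timezone.utc)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/Neptune",
    MetricName="CPUUtilization",
    # Placeholder identifier; the OpenSearch domain can be checked the same
    # way via the AWS/ES namespace.
    Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "my-neptune-instance"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=900,  # 15-minute buckets
    Statistics=["Average", "Maximum"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 1), round(point["Maximum"], 1))
```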

Parameters for CFN template:

| Parameter | Value |
| --- | --- |
| AccountType | MANAGEMENT |
| AdminUserEmailAddress | redacted |
| AlreadyHaveConfigSetup | Yes |
| ApiAllowListedRanges | 0.0.0.0/1,128.0.0.0/1 |
| AthenaWorkgroup | primary |
| ConfigAggregatorName | aws-controltower-ConfigAggregatorForOrganizations |
| CpuUnits | 1 vCPU |
| CreateNeptuneReplica | No |
| CreateOpensearchServiceRole | Yes |
| CrossAccountDiscovery | AWS_ORGANIZATIONS |
| DiscoveryTaskFrequency | 15mins |
| MaxNCUs | 3 |
| Memory | 2048 |
| MinNCUs | 1 |
| NeptuneInstanceClass | db.t4g.medium |
| OpensearchInstanceType | m6g.large.search |
| OpensearchMultiAz | No |
| OrganizationUnitId | r-XXXX |
| PrivateSubnet0 | subnet-redacted |
| PrivateSubnet1 | subnet-redacted |
| VpcCidrBlock | 10.111.0.0/16 |
| VpcId | vpc-redacted |

ECS logs from the latest run:
log-events-viewer-result.1.csv

Please let me know if you need any other data to help troubleshoot and I will get it ASAP. Thanks!


svozza commented Feb 8, 2024

Thank you for such a detailed error report, it was very useful for ruling out issues. You should be fine to use the Control Tower aggregator; I have done so myself with no issues.

As a sanity check, could you go to the advanced query section in AWS Config and run the query `SELECT * WHERE resourceType = 'AWS::EC2::Instance'` against the Control Tower aggregator and verify that the missing EC2 instances are in the aggregator? If they are, could you update one of the missing instances, e.g. add a tag, to trigger an update to its Config configuration item, and then verify whether the EC2 instance now appears in the WD UI the next time the discovery process runs? Bear in mind that the discovery process only runs every fifteen minutes, so it might take a while to update.
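
If scripting it is easier than the console, something along these lines is what I have in mind (an untested sketch; the instance ID is a placeholder for one of the missing instances):

```python
# Sketch of the sanity check: (1) confirm the instance is in the aggregator,
# (2) add a tag to force a new Config configuration item for it.
import json
import boto3

AGGREGATOR = "aws-controltower-ConfigAggregatorForOrganizations"
INSTANCE_ID = "i-0123456789abcdef0"  # placeholder: one of the missing instances

config = boto3.client("config", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# 1. Is the instance present in the aggregator?
result = config.select_aggregate_resource_config(
    Expression=(
        "SELECT resourceId, accountId, awsRegion "
        "WHERE resourceType = 'AWS::EC2::Instance' "
        f"AND resourceId = '{INSTANCE_ID}'"
    ),
    ConfigurationAggregatorName=AGGREGATOR,
)
print([json.loads(r) for r in result["Results"]])

# 2. Add a tag so Config records a new configuration item for the instance.
#    (Run this against the account/region that owns the instance.)
ec2.create_tags(
    Resources=[INSTANCE_ID],
    Tags=[{"Key": "wd-debug", "Value": "trigger-config-update"}],
)
```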


psjd23 commented Feb 8, 2024

Thanks for the quick response.

I ran the query and saw many results, so I ran `SELECT COUNT(*) WHERE resourceType = 'AWS::EC2::Instance'` and the result was 266.

I made a change to an EC2 instance an hour ago (added a tag) and it did not update in the WD UI. I didn't see any errors in the ECS logs; the tasks exited normally.
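
If it helps narrow down whether this is on the Config side or the discovery side, I can also pull the aggregated configuration item for the instance I tagged with something like this (account ID and instance ID are placeholders) to confirm the new tag and a recent capture time are present in the aggregator:

```python
# Sketch: fetch the aggregated configuration item for the tagged instance and
# check that the new tag and a recent capture time show up.
import boto3

config = boto3.client("config", region_name="us-east-1")

item = config.get_aggregate_resource_config(
    ConfigurationAggregatorName="aws-controltower-ConfigAggregatorForOrganizations",
    ResourceIdentifier={
        "SourceAccountId": "111122223333",    # placeholder: Development account
        "SourceRegion": "us-east-1",
        "ResourceId": "i-0123456789abcdef0",  # placeholder: the tagged instance
        "ResourceType": "AWS::EC2::Instance",
    },
)["ConfigurationItem"]

print(item.get("configurationItemCaptureTime"))
print(item.get("tags"))
```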


svozza commented Feb 9, 2024

Hmm, this is very odd. If you're happy to do so, I would like to add some logging to the discovery process so I can try to get more information. My email address is my GitHub user handle at amazon dot com; we can coordinate there, as I will need to give you a special build with the extra logging.


psjd23 commented Feb 9, 2024

Thanks, email has been sent with title "GH Issue 508".
