
Link fuzzer reports with actual vulnerabilities #111

Open
ocervell opened this issue Mar 5, 2024 · 2 comments

Comments

ocervell commented Mar 5, 2024

cats is good when cherry-picking a fuzzer and running it on one endpoint.

However, today I'm scanning an API for a customer and running cats against all endpoints using:

cats -c open_api.yml -s https://<API_URL> --proxyHost 127.0.0.1 --proxyPort 8080 -H "Authorization=Basic <TOKEN>" --checkHeaders --refData refs.yml --iu

I'm still getting more than 3k errors, which makes it difficult to identify what to look at first. Some of them are timeouts caused by the app not handling that many requests; others don't mean much (for instance, the ExtraHeaders fuzzer will report errors even when the app doesn't process the extra headers at all, so they don't tell us anything, and there are hundreds of cases like this).

The way I work around this at the moment is to run one fuzzer at a time, but this somewhat defeats the purpose of running cats (ideally we want to do a full run, then pick the vulns we're interested in, and then re-run with a different set of inputs).

Proposals for improvement:

  • It would be helpful to match the different types of fuzzers with known vulnerabilities, attack types, or simply an explanation of how each could be used in an exploit, so that we could sort them in the UI and prioritize some of them.

  • It would also be helpful if cats could help us ignore some errors. For instance, if fuzzing the Accept header results in an unexpected error code, but that code is the same no matter what the Accept header is, it could be considered that the app ignores the header altogether, i.e. 'normal behavior'.

  • Have a way to tag each request with the fuzzer that made it, for instance by tweaking the User-Agent header to something like cats/<version> (<FUZZER_NAME> <EXPECTED_CODE>). This would allow, for instance, chaining cats with the Burp proxy and doing the analysis simply by looking at the request info to link it back to the actual test (see the sketch after this list).

  • Have a mode where we can make the "good" request (without tampering) to check whether the response code is "good" (i.e. expected in the config). Quite often even the normal request will fail (wrong or bad data replacements, wrong authorization header, ...), so it would be nice to detect and flag that instead of having every fuzzed request reported as a failure.
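For illustration only (the version, fuzzer name and expected code below are made up), a tagged request seen in Burp could carry a header such as:

User-Agent: cats/11.2.0 (RemoveFieldsFuzzer 200)

That single header would be enough to map any proxied request back to the test that generated it.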

en-milie commented Mar 6, 2024

hi @ocervell. Some of the things you mention can already be done. Some examples:

  • you can control the number of requests per minute using the --maxRequestsPerMinute argument; this will avoid the timeouts you mention
  • rather than running each fuzzer individually, you can exclude the fuzzers which are not relevant for you using the --skipFuzzers argument; you can provide a comma separated list of fuzzers to be excluded
  • you can ignore specific response codes, response regexes from the body, etc. using the --ignoreXXX arguments; this will allow you to ignore specific errors returned by the service, or specific response codes. Ignoring will mean they will be reported as success and included in the report, but you can skip reporting for them using --sri
  • I wouldn't say that an app returning the same result for all Accept headers means it's fine; maybe the app behaves equally badly for all Accept headers, so I would rely on the --ignoreXXX arguments mentioned above
  • the "good" requests are done through the HappyFuzzer; if the requests will need additional context (like some entities needed to be created) you can supply static/reference data using the --refData argument; this is a great way to provided additional context to make some fields static in order for the requests to meet business constraints

It's a good idea to extend the User-Agent header with additional context. I'll add that to the backlog.

I would typically recommend a first round with all fuzzers in blackbox mode: cats ... -b -k, which will only report 500 responses. It just needs the contract and the authorization headers.
Afterwards, you can play with the --ignoreXXX arguments, the --matchXXX arguments and the other filtering arguments.
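As a concrete starting point (reusing the placeholders from the command above), that first blackbox round could be:

cats -c open_api.yml -s https://<API_URL> -H "Authorization=Basic <TOKEN>" -b -k

Once the 500s from that run are triaged, the --ignoreXXX/--matchXXX filters can narrow the follow-up runs.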

@en-milie

The User-Agent header is enhanced in the latest release: https://github.com/Endava/cats/releases/tag/cats-11.3.0
