
Implement bfac tricks into url_fuzzer #17434

Open · wants to merge 12 commits into master

Conversation

alexandreborgo

See #16271

Hey, here is a first try.

What I added:

  • new appendables, backup_exts and file_types
  • new set of prependables (like Copy_of_)
  • a function _mutate_by_prepending to generate new URLs from this set
  • a function _mutate_file_name to generate variations of the file name, like: index.html -> index-backup.html, index (copy).html, .index.html.swp, etc.
  • and of course the calls to the new functions

I checked that new patterns don't create duplicates of existing URLs.
I don't know if _mutate_file_name is the best way to do it. I used the same technique as bfac; what do you think about this function?
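For context, a minimal standalone sketch of what a bfac-style name mutation can look like (the variation patterns below are illustrative examples, not the exact list from this PR):

```python
def mutate_file_name(url_string, filename):
    """Generate bfac-style backup-name variations for one filename.

    Standalone sketch: the real plugin works on w3af URL objects,
    this version only builds candidate URL strings.
    """
    dot = filename.rfind('.')
    if dot > 0:
        name, extension = filename[:dot], filename[dot + 1:]
    else:
        # No extension (e.g. "foobar"): skip extension-based patterns
        name, extension = filename, ''

    variations = {
        '#%s#' % filename,        # emacs-style auto-save copy
        '.%s.swp' % filename,     # vim swap file
        'Copy_of_%s' % filename,  # "Copy of" prefix
    }
    if extension:
        variations.add('%s (copy).%s' % (name, extension))
        variations.add('%s-backup.%s' % (name, extension))

    return set(url_string + v for v in variations)
```

Returning a set also avoids emitting the same candidate URL twice.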

I ran some tests and everything works properly, but I found something strange: the plugin finds all 26 URLs it could find on the server, but the list of fuzzable requests contains http://localhost/ several times.
See the attached file output.txt for the list of file on the test server and w3af output.

This bug has something to do with the new function _mutate_file_name: it found 6 new URLs, and there are 6 extra http://localhost/ entries in the fuzzable requests. If the plugin doesn't call the function the result is correct. I'm sure that the function doesn't create this URL directly. I'll work on it.

_appendables = ('~', '~~', '_', '.', '.tar.gz', '.gz', '.7z', '.cab',
                '.gzip', '.bzip2', '.inc', '.zip', '.rar', '.jar', '.java',
                '.class', '.properties', '.bak', '.bak1', '.bkp', '.back',
                '.backup', '.backup1', '.old', '.old1', '.$$$')
Owner

Now that these lists are getting larger, we have the risk of duplicating one item in them.

Let's change the type from tuple (current) to set (desired). By doing so we make sure that even if we add duplicates in the variable definition, the set will eliminate them.

Just change: _appendables = (...) with _appendables = {...}.

The same change should be applied to all other class attributes that define lists like this in the url_fuzzer class.
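A quick illustration of why the set literal helps (hypothetical values, not the actual list):

```python
# As a tuple, an accidental duplicate is kept and would be fuzzed twice
appendables_tuple = ('~', '.bak', '.old', '.bak')

# As a set literal, the duplicate is silently eliminated at definition time
appendables_set = {'~', '.bak', '.old', '.bak'}
```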

Author

Done.

if filename:
    domain = url.get_domain_path()
    url_string = domain.url_string
    name = filename[:filename.rfind(u'.')]
Owner

What happens here when the input URL is http://w3af.org/foobar (there is no dot in the filename)?

Author

See the answer below.


mutate_name_testing = (
    url_string + '#' + filename + '#',
    url_string + name + ' (copy).' + extension,
Owner

And also, following the same example above where the filename has no extension: what happens in these lines when extension is empty? Does it make sense to send these tests?

These cases should be investigated and handled.
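The pitfall is easy to reproduce: str.rfind returns -1 when there is no dot, so the naive slice silently drops the last character instead of failing. A small sketch of the problem and one way to guard against it (variable names are illustrative):

```python
filename = 'foobar'  # as in http://w3af.org/foobar: no dot at all

# Naive slicing: rfind('.') returns -1, so filename[:-1] chops the 'r'
broken_name = filename[:filename.rfind('.')]
assert broken_name == 'fooba'  # silently wrong, no exception raised

# Guarded version: only split when a dot is actually present
dot = filename.rfind('.')
if dot > 0:
    name, extension = filename[:dot], filename[dot + 1:]
else:
    name, extension = filename, ''  # empty extension: skip those tests
```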

Author

Yes, it wasn't handling these files correctly. I made some modifications and I believe it handles them well now.

Here are the tests I did and the results: https://pastebin.com/WG5szsnB (GitHub's file attachment system isn't working right now, so I put the output on Pastebin).

)

for change in mutate_name_testing:
    newurl = URL(change)
Owner

Just in case, wrap the URL instance creation in a try/except; we never know what could go wrong when crafting these URLs.

Author

I'm not sure if you want the try/except in w3af itself, or if it is just to find out what's generating all the http://localhost/ entries in the list of fuzzable requests.

I did this:

try:
    newurl = URL(change)
    yield newurl
except ValueError as exception:
    om.out.information('Error while generating a new URL: %s'
                       % exception)

And nothing wrong came from it during all my tests.

Let me know if I should add the exception handling to the pull request.

Owner

Yes, the code you show in the comment is what I wanted, just in case... you never know...

The only change I would propose to that code is to move the yield newurl to the else: section.
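The try/except/else shape suggested above can be sketched like this (URL here is a minimal stand-in stub for w3af's URL class, and the logging is simplified to print):

```python
def URL(value):
    # Stand-in stub: reject values that are obviously not URLs
    if not value.startswith('http'):
        raise ValueError('Invalid URL: %r' % value)
    return value

def generate_urls(changes):
    """Yield only the changes that parse cleanly, logging the rest."""
    for change in changes:
        try:
            newurl = URL(change)
        except ValueError as exception:
            print('Error while generating a new URL: %s' % exception)
        else:
            # else: runs only if URL() did not raise, so an unexpected
            # error in the yield path is never swallowed by the except
            yield newurl
```

Keeping only the URL() call inside the try block means a bug in the code consuming the generator can't be mistaken for a URL parsing error.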

@andresriancho (Owner)

I took a quick look at the output.txt file and noticed these lines:

The list of fuzzable requests is:
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/
- Method: GET | http://localhost/

This is what you said was strange, right?

I also believe it is strange... but can't understand why it is like that...

Could you share the w3af script you are using to run the scan, and a little bit more about the target web application? Thanks!

@alexandreborgo (Author)

> I took a quick look at the output.txt file and noticed these lines: [...]
> This is what you said was strange, right?
> Could you share the w3af script you are using to run the scan, and a little bit more about the target web application? Thanks!

Yes, this is it. I haven't looked into what causes it yet.

I'm using w3af with these modifications (my fork's url_fuzzer-bfac-tricks branch). The issue only appears when the function _mutate_file_name is called (not with _mutate_by_prepending).

I tested it with two servers and found the same result with both:

  • first with Python's SimpleHTTPServer serving the files from the working directory
  • then with an Apache server with the default configuration and all the files in /var/www/html
