XSSer v1.8.1 - 'The Hive' release
epsylon committed Sep 20, 2019
1 parent 9f196b7 commit 808d1ba
Showing 61 changed files with 7,179 additions and 4,562 deletions.
2 changes: 1 addition & 1 deletion xsser/Makefile → Makefile
@@ -4,7 +4,7 @@ PYTHON=`which python`
DESTDIR=/
BUILDIR=$(CURDIR)/debian/xsser
PROJECT=xsser
-VERSION=0.7.0
+VERSION=1.8.1

all:
@echo "make source - Create source package"
74 changes: 49 additions & 25 deletions README.md
@@ -1,53 +1,77 @@
-![XSSer](https://xsser.03c8.net/xsser/zika1.png "XSSerBanner")
+![XSSer](https://xsser.03c8.net/xsser/thehive1.png "XSSer")

-===================================================================
+----------

-Cross Site "Scripter" (aka XSSer) is an automatic -framework- to detect, exploit and report XSS vulnerabilities.
++ Web: https://xsser.03c8.net

 ----------

-XSSer is released under the GPLv3. You can find the full license text
-in the [COPYING](./xsser/doc/COPYING) file.
+Cross Site "Scripter" (aka XSSer) is an automatic -framework- to detect, exploit and report XSS vulnerabilities in web-based applications.

-----------
+It provides several options to try to bypass certain filters and various special techniques for code injection.

-+ Web: https://xsser.03c8.net
+XSSer has pre-installed [ > 1300 XSS ] attacking vectors and can bypass-exploit code on several browsers/WAFs:

-----------
+ [PHPIDS]: PHP-IDS
+ [Imperva]: Imperva Incapsula WAF
+ [WebKnight]: WebKnight WAF
+ [F5]: F5 Big IP WAF
+ [Barracuda]: Barracuda WAF
+ [ModSec]: Mod-Security
+ [QuickDF]: QuickDefense
+ [Chrome]: Google Chrome
+ [IE]: Internet Explorer
+ [FF]: Mozilla's Gecko rendering engine, used by Firefox/Iceweasel
+ [NS-IE]: Netscape in IE rendering engine mode
+ [NS-G]: Netscape in the Gecko rendering engine mode
+ [Opera]: Opera

-![XSSer](https://xsser.03c8.net/xsser/zika2.png "XSSerManifesto")
+![XSSer](https://xsser.03c8.net/xsser/url_generation.png "XSSer URL Generation Schema")

 ----------

 #### Installing:

-XSSer runs on many platforms. It requires Python and the following libraries:
+XSSer runs on many platforms. It requires Python and the following libraries:

-- python-pycurl - Python bindings to libcurl
-- python-xmlbuilder - create xml/(x)html files - Python 2.x
-- python-beautifulsoup - error-tolerant HTML parser for Python
-- python-geoip - Python bindings for the GeoIP IP-to-country resolver library
+    python-pycurl - Python bindings to libcurl
+    python-xmlbuilder - create xml/(x)html files - Python 2.x
+    python-beautifulsoup - error-tolerant HTML parser for Python
+    python-geoip - Python bindings for the GeoIP IP-to-country resolver library

-On Debian-based systems (ex: Ubuntu), run:
+On Debian-based systems (ex: Ubuntu), run:

-sudo apt-get install python-pycurl python-xmlbuilder python-beautifulsoup python-geoip
+    sudo apt-get install python-pycurl python-xmlbuilder python-beautifulsoup python-geoip

-On other systems such as: Kali, Ubuntu, ArchLinux, ParrotSec, Fedora, etc... also run:
+On other systems such as: Kali, Ubuntu, ArchLinux, ParrotSec, Fedora, etc... also run:

-pip install geoip
+    pip install geoip

 #### Source libs:

-* Python: https://www.python.org/downloads/
-* PyCurl: http://pycurl.sourceforge.net/
-* PyBeautifulSoup: https://pypi.python.org/pypi/BeautifulSoup
-* PyGeoIP: https://pypi.python.org/pypi/GeoIP
+* Python: https://www.python.org/downloads/
+* PyCurl: http://pycurl.sourceforge.net/
+* PyBeautifulSoup: https://pypi.python.org/pypi/BeautifulSoup
+* PyGeoIP: https://pypi.python.org/pypi/GeoIP

 ----------

+#### License:
+
+XSSer is released under the GPLv3. You can find the full license text
+in the [LICENSE](./docs/LICENSE) file.
+
+----------
+
 #### Screenshots:

-![XSSer](https://xsser.03c8.net/xsser/url_generation.png "XSSerSchema")
+![XSSer](https://xsser.03c8.net/xsser/thehive2.png "XSSer Shell")

+![XSSer](https://xsser.03c8.net/xsser/thehive3.png "XSSer Manifesto")

+![XSSer](https://xsser.03c8.net/xsser/thehive4.png "XSSer Configuration")

-![XSSer](https://xsser.03c8.net/xsser/zika3.png "XSSerAdvanced")
+![XSSer](https://xsser.03c8.net/xsser/thehive5.png "XSSer Bypassers")

-![XSSer](https://xsser.03c8.net/xsser/zika4.png "XSSerGeoMap")
+![XSSer](https://xsser.03c8.net/xsser/zika4.png "XSSer GeoMap")
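Note: the reworked README describes the tool but stops short of a usage example. A hedged sketch follows; the `-u`/`--url`, `-g` and `-c` option names are taken from xsser's own usage banner (`xsser [OPTIONS] [-u <url> [-g <get> | -p <post> | -c <crawl>]] ...`), the target URL is a placeholder, and exact flags should be confirmed against `xsser --help` for this release:

```
# Test a single GET parameter; "XSS" marks the injection point:
xsser -u "http://target.example" -g "/index.php?id=XSS"

# Or let the crawler (core/crawler.py below) hunt for parameters itself:
xsser -u "http://target.example" -c 20
```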
6 changes: 2 additions & 4 deletions xsser/core/__init__.py → core/__init__.py
@@ -1,9 +1,7 @@
"""
$Id$
This file is part of the XSSer project, https://xsser.03c8.net
This file is part of the xsser project, http://xsser.03c8.net
Copyright (c) 2011/2016 psy <[email protected]>
Copyright (c) 2010/2019 | psy <[email protected]>
xsser is free software; you can redistribute it and/or modify it under
the terms of the GNU General Public License as published by the Free
95 changes: 46 additions & 49 deletions xsser/core/crawler.py → core/crawler.py
@@ -2,11 +2,9 @@
 # -*- coding: utf-8 -*-"
 # vim: set expandtab tabstop=4 shiftwidth=4:
 """
-$Id$
-
-This file is part of the XSSer project, https://xsser.03c8.net
+This file is part of the xsser project, http://xsser.03c8.net

-Copyright (c) 2011/2016 psy <[email protected]>
+Copyright (c) 2010/2019 | psy <[email protected]>

 xsser is free software; you can redistribute it and/or modify it under
 the terms of the GNU General Public License as published by the Free
@@ -40,14 +38,10 @@ class EmergencyLanding(Exception):
 class Crawler(object):
     """
     Crawler class.
-
-    Crawls a webpage looking for url arguments.
-    Dont call from several threads! You should create a new one
-    for every thread.
     """
     def __init__(self, parent, curlwrapper=None, crawled=None, pool=None):
         # verbose: 0-no printing, 1-prints dots, 2-prints full output
-        self.verbose = 1
+        self.verbose = 0
         self._parent = parent
         self._to_crawl = []
         self._parse_external = True
@@ -81,7 +75,10 @@ def _find_args(self, url):
         find parameters in given url.
         """
         parsed = urllib2.urlparse.urlparse(url)
-        qs = urlparse.parse_qs(parsed.query)
+        if "C=" in parsed.query and "O=" in parsed.query:
+            qs = ""
+        else:
+            qs = urlparse.parse_qs(parsed.query)
         if parsed.scheme:
             path = parsed.scheme + "://" + parsed.netloc + parsed.path
         else:
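Note: the `"C="`/`"O="` guard added here appears aimed at the column-sort links (`?C=N;O=D` and friends) that Apache's mod_autoindex appends to directory listings, which carry nothing injectable. A standalone sketch of the same check, using the Python 3 spellings of the urlparse calls (the project itself is Python 2, and `find_query_args` is a hypothetical helper):

```python
# Minimal sketch of the filter above, assuming Python 3.
from urllib.parse import urlparse, parse_qs

def find_query_args(url):
    """Parse a URL's query, ignoring Apache directory-index sort links."""
    parsed = urlparse(url)
    if "C=" in parsed.query and "O=" in parsed.query:
        return {}  # e.g. ?C=M;O=A -- autoindex sort columns, nothing to attack
    return parse_qs(parsed.query)

print(find_query_args("http://host/files/?C=M;O=A"))        # {}
print(find_query_args("http://host/index.php?id=1&cat=2"))  # {'id': ['1'], 'cat': ['2']}
```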
@@ -92,6 +89,14 @@ def _find_args(self, url):
             if not zipped or not path in zipped[0]:
                 self._found_args[key].append([path, url])
                 self.generate_result(arg_name, path, url)
+        if not qs:
+            parsed = urllib2.urlparse.urlparse(url)
+            if path.endswith("/"):
+                attack_url = path + "XSS"
+            else:
+                attack_url = path + "/XSS"
+            if not attack_url in self._parent.crawled_urls:
+                self._parent.crawled_urls.append(attack_url)
         ncurrent = sum(map(lambda s: len(s), self._found_args.values()))
         if ncurrent >= self._max:
             self._armed = False
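Note: this second addition handles parameterless pages: when a crawled URL has no query string, the crawler now still queues a probe with `XSS` appended as a path segment, so path-based reflection gets tested too. A standalone sketch of that construction (`make_path_probe` is a hypothetical helper):

```python
# Sketch of the path-probe logic added above.
def make_path_probe(path):
    """Append the XSS marker as a path segment when there is no query string."""
    return path + "XSS" if path.endswith("/") else path + "/XSS"

crawled_urls = []
for url in ("http://host/app/", "http://host/app/page"):
    probe = make_path_probe(url)
    if probe not in crawled_urls:  # mirrors the duplicate check above
        crawled_urls.append(probe)
print(crawled_urls)  # ['http://host/app/XSS', 'http://host/app/page/XSS']
```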
@@ -121,6 +126,7 @@ def crawl(self, path, depth=3, width=0, local_only=True):
         attack_urls = []
         if not self._parent._landing and self._armed:
             self._crawl(basepath, path, depth, width)
+        # now parse all found items
         if self._ownpool:
             self.pool.dismissWorkers(len(self.pool.workers))
             self.pool.joinAllDismissedWorkers()
@@ -138,7 +144,7 @@ def generate_result(self, arg_name, path, url):
         for key, val in qs.iteritems():
             qs_joint[key] = val[0]
         attack_qs = dict(qs_joint)
-        attack_qs[arg_name] = "VECTOR"
+        attack_qs[arg_name] = "XSS"
         attack_url = path + '?' + urllib.urlencode(attack_qs)
         if not attack_url in self._parent.crawled_urls:
             self._parent.crawled_urls.append(attack_url)
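Note: `generate_result` builds one attack URL per discovered parameter by swapping that parameter's value for the crawler's marker, which this commit renames from `VECTOR` to `XSS`. A sketch of the substitution using Python 3's urlencode (the file itself uses the Python 2 urllib equivalents, and `attack_urls` is an illustrative name):

```python
# Sketch of the attack-URL generation above, assuming Python 3.
from urllib.parse import urlencode

def attack_urls(path, qs, marker="XSS"):
    """Yield one URL per parameter, with that parameter's value set to the marker."""
    flat = {key: val[0] for key, val in qs.items()}  # parse_qs values are lists
    for name in flat:
        attack_qs = dict(flat)
        attack_qs[name] = marker
        yield path + "?" + urlencode(attack_qs)

for u in attack_urls("http://host/index.php", {"id": ["1"], "cat": ["2"]}):
    print(u)
# http://host/index.php?id=XSS&cat=2
# http://host/index.php?id=1&cat=XSS
```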
@@ -178,37 +184,35 @@ def _curl_main(self, pars):
             self._get_done(basepath, depth, width, path, res, c_info)

     def _get_error(self, request, error):
-        try:
-            path, depth, width, basepath = request.args[0]
-            e_type, e_value, e_tb = error
-            if e_type == pycurl.error:
-                errno, message = e_value.args
-                if errno == 28:
-                    print("requests pyerror -1")
-                    self.enqueue_jobs()
-                    self._requests.remove(path)
-                    return # timeout
-                else:
-                    self.report('crawler curl error: '+message+' ('+str(errno)+')')
-            elif e_type == EmergencyLanding:
-                pass
+        path, depth, width, basepath = request.args[0]
+        e_type, e_value, e_tb = error
+        if e_type == pycurl.error:
+            errno, message = e_value.args
+            if errno == 28:
+                print("requests pyerror -1")
+                self.enqueue_jobs()
+                self._requests.remove(path)
+                return # timeout
             else:
-                traceback.print_tb(e_tb)
-                self.report('crawler error: '+str(e_value)+' '+path)
-            if not e_type == EmergencyLanding:
-                for reporter in self._parent._reporters:
-                    reporter.mosquito_crashed(path, str(e_value))
-            self.enqueue_jobs()
-            self._requests.remove(path)
-        except:
-            return
+                self.report('crawler curl error: '+message+' ('+str(errno)+')')
+        elif e_type == EmergencyLanding:
+            pass
+        else:
+            traceback.print_tb(e_tb)
+            self.report('crawler error: '+str(e_value)+' '+path)
+        if not e_type == EmergencyLanding:
+            for reporter in self._parent._reporters:
+                reporter.mosquito_crashed(path, str(e_value))
+        self.enqueue_jobs()
+        self._requests.remove(path)

     def _emergency_parse(self, html_data, start=0):
         links = set()
         pos = 0
-        if not html_data:
-            return
-        data_len = len(html_data)
+        try:
+            data_len = len(html_data)
+        except:
+            data_len = html_data
         while pos < data_len:
             if len(links)+start > self._max:
                 break
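Note: two robustness changes in this hunk: `_get_error` drops the blanket `try`/`except` wrapper that silently swallowed crawler failures, and `_emergency_parse` trades its early return for a `try`/`except` around `len()`. On the pycurl side, a `pycurl.error`'s `args` are an `(errno, message)` pair, and errno 28 is curl's operation-timeout code, which is why that branch re-enqueues jobs instead of reporting. A standalone sketch of the triage (`classify_curl_error` is a hypothetical helper):

```python
# Sketch of the error triage in _get_error, assuming Python 3.
def classify_curl_error(exc_args):
    """Map a pycurl.error's args to the crawler's handling."""
    errno, message = exc_args  # pycurl packs (errno, message) into error.args
    if errno == 28:            # CURLE_OPERATION_TIMEDOUT: retry rather than report
        return "requeue"
    return "report: crawler curl error: %s (%d)" % (message, errno)

print(classify_curl_error((28, "Operation timed out")))    # requeue
print(classify_curl_error((6, "Could not resolve host")))  # report: ...
```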
@@ -236,35 +240,31 @@ def enqueue_jobs(self):
             next_job = self._to_crawl.pop()
             self._crawl(*next_job)

-    def _get_done(self, basepath, depth, width, path, html_data, content_type): # request, result):
+    def _get_done(self, basepath, depth, width, path, html_data, content_type):
         if not self._armed or len(self._parent.crawled_urls) >= self._max:
             raise EmergencyLanding
         try:
             encoding = content_type.split(";")[1].split("=")[1].strip()
         except:
             encoding = None
         try:
-            soup = BeautifulSoup(html_data, from_encoding=encoding)
+            soup = BeautifulSoup(html_data, fromEncoding=encoding)
             links = None
         except:
             soup = None
             links = self._emergency_parse(html_data)

         for reporter in self._parent._reporters:
             reporter.start_crawl(path)

         if not links and soup:
-            links = soup.find_all('a')
-            forms = soup.find_all('form')
+            links = soup.findAll('a')
+            forms = soup.findAll('form')
             for form in forms:
                 pars = {}
                 if form.has_key("action"):
                     action_path = urlparse.urljoin(path, form["action"])
                 else:
                     action_path = path
-                for input_par in form.find_all('input'):
+                for input_par in form.findAll('input'):
                     if not input_par.has_key("name"):
                         continue
                     value = "foo"
Expand All @@ -284,8 +284,6 @@ def _get_done(self, basepath, depth, width, path, html_data, content_type): # re
                 elif self.verbose:
                     sys.stdout.write(".")
                     sys.stdout.flush()
-        if not links:
-            return
         if len(links) > self._max:
             links = links[:self._max]
         for a in links:
@@ -323,7 +321,6 @@ def _check_url(self, basepath, path, href, depth, width):
             self._find_args(href)
             for reporter in self._parent._reporters:
                 reporter.add_link(path, href)
-            self.report("\n[Info] Spidering: " + str(href))
             if self._armed and depth>0:
                 if len(self._to_crawl) < self._max:
                     self._to_crawl.append([basepath, href, depth-1, width])
45 changes: 21 additions & 24 deletions xsser/core/curlcontrol.py → core/curlcontrol.py
@@ -2,11 +2,9 @@
 # -*- coding: utf-8 -*-"
 # vim: set expandtab tabstop=4 shiftwidth=4:
 """
-$Id$
-
-This file is part of the XSSer project, https://xsser.03c8.net
+This file is part of the xsser project, http://xsser.03c8.net

-Copyright (c) 2011/2018 psy <[email protected]>
+Copyright (c) 2010/2019 | psy <[email protected]>

 xsser is free software; you can redistribute it and/or modify it under
 the terms of the GNU General Public License as published by the Free
@@ -469,38 +467,37 @@ def print_options(cls):
"""
Print selected options.
"""
print "\n[-]Verbose: active"
print "[-]Cookie:", cls.cookie
print "[-]HTTP User Agent:", cls.agent
print "[-]HTTP Referer:", cls.referer
print "[-]Extra HTTP Headers:", cls.headers
print "\nCookie:", cls.cookie
print "User Agent:", cls.agent
print "Referer:", cls.referer
print "Extra Headers:", cls.headers
if cls.xforw == True:
print "[-]X-Forwarded-For:", "Random IP"
print "X-Forwarded-For:", "Random IP"
else:
print "[-]X-Forwarded-For:", cls.xforw
print "X-Forwarded-For:", cls.xforw
if cls.xclient == True:
print "[-]X-Client-IP:", "Random IP"
print "X-Client-IP:", "Random IP"
else:
print "[-]X-Client-IP:", cls.xclient
print "[-]Authentication Type:", cls.atype
print "[-]Authentication Credentials:", cls.acred
print "X-Client-IP:", cls.xclient
print "Authentication Type:", cls.atype
print "Authentication Credentials:", cls.acred
if cls.ignoreproxy == True:
print "[-]Proxy:", "Ignoring system default HTTP proxy"
print "Proxy:", "Ignoring system default HTTP proxy"
else:
print "[-]Proxy:", cls.proxy
print "[-]Timeout:", cls.timeout
print "Proxy:", cls.proxy
print "Timeout:", cls.timeout
if cls.tcp_nodelay == True:
print "[-]Delaying:", "TCP_NODELAY activate"
print "Delaying:", "TCP_NODELAY activate"
else:
print "[-]Delaying:", cls.delay, "seconds"
print "Delaying:", cls.delay, "seconds"
if cls.followred == True:
print "[-]Follow 302 code:", "active"
print "Follow 302 code:", "active"
if cls.fli:
print"[-]Limit to follow:", cls.fli
print"Limit to follow:", cls.fli
else:
print "[-]Delaying:", cls.delay, "seconds"
print "Delaying:", cls.delay, "seconds"

print "[-]Retries:", cls.retries, "\n"
print "Retries:", cls.retries, "\n"

def answered(self, check):
"""
[Diff truncated: the remaining changed files of this commit are not rendered here.]
