This is a project mainly aimed at Attack/Defense CTFs, although it could be used for other tasks such as web scraping, if you have enough imagination.
You can run it in Docker, but a few steps still have to be done outside of Docker in order to use the tool (serialization of exploits and, if needed, serialization of the submitter).
The idea is the following:
- you write a Python function ("exploit") which takes an IP address as input and returns a list of "flags" (a minimal sketch is given below);
- you serialize this function using the provided module, specifying an arbitrary name for the service and a list of IPs ("opponents") with which the function shall interact;
- using the Flask interface, you upload the serialized function, and a different thread executing the function in an infinite loop ("attacker") is spawned for each opponent;
- you can repeat this process as many times as you want, and you can see from the interface the number of valid flags aggregated for each service;
- you can further expand each service to see the stats for each opponent and the state of the thread related to that opponent (running, paused, old);
- you can also stop threads and restore the running state of a stopped thread (but not of an old thread);
- you can overwrite threads by submitting another serialized function with the same service name, without losing old stats;
- the system takes care of "submitting" flags to a certain endpoint, without submitting the same flag more than once: you have to configure the protocol (or, optionally, a custom submitter), the URI of the endpoint and a submission token.
A thread is "old" if it is related to stats obtained from another execution of the exploit_manager: the system will not make a snapshot of thread objects, it will only save the flags they found on the DB.
You can configure parameters from conf.json:
- db: the database used is sqlite; it will be created and configured if it doesn't exist.
  - name: db name, without extension and without path.
  - extension: you would usually set this to .db.
  - path: directory in which the db file will live.
- submit: configuration of the flag submitter.
  - url: endpoint to which the submitter will send valid flags.
  - token: API token for the submission, if you need one.
  - regex: usually the same regex used by the attackers to recognize flags; since the attacker is assumed to return valid flags, the submitter does not actually apply it.
  - protocol: can be GET, POST (both JSON-based, with 'team_token' and 'flag' fields), TCP (uses pwntools) or CUSTOM / CUSTOM-QUEUED (in these cases you have to store a serialized Python function in a file, in a way very similar to the serialization of exploits).
  - custom_serialized_script: full path of the serialized function used for the CUSTOM submit; the function must take as input a flag, the URL for the submit and the token, and is not required to return any value. With CUSTOM-QUEUED the function is very similar, but the first argument must be a list of flags instead of a single flag.
  - message: unused but checked, keep the default value.
- attack: default attack parameters.
  - targets: default list of IP addresses, used when the list provided with the serialized exploit is empty.
  - tick: originally meant as the "round time" of A/D CTFs, but we preferred calling time.sleep() directly in exploit functions; it is only used by attackers to stop themselves if they don't find new flags for longer than 2 * tick.
- thread: configuration of the attacker threads.
  - pool: unused but checked.
  - timeout: unused but checked.
  - error_threshold: the number of consecutive errors allowed in an attacker before it stops itself.
- exploit_modules: the modules specified here are imported dynamically into the globals dict to make them available to the attackers and to the custom submitter, so you can rely on them when developing the related functions.
  - you need to specify a list like ['requests', 'pwntools', 'time', 'string', 'b64']; make sure that all the listed modules are installed in the venv used to run the exploit_manager.
- docker: unused and not checked.
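To make the layout above concrete, here is an illustrative conf.json skeleton. The nesting and all the values (in particular the message and docker entries) are assumptions derived from the descriptions above; check the conf.json shipped with the project for the authoritative key names and defaults.

```json
{
  "db": { "name": "flags", "extension": ".db", "path": "./" },
  "submit": {
    "url": "http://10.10.0.1:8080/flags",
    "token": "YOUR_TEAM_TOKEN",
    "regex": "[A-Z0-9]{31}=",
    "protocol": "POST",
    "custom_serialized_script": "/volume/custom_submitter.serialized",
    "message": "default"
  },
  "attack": {
    "targets": ["10.60.1.1", "10.60.2.1"],
    "tick": 120
  },
  "thread": {
    "pool": 10,
    "timeout": 5,
    "error_threshold": 10
  },
  "exploit_modules": ["requests", "pwntools", "time", "string", "b64"],
  "docker": {}
}
```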
If you want a high-level depiction of the architecture of this system, check out the diagrams/ folder.
At this point, you should know how to configure conf.json.
To show how to properly serialize an exploit function and a custom submitter, we think the best way is to provide some examples.
We already described the interfaces that an exploit and a custom submitter must provide. Now check out the examples/ folder:
- in test_custom_submitter we provided just a dummy example;
- in test_exploit we provided both a dummy example and an exploit we actually used in an A/D CTF; some considerations about exploit_wit:
- usually you wouldn't put a while loop inside an exploit, because the loop is handled by the Attacker class, but this was a "stateful" exploit: access control was poor, so we could loop over ticket IDs to get confidential data (flags);
- stateful exploits are not very common, but we think the best way to handle them in the exploit_manager is with files: the exploit writes its state to a file at the end of each execution and restores it at the next one (a minimal sketch follows this list);
- we didn't implement any stateful mechanism during the CTF; in fact, you can notice that the exploit always loops over the same IDs, since the idea of using files only came to us later.
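As a hedged illustration of the file-based approach suggested above (the path, file format and state layout are arbitrary choices for the sketch, not something the project provides), a stateful exploit could persist a cursor like this:

```python
# Illustrative only: path and state layout are invented for this sketch.
import json
import os

STATE_FILE = "/volume/exploit_wit_state.json"  # hypothetical location

def load_state():
    """Restore the last ticket ID reached, or start from scratch."""
    if os.path.exists(STATE_FILE):
        with open(STATE_FILE) as f:
            return json.load(f)
    return {"last_ticket_id": 0}

def save_state(state):
    """Persist the state at the end of this execution, for the next run."""
    with open(STATE_FILE, "w") as f:
        json.dump(state, f)

def exploit(ip):
    state = load_state()
    flags = []
    # ... attack ticket IDs greater than state["last_ticket_id"],
    #     appending any flags found and advancing the cursor ...
    save_state(state)
    return flags
```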
An important thing to keep in mind is that serialization / deserialization doesn't work properly across different Python versions.
To address this issue, we added the /custom folder:
- there is custom_submitter.py, which you edit manually and which the Dockerfile executes to generate the serialized custom submitter (a rough sketch of such a function is shown after this list);
- there is exploit.py, which you can edit and execute through the browser interface (something like "fill in this function"); it will generate and return the serialized exploit (it uses Session instead of modifying the original exploit.py, and it writes temporary files to /volume, erasing them as soon as possible).
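As a rough sketch of what a CUSTOM submitter function could look like, following the interface described earlier (flag, URL, token, no return value). The POST body below mirrors the built-in 'team_token' / 'flag' fields and is an assumption, not the actual checksystem API.

```python
# Illustrative CUSTOM submitter: takes one flag, the submit URL and the token.
import requests

def submit(flag, url, token):
    """Send a single flag to the checksystem; no return value is required."""
    try:
        resp = requests.post(url, json={"team_token": token, "flag": flag}, timeout=5)
        print(f"[submitter] {flag} -> {resp.status_code}")
    except requests.RequestException as e:
        print(f"[submitter] error while submitting {flag}: {e}")
```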
We also added decent logging: if something goes wrong, you will notice it and be able to fix the error.
For convenience, we also made it possible to modify the custom submitter through /volume, to avoid rebuilding the Docker image.
If the endpoint for flag submission is rate-limited, you don't want to submit one flag at a time: you want to hoard flags in a queue and submit a batch of them together.
To handle this, we implemented a QueuedSubmitter: each attacker appends flags to a thread-safe queue, and a dedicated thread polls this queue, trying to fetch many flags at once.
The QueuedSubmitter then calls the submit function of the CustomSubmitter, passing a list of flags instead of a single flag as usual.
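For the CUSTOM-QUEUED case the function keeps the same shape but receives a list of flags as its first argument. A minimal sketch, again with assumed field names and per-flag requests (a real checksystem might accept the whole batch in one request, in which case the loop can be collapsed):

```python
# Illustrative CUSTOM-QUEUED submitter: the first argument is a list of flags.
import requests

def submit(flags, url, token):
    """Submit a batch of flags collected by the QueuedSubmitter."""
    for flag in flags:
        try:
            requests.post(url, json={"team_token": token, "flag": flag}, timeout=5)
        except requests.RequestException:
            continue  # skip failures and keep submitting the rest of the batch
```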