domainExtractor

Best-effort attempt to extract all possible domains, subdomains, and FQDNs from a specified file.

Installation:

git clone https://github.com/intrudir/domainExtractor.git

Usage:

Run the script without arguments to see the usage:
python3 domainExtractor.py
usage: domainExtractor.py [-h] [--file INPUTFILE] [--target TARGET] [--verbose]

This script will extract domains from the file you specify and add it to a final file

optional arguments:
  -h, --help        show this help message and exit
  --file INPUTFILE  Specify the file to extract domains from
  --target TARGET   Specify the target top-level domain you'd like to find and extract e.g. uber.com
  --verbose         Enable slightly more verbose console output

Matching a specified target domain

Specify a file and a target domain to search for and extract from the file. The test.html file I am using is just the source HTML of the yahoo.com index page.
python3 domainExtractor.py --file test.html --target yahoo.com

It will extract, sort and dedup all domains that are found.
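
Under the hood, this kind of matching boils down to a regex search plus a sort and dedup. Below is a minimal, illustrative sketch of that idea; the function name and pattern are assumptions for explanation, not the script's actual internals.

import re

def extract_domains(path, target):
    # Match the target itself and anything that looks like a subdomain of it,
    # e.g. yahoo.com or www.yahoo.com for a target of yahoo.com (illustrative pattern)
    pattern = re.compile(r'(?:[\w-]+\.)*' + re.escape(target))
    with open(path, 'r', errors='ignore') as f:
        text = f.read()
    # Sort and dedup the matches before returning them
    return sorted(set(match.lower() for match in pattern.findall(text)))

for domain in extract_domains('test.html', 'yahoo.com'):
    print(domain)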


Specifying all domains

Specifying 'all' as the target extracts all domains it finds (at the moment: .com, .net, .org, .tv, and .io).
python3 domainExtractor.py --file test.html --target all
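
Conceptually, 'all' just swaps the single target for an alternation over a fixed TLD list. A hedged sketch of what that pattern could look like (not the script's actual code):

import re

# TLD list mirrored from the description above; the real script may differ
TLDS = ['com', 'net', 'org', 'tv', 'io']
all_pattern = re.compile(r'(?:[\w-]+\.)+(?:' + '|'.join(TLDS) + r')\b')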


Domains not previously found

If you run the script again while checking for the same target, a few things occur:
1) If you already have a final file for it, it will notify you of any domains you didn't have before.
2) It will append those new domains to the final file.
3) It will also create a new, date-stamped file and log the new domains there along with the time.


This allows you to check the same target across multiple files and be notified of any new domains found!
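
A rough sketch of how this new-domain tracking could work is shown below; the file names (final.yahoo.com.txt, a dated log file) are illustrative assumptions, not the script's actual naming scheme.

import os
from datetime import datetime

def update_final_file(domains, final_path='final.yahoo.com.txt'):
    # Load whatever has already been recorded for this target
    existing = set()
    if os.path.exists(final_path):
        with open(final_path) as f:
            existing = {line.strip() for line in f if line.strip()}

    # Anything not already in the final file counts as new
    new_domains = sorted(set(domains) - existing)
    if not new_domains:
        return []

    # Append the new domains to the final file
    with open(final_path, 'a') as f:
        f.writelines(d + '\n' for d in new_domains)

    # Log just the new ones to a separate, date-stamped file with a timestamp
    now = datetime.now()
    log_path = 'new_domains.{}.txt'.format(now.strftime('%Y-%m-%d'))
    with open(log_path, 'a') as f:
        f.write('# found at {}\n'.format(now.strftime('%H:%M:%S')))
        f.writelines(d + '\n' for d in new_domains)

    return new_domains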

Example: extracting from Amass and Assetfinder output files


I first use it against my Amass results, then against my Assetfinder results.
The script will sort and dedup, as well as notify me of how many new, unique domains came from Assetfinder's results.
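
For example (the input file names here are assumptions; use whatever your tools produced):

python3 domainExtractor.py --file amass_output.txt --target uber.com
python3 domainExtractor.py --file assetfinder_output.txt --target uber.com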


It will add them to the final file and also log just the new ones to a file stamped with the date and time.

