duplicate-url-discarder contains a Scrapy fingerprinter that uses customizable URL processors to canonicalize URLs before fingerprinting.
pip install duplicate-url-discarder
Alternatively, you can also install the predefined rules from duplicate-url-discarder-rules:
pip install duplicate-url-discarder[rules]
If such rules are installed, they are automatically used when the DUD_LOAD_RULE_PATHS setting is left empty (see the configuration below).
Requires Python 3.9+.
If you use Scrapy >= 2.10, you can enable the fingerprinter via the provided Scrapy add-on:
ADDONS = {
    "duplicate_url_discarder.Addon": 600,
}
If you are using other Scrapy add-ons that modify the request fingerprinter, such as the scrapy-zyte-api add-on, configure this add-on with a higher priority value so that the fallback fingerprinter is set to the correct value.
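For example, a settings.py combining this add-on with the scrapy-zyte-api add-on could look as follows; the scrapy_zyte_api.Addon path and the priority values are illustrative, check each plugin's documentation for its recommended priority:

ADDONS = {
    # Assumed add-on path and priority for scrapy-zyte-api; verify against its docs.
    "scrapy_zyte_api.Addon": 500,
    # Higher priority value, so this add-on runs later and picks up the
    # fingerprinter set above as its fallback.
    "duplicate_url_discarder.Addon": 600,
}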
With older Scrapy versions you need to enable the fingerprinter directly:
REQUEST_FINGERPRINTER_CLASS = "duplicate_url_discarder.Fingerprinter"
If you were using a non-default request fingerprinter already, be it one you implemented or one from a Scrapy plugin like scrapy-zyte-api, set it as fallback:
DUD_FALLBACK_REQUEST_FINGERPRINTER_CLASS = "scrapy_zyte_api.ScrapyZyteAPIRequestFingerprinter"
duplicate_url_discarder.Fingerprinter will make canonical forms of the request URLs and get the fingerprints for those using the configured fallback fingerprinter (which is the default Scrapy one unless another one is configured in the DUD_FALLBACK_REQUEST_FINGERPRINTER_CLASS setting). Requests with the "dud" meta value set to False are processed directly, without making a canonical form.
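For example, a minimal spider sketch opting a single request out of canonicalization via the "dud" meta key (the domain and callback here are just placeholders):

import scrapy


class ExampleSpider(scrapy.Spider):
    name = "example"

    def start_requests(self):
        # Fingerprinted on the canonical form of the URL (if any rule matches it).
        yield scrapy.Request("https://example.com/?PHPSESSIONID=abc", callback=self.parse)
        # Skips canonicalization; the fallback fingerprinter sees the URL as-is.
        yield scrapy.Request(
            "https://example.com/?PHPSESSIONID=abc",
            callback=self.parse,
            meta={"dud": False},
        )

    def parse(self, response):
        pass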
duplicate-url-discarder utilizes URL processors to make canonical versions of URLs. The processors are configured with URL rules. Each URL rule specifies a URL pattern for which the processor applies, and specific processor arguments to use.
The following URL processors are currently available:
queryRemoval: removes query string parameters (i.e. key=value pairs) whose keys are specified in the arguments. If a given key appears multiple times with different values in the URL, all of them are removed.
queryRemovalExcept: like queryRemoval, but the keys specified in the arguments are kept while all others are removed.
subpathRemoval: removes subpaths of a URL based on their integer positions.
normalizer: removes a trailing / and www. prefixes, including numbered variants like www2. (see the sketch after this list).
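As a rough sketch of what the normalizer processor does conceptually (an approximation for illustration, not the processor's actual implementation):

import re
from urllib.parse import urlsplit, urlunsplit


def normalize(url: str) -> str:
    # Strip a "www." prefix (including numbered variants like "www2.") from the host
    # and a trailing "/" from the path.
    scheme, netloc, path, query, fragment = urlsplit(url)
    netloc = re.sub(r"^www\d*\.", "", netloc)
    if path.endswith("/"):
        path = path[:-1]
    return urlunsplit((scheme, netloc, path, query, fragment))


print(normalize("https://www2.example.com/products/"))  # https://example.com/products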
A URL rule is a dictionary specifying the url-matcher URL pattern(s), the URL processor name, the URL processor args and the order that is used to sort the rules. They are loaded from JSON files that contain arrays of serialized rules:
[
    {
        "args": [
            "foo",
            "bar"
        ],
        "order": 100,
        "processor": "queryRemoval",
        "urlPattern": {
            "include": [
                "foo.example"
            ]
        }
    },
    {
        "args": [
            "PHPSESSIONID"
        ],
        "order": 100,
        "processor": "queryRemoval",
        "urlPattern": {
            "include": []
        }
    }
]
All non-universal rules (ones that have a non-empty include pattern) that match a request URL are applied according to their order field. If no non-universal rules match the URL, the universal ones are applied.
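For example, with the first rule above, a URL on foo.example would have its foo and bar query parameters stripped before fingerprinting. The following sketch approximates that queryRemoval outcome with w3lib (a Scrapy dependency); it illustrates the intended result, not how the processor is implemented:

from w3lib.url import url_query_cleaner

url = "https://foo.example/category?foo=1&bar=2&page=3"
# Drop the "foo" and "bar" parameters, keep everything else.
canonical = url_query_cleaner(url, parameterlist=["foo", "bar"], remove=True)
print(canonical)  # https://foo.example/category?page=3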
duplicate-url-discarder uses the following Scrapy settings:
DUD_LOAD_RULE_PATHS: it should be a list of file paths (str or pathlib.Path) pointing to JSON files with the URL rules to apply:

DUD_LOAD_RULE_PATHS = [
    "/home/user/project/custom_rules1.json",
]
The default value of this setting is empty. However, if the package duplicate-url-discarder-rules is installed and DUD_LOAD_RULE_PATHS has been left empty, the rules in that package are used automatically.

As this setting requires file paths, it's not straightforward to deploy custom rule files to Scrapy Cloud or other similar environments. One way to do it is to put the custom rule files somewhere inside your Scrapy project, list them in the package data files, disable the zip_safe flag, and compute the absolute file path(s) in the setting value. A sample setup.py would then include:

setup(
    ...
    zip_safe=False,
    package_data={
        "my_project": [
            "data/dud_rules.json",
        ]
    },
)
and settings.py can have code like this:

import os

DUD_LOAD_RULE_PATHS = [
    os.path.join(
        os.path.dirname(os.path.realpath(__file__)),
        "data",
        "dud_rules.json",
    )
]
DUD_ATTRIBUTES_PER_ITEM: a mapping of a type (or its import path) to a list of attributes present in instances of that type. For example:

DUD_ATTRIBUTES_PER_ITEM = {
    "zyte_common_items.Product": [
        "canonicalUrl",
        "brand",
        "name",
        "gtin",
        "mpn",
        "productId",
        "sku",
        "color",
        "size",
        "style",
    ],
    # Other than strings representing import paths, types are supported as well.
    dict: ["name"],
}
This allows DUD to select which attributes to use to derive a signature for an item. This signature is then used to compare the identities of different items. For instance, duplicate_url_discarder.DuplicateUrlDiscarderPipeline uses this to find duplicate items among the extracted ones so it can drop them.
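A minimal sketch of enabling that pipeline in settings.py; the pipeline priority value of 300 is just an example, pick whatever fits your project:

ITEM_PIPELINES = {
    # Drop items whose signature (built from DUD_ATTRIBUTES_PER_ITEM) was already seen.
    "duplicate_url_discarder.DuplicateUrlDiscarderPipeline": 300,
}

DUD_ATTRIBUTES_PER_ITEM = {
    # Example configuration: plain dict items are compared by their "name" value only.
    dict: ["name"],
}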