πŸ•·οΈ Scrapyd is an application for deploying and running Scrapy spiders.


scrapyd

scrapy is an open source and collaborative framework for extracting the data you need from websites, in a fast, simple, yet extensible way.

scrapyd is a service for running Scrapy spiders. It allows you to deploy your Scrapy projects and control their spiders using an HTTP JSON API (see the curl example after this list of tools).

scrapyd-client is a client for Scrapyd. It provides the scrapyd-deploy utility, which allows you to deploy your project to a Scrapyd server.

scrapy-splash provides Scrapy+JavaScript integration using Splash.

scrapyrt allows you to easily add an HTTP API to your existing Scrapy project.

spidermon is a framework to build monitors for Scrapy spiders.

scrapy-poet is the web-poet Page Object pattern implementation for Scrapy.

scrapy-playwright is a Scrapy Download Handler which performs requests using Playwright for Python.
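
For a first taste of Scrapyd's JSON API, a running instance (localhost:6800 in the compose file below) answers plain curl; both endpoints here are part of Scrapyd's documented API:

$ curl http://localhost:6800/daemonstatus.json
$ curl http://localhost:6800/listprojects.json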

This image is based on debian:bookworm, with the latest stable releases of these 8 Python packages installed:

  • scrapy==2.12.0
  • scrapyd==1.5.0
  • scrapyd-client==2.0.0
  • scrapy-splash==0.9.0
  • scrapyrt==0.16.0
  • spidermon==1.23.0
  • scrapy-poet==0.24.0
  • scrapy-playwright==0.0.42

# fetch latest versions
echo "scrapy scrapyd scrapyd-client scrapy-splash scrapyrt spidermon scrapy-poet scrapy-playwright" |
  xargs -n1 pip --disable-pip-version-check index versions 2>/dev/null |
    grep -v Available
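
To double-check what actually ships in a pulled tag of the image, list the installed packages inside a throwaway container (the grep pattern is just a convenience that happens to match all eight names):

$ docker run --rm easypi/scrapyd pip list --disable-pip-version-check | grep -iE 'scrapy|spidermon'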

Please use this image as the base image for your own project; a minimal Dockerfile sketch follows.
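
A minimal sketch of such a derived image, written with the same heredoc style as the spider example below; the project layout, requirements.txt, and image tag are hypothetical:

$ cat > Dockerfile << _EOF_
# start from this image and add your own project on top
FROM easypi/scrapyd
COPY . /code
WORKDIR /code
# hypothetical: install whatever extra dependencies your spiders need
RUN pip install --no-cache-dir -r requirements.txt
_EOF_
$ docker build -t myproject .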

⚠️ Scrapy (since 2.0.0) has dropped support for Python 2.7, which reached end-of-life on 2020-01-01.

File: docker-compose.yml

version: "3.8"

services:

  scrapyd:
    image: easypi/scrapyd
    ports:
      - "6800:6800"
    volumes:
      - ./data:/var/lib/scrapyd
      - /usr/local/lib/python3.11/dist-packages
    restart: unless-stopped

  scrapy:
    image: easypi/scrapyd
    command: bash
    volumes:
      - .:/code
    working_dir: /code
    restart: unless-stopped

  scrapyrt:
    image: easypi/scrapyd
    command: scrapyrt -i 0.0.0.0 -p 9080
    ports:
      - "9080:9080"
    volumes:
      - .:/code
    working_dir: /code
    restart: unless-stopped

Run it as a background daemon for scrapyd

$ docker-compose up -d scrapyd
$ docker-compose logs -f scrapyd
$ docker cp scrapyd_scrapyd_1:/var/lib/scrapyd/items .
$ tree items
└── myproject
    └── myspider
        └── ad6153ee5b0711e68bc70242ac110005.jl

On the host, create a Scrapy project and deploy it to the running Scrapyd instance:

$ mkvirtualenv -p python3 webbot
$ pip install scrapy scrapyd-client

$ scrapy startproject myproject
$ cd myproject
$ setvirtualenvproject

$ scrapy genspider myspider mydomain.com
$ scrapy edit myspider
$ scrapy list

$ vi scrapy.cfg
$ scrapyd-client deploy
$ curl http://localhost:6800/schedule.json -d project=myproject -d spider=myspider
$ firefox http://localhost:6800
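
Once a job is scheduled, the same JSON API reports and controls it; listjobs.json and cancel.json are standard Scrapyd endpoints, and <jobid> is the id returned by schedule.json above:

$ curl 'http://localhost:6800/listjobs.json?project=myproject'
$ curl http://localhost:6800/cancel.json -d project=myproject -d job=<jobid>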

File: scrapy.cfg

[settings]
default = myproject.settings

[deploy]
url = http://localhost:6800/
project = myproject
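
scrapyd-client also understands multiple named targets, one [deploy:<name>] section per Scrapyd host, which keeps local and remote deploys side by side; the production URL below is hypothetical:

[deploy:production]
url = http://scrapyd.example.com:6800/
project = myproject

$ scrapyd-deploy production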

Run it as an interactive shell for scrapy

$ cat > stackoverflow_spider.py << _EOF_
import scrapy

class StackOverflowSpider(scrapy.Spider):
    name = 'stackoverflow'
    start_urls = ['http://stackoverflow.com/questions?sort=votes']

    def parse(self, response):
        # follow every question link on the listing page
        for href in response.css('.question-summary h3 a::attr(href)'):
            full_url = response.urljoin(href.get())
            yield scrapy.Request(full_url, callback=self.parse_question)

    def parse_question(self, response):
        # emit one item per question detail page
        yield {
            'title': response.css('h1 a::text').get(),
            'votes': response.css('.question div[itemprop="upvoteCount"]::text').get(),
            'body': response.css('.question .postcell').get(),
            'tags': response.css('.question .post-tag::text').getall(),
            'link': response.url,
        }
_EOF_

$ docker-compose run --rm scrapy
>>> scrapy runspider stackoverflow_spider.py -o top-stackoverflow-questions.jl
>>> cat top-stackoverflow-questions.jl
>>> exit
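
The same throwaway container is also handy for trying out selectors before they go into a spider, using Scrapy's built-in interactive shell:

$ docker-compose run --rm scrapy
>>> scrapy shell 'http://stackoverflow.com/questions?sort=votes'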

Run it as a realtime crawler for scrapyrt

$ git clone https://github.com/scrapy/quotesbot.git .
$ docker-compose up -d scrapyrt
$ curl -s 'http://localhost:9080/crawl.json?spider_name=toscrape-css&callback=parse&url=http://quotes.toscrape.com/&max_requests=5' | jq -c '.items[]'
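
scrapyrt also accepts POST requests with the start request described in a JSON body, which avoids long query strings; a sketch against the same quotesbot spider, assuming scrapyrt's JSON body format of a top-level spider_name plus a request object:

$ curl -s -X POST http://localhost:9080/crawl.json \
    -H 'Content-Type: application/json' \
    -d '{"spider_name": "toscrape-css", "max_requests": 5, "request": {"url": "http://quotes.toscrape.com/"}}' | jq -c '.items[]'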
