Commit

python-beautifulsoup: use HTTPX instead of Requests
vdusek committed Oct 5, 2023
1 parent f3867b2 commit 53e7380
Showing 3 changed files with 23 additions and 19 deletions.
14 changes: 7 additions & 7 deletions templates/python-beautifulsoup/README.md
@@ -1,23 +1,23 @@
-## BeautifulSoup and Requests template
+# BeautifulSoup and HTTPX template

-A template for [web scraping](https://apify.com/web-scraping) data from websites enqueued from starting URL using Python. The URL of the web page is passed in via input, which is defined by the [input schema](https://docs.apify.com/platform/actors/development/input-schema). The template uses the [Requests](https://requests.readthedocs.io/) to get the HTML of the page and the [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to parse the data from it. Enqueued URLs are available in [request queue](https://docs.apify.com/sdk/python/reference/class/RequestQueue). The data are then stored in a [dataset](https://docs.apify.com/platform/storage/dataset) where you can easily access them.
+A template for [web scraping](https://apify.com/web-scraping) data from websites in Python, crawling pages enqueued from a starting URL. The starting URL is passed in via input, which is defined by the [input schema](https://docs.apify.com/platform/actors/development/input-schema). The template uses [HTTPX](https://www.python-httpx.org) to fetch the HTML of each page and [Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/) to parse the data from it. Enqueued URLs are kept in a [request queue](https://docs.apify.com/sdk/python/reference/class/RequestQueue). The extracted data are then stored in a [dataset](https://docs.apify.com/platform/storage/dataset) where you can easily access them.
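
The fetch-and-parse pairing described above comes down to a few lines of HTTPX and Beautiful Soup. The snippet below is an illustrative sketch rather than part of the template; the URL and the `fetch_title` helper are placeholders:

```python
import asyncio

from bs4 import BeautifulSoup
from httpx import AsyncClient


async def fetch_title(url):
    # Fetch the raw HTML over HTTP with an asynchronous HTTPX client
    async with AsyncClient() as client:
        response = await client.get(url)
    # Parse the HTML and pull out the <title> element
    soup = BeautifulSoup(response.content, 'html.parser')
    return soup.title.string if soup.title else None


if __name__ == '__main__':
    print(asyncio.run(fetch_title('https://example.com')))
```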

## Included features

- **[Apify SDK](https://docs.apify.com/sdk/python/)** for Python - a toolkit for building [Actors](https://apify.com/actors) and scrapers in Python
- **[Input schema](https://docs.apify.com/platform/actors/development/input-schema)** - define and easily validate a schema for your actor's input
- **[Request queue](https://docs.apify.com/sdk/python/docs/concepts/storages#working-with-request-queues)** - queues into which you can put the URLs you want to scrape
- **[Dataset](https://docs.apify.com/sdk/python/docs/concepts/storages#working-with-datasets)** - store structured data where each object stored has the same attributes
-- **[Requests](https://requests.readthedocs.io/)** - an elegant and simple HTTP library for Python
+- **[HTTPX](https://www.python-httpx.org)** - a library for making asynchronous HTTP requests in Python
- **[Beautiful Soup](https://www.crummy.com/software/BeautifulSoup/bs4/doc/)** - a Python library for pulling data out of HTML and XML files

## How it works

-This code is a Python script that uses Requests and Beautiful Soup to scrape web pages and extract data from them. Here's a brief overview of how it works:
+This code is a Python script that uses HTTPX and Beautiful Soup to scrape web pages and extract data from them. Here's a brief overview of how it works:

- The script reads the input data from the Actor instance, which is expected to contain a `start_urls` key with a list of URLs to scrape and a `max_depth` key with the maximum depth of nested links to follow.
- The script enqueues the starting URLs in the default request queue and sets their depth to 0.
-- The script processes the requests in the queue one by one, fetching the URL using Requests and parsing it using BeautifulSoup.
+- The script processes the requests in the queue one by one, fetching the URL using HTTPX and parsing it using BeautifulSoup.
- If the depth of the current request is less than the maximum depth, the script looks for nested links in the page and enqueues their targets in the request queue with an incremented depth.
- The script extracts the desired data from the page (in this case, the page's URL and title) and pushes it to the default dataset using the `push_data` method of the Actor instance.
- The script catches any exceptions that occur during the scraping process and logs an error message using the `Actor.log.exception` method.
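
The list above is easiest to follow next to code. The sketch below is an illustration only, not the template itself: it keeps the same depth-limited crawl logic but swaps the Apify request queue and dataset for an in-memory `deque` and list (plus a `seen` set for de-duplication, and no error handling). The actual implementation is in the `src/main.py` diff further down.

```python
import asyncio
from collections import deque
from urllib.parse import urljoin

from bs4 import BeautifulSoup
from httpx import AsyncClient


async def crawl(start_urls, max_depth=1):
    # Start URLs enter the queue with depth 0
    queue = deque({'url': url, 'depth': 0} for url in start_urls)
    seen = set(start_urls)
    results = []

    async with AsyncClient() as client:
        while queue:
            request = queue.popleft()
            url, depth = request['url'], request['depth']

            # Fetch and parse the page
            response = await client.get(url)
            soup = BeautifulSoup(response.content, 'html.parser')

            # Below the max depth, enqueue the targets of nested links
            if depth < max_depth:
                for link in soup.find_all('a'):
                    href = link.get('href')
                    if not href:
                        continue
                    link_url = urljoin(url, href)
                    if link_url.startswith(('http://', 'https://')) and link_url not in seen:
                        seen.add(link_url)
                        queue.append({'url': link_url, 'depth': depth + 1})

            # "Push" the extracted data (here, just the page URL and title)
            results.append({'url': url, 'title': soup.title.string if soup.title else None})

    return results


if __name__ == '__main__':
    print(asyncio.run(crawl(['https://example.com'])))
```
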
@@ -26,11 +26,11 @@ This code is a Python script that uses Requests and Beautiful Soup to scrape web
## Resources

- [BeautifulSoup Scraper](https://apify.com/apify/beautifulsoup-scraper)
-- [Beautifulsoup Scraper tutorial](https://www.youtube.com/watch?v=1KqLLuIW6MA)
+- [Beautifulsoup Scraper tutorial](https://www.youtube.com/watch?v=1KqLLuIW6MA)
- [Python tutorials in Academy](https://docs.apify.com/academy/python)
- [Web scraping with Beautiful Soup and Requests](https://blog.apify.com/web-scraping-with-beautiful-soup/)
- [Beautiful Soup vs. Scrapy for web scraping](https://blog.apify.com/beautiful-soup-vs-scrapy-web-scraping/)
-- [Integration with Zapier](https://apify.com/integrations), Make, Google Drive, and others
+- [Integration with Make, GitHub, Zapier, Google Drive, and other apps](https://apify.com/integrations)
- [Video guide on getting scraped data using Apify API](https://www.youtube.com/watch?v=ViYYDHSBAKM)
- [Video introduction to Python SDK](https://www.youtube.com/watch?v=C8DmvJQS3jk)

4 changes: 2 additions & 2 deletions templates/python-beautifulsoup/requirements.txt
@@ -2,5 +2,5 @@
 # See https://pip.pypa.io/en/latest/reference/requirements-file-format/
 # for how to format them
 apify ~= 1.1.5
-beautifulsoup4 ~= 4.12.0
-requests ~= 2.31.0
+beautifulsoup4 ~= 4.12.2
+httpx ~= 0.25.0
24 changes: 14 additions & 10 deletions templates/python-beautifulsoup/src/main.py
@@ -1,7 +1,7 @@
 from urllib.parse import urljoin

-import requests
 from bs4 import BeautifulSoup
+from httpx import AsyncClient

 from apify import Actor

@@ -18,41 +18,45 @@ async def main():
             await Actor.exit()

         # Enqueue the starting URLs in the default request queue
-        default_queue = await Actor.open_request_queue()
+        rq = await Actor.open_request_queue()
         for start_url in start_urls:
             url = start_url.get('url')
             Actor.log.info(f'Enqueuing {url} ...')
-            await default_queue.add_request({'url': url, 'userData': {'depth': 0}})
+            await rq.add_request({'url': url, 'userData': {'depth': 0}})

         # Process the requests in the queue one by one
-        while request := await default_queue.fetch_next_request():
+        while request := await rq.fetch_next_request():
             url = request['url']
             depth = request['userData']['depth']
             Actor.log.info(f'Scraping {url} ...')

             try:
-                # Fetch the URL using `requests` and parse it using `BeautifulSoup`
-                response = requests.get(url)
+                # Fetch the URL using `httpx`
+                async with AsyncClient() as client:
+                    response = await client.get(url)
+
+                # Parse the response using `BeautifulSoup`
                 soup = BeautifulSoup(response.content, 'html.parser')

-                # If we haven't reached the max depth,
-                # look for nested links and enqueue their targets
+                # If we haven't reached the max depth, look for nested links and enqueue their targets
                 if depth < max_depth:
                     for link in soup.find_all('a'):
                         link_href = link.get('href')
                         link_url = urljoin(url, link_href)
                         if link_url.startswith(('http://', 'https://')):
                             Actor.log.info(f'Enqueuing {link_url} ...')
-                            await default_queue.add_request({
+                            await rq.add_request({
                                 'url': link_url,
                                 'userData': {'depth': depth + 1},
                             })

                 # Push the title of the page into the default dataset
                 title = soup.title.string if soup.title else None
                 await Actor.push_data({'url': url, 'title': title})

             except Exception:
                 Actor.log.exception(f'Cannot extract data from {url}.')

             finally:
                 # Mark the request as handled so it's not processed again
-                await default_queue.mark_request_as_handled(request)
+                await rq.mark_request_as_handled(request)
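
A possible refinement, not part of this commit: the new code opens a fresh `AsyncClient` for every request, which is simple and correct, but HTTPX also lets a single client be shared across the whole loop so that connections are reused. A rough sketch of that shape, with the input handling and link enqueueing omitted for brevity:

```python
from bs4 import BeautifulSoup
from httpx import AsyncClient

from apify import Actor


async def main():
    async with Actor:
        rq = await Actor.open_request_queue()

        # One client for the entire crawl, so HTTPX can reuse connections
        async with AsyncClient() as client:
            while request := await rq.fetch_next_request():
                url = request['url']
                try:
                    response = await client.get(url)
                    soup = BeautifulSoup(response.content, 'html.parser')
                    title = soup.title.string if soup.title else None
                    await Actor.push_data({'url': url, 'title': title})
                except Exception:
                    Actor.log.exception(f'Cannot extract data from {url}.')
                finally:
                    await rq.mark_request_as_handled(request)
```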
