To help resolve the problem, please answer the questions below carefully. Once the issue is resolved, please close it promptly.
A: GitHub version
A: Yes
A: No, I only tried one user
A: weiboid -> 1640337222
A:
A: "random_wait_pages": [1, 2], "random_wait_seconds": [70, 110]. Even with these settings, the account still gets blocked around the 200th weibo (about page 20).
It may be related to the target account; some types of weibo posts are rate limited more strictly. You can modify spider.py, changing range(1, page_num + 1) to range(20, page_num + 1), so the program starts fetching from page 20.
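The change suggested above can be sketched as follows. This is a simplified illustration, not the actual code in spider.py; the function name and parameters are hypothetical:

```python
def pages_to_fetch(page_num, start_page=1):
    """Return the list of page numbers the crawler will visit.

    The original loop uses range(1, page_num + 1), i.e. every page.
    Passing start_page=20 reproduces the suggested edit, skipping the
    first 19 pages that may trigger the block.
    """
    return list(range(start_page, page_num + 1))


# Fetch all 25 pages (original behavior):
print(pages_to_fetch(25))
# Start from page 20 (suggested workaround):
print(pages_to_fetch(25, start_page=20))
```

Note that range's upper bound is exclusive, which is why the original code adds 1 to page_num.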
Thanks for the answer, but even when starting from page 20, the crawl is still blocked at around page 40, so perhaps this account really is limited more strictly. My current workaround is to set "random_wait_pages": [1, 2], "random_wait_seconds": [120, 180], which lets me fetch without limit. For better throughput, the only option seems to be crawling through multiple proxy IPs in parallel.
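For reference, the working settings above correspond to this fragment of the crawler's config.json (a sketch showing only the two wait parameters discussed in this thread; the other required fields are omitted):

```json
{
  "random_wait_pages": [1, 2],
  "random_wait_seconds": [120, 180]
}
```

With these values the crawler sleeps for a random 120 to 180 seconds after every 1 to 2 pages, which is slow but avoids the block.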
I hit the same problem when crawling several weibo accounts; none of them can be crawled. For example, these target accounts: 2974325495; 1682207150