Thank you for using the scraper.
After writing the README I made a lot of changes and updated the overall structure of the scraper.
linksScraper first scrapes Sales Navigator and writes profile links to the database.
profileScraper then takes the unscraped links from the database and scrapes them.
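For illustration only, a minimal sketch of that two-stage flow could look like the following. This assumes Puppeteer and better-sqlite3; the selectors, table name, and URL handling are placeholders, not taken from this repository.

```ts
import puppeteer from "puppeteer";
import Database from "better-sqlite3";

const db = new Database("scraper.db");
db.exec(`CREATE TABLE IF NOT EXISTS profiles (
  url TEXT PRIMARY KEY,
  scraped INTEGER DEFAULT 0,
  data TEXT
)`);

// Stage 1: linksScraper — collect profile links from Sales Navigator search pages.
async function linksScraper(searchUrl: string, pages: number) {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  const insert = db.prepare("INSERT OR IGNORE INTO profiles (url) VALUES (?)");

  for (let i = 1; i <= pages; i++) {
    await page.goto(`${searchUrl}&page=${i}`, { waitUntil: "networkidle2" });
    // Hypothetical selector for lead links; adjust to the real page structure.
    const links = await page.$$eval("a[href*='/sales/lead/']", els =>
      els.map(el => (el as HTMLAnchorElement).href)
    );
    for (const link of links) insert.run(link);
  }
  await browser.close();
}

// Stage 2: profileScraper — pick up links not yet scraped and visit each one.
async function profileScraper() {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();
  const pending = db.prepare("SELECT url FROM profiles WHERE scraped = 0").all() as { url: string }[];
  const update = db.prepare("UPDATE profiles SET scraped = 1, data = ? WHERE url = ?");

  for (const { url } of pending) {
    await page.goto(url, { waitUntil: "networkidle2" });
    const name = await page.$eval("h1", el => el.textContent?.trim() ?? "");
    update.run(JSON.stringify({ name }), url);
  }
  await browser.close();
}
```

Because the two stages only share the database, linksScraper and profileScraper can be run separately and re-run after failures without losing progress.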
I don't have a Sales Navigator account right now to test the scraper.
Honestly, I failed at this project because I was not able to bypass LinkedIn's detection; after 200 or 300 scrapes my account would get banned. That was two years ago. I now have better ideas for improving the scraper, and I will update the README as well as the scraper as soon as I find time.
Thanks for your response. When you try to scrape leads page by page from search results, LinkedIn restricts the account after 100-200 pages. To work around this, I realized we need to save the search and loop over the saved-search URL instead of the raw search URL with query params.
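Something like the sketch below is what I mean, assuming Puppeteer; the savedSearchUrl value and the page parameter are placeholders, since the real URL comes from saving the search in Sales Navigator first.

```ts
import puppeteer from "puppeteer";

// Hypothetical saved-search URL; copy the actual one from Sales Navigator after saving the search.
const savedSearchUrl = "https://www.linkedin.com/sales/search/people?savedSearchId=123456";

async function loopSavedSearch(maxPages: number) {
  const browser = await puppeteer.launch({ headless: false });
  const page = await browser.newPage();

  for (let i = 1; i <= maxPages; i++) {
    await page.goto(`${savedSearchUrl}&page=${i}`, { waitUntil: "networkidle2" });
    // ...extract and store profile links here, as in linksScraper...

    // Random 5-15 s pause between pages to keep the request rate low.
    const delay = 5000 + Math.floor(Math.random() * 10000);
    await new Promise(resolve => setTimeout(resolve, delay));
  }
  await browser.close();
}
```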
I don't know a workaround for scraping the profile URLs themselves; I'm sure there is a limit on those too.
Do you know a way to find emails for the profiles extracted from search results? There are services, but they are too expensive, because I need this for 1,000,000 profiles.
Where is dist/index.js?