Calling Scrapy from a Python script

Tue, Sep 27, 2011

When you need to do a web scraping job in Python, an excellent choice is the Scrapy framework. Not only does it take care of most of the networking (HTTP, SSL, proxies, etc.), but it also facilitates the process of extracting data from the web by providing things such as nifty XPath selectors.
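For instance, here is a minimal sketch of a spider using such a selector, assuming the BaseSpider and HtmlXPathSelector APIs of the Scrapy 0.x line (TitleSpider, its start URL and its XPath expressions are just illustrative):

```python
from scrapy.spider import BaseSpider
from scrapy.selector import HtmlXPathSelector

class TitleSpider(BaseSpider):
    name = 'titles'
    start_urls = ['http://www.example.com']

    def parse(self, response):
        hxs = HtmlXPathSelector(response)
        # XPath expressions pick out exactly the nodes we care about.
        title = hxs.select('//title/text()').extract()
        links = hxs.select('//a/@href').extract()
        self.log('title=%s, %d links found' % (title, len(links)))
```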

Scrapy is built upon the Twisted networking engine. A limitation of its core component, the reactor, is that it cannot be restarted. This might cause us some trouble if we are trying to devise a mechanism to run Scrapy spiders independently from a Python script (and not from the Scrapy shell). Say, for example, we want to implement a Python function that receives some parameters, performs a search/web scraping on some sites and returns a list of scraped items. A naive solution such as the one sketched below will not work, since each call to the function would need the Twisted reactor restarted, and this is unfortunately not possible.
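To illustrate, a minimal sketch of that naive approach, assuming the Scrapy 0.13-era API (CrawlerProcess from scrapy.crawler and the global scrapy.conf.settings):

```python
from scrapy.conf import settings
from scrapy.crawler import CrawlerProcess

def run_spider(spider):
    # Naive approach: run the spider directly in the calling process.
    crawler = CrawlerProcess(settings)
    crawler.install()
    crawler.configure()
    crawler.crawl(spider)
    # start() runs the Twisted reactor and blocks until the crawl
    # finishes; once the reactor stops it cannot be started again,
    # so a second call to run_spider() fails.
    crawler.start()
```

The first call works, but any subsequent call dies inside Twisted, because the stopped reactor cannot be brought back up.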

A workaround for this is to run Scrapy in its own process. After doing a search, I could not get any existing solution to work with the latest Scrapy. However, one of them used multiprocessing and came pretty close! Here is an updated version for Scrapy 0.13:
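The following is a sketch of such a multiprocessing-based worker, assuming Scrapy 0.13's CrawlerProcess, the global scrapy.conf.settings, and the pydispatch-based signals module of that era:

```python
import multiprocessing

from scrapy import project, signals
from scrapy.conf import settings
from scrapy.crawler import CrawlerProcess
from scrapy.xlib.pydispatch import dispatcher

class CrawlerWorker(multiprocessing.Process):
    """Run a spider in a child process, so every crawl gets a fresh
    Twisted reactor and the parent is free to start another crawl."""

    def __init__(self, spider, result_queue):
        multiprocessing.Process.__init__(self)
        self.result_queue = result_queue

        self.crawler = CrawlerProcess(settings)
        if not hasattr(project, 'crawler'):
            self.crawler.install()
        self.crawler.configure()

        self.items = []
        self.spider = spider
        # Collect every item the spider yields.
        dispatcher.connect(self._item_passed, signals.item_passed)

    def _item_passed(self, item):
        self.items.append(item)

    def run(self):
        self.crawler.crawl(self.spider)
        self.crawler.start()
        self.crawler.stop()
        # Hand the collected items back to the parent process.
        self.result_queue.put(self.items)
```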

One way to invoke this, say inside a function, would be:
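For example (a sketch; run_search is a hypothetical wrapper function):

```python
from multiprocessing import Queue

def run_search(*myArgs):
    result_queue = Queue()
    crawler = CrawlerWorker(MySpider(*myArgs), result_queue)
    crawler.start()
    # get() blocks until the worker process has finished the crawl
    # and put the scraped items on the queue.
    return result_queue.get()
```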

where MySpider is, of course, the class of the spider you want to run, and myArgs are the arguments you wish to invoke it with.
