Nov 21, 2016

Crawling FTP server with Scrapy

Rostyslav Stekh

I won’t describe how to install the library or cover its basics here; all of that is described in detail in the official documentation.

At first glance, it is a trivial task. Scrapy can work with FTP and handle files.

A simple example:

import scrapy
from scrapy.http import Request


class FtpSpider(scrapy.Spider):
    name = "mozilla"
    allowed_domains = ["ftp.mozilla.org"]
    handle_httpstatus_list = [404]

    def start_requests(self):
        yield Request('ftp://ftp.mozilla.org/pub/firefox/releases/9.0b4/contrib/solaris_pkgadd/README.txt',
                      meta={'ftp_user': '', 'ftp_password': ''}) 

    def parse(self, response):
        print(response.body)

As you can see, we simply use an ftp:// link and add the authorization data to the request meta. Scrapy understands that it is dealing with an FTP server and uses FTPDownloadHandler, which is able to connect and download files. The difficulty is that Scrapy can download a file by a direct link to it, but it can’t download the list of files in a directory or walk the directory tree.

In my case it was an FTP server with a list of files, and I needed to get the list of links and deal with each link separately. The default FTP handler in Scrapy can’t work with file listings.

After googling through a couple of articles and libraries, I came across ftptree. After analyzing its code, it became clear that it replaces the default FTP handler with its own one, which can download a list of files but cannot handle the files themselves:

import json
from twisted.protocols.ftp import FTPFileListProtocol
from scrapy.http import Response
from scrapy.core.downloader.handlers.ftp import FTPDownloadHandler


class FtpListingHandler(FTPDownloadHandler):
    def gotClient(self, client, request, filepath):
        self.client = client
        # list the directory contents instead of downloading a file
        protocol = FTPFileListProtocol()
        return client.list(filepath, protocol).addCallbacks(
            callback=self._build_response,
            callbackArgs=(request, protocol),
            errback=self._failed,
            errbackArgs=(request,))

    def _build_response(self, result, request, protocol):
        self.result = result
        # return the parsed listing as a JSON response body
        body = json.dumps(protocol.files)
        return Response(url=request.url, status=200, body=body)

To replace the default Scrapy handler, you need to register your own handler in the project settings:

DOWNLOAD_HANDLERS = {'ftp': '.FtpListingHandler'}
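
The value here should be the full import path to the handler class. A minimal sketch of what this could look like, assuming the handler is defined in a hypothetical myproject/handlers.py module:

# settings.py
# the dotted path below is an assumption; point it to wherever
# FtpListingHandler actually lives in your project
DOWNLOAD_HANDLERS = {
    'ftp': 'myproject.handlers.FtpListingHandler',
}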

While writing the spider, we decided to combine the two approaches as follows: when the link pointed not to a file but to the server itself, the ftptree-style code was used to retrieve and return the list of links to files; that list was then passed on, and each link from it was handled by Scrapy’s default handler.

Everything worked as it should: all links to files were retrieved, each file was handled, and the articles were extracted. But on every run all the files were downloaded and processed again, even those that had already been handled, so we had to do something about that. Metadata is returned with each file in the listing, for example the owner, the modification date, and so on. We decided to detect new files by their modification date: while handling the listing, I saved the most recent file date and used it to filter the files on the next run, so that only the newest files were processed. Here is an example of how the list of links to all files was received and filtered:

import json
import os

import scrapy
from dateutil import parser


class FtpMetaRequest(scrapy.http.Request):
    # add user with password to ftp request meta
    user_meta = {'ftp_user': 'username', 'ftp_password': ''}

    def __init__(self, *args, **kwargs):
        super(FtpMetaRequest, self).__init__(*args, **kwargs)
        self.meta.update(self.user_meta)


class FileFtpRequest(FtpMetaRequest):
    pass


class ListFtpRequest(FtpMetaRequest):
    pass


class MedisumSpider(scrapy.Spider):
    name = "articlespider"

    def start_requests(self):
        # start request to get the listing of all files
        yield ListFtpRequest("ftp:///")

    def parse(self, response):
        # the response body is the JSON listing built by our handler
        files = json.loads(response.body)

        # filter out files already handled on previous runs
        date = None
        if os.path.exists("article_max_date.txt"):
            with open("article_max_date.txt", "r") as infile:
                date = infile.read()
        if date:
            scrp_time = parser.parse(date)
            files = [fl for fl in files
                     if parser.parse(fl['date']) >= scrp_time]
        if not files:
            return

        # remember the most recent modification date for the next run
        date_max = max(parser.parse(fl['date']) for fl in files)
        with open("article_max_date.txt", "w") as outfile:
            outfile.write(date_max.isoformat())

        # request each file separately
        for f in files:
            path = os.path.join(response.url, f['filename'])
            yield FileFtpRequest(path, callback=self.parse_item)

    def parse_item(self, response):
        # do some actions with the downloaded file
        pass

Let's analyze the code:

  • FtpMetaRequest - adds the FTP user and password to the meta of every request to the server
  • FileFtpRequest, ListFtpRequest - with the help of these classes our FtpListingHandler detects when it needs to fetch a single file and when it needs to fetch a listing. The same detection is possible by adding your own flag to request.meta, but we prefer separate classes (a rough sketch of that alternative follows this list).
  • MedisumSpider - the spider itself
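
For comparison, here is a rough sketch of the meta-flag alternative mentioned above. The 'ftp_listing' key and the ftp.example.com address are made up for this example and are not part of Scrapy:

# in the spider: mark listing requests with a custom flag in meta
yield scrapy.Request(
    "ftp://ftp.example.com/",
    meta={'ftp_user': 'username', 'ftp_password': '',
          'ftp_listing': True})

# in the handler: dispatch on that flag instead of the request class
if request.meta.get('ftp_listing'):
    ...  # build the directory listing
else:
    ...  # fall back to the default single-file behaviour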

And here is the code of the handler itself:

import json

from twisted.protocols.ftp import FTPFileListProtocol
from scrapy.http import Response
from scrapy.core.downloader.handlers.ftp import FTPDownloadHandler


class FtpListingHandler(FTPDownloadHandler):
    # get a files listing or a single file

    def gotClient(self, client, request, filepath):
        # check which request class asked for this download
        # (compared by name to avoid importing the request classes here)
        if request.__class__.__name__ == 'FileFtpRequest':
            # a single file: fall back to the default behaviour
            return super(FtpListingHandler, self).gotClient(
                client, request, filepath)

        # otherwise list the directory contents
        protocol = FTPFileListProtocol()
        return client.list(filepath, protocol).addCallbacks(
            callback=self._build_response,
            callbackArgs=(request, protocol),
            errback=self._failed,
            errbackArgs=(request,))

    def _build_response(self, result, request, protocol):
        # check which request class asked for this download
        if request.__class__.__name__ == 'FileFtpRequest':
            # a single file: build the response as the base handler does
            return super(FtpListingHandler, self)._build_response(
                result, request, protocol)

        # otherwise serialize the listing to JSON
        self.result = result
        body = json.dumps(protocol.files)
        return Response(url=request.url, status=200, body=body)

Let’s analyze the handler code. As we can see, FtpListingHandler is inherited from the default Scrapy FTP handler and overrides two methods: gotClient and _build_response. When a FileFtpRequest is received, it falls back to the base handler behavior to process a single file; in other cases it uses the custom methods to work with the list of files.
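
For reference, each element of protocol.files that the handler serializes is a dict produced by Twisted’s FTPFileListProtocol. One entry looks roughly like this (the file name below is made up, and the exact date format depends on the server’s LIST output):

{
    'filetype': '-',
    'perms': 'rw-r--r--',
    'nlinks': 1,
    'owner': 'ftp',
    'group': 'ftp',
    'size': 1024,
    'date': 'Nov 21 2016',
    'filename': 'article_001.xml',
    'linktarget': None,
}

The spider above only relies on the 'date' and 'filename' keys.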

This example shows that we can add to and change the logic of the handlers available in Scrapy in whatever way we need, while the code remains understandable, concise and easy to maintain or extend further if necessary. That said, Scrapy has more than enough standard solutions, which cover 99% of the needs that come up when writing a spider.
