Scrapy is not downloading files with FilesPipeline

j9per5c4 · published 2022-11-09 · category: Other

I don't know what is going on with my code. I wrote the spider and load the item as described in https://docs.scrapy.org/en/latest/topics/media-pipeline.html, but Scrapy does not download any files:

2022-07-19 01:35:09 [scrapy.core.engine] INFO: Spider opened
2022-07-19 01:35:09 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2022-07-19 01:35:09 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2022-07-19 01:35:09 [scrapy.downloadermiddlewares.redirect] DEBUG: Redirecting (301) to <GET https://www.tp-link.com/br/support/download/> from <GET https://www.tp-link.com/br/support/download>
2022-07-19 01:35:09 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/> (referer: None)
2022-07-19 01:35:10 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/archer-c7/> (referer: https://www.tp-link.com/br/support/download/)
2022-07-19 01:35:10 [scrapy.core.engine] DEBUG: Crawled (200) <GET https://www.tp-link.com/br/support/download/archer-c7/v4/> (referer: https://www.tp-link.com/br/support/download/archer-c7/)
2022-07-19 01:35:10 [scrapy.core.scraper] DEBUG: Scraped from <200 https://www.tp-link.com/br/support/download/archer-c7/v4/>
{'file_urls': ['https://static.tp-link.com/2019/201912/20191206/Archer%20C7(US)_V4_190411.zip',
               'https://static.tp-link.com/2018/201804/20180428/Archer%20C7(US)_V4_180425.zip',
               'https://static.tp-link.com/2017/201712/20171221/Archer%20C7(US)_V4_171101.zip'],
 'model': 'Archer C7 ',
 'vendor': 'TP-Link',
 'version': 'V4'}
2022-07-19 01:35:10 [scrapy.core.engine] INFO: Closing spider (finished)
2022-07-19 01:35:10 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 1082,
 'downloader/request_count': 4,
 'downloader/request_method_count/GET': 4,
 'downloader/response_bytes': 77004,
 'downloader/response_count': 4,
 'downloader/response_status_count/200': 3,
 'downloader/response_status_count/301': 1,
 'elapsed_time_seconds': 1.45197,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2022, 7, 19, 1, 35, 10, 957079),
 'httpcompression/response_bytes': 667235,
 'httpcompression/response_count': 3,
 'item_scraped_count': 1,
 'log_count/DEBUG': 6,
 'log_count/INFO': 10,
 'memusage/max': 93999104,
 'memusage/startup': 93999104,
 'request_depth_max': 2,
 'response_received_count': 3,
 'scheduler/dequeued': 4,
 'scheduler/dequeued/memory': 4,
 'scheduler/enqueued': 4,
 'scheduler/enqueued/memory': 4,
 'start_time': datetime.datetime(2022, 7, 19, 1, 35, 9, 505109)}
2022-07-19 01:35:10 [scrapy.core.engine] INFO: Spider closed (finished)

In my item class I defined some custom fields; `file_urls` and `files` are declared exactly as in the Scrapy docs:

class FirmwareItem(scrapy.Item):
    # define the fields for your item here like:
    vendor = scrapy.Field()
    model = scrapy.Field()
    version = scrapy.Field()
    file_urls = scrapy.Field()
    files = scrapy.Field()

Here is my parse method, where I collect the zip-file links from the page and load them into `file_urls` as a list:

def parseVersion(self, response):
    firmware = FirmwareItem()
    firmware['vendor'] = "TP-Link"
    firmware['model'] = response.xpath('//em[@id="model-version-name"]/text()').get()
    firmware['version'] = response.xpath('//em[@id="model-version-name"]/span/text()').get()
    urls = response.xpath('//div[@data-id="Firmware"]//a[@class="download-resource-btn ga-click"]/@href').getall()

    firmware['file_urls'] = [url.replace(' ', "%20") for url in urls]
    yield firmware
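The space-encoding step in the last line can be checked on its own. As a hedged aside (the URL below is copied from the log output above), `urllib.parse.quote` gives the same result here and would also escape other unsafe characters:

```python
from urllib.parse import quote

urls = ["https://static.tp-link.com/2019/201912/20191206/Archer C7(US)_V4_190411.zip"]

# The spider's approach: only spaces are escaped.
encoded_simple = [u.replace(' ', '%20') for u in urls]

# quote() escapes spaces too; keep ':', '/', '(' and ')' literal so the
# result matches the TP-Link links seen in the log.
encoded_quote = [quote(u, safe=':/()') for u in urls]

assert encoded_simple == encoded_quote
print(encoded_simple[0])
# → https://static.tp-link.com/2019/201912/20191206/Archer%20C7(US)_V4_190411.zip
```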

In settings.py I enabled the item pipeline and set the download directory:

ITEM_PIPELINES = {
    'router.pipelines.RouterPipeline': 300,
}
FILES_STORE = "downloaded"

The pipeline is left at its default:

from itemadapter import ItemAdapter

class RouterPipeline:
    def process_item(self, item, spider):
        return item
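A pass-through `process_item` like this is fine and is not the cause of the problem. If the pipeline is later extended, a common pattern is to validate items before they reach the files pipeline — a hedged sketch (the `DropItem` class below is a stand-in for `scrapy.exceptions.DropItem` so the snippet runs without Scrapy installed):

```python
class DropItem(Exception):
    """Stand-in for scrapy.exceptions.DropItem, so this sketch runs without Scrapy."""

class RouterPipeline:
    def process_item(self, item, spider):
        # Reject items that carry no download URLs; pass the rest through unchanged.
        if not item.get("file_urls"):
            raise DropItem("item has no file_urls")
        return item

pipeline = RouterPipeline()
ok_item = {"file_urls": ["https://example.com/a.zip"]}
assert pipeline.process_item(ok_item, None) is ok_item
```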

xv8emn3q1#

Almost there... make sure you have all of these set in your settings.py:

ITEM_PIPELINES = {
    'scrapy.pipelines.files.FilesPipeline': 1,
    'router.pipelines.RouterPipeline': 300,
}

FILES_STORE = '/path/to/save/directory'  # set your preferred download directory

FILES_URLS_FIELD = 'file_urls'   # default value; must match the item field name
FILES_RESULT_FIELD = 'files'     # default value
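The missing piece in the question's settings is `scrapy.pipelines.files.FilesPipeline` itself: without it, the item is scraped but nothing fetches the URLs. The two field settings matter because FilesPipeline reads URLs from one item field and writes results into another. A simplified, pure-Python sketch of that mapping (not the real implementation; the helper names are made up for illustration):

```python
def get_media_urls(item, urls_field="file_urls"):
    """Mimic how FilesPipeline finds the URLs to download (FILES_URLS_FIELD)."""
    return list(item.get(urls_field, []))

def item_completed(item, results, result_field="files"):
    """Mimic how FilesPipeline stores download results (FILES_RESULT_FIELD)."""
    item[result_field] = [info for ok, info in results if ok]
    return item

item = {"vendor": "TP-Link",
        "file_urls": ["https://static.tp-link.com/2019/201912/20191206/Archer%20C7(US)_V4_190411.zip"]}

urls = get_media_urls(item)
# Pretend the download succeeded; Scrapy would fill in url/path/checksum itself.
results = [(True, {"url": urls[0], "path": "full/0a1b2c.zip", "checksum": "0a1b2c"})]
item = item_completed(item, results)

assert item["files"][0]["path"] == "full/0a1b2c.zip"
```

Note that if `FILES_URLS_FIELD` pointed at a name the item never defines (such as the misspelled `files_urls`), the lookup would return an empty list and nothing would be downloaded, with no error — the same silent behavior seen in the log above.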
