Scrapy shell works fine, but scrapy crawl does not

vmjh9lq9 · posted 2023-08-05 in Shell

Today I stumbled upon a problem while using Scrapy (I have no prior experience with Scrapy):

import scrapy
from scrapy_splash import SplashRequest

class AtmSpider(scrapy.Spider):
    name = 'scrapebca'
    allowed_domains = ['www.bca.co.id']
    starts_url = ['https://www.bca.co.id/id/lokasi-bca']

    def parse(self, response):
        data = response.xpath('//div[@class="a-card shadow0 m-maps-location-container-wrapper"]')
        # terminal_ids = response.xpath('//p[@class="a-text a-text-small m-maps-location-container-wrapper-code"]')
        # terminal_names = response.xpath('//p[@class="a-text a-text-subtitle a-text-ellipsis-single m-maps-location-container-wrapper-title"]')
        # terminal_locations = response.xpath('//p[@class="a-text a-text-body a-text-ellipsis-address m-maps-location-container-wrapper-address"]')
        # services = response.xpath('//p[@class="a-text a-text-small m-maps-location-container-wrapper-code service-value"]')
        # longitudes = response.xpath('//div[@class="action-link maps-show-route"]/@data-long"]').extract()
        # latitudes = response.xpath('//div[@class="action-link maps-show-route"]/@data-lat"]').extract()        

        for item in data:
            terminal_id = item.xpath('.//p[@class="a-text a-text-small m-maps-location-container-wrapper-code"]/text()').getall()

            yield {
                'terminal_id': terminal_id
            }

The problem is that scrapy shell works fine:

  1. fetch('https://www.bca.co.id/id/lokasi-bca')
  2. data = response.xpath('//div[@class="a-card shadow0 m-maps-location-container-wrapper"]')
  3. data.xpath('.//p[@class="a-text a-text-small m-maps-location-container-wrapper-code"]/text()').getall() --> it returns results

But when I try to run it with the crawler:
scrapy crawl <appname>


it returns the following error:

c:\phyton373\lib\site-packages\OpenSSL\_util.py:6: UserWarning: You are using cryptography on a 32-bit Python on a 64-bit Windows Operating System. Cryptography will be significantly faster if you switch to using a 64-bit Python.
  from cryptography.hazmat.bindings.openssl.binding import Binding
2023-07-29 20:45:18 [scrapy.utils.log] INFO: Scrapy 2.9.0 started (bot: scrapebca)
2023-07-29 20:45:18 [scrapy.utils.log] INFO: Versions: lxml 4.9.3.0, libxml2 2.10.3, cssselect 1.2.0, parsel 1.8.1, w3lib 2.1.1, Twisted 22.10.0, Python 3.7.3 (v3.7.3:ef4ec6ed12, Mar 25 2019, 21:26:53) [MSC v.1916 32 bit (Intel)], pyOpenSSL 23.2.0 (OpenSSL 3.1.1 30 May 2023), cryptography 41.0.2, Platform Windows-10-10.0.19041-SP0
2023-07-29 20:45:18 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'scrapebca',
 'DUPEFILTER_CLASS': 'scrapy_splash.SplashAwareDupeFilter',
 'FEED_EXPORT_ENCODING': 'utf-8',
 'HTTPCACHE_STORAGE': 'scrapy_splash.SplashAwareFSCacheStorage',
 'NEWSPIDER_MODULE': 'scrapebca.spiders',
 'REQUEST_FINGERPRINTER_IMPLEMENTATION': '2.7',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['scrapebca.spiders'],
 'TWISTED_REACTOR': 'twisted.internet.asyncioreactor.AsyncioSelectorReactor'}
2023-07-29 20:45:18 [asyncio] DEBUG: Using selector: SelectSelector
2023-07-29 20:45:18 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2023-07-29 20:45:18 [scrapy.utils.log] DEBUG: Using asyncio event loop: asyncio.windows_events._WindowsSelectorEventLoop
2023-07-29 20:45:18 [scrapy.extensions.telnet] INFO: Telnet Password: 47c5b5514938a8f3
2023-07-29 20:45:18 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.logstats.LogStats']
2023-07-29 20:45:18 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy_splash.SplashCookiesMiddleware',
 'scrapy_splash.SplashMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2023-07-29 20:45:18 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy_splash.SplashDeduplicateArgsMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2023-07-29 20:45:18 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2023-07-29 20:45:18 [scrapy.core.engine] INFO: Spider opened
2023-07-29 20:45:18 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2023-07-29 20:45:18 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2023-07-29 20:45:18 [scrapy.core.engine] INFO: Closing spider (finished)
2023-07-29 20:45:18 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.003025,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2023, 7, 29, 13, 45, 18, 446634),
 'log_count/DEBUG': 3,
 'log_count/INFO': 10,
 'start_time': datetime.datetime(2023, 7, 29, 13, 45, 18, 443609)}
2023-07-29 20:45:18 [scrapy.core.engine] INFO: Spider closed (finished)


What step am I missing?


nzkunb0c #1

I don't see an error per se, but the spider isn't fetching any URLs. As @Alexandria pointed out, you misspelled the start_urls attribute in your spider. Scrapy's default start_requests() only reads an attribute called start_urls; a class attribute named starts_url is silently ignored, so the spider starts with an empty URL list and closes immediately. That is why the log shows 'Crawled 0 pages' and finishes without a traceback:

import scrapy
from scrapy_splash import SplashRequest

class AtmSpider(scrapy.Spider):
    name = 'scrapebca'
    allowed_domains = ['www.bca.co.id']
    starts_url = ['https://www.bca.co.id/id/lokasi-bca']
    ^^^^^^^^^^

If you rename starts_url to start_urls, it should work fine.
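
For reference, here is a minimal corrected version of your spider, unchanged apart from the rename (I also dropped the unused SplashRequest import):

import scrapy

class AtmSpider(scrapy.Spider):
    name = 'scrapebca'
    allowed_domains = ['www.bca.co.id']
    # Renamed from starts_url: Scrapy's default start_requests()
    # only reads an attribute called start_urls.
    start_urls = ['https://www.bca.co.id/id/lokasi-bca']

    def parse(self, response):
        # One container <div> per ATM/branch card on the page
        data = response.xpath('//div[@class="a-card shadow0 m-maps-location-container-wrapper"]')
        for item in data:
            terminal_id = item.xpath('.//p[@class="a-text a-text-small m-maps-location-container-wrapper-code"]/text()').getall()
            yield {'terminal_id': terminal_id}

Run scrapy crawl scrapebca again and the log should now show the page being fetched and items scraped.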
