Scrapy: add a health check before starting a spider

Asked by bprjcwpo on 2021-06-10 · Cassandra

I don't want to start a spider job if the APIs of its external dependencies (Cassandra, MySQL, etc.) are unreachable:

import json
import logging

from cassandra.cluster import Cluster


class HealthCheck:
    @staticmethod
    def is_healthy():
        # configHelper is our own configuration loader.
        config = json.loads(configHelper.get_data())
        cassandra_config = config['cassandra']
        cluster = Cluster(cassandra_config['hosts'],
                          port=cassandra_config['port'])
        try:
            session = cluster.connect(cassandra_config['keyspace'])
            # CQL has no bare 'SELECT 1'; query system.local instead.
            session.execute('SELECT release_version FROM system.local')
        except Exception as e:
            logging.error(e)
            return False  # the check must fail when the query fails
        return True

I could call is_healthy in each spider's __init__ method, but then I would have to repeat that in every spider. Does anyone have a better suggestion for where to call is_healthy?
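For reference, the __init__ approach can at least be centralized in a shared base class so the check is not repeated per spider. A minimal sketch; the myproject.healthcheck import path is an assumption about where HealthCheck lives:

import scrapy

from myproject.healthcheck import HealthCheck  # assumed module path


class HealthCheckedSpider(scrapy.Spider):
    """Base class: every spider inheriting from it runs the check once."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        if not HealthCheck.is_healthy():
            # Raising here aborts the crawl before any request is scheduled.
            raise RuntimeError('external dependencies are unreachable')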

Answer (wqnecbli):

This is not an easy task; see, for example, this issue. The problem is that you cannot close a spider right after it has been opened, because that can happen before the engine is started (see here). There does seem to be a solution, however, although a somewhat hacky one. Here is a working prototype, in the form of a rough extension:

import logging

from scrapy import signals
from twisted.internet import task

logger = logging.getLogger(__name__)

class HealthcheckExtension(object):
    """Close spiders if healthcheck fails"""

    def __init__(self, crawler):
        self.crawler = crawler
        crawler.signals.connect(self.engine_started, signal=signals.engine_started)
        crawler.signals.connect(self.engine_stopped, signal=signals.engine_stopped)

    @classmethod
    def from_crawler(cls, crawler):
        return cls(crawler)

    def engine_started(self):
        healthy = self.perform_healthcheck()
        if not healthy:
            logger.info('Healthcheck failed, closing all spiders')
            # The engine may not be able to close spiders yet, so keep
            # retrying on a near-zero interval until it succeeds.
            self.task = task.LoopingCall(self.close_spiders)
            self.task.start(0.0001, now=True)

    def engine_stopped(self):
        # Stop the looping task (if it was started) once the engine is down.
        task = getattr(self, 'task', False)
        if task and task.running:
            task.stop()

    def perform_healthcheck(self):
        # perform the health check here and return True if passes
        return False  # simulate failed healthcheck...

    def close_spiders(self):
        if self.crawler.engine.running:
            for spider in self.crawler.engine.open_spiders:
                self.crawler.engine.close_spider(spider, 'healthcheck_failed')

The extension performs the health check in the engine_started signal handler. If the check fails, it creates a looping task (with the shortest possible interval) that tries to close the spiders as soon as possible, i.e. right after the engine has started.
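To tie this back to the question, perform_healthcheck can simply delegate to the HealthCheck class from above. A sketch; the demo.healthcheck import path and the subclass name are assumptions:

from demo.healthcheck import HealthCheck  # assumed module path


class CassandraHealthcheckExtension(HealthcheckExtension):
    def perform_healthcheck(self):
        # Passes only when Cassandra (and any other dependency) responds.
        return HealthCheck.is_healthy()

If you use such a subclass, point the EXTENSIONS setting at it instead of the stub.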
Enable the extension in settings.py:

EXTENSIONS = {
    'demo.extensions.HealthcheckExtension': 100
}
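(The value 100 is just the extension's order; since Scrapy extensions usually don't depend on each other, the exact number rarely matters here.)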

Run any spider. It closes immediately, with the appropriate finish_reason:

2020-02-29 17:17:43 [scrapy.utils.log] INFO: Scrapy 1.8.0 started (bot: demo)
2020-02-29 17:17:43 [scrapy.utils.log] INFO: Versions: lxml 4.5.0.0, libxml2 2.9.10, cssselect 1.1.0, parsel 1.5.2, w3lib 1.21.0, Twisted 19.10.0, Python 3.6.9 (default, Nov  7 2019, 10:44:02) - [GCC 8.3.0], pyOpenSSL 19.1.0 (OpenSSL 1.1.1d  10 Sep 2019), cryptography 2.8, Platform Linux-5.3.0-40-generic-x86_64-with-Ubuntu-18.04-bionic
2020-02-29 17:17:43 [scrapy.crawler] INFO: Overridden settings: {'BOT_NAME': 'demo', 'NEWSPIDER_MODULE': 'demo.spiders', 'ROBOTSTXT_OBEY': True, 'SPIDER_MODULES': ['demo.spiders']}
2020-02-29 17:17:43 [scrapy.extensions.telnet] INFO: Telnet Password: 8253cb10ff171340
2020-02-29 17:17:43 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats',
 'scrapy.extensions.telnet.TelnetConsole',
 'scrapy.extensions.memusage.MemoryUsage',
 'scrapy.extensions.logstats.LogStats',
 'demo.extensions.HealthcheckExtension']
2020-02-29 17:17:43 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-02-29 17:17:43 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-02-29 17:17:43 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-02-29 17:17:43 [scrapy.core.engine] INFO: Spider opened
2020-02-29 17:17:43 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2020-02-29 17:17:43 [scrapy.extensions.telnet] INFO: Telnet console listening on 127.0.0.1:6023
2020-02-29 17:17:43 [demo.extensions] INFO: Healthcheck failed, closing all spiders
2020-02-29 17:17:43 [scrapy.core.engine] INFO: Closing spider (healthcheck_failed)
2020-02-29 17:17:43 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'elapsed_time_seconds': 0.005618,
 'finish_reason': 'healthcheck_failed',
 'finish_time': datetime.datetime(2020, 2, 29, 16, 17, 43, 766734),
 'log_count/INFO': 11,
 'memusage/max': 52596736,
 'memusage/startup': 52596736,
 'scheduler/enqueued': 1,
 'scheduler/enqueued/memory': 1,
 'start_time': datetime.datetime(2020, 2, 29, 16, 17, 43, 761116)}
2020-02-29 17:17:43 [scrapy.core.engine] INFO: Spider closed (healthcheck_failed)
