Scrapy: APScheduler + Scrapy + asyncio fails to run the first job properly

iqxoj9l9  asked on 2022-12-18

Versions: Python 3.7, Scrapy 2.1.0, APScheduler 3.6.1
I created a simple spider for testing:

# -*- coding: utf-8 -*-
import scrapy

class TestSpider(scrapy.Spider):
    name = 'test'
    start_urls = ['https://stackoverflow.com//']
    custom_settings = {
        'EXTENSIONS': {
            'scrapy.extensions.logstats.LogStats': None,
        },
        'TELNETCONSOLE_ENABLED': False,
        'LOG_LEVEL': 'INFO'
    }

    def parse(self, response):
        self.logger.info('parse--------------------------')

The script used to run it:

from datetime import datetime
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from scrapy.utils.log import configure_logging
from scrapy.utils.reactor import install_reactor
install_reactor('twisted.internet.asyncioreactor.AsyncioSelectorReactor')
from apscheduler.schedulers.twisted import TwistedScheduler
from twisted.internet import reactor
configure_logging()
scheduler = TwistedScheduler(reactor=reactor)
process = CrawlerProcess(get_project_settings())
scheduler.add_job(process.crawl, 'interval', args=['test'], minutes=1, next_run_time=datetime.now())
scheduler.start()
reactor.run()

I want the spider to run immediately and then once every minute. The spider is opened right away, but it then seems to sit idle until the next scheduled run:

2020-05-25 14:34:33 [scrapy.utils.log] INFO: Scrapy 2.1.0 started (bot: testspider)
2020-05-25 14:34:33 [scrapy.utils.log] INFO: Versions: lxml 4.3.4.0, libxml2 2.9.5, cssselect 1.0.3, parsel 1.5.1, w3lib 1.20.0, Twisted 20.3.0, Python 3.7.3 (default, Apr 24 2019, 15:29:51) [MSC v.1915 64 bit (AMD64)], pyOpenSSL 19.0.0 (OpenSSL 1.1.1c  28 May 2019), cryptography 2.7, Platform Windows-7-6.1.7601-SP1
2020-05-25 14:34:33 [scrapy.utils.log] DEBUG: Using reactor: twisted.internet.asyncioreactor.AsyncioSelectorReactor
2020-05-25 14:34:33 [apscheduler.scheduler] INFO: Adding job tentatively -- it will be properly scheduled when the scheduler starts
2020-05-25 14:34:33 [apscheduler.scheduler] INFO: Added job "CrawlerRunner.crawl" to job store "default"
2020-05-25 14:34:33 [apscheduler.scheduler] INFO: Scheduler started
2020-05-25 14:34:33 [apscheduler.scheduler] DEBUG: Looking for jobs to run
2020-05-25 14:34:33 [apscheduler.scheduler] DEBUG: Next wakeup is due at 2020-05-25 14:35:33.270223+08:00 (in 59.966950 seconds)
2020-05-25 14:34:33 [apscheduler.executors.default] INFO: Running job "CrawlerRunner.crawl (trigger: interval[0:01:00], next run at: 2020-05-25 14:35:33 CST)" (scheduled at 2020-05-25 14:34:33.270223+08:00)
2020-05-25 14:34:33 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'testspider',
 'LOG_LEVEL': 'INFO',
 'NEWSPIDER_MODULE': 'testspider.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['testspider.spiders'],
 'TELNETCONSOLE_ENABLED': False}
2020-05-25 14:34:33 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats']
2020-05-25 14:34:33 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-05-25 14:34:33 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-05-25 14:34:33 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-05-25 14:34:33 [scrapy.core.engine] INFO: Spider opened
2020-05-25 14:34:33 [apscheduler.executors.default] INFO: Job "CrawlerRunner.crawl (trigger: interval[0:01:00], next run at: 2020-05-25 14:35:33 CST)" executed successfully
2020-05-25 14:35:33 [apscheduler.executors.default] INFO: Running job "CrawlerRunner.crawl (trigger: interval[0:01:00], next run at: 2020-05-25 14:36:33 CST)" (scheduled at 2020-05-25 14:35:33.270223+08:00)
2020-05-25 14:35:33 [scrapy.crawler] INFO: Overridden settings:
{'BOT_NAME': 'testspider',
 'LOG_LEVEL': 'INFO',
 'NEWSPIDER_MODULE': 'testspider.spiders',
 'ROBOTSTXT_OBEY': True,
 'SPIDER_MODULES': ['testspider.spiders'],
 'TELNETCONSOLE_ENABLED': False}
2020-05-25 14:35:33 [scrapy.middleware] INFO: Enabled extensions:
['scrapy.extensions.corestats.CoreStats']
2020-05-25 14:35:33 [scrapy.middleware] INFO: Enabled downloader middlewares:
['scrapy.downloadermiddlewares.robotstxt.RobotsTxtMiddleware',
 'scrapy.downloadermiddlewares.httpauth.HttpAuthMiddleware',
 'scrapy.downloadermiddlewares.downloadtimeout.DownloadTimeoutMiddleware',
 'scrapy.downloadermiddlewares.defaultheaders.DefaultHeadersMiddleware',
 'scrapy.downloadermiddlewares.useragent.UserAgentMiddleware',
 'scrapy.downloadermiddlewares.retry.RetryMiddleware',
 'scrapy.downloadermiddlewares.redirect.MetaRefreshMiddleware',
 'scrapy.downloadermiddlewares.httpcompression.HttpCompressionMiddleware',
 'scrapy.downloadermiddlewares.redirect.RedirectMiddleware',
 'scrapy.downloadermiddlewares.cookies.CookiesMiddleware',
 'scrapy.downloadermiddlewares.httpproxy.HttpProxyMiddleware',
 'scrapy.downloadermiddlewares.stats.DownloaderStats']
2020-05-25 14:35:33 [scrapy.middleware] INFO: Enabled spider middlewares:
['scrapy.spidermiddlewares.httperror.HttpErrorMiddleware',
 'scrapy.spidermiddlewares.offsite.OffsiteMiddleware',
 'scrapy.spidermiddlewares.referer.RefererMiddleware',
 'scrapy.spidermiddlewares.urllength.UrlLengthMiddleware',
 'scrapy.spidermiddlewares.depth.DepthMiddleware']
2020-05-25 14:35:33 [scrapy.middleware] INFO: Enabled item pipelines:
[]
2020-05-25 14:35:33 [scrapy.core.engine] INFO: Spider opened
2020-05-25 14:35:33 [apscheduler.executors.default] INFO: Job "CrawlerRunner.crawl (trigger: interval[0:01:00], next run at: 2020-05-25 14:36:33 CST)" executed successfully
2020-05-25 14:35:34 [test] INFO: parse--------------------------
2020-05-25 14:35:34 [scrapy.core.engine] INFO: Closing spider (finished)
2020-05-25 14:35:34 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 764,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 26324,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/301': 1,
 'elapsed_time_seconds': 61.426001,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 5, 25, 6, 35, 34, 905538),
 'log_count/INFO': 17,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2020, 5, 25, 6, 34, 33, 479537)}
2020-05-25 14:35:34 [scrapy.core.engine] INFO: Spider closed (finished)
2020-05-25 14:35:35 [test] INFO: parse--------------------------
2020-05-25 14:35:35 [scrapy.core.engine] INFO: Closing spider (finished)
2020-05-25 14:35:35 [scrapy.statscollectors] INFO: Dumping Scrapy stats:
{'downloader/request_bytes': 764,
 'downloader/request_count': 3,
 'downloader/request_method_count/GET': 3,
 'downloader/response_bytes': 26322,
 'downloader/response_count': 3,
 'downloader/response_status_count/200': 2,
 'downloader/response_status_count/301': 1,
 'elapsed_time_seconds': 1.62243,
 'finish_reason': 'finished',
 'finish_time': datetime.datetime(2020, 5, 25, 6, 35, 35, 9694),
 'log_count/INFO': 13,
 'response_received_count': 2,
 'robotstxt/request_count': 1,
 'robotstxt/response_count': 1,
 'robotstxt/response_status_count/200': 1,
 'scheduler/dequeued': 2,
 'scheduler/dequeued/memory': 2,
 'scheduler/enqueued': 2,
 'scheduler/enqueued/memory': 2,
 'start_time': datetime.datetime(2020, 5, 25, 6, 35, 33, 387264)}
2020-05-25 14:35:35 [scrapy.core.engine] INFO: Spider closed (finished)


If I don't use AsyncioSelectorReactor, the first job runs fine.
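For comparison, the variant with Twisted's default reactor is the same script with the two install_reactor lines removed; with it, the first job starts immediately:

from datetime import datetime
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from scrapy.utils.log import configure_logging
from apscheduler.schedulers.twisted import TwistedScheduler
from twisted.internet import reactor

configure_logging()
scheduler = TwistedScheduler(reactor=reactor)
process = CrawlerProcess(get_project_settings())
scheduler.add_job(process.crawl, 'interval', args=['test'],
                  minutes=1, next_run_time=datetime.now())
scheduler.start()
reactor.run()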


6yjfywim1#

I ran into the same problem, and I found both the cause and a solution.
The solution first: it looks like scrapy.utils.reactor.install_reactor uses asyncioreactor from the twisted.internet package as a global variable, and quietly fails if it cannot find it.

# asyncio reactor installation (CORRECT) - `reactor` must not be defined at this point
# https://docs.scrapy.org/en/latest/_modules/scrapy/utils/reactor.html?highlight=asyncio%20reactor#
import scrapy 
import asyncio
from twisted.internet import asyncioreactor
scrapy.utils.reactor.install_reactor('twisted.internet.asyncioreactor.AsyncioSelectorReactor')
is_asyncio_reactor_installed = scrapy.utils.reactor.is_asyncio_reactor_installed()
print(f"Is asyncio reactor installed: {is_asyncio_reactor_installed}")
from twisted.internet import reactor

However, the following sequence of instructions fails:

# asyncio reactor BAD INSTALLED (INCORRECT) Import order IS important
import scrapy 
import asyncio
from twisted.internet import reactor
scrapy.utils.reactor.install_reactor('twisted.internet.asyncioreactor.AsyncioSelectorReactor')
is_asyncio_reactor_installed = scrapy.utils.reactor.is_asyncio_reactor_installed()
print(f"Is asyncio reactor installed: {is_asyncio_reactor_installed}")

That is badly behaved code; it should not work this way. I hope the Scrapy developers fix this soon.
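Until that is fixed, one defensive option is to check which reactor actually got installed before importing twisted.internet.reactor, using the same helper shown above, and fail loudly instead of silently falling back (a sketch; the RuntimeError message is my own):

import scrapy.utils.reactor

scrapy.utils.reactor.install_reactor(
    'twisted.internet.asyncioreactor.AsyncioSelectorReactor')

# Fail loudly if the asyncio reactor was not installed, e.g. because
# twisted.internet.reactor was already imported somewhere earlier.
if not scrapy.utils.reactor.is_asyncio_reactor_installed():
    raise RuntimeError('AsyncioSelectorReactor was not installed; '
                       'check the import order of twisted.internet.reactor.')

from twisted.internet import reactor  # safe to import only after the check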


x759pob22#

It waits for the interval first and only then runs the scheduled function, so it is working as designed. If you want it to also run immediately, add next_run_time=datetime.now() to your add_job() call.
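A minimal sketch of that suggestion, reusing the setup from the question (spider name 'test', one-minute interval):

from datetime import datetime
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from apscheduler.schedulers.twisted import TwistedScheduler
from twisted.internet import reactor

process = CrawlerProcess(get_project_settings())
scheduler = TwistedScheduler(reactor=reactor)
scheduler.add_job(
    process.crawl, 'interval', args=['test'],
    minutes=1,
    next_run_time=datetime.now(),  # fire once right away, then every minute
)
scheduler.start()
reactor.run()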


4ioopgfo3#

I found a workaround: use another job to kick it off. For example, add a job like this:

sched.add_job(lambda: print('activate'), 'interval', minutes=1, next_run_time=datetime.now() + timedelta(seconds=5))
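Put together with the run script from the question, the idea looks roughly like this (a sketch only; the 'activate' job does nothing except wake the loop a few seconds after startup):

from datetime import datetime, timedelta
from scrapy.crawler import CrawlerProcess
from scrapy.utils.project import get_project_settings
from scrapy.utils.reactor import install_reactor
install_reactor('twisted.internet.asyncioreactor.AsyncioSelectorReactor')
from apscheduler.schedulers.twisted import TwistedScheduler
from twisted.internet import reactor

process = CrawlerProcess(get_project_settings())
sched = TwistedScheduler(reactor=reactor)

# The real job: crawl immediately, then every minute.
sched.add_job(process.crawl, 'interval', args=['test'],
              minutes=1, next_run_time=datetime.now())

# The dummy job from this answer: it fires a few seconds after startup
# and "kicks" the event loop so the first crawl actually proceeds.
sched.add_job(lambda: print('activate'), 'interval', minutes=1,
              next_run_time=datetime.now() + timedelta(seconds=5))

sched.start()
reactor.run()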

xmjla07d4#

I came across this old question; it seems TwistedScheduler does not work well with the Scrapy framework. I solved it by using a separate thread dedicated to APScheduler:

import threading
from twisted.internet import reactor
from apscheduler.schedulers.blocking import BlockingScheduler
from scrapy.crawler import CrawlerRunner

runner = CrawlerRunner( YOUR_SCRAPY_SETTINGS )
scheduler = BlockingScheduler()

# The job fires in the scheduler thread, but the crawl is handed over to
# the reactor thread via callFromThread, because the Twisted reactor is
# not thread-safe and must only be driven from its own thread.
scheduler.add_job(
        lambda: reactor.callFromThread(lambda: runner.crawl('test')),
        trigger='cron', minute='*/1',
    )

# Run the blocking scheduler in its own thread so that reactor.run()
# can keep the main thread.
scheduler_thread = threading.Thread(target=lambda: scheduler.start())
scheduler_thread.start()
reactor.run()        # blocks until the reactor is stopped
scheduler.shutdown()

References:
https://docs.scrapy.org/en/latest/topics/practices.html
https://docs.twisted.org/en/twisted-18.7.0/core/howto/threading.html
