I am scraping 6 different websites with 6 different spiders. But now I have to scrape these websites in a single spider. Is there a way to scrape multiple links in the same spider?
0dxa2lsx1#
I do it like this:
from scrapy import Request

def start_requests(self):
    # One request per site, each routed to its own callback
    yield Request('url1', callback=self.url1)
    yield Request('url2', callback=self.url2)
    yield Request('url3', callback=self.url3)
    yield Request('url4', callback=self.url4)
    yield Request('url5', callback=self.url5)
    yield Request('url6', callback=self.url6)
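For this to run, each of the callbacks referenced above (url1 … url6) has to be defined on the spider. The names and the selector below are hypothetical; this is only a minimal sketch of what one such callback might look like:

def url1(self, response):
    # Hypothetical parsing logic for the first site; adjust the selector to the actual page
    for title in response.css('h1::text').getall():
        yield {'title': title}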
soat7uwm2#
# spider1, spider2, spider3 stand for your existing spiders; pass the Spider
# class (not the module) to process.crawl(). The require_spider* flags are
# placeholders for however you decide which site to crawl.
import spider1
import spider2
import spider3
from scrapy.crawler import CrawlerProcess

if require_spider1:
    spider = spider1
    urls = ['https://site1.com/']
elif require_spider2:
    spider = spider2
    urls = ['https://site2.com/', 'https://site2-1.com/']
elif require_spider3:
    spider = spider3
    urls = ['https://site3.com']

process = CrawlerProcess()
process.crawl(spider, urls=urls)
process.start()
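A note on the urls=urls argument: CrawlerProcess.crawl forwards extra keyword arguments to the spider's constructor, so the chosen spider can turn them into requests itself. A minimal sketch of such a spider (the class name MultiSiteSpider and its parse logic are hypothetical, not from the answer above):

import scrapy

class MultiSiteSpider(scrapy.Spider):
    # Hypothetical spider that receives its start URLs at crawl time
    name = 'multi_site'

    def __init__(self, urls=None, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.urls = urls or []

    def start_requests(self):
        for url in self.urls:
            yield scrapy.Request(url, callback=self.parse)

    def parse(self, response):
        # Placeholder: extract whatever each site actually needs here
        yield {'url': response.url}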