Python Scrapy (2023)

f4t66c6m · posted 12 months ago · in Python

We want to scrape articles (content + title) to extend our dataset for text-classification purposes.
Goal: scrape all articles from every page of https://www.bbc.com/news/technology
Problem: the code seems to pull only the articles from https://www.bbc.com/news/technology?page=1, even though we follow all of the pages. Could there be a problem with how we follow the pages?

import scrapy
from typing import Any
from scrapy.http import Response


class BBCSpider_2(scrapy.Spider):

    name = "bbc_tech"
    start_urls = ["https://www.bbc.com/news/technology"]

    def parse(self, response: Response, **kwargs: Any) -> Any:
        # Read the last page number from the pagination bar, then queue
        # every ?page=N listing page.
        max_pages = response.xpath("//nav[@aria-label='Page']/div/div/div/div/ol/li[last()]/div/a//text()").get()
        max_pages = int(max_pages)
        for p in range(max_pages):
            page = f"https://www.bbc.com/news/technology?page={p+1}"
            yield response.follow(page, callback=self.parse_articles2)

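(A quick way to sanity-check a long XPath like the pagination selector above is Scrapy's interactive shell; this is a debugging sketch, not part of the original spider:)

# Run `scrapy shell "https://www.bbc.com/news/technology"` and then, at
# the prompt, evaluate the selector to see what it actually returns:
response.xpath(
    "//nav[@aria-label='Page']/div/div/div/div/ol/li[last()]/div/a//text()"
).get()
# If this prints None or the wrong number, the pagination XPath is the
# first thing to fix.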
Next, we visit each article on the corresponding page:

    def parse_articles2(self, response):
        # Articles live in two different containers on the listing page.
        container_to_scan = [4, 8]
        for box in container_to_scan:
            if box == 4:
                articles = response.xpath(f"//*[@id='main-content']/div[{box}]/div/div/ul/li")
            if box == 8:
                articles = response.xpath(f"//*[@id='main-content']/div[{box}]/div[2]/ol/li")
            for article_idx in range(len(articles)):
                if box == 4:
                    relative_url = response.xpath(f"//*[@id='main-content']/div[4]/div/div/ul/li[{article_idx+1}]/div/div/div/div[1]/div[1]/a/@href").get()
                elif box == 8:
                    relative_url = response.xpath(f"//*[@id='main-content']/div[8]/div[2]/ol/li[{article_idx+1}]/div/div/div[1]/div[1]/a/@href").get()
                else:
                    relative_url = None

                if relative_url is not None:
                    followup_url = "https://www.bbc.com" + relative_url
                    yield response.follow(followup_url, callback=self.parse_article)


Last but not least, we scrape the content and title of each article:

    def parse_article(response):
        # Collect the text blocks that make up the article body.
        article_text = response.xpath("//article/div[@data-component='text-block']")
        content = []
        for box in article_text:
            text = box.css("div p::text").get()
            if text is not None:
                content.append(text)

        title = response.css("h1::text").get()

        yield {
            "title": title,
            "content": content,
        }
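(A side note on the body extraction above: in a Scrapy/parsel selector, .get() returns only the first matching text node while .getall() returns all of them, so this loop keeps at most one paragraph per text block. A minimal parsel illustration:)

from parsel import Selector

sel = Selector(text="<div><p>one</p><p>two</p></div>")
print(sel.css("div p::text").get())     # 'one'
print(sel.css("div p::text").getall())  # ['one', 'two']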


When we run this, we get an items_scraped_count of 24, but it should be roughly 24 × 29 (about 24 articles per page across ~29 pages).

cunj1qz1 · answer #1

It looks like your follow-up calls to page 2, page 3, and so on are being filtered out by Scrapy's duplicate filter. This happens because the site serves the same first page no matter what page number you put in the URL query. After rendering that first page, the site uses a JSON API to fetch the actual article information for the requested page, and unless you call that API directly, Scrapy cannot capture it on its own.
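(For reference, the duplicate filter can be bypassed per request with dont_filter=True. A minimal sketch; note that it would not fix the underlying problem here, since the server returns the same first-page HTML for every page number:)

# In the original parse() loop -- dont_filter=True tells Scrapy not to
# drop the request as a duplicate, but the response would still be the
# same first page:
yield response.follow(page, callback=self.parse_articles2, dont_filter=True)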
You can find the JSON API in the Network tab of your browser's developer tools, or you can see how I use it in the example below. You just need to plug in the desired page number, just as you already do for the .../news/technology?page=? URL. See the example below.
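(If you want to poke at the endpoint outside of Scrapy first, something like this works; a sketch assuming the requests package, with the parameters decoded from the query string used in the spider below:)

import requests

# Query parameters decoded from the topic-stream URL used in the spider;
# pageNumber is the only one that needs to change per page.
params = {
    "adSlotType": "mpu_middle",
    "enableDotcomAds": "true",
    "isUk": "false",
    "lazyLoadImages": "true",
    "pageNumber": 2,
    "pageSize": 24,
    "promoAttributionsToSuppress": '["/news","/news/front_page"]',
    "showPagination": "true",
    "title": "Latest News",
    "tracking": '{"groupName":"Latest News","groupType":"topic stream","groupResourceId":"urn:bbc:vivo:curation:b2790c4d-d5c4-489a-84dc-be0dcd3f5252","groupPosition":5,"topicId":"cd1qez2v2j2t"}',
    "urn": "urn:bbc:vivo:curation:b2790c4d-d5c4-489a-84dc-be0dcd3f5252",
}
resp = requests.get("https://www.bbc.com/wc-data/container/topic-stream", params=params)
for post in resp.json()["posts"]:
    print(post["url"])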
One more thing: your parse_article method is missing self as its first parameter. That will throw an error and prevent you from scraping any page content at all. I also rewrote a couple of the XPaths to make them more readable.
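(The missing self fails like this, independent of Scrapy; a minimal reproduction:)

class Demo:
    def handler(response):  # note: no `self`
        return response

Demo().handler("resp")
# TypeError: handler() takes 1 positional argument but 2 were given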

import scrapy

class BBCSpider_2(scrapy.Spider):
    name = "bbc_tech"
    start_urls = ["https://www.bbc.com/news/technology"]

    def parse(self, response):
        # Last page number shown in the pagination bar.
        max_pages = response.xpath("//nav[@aria-label='Page']//ol/li[last()]//text()").get()
        # Page 1 articles are present in the initial HTML.
        for article in response.xpath("//div[@type='article']"):
            if link := article.xpath(".//a[contains(@class, 'LinkPostLink')]/@href").get():
                yield response.follow(link, callback=self.parse_article)
        # Pages 2..max_pages are only served through the JSON API
        # (the +1 keeps the last page in the range).
        for i in range(2, int(max_pages) + 1):
            yield scrapy.Request(f"https://www.bbc.com/wc-data/container/topic-stream?adSlotType=mpu_middle&enableDotcomAds=true&isUk=false&lazyLoadImages=true&pageNumber={i}&pageSize=24&promoAttributionsToSuppress=%5B%22%2Fnews%22%2C%22%2Fnews%2Ffront_page%22%5D&showPagination=true&title=Latest%20News&tracking=%7B%22groupName%22%3A%22Latest%20News%22%2C%22groupType%22%3A%22topic%20stream%22%2C%22groupResourceId%22%3A%22urn%3Abbc%3Avivo%3Acuration%3Ab2790c4d-d5c4-489a-84dc-be0dcd3f5252%22%2C%22groupPosition%22%3A5%2C%22topicId%22%3A%22cd1qez2v2j2t%22%7D&urn=urn%3Abbc%3Avivo%3Acuration%3Ab2790c4d-d5c4-489a-84dc-be0dcd3f5252", callback=self.parse_json)

    def parse_json(self, response):
        # Each JSON page lists its articles under the "posts" key.
        for post in response.json()["posts"]:
            yield scrapy.Request(response.urljoin(post["url"]), callback=self.parse_article)

    def parse_article(self, response):
        article_text = response.xpath("//article/div[@data-component='text-block']//text()").getall()
        content = " ".join([i.strip() for i in article_text])
        title = response.css("h1::text").get()
        yield {
            "title": title,
            "content": content,
        }

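(To run the spider and dump items to a file, you can use scrapy crawl bbc_tech -o articles.json inside a Scrapy project, or a standalone script; a sketch using Scrapy's CrawlerProcess, assuming the spider class above is importable as BBCSpider_2:)

from scrapy.crawler import CrawlerProcess

# Write all scraped items to articles.json via Scrapy's FEEDS setting.
process = CrawlerProcess(settings={
    "FEEDS": {"articles.json": {"format": "json"}},
})
process.crawl(BBCSpider_2)
process.start()  # blocks until the crawl finishes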
