I'm trying to use a Scrapy item pipeline to insert four fields from my spider into a table. Everything works except the doc_id field, which sometimes has an empty value (it comes from the raw source and is cleaned with a regex).
My problem is that as soon as an empty value is encountered, the entire row is dropped, whereas when I export the data to a CSV file it is perfectly fine for a column to have no value.
Here is the KeyError I get:
Traceback (most recent call last):
File "/Users/opt/anaconda3/lib/python3.7/site-packages/twisted/internet/defer.py", line 654, in _runCallbacks
current.result = callback(current.result, *args,**kw)
File "/Users/opt/anaconda3/lib/python3.7/site-packages/scrapy/utils/defer.py", line 154, in f
return deferred_from_coro(coro_f(*coro_args,**coro_kwargs))
File "/Users/user/document_scraper/doc/pipelines.py", line 32, in process_item
self.store_db(item)
File "/Users/user/document_scraper/doc/pipelines.py", line 40, in store_db
item['doc_id'][0],
File "/Users/opt/anaconda3/lib/python3.7/site-packages/scrapy/item.py", line 83, in __getitem__
return self._values[key]
KeyError: 'doc_id'
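The KeyError comes from scrapy.Item's dict-style __getitem__: a field whose input processor produced nothing is simply never set on the item, so item['doc_id'] raises, while item.get('doc_id') would return None. A minimal sketch with a plain dict (scrapy.Item exposes the same mapping interface):

```python
# Simulate an item whose 'doc_id' field was never populated.
item = {'date': ['2021-01-01'], 'office': 'HQ'}

# Dict-style indexing raises KeyError for a missing key...
try:
    item['doc_id']
except KeyError as exc:
    print(exc)  # → 'doc_id'

# ...while .get() quietly returns a default instead.
print(item.get('doc_id'))  # → None
```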
my items.py:
import scrapy
from scrapy.loader import ItemLoader
from itemloaders.processors import TakeFirst, MapCompose, Compose
import re

class DocItem(scrapy.Item):
    date = scrapy.Field()
    office = scrapy.Field(output_processor=TakeFirst())
    doc_body = scrapy.Field()
    doc_id = scrapy.Field(input_processor=MapCompose(lambda item: re.findall('C\.\s\d{4}', item)[0].split('C. ')[1]))
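As a side note, the lambda above calls re.findall(...)[0], which raises IndexError when the pattern is absent from the source text. A hedged alternative (extract_doc_id is a hypothetical helper, not from the original post) returns the four digits or None instead; note that MapCompose drops None results, so the field is still left unset and the pipeline has to handle the missing key, e.g. with item.get():

```python
import re

def extract_doc_id(text):
    # Same intent as re.findall('C\.\s\d{4}', item)[0].split('C. ')[1],
    # but returns None instead of raising IndexError when there is no match.
    match = re.search(r'C\.\s(\d{4})', text)
    return match.group(1) if match else None

print(extract_doc_id('Case C. 1234 filed'))  # → 1234
print(extract_doc_id('no id here'))          # → None
```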
my pipelines.py:
import sqlite3

class DocPipeline(object):
    def __init__(self):
        self.create_connection()
        self.create_table()

    def create_connection(self):
        self.conn = sqlite3.connect("//document.db")
        self.curr = self.conn.cursor()

    def create_table(self):
        self.curr.execute("""DROP TABLE IF EXISTS DOC""")
        self.curr.execute("""CREATE TABLE DOC(
            DOC_DATE date,
            OFFICE text,
            BODY text,
            DOC_ID number
        )""")

    def process_item(self, item, spider):
        self.store_db(item)
        return item

    def store_db(self, item):
        self.curr.execute("""INSERT INTO DOC VALUES (?,?,?,?)""", (
            item['date'][0],
            item['office'],
            item['doc_body'][0],
            item['doc_id'][0],
        ))
        self.conn.commit()
How can I tell sqlite or Scrapy that I still want the item stored even when one of its columns is None?
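A sketch of one way to do it, assuming the schema above: fetch the field with item.get() (scrapy.Item supports the same mapping methods as a dict) and pass None through to sqlite3, which stores it as NULL. The item dict and its values below are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")  # in-memory database for the sketch
curr = conn.cursor()
curr.execute("""CREATE TABLE DOC(
    DOC_DATE date,
    OFFICE text,
    BODY text,
    DOC_ID number
)""")

# An item scraped without a doc_id (illustrative values).
item = {'date': ['2021-01-01'], 'office': 'HQ', 'doc_body': ['some text']}

doc_id = item.get('doc_id')  # None when the field was never populated
curr.execute("INSERT INTO DOC VALUES (?,?,?,?)", (
    item['date'][0],
    item['office'],
    item['doc_body'][0],
    doc_id[0] if doc_id else None,  # sqlite3 stores None as NULL
))
conn.commit()

print(curr.execute("SELECT * FROM DOC").fetchone())
# → ('2021-01-01', 'HQ', 'some text', None)
```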