How to speed up extracting point data from raster files with GDAL in Python

hjzp0vay · asked 2022-12-17 · in Python
Follow (0) | Answers (2) | Views (154)

I have 366 raster image files (daily MODIS satellite data) in GeoTIFF format containing snow data, and a CSV file with 19,000 locations (latitude and longitude). I need to collect the snow value at each location from the raster files. I have tried doing this with the GDAL Python library, but the program takes about 30 minutes per file, which means roughly 180 hours to process everything. Below is the code I am using. Please suggest any way to speed up execution, or a better approach that achieves the same result.

import gdal
import pandas
import numpy as np
import os,subprocess
def runCmdAndGetOutput(cmd) :
    outList = []
    proc = subprocess.Popen(cmd,stdout=subprocess.PIPE)
    while True:
        line = proc.stdout.readline()
        if not line:
            break
        #the real code does filtering here
        outList.append(line.rstrip())
        print(outList)
    # value = float(outList[2].decode("utf-8").replace("<Value>","").replace("</Value>",""))
    value = float(outList[0].decode("utf-8"))
    return value

# ndsiFile = "2016001.tif"
locs = "hkkhlocations.csv"
ndsFileLoc = r"D:\SrinivasaRao_Docs\MODIS_NDSI_V6_2016\5000000499560\out"
# with open(locs) as f:
#     locData = f.readlines()
latLnginfo = pandas.read_csv(locs)
print(latLnginfo.columns)
print(latLnginfo.shape)

# outDf = pandas.DataFrame()

outDf = pandas.DataFrame(np.zeros([len(latLnginfo),370])*np.nan)
day =1
print(os.listdir(ndsFileLoc))
print(type(os.listdir(ndsFileLoc)))
datasetsList = os.listdir(ndsFileLoc)
for eFile in datasetsList:
    rCount = 0
    # print(eFile)
    cCount = int(eFile[4:7])
    # print(cCount)
    with open("output.csv") as f :
        for line in f :
            locData = line.split(",")
            cmdToRun = ["gdallocationinfo" ,"-valonly", "-wgs84", os.path.join(ndsFileLoc,eFile) ,str(latLnginfo.iloc[rCount,4]), str(latLnginfo.iloc[rCount,3])]# str(locData[0]), str(locData[1])]
            v = runCmdAndGetOutput(cmdToRun)
            outDf.iloc[rCount,cCount]= float(v)
            rCount = rCount + 1
            print("rowno: ", rCount, "Dayno :", cCount, "SCF value: ", v)

    day = day+1
outDf.to_csv('test.csv')

csga3l58

import multiprocessing

def run_cmd_processor(efile):
    r_count = 0
    c_count = int(efile[4:7])
    with open("output.csv") as f:
        for line in f:
            loc_data = line.split(",")
            # ~

if __name__ == "__main__":
    pool = multiprocessing.Pool(processes=2) # You can add more processes
    pool.map(run_cmd_processor, datasetsList)
    pool.close()
    pool.join()

The only point where the work can be split across multiple processes seems to be the "for eFile in datasetsList:" loop. It can be restructured as above.
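To illustrate the pattern end to end, here is a minimal, self-contained sketch of `Pool.map` over a file list. The worker body and the file names are placeholders, not the asker's actual processing; the real worker would sample every CSV point from the given raster:

```python
import multiprocessing

def process_file(efile):
    # Placeholder worker: in the real code this would open the raster
    # and sample every point from the locations CSV. Here we only
    # extract the day number encoded in the (hypothetical) filename.
    day_no = int(efile[4:7])
    return efile, day_no

if __name__ == "__main__":
    # Hypothetical file list standing in for os.listdir(ndsFileLoc)
    dataset_list = ["MOD_001.tif", "MOD_002.tif"]
    with multiprocessing.Pool(processes=2) as pool:
        results = pool.map(process_file, dataset_list)
    print(results)
```

Note the `if __name__ == "__main__":` guard — without it, spawning worker processes fails on Windows, where child processes re-import the main module.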

i5desfxk

My suggestion: you don't need to call GDAL through subprocess at all. Just read the HDF file with the GDAL bindings and sample the pixel value at your long/lat coordinates:

from osgeo import gdal

src = "<location_to_your_hdf>"
ds = gdal.Open(src, gdal.GA_ReadOnly)

## get your subdataset, to find out which one -> ds.GetSubDatasets()
subdata = gdal.Open(ds.GetSubDatasets()[0][0], gdal.GA_ReadOnly)

## get the geotransform metadata
gt = subdata.GetGeoTransform()

## locate the pixel by converting coordinates to column/row offsets
## (gt[0]/gt[1] describe the x axis, i.e. longitude; gt[3]/gt[5] the y axis)
px = int((long - gt[0]) / gt[1])
py = int((lat - gt[3]) / gt[5])
pixelval = subdata.ReadAsArray(px, py, 1, 1)

This should be much faster than the subprocess call, because you only open the HDF file once and then loop over your coordinate list, instead of invoking gdallocationinfo once per coordinate.
Cheers
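If the rasters are plain GeoTIFFs rather than HDF subdatasets, the same idea goes one step further: read each file's band into memory once and index all 19,000 points in a single NumPy operation instead of looping. A minimal sketch, where the helper name, paths, and CSV column names are assumptions, not from the original answers:

```python
import numpy as np

def coords_to_pixels(lons, lats, gt):
    # Vectorised inverse of a north-up geotransform:
    # gt = (x_origin, x_res, 0, y_origin, 0, y_res), y_res negative.
    cols = ((np.asarray(lons) - gt[0]) / gt[1]).astype(int)
    rows = ((np.asarray(lats) - gt[3]) / gt[5]).astype(int)
    return rows, cols

# Hypothetical usage, assuming GDAL is installed and the rasters are
# GeoTIFFs in WGS84 (adapt paths and column names to your data):
# from osgeo import gdal
# ds = gdal.Open("2016001.tif", gdal.GA_ReadOnly)
# band = ds.GetRasterBand(1).ReadAsArray()   # read the file ONCE
# rows, cols = coords_to_pixels(df["lon"], df["lat"], ds.GetGeoTransform())
# values = band[rows, cols]                  # all 19,000 points at once
```

With one array read and one fancy-indexing step per file, the per-point subprocess overhead disappears entirely, which is where almost all of the original 30 minutes per file was going.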
