I'm loading data consisting of thousands of MRI images. I use nibabel
to get the 3D data arrays from the MRI files:
import nibabel as nib
import numpy as np
import pandas as pd
import tensorflow as tf

def get_voxels(path):
    img = nib.load(path)
    data = img.get_fdata()
    return data.copy()
df = pd.read_csv("/home/paths_updated_shuffled_4.csv")
df = df.reset_index()
labels = []
images = []
for index, row in df.iterrows():
    images.append(get_voxels(row['path']))
    labels.append(row['pass'])
labels = np.array(labels)
images = np.array(images)
n = len(df.index)
train_n = int(0.8 * n)
train_images = images[:train_n]
train_labels = labels[:train_n]
validation_n = (n - train_n) // 2
validation_end = train_n + validation_n
validation_images, validation_labels = images[train_n:validation_end], labels[train_n:validation_end]
test_images = images[validation_end:]
test_labels = labels[validation_end:]
train_ds = tf.data.Dataset.from_tensor_slices((train_images, train_labels))
validation_ds = tf.data.Dataset.from_tensor_slices((validation_images, validation_labels))
test_ds = tf.data.Dataset.from_tensor_slices((test_images, test_labels))
As you can see, I'm using tf.data.Dataset.from_tensor_slices,
but because there are so many large files, I run out of memory.
Is there a better way to do this in TensorFlow or Keras?
1 Answer
Follow the instructions in 3D image classification from CT scans by Hasib Zunair.
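The key idea in that tutorial is to keep only the file paths and labels in memory and load each volume lazily inside the `tf.data` pipeline, instead of materializing every array up front with `from_tensor_slices`. Below is a minimal sketch of that pattern. Note that `load_volume` is a placeholder standing in for your nibabel loader (in practice it would call `nib.load(path).get_fdata()`), and the `(4, 4, 4)` volume shape, batch size, and example paths are illustrative assumptions, not values from your data.

```python
import numpy as np
import tensorflow as tf

# Placeholder loader: in your code this would wrap nibabel, e.g.
#   return nib.load(path).get_fdata().astype(np.float32)
# Here it just returns a small dummy volume so the sketch is runnable.
def load_volume(path):
    # tf.numpy_function passes the path in as bytes; decode it first.
    if isinstance(path, bytes):
        path = path.decode("utf-8")
    return np.zeros((4, 4, 4), dtype=np.float32)

def make_dataset(paths, labels, batch_size=2):
    # Only paths and labels live in memory; volumes are read per element.
    ds = tf.data.Dataset.from_tensor_slices((paths, labels))

    def _load(path, label):
        vol = tf.numpy_function(load_volume, [path], tf.float32)
        vol.set_shape((4, 4, 4))  # restore the static shape for Keras
        return vol, label

    return (ds.map(_load, num_parallel_calls=tf.data.AUTOTUNE)
              .batch(batch_size)
              .prefetch(tf.data.AUTOTUNE))

# Illustrative usage with made-up paths; in your case these would come
# from df['path'] and df['pass'], already split into train/val/test.
paths = ["a.nii", "b.nii", "c.nii"]
labels = [0, 1, 0]
train_ds = make_dataset(paths, labels)
for vols, labs in train_ds.take(1):
    print(vols.shape)  # (2, 4, 4, 4)
```

With this layout only one batch of volumes is resident at a time, so memory usage is bounded by the batch size rather than by the dataset size; you would build one such dataset each for your train, validation, and test path slices.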