I am pretraining a custom BERT model on my own corpus. I generated the vocab file using BertWordPieceTokenizer and am then running the code below:
!python create_pretraining_data.py \
  --input_file=/content/drive/My Drive/internet_archive_scifi_v3.txt \
  --output_file=/content/sample_data/tf_examples.tfrecord \
  --vocab_file=/content/sample_data/sifi_13sep-vocab.txt \
  --do_lower_case=True \
  --max_seq_length=128 \
  --max_predictions_per_seq=20 \
  --masked_lm_prob=0.15 \
  --random_seed=12345 \
  --dupe_factor=5
Getting output as:
INFO:tensorflow:*** Reading from input files ***
INFO:tensorflow:*** Writing to output files ***
INFO:tensorflow: /content/sample_data/tf_examples.tfrecord
INFO:tensorflow:Wrote 0 total instances
Not sure why I am always getting 0 instances in tf_examples.tfrecord. What am I doing wrong?
I am using TF version 1.12.
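For reference, the vocab file was generated roughly as follows. This is a minimal sketch assuming a recent version of the HuggingFace tokenizers library (older releases expose tokenizer.save() instead of save_model()), and the vocab_size / min_frequency values here are placeholders rather than the exact ones used:

from tokenizers import BertWordPieceTokenizer

# Train a lowercased WordPiece vocab on the raw corpus (to match --do_lower_case=True)
tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(
    files=["/content/drive/My Drive/internet_archive_scifi_v3.txt"],
    vocab_size=30000,   # placeholder
    min_frequency=2,    # placeholder
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)

# Writes sifi_13sep-vocab.txt into /content/sample_data
tokenizer.save_model("/content/sample_data", "sifi_13sep")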
3 Answers
3zwjbxry1#
Just FYI, the generated vocab file is 290 KB.
roqulrg32#
How big is the input file? I have run into this before where a single file contained too many documents and nothing was written to the tfrecord file. Maybe try splitting the input file into smaller files and see if that works!
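For what it's worth, here is a minimal sketch of that kind of split (plain Python; it assumes the corpus already follows the format create_pretraining_data.py expects, i.e. one sentence per line with blank lines between documents, and the chunk size and output paths are arbitrary):

# Split a large corpus into smaller files without breaking documents apart.
# A blank line is treated as the end of a document.
def split_corpus(input_path, output_prefix, docs_per_file=1000):
    file_index, doc_count = 0, 0
    out = open(f"{output_prefix}_{file_index}.txt", "w", encoding="utf-8")
    with open(input_path, encoding="utf-8") as f:
        for line in f:
            out.write(line)
            if not line.strip():          # blank line = document boundary
                doc_count += 1
                if doc_count >= docs_per_file:
                    out.close()
                    file_index += 1
                    doc_count = 0
                    out = open(f"{output_prefix}_{file_index}.txt", "w", encoding="utf-8")
    out.close()

split_corpus("/content/drive/My Drive/internet_archive_scifi_v3.txt",
             "/content/sample_data/scifi_part")

The resulting files can then be fed to create_pretraining_data.py one at a time, or together, since --input_file accepts a comma-separated list of files or glob patterns.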
nom7f22z3#
I have the same problem. How can it be solved?