How to create an HDFS microservice with the NameNode directory written to an external hard drive

xkftehaa, posted 2021-05-31 in Hadoop

Situation: I am trying to work out how to add an external volume to minikube. So far I have managed to do it with minikube mount --ip $myip $externalvolumepath:$externalvolumepathinsideminikube, together with a hostPath PersistentVolume pointing to the path $externalvolumepathinsideminikube. Below are the example manifests used to declare the PersistentVolume, the PersistentVolumeClaim and the Deployment (a sketch of the mount and apply commands follows them):

kind: PersistentVolume
apiVersion: v1
metadata:
  name: files-archive-volume
  labels:
    type: local
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: $externalvolumepathinsideminikube
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: files-archive-volumeclaim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: files-archive-volume
  resources:
    requests:
      storage: 5Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hdfs
spec:
  selector:
    matchLabels:
      run: hdfs
  replicas: 1
  template:
    metadata:
      labels:
        run: hdfs
    spec:
      hostname: hdfs
      volumes:
#        - name: "files-archive-volumeclaim"
#          hostPath:
#            path: "/external/files/archive/"
#        - name: "files-tmp-volumeclaim"
#          hostPath:
#            path: "/external/files/tmp/"
        - name: files-archive-volumeclaim
          persistentVolumeClaim:
            claimName: files-archive-volumeclaim
        - name: files-tmp-volumeclaim
          persistentVolumeClaim:
            claimName: files-tmp-volumeclaim
        - name: config-hdfs-core-site
          configMap:
            name: config-hdfs-core-site
        - name: config-hdfs-hadoop-env
          configMap:
            name: config-hdfs-hadoop-env
        - name: config-hdfs-hdfs-site
          configMap:
            name: config-hdfs-hdfs-site
        - name: config-hdfs-mapred-site
          configMap:
            name: config-hdfs-mapred-site
        - name: config-hdfs-yarn-site
          configMap:
            name: config-hdfs-yarn-site
        - name: config-sshd-config
          configMap:
            name: config-sshd-config
        - name: config-ssh-config
          configMap:
            name: config-ssh-config
        - name: init-hdfs
          configMap:
            name: init-hdfs
      containers:
        - name: hdfs
          image: hdfs:mar2020
          env:
            - name: PROTOCOLE
              value: "init"
          resources:
            limits:
              cpu: "1"
          ports:
            - containerPort: 22
            - containerPort: 2122
            - containerPort: 8030
            - containerPort: 8031
            - containerPort: 8032
            - containerPort: 8033
            - containerPort: 8040
            - containerPort: 8042
            - containerPort: 8088
            - containerPort: 9000
            - containerPort: 19888
            - containerPort: 49707
            - containerPort: 50010
            - containerPort: 50020
            - containerPort: 50070
            - containerPort: 50075
            - containerPort: 50090
          volumeMounts:
            - mountPath: /home/hadoop/
              name: files-archive-volumeclaim
            - mountPath: /tmp/
              name: files-tmp-volumeclaim
            - mountPath: usr/local/hadoop/etc/hadoop/core-site.xml
              subPath: core-site.xml
              name: config-hdfs-core-site
            - mountPath: usr/local/hadoop/etc/hadoop/hadoop-env.sh
              subPath: hadoop-env.sh
              name: config-hdfs-hadoop-env
            - mountPath: usr/local/hadoop/etc/hadoop/hdfs-site.xml
              subPath: hdfs-site.xml
              name: config-hdfs-hdfs-site
            - mountPath: usr/local/hadoop/etc/hadoop/mapred-site.xml
              subPath: mapred-site.xml
              name: config-hdfs-mapred-site
            - mountPath: usr/local/hadoop/etc/hadoop/yarn-site.xml
              subPath: yarn-site.xml
              name: config-hdfs-yarn-site
            - mountPath: etc/ssh/sshd_config
              subPath: sshd_config
              name: config-sshd-config
            - mountPath: etc/ssh/ssh_config
              subPath: ssh_config
              name: config-ssh-config
            - mountPath: home/init-hdfs.sh
              subPath: init-hdfs.sh
              name: init-hdfs
          command: ["tail", "-f" , "/dev/null"]
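For context, the mount and deployment steps described above look roughly like the following sketch; $myip, $externalvolumepath and $externalvolumepathinsideminikube are the placeholders from the description, and hdfs.yaml is an assumed file name for the manifests above:

# Expose the external drive's directory inside the minikube VM
# (minikube mount keeps running, so it is backgrounded here).
minikube mount --ip $myip $externalvolumepath:$externalvolumepathinsideminikube &

# Apply the PersistentVolume, PersistentVolumeClaim and Deployment manifests.
kubectl apply -f hdfs.yaml

# Check that the claim is bound and the pod is running.
kubectl get pv,pvc
kubectl get pods -l run=hdfs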

In this Deployment I use a custom image in which HDFS is installed. The Dockerfile is:

FROM ubuntu:18.04

### Ubuntu with some basic tools
RUN apt-get update \
    && apt-get install -y curl git unzip wget openssh-server
RUN echo "root:dummypassword" | chpasswd
# RUN mkdir /home/hadoop/.ssh/ \
#     && ssh-keygen -b 2048 -t rsa -f /home/hadoop/.ssh/id_rsa -q -N "" \
#     && cat /home/hadoop/.ssh/id_rsa.pub >> root/.ssh/authorized_keys \
#     && ssh localhost

### Java 8
RUN apt update \
    && apt install -y openjdk-8-jdk openjdk-8-jre
ENV JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
ENV JRE_HOME=/usr/lib/jvm/java-8-openjdk-amd64/jre
ENV PATH=$PATH:$JAVA_HOME/bin

### Hadoop
RUN wget http://apache.claz.org/hadoop/common/hadoop-2.10.0/hadoop-2.10.0.tar.gz \
    && tar xzf hadoop-2.10.0.tar.gz \
    && mv hadoop-2.10.0 /usr/local/hadoop/
ENV HADOOP_HOME=/usr/local/hadoop
ENV HADOOP_MAPRED_HOME=$HADOOP_HOME
ENV HADOOP_COMMON_HOME=$HADOOP_HOME
ENV HADOOP_HDFS_HOME=$HADOOP_HOME
ENV YARN_HOME=$HADOOP_HOME
ENV HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
ENV HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
ENV PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin

COPY init-hdfs.sh home/init-hdfs.sh
COPY ssh_config etc/ssh/ssh_config
COPY sshd_config etc/ssh/sshd_config
COPY yarn-site.xml usr/local/hadoop/etc/hadoop/yarn-site.xml
COPY mapred-site.xml usr/local/hadoop/etc/hadoop/mapred-site.xml
COPY hdfs-site.xml usr/local/hadoop/etc/hadoop/hdfs-site.xml
COPY hadoop-env.sh usr/local/hadoop/etc/hadoop/hadoop-env.sh
COPY core-site.xml usr/local/hadoop/etc/hadoop/core-site.xml

RUN /etc/init.d/ssh restart \
    && ssh-keygen -b 2048 -t rsa -f /root/.ssh/id_rsa -q -N "" \
    && cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys \
    && ssh-keyscan 'localhost (127.0.0.1)' >> /root/.ssh/known_hosts \
    && ssh-keyscan 'localhost' >> /root/.ssh/known_hosts \
    && ssh-keyscan '0.0.0.0' >> /root/.ssh/known_hosts
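The Deployment references this image as hdfs:mar2020. A minimal sketch of how such an image can be built directly against minikube's Docker daemon (so no registry push is needed) might look like this; it assumes the Dockerfile and the copied config files (core-site.xml, hdfs-site.xml, init-hdfs.sh, ...) sit in the current directory:

# Point the local Docker client at minikube's Docker daemon so the image
# becomes visible to the cluster without pushing it to a registry.
eval $(minikube docker-env)

# Build the image with the tag used in the Deployment manifest.
docker build -t hdfs:mar2020 .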

To initialise the microservice, I first start SSH and format the NameNode. Up to this point everything seems fine (no error message, and the microservice is able to write files and folders to the external volume):

/etc/init.d/ssh start
echo Y | hdfs namenode -format

20/04/09 07:29:19 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/04/09 07:29:19 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-e90fa19a-af02-4995-90c9-2e5e8b48e82f
20/04/09 07:29:20 INFO namenode.FSEditLog: Edit logging is async:true
20/04/09 07:29:20 INFO namenode.FSNamesystem: KeyProvider: null
20/04/09 07:29:20 INFO namenode.FSNamesystem: fsLock is fair: true
20/04/09 07:29:20 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
20/04/09 07:29:20 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
20/04/09 07:29:20 INFO namenode.FSNamesystem: supergroup = supergroup
20/04/09 07:29:20 INFO namenode.FSNamesystem: isPermissionEnabled = true
20/04/09 07:29:20 INFO namenode.FSNamesystem: HA Enabled: false
20/04/09 07:29:20 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20/04/09 07:29:20 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
20/04/09 07:29:20 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
20/04/09 07:29:20 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20/04/09 07:29:20 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Apr 09 07:29:20
20/04/09 07:29:20 INFO util.GSet: Computing capacity for map BlocksMap
20/04/09 07:29:20 INFO util.GSet: VM type = 64-bit
20/04/09 07:29:20 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
20/04/09 07:29:20 INFO util.GSet: capacity = 2^21 = 2097152 entries
20/04/09 07:29:20 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
20/04/09 07:29:20 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
20/04/09 07:29:20 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
20/04/09 07:29:20 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
20/04/09 07:29:20 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
20/04/09 07:29:20 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
20/04/09 07:29:20 INFO blockmanagement.BlockManager: defaultReplication = 1
20/04/09 07:29:20 INFO blockmanagement.BlockManager: maxReplication = 512
20/04/09 07:29:20 INFO blockmanagement.BlockManager: minReplication = 1
20/04/09 07:29:20 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
20/04/09 07:29:20 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
20/04/09 07:29:20 INFO blockmanagement.BlockManager: encryptDataTransfer = false
20/04/09 07:29:20 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
20/04/09 07:29:20 INFO namenode.FSNamesystem: Append Enabled: true
20/04/09 07:29:20 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
20/04/09 07:29:20 INFO util.GSet: Computing capacity for map INodeMap
20/04/09 07:29:20 INFO util.GSet: VM type = 64-bit
20/04/09 07:29:20 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
20/04/09 07:29:20 INFO util.GSet: capacity = 2^20 = 1048576 entries
20/04/09 07:29:20 INFO namenode.FSDirectory: ACLs enabled? false
20/04/09 07:29:20 INFO namenode.FSDirectory: XAttrs enabled? true
20/04/09 07:29:20 INFO namenode.NameNode: Caching file names occurring more than 10 times
20/04/09 07:29:20 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
20/04/09 07:29:20 INFO util.GSet: Computing capacity for map cachedBlocks
20/04/09 07:29:20 INFO util.GSet: VM type = 64-bit
20/04/09 07:29:20 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
20/04/09 07:29:20 INFO util.GSet: capacity = 2^18 = 262144 entries
20/04/09 07:29:20 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
20/04/09 07:29:20 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
20/04/09 07:29:20 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20/04/09 07:29:20 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
20/04/09 07:29:20 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20/04/09 07:29:20 INFO util.GSet: Computing capacity for map NameNodeRetryCache
20/04/09 07:29:20 INFO util.GSet: VM type = 64-bit
20/04/09 07:29:20 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
20/04/09 07:29:20 INFO util.GSet: capacity = 2^15 = 32768 entries
20/04/09 07:29:20 INFO namenode.FSImage: Allocated new BlockPoolId: BP-723004653-172.17.0.8-1586417360741
20/04/09 07:29:20 INFO common.Storage: Storage directory /home/hadoop/hadoopinfra/hdfs/namenode has been successfully formatted.
20/04/09 07:29:20 INFO namenode.FSImageFormatProtobuf: Saving image file /home/hadoop/hadoopinfra/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 using no compression
20/04/09 07:29:21 INFO namenode.FSImageFormatProtobuf: Image file /home/hadoop/hadoopinfra/hdfs/namenode/current/fsimage.ckpt_0000000000000000000 of size 323 bytes saved in 0 seconds .
20/04/09 07:29:21 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/04/09 07:29:21 INFO namenode.FSImage: FSImageSaver clean checkpoint: txid = 0 when meet shutdown.
20/04/09 07:29:21 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hdfs/172.17.0.8

The problem appears when I start HDFS (start-dfs.sh && start-yarn.sh). No error is printed in the shell, but the jps command shows no Java processes, and any $HADOOP_HOME/bin/hadoop fs -mkdir /tmp returns an error saying that the connection from hdfs/<container IP> to localhost:9000 could not be established. Digging further, I eventually realised that the NameNode is not running. To start only the NameNode, I ran the hdfs namenode command; its output follows the sketch below.
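The start-up and checks described above were roughly the following (a sketch, not a transcript; the final foreground hdfs namenode run is what produced the log that follows):

# Start the HDFS and YARN daemons, then look for the expected Java processes.
start-dfs.sh && start-yarn.sh
jps                                      # no NameNode/DataNode processes show up

# Any filesystem operation then fails to reach the NameNode on localhost:9000.
$HADOOP_HOME/bin/hadoop fs -mkdir /tmp

# Run only the NameNode in the foreground to see why it dies.
hdfs namenode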

20/04/09 07:29:49 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
20/04/09 07:29:49 INFO namenode.NameNode: createNameNode []
20/04/09 07:29:49 INFO impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
20/04/09 07:29:49 INFO impl.MetricsSystemImpl: Scheduled Metric snapshot period at 10 second(s).
20/04/09 07:29:49 INFO impl.MetricsSystemImpl: NameNode metrics system started
20/04/09 07:29:49 INFO namenode.NameNode: fs.defaultFS is hdfs://localhost:9000
20/04/09 07:29:49 INFO namenode.NameNode: Clients are to use localhost:9000 to access this namenode/service.
20/04/09 07:29:50 INFO util.JvmPauseMonitor: Starting JVM pause monitor
20/04/09 07:29:50 INFO hdfs.DFSUtil: Starting Web-server for hdfs at: http://0.0.0.0:50070
20/04/09 07:29:50 INFO mortbay.log: Logging to org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via org.mortbay.log.Slf4jLog
20/04/09 07:29:50 INFO server.AuthenticationFilter: Unable to initialize FileSignerSecretProvider, falling back to use random secrets.
20/04/09 07:29:50 INFO http.HttpRequestLog: Http request log for http.requests.namenode is not defined
20/04/09 07:29:50 INFO http.HttpServer2: Added global filter 'safety' (class=org.apache.hadoop.http.HttpServer2$QuotingInputFilter)
20/04/09 07:29:50 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context hdfs
20/04/09 07:29:50 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context logs
20/04/09 07:29:50 INFO http.HttpServer2: Added filter static_user_filter (class=org.apache.hadoop.http.lib.StaticUserWebFilter$StaticUserFilter) to context static
20/04/09 07:29:50 INFO http.HttpServer2: Added filter 'org.apache.hadoop.hdfs.web.AuthFilter' (class=org.apache.hadoop.hdfs.web.AuthFilter)
20/04/09 07:29:50 INFO http.HttpServer2: addJerseyResourcePackage: packageName=org.apache.hadoop.hdfs.server.namenode.web.resources;org.apache.hadoop.hdfs.web.resources, pathSpec=/webhdfs/v1/*
20/04/09 07:29:50 INFO http.HttpServer2: Jetty bound to port 50070
20/04/09 07:29:50 INFO mortbay.log: jetty-6.1.26
20/04/09 07:29:50 INFO mortbay.log: Started HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
20/04/09 07:29:50 WARN namenode.FSNamesystem: Only one image storage directory (dfs.namenode.name.dir) configured. Beware of data loss due to lack of redundant storage directories!
20/04/09 07:29:50 WARN namenode.FSNamesystem: Only one namespace edits storage directory (dfs.namenode.edits.dir) configured. Beware of data loss due to lack of redundant storage directories!
20/04/09 07:29:51 INFO namenode.FSEditLog: Edit logging is async:true
20/04/09 07:29:51 INFO namenode.FSNamesystem: KeyProvider: null
20/04/09 07:29:51 INFO namenode.FSNamesystem: fsLock is fair: true
20/04/09 07:29:51 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
20/04/09 07:29:51 INFO namenode.FSNamesystem: fsOwner = root (auth:SIMPLE)
20/04/09 07:29:51 INFO namenode.FSNamesystem: supergroup = supergroup
20/04/09 07:29:51 INFO namenode.FSNamesystem: isPermissionEnabled = true
20/04/09 07:29:51 INFO namenode.FSNamesystem: HA Enabled: false
20/04/09 07:29:51 INFO common.Util: dfs.datanode.fileio.profiling.sampling.percentage set to 0. Disabling file IO profiling
20/04/09 07:29:51 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit: configured=1000, counted=60, effected=1000
20/04/09 07:29:51 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
20/04/09 07:29:51 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
20/04/09 07:29:51 INFO blockmanagement.BlockManager: The block deletion will start around 2020 Apr 09 07:29:51
20/04/09 07:29:51 INFO util.GSet: Computing capacity for map BlocksMap
20/04/09 07:29:51 INFO util.GSet: VM type = 64-bit
20/04/09 07:29:51 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
20/04/09 07:29:51 INFO util.GSet: capacity = 2^21 = 2097152 entries
20/04/09 07:29:51 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
20/04/09 07:29:51 WARN conf.Configuration: No unit for dfs.heartbeat.interval(3) assuming SECONDS
20/04/09 07:29:51 WARN conf.Configuration: No unit for dfs.namenode.safemode.extension(30000) assuming MILLISECONDS
20/04/09 07:29:51 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
20/04/09 07:29:51 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.min.datanodes = 0
20/04/09 07:29:51 INFO blockmanagement.BlockManagerSafeMode: dfs.namenode.safemode.extension = 30000
20/04/09 07:29:51 INFO blockmanagement.BlockManager: defaultReplication = 1
20/04/09 07:29:51 INFO blockmanagement.BlockManager: maxReplication = 512
20/04/09 07:29:51 INFO blockmanagement.BlockManager: minReplication = 1
20/04/09 07:29:51 INFO blockmanagement.BlockManager: maxReplicationStreams = 2
20/04/09 07:29:51 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
20/04/09 07:29:51 INFO blockmanagement.BlockManager: encryptDataTransfer = false
20/04/09 07:29:51 INFO blockmanagement.BlockManager: maxNumBlocksToLog = 1000
20/04/09 07:29:51 INFO namenode.FSNamesystem: Append Enabled: true
20/04/09 07:29:51 INFO namenode.FSDirectory: GLOBAL serial map: bits=24 maxEntries=16777215
20/04/09 07:29:51 INFO util.GSet: Computing capacity for map INodeMap
20/04/09 07:29:51 INFO util.GSet: VM type = 64-bit
20/04/09 07:29:51 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
20/04/09 07:29:51 INFO util.GSet: capacity = 2^20 = 1048576 entries
20/04/09 07:29:51 INFO namenode.FSDirectory: ACLs enabled? false
20/04/09 07:29:51 INFO namenode.FSDirectory: XAttrs enabled? true
20/04/09 07:29:51 INFO namenode.NameNode: Caching file names occurring more than 10 times
20/04/09 07:29:51 INFO snapshot.SnapshotManager: Loaded config captureOpenFiles: falseskipCaptureAccessTimeOnlyChange: false
20/04/09 07:29:51 INFO util.GSet: Computing capacity for map cachedBlocks
20/04/09 07:29:51 INFO util.GSet: VM type = 64-bit
20/04/09 07:29:51 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
20/04/09 07:29:51 INFO util.GSet: capacity = 2^18 = 262144 entries
20/04/09 07:29:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
20/04/09 07:29:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
20/04/09 07:29:51 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
20/04/09 07:29:51 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
20/04/09 07:29:51 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
20/04/09 07:29:51 INFO util.GSet: Computing capacity for map NameNodeRetryCache
20/04/09 07:29:51 INFO util.GSet: VM type = 64-bit
20/04/09 07:29:51 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
20/04/09 07:29:51 INFO util.GSet: capacity = 2^15 = 32768 entries
20/04/09 07:29:51 INFO common.Storage: Lock on /home/hadoop/hadoopinfra/hdfs/namenode/in_use.lock acquired by nodename 86@hdfs
20/04/09 07:29:51 INFO namenode.FileJournalManager: Recovering unfinalized segments in /home/hadoop/hadoopinfra/hdfs/namenode/current
20/04/09 07:29:51 INFO namenode.FSImage: No edit log streams selected.
20/04/09 07:29:51 INFO namenode.FSImage: Planning to load image: FSImageFile(file=/home/hadoop/hadoopinfra/hdfs/namenode/current/fsimage_0000000000000000000, cpktTxId=0000000000000000000)
20/04/09 07:29:51 INFO namenode.FSImageFormatPBINode: Loading 1 INodes.
20/04/09 07:29:51 INFO namenode.FSImageFormatPBINode: Successfully loaded 1 inodes
20/04/09 07:29:51 INFO namenode.FSImageFormatProtobuf: Loaded FSImage in 0 seconds.
20/04/09 07:29:51 INFO namenode.FSImage: Loaded image for txid 0 from /home/hadoop/hadoopinfra/hdfs/namenode/current/fsimage_0000000000000000000
20/04/09 07:29:51 INFO namenode.FSNamesystem: Need to save fs image? false (staleImage=false, haEnabled=false, isRollingUpgrade=false)
20/04/09 07:29:51 INFO namenode.FSEditLog: Starting log segment at 1
20/04/09 07:29:52 INFO namenode.NameCache: initialized with 0 entries 0 lookups
20/04/09 07:29:52 INFO namenode.FSNamesystem: Finished loading FSImage in 795 msecs
20/04/09 07:29:52 INFO namenode.NameNode: RPC server is binding to localhost:9000
20/04/09 07:29:52 INFO namenode.NameNode: Enable NameNode state context:false
20/04/09 07:29:52 INFO ipc.CallQueueManager: Using callQueue: class java.util.concurrent.LinkedBlockingQueue queueCapacity: 1000 scheduler: class org.apache.hadoop.ipc.DefaultRpcScheduler
20/04/09 07:29:52 INFO ipc.Server: Starting Socket Reader #1 for port 9000
20/04/09 07:29:52 INFO namenode.FSNamesystem: Registered FSNamesystemState MBean
20/04/09 07:29:52 INFO namenode.FSNamesystem: Stopping services started for active state
20/04/09 07:29:52 INFO namenode.FSEditLog: Ending log segment 1, 1
20/04/09 07:29:52 INFO namenode.FSEditLog: Number of transactions: 2 Total time for transactions(ms): 1 Number of transactions batched in Syncs: 0 Number of syncs: 3 SyncTimes(ms): 327
20/04/09 07:29:52 INFO namenode.FileJournalManager: Finalizing edits file /home/hadoop/hadoopinfra/hdfs/namenode/current/edits_inprogress_0000000000000000001 -> /home/hadoop/hadoopinfra/hdfs/namenode/current/edits_0000000000000000001-0000000000000000002
20/04/09 07:29:52 INFO namenode.FSEditLog: FSEditLogAsync was interrupted, exiting
20/04/09 07:29:52 INFO ipc.Server: Stopping server on 9000
20/04/09 07:29:52 INFO namenode.FSNamesystem: Stopping services started for active state
20/04/09 07:29:52 INFO namenode.FSNamesystem: Stopping services started for standby state
20/04/09 07:29:52 INFO mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
20/04/09 07:29:52 INFO impl.MetricsSystemImpl: Stopping NameNode metrics system...
20/04/09 07:29:52 INFO impl.MetricsSystemImpl: NameNode metrics system stopped.
20/04/09 07:29:52 INFO impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
20/04/09 07:29:52 ERROR namenode.NameNode: Failed to start namenode.
java.io.IOException: Could not parse line: 192.168.1.XX 0 0 0 - /home/hadoop
    at org.apache.hadoop.fs.DF.parseOutput(DF.java:195)
    at org.apache.hadoop.fs.DF.getFilesystem(DF.java:76)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker$CheckedVolume.<init>(NameNodeResourceChecker.java:69)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.addDirToCheck(NameNodeResourceChecker.java:165)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeResourceChecker.<init>(NameNodeResourceChecker.java:134)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startCommonServices(FSNamesystem.java:1134)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.startCommonServices(NameNode.java:816)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:755)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:961)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:940)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1714)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1782)
20/04/09 07:29:52 INFO util.ExitUtil: Exiting with status 1: java.io.IOException: Could not parse line: 192.168.1.XX 0 0 0 - /home/hadoop
20/04/09 07:29:52 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hdfs/172.17.0.8

The exception says "java.io.IOException: Could not parse line: 192.168.1.XX 0 0 0 - /home/hadoop", which confuses me because 192.168.1.XX is my computer's IP. When I instead deploy the pod with a hostPath volume pointing to a folder on minikube's own disk, the NameNode works fine. How can I run the NameNode on an external volume in Kubernetes without running into this problem? Is the way I mount the volume with minikube correct? Or does the problem come from the HDFS side?
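For reference, the stack trace points at org.apache.hadoop.fs.DF, which shells out to df for the NameNode storage directory (dfs.namenode.name.dir) and parses its output, so one hedged way to see exactly what the NameNode is choking on is to run df against that directory inside the pod; the pod name below is a placeholder, take the real one from kubectl get pods:

# Open a shell in the HDFS pod (replace <hdfs-pod-name> with the real pod name).
kubectl exec -it <hdfs-pod-name> -- /bin/bash

# Hadoop's NameNodeResourceChecker parses df output for the NameNode storage
# directory; the line it reports ("192.168.1.XX 0 0 0 - /home/hadoop") suggests
# that on the volume created by `minikube mount` the "Filesystem" column is the
# host's IP and the size columns are zero.
df -k /home/hadoop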
