Fixing a Hadoop datanode that fails to start after a restart


The author runs a Hadoop distributed installation on virtual machines. Because the datanode and namenode were shut down in the wrong order, the datanode frequently fails to start afterward.

A distinction is worth making first. The very first time the whole cluster is started it may come up successfully, and only a later start fails because of an abnormal shutdown. A first-time startup failure has many possible causes: a mistake written into a configuration file, or a broken passwordless SSH login setup, for example.

A failure on a subsequent start has different causes, and the focus of troubleshooting should be on the files the cluster generates dynamically at runtime. This article discusses that second case:

Most of the time the cause is a mismatch between the namespaceID in the datanode's VERSION file and the namespaceID in the namenode's VERSION file. The namespaceID is generated when the hdfs namenode -format command is executed.
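As a sketch of the diagnosis, the two namespaceID values can be compared directly. The /tmp/demo paths and ID numbers below are made up for illustration; on a real cluster the VERSION files live under the directories configured by dfs.namenode.name.dir and dfs.datanode.data.dir in hdfs-site.xml.

```shell
# Demo setup: stand-in VERSION files (real ones sit under the namenode's
# and datanode's configured storage directories).
mkdir -p /tmp/demo/name/current /tmp/demo/data/current
echo "namespaceID=1113075867" > /tmp/demo/name/current/VERSION
echo "namespaceID=2025764549" > /tmp/demo/data/current/VERSION

# Extract the namespaceID recorded on each side.
nn_id=$(grep '^namespaceID=' /tmp/demo/name/current/VERSION | cut -d= -f2)
dn_id=$(grep '^namespaceID=' /tmp/demo/data/current/VERSION | cut -d= -f2)

if [ "$nn_id" = "$dn_id" ]; then
  echo "namespaceID matches: $nn_id"
else
  echo "namespaceID mismatch: namenode=$nn_id datanode=$dn_id"
fi
```

If the two IDs differ, as in this fabricated example, the datanode will refuse to register with the namenode at startup.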

The solution steps are as follows:

1. First stop the related processes on the namenode. Switch to Hadoop's /sbin directory and run:

sh stop-dfs.sh

sh stop-yarn.sh

2. Switch to the corresponding /current directory under Hadoop's storage path on each node and delete all files under current.

3. Once the VERSION and other files under /current have been cleared on both the datanode and the namenode, go back to the namenode, execute the hdfs namenode -format command, and then switch to the /sbin directory of the namenode's Hadoop installation:

sh start-dfs.sh

sh start-yarn.sh

(In newer Hadoop versions YARN replaces the old MapReduce v1 daemons, so the commands differ slightly from older guides.)
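The steps above can be sketched end to end. The /tmp/demo path below is a stand-in for a datanode's configured data directory, and the Hadoop commands are left as comments because they only make sense on a live cluster:

```shell
# Stand-in for a datanode's dfs.datanode.data.dir/current directory
# (assumption: the real path comes from hdfs-site.xml on your cluster).
DN_CURRENT=/tmp/demo/dn/current
mkdir -p "$DN_CURRENT"
touch "$DN_CURRENT/VERSION" "$DN_CURRENT/dfsUsed"   # fake stale metadata

# Step 1 (on the namenode, from Hadoop's /sbin -- not runnable here):
#   sh stop-dfs.sh
#   sh stop-yarn.sh

# Step 2: clear everything under current on each node.
rm -rf "$DN_CURRENT"/*

# Step 3 (back on the namenode):
#   hdfs namenode -format
#   sh start-dfs.sh
#   sh start-yarn.sh

ls -A "$DN_CURRENT"   # nothing left: the stale namespaceID is gone
```

After the format, the namenode writes a fresh namespaceID, and the emptied datanodes adopt it on their next start.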

You should then see the corresponding nodes load successfully.
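To confirm, the JDK's jps tool lists the running Hadoop daemons on each machine. The sample output below is illustrative, not captured from a live cluster:

```shell
# On a real worker node you would simply run:  jps
# Illustrative sample of what a healthy worker shows:
sample_jps="4215 DataNode
4377 NodeManager
4102 Jps"

# A quick scripted check that the DataNode daemon is present:
if echo "$sample_jps" | grep -q 'DataNode'; then
  echo "DataNode is running"
fi
```

On the namenode you would expect NameNode, SecondaryNameNode, and ResourceManager entries instead.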

The underlying idea: when something goes wrong, clear out the files causing the interference, straighten out your thinking, and start over cleanly, which beats patching in place.

(The folder specified in the configuration file holds only the hdfs, tmp, and log directories, and the remaining files are generated dynamically by the scripts, so even after deletion the system will regenerate them and keep working. And if you do delete the wrong thing, a VM snapshot will save the day.)

Copyright © Windows knowledge All Rights Reserved