
Handling several exceptions that appear when installing Hadoop: Hadoop fails to start, and fixing the "no namenode to stop" and "no datanode to stop" problems

       feiyacz 2013-01-23

Hadoop fails to start normally (1)

After running $ bin/start-all.sh, the daemons fail to start.

Exception 1

       Exception in thread "main" java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.

      localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:214)

      localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:135)

      localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:119)

      localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:481)


Solution: this happens because the site configuration is missing. On 0.21.0 the settings below go in conf/mapred-site.xml; on earlier releases they go in core-site.xml. On 0.20.2, configuring mapred-site.xml has no effect — only core-site.xml works:

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://localhost:9001</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
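As a quick sanity check (a minimal shell sketch, not from the original; run it from the Hadoop install directory), you can grep the conf files to see where fs.default.name is actually defined:

    grep -A 1 "fs.default.name" conf/core-site.xml conf/mapred-site.xml
    # The file that prints a <value> line after the <name> is the one your
    # version is actually reading.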

       

Hadoop fails to start normally (2)

Exception 2

      starting namenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-namenode-aist.out

      localhost: starting datanode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-datanode-aist.out

      localhost: starting secondarynamenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-secondarynamenode-aist.out

      localhost: Exception in thread "main" java.lang.NullPointerException

      localhost:      at org.apache.hadoop.net.NetUtils.createSocketAddr(NetUtils.java:134)

      localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:156)

      localhost:      at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:160)

      localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.initialize(SecondaryNameNode.java:131)

      localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.<init>(SecondaryNameNode.java:115)

      localhost:      at org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode.main(SecondaryNameNode.java:469)

      starting jobtracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-jobtracker-aist.out

      localhost: starting tasktracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-tasktracker-aist.out

       

Solution: the same as for Exception 1 — the site configuration is missing (mapred-site.xml on 0.21.0; core-site.xml on earlier releases, including 0.20.2). Apply the same configuration shown above.


       

Hadoop fails to start normally (3)

Exception 3

      starting namenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-namenode-aist.out

      localhost: starting datanode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-datanode-aist.out

      localhost: Error: JAVA_HOME is not set.

      localhost: starting secondarynamenode, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-secondarynamenode-aist.out

      localhost: Error: JAVA_HOME is not set.

      starting jobtracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-jobtracker-aist.out

      localhost: starting tasktracker, logging to /home/xixitie/hadoop/bin/../logs/hadoop-root-tasktracker-aist.out

      localhost: Error: JAVA_HOME is not set.

       

Solution:

Set the JDK environment variables in conf/hadoop-env.sh under your Hadoop installation:

      JAVA_HOME=/home/xixitie/jdk

      CLASSPATH=$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

      export JAVA_HOME CLASSPATH
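To verify the path actually points at a JDK (a quick sketch; the path above is the original author's — substitute your own):

    ls $JAVA_HOME/bin/javac       # present in a JDK, absent in a JRE-only install
    $JAVA_HOME/bin/java -version  # should print the expected JDK version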




Hadoop fails to start normally (4)

Exception 4: the filesystem name is configured as a bare localhost:9000 instead of hdfs://localhost:9000

The warning messages are as follows:

11/04/20 23:33:25 INFO security.Groups: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping; cacheTimeout=300000
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN conf.Configuration: mapred.task.id is deprecated. Instead, use mapreduce.task.attempt.id
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.
11/04/20 23:33:25 WARN fs.FileSystem: "localhost:9000" is a deprecated filesystem name. Use "hdfs://localhost:9000/" instead.

       

Solution:

In the configuration, write the filesystem name as hdfs://localhost:9000 rather than localhost:9000, as the warnings above request:

<property>
  <name>fs.default.name</name>
  <value>hdfs://localhost:9000</value>
</property>
<property>
  <name>mapred.job.tracker</name>
  <value>hdfs://localhost:9001</value>
</property>
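Once the value is changed, any filesystem command can confirm the warning is gone (a quick check, not from the original):

    bin/hadoop fs -ls /   # should list the HDFS root without a
                          # "deprecated filesystem name" warning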

       

Hadoop fails to start normally (5)

Exception 5: fixing the "no namenode to stop" problem:

The log messages are as follows:

11/04/20 21:48:50 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 0 time(s).
11/04/20 21:48:51 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 1 time(s).
11/04/20 21:48:52 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 2 time(s).
11/04/20 21:48:53 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 3 time(s).
11/04/20 21:48:54 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 4 time(s).
11/04/20 21:48:55 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 5 time(s).
11/04/20 21:48:56 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 6 time(s).
11/04/20 21:48:57 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 7 time(s).
11/04/20 21:48:58 INFO ipc.Client: Retrying connect to server: localhost/127.0.0.1:9000. Already tried 8 time(s).

       

Solution:

This happens because the namenode never started — which is also why shutdown later reports "no namenode to stop". State left over from a previous run can keep the namenode from coming up, so reformat it:

$ bin/hadoop namenode -format

then start the daemons again:

$ bin/start-all.sh

(Note that -format erases the existing HDFS metadata, so only do this when the data is expendable.)
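Before reformatting, it is worth confirming that the namenode really is down and why (a sketch; jps ships with the JDK, and the .log files sit next to the .out files shown above):

    jps   # a healthy single-node setup lists NameNode, DataNode,
          # SecondaryNameNode, JobTracker and TaskTracker
    tail -n 50 logs/hadoop-*-namenode-*.log   # the namenode log usually
                                              # names the real cause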


Hadoop fails to start normally (6)

Exception 6: fixing the "no datanode to stop" problem:

Sometimes corrupted on-disk state prevents the datanode from starting. Running hadoop namenode -format alone does not fix it, because the files under /tmp are not cleared: the freshly formatted namenode gets a new namespaceID, while the datanode's old storage under /tmp/hadoop* still carries the previous one, so the datanode refuses to start. The /tmp/hadoop* files must be removed as well.
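One way to confirm this diagnosis before deleting anything (a sketch assuming the default hadoop.tmp.dir of /tmp/hadoop-<user>; adjust if dfs.name.dir or dfs.data.dir point elsewhere):

    cat /tmp/hadoop-*/dfs/name/current/VERSION  # the namenode's namespaceID
    cat /tmp/hadoop-*/dfs/data/current/VERSION  # the datanode's namespaceID —
                                                # a mismatch is the smoking gun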

Steps:

1. First delete /tmp inside HDFS (hadoop:///tmp):

   hadoop fs -rmr /tmp

2. Stop Hadoop:

   stop-all.sh

3. Delete /tmp/hadoop* on the local filesystem:

   rm -rf /tmp/hadoop*

4. Reformat the namenode:

   hadoop namenode -format

5. Start Hadoop:

   start-all.sh
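For convenience, the same recovery procedure as a single pasteable shell sketch (run from the Hadoop install directory; it assumes the default /tmp layout, and step 4 destroys the existing HDFS metadata):

    bin/hadoop fs -rmr /tmp       # 1. remove /tmp inside HDFS (skip if HDFS is unreachable)
    bin/stop-all.sh               # 2. stop all daemons
    rm -rf /tmp/hadoop*           # 3. clear the local Hadoop state under /tmp
    bin/hadoop namenode -format   # 4. reformat the namenode
    bin/start-all.sh              # 5. start everything again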

      之后即可解決這個datanode沒法啟動的問題
