Data replication error in Hadoop

I am setting up a Hadoop single-node cluster on my machine by following Michael Noll's tutorial, and I have run into a data replication error.

Here is the full error message:

<code>hadoop@laptop:~/hadoop$ bin/hadoop dfs -copyFromLocal tmp/testfiles testfiles

12/05/04 16:18:41 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/testfiles/testfiles/file1.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:740)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)

12/05/04 16:18:41 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
12/05/04 16:18:41 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/hadoop/testfiles/testfiles/file1.txt" - Aborting...
copyFromLocal: java.io.IOException: File /user/hadoop/testfiles/testfiles/file1.txt could only be replicated to 0 nodes, instead of 1
12/05/04 16:18:41 ERROR hdfs.DFSClient: Exception closing file /user/hadoop/testfiles/testfiles/file1.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/testfiles/testfiles/file1.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/hadoop/testfiles/testfiles/file1.txt could only be replicated to 0 nodes, instead of 1
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
    at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
    at sun.reflect.GeneratedMethodAccessor7.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)

    at org.apache.hadoop.ipc.Client.call(Client.java:740)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
    at $Proxy0.addBlock(Unknown Source)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
    at $Proxy0.addBlock(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
    at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
</code>
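Since the message says the file could only be replicated to 0 nodes, the NameNode apparently does not see any live DataNode. A quick way to confirm that (a sketch using the dfsadmin report of this Hadoop generation, shown for reference rather than as actual output from my machine):

<code># ask the NameNode how many DataNodes it currently knows about
bin/hadoop dfsadmin -report
# on a working single-node cluster the report should list one live DataNode
</code>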

Also, when I run:

<code>bin/stop-all.sh
</code>

it reports that the datanode has not started and therefore cannot be stopped, even though the output of jps shows that a DataNode process is present.
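For reference, this is the check I mean (a minimal sketch; the sample jps listing below is what a healthy single-node setup from the tutorial is expected to show, with placeholder PIDs, not my actual output):

<code># list the running Hadoop Java daemons (jps ships with the JDK)
hadoop@laptop:~/hadoop$ jps
# expected on a working single-node cluster (PIDs will differ):
# 12341 NameNode
# 12342 DataNode
# 12343 SecondaryNameNode
# 12344 JobTracker
# 12345 TaskTracker
# 12346 Jps
</code>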

I have already tried formatting the namenode and changing the owner permissions, roughly as sketched below, but that did not help. I hope I have not left out any other relevant information.
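Concretely, what I tried was along these lines (a sketch only; /app/hadoop/tmp is the hadoop.tmp.dir suggested in the tutorial and the hadoop:hadoop owner is an assumption, so both may differ on another setup):

<code># stop all daemons before touching HDFS metadata
bin/stop-all.sh
# reformat the namenode (this erases existing HDFS contents)
bin/hadoop namenode -format
# fix ownership of the Hadoop temp/data directory (path assumed from the tutorial)
sudo chown -R hadoop:hadoop /app/hadoop/tmp
# bring the daemons back up
bin/start-all.sh
</code>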

Thanks in advance.
