Wednesday 15 April 2015

hadoop - LeaseExpiredException: No lease error on HDFS (Failed to close file)

I am trying to load a large dataset into a dynamically partitioned table in Hive.

I keep getting this error. If I load the data without partitioning, it works fine. If I work with a smaller dataset (with partitioning), it also works fine. But with the large dataset I start getting this error.
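For context, a dynamic-partition load of this shape usually needs the session settings below. This is a hedged sketch: the settings are real Hive parameters, but the values are illustrative and the table and column names (`target_table`, `staging_table`, `col1`, `col2`, `dt`) are hypothetical, not taken from the poster's actual query:

```sql
-- Enable dynamic partitioning (real Hive settings; values are illustrative).
SET hive.exec.dynamic.partition = true;
SET hive.exec.dynamic.partition.mode = nonstrict;
-- Raise the dynamic-partition limits for a large load (the defaults are often too low).
SET hive.exec.max.dynamic.partitions = 10000;
SET hive.exec.max.dynamic.partitions.pernode = 1000;

-- Hypothetical load: the dynamic partition column must come last in the SELECT list.
INSERT OVERWRITE TABLE target_table PARTITION (dt)
SELECT col1, col2, dt
FROM staging_table;
```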

The error:

2014-11-10 09:28:01,112 ERROR org.apache.hadoop.hdfs.DFSClient: Failed to close file /tmp/hive-username/hive_2014-11-10_09-25-26_785_2042278847834453465/_task_tmp.-ext-10002/pseudo_element_id=nn%09/_tmp.000002_2
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException): No lease on /tmp/hive-username/hive_2014-11-10_09-25-26_785_2042278847834453465/_task_tmp.-ext-10002/pseudo_element_id=nn%09/_tmp.000002_2: File does not exist. Holder DFSClient_NONMAPREDUCE_-737676454_1 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2445)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:2437)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFileInternal(FSNamesystem.java:2503)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.completeFile(FSNamesystem.java:2480)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.complete(NameNodeRpcServer.java:535)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.complete(ClientNamenodeProtocolServerSideTranslatorPB.java:337)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java:44958)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:453)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1002)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1701)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1697)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1695)

    at org.apache.hadoop.ipc.Client.call(Client.java:1225)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:202)
    at $Proxy10.complete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.complete(ClientNamenodeProtocolTranslatorPB.java:330)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:164)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:83)
    at $Proxy11.complete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:1795)
    at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:1782)
    at org.apache.hadoop.hdfs.DFSClient.closeAllFilesBeingWritten(DFSClient.java:709)
    at org.apache.hadoop.hdfs.DFSClient.close(DFSClient.java:726)
    at org.apache.hadoop.hdfs.DistributedFileSystem.close(DistributedFileSystem.java:561)
    at org.apache.hadoop.fs.FileSystem$Cache.closeAll(FileSystem.java:2398)
    at org.apache.hadoop.fs.FileSystem$Cache$ClientFinalizer.run(FileSystem.java:2414)
    at org.apache.hadoop.util.ShutdownHookManager$1.run(ShutdownHookManager.java:54)

This happens when multiple mappers try to access the same file. Check your code for a bug that causes the same file to be accessed simultaneously. It can also happen when a mapper tries to access a file that has already been deleted.
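One common mitigation for the multiple-writers situation described above (my own suggestion, not part of the original answer) is to route all rows of each partition to a single reducer, so that no two tasks write the same partition file. A hedged sketch with hypothetical table and column names (`target_table`, `staging_table`, `dt`):

```sql
-- DISTRIBUTE BY sends every row with the same dt value to one reducer,
-- so each partition directory has exactly one writer and leases don't collide.
INSERT OVERWRITE TABLE target_table PARTITION (dt)
SELECT col1, col2, dt
FROM staging_table
DISTRIBUTE BY dt;
```

The trade-off is less write parallelism per partition, but for a dynamic-partition load that is usually acceptable.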

hadoop hive hdfs
