You are viewing the RapidMiner Radoop documentation for version 9.2. Check here for the latest version.

Known Hadoop Errors

This section lists errors in the Hadoop components that might affect RapidMiner Radoop process execution. If there is a workaround for an issue, it is also described here.

General Hadoop errors

When using a Radoop Proxy or a SOCKS Proxy, HDFS operations may fail

  • The cause is HDFS-3068
  • Affects: probably newer Hadoop versions as well; the issue is still unresolved
  • Error message (during Full Test or file upload):
    java.lang.IllegalStateException: Must not use direct buffers with InputStream API
  • Workaround: add the dfs.client.use.legacy.blockreader property to the Advanced Hadoop Parameters list with a value of true

Windows client does not work with Linux cluster on Hadoop 2.2 (YARN)

  • The cause is YARN-1824
  • Affects: Hadoop 2.2 - YARN, with a Windows client and a Linux cluster
  • The import test fails, with a single line in the log: /bin/bash: /bin/java: No such file or directory
  • Setting mapreduce.app-submission.cross-platform to false changes the error message to "No job control"
  • There is no workaround for this issue; upgrading to Hadoop 2.4+ is recommended.

AccessControlException in log messages

  • The cause is HADOOP-11808
  • Warning message: WARN Client: Exception encountered while connecting to the server : org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]
  • This message does not affect the execution or the results of the process.

General Hive errors

SocketTimeoutException: Read timed out is thrown when using Hive-on-Spark

  • When Hive-on-Spark is used with Spark Dynamic Allocation disabled, parsing a HiveQL command may start a SparkSession, and during that period other requests may fail. See HIVE-17532.
  • Affects: Hive with Hive-on-Spark enabled
  • Solution: enabling Spark Dynamic Allocation in the Hive service avoids this issue. Note that SocketTimeoutException may still be thrown for other reasons; please consult your Hadoop support in that case.

IllegalMonitorStateException is thrown during process execution

  • The cause is probably HIVE-9598. Usually occurs after a long period of inactivity on the Studio interface, or if the HiveServer2 service is changed or restarted.
  • Affects: Hive 0.13 (may affect earlier releases); said to be fixed in Hive 0.14
  • Error message example:
    java.lang.RuntimeException: java.lang.IllegalMonitorStateException
        at eu.radoop.datahandler.hive.HiveHandler.runFastScriptTimeout(HiveHandler.java:761)
        at eu.radoop.datahandler.hive.HiveHandler.runFastScriptsNoParams(HiveHandler.java:727)
        at eu.radoop.datahandler.hive.HiveHandler.runFastScript(HiveHandler.java:654)
        at eu.radoop.datahandler.hive.HiveHandler.dropIfExists(HiveHandler.java:1853)
        ...
    Caused by: java.lang.IllegalMonitorStateException
        at java.util.concurrent.locks.ReentrantLock$Sync.tryRelease(Unknown Source)
        at java.util.concurrent.locks.AbstractQueuedSynchronizer.release(Unknown Source)
        at java.util.concurrent.locks.ReentrantLock.unlock(Unknown Source)
        at org.apache.hive.jdbc.HiveStatement.execute(HiveStatement.java:239)
  • Workaround: re-opening the process in Studio may solve it; if not, a Studio restart should. If this is a constant issue on your cluster, please upgrade to a Hive version where this issue has been fixed (see the ticket above).

Hive job fails with MapredLocalTask error

  • Hive may start a so-called local task to perform a JOIN. If there is an error during this local work (typically an out of memory error), it may only return an error code and not a proper error message.
  • Affects: Hive 0.13.0, Hive 1.0.0, Hive 1.1.0 and potentially other versions
  • Error message example (the return code may differ): return code 3 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
  • See the full error message in the /tmp/hive/local directory (by default) on the cluster node that performed the task.
  • Workaround: check whether the Join operator in your process uses the appropriate keys, so the result set does not explode. If the Join is defined correctly, add the following key-value pair to the Advanced Hive Parameters list for your connection: key hive.auto.convert.join with value false.
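The effect of the hive.auto.convert.join parameter can also be verified manually in a Beeline session before adding it to the connection. A minimal sketch, where the table and column names are hypothetical:

```sql
-- Disable the automatic conversion of joins to local map-joins
-- for the current session only
SET hive.auto.convert.join=false;

-- Re-run the failing join to check whether the MapredLocalTask error disappears
SELECT a.id, b.val
FROM left_tbl a JOIN right_tbl b ON a.id = b.id;
```

If the query succeeds with the setting disabled, adding the same key-value pair to the Advanced Hive Parameters list applies it to every query of the connection.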

Hive job fails with Kryo serializer error

  • The cause is probably the same as for HIVE-7711
  • Affects: Hive 1.1.0 and potentially other versions
  • Error message: org.apache.hive.com.esotericsoftware.kryo.KryoException: Unable to find class: [...]
  • Workaround: re-running the process might help. Adding the following key-value pair to the Advanced Hive Parameters list for the connection prevents this type of error: key hive.plan.serialization.format with value javaXML. Please note that starting from Hive 2.0, kryo is the only supported serialization format. See HIVE-12609 and Hive Configuration Properties for more information.
  • Manually installing the RapidMiner Radoop functions prevents this type of error.

Hive job fails with NoClassDefFoundError error

  • The cause is addressed in HIVE-2573; the patch is included in CDH 5.4.3
  • Affects: CDH 5.4.0 to CDH 5.4.2
  • Error message: java.sql.SQLException: java.lang.NoClassDefFoundError: [...]
  • Error message in the HiveServer2 log: java.lang.RuntimeException: java.lang.NoClassDefFoundError: [...]
  • Solution: re-running the process may help, but an upgrade to CDH 5.4.3 is the permanent solution.
  • Manually installing the RapidMiner Radoop functions also prevents this type of error.

Hive job fails before completion

  • The cause is probably HIVE-4605
  • Affects: Hive 0.13.1 and below
  • Error message: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
  • Error message in the HiveServer2 log: org.apache.hadoop.hive.ql.metadata.HiveException: Unable to rename output from [...] .hive-staging_hive_[...] to: .hive-staging_hive_...
  • There is no known workaround; please re-run the process, preferably without any concurrent read from the same Hive table.

JOIN may lead to NullPointerException in CDH

  • The cause may be HIVE-3872, but there are no MAP JOIN hints
  • Usually some kind of self join of a complex view leads to this error
  • Workaround: use a Materialize Data (or Multiply) operator before the Join operator (in case of multiple Joins, you may have to find out exactly which is the first Join that leads to this error, and materialize right before it)

Number of connections to Zookeeper reaches the maximum allowed

  • The cause is that each HiveServer2 "client" creates a new connection to Zookeeper: HIVE-4132, HIVE-6375
  • Affects: Hive 0.10 and several newer versions.
  • After the maximum number of connections (the default is 60, which Radoop can easily reach) is reached, HiveServer2 becomes inaccessible, since its connection attempt to Zookeeper fails. A HiveServer2 or Zookeeper restart is required in this case.
  • Workaround: increase the maxClientCnxns property of Zookeeper, e.g. to 500.
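If your Hadoop administrator agrees to raise the limit, the change is made in the Zookeeper configuration file. A sketch, assuming the standard zoo.cfg location of your distribution:

```
# zoo.cfg - maximum number of concurrent client connections per source IP
# (the default of 60 can be exhausted by HiveServer2 clients such as Radoop)
maxClientCnxns=500
```

A Zookeeper service restart is required for the new value to take effect.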

Non latin1 characters may cause encoding issues when used in a filter clause or in a script

  • The cause is that RapidMiner Radoop relies heavily on Hive VIEW objects. The code of a VIEW is stored in the Hive Metastore database, which is an arbitrary relational database usually created during the Hadoop cluster install. If this database does not handle the character encoding well, then the RapidMiner Radoop interface will also have issues.
  • Affects: Hive Metastore DB created by the default MySQL scripts; it may affect other databases as well
  • Workaround: your Hadoop administrator can test whether your Hive Metastore database can deal with the desired encoding. A Hive VIEW, created through Beeline, that contains a filter clause with non-latin1 characters should return the expected result set when used as a source object in a SELECT query. Please contact your Hadoop support regarding encoding issues with Hive.
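Such a Beeline test might look like the following HiveQL sketch, where the table, view, column names, and the filter value are hypothetical:

```sql
-- The view definition (including the non-latin1 filter value) is stored
-- as text in the Hive Metastore database
CREATE VIEW v_encoding_test AS
SELECT * FROM some_table WHERE label = 'végrehajtás ∞';

-- If the Metastore preserved the characters, this returns the expected rows;
-- if the encoding was mangled, the filter matches nothing or fails
SELECT * FROM v_encoding_test;
```

If the view returns no rows while a direct SELECT with the same filter does, the Metastore database encoding is the likely culprit.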

Confidence values are missing or predicted label may be shown incorrectly (e.g. after a Discretize operator)

  • This issue probably only comes up if the Hive Metastore is installed on a MySQL relational database with the latin1 character set as the database default, and the label contains special multibyte UTF-8 characters, like the infinity symbol (∞) that the Discretize operator uses.
  • Affects: Hive Metastore DB created by the default MySQL scripts; it may affect other databases as well
  • Workaround: your Hadoop administrator can test whether your Hive Metastore database can deal with the desired encoding. A Hive VIEW, created through Beeline, that contains a filter clause with non-latin1 characters should return the expected result set when used as a source object in a SELECT query. Please contact your Hadoop support regarding encoding issues with Hive.

Attribute roles and binominal mappings may be lost when storing in a Hive table with non-default format

  • The cause is HIVE-6681
  • It is probably fixed in Hive 0.13
  • As the roles and the binominal mappings are stored in column comments, when these are replaced with 'from deserializer', the roles and mappings are lost.

PARQUET format may cause Hive to fail

  • There are several related issues; one of them is HIVE-6375
  • Affects: Hive 0.13 is said to be fixed, but may still have issues like HIVE-6938
  • A CREATE TABLE AS SELECT statement fails with MapRedTask return code 2; or a NullPointerException or an ArrayIndexOutOfBoundsException is thrown because of column name issues
  • Workaround: use a different storage format. The Materialize Data operator may not be enough, as the CTAS statement itself gives the error.

Hive may hang while parsing large queries

  • When submitting large queries to the Hive parser, the execution may stop and later fail or never recover.
  • Workaround: this issue usually happens with the Apply Model operator and very large models (like trees). Set the use general applier parameter to true to avoid the large queries but get the same result.

Unable to cancel certain Hive-on-Spark queries

  • The cause is HIVE-13626
  • The YARN application can be stuck in the RUNNING state on the cluster if the query is canceled immediately after it is submitted.
  • This issue can be experienced by chance when a process using Hive-on-Spark is stopped.
  • To resolve the situation, kill the YARN job manually.

Starting a Hive-on-Spark job fails due to timeout

  • The hive.spark.client.server.connect.timeout property is set to 90000 ms by default. This may be too short for a Hive-on-Spark job to start, especially when multiple jobs are waiting for execution (e.g. parallel execution of processes).
  • A dedicated error message explains this issue.
  • The property value can only be modified in the hive-site.xml cluster configuration file.
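A sketch of the corresponding hive-site.xml entry; the value shown is an example, choose a timeout that fits your workload:

```xml
<property>
  <name>hive.spark.client.server.connect.timeout</name>
  <!-- default is 90000 ms; raise it if parallel Hive-on-Spark jobs
       time out while waiting for their SparkSession to start -->
  <value>300000ms</value>
</property>
```

The affected Hive services must be restarted after changing the cluster configuration.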

General Impala errors

Impala may return empty results when DISTINCT or COUNT(DISTINCT ..) expressions are used

  • There are a lot of similar bug tickets.
  • Seems to only come up when an INSERT is used (Store in Hive operator). The DISTINCT expression may be used in Aggregate or Remove Duplicates operators.
  • A related issue: Impala still does not support multiple COUNT(DISTINCT ..) expressions (IMPALA-110)

General MapR errors

Error util.Shell: Failed to locate the winutils binary in the hadoop binary path
Or
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.

  • Both of these errors indicate that the HADOOP_HOME environment variable is not set correctly.
  • Most often seen on Windows systems, when the HADOOP_HOME environment variable is not set.
  • Set the HADOOP_HOME system-wide environment variable to the location of the Hadoop directory. On a normal MapR install this will be %MAPR_HOME%/hadoop/hadoop-x.y.z, where x.y.z is the Hadoop version.
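On Windows, the variable can be set from an elevated command prompt, for example as follows; the install path and the Hadoop version shown are examples, adjust them to your MapR client install:

```
setx HADOOP_HOME "C:\opt\mapr\hadoop\hadoop-2.7.0" /M
```

The /M switch makes the variable system-wide rather than per-user; restart RapidMiner Studio afterwards so that the new value is picked up.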

java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

  • The error message indicates that the appropriate native binaries cannot be found.
  • Most often seen on Windows systems, when the PATH does not contain the binaries located at %HADOOP_HOME%\bin.
  • Workaround: check that the HADOOP_HOME system-wide environment variable is set correctly, and append %HADOOP_HOME%\bin to the system-wide PATH environment variable.

General Spark errors

Spark Script with Hive access fails with "GSS initiate failed" when user impersonation is enabled

  • When using Spark 1.5 or Spark 1.6 and user impersonation in the Radoop connection, a Spark Script operator that has enable Hive access set to true may fail with a security error.
  • The cause is SPARK-13478.
  • Workaround: update Spark on the cluster.

Spark 2.0.1 is unable to create database

  • On Spark 2.0.1, execution fails with the following exception: "Unable to create database default as failed to create its directory hdfs://"...
  • The cause is SPARK-17810.
  • Affects only Spark 2.0.1. Please use Spark 2.0.0 or upgrade to Spark 2.0.2 or later.
  • Workaround: add spark.sql.warehouse.dir as an Advanced Spark Parameter with a path that begins with "file:/". This is not expected to work on Windows.

Spark job may fail with relatively large ORC input data

  • Error message: "Size exceeds Integer.MAX_VALUE"
  • The cause is SPARK-1476
  • Workaround: use the Text input format. The bug may occur with the Text format too if the HDFS blocks are large.

Reading the output of Spark Script may fail for older Hive versions if the DataFrame contains too many null values

  • The Spark job succeeds, but reading the output parquet table fails with a NullPointerException.
  • Affects Hive 1.1 and below. The cause is PARQUET-136.
  • Workaround: use the fillna() function on the output DataFrame (Python API, R API)
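Inside a Spark Script operator, the workaround might look like the following minimal PySpark sketch; the entry-point name rm_main and the replacement values are assumptions, adjust them to your script and schema:

```python
# Sketch of a Radoop Spark Script (Python) body; assumes the entry
# function receives the input as a Spark DataFrame.
def rm_main(df):
    # Replace nulls before the output is written back as a parquet table,
    # so that older Hive versions (1.1 and below) can read it (PARQUET-136).
    # fillna(0) affects numeric columns, fillna("missing") string columns.
    return df.fillna(0).fillna("missing")
```

The fillna() call leaves columns of other types untouched, so it is safe to chain both forms.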

Exception in Spark job may not fail the process if no output connection is defined.

  • An exception occurred in the Spark script, but the Spark job succeeds. If the operator has no output, the process succeeds.
  • The cause is SPARK-7736 and SPARK-10851.
  • Affects Spark 1.5.x. Fixed in Spark 1.5.1 for Python and in Spark 1.6.0 for R. The exception can be checked in the ApplicationMaster logs on the Resource Manager web interface.
  • Workaround: upgrade to Spark 1.5.1/1.6.0, or make a dummy output connection and return a (small) dummy dataset.

Spark Error InvalidAuxServiceException: The auxService:spark_shuffle does not exist

  • Typically caused when the Spark Resource Allocation Policy for a connection is set to Dynamic, but the Hadoop cluster that Spark is running on is not set up for Dynamic Resource Allocation.
  • Either configure the Hadoop cluster for Dynamic Resource Allocation accordingly, or change the Spark Resource Allocation Policy for the connection in the connection editor.

Spark Error java.lang.StackOverflowError on the executor side

  • This is a general error, which can be caused by multiple problems, but there is a known Spark issue among the possible causes. It can be identified by a very large Java deserialization stack trace, with lots of ObjectInputStream calls. Random Forest and Decision Tree are known to be affected, but you might experience this problem with other Spark operators as well. Example error snippet:

    java.lang.StackOverflowError
        at java.io.ObjectInputStream$BlockDataInputStream.peekByte(ObjectInputStream.java:2956)
        at java.io.ObjectInputStream.readClassDesc(ObjectInputStream.java:1736)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2040)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
        at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:2285)
        at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:2209)
        at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:2067)
        at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1571)
        at java.io.ObjectInputStream.readObject(ObjectInputStream.java:431)
        at scala.collection.immutable.List$SerializationProxy.readObject(List.scala:479)
        at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:498)
  • The issue is caused by the deserialization process exceeding the available stack size inside the executor process. A possible solution is increasing the executor stack size by adding the following key-value pair to the Advanced Spark Parameters list: key spark.executor.extraJavaOptions with the value -Xss4m, and making sure that spark.driver.extraJavaOptions has the same value. The -Xss value is a parameter used to set the Java runtime thread-stack size; see the basic Java documentation.
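The two key-value pairs, as they would appear in the Advanced Spark Parameters list; the 4 MB stack size is an example, tune it for your models:

```
spark.executor.extraJavaOptions    -Xss4m
spark.driver.extraJavaOptions      -Xss4m
```

If the StackOverflowError persists, the stack size can be raised further (e.g. -Xss8m), at the cost of slightly higher per-thread memory usage.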