Fully Distributed Cluster (III): Hive 2.1.1 Installation and Deployment
Date: 2018-10-09  Source: OSCHINA
Background
Environment Information
Fully Distributed Cluster (I): Cluster Base Environment and ZooKeeper 3.4.10 Installation and Deployment
Hadoop cluster installation and configuration process
A Hadoop cluster must be deployed before installing Hive:
Fully Distributed Cluster (II): Hadoop 2.6.5 Installation and Deployment
Installing Hive 2.1.1
Download apache-hive-2.1.1-bin.tar.gz, upload it to the server with an FTP tool, extract it, and rename the directory. Since node225 currently serves only as a DataNode in the cluster, Hive is installed on node225.
[root@node225 ~]# gtar -xzf /home/hadoop/apache-hive-2.1.1-bin.tar.gz -C /usr/local/
[root@node225 ~]# mv /usr/local/apache-hive-2.1.1-bin /usr/local/hive-2.1.1
Configure the Hive environment variables.
[root@node225 ~]# vi /etc/profile
# Append the following; adjust the paths to your installation
export HIVE_HOME=/usr/local/hive-2.1.1
export HIVE_CONF_DIR=${HIVE_HOME}/conf
export PATH=${HIVE_HOME}/bin:$PATH
# Apply the changes
[root@node225 ~]# source /etc/profile
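Before making the exports permanent, they can be exercised in a throwaway shell to confirm they resolve as intended; a minimal sketch (paths mirror the article's layout, adjust to yours):

```shell
# Sketch: evaluate the same exports as in /etc/profile and verify the
# derived values before editing the system-wide profile.
export HIVE_HOME=/usr/local/hive-2.1.1
export HIVE_CONF_DIR=${HIVE_HOME}/conf
export PATH=${HIVE_HOME}/bin:$PATH
echo "$HIVE_CONF_DIR"   # -> /usr/local/hive-2.1.1/conf
# Confirm the hive bin directory is now on PATH
case ":$PATH:" in
  *":$HIVE_HOME/bin:"*) echo "hive bin on PATH" ;;
esac
```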
With the Hadoop cluster running normally, create the directories Hive needs on HDFS and set their permissions; this can be done from any cluster node, here node222.
[hadoop@node222 ~]$ hdfs dfs -mkdir -p /user/hive/warehouse
[hadoop@node222 ~]$ hdfs dfs -chmod 777 /user/hive/warehouse
[hadoop@node222 ~]$ hdfs dfs -mkdir -p /tmp/hive/
[hadoop@node222 ~]$ hdfs dfs -chmod 777 /tmp/hive
This installation uses multi-user mode, so the Hive metastore database must be created in MySQL.
-- Create the hive database
ipems_dvp@localhost : (none) 10:26:05> create database hive;
Query OK, 1 row affected (0.01 sec)
-- Create the hive user and set its password
ipems_dvp@localhost : (none) 10:27:01> create user 'hive'@'%' identified by 'Aa123456789';
Query OK, 0 rows affected (0.07 sec)
-- Grant privileges
ipems_dvp@localhost : (none) 10:36:12> grant all privileges on hive.* to 'hive'@'%';
Query OK, 0 rows affected (0.07 sec)
-- Flush privileges
ipems_dvp@localhost : (none) 10:36:26> flush privileges;
Query OK, 0 rows affected (0.02 sec)
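The interactive session above can equivalently be captured as a script and fed to the mysql client in one shot; a sketch (the host, user name, script file name, and example password are the article's illustrative values, not recommendations):

```shell
# Print the metastore bootstrap statements; pipe them into the client on
# the database host, e.g.:
#   sh hive-metastore-bootstrap.sh | mysql -u root -p
# (the script name is hypothetical)
cat <<'SQL'
CREATE DATABASE hive;
CREATE USER 'hive'@'%' IDENTIFIED BY 'Aa123456789';
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'%';
FLUSH PRIVILEGES;
SQL
```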
Copy the template to create the Hive configuration file, then edit its contents. Every Hive connection otherwise prints the warning "WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set." This is MySQL's SSL warning advising against connections without server identity verification. To suppress it, append "&amp;useSSL=false" to the javax.jdo.option.ConnectionURL value, where "&amp;" is the XML entity for the "&" character.
[root@node225 ~]# cp /usr/local/hive-2.1.1/conf/hive-default.xml.template /usr/local/hive-2.1.1/conf/hive-site.xml
# hive-site.xml ships with a very large number of default properties; empty it first, then fill in the content below
[root@node225 ~]# > /usr/local/hive-2.1.1/conf/hive-site.xml
[root@node225 ~]# vi /usr/local/hive-2.1.1/conf/hive-site.xml
# Configuration
<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
   Licensed to the Apache Software Foundation (ASF) under one or more
   contributor license agreements. See the NOTICE file distributed with
   this work for additional information regarding copyright ownership.
   The ASF licenses this file to You under the Apache License, Version 2.0
   (the "License"); you may not use this file except in compliance with
   the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
-->
<configuration>
  <property>
    <name>hive.default.fileformat</name>
    <value>TextFile</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://192.168.0.200:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>Aa123456789</value>
  </property>
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
  </property>
</configuration>
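To see why the escape matters: an XML parser resolves "&amp;" back to a single "&", which is the character the JDBC driver ultimately receives in the URL. A quick sketch that mimics this entity resolution with sed:

```shell
# The value as it must be written inside hive-site.xml (XML entity form)
escaped='jdbc:mysql://192.168.0.200:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false'
# What an XML parser hands to the JDBC driver after resolving &amp;
unescaped=$(printf '%s' "$escaped" | sed 's/&amp;/\&/g')
echo "$unescaped"
# -> jdbc:mysql://192.168.0.200:3306/hive?createDatabaseIfNotExist=true&useSSL=false
```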
Upload MySQL's JDBC driver to Hive's lib directory.
[root@node225 ~]# ls /usr/local/hive-2.1.1/lib/mysql-connector-java-5.1.40-bin.jar
/usr/local/hive-2.1.1/lib/mysql-connector-java-5.1.40-bin.jar
Append the Hadoop home and Hive configuration directory settings to /usr/local/hive-2.1.1/conf/hive-env.sh.
[root@node225 ~]# vi /usr/local/hive-2.1.1/conf/hive-env.sh
# Append the following
export HADOOP_HOME=/usr/local/hadoop-2.6.5
export HIVE_CONF_DIR=/usr/local/hive-2.1.1/conf
export HIVE_AUX_JARS_PATH=/usr/local/hive-2.1.1/lib
Initialize the Hive metastore database.
[root@node225 ~]# /usr/local/hive-2.1.1/bin/schematool -initSchema -dbType mysql
which: no hbase in (.:/usr/local/jdk1.8.0_66//bin:/usr/local/zookeeper-3.4.10/bin:ZK_HOME/sbin:ZK_HOME/lib:/usr/local/hadoop-2.6.5//bin:/usr/local/hadoop-2.6.5//sbin:/usr/local/hadoop-2.6.5//lib:/usr/local/hive-2.1.1/bin:/usr/local/mongodb/bin:.:/usr/local/jdk1.8.0_66//bin:/usr/local/zookeeper-3.4.10/bin:ZK_HOME/sbin:ZK_HOME/lib:/usr/local/hadoop-2.6.5//bin:/usr/local/hadoop-2.6.5//sbin:/usr/local/hadoop-2.6.5//lib:/usr/local/hive-2.1.1/bin/bin:/usr/local/mongodb/bin:/usr/local/zookeeper-3.4.10/bin:ZK_HOME/sbin:ZK_HOME/lib:/usr/local/hadoop-2.6.5//bin:/usr/local/hadoop-2.6.5//sbin:/usr/local/hadoop-2.6.5//lib:/usr/local/jdk1.8.0_66//bin:/usr/local/mongodb/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
......
Sun Sep 30 10:52:41 CST 2018 WARN: Establishing SSL connection without server's identity verification is not recommended. According to MySQL 5.5.45+, 5.6.26+ and 5.7.6+ requirements SSL connection must be established by default if explicit option isn't set. For compliance with existing applications not using SSL the verifyServerCertificate property is set to 'false'. You need either to explicitly disable SSL by setting useSSL=false, or set useSSL=true and provide truststore for server certificate verification.
schemaTool completed
Test the connection through the Hive CLI: it should start normally, then run a simple HiveQL command.
[root@node225 ~]# /usr/local/hive-2.1.1/bin/hive
which: no hbase in (.:/usr/local/jdk1.8.0_66//bin:/usr/local/zookeeper-3.4.10/bin:ZK_HOME/sbin:ZK_HOME/lib:/usr/local/hadoop-2.6.5//bin:/usr/local/hadoop-2.6.5//sbin:/usr/local/hadoop-2.6.5//lib:/usr/local/hive-2.1.1/bin:/usr/local/mongodb/bin:/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/root/bin)
Logging initialized using configuration in jar:file:/usr/local/hive-2.1.1/lib/hive-common-2.1.1.jar!/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive> show databases;
If the connection prints "SLF4J: Class path contains multiple SLF4J bindings.", the SLF4J messages indicate a jar conflict. Here the Hadoop jar is kept, so Hive's corresponding jar is renamed.
# Messages
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hive-2.1.1/lib/log4j-slf4j-impl-2.4.1.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop-2.6.5/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
# Fix
[root@node225 ~]# mv /usr/local/hive-2.1.1/lib/log4j-slf4j-impl-2.4.1.jar /usr/local/hive-2.1.1/lib/log4j-slf4j-impl-2.4.1.jar.bak
If you see 'Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby', both NameNodes in the HA pair are in standby state. Confirm this through the web UI on port 50070, then start the zkfc service on the NameNode hosts.
[hadoop@node222 ~]$ /usr/local/hadoop-2.6.5/sbin/hadoop-daemon.sh start zkfc
[hadoop@node224 ~]$ /usr/local/hadoop-2.6.5/sbin/hadoop-daemon.sh start zkfc
