
Published: 2013-07-08 14:13:00  Author: rapoo

Hive Indexes

I. Hive Index Implementation

Indexes are a standard database technique, and Hive has supported them since version 0.7. Rather than adopting a 'one size fits all' index implementation, Hive exposes a pluggable interface and ships one concrete implementation as a reference. Hive's index interface is as follows:

/**
 * HiveIndexHandler defines a pluggable interface for adding new index handlers
 * to Hive.
 */
public interface HiveIndexHandler extends Configurable {
  /**
   * Determines whether this handler implements indexes by creating an index
   * table.
   *
   * @return true if index creation implies creation of an index table in Hive;
   *         false if the index representation is not stored in a Hive table
   */
  boolean usesIndexTable();

  /**
   * Requests that the handler validate an index definition and fill in
   * additional information about its stored representation.
   *
   * @param baseTable
   *          the definition of the table being indexed
   *
   * @param index
   *          the definition of the index being created
   *
   * @param indexTable
   *          a partial definition of the index table to be used for storing the
   *          index representation, or null if usesIndexTable() returns false;
   *          the handler can augment the index's storage descriptor (e.g. with
   *          information about input/output format) and/or the index table's
   *          definition (typically with additional columns containing the index
   *          representation, e.g. pointers into HDFS).
   *
   * @throws HiveException if the index definition is invalid with respect to
   *         either the base table or the supplied index table definition
   */
  void analyzeIndexDefinition(
      org.apache.hadoop.hive.metastore.api.Table baseTable,
      org.apache.hadoop.hive.metastore.api.Index index,
      org.apache.hadoop.hive.metastore.api.Table indexTable)
      throws HiveException;

  /**
   * Requests that the handler generate a plan for building the index; the plan
   * should read the base table and write out the index representation.
   *
   * @param baseTbl
   *          the definition of the table being indexed
   *
   * @param index
   *          the definition of the index
   *
   * @param baseTblPartitions
   *          list of base table partitions with each element mirrors to the
   *          corresponding one in indexTblPartitions
   *
   * @param indexTbl
   *          the definition of the index table, or null if usesIndexTable()
   *          returns null
   *
   * @param inputs
   *          inputs for hooks, supplemental outputs going
   *          along with the return value
   *
   * @param outputs
   *          outputs for hooks, supplemental outputs going
   *          along with the return value
   *
   * @return list of tasks to be executed in parallel for building the index
   *
   * @throws HiveException if plan generation fails
   */
  List<Task<?>> generateIndexBuildTaskList(
      org.apache.hadoop.hive.ql.metadata.Table baseTbl,
      org.apache.hadoop.hive.metastore.api.Index index,
      List<Partition> indexTblPartitions, List<Partition> baseTblPartitions,
      org.apache.hadoop.hive.ql.metadata.Table indexTbl,
      Set<ReadEntity> inputs, Set<WriteEntity> outputs)
      throws HiveException;

  /**
   * Generate the list of tasks required to run an index optimized sub-query for the
   * given predicate, using the given indexes. If multiple indexes are
   * provided, it is up to the handler whether to use none, one, some or all of
   * them. The supplied predicate may reference any of the columns from any of
   * the indexes. If the handler decides to use more than one index, it is
   * responsible for generating tasks to combine their search results
   * (e.g. performing a JOIN on the result).
   *
   * @param indexes
   * @param predicate
   * @param pctx
   * @param queryContext contains results, such as query tasks and input configuration
   */
  void generateIndexQuery(List<Index> indexes, ExprNodeDesc predicate,
    ParseContext pctx, HiveIndexQueryContext queryContext);

  /**
   * Check the size of an input query to make sure it fits within the bounds
   *
   * @param inputSize size (in bytes) of the query in question
   * @param conf
   * @return true if query is within the bounds
   */
  boolean checkQuerySize(long inputSize, HiveConf conf);
}
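
At the HiveQL level the handler is plugged in through the AS clause of CREATE INDEX, which names the implementing class. A minimal sketch of that pluggability: CompactIndexHandler is the reference implementation shipped with Hive (and used later in this article), while 'com.example.MyIndexHandler' and the table t are hypothetical placeholders; a custom class would need to implement HiveIndexHandler and be on Hive's classpath.

-- reference implementation shipped with Hive
create index t_compact_index on table t(id)
as 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
with deferred rebuild;

-- hypothetical custom handler implementing HiveIndexHandler
create index t_custom_index on table t(id)
as 'com.example.MyIndexHandler'
with deferred rebuild;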

When an index is created, Hive first calls the handler's usesIndexTable method to determine whether the index is stored as a Hive table (the default implementation stores it in Hive). It then calls analyzeIndexDefinition to check that the index definition is valid; if it is, an entry for the index is added to the metastore table IDXS, otherwise an exception is thrown. If the create index statement uses with deferred rebuild, then when alter index xxx_index on xxx rebuild is executed, generateIndexBuildTaskList is called to obtain the MapReduce tasks for the index, which are then run to populate it with data.
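
A minimal sketch of that lifecycle in HiveQL (demo_table and demo_index are placeholders; the show/drop statements are standard Hive index DDL added here for completeness, and the concrete run against this article's test table follows in step 4 below):

-- with deferred rebuild: only metadata is written at creation time
create index demo_index on table demo_table(id)
as 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler'
with deferred rebuild;

-- rebuild triggers generateIndexBuildTaskList and runs the MapReduce job
-- that populates the index table
alter index demo_index on demo_table rebuild;

-- inspect or drop the index
show formatted index on demo_table;
drop index demo_index on demo_table;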


Index test example:


1. Create test data


#! /bin/bash
# generating ~350M of raw data; redirect stdout to a file, e.g. ./gen_data.sh > dual.txt
i=0
while [ $i -ne 1000000 ]
do
        echo -e "$i\tA decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America."
        i=$(($i+1))
done

2. Create test table 1


create table table01 (id int, name string)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY '\t';
load data local inpath '/data/tmp/huzhirong/dual.txt' overwrite into table table01;

3. Create test table 2 and populate it by selecting from table 1


create table table02 as select id,name as text from table01;
table02's data files in HDFS:
hive> dfs -ls /user/hive/warehouse/table02;
Found 5 items
-rw-r--r--   3 hadoop supergroup   88453176 2013-04-26 20:56 /user/hive/warehouse/table02/000000_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-04-26 20:56 /user/hive/warehouse/table02/000001_0
-rw-r--r--   3 hadoop supergroup   67109134 2013-04-26 20:56 /user/hive/warehouse/table02/000002_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-04-26 20:56 /user/hive/warehouse/table02/000003_0
-rw-r--r--   3 hadoop supergroup   67108860 2013-04-26 20:56 /user/hive/warehouse/table02/000004_0
Test query:
select * from table02 where id=500000;
Elapsed time:
OK
500000  A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 35.022 seconds


4. Create the index


create index table02_index on table table02(id) as 'org.apache.hadoop.hive.ql.index.compact.CompactIndexHandler' with deferred rebuild;
alter index table02_index on table02 rebuild;
Loading data to table default.default__table02_table02_index__
Moved to trash: hdfs://namenode.hadoop.game.yy.com/user/hive/warehouse/default__table02_table02_index__
Table default.default__table02_table02_index__ stats: [num_partitions: 0, num_files: 1, num_rows: 0, total_size: 87733114, raw_data_size: 0]
MapReduce Jobs Launched:
Job 0: Map: 3  Reduce: 1   Cumulative CPU: 51.28 sec   HDFS Read: 357021261 HDFS Write: 87733114 SUCCESS
Total MapReduce CPU Time Spent: 51 seconds 280 msec
OK
Time taken: 65.6 seconds
hive> dfs -ls /user/hive/warehouse/default__table02_table02_index__;
Found 1 items
-rw-r--r--   3 hadoop supergroup   87733114 2013-04-26 21:04 /user/hive/warehouse/default__table02_table02_index__/000000_0
The data stored in the index can be inspected:
hive> select * from default__table02_table02_index__ limit 3;
OK
0       hdfs://namenode.hadoop.game.yy.com/user/hive/warehouse/table02/000002_0 [0]
1       hdfs://namenode.hadoop.game.yy.com/user/hive/warehouse/table02/000002_0 [352]
2       hdfs://namenode.hadoop.game.yy.com/user/hive/warehouse/table02/000002_0 [704]
Each row appears to be {value, HDFS file location, array of offsets (there may be more than one)}. Now write the index data for our query out to a custom index file:
insert overwrite directory "/tmp/table02_index_data" select `_bucketname`, `_offsets` from   default__table02_table02_index__ where id =500000;
Querying table02 directly (without the index):
hive> select * from table02 where id =500000;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301301559_29049, Tracking URL = http://namenode.hadoop.game.yy.com:50030/jobdetails.jsp?jobid=job_201301301559_29049
Kill Command = /home/hadoop/hadoop-1.0.3/libexec/../bin/hadoop job  -Dmapred.job.tracker=namenode.hadoop.game.yy.com:8021 -kill job_201301301559_29049
Hadoop job information for Stage-1: number of mappers: 6; number of reducers: 0
2013-04-26 22:34:20,755 Stage-1 map = 0%,  reduce = 0%
2013-04-26 22:34:26,797 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:27,812 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:28,859 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:29,871 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:30,874 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:31,877 Stage-1 map = 17%,  reduce = 0%, Cumulative CPU 2.23 sec
2013-04-26 22:34:32,879 Stage-1 map = 83%,  reduce = 0%, Cumulative CPU 11.58 sec
2013-04-26 22:34:33,882 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:34,884 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:35,887 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:36,890 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:37,893 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:38,895 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 12.99 sec
2013-04-26 22:34:39,898 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 12.99 sec
MapReduce Total cumulative CPU time: 12 seconds 990 msec
Ended Job = job_201301301559_29049
MapReduce Jobs Launched:
Job 0: Map: 6   Cumulative CPU: 12.99 sec   HDFS Read: 357021325 HDFS Write: 357 SUCCESS
Total MapReduce CPU Time Spent: 12 seconds 990 msec
OK
500000  A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 33.189 seconds
Now specify the index file explicitly:
hive> set hive.index.compact.file=/tmp/table02_index_data;
hive> set hive.optimize.index.filter=false;
hive> set hive.input.format=org.apache.hadoop.hive.ql.index.compact.HiveCompactIndexInputFormat;
hive> select * from table02 where id =500000;
Total MapReduce jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201301301559_29051, Tracking URL = http://namenode.hadoop.game.yy.com:50030/jobdetails.jsp?jobid=job_201301301559_29051
Kill Command = /home/hadoop/hadoop-1.0.3/libexec/../bin/hadoop job  -Dmapred.job.tracker=namenode.hadoop.game.yy.com:8021 -kill job_201301301559_29051
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 0
2013-04-26 22:40:06,793 Stage-1 map = 0%,  reduce = 0%
2013-04-26 22:40:12,803 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:13,806 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:14,808 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:15,811 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:16,813 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:17,815 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.69 sec
2013-04-26 22:40:18,818 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 1.69 sec
MapReduce Total cumulative CPU time: 1 seconds 690 msec
Ended Job = job_201301301559_29051
MapReduce Jobs Launched:
Job 0: Map: 1   Cumulative CPU: 1.69 sec   HDFS Read: 33554658 HDFS Write: 357 SUCCESS
Total MapReduce CPU Time Spent: 1 seconds 690 msec
OK
500000  A decade ago, many were predicting that Cooke, a New York City prodigy, would become a basketball shoe pitchman and would flaunt his wares and skills at All-Star weekends like the recent aerial show in Orlando, Fla. There was a time, however fleeting, when he was more heralded, or perhaps merely hyped, than any other high school player in America.
Time taken: 26.776 seconds
Look at the number of mappers: it has dropped to 1, although here the index data for id=500000 was exported by hand. Summary: an index table basically contains the following columns: 1. the indexed column(s) of the source table; 2. _bucketname, the file path in HDFS; 3. the offsets of the indexed column within that HDFS file. The principle is that by recording the offsets of the index column in HDFS, the relevant data can be fetched precisely and a full table scan is avoided.
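
As a side note: instead of exporting the index rows by hand and pointing hive.index.compact.file at them, later Hive releases can be asked to use available indexes automatically during query compilation. A hedged sketch, assuming hive.optimize.index.filter (seen above, set to false) enables that behaviour and that hive.optimize.index.filter.compact.minsize is the input-size threshold for the compact index; the exact properties and behaviour depend on the Hive version:

-- let the optimizer decide whether to use available indexes for filtering
set hive.optimize.index.filter=true;
-- assumed threshold: inputs must be at least this many bytes for the compact
-- index to be used automatically; 0 means always consider it
set hive.optimize.index.filter.compact.minsize=0;
select * from table02 where id = 500000;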
