Best Practices for Hive Efficiency

Storage Format
Most importantly, store your data in an efficient storage format.
Avoid plain text, JSON, and even SequenceFile when possible.
Ideally use RCFile. The RCFile (Record Columnar File) format is optimised to exploit Hadoop’s parallelism: data is split into row groups, and within each row group the values are laid out in columnar sections. The row groups are stored (one or more per file) redundantly on HDFS, so the same data can be processed by several nodes in parallel. The columnar layout improves compression (similar values sit next to each other) and allows queries to skip columns that are irrelevant, reducing disk IO.
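For example, a table can be created directly in RCFile format (a minimal sketch; the table and column names are made up for illustration):
-- hypothetical table stored in RCFile format
CREATE TABLE page_views (
  view_time STRING,
  user_id STRING,
  url STRING
)
STORED AS RCFILE;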

Compression
Generally, use block compression over value compression since it is more efficient (see the mapred.output.compression.type setting). Prefer codecs such as Snappy (see the mapred.output.compression.codec setting): Snappy is fast, and when used with block compression inside container formats like SequenceFile or RCFile the files remain splittable. Avoid Gzip (not splittable and CPU intensive) unless you ingest pre-split Gzipped files from a secondary process.

Ensure your final Hive output is always compressed:
set hive.exec.compress.output=true;
Intermediate output between job stages should be compressed as well:
set hive.exec.compress.intermediate=true;
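The block compression type and codec mentioned above are set the same way (a sketch; the Snappy codec class assumes the native Snappy libraries are installed on your cluster):
set mapred.output.compression.type=BLOCK;
set mapred.output.compression.codec=org.apache.hadoop.io.compress.SnappyCodec;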

Data Locality
Hive supports S3 as a storage layer. Use it for backups rather than as your main read/write storage, since every read and write has to cross the network. HDFS is the better place for active data: it gives Hadoop the chance to schedule tasks close to the data and optimise access.
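As an illustration, a backup copy could live in an external table pointing at S3 while the working copy stays on HDFS (a sketch; the bucket name and the logs table are made up, and the s3n:// scheme depends on your Hadoop version and credentials setup):
-- hypothetical backup table located in S3
CREATE EXTERNAL TABLE logs_backup (ts STRING, line STRING)
STORED AS RCFILE
LOCATION 's3n://my-bucket/backups/logs/';
INSERT OVERWRITE TABLE logs_backup SELECT * FROM logs;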

Partition large tables
Partition tables if you collect time series data or logs that accumulate over time and you typically query only a slice of them. The data is stored in a subdirectory tree, e.g. year/month/day, or for geographic data continent/country/region/city, and so forth. Queries that filter on the partition columns can then skip the irrelevant data entirely.
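A sketch of a partitioned log table and a query that only touches one month of data (table and column names are hypothetical):
CREATE TABLE logs (ts STRING, line STRING)
PARTITIONED BY (year INT, month INT, day INT)
STORED AS RCFILE;
-- only the year=2013/month=6 partition directories are read
SELECT count(*) FROM logs WHERE year = 2013 AND month = 6;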

Querying

Map Joins
Put small tables on the left side of your joins and enable automatic join optimisation:
set hive.auto.convert.join=true;
This caches small enough tables in memory and allows mapper-only joins, which are much faster. You can also tweak the size of what Hive considers small enough to cache:
set hive.mapjoin.smalltable.filesize=50000000;
Make sure the task JVMs have enough memory to hold the cached table, though.
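With those settings in place, a join between a small dimension table and a large fact table can run as a map-only job (a sketch; dim_items and fact_sales are hypothetical tables):
-- dim_items is small enough to be cached in each mapper; fact_sales is streamed
SELECT f.item_id, d.item_name, SUM(f.amount)
FROM dim_items d
JOIN fact_sales f ON f.item_id = d.item_id
GROUP BY f.item_id, d.item_name;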

Avoid Order By
One of the slowest things you can do is order a large result set. ORDER BY is a global sort, it is not yet optimised in Hive, and it funnels all data through one final reducer. Where possible, use SORT BY or CLUSTER BY instead: they organise data per reducer (locally), which is much faster.
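For example (again with the hypothetical fact_sales table):
-- global sort: a single reducer sorts the entire output
SELECT * FROM fact_sales ORDER BY amount;
-- local sort: each reducer emits its own sorted output
SELECT * FROM fact_sales SORT BY amount;
-- CLUSTER BY distributes by the column and sorts by it per reducer
SELECT * FROM fact_sales CLUSTER BY item_id;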

Tune Hadoop
Lastly, tune Hadoop itself and keep an eye on the JobTracker and on the logs and counters of your MapReduce jobs, for example spilled records. You can increase the JVM heap and the sort buffer size, and raise the minimum and maximum split sizes so that you don’t create countless tiny map tasks, and so forth.
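A sketch of the kind of knobs involved (classic MapReduce property names; the values are only illustrative and how split sizes are honoured depends on the input format, so tune them for your own cluster):
-- larger map-side sort buffer, in MB
set io.sort.mb=256;
-- discourage tiny splits: 256 MB minimum, 512 MB maximum, in bytes
set mapred.min.split.size=268435456;
set mapred.max.split.size=536870912;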

