1/11/2024

Snappy compression ratio

GZIP compression uses more CPU resources than Snappy or LZO, but provides a higher compression ratio. GZip is often a good choice for cold data, which is accessed infrequently; Snappy or LZO are a better choice for hot data, which is accessed frequently. It is worth running tests to see whether you detect a significant difference.

Hadoop mainly uses the deflate, gzip, bzip2, lzo, lz4, and snappy compression formats, and of these only bzip2 produces splittable files on its own. If you need your compressed data to be splittable, BZip2, LZO, and Snappy can work (LZO and Snappy when stored inside splittable container formats such as Parquet), but GZip cannot. GZIP compresses data about 30% more than Snappy, but reading GZIP data uses roughly 2x the CPU compared to consuming Snappy data. For longer-term or static storage, GZip compression is still better; LZO focuses on decompression speed at low CPU usage, with higher compression available at the cost of more CPU.

See extensive research, benchmark code, and results in this article (Performance of various general compression algorithms – some of them are unbelievably fast!). Based on the data below, I'd say gzip wins outside of scenarios like streaming, where write-time latency would be important.

It's important to keep in mind that speed is essentially compute cost. However, cloud compute is a one-time cost, whereas cloud storage is a recurring cost, so the tradeoff depends on the retention period of the data; a rough break-even sketch follows the benchmark below.

Let's test speed and size with large and small parquet files in Python:

%timeit pd.read_parquet(path='file.parquet')
%timeit pd.read_parquet(path='', engine='pyarrow')
%timeit df.to_parquet(path='', compression='gzip', engine='pyarrow', index=True)
%timeit df.to_parquet(path='', compression='snappy', engine='pyarrow', index=True)

Results (small file, 4 KB, Iris dataset):
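The %timeit lines above assume an existing df and leave the paths blank, so here is a minimal, self-contained sketch of the same comparison. It assumes pyarrow and scikit-learn are installed; the file names, the repeat counts, and the use of load_iris are my own choices for illustration, not from the original measurements.

# Minimal sketch: compare snappy vs gzip write time, read time, and file
# size on the Iris dataset. All file names and repeat counts are
# illustrative choices, not from the original post.
import os
import time

import pandas as pd
from sklearn.datasets import load_iris

df = load_iris(as_frame=True).frame  # ~150 rows; deliberately small

for codec in ("snappy", "gzip"):
    path = f"iris_{codec}.parquet"

    # Time the write (best of several runs, a crude %timeit stand-in).
    write_times = []
    for _ in range(5):
        start = time.perf_counter()
        df.to_parquet(path, compression=codec, engine="pyarrow", index=True)
        write_times.append(time.perf_counter() - start)

    # Time the read.
    read_times = []
    for _ in range(5):
        start = time.perf_counter()
        pd.read_parquet(path, engine="pyarrow")
        read_times.append(time.perf_counter() - start)

    size_kb = os.path.getsize(path) / 1024
    print(f"{codec:7s} write {min(write_times) * 1e3:6.2f} ms  "
          f"read {min(read_times) * 1e3:6.2f} ms  size {size_kb:5.1f} KB")

For a large-file run, swap the Iris frame for any multi-gigabyte DataFrame; the loop itself is unchanged.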
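And on the compute-versus-storage point: one back-of-the-envelope way to decide is to compare the one-time extra CPU cost of gzip against the recurring storage savings over the retention period. Every number below is a made-up placeholder, not a real cloud price or measurement.

# Break-even sketch: months of retention after which gzip's storage
# savings outweigh its one-time extra compute cost.
# All figures are illustrative placeholders.
snappy_gb = 1.30               # hypothetical dataset size with snappy (GB)
gzip_gb = 1.00                 # same data with gzip (~30% smaller, per the text)
storage_usd_gb_month = 0.023   # hypothetical object-storage price
extra_cpu_usd = 0.50           # hypothetical one-time extra compute to gzip

monthly_saving = (snappy_gb - gzip_gb) * storage_usd_gb_month
break_even_months = extra_cpu_usd / monthly_saving
print(f"gzip pays for itself after ~{break_even_months:.0f} months of retention")

With these placeholder numbers the break-even lands around six years, which is why short-retention or streaming data tends to favor Snappy while long-lived archives favor GZip.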