The next wave in BI


TPC-H fun with Greenplum (single node edition)

27/01/2010 23:12

Introduction

There's a nice quote on the Greenplum site from Brian Dolan of Fox Interactive Media which says “Very impressed with the speed... 3 minutes to do a sum on 100 million rows of data”. That pretty much sums it up. I wouldn't consider that anything to be proud of; 3 minutes to do a simple sum over a measly 100 million rows of data is SLOW nowadays. A quick look at the TPC site will tell you that more complex queries on at least 600 million records run in seconds, not minutes. Heck, with Excel 2010 PowerPivot summing 100 million rows takes less than a second! And you don't need expensive hardware or software to do any of this. My own benchmark machine is a dual Xeon 5520 with 64 GB RAM and 12 Intel X25-M SSD drives of 80 GB each, connected to 2 Adaptec 5805 RAID controllers. The system runs CentOS 5.4 and the total cost is way below $10K. Anyway, it's always nice to see how a database behaves in practice, so I downloaded the free, single node Greenplum edition, installed it and ran the TPC-H SF100 benchmark. By the way, typing that last sentence took a lot less time than the actual work...

 

Installation

Before you install the Greenplum (GP) database it's good to plan your disk layout first. GP likes to distribute its data over so-called data segments, and each data segment is tied to a GP process which is tied to a CPU core. So in my case I had 8 cores (16 with hyperthreading), which allowed me to use 8 disks or partitions. No RAID, no failover (although GP has facilities for both); I was just interested in testing load and query speed. The installation process itself is pretty straightforward and well explained in the single node install manual. What's not explained at all, not even in the more voluminous admin guide, is how the system should be tuned. Remember that GP is based on PostgreSQL, so all the same settings are available. However, the settings advised for PostgreSQL are geared towards OLTP systems, not data warehousing. So in my first run the system used only about 3 GB of the available 64 GB of RAM. It turned out that the most important configuration parameter (shared_buffers, which determines how much memory a process can use) was set at only 32 MB (!). I toyed around with several parameters, including the system kernel settings, but couldn't use a value over 1920 MB (GP won't start when it's set to e.g. 2048 MB), so I suspect that value is capped somewhere.
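For reference, the handful of knobs I ended up playing with are listed below. Only the shared_buffers value is the one I actually settled on; the other values are purely illustrative and certainly not a tuning recommendation.

postgresql.conf (the same parameters as stock PostgreSQL):
shared_buffers = 1920MB        # anything higher and GP refuses to start
work_mem = 256MB               # illustrative; per-operation sort/hash memory

/etc/sysctl.conf (kernel shared memory limits, needed before a large shared_buffers will work):
kernel.shmmax = 4294967296     # illustrative; must exceed the shared memory segment size
kernel.shmall = 4194304        # illustrative; total shared memory, in pages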

 

Database creation

Again, if you're familiar with PostgreSQL, GP's DDL and DML won't hold a lot of secrets. They did add a few extras though, such as the ability to define a table as column oriented instead of the standard row orientation, and the ability to compress data. But, unlike modern column stores such as Vertica or ParAccel, the DBA is responsible for working out the best storage and compression strategy, as everything has to be explicitly created using DDL statements. What's more, column orientation and compression are only available for append-only tables. That's right: if you want columns and/or compression, it's dump and reload whenever you need to update your data. In that respect it's a bit similar to Infobright Community Edition, which lacks all DML capabilities. For the benchmarks I went with the default row orientation.
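To give an idea of what that DDL looks like, a column-oriented, compressed copy of the lineitem table would be declared roughly as follows. This is just a sketch: the compression type and level are arbitrary examples, and append-only is mandatory as soon as you use either option.

CREATE TABLE lineitem_col (
    l_orderkey       BIGINT,
    l_partkey        BIGINT,
    l_suppkey        BIGINT,
    l_linenumber     INTEGER,
    l_quantity       NUMERIC,
    l_extendedprice  NUMERIC,
    l_discount       NUMERIC,
    l_tax            NUMERIC,
    l_shipdate       DATE
    -- remaining TPC-H columns omitted for brevity
)
WITH (appendonly=true, orientation=column, compresstype=zlib, compresslevel=5)
DISTRIBUTED BY (l_orderkey);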

 

Loading & database size

The data loaded into the database is generated using the dbgen tool, which can be compiled from the source available from www.tpc.org. Dbgen creates 8 ASCII files that together make up the TPC-H schema, which can then be loaded into the created database. GP lets you use the standard COPY statement to load files into a table, but also offers a facility called 'external tables' to support fast, parallel data loading. I didn't use external tables but the plain COPY, which took about 45 minutes to load the 100 GB dataset. The tables already had indexes defined, so that's actually pretty good. What's not so good is the resulting database size. As I couldn't use compression, the database ended up at about 160 GB, which is 10 times as much as you need when loading the same dataset into an Infobright database. It's also about two to three times as much as less aggressively compressed databases need.
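For the record, the loads were plain COPY statements along the lines of the first statement below; the second shows what the external table route I skipped would look like, assuming a gpfdist server is running on the host that serves the flat files (host and paths are made up).

-- plain load; note that dbgen's trailing '|' at the end of each line may need to be stripped first
COPY lineitem FROM '/data/dbgen/lineitem.tbl' WITH DELIMITER '|';

-- parallel load alternative via an external table (not used for this test)
CREATE EXTERNAL TABLE ext_lineitem (LIKE lineitem)
LOCATION ('gpfdist://localhost:8081/lineitem.tbl')
FORMAT 'TEXT' (DELIMITER '|');
INSERT INTO lineitem SELECT * FROM ext_lineitem;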

 

Running queries

I started by running some individual queries to test whether the system was working and whether it was fast. It was working, but fast? TPC-H query 1 took 7 minutes and 7 seconds, which is more than 60 times slower than the fastest result (5.5 secs) I got with another database on this machine. I decided to fire up my TPC-H script anyway, which runs the official TPC-H benchmark consisting of a power test (a single stream) and a throughput test (5 parallel streams). After a couple of minutes I got this message:

 

ERROR: Greenplum Database does not yet support that query. DETAIL: The query contains a correlated subquery

 

Ouch, that hurts. I had the same problem with Infobright some time ago, but haven't tested recent editions. What's funny is that Query 4 runs fine (that one also contains a CSQ, but one that's probably easier to rewrite), while Q2, Q17, Q20, Q21 and Q22 return an error, which also invalidates the TPC results I got. I also got a couple of 'out of memory' errors while running the throughput test, even though memory utilization never exceeded 45 GB. So, as disappointing as it may be, I'm not going to publish the results here. What I can tell is that if you're running PostgreSQL and want a similar but faster database to run your data warehouse, GP single node edition will be a major improvement. It's also much faster than MySQL for typical BI queries. On the other hand, it's also (a lot) slower than SQL Server 2008, Kickfire, or Sybase IQ, the current leaders in the single node SF100 benchmarks.
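To give an idea of the kind of query GP chokes on: Q17 correlates a subquery on lineitem with the outer part row, and working around it means decorrelating by hand, roughly as sketched below (using the spec's example parameter values; this is an illustration, not a validated benchmark variant).

-- TPC-H Q17: the subquery is correlated on l_partkey = p_partkey
SELECT SUM(l_extendedprice) / 7.0 AS avg_yearly
FROM lineitem, part
WHERE p_partkey = l_partkey
  AND p_brand = 'Brand#23'
  AND p_container = 'MED BOX'
  AND l_quantity < (SELECT 0.2 * AVG(l_quantity)
                    FROM lineitem
                    WHERE l_partkey = p_partkey);

-- a manual decorrelation: aggregate once per part, then join
SELECT SUM(l.l_extendedprice) / 7.0 AS avg_yearly
FROM lineitem l
JOIN part p ON p.p_partkey = l.l_partkey
JOIN (SELECT l_partkey, 0.2 * AVG(l_quantity) AS qty_limit
      FROM lineitem
      GROUP BY l_partkey) lim ON lim.l_partkey = l.l_partkey
WHERE p.p_brand = 'Brand#23'
  AND p.p_container = 'MED BOX'
  AND l.l_quantity < lim.qty_limit;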

 

Conclusion

Greenplum might be an interesting product if you are using the MPP version and can invest time and money in optimizing and tuning the system, but I'm not very impressed with the single node edition, and some of the issues probably apply to the MPP version as well. It lacks intelligent auto-tuning capabilities, query optimization doesn't seem to work too well, and it does a bad job of utilizing the available memory (which might also be due to my lack of experience tuning PostgreSQL databases, btw). Note though that TPC-H is just 'a' benchmark: you should always do your own testing with your own data! Overall I would say it's 'not bad', but not very good either.

—————

Topic: TPC-H fun with Greenplum

Date: 05/02/2010

By: Jos van Dongen

Subject: Re: Re: Re: Re: Re: Some hints

My bad, sorry. I did indeed mean work_mem. And thanks for the very insightful post on tuning GP!

best, Jos

—————

Date: 30/01/2010

By: pguser

Subject: Re: Some hints

Also, with my workloads, GreenPlum's fast loading speed doesn't help much, because what I really need is fast merging (i.e. MERGE in SQL:2003) instead of fast appending. Unfortunately, it seems like all benchmarks focus on initial data loading speed only.
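For reference, what I mean by that is the standard SQL:2003 statement, something along these lines (table and column names are just an example):

MERGE INTO lineitem t
USING stage_lineitem s
  ON (t.l_orderkey = s.l_orderkey AND t.l_linenumber = s.l_linenumber)
WHEN MATCHED THEN UPDATE SET
  l_quantity = s.l_quantity,
  l_extendedprice = s.l_extendedprice
WHEN NOT MATCHED THEN INSERT
  (l_orderkey, l_linenumber, l_quantity, l_extendedprice)
  VALUES (s.l_orderkey, s.l_linenumber, s.l_quantity, s.l_extendedprice);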

—————

Date: 30/01/2010

By: Chuck McD

Subject: Re: Re: Some hints

It would be a lot better if Greenplum had MERGE, but as a workaround, this is fast:

1) Load the incoming stuff into a temp table.
2) Do a single big join UPDATE to handle all the changed rows.
3) Do a single big INSERT/SELECT to handle all the new rows.

I don't have much experience doing this with Single-node Edition, but on the full GP product this can be really fast.
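In SQL it comes down to something like this (table and key names are just an example):

-- 1) stage the incoming rows
CREATE TEMP TABLE stage_lineitem (LIKE lineitem);
COPY stage_lineitem FROM '/data/incoming/lineitem.tbl' WITH DELIMITER '|';

-- 2) one big UPDATE for the changed rows
UPDATE lineitem t
SET l_quantity = s.l_quantity,
    l_extendedprice = s.l_extendedprice
FROM stage_lineitem s
WHERE t.l_orderkey = s.l_orderkey
  AND t.l_linenumber = s.l_linenumber;

-- 3) one big INSERT/SELECT for the new rows
INSERT INTO lineitem
SELECT s.*
FROM stage_lineitem s
LEFT JOIN lineitem t
  ON t.l_orderkey = s.l_orderkey
 AND t.l_linenumber = s.l_linenumber
WHERE t.l_orderkey IS NULL;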

—————

Date: 30/01/2010

By: Luke Lonergan

Subject: Re: Re: Re: Some hints

Hi,

There is a MERGE alternative that GP provides in the gpload utility. It does the trick that Chuck describes, but does it automatically and has all of the basic functionality of a MERGE statement.

- Luke

—————

Date: 12/02/2010

By: Alin Dobra

Subject: Re: Some hints

Luke,

If I worked for GP, I would be really worried about the TPC-H Q1 performance (even though you are 20 times faster than Postgres). Q1 is probably the most hated query in TPC-H since no DB trick helps (short of pre-computing the answers); the performance depends only on the ability to perform computation, i.e. to evaluate expressions. This is precisely why you should be worried about your results, since GP does not do very well on Q1.
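For readers who don't have it memorized: Q1 is essentially a single scan of lineitem with a handful of arithmetic aggregates evaluated per row, paraphrased below with the usual 90-day parameter.

SELECT l_returnflag, l_linestatus,
       SUM(l_quantity) AS sum_qty,
       SUM(l_extendedprice) AS sum_base_price,
       SUM(l_extendedprice * (1 - l_discount)) AS sum_disc_price,
       SUM(l_extendedprice * (1 - l_discount) * (1 + l_tax)) AS sum_charge,
       AVG(l_quantity) AS avg_qty,
       AVG(l_extendedprice) AS avg_price,
       AVG(l_discount) AS avg_disc,
       COUNT(*) AS count_order
FROM lineitem
WHERE l_shipdate <= DATE '1998-12-01' - INTERVAL '90' DAY
GROUP BY l_returnflag, l_linestatus
ORDER BY l_returnflag, l_linestatus;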

The MonetDB people have a paper in CIDR 2005 (MonetDB/X100: Hyper-Pipelining Query Execution) that points out exactly what the problem is in most databases: very poor pipelining performance. Running a little interpreter for every evaluation of an expression is bound to be very inefficient, and it kills the branch prediction that is the soul of pipelining. The new engine the MonetDB people use, X100, gets within a factor of 2 of hand-hacked code, so it's no wonder that in the test that started this discussion MonetDB runs in 5.5 s and GP in 7 minutes.

Unfortunately, there is no easy fix in GP for this problem, since the execution engine has to be rewritten. The MonetDB people took that hard decision in 2005, but for them it was an easier one to make since theirs was an academic project. One solution is probably some form of just-in-time compilation. That would cut some of the inefficiency, but not solve the problem completely, since Postgres (and probably GP) is still a one-tuple-at-a-time engine. Peak performance can be obtained only with no interpretation and blocks of tuples at a time (preferably millions). In your particular setup, there is probably a 10x speedup to be had in the actual running time of Q1 (not more, due to the I/O).

A last comment. Performance on computational tasks is important in databases, since a database should not be used as a fancy data scanner. You want to perform the computation in the database, not in middleware, since shipping the data around is never efficient.

Alin

—————

Date: 29/01/2010

By: pguser

Subject: PostgreSQL is certainly not slow

If as stated you don't have much experience tuning PostgreSQL, then how can you also make the following statement:

"What I can tell is that if you're running PostgreSQL and want a similar but faster database to run your data warehouse, GP single node edition will be a major improvement."

I have tested Greenplum single node against PostgreSQL 8.4. As I am no expert, both installations are only tuned slightly (e.g. increasing shared_buffers, work_mem).

With 2 segments, Greenplum generally performed worse than PostgreSQL. For certain queries it did much worse, because less optimal plans were chosen, probably due to Greenplum's bizarre default setting of enable_seqscan = 0 and the older PostgreSQL version (8.2) it is based on.

Anyway, I just want to say that PostgreSQL is one bad ass database that you have greatly underestimated.

—————

Date: 30/01/2010

By: Chuck McD

Subject: Re: PostgreSQL is certainly not slow

Greenplum's default is enable_seqscan = on, the same as PostgreSQL. If you have something different, that is wrong.

2 segments is OK for some queries, but on a system with more than 2 CPU cores, the performance can be a lot better if you increase it.

PostgreSQL is a great database, and does a lot of transaction processing things that Greenplum isn't good at, but for data sizes larger than main memory, and complex queries, Greenplum should be much faster... If not, there might be something wrong in how it is set up.

—————

Date: 28/01/2010

By: mark

Subject: mixing storage types

Interesting that they offer a column option for table storage. I think the future of databases likely lies in having both storage types. It's hard to do because of the query optimization needed to deal with row- and column-stored tables in the same place. Since the physical storage is different, it's no surprise that you have to unload. Heck, not too long ago you had to unload and reload DB2 tables for certain table changes (not that DB2 has ever been the most friendly or easy to administer).

I am surprised by the lack of CSQ support, though. I thought they were better than that. Given the types of queries needed for BI, it's hard to position a product as an analytic database if CSQs can't be handled.

—————