The next wave in BI


TPC-H fun with Greenplum (single node edition)

27/01/2010 23:12

Introduction

There's a nice quote on the Greenplum site from Brian Dolan of Fox Interactive Media: “Very impressed with the speed... 3 minutes to do a sum on 100 million rows of data”. That pretty much sums it up. I wouldn't consider that anything to be proud of: 3 minutes for a simple sum over a measly 100 million rows is SLOW nowadays. A quick look at the TPC site will tell you that far more complex queries over at least 600 million records run in seconds, not minutes. Heck, with Excel 2010 PowerPivot, summing 100 million rows takes less than a second! And you don't need expensive hardware or software to do any of this. My own benchmark machine is a dual Xeon 5520 with 64 GB RAM and 12 Intel X25-M SSD drives of 80 GB each, connected to two Adaptec 5805 RAID controllers. The system runs CentOS 5.4 and the total cost is well below $10K. Anyway, it's always nice to see how a database behaves in practice, so I downloaded the free, single node Greenplum edition, installed it and ran the TPC-H SF100 benchmark. By the way, typing that last sentence took a lot less time than the actual work...

 

Installation

Before you install the Greenplum (GP) database it's good to plan your disk layout first. GP likes to distribute its data over so-called data segments, and each data segment is tied to a GP process, which in turn is tied to a CPU core. In my case I had 8 cores (16 with hyperthreading), which allowed me to use 8 disks or partitions. No RAID, no failover (although GP has facilities for both); I was just interested in testing load and query speed. The installation process itself is pretty straightforward and well explained in the single node install manual. What's not explained at all, not even in the more voluminous admin guide, is how the system should be tuned. Remember that GP is based on PostgreSQL, so all the same settings are available. However, the settings usually advised for PostgreSQL are aimed at OLTP systems, not at data warehousing. As a result, in my first run the system used only about 3 GB of the available 64 GB of RAM. It turned out that the most important configuration parameter (shared_buffers, which controls how much memory the database uses for caching data) was set at only 32 MB (!). I toyed around with several parameters, including the kernel settings, but couldn't use a value over 1920 MB (GP won't start when it's set to e.g. 2048 MB), so I suspect that value is capped somewhere.
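For reference, the check-and-change cycle boils down to something like the sketch below. This is a minimal example assuming a psql session on the master; the exact file locations depend on your install:

SHOW shared_buffers;   -- the default install reported a mere 32MB

-- GP is based on PostgreSQL 8.2, so there's no ALTER SYSTEM yet: the value
-- lives in the postgresql.conf of the master and of every data segment
-- (the gpconfig utility can push it to all of them), followed by a restart:
--   shared_buffers = 1920MB   -- anything of 2048MB or more refused to start here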

 

Database creation

Again, if you're familiar with PostgreSQL, GP's DDL and DML won't hold many secrets. They did add a few extras though, such as the ability to define a table as column oriented instead of the standard row orientation, and the ability to compress data. But, unlike modern column stores such as Vertica or ParAccel, the DBA is responsible for working out the best storage and compression strategy, as everything has to be explicitly created using DDL statements. What's more, column orientation and compression are only available for append-only tables. That's right: if you want columns and/or compression but still need to update your data, it's dump and reload. In that respect it's a bit similar to Infobright Community Edition, which lacks all DML capabilities. For the benchmarks I ran, I went with the default row orientation.
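To give an idea of the DDL involved, here's a minimal sketch of a column oriented, compressed table; the column list is a trimmed-down lineitem from the TPC-H schema and the storage options are the ones documented in the admin guide:

CREATE TABLE lineitem_col (
    l_orderkey      bigint,
    l_partkey       bigint,
    l_quantity      numeric,
    l_extendedprice numeric
)
WITH (appendonly=true,       -- prerequisite for the two options below
      orientation=column,    -- column instead of row orientation
      compresstype=zlib,
      compresslevel=5)
DISTRIBUTED BY (l_orderkey); -- spreads the rows over the data segments
-- Append-only means just that: COPY and INSERT work, UPDATE and DELETE don't.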

 

Loading & database size

The data loaded into the database is generated using the dbgen tool, which can be compiled from the source available at www.tpc.org. Dbgen creates 8 ASCII files, one per table in the TPC-H schema, which can then be loaded into the created database. GP lets you use the standard COPY statement to load files into a table, but also offers a facility called 'external tables' to support fast, parallel data loading. I didn't use external tables but plain COPY, which took about 45 minutes to load the 100 GB dataset. The tables already had indexes defined, so that's actually pretty good. What's not so good is the resulting database size. As I couldn't use compression, the database ended up at about 160 GB, which is 10 times as much as you need when loading the same dataset into an Infobright database. It's also about two to three times as much as less aggressively compressed databases need.
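For the record, the load boils down to the following; the file path, host and port are made up for the example:

-- Files generated beforehand with: dbgen -s 100  (eight '|'-delimited files)
COPY lineitem FROM '/data/tpch/lineitem.tbl' WITH DELIMITER '|';

-- The parallel alternative I skipped: an external table served by the
-- gpfdist daemon, which all segments can read from simultaneously:
CREATE EXTERNAL TABLE ext_lineitem (LIKE lineitem)
LOCATION ('gpfdist://localhost:8081/lineitem.tbl')
FORMAT 'TEXT' (DELIMITER '|');
INSERT INTO lineitem SELECT * FROM ext_lineitem;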

 

Running queries

I started by running some individual queries to test whether the system was working and whether it was fast. It was working, but fast? TPC-H query 1 took 7 minutes and 7 seconds, which is more than 60 times slower than the fastest result (5.5 secs) I got with another database on this machine. I decided to fire up my TPC-H script anyway, which runs the official TPC-H benchmark consisting of a power test (a single stream) and a throughput test (5 parallel streams). After a couple of minutes I got this message:

 

ERROR: Greenplum Database does not yet support that query. DETAIL: The query contains a correlated subquery

 

Ouch, that hurts. I had the same problem with Infobright some time ago, but haven't tested recent editions. What's funny is that query 4 runs fine (that one also contains a CSQ, but one that's probably easier to rewrite), while Q2, Q17, Q20, Q21 and Q22 return an error, which also invalidates the TPC results I got. I also got a couple of 'out of memory' errors while running the throughput test, even though memory utilization never exceeded 45 GB. So, as disappointing as it may be, I'm not going to publish the results here. What I can tell is that if you're running PostgreSQL and want a similar but faster database to run your data warehouse, GP single node edition will be a major improvement. It's also much faster than MySQL for typical BI queries. On the other hand, it's also (a lot) slower than SQL Server 2008, Kickfire, or Sybase IQ, the current leaders in the single node SF100 benchmarks.
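To illustrate what GP is choking on, here's the gist of Q17 (quoted from memory, so the predicates may differ from the official query text): the subquery references p_partkey from the outer query, which is exactly the kind of correlation it rejects. The second statement is one possible manual rewrite using a derived table:

-- The correlated original:
SELECT sum(l_extendedprice) / 7.0 AS avg_yearly
FROM lineitem, part
WHERE p_partkey = l_partkey
  AND p_brand = 'Brand#23'
  AND p_container = 'MED BOX'
  AND l_quantity < (SELECT 0.2 * avg(l_quantity)
                    FROM lineitem
                    WHERE l_partkey = p_partkey);  -- the correlation

-- Decorrelated: compute the per-part limits once, then join:
SELECT sum(l.l_extendedprice) / 7.0 AS avg_yearly
FROM lineitem l
JOIN part p ON p.p_partkey = l.l_partkey
JOIN (SELECT l_partkey, 0.2 * avg(l_quantity) AS qty_limit
      FROM lineitem
      GROUP BY l_partkey) lim ON lim.l_partkey = l.l_partkey
WHERE p.p_brand = 'Brand#23'
  AND p.p_container = 'MED BOX'
  AND l.l_quantity < lim.qty_limit;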

 

Conclusion

Greenplum might be an interesting product if you're using the MPP version and can invest time and money in optimizing and tuning the system, but I'm not very impressed with the single node edition, and some of the issues probably apply to the MPP version as well. It lacks intelligent auto tuning capabilities, query optimization doesn't seem to work very well, and it does a bad job of utilizing the available memory (which, admittedly, might also be due to my lack of experience with tuning PostgreSQL databases). Note though that TPC-H is just 'a' benchmark: you should always do your own testing with your own data! Overall I would say it's 'not bad', but not very good either.

—————



Topic: TPC-H fun with Greenplum


Date: 17/03/2011

By: Anitta chan

Subject: Mixing storage types

I'm surprised there's no CSQ support; I thought they were better than that. Given the types of queries needed for BI, it's hard to position a product as an analytic database if it can't handle CSQs.

—————

Date: 11/02/2011

By: Randolph

Subject: TPC-H

TPC-H includes some tests that rule out many MPP systems (i.e. transactions and true ACID compliance). In fact, I'm not sure TPC-H is a relevant test of these systems as long as those transactional tests are retained in the benchmark. These systems don't try to be transactional systems and don't require those slow, sequential features.

If you want to see good TPC-H figures, apparently Vectorwise has just blitzed the TPC-H with a very significant result.

We are using Vectorwise on our MPP nodes and are seeing staggering performance improvements. But like GP (although Luke would probably never admit this), we can't legally run TPC-H yet either.

For more info look here: www.deepcloud.co

—————