Running Oracle Exadata V2 on Dell Hardware

Well we had to give it a shot.

So we created an Oracle Exadata Storage Server Software CELLBOOT USB flash drive. I’m not kidding, that’s what the Oracle/Sun guys decided to call it. They didn’t even use an acronym in the manual (I guess “ESSSCB USB FD” doesn’t roll off the tongue much better than the whole thing anyway). We used the make_cellboot_usb utility to create the thing off one of our storage servers, which by the way was not that simple, since the USB ports are in the back of the 4275s and they are hard to get to with all the cabling that’s back there. Anyway, once we had the little bugger created we pulled it out of the back of the rack and booted a Dell Latitude D630 off of it. Here’s a picture:

Notice the thumb drive is all lit up like a Christmas tree.

Here is a close up of the screen (in case your eyes are going bad like mine):

So we tried a couple of different options but eventually got to this screen:

Index rebuilds

A couple of years ago I wrote about a poster on the OTN db forum who was puzzled by the fact that when he rebuilt some tables they got bigger. (Bad luck, sometimes that’s what happens!)

A few days ago a related question appeared: I rebuilt some indexes and my query got slower. (Bad luck, sometimes that’s what happens – again!)

If you rebuild an index it’s physically different and its statistics are different. Plans can change and go slower because the index stats look sufficiently different; plans can stay the same and go slower because the index is physically different. 
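As a hedged illustration of the first effect (the index name t1_idx is made up, not taken from the thread), you can snapshot the statistics the optimizer costs against before and after the rebuild and compare:

-- before the rebuild
select index_name, blevel, leaf_blocks, clustering_factor
  from user_indexes where index_name = 'T1_IDX';

alter index t1_idx rebuild;

-- after the rebuild: compare leaf_blocks and clustering_factor
select index_name, blevel, leaf_blocks, clustering_factor
  from user_indexes where index_name = 'T1_IDX';

A drop in leaf_blocks alone can be enough to make the index look cheaper to the optimizer and switch a plan, for better or worse.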

I’ve added a couple of comments to the thread – there may still be some further mileage in it.

TPC-H Results

Tests of In-Memory Non-Parallel Query are something you rarely see.
But I ran them to answer the question: can you build a mini DWH on Oracle SE, where Parallel Query is not available?
Besides, this prototype machine carries 16 GB of memory, so I was itching to run tests that work the memory as hard as possible.

Naturally, from the next post onwards I’ll blog the In-Memory Parallel Query test results, but this time I decided to organize the results so far.

I plotted the results so far as a graph

T1 Parallel Query, no_compress + no_partition table
T2 compress + no_partition table
T3 no_compress + partition table
T4 compress + partition table
T5 In-Memory Non-PQ, compress
T6 no_compress
T7 compress + result_cache
T8 no_compress + result_cache

               [X axis = number of concurrent sessions, Y axis = qph (queries per hour)]
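As a hedged sketch of the kind of DDL and hints behind the legend above (the table and column names are invented here, not the actual TPC-H schema):

-- T4-style storage: compressed, range-partitioned (hypothetical names)
create table lineitem_t4 (
  l_orderkey number,
  l_shipdate date,
  l_quantity number
)
compress
partition by range (l_shipdate) (
  partition p1998 values less than (date '1999-01-01'),
  partition pmax  values less than (maxvalue)
);

-- T7/T8-style query: repeat executions can be served from the result cache
select /*+ result_cache */ count(*), sum(l_quantity)
  from lineitem_t4
 where l_shipdate < date '1999-01-01';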

  • The qph figures show that at 8 concurrent sessions, In-Memory Non-PQ (non-Parallel Query) comes out ahead (T5-T8)
  • And the uncompressed variant (no_compress) looks like the runaway winner (T8)
  • But divide those figures by the number of sessions:

  • A single Parallel Query session against the compressed table takes first place (T2)
  • But Parallel Query topped out at 2 concurrent sessions; beyond that the measurements were unstable and took far too long (T1-T3)
  • Even so, with Partitioning + Compress it kept working up to 6 sessions (T4)
  • In contrast, In-Memory Non-PQ ran stably all the way to 8 sessions (T5-T8)
  • And in the earlier post “In-Memory non-Parallel Query just isn’t enough!” I wrote:

    In-Memory non-Parallel contributed a great deal to the overall TPC-H score, but I confirmed that in a real data warehouse Parallel Query is the better option.
    And these tests made me understand that, because Parallel Query saturates the disk transfer ceiling right from the start, some deliberate concurrency control is required.

At times the disk transfer rate even sat constantly above CrystalDiskMark’s measured ceiling of 522 MB/s (T2).
But most of what is read is not actually needed: it is read from disk and then filtered in memory.
The read volume for one minute:

500 MB/s x 60 s = 30,000 MB = 30 GB  .... in just one minute that easily exceeds the actual size of the data.

This made me appreciate how important it is that the Exadata Storage Server can intelligently filter on WHERE conditions and return only the results, or return join conditions as hash values.
Earlier, in “Oracle Closed World”, I wrote:

    But the “Secondary Oracle” inside the Storage Server is a closed, Exadata-only thing, right?
    → That sort of thing will keep changing, and fast. It is already getting down to the finer details.

My claim that “once SSDs spread and transfer speeds rise, filtering won’t matter that much any more” turned out to be a failure of understanding on my part.

Still, in the real world you rarely see anything like the TPC-H benchmark, where 8 concurrent sessions keep firing heavy SQL with no think time, so Parallel Query, which hits its ceiling at around 2 sessions, can still play an active role.
And again, T4 cut the effective read volume with Compress + Partitioning, which is how it managed 6 sessions.

One more thing:
In-Memory wins big on scalability.
But only because the data fits in memory...

An investigation into exadata

This is an investigation into a half-rack database machine (the one at VX Company). It’s an Exadata / Database Machine V2, which means Sun hardware, with database and cell (storage) software at version 11.2.

I built a table (called ‘CG_VAR’), which consists of:
- bytes: 50787188736 (47.30 GB)
- extents: 6194
- blocks: 6199608

The table has no primary key, no other constraints, and no indexes. (Of course, this is not a real-life situation.)

No Exadata optimisation

First, I disabled the Exadata storage optimisation using the session parameter ‘CELL_OFFLOAD_PROCESSING’:
alter session set cell_offload_processing=false;

Then I executed: select count(*) from cg_var where sample_id=1;
Rows with the value ‘1’ account for roughly 25% of the table ‘CG_VAR’.
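As a hedged cross-check (the v$sql columns below exist in 11.2; the sql_text filter is only illustrative), you can confirm from v$sql whether a statement was even eligible for offload and how much data actually crossed the interconnect:

-- With offload disabled, io_cell_offload_eligible_bytes should be 0 (or near it)
select sql_id, io_cell_offload_eligible_bytes, io_interconnect_bytes
  from v$sql
 where sql_text like 'select count(*) from cg_var%';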

Execution plan:

So what’s the “Michigan OakTable Symposium” all about…

A bit sad that I don’t have the time to post some of the cool internal tracing stuff I have done lately, which gave me a far better insight into the inner workings of Oracle XMLDB, but there is a reason for it…

:-)

As you may have noticed, besides the day-to-day database administration and consulting work, there is a lot of presenting going on in my “free” time. Sharing is fun, but it also takes a lot of preparation. So besides attending and presenting at Oracle Open World and OPP2000 in Brussels, I also sneakily arranged – LOL – some slots in that great Michigan OakTable Symposium line-up, in which I feel very humbled… so if you want the details you can’t read here, get hold of me at one of those conferences and just ask…

;-)

Anyway, need to know why you should attend…? Have a sneak peek…

MOTS from Joel Schneider on Vimeo.


SQL Developer and MS SQL Server…

This afternoon I’ve been cleaning up some data in a SQL Server database. I decided to use SQL Developer to connect to SQL Server by following this post.

I made liberal use of the following tip when dealing with TEXT and NTEXT types.
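The linked tip isn’t reproduced here, but a common workaround (my assumption, not necessarily the one in the linked post) is to cast the deprecated TEXT/NTEXT columns to VARCHAR(MAX)/NVARCHAR(MAX) so the normal string functions work:

-- TEXT/NTEXT reject most string functions; cast first (hypothetical table/column)
SELECT id, CAST(notes AS VARCHAR(MAX)) AS notes_text
FROM dbo.some_table
WHERE CAST(notes AS VARCHAR(MAX)) LIKE '%cleanup%';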

The joys of dealing with multiple engines…

Cheers

Tim…

SaneSAN2010: Serial to Serial – When One Bottleneck Isn’t Enough

I was recently looking into a storage-related performance problem at a customer site. The system was an Oracle 10.2.0.4/SLES 9 Linux system, Fibre Channel attached to an EMC DMX storage array. The DMX was replicated to a DR site using SRDF/S.

The problem was only really visible during the overnight batch runs, so AWR reports were the main source of information in diagnosis. In this case they were more than sufficient, showing clear wait spikes for ‘free buffer waits’ and ‘log file parallel write’ during the problematic period. They were quite impressive, too – sixteen-second latencies for some of the writes.
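For anyone wanting to reproduce that kind of diagnosis, here is a hedged sketch of pulling the two symptomatic events straight out of the AWR repository (dba_hist_system_event is the underlying view; the actual analysis above came from standard AWR reports):

-- Cumulative waits and wait time per snapshot for the two events
select snap_id, event_name, total_waits,
       time_waited_micro / 1e6 as time_waited_sec
  from dba_hist_system_event
 where event_name in ('free buffer waits', 'log file parallel write')
 order by snap_id, event_name;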

Sane SAN 2010 – Introduction

This year at the UKOUG Conference in Birmingham, acceptance permitting, I will present the successor to my original Sane SAN whitepaper, first penned in 2000. The initial paper was spectacularly well received, relatively speaking, mostly because disk storage at that time was very much a black box to DBAs and a great deal of mystique surrounded its operation. Well, nothing much has changed on that front, so I figured it was high time to update and rewrite the paper for modern technology and trends, and to try to impose my occasionally humble opinion on the reader.