Little Things Doth Crabby Make – Part XV. Oracle SPARC SuperCluster Is Much Faster Than IBM Power Systems! No Squinting Allowed!

Since I missed Oracle OpenWorld 2011, I wasn’t able to attend the keynotes. I have, however, taken the time to view each of them in playback from the video archives. After viewing the keynote delivered by Oracle Corporation’s CEO Larry Ellison, I felt compelled to read some additional literature relevant to the IBM-smashing claims made by Mr. Ellison during his segment focused on Oracle SPARC SuperCluster. A simple Google search brought me to www.oracle.com/us/corporate/features/sun-beats-ibm-501074.html, where I see the following graphic:

[Graphic from oracle.com comparing Oracle SPARC SuperCluster with IBM Power Systems]
It has been a long time since my last installment in the Little Things Doth Crabby Make series, so it’s high time I added one. Here is what I see at the “sun-beats-ibm” webpage:

[Screenshot of the “sun-beats-ibm” webpage, with its fine-print disclaimer at the bottom]
In case the fine-print disclaimer is too small, here’s what I see (bold font added by me):

Sources for Comparison of Systems:

Systems cost based on server, software and comparable storage list prices (without discounts), as well as third party research. Performance comparison based on Oracle internal testing together with publicly available information about IBM Power 795 TurboCore system with highest processor speed commercially available (4.25 GHz) as of Sept 28, 2011


That makes me crabby. I shouldn’t have to squint, should I?

Filed under: oracle

Oracle Core - Essential Internals for DBAs and Developers (Jonathan Lewis)

Book: 

Oracle Core: Essential Internals for DBAs and Developers by Jonathan Lewis provides just the essential information about Oracle Database internals that every database administrator needs for troubleshooting—no more, no less.
 
Oracle Database seems complex on the surface. However, its extensive feature set is really built upon a core infrastructure resulting from sound architectural decisions made very early on that have stood the test of time. This core infrastructure manages transactions and the ability to commit and roll back changes, protects the integrity of the database, enables backup and recovery, and allows for scalability to thousands of users all accessing the same data.

Want to Know More about Oracle’s Core?

I had a real treat this summer during my “time off” in that I got to review Jonathan Lewis’s upcoming new book. I think it’s going to be a great book. If you want to know how Oracle actually holds its data in memory, how it finds records already in the cache, and how it manages to control everything so that all that committing and read consistency really works, it will be the book for you.

{Update: Jonathan has confirmed that, unexpected hiccups aside, Oracle Core: Essential Internals for DBAs and Developers should be available from October 24, 2011.}

The Social Development Database

I’ve always been fascinated by development databases — more so sometimes than huge, heavily utilized production ones — mainly because I’ve seen how the beginnings of a performance problem, or the start of an elegant solution, take shape within a development database. It’s one of the reasons why I love high levels of visibility through full DDL auditing within development. I love to SEE what database developers are thinking, and how they are implementing their ideas using specific shapes of data structures.
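As a minimal sketch of what I mean by DDL auditing (an illustration only; the log table and trigger names are hypothetical, not a production framework):

    -- Record every DDL statement issued in the development database.
    CREATE TABLE ddl_audit_log (
      event_time  TIMESTAMP DEFAULT SYSTIMESTAMP,
      username    VARCHAR2(30),
      ddl_event   VARCHAR2(30),
      object_type VARCHAR2(30),
      object_name VARCHAR2(30)
    );

    CREATE OR REPLACE TRIGGER trg_ddl_audit
    AFTER DDL ON DATABASE
    BEGIN
      -- ora_* are Oracle's system-defined event attribute functions.
      INSERT INTO ddl_audit_log (username, ddl_event, object_type, object_name)
      VALUES (ora_login_user, ora_sysevent, ora_dict_obj_type, ora_dict_obj_name);
    END;
    /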

One of the concepts I’d love to see is a “river of news” panel within development tools, showing what is going on inside a development database. Some of the good distributed source code control systems do this now.

Here’s a good example of what I mean:

http://github-images.s3.amazonaws.com/blog/2011/mac-screenshots/commits-full.png

Happy Birthday Val…

It’s my mom’s 70th birthday today. Happy birthday Val.

I’m flying to China today, so I’ve set this post to publish automatically while I’m in transit. :)

Cheers

Tim…

Debugging PL/SQL and Java Stored Procedures with JPDA

In 2003 I published a paper entitled Debugging PL/SQL and Java Stored Procedures with JPDA. Its aim was to describe how to debug PL/SQL and Java code deployed into the database with JDeveloper 9i. Two weeks ago a reader of my blog, Pradip Kumar Pathy, contacted me because he had tried, without success, to do something similar with JDeveloper 11g, WebLogic 11g and Oracle Database 11g. Unfortunately I was not able to help him. The reason is quite simple: since 2004 I have been an Eclipse user…

A few days later Pradip contacted me again to let me know that, at last, he had succeeded. Here are his notes…

  1. Grant the required privileges:

     GRANT DEBUG CONNECT SESSION TO &&schema_name;
     GRANT DEBUG ANY PROCEDURE TO &&schema_name;
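As an aside (my illustration rather than part of Pradip’s notes; the host and port are placeholders), once those privileges are in place the database session being debugged attaches to the JPDA listener that the IDE opens, via the DBMS_DEBUG_JDWP package:

    -- Attach the current session to the IDE's JDWP listener.
    BEGIN
      DBMS_DEBUG_JDWP.CONNECT_TCP(host => '127.0.0.1', port => '4000');
    END;
    /

    -- ...invoke the PL/SQL or Java stored procedure to be debugged...

    -- Detach when done.
    BEGIN
      DBMS_DEBUG_JDWP.DISCONNECT;
    END;
    /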

Returning to the Day Job.

Having the summer off: it’s something that quite a few IT contractors and some consultants say they intend to do…one year. It’s incredibly appealing, of course, to take time off from your usual work to do some other things. Asking around, though, it is not something many of us self-employed types who theoretically could do it actually have done. I think it is because, although the theory is nice, the reality is a period of not earning a living – and the background worry of “if I take a break, will I be able to step straight back into gainful employment afterwards?”

AIOUG Webcast: Methodical Performance Tuning

A big thank you to all those who attended my session today. I sincerely hope you got something out of it. Here are the scripts I used in the demo. And, here is the slide deck, if you are interested.

Remember, this was just the beginner's session. We will have intermediate and advanced ones in the near future. Stay tuned through the AIOUG site.

Counting Triangles Faster

A few weeks back one of the Vertica developers put up a blog post on counting triangles in an undirected graph with reciprocal edges. The author compared the size of the data and the elapsed times to run this calculation on Hadoop and Vertica, put the work up on GitHub, and encouraged others: “do try this at home.” So I did.
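To make the task concrete, here is a sketch of the self-join that the triangle count boils down to (the edges(source, dest) table and its column names are my assumptions; the table is assumed to hold both directions of every edge):

    -- Count each triangle exactly once by imposing an ordering
    -- on the three vertex ids.
    SELECT COUNT(*) AS triangles
    FROM   edges e1
    JOIN   edges e2 ON e2.source = e1.dest
    JOIN   edges e3 ON e3.source = e2.dest
                   AND e3.dest   = e1.source
    WHERE  e1.source < e2.source
    AND    e2.source < e3.source;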

Compression

Vertica draws attention to the fact that their compression brought the 86,220,856 tuples down to 560MB, from a flat-file size of 1,263,234,543 bytes, resulting in around a 2.25X compression ratio. My first task was to load the data and see how Oracle’s Hybrid Columnar Compression would compare. Below is a graph of the sizes.
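For reference, the load itself is unremarkable (a sketch assuming an edges table and an ext_edges external table over the flat file; Hybrid Columnar Compression requires Exadata storage, and QUERY HIGH is just one of the available levels):

    CREATE TABLE edges (
      source NUMBER NOT NULL,
      dest   NUMBER NOT NULL
    )
    COMPRESS FOR QUERY HIGH;

    -- Direct-path insert so rows are compressed on the way in.
    INSERT /*+ APPEND */ INTO edges
    SELECT source, dest FROM ext_edges;
    COMMIT;

    -- Compare the resulting segment size with the flat-file size.
    SELECT bytes/1024/1024 AS mb
    FROM   user_segments
    WHERE  segment_name = 'EDGES';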

Volatile Data, Dynamic Sampling And Shared Cursors

For the next couple of weeks I'll be picking up various random notes I've made during the sessions that I attended at OOW. This particular topic was also a problem discussed recently at one of my clients, so it's certainly worth publishing here.

In one of the optimizer-related sessions it was mentioned that for highly volatile data - for example, data often found in Global Temporary Tables (GTTs) - it's recommended to use Dynamic Sampling rather than attempting to gather statistics. Gathering statistics is particularly problematic for GTTs because the statistics are global and shared across all sessions, yet each session's GTT could hold a completely different data volume and distribution, so sharing the statistics doesn't make sense in such scenarios.

So using Dynamic Sampling sounds like reasonable advice, and it probably is in many such cases.
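A minimal sketch of that approach (the table, data and hint level are illustrative): remove any statistics from the GTT and lock them, so the optimizer samples the session's actual data at parse time instead of reusing shared statistics:

    CREATE GLOBAL TEMPORARY TABLE gtt_orders (
      order_id NUMBER,
      status   VARCHAR2(10)
    ) ON COMMIT PRESERVE ROWS;

    -- With statistics deleted and locked, the optimizer falls back to
    -- dynamic sampling for this table automatically.
    BEGIN
      DBMS_STATS.DELETE_TABLE_STATS(user, 'GTT_ORDERS');
      DBMS_STATS.LOCK_TABLE_STATS(user, 'GTT_ORDERS');
    END;
    /

    -- A hint can raise the sampling level for an individual statement.
    SELECT /*+ dynamic_sampling(g 4) */ COUNT(*)
    FROM   gtt_orders g
    WHERE  status = 'OPEN';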