Oakies Blog Aggregator

Oracle Open World Report for October 11th and 12th

As I am attending Open World 2009 on blogger credentials, it seems only proper that I actually blog about it.

So, here it is.  I won't be blogging about keynotes or other things that will appear in the news the following day, but rather on some of the sessions I attend.

As I got back to my room too late and too tired to do this properly on Monday, I am putting Sunday and Monday in the same post.

Here goes:

Open World - Sunday 10/11/2009

While attending Oracle Open World 2009, I thought it a good idea to report on the sessions attended and on any interesting developments at OOW.

Some of the sessions I attended may not be considered DBA topics; I thought it would be interesting to break out of the DBA mold for a bit and attend a few sessions outside the DBA realm.

Sue Harper - Everyday Tasks with Oracle SQL Developer

Sue Harper is the product manager for SQL Developer, and was presenting some of the useful new features of the SQL Developer 2.1 Early Adopter release.

While I have used SQL Developer from the time it was first released as Raptor, until recently I used it simply as a database browsing tool. After exploring some of the features that allow writing reports with master/detail sections, I converted some SQL*Plus scripts for use with SQL Developer.

SQL Developer is a very capable tool, so I attended this session to see what else I might be missing out on.

There was only one hour allocated for the session, and since the first 15 minutes were spent convincing the audience why they should be using SQL Developer, that much less time was available to see the new features.

Taking a few minutes to market it is probably just in a product manager's DNA.

Some of the features demonstrated were actually available in 1.5, but are perhaps not widely known. As I have not yet used 2.1, I won't always differentiate between versions here; some of these features may not be new in 2.1, just improved.

Though not a new feature in 2.1, a few minutes were used to demonstrate the built-in version control integration. This is a very useful feature, and can be set up for seamless integration with CVS, Subversion, Perforce, and one other I can't recall now.  It's definitely worth a look.

Some features that are new to 2.1 looked very useful:

Persistent column organizing and hiding.  When viewing data in SQL Developer, the columns may be easily rearranged and selected or de-selected for viewing.  While previous versions allowed dragging columns around, 2.1 has a nice dialog that makes this much easier.

New to 2.1 is column filtering.  Right-clicking on a cell in the data pane brings up a dialog to filter the data based on the values found.  This allows filtering the data without requerying the table.

Also new to 2.1 is an XML DB Repository Navigator. It was mentioned, but alas there was no time to demonstrate it.

http://www.oracle.com/technology/products/database/sql_developer/index.html

http://sqldeveloper.oracle.com/

Christopher Jones - Python/Django Framework

This was a hands-on developer session centered on using the Python scripting language with the Django web application framework.  It was a fun session.  The lab was already set up, running Oracle Linux VMs accessed via individual laptops in the training room.

The lab was a go-at-your-own-pace session, with instructions both printed and available via browser.  The browser-based instructions were the way to go, as the examples could be cut and pasted, saving a lot of typing.

I wasn't able to quite complete the lab, as I kept encountering an error when running the web app.  It was probably due to an error in one of the scripts I modified during the session, but enough was accomplished to see that the Django framework looks very interesting.  Perhaps even simple enough for a DBA to use.  Yes, I did search the schedule for a similar Perl session, perhaps using Mason or some such.

The training materials are going to be placed on OTN in the Oracle By Example section after Open World concludes.

http://www.oracle.com/technology/obe/start/index.html

Ray Smith - Linux Shell Scripting Craftsmanship

The last session I attended on Sunday was all about shell script craftsmanship. Ray Smith was sharing some common sense methods that can be used to greatly enhance your shell scripts.

If you have done any software development, the information presented would be similar to what you already know.

  • Use white space and format your code for readability.
  • Don't be overly clever - other people will have to read the shell script.
  • Give your scripts a header explaining their purpose, separate sections for independent and dependent variables, and a section for the code.
  • Use getopts to control command line arguments (see the sketch below).
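
To make the getopts point concrete, here is a minimal sketch of the pattern (my own example, not one from the session; the option names are hypothetical):

    #!/bin/sh
    # Parse -v (verbose) and -o <file> (output file) with getopts,
    # then treat the remaining argument as the input file.
    usage() {
        echo "usage: $0 [-v] [-o output_file] input_file" >&2
        exit 1
    }

    verbose=0
    outfile=/dev/stdout

    while getopts "vo:" opt; do
        case $opt in
            v) verbose=1 ;;
            o) outfile=$OPTARG ;;
            *) usage ;;
        esac
    done
    shift $((OPTIND - 1))    # drop the parsed options

    [ $# -eq 1 ] || usage
    [ "$verbose" -eq 1 ] && echo "processing $1" >&2
    cat "$1" > "$outfile"
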
Smith strongly advocated that everyone in the audience obtain a copy of the book "The Art of Unix Programming" by Eric S. Raymond.  This is not a new book by any means, but Smith drew upon it for many of the principles he advocated in scripting.

A couple of tools new to me were mentioned:

Zenity and Dialog - both of these display graphical dialog boxes and may be called from shell scripts on Linux.

http://freshmeat.net/projects/zenity

Dialog ships with most Linux distributions, so just run man dialog to check it out.
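
Here's a minimal sketch of the sort of thing zenity makes possible (my own example, assuming zenity is installed; not one of Smith's demos):

    #!/bin/sh
    # Ask a yes/no question, then show an auto-closing progress bar.
    if zenity --question --text="Purge old log files?"; then
        for pct in 10 40 70 100; do
            sleep 1
            echo $pct        # zenity --progress reads percentages from stdin
        done | zenity --progress --text="Purging..." --auto-close
    else
        zenity --info --text="Nothing done."
    fi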

It was an interesting presentation.  Though much of it was not new to me, the two dialog tools were, showing that no matter how well you think you know a subject, you can always learn something from someone else.

Open World - Monday 10/12/2009

Jonathan Lewis Unconference - How to be an Expert
http://www.jlcomp.demon.co.uk/
http://jonathanlewis.wordpress.com/

Jonathan Lewis had an interesting unconference presentation.

http://wiki.oracle.com/page/Oracle+OpenWorld+Unconference

In a nutshell, it comes down to this:

You must practice, and practice quite a lot.

To make the point, he used the joke about the American tourist asking the groundskeeper how the lawns of the Royal Estates are maintained to be so lush, to have such an even texture and, in short, to be so perfect.

The groundskeeper explained while the tourist took notes.

First you must dig down 4 inches.

Then you must put down a layer of charcoal.

Then another 1-inch layer of fine, sharp sand.

Finally a layer of fine loam goes on top.

You then must seed the lawn, and water it very well for 6 weeks.

After 6 weeks, you must cut the grass, being very careful to remove only a small amount as you mow.  This must be done three times a week.

And then you continue doing this for 200 years.

OK, everyone had a good laugh at that, but the point was made.

Reading some books and being able to run some scripts does not make you an expert.  Lots and lots of practice may make you an expert, if you apply yourself well.

During the presentation he asked a number of questions of the audience, made up mostly of DBAs. I will reprise a couple of them here.

Q1:  Assuming you have a simple heap table, with no indexes, you update a single column in 1 row of the table.  How many bytes of redo will that generate?

Q2: Who among you, when inserting data into a table, deliberately inserts duplicate data into the database?

I will leave you to speculate on the answers a bit.

Of those 2 questions, only 1 was answered correctly by the audience.
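
His deeper point was that an expert measures rather than guesses. A minimal sketch of how one might answer Q1 experimentally (my illustration, not Jonathan's; the table T1, its columns, and the $CONN connect string are hypothetical):

    #!/bin/sh
    # Sample the session statistic 'redo size' before and after a
    # single-row update; the difference is the redo the session generated.
    sqlplus -s "$CONN" <<'EOF'
    set verify off
    column value new_value redo_before
    select ms.value
      from v$mystat ms join v$statname sn on sn.statistic# = ms.statistic#
     where sn.name = 'redo size';

    update t1 set pad = rpad('y',100) where id = 1;
    commit;

    select ms.value - &redo_before as redo_bytes
      from v$mystat ms join v$statname sn on sn.statistic# = ms.statistic#
     where sn.name = 'redo size';
    EOF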

Leng Tan and Tom Kyte - DBA 2.0: Battle of the DBAs

What is the difference between DBA 1.0 (the old days) and DBA 2.0 (the modern DBA)?

DBA 2.0 has modern tools: a self-managing database enabled by AWR and the Diagnostics and Tuning packs.

DBA 1.0 uses scripts and works from the command line.

On the stage, in addition to Kyte and Tan, were two DBAs, each with a laptop and an Oracle server to work on.

Two scenarios were presented as timed, hands-on problems for each DBA to work through.

First scenario - Security Audit

Each DBA was given 6 minutes to do a database audit and report on possible vulnerabilities.

DBA 1.0 ran scripts to check for open accounts, default passwords, publicly granted packages and umask settings.

After doing so he ran a script to remove privileges granted to PUBLIC, and locked a couple of accounts.
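
For flavor, here is a minimal sketch of the kind of audit script DBA 1.0 might run (my guess at the checks, not the actual scripts used on stage; DBA_USERS_WITH_DEFPWD requires 11g):

    #!/bin/sh
    # Quick security audit: open accounts, default passwords,
    # and sensitive packages granted to PUBLIC.
    sqlplus -s "/ as sysdba" <<'EOF'
    prompt === Open accounts ===
    select username from dba_users where account_status = 'OPEN';

    prompt === Accounts with default passwords (11g) ===
    select username from dba_users_with_defpwd;

    prompt === Sensitive packages granted to PUBLIC ===
    select table_name from dba_tab_privs
     where grantee = 'PUBLIC'
       and table_name in ('UTL_FILE','UTL_TCP','UTL_SMTP','UTL_HTTP');
    EOF
    umask    # check the OS file-creation mask too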

DBA 2.0

DBA 2.0 worked from the Oracle Enterprise Manager console, using the Secure Configuration for Oracle Database.

He was able to observe the database security score, navigate through several screens and correct the same security problems that DBA 1.0 did.  Following that he was able to see that the security score for the database had improved.

The presenter's conclusion was that OEM is clearly superior, because OEM will automatically gather the needed data every night.

By contrast, DBA 1.0 can only audit one database at a time.

I do not believe this demonstration to be a valid comparison - it's quite simple to run the same checks against any number of databases from a driving script, and report on anomalies.

At this point it should be mentioned that DBA 1.0 spent 4 minutes explaining what he was going to do, another minute explaining what the scripts were doing, and less than 1 minute actually running the scripts.

By comparison, DBA 2.0 spent nearly the entire 6 minutes navigating through screens.

The statement was made by the presenter that doing this with scripts at the command line was far too tedious a task, and DBA 1.0 would never be able to accomplish it for 200 databases.

I won't belabor the point (well, not too much), but automating these kinds of tasks is relatively simple with command line tools.  Which is easier and more productive?  Automating a set of scripts to poll all of your databases, or navigating through OEM for 200 databases?
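
A minimal sketch of that automation, assuming a plain file of TNS aliases, an Oracle wallet (external password store) for the credentials, and the hypothetical audit.sql from above:

    #!/bin/sh
    # Run the same audit script against every database in the list.
    while read db; do
        echo "=== $db ==="
        sqlplus -s "/@${db}" @audit.sql
    done < database_list.txt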

The presenter referred to using OEM as "really convenient".  Sorry, but I have never found OEM to be at all convenient.  Whenever I run into problems with it, it requires an SR to fix.

There was a round 2 as well, regarding testing execution plans both before and after setting optimizer_features_enable to a newer version.  OEM fared well here compared to the scripting method, as the scripts used 'explain plan' while OEM actually executed the queries to gather execution plan information.

That isn't to say, however, that the scripts could not be modified to do the same - as sketched below.  No, I am not completely against GUI environments.  I am just against making more work out of DBA tasks.
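
A minimal sketch of that modification, executing the statement under each setting and pulling the actual plan from the cursor cache (my illustration; the query and version strings are hypothetical):

    #!/bin/sh
    # Compare actual execution plans before and after changing
    # optimizer_features_enable, using dbms_xplan.display_cursor.
    sqlplus -s "$CONN" <<'EOF'
    alter session set optimizer_features_enable = '10.2.0.4';
    select /*+ gather_plan_statistics */ count(*) from sales where amount > 100;
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));

    alter session set optimizer_features_enable = '11.1.0.7';
    select /*+ gather_plan_statistics */ count(*) from sales where amount > 100;
    select * from table(dbms_xplan.display_cursor(null, null, 'ALLSTATS LAST'));
    EOF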

Enough for now; I will report on Tuesday's sessions later this week.

Extra! Extra! Oracle Closed World today.... on Cloud

We had planned not to have any OCW presentations today in order not to steal Larry's audience from his planned keynote, but we're doing it anyway.

It's at 1200 hours, NOT 1300 hours as usual.

More details via text messages later, including today's codeword. If you want text messages from me for the OCW sessions, send me a text/SMS on +45 25277100.

Cloud computing is 'hot'. So is Larry when he talks about it on YouTube. Funny as Hell, actually.

There are at least these two videos. They are partly overlapping, but that doesn't matter; you'll want to see him do this standup routine a couple of times, trust me:

http://www.youtube.com/watch?v=0FacYAI6DY0
http://www.youtube.com/watch?v=8UYa6gQC14o

Which is why today, at the secret location, Oracle Closed World will present a couple of guys who know everything about 'the cloud'.

Mogens

OOW Day2

Why do you come to Open World? I'm sure we will get all kinds of reasons, as many as there are stars in the sky. Some predominant themes are getting to know more about Oracle (or related) technologies by attending sessions, reconnecting with old friends, and building a network. Of course, getting freebies from the Exhibit Halls, I'm sure, can't be far behind as a motivator.

I come to OOW for all those reasons as well. But high up on my list is the visit to the Exhibit Halls. No, not for the tee-shirts that do not fit me and graphics I don't really dig. I visit the demogrounds and exhibit halls to learn about the products and tools that I should be aware of. Where else would you find 1000+ companies advertising their products in one place? Sure, I can call them and find out; but how do I find them? OOW exhibit halls are prime "hunting" grounds to look for new ideas and tools that I should be interested in, or at least be aware of. I can not only look at the tools; I can actually get some relevant technical facts in 5 minutes that might otherwise take weeks of scheduling and hours of marketing talk. And if I decide the product is not relevant, I can always walk away. I have the privilege of walking away; they don't. If I call them to my office, "they" have that option; not me :) If I find something attractive, I can always follow up and get to know more.

The Oracle demogrounds are even better. Not only can I meet Oracle PMs there, but also the people who never come out into the public world: developers, development managers, architects and so on. These unsung heroes are largely the reason why Oracle is what it is now. I meet the known faces, get to know new ones and establish new relationships. They hear from me what customers want, and I learn the innards of some features I am curious about.

So I spent almost the whole day yesterday navigating the demo grounds and exhibit halls, and could cover only a small fraction. In between I had to attend some meetings at work. Going to OOW is never "going away". I wish it were.

And Now For Something Completely Different…

So just a short post today, more of an announcement…

At 11.05am on Wednesday 2nd December, I will co-present a talk at the UKOUG conference with Tanel Poder. OK, so nothing particularly earth-shattering there. However, we’ve decided to do something a little bit different with the creative process. The talk is entitled “The Oracle Wait Interface Is Useless (sometimes)”, and is a subject that both Tanel and I have been working on separately for a little while. The premise is this: The wait interface is great for ‘slow’ waits, but what about waits that are not instrumented by the wait interface? What about waits that are not waits from Oracle’s perspective, such as reading a page of memory? What about pure inefficiency? There is, of course, the concept of DB Time, but it is not currently granular enough.

So, you probably can see the idea behind the presentation: The goal is to present some alternative diagnostic techniques to determine the cause of poor performance. We happen to both have similar ideas on this, and they don’t just stop at Oracle.

So here’s the new concept, at least for us: We are going to write the content for the presentation as a tag team effort between our two blogs. Airing our dirty laundry in public, so to speak. We think this will give a fairly unique opportunity for public comment before the presentation is actually given!

So, over to Tanel for part one…

OOW09 - RAC Performance Tuning

For all those who came to my session - many, many thanks. There is no better sight for a presenter than a roomful of attendees, especially with people standing near the walls. The fire marshal was probably not amused, but I was grateful. The harrowing incident of a blue screen of death on my PC - not just once but twice - just before the presentation was about to start was enough to throw me into panic mode, but the third time was a charm. It worked. Phew!

You can download the presentation here. And while you are there, look around and download some more of my sessions as well.

Thanks a lot once again. I'm off to the keynote now.

Oracle Closed World - an underground conference...

I'm here in San Francisco for the Oracle Open World conference along with four other guys from Miracle, the two crazy Miracle Finland guys and some other crazy people. We've rented a couple of big apartments as usual, and are doing work, beer and other essential stuff together.

Last year at Oracle Open World (OOW) my friend Iggy Fernandez, who edits the NOCOUG (Northern California Oracle User Group) magazine/journal, suggested an Oracle Closed World conference, where REAL, TECHNICAL presentations would take place underground in secret locations, using secret passwords, and what have you.

Well, it's here. Monday, Tuesday and Thursday at a secret location we'll do deep and very technical presentations on various topics. The secret location (which is indeed underground) has the capability to serve beer, by the way.

Let me know if you're interested in hearing more about OCW - email me on mno@MiracleAS.dk or text me on +45 2527 7100.

Mogens

Things you never wanted to know about SANs...

Here's some information you will try to forget after reading. It explains why SANs always cause trouble, why "a firmware upgrade" is really a complete change of an OS and therefore really dangerous (and impossible to plan or test for), and more.

From now on, think of the firmware in a SAN as a whole OS, just bigger. Scary, right?

My question to this very smart guy I know was this:

"Could you repeat what OS'es are used in what SAN's for me? And how many code lines the ExaData is using?"

=======================================================================

oh god...that is a huge question... First, Exadata software is small (less than about 100MB of bits)...but that is an unfair comparison to the glut of stuff in a full-featured array... Exadata has Linux underneath it, but then we execute about 98% User, 2% Kernel so really, the only thing we get from Linux is scheduling and I/O... Exadata is small because it doesn't do any of the fat stuff arrays like Clariion do (e.g., snapshots, remote mirror, etc).

Netapp's is called OnTap and it is a heavily developed BSD (Net/1 to be exact). It is huge and full featured, as you can tell by how many add-on packages it supports, but just in protocol provisioning it is huge. Consider the fact that it can support front-end FC yet the LUNs are actually files in the WAFL filesystem! Wild.

Clariion OS is called FLARE and it sits on top of a full Windows distro (XP). EMC NAS (celerra) is called DART which is written from scratch.

HP StorageWorks Clustered Gateway is Linux plus hundreds of thousands of lines of very specialized PolyServe code.

EMC DMX OS is called Enginuity...DMX cpus are Power and I have no idea what the origin of this OS is. If I were a betting man I'd bet that it is scratch like DART.

IBM DS83XX is full blown AIX plus more (this is the old SHARK array) in fact, it is a cluster of AIX boxes in there...

I don't know what HP calls the stuff that runs inside EVA ...

as you can see...it is very confusing.
=======================================================================

Oracle OpenWorld first presentation

My first presentation of Oracle OpenWorld is done - it was the one I was most nervous about. It was a keynote, the opening session for the Oracle Develop conference track. As a keynote, I try to make it fun and informative, but not a brain dump of all things technical.

The room filled up - which surprised me, as it was 9am on a Sunday morning...

And the presentation seemed to work - I ran about 3 minutes over (need to tighten it up a tiny bit) - but it went well. Everyone laughed when they were supposed to, and didn't when they weren't. Even got a bit of spontaneous applause every now and then :)

For those who missed it, or those not at the conference who want to see it, there will be a replay tomorrow (Monday, October 12th) at 11am Pacific time - see this link for details.

ID#: S312577
Title: Keynote: Oracle Develop "What Are We Still Doing Wrong"
Track: Oracle Develop: Database
Date: 11-OCT-09
Time: 09:00 - 10:00
Venue: Hilton Hotel
Room: Grand Ballroom B

Aggregating & Purging Batch Timings

Application Engine can collect timing information for the programs being executed. These 'batch timings' can be written to a log file and/or to tables in the database. I always recommend that this is enabled in all environments. The runtime overhead is very low, and this data is extremely valuable for determining the performance of a system over a period of time, and for identifying the pieces of SQL or PeopleCode that account for the most time. The timing data collected for individual processes can be viewed directly within the Process Monitor component.

The Process Scheduler purge process does not delete batch timings, so this data remains in the database indefinitely, although it can no longer be accessed via the Process Monitor. Over time, on a busy system, a large volume of data can accumulate. In some ways this is a good thing: there are good reasons to purge the Process Scheduler as aggressively as the business will permit, and the batch timings can still be analysed by direct SQL query. However, the sheer volume of data is likely to result in queries that take quite a while to execute. After a while, you are less likely to be interested in the performance of individual processes and more likely to want to aggregate the data. So, it makes sense to hold the data at least partly aggregated.

I have produced a very simple Application Engine program (GFC_TIM_ARCH) to address this problem. This program is available for download from the Go-Faster website as a PeopleTools Application Designer Project. Please note that this process has been written using Oracle RDBMS-specific SQL syntax.

  • Application Engine stores batch timing data in three tables: PS_BAT_TIMINGS_LOG, PS_BAT_TIMINGS_FN and PS_BAT_TIMINGS_DTL.
  • I have created three new tables: PS_GFC_TIMINGS_LOG, PS_GFC_TIMINGS_FN and PS_GFC_TIMINGS_DTL.
  • GFC_TIM_ARCH aggregates the data in each of these tables by the day on which the process began, by the process name, and (where applicable) by the 'detail_id' column. The aggregated data is put into the GFC_TIMINGS% tables, and the original data is removed from the BAT_TIMINGS% tables (see the sketch after this list).
  • It finds the earliest three days for which timing data exists that is older than the longest Process Scheduler retention limit. The idea is that the process should be run daily (the delivered 'Daily Purge' recurrence is suitable), but if it doesn't run for some reason it will catch up the next day.
  • Only one instance of GFC_TIM_ARCH is permitted to run concurrently.
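
To illustrate the shape of the aggregation, here is a sketch (my illustration, not the actual GFC_TIM_ARCH code; the column names are guesses, so download the real project from the Go-Faster website):

    #!/bin/sh
    # Illustrative only: roll detail timings up by day, process name and
    # detail_id, then remove the detail rows just aggregated. The real
    # program restricts this to days beyond the retention limit.
    sqlplus -s "$CONN" <<'EOF'
    insert into ps_gfc_timings_dtl
          (process_date, process_name, detail_id, time_elapsed, executions)
    select trunc(l.begin_dttm), l.process_name, d.detail_id,
           sum(d.time_elapsed), count(*)
      from ps_bat_timings_dtl d
      join ps_bat_timings_log l on l.process_instance = d.process_instance
     group by trunc(l.begin_dttm), l.process_name, d.detail_id;

    delete from ps_bat_timings_dtl d
     where exists (select 1 from ps_bat_timings_log l
                    where l.process_instance = d.process_instance);
    commit;
    EOF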

On one system where I tested this process, BAT_TIMINGS_DTL was growing by over 1 million rows per day. This became around 5,000 rows per day in GFC_TIMINGS_DTL.

If you have been running with batch timings for a while, then when you first introduce this process you will probably have a large backlog of data to be aggregated and purged. The easiest option is to run the process repeatedly until the backlog has been processed (possibly using the recurrence that causes a program to run every minute). After the backlog has been cleared, the BAT_TIMINGS% tables should be rebuilt or shrunk in order to release the space left by the deleted rows. This will help queries that scan the BAT_TIMINGS_DTL record; otherwise those scans still have to read the empty blocks below the table's High Water Mark.
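
A minimal sketch of one way to reclaim that space (10g+ online shrink, which requires an ASSM tablespace; ALTER TABLE ... MOVE plus index rebuilds is the traditional alternative):

    #!/bin/sh
    # Shrink the batch timings detail table and its indexes after the
    # backlog purge, resetting the High Water Mark.
    sqlplus -s "$CONN" <<'EOF'
    alter table ps_bat_timings_dtl enable row movement;
    alter table ps_bat_timings_dtl shrink space cascade;
    EOF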

Once the backlog has been cleared, the GFC_TIM_ARCH process can run daily and, notwithstanding variations in the load on the system, the rolling volume of data retained in the BAT_TIMINGS% tables should be fairly constant; it should not be necessary to rebuild them frequently. Space freed by the daily delete should be reused by new rows inserted as AE processes run.
