IOT Part 5 – Primary Key Drawback – and Workaround

<..IOT1 – the basics
<….IOT2 – Examples and proofs
<……IOT3 – Significantly reducing IO
<……..IOT4 – Boosting Buffer Cache efficiency

One of the drawbacks of IOTs is that they have to be organised by the primary key of the table. If your table does not have a primary key, it cannot be Index Organized.

I would argue that any table that holds persistent data (ie it is not transient data about to be loaded into the database proper or a temporary working set) should have a Primary Key. If I am working on a system and come across a table without a Primary Key I immediately challenge it. {There are occasional, valid reasons for a persistent table to lack a PK, but I confess I am struggling right now to come up with one – but I digress}. I’m a big fan of database-enforced referential integrity.

The problem is, if you are making a table into an Index Organized Table so that the records are clustered to match how you process the data, it could well be that the primary key is not related to how you want to order the data. Let me give you an example. {Oh, and for brevity, I’ll put the SQL statements to create the examples at the end of this post}.

mdw11> desc ACCOUNT
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ----------------------
 ACCO_TYPE                                             NOT NULL NUMBER(2)  ---PK
 ACCO_ID                                               NOT NULL NUMBER(10) ---PK
 NAME                                                  NOT NULL VARCHAR2(100)
 DATE_1                                                NOT NULL DATE
 NUM_1                                                          NUMBER(2)
 NUM_2                                                          NUMBER(2)

mdw11> desc TRANSACTION_HEAP
 Name                                                  Null?    Type
 ----------------------------------------------------- -------- ----------------------
 TRAN_TYPE                                             NOT NULL NUMBER(2)  ---PK
 TRAN_ID                                               NOT NULL NUMBER(10) ---PK
 ACCO_TYPE                                             NOT NULL NUMBER(2)
 ACCO_ID                                               NOT NULL NUMBER(10)
 CRE_DATE                                              NOT NULL DATE
 VC_1                                                  NOT NULL VARCHAR2(100)
 DATE_1                                                         DATE
 NUM_1                                                          NUMBER(2)
 NUM_2                                                          NUMBER(2)

This is a classic parent-child relationship: each account has a set of transactions. I’ve expanded on my prior example by:

  • changing the parent to be called ACCOUNT and giving it a two-part Primary Key, ACCO_TYPE and ACCO_ID.
  • changing the child to be called TRANSACTION and giving it a Primary Key of TRAN_TYPE and TRAN_ID.
  • noting that in a real system I would create a foreign key from TRANSACTION.ACCO_TYPE,ACCO_ID to the ACCOUNT table primary key (a sketch of that constraint follows below).
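
In this example that foreign key would look something like the statement below. The constraint name is made up and the constraint is not actually created in the scripts at the end of this post:

alter table transaction_heap add constraint trhe_acco_fk
  foreign key (acco_type, acco_id)
  references account (acco_type, acco_id);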

Note that the Primary Key on the TRANSACTION table is NOT based on the account columns. Maybe in theory the primary key on the transaction table would be the account columns plus the cre_date – if the cre_date held a datetime AND two records could not be created in the same second. If we used a timestamp then you might be able to argue no record would be created in the same fraction of a second – except that transactions often get given a fixed time. Midnight springs to mind (consider when you would add the accrued interest on a savings account). So, a new surrogate Primary Key is introduced, a transaction type and ID. TRAN_TYPE and TRAN_ID are the primary key of the TRANSACTION table.

I’d say that I see such two-part primary keys more often than single-column primary keys these days. Possibly because so many databases receive information from other systems, or even from other applications on the same database.

As before, I create 10,000 parent records (ACCOUNT) and 10,000 random child records (TRANSACTION_HEAP) each day for 100 days. 

Also as before, I want to select information grouped by account. I want all the transactions for an account, not all transactions on a day or for a range of transaction IDs. Hopefully this is a scenario most of you will recognise. 

Selecting a sum of one of the non-indexed columns and a count of records for a given account takes quite a bit of effort on the part of the HEAP table:

select sum(num_1), count(*) from transaction_heap th where acco_type=10 and acco_id=123

SUM(NUM_1)   COUNT(*)
---------- ----------
      1201        116
Elapsed: 00:00:02.68

Execution Plan
---------------------------------------------------------------------------------------
| Id  | Operation          | Name             | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                  |     1 |    10 |  3466   (1)| 00:00:52 |
|   1 |  SORT AGGREGATE    |                  |     1 |    10 |            |          |
|*  2 |   TABLE ACCESS FULL| TRANSACTION_HEAP |   100 |  1000 |  3466   (1)| 00:00:52 |
---------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
      13929  consistent gets
      13921  physical reads

Of course, it has to do a full table scan as my Primary Key is on two columns that have nothing to do with the query. I can repeat this statement as often as I like, it takes the same number of physical reads and consistent gets as it is not caching the information.

I add an index on the ACCO_TYPE, ACCO_ID and CRE_DATE columns. The index definition, as it appears in the build script at the end of this post, is:
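
create index trhe_ACCO_crda_idx
on transaction_heap(ACCO_type,ACCO_id,cre_date)
tablespace index_01
/

With the index in place I re-run the query: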

select sum(num_1),count(*) from transaction_heap th where acco_type=10 and acco_id=123

SUM(NUM_1)   COUNT(*)
---------- ----------
      1201        116
Elapsed: 00:00:00.01

Execution Plan
---------------------------------------------------------------------------------------------------
| Id  | Operation                    | Name               | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |                    |     1 |    10 |   103   (0)| 00:00:02 |
|   1 |  SORT AGGREGATE              |                    |     1 |    10 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| TRANSACTION_HEAP   |   100 |  1000 |   103   (0)| 00:00:02 |
|*  3 |    INDEX RANGE SCAN          | TRHE_ACCO_CRDA_IDX |   100 |       |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
        120  consistent gets
          0  physical reads

I ran it twice to get rid of the parse overhead, but the first time it did a load of physical reads to support those 120 consistent gets.

I could recreate the TRANSACTION_HEAP table as an IOT of course – but it will be organized by the TRAN_TYPE and TRAN_ID columns. That is useless to me. Even if I add a secondary index on the ACCO_TYPE, ACCO_ID and CRE_DATE columns it will at best be no better than the above HEAP table and, because the secondary index will hold rowid guesses and will sometimes have to use the primary key information to walk down the index, it will be worse. {I am not sure I have explained that bit yet about row guesses. Post 6?}

So, if you want the information organized in an order that is not helped by the Primary Key of the table, an IOT is useless to you. You cannot achieve that physical record grouping by the IOT method.

I am going to do something else though. I’m going to sort of change the rules to work around the issue.

As far as the physical implementation is concerned, a Primary Key is in effect just a unique index and two rules. The rules are that all the columns in the Primary Key must be mandatory and there can only be one PK on a table. I can have as many unique indexes as I like, so long as the key combinations lead to no duplicate rows. I can alter my Primary Key – it is not set in stone.
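
As a minimal sketch of that last point, against the HEAP table (the new constraint name is made up, and note that an IOT’s primary key cannot be dropped, so for an IOT the key has to be chosen at creation time, as in the examples that follow):

alter table transaction_heap drop constraint trhe_pk;

alter table transaction_heap add constraint trhe_pk2
  primary key (acco_type, acco_id, cre_date, tran_type, tran_id)
  using index tablespace index_01;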

Before I go any further I am going to stress that I am about to abuse the concept of the Primary Key. I’d need to do a separate blog to fully justify saying what a Primary Key is, but part of the concept is that no column must be derivable from other columns in the PK and it must be the minimum number of columns required to make the key unique.

We want to group the data by the account columns and the creation date. So let’s define a Primary Key that is ACCO_TYPE, ACCO_ID, CRE_DATE and whatever else we need to guarantee the key is unique. In our case that would be TRAN_TYPE and TRAN_ID – the current Primary Key! If I knew I would always want all records for the account, I could drop the CRE_DATE out of my fake Primary Key, but I know that the creation date is very often important. You may want activity for the last month, last quarter, a stated date or even an exact datetime. For all those cases, including the CRE_DATE column is highly beneficial.

So, I create TRANSACTION_IOT below and populate it with data.
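
For reference, the creation statement (repeated from the build script at the end of this post) is:

create table transaction_IOT
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint trio_pk primary key(ACCO_type,ACCO_id,cre_date,tran_type,tran_id)
--  using index tablespace index_01
,constraint trio_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01
/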

desc transaction_iot
 Name                                                        Null?    Type
 ----------------------------------------------------------- -------- --------------
 TRAN_TYPE                                                   NOT NULL NUMBER(2)
 TRAN_ID                                                     NOT NULL NUMBER(10)
 ACCO_TYPE                                                   NOT NULL NUMBER(2)
 ACCO_ID                                                     NOT NULL NUMBER(10)
 CRE_DATE                                                    NOT NULL DATE
 VC_1                                                        NOT NULL VARCHAR2(100)
 DATE_1                                                               DATE
 NUM_1                                                                NUMBER(2)
 NUM_2                                                                NUMBER(2)

--
--

OWNER    TABLE_NAME          NUM_ROWS      BLOCKS AVG_L GLS ULS LST_ANL      PRT  SAMP_SIZE
-------- -------------- ------------- ----------- ----- --- --- ------------ --- ----------
MDW      TRANSACTION_IOT     1000,000                94 YES NO  160811 23:05 NO     1000000
INDEX_NAME      TYP PRT UNQ BL     L_BLKS   DIST_KEYS       CLUSTF     LB_KEY     DB_KEY LST_ANL
--------------- --- --- --- -- ---------- ----------- ------------ ---------- ---------- ------------
TRIO_PK         IOT NO  UNI  2     21,433    1058,381            0          1          1 160811 23:05
TRIO_TRAN_UQ    NOR NO  UNI  2      4,386    1000,000      999,405          1          1 160811 23:05

INDEX_NAME                   TABLE_NAME       PSN COL_NAME
---------------------------- ---------------- --- ------------------------------------------------
TRIO_PK                      TRANSACTION_IOT  1   ACCO_TYPE
TRIO_PK                      TRANSACTION_IOT  2   ACCO_ID
TRIO_PK                      TRANSACTION_IOT  3   CRE_DATE
TRIO_PK                      TRANSACTION_IOT  4   TRAN_TYPE
TRIO_PK                      TRANSACTION_IOT  5   TRAN_ID
TRIO_TRAN_UQ                 TRANSACTION_IOT  1   TRAN_TYPE
TRIO_TRAN_UQ                 TRANSACTION_IOT  2   TRAN_ID

Now let’s select our data from that IOT.

select sum(num_1),count(*) from transaction_IOT th where acco_type=10 and acco_id=123

SUM(NUM_1)   COUNT(*)
---------- ----------
      1030         97
Elapsed: 00:00:00.00

Execution Plan
-----------------------------------------------------------------------------
| Id  | Operation         | Name    | Rows  | Bytes | Cost (%CPU)| Time     |
-----------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |         |     1 |    10 |     5   (0)| 00:00:01 |
|   1 |  SORT AGGREGATE   |         |     1 |    10 |            |          |
|*  2 |   INDEX RANGE SCAN| TRIO_PK |   100 |  1000 |     5   (0)| 00:00:01 |
-----------------------------------------------------------------------------

Statistics
----------------------------------------------------------
          0  recursive calls
          0  db block gets
          5  consistent gets
          0  physical reads

5 consistent gets. It has walked down the IOT and scanned 3 blocks to collect that data. Our IOT based on an abused Primary Key does the job of supporting range scans efficiently, with the benefits to the Block Buffer Cache I referred to in IOT4.

That “Primary Key” I created is NOT a real Primary Key. It is not the minimum number of columns needed to uniquely identify a row. My Primary Key is on ACCO_TYPE, ACCO_ID, CRE_DATE, TRAN_TYPE and TRAN_ID – the account, the datetime of the transaction and the transaction. What if I were to alter the datetime by a second? I could create a record with the same account and the same transaction id as an existing record, but a second into the future. That is just wrong. After all, the whole point of the TRAN_TYPE and TRAN_ID is to uniquely identify a record. If I created the new record I described above, there would be two records for the one TRAN_TYPE/TRAN_ID.

I protect against this ability to create incorrect records by also creating a UNIQUE KEY on the table, against the columns TRAN_TYPE and TRAN_ID. This is the unique index TRIO_TRAN_UQ displayed in the information above. A Primary Key is usually the referenced parent of any referential integrity, ie foreign keys, between this table and any children. However, a Unique Key can also be the target of Referential Integrity. I cannot create a record with the same TRAN_TYPE/TRAN_ID as one that already exists, due to this unique constraint (the example below is against the partitioned version of the table, TRANSACTION_IOT_P, which is introduced further down and has the equivalent constraint TIP_TRAN_UQ):

insert into transaction_iot_p
values
(2,163 -- existing transaction type and id
,10,11111
,sysdate,'ASCAFWEWEHGWSHERJH',SYSDATE,7,7)
/

insert into transaction_iot_p
*
ERROR at line 1:
ORA-00001: unique constraint (MDW.TIP_TRAN_UQ) violated

Elapsed: 00:00:00.34

So, I have my IOT to support querying code and I have my Unique Constraint to police my original Primary Key and be used as the target for any Foreign Key requirements I might need. This is not a perfect solution – the design will look a little strange to anyone who looks at this database and the Unique Key is supported by a secondary index on an IOT which can have some issues. But it does work.
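
As an aside, a child table referencing that unique key rather than the abused primary key would look something like this (the table, columns and constraint names below are made up purely for illustration):

create table transaction_line
(tran_type number(2)  not null
,tran_id   number(10) not null
,line_no   number(3)  not null
,amount    number(12,2)
,constraint trli_pk primary key (tran_type, tran_id, line_no)
 using index tablespace index_01
,constraint trli_tran_fk foreign key (tran_type, tran_id)
 references transaction_iot (tran_type, tran_id)
)
tablespace data_01
/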

My “primary key” is no longer a true Primary Key. It is just a tool for allowing me to organise the data physically in a way that will support my application. That is what I meant about changing the rules.

I am willing to abuse a Primary Key in this way because of the performance benefits. It is a solution for a system where most of the query access is against a set of records which would be scatter-gunned across the table if you did not use some sort of physical grouping. If you are reading this and thinking “oh, I am not sure about you doing that to a Primary Key, Martin” then you are probably OK to consider this solution. If you can’t see a problem with it then you are either very used to turning off referential integrity and understand the consequences – or you simply do not understand what RI does for your database. If you are in the latter camp, do not even consider doing this. If you are one of those people who works on data warehouses and for whom it is just part of the DW process to turn off RI because that is what you do for data warehouses – DON’T do this!

OK, I’m nearly at the end of this topic but I want to touch on partitioning. You can range partition an Index Organized Table from 9i, I think. It is certainly supported in Oracle 10 upwards. Partitioning is important in this technique because a unique index must contain the partition key if the index is to be locally partitioned – otherwise the index must be global, ie the one index object references all the partitions across the table.
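
To illustrate that rule, the sketch below would be rejected, typically with ORA-14039 (“partitioning columns must form a subset of key columns of a UNIQUE index”). It assumes the two-column unique constraint has not already been created on the table, since Oracle would otherwise also object to a second index on the same column list:

create unique index tip_tran_uq2 on transaction_iot_p (tran_type, tran_id) local
/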

Below is my table creation statement for the IOT organized by the account, creation date and transaction. The table is range partitioned by CRE_DATE, into months.

create table transaction_IOT_P
(tran_type number(2)     not null
,tran_id   number(10)    not null
,acco_type number(2)     not null
,acco_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint tip_pk primary key(ACCO_TYPE,ACCO_ID,CRE_DATE,TRAN_TYPE,TRAN_ID)
--  using index tablespace index_01
,constraint tip_tran_uq unique (TRAN_TYPE,TRAN_ID)
  using index tablespace index_01
)
organization index
tablespace data_01
partition by range  (cre_date)
(partition rm20110601 values less than (to_date('01-06-2011','DD-MM-YYYY'))
  tablespace data_01
,partition rm20110701 values less than (to_date('01-07-2011','DD-MM-YYYY'))
  tablespace data_01
,partition rm20110801 values less than (to_date('01-08-2011','DD-MM-YYYY'))
  tablespace data_01
,PARTITION RMTOP  VALUES LESS THAN (MAXVALUE)
  tablespace USERS
)
/

You can see the definition of my fake Primary Key and the fact that it does not have a tablespace defined for it – as the ‘organization index’ statement lower down causes the table to be an IOT and the segment will go into the “table” tablespace.
I then state my Unique Index to police the integrity of my table – TIP_TRAN_UQ
I then state the partition clause, ‘partition by range (cre_date)’ followed by my initial partition definitions. It’s as simple as that to partition an IOT.

What gets created? A set of four segments for the IOT, which are primary key index segments of course, not table segments:

@seg_dets
Enter value for seg_name: tip_pk
Enter value for owner: mdw

OWNER    SEG_NAME        SEG TS_NAME     BYTES_K    BLOCKS exts   INI_K   NXT_K
-------- --------------- --- -------- ---------- --------- ---- ------- -------
MDW      TIP_PK RM20110601 IP  DATA_01      45,056     5,632   59      64    1024
MDW      TIP_PK RM20110701 IP  DATA_01      60,416     7,552   74      64    1024
MDW      TIP_PK RM20110801 IP  DATA_01      61,440     7,680   75      64    1024
MDW      TIP_PK RMTOP      IP  USERS        34,816     4,352   49      64    1024

Note that the SEG (type) is “IP” – my script decodes the type into a short mnemonic and IP is Index Partition. You can see the tablespaces those segments are in and the size of the segments. What about that unique index I created?

@seg_dets
Enter value for seg_name: tip_tran_uq
Enter value for owner: mdw

OWNER    SEG_NAME        SEG TS_NAME     BYTES_K    BLOCKS exts   INI_K   NXT_K
-------- --------------- --- -------- ---------- --------- ---- ------- -------
MDW      TIP_TRAN_UQ     IND INDEX_01     35,840     4,480   50      64    1024

It is a single segment, a normal index. I cannot have it as a locally partitioned index as it is a unique index and lacks the partitioning key in its definition.

This could be a problem. The usual reason you partition a table is because it is too large to comfortably be held as a single segment {and also for the benefit of partition exclusion, but you don’t usually need that on small tables!}. This means that the global index supporting that unique key (the one policing the original primary key) is going to be large. Now, I made a “mistake” when I created my partitioned IOT – I did not create a partition for this month, so some data has gone into the MAXVALUE partition (see the size of the segment above, 34,816KB in 49 extents). If I split that last partition to create a new partition for this month and a new MAXVALUE partition, I will invalidate the global index and I will have to rebuild it. Very large indexes can take a long time and a heck of a lot of temporary space to gather and sort the data. That could be an ongoing maintenance nightmare.
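
A sketch of the sort of maintenance that implies is below (the split date and new partition name are purely illustrative):

alter table transaction_iot_p
  split partition rmtop at (to_date('01-09-2011','DD-MM-YYYY'))
  into (partition rm20110901, partition rmtop)
/
-- if the split has invalidated the global unique index, as described above,
-- it has to be rebuilt
alter index tip_tran_uq rebuild
/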

In a recent implementation I did using IOTs I did not create a global unique index to police the original primary key. I created a non-unique, locally partitioned index to support some queries using those columns, and the table had no children so no Foreign Keys were needed. But there was something else I needed to do, as I had removed the referential integrity rules for that table. Remember I said I am a fan of database-enforced referential integrity? Now, I “know” the application will not create data that will break the removed Primary Key rule, and I “know” I documented what I had done. And I know that in 12 months’ time there will almost certainly be data with duplicate values for that Primary Key if it is not enforced somehow, because it always happens. So I need to implement a little script to regularly check for duplicate TRAN_TYPE/TRAN_ID combinations being created (a sketch of such a check follows below). If you remove RI from a relational database, you should replace it in some way. Otherwise, you will pretty soon have a non-relational database.
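
Such a check is along the lines of the following (written here against TRANSACTION_IOT_P purely for illustration):

select tran_type, tran_id, count(*)
from   transaction_iot_p
group  by tran_type, tran_id
having count(*) > 1
/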

That’s it for this topic. Below is my example script for creating most of the above, in case anyone wants it or wants to verify what I have said.

-- test_iot2.sql
-- create test tables to show how you can work around the PK issue and
-- partition an IOT - and the possible impact on my PK workaround.
spool test_iot2.lst
--
set feed on timi on pause off
--
drop table account purge;
drop table transaction_heap purge;
drop table transaction_iot purge;
drop table transaction_iot_p purge;
--
-- create 10,000 parent records
create table mdw.account
(ACCO_type  number(2)     not null
,ACCO_id       number(10)    not null
,name     varchar2(100) not null
,date_1   date          not null
,num_1    number(2)
,num_2    number(2)
,constraint ACCO_pk primary key(ACCO_type,ACCO_id)
 using index tablespace index_01
)
tablespace data_01
/
insert into account
select 10
,rownum
,dbms_random.string('U',mod(rownum,10)+50)
,sysdate-(mod(rownum,500)+1000)
,mod(rownum,99)+1
,trunc(dbms_random.value(0,100))
from dual connect by level <= 5000
/
insert into account
select 15
,rownum
,dbms_random.string('U',mod(rownum,10)+50)
,sysdate-(mod(rownum,500)+1000)
,mod(rownum,99)+1
,trunc(dbms_random.value(0,100))
from dual connect by level <= 5000
/
--
-- create the table to hold the children as a heap table
create table transaction_heap
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint trhe_pk primary key(tran_type,tran_id)
 using index tablespace index_01
)
tablespace data_01
/
--
create index trhe_ACCO_crda_idx
on transaction_heap(ACCO_type,ACCO_id,cre_date)
tablespace index_01
/
-- populate the Heap table
-- 100 days, 10000 people
declare
v_num number :=10000; -- number of people
v_str varchar2(60);
begin
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
for i in 1..100 loop --days to do
  v_str:=dbms_random.string('U',60);
  insert into transaction_heap
    (tran_type,tran_id,ACCO_type,ACCO_id,cre_date,vc_1,date_1,num_1,num_2)
  select mod(rownum,3)+1
   ,((i-1)*v_num)+rownum
   , 5+(trunc(dbms_random.value(1,3))*5)
   ,trunc(dbms_random.value(1,v_num/2))
   ,sysdate-(100-i) + (rownum/(60*60*24) )
   ,substr(v_str,1,51+mod(rownum,10))
   ,sysdate-(100-i) + ((mod(rownum,30)+1)/3)
   ,mod(rownum,20)+1
   ,mod(rownum,99)+1
  from dual connect by level <=v_num;
end loop;
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
end;
/
--
--
--
create table transaction_IOT
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint trio_pk primary key(ACCO_type,ACCO_id,cre_date,tran_type,tran_id)
--  using index tablespace index_01
,constraint trio_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01
/
--
-- populate the IOT table
-- 100 days, 10000 people
declare
v_num number :=10000; -- number of people
v_str varchar2(60);
begin
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
for i in 1..100 loop --days to do
  v_str:=dbms_random.string('U',60);
  insert into transaction_IOT
    (tran_type,tran_id,ACCO_type,ACCO_id,cre_date,vc_1,date_1,num_1,num_2)
  select mod(rownum,3)+1
   ,((i-1)*v_num)+rownum
   , 5+(trunc(dbms_random.value(1,3))*5)
   ,trunc(dbms_random.value(1,v_num/2))
   ,sysdate-(100-i) + (rownum/(60*60*24) )
   ,substr(v_str,1,51+mod(rownum,10))
   ,sysdate-(100-i) + ((mod(rownum,30)+1)/3)
   ,mod(rownum,20)+1
   ,mod(rownum,99)+1
  from dual connect by level <=v_num;
end loop;
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
end;
/
create table transaction_IOT_P
(tran_type number(2)     not null
,tran_id   number(10)    not null
,ACCO_type number(2)     not null
,ACCO_id   number(10)    not null
,cre_date  date          not null
,vc_1      varchar2(100) not null
,date_1    date
,num_1     number(2)
,num_2     number(2)
,constraint tip_pk primary key(ACCO_type,ACCO_id,cre_date,tran_type,tran_id)
--  using index tablespace index_01
,constraint tip_tran_uq unique (tran_type,tran_id)
  using index tablespace index_01
)
organization index
tablespace data_01
partition by range  (cre_date)
(partition rm20110601 values less than (to_date('01-06-2011','DD-MM-YYYY'))
  tablespace data_01
,partition rm20110701 values less than (to_date('01-07-2011','DD-MM-YYYY'))
  tablespace data_01
,partition rm20110801 values less than (to_date('01-08-2011','DD-MM-YYYY'))
  tablespace data_01
,PARTITION RMTOP  VALUES LESS THAN (MAXVALUE)
  tablespace USERS
)
/
-- populate the IOT_P table
-- 100 days, 10000 people
declare
v_num number :=10000; -- number of people
v_str varchar2(60);
begin
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
for i in 1..100 loop --days to do
  v_str:=dbms_random.string('U',60);
  insert into transaction_IOT_P
    (tran_type,tran_id,ACCO_type,ACCO_id,cre_date,vc_1,date_1,num_1,num_2)
  select mod(rownum,3)+1
   ,((i-1)*v_num)+rownum
   , 5+(trunc(dbms_random.value(1,3))*5)
   ,trunc(dbms_random.value(1,v_num/2))
   ,sysdate-(100-i) + (rownum/(60*60*24) )
   ,substr(v_str,1,51+mod(rownum,10))
   ,sysdate-(100-i) + ((mod(rownum,30)+1)/3)
   ,mod(rownum,20)+1
   ,mod(rownum,99)+1
  from dual connect by level <=v_num;
end loop;
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
end;
/
commit;
--
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'ACCOUNT')
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'TRANSACTION_HEAP')
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'TRANSACTION_IOT')
exec dbms_stats.gather_table_stats(ownname=>USER,tabname=>'TRANSACTION_IOT_P')
--
select * from transaction_iot_p
where rownum < 10
/
insert into transaction_iot_p
values
(2,163 -- existing transaction type and id
,1,11111
,sysdate,'ASCAFWEWEHGWSHERJH',SYSDATE,7,7)
/
insert into transaction_iot_p
values
(3,163 -- new transaction type and id
,1,11111 -- but the whole of the rest of the record is the same.
,sysdate,'ASCAFWEWEHGWSHERJH',SYSDATE,7,7)
/
--
BEGIN
dbms_output.put_line (to_char(SYSTIMESTAMP,'HH24:MI:SS.FF'));
END;
/
--
spool off

InSync11 – Day 1

Getting to Sydney:

No major drama there. I didn’t sleep much on the plane. A couple of 15 minute stints during the 22 hours on the plane. Slept well on the first night, so day 1 of the conference was relatively jetlag free.

InSync11 Sessions I attended on Day 1 included:

Me: Clonedb: The quick and easy cloning solution you never knew you had.

This was my first presentation at the conference. The majority of the talk went pretty well. The demo didn’t go quite to plan, which was my own fault for making some last minute changes. Even so, the clone worked and if you managed to ignore the bumbling idiot at the front, I think it proved the point. I’m presenting this again in Canberra in a couple of days, so I guess the wrinkles will be ironed out by then. :)

Tom Kyte: Efficient PL/SQL – why and how to use PL/SQL to its greatest effect.

I’ve done similar presentations to this myself, but Tom is a better presenter than me, so it’s cool to see him doing his stuff.

Richard Foote: 10 things you possibly don’t know about indexes.

Richard is a fun guy, who doesn’t look at all like David Bowie! :) The presentation mostly focused on refuting many of the myths surrounding indexes, with some really neat examples. It would have been cool if he had been given a double slot for this as it was a struggle to fit it into 45 minutes.

Angus MacDonald: An insight into what is coming next.

Angus works for Oracle, and the main focus of this talk was Oracle-Sun related technology, including general hardware, as well as the Sparc and Solaris roadmaps. It was well presented and the subject matter was interesting, but a few comparisons grated on me a little. I felt like some of the information was a little like, “This is what Solaris on Sparc will be capable of in 2015, and Linux on x86-64 can’t do that now”. OK, but what will Linux on x86-64 be capable of by 2015? I think I was being a little over-sensitive, so perhaps it wasn’t a big deal. :)

Sydney Oracle Meetup:

In the evening a number of us (Connor McDonald, Craig Shallahamer, Chris Muir, Guy Harrison, Marcelle Kratochvil, Tom Kyte, Richard Foote and myself) were invited over to the Sydney Oracle Meetup to sit on a panel session. It was very informal and good fun. Probably the most enjoyable panel session I’ve been to. After the panel we all went out for some food, so the panel session extended into the night a little. Thanks to the Pythian guys, Noons, Gary Myers and others for sorting this out… :)

I didn’t sleep well last night so Day 2 will be a struggle. I’m presenting in the last slot, so it’s going to be a long day. :)

Cheers

Tim…




New create_1_hint_sql_profile.sql

I modified my create_1_hint_sql_profile.sql script (which I blogged about here: Single Hint Profiles) to allow any arbitrary text string, including quote characters. This is a script that I use fairly often to apply a hint to a single SQL statement that is executing in a production system where we can’t touch the code for some reason. For example, it’s sometimes useful to add a MONITOR hint or a GATHER_PLAN_STATISTICS hint to a statement that’s behaving badly so we can get more information about what the optimizer is thinking. I recently updated the script to allow special characters in the hint syntax. This feature is useful when you want to add something like an OPT_PARAM hint that takes quoted arguments.
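
For example, an OPT_PARAM hint embeds quoted arguments, which is exactly the sort of text the script previously struggled with. The parameter and value below are just an illustration of the syntax, not a recommendation:

select /*+ OPT_PARAM('optimizer_index_cost_adj' 20) MONITOR */ count(*) from dual;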

dbms_xplan (4)

This little note on how dbms_xplan behaves was prompted by a very simple question on OTN which raised a point that I often manage to forget (temporarily). I’ve chosen to explain it through a little demonstration. 

Session 1 – cut-n-paste (with minor cosmetic changes):

SQL> select max(n2) from t1 where n1 = 15;

   MAX(N2)
----------
        15

1 row selected.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------
SQL_ID  b8ud16xgnsgt7, child number 0
-------------------------------------
select max(n2) from t1 where n1 = 15

Plan hash value: 269862921

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |     2 (100)|          |
|   1 |  SORT AGGREGATE              |       |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |    15 |   120 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=15)

20 rows selected.

Session 2 – cut-n-paste:

SQL> alter session set workarea_size_policy = manual;

Session altered.

SQL> select max(n2) from t1 where n1 = 15;

   MAX(N2)
----------
        15

1 row selected.

SQL> select * from table(dbms_xplan.display_cursor);

PLAN_TABLE_OUTPUT
--------------------
SQL_ID  b8ud16xgnsgt7, child number 1
-------------------------------------
select max(n2) from t1 where n1 = 15

Plan hash value: 269862921

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |     2 (100)|          |
|   1 |  SORT AGGREGATE              |       |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |    15 |   120 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=15)

20 rows selected.

SQL>

Because I’ve changed the optimizer environment for the second session, Oracle has created a second child cursor for the query – even though the execution plan turned out to be exactly the same. (The fact that you can get two child cursors with the same plan sometimes surprises people, but it’s not a rare occurrence.) You’ll notice that the two sessions report different values for child number.

So let’s use a third session to find the plans for the sql_id that the previous outputs show: b8ud16xgnsgt7. Here’s the complete cut-n-paste (again with minor cosmetic changes):

SQL> select * from table(dbms_xplan.display_cursor('b8ud16xgnsgt7'));

PLAN_TABLE_OUTPUT
--------------------
SQL_ID  b8ud16xgnsgt7, child number 0
-------------------------------------
select max(n2) from t1 where n1 = 15

Plan hash value: 269862921

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |     2 (100)|          |
|   1 |  SORT AGGREGATE              |       |     1 |     8 |            |          |
|   2 |   TABLE ACCESS BY INDEX ROWID| T1    |    15 |   120 |     2   (0)| 00:00:01 |
|*  3 |    INDEX RANGE SCAN          | T1_I1 |    15 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   3 - access("N1"=15)

20 rows selected

Question: We’ve got the plan for child number 0, what happened to child number 1 ?
Answer: We didn’t ask for it.

The default value for the second parameter to display_cursor() is zero. If you want to see all the available plans for a given sql_id, you need to supply an explicit null to the call, viz:

SQL> select * from table(dbms_xplan.display_cursor('b8ud16xgnsgt7',null));

(I won’t bother to cut-n-paste the output – it just lists the two plans one after the other in child order, reporting a total of 40 rows selected.)
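
If you want just one specific child, you can also pass its child number explicitly, for example:

SQL> select * from table(dbms_xplan.display_cursor('b8ud16xgnsgt7',1));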

If you want to read other notes that make significant points about dbms_xplan, there’s an entry for it in the Categories drop-down list to the top right of the screen.

Facebook Adopts Oracle Exalogic Elastic Cloud, Replaces Tens Of Thousands Of Servers. Memorable Conference Keynote Addresses.

BLOG UPDATE 2011.08.17: In the original I misstated the Facebook page view rate to be 260,000,000 per day when it is in fact 260,000,000,000 per month. I used the correct per-second rate anyway so this update does not change the original in that regard.

BLOG UPDATE 2011.08.16: The blog title is a come-on.

Conference Keynotes: One Year Ago versus Eleven Years Ago
I spent some time rummaging through my scrapbook over the weekend. I stumbled on a program for the IBM Partnerworld 2000 conference. It might be difficult to see, but I scanned it in to show that on the left hand side there is an announcement of the keynote session with (former) U.S. President George H.W. Bush and on the bottom of the right hand page is an announcement of the session I was offering the same day. That was a really cool conference! The audience (certainly me included) thoroughly enjoyed the keynote. That was eleven years ago, but I recall that my session was a bunch of blah-blah-blah compared to the keynote.

There was another keynote, only one year ago, that is burned much more clearly into my memory. The conference was Oracle Openworld 2010 and, of course, the keynote speaker was Larry Ellison. Unlike the IBM conference of long ago I had no speaking session. I certainly would have had a session, due to the overwhelming number of votes racked up by my proposed “Oracle Mix Suggest-a-Session” (105 votes), but the Oracle Mix/Conference folks decided, but didn’t share the fact until afterwards, that Oracle employees weren’t eligible to participate in the Suggest-A-Session program. Boo hoo. Attendees missed a lot because the session topic was Do-It-yourself Exadata-level Performance.

Do It Yourself  Exadata-Level Performance
A DIY Exadata-level performance configuration is not what I’m blogging about. However, it seems I just stepped in it.  Quote me on this:

 If you configure a lot of I/O bandwidth and a lot of CPU you can match Exadata performance with Real Application Clusters and conventional hardware.

…but I’m not blogging about that.

Big Sounding Claims
So, back to the comparison between the circa-2000 IBM Partnerworld keynote and last year’s Openworld keynote. The majority of the fanfare last year was Exalogic. I remember this Larry Ellison sound bite:

A single setup is capable of handling 1 million HTTP requests per second; two running side-by-side could handle Facebook’s HTTP request workload

For those who don’t know, the following is true: Facebook serves roughly 260,000,000,000 page views per month.

That’s roughly 100,000 per second: (260 * 10^9)/(30*24*60*60). So the claim that a two-rack Exalogic configuration could handle Facebook’s “HTTP request workload” is actually quite true. So why am I blogging about it then? The numbers are indeed stunning! There is a huge difference between the “HTTP request workload” and serving meaningful content. But I’m not blogging about that.

Those Facebook Engineers are Real Dummies
Well, they must be, if they were using “thousands and thousands” of servers to do what they were doing even back in 2008:

Because we have thousands and thousands of computers, each running a hundred or more Apache processes […]

Silly Facebook engineers. They could save so much space, power and cooling if they only listened to certain keynote wisdom.

Your Choice: A Two-Rack Exalogic Configuration Or 25 Itsy-Bitsy Xeon 5600 Cores
If a two-rack Exalogic configuration can supplant all the processing power of Facebook’s web serving farm, I have to do the math. I won’t argue that Exalogic can perform “1 million HTTP requests per second.” Because it surely can. I will, however, focus on just how much Westmere-EP CPU it takes to do the 100,000 HTTP requests/sec it would take to handle just the request portion of Facebook’s workload.

I think I’ll fire up httpd on one of my Linux servers and crank up ab(8) (ApacheBench) to see how much CPU it takes to handle a given rate of requests. The workload is pulling about 4K for each request. The server is a 2-socket Westmere-EP (Xeon 5600) system running Linux 2.6:

# ab -n 1000000 -c 24 http://172.28.8.250/
 This is ApacheBench, Version 2.0.40-dev apache-2.0
 Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
 Copyright 2006 The Apache Software Foundation, http://www.apache.org/
Benchmarking 172.28.8.250 (be patient)
 Completed 100000 requests
 Completed 200000 requests
 Completed 300000 requests
 Completed 400000 requests
 Completed 500000 requests
 Completed 600000 requests
 Completed 700000 requests
 Completed 800000 requests
 Completed 900000 requests
 Finished 1000000 requests
Server Software: Apache/2.2.3
 Server Hostname: 172.28.8.250
 Server Port: 80
Document Path: /
 Document Length: 3985 bytes
Concurrency Level: 24
 Time taken for tests: 62.656311 seconds
 Complete requests: 1000000
 Failed requests: 0
 Write errors: 0
 Non-2xx responses: 1000002
 Total transferred: 4183008366 bytes
 HTML transferred: 3985007970 bytes
 Requests per second: 15960.08 [#/sec] (mean)
 Time per request: 1.504 [ms] (mean)
 Time per request: 0.063 [ms] (mean, across all concurrent requests)
 Transfer rate: 65196.45 [Kbytes/sec] received
 Connection Times (ms)
 min mean[+/-sd] median max
 Connect: 0 0 0.0 0 1
 Processing: 0 1 4.4 1 885
 Waiting: 0 0 4.5 0 884
 Total: 0 1 4.4 1 885
Percentage of the requests served within a certain time (ms)
 50% 1
 66% 1
 75% 1
 80% 1
 90% 2
 95% 3
 98% 4
 99% 5
 100% 885 (longest request)

No tuning, straight out of the box, gives me 15960.08 HTTP requests/second. And the CPU? Well, it peaked at 66% idle, or 34% utilized. Of course this is SMT (hyperthreaded), so that’s 8 processor threads or 4 cores:

# vmstat 3
 procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r b swpd free buff cache si so bi bo in cs us sy id wa st
 2 0 127816 259408 230712 47307252 0 0 92 2 0 0 0 0 100 0 0
 0 0 127816 259400 230712 47307352 0 0 0 0 1105 432 0 0 100 0 0
 0 0 127816 259408 230712 47307352 1 0 1 12 1074 330 0 0 100 0 0
 0 0 127816 259392 230712 47307352 0 0 0 0 1114 441 0 0 100 0 0
 5 0 127816 259044 230712 47308164 0 0 0 8 7439 23103 4 4 92 0 0
 10 0 127816 237848 230728 47318736 1 0 1 4963 7509 94804 17 14 70 0 0
 2 0 127816 244196 230736 47331552 0 0 0 48 6944 69394 16 12 72 0 0
 6 0 127816 231328 230744 47330476 0 0 0 5927 10704 62483 13 10 78 0 0
 14 0 127816 227280 230756 47307412 1 0 1 0 5756 85341 15 13 72 0 0
 18 0 127816 239084 230764 47306240 0 0 0 6467 6003 83381 15 13 72 0 0
 10 0 127816 237880 230780 47306440 1 0 1 6036 6257 82766 15 12 73 0 0
 1 0 127816 268676 230784 47302892 0 0 0 0 7258 42912 9 7 85 0 0
 9 0 127816 259512 230788 47307268 0 0 0 3169 19715 31181 8 6 85 0 0
 6 0 127816 236700 230804 47319992 1 0 1 3 5636 106346 19 15 66 0 0
 3 0 127816 240576 230816 47330944 0 0 1 7223 12153 49646 12 9 79 0 0
 2 0 127816 229308 230828 47329748 0 0 0 5669 8747 87512 15 12 72 0 0
 3 0 127816 241880 230840 47332748 1 0 1 1 6669 77569 16 13 71 0 0
 7 0 127816 229984 230848 47327988 0 0 0 5672 13008 56876 13 10 77 0 0
 8 0 127816 235400 230864 47330736 0 0 0 51 5411 101470 18 15 67 0 0
 2 0 127816 246684 230868 47328748 1 0 1 7179 12156 34923 9 7 84 0 0
 8 0 127816 234116 230880 47325008 0 0 0 3977 13480 55217 12 9 79 0 0
 10 0 127816 229848 230888 47301592 1 0 1 0 5712 86969 15 13 72 0 0
 9 0 127816 242936 230900 47302292 0 0 0 6248 5907 82740 15 13 72 0 0
 7 0 127816 234464 230908 47312100 0 0 0 0 6211 81525 14 12 74 0 0
 3 0 127816 260600 230916 47310580 1 0 1 6019 10948 49947 10 8 81 0 0
 0 0 127816 252816 230920 47314964 0 0 0 2847 11985 19545 5 4 90 0 0
 0 0 127816 252840 230920 47316000 0 0 0 0 1078 358 0 0 100 0 0
 0 0 127816 252952 230920 47316000 11 0 11 15 1108 426 0 0 100 0 0

With 3,990 HTTP requests per second/core I only need 25 Westmere-EP cores to “handle Facebook’s HTTP request workload.”  That’s 13 2U servers.

It’s both absurd, and insulting to Facebook, to suggest a couple of Exalogic systems can do anything worthwhile under Facebook’s insanely huge web workload. Since Facebook opted not to replace all their servers with a few tiles of Exalogic goodies, what did they do? Well, they built a behemoth data center in my part of the world. These are panorama shots so click and drag right to left:

Thanks to the Scobleizer for that. I’ve seen the outside of this building while driving by. If you read about the state of the art environmental engineering these guys put into this facility you’ll know why I’d love to get in there to see it—and you’ll certainly know why the odds are, um, slim Exalogic could replace even a partial fragment of a fraction of a percentage of half of one side of a single aisle of this datacenter.

So, no, Facebook didn’t really replace hundreds of metric tonnes of hardware with two 42U racks worth of Exalogic Elastic Cloud. I meant to say Facebook apparently didn’t get the memo.

Summary
Keynotes are interesting. Some are memorable.


Enough Already MOS!

So another My Oracle Support Update at the weekend. Today I get the following results when searching the knowledge base: Thanks a bunch, Oracle! We pay for this stuff and you continually screw it up in basic ways. And no Chrome is not an unusual browser. And yes flash is up to date. It isn’t [...]

Real-Time SQL Monitoring - Retention

As I suggested in my last post, there's at least one more reason that your long-running SQL statements might not appear in SQL Monitoring views (e.g. V$SQL_MONITOR) or the related OEM screens.

When the developers at my current site started to use SQL Monitoring more often, they would occasionally contact me to ask why a statement didn't appear in this screen, even though they knew for certain that they had run it 3 or 4 hours ago and had selected 'All' or '24 Hours' from the 'Active in last' drop-down list.

I noticed when I investigated that some of our busiest test systems only displayed statements from the past hour or two, even when selecting 'All' from the drop-down. I asked some friends at Oracle about this and they informed me that there is a configurable limit on how many SQL plans will be monitored that is controlled by the _sqlmon_max_plan hidden parameter. It has a default value of the number of CPUs * 20 and controls the size of a memory area dedicated to SQL Monitoring information. This is probably a sensible approach in retrospect because who knows how many long-running SQL statements might be executed over a period of time on your particular system?
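
For reference, a hedged sketch of checking the current setting and raising it is below. Hidden parameters should only be touched with Oracle Support’s agreement, the value shown is purely illustrative, and the x$ query has to be run as SYS:

-- check the current value (run as SYS)
select i.ksppinm name, v.ksppstvl value
from   x$ksppi i, x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm = '_sqlmon_max_plan';

-- raise the limit (illustrative value only)
alter system set "_sqlmon_max_plan" = 640 scope = both;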

I included this small snippet of information in my SQL Monitoring presentations earlier this year because it's become a fairly regular annoyance and planned to blog about it months ago but first I wanted to check what memory area would be increased and whether there would be any significant implications.

Now that I've suggested to my client that we increase it across our systems, I've had a dig around in various V$ views to try to identify the memory implications but didn't notice anything obvious. My educated guess is that the additional memory requirement is unlikely to be onerous on modern systems, but I would still like to know for sure and so I'll keep digging. If anyone knows already, I'd be very interested ...

Updated later - thanks to Nick Affleck for pointing out the additional 's' I introduced on the parameter name. Fixed now to read _sqlmon_max_plan

Advert: Cary Millsap in London

If you're interested in Oracle performance (or system performance in general) and you don't know who Cary Millsap is then I'm surprised. If you've never read 'Optimizing Oracle Performance', then I think you should.

Most of all, and this isn't to diminish Cary's other work in any way, if you've never had the pleasure of hearing Cary explain complex performance topics in a way that makes them fun and easy to understand then you really are missing something special.

Well now's your chance to put this right. Although it's pretty easy to hear Cary speaking at one of the many user group conferences he pops up at over the pond, the chance to hear him speak for a whole day in the UK is pretty rare.

Sign up. I know training budgets are tight at the moment, but I bet you won't regret it!

(Oh, and in case you think this is just rampant advertising for a friend, you should see the number of mails I get these days asking me to review things I don't have the slightest interest in, in the hope I'll blog about them. I only blog about things I think are worth my and your time ...)

How To Become An Oracle Expert (The Scientist)

I’ve been invited by the Sydney Oracle Meetup Group to participate in an open discussion on “How To Become An Oracle Expert” next Tuesday, 16 August 2011 at 5:30pm. There’s a great lineup, including: -  Chris Muir (Oracle ACE Director, Australia) -  Connor McDonald (Oracle ACE Director, Australia) -  Craig Shallahamer (Oracle ACE Director, US) -  [...]

Friday Philosophy – Blogging Style and Aim

I’ve recently looked back at some of my earlier blog postings and also some notes I made at the time I started. I had a few aims at the start, pretty much in this order:

  • A place to put all those Oracle thoughts and ideas, for my own benefit
  • Somewhere to record stuff that I keep forgetting
  • I’d started commenting on other blogs and felt I was maybe too verbal on them
  • To increase my profile within the Oracle community
  • To share information, because I’m quite socialist in that respect
  • To learn more

It very quickly morphed into something slightly different though.

Firstly, it is not really somewhere that I record thoughts and ideas or where I record stuff that I forget. When I am busy, I sometimes only get half way to the bottom of resolving an issue or understanding some feature of Oracle. I tend to create little documents about them but I can lose track of them. I initially intended to put these on my blog. The thing is though, I don’t feel I can blog about them because I might be wrong or I raise more questions than I answer. I don’t think a public blog about technology is a good place to have half-baked ideas and I certainly don’t want people:

  1. reading and believing something that is wrong
  2. thinking I do not know what I am talking about
  3. seeing my rough notes, as boy are they rough, often with naughty words in them and slang. Converting them to a family-friendly format takes time.

You see, there is the point about increasing my profile in the community. Part of me hates the conceit that you have to be seen as all-knowing or never wrong, as no one is all-knowing and never wrong. In fact, I think most of us find it hard to like people who put themselves as such.  But if I put out a blog saying “it works this way” and I am wrong or I simply say it in a clumsy way or I assume some vital prior knowledge, I could be making people’s lives harder not easier, so I spend a lot of effort testing and checking. It takes me a lot, lot longer to prepare a technical blog than I ever thought it would before I started. And yes, I accept I will still get it wrong sometimes.

Another consideration is that I make my living out of knowing a lot about Oracle. If I post a load of blogs saying something like “gosh I wish I understood how Oracle locks parts of the segment as it does an online table rebuild and handles the updates that happen during it”, then I obviously don’t know about that. Or I put out a post about how I currently think it works and I’m wrong. Tsch, I can’t be that good! How much should I have to think about how I am selling myself as a consultant? There is a difference between being liked and being perceived as good at what you do. If you want someone to design a VLDB for you, you probably don’t care if s/he is a nice person to spend an evening in the pub with - but you certainly care if they seem to be fundamentally wrong about Oracle partitioning.

Balancing that, if you saw my recent post on Pickler Fetch you will see that I was wrong about a couple of things and there was some stuff I did not know yet. But I learnt about those wrong things and lack of knowledge, so I feel good about that. That was one of my original aims, to learn. Not only by having to check what I did but by people letting me know when I was wrong.

What about style? I can be quite flippant and, oh boy, can I go on and on. I know some people do not like this and, if you want a quick solution to an oracle problem, you probably do not want to wade through a load of side issues and little comments. You just want to see the commands, the syntax and how it works. Well, that is what the manuals are for and there a lot of very good web sites out there that are more like that. If you do not like my verbose style then, hey that’s absolutely fine.  But I like to write that way and so I shall.

So after over 2 years of blogging, I seem to have settled into a style and my aims have changed.

  • I try to be helpful and cover things in detail.
  • I try to polish what I present a lot, lot more than I do for my own internal notes. Maybe too much.
  • I’m going to write in a long-winded way that some people will not enjoy but it is my style.
  • I’m going to try and worry less about looking perfect as I am not.

I suppose what I could do is start a second, private blog with my half-baked stuff on it. But I just don’t think I’ve got the time :-)