On the Topic of Technology… 3

May 24, 2011 (Updated May 28, 2011) (Back to the Previous Post in the Series) In the previous article in this series I described some of the features of the Motorola Xoom tablet, and some of the reasons why I purchased that tablet rather than the Apple iPad 2.  When I wrote the previous article I could [...]

Offloading production backups to a different storage array

For quite a while Oracle DBAs have performed split mirror backups using special devices called “Business Continuance Volumes” or BCVs for short. A BCV is a special mirror copy of a LUN on the same storage array as the primary copy.

In a BCV backup scenario, the storage administrator (usually) “splits” the mirror after the database has been put into hot backup mode. After the mirror is split, the database is taken out of hot backup mode and resumes normal operation. A new Oracle instance on a different host can then mount the split mirror copy of the database for backups. The use of this technology for refreshing a test environment is out of scope for this article. The figure below illustrates the idea:

The advantage of such an approach is that the backup operation, initiated from the mount host, should not impact the performance of the production database. Once the backup is complete, the BCV for the ARCH disk group should be re-synchronised with the source LUN, whereas the DATA disk group should not. This allows us to quickly recover from problems with the primary LUNs - more on that in a later post.

One Step Further

On my current site this process has been refined. One of the requirements was that SRDF should be used as the means for disaster recovery. I should say that SRDF, or Symmetrix Remote Data Facility, is my customer’s preferred method for DR, and I by no means want to advertise for EMC here; it just so happened that I was working on EMC storage for this project.

I should also note that Data Guard cannot be used due to an application constraint (an ACFS file system is an integral part of the application and must be backed up together with the database).

All storage is presented to ASM, which in many ways makes life easier for me. The ASM LUNs or “disks” carry all the required information in the “disk” header. So after the cloned LUNs have been presented to the mount host, all I need to do is make them available to the OS, and optionally run an “oracleasm scandisks” as root to detect them. From then on I should be able to simply mount the disk group (either via SQL*Plus or srvctl in 11.2). The actual backup requires a few more steps; these are shown below.

Before going further into detail let’s have a look at the architecture first:

Two data centres are in use: the local one is used for production in normal operations, including backups. One of the design requirements was that backups can be taken in either data centre, to cater for a DR scenario.

Split mirror backups as shown in the above figure are taken in the local data centre in normal operations. In case of DR, the remote data centre will be configured to take backups. For this to happen, it is necessary to clone the replicated LUNs (that would be activated in the DR event) much as it’s done for the local data centre’s split mirror backups. As an added advantage, the clones can be used to create pre- and post-batch “backups” that would be activated in case of a catastrophic failure of the batch/end-of-year processing.

Taking the Backups

To be able to take a backup of the split mirror, a few things are necessary. Most of these are documented in MOS note “RMAN and Split Mirror Disk Backups [ID 302615.1]”. You certainly require a recovery catalogue database in the first place. As a first step you register the database in the catalog. You perform this step connected to the production database and the recovery catalog, as shown in the example below:

$ rman target / catalog=rman/rmanPwd@rccat

Recovery Manager: Release 11.2.0.2.0 - Production on Fri May 20 10:39:00 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PROD (DBID=1796248120)
connected to recovery catalog database

RMAN> register database;

database registered in recovery catalog
starting full resync of recovery catalog
full resync complete

RMAN> exit

If you like, you can configure defaults at this stage; I opted for the following (still connected to the primary database and recovery catalog):

RMAN> configure retention policy to redundancy 3;

new RMAN configuration parameters:
CONFIGURE RETENTION POLICY TO REDUNDANCY 3;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE DEFAULT DEVICE TYPE TO SBT_TAPE;

new RMAN configuration parameters:
CONFIGURE DEFAULT DEVICE TYPE TO 'SBT_TAPE';
new RMAN configuration parameters are successfully stored

RMAN> configure controlfile autobackup off;

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP OFF;
new RMAN configuration parameters are successfully stored

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE sbt_tape to '%F';

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP FORMAT FOR DEVICE TYPE 'SBT_TAPE' TO '%F';
new RMAN configuration parameters are successfully stored

RMAN> configure device type sbt_tape parallelism 4 backup type to backupset;

new RMAN configuration parameters:
CONFIGURE DEVICE TYPE 'SBT_TAPE' PARALLELISM 4 BACKUP TYPE TO BACKUPSET;
new RMAN configuration parameters are successfully stored

RMAN> exit

Now switch over to the mount host for some real work.

The high level steps for taking a backup of the “split mirror” are:

  1. Ensure that the cloned LUNs are presented to the mount host’s operating system, including appropriate zoning on the fabric
  2. Ensure that the multi-pathing solution of choice is correctly configured for all paths to the cloned LUNs
  3. Ensure that the ASM disks are known to the local ASM instance. That may include running /etc/init.d/oracleasm scandisks as root, or putting the relevant rules into /etc/udev/rules.d/

These steps are quite generic and the details will depend on your OS and storage stack. I won’t go into detail here, but you might find some relevant bits and pieces on my blog if you are interested.
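For illustration only, on a Linux host with device-mapper multipath and ASMLib the sequence might look roughly like this (a minimal sketch; rescan-scsi-bus.sh comes from the sg3_utils package, and your site will have its own procedures and device names):

$ rescan-scsi-bus.sh                  # make the newly zoned clone LUNs visible to the OS
$ multipath -ll                       # check that all paths to the cloned LUNs are present
$ /etc/init.d/oracleasm scandisks     # let ASMLib read the ASM disk headers on the clones
$ /etc/init.d/oracleasm listdisks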

The next step is to make the disk groups known to the ASM instance by mounting them. Initially that has to be done via the command line; from 11.1 onwards you have to connect as SYSASM:

SQL> alter diskgroup DGName mount;

In Oracle 11.2 this automatically creates a resource for disk group DGName in the OCR, which is very convenient (especially in RAC environments). Next time, all you need to do is execute “srvctl start diskgroup -g DGName” as the grid software owner.

For pre-11.2 environments you might want to consider updating the “asm_diskstring” initialisation parameter accordingly.
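For example, something along these lines (a sketch only; the discovery patterns are placeholders and depend entirely on how your devices are named):

SQL> alter system set asm_diskstring = 'ORCL:*', '/dev/mapper/clone_*';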

Once the ASM disk groups your database requires are mounted, it’s time to mount the database. If you have not done so already, register the database in the OCR and ensure that you add the spfile option as well, as in this example (Oracle RAC users would also add the instances):

$ srvctl add database -d PROD -o $ORACLE_HOME -c SINGLE -p '+data/PROD/spfilePROD.ora' \
> -s mount -a DATA,ARCH

As per the MOS note mentioned above, the database must be started with a BACKUP controlfile on the mount host. Otherwise you’d end up with these RMAN errors after the first backup:

RMAN-3014: Implicit resync of recovery catalog failed
RMAN-6038: Recovery catalog package detected an error
RMAN-20035: Invalid high RECID error

To do so, create a backup controlfile on the primary database before splitting the mirror. The process is very much the same as you would use for a physical standby database:

SQL> alter database backup controlfile to '/tmp/backup.ctl';

Database altered.

This controlfile now needs to be made available to the mount host - an elegant way to do this would be to use DBMS_FILE_TRANSFER, or asmcmd’s copy command.
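A minimal sketch of the DBMS_FILE_TRANSFER variant, run on the primary; the directory objects SRC_DIR and DST_DIR and the database link MOUNTHOST are assumptions and have to exist on the respective sides (and the instance behind the link must be able to service the call):

begin
  dbms_file_transfer.put_file(
    source_directory_object      => 'SRC_DIR',     -- assumed to point at /tmp on the primary
    source_file_name             => 'backup.ctl',
    destination_directory_object => 'DST_DIR',     -- assumed directory object on the mount host
    destination_file_name        => 'backup.ctl',
    destination_database         => 'MOUNTHOST');  -- assumed database link to the mount host
end;
/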

Once it’s on the mount host, say in /tmp/PROD/backup.ctl, it needs to be made available to Oracle. The easiest way is to use RMAN for this:

Connect to the mirror instance as SYSDBA; do not connect to the recovery catalog.

$ rman target /

Recovery Manager: Release 11.2.0.2.0 - Production on Fri May 20 10:53:40 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: PROD (DBID=1796248120, not open)

RMAN> shutdown immediate

using target database control file instead of recovery catalog
database dismounted
Oracle instance shut down

RMAN> startup nomount
connected to target database (not started)
Oracle instance started

Total System Global Area    9219969024 bytes

Fixed Size                     2234056 bytes
Variable Size               4630513976 bytes
Database Buffers            4563402752 bytes
Redo Buffers                  23818240 bytes

RMAN> restore controlfile from '/tmp/PROD/backup.ctl';

Starting restore at 20-MAY-11
allocated channel: ORA_DISK_1
channel ORA_DISK_1: SID=200 instance=PROD device type=DISK

channel ORA_DISK_1: copied control file copy
output file name=+ARCH/prod/controlfile/current.256.737725307
output file name=+DATA/prod/controlfile/current.256.737725307
Finished restore at 20-MAY-11

If you were really desperate to back up the CURRENT controlfile (which is part of the clone), you could have done this prior to the restore of the backup controlfile. You must not connect to the recovery catalog in this case - see MOS note 1111369.1 for more information.

Querying V$DATABASE.CONTROLFILE_TYPE should now return “BACKUP”. With the setup completed, you are ready to back the database up from the cloned LUNs. Connect to RMAN again, using the backup instance and the recovery catalog and initiate the backup. For example:

run {
allocate channel t1 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/u01/app/oracle/product/admin/PROD/tdp/tdpo_PROD.opt)';
allocate channel t2 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/u01/app/oracle/product/admin/PROD/tdp/tdpo_PROD.opt)';
backup database;
}

This completes the backup of the database. One caveat exists: the control file is not backed up as part of this process - RMAN doesn’t back up a “backup” controlfile:

$ rman target / catalog rman/rmanPwd@rccat

Recovery Manager: Release 11.2.0.2.0 - Production on Tue May 24 12:51:22 2011

Copyright (c) 1982, 2009, Oracle and/or its affiliates.  All rights reserved.

connected to target database: GB1 (DBID=1796248120, not open)
connected to recovery catalog database

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

new RMAN configuration parameters:
CONFIGURE CONTROLFILE AUTOBACKUP ON;
new RMAN configuration parameters are successfully stored

RMAN> run {
2> allocate channel t1 type sbt_tape parms
3> 'ENV=(TDPO_OPTFILE=/u01/app/oracle/product/admin/PROD/tdp/tdpo_PROD.opt)';
4> backup tablespace USERS;
5> }

allocated channel: t1
channel t1: SID=199 instance=PROD device type=SBT_TAPE
channel t1: Data Protection for Oracle: version 5.5.1.0

Starting backup at 24-MAY-11
channel t1: starting full datafile backup set
channel t1: specifying datafile(s) in backup set
input datafile file number=00006 name=+DATA/prod/datafile/users.264.737727697
channel t1: starting piece 1 at 24-MAY-11
channel t1: finished piece 1 at 24-MAY-11
piece handle=1cmd4ott_1_1 tag=TAG20110524T125316
comment=API Version 2.0,MMS Version 5.5.1.0
channel t1: backup set complete, elapsed time: 00:00:01
Finished backup at 24-MAY-11

RMAN-06497: WARNING: control file is not current, control file AUTOBACKUP skipped
released channel: t1

RMAN>  exit

It’s not difficult at all to get around this problem: as part of the regular archive log backups you are performing on the production database anyway, you add a “backup current controlfile” and a command to resync the catalog (“resync catalog”).
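On the production side that could look something like the following sketch (connected to the production database and the recovery catalog; the channel settings simply mirror the ones used earlier, so adjust them to your own archive log backup script):

run {
allocate channel t1 type sbt_tape parms 'ENV=(TDPO_OPTFILE=/u01/app/oracle/product/admin/PROD/tdp/tdpo_PROD.opt)';
backup archivelog all;          # the regular archive log backup
backup current controlfile;     # the CURRENT controlfile only exists on the production side
}
resync catalog;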

At this stage I should note that the backup process wouldn’t be different in principle if additional backups were to be taken at the DR site (licensing questions aside). Instead of having to clone the primary LUNs in the local data centre, the storage admins would clone the replicated LUNs (the “R2s” in EMC-speak) and bring these clones up on the DR mount host.

Summary

The process described above was interesting from my personal point of view, as I hadn’t used this concept before. I’m an Oracle DBA, and I feel most comfortable when I have things in hand; relying on another team for BAU database tasks is a new experience.

The process description deliberately left product names out unless they were part of the command output. The concept is quite universal and is by no means tied down to a specific vendor.

One of the requirements is that your tape solution can talk to the primary host(s) and the mount host in both the local and remote data centres. I have seen sites (including rather large ones) where the DR site was not configured to take backups. Although you could argue that you shouldn’t operate from your DR site for long, that’s by no means an excuse for not having backups. Suppose you run into block corruption: how would you recover from that without a backup? But then the storage team usually argue that block corruption doesn’t happen on their high-end arrays.
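For the record, with a usable backup a block corruption is not a big deal to fix; in 11g a single RMAN command along these lines would do (a sketch with made-up datafile and block numbers):

RMAN> recover datafile 6 block 42;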

References

  • RMAN and Split Mirror Disk Backups [ID 302615.1]
  • RMAN Backup of Controlfile fails from Split Mirror / BCV Copy. Is there any way to take controlfile backup from Split Mirror / BCV so that the Production DB is not used for Controlfile backup? [ID 1111369.1]
  • Supported Backup, Restore and Recovery Operations using Third Party Snapshot Technologies [ID 604683.1]

oradebug peek

An odd little discovery today. The peek command of oradebug is fairly well documented on the internet, and I’ve used it from time to time to find the contents of memory. Earlier on today, though, I was running 11.1.0.6 and it didn’t do what I was expecting. I was trying to dump a log buffer block from memory so after logging on as sys and finding the location for the block I duly typed:


oradebug setmypid
peek 0x09090000 512

The response I got looked, as expected, like this:


[9090000, 9090200) = 00002201 0000E3D7 00000974 E1808044 00340035 00350035 00340035
0034002F 0022002F 0309002C 050C0AC2 010860C4 02C1023E 056F7807 08290714 ...

As you can see, the dump range is given as 512 bytes (the variation in bracketing for the "[9090000,9090200)" means include the low address but exclude the high address) but the dump itself is only 15 words. My memory told me that this is what Oracle does on screen and that all I had to do was read the trace file for the full 512 bytes. Unfortunately my memory was wrong - the trace file was limited to the same 15 words.

It wasn't until I added an extra number to the command that I got the full 512 bytes I wanted in the trace file:

=== On screen ===
SQL> oradebug peek 0x09090000 512 1
[9090000, 9090200) = 00002201 0000E3D7 00000974 E1808044 00340035 00350035 00340035
0034002F 0022002F 0309002C 050C0AC2 010860C4 02C1023E 056F7807 08290714 ...

=== from the trace file ===
Processing Oradebug command 'peek 0x09090000 512 1'
[9090000, 9090200) = 00002201 0000E3D7 00000974 E1808044 00340035 00350035 ...
Dump of memory from 0x9090018 to 0x9090200
9090010                   00340035 0034002F          [5.4./.4.]
9090020 0022002F 0309002C 050C0AC2 010860C4  [/.".,........`..]
9090030 02C1023E 056F7807 08290714 056F7807  [>....xo...)..xo.]
9090040 082A0714 023DC203 0303C102 091215C2  [..*...=.........]
     ...
90901E0 03083207 02083DC2 C20303C1 C2031215  [.2...=..........]
90901F0 00012D09 00000009 00000000 0016010B  [.-..............]

*** 2011-05-20 16:35:11.156
Oradebug command 'peek 0x09090000 512 1' console output:
[9090000, 9090200) = 00002201 0000E3D7 00000974 E1808044 00340035 00350035 00340035 0034002F 0022002F 0309002C 050C0AC2 010860C4 02C1023E 056F7807 08290714 ...

Apart from repeating the on-screen dump, the trace file now reports the full dump, with both a raw list and a character-based interpretation of each 16 bytes - although it starts, unfortunately, by repeating the first 24 (0x18) bytes in the same format as the on-screen output.

Maybe I’m wrong about how I used to peek data – maybe it’s just a variation in this version of Oracle.

Ad: DOAG Berliner Expertenseminare (Last Call)

I was just informed that there are still some free seats for the 2-day seminar I will present in Berlin in two weeks (June 7-8). Hence, do not wait too long if you want to join us…

The content is based on chapters 4, 5, 6 and 7 of my book, i.e. Part III: Query Optimizer. The essential difference is that the content has been updated to cover version 11.2 as well.

The event is organized by DOAG. You can read the full description of the seminar (incl. agenda) here. Just be aware that the spoken language will be German (the slides will be in English, though).

HOWTO: Convert from W3C XML DateTime format to Oracle Timezone

Most XML datetime information is represented like the following:

#66cc66;"><date#66cc66;">>#cc66cc;">2011#66cc66;">-05#66cc66;">-23T12:01:#cc66cc;">51.217#66cc66;">+02:00#66cc66;">date#66cc66;">>

That said, how do you get from that format to an Oracle datatype? The following will help you convert a W3C datetime value to Oracle datatypes…

WITH datestuff AS
 (SELECT xmltype('<date>2011-05-23T12:01:51.217+02:00</date>') xmlcol
    FROM dual
 )
SELECT to_timestamp_tz(xt.datum,'YYYY-MM-DD"T"HH24:MI:SS.FF9tzh:tzm')
       AS "TO_TIMESTAMP_TZ"
FROM   datestuff
,      xmltable('*'
                passing xmlcol
                COLUMNS
                  datum varchar2(35) PATH '/date'
                ) xt;

TO_TIMESTAMP_TZ
-----------------------------------
23-05-11 12:01:51,217000000 +02:00

So use the TO_TIMESTAMP_TZ function and make sure the format mask matches the value you extract into the VARCHAR2(35) column.
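If the value is already available as a plain string rather than inside an XML document, the same format mask can be applied directly; a minimal example:

SELECT to_timestamp_tz('2011-05-23T12:01:51.217+02:00',
                       'YYYY-MM-DD"T"HH24:MI:SS.FF9TZH:TZM') AS ts
FROM   dual;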

Monotonically Increasing Indexes

Richard Foote has put up another excellent rebuttal of the myth that you should rebuild indexes based on the value of del_lf_rows/lf_rows from an appropriately validated index structure. Greg asked a question similar to my comment on the OTN forums in this thread. Namely:

Is Oracle able to reuse index space when the indexed column is something like the current date (monotonically increasing)? So if we delete some old rows (past dates) and the newly inserted rows have higher (more recent) dates, does Oracle reuse the space from the deleted rows?

The short answer to Greg is that the Radiohead example Richard uses shows that index space is reused for monotonically increasing indexes where the old data is completely deleted. What happens is that the block gets put back on the freelist and is eligible for reuse as a new leaf block. This pattern of insert, process, delete old is pretty common. Sometimes, however, not *all* the old data gets deleted. In this case Oracle won’t re-use the free space. I have modified Richard’s example so that I delete all the non-prime values below a threshold. Due to the pattern of distribution of the primes this should leave a relatively large number of almost empty index blocks. In this case we see that the old space is not reused until the laggards have finally been deleted. NB if you are tempted to use my prime number test, feel free, but be aware that it degrades as the value of n grows. In other words I only warrant accuracy for positive numbers, not performance. In my case a significant percentage of the available space is left held by these ‘old’ entries. It’s worth remembering however that

  1. the ratio can’t tell you if you have a pattern that matches this usage and
  2. in most cases you’d want to investigate why these values weren’t being deleted anyway.
NIALL @ NIALL1 >@index_deletes
NIALL @ NIALL1 >drop table rush purge;

Table dropped.

Elapsed: 00:00:03.83
NIALL @ NIALL1 >
NIALL @ NIALL1 >create table rush(
  2  	     id number
  3  ,	     code number
  4  ,	     name  varchar2(30)
  5  );

Table created.

Elapsed: 00:00:00.02
NIALL @ NIALL1 >
NIALL @ NIALL1 >insert into rush
  2  select rownum,mod(rownum,100),'MOVING PICTURES'
  3  FROM dual
  4  connect by level <= 1000000; 

1000000 rows created.
Elapsed: 00:00:03.18 

NIALL @ NIALL1 >commit;

Commit complete.

Elapsed: 00:00:00.03
NIALL @ NIALL1 >
NIALL @ NIALL1 >create unique index pk_rush on rush(id);

Index created.

Elapsed: 00:00:05.64
NIALL @ NIALL1 >
NIALL @ NIALL1 >alter table rush
  2  add constraint pk_rush
  3  primary key (id) using index;

Table altered.

Elapsed: 00:00:00.14
NIALL @ NIALL1 >
NIALL @ NIALL1 >
NIALL @ NIALL1 >create or replace function is_prime(p in number) return number
  2  is
  3  	     l_limit number;
  4  	     i	     number := 3;
  5  	     retval  number := 1;  -- returns 0 if false, 1 if true
  6  begin
  7  	     l_limit := sqrt(p);
  8  	     if p = 1 then
  9  		     retval:=1;
 10  	     end if;
 11  	     if p = 2 then
 12  		     retval :=1;
 13  	     end if;
 14
 15  	     if mod(p,2) = 0 then
 16  		     retval := 0;
 17  	     end if;
 18
 19  	     while (i <= l_limit) loop
 20  		if mod(p,i) = 0 then
 21  			     retval := 0;
 22  		end if;
 23  		i := i + 2; -- (no need to test even numbers)
 24  	     end loop;
 25  	     return retval;
 26  end;
 27  /
Function created.
Elapsed: 00:00:00.00
NIALL @ NIALL1 >
NIALL @ NIALL1 >show errors
No errors.
NIALL @ NIALL1 >
NIALL @ NIALL1 >delete from rush
  2  where id < 400000
  3  and is_prime(id) = 0;

 366139 rows deleted.
Elapsed: 00:01:15.54
NIALL @ NIALL1 >
NIALL @ NIALL1 >commit;

Commit complete.

Elapsed: 00:00:00.01
NIALL @ NIALL1 >
NIALL @ NIALL1 >analyze index pk_rush validate structure;

Index analyzed.

Elapsed: 00:00:00.97
NIALL @ NIALL1 >
NIALL @ NIALL1 >select lf_rows,del_lf_rows,round(del_lf_rows/lf_rows,2) del_pct
  2  from index_stats;

   LF_ROWS DEL_LF_ROWS    DEL_PCT
---------- ----------- ----------
   1000000      366139        .37

1 row selected.

Elapsed: 00:00:00.03
NIALL @ NIALL1 >
NIALL @ NIALL1 >begin
  2  for i in 1000001..1400000 loop
  3  insert into rush
  4  values(i,mod(i,100),'GRACE UNDER PRESSURE');
  5  end loop;
  6  commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:00:47.54
NIALL @ NIALL1 >
NIALL @ NIALL1 >analyze index pk_rush validate structure;

Index analyzed.

Elapsed: 00:00:00.49
NIALL @ NIALL1 >
NIALL @ NIALL1 >select lf_rows,del_lf_rows,round(del_lf_rows/lf_rows,2) del_pct
  2  from index_stats;

   LF_ROWS DEL_LF_ROWS    DEL_PCT
---------- ----------- ----------
   1400000      366139        .26

1 row selected.

Elapsed: 00:00:00.02
NIALL @ NIALL1 >
NIALL @ NIALL1 >
NIALL @ NIALL1 >
NIALL @ NIALL1 >
NIALL @ NIALL1 >begin
  2  for i in 1400001..2000000 loop
  3  insert into rush
  4  values(i,mod(i,100),'2112');
  5  end loop;
  6  commit;
  7  end;
  8  /

PL/SQL procedure successfully completed.

Elapsed: 00:01:08.89
NIALL @ NIALL1 >
NIALL @ NIALL1 >analyze index pk_rush validate structure;

Index analyzed.

Elapsed: 00:00:00.63
NIALL @ NIALL1 >
NIALL @ NIALL1 >select lf_rows,del_lf_rows,round(del_lf_rows/lf_rows,2) del_pct
  2  from index_stats;

   LF_ROWS DEL_LF_ROWS    DEL_PCT
---------- ----------- ----------
   2000000      366139        .18

1 row selected.

Elapsed: 00:00:00.02
NIALL @ NIALL1 >spoo off

ASSM bug reprise - part 2

Introduction

In the first part of this post I explained some of the details and underlying reasons of bug 6918210. The most important point is that it can only be hit if many row migrations happen during a single transaction. However, excessive row migration is usually a sign of poor design, so this point probably can't be stressed enough:

If you don't have excessive row migrations, the bug cannot become significant.

Of course, there might be cases where you think you actually have a sound design but due to lack of information about the internal workings it might not be obvious that excessive row migrations could be caused by certain activities.

DEL_LF_ROWS Index Rebuild Criteria ? (Codex)

In the same article and book where the erroneous claims regarding BLKS_GETS_PER_ACCESS were made, we also find the following recommendation: “Another rebuild condition would be cases where deleted leaf nodes comprise more than 20% of the index nodes”. This is a very common claim; however, no matter how often it might be [...]

ODTUG KScope Preview 2011

This year again, on Tuesday 14 June, AMIS is organising the ODTUG Preview. The annual conference of ODTUG, the Oracle Development Tools Users Group, takes place this year in Long Beach, California, from 26 to 30 June. Not everyone gets the chance to travel there. For the fifth year in a row, AMIS offers interested people the opportunity to attend a selection of the presentations that will be given there. A number of European speakers will deliver the same presentations at the AMIS ODTUG Preview that they will also give in the United States.

During the AMIS ODTUG Preview there will be three rounds of three parallel sessions on a variety of topics such as APEX, database development, ADF, JHeadstart and SOA.

Programme:

16:30 Welcome and Registration

17:00
Track 1: XFILES, the APEX 4 Version: The Truth is in There… (Marco Gralike & Roel Hartman)
Track 2: ADF Developers – Make the Database Work for You (Lucas Jellema)
Track 3: Pipelined Table Functions (Patrick Barel)

18:00 Dinner

19:00
Track 1: APEX Face/Off – Designing a GUI with APEX Templates and Themes (Christian Rokitta)
Track 2: BPMN: The New Silver Bullet? (Lonneke Dikmans)
Track 3: Oracle JHeadstart: Superior Productivity in Developing Best-practice ADF Web Applications (Steven Davelaar)

20:15
Track 1: Who’s Afraid of Analytic Functions? (Alex Nuijten)
Track 2: Overview of Eventing in Oracle SOA Suite 11g (Ronald van Luttikhuizen)
Track 3: …and Thus Your Forms ‘Automagically’ Disappeared (Luc Bors)

This event is primarily aimed at developers. There is, of course, no charge for attending, but the number of places is limited, so don’t wait too long. Full is full.

Registration will be possible shortly at www.amis.nl

ASSM

Here’s an interesting little detail (obvious AFTER the event) about space management with ASSM (automatic segment space management). It starts with this question on OTN:

When I alter table deallocate unused and keep 1K the object ends up with 24 blocks, even after I’ve truncated the table. Why?
This is in a tablespace using ASSM, with locally managed extents set to use automatic (system) allocation.

Ultimately the answer is: the first extent in this table started life at 8MB, and an extent that large needs 16 level 1 bitmap (space management) blocks, one level 2 bitmap block, and the segment header block before you get to the data blocks. When you truncate and deallocate, Oracle doesn’t recreate the map, so the extent has to start with 18 blocks - round that up to a multiple of 8 blocks (the 64KB that Oracle normally uses for starting extents for small objects) and you get the 24 blocks from the question.
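If you want to play with this yourself, a quick sketch follows (the tablespace ASSM_TS and table T_DEALLOC are made-up names; it assumes an ASSM tablespace with system-managed extent allocation and an 8KB block size, and whether the INITIAL request really produces a single 8MB first extent depends on how autoallocate satisfies it, so treat this as an illustration of the arithmetic rather than a guaranteed reproduction):

create table t_dealloc (id number, padding varchar2(100))
  segment creation immediate
  tablespace assm_ts
  storage (initial 8m);        -- ask for a large first extent, as in the OTN case

truncate table t_dealloc;
alter table t_dealloc deallocate unused keep 1k;

-- 16 level 1 bitmap blocks + 1 level 2 bitmap block + the segment header = 18 blocks,
-- rounded up to the next multiple of 8 gives the 24 blocks from the question
select blocks from user_segments where segment_name = 'T_DEALLOC';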

It took us a bit of time to get to the right answer on the thread – and that’s why I’m giving you the quick answer.