Table rebuilds

Here’s a question from the OTN forum that I’ve seen fairly frequently in the last couple of years:

I have a table abc(c_date date, c_id varchar2(20), c_name varchar2);
The table is already range partitioned on the “c_date” column and I have a lot of data in that table.
Now my task is to create hash partitions under the range partitions; the hash partitioning needs 16 partitions.
I don’t want to create another table with range and hash partitions and migrate the data. The amount of data I have is 3 terabytes.

The statement of requirements is a little ambiguous in this case – does the OP want a table where only future partitions are hash subpartitioned, or does he want the entire data set to be range/hash partitioned? If it’s the latter then he will have to do something that rewrites (and re-indexes) the entire data set; if it’s the former then he can run through a process that does a small amount of data dictionary manipulation to get to the required state. Here’s a quick outline that I hacked together in a few minutes between sessions (okay, I’ll admit it also needed a few minutes during a session) at the ACED briefing.

  1. Create an empty clone of the table, but introduce hash subpartitioning with a single subpartition per partition.
  2. Create a pair of ‘staging’ clones of the table – the first should be a simple non-partitioned table, the second should be a hash-partitioned table with a single partition.
  3. For each partition in the original table:
    1. exchange the partition with the first staging table
    2. exchange the first staging table with the single partition of the second staging table
    3. exchange the second staging table with the target partition of the final table
  4. Patch in the statistics that the exchanges can’t supply:
    1. copy the hash subpartition stats up to the range partition level
    2. copy the original table-level stats to the new table
  5. Change the subpartition template so that future partitions get the required number of hash subpartitions, drop the transfer tables and the old table, and rename the new table.

Here’s a code demonstration – but it doesn’t include the stats patching, and it’s not a model you should use exactly as it is on a production system.

First we create a range-partitioned table for our demonstration:


create table pt_range (
	id1	not null,
	id2	not null,
	grp,
	small_vc,
	padding
)
partition by range(id1) (
	partition p200 values less than (200),
	partition p400 values less than (400)
)
as
select
	cast(rownum as number(8))				id1,
	cast(trunc(dbms_random.value(1,64)) as number(8))	id2,
	cast(trunc(rownum/50) as number(4))			grp,
	lpad(to_char(trunc(rownum/20)),10)			small_vc,
	rpad('x',100)						padding
from
	all_objects
where
	rownum <= 350
;
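
One extra step worth taking at this point if you want to watch the statistics behaviour described later: gather stats on the original table. The demonstration doesn’t depend on it, but something along the following lines would do:

begin
	dbms_stats.gather_table_stats(
		ownname		=> user,
		tabname		=> 'PT_RANGE',
		granularity	=> 'ALL',
		method_opt	=> 'for all columns size 1'
	);
end;
/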

Now we can create an empty copy – but introduce the hash subpartition – and two staging tables.

create table pt_range_hash (
	id1		number(8,0)	not null,
	id2		number(8,0)	not null,
	grp		number(4),
	small_vc	varchar2(10),
	padding		varchar2(100)
)
partition by range(id1)
subpartition by hash (id2)
subpartition template (subpartition p1)
(
	partition p200 values less than (200),
	partition p400 values less than (400)
)
;

create table transfer_r (
	id1		number(8,0)	not null,
	id2		number(8,0)	not null,
	grp		number(4),
	small_vc	varchar2(10),
	padding		varchar2(100)
)
;

create table transfer_h (
	id1		number(8,0)	not null,
	id2		number(8,0)	not null,
	grp		number(4),
	small_vc	varchar2(10),
	padding		varchar2(100)
)
partition by hash (id2)
(
	partition p1
)
;

You’ll notice that I’ve matched the partition names of the full clone, used a very simple name (p1) in its subpartition template so that each partition gets a single, predictably named, hash subpartition, and repeated that simple name for the one partition of the hash-partitioned table. This isn’t absolutely necessary, but I think it makes the next process simpler to understand.


begin
	for r in (
		select	partition_name
		from	user_tab_partitions
		where	table_name = 'PT_RANGE'
		order by
			partition_position
	) loop

		dbms_output.put_line(r.partition_name);

--		Step 1: swap the range partition's segment into the simple heap table
		execute immediate
			'alter table PT_RANGE exchange partition ' ||
			r.partition_name ||
			' with table transfer_r including indexes without validation'
		;

--		Step 2: swap the heap table into the single partition of the hash-partitioned table
		execute immediate
			'alter table transfer_h exchange partition p1' ||
			' with table transfer_r including indexes without validation'
		;

--		Step 3: swap the hash-partitioned table into the matching composite partition of the target
		execute immediate
			'alter table pt_range_hash exchange partition ' ||
			r.partition_name ||
			' with table transfer_h including indexes without validation'
		;

	end loop;
end;
/

When this block completes your data hasn’t moved – it hasn’t even been copied – but it is now in a range/hash partitioned table. The statistics that used to exist at the partition level on the original table will now be on the new table at the subpartition level, and the new table won’t have partition-level or table-level stats. (I may find time to write part 2 of this note with code that fixes the stats – but any reader who wants to have a shot at it instead is welcome to supply the code).
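
Before patching anything, a couple of quick checks show what has happened: the old table should now be empty and – if you gathered stats earlier – the original partition-level row counts should show up at the subpartition level of the new table:

select	count(*) old_rows
from	pt_range
;

select
	partition_name, subpartition_name, num_rows
from
	user_tab_subpartitions
where
	table_name = 'PT_RANGE_HASH'
order by
	partition_name, subpartition_position
;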

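As a starting point, here’s a very rough sketch of the patching. It uses dbms_stats.get_table_stats and set_table_stats to copy the basic figures (row count, block count, average row length) from each subpartition up to its parent partition, then from the old table to the new table at the table level. It deliberately ignores column stats, histograms and index stats, which would all need similar treatment before the new table was fit for the optimizer:

declare
	m_numrows	number;
	m_numblks	number;
	m_avgrlen	number;
begin
	for r in (
		select	partition_name, subpartition_name
		from	user_tab_subpartitions
		where	table_name = 'PT_RANGE_HASH'
	) loop

--		Read the subpartition stats that arrived with the exchange ...
		dbms_stats.get_table_stats(
			ownname		=> user,
			tabname		=> 'PT_RANGE_HASH',
			partname	=> r.subpartition_name,
			numrows		=> m_numrows,
			numblks		=> m_numblks,
			avgrlen		=> m_avgrlen
		);

--		... and write them one level up, to the range partition
		dbms_stats.set_table_stats(
			ownname		=> user,
			tabname		=> 'PT_RANGE_HASH',
			partname	=> r.partition_name,
			numrows		=> m_numrows,
			numblks		=> m_numblks,
			avgrlen		=> m_avgrlen
		);

	end loop;

--	Copy the old table-level stats to the new table
	dbms_stats.get_table_stats(
		ownname		=> user,
		tabname		=> 'PT_RANGE',
		numrows		=> m_numrows,
		numblks		=> m_numblks,
		avgrlen		=> m_avgrlen
	);

	dbms_stats.set_table_stats(
		ownname		=> user,
		tabname		=> 'PT_RANGE_HASH',
		numrows		=> m_numrows,
		numblks		=> m_numblks,
		avgrlen		=> m_avgrlen
	);

end;
/
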
Now we can change the default behaviour of the new table so that future partitions will be created with 16 hash subpartitions:

alter table pt_range_hash
	set subpartition template 16
;

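To confirm that the template has taken effect you could add one more partition and count its subpartitions – the partition name p600 below is just an arbitrary choice for the demonstration. The two exchanged partitions keep their single subpartition each; the new partition should appear with 16:

alter table pt_range_hash
	add partition p600 values less than (600)
;

select	partition_name, count(*) subpartition_count
from	user_tab_subpartitions
where	table_name = 'PT_RANGE_HASH'
group by
	partition_name
order by
	partition_name
;
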
NOTE: I did say that this is a model that you shouldn’t follow in a production system. The main reason is that if something goes wrong in the loop you could end up with partitions switched into the wrong place in the wrong table. (And you definitely don’t want to drop any objects until you’ve checked them all!) Realistically you would probably transfer one partition at a time and check the results after each transfer. Bear in mind, also, that this was just a little hack put together very quickly: even though the general strategy is sound I may have overlooked some critical detail and, obviously, there are some variations on the theme (IOTs, for example) where you would need to work out some code changes. There’s also an implicit assumption that the only indexes on the table are local indexes – if there are any global, or globally partitioned, indexes then the code has to be modified to allow for them, and the time for the various exchanges will be affected by the choice of index maintenance options.
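
As a quick pre-flight check for that last point, a query along the following lines (just a sketch) would list any indexes on the source table that aren’t locally partitioned – anything it reports means the simple loop above isn’t good enough as it stands:

select
	ui.index_name,
	nvl(upi.locality, 'GLOBAL (non-partitioned)')	locality
from
	user_indexes		ui
left join
	user_part_indexes	upi
on
	upi.index_name = ui.index_name
where
	ui.table_name = 'PT_RANGE'
and	nvl(upi.locality, 'GLOBAL') != 'LOCAL'
;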