Last Modified

Have you ever wondered if there’s a way to see when the data in a table was last modified? Depending on what you’re really trying to achieve, how accurate you want the answer to be, what resources you’re prepared to use, and how you’ve configured your database, you may be able to get an answer that’s good enough very easily.

If all you want is a rough indication that the table hasn’t changed over the last few days, or weeks, or even months, you may be able to run a simple, but potentially brutal, query against the table to find out. Here’s an example that gets there in three steps – using the table sys.link$ as a target. Steps 1 and 2 are for explanatory purposes only.


select
	ora_rowscn
from
	link$
;

select
	first_change#, first_time
from
	v$log_history
order by
	first_change#;

select
	min(first_time)
from
	v$log_history
where
	first_change# >= (
		select max(ora_rowscn) from link$
	)
;

The pseudo-column ora_rowscn is an upper bound for the SCN at which the last change to a row committed. If you have enabled rowdependencies this will generally be the actual commit SCN for the row, otherwise Oracle has various algorithms for working out the largest SCN available that is not less than the SCN at which the transaction committed. If all you’re interested in is an indication of when the table last changed, even this upper bound SCN may be good enough.
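For reference, here’s a sketch (with a hypothetical table name) of creating a table with row-level dependency tracking, so that each row carries its own commit SCN. Note that rowdependencies can only be set at table creation time, and costs an extra 6 bytes per row:

```sql
create table t_audit (
	id	number,
	val	varchar2(30)
)
rowdependencies
;
```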

Step 2 is there to remind you that v$log_history records the SCN and timestamp for the first change in each log file. This gives us a very crude conversion between SCNs and timestamps.

The final step gives us the answer we want. We find the approximate SCN of the most recent change to the table, and then report the timestamp of the first log file that started at a higher SCN.
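If the most recent change is recent enough to fall inside the SCN-to-time mapping that Oracle maintains (typically only a few days), you may be able to skip the v$log_history step and use the built-in scn_to_timestamp() function directly – a sketch, which will raise an error (ORA-08181) if the SCN has aged out of the mapping:

```sql
select
	scn_to_timestamp(max(ora_rowscn)) last_modified
from
	link$
;
```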

Quiz Night.

I was on a customer site recently where I needed to add a NOT NULL constraint to a table of 200 million rows – without taking any downtime. It’s not difficult (provided you are happy with a check constraint rather than a column definition).

alter table t1
	add constraint t1_ck_colX_nn check (colX is not null)
	enable novalidate
;

The first step creates the constraint and enables it – but doesn’t validate it. This means that future data (and changes) will obey the constraint, but there may be illegal data already in the table that will not be checked. You will have an interruption to service doing this, as your session will wait to lock the table in share mode (mode 4) to add the constraint – so will be blocked by current update transactions, and will block new update transactions. In a typical OLTP system this should result in just a brief pause.

The second step validates the constraint, which needs a slightly more complex piece of code – perhaps something like the following:

declare
	resource_busy exception;
	pragma EXCEPTION_INIT(resource_busy, -54);
begin
	loop
		begin
			execute immediate
			'alter table t1 modify constraint t1_ck_colX_nn validate';
			dbms_output.put_line('Succeeded');
			exit;
		exception
			when resource_busy then
				dbms_output.put_line('Failed');
		end;
		dbms_lock.sleep(0.01);
	end loop;
end;
/

This code tries to validate the constraint and goes into a loop, sleeping for 1/100 second, if it hits the “resource busy” error. Unlike the call to add the constraint, the call to validate it doesn’t wait for a lock – it requests an immediate lock, so it’s going to fail immediately if there are any active transactions on the table. Once it has the lock it drops it, so you’re not going to shut your users out while the validation takes place.
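From 11g onwards (assuming you’re on a version that supports it) you may be able to avoid the loop altogether: the ddl_lock_timeout session parameter lets the DDL itself wait up to a specified number of seconds for the lock instead of failing immediately. A sketch:

```sql
-- Assumption: 11g or later, where ddl_lock_timeout is available.
-- The DDL will now wait up to 10 seconds for the lock before
-- raising ORA-00054 rather than failing on the first attempt.
alter session set ddl_lock_timeout = 10;

alter table t1 modify constraint t1_ck_colX_nn validate;
```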

Index rebuilds

A couple of years ago I wrote about a poster on the OTN db forum who was puzzled by the fact that when he started rebuilding tables they got bigger. (Bad luck, sometimes that’s what happens!)

A few days ago a related question appeared: I rebuilt some indexes and my query got slower. (Bad luck, sometimes that’s what happens – again!)

If you rebuild an index it’s physically different and its statistics are different. Plans can change and go slower because the index stats look sufficiently different; plans can stay the same and go slower because the index is physically different. 

I’ve added a couple of comments to the thread – there may still be some further mileage in it.

Cardinality One

I think anyone who has read Wolfgang Breitling’s material about the optimizer will be familiar with the concept of Cardinality Feedback and one particular detail that when Oracle gets a cardinality estimate of one for a “driving” table then there’s a good chance that the execution plan will go wrong. (That’s not a rule, by the way, just a fairly common observation after things have gone wrong.)

A recent note on OTN reminded me of a particular scenario where this specific problem can occur. It’s not particularly common, but it may hit people who are building data warehouses from multiple different sources. We start with an unlikely looking data set and very simple query:

drop table t1;

create table t1 as
select
	rownum id1,
	rownum id2
from
	all_objects
where
	rownum <= 10000
;

execute dbms_stats.gather_table_stats(user,'t1');

set autotrace traceonly

select
	count(*)
from
	t1
where
	id1 = id2
;

What do you think Oracle’s estimated cardinality will be for this predicate? We know, because we saw the data being built, that we’re going to identify 10,000 rows. But the optimizer doesn’t see it that way – check line 2 of the execution plan. The optimizer thinks it will find just one row:
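The arithmetic behind that estimate of one: for a single-table predicate of the form colA = colB the optimizer typically takes the selectivity as 1/greatest(num_distinct(colA), num_distinct(colB)), so with 10,000 distinct values in each column the estimate is 10,000 * 1/10,000 = 1. A sketch of that calculation against the dictionary statistics (using the table and column names from the example above):

```sql
select
	round(
		t.num_rows / greatest(c1.num_distinct, c2.num_distinct)
	)	estimated_cardinality
from
	user_tables	t,
	user_tab_cols	c1,
	user_tab_cols	c2
where
	t.table_name	= 'T1'
and	c1.table_name	= 'T1' and c1.column_name = 'ID1'
and	c2.table_name	= 'T1' and c2.column_name = 'ID2'
;
```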

Index Space

I’ve just been reminded of a thread on OTN which turned into a Q&A about index space usage and various related topics. On re-reading it, I decided it was too good to waste in the backwaters of OTN, so here’s a link to it.

Index Space Utilization.

Joins – NLJ

This is part one of my thesis that “all joins are nested loop joins – it’s just the startup overheads that vary”; there will be a note on “Joins – HJ” and “Joins – MJ” to follow. In some ways, the claim is trivially obvious – a join simply takes two row sources and compares [...]

Fragmentation 4

This note is part four of a four-part series, and covers Index fragmentation. The whole series is as follows:

Introduction
Disk and Tablespace Fragmentation
Table Fragmentation
Index Fragmentation – this bit

4. Index “fragmentation”. The multiple extent and ASSM “fragmentation” that I described in the previous note about table fragmentation applies equally well to indexes, of course, [...]

Fragmentation 3

This note is part three of a four-part series, and covers Table fragmentation. The whole series is as follows:

Introduction
Disk and Tablespace Fragmentation
Table Fragmentation – this bit
Index Fragmentation

3. Table “fragmentation”. In the introduction we discussed one type of table fragmentation that doesn’t (usually) matter – the fragmentation of a table into [...]

Fragmentation 2

This note is part two of a four-part series, and covers Disk and Tablespace fragmentation. The whole series is as follows:

Introduction
Disk and Tablespace Fragmentation – this bit
Table Fragmentation
Index Fragmentation

2.1 Disk “fragmentation”. Tablespaces are made up of files, and files are stored on discs – which are often “logical volumes” rather than [...]

Changing UNDO

From time to time people run into problems with UNDO tablespaces that have grown much larger than expected (perhaps due to a rogue process doing far too much work) and refuse to shrink. The workaround is to create a new undo tablespace and switch the instance to use it – but even this simple procedure [...]