Whitepaper Announcement: Benchmarking Amazon Aurora.

This is just a quick blog post to inform readers of a good paper that offers how-to information for benchmarking Amazon Aurora PostgreSQL. It is mostly about sysbench, which is used to test transactional capabilities.

As an aside, many readers may have heard that I’m porting SLOB to PostgreSQL and will make it available in May 2018. It’ll be called “pgio” and is an implementation of the SLOB Method as described in the SLOB documentation. Adding pgio to tools like sysbench rounds out the toolkit for testing platform readiness for your PostgreSQL applications.
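
For readers who have not driven PostgreSQL with sysbench before, here is a minimal sketch of the kind of invocation involved, assuming sysbench 1.0+ and its bundled oltp_read_write workload; the connection settings and sizing below are placeholders, not values taken from the paper.

# prepare the test schema (placeholder connection settings and sizes)
sysbench oltp_read_write \
  --db-driver=pgsql \
  --pgsql-host=127.0.0.1 --pgsql-port=5432 \
  --pgsql-user=sbtest --pgsql-password=sbtest --pgsql-db=sbtest \
  --tables=16 --table-size=1000000 \
  prepare

# run the transactional test for 5 minutes with 32 threads
sysbench oltp_read_write \
  --db-driver=pgsql \
  --pgsql-host=127.0.0.1 --pgsql-port=5432 \
  --pgsql-user=sbtest --pgsql-password=sbtest --pgsql-db=sbtest \
  --tables=16 --table-size=1000000 \
  --threads=32 --time=300 \
  run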

To get a copy of the benchmarking paper, click here.

Docker: efficiently building images for large software

I see increasing demand to build a Docker image for the Oracle Database. But the Oracle installation process does not really fit the Docker way of installing by layers: you need to unzip the distribution, install from it to the Oracle Home, remove the things that are not needed, strip the binaries,… Before addressing those specific issues, here are the little tests I’ve done to show how the build layers increase the size of the image.

I’m starting with an empty docker repository on XFS filesystem:

[root@VM121 docker]# df -hT /var/lib/docker
Filesystem Type Size Used Avail Use% Mounted on
/dev/sdc xfs 80G 33M 80G 1% /var/lib/docker
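
As a minimal, hypothetical sketch of the layering problem (the file names are placeholders, not the actual Oracle installation steps): every COPY and RUN instruction creates a layer, and bytes written in an earlier layer are still shipped with the image even when a later layer removes the files.

FROM oraclelinux:7-slim
COPY big.zip /tmp/big.zip            # layer 1: the zip is stored in the image
RUN unzip /tmp/big.zip -d /opt/app   # layer 2: the extracted files are stored again
RUN rm /tmp/big.zip                  # layer 3: the zip is hidden, but its bytes stay in layer 1

# the size added by each build step can be checked with:
# docker image history myimage --format 'table {{.Size}}\t{{.CreatedBy}}'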

Docker-CE on Oracle Enterprise Linux 7

Here is how I install the latest Docker version on Oracle Linux 7. You will find several blog posts about it which all install ‘docker-engine’. But things move fast in this agile world and the docker package name has changed. The Community Edition is now ‘docker-ce’, and that is the one you want in order to run the latest version.

I’m on OEL 7.4 but this should also work on RHEL 7:
[root@VM188 yum]# cat /etc/oracle-release
Oracle Linux Server release 7.4
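
The excerpt stops before the install commands; as a sketch of the usual approach (assuming the CentOS 7 repository from download.docker.com, which is how docker-ce is commonly installed on OEL 7):

# add the docker-ce repository and install the Community Edition
yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce

# start the daemon and have it start at boot
systemctl start docker
systemctl enable docker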

Lock Types

Every now and again I have to check what a particular lock (or enqueue) type is for and what the associated parameter values represent. This often means I have to think about the names of a couple of views and a collection of columns, then create a few column formats to make the output readable (though sometimes I can take advantage of the “print_table()” procedure that Tom Kyte published a long time ago). It only takes a little time to get the code right, but it’s a nuisance when I’m in a hurry, so I’ve just scribbled out a few lines of a script that takes a lock type as an input parameter and reports all the information I want.
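
The script itself is in the post; as a rough idea of the kind of lookup involved (this is a generic sketch against v$lock_type, not the author’s script):

define m_type = 'TM'

column name        format a24
column id1_tag     format a20
column id2_tag     format a20
column description format a60 wrap

select  type, name, id1_tag, id2_tag, is_user, description
from    v$lock_type
where   type = '&m_type'
/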

A look into oracle redo, part 8: generate redo

This blogpost looks at the very start of oracle redo: the generation of it. In order to do that, I start off with a very simple table and look at the redo generation part. I guess the regular readers of this blogpost series understand that redo generation is closely connected with making changes to blocks. This post therefore is not only about redo generation, but also about the technical implementation of block changes.

I created a simple table (create table test (f1 varchar2(10))) with no index to make the execution as simple as possible, and simply insert rows (insert into test values (‘a’)). It could be argued that not having an index makes this not the most real-life scenario, and that might be right. However, the goal here is to make the execution as simple as possible.
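
For reference, the setup described above as plain SQL:

create table test (f1 varchar2(10));
insert into test values ('a');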

I then looked at the execution of the SQL, and created an overview of the relevant functions that are executed in my session:

A look into oracle redo, part 7: adaptive log file sync

This is the seventh part of a blog series about oracle redo.

Adaptive log file sync is a feature that probably came with Oracle version 11.2. ‘Probably’ because I looked at the undocumented parameters of Oracle version 11.1 and did not see any of the ‘_adaptive_log_file_sync*’ parameters. It was actually turned off by default in versions 11.2.0.1 and 11.2.0.2, and has been turned on by default since version 11.2.0.3.
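
Checking for those underscore parameters can be done with the usual x$ksppi/x$ksppcv query (run as SYS; a generic sketch, not taken from the post):

select  p.ksppinm  parameter,
        v.ksppstvl value
from    x$ksppi p, x$ksppcv v
where   p.indx = v.indx
and     p.ksppinm like '\_adaptive_log_file_sync%' escape '\'
/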

Reference Costs

The partitioning option “partition by reference” is a very convenient option which keeps acquiring more cute little features, such as cascading truncates and cascading splits, as time passes – but what does it cost, and would you use it if you didn’t really need to?
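
For readers who haven’t used it, a minimal sketch of the syntax (hypothetical parent/child tables, not the test case from the post):

create table orders (
        order_id   number primary key,
        order_date date   not null
)
partition by range (order_date) (
        partition p2018 values less than (date '2019-01-01')
);

create table order_lines (
        line_id    number primary key,
        order_id   number not null,
        constraint ol_fk_ord foreign key (order_id) references orders (order_id)
)
partition by reference (ol_fk_ord);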

When reference partitioning came into existence many years ago, I had already seen several performance disasters created by people’s enthusiasm for surrogate keys and the difficulties this introduced for partition elimination; so my first thought was that this was a mechanism that would have a hugely beneficial effect on systems which (in 20:20 – or 6:6 if you’re European – hindsight) had been badly designed and would otherwise need a lot of re-engineering to use partitioning effectively.

Result cache invalidation caused by DML locks

In a previous post about Result Cache, I was using ‘set autotrace on’ in my session to see when the result cache hit saves the logical reads. I got a comment, by e-mail and with full test case, mentioning that the invalidation I had when doing some DML does not occur when autotrace is set to off. I reproduced the problem, on 18c because I use my cloud services to do my tests, but it works the same way in 12c.

I’ll focus here on DML which does not modify any rows from the table the result depends on, because this is the case that depends on the session transaction context. When rows are modified, the result is always invalidated.
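
As a rough sketch of the kind of check involved (hypothetical table name; the full test case is in the post), the state of the cached result can be watched in v$result_cache_objects around the DML:

-- populate the result cache
select /*+ result_cache */ count(*) from demo;

-- DML that modifies no rows of DEMO, but still takes a TM lock on it
delete from demo where 1 = 0;
commit;

-- check whether the cached result is still Published or has become Invalid
select id, status, name
from   v$result_cache_objects
where  type = 'Result';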

GDPR Solutions in the Day of Automation

There’s something to be said for having job security, and many of the solutions that I see offered for RDBMS challenges offer just that. With compliance with the EU’s GDPR (General Data Protection Regulation) just around the corner (mark your calendar: May 25, 2018), you’d think we’d all be scrambling for a simpler solution to discovering and addressing all that GDPR data.

Quick refresher for those of you going, “What is GDPR?”

Data security is a known focus of GDPR when you talk to folks, but it’s about much more than just security. It’s about extended rights of the individual in the EU. There are four areas that, as a DBA, you really need to concern yourself with:

Keeping Intervals

I’ve recently been reminded of a blog post I wrote a couple of years ago that discussed the issue of running into the hard limit of 2^20 -1 as the number of segments for a (composite) partitioned table – a problem that could arise in a relatively short time if you used a large number of hash subpartitions in an interval/hash composite partitioned table (you get about 2 years and 10 months of daily partitions at 1,024 subpartitions per day, for example).
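
The arithmetic behind that figure: the hard limit is 2^20 - 1 = 1,048,575 segments, so at 1,024 hash subpartitions per daily interval partition you can create at most floor(1,048,575 / 1,024) = 1,023 daily partitions, i.e. a little under 2 years and 10 months. A minimal sketch of an interval/hash definition of the kind described (hypothetical table, not the one from the original post):

create table t (
        dt      date,
        n       number
)
partition by range (dt) interval (numtodsinterval(1,'DAY'))
subpartition by hash (n) subpartitions 1024
(
        partition p_start values less than (date '2018-01-01')
);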