Descending Problem

I’ve written in the past about oddities with descending indexes (here, here, and here, for example) but I’ve just come across a case where I may have to introduce a descending index that really shouldn’t need to exist. As so often happens it’s at the boundary where two Oracle features collide. I have a table that handles data for a large number of customers, who record a reasonable number of transactions per year, and I have a query that displays the most recent transactions for a customer.
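As a minimal sketch of the shape involved (the table, column, and bind names below are my own, not from the post), the query pairs a customer filter with a descending sort and a row limit, and an ascending index can normally be range-scanned in descending order to serve it:

-- Hypothetical names throughout; only the general shape comes from the post.
create index txn_cust_date_ix on transactions(cust_id, trans_date);

select *
from   transactions
where  cust_id = :cust_id
order by trans_date desc
fetch first 10 rows only;

-- The oddity described is being pushed to create the descending variant anyway:
create index txn_cust_date_desc_ix on transactions(cust_id, trans_date desc);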

Upcoming Events - #SQLSatCle and @RMOUG_ORG #TD19

As I live in an RV and travel both as part of my work and to attend events, it often works out that I fly out of one location and fly back into another. This week will be no different.

Since we travel between cities on the weekend, it’s bound to happen that I leave for a SQL Saturday on a Friday from one city and arrive back in a different city on a Sunday.

This Friday I will fly out of the New Orleans, LA airport to speak on Power BI at the SQL Saturday Cleveland event in Ohio, then fly back in on Sunday to a small airport outside of Pensacola, FL.

Announcement: “Oracle Indexing Internals and Best Practices Seminar” – Berlin 8-9 May 2019 !!

I’m very excited to announce I’ll be running my highly acclaimed “Oracle Indexing Internals and Best Practices” seminar in conjunction with the German Oracle User Group (DOAG) in Berlin, Germany on 8-9 May 2019. The venue will be the DOAG Offices in Berlin (DOAG Office, 6th Floor, Tempelhofer Weg 64, 12347 Berlin). Because of venue […]

Azure Automation of A-to-Z, Part I

DevOps deployments and automation have numerous tools at their disposal, but most often, scripting is required. Although I’m a Microsoft Azure fanatic, I am also a strong advocate of Linux, and with my two decades on Unix, I strongly prefer BASH over PoSH. I find the maturity of BASH and KSH highly attractive, and with my experience, I’m simply more skilled with shells native to the Linux OS.

Where to check Data Guard gap?

At work, we had a discussion with well-known colleagues, Luca Canali and Ludovico Caldara, about where to check that Data Guard recovery works as expected, without a gap. Several views can be queried, depending on the context. Here are a few comments about them.

v$database

This is my preferred one, because it relies on the actual state of the database, whatever the recovery process is:

SQL> select scn_to_timestamp(current_scn) from v$database;

SCN_TO_TIMESTAMP(CURRENT_SCN)
----------------------------------------------------------
22-JAN-19 03.08.32.000000000 PM

This reads the current System Change Number (DICUR_SCN from X$KCCDI) and maps it to a timestamp (using the SMON_SCN_TIME mapping table).
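As an illustration of how this can be used to check the gap (my own sketch, not from the post): run the same query on the primary and on a standby that is open read-only, and compare the two timestamps.

-- On the primary:
select scn_to_timestamp(current_scn) as primary_time from v$database;

-- On a standby opened read-only, current_scn reflects the redo applied so
-- far, so the difference between the two timestamps approximates the gap:
select scn_to_timestamp(current_scn) as standby_time from v$database;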

How Not to Perform a Difficult Update in SQL Server/Azure

I learned a long time ago that the quickest way to do something is to not do it at all.

A friend of mine asked if I could review an update statement that was confounding their group as they looked for ways to optimize it.

The table has just short of 50 columns and a considerable number of indexes. The column in question has a datatype of NVARCHAR(8) and consists of some numbers, some combinations of letters and numbers, etc.

The process takes 7 ½ hours to run this update, as it parses through 100K rows at a time. There is one index that includes the CN1 column, but no index on just the CN1 column. This is what the update statement looks like: […]
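The statement itself is cut off in this excerpt. As a purely hypothetical sketch of the batched shape described (only the CN1 column, its NVARCHAR(8) type, and the 100K batch size come from the text; the table name and the SET/WHERE logic are placeholders), such a loop in T-SQL often looks something like this:

DECLARE @rows INT = 1;

WHILE @rows > 0
BEGIN
    -- Placeholder transformation: left-pad numeric-looking values to 8 chars.
    UPDATE TOP (100000) dbo.SomeTable
    SET    CN1 = RIGHT(N'00000000' + CN1, 8)
    WHERE  CN1 LIKE N'[0-9]%'
      AND  LEN(CN1) < 8;   -- only touch rows that still need it, so the loop ends

    SET @rows = @@ROWCOUNT;
END;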

Oracle numbers in K/M/G/T/P/E

Oracle has been very well instrumented for decades, since a time when measuring memory in bytes was fine. But today, we spend a lot of time converting bytes to KB, GB, and TB to read them easily. I would love to see a human-readable format for TO_CHAR, but there isn’t one. Here is a workaround that doesn’t require creating a new function.

DBMS_XPLAN does this when displaying execution plans, and we can access the functions it uses internally. The metrics can be plain numbers, in which case Kilo, Mega, and Giga apply to powers of 1000. Or they can be a size in bytes, where we prefer powers of 1024. Or they can be a time in seconds, where we use base 60. And so we have 3 sets of functions:
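A quick sketch of calling them directly (I am going from memory on the exact names in the DBMS_XPLAN package spec, so treat them as assumptions to verify on your version):

select dbms_xplan.format_number(2500000)  as pow_1000,  -- Kilo/Mega/Giga of 1000
       dbms_xplan.format_size(2500000)    as pow_1024,  -- KB/MB/GB of 1024
       dbms_xplan.format_time_s(2500000)  as base_60    -- seconds, base 60
from   dual;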

Oracle Linux 7 and a new YUM configuration since January 2019

For quite some time I used to have a configuration file /etc/yum.repos.d/public-yum-ol7.repo managing my package sources in lab VMs. Today I noticed that this configuration file is deprecated and has been replaced by a new model. This is explained in the Oracle Linux 7 administrator guide and a few other sources that I had totally missed. I thought I’d show you the full story in this post before I go and change all my Ansible scripts :)

State of play before

To repeat the scenario I just went through, I created a new machine, server3, based on the stock Oracle Linux 7.6 image. After its initial boot I logged in to the console to perform a yum upgrade.

Building your own local Oracle Linux 7 Vagrant base box

I have been talking about Vagrant for a long time and use it extensively on my Ubuntu-powered laptop. I am using Oracle Linux 7.6 for most of my lab builds, and I like to have specific tools such as collectl, perf, and many others available when the VM boots. I also like to stay in control of things, especially when it comes to downloading otherwise unknown things from the Internet, so I decided to learn how to create a Vagrant box myself.