Decision Support Benchmark TPC-H won by #Exasol

Oops, we did it again

Hi Tuyen, as this removes a lot of files from the Oracle Home, many features will not work. Only features that rely solely on the basic binaries and procedures will keep working.

Assumptions

Over the last few days I’ve been tweeting little extracts from Practical Oracle 8i, and one of the tweets contained the following quote:

This led to the question:

HIGH_VALUE (and other LONG columns) revisited

Just a quick post to make sure people don’t get caught out by a recent “trick” I saw in an AskTOM question for extracting the HIGH_VALUE out of its LONG storage in the dictionary into a more appropriate datatype. A reviewer (I’m sure in good faith) posted the following mechanism to extract the HIGH_VALUE
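For context, one commonly seen pattern (a sketch of the general technique, not the reviewer’s trick from the thread) is to read the LONG through PL/SQL, where it can be fetched into a LONG variable and returned as a VARCHAR2:

```sql
-- Hypothetical helper: fetch HIGH_VALUE for one partition via PL/SQL,
-- where a LONG column can be selected into a LONG variable.
-- Truncates at 4000 characters, which is ample for HIGH_VALUE.
create or replace function high_value_of(
  p_table_name     in varchar2,
  p_partition_name in varchar2
) return varchar2
as
  v_high long;
begin
  select high_value
  into   v_high
  from   user_tab_partitions
  where  table_name     = p_table_name
  and    partition_name = p_partition_name;
  return substr(v_high, 1, 4000);
end;
/
```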

(In his best Arnie voice) I’M BACK!!!

Yes, after just under 7 months leave without pay due to the chronic pain issues you may recall I was suffering from last year, I am finally pain-free (relatively) and starting back at work in a graduated fashion. Last week, I only worked two days (most of which was deleting email that accumulated while I was gone – around 95,400 of them!). Today, I hope to work three days (if my body holds up to it) and then gradually increase back to the full 5 days, but that really depends on how my body reacts!

For those of you that aren’t interested in the gory medical details, you can stop reading here.

Sampling pg_stat_statements based on the active sessions and their associated queryid

Introduction

Now that we can sample and record the active sessions and their associated queryid with the pg_active_session_history view (see this blog post), it would be interesting to get insight into the query statistics at the time the sessions were active.

PostgreSQL exposes query statistics through the pg_stat_statements view. We could query the pg_active_session_history and pg_stat_statements views and join them on the queryid field, but the query statistics would be:
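The naive join itself can be sketched as follows (the pg_active_session_history column names follow the pgsentinel extension and should be treated as assumptions):

```sql
-- Naive join: for each sampled active session, pull the matching
-- pg_stat_statements counters. Note these counters are cumulative
-- since the last reset, not "as of" the sample time.
select ash.ash_time,
       ash.queryid,
       s.calls,
       s.total_time
from   pg_active_session_history ash
join   pg_stat_statements s
on     s.queryid = ash.queryid;
```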

strace -k (built with libunwind)

prints the stack trace with the system calls

PostgreSQL is open source, and you may think it is not necessary to trace the system calls because we can read the source code. But even so, strace is a really nice tool for troubleshooting a running process.
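As a minimal sketch of what `-k` adds (assuming your strace binary was built with stack-unwinding support, e.g. libunwind or libdw):

```shell
# Trace only the write() system calls of a short-lived command and,
# with -k, print the userspace stack that led to each call.
strace -k -e trace=write -o /tmp/strace_demo.out ls / > /dev/null
# Each write() line in the output is followed by a stack of "> " frames.
head -n 15 /tmp/strace_demo.out
```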

https://postgreslondon.org/speaker/dmitrii-dolgov/

A little disclaimer here: attaching strace to a running process may hang it. Do not use it in production, except when this (small) risk is an acceptable price for troubleshooting a critical problem.

Getting rid of annoying, repetitive messages in /var/log/messages

The primary source of information about any change or issue on a Linux system is the /var/log/messages file. I am often annoyed when a Linux system is set up in such a way that certain messages are written to syslog at high frequency, swamping the messages file with unimportant information. This makes it very hard to spot the important information, because you have to skip through a lot of lines before you find it, especially if you do not know for sure that there is a message in the first place.

Please mind this blog post was created on a CentOS 7 server, which uses rsyslog.

There are a couple of ways to manage this. The standard syslog way of managing this is the following, which can be found in /etc/rsyslog.conf:
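As an illustration of the kind of property-based filter rsyslog supports (the message text below is a placeholder, not the snippet from this post), a rule like this discards matching messages before they reach /var/log/messages:

```
# /etc/rsyslog.d/drop-noise.conf -- hypothetical example: drop any
# message whose text contains this (placeholder) substring
:msg, contains, "placeholder noisy message text" stop
```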

DB links

A recent thread on the ODC SQL &amp; PL/SQL forum raised the question of how data moves across a database link – is it row by row, array based, or something else? It’s a good question, as it’s fairly common knowledge that distributed queries can be very much slower than the same query executed on an equivalent set of local tables, so it’s worth having a little poke around to see if there’s anything in the way the data moves across the link that could be tweaked to improve performance.