Oracle memory troubleshooting using analysis on heapdumps, part 2

In the article ‘Oracle memory troubleshooting using analysis on heapdumps’ I introduced heap_analyze.awk.

The tool exists because I use it myself, and in doing so I ran into additional things I wanted it to do. The additions are significant enough that I decided to write another blogpost to introduce the new features.

1. Percentages
In order to get an idea of the relative size of the summarised topic, I added a percentage. For example:

Oracle memory troubleshooting using analysis on heapdumps

This blogpost is about analysing Oracle heap dumps. It is an extension to earlier work, Tanel Poder’s heap dump analyzer. So hat tip to Tanel: he has done the hard work; I merely increased the presentation options. The heap analyser script that I wrote analyses Oracle heapdumps from the trace file that the dump was written to by the Oracle database. Because the heap dump representation is the same for PGA and SGA memory, it works on both. The reason for this is that memory management is done by the same memory manager, commonly called ‘kgh’ (kernel generic heap) managed memory.

Please mind that for PGA analysis, not all memory is managed by the kgh memory manager. For example, memory used for networking (sqlnet) is allocated entirely outside of the kgh memory manager.
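
As an illustration of where such a heap dump comes from (my own sketch, not part of the original post), a heapdump trace can be generated with oradebug from a sysdba SQL*Plus session; to the best of my knowledge level 1 dumps the top-level PGA heap and level 2 the SGA heap:

-- sketch: write a heapdump of the current process to its trace file (sysdba)
oradebug setmypid
oradebug dump heapdump 1
-- print the name of the trace file the dump was written to
oradebug tracefile_name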

A look into oracle redo, part 11: log writer worker processes

Starting from Oracle 12, a default-configured database runs more log writer processes than just the well-known ‘LGWR’ process; these are the ‘LGnn’ processes:

$ ps -ef | grep test | grep lg
oracle   18048     1  0 12:50 ?        00:00:13 ora_lgwr_test
oracle   18052     1  0 12:50 ?        00:00:06 ora_lg00_test
oracle   18056     1  0 12:50 ?        00:00:00 ora_lg01_test

These are the log writer worker processes, whose minimum number is equal to the number of public redo strands. Worker processes are assigned to a group, and each group is assigned to a public redo strand. The number of worker processes in a group depends on the undocumented parameter “_max_log_write_parallelism”, which is one by default.
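
The same processes can also be seen from inside the database; a minimal sketch (mine, not from the original post):

-- list the log writer and its worker processes by process name
-- (the undocumented parameter itself can be inspected via x$ksppi/x$ksppcv as SYS)
select pname, spid
from   v$process
where  pname like 'LG%';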

A look into oracle redo: index and overview

I have gotten some requests to provide an overview of the redo series of blogposts I am currently running. Here it is:

A look into oracle redo, part 10: commit_wait and commit_logging

The redo series would not be complete without writing about changing the behaviour of commit. There are two ways to change commit behaviour:

1. Changing whether the session waits to get notified by the logwriter that the generated redo has been persisted. The default is ‘wait’. This can be set to ‘nowait’.
2. Changing the way the logwriter handles generated redo. The default is ‘immediate’. This can be set to ‘batch’.

There are actually three ways these changes can be made:
1. As argument of the commit statement: ‘commit’ can be written as ‘commit write wait immediate’ (statement level).
2. As a system level setting. When an explicit commit mode is omitted from the commit command, the settings from the parameters commit_wait (default: wait) and commit_logging (default: immediate) are used.
3. As a session level setting. When an explicit commit mode is omitted and either commit_wait or commit_logging is set at the session level, the session setting overrides the system level setting (all three levels are sketched below).
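
As a concrete sketch of the three levels (my own illustration, not part of the original post):

-- 1. statement level: state the commit mode explicitly
commit write wait immediate;
commit write nowait batch;

-- 2. system level: used by commits that do not specify a mode
alter system set commit_wait = 'WAIT' scope = both;
alter system set commit_logging = 'IMMEDIATE' scope = both;

-- 3. session level: overrides the system level for this session only
alter session set commit_wait = 'NOWAIT';
alter session set commit_logging = 'BATCH';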

A look into oracle redo, part 9a: commit – concurrency considerations

During the investigation for my previous blogpost about what happens during a commit and when the data becomes available, I used breaks in gdb (GNU debugger) at various places in the execution of an insert and a commit, to see what is visible to other sessions during the various stages of the commit.

However, I did find something else, which is very logical but easily overlooked: at certain moments access to the table is blocked/serialised in order to let a session make changes to blocks belonging to the table, or to peripheral blocks like undo, for the sake of consistency. These are changes made at the physical layer of an Oracle segment; the logical model of Oracle says that writers don’t block readers.

A look into oracle redo, part 9: commit

The previous blogpost talked about a simple insert; this blogpost investigates what happens when the DML is committed. Of course this is done with regular commit settings, meaning they are not touched, which as far as I know means commit_logging is set to immediate and commit_wait is set to wait. The documentation says there is no default value, and the settings are empty in all parameter views. In my humble opinion, if you must change the commit settings in order to make your application perform acceptably with the database, something is severely wrong somewhere.
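
For instance, the empty defaults can be verified like this (a quick check of my own, not part of the original post):

-- both parameters show an empty value when left at their default
select name, value, isdefault
from   v$parameter
where  name in ('commit_wait', 'commit_logging');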

This blogpost works best if you have thoroughly gone through the previous post. I admit it’s a bit dry and theoretical, but you will appreciate the knowledge you gained there, because it directly applies to a commit.

First let’s look at the flow of functions for the commit:

A look into oracle redo, part 8: generate redo

This blogpost looks at the very start of oracle redo: its generation. In order to do that, I start off with a very simple table and look at the redo generation part. I guess the regular readers of this blogpost series understand that redo generation is closely connected with changes made to blocks. This post therefore is not only about redo generation, but also about the technical implementation of block changes.

I created a simple table (create table test (f1 varchar2(10))) with no index to make the execution as simple as possible, and simply inserted rows (insert into test values (‘a’)). It could be argued that not having an index is not the most real-life scenario, and this might be right. However, the goal here is to keep the execution as simple as possible.
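
Written out as a small script, the test case looks like this (the statements are taken from the post itself):

-- deliberately no index, to keep the execution path as simple as possible
create table test (f1 varchar2(10));
insert into test values ('a');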

I then looked at the execution of the SQL, and created an overview of the relevant functions that are executed in my session:

A look into oracle redo, part 7: adaptive log file sync

This is the seventh part of a blog series about oracle redo.

Adaptive log file sync is a feature that probably came with Oracle version 11.2. ‘Probably’ means that I looked at the undocumented parameters of Oracle version 11.1 and did not see any of the ‘_adaptive_log_file_sync*’ parameters. It was actually turned off by default in versions 11.2.0.1 and 11.2.0.2, and has been turned on by default since version 11.2.0.3.
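
As a sketch of how to check this on your own version (my illustration, not part of the original post; x$ tables require a SYS connection):

-- list the adaptive log file sync related undocumented parameters (run as SYS)
select i.ksppinm name, v.ksppstvl value
from   x$ksppi i, x$ksppcv v
where  i.indx = v.indx
and    i.ksppinm like '%adaptive_log_file_sync%';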