NOTE: Please see my next entry on Kernel NFS performance and the improvements that come with the latest Solaris.
==============
After experimenting with dNFS, it was time to do a comparison with the “old” way. I was a little surprised by the results, but I guess that really explains why Oracle decided to embed the NFS client into the database.
This experiment was designed to load up a machine, a T5240, with OLTP style transactions until no more CPU was available. The dataset was big enough to push about 36,000 read IOPS and 1,500 write IOPS during peak throughput. As you can see, dNFS performed well, which allowed the system to scale until the DB server CPU was fully utilized. Kernel NFS, on the other hand, throttles after 32 users and is unable to use the available CPU to scale transactional throughput.
A common measure for benchmarks is to figure out how many transactions per CPU are possible. Below, I plotted the CPU consumed for a particular transaction rate. This chart shows the total measured CPU (user+system) for a given TPS rate.
As expected, the transaction rate per CPU is greater with dNFS than with kNFS. Please do note that this is a T5240 machine, which has 128 threads or virtual CPUs. I don’t want to go into the semantics of sockets, cores, pipelines, and threads, but thought it was at least worth noting. Oracle sees a thread of a T5240 as a CPU, so that is what I used for this comparison.
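If you want to double-check how many CPUs Oracle itself sees on a box like this, a quick look at the cpu_count parameter does the trick (just an illustrative check, not output from the actual test run):

SQL> show parameter cpu_count
-- or, equivalently:
SQL> select value from v$parameter where name = 'cpu_count';
-- on this T5240, Oracle counts one CPU per hardware thread, i.e. 128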
When doing the OLTP style tests with a normal sized SGA, I was not able to fully utilize the 10gigE interface or the Sun 7410 storage. So, I decided to do a silly little micro benchmark with a really small SGA. This benchmark just does simple read-only queries that essentially result in a bunch of random 8k IO. I have included the output from the Fishworks analytics below for both kNFS and dNFS.
I was able to hit ~90K IOPS with 729MB/sec of throughput with just one 10gigE interface connected to the Sun 7410 unified storage. This is an excellent result with Oracle 11gR2 and dNFS for a random IO test… but there is still more bandwidth available. So, I decided to do a quick DSS style query to see if I could break the 1GB/sec barrier.
===dNFS===

SQL> select /*+ parallel(item,32) full(item) */ count(*) from item;

  COUNT(*)
----------
  40025111

Elapsed: 00:00:06.36

===kNFS===

SQL> select /*+ parallel(item,32) full(item) */ count(*) from item;

  COUNT(*)
----------
  40025111

Elapsed: 00:00:16.18
Excellent: with a simple scan I was able to do 1.14GB/sec with dNFS, more than doubling the throughput of kNFS.
I was running on a T5240 with Solaris 10 Update 8.
$ cat /etc/release
                      Solaris 10 10/09 s10s_u8wos_08a SPARC
          Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
                       Use is subject to license terms.
                          Assembled 16 September 2009
This machine has a built-in 10gigE interface which uses multiple threads to increase throughput. Out of the box, there is very little to tune as long as you are on Solaris 10 Update 8. I experimented with various settings, but found that only basic TCP settings were required.
# default TCP receive and transmit socket buffer sizes
ndd -set /dev/tcp tcp_recv_hiwat 400000
ndd -set /dev/tcp tcp_xmit_hiwat 400000
# maximum TCP buffer size
ndd -set /dev/tcp tcp_max_buf 2097152
# maximum TCP congestion window
ndd -set /dev/tcp tcp_cwnd_max 2097152
Finally, on the storage front, I was using the Sun Storage 7410 Unified Storage server as the NFS server for this test. This server was born out of the Fishworks project and is an excellent platform for deploying NFS based databases… watch out, NetApp.
dNFS wins hands down. Standard kernel NFS essentially allows only one client connection per “mount” point, so eventually we see data queued up at the mount point, which clips throughput far too soon. Direct NFS solves this problem by having each Oracle shadow process mount the device directly. Also, with dNFS, none of the usual tuning and mount point options are necessary: Oracle knows what options are most efficient for transferring blocks of data and configures the connection properly.
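If you want to confirm that the shadow processes really have established their own dNFS connections, the dNFS dictionary views make this easy to check (my own quick illustration, not part of the original benchmark):

-- NFS servers the instance talks to over dNFS, with the negotiated
-- maximum write/read transfer sizes (wtmax/rtmax)
select svrname, dirname, mntport, nfsport, wtmax, rtmax
  from v$dnfs_servers;

-- roughly one row per shadow-process connection to the storage server
select count(*) from v$dnfs_channels;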
When I began down this path of discovery, I was only using NFS attached storage because nothing else was available in our lab… and IO was not initially a huge part of the project at hand. Being a performance guy who benchmarks systems to squeeze out the last percentage point of performance, I was skeptical about NAS devices. Traditionally, NAS was limited by slow networks and clumsy SW stacks. But times change. Fast 10gigE networks and Fishworks storage combined with clever SW like Direct NFS really showed this old dog a new trick.
==============
When I start a new project, I like to check performance from as many layers as possible. This helps to verify that things are working as expected and helps me understand how the pieces fit together. In my recent work with dNFS and Oracle 11gR2, I started down the path of monitoring performance and was surprised to see that things are not always as they seem. This post will explore the various ways to monitor and verify performance when using dNFS with Oracle 11gR2 and Sun Open Storage “Fishworks”.
“iostat(1M)” is one of the most common tools to monitor IO. Normally, I can see activity on local devices as well as NFS mounts via iostat. But with dNFS, my device seems idle in the middle of a performance run.
bash-3.0$ iostat -xcn 5
cpu
us sy wt id
8 5 0 87
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 6.2 0.0 45.2 0.0 0.0 0.0 0.4 0 0 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 toromondo.west:/export/glennf
cpu
us sy wt id
7 5 0 89
extended device statistics
r/s w/s kr/s kw/s wait actv wsvc_t asvc_t %w %b device
0.0 57.9 0.0 435.8 0.0 0.0 0.0 0.5 0 3 c1t0d0
0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0 0 toromondo.west:/export/glennf
From the DB server perspective, I can’t see the IO. I wonder what the array looks like.
The analytics package available with Fishworks is the best way to verify performance with Sun Open Storage. This package is easy to use, and indeed I was quickly able to verify activity on the array.
There are 48,987 NFSv3 operations/sec and ~403MB/sec going through the nge13 interface. So, this array is cooking pretty good. Now, let’s take a peek at the network on the DB host.
nicstat is a wonderful tool developed by Brendan Gregg at Sun to show network performance. Nicstat really shows you the critical data for monitoring network speeds and feeds by displaying packet size, utilization, and rates for the various interfaces.
root@saemrmb9> nicstat 5
Time Int rKB/s wKB/s rPk/s wPk/s rAvs wAvs %Util Sat
15:32:11 nxge0 0.11 1.51 1.60 9.00 68.25 171.7 0.00 0.00
15:32:11 nxge1 392926 13382.1 95214.4 95161.8 4225.8 144.0 33.3 0.00
So, from the DB server point of view, we are transferring about 390MB/sec… which correlates to what we saw with the analytics from Fishworks. Cool!
Ok, I wouldn’t be a good Sun employee if I didn’t use DTrace once in a while. I was curious to see the Oracle calls for dNFS so I broke out my favorite tool from the DTrace Toolkit. The “hotuser” tool shows which functions are being called the most. For my purposes, I found an active Oracle shadow process and searched for NFS related functions.
root@saemrmb9> hotuser -p 681 |grep nfs
^C
oracle`kgnfs_getmsg 1 0.2%
oracle`kgnfs_complete_read 1 0.2%
oracle`kgnfswat 1 0.2%
oracle`kgnfs_getpmsg 1 0.2%
oracle`kgnfs_getaprocdata 1 0.2%
oracle`kgnfs_processmsg 1 0.2%
oracle`kgnfs_find_channel 1 0.2%
libnfsodm11.so`odm_io 1 0.2%
oracle`kgnfsfreemem 2 0.4%
oracle`kgnfs_flushmsg 2 0.4%
oracle`kgnfsallocmem 2 0.4%
oracle`skgnfs_recvmsg 3 0.5%
oracle`kgnfs_serializesendmsg 3 0.5%
So, yes it seems Direct NFS is really being used by Oracle 11g.
There is a set of V$ tables that allows you to sample the performance of dNFS as seen by Oracle. I like V$ tables because I can write SQL scripts until I run out of Mt. Dew. The following views are available to monitor activity with dNFS: v$dnfs_servers, v$dnfs_files, v$dnfs_channels, and v$dnfs_stats.
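For example, a quick look at v$dnfs_files shows which datafiles are currently open through Direct NFS (an illustrative query of mine, not from the original post):

-- datafiles currently opened through dNFS, with their sizes
select filename, filesize from v$dnfs_files;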
With some simple scripting, I created a script to monitor the NFS IOPS by sampling the v$dnfs_stats view. The script samples the nfs_read and nfs_write operations, pauses for 5 seconds, then samples again to determine the rate.
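Here is a minimal sketch of what such a sampler can look like (a reconstruction on my part, not the original script; it assumes SELECT privileges on the v$ views and EXECUTE on dbms_lock):

set serveroutput on
declare
  l_before number;
  l_after  number;
begin
  dbms_output.put_line('timestmp|nfsiops');
  -- total read+write operations across all shadow processes
  select sum(nfs_read + nfs_write) into l_before from v$dnfs_stats;
  dbms_lock.sleep(5);
  select sum(nfs_read + nfs_write) into l_after from v$dnfs_stats;
  -- operations per second over the 5 second interval
  dbms_output.put_line(to_char(sysdate, 'HH24:MI:SS') || '|' ||
                       round((l_after - l_before) / 5));
end;
/

Run in a loop, it produces output like the samples below.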
timestmp|nfsiops
15:30:31|48162
15:30:36|48752
15:30:41|48313
15:30:46|48517.4
15:30:51|48478
15:30:56|48509
15:31:01|48123
15:31:06|48118.8
Excellent! Oracle shows 48,000 NFS IOPS which agrees with the analytics from Fishworks.
Consulting the AWR report shows “Physical reads” in agreement as well.
Load Profile Per Second Per Transaction Per Exec Per Call
~~~~~~~~~~~~ --------------- --------------- ---------- ----------
DB Time(s): 93.1 1,009.2 0.00 0.00
DB CPU(s): 54.2 587.8 0.00 0.00
Redo size: 4,340.3 47,036.8
Logical reads: 385,809.7 4,181,152.4
Block changes: 9.1 99.0
Physical reads: 47,391.1 513,594.2
Physical writes: 5.7 61.7
User calls: 63,251.0 685,472.3
Parses: 5.3 57.4
Hard parses: 0.0 0.1
W/A MB processed: 0.1 1.1
Logons: 0.1 0.7
Executes: 45,637.8 494,593.0
Rollbacks: 0.0 0.0
Transactions: 0.1
iostat(1M) monitors IO to devices and NFS mount points. But with Oracle Direct NFS, the mount point is bypassed and each shadow process simply mounts files directly. To monitor dNFS traffic you have to use other methods, as described here. Hopefully, this post was instructive on how to peel back the layers in order to gain visibility into dNFS performance with Oracle and Sun Open Storage.