Oakies Blog Aggregator

12.2 New Feature: the FLEX ASM disk group part 3

In the previous two parts of this mini-series I introduced the Flex ASM disk group and two related concepts, the Quota Group and the File Group. In what should have become the final part (but isn’t) I am interested in checking whether quotas are enforced.

(Un)fortunately I have uncovered a few more things that are worth investigating and blogging about, which is why a) this isn’t the last post and b) it got a bit shorter than the previous two. Had I combined parts 3 and 4 it would have been too long for sure … BTW, you can navigate all posts using the links at the very bottom of the page.

Are quotas enforced?

The purpose of the Quota Group is … to enforce quotas on a disk group, much like quotas on a file system. This is quite interesting, because you now have a hard limit on how far databases can grow within a disk group, even for non-CDBs.

The question I set out to answer in this part is whether quotas are enforced. As you read in the previous post’s interlude, I have created 2 databases on the FLEX ASM disk group: ORCL and CDB, both container databases.

The current state of affairs for my File Groups is this:

ASMCMD> lsfg
File Group         Disk Group  Quota Group  Used Quota MB  Client Name    Client Type  
DEFAULT_FILEGROUP  FLEX        GENERIC      2488                                       
CDB_CDB$ROOT       FLEX        QG_CDB       6744           CDB_CDB$ROOT   DATABASE     
CDB_PDB$SEED       FLEX        QG_CDB       1656           CDB_PDB$SEED   DATABASE     
PDB1               FLEX        QG_CDB       1784           PDB1           DATABASE     
ORCL_CDB$ROOT      FLEX        GENERIC      9104           ORCL_CDB$ROOT  DATABASE     
ORCL_PDB$SEED      FLEX        GENERIC      1616           ORCL_PDB$SEED  DATABASE     
PDB1_0001          FLEX        GENERIC      9424           PDB1           DATABASE   

Database CDB is nicely tucked away in QG_CDB, but none of the ORCL database’s components are assigned to a Quota Group yet. I wanted to have another Quota Group QG_ORCL, for my second CDB. Somehow I think that a Quota Group per database makes sense.

ASMCMD> mkqg -G FLEX QG_ORCL quota 20G
Diskgroup altered.
ASMCMD> lsqg
Group_Num  Quotagroup_Num  Quotagroup_Name  Incarnation  Used_Quota_MB  Quota_Limit_MB  
2          1               GENERIC          1            22632          0               
2          2               QG_ORCL          7            0              20480           
2          3               QG_CDB           1            10184          20480           
ASMCMD> 
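
For reference, the SQL equivalent of mkqg should look something like this -- a sketch based on the 12.2 ALTER DISKGROUP ... ADD QUOTAGROUP syntax, to be run in the ASM instance:

SQL> alter diskgroup FLEX add quotagroup QG_ORCL set 'quota' = 20g;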

If you followed parts 1 and 2 you may have noticed that the group_number of my FLEX disk group changed from 5 to 2 after a server reboot:

SQL> select group_number, name, state, type from v$asm_diskgroup
  2  where group_number = 2;

GROUP_NUMBER NAME                           STATE       TYPE
------------ ------------------------------ ----------- ------
           2 FLEX                           CONNECTED   FLEX

Don’t let the change in numbers confuse you.

Back to the example: File Groups ORCL_CDB$ROOT, ORCL_PDB$SEED and PDB1_0001 are not yet within QG_ORCL. This can be rectified using 3 simple mvfg commands in ASMCMD, or the corresponding SQL commands (a sketch of which follows the listing below). After the move commands completed, the Quota Group still had some space left, although very limited:

ASMCMD> mvfg -G flex --filegroup PDB1_0001 QG_ORCL
Diskgroup altered.
ASMCMD> mvfg -G flex --filegroup ORCL_PDB$SEED  QG_ORCL
Diskgroup altered.
ASMCMD> mvfg -G flex --filegroup ORCL_CDB$ROOT  QG_ORCL
Diskgroup altered.
ASMCMD> lsfg
File Group         Disk Group  Quota Group  Used Quota MB  Client Name    Client Type  
DEFAULT_FILEGROUP  FLEX        GENERIC      2488                                       
CDB_CDB$ROOT       FLEX        QG_CDB       6744           CDB_CDB$ROOT   DATABASE     
CDB_PDB$SEED       FLEX        QG_CDB       1656           CDB_PDB$SEED   DATABASE     
PDB1               FLEX        QG_CDB       1784           PDB1           DATABASE     
ORCL_CDB$ROOT      FLEX        QG_ORCL      9104           ORCL_CDB$ROOT  DATABASE     
ORCL_PDB$SEED      FLEX        QG_ORCL      1616           ORCL_PDB$SEED  DATABASE     
PDB1_0001          FLEX        QG_ORCL      9424           PDB1           DATABASE     
ASMCMD> lsqg
Group_Num  Quotagroup_Num  Quotagroup_Name  Incarnation  Used_Quota_MB  Quota_Limit_MB  
2          1               GENERIC          1            2488           0               
2          2               QG_ORCL          7            20144          20480           
2          3               QG_CDB           1            10184          20480           
ASMCMD> 
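
As mentioned above, the moves can also be done in SQL. A sketch of the equivalent commands, assuming the 12.2 ALTER DISKGROUP ... MODIFY FILEGROUP syntax (File Group names containing $ quoted defensively):

SQL> alter diskgroup FLEX modify filegroup PDB1_0001 set 'quota_group' = 'QG_ORCL';
SQL> alter diskgroup FLEX modify filegroup "ORCL_PDB$SEED" set 'quota_group' = 'QG_ORCL';
SQL> alter diskgroup FLEX modify filegroup "ORCL_CDB$ROOT" set 'quota_group' = 'QG_ORCL';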

Now I just need to try and push it over the edge to see if the quota has any effect. This is quite simple: all I need to do is create another tablespace in ORCL:PDB1

SQL> show con_name

CON_NAME
------------------------------
PDB1
SQL> create tablespace userdata datafile size 500m;
create tablespace userdata datafile size 500m
*
ERROR at line 1:
ORA-01119: error in creating database file '+FLEX'
ORA-17502: ksfdcre:4 Failed to create file +FLEX
ORA-15437: Not enough quota available in quota group QG_ORCL.

SQL> 

This is similar to what we could already enforce using the storage clause in the create pluggable database command. But what about ASM file types that aren’t data files? Querying the database I can find quite a few of these:

SQL> get /tmp/non_data_files
  1  SELECT
  2      f.group_number as DG_NUMBER,
  3      f.file_number,
  4      f.incarnation as file_incarnation,
  5      f.type,
  6      fg.name as filegroup_name,
  7      a.name as file_name
  8  FROM
  9      v$asm_alias a,
 10      v$asm_file f,
 11      v$asm_filegroup fg
 12  WHERE
 13          a.group_number = f.group_number
 14      AND
 15          a.file_number = f.file_number
 16      AND
 17          a.file_incarnation = f.incarnation
 18      AND
 19          fg.group_number = f.group_number
 20      AND
 21          fg.filegroup_number = f.filegroup_number
 22      AND
 23*         f.type <> 'DATAFILE';
SQL> start /tmp/non_data_files

 DG_NUMBER FILE_NUMBER FILE_INCARNATION TYPE            FILEGROUP_NAME       FILE_NAME
---------- ----------- ---------------- --------------- -------------------- ------------------------------
         2         282        948453843 PASSWORD        DEFAULT_FILEGROUP    pwdorcl.282.948453843
         2         287        948462715 PASSWORD        DEFAULT_FILEGROUP    pwdorcl.287.948462715
         2         293        948462849 CONTROLFILE     ORCL_CDB$ROOT        Current.293.948462849
         2         292        948462849 CONTROLFILE     ORCL_CDB$ROOT        Current.292.948462849
         2         294        948462855 ONLINELOG       ORCL_CDB$ROOT        group_2.294.948462855
         2         295        948462855 ONLINELOG       ORCL_CDB$ROOT        group_1.295.948462855
         2         296        948462861 ONLINELOG       ORCL_CDB$ROOT        group_1.296.948462861
         2         297        948462861 ONLINELOG       ORCL_CDB$ROOT        group_2.297.948462861
         2         304        948463227 ONLINELOG       ORCL_CDB$ROOT        group_3.304.948463227
         2         305        948463231 ONLINELOG       ORCL_CDB$ROOT        group_3.305.948463231
         2         306        948463239 ONLINELOG       ORCL_CDB$ROOT        group_4.306.948463239
         2         307        948463243 ONLINELOG       ORCL_CDB$ROOT        group_4.307.948463243
         2         298        948462891 TEMPFILE        ORCL_CDB$ROOT        TEMP.298.948462891
         2         302        948462937 TEMPFILE        ORCL_PDB$SEED        TEMP.302.948462937
         2         308        948463249 PARAMETERFILE   ORCL_CDB$ROOT        spfile.308.948463249
         2         312        948464283 TEMPFILE        PDB1_0001            TEMP.312.948464283

16 rows selected.

Let’s take the online redo logs and add another log file group to thread 1:

SQL> alter database add logfile thread 1 size 1g;
alter database add logfile thread 1 size 1g
*
ERROR at line 1:
ORA-00301: error in adding log file '+FLEX' - file cannot be created
ORA-17502: ksfdcre:4 Failed to create file +FLEX
ORA-15437: Not enough quota available in quota group QG_ORCL.


SQL> 

This proves that quotas are enforced, at least for a couple of very simple examples.
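
Should a database legitimately need more space, raising the limit is just as simple. A sketch, assuming the MODIFY QUOTAGROUP clause works as you would expect:

SQL> alter diskgroup FLEX modify quotagroup QG_ORCL set 'quota' = 30g;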

Summary Part 3

There is certainly a lot more to discover about Quota Groups: data files set to autoextend, archivelog growth, temp and undo tablespaces, etc. At first glance, it looks good. The key again is to have suitable monitoring in place that warns DBAs about File Groups running out of space. I wonder if that’s part of Enterprise Manager; otherwise it’s easy to write such checks yourself, either as custom metrics in OEM or in tools such as Nagios.
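
Such a check needs little more than V$ASM_QUOTAGROUP, the view behind the lsqg output shown earlier. A minimal sketch, alerting on Quota Groups that are more than 90% full (the threshold is arbitrary); run it against the ASM instance:

SQL> select name, used_quota_mb, quota_limit_mb,
  2         round(100 * used_quota_mb / quota_limit_mb) as pct_used
  3  from v$asm_quotagroup
  4  where quota_limit_mb > 0
  5  and used_quota_mb > 0.9 * quota_limit_mb;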

The $50 Million Hyphen

There are a plethora of mishaps in the early space program that prove the need for DevOps, but fifty-five years ago this month there was one in particular that is often held up as the example for them all. This simple human error almost ended the whole American space program, and it serves as a strong example of why DevOps is essential as agile speeds up the development cycle.

The Mariner I space probe was a pivotal point in the space race between the United States and the Soviet Union. It was to be the first of a grand series of large, sophisticated interplanetary missions, all carrying the Mariner moniker. For the venture to launch (pun intended), it depended on a huge new development project for a powerful booster rocket called the Atlas-Centaur. That development program ran into so many testing failures that NASA ended up dropping the initial project and going with a less sophisticated booster to meet the release date (i.e. features dropped from the project). The new probe designs were based on the previously used Ranger moon probes, so less testing was thought necessary, and the Atlas Agena B booster was born, bringing the Mariner project down to a meager cost of $80 million.


The goal of Mariner I was to perform an unmanned mission to Mars, Venus and Mercury. It was equipped with solar cells on its wings to assist on the voyage, all of it new technology, but the booster required to escape Earth’s gravity was an essential part of the project. As the booster was based on older technology than many of the newer features, it didn’t receive the same attention during testing.

On July 22nd, 1962, Mariner I lifted off, but approximately four minutes into the flight it veered off course. NASA made the fateful decision to terminate the spacecraft, destroying millions of dollars of equipment to ensure it didn’t crash on its own into populated areas.

As has already been well documented, the guidance system that was supposed to correct the Mariner I flight contained a single typo: a missing hyphen in the instructions used to adjust flight patterns. Where the code should have read “R-dot-bar sub-n”, it instead read “R-dot-bar sub n”. This minor difference caused the program to over-correct small velocity changes and send erratic steering commands to the spacecraft.

This missing hyphen caused a loss of millions of dollars in the space program and is considered the most expensive hyphen in history.

How does this feed into the DevOps scenario?  

Missing release dates for software can cost companies millions of dollars, but so can the smallest typos. Reusing code and automating programming, along with proper policies, process and collaboration throughout the development cycle, ensures that code isn’t just well written but, even in these shortened development cycles, fully reviewed and tested before it’s released. When releases are done in smaller increments, a feedback loop ensures that errors are caught early, before they ever reach production.




DevOps and Webinars- July 19th and July 25th

Doing three or four webinars in a month doesn’t seem like a big deal until you actually try to do it…and present at two or three events and make sure you do everything for your job outside of that, too.  Suddenly you find yourself scrambling to keep up, but I’m known for taking on a few too many things at once…

New member of the OakTable Network

A quick one to say that I am very happy to be a new member of the OakTable Network.

Thanks Kevin Closson for my nomination, I feel deeply honored!

Exadata 12c PX Adaptive Offloading

Here is yet another case when you may not see as much offloading on your Exadata as you expect.

I was recently investigating a simple select count(*) query producing a lot of buffered read events:

select count(*) from whs.trans partition (sys_p10236650) f

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  resmgr:pq queued                                1        0.00          0.00
  Disk file operations I/O                        1        0.00          0.00
  cell single block physical read                41        0.00          0.04
  enq: KO - fast object checkpoint                3        0.00          0.00
  reliable message                                1        0.00          0.00
  enq: PS - contention                           24        0.00          0.00
  PX Deq: Join ACK                               48        0.00          0.00
  PX Deq: Parse Reply                            24        0.01          0.06
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00
  PX Deq: Execute Reply                         338        0.08          1.57
  PX Deq: Signal ACK EXT                         24        0.01          0.01
  PX Deq: Slave Session Stats                    24        0.00          0.00
  cursor: pin S wait on X                        23        0.01          0.18
  PX Deq: Execution Msg                         362        0.25          5.12
  cell multiblock physical read                3402        0.03          9.07
  gc current grant busy                           5        0.00          0.00
  latch: gc element                              60        0.00          0.00
  gc cr grant 2-way                             136        0.00          0.03
  gc cr multi block request                     131        0.00          0.06
  cell smart table scan                        7714        0.02          2.27
  library cache: mutex X                         15        0.00          0.00
  library cache pin                               3        0.00          0.00
  latch: cache buffers lru chain                 13        0.00          0.00
  latch: ges resource hash list                   1        0.00          0.00
  gc current block 2-way                          1        0.00          0.00
  cell list of blocks physical read               2        0.00          0.01
  gc cr grant congested                           2        0.00          0.00
  latch: object queue header operation            2        0.00          0.00
  latch: gcs resource hash                        3        0.00          0.00
********************************************************************************

As you can see we've got 3402 cell multiblock physical read and 41 cell single block physical read events alongside 7714 cell smart table scan events. I looked at all the usual smart scan preventing suspects but came up with nothing.
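
As an aside, a quick way to gauge how much of a scan was offloaded, before reaching for traces, is to check the offload-related session statistics right after running the query. A sketch (PX slave statistics are rolled up into the query coordinator's session at the end of execution):

SQL> select n.name, s.value
  2  from v$mystat s, v$statname n
  3  where s.statistic# = n.statistic#
  4  and n.name in ('cell physical IO bytes eligible for predicate offload',
  5                 'cell physical IO interconnect bytes returned by smart scan');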

Tracing

In order to further understand the source of buffered reads I've enabled the following traces:



alter session set events 'trace[nsmtio]';
alter session set "_px_trace" = high,granule,high,execution;

What I discovered in the trace was quite interesting. For smart scans the following sections were present in the trace:

2017-07-15 13:54:17.777305*:PX_Control:kxfx.c@10122:kxfxsGetNextGranule():  Receiving 1 granules
...
kxfxContAdaptOff: offload granule:1 offload enabled:1 ratio:0.800000 cellrate:0.000000 cacherate:0.000000 dr_amount:0 dr_time:0 bc_amount:0 bc_time:0 system offloading ratio:0.800000
...
NSMTIO: kxfxghwm:[DirectRead]: Buffer Cache Consumption rate of granule is less and direct read enabled.
...
WAIT #140442514780840: nam='cell smart table scan' ela= 208 cellhash#=459971463 p2=0 p3=0 obj#=36324502 tim=13309302766869
WAIT #140442514780840: nam='cell smart table scan' ela= 224 cellhash#=83369134 p2=0 p3=0 obj#=36324502 tim=13309302767638
WAIT #140442514780840: nam='cell smart table scan' ela= 157 cellhash#=3883045144 p2=0 p3=0 obj#=36324502 tim=13309302768329
...

And for buffered reads the trace had the following:


2017-07-15 13:54:18.151520*:PX_Control:kxfx.c@10122:kxfxsGetNextGranule(): Receiving 1 granules
...
kxfxContAdaptOff: offload granule:0 offload enabled:1 ratio:0.800000 cellrate:1320.421053 cacherate:0.000000 dr_amount:0 dr_time:0 bc_amount:0 bc_time:0 system offloading ratio:0.800000
...
NSMTIO: kxfxghwm:[GRANULE_AFFINITIZED]: Either size is small OR medium and can be cached.
...
WAIT #140442514780840: nam='cell multiblock physical read' ela= 2057 cellhash#=2689352169 diskhash#=3123344858 bytes=1048576 obj#=36324502 tim=13309303143810
WAIT #140442514780840: nam='cell multiblock physical read' ela= 3162 cellhash#=2689352169 diskhash#=3123344858 bytes=1048576 obj#=36324502 tim=13309303151338
WAIT #140442514780840: nam='cell multiblock physical read' ela= 1383 cellhash#=2689352169 diskhash#=3123344858 bytes=450560 obj#=36324502 tim=13309303156489
...

The above sections were alternating for each granule received. After a granule was received, the entire granule was processed using either smart scans or buffered reads. It also appeared that the decision was driven by the kxfxContAdaptOff (Control Adaptive Offloading?) function. Notice kxfxContAdaptOff emitting the cellrate statistic into the trace as well. This got me thinking that it might be checking the storage cells' workload (or something similar) and using it as one of the inputs for deciding which way to process a granule.

PX Adaptive Offloading

With a little bit of outside help, the kxfxContAdaptOff function led us to two underscore parameters:




SQL> select ksppinm, ksppstvl, ksppstdfl, ksppdesc
  2  from x$ksppi x, x$ksppcv y
  3  where (x.indx = y.indx)
  4  and ksppinm like '_px_adaptive_offload%';

KSPPINM                          KSPPSTVL  KSPPSTDFL  KSPPDESC
-------------------------------  --------  ---------  --------------------------------------------------------
_px_adaptive_offload_threshold   10        10         threshold (GB/s) for PQ adaptive offloading of granules
_px_adaptive_offload_percentage  80        30         percentage for PQ adaptive offloading of granules

The description of these parameters appeared to match the behavior I was seeing: a threshold and a percentage controlling how many granules are subjected to adaptive offloading.

_px_adaptive_offload_threshold

I repeated my test with _px_adaptive_offload_threshold=0 and here are the events I got:



Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  library cache pin                               2        0.00          0.00
  resmgr:pq queued                                1        0.00          0.00
  enq: KO - fast object checkpoint                3        0.00          0.00
  reliable message                                1        0.00          0.00
  enq: PS - contention                           39        0.00          0.00
  PX Deq: Join ACK                               48        0.00          0.00
  PX Deq: Parse Reply                            24        0.01          0.02
  Disk file operations I/O                        1        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00
  PX Deq: Execute Reply                         338        0.23          4.46
  PX Deq: Signal ACK EXT                         24        0.06          0.07
  PX Deq: Slave Session Stats                    24        0.00          0.00
  library cache lock                              1        0.00          0.00
  PX Deq: Execution Msg                         362        0.47         14.54
  cell multiblock physical read               14536        0.04         44.74
  gc current grant busy                          24        0.00          0.01
  gc cr grant 2-way                             626        0.00          0.11
  gc cr multi block request                     553        0.00          0.18
  cell single block physical read              3459        0.02          2.69
  latch: gcs resource hash                       29        0.00          0.01
  latch: gc element                             149        0.00          0.02
  latch: cache buffers lru chain                 42        0.00          0.01
  latch: cache buffers chains                     4        0.00          0.00
  latch: object queue header operation            3        0.00          0.00
  gc cr grant congested                           1        0.00          0.00
********************************************************************************

Notice the complete absence of smart scans -- everything got processed using buffered reads! This confirmed that I was on the right track. And here are the events with _px_adaptive_offload_threshold=1000:

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  resmgr:pq queued                                1        0.00          0.00
  enq: KO - fast object checkpoint                3        0.02          0.02
  reliable message                                1        0.00          0.00
  enq: PS - contention                           29        0.00          0.00
  PX Deq: Join ACK                               48        0.00          0.00
  PX Deq: Parse Reply                            24        0.01          0.03
  Disk file operations I/O                        1        0.00          0.00
  SQL*Net message to client                       2        0.00          0.00
  SQL*Net message from client                     2        0.00          0.00
  PX Deq: Execute Reply                         338        0.09          1.37
  PX Deq: Signal ACK EXT                         24        0.08          0.09
  PX Deq: Slave Session Stats                    24        0.00          0.00
  PX Deq: Execution Msg                         362        0.45         16.56
  cell multiblock physical read                2018        0.10          7.04
  gc current grant busy                           5        0.00          0.00
  gc cr grant 2-way                              57        0.00          0.00
  gc cr multi block request                      41        0.00          0.01
  cell smart table scan                        8641        0.10          5.23
  cell single block physical read               602        0.02          0.58
  library cache pin                               2        0.00          0.00
  latch: cache buffers lru chain                  2        0.00          0.00
  latch: gc element                               4        0.00          0.00
  latch: object queue header operation            1        0.00          0.00
  library cache: mutex X                          1        0.00          0.00
********************************************************************************

Notice that while I got more smart scans, the buffered reads were still there as well. Interestingly enough, I could not fully eliminate buffered reads no matter how high I set this parameter. This likely means that there are more variables at play here and/or that _px_adaptive_offload_percentage dictates that a certain share of granules always gets processed using buffered reads.

I'm not entirely sure what _px_adaptive_offload_percentage does at the moment, as setting it to one value or another did not appear to make any difference. While this is a guess on my part, the value in effect on my system (80) matches the ratio:0.800000 reported by kxfxContAdaptOff, and it appears to be hard-coded in the function.

Final thoughts

Clearly 12c introduced some new code that can dynamically decide how to process each PX granule. The unfortunate part is that none of this appears to be documented, and it looks like quite a big change that warrants some sort of explanation.

Updated a little bit later...

It appears that setting _px_adaptive_offload_threshold=0 disables Adaptive Offloading after all. The reason I got only buffered reads when I set it to 0 was that the system was running with Auto DOP, so in-memory PQ kicked in. When I set it together with _parallel_cluster_cache_policy=adaptive, everything reverted to the expected behavior -- everything got processed using smart scans.
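
So the combination that restored full offloading in my tests looked like this (underscore parameters -- experiment with these on test systems only):

SQL> alter session set "_px_adaptive_offload_threshold" = 0;
SQL> alter session set "_parallel_cluster_cache_policy" = adaptive;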


Friday Philosophy – Improving Your Working Life

If I got you all to write down the top 5 things that make working bearable, and then got you to make a list of the top 5 things that make working enjoyable, I have a suspicion there will be one thing high on the “Enjoyable” list that may not even be on the “Bearable” list:

Being in a good team.

This one thing can make a real difference to your working life. I know this is true for me and it’s something I’ve heard other people say a lot. The team you are in can make up for a lot of negative things about any given job. I’ve found myself in roles where I am unsuitable for the task, or under a ridiculously high workload, even being paid much less than I know I am worth. But if I have been in a good team, working with people I like (well, at least some of them!) it makes it all a lot better. A lot, lot better. Think about the jobs or roles you have most enjoyed in your life. In any of them did you not like the team you were in?

Unfortunately in most teams there are people like Miserable Kevin, who does nothing but complain and slag everyone off. Or Oddly Quiet Katrina who is about as much fun to be with as eating a jar of pickled frogs.

You might expect me to now suggest you leave any team you do not like being in – but that is impractical advice for most people and impossible for many. And in fact I think there is a much, much better option:

Don’t change which team you are in – Change the team you are in.

You may think that it’s the responsibility of the team leader or maybe “management” to create a good team, perhaps by punishing or getting rid of Kevin and Katrina. And to a certain extent you are right. But most team leaders got the job for reasons other than their soft skills (the ones that allow them to understand and work with people – Heck, most of us in I.T. are there at least partially as we do not like other bloody people!). But actually, anyone in the team can change the team into somewhere more pleasant to be. I’m willing to bet that if you have worked in teams that are fun and satisfying to be in, the person or people who made it that way were not the team leader. Or at least not limited to the team leader.

Be warned, I am not claiming it is easy to change a team and it can take a while. But I think anyone can improve their team, if they put in a little effort. And you can do it in small, easy steps.

It helps if you know or learn a little about how different people think, a little bit of pop-psychology can go a long way (all those management training courses I once went on helped me a lot in this) but in essence you just need to help people to talk, relax, interact and get to know each other a little better. Try to see things from their side, consider why they are being objectionable or difficult. And be nice to people. Not in a creepy way, but just try to not lose your temper at someone who is being annoying, do not join in with the weekly team moan about Kevin behind his back. I’m not saying you should not disapprove if someone is being an arse, but you can make it clear you are not happy with them without being antagonistic or retaliating. Just one person doing this can make a big difference. I know, I’ve done it.

Another thing to do is try to include people more. If you and another team member are discussing an issue, maybe ask Oddly Quiet Katrina what they think. If you can find a common interest with someone you don’t know well in the team, try to talk to them about it. I don’t mean do what one bloke I know did – come in the office, ask how your weekend went and then look stunned with boredom when you told him – He had no real interest, he was playing a role, and doing it badly. If you are going to try and draw someone out a little, it really helps to be interested in what they say.

One thing I have found helps significantly in creating a better team is suggesting the occasional coffee, lunch together (especially if it means going out of the office to get something, even if only a sandwich) or an after-work pint. {I’ve previously mentioned this as Team Ice-Cream}. It really helps if you know at least 1 other person who will join in; and you are not trying to get everyone there. You just let people know, go and have the coffee or drink and let whoever wishes to join in. Sometimes there will be someone who will refuse to join in but, heck, the last thing you want to do is try and make it “enforced team fun” – as that never works well. I’ve done this in 3 or 4 roles now and after a few weeks (and it can take weeks and months) most of the team was coming along most of the time. And the intra-team bitching had plummeted. I even saw people help each other without being told to!

One thing to mention – don’t be too enthusiastic and gung-ho about it. Don’t come in first thing and cry “Hi team! How’s it all going! Hi-Fives! Who’s for a pint after work!?”. This will make you into Psychotic Barry. No one trusts Psychotic Barry.

The key principle is to be a bit more friendly and inclusive without people really noticing you are doing it. Keep it all low key.

Why should *you* be the person to put in the effort? Well, think of it selfishly. It will be nicer for you if being at work is less bloody awful. Also, it’s a skill you can use everywhere! Each time you change team, you can see if you can improve the new team a bit and see if you can do it more effectively. You can use it in social situations too. Maybe even the wider family, but if you try that and it all goes horribly wrong, don’t sue me.

Another reason to do it? Many people who know me in the flesh may be surprised to know that I’m not naturally very good in a crowd. I got a lot better at it, and more comfortable with people in general, by trying to improve the team.

So go change your team. It’s actually easier than you think and, heck, what have you to lose but some wasted effort and a couple of quiet evenings in a pub on your own (or maybe worst case scenario, just you and Psychotic Barry)? At least you will know that you gave it a go, it is now definitely the team leader’s problem.

One final word of warning. That bloke over there who is a militant vegan, self-appointed know-er of all and despises all contractors on philosophical principles? Don’t expect to get anywhere with him and don’t try too hard. Some people need professional help!

Upgrading an Amazon EC2 Delphix Source, Part I

For a POC that I’m working on with the DBVisit guys, I needed a quick 12c environment to work on and have at our disposal as required.  I knew I could build out an 11g one in about 10 minutes with our trusty free trial, but would then need to upgrade it to 12c.

Disable snapshots to Delphix Engine

This is a simple prerequisite before you upgrade an Oracle source database: it takes the pressure off the system and avoids confusion while the database and Oracle home are being upgraded.

Simply log into the Delphix Admin console and click on the source group that the source database belongs to. Under Configuration, on the right-hand side, you’ll see a slider that needs to be moved to the “disable” position so that interval snapshots are no longer taken.


Configure GUI for Simplified Oracle Installation

EC2 instances don’t come with a GUI by default, so we just need to install one on the host to make life a little easier for the upgrade:

  •  Check for updates:
[delphix@linuxsource database]$ sudo yum update -y

….

  xfsprogs.x86_64 0:3.1.1-20.el6                                                
  yum.noarch 0:3.2.29-81.el6.centos                                             
  yum-plugin-fastestmirror.noarch 0:1.1.30-40.el6                               
Replaced:
  python2-boto.noarch 0:2.41.0-1.el6                                            
Complete!
  • Install the desktop:
[delphix@linuxsource database]$ sudo yum groupinstall -y "Desktop"

  xorg-x11-xkb-utils.x86_64 0:7.7-12.el6                                        
  xulrunner.x86_64 0:17.0.10-1.el6.centos                                       
  zenity.x86_64 0:2.28.0-1.el6                                                  
Complete!
  • Install dependencies like fonts needed:
[delphix@linuxsource database]$ sudo yum install -y pixman pixman-devel libXfont
[delphix@linuxsource database]$ sudo yum -y install tigervnc-server

Each of the above should show completed successfully.

  • Set the password for the VNC:
[delphix@linuxsource database]$ vncpasswd
Password:
Verify:
  • Restart the SSHD Service:

sudo service sshd restart

[delphix@linuxsource database]$ sudo service sshd restart
Stopping sshd:                                             [  OK  ]
Starting sshd:                                             [  OK  ]
  • Configure VNC Server properties with SUDO privs:
[delphix@linuxsource database]$ sudo vi /etc/sysconfig/vncservers
VNCSERVERS="1:delphix"
VNCSERVERARGS[1]="-geometry 1280x1024"

Save and exit the vncservers configuration file.

  • Start VNC Server:
[delphix@linuxsource database]$ sudo service vncserver start

….

Log file is /home/delphix/.vnc/linuxsource.delphix.local:1.log
                                                           [  OK  ]
  • I’m the only one who will be accessing this host to perform these types of tasks, so I’ll use port 5901 and add a firewall rule:
[delphix@linuxsource database]$ sudo iptables -A INPUT -m state --state NEW -m tcp -p tcp --dport 5901 -j ACCEPT

You can now use the VNC Viewer to access the GUI for the Linux Source and install/upgrade Oracle.  I’m assuming you already have it, but if you don’t, download it and do a quick install.  Keep in mind, to install Oracle via the GUI on the Linux Target, I’ll need to perform these steps on that target, too.

Let’s check and verify that we can get to the Linux Source desktop.  Configure a new connection in the VNC Viewer and remember to use the public IP and “:1” for the user you wish to log in as.  Save and log into the Linux Source.
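
From the client side, the connection string is simply host:display, for example (with a hypothetical public IP):

vncviewer 203.0.113.10:1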

In the next post, I’ll update the Linux Source Oracle database and we’ll proceed with upgrading Delphix source and target databases on Amazon.

 




Role of # in SQL*Plus

The # character is for commenting in SQL*Plus, right?

The # character has mostly been used for comments in many languages, such as shell scripts and Python. Interestingly, # is legal syntax in SQL*Plus scripting as well; but is it considered a comment? The answer is no, it's not. The purpose of # in SQL scripts is very different, and you should be very careful using it.

Entering # at the start of a line tells SQL*Plus to temporarily pause the statement being entered and execute everything after the # sign, as if in a different session. Here is a use case. Suppose you are writing this query:

SQL> select *
  2  from v$sesstat
  3  where

[Typographical Conventions: User inputs are shown bold. System outputs are not.]

At this time you are stuck. You don't remember the column names of V$SESSTAT, but you need to know them to write the predicate. What can you do? You could just press ENTER, which causes the SQL*Plus prompt to be redisplayed; you could then describe the view and get your answer. However, that means you will have lost all the input you have entered so far. In the case of a complicated long query that is hardly desirable.

This is where the # symbol comes to help. Just type # followed by the command you want executed. In this case you want to describe the view, so enter this at the 4th line prompt of SQL*Plus:

  4  #desc v$sesstat

When you enter this command and press ENTER, SQL*Plus will pause the evaluation of the statement it is collecting and execute the desc v$sesstat command. Here is the output:

 Name                                      Null?    Type
 ----------------------------------------- -------- ----------
 SID                                                NUMBER
 STATISTIC#                                         NUMBER
 VALUE                                              NUMBER
 CON_ID                                             NUMBER

After this display, SQL*Plus will prompt for line 4 again. Now that you know the columns, enter the rest of the SQL statement you were writing:

  4  con_id = 1 
  5  /

And that's it. Note how SQL*Plus didn't discard any of the lines already entered. Instead it merely paused the evaluation, executed the command desc v$sesstat after the # symbol, and resumed. The "#" is not a comment character but a prefix that runs a command immediately, without discarding the statement already in progress in the session.
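
If what you actually want is a comment, SQL*Plus has proper mechanisms: the REMARK (REM) command for a whole line in a script, and the standard SQL comment syntaxes:

REM this whole line is a comment
-- so is this
/* and this */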

Unintended Ramifications

This behavior of the # character may produce results that were never intended. Take for instance the following scenario: we create a table, insert a row, do not commit, and then check the number of records.

SQL> create table t (col1 number);

Table created.

SQL> insert into t values (1);

1 row created.

SQL> select count(1)
  2  from t
  3  where 1 > 0
  4  /

  COUNT(1)
----------
         1

We got the result we expected. Now let's introduce a small line, #roll, in between the commands. Perhaps someone assumed it was a comment and left it there.

SQL> select count(1)
  2  #roll
Rollback complete.
  2  from t
  3  where 1 > 0
  4  /

  COUNT(1)
----------
         0

Note how the count came back as 0. Why? Because the rollback occurred independently of the select command. By the time the select statement completed, the rollback had undone the insert, and hence there were no rows in the table.

In some cases this behavior might prove deadly. Here is another example I have seen, from a script that shuts down the database, takes a storage snapshot and restarts it:

shutdown abort
startup

Later it was decided that shutdown abort was not desirable; it should be shutdown transactional. So the DBA changed the script to the following. Assuming the # character to be a comment, she thought she was commenting out the line instead of removing it:

#shutdown abort
shutdown transactional
startup

Now let's run the script:

SQL> @b
ORACLE instance shut down.
ERROR:
ORA-01034: ORACLE not available
ORA-27101: shared memory realm does not exist
Process ID: 0
Session ID: 0 Serial number: 0
ORACLE instance started.

Total System Global Area 5117050880 bytes
Fixed Size                  8757424 bytes
Variable Size            1056968528 bytes
Database Buffers         4043309056 bytes
Redo Buffers                8015872 bytes
Database mounted.
Database opened.                                                                                                                    

What happened? The shutdown transactional failed with "ORACLE not available". Why?

The answer is simple: the shutdown abort executed after all, since # is not a comment. The database was already down, and therefore the shutdown transactional failed.
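
Had a genuine SQL*Plus comment been used, the script would have behaved as intended, for example:

REM shutdown abort
shutdown transactional
startup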

Don't Like #?

Now that you see how some people may confuse # with a comment and make mistakes, you may not want this special property assigned to #. Instead, you may prefer another character, say "*". You can easily change it: the SQL*Plus setting SQLPREFIX allows you to do that:

SQL> set sqlprefix *

Now, let's repeat the same example we used earlier but with * instead of #:

SQL> select count(1)
  2  * desc t
 Name                                      Null?    Type
 ----------------------------------------- -------- --------
 COL1                                               NUMBER

  2  from t;

  COUNT(1)
----------
         0

What if you use # character in this case?

SQL> select count(1)
  2  # desc t
  3  from t;
# desc t
*
ERROR at line 2:
ORA-00911: invalid character

You will not get anywhere; you will get an error. That is arguably better than # silently executing a command that someone assumed was a comment.

Takeaways

  1. The # character is not a comment in SQL scripts.
  2. When SQL*Plus encounters a # character at the first position, it temporarily pauses the evaluation of the statements being entered and immediately executes the command after # and resumes evaluation of the previously entered commands afterwards.
  3. You can assign another character to exhibit this behavior by setting the SQLPREFIX to that character.

Little Things Doth Crabby Make – Part XXI. No, colrm(1) Doesn’t Work.

This is just another quick and dirty installment in the Little Things Doth Crabby Make series. Consider the man page for the colrm(1) command:

#000000;" src="https://kevinclosson.files.wordpress.com/2017/07/capture11.png?w=500&h=99" alt="" width="500" height="99" srcset="https://kevinclosson.files.wordpress.com/2017/07/capture11.png?w=500&h=99 500w, https://kevinclosson.files.wordpress.com/2017/07/capture11.png?w=996&h=198 996w, https://kevinclosson.files.wordpress.com/2017/07/capture11.png?w=150&h=30 150w, https://kevinclosson.files.wordpress.com/2017/07/capture11.png?w=300&h=60 300w, https://kevinclosson.files.wordpress.com/2017/07/capture11.png?w=768&h=153 768w" sizes="(max-width: 500px) 100vw, 500px" />

That looks pretty straightforward to me. If, for example, I have a 6-column text file and I only want to ingest, say, columns 1 through 3, I should be able to execute colrm(1) with a single argument: 4. I’m not finding the colrm(1) command to work in accordance with my reading of the man page, so that qualifies as a little thing that doth crabby make.

Consider the following screenshot showing a simple 6-column text file. To make sure there are no unprintable characters that might somehow interfere with colrm(1) functionality I also listed the contents with od(1):

#000000;" src="https://kevinclosson.files.wordpress.com/2017/07/capture22.png?w=500&h=207" alt="" width="500" height="207" srcset="https://kevinclosson.files.wordpress.com/2017/07/capture22.png?w=500&h=207 500w, https://kevinclosson.files.wordpress.com/2017/07/capture22.png?w=150&h=62 150w, https://kevinclosson.files.wordpress.com/2017/07/capture22.png?w=300&h=124 300w, https://kevinclosson.files.wordpress.com/2017/07/capture22.png 668w" sizes="(max-width: 500px) 100vw, 500px" />

Next, I executed a series of colrm(1) commands in an attempt to see which columns get plucked from the file based on different single-argument invocations:

#000000;" src="https://kevinclosson.files.wordpress.com/2017/07/capture3.png?w=500" alt="" srcset="https://kevinclosson.files.wordpress.com/2017/07/capture3.png 292w, https://kevinclosson.files.wordpress.com/2017/07/capture3.png?w=71 71w" sizes="(max-width: 292px) 100vw, 292px" />

Would that make anyone else crabby? The behavior appears very indeterminate to me, and that makes me crabby.
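
One possible explanation, and this is an assumption on my part rather than a verified reading of the man page: colrm(1) counts character positions, not whitespace-delimited fields, so a single argument of 4 removes everything from the fourth character of each line onward:

$ echo "1 2 3 4 5 6" | colrm 4
1 2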

Thoughts? Leave a comment!

 


Little Things Doth Crabby Make – Part XX – Man Pages Matter! Um, Still.

It’s been a while since I’ve posted a Little Things Doth Crabby Make entry so here it is, post number 20 in the series. This is short and sweet.

I was eyeing output from the iostat(1) command with the -xm options on a Fedora 17 host and noticed the column headings were weird. I was performing a SLOB data loading test and monitoring the progress. Here is what I saw:

#000000;" src="https://kevinclosson.files.wordpress.com/2017/07/capture21.png?w=500&h=168" alt="" width="500" height="168" srcset="https://kevinclosson.files.wordpress.com/2017/07/capture21.png?w=500&h=168 500w, https://kevinclosson.files.wordpress.com/2017/07/capture21.png?w=1000&h=336 1000w, https://kevinclosson.files.wordpress.com/2017/07/capture21.png?w=150&h=50 150w, https://kevinclosson.files.wordpress.com/2017/07/capture21.png?w=300&h=101 300w, https://kevinclosson.files.wordpress.com/2017/07/capture21.png?w=768&h=258 768w" sizes="(max-width: 500px) 100vw, 500px" />

 

If that looks all fine and dandy then please consider the man page:

#000000;" src="https://kevinclosson.files.wordpress.com/2017/07/capture1.png?w=500&h=236" alt="" width="500" height="236" srcset="https://kevinclosson.files.wordpress.com/2017/07/capture1.png?w=500&h=236 500w, https://kevinclosson.files.wordpress.com/2017/07/capture1.png?w=1000&h=472 1000w, https://kevinclosson.files.wordpress.com/2017/07/capture1.png?w=150&h=71 150w, https://kevinclosson.files.wordpress.com/2017/07/capture1.png?w=300&h=142 300w, https://kevinclosson.files.wordpress.com/2017/07/capture1.png?w=768&h=363 768w" sizes="(max-width: 500px) 100vw, 500px" />

OK, so that is petty, I know.  But, the series is called Little Things Doth Crabby Make after all.