Oakies Blog Aggregator

OpenWorld 2017 – where is the tech content?

I can’t actually remember how many OpenWorld conferences I have attended. At a guess it would be around 7 or 8. Every year there is a criticism that there is “not enough technical content”.

Well I’m calling “fake news” on that. I can’t remember a time that I’ve struggled to find sessions that I’m interested in. Don’t get me wrong – I would love it if there was double the technical content especially in the core database area, but there’s a pretty simple reason for that. I’m a technically-oriented database-oriented kinda guy Smile. I’m sure that as I type this post, there is probably an Oracle Apps Functional Analyst preparing a blog post about how database content has gone overboard and there should be more Apps content. It’s a fact of life that you can’t please everyone.

“But I looked at the agenda, and a lot of it says ‘in the cloud'” I hear you exclaim. “It will all just be advertising!”.

Yeah, I’m calling “fake news” on that claim as well. Think about that for a second. If I’m submitting a talk for a conference that has a heavy emphasis on cloud architectures, then “Duh!”, of course I’m going to have “cloud” in the title or the synopsis. Because one of the prerequisites for giving a talk at the conference is getting the damn thing accepted! I suspect some of the cynics who claim the agenda doesn’t have enough technical content have not taken the effort to actually study it.

But you need not worry… Some kind person has done that for you Smile. Having enough technical content is the least of your problems if you’re attending OpenWorld 2017. I’ve been through the session catalog and scraped out those sessions that I have an interest in, and even from my narrow database-centric perspective, I still have a massively tough task of deciding between multiple clashing sessions for almost every single timeslot. And as an Oracle employee, it will be even harder, because I have to wait until all of the conference attendees have entered the session, and only then do I get in if there are a few seats still vacant!

So here’s my list of sessions I’d love to attend at OpenWorld. Click on the code for each to get more details, or (if you are an attendee) to add them to your session builder … you lucky devils! Smile

Sunday 9:45

  • Franck Pachot Multitenant: Trust It by Understanding Beyond What Is Documented SUN1051 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 09:45 01-OCT-2017 09:45
  • Gary Gordhamer Oracle and NLS SUN5637 Moscone South – Room 155 01-OCT-2017 09:45 01-OCT-2017 09:45
  • Jason Arneil Cloud Networking for Beginners SUN1230 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 01-OCT-2017 09:45 01-OCT-2017 09:45

Sunday 10:45

  • Adeesh Fulay Docker 101 for Oracle DBAs SUN5617 Moscone South – Room 151 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Francis Mignault Cloudy with a Chance of Oracle Application Express SUN6246 Marriott Marquis (Yerba Buena Level) – Salon 4-6 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Debra Lilley,Ralf Koelling,Jan-Peter Timmermann EOUC Database ACES Share Their Favorite Database Things SUN2413 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Peter Koletzke Leveraging Oracle Database SQL and PL/SQL to Simplify User Interface App Dev SUN6252 Marriott Marquis (Yerba Buena Level) – Salon 10/11 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Sai Janakiram Penumuru A 360-Degree View of Oracle Database Cloud Service SUN5296 Moscone South – Room 160 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Stephen Kost The Thrifty DBA Does Database Security SUN5690 Moscone South – Room 159 01-OCT-2017 10:45 01-OCT-2017 10:45
  • Wayne Martin,Arun Sar Ingesting Data from Oracle Database 12cR2 in a Hadoop System SUN5615 Moscone South – Room 157 01-OCT-2017 10:45 01-OCT-2017 10:45

Sunday 11:45

  • Alex Zaballa Let’s Get Started with Oracle Database Cloud SUN1871 Moscone South – Room 314 01-OCT-2017 11:45 01-OCT-2017 11:45
  • Debra Lilley,Ralf Koelling,Jan-Peter Timmermann EOUC Developer Champions Show the Cool Tech They Use SUN2415 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 11:45 01-OCT-2017 11:45
  • Harris Baskaran SSL Securing Oracle Database: Goodbye Passwords SUN5642 Moscone South – Room 159 01-OCT-2017 11:45 01-OCT-2017 11:45
  • James Czuprynski Oracle Database In-Memory: Adventures with SwingBench TPC-DS SUN5644 Moscone South – Room 153 01-OCT-2017 11:45 01-OCT-2017 11:45
  • Yalim Gerger Version Control for PL/SQL SUN5700 Moscone South – Room 157 01-OCT-2017 11:45 01-OCT-2017 11:45

Sunday 12:45

  • Ludovico Caldara Get the Most Out of Oracle Data Guard SUN1399 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 01-OCT-2017 12:45 01-OCT-2017 12:45
  • Janis Griffin Need for Speed: Top Five Oracle Performance Tuning Tips SUN5647 Moscone South – Room 153 01-OCT-2017 12:45 01-OCT-2017 12:45
  • Oren Nakdimon Write Less (Code) with More: Oracle Database 12c New Features SUN2990 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 12:45 01-OCT-2017 12:45
  • Riyaj Shamsudeen Oracle Automatic Storage Management and Internals SUN5682 Moscone South – Room 155 01-OCT-2017 12:45 01-OCT-2017 12:45

Sunday 13:45

  • Brendan Tierney SQL: One Language to Rule All Your Data SUN1238 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Jonathan Dixon Alexa, How Do You Work with Oracle REST Data Services? SUN6251 Marriott Marquis (Yerba Buena Level) – Salon 4-6 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Julian Dontcheff DBA Types SUN1111 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Pini Dibask Database Consolidation Using Oracle’s Multitenant Architecture SUN5677 Moscone South – Room 153 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Simon Pane,Ivica Arsov Securing Your DBA Monitoring and Backup Scripts SUN5687 Moscone South – Room 159 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Stewart Bryson Kafka, Data Streaming, and Analytics Microservices SUN5693 Moscone South – Room 157 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Tariq Farooq,Francisco Munoz Alvarez Under a Cloudy Sky: The Different Types of Skies SUN4396 Moscone South – Room 313 01-OCT-2017 13:45 01-OCT-2017 13:45
  • Tim Gorman Linux/UNIX Tools for Oracle DBAs SUN5694 Moscone South – Room 151 01-OCT-2017 13:45 01-OCT-2017 13:45

Sunday 14:45

  • Alex Nuijten Life After Oracle Forms: The Move to Oracle Application Express SUN6274 Marriott Marquis (Yerba Buena Level) – Salon 4-6 01-OCT-2017 14:45 01-OCT-2017 14:45
  • Ami Aharonovich Exploring Oracle Database Partitioning New Features and Best Practices SUN5065 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 14:45 01-OCT-2017 14:45
  • Markus Flechtner Application Containers: Multitenancy for Database Applications SUN3110 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 01-OCT-2017 14:45 01-OCT-2017 14:45
  • Nelson Calero Redefining Tables Online Without Surprises SUN3541 Moscone South – Room 314 01-OCT-2017 14:45 01-OCT-2017 14:45

Sunday 15:45

  • Heli Helskyaho,Elise Valin-Raki,Tuomas Pystynen Oracle Private Cloud Built with Engineered Systems in Fennia Insurance Company SUN4487 Marriott Marquis (Golden Gate Level) – Golden Gate A 01-OCT-2017 15:45 01-OCT-2017 15:45
  • John King Oracle Database 12c New Features for Developers and DBAs SUN5654 Moscone South – Room 157 01-OCT-2017 15:45 01-OCT-2017 15:45
  • Martin D’Souza Explore Oracle Application Express APIs SUN6244 Marriott Marquis (Yerba Buena Level) – Salon 4-6 01-OCT-2017 15:45 01-OCT-2017 15:45
  • Mauro Pagano Adaptive Plans on Oracle Database 12c SUN5664 Moscone South – Room 155 01-OCT-2017 15:45 01-OCT-2017 15:45
  • Rita Nunez Managing Oracle Database in Oracle Database Exadata Express Cloud Service SUN3575 Moscone South – Room 314 01-OCT-2017 15:45 01-OCT-2017 15:45

Monday 11:00

  • Burt Clouse,Michael Sukhenko,Sreedhar Bobbadi Database Management with Gold Images: Easily Provision, Patch, and Upgrade CON6706 Moscone West – Room 3004 02-OCT-2017 11:00 02-OCT-2017 11:00
  • Geertjan Wielenga,Lucas Jellema,John Brock,Andrejus Baranovskis,Filip Huysmans,Luc Bors Getting Started with Oracle JavaScript Extension Toolkit CON3935 Moscone West – Room 3001 02-OCT-2017 11:00 02-OCT-2017 11:00
  • Mohamed Taman Effective Design of RESTful APIs CON1206 Moscone West – Room 2005 02-OCT-2017 11:00 02-OCT-2017 11:00
  • Perry Harrington,Mirko Ortensi MySQL DBA Primer HOL7298 Hilton San Francisco Union Square (Ballroom Level) – Continental Ballroom 7 02-OCT-2017 11:00 02-OCT-2017 11:00
  • Sergiusz Wolicki New Paradigm for Case-Sensitivity and Collation on Oracle Database 12c Release 2 CON6551 Moscone West – Room 3008 02-OCT-2017 11:00 02-OCT-2017 11:00

Monday 11:30

  • I’ll be in the theatre area in the Demo Grounds doing a lightning talk on Multi-tenant and In-Memory in 12c Release 2

Monday 12:15

  • Geertjan Wielenga,Lucas Jellema,John Brock,Andrejus Baranovskis,Filip Huysmans,Luc Bors Java Versus JavaScript for the Enterprise? CON6063 Moscone West – Room 2007 02-OCT-2017 12:15 02-OCT-2017 12:15
  • John Heimann How Safe Am I? Choosing the Right Security Tool for the Job CON6303 Moscone South – Room 313 02-OCT-2017 12:15 02-OCT-2017 12:15
  • Shay Shmeltzer,Patrik Frankovic Introduction to Oracle’s Visual Low-Code Development Platform CON6510 Moscone West – Room 3001 02-OCT-2017 12:15 02-OCT-2017 12:15

Monday 13:15

  • Avi Miller,Marshall Weymouth,Brian Grad Tips for Securing Your Docker Containers CON6462 Marriott Marquis (Yerba Buena Level) – Salon 13 02-OCT-2017 13:15 02-OCT-2017 13:15
  • Bruce Lowenthal,Juan Perez-Etchegoyen To Patch or Not to Patch: Answering the CPU Question CON6302 Moscone South – Room 313 02-OCT-2017 13:15 02-OCT-2017 13:15
  • Eleanor Meritt New Release Model for Oracle Database CON6550 Moscone West – Room 3014 02-OCT-2017 13:15 02-OCT-2017 13:15
  • Jeff Smith What’s New in Oracle REST Data Services and Oracle SQL Developer CON6669 Moscone West – Room 3012 02-OCT-2017 13:15 02-OCT-2017 13:15

Monday 14:15

  • George Lumpkin What’s New in Oracle Database 12c for Data Warehousing CON6643 Moscone West – Room 3010 02-OCT-2017 14:15 02-OCT-2017 14:15
  • Heli Helskyaho,Elise Valin-Raki,Tuomas Pystynen Design Your Databases Straight to the Cloud CON1846 Moscone West – Room 3007 02-OCT-2017 14:15 02-OCT-2017 14:15
  • Juan Loaiza Oracle Exadata: Disruptive New Memory and Cloud Technologies CON6661 Moscone West – Room 3014 02-OCT-2017 14:15 02-OCT-2017 14:15
  • Roel Hartman Docker for Dummies CON1049 Moscone West – Room 3005 02-OCT-2017 14:15 02-OCT-2017 14:15

Monday 15:15

  • Alan Williams,Keith Wilcox NEW FEATURE! Centralized Database User Management Using Active Directory CON6574 Moscone West – Room 3011 02-OCT-2017 15:15 02-OCT-2017 15:15
  • Jeff Smith Oracle SQL Developer: GUI, CLI, or Browser? CON6670 Moscone West – Room 3012 02-OCT-2017 15:15 02-OCT-2017 15:15


Monday 15:30

  • I’ll be in the theatre area in the Demo Grounds doing a lightning talk on hierarchy SQL and the WITH clause

Monday 16:45

  • Guido Schmutz Apache Kafka: Scalable Message Processing and More CON6156 Moscone West – Room 2004 02-OCT-2017 16:30 02-OCT-2017 16:30
  • Chaitanya Koratamaddi Oracle Application Express 5.2 New Features CON6739 Moscone West – Room 3002 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Chaitanya Pydimukkala,Sai Devabhaktuni,Bobby Curtis Oracle Data Integration Platform Empowers Enterprise-Grade Big Data Solutions CON6893 Marriott Marquis (Golden Gate Level) – Golden Gate B 02-OCT-2017 16:45 02-OCT-2017 16:45
  • George Lumpkin Data Warehousing for Everybody: Oracle Data Warehouse Cloud Service CON6647 Moscone West – Room 3004 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Julian Dontcheff Oracle RAC 12c Release 2: Operational Best Practices CON6684 Moscone West – Room 3014 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Lisa Considine Five Inexpensive Tricks to Accelerate Oracle Database Using x86 CON7580 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Nancy Ikeda,Christopher Jones,Anthony Tuininga Oracle Net Services 12c: Best Practices for Database Performance and Scalability CON6718 Moscone West – Room 3011 02-OCT-2017 16:45 02-OCT-2017 16:45
  • Santanu Datta,Christian Shay Oracle and .NET: Intro and What’s New CON6722 Moscone West – Room 3012 02-OCT-2017 16:45 02-OCT-2017 16:45

Monday 17:45

  • Lawrence Mcintosh,Ken Kutzer Best Practices for implementing Database Security CON6434 Marriott Marquis (Yerba Buena Level) – Nob Hill A/B 02-OCT-2017 17:45 02-OCT-2017 17:45
  • Michael Hichwa Low-Code Rapid Application Development CON6685 Moscone West – Room 3002 02-OCT-2017 17:45 02-OCT-2017 17:45
  • Pedro Lopes NEW! Database Security Assessment Tool Discovers Top Security Risks CON6575 Moscone West – Room 3011 02-OCT-2017 17:45 02-OCT-2017 17:45

Tuesday 11:00

  • Chris Richardson ACID Is So Yesterday: Maintaining Data Consistency with Sagas CON2832 Moscone West – Room 2005 03-OCT-2017 11:00 03-OCT-2017 11:00
  • Hermann Baer,Yasin Baskan What Everybody Should Know About Oracle Partitioning CON6638 Moscone West – Room 3008 03-OCT-2017 11:00 03-OCT-2017 11:00
  • Julian Dontcheff,Anil Nair,Markus Michalewicz Oracle RAC 12c Release 2 and Cluster Architecture Internals CON6690 Moscone West – Room 3004 03-OCT-2017 11:00 03-OCT-2017 11:00

Tuesday 11:30

  • Dominic Giles, Maria Colgan Oracle Database and the Internet of Things CON7121 Moscone West – Room 3011 03-OCT-2017 11:30 03-OCT-2017 11:30
  • Colm Divilly Oracle REST Data Services/Oracle Database REST API CON6662 Moscone West – Room 3012 03-OCT-2017 11:30 03-OCT-2017 11:30
  • Gp Gongloor,Sriram Vrinda,Konstantin Kerekovski SQL Tuning for Expert DBAs CON7091 Moscone West – Room 3018 03-OCT-2017 11:30 03-OCT-2017 11:30
  • Tirthankar Lahiri,Ananth Raghavan,Doug Hood Oracle Database In-Memory Deep Dive: Past, Present, and Future CON6584 Moscone West – Room 3014 03-OCT-2017 11:30 03-OCT-2017 11:30

Tuesday 12:45

  • Bryn Llewellyn Ten Rules for Doing a PL/SQL Performance Experiment CON7639 Moscone West – Room 3011 03-OCT-2017 12:45 03-OCT-2017 12:45
  • Colm Divilly Securing Your RESTful Services CON6667 Moscone West – Room 3012 03-OCT-2017 12:45 03-OCT-2017 12:45

Tuesday 13:15

  • Connor Mcdonald Leaner, Faster Code with Advanced SQL Techniques CON3557 Moscone West – Room 2002 03-OCT-2017 13:15 03-OCT-2017 13:15 RubberStamp-Must-See

Tuesday 15:00

  • Dan Mcghan Creating RESTful Web Services the Easy Way with Node.js CON1242 Moscone West – Room 2002 03-OCT-2017 15:00 03-OCT-2017 15:00

Tuesday 15:45

  • Dan Mcghan,Chris Saxon 12 Things Developers Will Love About Oracle Database 12c Release 2 CON6734 Moscone West – Room 3014 03-OCT-2017 15:45 03-OCT-2017 15:45
  • Martin Gubar Oracle Big Data SQL: Roadmap to High-Performance Data Virtualization CON6644 Moscone West – Room 3010 03-OCT-2017 15:45 03-OCT-2017 15:45

Tuesday 16:45

  • Stephan Haisley,Sai Krishnamurthy Maximizing Availability for Oracle GoldenGate Microservices CON6570 Moscone West – Room 3014 03-OCT-2017 16:45 03-OCT-2017 16:45

Tuesday 17:45

  • Connor Mcdonald Using Advanced SQL Techniques for Faster Applications CON6735 Moscone West – Room 3014 03-OCT-2017 17:45 03-OCT-2017 17:45 RubberStamp-Must-See

Wednesday 09:45

  • Nancy Ikeda,Christopher Jones,Anthony Tuininga Python and Oracle Database 12c: Scripting for the Future HOL7605 Hilton San Francisco Union Square (Ballroom Level) – Continental Ballroom 6 04-OCT-2017 09:45 04-OCT-2017 09:45

Wednesday 11:00

  • Greg Drobish Oracle Database 12c Storage Features that Cut Database Tuning Time by 67 Percent CON4671 Marriott Marquis (Yerba Buena Level) – Nob Hill C/D 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Juergen Mueller,Andrew Holdsworth Why the Fastest IO Is Still the One You Don’t Do CON6560 Moscone West – Room 3012 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Mark Drake NoSQL Development and JSON Support in the Next Generation of Oracle Database CON6636 Moscone West – Room 3011 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Mark Fallon Inside the Head of a Database Hacker CON6572 Moscone West – Room 3014 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Thomas Baby Oracle Multitenant: Ask the Architects CON6731 Moscone West – Room 3010 04-OCT-2017 11:00 04-OCT-2017 11:00
  • Tim Goh,Michael Schulman NoSQL Data Modeling for RDBMS Users CON6545 Moscone West – Room 3008 04-OCT-2017 11:00 04-OCT-2017 11:00

Wednesday 12:00

  • Gregg Christman Get the Best Out of Oracle Compression CON6577 Moscone West – Room 3006 04-OCT-2017 12:00 04-OCT-2017 12:00
  • Pravin Jha Oracle Database 12.2: Lockdown Profiles CON6424 Moscone West – Room 3016 04-OCT-2017 12:00 04-OCT-2017 12:00
  • Gerald Venzl, Maria Colgan Oracle Database Features Every Developer Should Know About CON6558 Moscone West – Room 3020 04-OCT-2017 12:00 04-OCT-2017 12:00

Wednesday 13:00

  • Connor Mcdonald War of the Worlds: DBAs Versus Developers CON6737 Moscone West – Room 3014 04-OCT-2017 13:00 04-OCT-2017 13:00 RubberStamp-Must-See

Wednesday 14:00

  • Mike Dietrich,Jay Barnhart Migrate a 100 TB Database Cross-Platform in Less Than a Day CON6467 Moscone West – Room 3004 04-OCT-2017 14:00 04-OCT-2017 14:00
  • Dominic Giles, Maria Colgan Five Things You Might Not Know About Oracle Database CON6631 Moscone West – Room 3020 04-OCT-2017 14:00 04-OCT-2017 14:00

Wednesday 14:45

  • Dan Mcghan Building Real-time Data in Web Applications with Node.js CON1240 Moscone West – Room 2003 04-OCT-2017 14:45 04-OCT-2017 14:45

Wednesday 15:30

  • Bryn Llewellyn Guarding Your Data Behind a Hard Shell PL/SQL API CON6633 Moscone West – Room 3014 04-OCT-2017 15:30 04-OCT-2017 15:30

Wednesday 16:30

  • Scott Rotondo Top Five Tips for Building Secure Database Applications CON6578 Moscone West – Room 3011 04-OCT-2017 16:30 04-OCT-2017 16:30
  • Ashish Agrawal SQL Analytics: Using Automatic Workload Repository and Active Session History CON7065 Moscone West – Room 3018 04-OCT-2017 17:00 04-OCT-2017 17:00
  • Tim Hall Make the RDBMS Relevant Again with RESTful Web Services and JSON CON7358 Moscone West – Room 2002 04-OCT-2017 16:30 04-OCT-2017 16:30

Wednesday 17:30

  • John King Blockchain? What Is Blockchain? Why Do I Care? CON2276 Moscone West – Room 2010 04-OCT-2017 17:30 04-OCT-2017 17:30
  • Lucas Jellema,Jurgen Leijer JavaScript to SQL: Diagnose Application Issues Across a Hybrid Infrastructure CON6914 Moscone West – Room 2024 04-OCT-2017 17:30 04-OCT-2017 17:30
  • Stewart Bryson Practical Advice from a Developer Using a Cloud-Based DevOps Environment CON7377 Moscone West – Room 2002 04-OCT-2017 17:30 04-OCT-2017 17:30

Thursday 10:45

  • Beda Hammerschmidt,Vikas Arora,Maxim Orgiyan The Top 10 Dos and Don’ts of JSON Processing in a Database CON4647 Marriott Marquis (Golden Gate Level) – Golden Gate C1/C2 05-OCT-2017 10:45 05-OCT-2017 10:45

Thursday 13:45

  • Brendan Tierney Is SQL the Best Language for Statistics and Machine Learning? CON7350 Marriott Marquis (Golden Gate Level) – Golden Gate C3 05-OCT-2017 13:45 05-OCT-2017 13:45


Thursday 14:45

  • Heli Helskyaho,Elise Valin-Raki,Tuomas Pystynen The Basics of Machine Learning CON7354 Marriott Marquis (Golden Gate Level) – Golden Gate C1/C2 05-OCT-2017 14:45 05-OCT-2017 14:45
  • Lucas Jellema,Jurgen Leijer How End User Monitoring and Selenium Can Help Troubleshoot Customer issues CON6912 Marriott Marquis (Golden Gate Level) – Golden Gate C3 05-OCT-2017 14:45 05-OCT-2017 14:45

 

And as I mentioned before… the whole AskTOM team will be at OpenWorld for the week.

Data Gravity and the Network

The network has often been viewed as “no man’s land” for the DBA.  Our tools may identify network latency, but they rarely go into any detail, designating the network as outside our jurisdiction.

As we work through data gravity, i.e. the weight of data and the pull of applications, services, etc. toward data sources, we have to inspect what connects them to the data and slows them down.  Yes, the network.


We can’t begin to investigate the network without spending some time on Shannon’s law, also known as the Shannon-Hartley theorem.  The equation gives the maximum capacity (transmission bit rate) that can be achieved over a given channel with certain noise characteristics and bandwidth.
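For reference, the usual statement of the theorem (in LaTeX notation) is:

C = B \log_2\left(1 + \frac{S}{N}\right)

where C is the channel capacity in bits per second, B is the channel bandwidth in hertz, and S/N is the signal-to-noise ratio expressed as a linear power ratio (not in decibels).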

The ideas behind it have been around for quite some time in the telephony world, going back to a 1903 patent by W. M. Miner on increasing the capacity of transmission lines.  Over the years, multiplexing and quantizers were introduced, but the main computation has stayed the same:

  • A given communication (or data) system has a maximum rate of information C, known as the channel capacity
  • If the transmission information rate R is less than C, then the data transmission in the presence of noise can be made to happen with arbitrarily small error probabilities by using intelligent coding techniques
  • To get lower error probabilities, the encoder has to work on longer blocks of signal data. This entails longer delays and higher computational requirements

In layman’s terms: the data is only going to go as fast as it can without hitting an error threshold.

As DBAs, we inspect waits in the form of latency, and latency is really just a measure of time; you should always tune for time, or you’re just wasting it. Latency is the closest measure to speed when you compare it to the distance between two points, and when we’re discussing a data source and an application, those are your two points. This is where it gets interesting.  Super-low-latency networks such as InfiniBand, which is common in engineered systems like Exadata, aren’t necessarily huge-bandwidth networks.  In comparison, standard networks can carry much higher volume, but they can’t talk as “fast” on a packet-by-packet basis. These networks compete by providing many parallel lanes, but, as we know, an individual lane simply won’t be able to keep up.
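To make that trade-off concrete, here is a back-of-the-envelope sketch in Python. The link numbers are purely illustrative assumptions, not measurements of any real network: one link has very low latency, the other has more aggregate bandwidth (think many parallel lanes) but a much higher round-trip cost.

# Rough transfer-time model: time = latency + payload / bandwidth (illustrative numbers only)
def transfer_time(payload_bytes, latency_s, bandwidth_bytes_per_s):
    return latency_s + payload_bytes / bandwidth_bytes_per_s

links = {
    "low-latency link": dict(latency_s=0.000005, bandwidth_bytes_per_s=1_250_000_000),   # ~5 us, ~10 Gbit/s
    "high-volume link": dict(latency_s=0.000500, bandwidth_bytes_per_s=5_000_000_000),   # ~500 us, many parallel lanes
}

for size in (8 * 1024, 1024 ** 3):                     # a chatty 8 KB round trip versus a 1 GB bulk copy
    for name, link in links.items():
        print(f"{name:<18} {size:>10} bytes: {transfer_time(size, **link) * 1000:9.3f} ms")

Small, chatty transfers are dominated by latency, while bulk transfers are dominated by bandwidth, which is why the “right” network depends entirely on the workload.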

Now I’m not going to go into the further areas of this theorem, including the Shannon limit, but the network, especially with the introduction of the cloud, has reared its ugly head as the newest bottleneck.  There’s a very good reason cloud providers like AWS have come up with Snowmobile.  On every cloud project I’ve been on, the network has had a significant impact on its success.  My advice to all DBAs is to enhance your knowledge of networking.  If you didn’t respect your network administrator before, you will after you do a little research…

Partition-wise join

So just what is a “partition-wise” join?  We will use a metaphor to hopefully Smile explain the benefit.


Let’s say two people, Logan and Shannon, decide to move in together.  If each of them already has an existing residence, they will both have a lot of the common items that you find in any household.  So they have a decision to make – do they keep two of everything, or do they have a bit of a “cull” of the things they have in common?  In this imaginary scenario, we will focus on household items in the bathroom and the kitchen.  Logan grabs a set of kitchen knives in a knife block, calls Shannon and asks: “Hey Shannon, do you already have a knife block?”

What do you think Shannon will do?  Search the entire house for an existing knife block?  Of course not.  If there is a knife block, then the only place it will be located will be in the kitchen.  In fact, when matching up the items throughout the house, Shannon and Logan will restrict their investigation to the room that makes sense for the item in question.  That is just common sense – why would anyone search in the bathroom for (say) forks and spoons?  It would just be a waste of effort.

(Editor’s Note:  Anyone with young children will of course dispute this metaphor, stating quite correctly that you can probably find every possible household item in every possible room, and probably outside as well Smile, but we’ll omit that possibility for the sake of this discussion)


And that is exactly what a partition-wise join enables us to do in the database.  If two tables are partitioned with the same definition, and we are joining on the partition key, then that definition guarantees that for a row in one table with partition key “K” and hence partition “P”, we only need to seek that row in the same partition in the table we are joining to (where “same” is based on the partitioning definition).  It is the partitioning equivalent of “only searching in the kitchen and not the bathroom”.  We can see this via the execution plan when doing such a join.  Let’s create two tables with equal partition definitions and then join on the partition key.


SQL> --
SQL> -- Example 1
SQL> --
SQL>
SQL> drop table t1 purge;

Table dropped.

SQL> drop table t2 purge;

Table dropped.

SQL>
SQL> create table t1 ( x int, y int )
  2  partition by range ( x )
  3  (
  4  partition p_kitchen values less than (10000),
  5  partition p_bathroom values less than (20000),
  6  partition p_dining values less than (30000)
  7  );

Table created.

SQL>
SQL>
SQL> create table t2 ( x int, y int )
  2  partition by range ( x )
  3  (
  4  partition p_kitchen values less than (10000),
  5  partition p_bathroom values less than (20000),
  6  partition p_dining values less than (30000)
  7  );

Table created.

SQL>
SQL>
SQL> insert into t1 select rownum, rownum from dual connect by level < 30000;

29999 rows created.

SQL> insert into t2 select * from t1;

29999 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','t1')

PL/SQL procedure successfully completed.

SQL> exec dbms_stats.gather_table_stats('','t2')

PL/SQL procedure successfully completed.

SQL>
SQL> --
SQL> set autotrace traceonly explain
SQL> select count(t1.y), count(t2.y)
  2  from t1,t2
  3  where t1.x = t2.x;

Execution Plan
----------------------------------------------------------
Plan hash value: 3155849676

---------------------------------------------------------------------------------------------
| Id  | Operation            | Name | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT     |      |     1 |    20 |  1641   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE      |      |     1 |    20 |            |          |       |       |
|   2 |   PARTITION RANGE ALL|      | 29999 |   585K|  1641   (1)| 00:00:01 |     1 |     3 |
|*  3 |    HASH JOIN         |      | 29999 |   585K|  1641   (1)| 00:00:01 |       |       |
|   4 |     TABLE ACCESS FULL| T1   | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   5 |     TABLE ACCESS FULL| T2   | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
---------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   3 - access("T1"."X"="T2"."X")

SQL> set autotrace off
SQL>

The key part of the execution plan here is that the HASH JOIN is occurring within (or “under”) the PARTITION RANGE ALL iteration.  This can be interpreted as: “Start with the first partition in each table and do a hash join on that partition.  Then move on to the next partition and do a hash join on that partition”, and so on.  This is efficient on resources because at no point are we trying (and obviously failing) to join a row from table T1 partition P_KITCHEN to table T2 partition P_BATHROOM or P_DINING.  Each hash join is a smaller operation and hence also more likely to be completed in the available PGA allocation for that session.  Also, when it comes to running such a query in parallel, each parallel slave can tackle a partition in isolation from the other slaves.
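If it helps to see the same idea outside the database, here is a toy Python sketch (purely illustrative, not Oracle internals): because both tables are partitioned on x with the same boundaries, each hash table only ever has to hold one partition’s worth of rows, and we never probe across partitions.

# Toy partition-wise hash join: join partition k of t1 only to partition k of t2.
def partition_of(x):
    return x // 10000                      # same boundaries as the tables above: 10000, 20000, 30000

def split(rows):
    parts = {}
    for row in rows:
        parts.setdefault(partition_of(row[0]), []).append(row)
    return parts

t1 = [(x, x) for x in range(1, 30000)]
t2 = [(x, x) for x in range(1, 30000)]
p1, p2 = split(t1), split(t2)

matches = 0
for k in p1:                               # one small hash table per pair of matching partitions
    lookup = {row[0] for row in p2.get(k, [])}
    matches += sum(1 for row in p1[k] if row[0] in lookup)

print(matches)                             # 29999 joined rows, without ever probing across partitions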

If the partitions do not align (see the Editors note above Smile), then our join will not be as efficient.


SQL> --
SQL> -- Example 2
SQL> --
SQL>
SQL>
SQL> drop table t2 purge;

Table dropped.

SQL> create table t2 ( x int, y int )
  2  partition by range ( x )
  3  (
  4  partition p1 values less than (15000),
  5  partition p3 values less than (30000)
  6  );

Table created.

SQL>
SQL> --
SQL> -- all partitions do NOT align, so we do NOT see partition-wise join
SQL> --
SQL>
SQL> insert into t2 select * from t1;

29999 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','t2')

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly explain
SQL> select count(t1.y), count(t2.y)
  2  from t1,t2
  3  where t1.x = t2.x;

Execution Plan
----------------------------------------------------------
Plan hash value: 666786458

---------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |     1 |    20 |  1369   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE               |         |     1 |    20 |            |          |       |       |
|*  2 |   HASH JOIN                   |         | 29999 |   585K|  1369   (1)| 00:00:01 |       |       |
|   3 |    PART JOIN FILTER CREATE    | :BF0000 | 29999 |   585K|  1369   (1)| 00:00:01 |       |       |
|   4 |     PARTITION RANGE ALL       |         | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   5 |      TABLE ACCESS FULL        | T1      | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   6 |    PARTITION RANGE JOIN-FILTER|         | 29999 |   292K|   548   (1)| 00:00:01 |:BF0000|:BF0000|
|   7 |     TABLE ACCESS FULL         | T2      | 29999 |   292K|   548   (1)| 00:00:01 |:BF0000|:BF0000|
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."X"="T2"."X")

Note
-----
   - this is an adaptive plan

SQL> set autotrace off
SQL>
SQL>

The key element here is that the HASH JOIN now sits above the cycling through all of the partitions.  In earlier releases of Oracle, you would not see the line containing the :BF0000, so it would be a simple join across all the rows as if the tables were not partitioned at all.  But when the partitions do not align, things are slightly better in modern releases.  We use a “Bloom filter” (hence the :BF prefix) to reduce the overhead of joining the two tables.  Since I’m using metaphors in this post, think of “phoning ahead” to the cinema to see if there are seats available for your favourite movie.  If the cinema owner says the movie is sold out, you have saved yourself a car trip. But just because the owner says there are seats available, it is still possible you might drive there and find that the movie has sold out during that time.  A Bloom filter is like phoning ahead – there’s a good chance you can avoid some work, but it is not a guarantee.  You can read about Bloom filters here in a great whitepaper by Christian Antognini.
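If you would like to see the “phoning ahead” idea in code, here is a minimal toy Bloom filter in Python. It is only an illustration of the concept (the sizes and hash choices are arbitrary assumptions); Oracle’s own implementation is, of course, far more sophisticated.

# Toy Bloom filter: answers "definitely not present" or "possibly present".
import hashlib

class BloomFilter:
    def __init__(self, size_bits=4096, hashes=3):
        self.size, self.hashes, self.bits = size_bits, hashes, 0

    def _positions(self, value):
        for i in range(self.hashes):
            digest = hashlib.sha256(f"{i}:{value}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, value):
        for p in self._positions(value):
            self.bits |= 1 << p

    def might_contain(self, value):
        return all(self.bits & (1 << p) for p in self._positions(value))

bf = BloomFilter()
for key in range(15000, 15100):        # keys we know exist on the other side of the join
    bf.add(key)

print(bf.might_contain(15042))         # True: present keys are always reported as "possibly present"
print(bf.might_contain(5))             # almost certainly False; a False answer guarantees absence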

Note that all of the partitions must align for a partition-wise join.  Here is an example where the first three partitions are in alignment, with boundaries of 10000, 20000 and 30000, but our second table T2 has an additional partition defined.  Once again, we fall back to the Bloom filter option.


SQL> --
SQL> -- Example 3
SQL> --
SQL> drop table t2 purge;

Table dropped.

SQL> create table t2 ( x int, y int )
  2  partition by range ( x )
  3  (
  4  partition p1 values less than (10000),
  5  partition p2 values less than (20000),
  6  partition p3 values less than (30000),
  7  partition p4 values less than (40000)
  8  );

Table created.

SQL>
SQL> --
SQL> -- all partitions do NOT align, so we do NOT see partition-wise join
SQL> --
SQL>
SQL> insert into t2 select rownum, rownum from dual connect by level < 40000;

39999 rows created.

SQL> commit;

Commit complete.

SQL> exec dbms_stats.gather_table_stats('','t2')

PL/SQL procedure successfully completed.

SQL> set autotrace traceonly explain
SQL> select count(t1.y), count(t2.y)
  2  from t1,t2
  3  where t1.x = t2.x;

Execution Plan
----------------------------------------------------------
Plan hash value: 666786458

---------------------------------------------------------------------------------------------------------
| Id  | Operation                     | Name    | Rows  | Bytes | Cost (%CPU)| Time     | Pstart| Pstop |
---------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT              |         |     1 |    20 |  1913   (1)| 00:00:01 |       |       |
|   1 |  SORT AGGREGATE               |         |     1 |    20 |            |          |       |       |
|*  2 |   HASH JOIN                   |         | 29999 |   585K|  1913   (1)| 00:00:01 |       |       |
|   3 |    PART JOIN FILTER CREATE    | :BF0000 | 29999 |   585K|  1913   (1)| 00:00:01 |       |       |
|   4 |     PARTITION RANGE ALL       |         | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   5 |      TABLE ACCESS FULL        | T1      | 29999 |   292K|   820   (1)| 00:00:01 |     1 |     3 |
|   6 |    PARTITION RANGE JOIN-FILTER|         | 39999 |   390K|  1093   (1)| 00:00:01 |:BF0000|:BF0000|
|   7 |     TABLE ACCESS FULL         | T2      | 39999 |   390K|  1093   (1)| 00:00:01 |:BF0000|:BF0000|
---------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - access("T1"."X"="T2"."X")

Note
-----
   - this is an adaptive plan

SQL> set autotrace off
SQL>
SQL>
SQL>


So faster queries on partitioned tables are not just about partition pruning.  Partition-wise joins can also have a beneficial impact on query response times.

Where in the World is Goth Geek Girl, Week #38

As we come upon Oracle Open World at the end of the month, I’m busy with a number of events and tasks.

I spoke at the Microservices, Containers and DevOps Summit in Denver yesterday and will be traveling to San Diego, California to speak at SQL Saturday #661 this weekend.  I love the Microsoft events, but I’m not sure my family loves the loss of my weekend time with them half as much.

At the end of the month, Delphix will be publishing my DevOps book, based on my series of blog posts and webinars on DevOps and the DBA.  It will be available as an eBook on Delphix’s website, and a hardcopy will be available at a few events in the coming year, including at Oak Table World 2017 at Oracle Open World.  If you do have the opportunity to read the book, I would love some feedback for the upcoming second edition, where I really want to get more into the specifics of how to do DevOps with database development.  Most DevOps material focuses on the application tier, yet most DevOps environments span the app, database and OS tiers.

I’m also looking forward to Oracle Open World this year.  I’m finished with my Microsoft PASS Summit slides and presentation, but still working on the ones for Oracle Open World!  You’d think with Summit a month after OOW, it would be the other way around, but that’s just not how it’s played out.  This year at OOW, I’m presenting on Super Sunday with Mike Donovan from DBVisit.  We’re going to be virtualizing and replicating to the cloud with our demo, so it’s been a lot of fun creating the demo environment.  Our story is intact; it’s just a matter of finding the time and a common timezone to get everything together (Mike is in New Zealand…:))

I’ll also be presenting at the Oracle Women in Tech event on Sunday.  Thanks to Laura Ramsey, who has ensured this has taken more precedence at Oracle; last year we had a great panel session in the midst of the chaos!  Laura’s the bomb and has made sure the importance of including women in the Oracle technical community is shared with the community at large.

On Monday and Tuesday, I’ll spend most of my time at Oak Table World, which, as many of you know, is the event we Oakies put on next door to the big event, showcasing sessions that may not be of interest to Oracle but are of interest to us geeks.  We’ll be at the Children’s Creativity Museum next to the carousel, and Jeremiah Wilton has done an awesome job organizing this year’s event.  Having organized it last year, I know how much work it is and I don’t envy him, so if you see him, make sure to give him a pat on the back!


As always, there are a ton of events, parties and get-togethers at Oracle Open World.  It is the largest annual Oracle event in the world.  I look forward to seeing many friends, and if you are one of those who won’t be able to make it this year, know that you will be missed…:(

 




rollback internals

While researching redo log internals for V00D00, we had to face the fact that we know shit about real transactional behavior. When I say "real", I mean under the hood.
Even with very simple stuff like COMMIT and ROLLBACK, we were constantly amazed by the internal mechanisms.

Today let’s put ROLLBACK under investigation. According to the documentation:

The ROLLBACK statement ends the current transaction and undoes any changes made during that transaction.

Cool. But what does it mean? First of all, you have to realize that all changes in the redo logs come in the form of a REDO RECORD, which has its own address, known as the RBA or RS_ID.

A sample RS_ID (RBA) looks like this: 0x00000a.00008c0f.006c

  • 00000a = 10 : this is the sequence#
  • 00008c0f = 35855 : this is the block number within the redo log
  • 006c = 108 : this is the offset within the redo block where this redo record begins
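If you want to sanity-check that decoding yourself, a couple of lines of Python will do it (assuming the dot-separated hex layout shown above):

# Decode an RS_ID / RBA of the form sequence#.block#.offset (all hex fields)
def decode_rba(rba):
    seq, block, offset = (int(part, 16) for part in rba.split("."))
    return seq, block, offset

print(decode_rba("0x00000a.00008c0f.006c"))   # (10, 35855, 108)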

Each redo record consists of one or more change vectors, and each change vector has its own op code. Check out this article by Jonathan Lewis: https://jonathanlewis.wordpress.com/2017/07/25/redo-op-codes/

If you update the EMPLOYEES table and change 2 rows, you will usually get one redo record with at least 4 change vectors – 2 for the actual updates (the redo) and 2 for reversing the operation (the undo).

You can see this in redo log or archivelog dumps:

Let’s modify a few rows:

SQL> update hr.employees
  2  set salary=salary*2
  3  where department_id=100;

6 rows updated.

SQL> commit;

Commit complete.

Now I’ll make a dump of an archivelog which contains those changes:

SQL> alter system switch logfile;

System altered.

SQL> get get_arch
  1  with v_arch as
  2  (
  3   select name, rank() over (order by sequence# desc) as rn
  4   from v$archived_log
  5  )
  6  select name
  7  from v_arch
  8* where rn=1
SQL> /

NAME
--------------------------------------------------------------------------------
+DATA/SKIPER/ARCHIVELOG/2017_09_20/thread_1_seq_13.303.955209887

SQL> alter system dump logfile '+DATA/SKIPER/ARCHIVELOG/2017_09_20/thread_1_seq_13.303.955209887';

System altered.

SQL> get tracefile
  1* select value from v$diag_info where name='Default Trace File'
SQL> /

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_24204.trc

The dump file is quite big and hard to analyze at first, so I wrote a small and simple (primitive, even) tool in Python to help you analyze it. The tool can be downloaded from here.

It’s tested against Python 2.7.5. You can use it like this:

[oracle@skiper ~]$ python logfile_dump_parser.py
parse file /path/to/a/redo/dump object_id_or_name   	 if you want to parse file (duh!)
connect to oracle user/password@ip:port/service     	 if you want to connect to db to get table names instead ids
exit to quit
VOODOO >

At the beginning you will get a prompt – you can tell the tool to parse a dump file, providing the DATA_OBJECT_ID of the table you’d like to search for in the file, or you can connect to Oracle (the cx_Oracle module for Python is required) with a simple connect string and parse the dump file using the name of the table.
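If you would rather roll your own, a few lines of Python get you surprisingly far. The sketch below is not the V00D00 parser; it is a hypothetical starting point (call it scan.py) that assumes change-vector lines in the dump carry OBJ:<data_object_id> and OP:<layer>.<code> tokens and that each record starts with a "REDO RECORD" header line, a layout that can vary between Oracle versions.

# Minimal redo-dump scanner (illustrative sketch only, not the author's tool)
import re
import sys

def scan(dump_path, data_object_id):
    obj_token = f"OBJ:{data_object_id}"
    current_record = None
    with open(dump_path) as f:
        for line in f:
            if line.startswith("REDO RECORD"):
                current_record = line.strip()          # remember the record header (contains the RBA)
            elif obj_token in line and "OP:" in line:
                op = re.search(r"OP:(\d+\.\d+)", line)
                print(current_record or "(no record header seen yet)")
                print("    op", op.group(1) if op else "?", "->", line.strip())

if __name__ == "__main__":
    scan(sys.argv[1], sys.argv[2])                     # python scan.py <dump_trace_file> <data_object_id>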

VOODOO > connect to oracle system/oracle@localhost:1521/skiper
VOODOO > parse file /u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_24204.trc EMPLOYEES

Redo record block: 42 offset: 16 [rs_id: 0x00000d.0000002a.0010]
	 change 1 operation UPDATE xid = 0x0004.019.000004f2
	 change 2 operation UNDO HEADER xid = 0x0004.019.000004f2
	 change 3 operation UPDATE xid = 0x0004.019.000004f2
	 change 4 operation UPDATE xid = 0x0004.019.000004f2
	 change 5 operation UPDATE xid = 0x0004.019.000004f2
	 change 6 operation UPDATE xid = 0x0004.019.000004f2
	 change 7 operation UPDATE xid = 0x0004.019.000004f2
	 change 8 operation COMMIT xid = 0x0004.019.000004f2
	 change 9 operation UNDO xid = 0x0004.019.000004f2
	 change 10 operation UNDO xid = 0x0004.019.000004f2
	 change 11 operation UNDO xid = 0x0004.019.000004f2
	 change 12 operation UNDO xid = 0x0004.019.000004f2
	 change 13 operation UNDO xid = 0x0004.019.000004f2
	 change 14 operation UNDO xid = 0x0004.019.000004f2

As you can see, this simple update changed 6 rows and generated one redo record with 14 change vectors – 6x UPDATE (redo), 6x UNDO, one UNDO HEADER modification and one COMMIT. This whole redo record is a single, atomic operation for the database.

Let’s see what happens when we ROLLBACK this kind of transaction instead.

First we have to repeat the operation and generate an archivelog dump for our parser:

SQL> get upd
  1  update hr.employees
  2  set salary=salary*2
  3* where department_id=100
SQL> /

6 rows updated.

SQL> rollback;

Rollback complete.

SQL> alter system switch logfile;

System altered.

SQL> @get_arch

NAME
--------------------------------------------------------------------------------
+DATA/SKIPER/ARCHIVELOG/2017_09_20/thread_1_seq_29.319.955212079

SQL> alter system dump logfile '+DATA/SKIPER/ARCHIVELOG/2017_09_20/thread_1_seq_29.319.955212079';

System altered.

SQL> @tracefile

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_3601.trc

And now we can parse the logfile with my little tool (note that you only have to connect to the database once per parsing session)

VOODOO > parse file /u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_3601.trc EMPLOYEES

Redo record block: 47 offset: 16 [rs_id: 0x00001d.0000002f.0010]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation UNDO HEADER xid = 0x0002.001.00000615
	 change 3 operation UPDATE xid = 0x0002.001.00000615
	 change 4 operation UPDATE xid = 0x0002.001.00000615
	 change 5 operation UPDATE xid = 0x0002.001.00000615
	 change 6 operation UPDATE xid = 0x0002.001.00000615
	 change 7 operation UPDATE xid = 0x0002.001.00000615
	 change 8 operation UNDO xid = 0x0002.001.00000615
	 change 9 operation UNDO xid = 0x0002.001.00000615
	 change 10 operation UNDO xid = 0x0002.001.00000615
	 change 11 operation UNDO xid = 0x0002.001.00000615
	 change 12 operation UNDO xid = 0x0002.001.00000615
	 change 13 operation UNDO xid = 0x0002.001.00000615

Redo record block: 50 offset: 228 [rs_id: 0x00001d.00000032.00e4]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 50 offset: 424 [rs_id: 0x00001d.00000032.01a8]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 51 offset: 124 [rs_id: 0x00001d.00000033.007c]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 51 offset: 320 [rs_id: 0x00001d.00000033.0140]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 52 offset: 20 [rs_id: 0x00001d.00000034.0014]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 52 offset: 216 [rs_id: 0x00001d.00000034.00d8]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (final) xid = 0x0002.001.00000615
COMMIT [ending rollback]  for transaction 0x0002.001.00000615 [block: 52 offset: 432 rs_id: 0x00001d.00000034.01b0]

Nice! So for each and every modified row, Oracle had to generate a separate redo record with 2 change vectors – one vector is an actual UPDATE built from the UNDO information, which changes the row back to its previous value, and the other is a rollback code (5.6 – ROLLBACK SINGLE STATEMENT (not final) or 5.11 – ROLLBACK SINGLE STATEMENT (final)).

 

  • 5.6 – ROLLBACK SINGLE STATEMENT (not final) – this means that the rollback is in progress and there can be more operations like this
  • 5.11 – ROLLBACK SINGLE STATEMENT (final) – this means that the rollback is finished and there should be no more rollback records

Each complete ROLLBACK is actually finished by a COMMIT (OP:5.4). This is logical, because ROLLBACK generates operations to reverse the previous changes and can then COMMIT the whole transaction.

The interesting thing is partial rollback. Let’s UPDATE 6 records again, but then try to insert a row that will violate a primary key:

SQL> get upd
  1  update hr.employees
  2  set salary=salary*2
  3* where department_id=100
SQL> /

6 rows updated.

SQL> insert into hr.employees
  2  values (100,'Dupa','Blada','DBLADA','666.66.666',sysdate,'ST_CLERK',5600,null,100,100);
insert into hr.employees
*
ERROR at line 1:
ORA-00001: unique constraint (HR.EMP_EMP_ID_PK) violated

Now let’s check the parser:

V00D00 > parse file /u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_6603.trc EMPLOYEES

Redo record sequence: 32 block: 38 offset 16 [rs_id: 0x000020.00000026.0010]
	 change 1 operation UPDATE xid = 0x0005.004.0000061e
	 change 2 operation UNDO HEADER xid = 0x0005.004.0000061e
	 change 3 operation UPDATE xid = 0x0005.004.0000061e
	 change 4 operation UPDATE xid = 0x0005.004.0000061e
	 change 5 operation UPDATE xid = 0x0005.004.0000061e
	 change 6 operation UPDATE xid = 0x0005.004.0000061e
	 change 7 operation UPDATE xid = 0x0005.004.0000061e
	 change 8 operation UNDO xid = 0x0005.004.0000061e
	 change 9 operation UNDO xid = 0x0005.004.0000061e
	 change 10 operation UNDO xid = 0x0005.004.0000061e
	 change 11 operation UNDO xid = 0x0005.004.0000061e
	 change 12 operation UNDO xid = 0x0005.004.0000061e
	 change 13 operation UNDO xid = 0x0005.004.0000061e

Redo record sequence: 32 block: 41 offset 188 [rs_id: 0x000020.00000029.00bc]
	 change 1 operation OP:14.11 xid = 0x0005.004.0000061e

Redo record sequence: 32 block: 41 offset 252 [rs_id: 0x000020.00000029.00fc]
	 change 1 operation OP:13.22 xid = 0x0005.004.0000061e

Redo record sequence: 32 block: 41 offset 328 [rs_id: 0x000020.00000029.0148]
	 change 1 operation UNDO xid = 0x0005.004.0000061e
	 change 2 operation INSERT xid = 0x0005.004.0000061e

Redo record sequence: 32 block: 42 offset 172 [rs_id: 0x000020.0000002a.00ac]
	 change 1 operation DELETE xid = 0x0005.004.0000061e
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0005.004.0000061e
COMMIT  for transaction 0x0005.004.0000061e [sequence: 33 block: 3 offset 16 [rs_id: 0x000021.00000003.0010]

Great! So we can see here one familiar redo record, some change vectors from layer 13 (segment management) and layer 14 (extent management), and then our redo record for the INSERT, followed by a redo record with a DELETE (the rollback information)!

So each time you do something to the database, redo is generated – even if you violate constraints! After a constraint violation, Oracle performs a partial rollback of the transaction.

But the funniest thing can be seen in this situation:

SQL> update hr.employees
  2  set employee_id=1;
update hr.employees
*
ERROR at line 1:
ORA-00001: unique constraint (HR.EMP_EMP_ID_PK) violated

We might expect that modifying the second row to the same value as the previous one would throw this kind of exception. But redo log analysis shows something different:

V00D00 > parse file /u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_9695.trc EMPLOYEES

Redo record sequence: 34 block: 8771 offset 16 [rs_id: 0x000022.00002243.0010]
	 change 1 operation UNDO HEADER xid = 0x000a.002.00000508
	 change 2 operation UNDO xid = 0x000a.002.00000508
	 change 3 operation UPDATE xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8773 offset 96 [rs_id: 0x000022.00002245.0060]
	 change 1 operation UNDO xid = 0x000a.002.00000508
	 change 2 operation UPDATE xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8774 offset 100 [rs_id: 0x000022.00002246.0064]
	 change 1 operation UNDO xid = 0x000a.002.00000508
	 change 2 operation UPDATE xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8775 offset 268 [rs_id: 0x000022.00002247.010c]
	 change 1 operation UPDATE xid = 0x000a.002.00000508
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8776 offset 136 [rs_id: 0x000022.00002248.0088]
	 change 1 operation UPDATE xid = 0x000a.002.00000508
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8777 offset 172 [rs_id: 0x000022.00002249.00ac]
	 change 1 operation UPDATE xid = 0x000a.002.00000508
	 change 2 operation ROLLBACK SINGLE STATEMENT (final) xid = 0x000a.002.00000508
COMMIT [ending rollback]  for transaction 0x000a.002.00000508 [sequence: 34 block: 8777 offset 384 [rs_id: 0x000022.00002249.0180]

We can see that Oracle did 3 modifications and then 3 rollbacks after getting a signal that the constraint was violated. So maybe we can assume that constraint checking is an asynchronous operation?

Stay tuned for more! I hope you will have fun analyzing redo dumps with my simple Python tool.
</p></div>

    	  	<div class=

rollback internals

While researching redo log internals for V00D00 we had to face the fact, that we know shit about real transactional behavior. When I say "real", I mean – under the hood.
Even with a very simple stuff like COMMIT and ROLLBACK we were constantly amazed by the internal mechanisms.

Today let’s take ROLLBACK under the investigation. According to documentation:

The ROLLBACK statement ends the current transaction and undoes any changes made during that transaction.

Cool. But what it means? First of all, you have to realize that all changes in redo logs are in a form of REDO RECORD which has its own address, known as RBA or RS_ID.

Sample RS_ID (RBA) looks like this: 0x00000a.00008c0f.006c

  • 00000a = 10 : this is sequence#
  • 00008c0f = 35855 : this is a block number in a redo log
  • 006c = 108 : this is offset in a redo block, where this redo record begins

Each redo record consists of one or more change vectors and each change vector has it’s own op code. Check out this article by Jonathan Lewis: https://jonathanlewis.wordpress.com/2017/07/25/redo-op-codes/

If you update EMPLOYEES table and change 2 rows, you will get usually one redo record with at least 4 change vectors – 2 for actual updates (so REDO) and 2 for reversing the operation (undo)

You can learn it from redolog or archivelog dumps:

Let’s modify a few rows:

SQL> update hr.employees
  2  set salary=salary*2
  3  where department_id=100;

6 rows updated.

SQL> commit;

Commit complete.

Now I’ll make a dump of an archivelog which contains those changes:

SQL> alter system switch logfile;

System altered.

SQL> get get_arch
  1  with v_arch as
  2  (
  3   select name, rank() over (order by sequence# desc) as rn
  4   from v$archived_log
  5  )
  6  select name
  7  from v_arch
  8* where rn=1
SQL> /

NAME
--------------------------------------------------------------------------------
+DATA/SKIPER/ARCHIVELOG/2017_09_20/thread_1_seq_13.303.955209887

SQL> alter system dump logfile '+DATA/SKIPER/ARCHIVELOG/2017_09_20/thread_1_seq_13.303.955209887';

System altered.

SQL> get tracefile
  1* select value from v$diag_info where name='Default Trace File'
SQL> /

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_24204.trc

The dump file is quite big and hard to analyze at the beginning, so I wrote a small and simple (primitive even) tool in Python which will help you to analyze the dump file. The tool can be downloaded from here.

It’s tested against Python 7.5. You can use it like this:

[oracle@skiper ~]$ python logfile_dump_parser.py
parse file /path/to/a/redo/dump object_id_or_name   	 if you want to parse file (duh!)
connect to oracle user/password@ip:port/service     	 if you want to connect to db to get table names instead ids
exit to quit
VOODOO >

At the beginning you will get a prompt – you can tell the tool to parse a dumpfile, providing DATA_OBJECT_ID of a table, which you’d like to search in a file or you can connect to Oracle (cx_Oracle module for Python required) with a simple connect string and parse a dump file, using name of the table.

VOODOO > connect to oracle system/oracle@localhost:1521/skiper
VOODOO > parse file /u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_24204.trc EMPLOYEES

Redo record block: 42 offset: 16 [rs_id: 0x00000d.0000002a.0010]
	 change 1 operation UPDATE xid = 0x0004.019.000004f2
	 change 2 operation UNDO HEADER xid = 0x0004.019.000004f2
	 change 3 operation UPDATE xid = 0x0004.019.000004f2
	 change 4 operation UPDATE xid = 0x0004.019.000004f2
	 change 5 operation UPDATE xid = 0x0004.019.000004f2
	 change 6 operation UPDATE xid = 0x0004.019.000004f2
	 change 7 operation UPDATE xid = 0x0004.019.000004f2
	 change 8 operation COMMIT xid = 0x0004.019.000004f2
	 change 9 operation UNDO xid = 0x0004.019.000004f2
	 change 10 operation UNDO xid = 0x0004.019.000004f2
	 change 11 operation UNDO xid = 0x0004.019.000004f2
	 change 12 operation UNDO xid = 0x0004.019.000004f2
	 change 13 operation UNDO xid = 0x0004.019.000004f2
	 change 14 operation UNDO xid = 0x0004.019.000004f2

As you can see, this simple update changed 6 rows and generated one redo record with 14 change vectors – 6X UPDATE (redo) 6X UNDO, one UNDO HEADER modification and one COMMIT. This whole redo record is a single, atomic operation for the database.

Let’s see what will happen when we will ROLLBACK this kind of transaction instead.

First we have to repeat the operation and generate archivelog dump for our parser:

SQL> get upd
  1  update hr.employees
  2  set salary=salary*2
  3* where department_id=100
SQL> /

6 rows updated.

SQL> rollback;

Rollback complete.

SQL> alter system switch logfile;

System altered.

SQL> @get_arch

NAME
--------------------------------------------------------------------------------
+DATA/SKIPER/ARCHIVELOG/2017_09_20/thread_1_seq_29.319.955212079

SQL> alter system dump logfile '+DATA/SKIPER/ARCHIVELOG/2017_09_20/thread_1_seq_29.319.955212079';

System altered.

SQL> @tracefile

VALUE
--------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_3601.trc

And now we can parse logfile with my little tool (note, that you have to make connection to the database only once per parsing session)

VOODOO > parse file /u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_3601.trc EMPLOYEES

Redo record block: 47 offset: 16 [rs_id: 0x00001d.0000002f.0010]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation UNDO HEADER xid = 0x0002.001.00000615
	 change 3 operation UPDATE xid = 0x0002.001.00000615
	 change 4 operation UPDATE xid = 0x0002.001.00000615
	 change 5 operation UPDATE xid = 0x0002.001.00000615
	 change 6 operation UPDATE xid = 0x0002.001.00000615
	 change 7 operation UPDATE xid = 0x0002.001.00000615
	 change 8 operation UNDO xid = 0x0002.001.00000615
	 change 9 operation UNDO xid = 0x0002.001.00000615
	 change 10 operation UNDO xid = 0x0002.001.00000615
	 change 11 operation UNDO xid = 0x0002.001.00000615
	 change 12 operation UNDO xid = 0x0002.001.00000615
	 change 13 operation UNDO xid = 0x0002.001.00000615

Redo record block: 50 offset: 228 [rs_id: 0x00001d.00000032.00e4]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 50 offset: 424 [rs_id: 0x00001d.00000032.01a8]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 51 offset: 124 [rs_id: 0x00001d.00000033.007c]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 51 offset: 320 [rs_id: 0x00001d.00000033.0140]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 52 offset: 20 [rs_id: 0x00001d.00000034.0014]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0002.001.00000615

Redo record block: 52 offset: 216 [rs_id: 0x00001d.00000034.00d8]
	 change 1 operation UPDATE xid = 0x0002.001.00000615
	 change 2 operation ROLLBACK SINGLE STATEMENT (final) xid = 0x0002.001.00000615
COMMIT [ending rollback]  for transaction 0x0002.001.00000615 [block: 52 offset: 432 rs_id: 0x00001d.00000034.01b0]

Nice! So for each and every modified row, Oracle had to generate a separate redo record with 2 change vectors – one vector is an actual UPDATE, built from the UNDO information, which sets the row back to its previous value, and the other is a rollback opcode:

  • 5.6 – ROLLBACK SINGLE STATEMENT (not final) – this means that the rollback is in progress and there can be more operations like this
  • 5.11 – ROLLBACK SINGLE STATEMENT (final) – this means that the rollback is finished and there should be no more rollback records

Each complete ROLLBACK is actually finished by a COMMIT (OP:5.4). This is logical, because a ROLLBACK generates the changes that reverse the previous operations and then COMMITs the whole transaction.
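
As a side note, if you ever want to watch a big rollback being applied from another session (rather than from the redo dump), V$TRANSACTION is the usual place to look – a minimal sketch; USED_UREC should tick down towards zero as the undo records are applied:

select xidusn, xidslot, xidsqn, used_ublk, used_urec
from   v$transaction;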

The interesting case is a partial rollback. Let’s UPDATE the same 6 rows again, but then try to insert a row that violates a primary key:

SQL> get upd
  1  update hr.employees
  2  set salary=salary*2
  3* where department_id=100
SQL> /

6 rows updated.

SQL> insert into hr.employees
  2  values (100,'Dupa','Blada','DBLADA','666.66.666',sysdate,'ST_CLERK',5600,null,100,100);
insert into hr.employees
*
ERROR at line 1:
ORA-00001: unique constraint (HR.EMP_EMP_ID_PK) violated

Now let’s check the parser:

V00D00 > parse file /u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_6603.trc EMPLOYEES

Redo record sequence: 32 block: 38 offset 16 [rs_id: 0x000020.00000026.0010]
	 change 1 operation UPDATE xid = 0x0005.004.0000061e
	 change 2 operation UNDO HEADER xid = 0x0005.004.0000061e
	 change 3 operation UPDATE xid = 0x0005.004.0000061e
	 change 4 operation UPDATE xid = 0x0005.004.0000061e
	 change 5 operation UPDATE xid = 0x0005.004.0000061e
	 change 6 operation UPDATE xid = 0x0005.004.0000061e
	 change 7 operation UPDATE xid = 0x0005.004.0000061e
	 change 8 operation UNDO xid = 0x0005.004.0000061e
	 change 9 operation UNDO xid = 0x0005.004.0000061e
	 change 10 operation UNDO xid = 0x0005.004.0000061e
	 change 11 operation UNDO xid = 0x0005.004.0000061e
	 change 12 operation UNDO xid = 0x0005.004.0000061e
	 change 13 operation UNDO xid = 0x0005.004.0000061e

Redo record sequence: 32 block: 41 offset 188 [rs_id: 0x000020.00000029.00bc]
	 change 1 operation OP:14.11 xid = 0x0005.004.0000061e

Redo record sequence: 32 block: 41 offset 252 [rs_id: 0x000020.00000029.00fc]
	 change 1 operation OP:13.22 xid = 0x0005.004.0000061e

Redo record sequence: 32 block: 41 offset 328 [rs_id: 0x000020.00000029.0148]
	 change 1 operation UNDO xid = 0x0005.004.0000061e
	 change 2 operation INSERT xid = 0x0005.004.0000061e

Redo record sequence: 32 block: 42 offset 172 [rs_id: 0x000020.0000002a.00ac]
	 change 1 operation DELETE xid = 0x0005.004.0000061e
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x0005.004.0000061e
COMMIT  for transaction 0x0005.004.0000061e [sequence: 33 block: 3 offset 16 [rs_id: 0x000021.00000003.0010]

Great! So we can see one familiar redo record, some change vectors from layer 13 (segment management) and layer 14 (extent management), then our redo record for the INSERT, followed by a redo record with a DELETE (the rollback information)!

So each time you do something to the database, redo is generated – even if you violate a constraint! After a constraint violation, Oracle performs a partial rollback of the transaction.
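
You can also see this without dumping any redo at all, by checking your own session’s “redo size” statistic before and after the failing statement – a rough sketch:

select n.name, s.value
from   v$mystat s, v$statname n
where  s.statistic# = n.statistic#
and    n.name = 'redo size';

-- run the failing INSERT, then repeat the query above;
-- the difference is the redo generated by the INSERT plus its partial rollback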

But the funniest thing can be seen in this situation:

SQL> update hr.employees
  2  set employee_id=1;
update hr.employees
*
ERROR at line 1:
ORA-00001: unique constraint (HR.EMP_EMP_ID_PK) violated

We could expect that modifying the second row to the same value as the previous one would throw this kind of exception. But redo log analysis shows something different:

V00D00 > parse file /u01/app/oracle/diag/rdbms/skiper/skiper/trace/skiper_ora_9695.trc EMPLOYEES

Redo record sequence: 34 block: 8771 offset 16 [rs_id: 0x000022.00002243.0010]
	 change 1 operation UNDO HEADER xid = 0x000a.002.00000508
	 change 2 operation UNDO xid = 0x000a.002.00000508
	 change 3 operation UPDATE xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8773 offset 96 [rs_id: 0x000022.00002245.0060]
	 change 1 operation UNDO xid = 0x000a.002.00000508
	 change 2 operation UPDATE xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8774 offset 100 [rs_id: 0x000022.00002246.0064]
	 change 1 operation UNDO xid = 0x000a.002.00000508
	 change 2 operation UPDATE xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8775 offset 268 [rs_id: 0x000022.00002247.010c]
	 change 1 operation UPDATE xid = 0x000a.002.00000508
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8776 offset 136 [rs_id: 0x000022.00002248.0088]
	 change 1 operation UPDATE xid = 0x000a.002.00000508
	 change 2 operation ROLLBACK SINGLE STATEMENT (not final) xid = 0x000a.002.00000508

Redo record sequence: 34 block: 8777 offset 172 [rs_id: 0x000022.00002249.00ac]
	 change 1 operation UPDATE xid = 0x000a.002.00000508
	 change 2 operation ROLLBACK SINGLE STATEMENT (final) xid = 0x000a.002.00000508
COMMIT [ending rollback]  for transaction 0x000a.002.00000508 [sequence: 34 block: 8777 offset 384 [rs_id: 0x000022.00002249.0180]

We can see that Oracle performed 3 modifications and then 3 rollbacks after getting a signal that the constraint had been violated. So maybe we can assume that constraint checking is an asynchronous operation?

Stay tuned for more! I hope you will have fun analyzing redo dumps with my simple Python tool.

Updating indexes with partition maintenance

An index is basically a structure that maps keys (values) in columns to the physical location of their corresponding rows in a table.  So if you move the rows (ie, change the physical location of a row) then the index entries for those rows need to be updated, or the index is no longer usable.  And as most people are aware, the latter is the default when you perform a partition maintenance operation on a table that re-locates rows.  For example, we’ll create a simple partitioned table, add both a local and a global index, split one of the partitions, and then see what the impact on those indexes is.


SQL>
SQL> drop table t purge;

Table dropped.

SQL>
SQL> create table t (x int, y int, z int )
  2  partition by range (x)
  3  (
  4    partition p1 values less than (4000),
  5    partition p2 values less than (8000)
  6  );

Table created.

SQL>
SQL> insert into t select q*1000,rownum,rownum from dba_objects, ( select rownum q from dual connect by level <= 7 );

548338 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> create index ix_local on t ( y ) local;

Index created.

SQL> create index ix_global on t ( z ) ;

Index created.

SQL>
SQL> alter table t split partition p2 at (6000)
  2    into ( partition p2a, partition p2b ) ;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1                   USABLE
IX_LOCAL                       P2A                  UNUSABLE
IX_LOCAL                       P2B                  UNUSABLE

3 rows selected.

SQL>
SQL> select index_name, status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      UNUSABLE
IX_LOCAL                       N/A

2 rows selected.

SQL>
SQL>
SQL>

The result is what we would expect.  The global index has become unusable because a portion of the rows that had entries in the index have been moved as part of the partition split operation.  The local index has also become unusable, but only for those partitions that were impacted by the split.  Partition P1 was not affected by the split, and hence the corresponding local index partition is still valid.  Just a quick footnote – the “N/A” in USER_INDEXES for the local index indicates that the STATUS of the local index must be determined by looking at the status of each underlying partition.  In early releases of Oracle this could create problems in terms of availability for partitioned tables, because if you tried to use an index that had become unusable, you would get an error: ORA-01502: index or partition of such index is in unusable state, which is not the most pleasant thing to be sending back to your application users Smile

In Oracle 9, a workaround to the problem was introduced: the UPDATE GLOBAL INDEXES clause.  This keeps the (global) index entries valid by correcting the index entries as the partition operation is performed.  This has some costs in terms of resource consumption, because obviously maintaining a large number of index entries will consume CPU and redo, but it keeps the index available to application users.  Here’s a demo of that in action:


SQL> drop table t purge;

Table dropped.

SQL>
SQL> create table t (x int, y int, z int )
  2  partition by range (x)
  3  (
  4    partition p1 values less than (4000),
  5    partition p2 values less than (8000)
  6  );

Table created.

SQL>
SQL> insert into t select q*1000,rownum,rownum from dba_objects, ( select rownum q from dual connect by level <= 7 );

548338 rows created.

SQL> commit;

Commit complete.

SQL>
SQL> create index ix_local on t ( y ) local;

Index created.

SQL> create index ix_global on t ( z ) ;

Index created.

SQL>
SQL> alter table t split partition p2 at (6000)
  2    into ( partition p2a, partition p2b )
  3    update global indexes;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1                   USABLE
IX_LOCAL                       P2A                  UNUSABLE
IX_LOCAL                       P2B                  UNUSABLE

3 rows selected.

SQL>
SQL> select index_name, status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.

SQL>
SQL> alter index ix_local rebuild partition P2A;

Index altered.

SQL> alter index ix_local rebuild partition P2B;

Index altered.

But notice that the local index has not been spared from being marked unusable.  We had to rebuild each local partition after the operation.  You might be thinking that only having UPDATE GLOBAL INDEXES was hence a waste of time, but don’t forget that some partition operations, such as DROP and TRUNCATE, do not impact local indexes, so updating the global index entries is all that is required:


SQL> alter table t drop partition p2a update global indexes;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1B                  USABLE
IX_LOCAL                       P2B                  USABLE

2 rows selected.

SQL>
SQL> select index_name,  status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.

SQL>
SQL>

But if you do have existing scripts that contain the UPDATE GLOBAL INDEXES clause, it is perhaps worth revisiting them to take advantage of the more complete implementation of this facility, which arrived in Oracle 11.  The UPDATE INDEXES clause will take care of both global and local indexes during partition maintenance operations.


SQL>
SQL> alter table t split partition p1 at (2000)
  2    into ( partition p1a, partition p1b )
  3    update indexes;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1A                  USABLE
IX_LOCAL                       P1B                  USABLE
IX_LOCAL                       P2A                  USABLE
IX_LOCAL                       P2B                  USABLE

4 rows selected.

SQL>
SQL> select index_name,  status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.

SQL>
SQL> alter table t drop partition p1a update indexes;

Table altered.

SQL>
SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME       STATUS
------------------------------ -------------------- --------
IX_LOCAL                       P1B                  USABLE
IX_LOCAL                       P2A                  USABLE
IX_LOCAL                       P2B                  USABLE

3 rows selected.

SQL>
SQL> select index_name,  status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.

SQL>

Not that I can see a real need for it, but it’s worth noting that these are your only two options.  There is no such thing as keeping only the local index entries updated and not the global ones.


SQL>  alter table t split partition p2 at (6000)
  2      into ( partition p2a, partition p2b ) update local indexes;
    into ( partition p2a, partition p2b ) update local indexes
                                                 *
ERROR at line 2:
ORA-00905: missing keyword

Finally, remember that since Oracle 10g we decided that it is better to allow a query to run slowly rather than have it crash with an error, so the default value for “skip_unusable_indexes” is now TRUE.  So if you have indexes that have a status of UNUSABLE, you will not detect this via errors in your SQL queries.  The queries will just be optimized as if the indexes did not exist, which might have an impact on their execution time.  Here is a simple starting point for a monitoring query to keep an eye on unusable indexes (and rebuild them if necessary):


SQL> set serverout on
SQL> declare
  2    procedure ddl(p_cmd varchar2) is
  3    begin
  4      --execute immediate p_cmd;
  5      dbms_output.put_line(p_cmd);
  6    end;
  7  begin
  8      for i in (
  9          select index_owner, index_name, partition_name, 'partition' ddl_type
 10          from all_ind_partitions
 11          where status = 'UNUSABLE'
 12          union all
 13          select index_owner, index_name, subpartition_name, 'subpartition' ddl_type
 14          from all_ind_subpartitions
 15          where status = 'UNUSABLE'
 16          union all
 17          select owner, index_name, null, null
 18          from all_indexes
 19          where status = 'UNUSABLE'
 20      )
 21      loop
 22        if i.ddl_type is null then
 23          ddl('alter index '||i.index_owner||'.'||i.index_name||' rebuild');
 24        else
 25          ddl('alter index '||i.index_owner||'.'||i.index_name||' modify '||i.ddl_type||' '||i.partition_name||' rebuild');
 26        end if;
 27      end loop;
 28  end;
 29  /
alter index MCDONAC.T_PAR_IX modify partition P1 rebuild
alter index MCDONAC.T_PAR_IX modify partition P2 rebuild

PL/SQL procedure successfully completed.
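
And if you would rather be told about an unusable index immediately than discover it later via a slow query, the parameter can be flipped back at the session level – a small sketch, only if you really do want the old fail-fast behaviour:

alter session set skip_unusable_indexes = false;

-- statements that would otherwise rely on the unusable index
-- should now raise ORA-01502 instead of silently picking another plan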

And finally, as Tim reminded me, with 12c Release 2, many partition operations (and many other maintenance operations as well) can now be done online simply by specifying ONLINE as a suffix to the operation. That (as the name suggests) will keep all indexes in a USABLE state.


SQL>  create table t (x int, y int, z int )
  2      partition by range (x)
  3      (
  4        partition p1 values less than (4000),
  5        partition p2 values less than (8000)
  6      );

Table created.

SQL>
SQL> insert into t select q*1000,rownum,rownum from dba_objects, ( select rownum q from dual connect by level <= 7 );

547862 rows created.

SQL>
SQL> commit;

Commit complete.

SQL>
SQL> create index ix_local on t ( y ) local;

Index created.

SQL>
SQL> create index ix_global on t ( z ) ;

Index created.

SQL>
SQL> alter table t split partition p2 at (6000)
  2      into ( partition p2a, partition p2b ) online;

Table altered.

SQL> select index_name, partition_name, status
  2  from user_ind_partitions
  3  where index_name like 'IX%';

INDEX_NAME                     PARTITION_NAME                 STATUS
------------------------------ ------------------------------ --------
IX_LOCAL                       P1                             USABLE
IX_LOCAL                       P2A                            USABLE
IX_LOCAL                       P2B                            USABLE

3 rows selected.

SQL>
SQL> select index_name, status
  2  from user_indexes
  3  where index_name like 'IX%';

INDEX_NAME                     STATUS
------------------------------ --------
IX_GLOBAL                      VALID
IX_LOCAL                       N/A

2 rows selected.



With Subquery()

Here’s a little oddity that came up recently on the OTN database forum – an example where a “with” subquery (common table expression / factored subquery) produced a different execution plan from the equivalent statement with the subquery moved to an inline view; tested in 12.1.0.2 and 12.2.0.1. Here are the two variations:

with  tbl as (
          select 1 col1, 'a'  col2 from dual
union all select 2 , 'a' from dual
union all select 3 , 'b' from dual
union all select 4 , 'a' from dual
union all select 5 , 'a' from dual
union all select 6 , 'b' from dual
union all select 7 , 'b' from dual
),
lag_data as (
        select col1, col2, lag(col2) over (order by col1) col2a from tbl
)
select  col1, col2
from    lag_data
where   col2a is null or col2a <> col2
order by col1
;

with  tbl as (
          select 1 col1, 'a'  col2 from dual
union all select 2 , 'a' from dual
union all select 3 , 'b' from dual
union all select 4 , 'a' from dual
union all select 5 , 'a' from dual
union all select 6 , 'b' from dual
union all select 7 , 'b' from dual
)
select  col1, col2
from    (
        select col1, col2, lag(col2) over (order by col1) col2a from tbl
        )
where   col2a is null or col2a <> col2
order by col1
;

You’ll notice that there’s an explicit “order by” clause at the end of both queries. If you want the result set to appear in a specific order you should always specify the order and not assume that it will appear as a side effect; but in this case the ordering for the “order by” clause seems to match the ordering needed for the analytic function, so we might hope that the optimizer would take advantage of the analytic “window sort” and not bother with a “sort order by” clause. But here are the two plans – first with subquery factoring, then with the inline view:


-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |       |       |    16 (100)|          |
|   1 |  SORT ORDER BY   |      |     7 |    56 |    16  (13)| 00:00:01 |
|*  2 |   VIEW           |      |     7 |    56 |    15   (7)| 00:00:01 |
|   3 |    WINDOW SORT   |      |     7 |    42 |    15   (7)| 00:00:01 |
|   4 |     VIEW         |      |     7 |    42 |    14   (0)| 00:00:01 |
|   5 |      UNION-ALL   |      |       |       |            |          |
|   6 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|   7 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|   8 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|   9 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|  10 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|  11 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
|  12 |       FAST DUAL  |      |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - filter(("COL2A" IS NULL OR "COL2A"<>"COL2"))



-------------------------------------------------------------------------
| Id  | Operation        | Name | Rows  | Bytes | Cost (%CPU)| Time     |
-------------------------------------------------------------------------
|   0 | SELECT STATEMENT |      |       |       |    15 (100)|          |
|*  1 |  VIEW            |      |     7 |    56 |    15   (7)| 00:00:01 |
|   2 |   WINDOW SORT    |      |     7 |    42 |    15   (7)| 00:00:01 |
|   3 |    VIEW          |      |     7 |    42 |    14   (0)| 00:00:01 |
|   4 |     UNION-ALL    |      |       |       |            |          |
|   5 |      FAST DUAL   |      |     1 |       |     2   (0)| 00:00:01 |
|   6 |      FAST DUAL   |      |     1 |       |     2   (0)| 00:00:01 |
|   7 |      FAST DUAL   |      |     1 |       |     2   (0)| 00:00:01 |
|   8 |      FAST DUAL   |      |     1 |       |     2   (0)| 00:00:01 |
|   9 |      FAST DUAL   |      |     1 |       |     2   (0)| 00:00:01 |
|  10 |      FAST DUAL   |      |     1 |       |     2   (0)| 00:00:01 |
|  11 |      FAST DUAL   |      |     1 |       |     2   (0)| 00:00:01 |
-------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - filter(("COL2A" IS NULL OR "COL2A"<>"COL2"))

The two plans are different, and the difference is an extra “sort order by” operation even though the optimizer has moved the subquery with the analytic function inline, so that in principle both statements are technically the same and merely cosmetically different.

It’s been some time since I’ve noticed subquery factoring resulting in a change in plan when the expected effect is purely cosmetic. Interestingly, though, the “unparsed query” in the 10053 (CBO) trace file is the same for the two cases with the only difference being the name of a generated view:


SELECT  
        "LAG_DATA"."COL1" "COL1","LAG_DATA"."COL2" "COL2" 
FROM    (SELECT 
                "TBL"."COL1" "COL1","TBL"."COL2" "COL2",
                DECODE(
                        COUNT(*) OVER ( ORDER BY "TBL"."COL1" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING ),
                        1, FIRST_VALUE("TBL"."COL2") OVER ( ORDER BY "TBL"."COL1" ROWS  BETWEEN 1 PRECEDING  AND 1 PRECEDING ),
                           NULL
                ) "COL2A" 
        FROM    (
                            (SELECT 1 "COL1",'a' "COL2" FROM "SYS"."DUAL" "DUAL") 
                 UNION ALL  (SELECT 2 "2",'a' "'A'" FROM "SYS"."DUAL" "DUAL") 
                 UNION ALL  (SELECT 3 "3",'b' "'B'" FROM "SYS"."DUAL" "DUAL") 
                 UNION ALL  (SELECT 4 "4",'a' "'A'" FROM "SYS"."DUAL" "DUAL") 
                 UNION ALL  (SELECT 5 "5",'a' "'A'" FROM "SYS"."DUAL" "DUAL") 
                 UNION ALL  (SELECT 6 "6",'b' "'B'" FROM "SYS"."DUAL" "DUAL") 
                 UNION ALL  (SELECT 7 "7",'b' "'B'" FROM "SYS"."DUAL" "DUAL")
                ) "TBL"
        ) "LAG_DATA" 
WHERE 
        "LAG_DATA"."COL2A" IS NULL OR "LAG_DATA"."COL2A"<>"LAG_DATA"."COL2" 
ORDER BY "LAG_DATA"."COL1"

The above is the unparsed query for the query with two factored subqueries; the only difference in the unparsed query when I moved the analytic subquery inline was that the view name in the above text changed from “LAG_DATA” to “from$_subquery$_008”.

Footnote:

When I used a real table (with the same data) instead of a “union all” factored subquery for the driving data this anomaly disappeared. The union all is a convenient dirty trick for generating very small test data sets on OTN – it remains to be seen whether a more realistic example of multiple factored subqueries would still result in the optimizer losing an opportunity for eliminating a “sort order by” operation.
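
If you want to repeat that check yourself, here’s a rough sketch of the real-table variant – the table name and data below are mine, not part of the original test; compare the plan you get here with the plan for the inline-view version of the same query:

set serveroutput off

create table t_lag as
select rownum col1,
       case mod(rownum,3) when 0 then 'b' else 'a' end col2
from   dual
connect by level <= 7;

with lag_data as (
        select col1, col2, lag(col2) over (order by col1) col2a
        from   t_lag
)
select  col1, col2
from    lag_data
where   col2a is null or col2a <> col2
order by col1;

select * from table(dbms_xplan.display_cursor);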

In passing – did you notice how the optimizer had managed to rewrite a “lag()” analytic function as a form of “first_value()” function with decode ?
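
If you want to convince yourself of that rewrite, the two forms are easy enough to put side by side – a quick sketch using the same “union all” trick (the two result columns should match row for row):

with tbl as (
          select 1 col1, 'a' col2 from dual
union all select 2, 'a' from dual
union all select 3, 'b' from dual
)
select col1,
       lag(col2) over (order by col1) lag_form,
       decode(
               count(*) over (order by col1 rows between 1 preceding and 1 preceding),
               1, first_value(col2) over (order by col1 rows between 1 preceding and 1 preceding),
               null
       ) rewritten_form
from   tbl
order by col1;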

“Oracle Indexing Internals & Best Practices” Seminar: Australia 2017 Tour (Stage)

I’ll be running a fully revised and updated version of my acclaimed “Oracle Indexing Internals and Best Practices” seminar throughout Australia in Oct/Nov 2017. Previous versions of this seminar have been hugely popular and run in some 18 countries. This updated version will feature lots of new material including 12c related indexing capabilities and recommendations. […]

LOBs and tiny typos

This one caught me out – I was racing around looking for bugs, or parameter files, or hidden settings that stopped SECUREFILE lobs from being created.  Here was my incredibly simple test case – create a securefile LOB, and then make sure it’s a securefile.


SQL> create table t1 ( b blob ) lob ( b ) store as securfile;

Table created.

SQL> select securefile
  2  from   user_lobs
  3  where  table_name = 'T1';

SEC
---
NO

That had me bamboozled, but it has a trivial explanation.  Notice that I did not spell “SECUREFILE” correctly.  As a result, the syntax is interpreted as being the name of the LOB segment in the data dictionary, rather than the specification of how the LOB should be stored.


SQL> select segment_name
  2  from   user_lobs
  3  where  table_name = 'T1';

SEGMENT_NAME
------------------------------
SECURFILE

All it took was the correct “e” in “securefile”, and normal service was resumed Smile


SQL> drop table t1 purge;

Table dropped.

SQL> create table t1 ( b blob ) lob ( b ) store as securefile;

Table created.

SQL> select securefile
  2  from   user_lobs
  3  where  table_name = 'T1';

SEC
---
YES

So why blog about such a silly mistake ?  Because this is a common issue with all of us as developers.  We see something unusual and our first assumption is that it must be some fundamental flaw in the product we’re using.  That mindset has been around as long as computer programming has existed, but 20 years ago it wasn’t so easy to make a fool of yourself by bleating about it on social media Smile.  I remember when I first started programming, one of my mentors told me: “Remember, the COBOL compiler is not broken”, and of course, we can insert any programming language into that sentence.  So before you jump on to Twitter … just take a moment to re-check that script, or get a colleague to give it a “once over”.  You’ll either save yourself some embarrassment, or you’ll add additional rigour to your test case – it’s win-win.