Oakies Blog Aggregator

The Priority 600 Pinion Gearbox Bike

My friends at Priority Bicycles have released a new model bicycle with a German-made gearbox that is sealed against the elements and is centered on the bike for optimal weight distribution. While it's the gearbox that is the marquee feature on this new bike, it is the wide tires that fire my soul. I want to share where those tires can take you.

Full disclosure: I am a friend of Priority Bicycles, and was in the loop on the design and testing of the Priority 600 All Road bicycle. I am not a disinterested party.

Pavement is easy. Priority has specified WTB Horizon road-plus tires on the bike. These are 47 mm wide tires that you can run at low pressures for comfort over rough surfaces. Priority did not skimp here! You are not getting a cheap OEM version of the Horizon tires. Priority spec'd the full-on, high thread count enthusiast version that is tubeless ready, and mounted on tubeless rims. 

Priority 600 Pavement.jpg




Rough. Smooth. Doesn't matter. You can't go wrong with the Horizon tires on pavement.

Gravel is the sweet spot. I can't say enough about how much fun it is to head out on the Horizon tires and just follow my nose down whatever interesting road -- paved or unpaved -- presents itself. Run the tires at 30 psi. Adjust up or down to suit your taste. These tires feel planted and secure, and comfortable!

Priority 600 Gravel.jpg

The Horizon tires are planted and secure on gravel

Doubletrack's good too! Because even better than gravel roads are the endless miles of leftover logging doubletrack and ATV trails in the Hiawatha National Forest. One of my favorite rides for exercise is a mix of gravel road to ATV trail to pavement, and back to the parking lot. The WTB Horizon tires take all those surfaces in stride.

Priority 600 Big Gravel.jpg

Doubletrack? Big Gravel? It's all good!

Singletrack's possible. Just make it smooth singletrack. I don't recommend slamming into rocks and roots, and for sure don't be hitting jumps and grabbing air. But go easy, and the Horizon tires have let me push the envelope to include easy singletrack into my weekend afternoon rides.

Priority 600 Singletrack.jpg

Smooth Singletrack? Sweet Solitude.

Boardwalks and other unusual surfaces are easy, with the wide tires providing stability and grip. I don't often get the opportunity to ride long distances on boards, but last summer on vacation with my wife I had fun circumnavigating a lake on a trail that included over two miles of boardwalk. 

Does it get any better than this?

And now for the crazy stuff! What follows is not recommended(!), but I'm an enthusiast and have pushed the bike -- willing to risk that either the bike or myself might break in the process -- beyond its intended design parameters. This next photo shows the first prototype of the Priority 600 All Road just as I'm entering a rough patch of mountain bike trail in Manhattan's Highbridge Park. 

Priority 600 On The Rocks.jpg

The Priority 600 prototype handled this rough section. Skill and experience are needed! Surfaces like these are outside the design parameters and bring risk of damage and personal injury.

I love the bike! I've ridden the WTB Horizon tires for over a year now. I love where they can take me on my Priority bicycles. I love how the wide tires help me link together random trails and roads, bringing me to scenic places and helping me connect with nature and recoup from a tough day in the office. The Pinion gearbox on the Priority 600 makes the experience even better by centering the weight on the bike and adding tremendous gearing range for getting up (and down!) steep terrain.

Priority 600 Smiling.jpg

The Priority Smile

P.S. Ryan Van Duzer's video review of the Priority 600 is excellent.

The simplest things….can be risky

Java and Oracle expert Lukas Eder tweeted yesterday about a potential optimization that could be done when reviewing database SQL code.


This looks to be a logical thing to do. Why scan the table T twice to perform an update, when the same job could be done in a single pass? The benefits seem obvious:

  • less I/O work
  • less time the data spends locked
  • less risk of an error between the two operations

So don’t get me wrong – the consolidation is going to be a good thing in the majority of cases.

And therein lies the rub – the “majority” of cases is not the same as “all” cases, and that is why I don’t think a tool should ever automatically perform this change. I’d be cool with a tool making a recommendation, but let’s see why you cannot just assume that the consolidation is correct.

Here’s our table with a single row and a single business rule implemented with a check constraint:

SQL> create table t ( a int, b int );

Table created.

SQL> alter table t add constraint chk check ( a < b ) ;

Table altered.

SQL> insert into t values (1,2);

1 row created.

SQL> commit;

Commit complete.

Now I’ll implement the application in the original “unoptimized” way:

SQL> update t set a = a + 1;
update t set a = a + 1
ERROR at line 1:
ORA-02290: check constraint (MCDONAC.CHK) violated

SQL> update t set b = b*2;

1 row updated.

You can see that the first update failed – it violated the check constraint. Of course, it is not definitively clear whether this should be the case based on the business requirements, because I haven’t elaborated on whether these two updates should be two transactions or a single transaction. The correctness is not really the point I’m trying to make here; the point is that if I now choose to consolidate the updates, I end up with different application behaviour.

I’ll roll back the change above, and repeat the experiment using the consolidated update:

SQL> roll;
Rollback complete.
SQL> update t set a = a + 1, b = b*2;

1 row updated.

This time the update completes successfully. If a tool had automatically done this, then I would get a different behaviour in my application. That might be a good thing, or it might not. I could eliminate the difference by implementing the constraint as deferrable, but we’re starting to depart even further from the existing implementation of the application code, which means more scrutiny and more regression testing.
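If you want to poke at this ordering effect yourself, here is a minimal sketch using Python's sqlite3 module. SQLite is my own stand-in here (an assumption, not part of the original post): like Oracle's default non-deferrable constraints, it checks the constraint per statement, so the two-step update sees the illegal intermediate state while the consolidated one does not.

```python
import sqlite3

def run_updates(consolidated):
    """Apply the two-step vs consolidated update against a fresh table.

    Returns the final (a, b) row on success, or the constraint error
    message when the intermediate state violates the check constraint.
    """
    con = sqlite3.connect(":memory:")
    con.execute("create table t (a int, b int, constraint chk check (a < b))")
    con.execute("insert into t values (1, 2)")
    try:
        if consolidated:
            # Single pass: the constraint only sees the final state (2, 4).
            con.execute("update t set a = a + 1, b = b * 2")
        else:
            # Two statements: the intermediate state (a=2, b=2) is
            # visible to the constraint and rejected.
            con.execute("update t set a = a + 1")
            con.execute("update t set b = b * 2")
        return con.execute("select a, b from t").fetchone()
    except sqlite3.IntegrityError as e:
        return str(e)
```

Running `run_updates(consolidated=True)` returns `(2, 4)`, while `run_updates(consolidated=False)` returns a check-constraint error, mirroring the SQL*Plus transcripts above.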

So by all means, explore opportunities to improve the performance of your SQL by re-arranging it, consolidating it, and aiming to get more done with less work. But be careful that you do not unknowingly change the way your application works when you do so.


Oracle Can Generate 6 Password Hashes When a User is Added or Password Changed in and Above

In a database it's possible that Oracle generates 6 different password hashes for one password for one user under certain circumstances when a password is changed or created (user is created). I will lay out the 6 different ones first....[Read More]

Posted by Pete On 13/06/18 At 09:02 PM

#Exasol’s annual conference in Berlin #XP18

We have had our annual conference Xperience in Berlin again – it was an amazing event with a packed agenda full of customer success stories and technical news. This year at the Umspannwerk.

Umspannwerk Berlin

My assumption is that we will need a larger location next year because of our constant growth – it was a bit tight already this time.

VMware Hands On Labs


In a previous post, I covered the VMware Experts Program. When I was attending the Program, one of the things Don Sullivan mentioned was the VMware Hands On Labs (HOL). I hadn’t had a chance to explore those before, so I was interested to see what they were like.

The Good

Quite honestly, these are among the best hands on lab environments I have played around with. If I wanted to test a feature of Oracle software in my previous life, most of the time I built myself a virtual machine environment using Oracle’s VirtualBox, installed all the relevant software and started from there. Of course, much of that could be snapshotted to save repeating the exercise, but it was largely restricted to a single VM. If I wanted to test something like Oracle’s Real Application Clusters technology, I built myself a small two node cluster out of two separate VMs and went from there. It really didn’t give me the feel of a real-world environment.

Enter the VMware HOL. Obviously, to get a real-world environment to test things like vMotion migration of VMs takes much more in the way of resources than my poor old laptop could handle, even if it does have 32 GB of RAM. The VMware HOL environment solves that issue for you. It takes only a small amount of time to actually crank up an environment for each lab you want to do, and each lab comes complete with online instructions integrated with the lab, as well as a separate PDF file and HTML version of the lab if you want to use that.

The labs are largely standalone, but do reference other labs where more details can be found. For things that may take some time to execute, you generally find they have included an interactive simulation where you click your way around and type in a few things, but the actual steps are simply simulated to save time.

Overall, as I mentioned above, the VMware HOLs are pretty darn good!

The Bad

Frankly, there’s not much to say here. If I was to nit pick a few minor points, they’d be these:

  • It would be nice to have a recommended path through the labs documented outside the labs themselves. There are quite a few labs there, and it’s a bit hard to determine which ones should be done in which order. In the labs themselves, there are sometimes suggestions to do other labs for more information, but that’s about it.
  • Sometimes the interactive simulation may not take input correctly (sometimes you need to hit a key twice – which can be more than a tad confusing when you’re entering passwords and don’t realize one got missed!). However, after a while you realize you can type any active key (i.e. not Shift) and it will think you’re typing what it expects, so you can just keep typing “jjjjjjjjjjjjj…” until the field populates with the right number of characters.


The Journey Begins- Power BI and AI

I’m back!! I know you missed my posts…be honest….

OUG Scotland – Why to Come & Survival Guide

The UKOUG’s Scottish conference is on the 21st June at the Sheraton Grand Hotel in the centre of Edinburgh, not far from Edinburgh Castle.


The Event

There is a six-stream agenda covering Database, Apex & Development, Platform & Services, Cloud Apps, EBS Apps tech, and Business Analytics/systems & EPM, so pretty much the whole breadth of Oracle Tech, Apps and BI. We have a keynote by Oracle’s Caroline Apsey on the Bloodhound Project, the UK-based group trying to smash the world land-speed record with a 1,000mph rocket car – solving lots of engineering challenges on the way, using the Oracle Cloud. I’ll be sure to see that one.

With 6 all-day streams there are a lot of presentations to choose from, but as a taste of what is on offer I’ll mention Jonathan Lewis talking about stats, Heli Helskyaho explaining the basics of machine learning, and from Oracle we have Grant Ronald on AI-driven chatbots, Hilary Farrell on the new features of APEX 18.1, and Keith Laker on JSON & SQL. The talks are a nice mixture of end-user experiences, recognised experts and Oracle themselves. UKOUG is independent of Oracle so although we are very happy to have Oracle support us, we have talks that are not just what Oracle are currently pushing. This is what I love about user group meetings, you get the whole story.

As a member of the UKOUG this event is free, counting as one of your SIG places. If you have run out of SIG places, you can buy an extra one at £85 – or upgrade your membership of course.

First Day at Microsoft, the Satya Way

So I’ve finally crawled my way back out of the hole I dug myself in this last month. The house is empty, the 5th wheel is ready for us to move into, and I’m now in Redmond, doing my NEO (New Employee Orientation).

I swear this is all I see in my head every time I hear “NEO”.

Introducing SQLdb360: merging eDB360 and SQLd360, while raising the bar for community engagement

Today, we are very happy to release SQLdb360, a new tool that merges eDB360 and SQLd360 under a single package.

Tools eDB360 and SQLd360 can still be used independently, but now there is only one package to download and keep updated. All the new features and updates to both tools are now in that one package.

The biggest change that comes with SQLdb360 is the kind invitation to everyone interested to contribute to its development. Hence the new blended name and its release format.

We do encourage your help and ideas to continue building a free, open-source and hopefully great (YMMV) tool!

Over the years, a few community members requested new features, but those requests were ultimately slowed down by our speed of reaction. Well, no more!

A few consumers of these tools implemented cool changes they needed, sometimes not sending us the changes (or pull requests) until much later. That meant good ideas only became available to others after some time. Not anymore!

If there is something you’d like to have as part of SQLdb360 (aka SQLd360 and eDB360), just write and test the additional code, then send us the pull request! Next, we will review, validate, and merge your code changes to the main tool.

There are several advantages to this new approach:

  1. Carlos and Mauro won’t dictate the direction of the tool anymore: we will continue helping and contributing, but we won’t “own” it anymore (the community will!)
  2. Carlos and Mauro won’t slow down the development anymore: nobody is the bottleneck now!
  3. Carlos and Mauro won’t run out of ideas anymore!!! The community has great ideas to share!!!

Due to the nature of this new collaborative effort, the way we now publish SQLdb360 is this:

  • Instead of linking to the current master repository, the tool now uses “releases”, in order to snapshot stable versions that bundle several changes together (better than creating a separate version per merge into master).
  • Links in our blogs are now getting updated, with references to the latest (and current) stable release of SQLdb360 (starting with v18.1).

Note: the version numbers look awfully similar to Oracle nomenclature, right? Well, we started using this numbering back in 2014!!!

Carlos & Mauro

Change Data Capture from Oracle with StreamSets Data Collector

With this trend of CQRS architectures where the transactions are streamed to a bunch of heterogeneous, eventually consistent, polyglot-persistence microservices, logical replication and Change Data Capture become an important component, already at the architecture design phase. This is good for existing product vendors such as Oracle GoldenGate (which must be licensed even to use only the CDC part in the Oracle Database, as Streams is going to be desupported) or Dbvisit replicate to Kafka. But it is also good for open-source projects: there are some ideas in progress (Debezium, VOODOO) but not yet released.

Today I tested the Oracle CDC Data Collector for StreamSets. StreamSets Data Collector is an open-source project started by former people from Cloudera and Informatica, to define pipelines streaming data from data collectors. It is easy, simple and has a bunch of possible destinations. The Oracle CDC is based on LogMiner, which means that it is easy but may have some limitations (mainly datatypes, DDL replication and performance).


The installation guide describes several options; I chose the easiest way for testing, as they provide a Docker container:

# docker run --restart on-failure -p 18630:18630 -d --name streamsets-dc streamsets/datacollector
Unable to find image 'streamsets/datacollector:latest' locally
latest: Pulling from streamsets/datacollector
605ce1bd3f31: Pull complete
529a36eb4b88: Pull complete
09efac34ac22: Pull complete
4d037ef9b54a: Pull complete
c166580a58b2: Pull complete
1c9f78fe3d6c: Pull complete
f5e0c86a8697: Pull complete
a336aef44a65: Pull complete
e8d1e07d3eed: Pull complete
Digest: sha256:0428704019a97f6197dfb492af3c955a0441d9b3eb34dcc72bda6bbcfc7ad932
Status: Downloaded newer image for streamsets/datacollector:latest

And that’s all. I am ready to connect over HTTP on port 18630.

The default user/password is admin/admin

The GUI looks simple and efficient. There’s a home page where you define the ‘pipelines’ and monitor them running. In the pipelines, we define sources and destinations. Some connectors are already installed, others can be automatically installed. For Oracle, as usual, you need to download the JDBC driver yourself because Oracle doesn’t allow it to be embedded for legal reasons. I’ll do something simple here just to check the mining from Oracle.

In ‘Package Manager’ (the little gift icon on the top) go to JDBC and check ‘install’ for the streamsets-datacollector-jdbc-lib library
Then in ‘External Libraries’, install (with the ‘upload’ icon at the top) the Oracle jdbc driver (ojdbc8.jar).
I’ve also installed the MySQL one for future tests:

File Name                           Library ID
ojdbc8.jar                          streamsets-datacollector-jdbc-lib
mysql-connector-java-8.0.11.jar     streamsets-datacollector-jdbc-lib

Oracle CDC pipeline

I’ll use the Oracle Change Data Capture here, based on Oracle LogMiner. The GUI is very easy: just select ‘Oracle CDC’ as source in a new pipeline. Click on it and configure it. I’ve set the minimum here.
In JDBC tab I’ve set only the JDBC Connection String to: jdbc:oracle:thin:scott/tiger@// which is my PDB (I’m on Oracle 18c here and multitenant is fully supported by StreamSets). In the Credentials tab I’ve set ‘sys as sysdba’ as username and its password. The configuration can also be displayed as JSON and here is the corresponding entry:

"configuration": [
  { "name": "hikariConf.connectionString", "value": "jdbc:oracle:thin:scott/tiger@//" },
  { "name": "hikariConf.useCredentials", "value": true },
  { "name": "hikariConf.username", "value": "sys as sysdba" },
  { "name": "hikariConf.password", "value": "oracle" }
]

I’ve provided SYSDBA credentials and only the PDB service, but it seems that StreamSets figured out automatically how to connect to the CDB (as LogMiner can be started only from CDB$ROOT). The advantage of using LogMiner here is that you need only a JDBC connection to the source – but of course, it will use CPU and memory resources from the source database host.

Then I’ve defined the replication in the Oracle CDC tab: Schema to ‘SCOTT’ and Table Name Pattern to ‘%’. Initial Change is ‘From Latest Change’, as I just want to see the changes and not actually replicate for this first test. But of course, we can define an SCN here, which is what must be used to ensure consistency between the initial load and the replication. Dictionary Source is ‘Online Catalog’ – this is what will be used by LogMiner to map the object and column IDs to table names and column names. But be careful, as table structure changes may not be managed correctly with this option.

"name": "oracleCDCConfigBean.baseConfigBean.schemaTableConfigs",
"value": [ { "schema": "SCOTT", "table": "%" } ]
"name": "oracleCDCConfigBean.baseConfigBean.changeTypes",
"value": [
"name": "oracleCDCConfigBean.dictionary",

I’ve left the defaults. I can’t think yet about a reason for capturing the ‘select for update’, but it is there.

Named Pipe destination

I know that the destination part is easy. I just want to see the captured changes here and I took the easiest destination: Named Pipe where I configured only the Named Pipe (/tmp/scott) and Data Format (JSON)

"instanceName": "NamedPipe_01",
"library": "streamsets-datacollector-basic-lib",
"stageName": "com_streamsets_pipeline_stage_destination_fifo_FifoDTarget",
"stageVersion": "1",
"configuration": [
  { "name": "namedPipe", "value": "/tmp/scott" },
  { "name": "dataFormat", "value": "JSON" }
]
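On the other end of that pipe, a consumer just reads newline-delimited JSON records. Here is a minimal Python sketch of such a reader (the function name and the `limit` parameter are my own additions for illustration; the actual record layout depends on the Data Format settings):

```python
import json

def read_json_records(fifo_path, limit=None):
    """Read newline-delimited JSON records from a named pipe (FIFO).

    Opening the FIFO blocks until a writer (here, the StreamSets Named
    Pipe destination) opens the other end. `limit` stops after that many
    records; by default it reads until the writer closes the pipe (EOF).
    """
    records = []
    with open(fifo_path, "r") as fifo:
        for line in fifo:
            line = line.strip()
            if not line:
                continue
            records.append(json.loads(line))
            if limit is not None and len(records) >= limit:
                break
    return records
```

In this setup you would call `read_json_records("/tmp/scott")` while the pipeline is running to see the captured changes as Python dictionaries.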

Supplemental logging

The Oracle redo log stream is by default focused only on recovery (replay of transactions in the same database) and contains only the minimal physical information required for it. In order to get enough information to replay the changes in a different database, we need supplemental logging for the database, and for the objects involved:

SQL> alter database add supplemental log data;
Database altered.
SQL> exec for i in (select owner,table_name from dba_tables where owner='SCOTT' and table_name like '%') loop execute immediate 'alter table "'||i.owner||'"."'||i.table_name||'" add supplemental log data (primary key) columns'; end loop;
PL/SQL procedure successfully completed.
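The per-table statements generated by that loop can be sketched in plain Python as well (the `supplemental_log_ddl` helper is hypothetical and only illustrates the string the loop builds; in the real case the table list comes from DBA_TABLES):

```python
def supplemental_log_ddl(tables):
    """Build the per-table supplemental logging statements, mirroring
    the PL/SQL loop above. `tables` is a list of (owner, table_name)
    pairs; identifiers are double-quoted exactly as in the loop.
    """
    template = ('alter table "{0}"."{1}" '
                'add supplemental log data (primary key) columns')
    return [template.format(owner, name) for owner, name in tables]
```

For example, `supplemental_log_ddl([("SCOTT", "DEPT")])` produces the same ALTER TABLE statement the loop executes for SCOTT.DEPT.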


And that’s all. Just run the pipeline and look at the logs:


StreamSet Oracle CDC pulls continuously from LogMiner to get the changes. Here are the queries that it uses for that:


This starts to mine between two timestamps. I suppose that it will read the SCNs to get a finer grain and avoid overlapping information.

And here is the main one:


This reads the redo records. The operation codes 7 and 36 are for commits and rollbacks. The operations 1, 3, 2 and 25 are those that we want to capture (insert, update, delete, select for update) and were defined in the configuration. Here the pattern ‘%’ for the SCOTT schema has been expanded to the table names. As far as I know, there’s no DDL mining here to automatically capture new tables.
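Collected into a small lookup, the operation codes mentioned above look like this (a sketch; the names follow the mapping given in this post):

```python
# LogMiner operation codes, as listed in the paragraph above.
LOGMINER_OPS = {
    1: "INSERT",
    2: "DELETE",
    3: "UPDATE",
    25: "SELECT_FOR_UPDATE",
    7: "COMMIT",
    36: "ROLLBACK",
}

# The DML codes this pipeline was configured to capture; 7 and 36
# only delimit transactions and are not emitted as change records.
CAPTURED = {1, 2, 3, 25}

def classify(code):
    """Return (operation name, captured?) for a redo record's code."""
    return LOGMINER_OPS.get(code, "UNKNOWN"), code in CAPTURED
```

So `classify(1)` yields `("INSERT", True)` while `classify(7)` yields `("COMMIT", False)`.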


Then I’ve run this simple insert (I’ve added a primary key on this table, as there isn’t one in utlsampl.sql):

SQL> insert into scott.dept values(50,'IT','Cloud');

And I committed (as it seems that StreamSets buffers the changes until the end of the transaction):

SQL> commit;

and here I got the message from the pipe:

/ $ cat /tmp/scott

The graphical interface shows how the pipeline is going:

I’ve tested some bulk loads (direct-path inserts) and they seem to be managed correctly. This Oracle CDC is based on LogMiner, so it is fully supported (no mining of a proprietary redo stream format) and the limitations are clearly documented.


Remember that the main work is done by LogMiner, so don’t forget to look at the alert.log on the source database. With big transactions, you may need a large PGA (but you can also choose to buffer to disk). If you have the Oracle Tuning Pack, you can also monitor the main query which retrieves the redo information from LogMiner.
You will see a different SQL_ID because the filter predicates use literals instead of bind variables (which is not a problem here).


This product is very easy to test, so you can do a Proof of Concept within a few hours and test for your context: supported datatypes, operations and performance. By easy to test, I mean: very good documentation, a very friendly and responsive graphical interface, and very clear error messages.


The post Change Data Capture from Oracle with StreamSets Data Collector appeared first on Blog dbi services.