Oakies Blog Aggregator

Oracle Database Multilingual Engine (MLE)

My ODC appreciation blog post was about JavaScript in the database, running in the beta of the Oracle Database Multilingual Engine (MLE). Here I’ll detail my first test: a performance comparison between a package written in JavaScript, running in the MLE, and one written and running in PL/SQL.

I’ve downloaded the 12GB .ova from OTN, installed the latest SQLcl, and I’m ready to load my first Javascript procedure. I want something simple that I can run a lot of times because I want to test my main concern when running code in a different engine: the context switch between the SQL engine and the procedural one.

My kid’s maths exercises were about the GCD (greatest common divisor) this weekend, so I grabbed Euclid’s algorithm in JavaScript. This algorithm was the first program I ever wrote, a long time ago, on a ZX-81, in BASIC. Now, in JavaScript, it can use recursion. So here is my gcd.js file:

module.exports.gcd = function (a, b) {
  function gcd(a, b) {
    if (b == 0) {
      return a;
    } else {
      return gcd(b, a % b);
    }
  }
  return gcd(a, b);
}

We need strong typing to be able to load it as a stored procedure, so here is the TypeScript declaration in gcd.d.ts:

export function gcd(a:number, b:number ) : number;

I load it with the dbjs utility, which I run in verbose mode:

[oracle@dbml MLE]$ dbjs deploy -vv gcd.js -u demo -p demo -c //localhost:1521/DBML
deploy: command called /media/sf_share/MLE/gcd.js oracle
Oracle backend: starting transpiler
gcd: processed function
Oracle backend: opening connection to database
gcd.js: retrieving functions
dropModule: called with gcd.js
loadModule: called with gcd.js
BEGIN
EXECUTE IMMEDIATE 'CREATE PACKAGE GCD AS
FUNCTION GCD("p0" IN NUMBER, "p1" IN NUMBER) RETURN NUMBER AS LANGUAGE JS LIBRARY "gcd.js" NAME "gcd" PARAMETERS("p0" DOUBLE, "p1" DOUBLE);
END GCD;';
END;
: generated PLSQL
+ gcd.js
└─┬ gcd
└── SCALAR FUNCTION GCD.GCD("p0" IN NUMBER, "p1" IN NUMBER) RETURN NUMBER

As mentioned in the verbose log, the JavaScript code is transpiled. My guess is that the JavaScript is parsed by the Oracle Truffle framework and compiled by Oracle GraalVM. More info in the One VM to Rule Them All paper.

This has loaded the package, the library and an ‘undefined’ object of type 144 (this MLE is in beta so not all dictionary views have been updated):


SQL> select * from dba_objects where owner='DEMO';
 
OWNER OBJECT_NAME SUBOBJECT_NAME OBJECT_ID DATA_OBJECT_ID OBJECT_TYPE CREATED LAST_DDL_TIME TIMESTAMP STATUS TEMPORARY GENERATED SECONDARY NAMESPACE EDITION_NAME SHARING EDITIONABLE ORACLE_MAINTAINED
----- ----------- -------------- --------- -------------- ----------- ------- ------------- --------- ------ --------- --------- --------- --------- ------------ ------- ----------- -----------------
DEMO GCD 93427 PACKAGE 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 1 NONE Y N
DEMO gcd.js 93426 LIBRARY 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 1 NONE Y N
DEMO gcd.js 93425 UNDEFINED 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 2017-10-09:15:29:33 VALID N N N 129 NONE N
 
 
SQL> select * from sys.obj$ where obj# in (select object_id from dba_objects where owner='DEMO');
 
OBJ# DATAOBJ# OWNER# NAME NAMESPACE SUBNAME TYPE# CTIME MTIME STIME STATUS REMOTEOWNER LINKNAME FLAGS OID$ SPARE1 SPARE2 SPARE3 SPARE4 SPARE5 SPARE6 SIGNATURE SPARE7 SPARE8 SPARE9
---- -------- ------ ---- --------- ------- ----- ----- ----- ----- ------ ----------- -------- ----- ---- ------ ------ ------ ------ ------ ------ --------- ------ ------ ------
93427 284 GCD 1 9 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 51713CBD7509C7BDA23B4805C3E662DF 0 0 0
93426 284 gcd.js 1 22 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 8ABC0DDB16E96DC9586A7738071548F0 0 0 0
93425 284 gcd.js 129 144 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 09-OCT-2017 15:29:33 1 0 6 65535 284 0 0 0

MLE Javascript

So, I’ve executed the function three times for each of 10 million rows:

SQL> select distinct gcd(rownum,rownum+1),gcd(rownum,rownum+2),gcd(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:17.64

This execution, 30 million function calls in total, took about 17 seconds.

PL/SQL function

In order to compare, I’ve created the same in PL/SQL:

SQL> create or replace function gcd_pl(a number, b number) return number as
2 function gcd(a number, b number) return number is
3 begin
4 if b = 0 then
5 return a;
6 else
7 return gcd_pl.gcd(b,mod(a,b));
8 end if;
9 end;
10 begin
11 return gcd_pl.gcd(a,b);
12 end;
13 /

Here is the execution:

SQL> select distinct gcd_pl(rownum,rownum+1),gcd_pl(rownum,rownum+2),gcd_pl(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:01:21.05

PL/SQL UDF function

In 12c we can declare a function with the pragma UDF so that it is optimized for being called from SQL:

SQL> create or replace function gcd_pl_udf(a number, b number) return number as
2 pragma UDF;
3 function gcd(a number, b number) return number is
4 begin
5 if b = 0 then
6 return a;
7 else
8 return gcd_pl_udf.gcd(b,mod(a,b));
9 end if;
10 end;
11 begin
12 return gcd_pl_udf.gcd(a,b);
13 end;
14 /

Here is the execution:

SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:51.85

Native compilation

We can also improve PL/SQL runtime by compiling it natively, rather than having it interpreted as p-code:

SQL> alter session set plsql_code_type=native;
Session altered.
 
SQL> alter function gcd_pl_udf compile;
Function altered.
 
SQL> alter function gcd_pl compile;
Function altered.

and here is the result:

SQL> select distinct gcd_pl(rownum,rownum+1),gcd_pl(rownum,rownum+2),gcd_pl(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:01:10.31
 
SQL> select distinct gcd_pl_udf(rownum,rownum+1),gcd_pl_udf(rownum,rownum+2),gcd_pl_udf(rownum,rownum+3) from xmltable('1 to 10000000');
 
Elapsed: 00:00:45.54

Inline PL/SQL

Finally, similar to a UDF, we can declare the function inline in the query, in a WITH clause:


SQL> with function gcd_pl_in(a number, b number) return number as
2 function gcd(a number, b number) return number is
3 begin
4 if b = 0 then
5 return a;
6 else
7 return gcd(b,mod(a,b));
8 end if;
9 end;
10 begin
11 return gcd(a,b);
12 end;
13 select distinct gcd_pl_in(rownum,rownum+1),gcd_pl_in(rownum,rownum+2),gcd_pl_in(rownum,rownum+3) from xmltable('1 to 10000000')
14 /

And here is the result:

Elapsed: 00:00:48.92

Elapsed time summary

Here is a recap of the elapsed time:

Elapsed: 00:00:17.64 for MLE Javascript
Elapsed: 00:00:45.54 for PL/SQL UDF function (native)
Elapsed: 00:00:48.92 for Inline PL/SQL
Elapsed: 00:00:51.85 for PL/SQL UDF function (interpreted)
Elapsed: 00:01:10.31 for PL/SQL function (native)
Elapsed: 00:01:21.05 for PL/SQL function (interpreted)

The top winner is Javascript!

Perfstat Flame Graph

My tests were deliberately doing something we should avoid for performance and scalability: calling a function for each row, because this involves a lot of time spent switching context between the SQL engine and the procedural one. It is, however, good for code maintainability. This overhead is not easy to measure from the database. We can look at the call stack to see what happens when the process evaluates the operand (evaopn2) and switches to PL/SQL (evapls), and what happens besides running the PL/SQL itself (pfrrun). I have recorded perf-stat for the cases above to display the Flame Graph of the call stack. When looking for more information, I remembered that Frits Hoogland had already done that, so I’ll let you read Frits’ part 1 and part 2.

You can download my Flame Graphs and here is a summary of .svg name and call stack from operand evaluation to PL/SQL run:

  • PL/SQL UDF function (native): perf-gcd_pl_UDF_native.svg – evaopn2>evapls>peidxrex>penrun
  • Inline PL/SQL: perf-gcd_pl_inline.svg – evaopn2>evapls>kkxmss_speedy_stub>peidxrex>pfrrun>pfrrun_no_tool
  • PL/SQL UDF function (interpreted): perf-gcd_pl_UDF_interpreted.svg – evaopn2>evapls>peidxexe>pfrrun>pfrrun_no_tool
  • PL/SQL function (native): perf-gcd_pl_native.svg – evaopn2>evapls>kgmexec>kkxmpexe>kkxdexe>peidxexe>peidxr_run>plsql_run>penrun
  • PL/SQL function (interpreted): perf-gcd_pl_interpreted.svg – evaopn2>evapls>kgmexec>kkxmpexe>kkxdexe>peidxexe>peidxr_run>plsql_run>pfrrun>pfrrun_no_tool

But more interesting is the Flame Graph for the JavaScript execution:
[Flame Graph of the MLE JavaScript execution]

My interpretation of this is limited, but I don’t see a stack of context-switching functions before the MLE engine is called, which is probably the reason why it is fast. Besides the ‘unknown’ frames, which are probably the execution of the JavaScript itself (the libwalnut.so library has no symbols), we can see that most of the time is spent converting SQL data types into JavaScript types on the call, and the opposite on the return:

  • com.oracle.walnut.core.types.OraNumberUtil.doubleToNumber
  • com.oracle.walnut.core.types.OraNumberUtil.numberToDouble

This is the price to pay when running a different language, with different data types.

So what?

This Multilingual Engine looks promising, both for functionality (choose the language you run in the database) and performance (same address space as the SQL engine, and minimal context switching). Of course, this is only a beta. There may be more things to implement, with more overhead. For example, we can imagine that if it goes to production there will be some instrumentation to measure the time and record it in the Time Model. It may also be optimized further. You can test it (download it from the MLE home page) and give feedback about it (on the MLE forum).

This post was about measuring the performance of switching from SQL to the procedural engine. In the next post, I’ll look at callbacks when running SQL from the MLE.

 

The post Oracle Database Multilingual Engine (MLE) appeared first on Blog dbi services.

Testing Amazon RDS for Oracle: Plotting Latency and IOPS for OLTP I/O Pattern

This is just a quick blog entry to direct readers to an article I recently posted on the AWS Database Blog. Please click through to give it a read: https://aws.amazon.com/blogs/database/testing-amazon-rds-for-oracle-plotting-latency-and-iops-for-oltp-io-pattern/.

#800040;" src="https://kevinclosson.files.wordpress.com/2017/10/capture1.png?w=500&h=198" alt="" width="500" height="198" srcset="https://kevinclosson.files.wordpress.com/2017/10/capture1.png?w=500&h=198 500w, https://kevinclosson.files.wordpress.com/2017/10/capture1.png?w=1000&h=396 1000w, https://kevinclosson.files.wordpress.com/2017/10/capture1.png?w=150&h=59 150w, https://kevinclosson.files.wordpress.com/2017/10/capture1.png?w=300&h=119 300w, https://kevinclosson.files.wordpress.com/2017/10/capture1.png?w=768&h=304 768w" sizes="(max-width: 500px) 100vw, 500px" />

Thanks for reading my blog!

 

Filed under: oracle

nVision Performance Tuning: Introduction

This blog post is the first in a series that discusses how to get good performance from nVision as used in General Ledger reporting.

PS/nVision is a PeopleTools technology that extracts data from the database and places it in an Excel spreadsheet (see PS/nVision Overview).  Although PS/nVision can be used with any PeopleSoft product, it is most commonly used in Financials General Ledger.

The SQL queries generated by nVision are, at least conceptually, similar to data warehouse queries. The ledger, ledger budget or summary ledger tables are the fact tables.

The ledger tables are analysed by their attribute columns. There are always literal conditions on the fiscal year and accounting period, and there is usually a literal condition on the currency code.  Then there are criteria on some of the other attributes.  I will take an example that analyses the ledger table in three dimensions: BUSINESS_UNIT, ACCOUNT and CHARTFIELD1, but there are many other attribute columns on the ledger tables.  These attributes are defined in lookup tables in the application, but their hierarchies are defined in trees.

nVision reports use the trees to determine which attribute values to report.  A report might report on a whole tree, or particular nodes, or branches of a tree.  nVision joins the tree definition to the attribute table and produces a list of attributes to be reported.  These are put into working storage tree selector tables (PSTREESELECT01 to 30).  The choice of selector table is controlled by the length of the attribute column.  BUSINESS_UNIT is a 5 character column so it goes into PSTREESELECT05. CHARTFIELD1 and ACCOUNT are 10 character columns so they use PSTREESELECT10.  These selector tables form the dimensions in the queries.

Here is an example of a SQL statement generated by nVision.  The tree selector 'dimension' tables are joined to the ledger 'fact' table.

SELECT L.TREE_NODE_NUM,L2.TREE_NODE_NUM,SUM(A.POSTED_TOTAL_AMT)
FROM PS_LEDGER A
, PSTREESELECT05 L1
, PSTREESELECT10 L
, PSTREESELECT10 L2
WHERE A.LEDGER='ACTUALS'
AND A.FISCAL_YEAR=2016
AND A.ACCOUNTING_PERIOD BETWEEN 1 AND 11
AND L1.SELECTOR_NUM=30982 AND A.BUSINESS_UNIT=L1.RANGE_FROM_05
AND L.SELECTOR_NUM=30985 AND A.CHARTFIELD1=L.RANGE_FROM_10
AND L2.SELECTOR_NUM=30984 AND A.ACCOUNT=L2.RANGE_FROM_10
AND A.CURRENCY_CD='GBP'
GROUP BY L.TREE_NODE_NUM,L2.TREE_NODE_NUM

This SQL looks simple enough, but there are various complexities:

  • The tree selector tables are populated at runtime.  Many dimensions can be stored in each tree selector table, each keyed by a different SELECTOR_NUM.
  • Selectors can be static or dynamic.  In dynamic selectors, the data is only stored temporarily for the lifetime of the report and will be deleted when it completes.  So immediately, there is a challenge of keeping statistics up to date, and even then Oracle doesn't always manage to find an effective execution plan.
  • Different selectors will have different numbers of rows, so the statistics have to describe that skew (a sketch of one way to gather such statistics follows this list).
  • Different nVision reports and even different parts of the same report generate different statements that can use different combinations of attribute columns.  The number of dimensions can vary; I have seen systems that use as many as five different trees in a single query.
  • Then the database needs to find the relevant rows on the ledger table for the dimensions specified as efficiently as possible.
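To illustrate the statistics point, here is a minimal sketch of gathering a histogram on SELECTOR_NUM for one of the selector tables, so the optimizer can see how many rows each selector contributes. The SYSADM schema and the bucket count are my assumptions, not taken from any particular system:

BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(
    ownname    => 'SYSADM',           -- assumed PeopleSoft owning schema
    tabname    => 'PSTREESELECT10',
    method_opt => 'FOR COLUMNS SELECTOR_NUM SIZE 254',
    cascade    => TRUE);
END;
/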

This very quickly becomes a difficult and complex problem.  This series of articles works through the various challenges and describes methods to overcome them.  Not all of them are applicable to all systems; in some cases, it will be necessary to choose between approaches depending on circumstances.

nVision Performance Tuning: Table of Contents

This post is an index for a series of blog posts that discuss how to get good performance from nVision as used in General Ledger reporting.  As the posts become available links will be updated in this post.

  • Introduction
  • nVision Performance Options
  • Indexing of Ledger, Budget and Summary Ledger Tables
  • Partitioning of Ledger, Budget and Summary Ledger Tables
  • Additional Oracle Instrumentation for nVision
  • Logging Selector Usage
  • Analysis of Tree Usage with the Selector Log
  • Interval Partitioning and Statistics Maintenance of Selector Tables
  • Compression without the Advanced Compression option
  • Maintaining Statistics on Non-Partitioned Selector Tables
  • Excel -v- OpenXML
The current versions of scripts mentioned in the series will be made available on GitHub.

Overloaded Indexes (for ODC Appreciation Day)

ODC Appreciation Day is an idea that Tim Hall (aka Oracle-Base) came up with, to show our appreciation for the Oracle Technology Network (OTN)/Oracle Developer Community.

I want to show my support, but rather than mention an Oracle “feature” I particularly like, I’d like to mention a “trick” that I (and many others) often employ to help performance. That trick is Overloaded Indexes.

We can all appreciate that accessing data in a table via an index is a very good thing to do when we want just the few records for a given index key from the thousands or millions of rows in the table. As an example, we are looking up all the records for someone and we know their first name, last name and date of birth – a very common occurrence in things like hospital or billing systems. So our PERSON table has an index on those three columns. Oracle will read the root block of the index, read the one relevant block in each of the branch levels for the key, find all of the leaf block entries for that key – and then collect the relevant rows from the table. Fig 1 shows how we think of this working. i.e. most of the records we want to find will be close together in the table.
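In SQL terms, a hedged sketch of that index and lookup – the PERSON table and its three columns are from the example above, but the index name and bind variable names are made up:

-- index on the three lookup columns described above
CREATE INDEX person_name_dob_idx
  ON person (last_name, first_name, date_of_birth);

-- the kind of lookup it supports
SELECT *
FROM   person
WHERE  last_name     = :last_name
AND    first_name    = :first_name
AND    date_of_birth = :date_of_birth;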

Actually, a lot of people who just “use” the Oracle database as a store of information sort-of think this is how an index always works. It efficiently identifies the rows that you want and that is the end of it. If the index is on the value(s) you are looking up rows for (say LAST_NAME, FIRST_NAME, DATE_OF_BIRTH) the index is ideal and that is as good as it gets.


Fig 2 – a more usual, less efficient index range scan

But in reality, the index lookup is often far less efficient than this and is more like fig 2. Working down the index structure to get all of the required index entries is exactly the same, but the rows you want are scattered all over the table. Oracle has to fetch many table blocks to get your data, maybe as many blocks as there are records to be found. This is far from efficient.

So what can you do about this? You already have the “perfect” index, on LAST_NAME, FIRST_NAME, DATE_OF_BIRTH, the values you are looking up. Maybe you could add another column to the index to avoid those situations where there are many people with the same name and date of birth. But you may not have that extra information, or it is simply not possible to identify the values in the table any more accurately; you really do need all the rows scattered through that table for the given search key.

There are “architectural” things you can do such as create the table as an Index Organised Table (see my little set of blogs about them starting here). You can also use various methods to group the relevant rows together in the table. But all of those methods are Big Impact. You need to recreate the table or plan for this eventuality up-front when you design the system.
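As an aside, here is a minimal sketch of the Index Organised Table option just mentioned. The table and columns are hypothetical, and the key has an extra column because an IOT needs a unique primary key:

-- rows are stored in the primary key index itself, so the rows for one
-- person are physically clustered together
CREATE TABLE person_iot (
  last_name      VARCHAR2(30),
  first_name     VARCHAR2(30),
  date_of_birth  DATE,
  hospital_no    VARCHAR2(12),
  notes          VARCHAR2(200),
  CONSTRAINT person_iot_pk
    PRIMARY KEY (last_name, first_name, date_of_birth, hospital_no)
) ORGANIZATION INDEX;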

But there is a very specific, easy thing you can do to address this particular problem, for the SQL statement you need to speed up. You can add all the columns your query needs into the index. This is an Overloaded Index.

An Overloaded Index holds not only the table columns in the WHERE clause but all the columns needed from that table for the query.

Why does this work? Because when Oracle identifies the range of index entries for the key (LAST_NAME, FIRST_NAME, DATE_OF_BIRTH), all the other columns the query needs are also in those leaf entries, so there is no need to get the rows from the table. All those lookups to the table are avoided. Fig 3 at the end of this article demonstrates this.
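Staying with the hypothetical PERSON example for a moment, overloading that index might look like the following, where phone_number and email stand in for whatever columns the query actually selects:

-- every column the query needs is now in the index, so the table
-- blocks never have to be visited for this query
CREATE INDEX person_overloaded_idx
  ON person (last_name, first_name, date_of_birth,
             phone_number, email);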

However, I’ll give you a real world example I had recently. A client had a SELECT statement something like the below, with the execution plan shown, and it was running too slowly for the business requirements:

SELECT (SUM(NVL(T.TRAN_VALUE_CR,0))-SUM(NVL(T.TRAN_VALUE_DB,0))) ,
        COUNT(*)
FROM    ACCOUNTS     A ,
        TRANSACTIONS T
WHERE   A.ACC_XYZ_IND      =:3
AND     A.ACC_ACCOUNT_NO   =:1             
AND     A.ACC_SUBACC_NO    =:2
AND     T.TRAN_XYZ_IND     =A.ACC_XYZ_IND
AND     T.TRAN_ACCOUNT_NO  =A.ACC_ACCOUNT_NO
AND     T.TRAN_SUBACC_NO   =A.ACC_SUBACC_NO
AND     T.TRAN_P_IND       =:4 
AND     T.TRAN_DLM_DATE    >=TO_DATE(:5,'YYYYMMDD')

------------------------------------------------------------
| Operation                                 | Name         | 
------------------------------------------------------------
|   0 | SELECT STATEMENT                    |              |
|   1 |  SORT AGGREGATE                     |              |
|   2 |   NESTED LOOPS                      |              |
|   3 |    NESTED LOOPS                     |              |
|*  4 |     INDEX RANGE SCAN                | ACC_PRIME    |
|*  5 |      INDEX RANGE SCAN               | TRAN2_3      |
|*  6 |    TABLE ACCESS BY LOCAL INDEX ROWID| TRANSACTIONS |
------------------------------------------------------------
Statistics
----------------------------------------------------------
       4740  consistent gets
       3317  physical reads

The index used on TRANSACTIONS is:

INDEX_NAME                   TABLE_NAME       PSN COL_NAME
---------------------------- ---------------- --- --------------
TRAN2_3                      TRANSACTIONS     1   TRAN_ACCOUNT_NO
TRAN2_3                      TRANSACTIONS     2   TRAN_SUBACC_NO
TRAN2_3                      TRANSACTIONS     3   TRAN_DLM_DATE

The index TRAN2_3 on the TRANSACTION table that you can see being used in the plan was for all the columns being used in the WHERE clause that actually helped identify the TRANSACTION records required – TRAN_ACCOUNT_NO, TRAN_SUBACC_NO and TRAN_DLM_DATE (the TRAN_XYZ_IND and TRAN_P_IND were always the same so “pointless” to index).

I added a new index to the TRANSACTIONS table. I added a new index rather than change the existing index, as we did not want to impact other code and we wanted to be able to drop this new index if there were any unexpected problems. I added all the columns on the TRANSACTIONS table that were in the SELECT list, plus those in the WHERE clause even though they did not help better identify the rows needed. If there had been TRANSACTIONS columns in an ORDER BY or windowing clause, I would have added them too. So my index looked like this:

create index TRAN2_FQ on TRANSACTIONS
(TRAN_ACCOUNT_NO   ,TRAN_SUBACC_NO   ,TRAN_DLM_DATE
,TRAN_P_IND        ,TRAN_XYZ_IND
,TRAN_VALUE_CR     ,TRAN_VALUE_DB)

It is very, very important that the new index holds every column from the TRANSACTIONS table that the query needs. To prevent accessing the table, all the data the query needs from that table must be in the index.

Oracle could now satisfy the query by using just the new index, as the explain plan below shows.

----------------------------------------------------
| Id  | Operation                      | Name      |
----------------------------------------------------
|   0 | SELECT STATEMENT               |           |
|   1 |  SORT AGGREGATE                |           |
|   2 |   NESTED LOOPS                 |           |
|*  3 |    INDEX RANGE SCAN            | ACC_PRIME |
|*  4 |     INDEX RANGE SCAN           | TRAN2_FQ  |
----------------------------------------------------
Statistics
----------------------------------------------------
         56  consistent gets
         52  physical reads

There is now no line in the plan for visiting the table TRANSACTIONS and we are using the new TRAN2_FQ index. The consistent gets and physical reads to satisfy the query have gone down from 4740 and 3317 respectively to 56 and 52. I think that is good enough.

Fig 3 shows what is happening. The new index is effectively a “mini IOT” designed to support the given query.

There are of course a few caveats. The new index needs to be maintained, which is an overhead on all INSERT/UPDATE/MERGE/DELETE activity on the table. The index will only remove the need to visit the table for queries that are very, very similar to the one it is designed for – ones that use the same columns from the TRANSACTIONS table, or a subset of them. If you alter the query, e.g. to select another column from the TRANSACTIONS table, you would need to revisit this overloaded index.

Finally, be careful of modifying existing indexes to overload them to support specific queries. If the index is there to support referential integrity you need to think carefully about this, and the modified index may be less efficient for other queries that used the original index (as adding columns to an index makes it “wider”).

BTW, if you think you recognise this from a recent Oracle Scene article then you would be right. I was pushed for time, so I used something I had written before.

Oracle Developer Community Appreciation Day

OK, time for some controversy :)

My ODC appreciation post is about support.oracle.com

Now I’m sure there will be people in the community just itching to lose their proverbial about how they lost years of their lives waiting on a support response or similar complaint, and since I work for Oracle, they will be just as keen to jump on the bandwagon and claim that I’m just shilling for the company I work for.

But I’m not denying that our support resource could be better.  Any company’s support resources can be better. 

This post is about the value I get from support.oracle.com when answering questions on asktom.oracle.com.  Yes, there is some outdated or incorrect content on there, but there is also some awesome content.  Many times I find myself quoting MOS notes when providing background information for a question on AskTOM.

So this is my thank you to the people who contribute support notes to MOS.

Because I’m pretty sure it’s a thankless task.


Disrupt or Die- Learning From History

As I continue to dig into the legacy we’re impacted by in technology, I had the opportunity to read Disrupt or Die, the new book from Delphix and Avamar founder, Jedidiah Yueh.  I was at first concerned that the content wouldn’t be that interesting to someone from the deep technical side like myself…

To be honest, I ended up quite captivated by the history lessons surrounding some of our most successful technology companies, like Facebook, Google, Salesforce and others.  The book discusses, in Jed’s own words, something that I’ve always referred to as “the power in the simple act of doing” and understanding that there is a distinct formula to success.  If you have vision (i.e. an ability to see the big picture), you should follow your path, and if you don’t have this vision, search out and follow those that do.

Lessons Learned

There’s a significant number of lessons for anyone to learn from, but this book did offer me a distinct value, as I’ve always felt I had a difficult time comprehending the C-level or entrepreneur mindset.  I appreciated all the data and analogies that Jed uses in the book.  It was a bit intimidating at first, being told there’s no field that’s too complicated for him to master, but I quickly felt comfortable once he assured the reader to believe the same about themselves when using the method of learning he proceeds to share in his book.

Jed happened to use competitive data as an example, and this resonated with me.  Although I had a deep understanding of technology, until I joined Delphix, I hadn’t really done competitive analysis.  After taking my time to understand the technology at what might appear to be a very slow and deep pace, I’m now able to quickly review any company that someone deems a potential competitor and know within minutes if they are or if they will be competition in the near future.  This is a process Jed refers to as Learning Slow and Fast, requiring someone to take considerable time building an educational foundation for a topic, which then offers the ability to accelerate their learning once that foundation is built.  As we all know how fast technology changes, you have to appreciate the opportunity to learn a method that could serve you in learning new topics in the future.

Yesterday and Tomorrow

When it comes to the history and the future of technology, in Chapter 5 Jed discusses the “mad scientist” of technology, Elon Musk.  Elon demonstrates some interesting, albeit modern, personality similarities to the inventor, physicist and sometime futurist his car company is named after, Nikola Tesla.


Although Elon Musk didn’t name the Tesla company, you can’t help but notice the similar thirst for knowledge, larger than life ideas and accomplishments that surround and drive both individuals – one historical, the other current.  Robert Lomas, author of “The Man Who Invented the Twentieth Century”, even referred to Nikola Tesla as having a “reputation of an archetypal mad scientist in American popular culture”, which is similar to Jed’s own reference to Musk.

We realize that those with this type of vision (i.e. the mad scientists) are required for us to move past some of the legacy and nostalgia in technology, science, mathematics and engineering that is holding us back from moving to the next phase of advancement.

How Far We’ve Come

Jed spends chapters carefully disseminating the Silicon Valley history, magic, leadership and those that have impressed him and why.  Learning about where he came from, what caused him to create companies like Delphix, helped me understand the company I work for and love the job I do for Delphix even more.

In the process, I somehow managed to learn more about wine tasting, all the while understanding the C-level/entrepreneur mindset instead of just rushing off to some new geeky task….:)

 



Copyright © DBA Kevlar [Disrupt or Die- Learning From History], All Right Reserved. 2017.

The post Disrupt or Die- Learning From History appeared first on DBA Kevlar.

Thanks ODC, Oracle WIT Edition #ThanksODC

Just in case you wondered, if you do a search for #ThanksODC and leave the “C” off, you get a lot of stuff on the boy band, One Direction…. You can thank me later for that warning….

The sheer number of thank-you posts for Oracle Developer Community Appreciation Day (#ThanksODC), and for the historical hashtag for reference, #ThanksOTN, is impressive, but I waited a day on mine so I could continue the support for Oracle Women in Technology (and @ORAWit).  Here are the #ThanksODC blog posts from the great women of the Oracle community!

I’m going to start with Flora B, (@floo_bar) in recognizing her first blog post!!  I’m even more thrilled it’s on the Enterprise Manager CLI – a topic near and dear to my heart!

She’s off to a great start and I look forward to seeing more from her in the upcoming years!

Sabine, (@oraesque) has been creating some awesome content and wasn’t to be outdone today when she posted on why she likes ANSI joins.

Another newer contributor to the Oracle community is Mandy Sandhu, (@Mandysandhu14). She wrote a great post on a topic that all DBAs run into sooner or later: moving datafiles.

Opal, (@opal_EPM) made sure to get her post out on OAC Cube Designer and is one of the first additions post-publish!

Next is Heli, (@HeliFromFinland) and her post on the JSON Data Guide.

Debra, (@debralilley) made sure to get her post out, too, relinking to other posts she’s done with a focus on the Visual Cloud Builder.

And although I didn’t see anything from Paula Pullas, (@Techbyppullas) she is also following the full #ThanksODC feed on Twitter, so she’s a good one to follow for the full list of everyone’s posts.

I will search when possible to update this post for any additional posts as the day progresses.  Go #OraWIT!!

 



Copyright © DBA Kevlar [Thanks ODC, Oracle WIT Edition #ThanksODC], All Right Reserved. 2017.

The post Thanks ODC, Oracle WIT Edition #ThanksODC appeared first on DBA Kevlar.

ODC Appreciation Day : Javascript in the database

Tim Hall has launched the idea of posting small blogs today, from all of the Oracle community, about an Oracle feature. I chose one feature that is only released as a beta for the moment: the Multilingual Engine (MLE), which is able to run JavaScript stored procedures in the database.

Why?

When I first heard about this idea, last year before OOW16, I didn’t understand it at all. But the good thing at Oracle Open World is that we can discuss with Oracle product managers, and with other Oracle DBAs or developers, rather than relying on rumors or wrong ideas. My perception of JavaScript was limited to the language used client-side, in thin clients, in the browser, to manage the presentation layer. It is interpreted by the browser, has no type checking, and errors are not easy to understand. Clearly the opposite of something that I want to run in my database, on my data. PL/SQL is obviously the best choice: compiled and run in the database, strong typing to avoid runtime errors, directly integrated with SQL for better performance, etc.

So that idea of JS in the database made me smile, but I was wrong. What I didn’t get is that JavaScript is just a language, and running JavaScript does not mean that it has to be interpreted the way it is when running in a browser.

Multilingual Engine (MLE)

Actually, what Oracle is developing in its lab goes far beyond just running JavaScript in the database. They are building an execution engine, like the PL/SQL or SQL execution engines, but one able to run programs written in different languages. They start with JavaScript and TypeScript (which brings strong typing), but this can be extended in the future (Python, and why not PL/SQL one day running there). The programs will be loaded into the database as stored procedures/functions/packages and compiled into an intermediate representation, like bytecode. This code is optimized to access data efficiently, like the PL/SQL engine.

Actually, I’ll show in a future post that this new engine can run faster than PL/SQL for some processing and that it looks like the context switching with the SQL engine is highly efficient.

Javascript

So, why would you write your stored procedures in JavaScript? The first reason is that there are a lot of existing libraries available and you may not want to re-write one. For example, I remember, when working on an airline company application, that I had to write in PL/SQL the function to calculate the orthodromic distance (aka great-circle distance). This is a very simple example. But if you can get the formula in JavaScript, then why not compile from that rather than translate it into another language? Currently, you can find pretty much everything in JavaScript or Python.
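To make that concrete, here is a hedged sketch of the kind of PL/SQL I mean – a haversine great-circle distance. The function name, the assumption that coordinates are in degrees, and the Earth radius in kilometres are all mine, not code from that application:

create or replace function great_circle_km(
  p_lat1 number, p_lon1 number, p_lat2 number, p_lon2 number
) return number as
  c_pi     constant number := acos(-1);
  c_radius constant number := 6371;   -- mean Earth radius in km (assumption)
  l_dlat   number := (p_lat2 - p_lat1) * c_pi / 180;
  l_dlon   number := (p_lon2 - p_lon1) * c_pi / 180;
  l_a      number;
begin
  -- haversine formula on coordinates given in degrees
  l_a := sin(l_dlat / 2) ** 2
       + cos(p_lat1 * c_pi / 180) * cos(p_lat2 * c_pi / 180) * sin(l_dlon / 2) ** 2;
  return 2 * c_radius * asin(sqrt(l_a));
end;
/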

The second reason is that your application may have to use the same function at different layers. For example, you can check that a credit card number is correctly formed in the presentation layer, in order to show the user quickly whether it is correct or not. That may be JavaScript in the browser. But the database should also verify it, in case the rows are inserted by a different application, or in case the number has been corrupted in between. That may be PL/SQL in the database. Then you have to maintain two libraries in two different languages, doing the same thing. Being able to run JavaScript in the database lets us re-use exactly the same library in the client and in the database.
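For illustration, here is a hedged sketch of that kind of duplicated check – a Luhn checksum on a card number – as it might look on the database side. The function name is made up, the input is assumed to contain only digits, and it returns 'Y'/'N' so it can also be called from SQL:

create or replace function is_luhn_valid(p_number varchar2) return varchar2 as
  l_sum   pls_integer := 0;
  l_digit pls_integer;
begin
  -- walk the digits from right to left, doubling every second one
  for i in 1 .. length(p_number) loop
    l_digit := to_number(substr(p_number, -i, 1));
    if mod(i, 2) = 0 then
      l_digit := l_digit * 2;
      if l_digit > 9 then
        l_digit := l_digit - 9;
      end if;
    end if;
    l_sum := l_sum + l_digit;
  end loop;
  return case when mod(l_sum, 10) = 0 then 'Y' else 'N' end;
end;
/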

Finally, one reason why some enterprise architects do not want to write procedures in the database is that the language for that, PL/SQL, can only run on Oracle. If they can write their business logic in a language that can run everywhere, then there is no vendor lock-in anymore. They have the possibility to run on another RDBMS if needed, and still get the optimal performance of processing data in the database.

Public Beta

Currently, this is a lab project from Oracle in Zurich. They have released a public beta downloadable as a VM. Just go to the download page at http://www.oracle.com/technetwork/database/multilingual-engine/overview/index.html


And stay tuned to this blog to see a performance comparison with PL/SQL user-defined functions.

 

The post ODC Appreciation Day : Javascript in the database appeared first on Blog dbi services.

Free eBook: DevOps and the DBA, The Last Frontier!

The first edition of the free eBook is available for download at Delphix!

It took us a bit to get the final updates to the eBook version completed, and I want to thank everyone at Delphix who helped get the content updated with the latest additions, including the bossman, Eric Schrock, and, for helping me keep it all organized and finalized, a shoutout to Rita Rosenberg.


If you’re interested in downloading a PDF copy, simply go to this link and get a version for yourself and find out what you need to know as a DBA about where we fit into the DevOps landscape!




Copyright © DBA Kevlar [Free eBook: DevOps and the DBA, The Last Frontier!], All Right Reserved. 2017.

The post Free eBook: DevOps and the DBA, The Last Frontier! appeared first on DBA Kevlar.