Oakies Blog Aggregator

ADWC – the hidden gem: Zeppelin Notebook

In the previous blog posts I explained how to create and stop/start the Autonomous Data Warehouse Cloud service, but I haven’t yet shown how to connect to it. That part is easy, from sqlplus, SQL Developer, or SQLcl.

But there’s a more exciting way to run SQL queries: the Oracle Machine Learning Notebooks, based on Apache Zeppelin. At first, I didn’t understand why the administration menu entry to create users in the ADWC service was named ‘Manage Oracle ML Users’, or why the ‘Autonomous Data Warehouse Cloud’ header was replaced by ‘Machine Learning’.

But last week at IOUG Collaborate 18, I visited the Demo Grounds and, thanks to Charlie Berger, I realized the full power of this: we are in the ‘Machine Learning’ interface here, and the home button opens all the features available to query the ADWC database, including the SQL Notebooks based on Apache Zeppelin.

Here is the path to this hidden gem. From your ADWC service, you go to the Service Console:
[Screenshot: the Service Console link on the ADWC service page]

Here you log in as the ADMIN user, with the 12+ character password you defined at service creation. Don’t worry if you have forgotten it; you can reset it from here:
[Screenshot: Service Console login and ADMIN password reset]

Once connected, you go to the Administration tab and choose ‘Manage Oracle ML Users’:

[Screenshot: Manage Oracle ML Users]

Here you have to create a user, because the ADMIN user is not a Machine Learning user. Machine Learning users need one of the following roles: OML_DEVELOPER, OML_APP_ADMIN, OML_SYS_ADMIN. The user you create here will have OML_DEVELOPER, which is required to use SQL Notebooks.
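
Behind the scenes this is ordinary role management. As a hedged illustration only (the user name is hypothetical, and the ‘Manage Oracle ML Users’ page does more than this one statement), the grant involved would look like:

-- hypothetical user name; OML_DEVELOPER is the role required for SQL Notebooks
grant OML_DEVELOPER to oml_demo;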

Now that you have a user created from here, you can click on this little house icon, which is your home in the Machine Learning part of the ADWC:

[Screenshot: the Machine Learning home icon]

Here you connect with the user you have created from the Oracle ML User page (not the ADMIN one as it has no OML role granted).

[Screenshot: Oracle Machine Learning sign-in]

Then you are in your OML home, ready to run SQL from a Notebook:

[Screenshot: the OML home page]

I’ll show what you can do there in a future post. But just to give you an idea, you have a Notebook where you can type a query, execute it, and have the result displayed as a table or as a graph. Here I was looking at I/O latency, and the graph below shows that the ‘cell single block physical read’ waits (in dark green – nothing other than the buffered one-block-at-a-time reads that are called ‘db file sequential read’ when not on Exadata) have most of their I/O call time between 128 and 512 microseconds.

[Screenshot: Notebook with the I/O call time histogram per wait event]
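
If you want to try something similar yourself, here is a sketch of the kind of query behind such a graph – not necessarily the one I ran, but v$event_histogram_micro (available since 12c) exposes the microsecond buckets:

-- wait-time histogram, in microsecond buckets, for one I/O wait event
select event, wait_time_micro, wait_count
from   v$event_histogram_micro
where  event = 'cell single block physical read'
order  by wait_time_micro;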

I like this way of having the result just under the query, with easy formatting. The code, documented, sits in the same place as the result, in a notebook that is easy to refresh or share. And you can export the whole thing in a simple JSON file.

 

This article, ADWC – the hidden gem: Zeppelin Notebook, appeared first on Blog dbi services.

Free Conference (*) in May!

How do you fancy going to a full-day, five-stream conference, for free? With a great agenda including Pete Finnigan talking on the hot topic of GDPR; Chris Saxon, Nigel Bayliss and Grant Ronald giving us the latest low-down on the optimizer, 18c database features for developers and AI-powered apps; and stalwarts of the Oracle community like Robin Moffatt, Zahid Anwar and Andrew Clarke giving their real-world view?


Well, if you are a member of the UKOUG you can – and even if you are not a member, there is a way! All levels of UKOUG membership, even bronze, allow you to attend at least one SIG (Special Interest Group) meeting – and the Northern Technology Summit is classed as a SIG, even though it is as large as some smaller conferences. The 5 streams cover Database, RAC, Systems, APEX, and Development (I know, APEX is part of development – but it gets a whole stream to fit in the large range of speakers, who are mostly end users with real stories to tell). You can see the full agenda here.



[Photo: Park Plaza, Leeds]

The summit is being held in Leeds, at the Park Plaza hotel, on the 16th of May. The Park Plaza is so close to Leeds train station that you could probably hit it with a catapult from the entrance. It is also about 2 minutes from where the M621 (a spur off the M1) ends in the city centre. You can sign up to the event by clicking here.

Is Leeds far away? No. Trains from Kings Cross take only 2 hours and you can get there and back for £50 or less. Check out Trainline.com and similar websites. Of course, coming in from Birmingham, Sheffield, Manchester, Newcastle etc. is even quicker and cheaper (except maybe Brum, for reasons I cannot fathom). Even Edinburgh is less than 3 hours away.

So you are not a UKOUG member – you can still come, and still come for free as I said – well, sort of. The cost of a SIG for a non-member is £170 plus VAT, which is pretty cheap for a whole-day event full of technical content and an absolute steal for a 5-stream mini-conference. But if you become a Bronze member of the UKOUG for five pounds less, i.e. £165, you get a SIG place – so you can come to the Northern Technology Summit. The UKOUG have waived the usual joining fee of £50 to ensure it is cheaper to become a Bronze member than simply pay for this event. And if you become a higher-level member (silver, gold, platinum), the UKOUG will still waive the joining fee. You can see full details of the offer here.

As well as the excellent agenda we will be having some fun. We are having a meet-up the night before in Leeds, at Foley’s Tap House, where we have reserved an area. This is one of my favourite pubs in Leeds; I seem to end up in it for a pint or two whenever I visit the city. There are already over half a dozen of us going, and I’ll buy a round. The Park Plaza hotel is just next to the latest shopping centre in Leeds. If you have never visited the city before, or did so a long time ago, it has become a very vibrant city centre over the last 10 years or so. I suspect that after the event some of us will end up in the Scarborough Hotel opposite the train station before we wander home.

So, sign up and get yourself over to a whole-day, 5-stream conference full of both the official information from Oracle on 10 topics and end-user/partner opinions on 25 more.

FBI Limitation

A recent question on the ODC (OTN) database forum prompted me to point out that the optimizer doesn’t consider function-based indexes on remote tables in distributed joins. I then spent 20 minutes trying to find the blog note where I had demonstrated this effect, or an entry in the manuals reporting the limitation – but I couldn’t find anything, so I’ve written a quick demo which I’ve run on 12.2.0.1 to show the effect. First, the SQL to create a couple of tables and a couple of indexes:


rem
rem     Script:         fbi_limitation.sql
rem     Author:         Jonathan Lewis
rem     Dated:          May 2018
rem

-- create public database link orcl@loopback using 'orcl'; 
define m_target = orcl@loopback

create table t1
segment creation immediate
nologging
as
with generator as (
        select
                rownum id
        from dual
        connect by
                level <= 1e4 -- > comment to avoid WordPress format issue
)
select
        rownum                          id,
        rownum                          n1,
        lpad(rownum,10,'0')             v1,
        lpad('x',100,'x')               padding
from
        generator       v1,
        generator       v2
where
        rownum <= 1e6 -- > comment to avoid WordPress format issue
;

create table t2
nologging
as
select * from t1
;

alter table t1 add constraint t1_pk primary key(id);
alter table t2 add constraint t2_pk primary key(id);
create unique index t2_f1 on t2(id+1);

begin
        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T1',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );

        dbms_stats.gather_table_stats(
                ownname     => user,
                tabname     => 'T2',
                cascade     => true,
                method_opt  => 'for all columns size 1'
        );
end;
/


The code is very simple: it creates a couple of identical tables with an id column that will produce an index with a very good clustering_factor. You’ll notice that I’ve (previously) created a public database link that is (in my case) a loopback to the current database, and that the code defines a variable that I can use as a substitution variable later on. If you want to do further tests with this model you’ll need to make some changes in these two lines.

So now I’m going to execute a query that should result in the optimizer choosing a nested loop between the tables – but I have two versions of the query, one which treats t2 as the local table it really is, and one that pretends (through the loopback) that t2 is remote.


set serveroutput off

select
        t1.v1, t2.v1
from
        t1,
        t2
--      t2@orcl@loopback
where
        t2.id+1 = t1.id
and     t1.n1 between 101 and 110
;


select * from table(dbms_xplan.display_cursor);

select
        t1.v1, t2.v1
from
        t1,
--      t2
        t2@orcl@loopback
where
        t2.id+1 = t1.id
and     t1.n1 between 101 and 110
;

select * from table(dbms_xplan.display_cursor);

Here are the two execution plans, pulled from memory – including the “remote” section in the distributed case:


SQL_ID  fthq1tqthq8js, child number 0
-------------------------------------
select  t1.v1, t2.v1 from  t1,  t2 -- t2@orcl@loopback where  t2.id+1 =
t1.id and t1.n1 between 101 and 110

Plan hash value: 1798294492

--------------------------------------------------------------------------------------
| Id  | Operation                    | Name  | Rows  | Bytes | Cost (%CPU)| Time     |
--------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT             |       |       |       |  2347 (100)|          |
|   1 |  NESTED LOOPS                |       |    11 |   407 |  2347   (3)| 00:00:01 |
|*  2 |   TABLE ACCESS FULL          | T1    |    11 |   231 |  2325   (4)| 00:00:01 |
|   3 |   TABLE ACCESS BY INDEX ROWID| T2    |     1 |    16 |     2   (0)| 00:00:01 |
|*  4 |    INDEX UNIQUE SCAN         | T2_F1 |     1 |       |     1   (0)| 00:00:01 |
--------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter(("T1"."N1"<=110 AND "T1"."N1">=101))
   4 - access("T2"."SYS_NC00005$"="T1"."ID")

Note
-----
   - this is an adaptive plan




SQL_ID  ftnmywddff1bb, child number 0
-------------------------------------
select  t1.v1, t2.v1 from  t1, -- t2  t2@orcl@loopback where  t2.id+1 =
t1.id and t1.n1 between 101 and 110

Plan hash value: 1770389500

-------------------------------------------------------------------------------------------
| Id  | Operation          | Name | Rows  | Bytes | Cost (%CPU)| Time     | Inst   |IN-OUT|
-------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |      |       |       |  4663 (100)|          |        |      |
|*  1 |  HASH JOIN         |      |    11 |   616 |  4663   (4)| 00:00:01 |        |      |
|*  2 |   TABLE ACCESS FULL| T1   |    11 |   231 |  2325   (4)| 00:00:01 |        |      |
|   3 |   REMOTE           | T2   |  1000K|    33M|  2319   (3)| 00:00:01 | ORCL@~ | R->S |
-------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("T1"."ID"="T2"."ID"+1)
   2 - filter(("T1"."N1"<=110 AND "T1"."N1">=101))

Remote SQL Information (identified by operation id):
----------------------------------------------------
   3 - SELECT "ID","V1" FROM "T2" "T2" (accessing 'ORCL@LOOPBACK' )

Both plans show that the optimizer has estimated the number of rows retrieved from t1 correctly (very nearly). But while the fully local query does a nested loop join using the high-precision, very efficient function-based index (reporting the internal supporting column in the predicate section), the distributed query seems to have no idea about the remote function-based index: it selects all the required rows from the remote table and does a hash join.
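
If you want to see the supporting column for yourself (the SYS_NC00005$ reported in the predicate section), it is visible in user_tab_cols on the local definition of t2:

-- the FBI creates a hidden virtual column whose data_default is the indexed expression
select column_name, hidden_column, virtual_column, data_default
from   user_tab_cols
where  table_name = 'T2'
order  by internal_column_id;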

Footnote:

Another reason for changes in execution plan when you test fully local and then run distributed is the optimizer ignoring remote histograms, as demonstrated in a much older blog note (though still true in 12.2.0.1).

Addendum

After finishing this note, I discovered that I had written a similar note about reverse key indexes nearly five years ago. Arguably a reverse key is just a special case of a function-based index – except it’s not labelled as such in user_tab_cols, and doesn’t depend on a system-generated hidden column.
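
For reference, the reverse key declaration looks like this – and, consistent with the remark above, it adds no system-generated hidden column to user_tab_cols:

-- a reverse key index: arguably a special case of 'function-based', but not labelled as such
create index t2_rev on t2(n1) reverse;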

 

StarEast, InteropITX and GDPR

I’m getting ready to get on a plane between two events today and have been so busy that there’s been a break in blogging.  That’s right, folks: Kellyn has let a few things slide….

For those people on top of all the happenings in Kevlar’s life: I’ve been busy removing 15 years of possessions from my home so we can sell it in the next month, along with the purchase, upgrade and consolidation into a 42ft travel trailer.  It’s quite an undertaking, so a few things have had to be put lower on the priority list so the rest can be completed.  I am finishing up a technical review of an Oracle Cloud book from Apress, and have signed on to write a Women in Technology book.

I’m happiest when I’m busy, but this is a bit too much for anyone, needless to say.  I’m going to add a short blog while I have a minute on what is going on this week.  I’m representing Delphix and just finished speaking at Techwell’s StarEast Software Testing conference in Orlando.  The session was incredibly well attended, with great interaction from those there (I love questions and comments, what can I say?).  I also met the audience who had requested my “Genius Bar” meeting, and was able to update my slides to complement them – a majority of testers looking to embrace DevOps in their business.

I will then get on a plane in a few hours for Las Vegas for the Interop ITX conference.  Thanks to the referral from Karen Lopez, I’m going to speak on data and DevOps at this event (if my plane will just stop with the delays already… :)).  I’m looking forward to getting back to Las Vegas (wasn’t I just there last week for Collaborate??) and hopefully getting to see Karen Lopez and maybe even Gwen Shapira for a minute or two!

Keep an eye on this space – lots of great stuff in the next couple of weeks, with the Great Lakes Oracle Conference (GLOC), Data Summit and SQL Saturday Dallas!

 




Copyright © DBA Kevlar [StarEast, InteropITX and GDPR], All Rights Reserved. 2018.

ADWC – a Docker container to start/stop Oracle Cloud services

In the previous post, I explained how to start and stop the Autonomous Data Warehouse Cloud service from PSM (PaaS Service Manager). There’s a setup phase, and a run phase starting with start-service and ending with stop-service. And the setup is specific to an Oracle Cloud account, storing information in the local user’s home directory. You may want to run different setups, and even provide an easy way to start/stop an Oracle Cloud service without knowing the user, password and tenant name.

A Docker container is perfect to isolate this.

Dockerfile

Here is my quick (aka experimental) Dockerfile:

FROM alpine:latest
RUN apk add --update --no-cache python3 curl jq
ENV user=my.cloud.account@me.com
ENV password=MyP@ssw0rd
ENV tenant=idcs-31bbd63c3cb9466cb8a96f627b6b6116
ENV region=us
# get PSM
RUN curl -X GET -u ${user}:${password} -H X-ID-TENANT-NAME:${tenant} https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/${tenant}/client -o psmcli.zip
# install PSM
RUN pip3 install -U psmcli.zip
# setup PSM
RUN echo "{\"username\":\"${user}\",\"password\":\"${password}\",\"identityDomain\":\"${tenant}\",\"region\":\"${region}\",\"outputFormat\":\"short\"}" > config-payload &&\
psm setup --config-payload config-payload &&\
rm config-payload # remove file with password
# patch PSM for bugs
RUN sed -ie 's/\/core//g' ~/.psm/data/ADWC.json
# variables which can be overwritten at run time
ENV service=ADWC
ENV name=MYADWC1
CMD trap 'echo "Stopping service ${name}...";psm ${service} stop-service --service-name ${name} -wc true ; exit 0' SIGINT SIGSTOP SIGKILL; \
    echo "Starting service ${name}...";psm ${service} start-service --service-name ${name} -wc true || exit 1 ; \
    echo "You can access to ${name} console with ADMIN user at:";echo;psm ${service} service -s ${name} -of json | jq -r '.serviceConsole';echo ; \
    while sleep 60 ; do echo "Status of service ${name} at $(date)...";psm ${service} service -s ${name} ; done

We need curl to download PSM, pip3 to install it, python3 to run it, and jq to extract the console URL from the JSON output.
You can set your Oracle Cloud Account credentials as environment variables.
Then it fills all required information in a ‘config-payload’ file, runs ‘psm setup’ and removes that file.

At run time, it calls ‘start-service’ and loops while showing the status every minute (you can see it with docker container logs). The INT, STOP and KILL signals call ‘stop-service’. The idea is that while the container exists, the Cloud Service is running, and it is shut down at the container’s end of life. There’s no other action to do with the container: it displays the console URL where you have everything you need to interact with the service (download client credentials, manage users, go to Machine Learning notebooks, …).
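
Assuming this Dockerfile is in the current directory, a build plus a run that overrides the service name at run time would look like this (the image tag matches the run example below):

# build the image, then run it with the ENV variable 'name' overridden
docker build -t franck/psm/awdc .
docker run --rm -e name=MYADWC1 franck/psm/awdc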

Run example:

Here is a simple example:

# docker run --rm franck/psm/awdc
 
Starting service MYADWC1...
Message: Job submitted successfully for start of service/system
Job ID: 25583108
Waiting for the job to complete... (it cannot be cancelled)
Command completed with status [SUCCEED].
You can access to MYADWC1 console with ADMIN user at:
 
https://adwc.uscom-east-1.oraclecloud.com/console/index.html?tenant_name=idcs-31bbd63c3cb9466cb8a96f627b6b6116&database_name=MYADWC1
 
Status of service MYADWC1 at Sun Apr 29 18:20:50 UTC 2018...
Service: MYADWC1
Status: Ready
Version: 18.1.4.0
Edition: N/A
Compute Site: N/A
Cloud Storage Container: N/A
Created On: 2018-04-19T19:22:18.360+0000
Status of service MYADWC1 at Sun Apr 29 18:21:51 UTC 2018...
Service: MYADWC1
Status: Ready
Version: 18.1.4.0
Edition: N/A
Compute Site: N/A
Cloud Storage Container: N/A
Created On: 2018-04-19T19:22:18.360+0000
 
^C
 
Stopping service MYADWC1...
Message: Job submitted successfully for stop of service/system
Job ID: 25620930
Waiting for the job to complete... (it cannot be cancelled)
Command completed with status [SUCCEED].

After 2 minutes I hit ‘Control-C’ to stop the container. The service has been cleanly shut down.

 

This article, ADWC – a Docker container to start/stop Oracle Cloud services, appeared first on Blog dbi services.

ODTUG Kscope18 Conference: Change of Presentation Times

For those of you that were planning to attend my presentations at the upcoming ODTUG Kscope18 Conference in Orlando (yes, I’m specifically talking to the two of you), both presentations have been allotted new times and locations. They are now confirmed as: New Indexing Features Introduced in Oracle 12c Release 2 (and 18c): When:  June […]

18c Scalable Sequences Part II (Watch That Man)

In Scalable Sequences Part I, I introduced this new 18c feature (although it was hidden and undocumented in previous releases). By adding a 6 digit prefix value that constitutes the first 3 digits for the Instance Id and the next 3 digits for the Session Id, it results in a sequence value that doesn’t always […]

ADWC: start/stop with PSM Command Line Interface

In the previous post, I explained how to create an Autonomous Data Warehouse with PSM (the PaaS Service Manager Command Line Interface). The most common operation you want to do with it is starting and stopping the service: this is the best way to save credits for hourly billed services. And PSM is the easiest way: it runs from anywhere (it is Python 3) and there is no need to provide credentials each time. In the previous post, I explained how to set up PSM for the ADWC service.

Unfortunately, for starting and stopping the instance you may realize that:

  • It is not in the documentation
  • Syntax exists but doesn’t work


The documentation is there, but with no mention of start-service, stop-service or restart-service: https://docs.oracle.com/en/cloud/paas/autonomous-data-warehouse-cloud/user/adwc-commands.html

The online help has start/stop/restart-service:

$ psm adwc h
 
DESCRIPTION
Oracle Autonomous Data Warehouse Cloud
 
SYNOPSIS
psm ADWC [parameters]
 
AVAILABLE COMMANDS
o services
List all Autonomous Data Warehouse Cloud instances
o service
List Autonomous Data Warehouse Cloud instance
o create-service
Provision Autonomous Data Warehouse
o delete-service
Unprovision Autonomous Data Warehouse
o scale-service
Scale Autonomous Data Warehouse
o start-service
This operation will set the operational state of service as started
o stop-service
This operation will set the operational state of service as stopped
o restart-service
This operation will set the operational state of service as after...
o view-backups
List all backups of Autonomous Data Warehouse Cloud instance
o view-backup
List a backup of Autonomous Data Warehouse Cloud instance
o backup
Backup Autonomous Data Warehouse
o view-restores
List all restore operations for Autonomous Data Warehouse Cloud instance
o view-restore
List a specified restore operation for Autonomous Data Warehouse Cloud...
o restore
Restore Autonomous Data Warehouse
o check-health
Health Check operation
o operation-status
View status of Autonomous Data Warehouse Cloud instance operation
o activities
View activities for Autonomous Data Warehouse Cloud instance
o help
Show help

All three take the same parameters: the service name, the output format, and a boolean to wait for completion of the job:

$ psm adwc start-service h
 
DESCRIPTION
This operation will set the operational state of service as started
 
SYNOPSIS
psm ADWC start-service [parameters]
-s, --service-name
[-of, --output-format ]
[-wc, --wait-until-complete ]
 
AVAILABLE PARAMETERS
-s, --service-name (string)
Name of the Autonomous Data Warehouse Cloud instance
 
-of, --output-format (string)
Desired output format. Valid values are [short, json, html]
 
-wc, --wait-until-complete (boolean)
Wait until the command is complete. Valid values are [true, false]. Default is
'false'.
 
EXAMPLES
psm ADWC start-service -s ExampleInstance

But…

So, the online help show it and I try it:

$ psm adwc start-service --service-name ADWC --output-format short -wc true
 
Error: Not Found.


Error 404--Not Found

From RFC 2068 Hypertext Transfer Protocol -- HTTP/1.1, 10.4.5 404 Not Found: “The server has not found anything matching the Request-URI. No indication is given of whether the condition is temporary or permanent.”

Unfortunately, this doesn’t work. Are those commands not supported yet – is that why we don’t find them in the documentation? Or maybe the opposite: they do not work, and rather than fix them, they were removed from the documentation. One thing I’m 100% sure of: start-service and stop-service are the most useful commands for a CLI giving easy access to an hourly billed service, and I want them to work. And it is Python, JSON and HTML – nothing hidden there.

Hack Fix

The error message is about no matching URL. PSM metadata is stored in your user directory (~/.psm/data on Linux), with one JSON file for each Oracle platform service. Having a look at the URIs in ADWC.json, the bug is obvious:

$ jq . ~/.psm/data/ADWC.json | grep uri
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/scale",
"uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/start",
"uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/stop",
"uri": "/paas/core/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restart",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups/{backupId}",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/backups",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups/{jobId}",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/restoredbackups",
"uri": "/paas/api/v1.1/instancemgmt/{identityDomainId}/services/{serviceType}/instances/{serviceName}/healthcheck",
"uri": "/paas/api/v1.1/activitylog/{identityDomainId}/job/{jobId}",
"uri": "/paas/api/v1.1/activitylog/{identityDomainId}/filter",

Let’s remove this ‘/core’ from the URIs:

sed -ie 's/\/core//g' ~/.psm/data/ADWC.json
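
A quick sanity check with the same jq filter as above should now show the three operational URIs without the ‘/core’ element:

jq . ~/.psm/data/ADWC.json | grep uri | grep -E 'start|stop|restart'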

And run the start-service again:

$ psm adwc start-service -s ADWC -wc true
 
Message: Job submitted successfully for start of service/system
Job ID: 25617877
Waiting for the job to complete... (it cannot be cancelled)

Here it is.

As my laptop is on Windows, where I use Cygwin, I have set up two icons with:
C:\cygwin64\bin\mintty.exe -w min /usr/bin/psm adwc stop-service -s ADWC -wc true
C:\cygwin64\bin\mintty.exe -w min /usr/bin/psm adwc start-service -s ADWC -wc true

Start/Stop time

[Screenshot: start/stop timings graph]
This ADWC service is a PDBaaS. Starting and stopping it is as easy as opening and closing a pluggable database. Here are the timestamps after starting and stopping in a loop (with a graph over 24 loops).

The startup time is around 30 seconds. The stop time is about a minute. Really easy to use.


$ psm adwc activities -s ADWC -l 50
Operation Type Status Start Time End Time
START_SERVICE SUCCEED 2018-04-29T14:30:36.888+0000 2018-04-29T14:31:21.563+0000
STOP_SERVICE SUCCEED 2018-04-29T14:27:26.551+0000 2018-04-29T14:27:35.610+0000
STOP_SERVICE SUCCEED 2018-04-29T14:25:22.172+0000 2018-04-29T14:25:51.586+0000
START_SERVICE SUCCEED 2018-04-29T14:20:47.957+0000 2018-04-29T14:21:38.131+0000
STOP_SERVICE SUCCEED 2018-04-29T14:08:09.409+0000 2018-04-29T14:08:48.125+0000
START_SERVICE SUCCEED 2018-04-29T14:07:24.892+0000 2018-04-29T14:08:08.244+0000
STOP_SERVICE SUCCEED 2018-04-29T14:04:57.566+0000 2018-04-29T14:05:27.458+0000
START_SERVICE SUCCEED 2018-04-29T14:03:51.035+0000 2018-04-29T14:04:34.108+0000
STOP_SERVICE SUCCEED 2018-04-29T14:03:17.701+0000 2018-04-29T14:03:47.262+0000
START_SERVICE SUCCEED 2018-04-29T14:02:00.944+0000 2018-04-29T14:02:50.978+0000
STOP_SERVICE SUCCEED 2018-04-29T14:00:56.990+0000 2018-04-29T14:01:29.567+0000
START_SERVICE SUCCEED 2018-04-29T13:59:52.898+0000 2018-04-29T14:00:39.373+0000
STOP_SERVICE SUCCEED 2018-04-29T13:59:19.380+0000 2018-04-29T13:59:49.011+0000
START_SERVICE SUCCEED 2018-04-29T13:58:15.594+0000 2018-04-29T13:58:58.937+0000
STOP_SERVICE SUCCEED 2018-04-29T13:57:42.355+0000 2018-04-29T13:58:11.845+0000
...

An easy command line with no interactive password prompt, the option to wait for completion, and fast operations: this gives a great user experience for this service. The only problem is when you play with several cloud accounts. I’ll show an idea in the next post.

 

This article, ADWC: start/stop with PSM Command Line Interface, appeared first on Blog dbi services.

ADWC: Creation of Autonomous Data Warehouse Cloud service

You want to try the Autonomous Data Warehouse Cloud Service? That’s easy. Here is a step-by-step guide.

Cloud Credits

[Screenshot: Cloud Credits balance]
First, you need Cloud Credits. You may have bought them (any recent negotiation with Oracle Sales, even for on-premises, involves some Cloud Credits). You can have a free trial with $300 of Cloud Credits available for 1 month. To get another month, you need a different e-mail address and a different credit card number (not charged). It is quite easy to have different e-mail addresses, and your bank may provide virtual credit cards where the number changes each time. Or you may have the $5000 of Cloud Credits available for 1 year from the Education program. I got those thanks to the ACE Director program.

Update 01-MAY-2018 – There are also the $500 credits from the “white glove” program – you can ask your Sales representative.

In all cases you will be able to test the service without spending too many credits, because:

  • This service is not expensive ($2.5 per OCPU per hour in Pay As You Go)
  • It is very easy to start and stop the service, and then pay only for the hours when you connect
  • If you choose ‘Bring Your Own License’ at creation, the price per OCPU per hour is only $0.48 (but be sure that your
    existing licenses cover it – see https://cloud.oracle.com/en_US/datawarehouse/pricing)
  • And finally, during the trial promotion, the credits are consumed at a discounted rate
    (after 9 hours of usage, I had used less than $1)

OCI Account

[Screenshot: OCI account Sign-In page]
The first generation of Oracle Cloud is now called ‘OCI Classic’, and you can recognize it when connecting because the Sign-In page mentions ‘Traditional Cloud Account’. You cannot access ADWC with this kind of account.

You need access to OCI (Oracle Cloud Infrastructure – the version 2 of the Oracle Cloud).
If, when signing in, you are welcomed by this guy looking at his phone, you are in the right place. I’m always curious about how they choose an image for a page used every day, and for several years. The oracle.com login page is easy, with the blurry headquarters shot. For the OCI account, they chose “Man On Smart Phone – Young Business Man Texting In Airport – Casual Urban Professional Businessman Using Smartphone App Smiling Happy Inside Office Building Or Airport” from the Adobe image stock.

Ashburn

[Screenshot: data region selection]
For the moment, the ADWC service is available only in the Ashburn Cloud Center – not yet in Europe (but planned for Frankfurt). You can see the regions here: https://cloud.oracle.com/data-regions. Then, when you receive your access to the Oracle Cloud Services, choose the Ashburn Data Center.

Update 01-MAY-2018 – It seems that the service is available in Frankfurt.

Create Instance

[Screenshots: Create Instance dialogs]
The instance creation is easy and fast. It will create a Pluggable Database (PDB) in the Oracle Cloud CDB. You provide a name and an ADMIN password (be careful, the rule is at least 12 characters), which is the password you’ll use to connect as the ADMIN user. You can change it later and add new users. The Shape is different from the DBaaS here: you define the number of threads you want to use (it actually sets CPU_COUNT for the PDB) and the size of the PDB datafiles. You can change both later with Scale Up/Down.
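
As a quick (hedged) check once connected, you can verify that the shape maps to the PDB parameter – a sketch, assuming the ADMIN user is allowed to read v$parameter:

-- the number of threads chosen at creation should show up here
select name, value
from   v$parameter
where  name = 'cpu_count';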

PaaS Service Manager Command Line Interface

You can also create an ADWC service from the command line. I’ll show how to install and use PSM (the PaaS Service Manager Command Line Interface). Rodrigo Jorge has a nice description for DBaaS on his blog.

So, you download PSM:

curl -X GET -u my.cloud.account@me.com:MyP@ssw0rd -H X-ID-TENANT-NAME:idcs-31bbd63c3cb9466cb8a96f627b6b6116 https://psm.us.oraclecloud.com/paas/core/api/v1.1/cli/idcs-31bbd63c3cb94... -o psmcli.zip
 
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
100 86945 0 86945 0 0 16806 0 --:--:-- 0:00:05 --:--:-- 23820

[Screenshot: the tenant name in the Sign-In page URL]

The user:password are those you use in the account Sign-In.

The ‘Tenant Name’ you get from the URL of this Man-On-Smart-Phone Sign-In web page. You will also see it mentioned later as ‘Identity Domain’ (like in OCI-Classic). If you have a doubt, create the service from the web console, click on it, and you will see the Tenant Name.

[Screenshot: python3 packages in Cygwin]
So, you have a zip file; do not unzip it. It is a Python 3 module and you install it with ‘pip3’. You can do that on any OS.

I have the strange idea of running my laptop on Windows with Cygwin for command-line stuff. Here are the python3 packages I have here.

Here is the installation of PSM:

pip3 install -U psmcli.zip
 
Processing ./psmcli.zip
...

And now the nice thing is that you configure your credentials once, with ‘psm setup’. You provide the user, password and tenant name (which is called ‘identity domain’ here):

$ psm setup
 
Username: my.cloud.account@me.com
Password: MyP@ssw0rd
Retype Password: MyP@ssw0rd
Identity domain: idcs-31bbd63c3cb9466cb8a96f627b6b6116
Region [us]:
Output format [short]:
Use OAuth? [n]:
----------------------------------------------------
'psm setup' was successful. Available services are:
 
o ADWC : Oracle Autonomous Data Warehouse Cloud
o ADWCP : Oracle Autonomous Data Warehouse Cloud Platform
o ANALYTICS : Oracle Analytics Cloud
o APICS : Oracle API Platform Cloud Service
o APICatalog : Oracle API Catalog Service
...

ADWC is on the list. You are ready to manage ADWC instances, such as create one:


$ psm adwc create-service -c - <<<' {
"serviceName": "ADWCx",
"adminPassword": "Ach1z00dAch1",
"numCpus": "1",
"storageCapacity": "1",
"serviceLevel": "PAAS",
"serviceVersion": "18.1.1",
"managedSystemType": "oracle",
"enableNotification": true,
"notificationEmail": "notifocations@me.com",
"isBYOL": true
} '
 
Message: Submitted job to create service [ADWCx] in domain [idcs-31bbd63c3cb9466cb8a96f627b6b6116].
Job ID: 25509908

We can check the status of the job:
$ psm adwc activities --service-name ADWC
Operation Type Status Start Time End Time
CREATE_SERVICE RUNNING 2018-04-28T19:57:31.056+0000 N/A

And a few minutes later the service is there:
$ psm adwc activities --service-name ADWC
Operation Type Status Start Time End Time
CREATE_SERVICE SUCCEED 2018-04-28T19:57:31.056+0000 2018-04-28T19:59:51.900+0000

We will see how to connect in a future post. It is very easy from SQL Developer or SQLcl.
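
Just to give an idea – a hedged sketch, assuming you have downloaded the client credentials wallet from the service console (the wallet path and the ‘_high’ service suffix here are assumptions) – connecting from SQLcl looks roughly like:

sql /nolog
SQL> set cloudconfig /path/to/wallet_ADWCx.zip
SQL> connect admin@adwcx_high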

You can delete the service when you don’t need it anymore:

psm adwc delete-service --service-name ADWC

To save credits, you want an easy way to stop and start the service. That’s for the next post, as PSM requires a little hack there.

 

This article, ADWC: Creation of Autonomous Data Warehouse Cloud service, appeared first on Blog dbi services.

“Let’s Talk Database” is Back !! Canberra/Sydney/Melbourne

Due to popular demand, I’ve been asked by Oracle to again run some “Let’s Talk Database” events this month. Dates and venues are as follows: Wednesday, 23 May – Canberra (Cliftons Canberra, 10 Moore St): Registration Link. Tuesday, 29 May – Melbourne (Oracle Melbourne Office, 417 St Kilda Road): Registration Link. Wednesday, 30 May – Sydney […]