Oakies Blog Aggregator

JDBC & the Oracle Database: if you want Transparent Application Failover you need the OCI driver

This is the second article in the series of JDBC articles I’m about to publish. It covers an old technology that’s surprisingly often found in use: Transparent Application Failover (TAF). It’s a client side feature for clustered Oracle databases allowing sessions (and to some extent, select statements) to fail over to a healthy node from a crashed instance.

I would wager that you probably don’t want to use Transparent Application Failover in (new) Java code. There are many better ways to write code these days. More posts to follow with my suggestions ;)

Well, then, why bother writing this post? Simple! There is a common misconception about the requirement: since Transparent Application Failover relies on the Oracle client libraries, you cannot use it with the thin driver. The little tool I have written demonstrates exactly that. And besides, I had the code more or less ready, so why not publish it?

Prerequisites for running the demo code

My Java code has been updated to work with Oracle 19c. I am also using an Oracle 19c RAC database as the back-end.

Preparing the client

Since I am going to use the Secure External Password Store again, you need to prepare the client as per my previous article. The only difference this time is that I need a sqlnet.ora file in my client’s tns directory. Continuing the previous example I created the file in /home/martin/tns, and it contains the following information:

WALLET_LOCATION =
  (SOURCE =(METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = /home/martin/tns)
    )
  )

SQLNET.WALLET_OVERRIDE = TRUE  

When you are creating yours, make sure to update the path according to your wallet location.

Since I’m connecting to a RAC database I need to change the entry in tnsnames.ora as well. This requires the application-specific service to be created and started, a bit of a chicken-and-egg problem. The setup of the database service is explained in the next section. Here is my tnsnames.ora entry:

taf_svc =
 (DESCRIPTION = 
  (ADDRESS = (PROTOCOL = tcp)(HOST = rac19pri-scan.example.com)(PORT = 1521))
  (CONNECT_DATA=
    (SERVICE_NAME = taf_svc) 
     (FAILOVER_MODE=(TYPE=select)(METHOD=basic)))
  )

Note that setting FAILOVER_MODE in the connect descriptor isn’t the preferred way to set TAF properties. It’s better to do that at the service level, see below.

Preparing the database service

Oracle strongly discourages the use of the default service name except for DBA tasks. As I’m a good citizen I’ll create a separate service for my little TAF application.

You need to connect to the database server and use srvctl add service to create a service. I used the following properties:

[oracle@rac19pri1]$ srvctl add service -db NCDB -service taf_svc \
-role primary -policy automatic -clbgoal long \
-failovermethod basic -failovertype session \
-preferred "NCDB1,NCDB2"

You have to set at least the preferred nodes and the connect time load balancing goal. If you want to ensure anyone connecting to the TAF service actually makes use of it regardless of the tnsnames settings, you also need to set failovertype and failovermethod.

Don’t forget to start the service after creating it! Once the service is created and running, let’s try to use it to see if all TAF properties are available. To do so, I connected to taf_svc in my first session. I then checked the status after connecting as SYSTEM in a second session:

SQL> select inst_id, failover_method, failover_type, failed_over, service_name
  2  from gv$session where username = 'MARTIN'
  3  /

   INST_ID FAILOVER_M FAILOVER_TYPE FAI SERVICE_NAME
---------- ---------- ------------- --- ---------------
         1 BASIC      SESSION       NO  taf_svc

SQL> show user
USER is "SYSTEM"

Running the code

The complete code is available on GitHub in my java-blogposts repository. After downloading it to your machine, change into the taf-demo-1 directory and trigger the compile target using mvn compile.

With the code built, you can run it easily on the command line. First off, try the thin driver.
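
Both runs below execute the same code; under the hood the switch between them is just the JDBC URL prefix. Here is a minimal sketch of the idea (my own illustration, not the actual repository code; it assumes the thin driver finds tnsnames.ora via the oracle.net.tns_admin system property and that credentials come from the wallet):

import java.sql.Connection;
import java.sql.DriverManager;

public class DriverChoice {

    // "thin" uses the pure-Java driver; "oci" calls into the Oracle client
    // libraries via JNI, which is the part that makes TAF possible.
    public static Connection connect(String driverType) throws Exception {
        String prefix = "oci".equals(driverType) ? "oci" : "thin";
        // taf_svc resolves through tnsnames.ora; no username/password here
        // because the Secure External Password Store provides them.
        return DriverManager.getConnection("jdbc:oracle:" + prefix + ":@taf_svc");
    }
}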

JDBC Thin Driver

I used this command to start the execution using the thin driver:

$ java -cp /home/martin/java/libs/ojdbc10.jar:/home/martin/java/libs/oraclepki.jar:/home/martin/java/libs/osdt_cert.jar:/home/martin/java/libs/osdt_core.jar:target/taf-example-1-0.0.1-SNAPSHOT.jar de.martin.tafDemo.App thin

This should connect you to the database, but not with the desired effect.

About to start a demonstration using Transparent Application Failover
Driver Name: Oracle JDBC driver
Driver Version: 19.7.0.0.0
Connection established as MARTIN


inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
...

As you can easily spot there isn’t any trace of TAF in the output. Not surprisingly, the code crashes as soon as instance 2 fails:

inst_id: 2 sid: 00264 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00049 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
inst_id: 2 sid: 00049 failover_type: NONE       failover_method: NONE       failed_over: NO    module: TAF Demo action: thin driver
SQLException while trying to get the session information: java.sql.SQLRecoverableException: No more data to read from socket
[martin@appserver taf-demo-1]$

There are potentially ways around that, but I have yet to see an application implement them. In other words, in most cases the following equation holds: instance crash = application crash.
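
If you wanted to build such a workaround yourself, it might look roughly like this minimal sketch (hypothetical, not part of the demo code): catch the recoverable exception and open a fresh session.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLRecoverableException;

public class ManualReconnect {

    // Retry getting a connection when the thin driver reports a recoverable
    // error (for example after an instance crash). Any open transaction and
    // session state is lost; the application has to redo that work itself.
    static Connection connectWithRetry(String url, int maxAttempts) throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                return DriverManager.getConnection(url);
            } catch (SQLRecoverableException e) {
                if (attempt >= maxAttempts) throw e;
                Thread.sleep(3000); // crude back-off before the next attempt
            }
        }
    }
}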

JDBC OCI driver

Running the same code using the OCI driver should solve that problem. You will need an Oracle 19.7.0 client installation for this to work, and you have to set LD_LIBRARY_PATH as well as TNS_ADMIN in the shell:

$ export TNS_ADMIN=/home/martin/tns
$ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/path/to/client/installation

Once these are set, start the application:

$ java -cp /home/martin/java/libs/ojdbc10.jar:/home/martin/java/libs/oraclepki.jar:/home/martin/java/libs/osdt_cert.jar:/home/martin/java/libs/osdt_core.jar:target/taf-example-1-0.0.1-SNAPSHOT.jar de.martin.tafDemo.App oci
About to start a demonstration using Transparent Application Failover
Driver Name: Oracle JDBC driver
Driver Version: 19.7.0.0.0
Connection established as MARTIN


inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 1 sid: 00035 failover_type: SELECT     failover_method: BASIC      failed_over: NO    module: TAF Demo action: oci driver
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
inst_id: 2 sid: 00275 failover_type: SELECT     failover_method: BASIC      failed_over: YES   module: java@appserver (TNS V1-V3) action: null
^C[martin@appserver taf-demo-1]$ 

You should notice a seamless transition from node 1 to node 2. As you can imagine this is the simplest example, but it should convey the message as intended. For more details about TAF and RAC, including the use of “select” failover and DML support, I suggest you have a look at Pro Oracle 11g RAC on Linux, chapter 11.

Summary

Contrary to what one might think, using TAF with the JDBC thin driver doesn’t protect a session from instance failure. The only way to protect a (fat client) session is to make use of the Oracle Call Interface.

Then again, TAF is a very mature solution and there might be better ways of working with RAC. Connection pools based on Oracle’s own Universal Connection Pool look like the way forward. Newer technologies, such as (Transparent) Application Continuity are better suited to meet today’s requirements.
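
As a teaser for the follow-up posts, here is a minimal UCP sketch (assuming ucp.jar and the JDBC driver are on the classpath; the pool sizes are arbitrary):

import java.sql.Connection;
import oracle.ucp.jdbc.PoolDataSource;
import oracle.ucp.jdbc.PoolDataSourceFactory;

public class UcpExample {
    public static void main(String[] args) throws Exception {
        PoolDataSource pds = PoolDataSourceFactory.getPoolDataSource();
        pds.setConnectionFactoryClassName("oracle.jdbc.pool.OracleDataSource");
        pds.setURL("jdbc:oracle:thin:@taf_svc");
        pds.setInitialPoolSize(2);
        pds.setMaxPoolSize(10);
        // Borrow and return connections instead of holding a single session,
        // so an instance failure only costs the affected connections.
        try (Connection conn = pds.getConnection()) {
            System.out.println("Connected as " + conn.getMetaData().getUserName());
        }
    }
}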

Faster DISTINCT operations in 19c

If I gave you a telephone book and asked you to tell me how many distinct street names are present in the book, then the most likely thing you would do is... wave your mobile phone at me and ask what a “telephone book” is :-) But assuming you’re old enough to remember telephone books, you’ll probably tackle the task by sorting the entire set in street name order, then working your way down the list, incrementing your count of distinct street names every time the name changes.

In database terms, a little demo of that approach is below:



SQL> create table telephone_book
  2  as select 'Street'||trunc(dbms_random.value(1,30)) street
  3  from dual
  4  connect by level <= 100;

Table created.

SQL> select street
  2  from   telephone_book
  3  order by 1;

STREET
----------------------------------------------
Street1             == 1st street
Street1
Street10            == 2nd street 
Street10
Street11            == 3rd street 
Street11
Street11
Street11
Street11
Street11
...
...
Street8
Street9             == 28th street
Street9
Street9
Street9

100 rows selected.

which results in our 28 distinct streets.

But what if we were not allowed to sort the data? Perhaps a more realistic question is: what if it was ridiculously prohibitive in terms of time and effort to sort the data? I don’t know about you, but I have better things to do with my weekend than sort the telephone book! :-)

From my demo above, I can see that the highest possible number of distinct streets I could have is 30. So rather than sort the data, I can create a simple 30-character string, where each character represents the occurrence of the corresponding street. My string would start off as (hyphens added for clarity)

NNNNNNNNNN-NNNNNNNNNN-NNNNNNNNNN

and as I read the telephone book, I simply change the string for each street I encounter. If the street for the first row is “Street7”, I change my string to be:

NNNNNNYNNN-NNNNNNNNNN-NNNNNNNNNN

If the second row is “Street22” then the string becomes:

NNNNNNYNNN-NNNNNNNNNN-NYNNNNNNNN

and so on until I have read the entire table. At the end of the exercise, I just count the number of “Y” and that is the distinct count of streets. I never had to sort the data. Here’s a simple code demo of that process:


SQL> set serverout on
SQL> declare
  2    bits varchar2(32) := rpad('N',32,'N');
  3    street_no int;
  4  begin
  5    for i in ( select rownum idx, t.* from telephone_book t )
  6    loop
  7      street_no := to_number(ltrim(i.street,'Street'));
  8      bits := substr(bits,1,street_no-1)||'Y'||substr(bits,street_no+1);
  9      dbms_output.put_line('After row '||i.idx||' map='||bits);
 10    end loop;
 11    dbms_output.put_line('Final='||bits);
 12    dbms_output.put_line('ndv='||(length(bits)-length(replace(bits,'Y'))));
 13  end;
 14  /
After row 1 map=NNNNNNNNNNNNNNNNNNNNNYNNNNNNNNNN
After row 2 map=NNNNNNNNNNNNNNNNNNYNNYNNNNNNNNNN
After row 3 map=NNNNNNNNYNNNNNNNNNYNNYNNNNNNNNNN
After row 4 map=NNNNNNNNYNNNNYNNNNYNNYNNNNNNNNNN
...
...
After row 97 map=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
After row 98 map=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
After row 99 map=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
After row 100 map=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
Final=YYYYYYYYYYYYYYYYYNYYYYYYYYYYYNNN
ndv=28

PL/SQL procedure successfully completed.

and we can verify the result with a standard SQL query:


SQL> select count(distinct street)
  2  from   telephone_book;

COUNT(DISTINCTSTREET)
---------------------
                   28

Of course, such a simple solution masks a lot of complexity in implementing something like this for an arbitrary set of data.

  • How do we know the upper limit on the potential number of distinct rows?
  • How do we rapidly count the number of “Y” or “hits” once we have scanned the data?
  • Have we just shifted the problem to an enormous memory structure?

You’ll be reassured to know that a lot of thought has gone into tackling these issues, turning the simple demo above into a genuinely robust implementation in version 19c of the database. Many queries requiring distinct counts can now be satisfied without an expensive sort.
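
If you prefer to see the idea outside of SQL, here is the same counting technique as a minimal Java sketch (my own illustration of the concept, not how the database implements it internally):

import java.util.BitSet;

public class Ndv {

    // One bit per possible street number: set the bit when we see the value,
    // count the set bits at the end. No sort required, and memory is bounded
    // by the number of *possible* values, not the number of rows.
    public static int distinctStreets(int[] streetNumbers, int maxStreets) {
        BitSet seen = new BitSet(maxStreets + 1);
        for (int s : streetNumbers) {
            seen.set(s);
        }
        return seen.cardinality();
    }

    public static void main(String[] args) {
        System.out.println(distinctStreets(new int[] {7, 22, 7, 19}, 30)); // prints 3
    }
}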

Here’s the video walking through exactly what we’ve done in 19c, and what some of the benefits are:

Video : Real-Time Statistics in Oracle Database 19c

In today’s video we’ll give a demonstration of Real-Time Statistics in Oracle Database 19c.

This video is based on the following article.

This is essentially a follow-on from the previous video and article.

The star of today’s video is Ludovico Caldara, who is rocking a rather “different” look. I’ll leave it to you to decide if it’s an improvement or not!

Oracle Autonomous JSON Database (AJD) : The Big Reveal


The Autonomous JSON Database (AJD) was announced during the Oracle Developer Live (#OracleDevLive) event last night. This was accompanied by a blog post announcement here.

I was on a briefing the night before where we were told about this announcement in advance. Later I found out the service had been live since Tuesday, but they were waiting for this event for the big reveal. As soon as I knew it was live I fired up an instance on my free tier account, but I had to wait for the announcement before I released the article. You can see what I tried here.

If it looks familiar, that’s because it is. The Autonomous JSON Database is essentially an Autonomous Transaction Processing (ATP) instance with some restrictions, that you get to run for less money. You can “convert” it to full ATP at the click of a button if you want to. Obviously, the price changes then.

If you are considering using the Autonomous JSON Database service you will need to learn more about SODA (Simple Oracle Document Access). I’ve written a few things about this over the years.

There are SODA APIs for a bunch of languages. You can do all of this on-prem using Oracle REST Data Services (ORDS), but it comes ready to go on AJD and ATP.

So now it’s here, and it’s available on the Oracle Cloud Free Tier, what have you got to lose?

Cheers

Tim…

The post Oracle Autonomous JSON Database (AJD) : The Big Reveal first appeared on The ORACLE-BASE Blog.



The Classic Editing Experience is Moving, Not Leaving

With the introduction of the Block editor, the WordPress.com Classic Editor was set for retirement at the beginning of June. We pushed that back a bit to make time for more changes that ease the transition to the Block editor — and now it’s time!

The WordPress editor empowers you to create pages and posts by layering multiple blocks on top of each other. It’s versatile, intuitive, and boasts exciting new features, including:

  • Over 100 content blocks available for publishing.
  • A growing collection of block patterns.
  • Dozens of beautiful built-in page templates.
  • Styles you can customize directly within the editor.

If you’d rather stick with the Classic editor experience — the one you used before we introduced the WordPress.com editor a few years ago — no worries. With the new and improved Classic block, you have the best of both editors: the flexibility and stability of the Block editor, and the Classic editor interface you know.

From August 11 on, all WordPress.com accounts will start to switch from the Classic editor to the new Block editor. It will happen in phases, and you’ll get an email to let you know to expect the change.

Here’s what you need to know if you’re a fan of the Classic editor experience.

Why the change?

There are exciting new features in the pipeline that require the new WordPress editor. It’s not technically possible to retrofit them into the older, Classic editor, and we want to make sure everyone can take advantage of them as they become available. With all WordPress.com users publishing with the Block editor, all WordPress.com users always have the latest and greatest.

Can I create simple blog posts the way I always have?

Yes, with the Classic block! It provides an editing experience that mimics the Classic editor — the same options and tools, in the same spot.

To use it, add a Classic block to your post or page, then add and edit both text and media right inside it.

Also ….

The Block editor has updates to bring in some of your favorite classic features, like a clean editing screen. The Block editor displays pop-up options and menus as you type — they give you lots of control, but you might not always want them visible over your content. Turn on Top toolbar mode to keep them pinned to the top of the screen. It’s a great way to experience the full flexibility of the block editor while still allowing distraction-free writing.

What about editing posts and pages already created in the Classic editor?

Many of you have lots of pages and posts already created and published with the Classic editor. Previously, editing them in the Block editor led to a lot of prompts asking you to convert the content to blocks. Now there’s a single “Convert to blocks” menu item to take care of it in one go.

[Screenshot: the “Convert to blocks” option in the block toolbar]

You can use this button to upgrade your posts and pages to block-based content at your leisure.

Can I combine the Classic block with other blocks?

For the best editing experience, particularly if you use the mobile app to edit your posts, we recommend just having a single Classic block on each post or page.

But, moving everyone to the block editor gives you the best of both worlds. You can continue writing and editing some of your posts with the simple Classic interface — but when you want to experiment with more complex layouts and functionality you can create a new post and play with the power and flexibility of all the other blocks on offer. For example, have you ever wanted an easy way to show off your favorite podcast?


Look out for the email letting you know when to expect the Block editor switch! In the meantime, learn more about working with the Block editor and the Classic block.

WhatsApp: A New, Convenient Way for Your Customers to Contact You

The world is mobile, and your visitors and customers expect to be able to easily contact you using their mobile device. With WordPress.com’s new WhatsApp button, you can provide a one-click, secure way for people to open WhatsApp, with your phone number and a message pre-filled.

[Screenshot: insert the WhatsApp button with your phone number and a custom message pre-filled]

Adding the button is easy. In the block editor, create a new block and search for WhatsApp:

[Screenshot: the WhatsApp block in the block inserter]

The WhatsApp button is available now to all WordPress.com sites on a Premium, Business, or eCommerce plan. You can upgrade your site to one of these plans, try it out for 30 days, and if you’re not satisfied with your upgrade we’ll grant you a full refund.

If you decide to cancel your paid plan after you’ve already accepted the free custom domain, the domain is yours to keep. We simply ask that you cover the costs for the domain registration.

We hope the WhatsApp button helps you connect with your customers and visitors in new ways. Give it a try today!

Oracle ADB: rename the service_name connect_data

By Franck Pachot

Since Aug. 4, 2020 we have had the possibility to rename an Autonomous Database (ATP, ADW or AJD – the latest being the JSON database) on shared Exadata infrastructure (what was called ‘serverless’ last year, and which is a PDB in a public CDB). As the PDB name is internal, we reference the ADB by its database name, which is actually part of the service name.

I have an ATP database that I created in the Oracle Cloud Free Tier a few months ago.
I have downloaded the region and instance wallet to be used by client connections:


SQL> host grep _high /var/tmp/Wallet_DB202005052234_instance/tnsnames.ora

db202005052234_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202005052234_high.atp.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

This is the instance wallet, which references only this database (db202005052234).


SQL> host grep _high /var/tmp/Wallet_DB202005052234_region/tnsnames.ora

db202005052234_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202005052234_high.atp.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))
db202003061855_high = (description= (retry_count=20)(retry_delay=3)(address=(protocol=tcps)(port=1522)(host=adb.eu-frankfurt-1.oraclecloud.com))(connect_data=(service_name=jgp1nyc204pdpjc_db202003061855_high.adwc.oraclecloud.com))(security=(ssl_server_cert_dn="CN=adwc.eucom-central-1.oraclecloud.com,OU=Oracle BMCS FRANKFURT,O=Oracle Corporation,L=Redwood City,ST=California,C=US")))

This also contains the service of my other database in the same region.

I connect using this wallet:


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
Connected.

SQL> select name,network_name,creation_date,pdb from v$services;

                                                          NAME                                                   NETWORK_NAME          CREATION_DATE                               PDB
______________________________________________________________ ______________________________________________________________ ______________________ _________________________________
JGP1NYC204PDPJC_DB202005052234_high.atp.oraclecloud.com        JGP1NYC204PDPJC_DB202005052234_high.atp.oraclecloud.com        2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_tpurgent.atp.oraclecloud.com    JGP1NYC204PDPJC_DB202005052234_tpurgent.atp.oraclecloud.com    2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_low.atp.oraclecloud.com         JGP1NYC204PDPJC_DB202005052234_low.atp.oraclecloud.com         2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_tp.atp.oraclecloud.com          JGP1NYC204PDPJC_DB202005052234_tp.atp.oraclecloud.com          2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234
jgp1nyc204pdpjc_db202005052234                                 jgp1nyc204pdpjc_db202005052234                                 2020-08-13 09:02:02    JGP1NYC204PDPJC_DB202005052234
JGP1NYC204PDPJC_DB202005052234_medium.atp.oraclecloud.com      JGP1NYC204PDPJC_DB202005052234_medium.atp.oraclecloud.com      2019-05-17 20:53:03    JGP1NYC204PDPJC_DB202005052234

Here are all the services registered: the LOW/MEDIUM/HIGH/TP/TP_URGENT for my connections and the PDB name one.

Now from the Cloud Console I rename the database:

[Screenshot: renaming the database in the Cloud Console]

You can see that the “display name” (DB 202008131439) didn’t change but the “Database name” has been renamed (from “DB202008131439” to “FRANCK”).


SQL> select name,network_name,creation_date,pdb from v$services;

Error starting at line : 1 in command -
select name,network_name,creation_date,pdb from v$services
Error at Command Line : 1 Column : 1
Error report -
SQL Error: No more data to read from socket
SQL>

My connection has been canceled. I need to connect again.


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
Aug 13, 2020 10:13:41 AM oracle.net.resolver.EZConnectResolver parseExtendedProperties
SEVERE: Extended settings parsing failed.
java.lang.RuntimeException: Unable to parse url "/var/tmp/Wallet_DB202005052234_instance:1521/DB202005052234_tp?TNS_ADMIN"
        at oracle.net.resolver.EZConnectResolver.parseExtendedProperties(EZConnectResolver.java:408)
        at oracle.net.resolver.EZConnectResolver.parseExtendedSettings(EZConnectResolver.java:366)
        at oracle.net.resolver.EZConnectResolver.parse(EZConnectResolver.java:171)
        at oracle.net.resolver.EZConnectResolver.(EZConnectResolver.java:130)
        at oracle.net.resolver.EZConnectResolver.newInstance(EZConnectResolver.java:139)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:669)
        at oracle.jdbc.driver.OracleDriver.connect(OracleDriver.java:562)
        at java.sql.DriverManager.getConnection(DriverManager.java:664)
        at java.sql.DriverManager.getConnection(DriverManager.java:208)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.connect(SQLPLUS.java:5324)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.logConnectionURL(SQLPLUS.java:5418)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.logConnectionURL(SQLPLUS.java:5342)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.getConnection(SQLPLUS.java:5154)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.runConnect(SQLPLUS.java:2414)
        at oracle.dbtools.raptor.newscriptrunner.SQLPLUS.run(SQLPLUS.java:220)
        at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.runSQLPLUS(ScriptRunner.java:425)
        at oracle.dbtools.raptor.newscriptrunner.ScriptRunner.run(ScriptRunner.java:262)
        at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:344)
        at oracle.dbtools.raptor.newscriptrunner.ScriptExecutor.run(ScriptExecutor.java:227)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.process(SqlCli.java:410)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.processLine(SqlCli.java:421)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.startSQLPlus(SqlCli.java:1179)
        at oracle.dbtools.raptor.scriptrunner.cmdline.SqlCli.main(SqlCli.java:502)

  USER          = admin
  URL           = jdbc:oracle:thin:@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
  Error Message = Listener refused the connection with the following error:
ORA-12514, TNS:listener does not currently know of service requested in connect descriptor
  USER          = admin
  URL           = jdbc:oracle:thin:@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance:1521/DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance
  Error Message = IO Error: Invalid connection string format, a valid format is: "host:port:sid"

Warning: You are no longer connected to ORACLE.
SQL>

The service is not known, which makes sense because the rename of the database is actually a rename of the services.

Oracle documentation says that we have to download the wallet again after a rename of the database. But that’s not very agile. Let’s rename the service in the tnsnames.ora instead:


SQL> host sed -ie s/_db202005052234/_FRANCK/g /var/tmp/Wallet_DB202005052234_instance/tnsnames.ora

This changes only the SERVICE_NAME in CONNECT_DATA, not the tnsnames.ora alias, so I can keep using the same connection string.


SQL> connect admin/"TheAnswer:=42"@DB202005052234_tp?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance

SQL> select name,network_name,creation_date,pdb from v$services;

                                                  NAME                                           NETWORK_NAME          CREATION_DATE                       PDB
______________________________________________________ ______________________________________________________ ______________________ _________________________
JGP1NYC204PDPJC_FRANCK_high.atp.oraclecloud.com        JGP1NYC204PDPJC_FRANCK_high.atp.oraclecloud.com        2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_tp.atp.oraclecloud.com          JGP1NYC204PDPJC_FRANCK_tp.atp.oraclecloud.com          2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_medium.atp.oraclecloud.com      JGP1NYC204PDPJC_FRANCK_medium.atp.oraclecloud.com      2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
jgp1nyc204pdpjc_franck                                 jgp1nyc204pdpjc_franck                                 2020-08-13 10:05:58    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_low.atp.oraclecloud.com         JGP1NYC204PDPJC_FRANCK_low.atp.oraclecloud.com         2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK
JGP1NYC204PDPJC_FRANCK_tpurgent.atp.oraclecloud.com    JGP1NYC204PDPJC_FRANCK_tpurgent.atp.oraclecloud.com    2019-05-17 20:53:03    JGP1NYC204PDPJC_FRANCK

Using the new SERVICE_NAME is sufficient. As you can see above, some autonomous magic remains: the new services still have the old creation date.

Note that you should follow the documentation and download the wallet and change your connection string. There is probably a reason behind this. But autonomous or not, I like to understand what I do and I don’t see any reason for changing everything when renaming a service.
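
As a side note, the renamed service is equally reachable from plain JDBC. Here is a minimal sketch (assuming a 19c thin driver plus the oraclepki/osdt jars on the classpath; the TNS_ADMIN URL parameter has been supported since the 18.3 driver, and the password handling is up to you):

import java.sql.Connection;
import java.sql.DriverManager;

public class AdbConnect {
    public static void main(String[] args) throws Exception {
        // TNS_ADMIN as a URL parameter tells the thin driver where to find
        // tnsnames.ora and the wallet.
        String url = "jdbc:oracle:thin:@db202005052234_tp"
                + "?TNS_ADMIN=/var/tmp/Wallet_DB202005052234_instance";
        try (Connection conn = DriverManager.getConnection(url, "admin", args[0])) {
            System.out.println("Connected as " + conn.getMetaData().getUserName());
        }
    }
}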

The post Oracle ADB: rename the service_name connect_data appeared first on Blog dbi services.

Amazon DynamoDB: the cost of indexes

By Franck Pachot

That’s common to any data structure, whether it is RDBMS or NoSQL: indexes are good to accelerate reads, but they slow down the writes. This post explains the consequences of adding indexes in DynamoDB.

Secondary Indexes

What we call an index in DynamoDB is different from an index in an RDBMS. They have the same goal: store your data with some redundancy in order to have it physically partitioned, sorted, and clustered differently than the table, so as to optimize the performance for specific access patterns. It can be full redundancy (covering indexes) so that there is no need to look at the table, or partial redundancy (only the key values and values sufficient for accessing the table efficiently). The indexes are maintained automatically: when the table is updated, the index entries are maintained by the database engine. This can be synchronous, or asynchronous if eventual consistency is accepted.

The major difference is that a relational database separates the logical and physical implementation (Codd Rule 8: Physical Data Independence) for better agility: there is no change to make in the application code to access the table through one index or another. RDBMS have an optimizer (query planner) that selects the best access path for the query predicates. That was the topic of the previous post. But following the NoSQL spirit, AWS DynamoDB delegates this responsibility to the application code: an index will be used only when you explicitly query it.

Because DynamoDB tables are physically organized by the primary key (hash partitioning, with a local index when a sort key is defined), this KeySchema can be considered the primary index. Any additional index is then a secondary index: local, prefixed by the same hash key as the table, or global, prefixed by a different hash key than the table’s.

Table with no indexes


aws dynamodb create-table --table-name Demo \
 --billing-mode PROVISIONED --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
 --attribute-definitions AttributeName=P,AttributeType=S AttributeName=S,AttributeType=S \
 --key-schema AttributeName=P,KeyType=HASH AttributeName=S,KeyType=RANGE

I have created a HASH/RANGE table that is perfect to access with a single value for the attribute P, the partition key, and a single value or a range for S, the sort key.


{
    "TableDescription": {
        "TableArn": "arn:aws:dynamodb:eu-central-1:802756008554:table/Demo",
        "AttributeDefinitions": [
            {
                "AttributeName": "P",
                "AttributeType": "S"
            },
            {
                "AttributeName": "S",
                "AttributeType": "S"
            }
        ],
        "ProvisionedThroughput": {
            "NumberOfDecreasesToday": 0,
            "WriteCapacityUnits": 5,
            "ReadCapacityUnits": 5
        },
        "TableSizeBytes": 0,
        "TableName": "Demo",
        "TableStatus": "CREATING",
        "TableId": "b2a97f98-611d-451d-99ee-c3aab1129b30",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "P"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "S"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1597052214.276
    }
}

This is the output of the create-table command. I use small reserved capacity in my blog posts so that you can run the test on the AWS Free Tier without risk. I look at the metrics, for better understanding, not the response time which depends on many other factors (network latency, bursting, throttling,…). But of course, you will get the same on larger data sets.

In this table, I’ll insert items by batch from the following JSON (values are randomly generated for each call):


{
 "Demo": [
  {"PutRequest":{"Item":{"P":{"S":"4707"},"S":{"S":"23535"},"A0":{"S":"18781"},"A01":{"S":"10065"},"A02":{"S":"2614"},"A03":{"S":"7777"},"A04":{"S":"19950"},"A05":{"S":"30864"},"A06":{"S":"24176"},"A07":{"S":"22257"},"A08":{"S":"11549"},"A09":{"S":"28368"},"A10":{"S":"29095"},"A11":{"S":"23060"},"A12":{"S":"3321"},"A13":{"S":"30588"},"A14":{"S":"16039"},"A15":{"S":"31388"},"A16":{"S":"21811"},"A17":{"S":"10593"},"A18":{"S":"18914"},"A19":{"S":"23120"},"A20":{"S":"25238"}}}},
  {"PutRequest":{"Item":{"P":{"S":"4106"},"S":{"S":"15829"},"A0":{"S":"28144"},"A01":{"S":"9051"},"A02":{"S":"26834"},"A03":{"S":"1614"},"A04":{"S":"6458"},"A05":{"S":"1721"},"A06":{"S":"8022"},"A07":{"S":"49"},"A08":{"S":"23158"},"A09":{"S":"6588"},"A10":{"S":"17560"},"A11":{"S":"4330"},"A12":{"S":"17578"},"A13":{"S":"8548"},"A14":{"S":"57"},"A15":{"S":"27601"},"A16":{"S":"8766"},"A17":{"S":"24400"},"A18":{"S":"18881"},"A19":{"S":"28418"},"A20":{"S":"14915"}}}},
  ...
  {"PutRequest":{"Item":{"P":{"S":"27274"},"S":{"S":"8548"},"A0":{"S":"11557"},"A01":{"S":"28758"},"A02":{"S":"17212"},"A03":{"S":"17658"},"A04":{"S":"10456"},"A05":{"S":"8488"},"A06":{"S":"28852"},"A07":{"S":"22763"},"A08":{"S":"21667"},"A09":{"S":"15240"},"A10":{"S":"12092"},"A11":{"S":"25045"},"A12":{"S":"9156"},"A13":{"S":"27596"},"A14":{"S":"27305"},"A15":{"S":"22214"},"A16":{"S":"13384"},"A17":{"S":"12300"},"A18":{"S":"12913"},"A19":{"S":"20121"},"A20":{"S":"20224"}}}}
 ]
}

In addition to the primary key, I have attributes from A0 to A20. I put 25 items per call (that’s the maximum for DynamoDB) and my goal is to have many attributes that I can index later.


aws dynamodb batch-write-item --request-items file://batch-write.json \
 --return-consumed-capacity INDEXES --return-item-collection-metrics SIZE

This is the simple call for this batch insert, returning the consumed capacity on table and indexes:


aws dynamodb batch-write-item --request-items file://batch-write139.json --return-consumed-capacity INDEXES --return-item-collection-metrics SIZE
...
{
    "UnprocessedItems": {},
    "ItemCollectionMetrics": {},
    "ConsumedCapacity": [
        {
            "CapacityUnits": 25.0,
            "TableName": "Demo",
            "Table": {
                "CapacityUnits": 25.0
            }
        }
    ]
}

25 Write Capacity Units for 25 items: each item that is smaller than 1KB consumes 1 WCU. My items are on average 170 bytes here, so each fits in 1 WCU. And batching doesn’t help there: the items are batched for the network call only, but they all go to a different place, and each of them requires its own WCU. There is nothing like preparing full blocks with many items (like RDBMS direct-path inserts, fast load, or insert append…). DynamoDB is there to scale small transactions by scattering data through multiple partitions.
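
If you drive DynamoDB from code rather than the CLI, the same batch write and capacity reporting looks roughly like this with the AWS SDK for Java v2 (a minimal sketch, assuming credentials and region are configured in the environment):

import java.util.List;
import java.util.Map;
import software.amazon.awssdk.services.dynamodb.DynamoDbClient;
import software.amazon.awssdk.services.dynamodb.model.AttributeValue;
import software.amazon.awssdk.services.dynamodb.model.BatchWriteItemRequest;
import software.amazon.awssdk.services.dynamodb.model.BatchWriteItemResponse;
import software.amazon.awssdk.services.dynamodb.model.PutRequest;
import software.amazon.awssdk.services.dynamodb.model.ReturnConsumedCapacity;
import software.amazon.awssdk.services.dynamodb.model.WriteRequest;

public class BatchWriteDemo {
    public static void main(String[] args) {
        try (DynamoDbClient ddb = DynamoDbClient.create()) {
            // A single put request; a real batch holds up to 25 of these.
            Map<String, AttributeValue> item = Map.of(
                    "P", AttributeValue.fromS("4707"),
                    "S", AttributeValue.fromS("23535"),
                    "A0", AttributeValue.fromS("18781"));
            WriteRequest put = WriteRequest.builder()
                    .putRequest(PutRequest.builder().item(item).build())
                    .build();
            BatchWriteItemResponse resp = ddb.batchWriteItem(BatchWriteItemRequest.builder()
                    .requestItems(Map.of("Demo", List.of(put)))
                    .returnConsumedCapacity(ReturnConsumedCapacity.INDEXES)
                    .build());
            // Same information as --return-consumed-capacity INDEXES in the CLI
            resp.consumedCapacity().forEach(cc ->
                    System.out.println(cc.tableName() + ": " + cc.capacityUnits() + " WCU"));
        }
    }
}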

Table with 5 local indexes

Here is the same create statement but with 5 Local Secondary Indexes declared:


aws dynamodb create-table --table-name Demo --billing-mode PROVISIONED --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
 --attribute-definitions AttributeName=P,AttributeType=S AttributeName=S,AttributeType=S \
  AttributeName=A01,AttributeType=S AttributeName=A02,AttributeType=S AttributeName=A03,AttributeType=S AttributeName=A04,AttributeType=S AttributeName=A05,AttributeType=S \
 --key-schema AttributeName=P,KeyType=HASH AttributeName=S,KeyType=RANGE \
 --local-secondary-indexes \
  'IndexName=LSI01,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A01,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  'IndexName=LSI02,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A02,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  'IndexName=LSI03,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A03,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  'IndexName=LSI04,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A04,KeyType=RANGE}],Projection={ProjectionType=ALL}' \
  'IndexName=LSI05,KeySchema=[{AttributeName=P,KeyType=HASH},{AttributeName=A05,KeyType=RANGE}],Projection={ProjectionType=ALL}'

This recreates the table, adding the definitions for 5 local indexes on the same partition key but with different sort keys. I had to add the attribute definitions for the sort keys as I reference them in the index definitions.


...
        },
        "TableSizeBytes": 0,
        "TableName": "Demo",
        "TableStatus": "CREATING",
        "TableId": "84fc745b-66c5-4c75-bcf4-7686b2daeacb",
        "KeySchema": [
            {
                "KeyType": "HASH",
                "AttributeName": "P"
            },
            {
                "KeyType": "RANGE",
                "AttributeName": "S"
            }
        ],
        "ItemCount": 0,
        "CreationDateTime": 1597054018.546
    }
}

The hash partition size in DynamoDB is fixed at 10GB, and because local indexes are stored within each partition, the total size of an item plus all its index entries cannot exceed this limit. Here, I’m far from the limit, which will often be the case: if your key-value store is a document store, you will not project the document into all local indexes. Use KEYS_ONLY for the projection type instead of the ALL I used here. And anyway, 5 local indexes is the maximum you can create in DynamoDB.


aws dynamodb batch-write-item --request-items file://batch-write139.json --return-consumed-capacity INDEXES --return-item-collection-metrics SIZE
...
   "ConsumedCapacity": [
        {
            "CapacityUnits": 150.0,
            "TableName": "Demo",
            "LocalSecondaryIndexes": {
                "LSI01": {
                    "CapacityUnits": 25.0
                },
                "LSI03": {
                    "CapacityUnits": 25.0
                },
                "LSI02": {
                    "CapacityUnits": 25.0
                },
                "LSI05": {
                    "CapacityUnits": 25.0
                },
                "LSI04": {
                    "CapacityUnits": 25.0
                }
            },
            "Table": {
                "CapacityUnits": 25.0
            }
        }
    ]
}

Here we are at 150 WCU in total: the same 25 WCU as before for the 25 items put into the table, and each of the 5 local indexes accounts for an additional 25 WCU. (In a few runs during my tests I saw 26 WCU instead of 25 for the indexes, and I don’t really know why.)

Table with 20 global indexes

Now, without any local indexes but with the maximum number of global indexes we can have here: 20 Global Secondary Indexes (GSI).


aws dynamodb create-table --table-name Demo --billing-mode PROVISIONED --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
 --attribute-definitions AttributeName=P,AttributeType=S AttributeName=S,AttributeType=S \
AttributeName=A01,AttributeType=S AttributeName=A02,AttributeType=S AttributeName=A03,AttributeType=S AttributeName=A04,AttributeType=S AttributeName=A05,AttributeType=S AttributeName=A06,AttributeType=S AttributeName=A07,AttributeType=S AttributeName=A08,AttributeType=S AttributeName=A09,AttributeType=S AttributeName=A10,AttributeType=S AttributeName=A11,AttributeType=S AttributeName=A12,AttributeType=S AttributeName=A13,AttributeType=S AttributeName=A14,AttributeType=S AttributeName=A15,AttributeType=S AttributeName=A16,AttributeType=S AttributeName=A17,AttributeType=S AttributeName=A18,AttributeType=S AttributeName=A19,AttributeType=S AttributeName=A20,AttributeType=S \
 --key-schema AttributeName=P,KeyType=HASH AttributeName=S,KeyType=RANGE \
 --global-secondary-indexes \
  'IndexName=GSI01,KeySchema=[{AttributeName=A01,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI02,KeySchema=[{AttributeName=A02,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI03,KeySchema=[{AttributeName=A03,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI04,KeySchema=[{AttributeName=A04,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI05,KeySchema=[{AttributeName=A05,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI06,KeySchema=[{AttributeName=A06,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI07,KeySchema=[{AttributeName=A07,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI08,KeySchema=[{AttributeName=A08,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI09,KeySchema=[{AttributeName=A09,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI10,KeySchema=[{AttributeName=A10,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI11,KeySchema=[{AttributeName=A11,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI12,KeySchema=[{AttributeName=A12,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI13,KeySchema=[{AttributeName=A13,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI14,KeySchema=[{AttributeName=A14,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI15,KeySchema=[{AttributeName=A15,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI16,KeySchema=[{AttributeName=A16,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI17,KeySchema=[{AttributeName=A17,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI18,KeySchema=[{AttributeName=A18,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI19,KeySchema=[{AttributeName=A19,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}' \
  'IndexName=GSI20,KeySchema=[{AttributeName=A20,KeyType=HASH},{AttributeName=S,KeyType=RANGE}],Projection={ProjectionType=ALL},ProvisionedThroughput={ReadCapacityUnits=1,WriteCapacityUnits=1}'

This takes much longer to create because global indexes are actually like other tables that are maintained asynchronously (there’s only eventual consistency when you read them).


aws dynamodb batch-write-item --request-items file://batch-write139.json --return-consumed-capacity INDEXES --return-item-collection-metrics SIZE
...
{
    "UnprocessedItems": {},
    "ItemCollectionMetrics": {},
    "ConsumedCapacity": [
        {
            "CapacityUnits": 525.0,
            "GlobalSecondaryIndexes": {
                "GSI06": {
                    "CapacityUnits": 25.0
                },
                "GSI07": {
                    "CapacityUnits": 25.0
                },
                "GSI05": {
                    "CapacityUnits": 25.0
                },
 ...
                "GSI08": {
                    "CapacityUnits": 25.0
                },
                "GSI03": {
                    "CapacityUnits": 25.0
                }
            },
            "TableName": "Demo",
            "Table": {
                "CapacityUnits": 25.0
            }
        }
    ]
}

The cost is the same as with local indexes: one capacity unit per item per index in addition to the table.

So this post is simply there to draw your attention to the fact that adding indexes will slow down the writes, in NoSQL as in any database. In DynamoDB this is measured in Write Capacity Units, and you can get the whole detail, how many WCU for the table, for the LSIs and for the GSIs, with “ReturnConsumedCapacity”. What is important is that this capacity can scale: you will probably not see a difference in the response time, except of course if you go beyond the provisioned capacity, and then you can increase it (at a cost, of course).

How does it scale? Because DynamoDB only allows us to do things that scale. Maintaining global indexes requires cross-node synchronization in a distributed database, which cannot scale, so DynamoDB does it asynchronously (reads on a GSI are eventually consistent) and limits the number of GSIs to 20. Maintaining local indexes does not involve cross-partition latency, so they are maintained synchronously, but to limit the overhead you can create at most 5 LSIs. Within those limits, local and global indexes are useful to keep item access fast (see previous posts on covering GSIs and LSIs).

The post Amazon DynamoDB: the cost of indexes appeared first on Blog dbi services.

Manage Posts and Pages with the Block Editor Sidebar

We wanted to make it easier to manage posts and pages when you’re working within the Block Editor. With the Block Editor Sidebar you can view, create, edit, and manage posts and pages right from the Block Editor!  

Wait, another sidebar?

Have you ever finished editing something and then needed to make a quick edit to an existing post, or start a new one? Previously this meant navigating to your dashboard and to the posts list.

The Block Editor Sidebar allows you to view and access your site’s posts and pages — whether they’re drafts, published, or scheduled — and create new ones right from the editor.

Using the sidebar

The sidebar is available in the Block Editor when editing both posts and pages. For now, when editing a post you’ll only be able to see recent posts, and create new posts. Similarly, if you’re editing a page you’ll see a list of your recent pages and a link to create a new page.

The following steps refer to posts, but you can use the same steps to edit pages.

Open an existing post or create a new one to open the editor.

[Screenshot: select a post from the posts list]

To view the sidebar, click on the WordPress.com logo at the top left of the editor.

[Screenshot: click on the WordPress.com logo]

To return to your posts list, select the View Posts link.

[Screenshot: Block Editor Sidebar, view all posts]

To load an existing post, select the post title in the list of recent posts. Unless it’s already been published, the post’s status will appear next to the title, for example “Draft” or “Scheduled.”

[Screenshot: Block Editor Sidebar, select a post to edit]

To create a new post, click on the Add new post link.

[Screenshot: Block Editor Sidebar, add a new post]

To dismiss the sidebar, click anywhere outside the sidebar or press the “Escape” key.

What’s next?

Now that we have the Block Editor Sidebar in place, we’ll extend its capabilities with more tools and shortcuts to help you manage your site’s content.

Do you have ideas for site-level features you’d like to see in the Block Editor Sidebar? Or suggestions about how we can improve this feature? Please let us know in the comments!

Starting Your Own Podcast on WordPress.com

I am very inspired by creative podcasts. Entertaining guests and the unscripted nature of the conversations make me feel like I am right there, chatting about world affairs or the deep societal transformations of the connected age. I decided to start my own show with a group of friends — loosely about blogging, but delving into all sorts of things.

It took three hours, with a total cost of $5/month, including our own domain name. In this post, I’ll share my journey of launching a brand-new podcast on WordPress.com, and later promoting it on iTunes and Spotify.

Why start a podcast?

Podcasting has been hailed as “the new blogging.” According to Edison Research, 51% of Americans have listened to a podcast, and hosts have created over one million podcasts on iTunes Podcasts.

Starting a podcast comes with a lot of perks:

  • You can connect with your audience on a much deeper level, thanks to your voice and the unscripted nature of the conversation.
  • It’s a fantastic excuse to reach out and meet interesting people.
  • Podcasting pairs really well with other projects — blogging, writing, or even a corporate “day job.”

A group of Polish bloggers and I had been dreaming of a foray into podcasting for a while. We created the Placebo podcast in hopes of meeting interesting people and, frankly, having fun.

The tagline in Polish reads: “Your dose of absolutely nothing. Confirmed clinical efficacy.”

How does it all work?

You may have listened to a podcast on iTunes, Spotify, or another app — but did you know the content you’re enjoying doesn’t originate there? The beating heart of every podcast is its RSS feed: a standardized content format that services like iTunes and Spotify can process and display in their own apps, on Alexa devices, and elsewhere.

When you upload an audio file to your blog (for example, an .mp3) and hit publish, the new post (including the file) shows up in your RSS feed. From there, iTunes, Spotify, Pocket Casts, or another podcast app downloads the file to play it to your listeners.
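
To make that flow concrete, here is a minimal Python sketch of what a podcast app does behind the scenes: fetch the RSS feed and read each entry’s audio enclosure. It uses the third-party feedparser library, and the feed URL is just a placeholder:

import feedparser  # third-party: pip install feedparser

# Placeholder URL; WordPress.com shows your real feed URL on the
# podcasting settings page.
feed = feedparser.parse("https://example.wordpress.com/category/podcast/feed/")

for entry in feed.entries:
    # Each episode post carries its audio file as an RSS enclosure;
    # this is the URL the app downloads and plays for your listeners.
    for enclosure in entry.enclosures:
        print(entry.title, enclosure.href)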

WordPress.com also has built-in podcasting tools to make that job easier. To upload audio files, you’ll need a paid plan — any plan, including the Personal plan at $5/month, will do.

What do you need to start?

The name

The name of your podcast helps listeners find you in their favorite podcasting app. With Placebo, our main concern was making it memorable; our secondary goal, humor. (Which is related — funny is memorable!) We came up with “Placebo — a podcast with a confirmed clinical efficacy.”

Other memorable name examples include therapist Esther Perel’s podcast, Where Should We Begin, Matt Mullenweg’s podcast about remote work, Distributed, and Joe Rogan’s The Joe Rogan Experience. If you are still stuck, try out our recently launched business name generator to help you brainstorm.

Recording

We didn’t want to get expensive gear; it’s better to start simple with the equipment you already have. The Placebo podcast is recorded over Zoom, with video-call headsets. I recommend recording at least three episodes before you publish, so your listeners get a better sense of your style.

The Distributed podcast has a great walkthrough of the various recording options you can try.

Cover art

In addition to your name, there are other ways to stand out from all the other podcasts and voices: strong cover art and a succinct, solid summary. (You can add all three on WordPress.com in My Sites → Settings → Writing → Podcasting.) 

The cover art should be easy to recognize. Since our podcast is named Placebo, we decided on a satirical medical vibe. We found a nice graphic, and after a few tweaks, tada!

[Image: Placebo podcast cover image]

Pro tip: Cover art should be 1400×1400 px, so be sure to use an image that’s big enough.
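
If you want to verify that before submitting, a quick Python sketch with the Pillow imaging library (the file name is just an example) can check the dimensions:

from PIL import Image  # third-party: pip install Pillow

cover = Image.open("cover.png")  # example file name
width, height = cover.size
if (width, height) != (1400, 1400):
    print(f"Cover art is {width}x{height} px; resize it to 1400x1400.")
else:
    print("Cover art dimensions look good.")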

The summary

When writing your summary on iTunes, you must distill the description of your podcast content to 250 characters or less. For Placebo, we wanted to give listeners a taste and encourage them to give us a try:

Placebo is a podcast with confirmed clinical efficacy. Artur, Agnes, and Michael chat about building your online home, technology, culture, business ideas, and being the best citizen of the Internet you can be. More on placebo.pro

We also included the link to our site, where listeners can learn more.
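
Since iTunes enforces that 250-character limit, a one-line check in Python (using our summary as the example text) is a cheap safeguard before you submit:

summary = (
    "Placebo is a podcast with confirmed clinical efficacy. Artur, Agnes, "
    "and Michael chat about building your online home, technology, culture, "
    "business ideas, and being the best citizen of the Internet you can be. "
    "More on placebo.pro"
)
assert len(summary) <= 250, f"Summary is {len(summary)} characters; trim it."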

[Image: Where you upload your cover art and summary on WordPress.com]

Light audio editing

Some podcasts (like Distributed) are edited to equalize audio levels or condense the episode. We opted to publish the raw audio — I only attach the pre-recorded intro music in GarageBand. I delve into the details of audio editing on my blog, Deliberate Internet.

Before uploading a new episode, I add the relevant ID3 tags in the iTunes player (where you edit, rather than publish, your files). ID3 tags are used by the iPhone Podcasts app to display your cover image. Import each of your recorded files into the iTunes player and select “Information.” There, you can tweak the image, category, and title of the audio file.
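
If you would rather script this step than click through iTunes, here is a minimal sketch using the third-party mutagen library (file names are examples) to set the same kinds of tags: title, genre, and the embedded cover image:

from mutagen.id3 import ID3, ID3NoHeaderError, TIT2, TCON, APIC  # pip install mutagen

try:
    tags = ID3("episode-01.mp3")        # example file name
except ID3NoHeaderError:
    tags = ID3()                        # file had no ID3 header yet
tags.add(TIT2(encoding=3, text="Placebo - Episode 1"))  # title
tags.add(TCON(encoding=3, text="Podcast"))              # genre/category
with open("cover.png", "rb") as art:                    # example cover file
    tags.add(APIC(encoding=3, mime="image/png", type=3,
                  desc="Cover", data=art.read()))       # type=3 = front cover
tags.save("episode-01.mp3")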

After you edit the information in the iTunes player (or wherever you are editing your podcast), upload your cover art and click “OK”; your episode will appear in the iTunes Podcast library on your computer. You can find the file in your Home Directory → Music → iTunes → Podcasts.

Your audio files are now ready for distribution! That’s where WordPress.com comes in.

Uploading to WordPress.com

Create a separate WordPress.com post for each episode of your podcast. This gives you a unique link to share with your audience, a way for them to listen to your episode without the app, and a place to add extra notes and links.

To publish a podcast post:

  1. Create a new post. Add any extra notes or info you want to share alongside the episode.
  2. Add the Audio block and upload your podcast file.
  3. Give the post the Podcast category.
  4. Publish the post.

This guide on the WordPress.com Support Site has more details.

Time to go live

You have your domain and a site for your podcast. You’ve recorded and uploaded a few episodes. Now it’s time to share your podcast with the world!

Your podcast is available on your WordPress.com site, but the majority of people consume podcasts through an app — iTunes Podcasts, Spotify, Google Play, etc. Fortunately, these all work by checking your RSS feed. Once you submit your RSS feed to those services, new episodes will appear there automatically.

Spotify 

  1. Copy the RSS URL from the WordPress.com podcasting settings page
  2. Go to Spotify for Podcasters
  3. Submit your RSS feed
  4. Wait two to three days for your podcast to appear

iTunes

  1. Copy the RSS URL from the WordPress.com podcasting settings page
  2. Go to Podcasts Connect
  3. Submit your RSS feed
  4. Wait about a week for them to review and approve your podcast

Once the podcast has been reviewed and approved and appears in the Spotify and iTunes libraries, add those links to your site. That way, you can promote your nifty WordPress.com custom domain, and visitors can choose their preferred way to listen. We put these buttons right up at the top of our site.

The lucky number seven

Recording new podcast episodes is plenty of fun, but it can sometimes feel like work. Podcasts with seven or more episodes are likely to continue growing. The Placebo podcast crew has pledged to record at least eight.

And don’t forget to enjoy it! The conversations with interesting guests are why we do it.

I’ve published the full version of “How to Start a Podcast” on my personal website. There, I expand on my process for recording episodes over Zoom and editing them later in GarageBand. Dive deeper if you’re interested!


Cover Photo by The Teens Network Daytime Show Studios on Pexels.com