Oakies Blog Aggregator

Datatype conversion laziness … yet another reason

I’ve got two words to describe my thoughts on being lazy with data types… JUST DON’T!

There have been many blog posts, videos, AskTOM questions and more over the years with numerous examples of how the optimizer gets confused, how the SQL engine gets confused, and even how developers will ultimately get confused if you do not spend that tiny bit of extra effort in ensuring data type consistency.

I personally wish we had an init.ora parameter called (say) “response_to_data_type_conversion”, set to IMPLICIT (the default) or ERROR, so that when a data type conversion is encountered we could choose to have it reported back to the developer as an error, or implicitly converted as we currently do. Then we could set it to ERROR at least in non-production environments to get a handle on where we are not being diligent.

But anyway, enough ranting. The reason for this blog post is to give yet another example of how making assumptions about data type handling can lead to spurious errors.

Let’s assume we have a simple requirement: collapse a result set into a single CSV result stored in a CLOB. Here is my first cut at the code:



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||i.object_id;
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  end;
 11  /
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 7
ORA-06512: at line 7

And it fails. To get an idea as to why and where it fails, I’ll add an exception handler to capture the state of the CLOB.



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||i.object_id;
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  exception
 11    when others then
 12      dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 13      raise;
 14  end;
 15  /
length=32776
declare
*
ERROR at line 1:
ORA-06502: PL/SQL: numeric or value error
ORA-06512: at line 13
ORA-06512: at line 7
ORA-06512: at line 7

Now that number looks “interesting” in that it is just past the 32,767-byte length limit for a VARCHAR2 in PL/SQL. But why would a VARCHAR2 come into play here? We are dealing with a CLOB, which should be allowed to grow much larger. To further confuse things, look what happens when I perform the functionally identical operation, but using an intermediate variable for each row fetched in the loop.



SQL> declare
  2    l_csv  clob := empty_clob();
  3    l_line varchar2(255);
  4  begin
  5   for i in (select *
  6             from dba_objects )
  7   loop
  8        l_line := i.owner||','||i.object_name||','||i.object_id;
  9        l_csv := l_csv || l_line;
 10    end loop;
 11    dbms_output.put_line('length ' || length(l_csv));
 12  end;
 13  /

PL/SQL procedure successfully completed.

And now it works! That seems extraordinary because the logic would appear to be identical.

The answer here is, once again, not taking sufficient care with our data type conversions. The OBJECT_ID we are fetching is numeric. Because we are simply slamming that into a concatenation operator (||), some implicit data type conversion is needed, and to achieve that, some casting to VARCHAR2 must occur. We are not privy to how the PL/SQL execution engine performs the conversion, but the error suggests that all components of the expression (including the left-hand side) are being cast to VARCHAR2, and hence our “CLOB” ultimately exceeds the 32k limit.

The resolution is simple – take control of the data type conversion as we should have done anyway:



SQL> set serverout on
SQL> declare
  2    l_csv  clob := empty_clob();
  3  begin
  4   for i in (select *
  5             from dba_objects )
  6   loop
  7        l_csv := l_csv || i.owner||','||i.object_name||','||to_char(i.object_id);
  8    end loop;
  9    dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
 10  end;
 11  /
length=3491433

PL/SQL procedure successfully completed.
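For very large results there is also a pattern that avoids building a VARCHAR2 expression around the CLOB altogether: build each small line in a VARCHAR2 and append it straight into the LOB with DBMS_LOB. This is a sketch of that idea, not from the original post; it assumes a session-temporary CLOB is acceptable for your use case:

```sql
declare
  l_csv  clob;
  l_line varchar2(4000);
begin
  -- a temporary LOB that lives for the duration of the session/call
  dbms_lob.createtemporary(l_csv, true);
  for i in (select owner, object_name, object_id from dba_objects)
  loop
    -- build one small line explicitly, then append it to the CLOB directly,
    -- so no CLOB-sized VARCHAR2 expression is ever evaluated
    l_line := i.owner||','||i.object_name||','||to_char(i.object_id);
    dbms_lob.writeappend(l_csv, length(l_line), l_line);
  end loop;
  dbms_output.put_line('length=' || dbms_lob.getlength(l_csv));
  dbms_lob.freetemporary(l_csv);
end;
/
```

The explicit TO_CHAR is still there; DBMS_LOB.WRITEAPPEND simply makes the CLOB/VARCHAR2 boundary impossible to miss.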

The moral of the story remains the same. Consistent and explicit handling of data type conversions will give you more robust code.

Another little 12c improvement

You’ve got a huge table right? Massive! Immense! And then something bad happens. You get asked to remove one of the columns from that table.

“No problem” you think. “I won’t run the ‘drop column’ command because that will visit every block and take forever!”

So you settle on the perfect tool for such a scenario – simply mark the column as unused so that it is no longer available to application code and the developers that write that code.

But there’s a catch that not many people know about. SET UNUSED is meant to be a trivial data dictionary operation: we don’t touch the data, we only manipulate the column definition, so an execution of SET UNUSED should be instantaneous. This is indeed the case most of the time, but as the example below shows, the way the column was added to the table can have a bearing on what it costs to have that column removed.


SQL> select banner from v$version;

BANNER
--------------------------------------------------------------------------------
Oracle Database 11g Enterprise Edition Release 11.2.0.4.0 - 64bit Production

--
-- large-ish table to monitor the impact
--
SQL> create table t1
  2   as
  3   select d.*
  4   from dba_objects d,
  5   ( select 1 from dual connect by level <= 100);

Table created.

SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
   8713700

Elapsed: 00:00:01.92

--
-- Take an existing column to UNUSED ... instantaneous!
--
SQL> alter table t1 set unused column created;

Table altered.

Elapsed: 00:00:00.06


--
-- Add a new column, and then make it UNUSED ... instantaneous!
--
SQL> alter table t1 add new_col1 timestamp;

Table altered.

Elapsed: 00:00:00.01
SQL> alter table t1 set unused column new_col1;

Table altered.

Elapsed: 00:00:00.00


--
-- Add a new column with a not null default, and then make it UNUSED ... careful!
--
SQL> alter table t1 add new_col2 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.00
SQL> alter table t1 set unused column new_col2;

Table altered.

Elapsed: 00:01:35.39

You will not get the same issue in 12c.


SQL> select banner from v$version;

BANNER
-----------------------------------------------------------------------------
Oracle Database 12c Enterprise Edition Release 12.2.0.1.0 - 64bit Production

SQL> create table t1  nologging
  2   as
  3   select d.*
  4   from dba_objects d,
  5   ( select 1 from dual connect by level <= 100);

Table created.

SQL>
SQL> set timing on
SQL> select count(*) from t1;

  COUNT(*)
----------
   7951500

Elapsed: 00:00:02.20
SQL> alter table t1 set unused column created;

Table altered.

Elapsed: 00:00:00.08
SQL>
SQL> alter table t1 add new_col1 timestamp;

Table altered.

Elapsed: 00:00:00.00
SQL> alter table t1 set unused column new_col1;

Table altered.

Elapsed: 00:00:00.00
SQL>
SQL> alter table t1 add new_col2 timestamp default sysdate not null;

Table altered.

Elapsed: 00:00:00.01
SQL> alter table t1 set unused column new_col2;

Table altered.

Elapsed: 00:00:00.01
SQL>

#Exasol Database whoami

This little script displays some useful meta-information:

SQL_EXA> create schema myschema;
EXA: create schema myschema;

Rows affected: 0

SQL_EXA> create or replace script whoami as
 output('Current User: '.. tostring(exa.meta.current_user))
 output('Current Schema: '.. tostring(exa.meta.current_schema))
 output('Session ID: '.. tostring(exa.meta.session_id))
 output('Database Version: '.. tostring(exa.meta.database_version))
 output('Number of Nodes: '.. tostring(exa.meta.node_count))
 /
EXA:create or replace script whoami as...

Rows affected: 0
SQL_EXA> col output for a40;
COLUMN   output ON
FORMAT   a40
SQL_EXA> execute script whoami with output;
EXA: execute script whoami with output;

OUTPUT
----------------------------------------
Current User: SYS
Current Schema: MYSCHEMA
Session ID: 1612024483893367379
Database Version: 6.1.0-alpha1
Number of Nodes: 1

5 rows in resultset.

All available metadata is documented here (Chapter 3. Concepts -> 3.6 UDF scripts).

Configuring Firewall Access for SQL Database on Azure

It’s quite common that the first time you work with a feature in a cloud interface, it can appear foreign until you’ve done it a few times.  If it’s a task that you may not do often, or only once, it can be downright painful.  It doesn’t matter who the cloud provider is or what the application is; we have all felt the pain of this type of situation, and it’s why some of us even started blogging…:)  Until it’s familiar, it may not feel comfortable and you may not even like how it works.

As a newer employee at Microsoft, I’m often learning right along with my customers.  Lucky for me, I learn quickly, but I feel their pain, and if something slows me down, then I know it will slow the customer down, too.  One of the tasks that I noticed tripped us up in configuring SQL Database on Azure was configuring the firewall for SSMS (SQL Server Management Studio) or Visual Studio (VS) access.  We all know that access via SSMS is important for many DBAs, giving a single pane of glass for database management; as for me, well, I like to do everything in Visual Studio.  I’m just strange that way.

The Setup

To configure access for SSMS to reach SQL Database, log into your Azure portal, then click on the database you wish to configure access for (sorry folks, I need this play environment for a while, so I’ve removed some info from the screenshots):

[screenshot]

Click on Set Server Firewall at the top, which will bring you to the following screen:

[screenshot]

One thing I really like about Azure is that it displays your IP address in the console, on the assumption that this may be the information you require.  If you are adding a rule to allow access from a local installation of VS or SSMS, simply enter an identifying name for your rule, add the IP address (or a range of IP addresses) in the Start IP and End IP fields, then click Save.

If you need to retrieve the IP address for another host or workstation, you can do so by opening the command prompt and typing ipconfig, then looking at the IPv4 address (keep in mind that behind NAT this is the private address; the firewall rule needs the public IP the connection actually arrives from):

[screenshot]

Know that if you work from numerous WiFi, MiFi and other network connections, your IP address will change.  When it does, you’ll need to add a new rule; if it was a temporary location, just remove the old rule and create a new one for your new access point to keep things uncluttered.
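If the portal round-trip gets tedious as your IP changes, the same rule can also be managed from the command line with the Azure CLI.  This is just a sketch; myResourceGroup, myserver and AllowHomeOffice are hypothetical names, and the IP is a placeholder you’d replace with your current public address:

```shell
# Sketch: create (or recreate) a single-address firewall rule for an
# Azure SQL logical server. Assumes the Azure CLI is installed and you
# have already authenticated with `az login`.
az sql server firewall-rule create \
  --resource-group myResourceGroup \
  --server myserver \
  --name AllowHomeOffice \
  --start-ip-address 203.0.113.42 \
  --end-ip-address 203.0.113.42
```

Deleting the rule again when you leave that location is the matching `az sql server firewall-rule delete` call with the same group, server and rule name.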

Connect Time

Once this is done, it’s time to connect to your Azure SQL Database.  If you’re wondering where to find that information, let’s go back to the main page for the database:

[screenshot]

Click on the link for Show database connection strings and it will take you to the following, displaying the JDBC connection info in the second tab:

[screenshot]

Well, look there: all you need is the first thing displayed after Server=.  Note that it’s a “,” and not a “:” between the server name and the port; you’ll need to use the same in SSMS or VS when you enter the connection.  Your user name has an addition too:  Username@domain.com@sql database name.

After you enter this, use the password you configured your database with.  If you forgot, the main page in the portal has a “reset password” link for you to update it.

You’ll then connect with something that looks like the following:

[screenshot]

I can then switch to a user database and manage Azure SQL Databases just as I would if they were on-prem.  Enjoy!





Copyright © DBAKevlar [Configuring Firewall Access for SQL Database on Azure], All Right Reserved. 2018.

Aurora MySQL synch/mutex/innodb/aurora_lock_thread_slot_futex wait

Thanks to Jeremiah Wilton for the following info:

This wait event indicates that there is a thread which is waiting on an InnoDB record lock. Check your database for conflicting workloads. More information on InnoDB locking can be found here: https://dev.mysql.com/doc/refman/5.6/en/innodb-locking.html

 

In other words, record-level lock conflicts are happening. More than one connection is trying to update the last_login for a particular id in the_table at the same time. Those connections are conflicting and serializing on the record lock for that id. Here’s a query that can help you identify the blocker and waiter for InnoDB record locks in MySQL-family engines. Run this when you see the aurora_lock_thread_slot_futex wait event in Performance Insights. In a future release of Performance Insights, we will automatically generate and display a similar blockers-and-waiters report when Performance Insights detects this event.

select p1.id waiting_thread, p1.user waiting_user, p1.host waiting_host, it1.trx_query waiting_query,
       ilw.requesting_trx_id waiting_transaction, ilw.blocking_lock_id blocking_lock, il.lock_mode blocking_mode,
       il.lock_type blocking_type, ilw.blocking_trx_id blocking_transaction,
       case it.trx_state when 'LOCK WAIT' then it.trx_state else p.state end blocker_state, il.lock_table locked_table,
       it.trx_mysql_thread_id blocker_thread, p.user blocker_user, p.host blocker_host
from information_schema.innodb_lock_waits ilw
join information_schema.innodb_locks il on ilw.blocking_lock_id = il.lock_id and ilw.blocking_trx_id = il.lock_trx_id
join information_schema.innodb_trx it on ilw.blocking_trx_id = it.trx_id
join information_schema.processlist p on it.trx_mysql_thread_id = p.id
join information_schema.innodb_trx it1 on ilw.requesting_trx_id = it1.trx_id
join information_schema.processlist p1 on it1.trx_mysql_thread_id = p1.id;

+----------------+--------------+---------------------+---------------------------------------+---------------------+--------------------+---------------+---------------+----------------------+---------------+----------------------+----------------+--------------+---------------------+
| waiting_thread | waiting_user | waiting_host        | waiting_query                         | waiting_transaction | blocking_lock      | blocking_mode | blocking_type | blocking_transaction | blocker_state | locked_table         | blocker_thread | blocker_user | blocker_host        |
+----------------+--------------+---------------------+---------------------------------------+---------------------+--------------------+---------------+---------------+----------------------+---------------+----------------------+----------------+--------------+---------------------+
|           1117 | reinvent     | 172.31.51.118:34734 | UPDATE sbtest8 SET k=k+1 WHERE id=125 | 888017450           | 888017113:88:6:17  | X             | RECORD        | 888017113            | LOCK WAIT     | `sysbench`.`sbtest8` |           1196 | reinvent     | 172.31.51.118:34888 |
|           1117 | reinvent     | 172.31.51.118:34734 | UPDATE sbtest8 SET k=k+1 WHERE id=125 | 888017450           | 888017089:88:6:17  | X             | RECORD        | 888017089            | LOCK WAIT     | `sysbench`.`sbtest8` |           1431 | reinvent     | 172.31.51.118:35366 |
|           1117 | reinvent     | 172.31.51.118:34734 | UPDATE sbtest8 SET k=k+1 WHERE id=125 | 888017450           | 888015342:88:6:17  | X             | RECORD        | 888015342            | LOCK WAIT     | `sysbench`.`sbtest8` |           1680 | reinvent     | 172.31.51.118:35868 |
.
.
+----------------+--------------+---------------------+---------------------------------------+---------------------+--------------------+---------------+---------------+----------------------+---------------+----------------------+----------------+--------------+---------------------+

Also the following:
https://dev.mysql.com/doc/refman/5.6/en/innodb-information-schema-examples.html

SELECT
  r.trx_id waiting_trx_id,
  r.trx_mysql_thread_id waiting_thread,
  r.trx_query waiting_query,
  b.trx_id blocking_trx_id,
  b.trx_mysql_thread_id blocking_thread,
  b.trx_query blocking_query
FROM       information_schema.innodb_lock_waits w
INNER JOIN information_schema.innodb_trx b
  ON b.trx_id = w.blocking_trx_id
INNER JOIN information_schema.innodb_trx r
  ON r.trx_id = w.requesting_trx_id;
+----------------+----------------+----------------------------------------+-----------------+-----------------+----------------------------------------+
| waiting_trx_id | waiting_thread | waiting_query                          | blocking_trx_id | blocking_thread | blocking_query                         |
+----------------+----------------+----------------------------------------+-----------------+-----------------+----------------------------------------+
| 917169041      |           2822 | UPDATE sbtest5 SET k=k+1 WHERE id=126  | 917169007       |            2296 | UPDATE sbtest5 SET k=k+1 WHERE id=126  |
| 917169041      |           2822 | UPDATE sbtest5 SET k=k+1 WHERE id=126  | 917168488       |            2214 | UPDATE sbtest5 SET k=k+1 WHERE id=126  |
| 917169025      |           3069 | UPDATE sbtest2 SET k=k+1 WHERE id=125  | 917168945       |            2700 | UPDATE sbtest2 SET k=k+1 WHERE id=125  |
.
.
+----------------+----------------+----------------------------------------+-----------------+-----------------+----------------------------------------+
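Note that both queries above rely on the INFORMATION_SCHEMA lock tables (INNODB_LOCKS, INNODB_LOCK_WAITS), which were removed in MySQL 8.0.  On 8.0-based engines, a roughly equivalent blockers-and-waiters query (a sketch; verify the column names against your server version) joins performance_schema.data_lock_waits to information_schema.innodb_trx instead:

```sql
-- MySQL 8.0 variant of the blockers-and-waiters query above
SELECT
  r.trx_id              waiting_trx_id,
  r.trx_mysql_thread_id waiting_thread,
  r.trx_query           waiting_query,
  b.trx_id              blocking_trx_id,
  b.trx_mysql_thread_id blocking_thread,
  b.trx_query           blocking_query
FROM       performance_schema.data_lock_waits w
INNER JOIN information_schema.innodb_trx b
  ON b.trx_id = w.blocking_engine_transaction_id
INNER JOIN information_schema.innodb_trx r
  ON r.trx_id = w.requesting_engine_transaction_id;
```

The ready-made view sys.innodb_lock_waits wraps much the same join if it is available on your engine.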

see AWS forum post at https://forums.aws.amazon.com/thread.jspa?threadID=289484

Start/Stop your Autonomous Databases

[header image: The ATLAS experiment in LEGO®]

Here are two blog posts on the Databases at CERN blog:

  1. Oracle Cloud Infrastructure API Keys and OCID
  2. Oracle Cloud: start/stop automatically the Autonomous Databases

The idea is to control the Autonomous Databases from the command line, with no installation, just an easy function downloaded from the documentation and customized with environment variables, in order to automate the start and stop of the services. The first post shows how to get all those OCIDs and the second one how to use this oci-curl() function to stop all started ADW or ATP services.

In summary, define the following variables with your values:

privateKeyPath=~/.oci/oci_api_key.pem
keyFingerprint="05:56:ee:89:19:e7:16:03:9b:00:c3:91:cf:a6:9d:e9"
authUserId=ocid1.user.oc1..aaaaaaaaflzrbfegsz2dynqh7nsea2bxm5...
tenancyId=ocid1.tenancy.oc1..aaaaaaaac6guna6l6lpy2s6cm3kguijf...
compartmentId=ocid1.tenancy.oc1..aaaaaaaac6guna6l6lpy2s6...
endpoint=database.us-ashburn-1.oraclecloud.com
apiVersion=20160918

Load the oci-curl() function:

source <( curl https://docs.cloud.oracle.com/iaas/Content/Resources/Assets/signing_sample_bash.txt | grep -vE "(local tenancyId|local authUserId=|local keyFingerprint|local privateKeyPath=)" | sed -e '1s/^.*#/#/' )

List your autonomous services to check that all is correctly set:

for service in autonomousDatabases autonomousDataWarehouses ; do oci-curl $endpoint get "/$apiVersion/$service?compartmentId=$compartmentId" | jq -r '.[] | [ .dbName , .lifecycleState , .id ]  | @tsv'; done

And the result is the name, lifecycle state, and OCID of each service, one per line.

All the details are in the two posts on the Databases at CERN blog:

  1. Oracle Cloud Infrastructure API Keys and OCID
  2. Oracle Cloud: start/stop automatically the Autonomous Databases
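As a taste of what the second post automates, stopping a single service is just one more call to the same function.  This is a hedged sketch: $adbId is a placeholder OCID, and the action path comes from the OCI Database API, so verify it against the current documentation:

```shell
# POST with an empty body (/dev/null) to the stop action of one
# Autonomous Database; assumes the variables and oci-curl() from above.
adbId=ocid1.autonomousdatabase.oc1..aaaa...   # placeholder OCID
oci-curl $endpoint post /dev/null \
  "/$apiVersion/autonomousDatabases/$adbId/actions/stop"
```

The matching start is the same call with actions/start, and the ADW flavour uses the autonomousDataWarehouses path segment instead.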

The usage of this oci-curl() function was inspired by Yasin Baskan’s blog post:

https://blogs.oracle.com/datawarehousing/managing-autonomous-data-warehouse-using-oci-curl

Revisiting Embedded InnoDB

Many people these days don’t know that InnoDB was originally developed as an independent database engine, apart from MySQL. Its author, Heikki Tuuri, modeled InnoDB on Transaction Processing: Concepts and Techniques, the seminal transaction processing book authored by Turing Award laureate James “Jim” Gray and Andreas Reuter. It wasn’t until later that InnoDB was integrated with MySQL. While InnoDB […]

Open Source ODBC Drivers for Oracle

In the good old days, database-agnostic applications were written using drivers that implemented the Microsoft Open Database Connectivity (ODBC) API, especially on Windows. Much like JDBC, ODBC provided developers with a single, interoperable, C-based programming language interface that made it possible for applications to access data from a variety of database management systems. When developing an […]

ODBV3 – more comfortable usage

It has been a crazy few months – organizing POUG2018 took a lot of energy but it was satisfying as hell! […]

Partial indexing – get the dictionary definitions right

Just a quick post to clear up some confusion about the partial indexing capabilities in Oracle Database 12c and above. I was at an event in the Oracle Perth office last week, and an attendee mentioned that they thought the feature was not working, or was not enabled on their database, and presented the following demonstration to prove their case:



SQL> create table t ( x int, y int )
  2  partition by range ( x )
  3  (
  4    partition p1 values less than ( 1000 ) indexing on,
  5    partition p2 values less than ( 2000 ) indexing off
  6  );

Table created.

SQL> insert into t
  2  select rownum, rownum from dual connect by level < 2000;

1999 rows created.

SQL> create index ix on t ( x ) local;

Index created.

SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1
IX                             P2

SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';

PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             USABLE

At first glance, this looks counter-intuitive. I have explicitly specified that I do not want indexing on partition P2, yet after creating a local index, I still have two segments, one for each partition, and double-checking USER_IND_PARTITIONS tells me that both are “fully fledged” usable index partitions.

As per the documentation, nominating INDEXING ON / OFF at the partition level in the table definition is not the whole story. When you create the index, you need to inform the database that you want that particular index to respect the intent. This is because you might want some indexes to be partial and others not.

So by slightly changing my CREATE INDEX statement, I can get the desired outcome.


SQL> drop index ix;

Index dropped.

SQL> create index ix on t ( x ) local indexing partial;

Index created.

SQL> select segment_name, partition_name
  2  from user_segments
  3  where segment_name = 'IX';

SEGMENT_NAME                   PARTITION_NAME
------------------------------ ------------------------------
IX                             P1

SQL> select partition_name, status
  2  from user_ind_partitions
  3  where index_name = 'IX';

PARTITION_NAME                 STATUS
------------------------------ --------
P1                             USABLE
P2                             UNUSABLE
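To confirm that the partial-indexing intent was recorded in the dictionary, the 12c views expose it directly.  A sketch (the INDEXING columns below are per the 12c data dictionary; check your version):

```sql
-- FULL for a conventional local index, PARTIAL for one created
-- with INDEXING PARTIAL
select index_name, indexing
from   user_indexes
where  index_name = 'IX';

-- the table-level intent per partition: ON or OFF
select partition_name, indexing
from   user_tab_partitions
where  table_name = 'T';
```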