Postgres

Some myths about PostgreSQL vs. Oracle

By Franck Pachot

I originally wrote this as a comment on the following post that you may find on the internet:
https://www.2ndquadrant.com/en/blog/oracle-to-postgresql-reasons-to-migrate/
but my comment was not published (many links in it… I suppose it was flagged as spam?) so I post it here.

You should never make a decision based on what you read on the internet without verifying it. It is perfectly valid to consider a move to open-source databases, but doing it without a good understanding puts the success of your migration project at risk.

PostgreSQL Shared Buffers vs free RAM

PostgreSQL, like all other database engines, modifies table and index blocks in shared buffers. People think that the main goal of buffered reads is to act as a cache and avoid reading from disk. But that is not the main reason, as caching there is not mandatory: PostgreSQL actually expects the filesystem cache to be used as well. The primary goal of shared buffers is simply to share them, because multiple sessions may want to read and write the same blocks, and concurrent access is managed at block level in memory. Without shared buffers, you would need to lock a whole table. Most database engines also use the shared buffers for caching: allocating more memory keeps the frequently used blocks in memory rather than going to disk. And because they manage this cache with methods suited to the database (for performance and reliability), they don't need another level of cache and recommend direct I/O to the database files. But not Postgres.
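For illustration, this is how you could look at and change those settings (the sizes below are only hypothetical examples, not recommendations from this post):

-- current size of the shared buffers (the default is often as low as 128MB)
SHOW shared_buffers;
-- example only: give more RAM to shared buffers; this is a postmaster
-- parameter, so it takes effect only after a server restart
ALTER SYSTEM SET shared_buffers = '8GB';
-- because PostgreSQL does buffered I/O, blocks may also sit in the
-- filesystem cache; effective_cache_size only tells the planner how much
-- cache (shared buffers + OS cache) it can expect in total
ALTER SYSTEM SET effective_cache_size = '24GB';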

The myth of NoSQL (vs. RDBMS) agility: adding attributes

By Franck Pachot

There are good reasons for NoSQL and semi-structured databases. And there are also many mistakes and myths. If people move from RDBMS to NoSQL for the wrong reasons, they will have a bad experience, and that ultimately hurts NoSQL's reputation. Those myths were spread by database newbies who never learned SQL and relational databases. And, rather than learning the basics of data modeling and the set-processing capabilities of SQL, they thought they had invented the next generation of persistence… when they actually went back to what existed before the invention of the RDBMS: a hierarchical, semi-structured data model. And now they are encountering the same problems that relational databases solved 40 years ago. This blog post is about one of those myths, illustrated with a small example below.
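As a sketch of the "adding attributes" scenario (the table and column here are made up, not taken from the original post), adding an attribute in a relational database is a single DDL statement, and in PostgreSQL it is a fast, metadata-only change (since version 11 even when a non-volatile DEFAULT is supplied):

-- hypothetical example: adding a new attribute to an existing table
ALTER TABLE customers ADD COLUMN loyalty_tier text DEFAULT 'standard';
-- existing rows immediately expose the new attribute
SELECT id, loyalty_tier FROM customers LIMIT 10;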

AWS Aurora vs. RDS PostgreSQL on frequent commits

This post is the second part of https://blog.dbi-services.com/aws-aurora-xactsync-batch-commit/ where I ran row-by-row inserts on AWS Aurora with different sizes of intermediate commits. Unsurprisingly, the commit-each-row anti-pattern has a negative effect on performance. And I mentioned that this is even worse in Aurora, where the session process sends the WAL directly to the network storage and waits, at commit, until it is acknowledged by at least 4 out of the 6 replicas. An Aurora-specific wait event is sampled on these waits: XactSync. At the end of the post I added some CloudWatch statistics about the same workload running in RDS, but with the EBS-based PostgreSQL rather than the Aurora engine. The storage is then an EBS volume mounted on the EC2 instance.
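The workload tested was of this shape (a hypothetical sketch, assuming PostgreSQL 11 or higher where a procedure can control transactions; the table and row counts are only examples):

CREATE TABLE demo (n int);

CREATE OR REPLACE PROCEDURE insert_rows(total int, batch int)
LANGUAGE plpgsql AS $$
BEGIN
  FOR i IN 1 .. total LOOP
    INSERT INTO demo VALUES (i);
    IF i % batch = 0 THEN
      COMMIT;  -- every commit waits for the WAL to be durable
               -- (on Aurora: acknowledged by 4 out of 6 storage replicas)
    END IF;
  END LOOP;
END $$;

CALL insert_rows(1000000, 1);      -- commit each row: the anti-pattern
CALL insert_rows(1000000, 10000);  -- intermediate commits every 10000 rows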

AWS Aurora IO:XactSync is not a PostgreSQL wait event

By Franck Pachot

In AWS RDS you can run two flavors of the PostgreSQL managed service: the real PostgreSQL engine, compiled from the community sources and running on EBS storage mounted by the database EC2 instance, and Aurora, which is proprietary and available only in the AWS cloud, where the upper layer has been taken from community PostgreSQL. The storage layer in Aurora is completely different.

ysql_bench: the YugaByteDB version of pgbench

By Franck Pachot

This follows the previous post on testing YugaByteDB 2.1 performance with pgbench: https://blog.dbi-services.com/yugabytedb-2-1/
A distributed database needs to reduce inter-node synchronization latency, and therefore replaces two-phase pessimistic locking with optimistic concurrency control in many places. This means more serialization errors, where a transaction may have to be retried. But the PostgreSQL pgbench has no retry logic, and this makes benchmarking distributed databases very hard. For example, when CERN tested CockroachDB, the conclusion was: “comparative benchmarking of CockroachDB was not possible with the current tools used”.
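To make this concrete, here is the kind of transaction involved (a hypothetical sketch; the accounts table is made up). Under optimistic concurrency control it can fail with SQLSTATE 40001 (serialization_failure) when another session touched the same rows, and the client is then expected to retry the whole transaction, which pgbench does not do:

BEGIN;
UPDATE accounts SET balance = balance - 10 WHERE id = 1;
UPDATE accounts SET balance = balance + 10 WHERE id = 2;
COMMIT;  -- may fail with a serialization error: the client must
         -- roll back and re-run the whole transaction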

Demystifying JSON with CockroachDB… Import, Index, and Computed Columns

Overview

Recently, I created and delivered an "Advanced Developer Workshop" for CockroachLabs. One of the topics dove into ingesting and using JSON data.

Like many databases, CockroachDB has the ability to use the JSON data type for columns within a table. Basically, you insert a JSONB object into a row and can then filter and extract the desired data with SQL. The following simple example shows how this is done:
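Here is a minimal sketch, with a made-up table and data (not the exact example from the workshop):

CREATE TABLE orders (id INT PRIMARY KEY, details JSONB);

INSERT INTO orders VALUES
  (1, '{"customer": "alice", "items": [{"sku": "a-100", "qty": 2}]}');

-- extract a field as text and filter on the JSONB content
SELECT id, details->>'customer' AS customer
FROM orders
WHERE details->'items' @> '[{"sku": "a-100"}]';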

So it is pretty straightforward to use JSONB objects within tables, but how do you load those huge JSON files into CockroachDB?

How SQL Server MVCC compares to Oracle and PostgreSQL

By Franck Pachot

Microsoft SQL Server implemented MVCC in 2005, and MVCC has proven to be the best approach for transaction isolation (the I in ACID) in OLTP. But are you sure that writers do not block readers with READ_COMMITTED_SNAPSHOT? I'll show here that some reads are still blocked by locked rows, contrary to the precursors of MVCC like PostgreSQL and Oracle.

For this demo, I run the SQL Server 2019 RHEL image on Docker in an Oracle Cloud compute instance running OEL 7.7, as explained in the previous post. If you don't have the memory limit mentioned there, you can simply run:

docker run -d -e "ACCEPT_EULA=Y" -e 'MSSQL_PID=Express' -p 1433:1433 -e 'SA_PASSWORD=P455w0rd' --name mssql mcr.microsoft.com/mssql/rhel/server:2019-latest
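Once connected (for example with sqlcmd on port 1433), the database-level setting behind READ_COMMITTED_SNAPSHOT can be enabled and checked like this (the database name is just an example for the demo):

-- example only: create a demo database with READ_COMMITTED_SNAPSHOT enabled
CREATE DATABASE demo;
ALTER DATABASE demo SET READ_COMMITTED_SNAPSHOT ON;
-- verify the setting before testing whether readers are still blocked
SELECT name, is_read_committed_snapshot_on FROM sys.databases WHERE name = 'demo';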