Oakies Blog Aggregator

Video : SQLcl and Oracle REST Data Services (ORDS)

In today’s video we’ll demonstrate the ORDS functionality built into Oracle SQLcl.

This is based on this article.

There are loads of other ORDS articles here.

The star of today’s video is Arman Sharma, captured at Sangam 2015. Seems like yesterday.



Video : SQLcl and Oracle REST Data Services (ORDS) was first posted on December 2, 2019 at 9:41 am.
©2012 "The ORACLE-BASE Blog". Use of this feed is for personal non-commercial use only. If you are not reading this article in your feed reader, then the site is guilty of copyright infringement.

Friday Philosophy – Computer Magazines & Women (Not) In I.T

I often get into discussions about Women In IT (#WIT), even more so in the last 4 or 5 years with my growing involvement in organising and being at conferences. There is no doubting that the I.T. industry is generally blighted by a lack of women and other minorities (and I don’t like referring to women as “minorities”, as there are more women in the UK than men). Ours is mostly a white, male, middle-class and (especially in the Oracle sphere) “middle aged” world.

I’ve never been happy with the ratio of men to women in the IT workplace – and I started my career in the UK National Health Service, where the ratio of men to women in technical roles seemed more like 80:20. In all companies since, I would estimate the ratio has been 10-15% women. And I haven’t seen it changing much. And I’m afraid to say, to a certain degree, I have almost given up on trying to correct this imbalance in our current workforce. Note, current workforce.

Why? Well, I’ve tried for years to increase the ratio of women in technical areas or at least to increase female representation. That is, make women more visible:

  • When I’ve hired new staff I’ve given female candidates an extra half point in my head – and part of me hates doing it because it’s sexist, the very thing that is the problem. But it’s a small wrong done to try and right a larger wrong.
  • When allocating pay increases I looked out for imbalance (is Sarah doing the same role as Dave to the same level, but being paid less? Let’s fix that).
  • When I have input to paper selection for conferences, “minorities” get an extra half point. But only half. They have to be good at presenting/have an interesting abstract.
  • When it comes to promotion, it is utterly on merit. I don’t care what’s in your underwear, the colour you are, what clothes you wear that are dictated by religion. If your work is deserving of promotion and I can promote, I promote. No positive or negative discrimination. I take this stance as I know people do not want to be promoted “just because” of filling a quota. Further, if it is perceived that this is happening, it creates a bad backlash.

But, really, it’s had little impact. The problem I keep hitting is that there are simply far fewer women in I.T. We can all try and skew things in the way that I (and many others) do or strive for more women in visible positions to act as role models, which I think is an important thing for our industry to do.

But we can’t magically create more women in I.T. Specifically, we can’t create women who have been doing the job for a long time and so are more likely to be skilled and willing to present. We can only work with what we have. One result of the skewing is that a relatively small number of women are constantly asked to present and invariably sit on #WIT panels. We see the same people over and over again.

What we can do is encourage a more mixed group of young people coming into the industry. It won’t help much with something like the database world, or at least the database user community, as you see few young people of any type coming in – we need to fix that as well, and I applaud things like the German user group #NextGen efforts – databases do not attract young people; It’s Not Cool. But that’s a whole other topic for another day.

In discussing all this, many times over the years, the idea that we need to go back to pre-work people (that would be kids and teenagers then) and encourage everyone – irrespective of gender, sexuality, ethnicity and so on – to do IT, Science, Art, domestic science, whatever they want, and to ignore the stereotypes of old, is pretty much agreed to be A Good Thing.

All of this is great but it left me with a question. How did we get into this mess in the first place? Why are there so few women in IT between the ages of 35 and retirement? In the early days a lot of women were in IT, compared to the average number of women in scientific areas generally. When I was at school (1980’s) they introduced Computer Studies into the curriculum and there were as many girls as boys in my class. Ability was equally spread. The number of women taking IT at college was admittedly terribly low when I went, but colleges did a lot to encourage women and the numbers were rising. And then stopped. Why? What was stopping girls continuing with computers? Well, a year or two ago I read an article (I think in print, as I struggled to find similar online – but if you find one let me know) about the computer press back in the 90’s. And it struck a chord with me.

The article argued that part (not all, but maybe a big part) of the problem was the computer magazines of the time. I’ve picked on “PC Format” as it was a magazine I bought often and knew, but others were similar. PC Format seemed to me to nearly always have a sexualised image of a woman on the cover, like the one at the top of this article. This was especially true if the image was a bit “science fiction”, say a ray-traced image to promote graphics cards. The image would invariably be of a woman with a, frankly, quite striking and often physiologically unlikely figure. Inside the magazine, adverts were liberally decorated with nubile women leaning forward provocatively, or with striking make-up & hair and yet wearing nerd glasses. You know, the sort of look you NEVER saw in real life. This was not a style or fashion magazine, it was not an “adult” magazine; it was about motherboards, CPUs, games, programming and general tech.

The covers I found online for this article are not as bad as many I remember (and perhaps I should not be using the worst anyway), but you get the idea. And it was not just PC Format, but that particular publication seemed to style itself as more a lifestyle magazine than just Tech or just Games. Games magazines also had a fair amount of “Dungeons & Dragons” images of women wearing clothes you would freeze to death in and be totally unsuitable for a bit of sword fighting. Why all the women?

When I read the article about this sexism I remembered a letter that had been published in, probably, PC Format. That and the response utterly summed it up. The letter asked why the magazine kept using sexy images of women on the front of a computer magazine. It wasn’t very Women’s Lib. The answer by the magazine was basically “If we put a sexy picture of a woman on the front it sells more. The more copies we sell the more money we make. We are simply giving you what you want; it’s not our problem, it’s actually yours”.

At the time I liked that letter as it said “you the public are in the wrong” and I rather liked stuff that put two fingers up at the majority, and I mentally supported the magazine’s position. Looking back now, what strikes me is the abject shirking of responsibility and blatant putting of profit before morality. Which I think is the biggest blight on society. Now I’m angry that the magazine just shrugged its shoulders and kept on.

When you added the magazines to the depictions of women in science fiction films & TV, and then, once you were in the industry, the use of booth babes and the fact that nearly all women in sales & PR looked more like models than average (which is still true today), the whole message was “women – you can be OK in IT if you are able to look like and act like this”. It’s not very inclusive.

The odd thing is, if you look further back at the old Sinclair User or Commodore User magazines, they had nothing like the same level of sexualised imagery of women on the front – they mostly had screen shots of the games in them or artwork based on the games. The sexism grew through the end of the 80’s and into the 90’s, I think.

So what is my point? We see less of this stuff these days; isn’t it more historical? Well, I think we need to keep an eye on history as it informs. I think it also explains (partly) the lack of mature women in I.T. and why it’s almost impossible to change now. But also, it’s not so much “don’t repeat the mistakes of the past” but “what mistakes are we currently making that in 20 years will be as obvious as that old mistake”. It’s not avoiding the same mistakes but similar ones.

I’ve been talking to Abigail Giles-Haigh recently about her presenting at our (UKOUG’s) #WIT event at Techfest 2019. Abi is an expert on Artificial Intelligence and we were chatting about the dangers of training systems on historic data, as they can perpetuate historical bias. Also, any system we train now can bake in current bias. It might not even be conscious bias; it can be a bias due to an absence of training data. Some face recognition systems struggle to recognise people with dark skin tones, for example. It’s not beyond reason that, if we had been training AI systems back in the 90’s on what makes a computer magazine popular, they might have picked up on not just the sexualised images of women but also other aspects of an overtly male-oriented magazine, such as the type of adverts or the language used. Adjustments would have been made in light of the data, sales would have gone up even further, and the white-male bias would have been locked in. Only now it would be AI driving it – and would we question the underlying, unconscious biases? I do think it’s a danger.
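To make that baking-in concrete, here is a deliberately tiny, illustrative sketch (all cover styles and sales numbers are invented, not real magazine data): a trivial “model” trained only on skewed historic sales averages will always recommend the biased cover style, because that is all the signal the data contains.

```python
# Illustrative sketch: a trivial "model" trained on biased historic
# sales figures learns to reproduce the bias. All data is invented.
from collections import defaultdict

# Hypothetical 1990s sales data: cover style and copies sold.
history = [
    ("sexualised_cover", 90000), ("sexualised_cover", 85000),
    ("hardware_photo",   40000), ("hardware_photo",   42000),
    ("game_screenshot",  38000),
]

def train(samples):
    """Average sales per cover style - the only 'signal' the model sees."""
    totals, counts = defaultdict(int), defaultdict(int)
    for style, sold in samples:
        totals[style] += sold
        counts[style] += 1
    return {s: totals[s] / counts[s] for s in totals}

model = train(history)
recommended = max(model, key=model.get)
print(recommended)  # the model simply locks in the historic bias
```

Nothing in the data tells the model that the correlation is a social artefact rather than an inherent property of a good magazine, so the bias is perpetuated without anyone consciously choosing it.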

I think it’s going to continue to be a real struggle to encourage more non-white-male-old people into the industry, especially if we try and change the mature workforce. I’m not going to stop trying but I honestly don’t think we can make much difference to the here-and-now.

But we can work more to remove bias for the in-coming generation. And for that we need role models. From the current generation.


Presenting Well – Tell Your Story

<< I Wish All New Presenters Knew This (and it will help you)
<<<< Controlling The Presentation Monster (Preparing to Present)
<<<<<< How to (Not) Present – The Evil Threes

I don’t think the key to a really good presentation is the content, the structure, the delivery method, or even the main message. It’s The Story.

Actually, I’d go as far as to say that there is no one, single key to presenting well – but The Story seems to be at the heart of many of the best presentations I have seen and I think that some of the best presenters I know use The Story.

More and more I strive to present by Telling A Story. It works for me and since I started doing this, I think my presentations have got a lot better.

When you read (or watch) a story, it is about something – a person, an event, how change occurred, overcoming an obstacle. It might be hard to totally define what a story is, but when you read a book and it does not really go anywhere, it’s usually not satisfying and you know it has not really told the story. Some presentations are like that: They have some great content and there is knowledge being passed on but, just as when characters are poorly developed or the plot is disjointed, the presentation feels like it’s made of bits and you come away feeling you can’t join all the dots. With a book lacking a good story you may feel you did not get the whole thing; with a technical presentation you might feel you don’t really understand how you do something – or why.

When people design a talk they usually focus on “what facts do I need to tell, what details must I include”. The aim is to put information in other people’s heads. But facts and code and details are hard to absorb. For many people, a story helps it all go in more smoothly. You absolutely need the facts and details, but if you start gently, setting the pace – perhaps hinting at things to come, or dropping an early nugget of detail (as you would with a story) – then expand the scope and go into the details, you stand a better chance of carrying the crowd with you.

If you are now thinking “It’s hard enough to come up with a presentation topic, design the talk and then deliver it, and now you want me to do all that and in the form of a story?!? – that’s going to be so much harder!” well, let me explain why I think it is actually easier.

It’s already a story

First of all, what you want to talk about could be, by its very nature, already a story.

If the presentation is about using a software technique or product to solve a business problem – that’s a story about how you did it (or, even better, how you tried to do it and it failed – most people present on successes but presentations on failures are often fantastic!).

If it is about learning about a feature of a language or of the database, your story is something like:

“how do I get going with this, what do I need to learn, the things that went wrong, my overcoming adversity {my ignorance}, and finally reaching the sunny uphills of expertise”.


A story has a flow. It’s a lot easier to learn a story than a set of facts. Some talks are just facts. In fact {see what I did there}, many techniques for remembering lists of things work by making them into a story.

Rather than making it harder to remember, having a story makes it easier to remember your talk and move through it. Each part of the presentation leads to (and reminds you of, up on that scary stage where your brain might burp) the next part. The Story helps remove the fear of forgetting parts of your material, and thus helps Control the Presentation Monster.

For the audience it gives them a progression, a narrative. I find that if a talk does not so much leap between points but segues into them, it is easier to listen and focus. As I design my talks and add more facts and details, I keep in mind how I can preserve the flow. If I am going to talk about some of the things that can go wrong, putting them all in 4 slides together is easy for me and I have a chunk of “things to avoid” – but it may well break the flow, so I try to mention the things to avoid as I come across them or as I expand my theme. I fit them into the flow of the story.

Added colour

I’m not at all suggesting you invent characters or plot devices for your talk. That really would be hard! I also suspect that, unless you were a brilliant story teller, it would be pretty awful! But you can add in little aspects of this.

If I mention someone in my presentation, I usually give a couple of bits of information about them. Not a biography, just something like “Dave was the systems admin – wonderful collection of Rick & Morty t-shirts and no sense of smell”. There is no need for me to do this, it does not help understand the technical content, but now people have a mental (and possibly even nasal) image of Dave.

Side plots – if in learning about some aspect of say Virtual Private Database I discovered something about PL/SQL functions, I’ll divert from My Core Story and give 3 or 4 minutes on that (as a mini story). The great thing about side stories is that, depending on your time management, you can drop or include them as your talk progresses. If I get asked questions during my talk and it has slowed me down (which is NOT a problem – I love the interaction) I can drop a side plot.


Finally, when you tell a story you talk to your audience. You are not talking AT an audience. You are explaining to them the background, taking them through the narrative of the topic and leading them, possibly via some side stories, to the conclusion. It is far more like communicating with your audience than dictating to them. And, if you are brave enough to do so, you can look at your audience and engage with them, try to judge if they are following the story and have any feedback or response to it. Mostly any feedback is quite passive (no one shouts out to hear more about PL/SQL functions) but you will catch people’s eye, get a smile, get some indication that they are listening.

For me, discovering that last bit about The Story was when I finally felt I had a way of presenting that worked for me. If I am talking with my audience and I feel there is an engagement, a rapport, that is when I do my best job of it. That’s when I come off the stage buzzing and happy.

Danger Will Robinson!

There is a danger to Telling a Story and that is time. Most good stories build to a satisfying end. Most technical presentations also generally have a main point. But if you are progressing through a Story you might run out of time, in which case you do not get to your Big Expose or you have to suddenly blurt out the ending. It’s like those TV programs where they obviously run out of steam and some kludge is used to end it  – “And then the side character from an hour ago appears, distracts the dragon and you nick the golden egg! Hurr…ah?”.

You can modify the run time with side plots as I say above, but if you are going to Tell a Story, you need to practice the run time more than normal.

You can finish early, it’s better than not finishing at all. But being on time is best.


Oracle Cloud : Free Tier and Article Updates

Oracle Cloud Free Tier was announced a couple of months ago at Oracle OpenWorld 2019. It was mentioned in one of my posts at the time (here). So what do you get for your zero dollars?

  • 2 Autonomous Databases : Autonomous Transaction Processing (ATP) and/or Autonomous Data Warehouse (ADW). Each has 1 OCPU and 20 GB of user data storage.
  • 2 virtual machines with 1/8 OCPU and 1 GB memory each.
  • Storage : 2 Block Volumes, 100 GB total. 10 GB Object Storage. 10 GB Archive Storage.
  • Load Balancer : 1 instance, 10 Mbps bandwidth.
  • Some other stuff…

I’ve been using Oracle Cloud for a few years now. Looking back at my articles, the first was written over 4 years ago. Since then I’ve written more as new stuff has come out, including the release of OCI, and the Autonomous Database (ADW and ATP) services. As a result of my history, it was a little hard to get excited about the free tier. Don’t get me wrong, I think it’s a great idea. Part of the battle with any service is to get people using it. Once people get used to it, they can start to see opportunities and it sells itself. The issue for me was I already had access to the Oracle Cloud, so the free tier didn’t bring anything new to the table *for me*. Of course, it’s opened the door for a bunch of other people.

More recently I’ve received a few messages from people using the free tier who have followed my articles to set things up, and I’ve found myself cringing somewhat, as aspects of the articles were very out of date. They still gave you the general flow, but the screen shots were old. The interface has come a long way, which is great, but as a content creator it’s annoying that every three months things get tweaked and your posts are out of date.


Failed Installation of MSSQL-CLI on Ubuntu

So you want to run mssql-cli on Ubuntu Linux, but you received a number of errors, and even after getting through some of them you’re still stuck?

I’m here to try to help you get through them and hopefully I’ve captured them all.  Trust me, the Oracle DBAs have been here – our databases and tools failed for a very long time until Linux administrators came to know what we needed and started to build us the images with the correct libraries and versions we needed, so we feel your pain!

The easiest scenario for many people deploying SQL Server 2019 on Linux is most likely an Ubuntu distribution (flavour) of Linux.  With that, you may want to play with the newest tool for command line execution of SQL, which isn’t sqlcmd but mssql-cli.  It’s got some awesome new features, which I won’t go into here; instead I’ll focus on installation failures, which happen not because the installation is complicated, but because many systems still default to Python 2.7 when Python 3+ is required for newer software.

mssql-cli requires Python 3, so I recommend checking the version before running the mssql-cli installation command, as this may save you a lot of work with dependencies.  I’ll still go through the steps if you want to force it to work with Python 2.7, but seriously, just using the right version of Python will make it so much easier.

python3 --version
Python 3.4.3

PIP Isn’t Installed

pip install mssql-cli

The program ‘pip’ is currently not installed. You can install it by typing:

sudo apt-get install python3-pip
Yep, do what it says: as sudo (“superuser do”) use apt-get (this is Ubuntu; on another Linux distribution you may need to use yum or zypper) to install pip!

Failure in pip Install for Python3

If it succeeded, you’re set and you can go forward. If you attempted to install pip but received the following error at the end of the installation:
ImportError: No module named 'ConfigParser'
dpkg: error processing package python-pip (--configure):
If you scan up in the installation process, you’ll see errors similar to the following:
dpkg: error processing package python-pip (--configure):
storing debug log for failure in /root/.pip/pip.log
Many errors will be trapped in the pip.log.  These types of errors are commonly unique; the application installation will log them and you must go look to see the exact problem. For this one, we have a separate problem.  The ConfigParser module was renamed to configparser in Python 3.  It’s not actually the pip install that’s having a problem, but another support utility that is on too early a version.
view /root/.pip/pip.log
If it is only the ConfigParser error in Python3, (unlike Python 2.7, which I’ll discuss later on) do the following:
Run an update for the available packages:
sudo apt-get update
Update the Net-Tools and the Setup Tools:
sudo apt-get install net-tools

sudo apt-get install python3-setuptools
Attempt to run the installation again and verify that the config parser error is fixed, as the dependent utilities are now pointing to the correct support utilities, too.
sudo apt-get install python3-pip
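As an aside, the rename itself is easy to handle at the Python level in your own scripts with a guarded import – a minimal sketch:

```python
# The INI parser module was renamed between Python versions:
# "ConfigParser" in Python 2 became "configparser" in Python 3.
try:
    import configparser                     # Python 3 name
except ImportError:
    import ConfigParser as configparser     # fall back to the Python 2 name

# Quick check that the parser imported under either name actually works.
parser = configparser.ConfigParser()
parser.read_string(u"[mssql]\nserver = localhost\n")
print(parser.get("mssql", "server"))
```

(The `read_string` call itself is Python 3 only; under Python 2 you would feed the parser a file object instead.)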

Dependencies Upon Dependencies

All of these challenges surround dependencies for mssql-cli: libraries, utilities and packages that aren’t present on your Linux host/image.  Once you correct one, it doesn’t mean you are done; you may have to fix others.  The correct step, once you have corrected everything, is to create an image for your database server so you don’t have to run through this every time.
The next set of errors once you get through the last one, when installing the mssql-cli may include the following:
The following packages have unmet dependencies:
 openssl : Depends: libssl1.0.0 (>= 1.0.2g) but 1.0.1f-1ubuntu2.27 is to be installed
 python-pip : Depends: python-colorama but it is not going to be installed
              Depends: python-distlib but it is not going to be installed
              Depends: python-html5lib but it is not going to be installed
              Depends: python-pip-whl (= 1.5.4-1ubuntu4) but it is not going to be installed
              Depends: python-setuptools (>= 0.6c1) but it is not going to be installed
              Recommends: build-essential but it is not going to be installed
              Recommends: python-dev-all (>= 2.6) but it is not installable
              Recommends: python-wheel but it is not going to be installed
E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).
You’ll need to run this as sudo again, as your admin account won’t have the privileges required by default.  The installer is clearly telling you that you are missing a number of distribution Python library files/packages that are needed before you can run this step, so run the update to gather the latest from the repository:
sudo apt-get -f install
Then run the python installation again:
sudo apt-get install python-pip

Failure Restarting the Daemon

Upon installation, after the update, you may also receive a failure when attempting to restart the daemon:

Removing landscape-client (14.12-0ubuntu6.14.04.4) ...
 * Stopping landscape-client daemon                                                              [fail]
This may be a false failure.  It can’t stop the landscape-client daemon because IT ISN’T INSTALLED.
1.  Verify if it exists:
ps -ef | grep daemon
This will display a list of all running daemons.  If you don’t see the landscape-client in the list, you can switch over to root (the proper owner) and see if it can be started with the minimum command:
sudo su -

If it says it isn’t installed, you’re good.  You’ve just verified that the reason the step failed in the install was the landscape-client isn’t installed in the first place.

Failure to Create the Setup Tools

You’ve run the installer without checking the version of Python and run into this error:
    Installed /tmp/pip_build_azureuser/pymssql/setuptools_git-1.2-py2.7.egg

Installing collected packages: mssql-cli, pymssql
Running install for mssql-cli
error: could not create '/usr/local/lib/python2.7/dist-packages/mssql_cli': Permission denied
Complete output from command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_azureuser/mssql-cli/';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-46EYHL-record/install-record.txt --single-version-externally-managed --compile:
This is actually part of the final failure that you’ll see if you’re attempting to install the mssql-cli with the wrong Python version set.  You should note the 2.7 version in the path and the continued failures due to missing library and module files.

Fix the Version of Python

Yes, it’s Python, and without the right support version it’s going to have some issues.  The errors you receive may be a bit misleading, because you’ll see demands for pymssql or other modules that are no longer available:
DeprecationWarning: The pymssql project has been discontinued.  To install the last working released version, use a

version specifier like "pymssql<3.0".  For details and alternatives see:

Cleaning up...
Command /usr/bin/python -c "import setuptools, tokenize;__file__='/tmp/pip_build_root/pymssql/';exec(compile(getattr(tokenize, 'open', open)(__file__).read().replace('\r\n', '\n'), __file__, 'exec'))" install --record /tmp/pip-Ccg8po-record/install-record.txt --single-version-externally-managed --compile failed with error code 1 in /tmp/pip_build_root/pymssql
Storing debug log for failure in /home/azureuser/.pip/pip.log
1.  Determine “which” Python you are pointed to, ensuring that nothing in your path has redirected you to a different version of Python than the default one in /usr/bin:
which python
Now find out what the soft link looks like:
ls -la /usr/bin/python
 lrwxrwxrwx 1 root root 18 Nov 22 21:22 /usr/bin/python -> /usr/bin/python2.7
You’ll need to remove the existing soft link (remember, this is just a soft link; you aren’t removing Python) and then create a new one.  This should be done as the root user:
rm /usr/bin/python
ln -s /usr/bin/python3.4 /usr/bin/python

Now run your installation and this should fix many of the issues that you may have been facing.  If not, continue on with the steps and learn how to address dependencies.

sudo pip install mssql-cli
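Before re-running the install, it’s worth a quick sanity check from within Python itself (an illustrative sketch) that the relinked default interpreter really is a 3.x one:

```python
# Run as plain "python" after relinking: confirms which binary the
# symlink now resolves to, and that it is Python 3.
import sys

print(sys.executable)                  # the interpreter actually running
major = sys.version_info[0]
print("Python major version:", major)
if major < 3:
    raise SystemExit("Still on Python 2 - re-check the /usr/bin/python symlink")
```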

Update and Install

As you update the installation libraries to be installed on your Linux VM, keep in mind that this simply refreshes the local index of what’s available from the repository; packages are only installed when you run the “sudo apt-get install” command afterwards.  No secondary libraries or tools will be installed until you call them specifically.  If it’s been a while since you’ve updated your repository for newly available updates, or you’ve just built the image, it’s a good idea to run an update and install overall.  Once this is done, run the pip installation:
sudo apt-get update

sudo apt-get -f install
sudo apt-get install python-pip
TypeError: Error when calling the metaclass bases

    str() takes at most 1 argument (3 given)

Cleaning up...
Command python egg_info failed with error code 1 in /tmp/pip_build_root/pymssql
Storing debug log for failure in /root/.pip/pip.log
Let’s get this fixed by updating the setup tools and then running the installation again:
pip install --upgrade setuptools
There may be a next layer to this error:
  File "/tmp/pip_build_root/pymssql/", line 88, in 

    from Cython.Distutils import build_ext as _build_ext

ImportError: No module named Cython.Distutils

Cleaning up...
Command python egg_info failed with error code 1 in /tmp/pip_build_root/pymssql
Storing debug log for failure in /root/.pip/pip.log
Notice that there is a debug log included in this, as the error can be different for each failure.  View the log for more information:
view /root/.pip/pip.log
If you run into this one, feel free to put the exact error from the pip.log into the comments and we’ll work through it.

When Cython Fails

Note in the error that we are missing Cython.Distutils.  Cython is a superset of the Python language, so yes, we need this to work.  As we install missing libraries and utilities/tools, we may need to follow back through and repeat other installations: library –> tool –> the application that was the goal to begin with.  To install this, the following commands can be run as one, separated by a semi-colon:
sudo apt-get install python-dev; sudo pip install cython 

Update the setup tools

pip install --upgrade setuptools pip 

Use a different driver, as the pymssql driver is no longer supported through its GitHub repository:

pip install sqlalchemy_pyodbc_mssql 

You should be set now to install mssql-cli:

sudo pip install mssql-cli

Pip Install not Found

pip is a command line tool to install Python software packages and is required to perform the mssql-cli installation.  If the wrong version is present, or soft links (think of them as aliases in Linux) are pointing to the wrong directories/library files, a failure will occur.  This can become a frustrating situation, so I’m hoping to walk you through the errors and fixes.  Keep in mind that, as you go through these fixes, you may have to re-run previous commands to success once you fix each dependency.
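If soft links are new to you, this small self-contained sketch (using a throwaway temp directory, not your real /usr/bin) shows exactly what one is and how it resolves:

```python
import os
import tempfile

# Create a throwaway directory with a fake "python3.4" file and a
# soft link "python" pointing at it - the same shape as /usr/bin.
d = tempfile.mkdtemp()
target = os.path.join(d, "python3.4")
link = os.path.join(d, "python")
open(target, "w").close()
os.symlink(target, link)

print(os.readlink(link))                                   # where the link points
print(os.path.realpath(link) == os.path.realpath(target))  # resolves to the target
```

Removing and recreating such a link changes where the name points, not the files it points at – which is why relinking /usr/bin/python is safe.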

If you receive the following error:

  File "/usr/lib/python3/dist-packages/", line 628, in resolve
    raise DistributionNotFound(req)
pkg_resources.DistributionNotFound: pip==1.5.4

This is due to a pip installation for Python 2.7 that needs to be installed for the Python 3.4 version and relinked.  This can be done with the following commands:


sudo python

pip --version

Trick the Pip

What ended up working to get mssql-cli for many was to trick the pip installation: install the Python 3 pip, move the existing 2.7 pip out of the way, then install the full setup tools to correct whatever appears to be incomplete in the Python 3 version in some repositories.
sudo easy_install3 pip

sudo mv /usr/local/bin/pip /usr/local/bin/pip-3
Run the installation another time:
sudo apt-get install python3-setuptools
Now run the mssql-cli install:
sudo pip install mssql-cli

Last Error I’ll Cover, I Promise!

Although you’re able to install mssql-cli now, you may experience one more error in the variations of this troubleshooting guide:
  Building wheel for terminaltables ( ... done
  Stored in directory: /root/.cache/pip/wheels/30/6b/50/6c75775b681fb36cdfac7f19799888ef9d8813aff9e379663e
Successfully built configobj humanize future terminaltables
ERROR: prompt-toolkit 2.0.10 has requirement six>=1.9.0, but you'll have six 1.5.2 which is incompatible.
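The ERROR above boils down to a numeric version comparison: six 1.5.2 does not satisfy six>=1.9.0. A tiny sketch of that check (a hand-rolled parser for illustration, not pip’s actual resolver):

```python
# Compare dotted version strings numerically, the way "six>=1.9.0" is evaluated.
def version_tuple(v):
    """Turn '1.5.2' into (1, 5, 2) so the comparison is numeric, not lexical."""
    return tuple(int(part) for part in v.split("."))

installed = "1.5.2"  # the six version found in the environment
required = "1.9.0"   # prompt-toolkit's minimum requirement

compatible = version_tuple(installed) >= version_tuple(required)
print(compatible)  # False, hence the ERROR above
```

Note the tuple comparison is what makes 1.10 sort after 1.9; a plain string comparison would get that wrong.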
The prompt toolkit is just as it sounds – it’s a powerful toolkit to build out command line and terminal applications.  If you want to update this, as the one in the image may be an earlier version than supported, you can update your net-tools package, which includes a newer version of the prompt-toolkit:
sudo apt-get install net-tools
This should cover the related challenges that you could run into when installing mssql-cli on Ubuntu Linux.
Hope it’s helpful!!


Copyright © [Failed Installation of MSSQL-CLI on Ubuntu], All Rights Reserved. 2019.

Hi Chris,

>> How do you run automated tests for applications which depend on state of an Oracle database/schema?

Oracle Multitenant (no need for the option – this may be done with the free Oracle XE) can flashback a Pluggable Database very fast.

Of course, people would like to use the same technology for all components. But while OS containers are good for the application server, they are not the right choice for a database instance (many processes with shared memory and persistent storage). With Multitenant, the CDB runs those processes and memory and can create/drop/flashback database containers, which are the PDBs. Think about PDBs for data in the same way as docker containers for software.


Video : Oracle REST Data Services (ORDS) : OAuth Implicit

In today’s video we look at the OAuth Implicit flow for Oracle REST Data Services.

This goes together with a previous video about first-party authentication here.

Both videos are based on parts of this article.

There are loads of other ORDS articles here.

The star of today’s video is Bob Rhubart, who amongst other things is the host of the Oracle Groundbreakers Podcast.



Video : Oracle REST Data Services (ORDS) : OAuth Implicit was first posted on November 25, 2019 at 9:21 am.

19c instant client and Docker

You should get there if you search for “ORA-12637: Packet receive failed” and “Docker”. Note that you can get the same error on old versions of VirtualBox and maybe other virtualized environments that do not correctly forward out-of-band data.

TL;DR: There are two workarounds for this:

  • get out-of-band correctly handled with a recent version of your hypervisor, or by disabling userland-proxy in Docker
  • disable out-of-band break usage by setting DISABLE_OOB=ON in sqlnet.ora (client and/or server)

But this post is also the occasion to explain a bit more about this.

Out Of Band breaks

You have all experienced this. You run a long query and want to cancel it. Sometimes, just hitting ^C stops it immediately. Sometimes, it takes a very long time. And that has to do with the way this break command is handled.

Here is an example where Out-Of-Band is disabled (I have set DISABLE_OOB=ON in sqlnet.ora):

09:55:44 SQL> exec dbms_lock.sleep(10);
BEGIN dbms_lock.sleep(10); END;
ERROR at line 1:
ORA-01013: user requested cancel of current operation
ORA-06512: at "SYS.DBMS_LOCK", line 215
ORA-06512: at line 1
Elapsed: 00:00:06.02

Here is an example where Out-of-Band is enabled (the default DISABLE_OOB=OFF in sqlnet.ora):

10:13:05 SQL> exec dbms_lock.sleep(10);
^CBEGIN dbms_lock.sleep(10); END;
ERROR at line 1:
ORA-01013: user requested cancel of current operation
ORA-06512: at "SYS.DBMS_LOCK", line 215
ORA-06512: at line 1
Elapsed: 00:00:00.43

You see the difference immediately: with Out-Of-Band enabled, the cancel is immediate. Without it, it takes some time for the server to cancel the statement. In both cases, I’ve hit ^C at a time when the client was waiting for the server to answer a call. And that’s the point: when the client sends a user call it waits for the answer. And the server is busy and will not read from the socket until it has something to send.

Then here is how it works: on some platforms (which explains why this ^C is not immediate when your client or server is on Windows) the “break” message is sent through an Out-Of-Band channel of TCP/IP, with the URG flag. Then, on the server, the process is interrupted with a SIGURG signal and can cancel immediately. Without it, this “break/reset” communication is done through the normal socket channel when it is available.
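You can watch this mechanism outside Oracle with a few lines of Python on Linux (a minimal local sketch, not SQL*Net itself): the urgent byte is sent with MSG_OOB, travels with the URG flag, and is read separately from the normal stream.

```python
# Demonstrate TCP out-of-band data: one urgent byte (sent with the URG flag)
# travels alongside the normal stream and is read separately with MSG_OOB.
import select
import socket
import threading

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
port = srv.getsockname()[1]

def client():
    c = socket.create_connection(("127.0.0.1", port))
    c.sendall(b"normal")            # in-band data
    c.send(b"!", socket.MSG_OOB)    # urgent byte, like SQL*Net's break probe
    c.close()

t = threading.Thread(target=client)
t.start()
conn, _ = srv.accept()
t.join()

# A pending urgent byte shows up as an "exceptional condition" in select().
select.select([], [], [conn], 5)
oob = conn.recv(1, socket.MSG_OOB)  # read the urgent byte out of band

data = b""
while len(data) < 6:                # a normal recv never returns the urgent byte
    data += conn.recv(6 - len(data))

print(oob, data)
conn.close()
srv.close()
```

If anything between the two endpoints drops or mishandles the URG flag (a proxy, a hypervisor), the out-of-band read above never fires – which is exactly the failure described below.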

What is new in 19c

When the connection is established, the client checks that out-of-band break messages are supported, by sending a message with the MSG_OOB flag. This can be disabled by the new parameter: DISABLE_OOB_AUTO

But if the network stack does not handle this properly (because of a bug in proxy tunneling or in the hypervisor) the connection hangs for a few minutes and then fails with: ORA-12637: Packet receive failed

Note that I don’t think the problem is new. The new connection-time check just makes it immediately visible. But when OOB is not working properly, a ^C will also hang. This means that setting DISABLE_OOB_AUTO=TRUE is not a solution but just postpones the problem. The solution is DISABLE_OOB, which has been there since previous versions.
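To keep the two parameters apart, this is how they would appear in sqlnet.ora (set one or the other, not both; values as discussed above):

```
# sqlnet.ora
# pre-19c parameter: disables OOB breaks entirely (the real workaround)
DISABLE_OOB=ON
# 19c parameter: only skips the connect-time OOB check; ^C will still hang
DISABLE_OOB_AUTO=TRUE
```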

ORA-12637: Packet receive failed

Here is what comes from strace on my sqlplus client trying to connect to my docker container using the forwarded port when using the default docker-proxy:

This, just after sending the MSG_OOB packet, waits 400 seconds before failing with ORA-12637. More about this 400-second timeout later.

Here is how I reproduced this:

  • I’m on CentOS 7.7 but I’m quite sure that you can do the same in OEL
  • I have Docker docker-ce-19.03.4
  • I installed Tim Hall’s dockerfiles:
git clone
  • I added the Oracle 19.3 install zip into the software subdirectory:
ls dockerfiles/database/ol7_19/software/
  • and apex:
ls dockerfiles/database/ol7_19/software/
  • I build the image:
cd dockerfiles/database/ol7_19
docker build -t ol7_19:latest .
  • I installed 19.3 instant client:
yum install -y oracle-instantclient19.3-basic-–1.x86_64.rpm
yum install -y oracle-instantclient19.3-sqlplus-–1.x86_64.rpm
  • Created some volumes:
mkdir -p /u01/volumes/ol7_19_con_u02
chmod -R 777 /u01/volumes/ol7_19_con_u02
  • and network:
docker network create my_network
  • and create a docker container based on the image, redirecting the 1521 port from the container to the host:
docker run -dit --name ol7_19_con \
-p 1521:1521 -p 5500:5500 \
--shm-size="1G" \
--network=my_network \
-v /u01/volumes/ol7_19_con_u02/:/u02 \
ol7_19:latest
  • Wait until the database is created:
until docker logs ol7_19_con | grep "Tail the alert log file as a background process" ; do sleep 1 ; done
  • Now if I connect from the host, to the localhost 1521 (the redirected one):
ORACLE_HOME=/usr/lib/oracle/19.3/client64 /usr/lib/oracle/19.3/client64/bin/sqlplus -L system/oracle@//localhost:1521/pdb1

I get ORA-12637: Packet receive failed after 400 seconds.

  • If I disable OOB, for example on the server:
docker exec -t ol7_19_con bash -ic 'echo DISABLE_OOB=ON > $ORACLE_HOME/network/admin/sqlnet.ora'

Then the connection is ok.

Note that if I set DISABLE_OOB_AUTO=TRUE (disabling the OOB detection introduced in 19c) the connection is ok, but the client hangs if I later hit ^C.

If I disable the docker-proxy then all is ok without the need to disable OOB (and I verified that ^C immediately cancels a query):

systemctl stop docker
echo ' { "userland-proxy": false } ' > /etc/docker/daemon.json
systemctl start docker
docker start ol7_19_con
docker exec -t ol7_19_con bash -ic 'rm $ORACLE_HOME/network/admin/sqlnet.ora'

The connection is ok and ^C cancels immediately.

Now about the 400 seconds, this is the connect timeout (INBOUND_CONNECT_TIMEOUT_LISTENER) set in this image:
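The screenshot of that setting is not reproduced here; in the container’s listener configuration it would look something like this (an assumed fragment – the parameter name and the 400-second value come from the text above):

```
# listener.ora
INBOUND_CONNECT_TIMEOUT_LISTENER = 400
```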



I discussed this with the Oracle Database Client product manager and SQL*Net developers. They cannot do much about this except document the error, as the problem is in the network layer. I did not open an issue for Docker because I did not easily reproduce a simple send/recv test case.

What I’ve observed is that the error in docker-proxy came between docker version 18.09.8 and 18.09.9 and if someone wants to do a small test case to open a docker issue, here is what I traced in the Oracle InstantClient.

  • I trace the client + server + docker-proxy processes:
pid=$(ps -edf | grep docker-proxy | grep 1521 | awk '{print $2}')
strace -fyytttTo docker.strace -p $pid &
pid=$(ps -edf | grep tnslsnr | grep LISTENER | awk '{print $2}')
strace -fyytttTo oracle.strace -p $pid &
ORACLE_HOME=/usr/lib/oracle/19.3/client64 strace -fyytttTo sqlplus.strace /usr/lib/oracle/19.3/client64/bin/sqlplus -L demo/demo@//localhost/pdb1 

Here is the OOB checking from the client (sqlplus.strace) and sending other data:

13951 1573466279.299345 read(4127.0.0.1:1521]>, "\0-\0\0\2\0\0\0\1>\fA\0\0\0\0\1\0\0\0\0-AA\0\0\0\0\0\0\0\0"..., 8208) = 45 <0.001236>
13951 1573466279.300683 sendto(4127.0.0.1:1521]>, "!", 1, MSG_OOB, NULL, 0) = 1 <0.000047>
13951 1573466279.300793 write(4127.0.0.1:1521]>, "\0\0\0\n\f \0\0\2\0", 10) = 10 <0.000033>
13951 1573466279.301116 write(4127.0.0.1:1521]>, "\0\0\0\237\6 \0\0\0\0\336\255\276\357\0\225\0\0\0\0\0\4\0\0\4\0\3\0\0\0\0\0"..., 159) = 159 <0.000047>
13951 1573466279.301259 read(4127.0.0.1:1521]>, "", 8208) = 0 <60.029279>

Everything was ok (last read of 45 bytes) until a MSG_OOB is sent. Then the client continues to send but nothing is received for 60 seconds.

On the server side, I see the 45 bytes sent and then poll() waiting for 60 seconds with nothing received:

13954 1573466279.300081 write(16, "\0-\0\0\2\0\0\0\1>\fA\0\0\0\0\1\0\0\0\0-AA\0\0\0\0\0\0\0\0"..., 45 
13954 1573466279.300325 <... write resumed> ) = 45 <0.000092>
13954 1573466279.300424 setsockopt(16, SOL_SOCKET, SO_KEEPALIVE, [1], 4 
13954 1573466279.301296 <... setsockopt resumed> ) = 0 <0.000768>
13954 1573466279.301555 poll([{fd=16, events=POLLIN|POLLPRI|POLLRDNORM}], 1, -1 
13954 1573466339.297913 <... poll resumed> ) = ? ERESTART_RESTARTBLOCK (Interrupted by signal) <59.996248>

I’ve traced SQL*Net (later, so do not try to match the times):

2019-11-12 06:56:54.852 : nsaccept:Checking OOB Support
2019-11-12 06:56:54.853 : sntpoltsts:fd 16 need 43 readiness event, wait time -1
*** 2019-11-12T06:57:54.784605+00:00 (CDB$ROOT(1))
2019-11-12 06:57:54.784 : nserror:entry
2019-11-12 06:57:54.785 : nserror:nsres: id=0, op=73, ns=12535, ns2=12606; nt[0]=0, nt[1]=0, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
2019-11-12 06:57:54.785 : nttdisc:entry
2019-11-12 06:57:54.786 : nttdisc:Closed socket 16
2019-11-12 06:57:54.787 : nttdisc:exit
2019-11-12 06:57:54.787 : sntpoltsts:POLL failed with 4
2019-11-12 06:57:54.788 : sntpoltsts:exit
2019-11-12 06:57:54.788 : ntctst:size of NTTEST list is 1 - not calling poll
2019-11-12 06:57:54.788 : sntpoltst:No of conn to test 1, wait time -1
2019-11-12 06:57:54.789 : sntpoltst:fd 16 need 6 readiness events
2019-11-12 06:57:54.789 : sntpoltst:fd 16 has 2 readiness events
2019-11-12 06:57:54.790 : sntpoltst:exit
2019-11-12 06:57:54.790 : nttctl:entry
2019-11-12 06:57:54.790 : ntt2err:entry
2019-11-12 06:57:54.791 : ntt2err:soc -1 error - operation=5, ntresnt[0]=530, ntresnt[1]=9, ntresnt[2]=0
2019-11-12 06:57:54.791 : ntt2err:exit
2019-11-12 06:57:54.791 : nsaccept:OOB is getting dropped
2019-11-12 06:57:54.792 : nsprecv:entry
2019-11-12 06:57:54.792 : nsprecv:reading from transport...
2019-11-12 06:57:54.792 : nttrd:entry
2019-11-12 06:57:54.793 : ntt2err:entry
2019-11-12 06:57:54.793 : ntt2err:soc -1 error - operation=5, ntresnt[0]=530, ntresnt[1]=9, ntresnt[2]=0
2019-11-12 06:57:54.793 : ntt2err:exit
2019-11-12 06:57:54.794 : nttrd:exit
2019-11-12 06:57:54.794 : nsprecv:error exit
2019-11-12 06:57:54.794 : nserror:entry
2019-11-12 06:57:54.795 : nserror:nsres: id=0, op=68, ns=12535, ns2=12606; nt[0]=0, nt[1]=0, nt[2]=0; ora[0]=0, ora[1]=0, ora[2]=0
2019-11-12 06:57:54.795 : nsaccept:error exit
2019-11-12 06:57:54.795 : nioqper: error from niotns: nsaccept failed...
2019-11-12 06:57:54.796 : nioqper: ns main err code: 12535
2019-11-12 06:57:54.796 : nioqper: ns (2) err code: 12606
2019-11-12 06:57:54.796 : nioqper: nt main err code: 0
2019-11-12 06:57:54.797 : nioqper: nt (2) err code: 0
2019-11-12 06:57:54.797 : nioqper: nt OS err code: 0
2019-11-12 06:57:54.797 : niotns:No broken-connection function available.
2019-11-12 06:57:54.798 : niomapnserror:entry
2019-11-12 06:57:54.798 : niqme:entry
2019-11-12 06:57:54.798 : niqme:reporting NS-12535 error as ORA-12535
2019-11-12 06:57:54.799 : niqme:exit
2019-11-12 06:57:54.799 : niomapnserror:exit
2019-11-12 06:57:54.799 : niotns:Couldn't connect, returning 12170

The server process receives nothing for one minute (neither the OOB nor the normal data).

And here is the docker-proxy trace, copying the messages with splice() between the two sockets:

12972 1573466279.300434 splice(5172.18.0.2:1521]>, NULL, 8, NULL, 4194304, SPLICE_F_NONBLOCK 
12972 1573466279.300482 <... splice resumed> ) = 45 <0.000028>
12972 1573466279.300514 splice(7, NULL, 3::ffff:]>, NULL, 45, SPLICE_F_NONBLOCK
12972 1573466279.300581 <... splice resumed> ) = 45 <0.000051>
12972 1573466279.300617 splice(5172.18.0.2:1521]>, NULL, 8, NULL, 4194304,
12972 1573466279.300666 <... splice resumed> ) = -1 EAGAIN (Resource temporarily unavailable) <0.000032>
12972 1573466279.300691 epoll_pwait(6, [], 128, 0, NULL, 2) = 0 <0.000016>
12972 1573466279.300761 epoll_pwait(6,
12972 1573466279.300800 <... epoll_pwait resumed> [{EPOLLOUT, {u32=2729139864, u64=140263476125336}}], 128, -1, NULL, 2) = 1 <0.000028>
12972 1573466279.300836 epoll_pwait(6, [{EPOLLIN|EPOLLOUT, {u32=2729139864, u64=140263476125336}}], 128, -1, NULL, 2) = 1 <0.000017>
12972 1573466279.300903 futex(0x55dc1af342b0, FUTEX_WAKE_PRIVATE, 1) = 1 <0.000021>
12972 1573466279.300963 splice(3::ffff:]>, NULL, 10, NULL, 4194304, SPLICE_F_NONBLOCK
12972 1573466279.301007 <... splice resumed> ) = -1 EAGAIN (Resource temporarily unavailable) <0.000026>
12972 1573466279.301058 epoll_pwait(6,
12972 1573466279.301096 <... epoll_pwait resumed> [], 128, 0, NULL, 2) = 0 <0.000025>
12972 1573466279.301128 epoll_pwait(6,
12972 1573466279.301206 <... epoll_pwait resumed> [{EPOLLIN|EPOLLOUT, {u32=2729139864, u64=140263476125336}}], 128, -1, NULL, 2) = 1 <0.000066>
12972 1573466279.301254 splice(3::ffff:]>, NULL, 10, NULL, 4194304, SPLICE_F_NONBLOCK
12972 1573466279.301306 <... splice resumed> ) = -1 EAGAIN (Resource temporarily unavailable) <0.000031>
12972 1573466279.301346 epoll_pwait(6, [], 128, 0, NULL, 2) = 0 <0.000016>
12972 1573466279.301406 epoll_pwait(6,
12972 1573466339.329699 <... epoll_pwait resumed> [{EPOLLIN|EPOLLOUT|EPOLLERR|EPOLLHUP|EPOLLRDHUP, {u32=2729139656, u64=140263476125128}}], 128, -1, NULL, 2) = 1 <60.028278>

I didn’t get further. The difference between Docker version 18.09.8 and 18.09.9 is also a difference of Go lang from 1.10.8 to 1.11.13 so there are many layers to troubleshoot and I wasted too much time on this.

The solution

Maybe the best solution is not to try to run the Oracle Database in a Docker container. Docker makes things simple only for simple software. With software as complex as the Oracle Database, it brings more problems.

For Docker, the best solution is to set the following in /etc/docker/daemon.json and restart docker:

"userland-proxy": false

This uses iptables to handle the port redirection rather than the docker-proxy, which copies the messages with splice() – a CPU overhead that adds latency anyway.

For VirtualBox, there is a “Network: scrub inbound TCP URG pointer, working around incorrect OOB handling” bug fixed in 6.0.12:

If you have no solution at network level, then you will need to disable Out-Of-Band breaks at SQL*Net level by adding DISABLE_OOB=ON in sqlnet.ora on the client or the database (not the listener – the database one, and this does not need any restart). Then hitting ^C will be handled by an in-band break, where the server has to stop at regular intervals and check for a break request.

How to (Not) Present – The Evil Threes

<< I Wish All New Presenters Knew This (and it will help you)
. . . . . . . . . Presenting Well – Tell Your Story >>

I’m going to let you into a secret. One of the most commonly taught “sure-fire-wins” to presenting is, in my opinion, a way to almost guarantee that your presentation is boring and dull. Whenever I am in a presentation and I realise they are going to do the “Rule of Three”, a little piece of me dies – and I check to see if I can get to an exit without too much notice. If I can do so I’m probably going to leave. Otherwise, I’ll be considerate and sit quietly. But I’m already thinking I might just watch cat videos on my phone.

The Rule of Three is a presenting structure that is useful if you hate presenting and you feel you are poor at it, but an inescapable part of your role is to present information to groups of people, be they internally to your team or to small groups. The principle is this:

  • People will only remember 3 things from your presentation.
  • There are three parts to your presentation – the start, the body, the end.
  • Use lists of three. I have examples below but basically do something like “be more engaging, more dynamic, more able to get the message over”. 3 parts.
  • 3 squared – use the above to create a killer presentation!
    • Tell the audience in the intro the three things you are going to tell them (briefly)
    • In the body explain each one of the three points in turn, in detail (using lists of three)
    • at the end, sum up the three points briefly.
    • Finish. To indifferent applause.

The problem with the Rule of Three is that it is a formula, a structure, to help the presenter cope. Which, if presenting is not your thing, is OK. But it is not a method for engaging the audience or for making a talk interesting. It is in fact a straitjacket on a talk. As soon as it starts you know that you are going to be told three things. You will be told them again – but actually you won’t, as the presenter nearly always has 2, 4, 5, or 12 things to tell you and they will “make it fit”. And at the end, you will have to listen to a summary of what you heard twice already – but again, it will be squeezed into the 3-point rule.

I guess part of the reason I dislike this technique so much is that back when I started presenting, it was ubiquitous. I’d say half the talks I saw were Rule of Three style and they were the bulk of the poor ones. Back then we did not have Smart Phones. Many of us did not even have Dumb Phones (you know, ones that pretty much only made calls and sent texts, but worked for a week between charges). I played a lot of “snake” during those bad talks. Another thing we had back then was more in the way of training courses. And maybe that was the source of the popularity of this style…

After a year or two of my “presenting career” I went on an “advanced presentation skills” course. I checked beforehand that it was not a course for those who had never presented, or who had to present but it made them want to die, but that the course was aimed at taking you from being competent to being a skilled presenter. They said yes, it was, it was for people who already presented but wanted to be more engaging, more dynamic, more able to get the message over. My next question was “so no Rule of Three then?” They said no, no Rule of Three.

The course was all around the Rule of Three.

Now don’t get me wrong, if your aim is to describe something fairly simple and all you want to do is get that information from your brain into the brains of the people listening, with the minimum of pain to you, then the Rule of Three will work. It is fairly simple and it is efficient. But you had better have a topic that has 3 parts to it, and be using this method because you are only presenting under duress and it is a way to cope.

If you want to Present, then the Rule of Three sucks. It really sucks. It sucks the enjoyment out of the talk, it sucks the energy out the room, and it sucks the oxygen out of the atmosphere.

The one part of the Rule of Three that I do have a lot of time for is having three parts or examples to a phrase or description. “Be strong, be bold, be brave!” Listing three options such as “If you want to wake up a little then try some light exercise. Go for a walk, get on the bike for 15 minutes, or even jog a mile or two”. This is a pattern the ancient Greeks used a lot, as you will find out (ad nauseam – which is Latin, not Greek) if you google “The Rule of Three”. Two does not seem enough and 4 or 5 seem a little over the top. But don’t use it all the time as otherwise it can make what you say (or write) too formulaic, too structured, too obvious… a bit crap.

Anyway, having got to the course and discovered that it was all on the Rule Of Three, to say I was annoyed would be a serious understatement. The course was not at all on how you make your presentations more engaging or how to identify things to avoid. (And I will do a post or two on those topics next).

However I did manage to have some fun. On all such presentation skills courses you do at least one, if not several, practice presentations to the other delegates.

I did one that went down very well. It was on why I so, so, so dislike presenting by the Rule of Three.

Ansible Tips’n’tricks: rebooting Vagrant boxes after a kernel upgrade

Occasionally I have to reboot my Vagrant boxes after kernel updates have been installed as part of an Ansible playbook during the “vagrant up” command execution.

I create my own Vagrant base boxes because that’s more convenient for me than pulling them from Vagrant’s cloud. However they, too, need TLC and updates. So long story short, I run a yum upgrade after spinning up Vagrant boxes in Ansible to have access to the latest and greatest (and hopefully most secure) software.

To stay in line with Vagrant’s philosophy, Vagrant VMs are lab and playground environments I create quickly. And I can dispose of them equally quickly, because all that I’m doing is controlled via code. This isn’t something you’d do with Enterprise installations!

Vagrant and Ansible for lab VMs!

Now how do you reboot a Vagrant controlled VM in Ansible? Here is how I’m doing this for VirtualBox 6.0.14 and Vagrant 2.2.6. Ubuntu 18.04.3 comes with Ansible 2.5.1.

Finding out if a kernel upgrade is needed

My custom Vagrant boxes are all based on Oracle Linux 7 and use UEK as the kernel of choice. That is important because it determines how I can find out if yum upgraded the kernel (eg UEK) as part of a “yum upgrade”.

There are many ways to do so; I have been using the following code snippet with some success:

- name: check if we need to reboot after a kernel upgrade
  shell: if [ $(/usr/bin/rpm -q kernel-uek|/usr/bin/tail -n 1) != kernel-uek-$(uname -r) ]; then /usr/bin/echo 'reboot'; else /usr/bin/echo 'no'; fi
  register: must_reboot

So in other words I compare the last line from rpm -q kernel-uek to the name of the running kernel. If they match – all good. If they don’t, it seems there is a newer kernel-uek* RPM on disk than that of the running kernel. If the variable “must_reboot” contains “reboot”, I guess I have to reboot.
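To make the comparison concrete, here is the same check in a few lines of Python with hypothetical version strings (the real values come from rpm -q kernel-uek and uname -r):

```python
# Mimic the shell snippet: compare the newest installed kernel-uek RPM
# with the kernel that is currently running.
installed = "kernel-uek-4.14.35-1902.6.6.el7uek.x86_64"     # last line of: rpm -q kernel-uek
running = "kernel-uek-" + "4.14.35-1902.3.2.el7uek.x86_64"  # "kernel-uek-" + $(uname -r)

verdict = "reboot" if installed != running else "no"
print(verdict)  # reboot – a newer kernel RPM is installed than is running
```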


Ansible introduced a reboot module recently, however my Ubuntu 18.04 system’s Ansible version is too old for that and I wanted to stay with the distribution’s package. I needed an alternative.

There are lots of code snippets out there to reboot systems in Ansible, but none of them worked for me. So I decided to write the process up in this post :)

The following block worked for my very specific setup:

  - name: reboot if needed
    block:
    - name: reboot
      shell: sleep 5 && systemctl reboot
      async: 300
      poll: 0
      ignore_errors: true

    - name: wait for system to come back online
      wait_for_connection:
        delay: 60
        timeout: 300
    when: '"reboot" in must_reboot.stdout'

This works nicely with the systems I’m using.

Except there’s a catch lurking in the code: when installing Oracle the software is made available via Virtualbox’s shared folders as defined in the Vagrantfile. When rebooting a Vagrant box outside the Vagrant interface (eg not using the vagrant reload command), shared folders aren’t mounted automatically. In other words, my playbook will fail trying to unzip binaries because it can’t find them. Which isn’t what I want. To circumvent this situation I add the following instruction into the block you just saw:

    - name: re-mount the shared folder after a reboot
      mount:
        path: /mnt
        src: mnt
        fstype: vboxsf
        state: mounted

This re-mounts my shared folder, and I’m good to go!


Before installing Oracle software in Vagrant for lab and playground use I always want to make sure I have all the latest and greatest patches installed as part of bringing a Vagrant box online for the first time.

Using Ansible I can automate the entire process from start to finish, even including kernel updates in the process. These are applied before I install the Oracle software!

Upgrading the kernel (or any other software components for that matter) post Oracle installation is more involved, and I usually don’t need to do this during the lifetime of the Vagrant (playground/lab) VM. Which is why Vagrant is beautiful, especially when used together with Ansible.