Following up on this blog post, here is a quick overview of what happens on the Exadata Cloud at Customer nodes when Grid Infrastructure patching is launched from the web interface, the Oracle Cloud control plane.
The whole patching process is not that different from the precheck process. Most of the steps are similar, and of course, the main difference is that the OPatch command actually … patches the Grid Infrastructure 😉
Let’s launch the patching process with a simple click on the Oracle Cloud control plane (sorry for the Fren-Glish screenshots …):
I am now exploring the brand new Grid Infrastructure patching method for Exadata Cloud at Customer. With the latest Exadata Cloud at Customer software versions, it is now possible to patch Grid Infrastructure with a few clicks in the GUI. Let’s see exactly which steps are performed.
First, with my preferred method (CLI 🙂 ), I run opatch against the Grid Infrastructure home to check the current patch level:
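For reference, the check itself is a one-liner (a minimal sketch; the Grid Infrastructure home path below is an assumption and will differ in your environment):

[grid@node1 ~]$ /u01/app/18.0.0.0/grid/OPatch/opatch lspatches

This lists the release updates and one-off patches currently applied to the Grid Infrastructure home.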
I recently encountered a problem for which I have no explanation yet. But at least, I have a workaround. The goal of this blog post is to keep track of the exploration that led to this workaround, and then to switch back to a normal situation when possible.
For some reason, 120 development databases were configured to use the shared server architecture. The day after this configuration change, a lot of users started complaining about a fully-automated-0-problem-encountered-in-the-last-2-years procedure that duplicates a production database to a development database … Indeed, this procedure begins by stopping the target database, and that day, it failed almost every time during this step … Why?
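As an illustration (this is not the actual troubleshooting session, just a quick sanity check), you can see whether an instance is running shared server by looking at the relevant parameters and at the dispatchers:

SQL> show parameter shared_servers
SQL> show parameter dispatchers
SQL> select count(*) from v$dispatcher;

A non-zero shared_servers value and active dispatchers mean that incoming connections may be handled by shared server processes rather than dedicated ones.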
There are several ways to dig for precious information in listener logs, for example, this method described by Arup Nanda or this one by Liron Amitzi.
I currently work in an environment with 40+ servers and 550+ databases managed by Grid Infrastructure. I recently wanted to help a colleague who was experiencing problems with a newly installed application. Her application had to connect to a database in another VLAN. Our first intuition was to check whether the application could, at least, reach the database. Since the database resides on a Grid Infrastructure cluster, it would have been tedious to check all the (SCAN) listener logs spread across all the servers. This is where Splunk has proven useful.
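For illustration, once every listener.log is forwarded to Splunk, the question “did the application ever reach the database?” becomes a single search. The index and sourcetype names below are assumptions from my setup, and 192.0.2.15 is a placeholder for the application server IP:

index=oracle sourcetype=oracle_listener "192.0.2.15"

Every connection attempt logged by any (SCAN) listener in the cluster then shows up in one place, whichever node handled it.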
Conferences are great, not only for the technical content but also for the people. Recently, during DOAG, I had very interesting conversations (yes, several conversations 🙂 ) with Martin Berger about how to control who is connecting to which database in a complex environment. Among other topics, we mentioned that, starting with Oracle 12.2, it is possible to set Access Control Lists that allow connections to a database service (in a non-CDB or a PDB) only from specific IP addresses.
This new feature, Database Service Firewall, was introduced with Oracle 12.2. It should not be confused with Database Firewall, a dedicated system that monitors traffic to and from databases and is part of the Oracle Audit Vault and Database Firewall product.
As I had never used Database Service Firewall, I decided to give it a try in a Multitenant environment with RAC.
My lab is a 2-node RAC cluster with Grid Infrastructure 18, an 18.3 RAC container database called metal, and one pluggable database called opeth.
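To give an idea of what the setup looks like, here is a rough sketch of the two configuration steps as I understand them from the documentation (the client IP is a placeholder, the exact syntax is worth double-checking for your version, and on RAC the listener change applies to each node):

# listener.ora: enable the firewall on the listening endpoint
LISTENER = (ADDRESS=(PROTOCOL=TCP)(HOST=node1)(PORT=1521)(FIREWALL=ON))

-- as SYS in CDB$ROOT: allow a single client IP to reach the services of PDB opeth
SQL> exec DBMS_SFW_ACL_ADMIN.IP_ADD_PDB_ACE('opeth','192.0.2.15');
SQL> exec DBMS_SFW_ACL_ADMIN.COMMIT_ACL;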
A critical application has recently been showing a creepy behaviour in production, so its developers want to understand what is going on in the database and troubleshoot it effectively. Let’s give them access to all the databases related to this application through Oracle Enterprise Manager 12c.
The following procedure is mainly relevant for this version, as Oracle Enterprise Manager 13c comes with a new set of privileges that should be more appropriate.
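For example, with EM CLI, granting view access on one database target to a developer account looks roughly like this (the user and target names are made up, and the exact privilege string deserves a check against the EM 12c documentation):

$ emcli grant_privs -name="jdoe" \
    -privilege="VIEW_TARGET;TARGET_NAME=appdb:TARGET_TYPE=oracle_database"

The same can of course be done from the console under Setup > Security > Administrators.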
By chance, right after my “ODC Appreciation Day” post, I’ve been asked to convert a database from character set WE8ISO8859P1 to AL32UTF8 with DMU (the Database Migration Assistant for Unicode). Apart from a few well-known issues described in MOS note 2018250.1, I got a “Need conversion” row on table WRI$_SQLSET_DEFINITIONS in the data dictionary.
Section D.11 of MOS note 2018250.1 states that you can remove “Invalid Binary Representation” issues in the AWR tables (WRI$_%, WRH$_%, WRR$_%) by following MOS note 782974.1 to drop and recreate AWR. I tried this solution as a last resort. Unfortunately, after running catnoawr.sql, most of the WRI$_% tables are still there; only 3 of them are dropped. And of course, WRI$_SQLSET_DEFINITIONS remains intact.
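Checking what is left is straightforward (nothing environment-specific here):

SQL> select table_name from dba_tables
     where owner = 'SYS' and table_name like 'WRI$\_%' escape '\'
     order by table_name;

Running this before and after catnoawr.sql shows that the script only removes a handful of the WRI$_% tables, and WRI$_SQLSET_DEFINITIONS is not one of them.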