Oracle Trace File Analyzer (TFA) – Installation/upgrade as root without SSH, and synchronization between nodes

Working on a three-node Oracle 12.2.0.1 Grid Infrastructure cluster running Red Hat Enterprise Linux Server release 7.4, I need to upgrade TFA to the latest version, which is 18.2.1 as of this writing. I don't want to configure passwordless SSH user equivalency for root, so I need to use sudo. The official documentation states:

“If you do not want to use ssh, you can install on each host using a local install. Then use tfactl syncnodes to generate and deploy the relevant SSL certificates.”

Here are the steps I follow to upgrade TFA:

  • Download the latest version from Doc ID 1513912.1
  • Extract the zip file into an appropriate directory on the first node
  • Launch installTFA-LINUX using sudo:
# sudo ./installTFA-LINUX
TFA Installation Log will be written to File : /some/directory/tfa_install_45563_2018_07_02-15_46_37.log

Starting TFA installation

TFA Version: 182100 Build Date: 201805291110

TFA HOME : /grid/infrastructure/home/tfa/node01/tfa_home

Installed Build Version: 181100 Build Date: 201803280250

TFA is already installed. Patching /grid/infrastructure/home/tfa/node01/tfa_home...
TFA patching typical install from zipfile is written to /grid/infrastructure/home/tfa/node01/tfapatch.log

TFA will be Patched on:
node01
node02
node03

Do you want to continue with patching TFA? [Y|N] [Y]: Y


Checking for ssh equivalency in node02
Node node02 is not configured for ssh user equivalency

Checking for ssh equivalency in node03
Node node03 is not configured for ssh user equivalency

SSH is not configured on these nodes :
node02
node03

Do you want to configure SSH on these nodes ? [Y|N] [Y]: N

Patching remote nodes using TFA Installer /some/directory/TFA/installTFA-LINUX...


Copying TFA Installer to node02...
root@node02's password:
root@node02's password:
root@node02's password:
lost connection

Starting TFA Installer on node02...
root@node02's password:
root@node02's password:
root@node02's password:

Copying TFA Installer to node03...
root@node03's password:
root@node03's password:
root@node03's password:
lost connection

Starting TFA Installer on node03...
root@node03's password:
root@node03's password:
root@node03's password:

Applying Patch on node01:

Stopping TFA Support Tools...

Shutting down TFA for Patching...

Shutting down TFA
Removed symlink /etc/systemd/system/multi-user.target.wants/oracle-tfa.service.
Removed symlink /etc/systemd/system/graphical.target.wants/oracle-tfa.service.
. . . . .
. . .
Successfully shutdown TFA..

No Berkeley DB upgrade required

Copying TFA Certificates...
Moving Properties.bkp to Properties

Running commands to fix init.tfa and tfactl in localhost


Starting TFA in node01...

Starting TFA..
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Waiting up to 100 seconds for TFA to be started..
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands

Enabling Access for Non-root Users on node01...

.----------------------------------------------------------------.
| Host     | TFA Version | TFA Build ID         | Upgrade Status |
+----------+-------------+----------------------+----------------+
| node01   | 18.2.1.0.0  | 18210020180529111033 | UPGRADED       |
| node02   | 18.1.1.0.0  | 18110020180328025002 | NOT UPGRADED   |
| node03   | 18.1.1.0.0  | 18110020180328025002 | NOT UPGRADED   |
'----------+-------------+----------------------+----------------'

TFA is now up to date:

# tfactl version
TFA Version : 182100
TFA Build ID : 20180529111033

But only on the first node:

# sudo /grid/infrastructure/home/bin/tfactl print config
.------------------------------------------------------------------------------------.
| node01                                                                             |
+-----------------------------------------------------------------------+------------+
| Configuration Parameter                                               | Value      |
+-----------------------------------------------------------------------+------------+
| TFA Version                                                           | 18.2.1.0.0 |
[...]

.------------------------------------------------------------------------------------.
| node02                                                                             |
+-----------------------------------------------------------------------+------------+
| Configuration Parameter                                               | Value      |
+-----------------------------------------------------------------------+------------+
| TFA Version                                                           | 18.1.1.0.0 |
[...]

.------------------------------------------------------------------------------------.
| node03                                                                             |
+-----------------------------------------------------------------------+------------+
| Configuration Parameter                                               | Value      |
+-----------------------------------------------------------------------+------------+
| TFA Version                                                           | 18.1.1.0.0 |
[...]

These steps must be performed locally on every node; a sketch of one way to handle the remaining nodes follows.
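
Since root has no SSH user equivalency, the install cannot be pushed from node01; the same local install is simply repeated on node02 and node03. Here is a minimal sketch, assuming the zip is staged with a regular account named myuser and extracted under /some/directory (the zip file name, user name and paths are placeholders from my environment):

# From node01, stage the installer zip on the remaining nodes with a regular user account
# (TFA-LINUX.zip, myuser and /some/directory are placeholders)
scp TFA-LINUX.zip myuser@node02:/some/directory/
scp TFA-LINUX.zip myuser@node03:/some/directory/

# Then, on node02 and on node03:
cd /some/directory
unzip TFA-LINUX.zip
cd TFA                        # wherever installTFA-LINUX was extracted
sudo ./installTFA-LINUX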

After completing this process on all nodes, let's synchronize them. Since I decided not to use SSH, I need to perform these final steps:

  • Launch tfactl syncnodes using sudo:
# sudo /grid/infrastructure/home/bin/tfactl syncnodes

Login using root is disabled in sshd config. Please enable it or

Please copy these files manually to remote node and restart TFA
1. /grid/infrastructure/home/tfa/node01/tfa_home/server.jks
2. /grid/infrastructure/home/tfa/node01/tfa_home/client.jks
3. /grid/infrastructure/home/tfa/node01/tfa_home/internal/ssl.properties

These files must be owned by root and should have 600 permissions.
  • Copy the three files listed above to the remote nodes, with the right owner and permissions (a sketch of one way to do this follows the stop/start commands below).
  • Stop and start TFA on each node:
sudo /grid/infrastructure/home/bin/tfactl stop
sudo /grid/infrastructure/home/bin/tfactl start
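
Here is a minimal sketch of the manual copy, again going through a regular account because root SSH login is disabled (myuser and the /tmp staging directory are assumptions; the tfa_home directory is named after the local node, as seen in the install log):

# On node01, stage the three files listed by tfactl syncnodes on each remote node
SRC=/grid/infrastructure/home/tfa/node01/tfa_home
scp $SRC/server.jks $SRC/client.jks $SRC/internal/ssl.properties myuser@node02:/tmp/
scp $SRC/server.jks $SRC/client.jks $SRC/internal/ssl.properties myuser@node03:/tmp/

# Then, on node02 and on node03 (replace node02 with the local node name)
DST=/grid/infrastructure/home/tfa/node02/tfa_home
sudo mv /tmp/server.jks /tmp/client.jks $DST/
sudo mv /tmp/ssl.properties $DST/internal/
sudo chown root:root $DST/server.jks $DST/client.jks $DST/internal/ssl.properties
sudo chmod 600 $DST/server.jks $DST/client.jks $DST/internal/ssl.properties

# Finally, restart TFA on every node with the stop/start commands shown above
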
  • Launch tfactl status to check node synchronization:
# sudo /grid/infrastructure/home/bin/tfactl status

.------------------------------------------------------------------------------------------------.
| Host     | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+----------+---------------+-------+------+------------+----------------------+------------------+
| node01   | RUNNING       | 11111 | 1234 | 18.2.1.0.0 | 18210020180529111033 | COMPLETE         |
| node02   | RUNNING       | 22222 | 1234 | 18.2.1.0.0 | 18210020180529111033 | COMPLETE         |
| node03   | RUNNING       | 33333 | 1234 | 18.2.1.0.0 | 18210020180529111033 | COMPLETE         |
'----------+---------------+-------+------+------------+----------------------+------------------'
  • And finally, use TFA, for example with tfactl diagcollect:
# tfactl diagcollect

By default TFA will collect diagnostics for the last 12 hours. This can result in large collections
For more targeted collections enter the time of the incident, otherwise hit <RETURN> to collect for the last 12 hours
[YYYY-MM-DD HH24:MI:SS,<RETURN>=Collect for last 12 hours] :

Collecting data for the last 12 hours for all components...
Collecting data for all nodes

Collection Id : 20180703123627node01

Detailed Logging at : /some/tfa/repository/collection_Tue_Jul_03_12_36_27_CEST_2018_node_all/diagcollect_20180703123627_node01.log
2018/07/03 12:36:32 CEST : NOTE : Any file or directory name containing the string .com will be renamed to replace .com with dotcom
2018/07/03 12:36:32 CEST : Collection Name : tfa_Tue_Jul_03_12_36_27_CEST_2018.zip
2018/07/03 12:36:32 CEST : Collecting diagnostics from hosts : [node03, node02, node01]
2018/07/03 12:36:32 CEST : Scanning of files for Collection in progress...
2018/07/03 12:36:32 CEST : Collecting additional diagnostic information...
2018/07/03 12:37:27 CEST : Getting list of files satisfying time range [07/03/2018 00:36:32 CEST, 07/03/2018 12:37:27 CEST]
2018/07/03 12:39:34 CEST : Collecting ADR incident files...
2018/07/03 12:40:26 CEST : Completed collection of additional diagnostic information...
2018/07/03 12:40:41 CEST : Completed Local Collection
2018/07/03 12:40:41 CEST : Remote Collection in Progress...
.-------------------------------------.
|          Collection Summary         |
+----------+-----------+-------+------+
| Host     | Status    | Size  | Time |
+----------+-----------+-------+------+
| node02   | Completed | 172MB | 246s |
| node03   | Completed | 187MB | 247s |
| node01   | Completed | 144MB | 249s |
'----------+-----------+-------+------'

Logs are being collected to: /some/tfa/repository/collection_Tue_Jul_03_12_36_27_CEST_2018_node_all
/some/tfa/repository/collection_Tue_Jul_03_12_36_27_CEST_2018_node_all/node01.tfa_Tue_Jul_03_12_36_27_CEST_2018.zip
/some/tfa/repository/collection_Tue_Jul_03_12_36_27_CEST_2018_node_all/node02.tfa_Tue_Jul_03_12_36_27_CEST_2018.zip
/some/tfa/repository/collection_Tue_Jul_03_12_36_27_CEST_2018_node_all/node03.tfa_Tue_Jul_03_12_36_27_CEST_2018.zip
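
The default 12-hour window can produce large collections; diagcollect also accepts a narrower time range. A hedged example, using the -since option as I remember it for this TFA release (double-check with tfactl diagcollect -h on your installation):

# Collect only the last hour of diagnostics from all nodes
# (the -since syntax is the one I recall for TFA 18.x; verify with "tfactl diagcollect -h")
sudo /grid/infrastructure/home/bin/tfactl diagcollect -since 1h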
