Grid Infrastructure and Database Upgrade on Exadata Database Machine

NODE1-exaprddb01 (As grid)
*****************************
ps -ef|grep d.bin
ps -ef|grep lsnr
/u01/app/11.2.0.3/grid/bin/crsctl check crs
/u01/app/11.2.0.3/grid/bin/crsctl stat res -t
/u01/app/11.2.0.3/grid/bin/crsctl query crs activeversion

NODE2-exaprddb02 (As grid)
*****************************
ps -ef|grep d.bin
ps -ef|grep lsnr
/u01/app/11.2.0.3/grid/bin/crsctl check crs
/u01/app/11.2.0.3/grid/bin/crsctl stat res -t
/u01/app/11.2.0.3/grid/bin/crsctl query crs activeversion


Stage the software in /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017 (create this directory on the DR site if it doesn’t exist) and apply FULL permissions; verify write access by touching a test file as both grid and root (a sketch follows).
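A minimal sketch of the preparation (as root; the test-file names below are only illustrative):

mkdir -p /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017
chmod -R 777 /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017
su - grid -c "touch /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/grid_write_test"
touch /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/root_write_test
rm -f /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/*_write_test
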
cd /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017

unzip p21419221_121020_Linux-x86-64_5of10.zip
unzip p21419221_121020_Linux-x86-64_6of10.zip
cd grid

Run the cluvfy pre-upgrade check using the command below:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.3/grid -dest_crshome /u01/app/12.1.0.2/grid -dest_version 12.1.0.2.0 -verbose > /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/cluvfy_21SEP2017.log

>>>>>>If the cluvfy output reports user equivalence failures, use the SSH User Setup Document to configure passwordless SSH for the grid user on both nodes, then re-run cluvfy.
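If the SSH User Setup Document is not to hand, one common alternative is the sshUserSetup.sh script shipped with the grid software (a sketch; the sshsetup path inside the unzipped media is an assumption, host names as per this cluster):

cd /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/grid/sshsetup
./sshUserSetup.sh -user grid -hosts "exaprddb01 exaprddb02" -advanced -noPromptPassphrase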

NODE1
=====
mkdir -p /u02/GI_12C_UPGRADE_BACKUPS/exaprddb01
mkdir -p /u02/GI_12C_UPGRADE_BACKUPS/MD_BACKUPS/node1
mkdir -p /u02/GI_12C_UPGRADE_BACKUPS/OCR_BACKUPS/node1
mkdir -p /u01/app/12.1.0.2/grid
chown -R grid:oinstall /u02/GI_12C_UPGRADE_BACKUPS
chown -R grid:oinstall /u01/app/12.1.0.2/grid

NODE2
=====
mkdir -p /u02/GI_12C_UPGRADE_BACKUPS/exaprddb02
mkdir -p /u02/GI_12C_UPGRADE_BACKUPS/MD_BACKUPS/node2
mkdir -p /u02/GI_12C_UPGRADE_BACKUPS/OCR_BACKUPS/node2
mkdir -p /u01/app/12.1.0.2/grid
chown -R grid:oinstall /u02/GI_12C_UPGRADE_BACKUPS
chown -R grid:oinstall /u01/app/12.1.0.2/grid

Take Backups
*************

NODE1-exaprddb01 (As ROOT)
*****************************

cd /u01/app/11.2.0.3

tar -cvzf /u02/GI_12C_UPGRADE_BACKUPS/exaprddb01/grid.tgz grid/

cat /etc/oraInst.loc

cd /u01/app

tar -cvzf /u02/GI_12C_UPGRADE_BACKUPS/exaprddb01/oraInventory.tgz oraInventory

crsctl query css votedisk

NODE2-exaprddb02 (As ROOT)
*****************************

cd /u01/app/11.2.0.3

tar -cvzf /u02/GI_12C_UPGRADE_BACKUPS/exaprddb02/grid.tgz grid/

cat /etc/oraInst.loc

cd /u01/app

tar -cvzf /u02/GI_12C_UPGRADE_BACKUPS/exaprddb02/oraInventory.tgz oraInventory

crsctl query css votedisk

NODE1 (exaprddb01) AND NODE2 (exaprddb02) (As grid)
****************************************************

Create two directories, OCR_BACKUPS and MD_BACKUPS, in the backup mount.

Create node1 and node2 directories inside OCR_BACKUPS.
Create node1 and node2 directories inside MD_BACKUPS.
(These were already created in the mkdir steps above; verify they exist.)

WORKING ON OCR BACKUPS
***********************
LIST OCR Backups
*****************
ocrconfig -showbackup

ocrconfig -showbackup manual

ocrconfig -showbackup auto

LIST OLR Backups
********************

$GRID_HOME/bin/ocrconfig -local -showbackup
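
If a fresh manual OCR backup is wanted at this point, it can be taken as root from the current 11.2.0.3 home (a sketch):

/u01/app/11.2.0.3/grid/bin/ocrconfig -manualbackup
/u01/app/11.2.0.3/grid/bin/ocrconfig -showbackup manual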


WORKING ON ASM DISKGROUP METADATA BACKUPS
*****************************************

[grid@exaprddb01 MDBACKUP_NODE1]$ asmcmd -p
ASMCMD [+] > lsdg

ASMCMD [+] > md_backup /u02/GI_12C_UPGRADE_BACKUPS/MD_BACKUPS/node1/exaprddb-db_dbfs_dg_21SEP2017 -G DBFS_DG

ASMCMD [+] > md_backup /u02/GI_12C_UPGRADE_BACKUPS/MD_BACKUPS/node1/exaprddb-db_data_21SEP2017 -G DATA

ASMCMD [+] > md_backup /u02/GI_12C_UPGRADE_BACKUPS/MD_BACKUPS/node1/exaprddb-db_reco_21SEP2017 -G RECO


-------------------------------------------------------------------------------------------

[grid@exaprddb02 ~]$ asmcmd -p
ASMCMD [+] > lsdg

ASMCMD [+] > md_backup /u02/GI_12C_UPGRADE_BACKUPS/MD_BACKUPS/node2/exaprddb-db_dbfs_dg_21SEP2017 -G DBFS_DG

ASMCMD [+] > md_backup /u02/GI_12C_UPGRADE_BACKUPS/MD_BACKUPS/node2/exaprddb-db_data_21SEP2017 -G DATA

ASMCMD [+] > md_backup /u02/GI_12C_UPGRADE_BACKUPS/MD_BACKUPS/node2/exaprddb-db_reco_21SEP2017 -G RECO


PERFORM THIS COPY JUST BEFORE STOPPING CRS
*****************************************
cp -r /u01/app/11.2.0.3/grid/cdata /u02/GI_12C_UPGRADE_BACKUPS/OCR_BACKUPS/node1
cp -r /u01/app/11.2.0.3/grid/cdata /u02/GI_12C_UPGRADE_BACKUPS/OCR_BACKUPS/node2

/u01/app/11.2.0.3/grid/bin/ocrconfig -local -export /u02/GI_12C_UPGRADE_BACKUPS/OCR_BACKUPS/node1/BeforeDRGIUpgrade.ocr
/u01/app/11.2.0.3/grid/bin/ocrconfig -local -export /u02/GI_12C_UPGRADE_BACKUPS/OCR_BACKUPS/node2/BeforeDRGIUpgrade.ocr




PRIMARY SITE     (STOP THE REPLICATION; DOWNTIME STARTS for the DR SITE as well as the PRIMARY SITE)
****************

1)    alter system archive log current; (Multiple Times)

2)    Take the count
select thread#, max(sequence#) "Last Primary Seq Generated"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
group by thread# order by 1;

3)    alter system set log_archive_dest_state_2='DEFER' scope=both sid='*';

4) su - oracent
srvctl stop database -d CENTDB -o immediate
su - oraoamtest
srvctl stop database -d OAMTEST -o immediate
su - oraebstest
srvctl stop database -d EBSTEST -o immediate
su - oraoamprd
srvctl stop database -d OAMPRD -o immediate
su - orahypdev
srvctl stop database -d HYPDEV -o immediate
su - oracentest
srvctl stop database -d CENTEST -o immediate
su - oralegprd
srvctl stop database -d LEGACY -o immediate
su - orahypprd
srvctl stop database -d HYPPRD -o immediate
su - oraoamdrprd
srvctl stop database -d OAMDRPRD -o immediate
su - oraebsuat
srvctl stop database -d EBSUAT -o immediate
su - orahyptest
srvctl stop database -d HYPTEST -o immediate
su - oraebsprd
srvctl stop database -d EBSPROD -o immediate
su - oraebsdev
srvctl stop database -d ebsdev -o immediate
su - oracentdbn
srvctl stop database -d centdbn -o immediate

STANDBY SITE
*********************

1)    Take the count
Shipping count
*******************
select thread#, max(sequence#) "Last Standby Seq Received"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
group by thread# order by 1;
Recovery count
*******************
select thread#, max(sequence#) "Last Standby Seq Applied"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
and val.applied in ('YES','IN-MEMORY')
group by thread# order by 1;

2)    alter system set log_archive_dest_state_2='DEFER' scope=both sid='*';
3)    alter database recover managed standby database cancel;
4)    srvctl stop database -d ebsproddr -o immediate


Login as ROOT (NODE1-exaprddb01)

/u01/app/11.2.0.3/grid/bin/crsctl check crs
/u01/app/11.2.0.3/grid/bin/crsctl stop crs
/u01/app/11.2.0.3/grid/bin/crsctl check crs

cd /u01/app/11.2.0.3/grid/
tar -cvzf /u02/GI_12C_UPGRADE_BACKUPS/exaprddb01/cdata.tgz  cdata/

/u01/app/11.2.0.3/grid/bin/crsctl start crs

Login as ROOT (NODE2-exaprddb02)

/u01/app/11.2.0.3/grid/bin/crsctl check crs
/u01/app/11.2.0.3/grid/bin/crsctl stop crs
/u01/app/11.2.0.3/grid/bin/crsctl check crs

cd /u01/app/11.2.0.3/grid/
tar -cvzf /u02/GI_12C_UPGRADE_BACKUPS/exaprddb02/cdata.tgz  cdata/

/u01/app/11.2.0.3/grid/bin/crsctl start crs


START THE INSTALLATION OF SOFTWARE
**************************************
Current Versions:
*****************
Grid Infrastructure : 11.2.0.3.28 (JUL 2015 – 18983927)
OPatch version : 11.2.0.3.12
Exadata : 12.1.2.2.0.150917
O.S : Red Hat Enterprise Linux Server release 6.7 64-bit

cd /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017
cd grid
Run the cluvfy pre-upgrade check again using the commands below:

./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.3/grid -dest_crshome /u01/app/12.1.0.2/grid -dest_version 12.1.0.2.0 -verbose > /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/cluvfy_22062017.log
./runcluvfy.sh stage -pre crsinst -upgrade -rolling -src_crshome /u01/app/11.2.0.3/grid -dest_crshome /u01/app/12.1.0.2/grid -dest_version 12.1.0.2.0 -verbose -fixup

NODE1 (exaprddb01) -- AS ROOT USER

ls -ld /u01/app/12.1.0.2/grid
mkdir -p /u01/app/12.1.0.2/grid
ls -ld /u01/app/12.1.0.2/grid
chown -R grid:oinstall /u01/app/12.1.0.2/grid
chown -R grid:oinstall /u01/app/12.1.0.2/
chmod -R 777 /u01/app/12.1.0.2/

NODE2 (exaprddb02) -- AS ROOT USER

ls -ld /u01/app/12.1.0.2/grid
mkdir -p /u01/app/12.1.0.2/grid
ls -ld /u01/app/12.1.0.2/grid
chown -R grid:oinstall /u01/app/12.1.0.2/grid
chown -R grid:oinstall /u01/app/12.1.0.2/
chmod -R 777 /u01/app/12.1.0.2/

Change the ASM SGA parameter values:

sqlplus / as sysasm
show parameter sga
SQL> alter system set sga_max_size = 2G scope=spfile sid='*';
SQL> alter system set sga_target = 2G scope=spfile sid='*';

NOTE: For the above SGA changes to take effect, restart CRS as root on both the nodes.
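
A sketch of the restart (as root, one node at a time, while 11.2.0.3 is still the active GI home):

/u01/app/11.2.0.3/grid/bin/crsctl stop crs
/u01/app/11.2.0.3/grid/bin/crsctl start crs
/u01/app/11.2.0.3/grid/bin/crsctl check crs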


Install VNC Server with the help of the Infra Team
*******************************************
Open a new VNC server session with a password.

Login as grid (NODE1)
*************************
unset ORACLE_HOME ORACLE_BASE ORACLE_SID
echo $ORACLE_HOME    (should be empty)
echo $ORACLE_BASE    (should be empty)
echo $ORACLE_SID     (should be empty)
cd /backup/soft/linux/12.1.0.2/grid
export SRVM_USE_RACTRANS=true
./runInstaller

Follow the screenshots given in the document

--Choose "Upgrade Oracle Grid Infrastructure or Oracle Automatic Storage Management"
--OS groups: asmadmin, asmdba, asmoper
Run the FixUp scripts as root if the prerequisite checks report fixable failures.

Once prompted to run rootupgrade.sh on both cluster nodes, STOP HERE and do not run it yet (apply the bundle patches first, as described below).

****************************
cd /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017
cp ~/.bash_profile 12c.env

Make the below changes to 12c.env (BOTH NODES) (Conditional)
**********************
export ORACLE_BASE=/u01/app/grid
export ORACLE_HOME=/u01/app/12.1.0.2/grid
export ORACLE_SID=+ASM2     (use +ASM1 on node1, +ASM2 on node2)
export PATH=$ORACLE_HOME/bin:$ORACLE_HOME/OPatch:$PATH

Source 12c.env (Conditional)
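
A sketch of sourcing it and confirming the environment (path as staged above):

. /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/12c.env
echo $ORACLE_HOME       (should now show /u01/app/12.1.0.2/grid)
which opatch            (should resolve under the new grid home)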

opatch version (shows the older OPatch version bundled with the 12.1.0.2 software)
opatch lsinventory (the new home is still empty, with no patches installed)

Upgrade OPatch
****************
NODE1-exaprddb01
*********
1) cd $ORACLE_HOME/
2) mv OPatch OPatch_supplied
3) cp /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/p6880880_121010_Linux-x86-64.zip .
4) unzip p6880880_121010_Linux-x86-64.zip
5) export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
6) opatch version (12.2.0.1.5)
7) opatch lsinventory (Notice empty oracle home with no patches installed)

NODE2-exaprddb02
*********
1) cd $ORACLE_HOME/
2) mv OPatch OPatch_supplied
3) cp /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/p6880880_121010_Linux-x86-64.zip .
4) unzip p6880880_121010_Linux-x86-64.zip
5) export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch
6) opatch version (12.2.0.1.5)
7) opatch lsinventory (Notice empty oracle home with no patches installed)

APPLY BUNDLE PATCHES (25434003) on both the nodes:
**************************************************

Node 1 as grid User:
=======================
[grid@exaprddb01 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/25434003/25363750

[grid@exaprddb01 ~]$ opatch lsinventory
Patch description: "ACFS PSU 12.1.0.2.170418"

[grid@exaprddb01 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/25434003/21436941

[grid@exaprddb01 ~]$ opatch lsinventory
Patch description: "DBWLM PSU 12.1.0.2.5"

[grid@exaprddb01 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/25434003/25171037

[grid@exaprddb01 ~]$ opatch lsinventory
Patch description: "DB PSU 12.1.0.2.170418 (Apr2017)"

[grid@exaprddb01 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/25434003/25363740

[grid@exaprddb01 ~]$ opatch lsinventory
Patch description: "OCW PSU 12.1.0.2.170418"

Node 2 as grid User:
=======================

[grid@exaprddb02 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/25434003/25363750

[grid@exaprddb02 ~]$ opatch lsinventory
Patch description: "ACFS PSU 12.1.0.2.170418"

[grid@exaprddb02 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/25434003/21436941

[grid@exaprddb02 ~]$ opatch lsinventory
Patch description: "DBWLM PSU 12.1.0.2.5"

[grid@exaprddb02 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/25434003/25171037

[grid@exaprddb02 ~]$ opatch lsinventory
Patch description: "DB PSU 12.1.0.2.170418 (Apr2017)"

[grid@exaprddb02 ~]$ /u01/app/12.1.0.2/grid/OPatch/opatch napply -oh /u01/app/12.1.0.2/grid -local /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/25434003/25363740

[grid@exaprddb02 ~]$ opatch lsinventory
Patch description: "OCW PSU 12.1.0.2.170418"


>>>>>Once the patching is completed on both the nodes, continue with the Grid Infrastructure installation by running rootupgrade.sh on both grid nodes (node 1 first, then node 2)<<<<<<<

ROOTUPGRADE Script Execution Outputs
************************************
[root@exaprddb01 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/06/22 17:46:22 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/06/22 17:46:46 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2017/06/22 17:46:48 CLSRSC-464: Starting retrieval of the cluster configuration data

2017/06/22 17:46:54 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.

2017/06/22 17:46:58 CLSRSC-515: Starting OCR manual backup.

2017/06/22 17:47:00 CLSRSC-516: OCR manual backup successful.

2017/06/22 17:47:02 CLSRSC-468: Setting Oracle Clusterware and ASM to rolling migration mode

2017/06/22 17:47:02 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/asmca -silent -upgradeNodeASM -nonRolling false -oldCRSHome /u01/app/11.2.0.3/grid -oldCRSVersion 11.2.0.3.0 -nodeNumber 1 -firstNode true -startRolling true'


ASM configuration upgraded in local node successfully.

2017/06/22 17:47:08 CLSRSC-469: Successfully set Oracle Clusterware and ASM to rolling migration mode

2017/06/22 17:47:08 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2017/06/22 17:49:06 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2017/06/22 17:49:28 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/06/22 17:51:31 CLSRSC-472: Attempting to export the OCR

2017/06/22 17:51:31 CLSRSC-482: Running command: 'ocrconfig -upgrade grid oinstall'

2017/06/22 17:51:40 CLSRSC-473: Successfully exported the OCR

2017/06/22 17:51:42 CLSRSC-486:
 At this stage of upgrade, the OCR has changed.
 Any attempt to downgrade the cluster after this point will require a complete cluster outage to restore the OCR.

2017/06/22 17:51:42 CLSRSC-541:
 To downgrade the cluster:
 1. All nodes that have been upgraded must be downgraded.

2017/06/22 17:51:42 CLSRSC-542:
 2. Before downgrading the last node, the Grid Infrastructure stack on all other cluster nodes must be down.

2017/06/22 17:51:42 CLSRSC-543:
 3. The downgrade command must be run on the node exaprddb02 with the '-lastnode' option to restore global configuration data.

2017/06/22 17:52:13 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 11g Release 2.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
2017/06/22 17:52:34 CLSRSC-474: Initiating upgrade of resource types

2017/06/22 17:52:59 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.3.0 -d 12.1.0.2.0 -p first'

2017/06/22 17:52:59 CLSRSC-475: Upgrade of resource types successfully initiated.

2017/06/22 17:53:01 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded



>>>>>>>>>>NODE2 - ROOTUPGRADE.sh OUTPUT<<<<<<<<<<<<<<<

[root@exaprddb02 ~]# /u01/app/12.1.0.2/grid/rootupgrade.sh
Performing root user operation.

The following environment variables are set as:
    ORACLE_OWNER= grid
    ORACLE_HOME=  /u01/app/12.1.0.2/grid

Enter the full pathname of the local bin directory: [/usr/local/bin]:
The file "dbhome" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying dbhome to /usr/local/bin ...
The file "oraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying oraenv to /usr/local/bin ...
The file "coraenv" already exists in /usr/local/bin.  Overwrite it? (y/n)
[n]: y
   Copying coraenv to /usr/local/bin ...

Entries will be added to the /etc/oratab file as needed by
Database Configuration Assistant when a database is created
Finished running generic part of root script.
Now product-specific root actions will be performed.
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/06/22 17:54:19 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/06/22 17:54:44 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

2017/06/22 17:54:44 CLSRSC-464: Starting retrieval of the cluster configuration data

2017/06/22 17:54:49 CLSRSC-177: Failed to add (property/value):('LOCAL_NODE_NUM'/'') for checkpoint 'ROOTCRS_OLDHOMEINFO' (error code 1)

2017/06/22 17:54:49 CLSRSC-465: Retrieval of the cluster configuration data has successfully completed.


ASM configuration upgraded in local node successfully.

2017/06/22 17:54:56 CLSRSC-466: Starting shutdown of the current Oracle Grid Infrastructure stack

2017/06/22 17:56:54 CLSRSC-467: Shutdown of the current Oracle Grid Infrastructure stack has successfully completed.

OLR initialization - successful
2017/06/22 17:57:20 CLSRSC-329: Replacing Clusterware entries in file 'oracle-ohasd.conf'

CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Oracle High Availability Services has been started.
2017/06/22 18:00:01 CLSRSC-343: Successfully started Oracle Clusterware stack

clscfg: EXISTING configuration version 5 detected.
clscfg: version 5 is 12c Release 1.
Successfully taken the backup of node specific configuration in OCR.
Successfully accumulated necessary OCR keys.
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
Start upgrade invoked..
2017/06/22 18:00:14 CLSRSC-478: Setting Oracle Clusterware active version on the last node to be upgraded

2017/06/22 18:00:14 CLSRSC-482: Running command: '/u01/app/12.1.0.2/grid/bin/crsctl set crs activeversion'

Started to upgrade the Oracle Clusterware. This operation may take a few minutes.
Started to upgrade the OCR.
Started to upgrade the CSS.
The CSS was successfully upgraded.
Started to upgrade Oracle ASM.
Started to upgrade the CRS.
The CRS was successfully upgraded.
Successfully upgraded the Oracle Clusterware.
Oracle Clusterware operating version was successfully set to 12.1.0.2.0
2017/06/22 18:01:52 CLSRSC-479: Successfully set Oracle Clusterware active version

2017/06/22 18:01:59 CLSRSC-476: Finishing upgrade of resource types

2017/06/22 18:02:16 CLSRSC-482: Running command: 'upgrade model  -s 11.2.0.3.0 -d 12.1.0.2.0 -p last'

2017/06/22 18:02:16 CLSRSC-477: Successfully completed upgrade of resource types

2017/06/22 18:02:45 CLSRSC-325: Configure Oracle Grid Infrastructure for a Cluster ... succeeded

You have new mail in /var/spool/mail/root
[root@exaprddb02 ~]#


>>>>>>>>>Click OK on the rootupgrade.sh dialog in the installation wizard to continue, and wait for the wizard to finish.

GRID INFRA 12.1.0.2 INSTALLED VERIFICATION
*********************************************
ps -eaf|grep d.bin
(Binaries should point to 12.1.0.2 path)

crsctl query crs softwareversion -all

crsctl query crs releaseversion (NODE1)
crsctl query crs releaseversion (NODE2)

Remove all the software from /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017 to free up space on the mount.

Revisit the .bash_profile of the grid user on both nodes so that it points to the 12.1.0.2 Oracle home path.

If required, include OPatch in the PATH in .bash_profile for ease of future patching:
export PATH=$PATH:$ORACLE_HOME/bin:$ORACLE_HOME/OPatch

crsctl check crs

crsctl stat res -t

asmcmd -p

lsdg

lsattr -l -G DATAC1
lsattr -l -G RECOC1
lsattr -l -G DBFS_DG

Check whether the diskgroup attribute "compatible.asm" has been raised to 12.1.0.2.0 (if not, see "How to change a diskgroup attribute" below).
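
The same can be confirmed from SQL*Plus (as sysasm) with a query against V$ASM_DISKGROUP, for example:

SELECT name, compatibility, database_compatibility
FROM v$asm_diskgroup
ORDER BY name;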

show parameter sga (2G)

How to change a diskgroup attribute to match the upgraded GI version
********************************************************************
ALTER DISKGROUP RECOC1 SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';            

ALTER DISKGROUP DBFS_DG SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';

ALTER DISKGROUP DATAC1 SET ATTRIBUTE 'compatible.asm' = '12.1.0.2.0';



srvctl status database -d ebsprod

srvctl start database -d ebsprod

(Monitor the initialization parameters: the database COMPATIBLE (compatible.rdbms) setting is still 11.2.0.3.0 even though GI is now on 12.1.0.2.0.)
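
A quick check from the database instance (assuming the database COMPATIBLE parameter is what is being tracked here):

show parameter compatible      (expected to still report 11.2.0.3.0)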

crsctl query css votedisk

crsctl query crs releasepatch

crsctl query crs activeversion

crsctl query crs activeversion -f

crsctl query crs softwarepatch exaprddb01
crsctl query crs softwarepatch exaprddb02

As Root:

crsctl stop crs (On Both Nodes)

*****  Restart BOTH DB nodes, one after the other ********
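
A sketch of the rolling restart (as root; the exact restart command is an assumption, adjust to site practice):

On exaprddb01:  shutdown -r now
(wait for the node to come back and for crsctl check crs to report a healthy stack)
On exaprddb02:  shutdown -r now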

HOW TO DECONFIGURE IF THERE ARE ANY CLUSTER CONFIGURATION FAILURES
**********************************************************
[grid@exaprddb01 bin]$ ./olsnodes -s -t
ebstdb01-mgmt   Active  Unpinned
ebstdb02-mgmt   Active  Unpinned

To Unpin (AS ROOT) (Source the GRID HOME)
********
./crsctl unpin css -n exadbdr01
./crsctl unpin css -n exadbdr02

Collect the Following Details
******************************
$GRID_HOME/bin/crsctl stat res -t
$GRID_HOME/bin/crsctl stat res -p
$GRID_HOME/bin/crsctl query css votedisk
$GRID_HOME/bin/ocrcheck
$GRID_HOME/bin/oifcfg getif
$GRID_HOME/bin/srvctl config nodeapps -a
$GRID_HOME/bin/srvctl config scan
$GRID_HOME/bin/srvctl config asm -a
$GRID_HOME/bin/srvctl config listener -l <listener-name> -a
$DB_HOME/bin/srvctl config database -d <dbname> -a
$DB_HOME/bin/srvctl config service -d <dbname> -s <service-name> -v

How to Deconfigure/Reconfigure(Rebuild OCR) or Deinstall Grid Infrastructure (Doc ID 1377349.1)

ROLLBACK PLAN FOR THE ASM COMPATIBLE ATTRIBUTE >>> Refer to the document shared.
ROLLBACK PLAN FOR GRID INFRASTRUCTURE >>> Also refer to the shared document, including the note ID above.


=========================================================================================================

Apply Net Patch(18841764) on Grid Infrastructure:
==================================================

Node 1:

opatch lsinventory |grep 18841764

[grid@exaprddb01 ~]$ opatch lsinventory |grep 18841764


** Copy the patch to /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/
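
A sketch, assuming the patch zip has been downloaded to a local staging path (the source path below is only a placeholder):

cp /path/to/p18841764_12102160119DBEngSysandDBIM_Linux-x86-64.zip /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/
cd /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/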

Unzip patch:

[grid@exaprddb01 GRID_AND_DB_SOFTWARE]$ unzip p18841764_12102160119DBEngSysandDBIM_Linux-x86-64.zip
Archive:  p18841764_12102160119DBEngSysandDBIM_Linux-x86-64.zip
   creating: 18841764/
  inflating: 18841764/README.txt    
   creating: 18841764/etc/
   creating: 18841764/etc/config/
  inflating: 18841764/etc/config/inventory.xml 
  inflating: 18841764/etc/config/actions.xml 
   creating: 18841764/files/
   creating: 18841764/files/lib/
   creating: 18841764/files/lib/libn12.a/
  inflating: 18841764/files/lib/libn12.a/ns.o 


[grid@exaprddb01 18841764]$ pwd
/backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/18841764
[grid@exaprddb01 18841764]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.5
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/12.1.0.2/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0.2/grid/oraInst.loc
OPatch version    : 12.2.0.1.5
OUI version       : 12.1.0.2.0
Log file location : /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/opatch2017-02-10_19-08-33PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[grid@exaprddb01 18841764]$



=============================================




Node 2
========
[grid@exaprddb02 ~]$ opatch lsinventory |grep 18841764

Copy the patch to /backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/


Unzip patch:

[grid@exaprddb02 GRID_AND_DB_SOFTWARE]$ unzip p18841764_12102160119DBEngSysandDBIM_Linux-x86-64.zip
Archive:  p18841764_12102160119DBEngSysandDBIM_Linux-x86-64.zip
   creating: 18841764/
  inflating: 18841764/README.txt    
   creating: 18841764/etc/
   creating: 18841764/etc/config/
  inflating: 18841764/etc/config/inventory.xml 
  inflating: 18841764/etc/config/actions.xml 
   creating: 18841764/files/
   creating: 18841764/files/lib/
   creating: 18841764/files/lib/libn12.a/
  inflating: 18841764/files/lib/libn12.a/ns.o 


[grid@exaprddb02 18841764]$ pwd
/backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/18841764
[grid@exaprddb02 18841764]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.5
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/12.1.0.2/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0.2/grid/oraInst.loc
OPatch version    : 12.2.0.1.5
OUI version       : 12.1.0.2.0
Log file location : /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/opatch2017-02-10_19-09-28PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[grid@exaprddb02 18841764]$


*** Stop crs on both the Nodes 

[root@exaprddb01 ~]# /u01/app/12.1.0.2/grid/bin/crsctl stop crs

[root@exaprddb02 ~]# /u01/app/12.1.0.2/grid/bin/crsctl stop crs




HIGH LEVEL STEPS
NODE 1:
=======
ROOT User: /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -unlock
grid User: Apply the patch
ROOT User: /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -patch
NODE 2:
=======
ROOT User: /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -unlock
grid User: Apply the patch
ROOT User: /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -patch



Node 1:

As root:

[root@exaprddb01 ~]# /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -unlock
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/02/10 19:14:27 CLSRSC-347: Successfully unlock /u01/app/12.1.0.2/grid

As grid:

[grid@exaprddb01 18841764]$ pwd
/backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/18841764
[grid@exaprddb01 18841764]$ opatch prereq CheckConflictAgainstOHWithDetail -ph ./
Oracle Interim Patch Installer version 12.2.0.1.5
Copyright (c) 2017, Oracle Corporation.  All rights reserved.

PREREQ session

Oracle Home       : /u01/app/12.1.0.2/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0.2/grid/oraInst.loc
OPatch version    : 12.2.0.1.5
OUI version       : 12.1.0.2.0
Log file location : /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/opatch2017-02-10_19-08-33PM_1.log

Invoking prereq "checkconflictagainstohwithdetail"

Prereq "checkConflictAgainstOHWithDetail" passed.

OPatch succeeded.
[grid@exaprddb01 18841764]$ pwd
/backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/18841764
[grid@exaprddb01 18841764]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.5
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/12.1.0.2/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0.2/grid/oraInst.loc
OPatch version    : 12.2.0.1.5
OUI version       : 12.1.0.2.0
Log file location : /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/opatch2017-02-10_19-15-16PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   18841764 

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/12.1.0.2/grid')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '18841764' to OH '/u01/app/12.1.0.2/grid'

Patching component oracle.network.rsf, 12.1.0.2.0...

Patching component oracle.rdbms.rsf, 12.1.0.2.0...

Patching component oracle.rdbms, 12.1.0.2.0...
OUI-68010:
Clusterware stack is down, OPatch will skip patch files propagation to remote nodes. You can try any one of following options if you want to finish the propagation :
 1) Please invoke 'opatch util updateremotenodes' command to apply patch on the remote nodes with -remote_nodes <node list> option. For more details, please check 'opatch util updateRemoteNodes -help'
 2) Please rollback this patch and retry apply with stack up. You can start the stack by command "/u01/app/12.1.0.2/grid/bin/crsctl start crs"

Patch 18841764 successfully applied.
OPatch Session completed with warnings.
Log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/opatch2017-02-10_19-15-16PM_1.log

OPatch completed with warnings.
[grid@exaprddb01 18841764]$

[grid@exaprddb01 18841764]$ opatch lsinventory |grep 18841764
Patch  18841764     : applied on Fri Feb 10 19:15:36 AST 2017
     18841764
[grid@exaprddb01 18841764]$


As root: (REPLACE THE BELOW OUTPUT WITH ONSCREEN DETAILS OUTPUT)
=======

[root@exaprddb01 ~]# /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -patch
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/02/10 19:18:01 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/02/10 19:18:01 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exaprddb01'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'exaprddb01'
CRS-2677: Stop of 'ora.drivers.acfs' on 'exaprddb01' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'exaprddb01' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.evmd' on 'exaprddb01'
CRS-2676: Start of 'ora.mdnsd' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.evmd' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'exaprddb01'
CRS-2676: Start of 'ora.gpnpd' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'exaprddb01'
CRS-2676: Start of 'ora.gipcd' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'exaprddb01'
CRS-2676: Start of 'ora.cssdmonitor' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.diskmon' on 'exaprddb01'
CRS-2676: Start of 'ora.diskmon' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.cssd' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.ctssd' on 'exaprddb01'
CRS-2676: Start of 'ora.ctssd' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'exaprddb01'
CRS-2676: Start of 'ora.asm' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'exaprddb01'
CRS-2676: Start of 'ora.storage' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'exaprddb01'
CRS-2676: Start of 'ora.crf' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'exaprddb01'
CRS-2676: Start of 'ora.crsd' on 'exaprddb01' succeeded
CRS-6023: Starting Oracle Cluster Ready Services-managed resources
CRS-6017: Processing resource auto-start for servers: exaprddb01
CRS-2672: Attempting to start 'ora.MGMTLSNR' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.net2.network' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.net1.network' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.oc4j' on 'exaprddb01'
CRS-2676: Start of 'ora.net1.network' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.cvu' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.exaprddb01.vip' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.exaprddb02.vip' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.ons' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.scan1.vip' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.scan2.vip' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.scan3.vip' on 'exaprddb01'
CRS-2676: Start of 'ora.net2.network' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.exaprddb01_2.vip' on 'exaprddb01'
CRS-2672: Attempting to start 'ora.exaprddb02_2.vip' on 'exaprddb01'
CRS-2676: Start of 'ora.cvu' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.ons' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.exaprddb01.vip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'exaprddb01'
CRS-2676: Start of 'ora.exaprddb02.vip' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.MGMTLSNR' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.scan1.vip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'exaprddb01'
CRS-2676: Start of 'ora.scan2.vip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN2.lsnr' on 'exaprddb01'
CRS-2676: Start of 'ora.scan3.vip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN3.lsnr' on 'exaprddb01'
CRS-2676: Start of 'ora.exaprddb01_2.vip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_IB.lsnr' on 'exaprddb01'
CRS-2676: Start of 'ora.exaprddb02_2.vip' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.mgmtdb' on 'exaprddb01'
CRS-2676: Start of 'ora.LISTENER_SCAN2.lsnr' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN3.lsnr' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.LISTENER_IB.lsnr' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.oc4j' on 'exaprddb01' succeeded
CRS-2676: Start of 'ora.mgmtdb' on 'exaprddb01' succeeded
CRS-6016: Resource auto-start has completed for server exaprddb01
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [1128856052].
[root@exaprddb01 ~]#



Node 2: (REPLACE THE BELOW OUTPUT WITH ONSCREEN DETAILS OUTPUT)
=========
As root:
[root@exaprddb02 ~]# /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -unlock
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/02/10 19:21:21 CLSRSC-347: Successfully unlock /u01/app/12.1.0.2/grid



As grid:
[grid@exaprddb02 18841764]$ pwd
/backup/soft/linux/patchs/12.1.0.2/64bit/apr2017/18841764
[grid@exaprddb02 18841764]$ opatch apply
Oracle Interim Patch Installer version 12.2.0.1.5
Copyright (c) 2017, Oracle Corporation.  All rights reserved.


Oracle Home       : /u01/app/12.1.0.2/grid
Central Inventory : /u01/app/oraInventory
   from           : /u01/app/12.1.0.2/grid/oraInst.loc
OPatch version    : 12.2.0.1.5
OUI version       : 12.1.0.2.0
Log file location : /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/opatch2017-02-10_19-22-08PM_1.log

Verifying environment and performing prerequisite checks...
OPatch continues with these patches:   18841764 

Do you want to proceed? [y|n]
y
User Responded with: Y
All checks passed.

Please shutdown Oracle instances running out of this ORACLE_HOME on the local system.
(Oracle Home = '/u01/app/12.1.0.2/grid')


Is the local system ready for patching? [y|n]
y
User Responded with: Y
Backing up files...
Applying interim patch '18841764' to OH '/u01/app/12.1.0.2/grid'

Patching component oracle.network.rsf, 12.1.0.2.0...

Patching component oracle.rdbms.rsf, 12.1.0.2.0...

Patching component oracle.rdbms, 12.1.0.2.0...
OUI-68010:
Clusterware stack is down, OPatch will skip patch files propagation to remote nodes. You can try any one of following options if you want to finish the propagation :
 1) Please invoke 'opatch util updateremotenodes' command to apply patch on the remote nodes with -remote_nodes <node list> option. For more details, please check 'opatch util updateRemoteNodes -help'
 2) Please rollback this patch and retry apply with stack up. You can start the stack by command "/u01/app/12.1.0.2/grid/bin/crsctl start crs"

Patch 18841764 successfully applied.
OPatch Session completed with warnings.
Log file location: /u01/app/12.1.0.2/grid/cfgtoollogs/opatch/opatch2017-02-10_19-22-08PM_1.log

OPatch completed with warnings.
[grid@exaprddb02 18841764]$

[grid@exaprddb02 18841764]$ opatch lsinventory |grep 18841764
Patch  18841764     : applied on Fri Feb 10 19:22:19 AST 2017
     18841764
[grid@exaprddb02 18841764]$



As root: (REPLACE THE BELOW OUTPUT WITH ONSCREEN DETAILS OUTPUT)
=========

[root@exaprddb02 ~]# /u01/app/12.1.0.2/grid/crs/install/rootcrs.pl -patch
Using configuration parameter file: /u01/app/12.1.0.2/grid/crs/install/crsconfig_params
2017/02/10 19:23:31 CLSRSC-4015: Performing install or upgrade action for Oracle Trace File Analyzer (TFA) Collector.

2017/02/10 19:23:31 CLSRSC-4003: Successfully patched Oracle Trace File Analyzer (TFA) Collector.

CRS-4123: Oracle High Availability Services has been started.
CRS-2791: Starting shutdown of Oracle High Availability Services-managed resources on 'exaprddb02'
CRS-2673: Attempting to stop 'ora.drivers.acfs' on 'exaprddb02'
CRS-2677: Stop of 'ora.drivers.acfs' on 'exaprddb02' succeeded
CRS-2793: Shutdown of Oracle High Availability Services-managed resources on 'exaprddb02' has completed
CRS-4133: Oracle High Availability Services has been stopped.
CRS-4123: Starting Oracle High Availability Services-managed resources
CRS-2672: Attempting to start 'ora.mdnsd' on 'exaprddb02'
CRS-2672: Attempting to start 'ora.evmd' on 'exaprddb02'
CRS-2676: Start of 'ora.mdnsd' on 'exaprddb02' succeeded
CRS-2676: Start of 'ora.evmd' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.gpnpd' on 'exaprddb02'
CRS-2676: Start of 'ora.gpnpd' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.gipcd' on 'exaprddb02'
CRS-2676: Start of 'ora.gipcd' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.cssdmonitor' on 'exaprddb02'
CRS-2676: Start of 'ora.cssdmonitor' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.cssd' on 'exaprddb02'
CRS-2672: Attempting to start 'ora.diskmon' on 'exaprddb02'
CRS-2676: Start of 'ora.diskmon' on 'exaprddb02' succeeded
CRS-2676: Start of 'ora.cssd' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.cluster_interconnect.haip' on 'exaprddb02'
CRS-2672: Attempting to start 'ora.ctssd' on 'exaprddb02'
CRS-2676: Start of 'ora.ctssd' on 'exaprddb02' succeeded
CRS-2676: Start of 'ora.cluster_interconnect.haip' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.asm' on 'exaprddb02'
CRS-2676: Start of 'ora.asm' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.storage' on 'exaprddb02'
CRS-2676: Start of 'ora.storage' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.crf' on 'exaprddb02'
CRS-2676: Start of 'ora.crf' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.crsd' on 'exaprddb02'
CRS-2676: Start of 'ora.crsd' on 'exaprddb02' succeeded
CRS-6017: Processing resource auto-start for servers: exaprddb02
CRS-2672: Attempting to start 'ora.net2.network' on 'exaprddb02'
CRS-2672: Attempting to start 'ora.net1.network' on 'exaprddb02'
CRS-2676: Start of 'ora.net1.network' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.ons' on 'exaprddb02'
CRS-2676: Start of 'ora.net2.network' on 'exaprddb02' succeeded
CRS-2673: Attempting to stop 'ora.exaprddb02_2.vip' on 'exaprddb01'
CRS-2673: Attempting to stop 'ora.exaprddb02.vip' on 'exaprddb01'
CRS-2677: Stop of 'ora.exaprddb02_2.vip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.exaprddb02_2.vip' on 'exaprddb02'
CRS-2677: Stop of 'ora.exaprddb02.vip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.exaprddb02.vip' on 'exaprddb02'
CRS-2676: Start of 'ora.ons' on 'exaprddb02' succeeded
CRS-2673: Attempting to stop 'ora.LISTENER_SCAN1.lsnr' on 'exaprddb01'
CRS-2677: Stop of 'ora.LISTENER_SCAN1.lsnr' on 'exaprddb01' succeeded
CRS-2673: Attempting to stop 'ora.scan1.vip' on 'exaprddb01'
CRS-2677: Stop of 'ora.scan1.vip' on 'exaprddb01' succeeded
CRS-2672: Attempting to start 'ora.scan1.vip' on 'exaprddb02'
CRS-2676: Start of 'ora.exaprddb02_2.vip' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_IB.lsnr' on 'exaprddb02'
CRS-2676: Start of 'ora.exaprddb02.vip' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER.lsnr' on 'exaprddb02'
CRS-2676: Start of 'ora.scan1.vip' on 'exaprddb02' succeeded
CRS-2672: Attempting to start 'ora.LISTENER_SCAN1.lsnr' on 'exaprddb02'
CRS-2676: Start of 'ora.LISTENER_IB.lsnr' on 'exaprddb02' succeeded
CRS-2676: Start of 'ora.LISTENER.lsnr' on 'exaprddb02' succeeded
CRS-2676: Start of 'ora.LISTENER_SCAN1.lsnr' on 'exaprddb02' succeeded
CRS-6016: Resource auto-start has completed for server exaprddb02
CRS-6024: Completed start of Oracle Cluster Ready Services-managed resources
CRS-4123: Oracle High Availability Services has been started.
Oracle Clusterware active version on the cluster is [12.1.0.2.0]. The cluster upgrade state is [NORMAL]. The cluster active patch level is [1274404106].
[root@exaprddb02 ~]#



=============================================

Start DB: PRIMARY

[oracle@exaprddb01 ~]$ srvctl start database -d ebsprod

alter system set log_archive_dest_state_2='ENABLE' scope=both sid='*';


Start DB: STANDBY

[oracle@exaprddb01 ~]$ srvctl start database -d ebsproddr

alter system set log_archive_dest_state_2='ENABLE' scope=both sid='*';

Start MRP: STANDBY

SQL> alter database recover managed standby database using current logfile disconnect;
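
Once MRP is running, the earlier gap-check query can be re-run on the standby to confirm the apply has caught up with the primary:

select thread#, max(sequence#) "Last Standby Seq Applied"
from v$archived_log val, v$database vdb
where val.resetlogs_change# = vdb.resetlogs_change#
and val.applied in ('YES','IN-MEMORY')
group by thread# order by 1;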


Ref:

11.2.0.2, 11.2.0.3, 11.2.0.4 or 12.1.0.1 to 12.1.0.2 Grid Infrastructure and Database Upgrade on Exadata Database Machine running Oracle Linux (Doc ID 1681467.1)
