Friday, October 3, 2025

Constraining TNS Searches

This week, I had a most interesting customer request about limiting the scope of Oracle database name resolution (i.e. Transparent Network Substrate (TNS) / Net Service record lookups) by database clients.  The customer wanted the Oracle Unified Directory (OUD) directory service to limit the results returned by wildcard searches from database clients to a single entry; in other words, they didn't want a database client to be able to list all registered databases in OUD.

Fortunately, with OUD this is a very easy problem to solve: simply change the system-wide default size-limit to 1.  This prevents anonymous and other non-administrative users from returning the full list of registered databases.  Note that this is not a recommended approach for general-purpose directories, because most directory service client applications legitimately return more than one result.  For TNS name resolution, however, resolving a single name is the only operation normal clients perform, so a size limit of 1 is acceptable.

With the default TNS configuration an anonymous search can list all registered databases:

$ ldapsearch -T -h tns1.example.com -p 1389 -b dc=example,dc=com -s sub 'orclNetDescString=*'  dn
dn: cn=mydb1,ou=Databases,cn=OracleContext,DC=example,DC=com

dn: cn=mypdb1_tns1,ou=Databases,cn=OracleContext,DC=example,DC=com

dn: cn=mytestdb,cn=OracleContext,DC=example,DC=com

Changing the system-wide size-limit to 1 on each OUD directory service instance limits the results returned to just one entry:

$ dsconfig -h tns1.example.com -X -p 4444 -D 'cn=Directory Manager' -j /u01/cfg/...pw --no-prompt set-global-configuration-prop --set size-limit:1

With this system wide change applied, here is what is now returned to the database client for a wildcard search attempting to show all databases:

$ ldapsearch -T -h tns1.example.com -p 1389 -b dc=example,dc=com -s sub 'orclNetDescString=*'  dn
dn: cn=mydb1,ou=Databases,cn=OracleContext,DC=example,DC=com

SEARCH operation failed
Result Code:  4 (Size Limit Exceeded)
Additional Information:  This search operation has sent the maximum of 1 entries to the client

Specific searches for an individual database continue to work as expected. For example:

$ ldapsearch -T -h tns1.example.com -p 1389 -b dc=example,dc=com -s sub 'cn=mydb1' orclNetDescString
dn: cn=mydb1,ou=Databases,cn=OracleContext,DC=example,DC=com
orclNetDescString: (DESCRIPTION= (ADDRESS = (PROTOCOL = TCP)(HOST = tns1.example.com )(PORT = 1521))(CONNECT_DATA = (SERVICE_NAME = mydb1 )))

Some directory experts might point out that this could be worked around by using the simple paged results control with a page size of 1. With OUD, however, use of that control is a privilege that can be revoked and is not granted to anonymous users by default, as you can see from the example below:

$ ldapsearch -T -h tns1.example.com -p 1389 --simplePageSize 1 -b dc=example,dc=com -s sub 'orclNetDescString=*'  dn
SEARCH operation failed
Result Code:  50 (Insufficient Access Rights)
Additional Information:  The request control with Object Identifier (OID) "1.2.840.113556.1.4.319" cannot be used due to insufficient access rights


There is one more thing that needs to be addressed: TNS administrators are impacted by this system-wide change as well.  To resolve this, we simply override the system-wide size-limit with a user-specific size-limit.  For example:

$ ldapmodify -h tns1.example.com -Z -X -p 1636 -D "cn=Directory Manager" -j /u01/cfg/...pw <<EOF
dn: cn=tnsadmin,ou=TNSAdmins,cn=OracleContext
changetype: modify
add: ds-rlim-size-limit
ds-rlim-size-limit: 0
EOF

Now, the TNS administrator has sufficient privilege to list all databases in OUD.
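
To confirm, a search bound as the TNS administrator (using the cn=tnsadmin entry from the override above; the password file path is a placeholder) once again returns all registered databases:

$ ldapsearch -T -h tns1.example.com -p 1389 -D 'cn=tnsadmin,ou=TNSAdmins,cn=OracleContext' -j <tnsadmin_password_file> -b dc=example,dc=com -s sub 'orclNetDescString=*' dn
dn: cn=mydb1,ou=Databases,cn=OracleContext,DC=example,DC=com

dn: cn=mypdb1_tns1,ou=Databases,cn=OracleContext,DC=example,DC=com

dn: cn=mytestdb,cn=OracleContext,DC=example,DC=com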

I hope you found this information useful and insightful.

Blessings!
























Tuesday, September 30, 2025

Entra ID RAS Integration

Real Application Security (RAS) provides fine-grained access controls within the Oracle database for application users.  This capability can be extended to centrally managed database user architectures such as Centrally Managed Users (CMU) and Entra ID integration.  Thomas Minne recently wrote an excellent article titled "Unifying Identity and Data Security: Real Application Security with Active Directory" that illustrates how to apply RAS access controls to CMU roles by specifying the CMU-mapped database role as the principal_name with the RAS xs_acl.ptype_db principal_type.

With CMU, database roles can be identified globally and mapped to Active Directory (AD) groups using the group's distinguished name (DN) in AD.  In the following example, we map the database roles idp_dba and dbsession to the AD groups "cn=idp_dba,cn=Groups,dc=myco,dc=com" and "cn=dbsession,cn=Groups,dc=myco,dc=com" respectively:

DB Role For DBAs:
SQL> CREATE ROLE idp_dba IDENTIFIED GLOBALLY AS 'cn=idp_dba,cn=Groups,dc=myco,dc=com';
SQL> GRANT pdb_dba TO idp_dba;

DB Role For Users:
SQL> CREATE ROLE dbsession IDENTIFIED GLOBALLY AS 'cn=dbsession,cn=Groups,dc=myco,dc=com';
SQL> GRANT CREATE SESSION TO dbsession;

With Entra ID integration, the database roles are mapped to Entra ID app roles that are linked within Entra ID to Entra ID groups.  In the following example, we map the database roles idp_dba and dbsession to Entra ID app roles dba.role and session.role respectively:

DB Role For DBAs:
SQL> CREATE ROLE idp_dba IDENTIFIED GLOBALLY AS 'AZURE_ROLE=dba.role';
SQL> GRANT pdb_dba TO idp_dba;

DB Role For Users:
SQL> CREATE ROLE dbsession IDENTIFIED GLOBALLY AS 'AZURE_ROLE=session.role';
SQL> GRANT CREATE SESSION TO dbsession;

In both cases, the RAS access policy declarations can be applied to the CMU or Entra ID database role names.  For example, the dbsession role can be granted a minimal RAS select privilege while the idp_dba role is granted broader select privileges.  Here are example ACEs borrowed from Thomas Minne's blog post:

    aces(1) := xs$ace_type(privilege_list => xs$name_list
                            ('select'),
                             principal_name => 'dbsession',
                             principal_type => xs_acl.ptype_db);

    aces(2) := xs$ace_type(privilege_list => xs$name_list
                            ('select','view_employee_details'),
                             principal_name => 'idp_dba',
                             principal_type => xs_acl.ptype_db);
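
For reference, here is a minimal sketch of how such ACEs are typically wrapped into an ACL with the RAS API; the ACL and security class names (emp_acl and hr_privileges) are hypothetical and not taken from Thomas Minne's post:

DECLARE
  aces xs$ace_list := xs$ace_list();
BEGIN
  aces.extend(2);
  -- ACE 1: minimal select privilege for the dbsession principal (as above)
  aces(1) := xs$ace_type(privilege_list => xs$name_list('select'),
                         principal_name => 'dbsession',
                         principal_type => xs_acl.ptype_db);
  -- ACE 2: broader privileges for the idp_dba principal (as above)
  aces(2) := xs$ace_type(privilege_list => xs$name_list('select','view_employee_details'),
                         principal_name => 'idp_dba',
                         principal_type => xs_acl.ptype_db);
  -- Create the ACL from the ACE list under a hypothetical security class
  xs_acl.create_acl(name      => 'emp_acl',
                    ace_list  => aces,
                    sec_class => 'hr_privileges');
END;
/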

A user that is a member of the dbsession group would only be able to return the limited results governed by the RAS rules for that access control, while a user that is a member of the idp_dba group would be able to see all of the data permitted by the RAS access controls for the idp_dba role.

I hope you find this educational and informative.

Blessings!

Monday, September 29, 2025

Creating Entra ID Enabled Net Service TNS Entries

Oracle Net Services provides name service resolution for Oracle database clients when looking up the connect string for a target database.  Cloud native Entra ID integration for Oracle database users is a new capability that provides centralized multi-factor authentication to end users and service accounts.

As customers explore transitioning their LDAP-based name service database entries to Entra ID integration (a.k.a. MSIE), one method is to duplicate all database entries as new entries tagged with _MSIE that incorporate the requisite TLS-encrypted connectivity and Entra ID properties: the authentication method (e.g. interactive, passthrough, or service account), the Entra ID tenant ID, the Entra ID web app client ID, and the Entra ID web app database server URI.

A new manage_tns tool has been introduced to simplify creating and loading the Entra ID entries into the existing LDAP-based directory naming service.  Here is how to use manage_tns to duplicate all existing entries as new entries that carry the _MSIE tag and the Entra ID properties.

Step 1: Install python3-ldap and the manage_tns tool on to a Linux host

$ sudo dnf install python3-ldap
$ cd /u01
$ curl -so manage_tns.sh https://raw.githubusercontent.com/oudlabs/manage_tns/refs/heads/main/manage_tns.sh
$ chmod 0700 manage_tns.sh


Step 2: Backup the primary naming context

$ /u01/manage_tns.sh export -h <dshost> -p <ldaps_port> -f tnsnames.ora --suffix "DC=example,DC=com"

Sample output:

Directory Server: ldaps://tns1.example.com:1636
User: Loging into directory service anonymously
Exporting pdb3...done
Export to tnsnames.ora complete
$ cat tnsnames.ora
pdb3=
   (DESCRIPTION=
         (ADDRESS=(PROTOCOL=TCPS)(HOST=pdb3.example.com)(PORT=2484))
      (CONNECT_DATA=
         (SERVER=DEDICATED)
         (SERVICE_NAME=pdb3.example.com)))


Step 3: Export the database entries from the naming context with MSIE data

$ /u01/manage_tns.sh exportmsie -h <dshost> -p <ldaps_port> -f tnsnames-msie.ora --suffix "DC=example,DC=com" --dbport 2484 --method interactive --tenantid 7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1 --clientid e5124a85-ac3e-14a4-f2ca-1ad635cf781a --serveruri "https://dbauthdemo.com/16736175-ca41-8f33-af0d-4616ade17621"

Sample output:

Directory Server: ldaps://tns1.example.com:1636
User: Loging into directory service anonymously
Exporting pdb3...done
Export to tnsnames-msie.ora complete
$ cat tnsnames-msie.ora
PDB3_MSIE=
   (DESCRIPTION=
         (ADDRESS=(PROTOCOL=TCPS)(HOST=pdb3.example.com)(PORT=2484))
         (SECURITY=
            (SSL_SERVER_DN_MATCH=TRUE)
            (WALLET_LOCATION=SYSTEM)
            (TOKEN_AUTH=AZURE_INTERACTIVE)
            (TENANT_ID=7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1)
            (CLIENT_ID=e5124a85-ac3e-14a4-f2ca-1ad635cf781a)
            (AZURE_DB_APP_ID_URI=https://dbauthdemo.com/16736175-ca41-8f33-af0d-4616ade17621))
      (CONNECT_DATA=
         (SERVER=DEDICATED)
         (SERVICE_NAME=pdb3.example.com)))


Step 4: Update the database server URI for every database in tnsnames-msie.ora

PDB3_MSIE=
   (DESCRIPTION=
         (ADDRESS=(PROTOCOL=TCPS)(HOST=pdb3.example.com)(PORT=2484))
         (SECURITY=
            (SSL_SERVER_DN_MATCH=TRUE)
            (WALLET_LOCATION=SYSTEM)
            (TOKEN_AUTH=AZURE_INTERACTIVE)
            (TENANT_ID=7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1)
            (CLIENT_ID=e5124a85-ac3e-14a4-f2ca-1ad635cf781a)
            (AZURE_DB_APP_ID_URI=https://dbauthdemo.com/16781793-df98-94e1-2c51-8a91e8878171 ))
      (CONNECT_DATA=
         (SERVER=DEDICATED)
         (SERVICE_NAME=pdb3.example.com)))
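
Since each database typically has its own Entra ID app registration, each entry usually needs its own AZURE_DB_APP_ID_URI value.  If many exported entries share the same placeholder URI, a simple search-and-replace can speed this up; the following sed invocation is only a sketch that swaps the example URIs used in this post:

$ sed -i 's|16736175-ca41-8f33-af0d-4616ade17621|16781793-df98-94e1-2c51-8a91e8878171|g' tnsnames-msie.ora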


Step 5: Load the MSIE tagged entries

$ /u01/manage_tns.sh load -h <dshost> -p <ldaps_port> --suffix "DC=example,DC=com" -f tnsnames-msie.ora


Step 6: Confirm that Oracle database clients can authenticate with Entra ID integration into each of the databases

See my earlier blog posts on how to set up the respective Oracle database clients.
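
As a quick smoke test from a client that has already been configured for Entra ID token authentication, connecting with external (token) authentication to one of the new aliases would look something like the following; the alias comes from the export above, and the sqlplus client, wallet, and Entra ID client setup are assumed to be in place:

$ sqlplus /@PDB3_MSIE

With the interactive flow, the client pops up a browser window for the Entra ID sign-in and then completes the database connection.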

I hope you found this information helpful and insightful.

Blessings!





How To Consolidate Oracle Naming Contexts

Oracle Net Services provides name service resolution for Oracle database clients when looking up the connect string for a target database.  Customers that have used Net Services for a long time may have accumulated a large number of Oracle contexts (a.k.a. directory service base suffixes) over time. Some customers have expressed an interest in consolidating all of the databases into a single naming context. For example, consider a customer that has databases in the following naming contexts:
  • DC=myco,DC=com
  • DC=corp,DC=myco,DC=com
  • DC=dev,DC=myco,DC=com
  • DC=acquiredco,DC=com
The customer would like to consolidate all of these naming contexts into a unified naming context of "DC=myco,DC=com" for all database entries.  In the past, this could be accomplished through a variety of means but all were fraught with a certain degree of risk and complications.  However, there is a new manage_tns tool available that handles this use case quite effortlessly.  Here is how to accomplish this objective in seven steps with manage_tns:

Step 1: Install python3-ldap and the manage_tns tool on to a Linux host

$ sudo dnf install python3-ldap
$ cd /u01
$ curl -so manage_tns.sh https://raw.githubusercontent.com/oudlabs/manage_tns/refs/heads/main/manage_tns.sh
$ chmod 0700 manage_tns.sh

Step 2: Backup the primary naming context

$ /u01/manage_tns.sh export -h <dshost> -p <ldaps_port> -f tnsnames.ora --suffix "DC=myco,DC=com"

Step 3: Export the database entries from all other naming contexts

$ /u01/manage_tns.sh export -h <dshost> -p <ldaps_port> -f tnsnames-corp.ora --suffix "DC=corp,DC=myco,DC=com"

$ /u01/manage_tns.sh export -h <dshost> -p <ldaps_port> -f tnsnames-dev.ora --suffix "DC=dev,DC=myco,DC=com"

$ /u01/manage_tns.sh export -h <dshost> -p <ldaps_port> -f tnsnames-other.ora --suffix "DC=acquiredco,DC=com"

Step 4: Load the exported database entries into the primary naming context

$ /u01/manage_tns.sh load -h <dshost> -p <ldaps_port> -D <tns_admin_dn> -f tnsnames-corp.ora --suffix "DC=myco,DC=com"

$ /u01/manage_tns.sh load -h <dshost> -p <ldaps_port> -D <tns_admin_dn> -f tnsnames-dev.ora --suffix "DC=myco,DC=com"

$ /u01/manage_tns.sh load -h <dshost> -p <ldaps_port> -D <tns_admin_dn> -f tnsnames-other.ora --suffix "DC=myco,DC=com"

Step 5: Update client references to the new naming context in ldap.ora and JDBC connect lookups

Example of ldap.ora:

DEFAULT_ADMIN_CONTEXT = "DC=myco,DC=com"
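
For clients that do not yet have an ldap.ora, a slightly fuller sketch (the host and port placeholders are illustrative) includes the directory server list and server type alongside the new default context:

DIRECTORY_SERVERS = (<dshost>:<ldap_port>:<ldaps_port>)
DEFAULT_ADMIN_CONTEXT = "DC=myco,DC=com"
DIRECTORY_SERVER_TYPE = OID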

Example of jdbc reference:

"jdbc:oracle:thin:@ldaps:<ds_host>:<ldaps_port>/<db_alias>,cn=OracleContext,DC=myco,DC=com"

Step 6: Obtain a list of all of the database entries from the legacy naming context

$ /u01/manage_tns.sh list -h <dshost> -p <ldaps_port> --suffix "DC=acquiredco,DC=com"


Step 7: Un-register database entries from the old contexts once you are certain that clients are no longer referencing them

$ /u01/manage_tns.sh unregister -n <db_alias> -h <dshost> -p <ldaps_port> -f tnsnames-corp.ora --suffix "DC=corp,DC=myco,DC=com"
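
If a legacy context contains many databases, the unregister step can be scripted.  The loop below is only a sketch; it assumes each alias in the exported tnsnames file starts at the beginning of a line and is followed by an equals sign:

$ for db in $(grep -oE '^[A-Za-z0-9_]+ *=' tnsnames-corp.ora | tr -d ' ='); do
    /u01/manage_tns.sh unregister -n "$db" -h <dshost> -p <ldaps_port> --suffix "DC=corp,DC=myco,DC=com"
  done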


I hope you found this information helpful and insightful.

Blessings!





Wednesday, September 24, 2025

Simplifying LDAP-based Oracle Name Service Record Management

As customers' Oracle database estates expand on premises and across all major clouds (Oracle OCI, Microsoft Azure, Amazon AWS, and Google Cloud), they often want to centralize name service resolution in LDAP-based directory services in order to ensure the accuracy of the connect strings used by client applications.  This is particularly important for use cases like database migrations or custom connect strings, and the need is amplified by the adoption of Entra ID integration, where additional TLS and Entra ID connect string properties are required.  To address this growing need, I wrote a simple script for managing name service records and published it to GitHub at https://github.com/oudlabs/manage_tns.  This blog post summarizes the use cases that it covers.

Installation

To install, just download the script from GitHub to a Linux host.

curl -so manage_tns.sh https://raw.githubusercontent.com/oudlabs/manage_tns/refs/heads/main/manage_tns.sh


Usage

To see usage, just run the script with the help subcommand:

manage_tns.sh help


Register Database

Use the "register" subcommand to register a database into the directory service.

manage_tns.sh register -n <db_alias> [options]


Unregister Database

Use the "unregister" subcommand to remove a database entry from the directory service.

manage_tns.sh unregister -n <db_alias> [options]


List Registered Databases

Use the "list" subcommand to list all registered database in the directory service.

manage_tns.sh list [options]


Show Database Registration Details

Use the "show" subcommand to show the details of a specific database from the directory service.

manage_tns.sh show -n <db_alias> [options]


Examples


Register a database with alias name pdb1 in the directory service.

$ manage_tns.sh register -n pdb1 -h tns.example.com -p 10636 --dbhost cdb1.example.com --dbport 1521 --dbproto TCP --service pdb1
Directory Server: ldaps://tns.example.com:10636
User: Loging into directory as cn=eusadmin,ou=EUSAdmins,cn=oracleContext
Enter directory service TNS admin user's password: *********
Register database pdb1
Database registration completed successfully



Register a database that includes TLS encryption and Entra ID integration details into the directory service.

manage_tns.sh register -n pdb2 -h tns.example.com -p 10636 --dbhost cdb1.example.com --dbport 2484 --dbproto TCPS --service pdb2.example.com --method interactive --tenantid 7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1 --clientid e5124a85-ac3e-14a4-f2ca-1ad635cf781a --serveruri "https://dbauthdemo.com/16736175-ca41-8f33-af0d-4616ade17621"
Directory Server: ldaps://tns.example.com:10636
User: Loging into directory as cn=eusadmin,ou=EUSAdmins,cn=oracleContext
Enter directory service TNS admin user's password: *********
Register database pdb2
Database registration completed successfully


Register a database with a custom connection string into the directory service.

manage_tns.sh register -n rac1 --dbhost rac1.example.com -c "(DESCRIPTION=(CONNECT_TIMEOUT=90)(RETRY_COUNT=50)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=racnode1.example.com)(PORT=1521)))(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=racnode2.example.com)(PORT=1521)))(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=racnode3.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=rac1)))"
Directory Server: ldaps://tns1.example.com:10636
User: Loging into directory as cn=eusadmin,ou=EUSAdmins,cn=oracleContext
Enter directory service TNS admin user's password: *********
Register database rac1
Database registration completed successfully


List all databases registered in the directory service.

$ manage_tns.sh list
Directory Server: ldaps://tns1.example.com:10636
User: Loging into directory service anonymously
List registered databases

cn=pdb1,cn=OracleContext,dc=example,dc=com
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=cdb1.example.com)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=pdb1)))

cn=pdb2,cn=OracleContext,dc=example,dc=com
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCPS)(HOST=cdb1.example.com)(PORT=2484))(SECURITY=(SSL_SERVER_DN_MATCH=TRUE)(WALLET_LOCATION=SYSTEM)(TOKEN_AUTH=AZURE_INTERACTIVE)(TENANT_ID=7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1)(AZURE_DB_APP_ID_URI=https://dbauthdemo.com/16736175-ca41-8f33-af0d-4616ade17621)(CLIENT_ID=e5124a85-ac3e-14a4-f2ca-1ad635cf781a))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=pdb2)))

cn=rac1,cn=OracleContext,dc=example,dc=com
orclNetDescString: (DESCRIPTION=(CONNECT_TIMEOUT=90)(RETRY_COUNT=50)(RETRY_DELAY=3)(TRANSPORT_CONNECT_TIMEOUT=3)(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=racnode1.example.com)(PORT=1521)))(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=racnode2.example.com)(PORT=1521)))(ADDRESS_LIST=(LOAD_BALANCE=on)(ADDRESS=(PROTOCOL=TCP)(HOST=racnode3.example.com)(PORT=1521)))(CONNECT_DATA=(SERVICE_NAME=rac1)))


Show the details of one of the registered databases.

$ manage_tns.sh show -n pdb1
Directory Server: ldaps://tns1.example.com:10636
User: Loging into directory service anonymously
Show database pdb1
dn: cn=pdb1,cn=OracleContext,dc=example,dc=com
cn: pdb1
objectClass: orclApplicationEntity
objectClass: orclDBServer
objectClass: orclService
objectClass: top
objectClass: orclDBServer_92
orclDBGlobalName: pdb1
orclNetDescName: 000:cn=DESCRIPTION_0
orclNetDescString: (DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=cdb1.example.com)(PORT=1521))(CONNECT_DATA=(SERVER=DEDICATED)(SERVICE_NAME=pdb1)))
orclOracleHome: /dbhome_1
orclServiceType: DB
orclSid: pdb1
orclSystemName: cdb1.example.com
orclVersion: 121000

Show the connect string of a database entry.

$ manage_tns.sh showcs -n pdb2
Directory Server: ldaps://tns1.example.com:10636
User: Loging into directory service anonymously
Show connect string of database pdb2
(DESCRIPTION=
         (ADDRESS=(PROTOCOL=TCPS)(HOST=cdb1.example.com)(PORT=2484))
         (SECURITY=
            (SSL_SERVER_DN_MATCH=TRUE)
            (WALLET_LOCATION=SYSTEM)
            (TOKEN_AUTH=AZURE_INTERACTIVE)
            (TENANT_ID=7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1)
            (AZURE_DB_APP_ID_URI=https://dbauthdemo.com/16736175-ca41-8f33-af0d-4616ade17621)
            (CLIENT_ID=e5124a85-ac3e-14a4-f2ca-1ad635cf781a))
      (CONNECT_DATA=
         (SERVER=DEDICATED)
         (SERVICE_NAME=pdb2)))


Unregister a database entry from the directory service.

$ manage_tns.sh unregister -n pdb1
Directory Server: ldaps://tns1.example.com:10636
User: Loging into directory as cn=eusadmin,ou=EUSAdmins,cn=oracleContext
Enter directory service TNS admin user's password: *********
Unregister database pdb1
Database unregistration completed successfully


That concludes this blog post. 

I hope that you found it useful and informative.

Blessings!















Tuesday, September 23, 2025

ZDLRA AD Integration



The Zero Data Loss Recovery Appliance (ZDLRA) from Oracle is a powerful and very popular backup and recovery solution for the Oracle Database that enables rapid recovery from outages and ransomware attacks.  A customer that I worked with recently inquired about how to integrate their ZDLRA appliance with Active Directory (AD) in order to enable centralized authentication and authorization of ZDLRA administrators using their existing AD credentials.

I shared that there are a variety of approaches you could take to centralize authentication, authorization, and life cycle management of ZDLRA administrators.  One of them is to leverage the System Security Services Daemon (SSSD) already included with the appliance's Oracle Linux operating system.  Authentication is outsourced to AD either via Kerberos or via password-based authentication over LDAPS, and authorization can be managed through the SSSD configuration as well.

Prerequisites

The only prerequisites are network access to one or more Active Directory (AD) domain controllers (preferably behind a load balancer or a round-robin fully qualified domain name) and an AD service account that can be used to look up AD users.  One other implied prerequisite is that all of the installation and configuration must be applied to both ZDLRA compute hosts of each ZDLRA system.  The default compute host IP addresses are outlined in the Factory IP Address Settings section of the ZDLRA Owner's Guide.
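
Before touching the appliance, it can be worth confirming that the AD service account can bind and search over LDAPS from a host on the same network.  This check is only a sketch; it assumes the OpenLDAP client tools are installed and reuses the example service account, AD host, and base DN from the configuration below:

$ ldapsearch -H ldaps://ad.example.com:636 -D "cn=zdlraadm,ou=Services,DC=example,DC=com" -W -b "dc=example,dc=com" "(sAMAccountName=jsmith)" dn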

Installation

Once you log in to the ZDLRA compute hosts, you will need to install the System Security Services Daemon (SSSD) and associated tools.

sudo dnf install sssd sssd-tools


Configuration

SSSD authentication is most easily enabled through the authconfig command.  In this example, we enable password-based authentication to an AD domain controller (ad.example.com).

The first configuration step is to enable SSSD authentication with the authconfig command:

$ sudo authconfig --enablesssd --enablesssdauth --enableldap --enableldapauth --ldapserver=ldaps://ad.example.com:636 --ldapbasedn=dc=example,dc=com --enableldaptls --enablerfc2307 --enablemkhomedir --enablecachecreds --update

Once SSSD authentication is enabled, you will want to tailor the resulting /etc/sssd/sssd.conf configuration file to your needs, including a few settings that are specific to ZDLRA.  Here is a sample sssd.conf configuration file.

[domain/default]
autofs_provider = ldap
ldap_search_base = dc=example,dc=com?subtree?
id_provider = ldap
auth_provider = ldap
chpass_provider = ldap
sudo_provider = ldap
resolver_provider = ldap
case_sensitive = False
ldap_uri = ldaps://ad.example.com:636
ldap_tls_cacertdir = /etc/pki/tls/certs
#ldap_tls_cacert = /etc/pki/ca-trust/extracted/pem/corpchain.pem
#ldap_tls_reqcert = never
ldap_id_use_start_tls = False
ldap_default_bind_dn = cn=zdlraadm,ou=Services,DC=example,DC=com
ldap_default_authtok_type = obfuscated_password
ldap_default_authtok = AAAgAC...
cache_credentials = True
enumerate = True
ldap_referrals = True
ldap_schema = rfc2307
ldap_user_object_class = user
ldap_group_object_class = group
ldap_user_name = samAccountName
ldap_user_uid_number = employeeid
ldap_user_gid_number = employeeid
override_gid = 11145
override_homedir = /rausers/%u
override_shell = /bin/bash
access_provider = permit
#access_provider = simple
#simple_allow_users = jsmith,nbrown
#simple_allow_groups = zdlradmins,itadmins,dbadmins
ldap_network_timeout = 60

ldap_opt_timeout = 60
ldap_search_timeout = 60
#debug_level = 9

[sssd]
services = nss, pam
config_file_version = 2
domains = default

[nss]
filter_users = root,ldap,named,avahi,haldaemon,dbus,news,nscd,raadmin

Certificate Validation

SSSD authentication requires an encrypted connection to AD. This means that the certificate chain of trust used to verify the authenticity of the AD domain controller's certificate presented during TLS negotiation must be specified in the SSSD configuration.  For certificates signed by public certificate authorities, the default trust store located in /etc/pki/tls/certs should be specified with:

ldap_tls_cacertdir = /etc/pki/tls/certs

If instead a private certificate authority is used to sign the AD certificate, then you will want to copy the private certificate chain in PEM format to a file on the ZDLRA compute servers and specify the location in the SSSD configuration with:

ldap_tls_cacert = /etc/pki/ca-trust/extracted/pem/corpchain.pem

Otherwise, if you are just testing and don't want to check the authenticity of the AD server certificate, you can disable certificate verification with:

ldap_tls_reqcert = never

Service Account Password Obfuscation

When specifying the password in sssd.conf, you don't want to provide the clear-text password.  The SSSD tools provide a command for obfuscating the password in the configuration file.  When running this command, it is best to specify the domain from the configuration file, which in this case is "default".  Here is a sample invocation:

sss_obfuscate -d default
Enter password: **************
Re-enter password: **************

Once the obfuscation command completes, the value of ldap_default_authtok_type in sssd.conf will be changed to obfuscated_password and the value of ldap_default_authtok will be updated with the obfuscated value.

AD Schema

SSSD will not have visibility into users in AD unless they have values for the user ID number and group ID number.  Although the AD schema supports these POSIX attributes, some customers do not populate them. The default attributes used for these values are uidNumber and gidNumber. If uidNumber and gidNumber are not populated in your user objects, an easy workaround is to use the employeeID value for both the user ID and group ID numbers.

ldap_user_uid_number = employeeid
ldap_user_gid_number = employeeid

ZDLRA Specific Overrides

There are a few specific overrides that need to be applied in order to ensure that the user has the proper group ID number, the proper home directory and shell.  These are set in the SSSD config with:

override_gid = 11145
override_homedir = /rausers/%u
override_shell = /bin/bash

Apply SSSD Configuration Changes

Once you complete all configuration changes to sssd.conf, you need to apply them by restarting the SSSD service.  Note that if you also have the name service cache daemon (nscd) running, you will need to restart it as well.  Once you are ready to test connecting remotely over secure shell (ssh), you will also need to restart sshd.  Here are sample restart and status commands:

sudo systemctl stop sssd
sudo systemctl start sssd
sudo systemctl status sssd

sudo systemctl stop nscd
sudo systemctl start nscd
sudo systemctl status nscd

sudo systemctl stop sshd
sudo systemctl start sshd
sudo systemctl status sshd

Synchronizing ZDLRA Compute Hosts

Once the SSSD configuration is complete on the first ZDLRA host, you will need to duplicate the configuration on the second host.  You will also want to add the administrative user with:

racli add admin_user --user_name=<samAccountName>

Troubleshooting SSSD

To confirm that the configuration is working properly, test looking up a user with id and getent:

id user10000
uid=10000(user10000) gid=10000 groups=10000

getent passwd user10000
user10000:*:10000:10000:Aaren Atp:/rausers/user10000:/bin/bash

If these are working as expected, then you should be able to ssh into the ZDLRA and run the ZDLRA command line tools for configuring and managing the ZDLRA appliance.  When you log in to the ZDLRA for the first time with ssh, the user's home directory is created as /rausers/<samAccountName>.  In some circumstances, the user's home directory may get created with permissions drwxr-xr-x (0755).  Check with "ls -ld /rausers/<samAccountName>".  If the permissions are drwxr-xr-x, tighten them with the following, where <samAccountName> is the user name of your user:

chmod 0700 /rausers/<samAccountName>

If id and getent are not returning values for known users, then you will want to set debug_level to 9 in sssd.conf, restart sssd (and, if applicable, nscd), run id or "getent passwd <samAccountName>" again, and review the domain log files in /var/log/sssd.

debug_level = 9

The most common reasons that I've observed thus far from the /var/log/sssd domain logs are the following:
1. SSSD cannot connect securely to the AD domain controller because the AD domain controller's certificate is self-signed or signed by a private certificate authority for which the ZDLRA compute host does not have a copy of the certificate chain needed to verify the certificate's authenticity.
2. The AD service account that SSSD is attempting to log in with doesn't exist.
3. The password of the AD service account is incorrect.
4. The AD service account is locked because of too many failed login attempts.

Authorization

Once the basic plumbing of AD integration is complete and users can ssh into the ZDLRA compute hosts, the next and final step is to determine who should have access to them.  This is accomplished through SSSD authorization configuration using the access_provider parameter.  The default value of access_provider is "permit", which means anyone can ssh into the host.  Other options include simple, ad, and ldap.  The easiest and most straightforward option for this use case is "simple", which lets you specify lists of specific users (by <samAccountName>) and groups that are allowed to log in over ssh to the ZDLRA compute hosts. For example, after SSSD integration is complete, you could apply the following configuration change to sssd.conf to limit logins to the specified users and members of the specified groups:

#access_provider = permit
access_provider = simple
simple_allow_users = jsmith,nbrown
simple_allow_groups = zdlradmins,itadmins,dbadmins

Once these authorization settings have been updated, restart sssd, sshd, and, if applicable, nscd.

That concludes one example of how to integrate the ZDLRA with AD.

I hope you found this information useful and informative.

Blessings!

Wednesday, July 9, 2025

Entra ID Integration: How To Rotate Service Principal Credential

Oracle database Entra ID integration enables centralization of authentication, authorization and user life cycle management of Oracle database users and service accounts.  Service accounts use the service principal authentication flow, where a client secret is added to the Entra ID web application for the database client service account. This secret is loaded into an Oracle 23ai (or newer) wallet on the host where the client application will use the wallet to authenticate to the target database with Entra ID integration. 

The secret loaded into the Entra ID web application expires after a specified amount of time; the default is 1 year. Therefore, from an operational perspective, it is important to set up a process to regularly rotate the secret in both the Entra ID web application and the Oracle wallet.  This is typically first set up interactively through the Entra ID web console and the orapki tool.  However, most database customers would prefer to script this operational task via the command line.

Fortunately, the combination of the Microsoft Azure CLI and Oracle 23ai client together can realize this objective in just 2 steps.

1. First, you reset the application credential and capture the new secret that is returned.

az ad app credential reset --id <appId> --display-name app_secret --years <n>


2. Second, you update the wallet where the client app resides

orapki secretstore modify_entry -wallet <dir> -pwd <pw> -alias oracle.security.azure.credential.<appId> -secret <secret>


Let's consider a working example.

First, install the Microsoft Azure CLI. For example, on Oracle/RedHat/CentOS 9 Linux:

sudo rpm --import https://packages.microsoft.com/keys/microsoft.asc
sudo dnf install -y https://packages.microsoft.com/config/rhel/9.0/packages-microsoft-prod.rpm
sudo dnf install azure-cli

Next, download and extract the Oracle full or instant client, because it includes the orapki command for wallet management.  Log in to https://edelivery.oracle.com and download the 23ai full client (V1044258-01.zip).

unzip -qo /u01/bits/V1044258-01.zip -d /u01/23ai_fullclient


Here is the script that I threw together to demonstrate an example of how you could automate updating the service account secret in Azure and the local wallet.

Important notes:
1. This script is not supported by Oracle or me. It is only provided as an example.
2. You must be logged in to the Azure CLI tool.  The script prompts you to log in if you are not already logged in.
3. If you copy the wallet to a remote host, you will need to re-enable auto login for the wallet on the new host (see the sketch below).
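
For note 3, re-enabling auto login on the destination host is typically just a matter of re-running orapki wallet create with the auto login flag against the copied wallet; the wallet directory and password below are placeholders, so treat this as a sketch:

orapki wallet create -wallet <dir> -pwd <pw> -auto_login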

Here's the sample script:

#!/bin/bash
wDir="$1"
appId="$2"
expiration="$3"
now=$(date +'%Y%m%d%H%M%S')

# Show usage if the wallet directory or appId is not provided
if [ -z "${wDir}" ] || [ -z "${appId}" ]
then
   echo "Usage: $0 <wallet_dir> <db_client_app_id> [<years>]"
   exit 1
fi

testLogin=$(az account show 2> /dev/null)
if [ -z "${testLogin}" ]
then
   echo "Login to Auzre for az CLI tool:"
   az login --allow-no-subscriptions --only-show-errors
fi

# Set default expiration at 10 years
if [ -z "${expiration}" ];then expiration=10;fi

# Securely read in the Oracle wallet password
echo -e "Enter wallet password: \c"
while IFS= read -r -s -n1 char
do
  [[ -z $char ]] && { printf '\n'; break; }
  if [[ $char == $'\x7f' ]]
  then
      [[ -n $wpw ]] && wpw=${wpw%?}
      printf '\b \b'
  else
    wpw+=$char
    printf '*'
  fi
done

# Path set from blog post demo
PATH=$PATH:/u01/23ai_fullclient/bin

# Reset secret and capture new password
secret=$(az ad app credential reset --only-show-errors --id ${appId} --display-name app_secret --years ${expiration} 2>> $0-${now}.log |jq -r '.password')
if [ -z "${secret}" ]
then
   echo "ERROR: Failed to create secret. See: $0-${now}.log"
   exit 1
fi

# Create wallet if one does not exist
if [ -e "${wDir}/ewallet.p12" ]
then
   walletOp='modify_entry'
else
   walletOp='create_entry'
   echo "Create wallet ${wDir}"
   orapki wallet create -wallet "${wDir}" -pwd "${wpw}"  -auto_login >> $0-${now}.log 2>&1
fi

# Load the wallet with  the secret
echo "Load secret into wallet"
orapki secretstore ${walletOp} -wallet "${wDir}" -pwd "${wpw}" -alias oracle.security.azure.credential.${appId} -secret "${secret}" >> $0-${now}.log 2>&1
rc=$?

if [ "${rc}" -ne 0 ]
then
   echo "ERROR: Failed to load secret into wallet. See: $0-${now}.log"
   exit 1
fi

echo "Secret loaded into wallet ${wDir}"


Here is a sample invocation:

$ /u01/rotate_secret.sh /u01/swallet 186f231c-830e-4513-9b64-34f341848050
Login to Azure for az CLI tool:
To sign in, use a web browser to open the page https://microsoft.com/devicelogin and enter the code FTB7E3DR5 to authenticate.

Retrieving tenants and subscriptions for the selection...

[Tenant and subscription selection]

No     Subscription name          Subscription ID                       Tenant
-----  -------------------------  ------------------------------------  ------------------------------------
[1] *  N/A(tenant level account)  7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1  7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1

The default is marked with an *; the default tenant is '
7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1' and subscription is 'N/A(tenant level account)' (7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1).

Select a subscription and tenant (Type a number or Enter for no changes): 1

Tenant: 
7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1
Subscription: N/A(tenant level account) (7f4c6e3e-a1e0-43fe-14c5-c2f051a0a3a1)

[Announcements]
With the new Azure CLI login experience, you can select the subscription you want to use more easily. Learn more about it and its configuration at https://go.microsoft.com/fwlink/?linkid=2271236

If you encounter any problem, please open an issue at https://aka.ms/azclibug

Enter wallet password: *********
Load secret into wallet
Secret loaded into wallet /u01/swallet





Wednesday, July 2, 2025

Entra ID Client Credential Authentication Flow For Oracle Database Service Accounts

Entra ID integration is one of the new cloud-native Oracle database authentication, authorization, and user life cycle management architectures introduced in 2022.  In addition to authentication, authorization, and user life cycle management, this architecture adds multi-factor authentication and a unified password policy with on-premises Active Directory (AD), because most customers sync their users and groups with their Entra ID tenancy.

The three main Oracle Database authentication flows for Entra ID integration are:
  • Interactive authentication flow for humans connecting from devices that support a pop-up browser
  • Device code authentication flow for humans connecting through jump hosts that do not support a pop-up browser
  • Client credential authentication flow for service accounts
This blog post focuses on the client credential authentication flow to provide a simplified recipe for setting up Oracle database client service accounts with Entra ID integration.

The configuration encompasses configuration in each of the following three areas:
  • Entra ID
    • New app role in the target database
    • Service account app registration
    • Service account pre-shared credential
    • Add service account to target database allow app ingress rules
  • Database Server
    • Add service account
    • Apply grants to service account
  • Database Client
    • Create wallet with pre-shared credential
    • Add TNS entry for service account with AZURE_CREDENTIALS
In this example, let's assume that we are adding a service account for the Human Resources (HR) application.  The Entra ID app role will be named app_hr.role, the HR service account app registration in Entra ID will be named dbclient_service_hr, and the Oracle database server representation of this service account will be named app_hr.
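
As a preview of the database server portion, one common approach is a shared schema identified globally and mapped to the app role named above; this is only a minimal sketch following the same IDENTIFIED GLOBALLY pattern used for the roles in the earlier RAS post:

SQL> CREATE USER app_hr IDENTIFIED GLOBALLY AS 'AZURE_ROLE=app_hr.role';
SQL> GRANT CREATE SESSION TO app_hr;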

Wednesday, June 25, 2025

Upgrade OUD 12cPS4 to OUD 14c

Oracle Unified Directory (OUD) 14c is now available, and upgrading has never been easier.  The upgrade and migration steps are covered in the official OUD 14c documentation.
From my experience, there are at least four viable upgrade paths, but most customers will use one of the first two options below:

I. Swing Migration
II. Upgrade In Place
III. Upgrade To New Middleware Home
IV. Sync New Topology

Each option is outlined below.  Before getting into the details, it is critical to pre-qualify the migration steps and validation testing in lower, non-production environments before attempting them in production.

I. Swing Migration

This migration strategy expands the existing OUD 12cPS4 topology by adding new OUD 14c instances on new or existing infrastructure. The basic workflow looks like the following:

1. Setup new or existing hosts for the new OUD 14c instances

2. Install JDK 21 (or 17) and the OUD 14c software, either interactively or automated with response files

2.1 Extract software

cd /opt/ods/poc/sw
tar -zxf /opt/ods/poc/bits/14c/jdk-21_linux-x64_bin.tar.gz
unzip -qo /opt/ods/poc/bits/14c/V1048203-01.zip
unzip -qo /opt/ods/poc/bits/14c/p37376076_141200_Generic.zip


2.2 Make response files for automated OUD 14c installation

cat  /opt/ods/poc/cfg/oraInventory.loc
inventory_loc=/opt/ods/poc/cfg/oraInventory
inst_group=opc

cat  /opt/ods/poc/cfg/oud14c-standalone.rsp
[ENGINE] 
Response File Version=1.0.0.0.0
[GENERIC]
DECLINE_AUTO_UPDATES=true
MOS_USERNAME=
MOS_PASSWORD=<SECURE VALUE>
AUTO_UPDATES_LOCATION=
SOFTWARE_UPDATES_PROXY_SERVER=
SOFTWARE_UPDATES_PROXY_PORT=
SOFTWARE_UPDATES_PROXY_USER=
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>
ORACLE_HOME=/opt/ods/poc/mw_oud14c
INSTALL_TYPE=Standalone Oracle Unified Directory Server (Managed independently of WebLogic server)


2.3 Install OUD 14c

export JAVA_HOME=/opt/ods/poc/sw/jdk-21.0.6
$JAVA_HOME/bin/java -jar  /opt/ods/poc/sw/fmw_14.1.2.1.0_oud.jar -silent -ignoreSysPrereqs -responseFile  /opt/ods/poc/cfg/oud14c-standalone.rsp -invPtrLoc /opt/ods/poc/cfg/oraInventory.loc


3 Setup OUD instance

export JAVA_HOME=/opt/ods/poc/sw/jdk-21.0.6
export ORACLE_HOME=/opt/ods/poc/mw_oud14c
/opt/ods/poc/mw_oud14c/oud/oud-setup --cli --integration no-integration --instancePath /opt/ods/poc/mw_oud14c/oud3/OUD --adminConnectorPort 3444 --ldapPort 3389 --ldapsPort 3636 --httpAdminConnectorPort 3555 --httpPort disabled --httpsPort 3443 --baseDN dc=example,dc=com --rootUserDN 'cn=Directory Manager' --addBaseEntry --enableStartTLS --useJavaKeystore /opt/ods/poc/cfg/certs/oud1.example.com/oud1.example.com.jks --keyStorePasswordFile /opt/ods/poc/cfg/certs/oud1.example.com/oud1.example.com.pin --certNickname server-cert --hostName oud1.example.com --noPropertiesFile


4. Apply desired configuration to the OUD 14c instance including custom schema, indexing, password policy, ... etc.

cd /opt/ods/poc/mw_oud14c/oud3/OUD/bin
./ldapmodify -Z -X -p 1444 -h oud3.example.com -D "cn=Directory Manager" -j /opt/ods/poc/cfg/...pw -c -f /opt/ods/poc/cfg/custom_schema.ldif
./dsconfig --batchFilePath /opt/ods/poc/cfg/custom_config.batch --hostName oud3.example.com --port 1444 --bindDN "cn=Directory Manager" --bindPasswordFile /opt/ods/poc/cfg/...pw  --trustAll --no-prompt --noPropertiesFile
./stop-ds
./start-ds


5.  Add the OUD 14c instance to the OUD 12c replication topology

cd /opt/ods/poc/mw_oud14c/oud3/OUD/bin
./dsreplication enable --secureReplication1 --secureReplication2 --host1 oud1.example.com --port1 1444 --bindDN1 'cn=Directory Manager' --bindPasswordFile1 /opt/ods/poc/cfg/...pw --bindPasswordFile2 /opt/ods/poc/cfg/...pw --replicationPort1 1989 --host2 oud3.example.com --port2 1444 --bindDN2 'cn=Directory Manager' --replicationPort2 1989 --baseDN dc=example,dc=com --adminUID admin --adminPasswordFile /opt/ods/poc/cfg/...pw --trustAll --noPropertiesFile --no-prompt


6. Initialize the OUD 14c instance with data

Initialization options include initialization over protocol, via binary backup restore, and LDIF file import.

6.1 Initialize over protocol
Initialize the OUD 14c instance on oud3.example.com from the data of the OUD 12cPS4 instance on oud1.example.com

cd /opt/ods/poc/mw_oud14c/oud3/OUD/bin
./dsreplication initialize --hostSource oud1.example.com --portSource 1444 --portProtocolSource auto-detect --hostDestination oud3.example.com --portDestination 1444 --portProtocolDestination auto-detect --baseDN dc=example,dc=com --adminUID admin --adminPasswordFile /opt/ods/poc/cfg/...pw --trustAll --no-prompt


6.2 Initialize from recent binary backup

6.2.1 If you don't already have a backup from an OUD 12cPS4 instance, create one with the backup command.

cd /opt/ods/poc/mw_oud12c/oud1/OUD/bin
./stop-ds
./backup -c -n userRoot --backupDirectory /opt/ods/poc/tmp/backup/userRoot
./start-ds


6.2.2 Copy the backup from the OUD 12cPS4 instance to the new OUD 14c instance

6.2.3 Restore the binary backup to the OUD 14c instance

cd /opt/ods/poc/mw_oud14c/oud3/OUD/bin
./stop-ds
./restore --backupDirectory /opt/ods/poc/tmp/backup/userRoot
./start-ds


6.3 Initialize from recent LDIF export

6.3.1 Export to LDIF file from OUD 12cPS4 instance

cd /opt/ods/poc/mw_oud12c/oud1/OUD/bin
./stop-ds
./export-ldif -c -n userRoot -l /opt/ods/poc/tmp/export.ldif
./start-ds

6.3.2 Copy the exported LDIF file to the OUD 14c host

6.3.3 Initialize the OUD 14c instance from the LDIF file

cd /opt/ods/poc/mw_oud14c/oud3/OUD/bin
./stop-ds
./import-ldif --skipDNValidation --skipSchemaValidation -c -n userRoot -l /opt/ods/poc/tmp/export.ldif
./start-ds

7. Rinse and repeat steps 1-5 until the desired number of OUD 14c instances have been deployed

8. Satisfy functional and performance qualification criteria by applying client load

9. Introduce load to the OUD 14c instances by either adding them to an existing network load balancer virtual IP address (VIP) or setting up a new VIP and iteratively migrating applications to the new VIP until no client load remains on the previous VIP.

Important Note: At this point, if you have any issues at all, your rollback strategy is to revert the network load balancer VIPs to their previous state, where all load goes to the OUD 12cPS4 instances.

10. Once all applications are transitioned to OUD 14c, remove the OUD 12cPS4 instances from the load balancing infrastructure if necessary.  If you transitioned all applications to a new VIP, you can point the old DNS name at the new VIP so that any straggler apps still using the old name get routed to the proper VIP.

11. The last step is to deinstall the OUD 12cPS4 instances using the OUD deinstall script so that each OUD 12cPS4 instance is properly removed from the OUD replication topology.  Then, once all of the OUD 12cPS4 instances have been deinstalled and cleanly removed from the OUD topology, you can decommission the OUD 12cPS4 infrastructure. 

II. Upgrade In Place

With this migration strategy, you upgrade existing OUD 12c instances in place in the existing middleware home directory. The basic workflow looks like the following:

1. Stop and backup the existing OUD 12cPS4 instance(s)

Important Note: This backup strategy presumes that the backend data and logs reside within the OUD instance directory path.

/opt/ods/poc/mw_oud12c/oud1/OUD/bin/stop-ds
cd /opt/ods/poc/mw_oud12c
tar -czf /opt/ods/poc/oud1.tgz oud1


2. Backup the OUD 12cPS4 middleware home

cd /opt/ods/poc
tar -czf /opt/ods/poc/mw_oud12c.tgz --exclude oud1 mw_oud12c


3. Deinstall the OUD 12cPS4 middleware home

cd /opt/ods/poc/mw_oud12c
./oui/bin/deinstall.sh -silent -distributionName 'Oracle Unified Directory'
rm -fr /opt/ods/poc/mw_oud12c


4. Extract JDK 21 (or 17) and OUD 14c

cd /opt/ods/poc/sw
tar -zxf /opt/ods/poc/bits/14c/jdk-21_linux-x64_bin.tar.gz
unzip -qo /opt/ods/poc/bits/14c/V1048203-01.zip
unzip -qo /opt/ods/poc/bits/14c/p37376076_141200_Generic.zip


5.1 Make response files for automated OUD 14c installation

cat  /opt/ods/poc/cfg/oraInventory.loc
inventory_loc=/opt/ods/poc/cfg/oraInventory
inst_group=opc

cat  /opt/ods/poc/cfg/oud14c-standalone.rsp
[ENGINE] 
Response File Version=1.0.0.0.0
[GENERIC]
DECLINE_AUTO_UPDATES=true
MOS_USERNAME=
MOS_PASSWORD=<SECURE VALUE>
AUTO_UPDATES_LOCATION=
SOFTWARE_UPDATES_PROXY_SERVER=
SOFTWARE_UPDATES_PROXY_PORT=
SOFTWARE_UPDATES_PROXY_USER=
SOFTWARE_UPDATES_PROXY_PASSWORD=<SECURE VALUE>
ORACLE_HOME=/opt/ods/poc/mw_oud12c
INSTALL_TYPE=Standalone Oracle Unified Directory Server (Managed independently of WebLogic server)


5.2 Install OUD 14c

export JAVA_HOME=/opt/ods/poc/sw/jdk-21.0.6
$JAVA_HOME/bin/java -jar  /opt/ods/poc/sw/fmw_14.1.2.1.0_oud.jar -silent -ignoreSysPrereqs -responseFile  /opt/ods/poc/cfg/oud14c-standalone.rsp -invPtrLoc /opt/ods/poc/cfg/oraInventory.loc


5.3 Apply patch(es) to OUD 14c

Important Note: Patch 37376076 is not required.  It is just provided for illustrative purposes.

export JAVA_HOME=/opt/ods/poc/sw/jdk-21.0.6
export ORACLE_HOME=/opt/ods/poc/mw_oud12c
cd /opt/ods/poc/sw/37376076
$ORACLE_HOME/OPatch/opatch apply


6. Restore the OUD 12cPS4 instance(s)

cd /opt/ods/poc/mw_oud12c
tar -xzf /opt/ods/poc/oud1.tgz


7. Upgrade the OUD 14c instance scripts to reflect the new JDK path

cd /opt/ods/poc/mw_oud12c
./oud/bin/upgrade-oud-instances --instancePath /opt/ods/poc/mw_oud12c/oud1


8. Upgrade the OUD 14c instance(s)

/opt/ods/poc/mw_oud12c/oud1/OUD/bin/start-ds --upgrade


9. Start the OUD 14c instance(s)

/opt/ods/poc/mw_oud12c/oud1/OUD/bin/start-ds


That completes the demonstration of an in place migration of an OUD instance from OUD 12cPS4 to OUD 14c.

III. Upgrade To New Middleware Home

With this migration strategy, you transition existing OUD 12c instances to a new middleware home (e.g. mw_oud14c) from the existing middleware home directory (e.g. mw_oud12c).  The only benefit of this method is that it may streamline rollback if necessary.  Most customers will either migrate in place or swing to new OUD 14c instances on new infrastructure.  The workflow of this upgrade method is identical to the in place upgrade except using a new middleware home (e.g. mw_oud14c) instead of the existing middleware home (e.g. mw_oud12c).

IV. Sync New Topology

With this migration strategy, you set up a completely independent OUD 14c topology, set up an OUD 14c Directory Integration Platform (DIP) instance, and configure bi-directional DIP sync profiles between one of the OUD 12cPS4 instances and one of the OUD 14c instances to sync data between the OUD 12cPS4 and OUD 14c replication topologies. The workflow is similar to the swing migration with regards to load balancer configuration and rollback methodology.  Once the migration is complete, in addition to decommissioning the OUD 12cPS4 infrastructure, you decommission the DIP instance as well.  A detailed example of this migration flow is beyond the scope of this blog post because most customers will use one of the first two approaches.

I hope that you found this helpful.

Brad