While preparing in 1992 for a co-operative education program interview at IBM, to support their new RISC System/6000 line of servers and its version of the UNIX operating system, Advanced Interactive eXecutive (AIX), I was introduced to what would become one of my favorite pastimes: shell scripting. At that time, the common shells were the Bourne shell (sh), the Korn shell (ksh), and the C shell (csh). Later, Linux distributions would adopt what is now the default shell of many UNIX-like systems, the Bourne-Again shell (bash).
Though shell scripting is technically not a programming language, everyone who works in the UNIX domain becomes familiar with it at some level. Over my 30+ year career in IT, I've easily contributed more than 10,000,000 lines of shell script to the projects that I've worked on. The purpose of this blog is to pass along some of the gems that I've collected along the way.
When working on projects that include multiple scripts, one of the biggest problems is writing all of the scripts so that they work properly regardless of the directory path from which they are invoked. For example, my current project is composed of 84 different scripts. The answer is to define a common path context in the header of every script in the project. My standard for this is the following:
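Assembled, the header looks like this (quoting "$0" guards against paths that contain spaces):

```shell
#!/bin/bash
# Common path context: determine the script name and its fully qualified
# directory regardless of how the script was invoked.
cmd=$(basename "$0")
curdir=$(dirname "$0")
curdir=$(cd "${curdir}"; pwd)
```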
Let's unpack this simple set of lines to understand what's going on.
1. Define the shell to use for this script's runtime environment via the shebang directive.
#!/bin/bash
2. Determine the script name. The $0 variable is the execution path of the script being run. For example, the execution path could have been to run the script in the current directory with ./myscript.sh. Or, it could have been called from a directory above with ../myscript.sh. Or, perhaps it might have been called using the full path of /opt/ods/poc/myscript.sh. The basename command trims off the path to leave just the script name. We assign the output of the basename command to the cmd variable so that it can be used as a common reference (${cmd}) throughout the rest of the script.
cmd=$(basename "$0")
3. Determine the base directory from the script's location. The dirname command trims off the script name, leaving just the path to the script. We temporarily assign the dirname output to curdir as the first step in determining the current directory.
curdir=$(dirname "$0")
4. Lastly, we change directory to the current value of ${curdir} and then use the present working directory (pwd) command to determine the fully qualified path of the directory that we are in.
curdir=$(cd "${curdir}"; pwd)
You may be wondering why we didn't just stop at step 3 because, technically, that is the current directory. The reason is that the script can be invoked in a variety of ways that do not use the full path name. For example, if I ran the script with ../../../myscript.sh, the first value of curdir would be "../../..", which is not the fully qualified path. However, after step 4, the fully qualified path is rightly determined to be /opt/ods/poc.
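A quick sketch that shows the difference (the script name is hypothetical; run this from any directory):

```shell
# Simulate being invoked as ../../myscript.sh
rel="../../myscript.sh"
curdir=$(dirname "$rel")       # relative form: "../.."
curdir=$(cd "${curdir}"; pwd)  # resolved to a fully qualified path
echo "${curdir}"
```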
Oracle Unified Directory (OUD) is one directory service product included in the Oracle Directory Services Plus (ODS+) suite that is used for a wide variety of use cases requiring LDAPv3 interoperability.
One very common use case is Oracle Database name resolution, which is referred to by several names, including "net services", Transparent Network Substrate (TNS), and Oracle Names (ONAMES), depending on the Oracle Database version.
Oracle Database name resolution is to Oracle Databases what the Domain Name System (DNS) is to the web: just as DNS resolves a fully qualified domain name to an IP address so that a web browser or ssh client can connect to the associated host, name resolution turns a database alias into a full connect string. For more information on Oracle Database name resolution (a.k.a. Net Services), see the Oracle Database documentation here: https://docs.oracle.com/en/database/oracle/oracle-database/19/netag/
Basic Workflow
In this use case, Oracle Database clients connect to the OUD directory server anonymously (the default) or via basic authentication, over either the LDAP (the default) or LDAPS protocol. Once connected, the database client requests the connect string for a specific database. OUD returns the connect string, and the database client then uses it to connect to the Oracle Database. The following is a sample connect string returned by OUD:
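For illustration, such a connect string typically takes the TNS descriptor form; the host, port, and service name below are hypothetical placeholders:

```
(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=db1.example.com)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=pdb1.example.com)))
```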
Temporary password file for setup (/opt/oud/bits/.pw)
$ echo 'Oracle123' > /opt/oud/bits/.pw
Java security configuration file (/opt/oud/bits/tns-java.security) that allows anonymous cipher suites, which are required for registering databases with dbca
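A minimal sketch of what that file might contain. The exact disabled-algorithms list varies by JDK version, so treat this as an assumption; the essential point is that anon does not appear in jdk.tls.disabledAlgorithms:

```
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, \
    DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC
```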
List patch inventory to see current OUD version and what patches are installed
$ $ORACLE_HOME/OPatch/opatch lsinventory
Install the OPatch patch
$ cd /opt/oud/bits/6880880
$ $JAVA_HOME/bin/java -jar /opt/oud/bits/6880880/opatch_generic.jar -silent oracle_home=$ORACLE_HOME
Install the OUD patch, responding interactively with y to both questions
$ cd /opt/oud/bits/35263333
$ $ORACLE_HOME/OPatch/opatch apply
List the patch inventory to compare with the previous lsinventory output
$ $ORACLE_HOME/OPatch/opatch lsinventory
7. Set the OPENDS_JAVA_ARGS environment variable so that when OUD instances start, they use our custom tns-java.security configuration file rather than the default.
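For example, in the environment file or startup script for the instance (the java.security.properties override is a standard JDK mechanism; the file path matches the one created earlier):

```shell
# Use our custom security properties file instead of the JDK default.
export OPENDS_JAVA_ARGS="-Djava.security.properties=/opt/oud/bits/tns-java.security"
```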
13. Create a realm configuration to add the TNS admin and grant privileges to manage database entries. Here is a sample realm configuration file in LDIF format:
dn: ou=TNSAdmins,cn=OracleContext
changetype: add
objectClass: top
objectClass: organizationalUnit
ou: TNSAdmins
Beginning with Oracle Unified Directory (OUD) 12c Patch Set 4, Oracle began adding new features and functionality along with bug fixes with each bundle patch release. The What's New section of the documentation covers new features as they are added with each bundle patch release.
One such feature enhancement introduced a new password storage scheme that is used by the Enterprise User Security (EUS) and Centrally Managed Users (CMU) architectures for password-based user authentication to the Oracle database.
This new password scheme is a proprietary blend of multiple rounds of PBKDF2 SHA-512, which is much stronger than the storage schemes used for earlier Oracle database versions (e.g. 10g and 11g). The full list of password storage schemes offered by OUD 12cPS4 is available here.
With the EUS architecture configuration, where OUD is the identity store for authentication, identity management solutions like Oracle Identity Manager, SailPoint, and others can simply update the password through a normal password update, and the OUD password policy will automatically generate this password hash for database authentication.
With the EUS or CMU architectures where Active Directory (AD) is the identity store, the individual user's orclCommonAttribute value needs to be updated with this new hash in order for password based authentication to work properly.
The standard method of updating the user's orclCommonAttribute attribute value is through the deployment of the password filter to all AD domain controllers. When a user updates their password with the Ctrl-Alt-Delete feature of Windows, Oracle's password filter (orapwdfltr.dll) captures the clear-text password entered by the user, hashes it, and stores the hashed value in the orclCommonAttribute attribute of the user's AD object. See Doc ID 2640135.1 for more information on how to obtain and deploy the latest version of this password filter.
There is an alternative approach to populating the user's orclCommonAttribute in AD that achieves the same end result but does not require the password filter. You can use the OUD encode-password command to generate the hashed value of the password and then update orclCommonAttribute in the user's AD entry. This approach could be dovetailed into your provisioning solution as well. Here is a sample workflow:
1. Install OUD.
Note: If integrating with an Identity Management solution, OUD would most likely need to be installed on the host(s) where the Identity Management solution is running in order to handle the password securely.
2. Use the OUD encode-password command to generate the hash of the user's password.
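A sketch of this step. The $OUD_HOME path and the scheme name are illustrative assumptions; use encode-password --listSchemes to see the schemes your OUD version supports, and pick the one EUS/CMU expects:

```
$ $OUD_HOME/bin/encode-password --listSchemes
$ $OUD_HOME/bin/encode-password -s PBKDF2 -c 'MyUserPassword'
```

The resulting hashed value is what you would then write into the user's orclCommonAttribute in AD.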
Enterprise User Security (EUS) is one of the architectures available for centralizing authentication, authorization, and user/password lifecycle management of Oracle Database users. One Oracle database error that customers often encounter as they begin evaluating the EUS architecture is the following:
ORA-01017: invalid username/password; logon denied
This error can be especially frustrating because there are a variety of possible causes.
Here are some common causes and corresponding solutions and troubleshooting techniques:
The wallet containing the ORACLE.SECURITY.DN and ORACLE.SECURITY.PASSWORD entries does not exist
$ ls -al $ORACLE_BASE/admin/$ORACLE_SID/wallet
The wallet containing the ORACLE.SECURITY.DN and ORACLE.SECURITY.PASSWORD entries exists but is empty, or has missing or incorrect values (note that passwords are case-sensitive). To troubleshoot, retrieve the values from the wallet with:
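One way to do this is with the mkstore utility (the wallet path matches the one above; mkstore prompts for the wallet password):

```
$ mkstore -wrl $ORACLE_BASE/admin/$ORACLE_SID/wallet -viewEntry ORACLE.SECURITY.DN
$ mkstore -wrl $ORACLE_BASE/admin/$ORACLE_SID/wallet -viewEntry ORACLE.SECURITY.PASSWORD
```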
If the Oracle database was upgraded from an earlier version to 18c or newer, the mappings may need to be re-created. See Doc ID 2611300.1
The EUS configuration (e.g. the sample in /<oud_install>/oud/config/EUS/modifyRealm.ldif) has not yet been applied or is misconfigured. See Doc ID 2118421.1
The Certificate Authority (CA) certificate chain or OUD self-signed certificate is not loaded into the wallet. To troubleshoot, confirm the presence of the certificate in the wallet with:
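For example, using orapki (wallet path as above):

```
$ orapki wallet display -wallet $ORACLE_BASE/admin/$ORACLE_SID/wallet
```

The OUD or CA certificate should appear in the trusted certificates section of the output.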
Database startup fails with ORA-01017. In this case, the grid user needs to be a member of the OSRACDBA group. See Doc ID 2313555.1
ORA-01017 with a RAC database. This can be caused by inconsistent wallets on the RAC nodes, or by sharing the same wallet via an NFS share across all nodes when auto_login only works on the node where it was set.
The wrong ORACLE_SID environment variable value may have been specified, so authentication fails because you are attempting to connect to the wrong database.
If using tnsnames.ora, the connect string may be pointing to the wrong database, for which the user or user/password combination is not valid.
When troubleshooting error ORA-01017 from the database perspective, you will want to enable tracing to determine the reason for the authentication failure.
Step 1: Enable Oracle database tracing with:
$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> alter system set events '28033 trace name context forever, level 9';
Step 2: Perform an authentication attempt that fails with ORA-01017
Step 3: Disable tracing with:
$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> alter system set events '28033 trace name context off';
Step 4: Look up the path of the trace files (in case they aren't in the default location):
$ $ORACLE_HOME/bin/sqlplus / as sysdba
SQL> show parameter dump;
Step 5: Review the trace file, looking for KZLD_ERR messages
When troubleshooting error ORA-01017 from the directory service perspective, you will want to review the directory service logs. In the case of Oracle Unified Directory (OUD), review the /<oud_instance>/OUD/logs/access or /<oud_instance>/OUD/logs/access.log file, depending on which logger is enabled. Things to look for include:
Authentication attempt by <eus_user_id> fails because user does not exist (err=32)
Authentication attempt by <eus_user_id> fails because the wrong password is used (err=49)
Connection to the OUD instance fails because the client and server cannot agree during the LDAPS cryptographic negotiation. You will typically see the error "no cipher suites in common". See Doc ID 2397791.1 for OUD 12c and Doc ID 2304757.1 for OUD 11g. Note that this can happen if you've upgraded JDK 8 to a version that has deprecated the anonymous and NULL cipher suites. In this case, you will need to update the jre/lib/security/java.security file of the JDK used by OUD to remove anon from jdk.tls.disabledAlgorithms. Here is a sample java.security for jdk1.8.0_361:
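An illustrative sketch of the relevant line (the stock disabled-algorithms list differs across JDK 8 updates, so verify against your own file); the key change is that anon and NULL have been removed from jdk.tls.disabledAlgorithms:

```
jdk.tls.disabledAlgorithms=SSLv3, TLSv1, TLSv1.1, RC4, DES, MD5withRSA, \
    DH keySize < 1024, EC keySize < 224, 3DES_EDE_CBC
```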
Beginning with Oracle Unified Directory (OUD) 12c Patch Set 4, Oracle began adding new features and functionality along with bug fixes with each bundle patch release. The What's New section of the documentation covers new features as they are added with each bundle patch release.
One such feature enhancement introduced connection details to each of the OUD log publishers. When enabled, this enhancement tags each operation in the log file with the following additional details:
These additional details enable log analytics tools like Oracle Cloud Infrastructure Logging Analytics to provide deep insights into the use and security posture of the OUD service, helping you strengthen that posture, identify out-of-date clients, find the root cause of cryptographic communication breakdowns, and even identify potential threat actors.
Here are some of the questions that each of these additional details can enable you to answer:
Authentication Distinguished Name
What operations did James perform on the directory server over the past 48 hours?
What users added new entries over the past 90 days?
Are any clients connecting anonymously to OUD?
Protocol
What clients or users are connecting to OUD via non-encrypted LDAP?
What clients are connecting via REST/SCIM?
Client
What is the volume of load per client IP address?
From what client IP addresses were write operations performed?
From what client IP addresses were anonymous authentications performed?
Server
Is the distribution of load even across servers in the load balanced pool?
What OUD instances receive write operations?
Which OUD instances are processing un-indexed searches or other abusive loads?
Which OUD instances are receiving non-encrypted LDAP connections?
Cryptographic Protocol
Which users are requesting weak cryptographic protocols like SSLv3?
What is the distribution of cryptographic protocols handled by the OUD service?
Based on client load, can we disable weak cryptographic protocols?
Which clients need to be patched or updated to use strong cryptography?
Which clients need their trust store updated with the latest certificate authority certificate chain or perhaps need the updated self-signed certificates?
Cryptographic Cipher Suite
Which users are using anonymous or weak cipher suites when connecting to OUD?
What is the distribution of cipher suites being used by clients?
Based on client load, can we disable weak cryptographic cipher suites?
Enabling these additional connection details is straightforward and can be done with the dsconfig command-line tool (interactively, non-interactively, or in batch) or with the web-based administrative console (Oracle Unified Directory Services Manager).
Here is a sample non-interactive dsconfig command for enabling connection details on the File-Based Access logger:
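A hedged sketch of what that command might look like. The set-log-publisher-prop subcommand and connection options are standard dsconfig usage, but the exact property name that enables connection details is an assumption here and may vary by bundle patch, so verify it against your version's documentation:

```
$ dsconfig set-log-publisher-prop \
    --publisher-name "File-Based Access Logger" \
    --set log-connection-details:true \
    --hostname localhost --port 4444 \
    --bindDN "cn=Directory Manager" --bindPasswordFile ~/.pw \
    --trustAll --no-prompt
```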
TIME_WAIT is a normal part of the TCP/IP stack: after a connection is closed, the endpoint that initiated the close keeps the connection in the TIME_WAIT state (typically for twice the maximum segment lifetime) so that stray, delayed segments from the old connection can expire safely. However, when clients open and close connections inefficiently, TCP connections in the TIME_WAIT state accumulate until the operating system purges them. For a busy system, this can amount to a denial of service on the host because all available ephemeral ports get tied up in the TIME_WAIT state.
Having worked with many large-scale and high-performance systems through the years, I've seen this scenario play out many times. Fortunately, each operating system has its own way of optimizing for this scenario to minimize the impact.
To determine if this is a problem on your host, track the number of connections in the TIME_WAIT state. For example, on UNIX/Linux and macOS systems, you can count these connections with netstat:
$ netstat -an|grep -c TIME_WAIT
45564
Here are TCP tunings per operating system that I have used to mitigate this issue:
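As an illustration, here are the kinds of Linux sysctl knobs involved; the values are examples rather than recommendations (test before applying in production), and other operating systems such as AIX, Solaris, and macOS have their own equivalents:

```
# Allow reuse of sockets in TIME_WAIT for new outbound connections (Linux).
$ sudo sysctl -w net.ipv4.tcp_tw_reuse=1
# Release half-closed (FIN-WAIT-2) connections sooner.
$ sudo sysctl -w net.ipv4.tcp_fin_timeout=15
# Widen the ephemeral port range so more concurrent connections are possible.
$ sudo sysctl -w net.ipv4.ip_local_port_range="10240 65535"
```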
In my 2015 post "Have Solar Systems Finally Reached Financial Viability For The Masses?" I concluded that solar systems had not yet reached financial viability in Texas because the cost of electricity from an electricity provider was less than what it would be through a purchased or leased solar system. In this post, I'm checking to see whether the current cost of solar has finally reached the financial tipping point and, if not, what the tipping point is in Texas.
For this research, I called several solar companies. However, the only three that would provide quotes over the phone were Tesla (formerly Solar City), SunRun, and Elevation Solar. All three were more than $.11/kWh. This is still $.03/kWh more than what I'm currently paying on average. Here's the monthly data from my house. On average, my actual cost for the last 12 months was $.09/kWh which is not far off from the $.086/kWh in my electricity provider contract.
According to PowerToChoose.org, it looks like the price for my usage is dropping to $.076/kWh for the next 12 months.
The good news is that the average solar power price since 2015 has dropped from $.13/kWh to $.11/kWh. If the rate reduction of $.01/kWh every 2 years continues, the projected breakeven rate of $.09/kWh would arrive in 2023. It is worth noting that my electricity rate has been under $.10/kWh for more than the last 15 years where I live in DFW.
Unfortunately, my prognostication regarding lowering the cost of solar by 10x with the transition to graphene-based solar cells has yet to be realized. I'm still hopeful, as the scientific community still believes that graphene will dramatically increase the energy capture percentage and improve energy transfer efficiency. As a bonus, graphene's transparency lends itself to a much larger array of applications, including windshields, car wraps, phone screens, home and business window films, etc.
The following video provides a great primer on the rationale for graphene efficiency improvements.
Other innovations in solid-state batteries and super-capacitor storage will also greatly improve energy transfer efficiency and reduce the cost of electricity storage over the next 5 to 10 years.
Once these three technologies are fully realized and applied to solar power solutions, the price of electricity will likely drop to less than a penny per kWh. Hopefully, we will see that power revolution in my lifetime.
Two very important notes regarding investing in a solar system in Texas:
1. The technology employed at the time of installation cannot be upgraded until you pay off the loan. Therefore, if solar efficiencies double next year, you can't take advantage of that improvement for 20 years, or until you pay off the loan.
2. If your roof gets damaged by hail, the installer has to remove the solar system and then re-install it after the roof is replaced. The cost for removal and re-installation depends on the number of panels installed. One company's estimate was $7,000.
Let's consider an example. Using current technology, the Tesla solution using standard tiles (not the cool roof tiles), including Powerwalls, would take 35 years to break even on a 20-year loan. Once the loan was paid, I would still have to pay for electricity because my house is not big enough for enough panels to exceed our electricity usage.
If solar capture and efficiency improve by 5x next year, it would take only 5.5 years to pay back the loan, and then I would be accruing over $9,000/year of net-metered credit through excess power generation.
As for our house, we will wait until technology advances to the point that the break-even is less than 5 years.