Wednesday, June 10, 2009
Purge old files on Linux/Unix using “find” command
I've noticed that one of our interface directories has a lot of old files, some of them more than a year old. I checked with our implementers, and it turns out we can delete all files that are older than 60 days. I decided to write a (tiny) shell script to purge all files older than 60 days and schedule it with crontab, so I won't have to deal with it manually. I wrote a find command to identify and delete those files. I started with the following command:
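A sketch of that first version (assuming GNU find; -mtime +60 matches files last modified more than 60 days ago, and -exec rm removes each match):

    find /interface/inbound -type f -mtime +60 -exec rm -f {} \;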
It finds and deletes all files in the directory /interface/inbound that are older than 60 days. After packing it into a shell script, I got a request to delete "csv" files only. No problem... I added a "-name" test to the find command:
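Something along these lines (the same command, restricted to csv files):

    find /interface/inbound -type f -name "*.csv" -mtime +60 -exec rm -f {} \;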
All csv files in /interface/inbound that are older than 60 days will be deleted. But then the request changed, and I was asked to delete "*.xls" files in addition to "*.csv" files. At this point things got complicated for me, since I'm not a shell script expert... I tried several things, like adding another "-name" test to the find command:
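A sketch of what that attempt looked like (same path and age test, with a second -name added):

    find /interface/inbound -type f -name "*.csv" -name "*.xls" -mtime +60 -exec rm -f {} \;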
But no files were deleted. A couple of moments later I understood that I was trying to find csv files which are also xls files... (logically incorrect, of course, since find ANDs its tests by default). After struggling a little with the find command, I managed to make it work:
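Something like this did the trick: the escaped parentheses group the two -name tests, and -o combines them with OR:

    find /interface/inbound -type f \( -name "*.csv" -o -name "*.xls" \) -mtime +60 -exec rm -f {} \;

Wrapped in a small script and scheduled from crontab, for example nightly at 02:00 (the script path here is just an example):

    0 2 * * * /path/to/purge_interface_files.sh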
:-) Aviad |
Posted by Aviad at 9:30 AM 7 comments
Labels: Unix\Linux
Wednesday, May 20, 2009
Upgrade Java plug-in (JRE) to the latest certified version
If you have already migrated to the native Sun Java JRE plug-in with Oracle EBS 11i, you may want to update EBS to the latest JRE update from time to time. For example, suppose your EBS environment is configured to work with Java JRE 6 update 5 and you want to upgrade your clients to the latest JRE 6 update 13. This upgrade process is very simple:
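In short, it comes down to downloading the new JRE installer and re-running the plug-in setup script on the web node. A sketch only - the file names, the staging directory and the "16013" version string are assumptions for JRE 6 update 13; check the JRE plug-in interoperability patch README for your environment:

    # 1. Download the JRE installer from Sun and rename it to the EBS naming scheme
    mv jre-6u13-windows-i586-p.exe j2se16013.exe
    # 2. Copy it to the plug-in staging directory on the web server node
    cp j2se16013.exe $COMMON_TOP/util/jinitiator/
    # 3. Source the APPS environment, then re-run the plug-in setup script with the
    #    new version number; clients pick up JRE 6 update 13 on their next login
    cd $FND_TOP/bin
    ./txkSetPlugin.sh 16013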
That's all.... Since we upgraded our system to JRE 6 update 13 (two weeks ago), our users no longer complain about the mouse focus issues and some other forms freezes they experienced before. So... it was worth it... If you haven't migrated from Jinitiator to the native Sun Java plug-in yet, it's highly recommended to do so soon, since Jinitiator is going to be desupported. See the following post for detailed, step-by-step migration instructions: Upgrade from Jinitiator 1.3 to Java Plugin 1.6.0.x. You are welcome to leave a comment. Aviad |
Posted by Aviad at 11:15 AM 3 comments
Labels: Developer 6i, Upgrades
Tuesday, March 17, 2009
Corruption in redo log file when implementing Physical Standby
Lately I started implementing Data Guard - Physical Standby - as a DRP environment for our production E-Business Suite database, and I must share with you one issue I encountered during the implementation. I chose one of our test environments as the primary instance, and I used a new server, which had been prepared for the standby database in production, as the server for the standby database in test. Both run Red Hat Enterprise Linux 4. The implementation process went fast with no special issues (at least that's what I thought...); everything seemed to work fine, archived logs were transmitted from the primary server to the standby server and successfully applied on the standby database. I even executed a switchover to the standby server (both database and application tier), and a switchover back to the primary server, with no problems. The standby database was configured for maximum performance mode, I also created standby redo log files, and LGWR was set to asynchronous (ASYNC) network transmission. The redo transport setting in the init.ora file was along these lines:
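A sketch (the standby service name and DB_UNIQUE_NAME are placeholders, not the actual values):

    log_archive_dest_2='SERVICE=<standby_tns_alias> LGWR ASYNC VALID_FOR=(ONLINE_LOGFILES,PRIMARY_ROLE) DB_UNIQUE_NAME=<standby_db>'
    log_archive_dest_state_2='ENABLE'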
At this stage, when the major part of the implementation was done, I found some time to deal with other issues, like interfaces to other systems, scripts, configuring rsync for the concurrent log files, etc., and some modifications to the setup document I wrote during the implementation. While working on those, I left the physical standby instance active, so archived log files were transmitted and applied on the standby instance. After a couple of hours I noticed an error in the primary database alert log file indicating corruption in a redo log file.
I don't remember ever having had a corruption in a redo log file before... The primary instance resides on a NetApp volume, so I checked the mount options in /etc/fstab, but they were fine. I asked our infrastructure team to check whether something had gone wrong with the network around the time I got the corruption, but they reported nothing unusual.
OK, I had no choice but to reconstruct the physical standby database, since when an archived log file is missing, the standby database is out of sync. I set 'log_archive_dest_state_2' to defer so no further archived logs would be transferred to the standby server, cleared the corrupted redo log files (alter database clear unarchived logfile 'logfile.log') and reconstructed the physical standby database. Meanwhile (copying the database files takes a long time...), I checked the documentation again; maybe I had missed something, maybe I had configured something wrong... I read a lot and didn't find anything that could shed some light on this issue.
At this stage, the standby was up and ready. First, I held up the redo transport service (log_archive_dest_state_2='defer') to see whether I would get a corruption while the standby was off. After one or two days with no corruption I activated the standby. Then I saw a sentence in Oracle® Data Guard Concepts and Administration 10g Release 2 (10.2) about the hardware of the primary and standby systems. One moment, I thought to myself: the standby server is based on AMD processors and the primary server is based on Intel's... Is that the problem?! Meanwhile, I got a corruption in a redo log file again, which confirmed there was a real problem and it wasn't accidental. So I used another AMD-based server (identical to the standby server) and started all over again – primary and standby instances. After two or three days with no corruption I started to believe the difference in processors was the problem. But one day later I got a corruption again (oh no...). I must say that on the one hand I was very frustrated, but on the other hand it was a relief to know it wasn't the difference in processors. So it wasn't the processors, not the OS and not the network. What else could it be?!
And here my familiarity with the "filesystemio_options" initialization parameter begins (thanks to Oracle Support!). I don't know how I missed this note before, but it is all written here - Note 437005.1: Redo Log Corruption While Using Netapps Filesystem With Default Setting of Filesystemio_options Parameter. When the redo log files are on a NetApp volume, "filesystemio_options" must be set to "directio" (or "setall"). When "filesystemio_options" is set to "none" (as on my instance before), reads and writes to the redo log files go through the OS buffer cache. Since NetApp storage is based on NFS (a stateless protocol), the consistency of asynchronous writes over the network is not guaranteed, and some writes can be lost. By setting "filesystemio_options" to "directio", writes bypass the OS cache layer, so no write is lost. Needless to say, once I set it to "directio" (the setting is sketched below) everything was fine and I haven't gotten any corruption since. Aviad |
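For reference, the change is a single initialization parameter on the primary (a sketch; it is a static parameter, so the instance has to be restarted for it to take effect):

    # init.ora on the primary instance
    filesystemio_options = directio   # or setall (direct I/O plus asynchronous I/O)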
Posted by Aviad at 5:54 PM 2 comments
Labels: Data Guard, Network, Troubleshooting, Unix\Linux