Oracle Notes

Sharing Oracle technology knowledge and experiences.

Archive for the ‘Oracle 11g Database Administration’ Category

Protecting Archive Logs in the Event of a Disaster

Posted by sdefilip on May 6, 2011

 Summary of Methods to Protect Archive Logs

 I have a client who is concerned about losing archive logs and being able to recover their database in the event of a disaster.  We take full RMAN backups, including archive logs and control files, every night, but the concern is what would happen if their server were destroyed in between these backups. Knowing that a full Disaster Recovery Plan is not a simple undertaking, and that they would need more than archive logs to recover their database if their server were destroyed, they asked me to provide options, which I am sharing in this post.  The environment is AIX 6.5 and Oracle 11.2 Standard Edition.  Please feel free to comment and share your suggestions.

 Options to protect the archive logs (and the full database) are:

 I.  Have RMAN take more frequent backups, including archive logs.

II. Oracle allows specification of an alternate archive log destination in addition to the primary destination. NFS-mount a directory from a remote server, then use one of the LOG_ARCHIVE_DEST_n initialization parameters to specify the NFS-mounted remote directory as the additional log destination.

III.  SFTP or rsync a copy of the archive logs to a remote server via a shell script. The shell script would have to:

  1. check the V$ARCHIVED_LOG view to determine whether each archive log file in the archive log directory has completed the archiving process;
  2. use the AIX rsync command (or SFTP) to synchronize the remote archive log destination with the primary server's archive log destination;
  3. be scheduled to run at a regular interval, leaving a window for the nightly RMAN backups (a sketch of such a script follows this list).
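Below is a minimal sketch of such a script. It assumes OS authentication (sqlplus / as sysdba), passwordless SSH already configured for rsync, and hypothetical paths; adjust everything for your environment.

#!/bin/ksh
# Sketch: ship fully archived logs to a remote server.
# Paths, SID and remote destination below are hypothetical examples.
export ORACLE_SID=ORCL
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/db_1
ARCH_DIR=/u01/app/oracle/arch
REMOTE_DEST=oracle@drserver:/u01/oracle/arch_copy
LOG_LIST=/tmp/archived_logs.lst

# 1. Ask V$ARCHIVED_LOG which logs have completed archiving
$ORACLE_HOME/bin/sqlplus -s / as sysdba <<EOF > $LOG_LIST
set heading off feedback off pagesize 0 linesize 512
SELECT name FROM v\$archived_log
WHERE  archived = 'YES' AND deleted = 'NO' AND name LIKE '$ARCH_DIR%';
EOF

# 2. rsync only those completed logs to the remote destination
rsync -av --files-from=$LOG_LIST / $REMOTE_DEST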

IV. Use the pre-RMAN method of database backup. Copy the database user datafiles, control files, archive logs and parameter file to a directory on an NFS-mounted remote server. This requires that an Oracle instance be installed on the remote server to recover the database from these files. This method has its weaknesses, which is why RMAN was created and offered as an alternative.

V.  Use RMAN to rig a standby database without Data Guard or GoldenGate. This involves another Oracle instance running on the remote server, use of RMAN to clone the primary database, shipping the primary database RMAN backup files to the remote server and, finally, running an RMAN recovery of the database on the remote server.

VI.  Create a two-node Oracle RAC cluster with one node on a remote server. I believe that this is possible with 11.2 Standard Edition, possibly with a small charge per node.

 I recommend option III because it does not require another Oracle instance on the remote server, and it will not slow down the log writer process (LGWR), which would most likely cause a general database slowdown. In addition, the AIX and network resources used should not be enough to slow down the database or the application, provided the script is not run excessively.  My suggestion is to determine the average amount of time the database takes to fully archive a log, decide how many logs you would like copied at a time, and use that average as a guideline for how frequently to schedule the script.
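For example, if the script were saved as /home/oracle/scripts/ship_arch_logs.sh (a hypothetical path), a crontab entry like the following would run it every 15 minutes; comment it out, or add a time check inside the script, during the nightly RMAN backup window:

# ship archive logs to the DR server every 15 minutes
0,15,30,45 * * * * /home/oracle/scripts/ship_arch_logs.sh >> /tmp/ship_arch_logs.log 2>&1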

 

Posted in Best Practices, Disaster Recovery, Oracle 11g, Oracle 11g Database Administration, Tips

Giving Oracle a Checkup

Posted by sdefilip on January 27, 2011

ORACLE 11g HEALTH CHECK MONITOR

The Oracle Health Monitor (HM) facility is new in Oracle Database 11.1.  It can run diagnostic checks that detect:

>   file corruptions – reports failures when database files are inaccessible, corrupt or inconsistent. If the database is in mount or open mode, this check examines the log files and data files listed in the control file. If the database is in NOMOUNT mode, only the control file is checked.

>   physical and logical block corruptions – detects disk image block corruptions such as checksum failures, head/tail mismatch, and logical inconsistencies within the block. Most corruptions can be repaired using Block Media Recovery. Corrupted block information is also captured in the V$DATABASE_BLOCK_CORRUPTION view. This check does not detect inter-block or inter-segment corruption.

> undo and redo corruptions –

For redo, HM scans the contents of the redo log for accessibility and corruption, as well as the archive logs, if available. The Redo Integrity Check reports failures such as archive log or redo corruption.

For undo, HM finds logical undo corruptions. After locating an undo corruption, this check uses PMON and SMON to try to recover the corrupted transaction. If this recovery fails, then Health Monitor stores information about the corruption in V$CORRUPT_XID_LIST. Most undo corruptions can be resolved by forcing a commit.

>   transaction integrity check – identical to the Undo Segment Integrity Check except that it checks only one specific transaction

>  data dictionary corruptions – examines the integrity of core dictionary objects, such as tab$ and col$. It performs the following operations:

    • Verifies the contents of dictionary entries for each dictionary object.
    • Performs a cross-row level check, which verifies that logical constraints on rows in the dictionary are enforced.
    • Performs an object relationship check, which verifies that parent-child relationships between dictionary objects are enforced.

The Dictionary Integrity Check operates on the following dictionary objects:

tab$, clu$, fet$, uet$, seg$, undo$, ts$, file$, obj$, ind$, icol$, col$, user$, con$, cdef$, ccol$, bootstrap$, objauth$, ugroup$, tsq$, syn$, view$, typed_view$, superobj$, seq$, lob$, coltype$, subcoltype$, ntab$, refcon$, opqtype$, dependency$, access$, viewcon$, icoldep$, dual$, sysauth$, objpriv$, defrole$, and ecol$.

Each of the above checks can be used with parameters that provide specific subcategories of information.

Run HM Checker Manually

The Oracle Health Monitor (HM) can be run manually using the following syntax:

BEGIN
  DBMS_HM.RUN_CHECK('type of check', 'name of HM check run');
END;

Example:

SQL> BEGIN
  2    DBMS_HM.RUN_CHECK('Data Block Integrity Check', 'db_blk_integ_run');
  3  END;
  4  /

The types of checks that can be obtained in this manner (in place of ‘type of check’ above) are:

HM Test Check
DB Structure Integrity Check
CF Block Integrity Check
Data Block Integrity Check
Redo Integrity Check
Logical Block Check
Transaction Integrity Check
Undo Segment Integrity Check
No Mount CF Check
Mount CF Check
CF Member Check
All Datafiles Check
Single Datafile Check
Tablespace Check Check
Log Group Check
Log Group Member Check
Archived Log Check
Redo Revalidation Check
IO Revalidation Check
Block IO Revalidation Check
Txn Revalidation Check
Failure Simulation Check
Dictionary Integrity Check
ASM Mount Check
ASM Allocation Check
ASM Disk Visibility Check
ASM File Busy Check

Most health checks accept input parameters. You can view parameter names and descriptions with the V$HM_CHECK_PARAM view. Some parameters are mandatory while others are optional. If optional parameters are omitted, defaults are used. The following query displays parameter information for all health checks:

SELECT c.name check_name, p.name parameter_name, p.type,
       p.default_value, p.description
FROM   v$hm_check_param p, v$hm_check c
WHERE  p.check_id = c.id AND c.internal_check = 'N'
ORDER BY c.name;

Input parameters are passed in the input_params argument as name/value pairs separated by semicolons (;). The following example illustrates how to pass the transaction ID as a parameter to the Transaction Integrity Check:

BEGIN
  DBMS_HM.RUN_CHECK (
    check_name   => 'Transaction Integrity Check',
    run_name     => 'my_run',
    input_params => 'TXN_ID=7.33.2');
END;
/

Running HM Checker using Enterprise Manager:

1. On the Database Home page, in the Related Links section, click Advisor Central.

2. Click Checkers to view the Checkers subpage.

3. In the Checkers section, click the checker you want to run.

4. Enter values for input parameters or, for optional parameters, leave them blank to accept the defaults.

5. Click Run, confirm your parameters, and click Run again.

Viewing HM Checker Reports

You can now view a report of a checker execution. The report contains findings, recommendations, and other information. You can view reports using Enterprise Manager, the ADRCI utility, or the DBMS_HM PL/SQL package. The following table indicates the report formats available with each viewing method.

Report Viewing Method     Report Formats Available
Enterprise Manager        HTML
DBMS_HM PL/SQL package    HTML, XML, and text
ADRCI utility             XML

To view run findings using Enterprise Manager

  1. Access the Database Home page.
  2. In the Related Links section, click Advisor Central.
  3. Click Checkers to view the Checkers subpage.
  4. Click the run name for the checker run that you want to view.

The Run Detail page appears, showing the findings for that checker run.

  5. Click Runs to display the Runs subpage.

Enterprise Manager displays more information about the checker run.

  6. Click View Report to view the report for the checker run.

The report is displayed in a new browser window.

Viewing Reports Using DBMS_HM

You can view Health Monitor checker reports with the DBMS_HM package function GET_RUN_REPORT. This function enables you to request HTML, XML, or text formatting. The default format is text, as shown in the following SQL*Plus example:

SET LONG 100000

SET LONGCHUNKSIZE 1000

SET PAGESIZE 1000

SET LINESIZE 512

SELECT DBMS_HM.GET_RUN_REPORT('HM_RUN_1061') FROM DUAL;

DBMS_HM.GET_RUN_REPORT('HM_RUN_1061')

———————————————————————–

Run Name                     : HM_RUN_1061

Run Id                       : 1061

Check Name                   : Data Block Integrity Check

Mode                         : REACTIVE

Status                       : COMPLETED

Start Time                   : 2011-01-12 22:11:02.032292 -07:00

End Time                     : 2011-01-12 22:11:20.835135 -07:00

Error Encountered            : 0

Source Incident Id           : 7418

Number of Incidents Created  : 0

Input Paramters for the Run

BLC_DF_NUM=1

BLC_BL_NUM=64349

Run Findings And Recommendations

Finding

Finding Name  : Media Block Corruption

Finding ID    : 1065

Type          : FAILURE

Status        : OPEN

Priority      : HIGH

Message       : Block 64349 in datafile 1:

'/ade/sfogel_emdb/oracle/dbs/t_db1.f' is media corrupt

Message       : Object BMRTEST1 owned by SYS might be unavailable

Finding

Finding Name  : Media Block Corruption

Finding ID    : 1071

Type          : FAILURE

Status        : OPEN

Priority      : HIGH

Message       : Block 64351 in datafile 1:

'/ade/sfogel_emdb/oracle/dbs/t_db1.f' is media corrupt

Message       : Object BMRTEST2 owned by SYS might be unavailable

Viewing Reports Using the ADRCI Utility

You can create and view Health Monitor checker reports using the ADRCI utility.

To create and view a checker report using ADRCI:

1. Ensure that operating system environment variables (such as ORACLE_HOME) are set properly, and then enter the following command at the operating system command prompt:

adrci

The utility starts and displays the following prompt:

adrci>>

Optionally, you can change the current ADR home. Use the SHOW HOMES command to list all ADR homes, and the SET HOMEPATH command to change the current ADR home. See Oracle Database Utilities for more information.

2. Enter the following command:

show hm_run

This command lists all the checker runs (stored in V$HM_RUN) registered in the ADR repository.

3. Locate the checker run for which you want to create a report and note the checker run name. The REPORT_FILE field contains a filename if a report already exists for this checker run. Otherwise, generate the report with the following command:

create report hm_run run_name

4. To view the report, enter the following command:

show report hm_run run_name
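For instance, to generate and view the report for the checker run shown in the DBMS_HM example earlier (run name HM_RUN_1061), the session would look like this:

adrci>> show hm_run
adrci>> create report hm_run HM_RUN_1061
adrci>> show report hm_run HM_RUN_1061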

**  For more details regarding HM views, parameters and more, see:

http://download.oracle.com/docs/cd/B28359_01/server.111/b28310/diag007.htm

*** Reference: Oracle® Database Administrator's Guide 11g Release 1 (11.1), Part Number B28310-04

Posted in Oracle 11g, Oracle 11g Database Administration, Oracle 11g New Features, Oracle Performance, Oracle Troubleshooting, Tips

Oracle 11g GoldenGate and Compressed Tables

Posted by sdefilip on January 6, 2011


OGG does not support compressed tables or partitions, and through OGG v10.4 it did not even handle them gracefully with proper error messages. The abends may or may not produce an error message, and sometimes produce the wrong message.

Starting with V11.1.1.0.0, Oracle enhanced the error handling (BugDB 9425542) so that a meaningful error message, listing the table name, rowid, etc., is produced for the compressed record before the Extract abends.

Example:
ERROR OGG-01028 Record on table QATEST1.TAB1 with rowid AAM4EkAAEAACBguAAA from transaction 5.24.270123 (0x0005.018.00041f2b) is compressed. Compression is not supported.

However, due to bug 10063108, the error messages on compressed tables are sometimes not entirely correct. This problem is fixed in 11.1.1.0.3 and above.

A table created as compressed stores all DML results in compressed blocks on disk. If the user issues an ALTER TABLE ... NOCOMPRESS, every DML that goes into the table AFTER that point in time is uncompressed. A query of the compression status will now return NOCOMPRESS, but the simple ALTER does not change the already existing compressed blocks on disk that were created before it. So, to capture records from a table that was compressed, we need to do the following (substituting the actual table name):

SQL> ALTER TABLE <table_name> MOVE NOCOMPRESS;

This touches every single block on disk and uncompresses everything, thereby causing OGG to work properly and not abend.
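To find which tables are compressed in the first place, a quick check against USER_TABLES works (run as the schema owner; in 11g the view also has a COMPRESS_FOR column showing the compression type):

SQL> SELECT table_name, compression, compress_for
  2  FROM   user_tables
  3  WHERE  compression = 'ENABLED';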

If even a single partition in a partitioned table is compressed, it will cause an abend. Partition compression can be verified by getting the full DDL for the table with the DBMS_METADATA.GET_DDL package. For table subpartitions that are compressed, run the query below to get the partition, subpartition and tablespace names.

SQL> SELECT partition_name, subpartition_name, tablespace_name, high_value FROM user_tab_subpartitions WHERE table_name = 'TABLE_NAME';
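For partitions (as opposed to subpartitions), the equivalent check runs against USER_TAB_PARTITIONS; a sketch, with the table name as a placeholder:

SQL> SELECT partition_name, tablespace_name, compression
  2  FROM   user_tab_partitions
  3  WHERE  table_name = 'TABLE_NAME' AND compression = 'ENABLED';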

ALTER statement to move a partition to NOCOMPRESS (substituting the table, partition and tablespace names):

SQL> ALTER TABLE <table_name> MOVE PARTITION <partition_name> NOCOMPRESS TABLESPACE <tablespace_name>;

Ensure that you have enough disk space within your tablespaces before running the ALTER statement.

Support for compressed tables will come in future releases of OGG; in the current V10.4 and V11.1.1.x, the only option, if a MOVE NOCOMPRESS is not possible, is to exclude the compressed tables from the Extract.

Some scenarios in which we have seen similar compression errors:
1) The Extract abends with the following error:

GGS ERROR ZZ-0QY Failed to validate table <SCHEMA>.<TABLE NAME>. The table is compressed and extract will not be able to extract data from Oracle logs.

This bug occurs when DDL is enabled; the check was not done correctly when the table resides on a bigfile tablespace.

The issue is fixed in V10.4.0.93 and V11.1.1.0.4.

Reference: BugDB 10063075

2) When running an OGG version earlier than V11.1.1.0.10, Extract might abend with the error message below, without giving the name or object ID of the compressed table:

Source Context :
SourceModule   : [er.redo.ora.sr]
SourceID       : [/mnt/ecloud/workspace/Build_OpenSys_r11.1.1.0.9_004_[40043]/perforce/src/app/er/redo/oracle/redoorasr.c]
SourceFunction : [get_subrec(int32_t, mempool_t *, unsigned char *, unsigned short, subrec_info_t *, redo_thread_t *, BOOL, log_context_t *)]
SourceLine     : [5434]

2010-11-10 11:50:26 ERROR OGG-01028 compressed tables are not supported.

This happens when a temp table is created as compressed and is then followed directly by a direct-load operation.

Try excluding the tables DBMS_TABCOMP_TEMP_UNCMP and DBMS_TABCOMP_TEMP_CMP, which are created by the Compression Advisor.

From OGG v11.1.1.0.10 onwards, when the Extract abends because of direct-load inserts on tables with compressed partitions, it gives the table name in the error.

Reference: BugDB 10279456

However, if the table was dropped immediately after creation, the Extract will only give the object ID of the compressed table. To find the name of the compressed table, you can query the GGS_DDL_HIST table if DDL replication is enabled. A sample query:

SELECT OBJECTID, OBJECTNAME, METADATA_TEXT FROM GGS_DDL_HIST WHERE OBJECTID = xxxxx;

Posted in Golden Gate, Oracle 11g, Oracle 11g Database Administration, Oracle 11g New Features, Oracle Fusion Middleware, Oracle GoldenGate, Oracle Troubleshooting, Tips

A Useful Oracle 11g New Feature

Posted by sdefilip on September 18, 2010

Server Result Cache

  • Enables query results to be cached in memory and reused during future executions of similar queries, bypassing regular processing and returning results faster.
  • Decreases the wait time for both physical and logical I/O by fetching results directly from cached memory.
  • Cached result sets are completely shareable between sessions and statements as long as they share a common execution plan.
  • The server result cache is a new component of the SGA that caches query results and is managed by automatic memory management.
  • The new parameter RESULT_CACHE_MAX_SIZE enables the result cache by setting the maximum size of the cache.
  • A new optimizer hint allows use of the result cache at the query level.

Query execution without result cache hint:

[Figure: Query execution without result cache hint – Oracle 11g]

Query execution with result cache hint:

[Figure: Query execution with result cache hint – Oracle 11g]
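Since the original screenshots are not reproduced here, here is a minimal sketch of the hint in use (the employees table and its columns are hypothetical stand-ins):

SQL> SELECT /*+ RESULT_CACHE */ department_id, AVG(salary)
  2  FROM   employees
  3  GROUP  BY department_id;

On a second execution of the same statement, the ResultCache operator returns the cached rows instead of re-executing the aggregation.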

Parameters related to the Result Cache:

RESULT_CACHE_MAX_RESULT : specifies the percentage of RESULT_CACHE_MAX_SIZE that any single result can use
RESULT_CACHE_MAX_SIZE : specifies the maximum amount of SGA memory (in bytes) that can be used by the result cache
RESULT_CACHE_REMOTE_EXPIRATION : specifies the number of minutes that a result using a remote object is allowed to remain valid
RESULT_CACHE_MODE : specifies when a ResultCache operator is spliced into a query's execution plan

How to find result cache information:
V$RESULT_CACHE_STATISTICS
V$RESULT_CACHE_MEMORY
V$RESULT_CACHE_OBJECTS
V$RESULT_CACHE_DEPENDENCY

DBMS_RESULT_CACHE – PL/SQL API for result cache management. Its functions include:

STATUS – displays the current status of the result cache:

SELECT DBMS_RESULT_CACHE.status FROM dual;

STATUS
-----------------------------------------------------
ENABLED

FLUSH – removes all objects from the result cache and releases memory.
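A sketch of invoking it from SQL*Plus:

SQL> EXEC DBMS_RESULT_CACHE.FLUSH;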

I will cover some more features in the next post. Keep reading!

Posted in Oracle 11g, Oracle 11g Database Administration, Oracle 11g New Features, Oracle Architecture, Oracle Performance, Tips

About Oracle 11g Clusters

Posted by sdefilip on April 14, 2010

A cluster provides an optional method of storing table data. A cluster is made up of a group of tables that share the same data blocks. The tables are grouped together because they share common columns and are often used together. For example, the emp and dept tables share the deptno column. When you cluster the emp and dept tables (see Figure 1), Oracle Database physically stores all rows for each department from both the emp and dept tables in the same data blocks.

Because clusters store related rows of different tables together in the same data blocks, properly used clusters offer two primary benefits:

• Disk I/O is reduced and access time improves for joins of clustered tables.

• Less storage might be required to store related table and index data in a cluster than in non-clustered table format. The cluster key is the column, or group of columns, that the clustered tables have in common. You specify the columns of the cluster key when creating the cluster, and you specify the same columns when creating every table added to the cluster. Each cluster key value is stored only once in the cluster and the cluster index, no matter how many rows of different tables contain the value. For example, in Figure 1, notice how each cluster key (each deptno) is stored just once for many rows that contain the same value in both the emp and dept tables.

After creating a cluster, you can create tables in the cluster. However, before any rows can be inserted into the clustered tables, a cluster index must be created. Using clusters does not affect the creation of additional indexes on the clustered tables; they can be created and dropped as usual.
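As a minimal sketch of that sequence using the emp/dept example (names and the SIZE value are illustrative):

CREATE CLUSTER emp_dept (deptno NUMBER(2))
  SIZE 512;                                    -- space reserved per cluster key value

CREATE INDEX idx_emp_dept ON CLUSTER emp_dept; -- cluster index, required before inserts

CREATE TABLE dept (
  deptno NUMBER(2) PRIMARY KEY,
  dname  VARCHAR2(14))
CLUSTER emp_dept (deptno);

CREATE TABLE emp (
  empno  NUMBER(4) PRIMARY KEY,
  ename  VARCHAR2(10),
  deptno NUMBER(2) REFERENCES dept)
CLUSTER emp_dept (deptno);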

You should not use clusters for tables that are frequently accessed individually.

Figure 1 Clustered Table Data

Posted in Oracle 11g, Oracle 11g Database Administration, Oracle Architecture, Oracle Performance

How 11g Tries To Improve the Use of Bind Variables

Posted by sdefilip on December 17, 2009

I’m sure that all of us who have just about any Oracle experience have encountered the enigmatic SQL query that ran 2 seconds yesterday but 102 seconds today. What can cause this unpredictable behavior?

The answer is that many factors can cause this degradation, but one of the most common causes is execution plan invalidation. When the optimizer peeks at the values of user-defined bind variables on the first invocation of a cursor, it is called “bind peeking”. This usually has a positive effect on performance because it allows the optimizer to determine the selectivity of any “WHERE” clause condition as if literals have been used instead of bind variables, thus improving the quality of the execution plan generated for statements using bind variables.

Bind peeking becomes a problem when a bind variable is used against a table column that has skewed data. For example, suppose we have a table of 20,000 rows that contains a part_num column. Part number 'A110' occurs in only 3 rows, while part number 'B235' occurs in 8,000 rows. An index is created on the part_num column, and when statistics are gathered, the skew in the data distribution is detected and a histogram is generated. When a cursor that contains a select query using bind variables in a condition is first invoked, the optimizer peeks at the value of the user-defined bind variable, determines the execution plan, and stores it in the shared pool. If the bind variable for part_num is set to 'A110' at first cursor invocation, the optimizer peeks at the bind variable and chooses the access plan accordingly. However, if the value of the bind variable is then set to 'B235' and the cursor is parsed again, the old plan will still be in place and will be used. Thus, an access plan that is not optimal for the current value of the bind variable is used, degrading performance due to the difference in buffer gets required. Before 11g, one of the ways to take care of this was to flush the shared pool and cause the query to be reparsed with the new bind variable values.

In 11g, Oracle has taken a step to address the above problem by introducing Adaptive Cursor Sharing. ACS allows for more cursors for the same query containing bind variables. In other words, the optimizer will allow multiple execution plans to be used for a single statement that uses bind variables. This ensures that the best execution plan will be used depending on the bind value. So, taking the example above, 11g would allow two plans to be generated for the difference in selectivity of the part_num column bind variable values.
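You can watch Adaptive Cursor Sharing do this through V$SQL, which in 11g exposes the IS_BIND_SENSITIVE and IS_BIND_AWARE columns; a sketch (the SQL text filter is a hypothetical stand-in for the part_num query above):

SQL> SELECT sql_id, child_number, is_bind_sensitive, is_bind_aware, executions
  2  FROM   v$sql
  3  WHERE  sql_text LIKE 'SELECT%part_num%';

A child cursor marked bind-sensitive is being monitored; once Oracle decides that different bind values need different plans, it creates an additional bind-aware child cursor with its own plan.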

Posted in Oracle 11g, Oracle 11g Database Administration, Oracle Performance

Oracle 11g and Automatic Memory Management (AMM)

Posted by sdefilip on November 13, 2009

I have been very busy for the last few months and haven't posted much on this blog, but I thought I would share some notes on recent experiences installing and upgrading to Oracle 11gR1 and 11gR2 on Decipher, LLC projects. I plan to make this a series of postings.

Carrying forward the theme of improving and automating memory management that started with Oracle 9i, Oracle has included Automatic Memory Management (AMM) in 11g.
The purpose of AMM is to provide dynamic management of the PGA and SGA. This is done by allocating one large chunk of memory at database configuration time, using two new initialization parameters: MEMORY_TARGET and MEMORY_MAX_TARGET. From the Oracle 11g documentation:

“you set only a target memory size initialization parameter (MEMORY_TARGET) and optionally a maximum memory size initialization parameter (MEMORY_MAX_TARGET). The instance then tunes to the target memory size, redistributing memory as needed between the system global area (SGA) and the instance program global area (instance PGA). Because the target memory initialization parameter is dynamic, you can change the target memory size at any time without restarting the database. The maximum memory size serves as an upper limit so that you cannot accidentally set the target memory size too high, and so that enough memory is set aside for the Oracle Database instance in case you do want to increase total instance memory in the future. Because certain SGA components either cannot easily shrink or must remain at a minimum size, the instance also prevents you from setting the target memory size too low.”

Before you configure your Oracle 11g instance, you should first make sure that you have enough shared memory on your server. On a Linux server, you do this by issuing:

# df -k /dev/shm

which returns the number of blocks in your shared memory filesystem, the number used, the usage percent, and the device on which it is mounted (/dev/shm). If you need to resize it:

# umount tmpfs
# mount -t tmpfs shmfs -o size=1200m /dev/shm

will adjust the shared memory filesystem to the required size.

To make this change permanent, add the adjustment to the fstab file. To do this, edit /etc/fstab and add the line:

none /dev/shm tmpfs size=3000m 0 0

(in this case, we set the size to 3 GB)

The next step is to optimally set your MEMORY_TARGET and MEMORY_MAX_TARGET initialization parameters.

The view V$MEMORY_TARGET_ADVICE provides tuning advice for the MEMORY_TARGET initialization parameter:

SQL> SELECT * FROM v$memory_target_advice ORDER BY memory_size;

MEMORY_SIZE MEMORY_SIZE_FACTOR ESTD_DB_TIME ESTD_DB_TIME_FACTOR    VERSION
----------- ------------------ ------------ ------------------- ----------
        180                 .5          458               1.344          0
        270                .75          367              1.0761          0
        360                  1          341                   1          0
        450               1.25          335               .9817          0
        540                1.5          335               .9817          0
        630               1.75          335               .9817          0
        720                  2          335               .9817          0
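Reading the advice output above, the estimated DB time stops improving at a MEMORY_SIZE of around 450, so one might size the instance along these lines (values are illustrative; MEMORY_MAX_TARGET is a static parameter, so it must be set in the SPFILE and the instance restarted):

SQL> ALTER SYSTEM SET memory_max_target = 600M SCOPE = SPFILE;
SQL> ALTER SYSTEM SET memory_target = 450M SCOPE = SPFILE;
SQL> SHUTDOWN IMMEDIATE
SQL> STARTUP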

While installing Oracle 11g or using DBCA to create a new database, you have the option of using AMM or not.

The main point of this post is to inform you that Oracle 11g will require quite a bit more shared memory if you choose to use the new Automatic Memory Management feature.

Posted in Oracle 11g, Oracle 11g Database Administration, Oracle 11g Installation, Oracle 11g New Features, Oracle Architecture, Oracle Performance

ORA-06401 NETCMN: invalid driver designator

Posted by sdefilip on July 13, 2009

While recently issuing a command to connect to a remote database, I encountered the Oracle ORA-06401 error.

Oracle documentation states the following cause, along with what it suggests to be the sole solution:

ORA-06401 NETCMN: invalid driver designator

Cause: The login (connect) string contains an invalid driver designator.

Action: Correct the string and re-submit.

This usually means that the command syntax is incorrect. However, other causes exist:

1. improper authorization for the user issuing the command
2. invalid connection string for the target database version
3. ^M (carriage return) characters present in the tnsnames.ora file. These can be seen by viewing the file in binary mode, or by opening it in a UNIX/Linux editor such as vi. They are usually created by editing the file with a non-UNIX/Linux editor and then transferring it to the server by FTP.
Use the following vi command to remove these characters (note that it deletes the last character of every line, so use it only when every line ends in ^M):

:%s/.$//
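A safer alternative in vim strips only trailing carriage returns, leaving other line endings untouched (assuming vim rather than classic vi):

:%s/\r$//

or, if the dos2unix utility is installed, convert the file from the shell:

$ dos2unix tnsnames.ora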

Posted in Oracle 11g, Oracle 11g Database Administration, Oracle Troubleshooting, Tips | 2 Comments »