Blog dedicated to Oracle Applications (E-Business Suite) Technology; covers Apps Architecture, Administration, and third-party bolt-ons to Apps

Friday, May 30, 2008

oracle.apps.fnd.framework.OAException

The following errors came up when logging in after bouncing Apache:

oracle.apps.fnd.framework.OAException: Application: FND, Message Name: FND_NO_REGION_DATA. Tokens: REGIONCODE = /oracle/apps/fnd/framework/navigate/webui/HomePG;
at oracle.apps.fnd.framework.webui.JRAD2AKMapper.getRootMElement(JRAD2AKMapper.java:529)
at oracle.apps.fnd.framework.webui.OAWebBeanFactoryImpl.getWebBeanTypeDataFromJRAD(OAWebBeanFactoryImpl.java:3719)
at oracle.apps.fnd.framework.webui.OAWebBeanFactoryImpl.getRootApplicationModuleClass(OAWebBeanFactoryImpl.java:3452)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:1006)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:509)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:430)
at _oa__html._OA._jspService(_OA.java:84)
at oracle.jsp.runtime.HttpJsp.service(HttpJsp.java:119)
at oracle.jsp.app.JspApplication.dispatchRequest(JspApplication.java:417)
at oracle.jsp.JspServlet.doDispatch(JspServlet.java:267)
at oracle.jsp.JspServlet.internalService(JspServlet.java:186)
at oracle.jsp.JspServlet.service(JspServlet.java:156)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:588)
at org.apache.jserv.JServConnection.processRequest(JServConnection.java:456)
at org.apache.jserv.JServConnection.run(JServConnection.java:294)
at java.lang.Thread.run(Thread.java:619)
## Detail 0 ##

Exception:
oracle.adf.mds.MetadataDefException: Unable to find component with absolute reference = /oracle/apps/fnd/framework/navigate/webui/HomePG, XML Path = null. Please verify that the reference is valid and the definition of the component exists either on the File System or in the MDS Repository.
at oracle.adf.mds.internal.MetadataManagerBase.findElement(MetadataManagerBase.java:1350)
at oracle.adf.mds.MElement.findElement(MElement.java:97)
at oracle.apps.fnd.framework.webui.JRAD2AKMapper.getRootMElement(JRAD2AKMapper.java:503)
at oracle.apps.fnd.framework.webui.OAWebBeanFactoryImpl.getWebBeanTypeDataFromJRAD(OAWebBeanFactoryImpl.java:3719)
at oracle.apps.fnd.framework.webui.OAWebBeanFactoryImpl.getRootApplicationModuleClass(OAWebBeanFactoryImpl.java:3452)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:1006)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:509)
at oracle.apps.fnd.framework.webui.OAPageBean.preparePage(OAPageBean.java:430)
at _oa__html._OA._jspService(_OA.java:84)
at oracle.jsp.runtime.HttpJsp.service(HttpJsp.java:119)
at oracle.jsp.app.JspApplication.dispatchRequest(JspApplication.java:417)
at oracle.jsp.JspServlet.doDispatch(JspServlet.java:267)
at oracle.jsp.JspServlet.internalService(JspServlet.java:186)
at oracle.jsp.JspServlet.service(JspServlet.java:156)
at javax.servlet.http.HttpServlet.service(HttpServlet.java:588)
at org.apache.jserv.JServConnection.processRequest(JServConnection.java:456)
at org.apache.jserv.JServConnection.run(JServConnection.java:294)
at java.lang.Thread.run(Thread.java:619)



Solution:
Log in as applmgr on the Unix box that hosts the web tier and clear the cached JSP pages (a quick sanity check of the page definition is sketched after these steps):
cd $APPLCSF/scripts/$CONTEXT_NAME
adapcctl.sh stop
rm -rf $COMMON_TOP/_pages
adapcctl.sh start
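
If you want to confirm whether the page definition is actually missing from MDS before clearing the cache, a quick check is the sketch below (jdr_utils is the standard OA Framework MDS utility package; the apps password is a placeholder):

sqlplus -s apps/apps <<'EOF'
set serveroutput on
-- prints the MDS document if it exists; errors out if the page definition is missing
exec jdr_utils.printdocument('/oracle/apps/fnd/framework/navigate/webui/HomePG');
EOF

If the document prints, the page definition is intact and the stale _pages cache is the likely culprit; if it errors out, the definition really is missing from the MDS repository.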

Responsibility shows up in Security:User:Define but not in self service

This is a known issue after ATG RUP patches, which was first brought to my attention today. A user had been assigned 10 responsibilities through the System Administrator responsibility (navigation Security:User:Define). However, when the user logs in, he is able to see only 4 of them; the other 6 responsibilities are missing. The solution is described in Metalink Note 417439.1:

Applies to:

Oracle Self-Service Web Applications - Version: 11.5.9
This problem can occur on any platform.

Symptoms

PROBLEM
Responsibilities are not showing up after reactivating user accounts by removing the termination end date.
When reactivating user accounts that have been end dated due to termination, the employee (user) can't see any responsibilities
Ex-employees, or employees that have had a change of assignment, can access Oracle but can't access any responsibilities

### Steps to Reproduce ###
Rehire an Ex-Employee, or change the Assignment in HR to Active (following return to work).
Remove the End Date on the User Account via Sys Admin.
The user logs in but has no responsibilities.

Changes

Rehire an ex-employee

Cause

This is a known issue: when a user/role relationship refers to a user or a role
that does not exist in WF_LOCAL_ROLES, the relationship is removed.

Solution


1. Make sure that the latest ATG_RUP patch has been applied.
2. Run the concurrent program 'Synchronize WF LOCAL tables' (System Administrator responsibility -> Requests -> Run).
3. Run Concurrent Program: 'Workflow Directory Services User/Role Validation'.
4. Restart the Workflow Agent Listeners: WF_DEFERRED and WF_JAVA_DEFERRED

References

Note 356529.1 - FND Users changes are not propagated to WF Directory Services Tables (WFDS)
Note 356342.1 - Fndscaus Shows User Resps As Active But They Are Missing At User Login

We did the steps, but there was no result: when the user logged in the next time, he still had no responsibilities. The correct way to do this is:

Run the concurrent program 'Workflow Directory Services User/Role Validation' twice:
first time with the parameters 10000,Y,N,N
second time with the parameters 10000,N,Y,N

Ask the user to clear his/her browser cache, or to test in an alternate browser in which he/she has not logged in before.

This worked for us.
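
For future reference, a quick way to sanity-check the user's responsibility and role data before and after the validation runs is a query like the sketch below (it assumes the standard FND and WF tables in 11i; 'JDOE' is a placeholder for the affected user):

sqlplus -s apps/apps <<'EOF'
-- responsibilities assigned to the user in AOL
select count(*) assigned_resps
  from fnd_user_resp_groups
 where user_id = (select user_id from fnd_user where user_name = 'JDOE');

-- responsibility roles visible to the user in workflow directory services
select count(*) wf_resp_roles
  from wf_user_roles
 where user_name = 'JDOE'
   and role_name like 'FND_RESP%';
EOF

If the first count is higher than the second, the workflow directory services data is out of sync, which is exactly what the validation program is meant to repair.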

Windows XP taskbar group windows

Recently I got a new laptop with XP installed. On my previous laptop, instances of the same program were grouped together on the taskbar, but XP on the new laptop did not have that feature enabled. An article on TechRepublic gave directions on how to disable the feature; I just changed the steps to enable it:

1. Right-click an empty space on the taskbar.
2. Choose the Properties command from the context menu.
3. Select the Group Similar Taskbar Buttons check box and click OK.

I am pasting the original article from Techrepublic just in case it disappears:

The grouping feature on Windows XP's taskbar is designed to reduce the button overcrowding problem that you may experience when you have a lot of applications open. The feature keeps tabs on the amount of space being occupied on the taskbar and groups similar windows under a single button once the amount of space on the taskbar starts to dwindle.

For example, if you have nine windows open and three of them contain text documents, the three text document taskbar buttons will combine into one button, named Notepad. This single button is then accompanied by an arrow button that, when clicked, will display a menu of all the documents in the group. In most cases, this taskbar grouping is very helpful. However, there may be times when it is counterproductive to navigate through an additional menu to access your files—especially if you're simultaneously working with all of the documents in the group.

Fortunately, it is easy to toggle the taskbar grouping feature on and off with a few clicks. Here's how:

1. Right-click an empty space on the taskbar.
2. Choose the Properties command from the context menu.
3. Clear the Group Similar Taskbar Buttons check box and click OK.

Now you can easily switch between each individual document without having to navigate an intermediary menu. To re-enable the taskbar grouping feature, repeat these steps and select the Group Similar Taskbar Buttons check box. Then, click OK.

Solaris grep -r alternative

As per Wikipedia, grep (Global Regular Expression Print) is a command-line utility that was originally written for use with the Unix operating system. Given a list of files or standard input to read, grep searches for lines of text that match one or more regular expressions, and outputs only the matching lines.

The program's name derives from the command used to perform a similar operation, using the Unix text editor ed:
 g/re/p 

This command searches a file globally for lines matching a given regular expression, and prints them.

There are various command line switches available when using grep that modify this default behavior including printing lines which do not match, finding or excluding files to search, and annotating the output in various ways. There are also multiple modern implementations of the classic Unix grep, each with a unique feature set.

This is an example of a common grep usage:

 grep apple fruitlist.txt 

Grep would return, in this case, all of the lines in file fruitlist.txt with at least one instance of 'apple' in them. Keep in mind that grep would not return lines with 'Apple' (capital A) because by default grep is case sensitive. Like most Unix commands grep accepts flags which can be used to change this and many other behaviors. For example:

 grep -i apple fruitlist.txt 

This would return all lines with the words 'apple', 'Apple', 'apPLE', or any other mixing of capital and lower case.

Linux grep has a -r switch to search for a string in all the files in a directory tree. On Solaris you can use this command to do the same (a couple of variations are sketched below):

find . -type f -print | xargs grep -li "find me"
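
A couple of variations worth keeping handy (just sketches of the same idea) that avoid the usual xargs pitfalls with odd filenames, or narrow the search to one file type:

# same search, but safe for filenames containing spaces
find . -type f -exec grep -li "find me" {} \;

# limit the search to a particular file type, e.g. log files
find . -type f -name "*.log" -exec grep -li "find me" {} \;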

Thursday, May 29, 2008

Noetix and Sabrix issues after ATG RUP5/6

Metalink Note 456507.1 describes an issue with Noetix after application of ATG RUP5/6. The same applies to Sabrix too:

After applying patchset ATG RUP 5, the Noetix views in the General Ledger business area, e.g. GLIBS_JOURNAL_ENTRY_LINES, no longer return any data, even after the views were re-generated successfully and all business areas were refreshed.

Note 452713.1 recommends applying Patch 6265836 after ATG RUP 5. The Noetix issue remains after doing so.

In Discoverer Administrator Edition, the folders in the GL business area return the error:

Table or view does not exist

Changes

ATG RUP 5

Cause

This is a known Noetix issue:

Incorrect database permissions on FND_CACHE_VERSIONS_PKG and FND_CACHE_VERSIONS granted to the Noetix/Sabrix end user layer owner.


Solution
To implement the fix, follow these steps:

1. Log into sqlplus as APPS or a DBA account.
2. SQL> Grant execute on APPS.FND_CACHE_VERSIONS_PKG to NOETIX_SYS;
SQL> Grant select on APPS.FND_CACHE_VERSIONS to NOETIX_SYS;
3. Log into sqlplus as NOETIX_SYS
4. SQL> Create synonym FND_CACHE_VERSIONS for APPS.FND_CACHE_VERSIONS;
5. Test that the fix has worked as follows:
SQL> select je_batch_name, Acct$IBS_SERVICE
from glibs_journal_entry_lines
where acct$ibs_service = 'KF'
and period_name = 'JUN-07' -- this line optional if env is old
/
6. Migrate the solution as appropriate to other environments.

Do steps 1 to 5 for the SABRIX user too (a sketch follows). This holds true for any bolt-on that uses the APPS.FND_CACHE_VERSIONS_PKG package and the APPS.FND_CACHE_VERSIONS table.
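
For completeness, here is what the same fix looks like for the Sabrix schema (a sketch only; SABRIX and its password are placeholders for your actual Sabrix end user layer owner):

sqlplus -s apps/apps <<'EOF'
grant execute on APPS.FND_CACHE_VERSIONS_PKG to SABRIX;
grant select on APPS.FND_CACHE_VERSIONS to SABRIX;
EOF

sqlplus -s SABRIX/sabrix_password <<'EOF'
create synonym FND_CACHE_VERSIONS for APPS.FND_CACHE_VERSIONS;
EOF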

Force cleanup of temporary segments through the DROP_SEGMENTS event

While searching for information about segments, I found Metalink note 47400.1, which describes the DROP_SEGMENTS event:

The DROP_SEGMENTS event
~~~~~~~~~~~~~~~~~~~~~~~
Available from 8.0 onwards.

DESCRIPTION
Finds all the temporary segments in a tablespace which are not currently locked and drops them.

For the purpose of this event a "temp" segment is defined as a segment (seg$ entry) with TYPE#=3. Sort space in a TEMPORARY tablespace does not qualify under this definition as such space is managed independently of SEG$ entries.

PARAMETERS
level - tablespace number+1. If the value is 2147483647 then
temp segments in ALL tablespaces are dropped, otherwise, only
segments in a tablespace whose number is equal to the LEVEL
specification are dropped.

NOTES
This routine does what SMON does in the background, i.e. drops
temporary segments. It is provided as a manual intervention tool which
the user may invoke if SMON misses the post and does not get to
clean the temp segments for another 2 hours. We do not know whether
missed post is a real possibility or more theoretical situation, so
we provide this event as an insurance against SMON misbehaviour.

Under normal operation there is no need to use this event.

It may be a good idea to
alter tablespace coalesce;
after dropping lots of extents to tidy things up.

*SQL Session (if you can connect to the database):
alter session set events 'immediate trace name DROP_SEGMENTS level TS#+1';

The TS# can be obtained from v$tablespace view:
select ts# from v$tablespace where name = '<TABLESPACE_NAME>';

Or from SYS.TS$:

select ts# from sys.ts$ where name = '<TABLESPACE_NAME>' and online$ != 3;

If ts# is 5, an example of dropping the temporary segments in that tablespace
would be:

alter session set events 'immediate trace name DROP_SEGMENTS level 6'
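
Putting the note together, here is an end-to-end sketch for a dictionary-managed tablespace (TEMP_TS is a placeholder; the second step assumes the first query returned ts# 5, hence level 6):

# 1. look up the ts# of the tablespace
sqlplus -s "/ as sysdba" <<'EOF'
select ts# from v$tablespace where name = 'TEMP_TS';
EOF

# 2. fire the event at level ts#+1, then coalesce as the note suggests
sqlplus -s "/ as sysdba" <<'EOF'
alter session set events 'immediate trace name DROP_SEGMENTS level 6';
alter tablespace TEMP_TS coalesce;
EOF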

HTTP POST Basics

We had discussions about an interface from ERP to our internal intranet which is going to do an HTTP POST. I am reproducing here an old article from sun.com about HTTP POST basics which is no longer available online:

HTTP POST Basics

By Eric Giguere, June 8, 2004

The Hypertext Transfer Protocol (HTTP) is the most ubiquitous of the Internet protocols. Although seen primarily as a means to fetch pages of Hypertext Markup Language (HTML) content for display in a web browser, HTTP is really a general-purpose transport for any type of data. That versatility is why HTTP support is a fundamental feature of the Mobile Information Device Profile (MIDP). In fact, on many devices HTTP is the only network protocol available.

HTTP is a request-response protocol. Various types of requests are possible, each identified by a different HTTP method. MIDP supports two of those methods, GET and POST.

A GET request fetches data from a web server based solely on a URL value and a set of HTTP headers. Here's an example of what a web server receives when a client makes a GET request:


 GET /index.html?userid=joe&password=guessme HTTP/1.1
Host: www.mysite.com
User-Agent: Mozilla/4.0


You can generate this request with MIDP code:

 ...
HttpConnection conn = null;
String url = "http://www.mysite.com" +
"/index.html?userid=joe&password=guessme";
String agent = "Mozilla/4.0";

try {
conn = (HttpConnection) Connector.open( url );
conn.setRequestProperty( "User-Agent", agent );

int rc = conn.getResponseCode();
... // process it
}
catch( IOException e ){
// handle the error here
}
...


There is no extra data with a GET request; everything is specified in the URL and the headers.

In contrast, a POST request sends additional data to the web server, specified after the URL, the headers, and a blank line to indicate the end of the headers. An example:


 POST /login.jsp HTTP/1.1
Host: www.mysite.com
User-Agent: Mozilla/4.0
Content-Length: 27
Content-Type: application/x-www-form-urlencoded

userid=joe&password=guessme


You can generate this second request with the following code:


 ...
HttpConnection conn = null;
String url = "http://www.mysite.com/login.jsp";
String agent = "Mozilla/4.0";
String rawData = "userid=joe&password=guessme";
String type = "application/x-www-form-urlencoded";

String encodedData = encode( rawData ); // user-supplied

try {
conn = (HttpConnection) Connector.open( url );
conn.setRequestMethod( HttpConnection.POST );
conn.setRequestProperty( "User-Agent", agent );
conn.setRequestProperty( "Content-Type", type );
conn.setRequestProperty( "Content-Length",
encodedData.length() );

OutputStream os = conn.openOutputStream();
os.write( encodedData.getBytes() );

int rc = conn.getResponseCode();
... // process it
}
catch( IOException e ){
// handle the error here
}
...


The code is similar to the GET case, but now a call to setRequestMethod() changes the request type to POST, and three calls to setRequestProperty() set the headers. The Content-Length and Content-Type headers are critical because they tell the web server how many bytes of data to expect, and what kind, identified by a MIME type.

When posting data, setting the Content-Length header isn't obligatory; if you omit it, the HTTP client implementation should use "chunked encoding" - part of the HTTP 1.1 specification - to transfer the data to the web server. Set the length if you can, though; implementations of chunked encoding vary from one web server to the next.

After setting the headers the code sends the data itself, using the connection's output stream. You can send any kind of data as long as you identify it properly. In MIDP clients the two most popular MIME types are application/octet-stream, to send raw binary data, and application/x-www-form-urlencoded, to send name-value pairs.

The latter type mimics what a web browser sends when the user submits a form. You send the data for a "form post" in the same format you use for URL query parameters, except that you drop the leading ? character. Just as when sending a query, you must URL-encode form-post data. Unfortunately, MIDP doesn't supply an encoder. You must write your own, or use one of several freely available implementations.

In general, you'll use POST requests to transfer data from a MIDP client to a web server, because the web server doesn't log the data you send, and you can send binary data - handy for transferring byte arrays or serialized Java objects. You also avoid the limits on URL length that GET requests face if they include too many query parameters.

Difference between RESETLOGS and NORESETLOGS

What is the difference between RESETLOGS and NORESETLOGS when you create a control file?

This question was posed by Sandeep today. One of the good responses to this question is in orafaq forums:

Oracle is forcing you to make a choice:

Use RESETLOGS after incomplete recovery (when the entire redo stream wasn't applied). RESETLOGS will initialize the logs, reset your log sequence number, and start a new "incarnation" of the database.

Use NORESETLOGS when doing complete recovery (when the entire redo stream was applied). Oracle will continue using the existing (valid) log files.

Metalink 16530.1 has this information:

After recover database operation, open the database with: ALTER DATABASE OPEN [NO]RESETLOGS

NORESETLOGS

The NORESETLOGS option does not clear the redo log files during startup and allows the online redo logs to be used for recovery. It is only used in the scenario where MANUAL RECOVERY is started, CANCEL is used, and then RECOVER DATABASE is started.

RESETLOGS

CAUTION: Never use RESETLOGS unless necessary.

Once RESETLOGS is used then the redo log files cannot be used and any completed transactions in those redo logs are lost!!

Before using the RESETLOGS option take an offline backup of the database.

The RESETLOGS option clears all the online redo logs and modifies all the online data files to indicate that no recovery is needed. After resetting the redo logs, none of the existing log files or data file backups can be used. In the control file, the log sequence number is modified, which is very important for recovery purposes. Recovery will be applied only to the log files whose sequence number is greater than the log sequence number in the control file. One has to be very cautious when using the RESETLOGS option. It is important to remember that all datafiles must be online, otherwise they will become useless once the database is up.
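
A minimal sketch of the two open commands from the note above, assuming the appropriate recovery has already been run and you are connecting as SYSDBA:

# after incomplete recovery (the entire redo stream was NOT applied):
sqlplus -s "/ as sysdba" <<'EOF'
alter database open resetlogs;
EOF

# after complete recovery (the entire redo stream was applied):
sqlplus -s "/ as sysdba" <<'EOF'
alter database open noresetlogs;
EOF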

APP-FND-01564: ORACLE error 1000 in close_server_files

During the application of a help patch that was running FNDGFU concurrent requests, a worker FAILED and the following error was logged in the .req file:

APP-FND-01564: ORACLE error 1000 in close_server_files

Cause: close_server_files failed due to ORA-01000: maximum open cursors exceeded.

The SQL statement being executed at the time of the error was: "&SQLSTMT and was executed from the file &ERRFILE".
ORACLE error 3114 in afpexcp

Cause: afpexcp failed due to ORA-03114: not connected to ORACLE.

The SQL statement being executed at the time of the error was: and was executed from the file .

After some research on Metalink, I realized that the DBAs had not set the INBOUND_CONNECT_TIMEOUT parameter in listener.ora, so the default timeout applied and the requests were simply timing out. After explicitly setting this parameter to zero (for patching only), the patch proceeded without issues.

An offshoot of this error was ORA-07445 in alert log:

ORA-07445: exception encountered: core dump [drumcmf()+184] [SIGSEGV] [Address
not mapped to object] [0x000000000] [] []

So in 10.2.0.3, do not forget to set INBOUND_CONNECT_TIMEOUT to zero during patching, and do not forget to set it back to 60, 120, or 240 afterwards, whichever works best in your environment.
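
For reference, this is roughly what the change looks like (a sketch; LISTENER is a placeholder for your listener name, and the sqlnet.ora line only matters if it is set there too):

# in listener.ora on the database tier:
#   INBOUND_CONNECT_TIMEOUT_LISTENER = 0
# in sqlnet.ora, if present:
#   SQLNET.INBOUND_CONNECT_TIMEOUT = 0
# then reload the listener without bouncing it:
lsnrctl reload LISTENER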

Wednesday, May 28, 2008

ORA-06512: at "SYS.UTL_FILE", line 140

During a patch run following errors were reported in adworker log:

sqlplus -s APPS/***** @/apps11i/erp/appl/xtr/11.5.0/patch/115/sql/xtr3904621.sql
DECLARE
*
ERROR at line 1:
ORA-29282: invalid file ID
ORA-06512: at "SYS.UTL_FILE", line 140
ORA-06512: at "SYS.UTL_FILE", line 785
ORA-06512: at line 158
ORA-29285: file write error

The initialization parameter utl_file_dir had only one directory, $COMMON_TOP/temp. The DBAs changed this to /usr/tmp and bounced the database, after which they resumed the patch. The error still came, so we tried running xtr3904621.sql manually:

sqlplus /nolog
connect apps/apps
@/apps11i/erp/appl/xtr/11.5.0/patch/115/sql/xtr3904621.sql

We got the very same error.

I logged in as the oracle user:
cd /usr/tmp
touch abc.txt
vi abc.txt
Test file
:wq
The file was created and edited without issues.

Strangely, two minutes later, when I ran the same SQL connected as the oracle user from the 10.2.0 ORACLE_HOME, it worked without issues. It also ran when I connected from the 8.0.6 home as the applmgr user. When we restarted the worker, it succeeded.

I still don't understand why it did not work when we tried immediately after bouncing the database. Is there a delay mechanism somewhere? No answers to this question yet.

Tuesday, May 27, 2008

SEG$, FET$, UET$

Any object in the database that takes up space is a segment. A segment is made of extents, and a segment exists in a tablespace. Each extent in the segment is made up of blocks, and each extent must exist within a single file. In dictionary-managed tablespaces, Oracle’s space management basically revolves around 5 tables:
• fet$ - free extents
• uet$ - used extents
• seg$ - segments
• ts$ - tablespaces
• file$ - files
Between them, fet$ and uet$ map all the space in the database. Seg$ is used to cluster used space, ts$ is used to cluster free space, and file$ does little more than list the files.

SEG$ structure in a 10.2.0.3 Database:

Name Null? Type
----------------------------------------- -------- ----
FILE# NOT NULL NUMBER
BLOCK# NOT NULL NUMBER
TYPE# NOT NULL NUMBER
TS# NOT NULL NUMBER
BLOCKS NOT NULL NUMBER
EXTENTS NOT NULL NUMBER
INIEXTS NOT NULL NUMBER
MINEXTS NOT NULL NUMBER
MAXEXTS NOT NULL NUMBER
EXTSIZE NOT NULL NUMBER
EXTPCT NOT NULL NUMBER
USER# NOT NULL NUMBER
LISTS NUMBER
GROUPS NUMBER
BITMAPRANGES NOT NULL NUMBER
CACHEHINT NOT NULL NUMBER
SCANHINT NOT NULL NUMBER
HWMINCR NOT NULL NUMBER
SPARE1 NUMBER
SPARE2 NUMBER


I found some good information on segments here which I am pasting below:

Segment types
There are 11 types of segments in Oracle 10g:

* table
* table partition
* index
* index partition
* cluster
* rollback
* deferred rollback
* temporary
* cache
* lobsegment
* lobindex

These types can be grouped into four segment classes:

* data segments: table and cluster
* index
* rollback
* temporary data

In order to find out the size of the respective segments, use dba_segments (or user_segments, or all_segments).
A segment can either be created so that it can have an unlimited or limited number of extents.
Oracle keeps track of free blocks for each data segment.
Segment Headers
The segment header is stored in the first block of the first extent. It contains:

* The extents table
* The free lists descriptors
* The high water mark

Temporary segments
The following SQL statements require a temporary segment if they cannot be completed in memory:

* create index.
* select order by
* select distinct
* select group by
* select union
* select intersect
* select minus
* analyze table
* unindexed joins
* certain correlated subqueries

Actually, most of these statements require a sort.
Temporary segments can be stored in any tablespace. However, it makes more sense to store temporary segments in a temporary tablespace.
SMON frees temporary segments.

Apart from handling special events such as instance recovery, the System MONitor (smon) has two basic tasks which it cycles through during the lifetime of an instance.

Every five minutes it looks through the free extent table (sys.fet$) to see if there are any adjacent free extents that can be coalesced into a single extent, and on every 25th cycle (i.e. every two hours and five minutes) it looks at the segment table (sys.seg$) to see if there are any temporary segments that need to be eliminated.
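
A quick way to see how much temporary-segment cleanup SMON still has pending is to count the SEG$ rows of TYPE#=3 (the temporary type mentioned in the DROP_SEGMENTS note above) per tablespace. A sketch, run as SYSDBA:

sqlplus -s "/ as sysdba" <<'EOF'
-- temporary segments still recorded in the dictionary, grouped by tablespace number
select ts#, count(*) temp_segments
  from sys.seg$
 where type# = 3
 group by ts#;
EOF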

Friday, May 23, 2008

Markview error: ORA-04062: timestamp of package "SFGWY.SFOA_AP_UTIL" has been changed

We had an issue with 170 Systems Markview on one of our environments where invoices were going into the error queue. The following error was thrown:

ORA-04062: timestamp of package "SFGWY.SFOA_AP_UTIL" has been changed

After a lot of time and research, Akhilesh and Subbu came across this thread in the Oracle forums. After going through it, they changed the value of the initialization parameter from

REMOTE_DEPENDENCIES_MODE = TIMESTAMP
to

REMOTE_DEPENDENCIES_MODE = SIGNATURE

The error stopped coming after this change, but only temporarily; it was finally resolved after they bounced the database.

I found a good writeup on this parameter here, which I am pasting below:

* If the initialization parameter file contains the following specification:

REMOTE_DEPENDENCIES_MODE = TIMESTAMP

Then only timestamps are used to resolve dependencies (if this is not explicitly overridden dynamically).
* If the initialization parameter file contains the following parameter specification:

REMOTE_DEPENDENCIES_MODE = SIGNATURE

Then signatures are used to resolve dependencies (if this is not explicitly overridden dynamically).
* You can alter the mode dynamically by using DDL statements. For example, this statement alters the dependency model for the current session:

ALTER SESSION SET REMOTE_DEPENDENCIES_MODE = {SIGNATURE | TIMESTAMP}

This example alters the dependency model systemwide after startup:
ALTER SYSTEM SET REMOTE_DEPENDENCIES_MODE = {SIGNATURE | TIMESTAMP}

If the REMOTE_DEPENDENCIES_MODE parameter is not specified, either in the init.ora parameter file or using the ALTER SESSION or ALTER SYSTEM DDL statements, then timestamp is the default value. Therefore, unless you explicitly use the REMOTE_DEPENDENCIES_MODE parameter, or the appropriate DDL statement, your server is operating using the timestamp dependency model.

When you use REMOTE_DEPENDENCIES_MODE=SIGNATURE:

* If you change the default value of a parameter of a remote procedure, then the local procedure calling the remote procedure is not invalidated. If the call to the remote procedure does not supply the parameter, then the default value is used. In this case, because invalidation/recompilation does not automatically occur, the old default value is used. If you want to see the new default values, then you must recompile the calling procedure manually.
* If you add a new overloaded procedure in a package (a new procedure with the same name as an existing one), then local procedures that call the remote procedure are not invalidated. If it turns out that this overloading results in a rebinding of existing calls from the local procedure under the timestamp mode, then this rebinding does not happen under the signature mode, because the local procedure does not get invalidated. You must recompile the local procedure manually to achieve the new rebinding.
* If the types of parameters of an existing packaged procedure are changed so that the new types have the same shape as the old ones, then the local calling procedure is not invalidated or recompiled automatically. You must recompile the calling procedure manually to get the semantics of the new type.

Dependency Resolution

When REMOTE_DEPENDENCIES_MODE = TIMESTAMP (the default value), dependencies among program units are handled by comparing timestamps at runtime. If the timestamp of a called remote procedure does not match the timestamp of the called procedure, then the calling (dependent) unit is invalidated and must be recompiled. In this case, if there is no local PL/SQL compiler, then the calling application cannot proceed.

In the timestamp dependency mode, signatures are not compared. If there is a local PL/SQL compiler, then recompilation happens automatically when the calling procedure is run.

When REMOTE_DEPENDENCIES_MODE = SIGNATURE, the recorded timestamp in the calling unit is first compared to the current timestamp in the called remote unit. If they match, then the call proceeds. If the timestamps do not match, then the signature of the called remote subprogram, as recorded in the calling subprogram, is compared with the current signature of the called subprogram. If they do not match (using the criteria described in the section "When Does a Signature Change?"), then an error is returned to the calling session.
Suggestions for Managing Dependencies

Follow these guidelines for setting the REMOTE_DEPENDENCIES_MODE parameter:

* Server-side PL/SQL users can set the parameter to TIMESTAMP (or let it default to that) to get the timestamp dependency mode.
* Server-side PL/SQL users can choose to use the signature dependency mode if they have a distributed system and they want to avoid possible unnecessary recompilations.
* Client-side PL/SQL users should set the parameter to SIGNATURE. This allows:
1. Installation of new applications at client sites, without the need to recompile procedures.
2. Ability to upgrade the server, without encountering timestamp mismatches.
* When using signature mode on the server side, add new procedures to the end of the procedure (or function) declarations in a package specification. Adding a new procedure in the middle of the list of declarations can cause unnecessary invalidation and recompilation of dependent procedures.

conn / as sysdba

set linesize 121
col name format a40
col value format a40

SELECT name, value
FROM gv$parameter
WHERE name = 'remote_dependencies_mode';

Mail sent on clicking the Approve button in workflow mail bounces back with smtp error

In one of our dev environments, workflow approval mails were getting bounced with this message:

Your message did not reach some or all of the intended recipients.

Subject: Action Required: Purchase Requisition 14729 for Example, User (100.00 USD) requires approval
Sent: 5/23/2008 12:33 PM

The following recipient(s) could not be reached:

mail@erp4.justanexample.com on 5/23/2008 12:37 PM
A configuration error in the e-mail system caused the message to bounce between two servers or to be forwarded between two recipients. Contact your administrator.
<>

I googled for this SMTP error and found that many people have suggested having all the names of the mail server in /etc/mail/local-host-names. I asked the Unix team to make this change. However, I am not very sure this will work, as we do not use local-host-names in our sendmail configuration on other dev servers.

The Unix team came back saying that the SMTP configuration works on all other servers and they would not change it. So I checked the other instances on the same box and found that the workflow notification mailer was working fine on some of them using the very same SMTP server. I then checked the workflow mailer configuration screens in OAM and found that the SMTP unix user and password were incorrect; the workflow mailer was not running at all. After this was corrected, the approval workflow mails started going out without bouncing back.
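
A quick check I find useful before digging into the mailer configuration is to confirm that the SMTP host and port are even reachable from the concurrent tier (a sketch; smtp.justanexample.com is a placeholder for the outbound server configured in the mailer):

# you should see a "220 ..." banner from the mail server; type QUIT to exit
telnet smtp.justanexample.com 25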

Thursday, May 22, 2008

Wireless router doesn't work after thunderstorms

Akhilesh called me because his wireless router was not taking the internet address from his cable modem. I checked all the settings; everything was fine. The wireless router worked fine in wireless and wired mode, but it refused to connect to the Internet. As a last resort, I tried to upgrade the firmware of the wireless router. After the firmware was upgraded, the router ran fine.

Wednesday, May 21, 2008

java.io.IOException: Not enough space

Anand pinged me today about an error: the AppsLocalLogin.jsp page would not open up. He showed me the Apache logs:

[21/05/2008 16:14:49:716] (ERROR) an error returned handling request via protocol "ajpv12"
[21/05/2008 16:14:49:716] (ERROR) balance: 16408 internal servlet error in server erp08.justanexample.com:16040
[21/05/2008 16:14:49:716] (ERROR) an error returned handling request via protocol "balance"
[21/05/2008 16:17:32:274] (ERROR) ajp12: Servlet Error: OracleJSP: oracle.jsp.provider.JspCompileException: <H3>Errors compiling:$COMMON_TOP/_pages/_oa__html//_AppsChangePassword.java</H3><pre>java.io.IOException: Not enough space</pre>
[21/05/2008 16:17:32:274] (ERROR) an error returned handling request via protocol "ajpv12"
[21/05/2008 16:17:32:274] (ERROR) balance: 5587 internal servlet error in server erp08.justanexample.com:16040
[21/05/2008 16:17:32:274] (ERROR) an error returned handling request via protocol "balance"
[21/05/2008 16:22:50:548] (ERROR) ajp12: Servlet Error: OracleJSP: oracle.jsp.provider.JspCompileException: <H3>Errors compiling:$COMMON_TOP/_pages/_oa__html//_AppsChangePassword.java</H3><pre>java.io.IOException: Not enough space</pre>
[21/05/2008 16:22:50:548] (ERROR) an error returned handling request via protocol "ajpv12"
[21/05/2008 16:22:50:548] (ERROR) balance: 5586 internal servlet error in server erp08.justanexample.com:16040
[21/05/2008 16:22:50:548] (ERROR) an error returned handling request via protocol "balance"

I checked the output of swap -l; it was not returning anything.
We checked for space on /var and found that it had a 2 GB file called swapfile.
I immediately added it back with swap -a /var/swapfile.

The issue was resolved.
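
For reference, the Solaris commands involved (a sketch; the 2 GB size matches the swapfile we found, adjust to taste):

# list swap devices and show the swap summary
swap -l
swap -s
# if the swap file itself is missing, it can be recreated first:
# mkfile 2g /var/swapfile
# add the swap file back
swap -a /var/swapfile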

Tuesday, May 20, 2008

You can not deploy a custom war/ear file on 9iAS 1.0.2.2.2

Sandeep asked me if it is possible to deploy a war/ear file on the 9iAS 1.0.2.2.2 that ships with E-Business Suite. It is very much possible in 10g AS (9.0.4) but not possible in 9iAS. I logged an SR with Oracle about this, and this is what they said:

QUESTION
=========
Is it possible to deploy a war file created in JDeveloper in 9iAS ORACLE_HOME of 11.5.10.2 ?

RESEARCH
=========
(Note: This is INTERNAL ONLY research. No action should be taken by the customer on this information.
This is research only, and may NOT be applicable to your specific situation.)

Note 361673.1 - Deploying to 10g (9.0.4) iAS Application Server from JDeveloper 9.0.4


UPDATE
========
Please try to use jdeveloper to directly deploy the war file. You can refer Note 361673.1.

Regards,
Jenny

@CUS

20-MAY-08 18:13:13 GMT

New info :
The note you have given tells about 10g application server version 9.0.4. 11.5.10.2 comes with 9iAS version 1.0.2.2.2.
How do we do this in 9iAS 1.0.2.2.2 ?


20-MAY-08 19:53:01 GMT
UPDATE
=======
Deploy the war file is part of functionality of jdeveloper. I think the way to deploy are same in both 10g and 9i. But need to double check with jdeveloper team, note 361673.1 belongs to jdeveloper team too.

Regards,
Jenny

@XFR

20-MAY-08 20:54:21 GMT
UPDATE
=======
Hi Vikram,

I am Stephan from the JDev team (core).
I will review your issue as soon as possible.
( I am sorry, but I do not support the Applications 11.5.10.2).

Regards,
Stephan

STATUS
-----------
RVW - in review

20-MAY-08 21:13:55 GMT


Hi Vikram,

QUESTION
========
Is it possible to deploy the war file in jdeveloper 9.0.3?
Is there any difference when deploy to iAS 9.0.4 and 1.0.2.2.2?


ANSWER
======
I would say it is unlikely to work.
I know 1.0.2.2.2 is not supported if that is the correct version.
And I doubt you would be able to deploy to it from JDeveloper 9.0.3
if that version is correct.

I've asked some of my colleagues, but no one recalls that there was a way to deploy to 1.0.2.2.2 other than its own mechanism.

1.0.2.2.2 didn't have any OC4J instance.
(OC4J was still an Orion product at that time).
It may be deploys Java applications thru Mod_JServ.
JServ didn't use ear or war files as JServ has no concept of ears and wars.
It didn't use opmn or dcm as I recall. It used admin.jar only I think.
You set everything up as jars and use certain properties files to point to the pieces.

Unfortunately, we no longer support those products for custom application deployment.
That if you want to use a custom application, you should install and use the current versions. Also, if you are trying to develop an apps framework application, this has to be looked after by the apps team.

Also, my research on Metalink let me find: Note 130376.1
Title: Deploying a servlet created in JDeveloper on iAS's Apache JServ

Please be advised that 9iAS 1.0.2.2.x became Desupported 30-JUN-2004

Extended Support (ES) will be provided until 30-JUN-2007, if a customer
has a current Oracle Support contract in place.

Note 231674.1
Desupport Notices - Oracle Products Oracle9i Application Server 1.0.2.2 & 1.0.2.2.x

Hope this helps you understand your case further.
Best regards,
Stephan

20-MAY-08 21:56:01 GMT

As a separate product 9iAS 1.0.2.2.2 is desupported, but as part of Oracle E-Business Suite 11.5.10.2, it still remains supported. Since OC4J is not present in 9iAS 1.0.2.2.2, I do not think it is possible. Thanks for all your help. You may close this SR.

ResetDNSService():OpenService: - GetLastError

Some time back, my laptop was showing the blue screen of death whenever I issued the shutdown command. After applying XP Service Pack 3, the blue screen of death disappeared. However, I now get a message from my Nortel Contivity VPN client which says: ResetDNSService():OpenService: - GetLastError

This error comes only when I launch the client from a non-Administrator user. The error doesn't appear if the VPN client is launched through an Administrator user. As a good security practice, I use a non-admin user for my day-to-day activities. It prevents any spyware from getting installed without my knowledge, as a normal user has limited rights.

I am investigating this one still. Will update this post once I have some answers.

How to change the hostname of a linux box

You have to make two changes:

1. vi /etc/sysconfig/network

NETWORKING=yes
HOSTNAME=linux.justanexample.com

2. vi /etc/hosts

192.168.2.1 linux.justanexample.com linux

The IP address given is just an example; you can use whatever IP is assigned to you by your ISP. Reboot, and your box has its new hostname: linux.justanexample.com. (See the sketch below if you would rather not reboot.)
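
The same result can usually be achieved on the fly (a sketch, assuming a RHEL-style init setup):

# set the running hostname and restart networking to pick up the change
hostname linux.justanexample.com
service network restart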

Monday, May 19, 2008

"\x80L\x01\x03" 501 in Apache access_log on extranet

Recently, when we were testing the extranet on two new servers in the DMZ, it was not possible to open the home page http://extranet3.justanexample.com. Checking the access_log, we found that whenever we tried to access the URL, Apache would log an HTTP 501 message:

"\x80L\x01\x03" 501

The 501 status code message is displayed when the server does not recognize the document request method. The server is not capable of fulfilling this request and states the request was “Not Implemented”.

Once the load balancer team corrected the SSL certificate, we stopped receiving this error.

Saturday, May 17, 2008

ApacheJServ/1.1.2: Failed to bind to port(s) specified in jserv.properties

Recently we had an issue on the newly created extranet tiers. Apache was running fine but jserv would not come up. We got errors like this one in jvm logs:

ApacheJServ/1.1.2: Failed to bind to port(s) specified in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties. Please check $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties and jserv.conf file, and make sure number of JServ process specified in jserv.conf is less than number of ports specified in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties. and the ports are not used by other processes.

It was a mystery why jserv refused to come up.

A grep for the servlet port range in the context file returned this:

$ grep s_oacore_servlet_portrange $CONTEXT_FILE
16000-16009
$

To rule out any hardening on the server which might cause ports to be disabled, we tried starting the apps listener on ports 16000 to 16009, and it came up without issues. Seeing the error messages, I was sure that it was trying to open these ports on some other server, maybe the load balancer hostname.

After enabling debug logging on Apache and JServ as specified in Metalink note 230688.1, we got the following errors in jserv.log:

[16/05/2008 22:03:42:946 EDT] ApacheJServ/1.1.2 is starting...
[16/05/2008 22:03:42:946 EDT] using confFile $IAS_ORACLE_HOME/Apache
/Jserv/etc/jserv.properties
[16/05/2008 22:03:43:086 EDT] Connection authentication enabled
[16/05/2008 22:03:43:106 EDT] Connection allowed from /127.0.0.1
[16/05/2008 22:03:43:113 EDT] Connection allowed from extranet1.jstanexample.com/192.168.1.101
[16/05/2008 22:03:43:113 EDT] port 16000 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16001 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16002 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16003 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16004 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16005 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16006 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16007 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16008 is specified in properties file
[16/05/2008 22:03:43:113 EDT] port 16009 is specified in properties file
[16/05/2008 22:03:43:117 EDT] cannot create socket on 16000
[16/05/2008 22:03:43:117 EDT] try to start on port 16001
[16/05/2008 22:03:43:117 EDT] cannot create socket on 16001
[16/05/2008 22:03:43:117 EDT] try to start on port 16002
[16/05/2008 22:03:43:117 EDT] cannot create socket on 16002
[16/05/2008 22:03:43:117 EDT] try to start on port 16003
[16/05/2008 22:03:43:118 EDT] cannot create socket on 16003
[16/05/2008 22:03:43:118 EDT] try to start on port 16004
[16/05/2008 22:03:43:118 EDT] cannot create socket on 16004
[16/05/2008 22:03:43:118 EDT] try to start on port 16005
[16/05/2008 22:03:43:118 EDT] cannot create socket on 16005
[16/05/2008 22:03:43:118 EDT] try to start on port 16006
[16/05/2008 22:03:43:118 EDT] cannot create socket on 16006
[16/05/2008 22:03:43:118 EDT] try to start on port 16007
[16/05/2008 22:03:43:118 EDT] cannot create socket on 16007
[16/05/2008 22:03:43:118 EDT] try to start on port 16008
[16/05/2008 22:03:43:118 EDT] cannot create socket on 16008
[16/05/2008 22:03:43:118 EDT] try to start on port 16009
[16/05/2008 22:03:43:119 EDT] cannot create socket on 16009
[16/05/2008 22:03:43:119 EDT] ApacheJServ/1.1.2: Failed to bind to port(s) specified in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties. Please check $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties and jserv.conf file, and make sure number of JServ process specified in jserv.conf is less than number of ports specified in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties. and the ports are not used by other processes. ApacheJServ/1.1.2: Failed to bind to port(s) specified in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties. Please check $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties and jserv.conf file, and make sure number of JServ process specified in jserv.conf is less than number of ports specified in $IAS_ORACLE_HOME/Apache/Jserv/etc/jserv.properties. and the ports are not used by other processes.

mod_jserv.log showed:
[16/05/2008 23:51:19:877] (EMERGENCY) ajp12: can not connect to host 192.168.1.100:16000
[16/05/2008 23:51:19:878] (EMERGENCY) ajp12: function connection fail


That was the first clue we got that it was trying to connect to 192.168.1.100 on port 16000. 192.168.1.100 was not the IP address of the Apache server. It was the IP used to connect to the server from the intranet.

An nslookup on extranet1.justanexample.com showed that the DNS was returning the address 192.168.1.100 when this name was queried. Even though pinging the name returned the 10.x address assigned to the extranet apache server, when Oracle apps queried for the name it was resolving to the IP returned by DNS.

This means that the order of name resolution was dns first and /etc/hosts next. I corrected this in /etc/nsswitch.conf by switching this line:

hosts: dns files

with this line

hosts: files dns

After we did this and bounced Apache, JServ finally came up, as it was now binding to the correct IP address and port. The mystery was solved: it had previously been trying to bind to ports on 192.168.1.100 instead of the base IP address 10.1.1.1 of the extranet server.
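
A quick way to confirm the fix is to compare what the resolver library returns (which honours the nsswitch.conf order) with what DNS alone returns; a sketch:

# resolves via the order in /etc/nsswitch.conf, so it should now show the 10.x address
getent hosts extranet1.justanexample.com
# queries DNS directly, so it will still show 192.168.1.100
nslookup extranet1.justanexample.com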

Thursday, May 15, 2008

SMON: disabling tx recovery

Recently, we had performance issues on a box which was shared by 18 instances. It was a Sun E2900 box with 24 processors and 96 GB RAM. The problem was tracked down to long-running FNDLIBR processes which were hogging a lot of memory. Some of these processes had been running for more than 30 days; clearly they were not legitimate. As a result, we shut down the concurrent managers of most of the instances, ran cmclean.sql and restarted the services. On one of the instances, which had a 10 GB SGA, we reduced the SGA to 6 GB. However, when the shutdown immediate command was issued, the instance refused to shut down, with this message in the alert log:

SMON: disabling tx recovery

I found Metalink Note 1076161.6 which I am pasting here for reference:

Description
===========
SHUTDOWN NORMAL or SHUTDOWN IMMEDIATE hangs. In the alert.log, you see only
the following:

Shutting down instance (immediate)
License high water mark = 12
Thu Dec 8 18:43:16 1994
alter database close normal
Thu Dec 8 18:43:17 1994
SMON: disabling tx recovery
SMON: disabling cache recovery
or
waiting for smon to disable tx recovery

There are no ORA errors or trace files.



Scope & Application
===================
Informational

During a SHUTDOWN IMMEDIATE and SHUTDOWN NORMAL, SMON is cleaning up extents
which are no longer needed and marking them as freed.

Either wait for SMON to clean up the free extents in the database as it
shuts down or perform a SHUTDOWN ABORT to shutdown the instance. A SHUTDOWN
ABORT will not perform a clean shutdown.

Verify that temporary segments are decreasing
---------------------------------------------
To verify that the temporary segments are decreasing, have an active session
available in Server Manager or SQLPLUS during the SHUTDOWN IMMEDIATE. Issue the following
query to ensure the database is not hanging, but is actually performing extent
cleanup:

SVRMGR/SQL> select count(block#) from fet$;
COUNT(BLOC
----------
7

SVRMGR/SQL> select count(block#) from uet$;
COUNT(BLOC
----------
402

After some time has elapsed, reissue the query and see that the values for fet$
have increased while the values for uet$ have decreased:

SVRMGR/SQL> select count(block#) from fet$;
COUNT(BLOC
----------
10

SVRMGR/SQL> select count(block#) from uet$;
COUNT(BLOC
----------
399

During shutdown the SMON process is cleaning up extents and updating the data
dictionary tables with the marked free extents. As the extents are marked as
freed, they are removed from the table for used extents, UET$ and placed on the
table for free extents, FET$.

How to Avoid creating many Temporary Extents
--------------------------------------------
Once the database has shutdown cleanly, to avoid creating many temporary
extents change the initial and next extent sizes on temporary tablespaces
to a more appropriate size:

ALTER TABLESPACE <tablespace_name> DEFAULT STORAGE (INITIAL <size>M/K NEXT <size>M/K);

Note: If the temporary tablespace is of type TEMPORARY, then this change
will only affect temporary segments created after issuing the above
command. Any existing temporary segments already in the TEMPORARY tablespace
will not be affected till the instance is restarted. On shutdown, existing
temporary segments are dropped. If the TEMPORARY TABLESPACE is of type
PERMANENT, then cleanup is performed by SMON after completion of the process
using it.

Increasing the initial and next extent size will decrease the number of extents
that are allocated to temporary segments. Since there are fewer extents to
deallocate, the database should shutdown more speedily.

Take the following scenario:

A database was subject to large sorts with the following sort parameter in
the "init.ora" file:

- sort_area_size=1000000

The temporary tablespaces for this database were all created with initial and
next extents sized at 50k and the total database size was about 300mb.

Database sorts will utilize memory as much as possible based on the "init.ora"
parameter "sort_area_size". Once this memory-based sort area is filled, the
database will utilize the temporary table space associated with the database
user to complete the sort operation. During a shutdown normal, the database
will attempt to clean up the temporary tablespaces.

If a small extent size is used, then a large number of extents will be created
for a large sort. The cleanup of the temporary tablespace takes much longer
with a large number of extents.

Note:
=====
You have to do a shutdown abort and then bring the database
back up to run the suggested queries.

For other reasons for slow/hung shutdown see also these notes:

Note 375935.1 - What To Do and Not To Do When 'shutdown immediate' Hangs
Note 428688.1 - Bug 5057695: Shutdown Immediate Very Slow To Close Database.

References:
===========
Note 61997.1 SMON - Temporary Segment Cleanup and Free Space Coalescing


Search Words:
=============
hanging
shutdown

We had to issue the command "shutdown abort" to finally shut down the database. I asked Oracle in an SR how to do the temporary segment cleanup while the database is running, so that the next time we shut down the DB, it comes down without taking too much time. Oracle responded that there is no way to do it.

UPDATE
=======
Hi Vikram,
I reviewed the generated files hoping to see something upnormal, however both findings above do not point to any upnormal issues, hence SMON background process is performing the expected behaviour of cleaning up extents which are no longer needed and marking them as freed.

As the note mentions, there are two options, either to wait until SMON performs its job, or use
shutdown abort (which is something i would not recommend).

Please let me know your thoughts and whether you need us to discuss this over the phone.


Best Regards,
Hany Fawzy
Oracle Support.


Hi,

Is there any way to make SMON do its cleanup job before the actual command of shutdown immediate is given ?

- Vikram


UPDATE
=======
Hi Vikram,

The answer to this question is no, there is however another method, by which you would stop the cleanup before the shutdown, this is actually implemented by using Event 10061, This disables SMON from cleaning up temporary extents. This will leave used (unavailable) space in the tablespace in which the extents reside. You can then point users to another temporary tablespace if necessary. The database can then be restarted off-peak without 10061 and
SMON will clean up the temporary extents. The above is actually not recommended, If you perform a shutdown before allowing SMON to clean up all temporary extents in a temporary tablespace, then after the subsequent startup SMON may appear to spin without cleaning up any further extents.


Best Regards,
Hany Fawzy
Oracle Support.

The event 10061 is described in metalink note 21169.1. You need to put this in your init.ora:

event="10061 trace name context forever, level 10"

However, this also didn't help and shutdown immediate still hung. So we kept doing shutdown abort until the instance was recloned from Production.

Wednesday, May 14, 2008

Pasta printing

We had adventures with setting up Pasta today, when we were called to check a logged issue where an instance was unable to print reports to a new Xerox printer. These were the steps to diagnose and finally solve the issue (a condensed command sketch follows the list):

1. Check if the printer is installed on Unix
2. Print a plain text file to the printer with the command: lp -c filename.txt -d printername
3. Using Ghostscript's pdf2ps, convert a PDF file into PS
4. Set the PRINTER environment variable on Unix: PRINTER=myprinter;export PRINTER
5. Send the PS file to the printer through lp -c filename.ps
6. Check the Pasta configuration in E-Business Suite
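
A condensed sketch of steps 2 to 5 from the Unix side (myprinter and the file names are placeholders):

# print a plain text file directly to the printer
lp -c testpage.txt -d myprinter
# convert a PDF to Postscript with Ghostscript and print it
pdf2ps testreport.pdf testreport.ps
PRINTER=myprinter; export PRINTER
lp -c testreport.ps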

Pasta is a post-processing printing tool used in Apps. The term post-processing is used because the Pasta program is called after the concurrent processing server has completed processing the request and has generated an output file. The Pasta executable is FNDPSTAX. It is invoked via the Apps printer driver ONLY if a printed copy has been requested. At the time of writing, the latest available version is 3.0.4 (patch 3325651); you'll get the latest version when you apply the latest ATG rollup patch. Typing FNDPSTAX -v at the Unix command line will print the version.

E-Business Suite (EBS) is seeded with the printer type '--PASTA Universal Printer Type'. It can be associated with a Postscript printer and it’s ready for use with minimal changes or modifications to $FND_TOP/resource/pasta_printername.cfg

Pasta is required for EBS instances installed with the Unicode (UTF8) character set. On a UTF8 instance, output files (reports) are created with the UTF8 character set. Unicode UTF8 is a relatively new character set and many existing printers do not support the Unicode character set. Therefore, Pasta is used to convert text output files to the Postscript format (an image) or to an alternate character set, which the printer supports.

If you plan to print in Chinese, Japanese or Korean from your E-Business Suite instance, PASTA is the only way to do it.

Some metalink notes to help you in configuring PASTA:

Note 356501.1 - How to Setup Pasta Quickly and Effectively
Note 365111.1 - How to Setup Pasta for Non-Postscript Ready Printers
Note 189708.1 - Oracle Reports 6i Setup Guide for Oracle Applications 11i
Note 240864.1 - Activating and Configuring IX Library
Note 409458.1 - Understanding and Implementing direct printing of PDF using Ghostscript
Note 338990.1 - How To Print XML Publisher PDF Reports Via The Concurrent Manager?

Tuesday, May 13, 2008

Laptop mouse pointer jumps all over the screen

I am currently facing a very annoying problem with my Dell 610 laptop's mouse pointer. The pointer goes out of control and either goes to one corner and stays there or randomly keeps jumping; any attempts to control it are defeated. This is not a mouse problem, as I have tried a normal ball mouse, an optical mouse, and a laser mouse, and it even happens when I have no mouse attached to my laptop. Apparently this problem is common on the XO Laptop (One Laptop per Child) as described here. I have tried a few suggestions given by them:

1. Put four fingers on the touchpad, one on each corner. Works, but it's back to the jumpy mouse in seconds.
2. Use hand lotion to rule out static electricity. I have yet to try this.

I think the most effective way to solve this would be to disable the touchpad. While googling for "disable touchpad" I found this link. Unfortunately the navigation described there was not available on my laptop. So I switched off my laptop, started it again, pressed F2 and went into the BIOS settings: POST behavior -> Mouse and Touchpad -> Serial (use serial mouse, disable touchpad). Saved and rebooted. Now the touchpad no longer works and, as predicted, the mouse pointer has stopped jumping. Good riddance. I was terribly annoyed by this jumping mouse menace for a long time. I should have done this as soon as I faced the problem.

Monday, May 12, 2008

How to license a new product through OAM?

Here's how you can license a new product through OAM:

1. Login to OAM
2. Click on Site Map (The link near Applications Dashboard link)
3. Click on License Manager
4. Under License heading, click Products
5. Click on blue plus icon of More options
6. Click on the radio button: License Applications Product
7. Click on continue button
8. You'll get a list of all the individual products
9. Check the box on the left of the product you want to license
10. Click on Next button
11. Click on Submit button
12. You'll get a message: Successfully licensed the selected product(s). Click OK

That's it, you are done. You may want to regenerate all the forms of the newly licensed product just to make sure.
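If you want to double-check from the database side, a quick sanity query (a sketch; replace 'XX' with the short name of the product you licensed, and note that status 'I' means licensed/installed, 'S' shared and 'N' not licensed) is:

select fa.application_short_name, fpi.status, fpi.patch_level
from fnd_product_installations fpi, fnd_application fa
where fpi.application_id = fa.application_id
and fa.application_short_name = 'XX';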

Friday, May 9, 2008

ORA-06508 during patches which call worker fpainvcr.ldt

Today while applying a merged patch which contains 300 functional patches, the DBAs reported the following error:

ATTENTION: All workers either have failed or are waiting:

FAILED: file fpainvcr.ldt on worker 2.

ATTENTION: Please fix the above failed worker(s) so the manager can continue.

On checking the worker log and the request log they found:

Uploading from the data file /stage11i/ebiz/appl/fpa/11.5.0/patch/115/import/US/fpainvcr.ldt
Altering database NLS_LANGUAGE environment to AMERICAN
Dump from LCT/LDT files (/stage11i/ebiz/appl/fpa/11.5.0/patch/115/import/fpacriteria.lct(115.2), /stage11i/ebiz/appl/fpa/
11.5.0/patch/115/import/US/fpainvcr.ldt) to stage tables
Dump LCT file /stage11i/ebiz/appl/fpa/11.5.0/patch/115/import/fpacriteria.lct(115.2) into FND_SEED_STAGE_CONFIG
Dump LDT file /stage11i/ebiz/appl/fpa/11.5.0/patch/115/import/US/fpainvcr.ldt into FND_SEED_STAGE_ENTITY
Dumped the batch (FPA_OBJECTS_TL INVESTMENT_CRITERIA 1 , FPA_OBJECTS_TL INVESTMENT_CRITERIA 8 ) into FND_SEED_STAGE_ENTITY
Upload from stage tables
Error loading seed data for FPA_OBJECTS_TL: OBJECT = INVESTMENT_CRITERIA, ID = 1, ORA-06508: PL/SQL: could not find program unit being called
Error loading seed data for FPA_OBJECTS_TL: OBJECT = INVESTMENT_CRITERIA, ID = 2, ORA-06508: PL/SQL: could not find program unit being called
Error loading seed data for FPA_OBJECTS_TL: OBJECT = INVESTMENT_CRITERIA, ID = 3, ORA-06508: PL/SQL: could not find program unit being called
Error loading seed data for FPA_OBJECTS_TL: OBJECT = INVESTMENT_CRITERIA, ID = 4, ORA-06508: PL/SQL: could not find program unit being called
Error loading seed data for FPA_OBJECTS_TL: OBJECT = INVESTMENT_CRITERIA, ID = 5, ORA-06508: PL/SQL: could not find program unit being called
Error loading seed data for FPA_OBJECTS_TL: OBJECT = INVESTMENT_CRITERIA, ID = 6, ORA-06508: PL/SQL: could not find program unit being called
Error loading seed data for FPA_OBJECTS_TL: OBJECT = INVESTMENT_CRITERIA, ID = 7, ORA-06508: PL/SQL: could not find program unit being called
Error loading seed data for FPA_OBJECTS_TL: OBJECT = INVESTMENT_CRITERIA, ID = 8, ORA-06508: PL/SQL: could not find program unit being called

On checking the lct file, we found that it references a package called fpa_objects_pkg:

UPLOAD FPA_OBJECTS_TL
" begin

if (:UPLOAD_MODE = 'NLS') then
fpa_objects_pkg.TRANSLATE_ROW (
:ID,
:OBJECT,
:NAME,
:DESCRIPTION,
:OWNER );

We checked the status of fpa_objects_pkg and it was INVALID.
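The status can be checked with something like this (a sketch; widening the filter to status = 'INVALID' would also show any other objects the patch left invalid):

select owner, object_name, object_type, status
from dba_objects
where object_name = 'FPA_OBJECTS_PKG';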

After compiling this object:

alter package fpa_objects_pkg compile body;

We restarted the failed worker and it completed without errors.

Wednesday, May 7, 2008

Simple query to list locked objects

v$locked_object is the view which has a list of locked objects. For a long time I used this query to get a list of objects:

select object_name
from dba_objects
where object_id in
(
select object_id
from v$locked_object
)
The above query never completed, and I always ended up hitting Ctrl+C.

I rewrote the query by using the EXISTS clause instead of IN:

select owner,object_type,object_name
from dba_objects a
where exists
(
select object_id
from v$locked_object b
where a.object_id=b.object_id
)
/
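If you also want to see who is holding the locks, a natural extension of the same query (a sketch, joining v$locked_object to v$session on the SID) is:

select s.sid, s.serial#, s.username, o.owner, o.object_type, o.object_name, l.locked_mode
from v$locked_object l, dba_objects o, v$session s
where l.object_id = o.object_id
and l.session_id = s.sid
/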

The rewritten EXISTS query executes in seconds. I got hold of an old article from TechRepublic:

When coding a SQL statement with tables in master-detail relationships, it's common to have to decide whether to write the query using the WHERE EXISTS (. . .) clause or the WHERE value IN (. . .) clause. You may resist using WHERE EXISTS because it has the awkward syntax of returning a value, which you always ignore.

However, there's a difference when using rule-based optimization. You can determine the performance of a rule-based query by understanding which table is the driving table and how many rows each part returns.

When you write a query using the IN clause, you're telling the rule-based optimizer that you want the inner query to drive the outer query (think: IN = inside to outside). For example, to query the 14-row EMP table for the direct reports to the employee KING, you could write the following:

select ename from emp e
where mgr in (select empno from emp where ename = 'KING');

Here's the EXPLAIN PLAN for this query:

OBJECT OPERATION
---------- ----------------------------------------
SELECT STATEMENT()
NESTED LOOPS()
EMP TABLE ACCESS(FULL)
EMP TABLE ACCESS(BY INDEX ROWID)
PK_EMP INDEX(UNIQUE SCAN)

This query is virtually equivalent to this:

select e1.ename from emp e1,(select empno from emp where ename = 'KING') e2
where e1.mgr = e2.empno;

You can write the same query using EXISTS by moving the outer query column to a subquery condition, like this:

select ename from emp e
where exists (select 0 from emp where e.mgr = empno and ename = 'KING');

When you write EXISTS in a where clause, you're telling the optimizer that you want the outer query to be run first, using each value to fetch a value from the inner query (think: EXISTS = outside to inside).

The EXPLAIN PLAN result for the query is:

OBJECT OPERATION
---------- ----------------------------------------
SELECT STATEMENT()
FILTER()
EMP TABLE ACCESS(FULL)
EMP TABLE ACCESS(BY INDEX ROWID)
PK_EMP INDEX(UNIQUE SCAN)

This is virtually similar to the PL/SQL code:

set serveroutput on;
declare
  l_count integer;
begin
  for e in (select mgr, ename from emp) loop
    select count(*) into l_count from emp
    where e.mgr = empno and ename = 'KING';
    if l_count != 0 then
      dbms_output.put_line(e.ename);
    end if;
  end loop;
end;
/

To determine which clause offers better performance in rule-based optimization, consider how many rows the inner query will return in comparison to the outer query. In many cases, EXISTS is better because it requires you to specify a join condition, which can invoke an INDEX scan. However, IN is often better if the results of the subquery are very small. You usually want to run the query that returns the smaller set of results first.

Some people avoid the EXISTS clause because of the requirement to return a result from the query--even though the result is never used. Depending on personal style, people often use 'X,' 1, 0, or null. From looking at the EXPLAIN PLAN output, it appears that the optimizer throws out whatever value you enter and uses 0 all the time. Many developers get into the habit of always entering some constant value.

If you want to run your own tests, or see other examples, here are the two scripts I used:

REM -- explain.sql - view plan from PLAN_TABLE
set feedback off
set verify off
set pages 2000
column operation format a40
column object format a10

TTITLE * STATEMENT_ID = '&1' *
select object_name object,
lpad(' ',level-1)||operation||'('||options||')' operation
from plan_table
start with id = 0 and statement_id = '&1'
connect by prior id = parent_id and statement_id = '&1';

REM -- exists.sql - examples with EXPLAIN PLAN
REM -- IN vs. EXISTS

REM -- if you don't have a PLAN_TABLE, run ...
REM -- @?/rdbms/admin/utlxplan
alter session set optimizer_goal = rule;
truncate table plan_table;

REM -- find direct reports to KING
explain plan set statement_id = 'IN' for
select ename from emp e
where mgr in (select empno from emp where ename = 'KING');

explain plan set statement_id = 'JOIN-IN' for
select e1.ename from emp e1,(select empno from emp where ename = 'KING') e2
where e1.mgr = e2.empno;

explain plan set statement_id = 'EXISTS' for
select ename from emp e
where exists (select 0 from emp where e.mgr = empno and ename = 'KING');

explain plan set statement_id = '=' for
select ename from emp e
where mgr = (select empno from emp where ename = 'KING');

explain plan set statement_id = 'JOIN1' for
select e1.ename from emp e1,emp e2
where e1.mgr = e2.empno
and e2.ename = 'KING';

REM -- find employees with greater than average salaries
explain plan set statement_id = '>' for
select ename from emp e where e.sal > (select avg(sal) from emp);

explain plan set statement_id = 'JOIN2' for
select e1.ename from emp e1,(select avg(sal) sal from emp) e2
where e1.sal > e2.sal;

Tuesday, May 6, 2008

APPL_TOP snapshot feature

When a DBA goes into adadmin and runs an Update current snapshot, what actually happens? The APPL_TOP snapshot feature was introduced in AD.G. I am pasting some good information from the AD.G readme where this feature is explained:

This feature allows you to record the current set of files and file versions in your APPL_TOP using the "Maintain snapshot information" menu entry in the AD Administration "Maintain Applications Files" menu.

There are two types of APPL_TOP snapshots: current view snapshots and named snapshots. Current view snapshots are created once and updated when appropriate to maintain a consistent view of the APPL_TOP contents. Named snapshots are created once using AD Administration and not updated. Both store information about files, file versions, and bug fixes present in an APPL_TOP.

You can create as many named snapshots of each APPL_TOP as you want. You can also just create or update the current view snapshot. When creating a named snapshot for an APPL_TOP, the snapshot feature will first create or update the current view snapshot for that APPL_TOP. The bug fix information for a named snapshot is copied from the current view snapshot for that APPL_TOP.

You must run AD Administration "Maintain snapshot information" once for each APPL_TOP before you can apply any patch that contains a "compatible feature prereq" line on that APPL_TOP. Running AD Administration "Maintain snapshot information" for an APPL_TOP completely populates the APPL_TOP's current view snapshot. This complete information is required for the automatic prerequisite patch checking feature to give correct results.

AutoPatch automatically updates the list of file versions and bug fixes in the current view snapshot for each APPL_TOP as patches are applied to that APPL_TOP. The combination of running AD Administration "Maintain snapshot information" once for an APPL_TOP and AutoPatch's incremental updates ensures that the current view snapshot for a given APPL_TOP contains an accurate picture of the current set of files and bug fixes present in that APPL_TOP.

APPL_TOP snapshot information is stored in the AD_SNAPSHOTS, AD_SNAPSHOT_FILES, and AD_SNAPSHOT_BUGFIXES tables. There is currently no user interface available for viewing snapshot information. The ability to view and compare APPL_TOP snapshots is being considered for future versions of AD and Oracle Applications Manager 11i.

Some other tables which are referenced are:

SQL> select table_name
2 from dba_tab_columns
3 where column_name='FILE_VERSION_ID';

TABLE_NAME
------------------------------
AD_FILE_VERSIONS
AD_CHECK_FILES
AD_PTCH_HST_EXE_COP_TMP
AD_CHECK_FILE_TEMP
AD_SNAPSHOT_FILES
AD_PATCH_HIST_SNAPS_TEMP
AD_PROCESSED_FILES_TEMP
AD_SNAPSHOT_FILES_TEMP

along with the AD_FILES table.

To get the version of a file recorded in the snapshot tables, you can write a query that joins AD_SNAPSHOT_FILES, AD_FILES and AD_FILE_VERSIONS.
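Something along these lines should work (a sketch; I am assuming the usual FILE_ID join columns and that AD_SNAPSHOT_FILES carries a LATEST_FILE_VERSION_ID pointer to AD_FILE_VERSIONS, so verify the exact column names on eTRM for your release; the filename is just an example):

select af.app_short_name, af.subdir, af.filename, afv.version
from ad_snapshot_files asf, ad_files af, ad_file_versions afv
where asf.file_id = af.file_id
and asf.latest_file_version_id = afv.file_version_id
and af.filename = 'adrelink.sh';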

etrm.oracle.com gives some more information about the AD_SNAPSHOT_FILES table:

APPLSYS.AD_SNAPSHOT_FILES does not reference any database object

APPLSYS.AD_SNAPSHOT_FILES is referenced by following objects:

AD_FILE_SYS_SNAPSHOTS_PKG
AD_FILE_UTIL
AD_PATCH
FND_IMP_SFILEINFO_VL

Anand asked me an interesting question today (August 31, 2010) after reading this post: how can you run an update current view snapshot outside of adadmin? A search on My Oracle Support revealed the package

AD_FILE_SYS_SNAPSHOTS_PKG.update_current_view

which seems to be called by adadmin.

Monday, May 5, 2008

Oracle Database Lite (Olite)

In E-Business Suite, Olite is a technology component that comes into play if you are using Mobile Field Services.


Oracle Database Lite was developed from the ground up to be a small-footprint, zero-administration, yet powerful mobile or embedded database. Because it requires no administration and has an extremely lightweight footprint, it is ideal for developing applications for mass deployment to thousands of mobile or disconnected end-users. Oracle Database Lite is not a full-featured Oracle Enterprise Edition database. It is specifically optimized for mobile and embedded environments. Despite the small footprint, Oracle Database Lite still supports most large-DBMS features, such as transactions, row-level locking, SQL-92, and Java stored procedures and triggers.

A classic implementation of Oracle Database Lite is in a field technician application. For example, Company X has an Oracle Database Standard Edition in their datacenter. However, Company X field technicians are mobile sixty percent of the time without an internet connection. Oracle Database Lite allows Company X to subset the necessary field technician data and deploy the data to the technicians' laptop or PDA. The field technician is now able to be fully functional and productive while offline and while mobile. If the laptop or PDA is ever "connected", Oracle Database Lite will quickly and securely synchronize the data, and changes, with the Oracle Database Standard Edition in the datacenter. Oracle Database Lite is optimized for these types of scenarios and can efficiently scale to thousands of users/devices.

Oracle Database Lite includes two main components:

1. The client component, which runs on the mobile device or laptop to store the application data.
2. The server component (Mobile Server), which manages the synchronization and deployment of mobile applications.

Metalink Note 554931.1 lists the desupport notice for Olite versions up to 10.3, which is supported until December 2009, after which premier support ends.

QP_PERF_CTRL_PVT is invalid after maintenance pack

Depending on the version and patch status of your instance, you may find that QP_PERF_CTRL_PVT becomes invalid when you apply the maintenance pack patch 3480000. As per metalink note 559196.1, this object is no longer used as of 11.5.10 and can be safely dropped. So go ahead and get rid of it:

drop package QP_PERF_CTRL_PVT;
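For the cautious, before running the drop you can confirm that the package really is invalid and that nothing else depends on it (a sketch):

select object_type, status
from dba_objects
where object_name = 'QP_PERF_CTRL_PVT';

select owner, name, type
from dba_dependencies
where referenced_name = 'QP_PERF_CTRL_PVT';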

Friday, May 2, 2008

64-bit binaries of ASCP

All E-Business Suite application binaries are 32-bit. The only exceptions are the ASCP Memory Based Snapshot programs: MSCNWS64.exe and MSONWS64.exe. These are 64-bit binaries; they have been made 64-bit because of their memory-intensive nature.

I noticed that whenever we run adadmin to relink the MSC and MSO binaries, these two are never relinked. If you try to relink them manually through adrelink.sh force=y "MSC MSCNWS64.exe", it simply deletes the file and doesn't relink it. I asked Oracle about this in an SR and this was their answer:

QUESTION
=========
How to relink MSCNWS64.exe and MSONWS64.exe manually and through adadmin ? We have seen that these files are not relinked if we relink all MSC and MSO binaries through adadmin

ANSWER
=======
Hi Vikram,

Recompiling the 64 bit Memory based planner executables that you are asking about is not required and doesn't work as the code lines are all self contained within that executable and it doesn't use other files in other directories like other executables for recompiling. So basically if an install or patch is applied this executable is fully self contained in similar fashion as if you were running a Netscape browser which is also an executable netscape.exe - so these executables do not need to be compiled/recompiled.

DBUA is unable to connect to X11

In one of the iterations of the 10g upgrade, dbua wouldn't launch and gave this error:

Exception in thread "main" java.lang.InternalError: Can't connect to X11 window server using 'DB001:6' as the value of the DISPLAY variable.
[Loaded java.lang.StackTraceElement from /gpsess76/oracle/10.2.0/jdk/jre/lib/rt.jar]
at sun.awt.X11GraphicsEnvironment.initDisplay(Native Method)
at sun.awt.X11GraphicsEnvironment.(X11GraphicsEnvironment.java:134)
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:141)
at java.awt.GraphicsEnvironment.getLocalGraphicsEnvironment(GraphicsEnvironment.java:62)
at java.awt.Font.initializeFont(Font.java:308)
at java.awt.Font.(Font.java:344)
at oracle.ewt.lwAWT.LWComponent.(Unknown Source)
at oracle.sysman.assistants.util.wizard.WizardObject.(WizardObject.java:64)
at oracle.sysman.assistants.dbma.ui.DBMAWizard.(DBMAWizard.java:184)
at oracle.sysman.assistants.dbma.ui.UICompManager.(UICompManager.java:153)
at oracle.sysman.assistants.dbma.Dbma.getCompManager(Dbma.java:154)
at oracle.sysman.assistants.dbma.Dbma.execute(Dbma.java:98)
at oracle.sysman.assistants.dbma.Dbma.statusMain(Dbma.java:199)
at oracle.sysman.assistants.dbma.Dbma.main(Dbma.java:180)
[Loaded java.lang.Shutdown from /gpsess76/oracle/10.2.0/jdk/jre/lib/rt.jar]
[Loaded java.lang.Shutdown$Lock from /gpsess76/oracle/10.2.0/jdk/jre/lib/rt.jar]

The X environment was properly set, the DISPLAY variable was correct, and we were able to launch xclock from the same session without issues. We were also able to start dbua for other instances in the same vnc session. We followed metalink note 443693.1 (the note was available on March 24 and 25, 2008, but is no longer visible), which suggests modifying dbua: remove the '$JRE_OPTION' where the '-d64' flag is defined, and dbua will work fine using the default -d32 data model.

We removed this block from dbua as suggested in 443693.1:

JRE_OPTION=
if [ $PLATFORM = Solaris ]; then
  MACH_HARDWARE=`/bin/uname -i`
  case $MACH_HARDWARE in
    i86pc)
      LD_LIBRARY_PATH=/opt/ORCLcluster/lib:$LD_LIBRARY_PATH
      export LD_LIBRARY_PATH
      ;;
    *)
      JRE_OPTION=-d64
      LD_LIBRARY_PATH_64=/opt/ORCLcluster/lib:$LIB_DIR:$ORACLE_HOME/network/lib:$LD_LIBRARY_PATH_64
      export LD_LIBRARY_PATH_64
      ;;
  esac
fi

After this, DBUA launched fine. However, we got errors later in the upgrade process, like this one logged in the DBUA trace.log:

/dev11i/oracle/10.2.0/oui/lib/solaris64/liboraInstaller.so: wrong ELF class: ELFCLASS64 followed by a long list of java class errors.

DBUA jumped from 8% to 87% in 1 second after this error, and after discussions with Oracle we had to follow the manual route of upgrading through catupgrd.sql.

This morning we faced the same issue of DBUA refusing to launch on another instance. This time we did not hurry into a solution but waited and discussed. Subu asked us to compare against the instances where it was running fine. We checked ulimit -a on the working and non-working instances: on the instances where dbua was working, nofiles was set to 65536; on the instance where dbua was throwing the X11 error, nofiles was set to 2048. So we added this command to the .profile of the oracle user:

ulimit -n 65536

We logged in again as the oracle user and checked ulimit -a:
time(seconds) unlimited
file(blocks) unlimited
data(kbytes) unlimited
stack(kbytes) 8192
coredump(blocks) unlimited
nofiles(descriptors) 65536
memory(kbytes) unlimited

When we launched dbua after this, it came up without any errors and the issue was resolved. It seems that the 64-bit dbua process tries to open more than 2048 file descriptors, and unfortunately the 2049th one happens to be the X11 connection, which is why we were getting those errors. After raising nofiles to 65536, it was able to open everything it needed.
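For future diagnosis, a quick way to compare the limit with what a process actually has open (a sketch; pfiles is Solaris-specific, lsof is the rough equivalent on other platforms, and <pid_of_dbua> is a placeholder) is:

# soft limit on open file descriptors in the current shell
ulimit -n

# on Solaris, list the file descriptors a running process has open
pfiles <pid_of_dbua>

# where lsof is available, count them instead
lsof -p <pid_of_dbua> | wc -l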

Thursday, May 1, 2008

Disable retention on Workflow Queues

Ahmed Alomari, former head of the Oracle E-Business Suite performance group, suggested in his well-known presentation on Apps performance tuning that retention be disabled on the workflow queues.

SQL> select name, retention
from dba_queues
where owner = 'APPLSYS' and retention != '0';

NAME                           RETENTION
------------------------------ ----------
WF_IN                          604800
WF_OUT                         604800
WF_DEFERRED                    86400
WF_REPLAY_IN                   604800
WF_REPLAY_OUT                  604800
WF_NOTIFICATION_IN             86400
WF_NOTIFICATION_OUT            86400
WF_JAVA_DEFERRED               86400

8 rows selected.


SQL> exec DBMS_AQADM.ALTER_QUEUE(queue_name=>:b1,retention_time=>0);
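To apply this to all the APPLSYS workflow queues in one shot, a minimal PL/SQL sketch (test it on a non-production instance first) would be:

begin
  for q in (select name
            from dba_queues
            where owner = 'APPLSYS'
            and retention != '0') loop
    -- set retention_time to 0 so processed messages are not retained
    dbms_aqadm.alter_queue(queue_name     => 'APPLSYS.' || q.name,
                           retention_time => 0);
  end loop;
end;
/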

A search on metalink for the string DBMS_AQADM.ALTER_QUEUE returns three hits: 273848.1, 468650.1 and 358262.1. All of them are related to workflow performance.

Oracle Workflow accesses several tables that can grow quite large. Workflow leaves a lot of runtime data in certain workflow tables, which needs to be purged on a periodic basis. Since a number of modules make use of Workflow, the amount of obsolete runtime data retained in these tables is huge. The size of these tables and their indexes can adversely affect performance.

Will update more on this.