Just a few hours until my flight leaves (which means it’s time to finish packing soon), so I thought I’d post my agenda as it stands right now. There will be changes, for example to make time for Unconference sessions and Oracle Closed World :)
Sunday, September 19
13:00-13:45 Moscone West L2, Rm 2014 [UG] – [S318493] Performance-Tuning Web Applications
16:30-17:00 Moscone West L2, Rm 2010 [UG] – [S315685] Stay Away If You Are Technical: This Is Oracle Fusion Middleware for Business
Monday, September 20
11:00-12:00 Moscone South, Rm 309 [CS] – [S317241] Oracle Identity Management 11g Update and Overview
12:30-13:30 Moscone South, Rm 310 [CS] – [S317483] Oracle Access Manager 11g: Demonstrating New Features and Improved Integration
14:00-15:00 Moscone West L2, Rm 2014 [CS] – [S319049] Best Implementation Practices with Oracle Business Intelligence Publisher
15:30-16:30 Moscone South, Rm 309 [CS] – [S317242] Oracle Identity Manager 11g: Achieving New Highs for Cost-Efficiency and Agility
17:00-18:00 Moscone West L3, Rm 3024 [CS] – [S317063] Managing Oracle WebLogic Server: New Features and Best Practices
Tuesday, September 21
08:00-09:00 Hotel Nikko, Nikko Ballroom II [CS] – [S317471] Application-Aware Virtualization
11:30-12:30 Hilton San Francisco, Imperial Ballroom B [HOL] – [S318578] Advanced Web Service Development in Oracle WebLogic Server
12:30-13:30 Marriott Marquis, Golden Gate C3 [CS] – [S318085] Test to Production for Oracle Fusion Middleware
14:00-15:00 Moscone South, Rm 236 [CS] – [S314241] How Norwegian Labour and Welfare Consolidated on Oracle Database Machine
15:30-16:30 Moscone South, Rm 309 [CS] – [S317064] Oracle Identity Management Administration Best Practices
17:00-18:00 Moscone South, Rm 309 [CS] – [S317244] Enforcing Segregation-of-Duties Controls with Identity Management
Wednesday, September 22
10:00-11:00 Moscone West L2, Rm 2002 / 2004 [CS] – [S318137] Oracle Fusion Applications: Adoption and Deployment Overview
11:30-12:30 Moscone South, Rm 309 [CS] – [S317485] Oracle Adaptive Access Manager 11g Release 1 Overview
13:00-14:00 Moscone South, Rm 309 [CS] – [S317276] Building a Strong Foundation for Your Cloud with Identity Management
16:45-17:45 Moscone South, Rm 309 [CS] – [S317243] Complete Identity Access and Governance with Oracle Identity Analytics 11g
Thursday, September 23
09:00-10:00 Moscone South, Rm 310 [CS] – [S317487] End-to-End Secure Identity Propagation Available
10:30-11:30 Moscone West L3, Rm 3018 [CS] – [S317270] Service-Oriented Security: Simplifying Identity Management for Applications
12:00-13:00 Moscone South, Rm 309 [CS] – [S316829] Demystifying IdM: A Customer’s Guide to a Practical IdM Deployment Strategy
13:30-14:30 Moscone West L2, Rm 2024 [CS] – [S318133] Oracle E-Business Suite DBA Techniques: Install and Cloning Best Practices
15:30-16:30 Hotel Nikko, Nikko Ballroom III [CS] – [S317543] Service-Oriented Security 101
My OEL 5.3 server with Oracle hasn’t been behaving quite right lately, so all in all, the best choice at this time was to reinstall. At the same time, I wanted to redesign the disk / partition layout.
So here’s a quick recap of this installation.
First, the OS.
- Downloaded OEL 5.4 64-bit CDs 1, 2, 3, 4, and 6 from edelivery.oracle.com. (This server has no DVD player, and previous installations taught me that CD 5 wasn’t necessary for me.)
- Booted from the first disc, deleted all existing volume groups, and selected two (out of six) disks to be available for the OS install. (Which means Linux will put them together in one volume group.)
- Disabled SELinux for the time being. (To prevent some problems when configuring Oracle later, specifically the “cannot restore segment prot after reloc” error.)
- Created the oracle OS user during OS installation.
That went well enough, so now for some preparations.
- In one session: transfer the Oracle software to the new server (after mkdir /src and making oracle the owner of it):
rsync --progress linux.x64_11gR2_database_?of2.zip oracle@newserver:/src/
- In a parallel session: set up the Oracle public YUM repository.
- Install the required packages:
yum install oracle-validated
- Put the oracle user in the right OS groups:
usermod -g oinstall -G dba oracle
- Make sure there is a line in the hosts file for the host itself, so the Oracle installer can map the hostname to an IP.
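Such a hosts entry can look like the following sketch; the hostname and IP here are made up, not from my actual setup:

```shell
# /etc/hosts -- hostname and IP below are hypothetical; use your own
127.0.0.1      localhost.localdomain localhost
192.168.0.10   newserver.example.com newserver
```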
Create LVM volumes on the remaining disks; here are the commands used (pvcreate marks the partitions as physical volumes, vgcreate groups them into volume groups, lvcreate carves out 100 GB logical volumes, mke2fs puts ext3 filesystems on them, and e2label labels those filesystems):
pvcreate /dev/sdc1 /dev/sdd1
pvcreate /dev/sde1 /dev/sdf1
vgcreate OracleVol01 /dev/sde1 /dev/sdd1
vgcreate OracleVol02 /dev/sdc1 /dev/sdf1
lvcreate -L 100G -n u01 OracleVol01
lvcreate -L 100G -n u02 OracleVol01
mke2fs -j /dev/OracleVol01/u01
mke2fs -j /dev/OracleVol01/u02
e2label /dev/OracleVol01/u01 u01
e2label /dev/OracleVol01/u02 u02
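With the labels in place, the filesystems can be mounted by label. A possible setup, as a sketch only: the /u01 and /u02 mount points are my assumption here, not something from the steps above.

```shell
# Sketch, run as root: create mount points for the labeled filesystems.
mkdir -p /u01 /u02
# Matching /etc/fstab entries, mounting by the labels set with e2label:
#   LABEL=u01  /u01  ext3  defaults  1 2
#   LABEL=u02  /u02  ext3  defaults  1 2
mount -a
# Let the oracle user own the mount points.
chown oracle:oinstall /u01 /u02
```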
And then, install Oracle software, which is another story :)
Periods of silence shouldn’t last more than a few weeks…
Anyway, I’m busy planning aspects of an Oracle Identity Federation implementation, so that’s the current status. Further on, I’ll supply more details on what OIF is like to work with, but if anyone has specific OIF-related questions, I’d be glad to try to help.
Version used in our project, by the way, will be 11.1.1.
Or: the FND_USER table and blocking locks. (I don’t mean recover as in backup, by the way.)
Some disclaimers: this scenario happened on an 11.5.10 system, and it seems to have been a general problem, not specific to our site; but as I haven’t been able to research it further, I don’t know whether it could happen elsewhere as well. And even if I couldn’t think of a way around it, some of you might :) Furthermore, the following shouldn’t be a problem for sites using the newer User Management (UMX) HTML interface to manage EBS users instead of the older, Forms-based “Administer Users” interface. (UMX became available as a patch to 11.5.10, if I remember correctly.)
Since the only resolution I (and Oracle Support) know of is a shutdown abort of the running production EBS database, which is rather drastic, I’m noting down our preventive workaround, which banished this issue from our site.
The issue (which happened twice in a couple of weeks):
During normal operation in a non-peak period, typically in the late morning, the Oracle E-Business Suite system would suddenly not accept any more logins. Over the course of perhaps 15 minutes after this, existing users started having trouble accessing different HTML/JSP pages, and after that, the whole system effectively froze up.
The first time this happened, there was no time for troubleshooting, except to discover that there were lots of sessions waiting to get a lock on the FND_USER table, so we had to shut down the services and abort the database. A call to Oracle Support revealed that they did not know of any other way around our situation.
The second time, I was a little more prepared, and though I couldn’t kill the blocking sessions fast enough to resolve the situation, I did have time to detect a pattern (I’ll get to this in a second).
So, the issue in a little more detail: as is normal, each new user session wanted to get hold of its entry in the FND_USER table to update the TIMESTAMP field, and the already existing sessions also expect to be able to do this once in a while. This was not possible, since one session sat with a blocking lock on its own entry. I found this strange, and at least expected to be able to resolve everything by contacting the user to confirm I could kill the session, thereby letting the waiting sessions go on with their business.
But every time the blocker was killed, the next session waiting in line took over and did not release the lock. And since every user who experienced some kind of hang in their session restarted their browser and tried to log back in, the waiting line grew much more quickly than sessions could be killed off.
Since there was only a limited amount of time available to experiment with this, I still have no clear understanding of this behaviour. (I had to resort to bouncing/aborting the system on this occasion as well, since that can be done fairly quickly, albeit with the possible need for some cleanup work afterwards.)
But the pattern was simply this: someone with System Administrator rights had opened the FND_USER entry for the SYSADMIN user (in a Form, the correct way) to change its list of Responsibilities, and had left the Form window open long enough for another user to try the same thing, which was all that was needed. So if you have a test EBS system you’re not afraid to crash, this could be a useful exercise: open the SYSADMIN user, change, for example, an end date on one of the Responsibilities (back and forth), leave the form open, and have another user with the same rights attempt to change some other aspect of the SYSADMIN user. If the second session hangs, waiting for the first, have some other users try to log in and out normally, possibly starting up Forms. If you see the behaviour described above, well, then you know it needs to be prevented.
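While running such an exercise, it helps to watch the waiting line form. The following is only a sketch, and an assumption on my part: it presumes sqlplus access as a DBA and a 10g or newer database (the blocking_session column of v$session doesn’t exist before that).

```shell
# Sketch only: list sessions stuck waiting behind a blocker.
# Assumes sqlplus is on the PATH and DBA access is available.
sqlplus -s / as sysdba <<'EOF'
SELECT sid, serial#, username, blocking_session, seconds_in_wait
FROM   v$session
WHERE  blocking_session IS NOT NULL;
EOF
```

As described above, killing the blocker this way wasn’t enough in our case, since the next waiter simply took over the lock.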
It’s simple enough to avoid, of course. The only policy necessary at our site, put in place after the second occurrence of the issue, was: “Every change to EBS user properties must immediately be followed by closing the form in question.”
As I said, simple enough, and probably very obvious to most EBS sites. But users don’t necessarily think that way, figuring instead that the System is all-powerful and can protect itself against stuff like this. By the way, for those who are wondering, we did also take this opportunity to put in place similar policies for other forms than just the user management. Anyway, the issue never arose again.
I had a very irritating problem on one of my OEL 5 virtual machines, actually on my main Oracle 11g database test server, which is the last stop before production.
Googling did not solve this problem for me, since this error can apparently be caused by several things, so it was unresolved for many weeks, and I considered just creating another virtual machine instead, and reinstalling Oracle.
But this morning, I easily solved the problem, just by reading a man page a little more closely, so I thought I’d share my experience :)
For some unknown reason, I suddenly could not change the password of my oracle user, neither as root nor as the user itself: running passwd immediately gave me the message “passwd: Authentication token manipulation error”. I could still log in as the user, though, for example with su - oracle, or by ssh-ing in with public keys set up.
Google suggested this had to do with various file permission or PAM configuration issues that I knew were not relevant to my problem, partly because I always received this error before I got to type any new password for the user at all.
Solution (for me):
passwd (when run from the root account) also has the option -d, to delete (set to null) a user’s password. It occurred to me that this might work, since it might not involve any checks against the existing, somehow corrupt, password. And it did. So, for me, the 10-second solution was, as root:
passwd -d oracle
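To illustrate what -d does, here is a small simulation on a shadow-format line: the second field (the password hash) is simply emptied. The entry and hash below are made up for illustration; this does not touch the real /etc/shadow.

```shell
# Simulate the effect of `passwd -d` on a shadow-format entry:
# clear the second (hash) field, keeping the rest of the line intact.
echo 'oracle:$6$abc$def:18000:0:99999:7:::' \
  | awk -F: 'BEGIN{OFS=":"} {$2=""; print}'
# Prints: oracle::18000:0:99999:7:::
```

With the hash field empty, passwd no longer has an existing token to check against, which is presumably why the subsequent password change went through.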
Hope it might be helpful to others!
An interesting note from Steven Chan’s presentation today at Oracle OpenWorld:
Oracle Access Manager can now be used directly with Oracle E-Business Suite, without having to go through separate SSO.
But it won’t work for Oracle Portal or Discoverer yet.
Just a quick post from the Android app wp2go: at Larry Ellison’s keynote @ Oracle OpenWorld #oow09, the first thorough screenshots were finally released (actually, at the keynote we were shown the live demo; pretty impressive stuff).
Here’s a published (via Twitter, @eyemsusie) Flickr picture series:
I’m looking forward to working with this :)