Blog Posts


From time to time it's useful to set up security for certain applications and services directly within Apache, using standard Apache access control. This is most useful for restricting access to an application by IP address or domain.


This is typically done in a location or directory tag like so:

<Location /secure_content/>
Order Deny,Allow
Deny from all
Allow from .intranet.mycompany.com
</Location>

 

This effectively denies access to everyone, and only opens it up to computers on your company's intranet domain. One major problem with Oracle's out-of-the-box configuration is that Apache sits behind WebCache, and all requests are funneled through WebCache first. If you look at the Apache access logs, the IP address is the same for every request. This becomes a problem if you want to restrict something to local (on that server) access only and deny everyone else (as is the case with many web services). Because each request to Apache is indistinguishable, there is no way to secure it by default.

 

However, after digging around Metalink for a while, I found a very useful and undocumented Apache directive that addresses this issue (at least, I couldn't find it documented anywhere).


In order to have Apache process the actual client IP address, instead of the WebCache IP address, set this directive in httpd.conf:

UseWebCacheIp On

The values here are somewhat counter-intuitive. One would think On means use the WebCache IP address and Off means use the client's IP, but it's actually the opposite. Set to On, Apache uses the IP address supplied to WebCache (i.e., the client IP). Set to Off, Apache uses the IP address it was supplied with (which is always WebCache's IP address). So with this set to On, Apache will always see the actual client IP address and should be able to process those allow/deny statements properly.
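Putting the pieces together, a minimal httpd.conf fragment might look like the following sketch (the /internal_service/ path is a hypothetical placeholder):

```apache
# Make Apache see the original client IP instead of WebCache's
UseWebCacheIp On

# Restrict a local-only web service to requests from this server
<Location /internal_service/>
Order Deny,Allow
Deny from all
Allow from 127.0.0.1
</Location>
```

With the directive on, the Allow from 127.0.0.1 test is evaluated against the real client address rather than WebCache's address.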

 

You will also see valid client addresses in the Apache access logs now. One word of caution about that, though: if you are looking to mine data from the access logs, use the WebCache logs, as a properly functioning WebCache will prevent many requests from ever hitting Apache.


Siebel Knowledge Zone

 

Dear Partner,

 

The Oracle PartnerNetwork Enablement 2.0 team is delighted to announce the launch of the Siebel Knowledge Zone. The Zone is designed to accelerate your organization's enablement and provide new opportunities to collaborate with Oracle partners and employees.

 

The Siebel Knowledge Zone comprises solution-focused Oracle PartnerNetwork pages which provide product information, enablement tools, solution resources, social media tools, and links to detailed product information on the Oracle PartnerNetwork.

When you visit the Siebel Knowledge Zone, you can put the Oracle ecosystem to work for your business. Use the new social media tools (Oracle Mix, Blogs, Forums, Wikis) to Connect, Collaborate, and Participate.

 

*        Partner-to-Partner - discover complementary offerings that could ignite your business development.
*        Partner-to-Oracle - connect to executives and employees from across Oracle who can help you do business.
*        Partner-to-Customers - new, easy, immediate ways to start a conversation.

To stay up to date with Oracle activity in the Siebel Knowledge Zone, partners are encouraged to join the Siebel for Partners Mix Group.
Discussion areas include:

*        Oracle Siebel Partner Opportunities and Go-to-Market Discussions / Suggestions.
*        Oracle Siebel Knowledge Zone Topics & Suggestions.
*        OPN Program & Portal Discussions and more...

 

To participate in the Siebel for Partners Mix Group, your company must be an Oracle PartnerNetwork member in good standing.

*        Go to  http://mix.oracle.com.*
*        Select "Groups" from the Oracle Mix menu at the top and search for "Siebel for Partners".
*        Select "Siebel for Partners" from the search list.
*        Select "Join this Group" on the upper right side of the page.

* If you are not an Oracle Mix member, create an Oracle Mix profile at  http://mix.oracle.com by clicking on "Sign In" in the upper right corner.

 

 

Copyright © 2009, Oracle.
All rights reserved.


Oracle Corporation - Worldwide Headquarters, 500 Oracle Parkway, OPL -
E-mail Services, Redwood Shores, CA 94065, United States


Business Intelligence (BI) is a business management term which refers to applications and technologies used to gather, provide access to, and analyze data and information about an organization's operations.


Why Business Intelligence?

  • BI systems can help an organization develop a more consistent, data-based decision-making process for business decisions (avoid “guesswork”)
  • BI systems can enhance communication among departments, coordinate activities, and enable an organization to respond more quickly to changes
  • BI systems that are well-designed and properly integrated into a company’s processes can improve the company’s overall performance

Oracle Business Intelligence Suite Enterprise Edition (OBIEE) Design Principles
  • Unified Enterprise view of information
  • Unified Semantic view of information – model the complex information sources as a simple, understandable and logical business model
  • Real-time information access – allow users to combine historical and real-time information to get up-to-the minute view of the business
  • Proactive Intelligence facilities – sends alerts in response to business event
  • Pre-built Analytic applications
  • Hot-Pluggable – into any existing data sources, any pre-packaged applications, any security infrastructure without having to replace existing investment
  • Business process integration – integration between OBI and Workflow manager to help integrate business insight to drive Business process optimization

 

OBIEE Plus Suite Products

obiee_plus_products.png

Oracle BI Server is key in combining various data sources and hiding the complexities of the underlying architecture from the users.  The complex physical structure is converted to a business model and further simplified in the presentation layer.

*No data is stored on the BI Server*.

 

Think of the BI Server as a metadata convergence point.
Oracle BI Server metadata consists of three layers:

* Physical Layer: maps tables from various sources on a physical layer
* Business Model Layer: from the physical, the business models can be constructed (e.g. star schemas)
* Presentation Layer: simplified models that can be used and understood by business users.

 

bi_server.png

Oracle BI Server (Virtual Data Warehouse)

  • “Virtual Data Warehouse”, connects to source databases
  • Integrates incoming data in real-time
  • Translates SQL against business model into optimal SQL against physical data source
  • Own dialect of SQL
  • Common functionality across platforms
  • SQL pass-through possible
  • Caching, summary management
  • Security
  • Partly equivalent to Discoverer Server, partly to Oracle RDBMS EE
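To make the "virtual data warehouse" idea concrete, here is a rough sketch (all table and column names are hypothetical): a user issues a simple query against the business model, and the BI Server rewrites it into physical SQL against the underlying star schema.

```sql
-- Logical query a BI Answers user might issue against the business model
SELECT region_name, SUM(revenue)
FROM SalesSubjectArea
GROUP BY region_name;

-- Physical SQL the BI Server might generate against the actual source
-- (joins and physical names are hidden from the business user)
SELECT d.region_name, SUM(f.revenue_amt)
FROM sales_fact f
JOIN region_dim d ON f.region_id = d.region_id
GROUP BY d.region_name;
```

The point is that the user never sees the fact/dimension join; the Presentation Layer exposes only the simplified business model.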

Sometimes you need to deliver dashboards or custom reports, or be able to write back to the database, e.g. when you want to set next quarter's KPI targets based on this quarter's results. For this, Oracle has tools available (part of OBIEE) such as Business Intelligence Publisher, but the tools of choice will be BI Answers and BI Dashboards.  BI Answers allows users to change how reports are presented and manipulated to fit a customer's specific needs.  It also gives users the ability to export results to other formats, including MS Office Excel, with advantages over using pivot tables that read from cubes.

bi_dashboard.png
Additional components of OBIEE include Ad-hoc Analysis, Proactive Detection and Alerts, Disconnected Analytics, the MS Office Plug-in, Hyperion Financial Reporting, BI Briefing Books, Intelligent Caching Services, and the Multidimensional Calculation and Integration Engine, which will be covered in greater detail in additional posts.

 

As always, comments are welcome.


Compilation of useful PeopleSoft-related materials:

 

PeopleSoft Books:

PeopleSoft Enterprise HRMS and Campus Solutions 9.0 PeopleBooks:
http://download.oracle.com/docs/cd/B40039_02/psft/html/docset.html

 

Blogs:

Jim Holincheck (Gartner): http://blogerp.typepad.com/hcm_research

 

Professional Organizations:

PeopleSoft Higher Education Users Group:
http://www.heug.org/

 

Please add your PSFT reference materials.


Imagine you get a collection of emps back from a service, as defined by the following schema (from the DB adapter in this case):


   <xs:element name="EmpCollection"
                type="EmpCollection"/>
   <xs:element name="Emp" type="Emp"/>
   <xs:complexType name="EmpCollection">
      <xs:sequence>
         <xs:element name="Emp" type="Emp"
                     minOccurs="0"
                     maxOccurs="unbounded"/>
      </xs:sequence>
   </xs:complexType>
   <xs:complexType name="Emp">
      <xs:sequence>
         <xs:element name="comm" type="xs:decimal"/>
         <xs:element name="deptno" type="xs:decimal"/>
         <xs:element name="empno" type="xs:decimal"/>
         <xs:element name="ename" type="xs:string"/>
         <xs:element name="hiredate" type="xs:dateTime"/>
         <xs:element name="job" type="xs:string"/>
         <xs:element name="mgr" type="xs:decimal"/>
         <xs:element name="sal" type="xs:decimal"/>
      </xs:sequence>
   </xs:complexType>

 

 

and accordingly a message type


    <message name="EmpCollection_msg">
        <part name="EmpCollection"
              element="top:EmpCollection"/>
    </message>


The first step is to create a variable based on the message type that contains this collection, plus two counters: one (i) for the running index and one (n) for the length of the collection. All as shown below:


    <variable name="EmpQuerySelect_p_deptno_OutputVariable"
              messageType="ns1:EmpCollection_msg"/>
    <variable name="i" type="ns3:integer"/>
    <variable name="n" type="ns3:integer"/>


Step 2 consists of getting the count of nodes and assigning 1 to the counter:


    <assign name="prepare_loop">
      <copy>
        <from expression="number(1)"/>
        <to variable="i"/>
      </copy>
      <copy>
        <from expression="ora:countNodes(
              'EmpQuerySelect_p_deptno_OutputVariable',
              'EmpCollection','/ns2:EmpCollection/Emp')"
        />
        <to variable="n"/>
      </copy>
    </assign>

 

 

Step 3 is the while loop, which selects one node per iteration and copies it into a local element variable:


    <while name="While_1"
              condition="bpws:getVariableData('i') &lt;=
                         bpws:getVariableData('n')">
      <scope name="Scope_1">
        <variables>
          <variable name="selector" type="ns3:string"/>
          <variable name="element" element="ns2:Emp"/>
        </variables>
        <sequence name="Sequence_1">
          <assign name="Get_Element">
            <copy>
              <from expression="concat(
                   '/ns2:EmpCollection/Emp[',
                   string(bpws:getVariableData('i')),
                   ']')"/>
              <to variable="selector"/>
            </copy>
            <copy>
              <from expression="bpws:getVariableData
                 ('EmpQuerySelect_p_deptno_OutputVariable',
                  'EmpCollection',
                  bpws:getVariableData('selector'))"
              />
              <to variable="element" query="/ns2:Emp"/>
            </copy>
          </assign>
          <empty name="Do_stuff_with_element_Var"/>
          <assign name="Increment_index">
            <copy>
              <from expression="bpws:getVariableData('i')+1"/>
              <to variable="i"/>
            </copy>
          </assign>
        </sequence>
      </scope>
    </while>


Your loop over your collection is done.


This post is about having a BPEL process offer multiple operations, such as createCustomer, deleteCustomer, and so on.

 

Usually you get a WSDL from a service provider that already contains bindings and services (of course, someone already implemented it).

 

When you try to import it now as a client partner link (PLNK), the BPEL designer (and compiler) will complain. Why? Because you are going to be the service provider, not the service consumer.

 

In a nutshell: delete the binding(s) and service(s) sections in the WSDL and try again; it will work.

 

Now, the steps:

 

Start with a new BPEL project (type: empty process). Add a new partner link, name it client, and base it on your new WSDL (which is by now binding/service-less). Step one is done: the new face of your process contains all the nice operations defined in the WSDL.

 

Afterwards, create an initial pick activity (and flag it to create a new instance). From the pick, delete the onAlarm branch (as the pick is initiating), and for each operation add a new branch to do whatever the process requires. In each branch, choose an operation from the partner link and add the callbacks accordingly (invoke/reply, as defined).
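The resulting pick might look roughly like this sketch (the port type, operation, and variable names are hypothetical placeholders, not taken from a real WSDL):

```xml
<pick name="Receive_Operation" createInstance="yes">
  <onMessage partnerLink="client" portType="tns:CustomerService"
             operation="createCustomer" variable="createInput">
    <sequence>
      <!-- handle createCustomer here, then invoke/reply as defined -->
      <empty name="Handle_Create"/>
    </sequence>
  </onMessage>
  <onMessage partnerLink="client" portType="tns:CustomerService"
             operation="deleteCustomer" variable="deleteInput">
    <sequence>
      <!-- handle deleteCustomer here -->
      <empty name="Handle_Delete"/>
    </sequence>
  </onMessage>
</pick>
```

One onMessage branch per WSDL operation; whichever message arrives first creates the instance and routes into its branch.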


Step 1:
Create a user in OID using OIDDAS, e.g. “admin_user”.
Step 2:
Create a privilege group named “AdminPrivGroup” in OID. Make this group available as a role in OID using OIDDAS.
Step 3:
Add the user created in Step 1 to the role created in Step 2.
Step 4:
Log in to the SOA Suite 10.1.3.1 mid-tier and edit the following file:
$ORACLE_HOME/j2ee/oc4j_soa/application-deployments/orabpel/admin/orion-web.xml

Add the following lines inside <orion-web-app>:
<security-role-mapping name="ConsolePrivGroupRole">
<group name="AdminPrivGroup" />
</security-role-mapping>

Step 5:

Edit $ORACLE_HOME/j2ee/oc4j_soa/applications/orabpel/admin/WEB-INF/web.xml and make the following changes. Add <auth-constraint> inside <security-constraint> as shown below:

 

a) <security-constraint>
...
<auth-constraint>
<role-name>
AdminPrivGroup
</role-name>
</auth-constraint>
</security-constraint>


b) Add <login-config> inside <web-app>
<login-config>
<auth-method>
BASIC
</auth-method>
<realm-name>
DEFAULT_REALM_NAME
</realm-name>
</login-config>

c) Provide the <security-role> inside <web-app> as shown below:
<security-role>
<description>
BPEL PM User
</description>
<role-name>
AdminPrivGroup
</role-name>
</security-role>


We know that Oracle BPEL PM comes with a domain “default”. The roles (which are available in OID) related to this “default” domain include:

BPMDefaultDomainAdmin
This role is to control the access to the “default” domain

BPMSystemAdmin
This role is to control the access to the entire BPEL PM including the “default” domain and all other custom domains

 

I. Steps to create a Custom Domain

1. Login to Oracle BPEL PM as BPEL Administrator
2. Click on BPEL Domains & click on “Create New BPEL Domain”
3. Enter the Domain Id as “custom”. Please note that according to Note 406979.1, if the domain id contains capital letters you get a "file not found" error when logging into the BPEL console.
4. Click on Create to complete the Custom Domain Creation

 

II. Steps to allow access to the Custom Domain (custom)

1. Create a new user using OIDDAS by the name ‘custom’
2. Create a new OID group called “BPMcustomDomainAdmin"
3. Add the above-created user to this group
4. Login to the SOA Suite mid-tier & navigate to $ORACLE_HOME/j2ee/oc4j_soa
5. Grant permissions to the role created by running the command as shown below

java -Xbootclasspath/a:../../bpel/lib/orabpel-boot.jar -jar ../home/jazn.jar -user oc4jadmin -password bpel123 -grantperm DEFAULT_REALM_NAME  -role BPMcustomDomainAdmin com.collaxa.security.DomainPermission custom all

6. Grant System Administrator privileges by running the following command

java -Xbootclasspath/a:../../bpel/lib/orabpel-boot.jar -jar ../home/jazn.jar -user oc4jadmin -password bpel123 -grantperm DEFAULT_REALM_NAME -role BPMcustomDomainAdmin com.collaxa.security.ServerPermission server all

Note:
As per Note 403225.1, the user ‘custom’ or group BPMcustomDomainAdmin gets "all or nothing" privileges to the "custom" domain. In 10.1.3 it is not possible to use finer-grained actions like "read-only", "update-also", etc.

1. You can grant access to domains to a selected pool of users.
2. You can't control access at finer-grained levels.


If Oracle Application Server Metadata Repository is not registered with Oracle Internet Directory (OID), you need to unlock the schema password first.

Step 1: Unlock the schema and set a new password

Start SQL*Plus:
$ sqlplus /nolog
SQL> CONNECT / AS SYSDBA
SQL> ALTER USER orabpel IDENTIFIED BY orabpel ACCOUNT UNLOCK;

orabpel is the default password (or welcome1 in the previous version).

SQL> ALTER USER orabpel IDENTIFIED BY <new_orabpel_password>;

Step 2: Change the password in Oracle Internet Directory (OID)

$ORACLE_HOME/bin/ldapsearch -h oid_host -p oid_port -D "cn=orcladmin" -w orcladmin_passwd -b "orclresourcename=ORABPEL, orclreferencename=oid_global_db_name, cn=ias infrastructure databases, cn=ias, cn=products, cn=oraclecontext" -s base "objectclass=top" orclpasswordattribute

Example:
$ORACLE_HOME/bin/ldapsearch -h sti6rb03.idc.oracle.com -p 389 -D "cn=orcladmin" -w welcome1 -b "orclresourcename=ORABPEL, orclreferencename=orcloid.idc.oracle.com, cn=ias infrastructure databases, cn=ias, cn=products, cn=oraclecontext" -s base "objectclass=top" orclpasswordattribute

Step 3: Change the OC4J JDBC data sources in OC4J_SOA

Log in to the Application Server Console as ‘oc4jadmin’ and navigate to OC4J_SOA. Click the Administration tab -> JDBC Resources.

Click the BPELPM_CONNECTION_POOL link. In the credentials, provide the new password after clicking the ‘Use Cleartext Password’ radio button.



10g Snapshot Backup in Oracle

Posted by Community Admin Jan 7, 2009

There are a number of software and hardware technologies out there that enable snapshot backups, ranging from Netapp and EMC disk array devices to snapshot-capable file systems.

 

These technologies can represent a convenient way to do a backup and, in some cases depending on the technology, a very quick restore of large databases. If these snapshots are performed on an active database, it is probably a good idea to put the data files in hot backup mode in order to freeze the data file headers. Before Oracle 10g, this had to be done at the tablespace or data file level, which could take a long time in some cases and introduce additional risk.

 

However, as of 10g you can put the ENTIRE database in hot backup mode using the command ALTER DATABASE BEGIN BACKUP;. No doubt this was introduced to make Oracle-based snapshot backups easier (error checking, controlfile issues, etc.).

 

To take a database out of hot backup mode, use ALTER DATABASE END BACKUP;. This command was actually introduced in Oracle 9i, but it only worked when the database was mounted. The reason for this command was to facilitate a quick database start in case the database crashed while doing a snapshot backup or running a traditional backup.
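A snapshot backup using these commands might be sequenced roughly as follows (the snapshot itself is taken with whatever vendor tool is in use; it appears here only as a comment):

```sql
-- Freeze data file headers across the whole database (10g and later)
ALTER DATABASE BEGIN BACKUP;

-- ... trigger the storage-level snapshot here (Netapp, EMC, etc.) ...

-- Return the database to normal operation
ALTER DATABASE END BACKUP;

-- If the instance crashed mid-backup, 9i and later allow the same
-- statement with the database mounted to enable a quick open:
-- ALTER DATABASE END BACKUP;
```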


Copy datafiles to another mountpoint.

Copy init.ora file to $ORACLE_HOME with user oratest or oraprod

  1. Generate the cloning script with the RESETLOGS option and the new database name.
  2. Make a new entry in /etc/oratab with the new database name.
  3. Clone the database.
  4. ALTER DATABASE OPEN RESETLOGS;
  5. Add a temp file to the temp tablespace.
  6. From the Oracle 10g home bin directory, run dbua.
  7. Preserve the current db settings.
  8. To check the status of the upgrade, run the following query:

select comp_name, status, version from dba_registry;

Check all the components. Compare them to the original database's components and make sure that all options that are ON are upgraded accordingly.


1. For this installation, you need either the DVDs or the DVD images downloaded from OTN. Optionally, open a Metalink SR to request the media.  From the directory where the DVD files were unzipped, open a terminal window and enter the following: ./runInstaller.sh

 

 



2. Select product - Oracle Database 11g. Make sure the product is selected and click Next.

3. Select basic installation with a starter database. Enter orcl for the Global Database Name and oracle for Database Password and Confirm Password. Then, click Next.

4. The installer now verifies that the system meets all the minimum requirements for installing and configuring the chosen product. Please correct any reported errors before continuing. When the check successfully completes, click Next.

5. Oracle Configuration Manager allows you to associate your configuration information with your Metalink account. You can choose to enable it on this window. Then, click Next.

6. Review the Summary window to verify what is to be installed. Then, click Install.

7. The Configuration Assistants window appears.

8. Your database is now being created. Once the database is created, select the users you want to unlock.

9. Execute orainstRoot.sh and root.sh as root;

10. Open a terminal window:

# su -

# cd /home/oracle/oraInventory

# ./orainstRoot.sh

# cd /home/oracle/product/11.1.0

# ./root.sh

# exit

# exit

11. Go back to Installer screen and click OK;

12. Click Exit; click Yes to confirm;

13. Open Firefox and go to https://<instancename>:1158/em

Accept the certificate prompt and log in as system/oracle.

14. Database Control Home Page appears.


Ever since Oracle released their database, DBAs have been finding it increasingly difficult to manage large numbers of database files while keeping them performing optimally. In addition, maintaining the redundancy of those data files at their expected levels poses a challenge. Over the decades, many hardware vendors have provided various storage solutions, but these solutions have yet to meet the performance levels required by data files today. To meet these needs, Oracle developed an exclusive software solution, ASM. ASM is an integrated file system and volume manager expressly built for Oracle database files. So, what is the big advantage ASM provides? Simply put, DBAs can manage a small number of disk groups rather than a large number of data files.

A disk group is a set of disks managed together as a unit. How can this optimize performance? ASM automatically creates data files and uses Stripe And Mirror Everything (SAME) technology to stripe and mirror them evenly across the disks. Moreover, ASM stripes data at the file level using a 1 MB allocation unit, rather than at the volume level. ASM also mirrors the data allocation units on different disks.

In the event of a disk failure, ASM is able to recover the data automatically by evenly redistributing the data over the remaining disks in the disk group. ASM can also fully recover the redundancy of data depending upon the remaining space in the disk group.

Nowadays, disks are so advanced that it is highly unlikely all disks in a disk group will fail at the same time. Since ASM can tolerate a couple of disk failures, the disk group will not lose its data. More surprisingly, most of the failure situations are taken care of by ASM automatically. The DBA can even add or remove a disk from a disk group while the database is online. ASM works well with single or clustered databases too. Multiple databases can also use the same ASM storage arrays at the same time.

ASM internally uses Oracle’s standard file architecture, called Oracle Managed Files (OMF), to create and delete files. It eliminates the need for the DBA to directly manage the operating system files within the Oracle database. ASM inherently supports very large files because any disk group can contain a maximum of 10,000 disks. ASM not only makes life easier for the DBA, but also cuts down on overall resource costs and maximizes disk performance.


Configuration below uses IBM System Storage DS8000

Set up storage pool (extent pool)


To create the extent pool:
1. Create an array.
dscli> mkarray -dev IBM.1750-13AB73A -raidtype 5 -arsite IBM.1750-13AB73A/S1

2. Create one fixed block from one array.
dscli> mkrank -dev IBM.1750-13AB73A -array A0 -stgtype fb

3. Create a fixed block storage type extent pool.
dscli> mkextpool -rankgrp 0 -stgtype fb ora_RAC

4. Assign an unassigned rank to an extent pool.
dscli> chrank -extpool p0 r0

5. Display a list of array sites and status information.
dscli> lsarraysite
arsite   DA Pair     dkcap (10^9B) State     Array
====================================================
IBM.1750-13AB73A/S1 IBM.1750-13AB73A/0 146.0 Assigned
IBM.175013AB73A/A0

6. Display a list of defined ranks in a storage image and status information.
dscli> lsrank
ID   Group   State   datastate   Array RAIDtype   extpoolID   stgtype
=====================================================
IBM.1750-13AB73A/R0 0 Normal Normal IBM.1750-13AB73A/A0 5
IBM.1750-13AB73A/P0 fb

List the extent pool
dscli> lsextpool
Name   ID   stgtype   rankgrp   status   availstor (2^30B)
%allocated  available  reserved  numvols
====================================================
ora_RAC   IBM.1750-13AB73A/P0 fb   0   below   48   81
48  0  28

Make volume group and LUNs


To create volume group and LUNs:
1. Create a volume group in a storage image.
dscli> mkvolgrp -type scsimask Aix_oracle
This creates a volume group named Aix_oracle and assigns it an identifier; in this case the identifier is v12.
2. Create an open systems fixed block volume in a storage image.
dscli> mkfbvol -extpool p0 -cap 1 -volgrp v12 -name oracle_#h 0210-0213
(1 GB LUNs for CRS OCR and vote disks, 4 total)
dscli> mkfbvol -extpool p0 -cap 10 -volgrp v12 -name oracle_#h 0410-0419
(10 GB LUNs for ASM disks, 10 total)
3. List the resulting volume group and its members.
dscli> showvolgrp v12
Name Aix_oracle
ID IBM.1750-13AB73A/V12
Type SCSI Mask
Vols 0210 0211 0212 0213 0410 0411 0412
0413 0414 0415 0416 0417 0418 0419

Make host connection and LUN assignments to nodes


Each node has two HBAs, i.e. there will be two host connections per node. To make the host connections:
1. Make an I/O port and host connect configuration.

dscli> mkhostconnect -dev IBM.1750-13AB73A -wwname 10000000c92ce4fd -profile "IBM pSeries -AIX" -volgrp v12 node1_h0
dscli> mkhostconnect -dev IBM.1750-13AB73A -wwname 10000000c92ce46e -profile "IBM pSeries -AIX" -volgrp v12 node1_h1
dscli> mkhostconnect -dev IBM.1750-13AB73A -wwname 10000000c92ceb85 -profile "IBM pSeries -AIX" -volgrp v12 node2_h0
dscli> mkhostconnect -dev IBM.1750-13AB73A -wwname 10000000c92ce23c -profile "IBM pSeries -AIX" -volgrp v12 node2_h1
dscli> mkhostconnect -dev IBM.1750-13AB73A -wwname 10000000c930e637 -profile "IBM pSeries -AIX" -volgrp v12 node3_h0
dscli> mkhostconnect -dev IBM.1750-13AB73A -wwname 10000000c930e59b -profile "IBM pSeries -AIX" -volgrp v12 node3_h1
dscli> mkhostconnect -dev IBM.1750-13AB73A -wwname 10000000c930e45a -profile "IBM pSeries -AIX" -volgrp v12 node4_h0
dscli> mkhostconnect -dev IBM.1750-13AB73A -wwname 10000000c930e698 -profile "IBM pSeries -AIX" -volgrp v12 node4_h1


2. Display a list of host connections.
dscli> lshostconnect

3. Storage allocation on hosts
Use Emulex LP9002 HBAs with the following parameter settings:
- Setting up the “Fast I/O failure” supports faster failover to the alternate path.
- Dynamic tracking logic is called when the adapter driver receives an indication from the switch that there has been a link event involving a remote storage device port

These features should be set on all fscsi controllers in an AIX host as follows:
1. Change the characteristics of a device
$ chdev -l fscsi0 -a fc_err_recov=fast_fail
$ chdev -l fscsi0 -a dyntrk=yes
2. Display attribute characteristics
$ lsattr -El fscsi0
attach switch How this adapter is CONNECTED False
dyntrk yes Dynamic Tracking of FC Devices True
fc_err_recov fast_fail FC Fabric Event Error RECOVERY Policy True
scsi_id 0xa1900 Adapter SCSI ID False
sw_fc_class 3 FC Class for Fabric True

3. Display information about devices in the Device Configuration database
$ lsdev |grep hdisk
hdisk0 Available 1S-08-00-8,0 16 Bit LVD SCSI Disk Drive (internal disk)
hdisk1 Available 1S-08-00-9,0 16 Bit LVD SCSI Disk Drive (internal disk)
hdisk2 Defined 1H-08-01 IBM MPIO FC 2107
hdisk3 Defined 1H-08-01 IBM MPIO FC 2107
hdisk4 Defined 1H-08-01 IBM MPIO FC 2107
hdisk5 Defined 1H-08-01 IBM MPIO FC 2107
hdisk6 Defined 1H-08-01 IBM MPIO FC 2107

4. Display the name, location, and description of each device found in the current configuration
$ lscfg -vl hdisk2
$ lscfg -vl hdisk2 | grep Serial
Serial Number...............75023012100

5. List major and minor numbers for each host
$ ls -l hdisk2
brw------- 1 root system 25, 7 Jul 20 16:57 hdisk2

Use SDDPCM to manage the fiber connections


SDDPCM provides commands to display the status of adapters used to access managed devices, to display the status of the devices that the device driver manages, or to map supported storage MPIO devices or paths to a supported storage device location.

1. Display the SDDPCM path information.
$ pcmpath query device
DEV#: 2 DEVICE NAME: hdisk2 TYPE: 2107900 ALGORITHM: Load
Balance
SERIAL: 75023012100
==================================================
Path# Adapter/Path Name State Mode Select Errors
0 fscsi0/path0 OPEN NORMAL 171652 0
1 fscsi0/path1 OPEN NORMAL 171502 0
2 fscsi1/path2 OPEN NORMAL 171440 0
3 fscsi1/path3 OPEN NORMAL 171543 0

Create Cluster disks using DS8000 Volumes


1. Create the shared volume using DS8000
2. Assign these volumes for the CRS OCR and vote disks.
3. Create the major and minor numbers and the character device of those disks.
$ chdev -l hdisk2 -a reserve_policy=no_reserve
$ chdev -l hdisk3 -a reserve_policy=no_reserve
$ mknod /dev/ocr_disk c 25 7
$ mknod /dev/vote_disk c 25 9
4. Set proper permission and access level on those shared volumes.
$ chown oracle:dba /dev/ocr_disk
$ chown oracle:dba /dev/vote_disk
$ chmod 660 /dev/ocr_disk
$ chmod 660 /dev/vote_disk
5. During the CRS installation, select the external redundant disks to be used as the OCR and voting disks.

Prepare DS8000 Volumes for ASM


1. Create the shared volume using DS8000
2. Assign these volumes for the ASM disks.
3. Create a major and minor numbers and a character device of those disks.
$ chdev -l hdisk6 -a reserve_policy=no_reserve
$ chdev -l hdisk7 -a reserve_policy=no_reserve
$ mknod /dev/asm_disk1 c 25 2
$ mknod /dev/asm_disk2 c 25 13
4. Set proper permission and access level on those ASM disks.
$ chown oracle:dba /dev/asm_disk1
$ chown oracle:dba /dev/asm_disk2
$ chmod 660 /dev/asm_disk1
$ chmod 660 /dev/asm_disk2
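Once the character devices are in place, an ASM disk group can be created from them. A sketch (the disk group name and redundancy level are illustrative choices, run from the ASM instance as SYSDBA):

```sql
-- With normal redundancy and two disks, ASM places each disk in its
-- own failure group and mirrors allocation units between them
CREATE DISKGROUP oradata NORMAL REDUNDANCY
  DISK '/dev/asm_disk1', '/dev/asm_disk2';
```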

You'll install the software in the Oracle Clusterware Home, Oracle ASM Home and Oracle Home on each node for redundancy and higher availability.

1. Preliminary Installation

Perform a full backup

Backup your Oracle RAC 10g environment before upgrading to Oracle RAC 11g.

Install additional software packages

Install the following packages as the root user if they are not already installed on the RAC nodes. These packages can be extracted from Enterprise-R5-GA-Server-i386-disc2.iso and Enterprise-R5-GA-Server-i386-disc3.iso.

  1. compat-libstdc++-33-3.2.3-61.i386.rpm
  2. elfutils-libelf-devel-0.125-3.el5.i386.rpm
  3. gcc-4.1.1-52.el5.i386.rpm
  4. gcc-c++-4.1.1-52.el5.i386.rpm
  5. glibc-devel-2.5-12.i386.rpm
  6. libaio-devel-0.3.106-3.2.i386.rpm
  7. libstdc++-devel-4.1.1-52.el5.i386.rpm
  8. sysstat-7.0.0-3.el5.i386.rpm
  9. unixODBC-2.2.11-7.1.i386.rpm
  10. unixODBC-devel-2.2.11-7.1.i386.rpm

After extracting the packages execute the command below as the root user.

# ls -1
elfutils-libelf-devel-0.125-3.el5.i386.rpm
libaio-devel-0.3.106-3.2.i386.rpm
unixODBC-2.2.11-7.1.i386.rpm
unixODBC-devel-2.2.11-7.1.i386.rpm
#
# rpm -Uvh *.rpm
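To confirm that nothing from the package list is missing, the required names can be diffed against the installed set. This is a minimal sketch (the `missing_packages` helper is illustrative, not an Oracle-supplied tool) that reads installed package names on stdin, e.g. from `rpm -qa --qf '%{NAME}\n'`.

```shell
# Report which of the required packages (by name) are missing, given a
# list of installed package names on stdin.
required="compat-libstdc++-33 elfutils-libelf-devel gcc gcc-c++ glibc-devel
libaio-devel libstdc++-devel sysstat unixODBC unixODBC-devel"

missing_packages() {            # reads installed names from stdin
  installed=$(cat)
  for p in $required; do
    echo "$installed" | grep -qx "$p" || echo "$p"
  done
}
# Usage on a RAC node: rpm -qa --qf '%{NAME}\n' | missing_packages
```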

Verify kernel parameters

The minimum kernel parameter requirements are listed below. If necessary, set the appropriate parameters in /etc/sysctl.conf on both nodes.

kernel.shmall                = 2097152
kernel.shmmax                = 2147483648
kernel.shmmni                = 4096
kernel.sem                   = 250 32000 100 128
fs.file-max                  = 65536
net.ipv4.ip_local_port_range = 1024 65000
net.core.rmem_default        = 4194304
net.core.rmem_max            = 4194304
net.core.wmem_default        = 262144
net.core.wmem_max            = 262144
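Rather than eyeballing each running value, the single-valued parameters can be compared against the minimums above. A hedged sketch (the `check_param` helper is illustrative; multi-valued parameters such as kernel.sem and net.ipv4.ip_local_port_range still need a manual look):

```shell
# Flag a kernel parameter whose current value is below the documented
# minimum. Current values come from `sysctl -n <name>` on a real node.
check_param() {                 # check_param name minimum current
  name=$1; min=$2; cur=$3
  if [ "$cur" -lt "$min" ]; then
    echo "$name too low: $cur < $min"
  fi
}
# Usage on a node: check_param kernel.shmmni 4096 "$(sysctl -n kernel.shmmni)"
```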

2. Upgrade Oracle Clusterware

Upgrade Oracle Clusterware to version 10.2.0.3

Prior to upgrading to Oracle RAC 11g, Oracle Clusterware must be at least version 10.2.0.3, or 10.2.0.2 with CRS Bundle Patch #2 (reference Bug 5256865), if you would like to perform a rolling upgrade. The 10.2.0.3 patchset (5337014) can be downloaded from Oracle Metalink.

Refer to Oracle Metalink Note 419058.1 for information on the Oracle 10.2.0.3 patch set for Linux x86.

merlin1-> crsctl query crs activeversion
CRS active version on the cluster is [10.2.0.3.0]

Oracle Clusterware pre-installation checks

Cluster Verification Utility (CVU) reduces the complexity and time it takes to install RAC. The tool scans all the required components in the cluster environment to ensure all criteria are met for a successful installation.

Download and uncompress the Oracle Clusterware 11.1.0.6 software from OTN to a temporary directory and execute runcluvfy.sh.

/stage/clusterware/runcluvfy.sh stage -pre crsinst -n all -verbose > /tmp/prechecks.log

Verify all pre-requisites are met. You can ignore the "Package existence check failed" message for openmotif-2.2.3-3.RHEL3.

Stop all database resources.

merlin1-> srvctl stop database -d devdb
merlin1-> srvctl stop asm -n merlin1
merlin1-> srvctl stop asm -n merlin2
merlin1-> srvctl stop nodeapps -n merlin1
merlin1-> srvctl stop nodeapps -n merlin2
merlin1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.devdb.db   application    OFFLINE   OFFLINE
ora....b1.inst application    OFFLINE   OFFLINE
ora....b2.inst application    OFFLINE   OFFLINE
ora....SM1.asm application    OFFLINE   OFFLINE
ora....N1.lsnr application    OFFLINE   OFFLINE
ora....in1.gsd application    OFFLINE   OFFLINE
ora....in1.ons application    OFFLINE   OFFLINE
ora....in1.vip application    OFFLINE   OFFLINE
ora....SM2.asm application    OFFLINE   OFFLINE
ora....N2.lsnr application    OFFLINE   OFFLINE
ora....in2.gsd application    OFFLINE   OFFLINE
ora....in2.ons application    OFFLINE   OFFLINE
ora....in2.vip application    OFFLINE   OFFLINE

Prepare the Oracle Clusterware Home for upgrade

Execute the preupdate.sh script on each node to prepare the clusterware home for upgrade. The script stops the Oracle Clusterware stack and changes the permission of files in the Oracle Clusterware Home directory.

As the root user on each node,

# cd /stage/clusterware/upgrade
# ./preupdate.sh -crshome /u02/crs/oracle -crsuser oracle
Shutting down Oracle Cluster Ready Services (CRS):
Stopping resources. This could take several minutes.
Successfully stopped CRS resources.
Stopping CSSD.
Shutting down CSS daemon.
Shutdown request successfully issued.
Shutdown has begun. The daemons should exit soon.
Checking to see if Oracle CRS stack is down...
Oracle CRS stack is down now.

Upgrade the Oracle Clusterware

You are only required to run the Oracle Universal Installer (OUI) on one node; the OUI automatically installs the software on every node in the cluster.

As the oracle user on merlin1,

merlin1-> /stage/clusterware/runInstaller
  1. Welcome: Click on Next.
  2. Specify Home Details: Verify the correct CRS_Home directory (/u02/crs/oracle) is displayed.
  3. Specify Hardware Cluster Installation Mode: Verify all nodes are selected.
  4. Product-Specific Prerequisite Checks: Verify overall result is successful.
  5. Summary: Click on Install.
  6. Execute Configuration scripts: Execute the scripts below as the root user sequentially, one at a time. Do not proceed to the next script until the current script completes.
    1. Execute /u02/crs/oracle/install/rootupgrade on merlin1.
    2. Execute /u02/crs/oracle/install/rootupgrade on merlin2.

On merlin1,

# /u02/crs/oracle/install/rootupgrade
Checking to see if Oracle CRS stack is already up...

copying ONS config file to 11.1 CRS home
/bin/cp: `/u02/crs/oracle/opmn/conf/ons.config' and `/u02/crs/oracle/opmn/conf/ons.config' are the same file
/u02/crs/oracle/opmn/conf/ons.config was copied successfully to /u02/crs/oracle/opmn/conf/ons.config
WARNING: directory '/u02/crs' is not owned by root
WARNING: directory '/u02' is not owned by root
Oracle Cluster Registry configuration upgraded successfully
Adding daemons to inittab
Attempting to start Oracle Clusterware stack
Waiting for Cluster Synchronization Services daemon to start
Cluster Synchronization Services daemon has started
Waiting for Event Manager daemon to start
Event Manager daemon has started
Cluster Ready Services daemon has started
Oracle CRS stack is running under init(1M)
clscfg: EXISTING configuration version 3 detected.
clscfg: version 3 is 10g Release 2.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 1: merlin1 merlin1-priv merlin1
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
CRS stack on this node, is successfully upgraded to 11.1.0.6.0
Checking the existence of nodeapps on this node
Creating '/u02/crs/oracle/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u02/crs/oracle/install/paramfile.crs

On merlin2,

# /u02/crs/oracle/install/rootupgrade
Checking to see if Oracle CRS stack is already up...

copying ONS config file to 11.1 CRS home
/bin/cp: `/u02/crs/oracle/opmn/conf/ons.config' and `/u02/crs/oracle/opmn/conf/ons.config' are the same file
/u02/crs/oracle/opmn/conf/ons.config was copied successfully to /u02/crs/oracle/opmn/conf/ons.config
WARNING: directory '/u02/crs' is not owned by root
WARNING: directory '/u02' is not owned by root
Oracle Cluster Registry configuration upgraded successfully
Adding daemons to inittab
Attempting to start Oracle Clusterware stack
Waiting for Cluster Synchronization Services daemon to start
Cluster Synchronization Services daemon has started
Waiting for Event Manager daemon to start
Waiting for Event Manager daemon to start
Event Manager daemon has started
Cluster Ready Services daemon has started
Oracle CRS stack is running under init(1M)
clscfg: EXISTING configuration version 4 detected.
clscfg: version 4 is 11 Release 1.
Successfully accumulated necessary OCR keys.
Using ports: CSS=49895 CRS=49896 EVMC=49898 and EVMR=49897.
node <nodenumber>: <nodename> <private interconnect name> <hostname>
node 2: merlin2 merlin2-priv merlin2
Creating OCR keys for user 'root', privgrp 'root'..
Operation successful.
clscfg -upgrade completed successfully
CRS stack on this node, is successfully upgraded to 11.1.0.6.0
Checking the existence of nodeapps on this node
Creating '/u02/crs/oracle/install/paramfile.crs' with data used for CRS configuration
Setting CRS configuration values in /u02/crs/oracle/install/paramfile.crs


merlin1-> $ORA_CRS_HOME/bin/crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
merlin1-> $ORA_CRS_HOME/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [merlin1] is [11.1.0.6.0]
merlin1-> $ORA_CRS_HOME/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.6.0]

merlin2-> $ORA_CRS_HOME/bin/crsctl check crs
Cluster Synchronization Services appears healthy
Cluster Ready Services appears healthy
Event Manager appears healthy
merlin2-> $ORA_CRS_HOME/bin/crsctl query crs softwareversion
Oracle Clusterware version on node [merlin2] is [11.1.0.6.0]
merlin2-> $ORA_CRS_HOME/bin/crsctl query crs activeversion
Oracle Clusterware active version on the cluster is [11.1.0.6.0]
  7. Return to the Execute Configuration scripts screen on merlin1 and click on "OK."
  8. Configuration Assistants: Verify that all checks are successful. The OUI does a Clusterware post-installation check at the end. If the CVU fails, correct the problem and re-run the following command as the oracle user:
merlin1-> /u02/crs/oracle/bin/cluvfy stage -post crsinst -n merlin1,merlin2

Performing post-checks for cluster services setup

Checking node reachability...

Node reachability check passed from node "merlin1".


Checking user equivalence...

User equivalence check passed for user "oracle".

Checking Cluster manager integrity...

Checking CSS daemon...

Daemon status check passed for "CSS daemon".

Cluster manager integrity check passed.

Checking cluster integrity...


Cluster integrity check passed


Checking OCR integrity...


Checking the absence of a non-clustered configuration...

All nodes free of non-clustered, local-only configurations.


Uniqueness check for OCR device passed.

Checking the version of OCR...
OCR of correct Version "2" exists.

Checking data integrity of OCR...
Data integrity check for OCR passed.

OCR integrity check passed.

Checking CRS integrity...


Checking daemon liveness...

Liveness check passed for "CRS daemon".

Checking daemon liveness...

Liveness check passed for "CSS daemon".

Checking daemon liveness...

Liveness check passed for "EVM daemon".

Checking CRS health...

CRS health check passed.

CRS integrity check passed.

Checking node application existence...


Checking existence of VIP node application (required)

Check passed.

Checking existence of ONS node application (optional)

Check passed.

Checking existence of GSD node application (optional)

Check passed.

Post-check for cluster services setup was successful.
  9. End of Installation: Click Exit.
At this stage, the Oracle Clusterware has been upgraded to Oracle Clusterware 11g and all cluster resources should be running.
merlin1-> $ORA_CRS_HOME/bin/crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.devdb.db   application    ONLINE    ONLINE    merlin1
ora....b1.inst application    ONLINE    ONLINE    merlin1
ora....b2.inst application    ONLINE    ONLINE    merlin2
ora....SM1.asm application    ONLINE    ONLINE    merlin1
ora....N1.lsnr application    ONLINE    ONLINE    merlin1
ora....in1.gsd application    ONLINE    ONLINE    merlin1
ora....in1.ons application    ONLINE    ONLINE    merlin1
ora....in1.vip application    ONLINE    ONLINE    merlin1
ora....SM2.asm application    ONLINE    ONLINE    merlin2
ora....N2.lsnr application    ONLINE    ONLINE    merlin2
ora....in2.gsd application    ONLINE    ONLINE    merlin2
ora....in2.ons application    ONLINE    ONLINE    merlin2
ora....in2.vip application    ONLINE    ONLINE    merlin2

3. Install Oracle Database 11g Release 1 Software

Create the Oracle Home

As the oracle user, create the new Oracle home on both nodes.

mkdir -p  /u01/app/oracle/product/11.1.0/db_1

Install the Oracle Database software

Download the Oracle Database software from OTN.
As the oracle user on merlin1,

merlin1-> /stage/database/runInstaller
  1. Welcome: Click on Next.
  2. Select Installation Type:
    1. Select Custom.
  3. Specify Home Details:
    1. Oracle Base: /u01/app/oracle.
    2. Name: OraDb11g_home1
    3. Path: /u01/app/oracle/product/11.1.0/db_1
  4. Specify Hardware Cluster Installation Mode:
    1. Select Cluster Installation.
    2. Click on Select All.
  5. Product-Specific Prerequisite Checks: Verify overall result is successful.
  6. Available Product Components: Select all the required components.
  7. Privileged Operating System Groups:
    1. Database Administrator (OSDBA) Group: dba.
    2. Database Operator (OSOPER) Group: oinstall.
    3. ASM administrator (OSASM) Group: dba.
  8. Create Database:
    1. Select Install database Software only.
  9. Summary: Click on Install.
  10. Execute Configuration scripts: Execute the scripts below as the root user.
    1. Execute /u01/app/oracle/product/11.1.0/db_1/root.sh on merlin1.
    2. Execute /u01/app/oracle/product/11.1.0/db_1/root.sh on merlin2.
  11. Return to the Execute Configuration scripts screen on merlin1 and click on OK.
  12. End of Installation: Click on Exit.

4. Upgrade Oracle Database

Pre-database upgrade checks

Prior to running the Database Upgrade Assistant (DBUA), execute the pre-upgrade check script, utlu111i.sql, to verify that all pre-requisites are met. As part of the upgrade process, the DBUA automatically changes the cluster_database parameter from true to false. Re-execute the pre-upgrade script after making the necessary modifications.

Connect as the sys user,

SQL> spool /tmp/utlu111i.log
SQL> @/u01/app/oracle/product/11.1.0/db_1/rdbms/admin/utlu111i
Oracle Database 11.1 Pre-Upgrade Information Tool    08-13-2007 18:03:45
.
**********************************************************************
Database:
**********************************************************************
--> name:          DEVDB
--> version:       10.2.0.3.0
--> compatible:    10.2.0.1.0
--> blocksize:     8192
--> platform:      Linux IA (32-bit)
--> timezone file: V4
.
**********************************************************************
Tablespaces: [make adjustments in the current environment]
**********************************************************************
--> SYSTEM tablespace is adequate for the upgrade.
.... minimum required size: 743 MB
--> UNDOTBS1 tablespace is adequate for the upgrade.
.... minimum required size: 315 MB
--> SYSAUX tablespace is adequate for the upgrade.
.... minimum required size: 458 MB
--> TEMP tablespace is adequate for the upgrade.
.... minimum required size: 61 MB
--> EXAMPLE tablespace is adequate for the upgrade.
.... minimum required size: 66 MB
.
**********************************************************************
Update Parameters: [Update Oracle Database 11.1 init.ora or spfile]
**********************************************************************
-- No update parameter changes are required.
.
**********************************************************************
Renamed Parameters: [Update Oracle Database 11.1 init.ora or spfile]
**********************************************************************
-- No renamed parameters found. No changes are required.
.
**********************************************************************
Obsolete/Deprecated Parameters: [Update Oracle Database 11.1 init.ora or spfile]
**********************************************************************
--> "background_dump_dest" replaced by  "diagnostic_dest"
--> "user_dump_dest" replaced by  "diagnostic_dest"
--> "core_dump_dest" replaced by  "diagnostic_dest"
.
**********************************************************************
Components: [The following database components will be upgraded or installed]
**********************************************************************
--> Oracle Catalog Views         [upgrade]  VALID
--> Oracle Packages and Types    [upgrade]  VALID
--> JServer JAVA Virtual Machine [upgrade]  VALID
--> Oracle XDK for Java          [upgrade]  VALID
--> Real Application Clusters    [upgrade]  VALID
--> Oracle Workspace Manager     [upgrade]  VALID
--> OLAP Analytic Workspace      [upgrade]  VALID
--> OLAP Catalog                 [upgrade]  VALID
--> EM Repository                [upgrade]  VALID
--> Oracle Text                  [upgrade]  VALID
--> Oracle XML Database          [upgrade]  VALID
--> Oracle Java Packages         [upgrade]  VALID
--> Oracle interMedia            [upgrade]  VALID
--> Spatial                      [upgrade]  VALID
--> Data Mining                  [upgrade]  VALID
--> Expression Filter            [upgrade]  VALID
--> Rule Manager                 [upgrade]  VALID
--> Oracle OLAP API              [upgrade]  VALID
.
**********************************************************************
Miscellaneous Warnings
**********************************************************************
WARNING: --> The "cluster_database" parameter is currently "TRUE" and must be
set to "FALSE" prior to running the upgrade.
WARNING: --> Database contains stale optimizer statistics.
.... Refer to the 11g Upgrade Guide for instructions to update
.... statistics prior to upgrading the database.
.... Component Schemas with stale statistics:
....   SYS
WARNING: --> Database contains schemas with objects dependent on network
packages.
.... Refer to the 11g Upgrade Guide for instructions to configure Network ACLs.
WARNING: --> EM Database Control Repository exists in the database.
.... Direct downgrade of EM Database Control is not supported. Refer to the
.... 11g Upgrade Guide for instructions to save the EM data prior to upgrade.
.

PL/SQL procedure successfully completed.

SQL> spool off
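Before launching the DBUA, it helps to collect every WARNING from the spooled report in one place so each can be resolved. A minimal sketch (the `list_warnings` helper is illustrative; it reads the report on stdin, and on the node you would feed it /tmp/utlu111i.log from the spool command above):

```shell
# Extract the WARNING entries from the pre-upgrade report on stdin.
list_warnings() { grep '^WARNING'; }
# Usage on the node: list_warnings < /tmp/utlu111i.log
```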

Modify the oracle user environment file

Modify the ORACLE_HOME to reflect the new Oracle Database 11g directory on both nodes.

merlin1-> more .profile
export PS1="`/bin/hostname -s`-> "
export EDITOR=vi
export ORACLE_SID=devdb1
export ORACLE_BASE=/u01/app/oracle
export ORACLE_HOME=$ORACLE_BASE/product/11.1.0/db_1
export ORA_CRS_HOME=/u02/crs/oracle
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export PATH=$ORACLE_HOME/bin:/bin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/X11R6/bin
umask 022
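Since both the 10g and 11g homes contain a dbua binary, it is worth confirming that the updated profile has been sourced before launching the tool. A hedged sketch (the `is_11g_home` helper is illustrative):

```shell
# Return success only when the given Oracle home path is under the new
# 11.1.0 directory tree set in the profile above.
is_11g_home() {
  case "$1" in
    */11.1.0/*) return 0 ;;
    *)          return 1 ;;
  esac
}
# Usage: is_11g_home "$ORACLE_HOME" || echo "source the updated .profile first"
```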

Upgrade the database

As the oracle user, execute dbua on merlin1.

merlin1-> . ./.profile
merlin1-> which dbua
/u01/app/oracle/product/11.1.0/db_1/bin/dbua
merlin1-> dbua
  1. Welcome: Click Next.
  2. Upgrade Operations: Select Upgrade a Database.
  3. Databases: Select devdb.
  4. Database Upgrade Assistant: Click Yes to migrate the existing listener.
  5. Database Upgrade Assistant: Click on No to upgrade ASM later.
  6. Diagnostic Destination:
    1. Oracle Base: /u01/app/oracle
    2. Diagnostic Destination: /u01/app/oracle
  7. Recovery Configuration:
    1. Select Specify Flash Recovery Area.
    2. Flash Recovery Area: +RECOVERYDEST.
    3. Flash Recovery Area Size: 2048 MB.
  8. Recompile Invalid Objects: Select Recompile invalid objects at the end of upgrade.
  9. Summary: Click Finish.

 

  10. Progress: Click OK to see the results of the upgrade.
  11. Upgrade Results: Click Close.
SQL> select comp_name,version,status from dba_registry;

COMP_NAME                               VERSION    STATUS
--------------------------------------- ---------- ------
Oracle Enterprise Manager               11.1.0.6.0 VALID
OLAP Catalog                            11.1.0.6.0 VALID
Spatial                                 11.1.0.6.0 VALID
Oracle Multimedia                       11.1.0.6.0 VALID
Oracle XML Database                     11.1.0.6.0 VALID
Oracle Text                             11.1.0.6.0 VALID
Oracle Data Mining                      11.1.0.6.0 VALID
Oracle Expression Filter                11.1.0.6.0 VALID
Oracle Rule Manager                     11.1.0.6.0 VALID
Oracle Workspace Manager                11.1.0.6.0 VALID
Oracle Database Catalog Views           11.1.0.6.0 VALID
Oracle Database Packages and Types      11.1.0.6.0 VALID
JServer JAVA Virtual Machine            11.1.0.6.0 VALID
Oracle XDK                              11.1.0.6.0 VALID
Oracle Database Java Packages           11.1.0.6.0 VALID
OLAP Analytic Workspace                 11.1.0.6.0 VALID
Oracle OLAP API                         11.1.0.6.0 VALID
Oracle Real Application Clusters        11.1.0.6.0 VALID

18 rows selected.

merlin1-> srvctl config database -d devdb
merlin1 devdb1 /u01/app/oracle/product/11.1.0/db_1
merlin2 devdb2 /u01/app/oracle/product/11.1.0/db_1

merlin1-> $ORA_CRS_HOME/bin/crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.devdb.db   application    ONLINE    ONLINE    merlin1
ora....b1.inst application    ONLINE    ONLINE    merlin1
ora....b2.inst application    ONLINE    ONLINE    merlin2
ora....SM1.asm application    ONLINE    ONLINE    merlin1
ora....N1.lsnr application    ONLINE    ONLINE    merlin1
ora....in1.gsd application    ONLINE    ONLINE    merlin1
ora....in1.ons application    ONLINE    ONLINE    merlin1
ora....in1.vip application    ONLINE    ONLINE    merlin1
ora....SM2.asm application    ONLINE    ONLINE    merlin2
ora....N2.lsnr application    ONLINE    ONLINE    merlin2
ora....in2.gsd application    ONLINE    ONLINE    merlin2
ora....in2.ons application    ONLINE    ONLINE    merlin2
ora....in2.vip application    ONLINE    ONLINE    merlin2

The new diagnostic location

SQL> select name, value from v$parameter where name like '%dump_dest' or name like 'diag%';

NAME                 VALUE
-------------------- --------------------------------------------------
background_dump_dest /u01/app/oracle/diag/rdbms/devdb/devdb1/trace
user_dump_dest       /u01/app/oracle/diag/rdbms/devdb/devdb1/trace
core_dump_dest       /u01/app/oracle/diag/rdbms/devdb/devdb1/cdump
diagnostic_dest      /u01/app/oracle

5. Upgrade Oracle ASM

A separate ASM home is optional; however, having one lets you apply patches or patchsets to the Oracle RDBMS home independently of the ASM home. A separate ASM home is especially beneficial when more than one database instance runs on the same node: ASM instance availability is not affected when the Oracle RDBMS home needs to be patched.

At this point, your ASM home is still running off the Oracle Database 10g Home.

merlin1-> srvctl config asm -n merlin1
+ASM1 /u01/app/oracle/product/10.2.0/db_1
merlin1-> srvctl config asm -n merlin2
+ASM2 /u01/app/oracle/product/10.2.0/db_1

Create the ASM home

As the oracle user on both nodes, create the new ASM home

mkdir /u01/app/oracle/product/11.1.0/asm

and modify the ORACLE_HOME variable in the shell profile to reflect the new ASM home.

ORACLE_HOME=/u01/app/oracle/product/11.1.0/asm

Install Oracle Database 11g Release 1 software in ASM home

As the oracle user on merlin1,

merlin1-> . ./.profile
merlin1-> /stage/database/runInstaller
  1. Welcome: Click Next.
  2. Select Installation Type:
    1. Select Enterprise Edition.
  3. Specify Home Details:
    1. Oracle Base: /u01/app/oracle.
    2. Name: OraASM11g_home.
    3. Path: /u01/app/oracle/product/11.1.0/asm.
  4. Specify Hardware Cluster Installation Mode:
    1. Select Cluster Installation.
    2. Click Select All.
  5. Product-Specific Prerequisite Checks: Verify overall result is successful.
  6. Upgrade an Existing Database:
    1. Do you want to perform an upgrade now?: No.
  7. Select Configuration Option:
    1. Select Install Software Only.
  8. Privileged Operating System Groups:
    1. Database Administrator (OSDBA) Group: dba
    2. Database Operator (OSOPER) Group: oinstall
    3. ASM administrator (OSASM) Group: dba
  9. Summary: Click Install.
  10. Execute Configuration scripts: Execute the scripts below as the root user.
    1. Execute /u01/app/oracle/product/11.1.0/asm/root.sh on merlin1.
    2. Execute /u01/app/oracle/product/11.1.0/asm/root.sh on merlin2.
  11. Return to the Execute Configuration scripts screen on merlin1 and click on OK.
  12. End of Installation: Click Exit.

Upgrade ASM

As the oracle user on merlin1, stop the database and start up the DBUA.

merlin1-> srvctl stop database -d devdb
merlin1-> /u01/app/oracle/product/11.1.0/asm/bin/dbua
  1. Welcome: Click Next.
  2. Upgrade Operations: Select Upgrade Automatic Storage Management Instance.
  3. Summary: Click Finish.
  4. Progress: Click OK to see the results of the upgrade.
  5. Upgrade Results: Click Close.
merlin1-> srvctl config asm -n merlin1
+ASM1 /u01/app/oracle/product/11.1.0/asm
merlin1-> srvctl config asm -n merlin2
+ASM2 /u01/app/oracle/product/11.1.0/asm
merlin1-> srvctl start database -d devdb
merlin1-> crs_stat -t
Name           Type           Target    State     Host
------------------------------------------------------------
ora.devdb.db   application    ONLINE    ONLINE    merlin1
ora....b1.inst application    ONLINE    ONLINE    merlin1
ora....b2.inst application    ONLINE    ONLINE    merlin2
ora....SM1.asm application    ONLINE    ONLINE    merlin1
ora....N1.lsnr application    ONLINE    ONLINE    merlin1
ora....in1.gsd application    ONLINE    ONLINE    merlin1
ora....in1.ons application    ONLINE    ONLINE    merlin1
ora....in1.vip application    ONLINE    ONLINE    merlin1
ora....SM2.asm application    ONLINE    ONLINE    merlin2
ora....N2.lsnr application    ONLINE    ONLINE    merlin2
ora....in2.gsd application    ONLINE    ONLINE    merlin2
ora....in2.ons application    ONLINE    ONLINE    merlin2
ora....in2.vip application    ONLINE    ONLINE    merlin2

Modify the Disk Group Compatibility Attributes and the Database Compatibility Parameter

As the final step, to utilize the new features of Oracle Database 11g, change the database compatible parameter and the disk group compatibility attributes to 11.1.0.

On devdb1 instance,

SQL> show parameter compatible

NAME                     TYPE        VALUE
------------------------ ----------- -------------------------
compatible               string      10.2.0.1.0

SQL> alter system set compatible='11.1.0' scope=spfile;

System altered.

On merlin1, restart the database,

merlin1-> srvctl stop database -d devdb
merlin1-> srvctl start database -d devdb

On ASM1 instance,

SQL> select name,compatibility,database_compatibility from v$asm_diskgroup;

NAME            COMPATIBILITY DATABASE_COMPATIBILI
--------------- ------------- --------------------
DG1             10.1.0.0.0    10.1.0.0.0
RECOVERYDEST    10.1.0.0.0    10.1.0.0.0

SQL> alter diskgroup dg1 set attribute 'compatible.asm'='11.1.0';
Diskgroup altered.

SQL> alter diskgroup dg1 set attribute 'compatible.rdbms'='11.1.0';
Diskgroup altered.

SQL> alter diskgroup recoverydest set attribute 'compatible.asm'='11.1.0';
Diskgroup altered.

SQL> alter diskgroup recoverydest set attribute 'compatible.rdbms'='11.1.0';
Diskgroup altered.

SQL> select name,compatibility,database_compatibility from v$asm_diskgroup;

NAME            COMPATIBILITY DATABASE_COMPATIBILI
--------------- ------------- --------------------
DG1             11.1.0.0.0    11.1.0.0.0
RECOVERYDEST    11.1.0.0.0    11.1.0.0.0

6. Explore Oracle Database 11g

This section briefly describes a few of the new features of Oracle Database 11g. A detailed description of the new features is beyond the scope of this guide. For a more comprehensive list, see the Oracle Database New Features Guide 11g Release 1 (11.1).

Automatic Memory Management - With Oracle Database 11g, memory management is further automated through the dynamic parameter memory_target. You specify only the total instance memory size, and the database automatically manages the distribution between the SGA and the PGA. The v$memory_target_advice view provides advice on memory tuning.

Interval Partitioning improves the manageability of partitioned tables by automatically creating new partitions when inserted rows fall beyond the existing partition ranges.

Partitioning by integer value

SQL> create table patients (
  2  patientid number not null,name varchar2(10),address varchar2(15)
  3  )
  4  partition by range (patientid)
  5  interval (100)
  6  (partition p1 values less than (100))
  7  /

Table created.

SQL> select partition_name,high_value
  2  from user_tab_partitions
  3  where table_name='PATIENTS';

PARTITION_NAME  HIGH_VALUE
--------------- ---------------
P1              100

SQL> insert into patients values (100,'ROBERT','4 BORNE AVE');

1 row created.

SQL> select partition_name,high_value
  2  from user_tab_partitions
  3  where table_name='PATIENTS';

PARTITION_NAME  HIGH_VALUE
--------------- ---------------
P1              100
SYS_P81         200

SQL> select count(*) from patients partition (SYS_P81);

  COUNT(*)
----------
         1
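The bounds of an automatically created partition follow directly from the transition point (100, the highest explicit bound here) and the interval width (100). A hedged arithmetic sketch of that rule (the `interval_bounds` helper is illustrative and only valid for values at or above the transition point):

```shell
# Compute the [lower, upper) bounds of the interval partition that a
# value lands in, given the transition point and interval width.
interval_bounds() {             # interval_bounds value transition_point width
  v=$1; t=$2; w=$3
  lo=$(( t + (v - t) / w * w ))
  echo "$lo $(( lo + w ))"
}
interval_bounds 100 100 100     # prints: 100 200
```

For the inserted patientid of 100 this gives the range [100, 200), matching the HIGH_VALUE of 200 reported for SYS_P81 above.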

Partitioning by date

SQL> create table userlogs (
  2  transid number,
  3  transdt date,
  4  terminal varchar2(10)
  5  )
  6  partition by range (transdt)
  7  interval (numtoyminterval(1,'YEAR'))
  8  (
  9  partition p1 values less than (to_date('01-01-2007','mm-dd-yyyy'))
 10  );

Table created.

SQL> select partition_name,high_value
  2  from user_tab_partitions
  3  where table_name='USERLOGS';

PARTITION_NAME HIGH_VALUE
-------------- --------------------------------------------------------------------------------
P1             TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

SQL> insert into userlogs values (1,'11-AUG-07','XAV0004');

1 row created.

SQL> select partition_name,high_value
  2  from user_tab_partitions
  3  where table_name='USERLOGS';

PARTITION_NAME HIGH_VALUE
-------------- --------------------------------------------------------------------------------
P1             TO_DATE(' 2007-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA
SYS_P42        TO_DATE(' 2008-01-01 00:00:00', 'SYYYY-MM-DD HH24:MI:SS', 'NLS_CALENDAR=GREGORIA

SQL> select count(*) from userlogs partition (sys_p42);

  COUNT(*)
----------
         1

Reference Partitioning

Reference Partitioning partitions a child table based on the partitioning scheme of the parent table.

SQL> create table patients (
  2  patientid number not null,name varchar2(10), address varchar2(15)
  3  )
  4  partition by range (patientid)
  5  (partition p1 values less than (100),
  6  partition p2 values less than (200))
  7  /

Table created.

SQL> alter table patients
  2  add constraint patients_pk primary key (patientid);

Table altered.

SQL> create table invoices (
  2  invoiceno number,amount number, patientid number not null,
  3  constraint invoices_fk
  4  foreign key (patientid) references patients
  5  )
  6  partition by reference (invoices_fk);

Table created.
SQL> select dbms_metadata.get_ddl('TABLE','INVOICES','VCHAN') from dual;

DBMS_METADATA.GET_DDL('TABLE','INVOICES','VCHAN')
-----------------------------------------------------------------------
CREATE TABLE "VCHAN"."INVOICES"
 (  "INVOICENO" NUMBER,
    "AMOUNT" NUMBER,
    "PATIENTID" NUMBER NOT NULL ENABLE,
    CONSTRAINT "INVOICES_FK" FOREIGN KEY ("PATIENTID")
     REFERENCES "VCHAN"."PATIENTS" ("PATIENTID") ENABLE
 ) PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
 STORAGE( BUFFER_POOL DEFAULT)
 PARTITION BY REFERENCE ("INVOICES_FK")
 (PARTITION "P1"
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "USERS" NOCOMPRESS ,
 PARTITION "P2"
  PCTFREE 10 PCTUSED 40 INITRANS 1 MAXTRANS 255
  STORAGE(INITIAL 65536 NEXT 1048576 MINEXTENTS 1 MAXEXTENTS 2147483645
  PCTINCREASE 0 FREELISTS 1 FREELIST GROUPS 1 BUFFER_POOL DEFAULT)
  TABLESPACE "USERS" NOCOMPRESS )

SQL> insert into patients values (1,'TOBY','88 Palace Ave');

1 row created.

SQL> insert into invoices values (150,262.12,1);

1 row created.

SQL> select count(*) from invoices partition (p1);

  COUNT(*)
----------
         1

SQL> select count(*) from invoices partition (p2);

  COUNT(*)
----------
         0

SQL> insert into patients values (110,'GILY','512 HILE STREET');

1 row created.

SQL> insert into invoices values (151,500.01,110);

1 row created.

SQL> select count(*) from invoices partition (p1);

  COUNT(*)
----------
         1

SQL> select count(*) from invoices partition (p2);

  COUNT(*)
----------
         1

Table Compression in Oracle Database 11g supports conventional DML and drop column operations. Compressed data is not uncompressed when read, so queries on compressed data are noticeably faster because fewer data blocks are read.
SQL> create tablespace tbs1 datafile '/u01/app/oracle/oradata/db11g/tbs1_01.dbf' size 500M;

Tablespace created.

SQL> create tablespace tbs2 datafile '/u01/app/oracle/oradata/db11g/tbs2_01.dbf' size 500M;

Tablespace created.

SQL> create table mytable_compress (col1 varchar2(26), col2 varchar2(26))
  2  tablespace tbs1 compress for all operations;

Table created.

SQL> create table mytable_nocompress (col1 varchar2(26), col2 varchar2(26))
  2  tablespace tbs2;

Table created.

SQL> alter system flush buffer_cache;

System altered.

SQL> alter system flush shared_pool;

System altered.

SQL> set timing on
SQL> insert into mytable_nocompress
  2  select 'ABCDEFGHIJKLMNOPQRSTUVWXYZ','ABCDEFGHIJKLMNOPQRSTUVWXYZ'
  3  from (select 1 from dual connect by level <= 2000000);

2000000 rows created.

Elapsed: 00:00:08.07

SQL> commit;

Commit complete.

Elapsed: 00:00:00.07

SQL> alter system flush buffer_cache;

System altered.

SQL> alter system flush shared_pool;

System altered.

SQL> insert into mytable_compress
  2  select 'ABCDEFGHIJKLMNOPQRSTUVWXYZ','ABCDEFGHIJKLMNOPQRSTUVWXYZ'
  3  from (select 1 from dual connect by level <= 2000000);

2000000 rows created.

Elapsed: 00:00:41.79

SQL> commit;

Commit complete.

Elapsed: 00:00:00.04

SQL> select segment_name,extents from user_segments where segment_name like 'MYTABLE%';

SEGMENT_NAME                      EXTENTS
------------------------------ ----------
MYTABLE_COMPRESS                       53
MYTABLE_NOCOMPRESS                     88

SQL> select tablespace_name,bytes/1024/1024 from dba_free_space where tablespace_name like 'TBS%';

TABLESPACE_NAME                BYTES/1024/1024
------------------------------ ---------------
TBS1                                  461.9375
TBS2                                  363.9375

SQL> alter table mytable_compress drop column col2;

Table altered.

Elapsed: 00:00:21.04
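As a quick sanity check on the dba_free_space figures above: both tablespaces were created at 500MB, so subtracting the free space gives the space each table actually used, and the ratio works out to roughly 3.6x for this (highly repetitive, so very compressible) data:

```python
# Back-of-the-envelope check of the transcript's numbers: each
# tablespace was created at 500MB, so used space = 500 - free space.
TBS_SIZE_MB = 500.0
free_compressed = 461.9375    # TBS1 (mytable_compress), from dba_free_space
free_uncompressed = 363.9375  # TBS2 (mytable_nocompress)

used_compressed = TBS_SIZE_MB - free_compressed      # ~38.06 MB
used_uncompressed = TBS_SIZE_MB - free_uncompressed  # ~136.06 MB
ratio = used_uncompressed / used_compressed

print(f"compressed: {used_compressed:.2f} MB, "
      f"uncompressed: {used_uncompressed:.2f} MB, "
      f"ratio: {ratio:.2f}x")  # roughly 3.57x
```

The 53 vs. 88 extents from user_segments tell the same story; real-world data will compress less dramatically than a table full of identical rows.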


0 Comments 0 References Permalink
Cache Fusion and Oracle RAC

The introduction of the Cache Fusion shared RAM cache for multiple Oracle instances is a breakthrough in clustered solutions. Oracle RAC fully implements Cache Fusion, which both provides high performance and enables continuous cluster availability. The high-availability capability of Oracle RAC is almost unfathomable. It's estimated that in a 12-computer configuration, any application running on Oracle RAC will not experience a catastrophic failure for well over 100,000 years.

Cache Fusion technology changes the internal configuration of the Oracle system global area (SGA). Cache Fusion moves the RAM data buffers from local RAM storage into a shared RAM area accessible by all Oracle instances.

Beyond high performance and high availability, Oracle RAC offers significant benefits as a scalability tool. Whenever the processing load becomes excessive in an existing Oracle RAC cluster, you can add additional processors—each with its own Oracle instance—to the Oracle RAC configuration. This allows companies to start small and scale infinitely as processing demands increase.

Oracle RAC and Hardware Failover

To detect a node failure, the Cluster Manager uses a background process—Global Enqueue Service Monitor (LMON)—to monitor the health of the cluster. When a node fails, the Cluster Manager reports the change in the cluster's membership to Global Cache Services (GCS) and Global Enqueue Service (GES). These services are then remastered based on the current membership of the cluster.

To successfully remaster the cluster services, Oracle RAC keeps track of all resources and resource states on each node and then uses this information to restart these resources on a backup node.
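As a conceptual illustration only (this is not Oracle's internal algorithm), remastering can be pictured as reassigning the resources mastered by the failed node across the surviving cluster members:

```python
# Conceptual sketch of remastering (NOT Oracle GCS/GES internals):
# resources mastered by a failed node are redistributed round-robin
# across the surviving nodes, based on current cluster membership.
def remaster(resource_masters: dict, failed_node: str, survivors: list) -> dict:
    """Reassign resources owned by failed_node to the surviving nodes."""
    new_masters = dict(resource_masters)
    orphans = sorted(r for r, n in resource_masters.items() if n == failed_node)
    for i, res in enumerate(orphans):
        new_masters[res] = survivors[i % len(survivors)]
    return new_masters

masters = {"blk1": "node1", "blk2": "node2", "blk3": "node2", "blk4": "node3"}
# node2 fails; blk2 and blk3 are remastered onto the survivors
print(remaster(masters, "node2", ["node1", "node3"]))
```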

These processes also manage the state of in-flight transactions and work with TAF to either restart or resume the transactions on the new node. Now let's see how Oracle RAC and TAF work together to ensure that a server failure does not cause an unplanned service interruption.

Using Transparent Application Failover

After an Oracle RAC node crashes—usually from a hardware failure—all new application transactions are automatically rerouted to a specified backup node. The challenge in rerouting is to not lose transactions that were "in flight" at the exact moment of the crash. One of the requirements of continuous availability is the ability to restart in-flight application transactions, allowing a failed node to resume processing on another server without interruption. Oracle's answer to application failover is a new Oracle Net mechanism dubbed Transparent Application Failover. TAF allows the DBA to configure the type and method of failover for each Oracle Net client.

For an application to use TAF, it must use failover-aware API calls from the Oracle Call Interface (OCI). Inside OCI are TAF callback routines that can be used to make any application failover-aware.

While the concept of failover is simple, providing an apparent instant failover can be extremely complex, because there are many ways to restart in-flight transactions. The TAF architecture offers the ability to restart transactions at either the transaction (SELECT) or session level:
SELECT failover. With SELECT failover, Oracle Net keeps track of all SELECT statements issued during the transaction, tracking how many rows have been fetched back to the client for each cursor associated with a SELECT statement. If the connection to the instance is lost, Oracle Net establishes a connection to another Oracle RAC node and re-executes the SELECT statements, repositioning the cursors so the client can continue fetching rows as if nothing has happened. The SELECT failover approach is best for data warehouse systems that perform complex and time-consuming transactions.
SESSION failover. When the connection to an instance is lost, SESSION failover results only in the establishment of a new connection to another Oracle RAC node; any work in progress is lost. SESSION failover is ideal for online transaction processing (OLTP) systems, where transactions are small.
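A minimal sketch of the SELECT failover idea, using a stand-in query function rather than the real Oracle Net client layer: the client side remembers how many rows each cursor has delivered and, after reconnecting to the backup node, re-executes the query and silently discards the rows already fetched:

```python
# Conceptual sketch of SELECT failover (not Oracle Net internals):
# after reconnecting, re-execute the statement and reposition the
# cursor past the rows the application has already received.
def resume_fetch(run_query, rows_already_fetched: int):
    """Re-run the query on the backup node and skip delivered rows."""
    all_rows = run_query()                  # re-execute on the new connection
    return all_rows[rows_already_fetched:]  # reposition the cursor

rows = [("r%d" % i,) for i in range(10)]
# The client had fetched 4 rows when the node died; after failover
# it receives only the remaining rows r4..r9.
print(resume_fetch(lambda: rows, 4))
```

This also shows why SELECT failover assumes a repeatable result set: if the re-executed query returns rows in a different order, simple repositioning would deliver wrong data.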

Oracle TAF also offers choices on how to restart a failed transaction. The Oracle DBA may choose one of the following failover methods:

BASIC failover. In this approach, the application connects to a backup node only after the primary connection fails. This approach has low overhead, but the end user experiences a delay while the new connection is created.
PRECONNECT failover. In this approach, the application simultaneously connects to both a primary and a backup node. This offers faster failover, because a pre-spawned connection is ready to use. But the extra connection adds everyday overhead by duplicating connections.

Currently, TAF will fail over standard SQL SELECT statements that were in flight at the moment of a node crash. In the current release, however, some types of transactions cannot be resumed and must be restarted from the beginning.

The following types of transactions do not automatically fail over and must be restarted by the application after failover:
Transactional statements. Transactions involving INSERT, UPDATE, or DELETE statements are not supported by TAF.
ALTER SESSION statements. ALTER SESSION and SQL*Plus SET statements do not fail over.
The following do not fail over and cannot be restarted:
Temporary objects. Transactions using temporary segments in the TEMP tablespace and global temporary tables do not fail over.
PL/SQL package states. PL/SQL package states are lost during failover.

Using Oracle RAC and TAF Together

The continuous availability features of Oracle RAC and TAF come together when these products cooperate in restarting failed transactions. Let's take a closer look at how this works.

Within each connected Oracle Net client, tnsnames.ora file parameters define the failover types and methods for that client. The parameters direct Oracle RAC and TAF on how to restart any transactions that may be in-flight during a hardware failure on the node.

It is important to note that TAF failover control is external to the Oracle RAC cluster, and each Oracle Net client may have unique failover types and methods, depending on processing requirements. The following is a client tnsnames.ora file entry for a node, including its current TAF failover parameters:

world =
  (DESCRIPTION_LIST =
    (FAILOVER = true)
    (LOAD_BALANCE = true)
    (DESCRIPTION =
      (ADDRESS = (PROTOCOL = TCP)(HOST = redneck)(PORT = 1521))
      (CONNECT_DATA =
        (SERVICE_NAME = bubba)
        (SERVER = dedicated)
        (FAILOVER_MODE =
          (BACKUP=cletus)
          (TYPE=select)
          (METHOD=preconnect)
          (RETRIES=20)
          (DELAY=3)
        )
      )
    )
  )

The failover_mode section of the tnsnames.ora file lists the parameters and their values:

BACKUP=cletus. This names the backup node that will take over failed connections when a node crashes. In this example, the primary server is bubba, and TAF will reconnect failed transactions to the cletus instance in case of server failure.

TYPE=select. This tells TAF to track the cursor state of each SELECT during the transaction and, after failover, re-execute the statements and reposition the cursors so that fetching can resume where it left off (rather than restarting from the beginning of the transaction, as TYPE=session would).

METHOD=preconnect. This directs TAF to create two connections at transaction startup time: one to the primary bubba database and a backup connection to the cletus database. In case of instance failure, the cletus database will be ready to resume the failed transaction.

RETRIES=20. This directs TAF to retry a failover connection up to 20 times.

DELAY=3. This tells TAF to wait three seconds between connection retries.

Remember, you must set these TAF parameters in every tnsnames.ora file on every Oracle Net client that needs transparent failover.
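The RETRIES and DELAY semantics can be sketched as a simple reconnect loop. The connect() callable here is a hypothetical stand-in for the Oracle Net connection attempt, not a real client API:

```python
import time

# Hedged sketch of the RETRIES/DELAY behavior described above:
# keep retrying the backup connection, pausing between attempts.
def failover_connect(connect, retries: int = 20, delay: float = 3.0):
    """Try `connect` up to `retries` times, sleeping `delay` seconds
    between attempts (mirroring RETRIES=20 / DELAY=3)."""
    for attempt in range(1, retries + 1):
        try:
            return connect()
        except ConnectionError:
            if attempt == retries:
                raise          # out of retries: surface the failure
            time.sleep(delay)

# Simulated backup node that only accepts the third attempt:
attempts = {"n": 0}
def fake_connect():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("backup not ready")
    return "connected"

print(failover_connect(fake_connect, retries=5, delay=0))  # connected
```

With the post's values (RETRIES=20, DELAY=3), the client would keep trying the backup for up to about a minute before giving up.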

Putting It All Together

An Oracle Net client can be a single PC or a huge application server. In the architectures of giant Oracle RAC systems, each application server has a customized tnsnames.ora file that governs the failover method for all connections that are routed to that application server.

Watching TAF in Action

The transparency of TAF operation is a tremendous advantage to application users, but DBAs need to quickly see what has happened and where failover traffic is going, and they need to be able to get the status of failover transactions. To provide this capability, the Oracle data dictionary has several new columns in the V$SESSION view that give the current status of failover transactions.

The following query calls the new FAILOVER_TYPE, FAILOVER_METHOD, and FAILED_OVER columns of the V$SESSION view. Be sure to note that the query is restricted to nonsystem sessions, because Oracle data definition language (DDL) and data manipulation language (DML) are not recoverable with TAF.

select username, sid, serial#, failover_type, failover_method, failed_over
from v$session
where username not in ('SYS','SYSTEM','PERFSTAT')
and failed_over = 'YES';

You can run this script against the backup node after an instance failure to see those transactions that have been reconnected with TAF.

Conclusion

Oracle RAC, TAF, and Cache Fusion work together to guarantee continuous availability and infinite scalability. To summarize, here's a short description of each component:

Oracle RAC. The clustering component of Oracle that allows the creation of multiple, independent Oracle instances, all sharing a single database.

Cache Fusion. The shared RAM component of Oracle RAC that provides fast interchange of Oracle data blocks between SGA regions.

TAF. The failover method implemented on the Oracle Net client to restart in-flight transactions when a node crashes.

#!/usr/bin/perl -w

# Callout program that will, on a SERVICEMEMBER DOWN event, search for any sessions
# running on the local instance and remove them.
# Note: This callout is at present DATABASE specific
#

use strict;

# Replace the following two directory paths with the Oracle DBD location
use lib '/usr/perl/DBD-Oracle-1.16/blib/lib';
use lib '/usr/perl/DBD-Oracle-1.16/blib/arch';
#
use Oraperl;
use DBI;

# Replace the following variables with appropriate values
my $CRS_HOME="/u01/crs/oracle/product/10/app";
my $ORACLE_HOME="/u01/app/oracle/product/10.2.0/db_1";
my $GetHOST = "/bin/hostname";
# TMP refers to the log location only
my $TMP = "/tmp";
#
# Add the Instance - host mapping
my %hostInstMap;
# Add an entry for each node/instance
# following the convention - $hostInstMap{"NodeName"} = "InstanceName"
$hostInstMap{"pmrac1"} = "barb1";
$hostInstMap{"pmrac2"} = "barb2";
#
#
# Logging enabled
my $LOGFILE = "$TMP/cleanup_SRV.log";

#
my $instance;
my $database;
my $host;
my $service;
my $state;
my $reason;
my $card;
my $status;
my ($key,$value) = "";
my $myHost = "";
my ($myServ, $myInst) = "";
my @services;
my ($iHost, $iInst) = "";
my $sid_list;
my ($aref,$serv);

#

# Open a logfile
local *LOG_FILE;
open (LOG_FILE, ">>$LOGFILE") or do {
   print "Cannot open $LOGFILE\n";
   exit(1);
};

# Determine this host
system("$GetHOST > $TMP/myhost");
local *TEMP_FILE;
open ( TEMP_FILE,"$TMP/myhost") or do {
   print "Cannot determine hostname\n";
   exit(1);
};
while (<TEMP_FILE>) {
   chomp;
   $myHost = $_;
};
close(TEMP_FILE);


# Uncomment these lines if only interested in specific events

if ($ARGV[0] ne "SERVICEMEMBER") { exit(0); };
#if ($ARGV[0] ne "INSTANCE") { exit(0); };
#if ($ARGV[0] ne "SERVICE") { exit(0); };
#if ($ARGV[0] ne "NODE") { exit(0); };

for (my $i=0; $i <= $#ARGV; $i++) {
   #print "$i $ARGV[$i]\n";
   if ($ARGV[$i] =~ m#=#) {
      ($key,$value) = (split /=/, $ARGV[$i]);
      #print "Key = $key  Value = $value\n";
      if ($key eq "service") {
         $service = $value;
      } elsif ($key eq "instance") {
         $instance = $value;
         $ENV{ORACLE_SID} = $value;
      } elsif ($key eq "database") {
         $database = $value;
      } elsif ($key eq "host") {
         $host = $value;
      } elsif ($key eq "card") {
         $card = $value;
      } elsif ($key eq "status") {
         $status = $value;
      } elsif ($key eq "reason") {
         $reason = $value;
      }
   }
}
print LOG_FILE "Host = $host DB = $database Inst = $instance Service = $service Status = $status Reason = $reason MyHost=$myHost\n";

#

# The following function will attempt to remove sessions using a service
# that has gone down.
#
if ($host eq "$myHost") {
   if ($status eq "down" && $ARGV[0] eq "SERVICEMEMBER") {


      print LOG_FILE "Attempting cleanup of service: $service\n";

      # Disconnect sessions from current instance
      dbAccess($service,$instance);
   } else {
      #print "Not a down event\n";
   }

} else {
   print LOG_FILE "Event generated on a different node\n";
}

# Sub routine to connect to Current instance and disconnect

sub dbAccess {

   my ($servIn,$instIn) = @_;

   #$servIn = "\'\%" . $servIn . "\%\'";
   $servIn = "'$servIn'";
   print LOG_FILE "In dbAccess subroutine\n";

   my $dbh = DBI->connect("DBI:Oracle:", "", "" , { ora_session_mode => 2 } )
                      or die "Couldn't connect to database: " . DBI->errstr;

   # Prepare the SQL statement
   my $sth = $dbh->prepare("SELECT s.SID, s.SERIAL\#,p.pid,p.spid FROM v\$session s, v\$process p WHERE service_name = $servIn and p.addr=s.paddr")
                or die "Couldn't prepare statement: " . $dbh->errstr;

   my @data;

   #$sth->execute($servIn)             # Execute the query
   $sth->execute()             # Execute the query
      or die "Couldn't execute statement: " . $sth->errstr;

   # Build a list of the SIDs connected by this service
   while (@data = $sth->fetchrow_array()) {

      if ($sth->rows != 0) {

         $sid_list = "'$data[0],$data[1]'";
         # Remove sessions
         my $sth2 = $dbh->prepare("ALTER SYSTEM KILL SESSION $sid_list IMMEDIATE")
                         or die "Couldn't prepare statement: " . $dbh->errstr;

         system("date +'%D %H:%M:%S.%N' >> /tmp/DTP_co.log");
         print LOG_FILE "Removing session SID=$data[0]:$data[1]  PID=$data[2]:$data[3]\n";
         $sth2->execute()
                   or die "Couldn't execute statement: " . $sth2->errstr;
         print LOG_FILE "Removed session SID = $data[0]:$data[1]  PID=$data[2]:$data[3]\n";
         system("date +'%D %H:%M:%S.%N' >> /tmp/DTP_co.log");

         $sth2->finish;
      }
   }

   if ($sth->rows == 0) {
      print LOG_FILE "No sessions for service $servIn on $instance.\n";
   }

   $sth->finish;


   #disconnect from database
   print LOG_FILE "Dropping database connect\n";
   $dbh->disconnect;
}



perl script:

#!/usr/bin/perl -w

# Callout program that will, on an INSTANCE UP event start any services defined against
# this database. This is to address the issue of INSTANCE STOP setting non-uniform service state
# to OFFLINE.
# Note: Running services will not be relocated.

use strict;

# Replace the following variables with appropriate values
my $CRS_HOME="/opt/oracle/product/10.2.0/crs";
my $ORACLE_HOME="/opt/oracle/product/10.2.0/db_1";
my $GetHOST = "/bin/hostname";
# TMP refers to the log location only
my $TMP = "/tmp";
#
#
# Logging enabled
my $LOGFILE = "$TMP/SRV_co.log";

#
my $instance;
my $database;
my $host;
my $service;
my $state;
my $reason;
my $card;
my $status;
my ($key,$value) = "";
my $myHost = "";
my ($myServ) = "";

#

# Open a logfile
local *LOG_FILE;
open (LOG_FILE, ">>$LOGFILE") or do {
   print "Cannot open $LOGFILE\n";
   exit(1);
};

# Determine this host
system("$GetHOST > $TMP/myhost");
local *TEMP_FILE;
open ( TEMP_FILE,"$TMP/myhost") or do {
   print "Cannot determine hostname\n";
   exit(1);
};
while (<TEMP_FILE>) {
   chomp;
   $myHost = $_;
};
close(TEMP_FILE);


# Uncomment these lines if only interested in specific events

if ($ARGV[0] ne "INSTANCE") { exit(0); };
#if ($ARGV[0] ne "SERVICEMEMBER") { exit(0); };
#if ($ARGV[0] ne "SERVICE") { exit(0); };
#if ($ARGV[0] ne "NODE") { exit(0); };

for (my $i=0; $i <= $#ARGV; $i++) {
   #print "$i $ARGV[$i]\n";
   if ($ARGV[$i] =~ m#=#) {
      ($key,$value) = (split /=/, $ARGV[$i]);
      #print "Key = $key  Value = $value\n";
      if ($key eq "service") {
         $service = $value;
      } elsif ($key eq "instance") {
         $instance = $value;
         $ENV{ORACLE_SID} = $value;
      } elsif ($key eq "database") {
         $database = $value;
      } elsif ($key eq "host") {
         $host = $value;
      } elsif ($key eq "card") {
         $card = $value;
      } elsif ($key eq "status") {
         $status = $value;
      } elsif ($key eq "reason") {
         $reason = $value;
      }
   }
}
# print LOG_FILE "Host = $host DB = $database Inst = $instance Service = $service Status = $status Reason = $reason MyHost=$myHost\n";

#

# The following function will set service state such that they will restart
#
if ($host eq "$myHost") {
   if ($status eq "up" && $ARGV[0] eq "INSTANCE") {

#      print LOG_FILE "Attempting set of service state for database: $database\n";
 
      # Determine services associated with this database
      srvMap($database, $instance);
   } else {

   }

} else {
   #print LOG_FILE "Event generated on a different node\n";
}

# Sub routine to start services defined against a particular database

sub srvMap {

   my ($dbIn, $instanceIn) = @_;
   local *SRVFILE;

   #print LOG_FILE "In srvMap subroutine for $dbIn\n";

   #system("date +'%D %H:%M:%S.%N' >> /tmp/SRV_co.log") ;
#
#  Identify services defined for this database

   system("$ORACLE_HOME/bin/srvctl config service -d $dbIn > $TMP/serviceMap-$dbIn.out");
   open (SRVFILE,"$TMP/serviceMap-$dbIn.out") or do {
      print "Cannot open SRVFILE\n";
      exit(1);
   };
   while (<SRVFILE>) {
      chomp;
      ($myServ) = ($_ =~ m#^([\w]+) #);
      next unless defined $myServ;   # skip output lines that do not name a service
      # print LOG_FILE "Starting service $myServ for database $database\n";
#
#     Only one of the following two lines needs to be active. The first line will attempt to start each service
#     somewhere in the system. Depending on the system configuration, this may cause other instances to start.
#     The second method will ONLY start the service on the instance that just started.
#     Neither method will affect currently running services.

      system("$ORACLE_HOME/bin/srvctl start service -d $dbIn -s $myServ");
#      system("$ORACLE_HOME/bin/srvctl start service -d $dbIn -s $myServ -i $instanceIn");
   };
   #system("date +'%D %H:%M:%S.%N' >> /tmp/SRV_co.log") ;
   print LOG_FILE "Routine complete\n";
}




Here are the steps to upgrade an EBS database from 9.2.0.6 to 10.2.0.2.

1. patch 5478710 (TXK (FND & ADX) AUTOCONFIG ROLLUP PATCH O)
[oracle@ebs2 bin]$ ./txkprepatchcheck.pl -script=ValidateRollup -outfile=$APPLTMP/txkValidateRollup.html -appspass=apps

*** ALL THE FOLLOWING FILES ARE REQUIRED FOR RESOLVING RUNTIME ERRORS ***
STDOUT = /appl/prodcomn/rgf/prod_ebs2/TXK/txkValidateRollup_Tue_Dec_19_23_36_11_2006_stdout.log

Reportfile /appl/prodcomn/temp/txkValidateRollup.html generated successfully.

Enable maintenance mode using adadmin:
Please select an option:
1. Enable Maintenance Mode
2. Disable Maintenance Mode
3. Return to Main Menu

Enter your choice [3] : 1
sqlplus -s &un_apps/***** @/appl/prodappl/ad/11.5.0/patch/115/sql/adsetmmd.sql ENABLE
Spawned Process 30742
Successfully enabled Maintenance Mode.


After applying the patch, make a new appsutil.zip file:

[oracle@ebs1 5478710]$ $ADPERLPRG $AD_TOP/bin/admkappsutil.pl
Starting the generation of appsutil.zip
Log file located at /appl/prodappl/admin/log/MakeAppsUtil_12200852.log
output located at /appl/prodappl/admin/out/appsutil.zip
MakeAppsUtil completed successfully.

Copy appsutil.zip to the new 10g ORACLE_HOME once you have created it, and unzip it there with unzip -o.


Run AutoConfig on the db tier:

[oracle@ebs2 prod_ebs2]$ ./adautocfg.sh
Enter the APPS user password:
AutoConfig is configuring the Database environment...
AutoConfig will consider the custom templates if present.
Using ORACLE_HOME location : /ebs/proddb/9.2.0
Classpath : /ebs/proddb/9.2.0/jre/1.4.2/lib/rt.jar:/ebs/proddb/9.2.0/jdbc/lib/ojdbc14.jar:/ebs/proddb/9.2.0/appsutil/java/xmlparserv2.zip:/ebs/proddb/9.2.0/appsutil/java:/ebs/proddb/9.2.0/jlib/netcfg.jar
Using Context file : /ebs/proddb/9.2.0/appsutil/prod_ebs2.xml
Context Value Management will now update the Context file
Updating Context file...COMPLETED
Attempting upload of Context file and templates to database...COMPLETED
Updating rdbms version in Context file to db920
Updating rdbms type in Context file to 32 bits
Configuring templates from ORACLE_HOME ...
AutoConfig completed successfully.

The log file for this session is located at: /ebs/proddb/9.2.0/appsutil/log/prod_ebs2/12200014/adconfig.log

With 11i.AD.I.2, you have to manually regenerate your jar files using adadmin.

2. patch 4653225, 11.5.10 INTEROP PATCH FOR 10GR2
3. 10201_database_linux32.zip
Use runInstaller to install the 10gR2 software in its own ORACLE_HOME, /ebs/proddb/10.2.0.
4. 10201_companion_linux32.zip
Install the 10g products in the 10g ORACLE_HOME (second option in the install menu).
5. p4547817_10202_LINUX.zip


6. Before the database upgrade, run the tool utlu102i.sql in the old 9i database. This script generates an upgrade report showing what changes have to be made before you can upgrade.
SQL> @utlu102i.sql


Oracle Database 10.2 Upgrade Information Utility 12-20-2006 02:33:32
**********************************************************************
Database:
**********************************************************************
--> name:       PROD
--> version:    9.2.0.6.0
--> compatible: 9.2.0
--> blocksize:  8192
**********************************************************************
Logfiles: [make adjustments in the current environment]
**********************************************************************
--> The existing log files are adequate. No changes are required.
....
7. Gather statistics
8. Create the SYSAUX tablespace:

CREATE TABLESPACE SYSAUX DATAFILE '/ebs/proddata/sysaux01.dbf'
SIZE 500M AUTOEXTEND ON NEXT 10M MAXSIZE 2000M
NOLOGGING
ONLINE
PERMANENT
EXTENT MANAGEMENT LOCAL AUTOALLOCATE
BLOCKSIZE 8K
SEGMENT SPACE MANAGEMENT AUTO;

9. Copy the initprod.ora to the new ORACLE_HOME and adjust the parameters for 10gR2.

10. Set the following variables to the new 10g home: ORACLE_HOME, PATH, ORA_NLS10, LD_LIBRARY_PATH.
11. Start up the database in upgrade mode:
SQL> startup upgrade pfile=/ebs/proddb/10.2.0/dbs/initprod.ora
ORA-32006: SQL_TRACE initialization parameter has been deprecated
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1264892 bytes
Variable Size 411042564 bytes
Database Buffers 650117120 bytes
Redo Buffers 11317248 bytes
Database mounted.
Database opened.
SQL> shutdown abort;
ORACLE instance shut down.
There are still wrong parameters in the init.ora; shut down and correct the parameter file...
SQL> startup upgrade pfile=/ebs/proddb/10.2.0/dbs/initprod.ora
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1264892 bytes
Variable Size 411042564 bytes
Database Buffers 650117120 bytes
Redo Buffers 11317248 bytes
Database mounted.
Database opened.
SQL> SPOOL upgrade.log
SQL> @catupgrd.sql

During this script the following error occurs:

ERROR at line 1:
ORA-06553: PLS-213: package STANDARD not accessible

SQL> conn sys as sysdba
Enter password:
Connected.
SQL> SELECT * FROM DBA_OBJECTS WHERE OWNER = 'SYS'
  2  AND OBJECT_NAME = 'STANDARD';

The status seems to be invalid. The STANDARD package is needed to compile:

SQL> ALTER PACKAGE STANDARD COMPILE;

Errors still occurred.

Then comment out the following PL/SQL parameters in the init.ora:

#plsql_optimize_level = 2 #MP
#plsql_code_type = native #MP
#plsql_native_library_dir = /prod11i/plsql_nativelib
#plsql_native_library_subdir_count = 149

Restarted catupgrd.sql, and this time the error did not occur.

Now the upgrade runs into an ORA-00600:

ORA-00600: internal error code,
arguments: [kqludp2], [0x49A44E2C], [1], [], [], [], [], []

Don't forget to set the following parameter to 0...
aq_tm_processes = 0

Finally, after a few days with ORA-00600 errors and starting over again:
TIMESTAMP

--------------------------------------------------------------------------------
COMP_TIMESTAMP UPGRD_END 2006-12-27 17:33:11
1 row selected.
Oracle Database 10.2 Upgrade Status Utility 12-27-2006 17:33:12

Component Status Version HH:MM:SS
Oracle Database Server VALID 10.2.0.2.0 00:41:17
JServer JAVA Virtual Machine VALID 10.2.0.2.0 00:00:00
Oracle XDK VALID 10.2.0.2.0 00:00:00
Oracle Database Java Packages VALID 10.2.0.2.0 00:00:00
Oracle Text VALID 10.2.0.2.0 00:00:00
Oracle XML Database VALID 10.2.0.2.0 00:00:00
Oracle Real Application Clusters INVALID 10.2.0.2.0 00:00:02
Oracle Data Mining VALID 10.2.0.2.0 00:00:00
OLAP Analytic Workspace VALID 10.2.0.2.0 00:00:00
OLAP Catalog VALID 10.2.0.2.0 00:00:00
Oracle OLAP API VALID 10.2.0.2.0 00:00:00
Oracle interMedia VALID 10.2.0.2.0 00:00:00
Spatial VALID 10.2.0.2.0 00:05:28
Total Upgrade Time: 01:11:16
PL/SQL procedure successfully completed.

12. Shutdown the database
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL>
Do not use shutdown abort!
13. Compile remaining stored PL/SQL and JAVA code
SQL> startup restrict
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1264892 bytes
Variable Size 411042564 bytes
Database Buffers 650117120 bytes
Redo Buffers 11317248 bytes
Database mounted.
Database opened.
SQL>@utlrp.sql

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_BGN 2006-12-27 17:54:11

one hour later, still

SQL> SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);

COUNT(*)

----------
111314

invalid objects to go....
And already...


SQL> SELECT COUNT(*) FROM UTL_RECOMP_COMPILED;
COUNT(*)
----------
45517

objects compiled...

TIMESTAMP
--------------------------------------------------------------------------------
COMP_TIMESTAMP UTLRP_END 2006-12-28 09:30:58
1 row selected.


SQL> SELECT COUNT(*) FROM obj$ WHERE status IN (4, 5, 6);

COUNT(*)

----------
230

SQL> select count(*) from dba_objects
2 where status like 'INVALID';
COUNT(*)
----------
238

Still invalid objects...maybe compiling via adadmin will work.

14. run $APPL_TOP/admin/adgrants.sql
[oracle@ebs2 admin]$ sqlplus /nolog
SQL*Plus: Release 10.2.0.2.0 - Production on Thu Dec 28 13:18:50 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> @adgrants.sql applsys

15. create spfile from pfile
SQL> create spfile from pfile='/ebs/proddb/10.2.0/dbs/initprod.ora';
File created.
16. grant create procedure to ctxsys
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1264892 bytes
Variable Size 411042564 bytes
Database Buffers 650117120 bytes
Redo Buffers 11317248 bytes
Database mounted.
Database opened.
SQL> conn apps/apps
Connected.
SQL> @adctxprv.sql manager CTXSYS
Connecting to SYSTEM
Connected.
PL/SQL procedure successfully completed.
Commit complete.
Disconnected from Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options

17. The next step should be 'run autoconfig'... but where is the context file?
First create the context file:
[oracle@ebs2 bin]$ perl adbldxml.pl tier=db appsuser=apps appspass=apps
Starting context file generation for db tier..

Using JVM from /ebs/proddb/10.2.0/jre/1.4.2/bin/java to execute java programs..
The log file for this adbldxml session is located at:/ebs/proddb/10.2.0/appsutil/log/adbldxml_12281400.log
Enter the value for Display Variable: >ebs2:0.0
The context file has been created at:/ebs/proddb/10.2.0/appsutil/prod_ebs2.xml

Now run autoconfig
[oracle@ebs2 bin]$ ./adconfig.sh

Enter the full path to the Context file: /ebs/proddb/10.2.0/appsutil/prod_ebs2.xml
Enter the APPS user password:
AutoConfig is configuring the Database environment...
AutoConfig will consider the custom templates if present.

Using ORACLE_HOME location : /ebs/proddb/10.2.0
Classpath : /ebs/proddb/10.2.0/jre/1.4.2/lib/rt.jar:/ebs/proddb/10.2.0/jdbc/lib/ojdbc14.jar:/ebs/proddb/10.2.0/appsutil/java/xmlparserv2.zip:/ebs/proddb/10.2.0/appsutil/java:/ebs/proddb/10.2.0/jlib/netcfg.jar:/ebs/proddb/10.2.0/jlib/ldapjclnt10.jar
Using Context file : /ebs/proddb/10.2.0/appsutil/prod_ebs2.xml
Context Value Management will now update the Context file
Updating Context file...COMPLETED
Attempting upload of Context file and templates to database...COMPLETED
Updating rdbms version in Context file to db102
Updating rdbms type in Context file to 32 bits
Configuring templates from ORACLE_HOME ...
AutoConfig completed successfully.
The log file for this session is located at: /ebs/proddb/10.2.0/appsutil/log/prod_ebs2/12281412/adconfig.log

18. Gather sys statistics
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> shutdown immediate;
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup restrict;
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1264892 bytes
Variable Size 415236868 bytes
Database Buffers 645922816 bytes
Redo Buffers 11317248 bytes
Database mounted.
Database opened.
SQL> @/appl/prodappl/admin/adstats.sql
Connected.
-----------------------------------------------------
adstats.sql started at 2006-12-28 14:18:20
---
Checking for the DB version and collecting statistics ...
PL/SQL procedure successfully completed.

---------------------------------------------------

adstats.sql ended at 2006-12-28 15:10:41
---
Commit complete.

19. Re-create grants and synonyms using adadmin
Maintain Applications Database Entities

---------------------------------------------------
1. Validate APPS schema
2. Re-create grants and synonyms for APPS schema

Following error occurs...
declare
*
ERROR at line 1:
ORA-04063: package body "SYSTEM.AD_DDL" has errors
ORA-06508: PL/SQL: could not find program unit being called: "SYSTEM.AD_DDL"
ORA-06512: at line 19

This seems to be a known problem according to Metalink.

Note 387745.1 brings the solution:
Run utlirp.sql and then utlrp.sql again:
[oracle@ebs2 admin]$ sqlplus /nolog
SQL*Plus: Release 10.2.0.2.0 - Production on Thu Dec 28 15:39:13 2006
Copyright (c) 1982, 2005, Oracle. All Rights Reserved.
SQL> conn sys as sysdba
Enter password:
Connected.
SQL> shutdown
Database closed.
Database dismounted.
ORACLE instance shut down.
SQL> startup upgrade
ORACLE instance started.
Total System Global Area 1073741824 bytes
Fixed Size 1264892 bytes
Variable Size 415236868 bytes
Database Buffers 645922816 bytes
Redo Buffers 11317248 bytes
Database mounted.
Database opened.
SQL> @utlirp.sql
SQL> shutdown
SQL> startup
SQL> @utlrp.sql

Problem solved...
Again run adadmin

20. Startup services.

0 Comments 0 References Permalink

Order of Startup Shutdown
————————————–
As in Oracle Apps 11i order for startup is
1) Start Database Tier Services
–Start Database Listener
–Start Database
Then
2) Start Application/Middle Tier Services
– adstrtal.sh

Order for shutdown in Oracle Apps R12 is
1) Stop Application/Middle Tier Services
– adstpall.sh
Then
2) Stop Database Tier Services
–Stop Database
–Stop Database Listener
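
The two sequences above can be sketched as a pair of shell functions. This is a dry run: each command is echoed rather than executed, and the listener name and apps credentials are illustrative placeholders, not values from a real install. Drop the "echo" prefixes (and source the proper environment files) to run the real scripts.

```shell
# Dry-run sketch of the R12 start/stop order described above.
start_all() {
  echo "addlnctl.sh start PROD          # 1a. start database listener"
  echo "addbctl.sh start                # 1b. start database"
  echo "adstrtal.sh apps/<apps_pw>      # 2.  start application tier"
}
stop_all() {
  echo "adstpall.sh apps/<apps_pw>      # 1.  stop application tier"
  echo "addbctl.sh stop immediate       # 2a. stop database"
  echo "addlnctl.sh stop PROD           # 2b. stop database listener"
}
start_all
stop_all
```

The key point the sketch encodes is the mirrored ordering: the database tier comes up first and goes down last, so the application tier never runs without its database.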

Database Tier Scripts in R12
————————————–
For the database tier you need to start the database and the database listener. Scripts are located in Database_Install_Dir/db/tech_st/10.2.0/appsutil/scripts/$CONTEXT_NAME
- For the Database
Use script addbctl.sh
- For the Database Listener
Use script addlnctl.sh

or alternatively you can use
lsnrctl start|stop listener_name (For Database Listener)
sqlplus "/as sysdba"
SQL> startup / shutdown immediate

Middle/Application Tier Scripts in R12
————————————————-

Scripts for Application Tier services in R12 are located in “Install_base/inst/apps/$CONTEXT_NAME/admin/scripts”
where CONTEXT_NAME is of format SID_HOSTNAME

i) adstrtal.sh
Master script to start all components/services of middle tier or application tier. This script will use Service Control API to start all services which are enabled after checking them in context file (SID_HOSTNAME.xml or CONTEXT_NAME.xml)

ii) adstpall.sh
Master script to stop all components/services of middle tier or application tier.

iii) adalnctl.sh
Script to start / stop the apps listener (FNDFS and FNDSM). This listener runs from the 10.1.2 ORACLE_HOME (i.e. Forms & Reports Home).
The listener.ora file will be in the $INST_TOP/apps/$CONTEXT_NAME/ora/10.1.2/network/admin directory

iv) adapcctl.sh
Script to start/stop the Web Server or Oracle HTTP Server. This script uses opmn (Oracle Process Manager and Notification Server) with syntax similar to opmnctl [start|stop]proc ohs,
like opmnctl stopproc ohs.

v) adcmctl.sh
Script to start / stop concurrent manager (This script in turn calls startmgr.sh )

vi) adformsctl.sh
Script to start / stop Forms OC4J from 10.1.3 Oracle_Home. This script will also use opmnctl to start/stop Forms OC4J like
opmnctl stopproc type=oc4j instancename=forms

vii) adformsrvctl.sh
This script is used only if you wish to start Forms in socket mode. The default Forms connect method in R12 is servlet.
If started, this will start the frmsrv executable from the 10.1.2 Oracle_Home in Apps R12.

viii) adoacorectl.sh
This script will start/stop the oacore OC4J in the 10.1.3 Oracle_Home. This script also uses opmnctl (similar to adapcctl & adformsctl) to start the oacore instance of OC4J, like
opmnctl startproc type=oc4j instancename=oacore

ix) adoafmctl.sh
This script will start/stop the oafm OC4J in the 10.1.3 Oracle_Home. This script also uses opmnctl (similar to above) to start the oafm instance of OC4J, like
opmnctl startproc type=oc4j instancename=oafm

x) adopmnctl.sh
This script will start/stop opmn service in 10.1.3 Oracle_Home. opmn will control all services in 10.1.3 Oracle_Home like web server or various oc4j instances. If any services are stopped abnormally opmn will/should start them automatically.

xi) jtffmctl.sh
This script is used to start/stop the One-to-One Fulfillment server.

xii) mwactl.sh
To start / stop the MWA telnet server, where MWA is Mobile Applications.

Log File Location for Startup Shutdown Services in R12
———————————————————————-
Log files for startup/shutdown scripts for application/mid tier in R12 are in $INST_TOP/apps/$CONTEXT_NAME/logs/appl/admin/log
(adalnctl.txt, adapcctl.txt, adcmctl.txt, adformsctl.txt, adoacorectl.txt, adoafmctl.txt, adopmnctl.txt, adstrtal.log, jtffmctl.txt )



0 Comments 0 References Permalink

FNDCPASS is an EBS tool to change passwords of database schemas within Oracle EBS. For example, you can change the APPS password using FNDCPASS, but also that of any other schema in the EBS database. FNDCPASS can also be used to change the password of an application user (like SYSADMIN).

 

To change the APPS password use…
FNDCPASS apps/*** 0 Y system/***** SYSTEM APPLSYS [new_password]
(the apps password is also mentioned in some config files, so you have to change those files manually !!!)

 

To change any other schema…
FNDCPASS apps/**** 0 Y system/***** ORACLE GL [new_password]

 

To change the password of a application user
FNDCPASS apps/*** 0 Y system/****** USER SYSADMIN [new_password]

 

When changing the password of all schemas in the database, you have a lot of FNDCPASS runs to do... there are almost 200 schemas in the EBS database that need to be changed. By default the password is the schema name, so gl/gl and ap/ap...
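
Changing them one at a time means scripting a loop over the schema list. A minimal sketch, as a dry run: the loop only echoes the FNDCPASS invocations, and the schema names, credentials, and password scheme shown are illustrative, not a recommendation.

```shell
# Emit one FNDCPASS command per schema; review the output, then remove
# the quoting/echo layer only once you are sure of the passwords.
gen_fndcpass() {
  for schema in GL AP AR PO ONT; do
    echo "FNDCPASS apps/apps 0 Y system/manager ORACLE $schema NEW_${schema}_PW"
  done
}
gen_fndcpass
```

With ~200 schemas this loop is exactly the tedium that the ALLORACLE mode removes.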

 

When installing patch 4676589 (11i.ATG_PF.H Rollup 4) a new feature is added to FNDCPASS. Now you can use the ALLORACLE functionality to change all the schema passwords in one FNDCPASS.

 

To use the new FNDCPASS feature apply the following patches:
1. install AD: Patch 11i.AD.I.4 (patch 4712852)
2. install patch 5452096
Purging timing information for prior sessions.
sqlplus -s APPS/***** @/appl/prodappl/ad/11.5.0/admin/sql/adtpurge.sql 10 1000
Spawned Process 17504
Done purging timing information for prior sessions.
AutoPatch is complete.
AutoPatch may have written informational messages to the file /appl/prodappl/admin/prod/log/u6451215.lgi
Errors and warnings are listed in the log file /appl/prodappl/admin/prod/log/u6451215.log
and in other log files in the same directory.
3. run the Technology Stack Validation Utility
[oracle@ebs11i bin]$ ./txkprepatchcheck.pl -script=ValidateRollup -outfile=$APPLTMP/txkValidateRollup.html -appspass=apps
*** ALL THE FOLLOWING FILES ARE REQUIRED FOR RESOLVING RUNTIME ERRORS
***STDOUT /appl/prodcomn/rgf/ebs11i/TXK/txkValidateRollup_wed_Sep_19_stdout.log
Reportfile /appl/prodcomn/temp/txkValidateRollup.html generated successfully.
4. run autoconfig
5. apply patch 4676589 (11i.ATG_PF.H Rollup 4, Applications Technology Family)
6. After the install
7. apply patch 3865683 (AD: Release 11.5.10 Products Name Patch)
8. apply patch 4583125 (Oracle XML Parser for Java) see note 271148.1

 

Verify if the upgrade has been successful..
cd $JAVA_TOP
[oracle@ebs11i java]$ unzip -l appsborg.zip | grep 9.0.4
0 04-19-03 02:10 .xdkjava_version_9.0.4.0.0_production
[oracle@instancename ebs11i java]$
if there is an xdkjava_version_9.0.4.0.0_production entry, then XML parser is installed.
9. run autoconfig

 

Now try the new FNDCPASS function..

 

[oracle@instancename ebs11i]$ FNDCPASS apps/apps 0 Y system/manager ALLORACLE WELCOME

Log filename : L4754382.log
Report filename : O2726002.out
[oracle@instancename ebs11i]$
[oracle@instancename ebs11i]$ sqlplus apps/apps
SQL*Plus: Release 8.0.6.0.0 - Production on Wed Sep 19 13:49:39 2007
(c) Copyright 1999 Oracle Corporation. All rights reserved.
Connected to:
Oracle Database 10g Enterprise Edition Release 10.2.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> conn gl/welcome
Connected.
SQL> conn ap/welcome
Connected.
SQL>

0 Comments 0 References Permalink

Oracle E-Business Suite Release 12 Installation Guide: http://download.oracle.com/docs/cd/B34956_01/current/acrobat/120oaig.pdf

Oracle E-Business Suite Release 12 Maintenance Procedures: http://download.oracle.com/docs/cd/B34956_01/current/acrobat/r12adproc.pdf

Oracle E-Business Suite Release 12 Maintenance Utilities: http://download.oracle.com/docs/cd/B34956_01/current/acrobat/r12adutil.pdf

Oracle E-Business Suite Release 12 Patching Procedures: http://download.oracle.com/docs/cd/B34956_01/current/acrobat/oa_patching_r12.pdf

Permalink

How to find Apps Version (11i/R12/12i) >>  Connect to database as user apps
SQL> select release_name from apps.fnd_product_groups;
Output like 12.0.4 or 11.5.10.2

 

Web Server/Apache or Application Server in Apps 11i/R12 >> Log in as the Application user, set the environment, and run the command below: $IAS_ORACLE_HOME/Apache/Apache/bin/httpd -version
Output for 11i should be like
Server version: Oracle HTTP Server Powered by Apache/1.3.19 (Unix)
Server built:   Jan 26 2005 11:06:44 (iAS 1.0.2.2.2 rollup 5)

Output for R12 should be like
Server version: Oracle-Application-Server-10g/10.1.3.0.0Oracle-HTTP-Server
Server built:   Dec  4 2006 14:44:38

 

Forms & Report version (aka Developer 6i) in 11i >> Log in as the Application user, set the environment, and run the command below
$ORACLE_HOME/bin/f60run | grep Version | grep Forms

output like
Forms 6.0 (Forms Runtime) Version 6.0.8.25.2 (Production)
Check the fourth number in the version; 25 means Forms 6i patchset 16 (25-9)
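
The patchset arithmetic can be checked in the shell. The banner string below is the sample f60run output shown above; only the fourth dotted field of the version matters.

```shell
ver="Forms 6.0 (Forms Runtime) Version 6.0.8.25.2 (Production)"
# Pull the full dotted version (6th whitespace field), then its fourth
# dot-separated component, then subtract 9 to get the patchset number.
full=$(echo "$ver" | awk '{print $6}')        # 6.0.8.25.2
fourth=$(echo "$full" | cut -d. -f4)          # 25
echo "Forms 6i patchset $((fourth - 9))"      # prints: Forms 6i patchset 16
```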

 

Forms & Report version in R12/12i >> Log in as the Application user, set the environment, and run the command below
$ORACLE_HOME/bin/rwrun | grep Release

Output should be like


Report Builder: Release 10.1.2.2.0
You can safely ignore warnings

 

Oracle Jinitiator in 11i/R12/12i >>

Log in as the Application user, set the environment, and run the command below
grep jinit_ver_comma $CONTEXT_FILE

(Default is the Java Plug-in for R12/12i)

Oracle Java Plug-in in 11i/R12/12i >>

Log in as the Application user, set the environment, and run the command below
grep plugin $CONTEXT_FILE

File Version on file system >>
adident Header <filename>
or
strings <file_name> | grep Header

Here adident is AD Utility (Oracle Apps) and strings is Unix utility
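
What the strings-plus-grep pipeline is fishing for is the embedded $Header tag. A self-contained check against a throwaway file (the path and the tag contents here are made up for the demo):

```shell
# Build a fake "binary" containing an embedded $Header tag, the way an
# Apps .pll or .sql file does, then extract it the same way as above.
demo=/tmp/demo_header.pll
printf 'some binary noise\n$Header demo.pld 115.1 2004/04/01 appldev ship $\n' > "$demo"
strings "$demo" | grep Header     # prints the $Header line
rm -f "$demo"
```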

 

Version of pld file >>
*.pld files are the source code of *.pll files, which are in turn the source of *.plx. The *.pll files are in $AU_TOP/resource, and to find a file's version check

adident Header $AU_TOP/resource/<filename>.pll
IGSAU012.pll:
$Header IGSAU012.pld 115.1.115100.1 2004/04/01 05:40:18 appldev ship $

or
strings $AU_TOP/resource/<filename>.pll | grep -i header

FDRCSID('$Header: IGSAU012.pld 115.1.115100.1 2004/04/01 05:40:18 appldev ship $');

 

OA Framework Version >> http://hostname.domainName:port/OA_HTML/OAInfo.jsp (Only for 11i); or log in as the Application user, set the environment, and run the commands below

adident Header $FND_TOP/html/OA.jsp
adident Header $OA_HTML/OA.jsp

output for both should look like
$Header OA.jsp 115.60 2006/03/31 00:47:28 atgops1 noship $

120.21 means OA Framework Version (coming soon..)
115.60 means OA Framework Version (coming soon..)
115.56 means OA Framework Version (coming soon..)
115.36 means OA Framework Version 5.7
115.27 means OA Framework Version 5.6E
115.26 means OA Framework Version 5.5.2E

 

Discoverer Version for 11i (3i or 4i) >> Log in as the Application user, set the environment, and run the command below
$ORACLE_HOME/bin/disc4ws | grep -i Version

 

Discoverer Version for 11i or R12 (10g AS) >> Check under the Application Server section, as 10g AS Discoverer is standalone

 

Workflow Version with Apps >> Connect to Database as apps user
SQL> select TEXT Version from WF_RESOURCES where NAME = 'WF_VERSION';
Output like 2.6.0 means workflow version 2.6.0

 

Oracle Single Sign On >>
Connect to database which holds SSO repository
SQL>select version from orasso.wwc_version$;

 

Oracle Internet Directory >>

There are two components in OID (software/binaries & schema/database)

>>> To find software/binary version

$ORACLE_HOME/bin/oidldapd -version
output should look like

oidldapd: Release 10.1.4.0.1 - Production on thu sep 11 11:08:12 2008
Copyright (c) 1982, 2006 Oracle.  All rights reserved.

 

>>> To find Schema Version/ database use

ldapsearch -h <hostname> -p <port> -D "cn=orcladmin" -w "<password>" -b "" \
-s base “objectclass=*” orcldirectoryversion

and output should be like

version: 1
dn:
orcldirectoryversion: OID 10.1.4.0.1

or run following query in database
SQL> select attrval from ods.ds_attrstore where entryid = 1 and attrname = 'orcldirectoryversion';

Output should be like OID 10.1.4.0.1

 

Application Server >> Oracle Application Server (Prior to Oracle WebLogic Server)
If application server is registered in database (Portal, Discoverer) check from database
SQL> select * from ias_versions;
or
SQL>select * from INTERNET_APPSERVER_REGISTRY.SCHEMA_VERSIONS;

 

OC4J (Oracle Containers for J2EE)
Set ORACLE_HOME
cd $ORACLE_HOME/j2ee/home
java -jar oc4j.jar -version

 

Oracle Portal >>
SQL> select version from portal.wwc_version$;

 

Database Component

I) Oracle Database

To find database version
SQL> select * from v$version;
or
For all component versions in the database:

$ORACLE_HOME/OPatch/opatch lsinventory -detail

 

Oracle Enterprise Manager >>
Metalink Note 605398.1
How to find the version of the main EM components

Unix Operating System

Solaris -> cat /etc/release
Red Hat Linux -> cat /etc/redhat-release



0 Comments 0 References Permalink

A customer recently asked our consulting services about using LDAP with PeopleSoft.

There are several scenarios where LDAP may be used:

 

External Authentication

In this instance the customer chooses an attribute in the user object which will contain the PeopleSoft user ID. The login process is configured to access the LDAP server using the user credentials entered in the challenge screen. Sign-on PeopleCode connects to the LDAP server, retrieves the user object which matches the value entered by the user as the "UserID", extracts the DN from the user object and attempts to BIND the user object using the entered password. If this sequence is successful, Signon PeopleCode extracts the value in the attribute which has been configured as storing the PeopleSoft user ID, usually "uid" and makes a call to SetAuthenticationResult to cache the user profile and log the user into a PeopleSoft session.

 

Dynamic Role Creation

This is an extension to the authentication functionality above. If the user successfully authenticates against LDAP but does not have an entry in PSOPRDEFN and a default Role has been configured, the entry will be created in PSOPRDEFN and the user will be logged into that default Role in PeopleSoft. This default Role is usually the Self Service Role, so customer PeopleSoft administrators do not have to create an account for every employee, for instance.

 

With Dynamic Roles, a user account can be created or modified using attribute values in the user object, queries against the PeopleSoft instance or other custom logic.

 

PeopleSoft Directory Interface (PDI)

This is a licensable option with HCM and developed/supported as an Enterprise Component.
With this option, the LDAP schema is modified with PeopleSoft specific object classes and attributes to create a structure in LDAP which reflects the organizational structure defined in HCM. Messages are created from Workforce Management events to modify the LDAP structure to reflect changes in the workforce.

 

LDAP authentication and Role management are described in the Security Administration PeopleBook, http://www.oracle.com/applications/peoplesoft/tools_tech/ent/ptools/peoplebook-security-administration.pdf, which is part of the PeopleTools suite. PDI is described in the Enterprise Components PeopleBook, http://download.oracle.com/docs/cd/B40039_02/psft/acrobat/hrcs9ecq-b1206.pdf, which is part of the HCM suite.

 

PeopleSoft supports LDAP v3, and delivers 4 pre-built configurations:
- Oracle Internet Directory
- Sun Java System Directory Server
- Novell eDirectory
- Microsoft Active Directory
There is also a custom option to allow any other configuration to be defined.

0 Comments 0 References Permalink

Virtualization in Technology

Posted by Community Admin Jan 7, 2009
Even with the incredible traction virtualization is making in the market it’s still easy to get confused as to what virtualization means in every case. Is it server virtualization provided by hypervisors like Xen or ESX? Is it storage virtualization? Network virtualization? Broadly, it's all of the above.

 

One thing that's clear is that the market for virtualization is growing and maturing, and one advantage of this evolution is that the industry analysts are starting to categorize and differentiate between virtualization technologies and virtualization vendors. This helps us, sure, but it also helps customers understand where they are in their own deployments vs. where they can go in the future.

 

Recently, the pundits have begun making distinctions between the different types or phases of virtualization. For example, IDC has identified two current levels of virtualization; Virtualization 1.0 and 2.0+. Virtualization 1.0 is defined as server virtualization – using hypervisors to partition resources – while Virtualization 2.0+ is defined as the next generation of virtualization technology – focused on virtualizing data center infrastructure beyond just the server.

 

Virtualization 1.0 is targeted at reducing capex through consolidation, something that hypervisors do extremely well. Virtualization 2.0+ focuses on reducing opex by adding capabilities around infrastructure virtualization, management tools to simplify management, and higher value tools like Disaster Recovery and hardware failover. Ultimately, Virtualization 2.0+ transforms data center infrastructure into flexible, changeable assets that can be deployed, moved and managed seamlessly.

 

The categorization that's happening now is important. It allows customers to wade through the confusing virtualization landscape and choose products that they can actually benefit from and complement what they already have installed. We know that virtual machine sprawl is becoming real and with this sprawl comes a new set of challenges – managing the infrastructure that connects the sprawling VMs.

 

Virtualization relative to IT infrastructures has been a huge success. We are effectively separating the software from the hardware, which can provide a multitude of benefits. One of the early benefits has been consolidation. As IT had evolved to a “one application, one server” mentality, server virtualization offers a way to radically consolidate hardware resources. Virtual networks let us share resources without building new infrastructures.

 

It has been done on mainframes for decades. The point is that we can now do it on hardware that is affordable and scalable, and where little is required in terms of fault tolerance. Virtualization is deemed to go well beyond its beginnings and become a key underpinning within the Flat IT world. Utilization will be a part of it but, more importantly, virtualization allows us to create a completely fluid and dynamic IT environment. This fluidity is the lynchpin in terms of how we really build IT Utilities.

 

If you have ever configured a server, think about the time it takes to change a server from a Web server to a database server, to an Email server. Even with the best tools, it is not easy and it is definitely not dynamic. With virtualization the fluidity of change will simply move to a completely new level, allowing IT resources to be applied (leveraging other technologies like Grid) almost instantaneously to meet changing needs.

 

Most IT managers, I believe, would just be happy not having to plan downtime when they want to migrate a server or storage system. Yes, in ten years we might have a fully autonomous, dynamic, self-managing IT environment, but there is huge value in just the first basic step: separating the software from the hardware.

 

Key changes:
Virtual Infrastructures - Virtualization has been a hot trend for some time now and the technology (virtualization) will exist wherever there is Hardware. Virtualization is important for utilization but also ultimately critical for building a truly dynamic IT environment. Virtualization will simply free IT from any specific coupling to HW.

 

Information-Centric IT - In the existing IT environments, we've moved from being Server-centric to OS-centric to Application-centric. In the next generation, we will become more network-centric but fundamentally start building Information Technology actually around the Information. This is powerful. It means that Information is no longer captive to a single application but can be leveraged across any number of applications.

 

Services Oriented Architecture - You may think there is nothing new here, but this is where major new changes are coming. We've always considered application services as a part of this construct, but now all interaction with data/information will occur at the SOA layer. Applications and users will receive and store their information by interacting with Information Services. These services will provide the protection, archiving, compliance, security, and other capabilities as a service. A single application will no longer “own” data. Information will exist as an independent element that can be managed independently and used by any authorized application. Combined with delivering resources, this creates a Services Oriented Infrastructure.

 

Composite Apps built without code – Within the services framework complete applications are simply connected with workflow (BPM) tools just like working with Visio. Composite applications are built by coupling information, security, application, and other services together in a prescribed way.

 

Model-Based Management Provides Orchestration of Resources and Services – To pull all of these capabilities together we need management. Traditional framework-centric management is just not going to cut it, however. Today’s management technologies simply can’t handle the virtual, dynamic, and complex environments that will be constructed. This is where model-based management comes in; it will transform how we think about management. Simply, model-based management will provide the orchestration necessary to deliver highly reliable and scalable systems across these complex environments.

 

Virtual Appliances as preferred delivery model for Application Services – As all interaction and communications between application services and information services will operate at the SOA layer, many of the complex, driver-centric functions that exist within today’s operating systems will simply no longer be utilized. Base operating environments will exist principally to provide a compute environment for applications. Hence, we will start to see more applications embed a base OS and other base capabilities directly with their offerings. This will simplify integration, test, security, delivery and support. We are seeing major examples of this today – for example with Oracle’s recent embedded Linux announcement.
0 Comments 0 References Permalink

You might have noticed the option to login to Jaggy Community with either username and password or OpenID.  What is the benefit and significance of OpenID?

 

OpenID is a free and easy way to use a single digital identity across the Internet.  With one OpenID you can login to all your favorite websites and forget about online paperwork.  It eliminates the need for multiple usernames across different websites simplifying your online experience.

For geeks, OpenID is an open, decentralized, free framework for user-centric digital identity. OpenID takes advantage of already existing Internet technology (URI, HTTP, SSL, Diffie-Hellman) and realizes that people are already creating identities for themselves whether it be at their blog, photostream, profile page, etc. With OpenID you can easily transform one of these existing URIs into an account which can be used at sites which support OpenID logins.

OpenID is still in the adoption phase and is becoming more and more popular, as large organizations like AOL, Microsoft, Sun, Novell, etc. begin to accept and provide OpenIDs. Today it is estimated that there are over 160 million OpenID-enabled URIs with nearly ten thousand sites supporting OpenID logins.

 

Who owns or controls OpenID?

OpenID has arisen from the open source community to solve the problems that could not be easily solved by other existing technologies. OpenID is a lightweight method of identifying individuals that uses the same technology framework that is used to identify websites. As such, OpenID is not owned by anyone, nor should it be. Today, anyone can choose to be an OpenID user or an OpenID Provider for free without having to register or be approved by any organization.

 

OpenID - key to unlocking the true potential of E2.0

OpenID can be a small (but key) part of the identity services story. The main problem that OpenID tries to solve is one that most people who use the internet extensively face - that of too many usernames and passwords. Instead of having to remember a username/password combo for each website they interact with (Google, Yahoo, Flickr, blogs, etc), you can set up and use a single OpenID account at all those websites instead. OpenID also hopes to provide a number of technological advantages to the whole authentication experience by figuring out ways to prevent phishing and pharming attacks.

 

So OpenID's main aim is at providing a secure, scalable solution for the authentication service in the identity stack. To a lesser extent, it also hopes to help the identity provider and authorization services by becoming a transport container for identity claims that drive these services.  OpenID-enabling existing applications for an external audience is already a trivial exercise. It's a simple API, and plugins or toolkits are available for most programming environments. I think the much bigger deal is looking at OpenID from the opposite perspective - using enterprise security infrastructure to support OpenID authentication.

 

Expecting Enterprise 2.0 success by simply adopting social networking features of Web 2.0 just seems a little naive. For a start, it implies and requires phenomenal change in the social and organizational fabric of a company to get off the ground, and there is no guarantee the benefits will be worth the pain of change. In many organizations it may just be too much, too soon, and fail completely.

Imagine for a moment the ideal world. I would have a corporate identity that works transparently within the enterprise and also for useful external services. And I could keep this quite separate from my personal identity. The question is how realistically this can be achieved? It will take Identity Management, Web Application providers, and Enterprise Software vendors to support third party OpenID credentials. This will allow extending corporate identities beyond the boundaries of the organization in a safe and controlled manner.

This will likely be a slow process considering the delayed adoption of the previous contender SAML (Security Assertion Markup Language) - an XML-based standard for exchanging authentication and authorization data between security domains. Part of the delayed adoption is due to proliferation of non-interoperable proprietary technologies. In addition, there are enormous economic incentives for companies that run social networks to not let users of other networks access their services. Shareholder value is often a function of how many users they have, and how hard it is to switch. The harder it is to switch, the more money each user is worth.

Comparing Enterprise Identity with Open Web Identity

And while OpenID itself seems to have the upper hand in terms of market share and competent execution, it’s still too early to declare a winner in the Web identity sweepstakes. However, there’s no reason that enterprises can’t support all the digital identity and open social graph initiatives they find rewarding today, creating an open, successful two-way relationship with the Web and its countless offerings.

 

It’s just another example of how opening up and giving up control on the network can create gains larger than what you relinquish. For it’s clear that while there will be issues with open Web identity, particularly around phishing and other exploits, the advantage of having a single, simple, straightforward network identity for workers wherever they go could be an enormous win for forward-thinking enterprises.

0 Comments 0 References Permalink
WebDAV is a protocol extension to HTTP 1.1 that supports distributed authoring and versioning. With WebDAV, the Internet becomes a transparent read and write medium, where content can be checked out, edited, and checked in to a URL address. mod_dav is an implementation of the WebDAV specification.
The term OraDAV refers to the capabilities available through the mod_oradav module. mod_oradav is the Oracle module that is an extended implementation of mod_dav, and is integrated with the Oracle HTTP Server. mod_oradav can read and write not only to local files, but also to an Oracle Database. The Oracle Database must have an OraDAV driver installed.
Similar to the portal DAD configuration file, WebDAV has its own configuration file (ORACLE_HOME/Apache/oradav/conf/oradav.conf) that contains the OraDAV parameters, which start with DAV and DAVParam. These parameters are specified within a "Location" directive. The oradav.conf file is included in the httpd.conf file via an include statement.
By default, the OracleAS Portal DAV URL is:
http://hostname:portno/dav_portal/portal/
For example:
http://mysite.oracle.com:7777/dav_portal/portal
The dav_portal part of the URL is the default name of a virtual directory used to differentiate between portal access through a WebDAV client and portal access that uses the pls virtual directory. portal is the DAD of the portal installation.
Due to the way some WebDAV clients behave, users might experience authentication requests multiple times. To avoid this, the portal administrator can enable the cookie option by adding the following line to the oradav.conf file:
DAVParam ORACookieMaxAge <seconds>
where seconds is the amount of time in seconds before the cookie expires.
For example, a value of 28800 is 8 hours and means that once a user has logged on through a WebDAV client, the user will not be prompted for a user name and password again until 8 hours have passed.
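
Putting the pieces together, a hypothetical Location block in oradav.conf with the cookie option set to 8 hours might look like the sketch below. The path and the "DAV Oracle" line follow the defaults described above; any other DAVParam lines from the shipped oradav.conf are elided here and should be left as your install created them.

```
<Location /dav_portal/portal>
  DAV Oracle
  # ... existing DAVParam lines from the shipped oradav.conf stay as-is ...
  DAVParam ORACookieMaxAge 28800
</Location>
```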
0 Comments 0 References Permalink

Instructions are for TSM on AIX 5L (5.3 ML5)

 

1. cd to /etc/security/adsm
2. Rename TSM.PWD to TSM.PWD.OLD
3. Login to the tsm server and from tsm command prompt change the password for the node as follows:

update node <nodename> <new_password>

4. Go back to the node and cd to /usr/tivoli/tsm/client/ba/bin

vi dsm.sys

Immediately below the line "SErvername TSM", create another line "nodename <name of the node>". Your dsm.sys file should look like this:

SErvername  TSM
nodename <nodename>
    COMMMethod         TCPip
    TCPPort            1500
    TCPServeraddress   10.113.2.123
passwordaccess generate

5. Save dsm.sys file and exit
6. Type dsmc and hit return
7. You should see prompt like this one:
Node Name: <NODENAME>
Please enter your user id <NODENAME>:

8. Hit enter and enter the password that you have specified for the node on TSM server.

Now you should be in the tsm prompt
9. Exit and go back to /etc/security/adsm confirm that the new password file is created.



0 Comments 0 References Permalink

Outgoing mail delivery stops working on an Exchange Server 2007 Hub Transport server after you install Forefront Security for Exchange.

After you install and configure Microsoft Forefront Security for Exchange on a Microsoft Exchange Server 2007-based computer that is running the Hub Transport role, you experience the following symptoms:


• Exchange 2007 accepts and delivers incoming e-mail messages as expected. However, Exchange 2007 no longer sends outgoing e-mail messages. Outgoing messages remain in the submission queue.
• The following information is logged in the Forefront Security for Exchange ProgramLog.txt file:

"ERROR: Unable to retrieve internet monitor interface."
"ERROR: SybLicense: Failed to create MSXML instance: -2147221008"
"ERROR: LICENSING: Invalid initialization parameters!"
"ERROR: CoCreateInstance failed in GetLists (0x800401F0)"

To resolve this problem, follow these steps:

 

Step 1: Assign the appropriate DCOM permissions to the SELF account

1. On the Exchange 2007-based server that is running the Hub Transport role, click Start, click Run, type dcomcnfg, and then click OK.
2. Expand and then click Component Services.
3. Under Component Services, expand Computers, right-click My Computer, and then click Properties.
4. Click the COM Security tab, and then click Edit Default under Access Permissions.
5. If SELF does not appear in the Group or user names list, click Add, type SELF, click Check Names, and then click OK.
6. Click SELF, and then click to select the following check boxes in the Allow column:
• Local Access
• Remote Access

7. Click OK two times. Then restart the Exchange-related services and the Forefront Security for Exchange-related services.

Step 2: Configure the logon account for the Microsoft Exchange Transport service

1. On the Exchange 2007-based server that is running the Hub Transport role, click Start, click Run, type services.msc, and then click OK.
2. In the list of services, right-click Microsoft Exchange Transport, and then click Properties.
3. Click the Log On tab, and then click This account.
4. Click Browse, type Network Service, click Check Names, and then click OK.

Note: Microsoft Windows automatically generates a password for the Network Service account. Therefore, you do not have to specify a password for this account.
5. Click OK. Then restart the Exchange 2007-related services and the Forefront Security for Exchange-related services.
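Step 2 can also be done from an elevated command prompt instead of services.msc. A hypothetical sketch (Windows cmd, run as Administrator): MSExchangeTransport is the service name behind "Microsoft Exchange Transport", and sc requires a space after obj=. These are one-shot service-configuration commands:

```
rem Set the Microsoft Exchange Transport service to log on as Network Service.
sc config MSExchangeTransport obj= "NT AUTHORITY\NetworkService"

rem Restart the service so the new logon account takes effect.
net stop MSExchangeTransport
net start MSExchangeTransport
```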




You can change or reset the ias_admin password as follows:

1. Using the Enterprise Manager (Application Server Control) Web site
–Log in to the instance home page
–Click Preferences (top right)
–Click “Change Password” in the left menu
–Enter the current password and the new password

Bounce iasconsole, e.g.

emctl stop iasconsole

emctl start iasconsole


2. Using Command line tool

cd $ORACLE_HOME/bin
emctl set password <old_password> <new_password>
For example:
emctl set password welcome1 welcome2
(Here welcome1 is the current ias_admin password and welcome2 is the new password you wish to set.)

Bounce iasconsole.

If you don’t know the current ias_admin password, change it directly in the configuration file:

 

3. Change ias_admin password directly in configuration file
–Backup $ORACLE_HOME/sysman/j2ee/config/jazn-data.xml
–Search for entry like below

  <user>
    <name>ias_admin</name>
      <credentials>{903}8QkQ/crno3lX0f3+67dj6WxW9KJMXaCu</credentials>
  </user>

and update it with the new password (e.g. welcome2):

  <user>
    <name>ias_admin</name>
      <credentials>!welcome2</credentials>
  </user>

Note the exclamation mark (!) in front of the password: it signifies that the password is stored in clear text.
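The jazn-data.xml edit can be scripted as well. A minimal sketch, assuming GNU sed and the example password welcome2, run against a scratch copy of the <user> entry (the real file is $ORACLE_HOME/sysman/j2ee/config/jazn-data.xml, and backing it up first is essential):

```shell
# Scratch copy of the ias_admin entry from jazn-data.xml.
cat > jazn-data.xml <<'EOF'
  <user>
    <name>ias_admin</name>
      <credentials>{903}8QkQ/crno3lX0f3+67dj6WxW9KJMXaCu</credentials>
  </user>
EOF

cp jazn-data.xml jazn-data.xml.bak   # always back up before editing

# Replace the hashed credentials with the cleartext password; the leading
# "!" marks the value as clear text.
sed -i 's|<credentials>.*</credentials>|<credentials>!welcome2</credentials>|' jazn-data.xml
grep credentials jazn-data.xml
```

After the edit, bounce iasconsole as shown above so the new password takes effect.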




TSM Backup Test in IBM

Posted by Community Admin Nov 15, 2008

# cd /usr/tivoli/tsm/client/ba/bin
# dsmc
IBM Tivoli Storage Manager
Command Line Backup/Archive Client Interface
  Client Version 5, Release 3, Level 4.3
  Client date/time: 12/25/06   17:40:40
(c) Copyright by IBM Corporation and other(s) 1990, 2006. All Rights Reserved.

 

Node Name: <NODENAME>
Session established with server TSM: AIX-RS/6000
  Server Version 5, Release 3, Level 3.4
  Server date/time: 12/25/06   17:40:39  Last access: 12/25/06   17:38:24

 

tsm> archive /home/test/plevel.sql
Archive function invoked.

 

Directory-->                 256 /home/test [Sent]
Normal File-->               373 /home/test/plevel.sql [Sent]
Archive processing of '/home/test/plevel.sql' finished without failure.


Total number of objects inspected:        2
Total number of objects archived:         2
Total number of objects updated:          0
Total number of objects rebound:          0
Total number of objects deleted:          0
Total number of objects expired:          0
Total number of objects failed:           0
Total number of bytes transferred:      405  B
Data transfer time:                    0.00 sec
Network data transfer rate:        98,876.95 KB/sec
Aggregate data transfer rate:          0.19 KB/sec
Objects compressed by:                    0%
Elapsed processing time:           00:00:02
tsm> retrieve /home/test/plevel.sql /home/test/restore/plevel.sql
Retrieve function invoked.

 

Retrieving             373 /home/test/plevel.sql --> /home/test/restore/plevel.sql [Done]

 

Retrieve processing finished.

 

Total number of objects retrieved:        1
Total number of objects failed:           0
Total number of bytes transferred:      405  B
Data transfer time:                    0.00 sec
Network data transfer rate:        17,977.62 KB/sec
Aggregate data transfer rate:          0.13 KB/sec
Elapsed processing time:           00:00:03
tsm>
