Oracle E-Business Code Migration Tool


For the latest content, please refer to:

http://www.applikast.net/technical/tools/oracle-e-business-customization-installation-tool



12 comments:

  1. I like this. I am using Ant now. What are you doing to handle dependencies and execution ordering?

  2. Well, for the DB, all the objects are loaded first and then recompiled in the database.

    For Java objects, dependencies are handled by running Ant for compilation.

    For XML Publisher, all DTDs are loaded before the templates.
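
    A minimal sketch of that recompile step, assuming a hypothetical XXCUST custom schema and the standard DBMS_UTILITY package (this is not the tool's actual code):

        # Sketch only: recompile the custom schema once all objects are loaded.
        # XXCUST and the APPS password variable are assumptions, not the tool's real names.
        echo "EXEC DBMS_UTILITY.COMPILE_SCHEMA('XXCUST', FALSE);" | sqlplus -s apps/"$APPS_PWD"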

  3. I get the following error: "pre_process user hook not defined: No such file or directory at Install.pl line 937." Is this because I do not have the open2 libraries installed? Will you share a link or instructions for installing the open2 libraries on the EBS middle-tier server?

  4. Oops! It has nothing to do with the libraries. ...
    I should have mentioned this in the installation instructions. Can you create two blank files named pre_process.txt and post_process.txt in the same directory from which you are running Install.pl?
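
    In shell terms, that is simply:

        # create the two empty user-hook files alongside Install.pl
        touch pre_process.txt post_process.txt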

  5. It executes OK, but it did not seem to handle the "sql" directory. I have a simple SQL script that creates a view as my test case.

  6. That is odd. Can you set the option debug=true and send the debug files across? I can take a quick look.

  7. Does this tool compile the forms/reports?
    Does it also move the finished fmx to the AU_TOP?

    And finally, does it only recompile the files that have changed since the last migration?

  8. Yes, it does all of that.

    With regard to the last migration, it is best to put only the changed files in the folder structure for migration. The developers should give you just the changed files, so this should work fine for you.
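
    For reference, a rough sketch of the kind of forms compile step involved, assuming the standard frmcmp_batch utility; the form name, the custom top, and the exact options are assumptions rather than the tool's actual code:

        # Sketch: compile a custom form from AU_TOP and stage the fmx under an assumed custom top
        frmcmp_batch userid=apps/"$APPS_PWD" \
            module="$AU_TOP/forms/US/XXCUSTFORM.fmb" \
            output_file="$XXCUST_TOP/forms/US/XXCUSTFORM.fmx" \
            module_type=form compile_all=special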

  9. You say the tool supports forms and reports, but the code itself in Install.pl doesn't handle it.

    It seems like the tool will always execute everything in the folder structure. Like you said, this will require the developer to stage only the files he wants to be processed in the environment. Can we avoid that requirement to a large extent?

    It looks quite flexible in terms of allowing itself to be controlled by a couple of environment variables, which is quite nice I think, but it still lacks flow control and gives little or no indication of what fails during migration.

    Your thoughts, please.

  10. Hi Abdul,

    Thanks for your invaluable comments. As you pointed out, the file was the wrong version. I have now uploaded the right version, with the code to process RDF and FMB files. It also has the code to compile PLD files.

    As for the flow of installation/migration, how would you want to control it? The tool was built on a previous project where we gave deltas as installation packages, and this worked just fine. But I do see your point about controlling flow. How do you think you want this to be controlled? I can definitely look at modifying the code to incorporate that.

    Thanks,
    Arun

  11. Let me explain how I foresee flow control and delta migration working:

    - The tool is invoked as a shell script on the Hudson server.
    1. The Hudson server pulls the latest version of the working branch down to the local filesystem (on Linux).
    2. The shell script recursively copies the entire corresponding file structure (either scp or rsync) over to the designated apps tier for the target environment.
       a. This will effectively overwrite any local amendments anyone might have made on the filesystem (and that is how we want/need it to be).
    3. The shell script then invokes the migration script remotely (it executes it on the apps tier by logging onto the apps server with ssh and invoking the shell); a sketch of this copy-and-invoke flow is at the end of this comment.
       a. Note: this migration script is copied as part of step 2.
    4. The migration script for each object type will traverse the tree structure on the server, covering all custom objects (XXP2P, XXR2P, XXO2C, XXR2R, XXARS).
       a. /admin/sql: files here will not be touched, as they are all covered by the DB installation framework (separate job).
       b. /forms/US and /forms/CHS: files are processed by xxars_compile_forms.sh.
       c. /admin/FNDLOAD: files are processed by xxars_handle_fndload.sh.
       d. /reports/US and /reports/CHS: files are processed by xxars_compile_reports.sh.
    5. Delta detection (see the sketch below):
       a. Each artifact associated with a file explicitly listed in xxars_.sh has a logical name.
       b. Each artifact has its corresponding md5 checksum generated on the server.
       c. The checksum is sent, along with the logical name, to a tracking procedure within the database for each environment:
          - If the checksum is identical to the checksum already stored as processed, no further action is required.
          - If the logical name does not have a checksum associated with it yet, it will return that a compilation is required.
          - If the logical name already has a checksum associated with it, and it has changed, it will return that a compilation is required.
       d. If a compilation is required, then the corresponding file will be either compiled (for forms/reports) or have FNDLOAD or similar executed (for .ldt files).

    Now, I haven’t implemented this feature yet in the architecture objects, but this is the approach we should take.
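
    A rough sketch of the checksum handshake in step 5, assuming a hypothetical tracking function xx_migration_track.needs_compile in the database (the real procedure name, signature, and return value would need to be defined):

        # Sketch only: decide whether a file needs compiling, based on its md5 checksum.
        # The logical name, file path, and the tracking function are all assumed names.
        LOGICAL_NAME="XXARS_SOME_FORM"
        FILE="$XXARS_TOP/forms/US/XXARS_SOME_FORM.fmb"
        CHECKSUM=$(md5sum "$FILE" | awk '{print $1}')
        RESULT=$(echo "SELECT xx_migration_track.needs_compile('$LOGICAL_NAME', '$CHECKSUM') FROM dual;" \
                 | sqlplus -s apps/"$APPS_PWD")
        if echo "$RESULT" | grep -q Y; then
            xxars_compile_forms.sh "$FILE"   # compile only when the checksum is new or has changed
        fi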

    Please let me know your email ID so that I can send you a version of the object we would call xxars_handle_fndload.sh that I used previously; that should give you a better idea of what I mean by xxars_.sh (as this would be the one for the FNDLOAD work area).
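
    This is not that script, but the core of such a script is usually a call to the standard FNDLOAD utility; the .lct and .ldt file names below are only examples:

        # Sketch: upload one .ldt file with the standard loader (file names are examples)
        FNDLOAD apps/"$APPS_PWD" 0 Y UPLOAD "$FND_TOP/patch/115/import/afcpprog.lct" XXARS_CONC_PROG.ldt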

    Each shell script should be self-contained, and not know about the calling shell script on the Hudson build server.
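
    For the copy-and-invoke flow in steps 2 and 3, a minimal sketch of the calling script on the Hudson server; the host, paths, and remote entry point are assumed names:

        # Sketch: push the checked-out tree to the target apps tier, then run the migration there.
        # The --delete flag reflects step 2a: local amendments on the apps tier are overwritten.
        rsync -az --delete ./custom/ applmgr@appshost:/u01/stage/custom/
        ssh applmgr@appshost 'cd /u01/stage/custom && ./run_migration.sh'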

  12. Nice blog, I liked it. Thanks for sharing this blog. Visit the website below for Oracle R12 financials online training.

