Monday, September 19, 2016

Cloning an Oracle Database with APEX applications

About a year ago I asked a question about cloning Oracle Databases in the APEX section of the Oracle Community. The question is here (no need to click the link, all of the content is also in this blog post):

I haven't received much traction, so I'm reformulating it as a recommendation. Perhaps I'll get more feedback in this form. I'll also point out an element I think is a bug in APEX. You'll have to read to the end to see that.

It is a common practice to clone a production database and utilize the clone for testing or development purposes. This happens a great deal with E-Business Suite implementations, but also with many other installations. Below is a sample list of steps (with limited guidance) that should be done to avoid side effects with APEX applications.

  1. Use RMAN or your favorite technology to backup and restore the production database, but DO NOT START THE DATABASE.
  2. Change the database SID and name if not done above.
  3. Set JOB_QUEUE_PROCESSES to 0.  This is a key step to make sure that when you start the database things don't start "happening."
  4. Start the database.
  5. Assuming you are running a runtime only environment in Production, you will likely want to install the APEX builder in this new clone. Run apxdvins.sql to upgrade the runtime into a full development environment.
  6. Log into the INTERNAL workspace and modify instance settings: Instance URL, Image URL, SMTP server settings (if you wish to use a different SMTP server), Print Server settings, any other settings you want.
  7. Navigate to the Manage Instance > Mail Queue and delete anything in the queue. The clone may have happened while things were in the queue.
  8. Manage Instance > Interactive Report Descriptions: Delete all of the Interactive Report subscriptions. This is also a key step to ensure that you don't have emails going out to production users from your development or test environment.
  9. Manage Instance > Session State: Purge all session state. There could be sensitive production data that you don't want left around in session state.
  10. Modify any settings specific to your own applications, e.g. web service URLs, lookup values, etc.
  11. Reset JOB_QUEUE_PROCESSES to appropriate value.
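
Steps 3 and 11 are one-liners from a SYSDBA session. A sketch (the reset value of 20 in step 11 is just a placeholder for your site's normal setting):

```sql
-- Step 3: keep jobs from running in the clone (run as SYSDBA)
alter system set job_queue_processes = 0 scope=both;

-- Step 11: restore your normal value after the cleanup steps (20 is a placeholder)
alter system set job_queue_processes = 20 scope=both;
```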

It would also be great to have Oracle provide a script that does the above things (with the exception of #10, of course).

I promised to call out a bug. Step #8 should delete all of the interactive report subscriptions, but it doesn't--at least not in APEX v5. The list of report subscriptions skips any application

where build_status = 'RUN_AND_HIDDEN'

This is (at least) packaged applications that have not been "unlocked." It turns out that deleting these subscriptions is NOT EASY. I originally thought a script like this might do it:

-- this would need to be run in each workspace because it uses the workspace views
begin
  for irRec in (select notify_id from APEX_APPLICATION_PAGE_IR_SUB) loop
    apex_ir.delete_subscription(p_subscription_id => irRec.notify_id);
  end loop;
end;
/


Unfortunately, APEX_APPLICATION_PAGE_IR_SUB doesn't see the subscriptions. It has the same issue as the page--it won't show subscriptions for applications with build_status = 'RUN_AND_HIDDEN'.

I tried a few other things, but in the end, the only way I could get rid of these was to just delete them from the underlying table, run as the APEX_050000 schema:

delete from wwv_flow_worksheet_notify;

Thursday, June 23, 2016

Query to find users granted an ACL -- the natural question after seeing ORA-24247

You may have encountered ORA-24247: network access denied by access control list (ACL) and wondered "who has access to what from my database?" I extended a query from the Oracle documentation to give me the results I wanted: ACL Name, Username, host, lower port, upper port, and if granted connect and resolve.

with privs as (
  SELECT acl, u.username, host, lower_port, upper_port,
         DECODE(DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE_ACLID(aclid, u.username, 'connect'),
            1, 'GRANTED', 0, 'DENIED', null) conn_privilege,
         DECODE(DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE_ACLID(aclid, u.username, 'resolve'),
            1, 'GRANTED', 0, 'DENIED', null) res_privilege
     FROM dba_network_acls a, dba_users u
)
select *
  from privs
  where conn_privilege is not null
     or res_privilege is not null
  order by acl, username;

It's nothing special, but can be a handy query.

Monday, June 13, 2016

Super Quick Oracle REST Service with OAuth2 and client_credentials

I had the need to allow system A to talk to system B via a REST service. The data was sensitive, and the powers above me requested that system A use OAuth2 to connect to system B. This REST service call does not involve an end user; it's system A pulling a CSV extract from system B. There are many ways to protect this, but the decision was to use OAuth2. Below is a cookbook on how to do this. This example assumes you have ORDS 3.x installed with the ORDS_PUBLIC_USER and ORDS_METADATA schemas configured.

I am running all commands as the user ANTON.

  1. For this example I will create a data source for our query:
    -- create the table in the ANTON database schema
    create table anton_table (c1 varchar2(500), c2 varchar2(500) );

    -- insert some sample data
    insert into anton_table (c1, c2)
      select owner, table_name
        from all_tables
        where rownum <= 20;

    commit;
  2. Enable REST on the ANTON schema:


    begin
      ords.enable_schema(p_enabled => TRUE,
                         p_schema => 'ANTON',
                         p_url_mapping_type => 'BASE_PATH',
                         p_url_mapping_pattern => 'anton',
                         p_auto_rest_auth => FALSE);
      commit;
    end;
    /

    -- check to see that it worked
    select id, parsing_schema from user_ords_schemas;

  3. Define a REST Module:

    begin
      ords.define_module(
        p_module_name    => 'antonModule',
        p_base_path      => '/antonmodule',
        p_items_per_page => 25,
        p_status         => 'PUBLISHED',
        p_comments       => NULL );
      commit;
    end;
    /


  5. Define a template. This is a URL pattern associated with the module "antonModule." Bind variables are contained in squiggly brackets: {bindVariableName}.

    begin
      ords.define_template(
        p_module_name => 'antonModule',
        p_pattern     => '/sqltest/{abc}/{def}',
        p_priority    => 0,
        p_etag_type   => 'HASH',
        p_etag_query  => NULL,
        p_comments    => NULL );
      commit;
    end;
    /
    This is interesting so take note! Notice my bind variables are abc and def. I tried using c1 and c2, but it seems that bind variable names cannot contain numerals. Developer beware!
  6. Define a REST Handler based upon a sql query that takes two bind variables (abc and def) and returns a CSV file:

    begin
      ords.define_handler(
        p_module_name => 'antonModule',
        p_pattern     => '/sqltest/{abc}/{def}',
        p_method      => 'GET',
        p_source_type => ords.source_type_csv_query,
        p_source      => q'[select c1, c2 from anton_table where c1 = :abc and c2 = :def]',
        p_items_per_page  => 25,
        p_mimes_allowed   => NULL,
        p_comments  => NULL );
      commit;
    end;
    /


    This is worth repeating: Notice my bind variables are abc and def. I tried using c1 and c2, but it seems that bind variable names cannot contain numerals. Developer beware!
  7. At this point you can test your service:

    curl -i http://localhost:8080/ords/anton/antonmodule/sqltest/ANTON/ANTON_TABLE

  8.  Define a privilege to protect it with OAuth2:


    declare
      l_roles       owa.vc_arr;
      l_patterns    owa.vc_arr;
      l_modules     owa.vc_arr;
    begin
      -- l_roles intentionally left empty
      -- populate arrays
      l_modules(1) := 'antonModule';

      ords.create_privilege(
        p_privilege_name  => 'antonpriv',
        p_roles           => l_roles,
        p_patterns        => l_patterns,
        p_modules         => l_modules,
        p_label           => 'antonTestingPriv',
        p_description     => 'anton testing priv',
        p_comments        => null);
      commit;
    end;
    /


  9. Now you will find it protected: the same curl test from step 7 returns 401 Unauthorized.

  10. Create a client that is allowed to access it:

    begin
      oauth.create_client(
        p_name            => 'antonclient',
        p_grant_type      => 'client_credentials',
        p_owner           => 'anton',
        p_description     => NULL,
        --p_origins_allowed => NULL, -- param name depends on ORDS version
        p_allowed_origins => NULL,   -- param name depends on ORDS version
        p_redirect_uri    => 'http://localhost:8080/redirect',
        p_support_email   => '',
        p_support_uri     => 'http://localhost:8080/support',
        p_privilege_names => 'antonpriv');
      commit;
    end;
    /
  11. Get the client_id and client_secret. You will need to log in as a user that has access to select from the ords_metadata tables (e.g. ORDS_METADATA or SYSTEM).

    select * from ords_metadata.oauth_clients;
  12. If you want to be able to do this from an http (not https) URL (which you should NEVER do in production--this is just for testing!!):

    -- turn off need for SSL
    1.    Locate the folder where the Oracle REST Data Services configuration is stored.
    2.    Edit the file named defaults.xml.
    3.    Add the following setting to the end of this file just before the </properties> tag.
    4.    <entry key="security.verifySSL">false</entry>
    5.    Save the file.
    6.    Restart Oracle REST Data Services if it is running.
  13. Test getting a bearer token

    curl -i -d "grant_type=client_credentials" --user "[client_id]:[client_secret]" http://localhost:8080/ords/anton/oauth/token

    You should receive a response like this:

    HTTP/1.1 200 OK
    Content-Type: application/json
    X-Frame-Options: SAMEORIGIN
    Transfer-Encoding: chunked

    {"access_token":"[token]","token_type":"bearer","expires_in":3600}


    -- curl is a tool for making http(s) requests from the command line
    --  -i includes the HTTP response headers in the output
    --  -d allows you to pass in data
    --  --user allows you to pass a user:password for basic authentication
    --  then pass the appropriate URL to get a bearer token
  14. Test getting your CSV

    curl -i --header "Authorization: Bearer [token from step 13]" http://localhost:8080/ords/anton/antonmodule/sqltest/ANTON/ANTON_TABLE --output anton.csv

Friday, April 15, 2016

LOBs Over a Database Link

There are lots of pluses and minuses to db links, but they are certainly easy and used in the right context they work very well. I admit that I sometimes use them when there is a better technical solution--just because it is so easy and the better solution may not be worth the time.

The case of LOBs over db links can be tricky. You can't select a lob over a db link in SQL or PL/SQL:

select my_blob
  from my_table@mylink;

results in

ORA-22992: cannot use LOB locators selected from remote tables

There are several techniques that work. You CAN do

insert into my_local_table (the_blob)
  select my_blob
  from my_table@mylink;
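
Another workaround, sketched below under assumed table and column names, is to stage the LOB in a local global temporary table and then work with it locally:

```sql
-- Hypothetical staging approach: the id column and table names are assumptions.
create global temporary table my_blob_stage (id number, the_blob blob)
  on commit delete rows;

insert into my_blob_stage (id, the_blob)
  select id, my_blob
    from my_table@mylink;

-- the locator in my_blob_stage is local, so dbms_lob calls work normally
select dbms_lob.getlength(the_blob) from my_blob_stage;
```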

There is another interesting technique here:

We recently had a requirement to just show the first few hundred characters of a lob over a db link. It was a complicated query and the developer wrote something like this:

select local.c1, local.c2, remote.c3
    , dbms_lob.substr(remote.my_blob, 200, 1) blob200
  from local_table local
  inner join remote_table@mylink remote on remote.c2 = local.c2;

This worked fine in the development and test environment. In production it gave the ORA-22992. It depended on how the optimizer chose to run the query. If the dbms_lob.substr ran on the remote database it was fine, but if it had to pull the blob to the local db it was a problem.

We solved it by forcing the dbms_lob.substr to run on the remote node:

select local.c1, local.c2, remote.c3
    , dbms_lob.substr@mylink(remote.my_blob, 200, 1) blob200
  from local_table local
  inner join remote_table@mylink remote on remote.c2 = local.c2;

Thursday, March 31, 2016

apex_web_service.make_rest_request not working with POST

I recently encountered a web service that I was unable to use with POST and apex_web_service. I was using a statement like this:

declare
    l_clob       CLOB;
begin
    l_clob := apex_web_service.make_rest_request(
        p_url => 'http://myMachine/myService',
        p_http_method => 'POST',
        p_parm_name => apex_util.string_to_table('param1:param2'),
        p_parm_value => apex_util.string_to_table('xyz:xml'));
end;
/


I've used this many times in the past, but this particular service would not recognize the parameters passed in p_parm_name and p_parm_value. I was able to use curl with the same transaction.

curl -X POST -d "param1=xyz&param2=xml" http://myMachine/myService

I must say, it was VERY frustrating. I finally enabled full logging on Apache using mod_dumpio. 

(Apache 2.4.x)
# uncomment
LoadModule dumpio_module modules/

# add
<IfModule dumpio_module>
    #LogLevel debug
    LogLevel dumpio:trace7   # Apache 2.4
    DumpIOInput On
    DumpIOOutput On
    #DumpIOLogLevel debug   # does not work in 2.4
</IfModule>

I reviewed the logs to find out if apex_web_service.make_rest_request was doing something different than curl. I say "if," but clearly something had to be different. In the logs I found this line from curl but not from apex_web_service:

mod_dumpio:  dumpio_in (data-HEAP): Content-Type: application/x-www-form-urlencoded\r\n

I was able to get things working by adding the Content-Type header as shown below.

declare
    l_clob       CLOB;
begin
    apex_web_service.g_request_headers(1).name := 'Content-Type';
    apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded';

    l_clob := apex_web_service.make_rest_request(
        p_url => 'http://myMachine/myService',
        p_http_method => 'POST',
        p_parm_name => apex_util.string_to_table('param1:param2'),
        p_parm_value => apex_util.string_to_table('xyz:xml'));
end;
/


I hope this helps someone!

Thursday, January 08, 2015


I've recently been involved with extending a number of systems that have pre-built data models. I'm generally unhappy with these data models for a variety of reasons. There are many great academic texts on data modeling. I will try to put together a bibliography in an upcoming post. For now, I'll start by discussing the "never delete data" trend. It is generally coupled with the use of a column to indicate that the data should have been deleted (typically a column named VOID) but was instead allowed, indeed required, to linger forever in the table.

There are typically two arguments in favor of the "never delete, add a VOID column" data model: I want to know what happened from a traceability perspective, and, I want to be able to do incremental extracts to populate some other system and need to know if I need to VOID the row in the other system.

Example without VOID

It's easiest to deal with a concrete example, so let's make one. Assume we have an employee table that stores data about employees. For the purposes of my argument (and because it probably makes sense) let us assume we require a unique SSN for each employee. Typically this table would look like this:

-- table reconstructed from the examples below; column types are assumptions
create table "EMPLOYEE" (
    "ID"           NUMBER,
    "SSN"          VARCHAR2(32) NOT NULL,
    "LAST_NAME"    VARCHAR2(255),
    "FIRST_NAME"   VARCHAR2(255),
    "OTHER_INFO"   VARCHAR2(4000),
    "LAST_UPDATED" DATE,
    "SALARY"       NUMBER,
    constraint  "EMPLOYEE_PK" primary key ("ID")
);

alter table "EMPLOYEE" add
constraint "EMPLOYEE_SSN_UK"
unique ("SSN");

Because we want to be able to do incremental updates, we need the LAST_UPDATED column to be not null and we need to ensure it is always set correctly. There are many reasons to avoid triggers; just do an internet search for "Tom Kyte triggers" to see a number of valid arguments. For this purpose, though, I will add a trigger:

create or replace trigger "EMPLOYEE_BRIUT"
before insert or update on "EMPLOYEE"
for each row
begin
  :new.last_updated := sysdate;
end;
/

Note that I have told the database that SSN will be unique by adding EMPLOYEE_SSN_UK.
The database will automatically create a unique index of the same name.
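
You can confirm the automatically created index in the data dictionary:

```sql
select index_name, uniqueness
  from user_indexes
  where table_name = 'EMPLOYEE';
```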

Let's explore what happens if two users attempt to insert employees with the same SSN.

User A (note lack of commit):
insert into employee (ssn, last_name)
  values ('123456789', 'Smith');

 1 rows inserted.

 User B:
 insert into employee (ssn, last_name)
  values ('123456789', 'Smith');

 (User B's session hangs, blocked waiting to see whether User A commits or rolls back.)

User A:

commit;

committed.
 User B:

Error starting at line : 1 in command -
insert into employee (ssn, last_name)
  values ('123456789', 'Smith')
Error report -
SQL Error: ORA-00001: unique constraint (ANTON.EMPLOYEE_SSN_UK) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

The database recognized that the SSN was a duplicate and did not allow User B to insert. Had User A issued a rollback, User B could have continued, but once User A had committed the record, User B received an error.

This functionality was completed with a single line of code:

alter table "EMPLOYEE" add constraint "EMPLOYEE_SSN_UK" unique ("SSN")

If I were coding an API and wanted to capture the error, it would require one additional line of code (assuming you already have the keyword EXCEPTION in your API):

  when DUP_VAL_ON_INDEX then ... do something
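
As a sketch, a hypothetical API insert routine only needs a handler like this:

```sql
begin
  insert into employee (ssn, last_name)
    values ('123456789', 'Smith');
exception
  when dup_val_on_index then
    -- do something: log it, or turn it into a friendly error
    raise_application_error(-20001, 'An employee with that SSN already exists.');
end;
/
```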

Later you realize that this person NEVER should have been entered into the employee table. This was not an employee, it was a customer. You issue the following command:

delete from employee
  where ssn = '123456789';

Later still, this customer becomes an employee. You issue the following:

insert into employee (ssn, last_name)
  values ('123456789', 'Smith');


This works just fine and no additional code is required.

Example With VOID

Let us assume that someone convinces you to disallow any deletes. Instead you are asked to add a VOID column. The VOID column will contain a V if the record is "void," else it will be null.

-- table reconstructed from the examples below; column types are assumptions
create table "EMPLOYEE_NO_DELETE" (
    "ID"           NUMBER,
    "SSN"          VARCHAR2(32) NOT NULL,
    "LAST_NAME"    VARCHAR2(255),
    "FIRST_NAME"   VARCHAR2(255),
    "OTHER_INFO"   VARCHAR2(4000),
    "LAST_UPDATED" DATE,
    "SALARY"       NUMBER,
    "VOID"         VARCHAR2(1),
    constraint  "EMPLOYEE_ND_PK" primary key ("ID")
);

create or replace trigger "EMPLOYEE_ND_BRIUT"
before insert or update on "EMPLOYEE_NO_DELETE"
for each row
begin
  :new.last_updated := sysdate;
end;
/

Given the scenario listed above, we won't be able to add the unique constraint on SSN. If we were to do so, we would not be able to add the employee a second time, as there would already be an employee record with that same SSN. Perhaps we could get away with making SSN + VOID unique.

alter table "EMPLOYEE_NO_DELETE" add constraint "EMPLOYEE_ND_SSN_V_UK" unique ("SSN", "VOID");

That seems to do the trick.

insert into employee_no_delete (ssn, last_name)
  values ('123456789', 'Smith');

1 rows inserted.
update employee_no_delete
  set void = 'V'
  where ssn = '123456789';

1 rows updated.
insert into employee_no_delete (ssn, last_name)
  values ('123456789', 'Smith');

1 rows inserted.

We still have all of the great features around row locking on uniqueness provided by the database.

Of course, if your users are anything like mine, you will find that Mr. Smith has once again been added as an employee, but he is really a customer. So...

update employee_no_delete
  set void = 'V'
  where ssn = '123456789';

Ah, but here we get

Error starting at line : 1 in command -
update employee_no_delete
  set void = 'V'
  where ssn = '123456789'
Error report -
SQL Error: ORA-00001: unique constraint (ANTON.EMPLOYEE_ND_SSN_V_UK) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.

We already have a row with this combination of SSN and VOID. There is no choice but to remove the constraint, and, along with it, all of the multi-user concurrency features provided by the database.

alter table "EMPLOYEE_NO_DELETE" drop constraint "EMPLOYEE_ND_SSN_V_UK";

By dropping the constraint we lose more than we can possibly recover by adding our own code. Nevertheless, let's try.

Now we have to add code to ensure that two users (in two separate sessions) never insert or update rows that would cause a duplicate SSN. That means we must insist that all updates happen through an API. You might argue that only updates that include the SSN must go through the API, but there are edge cases where this could cause deadlocks--and more importantly, it would be difficult to allow updates to everything except SSN. Hence, we have a new rule: all updates must go through our API.

This rule also means that, unless we code special additional APIs, all processing is row by row (AKA slow by slow). Want to give everyone a 10% raise? That means updating each row rather than issuing a single update.

Our API is also somewhat complicated. We must ensure that there is only one insert or update that involves SSN at a time--across sessions. As we don't have much insight into the happenings of another session, we'll need some way to track this. In order to serialize the inserts and any updates that might change the SSN, we must lock the EMPLOYEE_NO_DELETE table--the whole table. This means before each insert or update we must issue

lock table employee_no_delete in share mode nowait;

We might consider using WAIT instead of NOWAIT, especially as we assume that there will be a lot of table locks.

lock table employee_no_delete in share mode wait 10;
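
With NOWAIT, a busy table raises ORA-00054, which the API would have to handle. A sketch:

```sql
declare
  resource_busy exception;
  pragma exception_init(resource_busy, -54);  -- ORA-00054: resource busy
begin
  lock table employee_no_delete in share mode nowait;
  -- ... perform the insert/update ...
exception
  when resource_busy then
    -- another session holds a conflicting lock
    raise_application_error(-20002, 'EMPLOYEE_NO_DELETE is busy; try again.');
end;
/
```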

Alternative Method

I've seen this implemented manually by creating another table that tracks table names--and then the API must lock the appropriate row.

-- table reconstructed from context
create table table_with_void_column (
    table_name   varchar2(128),
    constraint  "TABLE_NAME_PK" primary key ("TABLE_NAME")
);

insert into table_with_void_column (table_name)
  values ('EMPLOYEE_NO_DELETE');

commit;


We would never actually update this row, but would lock it in order to interact between sessions.
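
The row lock in that scheme would look something like this sketch; the lock is held until the session commits or rolls back:

```sql
declare
  l_table_name  table_with_void_column.table_name%type;
begin
  -- serialize by locking the tracking row rather than the whole table
  select table_name
    into l_table_name
    from table_with_void_column
    where table_name = 'EMPLOYEE_NO_DELETE'
    for update wait 10;

  -- ... perform the insert/update on employee_no_delete ...
end;
/
```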

That method involves creating a new table and code to accomplish something Oracle already provides. Obviously, that is something we are already attempting by adding the VOID column and coding around it, so I'm not surprised to see custom table locking implemented by the same folks who implement custom delete handling.

Back to the API

In order to ensure that a row is never deleted and that we never have a duplicate SSN, we need an API such as the one below.

create or replace package employee_ND_api is

procedure ins ( p_emp_rec    in employee_no_delete%rowtype) ;

end employee_ND_api;  


create or replace package body employee_ND_api is

procedure ins ( p_emp_rec    in employee_no_delete%rowtype) is

duplicate_ssn   exception;
l_count         number;

begin

  lock table employee_no_delete in share mode wait 10;

  begin
    select 1 into l_count
      from dual
      where exists (select 1 from employee_no_delete e
        where e.ssn = p_emp_rec.ssn);

    -- oops we found a row already there
    raise duplicate_ssn;

  exception when no_data_found then null; -- ok to continue
  end;

  insert into employee_no_delete values p_emp_rec;

-- note: We cannot commit. There should be a full transaction,
--       actions before and after this action, that need to be
--       committed together. Hence, the table lock is held
--       until the whole transaction completes.
end ins;

end employee_ND_api;
/


The update routine would be slightly more complicated as we must also lock the row we intend to update, but the INS routine above points to some issues already. We have now locked the entire EMPLOYEE_NO_DELETE table. As noted, we can't commit the newly inserted record as there may be other DML that needs to occur--inserts or updates to other rows or data in other tables. Hence, the entire table remains locked until the final commit or rollback. No other session can insert or update any row of EMPLOYEE_NO_DELETE until we complete.

Moreover, there may be many tables--perhaps all tables--in our system with this same requirement. Hence, when we attempt to update data in another table in our unified transaction, we will need to take the same approach--lock the entire table. Unless every transaction in our system always follows the same order, we will certainly run into deadlocks on a frequent basis: one session will lock EMPLOYEE_NO_DELETE, another will lock DEPT_NO_DELETE, the first will attempt to lock DEPT_NO_DELETE but be blocked. Then the second will attempt to lock EMPLOYEE_NO_DELETE and the database will detect a deadlock--forcing a rollback of one of the sessions. There is no way to avoid this.

The Incremental Update Requirement

Clearly EMPLOYEE_NO_DELETE, with its void column, has problems. The requirement to do incremental updates of another system, though, remains. If we return to the EMPLOYEE table and allow the row to be deleted using the EMPLOYEE table rather than setting VOID = 'V' in the EMPLOYEE_NO_DELETE table, how does the incremental update routine know to remove (or void) the row?

This is trivial with the use of a trigger on the EMPLOYEE table. Whenever a delete occurs, write a row to another table to indicate the delete. I'll mention Tom Kyte's dislike of triggers here again. I generally agree with Tom on this point. I don't even like the trigger I used above to populate the last_updated column. In the case of audit tables, though, I think a trigger is absolutely warranted. We are not changing any data in the base table and there are no foreign keys or even constraints on the audit table. Users would only ever be granted SELECT on that table. This is the place for a trigger. Depending upon our audit requirements, we might just indicate who took the action, when, and whether it was an insert, update or delete. If we really need traceability, though, it's easy to capture the whole image of the row. I'll do that for this example:

-- create the audit table (reconstructed; column types are assumptions)
create table "EMPLOYEE_AUDIT" (
    "ID"           NUMBER,
    "SSN"          VARCHAR2(32) NOT NULL,
    "LAST_NAME"    VARCHAR2(255),
    "FIRST_NAME"   VARCHAR2(255),
    "OTHER_INFO"   VARCHAR2(4000),
    "LAST_UPDATED" DATE,
    "SALARY"       NUMBER,
    "ROW_ACTION"   VARCHAR2(32)
);

-- create the trigger
create or replace trigger "EMPLOYEE_ARIUT"
after insert or update or delete on "EMPLOYEE"
for each row
declare
  l_action  varchar2(32);
begin
  if inserting then l_action := 'INSERT';
  elsif updating then l_action := 'UPDATE';
  else l_action := 'DELETE';
  end if;

  if deleting then
    -- on delete the :new values are null, so capture the :old image
    insert into employee_audit (id, ssn, last_name, first_name
      , salary, other_info, last_updated, row_action)
      values (:old.id, :old.ssn, :old.last_name, :old.first_name
      , :old.salary, :old.other_info, :old.last_updated, l_action);
  else
    insert into employee_audit (id, ssn, last_name, first_name
      , salary, other_info, last_updated, row_action)
      values (:new.id, :new.ssn, :new.last_name, :new.first_name
      , :new.salary, :new.other_info, :new.last_updated, l_action);
  end if;
end;
/

The incremental routine can simply query the deleted row to gather the data. With sufficient data in the audit table, we can create a view that looks exactly like EMPLOYEE_NO_DELETE, but without its inherent shortcomings (nay, fatal flaws).

--create a view that includes the deleted rows
create or replace view employee_with_void as
select id, ssn, last_name, first_name, salary
    , other_info, last_updated, null void
  from employee
union all
select id, ssn, last_name, first_name, salary
    , other_info, last_updated, 'V' void
  from employee_audit
  where row_action = 'DELETE';

The audit table can provide much better information if we need it. With just a LAST_UPDATED column (and no audit table), the incremental routine would never know about multiple changes that occur between the incremental runs. It may not need to, but if it does, the audit table provides that ability.
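
For example, an incremental routine could pull every intermediate state since its last run (the bind variable is assumed):

```sql
select id, ssn, last_name, row_action, last_updated
  from employee_audit
  where last_updated > :last_run_time
  order by last_updated, id;
```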

In fact (not supposition, but absolute fact) the right method is to allow the row to be deleted. This provides true data protection, performs better and requires far less code which is also far less complicated.

Still Unconvinced?

OK, maybe you don't have the requirement for any unique constraints. First, I don't buy that argument for well over 90% of tables. If you are doing incremental updates in another system, you need some way to identify where to apply the incremental update--that would be the logical unique key. But, for the sake of argument, we will assume that you don't have any unique keys. That means you don't need to be as concerned about multi-user concurrency issues.

What about that VOID column, though? Will you allow a row to be "unvoided?" Will you allow any updates to a voided record? If not, you would definitely need an API to keep that from happening. Given every possible argument and every leniency of requirement, it will still be less complicated for developers and users of the data to AVOID the VOID. Every query implemented by every user or developer (save the incremental update routine) must contain

where void is null

Indexes may need to include the VOID column--you'll need to do the analysis on a table by table basis.

You definitely need an API and likely slow by slow processing everywhere.

And here is the worst part, even the incremental load process is more complicated--if you want to get it right. If you just have two columns for audit/delete purposes, LAST_UPDATED and VOID, you lack the fidelity of knowing a row ever existed or what its incremental states were. Take this scenario, for example:

midnight: incremental process runs
9:00 am  User adds employee Smith, SSN = 123456789
9:15 am  User updates Smith to Jones, SSN = 123459999
10:00 am Payroll runs and pays Jones $500
noon  User updates Jones to Smith, SSN = 123456789
1:00 pm User voids the record
midnight: incremental process runs

All that the last incremental process sees is that there was a row for Smith, SSN = 123456789 and that it was voided. What is the incremental process expected to do with that information?

Need I mention foreign keys? If you have either a parent or a child record, how do you handle the relationship? Obviously a parent record can't be deleted; it would have to be voided. All child records would have to be voided as well. The cascading all has to be coded for--not to mention the locking of all of the cascading. At this point I have to ask: why did your company spend so much money on an Oracle database? I doubt it was to hire developers to code the same features again--with less functionality.

Adding a table is easy. Creating a view is easy. All subsequent code benefits from these constructs. Why does the VOID persist? If anyone has a reason for a VOID column, let me know in the comments. Until then, please join with me in this movement--AVOID the VOID.


Monday, October 06, 2014

Boston APEX Meetup

It's been a long time since I posted, but this seems like as good a reason as any to start up again.  C2 is sponsoring an APEX meetup in the Boston area.  You can find the info here:

I'm looking forward to catching up with other members of the Oracle and APEX communities.  Stay tuned for more blog posts as well!


Monday, October 03, 2011

SOAPEX at OOW 2011

Recently, I've been doing a lot with Oracle Application Express (APEX) and web services.  At Oracle OpenWorld I came across a presentation on just this topic.  Douwe Pieter van den Bos, an Oracle Ace, presented on using APEX with the Oracle SOA Suite.  I have used SOA Suite in the past, and quite like it, but recently the web services I've been using are SOAP based services that I have very little influence on.  They are not built with or deployed on the Oracle SOA Suite.

The SOAPEX presentation gave a nice overview of how to set up APEX to use web services and made the smart recommendation to build your web service references in a single application (SOAPEX) and then use the inherit/subscription model of APEX to keep things up to date.

My specific challenges have been a little more difficult, specifically in the need to consume very large and complicated web services--so complex (or possibly overly complicated) that APEX is unable to parse the wsdl.  More on this later...

Oracle 12c Database

It's definitely about the cloud at Oracle OpenWorld 2011.  In many ways Oracle has always promoted its database in the private cloud--long before the term cloud (private or public) became popular.  At previous OpenWorlds, Larry Ellison poked fun at the cloud, noting that the notion isn't new.  Well, even Oracle must bend to the popularity of the term cloud.  I haven't yet heard an official name for the next database release, but I'm betting on Oracle 12c.

Wednesday, March 30, 2011

It's Been a Long Time

My last blog post was a tribute to Scott Spadafore.  A year and a half before, I also offered tribute to another close friend and APEX guru, Carl Backstrom.  I have had a hard time getting past the loss of these two friends.  Such a hard time that this is my first post in over a year--my first since Scott's haiku.  I have started to write many posts, but none seemed worthy of moving Scott's haiku down the page.  Scott was a pillar of the Oracle community--not just Application Express, though he certainly dominated that arena.

The passing of these two friends was a loss to many others as well.  Several months ago John Scott had a grand idea.  He gathered together over a dozen people who had benefited from the work of Carl and Scott, who had enjoyed success because of their efforts, and had become friends with Carl, Scott and each other through the APEX community.  John suggested that we jointly write a book, in Carl and Scott's memory, and donate the author royalties to the funds established for Carl's and Scott's children.  I was honored to be included in this group.

John's blog provides the full list of authors and more info about the book, Expert Oracle Application Express.  I offer my thanks to John for coordinating this project, and to all of the contributors.

Monday, March 22, 2010

Haiku Two

In November 2008 I offered a haiku in Carl Backstrom's memory.  That post referenced texting a haiku.  That text originated with Scott Spadafore.  It is with great sadness that I offer haiku two in Scott's memory.

spring leaps forth
though warmth, shining sun
brilliance lost

Tuesday, March 09, 2010

Thank You Granny!

My wife recently took a look at this blog and told me that it would be better with pictures.  She also suggested that not every post had to start with: If you are installing/configuring/coding with the Oracle product abc and you get error ORA-nnnnn . . . With that in mind, I share the following.

I used to commute to work by bike most days, but I was out of commission for about a year.  Today was a beautiful morning, just right to get back on the bike for a ride to work.  After a year off it was a bit of a shaky start.  Just figuring out where my gear was.  Paring down my now typical road warrior gear to the bare essentials (for example, a 4 port switch instead of the 10 port).  Stuffing everything into a new bag.  Hoping my lunch would not spill out into traffic.

It used to take me 26 minutes door to door.  This morning was a little slower.  Which brings me to the title of this post.  Below is a shot of my Campy Chorus Racing Triple.

A Campy Racing Triple is for people who want to think they are still fast, but realize their lifestyle will include occasionally towing a trailer with the road bike, possibly a child seat.  That small ring is known as the Granny Gear.  It's for those times, for people like me.  It's not for a commute to work, mind you.  It's for carrying heavy loads, for extreme circumstances.  Note that the photo does not show me using the granny.  That is, there is no actual evidence that I actually resorted to it today.  The shot below is merely circumstantial.

That says 41 minutes, 51 seconds.  Granted, that included finding my lock in my stuffed bag and locking up, but with all the excuses I can muster, it is still sloooow.  I think of it this way, though . . . By car it takes 28 minutes.  That means I spent about 14 minutes this morning just doing something I enjoy.  Everyone should get 14 minutes a day to do something they enjoy.  And, I get another 14 minutes this evening on my way home.  I wish you the same.

Thursday, February 25, 2010

APEX Refresh Classic Report Region AJAX style

We often have the need to refresh a classic report region, AJAX style. It is straightforward to get a refresh link on the page. I used to just build a link using $a_report (the APEX built-in for doing partial page refresh on classic reports), but I have found it is better to create a javascript function in the region header or footer. This has the advantage that you can call it from a button or from any other location on the page, not just from within the header or footer itself. Put the following in the region header or footer:

<script type="text/javascript">
function c2RefreshTasks(){
  var pId = '#REGION_ID#';               // report region id (substituted by APEX)
  $a_report(pId.substring(1),'1','15');  // APEX built-in partial page refresh
}
</script>

This allows me to put a link anywhere on the page:

<a href="javascript:c2RefreshTasks();">refresh tasks</a>

I can also create a standard button anywhere on the page that calls this javascript.

I recently also had the need to pop open a new (child) window, add a task in that child window, close the child, and then refresh the task region in the parent window. It turned out to be easy...

Just create the APEX child window that does the insert/update. Have it branch to a page (e.g. P99). On P99, put the following in the HTML header:

<script type="text/javascript">
  // important to have the try because the parent window might have changed...
  try {
    window.opener.c2RefreshTasks();  // refresh the task region in the parent
  } catch(err) {
    // parent is gone or has navigated away; nothing to refresh
  }
  window.close();  // close the child window
</script>

Saturday, December 19, 2009

Oracle Application Express 4.0 (APEX 4.0) Early Adopter

Just about everyone has already blogged about it, but APEX 4.0 EA is available now.
What else is there to say that has not already been said?  Not much probably, but I'll point out one new feature: APEX 4.0 is not available in Internet Explorer.  I have it on good authority that this feature will only be in the beta and will not make it into the production release--we can hope, though.  (Thanks to Neelesh Shah for pointing out this new feature.)

Also, check out the new SQL Developer features:

And, of course, Patrick's sample plugin:

Thursday, December 03, 2009

ODTUG Kaleidoscope 2010: APEX Plugin Showcase

One of the best new features of APEX 4.0 is the extensible plugin architecture.  You will be able to create your own item and region types as well as custom dynamic actions (javascript enabled actions on the browser).  You'll be able to add these plugins right in to the builder so they are available to all of your developers.  You will be able to share (or even sell) these with the APEX community.  This is big.

To get you started the fine folks at ODTUG are going to build five fantastic plugins and give them away to anyone who attends Kaleidoscope 2010.  You can check out the details here.

The trick to writing a great plugin is to have a great idea.  I'll have some input into the five plugins.  If there is an APEX item type, region type or dynamic action that you think should be there but isn't, please leave me a comment and let me know. 

Monday, November 30, 2009

Interesting APEX with dblink issue

We recently ran in to a problem when connecting across a database link to a Postgres database.  The query looked like this
select "column1", "column2"
  from "my_schema"."my_long_named_view"@my_db_link
It works fine from SQL*Plus and SQL Developer, but when running the same query in Application Express (in an app or in the SQL Workshop) we got this error:
ORA-28500: connection from ORACLE to a non-Oracle system returned this message: [Generic Connectivity Using ODBC]DRV_QspecDescribe: DB_ODBC_RECORD (189): ; [OpenLink][ODBC][PostgreSQL Server] current transaction is aborted, commands ignored until end of transaction block (SQL State: S1000; SQL Code: 1) ORA-02063: preceding 2 lines from MY_DB_LINK
Dave Rydzewski came up with the solution.  Shorten the name of the view in Postgres and change the query to look like this (note two fewer double quotes "):
select "column1", "column2"
  from "my_schema.short_v"@my_db_link
I'm still not sure what APEX does to make it blow up.  I wonder how many people use APEX with a db link to Postgres.

Tuesday, November 24, 2009

VMware pc2mac

I thought tonight was going to be the night.  I purchased a macbook pro about 6 weeks ago.  I've slowly been getting familiar with the mac, and realizing that I'll probably still need a PC image at least occasionally.  Last night I installed VMware Fusion for the mac (nice that it's called Fusion, that way it almost has something to do with this Oracle blog).  Tonight I was to create the image from my work laptop and test it out on the macbook.  Alas, try as I might, I still don't have an image.  This is my story.

I read the VMware readme (yep, I do that kind of thing).  I learned about pc2mac, the utility that lets you stream the image right off your running pc and onto an image on your mac.  I read the guides.  I installed the pc2mac utility on my PC, restarted my PC and the VMware Fusion PC Migration Agent screen opened right up.  I ran VMware on the mac and followed the instructions, typed in the four-digit passcode, the administrator password, clicked continue, and got an error:

Converter failed to connect to remote machine.
An error occured while transferring data
I tried a few times, but did not get any further.  I decided to come back later, but first, I returned to the PC to close out the migration agent.  Noticing the checkbox "Run the VMware Fusion PC Migration Agent Every time I start my PC", I realized that I probably don't want this to run every time.  I unchecked the box.

Returning later to try again, I could not find a way to start up that migration assistant.  I googled.  I rebooted, re-installed pc2mac, rebooted, uninstalled, rebooted, reinstalled, rebooted.  Mind you, this was the PC I was rebooting--not a quick affair, much like this blog post.  I'll cut to the chase, even an uninstall and reinstall would not bring up that screen again.  I finally found the trick though.  Edit the registry:

HKEY_CURRENT_USER > Software > VMware, Inc. > PC Migration Agent
Edit RunAtStartup and change from 0 to 1.

Of course, the change from 0 to 1 was just a guess, but it worked.

Now, at 11:06pm, I'm back to having a pc2mac utility that will run the first screen, but it still does not work.  Now that's progress.  Oh, and my wife just said, "How do I find your blog?"  Good night everybody.

[Update: 25 Nov 2009, 4:53pm]  I also had an issue logging an issue with VMware.  The first problem was that I could not register the product.  The VMware registration website kept giving an error that it was not a valid code despite the fact that it was in the email and the product installation accepted it.  I was able to log a customer service issue about not being able to register, but I was not able to log a technical issue without first registering.

I called and spoke with an extremely helpful support rep.  She told me they were having issues with some registration numbers and offered to create the technical ticket for me.  She was so helpful that I went back to the VMware site to drop a note to anyone that would accept it.  This is the note I typed:

I recently purchased VMware Fusion.  I generally know my way around technology, but just registering my VMware software on the website was impossible (truly, as there is currently a bug in your system).  After quite a bit of frustration, though, I finally called and spoke with a support representative.  I'm sorry that I did not get her full name, but she created a new SR for me, 1459858086, and it has been assigned to SGARDNER.

This support rep was very helpful, polite and overall reflected very well on VMware.  Though my issue has not been resolved, I had a very positive experience speaking with this rep.  If you have the ability to commend her I hope you will.
That's not bad, right?  Unfortunately, though, it never went anywhere.  The only page I could find with a place to type the note was here:
The page never allowed me to pick a Country or State.  It just kept showing "loading..." without ever showing a country.  Of course, those fields are mandatory.

Sorry friendly support rep, I'm afraid that note won't make it anywhere.

Wednesday, November 04, 2009

Looking for APEX Developers

I have a client in the northeast of the US. If you are really good with the Oracle DB, pl/sql and APEX, and live or want to move to the US Northeast, send me your info and I'll pass it along. Please send an email with your resume to me (anton) at work (



Wednesday, September 30, 2009

Cloning an Oracle Schema

When I started this blog I decided I would only blog about things you could not find reasonably easily with a simple google search. This post violates that rule. Cloning a schema is something that I do fairly frequently, but not frequently enough that I remember the exact syntax. First I googled it every time; later I created a little text file with the commands. Having already gone to the trouble to write a text file, I might as well just paste it in here.

I use datapump. In the example below I want to export the SCOTT schema, which has all of its data in the USERS tablespace, and import it into a database (either the same or another Oracle database) as the user SCOTT2 in the tablespace USERS2. That means I need to remap both the schema and the tablespace. Here are the commands (replace values as appropriate for your env):
export ORACLE_HOME=/opt/oracle/product/oracle10g
export ORACLE_SID=c2dev1
./expdp system/ dumpfile=scott.dmp schemas=scott
Note: this will create the file scott.dmp in the location $ORACLE_HOME/admin/$ORACLE_SID/dpdump (here, $ORACLE_HOME/admin/c2dev1/dpdump). You can create a different directory for it, but for my purposes this is sufficient.
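If you do want the dump files somewhere else, a directory object is the usual route. A minimal sketch, run as a DBA -- the name DP_DIR and the path /u01/dpdump are my own illustrative choices, not part of the commands above:

```sql
-- Illustrative only: pick your own directory name and filesystem path.
CREATE OR REPLACE DIRECTORY dp_dir AS '/u01/dpdump';
GRANT READ, WRITE ON DIRECTORY dp_dir TO system;
```

Then add directory=DP_DIR to the expdp and impdp command lines so both read and write from that location.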

That is all it takes to do the export.
If you plan to import into the same database it can stay in the same location.

To do the import into a different database you will need to copy the scott.dmp file to the right location for the other database.
cp $ORACLE_HOME/admin/c2dev1/dpdump/scott.dmp /mynas/scott.dmp
Then I ssh to the target machine and copy to the new target location
cp /mynas/scott.dmp $ORACLE_HOME/admin/c2dev2/dpdump/scott.dmp

export ORACLE_HOME=/opt/oracle/product/oracle10g
export ORACLE_SID=c2dev2
./impdp system/ dumpfile=scott.dmp remap_schema=scott:scott2 remap_tablespace=USERS:USERS2
That should do it. This should make it easy for me to do in the future and hopefully help someone else along the way.

Tuesday, September 29, 2009

APEXposed 2009 - A request for input

I will be speaking at APEXposed 2009 in Atlanta, GA on 10 & 11 November. Some of you have certainly seen the How to Hack an APEX Application presentation. I will be giving a revised version of that. It is difficult to find interesting things because the APEX developers keep adding features to make it harder for developers to get into trouble, but I'll have a few items of interest, plus the old standbys.

My second talk is APEX and the Oracle Database.
The power of APEX is partly the immense scope of capabilities present in the Oracle database. In this presentation I am going to show how to use many of these capabilities within APEX. Below are a few topics I have in mind.
  • Oracle Text (Intermedia)
  • Spatial
  • SQL Analytics
  • File Compression
  • owa routines
  • utl_inaddr
  • External Tables
  • Virtual Private Database
I'd love to get additional ideas--if you have any favorites, please let me know!

Wednesday, June 10, 2009

Migrating Portal Repository with change in DN

Another in a long line of very esoteric issues...

If you are migrating a portal repository, possibly from Production back into Dev or vice versa you may run into the following error when running ptlconfig

STEP 1 : Populating Portal seed in OID
Creating Lightweight User Accounts and Groups in OID
Portal schema version is:
Error code : -6502
Error message: ORA-06502: PL/SQL: numeric or value error
ERROR: creating lightweight users and groups in OID ... exiting

PL/SQL procedure successfully completed.

This happens if you have changed the base DN of your OID. For example, you might have had a DN of dc=concept2completion,dc=net and then decided to change to dot com (dc=concept2completion,dc=com), or possibly you removed a sub-domain from the DN.

In either case, you will get the error listed above. The problem is that the script secoidd.sql relies on the DN stored in the table wwsub_model$. Below is an extract of secoidd.sql:

if l_subscriber_dn is null then
-- The control should never reach here as the subscriber DN
-- should be available in the wwsub_model$ table.
So, if you get into this position, you will need to update the value in the dn column of wwsub_model$.
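That fix can be sketched as a simple update -- the two DN strings below are placeholders for whatever your actual before and after base DNs are, not values from the script:

```sql
-- Illustrative only: substitute your real old and new base DNs.
UPDATE wwsub_model$
   SET dn = REPLACE(dn, 'dc=concept2completion,dc=net',
                        'dc=concept2completion,dc=com');
COMMIT;
```

After the update, re-running ptlconfig should get past the ORA-06502 in the lightweight user step.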

Seems like such a simple solution. So simple that I have now gone through the process of figuring it out three times.