Thursday, May 18, 2017

Moving your APEX Workspace to a Clone

Last September I posted a list of steps to take when cloning an Oracle database that has APEX installed. The scenario is common for some, especially EBS customers--clone production and turn it into development. You have fresh data that way and are sure that you are developing against the exact configuration of the production environment. I'm not suggesting this as a best practice--I'm just recognizing that it happens.

In addition to the steps I described in September, I recently ran into the need to migrate Workspace users and Team Development content from an existing development instance into the new production clone. If you've followed the best practice of initially exporting your workspace from Dev and migrating it to Test and Production, moving users and other content into the clone is pretty easy. By having the same workspace in Dev, Test and Production you automatically also have it in the Production clone.

I won't say this process is supported, but we have used it successfully.

Step 1: Log into the APEX_ADMIN application (INTERNAL workspace).

Step 2: Export the workspace that has the Team Development content you wish to migrate.

Step 3: Edit the export file and remove the following:

  • The section that begins:  prompt  Creating workspace ABC
  • Any groups that already exist or that you don't wish to create in the clone: wwv_flow_api.create_user_groups
  • Any users that already exist or that you don't wish to create: wwv_flow_fnd_user_api.create_fnd_user

Step 4: Run the edited file as the APEX_nnnnnn user in the clone.
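For example, from SQL*Plus or SQLcl on the clone (the schema name and file name below are placeholders--adjust them to your APEX version and to the file you just edited):

-- connect as the APEX schema owner of the clone, e.g. APEX_050100
conn APEX_050100

-- run the edited workspace export
@workspace_abc_edited.sql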

That's it. Your Team Dev content should be there.

Thursday, May 04, 2017

The Benefits of an IoT Cloud Infrastructure

I recently presented on the Internet of Things at the Montreal Oracle Developer Day. It was a great day of Oracle tech and I had an engaged and enthusiastic group of participants. The session was less than an hour long, yet during this time a few attendees assembled an IoT thermostat while I created an Oracle REST Data Services (ORDS) JSON service to interact with the device. I also created an APEX application to control the device and report on the data it passed to the service. I ended the session with an overview of the benefits of an IoT Cloud Infrastructure--specifically Oracle's IoT Cloud Service. After the session a couple of people asked me why I used an Electric Imp instead of a Raspberry Pi. These questions let me know that I didn't do enough to highlight the benefits of the infrastructure services.

I often do this presentation as a hands-on lab during which every participant builds a physical device. I may have 60 or more devices to prepare for a single session. While that is a lot of devices, it pales in comparison to the volume of devices that a typical IoT product might involve. How many Nest and Ecobee thermostats are in the wild? Fitbits? Initially programming the devices, hooking them up to a wifi network, communicating with them through firewalls and across wide area networks, and pushing patches to them are all big concerns when dealing with the scale associated with IoT. A few years ago, I chose the Electric Imp as a device because it has a built-in infrastructure for handling these kinds of concerns. A Raspberry Pi is a great device, but it doesn't have the built-in IoT support that comes with an Electric Imp.

Oracle is addressing these requirements through its IoT Cloud Service (IOTCS). The benefits of using an IoT infrastructure include the following:

  • Device Virtualization
  • High Speed Bi-Directional Messaging
  • Device Management
  • Stream Processing, Data Enrichment, Event Store
  • Integration via applications and APIs

If you are looking to build a product, it's time to consider an IoT platform. I'll discuss more about the Oracle IoTCS in a future post.

For now, though, I want to encourage people to play with whatever physical-to-software interface is fun to work with. Inexpensive and effective sensors are readily available. Electric Imp, Raspberry Pi, Arduino, Zensio, Sparkfun Thing and many other platforms make it easy to build up a device. Have fun.


Tuesday, April 18, 2017

XML Parsing failed with LPX-00261: invalid URL


As promised in my last post, I'll address the parsing error you get when you attempt to convert the text returned by the NCBI efetch utilities into an XMLType. The Oracle XML parser within the database treats the DOCTYPE declaration "<!DOCTYPE TaxaSet PUBLIC "-//NLM//DTD Taxon, 14th January 2002//EN" "https://www.ncbi.nlm.nih.gov/entrez/query/DTD/taxon.dtd">" as an indicator to validate the XML against the DTD located at "https://www.ncbi.nlm.nih.gov/entrez/query/DTD/taxon.dtd". In many, maybe even most, cases this is fine. If you are generating an XML payload you may want to validate it before sending it out to avoid the embarrassment of sending an invalid message. If you are retrieving it from the source, though, odds are good it will be valid and validating it against the DTD may just be extra overhead. It's also possible that the XML parser doesn't understand the DTD. At least one version of the database has a bug related to exactly how that declaration is formatted. In my case there are at least two issues related to validating against the DTD, and I have no need for it. Below is the query and the associated error.

select xmltype(
'<?xml version="1.0" ?>
<!DOCTYPE TaxaSet PUBLIC "-//NLM//DTD Taxon, 14th January 2002//EN" "https://www.ncbi.nlm.nih.gov/entrez/query/DTD/taxon.dtd">
<TaxaSet><Taxon>
    <TaxId>33208</TaxId>
    <ScientificName>Metazoa</ScientificName>
    <OtherNames>
        <GenbankCommonName>metazoans</GenbankCommonName>
        <BlastName>animals; animals</BlastName>
        <Synonym>Animalia</Synonym>
        <CommonName>multicellular animals</CommonName>
    </OtherNames>
    <ParentTaxId>33154</ParentTaxId>
    <Rank>kingdom</Rank>
    <Division>Invertebrates</Division>
    <GeneticCode>
        <GCId>1</GCId>
        <GCName>Standard</GCName>
    </GeneticCode>
    <MitoGeneticCode>
        <MGCId>1</MGCId>
        <MGCName>Standard</MGCName>
    </MitoGeneticCode>
    <Lineage>cellular organisms; Eukaryota; Opisthokonta</Lineage>
    <LineageEx>
        <Taxon>
            <TaxId>131567</TaxId>
            <ScientificName>cellular organisms</ScientificName>
            <Rank>no rank</Rank>
        </Taxon>
        <Taxon>
            <TaxId>2759</TaxId>
            <ScientificName>Eukaryota</ScientificName>
            <Rank>superkingdom</Rank>
        </Taxon>
        <Taxon>
            <TaxId>33154</TaxId>
            <ScientificName>Opisthokonta</ScientificName>
            <Rank>no rank</Rank>
        </Taxon>
    </LineageEx>
    <CreateDate>1995/02/27 09:24:00</CreateDate>
    <UpdateDate>2017/02/16 16:52:33</UpdateDate>
    <PubDate>1992/05/26 01:00:00</PubDate>
</Taxon>
</TaxaSet>') the_xml
  from dual;


ORA-31011: XML parsing failed 
ORA-19213: error occurred in XML processing at lines 2 
LPX-00261: invalid URL https://www.ncbi.nlm.nih.gov/entrez/query/DTD/taxon.dtd

Fortunately, you can instruct the database to NOT validate.

alter session set events='31156 trace name context forever, level 2'

will instruct the database to skip the validation. You can then use xmlTable, xmlQuery, etc. without receiving an error.

Note: you may need to "grant alter session" to the user in order for this to work, particularly if using an execute immediate within a package to do the alter session command.
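If you need to set the event from stored PL/SQL--the execute immediate case mentioned above--a minimal sketch looks like this. Note that the grant typically must be made directly to the schema, not through a role:

begin
  -- requires: grant alter session to <schema>;
  execute immediate
    q'[alter session set events='31156 trace name context forever, level 2']';

  -- xmltype()/xmltable() calls in this session now skip DTD validation
end;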




Saturday, April 15, 2017

The Man in the Middle

I recently ran into an Oracle database bug (version 12c). It's definitely a bug, and there may even be a patch for it, maybe. We've all been in this position, though. There's definitely a bug. We've identified something similar in the Oracle support site bug database. There's a patch for it. How long will it take to get the patch installed? Can I even install the patch on an Oracle Database Appliance (or Exadata or whatever other special circumstance)? What happens if the patch doesn't fix the specific issue?

Here is my scenario. I need to interact with the National Institutes of Health taxonomy database. It has REST based services--check them out:

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=taxonomy&term=Metazoa&usehistory=n

https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=taxonomy&id=33208

So, super easy, I can use apex_web_service.make_rest_request to make the request, parse the response and do some science!

Let's see what happens:

select 
  apex_web_service.make_rest_request(
        p_url => 'https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi?db=taxonomy&id=33208',
        p_http_method => 'GET'
        ) taxonomy_info
        from dual
        ;

ORA-29273: HTTP request failed
ORA-28860: Fatal SSL error
ORA-06512: at "SYS.UTL_HTTP", line 368
ORA-06512: at "SYS.UTL_HTTP", line 1118
ORA-06512: at "APEX_050100.WWV_FLOW_WEB_SERVICES", line 636
ORA-06512: at "APEX_050100.WWV_FLOW_WEB_SERVICES", line 760
ORA-06512: at "APEX_050100.WWV_FLOW_WEBSERVICES_API", line 236
ORA-06512: at line 1
29273. 00000 -  "HTTP request failed"
*Cause:    The UTL_HTTP package failed to execute the HTTP request.
*Action:   Use get_detailed_sqlerrm to check the detailed error message.
           Fix the error and retry the HTTP request.

Now, I know that I've used this same kind of call from this same database before, so I do a little more digging and try this:

select 
  apex_web_service.make_rest_request(
        p_url => 'https://oracle.com',
        p_http_method => 'GET'
        )
        from dual
        ;

ORA-29273: HTTP request failed
ORA-29024: Certificate validation failure
ORA-06512: at "SYS.UTL_HTTP", line 1258
ORA-06512: at "APEX_050100.WWV_FLOW_WEB_SERVICES", line 717
ORA-06512: at "APEX_050100.WWV_FLOW_WEB_SERVICES", line 760
ORA-06512: at "APEX_050100.WWV_FLOW_WEBSERVICES_API", line 236
ORA-06512: at line 1
29273. 00000 -  "HTTP request failed"
*Cause:    The UTL_HTTP package failed to execute the HTTP request.
*Action:   Use get_detailed_sqlerrm to check the detailed error message.
           Fix the error and retry the HTTP request.

It's a different error, but still an error! Now I'm getting concerned. I assume it has to do with using https because both errors reference SSL/TLS. I try a variety of http and https sites. Most work fine, but occasionally I get one of the two errors above. I think they are actually two different errors. The second can probably be fixed by adding some certs to the db wallet, but the first, the one I really care about, appears to need a database patch: Patch 24666032. I'm not sure that would fix it, and for a variety of reasons, I can't get the patch installed and tested in a timely fashion.

But...the scientists have work to do and I don't want to stand in the way of science. So, I developed a hack. Please don't judge me. Of course, the title of this post has already given it away. It seems so obvious when you already know the answer, but it took some pondering to come up with it: the man in the middle. While I am the man in the middle, between the scientists and the data, I need another man in the middle to solve the problem. That MITM is an Apache Reverse Proxy.

As I mentioned, I know this database can use apex_web_service.make_rest_request--at least with my own internal development web server. So I set up a reverse proxy on our internal Apache server to route the request to the NIH REST server.


  1. Added an entry to our internal DNS server
    [internal IP address]   ncbi.concept2completion.com
  2. Created a virtual host entry on our internal Apache web server and restarted
    <VirtualHost *:443>
       ServerName ncbi.concept2completion.com
     
       # forward proxying is not needed for a reverse proxy
       ProxyRequests off
       SSLProxyEngine on
       ProxyPass / https://eutils.ncbi.nlm.nih.gov/
       ProxyPassReverse / https://eutils.ncbi.nlm.nih.gov/
    </VirtualHost>
  3. Pointed the apex_web_service.make_rest_request to our internal server.

select 
  apex_web_service.make_rest_request(
        p_url => 'https://ncbi.concept2completion.com/entrez/eutils/efetch.fcgi?db=taxonomy&id=33208',
        p_http_method => 'GET'
        ) taxonomy_info
        from dual
        ;


<?xml version="1.0" ?>
<!DOCTYPE TaxaSet PUBLIC "-//NLM//DTD Taxon, 14th January 2002//EN" "https://www.ncbi.nlm.nih.gov/entrez/query/DTD/taxon.dtd">
<TaxaSet><Taxon>
    <TaxId>33208</TaxId>
    <ScientificName>Metazoa</ScientificName>
    <OtherNames>
        <GenbankCommonName>metazoans</GenbankCommonName>
        <BlastName>animals; animals</BlastName>
        <Synonym>Animalia</Synonym>
        <CommonName>multicellular animals</CommonName>
    </OtherNames>
    <ParentTaxId>33154</ParentTaxId>
    <Rank>kingdom</Rank>
    <Division>Invertebrates</Division>
    <GeneticCode>
        <GCId>1</GCId>
        <GCName>Standard</GCName>
    </GeneticCode>
    <MitoGeneticCode>
        <MGCId>1</MGCId>
        <MGCName>Standard</MGCName>
    </MitoGeneticCode>
    <Lineage>cellular organisms; Eukaryota; Opisthokonta</Lineage>
    <LineageEx>
        <Taxon>
            <TaxId>131567</TaxId>
            <ScientificName>cellular organisms</ScientificName>
            <Rank>no rank</Rank>
        </Taxon>
        <Taxon>
            <TaxId>2759</TaxId>
            <ScientificName>Eukaryota</ScientificName>
            <Rank>superkingdom</Rank>
        </Taxon>
        <Taxon>
            <TaxId>33154</TaxId>
            <ScientificName>Opisthokonta</ScientificName>
            <Rank>no rank</Rank>
        </Taxon>
    </LineageEx>
    <CreateDate>1995/02/27 09:24:00</CreateDate>
    <UpdateDate>2017/02/16 16:52:33</UpdateDate>
    <PubDate>1992/05/26 01:00:00</PubDate>
</Taxon>

</TaxaSet>


A hack? Yes, but done in the name of science.

Stay tuned for my next post on how to solve the parsing issue with the XML shown above. Also, we now have a pl/sql API for interacting with the NIH efetch service--it's extensive and provides access to the taxonomy lineage, synonyms, etc. Contact me if you are interested in the API.

Wednesday, January 04, 2017

ORDS Quirks - slashes and feeds

Returning to the stated purpose of this blog, "specific solutions to isolated problems," I've run into a few quirks with Oracle REST Data Services recently. The first is simply figuring out exactly what version of ORDS I am running. There are (at least) two components, the mid-tier java (ords.war) and the database metadata and packages (in ORDS_METADATA). Does anyone know the best way to get these? (Edit: See comments for answer to this.)

Next, I've run into quirks around exactly where to put slashes. In an earlier post (http://c2anton.blogspot.com/2016/06/super-quick-oracle-rest-service-with.html) I demonstrated how to use the APIs to create ORDS modules, templates and handlers. I tried to use the exact commands in a newer ORDS version and ran into issues when using bind variables. I kept getting THE "ORACLE REST DATA SERVICES" banner along with "404 not found" whenever I created a template and handler with a bind variable.



This used to work:

    begin
      ORDS.DEFINE_MODULE(
       p_module_name    => 'antonModule',
       p_base_path      => '/antonmodule',
       p_items_per_page => 25,
       p_status         => 'PUBLISHED',
       p_comments       => NULL );

      commit;
    end;

    begin  
      ORDS.DEFINE_TEMPLATE(
       p_module_name => 'antonModule',
       p_pattern     => '/sqltest/{abc}/{def}',
       p_priority    => 0,
       p_etag_type   => 'HASH',
       p_etag_query  => NULL,
       p_comments    => NULL );
    
      commit;
    end;

But in the new version I have to put a trailing slash in define_module ( p_base_path => '/antonmodule/' ) and remove it from the beginning of p_pattern in define_template (  p_pattern => 'sqltest/{abc}/{def}'   ).
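For reference, here is roughly what the adjusted calls look like in the newer version--only the two path values change:

begin
  ORDS.DEFINE_MODULE(
   p_module_name    => 'antonModule',
   p_base_path      => '/antonmodule/',    -- trailing slash now required
   p_items_per_page => 25,
   p_status         => 'PUBLISHED',
   p_comments       => NULL );

  ORDS.DEFINE_TEMPLATE(
   p_module_name => 'antonModule',
   p_pattern     => 'sqltest/{abc}/{def}', -- no leading slash
   p_priority    => 0,
   p_etag_type   => 'HASH',
   p_etag_query  => NULL,
   p_comments    => NULL );

  commit;
end;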

It took me more time than I wish to admit to figure that out.

*****************************

The next issue is with the feed handler. The feed should create a $ref of this format

https://mymachine/ords/myuser/mymodule/myfeed/{uid}

but it is creating this
https://mymachine/ords/myuser/mymodule/{uid}

Maybe I just need to upgrade (edit: I upgraded to latest ORDS, 3.0.9. I still have the same issues.), but yikes, these two issues have made for a long day today. Hopefully this helps someone spend a little less time on this than I did :)

Monday, September 19, 2016

Cloning an Oracle Database with APEX applications

About a year ago I asked a question about cloning Oracle Databases in the APEX section of the Oracle Community. The question is here (no need to click the link, all of the content is also in this blog post):

https://community.oracle.com/thread/3786014

The question hasn't gotten much traction, so I'm reformulating it as a recommendation. Perhaps I'll get more feedback in this form. I'll also point out an element I think is a bug in APEX...you'll have to read to the end to see that.

It is a common practice to clone a production database and utilize the clone for testing or development purposes. This happens a great deal with eBusiness Suite implementations, but also with many other installations. Below is a sample list of steps (with limited guidance) that should be done to avoid side effects with APEX applications.


  1. Use RMAN or your favorite technology to backup and restore the production database, but DO NOT START.
  2. Change the database SID and name if not done above.
  3. Set JOB_QUEUE_PROCESSES to 0. This is a key step to make sure that when you start the database things don't start "happening." (See the sketch after this list.)
  4. Start the database.
  5. Assuming you are running a runtime only environment in Production, you will likely want to install the APEX builder in this new clone. Run apxdvins.sql to upgrade the runtime into a full development environment.
  6. Log into the INTERNAL workspace and modify instance settings: Instance URL, Image URL, SMTP server settings (if you wish to use a different SMTP server), Print Server settings, any other settings you want.
  7. Navigate to the Manage Instance > Mail Queue and delete anything in the queue. The clone may have happened while things were in the queue.
  8. Manage Instance > Interactive Report Descriptions: Delete all of the Interactive Report subscriptions. This is also a key step to ensure that you don't have emails going out to production users from your development or test environment.
  9. Manage Instance > Session State: Purge all session state. There could be sensitive production data that you don't want left around in session state.
  10. Modify any settings specific to your own applications, e.g. web service URLs, lookup values, etc.
  11. Reset JOB_QUEUE_PROCESSES to an appropriate value.
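One way to handle steps 3 and 11, assuming the clone is started in MOUNT mode with an spfile; the value 20 is only a placeholder--use your own production setting:

-- step 3: with the clone mounted but not yet open
alter system set job_queue_processes = 0 scope=both;

-- step 4: now open it
alter database open;

-- ... steps 5 through 10 ...

-- step 11: once the cleanup is done, restore your normal value
alter system set job_queue_processes = 20 scope=both;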


It would also be great to have Oracle provide a script that does the above things (with the exception of #10, of course).

I promised to call out a bug. Item #8 should delete all of the interactive report subscriptions, but it doesn't--at least not in APEX v5.0.4.00.12. The list of report subscriptions skips any application

where build_status = 'RUN_AND_HIDDEN'

This is (at least) packaged applications that have not been "unlocked." It turns out that deleting these subscriptions is NOT EASY. I originally thought a script like this might do it:

-- this would need to be done in each workspace because it uses the workspace views
begin

  for irRec in (select notify_id from APEX_APPLICATION_PAGE_IR_SUB) loop
  
    APEX_IR.DELETE_SUBSCRIPTION(
      p_subscription_id =>  irRec.notify_id);
    
  end loop;

end;

Unfortunately, APEX_APPLICATION_PAGE_IR_SUB doesn't see the subscriptions. It has the same issue as the page--it won't show subscriptions for applications with build_status = 'RUN_AND_HIDDEN'.

I tried a few other things, but in the end, the only way I could get rid of these was to just delete them from the underlying table, run as the APEX_050000 schema:

delete from wwv_flow_worksheet_notify;





Thursday, June 23, 2016

Query to find users granted an ACL -- the natural question after seeing ORA-24247

You may have encountered ORA-24247: network access denied by access control list (ACL) and wondered "who has access to what from my database?" I extended a query from the Oracle documentation to give me the results I wanted: ACL name, username, host, lower port, upper port, and whether the connect and resolve privileges are granted.

with privs as (
  SELECT acl,  u.username, host, lower_port, upper_port, 
     DECODE(
         DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE_ACLID(aclid, u.username, 'connect'),
            1, 'GRANTED', 0, 'DENIED', null) conn_privilege,
     DECODE(
         DBMS_NETWORK_ACL_ADMIN.CHECK_PRIVILEGE_ACLID(aclid, u.username, 'resolve'),
            1, 'GRANTED', 0, 'DENIED', null) res_privilege       
     FROM dba_network_acls a, dba_users u
     )
select *
  from privs
  where conn_privilege is not null
     or res_privilege is not null
  order by acl, username
     ;


It's nothing special, but can be a handy query.

Monday, June 13, 2016

Super Quick Oracle REST Service with OAuth2 and client_credentials

I had the need to allow system A to talk to system B via a REST service. The data was sensitive and powers above me requested that system A use OAuth2 to connect to system B. This REST service call does not involve an end user; it's system A pulling a CSV extract from system B. There are many ways to protect this, but the decision was to use OAuth2. Below is a cookbook on how to do this. This example assumes you have ORDS 3.x installed with the ORDS_PUBLIC_USER and ORDS_METADATA schemas configured.

I am running all commands as the user ANTON.

  1. For this example I will create a data source for our query:
    -- create the table in the ANTON database schema
    create table anton_table (c1 varchar2(500), c2 varchar2(500) );

    -- insert some sample data
    insert into anton_table (c1, c2)
      select owner, table_name
        from all_tables
        where rownum <= 20;

    -- commit
    commit;
  2. Enable REST on the ANTON schema:

    DECLARE
      PRAGMA AUTONOMOUS_TRANSACTION;
    BEGIN

        ORDS.ENABLE_SCHEMA(p_enabled => TRUE,
                           p_schema => 'ANTON',
                           p_url_mapping_type => 'BASE_PATH',
                           p_url_mapping_pattern => 'anton',
                           p_auto_rest_auth => FALSE);
       
        commit;

    END;

    -- check to see that it worked
    select id, parsing_schema from user_ords_schemas;

  3. Define a REST Module:

    begin
      ORDS.DEFINE_MODULE(
       p_module_name    => 'antonModule',
       p_base_path      => '/antonmodule',
       p_items_per_page => 25,
       p_status         => 'PUBLISHED',
       p_comments       => NULL );

      commit;
    end;


      
  4. Define a template. This is a URL pattern associated with the module "antonModule." Bind variables are contained in squiggly brackets: {bindVariableName}.

    begin  
      ORDS.DEFINE_TEMPLATE(
       p_module_name => 'antonModule',
       p_pattern     => '/sqltest/{abc}/{def}',
       p_priority    => 0,
       p_etag_type   => 'HASH',
       p_etag_query  => NULL,
       p_comments    => NULL );
    
      commit;
    end;
    
    This is interesting so take note! Notice my bind variables are abc and def. I tried using c1 and c2 but it seems that the bind variable names can not have numerals in them. Developer beware!
    
    
    
    
  5. Define a REST Handler based upon a sql query that takes two bind variables (abc and def) and returns a CSV file:

    begin  
      ORDS.DEFINE_HANDLER(
       p_module_name => 'antonModule',
       p_pattern     => '/sqltest/{abc}/{def}',
       p_method      => 'GET',
       p_source_type => ords.source_type_csv_query,
       p_source      => q'[select c1, c2 from anton_table where c1 = :abc and c2 = :def  ]',
       p_items_per_page  => 25,
       p_mimes_allowed   => NULL ,
       p_comments  => NULL );  

      commit;
    end;


    This is worth repeating: Notice my bind variables are abc and def. I tried using c1 and c2 but it seems that the bind variable names can not have numerals in them. Developer beware!
  6. At this point you can test your service:

    http://localhost:8080/ords/anton/antonmodule/sqltest/ANTON/ANTON_TABLE

  7. Define a privilege to protect it with OAuth2:

    declare

    l_roles       owa.vc_arr;
    l_patterns    owa.vc_arr;
    l_modules     owa.vc_arr;
    begin
     
      -- l_roles intentionally left empty
     
      -- populate arrays
      l_modules(1) := 'antonModule';
     

      ORDS.DEFINE_PRIVILEGE(
      p_privilege_name  => 'antonpriv',
      p_roles           => l_roles,
      p_patterns        => l_patterns,
      p_modules         => l_modules,
      p_label           => 'antonTestingPriv',
      p_description     => 'anton testing priv',
      p_comments        => null);

      commit;

    end; 
  8. Now you will find it protected:

    http://localhost:8080/ords/anton/antonmodule/sqltest/ANTON/ANTON_TABLE
  9. Create a client that is allowed to access it:

    begin
      OAUTH.create_client(
       p_name            => 'antonclient',
       p_grant_type      => 'client_credentials',
       p_owner           => 'anton',
       p_description     => NULL,
       --p_origins_allowed => NULL, -- param name depends on ORDS version
       p_allowed_origins => NULL,   -- param name depends on ORDS version
       p_redirect_uri    => 'http://localhost:8080/redirect',
       p_support_email   => 'anton@somewhere.com',
       p_support_uri     => 'http://localhost:8080/support',
       p_privilege_names => 'antonpriv');
      
       commit;
    end;
  10. Get the client_id and client_secret. You will need to log in as a user that has access to select from the ords_metadata tables (e.g. ORDS_METADATA or SYSTEM).

    select * from ords_metadata.oauth_clients;
  11. If you want to be able to do this from an http (not https) URL (which you should NEVER do in production--this is just for testing!!):

    -- turn off need for SSL
    1.    Locate the folder where the Oracle REST Data Services configuration is stored.
    2.    Edit the file named defaults.xml.
    3.    Add the following setting to the end of this file just before the </properties> tag.
    4.    <entry key="security.verifySSL">false</entry>
    5.    Save the file.
    6.    Restart Oracle REST Data Services if it is running.
  12. Test getting a bearer token

    curl -i -d "grant_type=client_credentials" --user "[client_id]:[client_secret]" http://localhost:8080/ords/anton/oauth/token

    You should receive a response like this:

    HTTP/1.1 200 OK
    Content-Type: application/json
    X-Frame-Options: SAMEORIGIN
    Transfer-Encoding: chunked

    {"access_token":"Zc9b9HDoP9rUKB189Bf1Yg..","token_type":"bearer","expires_in":3600}


    Notes:
    -- curl is a tool for making http(s) requests from the command line
    --  -i includes the response headers in the output (note this example uses http -- you SHOULD use https)
    --  -d allows you to pass in data
    --  --user allows you to pass a user:password for basic authentication
    --  then pass the appropriate URL to get a bearer token
  13. Test getting your CSV

    curl -i --header "Authorization: Bearer [access_token from step 12]" http://localhost:8080/ords/anton/antonmodule/sqltest/ANTON/ANTON_TABLE --output anton.csv
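If you would rather run the token test from inside the database instead of curl, the request in step 12 can also be made with apex_web_service. This is just a sketch; the [client_id] and [client_secret] placeholders come from step 10, and the URL assumes the same local ORDS instance used above.

declare
  l_response clob;
begin
  -- the token endpoint expects a form-encoded body
  apex_web_service.g_request_headers(1).name  := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded';

  l_response := apex_web_service.make_rest_request(
      p_url         => 'http://localhost:8080/ords/anton/oauth/token',
      p_http_method => 'POST',
      p_username    => '[client_id]',      -- basic authentication, like curl --user
      p_password    => '[client_secret]',
      p_body        => 'grant_type=client_credentials');

  -- the response is the same JSON shown above; parse out access_token as needed
  dbms_output.put_line(l_response);
end;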

Friday, April 15, 2016

LOBs Over a Database Link


There are lots of pluses and minuses to db links, but they are certainly easy and used in the right context they work very well. I admit that I sometimes use them when there is a better technical solution--just because it is so easy and the better solution may not be worth the time.

The case of LOBs over db links can be tricky. You can't select a lob over a db link in SQL or PL/SQL:

select my_blob
  from my_table@mylink;

results in

ORA-22992: cannot use LOB locators selected from remote tables

There are several techniques that work. You CAN do

insert into my_local_table (the_blob)
  select my_blob
  from my_table@mylink;

There is another interesting technique here:

https://technology.amis.nl/2012/07/02/select-a-blob-across-a-database-link-without-getting-ora-22992/


We recently had a requirement to just show the first few hundred characters of a lob over a db link. It was a complicated query and the developer wrote something like this:


select local.c1, local.c2, remote.c3
    , dbms_lob.substr(remote.my_blob, 200, 1) blob200
  from local_table local
  inner join remote_table@mylink remote on remote.c2 = local.c2;

This worked fine in the development and test environment. In production it gave the ORA-22992. It depended on how the optimizer chose to run the query. If the dbms_lob.substr ran on the remote database it was fine, but if it had to pull the blob to the local db it was a problem.


We solved it by forcing the dbms_lob.substr to run on the remote node:

select local.c1, local.c2, remote.c3
    , dbms_lob.substr@mylink(remote.my_blob, 200, 1) blob200
  from local_table local
  inner join remote_table@mylink remote on remote.c2 = local.c2;

Thursday, March 31, 2016

apex_web_service.make_rest_request not working with POST

I recently encountered a web service that I was unable to use with POST and apex_web_service. I was using a statement like this:

declare
    l_clob       CLOB;
BEGIN

    l_clob := apex_web_service.make_rest_request(
        p_url => 'http://myMachine/myService',
        p_http_method => 'POST',
        p_parm_name => apex_util.string_to_table('param1:param2'),
        p_parm_value => apex_util.string_to_table('xyz:xml'));

END;

I've used this many times in the past, but this particular service would not recognize the parameters passed in p_parm_name and p_parm_value. I was able to use curl with the same transaction.

curl -X POST -d "param1=xyz&param2=xml" http://myMachine/myService

I must say, it was VERY frustrating. I finally enabled full logging on Apache using mod_dumpio. 

(Apache 2.4.x)
# uncomment
LoadModule dumpio_module modules/mod_dumpio.so


# add
    <IfModule dumpio_module>
        # Apache 2.4 syntax; in 2.2 use "LogLevel debug" plus "DumpIOLogLevel debug"
        # (DumpIOLogLevel does not work in 2.4)
        LogLevel dumpio:trace7
        DumpIOInput On
        DumpIOOutput On
    </IfModule>

I reviewed the logs to find out if apex_web_service.make_rest_request was doing something different than curl. I say "if," but clearly something had to be different. In the logs I found this line from curl but not from apex_web_service:

mod_dumpio:  dumpio_in (data-HEAP): Content-Type: application/x-www-form-urlencoded\r\n

I was able to get things working by adding the Content-Type header as shown below.


declare
    l_clob       CLOB;
BEGIN
  apex_web_service.g_request_headers(1).name := 'Content-Type';
  apex_web_service.g_request_headers(1).value := 'application/x-www-form-urlencoded'; 

    l_clob := apex_web_service.make_rest_request(
        p_url => 'http://myMachine/myService',
        p_http_method => 'POST',
        p_parm_name => apex_util.string_to_table('param1:param2'),
        p_parm_value => apex_util.string_to_table('xyz:xml'));

END;


I hope this helps someone!

Thursday, January 08, 2015

AVOID the VOID

I've recently been involved with extending a number of systems that have pre-built data models. I'm generally unhappy with these data models for a variety of reasons. There are many great academic texts on data modeling. I will try to put together a bibliography in an upcoming post. For now, I'll start by discussing the "never delete data" trend. It is generally coupled with the use of a column to indicate that the data should have been deleted (typically a column named VOID) but was instead allowed, indeed required, to linger forever in the table.

There are typically two arguments in favor of the "never delete, add a VOID column" data model: I want to know what happened, for traceability purposes; and I want to be able to do incremental extracts to populate some other system, so I need to know whether to VOID the row in the other system.

Example without VOID

It's easiest to deal with a concrete example, so let's make one. Assume we have an employee table that stores data about employees. For the purposes of my argument (and because it probably makes sense) let us assume we require a unique SSN for each employee. Typically this table would look like this:


CREATE table "EMPLOYEE" (
    "ID"         NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
    "SSN"        VARCHAR2(32) NOT NULL,
    "LAST_NAME"  VARCHAR2(128) NOT NULL,
    "FIRST_NAME" VARCHAR2(128),

    "SALARY"     NUMBER,
    "OTHER_INFO" VARCHAR2(256),
    "LAST_UPDATED" DATE NOT NULL,
    constraint  "EMPLOYEE_PK" primary key ("ID")
)
/

alter table "EMPLOYEE" add
constraint "EMPLOYEE_SSN_UK"
unique ("SSN")
/
  

Because we want to be able to do incremental updates, we need the LAST_UPDATED column to be not null and we need to ensure it is always set correctly. There are many reasons to avoid triggers; just do an internet search for "Tom Kyte triggers" to see a number of valid arguments. For this purpose, though, I will add a trigger:

create or replace trigger "EMPLOYEE_BRIUT"
BEFORE
insert or update on "EMPLOYEE"
for each row
begin

  :new.last_updated := sysdate;

end;


Note that I have told the database that SSN will be unique by adding EMPLOYEE_SSN_UK.
The database will automatically also create a unique index of the same name.
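You can confirm the index that backs the constraint with a quick data dictionary query:

select index_name, uniqueness
  from user_indexes
  where table_name = 'EMPLOYEE';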

Let's explore what happens if two users attempt to insert employees with the same SSN.

User A (note lack of commit):
insert into employee (ssn, last_name)
  values ('123456789', 'Smith');


 1 rows inserted.


 User B:
 insert into employee (ssn, last_name)
  values ('123456789', 'Smith');


 ....waiting....

User A:

commit;

committed.

 User B:

Error starting at line : 1 in command -
insert into employee (ssn, last_name)
  values ('123456789', 'Smith')
Error report -
SQL Error: ORA-00001: unique constraint (ANTON.EMPLOYEE_SSN_UK) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.



The database recognized that the SSN was a duplicate and did not allow User B to insert. Had User A issued a rollback, User B could have continued, but once User A had committed the record, User B received an error.

This functionality was completed with a single line of code:

alter table "EMPLOYEE" add constraint "EMPLOYEE_SSN_UK" unique ("SSN")

If I were coding an API and wanted to capture the error, it would require one additional line of code (assuming you already have the keyword EXCEPTION in your API):

exception
  when DUP_VAL_ON_INDEX then ... do something
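For context, a minimal sketch of that handler inside an insert procedure--the procedure and its parameters are hypothetical, not part of the example above:

create or replace procedure add_employee (
  p_ssn        in employee.ssn%type,
  p_last_name  in employee.last_name%type )
is
begin
  insert into employee (ssn, last_name)
    values (p_ssn, p_last_name);
exception
  when dup_val_on_index then
    -- ... do something, e.g. return a friendly message
    raise_application_error(-20001,
      'An employee with SSN ' || p_ssn || ' already exists.');
end add_employee;
/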

Later you realize that this person NEVER should have been entered into the employee table. This was not an employee, it was a customer. You issue the following command:

delete from employee
  where ssn = '123456789';
 
commit;


Later still, this customer becomes an employee. You issue the following:

insert into employee (ssn, last_name)
  values ('123456789', 'Smith');


commit;

This works just fine and no additional code is required.

Example With VOID

Let us assume that someone convinces you to disallow any deletes. Instead you are asked to add a VOID column. The VOID column will contain a V if the record is "void," else it will be null.

 CREATE table "EMPLOYEE_NO_DELETE" (
    "ID"         NUMBER GENERATED BY DEFAULT ON NULL AS IDENTITY,
    "SSN"        VARCHAR2(32) NOT NULL,
    "LAST_NAME"  VARCHAR2(128) NOT NULL,
    "FIRST_NAME" VARCHAR2(128),

    "SALARY"     NUMBER,
    "OTHER_INFO" VARCHAR2(256),
    "VOID"       VARCHAR2(1),

    "LAST_UPDATED" DATE NOT NULL,
    constraint  "EMPLOYEE_ND_PK" primary key ("ID")
)
/


create or replace trigger "EMPLOYEE_ND_BRIUT"
BEFORE
insert or update on "EMPLOYEE_NO_DELETE"
for each row
begin

  :new.last_updated := sysdate;

end;
 
/

Given the scenario listed above, we won't be able to add the unique constraint on SSN. If we were to do so, we would not be able to add the employee the second time, as there would already be an employee record with that same SSN. Perhaps we could get away with making SSN + VOID unique.

alter table "EMPLOYEE_NO_DELETE" add constraint "EMPLOYEE_ND_SSN_V_UK" unique ("SSN", VOID)
/

That seems to do the trick.

insert into employee_no_delete (ssn, last_name)
  values ('123456789', 'Smith');


1 rows inserted.
 
update employee_no_delete
  set void = 'V'
  where ssn = '123456789';
 

1 rows updated.
 
insert into employee_no_delete (ssn, last_name)
  values ('123456789', 'Smith');
 

1 rows inserted.
 
commit;


We still have all of the great features around row locking on uniqueness provided by the database.

Of course, if your users are anything like mine, you will find that Mr. Smith has once again been added as an employee but he is really a customer. So...

update employee_no_delete
  set void = 'V'
  where ssn = '123456789';


Ah, but here we get

Error starting at line : 1 in command -
update employee_no_delete
  set void = 'V'
  where ssn = '123456789'
Error report -
SQL Error: ORA-00001: unique constraint (ANTON.EMPLOYEE_ND_SSN_V_UK) violated
00001. 00000 -  "unique constraint (%s.%s) violated"
*Cause:    An UPDATE or INSERT statement attempted to insert a duplicate key.
           For Trusted Oracle configured in DBMS MAC mode, you may see
           this message if a duplicate entry exists at a different level.
*Action:   Either remove the unique restriction or do not insert the key.


We already have a row with this combination of SSN and VOID. There is no choice but to remove the constraint, and, along with it, all of the multi-user concurrency features provided by the database.

alter table "EMPLOYEE_NO_DELETE" drop constraint
"EMPLOYEE_ND_SSN_V_UK"
/   


By dropping the constraint we lose more than we can possibly recover by adding our own code. Nevertheless, let's try.

Now we have to add code to ensure that two users (in two separate sessions) never insert or update rows to cause a duplicate SSN. That means we must insist that all updates happen through an API. You might argue that only updates that include the SSN must go through the API. There are edge cases where this could cause deadlocks--and more importantly, it would be difficult to allow updates to everything except SSN. Hence, we have a new rule: All updates must go through our API. This rule also means that, unless we code special additional APIs, all processing is row by row (AKA slow by slow). Want to give everyone a 10% raise? That means updating each row rather than issuing a single update.

Our API is also somewhat complicated. We must ensure that there is only one insert or update that involves SSN at a time--across sessions. As we don't have much insight into the happenings of another session, we'll need some way to track this. In order to serialize the inserts and any updates that might change the SSN, we must lock the EMPLOYEE_NO_DELETE table--the whole table. This means before each insert or update we must issue

lock table employee_no_delete in share mode nowait;

We might consider using WAIT instead of NOWAIT, especially as we assume that there will be a lot of table locks.

lock table employee_no_delete in share mode wait 10;

Alternative Method

I've seen this implemented manually by creating another table that tracks table names--and then the API must lock the appropriate row.

CREATE table TABLE_WITH_VOID_COLUMN (
    TABLE_NAME VARCHAR2(256),
    constraint  "TABLE_NAME_PK" primary key ("TABLE_NAME")
)
/


insert into table_with_void_column (table_name)
  values ('EMPLOYEE_NO_DELETE');

commit;

We would never actually update this row, but would lock it in order to interact between sessions.
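A minimal sketch of how the API would take that lock, assuming the row was seeded as shown above:

declare
  l_table_name table_with_void_column.table_name%type;
begin
  -- blocks until any other session holding this row lock commits or rolls back
  select table_name
    into l_table_name
    from table_with_void_column
    where table_name = 'EMPLOYEE_NO_DELETE'
    for update;

  -- ...now check for a duplicate SSN and do the insert or update...
end;
/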

That method involves creating a new table and code to accomplish something Oracle already provides. Obviously, that is something we are already attempting by adding the VOID column and coding around it, so I'm not surprised to see custom table locking implemented by the same folks who implement custom delete handling.

Back to the API

In order to ensure that a row is never deleted and that we never have a duplicate SSN, we need an API such as the one below.

create or replace package employee_ND_api is

procedure ins ( p_emp_rec    in employee_no_delete%rowtype) ;

end employee_ND_api;  

/


create or replace package body employee_ND_api is

procedure ins ( p_emp_rec    in employee_no_delete%rowtype) is

duplicate_ssn   exception;
l_count         number;

begin

  lock table employee_no_delete in share mode wait 10;
  begin

    select 1 into l_count
      from dual
      where exists (select 1 from employee_no_delete e
        where e.ssn = p_emp_rec.ssn);

    -- oops we found a row already there
    raise duplicate_ssn;

  exception when no_data_found then
    -- no existing row with this SSN, so it is ok to continue
    insert into employee_no_delete values p_emp_rec;
  end;

-- note: We cannot commit here. There should be a full transaction,
--       actions before and after this action, that need to be
--       committed together. Hence, the lock on
--       employee_no_delete is held until the whole transaction
--       completes.
end ins;

end employee_ND_api; 

/

The update routine would be slightly more complicated as we must also lock the row we intend to update, but the INS routine above points to some issues already. We have now locked the entire EMPLOYEE_NO_DELETE table. As noted, we can't commit the newly inserted record as there may be other DML that needs to occur--inserts or updates to other rows or data in other tables. Hence, the entire table remains locked until the final commit or rollback. No other session can insert or update any row of EMPLOYEE_NO_DELETE until we complete.

Moreover, there may be many tables--perhaps all tables--in our system with this same requirement. Hence, when we attempt to update data in another table in our unified transaction, we will need to take the same approach--lock the entire table. Unless every transaction in our system always follows the same order, we will certainly run into deadlocks on a frequent basis: one session will lock EMPLOYEE_NO_DELETE, another will lock DEPT_NO_DELETE, the first will attempt to lock DEPT_NO_DELETE but be blocked. Then the second will attempt to lock EMPLOYEE_NO_DELETE and the database will detect a deadlock--forcing a rollback of one of the sessions. There is no way to avoid this.

The Incremental Update Requirement

Clearly EMPLOYEE_NO_DELETE, with its void column, has problems. The requirement to do incremental updates of another system, though, remains. If we return to the EMPLOYEE table and allow the row to be deleted using the EMPLOYEE table rather than setting VOID = 'V' in the EMPLOYEE_NO_DELETE table, how does the incremental update routine know to remove (or void) the row?

This is trivial with the use of a trigger on the EMPLOYEE table. Whenever a delete occurs, write a row to another table to indicate the delete. I'll mention Tom Kyte's dislike of triggers here again. I generally agree with Tom on this point. I don't even like the trigger I used above to populate the last_updated column. In the case of audit tables, though, I think a trigger is absolutely warranted. We are not changing any data in the base table and there are no foreign keys or even constraints on the audit table. Users would only ever be granted SELECT on that table. This is the place for a trigger. Depending upon our audit requirements, we might just indicate who took the action, when, and whether it was an insert, update or delete. If we really need traceability, though, it's easy to capture the whole image of the row. I'll do that for this example:


-- create the audit table 

CREATE table "EMPLOYEE_AUDIT" (
    "ID"         NUMBER,
    "SSN"        VARCHAR2(32) NOT NULL,
    "LAST_NAME"  VARCHAR2(128) NOT NULL,
    "FIRST_NAME" VARCHAR2(128),

    "SALARY"     NUMBER,
    "OTHER_INFO" VARCHAR2(256),
    "LAST_UPDATED" DATE NOT NULL,
    "ROW_ACTION"     VARCHAR2(32) 
)
/


-- create the trigger

create or replace trigger "EMPLOYEE_ARIUT"
AFTER
insert or update or delete on "EMPLOYEE"
for each row

declare
l_action  varchar2(32);
begin

  if inserting then l_action := 'INSERT';
  elsif updating then l_action := 'UPDATE';
  else l_action := 'DELETE';
  end if;

  if deleting then
    -- on delete the :new values are null, so capture the :old image of the row
    insert into employee_audit (id, ssn, last_name, first_name
      , salary, other_info, last_updated, row_action)
      values (:old.id, :old.ssn, :old.last_name, :old.first_name
      , :old.salary, :old.other_info, :old.last_updated, l_action);
  else
    insert into employee_audit (id, ssn, last_name, first_name
      , salary, other_info, last_updated, row_action)
      values (:new.id, :new.ssn, :new.last_name, :new.first_name
      , :new.salary, :new.other_info, :new.last_updated, l_action);
  end if;

end;

/

The incremental routine can simply query the deleted row to gather the data. With sufficient data in the audit table, we can create a view that looks exactly like EMPLOYEE_NO_DELETE, but without its inherent shortcomings (nay, fatal flaws).

--create a view that has the deleted row
create or replace view employee_with_void as
select id, ssn, last_name, first_name, salary
    , other_info, last_updated, null void
  from employee
union
select id, ssn, last_name, first_name, salary
    , other_info, last_updated, 'V' void
  from employee_audit
  where row_action = 'DELETE' 
/

The audit table can provide much better information if we need it. With just a LAST_UPDATED column (and no audit table), the incremental routine would never know about multiple changes that occur between the incremental runs. It may not need to, but if it does, the audit table provides that ability.

In fact (not supposition, but absolute fact) the right method is to allow the row to be deleted. This provides true data protection, performs better and requires far less code which is also far less complicated.

Still Unconvinced?

OK, maybe you don't have the requirement for any unique constraints. First, I don't buy that argument for well over 90% of tables. If you are doing incremental updates in another system, you need some way to identify where to apply the incremental update--that would be the logical unique key. But, for the sake of argument, we will assume that you don't have any unique keys. That means that you don't need to be concerned about multi-user concurrency issues as much. What about that VOID column, though? Will you allow a row to be "unvoided?" Will you allow any updates to a voided record? If not, you would definitely need an API to keep that from happening. Given every possible argument and every leniency of requirement, it will still be less complicated for developers and users of the data to AVOID the VOID. Every query written by every user or developer, except the incremental update routine, must contain

where void is null

Indexes may need to include the VOID column--you'll need to do the analysis on a table by table basis.

You definitely need an API and likely slow by slow processing everywhere.

And here is the worst part: even the incremental load process is more complicated--if you want to get it right. If you just have two columns for audit/delete purposes, LAST_UPDATED and VOID, you lack the fidelity to know whether a row ever existed or what its intermediate states were. Take this scenario, for example:

midnight: incremental process runs
9:00 am  User adds employee Smith, SSN = 123456789
9:15 am  User updates Smith to Jones, SSN = 123459999
10:00 am Payroll runs and pays Jones $500
noon  User updates Jones to Smith, SSN = 123456789
1:00 pm User voids the record
midnight: incremental process runs

All that the last incremental process sees is that there was a row for Smith, SSN = 123456789 and that it was voided. What is the incremental process expected to do with that information?

Need I mention foreign keys? If you have either a parent or a child record, how do you handle the relationship? Obviously a parent record can't be deleted; it would have to be voided. All child records would have to be voided as well. The cascading all has to be coded for--not to mention the locking of all of the cascading. At this point I have to ask: why did your company spend so much money on an Oracle database? I doubt it was to hire developers to code the same features again--with less functionality.

Adding a table is easy. Creating a view is easy. All subsequent code benefits from these constructs. Why does the VOID persist? If anyone has a reason for a VOID column, let me know in the comments. Until then, please join with me in this movement--AVOID the VOID.