Feed aggregator

Oracle Fusion Financials R13 - Reports and Analyses

Richard Byrom - Sat, 2020-11-28 17:42

Perhaps the most useful report listing I always have at my disposal is the one Oracle delivers for Financials. It lists all the reports currently available for the following modules:

  • Accounting Hub
  • Advanced Collections
  • Assets
  • Budgetary Control
  • Cash Management
  • Expenses
  • Financials for EMEA
  • Financials for Asia/Pacific
  • Financials for the Americas
  • General Ledger
  • Intercompany Accounting
  • Payables
  • Payments
  • Receivables
  • Revenue Management
  • Tax

Furthermore, it classifies the reports according to type, sub-type and run type, and provides the catalogue location, a description and report output samples.

Download the Oracle Financials Cloud Release 13 Reports and Analyses Listing, which covers reports available from Release 12 up to R13.18A.

Preview of Oracle Fusion Financials R13 - Reports and Analyses Spreadsheet

Documentum – Custom facets not showing up after full reindex?

Yann Neuhaus - Fri, 2020-11-27 16:19

At the beginning of the year, while performing a migration from a Documentum 7.3 environment on VMs to Documentum 16.4 on Kubernetes, a customer had an issue where their custom facets weren’t showing up in D2 after a full reindex. At the end of the migration, since xPlore had been upgraded as well (from xPlore 1.5 to 16.4, from VM to K8s), a full reindex was executed so that all the documents would be indexed. In this case, several million documents were indexed and it took a few days. Unfortunately, at the end of the full reindex, the customer saw that the facets weren’t working…

Why is that exactly? Well, while configuring custom facets, you need to add a subpath configuration for the facet computing, and that is a schema change inside the index. Each and every schema change requires at the very least an online rebuild of the index so that the change of the schema is propagated into each and every node of the index. Unless you do this online rebuild, the xPlore index schema will NOT be refreshed and the indexing of documents will therefore use the old schema. In case you are wondering what the “online rebuild” I’m talking about is, it’s the action behind the “Rebuild Index” button that you can find in the Dsearch Admin UI under “Home >> Data Management >> <DOMAIN_NAME> (usually Repo name) >> <COLLECTION_NAME> (e.g.: default or Node1_CPS1 or Node4_CPS2 …)“:

This action will not index any new content; it will, however, create a new index based on the refreshed schema and then copy all the nodes from the current index to the new one. At the end, it will replace the current index with the new one, and this can be done online without downtime. This button was initially present for both Data collections (where your documents are) and ApplicationInfo collections (ACLs, Groups). However, in recent versions of xPlore (at least since 16.4), the feature has been removed for the ApplicationInfo collections.
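
For reference, the facet subpath mentioned above is declared in the indexserverconfig.xml file of the Dsearch. The fragment below is only an illustrative sketch (the attribute shown is hypothetical, and the exact element and attribute names vary between xPlore versions), but it shows why this is a schema change: each subpath becomes part of the index definition itself:

<!-- illustrative only: expose a custom attribute as a facet subpath (attribute name is hypothetical) -->
<sub-path-definitions>
    <sub-path path="dmftmetadata//my_custom_attr" type="string" value-comparison="true" returning-contents="true" full-text-search="true"/>
</sub-path-definitions>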

 

So, what is the minimum required to configure custom facets? The answer is that it depends… :). Here are some examples:

  • If xPlore has never been started, the index doesn’t exist yet and therefore configuring the facets inside the indexserverconfig.xml file takes effect immediately at the first startup. In this case, an online rebuild wouldn’t even be needed. However, it might not always be easy to modify the indexserverconfig.xml file before xPlore even starts; it depends on how you are deploying the components…
  • If xPlore has been started at least once but indexing hasn’t started yet (0 content inside the Data collections), then you can just log in to the Dsearch Admin UI and perform the online rebuild on the empty collections. This will be almost instantaneous, so you will most probably not even see it happen.
    • If this is a new environment, then make sure the IndexAgent is started in normal mode after that so that it will process incoming indexing requests, and that’s it.
    • If this is an existing environment, then you will need to execute a full reindex operation using your preferred method (IndexAgent full reindex action, through some select queries, through the ids.txt).
  • If xPlore has been started at least once and the indexing has been completed, then you will need to perform the online rebuild as well. However, this time it will probably take quite some time because, as I mentioned earlier, it needs to copy all the indexed nodes to a new index. This process is normally faster than a full reindex because it’s only xPlore-internal communication, because it only duplicates the existing index (and applies the schema change), and because there is no exchange with the Content Server. Once the online rebuild has been performed, the facets should be available.

 

Even if an online rebuild is faster than a full reindex, depending on the size of the index it might still take from hours to days to complete. It is therefore quite important to plan this properly in advance in case of a migration or upgrade, so that you can start with an online rebuild on an empty index (therefore instantaneously done) and then perform the needed full reindex afterwards, instead of the opposite. This might save you several days of pain with your users and considerably reduce the load on the Dsearch/CPS.

This behavior wasn’t really well documented before. I had some exchanges with OpenText on this topic and they created KB15765485 based on these exchanges and on what is described in this blog. I’m not sure if that is really better now, but at least there is a little bit more information.

 

Cet article Documentum – Custom facets not showing up after full reindex? est apparu en premier sur Blog dbi services.

Kubernetes with microK8s: First steps to expose a service to external

Dietrich Schroff - Fri, 2020-11-27 15:02

At home I wanted to have my own Kubernetes cluster. I own 2 Raspberry Pis running Ubuntu, so I decided to install microK8s:

--> https://ubuntu.com/blog/what-can-you-do-with-microk8s

The installation is very well explained here:

https://ubuntu.com/tutorials/install-a-local-kubernetes-with-microk8s#1-overview

 

BUT: I found nowhere a tutorial on how to run a container and expose its port in a way that it is reachable from other PCs, like localhost.

So here we go:

kubectl create deployment web --image=nginx
kubectl expose deployment web --type=NodePort --port=80

After that just do:

# kubectl get all
NAME                      READY   STATUS    RESTARTS   AGE
pod/web-96d5df5c8-5xvfc   1/1     Running   0          112s

NAME                 TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
service/kubernetes   ClusterIP   10.152.183.1    <none>        443/TCP        2d5h
service/web          NodePort    10.152.183.66   <none>        80:32665/TCP   105s

NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/web   1/1     1            1           112s

NAME                            DESIRED   CURRENT   READY   AGE
replicaset.apps/web-96d5df5c8   1         1         1       112s

On your Kubernetes node you can reach the service at 10.152.183.66:80.

To get the nginx welcome page from another PC, just use:

<yourkuberneteshost>:32665
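
A quick way to verify this from any machine that can reach the node is a plain curl (the hostname is a placeholder and 32665 is the NodePort that "kubectl get all" reported above, so yours will differ):

curl http://<yourkuberneteshost>:32665   # replace with your node's name/IP and your NodePort; the nginx default page should come back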


PostgreSQL 14: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on the fly

Yann Neuhaus - Fri, 2020-11-27 09:52

It is a common misunderstanding that VACUUM FULL saves you from running out of disk space if you already have space pressure. Running a VACUUM FULL temporarily requires at least double the space, as the table (and the indexes on the table) get completely re-written. PostgreSQL 14 will probably come with a solution for that, as this patch introduces the possibility to move relations from one tablespace to another when either CLUSTER, VACUUM FULL or REINDEX is executed.

As this is about moving relations from one tablespace to another we obviously need at least two tablespaces to play with:

postgres=# \! mkdir /var/tmp/tbs1
postgres=# \! mkdir /var/tmp/tbs2
postgres=# create tablespace tbs1 location '/var/tmp/tbs1';
CREATE TABLESPACE
postgres=# create tablespace tbs2 location '/var/tmp/tbs2';
CREATE TABLESPACE
postgres=# \db
          List of tablespaces
    Name    |  Owner   |   Location    
------------+----------+---------------
 pg_default | postgres | 
 pg_global  | postgres | 
 tbs1       | postgres | /var/tmp/tbs1
 tbs2       | postgres | /var/tmp/tbs2
(4 rows)

Let’s assume we have a table in the first tablespace and we face space pressure on that file system:

postgres=# create table t1 ( a int, b date ) tablespace tbs1;
CREATE TABLE
postgres=# insert into t1 select x, now() from generate_series(1,1000000) x;
INSERT 0 1000000

Without that patch there is not much you can do, except for this (which blocks for the duration of the operation):

postgres=# alter table t1 set tablespace tbs2;
ALTER TABLE
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 
Tablespace: "tbs2"

This will move the files of that table to the new tablespace (but not the indexes). If you really want to get back the space on disk with “vacuum full” you can now do that:

postgres=# vacuum (tablespace tbs1, full true)  t1;
VACUUM
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 
Tablespace: "tbs1"
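
If you want to double-check where the table’s data files actually live after such a move, pg_relation_filepath() is a quick way to do it (just a verification sketch; after the VACUUM FULL above the returned path should point under pg_tblspc and the OID of tbs1):

postgres=# -- the path should start with pg_tblspc/<oid of tbs1>/ after the move
postgres=# select pg_relation_filepath('t1');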

The very same is possible with reindex:

postgres=# create index i1 on t1 (a);
CREATE INDEX
postgres=# reindex (tablespace tbs2) index i1;
REINDEX

… and cluster:

postgres=# cluster (tablespace tbs1, index_tablespace tbs1) t1 using i1;
CLUSTER
postgres=# \d t1
                 Table "public.t1"
 Column |  Type   | Collation | Nullable | Default 
--------+---------+-----------+----------+---------
 a      | integer |           |          | 
 b      | date    |           |          | 
Indexes:
    "i1" btree (a) CLUSTER, tablespace "tbs1"
Tablespace: "tbs1"

postgres=# 

Nice.

Cet article PostgreSQL 14: Allow CLUSTER, VACUUM FULL and REINDEX to change tablespace on the fly est apparu en premier sur Blog dbi services.

TCPS Connection With an Oracle Instant Client

Pete Finnigan - Fri, 2020-11-27 09:46
All of our products (PFCLScan, PFCLCode, PFCLObfuscate and http://www.petefinnigan.com/products/pfclforensics.htm) can use an Oracle Instant Client to connect to the target database(s), or even a full client. It is of course simpler to use an instant client if....[Read More]

Posted by Pete On 27/11/20 At 03:56 PM

Categories: Security Blogs

Local Web-App (ASP.NET Core with EF Core works flawlessly) Publishing to azure

Tom Kyte - Fri, 2020-11-27 09:46
When I've published the ASP.NET Core Web App to Azure I get the following error when trying to use the Oracle database:

<code>
NetworkException: ORA-12154: TNS:could not resolve the connect identifier specified
OracleInternal.Network.AddressResolution..ctor(string TNSAlias, SqlNetOraConfig SNOConfig, Hashtable ObTnsHT, Hashtable ObLdpHT, string instanceName, ConnectionOption CO)

OracleException: ORA-12154: TNS:could not resolve the connect identifier specified
OracleInternal.ConnectionPool.PoolManager<PM, CP, PR>.Get(ConnectionString csWithDiffOrNewPwd, bool bGetForApp, OracleConnection connRefForCriteria, string affinityInstanceName, bool bForceMatch)
</code>

I fix these errors by setting TNS_ADMIN & WALLET_LOCATION in my Application settings (ENV). I have also set WEBSITE_LOAD_USER_PROFILE. Then I get a new error when trying to run Oracle DB code:

An unhandled exception occurred while processing the request.
OracleException: Connection request timed out
OracleInternal.ConnectionPool.PoolManager<PM, CP, PR>.Get(ConnectionString csWithDiffOrNewPwd, bool bGetForApp, OracleConnection connRefForCriteria, string affinityInstanceName, bool bForceMatch)

This one I can't seem to figure out. Does anyone have any pointers or things I can try? My connection string looks like this and works perfectly on the local setup:

<code>"User Id=user;Password=password;Data Source=db202011081718_medium;"</code>

and is injected here in startup.cs:

<code>
services.AddDbContext<ApplicationDbContext>(options =>
    options.UseOracle(Configuration.GetConnectionString("OracleDBContextConnection")));
</code>

I must be missing something related to the Azure/Oracle specifics that aren't playing nice. I've watched this video 3/4 times: https://www.youtube.com/watch?v=-IkDuJy30mY
Categories: DBA Blogs

Delete 40% (4TB out of 10TB) of unwanted data using Partitioning techniques from Oracle db (11g) and reclaim space.

Tom Kyte - Fri, 2020-11-27 09:46
Problem statement: Delete 40% (4TB out of 10TB) of unwanted data using partitioning techniques from an Oracle db (11g) and reclaim the space.

Details: My application DB schema size is 10 TB. It is Oracle 11g and the schema is co-located with many other schemas, hence no option with DB downtime is allowed.

  • Top tables based on size start from 3 TB (Table_1), 2 TB (Table_2), and so on.
  • Planned to delete 40% of unwanted data from each table (with a where criteria).
  • Delete scripts are practically not possible on huge table data, as they cause the DB server to hang AND impact other schema services.

Below are not options due to downtime constraints and no FS to create temp tables and switch back:
  • Removing data by creating a table
  • Switching the tables over (CTAS)

Strategy proposed for deletion:
  • Partition (split partition / sub-partition) these tables based on data patterns (where clause) to separate the data.
  • After separating the purge-eligible data, truncate it in the background (during off-business hours) --> drop / truncate unwanted partitions.
  • After truncating the partitions, make the table un-partitioned back to the original.

Reclaiming of space: which is the best option to choose?
  1. alter database datafile xxx.dbf resize
  2. alter tablespace xxx coalesce
  3. export-import
  4. dbms_redefinition
  5. alter table xxx shrink

Any guidance on the proposed solution?
  • What partitioning should be used for a pattern that has joins/where clauses including 3 to 4 tables?
  • Pattern-based partitioning examples, and is that supported in 11g?
  • How to un-partition the main table back to the original after truncating the unwanted partitions?
Categories: DBA Blogs

Cannot read external table or read using utl_file.get_line on Windows 10 with Oracle 18c

Tom Kyte - Fri, 2020-11-27 09:46
I just upgraded to Oracle 18c XE from 11g. I have an external table that worked fine on 11g, but I keep getting the following errors on 18c:

ORA-29913: error in executing ODCIEXTTABLEOPEN callout
ORA-29400: data cartridge error
KUP-11604: no directory object specified for log file

The directory does exist and I have the correct grants.

<code>SELECT * FROM all_directories WHERE directory_name = 'MYDIR';</code>

OWNER  DIRECTORY_NAME  DIRECTORY_PATH                                 ORIGIN_CON_ID
SYS    MYDIR           C:\Users\sneuf\OneDrive\PLSQL\OracleDirectory  1

<code>SELECT * FROM all_tab_privs WHERE table_name = 'MYDIR' AND grantee = 'C##_SNEUF';</code>

GRANTOR  GRANTEE    TABLE_SCHEMA  TABLE_NAME  PRIVILEGE  GRANTABLE  HIERARCHY  COMMON  TYPE       INHERITED
SYS      C##_SNEUF  SYS           MYDIR       EXECUTE    YES        NO         NO      DIRECTORY  NO
SYS      C##_SNEUF  SYS           MYDIR       READ       YES        NO         NO      DIRECTORY  NO
SYS      C##_SNEUF  SYS           MYDIR       WRITE      YES        NO         NO      DIRECTORY  NO

I'm pretty sure I'm missing a grant somewhere, but I can't figure out what. Here is my table:

<code>CREATE TABLE C##_SNEUF.CHECKING_TBL_EXT2
(
  DB_KEY             NUMBER,
  CHECK_NUM          VARCHAR2(10),
  TRANS_DATE         TIMESTAMP (6),
  DESCRIPTION        VARCHAR2(100),
  DEPOSIT_WITHDRAWAL VARCHAR2(1),
  AMOUNT             VARCHAR2(12),
  MEMO               VARCHAR2(200)
)
ORGANIZATION EXTERNAL
(
  TYPE ORACLE_LOADER
  DEFAULT DIRECTORY MYDIR
  ACCESS PARAMETERS
  (
    RECORDS DELIMITED BY NEWLINE
    BADFILE MYDIR: 'checking.bad'
    FIELDS TERMINATED BY "," OPTIONALLY ENCLOSED BY '"' LDRTRIM
    MISSING FIELD VALUES ARE NULL
    (
      DB_key             CHAR,
      check_num          CHAR(10),
      trans_date         CHAR(21) DATE_FORMAT DATE MASK 'MM/DD/YYYY HH24:MI:SS',
      description        CHAR(100),
      deposit_withdrawal CHAR(1),
      amount             CHAR(12),
      memo               CHAR(200)
    )
  )
  LOCATION ( MYDIR: 'checking.csv' )
)
REJECT LIMIT UNLIMITED
;</code>

Thanks, Steve
Categories: DBA Blogs

Documentum DM_SYSOBJECT_E_NO_BROWSE_ACCESS No browse access for sysobject with ID

Yann Neuhaus - Thu, 2020-11-26 06:50

When upgrading you may get this specific issue if you are trying to edit content or create new documents. We got it when creating a new document in a specific folder: D2 told us that the folder doesn’t exist and that it had no browse access.

Foolishness! The folder does exist and all accesses are set properly, so what happened? Here is the full log:

Caused by: com.documentum.fc.client.DfIdNotFoundException: [DM_API_E_EXIST]error:  "Folder specified by 0b010ba18005277d does not exist."
        at com.documentum.fc.client.impl.session.Session.getFolderBySpecification(Session.java:1289)
        at com.documentum.fc.client.impl.session.SessionHandle.getFolderBySpecification(SessionHandle.java:835)
        at com.emc.smartwave.web.servlets.listeners.properties.D2CreationServicePlugin.printParentFolder(D2CreationServicePlugin.java:151)
        at com.emc.smartwave.web.servlets.listeners.properties.D2CreationServicePlugin.formerOnAfter(D2CreationServicePlugin.java:187)
        at com.emc.smartwave.web.servlets.listeners.properties.D2CreationServicePlugin.createProperties(D2CreationServicePlugin.java:133)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.callPlugin(InjectSessionAspect.java:487)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.ajc$inlineAccessMethod$com_emc_d2fs_dctm_aspects_InjectSessionAspect$com_emc_d2fs_dctm_aspects_InjectSessionAspect$callPlugin(InjectSessionAspect.java:1)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.process(InjectSessionAspect.java:130)
        at com.emc.d2fs.dctm.web.services.create.D2CreationService.createProperties(D2CreationService.java:97)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.callPlugin(InjectSessionAspect.java:487)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.ajc$inlineAccessMethod$com_emc_d2fs_dctm_aspects_InjectSessionAspect$com_emc_d2fs_dctm_aspects_InjectSessionAspect$callPlugin(InjectSessionAspect.java:1)
        at com.emc.d2fs.dctm.aspects.InjectSessionAspect.process(InjectSessionAspect.java:236)
        ... 55 common frames omitted
Caused by: com.documentum.fc.common.DfException: [DM_SYSOBJECT_E_NO_BROWSE_ACCESS]error:  "No browse access for sysobject with ID '0b010ba18005277d'."
        at com.documentum.fc.client.impl.docbase.DocbaseExceptionMapper.newException(DocbaseExceptionMapper.java:57)
        at com.documentum.fc.client.impl.connection.docbase.MessageEntry.getException(MessageEntry.java:39)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseMessageManager.getException(DocbaseMessageManager.java:137)
        at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.checkForMessages(NetwiseDocbaseRpcClient.java:329)
        at com.documentum.fc.client.impl.connection.docbase.netwise.NetwiseDocbaseRpcClient.applyForObject(NetwiseDocbaseRpcClient.java:672)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection$8.evaluate(DocbaseConnection.java:1382)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.evaluateRpc(DocbaseConnection.java:1141)
        at com.documentum.fc.client.impl.connection.docbase.DocbaseConnection.applyForObject(DocbaseConnection.java:1374)
        at com.documentum.fc.client.impl.docbase.DocbaseApi.parameterizedFetch(DocbaseApi.java:107)
        at com.documentum.fc.client.impl.objectmanager.PersistentDataManager.fetchFromServer(PersistentDataManager.java:191)
        at com.documentum.fc.client.impl.objectmanager.PersistentDataManager.getData(PersistentDataManager.java:82)
        at com.documentum.fc.client.impl.objectmanager.PersistentObjectManager.getObjectFromServer(PersistentObjectManager.java:355)
        at com.documentum.fc.client.impl.objectmanager.PersistentObjectManager.getObject(PersistentObjectManager.java:311)
        at com.documentum.fc.client.impl.session.Session.getObject(Session.java:965)
        at com.documentum.fc.client.impl.session.Session.getFolderBySpecification(Session.java:1281)
        ... 74 common frames omitted

The issue resolution is quite simple: it’s just a cache problem.

You only need to force the clients to clear their caches. There’s a simple step to do this: the Content Server keeps a reference to the “client version” in the dm_docbase_config object. We only need to increment its value to force all clients to reload and clear their caches. So here is how you do it:

In IAPI:

retrieve,c,dm_docbase_config
set,c,l,client_pcaching_change
7
save,c,l
reinit,c

Just increase the client_pcaching_change number by 1.
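
If you prefer to script it rather than type it interactively, the same API calls can be piped into the iapi utility; a rough sketch (repository name, user and password are placeholders, and the value 7 must be your current client_pcaching_change incremented by 1):

# placeholders: adapt repository, credentials and the new counter value to your environment
iapi REPO_NAME -Udmadmin -Pxxxx << EOF
retrieve,c,dm_docbase_config
set,c,l,client_pcaching_change
7
save,c,l
reinit,c
EOF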

Cet article Documentum DM_SYSOBJECT_E_NO_BROWSE_ACCESS No browse access for sysobject with ID est apparu en premier sur Blog dbi services.

Row sizes 3

Jonathan Lewis - Thu, 2020-11-26 06:08

Several years ago I published a couple of examples of scripts that I’d been using since Oracle 6 (with minor changes over time) to analyse the content of tables in some detail. The first was a breakdown of the lengths of the rows in the table, the second was a map showing the distribution of nulls in the rows. There used to be a third component of the analysis which produced a report of the number of non-null entries for each column in the table,  but by the time I published the first two parts there didn’t seem to be much point in publishing the third since you could get a sufficiently accurate picture by querying view user_tab_columns after gathering stats:


break on num_rows

select
        tab.num_rows, col.column_name , col.num_nulls
from 
        user_tables             tab
join
        user_tab_columns        col
on
        col.table_name = tab.table_name
where
        tab.table_name = 'T1' 
order by 
        col.num_nulls desc, col.column_id
/


  NUM_ROWS COLUMN_NAME           NUM_NULLS
---------- -------------------- ----------
     56777 EDITION_NAME              56777
           CREATED_APPID             56777
           CREATED_VSNID             56777
           MODIFIED_APPID            56777
           MODIFIED_VSNID            56777
           SUBOBJECT_NAME            56570
           DATA_OBJECT_ID            55353
           DEFAULT_COLLATION         51058
           EDITIONABLE               40216
           OWNER                         0
           OBJECT_NAME                   0
           OBJECT_ID                     0
           OBJECT_TYPE                   0
           CREATED                       0
           LAST_DDL_TIME                 0
           TIMESTAMP                     0
           STATUS                        0
           TEMPORARY                     0
           GENERATED                     0
           SECONDARY                     0
           NAMESPACE                     0
           SHARING                       0
           ORACLE_MAINTAINED             0
           APPLICATION                   0
           DUPLICATED                    0
           SHARDED                       0


In this example t1 is a copy of view all_objects and I’ve listed the columns in order of most frequently null with the table’s num_rows reported at the top for comparison purposes. If you were going to do something that made it a good idea to rebuild the table you could use this report as a guide to re-arranging the column order to position the most frequently null columns towards the end of the row (saving the “trailing nulls take no space” length bytes).

This approach depends on the stats being up to date and accurate, of course, so if you didn’t have them, and didn’t want to collect them, another strategy would be to run a query like the following:


select
        to_char(count(OWNER),'999,999,999,999,999') OWNER,
        to_char(count(OBJECT_NAME),'999,999,999,999,999') OBJECT_NAME,
        to_char(count(SUBOBJECT_NAME),'999,999,999,999,999') SUBOBJECT_NAME,
        to_char(count(OBJECT_ID),'999,999,999,999,999') OBJECT_ID,
        to_char(count(DATA_OBJECT_ID),'999,999,999,999,999') DATA_OBJECT_ID,
        to_char(count(OBJECT_TYPE),'999,999,999,999,999') OBJECT_TYPE,
        to_char(count(CREATED),'999,999,999,999,999') CREATED,
        to_char(count(LAST_DDL_TIME),'999,999,999,999,999') LAST_DDL_TIME,
        to_char(count(TIMESTAMP),'999,999,999,999,999') TIMESTAMP,
        to_char(count(STATUS),'999,999,999,999,999') STATUS,
        to_char(count(TEMPORARY),'999,999,999,999,999') TEMPORARY,
        to_char(count(GENERATED),'999,999,999,999,999') GENERATED,
        to_char(count(SECONDARY),'999,999,999,999,999') SECONDARY,
        to_char(count(NAMESPACE),'999,999,999,999,999') NAMESPACE,
        to_char(count(EDITION_NAME),'999,999,999,999,999') EDITION_NAME,
        to_char(count(SHARING),'999,999,999,999,999') SHARING,
        to_char(count(EDITIONABLE),'999,999,999,999,999') EDITIONABLE,
        to_char(count(ORACLE_MAINTAINED),'999,999,999,999,999') ORACLE_MAINTAINED,
        to_char(count(APPLICATION),'999,999,999,999,999') APPLICATION,
        to_char(count(DEFAULT_COLLATION),'999,999,999,999,999') DEFAULT_COLLATION,
        to_char(count(DUPLICATED),'999,999,999,999,999') DUPLICATED,
        to_char(count(SHARDED),'999,999,999,999,999') SHARDED,
        to_char(count(CREATED_APPID),'999,999,999,999,999') CREATED_APPID,
        to_char(count(CREATED_VSNID),'999,999,999,999,999') CREATED_VSNID,
        to_char(count(MODIFIED_APPID),'999,999,999,999,999') MODIFIED_APPID,
        to_char(count(MODIFIED_VSNID),'999,999,999,999,999') MODIFIED_VSNID,
        to_char(count(*),'999,999,999,999,999') row_count 
from
         t1
;

You don’t really need the to_char() function, but it’s a convenience for what I’m going to do with the SQL. Obviously it would be a bit tedious to create this statement by hand so, as I did in the previous “rowsize” notes, I’ve written some code to generate it for me:


rem
rem     row_size_2b.sql
rem     
rem     Generate SQL to report counts of 
rem     non-null columns in a table.
rem

set linesize 32000
set feedback off
define m_table = '&1'

declare
        m_string        varchar2(32000) := 'select ';
        m_cursor        sys_refcursor;

begin
        for r in (
                select 
                        column_name, data_type
                from    user_tab_columns
                where   table_name = upper('&m_table')
        ) loop
                m_string := m_string ||  
                                ' to_char(count(' || trim(r.column_name) || '),''999,999,999,999,999'') ' ||
                                trim(r.column_name) || ',' || chr(10) ;
        end loop;

        m_string := m_string || ' to_char(count(*),''999,999,999,999,999'') row_count from &m_table';

--      dbms_output.put_line(m_string);
        print_table(m_string);

end;
/

set linesize 156

The script accepts a table name from the user’s schema (you could edit this to query dba_tab_columns), constructs a string (as shown above – though I’ve done a little cosmetic work on it), and then passes the resulting string to a version of Tom Kyte’s print_table() routine which produces the following output:


OWNER                         :               56,777
OBJECT_NAME                   :               56,777
SUBOBJECT_NAME                :                  207
OBJECT_ID                     :               56,777
DATA_OBJECT_ID                :                1,424
OBJECT_TYPE                   :               56,777
CREATED                       :               56,777
LAST_DDL_TIME                 :               56,777
TIMESTAMP                     :               56,777
STATUS                        :               56,777
TEMPORARY                     :               56,777
GENERATED                     :               56,777
SECONDARY                     :               56,777
NAMESPACE                     :               56,777
EDITION_NAME                  :                    0
SHARING                       :               56,777
EDITIONABLE                   :               16,561
ORACLE_MAINTAINED             :               56,777
APPLICATION                   :               56,777
DEFAULT_COLLATION             :                5,719
DUPLICATED                    :               56,777
SHARDED                       :               56,777
CREATED_APPID                 :                    0
CREATED_VSNID                 :                    0
MODIFIED_APPID                :                    0
MODIFIED_VSNID                :                    0
ROW_COUNT                     :               56,777

-----------------

1 rows selected

As with many of the scripts I’ve published, the task is one I rarely do, so I haven’t bothered to add in all the extra little refinements that might make it foolproof and totally self-contained. In this case, for example, I’d have to go through a final couple of manual steps to edit the output by deleting the last few lines then (since I use vi) issuing “!%!sort -k3 -n -r”, which would sort the results in reverse order, numerically, according to the third field.
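
The same sort could of course be done outside vi once the trailing lines have been trimmed; a rough command-line equivalent (the file name is just a placeholder for the spooled report) would be:

sort -k3 -n -r row_size_report.lst   # reverse numeric sort on the third field, same flags as the vi filter above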

 

print_table()

Jonathan Lewis - Thu, 2020-11-26 03:40

Many years ago Tom Kyte published a small PL/SQL procedure to output each row in a table as a list of (column_name , value). I often use the original version, even though Tom refined it several times over the years. From time to time the links in my blog that I’ve set up to point to Tom’s code have simply stopped working so I’ve decided to publish a reference copy on my blog in case newer readers need to find a copy and the current link above stops  working.

rem
rem     Script:         print_table.sql
rem     Dated:          Nov 2001
rem     Author:         Tom Kyte - from Expert One on One Oracle.
rem
rem     Purpose:
rem     Prints the results of a query one column per line.
rem
rem     Created as a 'authid current_user' so that it can
rem     access any table that the caller is allowed to access
rem     from the SQL*Plus prompt.
rem
rem     A suitably privileged user could also make this publicly
rem     available through the public synonym and grant at the end
rem     of the script
rem
rem     Minor modifications by Jonathan Lewis to separate rows
rem     in the result set, and report the number of rows returned
rem

create or replace procedure print_table( p_query in varchar2 )
authid current_user
is
        l_theCursor     integer default dbms_sql.open_cursor;
        l_columnValue   varchar2(4000);
        l_status        integer;
        l_descTbl       dbms_sql.desc_tab;
        l_colCnt        number;
        l_rowcnt        number := 0;
begin
        execute immediate 'alter session set nls_date_format=''dd-mon-yyyy hh24:mi:ss'' ';

        dbms_sql.parse(  l_theCursor,  p_query, dbms_sql.native );
        dbms_sql.describe_columns( l_theCursor, l_colCnt, l_descTbl );

        for i in 1 .. l_colCnt loop
                dbms_sql.define_column(l_theCursor, i, l_columnValue, 4000 );
        end loop;

        l_status := dbms_sql.execute(l_theCursor);

        while ( dbms_sql.fetch_rows(l_theCursor) > 0 ) loop
                for i in 1 .. l_colCnt loop
                        dbms_sql.column_value( l_theCursor, i, l_columnValue );
                        dbms_output.put_line(
                                rpad( l_descTbl(i).col_name, 30 )
                                || ' : ' || l_columnValue
                        );
                end loop;
                dbms_output.new_line;
                dbms_output.put_line( '-----------------' );
                dbms_output.new_line;
                l_rowcnt := l_rowcnt + 1;
        end loop;

        dbms_output.put_line(l_rowcnt || ' rows selected');

        execute immediate 'alter session set nls_date_format=''dd-mon-rr'' ';

exception
        when others then
                execute immediate 'alter session set nls_date_format=''dd-mon-rr'' ';
        raise;
end;
/

-- create public synonym print_table for print_table;
-- grant execute on print_table to public;
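
For anyone who hasn’t used it before, a minimal usage sketch from SQL*Plus looks like this (any query text can be passed in; serveroutput must be enabled to see the results):

set serveroutput on size unlimited
-- prints each selected row as one "column : value" pair per line
execute print_table('select table_name, num_rows from user_tables where rownum <= 2')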

The reason I’m publishing this now is that I’m about to publish the third variant of a piece of code I wrote many years ago, and first blogged about in 2012.

Oracle Autonomous JSON Database (AJD) - From Notation to Autonomous

On August 13th, 2020, Oracle announced the availability of Autonomous JSON Database—a new cloud service built for developers who are looking for an easy to use,...

We share our skills to maximize your revenue!
Categories: DBA Blogs

Normal Forms

Tom Kyte - Wed, 2020-11-25 21:26
Hi Tom, Just wanted to know if the Oracle database supports all the normal forms we know, i.e. 1st, 2nd, 3rd, BCNF, 4th and 5th. http://psoug.org/reference/normalization.html Is the scott schema we use fully normalized, or can it be normalized till 5th normal form? In general, till which normal form is it sufficient to conclude that the created database is in agreement with the standards? Thanks, Raghavendra B
Categories: DBA Blogs

Directory List including modify date and times - casting the date problem

Tom Kyte - Wed, 2020-11-25 21:26
hi Tom, I started with your dir listing code and it's working out very nice, but I am trying to figure out how to get the rest of the file info using the File object. I am having a problem with the dates specifically. Here's my attempt, but as you can see there's a problem with the dates and I can't seem to figure it out.

GRANT JAVAUSERPRIV to SOLIVER;

drop table DIR_LIST;
create global temporary table DIR_LIST
( filename varchar2(255), lastmodified date )
on commit delete rows;

create or replace and compile java source named "DirList" as
import java.io.*;
import java.sql.*;

public class DirList
{
    public static void getList(String directory) throws SQLException
    {
        String element;
        File path = new File(directory);
        File[] FileList = path.listFiles();
        String TheFile;
        Date ModiDate;

        #sql { DELETE FROM DIR_LIST };

        for(int i = 0; i < FileList.length; i++)
        {
            TheFile = FileList[i].getAbsolutePath();
            ModiDate = new Date(FileList[i].lastModified());
            #sql { INSERT INTO DIR_LIST (FILENAME,LASTMODIFIED) VALUES (:TheFile,:ModiDate) };
        }
    }
}
/

create or replace procedure get_dir_list( p_directory in varchar2 )
as language java
name 'DirList.getList( java.lang.String )';
/

--implementation
EXEC dbms_java.grant_permission( 'SOLIVER', 'java.io.FilePermission', 'c:\temp\*', 'read' );
exec get_dir_list( 'c:\temp' );

select to_char(lastmodified,'YYYY-MM-DD HH24:MI:SS'), filename
from dir_list
order by lastmodified;

I would appreciate any help...
Categories: DBA Blogs

Delete on referenced partitioned table taking time

Tom Kyte - Wed, 2020-11-25 21:26
Hi Connor, Chris,

I have reference-partitioned tables as below. It is a daily range partition on the ORDER_DATE column.

<code>
CREATE TABLE ORDERS
(
  ORDER_ID  NUMBER(11),
  PART_DATE DATE DEFAULT trunc(sysdate),
  .... 150 columns ....
)
PARTITION by range (PART_DATE)
(
  partition ORDERS_P_01_JAN_2018 values less than (to_date('02-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_02_JAN_2018 values less than (to_date('03-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_03_JAN_2018 values less than (to_date('04-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_04_JAN_2018 values less than (to_date('05-JAN-2018','DD-MON-YYYY')),
  partition ORDERS_P_MAXVALUE values less than (maxvalue)
);

CREATE TABLE ORDERS_DETAILS
(
  ORDER_ID NUMBER(11) NOT NULL,
  .... 70 columns ....
  CONSTRAINT FK_ORDER_ID FOREIGN KEY (ORDER_ID) REFERENCES ORDERS (ORDER_ID)
)
PARTITION BY REFERENCE (FK_ORDER_ID);
</code>

The issue here is that it takes significant time to delete records from both the child "ORDER_DETAILS" and the parent "ORDERS". The daily partition record count is around 800k in "ORDERS" and 2.7 million in "ORDER_DETAILS"; these tables hold 2 months of data and the rest gets purged on a daily basis (using a native delete statement). Earlier we were deleting these records 50k records per iteration followed by a commit. Then we tried a limit counter of 5k, but it took the same time to finish purging all records for that day. The next option we are planning is to truncate the partition instead of running a delete statement. But before doing so, we would like to seek your help so that we can check more on why the delete is taking time. Could you please suggest something which can be checked at our end?
Categories: DBA Blogs

DynamoDB PartiQL – part II: SELECT

Yann Neuhaus - Wed, 2020-11-25 14:29
By Franck Pachot

.
In the previous post I inserted a few rows in a Demo table using the new SQL-like API on DynamoDB. I checked my items with a SELECT but was limited in the ORDER BY clause. Here is the most important thing to understand: there is no additional data processing engine here. PartiQL (pronounce it like ‘particle’ and it helps to avoid any kind of dyslexia) parses a statement with INSERT/UPDATE/DELETE/SELECT and calls the NoSQL API you already know (Put/Get/Update/Delete Item, Query and Scan). It looks like SQL, but SQL is a language that declares operations on a set of rows, like relational tables or views, which are a logical layer above the physical model. In an RDBMS, you build your SQL queries according to your business needs, not the physical layout. Of course, the physical layout (like indexing, partitioning) is also optimized for this access, but this is done independently. With PartiQL on DynamoDB you must know which operation will happen when you write your statement, because all the simplicity and scalability of DynamoDB relies on the bounded API that matches the physical layout:

  • GetItem does partition pruning + unique hash/index access to retrieve one item
  • Query does partition pruning + index access to retrieve a sorted range of items
  • Scan reads all partitions, possibly in parallel, to retrieve all items

Of course, the cost is different. With the DynamoDB API you know which one you are doing because you call a different operation. With PartiQL you should know what you are doing, but you execute the same statement (SELECT) and the data access will depend on the columns referenced in the WHERE clause. Basically, if you don’t have an equality predicate on the partition (HASH) key, you have to read all partitions (Scan). If you have an equality predicate on the partition (HASH) key and an inequality on the sort (RANGE) key, you benefit from partition pruning (Query). This is obvious when you know what a hash function is, but error-prone if you don’t know the data model. The DynamoDB API helps you to prevent that because your fingers should hurt when typing “scan” for a large table.

Scan

So, if what you want is actually to get all items, because you need all of them, or maybe to filter out only a small part of them, you want a scan. Yes, it reads everything, but it is the most efficient access path to read a large portion of your table, because with one RCU you can get many items. Doing the same (getting all items) with GetItem would cost one RCU per item (I suppose strong consistency and small items here). To put it simply, for OLTP workloads (many users reading few items) you avoid scans on large tables. DynamoDB is a key-value store: the goal is to access by the key. And for some reporting or to export data, you may scan, which is expensive (in time and RCU) but not done frequently.

As seen in the previous post, scanning the whole table to get all items with all attributes is a simple SELECT * FROM:


[opc@a aws]$ aws dynamodb execute-statement --statement "select * from Demo"

{"Items":[
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"a"},"MyKeySort":{"N":"1"}},
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"use parameters when embedding SQL in programs"},"MyKeySort":{"N":"2"}},
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"c"},"MyKeySort":{"N":"3"}},
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"d"},"MyKeySort":{"N":"4"}},
{"MyKeyPart":{"N":"2"},"MyUnstructuredData":{"S":"e"},"MyKeySort":{"N":"5"}},
{"MyKeyPart":{"N":"1"},"MyUnstructuredData":{"S":"here is my first insert :)"},"MyKeySort":{"N":"1"}}
]}

As long as there’s no equality predicate on the primary key (or the hash part of it in case of a composite hash/sort key) the SELECT will do a scan. I mentioned “equality”: we will see later what happens when there are many equality predicates or a list of values to be equal to. We will also see later, probably in a further post, what happens with secondary indexes. Anyway, this is not an RDBMS. When you query the table, there’s no query planner to optimize the access to read from an index. If you want to access by a secondary index, the index name must be mentioned in the FROM clause.
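
For example, if the table had a Global Secondary Index, the PartiQL documentation shows it is referenced with a dotted notation in the FROM clause; something along these lines (the index and attribute names here are hypothetical):

aws dynamodb execute-statement --statement \
"select MyKeyPart,MyKeySort from \"Demo\".\"MyGSI\" where MyGsiKey = 'x'"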

Another thing that we have seen in the previous post is that, as it is a scan, you cannot have the partition key in the ORDER BY, because DynamoDB does not sort the rows when they are retrieved from multiple partitions, and PartiQL does not do further data processing on the result. So, basically, there’s no possible ORDER BY when there is no WHERE clause on the partition key:



[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo order by MyKeyPart"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Must have WHERE clause in the statement when using ORDER BY clause.

[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo order by MyKeySort"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Must have WHERE clause in the statement when using ORDER BY clause.

Query

Then, if we query for one partition only, this is a Query rather than a Scan. Here is an example where I select only the items where MyKeyPart = 2 which, with the HASH function, maps to only one partition:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart = 2"

{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}}]}

The items are ordered by MyKeySort even in the absence of ORDER BY because this is how they are stored and retrieved physically within each partition. But, as SQL is a declarative language, I prefer not to rely on the order without an ORDER BY clause.

Here is the correct way to do it, with no additional cost:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart = 2 order by MyKeySort"
{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}}]}


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart = 2 order by MyKeySort desc"
{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}}]}

Here because there is only one value for MyKeyPart I didn’t need to put MyKeyPart in the ORDER BY, but with multiple values you need to:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart = 1  or MyKeyPart = 2 order by MyKeySort"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Must have hash key in ORDER BY clause when more than one hash key condition specified in WHERE clause.

[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart = 1  or MyKeyPart = 2 order by MyKeyPart,MyKeyPart desc"

{"Items":[{"MyKeyPart":{"N":"1"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}}]}

[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart = 1  or MyKeyPart = 2 order by MyKeyPart desc,MyKeyPart"

{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"1"},"MyKeySort":{"N":"1"}}]}

You might be surprised to see this query with multiple values run as a Query rather than a Scan. What if they come from multiple partitions?
This is possible when the set of values is known in advance (“1” and “2” here): they can be sorted first, and a Query run for each of them. Of course, this multiplies the cost accordingly. For example, because I know that I inserted values 1 to 5, I can get all my items with:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart in [1,2,3,4,5] order by MyKeyPart,MyKeySort"

{"Items":[{"MyKeyPart":{"N":"1"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}}]}

So I’m able to get all items sorted now? Yes, but at a higher cost than a scan, because it will query them one by one. It would be cheaper to Scan here, but there is no optimizer to estimate the cost of both operations and choose the cheaper one. At least the cost is predictable, as it is proportional to the number of key values in the list.

I cannot use inequalities, or BETWEEN, because they work on a range and this Query access can be done only on known values.


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart between 2 and 2 order by MyKeyPart,MyKeySort"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Must have at least one non-optional hash key condition in WHERE clause when using ORDER BY clause.

Here, even if maths tells me that it is equivalent to an equality (“MyKeyPart between 2 and 2” is the same as “MyKeyPart = 2”), we have no optimizer there to do those transformations. The rule is basic: a set of values can be sorted and queried individually, but anything else is considered a range of values that cannot be accessed with a hash function.

How can I be sure about this behaviour? I have a small table, where the response time difference is not significant. The best proof is to see what happens when a full table scan is impossible. There’s an IAM policy to deny scans:

PartiQL documentation (https://t.co/ezzXfsyADB) explains the scenarios when a SELECT can become a SCAN. You can also explicitly deny Scan via IAM policy (https://t.co/elVs4HpvtM) to avoid full table scans.

— DynamoDB (@dynamodb) November 24, 2020


I have created a user with a deny on the “dynamodb:PartiQLSelect” action under the condition “dynamodb:FullTableScan”=”True”.

With this user profile I execute the following:


[opc@a aws]$ aws --profile noscan dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo"

An error occurred (AccessDeniedException) when calling the ExecuteStatement operation: User: arn:aws:iam::802756008554:user/ddb-noscan is not authorized to perform: dynamodb:PartiQLSelect on resource: arn:aws:dynamodb:eu-west-1:802756008554:table/Demo with an explicit deny

[opc@a aws]$ aws --profile noscan dynamodb execute-statement --statement "select MyKeyPart,MyKeySort \
from Demo where MyKeyPart in [1,2,3,4,5]"                

{"Items":[{"MyKeyPart":{"N":"1"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"5"}}]}

It is clear that when a full table scan is denied, the WHERE on a list of 5 values is still possible, because it is 5 Query calls instead of a Scan.

Pagination

I have additionally inserted many rows with MyKeyPart=10 and large attributes, and queried them:


$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart=10"

{"Items":[{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"1"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"2"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"3"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"4"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"5"}}],"NextToken":"CS4sUIPi4Efg7eSg4sGJZHJ09C/m8JWMwLXB+DF5n54EIBl6yuPZNAHfoRUFg7qgGg872qXswoXSZEI/XAIfvUPNisWSYGrPiquxLFakMecd6aF/ggaexxpKlhPS+ridkOXu8HoWIuWgSXFRBa32QmIXITRhrSMwuT1Q54+6Li6emcxvtpJfmxvxWf/yQkece5nqQIwH/EC3vAr1SZ4Pd537qexKejVHJ+2QrXALwG283UR/obWc53A2HTQ+G3cNeL4xOvVwp9gsOhlKxhsRrS+GqHRF0IHlGrpsdc0LkbMS1hISuagp/KZ0dqP/v7ejB6HsEHhFYZeKYZBoysTYTzhpB02NF3F4MSKp8QF4nO4vcq4="}

I get a few items and a “Next Token” that is quite large.

I can query the next pages with the --next-token option:


$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart=10" \
--next-token "CS4sUIPi4Efg7eSg4sGJZHJ09C/m8JWMwLXB+DF5n54EIBl6yuPZNAHfoRUFg7qgGg872qXswoXSZEI/XAIfvUPNisWSYGrPiquxLFakMecd6aF/ggaexxpKlhPS+ridkOXu8HoWIuWgSXFRBa32QmIXITRhrSMwuT1Q54+6Li6emcxvtpJfmxvxWf/yQkece5nqQIwH/EC3vAr1SZ4Pd537qexKejVHJ+2QrXALwG283UR/obWc53A2HTQ+G3cNeL4xOvVwp9gsOhlKxhsRrS+GqHRF0IHlGrpsdc0LkbMS1hISuagp/KZ0dqP/v7ejB6HsEHhFYZeKYZBoysTYTzhpB02NF3F4MSKp8QF4nO4vcq4="

{"Items":[{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"6"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"7"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"8"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"9"}},{"MyKeyPart":{"N":"10"},"MyKeySort":{"N":"10"}}],"NextToken":"FjHEA2wnIK74SlGaS6TiPSv2fEwfiZhJNHyxvJ+qG750oeKlqSNyx9IDdCUD+m2rSpodPIFJhYYQHXBM9sJed3k6qaA/aUk4s4DUlPvZHl7WAJ4rTY0AmNDUYBPqWyCV8FliSsGPtFTfj1A9T4zD1TU6uuvNIORY/zKHtsAjWzT4Jsg5y32MFcVOmOsDBhyWsQotFqxy1ErMGhJy3cQnEvy1P1KpQak6sflzp3sWLWzUgOXQB/xF1PXRtT8w/E1lPk26LnA/L2bA91nucuohN63hP3MVojPH0GkPCjZsx08wJTn4MEpqDArEREWO2XCkL/GI7vTtYw6GXRenKZoatSG55yKCVDkFRuw7cbK749mEIb6r6Xs="}

Again, this is completely different from SQL databases, where you have cursors, but it is adapted to the DynamoDB Query that reads ranges of items in small chunks.
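
If you want to drain all pages from a script, you can simply loop until no NextToken is returned; here is a rough bash sketch (it assumes the jq utility is available to pull the token and the items out of the JSON response):

token=""
while true ; do
  if [ -z "$token" ]; then
    out=$(aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart=10")
  else
    out=$(aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart=10" --next-token "$token")
  fi
  echo "$out" | jq -c '.Items[]'                   # one item per line
  token=$(echo "$out" | jq -r '.NextToken // empty')
  [ -z "$token" ] && break                         # no more pages
done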

Projection

I used SELECT with * to get the whole item key and attributes (like ALL_ATTRIBUTES), and with a list of attributes to do a query projection (like SPECIFIC_ATTRIBUTES). There’s no aggregation and I don’t think we can do the equivalent of COUNT. Here is how I would do it if it were possible:


[opc@a aws]$ aws dynamodb execute-statement --statement "select count(*) from Demo"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Unexpected path component at 1:8

This is clearly not supported (yet).

According to the documentation, expressions should be allowed, like this:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort,size(MyUnstructuredData) from Demo where MyKeyPart=2 and size(MyUnstructuredData)>10"

An error occurred (ValidationException) when calling the ExecuteStatement operation: Unexpected path component at 1:28

Apparently, this size() function is allowed only in the WHERE clause:


[opc@a aws]$ aws dynamodb execute-statement --statement "select MyKeyPart,MyKeySort from Demo where MyKeyPart=2 and size(MyUnstructuredData)>10"

{"Items":[{"MyKeyPart":{"N":"2"},"MyKeySort":{"N":"2"}}]}

In summary: be careful. You must know which attributes you filter on in the WHERE clause. Equality on the partition key allows single hash partition access. Without it, it is a scan, which can take time and a lot of RCUs.

Cet article DynamoDB PartiQL – part II: SELECT est apparu en premier sur Blog dbi services.

Oracle Fusion Procurement R13 - Reports and Analyses

Richard Byrom - Wed, 2020-11-25 12:32

With each major release Oracle produces a seeded listing of reports. Periodically, with the quarterly updates, one might see a few new reports appear. I always find it quite useful to have this to hand when working with clients as it is searchable and you can filter by specific modules. Download the Oracle Procurement Cloud Release 13 Reports and Analyses Listing which outlines what’s available for Sourcing, Suppliers, Purchasing, Self Service Procurement and Purchasing.

Documentum Administrator plugins installation not possible

Yann Neuhaus - Wed, 2020-11-25 07:27

In some customers’ contexts you cannot install whatever you want, for policy or security reasons. And Documentum components really love to install client software to interact with them. In this blog we will discuss the Documentum Administrator tool and the plugins it asks to install in order to gain full access to its functionalities.

Documentum Administrator’s plugins

DA requires at least 3 things to be installed on client machines in order to have the file transfer working:

  • The browser extension (IE, Chrome, Firefox, and so on)
  • The NativeSetup.exe
  • You will have to change some settings in the browsers as well, in order to allow the execution of some scripts for these sites, whitelist them, and so on

These two programs allow you to transfer and view files from the Documentum Administrator web application. I think it should be the same for Webtop as it is based on the same framework. Unfortunately, some customers cannot allow users, even administrators, to run specific exe files on their machines. So we tried to install them manually with some policy exceptions and elevated rights, but it didn’t work very well as our policies were too restrictive.

In addition, we were not allowed to change some parameters in the browsers as they were managed by the security team. So what can we do?

Setup Documentum Administrator in HTTP mode

Depending on what you need to do with DA, you can enable the HTTP mode. This mode uses the default browser/HTTP transfer mechanisms, hence it does not require anything to be installed on the end-user machine. However, this mode has some drawbacks and limitations.

Here is how to enable the HTTP mode:

  1. Login to the web server and navigate to the exploded war file folder
  2. Locate the file: da/wdk/app.xml
  3. Change <default-mechanism> in <contentxfer> to http (see the fragment below)
  4. Save the file and restart the web application server
  5. Allow popups in the browser

Now the content transfer has been set to http mode.
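
For reference, the relevant fragment of da/wdk/app.xml ends up looking roughly like this after step 3 (only the element mentioned above is shown; the rest of the <contentxfer> section and of the file is left untouched):

<contentxfer>
   <!-- previously the UCF-based default; "http" enables the browser-based transfer -->
   <default-mechanism>http</default-mechanism>
   ...
</contentxfer>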

Limitations

HTTP content transfer is supported for XML files but only for a single file used with the Default XML Application. For virtual documents, only the root (parent) file is transferred. The browser handles the launch of viewing and editing applications.

The checkout directory is not configurable. To check out a document, users must select Edit, then either save the document or open and save it. On checkin, the user must navigate to the location in which the document was saved. User preferences are not supported in this mode.

Feature | UCF | HTTP
Download to client | UCF deployment at client side | No client-side deployment
Drag and drop | Supported on IE browser | Not supported
Viewing or editing application | Configurable | Controlled by browser
ACS support | Supported | Limited support for export or edit; not supported for view with relative links
Progress display | Supported | Supported only by certain browsers
Preferences | Supported | Not supported
Restart interrupted operation | Supported | Not supported
Checkout | Supported | Limited support. User must save and select checkout location
Edit | Supported | Limited support. User must save and select checkout location
Checkin | Supported | Limited support. User must navigate to saved document
View | Supported; does not transfer content if file is up to date on client | Supported; always transfers content
Export | Supported | Supported
Import | Supported | Limited support. Single file selection at a time, no folder import
Client xfer tracing | Supported | Not supported
Server xfer tracing | Supported | Supported
File compression | Supported, with configurable exceptions | Turn on HTTP compression in web.xml
XML application | Supported | Import single file against Default XML Application
Virtual document | Supported | Root only
Temp working directory for upload/download | temp.working.dir | None
Checked out files | checkout.dir | User must save file, then check in from file
Exported files | export.dir or user choice | Browser-specific UI to select a download location
Viewed files | viewed.dir | Browser temporary internet files location, for example, $java{user.home}\Local Settings\Temporary Internet Files
User location (base for other locations) | user.dir (defined in config file) | None
Registry file location | registry.file | None
Registry mode | registry.mode | None
Log file location | logs.dir | None
Tracing/debug | tracing.enabled | None
File polling | file.poll.interval | None

Cet article Documentum Administrator plugins installation not possible est apparu en premier sur Blog dbi services.
