I've just been having an informative off-list conversation with Mike Sykes. As he pointed out in a message that Marc forwarded to the list, http://archives.postgresql.org/pgsql-hackers/2003-04/msg00411.php, we shouldn't feel bad that our temp table implementation doesn't conform to the standard's semantics for temp tables, because almost no one else does it the spec's way either. Oracle and DB2, to name a couple of big players, do it effectively the same way we do.
But he also points out that we are confused about the difference between GLOBAL and LOCAL temporary tables. In the spec, this distinction does *not* mean cross-session vs. session-private temp tables, as we wrote in the documentation. In fact, there are no cross-session temp tables at all in SQL92. GLOBAL means there is one instance per session, while LOCAL means there is one instance per module invocation (which is thus necessarily within a session). The text of the spec is clear:

    ... Global and created local temporary tables are effectively
    materialized only when referenced in an SQL-session. Every <module>
    in every SQL-session that references a created local temporary
    table causes a distinct instance of that created local temporary
    table to be materialized. That is, the contents of a global
    temporary table or a created local temporary table cannot be shared
    between SQL-sessions. In addition, the contents of a created local
    temporary table cannot be shared between <module>s of a single
    SQL-session.

Since we don't have modules, the distinction between GLOBAL and LOCAL temp tables is meaningless for us. However, if we were to someday implement modules, we would probably expect that the existing flavor of temp tables would remain globally visible throughout each session. That is, the temp tables we have more nearly approximate the spec's GLOBAL temp tables than its LOCAL temp tables. As Mike pointed out in the message referenced above, Oracle's and DB2's Postgres-equivalent syntax uses GLOBAL, not LOCAL, to describe temp tables.

So it now seems clear to me that we are in error to reject CREATE GLOBAL TEMP TABLE; we ought to accept that. What I am wondering now is whether we should flip the logic to reject CREATE LOCAL TEMP TABLE, or just silently accept both. I'm leaning towards the latter, on the grounds of backward compatibility.

			regards, tom lane
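For readers comparing the two dialects under discussion, here is a rough sketch of the syntaxes (table and column names are invented for illustration). In Oracle and DB2 the table *definition* is permanent and shared, while each session sees only its own private rows; in PostgreSQL the entire table is private to the creating session and dropped at session end:

```sql
-- Oracle / DB2 style: the definition is created once and persists;
-- each session that touches the table gets its own private rows.
CREATE GLOBAL TEMPORARY TABLE session_scratch (
    id    NUMBER,
    note  VARCHAR2(100)
) ON COMMIT DELETE ROWS;   -- rows vanish at commit (PRESERVE ROWS keeps
                           -- them for the life of the session)

-- PostgreSQL style: the whole table, definition included, is private
-- to the creating session.
CREATE TEMP TABLE session_scratch (
    id    integer,
    note  text
);
```

The spec's GLOBAL/LOCAL distinction (session scope vs. module scope) is orthogonal to either behavior, which is the point of the message above.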
Oracle does GLOBAL; SQL Server does LOCAL, for example.

Global Temp Table
Dhimant Antani, February 23, 2004 - 12:38 am UTC

First of all, let me say that this is the best site when it comes to the Oracle database. I'd like to have more info on global temp tables. Please provide some examples if possible. Thanks and regards, Dhimant

February 23, 2004 - 7:41 am UTC

Simply search this site for "global temporary table" and you'll see plenty.

Riaz Shahhid, February 23, 2004 - 1:25 am UTC

Dhimant! Just search this site for "Temporary tables" or "Global Temporary Tables" and you will find lots about that.

Forced to use a global temporary table, how to avoid it?
Ashiq Shamsudeen A, February 23, 2004 - 8:40 am UTC

Hi Tom, I've been forced to use a global temporary table to hold data. Is there any way to avoid it? Look at the package below:

package body test_pkg as
    procedure proc_b( i_tracking_no in varchar2,
                      my_ref_cur    out test_pkg.my_cv )
    as
    begin
        for i in ( select cl_number, hts_no, part_id, product_desc, source_country
                     from classification_product_master
                    where cl_number = i_tracking_no )
        loop
            insert into class_test
                   ( htsno, us_hsno, cl_number, status, rec_status,
                     hts_desc, part_id, product_desc, source_country )
            select i.hts_no, hts_chapter_no, i.cl_number, 'Y', rec_status,
                   hts_desc, i.part_id, i.product_desc, i.source_country
              from lcs_hts_mult_desc
             where (hts_chapter_no, effective_date) =
                   ( select hts_chapter_no, max(effective_date)
                       from lcs_hts_mult_desc
                      where country_id = '02'
                        and hts_chapter_no = i.hts_no
                        and language_cd = 'EN'
                      group by hts_chapter_no );
        end loop;

        open my_ref_cur for select * from class_test;
    end proc_b;
end test_pkg;

In the classification_product_master table I have duplicate values for hts_no; say:

select hts_no from classification_product_master where ...
hts_no
----------
3924104000
3924104000
3924104000
3924104000
7323940026
7323940026
7323940026
7323940026
7323940026
7323940026
7323940026
7323940026
8214909000
8214909000
8214909000
8214909000
8214909000
8309900000
8309900000

Using these values I'll be passing to the query:

select hts_chapter_no, max(effective_date)
  from lcs_hts_mult_desc
 where country_id = '02'
   and hts_chapter_no = i.hts_no   <===
   and language_cd = 'EN'
 group by hts_chapter_no

Is there another way to avoid using this temp table?

February 23, 2004 - 9:04 am UTC

That logic is not any different from this query:

select i.hts_no, j.hts_chapter_no, i.cl_number, 'Y', j.rec_status,
       j.hts_desc, i.part_id, i.product_desc, i.source_country
  from classification_product_master i, lcs_hts_mult_desc j
 where i.cl_number = I_TRACKING_NO
   and (j.hts_chapter_no, j.effective_date) =
       ( select hts_chapter_no, max(effective_date)
           from lcs_hts_mult_desc
          where country_id = '02'
            and hts_chapter_no = i.hts_no
            and language_cd = 'EN'
          group by hts_chapter_no );

Your existing logic does not "de-dup" any rows from classification_product_master; it takes every row in that table and joins it to the "most recent" row in the lcs_hts_mult_desc table (so if there were 15 rows in classification_product_master with hts_no "12345", you would get 15 or more rows back).
It might be more efficient to simply query:

select *
  from ( select i.hts_no, j.hts_chapter_no, i.cl_number, 'Y', j.rec_status,
                j.hts_desc, i.part_id, i.product_desc, i.source_country,
                j.effective_date,
                max( case when country_id = '02' and language_cd = 'EN'
                          then j.effective_date
                     end ) over ( partition by j.hts_chapter_no ) max_effective_date
           from classification_product_master i, lcs_hts_mult_desc j
          where i.cl_number = I_TRACKING_NO
            and i.hts_no = j.hts_chapter_no )
 where effective_date = max_effective_date
/

That joins I to J by hts_no (which is what you do) and keeps only the record with effective_date = max(effective_date) for rows with that hts_no and country_cd = '02' and language_cd = 'EN'. You were never forced to use a GTT, never.

ON COMMIT DELETE ROWS
Duan, February 23, 2004 - 9:36 am UTC

I sometimes use them as a "multi-record" application interface to the underlying database. In such a scenario, an application inserts a number of records into a "global temporary ... on commit delete rows" table and calls a stored proc to do something with the whole passed record set -- usually some inserts, updates and/or merge statements on a whole bulk of records, never one by one in a loop. Upon commit or rollback, the contents of the temp table just disappear. From the perspective of the application, the interface could not be simpler: start a transaction, do a number of inserts, and call a stored proc when the whole set is ready to be processed. How useful do you find this way of using global temporary tables in your practice, Tom?

February 23, 2004 - 10:02 am UTC

You just described the same thing I did in my answer :) As an interface table they are excellent.

Also...
Kashif, February 23, 2004 - 11:48 am UTC

Hi Tom, I've also used GTTs when I've run into a query that takes a while to execute, and I need to execute that query over and over in my procedure.
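The interface-table pattern Duan describes can be sketched roughly as follows (the table, column, and procedure names are invented for illustration; the ON COMMIT DELETE ROWS clause is what makes the staged rows vanish when the transaction ends):

```sql
-- One-time setup: a session-private staging table.
CREATE GLOBAL TEMPORARY TABLE order_stage (
    order_id  NUMBER,
    qty       NUMBER
) ON COMMIT DELETE ROWS;

-- Per transaction: the application stages a whole record set ...
INSERT INTO order_stage VALUES (101, 5);
INSERT INTO order_stage VALUES (102, 3);

-- ... then hands the entire set to one stored procedure, which can
-- process it with set-based SQL (hypothetical procedure name):
BEGIN
    process_staged_orders;
END;
/

COMMIT;   -- ends the unit of work and, as a side effect, empties order_stage
```

The appeal is exactly what Duan notes: the procedure receives a whole set and can operate on it with single INSERT/UPDATE/MERGE statements instead of row-by-row calls.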
I simply dump the contents of the query once into the GTT at the beginning, and then hit the GTT as many times as I need throughout the procedure for the data. Some might argue that you might not be pulling in the latest set of the data -- e.g. if some user added data to the underlying tables of the query after I stored the result set in the GTT -- to which I have to say that this is happening in the middle of the night, with no users on the database, so there is no chance of that happening. Your thoughts? Kashif

February 23, 2004 - 4:41 pm UTC

I'd have to understand why I needed the result over and over and over again. If so, so be it -- but many times, I find I can take the procedural logic that needed that set time and again and reduce it to a much smaller set of non-procedural code. It would be a valid use case, yes.

temp table useful - did I say that?
dev, February 27, 2004 - 4:53 pm UTC

I'm probably asking for trouble, since there are so many ways to approach this problem, but here is a case where a two-step approach involving an intermediate table works more quickly than a single query. Start with a query that performs well, add a predicate (the one below that includes smk_u), watch the perfectly good execution plan change slightly, and performance goes out the window. The original plan used hash joins; the new plan uses hash joins then nested loops, and performance is unacceptable. I know, work on tuning the query. To me it is quicker to divide and conquer -- I am considering using a temp table as a workaround, as demonstrated below. Original query: if you remove the DECODE(NVL...) predicate it runs quickly; with that predicate it is just too slow.
SELECT tran_pd, cal_date, sku0, store,
       SUM(sal_u) sal_u, SUM(smk_u) smk_u, SUM(sal_r) sal_r,
       temp_price, perm_price
  FROM (SELECT tran_pd, cal_date, sku0, store, sal_u, smk_u, sal_r,
               pricing_bp.get_temp_price(store, cal_date, sku0) temp_price,
               pricing_bp.get_perm_price(store, cal_date, sku0) perm_price
          FROM skudaily skud, calendar c
         WHERE tran_pd BETWEEN :p_start_pd and :p_end_pd
           AND store = :store
           AND NVL(sal_u,0) - DECODE(:p_sales, 1, 0, NVL(smk_u,0)) <> 0
           AND c.cal_pd = tran_pd
           AND sku0 in (SELECT tmsk_sku0
                          FROM rpt_temp_sku0s
                         WHERE tmsk_request_id = :p_req_id))
 GROUP BY tran_pd, sku0, store, cal_date
/

where skudaily is a well-indexed, mature union view with millions of rows. I know, use partitioned tables. The fairly easy, fairly fast workaround:

variable p_req_id   number;
variable p_start_pd varchar2(8);
variable p_end_pd   varchar2(8);
variable p_sales    number;
variable store      number;

begin
    :p_req_id   := &&1;
    :p_start_pd := '20030131';
    :p_end_pd   := '20040131';
    :p_sales    := 1;
    :store      := &&2;
end;
/

alter session set optimizer_goal = CHOOSE;

insert into junk_pjk_gtemp
SELECT tran_pd, cal_date, sku0, store, sal_u, smk_u, sal_r
  FROM skudaily skud, calendar c
 WHERE tran_pd BETWEEN :p_start_pd and :p_end_pd
   AND c.cal_pd = tran_pd
   AND sku0 in (SELECT tmsk_sku0
                  FROM rpt_temp_sku0s
                 WHERE tmsk_request_id = :p_req_id)
   AND store = :store
/
commit;

SELECT tran_pd, cal_date, sku0, store, count(*) num_rows,
       SUM(sal_u) sal_u, SUM(smk_u) smk_u, SUM(sal_r) sal_r,
       pricing_bp.get_temp_price(store, cal_date, sku0) temp_price,
       pricing_bp.get_perm_price(store, cal_date, sku0) perm_price
  FROM junk_pjk_gtemp
 WHERE NVL(sal_u,0) - DECODE(:p_sales, 1, 0, NVL(smk_u,0)) <> 0
 GROUP BY tran_pd, cal_date, sku0, store
/

By moving the problem predicate to the second query, I can execute the first query using the efficient execution plan. It works, it's faster, and the annoying predicate is applied to the small data set in the temp table.
I didn't have to try to outthink the optimizer; I kept the nice plan it gave me, and instead of struggling with tuning the query I used a temp table. Fire away...

February 27, 2004 - 5:05 pm UTC

Replace junk_pjk_gtemp with:

SELECT tran_pd, cal_date, sku0, store, sal_u, smk_u, sal_r
  FROM skudaily skud, calendar c
 WHERE tran_pd BETWEEN :p_start_pd and :p_end_pd
   AND c.cal_pd = tran_pd
   AND sku0 in (SELECT tmsk_sku0
                  FROM rpt_temp_sku0s
                 WHERE tmsk_request_id = :p_req_id)
   AND store = :store
   and rownum >= 0

and leave it as one query -- but I'd be really surprised that a predicate on nvl(column) would do that, personally. It is not as if that predicate would help us towards an index.

Global temporary tables as a big 'in list'
Dale Ogilvie, February 28, 2004 - 4:17 am UTC

I think the discussion here about using GTTs as an 'interface' table matches our usage. We have a master table containing a few thousand rows, a detail table containing tens of millions of rows with a foreign key to the master table, and a GTT with only one column designed to receive inserts of master-table primary key values. In addition we have a stored procedure that performs a query (for reporting purposes) on the detail table using a query like (but much more complex than) this:

SELECT detail.*
  FROM detail, gtt
 WHERE detail.master_id = gtt.master_id;

At the moment I'm not sure that our schema and the query above are as good as they can be. Specifically:

1. gtt.master_id is not declared unique, whereas master.master_id is unique; should it be?

2. gtt.master_id does not have a foreign key relationship defined to master. Can (should) an FK to master be defined for the column on the GTT?

3. My third concern is the variability in the number of rows that can be inserted into the GTT, anything from 1 to 5000. We currently have inserted stats on the GTT to tell the CBO that it contains 0 rows, so the CBO will choose a plan based on this information, and thereafter all queries will use this plan.
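The `rownum >= 0` suggestion works because a rownum predicate prevents Oracle from merging the inline view or pushing outer predicates into it, so the "plan-destroying" filter is applied only to the inner query's result. A generic sketch of the pattern, with placeholder table and column names, might look like this:

```sql
SELECT *
  FROM ( SELECT t.*               -- the query whose good plan you want to keep
           FROM some_big_view t
          WHERE t.store = :store
            AND rownum >= 0 )     -- always true, but blocks view merging and
                                  -- predicate pushdown into the inline view
 WHERE NVL(sal_u,0) <> 0;         -- the troublesome predicate stays outside
```

This keeps everything in one statement, which is Tom's point: the two-step GTT workaround can usually be collapsed back into a single query once the inner result set is forced to materialize logically on its own.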
My concern is that a plan that is optimal for 1 row turns out to be not so flash for 1000. In practice we can expect either 10 or 1000 rows in gtt. What is the best approach to handle this variability? I have looked at the DYNAMIC_SAMPLING and DYNAMIC_SAMPLING_EST_CDN hints, which look promising:

SELECT /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */ detail.*
  FROM detail, gtt
 WHERE detail.master_id = gtt.master_id;

Does using these hints mean that we are effectively doing a hard parse each time the stored proc is called?

February 28, 2004 - 11:58 am UTC

OK, here big_table is a 5,000,000-row table. GTT is like yours, I assume...

@connect big_table/big_table

drop table gtt;
create global temporary table gtt ( id int ) on commit delete rows;

create or replace procedure foo( p_rand in number default dbms_random.value( 0, 5000 ) )
as
    l_cnt number := 0;
begin
    insert into gtt select rownum from big_table where rownum <= p_rand;
    dbms_output.put_line( sql%rowcount || ' rows in gtt' );

    for x in ( select /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */
                      big_table.*
                 from big_table, gtt
                where big_table.id = gtt.id )
    loop
        l_cnt := l_cnt + 1;
    end loop;
    dbms_output.put_line( l_cnt || ' rows fetched' );
    commit;
end;
/

@trace
exec foo(0)
exec foo
exec foo
exec foo(5000)

The result of that TKPROF was:

select /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */ big_table.*
from big_table, gtt where big_table.id = gtt.id

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.00       0.00          0          2          0          0
Execute      4      0.00       0.00          0          0          0          0
Fetch    11716      1.26       1.12          0      58579          0      11712
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total    11721      1.26       1.12          0      58581          0      11712

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 152 (BIG_TABLE)   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
  11712  NESTED LOOPS (cr=58579 r=0 w=0 time=901210 us)
  11712   TABLE ACCESS FULL GTT (cr=11731 r=0 w=0 time=228424 us)
  11712   TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=46848 r=0 w=0 time=483374 us)
  11712    INDEX UNIQUE SCAN BIG_TABLE_PK (cr=35136 r=0 w=0 time=260253 us)(object id 35091)

Now, that is showing us 1 hard parse (misses in library) and no subsequent reparses. This is what we expect: PL/SQL parses once per session for us, and the first one was hard since we had just created GTT for the very first time... Note the query plan: nested loops. Now, we want to see what happens in other sessions:

disconnect
!tk
@connect big_table/big_table
@trace
exec foo(50000)
disconnect
!tk

That TKPROF shows:

select /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */ big_table.*
from big_table, gtt where big_table.id = gtt.id

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch    50001      5.41       5.15        639     250006          0      50000
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total    50003      5.41       5.15        639     250006          0      50000

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 152 (BIG_TABLE)   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
  50000  NESTED LOOPS (cr=250006 r=639 w=0 time=4143709 us)
  50000   TABLE ACCESS FULL GTT (cr=50006 r=0 w=0 time=1020992 us)
  50000   TABLE ACCESS BY INDEX ROWID BIG_TABLE (cr=200000 r=639 w=0 time=2339062 us)
  50000    INDEX UNIQUE SCAN BIG_TABLE_PK (cr=150000 r=0 w=0 time=1195898 us)(object id 35091)

Misses in library = 0, so it too was soft parsed -- if it had been hard parsed (with 50,000 rows), what would have happened?

@connect big_table/big_table
alter system flush shared_pool;
@trace
exec foo(50000)

select /*+ dynamic_sampling(gtt 10) dynamic_sampling_est_cdn(gtt) */ big_table.*
from big_table, gtt where big_table.id = gtt.id

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.02       0.02          0          2          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch    50001     35.13      74.99      78818      96819          0      50000
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total    50003     35.15      75.01      78818      96821          0      50000

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 152 (BIG_TABLE)   (recursive depth: 1)

Rows     Row Source Operation
-------  ---------------------------------------------------
  50000  HASH JOIN (cr=96819 r=78818 w=6720 time=74166346 us)
  50000   TABLE ACCESS FULL GTT (cr=82 r=0 w=0 time=68518 us)
5000000   TABLE ACCESS FULL BIG_TABLE (cr=96737 r=72098 w=0 time=37932714 us)

Rows     Execution Plan
-------  ---------------------------------------------------
      0  SELECT STATEMENT   GOAL: CHOOSE
  50000   NESTED LOOPS
  50000    TABLE ACCESS (FULL) OF 'GTT'
5000000    TABLE ACCESS   GOAL: ANALYZED (BY INDEX ROWID) OF 'BIG_TABLE'
      0     INDEX (UNIQUE SCAN) OF 'BIG_TABLE_PK' (UNIQUE)

A very different plan with a hard parse. (Note how EXPLAIN PLAN lies: the first plan there is from the trace file's STAT records; the other was "explain=u/p". EXPLAIN PLAN cannot see certain things like bind variable types, bind variable values, results of dynamic sampling, and so on.)

So, after all of that, here are my responses:
That said, maybe you decide, "You know, every 1000 rows we would like the opportunity to revisit this particular plan." You could use an approach like this:

create or replace procedure foo( p_rand in number default dbms_random.value( 0, 5000 ) )
as
    l_cnt number := 0;

    cursor just_a_template is select big_table.* from big_table, gtt;
    type array is table of just_a_template%rowtype;
    l_data array;
    /* that "just_a_template" really is just a template, so we can easily
       define a record to fetch into -- an array of records really.
       (I know you must be on 9iR2, which can do this, since you are
       using dynamic sampling.) */
    l_cursor sys_refcursor;
begin
    insert into gtt select rownum from big_table where rownum <= p_rand;
    dbms_output.put_line( sql%rowcount || ' rows in gtt' );

    /* here is the trick: spoon-feed the size into the query.  this will
       not only make the sql text different every 1000 rows, it'll give
       the optimizer "about how many rows".  why did I pick 1000?  no
       reason; it was an easy number to work with */
    open l_cursor for
        'select /*+ cardinality( gtt ' || trunc(sql%rowcount/1000)*1000 || ') */
                big_table.*
           from big_table, gtt
          where big_table.id = gtt.id';

    loop
        fetch l_cursor bulk collect into l_data limit 100;
        l_cnt := l_cnt + l_data.count;
        exit when l_cursor%notfound;
    end loop;
    close l_cursor;

    dbms_output.put_line( l_cnt || ' rows fetched' );
    commit;
end;
/

If you tkprof that, you'll see it does nested loops for small numbers and hash joins for larger numbers.

Handling variability in GTT contents
Dale Ogilvie, February 28, 2004 - 10:15 pm UTC

Thanks. I expect your cardinality hint approach is the way to go.
One question though, given these two TKPROF results from your reply:

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.00       0.00          0          0          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch    50001      5.41       5.15        639     250006          0      50000
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total    50003      5.41       5.15        639     250006          0      50000

and

call     count       cpu    elapsed       disk      query    current       rows
------- ------  -------- ---------- ---------- ---------- ----------  ---------
Parse        1      0.02       0.02          0          2          0          0
Execute      1      0.00       0.00          0          0          0          0
Fetch    50001     35.13      74.99      78818      96819          0      50000
------- ------  -------- ---------- ---------- ---------- ----------  ---------
total    50003     35.15      75.01      78818      96821          0      50000

Why is it that the second one, which did the hard parse, is so much more expensive than the first, which only did a soft parse? Where is all the extra CPU and elapsed time coming from? The dynamic sampling on a fatter GTT? The hash join?

February 29, 2004 - 9:01 am UTC

It was the effect of having the default optimizer_index_* settings, causing the index range scans to be abandoned "too soon" in the mix. For that simple a query, getting 50,000 well-clustered rows from 5,000,000 probably should be an index range scan.

big_table@ORA9IR2> alter session set optimizer_index_caching = 0;
Session altered.

big_table@ORA9IR2> alter session set optimizer_index_cost_adj = 100;
Session altered.

big_table@ORA9IR2> select /*+ cardinality(gtt 50000) */ big_table.*
  2    from big_table, gtt
  3   where big_table.id = gtt.id
  4  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=20705 Card=50000 Bytes=5650000)
   1    0   HASH JOIN (Cost=20705 Card=50000 Bytes=5650000)
   2    1     TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=50000 Bytes=650000)
   3    1     TABLE ACCESS (FULL) OF 'BIG_TABLE' (Cost=6943 Card=5000000 Bytes=500000000)

big_table@ORA9IR2> alter session set optimizer_index_caching = 90;
Session altered.

big_table@ORA9IR2> alter session set optimizer_index_cost_adj = 10;
Session altered.
big_table@ORA9IR2> select /*+ cardinality(gtt 50000) */ big_table.*
  2    from big_table, gtt
  3   where big_table.id = gtt.id
  4  /

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=5012 Card=50000 Bytes=5650000)
   1    0   NESTED LOOPS (Cost=5012 Card=50000 Bytes=5650000)
   2    1     TABLE ACCESS (FULL) OF 'GTT' (Cost=11 Card=50000 Bytes=650000)
   3    1     TABLE ACCESS (BY INDEX ROWID) OF 'BIG_TABLE' (Cost=2 Card=1 Bytes=100)
   4    3       INDEX (UNIQUE SCAN) OF 'BIG_TABLE_PK' (UNIQUE)

big_table@ORA9IR2> set autotrace off

dynamic sampling
Dave, March 01, 2004 - 6:03 pm UTC

Tom, you've listed examples using sql_trace/tkprof to grab the dynamically altered plan when dynamic_sampling was involved. Is there any way through V$ views (or other) to see this change?

March 01, 2004 - 6:50 pm UTC

An example
Joe Simunac, April 09, 2004 - 1:40 pm UTC

You asked for an example where a GTT was needed. Here's one that we have to use. We have two accounting tables, one with live data and one with archive data. The tables are identical in structure. We want a report with data from each table combined. The temp table looks as follows:

ID
Transaction Number
Detail Code
Detail Desc
...

I have to pull the matching fields from each table and load them into the GTT (two cursors in the procedure), then query the GTT sorted by Transaction Number to put everything in order. Is there a better way to do this, or is using a temporary table the only way?

April 09, 2004 - 4:15 pm UTC

UNION ALL achieves your goal very nicely without a GTT. Just union all the two queries together and sort the results:

select *
  from ( select .... query1
         UNION ALL
         select .... query2 )
 order by whatever you want;

One more question
Joe Simunac, April 14, 2004 - 8:09 am UTC

Duh! I shouldn't ask questions on a Friday afternoon after my brain has gone home. That was too obvious. I apologize. Now for the real question: I'm trying to return a result set based on the UNION, but one of the fields needs to be calculated.
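A concrete sketch of the UNION ALL approach for the live/archive case (the table and column names here are invented; the poster's two tables share one structure, which is exactly what UNION ALL requires):

```sql
SELECT *
  FROM ( SELECT id, tran_number, detail_code, detail_desc
           FROM accounting_live            -- hypothetical live table
         UNION ALL
         SELECT id, tran_number, detail_code, detail_desc
           FROM accounting_archive )      -- hypothetical archive table
 ORDER BY tran_number;
```

UNION ALL, rather than plain UNION, is the natural choice here: it simply concatenates the two result sets without the extra sort/duplicate-elimination step, and the outer ORDER BY supplies the one ordering the report needs.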
It's a running balance -- the sum of the previous running balance and the current amount. My predecessor used a temporary table to store the info and then read through it again to do the calculation. I can't think of a way to do this with just one SELECT statement. Should I create an object with my data structure and then a table based on that object? I'll admit that using objects is not my strong point. What I'd really like is to call a function (passing the user id) that would return a result set with all the information, including any calculated fields -- just one in this case, but it could be more in other situations. Thanks for any help you can give.

April 14, 2004 - 9:01 am UTC

You mean like this:

ops$tkyte@ORA9IR2> select ename, deptno, sal, sum(sal) over (order by ename)
  2    from (select ename, deptno, sal from t1
  3          union all
  4          select ename, deptno, sal from t2)
  5   order by ename
  6  /

ENAME          DEPTNO        SAL SUM(SAL)OVER(ORDERBYENAME)
---------- ---------- ---------- --------------------------
ADAMS              20       1100                       1100
CLARK              10       2450                       3550
FORD               20       3000                       6550
JONES              20       2975                       9525
KING               10       5000                      14525
MILLER             10       1300                      15825
SCOTT              20       3000                      18825
SMITH              20        800                      19625

8 rows selected.

Analytics Rock. Analytics Roll. Analytics Rule.

A reader, April 14, 2004 - 9:31 am UTC

Is there a way to use an alias in an analytic function?

SQL> ed
Wrote file afiedt.buf

  1  select ename name, deptno, sal, sum(sal) over (order by name)
  2    from (select ename, deptno, sal from emp
  3          union all
  4          select ename name, deptno, sal from emp)
  5*  order by name
SQL> /
select ename name, deptno, sal, sum(sal) over (order by name)
ERROR at line 1:
ORA-00904: "NAME": invalid identifier

April 14, 2004 - 10:50 am UTC

Alias it one layer down:

select ename NAME, deptno, sal from emp
UNION ALL ...

Perfect
Joe Simunac, April 14, 2004 - 10:10 am UTC

Thanks, Tom! That was exactly what I needed. I'm obviously going to have to learn more about analytics, since they are so useful. Thanks again. Your responses are always helpful! -Joe

One last question
Joe Simunac, April 14, 2004 - 11:19 am UTC

Thanks for your previous responses on UNION and the analytic functions. I have one last question. I've set up a record type and a ref cursor based on that record. When I try to fetch into the cursor, I'm getting a PLS-00386 type mismatch error. The record is defined using %type for all fields except two (they are concatenated in the query, so I have them defined as VARCHAR2(200), which should easily cover the largest value that could be returned; the original fields are VARCHAR2 anyway). The %type is based on the table field the query is selecting from. Does the UNION change data types in any way? Or am I missing something really obvious? Version 9.2.0.4 of Oracle.
Here is the code:

type rbrec is record (
    pidm    spriden.spriden_pidm%type,
    tran    tbraccd.tbraccd_tran_number%type,
    term    tbraccd.tbraccd_term_code%type,
    taxt    tbbdetc.tbbdetc_taxt_code%type,
    detail  tbraccd.tbraccd_detail_code%type,
    ddesc   tbraccd.tbraccd_desc%type,
    typei   tbbdetc.tbbdetc_type_ind%type,
    signed  tbraccd.tbraccd_amount%type,
    amt     tbraccd.tbraccd_amount%type,
    debit   tbraccd.tbraccd_amount%type,
    credit  tbraccd.tbraccd_amount%type,
    eff     tbraccd.tbraccd_effective_date%type,
    scd     tbraccd.tbraccd_srce_code%type,
    sid     varchar2(200),
    sname   varchar2(200),
    runbal  tbraccd.tbraccd_amount%type );

type rbcur_type is ref cursor return rbrec;

function crerbal (pidmin in number) return rbcur_type
is
    rbcur rbcur_type;

    cursor c_getrb is
    select pidm, tran, term, taxt, detail, ddesc, typei, signed, amt,
           debit, credit, eff, scd, sid, sname,
           nvl(sum(signed) over (order by pidm, tran),0) runbal
      from (select tboaccd_pidm pidm,
                   tboaccd_tran_number tran,
                   tboaccd_term_code term,
                   tbbdetc_taxt_code taxt,
                   tboaccd_detail_code detail,
                   tboaccd_desc ddesc,
                   tbbdetc_type_ind typei,
                   decode(tbbdetc_type_ind,'C',1,-1)*tboaccd_amount signed,
                   tboaccd_amount amt,
                   decode(tbbdetc_type_ind,'C',1,0)*tboaccd_amount debit,
                   decode(tbbdetc_type_ind,'P',1,0)*tboaccd_amount credit,
                   tboaccd_effective_date eff,
                   null scd,
                   substr(spriden_id,1,3)||'-'||substr(spriden_id,4,2)||'-'||substr(spriden_id,6,4) sid,
                   spriden_last_name||', '||spriden_first_name||' '||spriden_mi sname
              from spriden, tbbdetc, tboaccd
             where spriden_pidm = pidmin
               and spriden_change_ind is null
               and tboaccd_pidm = spriden_pidm
               and tbbdetc_detail_code = tboaccd_detail_code
            union
            select tbraccd_pidm pidm,
                   tbraccd_tran_number tran,
                   tbraccd_term_code term,
                   tbbdetc_taxt_code taxt,
                   tbraccd_detail_code detail,
                   tbraccd_desc ddesc,
                   tbbdetc_type_ind typei,
                   decode(tbbdetc_type_ind,'C',1,-1)*nvl(tbraccd_amount,0) signed,
                   tbraccd_amount amt,
                   decode(tbbdetc_type_ind,'C',1,0)*nvl(tbraccd_amount,0) debit,
                   decode(tbbdetc_type_ind,'P',1,0)*nvl(tbraccd_amount,0) credit,
                   tbraccd_effective_date eff,
                   tbraccd_srce_code scd,
                   substr(spriden_id,1,3)||'-'||substr(spriden_id,4,2)||'-'||substr(spriden_id,6,4) sid,
                   spriden_last_name||', '||spriden_first_name||' '||spriden_mi sname
              from spriden, tbbdetc, tbraccd
             where spriden_pidm = pidmin
               and spriden_change_ind is null
               and tbraccd_pidm = spriden_pidm
               and tbbdetc_detail_code = tbraccd_detail_code
               and tbraccd_detail_code != 'PURG')
     order by pidm, tran;
begin
    open c_getrb;
    fetch c_getrb into rbcur;
    close c_getrb;
    return rbcur;
end crerbal;

April 14, 2004 - 11:37 am UTC

Can you put the example on a diet -- slim it down, provide create table scripts, and have a complete reproducible example... as few columns as possible, as little code as you can.

begin
    select ... into l_rec
      from (select * from (query) where rownum = 1);
end;

would be my approach via the query -- no procedural code really needed.

A reader, April 14, 2004 - 9:27 pm UTC

Tom, sorry for not getting it. Can you please clarify:

"alias it one layer down -- select ename NAME, deptno, sal from emp UNION ALL"

Thanks

April 15, 2004 - 8:21 am UTC

scott@ORA10G> select name, deptno, sal, sum(sal) over (order by name)
  2    from (select ename NAME, deptno, sal from emp
  3          union all
  4          select ename name, deptno, sal from emp)
  5   order by name
  6  /

NAME       DEPTNO        SAL SUM(SAL)OVER(ORDERBYNAME)
------ ---------- ---------- -------------------------
ADAMS          20       1100
......

can I cache a global temporary table?
A reader, August 20, 2004 - 12:14 pm UTC

I am trying to cache a global temporary table, so I did the following:

alter table my_global_temp_table storage( buffer_pool keep );

Doesn't work. Tried:

alter temporary table my_global_temp_table storage( buffer_pool keep );

Doesn't work. Tried:

alter global temporary table my_global_temp_table storage( buffer_pool keep );

Doesn't work. Is there any way I can cache that global temp table?
August 21, 2004 - 11:00 am UTC

The amount cached is a function of your temporary space -- this is "temp stuff". Have you identified a serious performance-related issue with regard to GTTs and their doing direct IO? If so, let's see it (the metrics you used to identify this) and we'll work from there. GTTs are special -- they consume no space until you start using them; they are not traditional segments.

reader
A reader, February 24, 2005 - 2:14 pm UTC

For a global temporary table, which is created in the temporary tablespace, is there a v$ view or dba_ view to monitor the amount of space (blocks) used by the GTT?

February 24, 2005 - 5:37 pm UTC

v$sort* (two of them) shows the temp space used by a session.

Avoiding Temp Tables
Vidya, March 15, 2005 - 5:32 pm UTC

Hi Tom, my process is something like this:

insert into gtt
select col1, col2, ...
  from bigTable1, bigTable2
 where ...

After this I work on the small set of data which is in the temp table. The operations I do are:

open cursor for
    select col1, col2, ...
      from gtt
     group by col1, col2;
...
update gtt set col3 = val1 where col1 = val2;
end loop;

Of all the examples you have given, there were none where a GROUP BY clause was avoided. What can I do to avoid using the GTT? Is record-by-record processing the only way?

March 15, 2005 - 9:20 pm UTC

Insufficient data. No clue where val1 or val2 come from.

is a temp table appropriate here?
Jeremy, April 07, 2005 - 1:27 am UTC

You're pretty adamant that a temp table is almost always unnecessary in Oracle. I'm writing a PL/SQL procedure that processes a large table in increments, and a global temp table seems very appropriate to me... the temp table stores some aggregate info on the big table, which is then used by several subsequent statements. That seems much more efficient than having each statement repeat the same subquery (which does a full table scan to get the same aggregate info). One possible alternative might be using a materialized view and QUERY_REWRITE... (??)
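The "v$sort*" views Tom refers to are v$sort_segment (instance-wide temp segment usage) and v$sort_usage (per-session usage). A rough sketch of checking which sessions are consuming temp space, GTT blocks included, might look like the following; exact column availability varies by Oracle release, so treat this as an assumption to verify against your version's reference:

```sql
-- Temp space consumed right now, per session.  In releases that expose
-- SEGTYPE, 'DATA' rows correspond to global temporary table blocks,
-- while 'SORT' rows are true sort work areas.
SELECT s.sid, s.username, u.tablespace, u.segtype, u.blocks
  FROM v$session s, v$sort_usage u
 WHERE s.saddr = u.session_addr
 ORDER BY u.blocks DESC;
```

This answers the reviewer's question directly: there is no dba_segments entry to look at for a GTT, because its storage lives in the temporary tablespace only while a session is actively using it.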
but the temp table seems much simpler to me... what do you think?

begin
    -- STATEMENT USED TO CREATE TEMP TABLE:
    -- create global temporary table CTLLDR$TEMP_BUILD_NOTES (
    --     ID          VARCHAR2(10),
    --     CNT         NUMBER,
    --     LEN         NUMBER,
    --     FIRST_PIECE VARCHAR2(16),
    --     LAST_PIECE  VARCHAR2(16)
    -- ) on commit preserve rows;
    -- create index CTLLDR$TEMP_NOTES_I1 on CTLLDR$TEMP_BUILD_NOTES(cnt,id);

    execute immediate 'insert into CTLLDR$TEMP_BUILD_NOTES
        select "'||varfield||'" ID,
               count(*) cnt,
               sum("LENGTH") len,
               min(to_char(nvl(SEQ,99999),''00000'')||ID) first_piece,
               max(to_char(nvl(SEQ,99999),''00000'')||ID) last_piece
          from "'||vartabledetail||'"
         group by "'||varfield||'" ';

    -- BIG UPDATE 1
    EXECUTE IMMEDIATE 'update "'||vartable||'" t
        set "NOTE" = ( select REPLACE(RPAD(min(n."DATA"),min(n."LENGTH")),CHR(141)||CHR(10))
                         from "'||tabnnote||'" n
                        where n."'||varfield||'" = t."ID"
                       having count(*) = 1
                        group by n."'||fldnote||'" )
      where ID in (select ID from CTLLDR$TEMP_BUILD_NOTES where cnt=1)';

    -- BIG UPDATE 2
    EXECUTE IMMEDIATE 'update "'||vartable||'" t
        set "NOTE" = ( ... )
      where ID in (select ID from CTLLDR$TEMP_BUILD_NOTES where cnt=2)';

    -- UPDATE 3
    OPEN refnnote FOR 'select "'||varfield||'","LENGTH","DATA"
          from "'||vartabledetail||'"
         where "'||varfield||'" in ( select ID
                                       from CTLLDR$TEMP_BUILD_NOTES
                                      where cnt > 2 and len < 4000 )
         order by "'||varfield||'", to_char(nvl(SEQ,99999),''00000'')||ID';
    FETCH refnnote INTO noteid, notelen, notedata;
    WHILE refnnote%FOUND LOOP
        ...

etc. (Update 4 is the last one; it gets records where LEN > 4000.)

April 07, 2005 - 9:12 am UTC

It would be the last option, not the first, correct (temp tables). Why two updates, why not one? What version are you using here?

Re: is a temp table appropriate here?
Jeremy, April 07, 2005 - 1:25 pm UTC

> why two updates, why not one?

Actually I tried it as one update, and had it working...
wouldn't have had a problem doing it either way, but having two statements just seemed to have slightly better readability and didn't really run significantly slower (i think it took maybe less than 5% longer, which to me was worth it for code that's more clear about what it's doing and easier for the next guy to figure out). basically, this procedure is run just after loading a bunch of data from external tables. the memo fields are stored in pieces in the external tables and this procedure reassembles the pieces into CLOBs. the reason for 4 updates is that over 50% of the notes only have 1 or 2 pieces, and can be built very quickly with the simple update statement(s). The other two updates are for (3) notes with less than 4000 characters - stored inline - and (4) notes with more than 4000 characters, stored in the LOB segment (less than 1% of the notes are more than 4000 chars). the stored procedure actually gets run more than once, on different tables (which each have the same structure)... hence variables for "table", "tabledetail" and "field".

-- BIG UPDATE 2 (notes w/ 2 pieces)
EXECUTE IMMEDIATE 'update "'||vartable||'" t set "NOTE"=(
     select REPLACE(RPAD(d1."DATA",d1."LENGTH"),CHR(141)||CHR(10)) ||
            REPLACE(RPAD(d2."DATA",d2."LENGTH"),CHR(141)||CHR(10))
       from "'||vartabledetail||'" d1, "'||vartabledetail||'" d2
      where t."ID"=d1."'||varfield||'"
        and (d1."'||varfield||'",to_char(nvl(d1.SEQ,99999),''00000'')||d1.ID) in (
              select ID, first_piece from CTLLDR$TEMP_BUILD_NOTES where cnt=2 )
        and t."ID"=d2."'||varfield||'"
        and (d2."'||varfield||'",to_char(nvl(d2.SEQ,99999),''00000'')||d2.ID) in (
              select ID, last_piece from CTLLDR$TEMP_BUILD_NOTES where cnt=2 )
   )
   where ID in (select ID from CTLLDR$TEMP_BUILD_NOTES where cnt=2)';

April 07, 2005 - 1:35 pm UTC
comment? I can give you some nasty looking procedural code, it'll be as clear as it can be, but without a comment for the next guy, it is just nasty procedural code.
I can give you what I believe to be the MOST READABLE code on the planet, and six months later I have no clue what it does till I reverse engineer it. comments fix lots of things! so, I'd be back to "we should be doing this as a single sql statement, so no, global temporary tables are not needed here"

Re: is a temp table appropriate here?
Jeremy, April 07, 2005 - 1:28 pm UTC
using 10g [10.1.0.3.0] (is there a better way to do this in 10g?)
April 07, 2005 - 1:37 pm UTC
merge might be very useful

Re: is a temp table appropriate here?
Jeremy, April 07, 2005 - 3:47 pm UTC
\> so, I'd be back to "we should be doing this as a single sql statement, so no global temporary tables are not needed here"
the temp table is still used by updates 3 and 4, which are procedural (and i don't think there's an easy way to make them bulk). they use OPEN/FETCH/CLOSE to iterate through several records at a time and build the CLOBs. in the case of update 4, it builds chunks in memory and then WRITEAPPEND's each chunk to the clob. it uses the global temp table to figure out which records have >4000 chars and thus need to be stored out-of-line.
\> merge might be very useful
would merge still apply if i'm only doing updates? (the row *always* already exists in the target table, I'm just populating the CLOB column.)
\========
I've got another question though. I've been looking closer and I'm not sure the temp table is really saving me any processing after all... and I may just wind up going back to not using it... seems that every time I use the temp table in the WHERE clause to get the records I want, the DB wants to make sure that each record still exists in the source table and so it does a HASH SEMI join back to the base from the temp. additionally, it's seeing that there'll be high selectivity after the join and so it's doing a full table scan on the base table. seems to me that if it's going to FTS the base table every time then I might as well just do...
WHERE id IN (SELECT id FROM basetable GROUP BY id HAVING COUNT(*)=2)
...because this will do the same FTS (with an additional sort/group by operation but without the hash join). sure wish there was some way I could do that FTS/GROUP BY only once for the 3 or 4 update/select statements I need to do on that base table... hmmm... also, considering this new info, i'm going to go back and see if i might have been wrong about the speed difference between 2 update statements vs 1... i bet you're right and 1 statement will go faster
April 07, 2005 - 4:40 pm UTC
merge can apply in 9i even if only doing updates (if the table you merge "from" -- the using table -- only has rows that are in the target table), and especially so in 10g where you can skip the insert part. It is not that the single statement will go faster, I just think less code is better code, comments are good :)

Re: is a temp table appropriate here?
Jeremy, April 08, 2005 - 2:41 pm UTC
for anyone who might read this thread... FYI, I did manage to find a much better way to do this. :) Got rid of the update statements, moved everything into one single OPEN/FOR-FETCH loop and used BULK COLLECT/LIMIT to process several hundred rows at a time... Also, used analytic functions to get the aggregate info I needed in the same full table scan that's actually getting the data. I wish the CBO was smart enough to realize that the WINDOW SORT is putting the rows in the same order as the ORDER BY (and thus avoid the extra [massive] sort which does nothing in this case)... But anyway, it runs in half the time it did before, and the code is much easier to read (especially with the comments <g>...) And no, there are no "global temp tables". :)
April 08, 2005 - 2:46 pm UTC
It does, if the sort is unnecessary -- no sort will be done....
scott@ORA9IR2> select empno, sal, rank() over (partition by job order by sal) rnk
  2    from emp
  3   order by job, sal;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE
   1    0   WINDOW (SORT)
   2    1     TABLE ACCESS (FULL) OF 'EMP'

Re: is a temp table appropriate here?
Jeremy, April 08, 2005 - 6:10 pm UTC
\>It does, if the sort is unnecessary -- no sort will be done....
yes, you're right... thanks

global temp tables
senthil, October 23, 2005 - 4:28 am UTC
Hello Tom, will global temp tables generate more redo log files? i have made some temp tables global temp tables. all of a sudden more redo log files have been generated. are GTTs related to redo log files? Senthil.
October 23, 2005 - 4:52 am UTC
global temporary tables generate undo. undo is protected by redo. hence, global temporary tables will generate some amount of redo, yes.

global temp table to store session user
Wayne, October 23, 2005 - 4:08 pm UTC
Tom, Using connection pools in our Java application, we cannot identify who is who directly; therefore, we cannot use them directly in the create_by or last_update_by fields (handled by trigger). I think we can use a global temp table (with the on commit delete rows setting, and other careful controls) to store the LDAP user id (when they sign on to the application). There are some things our application has control over; for example, before it releases a connection back to the connection pool, the transactions are either committed or rolled back. In our tests, it worked quite well; do you think this is a feasible solution? Thanks,
October 24, 2005 - 12:27 am UTC
why not just use an application context? or dbms_session.set_client_id (that will be audited even, that identifier becomes part of the audit trail) instead of a global temporary table. Just set the context right after you grab a connection, much like you must be inserting into a global temporary table now.

DBMS_SESSION
Wayne, October 24, 2005 - 7:57 pm UTC
Thanks for your great suggestion, it is very clean.
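[Editor's note] A minimal sketch of the client-identifier approach Tom describes above. The actual package procedure is DBMS_SESSION.SET_IDENTIFIER (the client identifier Tom refers to as "set_client_id"), readable via SYS_CONTEXT('USERENV','CLIENT_IDENTIFIER'); the table, column, and trigger names below are made up for illustration:

```sql
-- Right after the application grabs a pooled connection:
--   exec dbms_session.set_identifier('LDAP_USER_123');

-- Hypothetical audit trigger on a hypothetical ORDERS table:
create or replace trigger orders_audit_trg
before insert or update on orders
for each row
begin
  -- falls back to the schema user if no identifier was set
  :new.last_update_by :=
      nvl( sys_context('userenv','client_identifier'), user );
end;
/

-- Before returning the connection to the pool:
--   exec dbms_session.clear_identifier;
```

The identifier set this way also shows up in the audit trail, which is what makes it preferable to a session-private temp table here.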
We will look into using that.

Global Temp Tables
Senthil, October 27, 2005 - 7:55 am UTC
Tom, is there any way to prevent logging for global temp tables? the alter statement is not supported here.
October 27, 2005 - 12:31 pm UTC
conventional path inserts generate UNDO. they have to; you need to be able to rollback, you need to be able to support multi-versioning. UNDO is always protected by redo. If you direct path the global temporary table (insert /*+ APPEND */) you can bypass undo ON THE TABLE - but not on the indexes. Hence you can reduce (marginally typically, as it is usually indexes that generate the most undo) the amount of redo, but you cannot eliminate it.

Temporary Table allocation
brahma prakash, November 23, 2005 - 3:21 am UTC
Hi Tom, First I would like to say that I am a great admirer of this site. I do not know how temporary table allocation is done. I know that they are allocated in the user's default temporary tablespace and segments are created when data is first inserted. What I would like to know is
if a quota exists, can a user be given the CREATE TABLE privilege with quota=0 on his default tablespace, so that he can only create temporary tables? Is this possible? Please give your comments on this. Thank you.
November 23, 2005 - 9:47 am UTC
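[Editor's note: Tom's reply does not appear in this extract.] A sketch of the experiment being asked about, with made-up user and tablespace names. Tablespace quotas do not apply to temporary space, so the global temporary table works while the permanent table runs out of quota (depending on version, either the CREATE TABLE or its first INSERT fails with ORA-01536):

```sql
-- As a DBA (names are illustrative):
create user quota_demo identified by quota_demo
  default tablespace users
  temporary tablespace temp
  quota 0 on users;
grant create session, create table to quota_demo;

-- As QUOTA_DEMO:
create global temporary table gtt_demo ( x int ) on commit delete rows;
insert into gtt_demo values ( 1 );   -- fine: space comes from TEMP, no quota check

create table perm_demo ( x int );
insert into perm_demo values ( 1 );  -- ORA-01536: space quota exceeded for tablespace 'USERS'
```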
what if a single user fills up the temp space
Brahma, November 24, 2005 - 8:13 pm UTC
So, what if a single user fills up the temporary space? How can the others use it?
November 25, 2005 - 9:32 am UTC
by definition: they cannot use it if all that you have permitted to be used is in USE.

GTT and redo log
Charlie Zhu, December 28, 2005 - 2:13 pm UTC
Why did the INSERT into the GTT generate so much redo without the APPEND hint?

CREATE GLOBAL TEMPORARY TABLE SCOTT.VC_GT (
  LISTINGSID NUMBER(20) NOT NULL,
  VENDCATSID NUMBER
) ON COMMIT PRESERVE ROWS;

exec runStats_pkg.rs_start;
insert /* append */ into scott.vc_gt
select listingsid, Min(VENDCATSID) VENDCATSID
  from scott.lstvendcat group by listingsid;
exec runStats_pkg.rs_middle;
insert /*+ append */ into scott.vc_gt
select listingsid, Min(VENDCATSID) VENDCATSID
  from scott.lstvendcat group by listingsid;
exec runStats_pkg.rs_stop;

scott@lab> exec runStats_pkg.rs_stop;
Run1 ran in 15 hsecs
Run2 ran in 68 hsecs
run 1 ran in 22.06% of the time

Name                                 Run1      Run2      Diff
...
STAT...redo entries                   253        19      -234
STAT...consistent gets                418       162      -256
STAT...consistent gets from ca        418       162      -256
STAT...db block gets                  568       170      -398
STAT...consistent changes             525        35      -490
STAT...db block gets from cach        568        39      -529
STAT...session logical reads          986       332      -654
STAT...db block changes               800        53      -747
LATCH.cache buffers chains          3,183       481    -2,702
STAT...undo change vector size     79,196     2,900   -76,296
STAT...redo size                   97,816     4,280   -93,536

Run1 latches total versus runs -- difference and pct
Run1      Run2      Diff      Pct
3,694       834    -2,860   442.93%

PL/SQL procedure successfully completed.

December 28, 2005 - 5:59 pm UTC
(I would hardly term 92K of redo "so many redo logs", but...) the conventional path insert generates UNDO; UNDO is always protected by REDO.
Here is an excerpt from Expert Oracle: Database Architecture regarding this:

<quote>
Temporary Tables and Redo/Undo

Temporary tables are still considered a relatively new feature of Oracle, having been introduced only in Oracle8i version 8.1.5. As such, there is some confusion surrounding them, in particular in the area of logging. In Chapter 10, we will cover how and why you might use temporary tables. In this section, we will explore only the question "How do temporary tables work with respect to logging of changes?"

Temporary tables generate no redo for their blocks. Therefore, an operation on a temporary table is not recoverable. When you modify a block in a temporary table, no record of this change will be made in the redo log files. However, temporary tables do generate undo, and the undo is logged. Hence, temporary tables will generate some redo. At first glance, it doesn't seem to make total sense: why would they need to generate undo? This is because you can roll back to a SAVEPOINT within a transaction. You might erase the last 50 INSERTs into a temporary table, leaving the first 50. Temporary tables can have constraints and everything else a normal table can have. They might fail a statement on the five-hundredth row of a 500-row INSERT, necessitating a rollback of that statement. Since temporary tables behave in general just like normal tables, temporary tables must generate undo. Since undo data must be logged, temporary tables will generate some redo log for the undo they generate.

This is not nearly as ominous as it seems. The primary SQL statements used against temporary tables are INSERTs and SELECTs. Fortunately, INSERTs generate very little undo (you need to restore the block to nothing, and it doesn't take very much room to store nothing), and SELECTs generate no undo. Hence, if you use temporary tables for INSERTs and SELECTs exclusively, this section means nothing to you. It is only if you UPDATE or DELETE that you might be concerned about this.
I set up a small test to demonstrate the amount of redo generated while working with temporary tables, an indication therefore of the amount of undo generated for temporary tables, since only the undo is logged for them. To demonstrate this, I will take identically configured permanent and temporary tables, and then perform the same operations on each, measuring the amount of redo generated each time. The tables I used were simply as follows:

ops$tkyte@ORA10G> create table perm
  2  ( x char(2000) ,
  3    y char(2000) ,
  4    z char(2000) )
  5  /
Table created.

ops$tkyte@ORA10G> create global temporary table temp
  2  ( x char(2000) ,
  3    y char(2000) ,
  4    z char(2000) )
  5  on commit preserve rows
  6  /
Table created.

I set up a small stored procedure to allow me to perform arbitrary SQL and report the amount of redo generated by that SQL. I will use this routine to perform INSERTs, UPDATEs, and DELETEs against both the temporary and permanent tables:

ops$tkyte@ORA10G> create or replace procedure do_sql( p_sql in varchar2 )
  2  as
  3      l_start_redo    number;
  4      l_redo          number;
  5  begin
  6      select v$mystat.value
  7        into l_start_redo
  8        from v$mystat, v$statname
  9       where v$mystat.statistic# = v$statname.statistic#
 10         and v$statname.name = 'redo size';
 11
 12      execute immediate p_sql;
 13      commit;
 14
 15      select v$mystat.value-l_start_redo
 16        into l_redo
 17        from v$mystat, v$statname
 18       where v$mystat.statistic# = v$statname.statistic#
 19         and v$statname.name = 'redo size';
 20
 21      dbms_output.put_line
 22      ( to_char(l_redo,'9,999,999') ||' bytes of redo generated for "' ||
 23        substr( replace( p_sql, chr(10), ' '), 1, 25 ) || '"...' );
 24  end;
 25  /
Procedure created.
Then, I ran equivalent INSERTs, UPDATEs, and DELETEs against the PERM and TEMP tables:

ops$tkyte@ORA10G> set serveroutput on format wrapped
ops$tkyte@ORA10G> begin
  2      do_sql( 'insert into perm
  3               select 1,1,1
  4                 from all_objects
  5                where rownum <= 500' );
  6
  7      do_sql( 'insert into temp
  8               select 1,1,1
  9                 from all_objects
 10                where rownum <= 500' );
 11      dbms_output.new_line;
 12
 13      do_sql( 'update perm set x = 2' );
 14      do_sql( 'update temp set x = 2' );
 15      dbms_output.new_line;
 16
 17      do_sql( 'delete from perm' );
 18      do_sql( 'delete from temp' );
 19  end;
 20  /
 3,297,752 bytes of redo generated for "insert into perm "...
 2,182,200 bytes of redo generated for "update perm set x = 2"...
1,100,252 bytes of redo generated for "update temp set x = 2"...
3,218,804 bytes of redo generated for "delete from perm"...
3,212,084 bytes of redo generated for "delete from temp"...
PL/SQL procedure successfully completed.
As you can see,
GTT, heap table and redo log
Charlie Zhu, December 30, 2005 - 2:43 pm UTC
Hi Tom, I read this part twice; that's why I wondered about the big redo generated for the temp table. As you did in the trigger and redo log part, I tested all cases yesterday, and now know all the details. Here are the numbers. GTT vs. heap table,
STAT...redo entries                   255        920        665
STAT...redo size                   92,820    693,268    600,448

Run1 latches total versus runs -- difference and pct
Run1      Run2      Diff     Pct
4,807     8,882     4,075   54.12%
STAT...redo entries                    18        149        131
STAT...redo size                    3,872    545,216    541,344

Run1 latches total versus runs -- difference and pct
Run1      Run2      Diff     Pct
693       1,076       383   64.41%
exec runStats_pkg.rs_start;
insert /*+ append */ into scott.vc_gt
select listingsid, Min(VENDCATSID) VENDCATSID
  from scott.lstvendcat group by listingsid;
exec runStats_pkg.rs_middle;
insert /*+ append */ into scott.vc_stg nologging
select listingsid, Min(VENDCATSID) VENDCATSID
  from scott.lstvendcat group by listingsid;
exec runStats_pkg.rs_stop;

STAT...redo entries                    17        162        145
STAT...redo size                    3,820     27,328     23,508
STAT...session uga memory max     199,968          0   -199,968
STAT...session pga memory max     786,432          0   -786,432

Run1 latches total versus runs -- difference and pct
Run1      Run2      Diff     Pct
965       4,277     3,312   22.56%

PL/SQL procedure successfully completed.

Typo, paste the redo again
Charlie Zhu, December 30, 2005 - 3:52 pm UTC
For (3), no force logging, +APPEND direct insert, GTT vs. heap table:
27426 rows inserted.
scott@lab> exec runStats_pkg.rs_stop;
Run1 ran in 14 hsecs
Run2 ran in 67 hsecs
run 1 ran in 20.9% of the time

Name                                 Run1      Run2      Diff
STAT...redo entries                    17        27        10
STAT...undo change vector size      2,468     2,496        28
STAT...redo size                    3,748     4,272       524

Run1 latches total versus runs -- difference and pct
Run1       Run2      Diff     Pct
10,658     10,674      16   99.85%

temp tables for (inefficient) shaped recordset
Geoff Keenan, February 23, 2006 - 2:04 pm UTC
great site. i generate a shaped recordset (about 10 child levels deep) from a 10 million record table. (The much more efficient 'connect by' sql does not give me drilldown functionality when displaying the data in a hierarchical grid control.) it's real slow. i wondered if you think a temp table, populated by an insert query which uses the 'connect by' syntax in order to reduce the # of records from 10 million to only the records needed for the shaped recordset (~1 million or less), is an appropriate use of temp tables. thanks.
February 23, 2006 - 7:12 pm UTC
why doesn't connect by give you "drill down"? I missed that part, you just start with the right bit?
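[Editor's note] What the questioner describes — seeding a GTT with just the needed subtree via CONNECT BY, then running the shaping queries against the much smaller set — might look like the following sketch. All table and column names here are invented for illustration:

```sql
-- Hypothetical 10M-row hierarchy: big_tree(id, parent_id, payload)
create global temporary table subtree_gtt (
  id        number,
  parent_id number,
  lvl       number,
  payload   varchar2(100)
) on commit preserve rows;

-- Keep only the rows under the node of interest ("start with the right bit"):
insert into subtree_gtt (id, parent_id, lvl, payload)
select id, parent_id, level, payload
  from big_tree
 start with id = :root_id
 connect by prior id = parent_id;

-- Subsequent drill-down/shaping queries then hit ~1M rows instead of 10M.
```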
Where is it stored?
A reader, July 20, 2006 - 5:05 am UTC
Where is the session data for a global temporary table stored? The temp tablespace? Thanks
July 22, 2006 - 4:38 pm UTC
that is where it can be "swapped" to. It is buffered as well in the buffer cache.

Where is it stored?
A reader, July 21, 2006 - 6:51 am UTC
Could you tell us where the data for a global temporary table is stored? Thanks
July 22, 2006 - 7:23 pm UTC
in temp and/or the buffer cache

Global Temporary Table
CT VELU, August 01, 2006 - 11:30 am UTC
Hi Tom, I am in the process of tuning an application. I have a reporting table of 80 columns with 55 million records. There are two indexes on the table, and one before-insert-or-update trigger. The trigger has a PL/SQL function; no SQL involved. Every day the process inside the application goes like this:
1. Delete 2 million records (15 minutes).
2. Insert 2 million records (15 to twenty minutes).
3. Procedures 1 to 20 fetch these 2 million records in different queries and process them; sometimes we do updates.
4. There are no intermediate commits involved.
I tuned some of the sql statements with my knowledge. I was able to bring it down from 1 hour 45 minutes to 1 hour 15 minutes. By looking at the sql statements and tkprof trace I found a lot of time is spent on reading these two million records every time the table is hit (of course it uses the index every time). My aim is to speed up this process so that I can be well within 1 hour. I was thinking:
1. Create global temp table GTT.
2. Insert 2 million records into the global temp table GTT.
3. Process these two million records in the procedures (here I am expecting a time reduction).
4. Delete 2 million records from the permanent report table.
5. Insert 2 million records from GTT to the permanent table. (I can't do a merge because of the business practice in our shop.)
I have read in your book and in the documentation that the temp table is on disk, not in memory, so it has to do physical reads. In one of your books you have said that a physical read from temp space is not going to be included in logical I/O (what does that mean? won't I have the block in the buffer for further processing?). I have seen your comments saying that global temporary tables are almost never needed, but I did see in this post you are saying we can use one as an interface table. Please do clarify whether I can use it here to speed up the process. Thank you very much for your time. CT
P.S. With a bit of knowledge I think it is going to add the overhead of inserting 2 million records into the temp table, but that should not be a killer.
August 01, 2006 - 7:05 pm UTC
temp tables are cached in the buffer cache. it can have physical reads, but truth be told - I'd rather full scan a 2 million row table than do 2 million indexed reads! this might be one time where I'd say "global temporary table, makes sense..." This is a lot like my "interface table" example. Very similar indeed to my "palm sync" example. don't know why a BUSINESS PRACTICE would prevent a TECHNICAL SOLUTION that arrives at the same answer from being used?????? This I am very curious about - can you explain?

Business Rule
CT Velu, August 01, 2006 - 10:02 pm UTC
Hi Tom, Thank you very much for your response. I thought you were busy, so I tested a few procedures and was able to bring down the time; once I implement it at full scale I will post the result here. It goes like this:
1. We delete 1400 header records (header numbers are from a sequence); this will delete the corresponding 2 million records in the report table in question (which has a detail number coming from a sequence as its primary key).
2. Insert 1400 or more header records depending on the sales detail (I guess we are inserting only for customers who have sales, not for everyone), so today we may have 100 records and tomorrow we may have 110.
3. Now insert around 2 million records into the detail table (the report table in question) from the sales table and process. My belief is that a row in the detail table which qualifies the condition today may not qualify tomorrow for that period (in that case we can't do a merge with update and insert because the unwanted row will still be in the table and will yield wrong results down the line). So every day we want to have these 2 million rows fresh.
Actually my original thought was to implement a materialized view with analytic functions, since 70 columns pertain to monthly sales, yearly sales, quarterly sales, etc. The reason I thought like this is that we run this procedure 3 times for 3 different quarters (3 x 1.45 hours), but then I dropped it because we needed fresh rows. If you have any different thoughts on how I can do this, I am willing to listen. Thanks again for your response. CT
August 02, 2006 - 11:18 am UTC
ok, so records are deleted as well as updated and inserted. 10g can do that (the merge can
temp table and join
Anil, August 09, 2006 - 5:14 pm UTC
Hi Tom, The following two select statements return the same number of rows, but the query using the temp table is much faster than the simple query (please look at the COST). Could you please throw some light on this? The temp table contains only 1 row:

insert into SPE_INTR_SEARCH_RESULT_GLBTMP values(1,98,'x');

SQL> SELECT
  2      1
  3     ,spe.SPR_SEG_PROFILE_SYS_ID
  4     ,'X'
  5  FROM
  6      segregation_profile spe
  7     ,SPE_INTR_SEARCH_RESULT_GLBTMP g
  8  WHERE
  9      1=1
 10  AND spe.BRWR_CUS_SYS_ID = g.SPR_SEG_PROFILE_SYS_ID
 11  AND g.GLBTMP_DATASET_NO = 1;

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=15 Card=1 Bytes=36)
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'SEGREGATION_PROFILE' (Co
   2    1     NESTED LOOPS (Cost=15 Card=1 Bytes=36)
   3    2       TABLE ACCESS (FULL) OF 'SPE_INTR_SEARCH_RESULT_GLBTMP'
   4    2       INDEX (RANGE SCAN) OF 'XIF1SEGREGATION_PROFILE' (NON-U

SQL> select spr_seg_profile_sys_id from segregation_profile where brwr_cus_sys_id = 98;
Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=CHOOSE (Cost=24 Card=1110 Bytes=1
   1    0   TABLE ACCESS (BY INDEX ROWID) OF 'SEGREGATION_PROFILE' (Co
   2    1     INDEX (RANGE SCAN) OF 'XIF1SEGREGATION_PROFILE' (NON-UNI

regards,
ANIL August 09, 2006 - 7:46 pm UTC look at the estimated cardinality - that'll explain the cost but in any case - one would want a tkprof report to say anything about relative performance. Anil, August 11, 2006 - 2:21 pm UTC Hi tom, Here is the tkprof report for the same TKPROF: Release 9.2.0.1.0 - Production on Thu Aug 10 11:08:20 2006 Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved. Trace file: bdsd2_ora_1323152_seeker.trc Sort options: default ******************************************************************************** count = number of times OCI procedure was executed cpu = cpu time in seconds executing elapsed = elapsed time in seconds executing disk = number of physical reads of buffers from disk query = number of buffers gotten for consistent read current = number of buffers gotten in current mode (usually for update) rows = number of rows processed by the fetch or execute call ******************************************************************************** alter session set events '10046 trace name context forever, level 12' call count cpu elapsed disk query current rows --- -- -- -- -- -- -- Parse 0 0.00 0.00 0 0 0 0 Execute 1 0.00 0.01 0 0 0 0 Fetch 0 0.00 0.00 0 0 0 0 --- -- -- -- -- -- -- total 1 0.00 0.01 0 0 0 0 Misses in library cache during parse: 0 Misses in library cache during execute: 1 Optimizer goal: CHOOSE Parsing user id: 41 Elapsed times include waiting on following events: Event waited on Times Max. 
Wait Total Waited -------- Waited -- ---- SQL*Net message to client 1 0.00 0.00 SQL*Net message from client 1 48.88 48.88 ******************************************************************************** insert into SPE_INTR_SEARCH_RESULT_GLBTMP values (1,98,'x') call count cpu elapsed disk query current rows --- -- -- -- -- -- -- Parse 1 0.09 1.35 21 49 0 0 Execute 1 0.00 0.08 3 1 2 1 Fetch 0 0.00 0.00 0 0 0 0 --- -- -- -- -- -- -- total 2 0.09 1.44 24 50 2 1 Misses in library cache during parse: 1 Optimizer goal: CHOOSE Parsing user id: 41 Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited -------- Waited -- ---- db file sequential read 3 0.01 0.02 SQL*Net message to client 1 0.00 0.00 SQL*Net message from client 1 0.01 0.01 ******************************************************************************** BEGIN DBMS_OUTPUT.GET_LINES(:LINES, :NUMLINES); END; call count cpu elapsed disk query current rows --- -- -- -- -- -- -- Parse 1 0.00 0.18 0 0 0 0 Execute 1 0.03 0.23 0 0 0 1 Fetch 0 0.00 0.00 0 0 0 0 --- -- -- -- -- -- -- total 2 0.03 0.41 0 0 0 1 Misses in library cache during parse: 1 Optimizer goal: CHOOSE Parsing user id: 41 Elapsed times include waiting on following events: Event waited on Times Max. 
Wait Total Waited -------- Waited -- ---- SQL*Net message to client 1 0.00 0.00 SQL*Net message from client 1 18.27 18.27 ******************************************************************************** select spr_seg_profile_sys_id from segregation_profile where brwr_cus_sys_id = 98 call count cpu elapsed disk query current rows --- -- -- -- -- -- -- Parse 1 0.00 0.00 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 281 0.02 0.14 60 638 0 4189 --- -- -- -- -- -- -- total 283 0.02 0.15 60 638 0 4189 Misses in library cache during parse: 1 Optimizer goal: CHOOSE Parsing user id: 41 Rows Row Source Operation - --------- 4189 TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE 4189 INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019) Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited -------- Waited -- ---- SQL*Net message to client 281 0.00 0.00 db file sequential read 60 0.01 0.07 SQL*Net message from client 281 73.75 87.02 ******************************************************************************** SELECT 1 ,spe.SPR_SEG_PROFILE_SYS_ID ,'X' FROM segregation_profile spe ,SPE_INTR_SEARCH_RESULT_GLBTMP g WHERE 1=1 AND spe.BRWR_CUS_SYS_ID = g.SPR_SEG_PROFILE_SYS_ID AND g.GLBTMP_DATASET_NO = 1 call count cpu elapsed disk query current rows --- -- -- -- -- -- -- Parse 1 0.01 0.01 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 281 0.22 0.26 82 7470 0 4189 --- -- -- -- -- -- -- total 283 0.23 0.27 82 7470 0 4189 Misses in library cache during parse: 1 Optimizer goal: CHOOSE Parsing user id: 41 Rows Row Source Operation - --------- 4189 TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE 11004 NESTED LOOPS 6814 TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP 4189 INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019) Elapsed times include waiting on following events: Event waited on Times Max. 
Wait Total Waited -------- Waited -- ---- SQL*Net message to client 281 0.00 0.00 db file sequential read 62 0.01 0.01 db file scattered read 3 0.02 0.02 SQL*Net message from client 281 112.56 124.16 ******************************************************************************** SELECT DECODE('A','A','1','2') FROM DUAL call count cpu elapsed disk query current rows --- -- -- -- -- -- -- Parse 1 0.01 0.11 0 0 0 0 Execute 1 0.00 0.00 0 0 0 0 Fetch 1 0.00 0.00 2 3 0 1 --- -- -- -- -- -- -- total 3 0.01 0.11 2 3 0 1 Misses in library cache during parse: 1 Optimizer goal: CHOOSE Parsing user id: 41 Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited -------- Waited -- ---- SQL*Net message to client 1 0.00 0.00 db file sequential read 2 0.00 0.00 ******************************************************************************** OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS call count cpu elapsed disk query current rows --- -- -- -- -- -- -- Parse 5 0.11 1.66 21 49 0 0 Execute 6 0.03 0.33 3 1 2 2 Fetch 563 0.24 0.41 144 8111 0 8379 --- -- -- -- -- -- -- total 574 0.38 2.41 168 8161 2 8381 Misses in library cache during parse: 5 Misses in library cache during execute: 1 Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited -------- Waited -- ---- SQL*Net message to client 566 0.00 0.00 SQL*Net message from client 565 112.56 278.36 db file sequential read 127 0.01 0.11 db file scattered read 3 0.02 0.02 OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS call count cpu elapsed disk query current rows --- -- -- -- -- -- -- Parse 11 0.01 0.53 0 0 0 0 Execute 17 0.06 0.66 0 0 0 0 Fetch 40 0.02 0.12 24 52 0 29 --- -- -- -- -- -- -- total 68 0.09 1.31 24 52 0 29 Misses in library cache during parse: 11 Elapsed times include waiting on following events: Event waited on Times Max. Wait Total Waited -------- Waited -- ---- db file sequential read 24 0.01 0.10 6 user SQL statements in session. 
11 internal SQL statements in session.
17 SQL statements in session.
********************************************************************************
Trace file: bdsd2_ora_1323152_seeker.trc
Trace file compatibility: 9.00.01
Sort options: default
1 session in tracefile.
6 user SQL statements in trace file.
11 internal SQL statements in trace file.
17 SQL statements in trace file.
17 unique SQL statements in trace file.
2140 lines in trace file.
regards, ANIL
August 11, 2006 - 2:55 pm UTC
6814 TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP
how'd we get that many rows from a 1 row table?

A reader, August 14, 2006 - 2:09 pm UTC
Hi Tom, here's the recent tkprof report... I don't know how it is showing this many records.

TKPROF: Release 9.2.0.1.0 - Production on Mon Aug 14 13:59:17 2006
Copyright (c) 1982, 2002, Oracle Corporation. All rights reserved.
Trace file: bdsd2_ora_1028258_seeker14aug.trc
Sort options: default
********************************************************************************
count    = number of times OCI procedure was executed
cpu      = cpu time in seconds executing
elapsed  = elapsed time in seconds executing
disk     = number of physical reads of buffers from disk
query    = number of buffers gotten for consistent read
current  = number of buffers gotten in current mode (usually for update)
rows     = number of rows processed by the fetch or execute call
********************************************************************************
alter session set events '10046 trace name context forever, level 12'

call     count     cpu   elapsed     disk    query  current     rows
------- ------  ------  --------  -------  -------  -------  -------
Parse        0    0.00      0.00        0        0        0        0
Execute      1    0.00      0.00        0        0        0        0
Fetch        0    0.00      0.00        0        0        0        0
------- ------  ------  --------  -------  -------  -------  -------
total        1    0.00      0.00        0        0        0        0

Misses in library cache during parse: 0
Misses in library cache during execute: 1
Optimizer goal: CHOOSE
Parsing user id: 41
Elapsed times include waiting on following events:
Event waited on                    Times   Max.
Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                        1        0.00          0.00
  SQL*Net message from client                      1        3.18          3.18
********************************************************************************

alter session set tracefile_identifier = 'seeker14aug'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        2      0.00       0.06          0          0          0           0
Execute      2      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.07          0          0          0           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                        2        0.00          0.00
  SQL*Net message from client                      2       35.72         43.95
********************************************************************************

rollback

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.06          0          0          0           0
Execute      1      0.00       0.07          3          2         52           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.13          3          2         52           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  db file sequential read                          3        0.02          0.02
  log file sync                                    1        0.02          0.02
  SQL*Net message to client                        1        0.00          0.00
  SQL*Net message from client                      1        2.98          2.98
********************************************************************************

select count(*) from SPE_INTR_SEARCH_RESULT_GLBTMP

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        2      0.00       0.00          0          0          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        4      0.00       0.00          0          0          0           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  SORT AGGREGATE
      0   TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP

Elapsed times include waiting on following events:
  Event waited on                             Times   Max.
Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                        2        0.00          0.00
  SQL*Net message from client                      2        7.12          7.13
********************************************************************************

alter session set events '10046 trace name context forever, level 12'

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          0          0           0

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                        1        0.00          0.00
  SQL*Net message from client                      1        3.44          3.44
********************************************************************************

insert into SPE_INTR_SEARCH_RESULT_GLBTMP values (1,98,'x')

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          1          8           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.00       0.00          0          1          8           1

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Elapsed times include waiting on following events:
  Event waited on                             Times   Max.
Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                        1        0.00          0.00
  SQL*Net message from client                      1       22.54         22.54
********************************************************************************

SELECT 1, spe.SPR_SEG_PROFILE_SYS_ID, 'X'
FROM   segregation_profile spe,
       SPE_INTR_SEARCH_RESULT_GLBTMP g
WHERE  1=1
AND    spe.BRWR_CUS_SYS_ID = g.SPR_SEG_PROFILE_SYS_ID
AND    g.GLBTMP_DATASET_NO = 1

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.05          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      281      0.06       0.11         76        631          0        4190
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      283      0.06       0.17         76        631          0        4190

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Rows     Row Source Operation
-------  ---------------------------------------------------
   4190  TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
   4192   NESTED LOOPS
      1    TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP
   4190    INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      281        0.00          0.00
  db file sequential read                         76        0.00          0.01
  SQL*Net message from client                    281       23.66         36.22
********************************************************************************

select spr_seg_profile_sys_id
from   segregation_profile
where  brwr_cus_sys_id = 98

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.02       0.08          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch      281      0.06       0.05          0        628          0        4190
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      283      0.08       0.14          0        628          0        4190

Misses in library cache during parse: 1
Optimizer goal: CHOOSE
Parsing user id: 41

Rows     Row Source Operation
-------  ---------------------------------------------------
   4190  TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
   4190   INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max.
Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      281        0.00          0.00
  SQL*Net message from client                    281        0.61         16.57
********************************************************************************

SELECT DECODE('A','A','1','2') FROM DUAL

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.00       0.00          0          0          0           0
Fetch        1      0.00       0.00          0          3          0           1
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        3      0.00       0.00          0          3          0           1

Misses in library cache during parse: 0
Optimizer goal: CHOOSE
Parsing user id: 41

Rows     Row Source Operation
-------  ---------------------------------------------------
      1  TABLE ACCESS FULL OBJ#(222)

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                        1        0.00          0.00
  SQL*Net message from client                      1     1143.24       1143.24
********************************************************************************

OVERALL TOTALS FOR ALL NON-RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        9      0.02       0.27          0          0          0           0
Execute     10      0.00       0.07          3          3         60           1
Fetch      565      0.12       0.17         76       1262          0        8382
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total      584      0.14       0.52         79       1265         60        8383

Misses in library cache during parse: 7
Misses in library cache during execute: 1

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                      571        0.00          0.00
  SQL*Net message from client                    571     1143.24       1279.30
  db file sequential read                         79        0.02          0.04
  log file sync                                    2        0.02          0.02

OVERALL TOTALS FOR ALL RECURSIVE STATEMENTS

call     count       cpu    elapsed       disk      query    current        rows
------- ------  -------- ---------- ---------- ---------- ----------  ----------
Parse        1      0.00       0.00          0          0          0           0
Execute      1      0.01       0.00          0          5          2           1
Fetch        0      0.00       0.00          0          0          0           0
------- ------  -------- ---------- ---------- ---------- ----------  ----------
total        2      0.01       0.00          0          5          2           1

Misses in library cache during parse: 0

   10  user  SQL statements in session.
    1  internal SQL statements in session.
   11  SQL statements in session.
********************************************************************************
Trace file: bdsd2_ora_1028258_seeker14aug.trc
Trace file compatibility: 9.00.01
Sort options: default
       1  session in tracefile.
      10  user  SQL statements in trace file.
       1  internal SQL statements in trace file.
      11  SQL statements in trace file.
      10  unique SQL statements in trace file.
    1955  lines in trace file.

regards,
ANIL

August 14, 2006 - 3:03 pm UTC

so, tkprof shows no difference really now.

before, you had made a mistake - that table did not have one row. Now, it does.

Anil, August 14, 2006 - 4:00 pm UTC

hi tom, Please let us know why there is no diff (may be bcz there is not a considerable amount of rows in the temp table? or..). So which approach is better? Though we are seeing a diff in COST. Does it mean we can have either of the approaches, i.e. the single-query approach or the global temp table (which will hold the intermediate data set)? Pls explain with an example. Thanks a lot for the quick turnaround.

regards, Anil

August 14, 2006 - 4:12 pm UTC

bcz?

there is no difference because the plans are very similar and the amount of work to get that single row was negligible. right now, today you have:

Rows     Row Source Operation
-------  ---------------------------------------------------
   4190  TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
   4192   NESTED LOOPS
      1    TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP
   4190    INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)

which differs from the "hard coded" plan in that we are full scanning that 1 row table. Before you had:

Rows     Row Source Operation
-------  ---------------------------------------------------
   4189  TABLE ACCESS BY INDEX ROWID SEGREGATION_PROFILE
  11004   NESTED LOOPS
   6814    TABLE ACCESS FULL SPE_INTR_SEARCH_RESULT_GLBTMP
   4189    INDEX RANGE SCAN XIF1SEGREGATION_PROFILE (object id 282019)

A RATHER LARGE 6,814 rows - eg: not 1 as stated.

global temporary table usage
Qing Yu, August 30, 2006 - 4:23 pm UTC

Tom, what is the best way to capture global temporary table usage for a session? We are considering polling V$TEMPSEG_USAGE. Is there any other better way?
August 30, 2006 - 5:57 pm UTC

define "capture global temporary table usage" for me.

Updating a GTT
Paul McKerley, August 31, 2006 - 3:51 am UTC

Hi Tom: I have a question about updates to GTTs. Expert Oracle: Database Architecture has a passage (quoted on the page above) saying that an update of a null column in a GTT will produce very little redo. But this does not match our experience. We've found that updating null columns in GTTs is far more expensive than inserting into a new GTT. Here's a test package that compares updating and merging into a GTT vs. selecting into a new one. You need two GTTs:

create global temporary table test_gtt_upd_1
( owner          varchar2(30)  not null,
  table_name     varchar2(128) not null,
  last_analyzed  date          null
) on commit delete rows;

create global temporary table test_gtt_upd_2
( owner       varchar2(30)  not null,
  table_name  varchar2(128) not null
) on commit delete rows;

-- The first test will create a GTT based on dba_tables.
-- It will have null in the last_analyzed
-- column that will be updated from dba_tables.
--
-- The second test will accomplish the same thing using
-- a second GTT with a copy of part of dba_tables. This
-- will be joined against dba_tables to populate the
-- original table. Statistics will be gathered and
-- reported.
--
-- The third test tries the update with a merge statement.
create or replace package pkg_test_gtt_update as
    procedure test_update;
    procedure test_insert;
    procedure test_merge;
    procedure report_metrics;
    procedure run_test;
    procedure start_stats;
    procedure stop_stats(idx in varchar2);
end;
/
show errors;

create or replace package body pkg_test_gtt_update as
    type metric_at_type is table of int index by varchar2(10);
    redo_at     metric_at_type;
    gets_at     metric_at_type;
    before_redo int;
    before_gets int;

    procedure run_test as
    begin
        test_update;
        test_insert;
        test_merge;
        report_metrics;
    end;

    procedure start_stats as
    begin
        select value into before_gets
          from v$mystat join v$statname using (statistic#)
         where name = 'consistent gets';
        select value into before_redo
          from v$mystat join v$statname using (statistic#)
         where name = 'redo size';
    end;

    procedure stop_stats( idx in varchar2 ) as
    begin
        select value - before_gets into gets_at(idx)
          from v$mystat join v$statname using (statistic#)
         where name = 'consistent gets';
        select value - before_redo into redo_at(idx)
          from v$mystat join v$statname using (statistic#)
         where name = 'redo size';
    end;

    procedure test_update as
    begin
        start_stats;
        insert into test_gtt_upd_1
        select owner, table_name, null from dba_tables;
        update test_gtt_upd_1 tt
           set last_analyzed = (select last_analyzed
                                  from dba_tables dt
                                 where dt.owner = tt.owner
                                   and dt.table_name = tt.table_name );
        stop_stats('update');
        commit;
    end;

    procedure test_insert as
    begin
        start_stats;
        insert into test_gtt_upd_2
        select owner, table_name from dba_tables;
        insert into test_gtt_upd_1 tt1
        select tt2.owner, tt2.table_name, dt.last_analyzed
          from test_gtt_upd_2 tt2
          join dba_tables dt
            on tt2.owner = dt.owner
           and tt2.table_name = dt.table_name;
        stop_stats('insert');
        commit;
    end;

    procedure test_merge as
    begin
        start_stats;
        insert into test_gtt_upd_1
        select owner, table_name, null from dba_tables;
        merge into test_gtt_upd_1 gtt1
        using ( select owner, table_name, last_analyzed
                  from dba_tables ) dt
           on ( gtt1.owner = dt.owner
                and gtt1.table_name = dt.table_name )
        when matched then update
           set gtt1.last_analyzed = dt.last_analyzed;
        stop_stats('merge');
        commit;
    end;

    procedure report_metrics as
    begin
        dbms_output.put_line('Test            redo          gets');
        dbms_output.put_line('---------');
        dbms_output.put('Updating  ');
        dbms_output.put(to_char(redo_at('update'),'99999999999'));
        dbms_output.put(' ');
        dbms_output.put_line(to_char(gets_at('update'),'99999999999'));
        dbms_output.put('Inserting ');
        dbms_output.put(to_char(redo_at('insert'),'99999999999'));
        dbms_output.put(' ');
        dbms_output.put_line(to_char(gets_at('insert'),'99999999999'));
        dbms_output.put('Merging   ');
        dbms_output.put(to_char(redo_at('merge'),'99999999999'));
        dbms_output.put(' ');
        dbms_output.put_line(to_char(gets_at('merge'),'99999999999'));
    end;
end;
/
show errors;

Here are the results that I get consistently on my 10gR2 system:

paul@SURPRISE> exec pkg_test_gtt_update.run_test;
Test            redo          gets
---------
Updating          311144         29648
Inserting          12044         24136
Merging           311040         22913

Any idea what's going on?
Thanks, Paul

August 31, 2006 - 9:22 am UTC

I see I was ambiguous - I was comparing global temporary tables to real tables there and it wasn't really clear.

ops$tkyte%ORA9IR2> create global temporary table gtt ( x varchar2(30) ) on commit delete rows;
Table created.

ops$tkyte%ORA9IR2> create table t ( x varchar2(30) );
Table created.

ops$tkyte%ORA9IR2> insert into gtt select null from all_objects;
30722 rows created.

ops$tkyte%ORA9IR2> insert into t select null from all_objects;
30722 rows created.

ops$tkyte%ORA9IR2> @mystat "redo size"
ops$tkyte%ORA9IR2> set echo off
NAME                 VALUE
redo size           442452

ops$tkyte%ORA9IR2> update gtt set x = rpad('x',30,'x');
30722 rows updated.

ops$tkyte%ORA9IR2> @mystat2
ops$tkyte%ORA9IR2> set echo off
NAME                 VALUE         DIFF
redo size          8981464    8,539,012

ops$tkyte%ORA9IR2> @mystat "redo size"
ops$tkyte%ORA9IR2> set echo off
NAME                 VALUE
redo size          8981464

ops$tkyte%ORA9IR2> update t set x = rpad('x',30,'x');
30722 rows updated.
ops$tkyte%ORA9IR2> @mystat2
ops$tkyte%ORA9IR2> set echo off
NAME                 VALUE         DIFF
redo size         24478372   15,496,908

I'll try to "disambiguate that" in subsequent releases.

Re: Updating a GTT
Paul McKerley, August 31, 2006 - 11:30 am UTC

Thanks for the feedback, Tom. That was very helpful.

Re: Updating a GTT
Paul McKerley, September 09, 2006 - 9:43 pm UTC

Tom: I went back and looked at this and realized that your answer didn't really address my example. I think my example showed that when working with GTTs, copying all your data to a new GTT in order to populate a new column is far more efficient than updating a null column in the original GTT. It looks like your example demonstrates that updating a null column in a GTT is far more efficient than updating a null column in a permanent table. That's true but not really the point. Basically, I'm trying to prove that updating null values in GTTs is worse than inserting into a new GTT. That's counter-intuitive to many people.

Regards, Paul McKerley

Performance Issue
Khurram, January 05, 2007 - 3:10 am UTC

SQL> DISCONNECT
Disconnected from Oracle Database 10g Enterprise Edition Release 10.1.0.2.0 - Production
With the Partitioning, OLAP and Data Mining options
SQL> CONN test2/test2@takaful
Connected.
SQL> SELECT COUNT(*) FROM vw_ins_outstanding
  2  /

  COUNT(*)

vw_ins_outstanding is a complex view which in turn calls another view, which in turn calls another view, and so on; there are 4 levels of views, like vw_A-->vw_B-->vw_C-->vw_D-->TABLES, and each view is composed of db tables with join conditions, as well as combinations of UNION ALL with some other predicates like OR, AND etc.

Tom, all the business logic depends on these views; they are used everywhere in the whole system, so we haven't had the courage to replace them.
One major problem: one of these views, say VW_A or vw_ins_outstanding, is a lower-level view based on other views as I mentioned above (vw_A-->vw_B-->vw_C-->vw_D-->TABLES), and it is used in report predicates like:

SELECT 1 pol_type, IA.CLIENT_CODE, PARTTAKER_NAME PNAME,
       DECODE(NVL(INSURED_NAME,'N'),'N',P.PARTTAKER_NAME,INSURED_NAME) PARTTAKERNAME,
       IA.GENERATE_AGAINST,
       DECODE(INSURED_ADDRESS,NULL,ltrim(P.ADDRESS),INSURED_ADDRESS) ADDRESS,
       P.BUSINESS_CODE, IB.BUSINESS_NAME, IA.POLICY_TYPE_CODE, IA.CERTIFICATE_CODE,
       SUBSTR(IA.ASSORTED_STRING,7,17) POL_NO, NULL END_NO,
       IA.POLICY_DATE, IA.PERIOD_FROM, IA.PERIOD_TO_DATE,
       IA.AGENCY_CODE, IA.SUB_AGENCY_CODE, IA.TAKAFUL_TYPE,
       IA.DOCUMENT_CODE, IA.ASSORTED_CODE assorted_code,
       sumcovered_func(ia.policy_type_code,ia.assorted_code,ia.document_code) sum_covered,
       SUBSTR(ASSORTED_STRING,-18,10) ASS_STRING, SUBSTR(ia.ASSORTED_CODE,1,2) ORD,
       SUBSTR(ASSORTED_STRING,-18,10) MASS_STRING
From   INS_ASSORTED IA, INS_BUSINESS IB, INS_PARTTAKER P,
       (SELECT POLICY_CODE FROM VW_INS_OUTSTANDING  <-----
        WHERE REC_TYPE in ('Due','Rcv')
        GROUP BY POLICY_CODE
HAVING NVL(SUM(DUE_AMOUNT),0)<>0) v1,
(SELECT ASSORTED_CODE FROM VW_INS_OUTSTANDING<-------
WHERE MODE_CODE IN (DECODE(PRINCIPAL_COMPANY,'D','24'),'31','02') GROUP BY ASSORTED_CODE
HAVING SUM(NVL(DUE_AMOUNT,0)) <> 0) v2
WHERE IA.CLIENT_CODE=P.PARTTAKER_CODE
AND P.BUSINESS_CODE=IB.BUSINESS_CODE(+)
AND ia.assorted_code=v1.policy_code
AND ia.assorted_code=v2.assorted_code
AND ia_zone_code=decode(:zone_code,'00',ia_zone_code,:zone_code)
AND ia_branch_code= decode(:branch_code,'00',ia_branch_code,:branch_code)
&POL_COND
AND P.PARTTAKER_CODE = DECODE(:CLIENTCODE,'00',P.PARTTAKER_CODE,:CLIENTCODE)
AND IA.AGENCY_CODE = DECODE(:AGENCYCODE,'00',IA.AGENCY_CODE,:AGENCYCODE)
AND IA.POLICY_DATE BETWEEN TO_CHAR(:PERIODFROM,'DD/MON/YY') AND TO_CHAR(:PERIODTO,'DD/MON/YY')
&COND1
AND IA.POLICY_TYPE_CODE in (SELECT POLICY_TYPE_CODE from ins_policy_types &COND)
&COND2
UNION
This view is taking so much time because at runtime the queries of the other chained views get parsed along with the data. To avoid this chained
parsing I created a GLOBAL TEMP TABLE (tmp_vw_ins_outstanding) with the same structure as vw_ins_outstanding:
CREATE GLOBAL TEMPORARY TABLE tmp_vw_ins_outstanding
(
REC_TYPE VARCHAR2(7),
ASSORTED_CODE VARCHAR2(12),
CLAIM_CODE VARCHAR2(10),
CLAIM_INFO_CODE VARCHAR2(10),
CLASS_CODE VARCHAR2(2),
PRINCIPAL_COMPANY VARCHAR2(1),
INSURANCE_TYPE_CODE VARCHAR2(1),
DUE_AMOUNT NUMBER,
DUE_DATE DATE,
MODE_CODE VARCHAR2(3),
SEQ_NO NUMBER(28),
COINSURANCE_CODE VARCHAR2(8),
POLICY_CODE VARCHAR2(12),
CHEQUE_NO VARCHAR2(20),
CHEQUE_AMOUNT NUMBER,
CHEQUE_DATE DATE,
VOUCHER_NO VARCHAR2(15),
VOUCHER_DATE DATE,
SUP_BY VARCHAR2(20),
PAID_TO_CODE VARCHAR2(8),
CANCEL_DATE DATE,
SUP_DATE DATE,
POLICY_DATE DATE,
ISSUE_DATE DATE,
DOCUMENT_CODE VARCHAR2(2))
ON COMMIT PRESERVE ROWS
/
Then at runtime I populate this table using a bulk-bind collection, which takes only 1 min:
DECLARE
TYPE test1_tab IS TABLE OF vw_ins_outstanding%ROWTYPE;
t_tab test1_tab := test1_tab();
BEGIN
SELECT *
BULK COLLECT INTO t_tab
FROM vw_ins_outstanding;
FORALL i IN t_tab.first .. t_tab.last
INSERT INTO tmp_vw_ins_outstanding VALUES t_tab(i);
COMMIT;
END;
/
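(An editorial aside, not the poster's code: assuming the GTT's column list matches the view's, as the DDL above suggests, the whole collection round-trip can usually be replaced by one set-based statement, which avoids materializing the entire result set in PGA memory - a hedged sketch:)

```
-- Sketch: a single INSERT ... SELECT does the BULK COLLECT + FORALL job
-- in one statement (assumes matching column lists between view and GTT).
INSERT INTO tmp_vw_ins_outstanding
SELECT * FROM vw_ins_outstanding;
COMMIT;
```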
After that I used tmp_vw_ins_outstanding instead of vw_ins_outstanding in my report query, after creating an index on this GTT -
still no luck with performance.
SQL> SELECT COUNT(*) FROM tmp_vw_ins_outstanding
  2  /

  COUNT(*)

SQL> SET AUTOTRACE TRACEONLY
SQL> SELECT POLICY_CODE FROM TMP_VW_INS_OUTSTANDING
  2  WHERE REC_TYPE in ('Due','Rcv')
  3  AND MODE_CODE IN ('24','31','02')
  4  GROUP BY POLICY_CODE
  5  HAVING NVL(SUM(DUE_AMOUNT),0)<>0
  6  /

1157 rows selected.

Execution Plan
----------------------------------------------------------
   0      SELECT STATEMENT Optimizer=ALL_ROWS (Cost=318 Card=100 Bytes
   1    0    FILTER
   2    1      SORT (GROUP BY) (Cost=318 Card=100 Bytes=2200)
   3    2        TABLE ACCESS (FULL) OF 'TMP_VW_INS_OUTSTANDING' (TABLE

Statistics
I hope you will point me in the right direction to solve my problem.
Khurram

size of temporary table
A reader, February 27, 2007 - 2:38 pm UTC

Is there any way a DBA or anyone else (other than the person who ran the query :)) can get an idea of the number of rows in a global temporary table? Currently we have a SQL query in our prod database which has been running for almost 4 hours now:

INSERT INTO tabX
SELECT UPPER( 'BCZ' ), sxe.cpp_primary_id, .....
       smd.user_group_char15 AS SEC_CURRENCY,
       e.base_currency AS PORT_CURRENCY1
FROM   tabA, tabB, ....
WHERE  cond1 and cond2 and ...
and    e.entity_id in (select entity_id from fmc.entity_list_temp) )

fmc.entity_list_temp is a global temp table used in the above query (a global temp table with 'on commit preserve rows'). Is there any way we can get an idea of the size of this global temp table fmc.entity_list_temp, so that we can know how much time the query should take normally? Is there any view similar to dba_extents for global temp tables?

thanks
Apraim

February 27, 2007 - 2:44 pm UTC

ops$tkyte%ORA10GR2> create global temporary table gtt as select * from all_objects where 1=0;
Table created.

ops$tkyte%ORA10GR2> select blocks from v$sort_usage where username = USER;
no rows selected

ops$tkyte%ORA10GR2> insert into gtt select * from all_objects;
50071 rows created.

ops$tkyte%ORA10GR2> select blocks from v$sort_usage where username = USER;
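(An editorial aside on checking this for other sessions: V$SORT_USAGE, also exposed as V$TEMPSEG_USAGE, has a SESSION_ADDR column that joins to V$SESSION.SADDR, and a SEGTYPE column that distinguishes GTT data from sort space - a hedged sketch, assuming access to the V$ views:)

```
-- Sketch: attribute temp-segment space, including GTT contents, to sessions.
select s.sid,
       s.username,
       u.segtype,      -- 'DATA' = global temporary table contents, 'SORT' = sort space
       u.tablespace,
       u.blocks
from   v$sort_usage u, v$session s
where  u.session_addr = s.saddr
order  by u.blocks desc;
```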
size of gtt
A reader, February 27, 2007 - 2:51 pm UTC

Does v$sort_usage show the information only for the current user, or for all users? If it is only for the current user, then that won't give any idea about the size of a GTT used by someone else. Also I assume v$sort_usage includes blocks used for sorting/merging as well, in addition to GTT usage?

thanks
Apraim

February 27, 2007 - 3:09 pm UTC

I used 'where username = USER' - that sort of implies..... it is for all users, you have to filter.

yes, v$sort_usage shows other stuff, but - look at the view, there are many columns in there to help you out (like sql_id in 10g for example)

size of gtt
A reader, February 27, 2007 - 3:16 pm UTC

But in 8i, when I ran the query against v$sort_usage, it was showing only my sessions. That is the reason I asked whether it shows info for all users. Maybe 10g has more info.

thanks
Apraim

Can GTT be a solution for sub-queries using views?
Reader, July 27, 2007 - 12:05 am UTC

Tom - I have a query that uses sub-queries which are populated using function calls. This query takes 2 mins to run. If I comment out the sub-query part of the main query, it returns in 5.6 secs. I found this post while searching in google, which claims to use a GTT as the solution. http://dba.ipbhost.com/index.php?showtopic=4388 Most of the time what this author says and what he writes do not match. So I request you to help me solve this please.
SELECT P.DUR_HRS_DAY AS P_event_hrs_2day,
       (SELECT sum(p1.dur_hrs_day)
          from c_parent_DAY p1   -- This column comes from a view where it is populated using a function
         WHERE p.event_id = p1.event_id
           and trunc(p1.daytime,'MM') = trunc(p.daytime,'MM') ) AS P_event_hrs_mth,
       (SELECT sum(p1.prod_loss_day)
          FROM r_parent_DAY p1   -- Similar columns as mentioned above
         WHERE p.event_id = p1.event_id
           and trunc(p1.daytime,'MM') = trunc(p.daytime,'MM') ) AS parent_loss_mth,
       (SELECT sum(p1.lpg_val_day)
          FROM r_parent_DAY p1
         WHERE p.event_id = p1.event_id
           and trunc(p1.daytime,'MM') = trunc(p.daytime,'MM') ) AS parent_lpg_loss_mth,
       (SELECT sum(c1.dur_hrs_day)
          from c_child_DAY c1
         WHERE c.event_id = c1.event_id
           and trunc(c1.daytime,'MM') = trunc(c.daytime,'MM') ) AS C_event_hrs_mth,
       (select sum(c1.prod_loss_day)
          FROM r_child_DAY c1
         WHERE c.event_id = c1.event_id
           and trunc(c1.daytime,'MM') = trunc(c.daytime,'MM') ) AS child_loss_mth,
       (select sum(c1.lpg_val_day)
          from r_child_DAY c1
         WHERE c.event_id = c1.event_id
           and trunc(c1.daytime,'MM') = trunc(c.daytime,'MM') ) AS child_lpg_loss_mth
FROM   R_PARENT_DAY P, R_CHILD_DAY C
where  P.daytime = (SELECT MAX(P1.daytime) FROM R_PARENT_DAY P1
AND    (select sum(p1.prod_loss_day) from r_parent_DAY p1 where p.event_id = p1.event_id
ORDER BY parent_loss_mth DESC

July 27, 2007 - 2:03 pm UTC

I'd be looking to remove the scalar subqueries altogether.

why did you use them - why not JUST JOIN???

A reader, July 27, 2007 - 5:12 pm UTC

Could you please help me understand with an example. How do I join and re-write this query?
July 27, 2007 - 8:29 pm UTC

for example, you have:

(SELECT sum(c1.dur_hrs_day)
   from c_child_DAY c1
  WHERE c.event_id = c1.event_id
    and trunc(c1.daytime,'MM') = trunc(c.daytime,'MM') ) AS C_event_hrs_mth,
(select sum(c1.prod_loss_day)
   FROM r_child_DAY c1
  WHERE c.event_id = c1.event_id
    and trunc(c1.daytime,'MM') = trunc(c.daytime,'MM') ) AS child_loss_mth,
(select sum(c1.lpg_val_day)
   from r_child_DAY c1
  WHERE c.event_id = c1.event_id
    and trunc(c1.daytime,'MM') = trunc(c.daytime,'MM') ) AS child_lpg_loss_mth

just join to:

[code example lost from the archived page]

by event_id and trunc(daytime,'mm')

A reader, July 28, 2007 - 9:10 am UTC

Thank you Tom. Hope this speeds up the execution. Will try and update you. Regards

A reader, July 30, 2007 - 10:44 am UTC

Tom - The query returns 500 rows but I should get 1 row with 5 columns. Why is "group by" not grouping?

(select sum(dur_hrs_day)            C_event_hrs_mth,
        sum(oil_prod_loss_day_bbls) child_oil_loss_mth,
        sum(lpg_val_day_bbls)       child_lpg_loss_mth,
        event_id,
        trunc(daytime,'mm') daytime
   from rv_ct_PROD_DEFER_child_DAY
  group by event_id, trunc(daytime,'mm') )

Thanks

July 30, 2007 - 5:41 pm UTC

if you want one row, there would be NO group by.

you are getting the sums by event_id and trunc(daytime,'mm') - you must have 500 combinations of that. you are getting precisely what you asked for - sums by EVENT_ID/trunc(DAYTIME,'mm') - and you want to, umm, JOIN to that set by event_id and trunc(daytime,'mm')

look, you have a query like:

[code example lost from the archived page]

I'm suggesting:

[code example lost from the archived page]

furthermore, if you can "push" any predicates into the inline view, DO SO, eg, if you have:

select ..., (select sum(x) ..... )
  from a, ...
 where a.y between (1 and 50)

put the predicate on Y into the inline view as well

[code example lost from the archived page]

A reader, August 01, 2007 - 2:53 pm UTC

Tom - I apologize for taking this on again. But this is the crucial part of our reporting queries that is causing a lot of slowdown. Taking the responsibility to fix this, I am seeking your help. Following your advice I changed the query to look like this:

[code example lost from the archived page]

But I get an error ORA-00904 INVALID IDENTIFIER at the place where I referenced the in-line view columns (iv.<column_name>). Also, could you please show me how to push the last predicate above the ORDER BY clause into the in-line view? All it does is check whether the value is greater than 100. I tried doing it but got a variety of errors. Appreciate your time and thoughts..

August 05, 2007 - 10:08 am UTC

well, I don't have your tables, so I get even worse errors. eg: one needs an example to get syntax right.

A reader, August 01, 2007 - 4:39 pm UTC

oops - I resolved half the issue myself. Now the problem is predicates and the total cost. After using the in-line views the cost has gone up to negative numbers. Is the CBO going crazy on the query? Can you help me get the cost and the execution time down please?
I was shocked to see the negative cost because I never expected it to be so.

GTT issue
Deepak, October 05, 2007 - 7:36 am UTC

Hi Tom, need your help on one issue pertaining to Global Temporary Tables (GTT). I have created a GTT with the "on commit preserve rows" option. The purpose of creating the GTT here is to insert distinct values into it from a table. The source table has an index on the columns for which we need to compute the distinct values. In my case multiple sessions keep doing the same operation simultaneously on the GTT. My insert statement is as follows:

insert into my_gtt select distinct col1, col2 from t1;

The above insert operation is waiting on "db file sequential read" wait events for ever. When I executed the query part of the above insert statement (select distinct col1,col2 from t1;) the result came in less than a second. I am unable to really understand what is making the insert operation slow. When I examined the v$lock view I found that all the sessions have acquired TO & TM locks on the GTT and no one is blocking the other. One thing I have observed is, when only one session is inserting into the GTT, the insertion completes quite fast. Please help me on this.

Memory usage
Andrew Markiewicz, January 21, 2008 - 4:22 pm UTC

Tom. Since the memory comes out of the buffer cache, is there any concern about a gtt using too many resources? For instance a sort operation would temporarily use memory and/or some temp tablespace, but it should be released after the sort. But if an app loads a gtt and leaves it for the session, and if many sessions do the same, there could be memory tied up but not released until the sessions end (if the gtts are session based and not transaction based).

Also, I ran the "select blocks from v$sort_usage" after inserting 1 row into a gtt with 1 column. The gtt uses 128 blocks, block size 8192, so 1M of memory for the 1 row.
[code example lost from the archived page]

The 128 blocks is the initial extent size of the TEMP tablespace. I was thinking of setting up a separate temp tablespace with a small initial extent so the gtts would not require as much memory per use (especially if they do not release the memory for a while). Should I even bother with this? It's not causing an issue now with our gtt usage, but we may have some potential designs that might use them more extensively. I just don't want to create the gtt, install the app and watch the available buffer cache shrink to 0 while the db is constantly swapping the buffer cache. Unless I'm wrong in my understanding of how gtts work.

Thanks
Andrew

January 21, 2008 - 10:21 pm UTC

... Since the memory comes out of the buffer cache, is there any concern about a gtt using too many resources? ...

the buffer cache is quite capable of taking care of itself, it won't let too much of one thing in there. It'll be cached like anything else is cached. It is flushable - releasable - discardable, it is not "special"

GTT insert - multiple sessions - high CPU
Reene, March 03, 2008 - 2:14 am UTC

[question text lost from the archived page]

March 03, 2008 - 7:06 am UTC

... multiple users insert the data (using an oracle standard program) and we have been observing that this is causing a spike in CPU ...

we have multiple users doing work - THAT is causing the cpu to be used.
Something else is afoot here if you see what you say you are seeing - suggestion: whittle away at this until you find the bit that makes it serial (eg: there is a bit of application logic you coded somewhere doing that - if it is happening)
enable tracing including waits:

alter session set events '10046 trace name context forever, level 12';

get the insert statement ready to go in both sessions. As simultaneously as possible, run them, exit sqlplus and post the relevant bit of the TKPROF (for the insert statement). What you'll discover is that both run at the same time, and we'll see how long they take, the work they perform and what, if anything, they wait on.

Usage of temp tables with mutating tables in a trigger
j.sieben, March 14, 2008 - 9:20 am UTC

Hi Tom, hopefully not too far off topic, I'd like to ask whether you feel that using temporary tables in the context of mutating table problems might be efficient. I recall that you advise using a PL/SQL table to collect information on rows to be changed within a "for each row" trigger, in order to compute the changes on the underlying table afterwards. Is there any reason against using a temporary table to collect this information and make it available to the after update trigger? I wonder whether the database might benefit from working with SQL directly against this temporary table as opposed to looping over the PL/SQL table?

March 15, 2008 - 9:42 am UTC

... I wonder whether the database might benefit from working with SQL directly against this temporary table as opposed to looping over the PL/SQL table? ...

totally depends on your needs, totally. If you need to perform SQL against the set - sure, a global temporary table might make sense (don't forget to clean it up, just like the plsql table)

GTT - how fetch order works?
Atul Gupta, April 23, 2009 - 4:18 am UTC

Hello Tom, I've forms applications in which I have to pass data from one form to another form; I use a global temporary table for it.
Say the table name is comments, with columns comm_short and comm_long, and I insert data like the following using the forms application's called form (child form):

comm_short  comm_long
1           11
2           22
3           33
4           44

I then fetch this data in the calling form (main form) using (for i in (select * from COMMENT)) inside the KEY COMMIT trigger of the calling form and commit the data. The data inserts into the database in the order I entered it in the called (child) form, and this is what I want... but is it 100% sure that in the above scenario the fetch order will be the same as the insert order?? Your confirmation will be the landmark for deciding whether we should add a sequence number for row identification. Many Thanks. br atul

April 27, 2009 - 10:41 am UTC

... but is it 100% sure that in above scenario the fetching order will be the same order as that of inserting ?? ...

of course not, rows are never ever assured to come out in the order of insertion. The ONLY WAY - repeat ONLY WAY - you can get rows in some order is to use ORDER BY - period http://asktom.oracle.com/Misc/order-in-court.html You need another column to order by: add a third column "seq" and put the numbers 1, 2, 3, 4 in there (unless comm_short is already "sortable")

Use temp tables with J2EE / session pooling
Gregor, July 01, 2009 - 12:11 pm UTC

Hi Tom, I read various articles on temporary tables on AskTom and also the Oracle Guide on creating temp tables ( ) My Use Case is the same as in the documentation: "Temporary tables are useful in applications where a result set is to be buffered (temporarily persisted)..." My problem:
My question:
July 06, 2009 - 7:07 pm UTC

forget j2ee - it is not relevant at all. It is just a fla (four letter acronym). Your concern is using global temporary tables with a connection pool. IF the developers code like this:
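(The listing Tom refers to was not preserved in this copy of the page; below is a hedged sketch of the pattern he presumably means. The table and column names are invented; the GTT would have been created once, at install time.)

```sql
-- Within ONE connection-pool grab: populate the GTT, read it, let it go.
-- scratch_results is hypothetical, created at install time as:
--   CREATE GLOBAL TEMPORARY TABLE scratch_results
--   ( id NUMBER, val VARCHAR2(30) ) ON COMMIT DELETE ROWS;

INSERT INTO scratch_results (id, val)
  SELECT empno, ename FROM emp WHERE deptno = 10;

SELECT * FROM scratch_results;  -- build the page/report from this

COMMIT;  -- ON COMMIT DELETE ROWS empties the GTT; nothing to clean up
         -- before the connection goes back to the pool
```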
and never care about that temp table again - you are fine. If they use the temp table during the generation of a single web page - they are fine (as long as they grab the connection ONCE). If they need the global temporary table to persist from page to page - well, they are out of luck with a connection pool. It will not work; they will have to use a persistent table and manage the data (clean it up when sessions expire, etc) themselves.

.... We just need temporary tables whose content is visible to all database sessions. Is that not possible? ....

think about what was just asked for there.... hmmmm. That would be a permanent table, and since ONLY YOU know when the data is no longer necessary - only YOU can manage that table (remove data when it isn't needed anymore)

"Non-durable" tables
Gregor, July 07, 2009 - 9:57 am UTC

Hi Tom, thanks for your comments! I have one remark and one clarification of my question, because my primary concern is avoiding redo logs for the hit buffer use case. 1. I distinguish between visibility of the elements in a temp table and lifetime of the elements; when the database session is killed the elements in the temp table of that session are gone, fine! But it's a pity that the visibility is also restricted. Ok, that's life... 2. In our application, of course we know how long we need the data in the "temp" table (which in fact is a hit buffer) and we also delete the data when the application's session is killed or when our server starts up. Currently we in fact use a permanent table for that. My concern is about redo log creation, which is not needed in this case. When you think about thousands of users performing queries in our application, with thousands of hits that need to be buffered, we generate a huge amount of unneeded redo. Technically, I need a table which does no logging (except undo and its redo), but which is not "durable" in terms of ACID durability.
But the NOLOGGING option has no effect on insert/update/delete statements, and even a direct path insert via /*+ append */ is not possible here due to the exclusive table lock it takes (as far as I know). I would need a "force nologging" for the table! Am I missing anything? It seems I have no chance to realize a hit buffer in conjunction with connection pooling in 2009 (though there are nice options like FCF in Oracle RAC optimized for connection pools).

July 07, 2009 - 6:39 pm UTC

... because my primary concern is avoiding redo logs for the hit buffer use case. ...

you cannot. Even for global temporary tables, you cannot - they generate undo, which in turn generates redo. It happens. there is quite simply no way to ever avoid it. index updates will always generate gobs of undo and redo (temporary and permanent). conventional path inserts will always as well. a direct path insert can bypass undo, and if you wish, redo generation for the TABLE data - but never for the index data (temporary or not). People constantly and continuously worry about hypothetical situations. Did you know that your session state in an APEX application is stored in a persistent table - millions of deletes and inserts are performed on this single table every day on my small site. Billions of times per day on something like metalink. And it is a 'real table'.

Global Temporary Tables (GTT)
reader, July 23, 2009 - 10:42 am UTC

Hi Tom, I have a general question for you regarding GTTs. We have a feed that loads a number of temp tables and a query that selects from them. We have noticed that with exactly the same feed and exactly the same data, different execution plans are generated - either nested loops or MERGE JOIN CARTESIAN. When we get the latter, the job hangs. Our dynamic sampling is set to the default of two - would it be an idea to set higher dynamic sampling? Does the execution plan generated depend on the load on the box?
i.e. if there is a high load, does the optimizer look at fewer plans before deciding which is best? Thanks

Temporary Tables
reader, July 26, 2009 - 12:20 pm UTC

Hi Tom, Thanks for your reply. I don't think it is being hard parsed, as the parameters to the sql are PL/SQL variables (so they are bind variables). Here is the query. It is inside a procedure in a package body.

INSERT INTO TMP_RISK_WEIGHT_MAP( class_id, ctry_inc_cd, derivative_ind, maturity_band)
SELECT /*+ dynamic_sampling(5) */ DISTINCT
       CCDB.class_id
      ,CCDB.ctry_inc_cd
      ,'N'
      ,fn_fac_maturity_band(FAC.last_review_dt, FAC.renewal_dt, FAC.maturity_dt, FAC.approval_dt)
FROM   SRC_FAC_DET FAC
      ,TMP_CCDB_MAP CCDB
WHERE  FAC.cob_dt = pi_cob_dt
AND    FAC.run_id = pi_run_id
AND    EXISTS (SELECT row_id FROM TMP_MAP_ROWIDS
               WHERE row_id = fac.ROWID
               AND   cob_dt = fac.cob_dt
               AND   run_id = fac.run_id
               AND   gcds_type_cd = 'F')
AND    NVL(FAC.cpty_type_cd, g_default_string_key) = CCDB.id_type_cd
AND    NVL(FAC.cpty_cd, g_default_string_key) = CCDB.id_value_cd
AND    NVL(FAC.cpty_ccdb_id, g_default_number_key) = CCDB.pushed_ccdb_id
AND    CCDB.ccdb_id IS NOT NULL;

So, pi_run_id and pi_cob_dt are parameters of the procedure. I have added the dynamic sampling hint (at level 5). The GTTs are TMP_CCDB_MAP and TMP_MAP_ROWIDS. If it is soft parsed, and the last guy loaded 1 record into one of the GTTs and I loaded 100,000 - how can I get the last plan? I thought the execution plan was "independent" of someone else's session?

July 26, 2009 - 9:35 pm UTC

... I thought the execution plan is "independent" of someone elses session? ...

no, that is what shared sql is all about - one hard parse generated the plan - everyone else uses the SAME plan over and over and over again. Presuming you do not run this query frequently - you might consider using dynamic sql so that you can use a unique sql statement (put a comment in it, for example, that includes a date/timestamp - that'll make the query unique).
Then the dynamic sampling will happen on every execution - instead of just once.

Temporary Tables
reader, July 27, 2009 - 5:26 am UTC

Hi Tom, Thanks for your explanation! It makes more sense now - we use temp tables all over the place. Do you have an example in one of your books where you show this happening (by example)?

July 27, 2009 - 6:04 am UTC

show what is happening? that sql is shared? that one hard parse will generate a plan and the subsequent soft parses will reuse the sql? That I demonstrate on almost every page of this site. not sure what you mean for this one?

Temporary Tables
reader, July 27, 2009 - 7:23 am UTC

Hi Tom, I don't understand how putting a comment in the SQL makes the code unique in the shared pool. Should the comment be put like this - 01/01/9999 ?

INSERT /* 01/01/9999 */ INTO ....table_name( col1 ,col2 ,col3 ,col4)
SELECT /*+ dynamic_sampling(5) */ DISTINCT

Surely, user1 running this and user2 running this will both generate the same SQL in the shared pool? Thanks

July 27, 2009 - 7:35 pm UTC

with date and timestamp, 01/01/2009 12:01:05 - so as to make the text UNIQUE. My assumption/presumption and PREREQUISITE for doing this (as written above) is that the sql is infrequently executed, so the date/timestamps could never be the same. sql is found and reused in the shared pool based on a hash of the original sql statement - change just ONE BYTE of the text and they are different sql's.

select * from emp;
Select * from emp;

those are two different sql's to the shared pool.

Temporary Tables
reader, July 27, 2009 - 7:25 am UTC

Sorry Tom - I have just read your reply more carefully. You mention dynamic SQL - so I could generate different comments. Thanks

Global Temp Tables
Reader, July 27, 2009 - 5:22 pm UTC

What options are there if this piece of SQL is called frequently? Would dynamic sampling and a hint to use Nested Loops be enough, or are there other options?
July 27, 2009 - 8:43 pm UTC

you would probably set it up so that you have a "small, med and large" version. That is - you know how many rows you put into the global temporary table - you have the power (the knowledge) to categorize those row counts into "small, med and large" values. in your code, you could:

open cursor for 'select /*+ ' || small_med_or_large_flag || ' */ ..... from ...';

You would open one of three cursors - and they would have the small plan or the medium plan or the large plan. see https://www.oracle.com/technetwork/issue-archive/2014/14-sep/o54asktom-2279184.html and look for Query Plans with Temporary Tables

GTT's
reader, July 28, 2009 - 9:59 am UTC

Hi Tom, Thanks for the idea - another idea I had was to delete and lock the table stats for the TEMP tables. Then, when the insert statement runs, it will automatically sample the data. Would this be a viable alternative? Could we use dbms_stats.set_table_stats for each of the temp tables (after loading records)? Would the stats be available for a particular concurrent session?

July 28, 2009 - 8:49 pm UTC

why would it automatically sample the data? It would only do that on a hard parse. So, you would be right back to square one - one size fits all. Unless you combine this with dynamic sql so that you have a hard parse with different sizes, this doesn't automatically do anything. If you use set table stats, you would need to serialize operations, else my set table stats will overwrite your set table stats.
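The small/med/large idea above can be sketched in PL/SQL roughly like this. This is a hedged illustration, not Tom's original code: the table my_gtt, the join to t, and the row-count thresholds are all invented. The point is only that three distinct SQL texts produce three distinct shared-pool entries, and therefore up to three distinct plans:

```sql
DECLARE
  l_cnt   NUMBER;
  l_flag  VARCHAR2(10);
  l_cur   SYS_REFCURSOR;
BEGIN
  -- how big is MY session's copy of the GTT?
  SELECT COUNT(*) INTO l_cnt FROM my_gtt;

  l_flag := CASE
              WHEN l_cnt < 100   THEN 'SMALL'
              WHEN l_cnt < 10000 THEN 'MEDIUM'
              ELSE                    'LARGE'
            END;

  -- the flag becomes part of the SQL text, so SMALL/MEDIUM/LARGE each get
  -- their own hard parse and plan; real hints could be substituted per size
  OPEN l_cur FOR
    'select /* ' || l_flag || ' */ t.* from my_gtt g, t where t.id = g.id';

  -- ... fetch from l_cur as usual ...
  CLOSE l_cur;
END;
/
```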
Kill process, restart job, hangs again (same plan); kill, restart, and then it works on the third attempt. No changes to the SQL, no change to the data. There are other similar processes running the same code on the box that, depending on the data, could be successful. Thanks for clarifying ...

August 04, 2009 - 12:55 pm UTC

... It's hard parsed the first time and soft parsed thereafter. ...

Maybe. It's hard parsed - stop. When I program, it is parsed once and then executed over and over and over and over and over and over again and again (eg: my parses are tiny compared to my executes)

GTT and Local App Ctx
Parthiban Nagarajan, August 20, 2009 - 2:27 am UTC

Hi Tom REM I welcome you to India ( SANGAM'09 ;)
» In a disconnected, stateless architecture, we cannot use GTTs. Am I right ... ?
» In a two-tier environment, how do a GTT and Local Application Contexts differ ... ?

August 24, 2009 - 5:08 pm UTC

» In a disconnected, stateless architecture, we cannot use GTTs. Am I right

no, you are not right. You can use them in a single connection pool grab - not across connection pool grabs. eg: a single connection pool grab that populates the temp table, uses it, and then never relies on it again is perfectly fine. I do not even know how to compare global temporary tables with application contexts?

GTT
sam, February 08, 2010 - 6:16 pm UTC

February 15, 2010 - 2:05 pm UTC

no, not really, set up a test case - and you'll see. I did. I created 50,000 stores, 500,000 sales records. I rewrote their overly complex query:
ORA-00904: "NAME": invalid identifier8 and tested it (the rewrite) versus theirs, versus the global temporary table situation. Guess which took the most CPU to run? And did the most logical IO's? And generated the most redo? I most wholeheartedly disagree with the sarcasm regarding the "university approach". If one knows SQL, one knows SQL. If one does not - a two table join looks complex. To me, the global temporary table is truly hard to understand, you have to look at a total of 4 different SQL statements - understand that they are all related - figure them out one by one, put them together as a whole in your head (to make sense of them) and then figure what what they do. Versus a single sql statement, that has everything you need right there. What IS MISSING from the example is that in real life, the SQL statement would be preceded with: is there a way to use alias in analaytic function SQL> ed Wrote file afiedt.buf 1 select ename name, deptno, sal, sum(sal) over (order by name) 2 from (select ename, deptno, sal from emp 3 union all 4 select ename name, deptno, sal from emp) 5* order by name SQL> / select ename name, deptno, sal, sum(sal) over (order by name) ERROR at line 1:
ORA-00904: "NAME": invalid identifier9 Now, all of a sudden, it aint' so hard to understand is it? If you do not comment this suggested code:
it too would be so much gibberish, until you reverse engineered it.

Global temporary tables deleted on autocommit
Kedar, March 11, 2010 - 1:29 am UTC

My oracle stored procedure contains a select SQL statement called from SQL server. When I execute it in the analyzer it returns the records, but from the ASP script it returns a blank recordset. Is the GTT being deleted in this instance? If yes, is there any solution or setting for that?

March 11, 2010 - 8:30 am UTC

autocommit - you are using AUTOCOMMIT???? stop doing that, that is the most childlike setting possible - I still cannot believe it is the microsoft default. Disable autocommit, end your transactions when transactions should be ended, not until then.

GTT
A reader, March 12, 2010 - 10:33 am UTC

some more help
V, September 14, 2010 - 1:31 pm UTC

Dear Tom, Good day to you. Can you please throw some light on why GTTs have pctfree 10 and pctused 40, and why these can't be changed? Also, can you explain how blocks are used by a GTT and the effect of pctfree 10 and pctused 40 on space usage? Regards,

September 15, 2010 - 7:34 am UTC

space is managed differently for global temporary tables (gtt), there are no freelists (there is a single user in effect, each session gets its *own* gtt) - and pctused should not be even remotely considered, as you would/should be using automatic segment space management - whereby pctused is meaningless no matter what it is set to.

Thanks for taking your time on this.
A reader, September 17, 2010 - 2:58 am UTC

Dear Tom, thanks a lot for taking your time on this question.
Thanks a lot for your help on this. Regards, September 17, 2010 - 7:12 am UTC
how can data loaded into a gtt by sqlldr (which creates a session, loads, and ends the session) be useful in any sense??? I don't think sqlldr is loading a gtt here. There would NEVER be any data in the gtt, since once sqlldr terminated its session the data would disappear.

thanks for your help
V, September 17, 2010 - 8:18 am UTC

Dear Tom, Thanks for your help and your time to answer my question. I am sorry, I mixed things up - the data in the gtt is not populated by sql*loader, my mistake. Regards, V

HWM issue and global temp tables
Rohan, September 21, 2010 - 6:55 pm UTC

Tom, We use lots of global temp tables but we often encounter an HWM issue. What would be the best way to reset the HWM? First of all, why would a GTT have an HWM issue, as it's temporary? We use 10g Rel 2.

September 23, 2010 - 9:45 am UTC

what "HWM" issue - the HWM is a 'fact', not an issue. who said a global temporary table has a high water mark? You are the first one to use that term on this page. What "HWM issue" do you have with a global temporary table - those tables are allocated session by session (your gtt always starts at "zero" for every session you create - every session has its OWN segment).

GTT Issue
Muralidhar, January 27, 2011 - 5:20 am UTC

Hello Tom, One user wrote about a "GTT Issue"; I am facing the same thing. I'm using execute immediate on dynamic sql which contains a GTT insertion. The GTT doesn't have any index. I used on commit preserve rows for the GTT. It was working for 3 months, and suddenly stopped inserting into the GTT.

February 01, 2011 - 3:11 pm UTC

no it didn't, something else is happening. give us an example. We don't "suddenly stop inserting" - it doesn't work that way. you are doing something to cause zero rows to be inserted, or you are truncating the table, or something.
Size of GTT in terms of ROWS or BYTES
Rajorshi Kumar Sen, November 23, 2011 - 12:14 pm UTC

Hi Tom, Part of my question has already been addressed - we can estimate the size of a global temporary table from v$sort_usage - but what if: 1. there is more than 1 GTT under a schema? It seems v$sort_usage does not have a column by which we can filter out a particular GTT. 2. if a GTT is being used by multiple users, how do we find the total size of the GTT? 3. how do we know the total number of rows residing in the GTT, irrespective of which user is storing records in it? Thanks in advance for your answers. regards ---Raj

November 23, 2011 - 12:32 pm UTC
I'm very curious what you would do with this data?

Can't be represented as straight up SQL
Kevin Ryan (big Fan), December 02, 2011 - 2:37 pm UTC

Hi Tom, I saw your statement ~ 'anything that uses a temp table can be done with regular SQL'. My requirement is for a search that uses multiple keywords. The keywords are sent as a single string, comma delimited. I break them into an array. Keyword array - loop opened... I select (as insert into GTT) the PK of every record whose fields contain my first search term. I proceed through the list of search terms doing the same. Loop ended... I select the keys (having count = # of search terms) and populate my cursor based on the list of PK values provided. I would prefer to do this type of operation in one swoop if possible, as my use of the Global Temporary Table seems to be causing "java.sql.SQLException: ORA-01410: invalid ROWID" errors. Thoughts?

December 06, 2011 - 11:08 am UTC

Temp Tables
Pavan, February 08, 2012 - 8:48 am UTC

Hi Tom, I am facing a performance issue because of the below scenario. Please suggest whether I should be using temporary tables or not. I have a procedure that gets values from the result set of a join of 2 views and inserts them into a table. This procedure is suffering from a performance issue because of this. I am planning to use temporary tables. Will this solve my performance problem? Thanks, Pavan

February 08, 2012 - 12:14 pm UTC

it won't do anything but make it take longer in all probability. You'll have added extra steps (populate - write - the temp tables, then read them again). You need to find out where you are spending your time - and look to that to 'fix' this.
Likely: it will be query tuning - looking at the 2 views and seeing if "we really need a 3rd view that doesn't involve the first two views"

Global Temporary Table VS Physical Table
Pavan, April 10, 2012 - 7:22 am UTC

Hi Tom, I have a scenario wherein my procedure returns a refcursor. I am not sure of the number of records it would return (it's based on the input parameter). So I thought of using a pagination concept here: I would return 500 records at a time. We populate the records from a union of multiple SQL queries. So I thought I would first get the entire result set into a physical table and then query this physical table, returning 500 records at a time. I would TRUNCATE this physical table after it returns the entire resultset. I am not really sure whether using a GLOBAL TEMPORARY TABLE would be of any use in this case. Please suggest. Thanks, Pavan

April 10, 2012 - 7:51 am UTC

why not just return a ref cursor - period. no temporary tables, no extra real tables, just return a cursor. The client controls how many records to fetch from it. The act of opening it does *no real work* in retrieving the data, the fetching does.

Pranay, July 31, 2012 - 2:14 am UTC

Dear Tom, I have a requirement where the DBA will run my update query on the production database, which will update certain values if they are null. He is insisting on a rollback script too for this. We thought of creating a GTT, storing all the records that are going to be updated in that table, and later, in the rollback script, updating those particular records back to null. Do you think this can be achieved without using a GTT?

July 31, 2012 - 12:24 pm UTC

here is your rollback script: rollback; ????? I'm confused - if they run an update and it fails, just rollback. If they want the ability to undo this change for some period of time, tell them to implement a guaranteed undo retention setting - then you can use flashback table anytime during that period to restore it.
Otherwise - tell the dba: hey, you already have backups, you could just do a point in time restore of this tablespace in a temporary instance and retrieve the data if we need it. We don't need to write code for this.

Temp Table Redo
Doug, August 06, 2012 - 12:56 am UTC

Hi Tom, I created 2 tables, 1 TEMP and 1 "Permanent".

Session 1
Session 2
Then ran an insert with Autotrace on
Session 2
The difference between the 2 was surprising. I am not sure why there were 20 db block gets for the Temp table as opposed to the Perm table, but the big surprise was 0 redo size for the perm table and 632 redo size for the temp table. I expected to see more redo for the perm table, given that I believed no redo is generated for temp tables, just the rollback segment data a temp table generates. I certainly did not expect to see zero redo for session 2. So I inserted a few more rows, and the stats are almost the same. Would love to hear what's going on - why I had 0 redo for the 1st insert into the perm table, and why redo for inserts into the perm table is the same size as inserting into the temp table. Regards, Doug

August 17, 2012 - 11:59 am UTC

NEVER EVER NEVER NEVER as in DO NOT EVER use sys or system, they are ours, they are not for your use. use your OWN ACCOUNTS. if you do, you'll see something realistic:
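(The listings from this test were not preserved in this copy. A hedged reconstruction of the kind of comparison Doug describes, run from an ordinary account, might look like the following; the table names are invented and the exact block-get and redo-size numbers will vary by version and configuration:)

```sql
-- Compare redo generated by a GTT insert vs. a heap-table insert
-- (run from a regular user, NOT sys/system).
CREATE GLOBAL TEMPORARY TABLE t_temp (x NUMBER, y CHAR(100))
  ON COMMIT PRESERVE ROWS;
CREATE TABLE t_perm (x NUMBER, y CHAR(100));

SET AUTOTRACE TRACEONLY STATISTICS   -- SQL*Plus setting

INSERT INTO t_temp SELECT rownum, 'x' FROM all_objects WHERE rownum <= 1000;
INSERT INTO t_perm SELECT rownum, 'x' FROM all_objects WHERE rownum <= 1000;

-- Compare the "redo size" statistic of the two runs: the GTT insert should
-- report noticeably less redo (only the redo protecting its undo), while
-- the heap-table insert reports the full table redo as well.
```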
sys and system are special, magical, not to be used ever.

Global Temporary table Performance
Srini, September 18, 2012 - 10:32 pm UTC

Hi Tom, I read your "Expert One on One Oracle" and stumbled on the following line: "Just to drive home the point, the goal here is to not create tables in your stored procedures at runtime. That is not the proper way to do this in Oracle. DDL is an expensive operation, we want to avoid doing that at runtime. The temporary tables for an application should be created during the application installation - never at run-time." In our project the OLTP application creates a global temporary table for every logged in user, populates it, and drops it after reading the populated data. I flagged this as a NOT recommended approach and suggested they create the table once and use it multiple times. The developer, however, disagreed with this and said creating a table only takes milliseconds and will not create a performance issue at runtime. I tend to disagree with him, but I need to prove that this approach (creating tables per login) is not correct. How can this be proved?

September 20, 2012 - 5:40 am UTC

ask him this: why would you do something millions of times you only have to do once? what happens WHEN your process crashes AFTER creating it, but BEFORE dropping it? where is your code to clean up orphan tables? why would you do something millions of times you only have to do once? what happens when two people create a table at the same time - or dozens? Have you tested the scalability? why would you do something millions of times you only have to do once? where are the stored procedures and views you have created that use these objects - oh wait, you cannot create stored procedures and views against objects that do not exist. why would you do something millions of times you only have to do once? do you understand how to program efficiently in Oracle, or did you stop after reading some sql server documentation?
why would you do something millions of times you only have to do once? Can you tell which point I think is the most obvious - and the silliest thing they are doing? tell you what, revoke create table from this schema - an application schema, for security reasons, shouldn't be able to create tables at runtime in production. It is not a privilege they should even have. Heck, the developer schema - where they put their application (their code, their views) - should NOT have create table; they shouldn't be able to create a table in the first place. It sure looks like more than a few milliseconds, and it sure looks like it used a few more latches (which is a fancy name for a lock, a serialization device, a scalability inhibitor). do they know that there are HUNDREDS (a little short of a thousand) sql statements executed for a create/drop pair????? this is a no brainer. I don't get why a "developer" would do something over and over when.... they can do it once, have it be done, have it be documented, have it be managed.... One way seems easy; the other way seems way hard, way unreliable, way "not too performant", way over privileged and so on.
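The create-once approach argued for above amounts to something like this sketch (names invented for illustration): the DDL happens a single time at install, and at run time every session simply uses the one definition, automatically seeing only its own rows.

```sql
-- Done ONCE, at application install time:
CREATE GLOBAL TEMPORARY TABLE app_work_area
( session_key NUMBER,
  payload     VARCHAR2(4000)
) ON COMMIT DELETE ROWS;   -- or ON COMMIT PRESERVE ROWS for session lifetime

-- At run time, every session just uses it - no DDL, no cleanup code,
-- no orphan tables, no create/drop latch contention:
INSERT INTO app_work_area VALUES (1, 'scratch data');
SELECT * FROM app_work_area;   -- each session sees only its own rows
```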