My third visit to the annual UKOUG conference in Birmingham started all wrong. At Schiphol Airport, the usual luggage check routine took place: laptop out of the suitcase; wallet, keys and belt apart; toothpaste apart. And afterwards putting everything back in. But I forgot to close the wheeled suitcase, and when I put it on the ground, my MacBook Pro fell out. A quick inspection revealed that it still worked, but the screen stayed black. When I arrived at the hotel, I also noticed that my international power adapter set included everything except the British one. The hotel didn't have a spare and the local store didn't sell adapters. Fortunately, Roel Hartman offered his MacBook for my presentation on Wednesday, and I could borrow a British power adapter from Luc Bors, who was leaving Birmingham on Monday afternoon. And when I told the story to Hans Forbrich, he offered me a set as well. And then I learned that this year's speakers' gift was .... an international power adapter set.
So, back to the conference itself. Again, it was full of excellent presentations. These were the presentations I chose to hear:
Monday:
Opening Keynote by Dermot O'Kelly and Andrew Sutherland
Keynote by Tom Kyte: Oracle's Latest Generation of Database Technology
David Peake: Oracle APEX 4.2 Unplugged
Dimitri Gielis: Moving to the APEX Listener
Hilary Farrell: RESTful Web Services in Oracle Application Express 4.2
Roel Hartman: Pump Up the Volume! The APEX Data Loader Inside Out
Martin Corry with funny anecdotes about rugby and team play.
Tuesday:
Anthony Rayner: Build a Great User Experience with Oracle Application Express
David Peake: Deploying and Developing Application Express with Oracle Pluggable Databases
John Scott: Apex Error Handling Enhancements
Tanel Poder: Exadata Performance Method
James Morle: Building a Winning Oracle Database Architecture
John Scott: Oracle APEX: Websockets (or When Push Comes to Shove)
Aino Andriessen: Deploy with Joy: Using Hudson to Build and Deploy your ADF Fusion Application
Tom Kyte: What's New in Oracle Database Application Development
Wednesday:
Me: Professional Software Development Using APEX
Tony Hasler: The MODEL Clause Explained
Anthony Rayner: Building Mobile Web Applications with Oracle Application Express
Paul Broughton: APEX: Why Not Google It?
Although most presentations were very good, the one that stood out for me was John Scott's Websockets presentation. With illustrative and very entertaining demos, he not only showed how Websockets work, but especially what's possible if you let your imagination run wild. If you ever get the chance to see this presentation, do yourself a favour and go.
My own presentation was on Wednesday morning, called "Professional Software Development Using APEX". It is a talk about how to do version control, parallel development, one-step builds, daily builds and unit tests with APEX. It was in a small room, Hall 7b, but well attended with 40 to 50 people. Using Roel's MacBook Air, the presentation itself went fine, but I was not entirely satisfied. Although I have some experience with presenting, this time I was more nervous than usual. Probably because of the changes I had made the weekend before, which I couldn't rehearse properly without access to my MacBook Pro. And it showed. All presenters talk about a topic they are passionate about, but you need to get that passion across, and nerves do exactly the opposite. Usually my presentations score a tiny bit above the conference average; this time mine scored a bit below. Still good (above 4), by the way, especially considering the other amazing presentations out there. The benefit is that I think I've learned a lot more from this experience than usual, and I'm looking forward to improving on the next occasion.
On the social front, I attended the ACE dinner on Sunday evening, organized by Debra Lilley (thanks Debra!). I had some nice conversations at the dinner table with Sten Vesterli, Michael Abbey, Killian Evers and Piet de Visser, among others.
Besides the presentations, one of the benefits of such a conference is that you get to meet a lot of Oracle friends again. And you meet some new ones. This year, for example, I had the pleasure of meeting Timo Raitalaakso, also known as "rafu", a Finnish SQL guru, who came up to me at the end of my own session. Together with Tuomas Pystynen and Jacco Landlust, we were on the same plane back home.
After the conference, I took my MacBook Pro in for repair and got it back last weekend. So if you are wondering why I waited almost three weeks to write this post: that's the reason. Next year, the conference will be split up to create an even more technology-focussed event in Manchester. Hopefully I'll get an abstract accepted again. For this year, a big "thank you" to the UKOUG team for organizing a great event, developing a good event app and even responding to tweets.
Saturday, December 22, 2012
Monday, November 12, 2012
Ciber knowledge session November 28
On Wednesday evening November 28, my colleague Marcel Hoefs and I will each give a one-hour knowledge session at Ciber Nieuwegein. What's new is that the knowledge session is open not just to Ciber colleagues, but to anybody who would like to attend. Both sessions will be held in Dutch; here is an English rendering of Marcel's invitation text:

I'd like to invite you to the second Oracle knowledge session of this year, on Wednesday November 28 at the Ciber office in Nieuwegein, starting at 17:30.

Rob van Wijk: Professional software development with APEX

In the Oracle SL we have set up an APEXSoFa. In this session you'll hear how we configured the database and APEX together with Subversion, Hudson, APEXExport, APEXExportSplitter and SQL*Developer's UtUtil. With this setup we do software configuration management, rebuild the entire application daily in a single step, develop in parallel in our own APEX workspaces and database schemas, integrate our work continuously, and integrate component tests into this process.

Marcel Hoefs: Anydata

Ever looked for an Oracle datatype that can store anything without losing the characteristics of the stored data? Then ANYDATA is the self-describing datatype available out of the box in the Oracle database for exactly this. In my presentation I'll go a bit deeper into the possibilities and impossibilities of this datatype, along with some practical experience and a use case.

Because of the sandwiches, please register by Tuesday November 27 at the latest with replace('rob van wijk',' ','.') || '@ciber.com'
Saturday, September 15, 2012
Keep clause
You may have seen an aggregate function like this in SQL queries:
max(value) keep (dense_rank first order by mydate)

or this analytic variant:
max(value) keep (dense_rank last order by mydate) over (partition by relation_nr)

Unfortunately, when you start searching for the "keep" clause, you won't find anything under that name in the Oracle documentation (and hopefully, because of this blogpost, people will now have a reference). Of course Oracle documents these functions; you just have to know that they are called FIRST and LAST in the SQL Language Reference.
Even though these functions were already introduced in version 9, I've seen lots of code that could have used them, but didn't. And that's a pity, because it's a wasted opportunity to write shorter and faster code. The common use case I'm talking about is a detail table with a validity period: typically a column startdate, and optionally an enddate. For such a table, you often have to know the values of the currently valid row. An example: suppose we have a table RELATIONS, and for each relation we want to know its address at a certain point in time:
SQL> create table relations
  2  ( id number not null primary key
  3  , name varchar2(30) not null
  4  )
  5  /

Table created.

SQL> insert into relations
  2  select 1, 'Oracle Nederland' from dual union all
  3  select 2, 'Ciber Nederland' from dual
  4  /

2 rows created.

SQL> create table relation_addresses
  2  ( relation_id number not null
  3  , startdate date not null
  4  , address varchar2(30) not null
  5  , postal_code varchar2(6) not null
  6  , city varchar2(30) not null
  7  , constraint ra_pk primary key (relation_id,startdate)
  8  , constraint ra_r_fk foreign key (relation_id) references relations(id)
  9  )
 10  /

Table created.

SQL> insert into relation_addresses
  2  select 1, date '1995-01-01', 'Rijnzathe 6', '3454PV', 'De Meern' from dual union all
  3  select 1, date '2011-01-01', 'Hertogswetering 163-167', '3543AS', 'Utrecht' from dual union all
  4  select 2, date '2000-01-01', 'Frankrijkstraat 128', '5622AH', 'Eindhoven' from dual union all
  5  select 2, date '2006-01-01', 'Meerkollaan 15', '5613BS', 'Eindhoven' from dual union all
  6  select 2, date '2010-01-01', 'Burgemeester Burgerslaan 40b', '5245NH', 'Den Bosch' from dual union all
  7  select 2, date '2015-01-01', 'Archimedesbaan 16', '3439ME', 'Nieuwegein' from dual
  8  /

6 rows created.

SQL> begin
  2    dbms_stats.gather_table_stats(user,'relations');
  3    dbms_stats.gather_table_stats(user,'relation_addresses');
  4  end;
  5  /

PL/SQL procedure successfully completed.

Relation "Oracle Nederland" has two addresses; its current address is the one at the Hertogswetering. And, fictively, relation "Ciber Nederland" has four addresses. The current address is the Den Bosch one, and I've also recorded a future address in Nieuwegein. Note that, in real life, the latter three are all Ciber offices currently in use. To get the active relation addresses on October 1st, 2012, I can use this query:
SQL> var REFERENCE_DATE varchar2(10)
SQL> exec :REFERENCE_DATE:='2012-10-01'

PL/SQL procedure successfully completed.

SQL> select ra.relation_id
  2       , max(ra.startdate) startdate
  3    from relation_addresses ra
  4   where ra.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd')
  5   group by ra.relation_id
  6  /

RELATION_ID STARTDATE
----------- -------------------
          1 01-01-2011 00:00:00
          2 01-01-2010 00:00:00

2 rows selected.

But what if I also want to retrieve the current address belonging to these rows? In fact, this is frequently asked in Oracle forums. Prior to Oracle8i, you would have used a query like the one below:
SQL> select ra.relation_id
  2       , ra.startdate
  3       , ra.address
  4       , ra.postal_code
  5       , ra.city
  6    from relation_addresses ra
  7   where ra.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd')
  8     and not exists
  9         ( select 'a relation_address with a more recent startdate'
 10             from relation_addresses ra2
 11            where ra2.relation_id = ra.relation_id
 12              and ra2.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd')
 13              and ra2.startdate > ra.startdate
 14         )
 15  /

RELATION_ID STARTDATE           ADDRESS                        POSTAL CITY
----------- ------------------- ------------------------------ ------ ------------------------------
          1 01-01-2011 00:00:00 Hertogswetering 163-167        3543AS Utrecht
          2 01-01-2010 00:00:00 Burgemeester Burgerslaan 40b   5245NH Den Bosch

2 rows selected.

This uses a correlated subquery, accessing table RELATION_ADDRESSES (or an index belonging to it) twice. That can be prevented from Oracle8i onwards by using an analytic function:
SQL> select relation_id
  2       , startdate
  3       , address
  4       , postal_code
  5       , city
  6    from ( select ra.relation_id
  7              , ra.startdate
  8              , ra.address
  9              , ra.postal_code
 10              , ra.city
 11              , row_number() over (partition by ra.relation_id order by ra.startdate desc) rn
 12           from relation_addresses ra
 13          where ra.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd')
 14         )
 15   where rn = 1
 16  /

RELATION_ID STARTDATE           ADDRESS                        POSTAL CITY
----------- ------------------- ------------------------------ ------ ------------------------------
          1 01-01-2011 00:00:00 Hertogswetering 163-167        3543AS Utrecht
          2 01-01-2010 00:00:00 Burgemeester Burgerslaan 40b   5245NH Den Bosch

2 rows selected.

Here you compute a row_number per relation_id, ordering by startdate in descending order, so the most recent startdate before the reference date gets row_number 1 for each relation_id. By using an inline view, we can filter on the outcome of the analytic function and select only the rows with row_number 1. In forums, you'll often see this solution being advised. Compared to the correlated subquery, this query selects only once from table RELATION_ADDRESSES. However, you can do even better by just adding three "keep clause" functions to the original query:
SQL> select ra.relation_id
  2       , max(ra.startdate) startdate
  3       , max(ra.address) keep (dense_rank last order by ra.startdate) address
  4       , max(ra.postal_code) keep (dense_rank last order by ra.startdate) postal_code
  5       , max(ra.city) keep (dense_rank last order by ra.startdate) city
  6    from relation_addresses ra
  7   where ra.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd')
  8   group by ra.relation_id
  9  /

RELATION_ID STARTDATE           ADDRESS                        POSTAL CITY
----------- ------------------- ------------------------------ ------ ------------------------------
          1 01-01-2011 00:00:00 Hertogswetering 163-167        3543AS Utrecht
          2 01-01-2010 00:00:00 Burgemeester Burgerslaan 40b   5245NH Den Bosch

2 rows selected.

The three extra aggregate functions all do a "dense_rank last order by startdate", meaning: sort the rows by startdate and keep only the rows with the most recent startdate. If several rows were to share that startdate, the max function at the start would tell Oracle to pick the maximum address/postal_code/city among them. However, (relation_id,startdate) is unique, so ties are impossible and the max function is just a dummy; I could also have used min.
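Outside the database, the semantics of these KEEP functions can be sketched in a few lines of Python. This is my own illustration, not Oracle code; the data mirrors the "Oracle Nederland" rows from the example:

```python
def keep_dense_rank_last(rows, order_key, value_key, agg=max):
    """Emulate Oracle's  agg(value) KEEP (DENSE_RANK LAST ORDER BY key):
    restrict the group to the rows sharing the highest ordering key,
    then apply the aggregate to their values."""
    last = max(order_key(r) for r in rows)
    return agg(value_key(r) for r in rows if order_key(r) == last)

# One relation_id group (ISO date strings sort correctly as text):
addresses = [
    {"startdate": "1995-01-01", "address": "Rijnzathe 6", "city": "De Meern"},
    {"startdate": "2011-01-01", "address": "Hertogswetering 163-167", "city": "Utrecht"},
]

# Picks the address of the row with the most recent startdate.
print(keep_dense_rank_last(addresses, lambda r: r["startdate"],
                           lambda r: r["address"]))
```

Because ties on the ordering key are impossible here, the outer `agg` never has to break a tie, exactly as in the SQL version.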
The query is shorter and, to me, clearer at first glance. However, the main reason for my enthusiasm for the aggregate functions FIRST and LAST is that they're just faster. To show this, let's execute those queries against a table with 300,000 rows: 100,000 relations with 3 addresses each:
SQL> create table relations
  2  ( id number not null primary key
  3  , name varchar2(30) not null
  4  )
  5  /

Table created.

SQL> create table relation_addresses
  2  ( relation_id number not null
  3  , startdate date not null
  4  , address varchar2(30) not null
  5  , postal_code varchar2(6) not null
  6  , city varchar2(30) not null
  7  , constraint ra_pk primary key (relation_id,startdate)
  8  , constraint ra_r_fk foreign key (relation_id) references relations(id)
  9  )
 10  /

Table created.

SQL> insert into relations
  2  select level
  3       , dbms_random.string('a',30)
  4    from dual
  5  connect by level <= 100000
  6  /

100000 rows created.

SQL> insert into relation_addresses
  2  select 1 + mod(level-1,100000)
  3       , date '2013-01-01' - numtodsinterval(level,'hour')
  4       , dbms_random.string('a',30)
  5       , dbms_random.string('a',6)
  6       , dbms_random.string('a',30)
  7    from dual
  8  connect by level <= 300000
  9  /

300000 rows created.

SQL> begin
  2    dbms_stats.gather_table_stats
  3    ( user
  4    , 'relations'
  5    , cascade=>true
  6    , method_opt=>'FOR ALL INDEXED COLUMNS SIZE 254'
  7    , estimate_percent=>100
  8    );
  9    dbms_stats.gather_table_stats
 10    ( user
 11    , 'relation_addresses'
 12    , cascade=>true
 13    , method_opt=>'FOR ALL INDEXED COLUMNS SIZE 254'
 14    , estimate_percent=>100
 15    );
 16  end;
 17  /

PL/SQL procedure successfully completed.

Note that I created histograms with 254 buckets, just to make the optimizer see that it should full scan the table despite the "startdate <= :REFERENCE_DATE" predicate. This next query should give a clue what's in the table:
SQL> select *
  2    from relation_addresses
  3   where relation_id in (1,2,99999,100000)
  4   order by relation_id
  5       , startdate
  6  /

RELATION_ID STARTDATE           ADDRESS                        POSTAL CITY
----------- ------------------- ------------------------------ ------ ------------------------------
          1 09-03-1990 15:00:00 tKgXePxuAIdhFBNJLIRRjodrlJzGOl vPIAbL pNkbFHTJPrVuDIYLxsCfUfetBsKJIE
          1 05-08-2001 07:00:00 LybVzfpzoQzXjpCAdkSZrkYrwUtZtL cWJwFe IczTRyjITWCJIOErccfITVvsqRVyMF
          1 31-12-2012 23:00:00 lNEwsdYhbwdqRxHTSCTCykgICxiXKL oXzHQF YfyKFmiboCWfmNLjVLZoKmUDoMFaDu
          2 09-03-1990 14:00:00 svOylQPkbyfympSXRMeyudfFErFvlO MLFdpG LTtAKdrpUmCwFgqEmoKxnUtWecwgcV
          2 05-08-2001 06:00:00 BsRCUviBiLHaAEjyRVnIedRAWzuVSe DlBlZW ErQmCkDgNDTMOdZzceFYrMXnZmmjxg
          2 31-12-2012 22:00:00 wqdFdXoBdmmCooLtGfWOMKukIMrDlI geRRHz DaPpWHOOdWgbjLaRkxfFDUIPgVgvEt
      99999 12-10-1978 01:00:00 FsXOjUdNIgjjGjnWpJjTTscbcuqsxa PdhVtm qOskmLwRlngSEihmlpYhmNHhvtrpBc
      99999 09-03-1990 17:00:00 sqoKYNeDntZtAUSmSDMtIQZloTSVeD uGPszi GIDctptEomcGzYGYhUGhKHgDRZJCmY
      99999 05-08-2001 09:00:00 fhHGwuGPIHSOaKdjDvDcqTzsbHZzqR tpaLAP rVYCmijzqJmhlnZZLXkHpgFmLAEiTS
     100000 12-10-1978 00:00:00 WwxfHcVfkFfItgcXfjPnKTiATlHjao nSOjSn vZNRsRySNPlmQKgCJjcpiEOhQIxzoy
     100000 09-03-1990 16:00:00 cGcVPMsFyxCBrnsZtMYBnaAflXiNff NVKRIr SseFWkWyUDgaPpbxdmENdLjurGbJPK
     100000 05-08-2001 08:00:00 dRfCmqdmbhcmaMvyYBpewPsFBCVdlG BMQWLY YPaAGnKKUkfdnAeAyLYeUBfXwezsEo

12 rows selected.

So there are a couple of rows that are filtered out because they're in the future, but for most rows the latest row is the current one. This is the plan of the first query, with the correlated subquery:
SQL> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'))
  2  /

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------
SQL_ID  d6p5uh67h65yb, child number 0
-------------------------------------
select ra.relation_id , ra.startdate , ra.address , ra.postal_code , ra.city
from relation_addresses ra where ra.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd')
and not exists ( select 'a relation_address with a more recent startdate'
from relation_addresses ra2 where ra2.relation_id = ra.relation_id
and ra2.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd') and ra2.startdate > ra.startdate )

Plan hash value: 3749094337

---------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
---------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |                    |      1 |        |    100K|00:00:00.66 |   15071 |   3681 |
|*  1 |  HASH JOIN RIGHT ANTI |                    |      1 |   2978 |    100K|00:00:00.66 |   15071 |   3681 |
|*  2 |   INDEX FAST FULL SCAN| RA_PK              |      1 |    297K|    297K|00:00:00.05 |    1240 |     35 |
|*  3 |   TABLE ACCESS FULL   | RELATION_ADDRESSES |      1 |    297K|    297K|00:00:00.12 |   13831 |   3646 |
---------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("RA2"."RELATION_ID"="RA"."RELATION_ID")
       filter("RA2"."STARTDATE">"RA"."STARTDATE")
   2 - filter("RA2"."STARTDATE"<=TO_DATE(:REFERENCE_DATE,'yyyy-mm-dd'))
   3 - filter("RA"."STARTDATE"<=TO_DATE(:REFERENCE_DATE,'yyyy-mm-dd'))

30 rows selected.

A HASH JOIN ANTI for the "not exists", and a total of 0.66 seconds. Next, the plan for the query with the analytic row_number function:
SQL> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'))
  2  /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------------
SQL_ID  1zd4wqtxkc2vz, child number 0
-------------------------------------
select relation_id , startdate , address , postal_code , city
from ( select ra.relation_id , ra.startdate , ra.address , ra.postal_code , ra.city ,
row_number() over (partition by ra.relation_id order by ra.startdate desc) rn
from relation_addresses ra
where ra.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd') ) where rn = 1

Plan hash value: 2795878473

------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |                    |      1 |        |    100K|00:00:00.97 |    7238 |   3646 |
|*  1 |  VIEW                    |                    |      1 |    297K|    100K|00:00:00.97 |    7238 |   3646 |
|*  2 |   WINDOW SORT PUSHED RANK|                    |      1 |    297K|    200K|00:00:00.93 |    7238 |   3646 |
|*  3 |    TABLE ACCESS FULL     | RELATION_ADDRESSES |      1 |    297K|    297K|00:00:00.09 |    7238 |   3646 |
------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - filter("RN"=1)
   2 - filter(ROW_NUMBER() OVER ( PARTITION BY "RA"."RELATION_ID" ORDER BY
       INTERNAL_FUNCTION("RA"."STARTDATE") DESC )<=1)
   3 - filter("RA"."STARTDATE"<=TO_DATE(:REFERENCE_DATE,'yyyy-mm-dd'))

29 rows selected.

Note that this query takes longer than the correlated subquery above: 0.97 seconds versus 0.66 seconds. The HASH JOIN ANTI took 0.49 seconds (0.66 - 0.05 - 0.12), where computing the ROW_NUMBER took 0.84 seconds (0.93 - 0.09).
So here, on my laptop, I avoided 0.05 seconds for the INDEX FAST FULL SCAN, but spent 0.35 seconds (0.84 - 0.49) more on the computation. When I/O is more expensive than on my laptop, the time of the first query will likely go up and the two timings will be closer to each other. Now the keep clause variant:
SQL> select * from table(dbms_xplan.display_cursor(null,null,'iostats last'))
  2  /

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------------------------------
SQL_ID  dcw8tyyqtu2kk, child number 0
-------------------------------------
select ra.relation_id , max(ra.startdate) startdate ,
max(ra.address) keep (dense_rank last order by ra.startdate) address ,
max(ra.postal_code) keep (dense_rank last order by ra.startdate) postal_code ,
max(ra.city) keep (dense_rank last order by ra.startdate) city
from relation_addresses ra
where ra.startdate <= to_date(:REFERENCE_DATE,'yyyy-mm-dd') group by ra.relation_id

Plan hash value: 2324030966

------------------------------------------------------------------------------------------------------------
| Id  | Operation          | Name               | Starts | E-Rows | A-Rows |   A-Time   | Buffers | Reads  |
------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT   |                    |      1 |        |    100K|00:00:00.55 |    7238 |   3646 |
|   1 |  SORT GROUP BY     |                    |      1 |    100K|    100K|00:00:00.55 |    7238 |   3646 |
|*  2 |   TABLE ACCESS FULL| RELATION_ADDRESSES |      1 |    297K|    297K|00:00:00.09 |    7238 |   3646 |
------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   2 - filter("RA"."STARTDATE"<=TO_DATE(:REFERENCE_DATE,'yyyy-mm-dd'))

24 rows selected.

The shortest query, the shortest plan and the fastest execution. The SORT GROUP BY immediately reduces the number of intermediate rows from 297K to 100K, whereas the WINDOW SORT PUSHED RANK had to compute the row_number for all 297K rows.
PS: this topic and much more is covered in an upcoming Live Virtual Seminar for Oracle University on October 2nd
Saturday, May 19, 2012
Much Ado About Nothing?
I was recently reading this presentation PDF by Hugh Darwen, called How To Handle Missing Information Without Using NULL. Several great thinkers and founders of relational theory consider NULL to be the thing that should not be. For example, one slide in the above-mentioned PDF is titled "SQL's Nulls Are A Disaster", and I found a paper with the amusing title "The Final Null In The Coffin".
The distributed key (constraints 3, 4 and 5):
PS: For all Dutch DBA's: here is a symposium you don't want to miss.
I can understand the critique. The introduction of NULL leads to three valued logic, which makes programs much more complex and harder to prove correct. All database professionals likely have been bitten by NULLs several times during their career, myself included. And a NULL can have several interpretations. By using NULL, you are not making clear what is meant. If the value for column hair_colour is NULL, does it mean the person is bald? Or do you know the person has hair, but you just don't know what colour? Or can the person be bald or have hair, but you just don't know which one applies? Or is the person in the midst of a hair colouring exercise and you only temporarily don't know the colour? If you're creative, I'm sure you can come up with other interpretations as well.
On the other hand, the theorists don't have to build database applications for end users who like reasonable response times, and I do. Avoiding nulls at all cost typically leads to a data model that has more tables than needed, requiring more joins and therefore making queries slower. So I have to make a trade-off. In general I try to avoid nullable columns as much as possible, for example by choosing subtype implementations instead of supertype implementations, and by modelling entity subtypes in the first place, but I will never let it noticeably slow down my application. At my current job, I'm making a data model right now. Having read all use cases, I know how the data will be used, and so I know where in the model there is room to avoid an extra nullable column. One thing I'll never voluntarily do, though, is make up strange outlier values just to get rid of the null.
Anyway, I was curious to see how Hugh Darwen handles missing information without using nulls. In his paper, he has a concise example, which I'll translate to Oracle syntax in this blogpost to see what practically needs to happen to avoid nulls in his example. He starts with this table:
SQL> select *
  2    from pers_info
  3  /

        ID NAME       JOB            SALARY
---------- ---------- ---------- ----------
      1234 Anne       Lawyer         100000
      1235 Boris      Banker
      1236 Cindy                      70000
      1237 Davinder

4 rows selected.

Which contains four NULL values. The meaning of those NULL values can't be seen from this table, but this is what they are meant to be:
- Boris earns something, but we don't know how much
- Cindy does some job, but we don't know what it is
- Davinder doesn't have a job
- Davinder doesn't have a salary
SQL> select *
  2    from called
  3  /

        ID NAME
---------- --------
      1234 Anne
      1235 Boris
      1236 Cindy
      1237 Davinder

4 rows selected.

SQL> select *
  2    from does_job
  3  /

        ID JOB
---------- ------
      1234 Lawyer
      1235 Banker

2 rows selected.

SQL> select *
  2    from job_unk
  3  /

        ID
----------
      1236

1 row selected.

SQL> select *
  2    from unemployed
  3  /

        ID
----------
      1237

1 row selected.

SQL> select *
  2    from earns
  3  /

        ID     SALARY
---------- ----------
      1234     100000
      1236      70000

2 rows selected.

SQL> select *
  2    from salary_unk
  3  /

        ID
----------
      1235

1 row selected.

SQL> select *
  2    from unsalaried
  3  /

        ID
----------
      1237

1 row selected.

Here we have achieved a data model from which every NULL has been banished.
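What makes this decomposition safe is the "distributed key" between the tables: every id in CALLED must appear in exactly one of the job tables (DOES_JOB, JOB_UNK, UNEMPLOYED) and in exactly one of the salary tables. That amounts to a disjoint-cover check, which can be sketched in Python. This is my own illustration, mirroring the example's data:

```python
# Hypothetical Python mirror of the seven tables above.
called = {1234: "Anne", 1235: "Boris", 1236: "Cindy", 1237: "Davinder"}
does_job = {1234: "Lawyer", 1235: "Banker"}
job_unk, unemployed = {1236}, {1237}
earns = {1234: 100000, 1236: 70000}
salary_unk, unsalaried = {1235}, {1237}

def is_partition(whole, parts):
    """True if the id sets in `parts` are pairwise disjoint and cover `whole`:
    the distributed-key property."""
    union = set().union(*parts)
    return union == set(whole) and sum(len(p) for p in parts) == len(union)

# Every person has exactly one job fact and exactly one salary fact.
print(is_partition(called, [set(does_job), job_unk, unemployed]))  # True
print(is_partition(called, [set(earns), salary_unk, unsalaried]))  # True
```

In the database this property has to be enforced with constraints across tables, which SQL makes considerably harder than this ten-line check suggests.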
Now what if we'd like to simulate a query against the PERS_INFO table? Darwen uses this expression to transform the seven tables back to the PERS_INFO table:
WITH (EXTEND JOB_UNK ADD 'Job unknown' AS Job_info) AS T1,
     (EXTEND UNEMPLOYED ADD 'Unemployed' AS Job_info) AS T2,
     (DOES_JOB RENAME (Job AS Job_info)) AS T3,
     (EXTEND SALARY_UNK ADD 'Salary unknown' AS Sal_info) AS T4,
     (EXTEND UNSALARIED ADD 'Unsalaried' AS Sal_info) AS T5,
     (EXTEND EARNS ADD CHAR(Salary) AS Sal_info) AS T6,
     (T6 { ALL BUT Salary }) AS T7,
     (UNION ( T1, T2, T3 )) AS T8,
     (UNION ( T4, T5, T7 )) AS T9,
     (JOIN ( CALLED, T8, T9 )) AS PERS_INFO :
PERS_INFO

Translated to Oracle syntax, this becomes:
SQL> with t1 as
  2  ( select id
  3       , 'Job unknown' as job_info
  4    from job_unk
  5  )
  6  , t2 as
  7  ( select id
  8       , 'Unemployed' as job_info
  9    from unemployed
 10  )
 11  , t3 as
 12  ( select id
 13       , job as job_info
 14    from does_job
 15  )
 16  , t4 as
 17  ( select id
 18       , 'Salary unknown' as sal_info
 19    from salary_unk
 20  )
 21  , t5 as
 22  ( select id
 23       , 'Unsalaried' as sal_info
 24    from unsalaried
 25  )
 26  , t6 as
 27  ( select id
 28       , salary
 29       , to_char(salary,'fm999G999') as sal_info
 30    from earns
 31  )
 32  , t7 as
 33  ( select id
 34       , sal_info
 35    from t6
 36  )
 37  , t8 as
 38  ( select id
 39       , job_info
 40    from t1
 41    union all
 42    select id
 43       , job_info
 44    from t2
 45    union all
 46    select id
 47       , job_info
 48    from t3
 49  )
 50  , t9 as
 51  ( select id
 52       , sal_info
 53    from t4
 54    union all
 55    select id
 56       , sal_info
 57    from t5
 58    union all
 59    select id
 60       , sal_info
 61    from t7
 62  )
 63  , pers_info as
 64  ( select c.id
 65       , c.name
 66       , j.job_info
 67       , s.sal_info
 68    from called c
 69         inner join t8 j on (c.id = j.id)
 70         inner join t9 s on (c.id = s.id)
 71  )
 72  select *
 73    from pers_info
 74  /

        ID NAME     JOB_INFO    SAL_INFO
---------- -------- ----------- --------------
      1235 Boris    Banker      Salary unknown
      1237 Davinder Unemployed  Unsalaried
      1234 Anne     Lawyer      100,000
      1236 Cindy    Job unknown 70,000

4 rows selected.

Very elaborate, but the optimizer does a great job at simplifying the query under the covers, as can be seen in this execution plan:
SQL> select *
  2    from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  3  /

PLAN_TABLE_OUTPUT
-------------------------------------------------------------------------------------------------------------------------
SQL_ID  bmrtdy0jad18p, child number 0
-------------------------------------
with t1 as ( select id , 'Job unknown' as job_info from job_unk ) , t2 as
( select id , 'Unemployed' as job_info from unemployed ) , t3 as ( select id ,
job as job_info from does_job ) , t4 as ( select id , 'Salary unknown' as
sal_info from salary_unk ) , t5 as ( select id , 'Unsalaried' as sal_info
from unsalaried ) , t6 as ( select id , salary , to_char(salary,'fm999G999')
as sal_info from earns ) , t7 as ( select id , sal_info from t6 ) , t8 as
( select id , job_info from t1 union all select id , job_info from t2 union all
select id , job_info from t3 ) , t9 as ( select id , sal_info from t4 union all
select id , sal_info from t5 union all select id , sal_info from t7 ) ,
pers_info as ( select c.id , c.name , j.job_info , s.sal_info from called c
inner join t8 j on (c.id = j.id)

Plan hash value: 583520090

-------------------------------------------------------------------------------------------------------------------------
| Id  | Operation             | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
-------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |            |      1 |        |      4 |00:00:00.01 |      14 |       |       |          |
|*  1 |  HASH JOIN            |            |      1 |      4 |      4 |00:00:00.01 |      14 |  1011K|  1011K|  550K (0)|
|*  2 |   HASH JOIN           |            |      1 |      4 |      4 |00:00:00.01 |       8 |  1180K|  1180K|  548K (0)|
|   3 |    TABLE ACCESS FULL  | CALLED     |      1 |      4 |      4 |00:00:00.01 |       2 |       |       |          |
|   4 |    VIEW               |            |      1 |      4 |      4 |00:00:00.01 |       6 |       |       |          |
|   5 |     UNION-ALL         |            |      1 |        |      4 |00:00:00.01 |       6 |       |       |          |
|   6 |      TABLE ACCESS FULL| JOB_UNK    |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|   7 |      TABLE ACCESS FULL| UNEMPLOYED |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|   8 |      TABLE ACCESS FULL| DOES_JOB   |      1 |      2 |      2 |00:00:00.01 |       2 |       |       |          |
|   9 |   VIEW                |            |      1 |      4 |      4 |00:00:00.01 |       6 |       |       |          |
|  10 |    UNION-ALL          |            |      1 |        |      4 |00:00:00.01 |       6 |       |       |          |
|  11 |     TABLE ACCESS FULL | SALARY_UNK |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|  12 |     TABLE ACCESS FULL | UNSALARIED |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|  13 |     TABLE ACCESS FULL | EARNS      |      1 |      2 |      2 |00:00:00.01 |       2 |       |       |          |
-------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("C"."ID"="S"."ID")
   2 - access("C"."ID"="J"."ID")

45 rows selected.

If I had to build the PERS_INFO table back again with a query myself, I'd use this shorter query with six left outer joins:
SQL> select c.id
  2  ,      c.name
  3  ,      coalesce(j.job,nvl2(ju.id,'Job unknown',null),nvl2(ue.id,'Unemployed',null)) job_info
  4  ,      coalesce(to_char(e.salary,'fm999G999'),nvl2(su.id,'Salary unknown',null),nvl2(us.id,'Unsalaried',null)) salary_info
  5  from   called c
  6         left outer join does_job j on (c.id = j.id)
  7         left outer join job_unk ju on (c.id = ju.id)
  8         left outer join unemployed ue on (c.id = ue.id)
  9         left outer join earns e on (c.id = e.id)
 10         left outer join salary_unk su on (c.id = su.id)
 11         left outer join unsalaried us on (c.id = us.id)
 12  /

        ID NAME     JOB_INFO    SALARY_INFO
---------- -------- ----------- --------------
      1234 Anne     Lawyer      100,000
      1236 Cindy    Job unknown 70,000
      1235 Boris    Banker      Salary unknown
      1237 Davinder Unemployed  Unsalaried

4 rows selected.

Although, as you can see below, the plan doesn't really improve:
SQL> select *
  2  from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  3  /

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  6x45b27mvpb1m, child number 0
-------------------------------------
select c.id , c.name , coalesce(j.job,nvl2(ju.id,'Job
unknown',null),nvl2(ue.id,'Unemployed',null)) job_info ,
coalesce(to_char(e.salary,'fm999G999'),nvl2(su.id,'Salary
unknown',null),nvl2(us.id,'Unsalaried',null)) salary_info from called c
left outer join does_job j on (c.id = j.id) left outer join job_unk ju on
(c.id = ju.id) left outer join unemployed ue on (c.id = ue.id) left outer
join earns e on (c.id = e.id) left outer join salary_unk su on (c.id =
su.id) left outer join unsalaried us on (c.id = us.id)

Plan hash value: 3398518218

---------------------------------------------------------------------------------------------------------------------------
| Id  | Operation                | Name       | Starts | E-Rows | A-Rows |   A-Time   | Buffers |  OMem |  1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT         |            |      1 |        |      4 |00:00:00.01 |      15 |       |       |          |
|*  1 |  HASH JOIN OUTER         |            |      1 |      4 |      4 |00:00:00.01 |      15 |   955K|   955K|  528K (0)|
|*  2 |   HASH JOIN OUTER        |            |      1 |      4 |      4 |00:00:00.01 |      12 |  1000K|  1000K|  523K (0)|
|*  3 |    HASH JOIN OUTER       |            |      1 |      4 |      4 |00:00:00.01 |      10 |  1035K|  1035K|  536K (0)|
|*  4 |     HASH JOIN OUTER      |            |      1 |      4 |      4 |00:00:00.01 |       8 |  1063K|  1063K|  536K (0)|
|*  5 |      HASH JOIN OUTER     |            |      1 |      4 |      4 |00:00:00.01 |       6 |  1114K|  1114K|  537K (0)|
|*  6 |       HASH JOIN OUTER    |            |      1 |      4 |      4 |00:00:00.01 |       4 |  1180K|  1180K|  538K (0)|
|   7 |        TABLE ACCESS FULL | CALLED     |      1 |      4 |      4 |00:00:00.01 |       2 |       |       |          |
|   8 |        TABLE ACCESS FULL | JOB_UNK    |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|   9 |       TABLE ACCESS FULL  | UNEMPLOYED |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|  10 |      TABLE ACCESS FULL   | SALARY_UNK |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|  11 |     TABLE ACCESS FULL    | UNSALARIED |      1 |      1 |      1 |00:00:00.01 |       2 |       |       |          |
|  12 |    TABLE ACCESS FULL     | DOES_JOB   |      1 |      2 |      2 |00:00:00.01 |       2 |       |       |          |
|  13 |   TABLE ACCESS FULL      | EARNS      |      1 |      2 |      2 |00:00:00.01 |       3 |       |       |          |
---------------------------------------------------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------

   1 - access("C"."ID"="E"."ID")
   2 - access("C"."ID"="J"."ID")
   3 - access("C"."ID"="US"."ID")
   4 - access("C"."ID"="SU"."ID")
   5 - access("C"."ID"="UE"."ID")
   6 - access("C"."ID"="JU"."ID")

43 rows selected.

But the two plans above are really complex, compared with a simple query against the PERS_INFO table with nullable columns:
SQL> select *
  2  from pers_info
  3  /

        ID NAME       JOB        SALARY
---------- ---------- ---------- ----------
      1234 Anne       Lawyer     100000
      1235 Boris      Banker
      1236 Cindy                 70000
      1237 Davinder

4 rows selected.

SQL> select *
  2  from table(dbms_xplan.display_cursor(null,null,'allstats last'))
  3  /

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID  016x9f106gj27, child number 1
-------------------------------------
select * from pers_info

Plan hash value: 1584579034

-----------------------------------------------------------------------------------------
| Id  | Operation         | Name      | Starts | E-Rows | A-Rows |   A-Time   | Buffers |
-----------------------------------------------------------------------------------------
|   0 | SELECT STATEMENT  |           |      1 |        |      4 |00:00:00.01 |       7 |
|   1 |  TABLE ACCESS FULL| PERS_INFO |      1 |      4 |      4 |00:00:00.01 |       7 |
-----------------------------------------------------------------------------------------

13 rows selected.

If queries like this are not very frequent in your database, you might be willing to accept this extra work to avoid the NULL. But you need to consider something else as well: the new schema requires many more constraints. Using just the PERS_INFO table, a single primary key constraint on the Id column is all you need. But for the new model, Darwen describes 9, but really 15, constraints:
- No two CALLED rows have the same Id. (Primary key)
- Every row in CALLED has a matching row in either DOES_JOB, JOB_UNK, or UNEMPLOYED.
- No row in DOES_JOB has a matching row in JOB_UNK.
- No row in DOES_JOB has a matching row in UNEMPLOYED.
- No row in JOB_UNK has a matching row in UNEMPLOYED.
- Every row in DOES_JOB has a matching row in CALLED. (Foreign key)
- Every row in JOB_UNK has a matching row in CALLED. (Foreign key)
- Every row in UNEMPLOYED has a matching row in CALLED. (Foreign key)
- Constraints 2 through 8 repeated, mutatis mutandis, for CALLED with respect to EARNS, SALARY_UNK and UNSALARIED.
Constraint 1 is implemented with a regular primary key:

SQL> alter table called add primary key (id)
  2  /

Table altered.

And so are constraints 6, 7 and 8:
SQL> alter table does_job add foreign key (id) references called (id)
  2  /

Table altered.

SQL> alter table job_unk add foreign key (id) references called (id)
  2  /

Table altered.

SQL> alter table unemployed add foreign key (id) references called (id)
  2  /

Table altered.

But constraint 2 says that the Id in table CALLED is a foreign distributed key. And constraints 3, 4 and 5 say the Ids of tables DOES_JOB, JOB_UNK and UNEMPLOYED form a distributed key. Oracle doesn't have declarative support for distributed keys or for foreign distributed keys. We could write database trigger code to implement this, which is very hard to do correctly, or we could use the materialized view trick to have the condition validated at the end of a transaction instead of at the end of the statement, which also has its downsides. And such deferred constraint checking is explicitly ruled out by The Third Manifesto as well. Nevertheless, here is how it can be done.
The distributed key (constraints 3, 4 and 5):
SQL> create materialized view log on does_job with rowid
  2  /

Materialized view log created.

SQL> create materialized view log on job_unk with rowid
  2  /

Materialized view log created.

SQL> create materialized view log on unemployed with rowid
  2  /

Materialized view log created.

SQL> create materialized view distributed_key_vw
  2  refresh fast on commit
  3  as
  4  select d.rowid rid
  5  ,      d.id    id
  6  ,      'D'     umarker
  7  from   does_job d
  8  union all
  9  select j.rowid
 10  ,      j.id
 11  ,      'J'
 12  from   job_unk j
 13  union all
 14  select u.rowid
 15  ,      u.id
 16  ,      'U'
 17  from   unemployed u
 18  /

Materialized view created.

SQL> alter table distributed_key_vw
  2  add constraint distributed_key_check
  3  primary key (id)
  4  /

Table altered.

And to show that the distributed key implementation works:
SQL> insert into job_unk values (1234)
  2  /

1 row created.

SQL> commit
  2  /
commit
*
ERROR at line 1:
ORA-12048: error encountered while refreshing materialized view
"RWIJK"."DISTRIBUTED_KEY_VW"
ORA-00001: unique constraint (RWIJK.DISTRIBUTED_KEY_CHECK) violated

And the foreign distributed key ("Every row in CALLED has a matching row in either DOES_JOB, JOB_UNK, or UNEMPLOYED.") can be implemented like this:
SQL> create materialized view log on does_job with rowid
  2  /

Materialized view log created.

SQL> create materialized view log on job_unk with rowid
  2  /

Materialized view log created.

SQL> create materialized view log on unemployed with rowid
  2  /

Materialized view log created.

SQL> create materialized view foreign_distributed_key_vw
  2  refresh fast on commit
  3  as
  4  select c.rowid  c_rowid
  5  ,      dj.rowid dj_rowid
  6  ,      ju.rowid ju_rowid
  7  ,      ue.rowid ue_rowid
  8  ,      c.id     id
  9  ,      dj.id    dj_id
 10  ,      ju.id    ju_id
 11  ,      ue.id    ue_id
 12  from   called c
 13  ,      does_job dj
 14  ,      job_unk ju
 15  ,      unemployed ue
 16  where  c.id = dj.id (+)
 17  and    c.id = ju.id (+)
 18  and    c.id = ue.id (+)
 19  /

Materialized view created.

SQL> alter table foreign_distributed_key_vw
  2  add constraint foreign_distributed_key_check
  3  check (coalesce(dj_id,ju_id,ue_id) is not null)
  4  /

Table altered.

And some proof that this implementation works:
SQL> insert into called values (1238,'Elise')
  2  /

1 row created.

SQL> commit
  2  /
commit
*
ERROR at line 1:
ORA-12008: error in materialized view refresh path
ORA-02290: check constraint (RWIJK.FOREIGN_DISTRIBUTED_KEY_CHECK) violated

Would I go through the extra trouble of an implementation with 6 more tables, 14 extra constraints and worse performance, as shown above? It depends. It depends on how often the data is queried, on how often it is updated concurrently, on whether the distinction between the possible multiple meanings of NULL is relevant in my case, and on whether I have sufficient extra time to implement it. Using Oracle, most often I probably won't.
PS: For all Dutch DBAs: here is a symposium you don't want to miss.
Wednesday, March 28, 2012
Mastering Oracle Trace Data with Cary Millsap
At CIBER we are very proud to announce that Cary Millsap will give his one-day seminar Mastering Oracle Trace Data in the Netherlands. The event will take place at the Carlton President Hotel in Utrecht on Wednesday, May 23. You can register and read more about this event here.
The seminar is aimed at DBAs, database application developers, data warehouse specialists and anyone who cares about the speed of a database application. I'm sure that, among many other things, you'll leave the seminar with a very clear mindset about performance. If you are in doubt whether you should attend, please read one of his many excellent papers, Thinking Clearly About Performance.
It's an opportunity you don't want to miss. Hope to see you there soon!
Tuesday, March 27, 2012
Third OGh APEX dag
Yesterday was the third annual APEX day, organized by the Dutch Oracle usergroup OGh. It's the biggest APEX-only event in the world, I've been told, with approximately 280 attendees. Learco Brizzi, Marti Koppelmans and I were very proud to again have a great lineup of presenters and presentations.
The day started with a keynote by Patrick Wolf, who talked about and demonstrated a lot of new 4.2 features. Of course, he could not be sure that every single feature will eventually make it into the 4.2 release. The APEX team has focused on mobile development. I was most impressed by the demo showing how relatively easy it will be to develop a great looking iPhone application: a query, select a list type, and voila. And I think this is why the Oracle community likes APEX so much: it makes developing applications easy. It reminded me of this presentation by Kathy Sierra: it makes you feel awesome.
Next up were the three parallel tracks. I heard quite a lot of people saying they were having a hard time making choices between great presentations, and I was no exception to that rule.
I saw Roel Hartman's "5 Cool Things you can do with HTML 5". Great presentation, especially the first real slide :-), loved the websockets demo with beaconpush, and I learned a lot.
Next presentation was John Scott's "APEX 4 - Error Handling enhancements". Having seen quite a number of John's presentations in the past, I know he always takes a subject to the next level, with last year's translation plugin as a highlight. This time he showed how easy it is to automatically log any errors in JIRA, so users don't have to report errors anymore: you already know about them. Unfortunately, his last demo didn't work. He wanted to show how to add a screenshot from the user's browser automatically to the JIRA ticket.
The fourth presentation was by Sergei Martens, called "Building desktop-like web applications with Ext JS & Apex". He was very enthusiastic about APEX applications using Ext JS, but he didn't quite succeed in explaining to me which aspects he liked so much. I agree some of the built-in themes are quite boring, but I don't think the video showed a "sexy" theme. To me, a great design is a simple design, but that's a matter of personal preference. Next, he had severe trouble with a very slow Windows (a pleonasm?) laptop. This must be a presenter's nightmare, especially since he seemed well prepared, but this was out of his control. He handled the situation remarkably well though.
The last presentation of the day was Margreet den Hartigh's and Alex Nuijten's presentation called "From Zero to APEX". A customer story about a Uniface application that was rebuilt using APEX, with a team of Uniface developers who not only had to learn APEX, but also PL/SQL, HTML, CSS, Subversion, JavaScript and a lot more. Almost all Dutch municipalities use this application. It looks like they could gain a lot more by consolidating the databases of all municipalities into a single hosted database, with some partitioning, VPD and resource manager. The cost of ownership would decrease dramatically that way. Free tip from me :-)
A nice dinner with several visitors and presenters ended a great APEX day.
Sunday, March 25, 2012
Connect By Filtering
A hierarchical query is typically executed using a plan that starts with the operation CONNECT BY WITH FILTERING, which has two child operations. The first child operation implements the START WITH clause and the second child operation contains a step called CONNECT BY PUMP, implementing the recursive part of the query. Here is an example of such a plan using the well known hierarchical query on table EMP:
These are the plans from 11.2.0.2:
The numbers from 11.2.0.2 show more sophistication than just the cost of the table scan. The optimizer can't know how many levels deep the data is, but version 10.2.0.4 apparently picked 1 and left the total cost unchanged at 3. I'm curious in which version between 10.2.0.4 and 11.2.0.2 this cost calculation changed. If anyone reading this has a version in between and would like to check, please let me know in the comments. My guess would be that 11.2.0.1 contained the cost change.
What does CONNECT BY NO FILTERING WITH START-WITH do?
Let's explore this, using this table:
The data is tree shaped, where each parent node has exactly 9 child nodes. One tenth of the data, with an id that ends with the digit 3, has its indicator column set to 'N'. This select query will make it clearer how the data looks:
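A table matching this description can be sketched as follows (the name T, the columns ID, PARENT_ID and INDICATOR, and the exact numbering scheme are assumptions, not the original script):

```sql
-- Hypothetical reconstruction of the test set: node p gets children
-- 9*p+1 .. 9*p+9, so every parent has exactly 9 child nodes, and every
-- id ending in 3 (one tenth of the rows) gets indicator 'N'.
create table t
( id        number      not null
, parent_id number
, indicator varchar2(1) not null
);

insert into t
select level - 1
,      case when level > 1 then trunc((level - 2) / 9) end
,      case when mod(level - 1, 10) = 3 then 'N' else 'Y' end
from   dual
connect by level <= 100001;

commit;
```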
When hearing the word "filter", I almost immediately associate it with a WHERE clause. But a where clause in a connect by query is not what is meant by connect by filtering. The documentation states, paraphrased: if the query contains a WHERE clause without a join, Oracle eliminates all rows that do not satisfy the WHERE clause after the hierarchy has been built, evaluating the condition for each row individually rather than removing the children of a row that does not satisfy the condition.
So a where clause predicate is evaluated AFTER the connect by has done its job. You can see that happening here:
The "indicator = 'N'" predicate is at step 1, which is executed after the CONNECT BY WITH FILTERING at step 2. Note that although this query is executed in 11.2.0.2, the optimizer has chosen the old CONNECT BY WITH FILTERING.
Connect by filtering is done by using filters in your CONNECT BY clause. Here is an example using the predicate "indicator = 'N'" inside the CONNECT BY clause:
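Such a query can be sketched as follows (table T, its columns, and the START WITH value are assumed names, in line with the data description above):

```sql
-- Sketch: the predicate is part of the CONNECT BY condition, so it is
-- evaluated during each recursion step and branches that fail it are
-- never expanded in the first place.
select id
,      parent_id
,      level
from   t
start with id = 0
connect by parent_id = prior id
       and indicator = 'N';
```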
In the A-Rows column, you can see that the connect by filtering was effective here: only the necessary rows were read. And this is the key difference between the two connect by algorithms: with CONNECT BY WITH FILTERING, you can filter within each recursion, whereas CONNECT BY NO FILTERING WITH START-WITH has to read everything, do an in-memory operation, and return the result. With this example, the latter is much less efficient:
100K rows were read, and the A-Time was 0.14 seconds instead of 0.01 seconds. I wondered where those 0.14 seconds went, since the plan shows it's NOT for the full table scan. Using Tom Kyte's runstats_pkg reveals this:
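A runstats_pkg comparison of the two variants is typically driven like this (a sketch; the rs_start, rs_middle and rs_stop procedures are from Tom Kyte's published package, which is assumed to be installed):

```sql
-- Sketch of a runstats_pkg run comparing the two algorithms.
exec runstats_pkg.rs_start

-- run the CONNECT_BY_FILTERING variant of the query here

exec runstats_pkg.rs_middle

-- run the NO_CONNECT_BY_FILTERING variant of the query here

-- report statistics and latches that differ by more than 100
exec runstats_pkg.rs_stop(100)
```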
The major difference is the number of rows sorted! The CONNECT BY NO FILTERING WITH START-WITH sorts all 100K rows. This is a surprise, because normally when you sort, you use memory from the PGA workarea, which shows up in the memory statistics (OMem, 1Mem, Used-Mem) of your execution plan. But the no filtering plan did not show those statistics. I have no explanation for this phenomenon yet.
Let's zoom in on the sorting:
So CONNECT BY WITH FILTERING did 8 sorts (2286 - 2278) and sorted 12 rows (9425522 - 9425510), whereas CONNECT BY NO FILTERING WITH START-WITH did 2 sorts (2288 - 2286) and sorted 100,001 rows (9525523 - 9425522).
And finally, I promised to explain why the first two queries of this blogpost are identical, but show a different execution plan. The reason is simple: the first one is executed on 10.2.0.4 and the second one on 11.2.0.2.
SQL> select lpad(' ', 2 * level - 2, ' ') || ename as ename
2 , level
3 , job
4 , deptno
5 from emp
6 connect by mgr = prior empno
7 start with mgr is null
8 /
ENAME LEVEL JOB DEPTNO
-------------------- ---------- --------------------------- ----------
KING 1 PRESIDENT 10
JONES 2 MANAGER 20
SCOTT 3 ANALYST 20
ADAMS 4 CLERK 20
FORD 3 ANALYST 20
SMITH 4 CLERK 20
BLAKE 2 MANAGER 30
ALLEN 3 SALESMAN 30
WARD 3 SALESMAN 30
MARTIN 3 SALESMAN 30
TURNER 3 SALESMAN 30
JAMES 3 CLERK 30
CLARK 2 MANAGER 10
MILLER 3 CLERK 10
14 rows selected.
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
2 /
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID d2c7xqxbr112u, child number 0
-------------------------------------
select lpad(' ', 2 * level - 2, ' ') || ename as ename , level , job , deptno from emp connect by
mgr = prior empno start with mgr is null
Plan hash value: 1869448388
--------------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads | OMem | 1Mem | Used-Mem |
--------------------------------------------------------------------------------------------------------------------------------
|* 1 | CONNECT BY WITH FILTERING| | 1 | | 14 |00:00:00.02 | 15 | 6 | 2048 | 2048 | 2048 (0)|
|* 2 | TABLE ACCESS FULL | EMP | 1 | 1 | 1 |00:00:00.01 | 3 | 6 | | | |
|* 3 | HASH JOIN | | 4 | | 13 |00:00:00.01 | 12 | 0 | 1452K| 1452K| 853K (0)|
| 4 | CONNECT BY PUMP | | 4 | | 14 |00:00:00.01 | 0 | 0 | | | |
| 5 | TABLE ACCESS FULL | EMP | 4 | 2 | 56 |00:00:00.01 | 12 | 0 | | | |
--------------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MGR"=PRIOR NULL)
2 - filter("MGR" IS NULL)
3 - access("MGR"=PRIOR NULL)
24 rows selected.
You can see a great and more detailed explanation of connect by with filtering here on Christian Antognini's blog.
When I was researching the new recursive subquery factoring clause one and a half year ago, and compared a standard hierarchical query on EMP using recursive subquery factoring with a query using the good old connect by, I stumbled upon a new optimizer algorithm for implementing recursive queries:
SQL> select lpad(' ', 2 * level - 2, ' ') || ename as ename
2 , level
3 , job
4 , deptno
5 from emp
6 connect by mgr = prior empno
7 start with mgr is null
8 /
ENAME LEVEL JOB DEPTNO
-------------------- ---------- --------- ----------
KING 1 PRESIDENT 10
JONES 2 MANAGER 20
SCOTT 3 ANALYST 20
ADAMS 4 CLERK 20
FORD 3 ANALYST 20
SMITH 4 CLERK 20
BLAKE 2 MANAGER 30
ALLEN 3 SALESMAN 30
WARD 3 SALESMAN 30
MARTIN 3 SALESMAN 30
TURNER 3 SALESMAN 30
JAMES 3 CLERK 30
CLARK 2 MANAGER 10
MILLER 3 CLERK 10
14 rows selected.
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
2 /
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID d2c7xqxbr112u, child number 0
-------------------------------------
select lpad(' ', 2 * level - 2, ' ') || ename as ename , level
, job , deptno from emp connect by mgr = prior empno
start with mgr is null
Plan hash value: 763482334
-------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | Reads |
-------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 14 |00:00:00.02 | 6 | 6 |
|* 1 | CONNECT BY NO FILTERING WITH START-WITH| | 1 | | 14 |00:00:00.02 | 6 | 6 |
| 2 | TABLE ACCESS FULL | EMP | 1 | 14 | 14 |00:00:00.02 | 6 | 6 |
-------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MGR"=PRIOR NULL)
filter("MGR" IS NULL)
22 rows selected.
You might wonder what I did to make two exactly the same queries to use a different execution plan, but I'll address that later. First, I'd like to show there are two optimizer hints available, with which you can control which algorithm the optimizer uses:
SQL> select *
2 from v$sql_hint
3 where name like '%CONNECT_BY_FILTERING%'
4 /
NAME SQL_FEATURE CLASS
----------------------- ------------ -----------------------
INVERSE TARGET_LEVEL PROPERTY VERSION VERSION_OUTLINE
----------------------- ------------ ---------- ---------- ---------------
CONNECT_BY_FILTERING QKSFM_ALL CONNECT_BY_FILTERING
NO_CONNECT_BY_FILTERING 2 16 10.2.0.2 10.2.0.2
NO_CONNECT_BY_FILTERING QKSFM_ALL CONNECT_BY_FILTERING
CONNECT_BY_FILTERING 2 16 10.2.0.2 10.2.0.2
2 rows selected.
And this was surprising to me. As the version column suggests, the no_connect_by_filtering hint and the accompanying new algorithm were already introduced in version 10.2.0.2! I checked with my old 10.2.0.4 database and it is indeed present and can be used there:
SQL> select version
2 from v$instance
3 /
VERSION
---------------------------------------------------
10.2.0.4.0
1 row selected.
SQL> select /*+ no_connect_by_filtering gather_plan_statistics */
2 lpad(' ', 2 * level - 2, ' ') || ename as ename
3 , level
4 , job
5 , deptno
6 from emp
7 connect by mgr = prior empno
8 start with mgr is null
9 /
ENAME LEVEL JOB DEPTNO
-------------------- ---------- --------------------------- ----------
KING 1 PRESIDENT 10
JONES 2 MANAGER 20
SCOTT 3 ANALYST 20
ADAMS 4 CLERK 20
FORD 3 ANALYST 20
SMITH 4 CLERK 20
BLAKE 2 MANAGER 30
ALLEN 3 SALESMAN 30
WARD 3 SALESMAN 30
MARTIN 3 SALESMAN 30
TURNER 3 SALESMAN 30
JAMES 3 CLERK 30
CLARK 2 MANAGER 10
MILLER 3 CLERK 10
14 rows selected.
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
2 /
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 39kr5s8dxz7j0, child number 0
-------------------------------------
select /*+ no_connect_by_filtering gather_plan_statistics */ lpad(' ', 2 * level - 2, '
') || ename as ename , level , job , deptno from emp connect by mgr = prior
empno start with mgr is null
Plan hash value: 763482334
----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------
|* 1 | CONNECT BY NO FILTERING WITH START-WITH| | 1 | | 14 |00:00:00.01 | 3 |
| 2 | TABLE ACCESS FULL | EMP | 1 | 14 | 14 |00:00:00.01 | 3 |
----------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MGR"=PRIOR NULL)
filter("MGR" IS NULL)
21 rows selected.
But you need the no_connect_by_filtering hint in version 10.2.0.4 for this query. If you do not provide the hint, this is the result:
SQL> select /*+ gather_plan_statistics */
2 lpad(' ', 2 * level - 2, ' ') || ename as ename
3 , level
4 , job
5 , deptno
6 from emp
7 connect by mgr = prior empno
8 start with mgr is null
9 /
ENAME LEVEL JOB DEPTNO
-------------------- ---------- --------------------------- ----------
KING 1 PRESIDENT 10
JONES 2 MANAGER 20
SCOTT 3 ANALYST 20
ADAMS 4 CLERK 20
FORD 3 ANALYST 20
SMITH 4 CLERK 20
BLAKE 2 MANAGER 30
ALLEN 3 SALESMAN 30
WARD 3 SALESMAN 30
MARTIN 3 SALESMAN 30
TURNER 3 SALESMAN 30
JAMES 3 CLERK 30
CLARK 2 MANAGER 10
MILLER 3 CLERK 10
14 rows selected.
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
2 /
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 6zhtnf720u0bm, child number 0
-------------------------------------
select /*+ gather_plan_statistics */ lpad(' ', 2 * level - 2, ' ') || ename as ename , level
, job , deptno from emp connect by mgr = prior empno start with mgr is null
Plan hash value: 1869448388
-----------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
-----------------------------------------------------------------------------------------------------------------------
|* 1 | CONNECT BY WITH FILTERING| | 1 | | 14 |00:00:00.01 | 15 | 2048 | 2048 | 2048 (0)|
|* 2 | TABLE ACCESS FULL | EMP | 1 | 1 | 1 |00:00:00.01 | 3 | | | |
|* 3 | HASH JOIN | | 4 | | 13 |00:00:00.01 | 12 | 1452K| 1452K| 843K (0)|
| 4 | CONNECT BY PUMP | | 4 | | 14 |00:00:00.01 | 0 | | | |
| 5 | TABLE ACCESS FULL | EMP | 4 | 2 | 56 |00:00:00.01 | 12 | | | |
-----------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("MGR"=PRIOR NULL)
2 - filter("MGR" IS NULL)
3 - access("MGR"=PRIOR NULL)
24 rows selected.
This explains why I didn't see the CONNECT BY NO FILTERING WITH START-WITH operation earlier: Oracle seems to have adjusted the cost calculation of CONNECT BY queries somewhere between 10.2.0.4 and 11.2.0.2. Just look at the cost in both execution plans on 10.2.0.4, using a regular explain plan statement and a "select * from table(dbms_xplan.display)":
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 2 | 50 | 3 (0)| 00:00:01 |
|* 1 | CONNECT BY WITH FILTERING| | | | | |
|* 2 | TABLE ACCESS FULL | EMP | 1 | 29 | 3 (0)| 00:00:01 |
|* 3 | HASH JOIN | | | | | |
| 4 | CONNECT BY PUMP | | | | | |
| 5 | TABLE ACCESS FULL | EMP | 2 | 50 | 3 (0)| 00:00:01 |
----------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 350 | 3 (0)| 00:00:01 |
|* 1 | CONNECT BY NO FILTERING WITH START-WITH| | | | | |
| 2 | TABLE ACCESS FULL | EMP | 14 | 350 | 3 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
The cost of 3 is due to the full table scan of EMP, and no additional cost is added for the hierarchical query.
These are the plans from 11.2.0.2:
----------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
----------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 3 | 156 | 15 (20)| 00:00:01 |
|* 1 | CONNECT BY WITH FILTERING| | | | | |
|* 2 | TABLE ACCESS FULL | EMP | 1 | 25 | 4 (0)| 00:00:01 |
|* 3 | HASH JOIN | | 2 | 76 | 9 (12)| 00:00:01 |
| 4 | CONNECT BY PUMP | | | | | |
|* 5 | TABLE ACCESS FULL | EMP | 13 | 325 | 4 (0)| 00:00:01 |
----------------------------------------------------------------------------------
------------------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 14 | 728 | 5 (20)| 00:00:01 |
|* 1 | CONNECT BY NO FILTERING WITH START-WITH| | | | | |
| 2 | TABLE ACCESS FULL | EMP | 14 | 350 | 4 (0)| 00:00:01 |
------------------------------------------------------------------------------------------------
The numbers from 11.2.0.2 show more sophistication than just the cost of the table scan. The optimizer can't know how many levels deep the data is, but version 10.2.0.4 apparently assumed just one level and left the total cost unchanged at 3. I'm curious in which version between 10.2.0.4 and 11.2.0.2 this cost calculation changed. If anyone reading this has a version in between and would like to check, please let me know in the comments. My guess would be that 11.2.0.1 introduced the cost change.
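As an aside, the logical result that both plan shapes must produce can be sketched in a few lines of Python. This is a hypothetical depth-first model of START WITH / CONNECT BY semantics, written for this post, not Oracle's actual implementation:

```python
def hierarchy(rows, is_root, is_child_of, level=1):
    """Depth-first CONNECT BY semantics: emit a qualifying row, then
    immediately recurse into its children before moving to siblings."""
    for row in rows:
        if is_root(row):
            yield row, level
            yield from hierarchy(rows, lambda r: is_child_of(row, r),
                                 is_child_of, level + 1)

emp = [  # (empno, mgr, ename) -- a small subset of the classic EMP table
    (7839, None, "KING"), (7566, 7839, "JONES"), (7788, 7566, "SCOTT"),
    (7876, 7788, "ADAMS"), (7698, 7839, "BLAKE"), (7782, 7839, "CLARK"),
]
for (empno, mgr, ename), lvl in hierarchy(
        emp,
        is_root=lambda r: r[1] is None,          # start with mgr is null
        is_child_of=lambda p, r: r[1] == p[0]):  # connect by mgr = prior empno
    print("  " * (lvl - 1) + ename)
```

This prints the same indented KING, JONES, SCOTT, ADAMS, ... order as the queries above; whichever plan the optimizer picks, this is the result it has to produce.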
What does CONNECT BY NO FILTERING WITH START-WITH do?
Let's explore this, using this table:
SQL> create table t (id, parent_id, value, indicator)
2 as
3 select level - 1
4 , case level when 1 then null else trunc((level-1)/10) end
5 , round(dbms_random.value * 1000)
6 , case mod(level,10) when 4 then 'N' else 'Y' end
7 from dual
8 connect by level <= 100000
9 /
Table created.
SQL> alter table t
2 add constraint cbt_pk
3 primary key (id)
4 /
Table altered.
SQL> create index i1 on t (parent_id,indicator)
2 /
Index created.
SQL> exec dbms_stats.gather_table_stats(user,'t',cascade=>true)
The data is tree shaped: the root node has nine child nodes, and every other parent node has exactly ten. One tenth of the rows, those whose id ends with the digit 3, have their indicator column set to 'N'. This query shows what the data looks like:
SQL> select *
2 from t
3 where id < 24 or id > 99997
4 order by id
5 /
ID PARENT_ID VALUE I
---------- ---------- ---------- -
0 656 Y
1 0 289 Y
2 0 365 Y
3 0 644 N
4 0 364 Y
5 0 841 Y
6 0 275 Y
7 0 529 Y
8 0 500 Y
9 0 422 Y
10 1 598 Y
11 1 104 Y
12 1 467 Y
13 1 296 N
14 1 105 Y
15 1 220 Y
16 1 692 Y
17 1 793 Y
18 1 29 Y
19 1 304 Y
20 2 467 Y
21 2 716 Y
22 2 837 Y
23 2 432 N
99998 9999 609 Y
99999 9999 24 Y
26 rows selected.
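As a cross-check, the same tree can be rendered in plain Python. This is a hypothetical sketch mirroring the CREATE TABLE ... CONNECT BY LEVEL statement above, not anything Oracle runs:

```python
import random

# Rebuild table T's contents: id 0..99999, parent_id = trunc(id/10)
# (NULL for the root), a random value, and indicator 'N' for ids ending in 3.
rows = []
for id_ in range(100000):
    parent_id = None if id_ == 0 else id_ // 10    # trunc((level-1)/10)
    value = round(random.random() * 1000)          # dbms_random.value * 1000
    indicator = 'N' if id_ % 10 == 3 else 'Y'      # mod(level,10) = 4
    rows.append((id_, parent_id, value, indicator))

# Node 0 has children 1..9; node p >= 1 has children 10p .. 10p+9.
print(len([r for r in rows if r[1] == 0]))   # 9
print(len([r for r in rows if r[1] == 1]))   # 10
```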
When hearing the word "filter", I almost immediately associate it with a WHERE clause. But a WHERE clause in a CONNECT BY query is not what is meant by connect by filtering. The documentation states:
Oracle processes hierarchical queries as follows:
1. A join, if present, is evaluated first, whether the join is specified in the FROM clause or with WHERE clause predicates.
2. The CONNECT BY condition is evaluated.
3. Any remaining WHERE clause predicates are evaluated.
So a where clause predicate is evaluated AFTER the connect by has done its job. You can see that happening here:
SQL> explain plan
2 for
3 select id
4 , parent_id
5 , sys_connect_by_path(id,'->') scbp
6 from t
7 where indicator = 'N'
8 connect by parent_id = prior id
9 start with parent_id is null
10 /
Explained.
SQL> select * from table(dbms_xplan.display)
2 /
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
Plan hash value: 2502271019
---------------------------------------------------------------------------------------
| Id | Operation | Name | Rows | Bytes | Cost (%CPU)| Time |
---------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 11 | 319 | 164 (3)| 00:00:02 |
|* 1 | FILTER | | | | | |
|* 2 | CONNECT BY WITH FILTERING | | | | | |
|* 3 | TABLE ACCESS FULL | T | 1 | 11 | 80 (2)| 00:00:01 |
| 4 | NESTED LOOPS | | 10 | 240 | 82 (2)| 00:00:01 |
| 5 | CONNECT BY PUMP | | | | | |
| 6 | TABLE ACCESS BY INDEX ROWID| T | 10 | 110 | 2 (0)| 00:00:01 |
|* 7 | INDEX RANGE SCAN | I1 | 10 | | 1 (0)| 00:00:01 |
---------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - filter("INDICATOR"='N')
2 - access("PARENT_ID"=PRIOR "ID")
3 - filter("PARENT_ID" IS NULL)
7 - access("PARENT_ID"="connect$_by$_pump$_002"."prior id ")
22 rows selected.
The "indicator = 'N'" predicate is applied at step 1, which is executed after the CONNECT BY WITH FILTERING at step 2. Note that although this query was executed on 11.2.0.2, the optimizer chose the old CONNECT BY WITH FILTERING.
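To make the documented evaluation order concrete, here is a small hypothetical Python model (my own sketch, not Oracle code) in which the WHERE predicate is applied only after the full hierarchy has been built:

```python
def hierarchical_query(rows, is_root, is_child_of, where):
    """Build the whole hierarchy first (CONNECT BY), then apply the
    remaining WHERE predicate afterwards, as the documentation describes."""
    def walk(parent_pred, level):
        for row in rows:
            if parent_pred(row):
                yield row, level
                yield from walk(lambda r: is_child_of(row, r), level + 1)
    return [(row, lvl) for row, lvl in walk(is_root, 1) if where(row)]

rows = [  # (id, parent_id, indicator)
    (1, None, 'N'),
    (2, 1, 'Y'),
]
result = hierarchical_query(
    rows,
    is_root=lambda r: r[1] is None,
    is_child_of=lambda p, r: r[1] == p[0],
    where=lambda r: r[2] == 'Y',
)
# The root fails the WHERE predicate, yet its child is still returned:
# a WHERE clause filters rows from the result but does not prune the tree.
print([row[0] for row, lvl in result])   # [2]
```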
Connect by filtering is done by using filters in your CONNECT BY clause. Here is an example using the predicate "indicator = 'N'" inside the CONNECT BY clause:
SQL> select id
2 , parent_id
3 , sys_connect_by_path(id,'->') scbp
4 from t
5 connect by parent_id = prior id
6 and indicator = 'N'
7 start with parent_id is null
8 /
ID PARENT_ID SCBP
---------- ---------- --------------------------------------------------
0 ->0
3 0 ->0->3
33 3 ->0->3->33
333 33 ->0->3->33->333
3333 333 ->0->3->33->333->3333
33333 3333 ->0->3->33->333->3333->33333
6 rows selected.
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
2 /
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID dzkjzrrzgnvd5, child number 0
-------------------------------------
select id , parent_id , sys_connect_by_path(id,'->') scbp
from t connect by parent_id = prior id and indicator = 'N'
start with parent_id is null
Plan hash value: 3164577763
---------------------------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers | OMem | 1Mem | Used-Mem |
---------------------------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 |00:00:00.01 | 294 | | | |
|* 1 | CONNECT BY WITH FILTERING | | 1 | | 6 |00:00:00.01 | 294 | 2048 | 2048 | 2048 (0)|
|* 2 | TABLE ACCESS FULL | T | 1 | 1 | 1 |00:00:00.01 | 277 | | | |
| 3 | NESTED LOOPS | | 6 | 5 | 5 |00:00:00.01 | 17 | | | |
| 4 | CONNECT BY PUMP | | 6 | | 6 |00:00:00.01 | 0 | | | |
| 5 | TABLE ACCESS BY INDEX ROWID| T | 6 | 5 | 5 |00:00:00.01 | 17 | | | |
|* 6 | INDEX RANGE SCAN | I1 | 6 | 5 | 5 |00:00:00.01 | 12 | | | |
---------------------------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("PARENT_ID"=PRIOR NULL)
2 - filter("PARENT_ID" IS NULL)
6 - access("PARENT_ID"="connect$_by$_pump$_002"."prior id " AND "INDICATOR"='N')
27 rows selected.
In the A-Rows column, you can see that the connect by filtering was effective here: only the necessary rows were read. And this is the key difference between the two connect by algorithms: CONNECT BY WITH FILTERING can filter within each recursion step, whereas CONNECT BY NO FILTERING WITH START-WITH has to read everything, perform the hierarchical operation in memory, and only then return the result. With this example, the latter is much less efficient:
SQL> select /*+ no_connect_by_filtering */ id
2 , parent_id
3 , sys_connect_by_path(id,'->') scbp
4 from t
5 connect by parent_id = prior id
6 and indicator = 'N'
7 start with parent_id is null
8 /
ID PARENT_ID SCBP
---------- ---------- --------------------------------------------------
0 ->0
3 0 ->0->3
33 3 ->0->3->33
333 33 ->0->3->33->333
3333 333 ->0->3->33->333->3333
33333 3333 ->0->3->33->333->3333->33333
6 rows selected.
SQL> select * from table(dbms_xplan.display_cursor(null,null,'allstats last'))
2 /
PLAN_TABLE_OUTPUT
---------------------------------------------------------------------------------------------------------------------------------------
SQL_ID 3fcr31tp83by9, child number 0
-------------------------------------
select /*+ no_connect_by_filtering */ id , parent_id ,
sys_connect_by_path(id,'->') scbp from t connect by parent_id =
prior id and indicator = 'N' start with parent_id is null
Plan hash value: 2303479083
----------------------------------------------------------------------------------------------------------
| Id | Operation | Name | Starts | E-Rows | A-Rows | A-Time | Buffers |
----------------------------------------------------------------------------------------------------------
| 0 | SELECT STATEMENT | | 1 | | 6 |00:00:00.14 | 277 |
|* 1 | CONNECT BY NO FILTERING WITH START-WITH| | 1 | | 6 |00:00:00.14 | 277 |
| 2 | TABLE ACCESS FULL | T | 1 | 100K| 100K|00:00:00.01 | 277 |
----------------------------------------------------------------------------------------------------------
Predicate Information (identified by operation id):
---------------------------------------------------
1 - access("PARENT_ID"=PRIOR NULL)
filter("PARENT_ID" IS NULL)
22 rows selected.
100K rows were read, and the A-Time was 0.14 seconds instead of 0.01 seconds. I wondered where those 0.14 seconds went, since the plan shows they were NOT spent on the full table scan. Using Tom Kyte's runstats_pkg reveals this:
SQL> declare
2 cursor c1
3 is
4 select /*+ connect_by_filtering */ id
5 , parent_id
6 , sys_connect_by_path(id,'->') scbp
7 from t
8 connect by parent_id = prior id
9 and indicator = 'N'
10 start with parent_id is null
11 ;
12 cursor c2
13 is
14 select /*+ no_connect_by_filtering */ id
15 , parent_id
16 , sys_connect_by_path(id,'->') scbp
17 from t
18 connect by parent_id = prior id
19 and indicator = 'N'
20 start with parent_id is null
21 ;
22 begin
23 runstats_pkg.rs_start;
24 for r in c1 loop null; end loop;
25 runstats_pkg.rs_middle;
26 for r in c2 loop null; end loop;
27 runstats_pkg.rs_stop;
28 end;
29 /
Run1 ran in 0 hsecs
Run2 ran in 10 hsecs
run 1 ran in 0% of the time
Name Run1 Run2 Diff
STAT...HSC Heap Segment Block 16 15 -1
STAT...db block changes 48 47 -1
STAT...consistent gets - exami 9 8 -1
STAT...db block gets from cach 32 33 1
STAT...db block gets 32 33 1
STAT...redo subscn max counts 0 1 1
STAT...redo ordering marks 0 1 1
STAT...redo entries 16 15 -1
STAT...calls to kcmgas 0 1 1
STAT...calls to kcmgcs 29 28 -1
STAT...free buffer requested 0 1 1
STAT...Heap Segment Array Inse 16 15 -1
STAT...consistent changes 32 31 -1
STAT...heap block compress 9 8 -1
STAT...parse time cpu 1 0 -1
STAT...buffer is pinned count 1 0 -1
STAT...session cursor cache co 1 0 -1
STAT...sql area evicted 1 0 -1
LATCH.undo global data 11 10 -1
LATCH.SQL memory manager worka 3 5 2
LATCH.messages 0 2 2
LATCH.OS process allocation 0 2 2
LATCH.simulator hash latch 20 23 3
LATCH.object queue header oper 4 1 -3
STAT...workarea executions - o 10 6 -4
STAT...table fetch by rowid 15 10 -5
STAT...index scans kdiixs1 6 0 -6
LATCH.row cache objects 280 274 -6
STAT...sorts (memory) 8 2 -6
STAT...CPU used by this sessio 2 11 9
STAT...Elapsed Time 1 11 10
STAT...recursive cpu usage 2 12 10
STAT...no work - consistent re 300 284 -16
STAT...buffer is not pinned co 36 20 -16
STAT...session logical reads 354 337 -17
STAT...consistent gets from ca 313 296 -17
STAT...consistent gets from ca 322 304 -18
LATCH.shared pool 186 168 -18
STAT...consistent gets 322 304 -18
LATCH.shared pool simulator 23 4 -19
LATCH.cache buffers chains 785 740 -45
STAT...undo change vector size 3,500 3,420 -80
STAT...redo size 4,652 4,560 -92
STAT...session uga memory 0 -65,488 -65,488
STAT...session pga memory 0 -65,536 -65,536
STAT...sorts (rows) 12 100,001 99,989
Run1 latches total versus runs -- difference and pct
Run1 Run2 Diff Pct
1,467 1,384 -83 106.00%
PL/SQL procedure successfully completed
The major difference is the number of rows sorted! CONNECT BY NO FILTERING WITH START-WITH sorts all 100K rows. This is a surprise, because normally when you sort, you use memory from the PGA workarea, which shows up in the memory statistics (OMem, 1Mem, Used-Mem) of your execution plan. But the no-filtering plan did not show those statistics. I have no explanation for this phenomenon yet.
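The behavioural difference between the two algorithms can be modelled in Python. This is a rough, hypothetical model of the two strategies on table T's tree (the parent of id is trunc(id/10) and ids ending in 3 carry indicator 'N'), not Oracle's internal code, and it counts rows visited rather than rows sorted:

```python
N = 100000

def indicator(i):                      # ids ending in 3 are marked 'N'
    return 'N' if i % 10 == 3 else 'Y'

def children(p):                       # node 0 has children 1..9,
    lo = 1 if p == 0 else 10 * p       # node p >= 1 has 10p .. 10p+9
    return [c for c in range(lo, 10 * p + 10) if c < N]

def with_filtering():
    """CONNECT BY WITH FILTERING: apply the predicate at every level,
    so only qualifying rows are ever visited."""
    touched, result, level = 0, [], [0]      # start with parent_id is null
    while level:
        result += level
        nxt = []
        for p in level:
            for c in children(p):
                touched += 1                 # one row visited via the index
                if indicator(c) == 'N':      # predicate inside CONNECT BY
                    nxt.append(c)
        level = nxt
    return result, touched

def no_filtering():
    """CONNECT BY NO FILTERING WITH START-WITH: read the whole table,
    then build the hierarchy from the materialized row set in memory."""
    touched = N                              # full table scan: 100K rows
    result, level = [], [0]
    while level:
        result += level
        level = [c for p in level for c in children(p)
                 if indicator(c) == 'N']
    return result, touched

rows_f, touched_f = with_filtering()
rows_n, touched_n = no_filtering()
print(rows_f == rows_n)            # True: both return the same 6 rows
print(touched_f, touched_n)        # 49 versus 100000 rows visited
```

Both strategies produce the same six-row answer, but the filtering strategy only ever looks at the children of qualifying nodes, while the no-filtering strategy pays for the whole table up front.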
Let's zoom in on the sorting:
SQL> select sn.name
2 , ms.value
3 from v$mystat ms
4 , v$statname sn
5 where ms.statistic# = sn.statistic#
6 and sn.name like '%sort%'
7 /
NAME VALUE
----------------------- ----------
sorts (memory) 2278
sorts (disk) 0
sorts (rows) 9425510
3 rows selected.
SQL> select id
2 , parent_id
3 , sys_connect_by_path(id,'->') scbp
4 from t
5 connect by parent_id = prior id
6 and indicator = 'N'
7 start with parent_id is null
8 /
ID PARENT_ID SCBP
---------- ---------- --------------------------------------------------
0 ->0
3 0 ->0->3
33 3 ->0->3->33
333 33 ->0->3->33->333
3333 333 ->0->3->33->333->3333
33333 3333 ->0->3->33->333->3333->33333
6 rows selected.
SQL> select sn.name
2 , ms.value
3 from v$mystat ms
4 , v$statname sn
5 where ms.statistic# = sn.statistic#
6 and sn.name like '%sort%'
7 /
NAME VALUE
----------------------- ----------
sorts (memory) 2286
sorts (disk) 0
sorts (rows) 9425522
3 rows selected.
SQL> select /*+ no_connect_by_filtering */ id
2 , parent_id
3 , sys_connect_by_path(id,'->') scbp
4 from t
5 connect by parent_id = prior id
6 and indicator = 'N'
7 start with parent_id is null
8 /
ID PARENT_ID SCBP
---------- ---------- --------------------------------------------------
0 ->0
3 0 ->0->3
33 3 ->0->3->33
333 33 ->0->3->33->333
3333 333 ->0->3->33->333->3333
33333 3333 ->0->3->33->333->3333->33333
6 rows selected.
SQL> select sn.name
2 , ms.value
3 from v$mystat ms
4 , v$statname sn
5 where ms.statistic# = sn.statistic#
6 and sn.name like '%sort%'
7 /
NAME VALUE
----------------------- ----------
sorts (memory) 2288
sorts (disk) 0
sorts (rows) 9525523
3 rows selected.
So CONNECT BY WITH FILTERING did 8 sorts (2286 - 2278) and sorted 12 rows (9425522 - 9425510), whereas CONNECT BY NO FILTERING WITH START-WITH did 2 sorts (2288 - 2286) and sorted 100,001 rows (9525523 - 9425522).
And finally, I promised to explain why the first two queries of this blogpost are identical but show a different execution plan. The reason is simple: the first one was executed on 10.2.0.4 and the second one on 11.2.0.2.