Read by other session and buffer busy waits occur when another session is reading the block into the buffer cache, or when another session holds the buffer in a mode that is incompatible with our request. Before Oracle 10g both situations were reported under the single wait event buffer busy waits.
These waits indicate read/read, read/write, or write/write contention. The Oracle session is waiting to pin a buffer. A buffer must be pinned before it can be read or modified, and only one process can pin a buffer at any one time.
The "read by other session" wait event indicates a wait for another session to read the data from disk into the buffer cache. The main cause of this wait event is contention for "hot" blocks or objects, i.e. several processes repeatedly reading the same blocks, e.g. many sessions scanning the same index or performing full table scans on the same table.
When a query requests data from the database, Oracle first reads the data from disk into the SGA. If two or more sessions request the same information,
the first session reads the data into the buffer cache while the other sessions wait.
+--------- Find the file#, block# and class# using the query below ---------+
SELECT p1 "file#", p2 "block#", p3 "class#"
FROM v$session_wait
WHERE event = 'read by other session';
+-------- Find the object name using the query below ----------+
SELECT relative_fno, owner, segment_name, segment_type
FROM dba_extents
WHERE file_id = &FILE
AND &BLOCK BETWEEN block_id
AND block_id + blocks - 1;
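If you prefer a single step, the two lookups above can be combined into one query. This is only a sketch; note that scanning DBA_EXTENTS this way can be slow on databases with very many extents:
SELECT w.sid, e.owner, e.segment_name, e.segment_type
  FROM v$session_wait w, dba_extents e
 WHERE w.event = 'read by other session'
   AND e.file_id = w.p1
   AND w.p2 BETWEEN e.block_id AND e.block_id + e.blocks - 1;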
+---------- You can also find the sql_id and SQL statement using the queries below ----------+
SELECT s.p1 file_id, s.p2 block_id, o.object_name obj,
       o.object_type otype, s.sql_id, w.class, s.event
  FROM v$session s,
       (SELECT ROWNUM class#, class FROM v$waitstat) w,
       all_objects o
 WHERE s.event = 'read by other session'
   AND w.class# (+) = s.p3
   AND o.object_id (+) = s.row_wait_obj#
 ORDER BY 1;
SELECT sql_text FROM v$sqltext WHERE sql_id = '&sql_id' ORDER BY piece;
Once you have identified the hot blocks, the segments they belong to, and the related queries, you can reduce the contention using the following solutions:
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+---Tune the query----+
The contended block can be a data block, a segment header, or an undo block.
Data Blocks:
- Eliminate HOT blocks from the application.
- Reduce the number of rows per block (by moving the table to a tablespace with a smaller block size, or by the techniques below).
- Try rebuilding the object with a higher PCTFREE so that you reduce the number of rows per block.
- Increase INITRANS and MAXTRANS and reduce PCTUSED; this makes the table less dense.
- Check for repeatedly scanned / unselective indexes.
- Check for 'right-hand' indexes (indexes that get inserted into at the same point by many processes).
INDEX: the insert index leaf block is probably hot.
Solutions: hash partition the index, or use a reverse key index (see the DDL sketch after this list).
TABLE: the insert block is hot.
Solutions: use free lists, or put the object in an ASSM tablespace.
Segment Header:
Use Automatic Segment Space Management (bitmap-managed segments), or increase/add FREELISTs and FREELIST GROUPs.
If "segment header" waits occur at the same time as CLASS="data block" waits on the same object and the object is of OTYPE="TABLE", this simply confirms that the table needs free lists or ASSM.
This may also point to extent allocation problems: look at the extent size of the tablespace and increase it so that there are fewer extent allocations and less contention on the file header block.
If "segment header" occurs at the same time as CLASS= "data block" on the same object and the object is of OTYPE= "TABLE" then this is just a confirmation that the TABLE needs to use free lists or ASSM.
Most likely extent allocation problems, look at extent size on tablespace and increase the extent size to there are few extent allocations and less contention on the File Header Block.
Undo Header: Increase the number of rollback segments.
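Purely as an illustration, a minimal DDL sketch of a few of the data-block options above, assuming a hypothetical table HOT_TAB with a hot index HOT_IDX on column ID and an existing ASSM tablespace USERS_ASSM (all names and values are placeholders, not taken from this write-up):
-- Make the table less dense: higher PCTFREE and INITRANS mean fewer rows
-- (and fewer concurrent sessions) per block
ALTER TABLE hot_tab PCTFREE 40 INITRANS 10;
ALTER TABLE hot_tab MOVE;            -- rebuilds the existing blocks with the new settings
-- MOVE leaves dependent indexes UNUSABLE; rebuild the hot one as a reverse key
-- index so right-hand-growth inserts no longer pile into the same leaf block
ALTER INDEX hot_idx REBUILD REVERSE;
-- Alternatively, replace it with a hash-partitioned index to spread concurrent
-- inserts (two indexes on the same column list are not allowed, hence the DROP)
DROP INDEX hot_idx;
CREATE INDEX hot_idx_hash ON hot_tab (id)
  GLOBAL PARTITION BY HASH (id) PARTITIONS 8;
-- Or relocate the segment to an ASSM tablespace so free space management no
-- longer depends on manually sized free lists
ALTER TABLE hot_tab MOVE TABLESPACE users_assm;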
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Block contention wait events are also recorded in the dedicated view V$WAITSTAT, and since 10g V$SESSION has the wait event data integrated into it along with the row-wait information, so the same join of V$SESSION, V$WAITSTAT and ALL_OBJECTS shown above can be used to find the SQL statements.
SELECT sql_fulltext FROM v$sql WHERE sql_id = '&sql_id';
SQL_FULLTEXT is a CLOB column, so it returns the complete query text (V$SQL.SQL_TEXT is truncated to the first 1000 characters).
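When running this from SQL*Plus, the CLOB output is truncated by default; a small sketch of the session settings commonly used to display it in full (the limits shown are arbitrary):
SET LONG 100000
SET LONGCHUNKSIZE 100000
SET LINESIZE 200
SELECT sql_fulltext FROM v$sql WHERE sql_id = '&sql_id';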
SELECT ash.p1 file_id, ash.p2 block_id, o.object_name obj,
       o.object_type otype, ash.sql_id, w.class
  FROM v$active_session_history ash,
       (SELECT ROWNUM class#, class FROM v$waitstat) w,
       all_objects o
 WHERE ash.event = 'read by other session'
   AND w.class# (+) = ash.p3
   AND o.object_id (+) = ash.current_obj#
   AND ash.sample_time > SYSDATE - &MIN/(60*24)
 ORDER BY 1;
------------------------------------------------------------------------------------------------------------
Scenario:-
------------------------------------------------------------------------------------------------------------
Recently I was involved in resolving an ongoing, degrading performance issue for an application. It had reached a state where the users were frustrated and the business was not happy at all.
When it ended up in my court, the first thing I did was to identify the window when the user experience was really bad. I also had to liaise with the app vendors to understand the application a bit and find out what else was running on it.
It was finally understood that a small ETL job runs on the database regularly during business hours. It was a business requirement to fulfil a reporting need, so stopping that ETL process was out of the question. The application performance slowed down when the ETL kicked off and the response time recovered once the ETL finished; the ETL ran four times during business hours.
After analysing the Statspack report (the database was 9i), I found that, apart from I/O, the top wait event in the slowness window was 'buffer busy waits':
Top 5 Timed Events
~~~~~~~~~~~~~~~~~~                                                    % Total
Event                                             Waits    Time (s)  Ela Time
-------------------------------------------- ------------ ----------- --------
db file sequential read                         2,198,322      13,940     40.34
buffer busy waits                               1,273,203       9,852     28.51
CPU time                                                         4,241     12.27
enqueue                                             1,170       3,414      9.88
latch free                                         97,147       1,542      4.46
Buffer busy waits did not appear in the top 5 wait events when the performance issue was absent.
I tried to find whether there was a link between the times users were complaining about database performance and the buffer busy waits, and I did find one. Here is an output of my query from Statspack to check.....
Users were complaining about database performance exactly when the buffer busy waits were high. Also, the contention was on data blocks.
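Purely as an illustration, a sketch of the kind of Statspack query that correlates snapshots with this wait, assuming the standard PERFSTAT schema (the values are cumulative since instance startup, so it is the increase between consecutive snapshots that matters):
SELECT sn.snap_id,
       TO_CHAR(sn.snap_time, 'DD-MON-YYYY HH24:MI') snap_time,
       se.total_waits,
       ROUND(se.time_waited_micro / 1000000) time_waited_s
  FROM perfstat.stats$snapshot sn,
       perfstat.stats$system_event se
 WHERE se.snap_id = sn.snap_id
   AND se.dbid = sn.dbid
   AND se.instance_number = sn.instance_number
   AND se.event = 'buffer busy waits'
 ORDER BY sn.snap_id;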
After running some monitoring scripts to find out which object was affected, a massive table was identified. The database default block size was 32K, which means a lot of rows per block. To make it worse, the table was sitting in a locally managed tablespace with manual segment space management. Another interesting finding during the analysis was that the reason code for the buffer busy wait event was 300
(Block is being read by another session, and no other suitable block image was found, so we wait until the read is completed. This may also occur after a buffer cache assumed deadlock. The kernel can't get a buffer in a certain amount of time and assumes a deadlock. Therefore it will read the CR version of the block.)
It simply indicated we were having hot block contention.
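For reference, in 9i the reason code is exposed as P3 of the wait event, so a quick check while the waits are happening could look like this (in 10g and later, P3 is the block class instead):
SELECT sid, p1 file#, p2 block#, p3 reason_code
  FROM v$session_wait
 WHERE event = 'buffer busy waits';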
Hence, my recommendation to fix the issue was to create another 8K block size tablespace with automatic segment space management and move the problem table to it. Once the migration was done, the buffer busy waits disappeared from the top 5 wait events and users found the application response time normal even while the ETL was running. The business was happy as well.
The buffer busy wait statistics were also showing better results after the change.
------------------------------------------------------------------------------------------------------------
To reduce contention for table blocks due to DELETE, SELECT or UPDATE statements, reduce the number of rows per block. This can be done by using a smaller block size.
To reduce contention for table blocks due to INSERT statements, increase the number of free lists and free list groups, or use ASSM.
To reduce contention for index blocks, the best strategy is usually a reverse key index.
In most situations the goal is to spread queries over a greater number of blocks, to avoid concentrating on any single one.
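A hedged sketch of the kind of change applied in the scenario above, with hypothetical object names, file paths and sizes; since the database default block size there was 32K, a buffer cache for the non-default 8K block size has to be configured first:
-- Allocate a buffer cache for the non-default block size
ALTER SYSTEM SET db_8k_cache_size = 256M;
-- Create an 8K, locally managed tablespace with automatic segment space management
CREATE TABLESPACE data_8k
  DATAFILE '/u01/oradata/ORCL/data_8k01.dbf' SIZE 2G
  BLOCKSIZE 8K
  EXTENT MANAGEMENT LOCAL
  SEGMENT SPACE MANAGEMENT AUTO;
-- Move the hot table and rebuild its indexes (MOVE leaves them UNUSABLE)
ALTER TABLE app_owner.big_hot_table MOVE TABLESPACE data_8k;
ALTER INDEX app_owner.big_hot_table_pk REBUILD;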
+------- Segment Contention (Table /Index)----------+
SELECT object_name, obj#, statistic_name, value
  FROM v$segment_statistics
 WHERE statistic_name LIKE '%waits%' AND value > 0
 ORDER BY statistic_name, value DESC;
select object_name,value from v$segment_statistics where owner='JW' and object_type='INDEX' and statistic_name='buffer busy waits';
SQL> select * from v$waitstat where class='data block';
CLASS                   COUNT       TIME
------------------ ---------- ----------
data block            1676599  115212971
Ques: I think every one of us has, at least once, been confused between these two waits.
Judging by their names, it seems that both are the same. In earlier Oracle versions the READ BY OTHER SESSION situation was reported under BUFFER BUSY WAITS only, but in 10g and later versions Oracle introduced the separate wait event "READ BY OTHER SESSION".
Finally, I intentionally generated these two wait events simultaneously, and both showed up among the top waits in the AWR report. In that report, READ BY OTHER SESSION belongs to the User I/O wait class while BUFFER BUSY WAITS belongs to the Concurrency wait class. Can anyone guess what the difference between these two is?
Ans: Suppose you want a block in memory but that block is not readily available: you have to wait. Depending on the reason why you have to wait, Oracle classifies the wait as either buffer busy waits or read by other session.
If you are waiting because the block is currently being read into memory from disk by some other session, Oracle records your wait time under the read by other session wait event.
If you are waiting because some other session is currently accessing the same block for writing, Oracle records your session's wait time as buffer busy waits; a common scenario is concurrent inserts.
In other words, read by other session is just a special case of buffer busy waits that was split out into its own event.
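To confirm the wait-class split described above, the classification can be checked directly in V$EVENT_NAME (10g and later):
SELECT name, wait_class
  FROM v$event_name
 WHERE name IN ('read by other session', 'buffer busy waits');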