In this case, ideally SQL Server should have blocked only the inserted
rows. The select statement could then have returned all the other rows,
and a select with a filter condition on a non-indexed field that matches
only the existing records should also have worked.
MS Access does not block a select statement while an insert transaction
is running. The records retrieved from Access also do not contain any
uncommitted rows.
Still, I am very surprised that SQL Server cannot perform this operation
when MS Access handles it without any trouble.
Also, one option that must not be used in our application in this
scenario is the "read uncommitted" isolation level in SQL Server, which
returns the uncommitted rows in the select.
Suggestions, please!
"Uri Dimant" wrote:
> Baskar
> http://www.sql-server-performance.com/blocking.asp
> http://www.sql-server-performance.c...ucing_locks.asp
>
>
> "Baskar" <Baskar@.discussions.microsoft.com> wrote in message
> news:880BF5FF-E31F-4BA2-95B1-CDDE6C8616C8@.microsoft.com...
>
>On Thu, 11 Aug 2005 07:22:01 -0700, Baskar wrote:
(snip)
>MS Access does not block a select statement while an insert transaction
>is running. The records retrieved from Access also do not contain any
>uncommitted rows.
Hi Baskar,
There are two possible explanations for this.
#1. Access uses what's called "snapshot isolation" - this means that
changing data doesn't block others from retrieving it, but shows the
others the "before change" image of the data. You don't see data that is
inserted in an uncommitted transaction; you still see rows that have
been deleted in an uncommitted transaction; and for rows that are
updated in an uncommitted transaction, you see the values as they were
before the update.
Snapshot isolation is supported in Oracle. I've read that it will also
be supported by SQL Server 2005.
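From what I've read, enabling it there would look something like this
(an untested sketch based on the 2005 documentation; the database name
is made up, and none of this syntax exists in SQL Server 2000):

-- Enable snapshot isolation for the database (SQL Server 2005 and up).
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON
GO
-- In the reading connection:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT
BEGIN TRAN
SELECT * FROM Test  -- sees the last committed version; does not block
COMMIT TRAN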
Frankly, I don't think that Access supports snapshot isolation. I prefer
my explanation #2:
#2: Access is buggy. If a row is inserted in an uncommitted transaction,
it won't be shown - but if a row is deleted in the same uncommitted
transaction, then that row won't show either.
Why is this behaviour buggy? Well, consider this simple example: one Mr.
B. Gates enters the bank and says he wants to change his account number.
Since Account# is the primary key, the row with the old account# has to
be deleted, and a new row inserted. Before this transaction is
committed, the accountant runs a query to check the totals. And all hell
breaks loose, because suddenly a few billion dollars have gone missing!
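To make that concrete, here is a rough T-SQL sketch of the scenario (the
Accounts table, account numbers and balance are made up for
illustration):

-- Connection 1: change the account number (primary key), uncommitted.
BEGIN TRAN
DELETE FROM Accounts WHERE AccountNo = 1001
INSERT INTO Accounts (AccountNo, Balance) VALUES (2002, 50000000000)
-- ... transaction still open, no COMMIT yet ...

-- Connection 2: the accountant checks the totals.
-- Under the buggy behaviour described above, the deleted row is hidden
-- AND the uncommitted insert is hidden, so the sum comes up billions
-- short.
SELECT SUM(Balance) FROM Accounts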
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||hi,
The concurrent select during an insert sometimes does not work even when
the select query filters on an indexed field. This happens when there
are more records (say 3000) in the table; it works when there are fewer
records (<250).
Also, SQL Server restricts the select / update operations during an
insert even when the insert and the select / update are working on
different data. Is this the expected behavior in SQL Server?
Is there any workaround or alternative solution for this problem?
Regards
Baskar
"Hugo Kornelis" wrote:
> (snip)
Our application needs to be able to concurrently insert into and select
from the same table, irrespective of the data.
But the following issue has been found in SQL Server 2000.
Initially, the insert statements (around 1000 rows) for a table are
executed in a transaction inside a stored procedure, like:
"Insert into Test (pkey,field1,field2,strfield) values
(1400,1144,12025,'test sp insert 2')"
and simultaneously a select statement to retrieve all records from the
same table is executed in Query Analyzer. The select statement then
waits until the insert operation completes.
"select * from Test"
But if a select statement that filters the data based on the indexed
fields is used, then it executes without delay.
select * from Test where pkey=2000 and field1=1111
(The index is on the pkey and field1 combination.) However, if the
filter condition matches the data that is being inserted, then the
select is blocked as well.
Is it possible in SQL Server 2000 to avoid blocking the select statement
during the insert?
Note that the insert statement is executed under the default isolation
level of SQL Server 2000.
In MS Access, concurrent inserts and selects work without any issue.
Table and Index references:
CREATE TABLE [dbo].[Test] (
[pkey] [bigint] NOT NULL ,
[field1] [bigint] NOT NULL ,
[field2] [int] NULL ,
[strfield] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
CREATE INDEX [IX_Test] ON [dbo].[Test]([field1], [pkey]) ON [PRIMARY]
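For reference, the behaviour described above can be reproduced against
the Test table roughly like this (a sketch; run each part in a separate
connection):

-- Connection 1: insert inside an open transaction.
BEGIN TRAN
INSERT INTO Test (pkey, field1, field2, strfield)
VALUES (1400, 1144, 12025, 'test sp insert 2')
-- do not COMMIT or ROLLBACK yet

-- Connection 2: blocks until connection 1 commits or rolls back.
SELECT * FROM Test

-- Connection 2: returns without waiting (per the behaviour described
-- above), as long as the filter does not match the rows being inserted.
SELECT * FROM Test WHERE pkey = 2000 AND field1 = 1111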
>|||On Tue, 16 Aug 2005 08:49:04 -0700, Baskar wrote:
>The concurrent select during an insert sometimes does not work even when
>the select query filters on an indexed field. This happens when there
>are more records (say 3000) in the table; it works when there are fewer
>records (<250).
Hi Baskar,
Even after rereading the thread, I don't get the exact picture of what
you are doing. It would help tremendously if you could post some SQL
statements that I can run to reproduce this behaviour on my test
database. I'd need CREATE TABLE statements to set up the table, INSERT
statements for some sample starting data, then the INSERT and SELECT
statements to run concurrently.
It's much easier to comment on what I can see!
>Also, SQL Server restricts the select / update operations during an
>insert even when the insert and the select / update are working on
>different data. Is this the expected behavior in SQL Server?
That depends on a lot of factors. Again, I'd have to see the code in
order to comment.
>Is there any workaround or alternative solution for this problem?
Default locking behaviour is to block conflicting requests: if a
connection attempts to do something with locked data, it'll have to wait
until the lock is released. You can override this default behaviour with
one of the following locking hints:
* WITH (NOLOCK) -- specifies that no shared locks are taken, and that
existing locks are disregarded. The consequence of this is known as
"dirty reads" - you can read data that has been changed but not yet
committed. The change might be rolled back later; in that case, your
query has returned data that never really existed in the database.
Setting the transaction isolation level to READ UNCOMMITTED is
equivalent to specifying WITH (NOLOCK) on all queries.
* WITH (READPAST) -- specifies that locked data should be skipped; if
some rows in the table are locked, your query won't wait, but will
return a result set without any data for those rows (as if they don't
exist at all). Applies only to row-level locks, and won't work if your
transaction isolation level is anything other than READ COMMITTED (the
default).
Note that these locking hints will only affect read operations. For any
data modification, the locking can't be bypassed.
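For example, against the Test table posted earlier in this thread, the
two hints would be used like this (a quick sketch):

-- Dirty read: returns uncommitted rows instead of waiting.
SELECT * FROM Test WITH (NOLOCK)

-- Equivalent at the session level:
SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED
SELECT * FROM Test

-- Skip locked rows: returns only the rows that are not locked.
SELECT * FROM Test WITH (READPAST)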
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||hi,
The table details are available in the first message of this thread.
Basically, our application needs to allow users to download data (insert
statements) while users concurrently work with the reports (select
statements).
Also, the WITH (READPAST) option will work for fetching records during
an insert, but it will not fetch records that are locked because of an
update. This behavior is not acceptable for our application.
According to the SQL Server documentation, only data that is being
updated will be locked, and an insert will lock only the specific data
being inserted, not other data. But the select statement is blocked even
when it tries to retrieve data which is not being inserted.
Is there any standard solution or alternative for this problem?
Regards
Baskar
"Hugo Kornelis" wrote:
> On Tue, 16 Aug 2005 08:49:04 -0700, Baskar wrote:
> (snip)
>|||On Wed, 17 Aug 2005 00:26:07 -0700, Baskar wrote:
>The table details are available in the first message of this thread.
Hi Baskar,
I thought that was just an example made up on the fly, since there is no
primary key or unique constraint, no unique index and no clustered
index.
Also, these details do not explain the behaviour you describe.
I'd really have to be able to run a repro script on my database that
mimics this behaviour - i.e. do a BEGIN TRAN + INSERT in one connection
but don't commit or rollback yet, then do a SELECT in another connection
and check the results.
If you can post INSERT and SELECT statements that reproduce this
behaviour, I'll see if I can find the cause.
>Basically, our application needs to allow users to download data (insert
>statements) while users concurrently work with the reports (select
>statements).
>Also, the WITH (READPAST) option will work for fetching records during
>an insert, but it will not fetch records that are locked because of an
>update. This behavior is not acceptable for our application.
That will be hard to do. Basically, you are asking SQL Server to
distinguish between locks for newly inserted data and locks for updated
data. This distinction is not available in SQL Server's architecture.
(And for good reason, I think - what if I insert a row and then go on to
update that same row, all in the same transaction? how to handle updates
to the primary key column? etc etc)
>According to the SQL Server documentation, only data that is being
>updated will be locked, and an insert will lock only the specific data
>being inserted, not other data. But the select statement is blocked even
>when it tries to retrieve data which is not being inserted.
That's because the data is already in the database, it's just not yet
confirmed (read: the transaction is not yet committed). SQL Server won't
just pretend it's not there (unless you use READPAST - but that would
affect updated rows as well); neither will SQL Server say that it IS
there if there is still the possibility of a rollback (unless you use
NOLOCK - but that too would affect updated rows as well).
>Is there any standard solution or alternative for this problem?
You might consider using a two-table approach: insert the new rows into
a staging table, run the reports off the complete table, and create an
automated task that moves all new rows from the staging table to the
complete table every once in a while.
If it is crucial that your reports have up-to-the-second accuracy, this
won't be a good approach - but in that case, I guess you wouldn't want
the reports to bypass rows currently inserted either.
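A rough sketch of that approach, using the Test table from earlier in
the thread and a hypothetical staging table named Test_Staging (same
structure as Test):

-- Downloads insert into Test_Staging; reports query only Test.
-- A scheduled job moves the new rows across in one transaction:
BEGIN TRAN
INSERT INTO Test (pkey, field1, field2, strfield)
SELECT pkey, field1, field2, strfield
FROM Test_Staging WITH (TABLOCKX)  -- keep new downloads out mid-move
DELETE FROM Test_Staging
COMMIT TRAN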
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)|||hi,
The following are the actual table and stored procedure details:
if exists (select * from dbo.sysobjects where id =
object_id(N'[dbo].[Dummy]') and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[Dummy]
GO
CREATE TABLE [dbo].[Dummy] (
[pkey_id] [bigint] IDENTITY (1, 1) NOT NULL ,
[index_id] [int] NULL ,
[intField1] [int] NULL ,
[strField1] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS
NOT NULL ,
[intField2] [int] NULL ,
[intField3] [int] NULL ,
[intField4] [int] NOT NULL ,
[datetimeField] [datetime] NULL ,
[bitfield] [bit] NULL ,
[strField2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS
NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Dummy] WITH NOCHECK ADD
CONSTRAINT [PK_Dummy] PRIMARY KEY CLUSTERED
(
[pkey_id]
) ON [PRIMARY]
GO
CREATE INDEX [IX_Dummy] ON [dbo].[Dummy]([index_id]) ON [PRIMARY]
GO
Test stored procedure for insert:
CREATE procedure prcMultiInserttoDummyTable (
    @indexid bigint = 777,
    @TotalRows bigint = 2000
)
as
Declare @CurRow bigint
set @CurRow = 0
begin tran
While (@CurRow < @TotalRows)
Begin
    Insert into Dummy (index_id, intField1, strField1, intField2,
        intField3, intField4, datetimeField, bitfield, strField2)
    values (@indexid, 555, 'test', 10, 0, 168, '02/13/1998', 0,
        'test stored proc insert')
    set @CurRow = @CurRow + 1
end
-- wait to test the concurrent insertion and selection
waitfor delay '00:00:20'
commit tran
Test select statements:
select * from Dummy where index_id=123 -- filtered based on index
select * from Dummy
select * from Dummy where intField1=245 -- filtered based on non-index
"Hugo Kornelis" wrote:
> (snip)
>|||On Fri, 19 Aug 2005 06:34:07 -0700, Baskar wrote:
>hi,
>The following are the real time table and stored proc details:
(snip)
Hi Baskar,
My apologies for the delay in replying.
Thanks for the excellent repro script - I was able to reproduce the
problem you're having on my computer.
I was also able to find the reason. In your table, the column index_id
has an extremely low selectivity (in case you're unfamiliar with the
term, that means that the same values are repeated over and over again).
This makes an index on this column useless - if you run the queries
while no inserts are running and check the execution plan, you'll see
that a table scan is used in all three queries.
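One way to verify this in Query Analyzer (besides the graphical plan) is
to look at the textual plan, e.g.:

SET SHOWPLAN_TEXT ON
GO
select * from Dummy where index_id=123
GO
SET SHOWPLAN_TEXT OFF
GO
-- The plan shows a Table Scan on Dummy rather than an Index Seek on
-- IX_Dummy.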
The reason that an index on this column is not used is a matter of cost
estimate. If the table has for instance 120,000 total rows, and 10,000
rows have the correct value in index_id, then the choice for the
optimizer is to:
a) scan the index to find the 10,000 matches very quickly, then do
10,000 individual bookmark lookups to retrieve the complete information
for a total of 10,000 + some page reads - or
b) scan the full table; with probably 50 - 150 rows per page, this will
cost only 1200 page reads.
There are several ways that you can modify your script to see that SQL
Server won't wait when the index really is used:
1. Make sure that the index_id column is more selective. I modified the
stored procedure to set index_id to "@indexid + @CurRow", so that all
values in index_id would be different. The optimizer will now choose to
use the index for the first of the SELECT statements, and the data will
be returned while the insert proc is running (unless the proc also
inserts a new matching row, of course).
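The changed insert inside the loop would look something like this (a
reconstruction of that modification):

    Insert into Dummy (index_id, intField1, strField1, intField2,
        intField3, intField4, datetimeField, bitfield, strField2)
    values (@indexid + @CurRow, 555, 'test', 10, 0, 168, '02/13/1998', 0,
        'test stored proc insert')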
2. Change the query so that the index on index_id is covering:
SELECT pkey_id, index_id
FROM Dummy
WHERE index_id = 777
Now SQL Server won't have to do a bookmark lookup since all columns
needed are included in the index pages. You'll see all matching rows
returned while another connection is busy inserting rows with a
different value for index_id.
3. Use an index hint to force the optimizer to use the index. Of course,
this will increase the cost of the query.
SELECT *
FROM Dummy (INDEX (IX_Dummy))
WHERE index_id=777
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)