Sunday, February 19, 2012
Concurrent transactions in the READ COMMITTED isolation level
I am currently working with SQL Server transactions (not distributed) and their
behavior under the different isolation levels.
I am trying to reproduce Oracle's default behavior, and I have the following
problem:
Under the READ COMMITTED isolation level, Oracle by default never blocks a
query that reads records modified by another transaction (or connection)
that has not yet committed. Apart from the modifications made in the
transaction in progress, a SELECT always returns the last committed
"version" of the records. Thus, reads are never blocked.
SQL Server 2000 behaves differently at the same isolation level. A
SELECT is systematically blocked if its result set would contain a new or
modified record that is not yet committed. The READPAST keyword in a SELECT
only solves the problem for new records; modified records still block the
read!
Is there a solution?
Thank you in advance for your assistance.
|||The key in either situation is to hold locks for as short a time span as
possible. READ COMMITTED puts a lock on an object, does its thing with the
object, drops the lock on the object, and then moves on to do the same thing
to the next object (usually we are talking about a row in a table).
So, in a well-built system, the only contention comes when you have a lot of
writes to rows that people are also trying to look at. So, first question:
do you have people who need to see the data that others are actively
modifying? If so, then you do have a quandary. If not (and usually
not), then you may have indexing problems where you are scanning an entire
table instead of just some rows.
This is the place to start, but the crux of it all is simple: separate
readers from writers as much as possible, and hold locks for as short a time
period as possible. It will be the same in 2005 when we get versioning as
well. The longer the locks are held, the more likely we get inconsistency
and/or blocking.
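To see this blocking for yourself, here is a minimal two-connection sketch
(the table and values are hypothetical, not from the original post):
-- Connection 1: modify a row and leave the transaction open
BEGIN TRAN
UPDATE dbo.Test SET strfield = 'changed' WHERE pkey = 1400
-- no COMMIT yet: the exclusive row lock is still held

-- Connection 2: under READ COMMITTED (the default), this SELECT blocks
-- until connection 1 commits or rolls back, because it needs a shared
-- lock on the row that connection 1 holds exclusively
SELECT * FROM dbo.Test WHERE pkey = 1400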
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Christopher" <Dev@.effect.fr> wrote in message
news:utLZFZkWFHA.2420@.TK2MSFTNGP12.phx.gbl...
> (snip)
|||Thank you for this explanation.
One clarification about my context:
I don't know whether people will frequently need to see data that others
are actively modifying, but it is a possibility in any business application,
and it can be a problem in some situations.
In any case, I agree that it is better to hold locks for a short time
(short transactions).
Christopher.
"Louis Davidson" <dr_dontspamme_sql@.hotmail.com> a crit dans le message de
news: OIIrzzlWFHA.132@.TK2MSFTNGP14.phx.gbl...
> (snip)
|||> I don't know whether people will frequently need to see data that others
> are actively modifying, but it is a possibility in any business
> application.
It is a possibility, though I find in quite a few cases blocking comes from
poor planning for separation rather than a need for two people to work on
the same rows.
--
----
Louis Davidson - http://spaces.msn.com/members/drsql/
SQL Server MVP
"Christopher" <Dev@.effect.fr> wrote in message
news:ORK5XV4WFHA.2740@.TK2MSFTNGP14.phx.gbl...
> (snip)
Concurrent select is locked while another transaction executes Insert
Our application needs to be able to concurrently insert into and select from
the same table, irrespective of the data.
But the following issue has been found in SQL Server 2000.
Initially, the insert statements (around 1000 rows) for a table are executed
in a transaction inside a stored procedure,
like:
"Insert into Test (pkey,field1,field2,strfield) values
(1400,1144,12025,'test sp insert 2')"
Simultaneously, a select statement to retrieve all records from the same
table is executed in Query Analyzer. The select statement then waits
until the insert operation completes.
"select * from Test"
But if a select statement that filters the data based on the index fields
is used, it executes without delay:
select * from test where pkey=2000 and field1=1111
(Index: combination of pkey and field1.) However, if the filter condition
covers the data that is being inserted, then it is blocked as well.
Is it possible in SQL Server 2000 to avoid blocking the select statement
during the insert?
Note that the insert statement is executed under the default isolation level
of SQL Server 2000.
In MS Access, concurrent inserts and selects work without any issue.
Table and index references:
CREATE TABLE [dbo].[Test] (
[pkey] [bigint] NOT NULL ,
[field1] [bigint] NOT NULL ,
[field2] [int] NULL ,
[strfield] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL
) ON [PRIMARY]
GO
CREATE INDEX [IX_Test] ON [dbo].[Test]([field1], [pkey]) ON [PRIMARY]
|||Baskar
http://www.sql-server-performance.com/blocking.asp
http://www.sql-server-performance.c...ucing_locks.asp
"Baskar" <Baskar@.discussions.microsoft.com> wrote in message
news:880BF5FF-E31F-4BA2-95B1-CDDE6C8616C8@.microsoft.com...
> (snip)
|||The concurrent select is blocked while the other transaction executes its
inserts, irrespective of the data. The select statement could have returned
all the other rows, and a select whose filter condition on a non-index field
matches only existing records could also have worked.
MS Access does not block the select statement while an insert transaction is
running, and the records retrieved from Access do not include any
uncommitted rows.
I am very surprised that SQL Server cannot perform an operation that MS
Access handles without trouble.
Also, one option that must not be used in our application for this scenario
is READ UNCOMMITTED, which returns the uncommitted rows to the select.
Suggestions, please!
"Uri Dimant" wrote:
> (snip)
|||On Thu, 11 Aug 2005 07:22:01 -0700, Baskar wrote:
(snip)
>MS Access does not block the select statement while an insert transaction is
>running, and the records retrieved from Access do not include any
>uncommitted rows.
Hi Baskar,
There are two possible explanations for this.
#1. Access uses what's called "snapshot isolation" - this means that
changing data doesn't block others from retrieving it, but will show the
others the "before change" image of the data. You don't see data that is
inserted in an uncommitted transaction; you still see rows that have
been deleted in an uncommitted transaction; and for rows that are updated
in an uncommitted transaction, you see the values as they were before
the update.
Snapshot isolation is supported in Oracle. I've read that it will also
be supported by SQL Server 2005.
Frankly, I don't think that Access supports snapshot isolation. I prefer
my explanation #2:
#2: Access is buggy. If a row is inserted in an uncommitted transaction,
it won't be shown - but if a row is deleted in the same uncommitted
transaction, then that row won't show either.
Why is this behaviour buggy? Well, consider this simple example: one Mr.
B. Gates enters the bank and says he wants to change his account number.
Since Account# is the primary key, the row with the old account# has to
be deleted, and a new row inserted. Before this transaction is
committed, the accountant runs a query to check the totals. And all hell
breaks loose, because suddenly a few billion dollars have gone missing!
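A sketch of that scenario in SQL (the Accounts table and the numbers are
made up for illustration):
-- The uncommitted account-number change (delete old key, insert new):
BEGIN TRAN
DELETE FROM Accounts WHERE AccountNo = 1111
INSERT INTO Accounts (AccountNo, Balance) VALUES (2222, 5000000000)
-- ...transaction still open, not yet committed

-- If the accountant's query simply skips the deleted-but-uncommitted row
-- instead of blocking, the total comes up billions short:
SELECT SUM(Balance) FROM Accounts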
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
|||Hi,
The concurrent select during an insert sometimes does not work even when the
select query is based on an index field. This happens when there are more
records (say 3000) in the table; it works when there are fewer records
(<250).
Also, SQL Server blocks select/update operations during an insert even when
the insert and the select/update are working on different data. Is this the
expected behavior in SQL Server?
Is there any workaround or alternative solution for this problem?
Regards
Baskar
"Hugo Kornelis" wrote:
> On Thu, 11 Aug 2005 07:22:01 -0700, Baskar wrote:
> (snip)
> (snip)
> (snip)
|||On Tue, 16 Aug 2005 08:49:04 -0700, Baskar wrote:
>The concurrent select during an insert sometimes does not work even when the
>select query is based on an index field. This happens when there are more
>records (say 3000) in the table; it works when there are fewer records
>(<250).
Hi Baskar,
Even after rereading the thread, I don't get the exact picture of what
you are doing. It would help tremendously if you could post some SQL
statements that I can run to reproduce this behaviour on my test
database. I'd need CREATE TABLE statements to set up the table, INSERT
statements for some sample starting data, then the INSERT and SELECT
statements to run concurrently.
It's much easier to comment on what I can see!
>Also, SQL Server blocks select/update operations during an insert even when
>the insert and the select/update are working on different data. Is this the
>expected behavior in SQL Server?
That depends on a lot of factors. Again, I'd have to see the code in
order to comment.
>Is there any workaround or alternative solution for this problem?
Default locking behaviour is to block conflicting requests: if a
connection attempts to do something with locked data, it'll have to wait
until the lock is released. You can override this default behaviour with
one of the following locking hints:
* WITH (NOLOCK) -- specifies that no shared locks are taken, and that
existing locks are disregarded. The consequence of this is known as
"dirty reads" - data that has been updated, but not yet committed and
possibly not even yet changed. The change might be rolled back later; in
that case, your query has returned data that never really existed in the
database.
Setting the transaction isolation level to READ UNCOMMITTED is
equivalent to specifying WITH (NOLOCK) on all queries.
* WITH (READPAST) -- specifies that locked data should be skipped; if
some rows in the table are locked, your query won't wait, but will
return a result set without any data for those rows (as if they don't
exist at all). Applies only to row-level locks, and won't work if your
transaction isolation level is anything other than READ COMMITTED (the
default).
Note that these locking hints will only affect read operations. For any
data modification, the locking can't be bypassed.
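For reference, this is what both hints look like applied to the Test table
from earlier in the thread (a sketch of the syntax only):
-- dirty read: never waits, but may return uncommitted rows
SELECT * FROM Test WITH (NOLOCK)

-- skips row-locked rows; only valid under READ COMMITTED
SELECT * FROM Test WITH (READPAST)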
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
|||Hi,
The table details are available in the first post of this thread.
Basically, our application needs to allow users to download data (insert
statements) while users concurrently work with reports (select statements).
Also, the WITH (READPAST) option will work for fetching records during an
insert, but it will not fetch records that are locked because of an update.
That behavior is not acceptable for our application.
According to the SQL Server documentation, only data that is being updated
is locked, and inserting data locks only that specific data and nothing
else. But the select statement is blocked even when it tries to retrieve
data that is not being inserted.
Is there any standard solution or alternative for this problem?
Regards
Baskar
"Hugo Kornelis" wrote:
> On Tue, 16 Aug 2005 08:49:04 -0700, Baskar wrote:
> (snip)
|||On Wed, 17 Aug 2005 00:26:07 -0700, Baskar wrote:
>The table details are available in the first post of this thread.
Hi Baskar,
I thought that was just an example made up on the fly, since there is no
primary key or unique constraint, no unique index and no clustered
index.
Also, these details do not explain this behaviour:
I'd really have to be able to run a repro script on my database that
mimics this behaviour - i.e. do a BEGIN TRAN + INSERT in one connection
but don't commit or rollback yet, then do a SELECT in another connection
and check the results.
If you can post INSERT and SELECT statements that reproduce this
behaviour, I'll see if I can find the cause.
>Basically, our application needs to allow users to download data (insert
>statements) while users concurrently work with reports (select statements).
>Also, the WITH (READPAST) option will work for fetching records during an
>insert, but it will not fetch records that are locked because of an update.
>That behavior is not acceptable for our application.
That will be hard to do. Basically, you are asking SQL Server to
distinguish between locks for newly inserted data and locks for updated
data. This distinction is not available in SQL Server's architecture.
(And for good reason, I think - what if I insert a row and then go on to
update that same row, all in the same transaction? how to handle updates
to the primary key column? etc etc)
>According to the SQL Server documentation, only data that is being updated
>is locked, and inserting data locks only that specific data and nothing
>else. But the select statement is blocked even when it tries to retrieve
>data that is not being inserted.
That's because the data is already in the database, it's just not yet
confirmed (read: the transaction is not yet committed). SQL Server won't
just pretend it's not there (unless you use READPAST - but that would
affect updated rows as well); neither will SQL Server say that it IS
there if there is still the possibility of a rollback (unless you use
NOLOCK - but that too would affect updated rows as well).
>Is there any standard solution or alternative for this problem?
You might consider using a two-table approach: insert the new rows into
a staging table, create the reports off a complete table and create an
automated task that will move all new rows from the staging table to the
complete table every once in a while.
If it is crucial that your reports have up-to-the-second accuracy, this
won't be a good approach - but in that case, I guess you wouldn't want
the reports to bypass rows currently inserted either.
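A minimal sketch of that two-table approach (the staging table name is
invented here):
-- Downloads insert into dbo.Test_Staging; reports read dbo.Test.
-- A scheduled job moves the committed rows across periodically:
BEGIN TRAN
INSERT INTO dbo.Test (pkey, field1, field2, strfield)
SELECT pkey, field1, field2, strfield
FROM dbo.Test_Staging
DELETE FROM dbo.Test_Staging
COMMIT TRAN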
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
|||Hi,
The following are the actual table and stored procedure details:
if exists (select * from dbo.sysobjects where id = object_id(N'[dbo].[Dummy]')
    and OBJECTPROPERTY(id, N'IsUserTable') = 1)
drop table [dbo].[Dummy]
GO
CREATE TABLE [dbo].[Dummy] (
    [pkey_id] [bigint] IDENTITY (1, 1) NOT NULL ,
    [index_id] [int] NULL ,
    [intField1] [int] NULL ,
    [strField1] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL ,
    [intField2] [int] NULL ,
    [intField3] [int] NULL ,
    [intField4] [int] NOT NULL ,
    [datetimeField] [datetime] NULL ,
    [bitfield] [bit] NULL ,
    [strField2] [varchar] (50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[Dummy] WITH NOCHECK ADD
    CONSTRAINT [PK_Dummy] PRIMARY KEY CLUSTERED
    (
        [pkey_id]
    ) ON [PRIMARY]
GO
CREATE INDEX [IX_Dummy] ON [dbo].[Dummy]([index_id]) ON [PRIMARY]
GO
Test stored procedure for insert:
CREATE procedure prcMultiInserttoDummyTable (
    @indexid bigint = 777,
    @TotalRows bigint = 2000
)
as
Declare @CurRow bigint
set @CurRow = 0
begin tran
While (@CurRow < @TotalRows)
Begin
    Insert into Dummy (index_id, intField1, strField1, intField2, intField3,
        intField4, datetimeField, bitfield, strField2)
    values (@indexid, 555, 'test', 10, 0, 168, '02/13/1998', 0,
        'test stored proc insert')
    set @CurRow = @CurRow + 1
end
waitfor delay '00:00:20' -- wait to test concurrent insertion and selection
commit tran
Test select statements:
select * from dummy where index_id = 123 -- filtered based on the index
select * from dummy
select * from Dummy where intField1 = 245 -- filtered based on a non-index field
"Hugo Kornelis" wrote:
> On Wed, 17 Aug 2005 00:26:07 -0700, Baskar wrote:
> (snip)
|||On Fri, 19 Aug 2005 06:34:07 -0700, Baskar wrote:
>Hi,
>The following are the actual table and stored procedure details:
(snip)
Hi Baskar,
My apologies for the delay in replying.
Thanks for the excellent repro script - I was able to reproduce the
problem you're having on my computer.
I was also able to find the reason. In your table, the column index_id
has an extremely low selectivity (in case you're unfamiliar with the
term, that means that the same values are repeated over and over again).
This makes an index on this column useless - if you run the queries
while no inserts are running and check the execution plan, you'll see
that a table scan is used in all three queries.
The reason that an index on this column is not used is a matter of cost
estimate. If the table has for instance 120,000 total rows, and 10,000
rows have the correct value in index_id, then the choice for the
optimizer is to:
a) scan the index to find the 10,000 matches very quickly, then do
10,000 individual bookmark lookups to retrieve the complete information
for a total of 10,000 + some page reads - or
b) scan the full table; with probably 50 - 150 rows per page, this will
cost only 1200 page reads.
There are several ways that you can modify your script to see that SQL
Server won't wait when the index really is used:
1. Make sure that the index_id column is more selective. I modified the
stored procedure to set index_id to "@indexid + @CurRow", so that all
values in index_id would be different (see the sketch after this list).
The optimizer will now choose to use the index for the first of the SELECT
statements, and the data will be returned while the insert proc is running
(unless the proc also inserts a new matching row, of course).
2. Change the query so that the index on index_id is covering:
SELECT pkey_id, index_id
FROM Dummy
WHERE index_id = 777
Now SQL Server won't have to do a bookmark lookup since all columns
needed are included in the index pages. You'll see all matching rows
returned while another connection is busy inserting rows with a
different value for index_id.
3. Use an index hint to force the optimizer to use the index. Of course,
this will increase the cost of the query.
SELECT *
FROM Dummy (INDEX (IX_Dummy))
WHERE index_id=777
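For option 1, the change inside the loop of prcMultiInserttoDummyTable would
look like this (just a sketch of the one-line change):
Insert into Dummy (index_id, intField1, strField1, intField2, intField3,
    intField4, datetimeField, bitfield, strField2)
values (@indexid + @CurRow, 555, 'test', 10, 0, 168, '02/13/1998', 0,
    'test stored proc insert')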
Best, Hugo
--
(Remove _NO_ and _SPAM_ to get my e-mail address)
Friday, February 17, 2012
Concurrent Connections
Could you help me find the following information from Query Analyzer?
Number of Concurrent Connections/Users
Average Transaction Per/Sec
Database Growth Rate
Regards
Hi,
You can query the system table master..sysprocesses to get the required
counters.
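For example (a sketch against the SQL Server 2000 system tables;
Transactions/sec is a cumulative counter, so sample it twice and divide by
the elapsed seconds to get a rate):
-- number of current user connections (spids 1-50 are reserved for system tasks)
SELECT COUNT(*) AS concurrent_connections
FROM master..sysprocesses
WHERE spid > 50

-- cumulative transaction counter per database, from the PerfMon counters
SELECT instance_name, cntr_value
FROM master..sysperfinfo
WHERE counter_name = 'Transactions/sec'

-- current database size; record it over time to get the growth rate
EXEC sp_spaceused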
Amol Lembhe
"Praveen" wrote:
> Hi All
> Could you help me find the following information from Query Analyzer?
> Number of Concurrent Connections/Users
> Average Transaction Per/Sec
> Database Growth Rate
> Regards
>
concurrent access during snapshot generation
In the Publication properties, under the Snapshot tab, there is a check box
for 'Concurrent access during snapshot generation'; the default is not
checked.
1. I am thinking about checking the box because the benefit is great, but
what is the consequence? I am curious why it isn't checked by default.
2. If I leave it unchecked and have, say, 100 tables in the snapshot, does
the system lock only one table at a time during the process? It shouldn't
lock all the tables at once, am I right?
wingman
For the first part, the methodology is different. Transactions made during
the snapshot time are sent to the distribution database and will be
synchronised after the snapshot. This is possible in transactional because
it is transaction-based, but not in merge. However, in some cases it might
be desirable to quiesce the articles while they are being snapshotted, so as
to limit the transactions getting queued up. I'm not too sure if your
assumption is correct about the second part and will check it.
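In a scripted setup, the check box corresponds (as far as I know) to the
@sync_method argument of sp_addpublication; a sketch with a made-up
publication name:
-- 'native' (the default) locks each table while it is bcp'd out;
-- 'concurrent' generates the snapshot without holding the table locks
EXEC sp_addpublication
    @publication = N'MyPub',
    @sync_method = N'concurrent',
    @repl_freq = N'continuous',
    @status = N'active'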
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Paul,
Thanks for the explanation. Just so I understand it correctly, because I
don't quite know what the word 'quiesce' means:
If the box is unchecked, the snapshot process will stop any transactions
from being sent to the distribution database until the snapshot is done.
If the box is checked, transactions will be sent to the distribution
database, which in turn sends them to the subscriber database while the
snapshot is going on.
Your answer prompts a new question, or a clarification of what I thought I
understood. The check box says "Do not lock tables during snapshot
generation...". What does it mean? Does it lock the tables in the publisher
database or in the subscriber database?
"Paul Ibison" wrote:
> (snip)
|||Wingman,
it's referring to a table-lock on the publisher. The tables usually don't
exist on the subscriber, or if they do, the default is to drop them anyway.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
|||Paul,
Sorry, I am still not quite clear. Please bear with me and clarify the
following.
So if I leave the box unchecked (tables will be locked), which of the
following scenarios happens while the snapshot runs:
1. The snapshot process stops any transactions from being sent to the
distribution database until the snapshot is done. This means changes to any
tables sit in the queue of the distribution database, and during the
snapshot process the tables involved are locked.
2. The snapshot process only stops transactions related to the tables
involved in the snapshot, and those transactions are held in the queue of
the distribution database. This means changes are still allowed to other
tables, and during the snapshot process the tables involved are locked.
If the box is checked, are the above situations reversed? For example,
there would be no queueing in the distribution database and the tables
involved in the snapshot would not be locked.
Again, thank you very much for your time and patience.
"Paul Ibison" wrote:
> (snip)
|||With the box unchecked, it is the tables on the Publisher that are locked.
Other processes trying to update these tables will be blocked until the lock
is removed. This should be verifiable by testing and simultaneously
profiling or using the current activity window.
Cheers,
Paul Ibison SQL Server MVP, www.replicationanswers.com
(recommended sql server 2000 replication book:
http://www.nwsu.com/0974973602p.html)
Tuesday, February 14, 2012
Concurrency in Transaction
The scenario:
Frontend - MS Access (not yet decided whether MDB or ADP)
Backend - MS SQL Server
It is a conversion from an MS Access backend to an MS SQL Server backend.
We plan to create stored procedures for all the inserts, updates, deletes,
and business rules / validations wherever possible.
The problem:
I am running into a concurrency problem - the classic scenario of two users
retrieving the same row (record) from the same table. It allows both users
to update the record; that is, the user who updates last has his changes
saved even though he retrieved that particular record second.
What I need is that the user who retrieved the record second shouldn't be
able to update or delete the record while it is still held by another user.
I would appreciate it if someone pointed me in the right direction to solve
the above problem. I know it is related to the isolation property, but I am
not sure.
Thanks in advance
Regards
bala
|||bala (balkiir@.gmail.com) writes:
> (snip)
One convenient solution is to use a timestamp column. A timestamp column
is a column that is automatically updated every time the row is updated.
Timestamp has nothing to do with date and time; it is an 8-byte binary
value.
When you read a row, you retrieve the timestamp value to the client,
and then you use that in the WHERE condition when you update:
UPDATE tbl
SET ...
WHERE keycol = @key
AND tstamp = @tstamp
If @@rowcount is 0 after the update, this means that the tstamp value
that you read is no longer good, because someone else has updated the
row.
This is a little different from your request, as here the first process
that updates wins, not the first that reads.
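Putting the pieces together, a sketch (tbl, keycol, and the variables are
placeholders, as above):
CREATE TABLE tbl (
    keycol int NOT NULL PRIMARY KEY,
    data varchar(50) NOT NULL,
    tstamp timestamp NOT NULL -- 8-byte value, bumped on every update
)

-- 1) Read the row and keep the timestamp on the client:
SELECT data, tstamp FROM tbl WHERE keycol = @key

-- 2) Write back only if no one else changed the row in the meantime:
UPDATE tbl
SET data = @newdata
WHERE keycol = @key
  AND tstamp = @tstamp

IF @@rowcount = 0
    RAISERROR('Row was changed by another user; reread and retry.', 16, 1)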
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||thanx a lot erland. have a great day