Using merge replication between SQL Server 2000 and SQL Server CE,
is there any way that row deletes could occur on the subscriber without a reinitialization
or an explicit delete of the row on the publisher?
More specifically, suppose there is a row filter that returns one day's worth of data with each
day's pull, for example:
select <columns> from Table where UpdateDate < GETDATE() and UpdateDate >= DATEADD(d, -1, GETDATE())
Would there be some implicit delete at the subscriber each day because the data sent has changed?
My research indicates this does not happen, but I have a colleague who thinks differently.
Merge replication is trigger-based: a change has to happen to a row for the filter to be evaluated. So your filter alone will not guarantee that the subscriber holds only rows meeting the filter criteria; you need some process that performs a dummy update on the rows so they get processed.
However, please look at the best-practices topic "Best Practices for Time-Based Row Filters" (http://msdn2.microsoft.com/en-us/library/ms365153.aspx); it shows how to accomplish what you want.
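To make the trigger-based point above concrete, here is a hedged sketch of the usual workaround for time-based filters (table name `SalesData`, column `InWindow`, and the one-day window are all assumptions for illustration, not from the linked article verbatim): filter on a flag column that a scheduled job maintains, so that rows leaving the window are actually updated and the merge triggers fire.

```sql
-- Assumed schema: add a flag column that marks rows inside the window.
ALTER TABLE SalesData ADD InWindow bit NOT NULL DEFAULT 1

-- Scheduled job, run once a day: clear the flag on rows that have
-- aged out. This UPDATE is the "dummy" change that lets merge
-- replication re-evaluate partition membership for those rows.
UPDATE SalesData
SET InWindow = 0
WHERE InWindow = 1
  AND UpdateDate < DATEADD(d, -1, GETDATE())
```

The article's row filter then reduces to a simple, deterministic predicate such as `WHERE InWindow = 1`, instead of comparing against `GETDATE()` directly.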
|||Thanks for the clarification. I have been able to convince myself that deletes do occur on the subscriber for records outside the filter if I apply some update. I am trying to help a customer with replication performance issues. Their application does a reinitialize on every synchronization request from the user, which sends 10,000 or more rows. When I asked why they reinit every time, they were not sure (the original programmer is gone), but they said there was a problem with deletes taking so long and it was better to just reinit. That is why I asked the original question: I did not know what deletes they could be talking about.
I am curious about the deleted records. It seems the server must send a delete for every updated record outside the filter. In my case, the filter is not only time-based but also uses HOST_NAME(), and there are about 1,000 handhelds in the field. So if I used the method from the best-practices article, it seems each sync would send deletes for all 1,000 users every time.
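For reference, a filter of the kind described above (time-based plus per-device) typically looks like the following sketch; the table and column names are hypothetical, and each handheld identifies itself via the HostName property it passes at sync time, which the filter reads through HOST_NAME():

```sql
-- Hypothetical parameterized row filter for the article: each device
-- receives only its own rows, restricted to the last day's data.
WHERE DeviceId = HOST_NAME()
  AND UpdateDate >= DATEADD(d, -1, GETDATE())
```

Because HOST_NAME() makes this a parameterized filter, each subscriber's partition is evaluated per device, which is exactly the scenario where precomputed partitions (mentioned below for SQL 2005) help.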
Maybe that is what the original programmer did, and what was taking so much time (even though there would be no actual delete, because those other records were never on the device).
Is this the behavior that would occur?
Thanks again
Ed Santosusso
|||Moving data into and out of a partition can send deletes to the subscriber; that is the only way to remove the data there. And yes, in SQL 2000 deletes are much slower than inserts and updates, but this has been improved in SQL 2005, provided you use the new precomputed partitions.
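For anyone migrating to SQL 2005, enabling precomputed partitions is a publication property; a minimal sketch (the publication name is hypothetical) would be:

```sql
-- SQL Server 2005: turn on precomputed partitions so partition
-- membership is computed when data changes, not during each merge,
-- which reduces the cost of filtered deletes at sync time.
EXEC sp_changemergepublication
    @publication = N'HandheldPub',          -- hypothetical name
    @property    = N'use_partition_groups',
    @value       = N'true'
```

Note that the publication must meet the requirements for precomputed partitions (for example, restrictions on the filter functions used) for this setting to take effect.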