using the File Group "Primary" (which I recently read is bad) and I
have 3 index levels with some Data values having in excess of 700K
rows.
Is this bad and should I be worried? Is there some housekeeping I
should do in these situations?
TIA
Robrcamarda (rcamarda@.cablespeed.com) writes:
> I'm using Idera's SQL Diagnostic Manager and its showing me my index is
> using the File Group "Primary" (which I recently read is bad)
There are situations where you can split up databases on several
file groups, and for instance have non-clustered index on a separate
volume. Note that if you relocate the clustered index, you relocate
the data as well.
But this should only be done if you have a clear understanding of what you
gain. None of our customers' databases have more than the two files
each database is born with. (And thus only one file group.)
> and I have 3 index levels with some Data values having in excess of 700K
> rows.
> Is this bad and should I be worried? Is there some housekeeping I
> should do in these situations?
It's a good idea to run DBCC DBREINDEX on your tables if they tend
to fragment. Whether they fragment can be determined by running
DBCC SHOWCONTIG.
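For example, a minimal check on SQL Server 2000 might look like this
(the table name dbo.Orders and the fill factor are placeholders):

```sql
-- Report fragmentation for all indexes on a table (SQL Server 2000).
DBCC SHOWCONTIG ('dbo.Orders') WITH ALL_INDEXES

-- If Logical Scan Fragmentation is high, rebuild the indexes.
-- '' as the second argument means all indexes on the table;
-- 90 is the fill factor to rebuild with.
DBCC DBREINDEX ('dbo.Orders', '', 90)
```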
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||In books online there is an example which checks for fragmentation
above a level of 30% and runs the reindex function. Is this a good
number to use, or is it one of those "depends" kind of things?|||pb648174 (google@.webpaul.net) writes:
> In books online there is an example which checks for fragmentation
> above a level of 30% and runs the reindex function. Is this a good
> number to use, or is it one of those "depends" kind of things?
It's not a bad number. We ship a maintenance job that uses the output
from DBCC SHOWCONTIG, and if a table is fragmented enough, we run DBCC
DBREINDEX. And the bar at which we reindex is, as far as I recall, precisely
30%...
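The Books Online example the previous poster mentions captures DBCC
SHOWCONTIG output into a table and reindexes above the 30% bar. A sketch
along those lines for SQL Server 2000 (dbo.Orders is a placeholder; the
temp table mirrors the SHOWCONTIG WITH TABLERESULTS result set):

```sql
-- Holding table for DBCC SHOWCONTIG ... WITH TABLERESULTS output.
CREATE TABLE #frag (
   ObjectName   char(255),  ObjectId     int,
   IndexName    char(255),  IndexId      int,
   Lvl          int,        CountPages   int,
   CountRows    int,        MinRecSize   int,
   MaxRecSize   int,        AvgRecSize   int,
   ForRecCount  int,        Extents      int,
   ExtentSwitches int,      AvgFreeBytes int,
   AvgPageDensity int,      ScanDensity  decimal,
   BestCount    int,        ActualCount  int,
   LogicalFrag  decimal,    ExtentFrag   decimal)

INSERT #frag
EXEC ('DBCC SHOWCONTIG (''dbo.Orders'') WITH TABLERESULTS')

-- Rebuild only if logical fragmentation exceeds the 30% bar.
IF EXISTS (SELECT * FROM #frag WHERE LogicalFrag > 30)
   DBCC DBREINDEX ('dbo.Orders')

DROP TABLE #frag
```

In a real job you would drive this from a cursor over the tables in the
database, as the Books Online example does.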
What we have added recently, and what our admin-kind-of-guy and I have not
really settled on the best strategy for, is running UPDATE STATISTICS
WITH FULLSCAN on tables we don't reindex. Tables that don't get fragmented
despite heavy insertion traffic probably have a monotonically increasing
clustered index, so their statistics will be inaccurate after a while.
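That is, for the tables the job skips (DBCC DBREINDEX rebuilds statistics
as a side effect, so reindexed tables don't need it), something like this,
with dbo.Orders again a placeholder:

```sql
-- Refresh statistics by scanning every row rather than sampling,
-- for a table that was not reindexed this maintenance window.
UPDATE STATISTICS dbo.Orders WITH FULLSCAN
```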
Then, as always, there are cases where you may want to deviate. For instance,
a clustered index on GUIDs is often said to be a recipe for quick
fragmentation. However, SQL Server MVP Greg Linwood pointed out to me that
this can be used to your advantage. You define the index with a relatively
low fill factor, say 50%. What happens now is that insertions occur
all over the place, but page splits are rare, since all pages have
room to spare. So with this design, fragmentation actually decreases as
time goes by. Up to a certain point, that is: once you start to fill up
more and more pages, page splits will rage here and there. The idea is
that you monitor the state of the database closely, and that you have a
maintenance window where you can again reindex to 50%.
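A sketch of that setup (table and index names are hypothetical):

```sql
-- Table with a GUID clustered key; built half-empty so that
-- randomly distributed inserts land on pages with free space.
CREATE TABLE dbo.Docs (
   doc_id  uniqueidentifier NOT NULL DEFAULT NEWID(),
   payload varchar(1000)    NOT NULL)

CREATE UNIQUE CLUSTERED INDEX docs_ci ON dbo.Docs (doc_id)
   WITH FILLFACTOR = 50

-- In the maintenance window, rebuild back down to 50% full:
DBCC DBREINDEX ('dbo.Docs', 'docs_ci', 50)
```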
It goes without saying that this strategy is nothing for the left-hand
DBA, but requires thorough understanding and, most of all, daily
monitoring of the state of the database.
--
Erland Sommarskog, SQL Server MVP, esquel@.sommarskog.se
Books Online for SQL Server SP3 at
http://www.microsoft.com/sql/techin.../2000/books.asp|||Do you have an example of that maintenance job somewhere? We don't look
at the database daily and have many installations so it needs to be
scripted and run on its own without any supervision or intervention.
I'm right handed and not a DBA...