Hi,
I have a table that contains log data, usually around a million records. The
table has about 10 columns with various attributes of the logged data,
nothing special. We're using SQL Server 2000.
Some of the columns (for example "category") have duplicate values
throughout the records. We have a web page that queries the table to show
all the unique values, for example:
select distinct CATEGORY from TEST
Obviously the server has to scan all rows in order to get the unique values,
which takes quite a while, especially since that web page contains several
of these types of queries. We also run MAX(DATE) and MIN(DATE) queries that
add to the load.
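For reference, those look roughly like this (DATE here just stands for our
timestamp column):
select MIN(DATE), MAX(DATE) from TEST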
I already created an index on the CATEGORY column (actually on each of the
columns we query this way), which might help a little, but I'm pretty sure
there has got to be a better way.
I also created a view (select distinct CATEGORY from TEST) and tried to
index it, but SQL Server won't let me index a view that contains a DISTINCT
clause.
Isn't there a way to create an index that contains only the distinct values?
Is there another way to speed this up?
Thanks for any hints!

try creating a view with schema binding and use group by (which will return
the distinct values), then you should be able to add a unique index on the
view.

Thanks, that actually seemed to work. I had tried this before but was
discouraged by (yet another) error message about a missing count_big(*).
Now I realize that I can just add the count_big(*) to the view, and the
index saves fine. Initial tests look promising - thanks.
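In case it helps someone else searching for this, here is roughly what I
ended up with - the view and index names are just placeholders for our real
ones:

create view dbo.V_TEST_CATEGORIES
with schemabinding
as
select CATEGORY, count_big(*) as CNT
from dbo.TEST
group by CATEGORY
go

create unique clustered index IX_V_TEST_CATEGORIES
on dbo.V_TEST_CATEGORIES (CATEGORY)
go

One thing to watch: if I remember right, on SQL Server 2000 only Enterprise
Edition matches indexed views automatically, so on other editions you may
need to query the view directly with the noexpand hint, e.g.
select CATEGORY from dbo.V_TEST_CATEGORIES with (noexpand)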
"mark baekdal" <anonymous@.discussions.microsoft.com> wrote in message
news:BD3A168C-90FD-4FDC-AE1C-46618C6FF5A6@.microsoft.com...
> try creating a view with schema binding and use group by (which will
> return the distinct values), then you should be able to add a unique index
> on the view.