Maintaining Partitioned Views

Jul 20, 2005

Hello,

I have a large set of data that I have set up as a partitioned view.
The view is partitioned by a datetime column and the individual tables
each represent one month's worth of data. I need to keep at least two
years' worth of data at all times, but after two years I can archive
the data. A sample of the code used is below. It is simplified for
space reasons.

My question is, how do other people maintain the database in this type
of scenario? I could create all of the tables necessary for the next
year and then go through that at the end of each year (archive tables
over two years, add new tables, and change the view), but I was also
thinking that I might be able to write a stored procedure that runs
once a month and does all three of those tasks automatically. It seems
like a lot of dynamic SQL code though for something like that.
Alternatively, I could write VB code to handle it in a DTS package.
So, my question again is, how are others doing it? Any suggestions?

Thanks!
-Tom.

CREATE TABLE [dbo].[Station_Events_200401] (
[event_time] [datetime] NOT NULL ,
[another_column] [char] (8) NOT NULL )
GO

CREATE TABLE [dbo].[Station_Events_200402] (
[event_time] [datetime] NOT NULL ,
[another_column] [char] (8) NOT NULL )
GO

CREATE VIEW Station_Events
AS
SELECT event_time,
another_column
FROM Station_Events_200401
UNION ALL
SELECT event_time,
another_column
FROM Station_Events_200402
GO
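For illustration, the once-a-month stored procedure mentioned above might look roughly like this. It is an untested sketch: it assumes the member tables follow the Station_Events_YYYYMM naming pattern shown, keeps a rolling 24 months in the view, and leaves the actual archiving step as a comment since that depends on where the old data goes.

CREATE PROCEDURE dbo.Maintain_Station_Events
AS
BEGIN
    SET NOCOUNT ON

    DECLARE @NextMonth   char(6),
            @OldestMonth char(6),
            @sql         nvarchar(4000)

    -- yyyymm suffix of next month's table and of the oldest month to keep (24 months back)
    SET @NextMonth   = LEFT(CONVERT(char(8), DATEADD(month,   1, GETDATE()), 112), 6)
    SET @OldestMonth = LEFT(CONVERT(char(8), DATEADD(month, -24, GETDATE()), 112), 6)

    -- 1) Create next month's member table if it is not there yet
    IF OBJECT_ID(N'dbo.Station_Events_' + @NextMonth) IS NULL
    BEGIN
        SET @sql = N'CREATE TABLE dbo.Station_Events_' + @NextMonth + N' (
                        event_time     datetime NOT NULL,
                        another_column char(8)  NOT NULL)'
        EXEC (@sql)
    END

    -- 2) Archive the member tables older than @OldestMonth here
    --    (bcp out, rename, or move to an archive database - whatever the archive policy is)

    -- 3) Rebuild the view over the member tables that remain in the window
    SET @sql = N''
    SELECT @sql = @sql +
           CASE WHEN @sql = N'' THEN N'' ELSE N' UNION ALL ' END +
           N'SELECT event_time, another_column FROM dbo.' + QUOTENAME(name)
    FROM sysobjects                      -- sys.tables on SQL Server 2005 and later
    WHERE type = 'U'
      AND name LIKE 'Station_Events_[0-9][0-9][0-9][0-9][0-9][0-9]'
      AND RIGHT(name, 6) >= @OldestMonth

    SET @sql = N'ALTER VIEW dbo.Station_Events AS ' + @sql
    EXEC (@sql)
END
GO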




Partitioned Views

Aug 23, 2006

Hi,

I've started to explore Distributed Partitioned Views in order to use them in the next project, and I've found the article:
"MS SQL Server Distributed Partitioned Views"
By Don Schlichting

I came across the following problem:
While running sample:
USE test
GO

CREATE VIEW AllAuthors
AS

SELECT *
FROM AuthorsAM,
TEST1.test.dbo.AuthorsNZ

GO

I got the error message:
Server: Msg 4506, Level 16, State 1, Procedure AllAuthors, Line 5
Column names in each view or function must be unique. Column name 'au_lname' in view or function 'AllAuthors' is specified more than once.

Could anyone please explain? Can't I use the same column names in both tables?

Regards,
Yifat


Partitioned Views

Aug 17, 2001

I would like to break up a very large table into about ten smaller ones. For partitioning to be efficient, the columns in the check constraint need to be used when accessing the view. The problem is the table has a composite primary key made up of LocationID/ProductID, with another composite index on ProductID/LocationID. It is accessed both ways from our applications.

I would like to partition the table by LocationID. But then when called by ProductID a scan of all tables in the view would have to be done.
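For illustration, this is the shape of the problem (hypothetical table and column names):

-- Member tables are constrained on LocationID and UNION ALLed into the view
CREATE TABLE dbo.Sales_Loc1 (
    LocationID int NOT NULL CHECK (LocationID = 1),
    ProductID  int NOT NULL,
    Qty        int NOT NULL,
    CONSTRAINT PK_Sales_Loc1 PRIMARY KEY (LocationID, ProductID)
)
-- ...one such table per location (or range of locations)...

-- Prunes: the partitioning column appears in the WHERE clause
SELECT * FROM dbo.Sales_View WHERE LocationID = 1 AND ProductID = 42

-- Cannot prune: nothing ties ProductID 42 to a single member table,
-- so every member table's ProductID index has to be probed
SELECT * FROM dbo.Sales_View WHERE ProductID = 42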

In Oracle there is something called a global index that would solve this. Is there anything similar in SQL Server or does anybody have a work around?

Thanks,

Rob


Distributed Partitioned Views

Jul 14, 2006

Hi everyone,
I have some doubts about distributed partitioned views.
When we create a distributed partitioned view that includes three servers, do we have to create the same distributed partitioned view on all three servers in order for each server to be able to see, and especially to modify, it?

Thanks


Delete Statements In Partitioned Views

Dec 10, 2002

Hi, I am using SQL 2000 Enterprise Edition. I have a partitioned view based on 8 tables. My selects and inserts are fine. But when I run a delete on the view based on a query on the partitioned column, I get a "Transaction (Process ID 149) was deadlocked and has been chosen as a victim" error.
I looked at the query plan and it was showing a parallel query on all the underlying tables. So I added OPTION (MAXDOP 1) to use only one processor, and the delete worked fine.

Does anybody know why? Can parallel queries create deadlocks? Are there any known problems with deletes on partitioned views?
The same question applies to updates; I think I have the same problem there.
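For reference, the workaround described above looks like this (view and column names are illustrative):

-- Force a serial plan for the delete against the partitioned view
DELETE FROM dbo.PartitionedView
WHERE partition_col = '20021101'
OPTION (MAXDOP 1)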

Any help will be useful.
thanks!!


Books On Lies - Partitioned Views

May 8, 2007

From BOL...

CHECK constraints are not needed for the partitioned view to return the correct results. However, if the CHECK constraints have not been defined, the query optimizer must search all the tables instead of only those that cover the search condition on the partitioning column. Without the CHECK constraints, the view operates like any other view with UNION ALL. The query optimizer cannot make any assumptions about the values stored in different tables and it cannot skip searching the tables that participate in the view definition.

Then why am I getting index scans on my partitioning column on tables that fail the search value based on their check constraint?

Not looking for an answer because I know the query optimizer is a fickle b*tch and I did not post any code, but I needed to rant.


SQL SERVER 2000 Partitioned Views Bug

May 12, 2008

Thanks in advance for reading this post! I'm facing a situation in SQL Server 2000 SP4 with partitioned views.

I have a partitioned view that joins about 10 tables; each table has a check constraint.

For example, if I exec a select count(*) from VIEW where col1 = '20080101', it only goes to the table that has the data for '20080101'.

If I exec a select col1, col2, col3, col4 from VIEW where col1 = '20080101', it goes to all the tables and does an index seek on each.

I want the behaviour of query 1, because it only looks at 1 table and not all 10.



Thanks in advance !


Very Urgent : Bcp Or Dts Data To Partitioned Views

Jan 1, 2004

Can someone please tell me whether I can bcp or DTS data into partitioned views?
It is failing with an error saying the bulk operation is not supported.


T-SQL (SS2K8) :: Local Partitioned Views

May 13, 2014

SQL 2008 R2 Standard on Windows 2008 R2 Enterprise. I have implemented a set of local partitioned views to facilitate multi-threading in one of our applications. Testing with T-SQL, the insert, update, and delete all work fine. However, the inserts from SSIS are bulk inserts and they fail. According to all I have read, I should be able to add an INSTEAD OF trigger for the insert and it should work fine. URL....

I have created INSTEAD OF triggers for insert and delete, and again they work fine when I test them with T-SQL but fail in the SSIS package. The error message is that the view is not updatable.
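For reference, a minimal INSTEAD OF INSERT trigger of the kind described above might look like this (table, view, and column names are illustrative):

CREATE TRIGGER trg_ParentView_Insert
ON dbo.ParentView
INSTEAD OF INSERT
AS
BEGIN
    SET NOCOUNT ON

    -- Route each inserted row to the member table its partition key belongs to
    INSERT INTO dbo.Member_2013 (Id, PartitionKey, Payload)
    SELECT Id, PartitionKey, Payload FROM inserted WHERE PartitionKey = 2013

    INSERT INTO dbo.Member_2014 (Id, PartitionKey, Payload)
    SELECT Id, PartitionKey, Payload FROM inserted WHERE PartitionKey = 2014
END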


256 Table Limit For Partitioned Views

Aug 8, 2005

I have a partitioned view sitting over several tables and I'm slowly approaching the 256 number. Can anybody confirm if there is such a limit for the maximum number of tables that a partitioned view can hold?

If this is true, does anybody have any suggestions or ideas to work around this max limit? TIA!


Problems With Partitioned Views And Pruning

Jul 20, 2005

/* Problem: Trying to get partitioned views to "prune" unneeded partitions from
   select statements against the partitioned view. There are 5 partitioned tables,
   each with a check constraint based on a range of the formula_id column.

   Test: Run this script to create the 5 partitioned tables and the partitioned view.
   Then run the explain plans on the select statements at the end of the script and
   see that we can only prune if we give a seemingly superfluous "is not null"
   criteria in addition to the formula_id.

   Ideal: We want to only have to use the formula_id in the select statement to prune. */

/* note: you may get errors on the drops first time run */

drop table dbo.cs_working_e2
go
CREATE TABLE dbo.cs_working_e2 (
    formula_id         int              NOT NULL CONSTRAINT formula_id_e14 CHECK (formula_id between 1 and 1000),
    submission_id      int              NOT NULL,
    node_id            int              NOT NULL,
    reference_year     smallint         NOT NULL,
    observation_period datetime         NOT NULL,
    authority_flag     tinyint          NOT NULL CONSTRAINT ONE_DEFAULT3436 DEFAULT 1
                                                 CONSTRAINT Binary_flag_rule667 CHECK (authority_flag IN (0, 1)),
    interpolated_flag  tinyint          NOT NULL CONSTRAINT ZERO_DEFAULT6926 DEFAULT 0
                                                 CONSTRAINT Binary_flag_rule668 CHECK (interpolated_flag IN (0, 1)),
    observation_value  decimal_datatype NOT NULL,
    time_created       smalldatetime    NOT NULL CONSTRAINT CURRENT_DATE_DEFAULT1807 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_e2 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period) --ON "INDEXES"
) --ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_e2 ON dbo.cs_working_e2
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC, observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_e2 ON dbo.cs_working_e2
    (node_id ASC, authority_flag ASC, formula_id ASC, observation_period ASC, reference_year ASC, observation_value ASC) --ON "INDEXES"
go

drop table dbo.cs_working_indexes2
go
CREATE TABLE dbo.cs_working_indexes2 (
    formula_id         int              NOT NULL CONSTRAINT formula_id_indexes14 CHECK (formula_id between 7001 and 9000),
    submission_id      int              NOT NULL,
    node_id            int              NOT NULL,
    reference_year     smallint         NOT NULL,
    observation_period datetime         NOT NULL,
    authority_flag     tinyint          NOT NULL CONSTRAINT ONE_DEFAULT3437 DEFAULT 1
                                                 CONSTRAINT Binary_flag_rule669 CHECK (authority_flag IN (0, 1)),
    observation_value  decimal_datatype NOT NULL,
    interpolated_flag  tinyint          NOT NULL CONSTRAINT ZERO_DEFAULT6927 DEFAULT 0
                                                 CONSTRAINT Binary_flag_rule670 CHECK (interpolated_flag IN (0, 1)),
    time_created       smalldatetime    NOT NULL CONSTRAINT CURRENT_DATE_DEFAULT1808 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_indexes2 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period) --ON "INDEXES"
) --ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_indexes2 ON dbo.cs_working_indexes2
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC, observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_indexes2 ON dbo.cs_working_indexes2
    (node_id ASC, authority_flag ASC, formula_id ASC, observation_period ASC, reference_year ASC, observation_value ASC) --ON "INDEXES"
go

drop table dbo.cs_working_other2
go
CREATE TABLE dbo.cs_working_other2 (
    formula_id         int              NOT NULL CONSTRAINT formula_id_other14 CHECK (formula_id >= 9001),
    submission_id      int              NOT NULL,
    node_id            int              NOT NULL,
    reference_year     smallint         NOT NULL,
    observation_period datetime         NOT NULL,
    authority_flag     tinyint          NOT NULL CONSTRAINT ONE_DEFAULT3438 DEFAULT 1
                                                 CONSTRAINT Binary_flag_rule671 CHECK (authority_flag IN (0, 1)),
    observation_value  decimal_datatype NOT NULL,
    interpolated_flag  tinyint          NOT NULL CONSTRAINT ZERO_DEFAULT6928 DEFAULT 0
                                                 CONSTRAINT Binary_flag_rule672 CHECK (interpolated_flag IN (0, 1)),
    time_created       smalldatetime    NOT NULL CONSTRAINT CURRENT_DATE_DEFAULT1809 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_other2 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period) --ON "INDEXES"
) --ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_other2 ON dbo.cs_working_other2
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC, observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_other2 ON dbo.cs_working_other2
    (node_id ASC, authority_flag ASC, formula_id ASC, observation_period ASC, reference_year ASC, observation_value ASC) --ON "INDEXES"
go

drop table dbo.cs_working_p1q12
go
CREATE TABLE dbo.cs_working_p1q12 (
    formula_id         int              NOT NULL CONSTRAINT formula_id_p1q114 CHECK (formula_id between 3001 and 7000),
    submission_id      int              NOT NULL,
    node_id            int              NOT NULL,
    reference_year     smallint         NOT NULL,
    observation_period datetime         NOT NULL,
    authority_flag     tinyint          NOT NULL CONSTRAINT ONE_DEFAULT3439 DEFAULT 1
                                                 CONSTRAINT Binary_flag_rule673 CHECK (authority_flag IN (0, 1)),
    interpolated_flag  tinyint          NOT NULL CONSTRAINT ZERO_DEFAULT6929 DEFAULT 0
                                                 CONSTRAINT Binary_flag_rule674 CHECK (interpolated_flag IN (0, 1)),
    observation_value  decimal_datatype NOT NULL,
    time_created       smalldatetime    NOT NULL CONSTRAINT CURRENT_DATE_DEFAULT1810 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_p1q12 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period) --ON "INDEXES"
) --ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_p1q12 ON dbo.cs_working_p1q12
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC, observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_p1q12 ON dbo.cs_working_p1q12
    (node_id ASC, authority_flag ASC, formula_id ASC, observation_period ASC, reference_year ASC, observation_value ASC) --ON "INDEXES"
go

drop table dbo.cs_working_pq2
go
CREATE TABLE dbo.cs_working_pq2 (
    formula_id         int              NOT NULL CONSTRAINT formula_id_pq14 CHECK (formula_id between 1001 and 3000),
    submission_id      int              NOT NULL,
    node_id            int              NOT NULL,
    reference_year     smallint         NOT NULL,
    observation_period datetime         NOT NULL,
    authority_flag     tinyint          NOT NULL CONSTRAINT ONE_DEFAULT3440 DEFAULT 1
                                                 CONSTRAINT Binary_flag_rule675 CHECK (authority_flag IN (0, 1)),
    interpolated_flag  tinyint          NOT NULL CONSTRAINT ZERO_DEFAULT6930 DEFAULT 0
                                                 CONSTRAINT Binary_flag_rule676 CHECK (interpolated_flag IN (0, 1)),
    observation_value  decimal_datatype NOT NULL,
    time_created       smalldatetime    NOT NULL CONSTRAINT CURRENT_DATE_DEFAULT1811 DEFAULT getdate(),
    CONSTRAINT XPKcs_working_pq2 PRIMARY KEY NONCLUSTERED
        (formula_id, submission_id, node_id, reference_year, observation_period) --ON "INDEXES"
) --ON "WORKING"
go
CREATE UNIQUE CLUSTERED INDEX XAK1cs_working_pq2 ON dbo.cs_working_pq2
    (submission_id ASC, formula_id ASC, node_id ASC, authority_flag ASC, observation_period ASC, reference_year ASC, observation_value ASC)
go
CREATE INDEX XIE1cs_working_pq2 ON dbo.cs_working_pq2
    (node_id ASC, authority_flag ASC, formula_id ASC, observation_period ASC, reference_year ASC, observation_value ASC) --ON "INDEXES"
go

----- create view ---------
drop view cs_working2
go
CREATE VIEW cs_working2 (submission_id, node_id, reference_year, observation_period,
                         formula_id, observation_value, interpolated_flag, time_created, authority_flag)
AS
SELECT we.submission_id, we.node_id, we.reference_year, we.observation_period, we.formula_id,
       we.observation_value, we.interpolated_flag, we.time_created, we.authority_flag
FROM cs_working_e2 we
union all
SELECT wo.submission_id, wo.node_id, wo.reference_year, wo.observation_period, wo.formula_id,
       wo.observation_value, wo.interpolated_flag, wo.time_created, wo.authority_flag
FROM cs_working_other2 wo
union all
SELECT wpq.submission_id, wpq.node_id, wpq.reference_year, wpq.observation_period, wpq.formula_id,
       wpq.observation_value, wpq.interpolated_flag, wpq.time_created, wpq.authority_flag
FROM cs_working_pq2 wpq
union all
SELECT wp1q1.submission_id, wp1q1.node_id, wp1q1.reference_year, wp1q1.observation_period, wp1q1.formula_id,
       wp1q1.observation_value, wp1q1.interpolated_flag, wp1q1.time_created, wp1q1.authority_flag
FROM cs_working_p1q12 wp1q1
union all
SELECT wi.submission_id, wi.node_id, wi.reference_year, wi.observation_period, wi.formula_id,
       wi.observation_value, wi.interpolated_flag, wi.time_created, wi.authority_flag
FROM cs_working_indexes2 wi
go

--- sample selects against partitioned view -----
/*
--run explain plan here and see all 5 partitions being pulled
select * from cs_working

--run explain plan here and see just the 1 partition
select * from cs_working_e2

--run explain plan and see this is not pruning to the needed partition
select * from cs_working
where formula_id = 1

--run explain plan and see it is now pruning to the needed partition
select * from cs_working
where formula_id = 1
and submission_id is not null

--run explain plan and see it is now pruning to the needed partition, too
select * from cs_working
where formula_id = 1
and observation_value is not null
*/


Distributed Partitioned Views/Federated Servers Anyone?

Apr 21, 2003

I'm trying to do some research on the use of SQL's DPV. I'm looking for feedback from people who've actually done this in production, to learn more about the design challenges and the level of added administration required. Any information will be much appreciated. Thanks.


aK


DB Engine :: Partitioned Views In Standard Edition?

Apr 24, 2015

We are on SQL 2014 Standard edition. I have a situation wherein I plan to partition a table; since table partitioning is not supported in the Standard edition, I thought about partitioned views. However, I am now stuck because I can't make the view writable due to the identity column in the base table.

Do I have any other option in this case?


Help With Partitioned Views Or Updating Data From Multiple Tables

Mar 16, 2008

Hi All,

My database's design is set out here. In summary, I'm trying to model a Stock Exchange for a Technical Analysis application written using Visual C++. In order to create the hierarchy I'm using a Nested Set Model. I'm now trying to write code to add and delete equities (or, more generically, nodes) to the database using a form presented to the user in my application. I have example SQL code to create the necessary add and delete procedures that calculate the changes to the values in the lft and rgt columns, but these examples focus on a single table, whereas my design aggregates rows from multiple tables using UNION ALL:




Code Snippet
CREATE VIEW vw_NSM_DBHierarchy -- Nested Set Model Database Hierarchy
AS
SELECT clmStockExchange, clmLeft, clmRight FROM tblStockExchange_
UNION ALL
SELECT clmMarkets, clmLeft, clmRight FROM tblMarkets_
UNION ALL
SELECT clmSectors, clmLeft, clmRight FROM tblSectors_
UNION ALL
SELECT clmEPIC, clmLeft, clmRight FROM tblEquities_




Essentially, I'm trying to create an updateable view, but I receive the error "UNION ALL View is not updatable because a partitioning column was not found". I suspect that my design is wrong or lacking something, and this problem is highlighting the design flaws, so any suggestions would be greatly appreciated.
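For comparison, the general shape of an updatable UNION ALL (partitioned) view is one where every base table carries the partitioning column, constrained by a CHECK and included in the primary key. A sketch with illustrative names, not the schema above:

CREATE TABLE dbo.Nodes_A (
    NodeType char(1) NOT NULL CHECK (NodeType = 'A'),
    NodeID   int     NOT NULL,
    clmLeft  int     NOT NULL,
    clmRight int     NOT NULL,
    CONSTRAINT PK_Nodes_A PRIMARY KEY (NodeType, NodeID)
)
CREATE TABLE dbo.Nodes_B (
    NodeType char(1) NOT NULL CHECK (NodeType = 'B'),
    NodeID   int     NOT NULL,
    clmLeft  int     NOT NULL,
    clmRight int     NOT NULL,
    CONSTRAINT PK_Nodes_B PRIMARY KEY (NodeType, NodeID)
)
GO
CREATE VIEW dbo.vw_AllNodes
AS
SELECT NodeType, NodeID, clmLeft, clmRight FROM dbo.Nodes_A
UNION ALL
SELECT NodeType, NodeID, clmLeft, clmRight FROM dbo.Nodes_B
GO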


SQL Server 2014 :: Query Plan For Partitioned Views Not Running As They Should

Mar 29, 2015

In the past I've used partitioned views with check constraints on the source tables to make sure that only the table matching the condition in the WHERE clause on the view was used. In SQL Server 2012 this was working just fine (I had to do some tricks to suppress parameter sniffing, but it was working correctly after doing that). Now I've installed SQL Server 2014 Developer, used exactly the same logic, and in the actual query plan it is still using the other tables. I've tried the following things to avoid this:

- OPTION (RECOMPILE)
- Using dynamic SQL to pass the parameter value as a static string to avoid sniffing.

To explain what I'm doing:

1. I have 3 servers with the same source tables; the only difference in the tables is one column with the server name.
2. I've created a CHECK CONSTRAINT on the server name column on each server.
3. On one of the three servers (in my case server 3) I've set up linked server connections to Server 1 and 2.
4. On Server 3 I've created a partitioned view that is built up like this:

SELECT * FROM [server1].[database].[dbo].[table]
UNION ALL SELECT * FROM [server2].[database].[dbo].[table]
UNION ALL SELECT * FROM [server3].[database].[dbo].[table]

5. To query the partitioned view I use a query like this:

SELECT *
FROM [database].[dbo].[partioned_view_name]
WHERE [server_name] = 'Server2'

Now when I look at the execution plan on the 2014 environment it is still using all the servers instead of just Server2 like it should be. The strange thing is that SQL 2008 and 2012 are working just fine but 2014 seems not to use the correct plan.


SQL Server 2008 :: Partitioned Views Table Elimination Not Working

Jul 7, 2015

I have some Partitioned Views and on all queries using a table for the in clause, table elimination isn't happening.

Check Constraint is on the oid column

This works as expected, only goes to 2 tables;
SELECT *
FROM view_oap_all
WHERE oid IN ( '05231416529481', '06201479586431' )

This works as expected, only goes to 2 tables;
SELECT *
FROM view_oap_all
WHERE oid IN ( SELECT oid
FROM owners
WHERE oid IN ( '05231416529481', '06201479586431' ) )

This one checks all tables (heading names are unique); I've tried this for the last 3 hours on many different tables containing the oid column.

Unless I write the oid values literally, as in the above queries, it just doesn't work.
SELECT *
FROM view_oap_all
WHERE oid IN ( SELECT oid
FROM owners
WHERE headingname = 'TestSystem' )


SQL 2012 :: Using Partitioned Views In Order To Manage Table Sizes

Oct 13, 2015

I have a few databases that are using Partitioned Views in order to manage the table sizes and they all work well for our purposes. Recently I noticed a table that had grown to 400+ million rows and want to partition it as well, so I went about creating new base tables based on the initial table's structure, just adding a column to both table and primary key to be able to build a Partitioned View on them. The first time around, on a test system, everything worked flawlessly but when I put the same structure in place on the production system I get the dreaded "UNION ALL view 'DBName.dbo.RptReportData' is not updatable because the primary key of table '[DBName].[dbo].[RptReportData_201405]' is not included in the union result. [SQLSTATE 42000] (Error 4444)" error.

I have searched high and low and everything I see points to a few directives in order for a UNION ALL view to be updatable:

- Need a partitioning column that is part of the primary key
- Need a CHECK constraint that make the base tables exclusive, i.e. data cannot belong to more than one table
- Cannot have IDENTITY or calculated columns in the base tables
- The INSERT statement needs to specify all columns with actual values, i.e. not DEFAULT

As far as I can tell, my structure fulfills these conditions, but the INSERT fails anyway. The CREATE scripts are below, scripted from SQL Server. I only modified them to be on a single row - it is easier to verify that they are identical in a text editor that way.

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO

[code]....
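For comparison, here is a minimal two-member sketch that does satisfy the four conditions listed above (names are illustrative, not the actual RptReportData schema):

CREATE TABLE dbo.Rpt_201405 (
    PeriodKey int   NOT NULL CHECK (PeriodKey = 201405),      -- partitioning column, non-overlapping ranges
    RowID     int   NOT NULL,
    Amount    money NOT NULL,
    CONSTRAINT PK_Rpt_201405 PRIMARY KEY (PeriodKey, RowID)   -- partitioning column is part of the PK
)
CREATE TABLE dbo.Rpt_201406 (
    PeriodKey int   NOT NULL CHECK (PeriodKey = 201406),
    RowID     int   NOT NULL,
    Amount    money NOT NULL,
    CONSTRAINT PK_Rpt_201406 PRIMARY KEY (PeriodKey, RowID)
)
GO
CREATE VIEW dbo.RptAll
AS
SELECT PeriodKey, RowID, Amount FROM dbo.Rpt_201405
UNION ALL
SELECT PeriodKey, RowID, Amount FROM dbo.Rpt_201406
GO
-- An INSERT through the view must name every column and supply explicit values
INSERT INTO dbo.RptAll (PeriodKey, RowID, Amount) VALUES (201405, 1, 10.00)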


Query Server To Find All Partitioned Tables, Partition Name, Column Used, Partitioned By

Dec 17, 2007

I want to find a way to get partition info for all the tables in all the databases on a server, showing database name, table name, schema name, what it is partitioned by (maybe year, month, day, number, alpha), the column used for partitioning, the current active partition, and the last partition (for date partitions I want to know if the partitioning goes until 2007, so I can add 2008).

all I've come up with so far is:






Code Block

SELECT distinct o.name From sys.partitions p
inner join sys.objects o on (o.object_id = p.object_id)
where o.type_desc = 'USER_TABLE'
and p.partition_number > 1
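A starting point that joins up the SQL Server 2005+ catalog views is sketched below; boundary values map differently for RANGE LEFT and RANGE RIGHT functions, so treat the upper_boundary column as approximate, and run it per database.

SELECT  DB_NAME()            AS database_name,
        s.name               AS schema_name,
        t.name               AS table_name,
        ps.name              AS partition_scheme,
        pf.name              AS partition_function,
        c.name               AS partitioning_column,
        p.partition_number,
        p.rows,
        prv.value            AS upper_boundary      -- NULL for the last partition
FROM sys.tables t
JOIN sys.schemas s             ON s.schema_id = t.schema_id
JOIN sys.indexes i             ON i.object_id = t.object_id AND i.index_id IN (0, 1)
JOIN sys.partition_schemes ps  ON ps.data_space_id = i.data_space_id
JOIN sys.partition_functions pf ON pf.function_id = ps.function_id
JOIN sys.partitions p          ON p.object_id = t.object_id AND p.index_id = i.index_id
JOIN sys.index_columns ic      ON ic.object_id = i.object_id AND ic.index_id = i.index_id AND ic.partition_ordinal = 1
JOIN sys.columns c             ON c.object_id = t.object_id AND c.column_id = ic.column_id
LEFT JOIN sys.partition_range_values prv
                               ON prv.function_id = pf.function_id AND prv.boundary_id = p.partition_number
ORDER BY s.name, t.name, p.partition_number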


Maintaining History

Jun 11, 2008

Hi,
I am working on an application using C#, Visual Studio 2005, and SQL Server 2005.
I have a few tables in SQL Server 2005.
I need to save the history, i.e. all the inserts, updates, and deletes performed on the tables.
Can anyone suggest how I can achieve that?
Should I use triggers and save the changes in another table?
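Something like this trigger is the usual shape of that approach (illustrative table names; Customer_History would be an audit table you create yourself):

CREATE TRIGGER trg_Customer_Audit
ON dbo.Customer
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON

    -- "inserted" holds new/updated values, "deleted" holds old/removed values;
    -- an UPDATE writes rows to both, so you keep the before and after images.
    INSERT INTO dbo.Customer_History (CustomerID, Name, ChangeType, ChangedAt)
    SELECT CustomerID, Name, 'NEW', GETDATE() FROM inserted
    UNION ALL
    SELECT CustomerID, Name, 'OLD', GETDATE() FROM deleted
END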
waiting for your suggestion??
thank you


Maintaining Index

Oct 22, 2000

Please, what is the best way to perform index maintenance? I use 7.0.
We have been having slow server performance, and one of the options is to do index maintenance. I have researched this but could not get a clear picture of what I should do. Has anybody performed the same task before? Thanks for your help!
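For reference, the usual 7.0-era commands are DBCC SHOWCONTIG to measure fragmentation and DBCC DBREINDEX to rebuild; a sketch with an illustrative table name:

-- Measure fragmentation (7.0's DBCC SHOWCONTIG takes the object id)
DECLARE @id int
SET @id = OBJECT_ID('dbo.Orders')
DBCC SHOWCONTIG (@id)

-- Rebuild all indexes on the table with a 90 percent fill factor
DBCC DBREINDEX ('dbo.Orders', '', 90)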


Maintaining Atomicity

Apr 5, 2004

Hello Friends,
I am new to the SQL Server arena. I have implemented a procedure which does a series of insert and update statements, and all of these statements must be executed as a whole or not at all. But if I get an error in some statement, the rest of the statements still get executed. Please suggest a way or code snippet to achieve atomicity in a SQL Server procedure.
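One common pattern for this (in the SQL Server 2000 era) is an explicit transaction with an @@ERROR check after each statement; a sketch with illustrative object names:

CREATE PROCEDURE dbo.DoAtomicWork
AS
BEGIN
    BEGIN TRANSACTION

    INSERT INTO dbo.TableA (Col1) VALUES (1)
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN 1
    END

    UPDATE dbo.TableB SET Col1 = 2 WHERE KeyCol = 10
    IF @@ERROR <> 0
    BEGIN
        ROLLBACK TRANSACTION
        RETURN 1
    END

    COMMIT TRANSACTION
    RETURN 0
END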

regards,
Ch.Praveen Kumar.


Maintaining Statistics

Jan 29, 2008

Scenario:
For the most part we let SQL Server (2005) maintain our statistics for us. However we do have several large processes written in stored procedures. There is one main controller procedure that can call any number of other procedures. This process can take anywhere from 5 minutes to an hour+ to run (based on the size of the client). Back in the days of SQL Server 2000 we found that the performance of this procedure would diminish over time (while it was running). We implemented a queued concept of issuing UPDATE STATISTICS commands. This was done by adding a SQL Server job that ran every 10 minutes looking for new records in a table. Records were inserted at key points in these stored procedures (after large deletes, updates, inserts).

Goal:
Now, with all that background and with 2005, I'd like to review this concept and remove this implementation if possible, or at least decouple the statistics maintenance from the business jobs. In 2005, are there better ways to monitor and maintain statistics in a more administrative (but automated) way?
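For reference, a few of the 2005-era building blocks look like this (database and table names are illustrative):

-- Keep automatic statistics on, and (new in 2005) update them asynchronously
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS ON
ALTER DATABASE MyDb SET AUTO_UPDATE_STATISTICS_ASYNC ON

-- Targeted refresh after a large load/delete inside the batch process
UPDATE STATISTICS dbo.BigTable WITH FULLSCAN

-- Or refresh everything that looks stale
EXEC sp_updatestats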


Maintaining A Database?

Feb 27, 2007

Our database(s) are all over the place - no documentation - lots of rubbish and unused stuff. I'm managing a project focusing on data quality that covers code changes, alterations to DTS packages, schema changes etc.

What I'd like to do is see where the bit I want to change is being used. That might mean which stored procs use a field, and which sprocs use that sproc. Maybe it's which DTS packages use a sproc (and again up the hierarchy). The list is a long one, but basically I need to know what the effects of changes are.

Is there a tool out there that lets me navigate a database to that level of detail? I understand something along the same lines is available for MS Access, but I can't find it for SQL Server. Thanks
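For what it's worth, the built-in starting points are sp_depends and a brute-force text search of syscomments; neither is complete, and the object names here are illustrative:

-- Objects SQL Server thinks reference the table (dependency info is not always complete)
EXEC sp_depends 'dbo.SomeTable'

-- Brute-force text search of procedure/view/trigger definitions
-- (syscomments splits long definitions across rows, so a match can straddle a boundary and be missed)
SELECT DISTINCT OBJECT_NAME(id)
FROM syscomments
WHERE text LIKE '%SomeColumn%'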


Maintaining Security

May 24, 2006

I am a beginner in SQL Server. I have developed a simple accounting application in VB and SQL. Now that I have successfully completed my application, I want to deploy it to my client. So I installed SQL Server and the required VB components on the client's computer. I also created an 'sa' login with a secret password known only to me. I thought my data on that client's computer was fully safe, but later on I found that you can also connect to the SQL Server using the NT administrative account and easily change the data in the database. So now I am worried that if someone accesses the client's computer with the administrator's password, he or she can change my data, resulting in data corruption. So is there any way I can prevent access to the database through the NT administrative account, or any way to track how the data was changed?
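For reference, a commonly used hardening step on SQL Server 2000 is to take the local Administrators group out of the sysadmin role, or drop its login entirely; note that a machine administrator can still copy or restore the database files elsewhere, so this only raises the bar:

-- Remove local admins from the sysadmin role (test carefully - you can lock yourself out)
EXEC sp_dropsrvrolemember 'BUILTIN\Administrators', 'sysadmin'

-- Or remove the Windows group login entirely
EXEC sp_revokelogin 'BUILTIN\Administrators'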


Urgent -- Maintaining Database

Apr 25, 2008

hi all,
I am working on a portal site where I have created 18 tables in one database. I don't know whether I am right or wrong. Should I continue with the same approach, or create two tables, one as a master and another containing the common fields?
But if I create one table for everything, what will happen?
Please tell me what to do and why, as soon as possible.
Thanks for spending your valuable time for me.
 


Maintaining Variable After EXEC

Jul 20, 2005

Hello,
I am fairly new at stored procedures. I have created some that will go through a table and return a start date and an end date that is dependent upon the fiscal period you want, but I then need to use those dates in another stored procedure to retrieve the information I need. My stored procedure looks like this.
======================================================================
CREATE PROCEDURE dbo.R920ExtTotal
    @MthsBack Decimal OUTPUT
AS
DECLARE @sSQL AS NVARCHAR(255), @StartDate as SMALLDATETIME, @EndDate as SMALLDATETIME

Exec @StartDate = GetMthStart @MthsBack
Exec @EndDate = GetMthEnd @MthsBack

SET @sSQL = 'Select count(extension) as Total From r920f00 Where ([date] BETWEEN "' +
            CONVERT(nvarchar, @StartDate) + '" and "' +
            CONVERT(nvarchar, @EndDate) + '")'
Select @sSQL
EXEC (@sSQL)
Return
GO
======================================================================
The problem is my variables @StartDate and @EndDate do not retain their values after the EXEC statement and revert to 01/01/1900. How can I get around this problem?

Thanks!!!!
Chip
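One way around this, assuming GetMthStart and GetMthEnd can be modified, is to pass the dates back through OUTPUT parameters rather than relying on EXEC @var = proc (which captures the procedure's integer return status, hence the 01/01/1900). A sketch with placeholder logic:

-- Hypothetical reworking of GetMthStart; GetMthEnd would follow the same pattern
CREATE PROCEDURE dbo.GetMthStart
    @MthsBack  decimal,
    @StartDate smalldatetime OUTPUT
AS
    -- placeholder logic; the real procedure derives the fiscal period start from its table
    SET @StartDate = DATEADD(month, -CONVERT(int, @MthsBack), GETDATE())
GO

-- The caller keeps the value after the call
DECLARE @StartDate smalldatetime
EXEC dbo.GetMthStart @MthsBack = 3, @StartDate = @StartDate OUTPUT
SELECT @StartDate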


Are Embedded Views (Views Within Views...) Evil And If So Why?

Apr 3, 2006

Fellow database developers,

I would like to draw on your experience with views. I have a database that includes many views. Sometimes views contain other views, and those views in turn may contain views. In fact, I have some views in my database that are a product of nested views up to 6 levels deep! The reasons we did this were:

1. Object-oriented in nature. Makes it easy to work with them.
2. When changing an underlying view (adding new fields, removing etc.), the higher-up views automatically inherit the new information. This makes maintenance very easy.
3. These nested views are only ever used for the reporting side of our application, not for the day-to-day database use by the application. We use Crystal Reports, and Crystal is smart enough (can't believe I just said that about Crystal) to only pull back the fields that are being accessed by the report. In other words, Crystal will issue a "Select field1, field2, field3 from ReportingView Where ...." even though "ReportingView" contains a long list of fields.

Problems I can see:

1. Parent views generally use "Select * From childview". This means that we have to execute a "sp_refreshview" command against all views whenever child views are altered.
2. Parent views return a lot of information that isn't necessarily used.
3. It makes it harder to track down exactly where the information is coming from. You have to drill right through to the child view to see the raw table joins etc.

Does anyone have any comments on this database design? I would love to hear your opinions and tales from the trenches.

Best regards,
Rod.


Maintaining SQL Data On A Remote Host.

Oct 29, 2007

Hi,
What is the preferred way to maintain SQL tables on a remote host? I am a newbie to building ASP.NET websites on a remote host. A stumbling point has been the maintenance of SQL tables on the remote host. I understand about doing complete backups and restores, but I am seeking a quicker way to maintain individual files. I would like to click and edit, but instead am going through the following 30+ clicks. Is there an easier way? Thanks.

For example, what I do now to build a new data table for a hosted website:

1) Design table
   1a) Name
   1b) Fields & Types
2) SQL Server Management Studio Express (assuming existing database)
   2a) Select Database & Tables
   2b) Add new table
   2c) Add fields, Key must be INT for ACCESS
   2d) Save as (Name_Table)
3) MS Access (requires ODBC to be set up first through the Windows control panel)
   3a) Tables / New / Link / ODBC / Machine_Data_Source
   3b) Pick table
   3c) Edit data, as needed
4) To transfer data, first select the database in the VWD solution explorer, then right-click and select the new "Publish to Provider"
   4a) Database Publishing Wizard
   4b) Choose table to script a backup from
   4c) Build script & Copy
5) Start Ipswitch FTP (this step can be replaced by 6e below)
   5a) Locate folder & SQL script file and choose destination directory
   5b) Transfer file
6) Log in to remote host (1and1)
   6a) MS SQL Administration
   6b) Admin (MyLittleTools Admin)
   6c) Tools
   6d) Query Analyser
   6e) Paste script (from step 4)
   6f) Submit (Run)
   6g) Verify table built

FYI: Script to build and populate the new table "Name_Table", built by step 4c above and pasted into the remote host's Query Analyzer at step 6e above:

/****** Object:  Table [dbo].[Name_Table]    Script Date: 10/28/2007 18:03:58 ******/
IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Name_Table]') AND type in (N'U'))
DROP TABLE [dbo].[Name_Table]
GO
/****** Object:  Table [dbo].[Name_Table]    Script Date: 10/28/2007 18:03:58 ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
IF NOT EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[dbo].[Name_Table]') AND type in (N'U'))
BEGIN
CREATE TABLE [dbo].[Name_Table](
    [ID] [int] NOT NULL,
    [Name] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Address] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [City] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [State] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Zip] [nchar](10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Acsz] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Phone] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    [Fax] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL,
    CONSTRAINT [PK_Name_Table_1] PRIMARY KEY CLUSTERED ([ID] ASC)
    WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON)
)
END
GO
INSERT [dbo].[Name_Table] ([ID], [Name], [Address], [City], [State], [Zip], [Acsz], [Phone], [Fax]) VALUES (1, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL)
INSERT [dbo].[Name_Table] ([ID], [Name], [Address], [City], [State], [Zip], [Acsz], [Phone], [Fax]) VALUES (2, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL)
INSERT [dbo].[Name_Table] ([ID], [Name], [Address], [City], [State], [Zip], [Acsz], [Phone], [Fax]) VALUES (3, N'Third name', NULL, NULL, NULL, NULL, NULL, NULL, NULL)


Maintaining SQL Server At Customer Sites

Apr 18, 2003

I am wondering how people maintain their SQL Servers which run at several customer sites where disk space is getting smaller and smaller. What I mean is that we have tables in SQL databases which hold a lot of data consisting of statistics, errors, logs etc.
They grow and grow, and the existing data is not needed anymore once it gets older than, let's say, one year. How do you go about reducing these tables without loading the system too much, given that the major application also runs on the same server?
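One common SQL 2000-era pattern is a scheduled job that purges old rows in small batches, so the delete never holds large locks or bloats the log while the application is running; a sketch with illustrative names:

-- Scheduled job step: trim statistics/log tables in small batches
SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    DELETE FROM dbo.ErrorLog
    WHERE logged_at < DATEADD(year, -1, GETDATE())

    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0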

Thanks for any input

mipo


Maintaining A Log For The Users Connected To Sql Server Db.

Dec 8, 2005

Hi !

I need to maintain a record of how many times each user (e.g. sa) connects to the SQL Server. That is, whenever any person connects to the database, whether through an application or directly, I need to know which SQL user (e.g. sa) they connected as.
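Short of server-side tracing or login auditing, one simple approach is a scheduled job that samples master.dbo.sysprocesses into an audit table you create yourself; a sketch:

-- Scheduled job step: sample who is connected right now
-- (AuditLogin is a table you create; this only sees connections open at sample time)
INSERT INTO dbo.AuditLogin (login_name, host_name, program_name, login_time, sampled_at)
SELECT loginame, hostname, program_name, login_time, GETDATE()
FROM master.dbo.sysprocesses
WHERE spid > 50        -- skip system spids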

Regards,
Shabber Abbas Rizvi.


Suggestions On Maintaining Audit Fields

Apr 14, 2008

Currently all of our tables in several databases have the following columns:

user_added (this is nvarchar)
host_added (this is nvarchar)
date_added (this is datetime)
user_modified (this is nvarchar)
host_modified (this is nvarchar)
date_modified (this is datetime)

Right now our policy is that (a) the _added columns use defaults to populate the data on INSERTS and triggers are generated to update the _modified fields upon an UPDATE of the table.

Our practice has been (a) to manually create these fields in our scripts as we create new tables in our system and (b) create triggers to perform the update anytime we create a new table.

This practice has been fine until recently where we have been outsourcing some of our development and not all of our standards have been adhered to, including this one. I'd like to look at alternatives for somehow maintaining these concepts outside of our development workflows.

The first thing I'd like to inquire about is regarding options to eliminate having developers include these columns in the CREATE TABLE statements. Is it possible in SQL Server 2005 to capture when a CREATE TABLE statement is executed and override/append to the initial CREATE TABLE statement?

The second thing I'd like to inquire about is regarding options to eliminate having developers write the initial trigger that maintains the _modified fields. I guess if there are options to capture when a CREATE TABLE statement is executed, we could possibly generate a CREATE TRIGGER statement against that object as well?
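On both points, SQL Server 2005 DDL triggers can react to CREATE TABLE events; they fire after the statement completes, so they can respond to it (or roll it back) rather than rewrite it. A sketch:

CREATE TRIGGER ddl_TableCreated
ON DATABASE
FOR CREATE_TABLE
AS
BEGIN
    DECLARE @tableName nvarchar(256)
    SET @tableName = EVENTDATA().value('(/EVENT_INSTANCE/ObjectName)[1]', 'nvarchar(256)')

    -- e.g. log the event, flag tables that are missing the audit columns,
    -- or build and EXEC a CREATE TRIGGER statement for the new table
    PRINT 'Table created: ' + @tableName
END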

Another idea I would like thoughts on are using some sort of 'table inheritence' to store this information for all objects in our database? This idea come up when I saw this article - http://www.sqlteam.com/article/implementing-table-inheritance-in-sql-server. Do you think the situation I explained here would fall into this concept?

I'm also open to any other thoughts and/or suggestions.


Maintaining SQL, Defragmenting Index Or Harddrive?

Apr 4, 2007

Hello All!

I have an ASP.NET website with a SQL 2005 DB.
The DB is 1.5 GB with ~10 tables in it. The largest table (website users) has 200k records in it, with 500 new records every day.

I set up this database 4 months ago and haven't touched it since.
I really have no knowledge of what SQL needs in terms of index maintenance / hard drive maintenance.

Lately, the website searches have started to be really slow, and I have started to get timeout and deadlock errors.
I have a few indexes for each table based on the recommendations the MS SQL Database Tuning Advisor gave me.

Some of the indexes are:
Page fullness : 99%
Total Fragmentation: 24%

Others are:
Page fullness : 65%
Total Fragmentation: 99%


I guess I need to start maintaining the DB - defragmenting the indexes or the hard drive?
Can anyone help me and provide a guide/information on what needs to be done to keep SQL running fast and happily,
or a guide on defragmenting indexes and how often I need to defrag?
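For the index side, the usual 2005-era routine is to check sys.dm_db_index_physical_stats and then REORGANIZE or REBUILD; a sketch (the table name is illustrative):

-- Find fragmented indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name                     AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10

-- Light, online defrag of all indexes on a table
ALTER INDEX ALL ON dbo.Users REORGANIZE

-- Full rebuild (heavier, but resets the fill factor too)
ALTER INDEX ALL ON dbo.Users REBUILD WITH (FILLFACTOR = 90)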

Thanks,
Shar


Maintaining Unique Keys When Offline

Aug 7, 2007

If you have a "Orders" table that is being sync'd to subscribers that are ocassionaly offline, and the subscribers add rows to their local Orders table. When they go online to sync with the published "Orders" table, how do you handle keeping the "OrderId" field unique?

Example:
Both salespeople sync the following data down:
OrderId Desc
1 Order 1
2 Test Order



Both salespeople go offline and add orders
Salesperson 1 adds:
OrderId Desc
3 Joes Order

Salesperson 2 adds:
OrderId Desc
3 Kathys Order


Now, when they go back online, they will both sync their orders up to the main database, and they both have an OrderId of 3.
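Two common ways to avoid the collision are to make the key globally unique or to make it composite; a sketch of both (illustrative schema):

-- Option 1: globally unique key, so offline subscribers can't collide
CREATE TABLE dbo.Orders (
    OrderId     uniqueidentifier NOT NULL
                CONSTRAINT DF_Orders_OrderId DEFAULT NEWID()
                CONSTRAINT PK_Orders PRIMARY KEY,
    Description nvarchar(100)    NOT NULL
)

-- Option 2: keep an int OrderId but make the key composite per salesperson,
-- e.g. PRIMARY KEY (SalespersonId, OrderId), so "3" from two laptops is still unique.
-- Merge replication's automatic identity range management is a third option to look at.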







