Partitioning Table Based On A Non Primary Key Column
Oct 25, 2007
We are facing a few issues pertaining to creating a primary key on a non-partitioned column in SQL Server 2005. The detailed scenario is attached as a text file.
Please advise.
Please find the scenario with the example given below:
We have performed the following steps:
1. Creating a partition function
CREATE PARTITION FUNCTION pf_EncounterODS_StateID (CHAR(2))
AS RANGE RIGHT FOR VALUES
(
'CA', -- CA
'MI', -- MI
'NM', -- NM
'OH', -- OH
'TX', -- TX
'UT', -- UT
'WA' -- WA
)
GO
2. Creating a partition scheme
CREATE PARTITION SCHEME [ps_EncounterODS_StateID]
AS PARTITION pf_EncounterODS_StateID TO
(
[PRIMARY],
[ENC_DM_DATA_01], -- CA
[ENC_DM_DATA_03], -- MI
[ENC_DM_DATA_04], -- NM
[ENC_DM_DATA_05], -- OH
[ENC_DM_DATA_06], -- TX
[ENC_DM_DATA_07], -- UT
[ENC_DM_DATA_02] -- WA
)
GO
4. Creating a primary key on the ValidationErrorSID column in the fact table
ALTER TABLE [Fact] ADD CONSTRAINT NQValidationError PRIMARY KEY CLUSTERED ([ValidationErrorSID])
Step 4 throws the following error:
---------------------------------
Column 'StateID' is partitioning column of the index 'NQValidationError'. Partition columns for a unique index must be a subset of the index key.
If we include StateID along with ValidationErrorSID in the index, it works fine. But we need to have only ([ValidationErrorSID]) as the index key.
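One commonly suggested workaround, sketched below on the assumption that the fact table stays partitioned on StateID, is to create the primary key as a nonclustered index on a single filegroup rather than on the partition scheme, so it no longer has to include the partitioning column. The trade-off is that the index is then unaligned, which blocks partition switching while it exists.

-- Sketch only: places the PK on the PRIMARY filegroup instead of ps_EncounterODS_StateID,
-- so it does not need to contain StateID.
ALTER TABLE [Fact]
ADD CONSTRAINT NQValidationError
    PRIMARY KEY NONCLUSTERED ([ValidationErrorSID])
    ON [PRIMARY]
GO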
We have a database with 6-7 growing tables. All the tables have primary and foreign key relationships. I want to partition them based on the date column.
I need 3 partitions:
The first partition has to hold the present data, the second partition needs to hold the previous year's data (SAS storage), and the third partition needs to hold all the older data and must live in the archive database.
I understand that first we need to disable the constraints (indexes, PK and FK), then create the partition function and partition scheme, and then create the constraints again.
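A minimal sketch of the partition function and scheme for a three-way split on a date column, using hypothetical boundary dates and filegroup names (ArchiveFG, PrevYearFG, CurrentFG), might look like this; note that a partition scheme can only span filegroups within one database, so moving the oldest partition into a separate archive database would be an additional step.

-- Sketch only: boundary values and filegroup names are placeholders.
CREATE PARTITION FUNCTION pf_ByYear (datetime)
AS RANGE RIGHT FOR VALUES ('20150101', '20160101')
GO
CREATE PARTITION SCHEME ps_ByYear
AS PARTITION pf_ByYear TO (ArchiveFG, PrevYearFG, CurrentFG)
GO
-- The clustered index / primary key would then be rebuilt on ps_ByYear,
-- with the date column added to the key so the index can be aligned.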
I have a requirement for table partitioning. We have 10 years of data in a table, around 30 billion rows, on a SQL Server 2005 instance that we are upgrading to 2014. We have to keep 7 years of data. There are no keys on the table and no date column. Since it is a huge amount of data with many users, it slows processing down. We are thinking of partitioning the 7 years on a quarterly basis, but as I said there is no date column on the table; we have to use a reference table to get the date. Is there a way I can do the partitioning without adding a date column to the table? Also, will partitioning make queries faster?
I have thought of three ways to do it: 1. leave it as it is; 2. partition the 7 years on one server; 3. keep 3 years partitioned on server1 and 4 years on server2 (for those 4 years, would a snapshot be better?).
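For what it's worth, the partitioning column has to exist in the table itself (a computed column used for partitioning must be persisted and cannot reference another table), so the date would normally be materialized onto the big table first and the quarterly partition function built on that. A rough sketch with hypothetical table and column names:

-- Sketch only: BigTable, RefDates, RefID and TxnDate are placeholder names.
ALTER TABLE dbo.BigTable ADD TxnDate date NULL
GO
UPDATE b
SET b.TxnDate = r.TxnDate
FROM dbo.BigTable AS b
JOIN dbo.RefDates AS r ON r.RefID = b.RefID   -- assumed join key to the reference table
GO
-- A quarterly partition function/scheme would then be created on TxnDate
-- and the clustered index rebuilt on it.

As for speed, partitioning mainly helps manageability (loading, archiving, index maintenance); queries only get faster when they filter on the partitioning column so that partition elimination can kick in.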
I need to add a child table that will tell us who the participant's counselor is. What I did was run a Make Table query based off the primary key of the parent table and made that the link (foreign key) for the People_tbl and the Counselor_tbl, so if the counselor changes, the user adds a record to the counselor table and then puts in the effective date. The problem is that when I run a report it doesn't show the present counselor; it always shows the old counselor.
Code:
SELECT Student_ind.StudentFirstName, Student_ind.StudentLastName, Student_ind.[Student ID],
       People_tbl.[Family ID], People_tbl.FirstName, People_tbl.LastName, People_tbl.[Parent ID]
FROM People_tbl
RIGHT OUTER JOIN Student_ind ON People_tbl.[Family ID] = Student_ind.[Family ID]
WHERE (People_tbl.LastName = @Enter_LastName) AND (People_tbl.FirstName = @Enter_FirstName)
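One way to report only the current counselor, sketched below with assumed column names in Counselor_tbl ([Parent ID], CounselorName, EffectiveDate), is to join to just the row with the latest effective date per person:

-- Sketch only: the Counselor_tbl column names are assumptions.
SELECT p.[Family ID], p.FirstName, p.LastName, c.CounselorName, c.EffectiveDate
FROM People_tbl AS p
JOIN Counselor_tbl AS c
    ON c.[Parent ID] = p.[Parent ID]
   AND c.EffectiveDate = (SELECT MAX(c2.EffectiveDate)
                          FROM Counselor_tbl AS c2
                          WHERE c2.[Parent ID] = p.[Parent ID])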
I have a Users table that I use for membership. I am using username varchar(30) as the primary key for this table since username will always be unique.
The question I have is regarding how SQL Server actually stores data:
I see that when I add users, they are always stored alphabetically sorted on username.
I was expecting that users would appear in the Users table in the order they were added.
Example: I have 3 users (john, jonah, wilson). Now I add a 4th user with username = 'bob'.
If I execute SELECT * FROM users, it returns (bob, john, jonah, wilson). Notice that bob has become the first row of the table.
My question: is SQL Server moving the 3 older rows to make room for 'bob', and also rebuilding part of the index due to this new username?
If this is the case, it would have a big impact if I had 100K users and added one user that becomes the first row; 99,999 rows would have to move.
Bottom line: inserts and deletes would be very expensive.
I know SQL Server keeps data physically sorted on the PK, but I am concerned here because the rows are losing the order in which they were inserted.
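For what it's worth, the clustered index keeps rows in username order logically (new rows go onto the appropriate page, splitting a page when needed) rather than physically shuffling every existing row, so inserting 'bob' does not rewrite 99,999 rows. If the original insertion order matters, the usual approach is to record it explicitly; a small sketch with a hypothetical column name:

-- Sketch only: CreatedSeq is a hypothetical column that records insertion order.
ALTER TABLE Users ADD CreatedSeq int IDENTITY(1,1) NOT NULL
GO
SELECT * FROM Users ORDER BY CreatedSeq   -- rows in the order they were added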
I want to have another employee table named employee_modified with columns:
Empno, empname, salary, commission, derived_column1 (salary + commission), derived_column2 (derived_column1 + xxxx), and so on, deriving further columns based on the earlier derived columns.
Is it possible to do this, or am I doing something wrong?
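In SQL Server a computed column cannot reference another computed column in the same table, so the later expressions have to repeat the earlier ones rather than build on derived_column1 directly. A sketch of what employee_modified could look like under that restriction ('xxxx' from the question is stood in for by a literal):

-- Sketch only: 1000 stands in for the unspecified 'xxxx'; derived_column2
-- repeats derived_column1's expression because computed columns cannot
-- reference other computed columns.
CREATE TABLE employee_modified (
    Empno           int         NOT NULL,
    empname         varchar(50) NULL,
    salary          money       NULL,
    commission      money       NULL,
    derived_column1 AS (salary + commission),
    derived_column2 AS (salary + commission + 1000)
)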
I have 2 tables: Order(ID, Quantity) and Product(ID, Name, Price), and I want to add a calculated field in the Order table based on the price column in the Product table. How do I do that?
This query returns the values I want:
select a.quantity * b.price from tblCustomerPurchases as a join tblProduct as b on a.ID=b.ID
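Since a computed column cannot reference another table directly, one common approach is to expose that calculation through a view (sketched below with the tblCustomerPurchases/tblProduct names used in the query); a scalar function could back a computed column instead, but the view is usually simpler.

-- Sketch only: wraps the working query in a view instead of storing the value.
CREATE VIEW vwOrderTotals
AS
SELECT a.ID,
       a.quantity,
       b.price,
       a.quantity * b.price AS LineTotal
FROM tblCustomerPurchases AS a
JOIN tblProduct AS b ON a.ID = b.ID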
I want to add a new primary key to an existing table which already has a primary key. But I do not want to remove the old primary key, since there are many records and the old primary key also has relationships with other tables.
When I run this query:
ALTER TABLE hem154 ADD indexNO uniqueidentifier DEFAULT NEWID()
GO
ALTER TABLE hem154 ADD CONSTRAINT pk_hem154_indexNo PRIMARY KEY (indexNO)
GO
Note: Hem154 is the table name; indexNo is the column name.
I get this runtime error:
Msg 1779, Level 16, State 0, Line 1
Table 'hem154' already has a primary key defined on it.
Msg 1750, Level 16, State 0, Line 1
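A table can have only one PRIMARY KEY, which is what Msg 1779 is saying. The usual approach is to keep the existing primary key and enforce the new column with a UNIQUE constraint instead; foreign keys can reference a UNIQUE constraint as well as a primary key.

-- Sketch only: enforces uniqueness on the new column without touching the existing PK.
ALTER TABLE hem154 ADD CONSTRAINT uq_hem154_indexNO UNIQUE (indexNO)
GO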
This is a fairly simple question, but what is the easiest way to create a new numbered column (where the value is simply the row number) in an existing table and set it as a primary key?
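One straightforward way, sketched with a hypothetical table name, is to add an IDENTITY column (SQL Server numbers the existing rows automatically when the column is added) and then promote it to the primary key:

-- Sketch only: MyTable is a placeholder name.
ALTER TABLE dbo.MyTable ADD RowNo int IDENTITY(1,1) NOT NULL
GO
ALTER TABLE dbo.MyTable ADD CONSTRAINT PK_MyTable_RowNo PRIMARY KEY (RowNo)
GO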
We want to add a new int identity column as a primary key to an already existing table that has a primary key on Guid. Here is the DDL:
CREATE TABLE [dbo].[VRes](
    [VResID] [uniqueidentifier] NOT NULL,
    [Mes] [varchar](max) NOT NULL,
    [PID] [uniqueidentifier] NOT NULL,
    [Segt] [int] NOT NULL,
[code]....
Also, we currently have 3 million rows in this table. Is having an integer column as the identity column and primary key better, or should I consider using BIGINT?
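An int covers roughly 2.1 billion values, so 3 million rows leaves ample headroom; BIGINT is only worth it if the row count is ever expected to approach that limit. A possible way to make the switch, sketched against the VRes DDL above (the existing primary key constraint name is assumed, and any foreign keys pointing at it would need to be repointed first):

-- Sketch only: PK_VRes is an assumed name for the current primary key constraint.
ALTER TABLE dbo.VRes ADD VResNo int IDENTITY(1,1) NOT NULL
GO
ALTER TABLE dbo.VRes DROP CONSTRAINT PK_VRes                          -- drop the GUID primary key
ALTER TABLE dbo.VRes ADD CONSTRAINT UQ_VRes_VResID UNIQUE (VResID)    -- keep the GUID unique for existing references
ALTER TABLE dbo.VRes ADD CONSTRAINT PK_VRes_VResNo PRIMARY KEY CLUSTERED (VResNo)
GO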
Hi, I have two tables. I want to update two columns in my first table, [ADD_BSL_SALES] and [ADD_BSL_COST], with the two values [Sales] and [Costs] held in my #temp table, but based on a RUN_DATE from my first table. Can anyone point me in the right direction? Thanks in advance, Bryan.
CREATE TABLE [GROSMARG_AUDIT_ADDITION] (
    [RUN_DATE] [datetime] NULL,
    [SALES_DIFF] [numeric](19, 6) NULL,
    [COST_DIFF] [numeric](19, 6) NULL,
    [ADD_BSL_SALES] [numeric](18, 0) NULL,
    [ADD_BSL_COST] [numeric](18, 0) NULL,
    [ADD_SALES_DIFF] [numeric](18, 0) NULL,
    [ADD_COST_DIFF] [numeric](18, 0) NULL
) ON [PRIMARY]
GO
--- Second table: CREATE TABLE #DUPTOTALS with [Sales] and [Costs] columns
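If #DUPTOTALS also carries the RUN_DATE (that column is not shown in the post, so it is assumed here), the update can be a join between the two tables along these lines:

-- Sketch only: assumes #DUPTOTALS has a RUN_DATE column to join on.
UPDATE g
SET g.ADD_BSL_SALES = d.[Sales],
    g.ADD_BSL_COST  = d.[Costs]
FROM GROSMARG_AUDIT_ADDITION AS g
JOIN #DUPTOTALS AS d ON d.RUN_DATE = g.RUN_DATE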
I have come up with a scenario where I have three tables: Product, Service and Subscription. I have to create one table, say Bundle, where I can have some of the product IDs, service IDs and subscription IDs; i.e. a bundle may contain some products, services and subscriptions. How can I design these relations?
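One common way to model this, sketched with assumed table and column names, is a Bundle header table plus a BundleItem link table, so a bundle can contain any mix of products, services and subscriptions:

-- Sketch only: names and the referenced key columns are assumptions.
CREATE TABLE Bundle (
    BundleID   int IDENTITY(1,1) PRIMARY KEY,
    BundleName varchar(100) NOT NULL
)
CREATE TABLE BundleItem (
    BundleItemID   int IDENTITY(1,1) PRIMARY KEY,
    BundleID       int NOT NULL REFERENCES Bundle(BundleID),
    ProductID      int NULL REFERENCES Product(ProductID),
    ServiceID      int NULL REFERENCES Service(ServiceID),
    SubscriptionID int NULL REFERENCES Subscription(SubscriptionID),
    -- exactly one of the three IDs should be filled in per row
    CHECK ( (CASE WHEN ProductID      IS NOT NULL THEN 1 ELSE 0 END
           + CASE WHEN ServiceID      IS NOT NULL THEN 1 ELSE 0 END
           + CASE WHEN SubscriptionID IS NOT NULL THEN 1 ELSE 0 END) = 1 )
)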
Hi, I don't know if I missed anything. I have 2 member tables and one partitioned view in SQL 2000 defined as follows:
CREATE VIEW Server1.dbo.UTable
AS
SELECT * FROM Server1..pTable1
UNION ALL
SELECT * FROM Server2..pTable2

CREATE TABLE pTable1 (
    [ID1] [int] IDENTITY (1000, 2) NOT NULL,
    [ID2] [int] NOT NULL,
    ...<other columns>...
    CONSTRAINT [PK_tblLot] PRIMARY KEY CLUSTERED ([ID1],[ID2]) ON [PRIMARY],
    CHECK ([ID2] = 1015)
) ON [PRIMARY]

CREATE TABLE [pTable2] (
    [ID1] [int] IDENTITY (1001, 2) NOT NULL,
    [ID2] [int] NOT NULL,
    ...<other columns>...
    CONSTRAINT [PK_tblLot] PRIMARY KEY NONCLUSTERED ([ID1],[ID2]) WITH FILLFACTOR = 90 ON [PRIMARY],
    CHECK ([ID2] < 1015)
) ON [PRIMARY]

SELECT is working fine. However, I get an error message if I issue an update command such as:
UPDATE UTable
SET somecol = someval
WHERE somecol2 = somecond

Server: Msg 4436, Level 16, State 12, Line 1
UNION ALL view 'UTable' is not updatable because a partitioning column was not found.

Anyone have any idea? ID2 is my partitioning column; why doesn't SQL 2000 see it? It is part of the primary key, has a check constraint, and has no other constraint on it. Am I missing something? Thanks a lot.
In all 3 tables I have QAAccountID and FunctionalAreaID as the primary key. I am getting this error when I try to insert:
'UNION ALL view 'pv_mainQAStatus' is not updatable because a partitioning column was not found.' How can I overcome this? I am used to writing an INSTEAD OF trigger.
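An INSTEAD OF INSERT trigger on the view is a workable pattern here. The sketch below assumes two member tables and treats FunctionalAreaID as the partitioning column with a CHECK range in each member table (both assumptions); the trigger simply routes each inserted row to the right table:

-- Sketch only: member table names, column lists and the range predicates are placeholders.
CREATE TRIGGER tr_pv_mainQAStatus_ins ON pv_mainQAStatus
INSTEAD OF INSERT
AS
BEGIN
    INSERT INTO dbo.QAStatus_Member1 (QAAccountID, FunctionalAreaID /*, other columns */)
    SELECT QAAccountID, FunctionalAreaID /*, other columns */
    FROM inserted
    WHERE FunctionalAreaID = 1          -- member table 1's CHECK range

    INSERT INTO dbo.QAStatus_Member2 (QAAccountID, FunctionalAreaID /*, other columns */)
    SELECT QAAccountID, FunctionalAreaID /*, other columns */
    FROM inserted
    WHERE FunctionalAreaID <> 1         -- member table 2's CHECK range
END
GO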
I am using Reporting Services 2000. If you find that Reporting Services 2005 would resolve this issue, please let me know as well. But I want to mention that I would prefer a way to fix this without changing Reporting Services versions.
I have a table that has a group on ProductTypes. This group is set to page break at end. What I need to do is conditionally hide an entire column based on the current group's ProductType.
Can you help me figure this one out? I've tried everything I found on the net, especially everything on this page: http://blogs.msdn.com/chrishays/rss.xml
I have the following databases: DB1, DB2, DB3, DB4, DB5, ...
I have to loop through each of the databases and find out whether the database has a table whose name contains 'Documents' (like 'tbdocuments' or 'tbemployeedocuments', and so on).
If a table name containing the word 'Documents' is found in that database, I have to add a column 'IsValid varchar(100)' to that table, and there can be more than one 'Documents' table in a database.
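A rough sketch of one way to do this uses the undocumented sp_MSforeachdb procedure with dynamic SQL; the LIKE pattern, the system-database filter and the check for an existing IsValid column are assumptions made to keep it re-runnable:

-- Sketch only: adds IsValid varchar(100) to every user table whose name
-- contains 'documents', in every user database.
EXEC sp_MSforeachdb N'
USE [?];
IF DB_ID() > 4   -- skip master, tempdb, model, msdb
BEGIN
    DECLARE @sql nvarchar(max);
    SET @sql = N'''';
    SELECT @sql = @sql + N''ALTER TABLE '' + QUOTENAME(s.name) + N''.'' + QUOTENAME(t.name)
                       + N'' ADD IsValid varchar(100) NULL; ''
    FROM sys.tables AS t
    JOIN sys.schemas AS s ON s.schema_id = t.schema_id
    WHERE t.name LIKE N''%documents%''
      AND NOT EXISTS (SELECT 1 FROM sys.columns AS c
                      WHERE c.object_id = t.object_id AND c.name = N''IsValid'');
    EXEC (@sql);
END'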
Table: classes. Columns: classID, hp
Table: char_active. Columns: name, classID, hp
The classes table is already populated.
What I want to do is insert a new row into char_active using the name and classID columns, and have the hp column auto-populate based on the corresponding value in the classes table. This is the trigger I wrote, but I'm getting the error:
Incorrect syntax near 'inserted'.
I'm new to SQL; this is actually the first trigger I've tried writing.
create trigger new_hp on curr_chars.char_active
instead of insert
as
declare @hp tinyint
select @hp = lists.classes.hp from lists.classes where lists.classes.classID = inserted.classID
insert into curr_chars.char_active (name, classID, hp) inserted.name, inserted.classID, @hp
go
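A corrected version might look like the sketch below (assuming the lists.classes and curr_chars.char_active schemas described above). The inserted pseudo-table can only be referenced inside a query, and the INSERT needs a SELECT, so joining inserted to classes fixes the syntax error and also handles multi-row inserts:

-- Sketch only: assumes the schemas/tables named in the question.
CREATE TRIGGER new_hp ON curr_chars.char_active
INSTEAD OF INSERT
AS
BEGIN
    -- pull hp from lists.classes for every inserted row
    INSERT INTO curr_chars.char_active (name, classID, hp)
    SELECT i.name, i.classID, c.hp
    FROM inserted AS i
    JOIN lists.classes AS c ON c.classID = i.classID
END
GO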
I'm hoping someone can help with a task I've been given. I need to write a trigger which will act effectively as a method of automatically distributing incoming call ticket records. See the DDL below for the creation of the Assignment table, which holds information on the call ticket workload.
SELECT COUNT(CallID) AS [Total Calls], AssignmentGroup, Assignee
FROM #Assignment
GROUP BY AssignmentGroup, Assignee
ORDER BY COUNT(CallID) DESC, AssignmentGroup, Assignee
What I need to do is write a trigger on INSERT to automatically update the Assignee column with the name of the person who currently has the fewest active calls. For example, using the data above, the next PC Support call will go to Mickey Mouse, and the next two Service Desk calls will go to Jim Smith.
So the logic for the trigger would be:
UPDATE #Assignment SET Assignee = (SELECT Assignee FROM #Assignment WHERE COUNT(CallID) = MIN(COUNT(CallID))
But that's only the logic; obviously it doesn't work, and the syntax is nowhere near correct.
Does anyone have any ideas or pointers as to how I should go about this?
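One possible shape for it, sketched against an assumed permanent dbo.Assignment table with CallID, AssignmentGroup and Assignee columns (and new tickets arriving with Assignee left NULL), is an AFTER INSERT trigger that picks, per group, whichever assignee currently has the fewest calls:

-- Sketch only: table shape and the NULL-Assignee convention are assumptions.
CREATE TRIGGER trg_Assignment_AutoAssign ON dbo.Assignment
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    WITH Workload AS
    (
        SELECT AssignmentGroup, Assignee,
               ROW_NUMBER() OVER (PARTITION BY AssignmentGroup
                                  ORDER BY COUNT(CallID)) AS rn
        FROM dbo.Assignment
        WHERE Assignee IS NOT NULL
        GROUP BY AssignmentGroup, Assignee
    )
    UPDATE a
    SET a.Assignee = w.Assignee
    FROM dbo.Assignment AS a
    JOIN inserted AS i ON i.CallID = a.CallID
    JOIN Workload AS w ON w.AssignmentGroup = a.AssignmentGroup AND w.rn = 1
    WHERE a.Assignee IS NULL;
END
GO

If several tickets for the same group arrive in one statement they will all go to the same person; spreading them within a batch would need a slightly more involved ranking.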
Hi, I am having a problem auditing column data in tables. My requirement is to write a trigger which is capable of auditing columns that will be added in the future, without using dynamic SQL. Is there any way to do so? I feel it is possible if I can get the column data based on ordinal position. Can anybody suggest an approach? My setup is like this: I have a base_table to be audited, an Audit_spec table which contains the name of the table and the columns to be audited, and an Audit table which actually captures the table name, column name, old value and new value. I have to audit only those columns listed in Audit_spec. If a schema change happens to base_table (like a new column being added) and I want that column to be audited, I should be able to handle the newly added column without any changes to my trigger code.
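One way to avoid dynamic SQL inside the trigger is to capture the entire before/after row images as XML, which automatically includes any columns added to base_table later; the filtering against Audit_spec can then happen when the audit data is read. This is a different storage shape from the column-per-row Audit table described above, so treat it as an alternative sketch rather than a drop-in replacement:

-- Sketch only: stores whole-row images as XML instead of one row per audited column.
CREATE TABLE Audit_Xml (
    AuditID   int IDENTITY(1,1) PRIMARY KEY,
    TableName sysname  NOT NULL,
    AuditedAt datetime NOT NULL DEFAULT GETDATE(),
    OldRow    xml NULL,
    NewRow    xml NULL
)
GO
CREATE TRIGGER trg_base_table_audit ON base_table
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    INSERT INTO Audit_Xml (TableName, OldRow, NewRow)
    SELECT N'base_table',
           (SELECT * FROM deleted FOR XML PATH('row'), TYPE),
           (SELECT * FROM inserted FOR XML PATH('row'), TYPE);
END
GO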
I have a set of data spread across a number of tables regarding stock market data. An example of this follows:
Market Capitalization...
Date        CompA   CompB
01/01/11    100     5
02/01/11    102     4
Share Price....
Date        CompA   CompB
01/01/11    100     100
02/01/11    101     99
Event Data...
Date        Company
01/01/11    CompA
02/01/11    CompB
Pretty simply, I need a way to retrieve the market capitalisation and share price data based on the event data. So for instance: there is an event on 01/01/11 involving CompA, and the market capitalisation on that day was 100; for the next event it was 4 for CompB.
I can also transpose the data so that the company name is in the rows and the dates are in the columns for the market cap and share price tables, but this leads to the issue that when I try to get the data, I don't know how to query the correct company for that date.
For instance: SELECT Event.Date, Event.Company FROM Event
How do I now say...
SELECT MarketCapitalisation.Column WHERE Column = Event.Company AND MarketCapitalisation.Date = Event.Date.
I have played around with a few basic joins, but I am having issues with the principle of that second-to-last line of SQL (so only getting the correct column).
I still have a copy of the data in Excel so I can flip things around as needed, but that would only mean I would have the issue of WHERE Column = Event.Date instead of Event.Company.
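If the market cap and share price data are kept (or reshaped once, e.g. with UNPIVOT) in a tall layout of (Date, Company, Value), the lookup becomes a plain join on date and company. The sketch below assumes tall tables named MarketCap and SharePrice in that shape:

-- Sketch only: assumes tall tables MarketCap(Date, Company, Value) and
-- SharePrice(Date, Company, Value) instead of one column per company.
SELECT e.Date,
       e.Company,
       m.Value AS MarketCapitalisation,
       s.Value AS SharePrice
FROM Event AS e
JOIN MarketCap  AS m ON m.Date = e.Date AND m.Company = e.Company
JOIN SharePrice AS s ON s.Date = e.Date AND s.Company = e.Company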
I have 3 columns. I would like to update a table based on the job_cd and permit_nbr columns. If rows have the same job_cd and permit_nbr, the reference number should be the same; otherwise it should take MAX(reference_nbr) from the table + 1. This applies to all rows where the reference_nbr column is NULL.
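One hedged reading of that requirement, sketched with an assumed table name, is: every distinct (job_cd, permit_nbr) pair whose reference_nbr is NULL gets the same new number, numbered upward from the current maximum:

-- Sketch only: dbo.Permits is a placeholder table name.
DECLARE @maxref int;
SELECT @maxref = ISNULL(MAX(reference_nbr), 0) FROM dbo.Permits;

WITH NewRefs AS
(
    SELECT job_cd, permit_nbr,
           DENSE_RANK() OVER (ORDER BY job_cd, permit_nbr) AS rk
    FROM dbo.Permits
    WHERE reference_nbr IS NULL
    GROUP BY job_cd, permit_nbr
)
UPDATE p
SET p.reference_nbr = @maxref + n.rk
FROM dbo.Permits AS p
JOIN NewRefs AS n ON n.job_cd = p.job_cd AND n.permit_nbr = p.permit_nbr
WHERE p.reference_nbr IS NULL;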
I have added a Slowly Changing Dimension transformation to an SSIS package and have launched the Wizard to edit it. After selecting the source (a SQL Server 2005 instance), if I select a very large table (9+ million rows), I'm encountering two strange behaviors: 1. The wizard hangs for several minutes before displaying the columns from that table. 2. The wizard does not display the primary key column. This, of course, is the column I want to designate as the "business key", but can't because it's not displayed. I know this is more like a fact table than a dimension, but this is not a data warehouse. This is just a very large table, and I need to update a field in certain records based on the contents of a source text file. Is there another transformation I should use to perform updates?
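Since this is a one-field update of selected rows rather than a true dimension load, a common alternative is to skip the Slowly Changing Dimension transformation entirely: bulk-load the text file into a staging table with a plain data flow (or BULK INSERT) and then run a set-based UPDATE from an Execute SQL task, along these lines (table and column names below are placeholders):

-- Sketch only: dbo.BigTable, dbo.StagingFile, BusinessKey and SomeField are assumed names.
UPDATE t
SET t.SomeField = s.SomeField
FROM dbo.BigTable AS t
JOIN dbo.StagingFile AS s ON s.BusinessKey = t.BusinessKey
WHERE ISNULL(t.SomeField, '') <> ISNULL(s.SomeField, '')   -- only touch rows that actually change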
I want to update the Flag column in the second table based on the Adder names.
If the application has at least one AIX row and the Adder name is UDB, then the flag should be True. If the application has more than one AIX row and the Adder names are different, then the flag should be NULL.
AppName | OS      | Adder
App1    | Windows | Null
App1    | Linux   | UDB
App1    | AIX     | UDB
App1    | Linux   | Sql
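One hedged interpretation of that rule, sketched with assumed table names (AppOS for the table above, AppFlag for the second table) and an assumed character Flag column, sets the flag to 'True' only for applications that have at least one AIX row whose Adder is UDB and leaves it NULL otherwise:

-- Sketch only: table names, the Flag datatype and the exact rule are assumptions.
UPDATE f
SET f.Flag = CASE WHEN EXISTS (SELECT 1
                               FROM dbo.AppOS AS a
                               WHERE a.AppName = f.AppName
                                 AND a.OS = 'AIX'
                                 AND a.Adder = 'UDB')
                  THEN 'True'
                  ELSE NULL
             END
FROM dbo.AppFlag AS f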
I'm trying to extract a column from the table based on certain conditions. This is for PowerPivot.
Here is the scenario:
I have a table "tb1" with (project_id, month_end_date, monthly_proj_cost ) and table "tb2" with (project_id, key_member_type, key_member, start_dt_active, end_dt_active).
I would like to extract key_member where key_member_type = "PM" and the member is active as of tb1.month_end_date.
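In T-SQL terms (which could feed the PowerPivot model as a query or a view), the lookup is a join from tb1 to tb2 filtered on the member type and on the month-end date falling inside the active window; the sketch below assumes that a NULL end_dt_active means the member is still active:

-- Sketch only: NULL end_dt_active is assumed to mean 'still active'.
SELECT t1.project_id,
       t1.month_end_date,
       t1.monthly_proj_cost,
       t2.key_member AS project_manager
FROM tb1 AS t1
LEFT JOIN tb2 AS t2
       ON t2.project_id = t1.project_id
      AND t2.key_member_type = 'PM'
      AND t1.month_end_date >= t2.start_dt_active
      AND (t2.end_dt_active IS NULL OR t1.month_end_date <= t2.end_dt_active)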
I am using CROSS APPLY instead of UNPIVOT to unpivot more than one column. I am wondering whether I can dynamically replace the column names based on different tables. The example code that I have working is based on the "Allergy" table, and I have thirty more specialty tables to go. I'll show the working code first, then an example of another table's columns to show the differences:
select [uplift specialty], [member po], [practice unit name], [final nomination status],
       [final uplift status], [final rank], [final uplift percentage],
       practiceID = row_number() over (partition by [practice unit name] order by Metricname),
       metricname, Metricvalue, metricpercentilerank
[code]....
Rheumatology table: the columns that vary start with "GDR" and [GDR Percentile Rank], so I'm just showing those:
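For the per-table column lists, one option is to keep the CROSS APPLY (VALUES ...) shape and have the VALUES list generated per specialty table from sys.columns (for example WHERE name LIKE 'GDR%' for Rheumatology) and executed with sp_executesql. The static shape being generated would look like the sketch below, where [GDR Score] is a made-up stand-in for one of the varying columns:

-- Sketch only: dbo.Rheumatology and [GDR Score] are placeholders; this VALUES
-- list is the part that changes (or is generated) per specialty table.
SELECT r.[uplift specialty], r.[member po], r.[practice unit name],
       u.MetricName, u.MetricValue
FROM dbo.Rheumatology AS r
CROSS APPLY (VALUES
    (N'GDR Score',           r.[GDR Score]),
    (N'GDR Percentile Rank', r.[GDR Percentile Rank])
) AS u (MetricName, MetricValue)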
I have a table #Vert with a value column. This data needs to be updated into two channel columns in the #Hori table, based on the channel number in the #Vert table.
CREATE TABLE #Vert (FILTER VARCHAR(3), CHANNEL TINYINT, VALUE TINYINT)
INSERT #Vert VALUES ('ABC', 1, 22),('ABC', 2, 32),('BBC', 1, 12),('BBC', 2, 23),('CAB', 1, 33),('CAB', 2, 44)
-- COMBINATION OF FILTER AND CHANNEL IS UNIQUE
CREATE TABLE #Hori (FILTER VARCHAR(3), CHANNEL1 TINYINT, CHANNEL2 TINYINT)
INSERT #Hori VALUES ('ABC', NULL, NULL),('BBC', NULL, NULL),('CAB', NULL, NULL)
-- FILTER IS UNIQUE IN #HORI TABLE
One way to achieve this is to write two update statements. After update, the output you see is my desired output
UPDATE H SET CHANNEL1 = VALUE FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER WHERE V.CHANNEL = 1 -- updates only channel1
UPDATE H SET CHANNEL2 = VALUE FROM #Hori H JOIN #Vert V ON V.FILTER = H.FILTER WHERE V.CHANNEL = 2 -- updates only channel2
SELECT * FROM #Hori -- this is the desired output
My channel numbers grow in the #Vert table (1, 2, 3, 4, and so on), and likewise Channel3, Channel4, and so on appear in the #Hori table, so I cannot keep writing more and more update statements. One other way is to pivot the #Vert table and do a single update into #Hori.
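One way to keep it to a single statement per run, however many channels exist, is to build the column list from the distinct channel numbers and drive a PIVOT-based UPDATE through dynamic SQL; MAX is a safe aggregate because FILTER plus CHANNEL is unique. A sketch:

-- Sketch only: generates and runs one UPDATE covering every channel present in #Vert.
DECLARE @cols nvarchar(max), @set nvarchar(max), @sql nvarchar(max);

SELECT @cols = STUFF((SELECT DISTINCT ',' + QUOTENAME(CAST(CHANNEL AS varchar(10)))
                      FROM #Vert FOR XML PATH('')), 1, 1, ''),
       @set  = STUFF((SELECT DISTINCT ', CHANNEL' + CAST(CHANNEL AS varchar(10))
                             + ' = P.' + QUOTENAME(CAST(CHANNEL AS varchar(10)))
                      FROM #Vert FOR XML PATH('')), 1, 2, '');

SET @sql = N'UPDATE H SET ' + @set + N'
FROM #Hori AS H
JOIN (SELECT FILTER, ' + @cols + N'
      FROM #Vert PIVOT (MAX(VALUE) FOR CHANNEL IN (' + @cols + N')) AS PV) AS P
  ON P.FILTER = H.FILTER;';

EXEC sp_executesql @sql;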
I have a results table that was created from many different sources in SSIS. I have done calculations and created derived columns in it. I am trying to figure out whether there is a way to remove duplicate rows from this table without first writing it to a temp SQL table and then parsing through it to remove them.
Each row has a like key in a column; I would like to remove duplicate rows, keeping specific columns in the resulting row based on the data in this key field.
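If the de-duplication can happen in the destination table after the data flow (for example in a follow-on Execute SQL task), a common pattern is ROW_NUMBER partitioned by the key; inside the data flow itself, the Sort transformation's "Remove rows with duplicate sort values" option is the closest built-in alternative, but it gives less control over which row survives. The sketch below uses placeholder names for the results table, the key column and the column that decides which duplicate to keep:

-- Sketch only: dbo.Results, KeyField and OrderField are placeholder names.
WITH Ranked AS
(
    SELECT *,
           ROW_NUMBER() OVER (PARTITION BY KeyField
                              ORDER BY OrderField DESC) AS rn
    FROM dbo.Results
)
DELETE FROM Ranked
WHERE rn > 1;   -- keeps one row per KeyField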