SQL 2012 :: Apply Primary Key On A Huge Table So Downtime Is Minimal?

Jul 30, 2015

I have a table (named table1) with 20 million rows. It takes around 11 minutes to apply the primary key to this table. Some tables have over 100 million rows, so based on that earlier timing, if my calculations are correct, it will take close to an hour to apply the primary key to a table with around 100 million rows.

My current solution is to create another table (named table2) with no indexes or primary key, pump over only about five days' worth of data, and then apply the primary key. Then have a script that will gradually populate table2 with the rest of the data; by gradually I mean inserting around 100k rows per hour or so. Keep in mind that table2 is heavily updated with new records.
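For reference, a sketch of what that staged load might look like (table1/table2 are from the post; the id and createdDate columns, the batch size, and the delay are hypothetical):

-- 1. Build the empty copy and load the most recent slice
SELECT * INTO table2 FROM table1 WHERE 1 = 0;       -- structure only, no PK yet

INSERT INTO table2
SELECT * FROM table1
WHERE createdDate >= DATEADD(day, -5, GETDATE());   -- hypothetical date column

ALTER TABLE table2 ADD CONSTRAINT PK_table2 PRIMARY KEY CLUSTERED (id);  -- hypothetical key

-- 2. Trickle the history across in batches so table2 stays responsive
WHILE 1 = 1
BEGIN
    INSERT INTO table2
    SELECT TOP (100000) *
    FROM table1 s
    WHERE s.createdDate < DATEADD(day, -5, GETDATE())
      AND NOT EXISTS (SELECT 1 FROM table2 t WHERE t.id = s.id);
    IF @@ROWCOUNT = 0 BREAK;
    WAITFOR DELAY '01:00:00';   -- roughly 100k rows per hour
END

On Enterprise Edition it may also be worth testing ALTER TABLE ... ADD CONSTRAINT ... WITH (ONLINE = ON), which can keep the table usable while the index builds, subject to version and LOB-column restrictions.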

View 2 Replies



SQL Server 2012 :: Restore Database With Minimal Downtime

Oct 12, 2015

I have a process that restores a production DB, overwriting the existing copy each night. I'd like to keep the solution up for as long as possible, and this will be more important if I want to update it during the day (when there are more queries) too. The nature of the queries thrown at the system is that there are about 20 per hour; it underpins a reporting system, not an OLTP system.

It seems to me I could restore the fresh DB copy into a holding DB, then rename it to the production DB name at the end of the process. The rename process should be pretty much instant.

But I need to think about detecting and waiting for queries to complete on the prod DB before removing/demoting it (actually, my thought was to rename it and then reuse it as the next copy to update).
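A sketch of the swap itself (database names hypothetical). Note that SINGLE_USER WITH ROLLBACK IMMEDIATE kills in-flight queries rather than draining them, so a polite wait-and-check step may be wanted first:

ALTER DATABASE Prod SET SINGLE_USER WITH ROLLBACK IMMEDIATE;  -- rolls back any running queries
ALTER DATABASE Prod MODIFY NAME = Prod_Old;
ALTER DATABASE Prod_Staging MODIFY NAME = Prod;               -- freshly restored copy goes live
ALTER DATABASE Prod_Old SET MULTI_USER;                       -- keep it as the next restore target

To drain rather than kill, sys.dm_exec_requests can be polled for active requests in the database before the swap.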

View 5 Replies View Related

Upgrade To Sql 2000 With Minimal Downtime

Jul 20, 2005

Hello, I'm upgrading from SQL 7 to SQL 2000 on another box. To minimize the downtime I would like to:

1) Back up my SQL 7 database,
2) Copy it to the new box with SQL 2000 already installed,
3) Restore the database on the SQL 2000 box,
4) Shut down my SQL 7 database,
5) Copy the transaction logs to the SQL 2000 database,
6) Restore the transaction logs to the SQL 2000 database,
7) Bring up SQL 2000.

My only concern with this is restoring the transaction logs that were created on SQL 7 to SQL 2000. Do you know if I can do this? Do you see any (other) problem(s) with my plan?

Thanks, Scott
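For reference, a sketch of the restore side of that plan (paths hypothetical; whether SQL 2000 accepts log backups taken on SQL 7 is exactly the open question here, so this sequence needs testing first):

-- On the SQL 2000 box: restore the full backup but stay in the restoring state
RESTORE DATABASE MyDb
FROM DISK = 'E:\xfer\MyDb_full.bak'
WITH NORECOVERY

-- After stopping activity on SQL 7 and taking a final log backup:
RESTORE LOG MyDb
FROM DISK = 'E:\xfer\MyDb_log_final.bak'
WITH RECOVERY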

View 1 Replies View Related

SQL 2012 :: Minimal Logging Insert Statement On Non Clustered Index Table

Jul 9, 2014

I understand that minimal logging can occur on a heap with a nonclustered index as long as [URL] ...

* it is not replicated

* TABLOCK is used

* the table is empty

The following test seems to contradict this:

In the test I create a non-indexed heap, insert some records and check the log, then repeat the test on an indexed heap.

The results suggest that even though the conditions for minimal logging into an indexed heap are met, minimal logging is not happening, although it does happen on a non-indexed heap. What am I doing wrong?

CREATE DATABASE logtest
GO
USE logtest
GO
CREATE TABLE test (field varchar(100))
GO
CHECKPOINT

[Code] ....
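For reference, a sketch of how the indexed-heap half of such a test might look (this stands in for the elided script under stated assumptions; fn_dblog is undocumented, so its output can vary by version):

CREATE TABLE test_ix (field varchar(100));
CREATE NONCLUSTERED INDEX ix_test_ix ON test_ix (field);
GO
CHECKPOINT
GO
-- TABLOCK is one of the stated requirements for minimal logging
INSERT INTO test_ix WITH (TABLOCK) (field)
SELECT TOP (100000) 'some value'
FROM sys.all_columns a CROSS JOIN sys.all_columns b;
GO
-- Compare the log volume between the two runs
SELECT COUNT(*) AS log_records, SUM([Log Record Length]) AS log_bytes
FROM fn_dblog(NULL, NULL);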

View 2 Replies View Related

SQL 2012 :: Minimize Migration Downtime On Large DB?

Apr 20, 2015

I'm preparing a checklist for myself before getting ready to migrate from 2005 to 2012. Our largest database is a nice one at over 250GB. I'm thinking my best bet to minimize any downtime would be to restore the DB (NORECOVERY) on the new server and keep rolling it forward with the transaction logs. Eventually I'll need to take the old DB offline, do one last backup, and apply it to the new server, but that downtime window should be small compared to the several hours the whole process could take.
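A sketch of that sequence (logical file names and paths hypothetical):

RESTORE DATABASE BigDB
FROM DISK = 'D:\backup\BigDB_full.bak'
WITH NORECOVERY,
     MOVE 'BigDB_Data' TO 'E:\data\BigDB.mdf',
     MOVE 'BigDB_Log'  TO 'F:\log\BigDB.ldf';

RESTORE LOG BigDB FROM DISK = 'D:\backup\BigDB_1.trn' WITH NORECOVERY;
-- ...repeat for each log backup until cutover...

-- Cutover: take a final tail-log backup on the 2005 server, then:
RESTORE LOG BigDB FROM DISK = 'D:\backup\BigDB_tail.trn' WITH RECOVERY;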

View 5 Replies View Related

SQL 2012 :: Replaying Workloads With Minimal External Factors

Apr 8, 2014

I'm currently working on a project at work to test the effects of database compression, trying to obtain measurable data on the impact of the compression on other server resources, and therefore whether the reduction in space used is worth the extra overhead. This has involved taking a trace of a production customer's workload for a period of time and replaying it against a backup using Distributed replay in synchronised mode.

I'm then taking a trace of that replay, as well as using perfmon to record useful data about the server, before and after compression is enabled. Finally, I'm loading the traces into a tool called Qure to analyse the impact of the compression on reads, writes, CPU, overall duration etc.

What I'm finding is that even across 2 different 'baseline' runs, which are replaying the exact same workload against the exact same database, performance etc differs to a significant enough degree that it calls into question the validity of the test. I can only put this down to the fact this server is on a VM, which is affecting available resources, which in turn affects execution plans the workload is generating and causes different replays of the same workload. I'm therefore looking at doing this on a standalone server, but I still can't be sure the differences will go away.

How can I make tests such as this as repeatable as possible across multiple runs, when elements outside of SQL Server are effectively out of my control?

View 0 Replies View Related

Huge Deletes In A Huge Table

Apr 3, 2000

SQL 7 SP1 NT4 SP5

I have a TRANSACTION table with 150 million rows.

I have a USER table.

Each user has about 600 records in the TRANSACTION table.

The TRANSACTION clustered index is on USERID + RECID. The second index is on USERID + Fieldx + Fieldy.

The TRANSACTION table gets about 1.4 million inserts in a normal day and about 40,000 updates.

I want to go through the USER table and delete all users who have not visited me in a while.

I want to do this without substantially hindering performance in a production environment. I can spread this over a week or two if needed.

The best way I thought of doing this was to grab x users at a time in a cursor and loop through, deleting their corresponding TRANSACTION records.

Does anyone have any ideas on a better way? What is going to happen to my indexes during this time?

Thanks !!!
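One batching pattern for this (a sketch; the LastVisit column, inactivity test, and batch size are hypothetical, and SET ROWCOUNT stands in for the DELETE TOP syntax of later versions):

-- Throttled, batched delete: each transaction stays small
SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    DELETE T
    FROM [TRANSACTION] T
    JOIN [USER] U ON U.USERID = T.USERID
    WHERE U.LastVisit < DATEADD(month, -6, GETDATE())   -- hypothetical inactivity test
    IF @@ROWCOUNT = 0 BREAK
    WAITFOR DELAY '00:00:05'   -- throttle to soften the impact on production
END
SET ROWCOUNT 0

Because the clustered index leads on USERID, each batch touches a tight range of pages; the second index is maintained row by row either way.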

View 3 Replies View Related

SQL Server 2012 :: Restore Database And Create Users Using Minimal Elevated Privilege

Apr 29, 2015

What I want to do is :

- restore a backup of a 3rd party database onto one of our servers
- this has no users that I can use
- there is some ETL processing so we're using Control-M to manage the process
- create a database user and grant it db_reader.

I'd like to do this without granting any users elevated privileges if possible.

What I've done so far is grant the Control-M user (this is a domain user) dbcreator rights and made it owner of our copy of the database that is being refreshed.

The refresh is completing, but Control-M is not able to log onto the database to create the user.

What is the best way to accomplish this task without granting the control-m user sysadmin rights?

Would I be able to do it if I used a SQL Agent job for the restore and user creation?
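One way this is often solved (a sketch; the job step runs under the SQL Agent service account or a proxy, so Control-M only needs rights to start the job; all names below are hypothetical):

-- Job step, executed after the restore step completes
USE RefreshedDb;
CREATE USER [DOMAIN\ReportReader] FOR LOGIN [DOMAIN\ReportReader];
ALTER ROLE db_datareader ADD MEMBER [DOMAIN\ReportReader];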

View 1 Replies View Related

SQL 2012 :: Create Primary Key On A Table?

Apr 22, 2014

In what situations should we create a primary key on a table? I mean, is there a minimum number of rows below which a PK isn't worth creating?

View 8 Replies View Related

SQL 2012 :: FK Value Not Exsit In Primary Table It Referenced To

Sep 5, 2015

I am migrating data and found a strange thing in an existing table: there is a column named workshopCaseID in a TruancyCase table. The datatype of workshopCaseID is int (NULL), and it is an FK. Some records have 0 values, but the primary table it references only has the values 1-8. How did the 0 values get inserted there?
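A likely explanation is that the constraint was created or re-enabled WITH NOCHECK, so existing rows were never validated. A quick check (a sketch against the names in the post):

-- An untrusted FK was not validated against pre-existing data
SELECT name, is_not_trusted, is_disabled
FROM sys.foreign_keys
WHERE parent_object_id = OBJECT_ID('dbo.TruancyCase');

-- Report every row that violates a constraint on the table
DBCC CHECKCONSTRAINTS ('dbo.TruancyCase');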

View 6 Replies View Related

SQL Server 2012 :: Insert Foreign Key Value Into Primary Table?

Oct 2, 2015

In a special request run, I need to update locker and lock tables in a SQL Server 2012 database. I have the following 2 table definitions:

CREATE TABLE [dbo].[Locker](
[lockerID] [int] IDENTITY(1,1) NOT NULL,
[schoolID] [int] NOT NULL,
[number] [varchar](10) NOT NULL,
[lockID] [int] NULL,
CONSTRAINT [PK_Locker] PRIMARY KEY NONCLUSTERED

[code]....

The locker table is the main table and the lock table is the secondary table. I need to add 500 new locker numbers, which the user has given to me, to the locker table; each row there is uniquely identified by lockerID. I also need to add 500 new rows to the corresponding lock table, each uniquely identified by lockID.

Since lockID is the key of the lock table and is also carried in the locker table, I would like to know how to insert the 500 new rows into the lock table, then take the lockID values generated for those 500 rows and place them, one each, into the 500 locker rows that were created.

I have sql that looks like the following so far:

declare @SchoolID int = 999
declare @SchoolNumber int = 999   -- was referenced but never declared in the original

insert into test.dbo.Locker ([schoolID], [number])
select distinct LKR.schoolID, A.lockerNumber
FROM [InputTable] A
JOIN test.dbo.School SCH ON A.schoolnumber = SCH.type and A.schoolnumber = @SchoolNumber
JOIN test.dbo.Locker LKR ON SCH.schoolID = LKR.schoolID
AND A.lockerNumber not in (select number from test.dbo.Locker where schoolID = @SchoolID)
order by LKR.schoolID, A.lockerNumber

I am not certain how to complete the rest of the task: placing the lockIDs uniquely into the lock and locker tables.
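One way to pair the generated keys up (a sketch; it assumes lockID in the Lock table is an identity column, and InputLocks/combination are hypothetical stand-ins for the real source data):

DECLARE @NewLocks TABLE (rn int IDENTITY(1,1), lockID int);

-- Insert the 500 lock rows, capturing each generated lockID
INSERT INTO dbo.Lock (combination)                     -- hypothetical column
OUTPUT inserted.lockID INTO @NewLocks (lockID)
SELECT combination FROM dbo.InputLocks;                -- hypothetical staging table

-- Hand one captured lockID to each new locker row, pairing by row number
;WITH NewLockers AS (
    SELECT lockID, ROW_NUMBER() OVER (ORDER BY lockerID) AS rn
    FROM dbo.Locker
    WHERE lockID IS NULL                               -- the 500 rows just inserted
)
UPDATE NL
SET NL.lockID = NK.lockID
FROM NewLockers NL
JOIN @NewLocks NK ON NK.rn = NL.rn;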

View 7 Replies View Related

SQL Server 2012 :: How To Add A Primary Key For Existing Column In The Table

Oct 19, 2015

How do I add a primary key on an existing column in a table?
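A minimal sketch (table and column names hypothetical):

-- A primary key column must be NOT NULL and contain unique values
ALTER TABLE dbo.Orders ALTER COLUMN orderID int NOT NULL;

ALTER TABLE dbo.Orders
ADD CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (orderID);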

View 8 Replies View Related

SQL 2012 :: Auto ID Primary Key Specified As Int And Table Data Is Deleted Periodically?

Jun 25, 2015

I have a table with a primary key that is auto-incremented by 1. This table's data is cleared out periodically, and as data gets added the auto-ID primary key continues to increase in numeric value. Once the data is cleared from the table, the auto IDs could be used again (the eventID is not stored elsewhere). Currently the eventID is at 26,581,399. I know the maximum int value is 2,147,483,647.

How should I handle this? Or should I rebuild the table every time the data is cleared (programmatically)?
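If nothing else references the old values, reseeding after each purge avoids the problem entirely (a sketch with hypothetical names):

-- Option 1: TRUNCATE resets the identity seed automatically
TRUNCATE TABLE dbo.Events;

-- Option 2: selective purge, then reseed manually
DELETE FROM dbo.Events
WHERE eventDate < DATEADD(month, -1, GETDATE());   -- hypothetical purge condition
DBCC CHECKIDENT ('dbo.Events', RESEED, 0);         -- next insert gets eventID = 1

The fallback is widening eventID to bigint, though that means dropping and re-creating the primary key.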

View 3 Replies View Related

SQL 2012 :: Apply And Operator On Same Column

Jun 11, 2015

I have a table NewsEntities which contains a one-to-many relationship, i.e. one NewsId can have multiple EntityIds.

Consider: NewsId = 1 has two entities, EntityId 1 and EntityId 2. There are multiple NewsIds, which can have the same or different EntityIds.

Now I want to find those news items in which EntityId = 1 is present but EntityId = 2 is absent.
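One way to express this "present without the other" test (a sketch against the table as described):

SELECT DISTINCT NE.NewsId
FROM dbo.NewsEntities NE
WHERE NE.EntityId = 1
  AND NOT EXISTS (SELECT 1
                  FROM dbo.NewsEntities NE2
                  WHERE NE2.NewsId = NE.NewsId
                    AND NE2.EntityId = 2);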

View 7 Replies View Related

SQL Server 2012 :: Using Cross Apply To UNPIVOT Data

Jan 15, 2014

I was reading Kenneth Fisher's and Dwain Camps' articles on unpivoting using CROSS APPLY, and I can actually get them to work...

CREATE TABLE #TxCycle(
Cycle INT NOT NULL,
PatientID INT NOT NULL,
ALOPECIA TINYINT,
Causality1 TINYINT,
Relatedness1 TINYINT,

[Code] ....

The one thing I was wondering was this: how do I extract the symptom names from the column list without knowing them all beforehand? Dwain does this:

-- DDL and sample data for UNPIVOT Example 2
CREATE TABLE #Suppliers
(ID INT, Product VARCHAR(500)
,Supplier1 VARCHAR(500), Supplier2 VARCHAR(500), Supplier3 VARCHAR(500)
,City1 VARCHAR(500), City2 VARCHAR(500), City3 VARCHAR(500))

Can this be adapted if you don't know all the column names beforehand? (Likely not.) Back in the dark ages, when I was working on a database like this, it was in Access, and I could loop over the Fields collection and evaluate each field name. (Yes, I know you're not supposed to store information in field names, but I inherited that mess!)
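The SQL Server equivalent of looping the Fields collection is to read the metadata and build the statement dynamically (a sketch; it targets the #TxCycle example above and assumes Cycle and PatientID are the only key columns to exclude):

DECLARE @cols nvarchar(max);

SELECT @cols = STUFF((
    SELECT ',(''' + name + ''',' + QUOTENAME(name) + ')'
    FROM tempdb.sys.columns
    WHERE object_id = OBJECT_ID('tempdb..#TxCycle')
      AND name NOT IN ('Cycle', 'PatientID')      -- keep the key columns out
    FOR XML PATH('')), 1, 1, '');

DECLARE @sql nvarchar(max) = N'
SELECT t.Cycle, t.PatientID, v.Symptom, v.Value
FROM #TxCycle t
CROSS APPLY (VALUES ' + @cols + N') v(Symptom, Value);';

EXEC sp_executesql @sql;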

View 7 Replies View Related

SQL 2012 :: Restore MDF Then Apply Logs / Point In Time

Jun 6, 2014

Our backup system has worked OK for us to date. We can restore back to either full saves or up to a certain log (we take log backups on the hour). We've never had to, but we wanted to test restoring to a point in time with the backup data.

What the system does is generate .mdf and .ldf files, which is essentially a full backup, say in the middle of the night. It then creates .bak files for the log backups based on the backup set you want to restore.

I can detach the database, put the .mdf and .ldf in place, and re-attach the database, but to apply the .bak files I need to get the database into a (Restoring) state, and I can't seem to do that. When I try to apply the .bak files the system says: "The log or differential backup cannot be restored because no files are ready to rollforward."

How do I apply an .mdf and then apply the .bak files?
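The underlying issue is that an attached database is fully recovered, so it can never accept log restores; the chain has to start from RESTORE ... WITH NORECOVERY rather than an attach. A sketch with hypothetical paths:

RESTORE DATABASE ReportDB
FROM DISK = 'D:\backup\ReportDB_full.bak'
WITH NORECOVERY;                       -- leaves the DB in the Restoring state

RESTORE LOG ReportDB
FROM DISK = 'D:\backup\ReportDB_log1.bak'
WITH NORECOVERY;

-- Final log: stop at the desired moment and bring the DB online
RESTORE LOG ReportDB
FROM DISK = 'D:\backup\ReportDB_log2.bak'
WITH STOPAT = '2014-06-06 09:30:00', RECOVERY;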

View 7 Replies View Related

SQL 2012 :: Apply Service Pack When Databases Are In Replication?

Jun 5, 2015

How do I apply a service pack when the databases are in replication?

View 2 Replies View Related

SQL 2012 :: Re-Indexed With Wrong Fill Factor And Now All Databases Are Huge

Jan 24, 2015

As the title says, I re-indexed all of my databases using the wrong fill factor. Instead of using 90% as the fill factor, I misunderstood and set it at 10%. So I believe my databases are now packed with a ton of unused space. The DB sizes should be about 5-6 GB but have since grown to 20-40 GB. I am very new to SQL administration and don't know of a safe way to remove this unused space so that my databases return to their normal sizes. The databases do not grow very much at all, so the free space is not really necessary.
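The usual fix is to rebuild the indexes with the intended fill factor and only then reclaim file space if it is really needed (a sketch; names hypothetical, and note that shrinking fragments indexes, so it may force another rebuild afterwards):

ALTER INDEX ALL ON dbo.MyBigTable REBUILD WITH (FILLFACTOR = 90);

-- Only if the freed space must go back to the OS:
DBCC SHRINKFILE (N'MyDb_Data', 6144);   -- target size in MB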

View 9 Replies View Related

SQL Server 2012 :: CROSS APPLY Returning Records From Left Recordset Even When No Matching Record In Right One

Oct 7, 2014

Following is the query that I'm running:

create table a (id int, name varchar(10));
create table b(id int, sal int);
insert into a values(1,'John'),(1,'ken'),(2,'paul');
insert into b values(1,400),(1,500);

select *
from a
cross apply( select max(sal) as sal from b where b.id = a.id)b;

Below is the result for the same:

id  name  sal
1   John  500
1   ken   500
2   paul  NULL

Now I'm not sure why the record with ID 2 is coming back with CROSS APPLY; shouldn't it be excluded by CROSS APPLY and only appear when using OUTER APPLY?

One thing that I noticed was that if you remove the aggregate function MAX, then the record with ID 2 is not shown in the output. I'm running this query on SQL Server 2012.
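The reason: MAX with no GROUP BY is a scalar aggregate, which always returns exactly one row (NULL when nothing matches), so CROSS APPLY always finds a match. Adding GROUP BY makes it a vector aggregate that returns zero rows for unmatched ids (a sketch):

select *
from a
cross apply (select max(sal) as sal
             from b
             where b.id = a.id
             group by b.id) b;   -- returns no row for id 2, so 'paul' disappears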

View 6 Replies View Related

CROSS APPLY Vs OUTER APPLY Example Messed Up?

Nov 27, 2007

Hi... I'm reading the MS Press 70-442 Self-Paced Training Kit, and I'm having problems with this example.
I'd like help getting it to work correctly, or understanding why it isn't working the way I expected.

On page 67, the lab is about the APPLY operator (CROSS APPLY and OUTER APPLY). I first have to input a sample table-valued function into the AdventureWorks database:

CREATE FUNCTION fnGetAvgCost(@ProdID int)
RETURNS @RetTable TABLE (AvgCost money)
AS
BEGIN
WITH Product(stdcost)
AS
(
SELECT avg(standardcost) as AvgCost
FROM Production.ProductCostHistory
WHERE ProductID = @ProdID
)
INSERT INTO @RetTable
SELECT * FROM Product
RETURN
END



and then run a sample T-SQL statement:

SELECT p.Name, p.ProductNumber,
Convert(varchar, cost.AvgCost,1) AS 'Average Cost'
FROM Production.Product p
CROSS APPLY fnGetAvgCost(p.ProductID) AS cost
WHERE cost.AvgCost IS NOT NULL
ORDER BY cost.AvgCost desc

My problem is with the WHERE clause... According to page 56, CROSS APPLY returns only rows from the outer table that produce a result set, so why do I need to explicitly filter out NULL values?

When I remove the WHERE clause, the query retrieves lots of NULL AvgCost values.

Again, according to page 56, it is the OUTER APPLY that returns all rows that return a result set and will include NULL values in the columns that are returned from the table-valued function.

So, in short, I don't see the difference between CROSS APPLY and OUTER APPLY in this example when I remove the WHERE clause.

(Please refrain from introducing another example into this question.)
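For what it's worth, the same scalar-aggregate effect seen elsewhere on this page is at work: avg() with no GROUP BY always yields exactly one row (NULL when there is no cost history), and the function always inserts that row into @RetTable. The function therefore never returns an empty table, so the two APPLY forms cannot differ. A one-line change inside the function restores the expected behaviour (a sketch):

-- Inside fnGetAvgCost: return an empty table when there is no cost history
INSERT INTO @RetTable
SELECT stdcost FROM Product WHERE stdcost IS NOT NULL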

View 8 Replies View Related

Problem On Huge Table.

Mar 14, 2001

We have a huge table with 12 million records, and when I ran the following script it took 50 hours. Is there anyone who can help? Thanks.

update c
set [In] = e.[In], EA = e.ea, We = e.we   -- [In] bracketed: IN is a reserved word
from TableA c
join TableB e on c.code = e.code
-- updating through the alias 'c' guarantees the target is the same instance of
-- TableA that the join filters; the original updated 'TableA' directly while
-- aliasing it in the FROM, which invites an unfiltered second copy of the table

TableA: 12,000,000 records.
TableB: 750,000 records.
Both have a clustered index on their code field.
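Beyond the alias fix, batching keeps each transaction short (a sketch; the staleness filter is hypothetical, and SET ROWCOUNT is used because UPDATE TOP did not exist on SQL 7):

SET ROWCOUNT 50000
WHILE 1 = 1
BEGIN
    UPDATE c
    SET [In] = e.[In], EA = e.ea, We = e.we
    FROM TableA c
    JOIN TableB e ON c.code = e.code
    WHERE c.[In] <> e.[In]       -- hypothetical "still stale" filter; beware NULLs
    IF @@ROWCOUNT = 0 BREAK
END
SET ROWCOUNT 0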

View 6 Replies View Related

Alter A Huge Table

May 18, 2001

I need to alter a table (expand a column from varchar(10) to varchar(255)) and the table has 200 million rows.
Please suggest the best and fastest method to achieve this. The database is on SQL 7.0.
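Expanding a varchar column's maximum length is normally a metadata-only change and near-instant regardless of row count; I believe this holds on SQL 7.0 as well, but verify on a copy first. A sketch (names hypothetical; restate the column's NULLability explicitly or it may change):

ALTER TABLE dbo.BigTable ALTER COLUMN SomeCol varchar(255) NOT NULL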


Thanks

View 1 Replies View Related

Huge Table Backup

May 13, 2008

Hello
I want to ask about the backup time cost of a huge table (many terabytes of records). Can anyone help me estimate roughly how long it would take?

View 2 Replies View Related

Horizontal Partition Of A Huge Table

Jul 1, 2005

Hi,

I have a table with 52 million rows which resides on the PRIMARY filegroup in my database. Because of the huge number of rows, performance has degraded badly, and I would like to break the table into parts.

Can anyone suggest the steps for doing this, and how many parts should be made? The table is named Account_Transactions and contains policy information in an insurance database.

Rajat
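If the server is on SQL Server 2005 or later, table partitioning is the built-in option (a sketch; the date column and year boundaries are hypothetical):

CREATE PARTITION FUNCTION pf_ByYear (datetime)
AS RANGE RIGHT FOR VALUES ('2003-01-01', '2004-01-01', '2005-01-01');

CREATE PARTITION SCHEME ps_ByYear
AS PARTITION pf_ByYear ALL TO ([PRIMARY]);   -- or map each range to its own filegroup

CREATE CLUSTERED INDEX cix_AccountTx
ON dbo.Account_Transactions (TransactionDate)   -- hypothetical column
ON ps_ByYear (TransactionDate);

On SQL 2000 the equivalent is a partitioned view over several member tables.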

View 2 Replies View Related

Frequency Counts From Huge Table

Mar 12, 2008

Hey guys,

I have a table with about 80 columns and 400 million records. Each column has different responses that I need to get frequencies for. I need counts of each response from all the columns... I have a query that does it, but it will run forever... what is the best way to do this?

My starting query:

select res, sum(cnt) from
(
select col1 res, count(*) as cnt from table1 with (nolock)
group by col1
union all
select col2 res, count(*) as cnt from table1 with (nolock)
group by col2

........................

select col80 res, count(*) as cnt from table1 with (nolock)
group by col80
)a group by res
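An alternative that scans the table once instead of eighty times is CROSS APPLY (VALUES ...) (a sketch over the first three columns; it needs SQL Server 2008 or later, with UNPIVOT as the 2005-era equivalent):

select v.res, count(*) as cnt
from table1 with (nolock)
cross apply (values (col1), (col2), (col3)) v(res)   -- extend through (col80);
                                                     -- assumes the columns share a compatible type
group by v.res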

View 1 Replies View Related

How To Apply Table Changes In My Production Database?

Sep 29, 2007



Here is the scenario:

- I have one production database and one test database.
- In the test database I have made a few changes to the structure - mostly adding new fields to the tables.
- Now I want to apply the table changes to my production database. Does anyone have any suggestions on how I should do that?

I mean, of course I could just manually check every field and apply the changes - but that could take a while. I guess there are better ways this could be done, like:

- back up the data
- delete the tables
- recreate the tables using the structure from the test database
- reinsert the data

Is this the correct way? Are there any easier ways to do this?


The changes I made should not cause problems with the data that is already stored (unlike, say, changing a string field to int).
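Since the changes are mostly new columns, one common approach is to script them as guarded ALTER statements and run that script against production (a sketch; table and column names hypothetical):

IF COL_LENGTH('dbo.Customer', 'MiddleName') IS NULL
    ALTER TABLE dbo.Customer ADD MiddleName varchar(50) NULL;

Schema-compare tools can generate such scripts automatically.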

View 5 Replies View Related

Does DB Size Decrease When I Delete A Huge Table ??

Jan 15, 2004

Hi,
My DB size (right-click on the DB name, Data Files tab, Space Allocated field) was 10914 MB.

I deleted a huge table (1.2 million records * 15 columns).
I checked the DB size again. It didn't change.
Shouldn't it decrease because I deleted a huge table?
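It won't: deleting a table frees space inside the files, but the files keep their allocated size until they are explicitly shrunk (a sketch; the DB name is hypothetical, and shrink sparingly since it fragments indexes):

EXEC sp_spaceused;                -- shows allocated vs. actually used space

DBCC SHRINKDATABASE (MyDb, 10);   -- release unused space, leaving 10% free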

View 14 Replies View Related

Urgent! Please Help! Deleting Data From A Huge Table

Mar 16, 2004

I have a huge table with a four-column primary key on it. I need to delete data from this table (approx. 5.6 million records), and it takes a hell of a lot of time with a normal query.
Can someone please suggest a better way? Any help will be appreciated.
Any help will be appreciated.

View 14 Replies View Related

How To Select Specific 2 Rows Out Of A Huge Table

Jul 24, 2013

I have a very large table, and from that table I need just 2 records: one with column1 = 'A' and one with column1 = 'B'.

Here I don't think I can use OR, IN, or CASE operators, because I need exactly 2 records, not more.
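One common pattern (a sketch; the table name is hypothetical, and an ORDER BY inside each branch picks a specific row per value if needed):

select * from (select top (1) * from dbo.BigTable where column1 = 'A') a
union all
select * from (select top (1) * from dbo.BigTable where column1 = 'B') b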

View 6 Replies View Related

Fastest Way To Move Huge Table Across Servers

Mar 23, 2008

Hi Guys,

What is the fastest way to move a huge table (77 million records with 25 columns) across servers? The servers are not linked, though.

Thanks for the help.
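Native-format bcp is the usual answer when the servers aren't linked (a sketch, run from a command prompt; server, database, and path names hypothetical):

rem Export in native format from the source
bcp SourceDb.dbo.BigTable out D:\xfer\BigTable.dat -S SourceServer -T -n

rem Import on the target; batch it and take a table lock for speed
bcp TargetDb.dbo.BigTable in D:\xfer\BigTable.dat -S TargetServer -T -n -b 50000 -h "TABLOCK"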

View 3 Replies View Related

The Best Way To Delete A Huge Number Of Records In Table

May 30, 2008

Hi Everyone,



We have a large test database with millions of records for more than one company site code. Sometimes we want to refresh the data of that database for one or more site codes.

In order to do that, I first have to delete all records for the site codes we want to refresh from the test database, then copy a new set of data over from the production database. Since we refresh data based on the site code, I have to use the DELETE command instead of TRUNCATE.

Since this is a huge database with thousands of tables and millions of records per table, I have performance issues with the DELETE command. So what would be the best way to delete a large number of records without writing any information to the database log file?

FYI: the recovery model of this database is SIMPLE.
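Strictly speaking every DELETE is logged, but under the SIMPLE recovery model, deleting in batches lets the log space be reused between batches so the log file never balloons (a sketch; table and filter names hypothetical):

while 1 = 1
begin
    delete top (10000) from dbo.Orders where siteCode = 'ABC';
    if @@rowcount = 0 break;
    checkpoint;   -- under SIMPLE recovery this frees the just-used log space
end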


Regards,



Jdang

View 9 Replies View Related

Truncate A Huge Table Having 21 Millions Of Rows

Nov 19, 2015

SQL Server: 2008 R2

Question A: I need to truncate a table; it has 21 million rows and a size of 14 GB.

1. How do I find out whether this table is referenced by a FOREIGN KEY?
2. Does it participate in an indexed view?
3. Is it being published using transactional or merge replication?

Question B: How do I safely truncate that table?
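A sketch of the three checks against the catalog views (the table name is hypothetical); if all come back empty or zero, TRUNCATE should be safe:

-- 1. Foreign keys referencing the table
SELECT OBJECT_NAME(parent_object_id) AS referencing_table, name
FROM sys.foreign_keys
WHERE referenced_object_id = OBJECT_ID('dbo.BigTable');

-- 2. Indexed views that reference it (schema-bound dependencies)
SELECT v.name
FROM sys.views v
JOIN sys.indexes i ON i.object_id = v.object_id
JOIN sys.sql_expression_dependencies d ON d.referencing_id = v.object_id
WHERE d.referenced_id = OBJECT_ID('dbo.BigTable');

-- 3. Replication flags on the table itself
SELECT name, is_published, is_merge_published, is_replicated
FROM sys.tables
WHERE object_id = OBJECT_ID('dbo.BigTable');

-- If all three are clear:
TRUNCATE TABLE dbo.BigTable;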

View 8 Replies View Related






