SQL 2012 :: Minimize Migration Downtime On Large DB?

Apr 20, 2015

I'm preparing a checklist for myself before migrating from SQL Server 2005 to 2012. Our largest database is a sizeable one at just over 250GB. I think my best bet to minimize downtime is to restore the database WITH NORECOVERY on the new server and keep rolling it forward with transaction log backups. Eventually I'll need to take the old database offline, do one last backup, and apply that to the new server, but that final window should be small compared with the several hours the whole process could otherwise take.
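For reference, a minimal sketch of that approach (database, file, and path names below are illustrative, not from my actual environment):

-- Initial full restore on the new server, left non-operational so more logs can be applied
RESTORE DATABASE BigDB
    FROM DISK = N'\\backupshare\BigDB_full.bak'
    WITH MOVE 'BigDB_Data' TO N'E:\Data\BigDB.mdf',
         MOVE 'BigDB_Log'  TO N'F:\Log\BigDB_log.ldf',
         NORECOVERY;

-- Repeat for each transaction log backup taken on the old server
RESTORE LOG BigDB FROM DISK = N'\\backupshare\BigDB_log_01.trn' WITH NORECOVERY;

-- Cutover: take a final tail-log backup on the old server, apply it, then bring the copy online
RESTORE LOG BigDB FROM DISK = N'\\backupshare\BigDB_log_tail.trn' WITH RECOVERY;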

View 5 Replies


SQL Server 2012 :: Restore Database With Minimal Downtime

Oct 12, 2015

I have a process that restores a production DB each night, overwriting the existing copy. I'd like to keep the solution up for as long as possible, and this will matter even more if I start refreshing it during the day, when there are more queries. The query load is light - about 20 per hour - because it underpins a reporting system, not an OLTP system.

It seems to me I could restore the fresh DB copy into a holding DB, then rename it to the production DB name at the end of the process. The rename process should be pretty much instant.

But I need to think about detecting and waiting for in-flight queries to complete on the prod DB before removing/demoting it (actually, I thought I would rename it and then reuse it as the next copy to refresh).
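A rough sketch of the rename approach (names are illustrative), assuming exclusive access can be forced briefly at switch-over:

-- Restore tonight's backup into a holding database alongside the live one
RESTORE DATABASE Reporting_Staging
    FROM DISK = N'D:\Backups\Prod_full.bak'
    WITH MOVE 'Prod_Data' TO N'E:\Data\Reporting_Staging.mdf',
         MOVE 'Prod_Log'  TO N'F:\Log\Reporting_Staging_log.ldf',
         REPLACE, RECOVERY;

-- Swap: wait out / roll back running queries, demote the old copy, promote the new one
ALTER DATABASE Reporting SET SINGLE_USER WITH ROLLBACK AFTER 60 SECONDS;
ALTER DATABASE Reporting MODIFY NAME = Reporting_Old;
ALTER DATABASE Reporting_Old SET MULTI_USER;
ALTER DATABASE Reporting_Staging MODIFY NAME = Reporting;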

View 5 Replies View Related

SQL 2012 :: Apply Primary Key On A Huge Table So Downtime Is Minimal?

Jul 30, 2015

I have a table (named table1) with 20 million rows. It takes around 11 minutes to apply the primary key to this table. There are some tables with over 100 million rows, so if my calculations based on that time are correct, it will take close to an hour to apply a primary key to a table with around 100 million rows.

My current solution is to create another table (named table2) with no indexes or primary keys, pump over only about 5 days' worth of data, then apply the primary key. Then a script will gradually populate table2 with the rest of the data - by gradually I mean inserting roughly 100k rows per hour. Keep in mind that table2 is heavily updated with new records.
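A hedged sketch of that staged approach (the ID and CreatedDate columns are examples only, and ID is assumed NOT NULL):

-- New empty table, no indexes (SELECT INTO never copies them)
SELECT * INTO dbo.table2 FROM dbo.table1 WHERE 1 = 0;

-- Seed it with the most recent few days, then apply the primary key while it is still small
INSERT INTO dbo.table2 SELECT * FROM dbo.table1 WHERE CreatedDate >= DATEADD(DAY, -5, GETDATE());
ALTER TABLE dbo.table2 ADD CONSTRAINT PK_table2 PRIMARY KEY CLUSTERED (ID);

-- Backfill the history gradually in small batches (run from a scheduled job)
INSERT INTO dbo.table2
SELECT TOP (100000) s.*
FROM dbo.table1 AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.table2 AS t WHERE t.ID = s.ID)
ORDER BY s.ID;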

View 2 Replies View Related

Very Large Table Migration

Sep 21, 2006

I have to migrate an Oracle Db to SQL Server 2005, including a 450 gb table with images. The estimate is that it will take about 24 hours to move this data. I€™m using SSIS with just one OLEDB input and sending to one OLEDB output. The SSIS process will be running on the destination SQL Server with 2gb of memory and at least half that memory in use by other apps (including SQL Server). I tried the SQL Server Destination but received errors trying to use it.

Are there any suggestions on settings or the best way to do this?

View 1 Replies View Related

SQL 2012 :: Database Migration Time

Sep 2, 2014

When planning a migration, how do you calculate how much time it will take to migrate a database? If a full backup of a database takes approximately 3 hours, how much time will it take to restore that database?

Which approach is better for migrating from SQL Server 2008 to 2014:

the database migration wizard, or restoring the database from a backup file?

And if you are new to the SQL Server and don't know anything about the environment, how can you estimate how much time the migration will take for each database, and for all databases combined?
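One rough way to gauge the time is to run a trial backup/restore on a test server and watch its progress in the DMVs (a sketch; the estimate SQL Server reports is approximate):

SELECT r.session_id,
       r.command,
       r.percent_complete,
       DATEADD(SECOND, r.estimated_completion_time / 1000, GETDATE()) AS estimated_finish
FROM sys.dm_exec_requests AS r
WHERE r.command IN ('BACKUP DATABASE', 'RESTORE DATABASE');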

View 2 Replies View Related

How To Minimize The Transaction Log SQL 2000

Dec 11, 2001

My database log file is now 53 MB. Is there a way to minimize, delete, or erase it? The log file is needlessly big. I tried to shrink it and delete it manually, but that corrupted the database. Can someone help? Thanks.
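On SQL 2000, the usual safe pattern is to back up the log first and then shrink the log file rather than deleting it; a rough example, assuming the database is MyDB and the logical log file name is MyDB_Log:

BACKUP LOG MyDB TO DISK = 'D:\Backups\MyDB_log.trn'
DBCC SHRINKFILE (MyDB_Log, 10)   -- shrink the log file to roughly 10 MB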

View 2 Replies View Related

Minimize Transaction Logging

May 14, 2007

We use SQL PE 2000 in our lab to store sampled data (lots of data collected at a high sampling rate) in real time. I know this may be an uncommon use of SQL Server. For this particular application, performance is much more important than data recoverability; in other words, occasional data loss is acceptable. We have never actually used the transaction log for recovery since we started using SQL PE this way a few years ago. Certain transaction logging, such as bulk deletion, is particularly wasteful of resources. From reading the online book and some threads on this forum, my understanding is that the best we can do is use the bulk-logged recovery model. I am not sure that helps us at all, because bulk deletion (i.e. DELETE FROM MyTable WHERE ..., which may affect hundreds of thousands of records) is not on the list of minimally logged operations.
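For example, one alternative might be to copy the rows to keep into a new table and swap, since SELECT ... INTO is minimally logged under the BULK_LOGGED or SIMPLE recovery model (table and column names below are illustrative):

SELECT * INTO dbo.MyTable_Keep FROM dbo.MyTable WHERE SampleTime >= '20070101';
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_Keep', 'MyTable';
-- Recreate any indexes/constraints on dbo.MyTable afterwards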



I am wondering if anyone could share some good advice.



Thanks in advance!

View 4 Replies View Related

Minimize (not Hide) Parameters

Sep 13, 2007

Hello,

I am wondering if RS has the capability to minimize (collapse) the parameter area at the top of a report. I see that parameters can be hidden, but I need the ability to just collapse the area, so that if users need to change a parameter they can still click a button to expand it again.



Does that make sense?

-Lawrence

View 4 Replies View Related

How To Minimize A Log File Growth?

Nov 27, 2007

SQL Server 2005
We need to set up a database with minimal log file growth, or none at all. This database is used as an intermediate step in the data extraction process (daily inserts and truncations), so data recovery is not an issue, but the size of the log file is. How should I set this database's options?
Can I set log file growth to 0 (none)? Will that affect inserts?
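Something along these lines might work (a sketch; the database name, logical log file name, and sizes are placeholders):

ALTER DATABASE StagingDB SET RECOVERY SIMPLE;
-- Pre-size the log so it does not need to grow during the daily loads,
-- rather than setting growth to 0 and risking failed inserts when the log fills
ALTER DATABASE StagingDB
    MODIFY FILE (NAME = StagingDB_log, SIZE = 2048MB, FILEGROWTH = 256MB);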

Thank you

View 6 Replies View Related

How To Solve Or Minimize Locking In This Case?

May 13, 2004

If batch processing and normal transactions need to run in parallel, how can I solve or minimize the locking issues? Since the batch processing will take page (PAG) or table (TAB) locks most of the time, how can other users access the data during that time?
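A rough illustration of one option (the table and column names are made up): commit the batch work in small chunks so page/table locks are released quickly instead of being held for the whole run.

SET ROWCOUNT 5000
WHILE 1 = 1
BEGIN
    BEGIN TRAN
    UPDATE dbo.BatchQueue SET Processed = 1 WHERE Processed = 0
    IF @@ROWCOUNT = 0
    BEGIN
        COMMIT TRAN
        BREAK
    END
    COMMIT TRAN
END
SET ROWCOUNT 0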


Thank you for any suggestion.

View 1 Replies View Related

SQL 2012 :: Sybase To Server Migration - Date Column Format

Jun 18, 2014

I am currently working on a Sybase to SQL Server migration project and have been able to test-migrate a few tables using SSIS. After migration, I was doing some data comparison and noticed that the date formats differ between Sybase and SQL Server. However, there are no data issues - for example, a day value in Sybase does not become a month value in SQL Server - only the formats are different.

Do I need to act on these date formats? I'm not sure whether this would cause any issues in the front-end application that will consume the SQL Server date data.

View 1 Replies View Related

SQL 2012 :: Migration From Access Database Completely To Query For A Report?

Jan 15, 2015

We want to do away with Access completely. In our reporting tool, which is Crystal Reports, we want a query to be the data source, so we do not want to use an SSIS package or import from Access to SQL. How can I handle bringing all the tables and the query over from Access together?

View 1 Replies View Related

SQL 2012 :: Migration Of SSIS Script - (binary Code Not Found)

Jul 6, 2015

I am exporting an SSIS package from a SQL Server 2008 R2 instance and re-importing it into SQL Server 2012.

I have inherited a SQL 2008 R2 server running two SSIS packages. I have both as .dtsx files and a manifest to install. The task at hand is to migrate them to a new SQL 2012 server. (There is also an IIS instance running on that machine, with project-specific .dlls and other files in the IIS document root. I do not know if these .dlls relate to the IIS web page or to the SQL .dtsx packages, but the migrated web site runs fine.)

Installation of the packages into the MSDB works out flawlessly and one of the scripts runs fine, but the other fails to run with the error:

"Error: the binary code for the script is not found. Please open the script in the designer by clicking Edit Script button and make sure it builds successfully."

Google tells me something about a "pre-compile option" setting on the server, as explained here. I cannot find this option setting anywhere in SSMS. Also, as the migration is from 2008 R2 to 2012, this should not be relevant, since in both versions pre-compilation should be automatic, right?

- I installed visual studio 2013 community

- I installed SSDTBI

- Start the "Integration Services Import Project Wizard"

to import the SSIS directly from the "Integration Services Catalog"

However, things don't quite work: trying to import (from the locally installed SQL instance), it will not display the tree of SSIS packages, only the root directory.

So importing directly from the running system won't work. Let's see if we can get somewhere with the .dtsx files. As I _do_ actually have the .dtsx files here, why not open them directly in Visual Studio and try to compile whatever needs to be compiled? I create a new "Integration Services" project and open the .dtsx files into this project. ==> LOTS of errors of all kinds.

(The job of this script is to fetch messages from an Exchange server.) But opening up this specific bit of code doesn't work at all - there is no binary code in it, and I don't know how to create it or where to get it from...

View 4 Replies View Related

SQL 2012 :: Partitioning A Large Dimension?

May 9, 2014

The SQL CAT team's recommendation is to avoid partitioning dimension tables: URL..... I have inherited a dimension table that has almost 3 billion rows and is 1TB in size, and I have been asked to look at partitioning it, putting maintenance in place, etc. I'm not a DW expert, so I was wondering: what are the reasons not to partition dimensions?

View 7 Replies View Related

SQL 2012 :: Too Large Log File Size

Feb 11, 2015

My log file is 5 GB. I just want to reduce it to some extent without resorting to shrinking. Is there any way to do that?
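Short of shrinking, the file size on disk will not go down; regular log backups only free space inside the file for reuse. A quick way to see how full the log actually is and what is preventing truncation (a sketch, with MyDB as a placeholder name):

DBCC SQLPERF (LOGSPACE);   -- percentage of each log file actually in use
SELECT name, log_reuse_wait_desc FROM sys.databases WHERE name = 'MyDB';
BACKUP LOG MyDB TO DISK = N'D:\Backups\MyDB_log.trn';   -- frees internal space if the DB is in FULL recovery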

View 1 Replies View Related

Storing All Stored Procedures In One Database To Minimize Different Connection Strings

Jul 20, 2005

In order to minimize the number of connection strings I have to use to access different databases on the same SQL Server, I was considering storing all stored procedures in just one database. I want to do this because connection pooling in my application - ASP.NET - is based on the connection string, so if I need to access 6 different databases on one SQL Server and set 6 different connection strings, I end up creating 6 different connection pools.

Other than creating more management work for the DBA, are there any performance implications with implementing this scheme? Do stored procedures run any slower if they access tables that are stored in different databases within the same server?

Any comments/suggestions are appreciated.

TIA,
Minh Tran

View 4 Replies View Related

SQL 2012 :: Index Maintenance For Large Tables?

Mar 8, 2014

We have some very big tables (terabytes in size) and want to set up a strategy for index maintenance.

View 3 Replies View Related

SQL 2012 :: Parsing Large Varbinary Fields

Jun 9, 2015

I have a table with raw scientific test results in a single field, some of which are over 25 MB. I need to parse the field to find and aggregate selected values.

Table structure is
CREATE TABLE [dbo].[Gxxx_Data](
[id] [uniqueidentifier] NOT NULL,
[Status] [nvarchar](50) NULL,
[GxxxItem_ID] [int] NULL,
[Stats_Data] [varbinary](max) NULL,
) ON [PRIMARY] TEXTIMAGE_ON [PRIMARY]

[code]...

From this I need to parse and summarize the (assembler) opcodes (MOV, CMPi, SHR, etc...). I need to parse the large [Stats_Data] field to locate the target data. The internal result strings are delimited with CHAR(10), and conservative counts are from 64k to over 100k lines in each record. Is there a way to parse the individual lines into another (temp) table that could then be queried/regexed?
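A hedged sketch of one way to do the line split (it assumes Stats_Data is plain ASCII text stored as varbinary, and it will not be fast for 100k lines per record, but it gets the lines into a temp table that can then be pattern-matched):

;WITH src AS (
    SELECT id, CAST(Stats_Data AS varchar(max)) + CHAR(10) AS txt
    FROM dbo.Gxxx_Data
), lines AS (
    -- walk each blob one CHAR(10)-delimited line at a time
    SELECT id, txt,
           CAST(1 AS bigint) AS start_pos,
           CAST(CHARINDEX(CHAR(10), txt) AS bigint) AS end_pos
    FROM src
    UNION ALL
    SELECT id, txt,
           end_pos + 1,
           CAST(CHARINDEX(CHAR(10), txt, end_pos + 1) AS bigint)
    FROM lines
    WHERE end_pos > 0
)
SELECT id, SUBSTRING(txt, start_pos, end_pos - start_pos) AS line
INTO #Lines
FROM lines
WHERE end_pos > 0
OPTION (MAXRECURSION 0);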

View 9 Replies View Related

SQL Server 2012 :: ShrinkFile On Large Database With TRUNCATEONLY

Dec 30, 2013

We have a large OLAP database, about 2.5 TB spread out over 3 data files on three different drives, and recently someone ran a query that created a table that continued to grow until the data files filled the available disk space (about 3 TB total - 1 TB per drive).

Tonight I plan on running a full backup (it's in Simple mode) and running a ShrinkFile on all three files sequentially with TRUNCATEONLY just so it will remove the space after the last extent. Any way to tell ahead of time how much space this will recover?

Granted running a DB Shrink is one of those things you just don't do, but this is a one-time shot and unavoidable to get the file size back under control.
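To get a rough idea of the free space in each file beforehand (TRUNCATEONLY will only release the portion of that free space sitting after the last allocated extent), something like this run in the database should help:

SELECT name,
       size / 128 AS FileSizeMB,
       FILEPROPERTY(name, 'SpaceUsed') / 128 AS SpaceUsedMB,
       (size - FILEPROPERTY(name, 'SpaceUsed')) / 128 AS FreeMB
FROM sys.database_files;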

View 5 Replies View Related

SQL Server 2012 :: Purge Process On A Large Table

Jan 9, 2014

I am attempting to do a rather simple purge task on a very large table. This task will need to run daily and delete records older than 6 months from the database. On the first pass this will delete well over 130 million rows. I thought the best way to handle this is to create a proc and call the proc from a SQL Agent job that runs nightly. Here is an example of the script:

CREATE PROCEDURE usp_Purge_WCFLogger
AS
SET NOCOUNT ON
EXEC sp_rename 'dbo.logs', 'logs_work'
GO
SELECT * INTO dbo.Logs_Backup FROM dbo.Logs_Work WHERE TIMESTAMP < DATEADD(month, -6, GETDATE())

[Code] .....

View 3 Replies View Related

SQL 2012 :: Create Clustered Index On A Very Large Table (500 GB)

May 7, 2014

I need to create a clustered index (CI) on a very large SQL Server 2012 database table. This table has approximately 10 billion rows and is 500 GB in size. The job ran for about 20 hours and then failed with the error: "Out of disk space in tempdb". My tempdb is 1.8TB, but that's still not enough.

Here is my script:

CREATE CLUSTERED INDEX CI_IndexName
ON TableName(Column1,Column2)
WITH (MAXDOP= 4, ONLINE=ON, SORT_IN_TEMPDB = ON, DATA_COMPRESSION=PAGE)
ON sh_WeekDT(Day_DT)
GO
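One thing that might be worth trying (a sketch of the same statement with one option changed): with SORT_IN_TEMPDB = OFF the sort runs are kept in the destination filegroup instead of tempdb, so the 1.8TB tempdb is no longer the limiting factor - at the cost of needing that free space in the user database instead.

CREATE CLUSTERED INDEX CI_IndexName
ON TableName (Column1, Column2)
WITH (MAXDOP = 4, ONLINE = ON, SORT_IN_TEMPDB = OFF, DATA_COMPRESSION = PAGE)
ON sh_WeekDT(Day_DT)
GO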

View 9 Replies View Related

SQL 2012 :: Partitioning Large Table On Nullable Date

May 15, 2014

I have a very large table that I need to partition. Ideally the table will write to three filegroups. I have defined the Partition function and scheme as follows.

CREATE PARTITION FUNCTION vm_Visits_PM(datetime)
AS RANGE RIGHT FOR VALUES ('2012-07-01', '2013-06-01')
CREATE PARTITION SCHEME vm_Visits_PS
AS PARTITION vm_Visits_PM TO (vm_Visits_Data_Archive2, vm_Visits_Data_Archive, vm_Visits_Data)

This should create three partitions of the vm_Visits table. I am having a few issues, the first has to do with adding a new clustered index Primary Key to the existing table. The main issue here is that the closed column is nullable (It is a datetime by the way). So running the following makes SQL Server upset:

ALTER TABLE dbo.vm_Visits
ADD CONSTRAINT [PK_vm_Visits] PRIMARY KEY CLUSTERED
(
VisitID ASC,
Closed
)
ON [vm_Visits_PS](Closed)

I need to define a primary key on the VisitID column, but I have to include the Closed column in order to partition on it. I'm also wondering how I would move data between partitions on a monthly basis. Would I simply update the partition function, or have to do some sort of merge, split, or switch operation?
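On the monthly maintenance question, a rough sketch of the usual pattern (boundary values and the archive table are illustrative; the archive table must be empty, identically structured, and on the same filegroup as the switched partition):

-- Add a new monthly boundary; mark the filegroup to receive it first
ALTER PARTITION SCHEME vm_Visits_PS NEXT USED vm_Visits_Data;
ALTER PARTITION FUNCTION vm_Visits_PM() SPLIT RANGE ('2014-06-01');

-- Age a partition off by switching it out to an archive table (metadata-only operation)
ALTER TABLE dbo.vm_Visits SWITCH PARTITION 1 TO dbo.vm_Visits_Archive;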

View 2 Replies View Related

SQL 2012 :: Create Script That Will Import Large XML Files?

Jul 28, 2014

I need to create a script that will import large XML files (500 - 7GB) on a daily basis and store the data in a relational DB structure.

What is the best and fastest way of importing such files? I have played around with smaller files and found the following.

1. SSIS XML Data Source: It doesn't seem to like the complex element types and throws out the file.
2. Bulk file import, storing the file in an XML variable and using XQuery to parse it: This works, but it can't take a file of more than 2GB, so I can't use this method.
3. C# + XML serialization: This also works, but seems terribly slow. I open the DB connection once, so it doesn't open and close for each DB call, but it still takes a long time.

How can I import large XML quickly into a relational table structure?

View 9 Replies View Related

SQL Server 2012 :: Fetching Large Data From Teradata?

Jun 9, 2015

I am fetching a large amount of data from Teradata into SQL Server using a linked server, and I am hitting the error below:

A transport-level error has occurred when receiving results from the server. (provider: TCP Provider, error: 0 - The semaphore timeout period has expired.)

View 0 Replies View Related

SQL Server 2012 :: How To Batch Delete Large Table

Jun 16, 2015

I have a table with about 466 million rows. In this table there is an int column called WeeksToRetain, as well as an EventDate column containing the date the row was inserted. I am trying to delete all the rows that should be deleted according to WeeksToRetain. For example, if the EventDate is 5/07/15 with a 1 in the WeeksToRetain column, the row should be removed by 5/14/15. I am not sure which days SQL considers the beginning and end of the week. However, the core issue I am having is the sheer volume of deletions I must do and the resulting log growth.

So I am trying to do the delete in batches. More specifically, I want to load a temporary table with a million rows, then use it to load a sub temporary table with 100,000 rows and join that to the table I want to delete from, looping through 10 times to get through the million. The Logging.EvenLog table, which is the table I'm trying to purge, has a clustered index on EventDate (ASC). I would like to run this as a scheduled job with enough time between executions for log backups to run.

DECLARE @i int
DECLARE @RowCount int
DECLARE @NextBatchDate datetime
CREATE TABLE #BatchProcess
(
EventDate datetime,
ApplicationID int,

[Code] .....
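As a point of comparison, a minimal batched-delete sketch that keeps each transaction (and therefore each chunk of log) small; the table and column names follow the post, and the batch size and delay are arbitrary:

DECLARE @BatchSize int = 100000;

WHILE 1 = 1
BEGIN
    DELETE TOP (@BatchSize)
    FROM Logging.EvenLog
    WHERE DATEADD(WEEK, WeeksToRetain, EventDate) < GETDATE();

    IF @@ROWCOUNT = 0 BREAK;          -- nothing left to purge

    WAITFOR DELAY '00:00:05';         -- breathing room for log backups between batches
END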

View 9 Replies View Related

SQL 2012 :: Merge Agent Hanging After A Large Request

Aug 29, 2015

Using merge replication + web synchronization, I have a situation where, when there is a large amount of data changes to upload to the publisher, the merge agent creates a large request and sends it over. The publisher gets it and is able to work on it. After a few minutes it has finished, but (I assume) the connection has been dropped. On the subscriber's side, it appears that the merge agent has hung. The output looks something like this:

Upload request size is XXX bytes.
Uploaded a total of 100 chunks.
Uploaded a total of 200 chunks.
Uploaded a total of 211 chunks.

The request message was sent to [URL] ....

Normally, when the publisher finishes working, the merge agent continues processing. But when it takes more than a few minutes (it seems to break at about 2 or 3 minutes), the merge agent will hang for as long as the InternetTimeout setting allows (currently 20 minutes) before finally failing and retrying.

But that's not right. The publisher was done and can't communicate back to the merge agent (presumably because the connection was dropped). As a result, the merge agent will try to re-enumerate changes, on top of giving the appearance that it's hung.

I've already fiddled with settings such as MaxUploadChanges, UploadGenerationsPerBatch, UploadReadChangesPerBatch, and UploadWriteChangesPerBatch. However, none of those settings actually ensures that the request message is not too large. They have worked in breaking the changes up into separate batches (e.g. processing a single table rather than all tables), which results in more frequent updates and thus avoids the problem.

However, when a single table has several changes, it is still lumped into one large request which then takes more than 2-3 minutes to process on publisher's side and thus I still end up with the same symptom of merge agent hanging.

Is there anything else I could try to get merge agent to keep its connection alive even during processing a large request?

View 0 Replies View Related

SQL Server 2012 :: Delete Large Number Of Records?

Sep 8, 2015

I have the following scenario:

SQL database on SQL 2012

Large production table with 15 million records

The table has 3 years of data

New monthly data is being added every month.

New monthly data is loaded, checked, and finally approved after 6 or 7 iterations. Because of these iterations, the monthly data set is added, then deleted, then added, then deleted a few times. Because the table is big, this process takes time. Any thoughts on how to make the delete/insert process faster? Keep in mind I cannot do much, because it is a production table and is being accessed by other users for other analysis.

Delete is done based on trx_date which is a year/month combo, like 201508.

The table has monthly sales by customer aggregated.

The table structure is:

CREATE TABLE [dbo].[Sales](
[batch_key] [int] NOT NULL,
[Company_key] [int] NOT NULL,
[customer_key] [char](22) NOT NULL,
[Trx_Date] [int] NOT NULL,
[account] [nvarchar](35) NOT NULL,

[code].....

View 9 Replies View Related

Sql Server Downtime RCA

Nov 30, 2007

My SQL Server 2005 database was down last night. From the logs I can find only the details below.




Code Block
Date 11/30/2007 1:01:34 AM
Log SQL Server (Archive #1 - 11/30/2007 1:01:00 AM)
Source spid4s
Message
SQL Server is terminating in response to a 'stop' request from Service Control Manager. This is an informational message only. No user action is required.







Now I want to find the root cause for this. Who stopped this service? And if the service was stopped automatically, which program/service is responsible?

Please suggest how to do a root cause analysis for this downtime.

View 6 Replies View Related

SQL Server 2012 :: Select Large Data From Multiple Tables

May 10, 2014

In a Library Management database we have these tables

1) Document ( DocNo , Doc_type , permalink,inDate)
2)Title(id, DocNo,Main_Title, Other_Title)
3)Author(id , Author_Name , Author_Family,Type--Like:main author , translator ,....)
4)Publisher(id,DocNo , Name,Publisedate,address)
5)Subject(id,DocNo,Subject)
6)Description(id,DocNo,ISBN,description)--one document may have some ISBN,etc

In the Document table I have 500,000 records.

I want to search for a word in these tables. For example, I want to search for 'Computer'; the word may appear in the subject, the title, the description, etc. How can I do this with the best performance?

View 3 Replies View Related

SQL Server 2012 :: Merging Two Large Tables (More Than 100m Rows)

Aug 18, 2014

SQL 2012

I have a source table in the staging database stg.fact and it needs to be merged into the warehouse table whs.Fact.

stg.fact is not a delta feed; it is basically an intra-day refresh.

Both tables have a last updated date so its easy to see which have changed.

It will be new (insert) or changed (update) data that I am interested in, there are no deletions.

As there could be millions of rows to insert or update, this needs to be efficient.

I expect whs.Fact to go to >150 million rows.

When I have done this before I started with T-SQL Merge statement and that was not performant once I got to this size.

My original plan was to do this in SSIS with a lookup task that marks the inserts and updates and deals with them separately. However, when I set up the lookup transformation, the reference data set needs a package variable in the SQL command, and this does not seem possible with the lookup in 2012! I am currently looking at the Merge Join transformation and at any clever basic T-SQL that could work, as this will need to be fast - and that's where I think T-SQL may be the better route.

Both tables will have >100,000,000 rows
Both tables have the last updated date
The Tables are in different databases but on the same SQL Instance
Each table holds 5 integer columns, one varchar, one datetime

Last time I used Merge it was a wider table with lots of columns so don't know if this would be an option.
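A plain T-SQL alternative sketch (the key and column names below are placeholders, not the real schema) that avoids one giant MERGE: update the changed rows first, then insert the new ones, each driven by the last-updated date:

-- 1) Update rows that already exist in the warehouse but have a newer staging version
UPDATE w
SET    w.Col1 = s.Col1,
       w.Col2 = s.Col2,
       w.LastUpdated = s.LastUpdated
FROM   whs.Fact AS w
JOIN   stg.fact AS s ON s.BusinessKey = w.BusinessKey
WHERE  s.LastUpdated > w.LastUpdated;

-- 2) Insert rows that are not in the warehouse at all
INSERT INTO whs.Fact (BusinessKey, Col1, Col2, LastUpdated)
SELECT s.BusinessKey, s.Col1, s.Col2, s.LastUpdated
FROM   stg.fact AS s
WHERE  NOT EXISTS (SELECT 1 FROM whs.Fact AS w WHERE w.BusinessKey = s.BusinessKey);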

View 6 Replies View Related

SQL Server 2012 :: Best Way To Handle Like Percentage On Column Too Large For Index

Sep 18, 2015

We have a table with 100M rows, and up until now we were fine with a nonclustered index on a varchar(4000) column because we never went above 900 bytes (yes, it is a bad design). We now need to support international character sets, so the column was changed to nvarchar(4000), and we now have data past the 900-byte index key limit.

The data is long and seems useless, but the business needs it, and they need to be able to search "where bigcolumn like 'test%'". With an index, even with a huge amount of data, this was 'fast'. Now, of course, without an index it is unusable. The wildcard is always at the end of the search. I made a full-text index on the column, and basic queries such as select * from ourtable where contains(bigcolumn, 'AReallyLongStringofTextHere') work fine unless there is a space in the data. We lose thousands of returned rows because of spaces in the data.

I have tried select * from ourtable where contains(bigcolumn, '"AReallyLongStringofTextHere that includes spaces"'), but not all of the data is returned: I get 112 rows with the contains statement, whereas the table-scanning statement select * from ourtable where bigcolumn like 'AReallyLongStringofTextHere that includes spaces%' returns 1939 rows. I understand that the full-text index is breaking the long string up because it contains spaces. Is there a way to retain the entire string as one index entry, or is there a way to fix my query to return all of the rows?
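Since the wildcard is always at the end, one hedged workaround is to index a short persisted prefix of the column and let that prefix do the seeking (the column name, prefix length, and index name are illustrative; 450 characters keeps the nvarchar key at the 900-byte index limit, and the sketch assumes the search string fits within that prefix):

ALTER TABLE dbo.ourtable ADD bigcolumn_prefix AS CAST(LEFT(bigcolumn, 450) AS nvarchar(450)) PERSISTED;
CREATE INDEX IX_ourtable_prefix ON dbo.ourtable (bigcolumn_prefix);

-- The prefix predicate is seekable; the second predicate re-checks the full column
SELECT *
FROM dbo.ourtable
WHERE bigcolumn_prefix LIKE 'AReallyLongStringofTextHere that includes spaces%'
  AND bigcolumn LIKE 'AReallyLongStringofTextHere that includes spaces%';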

View 9 Replies View Related

Calculate Downtime In Given Period

Aug 7, 2013

I have a table as seen below and need to calculate the downtime in a given period.

dateandtime  !  Equipment No.  !  Status
1/2/2013     !  4              !  0
1/5/2013     !  3              !  1
1/8/2013     !  8              !  0
1/3/2013     !  5              !  1
1/2/2013     !  1              !  0
1/4/2013     !  4              !  1

The ! is my attempt to make a line between columns...
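Assuming Status 0 means the equipment went down and 1 means it came back up, one SQL 2012 sketch pairs each down event with the next event for that equipment using LEAD and sums the gaps (the table name, column spellings, and the period boundaries below are placeholders):

;WITH e AS (
    SELECT EquipmentNo,
           dateandtime,
           [Status],
           LEAD(dateandtime) OVER (PARTITION BY EquipmentNo ORDER BY dateandtime) AS next_event
    FROM dbo.EquipmentStatus
    WHERE dateandtime >= '20130101' AND dateandtime < '20130201'
)
SELECT EquipmentNo,
       SUM(DATEDIFF(MINUTE, dateandtime, ISNULL(next_event, '20130201'))) AS downtime_minutes
FROM e
WHERE [Status] = 0          -- down events; downtime runs until the next event (or the end of the period)
GROUP BY EquipmentNo;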

View 13 Replies View Related

SQL 2012 :: Generating CREATE TABLE Scripts For Large Number Of Tables

Feb 11, 2014

Other than right-clicking on each individual table in SSMS and generating a CREATE script, is there a simple way to generate CREATE TABLE scripts for tables within a given database?

Background: I have a bunch of tables in one database, and I would like to add tables to a second database that have the same names and basic structures as some of the tables from the first database.

I do not need to transfer any data from the tables; this is a separate project that will use a similar data structure. I just want to generate the CREATE TABLE scripts for the 30-ish tables within the first database, then I'll tweak the scripts as appropriate and run them against the new database.

[URL] ....

View 7 Replies View Related






