SQL Server 2008 :: Large Tables In OLTP

Jul 14, 2015

At what record count is a table considered a large table?

We are getting more deadlocks. We are using the default isolation level. Read and insert statements are blocking each other and causing deadlocks.

I am thinking that purging might reduce the deadlocks.

The table has 15 million records. Is this table considered a large table in OLTP systems?

In general, how many records does a table need to be considered large?
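
If the deadlocks are mostly readers colliding with writers under the default READ COMMITTED level, one common mitigation is row versioning. A minimal sketch, assuming your database is named YourDb (the switch needs a moment of exclusive access):

ALTER DATABASE YourDb SET READ_COMMITTED_SNAPSHOT ON WITH ROLLBACK IMMEDIATE;

With READ_COMMITTED_SNAPSHOT on, readers see the last committed version instead of blocking behind inserts, at the cost of some tempdb version-store usage.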

View 1 Replies



SQL Server 2008 :: Retrofitting Partitioning To Existing (large) Tables

Feb 9, 2015

We have an existing BI/DW process that adds large chunks of data daily (~10M rows) to an existing table, as well as using Deletes to remove stale data. This scenario seems to beg for partitioning to support switching in/out data.

After lots of reading on this, I have figured out the mechanics of the switching, but I still have some unknowns about the indexes needed to support it.

The table currently has several non-clustered indexes, including one on the partitioning column - let's call that column snapshotdate. Fortunately there are no FKs involved, and no constraints.

Most of the partitioning material I see focuses on creating a clustered PK to assist with switching. Not sure if this is actually necessary, but assume I create one using an Identity column (currently missing) plus snapshotdate.

For the other non-clustered, non-unique indexes, can I just add the snapshotdate to the end of the index? i.e. will that satisfy the switching requirement?
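
For what it's worth, the core requirement for switching is that every index be aligned, i.e. partitioned on the same function and column; for a non-unique non-clustered index you don't even have to put snapshotdate in the key, because SQL Server adds the partitioning column as an included column if it's missing. A minimal sketch with assumed names and a monthly RANGE RIGHT layout:

CREATE PARTITION FUNCTION pfSnapshot (date)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01');

CREATE PARTITION SCHEME psSnapshot
AS PARTITION pfSnapshot ALL TO ([PRIMARY]);

-- An aligned non-clustered index: created on the partition scheme; the
-- partitioning column rides along as an included column.
CREATE NONCLUSTERED INDEX IX_Fact_Customer
ON dbo.Fact (CustomerId)
INCLUDE (snapshotdate)
ON psSnapshot (snapshotdate);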

View 1 Replies View Related

SQL Server 2008 :: Add ID And Primary Key To Large Table(s)

Apr 24, 2015

Background:

* SQL Server 2008 R2
* Database was created from a third party product. The product writes to the 3 tables that I need to make changes to 24/7 and downtime is not an option. All changes must be done live.
* Database overall size is ~200 GB
* The 3 tables I must update make up ~190 GB of that space.
* Tables have no primary key or ID columns. Therefore, the data is highly fragmented.
* Of the ~190 GB of space allocated for the tables, there is roughly 70 GB of actual data.
* Rows of the table are not guaranteed to be unique. In fact, on one of the tables, tests were run with a small sample of data and duplicates were very much evident.

What I'm trying to accomplish here is to get an ID column added to the 3 tables and set that ID field as the primary key. Doing so will force the data to become much less fragmented than it is currently and with purging and new inserts, eventually fragmentation will be nearly non-existent.

Problem:
Making table changes on tables this large while data is constantly being added poses many risks and can cause data loss. This was tried on a smaller table than these three, and the entire table was lost in the process. A restore from backup was needed to get back to the most recent log backup point.

Original Solution:
My original plan was to create a backup of each table and run the script below to migrate the majority of the data temporarily into the new table. I could then update the original table (which now would contain much less data) and then migrate the data back.

CREATE TABLE #temp
(
MsgDate varchar(10)
,MsgTime varchar(8)
,MsgPriority varchar(30)
,MsgHostname varchar(255)

[Code] ....

Original Solution Problem:
The problem with the solution above is that it runs a DELETE against the original table using the values from the temporary table. When there are duplicate rows, which have not all been inserted into the backup table yet, they will all be removed from the original table, because there is nothing unique to separate them out. In my testing, I had 10,000 rows in the original table and ended up with 9,959 rows in the backup table.

Question 1: Is my approach to making these table changes reasonable?
Question 2a: If so, how can I make sure I don't lose data as part of this temporary migration of the data to my backup tables?
Question 2b: If not, what would be a better approach that isn't going to cause disruption to the application that INSERTs data 24/7 and won't have any risk of data loss?
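
One way around the duplicate-row problem, sketched here with assumed table and column names: the OUTPUT clause captures exactly the rows a DELETE removes, in the same atomic statement, so each physical row, duplicate or not, lands in the backup table exactly once and no separate DELETE-by-value is needed.

-- Move rows in batches; DELETE ... OUTPUT is atomic per statement.
WHILE 1 = 1
BEGIN
    DELETE TOP (10000)
    FROM dbo.OriginalTable
    OUTPUT DELETED.MsgDate, DELETED.MsgTime, DELETED.MsgPriority, DELETED.MsgHostname
    INTO dbo.BackupTable (MsgDate, MsgTime, MsgPriority, MsgHostname);

    IF @@ROWCOUNT = 0 BREAK;
END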

View 9 Replies View Related

DB Engine :: In-Memory OLTP Use With Existing Tables / Index / Procedures

Nov 10, 2015

1. I need to make use of the in-memory engine for my pre-existing stored procedures, tables, and indexes. Do I need any code changes in the application, and how do I store the tables/indexes in memory?

Assume the tables may have primary key indexes as well.

2. If a table has one primary key index, 2 foreign key constraints, and 3 non-clustered indexes, which of those can be loaded into the memory area, and how do I do that?

3. In-memory is a lock-free zone, while locks normally happen in an RDBMS context. How does this work without locks?
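
A minimal sketch, assuming SQL Server 2014 with a MEMORY_OPTIMIZED_DATA filegroup already added: a disk-based table cannot be converted in place, so you create a new memory-optimized table and copy data into it. Note that 2014 memory-optimized tables do not support foreign keys, so those constraints would have to be dropped or enforced in code. Table and column names below are illustrative:

CREATE TABLE dbo.SessionState
(
    SessionId INT NOT NULL
        PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT = 1000000),
    Payload VARBINARY(8000) NULL
)
WITH (MEMORY_OPTIMIZED = ON, DURABILITY = SCHEMA_AND_DATA);

On question 3: the engine replaces locks with optimistic multi-version concurrency control. Writers create new row versions and conflicts are detected at commit time, so readers never block writers.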

View 3 Replies View Related

SQL Server 2008 :: Update Statistics With Large Databases

Feb 5, 2015

Currently our database size is around 350 GB. It will grow to about 1.5 TB.

We have the

Auto create statistics option :True,
auto update statistics option :True,
auto update statistics asynchronously option : False

at database level

We have a weekly job that updates statistics, and it runs a very long time. It was created through a maintenance plan using the full scan option.

Previously they tested with sampling, but running with sampling instead of a full scan affected the queries.

Is there an option to avoid the long job duration?

If we didn't update the statistics manually, what would happen? How do you maintain statistics on large databases?
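
Two hedged options to shorten the job, with an assumed table name: sample at a fixed rate (a middle ground between FULLSCAN and the default sampling that caused trouble), or let sp_updatestats skip statistics whose tables haven't changed.

-- Sample a fixed percentage rather than scanning every row:
UPDATE STATISTICS dbo.BigTable WITH SAMPLE 25 PERCENT;

-- Or update only statistics whose tables have been modified since last time:
EXEC sys.sp_updatestats;

If you never updated statistics manually, auto-update would still fire once roughly 20% of a table's rows have changed, which on very large tables can be far too infrequent; that is usually why the manual job exists.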

View 9 Replies View Related

SQL Server 2008 :: Migrate Contents Of Large Table From One DB To Another?

Mar 3, 2015

I have a large table containing about 800 million rows with an average row length of about 1K. The columns in the table are char columns. I need to move the contents of this table into a similar table where the target columns are varchar. The original table column definitions are compatible with the target table but the reverse is not necessarily true. For example, one column is being changed from int to bigint. The table is partitioned.

So, what is the fastest way to migrate the data? I was thinking of unloading each partition into a flat file and loading the target table with multiple load streams. Is this a good way?
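
Per-partition extract and load is a reasonable pattern. A minimal sketch of one load stream, with assumed file path, delimiters, and table name; TABLOCK together with a simple or bulk-logged recovery model is what makes the load minimally logged:

BULK INSERT dbo.TargetTable
FROM 'D:\extracts\partition_001.dat'
WITH (TABLOCK, BATCHSIZE = 1000000,
      FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');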

View 0 Replies View Related

SQL Server 2008 :: Replication Error - Field Size Too Large

Feb 1, 2011

I've got two databases on the same server and replicate some tables from one database to another. The replication is configured not to drop the table if it exists, but to delete the data based on the filter if one exists. There are two tables on the subscriber that have some extra columns.

I get "field size too large" error when trying to replicate them. Is there a workaround without having to make the publisher and the subscriber tables identical by schema?

View 5 Replies View Related

SQL Server 2008 :: What Could Cause Large Gaps In Servers Default Trace

Feb 12, 2015

I have the default trace enabled on a SQL Server 2008 R2 instance and found today that there is a gap of nearly 4 minutes in the trace, during a time of day when there is most certainly not going to be a 4-minute window of nothing happening.

What, if anything, could cause the default trace to have a gap like this? The SQL Server instance (against my preferences) is hosted on VMware; however, it has its own host, so its resources are not being shared with any other server. The data & log files reside on different parts of the SAN. Our IT & network admins are looking into the issue on their end, but when I looked and found a near 4-minute gap in the default trace, it hit me that this could be something above/outside of SQL Server.
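
To line the gap up against what the trace actually captured, the default trace files can be queried directly; a minimal sketch:

SELECT t.StartTime, te.name AS event_name, t.TextData, t.HostName
FROM sys.traces st
CROSS APPLY fn_trace_gettable(st.path, DEFAULT) AS t
JOIN sys.trace_events te ON te.trace_event_id = t.EventClass
WHERE st.is_default = 1
ORDER BY t.StartTime DESC;

Also worth remembering that the default trace only writes when one of its traced events fires (file growths, DDL, warnings, and so on), so a busy-but-uneventful period can legitimately look like a gap.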

View 1 Replies View Related

SQL Server 2008 :: How To Find Statements That Cause Large Memory Paging

Apr 22, 2015

I am monitoring our production server, and noticed that periodically we have spikes of Memory Paging Rate (pages/sec).

How can I find the particular queries/stored procedures that are causing this?
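
A hedged starting point: pages/sec is an OS-level counter, so no DMV maps to it directly, but sorting the plan cache by physical I/O usually surfaces the statements worth suspecting:

SELECT TOP (20)
    qs.total_physical_reads,
    qs.execution_count,
    SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
              WHEN -1 THEN DATALENGTH(st.text)
              ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1) AS statement_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_physical_reads DESC;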

View 5 Replies View Related

SQL Server 2008 :: Speed Up Text Search In Large Result Set?

Jul 14, 2015

I have a query below which filters the detail field in the #TempLogins table. The detail field is a text field which contains many types of text strings, some containing URLs with parts like "ResultID=5", which is what the ResultIDSearch and ResultSetIDSearch fields contain. The records with entries like "ResultID=5" are the ones I'm trying to filter for.

The problem I have is that the query takes way too long to run. The #TempLogins table has around 200K records and the #TempSearch table has around 80K records.

select * from #TempLogins a where exists
(select 1 from #TempSearch t1 where
a.detail like '%' + t1.ResultIDSearch + '%'
or
a.detail like '%' + t1.ResultSetIDSearch + '%')
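
Since a leading-wildcard LIKE can never use an index, every row pair gets scanned here. A hedged alternative, assuming each detail holds at most one "ResultID=n" token terminated by "&" or end of string: extract the token once, index it, and join on equality (the ResultSetID token would be handled the same way).

SELECT a.*,
       CAST(CASE WHEN CHARINDEX('ResultID=', c.d) > 0
                 THEN SUBSTRING(c.d,
                          CHARINDEX('ResultID=', c.d),
                          CHARINDEX('&', c.d + '&', CHARINDEX('ResultID=', c.d))
                              - CHARINDEX('ResultID=', c.d))
            END AS VARCHAR(200)) AS ResultToken
INTO #LoginTokens
FROM #TempLogins a
CROSS APPLY (SELECT CAST(a.detail AS VARCHAR(MAX)) AS d) c;

CREATE INDEX IX_LoginTokens ON #LoginTokens (ResultToken);

SELECT l.*
FROM #LoginTokens l
JOIN #TempSearch t1 ON t1.ResultIDSearch = l.ResultToken;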

View 1 Replies View Related

SQL Server 2008 :: Large Binary Dataset - Database Or File System?

Jun 2, 2015

I have a well-structured but also very large binary data set that is generated by a C++ application every five minutes. The data needs to be accessed by SQL applications. Since data is generated every five minutes, performance is key, both for write and read. The data set is about 500 MB. If data is written to the file system, the write performance doesn't involve SQL Server. For reading it, I have a CLR function that reads the portions of the data I need based on offset and length. That works and is very fast. The problem is that the data is stored in the file system, so it is not self-contained within the database.

A second option that I haven't explored yet is to write the data into a table as VARBINARY(MAX). I would read the data using SUBSTRING with the appropriate offset and length. I'd like to know about the performance of SQL write/read for binary data of this size, and whether there is a third option I haven't thought of. I'm using SQL Server 2014.
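
A minimal sketch of the second option under assumed names; SUBSTRING offsets on VARBINARY(MAX) are 1-based, and with a primary key seek the read should touch mainly the LOB pages for the requested range:

CREATE TABLE dbo.BinarySnapshot
(
    SnapshotId INT IDENTITY(1,1) PRIMARY KEY,
    CapturedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Payload VARBINARY(MAX) NOT NULL
);

-- Read 4096 bytes starting at byte offset 1048576:
SELECT SUBSTRING(Payload, 1048576 + 1, 4096)
FROM dbo.BinarySnapshot
WHERE SnapshotId = 42;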

View 5 Replies View Related

SQL Server 2008 :: Making Use Of A Large Transaction File To Delete Records?

Jun 5, 2015

Currently we have a database of about 300 GB. Because our backup system failed some time in the past, we were left with a transaction log file which grew to about 160 GB. However, our backups are working again and everything is working fine. My understanding is that the transaction log file is now practically empty, but its capacity remains at 160 GB.

When you delete records, the deleted transactions get logged to the transaction log file. My understanding is that when a log backup is done, these transactions get discarded from the log file.

Could I make use of this relatively large transaction log file and start deleting records without actually adding to the log file's size?

The plan is to delete records from logging tables that are not referenced by any other table, without this increasing the transaction log file. For example, over a period of a few weeks we can delete a chunk of records from a table. Then, after a log backup has completed, we can delete another chunk of records out of this table, until we have got the table down to the records that we now need. Will this work?
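
Roughly yes, provided the database is in FULL recovery and log backups keep running: each log backup frees the already-written portion of the log for reuse, so chunked deletes can recycle the existing 160 GB instead of growing it. A minimal sketch with assumed names:

-- Delete a bounded chunk, then let the next log backup free that space:
DELETE TOP (50000)
FROM dbo.LoggingTable
WHERE LoggedAt < '20140101';

BACKUP LOG YourDb TO DISK = N'D:\backups\YourDb_log.trn';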

View 2 Replies View Related

SQL Server 2008 :: Changing Large / Existing Table To Sparse Columns?

Sep 21, 2015

I have some huge tables (think 200+ GB for a single table) which are excellent candidates for sparse columns. The tables have many columns defined with decimal datatypes (13,2), with a large percentage of the values (over 50% in most cases, some as much as 99%) being 0.00. Since this is very expensive in terms of storage, my idea is to set all the 0.00 values to NULL and then mark those columns as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table.

1.) Three steps for each column in each table in each DB:

Step 1: alter the table to allow NULLs in the column

Step 2: update table set column = NULL where column = 0.00

Step 3: alter the table to mark the column as sparse

2.)

Step 1: Create entirely new table with sparse column definitions

Step 2: copy entire table, transforming 0.00 to null for affected columns via SSIS

Step 3: drop original table, rename new table to original name
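
For option 1, the per-column sequence looks like this (a sketch with assumed names; note that each statement on a 200+ GB table is itself a large logged operation, so chunking the UPDATE is advisable):

-- Step 1: allow NULLs
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 DECIMAL(13,2) NULL;

-- Step 2: convert the zeros
UPDATE dbo.BigTable SET Amount01 = NULL WHERE Amount01 = 0.00;

-- Step 3: mark the column sparse
ALTER TABLE dbo.BigTable ALTER COLUMN Amount01 ADD SPARSE;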

View 0 Replies View Related

SQL Server 2008 :: Deleting Large Number Of Rows With Foreign Key And Mirroring Setup

Feb 19, 2015

We have a database that is enabled for mirroring. We need to delete old records - around 500k records from one table - but it has foreign key relationships. How do you do these kinds of deletes on production servers?
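
A common pattern, sketched with assumed names: delete child rows first so the foreign key is satisfied, and keep batches small so the mirror's redo queue and the log stay manageable.

WHILE 1 = 1
BEGIN
    DELETE TOP (10000) c
    FROM dbo.ChildTable c
    JOIN dbo.ParentTable p ON p.Id = c.ParentId
    WHERE p.CreatedAt < '20140101';

    IF @@ROWCOUNT = 0 BREAK;
END

The same loop then runs against the parent table once its children are gone.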

View 2 Replies View Related

SQL Server 2008 :: How To Find Which Queries / Processes Causing Large Memory Paging Rate

Mar 30, 2015

Our monitoring tool shows that our production system periodically experiences a large memory paging rate - up to 800 pages/sec. How do I find out which particular queries, stored procedures, or processes initiate this?
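
For catching a spike as it happens, a hedged sketch: look at what is executing right now, with its memory grant and I/O, and correlate that with the paging counter.

SELECT r.session_id, r.status, r.cpu_time, r.reads,
       r.granted_query_memory, t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id > 50;  -- skip system sessions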

View 3 Replies View Related

SQL Server Admin 2014 :: Storing Very Large Tables?

Jan 24, 2015

We are in the middle of re-designing a few tables (namely transaction tables) that will store very large data and will be hosted in the cloud (Azure). The old design of this product breaks transaction tables into monthly tables, i.e. say the ORDERS table would be physically broken into twelve monthly tables over a year, like ORDERS0115 (mmyy), ORDERS0215 and so on.

We are of the opinion that keeping all the transactions in one table is better. I would like to know the best practices for transaction tables like the one mentioned above. Is it better to use one table with partitions? I read somewhere that partitions can slow down SELECT queries if not designed and thought through properly. Since this will be hosted in the cloud (Azure), do you think additional things need to be taken care of? How does a site like Amazon keep its transaction tables?

View 8 Replies View Related

SQL Server 2012 :: Select Large Data From Multiple Tables

May 10, 2014

In a Library Management database we have these tables

1) Document (DocNo, Doc_type, permalink, inDate)
2) Title (id, DocNo, Main_Title, Other_Title)
3) Author (id, Author_Name, Author_Family, Type -- like: main author, translator, ...)
4) Publisher (id, DocNo, Name, Publisedate, address)
5) Subject (id, DocNo, Subject)
6) Description (id, DocNo, ISBN, description) -- one document may have several ISBNs, etc.

In document table I have 500,000 records.

I want to search for a word in these tables. For example, I want to search for 'Computer'; this word may be in the subject, title, description, etc. How can I do this with the best performance?
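
For word searches across several character columns, a full-text index usually outperforms LIKE '%Computer%' by a wide margin. A minimal sketch for one of the tables, assuming a unique single-column index (here called PK_Title) exists to serve as the full-text key:

CREATE FULLTEXT CATALOG LibraryCatalog AS DEFAULT;

CREATE FULLTEXT INDEX ON dbo.Title (Main_Title, Other_Title)
    KEY INDEX PK_Title;

SELECT DocNo
FROM dbo.Title
WHERE CONTAINS((Main_Title, Other_Title), 'Computer');

The same pattern applied to Subject and Description, with the results combined by DocNo via UNION, covers the whole search.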

View 3 Replies View Related

SQL Server 2012 :: Merging Two Large Tables (More Than 100m Rows)

Aug 18, 2014

SQL 2012

I have a source table in the staging database stg.fact and it needs to be merged into the warehouse table whs.Fact.

stg.fact is not a delta feed; it is basically an intra-day refresh.

Both tables have a last updated date so its easy to see which have changed.

It will be new (insert) or changed (update) data that I am interested in, there are no deletions.

As this could be in the millions of rows that are inserts or updates then this needs to be efficient.

I expect whs.Fact to go to >150 million rows.

When I have done this before I started with T-SQL Merge statement and that was not performant once I got to this size.

My original option was to do this in SSIS with a lookup task that marks the inserts and updates and deals with them separately. However, when I set up the lookup transformation so that the reference data set uses a package variable in the SQL command, this does not seem possible with the lookup in 2012! I am currently looking at the Merge Join transformation, and at any clever basic T-SQL that could work, as this will need to be fast; that's where I think T-SQL may be the better route.

Both tables will have >100,000,000 rows
Both tables have the last updated date
The Tables are in different databases but on the same SQL Instance
Each table holds 5 integer columns, one Varchar, one datatime

The last time I used MERGE it was on a wider table with lots of columns, so I don't know if this would be an option.
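
At this scale a plain T-SQL split of the work often beats MERGE: one set-based UPDATE for changed rows, one INSERT for new ones. A hedged sketch with assumed key and column names:

-- Changed rows:
UPDATE w
SET    w.Col1 = s.Col1,
       w.LastUpdated = s.LastUpdated
FROM   whs.Fact w
JOIN   stg.Fact s ON s.BusinessKey = w.BusinessKey
WHERE  s.LastUpdated > w.LastUpdated;

-- New rows:
INSERT INTO whs.Fact (BusinessKey, Col1, LastUpdated)
SELECT s.BusinessKey, s.Col1, s.LastUpdated
FROM   stg.Fact s
WHERE  NOT EXISTS (SELECT 1 FROM whs.Fact w
                   WHERE w.BusinessKey = s.BusinessKey);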

View 6 Replies View Related

T-SQL (SS2K8) :: Extracting Large Tables From Offsite Linked Server

Oct 1, 2014

I work in Healthcare IS for Company A, but Company B is hosting one of our EMR programs for us. This was done on purpose, so that whether a patient is seen at one or the other, their medical history is more complete. However, this puts all of the data that I need to get to on a server across town that I can only access via Sql Server Management Studio as a linked server.

Now, in some ways, the performance has been better than I expected, but sometimes it behaves very erratically. I am using OPENQUERY to handle all of the pulls, and am not joining to any local tables, in order to maximize efficiency.

Here is some of the code I run, and what happens:

SELECT *
FROM
OPENQUERY([linkedservername],
'
SELECT *
FROM Encounter_ItemChild

[Code] ....

***The above query was programmatically generated by taking the IDs from the second query and packing as many into the IN condition as possible. Each statement could hold only about 900 IDs, so around 70 queries get built... however, each one returns the records in question in 1-2 seconds.

My main question is...if the second query pulls all IDs from Encounter in a few seconds, and that query is used in the first query's WHERE clause, why does it spin and spin, while manually throwing the IDs in instead runs almost instantly?
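
The usual explanation is that a subquery in the outer WHERE forces the optimizer into a remote join, fetching rows across the link to evaluate the filter, whereas literal IN lists run entirely on the remote server. A hedged sketch of a middle ground: land the ID list in a local temp table with one pull, then drive the detail pulls from it (the column name and filter inside the pass-through query are illustrative).

SELECT EncID
INTO   #EncounterIds
FROM   OPENQUERY([linkedservername],
       'SELECT ID AS EncID FROM Encounter WHERE EncounterDate >= ''20140101''');

-- Then build the IN lists (or joins) from #EncounterIds locally, exactly as
-- the generated queries do, without re-querying Encounter each time.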

View 3 Replies View Related

SQL Server Admin 2014 :: Columnstore Index On Large Tables

Jul 1, 2015

I created a columnstore index on a table with 20 columns and about 1,000,000,000 rows.

Every day about 5M rows are added.

SELECT queries became faster because of batch mode, and the table demands less disk space than before.

I also have 6 similar tables with 5,000,000,000 rows and plan to move them to columnstore indexes.

The server has 128 GB RAM.

What pitfalls could I face if I have so many columnstore indexes on one server?

How could I spot problems in the DMVs?
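
For the DMV question, a hedged sketch (SQL Server 2014): sys.column_store_row_groups shows row-group state per index; a pile-up of OPEN/CLOSED delta-store rows or many undersized compressed row groups are the usual early warning signs.

SELECT OBJECT_NAME(object_id) AS table_name,
       state_description,
       COUNT(*)        AS row_groups,
       SUM(total_rows) AS total_rows
FROM sys.column_store_row_groups
GROUP BY object_id, state_description
ORDER BY table_name;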

View 3 Replies View Related

SQL Server Storing Large Amounts Of Data In Multiple Tables

Jul 20, 2005

Hello,

Currently we have a database, and it is our desire for it to be able to store millions of records. The data in the table can be divided up by client, and it stores nothing but about 7 integers.

| table | id | clientId | int1 | int2 | int3 | ... |

Right now, our benchmarks indicate a drastic increase in performance if we divide the data into different tables. For example, table_clientA, table_clientB, table_clientC, despite the fact that the tables contain the exact same columns. This however does not seem very clean or elegant to me, and rather illogical since a database exists as a single file on the hard drive.

| table_clientA | id | clientId | int1 | int2 | int3 | ... |
| table_clientB | id | clientId | int1 | int2 | int3 | ... |
| table_clientC | id | clientId | int1 | int2 | int3 | ... |

Is there any way to duplicate this increase in database performance gained by splitting the table, perhaps by using a certain type of index?

Thanks,
Jeff Brubaker
Software Developer
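
One hedged answer: a clustered index leading on clientId keeps each client's rows physically contiguous, which is most of what the per-client tables were buying; table partitioning on clientId achieves the same separation inside one table on editions that support it. A sketch with an assumed table name:

CREATE CLUSTERED INDEX CIX_ClientData
ON dbo.ClientData (clientId, id);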

View 4 Replies View Related

Large MSMerge_History, MSMerge_genhistory, MSRepl_errors Tables In SQL Server 2005 Express Edition (Merge Push Subscriber)

Nov 23, 2006

SQL Server 2005 Standard Edition act as publisher and distributor.
All subscribers are SQL Server 2005 Express Edition.
According to the 2005 Books Online, the "MSmerge_history table exists in the distribution database". However, I found this table in the subscriber database, which is SQL Server 2005 Express Edition.
The problem is that this table (MSmerge_history) and two other tables (MSmerge_genhistory, MSrepl_errors) are quite large. We want to keep the SQL Server 2005 Express database as small as possible so we can put more data into it.
Is there any way (manually or automatically) to clean those tables in SQL Server 2005 Express? Please help.
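
One hedged possibility: merge replication trims its metadata based on the publication's retention period, and the cleanup can also be invoked manually on the subscriber; whether it reclaims enough depends on the retention setting.

EXEC sys.sp_mergemetadataretentioncleanup;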

View 3 Replies View Related

SQL Server 2008 :: Linking Two Tables To One

Feb 6, 2015

I have Table1 (client widgets) and Table2 (client toys). I want to link both Table1 and Table2 up to Table3 (ClientNames).

All tables have a clientid field

I want to right join them so I get all the client widgets and Client toys for each client...
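
A minimal sketch under assumed column names; written as LEFT JOINs from ClientNames, which is the same shape as right-joining the detail tables and keeps every client even when widgets or toys are missing:

SELECT cn.clientid, cn.ClientName, w.WidgetName, t.ToyName
FROM Table3 cn                                   -- ClientNames
LEFT JOIN Table1 w ON w.clientid = cn.clientid   -- client widgets
LEFT JOIN Table2 t ON t.clientid = cn.clientid;  -- client toys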

View 2 Replies View Related

SQL Server 2008 :: Creating Tables From ERD

Mar 3, 2015

I need to write CREATE TABLE statements for the ER diagram that I attached. I am new to SQL and I have trouble integrating foreign keys with these bigger ER diagrams.

These are the tables I need to create:
Create Table Author(...)
Create Table Writes(...)
Create Table Book(...)
Create Table Copy(...)
Create Table Loan(...)
Create Table Customer(...)
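
Since the attached diagram isn't reproduced here, a hedged sketch of just the foreign-key mechanics with two of the tables and invented columns; the same pattern extends to the rest:

CREATE TABLE Book
(
    BookId INT NOT NULL PRIMARY KEY,
    Title  VARCHAR(200) NOT NULL
);

CREATE TABLE Copy
(
    CopyId INT NOT NULL PRIMARY KEY,
    BookId INT NOT NULL,
    CONSTRAINT FK_Copy_Book
        FOREIGN KEY (BookId) REFERENCES Book (BookId)
);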

View 1 Replies View Related

SQL Server Admin 2014 :: Server In Memory OLTP

Apr 24, 2014

SQL Server In Memory OLTP 2014?

View 6 Replies View Related

SQL Server 2008 :: Join And SUM Values From 2 Tables

Jan 29, 2015

I am new to SQL and I need to sum values from 2 tables in one query. When I do it, the SUM values are not correct. This is my query:

SELECT D.Line AS Line, D.ProductionLine AS ProductionLine, D.Shift AS Shift, SUM(CAST(D.DownTime AS INT)) AS DownTime,
R.Category, SUM(Cast(R.Downtime AS INT)) AS AssignedDowntime,
CONVERT(VARCHAR(10), D.DatePacked,101) AS DatePacked
FROM Production.DownTimeReason R
left JOIN Production.DownTimeHistory D

[Code] .....
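
The usual cause of inflated SUMs in a query like this is join fan-out: each row from one table matches several rows in the other, so its value is counted several times. A hedged sketch of the fix, aggregating each table before the join (the Line join key is an assumption; substitute the real one):

SELECT d.Line, d.Shift, d.DatePacked, d.DownTime, r.AssignedDowntime
FROM (SELECT Line, Shift,
             CONVERT(VARCHAR(10), DatePacked, 101) AS DatePacked,
             SUM(CAST(DownTime AS INT)) AS DownTime
      FROM Production.DownTimeHistory
      GROUP BY Line, Shift, CONVERT(VARCHAR(10), DatePacked, 101)) d
LEFT JOIN (SELECT Line, SUM(CAST(Downtime AS INT)) AS AssignedDowntime
           FROM Production.DownTimeReason
           GROUP BY Line) r
       ON r.Line = d.Line;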

View 3 Replies View Related

SQL Server 2008 :: Iterate Through Database Tables?

Mar 18, 2015

MS SQL 2008. I want to execute a delete query on certain tables in my database to delete some rows in those tables. The tables selected have a certain name pattern (the name ends with "Temp").

So I can do this to get a list of the table names

SELECT name
FROM sys.Tables where
name like '%Temp'

Now I want to check each table to see if it has a column with the name "DateStamp" and then execute a delete query as follows:

DELETE FROM TableName WHERE
DateStamp < '2010-01-01'

In other words, I need to iterate through the table names. How do I do this?
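
A hedged sketch using dynamic SQL: build one DELETE per qualifying table, guarded by a check that the DateStamp column actually exists.

DECLARE @sql NVARCHAR(MAX) = N'';

SELECT @sql += N'DELETE FROM ' + QUOTENAME(SCHEMA_NAME(t.schema_id))
             + N'.' + QUOTENAME(t.name)
             + N' WHERE DateStamp < ''20100101'';' + NCHAR(10)
FROM sys.tables t
WHERE t.name LIKE '%Temp'
  AND EXISTS (SELECT 1 FROM sys.columns c
              WHERE c.object_id = t.object_id
                AND c.name = 'DateStamp');

EXEC sys.sp_executesql @sql;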

View 4 Replies View Related

SQL Server 2008 :: How To Update Certain Column From All Tables Within DB

Mar 19, 2015

I have a query where I am trying to update a certain column; in the query below you can see the column name is hard-coded. The column I am trying to update is "O_Test". I used a SELECT statement to figure out how many records that accounts for across the entire database, over all of its tables, and got 643 records. So I am wondering if there is a way I can update all of those columns without looking up each table and updating each one. A plain UPDATE statement won't work here because I am accounting for all records in the DB across every table that contains the hard-coded column name:

SELECT t.name AS table_name,
SCHEMA_NAME(schema_id) AS schema_name,
c.name AS column_name
FROM sys.tables AS t
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.name LIKE '%O_Test%'
ORDER BY schema_name, table_name;

View 5 Replies View Related

SQL Server 2008 :: Finding Column Within All Tables In DB

May 1, 2015

I am trying to find a way to search for a column across all tables of the database. I have created a query, but it is not executing correctly.

SELECT t.name AS table_name,
SCHEMA_NAME(schema_id) AS schema_name,
c.name AS column_name
FROM sys.tables AS t -- the alias must be t to match the t. references above
INNER JOIN sys.columns c ON t.OBJECT_ID = c.OBJECT_ID
WHERE c.name LIKE '%Status%'
ORDER BY schema_name, table_name;

View 3 Replies View Related

SQL Server 2008 :: Finding Tables With No Createdate?

May 4, 2015

In our production DB, almost all tables have a created or createdate column.

I need to find all the tables without a created or createdate column.

SELECT t8.name AS table_name,
SCHEMA_NAME(schema_id) AS schema_name,
c.name AS column_name
FROM sys.tables AS t8
INNER JOIN sys.columns c
ON t8.OBJECT_ID = c.OBJECT_ID

[code]....
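
A hedged sketch that inverts the join: list the tables for which no such column exists, rather than listing the columns that do.

SELECT SCHEMA_NAME(t8.schema_id) AS schema_name,
       t8.name AS table_name
FROM sys.tables AS t8
WHERE NOT EXISTS (SELECT 1
                  FROM sys.columns c
                  WHERE c.OBJECT_ID = t8.OBJECT_ID
                    AND c.name IN ('created', 'createdate'));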

View 2 Replies View Related

SQL Server 2008 :: Comparing Tables For Like Values?

Jul 17, 2015

I have a table of raw data with supplier names, and i need to join it to our supplier database and pull the supplier numbers.

The issue is that the raw data does not match our database entries for these suppliers; sometimes there are extra periods, commas, or abbreviations (i.e. FedEx, FederalExpress, FedEx, inc.), etc. I'm trying to create a query that will search for entries that are similar.

I tried setting a variable equal to the raw data field, and then using LIKE '%@Variable%' to try to return anything containing it, but it didn't return any rows.
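
The likely reason no rows came back: inside the quotes, @Variable is just the literal text "@Variable". The wildcards have to be concatenated around the variable's value. A sketch with assumed names:

DECLARE @Variable VARCHAR(255) = 'FedEx';

SELECT s.SupplierNumber, s.SupplierName
FROM dbo.Suppliers s
WHERE s.SupplierName LIKE '%' + @Variable + '%';

For the looser matches (punctuation, abbreviations), normalizing both sides first, for example stripping periods and commas with REPLACE before comparing, catches cases that plain LIKE misses.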

View 9 Replies View Related

SQL Server 2008 :: Finding All Relations Between Tables?

Sep 1, 2015

Below I have a query which lists the relations (constraints) between tables.

I want to list all the relations which are visible in the Database Diagrams.

The list is not complete. How do I get a complete list?

--
-- Query to show the relations (constraints) between tables.
-- This does not show the complete list
--
SELECT A.constraint_name,
B.table_name AS Child,
C.table_name AS Parent,

[Code] ...
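
As a cross-check, sys.foreign_keys returns one row per FK constraint, which is exactly the set of relationship lines a database diagram draws; a minimal sketch:

SELECT fk.name AS constraint_name,
       OBJECT_NAME(fk.parent_object_id)     AS child_table,
       OBJECT_NAME(fk.referenced_object_id) AS parent_table
FROM sys.foreign_keys fk
ORDER BY child_table;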

View 4 Replies View Related

SQL Server 2008 :: How To Encrypt Either Columns Or Tables

Nov 2, 2015

How do I encrypt either columns or tables in SQL Server (2008 R2)?

Do I need to encrypt a whole table, or can I encrypt certain columns of a table?
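
Both are possible on 2008 R2: Transparent Data Encryption (an Enterprise feature) encrypts the entire database at rest, while cell-level encryption targets individual columns. A sketch of the column route with assumed names; in practice, protecting the key with a certificate is preferable to a bare password:

CREATE SYMMETRIC KEY CardKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'str0ng!Passphrase';

OPEN SYMMETRIC KEY CardKey DECRYPTION BY PASSWORD = 'str0ng!Passphrase';

UPDATE dbo.Customers
SET CardNumberEnc = ENCRYPTBYKEY(KEY_GUID('CardKey'), CardNumber);

CLOSE SYMMETRIC KEY CardKey;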

View 2 Replies View Related






