SQL Server Admin 2014 :: MDW Data Collector - Large Number Of SPIDs

Nov 6, 2015

I've installed the MDW (Management Data Warehouse) database on our central monitoring SQL Server. I've then added a number of servers to be monitored. The data is collected on the servers that are being monitored and uploaded to the central MDW monitoring server.

On the servers that are being monitored, I'm seeing a large number (over 1000) of SPIDs being generated by 'SQL Server Data Collector'.

Is this normal behaviour? I've seen more blocking as a result of this.

Is there any way to reduce the number of SPIDs generated?
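A quick way to see how many of those sessions actually belong to the collector on a monitored server is to group sys.dm_exec_sessions by program_name. This is only a minimal sketch; the LIKE patterns are assumptions, so check the actual program_name values your collector sessions report.

-- A minimal sketch: count sessions opened by the data collector on a monitored server.
-- The LIKE patterns are assumptions; verify the program_name values on your instance.
SELECT program_name,
       COUNT(*) AS session_count
FROM sys.dm_exec_sessions
WHERE program_name LIKE N'%Data Collector%'
   OR program_name LIKE N'%dcexec%'
GROUP BY program_name
ORDER BY session_count DESC;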

View 0 Replies


SQL Server Admin 2014 :: Setup Performance Monitor Collector On Workstation Collecting Remote Server Data?

Mar 31, 2015

I set up the collector and specify the Run As account as my AD account in the Collector Set - Properties - General screen. My AD account is a local admin on the remote server.

However, the collector does not seem to work. Although the collector set is shown as running, the .blg file stays at 64 KB. If I open it, there is nothing inside (no counters at the bottom). What did I miss?

View 1 Replies View Related

SQL Server Admin 2014 :: Database Mirroring For Large Number Of Databases

Oct 27, 2015

I have a 2-node cluster with 4 cores each, running 3 instances of SQL 2008 R2 Enterprise with 60 databases in total, 20 on each instance. I need to set up mirroring for each of the databases to a secondary server that also has 4 cores and 3 instances. My understanding is that in this case the mirror server will provide a maximum of 512 worker threads and the 60 mirrored databases would consume about 240 of them. What needs to be checked to determine the feasibility of an asynchronous mirroring setup like the one described above?
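The thread arithmetic can be sanity-checked on the proposed mirror instance itself. A minimal sketch, using only standard DMVs:

-- Worker-thread ceiling the mirror instance actually exposes
SELECT max_workers_count          -- 512 on a 4-core x64 box with default settings
FROM sys.dm_os_sys_info;

-- Current worker usage per scheduler, to see how much headroom mirroring would leave
SELECT scheduler_id, current_workers_count, active_workers_count, work_queue_count
FROM sys.dm_os_schedulers
WHERE status = N'VISIBLE ONLINE';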

View 0 Replies View Related

SQL Server Admin 2014 :: Storing Very Large Tables?

Jan 24, 2015

We are in the middle of re-designing a few tables (namely transaction tables) that will store very large data volumes and will be hosted in the cloud (Azure). The old design of this product breaks transaction tables into monthly tables, i.e. the ORDERS table is physically broken into twelve monthly tables over a year, like ORDERS0115 (mmyy), ORDERS0215, and so on.

We are of the opinion that keeping all the transactions in one table is better. What are the best practices for transaction tables like the one mentioned above? Is it better to use one table with partitions? I read somewhere that partitions can slow down SELECT queries if not designed and thought through properly. Since this will be hosted in the cloud (Azure), are there additional things to take care of? How does a site like Amazon keep its transaction tables?
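For reference, a minimal sketch (table and column names are hypothetical) of the single-table-with-monthly-partitions alternative to the physical ORDERS0115/ORDERS0215 tables:

-- Monthly partition function and scheme; the partitioning column is part of the clustered key.
CREATE PARTITION FUNCTION pfOrdersMonthly (datetime2(0))
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01', '2015-04-01');

CREATE PARTITION SCHEME psOrdersMonthly
AS PARTITION pfOrdersMonthly ALL TO ([PRIMARY]);   -- or map each range to its own filegroup

CREATE TABLE dbo.Orders
(
    OrderID   bigint IDENTITY(1,1) NOT NULL,
    OrderDate datetime2(0)         NOT NULL,
    Amount    decimal(18,2)        NOT NULL,
    CONSTRAINT PK_Orders PRIMARY KEY CLUSTERED (OrderDate, OrderID)
) ON psOrdersMonthly (OrderDate);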

View 8 Replies View Related

SQL Server Admin 2014 :: How To Analyze Large Procedure Cache

Jun 15, 2015

I want to analyze procedure cache, to find inefficient plans and parameter issues.

I do it through the DMVs, but my DMV queries are very slow and resource-hungry because the procedure cache is several GB. I don't actually need online analysis.

Is it possible to take a fast snapshot of the procedure cache?
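One common approach is to persist a single pass over the plan-cache DMVs into a work table and run the heavy analysis against that table offline. A minimal sketch; dbo.PlanCacheSnapshot is a hypothetical name:

-- Persist one snapshot of the plan cache statistics, then analyse the table instead of the DMVs.
SELECT  GETDATE()                AS captured_at,
        qs.sql_handle,
        qs.plan_handle,
        qs.execution_count,
        qs.total_worker_time,
        qs.total_logical_reads,
        qs.total_elapsed_time,
        st.[text]                AS statement_text
INTO    dbo.PlanCacheSnapshot
FROM    sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st;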

View 0 Replies View Related

SQL Server Admin 2014 :: Columnstore Index On Large Tables

Jul 1, 2015

I created a columnstore index on a table with 20 columns and about 1,000,000,000 rows.

About 5M rows are added every day.

SELECT queries became faster because of batch mode, and the table demands less disk space than before.

I also have 6 similar tables with 5,000,000,000 rows each and plan to move them to columnstore indexes as well.

The server has 128 GB RAM.

What pitfalls could I face if I have so many columnstore indexes on one server?

How could I spot problems in the DMVs?
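Row-group health is one place to look. A minimal sketch against the sys.column_store_row_groups catalog view that ships with SQL Server 2014; a large number of OPEN/CLOSED row groups or small row counts would suggest the daily 5M-row loads are outrunning compression:

SELECT  OBJECT_NAME(object_id) AS table_name,
        state_description,
        COUNT(*)               AS row_group_count,
        SUM(total_rows)        AS total_rows,
        SUM(deleted_rows)      AS deleted_rows
FROM    sys.column_store_row_groups
GROUP BY object_id, state_description
ORDER BY table_name, state_description;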

View 3 Replies View Related

SQL Server Admin 2014 :: High Number Of VLFs

Oct 7, 2015

I have heard that high numbers of VLFs aren't good. They can impact performance and delay recovery time, so I wanted to test that.

I created 2 DBs with 100MB datafile and 50MB logfile.

TestDB log file had 100MB autogrowth
TestDB2 log file had 1% growth.

I inserted 1,048,576 records and took a backup.

I ran DBCC LOGINFO:
TestDB had 40 VLFs.
TestDB2 had 165 VLFs.
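For reference, a minimal sketch of how those counts are obtained: DBCC LOGINFO returns one row per VLF, so the row count is the VLF count for the current database.

USE TestDB;
DBCC LOGINFO;    -- 40 rows returned here

USE TestDB2;
DBCC LOGINFO;    -- 165 rows returned here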

But when I restored both DBs, this is what I got.

TestDB:
RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec).
SQL Server Execution times:
CPU Time = 125ms, elapsed time = 8323 ms.

TestDB2:
RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec).
SQL Server Execution Times:
CPU time = 109 ms, elapsed time = 8314 ms.

The question is: where is the difference? How is TestDB, which has 40 VLFs, better than TestDB2, which has 165 VLFs?

View 6 Replies View Related

SQL Server Admin 2014 :: How To Get Number Of Execution In Specific Time - Not From First

Apr 7, 2015

I have this query

SELECT TOP 100 LTRIM([text]), objectid, total_rows, total_logical_reads, execution_count
FROM sys.dm_exec_query_stats AS a
CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE last_execution_time >= '2015-04-07 10:01:01.01'
ORDER BY execution_count DESC

But execution_count in the result is cumulative from when the plan was first cached. I want to know the count for one specific day only.
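Since execution_count only ever accumulates for the lifetime of a cached plan, one approach is to snapshot the DMV at the start and end of the window and take the difference. A minimal sketch; dbo.QueryStatsSnapshot is a hypothetical table name, and plans evicted or recompiled during the window will not match:

-- Run at the start of the window:
SELECT GETDATE() AS captured_at, sql_handle, plan_handle, execution_count, total_logical_reads
INTO   dbo.QueryStatsSnapshot
FROM   sys.dm_exec_query_stats;

-- Run at the end of the window and diff:
SELECT  LTRIM(st.[text])                          AS statement_text,
        qs.execution_count - snap.execution_count AS executions_in_window
FROM    sys.dm_exec_query_stats AS qs
JOIN    dbo.QueryStatsSnapshot  AS snap
          ON snap.plan_handle = qs.plan_handle
         AND snap.sql_handle  = qs.sql_handle
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY executions_in_window DESC;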

View 9 Replies View Related

SQL Server Admin 2014 :: Why Number Of Reads Increases During Insert Test

Jun 2, 2014

I am writing a performance baseline test.

The first test writes 5,000,000 rows into one table. I realise this is not representative OLTP behaviour, but it helped me start interpreting performance counters and test several setups to be discussed with our server, storage and network administrators. This way we have been able to compare the results of different hard disks, LUN vs VMDK, 1GB vs 10GB network, AMD vs Intel, etc. This way I can also compare several SQL setups (recovery model, max memory config, ...).

The screenshot shows the results of 2 runs on the same server : Win2012R2, SQL2014, 16GB RAM.

In test 1 min/max server memory was set to 9215MB/10751MB
In test 2 min/max server memory was set to 13311MB/14847MB

The script assures the number of bytes inserted in the nvarchar columns is always the same.

This explains why the number of pages and the number of MB in the table are the same at the end of the 2 tests (column 5 and 6)

Since about 13 GB has to be written, the results of test 1 show the lead time increasing once more than 10 GB has been inserted (columns 8 and 9). In addition, at that moment you can see:

- buffer cache hit ratio is decreasing
- page life expectancy becomes "terrible"
- free list stalls/sec increases
- lazy writes/sec increases
- read latency increases (write latency does not)

In test 2 (id 3 in column 1 in the screenshot) those counters are not really influenced (since the 5000000 rows can all be stored in memory).

Now what I do not understand is :

Why are the number of pages read (instance level), as well as the number of bytes read and the number of reads (database level), increasing so dramatically during run 1?

I expected to see a serious impact on write behaviour, since SQL Server is forced to start flushing dirty pages once memory is filled. Actually you can see here that the number of writes (not the number of bytes written) starts to increase faster in test 1 after 4,000,000 rows, but there is no real impact on write latency.

Finally, I want to note:

- I'm the only user on this machine
- the table has a clustered index on an identity column
- there are no foreign key constraints
- inserts are executed using a loop, not one big transaction
- to monitor progress and behaviour/impact, the counters are stored using DMV queries every 10,000 loops

So I wonder why SQL Server starts to execute so many reads in test 1.

View 4 Replies View Related

SQL Server Admin 2014 :: Database File Placement And Number Of Files

Feb 2, 2015

Database File Placement Layout? We are planning to implement a new SQL Server 2014 OLTP Database with a 1 TB Data file and 1 TB Log File. I am looking at the possible layout of the database files and trying to determine the best possible configuration. My knowledge/research tells me that items which need separate storage due to constant simultaneous access are:

Data files – should go on the fastest reading storage.
Log files – should go on the fastest writing storage.
TempDb – involves a lot of writing at the same time the data files are being read.
Indexes - (including full text indexes) - involves a lot of writing at the same time the data files are being read.

Also, is there any benefit to having multiple OLTP database log files? Because SQL Server writes to the log file sequentially, I do not see any advantage to having multiple log files. In a SQL Server 2012 class I took last summer, under "Determining File Placement and Number of Files", it states: "Use a single log file in most situations as log files are written sequentially."
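A minimal sketch of a CREATE DATABASE that follows the layout described above; drive letters, filegroup names, and sizes are placeholders, and tempdb lives on its own volume configured at the instance level:

CREATE DATABASE SalesDB
ON PRIMARY
    (NAME = SalesDB_data, FILENAME = 'D:\SQLData\SalesDB_data.mdf', SIZE = 1TB),    -- fast read storage
FILEGROUP FG_Indexes
    (NAME = SalesDB_ix,   FILENAME = 'E:\SQLIndex\SalesDB_ix.ndf',  SIZE = 200GB)   -- index filegroup
LOG ON
    (NAME = SalesDB_log,  FILENAME = 'L:\SQLLog\SalesDB_log.ldf',   SIZE = 1TB);    -- fast write storage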

View 9 Replies View Related

SQL Server Admin 2014 :: Practical Upper Limit On Number Of DB Users

Sep 9, 2015

Our development team wanted to create a database user for each application user in the application and use these for granular data access control, which at first sounded like a good idea, but our initial testing ran into some interesting results.

Our target user base was about 15 million users with an estimated 1% concurrency rate, and, finding no MS documentation on an upper limit to the number of users a database can have, we began some load testing to see how the database performed. In the hundreds-of-thousands-of-users range our test database had a hard time performing well under light loads (even without any concurrent connections).

When we purged the users and reverted back to just a handful of service accounts, performance went back to "normal" under the same loads. I began to wonder if this is a situation where throwing more hardware at the problem would overcome the issue or if there is a practical upper limit to the number of users a single database can handle well.

(There were of course other cons to this arrangement and I certainly was never going to expand the users tree in the object explorer for a database like this, but we thought it a solution worth investigating.)

What is the largest number of users any of you have had in a single database?

View 3 Replies View Related

SQL Server Admin 2014 :: Picking Static Port Number For Named Instance

Apr 3, 2015

Basically the question is, which number should I pick?

View 4 Replies View Related

SQL Server Admin 2014 :: Get Average Of Two Largest Number Amount Three Column For Particular Identity

May 3, 2015

ID   A    B    C    AVG
1    08   09   10   -
2    10   25   26   -
3    09   15   16   -

I want to calculate the average of the largest two numbers from columns A, B, and C for each ID and store that average in the AVG column.
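A minimal sketch (dbo.Scores is a hypothetical table name): unpivot A, B, and C per row with CROSS APPLY, keep the two largest values, and write their average into the AVG column.

UPDATE t
SET    t.[AVG] = top2.avg_of_two_largest
FROM   dbo.Scores AS t
CROSS APPLY
(
    SELECT AVG(1.0 * v.val) AS avg_of_two_largest
    FROM   (SELECT TOP (2) val
            FROM   (VALUES (t.A), (t.B), (t.C)) AS x(val)
            ORDER BY val DESC) AS v
) AS top2;

-- For ID 1 this averages 10 and 9, giving 9.5; for ID 2 it averages 26 and 25, giving 25.5.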

View 9 Replies View Related

SQL Server Admin 2014 :: Change Data Capture(CDC) For Data Warehouse / Reporting?

Aug 12, 2015

I have a requirement to implement CDC for 50+ tables in order to capture incremental data changes for warehousing/reporting rather than exporting the whole table data. The largest table has more than half a billion records.

The warehouse uses a daily copy of the OLTP db (daily DB refresh). How can I accomplish this? Is there a downside to implementing CDC just for the sake of taking incremental changes on the tables?

Is there any performance impact if we enable CDC on OLTP db?

Can we make use of the CDC tables in the environment where we do the daily db refresh so that the queries don't hit the OLTP database?

What is the best way to implement CDC to capture incremental changes for reporting?
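For reference, a minimal sketch of enabling CDC on a database and one table and reading the changes incrementally; database, schema, and table names are hypothetical, and SQL Server Agent must be running on the OLTP instance for the capture and cleanup jobs:

USE SalesDB;
EXEC sys.sp_cdc_enable_db;

EXEC sys.sp_cdc_enable_table
     @source_schema = N'dbo',
     @source_name   = N'Orders',
     @role_name     = NULL,              -- no gating role
     @supports_net_changes = 1;          -- requires a primary key or unique index

-- Incremental extract: read changes between two LSNs instead of exporting the whole table.
DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn(N'dbo_Orders'),
        @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();
SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, N'all');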

View 0 Replies View Related

SQL Server Admin 2014 :: Cannot Export All Data From Database

Dec 4, 2013

I have chosen the destination - an unstructured (flat) file. But the wizard proposes to export only one table (dbo.Acocount), and all the others from the list are not exported. How can I export ALL the data into one file? I need to do this to edit the syntax in the editor and then import this data and database structure into PostgreSQL.
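The Import/Export wizard only writes one table per flat file, so one common workaround is to generate a bcp command per table and concatenate the output files afterwards. A minimal sketch; the server name and output folder are placeholders:

-- Generate one bcp export command per table; run the output from a command prompt.
SELECT 'bcp "' + QUOTENAME(DB_NAME()) + '.' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name)
       + '" out "C:\Export\' + s.name + '_' + t.name + '.csv" -S MyServer -T -c -t,'
FROM sys.tables  AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
ORDER BY s.name, t.name;
-- Afterwards combine the files (e.g. copy *.csv alldata.csv) before editing them for PostgreSQL.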

View 4 Replies View Related

SQL Server Admin 2014 :: TDE Table Data Encryption?

Jun 8, 2015

I'm having problems with the following code:

--DROP MASTER KEY
--GO
USE master;
CREATE MASTER KEY
ENCRYPTION BY PASSWORD = 'Pass@word1';
GO
USE master;

[code]....

What am I missing? What do I have to do if I get in a situation where I need to back out and start over?

[URL]
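For reference, a minimal sketch of the usual TDE sequence and the back-out order; it is not the poster's elided code, and the certificate and database names are placeholders:

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Pass@word1';
CREATE CERTIFICATE TDECert WITH SUBJECT = 'TDE certificate';
GO
USE MyDatabase;
CREATE DATABASE ENCRYPTION KEY
    WITH ALGORITHM = AES_256
    ENCRYPTION BY SERVER CERTIFICATE TDECert;
GO
ALTER DATABASE MyDatabase SET ENCRYPTION ON;
GO
-- Backing out reverses the order: turn encryption off, wait for the decryption scan to finish,
-- then drop the DEK, the certificate, and finally the master key.
ALTER DATABASE MyDatabase SET ENCRYPTION OFF;
-- wait until sys.dm_database_encryption_keys shows encryption_state = 1 for the database
USE MyDatabase;
DROP DATABASE ENCRYPTION KEY;
GO
USE master;
DROP CERTIFICATE TDECert;
DROP MASTER KEY;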

View 9 Replies View Related

SQL Server 2014 :: Index Dates To Numbers With A Large Data Set?

Jun 16, 2015

I am trying to index dates to numbers with a large data set.

The first column is Index, the next is FactorS, then Value, then Date, and the last is Lag.

Would it be difficult to write code that would determine the lag values? The lag value is based on the date value.

Index FactorS Value Date Lag
1 XYZ 2.3 12/31/2014 1
2 XYZ 1.4 12/30/2014 2
3 XYZ 3.3 12/29/2014 3
4 ABC 1.8 12/31/2014 1
5 ABC 2.2 12/30/2014 2
6 CBA 1.7 12/31/2014 1
7 CBA 1.8 12/30/2014 2
8 CBA 1.9 12/29/2014 3
9 CBA 2.1 12/28/2014 4
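A minimal sketch (dbo.Factors is a hypothetical table name): the Lag column above is simply a dense rank of Date descending within each FactorS group.

SELECT  [Index],
        FactorS,
        [Value],
        [Date],
        DENSE_RANK() OVER (PARTITION BY FactorS ORDER BY [Date] DESC) AS Lag
FROM    dbo.Factors
ORDER BY [Index];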

View 9 Replies View Related

SQL Server Admin 2014 :: Space Did Not Gets Released After Removing Data

Nov 21, 2013

We have deleted 120 GB of data but the space has not been released even after 2 days. Is there any reason for this? How exactly does SQL Server release space after truncating a 120 GB table?

View 8 Replies View Related

SQL Server Admin 2014 :: How Data Will Transfer From Source To Destination

Aug 6, 2014

Does replication use a linked server?

If not, then how does data transfer from source to destination?

Note: as far as I know, it is not backup and restore like log shipping.

View 2 Replies View Related

SQL Server Admin 2014 :: Replicating Data To A Table Via A View

Aug 11, 2014

I am trying to replicate data from a view in the publisher to a table in the subscriber (transactional replication). I do not need the view's base table, or the view itself, replicated to the subscriber. I only want the data from the view to feed a table in the subscriber.

Is this possible?

Running SQL Server 2008 R2 Enterprise.

View 1 Replies View Related

SQL Server Admin 2014 :: Full Backup While Data Is Modifying

Nov 13, 2014

If data is modified (by an insert, update, or delete) while the backup is running, will the backup contain those changes or will it be added to the database afterwards?

View 2 Replies View Related

SQL Server Admin 2014 :: How To Identify Data Leakage In A Database

Dec 29, 2014

How do I identify data leakage in a database? I heard about it in one of my environments.

What is the meaning of data leakage?

View 3 Replies View Related

SQL Server Admin 2014 :: How To Identify Empty Data File

Jan 21, 2015

I was running an operation to shrink file/emptyfile a data file, and then remove it.

It blocked and caused a huge mess, I suspect during the removal part. But I want to confirm that the EMPTYFILE completed (and that the engine isn't going to try to put more data in there before I schedule the removal part again, a week or more from now).

How does the engine know not to put any more data in there, and how long does that situation last?
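A minimal sketch of the sequence (the logical file name is a placeholder): DBCC SHRINKFILE with EMPTYFILE moves the data out and flags the file so no new allocations go to it, and that flag persists until the file is removed:

USE MyDatabase;
DBCC SHRINKFILE (N'MyDatabase_data2', EMPTYFILE);   -- moves all data out and blocks new allocations

-- Confirm the file is empty before dropping it.
SELECT name, type_desc, size * 8 / 1024 AS size_mb,
       FILEPROPERTY(name, 'SpaceUsed') * 8 / 1024 AS used_mb
FROM   sys.database_files;

ALTER DATABASE MyDatabase REMOVE FILE MyDatabase_data2;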

View 3 Replies View Related

SQL Server Admin 2014 :: Structure Of Log Saving (for Data Mining)

May 12, 2015

We saved a huge amount of log data from user behaviour on our site.

But when it came time for data mining, we saw that most of it can't be used for data mining.

What is the best practice for gathering data on user movement on a site?

Is there any best-practice template for this?

View 0 Replies View Related

SQL Server Admin 2014 :: Column Level Data Encryption

Jun 17, 2015

I need to encrypt some column-level data in multiple tables in SQL Server 2014. I've never tried encryption in SQL Server 2014. How can I achieve it?
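A minimal sketch of cell-level encryption with a symmetric key protected by a certificate (as distinct from TDE); key, certificate, table, and column names are placeholders, and the encrypted value is stored in a varbinary column:

CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passw0rd';
CREATE CERTIFICATE CertCols WITH SUBJECT = 'Column encryption certificate';
CREATE SYMMETRIC KEY SymKeyCols
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CertCols;
GO
-- Encrypt on write:
OPEN SYMMETRIC KEY SymKeyCols DECRYPTION BY CERTIFICATE CertCols;
UPDATE dbo.Customers
SET    SSN_Encrypted = ENCRYPTBYKEY(KEY_GUID('SymKeyCols'), SSN_Plain);
CLOSE SYMMETRIC KEY SymKeyCols;
GO
-- Decrypt on read:
OPEN SYMMETRIC KEY SymKeyCols DECRYPTION BY CERTIFICATE CertCols;
SELECT CONVERT(varchar(11), DECRYPTBYKEY(SSN_Encrypted)) AS SSN
FROM   dbo.Customers;
CLOSE SYMMETRIC KEY SymKeyCols;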

View 4 Replies View Related

SQL Server Admin 2014 :: How To Fetch Data From Oracle Database

Oct 14, 2015

How do I fetch data from an Oracle database in SQL Server 2014?

Example:

Oracle schema: t1
SQL Server: t2

I am now in the t2 SQL Server database

and am executing the query below:

select * from t1.tablename;
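In SQL Server this is usually done through a linked server to Oracle rather than a bare two-part name. A minimal sketch; the provider name, TNS alias, and credentials are placeholders:

EXEC master.dbo.sp_addlinkedserver
     @server     = N'ORA_T1',
     @srvproduct = N'Oracle',
     @provider   = N'OraOLEDB.Oracle',
     @datasrc    = N'MyTnsAlias';

EXEC master.dbo.sp_addlinkedsrvlogin
     @rmtsrvname  = N'ORA_T1',
     @useself     = 'FALSE',
     @rmtuser     = N'oracle_user',
     @rmtpassword = N'oracle_password';

-- Four-part name (the catalog part is left empty for Oracle):
SELECT * FROM ORA_T1..T1.TABLENAME;

-- Or pass the statement through to Oracle:
SELECT * FROM OPENQUERY(ORA_T1, 'SELECT * FROM T1.TABLENAME');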

View 1 Replies View Related

SQL Server Admin 2014 :: Setup Database Part Of Data Center?

Apr 8, 2015

My company is migrating all their servers to a new data center and I get to specify what we need for the db servers.

We've got 22 prod servers (mainly physical) with a couple of TB of data, on SQL 2000 to 2012.

We expect to move to SQL 2014, and to consolidate and virtualise wherever possible.

But I'd like to start by specifying an overall architecture for this: some best practices to guide the build at the server and installation level.

View 1 Replies View Related

SQL Server Admin 2014 :: How A New Partition Function Apply For Current Data

Apr 15, 2015

I have a heavy database, more than 100 GB for only six months of data. Every query on it takes a long time and I don't have enough space to add more indexes, so I decided to do partitioning. I created a partition function on a date field, and all data records for each month are assigned to a separate file. Is the partitioning only for future data entry, or does it apply to the current data as well?
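For reference: a partition function only affects tables created or rebuilt on its partition scheme, so existing rows move into the monthly filegroups only once the clustered index is rebuilt on that scheme. A minimal sketch; function, scheme, filegroup, table, index, and column names are placeholders:

CREATE PARTITION FUNCTION pfByMonth (date)
AS RANGE RIGHT FOR VALUES ('2015-01-01', '2015-02-01', '2015-03-01',
                           '2015-04-01', '2015-05-01', '2015-06-01');

CREATE PARTITION SCHEME psByMonth
AS PARTITION pfByMonth TO (FG_2014, FG_201501, FG_201502, FG_201503,
                           FG_201504, FG_201505, FG_201506);

-- Rebuild the existing clustered index onto the scheme to repartition the current data
-- (the index name must match the existing clustered index).
CREATE CLUSTERED INDEX CIX_BigTable
ON dbo.BigTable (DateField)
WITH (DROP_EXISTING = ON)
ON psByMonth (DateField);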

View 9 Replies View Related

SQL Server Admin 2014 :: Transactional Replication With Filtered Published Data

Apr 15, 2015

From distribution db, which table(s) store info about filtered data?

View 0 Replies View Related

SQL Server Admin 2014 :: Inconsistent Data In Database And Master Files?

Apr 21, 2015

USE <database>

select * from sys.database_files

and

select * from sys.master_files where database_id= <db id>

give me different sizes for the memory-optimized file in <database>.

Microsoft SQL Server 2014 - 12.0.2456.0 (X64)

View 1 Replies View Related

SQL Server Admin 2014 :: Cannot Open Data File When Running Agent Job

Apr 30, 2015

I recently installed standalone version of SQL 2014 Standard on my work computer. I used Access before but I want to use a SQL server instead.

We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it into the server (it's basically a list of names).

Here what I have done so far:

I created the database

Created the file and successfully imported data into it using the Import Data feature.

I saved the SSIS package

Scheduled an Agent Job for this package to run at certain time,daily

At first the jobs would fail with an "Access is Denied" error. I added a user under Credentials with my network account (I have admin rights on the work computer). I also added a proxy for the credential user I made.

Jobs fail with a “Cannot open data file” error. I tried changing things here and there, but I can’t get it to work.

View 9 Replies View Related

SQL Server Admin 2014 :: Automatically Daily Load Data From Oracle?

Oct 7, 2015

How do I load the data from Oracle to SQL Server automatically every day?

The Oracle source has 7 tables.
The SQL Server target has 7 tables.

I have used Visual Studio and created one data flow for each individual table, but how do I run that SSIS package in SQL Server?

View 9 Replies View Related

SQL Server Admin 2014 :: Move Text Data From Primary To New Filegroup?

Oct 15, 2015

I need to modify a table to reside on a new filegroup and also point TEXTIMAGE_ON to that filegroup instead of PRIMARY. Apparently, in the past the only way to achieve this via SQL was to create a new table, copy over the data, drop the old table, and rename the new table to the original name. I found this solution in the SQL Server 2005 forum.

Is there any other way to alter this table in order to point TEXTIMAGE_ON to the new filegroup using SQL Server 2014? We are on Standard edition. The technique I am using is the drop constraint (with MOVE TO option) and add constraint (on the new filegroup) commands. The data and indexes move, but not the text data (it stays in the PRIMARY filegroup).
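A minimal sketch of the rebuild-and-rename approach (all object names are placeholders), since TEXTIMAGE_ON can only be specified when a table is created:

CREATE TABLE dbo.MyTable_new
(
    ID      int IDENTITY(1,1) NOT NULL,
    Payload varchar(max)      NULL,
    CONSTRAINT PK_MyTable_new PRIMARY KEY CLUSTERED (ID) ON FG_NEW
) ON FG_NEW TEXTIMAGE_ON FG_NEW;

SET IDENTITY_INSERT dbo.MyTable_new ON;
INSERT INTO dbo.MyTable_new (ID, Payload)
SELECT ID, Payload FROM dbo.MyTable;
SET IDENTITY_INSERT dbo.MyTable_new OFF;

DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';
EXEC sp_rename 'dbo.PK_MyTable_new', 'PK_MyTable', 'OBJECT';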

View 0 Replies View Related






