I've been using partitioned views in the past, with a check constraint on the source tables to make sure that only the table matching the condition in the query's WHERE clause was touched. In SQL Server 2012 this was working just fine (I had to do some tricks to suppress parameter sniffing, but it worked correctly after that). Now I've installed SQL Server 2014 Developer and used exactly the same logic, yet the actual query plan is still using the other tables. I've tried the following things to avoid this:
- OPTION (RECOMPILE)
- Using dynamic SQL to pass the parameter value as a static string to avoid sniffing.
To explain what I'm doing:
1. I have 3 servers with the same source tables; the only difference in the tables is one column with the server name.
2. I've created a CHECK CONSTRAINT on the server name column on each server.
3. On one of the three servers (in my case server 3) I've set up linked server connections to servers 1 and 2.
4. On server 3 I've created a partitioned view that is built up like this:
SELECT * FROM [server1].[database].[dbo].[table]
UNION ALL
SELECT * FROM [server2].[database].[dbo].[table]
UNION ALL
SELECT * FROM [server3].[database].[dbo].[table]

5. To query the partitioned view I use a query like this:
SELECT * FROM [database].[dbo].[partioned_view_name] WHERE [server_name] = 'Server2'
Now when I look at the actual execution plan in the 2014 environment, it is still using all the servers instead of just Server2 like it should. The strange thing is that SQL 2008 and 2012 work just fine, but 2014 does not seem to use the correct plan.
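For reference, a minimal sketch of the constraint side of this setup (table and column names taken from the example above; the literal differs per server). Partition elimination over the view depends on each member table having a trusted CHECK constraint on the server name column:

-- Run on each server with its own name in the literal ('Server2' shown here);
-- WITH CHECK validates existing rows so the constraint is trusted by the optimizer.
ALTER TABLE [dbo].[table] WITH CHECK
ADD CONSTRAINT [CK_table_server_name] CHECK ([server_name] = 'Server2');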
I have the following T-SQL block where I want part of the output printed on a new line, but I'm not sure how to do this:
This is printing everything on one line:

SET @BODYTEXT = ' Dear ' + @fname + ',We realize you may have taken ' + @course_due + ' in ' + @month_last_taken + '.'

How do I get 'Dear <name>,' on one line and 'We realize ...' on the next?
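A minimal sketch of the usual fix, assuming the body is sent as plain text (for HTML mail a <br/> would be used instead): concatenate CHAR(13) + CHAR(10) where the line break should appear. The variable values here are hypothetical:

DECLARE @fname varchar(100), @course_due varchar(100), @month_last_taken varchar(20), @BODYTEXT varchar(500)
SELECT @fname = 'Alex', @course_due = 'Safety 101', @month_last_taken = 'March'
SET @BODYTEXT = ' Dear ' + @fname + ',' + CHAR(13) + CHAR(10)
              + 'We realize you may have taken ' + @course_due + ' in ' + @month_last_taken + '.'
PRINT @BODYTEXT   -- salutation and sentence now print on separate lines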
Also how can I create a table in this variable, something like this:
(TABLE) LIST THE COURSE CODE, COURSE NAME , EMPLOYEE ID, EMPLOYEE NAME
(Course Name) (Last Completed) (Now due in Month/year)
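A hedged sketch of one way to build that listing as text inside the variable, using fixed-width columns and a CR/LF per row (the dbo.CoursesDue table and its columns are hypothetical stand-ins for the layout above; for HTML mail an actual <table> element would be more robust):

DECLARE @BODYTEXT varchar(2000)
SET @BODYTEXT = '(Course Name)                 (Last Completed)  (Now due in Month/year)' + CHAR(13) + CHAR(10)
SELECT @BODYTEXT = @BODYTEXT
     + LEFT(course_name + REPLICATE(' ', 30), 30)
     + LEFT(CONVERT(varchar(10), last_completed, 101) + REPLICATE(' ', 17), 17)
     + CONVERT(varchar(10), now_due, 101)
     + CHAR(13) + CHAR(10)
FROM dbo.CoursesDue   -- hypothetical table holding the courses due per employee
PRINT @BODYTEXT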
My T-SQL code:
DECLARE @email varchar(500)
      , @intFlag INT
      , @INTFLAGMAX int
      , @TABLE_NAME VARCHAR(100)
      , @EMP_ID INT
      , @fname varchar(100)
The following query runs fine on SQL 7.0, but it kind of hangs / keeps running without any output on SQL Server 2000:

Set @cmd = 'Update ABCD Set ' + @day + '_LB = IsNull(LB,0), ' + @day + '_UT = IsNull(UNIT,0) From tempData as T Where T.STORE_NUM = ABCD.STORE_NUM And T.ITM_ID = ABCD.UPC'
execute (@cmd)

But if we hard-code the @day parameter and run the query like this, it runs very fast on SQL 2000:
Update ABCD Set THIRD_LB = IsNull(LB,0), THIRD_UT = IsNull(UNIT,0) From tempData as T Where T.STORE_NUM = ABCD.STORE_NUM And T.ITM_ID = ABCD.UPC
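One quick sanity check (a sketch; the @day value is hypothetical): PRINT the assembled string before executing it, to confirm the generated statement is byte-for-byte the same as the fast hand-coded version:

DECLARE @cmd varchar(1000), @day varchar(10)
SELECT @day = 'THIRD'
SET @cmd = 'Update ABCD Set ' + @day + '_LB = IsNull(LB,0), ' + @day + '_UT = IsNull(UNIT,0) From tempData as T Where T.STORE_NUM = ABCD.STORE_NUM And T.ITM_ID = ABCD.UPC'
PRINT @cmd        -- inspect the exact statement SQL 2000 will run
EXECUTE (@cmd)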
I need the output in a single row for the two queries below, using a join:
SELECT [total_physical_memory_kb] / 1024 AS [Total_Physical_Memory_In_MB]
     , CPU_Count AS NumberofCPU
FROM [master].[sys].[dm_os_sys_memory]
CROSS JOIN sys.dm_os_sys_info
We carried out an in-place upgrade on our production server on Saturday - going from 2008 R2 to 2014.
We had tested this method out in dev/test and pre-production with only minor post issues to fix.
However, on production we had an issue whereby CHECKDB was hitting 100% CPU and causing overnight processes to hang. The CHECKDB statement was terminated and disabled by a colleague at 1 am.
Since then we have restored this database to a dev server and run CHECKDB against it with NO_INFOMSGS and ALL_ERRORMSGS, but it still hasn't finished since Monday morning!
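For reference, the statement running against the dev restore looks like this (the restored database name is hypothetical):

DBCC CHECKDB ('ProdDB_DevCopy') WITH NO_INFOMSGS, ALL_ERRORMSGS;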
The database is just over 800 GB, and whilst CHECKDB was crippling the CPU, logical reads were less than one. However, sp_whoisactive shows that it has done 56 million reads so far, and this number increases periodically, so it looks like it keeps going back to re-check the database with a deep dive.
Also, in a different environment, we ran DBCC CHECKTABLE statements and one of them took over 9 hours for a single table but came back clean (see attachment).
We need to wait for the output but the database is still in use in production and the mess will just get worse if it is indeed corrupted.
We are consolidating some old SQL Server environments from 'OLD' to 'NEW', and one of our vendors is protesting about the collation we use on our 'NEW' SQL Server.
Our old server (SQL 2005) contains databases with collation SQL_Latin1_General_CP1_CI_AS.
Our new server (2014) has the default collation Latin1_General_CI_AS.
Both collations are case-insensitive (CI) and accent-sensitive (AS).
From experience I know different databases can reside next to each other on the same instance.
The only problem could be ('could be'!!) the use of tempdb with a high volume of transactions to be executed in tempdb, combined with choosing Snapshot Isolation Level ....
The application the databases belong to is very static, hardly updated, and queried only several times per hour (so no tempdb issue, I guess).
So is there any risk in using different databases with different collations running on the same instance?
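One concrete illustration of the tempdb caveat mentioned above (table and column names hypothetical): a #temp table created without an explicit collation inherits the instance/tempdb collation, so comparing it against a column from a SQL_Latin1_General_CP1_CI_AS database can raise "Cannot resolve the collation conflict". Forcing the database default on the comparison avoids that:

CREATE TABLE #names (name varchar(50))              -- column gets Latin1_General_CI_AS on 'NEW'
SELECT c.name
FROM dbo.Customers AS c                             -- hypothetical table in the migrated database
JOIN #names AS n
  ON c.name = n.name COLLATE DATABASE_DEFAULT       -- align the #temp column with the database collation
DROP TABLE #names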
I need to insert stored procedure output into a table, and on top of that add a datetime stamp column. For example, below is the process to get sp_who output into the Table_Test table. But I want one additional column in Table_Test holding the datetime stamp of when the procedure was executed.
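A minimal sketch, assuming the sp_who result-set shape on SQL 2005 and later (nine columns ending in request_id): give Table_Test a DEFAULT on the extra column, since INSERT ... EXEC cannot supply it directly, and list the proc's columns explicitly:

CREATE TABLE dbo.Table_Test
(
    spid smallint, ecid smallint, status nchar(30),
    loginame nchar(128), hostname nchar(128), blk char(5),
    dbname nchar(128), cmd nchar(16), request_id int,
    executed_at datetime NOT NULL DEFAULT GETDATE()   -- stamped automatically for every inserted row
)

INSERT INTO dbo.Table_Test (spid, ecid, status, loginame, hostname, blk, dbname, cmd, request_id)
EXEC sp_who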
IF Object_id('GoldenSecurity') IS NOT NULL
    DROP TABLE dbo.GoldenSecurity;
IF Object_id('GoldenSecurityRowVersion') IS NOT NULL
    DROP TABLE dbo.GoldenSecurityRowVersion;
I'm looking at installing 2008R2 and 2014 side by side, then using Mirroring to provide HA for the 2008R2 instance and AoHA for the 2014 instance. I'd be using the same two physical servers for both the Mirroring pair and the AoHA pair.
I am trying to write a query to calculate the running difference between data on different dates. Below is what my table of data looks like. Basically, I want to calculate the difference in Total_Completed for each state from one date to the next.
Below is my code (I almost have what I need); I just can't figure out how to show 0 as the Completed_Difference for the first date of each state, since there is no prior date to calculate against.
MRR_TOTALS_WEEK_OVER_WEEK AS
(
    SELECT T1.[Date]
         , T1.States
         , T2.Total_Completed
         , ROW_NUMBER() OVER (PARTITION BY T1.States ORDER BY T1.States, T1.[Date]) AS ORDERING
    FROM TOTAL_CHARTS T1
    LEFT JOIN TOTAL_COMPLETED T2
           ON T1.[Date] = T2.[Date]
          AND T1.States = T2.States
)
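A hedged sketch of the missing piece, assuming SQL Server 2012+ for LAG (on older versions, self-join the CTE on ORDERING = ORDERING - 1 instead):

SELECT [Date]
     , States
     , Total_Completed
     , ISNULL(Total_Completed
              - LAG(Total_Completed) OVER (PARTITION BY States ORDER BY [Date]), 0) AS Completed_Difference
       -- the first date per state has no prior row, so LAG returns NULL and ISNULL turns the difference into 0
FROM MRR_TOTALS_WEEK_OVER_WEEK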
Hello, I have created a package that runs without problems. I run the package with the command dtexec /F "package_name.dtsx" > package_name.txt.
When I run the same package from SQL Server Agent, everything is OK.
Then I tried to edit the Agent command line to include the output file, but I got an error.
The command line is: dtexec /F "package_name.dtsx" /MAXCONCURRENT "-1" /CHECKPOINTING OFF /REPORTING E > package_name.txt (the /MAXCONCURRENT "-1" /CHECKPOINTING OFF /REPORTING E part is created by default).
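A hedged note on what may be going on: the > redirection is parsed by cmd.exe, not by dtexec, so it only takes effect where a command shell is actually involved. In an Agent job that means an Operating system (CmdExec) step rather than an SSIS Package step, e.g.:

dtexec /F "package_name.dtsx" /MAXCONCURRENT "-1" /CHECKPOINTING OFF /REPORTING E > package_name.txt

Alternatively, SSIS log providers can capture similar information without relying on shell redirection.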
I recently installed a standalone version of SQL 2014 Standard on my work computer. I used Access before, but I want to use a SQL Server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it into the server (it's basically a list of names).
Here's what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent job for this package to run at a certain time, daily
At first the jobs would fail with an "Access is Denied" error. I added a user under Credentials with my network account (I have admin rights on the work computer), and also added a proxy for the credential user I made.
Now the jobs fail with a “Cannot open data file” error. I tried changing things here and there, but I can't get it to work.
I have a SQL Server 2008 instance running on "LiveServer", our production database (ProdDB), and we need to upgrade to 2014. In order to do some upgrade testing, I spun up a VM with the same version of SQL Server (TestServer), did a backup of the production DB from the live server, and restored it to TestServer under a different name (ProdDBUA).
I then installed the SQL 2014 Upgrade Advisor onto TestServer and ran it, checking all the boxes (Reporting Services etc.), and it all came back clean - no issues whatsoever, not a single warning even. I'm under the impression that stored procs/functions etc. all reside within the DB, so a backup will include them. Is that correct?
The problem is, I know I have stored procs, functions and views in LiveServer.ProdDB that use deprecated joins. What do I need to do/configure/check in order to make sure that the Upgrade Advisor actually checks all the T-SQL that contains deprecated code? I want to have a list to give to my report writers of the procs/functions/views that need to be rewritten before the upgrade goes live.
If there is a modification that needs to be run on TestServer.ProdDBUA (a cursor to change the path, etc.), I can do that. The DB is running in compatibility mode 90.
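One hedged way to build that list independently of the Upgrade Advisor: scan the module definitions in the restored copy for the legacy outer-join tokens (this assumes the deprecated joins are the *= / =* style):

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS [schema_name]
     , OBJECT_NAME(m.object_id)        AS [object_name]
     , o.type_desc
FROM sys.sql_modules AS m
JOIN sys.objects AS o ON o.object_id = m.object_id
WHERE m.definition LIKE '%*=%'
   OR m.definition LIKE '%=*%'   -- old-style outer-join operators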
For certain reasons, I SET NOCOUNT OFF in the proc procABC. I need to collect all the counters (Ins/Upd/Del) and the PRINT statements shown on the isql/w screen into a .txt file.
How could I capture all the running info as a daily log file?
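A hedged sketch, assuming the command-line isql is acceptable in place of the isql/w GUI (server name, proc call and path are hypothetical): the -o switch writes everything the session would display, row-count messages and PRINT output included, to a file:

isql -E -S myserver -Q "EXEC procABC" -o C:\logs\procABC_daily.txt

Scheduling that command once a day (with the date embedded in the file name) produces the daily log file.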
We are experiencing a problem where a server report is rendered in the ReportViewer control successfully, but when the user chooses to print the content, the print output is incomplete (e.g. only 18 of 23 pages are printed).
This is happening consistently and has been reproduced for multiple different reports. It does not happen if the same report is printed via the Report Manager web interface.
I have also found that if I switch to Print Layout mode and then initiate the print from the ReportViewer that the problem does not occur. This is our current workaround, but it would be nice to know what is causing this and have it fixed.
Has anybody else seen this? Is this a known issue with my current version (SQLRS 2005 with SP1 applied)?
I want to run a PowerShell script using xp_cmdshell, put the results into a temp table, and do some additional stuff. I'm using:
CREATE TABLE #DrvLetter
(
    iid int identity(1,1) primary key,
    Laufwerk char(500)
)

INSERT INTO #DrvLetter
EXEC xp_cmdshell 'powershell.exe -noprofile -command "gwmi -Class win32_volume | ft capacity, freespace, caption -a"'

SELECT * FROM #DrvLetter

SQL Server is cutting off the output; what I get is (note the three dots at the end of a line):
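The truncation usually comes from PowerShell formatting to its default 80-character console width under xp_cmdshell, not from SQL Server itself. A hedged tweak: pipe the formatted output through Out-String with a wider line length (a standard cmdlet parameter) before it reaches the temp table:

INSERT INTO #DrvLetter
EXEC xp_cmdshell 'powershell.exe -noprofile -command "gwmi -Class win32_volume | ft capacity, freespace, caption -a | Out-String -Width 300"'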
I have a simple query which creates tables, taking the database name as a parameter from a parameter table.
SP1 --> creates the databases and calls SP2 --> which creates the tables. I can run it fine via SSMS, but when I run it using SSIS it fails with the error below. The issue gets more interesting in that it fails randomly on some database creations while others create just fine.
Note: I am not passing any database named '20'.
Exception handler error :
ERROR :: 615 :: Could not find database ID 20, name '20'. The database may be offline. Wait a few minutes and try again.
SPID: 111
Origin: SQL Stored Procedure (SP1)
Could not find database ID 20, name '20'. The database may be offline. Wait a few minutes and try again.
Error in SSIS:
[Execute SQL Task] Error: Executing the query "EXEC SP1" failed with the following error: "Error severity levels greater than 18 can only be specified by members of the sysadmin role, using the WITH LOG option.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly. (I have sysadmin permission.)
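For context, the quoted message describes a RAISERROR rule: severities above 18 require both sysadmin membership and the WITH LOG option. A hypothetical illustration, reusing the message text from the error above:

-- RAISERROR('Could not find database ID 20, name ''20''.', 19, 1)          -- fails: severity 19 without WITH LOG
RAISERROR('Could not find database ID 20, name ''20''.', 19, 1) WITH LOG    -- allowed, and only for sysadmin logins

So if SP1's exception handler re-raises at severity 19 or higher, it needs WITH LOG even when the executing login is sysadmin.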
Hi! I'm just starting to learn how to use a SQL Express database. I used it over at my campus and it worked fine, but now when I try running it on my machine it gives me this error:
"An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (Provider: SQL Network Interfaces, error: 26 - Error Locating Server/Instance Specified.)"
I know that by default SQL Express doesn't support remote connections, but when I try to get the service running, I get another error:
"The service did not respond to the start or control request in a timely fashion. You need administrator privileges to be able to start/stop this service. (SQLSAC)"
I am already logged on to the computer as an administrator, but I still can't get the service to run! Please help!
I am getting a Query Analyzer error and need help: Unable to connect to server (local). Msg 17, Level 16, State 1, ODBC SQL Server driver [DBNETLIB]: SQL Server does not exist.
As you can see I have added only one contract (for the sake of simplicity). At any given date I want to calculate the running total for a contract (all contracts), but the aggregation must take the contract state into consideration.
You can see my expected results for queries 1 - 3. That means, with the current state ('B') I want to aggregate all values regardless of their state in the past.
For query 2 the state is 'A' ... I hope you can follow.
Now my aim is to aggregate all contract states for any given date with the right value. The past states of each contract are not relevant. And of course future ones (with respect to the query date) shall also be irrelevant.
Maybe the solution is a combination of 'LastNonEmpty' and SUM/PERIODSTODATE ...
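In case it helps to pin down the intended logic, here is a hedged relational sketch of the calculation (the schema is entirely hypothetical; in the cube this would correspond to the LastNonEmpty/PERIODSTODATE combination mentioned above). For the query date, take each contract's last state on or before that date, then sum all of that contract's values up to that date under that state:

DECLARE @AsOfDate date
SET @AsOfDate = '2014-06-30'
SELECT cur.State
     , SUM(v.Value) AS RunningTotal
FROM (
    SELECT ContractID, State,
           ROW_NUMBER() OVER (PARTITION BY ContractID ORDER BY StateDate DESC) AS rn
    FROM dbo.ContractStates            -- hypothetical: one row per contract state change, with its value
    WHERE StateDate <= @AsOfDate
) AS cur
JOIN dbo.ContractStates AS v
  ON v.ContractID = cur.ContractID
 AND v.StateDate <= @AsOfDate          -- future rows (relative to the query date) stay irrelevant
WHERE cur.rn = 1                       -- current ('last non-empty') state as of the query date
GROUP BY cur.State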
I am trying to run a UNION ALL query in SQL Server 2014 on multiple large CSV files, the result of which I want to get into a table in SQL Server. Below is the query, which works in MS Access but not in SQL Server 2014:
SELECT * INTO tbl_ALLCOMBINED FROM OPENROWSET
(
    'Microsoft.JET.OLEDB.4.0',
    'Text;Database=D:\Downloads\CSV;HDR=YES',
    'SELECT t.*, (substring(t.[week],3,4))*1 as iYEAR, ''SPAIN'' as [sCOUNTRY], ''EURO'' as [sCHAR],
[Code] ....
What I need is:
1] to create the resultant tbl_ALLCOMBINED table
2] transform this table using the PIVOT command, with the transformation shown below:
PAGEFIELD: set on Level = 'Item'
COLUMNFIELD: Sale_Week (showing 1 to 52 numbers for columns)
ROWFIELD: sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN (in this order)
DATAFIELD: 'Sale Value with Innovation'
3] Can the transformed form show more than 255 column fields, i.e. if I want to show all KPI values in the datafield?
P.S.: the CSVs contain the same number of columns and the same datatypes, but there are more than 100 columns, so I don't think it is feasible to use a stored proc to create a table specifying that number of columns.
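On points 2] and 3], a minimal PIVOT sketch (column names assumed from the description above; extend the week lists through [52]). Note that the 255-column ceiling is a Jet/Access limit; SQL Server tables and views allow up to 1,024 columns, so 52 week columns plus the row fields fit comfortably:

SELECT sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN,
       [1], [2], [3]                                   -- ...continue through [52]
FROM (
    SELECT sCOUNTRY, sCHAR, CATEGORY, MANUFACTURER, BRAND, DESCRIPTION, EAN,
           Sale_Week, [Sale Value with Innovation]
    FROM tbl_ALLCOMBINED
) AS src
PIVOT (
    SUM([Sale Value with Innovation])
    FOR Sale_Week IN ([1], [2], [3])                   -- ...continue through [52]
) AS p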
I want to print out the dynamic query result so that I can use it as a script for some tasks. This is the scenario where I got stuck: I am not able to print out the result, as it returns only the last value because of the OUTPUT parameter limitation.
Is there any way to print all 3 INSERT statements?
IF OBJECT_ID('tempdb.dbo.#temp') IS NOT NULL
    DROP TABLE #temp

CREATE TABLE #temp (Command varchar(8000))

INSERT INTO #temp
SELECT 'INSERT INTO Test1(column1,column2)values(1,2)'
UNION ALL
SELECT 'INSERT INTO Test2(column1,column2)values(1,2)'
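A minimal sketch of one way around the OUTPUT-parameter limitation: iterate over the rows and PRINT each one instead of funnelling them through a single variable:

DECLARE @cmd varchar(8000)
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT Command FROM #temp
OPEN cur
FETCH NEXT FROM cur INTO @cmd
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @cmd                -- each INSERT statement lands on its own line
    FETCH NEXT FROM cur INTO @cmd
END
CLOSE cur
DEALLOCATE cur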
I need to create output from a T-SQL query that takes a numeric variable and uses the PRINT function to output it with leading zeroes if it is less than three characters long when converted to a string. For example, if the variable is 12 the output should be 012, and if the variable is 3 the output should be 003.
Presently the syntax I am using is PRINT STR(@CLUSTER, 3). But if @CLUSTER, which is numeric, is less than three characters long, I get spaces in front.
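A minimal sketch of two common ways to zero-pad, reusing the variable from the question:

DECLARE @CLUSTER int
SET @CLUSTER = 12
PRINT RIGHT('000' + CONVERT(varchar(3), @CLUSTER), 3)   -- prints 012
PRINT REPLACE(STR(@CLUSTER, 3), ' ', '0')               -- same result, keeping the existing STR call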
When viewing an estimated query plan for a stored procedure with multiple query statements, two things stand out to me and I wanted to get confirmation if I'm correct.
1. Under <ParameterList><ColumnReference ...>, does the XML attribute "ParameterCompiledValue" represent the value used when the query plan was generated?
2. Does each query statement that makes up the execution plan for the stored procedure have its own execution plan? Meaning, is the stored procedure made up of multiple query plans that could have been generated at different times for different parts of that stored procedure?