Wanted to verify how I would run a replace...it's actually my first time doing a replace, but figure it's worse to screw this up than just a simple SELECT :)
The table/field is:
SOHeader.ShiptoID
I need to go through and replace only the values equal to '9999' with the word 'DEFAULT', for all rows in that column.
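A minimal sketch of the UPDATE, assuming ShiptoID is a character column wide enough to hold the word 'DEFAULT' (run the SELECT first to preview what will change):

-- Preview the rows that will change (the safe first step)
SELECT ShiptoID FROM SOHeader WHERE ShiptoID = '9999';

-- Then perform the replacement
UPDATE SOHeader
SET ShiptoID = 'DEFAULT'   -- assumes the column is at least VARCHAR(7)
WHERE ShiptoID = '9999';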
I saw the previous query that replaces NULL values with '-'. I want to do the same, but my quantity column is numeric(38,5), so I have to convert it to varchar first before I can replace the NULL values of quantity with '-'.
SELECT CASE
           WHEN quantity IS NULL THEN '-'
           ELSE CAST(quantity AS VARCHAR(40))  -- numeric(38,5) needs up to 40 characters
       END AS quantity
FROM saleshistory
I have a table with incremental columns, where a new column gets added every month, so the table contains previous-month and present-month data about customers, details and transactions. The problem with this data is that if a customer is new, his information in the previous months is NULL, and those NULLs have to be coded as "NOT PRESENT".
Now, how do we convert all the previous columns that are NULL for a particular customer at the same time?
Here is how the proc written for it goes:
DROP PROCEDURE DE_NAT
GO
CREATE PROCEDURE DE_NAT
AS
BEGIN
    DECLARE @MONMIN1 NVARCHAR(100), @MON NVARCHAR(100), @YEAR NVARCHAR(100), @MONYEAR NVARCHAR(100)
    SET @MONMIN1 = DATENAME(MONTH, DATEADD(MONTH, -1, GETDATE()))
    SET @MON = MONTH(GETDATE())
    SET @YEAR = YEAR(GETDATE())
    SET @MONYEAR = @MON + @YEAR

    EXEC('SELECT A.CUSTOMERS, B.*, CAST(A.RFM_40D AS FLOAT) AS R40
          INTO TSD_' + @MONYEAR + '
          FROM TSD_20 A
          LEFT OUTER JOIN SD20 ' + @MONMIN1 + ' B ON A.CUSTOMERS = B.CUSTOMER')
END
This proc just appends the present month's data to last month's data.
But if a customer is new, how do I replace the NULL values in the previous data with 'NOT PRESENT'?
For example, if there is a new customer, how do we change:
Now, as you can see, for CUSTOMERS = '101023' the column DFGHHGFD holds this month's data. I want to change all the NULL values preceding it to "INACTIVE".
Can I change all columns from NULL to "INACTIVE" at the same time?
Next month the number of columns is going to increase again, which will cause the same problem.
Please tell me a method so that I can do this.
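One hedged way to set every character column's NULLs at once, and have it keep working as columns are added each month: build the UPDATE dynamically from INFORMATION_SCHEMA.COLUMNS. The table name TSD_102015 is an illustrative stand-in for whatever TSD_<month><year> table the proc created:

DECLARE @sql NVARCHAR(MAX)
SET @sql = N''

-- Build "COL = ISNULL(COL, 'NOT PRESENT')" for every character column in the table
SELECT @sql = @sql + QUOTENAME(COLUMN_NAME) + ' = ISNULL(' + QUOTENAME(COLUMN_NAME) + ', ''NOT PRESENT''), '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'TSD_102015'          -- hypothetical month/year table name
  AND DATA_TYPE IN ('varchar', 'nvarchar', 'char', 'nchar')

-- Strip the trailing comma and run the generated UPDATE
SET @sql = 'UPDATE TSD_102015 SET ' + LEFT(@sql, LEN(@sql) - 1)
EXEC (@sql)

Because the column list is read from the catalog at run time, next month's new column is picked up automatically.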
If my SSIS package runs every day, how can I "know" that I'm running it for the first time?
I need a SQL statement, Select1, to execute on the first day my flow starts (to get all the old data); after that, a Select2 statement should always execute (to get just the latest data).
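One common pattern, sketched here with hypothetical table and column names: keep a one-row control table recording whether the initial load has run, read the flag into an SSIS variable, and use a precedence constraint or expression to branch to Select1 or Select2:

-- One-row control table; InitialLoadDone starts as 0
CREATE TABLE dbo.EtlControl (InitialLoadDone BIT NOT NULL)
INSERT INTO dbo.EtlControl VALUES (0)

-- In the package: read the flag, branch to Select1 (full load) when it is 0,
-- otherwise run Select2 (incremental), then mark the initial load complete on success
UPDATE dbo.EtlControl SET InitialLoadDone = 1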
Problem: I schedule a job that calls a stored procedure which loads around 1.5 million records. The job takes 19 hours to complete. However, if I run that stored procedure manually in Query Analyzer, it takes only 45 minutes.
Has anyone faced this problem? Is this a known problem? Any suggestions/recommendations?
Is it possible for me to run an UPDATE statement at a certain time, say midnight for example?
I'm trying to update a bit field in my table (which acts as a checkbox in my Access front end), but only if three date fields are before today's date. The dates in question are in two other tables.
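Yes: a SQL Server Agent job step on a daily midnight schedule can run the statement. A hedged sketch of the UPDATE itself, with all table, key and column names invented for illustration:

-- Runs as a SQL Server Agent job step scheduled daily at 00:00
UPDATE m
SET m.IsFlagged = 1                       -- the bit field behind the Access checkbox
FROM dbo.MainTable AS m
JOIN dbo.TableA AS a ON a.ID = m.ID       -- hypothetical keys and date columns
JOIN dbo.TableB AS b ON b.ID = m.ID
WHERE a.Date1 < DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)  -- before today's midnight
  AND b.Date2 < DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)
  AND b.Date3 < DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)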
I am rewriting an old query that updates its value based on values computed at run time. Is there any way to accomplish the following query, where column s1.ai references the values that were updated during the run?
I want to avoid a loop (i.e. a cursor or a CTE).
Please help!
DECLARE @Stage TABLE (
    StartDate DATETIME,
    BenefitInterestID INT PRIMARY KEY,
    Amount MONEY,
    InterestAmount MONEY,
    Interest DECIMAL(10, 4),
    ai DECIMAL(10, 4)
)

INSERT INTO @Stage
SELECT CONVERT(DATETIME, '2006-12-01 00:00:00.000', 101) AS StartDate,
       1 AS BenefitInterestID,
       1701.00 AS Amount,
       79.605 AS InterestAmount,
       0.1000 AS Interest,
       0.0000 AS ai
UNION ALL SELECT '2007-12-01 00:00:00.000', 2, 172.80, 7.92, 0.0500, 0
UNION ALL SELECT '2008-12-01 00:00:00.000', 4, 0.00, 0.00, 0.0700, 0
UPDATE s1
SET s1.ai = s1.Interest * COALESCE((SELECT SUM(COALESCE(s2.Amount, 0) + COALESCE(s2.InterestAmount, 0) + COALESCE(s2.ai, 0))
                                    FROM @Stage AS s2
                                    WHERE s2.StartDate < s1.StartDate), 0)
FROM @Stage AS s1
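A set-based alternative sketch for SQL Server 2012+. Note that, like the UPDATE above, a windowed SUM reads all ai values as they were before the statement started (statement-level consistency), so this matches the correlated-subquery form:

;WITH running AS (
    SELECT ai, Interest,
           -- running total of all *prior* rows, excluding the current one
           SUM(COALESCE(Amount, 0) + COALESCE(InterestAmount, 0) + COALESCE(ai, 0))
               OVER (ORDER BY StartDate
                     ROWS BETWEEN UNBOUNDED PRECEDING AND 1 PRECEDING) AS PriorSum
    FROM @Stage
)
UPDATE running
SET ai = Interest * COALESCE(PriorSum, 0)

If each row's freshly written ai genuinely must feed the next row's calculation, that is an inherently ordered computation, and no single UPDATE (windowed or correlated) will observe its own in-flight changes.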
Hi all. As you know, when you run a piece of T-SQL script in Query Analyzer, SQL Server shows the time of completion of your code at the bottom of the page. Is there any way to capture this time from the SQL Server environment and use it in the front-end application to inform the user?
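Query Analyzer measures that figure on the client, but you can compute the elapsed time in the batch itself and return it to the front end as a result set or output parameter; a minimal sketch:

DECLARE @start DATETIME
SET @start = GETDATE()

-- ... the actual work goes here ...

SELECT DATEDIFF(ms, @start, GETDATE()) AS ElapsedMilliseconds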
I have a date/time field with a SQL format of "datetime". The actual date/time data format is MM/DD/YYYY hh:mm:ss pm, e.g. "1/25/2007 12:00:16 AM". Both the date and time are combined in the same field, with a space separating the two. I need to run a query on this date/time field using the criteria of today's date, so if I want to search all records with today's date, I need to query on this field. I don't need the time, just the today's-date (MM/DD/YYYY) portion of the field. Thanks, Brian
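A common approach that stays sargable (index-friendly) is to compare against a half-open range covering today; a sketch, with the table and column names assumed:

SELECT *
FROM dbo.MyTable        -- hypothetical table/column names
WHERE OrderDate >= DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)      -- midnight today
  AND OrderDate <  DATEADD(day, DATEDIFF(day, 0, GETDATE()) + 1, 0)  -- midnight tomorrow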
I have several sequence containers in one package that fire off Execute Package tasks. I would like each of the sequence containers to start at the same time when the job starts running. However, when I set them up to do that, I get an error that a variable cannot be read because it is locked. I have the variables set up as read-only, so I am not sure why they are being locked. When I run the package with each sequence container firing off after the previous one ends, it runs fine.
We have a 20 GB database with huge transaction volume. The transaction log backup is scheduled every hour from 3:00 AM to 9:00 PM.
We take a full backup to disk at 9:00 PM and another full backup to tape at 2:00 AM.
It works fine during the day from 6:00 AM: each log backup completes within seconds and is approximately 50 to 200 MB.
But the very first transaction log backup at 3:00 AM runs for about 3 hours, and its size is approximately 11 GB, which is almost equivalent to the full backup size. Some DTS packages run during the night, and as usual reindexing and integrity checks run, but there is no large user traffic at night. I have no idea why the very first transaction log backup in the morning takes so much longer and is this big. Is there any workaround to fix this problem?
If an application is reporting timeout errors, but there are no backups running, the network is fine, and the data and log files are within normal parameters, what else could be the cause of the errors?
I am using SQL Server 2012 and trying to improve the running time of a user-defined function named "udf_GetPackagesHirearchyofNetwork_NEW". This function takes rather a long time to run. It is called from a .NET application using Microsoft Entity Framework version 5.
CREATE FUNCTION [dbo].[udf_GetPackagesHirearchyofNetwork_NEW]
(
    @networkItemid INT
)
RETURNS @PackageTable TABLE
(
    Id INT IDENTITY(1, 1),
    ContentItemTargetMapID INT,
    ContentItemMapID INT,
Below is the error log reported in SQL Agent (64-bit SQL Server 2008 R2 Enterprise Edition Service Pack 2, 32-core CPU, NUMA nodes, 64 GB RAM):
[298] SQLServer Error: 10054, TCP Provider: An existing connection was forcibly closed by the remote host. [SQLSTATE 08S01]
[298] SQLServer Error: 10054, Communication link failure [SQLSTATE 08S01]
[165] ODBC Error: 0, A network-related or instance-specific error has occurred while establishing a connection to SQL Server. Server is not found or not accessible. Check if instance name is correct and if SQL Server is configured to allow remote connections. For more information see SQL Server Books Online. [SQLSTATE 08001]
[382] Logon to server '(local)' failed (ConnUpdateStartExecutionDate)
In my environment a maintenance plan is configured on one of the servers, and while running DBCC CHECKDB on a database of around 200 GB, the log file usage of tempdb keeps increasing and causes the maintenance job to fail.
What can I do to make the maintenance job run successfully? The tempdb database is only 50 GB and its recovery model is set to SIMPLE. It cannot be increased because the mount point it resides on is 50 GB.
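One documented way to cut tempdb usage, at the cost of skipping the logical consistency checks, is the physical-only variant; a sketch:

-- PHYSICAL_ONLY checks page and allocation consistency but skips the
-- logical checks, which greatly reduces tempdb space requirements
DBCC CHECKDB ('MyDatabase') WITH PHYSICAL_ONLY;

If the full logical checks are required, the usual fallback is to restore a copy of the database on a server that has more tempdb room and run the complete CHECKDB there.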
If I start a long-running query on a background thread, is there a way to abort it so that it does not continue running on SQL Server?
The query would be running on SQL Server 2005 from a Windows Forms application using the BackgroundWorker component, so it would have been started from the BackgroundWorker's DoWork event using ADO.NET. If the user clicks an abort button in the UI, I want the query to die so that it does not continue to use SQL Server resources.
One of my stored procs, which takes one parameter, runs for about 2+ minutes. But if I run the same script as in the stored proc with the parameter hardcoded, the query completes in a couple of seconds. The execution plans are different as well. Any reason why this could happen? TIA.
USE [Testing]
GO
/****** Object: Table [dbo].[Testing] Script Date: 4/25/2014 11:08:18 AM ******/
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
It seems to work fine with one million records.
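The symptom described above (fast with a hardcoded value, slow with a parameter, different plans) is the classic parameter-sniffing pattern. A hedged sketch of the two usual workarounds, with dbo.MyProc, @Id and SomeColumn as hypothetical names:

-- Workaround 1: copy the parameter into a local variable, so the optimizer
-- estimates from average density instead of the sniffed parameter value
CREATE PROCEDURE dbo.MyProc @Id INT
AS
BEGIN
    DECLARE @LocalId INT
    SET @LocalId = @Id

    SELECT *
    FROM dbo.Testing
    WHERE SomeColumn = @LocalId
    -- Workaround 2 (alternative): keep @Id in the predicate and append
    -- OPTION (RECOMPILE) to compile a fresh plan for the actual value each run
END
GO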
Each primary key is unique, but the begindate is non-unique, and I guess even if I use datetime2 and add nanoseconds, from what I have read there is a chance I could have a duplicate datetime, since the data is imported via XML from multiple sources.
Is there a way to keep track, in real time, of how long a stored procedure has been running? What I want to do is fire off a trace if a stored procedure has been running for over 5 minutes.
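One way to watch this in real time (SQL Server 2005 and later) is the execution DMVs; a sketch that lists every request that has been executing for more than five minutes:

SELECT r.session_id,
       r.start_time,
       r.total_elapsed_time / 1000 AS elapsed_seconds,
       t.text AS running_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.total_elapsed_time > 5 * 60 * 1000   -- elapsed time is in milliseconds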
I am trying to load the previous day's data at 3 AM via an SSIS job.
The Date variable is initialized as DATEADD("dd", -1, GETDATE()) in the for loop.
Now, as this job runs at 3 AM and I set the variable as GETDATE() - 1, it excludes the data from 12 AM to 3 AM from the result set, because the Date is set to YYYY-MM-DD 03:00:00.000. I need it to be set to YYYY-MM-DD 00:00:00.000.
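The fix is to strip the time portion before (or after) subtracting the day. In T-SQL the usual trick is below; in the SSIS expression language a commonly used equivalent is casting through DT_DBDATE, e.g. (DT_DATE)(DT_DBDATE)DATEADD("dd", -1, GETDATE()):

-- Yesterday at midnight: count whole days since the epoch, subtract one,
-- and convert back to a datetime with a zero time component
SELECT DATEADD(day, DATEDIFF(day, 0, GETDATE()) - 1, 0) AS YesterdayMidnight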
I hope to update a DateTime column value with a Time input parameter. A poor attempt is below, but it looks like the @ApptTime param is coming in as 10:45:00.0000000, and I might have an existing @SendOnDate such as 2015-10-05 07:00:00.000. I hope to end up with 2015-10-05 10:45:00.000.
ALTER PROCEDURE [dbo].[SendEditUPDATE]
    @QuePoolID INT = NULL,
    @ApptTime TIME(7),
    @SendOnDate DATETIME
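A sketch of the combination step: keep the date part of @SendOnDate and add the time part of @ApptTime by casting both to datetime (two datetime values can be added, and time(7) loses sub-3ms precision in the cast):

-- 2015-10-05 07:00 + 10:45  =>  2015-10-05 10:45
SET @SendOnDate = CAST(CAST(@SendOnDate AS DATE) AS DATETIME)
                + CAST(@ApptTime AS DATETIME)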
I am using VS2005 (VB) to develop a PPC WM5.0 program, and I am using SQL CE 3.0. My PPC hardware runs at 400 MHz.
The question is: when the program tries to insert the first record into the .sdf database after each program start, it takes a long time. Does anyone know why, and how can I fix it?
I load the whole database into a dataset when the program starts, do all the "Insert", "Update", "Delete" operations in this dataset, and write it back to the database after each action.
cn.Open()
sda = New SqlCeDataAdapter(SQL, cn)  ' SQL = "Select * From Table"
scb = New SqlCeCommandBuilder(sda)
sda.Update(dataset)
cn.Close()
I checked sda.Update(); it normally takes about 0.08 s to write one record back to the database. But:
1. Start the PPC Program
2. Load DB into dataset
3. Create a ONE new record in dataset
4. Fill back to DB
When I take these four steps, the write time is almost 1 s or even more every time!
Actually, 0.08 s is just the normal case. Sometimes it still takes over 1 s to write back a dataset with only one inserted record while the program is running. (Even when all inserted records are exactly the same in data, just different in the integer key.)
However, when I give up the dataset and use the following code:
cn.Open()
Dim cmd As New SqlCeCommand(SQL, cn)  ' SQL = the INSERT built beforehand: Insert Into Table Values(... all fields ...)
cmd.ExecuteNonQuery()                 ' executes the insert (completing the snippet)
cn.Close()
I found that the first insert still takes more time, but only about 0.2 s, and a normal insert is around 0.02 s. That is 4 times faster!
We need to select rows from the database that have been recently inserted/updated. We have a main primary table (COMMIT_TEST) and a second update table (COMMIT_TEST_UPDATE). The update table contains the primary key and a LAST_UPDATE field which is a datetime (to tell us when an update occurred). Triggers on the primary table are used to populate the update table.
If we insert into or update the primary table in a transaction, we would expect the datetime of the insert/update to reflect the commit; however, it seems the insert/update statement is evaluated eagerly and getdate() is executed at statement time instead of at the commit. This causes problems: we select rows based on LAST_UPDATE, and a commit may occur later while the earlier insert timestamp is saved to the database, so we miss that update.
We would like to know if there is any way to tell SQL Server not to execute the function getdate() until the commit, or any other way to get the commit to create the correct timestamp.
We are using the default isolation level. We have tried getdate(), current_timestamp and even {fn Now()} with the same results. SQL queries that reproduce the problem are provided below:
/* Different functions to get the current timestamp - all have been tested to produce the same results */
/*
SELECT GETDATE()
GO
SELECT CURRENT_TIMESTAMP
GO
SELECT {fn Now()}
GO
*/

/* Use these statements to delete the tables to allow recreate of the tables */
/*
DROP TABLE COMMIT_TEST
DROP TABLE COMMIT_TEST_UPDATE
*/

/* Create a primary table and an UPDATE table to store the date/time when the primary table is modified */
CREATE TABLE dbo.COMMIT_TEST (PKEY int PRIMARY KEY, timestamp) /* ROW_VERSION rowversion */
GO
CREATE TABLE dbo.COMMIT_TEST_UPDATE (PKEY int PRIMARY KEY, LAST_UPDATE datetime, timestamp) /* ROW_VERSION rowversion */
GO

/* Use these statements to delete the triggers to allow reinsert */
/*
DROP TRIGGER LOG_COMMIT_TEST_INSERT
DROP TRIGGER LOG_COMMIT_TEST_UPDATE
DROP TRIGGER LOG_COMMIT_TEST_DELETE
*/

/* Create insert, update and delete triggers */
CREATE TRIGGER LOG_COMMIT_TEST_INSERT ON COMMIT_TEST FOR INSERT AS
BEGIN
    DECLARE @time datetime
    SELECT @time = GETDATE()

    INSERT INTO COMMIT_TEST_UPDATE (PKEY, LAST_UPDATE)
    SELECT PKEY, GETDATE() FROM inserted
END
GO

CREATE TRIGGER LOG_COMMIT_TEST_UPDATE ON COMMIT_TEST FOR UPDATE AS
BEGIN
    DECLARE @time datetime
    SELECT @time = GETDATE()

    UPDATE COMMIT_TEST_UPDATE
    SET LAST_UPDATE = GETDATE()
    FROM COMMIT_TEST_UPDATE, deleted, inserted
    WHERE COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
END
GO

/* In our application deletes should never occur, so we don't log when they get modified; we just delete them from the UPDATE table */
CREATE TRIGGER LOG_COMMIT_TEST_DELETE ON COMMIT_TEST FOR DELETE AS
BEGIN
    IF (SELECT COUNT(*) FROM deleted) > 0
    BEGIN
        DELETE COMMIT_TEST_UPDATE
        FROM COMMIT_TEST_UPDATE, deleted
        WHERE COMMIT_TEST_UPDATE.PKEY = deleted.PKEY
    END
END
GO

/* Delete any previously inserted record to avoid errors when inserting */
DELETE COMMIT_TEST WHERE PKEY = 1
GO

/* What is the current date/time? */
SELECT GETDATE()
GO
BEGIN TRANSACTION
GO
/* Insert a record into the primary table */
INSERT COMMIT_TEST (PKEY) VALUES (1)
GO
/* Simulate additional processing within this transaction */
WAITFOR DELAY '00:00:10'
GO
/* We expect at this point that the date is written to the database (or at least we need some way for this to happen) */
COMMIT TRANSACTION
GO
/* Get the current date to show us what date/time should have been committed to the database */
SELECT GETDATE()
GO
/* Select results from the table - we see that the timestamp is 10 seconds older than the commit; in other words it was evaluated at */
/* the INSERT statement, even though the row could not be read with a SELECT as it was uncommitted */
SELECT * FROM COMMIT_TEST
GO
SELECT * FROM COMMIT_TEST_UPDATE
Any help would be appreciated. We understand we could make changes to the application/database to approximate what we need, but all the solutions we have identified suffer from possible performance issues, or could still lead to missing deals (assuming the commit time is larger than some artificial time window).
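One direction hinted at by the commented-out ROW_VERSION columns in the script above: the tables already declare a column of type timestamp (rowversion), and on SQL Server 2005 and later a consumer can poll it with a high-water mark, stopping below MIN_ACTIVE_ROWVERSION() so rows from still-open transactions are never skipped. This is a hedged sketch of a different change-detection scheme, not a way to make getdate() evaluate at commit:

/* A real consumer would persist @LastSeen between polls */
DECLARE @LastSeen binary(8)
SET @LastSeen = 0x0000000000000000

SELECT PKEY, LAST_UPDATE
FROM dbo.COMMIT_TEST_UPDATE
WHERE [timestamp] > @LastSeen
  AND [timestamp] < MIN_ACTIVE_ROWVERSION()  -- exclude values from in-flight transactions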
I need to take a temporary table that has various times stored in a text field (4:30 pm, 11:00 am, 5:30 pm, etc.), convert them to military time, and then cast them as integers with an update statement, kind of like:
Update myTable set MovieTime = REPLACE(CONVERT(CHAR(5),GETDATE(),108), ':', '')
How can this be done while my temp table is in session?
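A sketch along those lines: SQL Server can cast strings like '4:30 pm' to datetime directly, so the text field itself can feed the same CONVERT(..., 108) trick (the temp table and column names are assumed):

-- '4:30 pm' -> 16:30 -> '1630' -> 1630
-- If MovieTime must stay an integer, write the result to an INT column instead
UPDATE #myTable
SET MovieTime = CAST(REPLACE(CONVERT(CHAR(5), CAST(MovieTime AS DATETIME), 108), ':', '') AS INT)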
We are using SQL Server 2008 as our database and use Access as a GUI. I am looking to create a form in Access where employees can access their time card and request changes from management. I want to use the format from the attached screenshot for the form. I pretty much know how to do it all; the only point of complication is figuring out the easiest way to get the transaction punch record data in employee_punch_record into a format where I can easily populate the form in the horizontal layout you see in the screenshot.
I am not super strong in SQL, but figure I can do it using a formatting table of some sort. Is there a quick and easy way to move transaction records into a more horizontally oriented record?
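A hedged PIVOT sketch: assuming employee_punch_record has one row per punch with columns employee_id, punch_date, punch_type ('In', 'Lunch', 'Out', ...) and punch_time (all names are guesses; adjust to the real schema), something like this flattens a day's punches into one row per employee per day:

SELECT employee_id,
       punch_date,
       [In]    AS TimeIn,
       [Lunch] AS LunchOut,
       [Out]   AS TimeOut
FROM (SELECT employee_id, punch_date, punch_type, punch_time
      FROM dbo.employee_punch_record) AS src
PIVOT (MIN(punch_time) FOR punch_type IN ([In], [Lunch], [Out])) AS p

The pivoted result can then back the Access form directly, one horizontal row per employee per day.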
I have a very simple time series model whose processing works fine without any problem. However, when I run the following query
SELECT
[TimeSeries].[PriceChange],
[TimeSeries].[Symbol],
PredictTimeSeries(PriceChange, -3, 2)
From
[TimeSeries]
WHERE
[TimeSeries].[Symbol] = 'x'
I get the following error:
TITLE: Microsoft SQL Server 2005 Analysis Services
Error (Data mining): A time series prediction was requested with a start time further in the past than the internal models of the mining model, TimeSeries, specified in the HISTORIC_MODEL_GAP and HISTORIC_MODEL_COUNT parameters can process.
The following is the excerpt of the mining model script related to the two parameters:
<AlgorithmParameters>
<AlgorithmParameter>
<Name>MISSING_VALUE_SUBSTITUTION</Name>
<Value xsi:type="xsd:string">Previous</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_GAP</Name>
<Value xsi:type="xsd:int">1</Value>
</AlgorithmParameter>
<AlgorithmParameter>
<Name>HISTORIC_MODEL_COUNT</Name>
<Value xsi:type="xsd:int">10</Value>
</AlgorithmParameter>
</AlgorithmParameters>
These HISTORIC_MODEL_GAP (1) and HISTORIC_MODEL_COUNT (10) values should accommodate PredictTimeSeries(PriceChange, -3, 2). Could anyone shed some light on this?
We have problems with our SQL Server Reporting Services 2012 (SSRS) server. We have set up Kerberos delegation between SSRS and the database server (a SQL Server AlwaysOn cluster) so users are authenticated down to the database. From time to time, SSRS loses the ability to delegate the user credentials to the database; at that point the Report Server logs contain rejected database connections due to ANONYMOUS LOGON. After restarting SSRS the problem is gone.