SQL 2012 :: Write Query Which Runs In Background On Cyclic Basis
Jul 9, 2014
I want to write a SQL query that runs in the background on a cyclic basis. Basically, I want to count the row entries of one table and store the data and the count in two distinct columns.
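A minimal sketch of one way to do this, assuming a source table dbo.SourceTable and a snapshot table (both names hypothetical); the INSERT can then be scheduled as a SQL Agent job step running on whatever cycle you need:

CREATE TABLE dbo.RowCountSnapshot
(
    SnapshotDate datetime NOT NULL DEFAULT GETDATE(),  -- when the count was taken
    RowCnt       int      NOT NULL                     -- rows in the source table at that time
);
GO

-- one job-step statement per cycle
INSERT INTO dbo.RowCountSnapshot (SnapshotDate, RowCnt)
SELECT GETDATE(), COUNT(*)
FROM dbo.SourceTable;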
I wrote a simple SQL query to get the shortest path length from node 1 to all the other nodes. Since there's a loop in the graph, I want to prevent it from going back to nodes it has expanded before.
I got the following error:
Msg 253, Level 16, State 1, Line 2 Recursive member of a common table expression 'CTE_Sample' has multiple recursive references.
It is referring to "and Table1.t NOT IN (select fr from CTE_Sample)"
Can somebody help me to solve it?
By the way, how do I use the UNION, EXCEPT or INTERSECT operators when doing the recursive join? It seems I must use UNION ALL.
WITH CTE_Sample (fr, t, level) AS
(
    SELECT Table1.fr, Table1.t, 1 AS level
    FROM Table1
    WHERE fr = 1
    UNION ALL
    SELECT Table1.fr, Table1.t, level + 1
    FROM Table1
    INNER JOIN CTE_Sample
        ON Table1.fr = CTE_Sample.t
       AND Table1.t NOT IN (SELECT fr FROM CTE_Sample)
)
SELECT CTE_Sample.t, MIN(CTE_Sample.level)
FROM CTE_Sample
GROUP BY CTE_Sample.t
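For context: the recursive member of a CTE may reference the CTE only once, and UNION ALL is indeed the only set operator allowed between the anchor and the recursive member, which is why both the NOT IN subquery and EXCEPT/INTERSECT are rejected. A common workaround is to carry the visited nodes along each row as a delimited string and test membership with LIKE; a sketch against the same Table1:

WITH CTE_Sample (fr, t, level, path) AS
(
    SELECT fr, t, 1 AS level,
           CAST(',' + CAST(fr AS varchar(10)) + ',' AS varchar(4000)) AS path
    FROM Table1
    WHERE fr = 1
    UNION ALL
    SELECT t1.fr, t1.t, c.level + 1,
           CAST(c.path + CAST(t1.fr AS varchar(10)) + ',' AS varchar(4000))
    FROM Table1 AS t1
    INNER JOIN CTE_Sample AS c
        ON t1.fr = c.t
    WHERE c.path NOT LIKE '%,' + CAST(t1.t AS varchar(10)) + ',%'  -- skip already-visited nodes
)
SELECT t, MIN(level)
FROM CTE_Sample
GROUP BY t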
I have a very complex stored procedure called by a job that is scheduled to run every night. Its execution sometimes takes 1 or 2 hours, and sometimes 7 hours or more.
So, if it has been running for more than 4 hours I stop the job and run the procedure from a query window, and it never takes more than 2 hours.
Can anyone help me identify the problem? I want to run it from the job and not have to worry about it.
Some more information:
- It is SQL 2000 Enterprise with SP4 in a cluster (it happens the same way on any node).
- The SQL Server and SQL Agent services run under a domain account that has full administrative access.
- When I connect from a query window I also use a Windows account.
- There are no locks, and no processes blocking or being blocked, while the job is running.
- According to Task Manager, processor activity is fine: no more than 30% on any processor.
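One classic cause of "slow from the Agent job, fast from a query window" is that the two sessions run under different SET options (ANSI_NULLS, ARITHABORT, QUOTED_IDENTIFIER and friends), so each compiles and caches its own plan, and the job's plan can be a bad one. A quick check is to run the following from a query window and from a one-off job step and compare the output:

DBCC USEROPTIONS   -- lists the SET options in effect for the current session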
I want to write a query to generate a report. I have a date column which holds all the business dates; no Saturday or Sunday dates are allowed. What I am looking for is the result for every 5th business day of each month. A month could start on any day; I just want the 5th business day.
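Since the column already excludes weekends, numbering the dates within each month gets you there; a sketch, with dbo.BusinessDates(BusinessDate) standing in for your table and column names:

WITH numbered AS
(
    SELECT BusinessDate,
           ROW_NUMBER() OVER (PARTITION BY YEAR(BusinessDate), MONTH(BusinessDate)
                              ORDER BY BusinessDate) AS day_no
    FROM dbo.BusinessDates
)
SELECT BusinessDate
FROM numbered
WHERE day_no = 5;   -- the 5th business day of each month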
What I really need is to find the entries that have the word LVAD with a space character on either side. In other words, I don't want to see the following coming from the query: "SILVADENE ([PHI] SULFADIAZINE)". How can we write this query to search only for entries that contain the word " LVAD "?
SELECT TOP 100 * FROM tbl_abc WHERE AIMS_Value LIKE '%LVAD%'
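Padding the column itself with spaces makes the space-delimited pattern match even when LVAD is the first or last word in the value:

SELECT TOP 100 *
FROM tbl_abc
WHERE ' ' + AIMS_Value + ' ' LIKE '% LVAD %';

Note this treats only spaces as word boundaries; if LVAD can be followed by punctuation (e.g. "LVAD,"), you would need additional patterns for those cases.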
Here is my query; I don't know whether I'm getting it right.
-- Quarter 1
SELECT D.MerchantName, A.MID, A.TID,
       ISNULL(SUM(A.SumTrxnMon), 0) AS SumTrxnMon,
       E.FullName, E.DxBEmail
INTO #Quarter1
FROM dbo.tblRPT_Spend AS A
INNER JOIN dbo.tblMer_DeployORetrieveTerm AS B
    ON A.MID = B.MID AND A.TID = B.TID
INNER JOIN
Now I want to call the appropriate SP on the basis of the input: if the file name is Fdoor then it should fire SP_Archive_using_merge_Fdoor; if the file name is Fdoop then it should fire SP_Archive_using_merge_Fdoop; and so on.
Below are the two SPs.
-- First SP
ALTER PROCEDURE [dbo].[SP_Archive_using_merge_Fdoor]
AS
BEGIN
    SET NOCOUNT ON
    DECLARE @Source_RowCount int
    DECLARE @New_RowCount int
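A simple wrapper that dispatches on the file name could look like the sketch below (the wrapper name and @FileName parameter are hypothetical); dynamic SQL via EXEC would also work if the list of files keeps growing:

CREATE PROCEDURE dbo.usp_Archive_Dispatch
    @FileName sysname
AS
BEGIN
    SET NOCOUNT ON;

    IF @FileName = 'Fdoor'
        EXEC dbo.SP_Archive_using_merge_Fdoor;
    ELSE IF @FileName = 'Fdoop'
        EXEC dbo.SP_Archive_using_merge_Fdoop;
    ELSE
        RAISERROR('No archive procedure defined for file %s', 16, 1, @FileName);
END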
Now, with the above result, for every record I have to fire a query: SELECT SUM(sale), SUM(scrap), SUM(Production) FROM tableB WHERE ProdID = ["ProdID from above query"]. How do I write this in a stored procedure so that I can get the required SUM columns for all the ProdIDs from the first query?
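Rather than firing one query per row, a single set-based statement usually does the job: join the first query's ProdIDs to tableB and group. A sketch, with the derived table standing in for your first query (tableA is a hypothetical source):

SELECT f.ProdID,
       SUM(b.sale)       AS TotalSale,
       SUM(b.scrap)      AS TotalScrap,
       SUM(b.Production) AS TotalProduction
FROM (SELECT DISTINCT ProdID FROM tableA) AS f   -- replace with the first query
INNER JOIN tableB AS b
    ON b.ProdID = f.ProdID
GROUP BY f.ProdID;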
Basically I'm running a number of selects, using unions to write out each select query as a distinct line in the output. Each line needs to be multiplied by -1 in order to create an offset balance (yes this is balance sheet related stuff) for each line. Each select will have a different piece of criteria.
Although I have it working, I'm thinking there's a much better or cleaner way to do it (I use the word "better" loosely).
Example:

SELECT 'Asset', 'House', TotalPrice * -1
FROM Accounts
WHERE AvgAmount > 0
UNION
SELECT 'Balance', 'Cover', TotalPrice
FROM Accounts
WHERE AvgAmount > 0
What gets messy here is having to write a similar set of queries where the amount is < 0 or = 0
I'm thinking something along the lines of building a table function that contains all the descriptive text and returns the relative values based on the AvgAmount I pass to it.
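One way to collapse the repeated selects is to keep the descriptive text in an inline row set and cross-apply it, so each Accounts row fans out into its output lines in a single pass; a sketch for the > 0 band (the other bands would just add rows to the VALUES list, keyed by their own criteria column):

SELECT x.Label1, x.Label2, a.TotalPrice * x.Sign
FROM Accounts AS a
CROSS APPLY (VALUES ('Asset',   'House', -1),
                    ('Balance', 'Cover',  1)) AS x (Label1, Label2, Sign)
WHERE a.AvgAmount > 0;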
I can fetch the counts for total present and absent.
The query I have tried is:
DECLARE @StudentId uniqueidentifier = '0B2D4D41-8D33-4D79-A981-03E0F093F458'
BEGIN
    SELECT A.StudentId, A.Date, COUNT(Date) AS Total, B.Guid,
[Code] ....
As a result of this query I get the data: the present count and absent count from the dates inserted in the DailyAttendance table.
So my problem is: if the student has been promoted to the next class, this query will also count the previous year. How do I restrict the count to the class StartDate and EndDate mentioned in the ClassDetails table? What would the query be?
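A sketch of the restriction, assuming ClassDetails carries StudentId, StartDate and EndDate for each enrolment (names guessed from the description):

SELECT A.StudentId, COUNT(A.Date) AS Total
FROM DailyAttendance AS A
INNER JOIN ClassDetails AS C
    ON C.StudentId = A.StudentId
   AND A.Date BETWEEN C.StartDate AND C.EndDate   -- only days within the current class
WHERE A.StudentId = @StudentId
GROUP BY A.StudentId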
I have an SSRS report developed in SQL 2012. When I export the report to Word, the background colour of the page header goes missing. When exporting to PDF or Excel, the page header background colour shows.
On a webpage, there are filters to choose from, like date, amount, and SSN (multiple filters can be chosen). I have a single query so far:

SqlCommand cmd = new SqlCommand(
    "SELECT [column1], [column2], [column3], [column4], [column5] FROM [table] " +
    "WHERE [column4] = 'condition4' AND [column5] = @total_bill " +
    "AND [last_change] >= @txtStartDate AND [last_change] <= @txtEndDate", Conn);
cmd.Parameters.Add(new SqlParameter("@total_bill", total_bill1.Text));
cmd.Parameters.Add(new SqlParameter("@txtStartDate", txtStartDate.Text));
cmd.Parameters.Add(new SqlParameter("@txtEndDate", txtEndDate.Text));

I want to break the query up so that it executes on the basis of different sets of conditions (filters). If I don't select the date filter, the above query will not execute properly. Please help.
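One option that avoids building a different SQL string per filter combination is to make every filter optional inside the WHERE clause and pass NULL (DBNull.Value from the C# side) for any filter the user did not pick; a sketch of the SQL text:

SELECT [column1], [column2], [column3], [column4], [column5]
FROM [table]
WHERE [column4] = 'condition4'
  AND (@total_bill   IS NULL OR [column5]     =  @total_bill)
  AND (@txtStartDate IS NULL OR [last_change] >= @txtStartDate)
  AND (@txtEndDate   IS NULL OR [last_change] <= @txtEndDate)
OPTION (RECOMPILE)   -- helps the optimizer build a plan per filter combination

With this pattern the parameters are always added, but unused ones are simply sent as NULL.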
We are using SSRS 2012. We have a report that conditionally formats a background color for some cells. The report renders properly in a browser and in Excel 2003 format. In Excel format, all cells after the first one that meets the condition are highlighted, even if only one cell should be.
The sample expression that triggers this condition looks like this: =IIF(Fields!VIOL_NOTE.Value="Internal","Green","No Color")
All cells after the first one that meets the condition Fields!VIOL_NOTE.Value="Internal" have a green background.
Every night DBCC CHECKDB runs on all our databases. The trouble is that one of them is very large, and the database is inaccessible while this runs. DBCC CHECKDB (dbname) WITH physical_only executed by user found 0 errors and repaired 0 errors. Elapsed time: 0 hours 24 minutes 46 seconds.
This normally happens fairly shortly after the backup, and normally (but not always) after a series of these entries in the log: SQL Server has encountered 7976 occurrence(s) of I/O requests taking longer than 15 seconds to complete on file [D:\Program Files\Microsoft SQL Server\MSSQL11.MSSQLSERVER\MSSQL\DATA\tempdb.mdf] in database [tempdb] (2). Would this cause SQL to automatically run a CHECKDB?
-- Get the new Customer Identifier, return as OUTPUT param
SELECT @NoteID = @@IDENTITY

-- Insert new notes for all the users that the note pertains to,
-- in this case by the assigned users.
IF @FK_UserIDList IS NOT NULL
    EXECUTE spInsertNotesByAssignedUsers @NoteID, @FK_UserIDList

-- Insert new Address record
-- Retrieve Address reference into @AddressId
-- EXEC spInsertForUserNote
--     @FK_UserID,
--     @NoteID,
--     @BeenRead,
--     @Fax,
--     @PKId,
--     @AddressId OUTPUT

COMMIT TRANSACTION
--------------------------------------------------
GO
I am able to run a query which runs fast in QA but slow in the application: it takes about 16 ms in QA but 1,000 ms in the application. What I wanted to know is why the query would take a long time in the application when it runs fast on SQL Server. How should we try debugging it?
Ajay
When I run it from SQL Server Data Tools on the test box it runs fine. However, when I run it from the SSISDB directly I get the error below.
reload table tablename1 from Excel file:Error: The "filetoload.Outputs[Excel Source Output].Columns[Product]" failed because truncation occurred, and the truncation row disposition on "filetoload.Outputs[Excel Source Output].Columns[Product]" specifies failure on truncation. A truncation error occurred on the specified object of the specified component.
reload table t from Excel file:Error: There was an error with filetoload.Outputs[Excel Source Output].Columns[Product] on filetoload.Outputs[Excel Source Output]. The column status returned was: "Text was truncated or one or more characters had no match in the target code page.".
This is so annoying. I have 3 ADO executes in my program. The first one creates a view, the second one performs an outer join on that view and returns a result set, and the third drops the aforementioned view. The program using this is installed on about 200 computers scattered across Germany and Italy, all querying the same MS SQL Server 7.0.

The queries run quite quickly when few users are actively using the program (after hours, for example). However, in the heat of the day, performance goes up and down dramatically, with identical queries taking from 1 to 20 seconds to return their result set. Now, I initially thought "bandwidth issue out of our server". However, I noticed that if I take those three queries and run them from SQL Server Enterprise Manager (running on the same computer as the aforementioned program), the queries run instantly and the data is in my result pane in less than 2 seconds ALWAYS, even while the program is dogging it with 20-second delays before the result set returns. I know it is hanging on the return of the result set, as I put a stop before each ADO execute in order to check which one was eating up my time.

Why is there this dichotomy between running the queries from Enterprise Manager versus running them from an ADO object? Both are using TCP/IP (no named pipes involved). I haven't monkeyed with the attributes of the ADO result set, so they are all set to defaults. I have used the SQL Server Profiler to trace these queries and they always run in less than 33 milliseconds; the duration is also never more than 33 milliseconds. This stinks of a network resource issue, but what always leads me somewhere else is how consistent the performance of Enterprise Manager is when it runs the exact same three queries.
Here is my slightly edited connection string:

Public Const connection_string = "Provider=SQLOLEDB;Server=000.000.000.000;" & _
    "User ID=johndoe;Password=janedoe;Network=dbmssocn;" & _
    "database=fidojoe"

Here are the 3 ADO executes:

conn.Execute (sqlstr_create_view)
Set resultset1 = conn.Execute(sqlstr_get_providers_by_DMISID)
conn.Execute (sqlstr_drop_view)
I've got a stored proc used for order generation which sometimes runs long when called from within our app. A normal run will complete within 20 s; a long run will get terminated by the app at the 6-minute mark.
When it runs long once, repeated attempts will also do so until I execute the same query the app did, but from within Query Analyzer, at which point the problem disappears for a day or two. The app connects to the SQL Server 2000 SP4 database using ADO.
I suspected statistics might be at fault here, but I have tried both UPDATE STATISTICS table WITH FULLSCAN and DBCC DBREINDEX('table') to no avail. This issue has occurred, and been worked around in this manner, a few dozen times.
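Those symptoms are the classic signature of parameter sniffing: Query Analyzer typically runs with different SET options (notably ARITHABORT) than ADO, so it gets its own cached plan, and running the call there happens to flush in a good plan for the app's slot too. Two common workarounds on SQL 2000 are CREATE/ALTER PROCEDURE ... WITH RECOMPILE, or copying the parameters into local variables so the optimizer stops sniffing the first caller's values; a sketch with hypothetical names:

ALTER PROCEDURE dbo.usp_GenerateOrders
    @OrderDate datetime
AS
BEGIN
    -- the local variable defeats sniffing: the plan is built from average
    -- density statistics instead of the first caller's specific value
    DECLARE @d datetime
    SET @d = @OrderDate

    SELECT OrderID          -- hypothetical body
    FROM dbo.Orders
    WHERE OrderDate = @d
END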
Here is code for finding all minimal loops (cyclic paths) in a graph with vertexes of degree >= 3. Almost obviously, before seeking loops we should eliminate from the graph all vertexes of degree < 3 (the degree of a vertex is the number of edges outgoing from the vertex). Note: there are no 'parent'/'child' nodes here; all vertexes are absolutely equal.

if object_id('g3')>0 drop table g3
if object_id('g3x')>0 drop table g3x
if object_id('g3y')>0 drop table g3y
if object_id('g3l')>0 drop table g3l
GO
create table g3y(v1 int, v2 int)          -- ancillary table
GO
create table g3x(n int, v1 int, v2 int)   -- ancillary table
GO
create table g3l(nl int, v1 int, v2 int)  -- table for storing 'detected' loops
GO
create table g3(v1 int, v2 int)           -- test data: pairs of adjoining vertexes;
                                          -- each vertex is named by an arbitrary number
GO
insert into g3
select 2, 3 union all
select 2, 4 union all
select 1, 4 union all
select 3, 5 union all
select 5, 6 union all
select 1, 6 union all
select 4, 7 union all
select 6, 8 union all
select 3, 9 union all
select 1, 7 union all
select 2, 7 union all
select 1, 8 union all
select 5, 8 union all
select 2, 9 union all
select 5, 9
----union all
/*
select 2, 13 union all
select 3, 13 union all
select 13, 14 union all
select 12, 14 union all
select 12, 15 union all
select 11, 15 union all
select 11, 13 union all
select 10, 11 union all
select 10, 12 union all
select 10, 14 union all
select 10, 15
*/
GO
insert into g3 select v2, v1 from g3
declare @i int, @n int, @v1 int, @v2 int
set @i=1
while 0=0
begin
    set @n=1
    truncate table g3x
    truncate table g3y

    -- pick an edge that is not yet part of any stored loop
    select top 1 @v1=g3.v1, @v2=g3.v2
    from g3
    left join g3l on (g3.v1=g3l.v1 and g3.v2=g3l.v2) or (g3.v1=g3l.v2 and g3.v2=g3l.v1)
    where g3l.nl is null

    if @@rowcount=0 break

    insert into g3x select @n, @v1, @v2
    -- grow the path until it arrives back at the starting vertex @v1
    while @v1<>(select top 1 v2 from g3x order by n desc)
    begin
        set @n=@n+1

        -- first try to close the loop: an edge from the current head back to @v1
        insert into g3x
        select top 1 @n, v1, v2
        from g3
        where v2=@v1 and v1<>@v2
          and v1=(select top 1 v2 from g3x order by n desc)
        if @@rowcount=0
        begin
            -- cannot close yet: extend the path to a not-yet-visited vertex
            insert into g3x
            select top 1 @n, v1, v2
            from g3
            where v2 not in (select v1 from g3x union all select v2 from g3x)
              and v1=(select top 1 v2 from g3x order by n desc)
              and not exists (select 0 from g3y where g3y.v1=g3.v1 and g3y.v2=g3.v2)

            if @@rowcount=0
                if @n>2
                begin
                    -- dead end: back up one edge and try another branch
                    insert into g3y select v1, v2 from g3x where n=@n-1
                    delete from g3x where n=@n-1
                    set @n=@n-2
                end
                else
                begin
                    insert into g3l select 0, v1, v2 from g3x
                    break
                end
        end
        else
        begin
            -- loop closed: store it under the next loop number
            insert into g3l select @i, v1, v2 from g3x
            set @i=@i+1
        end
    end
end

select * from g3l order by nl

Below is what we get:
nl  v1  v2
7   5   9
7   9   3
7   3   5

Of course, in the general case not all of the loops found by the code are minimal. But this is exactly my approach: first find any possible loops (avoiding excessiveness!!), then, in a WHILE loop, try to mark out minimal loop(s) from the intersection of two non-minimal loops... it seems it will be an interesting T-SQL job.
Good afternoon, I'm new to functions on SQL Server.
I'm trying to create a dynamic query that would select the column passed to the function from a certain table.
My table is called Selected_Date, and it has StartDate and EndDate columns.
When the user selects, for example, "StartDate", I pass this as a variable to the function which runs the query, but I always get back the passed string as a result.
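That is expected: selecting a string variable just returns the string; a variable never turns into a column reference, and dynamic SQL is not allowed inside functions. A CASE expression is the usual workaround; a sketch (the function name is hypothetical):

CREATE FUNCTION dbo.fn_GetSelectedDate (@ColumnName sysname)
RETURNS TABLE
AS
RETURN
(
    SELECT CASE @ColumnName
               WHEN 'StartDate' THEN StartDate
               WHEN 'EndDate'   THEN EndDate
           END AS SelectedValue
    FROM dbo.Selected_Date
);

-- usage:
-- SELECT * FROM dbo.fn_GetSelectedDate('StartDate');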
I am running the following MDX query through a DataReader and an ADO.NET connection.
SELECT
    { [Measures].[Deuda Total Nacional], [Measures].[Deuda Total Nacional Maximo],
      [Measures].[Cupo Nacional], [Measures].[Porcentaje Utilizacion Maximo],
      [Measures].[Pago Minimo Estado Cuenta], [Measures].[Deuda Ultima Facturacion],
      [Measures].[Dias Mora], [Measures].[Dias Mora Maximo] } ON COLUMNS,
    [Cuenta].[Cuenta].[Cuenta] * [Cuenta].[Rut].[Rut] * [Cuenta].[Dv].[Dv] ON ROWS
FROM [Bd Rtd]
WHERE [Tiempo].[Mes].&[2007-09-01T00:00:00]
The thing is, when I have about 10 thousand rows it runs in about 50 seconds, which is good; but when I run this query after processing the cube with 100 thousand rows, it runs out of memory and crashes.
I'm working in a shared development server with 1GB of memory for my project.
Is there any way to make it run anyway? I mean, even if it has to swap.
thanks
By the way, when this thing goes into production it will have 1.5 million rows.
I have a query which joins multiple tables. This query has suddenly begun to take up to 2 minutes to run (vs. 5-10 seconds previously).
There has been no major change in the number of records in the tables (currently about 220,000). When I remove the PK from one of the tables, which then forces a table scan, the query returns to running in 5-10 seconds. If I add the PK back, performance is back to 2 minutes.
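You can test the scan hypothesis without dropping the PK by forcing a scan with an index hint, and a sudden plan regression like this is often stale statistics, so both are worth a try (table and column names hypothetical):

-- force a heap/clustered-index scan on the suspect table for one run
SELECT t1.SomeColumn
FROM dbo.Table1 AS t1 WITH (INDEX(0))
INNER JOIN dbo.Table2 AS t2 ON t2.ID = t1.ID;

-- refresh statistics, then retry without the hint
UPDATE STATISTICS dbo.Table1 WITH FULLSCAN;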
I have two servers: one production and one development. There is a third party query that runs on both servers. Yes, the query is poorly tuned - I cannot change it. In production, it runs in 54 minutes. In development, I have tried to let it run for days and it never completes.
Here's what I have tried so far:
Compared the settings of production vs. development. The settings are very similar - the development box is larger, 4 times more memory.
Max degree of parallelism is the same on both boxes.
No compression on both boxes.
The production server is fairly busy, the development server is empty - this is the only process running on it.
Plenty of free disk space.
Updated all statistics on all databases touched by the query on dev.
Indexing is the same on both boxes.
The development box is running MSSQL2012.
The production box is running MSSQL2008R2.
What I've noticed:
The query consumes a massive amount of CPU time.
HUGE number of reads (16 million reads for 10 writes according to sp_WhoIsActive)
Largest wait types are CXPACKET, SOS_SCHEDULER_YIELD and TRACEWRITE respectively:

wait_type            sum_wait_time_ms  pct_wait_time  sum_waiting_tasks  avg_wait_time_ms
CXPACKET             41427655          80.7           1763074            23.5
SOS_SCHEDULER_YIELD  2414694           4.7            53272553           0.0
TRACEWRITE           1985634           3.9            1483               1338.9
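Given those waits, two quick checks seem worthwhile: CXPACKET points at parallelism (compare the dev run with parallelism capped), and a TRACEWRITE average wait of over a second suggests a trace running on the dev box is throttling the query. A sketch:

-- list traces running on the dev box; id 1 is the default trace
SELECT id, path, max_size, status
FROM sys.traces;

-- re-run the problem statement with parallelism capped, purely as a test
-- (dbo.SomeLargeTable is a hypothetical stand-in for the third-party query)
SELECT COUNT(*) FROM dbo.SomeLargeTable OPTION (MAXDOP 1);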
I would like to create a query to find which user owns a job. It is probably in the master db, but I wouldn't know where to begin other than that. Telling me how to either change the job owner or create a job through T-SQL would also help. Thanks -Kyle
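Job metadata actually lives in msdb rather than master; a sketch covering both questions (the job name is hypothetical):

-- list each job with its owner
SELECT j.name AS job_name,
       SUSER_SNAME(j.owner_sid) AS owner
FROM msdb.dbo.sysjobs AS j;

-- change a job's owner
EXEC msdb.dbo.sp_update_job
     @job_name = N'MyJob',
     @owner_login_name = N'sa';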
Recently we started using pass-throughs to perform large inserts; however, we have started to notice that some of these pass-throughs are executing twice, and therefore duplicating data.
Is this a known bug, and if not has anyone got any advice on what could be causing it?
We're connecting from Access 2002 (SP6) to SQL Server 2000 Enterprise.
The connection string / code is as follows:
Dim cmd As ADODB.Command
Set cmd = New ADODB.Command
Set cmd.ActiveConnection = CurrentProject.Connection
cmd.Properties("Jet OLEDB:ODBC Pass-Through Statement") = True
cmd.Properties _
    ("Jet OLEDB:Pass Through Query Connect String") = _
    "ODBC;DSN=" & myDatabaseShort & ";DATABASE=" & myDatabase & _
    ";UID=sa;PWD=" & Left(myDatabaseShort, 4) & ";"
It connects fine. My SQL string is a straightforward INSERT statement that only executes once via SQL Query Analyzer.
I'm calling the pass-through using the following lines of code:
cmd.CommandText = mysql
cmd.Execute
Can anyone see anything obvious that I'm doing wrong, or is this a known issue?
Ok, I'll admit right off the bat that I never suspected that I'd ever raise this complaint, much less worry about how to fix the "problem" associated with it!
We're preparing to take a large set of changes (projects) to PeopleSoft Financials from development to test. The code is still somewhat rough, but it has been "desk checked" to ensure that it does what the developers think that it ought to do, and they've blessed it at that point. The code is now moving into the test phase, and the QA team is finding locking/blocking issues that we've never seen in this code before... Sort of a "lock avalanche" where no one process locks for very long, but many of them block one another to the point where applications actually "freeze" while almost never hitting a deadlock.
My solution was to create a "blitzkrieg" query / stored procedure that would periodically sample master.dbo.sysprocesses and master.dbo.sysdatabases, and apply one of the dm_ functions, to gather information on locking, blocking, and deadlocking. My procedure runs nicely (it never hangs) and gets about 99.3% of the data that I want.
The problem is that the blasted query / stored procedure runs either too fast or too slow, depending on how you look at it. Because the dm_ function takes a few ms to run, a row can appear as a false positive or go missing because of timing: either the culprit shows up as a blocker but the block has cleared by the time the victim spid is evaluated, or the row is skipped and the block has occurred by the time the victim is evaluated.
The whole process runs in well under 100 ms when there is nothing to report, and I've never yet seen it run 200 ms under the worst conditions it has faced, so the code is fast. The problem is that I really don't want to enforce any kind of locking to resolve the issue, because that locking would impact performance, and that is EXACTLY what I do NOT want to do.
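One low-impact way to remove the timing skew is to snapshot sysprocesses into a temp table in a single statement, then run all the blocker/victim evaluation (and the dm_ lookups) against that frozen copy, so every row is judged against the same instant; a sketch:

-- one consistent point-in-time sample; all later analysis reads #snap only
SELECT spid, blocked, dbid, waittime, lastwaittype, cmd, status
INTO #snap
FROM master.dbo.sysprocesses WITH (NOLOCK);

-- blocker/victim pairs drawn from the same sample
SELECT v.spid AS victim, v.blocked AS blocker, v.waittime, v.lastwaittype
FROM #snap AS v
INNER JOIN #snap AS b ON b.spid = v.blocked
WHERE v.blocked <> 0;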
I have a query I developed and optimised as well as possible before converting to a view. When running in Query Analyser the query runs in 15 to 18 seconds, which is acceptable. When "converting" it into a view (this is necessary for operational reasons) and running with the same parameter, it runs in 3 to 4 minutes. Is there something I am unaware of (well, of course there is!!)? I was wondering why this occurs and how I can avoid / correct the issue. All advice gratefully received. Dave (Still learning stuff about SQL Server every day!!)
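Views take no parameters, so when the filter is applied from outside the view, the optimizer sometimes fails to push the predicate down the way it does in the hand-written query. If the "parameter" is really a filter, an inline table-valued function behaves like a parameterised view and usually keeps the fast plan; a sketch with hypothetical names:

CREATE FUNCTION dbo.fn_MyReport (@Param int)
RETURNS TABLE
AS
RETURN
(
    SELECT col1, col2        -- the body of the original view goes here,
    FROM dbo.SomeTable       -- with the filter applied inside the function
    WHERE KeyCol = @Param
);

-- usage: SELECT * FROM dbo.fn_MyReport(42);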
A colleague of mine has a query which fails to run under a SQLAgent batch with the following error:

The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value. [SQLSTATE 22007] (Error 242) The statement has been terminated. [SQLSTATE 01000] (Error 3621). The step failed.

He can run the same query successfully via Query Analyzer (i.e. no errors, and it does what he wants). If I try to run the same query through Query Analyzer on my workstation, I get a different error altogether:

Server: Msg 242, Level 16, State 3, Procedure DateForGrouping, Line 11
The conversion of a char data type to a datetime data type resulted in an out-of-range datetime value.

Any idea what might be causing these differences in behaviour depending on how and/or where the query is run from? The (working) statement in question is:

insert into Summary_ReferrerSales
select DateCreated,
       ReferrerID,
       ReferrerIDCount,
       PUIDCount,
       ReferrerDescription,
       0 as TotalOrderValue,
       0 as TotalOrderLines
from vw_ReferrerPopularity

Warning: Null value is eliminated by an aggregate or other SET operation.
(11996 row(s) affected)

And the table / view / function definitions (I take no responsibility for the view definition!) are:

CREATE TABLE [Summary_ReferrerSales] (
    [DateCreated] [datetime] NULL ,
    [ReferrerID] [char] (2) NULL ,
    [ReferrerIDCount] [int] NULL ,
    [PUIDCount] [int] NULL ,
    [ReferrerDescription] [nchar] (100) NULL ,
    [TotalOrderValue] [numeric](18, 0) NULL ,
    [TotalOrderLines] [numeric](18, 0) NULL
) ON [PRIMARY]

CREATE VIEW dbo.vw_ReferrerPopularity
AS
SELECT TOP 100 PERCENT
    COUNT(dbo.LogReferrerID.Referrer) AS ReferrerIDCount,
    COUNT(DISTINCT dbo.LogReferrerID.PUID) AS PUIDCount,
    dbo.DateForGrouping(dbo.LogReferrerID.DateUsed) AS DateCreated,
    dbo.LogReferrerID.Referrer AS ReferrerID,
    dbo.LookupReferrerID.ReferrerDescription
FROM dbo.LogReferrerID
INNER JOIN dbo.LookupReferrerID
    ON dbo.LogReferrerID.Referrer = dbo.LookupReferrerID.ReferrerID
WHERE (dbo.LogReferrerID.DateUsed > CONVERT(DATETIME, '2003-09-01 00:00:00', 102))
GROUP BY dbo.LogReferrerID.Referrer,
    dbo.DateForGrouping(dbo.LogReferrerID.DateUsed),
    dbo.LookupReferrerID.ReferrerDescription
HAVING (dbo.LogReferrerID.Referrer <> 'WS')
    AND (COUNT(dbo.LogReferrerID.Referrer) IS NOT NULL)
    AND (dbo.LookupReferrerID.ReferrerDescription IS NOT NULL)
    AND (dbo.DateForGrouping(dbo.LogReferrerID.DateUsed) > CONVERT(DATETIME, '2003-09-01 00:00:00', 102))
    AND (COUNT(DISTINCT dbo.LogReferrerID.PUID) IS NOT NULL)
ORDER BY dbo.DateForGrouping(dbo.LogReferrerID.DateUsed) DESC

CREATE TABLE [LogReferrerID] (
    [LogReferrerID] [int] NULL ,
    [Referrer] [varchar] (2) NULL ,
    [DateUsed] [smalldatetime] NULL ,
    [HTTPReferrer] [varchar] (1000) NULL ,
    [TargetURL] [varchar] (1000) NULL ,
    [QueryString] [varchar] (1000) NULL ,
    [PUID] [varchar] (50) NULL
) ON [PRIMARY]

CREATE TABLE [LookupReferrerID] (
    [ReferrerID] [char] (2) NOT NULL ,
    [ReferrerDescription] [varchar] (100) NOT NULL ,
    CONSTRAINT [PK_Lookup_ReferrerID] PRIMARY KEY CLUSTERED ([ReferrerID]) ON [PRIMARY]
) ON [PRIMARY]

CREATE FUNCTION dbo.DateForGrouping (@RowDate datetime)
RETURNS datetime AS
BEGIN
    declare @datestring char(10)
    set @datestring = cast(datepart(dd,@RowDate) as char(2)) + '/' +
                      cast(datepart(mm,@RowDate) as char(2)) + '/' +
                      cast(datepart(yyyy,@RowDate) as char(4))
    return cast(@datestring as datetime)
END
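The DateForGrouping function is the likely culprit: it builds a 'dd/mm/yyyy' string and casts it back to datetime, and that cast is interpreted according to each session's language/DATEFORMAT setting, which evidently differs between SQLAgent and the various Query Analyzer logins (under us_english/mdy, any day value above 12 throws exactly this Msg 242). A string-free way to strip the time portion behaves identically in every session:

ALTER FUNCTION dbo.DateForGrouping (@RowDate datetime)
RETURNS datetime AS
BEGIN
    -- whole days since the 1900-01-01 epoch, added back onto the epoch:
    -- truncates to midnight with no string conversion involved
    RETURN DATEADD(dd, DATEDIFF(dd, 0, @RowDate), 0)
END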
This is in ASP.NET 1.1. I have a performance problem with a webform. The form contains several bound fields and a couple of datagrids. I fill the grids by creating a data adapter, then I use the adapter to fill a dataset, then I set the grid datasource to the dataset. The query to fill one of the grids is getting a SQL timeout when running from my application (it takes about 40 seconds to complete). When I run the same SQL code from SQL Query Analyzer it runs in less than 1 second (it is embedded SQL in the app, not a stored procedure). I suspect that something else is being requested from SQL during the postback that is causing a blocking issue or something, but I can't tell exactly what is happening. I've tried tracing through the code, and all I can tell is that the timeout occurs during the dataadapter.Fill command. Has anyone else seen something similar? Is there a good way for me to see what SQL commands are being executed from ASP.NET? Any advice on debugging this would be much appreciated.