SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PaymentsLog](
[Code] ....
Is there a way to look at the DatePeriod table, use its StartDate and EndDate as the period for the select statement, loop through each date between those two dates, and insert the data into the PaymentsLog table?
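A set-based approach usually avoids the cursor entirely. The sketch below is an illustration only, assuming DatePeriod has StartDate and EndDate columns and that PaymentsLog takes one row per date; the PaymentDate column is a placeholder:

;WITH Dates AS (
    -- seed with each period's start date
    SELECT dp.StartDate AS TheDate, dp.EndDate
    FROM dbo.DatePeriod dp
    UNION ALL
    -- step forward one day until the end date is reached
    SELECT DATEADD(DAY, 1, d.TheDate), d.EndDate
    FROM Dates d
    WHERE d.TheDate < d.EndDate
)
INSERT INTO dbo.PaymentsLog (PaymentDate)   -- hypothetical column
SELECT TheDate
FROM Dates
OPTION (MAXRECURSION 0);                    -- allow periods longer than 100 days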
I'm inserting from TempAccrual into VacationAccrual. It works nicely; however, if I run this script again it will insert the same values into VacationAccrual again. How do I prevent that? If there is a small change in one of the columns in TempAccrual, then the insert should be allowed. Here is my query:
INSERT INTO vacationaccrual (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
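One common pattern is to filter the source rows through NOT EXISTS so that only new or changed rows get inserted. This is only a sketch built from the column names in the question; adjust the comparison to however "a small change" should be detected, and note that nullable columns would need ISNULL wrappers around the equality tests:

INSERT INTO vacationaccrual (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
SELECT t.empno, t.accrued_vacation, t.accrued_sick_effective_date, t.accrued_sick, t.import_date
FROM tempaccrual t
WHERE NOT EXISTS (
    SELECT 1
    FROM vacationaccrual v
    WHERE v.empno = t.empno
      AND v.accrued_vacation = t.accrued_vacation
      AND v.accrued_sick_effective_date = t.accrued_sick_effective_date
      AND v.accrued_sick = t.accrued_sick
);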
I have two tables, 'Cal_date' and 'RPT_Invoice_Shipped'. Table Cal_date has columns month_no, start_date and end_date. Table RPT_Invoice_Shipped has columns Day_No, Date, Div_code, Total_Invoiced, Shipped_Value, Line_Shipped, Unit_Shipped and Transaction_Date.
I am using the insert statement below to load data into the RPT_Invoice_Shipped table:
insert into [Global_Report_Staging].[dbo].[RPT_Invoice_Shipped]
    (Day_No, Date, Div_code, Total_Invoiced, Transaction_Date)
select
    ,                                   -- the Day_No expression is missing here; see below
    CONVERT(DATE, Getdate()) as Date,
    LTRIM(RTRIM(div_Code)),
    sum(tot_Net_Amt) as Total_Invoiced,
    (dateadd(day, -1, convert(date, getdate())))
from [Global_Report_Staging].[dbo].[STG_Shipped_Invoiced]
WHERE CONVERT(DATE, Created_date) = CONVERT(DATE, Getdate())
group by div_code
While inserting into the Day_No column of RPT_Invoice_Shipped I have to use the formula (Transaction_Date - start_date + 1), where Transaction_Date comes from STG_Shipped_Invoiced and start_date comes from the Cal_date table. I was using datepart(mm, Transaction_Date) to get the month_no, and this month_no can be joined with month_no of the Cal_date table to fetch start_date, so that start_date can be used in the formula (Transaction_Date - start_date + 1).
But I am having difficulty arranging this in the query above. How can I achieve this?
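One way to get start_date into the statement is to join Cal_date on the month number, then compute Day_No with DATEDIFF. A sketch only, assuming month_no alone identifies the right Cal_date row (all names taken from the question):

insert into [Global_Report_Staging].[dbo].[RPT_Invoice_Shipped]
    (Day_No, Date, Div_code, Total_Invoiced, Transaction_Date)
select
    datediff(day, c.start_date, s.Transaction_Date) + 1 as Day_No,
    convert(date, getdate()) as Date,
    ltrim(rtrim(s.div_Code)),
    sum(s.tot_Net_Amt) as Total_Invoiced,
    dateadd(day, -1, convert(date, getdate()))
from [Global_Report_Staging].[dbo].[STG_Shipped_Invoiced] s
join [Global_Report_Staging].[dbo].[Cal_date] c
    on c.month_no = datepart(mm, s.Transaction_Date)
where convert(date, s.Created_date) = convert(date, getdate())
group by ltrim(rtrim(s.div_Code)), datediff(day, c.start_date, s.Transaction_Date)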
I am writing a query to return some production data. Basically I need to insert either 1 or 2 rows into a table variable based on a decision as to whether the production part makes 1 or 2 items (the raw data does not allow for this; it comes from a lookup in my database).
I can retrieve all the source data I need easily, but when I come to insert it into the table variable I need to insert 1 record if it's a single part or 2 records if it's a twin part. I know I could use a cursor, but I'm sure there has to be an easier way!
Below is the code I have at the moment:
declare @startdate as datetime
declare @enddate as datetime
declare @Line as Integer
declare @count int

set @startdate = '2015-01-01'
set @enddate = '2015-01-31'
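A cursor-free way to get 1 or 2 rows per part is to join the source rows to a tiny constant list so twin parts match twice. This is only a sketch under assumed names (SourceData, ItemsPerPart, ProducedDate and the @Production table variable are placeholders, not the real schema):

declare @Production table (PartNo int, ItemNo int)

insert into @Production (PartNo, ItemNo)
select s.PartNo, n.ItemNo
from SourceData s
join (values (1), (2)) as n(ItemNo)
    on n.ItemNo <= s.ItemsPerPart      -- 1 row for single parts, 2 rows for twins
where s.ProducedDate between @startdate and @enddate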
Why is my SELECT ... INTO not working in SQL Server 2008 R2? I have some code I am using on an existing table, trying to put the data into a brand new table, but it keeps giving me an error.
select mCid, caucasian, aa, api, aian, mr, his, max(val)
into memberrand
from MEMBERV2_RAND
cross apply (
    select AA union all
    select API union all
    select AIAN union all
    select MR union all
    select HIS union all
    select CAUCASIAN
) v(val)
group by mCid, AA, API, AIAN, MR, HIS, caucasian;
The error is: Msg 1038, Level 15, State 5, Line 1. An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name. Do I have to actually create this table with no values first and then run the query? I was hoping this was sort of a make-a-table query.
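The error text itself points at the one unnamed column: max(val) has no name, and SELECT INTO needs a name for every column it creates. A sketch of the presumably fixed query (the alias maxval is a placeholder):

select mCid, caucasian, aa, api, aian, mr, his, max(val) as maxval
into memberrand
from MEMBERV2_RAND
cross apply (
    select AA union all
    select API union all
    select AIAN union all
    select MR union all
    select HIS union all
    select CAUCASIAN
) v(val)
group by mCid, AA, API, AIAN, MR, HIS, caucasian;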
I saved the result into a csv file and then truncated the table. Now, I am trying to bulk insert the data into the table. So I used:
bulk insert rdb.dbo.scd_event_tab
from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
    codepage = 'RAW',
    datafiletype = 'native',
    fieldterminator = ' ',
    keepidentity,
    keepnulls
);
go
However, I get this error:
Msg 4867, Level 16, State 1, Line 1
Bulk load data conversion error (overflow) for row 1, column 1 (JOB_ID).
Msg 4866, Level 16, State 5, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 3. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
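One likely culprit is datafiletype = 'native', which tells BULK INSERT to expect SQL Server's binary native format rather than text, so a CSV saved from query results will typically fail with exactly these conversion errors. If the file really is character data, something along these lines may be closer (the terminators are assumptions about how the file was written):

bulk insert rdb.dbo.scd_event_tab
from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
    datafiletype = 'char',
    fieldterminator = ',',
    rowterminator = '\n',
    keepidentity,
    keepnulls
);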
I am experimenting with using CDC to track user changes in our application database. So far I've done the following:
-- ENABLE CDC ON DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_db
GO

-- ENABLE CDC TRACKING ON THE AVA TABLE IN DV_WRP_TEST
USE dv_wrp_test
[Code] ....
The results shown above are what I expect to see. My problem occurs when I use our application to update the same column in the same table. The vb.net application passes a Table Valued Parameter to a stored procedure which updates the table. Below is the creation script for the stored proc:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

if exists (select * from sysobjects
           where id = object_id('dbo.spdv_AVAUpdate')
             and sysstat & 0xf = 4)
    drop procedure dbo.spdv_AVAUpdate
[Code] ....
When I look at the results of CDC, instead of operations 3 and 4, I see 1 (DELETE) and 2 (INSERT) for the change that was initiated from the stored procedure:
-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT *
FROM cdc.dbo_AVA_CT
GO

--RESULTS SHOW OPERATION 1 (DELETE) AND 2 (INSERT) INSTEAD OF 3 AND 4
--__$start_lsn            __$end_lsn  __$seqval               __$operation  __$update_mask  AvaKey  AvaDesc  AvaArrKey  AvaSAPAppellationID
--0x0031E84F000000740008  NULL        0x0031E84F000000740002  3             0x02            119     Test2    6          NULL
--0x0031E84F000000740008  NULL        0x0031E84F000000740002  4             0x02            119     Test3    6          NULL
--0x0031E84F00000098000A  NULL        0x0031E84F000000980003  1             0x0F            119     Test3    6          NULL
--0x0031E84F00000098000A  NULL        0x0031E84F000000980004  2             0x0F            119     Test4    6          NULL
Why might this be happening, and what can be done to correct it? Also, is there any way to get the user id associated with the CDC change?
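For reference, the __$operation codes (1 = delete, 2 = insert, 3 = update before image, 4 = update after image) can be decoded inline when inspecting the change table, which makes output like the above easier to read; this is just a convenience query over the same change table:

SELECT  CASE __$operation
            WHEN 1 THEN 'DELETE'
            WHEN 2 THEN 'INSERT'
            WHEN 3 THEN 'UPDATE (before)'
            WHEN 4 THEN 'UPDATE (after)'
        END AS operation_name,
        *
FROM cdc.dbo_AVA_CT;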
Can we bulk insert only the desired columns from a flat file to a table?
I am using SSIS to bulk insert from a file with more than 200 columns. I am trying to find a way to bulk insert them into multiple tables through SSIS.
The one way I can think of is to pre-map the columns from the file to the destination tables and build numerous Bulk Insert tasks to achieve that. But I'm not sure if SSIS will let me do that.
When assigning permissions to an authenticated user to connect to a server database, if I want the user to be able to insert / update / delete data on db objects, specifically tables, what permission should be assigned to that user?
My thoughts were Insert / Update / Delete; however, someone suggested that the Execute permission would do this ...
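For table DML the specific INSERT, UPDATE and DELETE permissions are what matter; EXECUTE applies to procedures and functions, not to direct table access. A sketch with placeholder names, granted per object or more broadly per schema:

-- per table
GRANT INSERT, UPDATE, DELETE ON dbo.SomeTable TO SomeUser;

-- or for every object in a schema
GRANT INSERT, UPDATE, DELETE ON SCHEMA::dbo TO SomeUser;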
I have two different SQL 2008 servers and I don't have permission to create a linked server on either of them. I created a trigger on server1.table1 to insert the same record into the remote server's server2.table1 using OPENROWSET.
I created a stored procedure to insert this record, and I have no issue when I execute the stored procedure directly: it inserts the record into the remote server.
The problem is that when I call the stored procedure from the trigger, I get an error message.
Stored Procedure:

USE [DB1]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
When I try to insert a new description value into the table I get the following error message:
No row was updated. The data in row 1 was not committed. Error source: .Net SqlClient Data Provider. Error Message: the operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "(null)" returned message "The partner transaction manager has disabled its support for remote/network transactions.".
Correct the errors or press ESC to cancel the change(s).
Here are the steps I should take...
1- Check the log table and find the run status (there is a date field which tells the day of the last run).
2- Let's say the last day was 2008-05-15, so I have to check whether an A1.YYYYMMDD file exists in the folder for each day since, like A1.20080516, A1.20080517 and A1.20080518 (until today).
3- If the A1.20080516 text file exists then I have to move it into the table, and the same for the other dates: if A1.20080517 exists I have to load it into the table, and so on.
It looks like a for-each loop: first I have to get the last date, then I have to check whether the file exists for each date, and if that date's file exists I have to load it into the table...
Please tell me how I can do it; it looks like complex looping...
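If a plain T-SQL job is acceptable, one way to sketch the loop is with the undocumented master.dbo.xp_fileexists procedure; everything here (the RunLog table, the folder, the load step) is a placeholder for illustration:

declare @d date, @exists int, @path varchar(260)

-- hypothetical log table holding the last successful run date
select @d = dateadd(day, 1, max(RunDate)) from dbo.RunLog

while @d <= cast(getdate() as date)
begin
    set @path = 'C:\Import\A1.' + convert(char(8), @d, 112)   -- A1.YYYYMMDD
    exec master.dbo.xp_fileexists @path, @exists output

    if @exists = 1
    begin
        -- load the file here (e.g. via BULK INSERT) and log the run
        print 'load ' + @path
    end

    set @d = dateadd(day, 1, @d)
end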
Can I safely use top n select/delete in a while loop? For example:
declare @FieldVal int
while (select count(*) from @MyTempTable) > 0
begin
    select top (1) @FieldVal = FieldVal from @MyTempTable
    -- process @FieldVal then delete the row
    delete top (1) from @MyTempTable
end
I like the simplicity of the above approach as long as it's reliable and there aren't any gotchas that I may not be aware of.
I have an SSIS job that dynamically loops through each server, grabbing data for typical DBA reporting, like disk space and error logs. If a server is down for whatever reason, the SSIS package fails. Is there any way I can prevent the SSIS package from failing when one of the servers is down?
I have two tables, each having one row-identifier column of int datatype. Both these columns are part of the respective primary keys. Now, as part of my process, I'm inserting one small part of the data from one table into the other table. This was working fine, but suddenly I started getting an error like:
Violation of PRIMARY KEY constraint 'PK_TargetTable'. Cannot insert duplicate key in object 'DW.TargetTable'. The duplicate key value is (58544748). First I checked with DBCC CHECKIDENT with NORESEED and found that there is a difference between the current identity value and the current column value. I fixed it by running DBCC CHECKIDENT. But to my surprise I got the same issue again. The interesting thing is that the error comes after inserting 65466 records.
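For reference, this is the check-and-reseed pattern being described; the reseed value below is only illustrative and would normally be the current MAX of the key column:

-- report the current identity value vs. the current maximum column value
DBCC CHECKIDENT ('DW.TargetTable', NORESEED);

-- force the identity seed up to the highest existing key (value is an example)
DBCC CHECKIDENT ('DW.TargetTable', RESEED, 58544748);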
I work with SQL Server 2008 on a database. We have exported the schema and data with the export data command:
right-click on database => Tasks => Generate Scripts => select all objects => click Advanced => select 'Types of data to script' => schema and data
Now we have a file with all the data and schema. That's perfect... But how can I insert the file into another database? OK, I can copy-paste all the data into Management Studio and press F5, but when I do this Management Studio fails because the size of the file is > 200 MB!
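For scripts too big for Management Studio, the sqlcmd command-line utility can execute the file directly against the target database; server, database and file names below are placeholders:

sqlcmd -S MyServer -d TargetDatabase -E -i "C:\scripts\schema_and_data.sql"

(-E uses Windows authentication; use -U and -P instead for a SQL login.)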
I've got a piece of code that returns 53 records when using just the SELECT section. When I change it to INSERT INTO ..... SELECT it only inserts 39 records into the receiving table. There are no keys/constraints/indices or anything else on the receiving table (it's just a dumping ground for some data that will be processed later).
The code for creating the table is here:

USE [CDSExtractInpatients6.2]
GO
/****** Object: Table [dbo].[CDS_Inpatients_CDS_Feeds_Import] Script Date: 22/05/2015 15:54:15 ******/
SET ANSI_NULLS ON
GO
[code]...
I know most of the date fields are being created as varchar here, but this is something I inherited and the SELECT is outputting the dates as text. I don't know if it makes any difference, but the server is running SQL 2008.
Has anyone seen an INSERT statement deadlocking itself? Most of the articles published by Microsoft say to change the transaction isolation level from Read Committed to Read Committed Snapshot. Below is the XML file on the deadlock.
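For reference, this is the database-level switch those articles describe (the database name is a placeholder; the ALTER has to drain other connections, hence the ROLLBACK IMMEDIATE option):

ALTER DATABASE MyDatabase
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE;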
I'm able to successfully import data from a tab-delimited .txt file using the following statement:

BULK INSERT ImportProjectDates
FROM 'C:\tmp\ImportProjectDates.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')
However, in order to import the text file, I had to add columns to the text file to match the columns that exist in the table. The original file is an export out of another database and contains all but 5 columns from my db.
How would I control which columns BULK INSERT actually imports when working with a .txt file? I've tried using a FORMAT FILE; however, I kept getting errors, which I tracked down to being a case of not using it correctly with a .txt file.
Yes, I could have the DBA add in the missing columns to the query from the other DB to create the columns, however I'd like to know a little bit more about this overall.
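In case it helps to see the shape of one: a non-XML format file maps each field in the data file to a table column number, so table columns missing from the file are simply never referenced (they get NULL or their defaults). A minimal illustrative sketch for an imaginary three-field tab-delimited file feeding table columns 1, 2 and 4 (all names, lengths and numbers are placeholders):

10.0
3
1   SQLCHAR   0   100   "\t"     1   ProjectName   SQL_Latin1_General_CP1_CI_AS
2   SQLCHAR   0   24    "\t"     2   StartDate     ""
3   SQLCHAR   0   24    "\r\n"   4   EndDate       ""

It would then be referenced with something like BULK INSERT ImportProjectDates FROM 'C:\tmp\ImportProjectDates.txt' WITH (FORMATFILE = 'C:\tmp\ImportProjectDates.fmt', FIRSTROW = 2).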
I was wondering if there was a way to redirect an insert to another column...
Example: Original Insert Statement:

INSERT INTO [table] ([columnA], [columnB])
SELECT '2015-01-01 00:00:00', 99.99
We have changed [columnB] from a decimal(19,9) to a computed column. So instead, we added another column, [ColumnC], to take [ColumnB]'s insert data. I thought we could have used an INSTEAD OF INSERT trigger, but the insert fails with the message "cannot be modified because it is either a computed column or is the result of a UNION operator".
This is the trigger I was using:

CREATE TRIGGER [Trigger] ON [table]
INSTEAD OF INSERT
AS
INSERT INTO [table] ([columnA], [columnC])
SELECT [dataA], [dataB]
FROM Inserted
I have to perform a bulk import on a regular basis and have created a script to do this. The problem is that the .csv file has 12 columns and the table to import into has 14. To work around this discrepancy I have decided to use a format file. The problem is how to create one.
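The bcp utility can generate a starting format file straight from the table, which can then be trimmed down to the 12 fields actually present in the .csv; server and object names are placeholders:

bcp MyDatabase.dbo.MyTable format nul -c -t, -f MyTable.fmt -T -S MyServer

(format nul asks bcp to emit only the format file; -c is character mode, -t, sets a comma field terminator, -f names the output file, -T uses a trusted connection.)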
What statement do I use, as part of an insert trigger, to insert XML data from the XML database into a flat database, checking whether a record with a specific ID already exists so that it can be deleted first and the changed record inserted, instead of adding the changes or updates alongside the original from the XML database?
What I'm trying to do is take the XML-formatted data out of one SQL Server database and insert just the data in that XML into another SQL database, so I can play with the data.
The problem is that if the data in the XML is updated or changed for a specific record in the original XML database, then the trigger inserts another copy into the created database (which I don't want).
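A common shape for this is a delete-then-insert trigger, so the target always holds exactly one, latest, copy of each record. This is only a sketch; the table names, RecordID key and column list stand in for the real XML-shredding logic:

CREATE TRIGGER trg_SourceTable_Sync ON dbo.SourceTable
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- remove any stale copy of a record that is being re-sent
    DELETE t
    FROM dbo.TargetTable t
    JOIN inserted i ON i.RecordID = t.RecordID;

    -- insert the fresh version
    INSERT INTO dbo.TargetTable (RecordID, SomeValue)
    SELECT i.RecordID, i.SomeValue
    FROM inserted i;
END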
I am trying to BULK INSERT csv files using a stored procedure in SQL Server 2008 R2 SP3. Although the files contain several thousand lines and BULK INSERT returns no errors, no data is actually imported into the table. Every field in the table is an NVARCHAR(50) datatype.
Here is the code for the operation (only the parameters for the insert itself):
set @open   = 'bulk insert [DWHStaging].[dbo].[Abverkaufsquote] from '''
set @path   = 'G:DataStagingDWHStagingSourceAbverkaufsquote'
set @params = ''' with (firstrow = 2, datafiletype = ''widechar'', fieldterminator = '';'', rowterminator = ''\n'', codepage = ''1252'', keepnulls);'
The csv file originates from a DB2 database. Using exactly the same code base I can import several other types of CSV files without problem.
The files are stored on the local server as UCS-2 Little Endian, and one difference is that the files that do not import do not include a BOM. The other difference is that the failed files are non-UNICODE files.
I've never worked with a column that's defined as a binary data type. (In the past, if ever we had to include a photo, we stored a link to the JPG/BMP/PNG/whatever in a column, but never actually inserted or updated a column with binary data.) But now I've got to do that. So how do you put data into a column defined as BINARY(4096)?
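Two common options are a hex literal for values built in T-SQL, or OPENROWSET ... SINGLE_BLOB to pull a file's bytes from disk. Table, column and path names here are placeholders:

-- literal bytes as a 0x... hex constant (BINARY(4096) right-pads with 0x00)
INSERT INTO dbo.Photos (PhotoData)
VALUES (0x89504E470D0A1A0A);

-- or load an entire file from disk
INSERT INTO dbo.Photos (PhotoData)
SELECT BulkColumn
FROM OPENROWSET(BULK 'C:\images\photo.png', SINGLE_BLOB) AS f;

Since BINARY(4096) pads every value to the full 4096 bytes, VARBINARY(4096) (or VARBINARY(MAX)) is often the better fit for variable-length data like images.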