SQL Server 2008 :: How To Get Column From Another Table And Insert Into Other Table
May 18, 2015
I have two tables, 'Cal_date' and 'RPT_Invoice_Shipped'. Table Cal_date has columns month_no, start_date and end_date. Table RPT_Invoice_Shipped has columns Day_No, Date, Div_code, Total_Invoiced, Shipped_Value, Line_Shipped, Unit_Shipped and Transaction_Date.
I am using the insert statement below to insert data into the RPT_Invoice_Shipped table.
insert into [Global_Report_Staging].[dbo].[RPT_Invoice_Shipped]
(Day_No, Date, Div_code, Total_Invoiced, Transaction_Date)
select , CONVERT(DATE,Getdate()) as Date, LTRIM(RTRIM(div_Code)),
sum(tot_Net_Amt) as Total_Invoiced, (dateadd(day, -1, convert(date, getdate())))
from [Global_Report_Staging].[dbo].[STG_Shipped_Invoiced]
WHERE CONVERT(DATE,Created_date )=CONVERT(DATE,Getdate())
group by div_code
While inserting into the Day_No column of RPT_Invoice_Shipped, I have to use the formula (Transaction_Date - start_date + 1), where Transaction_Date comes from STG_Shipped_Invoiced and start_date comes from the Cal_date table. I was using DATEPART(mm, Transaction_Date) to get the month_no; that month_no can be joined to the month_no of the Cal_date table to fetch start_date, which can then be used in the formula (Transaction_Date - start_date + 1).
But I am having difficulty working this into the query above. How can I achieve this?
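One possible way to wire that join in is sketched below; it assumes Cal_date lives in the same staging database and that month_no can be matched to DATEPART(mm, Transaction_Date), with everything else taken from the query above:

insert into [Global_Report_Staging].[dbo].[RPT_Invoice_Shipped]
    (Day_No, Date, Div_code, Total_Invoiced, Transaction_Date)
select
    datediff(day, c.start_date, s.Transaction_Date) + 1 as Day_No,
    convert(date, getdate()) as Date,
    ltrim(rtrim(s.div_Code)),
    sum(s.tot_Net_Amt) as Total_Invoiced,
    dateadd(day, -1, convert(date, getdate()))
from [Global_Report_Staging].[dbo].[STG_Shipped_Invoiced] s
join [Global_Report_Staging].[dbo].[Cal_date] c
    on c.month_no = datepart(mm, s.Transaction_Date)
where convert(date, s.Created_date) = convert(date, getdate())
group by s.div_Code, datediff(day, c.start_date, s.Transaction_Date) + 1

If Day_No should instead be computed from a single transaction date per day, the datediff expression can be moved out of the GROUP BY accordingly.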
I'm inserting from TempAccrual into VacationAccrual. It works nicely; however, if I run this script again it will insert the same values into VacationAccrual again. How do I block that? If there is a small change in one of the columns in TempAccrual, then allow the insert. Here is my query:
INSERT INTO vacationaccrual (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
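One common way to make the load re-runnable is to insert only rows that are new or changed, for example with NOT EXISTS; a rough sketch, assuming TempAccrual is the source, empno identifies an employee, and the remaining columns are compared to detect changes:

INSERT INTO vacationaccrual
    (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
SELECT t.empno, t.accrued_vacation, t.accrued_sick_effective_date, t.accrued_sick, t.import_date
FROM TempAccrual AS t
WHERE NOT EXISTS (
    SELECT 1
    FROM vacationaccrual AS v
    WHERE v.empno = t.empno
      AND v.accrued_vacation = t.accrued_vacation
      AND v.accrued_sick_effective_date = t.accrued_sick_effective_date
      AND v.accrued_sick = t.accrued_sick
);

If any of the compared columns can be NULL, the equality checks need ISNULL handling, and MERGE is an alternative if existing rows should be updated rather than re-inserted.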
Can we bulk insert only the desired columns from a flat file into a table?
I am using SSIS to bulk insert from a file with more than 200 columns. I am trying to find a way to bulk insert them into multiple tables through SSIS.
The one way I can think of is to pre-map the columns from the file to the destination tables and build numerous Bulk Insert tasks to achieve that, but I am not sure whether SSIS will let me do that.
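Outside of SSIS, one workable pattern is to bulk load the whole file into a single staging table and then split it into the destination tables with plain INSERT ... SELECT statements; a sketch, where every table, column and path name is a placeholder:

BULK INSERT dbo.Staging_FullFile
FROM 'C:\load\fullfile.txt'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

INSERT INTO dbo.DestinationA (Col1, Col2)
SELECT Col1, Col2 FROM dbo.Staging_FullFile;

INSERT INTO dbo.DestinationB (Col3, Col4)
SELECT Col3, Col4 FROM dbo.Staging_FullFile;

For skipping columns entirely during the load itself, BULK INSERT can also be pointed at a format file that maps only the wanted fields.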
Why is my SELECT ... INTO not working in my SQL Server 2008 R2? I have a query I am running against an existing table, trying to put the data into a brand new table, but it keeps giving me an error:
select mCid, caucasian, aa, api, aian, mr, his, max(val)
into memberrand
from MEMBERV2_RAND
cross apply (
    select AA union all
    select API union all
    select AIAN union all
    select MR union all
    select HIS union all
    select CAUCASIAN
) v(val)
group by mCid, AA, API, AIAN, MR, HIS, caucasian;
The error is: Msg 1038, Level 15, State 5, Line 1: An object or column name is missing or empty. For SELECT INTO statements, verify each column has a name. For other statements, look for empty alias names. Aliases defined as "" or [] are not allowed. Change the alias to a valid name. Do I have to actually create this table with no values first and then run the query? I was hoping this was sort of a make-table query.
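For what it's worth, the message is pointing at the unnamed max(val) column: SELECT ... INTO requires every column in the select list to have a name, so no separate CREATE TABLE is needed once an alias is added. The same query with only that change:

select mCid, caucasian, aa, api, aian, mr, his, max(val) as max_val
into memberrand
from MEMBERV2_RAND
cross apply (
    select AA union all
    select API union all
    select AIAN union all
    select MR union all
    select HIS union all
    select CAUCASIAN
) v(val)
group by mCid, AA, API, AIAN, MR, HIS, caucasian;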
I saved the result into a csv file and then truncated the table. Now, I am trying to bulk insert the data into the table. So I used:
bulk insert rdb.dbo.scd_event_tab
from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
    codepage = 'RAW',
    datafiletype = 'native',
    fieldterminator = ' ',
    keepidentity,
    keepnulls
);
go
However, I get this error:
Msg 4867, Level 16, State 1, Line 1
Bulk load data conversion error (overflow) for row 1, column 1 (JOB_ID).
Msg 4866, Level 16, State 5, Line 1
The bulk load failed. The column is too long in the data file for row 1, column 3. Verify that the field terminator and row terminator are specified correctly.
Msg 7399, Level 16, State 1, Line 1
The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.
Msg 7330, Level 16, State 2, Line 1
Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".
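A likely culprit is datafiletype = 'native', which is meant for binary files exported by bcp in native format, not for a text CSV. A sketch of the statement for a plain character file, assuming the fields are comma-separated and rows end with a newline (adjust the terminators to the actual file):

bulk insert rdb.dbo.scd_event_tab
from 'C:\users\sluintel.ctr\desktop\eventtab.csv'
with
(
    datafiletype = 'char',
    fieldterminator = ',',
    rowterminator = '\n',
    keepidentity,
    keepnulls
);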
I am experimenting with using CDC to track user changes in our application database. So far I've done the following:
-- ENABLE CDC ON DV_WRP_TEST
USE dv_wrp_test
GO
EXEC sys.sp_cdc_enable_db
GO
-- ENABLE CDC TRACKING ON THE AVA TABLE IN DV_WRP_TEST
USE dv_wrp_test
[Code] ....
The results shown above are what I expect to see. My problem occurs when I use our application to update the same column in the same table. The vb.net application passes a Table Valued Parameter to a stored procedure which updates the table. Below is the creation script for the stored proc:
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

if exists (select * from sysobjects where id = object_id('dbo.spdv_AVAUpdate') and sysstat & 0xf = 4)
    drop procedure dbo.spdv_AVAUpdate
[Code] ....
When I look at the results of CDC, instead of operations 3 and 4, I see 1 (DELETE) and 2 (INSERT) for the change that was initiated from the stored procedure:
-- GET CDC RESULTS FOR CHANGES TO AVA TABLE
USE dv_wrp_test
GO
SELECT *
FROM cdc.dbo_AVA_CT
GO
--RESULTS SHOW OPERATION 1 (DELETE) AND 2 (INSERT) INSTEAD OF 3 AND 4
--__$start_lsn            __$end_lsn  __$seqval               __$operation  __$update_mask  AvaKey  AvaDesc  AvaArrKey  AvaSAPAppellationID
--0x0031E84F000000740008  NULL        0x0031E84F000000740002  3             0x02            119     Test2    6          NULL
--0x0031E84F000000740008  NULL        0x0031E84F000000740002  4             0x02            119     Test3    6          NULL
--0x0031E84F00000098000A  NULL        0x0031E84F000000980003  1             0x0F            119     Test3    6          NULL
--0x0031E84F00000098000A  NULL        0x0031E84F000000980004  2             0x0F            119     Test4    6          NULL
Why might this be happening, and what can be done to correct it? Also, is there any way to get the user id associated with the CDC change?
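For inspecting what CDC actually recorded, the generated table-valued function for the capture instance can be queried directly; a minimal sketch, assuming the capture instance is named dbo_AVA to match cdc.dbo_AVA_CT above:

DECLARE @from_lsn binary(10) = sys.fn_cdc_get_min_lsn('dbo_AVA');
DECLARE @to_lsn   binary(10) = sys.fn_cdc_get_max_lsn();

SELECT __$operation, __$update_mask, AvaKey, AvaDesc
FROM cdc.fn_cdc_get_all_changes_dbo_AVA(@from_lsn, @to_lsn, N'all update old')
ORDER BY __$seqval;

As far as I know, CDC does not capture the login that made the change, so the user id usually has to be recorded separately, for example in an audited column populated by the stored procedure.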
I'm creating a SQL stored procedure. Inside this proc it returns some information about the user, i.e. location, logged in, last logged in, etc. I need to join this onto the photos table and return the photo which has been set as the profile picture; if one hasn't been set, then return the first top 1, if that makes sense?
The user has the option to upload photos, so there might be no photos for a particular user, which I believe I can fix by using a left join.
My photos table is constructed as follows:
CREATE TABLE [User].[User_Photos](
    [Id] [bigint] IDENTITY(1,1) NOT NULL,
    [UserId] [bigint] NOT NULL,
    [PhotoId] [varchar](100) NOT NULL,
    [IsProfilePic] [bit] NULL,
[Code] ....
Currently, as it stands, the proc runs but it doesn't return a particular user because they haven't uploaded a photo, so I need to somehow tweak the above to return null if a photo isn't present, which is where I'm stuck.
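One pattern that covers both cases is OUTER APPLY with TOP 1, ordering profile pictures first so PhotoId simply comes back NULL for users with no photos; a rough sketch, where the Users table and its UserId column are assumptions and the User_Photos columns come from the DDL above:

SELECT u.UserId,
       p.PhotoId   -- NULL when the user has no photos at all
FROM [dbo].[Users] AS u
OUTER APPLY (
    SELECT TOP (1) up.PhotoId
    FROM [User].[User_Photos] AS up
    WHERE up.UserId = u.UserId
    ORDER BY CASE WHEN up.IsProfilePic = 1 THEN 0 ELSE 1 END, up.Id
) AS p;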
use mysql1
declare @bp xml
select @bp = xml

;WITH XMLNAMESPACES('http://schemas.openehr.org/v1' as bp,
                    'http://www.w3.org/2001/XMLSchema-instance' as xsi,
                    'OBSERVATION' as type)
select * from (
    select m.c.value('(./bp:data/bp:items[1]/bp:value[1]/bp:magnitude)[1]', 'int') as systolisch
    from BloodpressureMitSchema
    cross apply @bp.nodes('/bp:content/bp:data/bp:events') as m(c)
) m
But with this "cross apply" I can only query all the values in one xml and repeat them. Is there something wrong at "declear"
I've looked at several different methods for removing leading zeros from a column, but I need to remove trailing data from a VARCHAR column. For some reason, the old database saved the time alongside the date in my client's app.
For example:
The old database format is "2015-07-28 00:00:00".
I need the data in this column in the new database to only be the date "2015-07-28"; there are a lot of rows with this issue.
Is there a query I can run to remove the 00:00:00 from all of the rows? Some of the fields actually have a time in there, like 2015-07-28 12:15:35; with those, I don't think it's going to be easy, but if I could at least remove the 00:00:00 from all the rows that have it, that would be a good start.
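Since the column is VARCHAR and the date part is always the first 10 characters, one rough approach is simply to keep those 10 characters; a sketch, with the table and column names as placeholders:

-- Strip only a trailing " 00:00:00":
UPDATE dbo.MyTable
SET MyDateColumn = LEFT(MyDateColumn, 10)
WHERE MyDateColumn LIKE '____-__-__ 00:00:00';

-- Or keep just the date part on every row, including ones with a real time:
UPDATE dbo.MyTable
SET MyDateColumn = LEFT(MyDateColumn, 10)
WHERE LEN(MyDateColumn) > 10;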
Every Sunday, new data will be loaded from the temp table to the main table. I have to make sure that duplicates do not get loaded from the temp table to the main table.
For example, if last Sunday a record got loaded from temp to main, and this Sunday the same record is present again, then it is a duplicate.
A duplicate is decided on the scenario below:
select 'CodeChanges: ', count(*)
from CodeChanges a, CodeChanges_Temp b
where a.AccountNumber = b.AccountNumber
  and a.HexaNumber = b.HexaNumber
  and a.HexaEffDate = b.HexaEffDate
  and a.HexaId = b.HexaId and
[Code] ...
Yesterday (Sunday), data from temp got loaded onto the main table, but with duplicates.
There is a log which just displays the number of duplicates.
Yesterday the log displayed 8 duplicates found. I need to find the 8 duplicates which got loaded yesterday and delete them from the main table.
There is a column in both tables which is 'creation date and time'. Every Sunday when the load happens, this column will have that day's date.
Now I need to find out which duplicates got loaded this Sunday.
The total rows in the temp table: 363. Number of duplicates present: 8.
I used the query below to find the duplicates, but it is returning all 363 rows from the main table instead of the 8 duplicates.
Select 'CodeChanges: ', *
from CodeChanges a
where exists (
    Select 1
    from CodeChanges_Temp b
    where a.HexaNumber = b.HexaNumber
      and a.HexaEffDate = b.HexaEffDate
      and
[Code] ...
I need to find the duplicate records which have a creation date/time of '2015-11-01 00:00:00.000' and where all the columns mentioned in the query above match.
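A rough sketch of adding the load-date filter so that only yesterday's duplicates come back; the creation date column name is an assumption, and the remaining comparison columns from the elided code would need to be added:

select 'CodeChanges: ', a.*
from CodeChanges a
where a.CreationDateTime = '2015-11-01 00:00:00.000'
  and exists (
        select 1
        from CodeChanges_Temp b
        where a.AccountNumber = b.AccountNumber
          and a.HexaNumber = b.HexaNumber
          and a.HexaEffDate = b.HexaEffDate
          and a.HexaId = b.HexaId
          -- ...plus the rest of the columns from the original comparison
      );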
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
SET ANSI_PADDING ON
GO
CREATE TABLE [dbo].[PaymentsLog](
[Code] ....
Is there a way to look at the DatePeriod table and use the StartDate and EndDate as the periods to be used in the select statement, and then cursor through each date between these two dates and insert the data into the PaymentsLog table?
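Rather than an explicit cursor, one option is to expand each StartDate/EndDate pair into individual dates with a recursive CTE and insert from that set; a sketch, where the DatePeriod column names come from the post and the PaymentsLog column is a placeholder:

;WITH AllDates AS (
    SELECT dp.StartDate AS TheDate, dp.EndDate
    FROM dbo.DatePeriod AS dp
    UNION ALL
    SELECT DATEADD(day, 1, d.TheDate), d.EndDate
    FROM AllDates AS d
    WHERE d.TheDate < d.EndDate
)
INSERT INTO dbo.PaymentsLog (PaymentDate)   -- placeholder column
SELECT TheDate
FROM AllDates
OPTION (MAXRECURSION 0);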
I am writing a query to return some production data. Basically I need to insert either 1 or 2 rows into a table variable, based on a decision as to whether the production part makes 1 or 2 items (the raw data does not allow for this; it comes from a lookup in my database).
I can retrieve all the source data I need easily, but when I come to insert it into the table variable I need to insert 1 record if it's a single part or 2 records if it's a twin part. I know I could use a cursor, but I'm sure there has to be an easier way!
Below is the code I have at the moment:
declare @startdate as datetime
declare @enddate as datetime
declare @Line as Integer
DECLARE @count INT

set @startdate = '2015-01-01'
set @enddate = '2015-01-31'
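One cursor-free pattern is to join the source rows to a tiny tally of (1, 2) and keep the second row only for twin parts; a sketch, where SourceData, PartId and IsTwinPart are placeholders for the actual lookup:

declare @Parts table (PartId int, ItemNo int);

insert into @Parts (PartId, ItemNo)
select s.PartId, n.ItemNo
from SourceData as s                          -- placeholder for the real source query
cross apply (values (1), (2)) as n(ItemNo)
where n.ItemNo = 1
   or (n.ItemNo = 2 and s.IsTwinPart = 1);    -- second row only when the part makes 2 items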
I have a scenario where I need to add a blank column to a table that is published (replicated). This table contains over 100 million records. What is the best way to add the column? In the past, when I had to make an update, it broke replication because the update would take forever while jobs continuously update the table, so replication couldn't catch up.
If I alter a table and add a column, would this column automatically get picked up in replication?
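For what it's worth, adding a nullable column with no default is a metadata-only change, so the ALTER itself should be quick even on 100+ million rows, and with the publication's default @replicate_ddl = 1 setting the schema change should be propagated to Subscribers. A minimal sketch, with placeholder names:

ALTER TABLE dbo.BigPublishedTable
    ADD NewColumn int NULL;   -- nullable, no default: no data pages are touched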
I inherited a system which has an index on a set of columns which allows more than 900 bytes of data in the key. We know one of the fields can be shortened to shrink the potential key size below 900 bytes.
The problem is the table is about 120m rows, and the index currently on that column is used for seeks about 2.5m times a day.
At its simplest, I want to drop the existing index, alter the column to shrink the varchar size, and then rebuild the index on the newly shortened column.
On a smaller, less used table, I might just do this outside of business hours and call it a day, but I'm concerned that this will take a long time and block a lot of operations.
1) IIRC, shrinking a column, unlike widening it, is much more expensive, even if there are no values which would actually end up truncated. Is this right?
2) I did a few tests on some other smaller (2+ million row) tables and was still able to select data out of the table. I don't think this covered all the read scenarios, but are there known scenarios which would simply not work during an index build?
3) I haven't yet tried DML operations against the table while it's doing either the column update or an index build. What scenarios would or would not be blocked?
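For reference, a sketch of the sequence under discussion, using an online index build (Enterprise Edition) so the table stays readable and writable while the index is recreated; names and sizes are placeholders, and the ALTER COLUMN step still needs a brief schema-modification lock:

DROP INDEX IX_BigTable_WideColumn ON dbo.BigTable;

ALTER TABLE dbo.BigTable
    ALTER COLUMN WideColumn varchar(400) NOT NULL;   -- match the column's existing NULL/NOT NULL setting

CREATE INDEX IX_BigTable_WideColumn
    ON dbo.BigTable (WideColumn)
    WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);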
I have several databases to deal with, all with 250+ tables. The databases are not identical and do not conform to a specific naming convention for table names. Most, but not all, tables have a column called "LastUpdated" containing a date/time (obviously). I'd like to be able to find all rows within a whole database (table by table) where the date/time is greater than a specified date/time.
I'm looking for a reliable query that will return all the rows in each of the tables, without me having to write hundreds of individual scripts like "SELECT * FROM [dbo].[xyz] WHERE LastUpdated > '2015-01-01 09:00:00:000'", or having to look through each table first to determine which of them has the LastUpdated field.
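A sketch of one way to do this with metadata plus dynamic SQL: build a SELECT for every table that actually has a LastUpdated column and run the lot (the cutoff value is a placeholder):

DECLARE @cutoff datetime = '2015-01-01 09:00:00.000';
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql +
    N'SELECT ''' + s.name + N'.' + t.name + N''' AS TableName, * ' +
    N'FROM ' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) +
    N' WHERE LastUpdated > @cutoff;' + CHAR(13)
FROM sys.tables AS t
JOIN sys.schemas AS s ON s.schema_id = t.schema_id
JOIN sys.columns AS c ON c.object_id = t.object_id
WHERE c.name = 'LastUpdated';

EXEC sys.sp_executesql @sql, N'@cutoff datetime', @cutoff = @cutoff;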
When assigning permissions to an authenticated user to connect to a server database, if I want the user to be able to insert / update / delete data on db objects, specifically tables, what permission should be assigned to that user?
My thoughts were Insert / Update / Delete; however, someone suggested that the Execute permission would do this...
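For direct DML against tables it is the table-level (or schema-level) INSERT / UPDATE / DELETE permissions that matter; EXECUTE only covers running stored procedures. A minimal sketch, with placeholder names:

GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::dbo TO [MyAppUser];

-- Or per table:
GRANT INSERT, UPDATE, DELETE ON dbo.Orders TO [MyAppUser];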
I have two different SQL 2008 servers, and I don't have permission to create a linked server on either of them. I created a trigger on server1.table1 to insert the same record into the remote server's server2.table1 using OPENROWSET.
I created a stored procedure to insert this record, and I have no issue when I execute the stored procedure directly; it inserts the record into the remote server.
The problem is when I call the stored procedure from the trigger; I get an error message.
Stored Procedure:

USE [DB1]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
[Code] ....
When I try to insert a new description value into the table, I get the following error message:
No row was updated. The data in row 1 was not committed. Error source: .Net SqlClient Data Provider. Error Message: the operation could not be performed because OLE DB provider "SQLNCLI10" for linked server "(null)" returned message "The partner transaction manager has disabled its support for remote/network transactions.".
Correct the errors and retry, or press ESC to cancel the change(s).
I have these two tables, Log and CategoryLog. I need to archive records older than 13 months from these two tables into two separate archive tables and then delete the archived records from the Log and CategoryLog tables. The problem is that only the Log table has a date column; CategoryLog does not have any date column, but the two tables are connected by a column (LogID). How do I archive the data and then delete the archived data from both tables?
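A sketch of one approach: pick the qualifying LogIDs once, copy both tables' rows into the archive tables, then delete children before parents. The archive table names and the Log date column (LogDate here) are assumptions:

DECLARE @cutoff datetime = DATEADD(month, -13, GETDATE());

SELECT LogID
INTO #ToArchive
FROM dbo.Log
WHERE LogDate < @cutoff;

INSERT INTO dbo.Log_Archive
SELECT l.* FROM dbo.Log AS l JOIN #ToArchive AS a ON a.LogID = l.LogID;

INSERT INTO dbo.CategoryLog_Archive
SELECT c.* FROM dbo.CategoryLog AS c JOIN #ToArchive AS a ON a.LogID = c.LogID;

DELETE c FROM dbo.CategoryLog AS c JOIN #ToArchive AS a ON a.LogID = c.LogID;
DELETE l FROM dbo.Log AS l JOIN #ToArchive AS a ON a.LogID = l.LogID;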
A common partitioning scenario is when the partition column has the same value for every record in the partition, as opposed to a range of values. Am I the only person who wonders why there isn't an option to automatically partition a table based on the unique values of the partition column? Instead of defining a partition function with constants, you ought to be able to just give it the column and be done. This would be particularly valuable for tables partitioned on a weekly or monthly date; when new data is added it could simply create a new partition if one doesn't already exist.
Hi, I am having a problem bulk-loading a SQL Server table that has an identity column from a DataTable (which has no identity column) using SqlBulkCopy. I tried several approaches, but it does not show any error, nor is the table getting updated. But the identity value seems to be getting increased every time. Thanks. Varun
I created an inventory table with a few columns, say Servername, version, patching details, etc.
I want tracking on the table.
Let's say people are asked to modify the base table, and I want a complete capture of the details modified and the session of the user (system_user) who is actually modifying the details.
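A minimal sketch of a trigger-based audit that records the modifying user via SYSTEM_USER; the audit table, trigger name and the two sample columns are assumptions based on the description above:

CREATE TABLE dbo.ServerInventory_Audit (
    AuditId    int IDENTITY(1,1) PRIMARY KEY,
    Servername varchar(128),
    [version]  varchar(50),
    ChangeType varchar(30),
    ChangedBy  sysname  DEFAULT (SYSTEM_USER),
    ChangedAt  datetime DEFAULT (GETDATE())
);
GO
CREATE TRIGGER trg_ServerInventory_Audit
ON dbo.ServerInventory
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- rows as they look after the change (covers INSERT and the new side of UPDATE)
    INSERT INTO dbo.ServerInventory_Audit (Servername, [version], ChangeType, ChangedBy)
    SELECT Servername, [version], 'INSERT/UPDATE (new)', SYSTEM_USER FROM inserted;
    -- rows as they looked before the change (covers DELETE and the old side of UPDATE)
    INSERT INTO dbo.ServerInventory_Audit (Servername, [version], ChangeType, ChangedBy)
    SELECT Servername, [version], 'UPDATE/DELETE (old)', SYSTEM_USER FROM deleted;
END;

Change Data Capture is an alternative for capturing the data changes themselves, but it does not record the user.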
I am trying to insert bulk data into a main table from a staging table in SQL Server 2012. If any error comes up, the whole activity is rolled back. I don't want that to happen. I want to know the records where the problem persists, and the rest have to be inserted.
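One rough approach, assuming the failures are conversion errors on particular columns (all table, column and type names here are placeholders): insert only the rows that pass a TRY_CONVERT check, and review the rest separately. TRY_CONVERT is available from SQL Server 2012.

INSERT INTO dbo.MainTable (Id, Amount, CreatedDate)
SELECT s.Id,
       TRY_CONVERT(decimal(18, 2), s.Amount),
       TRY_CONVERT(datetime, s.CreatedDate)
FROM dbo.StagingTable AS s
WHERE TRY_CONVERT(decimal(18, 2), s.Amount) IS NOT NULL
  AND TRY_CONVERT(datetime, s.CreatedDate) IS NOT NULL;

-- The problem rows stay behind in staging and can be inspected:
SELECT *
FROM dbo.StagingTable AS s
WHERE TRY_CONVERT(decimal(18, 2), s.Amount) IS NULL
   OR TRY_CONVERT(datetime, s.CreatedDate) IS NULL;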
I have a scenario where I have to update a table with a date when there are new records in another table.
For example:
I load an ODS table with the data from a file in SSIS. The file has CustomerID and other columns.
Now, when there is a new record for any CustomerID in ODS, update the dbo table with the most recent record for every CustomerID (i.e. update the date column in dbo for that CustomerID). Also include an identifier that relates back to the ODS table. How do I do this?
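A sketch of the update side using MERGE keyed on CustomerID; everything except CustomerID is a placeholder, with LoadDate and OdsID standing in for "the date column" and the identifier that relates back to the ODS table:

MERGE dbo.Customer AS tgt
USING (
    SELECT CustomerID,
           MAX(LoadDate) AS LastLoadDate,
           MAX(OdsID)    AS LatestOdsID
    FROM ods.Customer
    GROUP BY CustomerID
) AS src
    ON tgt.CustomerID = src.CustomerID
WHEN MATCHED AND src.LastLoadDate > ISNULL(tgt.LastUpdatedDate, '19000101') THEN
    UPDATE SET tgt.LastUpdatedDate = src.LastLoadDate,
               tgt.OdsID = src.LatestOdsID;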
I am trying to replicate table A into table B within the same server and the same database.
I am getting an error message when trying to configure the subscription: You have selected the Publisher as a Subscriber and entered a subscription database that is the same as the publishing database. Select another subscription database.