How would I copy entire rows within a table and change one value?
insert into user
(user_id,account_id,user_type_cd,name,e_mail_addr,login_failure_cnt,admin_user,primary_user)
select * from pnet_user where account_id='DDD111'
but now I want to change DDD111 to DDD222 on the inserted entries.
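A minimal sketch of one way to do it: list the columns instead of using SELECT *, and substitute the literal for account_id. (The original mixes the table names user and pnet_user; the sketch copies within pnet_user, as the question title suggests. If user_id is an identity column or must stay unique, it needs similar treatment.)

insert into pnet_user
(user_id, account_id, user_type_cd, name, e_mail_addr, login_failure_cnt, admin_user, primary_user)
select user_id, 'DDD222', user_type_cd, name, e_mail_addr, login_failure_cnt, admin_user, primary_user
from pnet_user
where account_id = 'DDD111'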
I was wondering if there is an easy way to do this. I have a table Customers, and when a row is changed or deleted, I want to copy the original row to a backup table, so I have the whole history.
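A trigger is the usual way to capture this. A minimal sketch, assuming a history table Customers_History with the same columns as Customers plus an ArchivedAt column (all names here are assumptions):

CREATE TRIGGER trg_Customers_History
ON dbo.Customers
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- the "deleted" pseudo-table holds the pre-change image of the affected rows
    INSERT INTO dbo.Customers_History (CustomerID, CustomerName, ArchivedAt)
    SELECT CustomerID, CustomerName, GETDATE()
    FROM deleted;
END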
Can I make a copy of my development database DEV on the same SQL Server machine, rename it to TEST, and have the stored procedures updated automatically, so that statements like

UPDATE [DEV].[dbo].[Company] SET [company_name] = @company_name

become

UPDATE [TEST].[dbo].[Company] SET [company_name] = @company_name

without having to edit each individual stored procedure?
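There is no built-in rewrite for this, but a query against sys.sql_modules can at least list every procedure that hard-codes the [DEV] prefix; a sketch, run in the copied database:

SELECT OBJECT_SCHEMA_NAME(m.object_id) AS schema_name,
       OBJECT_NAME(m.object_id) AS proc_name
FROM sys.sql_modules AS m
JOIN sys.procedures AS p ON p.object_id = m.object_id
WHERE m.definition LIKE '%[[]DEV].%';  -- [[] escapes the literal bracket in LIKE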
Hi to all. I have a strange question, but I need to work out the situation that I'll describe here: I need to replicate one row in my table into the same table, but with small changes in some fields. I mean: copy the row and insert it again into the same table that I copied it from, with changes in some field values. Do you know any way I can do this (temporary table, stored procedure and so on), BUT in one shot: one action for the select, insert and update operations?
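A single INSERT ... SELECT already allows this, because the changed values can be supplied inline in the SELECT list; a minimal sketch with assumed table and column names:

INSERT INTO dbo.MyTable (ColA, ColB, ColC)
SELECT ColA, 'changed value', ColC  -- ColB gets its new value inline, no separate UPDATE
FROM dbo.MyTable
WHERE MyKey = 42;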
I have data in a table. I want the values in the rows to be placed in columns, and the columns in rows. For example, there is one table:

name  id  dept
a     1   x
b     2   y
c     3   z

I want the resulting table to look like this:

a  b  c
1  2  3
x  y  z
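A transpose like this can be done by unpivoting the columns and then pivoting on name; a sketch that assumes the table is dbo.t and hard-codes the name values a, b, c (unknown names would need dynamic SQL):

WITH src AS (
    -- cast id so that both unpivoted columns share one type
    SELECT name, CAST(id AS varchar(10)) AS id, dept
    FROM dbo.t
), unp AS (
    SELECT name, attribute, value
    FROM src
    UNPIVOT (value FOR attribute IN (id, dept)) AS u
)
SELECT attribute, [a], [b], [c]
FROM unp
PIVOT (MAX(value) FOR name IN ([a], [b], [c])) AS p;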
I'm trying to create a performant script to copy records from a table in a source database, to an identical table in a destination database.
In SQL 2000, I used to create a little lookup which did a count using certain fields. If the record was missing, I executed an INSERT query, otherwise an UPDATE query. The result was that the table on the destination side was always up to date. Duplicate rows were out of the question.
This was, if I'm not mistaken, a Data Transformation, using a bit of custom VBA code to govern the transfer. For each source row, the custom code was executed, and depending on its result, a different query was launched.
Now I'm trying to do the same using SSIS in SQL 2005. Is there a task which does this for me, or do I have to script again? In the latter case, which type of task would I use?
(I thought of the Script Task, but then I would need to set up quite a bit myself.)
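One alternative is to skip per-row logic entirely: land the source rows in a staging table with a Data Flow, then run a set-based upsert in an Execute SQL Task. A sketch with assumed table and column names (SQL Server 2005 has no MERGE, so it takes two statements):

-- update rows that already exist at the destination
UPDATE d
SET d.Col1 = s.Col1,
    d.Col2 = s.Col2
FROM dbo.Destination AS d
JOIN dbo.Staging AS s ON s.KeyCol = d.KeyCol;

-- insert rows that are missing
INSERT INTO dbo.Destination (KeyCol, Col1, Col2)
SELECT s.KeyCol, s.Col1, s.Col2
FROM dbo.Staging AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.Destination AS d WHERE d.KeyCol = s.KeyCol);

Inside the data flow itself, the usual no-script pattern is a Lookup transformation against the destination, with matched rows routed to an update branch and unmatched rows redirected to the insert destination.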
Hello, I have 2 tables, Table1 and Table2. I have copied all data from Table1 to Table2. However, Table1 is dynamic: it has new rows added and some old rows modified every day or every other day.

How can I continue to keep Table2 up to date without always having to copy everything from Table1? Basically, from now on I would only like to copy new or modified rows from Table1 to Table2, and skip rows that are already present and have not been modified. For rows that were removed from Table1, I would like to do nothing and continue to keep a copy of them in Table2.

Is using a DTS package the best way to automate this update of Table2, to make sure Table2 is always up to date with Table1? Thanks for any help or advice :-) Yas
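Whatever scheduler runs it (a DTS package works fine), the copy step itself can be two set-based statements. A sketch that assumes Table1 has a key column ID and a ModifiedDate column; without some change marker like that, modified rows cannot be detected cheaply. There is deliberately no DELETE step, so rows removed from Table1 survive in Table2:

-- refresh rows that changed in Table1
UPDATE t2
SET t2.Col1 = t1.Col1,
    t2.ModifiedDate = t1.ModifiedDate
FROM dbo.Table2 AS t2
JOIN dbo.Table1 AS t1 ON t1.ID = t2.ID
WHERE t1.ModifiedDate > t2.ModifiedDate;

-- add rows that are new in Table1
INSERT INTO dbo.Table2 (ID, Col1, ModifiedDate)
SELECT t1.ID, t1.Col1, t1.ModifiedDate
FROM dbo.Table1 AS t1
WHERE NOT EXISTS (SELECT 1 FROM dbo.Table2 AS t2 WHERE t2.ID = t1.ID);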
Copy out all data from a DB table into/across delimited text file(s), ensuring that each text file is no more than 3MB in size. I have created an SSIS solution which achieves this requirement... well, sort of achieves it. Here is what the current solution does in a nutshell (sparing the minute details), and the problems with it:
1) Created a function (script below) which finds the maximum row size in bytes in a given DB table and uses it to calculate how many rows can be copied out into a text file without exceeding the 3MB size limit.
For instance: a DB table had 788 rows in total, and for this particular table the function returned a value of 181 rows { select [dbo].[udf_GetRowPartitionNumber]('<TableName>') as #ofRowstoPartitionTableby --181 }, meaning that in order not to exceed the requirement of 3MB per text file, we had to copy all the data from the DB table across (create) 5 text files { Select CEILING(788.0/181) --5 }.
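The arithmetic behind such a function can be sketched directly; a hypothetical two-column table stands in for the real one, and 3MB is taken as 3 * 1024 * 1024 bytes (a real version should also budget for delimiter and line-break characters per row):

DECLARE @MaxRowBytes int, @RowsPerFile int;

-- widest actual row in the table, in bytes
SELECT @MaxRowBytes = MAX(ISNULL(DATALENGTH(col1), 0) + ISNULL(DATALENGTH(col2), 0))
FROM dbo.MyTable;

SET @RowsPerFile = (3 * 1024 * 1024) / @MaxRowBytes;

SELECT @RowsPerFile AS RowsPerFile,
       CEILING(COUNT(*) * 1.0 / @RowsPerFile) AS FileCount
FROM dbo.MyTable;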
My question: does anyone know of a decent way (i.e. I do not want to loop to insert each row and check SCOPE_IDENTITY() or anything like that) to copy parent/child rows to their same respective tables, besides the method I have listed below (my manager does not really like the idea of the "PreviousID" field)? More details are listed below.
My table situation: I have a parent table and a child table. Both tables have an identity column as the primary key. The relationship is established by storing the parent table's primary key in a column of the child table. The identity column in each table is the only unique column in that table.
Sample data (made up and simplified for visualization purposes):

ParentTable - "Items"
ID, ItemCategory, Price
1, T-Shirt, $20
2, Blue Jeans, $50
3, T-Shirt, $40
My Need: I need to make a copy of the records (keeping the parent/child relationship intact) to the same table as the source records. For example, in my data example above, I may need to make a copy of all the "T-Shirt" items and their child records. As the parent records are copied, they will be assigned new keys since the primary key is an identity. Obviously, this new key needs to be used when creating the child records, but I need to somehow associate this new key to the new child records.
Possible solution: I know this can be achieved by adding another column to the parent table to store the "PreviousID" (INT NULL). Using this new field, when I want to copy the "T-Shirt" items, I would insert the new records and store the ID of the source records (i.e. the identity value of the row that was copied would be stored in this "PreviousID" field). Once the parent records have been copied, I could then insert the child records, joining on the "PreviousID" field to get the new ID to use when inserting the copies of the child records.
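If SQL Server 2008 or later is available, MERGE with an OUTPUT clause can capture the old-to-new ID mapping in one pass, with no extra column; unlike a plain INSERT ... SELECT, OUTPUT in a MERGE may reference source columns. A sketch using the sample Items table and an assumed child table ItemDetails (ItemID, Detail):

DECLARE @map TABLE (OldID int, NewID int);

MERGE dbo.Items AS tgt
USING (SELECT ID, ItemCategory, Price
       FROM dbo.Items
       WHERE ItemCategory = 'T-Shirt') AS src
ON 1 = 0  -- never matches, so every source row takes the insert branch
WHEN NOT MATCHED THEN
    INSERT (ItemCategory, Price) VALUES (src.ItemCategory, src.Price)
OUTPUT src.ID, inserted.ID INTO @map (OldID, NewID);

-- copy the child rows, re-pointed at the new parent IDs
INSERT INTO dbo.ItemDetails (ItemID, Detail)
SELECT m.NewID, c.Detail
FROM dbo.ItemDetails AS c
JOIN @map AS m ON m.OldID = c.ItemID;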
I have three columns in my table with the following datatypes:
Date - DateTime
Height - Decimal
Change_From_Last_Height - Decimal
I am using "SQL Server 2005 Express Edition". I'm fairly new to SQL and would greatly appreciate any help or advice I can get.
The "Date" column increases by one day in every new row, and I then enter the new height of the plant. What I want to know is how I can get SQL Server Express to automatically enter the difference between the current row's height and that of the previous row.
Is it possible to automate the entry in the Change_From_Last_Height column in SQL?
Put another way, I know how to find the difference between two values in the same row but in different columns; how do I calculate the difference between values in adjacent rows (i.e. rows next to each other)?
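SQL Server 2005 has no LAG function, but ROW_NUMBER plus a self-join reaches the previous row. A sketch that computes the change on the fly (the table name PlantGrowth is an assumption):

WITH ordered AS (
    SELECT [Date], Height,
           ROW_NUMBER() OVER (ORDER BY [Date]) AS rn
    FROM dbo.PlantGrowth
)
SELECT cur.[Date],
       cur.Height,
       cur.Height - prev.Height AS Change_From_Last_Height  -- NULL for the first row
FROM ordered AS cur
LEFT JOIN ordered AS prev ON prev.rn = cur.rn - 1;

Wrapping this in a view gives the Change_From_Last_Height column without ever entering it by hand.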
This is the error I get with a simple Data Reader Source (SELECT * FROM A) to SQL Server Destination copy between identical tables, from a linked server to a local one:
We just changed over our phone system and the new system uses all of the old extensions except it adds a 1 to the beginning of them. I know that there is a relatively simple way to update my phone extension column to show this, but I can't for the life of me remember what I need to do. Any help?
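If the extensions are stored in a character column, prefixing them is a one-line UPDATE; a sketch with assumed table and column names (run it only once, or every extension gains an extra leading 1):

UPDATE dbo.Employees
SET PhoneExtension = '1' + PhoneExtension;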
I would like to randomly change the order of the rows in my table. Is there any way to do that?

I also have a question about random generators. Is it possible to get a repeatable sequence of random numbers between 1 and 10 in T-SQL (for example 2, 7, 6, 5, 8, 9, 3, 2, ..., each state with the same probability)? I need the same sequence every time I run my procedure. I know this is just a pseudo-random generator. I tried to use the function rand([seed]) and change the seed value, but I got some strange results... (floor(rand([seed])*100))
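A sketch of both pieces. Random order comes from sorting on NEWID(); a repeatable sequence comes from seeding RAND once and then calling it with no argument, since unseeded calls continue deterministically from the seed (re-seeding on every call is what produces the strange results):

-- random row order, different on every run
SELECT * FROM dbo.MyTable ORDER BY NEWID();

-- repeatable pseudo-random sequence of values 1..10
DECLARE @i int;
SELECT RAND(12345);  -- seed once; this first value can be discarded
SET @i = 1;
WHILE @i <= 8
BEGIN
    SELECT 1 + FLOOR(RAND() * 10) AS n;  -- uniform over 1..10
    SET @i = @i + 1;
END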
I am copying data (~600,000 rows) using SqlBulkCopy. The operation fails after copying 390,000 rows with the following exception (this happens every time I run it, after the same number of rows copied). Is there anything I need to do differently? The server has 35GB of free space and 1GB of RAM.
System.Data.SqlClient.SqlException: Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.
   at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection)
   at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParserStateObject.ReadSniError(TdsParserStateObject stateObj, UInt32 error)
   at System.Data.SqlClient.TdsParserStateObject.ReadSni(DbAsyncResult asyncResult, TdsParserStateObject stateObj)
   at System.Data.SqlClient.TdsParserStateObject.ReadPacket(Int32 bytesExpected)
   at System.Data.SqlClient.TdsParserStateObject.ReadBuffer()
   at System.Data.SqlClient.TdsParserStateObject.ReadByte()
   at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj)
   at System.Data.SqlClient.SqlBulkCopy.WriteToServerInternal()
   at System.Data.SqlClient.SqlBulkCopy.WriteRowSourceToServer(Int32 columnCount)
   at System.Data.SqlClient.SqlBulkCopy.WriteToServer(IDataReader reader)
   at PSMigrate.Program.MigrateWorkItesLatestData() in E:\dd_tfs_beta3\vset\SCM\workitemtracking\Tools\PSMigrate\Program.cs:line 522
   at PSMigrate.Program.MigrateData() in E:\dd_tfs_beta3\vset\SCM\workitemtracking\Tools\PSMigrate\Program.cs:line 106
   at PSMigrate.Program.Main(String[] args) in E:\dd_tfs_beta3\vset\SCM\workitemtracking\Tools\PSMigrate\Program.cs:line 86
Here is the relevant part of the code:
using (SqlCommand cmd = new SqlCommand())
{
    cmd.CommandText = sql;
    cmd.CommandType = CommandType.Text;
    cmd.Connection = conn;
    conn.Open();

    using (SqlDataReader dataReader = cmd.ExecuteReader())
    // write the data to the server
    using (SqlBulkCopy sqlBulkCopy = new SqlBulkCopy(m_destConnString))
    {
        sqlBulkCopy.DestinationTableName = m_destTableName; // assumed field, elided in the original
        sqlBulkCopy.BulkCopyTimeout = 0;  // default is 30 seconds, the likely cause of the timeout; 0 disables it
        sqlBulkCopy.BatchSize = 5000;     // commit in batches rather than one 600,000-row transaction

        // column mappings
        // event settings
        sqlBulkCopy.NotifyAfter = 5000;
        sqlBulkCopy.SqlRowsCopied += new SqlRowsCopiedEventHandler(RowsCopiedEventHandler);

        sqlBulkCopy.WriteToServer(dataReader);
    }
}
I am interested in changing the way that data is displayed in my result set. Essentially I want to display a selection of rows (1 to n) as columns; the following diagram explains my intentions. Perhaps one of the greatest challenges here is the fact that I do not have a concrete number of rows (or BIN numbers): each stock item could be stored in one or more BINs, which I will not know until running my query.
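When the column set is only known at run time, the usual pattern is building a PIVOT statement dynamically. A sketch against an assumed dbo.StockBins table (StockItem, BinNumber, Quantity):

DECLARE @cols nvarchar(max), @sql nvarchar(max);

-- build the column list from the BIN numbers actually present
SELECT @cols = STUFF((SELECT ',' + QUOTENAME(CAST(BinNumber AS nvarchar(10)))
                      FROM (SELECT DISTINCT BinNumber FROM dbo.StockBins) AS b
                      ORDER BY BinNumber
                      FOR XML PATH('')), 1, 1, '');

SET @sql = N'SELECT StockItem, ' + @cols + N'
FROM (SELECT StockItem, BinNumber, Quantity FROM dbo.StockBins) AS src
PIVOT (SUM(Quantity) FOR BinNumber IN (' + @cols + N')) AS p;';

EXEC sp_executesql @sql;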
Hi, I am using the Integration Services Slowly Changing Dimension transformation to move data from a SQL Server 2000 database table to a SQL Server 2005 table.

The problem is the package is not tracking changes: it spends a lot of time doing lookups (which I'm guessing is why it's so slow), but ends up creating new records even when there has not been a change.
I'm quite sure the business key is set up correctly (I'm using the PK from the source table).
The database I am transferring from has non-Unicode data types (i.e. varchar and char) and the destination database has Unicode data types (i.e. nvarchar).

Also, some of the fields in the DB are NULL. Does this have an effect (i.e. one NULL doesn't equal another NULL)? Or shouldn't that matter?
I am fairly new to Reporting Services and recently created a simple tabular report. There is one table which queries the database, and I would like to change the table so there are more than 25 rows of data per page. How do I do this? It would also be useful to know how to make the report start a new page only for each different item in a column. E.g., if I have various data for each date, I would like a new page only when the date changes, so that all data for a specific date is on one page.
The combination of the Student_Id, Subject_Id and Quarter columns is the primary key. One student can take one subject in a quarter. Now the new requirement is that a student can take multiple subjects in a quarter, so we need to add another table like the one below:

NEW table name: Student_Subject, with the columns: Student_Id, Subject_Id, Quarter

The combination of all three columns is the primary key.

After the new Student_Subject table is created, remove the Subject_Id column from the Student table.

When the user clicks a button after selecting multiple subjects and provides the col1 and col2 data, one row gets inserted into the Student table and multiple rows get inserted into the Student_Subject table.
Is there any other table design that satisfies one student can take multiple subjects in a quarter?
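For reference, a DDL sketch of the proposed shape under one plausible reading (the Student table keyed on Student_Id and Quarter after Subject_Id is removed; col1/col2 stand in for the unnamed columns; all types are assumptions):

CREATE TABLE dbo.Student (
    Student_Id int NOT NULL,
    Quarter    int NOT NULL,
    col1       varchar(50) NULL,
    col2       varchar(50) NULL,
    PRIMARY KEY (Student_Id, Quarter)
);

CREATE TABLE dbo.Student_Subject (
    Student_Id int NOT NULL,
    Subject_Id int NOT NULL,
    Quarter    int NOT NULL,
    PRIMARY KEY (Student_Id, Subject_Id, Quarter),
    FOREIGN KEY (Student_Id, Quarter) REFERENCES dbo.Student (Student_Id, Quarter)
);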
I have located a bug in the functions cdc.fn_cdc_get_net_changes_<capture_instance> generated when you enable cdc on a table. This bug can be triggered if 2 rows are created in the _CT table having the same values for the __$start_lsn, __$seqval and the table's key column(s). From research on the internet I have found such rows can be created by a "deferred update": a single update statement in which a column that is part of a unique constraint is updated.
In order to report the bug to Microsoft I need to create a complete series of steps to reproduce it. But even though the situation happens several times a day in our production environment, I have not yet been able to reproduce it in my test environment. I need a single update statement (plus maybe some steps in advance) that makes the log reader insert 2 rows into the _CT table, one with __$operation = 1 (delete) and another with __$operation = 2 (insert), as opposed to the single row with __$operation = 4 that it inserts for a normal update. Below is the script I have so far to create a fresh database, enable CDC, create a test table, insert some data and update this data.
I would have liked the last update statement to be handled as a "deferred update". However, in all of my tests the log reader just inserts a single row into the cdc.dbo_NETTEST_CT table. How do I reproduce the situation where I get the 2 rows with __$operation = 1 and 2 from a single update statement, instead of the single row with __$operation = 4?
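One hedged suggestion: deferred updates occur when applying the update in place would transiently violate a unique index, so a single statement that swaps two unique key values may force the delete/insert pair. A sketch against the NETTEST table named above (the column name k and its unique constraint are assumptions):

-- assumes rows with k = 1 and k = 2 exist and k is unique
UPDATE dbo.NETTEST
SET k = CASE k WHEN 1 THEN 2 ELSE 1 END
WHERE k IN (1, 2);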
I'm a web developer who writes Transact-SQL to make my web applications run properly; I'm not real strong in other areas of SQL. Let me explain our set-up and then I'll explain what I want to do:
We have an ecommerce web site and all sales are saved in a SQL Server 2008 R2 database at our hosting company. We also have a local Windows 2012 network that has SQL Server 2014 Express installed.
Here is what I want to do:
I want to copy sales rows from the SQL Server 2008 database at our hosting company and save them in the SQL Server 2014 Express database on our local Windows 2012 server. I'd like to automate this if possible, so that it happens each night. I know there is a way to schedule SQL jobs, but I've never actually done this. I would also need to know how to connect to our hosting company's DB as well as our local network DB.
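A sketch of the nightly pull, assuming a linked server named HOSTED has been created on the local instance (via sp_addlinkedserver, pointing at the hosting company's server) and that sales rows have an ever-increasing SaleID; all object names here are assumptions. Note that SQL Server Express has no SQL Server Agent, so the script is typically scheduled with Windows Task Scheduler running sqlcmd:

-- pull only the rows we have not copied yet
INSERT INTO dbo.Sales (SaleID, SaleDate, Amount)
SELECT s.SaleID, s.SaleDate, s.Amount
FROM HOSTED.ShopDB.dbo.Sales AS s
WHERE s.SaleID > (SELECT ISNULL(MAX(SaleID), 0) FROM dbo.Sales);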
PeopleID in the People table is the primary key and a foreign key in the PeopleCosts table. PeopleID is an autonumber.

The major fields in the People table are PeopleID | MajorVersion | SubVersion. I want to create a new copy of the data for an existing subversion (say, from subversion 1 to 2) in the same table. When the new data is copied, the PeopleIDs get new incremented values; how do I copy the related data in the other table (PeopleCosts) with the new set of PeopleIDs?
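This is the same old-to-new key mapping problem as the parent/child copy earlier in this section, and the same MERGE ... OUTPUT technique applies on SQL Server 2008 or later; a sketch (the PeopleCosts columns are assumptions):

DECLARE @map TABLE (OldPeopleID int, NewPeopleID int);

MERGE dbo.People AS tgt
USING (SELECT PeopleID, MajorVersion, SubVersion
       FROM dbo.People
       WHERE SubVersion = 1) AS src
ON 1 = 0  -- force the insert branch for every source row
WHEN NOT MATCHED THEN
    INSERT (MajorVersion, SubVersion) VALUES (src.MajorVersion, 2)
OUTPUT src.PeopleID, inserted.PeopleID INTO @map (OldPeopleID, NewPeopleID);

INSERT INTO dbo.PeopleCosts (PeopleID, Cost)
SELECT m.NewPeopleID, pc.Cost
FROM dbo.PeopleCosts AS pc
JOIN @map AS m ON m.OldPeopleID = pc.PeopleID;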
I set up DB mirroring between a principal (SQL1) and a mirror (SQL2); no witness. I have a problem when I issue the command:
alter database DBmirrorTest Set Partner = N'TCP://SQL2.mycom.com:5022'; go
The error message is:
The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy of the database log.
I performed the steps below prior to the command. (Note that both servers' service accounts use the same domain account. The domain account I log in with to do the DB mirror setup is a member of the local admin group.)
1. backup database DBmirrorTest on SQL1
2. backup database log
3. copy db and log backup files to SQL2
4. restore db with norecovery
5. restore log with norecovery
6. create endpoints on both SQL1 and SQL2
CREATE ENDPOINT [Mirroring]
STATE=STARTED
AS TCP (LISTENER_PORT = 5022, LISTENER_IP = ALL)
FOR DATABASE_MIRRORING (ROLE = PARTNER)
7. enable mirror on mirror server SQL2
:connect SQL2
alter database DBmirrorTest
Set Partner = N'TCP://SQL1.mycom.com:5022';
go
8. Enable mirror on primary server SQL1
:connect SQL1
alter database DBmirrorTest
Set Partner = N'TCP://SQL2.mycom.com:5022';
go
This is where I got the error.
The remote copy of database "DBmirrorTest" has not been rolled forward to a point in time that is encompassed in the local copy of the database log.
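This error usually means the mirror is missing log records: either no log backup was restored on SQL2, or more log was generated on SQL1 (including by a later log backup) after the one that was restored. A hedged sketch of the sequence that must hold immediately before SET PARTNER (paths are placeholders):

-- on SQL1
BACKUP DATABASE DBmirrorTest TO DISK = N'C:\backup\DBmirrorTest.bak';
BACKUP LOG DBmirrorTest TO DISK = N'C:\backup\DBmirrorTest.trn';

-- on SQL2, after copying the files across
RESTORE DATABASE DBmirrorTest FROM DISK = N'C:\backup\DBmirrorTest.bak' WITH NORECOVERY;
RESTORE LOG DBmirrorTest FROM DISK = N'C:\backup\DBmirrorTest.trn' WITH NORECOVERY;
-- restore any log backups taken since then as well, all WITH NORECOVERY,
-- then run the ALTER DATABASE ... SET PARTNER commands without delay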
Hi! I did:

alter database mydb set single_user with rollback immediate;
exec sp_detach_db @dbname='mydb', @keepfulltextindexfile='true';

Then I tried to copy the files to a new location on other drives on the same server, but got: >>Cannot copy <myfile>: Access is denied. Make sure the disk is not full or write-protected and that the file is not currently in use.<<

I also tried renaming the file, without success. I also tried with the DB service stopped (not preferred), without success.
How do I find out which process is locking the files? Best regards
I need to use a BULK INSERT statement for copying a table with 200 million rows to another table on the same server. The table has no primary key or identity column. ... script for BULK INSERT ...
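Worth noting before scripting it: BULK INSERT loads from a data file, not from another table. For a same-server table-to-table copy, INSERT ... SELECT with a TABLOCK hint is the usual bulk path, since on a heap it can be minimally logged under the simple or bulk-logged recovery model (a sketch; table names are assumptions):

INSERT INTO dbo.TargetTable WITH (TABLOCK)
SELECT *
FROM dbo.SourceTable;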