I wasn't able to figure out the vocab to search for this on usenet.
I'm sure it's an easy solution, but I have no experience with SQL
Server:
Situation:
tbl_CompanyData
is a 1735 x 20 table with a CompanyID key in the first column
tbl_MergedCompanyData
is a 1735 x 20 table imported from Excel. We found it much easier to
enter data into an Excel file.
Problem:
Keep the CompanyID field in tbl_CompanyData, but replace all the other
19 columns with data from tbl_MergedCompanyData, which contains all the
tbl_CompanyData data PLUS new data that users filled in. At present, the
rows don't match up, but I suppose I could pre-sort the tables.
Preferably, any solution would be smart enough to find CompanyID N in
tbl_MergedCompanyData and replace the data in the fifth column of
tbl_CompanyData where CompanyID is N with data from the fifth column
in tbl_MergedCompanyData.
Any thoughts would be appreciated. Please let me know if I can
clarify my problem.
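A minimal sketch of one way to do this, assuming both tables share the CompanyID key and using Col2 through Col20 as placeholders for the real column names, is an UPDATE with a join so the rows never need to be pre-sorted:
UPDATE c
SET c.Col2 = m.Col2,
    c.Col3 = m.Col3
    -- ...repeat for the remaining columns up to Col20
FROM tbl_CompanyData AS c
INNER JOIN tbl_MergedCompanyData AS m
    ON m.CompanyID = c.CompanyID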
Download a file using the task. Go to the file in Windows Explorer and open it in Notepad. Copy the last line of text and paste in a few extra rows at the bottom, so if you had 100 rows, you'll now have 103 rows. Save the file. Go back to the task and make sure you have "overwrite destination" set to True. Execute the task. Go back to the file and look at the bottom of it. You'll see the same 3 extra rows you pasted in there.
That is not how it should work, if it really was meant to be released behaving like that.
Hi all, I was wondering how to do an ALTER command on a table, but without specifying column names, rather attempting to overwrite the table itself with the new fields specified. For instance, if I have Table_1 consisting of the following fields:
ID, FirstName, Surname
Then use the following ALTER command:
ALTER TABLE Table_1 ( ID Int, FirstName VarChar(50) )
This would then drop Surname from the table and leave only ID and FirstName inside it. Is this possible? I have been searching Google but can't seem to find what I am looking for.
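That exact syntax does not exist in T-SQL. A sketch of the closest alternative, assuming the goal is simply to remove the extra column rather than redefine the whole table, is:
ALTER TABLE Table_1 DROP COLUMN Surname
To truly overwrite the definition you would have to create a new table with the desired columns, copy the data across, drop the old table, and rename the new one, which is essentially what the SSMS table designer does behind the scenes.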
I created a plan for a backup. It first truncates the log, then backs up the data with overwrite mode, and then backs up the transaction log with overwrite mode. The process uses different backup devices for the data and log backups. The first 2 steps succeed (truncate and data backup), but in the last step the backup process doesn't overwrite the backup device. Why?
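A minimal sketch of forcing the overwrite on the log step in T-SQL, assuming MyDatabase and LogDevice are placeholders for the real database and backup device names, is to add WITH INIT:
BACKUP LOG MyDatabase
    TO LogDevice
    WITH INIT   -- INIT overwrites the backup sets already on the device; NOINIT (the default) appends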
Hi :) I have a website that uses SQL Express. On it I have a database that was working OK until I made a few modifications to it (added a few rows). I have uploaded the database (only the .mdf file) to the App_Data folder, but now I get this message: "One or more files do not match the primary file of the database. If you are attempting to attach a database, retry the operation with the correct files. If this is an existing database, the file may be corrupted and should be restored from a backup. Cannot open database "ArtWork" requested by the login. The login failed. Login failed for user 'NT AUTHORITY\NETWORK SERVICE'. Log file 'c:\xxxxxxxxxxxxxxxxxxx\App_Data\ArtWork_log.LDF' does not match the primary file. It may be from a different database or the log may have been rebuilt previously." I tried to delete the .LDF file but it gives me access denied. I checked the permissions for the file and everything is OK. How can I solve this? Thanks and cheers
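One commonly suggested workaround, a sketch only and assuming the old log file can be discarded (any open transactions in it would be lost) and that the path below is a placeholder, is to attach the .mdf on its own and let SQL Server rebuild the log:
CREATE DATABASE ArtWork
    ON (FILENAME = 'C:\inetpub\mysite\App_Data\ArtWork.mdf')  -- hypothetical path to the uploaded .mdf
    FOR ATTACH_REBUILD_LOG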
I have daily scheduled full backups of each database and log backups scheduled for every 2 hours. My question is: should they be scheduled for overwriting or appending? I have always had them set to overwrite, but I don't know if that is correct. Any recommendations would be appreciated.
I have to create a Maintenance Plan in SQL 2005 to take a backup of 5 different DBs at 2 AM every day. Each DB has its own sub-directory and backup file. The total DB size is 35 GB. I need to overwrite the existing backup instead of having a new backup every time.
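If the maintenance plan itself cannot be made to overwrite, one alternative sketch is a T-SQL job step per database that always backs up to the same file name with INIT; the database name and path below are placeholders:
BACKUP DATABASE MyDb1
    TO DISK = 'D:\Backups\MyDb1\MyDb1.bak'
    WITH INIT, NAME = 'MyDb1 nightly full'   -- INIT replaces the previous backup set in the file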
After logging is configured in an SSIS package, it seems that after each execution the output is appended to the log file (we are talking about the log provider for text files in this case). As a result the file just keeps on growing. I would like to overwrite the old information with each run, but I can't find where to configure this. Does anybody know?
First off, I am not a DBA, not even remotely close. Anywho, I have been given the task of figuring out how to import from a comma delimited text file into 2 columns of an existing database. The task is as follows:
- A daily text file is created by a Unix DB and placed in a folder local to the SQL Server.
- I am to take this file and import it into an existing MS SQL 2005 DB that has 3 columns: AccountID, AccountName, DateRecordCreated.
- The imported data has to overwrite all existing SQL DB data.
- This is to run automated on a daily schedule.
Being a SysAdmin, this sounds super simple to do, but I have wasted 2 full days trying to figure out how to make this happen using SSIS. All I want to know is whether I am on the right track in focusing on SSIS for a solution. Any additional how-tos would be greatly appreciated. BTW, the text file looks something like this...
AccountID,AccountName
A123456,Joe Smith, M.D.
A234567,John H. Dude,M.D.
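SSIS can certainly do this, but a plain T-SQL Agent job step is also enough. A rough sketch, assuming hypothetical table names dbo.AccountStaging (two columns matching the file) and dbo.Accounts (the real three-column table), and a made-up file path:
TRUNCATE TABLE dbo.AccountStaging;            -- clear yesterday's load
BULK INSERT dbo.AccountStaging
FROM 'D:\Import\accounts.txt'                 -- hypothetical path to the daily file
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);

TRUNCATE TABLE dbo.Accounts;                  -- "overwrite all existing data"
INSERT INTO dbo.Accounts (AccountID, AccountName, DateRecordCreated)
SELECT AccountID, AccountName, GETDATE()
FROM dbo.AccountStaging;
Note that the sample rows contain commas inside the name ("Joe Smith, M.D."), so a plain comma delimiter would split those names; a format file or a different delimiter in the source extract may be needed.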
Good morning, I need some assistance with SQL Server 2000 data imports. When I import data from a text file on a routine basis, three things must happen: 1. New records identified by primary key get appended to the table. 2. Existing records identified by primary key get overwritten with new/updated data. 3. All other existing records are left alone. Does anyone know how to import records following the criteria above? The table cannot contain duplicate primary keys by nature, so the import must overwrite those records! This is being built into a DTS package, but I need to get over this obstacle! Thanks for any guidance!
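A sketch of the usual SQL Server 2000 pattern (there is no MERGE statement there), assuming the DTS package first loads the text file into a staging table; dbo.Staging, dbo.Target, the key column ID, and the columns Col1/Col2 are all placeholders:
-- 2. Overwrite existing records identified by the primary key
UPDATE t
SET    t.Col1 = s.Col1,
       t.Col2 = s.Col2
FROM   dbo.Target AS t
JOIN   dbo.Staging AS s ON s.ID = t.ID;

-- 1. Append new records whose key is not yet in the table
INSERT INTO dbo.Target (ID, Col1, Col2)
SELECT s.ID, s.Col1, s.Col2
FROM   dbo.Staging AS s
WHERE  NOT EXISTS (SELECT 1 FROM dbo.Target AS t WHERE t.ID = s.ID);

-- 3. Records not present in the staging table are simply left alone.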
Aim: when Fee_Code = '42B' and month_end_date >= 2013-02-01, change the Fee_Code from '42B' to '42C'. Anything prior to 2013-02-01, the fee_code needs to remain the same.
I can do this as a case statement (as seen below) but this creates a new column. How can I overwrite this logic in the fee_code column? My query is:
SELECT FDMSAccountNo, Fee_Code, month_end_date,
       SUM(Fact_Fee_History.Retail_amount) AS 'PCI',
       CASE WHEN fee_code = '42B' AND (month_end_date >= '2013-02-01') THEN '42C' END AS Test
FROM Fact_Fee_History
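Two sketches, depending on whether the value should only be changed in the query output or permanently in the table; the names come from the query above, and the GROUP BY is an assumption, since the original query aggregates without one:
-- Return the adjusted code in place of Fee_Code instead of as a new column
SELECT FDMSAccountNo,
       CASE WHEN Fee_Code = '42B' AND month_end_date >= '2013-02-01'
            THEN '42C' ELSE Fee_Code END AS Fee_Code,
       month_end_date,
       SUM(Retail_amount) AS 'PCI'
FROM   Fact_Fee_History
GROUP BY FDMSAccountNo,
         CASE WHEN Fee_Code = '42B' AND month_end_date >= '2013-02-01'
              THEN '42C' ELSE Fee_Code END,
         month_end_date;

-- Or overwrite the stored value once and for all
UPDATE Fact_Fee_History
SET    Fee_Code = '42C'
WHERE  Fee_Code = '42B' AND month_end_date >= '2013-02-01';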
I have an SSRS project which has 40 reports and one data source. I have deployed my reports to the report server and tested them; everything is working fine. But recently, when I made changes to the reports and tried to deploy them, they are not getting overwritten, although the data source is getting overwritten, as I set the Overwrite property to true. Can anyone help me?
I have 3 packages that run consecutively in a main package. Each of these packages reads data from a flat file, imports it into the database, and then performs some updates; then I use a script component to write the data to a flat file destination, and I have the overwrite flag on the flat file destination set to false. However, the flat file is not being appended to. All 3 packages are set to write to the same flat file destination, and each package overwrites the content of the flat file created by the previous packages.
I am wondering why overwrite = False does not work on the flat file destination. Is there something else that I need to set, or is this a defect? Any input will be much appreciated.
When calling the RS SOAP API's CreateReport method, it doesn't seem like the report description is updated, unless I first delete the report and then recreate it.
Here's what my call looks like:
Warning[] warnings = rs.CreateReport(reportName, "/" + folder, true, buffer, null);
This is more of a philosophical post, but feedbacks are welcome!
I am working on migrating SQL 2000 DTS packages that pull data from MAS90 via ODBC connections. At first I REALLY tried to learn SSIS, and I hated it, with all the new things one has to do to get a simple import to work. After a while, I began to appreciate some of the new design and the more tiered approach. Indeed, I tried to use SSIS to import the tables and even learned how to overcome the Unicode/non-Unicode conversion errors by using the Import Wizard to do the grunt work.
But today I came across a show stopper: my imports are failing because the source lied about its metadata type and I am getting a "Value too large for output column" error. I tried to recreate the task to no avail. I searched on the web and there are very few posts regarding this, and unfortunately I don't have a way to tweak my ODBC connection properties for MAS90 to somehow "fool" SSIS. I finally gave up and migrated the DTS 2000 package instead.
I am not too happy about this solution because I know that, more likely than not, Microsoft will discontinue support for such a legacy approach and then it is more work down the road. I REALLY wanted to do it right, to rebuild it natively in SSIS, but why does SSIS have to make things so hard by enforcing the type checks so tightly? Is it so bad to allow users who know the data better to bypass the validations? We are not working in a perfect Comp Sci 101 world where everything is scrubbed clean; we work in a world of bad, old, malformed data.
If there is a way for me to overcome that "value too large" error, I am all ears. Thank you for reading.
I know the basic definition of these two options, but I am not very clear on why someone would choose one over the other. Currently I am using the Append to Media option, and with every daily backup I see my backup files growing in size.
Can someone give me a nice example of these two options?
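A minimal illustration in T-SQL terms (the database name and file path are placeholders):
-- Append to Media: each run adds another backup set to the same file, so it keeps growing
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak' WITH NOINIT;

-- Overwrite: each run replaces the existing backup sets, so the file stays about the size of one backup
BACKUP DATABASE MyDb TO DISK = 'D:\Backups\MyDb.bak' WITH INIT;
The trade-off: appending keeps the restore history in one place but grows without bound; overwriting keeps the size down but leaves only the most recent backup unless the files are also copied elsewhere.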
I've been assigned the task of setting up access to our SQL Server 2005 box. A consultant developing for us has access to 2 databases and I've set this up fine. It appears, however, that one of these databases is re-copied over to the server every night to keep the data reasonably current.
I'm not interested in changing this method as I'm not the maintainer (as yet).
Basically, I would like to know: I've set up access to this database (it works fine), but when the database is updated (with an SSIS package) the account seems to get deleted. Do the original permissions from the source database overwrite those of its destination?
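One hedged guess: the database users survive the nightly copy but end up orphaned from the server-level logins, which can look like the account being deleted. A sketch of remapping the user after each refresh, with ConsultantUser and ConsultantLogin as placeholder names:
-- Remap an orphaned database user to its server login (newer syntax)
ALTER USER ConsultantUser WITH LOGIN = ConsultantLogin;

-- Older, equivalent approach
EXEC sp_change_users_login 'Update_One', 'ConsultantUser', 'ConsultantLogin';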
I am using SQL Server 2005 and trying to create a linked server to Oracle 10. I used the commands below:
EXEC sp_addlinkedserver @server = 'test1', @srvproduct = 'Oracle', @provider = 'MSDAORA', @datasrc = 'testsource'
EXEC sp_addlinkedsrvlogin @rmtsrvname = 'test1', @useself = 'false', @rmtuser = 'sp', @rmtpassword = 'sp'
When I execute select * from test1...COUNTRY I get the error: "The OLE DB provider "MSDAORA" for linked server "...." does not contain the table "COUNTRY". The table either does not exist or the current user does not have permissions on that table." The 'sp' user I am connecting as is the owner of the table. What could be the problem? Thanks a lot.
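One common cause with MSDAORA is that Oracle stores unquoted identifiers in upper case, so the four-part name has to match exactly. A sketch, assuming the owning schema really is SP:
-- Specify the schema, with both schema and table names in upper case
SELECT * FROM test1..SP.COUNTRY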
I have created a table called Table, with name as varchar and id as int. Now I have started inserting rows like: insert into Table values ('arun',20). Yes, I have inserted a row in the table. Now I have got the values "arun's", 50. With insert into Table values('arun's',20) my SQL Server is giving me an error instead of inserting the row. How would you solve this problem?
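A sketch of the usual fix: double the embedded single quote, or pass the value as a parameter so no escaping is needed (the values come from the post):
-- Escape the embedded quote by doubling it
INSERT INTO [Table] (name, id) VALUES ('arun''s', 50);

-- Or parameterize, which also protects against SQL injection
EXEC sp_executesql
     N'INSERT INTO [Table] (name, id) VALUES (@name, @id)',
     N'@name varchar(25), @id int',
     @name = 'arun''s', @id = 50;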
I have a table called status, and in that table there is one field, currentstatus. I want to move the rows which have currentstatus = "ticket closed" into another table called repository, which has the same table structure as the status table. I can do it programmatically, but is there any way for the system to check every 3 months and perform this action (moving to the repository table) automatically?
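A sketch of the move itself, using the table and column names from the post, wrapped in a transaction; scheduling it is then just a SQL Agent job set to run quarterly:
BEGIN TRANSACTION;

INSERT INTO repository
SELECT * FROM status
WHERE currentstatus = 'ticket closed';

DELETE FROM status
WHERE currentstatus = 'ticket closed';

COMMIT TRANSACTION;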
I'm inserting from TempAccrual to VacationAccrual. It works nicely; however, if I run this script again it will insert the same values again into VacationAccrual. How do I block that? If there is a small change in one of the columns in TempAccrual, then allow the insert. Here is my query:
INSERT INTO vacationaccrual (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
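Since the query above is cut off, here is only a sketch of the guard, assuming empno identifies a row, the listed columns are the ones that can change, and import_date is stamped with GETDATE():
INSERT INTO vacationaccrual
       (empno, accrued_vacation, accrued_sick_effective_date, accrued_sick, import_date)
SELECT t.empno, t.accrued_vacation, t.accrued_sick_effective_date, t.accrued_sick, GETDATE()
FROM   TempAccrual AS t
WHERE  NOT EXISTS (SELECT 1
                   FROM   vacationaccrual AS v
                   WHERE  v.empno = t.empno
                     AND  v.accrued_vacation = t.accrued_vacation
                     AND  v.accrued_sick_effective_date = t.accrued_sick_effective_date
                     AND  v.accrued_sick = t.accrued_sick);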
For reasons that are not relevant (though I explain them below *), I want, for all my users whatever their privilege level, an SP which creates and inserts into a temporary table and then another SP which reads and drops the same temporary table. My users are not able to create dbo tables (e.g. dbo.tblTest), but are permitted to create tables under their own user (e.g. MyUser.tblTest). I have found that I can achieve my aim by using code like this...
SET @SQL = 'CREATE TABLE ' + @MyUserName + '.' + 'tblTest(tstID DATETIME)'
EXEC (@SQL)
SET @SQL = 'INSERT INTO ' + @MyUserName + '.' + 'tblTest(tstID) VALUES(GETDATE())'
EXEC (@SQL)
This becomes exceptionally cumbersome for the complex INSERT & SELECT code. I'm looking for a simpler way. Simplified down, I am looking for something like this...
CREATE PROCEDURE dbo.TestInsert AS
CREATE TABLE tblTest(tstID DATETIME)
INSERT INTO tblTest(tstID) VALUES(GETDATE())
GO
CREATE PROCEDURE dbo.TestSelect AS
SELECT * FROM tblTest
DROP TABLE tblTest
In the above example, if the SPs are owned by dbo (as above), CREATE TABLE & DROP TABLE use MyUser.tblTest while INSERT & SELECT use dbo.tblTest. If the SPs are owned by the user (e.g. MyUser.TestInsert), it works correctly (MyUser.tblTest is used throughout) but I would have to have a pair of SPs for each user.
* I have an MS Access ADP front end linked to a SQL Server database. For reports with complex datasets, it times out. Therefore it suits my purposes to create a temporary table first and then to open the report based on that temporary table.
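A sketch of one alternative, assuming both procedures can be called on the same connection from the Access front end: a local #temp table created in the outer batch lives in tempdb (so no CREATE TABLE permission is needed in the user database) and is visible to every procedure executed later on that connection. Note that a #temp table created inside a procedure is dropped as soon as that procedure returns, which is why it is created in the calling batch here:
-- Procedures reference the #temp table; deferred name resolution lets them be created before it exists
CREATE PROCEDURE dbo.TestInsert AS
    INSERT INTO #tblTest (tstID) VALUES (GETDATE());
GO
CREATE PROCEDURE dbo.TestSelect AS
    SELECT * FROM #tblTest;
GO

-- Run on the client connection (one session)
CREATE TABLE #tblTest (tstID DATETIME);
EXEC dbo.TestInsert;
EXEC dbo.TestSelect;
DROP TABLE #tblTest;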
The following dbo.Tables of Northwind.mdf in my .SQLEXPRESS (SQL Server Management Studio Express) are missing: dbo.Categories dbo.CustomerCustomerDemo dbo.CustomerDemographics dbo.Customers dbo.Employees dbo.EmployeeTerritories dbo.Order Details dbo.Orders dbo.Products dbo.Regions dbo.Shippers dbo.Suppliers dbo.Territories.
But I have these dbo.Tables in a different database, "xyzDatabase". How can I copy each of these dbo.Tables to the corresponding blank dbo.Table of the Northwind database?
I right-clicked on dbo.Categories and saw the following: New Table..., Modify, Open Table, and Script Table as, which offers CREATE To, DROP To, SELECT To, INSERT To, UPDATE To, and DELETE To, each going to a New Query Editor Window, a File..., or the Clipboard. From the above observation, I think it is possible to copy a dbo.Table from one database to the Northwind database that needs to be repaired. Please help and advise me how to do this task, or tell me where I can find the Microsoft documentation that gives the details of this copy operation.
Thanks in advance, Scott Chang
P. S. I am using VB 2005 Express to create a project to learn "Calling Stored Procedures with ADO.NET" (see Paul Kimmel's article in http://www.developer.com/db/article.php/3438221) that needs the dbo.Tables of Northwind Database and my Northwind Database has been screwed up for quite a while and needs a big repair.
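A sketch of copying one table across databases with SELECT INTO (repeated per table); this copies the data and basic column definitions but not keys, defaults, or constraints, which would need the scripted CREATE statements:
USE Northwind;
GO
SELECT *
INTO   dbo.Categories
FROM   xyzDatabase.dbo.Categories;
If the empty tables already exist in Northwind, use INSERT INTO dbo.Categories SELECT * FROM xyzDatabase.dbo.Categories instead.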
--Table 1 "Employee" CREATE TABLE [MyCompany].[Employee]( [EmployeeGID] [int] IDENTITY(1,1) NOT NULL, [BranchFID] [int] NOT NULL, [FirstName] [varchar](50) NOT NULL, [MiddleName] [varchar](50) NOT NULL, [LastName] [varchar](50) NOT NULL, CONSTRAINT [PK_Employee] PRIMARY KEY CLUSTERED ( [EmployeeGID] ) GO ALTER TABLE [MyCompany].[Employee] WITH CHECK ADD CONSTRAINT [FK_Employee_BranchFID] FOREIGN KEY([BranchFID]) REFERENCES [myCompany].[Branch] ([BranchGID]) GO ALTER TABLE [MyCompany].[Employee] CHECK CONSTRAINT [FK_Employee_BranchFID]
-- Table 2 "Branch" CREATE TABLE [Mycompany].[Branch]( [BranchGID] [int] IDENTITY(1,1) NOT NULL, [BranchName] [varchar](50) NOT NULL, [City] [varchar](50) NOT NULL, [ManagerFID] [int] NOT NULL, CONSTRAINT [PK_Branch] PRIMARY KEY CLUSTERED ( [BranchGID] ) GO ALTER TABLE [MyCompany].[Branch] WITH CHECK ADD CONSTRAINT [FK_Branch_ManagerFID] FOREIGN KEY([ManagerFID]) REFERENCES [MyCompany].[Employee] ([EmployeeGID]) GO ALTER TABLE [MyCompany].[Branch] CHECK CONSTRAINT [FK_Branch_ManagerFID]
-- Foreign IDs = FID
-- Generated IDs = GID
Then I try a simple single-row DELETE:
DELETE FROM MyCompany.Employee WHERE EmployeeGID= 39
Well, this might look like a very basic error: I get this error after trying to delete something from the Employee table:
The DELETE statement conflicted with the REFERENCE constraint "FK_Branch_ManagerFID". The conflict occurred in database "MyDatabase", table "myCompany.Branch", column 'ManagerFID'.
What I've been doing is deactivating the foreign key constraint in both tables when performing these kinds of operations; the same thing happens if I try to delete a Branch entry. Basically, each entry in Branch and Employee is a child of the other, which makes things more complicated.
My question is: is there a simple way to overcome this obstacle without having to deactivate the foreign key constraints every time, or a good way to prevent this from happening in the first place? Is this when I have to use ON DELETE CASCADE or something?
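A sketch of handling the delete without disabling any constraints, assuming it is acceptable to re-point the branch at another manager first (EmployeeGID 40 is made up); ON DELETE CASCADE is generally not available here, because SQL Server rejects cascading actions that would form a cycle between the two foreign keys:
BEGIN TRANSACTION;

-- Re-point any branch managed by employee 39 to a different manager (40 is a placeholder)
UPDATE MyCompany.Branch
SET    ManagerFID = 40
WHERE  ManagerFID = 39;

-- The employee row now has no rows referencing it and can be deleted
DELETE FROM MyCompany.Employee
WHERE  EmployeeGID = 39;

COMMIT TRANSACTION;
Making ManagerFID nullable and setting it to NULL before the delete is another option, if a branch can temporarily have no manager.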
Banti writes "IF i create temporary table by using #table and ##table then what is the difference. i found no difference. pls reply. first: create table ##temp ( name varchar(25), roll int ) insert into ##temp values('banti',1) select * from ##temp second: create table #temp ( name varchar(25), roll int ) insert into #temp values('banti',1) select * from #temp
both works fine , then what is the difference waiting for ur reply Banti"
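A short illustration of the difference, which is about scope rather than syntax: a #temp table is visible only to the connection that created it (and is dropped when that connection, or the creating procedure, ends), while a ##temp table is visible to every connection until the creating session disconnects and no one else is using it:
-- Session 1
create table #temp  ( name varchar(25), roll int );
create table ##temp ( name varchar(25), roll int );

-- Session 2 (a different connection)
select * from ##temp;   -- works: global temp tables are visible to other sessions
select * from #temp;    -- fails: Invalid object name '#temp'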