SQL Server 2012 :: Open EDI File And Insert Data Into Tables
Sep 27, 2012
Is there a way to open an EDI file in SQL Server and insert the data into tables?
I am trying to take an entire MS SQL database and put it in an SQL file. I have successfully copied the tables into an SQL file by highlighting the tables in Enterprise Manager and choosing 'generate sql script'.
That gives me the structure, but now I would like the data (in insert statements). I have looked in Enterprise Manager's export wizard and Query Analyzer to no avail. There seem to be a lot of options for exporting data except this one! Please point me in the right direction.
At the end of the day, I would like to be able to put everything in a text file. Then, should I have problems, I can just copy my text into query analyzer and have a brand new database.
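In the absence of a data-scripting option in those tools, one workaround is to generate the INSERT statements with a query. A minimal sketch, assuming a hypothetical dbo.Customers table with non-NULL character columns Name and City (later tooling, such as the SSMS Generate Scripts wizard, can script schema and data together):

-- Builds one INSERT statement per row; embedded single quotes are doubled.
SELECT 'INSERT INTO dbo.Customers (Name, City) VALUES ('''
       + REPLACE(Name, '''', '''''') + ''', '''
       + REPLACE(City, '''', '''''') + ''');'
FROM dbo.Customers;

Run the query per table and save the result set into the same text file as the generated schema script.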
Thank you in advance.
I am unable to load data from a flat file into a SQL table using a BULK INSERT statement.
My code:-
DECLARE @filePath VARCHAR(200)
DECLARE @sql VARCHAR(8000)
Declare @filename varchar(100)
set @filename='CCNVZ_150401054418'
SET @filePath = 'I:\IncomingFiles\'+@FileName+'.txt'
[Code] .....
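For reference, a minimal sketch of how the dynamic BULK INSERT might be completed (the staging table name is an assumption, and the terminators must match the actual file layout):

DECLARE @filePath VARCHAR(200)
DECLARE @sql VARCHAR(8000)
DECLARE @filename VARCHAR(100)
SET @filename = 'CCNVZ_150401054418'
SET @filePath = 'I:\IncomingFiles\' + @filename + '.txt'

-- BULK INSERT does not accept a variable for the file name,
-- so the statement is built dynamically and executed.
SET @sql = 'BULK INSERT dbo.CCNVZ_Staging FROM ''' + @filePath + '''
WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
EXEC (@sql)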
I have excel file like below
sno name fname empid epsal
1 raju ravi 123 40000
I want to upload/import the Excel sheet data into SQL Server using ASP, into different tables like:
table_a
sno name fname
1 raju ravi
table_b
empid empsal
123 40000
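One server-side option is OPENROWSET against the workbook; a sketch, assuming the ACE OLE DB provider is installed, 'Ad Hoc Distributed Queries' is enabled, and the sheet name and file path are placeholders:

-- Read the sheet once per target table, picking only the needed columns.
INSERT INTO table_a (sno, name, fname)
SELECT sno, name, fname
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\uploads\emp.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');

INSERT INTO table_b (empid, empsal)
SELECT empid, epsal
FROM OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                'Excel 12.0;Database=C:\uploads\emp.xlsx;HDR=YES',
                'SELECT * FROM [Sheet1$]');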
Is there any way to get the order in which data should be imported into tables when they have primary and foreign key relations?
For example: we have around 170 tables, and when we try to insert data it throws an error stating that table25's data should be inserted first; when we insert data into table25 it says table70, and so on.
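A dependency-ordering query can take the guesswork out of this. A sketch that walks sys.foreign_keys and assigns each table a load level (it assumes no circular references between distinct tables; self-referencing keys are excluded):

-- Level 0 tables reference nothing and load first; each other table
-- loads after the highest level among the tables it references.
;WITH deps AS (
    SELECT t.object_id, 0 AS lvl
    FROM sys.tables t
    WHERE NOT EXISTS (SELECT 1 FROM sys.foreign_keys fk
                      WHERE fk.parent_object_id = t.object_id
                        AND fk.referenced_object_id <> t.object_id)
    UNION ALL
    SELECT fk.parent_object_id, d.lvl + 1
    FROM sys.foreign_keys fk
    JOIN deps d ON fk.referenced_object_id = d.object_id
    WHERE fk.parent_object_id <> fk.referenced_object_id
)
SELECT OBJECT_SCHEMA_NAME(object_id) AS schema_name,
       OBJECT_NAME(object_id) AS table_name,
       MAX(lvl) AS load_order
FROM deps
GROUP BY object_id
ORDER BY load_order;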
Hi everyone,
I am new to SSIS and I thought maybe someone would give me tips for solving the problem I am facing.
Overview:
I want to insert data contained in a flat file into several DB tables, which have N-M relations.
For illustration, I would explain the problem on a very simple DB:
1. The database contains the following 3 tables:
EMPLOYEE (EMP_ID, EMP_NAME)
PROJECT (PROJ_ID, PROJ_NAME)
EMP_PROJ (EMP_ID, PROJ_ID) , where EMP_ID and PROJ_ID are foreign keys referencing records in the EMPLOYEE and PROJECT tables respectively.
2. Each entry in the flat file contains the following data:
EMP_ID, EMP_NAME, PROJ_ID, PROJ_NAME
3. In SSIS, I have created a Data Flow Task containing:
- a path from a Flat File Source to an SQL Server Destination (Table: Employee)
- a path from a Flat File Source to an SQL Server Destination (Table: Project)
- a path from a Flat File Source to an SQL Server Destination (Table: Emp_proj)
Note: I used SQL Server Destination, because I need to import a huge amount of data and I read that this component performs better than the OLE DB Destination!
Questions:
1. I would like to eliminate EMP_ID and PROJ_ID from the Flat File Source. Instead, I would like these fields to be generated automatically upon insertion.
a. How can I do this and propagate the generated key among the different paths, which I have explained previously?
b. Can I first generate the two keys somehow then the parallel insertions into the different tables should start using the generated keys?
2. Is my solution correct in the first place? Or is there another, better way of inserting data which belongs to N-M relations?
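One common pattern outside of parallel SSIS paths (a sketch, assuming the flat file is first landed 1:1 into a staging table and that EMP_ID/PROJ_ID are IDENTITY columns; it also assumes the names are unique natural keys in the feed):

-- Let the database generate the surrogate keys, then join back
-- on the natural keys to build the junction table.
INSERT INTO EMPLOYEE (EMP_NAME)
SELECT DISTINCT EMP_NAME FROM STG_FLATFILE;

INSERT INTO PROJECT (PROJ_NAME)
SELECT DISTINCT PROJ_NAME FROM STG_FLATFILE;

INSERT INTO EMP_PROJ (EMP_ID, PROJ_ID)
SELECT DISTINCT e.EMP_ID, p.PROJ_ID
FROM STG_FLATFILE s
JOIN EMPLOYEE e ON e.EMP_NAME = s.EMP_NAME
JOIN PROJECT p ON p.PROJ_NAME = s.PROJ_NAME;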
Thanks in advance,
Samar
How can I open the .str file format in SQL Server? Is there any manual method to do so?
I recently installed a standalone version of SQL 2014 Standard on my work computer. I used Access before, but I want to use a SQL server instead.
We have a shared drive where a file gets deposited every day at midnight. I want to be able to get this file and import it to the server (it's basically a list of names).
Here what I have done so far:
I created the database
Created the file and successfully imported data into it using the Import Data feature.
I saved the SSIS package
Scheduled an Agent Job for this package to run at certain time,daily
At first the jobs would fail with an Access is Denied error. I added a user under Credentials with my network account (I have admin rights on the work computer). I also added a proxy for the credential user I made.
Jobs fail with a “Cannot open data file” error. I tried changing things here and there, but I can’t get it to work.
Hi,
I created an SSIS package which generates an output file and places it on a remote file share location, which will look something like this:
\\RemoteServerName\RemoteFilePath
The package is executing fine when I am executing it through BIDS or through execute package utility and writing the output file to remote file share location.
I created a SQL job for the package and ran the job. It then throws an error saying:
Executed as user: Domain\User. Microsoft (R) SQL Server Execute Package Utility Version 9.00.3042.00 for 32-bit Copyright (C) Microsoft Corp 1984-2005. All rights reserved. Started: 10:33:06 AM Error: 2008-03-10 10:33:22.22 Code: 0xC020200E Source: DFT_Generate Output File Description: Cannot open the datafile "\\RemoteServerName\RemoteFilePath\OutputFileName.txt". End Error Error: 2008-03-10 10:33:22.34 Code: 0xC004701A Source: DFT_Generate Output File DTS.Pipeline Description: component "FF_DST_Output" (160) failed the pre-execute phase and returned error code 0xC020200E. End Error DTExec: The package execution returned DTSER_FAILURE (1). Started: 10:33:06 AM Finished: 10:33:22 AM Elapsed: 15.891 seconds. The package execution failed. The step failed.
Domain\User has all the permissions on the remote file share location.
SQL Server Agent is running with the log-on account Domain\User (same as the above).
Could anyone help me in resolving this issue?
Thanks & Regards,
Sriram.
I have been tasked with creating a data table and stored procedure to extract a specially formatted XML file that arrives as an attachment within a standard XML envelope. The XML file is an attachment in a node within the XML wrapper. There are other MIME files (PDFs) that are handled by a separate procedure, but I need to extract just the attached XML file along with those and put it into the data table with some other PK/FK fields.
Is a blob the best datatype? How do I insert that XML file into it?
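The xml datatype is usually a better fit than a plain blob here, since the stored document stays queryable. A sketch with illustrative table and path names:

-- Hypothetical table holding the extracted attachment plus key columns.
CREATE TABLE dbo.XmlAttachments (
    AttachmentID  INT IDENTITY(1,1) PRIMARY KEY,
    SourceDocID   INT NOT NULL,         -- FK back to the envelope record
    AttachmentXml XML NOT NULL
);

-- Load a file from disk into the xml column; SINGLE_BLOB avoids
-- code-page conversion, and the CAST makes SQL Server parse the XML.
INSERT INTO dbo.XmlAttachments (SourceDocID, AttachmentXml)
SELECT 1, CAST(b.BulkColumn AS XML)
FROM OPENROWSET(BULK 'C:\files\attachment.xml', SINGLE_BLOB) AS b;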
I am trying to load a fixed-width text file using `Bulk Insert` and an XML format file. I have used the same process and XML file on another fixed-width file, except with fewer columns.
Error
Msg 4857, Level 16, State 1, Line 16
Line 4 in format file "PATH\Caddr.xml": Attribute "type" could not be specified for this type.
SQL Server Table
[code lang="sql"]
create table [dbo].[raw_addr](
address_number varchar(max),
addr_line1 varchar(max),
addr_line2 varchar(max),
street_no varchar(max),
[code]....
Overall goal: Write a Bulk Insert statement using the UNC path of a filetable directory.
Issue: When using the UNC path of the filetable directory in a Bulk Insert Statement, receiving "Operating system error code 50(The request is not supported.)" Looking for confirmation as to whether this is truly not supported.
Environment: SQL Server 2012 Standard. Windows Server 2008 R2 Standard
I am running my package in SQL Server 2012, in which I am giving a network path for the flat file destination, and it's working fine. But if I give my local path, it gives me a "cannot open data file" error ...
Nothing is wrong with package.
I am trying to attach a database. But I am receiving below error:
Unable to open the physical file "D:\database\pc.mdf". Operating system error 5: "5(Access is denied.)".
I have added the service accounts to the administrator group.
I have provided full control to the service account on the D drive and on the .mdf file also.
I myself have full permissions on the drive and .mdf file, and I am in the administrator group.
Restarted the SQL Server services. Still the same error.
Merging table :
--------Dummy TABLE
create table #Tbl1 (date1 date,WSH varchar(10),ITN int,Executions int)
insert into #Tbl1 (date1 , WSH , ITN , Executions)
select '20130202' ,'ABC', 1 , 100
union all
select '20130203' ,'DEF', 1 , 200
[Code] .....
I want Result like this :
date1      WSH   ITN   Executions  MCG   Positions
2013-02-02 ABC   1     100         2     500
2013-02-03 DEF   1     200         NULL  NULL
2013-02-05 NULL  NULL  NULL        2     600
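Assuming the elided [Code] block builds a second table, say #Tbl2 with (date1, MCG, Positions), a FULL OUTER JOIN on date1 produces exactly that shape; a sketch:

-- #Tbl2(date1, MCG, Positions) is an assumption based on the result columns.
SELECT COALESCE(t1.date1, t2.date1) AS date1,
       t1.WSH, t1.ITN, t1.Executions,
       t2.MCG, t2.Positions
FROM #Tbl1 t1
FULL OUTER JOIN #Tbl2 t2 ON t2.date1 = t1.date1
ORDER BY COALESCE(t1.date1, t2.date1);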
1. Take a subset of data from about 100 tables that have multiple references to other tables in this group of 100 from a first DB.
2. Insert the above data into a second DB, a database that already has data in the 100 tables, while maintaining the correct references.
As a general approach, the best way I can think of doing this is as follows:
1. Create mapping tables for every ID that is referenced in a different table (OldID, NewID)
2. Insert the old data into the new table and output the OldID and NewID into the mapping table.
3. Use that mapping data to make sure all tables that use those IDs have the new IDs in DB2.
This approach is extremely labor intensive both on initial implementation and would require a fairly substantial amount of work to maintain going forward.
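One trick that cuts the labor per table: MERGE with a deliberately false match condition can insert the rows and capture old and new IDs in one statement, which a plain INSERT ... OUTPUT cannot do (OUTPUT on INSERT can't reference source columns). A sketch with hypothetical table and column names:

-- Mapping table for one entity; repeat the pattern per table.
CREATE TABLE dbo.CustomerIdMap (OldID INT NOT NULL, NewID INT NOT NULL);

-- ON 1 = 0 never matches, so every source row is inserted, and the
-- OUTPUT clause can see both src.CustomerID (old) and inserted (new).
MERGE INTO DB2.dbo.Customer AS tgt
USING DB1.dbo.Customer AS src
    ON 1 = 0
WHEN NOT MATCHED THEN
    INSERT (CustomerName) VALUES (src.CustomerName)
OUTPUT src.CustomerID, inserted.CustomerID
INTO dbo.CustomerIdMap (OldID, NewID);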
I have a set of tables that look like what I have shown below. How can I achieve the desired output?
CREATE TABLE #ABC([Year] INT, [Month] INT,Customer Varchar(10), SalesofProductA INT);
CREATE TABLE #DEF([Year] INT, [Month] INT,Customer Varchar(10), SalesofProductB INT);
CREATE TABLE #GHI([Year] INT, [Month] INT,Customer Varchar(10), SalesofProductC INT);
INSERT #ABC VALUES (2013,1,'PPP',1);
INSERT #ABC VALUES (2013,1,'QQQ',2);
INSERT #ABC VALUES (2013,2,'PPP',3);
[Code] ....
I currently have a query that looks like this; @Month and @Year are supplied as parameters:
SELECT
-- select the sum for each year/month combination using a correlated subquery (each result from the main query causes another data retrieval operation to be run)
(SELECT SUM(SalesofProductA) FROM #ABC WHERE [Year]=T.[Year] AND [Month]=T.[Month]) AS [Sum_SalesofProductA]
[Code] ...
Right now, for a particular value of @Month and @Year, I see an output like this:
SalesofProductA, SalesofProductB, SalesofProductC
What I would like to see is:
[Customer],SalesofProductA, SalesofProductB, SalesofProductC
How can it be done?
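One way (a sketch against the sample tables): group each table by customer first, then join the three aggregates; FULL OUTER JOINs keep customers that appear in only one product table.

-- @Year/@Month are the supplied parameters, as in the original query.
SELECT COALESCE(a.Customer, b.Customer, c.Customer) AS Customer,
       a.Sum_SalesofProductA, b.Sum_SalesofProductB, c.Sum_SalesofProductC
FROM (SELECT Customer, SUM(SalesofProductA) AS Sum_SalesofProductA
      FROM #ABC WHERE [Year] = @Year AND [Month] = @Month
      GROUP BY Customer) a
FULL OUTER JOIN
     (SELECT Customer, SUM(SalesofProductB) AS Sum_SalesofProductB
      FROM #DEF WHERE [Year] = @Year AND [Month] = @Month
      GROUP BY Customer) b ON b.Customer = a.Customer
FULL OUTER JOIN
     (SELECT Customer, SUM(SalesofProductC) AS Sum_SalesofProductC
      FROM #GHI WHERE [Year] = @Year AND [Month] = @Month
      GROUP BY Customer) c ON c.Customer = COALESCE(a.Customer, b.Customer);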
I need to make a query that counts installed developer software for all our developers (from the SCCM database), for licensing purposes. The trick here is that a license should only be counted once per developer, and that should be the highest version. But in the database, the developers can have different versions of the software installed (upgrades) on the same computer, and they often use several computers with different software versions.
So for example: A source table with two developers
-------------------------------------------------------------------
| dev1 | comp1 | Microsoft Visual Studio Ultimate 2013
| dev1 | comp1 | Microsoft Visual Studio Professional 2010
| dev1 | comp2 | Microsoft Visual Studio Premium 2010
| dev2 | comp3 | Microsoft Visual Studio Professional 2010
| dev2 | comp4 | Microsoft Visual Studio Premium 2012
--------------------------------------------------------------------
I want the result to be:
-----------------------------------------------------
| dev1 | Microsoft Visual Studio Ultimate 2013
| dev2 | Microsoft Visual Studio Premium 2012
------------------------------------------------------
I have created a query using cursors that gives me the correct result, but it's way too slow to be acceptable (over 20 min..). I also toyed with the idea of creating some sort of CLR proc or function in C# that does the logic, but an SCCM consultant from MS said that if I create any kind of custom objects on the SCCM SQL Server instance, we lose all support from them. So I'm basically stuck with using good old fashioned T-SQL queries.
My idea now, is to use a CTE table and combine it with a Temp table with the software and a rank. I feel that I'm on the right track, but I just can't nail it properly.
This is how far I have come now:
IF OBJECT_ID('tempdb..#swRank') IS NULL CREATE TABLE #swRank(rankID int NOT NULL UNIQUE, vsVersion nvarchar(255))
INSERT INTO #swRank(rankID, vsVersion)
VALUES
(1, 'Microsoft Visual Studio Ultimate 2013'),
(2, 'Microsoft Visual Studio Ultimate 2012'),
[Code] ....
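The rank table plus ROW_NUMBER should finish the job without cursors; a sketch, where src(dev, comp, software) stands in for the real source table shape shown in the sample:

-- Join each installed product to its rank, then keep only the
-- best-ranked row per developer.
;WITH ranked AS (
    SELECT s.dev, s.software,
           ROW_NUMBER() OVER (PARTITION BY s.dev
                              ORDER BY r.rankID) AS rn
    FROM src s
    JOIN #swRank r ON r.vsVersion = s.software
)
SELECT dev, software
FROM ranked
WHERE rn = 1;   -- one row per developer: the highest-ranked edition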
I have found a bunch of duplicate records in our housing database that ideally I need to delete. There are two tables that I need to remove data from: ih_cml_log_entry and ih_cml_log_notes. There is no unique identifier between the tables for a log entry, so I have had to join on the person_ref, log_seq and the date/time of entry. How do I go about deleting the data? I've used the script below to identify what I need to delete -
SELECT *
FROM
(
select cml.person_ref, cml.open_date + open_time as 'datetime',cml.open_user,cml.log_type
,ROW_NUMBER() OVER (PARTITION BY cml.person_ref, cml.open_date + cml.open_time,cml.open_user,cml.log_type ORDER BY (SELECT 0)) AS RowNo
,n.note
FROM ih_cml_log_entry cml
[code]...
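ROW_NUMBER makes the delete itself straightforward: put the numbering in a CTE over the table and delete everything past the first row in each partition. A sketch against ih_cml_log_entry (the notes table would follow the same pattern):

;WITH dupes AS (
    SELECT ROW_NUMBER() OVER (
               PARTITION BY person_ref, open_date + open_time,
                            open_user, log_type
               ORDER BY (SELECT 0)) AS RowNo
    FROM ih_cml_log_entry
)
DELETE FROM dupes
WHERE RowNo > 1;   -- keeps one row per duplicate group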
I wonder if it is possible to move data from tables on a linked server to a "normal database"?
Name linked server: Covas
Name table on linked server: tblCountries
Name field: cntCountryName
Name "normal" database: CovasCopy
Name "normal" table: Countries (or dbo.Countries)
Name "normal" field: Country
This is just a test setup. I figure that if I get this working the rest will be easier.
My current query:
select * from openquery(COVAS,'
INSERT INTO CovasCopy.dbo.Countries(Country)
SELECT cntCountryName FROM db_covas.tblCountries;')
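The INSERT should live outside the OPENQUERY here: OPENQUERY sends its string to the remote server, and CovasCopy is local. A sketch of the usual pattern:

-- Pull the remote rows and insert them locally in one statement.
INSERT INTO CovasCopy.dbo.Countries (Country)
SELECT cntCountryName
FROM OPENQUERY(COVAS, 'SELECT cntCountryName FROM db_covas.tblCountries');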
In a Library Management database we have these tables
1) Document ( DocNo , Doc_type , permalink,inDate)
2)Title(id, DocNo,Main_Title, Other_Title)
3)Author(id , Author_Name , Author_Family,Type--Like:main author , translator ,....)
4)Publisher(id,DocNo , Name,Publisedate,address)
5)Subject(id,DocNo,Subject)
6)Description(id,DocNo,ISBN,description)--one document may have some ISBN,etc
In the Document table I have 500,000 records.
I want to search for a word in these tables. For example, I want to search for 'Computer'; this word may be in the subject or title or description, etc. How can I do this with the best performance?
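At this row count, full-text indexing will beat LIKE '%Computer%' scans. A sketch (the KEY INDEX names are assumptions for each table's primary key index):

CREATE FULLTEXT CATALOG LibraryFT AS DEFAULT;

-- One full-text index per table to be searched.
CREATE FULLTEXT INDEX ON Title (Main_Title, Other_Title) KEY INDEX PK_Title ON LibraryFT;
CREATE FULLTEXT INDEX ON [Subject] ([Subject]) KEY INDEX PK_Subject ON LibraryFT;
CREATE FULLTEXT INDEX ON [Description] ([description]) KEY INDEX PK_Description ON LibraryFT;

-- Collect the DocNo of every document matching the search word.
SELECT DocNo FROM Title WHERE CONTAINS ((Main_Title, Other_Title), 'Computer')
UNION
SELECT DocNo FROM [Subject] WHERE CONTAINS ([Subject], 'Computer')
UNION
SELECT DocNo FROM [Description] WHERE CONTAINS ([description], 'Computer');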
I have 6 tables which are very large in row count, and records older than 8 days need to be deleted.
A little info: every day, 300 million records are inserted into the tables below. We should maintain only 8 days' worth of data in these tables. How can I implement a purge script that deletes records from all the tables at the same time and with optimized parallelism?
Master table which has [ID],[Timestamp]
Table Name: Sample - 2,578,106
Child tables: the foreign key [ID] is common to all the tables. There is no timestamp column in the child tables, so the records need to be deleted based on Min(ID) from Sample.
dbo.ConnectionDB - 1,147,578,048
dbo.ConnectionSS - 876,458,321
dbo.ConnectionRT - 118,133,857
dbo.ConnectionSample - 100,038,535
dbo.Command - 100,032,235
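A batched purge keeps the transaction log and blocking under control. A sketch for one child table (repeat per table, or run one such loop per table in parallel Agent jobs); the cutoff comes from Sample's timestamp as described:

DECLARE @cutoffID BIGINT, @rows INT = 1;

-- Everything below this ID is older than 8 days.
SELECT @cutoffID = MIN(ID)
FROM dbo.Sample
WHERE [Timestamp] >= DATEADD(DAY, -8, GETDATE());

WHILE @rows > 0
BEGIN
    DELETE TOP (50000)   -- small batches: less log growth and blocking
    FROM dbo.ConnectionDB
    WHERE ID < @cutoffID;
    SET @rows = @@ROWCOUNT;
END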
I need to consume a live data feed from a golf tournament. And by consume, I really mean insert (merge) it into our own SQL Server database at regular intervals as the tournament progresses. This site didn't let me upload an XML file, but you can see a sample of the data feed here: URL....
I need to insert this data into 2 tables, Player_Holes and Player_Shots. But while doing the insert, I need to look up several things, such as matching our player ID to theirs on an external_id against the players table, the shot type translations, and some other logic about the process overall.
The columns in my player_holes table are: id, player_id, hole_id, round, shots (this is a total # of strokes) and date_created/date_modified. The shots table is similar: id, player_id, hole_id, round, shot_number, shot_type_id, club, distance, date_created/date_modified.
The only way I know how to do it is inefficient: I would parse the XML in ColdFusion (please, no comments on ColdFusion, that's what we use for webdev), then loop over it and do inserts for each player, each hole for each round, and the shots would probably be separate for each hole.
It would be so much better and more efficient if I could do it in SQL directly. I've done some research and SQL Server Data Tools looks promising. I've never used it, so I would have to learn, but I'm also not sure if that'd work in this application when we want to run it as a scheduled task every few minutes.
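It can be done in T-SQL alone: shred the feed with nodes() into a staging shape, then MERGE into the real tables. A heavily hedged sketch, since the feed's actual element names aren't shown here:

DECLARE @feed XML = N'...';   -- the downloaded feed (structure assumed)

-- Shred one hypothetical shape: /feed/player elements with hole children.
SELECT p.value('@external_id', 'varchar(20)') AS external_id,
       h.value('@number', 'int')              AS hole_id,
       h.value('@round', 'int')               AS round,
       h.value('@shots', 'int')               AS shots
INTO #feed_holes
FROM @feed.nodes('/feed/player') AS fp(p)
CROSS APPLY p.nodes('hole')      AS fh(h);

-- Upsert, translating their player id to ours via the players table.
MERGE dbo.Player_Holes AS tgt
USING (SELECT pl.id AS player_id, f.hole_id, f.round, f.shots
       FROM #feed_holes f
       JOIN dbo.players pl ON pl.external_id = f.external_id) AS src
   ON tgt.player_id = src.player_id
  AND tgt.hole_id = src.hole_id AND tgt.round = src.round
WHEN MATCHED THEN UPDATE SET shots = src.shots, date_modified = GETDATE()
WHEN NOT MATCHED THEN
    INSERT (player_id, hole_id, round, shots, date_created, date_modified)
    VALUES (src.player_id, src.hole_id, src.round, src.shots, GETDATE(), GETDATE());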
We have a folder table and a team table with the structure below.
Folderlist (F) table:
==============
id, folder_name, parent_id
1, c, 101
2, b, 202
3, c, 203
Teamlist (T) table:
============
team_id, Team_name, Parent_folderid
101, mobile, 101
202, Tab, 200
200, Phone, 200
203, apple, 205
205, nokia, 208
208, samsung, 208
If F.parent_id (101) = T.team_id (101) and T.team_id (101) = T.Parent_folderid (101),
then the output should come out as 'Mobile/c' (this is for F.parent_id = 101).
If F.parent_id = T.team_id and T.team_id != T.Parent_folderid,
then the Parent_folderid has to be searched against the team_id column until it matches, picking up the Team_name from each corresponding id along the way.
Ex: F.parent_id = 202 matches T.team_id (202), but this T.team_id (202) does not match its T.Parent_folderid (200), so this T.Parent_folderid (200) has to be searched against T.team_id (200); since T.team_id (200) now matches its own T.Parent_folderid (200), the names are given from the start of the hierarchy,
like Phone/Tab/b (this is for F.parent_id = 202).
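A recursive CTE can walk Teamlist up to the self-referencing row and build the path as it goes. A sketch based on my reading of the rules above, so treat it as a starting point:

-- Walk from each folder's parent_id up through Teamlist until a row
-- points at itself (team_id = Parent_folderid), prefixing names.
;WITH walk AS (
    SELECT f.id, f.folder_name,
           t.team_id, t.Parent_folderid,
           CAST(t.Team_name + '/' + f.folder_name AS VARCHAR(4000)) AS path
    FROM Folderlist f
    JOIN Teamlist t ON t.team_id = f.parent_id
    UNION ALL
    SELECT w.id, w.folder_name,
           t.team_id, t.Parent_folderid,
           CAST(t.Team_name + '/' + w.path AS VARCHAR(4000))
    FROM walk w
    JOIN Teamlist t ON t.team_id = w.Parent_folderid
    WHERE w.team_id <> w.Parent_folderid   -- stop at the self-reference
)
SELECT id, folder_name, path
FROM walk
WHERE team_id = Parent_folderid;   -- keep only fully walked rows

Against the sample data this yields 'mobile/c' for folder 1 and 'Phone/Tab/b' for folder 2.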
I'm pulling data from XML into tables, but I'm unsure how to link the data after it's imported. This example has names and tasks, and I can pull the data into two tables, but I can't find any way to link the task to the appropriate person. My person and task tables populate without issue, but there's nothing I can find to link the rows together. So in this example Test 1 would go to the first two Tasks and Test 2 would go to the second two work items.
DECLARE @XML TABLE (XMLData XML);
DECLARE @Person Table (Name NVARCHAR(50), Address NVARCHAR(50));
DECLARE @Task Table (Name NVARCHAR(50), Details NVARCHAR(50));
INSERT INTO @XML SELECT '
<process>
<header>
<Person><Name>Test1</Name><Address>123 main street</Address></Person>
[Code] .....
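If each Task element is nested inside its Person element in the full document (an assumption, since the sample is truncated), a nested CROSS APPLY preserves the relationship without any extra key column; a sketch:

-- The outer nodes() call walks each Person; the inner one walks only
-- that Person's Task children, so the rows stay correlated.
SELECT p.n.value('(Name/text())[1]',    'NVARCHAR(50)') AS PersonName,
       t.n.value('(Name/text())[1]',    'NVARCHAR(50)') AS TaskName,
       t.n.value('(Details/text())[1]', 'NVARCHAR(50)') AS TaskDetails
FROM @XML x
CROSS APPLY x.XMLData.nodes('/process/header/Person') AS p(n)
CROSS APPLY p.n.nodes('Task') AS t(n);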
I want to insert data into a FoxPro database using OPENQUERY. I have already created the connection like below:
EXEC master.dbo.sp_addlinkedserver
@server = 'EXIMBIL_LINK',
@provider = 'MSDASQL',
@srvproduct = '',
@provstr = 'Driver={Microsoft Visual FoxPro Driver};UID=;SourceDB=D:\training_testdata;SourceType=DBF;Exclusive=No;BackgroundFetch=Yes'
When I try to query the table van with the command:
select * from openquery(EXIMBIL_LINK,'select * from van')
the query runs well, but when I try to insert a new row into table van with the command:
INSERT into OpenQuery(EXIMBIL_LINK, 'SELECT c_user_id,c_user_name,full_name from van')
VALUES('1000007668','IDMTEST2','IDMTEST2')
it throws an error like this: Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.
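With MSDASQL linked servers, the OPENQUERY insert path often fails this way. Pushing the whole statement to the provider with EXEC ... AT is worth a try (a sketch; RPC Out must be enabled on the linked server first):

-- Allow remote execution through the linked server (one-time setting).
EXEC master.dbo.sp_serveroption 'EXIMBIL_LINK', 'rpc out', 'true';

-- Run the INSERT on the FoxPro side rather than through OPENQUERY.
EXEC ('INSERT INTO van (c_user_id, c_user_name, full_name)
       VALUES (''1000007668'', ''IDMTEST2'', ''IDMTEST2'')') AT EXIMBIL_LINK;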
Hello (sorry for my bad English, I'm Brazilian)
I was using Visual Studio 2003 and SQL Server CE 2.0 for C# mobile applications. The .sdf databases were created in the emulator or on the mobile device itself using Query Analyzer.
The application developed needs some initial data to run, and this data is obtained by executing a service that reads a PostgreSQL database and inserts the data into the SQL CE database on the mobile device. But, given the size of the database (maybe 10,000 rows), it takes too much time (sometimes 6 hours).
Now we are migrating to Visual Studio 2005 and SQL Server 2005 Mobile Edition.
I want to know if it's possible to create the .sdf database and load the data into it on the desktop, maybe through the execution of a .sql script, or through a service executed on the desktop.
After this, it's just a matter of uploading the .sdf file to the mobile device.
Thanks
Robson
From my query I am getting results like below in one of the column:
'immediate due 14,289.00
04/15/15 5,213.00
05/15/15 5,213.00
06/15/15 5,213.00
07/15/15 5,213.00
08/15/15 5,213.00
09/15/15 5,213.00
10/15/15 5,213.00
11/15/15 5,210.00'
There are many more rows of this same type (I just mentioned one), all with the same pattern, with tabs as the delimiter between the dates and amounts.
I need something that shows the date on one side and the corresponding amount on the other.
For Immediate Due it will be the current date and the amount beside it.
How can I achieve this?
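A sketch of one way to shred that column without newer helpers like STRING_SPLIT: break the value into lines via an XML cast, then split each line at the tab. The table and column names are placeholders, and the XML trick assumes the text contains no XML-special characters (& < >):

DECLARE @s NVARCHAR(MAX);
SELECT @s = payment_schedule FROM src;   -- stands in for the real query

SELECT CASE WHEN line LIKE 'immediate due%'
            THEN CONVERT(VARCHAR(10), GETDATE(), 101) -- current date
            ELSE LEFT(line, CHARINDEX(CHAR(9), line) - 1)
       END AS DueDate,
       LTRIM(SUBSTRING(line, CHARINDEX(CHAR(9), line) + 1, 50)) AS Amount
FROM (
    SELECT t.c.value('.', 'NVARCHAR(200)') AS line
    FROM (SELECT CAST('<r>' + REPLACE(REPLACE(@s, CHAR(13), ''),
                                      CHAR(10), '</r><r>') + '</r>' AS XML) AS x) d
    CROSS APPLY d.x.nodes('/r') AS t(c)
) AS lines
WHERE CHARINDEX(CHAR(9), line) > 0;   -- keep only date/amount lines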
Hi, I was wondering which is the best way to read data from a txt file and insert each row into SQL.
Could an OLE DB Command work? Will it be necessary to work with variables?
My txt file will have a defined width (if it is necessary to know). I will have many rows with many columns. I have to map each column from the txt file to its corresponding column in SQL Server and insert data into it for each row.
Thanks for your help!
Is it possible to dynamically grow a data file for TempDB?
I have some TSQL that I want to execute (this is raw tsql, not something I'd typically share) when I get an alert that TempDB is growing.
SET NOCOUNT ON;
DECLARE @TempDB TABLE (FileID tinyint, name varchar(50), size bigint)
DECLARE @TotalSize decimal(19,4), @TotalFiles int, @CurrentTempDBSize bigint, @SQL varchar(1000), @DBName varchar(50)
SET @SQL = ''
INSERT INTO @TempDB
[code]....
END
The code runs fine, but doesn't actually increase the size of the file.
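Building the string is only half of it: the command that actually grows a file is ALTER DATABASE ... MODIFY FILE, and the new SIZE has to be larger than the file's current size or nothing happens. A minimal sketch:

-- Grow one tempdb data file to a fixed target. 'tempdev' is the default
-- logical name but should be confirmed in sys.master_files first.
DECLARE @SQL NVARCHAR(500) = N'
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, SIZE = 2048MB);';
EXEC (@SQL);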
I want to combine the data from the two tables above like the result sets below. That means they should be grouped by bsns_id, with the descriptions comma-separated, taken from the 2nd table, in SQL Server 2012.
This is the image path :
[URL]
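On SQL Server 2012 (before STRING_AGG arrived in 2017), the STUFF ... FOR XML PATH pattern does the comma-separated grouping. A sketch with assumed table and column names, since the tables are only shown in the linked image:

-- tbl1(bsns_id, ...) and tbl2(bsns_id, description) are assumed names.
SELECT t1.bsns_id,
       STUFF((SELECT ', ' + t2.[description]
              FROM tbl2 t2
              WHERE t2.bsns_id = t1.bsns_id
              FOR XML PATH(''), TYPE).value('.', 'NVARCHAR(MAX)'),
             1, 2, '') AS descriptions
FROM tbl1 t1
GROUP BY t1.bsns_id;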
I am having trouble with BCP. I get the same error with xp_cmdshell as I do when entering bcp as a DOS command. I have checked and rechecked the file names and permissions and even restarted the PC.
Does BCP even work with SQL Server 2005 Express on a PC?
declare @sql varchar(8000)
select @sql = 'bcp SYMITAR..ACCOUNT in C:\test\EXTRACT.ACCOUNT -T -f C:\test\ACCOUNT.FMT -S'+@@servername
exec SYMITAR..xp_cmdshell @sql
GO
Volume in drive C has no label.
Volume Serial Number is 08E5-2414
Directory of C:\test
02/13/2007 08:44 AM <DIR> .
02/13/2007 08:44 AM <DIR> ..
08/31/2006 09:11 AM 27,503,161 EXTRACT.ACCOUNT
08/31/2006 09:12 AM 6,879 FMT.ACCOUNT
02/07/2007 08:46 AM 220 ACCTTYPE.FMT
02/13/2007 08:44 AM 0 filelisting.txt
02/07/2007 08:33 AM 220 ACCTTYPE.xml
5 File(s) 27,510,480 bytes
2 Dir(s) 54,344,847,360 bytes free
Could the fact that it's actually pulling BCP from C:\Program Files\Microsoft SQL Server\80\Tools\binn be a problem? I know this because it doesn't recognize the -x extension of the bcp command.
PATH:
%SystemRoot%\system32;%SystemRoot%;%SystemRoot%\System32\Wbem;C:\Program Files\Common Files\GTK\2.0\bin;C:\Program Files\Microsoft SQL Server\80\Tools\BINN;c:\Program Files\Microsoft SQL Server\90\Tools\binn;c:\Program Files\Microsoft Visual Studio 8\Common7\IDE\PrivateAssemblies;c:\Program Files\Microsoft SQL Server\90\DTS\Binn;c:\Program Files\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE;C:\Program Files\QuickTime\QTSystem
...
I added C:\test to the path and removed C:\Program Files\Microsoft
SQL Server\80\Tools\binn from the path. While it recognizes the -x
extension now, it still gets the same error message:
SQLState = HY000, NativeError = 0
Error = [Microsoft][SQL Native Client]Unable to open BCP host data-file
NULL
Hello,
I have checked file names, and I have checked permissions on the files. I do not see any reason why the BCP should fail. Any ideas how I can debug this further?
Thanks
Tom