I have recently been tasked with rewriting a database that holds large volumes of data, while ensuring that queries run in optimal time. Having never really delved into this sort of thing before, I hoped you guys might be able to offer some advice and guidance.
The design I have inherited is based around 2 main tables:
[captured_varbinds]
[id] [int] IDENTITY (1, 1) NOT NULL
[captured_trap_id] [int] NOT NULL
[varbind_oid] [varchar] (500)
[varbind_text] [varchar] (500)
The relationship between the two tables is from "captured_traps (id)" to "captured_varbinds (captured_trap_id)". Currently the "captured_traps" table contains around 350 million rows and the "captured_varbinds" table contains around 900 million rows.
Now as you can probably gather this model runs like a....well it sort of hobbles more than runs hence the need to redesign.
My current thoughts on this are:
- Normalising all varchars - there are a lot of duplicate values in most of the varchar fields.
- Full Text Indexing
However, beyond that I am not sure which route to go down. After googling for most of today I have come across a number of "solutions", but I do not want to go steaming down the track of one of these only to discover that it is fatally flawed somewhere.
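To give an idea of what I mean by normalising the varchars, something along these lines is what I'm picturing (the names are just placeholders, not a final design):

-- lookup table holding each distinct OID string once
create table varbind_oids (
    oid_id    int identity(1,1) not null primary key,
    oid_value varchar(500) not null,
    constraint UQ_varbind_oids unique (oid_value)
);

-- captured_varbinds would then reference the lookup instead of repeating the string
create table captured_varbinds_new (
    id               int identity(1,1) not null primary key,
    captured_trap_id int not null,
    oid_id           int not null references varbind_oids (oid_id),
    varbind_text     varchar(500) null
);

That would shrink the wide varchar down to a 4-byte key on the big table, which I'm hoping helps both storage and the joins.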
I need to periodically import a (HUGE) table of data from an external data source (not SQL Server) into SQL Server, with the following scenarios:
1. Some of the records in the external data source may not exist in SQL Server.
2. Some of the records in the external data source may have a different value at different imports, but these records are identified uniquely by the same primary key in the external data source and in SQL Server.
3. Some of the records in the external data source may be the same as in SQL Server.
Due to the massive volume of the import, I would like to import only the records which are different from what I have in SQL Server (cases 1 and 2 above). In fact case 2 is the most critical.
I thought of making a query with a left outer join between the data in the external data source table (SOURCE) and the data in the SQL Server table (DESTIN). The join is done on the respective primary keys (composed keys of up to 10 columns) and one of the WHERE conditions will be that the value in SOURCE is different from the value in DESTIN.
The result of this query would be exactly what I need to import. How do I do this in SSIS? I couldn't figure out how to join tables in different data sources yet.
In fact I cannot write a stored procedure to do that, since one of the sources is in a data source that is not SQL Server. I have seen the Lookup transformation in this article http://www.sqlis.com/default.aspx?311 but this is not exactly what I want to do. Another possibility is to use the Merge Join, but due to the sorting I believe its performance would be terrible!
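If it helps explain what I'm after, this is roughly the comparison I would write if both tables lived in SQL Server (SOURCE here stands for a staging copy of the external table, and the key/value column names are simplified placeholders):

select s.*
from SOURCE s
left outer join DESTIN d
    on  d.key_col1 = s.key_col1
    and d.key_col2 = s.key_col2    -- ... and so on for the up to 10 key columns
where d.key_col1 is null           -- case 1: row does not exist in DESTIN yet
   or d.some_value <> s.some_value;  -- case 2: row exists but the value changed

So the question is really how to reproduce that left join / difference filter in an SSIS data flow when SOURCE is not a SQL Server table.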
I'm creating a DB to track clients, programs, and client participation in the programs. They are service programs. A client can be in more than one program and a program can have more than one client. Can someone give me an example of how they would lay out the tables? My guess is: tblClient (ClientID), tblClientProgramLog (ProLogID, ClientID), tblProgramDetails (ProDetailID, ProLogID), tblPrograms (ProgramID, ProDetailID). I appreciate any suggestions.
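For what it's worth, the shape I keep coming back to is a plain junction table between clients and programs (the column names here are just my guesses):

create table tblClient (
    ClientID   int identity(1,1) primary key,
    ClientName varchar(100) not null
);

create table tblPrograms (
    ProgramID   int identity(1,1) primary key,
    ProgramName varchar(100) not null
);

-- one row per client per program they participate in
create table tblClientPrograms (
    ClientID     int not null references tblClient (ClientID),
    ProgramID    int not null references tblPrograms (ProgramID),
    EnrolledDate datetime null,
    primary key (ClientID, ProgramID)
);

Does that look like a sensible starting point, or is there a reason for the extra log/detail tables I guessed at above?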
I'm trying to design a database that handles Clients, Cases, Individual and Group Sessions. My problem is that a client can have individual sessions and belong to more than one group at the same time, so I have a many-to-many relationship to deal with. Also I'm trying to design it so that I can have a form that when a group is selected from a drop down it shows all clients assigned to that group and will let me enter new session data for them.
Just looking for some advice on how to handle the relationships. Maybe someone could show me how they see the relationships working.
My take is that the session is linked to the case, not the client, but I could be thinking about this incorrectly?
I have a construction estimation system, and I want to develop a project management system. I will be using the same database because there are shared tables. My question is this: critical data tables are considered to be tables with dollar values, and these tables should not be shared across the whole company. I do however need information from these tables, such as the product and quantity of the product for a given project. When an estimate becomes a project it is assigned a project number. At this point I thought of copying the required data from the estimate side to the project side. This would result in duplicate data in a sense, but the tables will be referenced from two standalone front-end applications. Should I copy the data from one table to another, or create new "views" of the estimate tables for the project management portion?
What would be the best solution to this problem? I find in some circumstances, a new table is required because additional data will be saved on the "Project Management" side, but not in all cases.
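To make the "views" option concrete, I was picturing something like this (table and column names are invented for the example), exposing only the non-dollar columns to the project management front end:

create view dbo.vw_ProjectEstimateItems
as
select
    e.EstimateID,
    e.ProjectNumber,
    e.ProductCode,
    e.Quantity              -- note: no cost or price columns exposed here
from dbo.EstimateItems e
where e.ProjectNumber is not null;  -- only estimates that became projects

Permissions could then be granted on the view rather than on the underlying estimate tables, which is what makes me lean that way instead of copying rows.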
I am creating a database where: - I have a Blogs and Folders system. - Use a common design so I can implement new systems in the future.
Users, Comments, Ratings, View, Tags and Categories are tables common to all systems, i.e., used by Posts and Files in Blogs and Folders.
- One Tag or Category can be associated with many Posts or Files.
- One Comment, View or Rating should be associated with only one Post or one File. I am missing this ... (1)
Relations between a File / Folder and Comments / Ratings / View / Tags / Categories are done using FilesRatings, FoldersViews, etc.
I am using UniqueIdentifier as Primary Keys. I checked the ASP.NET Membership tables, a few articles and a few features in my project, such as renaming files with the GUID of their records. I haven't decided yet between INT and UNIQUEIDENTIFIER.
I am looking for some feedback on the design of my database. One thing I need to improve is mentioned in (1)
Thank You, Miguel
My Database Script:
-- Users ...
create table dbo.Users (
    UserID uniqueidentifier not null constraint PK_User primary key clustered,
    [Name] nvarchar(200) not null,
    Email nvarchar(200) null,
    UpdatedDate datetime not null
)
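For point (1), the best I have come up with so far is a separate association table per pair, keyed on the Comment side so a Comment can only ever be attached to one Post (and the same pattern for Files, Ratings and Views). A rough sketch, where the Posts and Comments tables and their key columns are assumed, mirroring the Users table above:

create table dbo.PostsComments (
    CommentID uniqueidentifier not null
        constraint PK_PostsComments primary key    -- PK here means a comment maps to at most one post
        references dbo.Comments (CommentID),
    PostID uniqueidentifier not null
        references dbo.Posts (PostID)
);

Is there a cleaner way to express "belongs to exactly one Post or one File" than repeating this table for every combination?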
Hello Everyone. I'm sorry for this urgent post, but I have a critical issue that needs a solution quickly. So, to my issue: I am adjusting our sales order tables to handle a couple of different scenarios. Currently we have 2 tables for sales orders:
SALESORDERS
------------
SORDERNBR int PK,
{ Addtl Header Columns... }

SALESORDERDETAILS
-------------------
SODETAILID int,
SORDERNBR int FK,
PN varchar,
SN varchar(25),
{ Addtl Detail Columns ... }
Currently the sales order line item is serial number specific. I need to change the tables to be able to handle different requests like :
Line Item Request ( PN, QTY )
Line Item Request ( SN )
Line Item Request ( PN, GRADE, QTY )
ETC.
I am thinking I need to create a new table to hold the specifics for a particular line item. Maybe like this:
SALESORDERSPECS
----------------
SOSPECID int,
SODETAILID int FK,
SPECTYPE varchar,   -- IE: SN, PN, GRADE { one value per row }
SPECVALUE varchar   -- IE: GRADE A
I'm thinking I would need to rename the SALESORDERDETAILS table to SALESORDERITEMS. SALESORDERITEMS would just contain header info like SalePrice, Warranty, etc...
Then rename SALESORDERSPECS to SALESORDERDETAILS...
Does anyone understand what I'm trying to do? If you need more info please ask. You can also get hold of me through IM.
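In case it helps, here is roughly how I picture the new spec table in DDL form once the rename is done (data types and lengths are just placeholders at this stage):

create table SALESORDERSPECS (
    SOSPECID   int identity(1,1) primary key,
    SODETAILID int not null references SALESORDERITEMS (SODETAILID),
    SPECTYPE   varchar(25) not null,    -- e.g. 'SN', 'PN', 'GRADE'
    SPECVALUE  varchar(100) not null    -- e.g. 'GRADE A', one value per row
);

So a line item request for (PN, GRADE, QTY) would become two rows here plus a quantity column on the item itself. Is that a reasonable way to go, or am I just reinventing a worse version of the detail table?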
Hi All, I have read MANY posts on how to track changes to data over time. It appears there are two points of view:
1. Each record supports a Change Indicator flag to indicate the current record (would this be EVERY table?)
2. Each table is duplicated as an archive table and triggers are used to update the archive.
Can someone give me some guidance, based on REAL world experience, on which works best for them? My scenario - I have insurance policies and must track history as policies are updated by customer service reps. Imagine many tables:
Policy
  > LifePol
    > LifePolRiders
  > AccidentPol
  > etc...
  > DIPol
    > DIPolRiders
To me the archive table scenario does not seem scalable at all.... some guidance on design would be appreciated... Thanks!!!
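To make sure I understand option 2, the archive pattern I keep seeing looks roughly like this (simplified to a single imaginary Policy table; the column names are made up):

-- archive table mirrors the live table plus an audit column
create table PolicyHistory (
    PolicyID     int not null,
    PolicyNumber varchar(50) not null,
    PremiumAmt   money not null,
    ArchivedDate datetime not null default getdate()
);
go

create trigger trg_Policy_Archive on Policy
after update, delete
as
begin
    -- the deleted pseudo-table holds the pre-change version of each row
    insert into PolicyHistory (PolicyID, PolicyNumber, PremiumAmt)
    select PolicyID, PolicyNumber, PremiumAmt
    from deleted;
end;
go

Multiply that by every table in the Policy hierarchy and you can see why I'm worried about it scaling.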
I have a schema that is mostly working, but I was wondering if some of you with more experience than I might give me some constructive criticism on my methodology.
Basically, I have a single table that stores data for many records. Each record has a variable number of fields, each of which can be a different data type. Later, queries will pull filtered subsets of data from the table, and do calculations on specific fields. In my implementation, the fields for a record are bound together by the datagroup (uniqueidentifier) column in the LotsOData table, the field name is defined by the dataname column, and the field value is stored in the datavalue column, which is type sql_variant.
One problem I had, and I'm not able to reliably replicate, is that the more complicated queries sometimes raise casting errors on the sql_variant column, even when the data is absolutely correct. I've been able to avoid this case by pre-selecting some of the subqueries into temporary tables first, then joining on the temp tables in the main query, but that seems horribly inefficient.
I've included a sample table, data, and query to demonstrate my basic solution. I was wondering if anybody could provide some insight on a better way of designing a solution for this scenario.
Thanks! -Eric.
PS: bonus points if you have any insight at all on the casting error I mentioned!!
-- create table
create table LotsOData(
    pk int identity,
    dataname nvarchar(16) not null,
    datagroup uniqueidentifier,
    datavalue sql_variant
);

-- lot of inserts
declare @group_a uniqueidentifier, @group_b uniqueidentifier, @group_c uniqueidentifier;
set @group_a = newid();
set @group_b = newid();
set @group_c = newid();

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_int', @group_a, 1
union all select 'some_int', @group_b, 2
union all select 'some_int', @group_c, 3

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_char', @group_a, 'a'
union all select 'some_char', @group_b, 'b'
union all select 'some_char', @group_c, 'c'

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_string', @group_a, 'abc'
union all select 'some_string', @group_b, '!@#'
union all select 'some_string', @group_c, 'xyz'

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_float', @group_a, 1.23
union all select 'some_float', @group_b, 2.34
union all select 'some_float', @group_c, 3.45

insert into LotsOData (dataname, datagroup, datavalue)
select 'some_datetime', @group_a, cast('01/01/2001 01:00:00' as datetime)
union all select 'some_datetime', @group_b, getdate()
union all select 'some_datetime', @group_c, cast('01/01/2009 01:00:00' as datetime)

-- do some big ugly query:
select
    cast(a.datavalue as datetime) as datatime_data,
    cast(b.datavalue as int) as int_data,
    cast(c.datavalue as char(1)) as char_data,
    cast(d.datavalue as nvarchar(max)) as string_data,
    cast(e.datavalue as float) as float_data,
    cast(b.datavalue as int) * cast(e.datavalue as float) as calc_data
from
    ( select datavalue, datagroup from LotsOData where dataname = 'some_datetime' ) a
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_int' ) b on b.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_char' ) c on c.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_string' ) d on d.datagroup = a.datagroup
    inner join ( select datavalue, datagroup from LotsOData where dataname = 'some_float' ) e on e.datagroup = a.datagroup
where cast(a.datavalue as datetime) between '01/01/2006' and '01/01/2008';
Any design suggestions for the best way to architect this report using SQL Reporting Services 2005 are appreciated!
My website features a catalog of roughly 50,000 items, each of which may appear in a list of search results or in a detailed view. There are counters on the pages that update totals for such appearances and track other item-specific information in several tables in a SQL database. The catalog of items changes frequently, so the list of item IDs is never exactly the same from month to month.
I've been asked to produce a monthly report of this data for each of the items in the catalog, with reports for the current and previous months (for many years) accessible at all times. Some -- but not all -- items are useful for one purpose or another and so can be considered as belonging to a group of items. Although I have not yet been asked to create a report that aggregates the values for all Group members into a single report for that Group, I can clearly see it would be valuable and will be requested soon.
To ensure the report captures the data for an entire month, it must be run at the very end of each month. That means I will need to run the report using a Schedule that kicks off the process at 12:01am every 1st of the month. The report must be processed and stored for later retrieval and rendering on demand.
Considering the number of items and the indefinite length of time the report data must be retained, my question is really what's the best way to set all this up?
Should I create a report for each item separately? That would mean the scheduled task would have to somehow discover the current list of item IDs (which is available via query from the database) and create and process (but not render) a report for each (passing the item ID as a report parameter?), adding it to the report history. Although each report would be small and take only a short time to run, overall that seems like it would take a long time to run and create a huge number of reports to store each month.
Or should I create a single 'master' report that contains all the data for every item for the month, and then use the item ID as a filter on the data when it is rendered? While that means only one report is created each month and added to the history, it would be a much larger report and take much longer to run (with more potential for timeouts and errors to scuttle the whole report). It also means all the data for the entire report has to be loaded every time the report is rendered, even though only 1/50,000 of the data (the data for 1 of the 50,000 items) will actually be viewed with any given rendering. That seems overly cumbersome, slow, and wastefully bandwidth-intensive.
Any alternatives, suggestions, considerations, etc. -- all welcome!
I need to create a text file using information from SQL tables/views in the following format... Can anyone recommend a direction or procedure to look into, i.e., SQL script, custom DTS, etc.? The items in parentheses identify specific portions of the text file.
Hi, I have two tables, Table A and B, below with some dummy data...

Table A (contains specific unique settings that can be requested)
Id, SettingName
1, weight
2, length

Table B (contains the setting values; here 3 values relate to weight and 1 to length)
Id, Brand, SettingValue
1, A, 100
1, B, 200
1, null, 300
2, null, 5.3

(There is also a list of Brands available in another table). No primary keys / referential integrity has been set up yet. Basically, depending upon the Brand requested, a different setting value will be present. If a particular brand is not present (signified by a null in the Brand column in table B), then a default value will be used. Therefore if I request the weight and pass through a Brand of A, I will get 100. If I request the weight but do not pass through a brand (i.e. null) I will get 300. My question is, what kind of integrity can I apply to avoid the user specifying duplicate Ids and Brands in table B? I cannot apply a composite key on these two fields as a null is present. Table B will probably contain about 50 rows and probably 10 of them will be brand specific. The reason it's done like this is that in the calling client code I want to call some function, e.g. getsetting(weight) .... result = 300. Or, if it is brand specific, getsetting(weight, A) ..... result = 100. Any advice on integrity or table restructuring would be greatly appreciated. It's SQL 2000 SP3. Thanks, brad
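One idea I have been toying with (I believe this is SQL Server specific behaviour, so please correct me if I'm wrong): a UNIQUE constraint in SQL Server treats NULL like any other value for uniqueness purposes, so a composite unique constraint on (Id, Brand) would still allow exactly one "default" row with a null Brand per Id while rejecting duplicates. Using TableB to stand for table B:

alter table TableB
    add constraint UQ_TableB_Id_Brand unique (Id, Brand);

-- both of these succeed (one brand-specific row, one default row)
insert into TableB (Id, Brand, SettingValue) values (1, 'A', 100);
insert into TableB (Id, Brand, SettingValue) values (1, null, 300);

-- a second default row for Id 1 is rejected by the constraint
insert into TableB (Id, Brand, SettingValue) values (1, null, 999);

Would that be an acceptable way to enforce it on SQL 2000, or is restructuring (e.g. a real 'Default' row in the Brands table instead of null) the better route?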
I am currently working on a PHP based website that needs to be able to draw from Oracle, MS SQL Server, MySQL and, given time and demand, other RDBMS. I took a lot of time and care creating a flexible and solid wrapper and am deep into coding. The only problem is I noticed VARCHAR fields being drawn from SQL Server 2000 are being truncated to 255 characters. I searched around php.net and found the following: "Note to Win32 Users: Due to a limitation in the underlying API used by PHP (MS DbLib C API), the length of VARCHAR fields is limited to 255. If you need to store more data, use a TEXT field instead." (http://www.php.net/manual/en/functi...ield-length.php). The only problem with this advice is TEXT fields seem to be limited to 16 characters in length, and I am having similar results in terms of truncation with other character based fields that can store more than 255 characters. I am using PHP 4.3.3 running on IIS using the php_mssql.dll extensions and the functions referenced here: http://www.php.net/manual/en/ref.mssql.php. What are my options here? Has anybody worked around this or am I missing something obvious? James
When using DTS (in SQL 7) to export via OLE DB a large varchar to a text file, it clips it at 255 chars. No other data access drivers seem to work, either. This is lame! I cannot use bcp as a work around, because i want to use quoted comma-delimited, which it doesn't support, and I am using query-based export, where the query calls a stored proc, which bcp also doesn't support.
Are there any new versions of MDAC that fix this? Anyone know a workaround? My current hack fix is to split my field into 2, but this is a grubby fix that hassles my recipients.
This is a pretty fundamental limitation to a major product!
I have a varchar(900), which means that I can use 900 bytes, so if I am not wrong, if the characters are Unicode I can only use 450, because each character needs two bytes. I have a database with a column that uses the collation general_latin_CI_AI, but I don't know if this collation uses 1 byte per character or 2 bytes per character. How can I know how many bytes a character of a varchar column needs?
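One way I thought of checking this empirically (a rough sketch; the table and column names are made up, so adjust to your own): LEN returns the number of characters while DATALENGTH returns the number of bytes actually stored, so comparing them shows how many bytes each character takes:

select top (10)
    MyColumn,
    len(MyColumn)        as char_count,
    datalength(MyColumn) as byte_count
from dbo.MyTable;

From what I understand, with a Latin collation a varchar column stores 1 byte per character, and it is nvarchar that stores 2 bytes per character, but I would like confirmation for my collation.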
What's the most efficient way to store the following information:
* Table contains 1 million listings
* Each listing can be geo-targeted to any of the 200+ countries
* Searches return listings based on location
Storage options:
Option #1 (normalized)
* Listings (PK listingID int) [1 million rows]
* ListingLocations (listingID, locationID) [could be up to 200 million rows]

Option #2 (denormalized)
* Listings (PK listingID int, binary(32) with bit-mask consisting of 200 bits, one for each location)
Usage: Usually the query will simply lookup listings based on some keywords. It will get back 50-200 listings. Then the application (C#) will filter the listings based on location.
Did anyone have experience with similar structures? Which option is more efficient?
I know that using the intersection table in Option #1 is the "proper" relational-DB way of doing things. However, I do not like the idea of storing the listingID so many times (once for each locationID).
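For reference, this is how I picture Option #1 if I go the normalized route (a sketch only; the real listings table obviously has more columns):

create table Listings (
    listingID int not null primary key
    -- ... other listing columns ...
);

create table ListingLocations (
    listingID  int not null references Listings (listingID),
    locationID smallint not null,           -- one of the 200+ countries
    primary key (listingID, locationID)     -- doubles as the lookup index
);

At up to 200 million narrow rows that is still a lot of storage for what is conceptually a yes/no flag per country, which is what keeps the bit-mask idea tempting.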
I want to log errors to a table. If the error is with a URL, I want to store the URL. These URLs can be very large, hundreds of characters, but I only need to store it if it causes the error, which should be very infrequent. Which is the better design:
1. Create a large varchar field in the log table to hold the URL, or null if the error wasn't with the URL.
2. Create a foreign key field in the log table to a second URL table, which has a unique ID and a large varchar, and only create a record in this table if the error is with the URL.
One concern I have with design 2 is that there could be many other fields that are infrequent. Do I create a separate table for every one?
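To make design 2 concrete, I'm picturing something along these lines (names invented for the example):

create table ErrorUrls (
    UrlID int identity(1,1) primary key,
    Url   varchar(2048) not null
);

create table ErrorLog (
    ErrorID   int identity(1,1) primary key,
    ErrorDate datetime not null default getdate(),
    Message   varchar(1000) not null,
    UrlID     int null references ErrorUrls (UrlID)   -- populated only when the error involves a URL
);

Design 1 would instead put a nullable varchar(2048) URL column directly on ErrorLog. Given how rarely the URL would be populated, is the extra table and join actually buying me anything?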
I have looked far and wide and have not found anything that works to allow me to resolve this issue.
I am moving data from DB2 using the MS OLEDB Provider for DB2. The OLEDB source sees the column of data as DT_TEXT. I set up a destination to SQL Server 2005 and everything looks good until I try to run the package.
I get the error: [OLE DB Source [277]] Error: An OLE DB error has occurred. Error code: 0x80040E21. An OLE DB record is available. Source: "Microsoft DB2 OLE DB Provider" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
[OLE DB Source [277]] Error: Failed to retrieve long data for column "LIST_DATA_RCVD".
[OLE DB Source [277]] Error: There was an error with output column "LIST_DATA_RCVD" (324) on output "OLE DB Source Output" (287). The column status returned was: "DBSTATUS_UNAVAILABLE".
[OLE DB Source [277]] Error: The "output column "LIST_DATA_RCVD" (324)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "LIST_DATA_RCVD" (324)" specifies failure on error. An error occurred on the specified object of the specified component.
[DTS.Pipeline] Error: The PrimeOutput method on component "OLE DB Source" (277) returned error code 0xC0209029. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Any suggestions on how I can get the large string data in the varchar column in DB2 into the varchar(max) column in SQL Server 2005?
I am trying to create a stored procedure inside of the SQL Management Studio console and I keep getting errors. Here's my stored procedure.
CREATE PROCEDURE [dbo].[sqlOutlookSearch]
    -- Add the parameters for the stored procedure here
    @OLIssueID int = NULL,
    @searchString varchar(1000) = NULL
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    IF @OLIssueID <> 11111
        SELECT * FROM [OLissue], [Outlook]
        WHERE [OLissue].[issueID] = @OLIssueID
          AND [OLissue].[issueID] = [Outlook].[issueID]
          AND [Outlook].[contents] LIKE + ''%'' + @searchString + ''%''
    ELSE
        SELECT * FROM [Outlook]
        WHERE [Outlook].[contents] LIKE + ''%'' + @searchString + ''%''
END
And the error I kept getting is:
Msg 402, Level 16, State 1, Procedure sqlOutlookSearch, Line 18
The data types varchar and varchar are incompatible in the modulo operator.
Msg 402, Level 16, State 1, Procedure sqlOutlookSearch, Line 21
The data types varchar and varchar are incompatible in the modulo operator.
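In case it clarifies what I'm trying to end up with: the search is meant to be an ordinary wildcard match, which on its own (outside of dynamic SQL) would normally be written with single-quoted literals, e.g.:

SELECT * FROM [Outlook]
WHERE [Outlook].[contents] LIKE '%' + @searchString + '%'

So I suspect the doubled-up quotes and the stray + after LIKE in my version are what the parser is reading as a modulo expression, but I would appreciate confirmation that this is the issue.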
I am querying a tableA with 1.8 million rows; it has id as its primary key and is a clustered index. I need to select all rows ordered by lastname. It's taking me 45 seconds. Is there anything I can do to optimize the query? Will creating a fulltext index on lastname help? If so, can you give me an example of how to create a full text index on lastname?
[Project1].[Id] AS [Id], [Project1].[DirectoryId] AS [DirectoryId], [Project1].[SPI] AS [SPI], [Project1].[FirstName] AS [FirstName], [Project1].[LastName] AS [LastName], [Project1].[NPI] AS [NPI], [Project1].[AddressLine1] AS [AddressLine1], [Project1].[AddressLine2] AS [AddressLine2],
Hello Everyone, I have a SQL 6.5 database that is about to grow beyond the size of its current volume. I have 3 volumes of 20GB each, 2 of which aren't being used. What do I need to do to ensure that I can expand the device across multiple volumes?
We have our production SQL 2005 server (64-bit Standard Ed.) attached to an iSCSI EqualLogic SAN. We have set up 2 new servers and installed the cluster service. My question is: can I install SQL 2005 in this cluster environment and later on disconnect the data and log volumes from the production server, attach those volumes to the cluster and reattach the DBs? The reason we need to do it like that is that we don't have enough spare space in the SAN to initially create these 2 volumes in the cluster.
Any ideas/suggestions would be greatly appreciated.
Soon, I will be migrating SQL cluster volumes from our old SAN to the new one. I have an idea of how to do it, but I just wanted some feedback. Yes, I know the best way would be to set up a new cluster using the new SAN and migrate the DB, but unfortunately I don't have that luxury. Here's my plan...
1. Add new storage to the cluster, ensuring the drives are active cluster resources and dependencies match the old resources
2. Back up DB
3. Shut down SQL Server
4. Copy all files & folders from old storage to new storage
5. Reassign drive letters to make new storage match the old configuration
6. Start SQL Server
In theory, I think this will be fine because as long as SQL sees the correct drive letters, it should function properly. Just concerned about the quorum portion of the cluster.
I'm looking at using Cluster Shared Volumes on a new Windows Server 2012/SQL Server 2014 cluster. Each instance is going to be configured to use cluster shared volumes. Is there any reason why Availability Groups couldn't be used in conjunction with Cluster Shared Volumes?
I have an existing database with approx 500,000 rows and accessed by a few hundred users per day creating approx 1,000 new records per day, plus typical reporting - relatively low volume stuff for SQL Server. I'm about to add a process that will be importing data daily from legacy databases and summarizing it for reporting purposes, integrating it with the existing database. This volume of data will be considerably higher, perhaps 100,000+ rows per day, which will be deleted once it has been summarized and the results written to some intermediate tables. Is there any concern about mixing different levels of volume within one database? As I'll be creating lots of rows daily and then deleting them, I was wondering about fragmentation, transaction logging etc. and whether having this processing in a separate database from the main application would be 'better'.
I've been following Scott Mitchell's tutorials on Data Access, and in Tutorial 1 (Step 5) he suggests using SQL subqueries in TableAdapters in order to pick up extra information for display using a datasource. I have two tables for a gallery system I'm building: one called Photographs and one called MS_Photographs which has extra information about certain images. When reading the MS_Photographs data I also want to include a couple of fields from the related Photographs table. Rather than creating a table adapter just to pull this data I wanted to use the existing MS_Photographs adapter with a query such as...

SELECT CAR_MAKE, CAR_MODEL,
    (SELECT DATE_TAKEN
     FROM PHOTOGRAPHS
     WHERE (PHOTOGRAPH_ID = MS_PHOTOGRAPHS.PHOTOGRAPH_ID)) AS DATE_TAKEN,
    (SELECT FORMAT
     FROM PHOTOGRAPHS
     WHERE (PHOTOGRAPH_ID = MS_PHOTOGRAPHS.PHOTOGRAPH_ID)) AS FORMAT,
    (SELECT REFERENCE
     FROM PHOTOGRAPHS
     WHERE (PHOTOGRAPH_ID = MS_PHOTOGRAPHS.PHOTOGRAPH_ID)) AS REFERENCE,
    DRIVER1, TEAM, GALLERY_ID, PHOTOGRAPH_ID
FROM MS_PHOTOGRAPHS
WHERE (GALLERY_ID = @GalleryID)

This works, but I wanted to know if there's a way to get all of the fields using one subquery instead of three? I did try it but it gave me errors for everything I could think of. Is using a subquery like the above the best way when you want this many fields from a secondary table, or should I be using another approach? I'm using classes for the BLL as well and wondered if there's a way to do it at that stage instead?
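For comparison, the alternative I keep wondering about is a plain join instead of the three correlated subqueries, something like this (assuming PHOTOGRAPH_ID is unique in PHOTOGRAPHS, so the join cannot multiply rows):

SELECT
    MP.CAR_MAKE, MP.CAR_MODEL,
    P.DATE_TAKEN, P.FORMAT, P.REFERENCE,   -- the three extra fields in one go
    MP.DRIVER1, MP.TEAM, MP.GALLERY_ID, MP.PHOTOGRAPH_ID
FROM MS_PHOTOGRAPHS MP
LEFT JOIN PHOTOGRAPHS P
    ON P.PHOTOGRAPH_ID = MP.PHOTOGRAPH_ID
WHERE MP.GALLERY_ID = @GalleryID

Is there any reason a TableAdapter would be unhappy with that compared with the subquery version?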
I have 2 flat files to load into a datamart via SSIS. I need to implement the following:
1. How can I prevent loading the same file again?
2. If by chance wrong data has been loaded, how can I roll back?
Kindly guide ASAP as I have to implement these. JigJan
I have to retrieve data from a KSAM (kerridge) database and can only use a file dsn ODBC connection to connect to the database.
I can get access to the database through Excel and thereby see the tables, but when I try to open a connection through the development environment, it keeps my machine busy for what seems to be an eternity with no result.
I want to use SSIS to extract the data to a SQL 2005 database but will need to get my connections/connection managers to work.
Is there any advice that anyone can give me on perhaps the best approach to the data extract? Regards, Mike
Hi, is there a programmatic way to convert the results of a SQL data set to XLS, CSV, etc.? Ideally a user would be able to make a selection to view the data (the result set has, e.g., make, model, year, condition, viewed in a datagrid or similar control) and then be able to export the file to the format they choose, and have a download box pop up from the browser to download the file. E.g. Export this data to: __ XLS __ CSV __ TXT. I know DTS can do this, but any advice on how to encapsulate this in a C# web app would be greatly appreciated! Thanks!