I'm reporting from a Microsoft SQL database (poorly documented, unfortunately) and would like to find a third-party application to help me rapidly build SELECT queries. The ability to browse a field's data from the interface would be a plus.
What are the best alternatives for rapidly creating these queries from some sort of builder or wizard?
I've noticed lately that my queries through ADO/VB are taking a lot longer to process at certain times. The query and the result set never change; it's just that at certain times the query takes a lot longer than usual. I thought I might need more licenses, or that it might be network traffic. I currently use MS SQL Server 2000 Small Business Edition (5 CALs).
Does anyone have any information about performance problems caused by licensing issues?
I'm working on an app that contains a series of 100 very compute-intensive queries that altogether take 30 minutes on my laptop machine. The app may be atypical in that there is just this series of expensive queries, and not lots of simultaneous simple queries.
To speed up testing, I bought a Dell PE2900 Quad-Core Xeon with 16GB of RAM and four 15000rpm hard drives. It runs 64-bit Windows 2003 server and 64-bit SQL Server 2005 dev edition.
Unfortunately, performance is worse than on my laptop. Task Manager reports an almost constant 13 GB of available physical memory, while processor usage hovers around 30%. Page file usage is constant at 3 GB.
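A note for anyone diagnosing the same pattern: 30% CPU with 13 GB sitting free usually means the queries are waiting on something (disk I/O, parallelism, locking) rather than starved for resources, so extra cores and RAM buy nothing by themselves. On SQL Server 2005, the wait statistics DMV is a reasonable first look at where the time goes; a minimal sketch:

SELECT TOP 10
       wait_type,
       wait_time_ms,
       waiting_tasks_count
FROM   sys.dm_os_wait_stats
WHERE  wait_type NOT IN ('SLEEP_TASK', 'LAZYWRITER_SLEEP', 'BROKER_TASK_STOP')
ORDER BY wait_time_ms DESC

Heavy CXPACKET waits would point at parallelism settings, heavy PAGEIOLATCH waits at the disk layout. It is also worth confirming 'max server memory' hasn't been left at a low value, since all that free memory means SQL Server isn't using it.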
If you were doing paging of results on a web page and were interested in grabbing, say, records 10-20 of a result set, but also wanted to know the total number of records in the result set (so you could know the total number of pages), would it be better to query the DB table twice: once for COUNT(*), and again for the records for the current page? Or better to create a temp table, select the records into it, and then get COUNT(*) and the page results from the temp table? I saw an example in a book that made a temp table to do this, and to me it seemed like it would be slower. I don't get the reason for a temp table. Anyone have any ideas?
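For what it's worth, on SQL Server 2005 and later you can get the page and the total in a single statement with ROW_NUMBER() and COUNT(*) OVER(); the Articles table and its columns here are made up for illustration:

WITH Numbered AS
(
    SELECT  ID,
            Title,
            ROW_NUMBER() OVER (ORDER BY ID) AS RowNum,   -- position within the full set
            COUNT(*)     OVER ()            AS TotalRows -- same total repeated on every row
    FROM    dbo.Articles
)
SELECT  ID, Title, TotalRows
FROM    Numbered
WHERE   RowNum BETWEEN 10 AND 20;

On SQL Server 2000 you are stuck with two queries or a temp table; the temp-table version in the book mainly pays off when the count and the page must come from exactly the same snapshot of a busy table.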
I was wondering if there was a different approach I should take in appending data to a table...
My destination table has about 94+ million records in it, and I have been taking two approaches to getting new files into this table:
1) I do a data pump task in a DTS to import the file to a trans (temp) table, which is truncated every time, and then do an INSERT INTO statement from the temp table to my destination table.
The import to the trans table only takes a few minutes (about 1 - 2 million records per file, though with short record lengths), but the INSERT INTO statement takes upwards of 6 hours to append.
2) I have tried doing a bulk insert task, going directly to the destination table (which defeats the purpose of my trans table for checking the data beforehand, but I feel the data is clean at this point).
I am running the bulk insert right now, and it's been running for over 3 hours... so I'm going to assume it will take just as long as the INSERT INTO statement did before.
My destination table does not have any indexes in it at all, and I don't need to do any transformations to the data when bringing it into SQL since the data is clean. Also, I have a default value constraint on one of my fields on the destination table.
Plus there are other people and applications hitting the server, which could impact the overall processing, but nothing out of the ordinary is going on with the server today. I know there are only so many ways to get a file into a table... but maybe someone knows a different way I should try this.
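One technique worth trying is committing the append in chunks, so no single transaction has to log the whole 1-2 million rows at once. The ID identity column and the table/column names in this sketch are assumptions:

DECLARE @MinID int, @MaxID int, @Batch int
SELECT  @MinID = MIN(ID), @MaxID = MAX(ID) FROM TransTable
SET     @Batch = 100000

WHILE @MinID <= @MaxID
BEGIN
    INSERT INTO DestTable (Col1, Col2, Col3)
    SELECT Col1, Col2, Col3
    FROM   TransTable
    WHERE  ID >= @MinID AND ID < @MinID + @Batch   -- one modest transaction per loop

    SET @MinID = @MinID + @Batch
END

It is also worth confirming there are no triggers on the destination table and that the data/log files aren't auto-growing in small increments during the load; either can account for hours of difference.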
I want to delete 30-40 million rows from a transactional table. What's the fastest way to delete these rows? Just deleting 300,000 rows takes 30 minutes. Also, I don't want to truncate the table.
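A batched delete is the usual answer: each chunk commits quickly, keeps the transaction log small, and lets other activity breathe between chunks. A sketch (the table name and WHERE clause are placeholders; SET ROWCOUNT works on SQL Server 2000, while 2005 and later prefer DELETE TOP):

SET ROWCOUNT 50000                 -- limit each DELETE to 50,000 rows (SQL 2000 style)

DECLARE @Deleted int
SET @Deleted = 1

WHILE @Deleted > 0
BEGIN
    DELETE FROM dbo.BigTable
    WHERE  SomeDate < '20050101'   -- hypothetical criteria for the unwanted rows

    SET @Deleted = @@ROWCOUNT
END

SET ROWCOUNT 0                     -- always reset

If the filter column is indexed, each chunk stays fast; if not, an index on it usually pays for itself before the delete even starts.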
Has anyone successfully used Cherry's OLE DB provider for MySQL to create a linked server from MS SQL Server 2005 to a Red Hat Linux platform running MySQL?
I cannot get it to work.
I've created a UDL which tests fine. It looks like this:
[oledb]
; Everything after this line is an OLE DB initstring
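For comparison, the T-SQL route to the same thing looks roughly like this; the linked server name, host, database, and especially the provider ProgID ('MySQLProv' here) are assumptions you would need to verify against what Cherry's provider actually registers on your machine:

EXEC sp_addlinkedserver
    @server     = 'MYSQL_LINK',     -- name you'll use in four-part queries (assumed)
    @srvproduct = 'MySQL',
    @provider   = 'MySQLProv',      -- ProgID of the OLE DB provider (verify)
    @datasrc    = 'redhatbox',      -- host running MySQL (assumed)
    @catalog    = 'mydb'            -- default database (assumed)

EXEC sp_addlinkedsrvlogin
    @rmtsrvname  = 'MYSQL_LINK',
    @useself     = 'false',
    @rmtuser     = 'mysqluser',     -- MySQL account (assumed)
    @rmtpassword = 'mysqlpass'

If the UDL tests fine but the linked server fails, the usual suspects are the provider's 'Allow inprocess' option (under Server Objects > Linked Servers > Providers) and a 32/64-bit mismatch between SQL Server and the provider.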
We have an XML column in a SQL Server 2005 table. Each row of this table contains one XML document.
I want to shred values from the XML documents and process these within a Data Flow. I want the Data Flow to execute once across a record set comprised of all of the XML documents.
I can shred the XML using a For-Each loop and XML Task. I'm kinda stuck on how I then get the data from variables into a Recordset or similar so that I can process this within single iteration of a Data Flow.
Or - is my approach incorrect? I seem to be building a verbose and clunky solution to this problem. I know I could accomplish the same in a pretty simple SQL statement using .value on the XML column... am I missing something? Is a SQL query just better suited to this problem?
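On the last point: if the goal is just rows out of XML, the set-based query is usually the simpler and faster tool, and its output can feed a Data Flow directly as an OLE DB Source. A sketch, assuming a hypothetical table docs(XmlCol xml) whose documents look like <order><item id="1" qty="5"/></order>:

SELECT  x.item.value('@id',  'int') AS ItemId,
        x.item.value('@qty', 'int') AS Qty
FROM    dbo.docs
CROSS APPLY XmlCol.nodes('/order/item') AS x(item);

nodes() turns each matching element into a row and value() pulls typed scalars from it, so the whole record set arrives in one go with no For-Each loop or variable plumbing.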
I am trying to transfer data from a table in a SQL Server database to an equivalent table in a MySQL database that resides on a different server.
I have previously used a PHP script to do the job, but it proved very slow. The table in question is growing and has 9.5 million records in this case, and the procedure may need to be carried out a number of times per day, so I am hoping to find a better solution.
I am wondering if anyone can point me in the direction of an alternative method.
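One direction worth testing: bulk-export from SQL Server with bcp and bulk-load into MySQL with LOAD DATA INFILE, so both ends work in bulk rather than row by row. The paths, names, and delimiters below are assumptions:

-- 1) Export to a tab-delimited file from a Windows command prompt:
--    bcp "SELECT col1, col2 FROM mydb.dbo.mytable" queryout c:\dump\mytable.txt -c -S myserver -T

-- 2) Copy the file to the MySQL host (or use LOCAL), then bulk-load it:
LOAD DATA LOCAL INFILE '/tmp/mytable.txt'
INTO TABLE mytable
FIELDS TERMINATED BY '\t'
LINES TERMINATED BY '\r\n';   -- match whatever terminators bcp actually wrote

For millions of rows this usually runs in minutes rather than hours, and the two steps are easy to schedule several times per day.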
Hello. I use VS2005 and I was trying to make a little program able to connect to a MySQL 5.0.18 database through MyODBC 3.51.12. I can connect to the database and I can load a table into the DataGridView control, but I can't update the table: the wizard didn't generate the Update method. Even if I reconfigure the TableAdapter, the wizard generates only the SELECT statement, the table mappings, the Fill method, and the Get method. The table is very simple: it has 3 columns and the first is a primary key.
If I create an identical table in an Access database and make a program that connects through ODBC, I don't have the problem: the wizard also generates the UPDATE, INSERT and DELETE statements.
Hello, I have an application that will be logging user activity from several Windows 2003 terminal servers to a SQL Server 2000 database. This information will be retrieved by monitoring the Security logs of these servers (this part I already know how to accomplish).
A table in the database, tblLogEntries, will contain the following fields:
- ID = autoincrementing int
- LogTime = date/time the user activity was recorded in the security log
- Username = user's login ID that the activity was recorded with
- Type = int, referencing a lookup table with the values of Logon, Logoff, and possibly other future items
- Server = the name of the server the activity was recorded on
The only question I have is: can you offer a way to compute the total user login time during a given range using T-SQL?
For example, given the table data:
ID  LogTime              Username  Type    Server
1   10-10-2003 8:30:00   Tom       Logon   SERVER-A
2   10-10-2003 8:45:00   Sarah     Logon   SERVER-A
3   10-10-2003 16:45:00  Tom       Logoff  SERVER-A
4   10-10-2003 17:00:00  Sarah     Logoff  SERVER-A
5   10-11-2003 8:30:00   Tom       Logon   SERVER-A
6   10-11-2003 8:45:00   Sarah     Logon   SERVER-A
7   10-11-2003 16:30:00  Sarah     Logoff  SERVER-A
8   10-11-2003 17:15:00  Tom       Logoff  SERVER-A
how would you produce the output:
User   Logon Total Time for SERVER-A
Tom    17.0 hrs
Sarah  16.0 hrs
I know I can handle this type of processing on my ASP.NET front end, but I'm curious as to how easily it can be done by the database itself. Thanks in advance for your assistance.
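One set-based way to do it: for each Logon row, find the first Logoff for the same user and server that follows it, and sum the gaps. A sketch, assuming the Type lookup resolves to 1 = Logon and 2 = Logoff (substitute your real lookup values) and that every logon has a matching logoff:

SELECT  l.Username,
        SUM(DATEDIFF(minute, l.LogTime,
            (SELECT MIN(o.LogTime)
             FROM   tblLogEntries o
             WHERE  o.Username = l.Username
               AND  o.Server   = l.Server
               AND  o.Type     = 2              -- Logoff (assumed value)
               AND  o.LogTime  > l.LogTime))) / 60.0 AS TotalHours
FROM    tblLogEntries l
WHERE   l.Type   = 1                            -- Logon (assumed value)
  AND   l.Server = 'SERVER-A'
GROUP BY l.Username

Against the sample data this returns 17.0 for Tom and 16.0 for Sarah. Unmatched logons (crashes, still-open sessions) produce NULL gaps that SUM quietly ignores, so decide whether that is the behavior you want.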
I want to process data from a source table to a destination table without using a cursor.
At this point, we are creating a temp table and inserting data from the source into it; once the data is in the temp table, we use a WHILE loop to process the records one by one into the destination table.
While executing the stored procedure, we noticed that there are a few invalid records in the source table, and the stored procedure terminates on those records.
Is there a better approach to log the INVALID data and resume processing with the next record instead of terminating?
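One pattern that avoids the loop entirely: write the invalid rows to a log table first, then insert the complement in a single set-based statement. Table, column, and rule names here are hypothetical:

-- 1) Log whatever fails validation (the WHERE clause is a placeholder for your rules)
INSERT INTO dbo.InvalidRowLog (SourceID, Col1, Col2, LoggedAt)
SELECT  s.SourceID, s.Col1, s.Col2, GETDATE()
FROM    dbo.SourceTable s
WHERE   s.Col1 IS NULL

-- 2) Move everything else in one shot
INSERT INTO dbo.DestinationTable (Col1, Col2)
SELECT  s.Col1, s.Col2
FROM    dbo.SourceTable s
WHERE   s.Col1 IS NOT NULL

Because the bad rows never reach the destination insert, nothing terminates mid-run, and the log table tells you exactly what was skipped and why.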
I want to do conditional processing depending on values in the rows of a CTE. For example, is the following kind of thing possible with a CTE?:
WITH Orders_CTE (TerritoryId, ContactId) AS
(
    SELECT TerritoryId, ContactId
    FROM   Sales.SalesOrderHeader
    WHERE  (ContactId < 200)
)
IF Orders_CTE.TerritoryId > 3
BEGIN
    /* Do some processing here */
END
ELSE
BEGIN
    /* Do something else here */
END
When I try this, I get a syntax error near the keyword 'IF'
Any ideas? I know this kind of thing can be done with a cursor but wanted to keep with the times and avoid using one!
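The underlying rule is that a CTE must be followed by exactly one SELECT/INSERT/UPDATE/DELETE, so procedural IF/ELSE can't branch on its rows. The per-row equivalent is a CASE expression inside that single statement; a sketch against the same CTE:

WITH Orders_CTE (TerritoryId, ContactId) AS
(
    SELECT TerritoryId, ContactId
    FROM   Sales.SalesOrderHeader
    WHERE  ContactId < 200
)
SELECT  TerritoryId,
        ContactId,
        CASE WHEN TerritoryId > 3
             THEN 'processing A'       -- placeholder for the TerritoryId > 3 branch
             ELSE 'processing B'       -- placeholder for the other branch
        END AS Action
FROM    Orders_CTE;

If the two branches genuinely need different statements (not just different values), select the CTE into a temp table first and run the IF/ELSE against that.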
I have a table creation script and an insert-records script, both in MySQL format. What changes do I have to make so that I can run these scripts on SQL Server 2000? If anybody has successfully done this, please tell me the procedure.
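The usual mechanical changes are removing backticks (or swapping them for brackets), AUTO_INCREMENT to IDENTITY, and dropping the ENGINE/TYPE clause; a small before-and-after sketch with a made-up table:

-- MySQL original:
--   CREATE TABLE `orders` (
--     `id`   INT AUTO_INCREMENT PRIMARY KEY,
--     `note` TEXT
--   ) ENGINE=MyISAM;

-- SQL Server 2000 equivalent:
CREATE TABLE orders (
    id   int IDENTITY(1,1) PRIMARY KEY,
    note text
)

INSERT statements mostly carry over, though MySQL's multi-row INSERT ... VALUES (...), (...) syntax has to be split into one INSERT per row for SQL Server 2000, and string escaping differs (\' in MySQL versus '' in T-SQL).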
Does anyone know of a reference site where I can find a reference table to get a better idea of data type conversions that I should be using?
I have a MySQL 5.0 database with a lot of tables (mostly empty) that I have already transferred to SQL Server 2005. However, I am suspicious of some of the data type conversions that SQL Server did.
I would really like a good web site to bookmark to help me with this if there is such a reference. Can anyone help?
If not, the most specific example I have right now is a MySQL column that is expected to accept currency, where the MySQL data type is "Double". SQL Server 2005 translated this as a "float" data type. I normally use a "decimal" data type.
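If it turns out the migrated column should be exact rather than approximate, the fix is a one-line retype; the table and column here are hypothetical, and precision/scale should match your data:

ALTER TABLE dbo.Prices
ALTER COLUMN UnitPrice decimal(19, 4) NOT NULL

float is a binary approximation and can't represent amounts like 0.10 exactly, which is why decimal (or money) is the usual choice for currency.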
Can anybody tell me how to import a table with a text column from SQL Server 2000 to MySQL 4.0.17? I tried this using an ODBC connection but got an error message saying, "Query-based Insertion or updating of BLOB values is not supported".
BEGIN
    TRUNCATE TABLE A

    INSERT INTO A (Col1, Col2, Col3...)
    SELECT Value1, Value2, Value3...
    FROM Table B
END
The insert operation takes approximately 3.5 minutes to execute. What's occurring is that the table is immediately truncated, and then there are no rows in it for those 3.5 minutes.
How can I avoid having this gap, where there are no rows in the table, during the job execution? The table could be locked, but that doesn't seem like the best solution.
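One option is simply to wrap both statements in one transaction; TRUNCATE TABLE is transactional in SQL Server, so under the default READ COMMITTED isolation readers block until the commit instead of seeing an empty table (which brings back the locking trade-off, but makes the gap invisible rather than merely shorter):

BEGIN TRAN

    TRUNCATE TABLE A                    -- rolled back if the insert below fails

    INSERT INTO A (Col1, Col2, Col3)    -- column lists abbreviated as in the job
    SELECT Value1, Value2, Value3
    FROM   TableB

COMMIT TRAN

Another common pattern is to load a shadow copy of the table and swap names with sp_rename at the end, which shrinks the empty window from 3.5 minutes to a fraction of a second.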
In Access, if I want to update one table with information from another, all I need to do is create an Update query with the two tables, link the primary keys, and reference the source table(s)/column(s) with the destination table(s)/column(s). How do I achieve the same thing in SQL?
Regards,
Colin
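In T-SQL the same update-from-another-table query is an UPDATE with a FROM/JOIN clause; the table and column names below are placeholders:

UPDATE  d
SET     d.SomeColumn = s.SomeColumn
FROM    dbo.DestTable d
        INNER JOIN dbo.SourceTable s
            ON s.PrimaryKey = d.PrimaryKey

The join plays the role of the linked keys in the Access query designer, and only rows that match the join are touched.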
I have an internal Project Management and Scheduling app that I wrote for my company. It was written to use MySQL running on a Debian server, but I am going to move it to SQL Server 2000 and integrate it with our accounting software. The part I am having trouble with is the user login portion. I previously used this:
PHP Code:
$sql = "SELECT * FROM users WHERE username = '$username' AND user_password = PASSWORD('$password')";
Apparently the PASSWORD() function is not available when accessing SQL Server via ODBC. Is there an equivalent function I could use instead so the passwords aren't plaintext in the database? I only have 15 people using the system, so a blank password reset wouldn't be too much trouble.
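SQL Server 2000 has no supported one-way hash function (pwdencrypt exists but is undocumented and unsupported). If moving to SQL Server 2005 or later is an option, HASHBYTES covers this; a minimal sketch that ignores salting for brevity, with a varbinary(20) password column assumed:

-- store a hash instead of the plaintext
UPDATE users
SET    user_password = HASHBYTES('SHA1', 'secret')
WHERE  username = 'bob'

-- login check: hash the supplied password the same way and compare
SELECT *
FROM   users
WHERE  username = 'bob'
  AND  user_password = HASHBYTES('SHA1', 'secret')

On SQL Server 2000 itself, hashing in the application layer (PHP's sha1() before the value ever reaches the INSERT) is probably the cleanest substitute.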
The dynamic SQL is used for a linked server. Can someone help? I'm getting an error.
CREATE PROCEDURE GSCLink
(
    @LinkCompany nvarchar(50),
    @Page int,
    @RecsPerPage int
)
AS
SET NOCOUNT ON

--Create temp table
CREATE TABLE #TempTable
(
    ID int IDENTITY,
    Company nvarchar(50),
    AcctID int,
    IsActive bit
)

INSERT INTO #TempTable (Name, AccountID, Active)
--dynamic sql
DECLARE @sql nvarchar(4000)
SET @sql = 'SELECT a.Name, a.AccountID, a.Active FROM CRMSBALINK.' + @LinkCompany + '.dbo.AccountTable a LEFT OUTER JOIN CRM2OA.dbo.GSCCustomer b ON a.AccountID = b.oaAccountID WHERE oaAccountID IS NULL ORDER BY Name ASC'
EXEC sp_executesql @sql

--Find out the first and last record
DECLARE @FirstRec int
DECLARE @LastRec int
SELECT @FirstRec = (@Page - 1) * @RecsPerPage
SELECT @LastRec = (@Page * @RecsPerPage + 1)

--Return the set of paged records, plus an indication of more records or not
SELECT *, (SELECT COUNT(*) FROM #TempTable TI WHERE TI.ID >= @LastRec) AS MoreRecords
FROM #TempTable
WHERE ID > @FirstRec AND ID < @LastRec
Error: Msg 156, Level 15, State 1, Procedure GSCLink, Line 22 Incorrect syntax near the keyword 'DECLARE'.
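The parser stops there because INSERT INTO must be immediately followed by the statement that produces the rows, and here a DECLARE intervenes; the INSERT's column list also names columns (Name, AccountID, Active) that #TempTable doesn't have. Declaring the dynamic SQL first and wrapping the EXEC with the INSERT is likely the fix, along these lines:

DECLARE @sql nvarchar(4000)
SET @sql = 'SELECT a.Name, a.AccountID, a.Active
            FROM CRMSBALINK.' + @LinkCompany + '.dbo.AccountTable a
            LEFT OUTER JOIN CRM2OA.dbo.GSCCustomer b ON a.AccountID = b.oaAccountID
            WHERE oaAccountID IS NULL
            ORDER BY Name ASC'

INSERT INTO #TempTable (Company, AcctID, IsActive)   -- match the temp table's real columns
EXEC sp_executesql @sql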
What is the best approach to handle this situation? I have three different databases, each with its own stored procedure. I need to call them all at page load and piece together the data. The common denominator is the date.
2007    JAN  FEB  MAR  APR
row 1    50   60   89   63
row 2    44   21   62   46

2006    JAN  FEB  MAR  APR
row 1    60   90   65   41
row 2   984  650  452  762

Row 1 and row 2 come from two different databases and stored procedures. How can I query the data and present it as it's shown above? Thank you!
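One way to piece it together server-side is to capture each procedure's output with INSERT ... EXEC into temp tables and join on the shared date columns; every name below is hypothetical since the real procedures weren't shown:

CREATE TABLE #row1 (Yr int, Mth int, Amount int)
CREATE TABLE #row2 (Yr int, Mth int, Amount int)

INSERT INTO #row1 EXEC Db1.dbo.GetRow1Totals   -- one proc per source database
INSERT INTO #row2 EXEC Db2.dbo.GetRow2Totals

SELECT  r1.Yr, r1.Mth,
        r1.Amount AS Row1,
        r2.Amount AS Row2
FROM    #row1 r1
        INNER JOIN #row2 r2
            ON r2.Yr = r1.Yr AND r2.Mth = r1.Mth
ORDER BY r1.Yr DESC, r1.Mth

The temp-table column lists must match each procedure's result set exactly; from there, pivoting months into columns can happen in this query or in the page code.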
I was messing around with stored procedures and was wondering: is creating an SP that populates a single table for reporting a good idea?
I have ~10 queries that I currently have to run manually, and I was hoping to drop their results into a physical table and then leverage that single table to pull into Excel.
Some of my queries use virtual tables or CTEs; this is to get the aggregates set up correctly.
Essentially I am working out of a data warehouse and would like to eventually get all my queries into one SP, or something similar, and then call that for an insert.
Speaking of which, could you create an SP that has several SELECTs and then drops their output into a single table using INSERT INTO queries, so the data from all the queries lines up in the right columns?
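On that last question: yes, as long as each SELECT's output lines up column-for-column with the INSERT's column list, one procedure can fill a single reporting table from several queries. A sketch with invented names:

CREATE PROCEDURE dbo.BuildReportTable
AS
BEGIN
    SET NOCOUNT ON

    TRUNCATE TABLE dbo.ReportTable        -- rebuild from scratch each run

    INSERT INTO dbo.ReportTable (Category, Period, Amount)
    SELECT 'Sales', OrderMonth, SUM(Total)
    FROM   dbo.Orders
    GROUP BY OrderMonth

    INSERT INTO dbo.ReportTable (Category, Period, Amount)
    SELECT 'Refunds', RefundMonth, SUM(Amount)
    FROM   dbo.Refunds
    GROUP BY RefundMonth
END

Excel (or anything else) then only ever has to read dbo.ReportTable, and the CTE/virtual-table queries keep working unchanged inside their own INSERT ... SELECT.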
I have 3 queries pulling from the same table, trying to get a count for each criterion. The data comes back, but in multiple rows; I want it in one row, with the counts in separate columns. I also need to set a flag if a client placed an order within 30 days of their last purchase.
I run this SELECT once each for credit card, check, and cash purchases. I do not know how to set up a flag where the client may have ordered and paid by check or cash after 30 days from a credit card purchase. Is this something that can be done?
SELECT clientnumber,
       COUNT(DISTINCT clientnumber) AS cccnt,
       0 AS ckcnt,
       0 AS cacnt
FROM   dbo.purchases
WHERE  orderdate >= 20120101
  AND  orderdate <= 20121231
  AND  payment_type = 'CC'
GROUP BY clientnumber;
OUTPUT currently looks like this:
1234  2  0  0
1234  0  1  0
1234  0  0  4
Is it possible to get this result instead, along with a flag per the criteria above?
1234  2  1  4  Y
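Conditional aggregation collapses the three queries into one row per client; the 'CK'/'CA' codes are assumptions, and since the original compares orderdate to integers, the DATEDIFF below assumes the column is really a datetime (adjust the date math if it's an int):

SELECT  p.clientnumber,
        SUM(CASE WHEN p.payment_type = 'CC' THEN 1 ELSE 0 END) AS cccnt,
        SUM(CASE WHEN p.payment_type = 'CK' THEN 1 ELSE 0 END) AS ckcnt,   -- assumed check code
        SUM(CASE WHEN p.payment_type = 'CA' THEN 1 ELSE 0 END) AS cacnt,   -- assumed cash code
        CASE WHEN EXISTS
             (SELECT 1
              FROM   dbo.purchases a
                     INNER JOIN dbo.purchases b
                         ON  b.clientnumber = a.clientnumber
                         AND b.orderdate    > a.orderdate
                         AND DATEDIFF(day, a.orderdate, b.orderdate) <= 30
              WHERE  a.clientnumber = p.clientnumber)
             THEN 'Y' ELSE 'N'
        END AS Within30Days
FROM    dbo.purchases p
WHERE   p.orderdate >= '20120101'
  AND   p.orderdate <= '20121231'
GROUP BY p.clientnumber

The EXISTS flags any purchase made within 30 days of an earlier one, regardless of payment type; tightening it to "check or cash after a credit card purchase" just means adding payment_type conditions on a and b.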
I'm new to ADP with SQL Server, but I have to use it on a project I'm doing. One of the MUSTS for this project is the ability to update a 00 - 09 text value with the appropriate text description from another table. Easy as pie in .mdb. Of course, in the stored procedure it barks at me and tells me that an update query can only have one table. Ouch, that hurts.
I'm currently reading on the subject, but this group has been very helpful in the past. I found this link: http://www.sqlservercentral.com/col...stheeasyway.asp
Unfortunately I'm using MSDE, not Enterprise, so I don't think I can use the Query Analyzer. But I tried it in my Access ADP anyway and it barked at me. I tried to go from this:
SELECT dbo.LU_SEX.SEX_CODE, dbo.TEST.DEFECTS_DP1
FROM dbo.TEST INNER JOIN
     dbo.LU_SEX ON dbo.TEST.SEX_DP1 = dbo.LU_SEX.SEX_DEC
to this:
UPDATE dbo.TEST.SEX_DP1
SET dbo.TEST.SEX_DP1 = dbo.LU_SEX.SEX_CODE
FROM dbo.LU_SEX INNER JOIN
     dbo.TEST ON dbo.LU_SEX.SEX_DEC = dbo.TEST.SEX_DP1
Maybe I need a good book on this?
Thanks,
Charles
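For what it's worth, the second statement is close; the UPDATE target needs to be the table (or its alias), not a column. Rebuilt from the same names:

UPDATE t
SET    t.SEX_DP1 = l.SEX_CODE
FROM   dbo.TEST t
       INNER JOIN dbo.LU_SEX l
           ON l.SEX_DEC = t.SEX_DP1

The "an update query can only have one table" complaint comes from the Access designer rather than from SQL Server itself; run the statement as raw T-SQL (for example inside a stored procedure) and the FROM/JOIN form above is legal.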
Hi, I currently have three queries running separately which I'd like to join up.
However I'm sure it can be done..
Here goes:
I have one table which lists payments made (it stores a PaymentID from another table, a payment amount and an invoiceID)
Therefore there can be several records with the same PaymentID referencing different invoices (i.e. a user paying off several invoices with one payment)
I have one query which lists all of the invoices paid against one payment.
SELECT PaymentID, InvoiceID, PaymentAmount
FROM   Payments   -- table name assumed; the post doesn't give it
WHERE  PaymentID = xxx
  AND  PayType <> 'Credit'
I then have a second query which is run against each individual invoice, showing the other payments which have already been made against that invoice:
SELECT PaymentID, InvoiceID, PaymentAmount
FROM   Payments   -- same assumed table
WHERE  PaymentID <> xxx
  AND  PayType <> 'Credit'
  AND  InvoiceID = XXX
And finally I have one query which lists the credits:
SELECT PaymentID, InvoiceID, PaymentAmount
FROM   Payments   -- same assumed table
WHERE  PayType = 'Credit'
  AND  InvoiceID = XXX
All of the above lets me see a payment and which invoices have been paid against it, and then, for each invoice, any other payments which were made beforehand, and finally any credits against that invoice.
I run these from an ASP page in a loop, which is a pretty inefficient way of doing it.
I would much prefer to amalgamate the three queries above so I could see what I am paying now, what has already been paid, and what was credited against each invoice from a PaymentID... all in one query.
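One way to amalgamate them is a UNION ALL with a label column, driven off the payment in question; since the post never names the table, Payments below is a stand-in, and @PaymentID replaces the xxx placeholders:

SELECT  'Current' AS Category, PaymentID, InvoiceID, PaymentAmount
FROM    Payments
WHERE   PaymentID = @PaymentID AND PayType <> 'Credit'

UNION ALL

SELECT  'Prior', p.PaymentID, p.InvoiceID, p.PaymentAmount
FROM    Payments p
WHERE   p.PaymentID <> @PaymentID AND p.PayType <> 'Credit'
  AND   p.InvoiceID IN (SELECT InvoiceID FROM Payments
                        WHERE PaymentID = @PaymentID AND PayType <> 'Credit')

UNION ALL

SELECT  'Credit', p.PaymentID, p.InvoiceID, p.PaymentAmount
FROM    Payments p
WHERE   p.PayType = 'Credit'
  AND   p.InvoiceID IN (SELECT InvoiceID FROM Payments
                        WHERE PaymentID = @PaymentID AND PayType <> 'Credit')
ORDER BY InvoiceID, Category

One round trip returns every row the ASP loop was fetching, tagged by category, so the page just groups by InvoiceID as it renders.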
I'm working with a table with about 60 million records. This monster is growing every minute of the day as well, by 200,000 - 300,000 records/day. It's 11 columns wide, and has one index on a datetime column. My task is to create some custom reports based on three of these columns, including the datetime one.
The problem is response time. Any query executed on this table takes forever: anywhere between 30 seconds and 4 minutes. Queries like the one below, as simple as it is, can take a minute or more:
select count(dt_date) as Searches from SearchRecords where datediff(day,getdate(),dt_date)=0
As the table gets larger and larger, the response time is going to get worse and worse. Long story short: what are my options for getting query times down to just a few seconds on a table this big? So far the best I can come up with is to index the other appropriate columns (of which there is one for sure, maybe two).
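One change that helps immediately: datediff(day, getdate(), dt_date) = 0 wraps the indexed column in a function, so the index on dt_date can't be used and the whole table is scanned. Rewriting it as a plain range on the bare column makes it sargable:

DECLARE @today datetime
SET @today = DATEADD(day, DATEDIFF(day, 0, GETDATE()), 0)   -- midnight this morning

SELECT COUNT(dt_date) AS Searches
FROM   SearchRecords
WHERE  dt_date >= @today
  AND  dt_date <  DATEADD(day, 1, @today)

The same principle applies to the custom reports: filter on bare columns, and consider a composite index on the datetime column plus the other two report columns so those queries can be answered from the index alone.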