Hi All,
I am having an issue with a JDBC-based reporting tool when connected to my SQL 2005 db. Basically I am getting erratic results. Let me explain the two cases.
1) The full result set gets returned. This is good. In my trace file, I can see the cursor getting created and chunks of 50 records returning to the program. The last chunk has 10 records in it (a total of 960 records).
2) Partial results returned. 10 records, which is strangely enough the last chunk size of the full result set. No errors are flagged.
The vendor of the product assures me that they cannot replicate the problem (even given my db), and I am having trouble identifying places/settings in SQL 2005 that might help.
If it makes a difference, the dataset is a query with many LEFT OUTER JOINs, but the problem seems to stem from the inclusion of a subquery which uses a function in its FROM clause, e.g.:
SELECT blah1, blah2
FROM table1 LEFT OUTER JOIN table2 ON ....
...
WHERE EXISTS (SELECT id FROM func1(2,2) WHERE id = blah1)
If I remove the EXISTS clause, things seem to work well.
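For what it's worth, the workaround I'm considering (untested, and assuming func1's output can simply be materialized up front) is to dump the function's rows into a temp table so the EXISTS no longer touches the function directly:

-- Untested workaround sketch: materialize func1's rows first,
-- then point the EXISTS at the temp table instead of the function.
SELECT id INTO #func1_ids FROM func1(2,2)

SELECT blah1, blah2
FROM table1 LEFT OUTER JOIN table2 ON ....
...
WHERE EXISTS (SELECT id FROM #func1_ids WHERE id = blah1)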
My @@version = Microsoft SQL Server 2005 - 9.00.2050.00 (Intel X86) Feb 13 2007 23:02:48 Copyright (c) 1988-2005 Microsoft Corporation Developer Edition on Windows NT 5.1 (Build 2600: Service Pack 2)
Any help with this would be GREATLY appreciated.
Thanks,
Steele.
I have written a SQL procedure to return all the results from a table, and am writing a function to run the procedure (so that it is easier to use within my app). I have kinda got a bit confused. :oS
The SQL procedure Gallery_GetAllCetegoryPictures has one input variable (CategoryID) and returns a table; the procedure is something like SELECT * FROM Gallery WHERE CategoryID = @CategoryID.
The code I'm having trouble with is below:
Function GetAllCategoryPictures(ByVal CatID As Integer) As DataSet
    Dim MyConnection As SqlConnection
    Dim MyCommand As New SqlCommand
    Dim MyParameter As SqlParameter
    MyConnection = New SqlConnection(AppSettings("DSN"))
    MyConnection.Open()
    ' Missing pieces: point the command at the stored procedure,
    ' pass the category id, then fill and return a DataSet.
    MyCommand.Connection = MyConnection
    MyCommand.CommandText = "Gallery_GetAllCetegoryPictures"
    MyCommand.CommandType = CommandType.StoredProcedure
    MyParameter = New SqlParameter("@CategoryID", CatID)
    MyCommand.Parameters.Add(MyParameter)
    Dim MyAdapter As New SqlDataAdapter(MyCommand)
    Dim MyDataSet As New DataSet()
    MyAdapter.Fill(MyDataSet)
    MyConnection.Close()
    Return MyDataSet
End Function
Is it possible to have a Reporting Services dataset be returned from a custom assembly? I need to produce data for a graph that is not easily queried from the database and I am already familiar with using a custom assembly in an RS report.
What I would like to do is have a method that fills a dataset and returns it to the report for processing. Is this possible?
SPs in Access 2000 (SQL) / Crosstab problem / returning dataset
I've recently "upsized" from Access 97 (Jet) to Access 2000 (SQL) client/server using MS SQL Server 2000. As a result, I'm new to the concept of stored procedures. I am trying to work out a general solution to the fact that SQL Server doesn't allow an easy way to create dynamic crosstab queries (from within Access client/server).
I've included the SP code I found (sp_crosstab) to create the crosstab solution. To execute the sp_crosstab, I use another SP (execute_crosstabs) which defines the input parameters.
If I run the SPs in Query Analyzer, the results are returned as a dataset. However, if I run them in MS Access 2000, the following message is returned:
"The stored procedure executed successfully but did not return records." Likewise, if I attach an Access form to the SP, it returns the same message.
I've seen ADO code which could return the records (to Access), but I would prefer an alteration to the SP (sp_crosstab) which would return the records automatically.
For example, if I run the SP below (sp_MyTables which executes sp_tables), a dataset is returned automatically instead of the message "The stored procedure executed successfully but did not return records." If I attach sp_MyTables to an Access form, the records are returned in the form as well.
My question is this: How can I get sp_crosstab to act like sp_tables (executed by sp_MyTables) to return a dataset instead of the infernal message?
I've looked all over the Internet and have not seen this issue addressed directly. Your help would be EXTREMELY appreciated (and will probably make Internet history)!
-- Work variables
declare @ReturnSet varchar(255),
    @sql varchar(8000),      -- Holds the dynamically created sql statement
    @colname varchar(255),   -- The current column when building sql statement
    @i smallint,             -- Know when we reached the last column (@i = @cols)
    @cols smallint,          -- Number of columns
    @longest_col smallint,   -- The len() of the widest column
    @CrLf char(2)

-- Constants
declare @max_cols_in_table smallint,
    @max_col_name_len smallint,
    @max_statement_len smallint,
    -- @sql7 bit,            -- 1 when version 7, 0 otherwise.
    @err_severity int

set nocount on

set @max_cols_in_table = 255
set @max_statement_len = 8000
set @max_col_name_len = 128
set @err_severity = 11
set @CrLf = char(13) + char(10)

-- Check inputs
if @tablename is null or @crosscolumn is null or @crossrow is null or @crossvalue is null
begin
    raiserror ('Missing parameter(s)!', @err_severity, 1)
    return @@rowcount
end

-- Check for existence of the table.
if (not exists (select * from sysobjects where name like @tablename))
begin
    raiserror ('Table/View for crosstab not found!', @err_severity, 1)
    return 0
end

-- Don't check for columns because we may actually get an expression as the column name

-- Prepare for future feature of checking database version to validate
-- inputs. Default to version 7.
--set @sql7 = 1
--if (patindex('%SQL Server 7.%', @@version) = 0) begin
--    set @sql7 = 0
--end

-- Extract all values from the rows of the attribute
-- we want to use to create the cross column. This table
-- will contain one row for each column in the crosstab.
create table #crosscol (crosscolumn varchar(255))
set @sql = ' insert #crosscol Select Distinct ' + @crosscolumn +
    ' From ' + @tablename --+
    --' Group By ' + @crosscolumn
--print @sql
exec (@sql)
set @cols = @@rowcount

if @cols > @max_cols_in_table
begin
    raiserror ('Exceeded maximum number of columns in Cross-tab', @err_severity, 1)
    return 0
end
else
begin
if @cols = 0
begin
    raiserror ('Could not find values to use for columns in Cross-tab', @err_severity, 1)
    return 0
end
else
begin
-- Check if any of the data is too long to make it a name of a column
select @longest_col = max(len(convert(varchar(129), crosscolumn)))
from #crosscol

if @longest_col > @max_col_name_len
begin
    raiserror ('Value for column name exceeds legal length of column names', @err_severity, 1)
    return 0
end
else
begin

-- All validations OK, start building the dynamic sql statement
set @sql = ''
-- Use tmp table rows to create the sql statement for the crosstab.
-- Each row in the table will be a column in the cross-tab.
set @sql = 'select isnull(convert(varchar(255), ' + @crossrow + '),''Undefined'') As ' + @crossrow + ', ' + @CrLf + space(4)

declare cross_sql cursor for
    select crosscolumn from #crosscol order by crosscolumn

--print 'Sql cross statement: ' + @sql

open cross_sql
fetch next from cross_sql into @colname
-- Use "@i" to check for the last column. We need to input commas
-- between columns, but not after the last column.
set @i = 0
while @@FETCH_STATUS = 0
begin
    set @i = @i + 1
    set @colname = isnull(@colname, 'Undefined')
    set @crossvalue = isnull(@crossvalue, 0)
I've created a sproc in SQL 2000 that returns a dataset from a temp table and, as an output parameter, the number of records it's returning. In ASP.NET I get the dataset OK, but I can't seem to retrieve the value of the output parameter.
This is my sproc:

create procedure return_data_and_value
    @return int output
as
set nocount on
...
...
select * from #Table
select @return = count(*) from #Table
drop table #Table
go
This is the ASP.NET code:
Dim nRecords As Int32
Dim cmd As SqlCommand = New SqlCommand("return_data_and_value", conn)
cmd.CommandType = CommandType.StoredProcedure

Dim prm As SqlParameter = New SqlParameter("@return", SqlDbType.Int)
prm.Direction = ParameterDirection.Output
cmd.Parameters.Add(prm)

' Output parameter values are only populated after the result set has been
' completely processed (e.g. after SqlDataAdapter.Fill, or after a
' SqlDataReader is closed), so read prm.Value only after that point.
Dim da As New SqlDataAdapter(cmd)
Dim ds As New DataSet()
da.Fill(ds)
nRecords = Convert.ToInt32(prm.Value)
We are facing an issue while executing a stored procedure which INNER JOINs a table in the current database to a table in another database on the same instance.
Per our requirement, we insert the output of a select statement into a table variable, then apply business logic, and finally show the data from the table variable.
This scenario works exactly as expected in the Dev environment, but after we deployed the code to the Quality environment, the stored procedure returns no output (no column names) from the table variable.
During initial investigation we found that the collations of these two databases are different, but we have already added the DATABASE_DEFAULT collation in the JOIN.
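For reference, the join predicate is written along these lines (the table and column names here are placeholders, not our real ones):

-- Hypothetical names; the point is the explicit collation on both sides of the join.
select t1.SomeColumn
from dbo.LocalTable t1
inner join OtherDb.dbo.RemoteTable t2
    on t1.KeyColumn collate database_default = t2.KeyColumn collate database_default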
I have read similar posts to this, but I am still having problems.
I am trying to use connection pooling to connect to a local SQL Server 2005 database. I am running my application using MyEclipse Enterprise Workbench. I have verified that sqljdbc.jar resides in "WebRoot/WEB-INF/lib/"
I've got an import app written in Java. One table I'm importing from contains 22 million records. When I run the app in a 2000 environment, I have my max heap set at 512 and the table gets imported. When I run in a 2005 environment, I have to change the max heap to 1152 or it errors out with an error similar to this:
com.microsoft.sqlserver.jdbc.SQLServerException: The system is out of memory. Use server side cursors for large result sets:Java heap space. Result set size:854,269,999. JVM total memory size:1,065,484,288. (<--this is with max heap at 1024)
What is the difference between the 2000 and 2005 JDBC drivers such that I have to raise the max heap in one environment and not the other?
I have a report with multiple datasets, the first of which pulls in data based on user-entered parameters (sales date range and property use codes). Dataset1 pulls property IDs and other sales data from a table (2014_COST) based on the user's parameters. I have set up another table (AUDITS) that I would like to use in dataset6. This table has 3 columns (Property ID, Sales Price and Sales Date). I would like dataset6 to pull the property IDs that are NOT contained in the results of dataset1. In other words, I'd like the results of dataset6 to show me the property IDs that are in the AUDITS table but are not being pulled into dataset1. Both tables are in the same database.
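A minimal sketch of what dataset6's query could look like, assuming both tables carry a PropertyID column and reusing the report's date-range parameters (the column and parameter names are assumptions):

-- Property IDs present in AUDITS but absent from dataset1's slice of 2014_COST.
-- @StartDate/@EndDate stand in for the report's sales date range parameters.
select a.PropertyID, a.SalesPrice, a.SalesDate
from dbo.AUDITS a
where not exists (
    select 1
    from dbo.[2014_COST] c
    where c.PropertyID = a.PropertyID
      and c.SalesDate between @StartDate and @EndDate
)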
I have a small number of rows in a dataset, Table 1. There is a CLOB on a large dataset, Table 2. They join on a PK. I would like to retrieve this CLOB and add it to the data flow for Table1. In short I want to emulate the following:
Table 1: Small table without CLOB, 10 rows. Table 2: Large table with CLOB, 10,000,000 rows
select CLOB from table2 where pk in (select pk from table1)
I want this to return the CLOBs for the small number of rows in Table 1. The PK is indexed, obviously, so it should be a fast lookup.
Table 1 and Table 2 live on different Oracle databases. How do I perform this operation efficiently in SSIS? It seems the Lookup and Merge Join won't do this.
I found out the data I need for my SQL report is already defined in a dynamic dataset on another web service. Is there a way to use web services to call another web service to get the dataset I need to generate a report? Examples would help if you have any. Thanks for looking.
Anyone here have a ready-to-go SQL script that lists all DBs, files, sizes, owner, etc.? I guess it's a combination of sp_databases, sp_helpdb and sp_helpdb [db].
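Something along these lines might be a starting point (a sketch against the SQL 2000-era system tables, one row per database file):

-- One row per file in every database; size is stored in 8 KB pages,
-- so dividing by 128 gives MB.
select
    d.name             as database_name,
    suser_sname(d.sid) as owner,
    f.name             as logical_file_name,
    f.filename         as physical_file_name,
    f.size / 128       as size_mb
from master.dbo.sysdatabases d
join master.dbo.sysaltfiles f on f.dbid = d.dbid
order by d.name, f.fileid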
Hi, I have a stored procedure, attached below. It returns 2 rows in SQL Management Studio when I execute MyStorProc 0, 28. But in my program, which uses ADOHelper, it returns a dataset with Tables.Count = 0. If I comment out the If @Status = 0 line, it returns the rows. Obviously it never enters the If @Status = 0 block, even though I pass @Status = 0. What am I doing wrong? Any help is appreciated.
ALTER PROCEDURE [dbo].[MyStorProc]
(
@Status smallint,
@RowCount int = NULL,
@FacilityId numeric(10,0) = NULL,
@QueueID numeric (10,0)= NULL,
@VendorId numeric(10, 0) = NULL
)
AS
SET NOCOUNT ON
SET CONCAT_NULL_YIELDS_NULL OFF
If @Status = 0
BEGIN
SELECT ......
END
If @Status = 1
BEGIN
SELECT......
END
Does anyone know of a quick way to find out what the largest indexes on a database are? I have a number of tables and was wondering if there's a stored proc or query that I can execute that will list the indexes and their size in order by size? Thanks
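A quick sketch against sysindexes (the pre-2005 system table, still available in 2005 as a compatibility view). Note that used is counted in 8 KB pages, and for a clustered index (indid = 1) it includes the table's data pages:

-- Indexes ordered by space used; indid 1 = clustered, 2-254 = nonclustered.
select
    object_name(i.id) as table_name,
    i.name            as index_name,
    i.used * 8        as used_kb
from sysindexes i
where i.indid between 1 and 254
  and objectproperty(i.id, 'IsUserTable') = 1
  and indexproperty(i.id, i.name, 'IsStatistics') = 0  -- skip auto-stats rows
order by i.used desc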
When you have autogrowth turned on for log files, what happens when you put a max file size on it? Will it just overwrite the old log records to keep the file at the max size, or will it just create a new file every time it hits the max size?
I'm putting together a manual system that tracks data growth in a certain database. I was going to use sp_spaceused as part of it, but then realized the datatypes for size are CHAR, not INT or BIGINT. I was going to do counts, averages, etc. on those columns, but that obviously wouldn't work against a CHAR field. I could easily write a little something to strip out the " KB", but was hoping there was another way to get those figures.
Secondly, has anyone seen a stored procedure/code/etc. that just calculates the largest/smallest/average row size for a table? I haven't been able to find anything anywhere.
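Two hedged possibilities, assuming SQL 2000-era system tables: sysindexes stores the same page counts sp_spaceused formats, already as integers, and DBCC SHOWCONTIG can report record sizes.

-- Space per table in integer KB (pages are 8 KB); the raw numbers sp_spaceused uses.
select
    object_name(id)    as table_name,
    max(rowcnt)        as row_count,
    sum(reserved) * 8  as reserved_kb,
    sum(used) * 8      as used_kb
from sysindexes
where indid in (0, 1, 255)
  and objectproperty(id, 'IsUserTable') = 1
group by id

-- Smallest/largest/average row size for one table ('MyTable' is a placeholder);
-- the output includes MinimumRecordSize, MaximumRecordSize and AverageRecordSize.
dbcc showcontig ('MyTable') with tableresults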
I am opening a simple command against a view which joins 2 tables, so that I can return a column which is defined as a tinyint in one of the tables. The SELECT looks like this:

SELECT TreatmentStatus
FROM vwReferralWithAdmissionDischarge
WHERE ClientNumber = 138238 AND CaseNumber = 1 AND ProviderNumber = 89

The TreatmentStatus column is a tinyint. When I execute the above SQL SELECT statement in SQL Server Management Studio (I am using SQL Server 2005) I get a value of 2. But when I execute the same SQL SELECT statement as part of a SqlDataReader and SqlCommand, I get a return data type of integer and a value of 1. Why?
I am currently cleaning up my database to get its total size down, and am not sure how nvarchar and varchar work exactly.
When defining the length of a varchar or nvarchar in Enterprise Manager, will that affect the size of the entry (as far as data size), no matter what the length of the entry? In other words, will there be a difference in data size between an entry of 4 characters with a definition of varchar(4) versus an entry of 4 characters with a definition of varchar(50)?
If there is no difference, is there any reason to try to best-guess the size to give nvarchar or varchar columns? It would seem easier to just define the lengths of variable-length columns as 200 or 400, just to save the time of not trying to best-guess what the size might be.
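For what it's worth, here's a quick way to check this yourself: DATALENGTH returns the bytes actually stored, so the same 4-character value can be compared under both declarations.

-- Both columns store 4 bytes for 'abcd'; the declared maximum doesn't change
-- the stored size of a varchar value, it only caps it.
declare @t table (a varchar(4), b varchar(50))
insert @t values ('abcd', 'abcd')
select datalength(a) as a_bytes, datalength(b) as b_bytes from @t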
Hi, I am looking to run a query to get the sizes of the tables in my SQL 7 DB. I know I can access the info in Enterprise Manager, under "Tables & Indexes", but I need to get this info via a query. I need rows and size. I figured out how to get rows through the sys tables:

select sysobjects.name, sysindexes.rows
from sysobjects, sysindexes
where sysobjects.name = sysindexes.name
  and xtype = 'U'
Is the size of each table stored in a sys table as well? I can't find it.
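Yes, sysindexes carries page counts too. A sketch (note it joins on id rather than name, and pages are 8 KB; reserved covers the table plus its indexes):

select
    o.name              as table_name,
    max(i.rows)         as row_count,
    sum(i.reserved) * 8 as reserved_kb
from sysobjects o
join sysindexes i on i.id = o.id
where o.xtype = 'U'
  and i.indid in (0, 1, 255)
group by o.name
order by reserved_kb desc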
Hey all, got a little problem. I have 2 matching tables on different servers with the EXACT same column layout and data (the tables are being replicated with MSSQL7), and one table is 200 MB while the other is 2000 MB. I'm running MSSQL7 SP2. Any ideas???
Hi, my log files are growing like anything. One of my log files is 20 GB. How do I reduce the log file size? If I run a DBCC command, will the file just grow back? Please tell me how to find the free space and reduce the log sizes. Even after taking backups, my log file sizes are not reducing.
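The usual sequence, sketched with placeholder names (the log can only be shrunk past its active portion after the log has been backed up):

-- Back up the transaction log, then shrink the log file.
-- 'MyDatabase' and 'MyDatabase_log' are placeholders for the real database
-- name and the log's logical file name (see sp_helpfile for the latter).
backup log MyDatabase to disk = 'D:\Backups\MyDatabase_log.trn'
dbcc shrinkfile (MyDatabase_log, 500)  -- target size in MB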
I have inherited a number of databases which were substantially oversized when they were set up. I'd like to reduce both the log and database files to be smaller than their original sizes. What's the easiest way to do this? If anyone has any experience of doing this, please reply.
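A sketch with placeholder names; note that shrinking a file below the size it was originally created with needs DBCC SHRINKFILE rather than DBCC SHRINKDATABASE:

-- Shrink each file individually, below its created size if need be.
-- 'MyDatabase' and the logical file names are placeholders (see sp_helpfile).
use MyDatabase
dbcc shrinkfile (MyDatabase_data, 1000)  -- target size in MB
dbcc shrinkfile (MyDatabase_log, 200)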
We are looking at installing a new Oracle server for a client, but have been told that they used Oracle in the past and had a lot of problems with slow response, even though the bandwidth on the WAN was barely being used. He says this was because Oracle sends out very small packets across the network, meaning that hundreds of packets are being sent out. This caused a problem on the routers being used, as it was killing their processors. Is this still the case, and have you had other reports of slow response of this nature?
Ok, I have a new one. Several of my devices are showing negative sizes when viewed in Edit in Enterprise Manager. I cannot edit them, as the Change Now button is grayed out. Oddly enough, they are all located on the same drive. The master (on the C drive) and the tempdb (on the D drive) both show as the default device. I am very confused. User access to the information is fine. What gives?