I have a peculiar problem concerning MS SQL Server.
My company works with a mailing application (ASP) which uses SQL
Server as its repository. What I want to do is send data directly
from my own application to this SQL Server in order to feed the
mailing application.
To test if this was possible I linked the tables from SQL Server in MS
Access and entered the data. This worked fine and the data was picked
up correctly by the mailing application.
The problem occurs when I send the data from my own application (a Java
application with a JDBC connection). In this case the data is no longer
picked up by the mailing application. The strange thing is that the data
entered through Access and the data from my application look
identical in the database view. The problem also occurs when the data
is sent with the tool WinSQL, and when I view the data there it still
looks identical.
Even stranger: when I select the non-working record in
Access and copy it into a new record (changing only the key), it
suddenly works!
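A diagnostic sketch that may help narrow this down (table and column names here are hypothetical): compare the row entered through Access with the row entered through JDBC at the byte level, since trailing spaces, Unicode-vs-ANSI differences, or control characters can make two rows look identical in a grid while the mailing application treats them differently.

SELECT KeyCol,
       DATALENGTH(MailAddress) AS byte_len,    -- raw byte count; counts trailing spaces
       CAST(MailAddress AS VARBINARY(256)) AS raw_bytes
FROM   dbo.MailQueue                           -- hypothetical table name
WHERE  KeyCol IN ('access-row-key', 'jdbc-row-key')

If byte_len or raw_bytes differ between the two rows, the JDBC insert is writing something the grid view does not show.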
I've seen one other post on this topic, from October 2005, and I thought I'd bring it up again. I have a Fuzzy Grouping component in my data flow. The output data from it appears to be the result of records spliced into other records. This includes pass-through columns, not merely "clean" or similarity columns. For example (I've added the suffixes for illustrative purposes):
We have written an application which splits up our customers' data into their individual databases. The structure of the databases is the same. Is it better to create the same stored procedures in each database, or to have them in one central location and use sp_executesql to execute the generated SQL statement? Thank you. Mayur Patel
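A minimal sketch of the "one central copy" approach the question describes, using sp_executesql with the target database name injected (all object names are hypothetical):

CREATE PROCEDURE dbo.usp_GetCustomerOrders
    @DatabaseName sysname,
    @CustomerId   int
AS
BEGIN
    DECLARE @sql nvarchar(4000)
    -- QUOTENAME guards the injected database name
    SET @sql = N'SELECT * FROM ' + QUOTENAME(@DatabaseName)
             + N'.dbo.Orders WHERE CustomerId = @CustomerId'
    EXEC sp_executesql @sql, N'@CustomerId int', @CustomerId = @CustomerId
END

The trade-off is one copy to maintain versus the usual costs of dynamic SQL (no compile-time name checking against each customer database).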
Hi, I have two tables named Tab1 and Tab2. Both are identical in structure. The only difference is that Tab2 has two additional fields (FromDate and ToDate). The structure is like below:
Col1
Col2 (date field)
Col3
Col4
Tab2 also has Col5 (FromDate) and Col6 (ToDate).
Now I want to transfer a set of records from Tab1 to Tab2. The additional Tab2 fields (Col5 and Col6) should hold the minimum and maximum values of the Tab1 date field for the current set.
How can I accomplish this? Kindly help me in this regard.
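A minimal sketch of one way to do this, using the column names from the post and a placeholder filter for "the current set"; the window aggregates need SQL Server 2005 or later:

INSERT INTO Tab2 (Col1, Col2, Col3, Col4, Col5, Col6)
SELECT Col1,
       Col2,
       Col3,
       Col4,
       MIN(Col2) OVER () AS FromDate,   -- earliest date in the selected set
       MAX(Col2) OVER () AS ToDate      -- latest date in the selected set
FROM   Tab1
WHERE  Col2 >= '20080101'               -- placeholder for the set definition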
Please guide me urgently: how do I extract data in SSIS from 10 identical Oracle databases into 1 SQL Server database? There is a table which lists all 10 databases.
Using a SqlDataSource control to populate a two-column table... I have not found a simple, solid means of looking at the table to determine whether a record exists and, if it isn't there, inserting it. I've tried talking directly to the DataSource control and using dependent controls such as FormViews or GridViews. Does anyone have a simple, solid method to do this? I have a hunch I'm missing the obvious here. Next, we (our dev team) are seeing an issue when switching the FormView among Item, Edit, Insert, etc. modes: the values of the TextBoxes and Labels become null. I'm guessing it is a cache issue, but we need to get around it. Has anyone seen this sort of behavior, and what did you do to correct or work around it? Thanks,
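For the first question, a minimal T-SQL sketch of an insert-if-missing pattern that could back the SqlDataSource's InsertCommand (table and column names are hypothetical):

IF NOT EXISTS (SELECT 1 FROM dbo.Lookup WHERE KeyCol = @KeyCol)
    INSERT INTO dbo.Lookup (KeyCol, ValueCol)
    VALUES (@KeyCol, @ValueCol)

Doing the existence check on the server keeps the check and the insert in one batch instead of a separate SELECT from the page.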
I've been trying to find out why a simple query containing a couple of top-level ORDER BYs produces different results when I introduce TOP into the query. I've found nothing so far, other than that the results of both queries (TOP and non-TOP) are different again if I add an index.
All the DDL and DML is below, along with the results. The database uses a BIN2 collation.
I guess there's a simple explanation for this...
DROP TABLE employee
CREATE TABLE employee (
    id INTEGER IDENTITY (1, 1) NOT NULL,
    givenname NVARCHAR (20),
    familyname NVARCHAR (20),
    CONSTRAINT pk_employee_id PRIMARY KEY (id)
);
insert into employee values ('John', 'Smith');
insert into employee values ('John', 'SMITH');
insert into employee values ('John', 'Smyth');
insert into employee values ('John', 'SMYTH');
go
select top 10 givenname, familyname
from employee
order by lower(familyname) asc, lower(givenname) asc
go
-- Dropping top produces results in different order!
select givenname, familyname
from employee
order by lower(familyname) asc, lower(givenname) asc
go
-- Creating an index results in different order on both queries!
CREATE INDEX idx_employee_names ON employee (familyname, givenname);
go
select top 10 givenname, familyname
from employee
order by lower(familyname) asc, lower(givenname) asc
go
select givenname, familyname
from employee
order by lower(familyname) asc, lower(givenname) asc
go
This produces the following results:
givenname            familyname
-------------------- --------------------
John                 SMITH
John                 Smith
John                 SMYTH
John                 Smyth

(4 rows affected)

givenname            familyname
-------------------- --------------------
John                 Smith
John                 SMITH
John                 Smyth
John                 SMYTH

(4 rows affected)

givenname            familyname
-------------------- --------------------
John                 Smith
John                 SMITH
John                 Smyth
John                 SMYTH

(4 rows affected)

givenname            familyname
-------------------- --------------------
John                 SMITH
John                 Smith
John                 Smyth
John                 Smyth
We have a performance issue with a SQL Server database.
Scenario & Issue: We have delivered a .NET application to our client. This application is installed on a newly built Windows 2003 server.
The client is facing performance issues with the application. Compared with the development server, the performance of the production server is very poor.
Even when we execute the stored procedures directly on the backend, performance is poor on the production server.
Example: A stored procedure that takes 16 seconds on the development server takes 17 minutes on the production server for the same parameters. The time remains the same even for a hot (cached) execution.
System Info:
Database Version - SQL Server 2005
Database Size - 120+ GB
OS Platform - Windows 2003
Database Load - 50 users
CPUs - 4
RAM - 8 GB
The OS is clustered (failover clustering).
Points to Note:
1. There is a huge table with 250 million rows (this table alone takes up to 60 GB).
2. The huge table is partitioned (SQL Server 2005 table partitioning) and placed in 20 different filegroups (.mdf files).
3. The .mdf files are placed on a SAN and the .ldf file on a local hard disk.
4. Dynamic queries are used in a few places for performance benefits.
Questions:
1. Any thoughts on why this kind of performance issue arises?
2. The client DBA wants us to clear the data and stored procedure caches (as sketched below) before executing the stored procedure and then test the performance.
Would that reflect the production scenario?
3. Will the performance change based on the input parameters?
4. The client DBA has also stated that a report server pinging the production database server is the cause of frequent clearing of the SQL Server cache.
When does SQL Server actually clear its cache memory? Is there any way to control it?
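For reference, this is what "clearing the data and stored procedure cache" usually means on SQL Server 2005; these commands affect the whole instance, so they belong on a test box, not in production:

CHECKPOINT                  -- flush dirty pages so clean buffers can be dropped
DBCC DROPCLEANBUFFERS       -- empty the buffer pool (data cache)
DBCC FREEPROCCACHE          -- empty the plan (procedure) cache

Running these before a test measures cold-cache performance, which is not what steady-state production users normally see.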
I am tackling a unique request. I have to download ".zip" files over HTTPS and uncompress them. The fetch process works fine when I am downloading the files, using the System and System.IO namespaces from the .NET class library.
After downloading, I have to uncompress the files, as they are treated as compressed (zipped) folders by Windows XP. I know decompression can be achieved using the System.IO.Compression classes, but the trouble is that System.IO.Compression only supports ".gz" via GZipStream decompression, and I wonder how I can handle ".zip" files, since there is no equivalent "ZipStream" class.
Thanks a million for all your help and advice. I also appreciate your time.
Upon restarting, a user-defined function took seemingly forever to run.
I am learning about nested WHILE loops used in some interdependent user-defined functions. They seemed to work OK for a while.
Later, remembering how I once lost a database to a hard-disk reformat, I backed up the database and copied it to a rewritable CD.
As the data is not really significant, I deleted the database and practiced restoring it from the CD.
This morning I restarted the user-defined function and ran it. After more than half an hour with no result I gave up. Normally such a user-defined function took much less time to run.
I re-ran some other UDFs and they worked. However, after I made some minor amendments to the T-SQL scripts, saved the UDF, and re-ran it, it again seemed to take forever, even when I had set the counter in the WHILE loop to 2.
I don't know what has gone wrong.
I went to register my copy of SQL 2005 Express. It didn't seem to help.
When I execute a SELECT query on a table with around 25 million records, I see a performance difference based on the parameter value passed.
The queries below return their output in 1 second.
SELECT TOP 10000 * FROM TestTable WHERE Column = 1
SELECT TOP 10000 * FROM TestTable WHERE Column = 2
SELECT TOP 10000 * FROM TestTable WHERE Column = 3
The query below alone takes 18 seconds to return its output.
SELECT TOP 10000 * FROM TestTable WHERE Column = 4
(FYI: the record count for column value 4 is lower than for the other column values.)
Could anyone please let me know why this happens and how to resolve it?
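A diagnostic sketch: compare the distribution of the column values and the I/O each query does. With TOP, if rows matching Column = 4 are rare or scattered, the scan has to read much further before it collects 10000 matches.

SELECT [Column], COUNT(*) AS row_count   -- how skewed is the data?
FROM   TestTable
GROUP BY [Column]

SET STATISTICS IO ON    -- then compare logical reads for a fast and a slow value
SELECT TOP 10000 * FROM TestTable WHERE [Column] = 1
SELECT TOP 10000 * FROM TestTable WHERE [Column] = 4
SET STATISTICS IO OFF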
SQL Express installation is bombing out with the error "A beta version of the .NET Framework 2.0 or SQL Server was detected on the computer...".
From the log file:
Running: PerformSCCAction2 at: 2007/6/26 8:17:27
Loaded DLL: C:\WIN\system32\msi.dll Version: 3.1.4000.2435
Product "{7131646D-CD3C-40F4-97B9-CD9E4E6262EF}" versioned 2.0 is not compatible with current builds of SQL Server. Expected at least version: 2.0.50727.42
The Product Name is ".NET Framework"
Now what is peculiar to me is that 1) this machine should never have had the beta software on it; 2) I have seen people with similar issues, except their version info always said "versioned 2.0.50727" and the product GUID was different, while this one only says "2.0"; and 3) I thought that product GUID was for the RTM 2.0.50727.42 release!
What is going on here and where is it getting this version information?
We're trying to get a better understanding of how RS behaves when parameters are being set. We see quirky behavior that is a little difficult to describe. Right now we assume that if the revolving green circle (with the phrase "Report is being generated" beneath it) doesn't appear, the report wasn't really rendered properly, even if the report region changes.
One peculiarity that seems pretty consistent is on reports we've prototyped with "from" and "to" date parameters. When we set one date (it doesn't matter which is first), things progress normally, i.e. no "report clearing event" occurs as a result of setting cursor focus in the calendar control and changing its value; the report region doesn't change from what was shown previously. But trying to set focus on the second date (it doesn't matter whether it's the "from" date or the "to" date, just that it's the second date being set) always seems to trigger some kind of event that 1) doesn't allow focus to be on that text box, and 2) blanks out the report region, including headings. Only after this "event" occurs can we set focus on the second date, change the value, and click the "View Report" button for re-rendering.
We see similar behavior with other types of parameters, including multi-value dropdowns and booleans. The toughest part is trying to explain this to our users. For some parameters, the event occurs every time they are changed. For other parameters, it appears the event occurs only if another parameter was changed beforehand.
I believe we've even seen headings rendered with no data, making us think temporarily that no rows were returned, only to find out by clicking the "View Report" button that there really was data to report under the current filters. Unfortunately I can't reproduce this scenario when I want to.
I have two SQL Server tables on the same server and in the same database. I'll call them table A and table B. They have identical schemas. I need to insert all rows from table A into table B. (Don't laugh - this is just for testing; in the long run the tables will reside on different servers.)
Can someone please tell me the correct task to use for this and the connection type I need for both the source and destination?
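While evaluating the SSIS options, note that for the same-database case a single Execute SQL Task (or a plain query) is enough; a sketch assuming identical columns and no identity column in table B:

INSERT INTO dbo.TableB
SELECT * FROM dbo.TableA

For the eventual cross-server version, a Data Flow Task with an OLE DB source and an OLE DB destination (one connection manager per server) is the usual fit.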
Rather than the real code, here's a sample we came up with.
Here's the C# code:

public class sptest : System.Web.UI.Page
{
    protected System.Web.UI.WebControls.Label Label1;
    private DataSet dtsData;

    private void Page_Load(object sender, System.EventArgs e)
    {
        // Put user code to initialize the page here
        string strSP = "sp_testOutput";
        SqlParameter[] Params = new SqlParameter[2];
        Params[0] = new SqlParameter("@Input", "Pudding");
        Params[1] = new SqlParameter("@Error_Text", "");
        Params[1].Direction = ParameterDirection.Output;
        try
        {
            this.dtsData = SqlHelper.ExecuteDataset(
                ConfigurationSettings.AppSettings["SIM_DSN"],
                CommandType.StoredProcedure, strSP, Params);
            Label1.Text = Params[0].Value.ToString()
                + "--Returned Val is" + Params[1].Value.ToString();
        }
        //catch (System.Data.SqlClient.SqlException ex)
        catch (Exception ex)
        {
            Label1.Text = ex.ToString();
        }
    }
}
Here is the stored procedure:
CREATE PROCEDURE [user1122500].[sp_testOutput]
    (@Input nvarchar(76), @Error_Text nvarchar(10) OUTPUT)
AS
    SET @Error_Text = 'Test'
GO

When I run this, it prints the input variable, but not the output variable.
I have a client-server app where every client user has his own DB. The server monitors changes to all client DBs via SqlDependency. My problem can be reproduced with a small application; it might even be a "feature" and not a "bug":
- Consider two databases, TestDb1 and TestDb2, running on one SQL Server 2005 instance.
- Both DBs have identical schemas.
- Consider that the two DBs each have one table named "Table1".
- Both tables have the same schema, as already mentioned (the fields Id and Text).
- Now I set up a SqlDependency object on each database:
dependency.OnChange += new OnChangeEventHandler(dependency_OnChange);
If I make any changes to the table in TestDb1, I get two notifications with different IDs but the same Info, Source, and Type (saying, e.g., Data, Change, Update). If I make changes to the table in TestDb2, I again get two notifications with the same result. As soon as I rename the table in one of the DBs (e.g. to Table2) and also change the SQL query in my code, I get just one notification, as expected. This behaviour is the same even if I change the connection string so that it points to another machine. So it somehow seems to fire a notification for every change to a table with the same name, regardless of the connection string where the physical change was done.
Does anybody know if this is intended behaviour of SqlDependency? Does anybody know how I can set this up so that I have two DBs with identical schemas and only get a notification from the DB I actually changed?
Hello, I ended up with two identical rows in one table. They should have differences, but I cannot update just one: the UPDATE either changes both of them or throws an error. How can I update only one row and leave the other as is?
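A minimal sketch of one way out, assuming SQL Server 2005 or later (table and column names are hypothetical): TOP in an UPDATE limits how many of the matching rows are changed.

UPDATE TOP (1) dbo.MyTable
SET    SomeCol = 'new value'
WHERE  KeyCol = 42    -- matches both duplicate rows; only one gets updated

Which of the two identical rows is updated is arbitrary, which is fine here precisely because they are identical.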
Here's some code that is supposed to identify whether a user already exists in my database. I have changed the code to match my database, but it seems to have somewhat the opposite effect, rejecting all names (even new ones) or accepting all names (including existing ones). The switch between the two situations occurs in the "if" statement towards the end, when I change the sign of objDR.RecordsAffected. Do you have any idea what could be wrong? Thanks.

Function DoesUserExist(ByVal userName As String) As Boolean
    Dim connectionString As String = "server='(local)\NetSDK'; trusted_connection=true; Database='AuthorizedUsers'"
    Dim sqlConnection As System.Data.SqlClient.SqlConnection = New System.Data.SqlClient.SqlConnection(connectionString)

    Dim queryString As String = "SELECT [Users].[UserName] FROM [Users] WHERE ([Users].[UserName] = @UserName)"
    Dim sqlCommand As System.Data.SqlClient.SqlCommand = New System.Data.SqlClient.SqlCommand(queryString, sqlConnection)
    Dim Cmd As New SqlCommand(queryString, sqlConnection)
    With Cmd.Parameters
        .Add(New SqlParameter("@username", userName))
    End With

    sqlConnection.Open()
    Dim blHasRows As Boolean
    Dim objDR As System.Data.SqlClient.SqlDataReader = Cmd.ExecuteReader(System.Data.CommandBehavior.CloseConnection)

    If objDR.RecordsAffected > 0 Then
        blHasRows = True
    Else
        blHasRows = False
    End If

    Return blHasRows
End Function
How can I create a table identical to another one? I need to copy the indexes and constraints too. Example: I have a table "employee" and I want another table "employee2" with the same indexes, primary key, and references.
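A sketch of the usual two-step approach: SELECT ... INTO copies the structure and data but not the indexes, keys, or references, so those have to be re-created by hand (constraint and column names below are hypothetical):

SELECT * INTO employee2 FROM employee

ALTER TABLE employee2 ADD CONSTRAINT pk_employee2_id PRIMARY KEY (id)
CREATE INDEX idx_employee2_names ON employee2 (familyname, givenname)

Scripting the original table from Management Studio (right-click, Script Table as CREATE) and editing the table name is another way to capture every constraint.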
I have a database with three different tables having exactly the same fields. New records are written to table1 before moving to table2 and ultimately table3. I was wondering whether it's possible to run the same query on all three tables at the same time. I need to get all unique instances of the JC field from each table after a specified date. I get an "Ambiguous column name" error on the JC and TimeID fields.
SELECT distinct [JC] FROM [table1], [table2], [table3] where timeid > '20090900';
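The ambiguity error comes from listing all three tables in one FROM clause: each table has a JC and a TimeID column, so the names must be qualified or the tables queried one at a time. A sketch of the UNION approach, which queries each table separately and de-duplicates across all three:

SELECT [JC] FROM [table1] WHERE timeid > '20090900'
UNION
SELECT [JC] FROM [table2] WHERE timeid > '20090900'
UNION
SELECT [JC] FROM [table3] WHERE timeid > '20090900'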
I'm using a query to see how many times an action was recorded for a person. The query works; it returns this:
John Smith 1
John Smith 1
John Smith 1
Jane Doh 1
Jane Doh 1
Al Johnson 1
but I need it to return totals like this:
John Smith 3
Jane Doh 2
Al Johnson 1
This is the query I am using:
Select Player.First_Name, Player.Last_Name, COUNT(Action.Employee_ID)
from Player
INNER JOIN PlayerVisit on PlayerVisit.Player_ID = Player.Player_ID
join Treatment on Treatment.Visit_ID = PlayerVisit.Visit_ID
join Action on Treatment.Action_ID = Action.Action_ID
group by Player.First_Name, Player.Last_Name, Action.Employee_Id;
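A sketch of the likely fix (hedged, since the schema is only partly shown): grouping by Action.Employee_Id splits each person's actions into one row per employee, so each count comes back as 1. Dropping it from the GROUP BY yields one total per person:

SELECT Player.First_Name, Player.Last_Name, COUNT(Action.Employee_ID) AS action_count
FROM Player
INNER JOIN PlayerVisit ON PlayerVisit.Player_ID = Player.Player_ID
INNER JOIN Treatment ON Treatment.Visit_ID = PlayerVisit.Visit_ID
INNER JOIN Action ON Action.Action_ID = Treatment.Action_ID
GROUP BY Player.First_Name, Player.Last_Name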
I have implemented a script to perform an MD5 hash on each row processed by the SSIS package, so that it can be compared with a stored value to see whether the record has changed. This package processes over 1 million rows. For 12 of these rows I get a hash value that is different from the stored value, despite the fact that the rows "look" identical. Curious about this, I used both the CHECKSUM and BINARY_CHECKSUM functions from T-SQL to check the rows, and they both show identical checksum values. I have exported the rows to text and done a compare, and the records are identical. I assume there must be some hidden characters causing the hash to differ. Has anyone else run into this issue? Any help is much appreciated.
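A diagnostic sketch for those 12 rows (object names are hypothetical): LEN ignores trailing spaces while DATALENGTH does not, and a varbinary cast exposes non-printing characters that the result grid and a text export can hide.

SELECT KeyCol,
       LEN(SuspectCol)        AS char_len,   -- ignores trailing spaces
       DATALENGTH(SuspectCol) AS byte_len,   -- counts every byte
       CAST(SuspectCol AS VARBINARY(400)) AS raw_bytes
FROM   dbo.SourceTable
WHERE  KeyCol IN ('key1', 'key2')            -- the mismatching keys

Note that checksums can agree on strings that still differ at the byte level, so matching CHECKSUM/BINARY_CHECKSUM values do not contradict a differing MD5.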
I created a simple table to record all files uploaded to my website. It works, but the problem is that it posts to the table twice: it executes the "Button1_Click" event twice. The result is that I get two records which are the same and differ only in the primary key (because I set it as an autonumber). How do I fix this? Thanks in advance. Here's the code:

HTML:
<asp:SqlDataSource ID="SqlDataSource1" runat="server"
    ConflictDetection="CompareAllValues"
    ConnectionString="<%$ ConnectionStrings:ConnectionString %>"
    InsertCommand="INSERT INTO [Base_Files] ([User_ID], [Date_Posted], [File_Type], [File_Size], [File_Name], [File_Description]) VALUES (@User_ID, @Date_Posted, @File_Type, @File_Size, @File_Name, @File_Description)">
I have an "insert into" statement that creates two identical rows in a table, with this statement: delete from [table] where [column] = @parameterINSERT INTO [table]([fields]) VALUES ([parameter values]) This is the code-behind that performs the insert: Dim dbConn As New SqlConnection(strConn)Dim cmd As New SqlCommand("sp_CreateUser", dbConn)cmd.CommandType = Data.CommandType.StoredProcedurecmd.Parameters.AddWithValue("@UserID", strUserID)cmd.Parameters.AddWithValue("@UserName", strUserName)cmd.Parameters.AddWithValue("@Email", strEmail)cmd.Parameters.AddWithValue("@FirstName", strFirstName)cmd.Parameters.AddWithValue("@LastName", strLastName)cmd.Parameters.AddWithValue("@Teacher", strTeacher)cmd.Parameters.AddWithValue("@GradYr", lngGradYr)Using dbConndbConn.Open()cmd.ExecuteNonQuery()dbConn.Close()cmd.Dispose()dbConn.Dispose()End Using I wonder if it inserts twice due to a postback issue. Is there a way to stop two rows from being created in the first place with the same "insert into" statement? I'd appreciate any advice.
We have an MSSQL 2000 Server instance installed and working well on a Windows 2003 Server machine [IBM xSeries 366] with 16 GB RAM, 3.67 GHz CPU, and 400 GB hard disk space.
We then created an identical server instance on a new machine. More specifically, on a Windows 2003 Server machine [Intel(R) Xeon(TM)] with 16 GB RAM, 3.67 GHz CPU, and 400 GB hard disk space, we installed MSSQL 2000 Server and copied over all the DBs, applications, ...
We were expecting the same or similar performance (since processor speed, RAM, hard disk, and server and database configurations are all the same, with the same indexes on the same tables). However, for some reason, there is a noticeable difference in performance.
More specifically, I ran Profiler for 30 minutes on both servers simultaneously [same trace parameters]. The trace file of the new server is 3 times as large as that of the old one (i.e. it looks like more items are being processed). However, the average duration of the executed stored procedures is much longer on the new server than on the old server.
Moreover, when I run the same queries on the 2 servers, the query on the new server always takes longer than on the old server. And for tables where we don't have indexes, it takes much longer.
Following the advice here (http://support.microsoft.com/kb/274750/), we configured our new server (just as our old one is configured) to use 15 GB of RAM. I further compared the configurations of the 2 servers by executing sp_configure (with advanced options). The only difference I saw was that "remote proc trans" is set to off on the new server and on on the old server. I don't think it could affect this issue, though.
Furthermore, the new server appears to take many more locks than the old server. Could it be because it is processing more items?
I cannot figure out what is causing the queries to be slower on the new server.
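A sketch of the configuration diff described above, run on both servers so the full option lists can be compared side by side:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE
EXEC sp_configure   -- capture and diff this output from both servers

Beyond sp_configure, it may also be worth comparing index fragmentation and statistics freshness (DBCC SHOWCONTIG and DBCC SHOW_STATISTICS on SQL 2000), since a copied database can carry stale statistics onto the new hardware.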