Is It Possible To Use A Varchar Variable As A Table Name?
Mar 31, 2008
Hi,
What I am trying to achieve is renaming a table by appending a timestamp value if the table already exists.
I am using a varchar variable to build the new name.
I am stuck because I need to rename the table and then use the new name to query it.
This is what I am doing:
DECLARE @NewTableName varchar(100)

IF OBJECT_ID('TableName') IS NOT NULL
BEGIN
    SET @NewTableName = 'ExistingTableName' + CONVERT(VARCHAR(25), GETDATE(), 126)
END
Please let me know if there is a way to achieve the table renaming and querying the renamed table!
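A sketch of one way to do this, assuming the table is dbo.TableName: object names cannot come from a variable in static SQL, so the rename goes through sp_rename (which takes the names as strings) and the follow-up query through dynamic SQL. Note that style 126 puts ':' and '-' into the string, which make awkward object names, so this uses styles 112/108 and strips the colons instead:

DECLARE @NewTableName sysname, @Sql nvarchar(max)

IF OBJECT_ID('dbo.TableName') IS NOT NULL
BEGIN
    -- e.g. TableName_20080331_174502
    SET @NewTableName = 'TableName_' + CONVERT(varchar(8), GETDATE(), 112)
                      + '_' + REPLACE(CONVERT(varchar(8), GETDATE(), 108), ':', '')

    EXEC sp_rename 'dbo.TableName', @NewTableName

    -- querying the renamed table requires dynamic SQL
    SET @Sql = N'SELECT * FROM dbo.' + QUOTENAME(@NewTableName)
    EXEC (@Sql)
END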
Hi, is there a data type for which we don't need to specify the length, one that can grow dynamically? Instead of using varchar(2000), a type that acts like a varchar but with unlimited length... Thanks.
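Assuming SQL Server 2005 or later, varchar(max) behaves this way: no fixed length to pick, and up to 2 GB of storage. A minimal sketch:

DECLARE @big varchar(max)
-- the CAST keeps REPLICATE from capping the result at 8000 characters
SET @big = REPLICATE(CAST('x' AS varchar(max)), 100000)
SELECT LEN(@big)   -- 100000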
Why can a varchar variable hold only 4000 bytes? For example, in a stored procedure I declare @aa varchar(8000) and build it up in a loop with SELECT @aa = @aa + @otherinfo, but once the length passes 4000 the data at the end is lost.
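A guess at the cause, since the declaration of @otherinfo isn't shown: if @otherinfo is nvarchar, the concatenation is promoted to nvarchar, which tops out at 4,000 characters, so everything past 4,000 silently disappears. Keeping both sides varchar, or moving to varchar(max) on SQL Server 2005+, avoids the cut-off:

DECLARE @aa varchar(max), @otherinfo nvarchar(100), @i int
SET @aa = ''
SET @otherinfo = N'0123456789'
SET @i = 0

WHILE @i < 1000
BEGIN
    -- cast so the expression stays varchar(max) instead of being promoted to nvarchar
    SELECT @aa = @aa + CAST(@otherinfo AS varchar(max))
    SET @i = @i + 1
END

SELECT LEN(@aa)   -- 10000, nothing lost past 4000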
Hi there, I have a question regarding data type conversions... I can't convert the above-mentioned data type; I always get an error message that the conversion fails, no matter whether CONVERT or CAST is used.
How would you convert the above-mentioned variable into float?
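Assuming the source is a varchar that sometimes carries non-numeric characters (spaces, thousands separators, currency signs), a plain CAST fails on the first bad value. A defensive sketch; TRY_CONVERT needs SQL Server 2012+, so an ISNUMERIC guard is shown for older versions (ISNUMERIC has known edge cases such as '.', '$' and 'e', so it is only a rough filter):

DECLARE @v varchar(50)
SET @v = ' 1,234.56 '

-- SQL Server 2012+: returns NULL instead of raising an error
SELECT TRY_CONVERT(float, REPLACE(LTRIM(RTRIM(@v)), ',', ''))

-- Older versions: guard the conversion
SELECT CASE WHEN ISNUMERIC(REPLACE(@v, ',', '')) = 1
            THEN CONVERT(float, REPLACE(LTRIM(RTRIM(@v)), ',', ''))
       END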
I cannot use dynamic SQL in the current context, so I need some help with this. I am developing a stored procedure to update a table, sending column names as parameters, but I am not able to use them like this: INSERT INTO Books (@Column1, @Column2) VALUES ... Is there any way to execute this without using dynamic SQL? Thanks.
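As far as I know there is no way around it: parameters can only stand in for values, never for identifiers like column or table names, so static SQL cannot do this. If dynamic SQL becomes acceptable after all, the usual pattern is to QUOTENAME (or whitelist) the names and still parameterize the values; a sketch with made-up column names:

DECLARE @Column1 sysname, @Column2 sysname, @Sql nvarchar(max)
SET @Column1 = N'Title'    -- hypothetical
SET @Column2 = N'Author'   -- hypothetical

-- QUOTENAME brackets the identifiers, which also blocks injection via the names
SET @Sql = N'INSERT INTO dbo.Books (' + QUOTENAME(@Column1) + N', ' + QUOTENAME(@Column2)
         + N') VALUES (@v1, @v2)'

EXEC sp_executesql @Sql,
     N'@v1 varchar(200), @v2 varchar(200)',
     @v1 = 'Some title', @v2 = 'Some author'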
I'm seeing a problem with printing very long strings using the PRINT command on a VARCHAR(MAX) variable. After a certain number of characters the string is truncated... it looks like the limit is around 8,000 characters.
Does anyone know of a solution or a workaround for this?
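That matches the documented limit: PRINT sends at most 8,000 bytes of a varchar (4,000 characters of an nvarchar). A common workaround is to print the value in chunks; a sketch:

DECLARE @s varchar(max), @i int, @chunk int
SET @s = REPLICATE(CAST('x' AS varchar(max)), 20000)
SET @i = 1
SET @chunk = 8000

WHILE @i <= LEN(@s)
BEGIN
    PRINT SUBSTRING(@s, @i, @chunk)
    SET @i = @i + @chunk
END

If the text contains line breaks, cutting each chunk at the last CHAR(10) before the 8,000-character boundary keeps the output readable.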
I have created a table Table with name as varchar and id as int. I started inserting rows like insert into Table values ('arun', 20), and that row goes in fine. Now I have the values "arun's" and 50: insert into Table values ('arun's', 20) makes SQL Server give me an error instead of inserting the row. How would you solve this problem?
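The embedded single quote ends the string literal early. Two standard fixes, sketched against a hypothetical MyTable (Table itself is a reserved word): double the quote inside the literal, or pass the value as a parameter so no escaping is needed at all:

-- Escape by doubling the quote
INSERT INTO MyTable (name, id) VALUES ('arun''s', 50)

-- Or parameterize, which also protects against SQL injection
EXEC sp_executesql
     N'INSERT INTO MyTable (name, id) VALUES (@name, @id)',
     N'@name varchar(30), @id int',
     @name = 'arun''s', @id = 50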
insert into #t(branchnumber) values (005)
insert into #t(branchnumber) values (090)
insert into #t(branchnumber) values (115)
insert into #t(branchnumber) values (210)
insert into #t(branchnumber) values (216)
[code]....
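The rest of this post was cut off, but one guess at the question, assuming branchnumber is an int column: integer literals like 005 lose their leading zeros, because an int stores only the number. If the zeros matter, store the value as varchar, or pad it back on output:

-- Keep the zeros by storing text
CREATE TABLE #t (branchnumber varchar(3))
INSERT INTO #t (branchnumber) VALUES ('005')

-- Or keep the int and pad for display
SELECT RIGHT('000' + CAST(5 AS varchar(3)), 3)   -- '005'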
I have a parameter which should take multiple values and pass them to the code that I use. For this I created a parameter and, temporarily for testing, I am passing some values into it. Using dynamic SQL I am converting the multiple values into multiple records as rows in another variable (called @QUERY). My question is: how do I insert the values from the variable into a table (table variable, temp table, or CTE)? Or is there any way to parse the multiple values into a table, so that when we pass multiple values into a parameter they land in a table as rows?
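Assuming the parameter is a delimited string, it can be turned into rows without dynamic SQL. STRING_SPLIT does it directly on SQL Server 2016+; an XML-based splitter is the usual stand-in on older versions. A sketch:

DECLARE @param varchar(1000), @xml xml
SET @param = 'red,green,blue'

-- SQL Server 2016+
SELECT value FROM STRING_SPLIT(@param, ',')

-- Older versions (values containing & or < would need XML-escaping first)
SET @xml = CAST('<v>' + REPLACE(@param, ',', '</v><v>') + '</v>' AS xml)
SELECT x.v.value('.', 'varchar(100)') AS value
FROM @xml.nodes('/v') AS x(v)

Either result set can feed an INSERT ... SELECT into a temp table or table variable.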
I have a Users table that I use for membership. I am using username varchar(30) as the primary key for this table since username will always be unique.
The question I have is regarding how SQL Server actually stores data:
I see that when I add users, they are always stored alphabetically sorted on username.
I was expecting that users would appear in the table in the order they were added.
Example: I have 3 users (john, jonah, wilson). Now I add a 4th user with username = 'bob'.
If I execute select * from users, it returns (bob, john, jonah, wilson); bob has become the first row of the table.
My question: is SQL Server moving the 3 older rows to make room for 'bob', and is it also rebuilding part of the index because of this new username?
If that is the case, it will have a big impact when I have 100K users and add one user that becomes the first row: 99,999 rows would have to move.
Bottom line, inserts and deletes would be very expensive.
I know SQL Server keeps data physically sorted on the PK, but I am concerned that rows are losing the order in which they were inserted.
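A short sketch of what actually happens, for what it's worth: the clustered index keeps rows in key order within pages, and an insert into a full page triggers a page split rather than shuffling every row after it, so adding 'bob' does not move 99,999 rows. If insertion order matters, record it explicitly instead of relying on storage order:

-- Hypothetical variant of the Users table that remembers insertion order
CREATE TABLE Users
(
    username   varchar(30) NOT NULL PRIMARY KEY,   -- clustered, so key order on disk
    insert_seq int IDENTITY(1,1) NOT NULL          -- increases with each insert
)

-- Without ORDER BY the result order is never guaranteed; ask for what you want
SELECT username FROM Users ORDER BY insert_seq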
It appears that when I insert data into a varchar(8000) field, SQL Server truncates everything after the 256th byte. When I change the field to text, the problem is eliminated. Can someone give me an explanation of why? And can I actually do the insert with the field being varchar(8000) instead of text? This would do wonders for the size and indexing.
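varchar(8000) does not itself truncate at 256 characters, so the cut is probably happening around the table rather than in it: a client display limit (the old Query Analyzer defaulted to 256 characters per column in its output settings) or an intermediate variable declared too short. A quick way to see what is really stored, with hypothetical names:

-- LEN/DATALENGTH report the stored value, regardless of any display limit
SELECT LEN(mycol) AS chars, DATALENGTH(mycol) AS bytes
FROM dbo.mytable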
I have a table that was imported with every column as varchar. Most of these columns need to be in a numeric format. How can I convert a table with columns named column0 (needs to be int), column1 (stays varchar), column2 (needs to be int), and column3 (needs to be int)?
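Assuming every value actually parses as an integer, ALTER TABLE can change the types in place (the table name below is hypothetical). It is worth hunting for non-numeric stragglers first, since one bad value fails the whole ALTER:

-- Rough pre-check (ISNUMERIC is imperfect but catches most problems)
SELECT column0 FROM dbo.MyImport
WHERE column0 IS NOT NULL AND ISNUMERIC(column0) = 0

ALTER TABLE dbo.MyImport ALTER COLUMN column0 int
ALTER TABLE dbo.MyImport ALTER COLUMN column2 int
ALTER TABLE dbo.MyImport ALTER COLUMN column3 int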
Simple example:

DECLARE @tTable TABLE (col1 int)
INSERT INTO @tTable (col1) VALUES (1)
SELECT * FROM @tTable
This works perfectly in SQL Server Management Studio, and the database connection is fine too, as I can generate PP tables from complex (or simple) queries without difficulty.
But when I try to get this same result into a PP table I get an error; the same happens when I replace the table variable with a temporary table.
Message: OLE DB or ODBC error. ... The current operation was cancelled because another operation in the transaction failed.
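One workaround worth trying, on the assumption that the client is tripping over the extra "rows affected" messages the INSERT emits: start the batch with SET NOCOUNT ON so only the final result set travels back.

SET NOCOUNT ON   -- suppress the rows-affected chatter that can confuse OLE DB clients

DECLARE @tTable TABLE (col1 int)
INSERT INTO @tTable (col1) VALUES (1)
SELECT * FROM @tTable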
I am trying to use a stored procedure to update a column in a SQL table with a value from a table variable, and I am getting errors because my syntax is not correct. I think table aliases are not allowed in UPDATE statements.
This is my statement:
UPDATE [dbo].[sessions_teams] stc
SET stc.[Talks] = fmt.found_talks_type
FROM @Find_Missing_Talks fmt
WHERE stc.sessionid IN (SELECT sessionid FROM @Find_Missing_Talks)
AND stc.coupleid IN (SELECT coupleid FROM @Find_Missing_Talks)
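Aliases are allowed; the catch is that the alias must be introduced in the FROM clause and the UPDATE target must then refer to it, whereas UPDATE table alias SET ... is not valid T-SQL. A corrected sketch that also joins the table variable directly, which ties each row's sessionid and coupleid together instead of matching them independently through two IN subqueries:

UPDATE stc
SET stc.[Talks] = fmt.found_talks_type
FROM [dbo].[sessions_teams] AS stc
JOIN @Find_Missing_Talks AS fmt
  ON  fmt.sessionid = stc.sessionid
  AND fmt.coupleid  = stc.coupleid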
I have a table in my database with almost 45 columns and a row size of 10,468 bytes. Most of the columns are varchar, and I think, because of poor knowledge of the data, most of the varchar columns were given more length than they need. Now I want to decrease the size of those columns so the row size comes down to around 8K bytes. If I do this now, does it affect table performance much? In fact, can I do this at all, given there is a lot of data (almost 2 million rows) in the table? If it is possible, is there anything to take care of before changing the column lengths?
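A cautious way to go about it, with hypothetical names: measure the longest value actually stored before shrinking each column, and keep the original NULL/NOT NULL setting in the ALTER (omitting it lets session defaults change nullability). Shrinking a varchar updates metadata rather than rewriting all 2 million rows, but SQL Server still validates the existing data, and the ALTER holds a schema-modification lock while it runs:

-- Never shrink below the longest value already present
SELECT MAX(LEN(some_col)) AS max_used FROM dbo.BigTable

ALTER TABLE dbo.BigTable ALTER COLUMN some_col varchar(200) NOT NULL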
I am currently working on transferring table contents from SQL Server into a CSV file. I have created a stored procedure which does this, but the problem I am facing is that the data in the table is not clean: it contains tabs, newlines, etc., so I had to clean it. I applied the following procedure:
set @columns = ''
select @columns = @columns
    + 'replace(replace(replace(' + column_name + ',Char(10),''''),Char(13),''''),Char(19),'''') as '
    + column_name + ', '
from
[code]...
The table contains around 300 columns with column names of at least 20 characters. When I run the above query I get only a few of the columns instead of all of them, so I checked the length of @columns and it comes back as 4000. I had declared @columns as varchar(8000), so I don't know why this issue is coming up. Is there any other way I can clean the data and then transfer it into the file?
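A guess at the cause: column_name from the catalog views is nvarchar, and mixing nvarchar into the concatenation promotes the whole expression to nvarchar, which caps at 4,000 characters. On SQL Server 2005+ declaring the accumulator as varchar(max) lifts the ceiling. (Note also that the original strips Char(19); a tab is Char(9), which may be what was intended.) A sketch:

DECLARE @columns varchar(max)
SET @columns = ''

SELECT @columns = @columns
    + 'replace(replace(replace(' + column_name + ',char(10),''''),char(13),''''),char(9),'''') as '
    + column_name + ', '
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'MyTable'   -- hypothetical name

SELECT LEN(@columns)           -- now free to grow past 4000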
Should I change my current design, which has a lookup Child Table and a Parent Table containing an ID link to the Child Table, and instead just put a varchar for the value in the Parent Table?
The issue is that the value is unique for a large percentage of cases (although some specific values reoccur very frequently).
However, the parent table has a high number of inserts, which in turn often requires inserts into the Child Table, and that pushes the Child Table out of shape, so performance suffers.
The data is very seldom used (usually for debugging only).
Should I de-normalise this for performance?
Details of the actual circumstances:
I have a table that logs each "page hit" on our web site. It stores the "Referrer".
We store the Referrer for the SESSION separately (the first page viewed will have an external referrer, thereafter all pages will have an "internal" referrer).
This Referrer data is used VERY LITTLE - just for the occasional debugging issue.
Within that, some URLs are frequently used (www.mydomain.com/HOME.ASP), while others contain &Parameters and such and are either relatively unique (www.mydomain.com/ViewBasket.ASP&BasketID=1234) or variable (www.mydomain.com/ViewProduct.ASP&ProductID=1234).
Currently I store the Referrer in a Child "Lookup" Table, and the PK ID in the "Logging" Table.
Some analysis of the data shows:
119,509 rows (pages logged [purged after a couple of days, only includes data where the Referrer is not blank!])
Of those there are 10,760 distinct Referrer values. An average of 11 Page Hits per Referrer value.
On the face of it that suggests that using a child table is good! However, for each row inserted into the Page Hit table I have to do the following to get the Referrer ID:
SELECT @MyReferrerID = ID FROM MyLookupTable WHERE MyValue = @MyReferrerValue
IF @@ROWCOUNT = 0
BEGIN
    INSERT ... @MyReferrerValue ...
    SELECT @MyReferrerID = @@IDENTITY
END
(MyLookupTable has a PK on [ID] and a Unique Index on [MyValue] )
(In a busy month we will add 100,000 rows to this Lookup table, in a slack month 20,000. These get "purged" after a while, so the table size stays about the same. The table is defragged [if required] daily and the Stats updated. There are 350,000 rows in MyLookupTable)
Obviously some values in MyLookupTable reoccur, others are unique and get purged after a while. Inserting these new entries pushes MyLookupTable out of shape, the SHOW_STATISTICS output starts to look awful, and an sp_recompile sorts it out. (Why? The query plan doesn't change... although obviously the "shape" of the table has; maybe the stats got auto-updated, would that change anything?)
So I'm thinking about just storing @MyReferrer as a VARCHAR in the Logging table, and throwing away MyLookupTable.
So really I'm just looking at performance, not Normalisation ...
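If the lookup table stays, a sketch of a tighter insert path using the names from the post: take an update lock during the lookup so two sessions don't insert the same value at once, and prefer SCOPE_IDENTITY() over @@IDENTITY (the latter can return an identity generated by a trigger):

DECLARE @MyReferrerValue varchar(500), @MyReferrerID int
SET @MyReferrerValue = 'www.mydomain.com/HOME.ASP'

BEGIN TRAN
    SELECT @MyReferrerID = ID
    FROM MyLookupTable WITH (UPDLOCK, HOLDLOCK)   -- serializes concurrent lookups of the same value
    WHERE MyValue = @MyReferrerValue

    IF @MyReferrerID IS NULL
    BEGIN
        INSERT INTO MyLookupTable (MyValue) VALUES (@MyReferrerValue)
        SELECT @MyReferrerID = SCOPE_IDENTITY()
    END
COMMIT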
I have looked far and wide and have not found anything that works to resolve this issue.
I am moving data from DB2 using the MS OLE DB Provider for DB2. The OLE DB source sees the column of data as DT_TEXT. I set up a destination to SQL Server 2005 and everything looks good until I try to run the package.
I get the error: [OLE DB Source [277]] Error: An OLE DB error has occurred. Error code: 0x80040E21. An OLE DB record is available. Source: "Microsoft DB2 OLE DB Provider" Hresult: 0x80040E21 Description: "Multiple-step OLE DB operation generated errors. Check each OLE DB status value, if available. No work was done.".
[OLE DB Source [277]] Error: Failed to retrieve long data for column "LIST_DATA_RCVD".
[OLE DB Source [277]] Error: There was an error with output column "LIST_DATA_RCVD" (324) on output "OLE DB Source Output" (287). The column status returned was: "DBSTATUS_UNAVAILABLE".
[OLE DB Source [277]] Error: The "output column "LIST_DATA_RCVD" (324)" failed because error code 0xC0209071 occurred, and the error row disposition on "output column "LIST_DATA_RCVD" (324)" specifies failure on error. An error occurred on the specified object of the specified component.
[DTS.Pipeline] Error: The PrimeOutput method on component "OLE DB Source" (277) returned error code 0xC0209029. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.
Any suggestions on how I can get the large string data in the varchar column in DB2 into the varchar(max) column in SQL Server 2005?
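One workaround worth trying, assuming the DB2 side permits a cast in the source query (names here are hypothetical): convert the long column to a bounded VARCHAR in the SELECT that the OLE DB source sends to DB2, so the provider no longer has to stream it as long data (DB2's VARCHAR limit is roughly 32K):

-- source query for the OLE DB source component (DB2 SQL, not T-SQL)
SELECT CAST(LIST_DATA_RCVD AS VARCHAR(32000)) AS LIST_DATA_RCVD, ...
FROM MYSCHEMA.MYTABLE

The destination column can stay varchar(max) on the SQL Server side.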
I am trying to create a stored procedure inside the SQL Management Studio console and I keep getting errors. Here's my stored procedure:
CREATE PROCEDURE [dbo].[sqlOutlookSearch]
    -- Add the parameters for the stored procedure here
    @OLIssueID int = NULL,
    @searchString varchar(1000) = NULL
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    -- Insert statements for procedure here
    IF @OLIssueID <> 11111
        SELECT * FROM [OLissue], [Outlook]
        WHERE [OLissue].[issueID] = @OLIssueID
          AND [OLissue].[issueID] = [Outlook].[issueID]
          AND [Outlook].[contents] LIKE + ''%'' + @searchString + ''%''
    ELSE
        SELECT * FROM [Outlook]
        WHERE [Outlook].[contents] LIKE + ''%'' + @searchString + ''%''
END
And the error I kept getting is:
Msg 402, Level 16, State 1, Procedure sqlOutlookSearch, Line 18
The data types varchar and varchar are incompatible in the modulo operator.
Msg 402, Level 16, State 1, Procedure sqlOutlookSearch, Line 21
The data types varchar and varchar are incompatible in the modulo operator.
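The error comes from the doubled quotes: in a normal batch, ''%'' parses as two empty string literals with the modulo operator % between them, hence "varchar and varchar are incompatible in the modulo operator". Doubled quotes are only needed inside another string (as in dynamic SQL), and the stray + right after LIKE doesn't belong either. With ordinary single quotes the predicate works:

AND [Outlook].[contents] LIKE '%' + @searchString + '%'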
Can someone tell me if it is possible to add an index to a table variable that is declared as part of a table-valued function? I've tried the following but I can't get it to work.
ALTER FUNCTION dbo.fnSearch_GetJobsByOccurrence
( @param1 int, @param2 int )
RETURNS @Result TABLE (resultcol1 int, resultcol2 int)
AS
BEGIN
My stored procedure has one table variable (@t_Replenishment_Rpt). I want to create an index on this table variable... please advise on any of them in this loop... below is my table variable, and I need to create 3 indexes on it...
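CREATE INDEX cannot target a table variable, but indexes can be declared inline with it: PRIMARY KEY and UNIQUE constraints work on every version (each one is backed by an index), and SQL Server 2014+ also accepts named non-unique INDEX clauses in the declaration. A sketch built on the function above:

ALTER FUNCTION dbo.fnSearch_GetJobsByOccurrence (@param1 int, @param2 int)
RETURNS @Result TABLE
(
    resultcol1 int NOT NULL PRIMARY KEY,   -- index #1, via the PK constraint
    resultcol2 int,
    UNIQUE (resultcol2, resultcol1)        -- index #2, via a UNIQUE constraint
    -- index #3 needs SQL Server 2014+ for a non-unique inline index:
    -- , INDEX IX_resultcol2 (resultcol2)
)
AS
BEGIN
    INSERT INTO @Result (resultcol1, resultcol2)
    SELECT @param1, @param2
    RETURN
END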
I've looked at several different methods for removing leading zeros from a column, but here I need to remove trailing data from a VARCHAR column. For some reason, the old database saved the time alongside the date in my client's app.
For example:
The old database format "2015-07-28 00:00:00"
I need the data in this column in the new database to be only the date, "2015-07-28"; there are a lot of rows with this issue.
Is there a query I can run to remove the 00:00:00 from all of the rows? Some of the fields actually have a real time in there, like 2015-07-28 12:15:35; with those I don't think it's going to be easy, but if I could at least remove the 00:00:00 from all the rows that have it, that would be a good start.
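Since the values start with a fixed-width yyyy-MM-dd, keeping the first 10 characters strips any trailing time; restricting the WHERE to the midnight pattern covers the "good start" case without touching the real times. A sketch with hypothetical names:

-- Preview what would change
SELECT mydate, LEFT(mydate, 10) AS date_only
FROM dbo.MyTable
WHERE LEN(mydate) > 10

-- Strip only the midnight rows, as a first step
UPDATE dbo.MyTable
SET mydate = LEFT(mydate, 10)
WHERE mydate LIKE '% 00:00:00'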
For the life of me I cannot figure out why SSIS will not convert varchar data. Instead of using the table-to-table method, I wrote a SQL query so that I could transform the ntext data type to varchar(512), understanding that natively MS is going towards all-Unicode applications.
The source fields from Access are int, int, int and varchar(512). The same is true of the destination within SQL Server 2005. The field 'Answer' is the varchar field in question...
I get the following error
Validating (Error)
Messages
Error 0xc02020f6: Data Flow Task: Column "Answer" cannot convert between unicode and non-unicode string data types. (SQL Server Import and Export Wizard)
Error 0xc004706b: Data Flow Task: "component "Destination - Query" (28)" failed validation and returned validation status "VS_ISBROKEN". (SQL Server Import and Export Wizard)
Error 0xc004700c: Data Flow Task: One or more component failed validation. (SQL Server Import and Export Wizard)
Error 0xc0024107: Data Flow Task: There were errors during task validation. (SQL Server Import and Export Wizard)
DTS used to be a very strong tool, but a simple import such as this is causing me extreme grief and leaves me wondering if SQL 2005 is ready for primetime. FYI, SP1 is installed. I am running this from a workstation and not on the server, if that makes a difference...
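For what it's worth, SSIS never converts implicitly between Unicode (DT_WSTR/DT_NTEXT) and non-Unicode (DT_STR) columns, even when the lengths line up, and Access text comes through as Unicode. The usual fix is a Data Conversion transformation between source and destination, or a Derived Column with an explicit cast along these lines (1252 is an assumed code page):

(DT_STR, 512, 1252)[Answer]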
Hi All, hope someone can help me... I'm trying to highlight the advantages of using table variables as opposed to temp tables within a single scope. My manager seems to believe that table variables are not advantageous because they reside in memory. He also seems to believe that temp tables do not use memory... Does anyone know how SQL Server could read data from a temp table without passing the data contained therein through memory? Is this a valid advantage/disadvantage of table variables vs temp tables?
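For what it's worth, both objects are allocated in tempdb and both are read and written through the buffer pool like any other page, so neither is purely in-memory or purely on-disk. The allocation can be observed per session; a sketch:

-- A table variable allocates tempdb pages just like a temp table does
DECLARE @tv TABLE (filler char(8000))
INSERT INTO @tv VALUES (REPLICATE('x', 8000))

SELECT user_objects_alloc_page_count        -- nonzero after the insert above
FROM sys.dm_db_session_space_usage          -- SQL Server 2005+
WHERE session_id = @@SPID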
SQLLY challenged, be gentle -- trying to create code that will drop a table using a variable as the table name.

DECLARE @testname as char(50)
SELECT @testname = 'CO_Line_of_Business_' + SUBSTRING(CAST(CD_LAST_EOM_DATE AS varchar), 5, 2)
    + '_' + LEFT(CAST(CD_LAST_EOM_DATE AS varchar), 4) + '_' + 'EOM'
FROM TableName

PRINT @testname gives 'blah...blah...blah' (which is the actual table name on the server). How can I use this variable (@testname) to drop the table? Under severe time constraints, so any help would be greatly appreciated.
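Object names can't be taken from a variable in static SQL, so the DROP has to go through dynamic SQL. A sketch; the RTRIM matters because char(50) pads the name with trailing spaces:

DECLARE @sql nvarchar(200)
SET @sql = N'DROP TABLE ' + QUOTENAME(RTRIM(@testname))
EXEC (@sql)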
In a previous post "Could #TempTable within SP cause lock on tempdb?" http://forums.microsoft.com/msdn/showpost.aspx?postid=2691763&siteid=1
It was obvious that we have to keep the use of #Temp tables to a minimum. Let's assume that some of the temp tables are really difficult to replace and we have to live with them.
Would it be easier on tempdb if the #TempTable were replaced by a table variable? Or do they all end up in tempdb?
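For what it's worth, they all end up in tempdb; a table variable gets a system-generated object there just like a temp table does, which is easy to verify:

-- Run in one batch: the table variable shows up in tempdb's catalog
DECLARE @tv TABLE (i int)
SELECT TOP 1 name FROM tempdb.sys.objects
WHERE type = 'U' ORDER BY create_date DESC   -- on a busy server another session's object may appear

The differences lie elsewhere: table variables carry no statistics, don't trigger recompiles, and ignore user transactions, which tends to make them cheaper for small row counts and worse for large ones.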