As I am creating the non-clustered indexes for the tables, I don't quite understand how it really matters whether I put the columns in the index key columns or in the included columns of the index.
I am really confused about that and I am looking forward to hearing from you. Thank you very much again for your advice and help.
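For what it's worth, the practical difference: key columns are stored at every level of the index B-tree, ordered, and support seeks and range scans; included columns are stored only at the leaf level, unordered, and exist purely to make the index covering. A minimal sketch on a hypothetical dbo.Person table:

CREATE NONCLUSTERED INDEX IX_Person_LastName
ON dbo.Person (LastName)              -- key column: ordered, seekable
INCLUDE (FirstName, EmailAddress);    -- leaf level only: covers SELECTs, cannot be seeked on

A query filtering on LastName can seek this index and still avoid a key lookup when it also selects FirstName and EmailAddress; filtering on FirstName alone gains nothing from the INCLUDE.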
I'm using sys.dm_db_missing_index_details to find missing indexes on a database that is currently in testing. After running a bunch of our reports, there are several suggested indexes on 3 or 4 columns that have 15 - 20 included columns. The included columns are mostly varchars ranging from 1 to 150 characters along with a couple of date columns. My index size on that table is already nearly twice the size of the data.
I don't think it's a good idea to add an index with that many columns, but the information I've read on included columns is very general. I'm wondering if there is something about them that I don't understand that would make this a good idea.
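One thing worth doing before acting on those suggestions is weighing them by how often they would actually be used and how much they claim to help. A sketch against the missing-index DMVs (the joins follow the documented handles):

SELECT d.statement, d.equality_columns, d.inequality_columns, d.included_columns,
       s.user_seeks, s.avg_total_user_cost, s.avg_user_impact
FROM sys.dm_db_missing_index_details AS d
JOIN sys.dm_db_missing_index_groups AS g ON g.index_handle = d.index_handle
JOIN sys.dm_db_missing_index_group_stats AS s ON s.group_handle = g.index_group_handle
ORDER BY s.user_seeks * s.avg_total_user_cost * (s.avg_user_impact / 100.0) DESC;

The DMV suggestions are per-query and never consolidated, which is why they so readily propose 15-20 included columns; a suggestion with few seeks and low impact is usually safe to ignore.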
I am trying to tune a process that is running slowly. I analyzed the process using the Database Engine Tuning Advisor, and it recommended the creation of 3 indexes, all non-clustered:
1) ColA, include ColB
2) ColA, include ColC
3) ColA, include ColD
So... I created a single non-clustered index on:
4) ColA, include ColB, ColC, ColD
That should do the same thing, right? A look at my execution plan shows that the index I created is being scanned -- 3 times. What is puzzling me, though, is that the Database Engine Tuning Advisor is still recommending I create these 3 separate indexes, even with the index (4) that I created in existence.
If it matters, ColA, ColB, ColC and ColD are all int FKs.
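The consolidated index is a reasonable move; a hypothetical sketch of it:

CREATE NONCLUSTERED INDEX IX_MyTable_ColA_Covering
ON dbo.MyTable (ColA)          -- hypothetical table name
INCLUDE (ColB, ColC, ColD);

It covers all three recommended shapes, so the three separate indexes would be redundant. As far as I know, the Tuning Advisor evaluates its own candidate set and does not always credit an existing index that merely supersedes its recommendations, so the repeated advice can be ignored here. The scans (rather than seeks) suggest the predicates on ColA are not selective or SARGable enough for a seek; even so, three scans of one narrow covering index can be far cheaper than three scans of the base table.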
I would like to know the impacts (if any) of adding a nonclustered index with included columns on large tables (these tables are populated by bulk inserts from text files).
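The main cost is write amplification: every bulk-inserted row must also be written into the index leaf level, including copies of all the included columns. A common mitigation, sketched with hypothetical names, is to disable the index for the load and rebuild afterwards:

ALTER INDEX IX_BigTable_Covering ON dbo.BigTable DISABLE;

BULK INSERT dbo.BigTable FROM 'C:\data\feed.txt'   -- hypothetical path
WITH (FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

ALTER INDEX IX_BigTable_Covering ON dbo.BigTable REBUILD;

Whether this wins depends on the ratio of loaded rows to existing rows; rebuilding a huge index is not free either.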
If I have a table with one or more nullable fields, and I want to make sure that when an INSERT or UPDATE occurs and one or more of these fields is left NULL, either explicitly or implicitly, is there a way I can set these to non-null values without interfering with the INSERT or UPDATE as far as the other fields in the table are concerned?
EXAMPLE:
CREATE TABLE dbo.MYTABLE(
    ID NUMERIC(18,0) IDENTITY(1,1) NOT NULL,
    FirstName VARCHAR(50) NULL,
    LastName VARCHAR(50) NULL,
[Code] ....
If an INSERT looks like any of the following, what can I do to change the NULL being assigned to DateAdded to a real date, preferably the value of GETDATE() at the time of the insert? I've heard of INSTEAD OF triggers, but I'm not trying to override the entire INSERT or UPDATE, just the one (maybe two) fields that are being left as NULL or explicitly set to NULL. The same would apply for any UPDATE where DateModified is not specified or explicitly set to NULL; I would want to change it so that DateModified is not null on any UPDATE (see the sketch after these examples).
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) VALUES('John','Smith',NULL)
INSERT INTO dbo.MYTABLE( FirstName, LastName) VALUES('John','Smith')
INSERT INTO dbo.MYTABLE( FirstName, LastName, DateAdded) SELECT FirstName, LastName, NULL FROM MYOTHERTABLE
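A DEFAULT constraint handles the implicit case (the column omitted from the INSERT) but not an explicit NULL; an AFTER trigger can cover both without rewriting the whole statement the way an INSTEAD OF trigger would. A sketch, assuming the DDL above plus DateAdded/DateModified datetime columns:

ALTER TABLE dbo.MYTABLE
    ADD CONSTRAINT DF_MYTABLE_DateAdded DEFAULT (GETDATE()) FOR DateAdded;
GO
CREATE TRIGGER trg_MYTABLE_Dates ON dbo.MYTABLE
AFTER INSERT, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- Overwrite explicit or implicit NULLs after the triggering statement
    -- completes; all other columns are left untouched.
    UPDATE t
    SET DateAdded    = COALESCE(t.DateAdded, GETDATE()),
        DateModified = GETDATE()
    FROM dbo.MYTABLE AS t
    JOIN inserted AS i ON i.ID = t.ID;
END;

Note the inner UPDATE re-fires the trigger only if the RECURSIVE_TRIGGERS database option is ON (it is OFF by default).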
Dear All, I've run the upgrade wizard. I'm trying to upgrade from SQL Server 2000 Enterprise Edition to SQL Server 2005 Standard Edition. I'm getting the below error after running the wizard. Please help me. Error: ActiveX components can't create an object
Recently I have come across a requirement where I need to design a table.
There are some columns in the table, like the ones below, with a DECIMAL datatype:
BldgLength
BldgHeight
BldgWeight
Based on my knowledge, the values before the decimal point will never be more than 4 digits.
Now, as per MSDN, a precision of 1 - 9 digits requires 5 storage bytes.
so I can create the column as:
BldgLength DECIMAL(6,2) DEFAULT 0
OR
BldgLength DECIMAL(9,2) DEFAULT 0
Now, while reading some articles, I came to know that when we do some kind of operation like SUM or AVG on the above column, the result might be larger than the current data type allows.
So some folks suggested that I should keep some extra digits of headroom for such math functions, to avoid an arithmetic overflow error.
So my question is: what should the data type for the above columns be?
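For what it's worth, SQL Server documents SUM over DECIMAL(p, s) as returning DECIMAL(38, s), so plain aggregation is unlikely to overflow; the headroom advice matters more for multiplication and for expressions that keep the declared precision. A sketch, assuming the 4-digits-before-the-point rule holds:

CREATE TABLE dbo.Bldg(
    BldgLength DECIMAL(6,2) NOT NULL DEFAULT 0,   -- up to 9999.99
    BldgHeight DECIMAL(6,2) NOT NULL DEFAULT 0,
    BldgWeight DECIMAL(6,2) NOT NULL DEFAULT 0
);

-- Defensive cast before arithmetic, if overflow is still a concern:
SELECT SUM(CAST(BldgLength AS DECIMAL(18,2))) AS TotalLength
FROM dbo.Bldg;

Since DECIMAL(6,2) and DECIMAL(9,2) both cost 5 bytes per value, DECIMAL(9,2) buys three extra digits for free.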
The query I'm running so far is wrong, but here it is...
SELECT t.FromUserID, t.ToUserID, t.msg,
       u.UserName AS UserFrom, u.GroupID AS FromGroup,
       u2.UserName AS UserTo, u2.GroupID AS ToGroup
FROM tmp_Messages t
LEFT JOIN (SELECT UserID, GroupID, UserName FROM tmp_users WHERE GroupID = 3) u
[Code] .....
I'm missing the details of one of the users. I know what the problem is, I just can't figure out how to get this working without using temp tables, which I can't do in the production version.
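Guessing at the intent from the fragment above: filtering GroupID = 3 inside the derived table removes users in other groups before the LEFT JOIN runs, so messages involving them lose their user details. Moving the group test into the join predicate keeps those rows (with NULLs) instead of dropping the details, and needs no temp tables. A sketch:

SELECT t.FromUserID, t.ToUserID, t.msg,
       uf.UserName AS UserFrom, uf.GroupID AS FromGroup,
       ut.UserName AS UserTo, ut.GroupID AS ToGroup
FROM tmp_Messages AS t
LEFT JOIN tmp_users AS uf
       ON uf.UserID = t.FromUserID   -- condition lives in the ON clause, not a
      AND uf.GroupID = 3             -- derived-table WHERE, so the row survives
LEFT JOIN tmp_users AS ut
       ON ut.UserID = t.ToUserID;

If both users' details should always appear regardless of group, drop the AND uf.GroupID = 3 line entirely.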
How do I use the LIKE operator in a SELECT statement to retrieve multiple column names in a SQL Server DB? For example, I have a table, say employees, where I want to get all column names like emp_, acc_, etc. using '%'. And what is the below query used for?
SELECT column_name as 'Column Name', data_type as 'Data Type', character_maximum_length as 'Max Length' FROM information_schema.columns WHERE table_name = 'tblUsers'
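That query lists each column of tblUsers with its data type and maximum character length, read from the INFORMATION_SCHEMA metadata views. The same view answers the first question; note that _ is itself a LIKE wildcard, so it should be escaped in brackets. A sketch:

SELECT column_name
FROM information_schema.columns
WHERE table_name = 'employees'
  AND (column_name LIKE 'emp[_]%' OR column_name LIKE 'acc[_]%');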
I'm able to successfully import data in a tab-delimited .txt file using the following statement.
BULK INSERT ImportProjectDates FROM 'C:\tmp\ImportProjectDates.txt' WITH (FIRSTROW = 2, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n')
However, in order to import the text file, I had to add columns to the text file to match the columns that exist in the table. The original file is an export out of another database and contains all but 5 columns from my db.
How would I control which column BULK INSERT actually imports when working with a .txt file? I've tried using a FORMAT FILE, however I kept getting errors which I tracked down to being a case of not using it with a .txt file.
Yes, I could have the DBA add in the missing columns to the query from the other DB to create the columns, however I'd like to know a little bit more about this overall.
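One workaround that avoids format files entirely: BULK INSERT can target a view, and a view that exposes only the columns present in the file lets the engine apply defaults/NULLs to the rest (this is documented as long as the view is simple and updatable). A sketch with hypothetical column names:

CREATE VIEW dbo.vImportProjectDates
AS
SELECT ProjectID, StartDate, EndDate   -- only the columns the file actually supplies
FROM dbo.ImportProjectDates;
GO
BULK INSERT dbo.vImportProjectDates FROM 'C:\tmp\ImportProjectDates.txt'
WITH (FIRSTROW = 2, FIELDTERMINATOR = '\t', ROWTERMINATOR = '\n');

The file's column order must still match the view's column order, since there is no per-column mapping without a format file.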
I have multiple databases on the server, and all my databases have the tables stdVersions and stdChangeLog. The stdVersions table has a field called DatabaseVersion which stores the version of the database. The stdChangeLog table has a field called ChangedOn which stores the date of any change made in the database.
I need to write a query/stored procedure/function that will return all the database names, version and the date changed on. The results should look something like this:
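A dynamic-SQL sketch that builds one UNION ALL query across sys.databases; it assumes every database really has both tables under dbo, and uses MAX() defensively in case either table holds more than one row:

DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N'
SELECT ' + QUOTENAME(name, '''') + N' AS DatabaseName,
       (SELECT MAX(DatabaseVersion) FROM ' + QUOTENAME(name) + N'.dbo.stdVersions)  AS DatabaseVersion,
       (SELECT MAX(ChangedOn)       FROM ' + QUOTENAME(name) + N'.dbo.stdChangeLog) AS LastChangedOn
UNION ALL'
FROM sys.databases
WHERE database_id > 4;   -- skip the system databases

SET @sql = LEFT(@sql, LEN(@sql) - LEN('UNION ALL'));   -- trim the trailing UNION ALL
EXEC sys.sp_executesql @sql;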
I have a table of Customers & their data in about 20 Columns.
I have another table that has potential Customers with 3 Columns.
I want to append the records from Table 2 onto Table 1 to the Columns with the same names.
I've thought of using UNION ALL or INSERT ... SELECT, but I'm mainly stuck on the most efficient way to do this.
There is also no related field that can be used to join the data as these Customers in table 2 have no Customer ID yet as they're only potential Customers.
Can I just append the 3 columns from Table 2 to the same 3 columns in table 1?
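Yes; since no join is needed (the rows are simply new), an INSERT ... SELECT naming the three shared columns is the straightforward approach. A sketch with hypothetical column names:

INSERT INTO dbo.Customers (FirstName, LastName, Phone)   -- the 3 shared columns
SELECT FirstName, LastName, Phone
FROM dbo.PotentialCustomers;

The other 17 columns in Table 1 get their defaults or NULL, and if CustomerID is an IDENTITY column it is assigned automatically.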
I'm using MS SQL Server 2008 and I'm trying to figure out if it is possible to identify what tables / columns contain specific records.
In the example below the information is generated for the end user, so the column headers (Customer ID, Customer, Address, Phone, Email, Account Balance, Currency) are not necessarily the field names from the relevant tables; they are simply more identifiable headers for the user.
Customer ID | Customer   | Address            | Phone       | Email                | Account Balance | Currency
js0001      | John Smith | 123 Nowhere Street | 555-123-456 | jsmith@nowhere.com   | -100            | USD
jd2345      | Jane Doe   | 61a Down the road  | 087-963258  | jdoe@downtheroad.com | -2108           | GBP
mx9999      | Mr X       | Whoknowsville      | 147-852369  | mrx@whoknows.com     | 0               | EUR
In reality the column headers may be called eg (CustID, CustName, CustAdr, CustPh, CustMail, CustACBal, Currency).
As I am not the generator of this report, I would like to know whether or not it is possible to identify the field names and/or the tables they exist in, if I were to use the report info to search for them. For example, could I perhaps find out the field name and table for "jd2345" or for "mrx@whoknows.com", because the Customer ID or Email may not be what the actual fields are called.
I'm not a DB admin and I don't have rights to do a stored procedure on the server. I'm guessing what I want is not so simple to do, but is it possible to do via a query?
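It is possible with a plain query: build a one-off dynamic search over the string columns listed in the catalog views and execute it with sp_executesql; no stored procedure (and no DDL rights) is needed, only read access. A sketch; it is slow on a big database, so narrow the type list or schema if you can:

DECLARE @search nvarchar(200) = N'mrx@whoknows.com';
DECLARE @sql nvarchar(max) = N'';

SELECT @sql = @sql + N'
SELECT DISTINCT ''' + s.name + '.' + t.name + '.' + c.name + N''' AS located_in
FROM ' + QUOTENAME(s.name) + '.' + QUOTENAME(t.name) + N'
WHERE ' + QUOTENAME(c.name) + N' = @search
UNION ALL'
FROM sys.columns AS c
JOIN sys.tables  AS t  ON t.object_id = c.object_id
JOIN sys.schemas AS s  ON s.schema_id = t.schema_id
JOIN sys.types   AS ty ON ty.user_type_id = c.user_type_id
WHERE ty.name IN ('varchar', 'nvarchar', 'char', 'nchar');

SET @sql = LEFT(@sql, LEN(@sql) - LEN('UNION ALL'));
EXEC sys.sp_executesql @sql, N'@search nvarchar(200)', @search = @search;

Each hit reports schema.table.column, which answers where "jd2345" or "mrx@whoknows.com" actually lives.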
I have 3 tables: a supplier table, a types table, and a relationship table between the two. I want to build a query that puts the different types in columns and uses a Boolean value to identify whether the supplier supplies that type.
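A conditional-aggregation sketch (hypothetical table/column names, and the type list hard-coded, since any pivot's column list must be known in advance or built dynamically):

SELECT s.SupplierName,
       MAX(CASE WHEN t.TypeName = 'Hardware' THEN 1 ELSE 0 END) AS Hardware,
       MAX(CASE WHEN t.TypeName = 'Software' THEN 1 ELSE 0 END) AS Software,
       MAX(CASE WHEN t.TypeName = 'Services' THEN 1 ELSE 0 END) AS Services
FROM dbo.Suppliers AS s
LEFT JOIN dbo.SupplierTypes AS st ON st.SupplierID = s.SupplierID
LEFT JOIN dbo.Types         AS t  ON t.TypeID = st.TypeID
GROUP BY s.SupplierName;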
I would like to pull data from two separate columns based on the value of MakeFlag. So if MakeFlag = 0 I would like the description to show, but for anything else I would like the catalog description to show up.
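A CASE expression does exactly this; a sketch, assuming hypothetical Description/CatalogDescription column names:

SELECT p.Name,
       CASE WHEN p.MakeFlag = 0
            THEN p.Description
            ELSE p.CatalogDescription
       END AS DisplayDescription
FROM dbo.Product AS p;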
I have a matrix report with STORES in the row group and DATES in the column group. The table sums on SALES. The DATES column is formatted like =format(Fields!DATES.Value, "MMM yyyy"). The table also has 2 parameters @Start and @End. This all works great but I then added a child report so that the user can click on the SALES value for any sale by month and store. The child report uses the @Start and @End parameters from the original report but this is where I run into problems.
Rather than bringing me the sales details for a particular store and month, it brings back everything from the time period selected with the original date parameters. So say I originally selected 2015-01-01 to 2015-06-30 with the parameters; when I click on FEB 15 in my matrix report I get Feb's data along with all the other months, i.e. Jan-Jun 15. The DATES fields in both reports are in the same date format; in fact both reports use exactly the same dataset.
I realize it's something to do with the formatting of the DATE field not being recognized in the linked report.
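One approach: instead of reusing @Start/@End, have the drillthrough action pass the first day of the clicked month as its own parameter, and filter the child dataset on that month only. A sketch of the child dataset query (hypothetical table name, assumed parameter @MonthStart):

SELECT STORES, DATES, SALES
FROM dbo.SalesDetail
WHERE DATES >= @MonthStart
  AND DATES <  DATEADD(MONTH, 1, @MonthStart);

The format() call only changes the displayed label; the underlying Fields!DATES.Value is still a full date, which is why a month boundary has to be derived and passed explicitly.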
Is there a good way to add columns to a table type?
I built several procs which make use of table-valued-parameters, and they work pretty nicely, until I need them to accept additional columns. Then I have to drop all the procs that use them, alter the types, and rebuild all the procedures, which is a huge pain in the rear.
Is there any good way (built in, or custom) to alter the def of a table type that's used as a parameter to multiple stored procedures?
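As far as I know there is no ALTER for table types, so the drop-and-recreate dance is unavoidable; what can be made less painful is finding everything that depends on the type before starting. A sketch (hypothetical type name):

-- List every module (proc, function, ...) that references the table type:
SELECT referencing_schema_name, referencing_entity_name
FROM sys.dm_sql_referencing_entities(N'dbo.MyTableType', N'TYPE');

Some teams sidestep the problem by versioning the type instead: create dbo.MyTableType_v2 with the extra columns, migrate the procedures to it one at a time, and drop the old type once nothing references it.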
CREATE TABLE Person (
    PersonID INT,
    Name varchar(50),
    HireDate datetime,
    HireOrder int,
    AltOrder int
)
Assume I have data like this
INSERT INTO Person VALUES(1, 'Rob',  '06/02/1988', 0, 0)
INSERT INTO Person VALUES(2, 'Tom',  '05/07/2016', 0, 0)
INSERT INTO Person VALUES(3, 'Phil', '01/04/2011', 1, 0)
INSERT INTO Person VALUES(4, 'Cris', '01/04/2011', 2, 0)
INSERT INTO Person VALUES(5, 'Jen',  '01/04/2011', 3, 0)
INSERT INTO Person VALUES(6, 'Bill', '01/05/2011', 0, 0)
INSERT INTO Person VALUES(7, 'Ray',  '01/23/2012', 0, 0)
I'm trying to simplify my requirement... providing the input of HireDate, HireOrder, and AltOrder, I need to be able to pick up the next person
For example, if I provide the input HireDate: 06/02/1988, HireOrder: 0, AltOrder: 0, the return value expected is "Tom" because he is the next person after the provided input.
For example, if I provide the input HireDate: 05/07/2016, HireOrder: 0, AltOrder: 0, the return value expected is "Phil" because he is the next person after the provided input. Though Phil and Cris have the same date, their HireOrder takes precedence in this case. If they also had the same HireOrder, AltOrder would come into the picture to determine the next person.
Another example: if I provide the input HireDate: 01/04/2011, HireOrder: 1, AltOrder: 0, the return value expected is "Cris" because she is the next person after the provided input. Here HireOrder determines the order.
If I provide, HireDate: 01/23/2012, HireOrder:0, AltOrder:0, as there is no person after this, I should be able to pick the first person on the list - in this case Rob.
I can write some business logic in front-end, but I thought it would be good, if I can move this to a stored procedure which can return me the PersonID for optimal performance.
I have tried writing various conditions but couldn't achieve a query that meets all my requirements here.
I'm even fine if my last condition is not met (returning the first person in the list, in case no one is available after the provided input).
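Reading the examples, "next" appears to mean the next row in list (PersonID) order, with the (HireDate, HireOrder, AltOrder) tuple serving only to identify the current person, since Tom (2016) follows Rob (1988) but precedes Phil (2011). A sketch on that assumption, including the wrap-around to the first person:

DECLARE @HireDate datetime = '01/04/2011', @HireOrder int = 1, @AltOrder int = 0;

-- Locate the person matching the input tuple (assumed unique across the 3 values):
DECLARE @CurrentID int =
    (SELECT PersonID FROM Person
     WHERE HireDate = @HireDate AND HireOrder = @HireOrder AND AltOrder = @AltOrder);

-- Next PersonID after the current one; when nobody follows, every row ranks
-- equally in the CASE and the ORDER BY falls back to the first person:
SELECT TOP (1) PersonID, Name
FROM Person
ORDER BY CASE WHEN PersonID > @CurrentID THEN 0 ELSE 1 END, PersonID;

With the sample data this returns Tom for Rob's tuple, Phil for Tom's, Cris for the (01/04/2011, 1, 0) input, and wraps from Ray back to Rob.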
Our front end saves all IP addresses used by a customer as a comma-separated string; we need to analyse these to check for blocked IPs, which are all stored in another table.
A LIKE statement comparing each string with the 100 or so excluded IPs would be very expensive, so I'm thinking it would be less so to split out the comma-separated values into tables.
The problem we have is that we never know how many IPs could be stored against a customer, so I'm guessing a function would be the way forward, but this is the point where I get stuck.
I can remove the 1st IP address into a new column and produce the new list ready for the next removal; as part of this we would also need to create new columns on the fly depending on how many IPs are in the column.
This needs to be repeated for each row:
SELECT IP_List,
       LEFT(IP_List, CHARINDEX(',', IP_List) - 1) AS IP_1,
       REPLACE(IP_List, LEFT(IP_List, CHARINDEX(',', IP_List) + 0), '') AS NewIPList1
FROM IpExclusionTest
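An alternative worth considering: split the list into rows rather than a variable number of columns, which removes the "how many IPs?" problem entirely and turns the blocklist check into a join. A sketch; STRING_SPLIT needs SQL Server 2016+, and the names beyond IP_List are hypothetical:

SELECT t.CustomerID, LTRIM(RTRIM(s.value)) AS BlockedIP
FROM IpExclusionTest AS t
CROSS APPLY STRING_SPLIT(t.IP_List, ',') AS s
JOIN dbo.BlockedIPs AS b
    ON b.IP = LTRIM(RTRIM(s.value));

On older versions, a numbers-table or recursive-CTE splitter function does the same job as the STRING_SPLIT call.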
Is there a way to permanently change the order of the columns in Job Activity Monitor?
I'd like to move Duration to the right of Step Name, but this only lasts so long as I have JAM open. Once I close it and re-open, JAM goes back to its default column order. Google gives me nothing but the temporary "drag and drop" method that I already know about.
I have an issue where I am storing various international characters in nvarchar columns, but need to branch the data at one point of processing so that ASCII characters are run through an additional cleansing process and all non-ASCII characters are set aside.
Is there a way to identify which nvarchar values are within the ASCII range and can be converted to varchar without corruption? Also, the strings may contain a mix of english and international character sets, so the entire string must be checked and not just the first character.
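One common test is a round trip: convert the nvarchar value to varchar and back, and treat values that survive unchanged as safely convertible. Since the whole string is compared, a single foreign character anywhere flags the row, not just one in the first position. A sketch with hypothetical names; note this actually tests lossless conversion to the column's default code page, which is a slightly wider net than strict 7-bit ASCII (e.g. é may pass on a Latin1 code page), so add a binary-collation range check if strict ASCII is required:

SELECT s.Id,
       s.TextValue,
       CASE WHEN s.TextValue = CAST(CAST(s.TextValue AS varchar(4000)) AS nvarchar(4000))
            THEN 'convertible: cleanse'
            ELSE 'non-ASCII: set aside'
       END AS Route
FROM dbo.Strings AS s;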
I have a doubt and want to check whether my thinking is correct.
Let's consider 2 tables: one with fixed-length columns (char) and another with variable-length columns (varchar).
The table with fixed-length columns will always allocate the same size within a page; the table with variable-length columns will allocate only the actual length of the data within a page.
I think that updates on the table with fixed-length columns will have a greater chance of in-place updates, at least from a data-length perspective, whereas updates on the table with variable-length columns will see more split updates from a data-length perspective.
I have some huge tables (think 200+GB for a single table) which are excellent candidates for sparse columns. The tables have many columns which are defined with decimal datatypes (13,2) with a large percentage of them (over 50% in most cases- some as much as 99%) being 0.00. Since this is very expensive in terms of storage my idea is to set all the 0.00 values equal to NULL then set them as sparse. Across 100 or so identical databases, I have 5 such tables, with 20-40 columns in each table.
1.) Three steps for each column in each table in each DB (sketched after this list):
Step 1: alter the table to allow NULLs in the column
Step 2: UPDATE table SET column = NULL WHERE column = 0.00
Step 3: alter the column to SPARSE
2.)
Step 1: Create entirely new table with sparse column definitions
Step 2: copy entire table, transforming 0.00 to null for affected columns via SSIS
Step 3: drop original table, rename new table to original name
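A minimal sketch of option 1 for a single column, using the documented ADD SPARSE syntax (hypothetical names; on 200+GB tables the UPDATE in step 2 should be batched to keep the transaction log in check):

-- Step 1: allow NULLs
ALTER TABLE dbo.BigFacts ALTER COLUMN Amount01 DECIMAL(13,2) NULL;

-- Step 2: turn the 0.00 placeholder into NULL (batch this on huge tables)
UPDATE dbo.BigFacts SET Amount01 = NULL WHERE Amount01 = 0.00;

-- Step 3: mark the column sparse
ALTER TABLE dbo.BigFacts ALTER COLUMN Amount01 ADD SPARSE;

Option 2 trades this per-column churn for one rebuild-sized copy, which is often faster overall but needs double the space while both tables exist.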
I have run into a perplexing issue with how to UPDATE some rows in my table correctly. I have an Appointment table which has appointment times and appointment names; however, the name only shows on the appointment's start-time row, and I need it to show for the whole duration. So, for example, in my DDL 'Morning Appt' only shows at 8:00; I need it to show on each line until the next appointment starts, and so on. In this case 'Morning Appt' should show at 8:00, 8:15, and 8:30.
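A fill-down sketch (hypothetical ApptTime/ApptName column names): for every row with a NULL name, copy the name from the latest earlier row that has one, which is exactly "carry the name forward until the next appointment starts":

UPDATE a
SET ApptName = x.ApptName
FROM dbo.Appointment AS a
CROSS APPLY (SELECT TOP (1) p.ApptName
             FROM dbo.Appointment AS p
             WHERE p.ApptTime <= a.ApptTime
               AND p.ApptName IS NOT NULL
             ORDER BY p.ApptTime DESC) AS x
WHERE a.ApptName IS NULL;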