t1 (id_primary, id_secundary, name), e.g. [(1,1,"name1"), (2,1,"name2")]
I want to join this table with the following second table:
t2 (id_primary, id_secundary, value), e.g. [(1, NULL, "value1"), (NULL, 1, "value2")]
The join should first try to find a match on id_primary and only if that
fails it should find a match on id_secundary. Every row in t1 is matched
against a single row in t2.
The following query works:
select
a.name, isnull(b.value, c.value)
from
t1 a left outer join t2 b on a.id_primary = b.id_primary
left outer join t2 c on a.id_secundary = c.id_secundary
I'm wondering, though, if it would be possible to write a query that only uses t2 once, since it is actually quite a complex query that is calculated twice now. Any ideas (besides using a temp table)?
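If this is SQL Server 2005 or later, one way to reference t2 only once is OUTER APPLY with a preference ranking (a sketch; the inner SELECT is where the complex query would go, written a single time):

SELECT a.name, m.value
FROM t1 a
OUTER APPLY (
    SELECT TOP (1) b.value
    FROM t2 b  -- replace with the complex query, written once
    WHERE b.id_primary = a.id_primary
       OR b.id_secundary = a.id_secundary
    ORDER BY CASE WHEN b.id_primary = a.id_primary THEN 0 ELSE 1 END
) m;

This tries the id_primary match first and falls back to id_secundary, returning at most one t2 row per t1 row.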
I run into a problem when trying to write a query to show employee vacation days.
table1: column1 is dates, e.g.:
2015-01-01
2015-01-02
2015-01-03
...
2015-12-31

table2: employeeID, vacation_date:
Tom    2015-01-03
Tom    2015-01-04
David  2015-01-04
John   2015-01-08
Mary   2015-01-12
My query output needs to be:
2015-01-01
2015-01-02
2015-01-03  Tom
2015-01-04  Tom
2015-01-04  David
2015-01-05
2015-01-06
2015-01-07
2015-01-08  John
2015-01-09
2015-01-10
2015-01-11
2015-01-12  Mary
... etc., all the way to 2015-12-31.
When I use a left outer join, I only get one employee per date.
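For this shape of output, a plain left outer join from the calendar table to the vacation table should return one row per date/employee pair, so multiple employees on the same date each get their own row (a sketch using the table and column names above):

SELECT d.column1 AS vacation_date, v.employeeID
FROM table1 d
LEFT OUTER JOIN table2 v ON v.vacation_date = d.column1
ORDER BY d.column1, v.employeeID;

If only one employee comes back per date, the cause is usually a TOP, an aggregate, or an extra filter somewhere, not the left join itself.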
I am looking for alternative logic for the code below, where I am inserting into a table while left joining with that same table.
insert TABLE1 ([ID], [Key], [Return])
select distinct a.[ID], cat1, cat2
from (select ID, [Key] Cat1, [Return] cat2 from @temp as temp) a
left join TABLE1 o on a.ID = o.ID and a.Cat1 = o.[Key] and a.cat2 = o.[return]
where o.[Key] is null
order by ID
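An alternative is to express the anti-join with NOT EXISTS instead of a left join against the target table (a sketch of the same logic):

insert TABLE1 ([ID], [Key], [Return])
select distinct t.ID, t.[Key], t.[Return]
from @temp as t
where not exists (
    select 1
    from TABLE1 o
    where o.ID = t.ID
      and o.[Key] = t.[Key]
      and o.[Return] = t.[Return]
);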
SELECT
    Loan.loan_No AS Loan_No,
    Loan.customer_No AS Customer_No,
    Customer.first_name AS First_name,
    Customer.second_name AS Second_name,
    Customer.surname AS Surname,
    Customer.initials AS Initials,
    Bank.Bank_name AS Bank_name,
    Branch.Branch_name AS Branch_name,
    Branch.branch_code AS Branch_code,
    Bank_detail.bank_acc_type AS Bank_acc_type,
    Transaction_Record.transaction_Amount AS Transaction_Amount,
    Transaction_Record.transaction_Date AS Transaction_Date,
    Loan.product AS Product,
    Product.product_Type AS Product_Type,
    Product_Type.loan_Type AS Loan_Type
FROM Transaction_Record
INNER JOIN Loan ON Transaction_Record.loan_No = Loan.loan_No
INNER JOIN Product ON Loan.product = Product.product
INNER JOIN Customer ON Loan.customer_No = Customer.customer_no
INNER JOIN Bank_detail ON Customer.customer_no = Bank_detail.customer_no
INNER JOIN Branch ON Bank_detail.Branch = Branch.Branch
INNER JOIN Bank ON Branch.Bank = Bank.Bank
INNER JOIN Product_Type ON Product.product_Type = Product_Type.product_Type
I have tables Companies, CompaniesDetails (the company branches), Addresses and Companies_Addresses.
The Addresses table contains street and city, while Companies_Addresses has the keys for both companies and branches, i.e., rows are linked to Companies and CompaniesDetails via companyID and companyDetailID, and to Addresses via addressID.
Companies_Addresses:
    id (PK)
    companyID (FK)
    companyDetailID (FK)
    addressID (FK)
I am able to get the branch address at the moment with this code but I would like to get the company address as well using a single select statement.
Code:

SELECT DISTINCT Addresses.city AS branchCity, Addresses.street AS branchStreet
FROM Companies
LEFT JOIN CompaniesDetails AS cd ON Companies.companyID = cd.companyID
LEFT JOIN Companies_Addresses AS c ON c.companyDetailID = cd.companyDetailID
LEFT JOIN Addresses ON c.addressID = Addresses.addressID
WHERE Companies.name LIKE 'abc'
  AND Companies.status_indicator LIKE 'Current'
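One way to pull the company address in the same statement is to hit Companies_Addresses and Addresses a second time under different aliases (a sketch, assuming company-level rows in Companies_Addresses carry a NULL companyDetailID):

SELECT DISTINCT
    ba.city AS branchCity, ba.street AS branchStreet,
    ca.city AS companyCity, ca.street AS companyStreet
FROM Companies
LEFT JOIN CompaniesDetails AS cd ON Companies.companyID = cd.companyID
LEFT JOIN Companies_Addresses AS bca ON bca.companyDetailID = cd.companyDetailID
LEFT JOIN Addresses AS ba ON bca.addressID = ba.addressID
LEFT JOIN Companies_Addresses AS cca ON cca.companyID = Companies.companyID
    AND cca.companyDetailID IS NULL  -- assumed marker for the company's own address
LEFT JOIN Addresses AS ca ON cca.addressID = ca.addressID
WHERE Companies.name LIKE 'abc'
  AND Companies.status_indicator LIKE 'Current'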
If a table gets joined with itself and then joined again with itself, is it possible to perform a kind of outer join from the third table to the second table while limiting the range of records in the second table to those established by the join between the first and second tables?
[ASCII diagram, condensed: three stages shown side by side — (1) join on RegionID; (2) join on RegionID and City; (3) show records not qualifying for the second join but limited to the scope of records established by the first join. Every sample row has RegionID One; the City values shown are New York and Buffalo.]
This is regarding joins. For instance, I have three tables A, B, C, and they are related to each other. If I join A->B and B->C, is it necessary to join A->C too? What happens if I omit the join A->C? And what happens if I include it?
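For concreteness, a sketch with hypothetical key columns: joining A->B and B->C already relates all three tables transitively, so an A->C join is only needed when it expresses an additional restriction of its own.

SELECT *
FROM A
JOIN B ON B.a_id = A.a_id   -- A->B
JOIN C ON C.b_id = B.b_id;  -- B->C; A and C are now linked through B
-- Adding a C-to-A predicate here would further filter the result;
-- omitting it simply leaves the transitive relationship in place.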
Hello all. Let's say I have one table "contract" containing the following data:

id  year  sales
45  2005  100
45  2004  95
89  2005  250
89  2004  275
12  2005  42

I want to make a table with one unique row for each id and then a column for 2004 sales and one for 2005 sales, like this:

select a.id, a.sales, b.sales
from contract a, contract b
where a.id = b.id(+)
and a.year = 2005
and b.year = 2004

The rows for id 45 and 89 are shown perfectly, but id 12 is not shown at all because it doesn't have a record for 2004! I don't know why, because I outer-joined the tables. It works perfectly when I have two distinct tables for each year (for instance contract_2005 and contract_2004), so the problem seems to lie in joining one table with itself. Does someone have a solution for this? Thanks! Maarten
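The likely culprit is the plain filter on b.year: it is applied after the outer join and throws away the rows that the (+) preserved. A sketch of the fix in the same old-style Oracle syntax, making that filter part of the outer join:

select a.id, a.sales, b.sales
from contract a, contract b
where a.id = b.id(+)
and a.year = 2005
and b.year(+) = 2004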
Is it possible to use wildcards with an equals statement? Such as:

SELECT * FROM Table WHERE City = '%' AND State = 'Ca'

Basically just stating where City equals anything. I know you can do it with a LIKE statement, such as:

SELECT * FROM Table WHERE City LIKE '%' AND State = 'Ca'

but is that very efficient? The reason I want to do this is that I want to programmatically set the city, so just omitting it won't work. Also, City LIKE '%' seems to not include NULL. Is there any way to include NULL as well as everything else? Thanks for your help!
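One common pattern is to make the filter optional via the parameter itself (a sketch; @city is a hypothetical parameter set programmatically): a NULL parameter then matches every row, including rows where City is NULL.

SELECT *
FROM Table
WHERE (@city IS NULL OR City = @city)
  AND State = 'Ca'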
SELECT
    CASE WHEN Population BETWEEN 0 AND 100 THEN '0-100'
         WHEN Population BETWEEN 101 AND 1000 THEN '101-1000'
         ELSE 'Greater than 1000' END AS Population_Range,
    COUNT(CASE WHEN Population BETWEEN 0 AND 100 THEN '0-100'
               WHEN Population BETWEEN 101 AND 1000 THEN '101-1000'
               ELSE 'Greater than 1000' END) AS [No. Of Countries]
FROM Country
GROUP BY
    CASE WHEN Population BETWEEN 0 AND 100 THEN '0-100'
         WHEN Population BETWEEN 101 AND 1000 THEN '101-1000'
         ELSE 'Greater than 1000' END
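The repeated CASE expression can also be written once in a derived table and grouped on the alias (a sketch that produces the same result):

SELECT Population_Range, COUNT(*) AS [No. Of Countries]
FROM (
    SELECT CASE WHEN Population BETWEEN 0 AND 100 THEN '0-100'
                WHEN Population BETWEEN 101 AND 1000 THEN '101-1000'
                ELSE 'Greater than 1000' END AS Population_Range
    FROM Country
) AS ranges
GROUP BY Population_Range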
I'm required to let users order items that will be displayed on a page in the order they specify.
For example: a user can drag and drop items in a list to set the order in which they will be displayed.
I have my drag and drop code ready to do this. I have an idea of how to do it, but I think it's too inefficient: I was going to create an orderby field and populate it with a number corresponding to the position of each item. However, as one can deduce, if a user drags and drops a record between two others, I would have to change not only its orderby number but also the orderby numbers of all the other items.
For instance, if I dropped an item with an orderby number of 3 between the items numbered 6 and 7, I would have to change its 3 to a 7 and then update the orderby numbers of all the records in between.
Well, I hope I make sense. It’s easier to visualize it on paper.
Does anyone know how to tackle this issue of user dynamic ordering?
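One way to avoid renumbering the whole list row by row is a set-based shift (a sketch; Items, itemID, @itemID and @newPos are hypothetical names):

-- open a gap at the drop position
UPDATE Items SET orderby = orderby + 1 WHERE orderby >= @newPos;
-- slot the dragged row into the gap
UPDATE Items SET orderby = @newPos WHERE itemID = @itemID;

Another common trick is to number items with gaps (10, 20, 30, ...), so most drops touch only the moved row, with an occasional renumbering pass when a gap runs out.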
What sort of script would convert a pre-existing table into the second one below?
I only want to merge the columns with Primary_IDs 1111 & 3333 so that they have the same Secondary_ID values, without duplicating any Secondary_ID values shared between the two (which I've marked in red below).
SELECT C1.firstname, C1.lastname, SP.lastyearsales, A.city
FROM SalesPerson AS SP
JOIN Employee AS E1 ON SP.salespersonID = E1.EmployeeID
RIGHT JOIN Employee AS E2 ON E1.EmployeeID = E2.EmployeeID
JOIN Address AS A ON E2.AddressID = A.AddressID
RIGHT JOIN EmployeeContact AS C1 ON E1.ContactID = C1.ContactID
1. [Relshp].[PersonId] to primary key [Person].[PersonId], and
2. [Relshp].[RelatedToPersonId] to primary key [Person].[PersonId].
What kind of index structure would best support those FK constraints?
Would it be:
a) One combined index:
CREATE INDEX IX_Relshp ON Relshp (PersonId, RelatedToPersonId)

or b) Two indexes:
CREATE INDEX IX_RelshpP ON Relshp (PersonId)
CREATE INDEX IX_RelshpR ON Relshp (RelatedToPersonId)

or c) Two "mirrored" combined indexes:
CREATE INDEX IX_RelshpP ON Relshp (PersonId, RelatedToPersonId)
CREATE INDEX IX_RelshpR ON Relshp (RelatedToPersonId, PersonId)
Sorry if this has been asked before, I looked through the FAQ but could not find an answer to the following.
I have a Project table which contains amongst other fields, a CreatedByID field and a LastModifiedByID field. Both these fields point to a User table. What I would like to do is get the two usernames from a query on the project table.
I know how to get one username using the following SQL command, but how do I get the second username?
SELECT Project.Name,User.Username FROM Project,User WHERE Project.ID=@id AND User.ID = Project.CreatedByID;
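The usual approach is to join the User table twice under different aliases, once per foreign key (a sketch in the same comma-join style; depending on your database, the User table name may need quoting):

SELECT Project.Name,
       creator.Username AS CreatedByName,
       modifier.Username AS LastModifiedByName
FROM Project, User creator, User modifier
WHERE Project.ID = @id
  AND creator.ID = Project.CreatedByID
  AND modifier.ID = Project.LastModifiedByID;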
Use the default pubs database in SQL Server 2000, with the authors table and the publishers table.
Write a query to list the first name and last name of all authors and the name of the publisher (if any) present in the same city as the author. If no publisher is present in the city where the author is located, then the column should contain a NULL value. If there is more than one publisher in the city where the author is located, then the details of the author are to be repeated for each publisher.
But there is no matching key field between the authors table and the publishers table.
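Since the only thing the two tables share is the city, the city columns themselves can serve as the join condition (a sketch against the standard pubs schema):

SELECT a.au_fname, a.au_lname, p.pub_name
FROM authors a
LEFT OUTER JOIN publishers p ON p.city = a.city

The LEFT OUTER JOIN keeps authors with no publisher in their city (pub_name comes back NULL) and repeats an author once for each publisher in the same city.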
I have a query that takes 12 minutes to execute. The query uses around 9 tables, but I have narrowed the problem down to one table that has over 65 million rows. The problem table has only 3 fields.
The query uses the primary key of this table to perform the join; FieldTwo and FieldThree are only used as output columns.
I noticed if I remove FieldTwo and FieldThree from the output (but still leave the table in the query), the query executes in 1 second. However if I include FieldTwo and FieldThree in the output, the query takes over 12 minutes to execute.
I cannot index FieldTwo and FieldThree because of the field size, and I cannot reduce the size of the fields because of the data that needs to be stored in them. How can I index, or do something similar, to speed up the table lookup?
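If this is SQL Server 2005 or later, one option worth testing is a covering index that carries the two wide fields as INCLUDE columns; included columns live only at the leaf level and are not subject to the index key size limit (a sketch; ProblemTable and ID are hypothetical stand-ins for the table and its primary key column):

CREATE NONCLUSTERED INDEX IX_ProblemTable_Cover
ON ProblemTable (ID)              -- the key used for the join
INCLUDE (FieldTwo, FieldThree);   -- wide fields needed only as output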
I have an ASP.NET web application that uses a Treeview control to display what can potentially be a very large data set. In the past, I would just run a recursive stored procedure in my database that would output the XML, which I would save to a file. The Treeview used the XML file as its data source. I did this because the stored procedure can take so long to run (10 seconds or more) that it isn't practical to have the Treeview point directly at it. This worked well enough because the data didn't change very often.
Now it looks as if the application will be used in a production environment, and I really need to find a way to supply up-to-date data to the Treeview dynamically. I have tried creating a view that would provide XML and be updated any time the target table is updated, but that has not worked. I have also tried creating a trigger that would output to an XML file any time an edit was made (using the xp_cmdshell functionality), but that has proven difficult as well.
Is there a simpler solution that I am just missing? I just want an up-to-date XML representation of the data that is a result of a recursive function.
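If the database is SQL Server 2005 or later, a recursive CTE combined with FOR XML can produce the XML on demand rather than via a stored file (a sketch, assuming a hypothetical self-referencing table Nodes(ID, ParentID, Name)):

WITH Tree AS (
    SELECT ID, ParentID, Name
    FROM Nodes
    WHERE ParentID IS NULL          -- the root nodes
    UNION ALL
    SELECT n.ID, n.ParentID, n.Name
    FROM Nodes n
    INNER JOIN Tree t ON n.ParentID = t.ID
)
SELECT ID, ParentID, Name
FROM Tree
FOR XML AUTO;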
We have a website that accesses our SQL databases. In the past, we used our internal employees to improve our SQL databases. However, we want to outsource the work.
There is a lot of information we would like to keep private from the outsourcing.
Is there a way to efficiently create test data throughout our database without changing our original database? And is there a way to easily update our database with the new changes afterwards?
I found this product through Google: EMS Data Generator for SQL Server (http://www.sqlmanager.net/en/products/mssql/datagenerator). Would this program help us make test data?
I'm stuck on something here. Any help would be great. This is a relational database question.
I'm trying to inner join two columns of one table with one column of another. The following code doesn't work, but I think you can see what I'm trying to do.
Procedure _Links_List AS
CREATE TABLE #TempTable (
    LinkId int,
    LinkCategory varchar(50),
    LinkStatus varchar(50),
    LinkName varchar(50)
)
INSERT INTO #TempTable (LinkId, LinkCategory, LinkStatus, LinkName)
SELECT LinkId, _SubCategories.SubCategoryName, _SubCategories.SubCategoryName, LinkName
FROM _Links
INNER JOIN _SubCategories ON _Links.CategoryId = _SubCategories.SubCategoryId
INNER JOIN _SubCategories ON _Links.StatusId = _SubCategories.SubCategoryId
SELECT LinkId, LinkCategory, LinkStatus, LinkName FROM #TempTable
Also, I know how to do this when I have separate category tables for each category (LinkCategory, LinkStatus). For example:
Procedure _Links_List AS
CREATE TABLE #TempTable (
    LinkId int,
    LinkCategory varchar(50),
    LinkStatus varchar(50),
    LinkName varchar(50)
)
INSERT INTO #TempTable (LinkId, LinkCategory, LinkStatus, LinkName)
SELECT LinkId, _Links_Categories.CategoryName, _Links_StatusCategories.StatusName, LinkName
FROM _Links
INNER JOIN _Links_Categories ON _Links.CategoryId = _Links_Categories.CategoryId
INNER JOIN _Links_StatusCategories ON _Links.StatusId = _Links_StatusCategories.StatusId
SELECT LinkId, LinkCategory, LinkStatus, LinkName FROM #TempTable
I know the above works but I'm trying to figure out how to have just one category table and one subcategory table for all of my categories of all my tables.
So instead of having to create a new table for every category and all the procedures for them for all my tables, I want to be able to just use these two tables.
If anyone knows how I'd go about this, especially when a table uses two category columns, thank you.
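The first procedure fails because the same table is joined twice without distinct aliases; giving each reference its own alias makes the double lookup work (a sketch of the corrected SELECT):

SELECT l.LinkId,
       cat.SubCategoryName  AS LinkCategory,
       stat.SubCategoryName AS LinkStatus,
       l.LinkName
FROM _Links AS l
INNER JOIN _SubCategories AS cat  ON l.CategoryId = cat.SubCategoryId
INNER JOIN _SubCategories AS stat ON l.StatusId = stat.SubCategoryId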
I am using MSSQL and ASP Classic for this project. Let me explain my problem: I have two tables:
Users: ID, Username
ChangeLog: ID, Description, UserID, DateChanged
The Users table will contain one record for each user in the system.
The ChangeLog table can contain any number (including 0) of rows relating to each user; when a change to the user's account is made, an entry is put into the ChangeLog table.
The object of my report is to display a list of all users in the system and display the date and description of the last change made to the user.
So far I have written this:
Code:
SELECT Users.ID, Users.Name, ChangeLog.Description, ChangeLog.DateChanged
FROM Users
LEFT OUTER JOIN ChangeLog ON (ChangeLog.UserID = Users.ID)
ORDER BY ChangeLog.DateChanged DESC
However, if a user has more than one change against it, then a row will be returned for each change instead of just the latest one. So I then tried this:
Code:
SELECT Users.ID, Users.Name, ChangeLog.Description, ChangeLog.DateChanged
FROM ChangeLog
LEFT OUTER JOIN Users ON (Users.ID = ChangeLog.UserID)
ORDER BY ChangeLog.DateChanged DESC
The problem with this one? It only shows users that have actually had a change made to them, and the report *needs* to display ALL users regardless of whether they have been changed or not.
I'd appreciate any help you can give me at all, thanks in advance!
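To keep every user and only their most recent change, the LEFT JOIN can be restricted to the row whose DateChanged is the maximum for that user (a sketch, using the column names from the table definitions above):

SELECT u.ID, u.Username, c.Description, c.DateChanged
FROM Users u
LEFT OUTER JOIN ChangeLog c
    ON c.UserID = u.ID
   AND c.DateChanged = (SELECT MAX(c2.DateChanged)
                        FROM ChangeLog c2
                        WHERE c2.UserID = u.ID)
ORDER BY c.DateChanged DESC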
I have a simple setup: 2 tables and a joining table, and want to retrieve a data set showing every possible combination of table A and table B together with whether that combination actually exists in the joining table or not.
My tables:
channels
========
channel_id
channel_name

items
=====
item_id
item_name

channels_items (joining table)
==============================
channel_id
item_id
created
An example of the dataset I want (assuming 2 items and 2 channels, with itemA not being in channelB):
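A CROSS JOIN can generate every channel/item combination, with a LEFT JOIN against the joining table flagging whether each pair actually exists (a sketch):

SELECT c.channel_id, c.channel_name,
       i.item_id, i.item_name,
       CASE WHEN ci.channel_id IS NULL THEN 0 ELSE 1 END AS combination_exists
FROM channels c
CROSS JOIN items i
LEFT JOIN channels_items ci
    ON ci.channel_id = c.channel_id
   AND ci.item_id = i.item_id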
Along with some other rows with the same format. I need to join to this table using a RiskElementCode that I get from the source system. The trick is that it can be at any level, and I don't know in advance which level it is at. So what I have to do is somehow get the correct row from the lookup table, based on the code from the source, to get the correct level.
So, for example, if I receive the RiskElementCode 'SSR', that is in column RiskElementCategoryCode_3, so I need the row where RiskElementCategoryCode_3 = 'SSR' and everything after RiskElementCategoryCode_3 is 'NA'. If I get 'DFR', I need the row where RiskElementCategoryCode_4 = 'DFR'; since there are no levels deeper than 4, I don't need to check anything else. If I get 'PRR', then I need the row where RiskElementCategoryCode = 'PRR' and code_2, code_3 and code_4 are all 'NA'.
So besides getting the correct row based on the code, I need to get the correct row based on the level, where the deeper levels are 'NA'. I should only get one row each time.
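One way to express that is a predicate per level, each requiring the deeper levels to be 'NA' (a sketch; RiskLookup and @code are hypothetical names for the lookup table and the incoming RiskElementCode):

SELECT *
FROM RiskLookup
WHERE (RiskElementCategoryCode_4 = @code)
   OR (RiskElementCategoryCode_3 = @code AND RiskElementCategoryCode_4 = 'NA')
   OR (RiskElementCategoryCode_2 = @code AND RiskElementCategoryCode_3 = 'NA'
        AND RiskElementCategoryCode_4 = 'NA')
   OR (RiskElementCategoryCode = @code AND RiskElementCategoryCode_2 = 'NA'
        AND RiskElementCategoryCode_3 = 'NA' AND RiskElementCategoryCode_4 = 'NA')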
Hi, I'd be interested in people's thoughts about the following. A user on my site will be searching for a venue name, and that name could officially include a sponsor which the user might not search for. I am using the AutoCompleteDropdown from the AJAX Control Toolkit, so the user will start typing in a few characters and the results will be returned. I can generate the results from SQL by doing a simple LIKE '%' + @searchTerm + '%', but this fills me with great fear of table scans. At the moment we'd be querying against a table of 5K records, but our application is very new.
I'm thinking one option is to split the words into another table — a one-to-many relationship holding each word of the venue name. The benefit would be that you could do a LIKE @term + '%', but then I have the cost of the join (and the added complexity, which is not a major issue). Any thoughts/tips? Thanks!
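For what it's worth, a sketch of the word-per-row idea (Venues and VenueWords(venue_id, word) are hypothetical names): a prefix-only LIKE is sargable, so an index on the word column can be seeked rather than scanning the venues table.

SELECT DISTINCT v.venue_id, v.venue_name
FROM Venues v
INNER JOIN VenueWords w ON w.venue_id = v.venue_id
WHERE w.word LIKE @searchTerm + '%'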
We have a SQL Server 2012 Enterprise live transactional database that is now growing over 1G per month and is becoming a size problem for us. It is currently at 23G. Character type fields are all Unicode and I have calculated a savings of 5G in space converting only 2 such fields averaging 206 characters each to non-Unicode, and almost 10G in space if we convert a few more of them from nchar and nvarchar to char and varchar types. These fields will never have a requirement to hold Unicode characters that cannot be in the SQL_Latin1_General_CP1_CI_AS collation as they come in as plain ASCII originally and will always do so per the protocol standard.
I'm the software architect and chief C# developer, though only a DBA hack, or I would not have designed our database with Unicode fields for high-volume tables that did not need them when the database was created 3 years ago. I want to correct this mistake now, before we finalize converting to an AlwaysOn environment, to address various performance and backup issues.
After downsizing these two or more fields, we would like to shrink the database one time to take advantage of the space savings for full backups, and for seeding an AlwaysOn environment.
1. What is the safest and most efficient conversion technique for downsizing columns from nchar/nvarchar to char/varchar types, especially when there are multiple fields in the same table to be converted? (A sketch follows these questions.) I tested doing an "add new column, set new=old, drop old, rename new to old" for both of the main two fields I want to convert from nvarchar(max) to varchar(max), and it took 81 minutes on our test server (4 virtual cores, 8G memory) before running out of disk space, even though there was 8G left on the disk and the db has unlimited size set (Could not allocate space for object 'dbo.abc'.'PK_xyz' in database 'xxx' because the 'PRIMARY' filegroup is full). I did delete an old database before it finished after getting a disk warning, so maybe it did not count that new space. Regardless, it was too slow, and this was on just the two largest of these columns (12.6M rows); it ran at only 2 to 3% CPU, so it seemed inefficient and indicated unacceptable downtime if we were to convert even these two fields, much less any additional ones. The average field size for these two fields was only 206 characters, or 412 bytes each. Another technique I plan to try is to create the new table definition in a new schema, select into it from the old table, then move the tables between schemas and delete the old one. I have an FK and indexes to contend with on the table.
2. If I figure out how to do #1 efficiently within an acceptable maintenance window, what is the safest practice for doing a one-time shrink, ending up with reorganized/rebuilt indexes and updated statistics? I understand the logic of not doing regular shrinks and that they can sometimes actually increase the size.
3. Is there any third-party tool that could take a backup and restore it into a new database with the modified field definitions, or otherwise convert certain field types?
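On question 1, one technique worth benchmarking (a sketch; dbo.MyTable and BigField are hypothetical stand-ins for the real table and column) is an in-place ALTER COLUMN, one column at a time inside the maintenance window. For nvarchar(max) columns this rewrites the LOB data and is fully logged, so timing it on a restored copy first shows both the duration and the log/space demand:

ALTER TABLE dbo.MyTable
ALTER COLUMN BigField varchar(max) NULL  -- match the column's existing NULL/NOT NULL setting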
Situation: SQL Server 2000. At my new employer they have a production database on one server and a copy of it that is set to read only on another server which is used for reporting.
#1 They have a SQL Server Agent job on the production server that, twice a day:

- Backs up the production database
- Copies the backup file to a directory on the reporting server (it's pretty big and can take time if there are problems with the LAN)
#2 They have a SQL Server Agent job on the reporting server, scheduled to run 2 hours or so after the job on server 1 (they figured it would be a safe bet that the backup and copy process of the first job would be done by then), that:

- Breaks the user connections to the reporting database
- Performs a restore on the reporting database using the backup file that was copied to the holding directory by the production job
- Sets some permissions for various users
- Sets the reporting database to READ ONLY

What I would like to do is find a more efficient way to create this reporting database. I have started researching DTS methods but would like some opinions from more experienced users.
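For reference, the restore job's steps boil down to something like this (a sketch; the database name and backup path are hypothetical):

ALTER DATABASE Reporting SET SINGLE_USER WITH ROLLBACK IMMEDIATE  -- break user connections
RESTORE DATABASE Reporting FROM DISK = 'D:\Holding\Production.bak' WITH REPLACE
ALTER DATABASE Reporting SET READ_ONLY
ALTER DATABASE Reporting SET MULTI_USER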
The query I'm running so far is wrong, but here it is...
SELECT t.FromUserID, t.ToUserID, t.msg,
       u.UserName AS UserFrom, u.GroupID AS FromGroup,
       u2.UserName AS UserTo, u2.GroupID AS ToGroup
FROM tmp_Messages t
LEFT JOIN (SELECT UserID, GroupID, UserName
           FROM tmp_users
           WHERE GroupID = 3) u
[Code] .....
I'm missing the details of one of the users. I know what the problem is; I just can't figure out how to get this working without using temp tables, which I can't do in the production version.
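Since each message references two users, one option is to join tmp_users twice, once per side, instead of pre-filtering inside a derived table (a sketch; any GroupID filter can move to the outer WHERE if it is still needed):

SELECT t.FromUserID, t.ToUserID, t.msg,
       u1.UserName AS UserFrom, u1.GroupID AS FromGroup,
       u2.UserName AS UserTo,   u2.GroupID AS ToGroup
FROM tmp_Messages t
LEFT JOIN tmp_users u1 ON u1.UserID = t.FromUserID
LEFT JOIN tmp_users u2 ON u2.UserID = t.ToUserID;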