The SQL Server Integration Services team would like to hear from you. Our goal is to understand your current Data Extract, Transform & Load (ETL) practices and prioritize features for future releases.
You will have a chance to win one of five 80GB Zunes. To win a Zune, you must fill out the survey by Dec. 20, 2007.
To participate in the survey, please click here: https://mscuillume.smdisp.net/Collector/Survey.ashx?Name=SqlETLSurvey2
Has anyone implemented, or does anyone know of an easy reference for implementing, the F-statistic for single-variable linear regression? I was told it's not implemented in SSAS, so it seems like implementing it manually is the only way to go.
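For what it's worth, the statistic is straightforward to compute manually. Below is a minimal T-SQL sketch, assuming a hypothetical table dbo.Observations with columns x and y; for simple regression, F = SSR / (SSE / (n - 2)) with 1 and n - 2 degrees of freedom:

WITH raw AS (
    SELECT CAST(COUNT(*) AS float)   AS n,
           SUM(CAST(x AS float))     AS sx,
           SUM(CAST(y AS float))     AS sy,
           SUM(CAST(x AS float) * x) AS sxx,
           SUM(CAST(y AS float) * y) AS syy,
           SUM(CAST(x AS float) * y) AS sxy
    FROM dbo.Observations
),
cs AS (
    -- corrected (centered) sums of squares and cross-products
    SELECT n,
           sxx - sx * sx / n AS Sxx,
           syy - sy * sy / n AS Syy,
           sxy - sx * sy / n AS Sxy
    FROM raw
)
SELECT Sxy * Sxy / Sxx                                         AS SSR,  -- regression sum of squares
       Syy - Sxy * Sxy / Sxx                                   AS SSE,  -- error sum of squares
       (Sxy * Sxy / Sxx) / ((Syy - Sxy * Sxy / Sxx) / (n - 2)) AS F_statistic
FROM cs
WHERE n > 2 AND Sxx <> 0;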
Hi, is there a possibility to somehow get statistics for the most-used stored procedures in a SQL Server 2000 database? Is there any field in sysobjects for the number of executions of a certain SP?
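For what it's worth: in SQL Server 2000, sysobjects carries no execution counter, so the usual route there is a server-side Profiler trace on SP:Completed. From SQL Server 2008 onwards the DMV sys.dm_exec_procedure_stats exposes counts directly; a sketch for those later versions (counts reset on restart or plan eviction):

SELECT TOP (20)
    OBJECT_NAME(ps.object_id, ps.database_id) AS procedure_name,
    ps.execution_count,
    ps.last_execution_time
FROM sys.dm_exec_procedure_stats AS ps
WHERE ps.database_id = DB_ID()
ORDER BY ps.execution_count DESC;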
We are having an occasional problem occur where a process will not stop blocking.
We are trying to trace the problem, but in the interim, I would like to set up an alert that notifies me when a process has been blocking for too long.
Are any of the lock wait times good statistics to use for such an alert? If not, is there anything else I could look at from the alert level?
If I had to, I could periodically create a table of sysprocess spids that are at the top of blocking chains, then test for a spid that lingers. I'm hoping I can avoid this and use the built-in monitoring instead, though.
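A minimal sketch of that fallback, using SQL 2000-era sysprocesses (the blocked column holds the blocking spid, and waittime is in milliseconds), finding spids at the head of a blocking chain, i.e. blocking others while not blocked themselves:

SELECT p.spid, p.loginame, p.program_name, p.cmd
FROM master.dbo.sysprocesses AS p
WHERE p.blocked = 0                        -- not itself blocked
  AND EXISTS (SELECT 1
              FROM master.dbo.sysprocesses AS b
              WHERE b.blocked = p.spid     -- but blocking someone...
                AND b.waittime > 30000);   -- ...for longer than 30 seconds

On later versions, a SQL Agent performance-condition alert on the "Processes blocked" counter (SQLServer:General Statistics object) is another built-in option.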
Is there a DMV or similar in SQL 2012, or SQL 2008, that shows when a statistic was last used by the optimizer? I would like to clean up some of the auto-generated stats, assuming it's possible to do so. In particular I'm looking to drop those statistics that were created by one-off queries, data loads, etc., and are now doing nothing but adding to the execution time of Update Statistics jobs.
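As far as I know there is no DMV that records when the optimizer last used a statistic, so the practical fallback is to enumerate the auto-created stats and their last update dates and judge from there. A sketch:

SELECT OBJECT_NAME(s.object_id)            AS table_name,
       s.name                              AS stats_name,
       STATS_DATE(s.object_id, s.stats_id) AS last_updated
FROM sys.stats AS s
WHERE s.auto_created = 1
  AND OBJECTPROPERTY(s.object_id, 'IsUserTable') = 1
ORDER BY table_name, stats_name;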
I am trying to modify a survey DB schema created by OOP (Object Oriented Programming) snobs who, in my opinion, had their heads so far up their butts. This is the current schema they created.
Actually, there aren't several foreign keys as shown in the picture, but that's what they (the OOP snobs) are doing to the "Response" table: they are inserting question_collection_id and its name, as well as question_id and its name, into the Response table. However, it was not ill-intended: "We just wanted the report to load quickly" (the OOP snobs). But I really recommend my schema instead, because of its simplicity.
Yet I haven't proven that my schema will produce the report just as quickly as the OOP snobs' version. As mentioned earlier, since they don't have the corresponding foreign key constraints, I do find quite a number of discrepancies between the values of the "question" columns in the "Question" and "Response" tables, and the "question_collection" columns in the "question_collection" and "Response" tables. I could set up a scheduled task to eliminate those problems, but I really prefer my schema to theirs. So far, I can only explain that mine is better because of its simplicity and a DBA's gut instinct. I'd like to hear a superior expert's opinion on which schema is better and why.
Also attached are zip files of the schema images, in case you can't see them via those URLs.
The BI Survey is the largest independent survey of BI technology users worldwide. (It used to be known as the OLAP survey, but has expanded to cover a broader array of BI technologies.) The survey analyzes people's buying decisions, BI implementation cycles and business results achieved.
The BI Survey is strictly independent --- they ask vendors like us to ask our customers like you to participate, but we don't have any other involvement --- except closely reading the results, of course. The Survey will send all participants a summary of the full survey results and place them in a drawing for ten $50 Amazon vouchers.
You can complete the survey on-line by following these links:
English version - website: http://www.eu-survey.com/webprod/cgi-bin/askiaext.dll?Action=StartSurvey&T1=BI&lan=1&lin=20
German version: http://www.eu-survey.com/webprod/cgi-bin/askiaext.dll?Action=StartSurvey&T1=BI&lan=2&lin=67
I am going into a salary negotiation/yearly review next month after my week off, and I am trying to determine what to ask for. I am thinking another 10K and an extra week off would not be unreasonable, but I wanted to get some idea of what you guys think I should ask for in terms of pay.

I have been developing software for seven years and I have been a DBA for a little over five. I live in pricey northern Virginia. On a fairly regular basis I do about 55 to 60 hour weeks. On my DBA team, I am the only one who can handle both development and production tasks. The others are strictly developers. Although I have been relieved of most of my customer support tasks by our newbie, the customer support manager still brings the nastier bits to me. It is my perception that the more complex tasks get assigned to me. I get told on a regular basis that I am the best DBA this place has ever had, and other embarrassing accolades are regularly thrown my way. After a year, other than my boss I am the DBA that has been here the longest in a high-turnover, high-burnout company. This year, as we try to move to a SAP model, it looks like we will be going to 24/7 support on a disaster recovery model I am designing and implementing, so I guess I am getting the pager.

So how much money should I be asking for? You can PM me with a number if you want.
I have surveys that I need to add weights to and was wondering if there was a way to convert the contents of a column.
-- pseudocode made concrete: CASE expressions do the conversion
-- (dbo.survey is an assumed table name; the original snippet had no FROM clause)
select empid,
       ans_for_q1,
       case ans_for_q1 when 'A' then 0 when 'B' then 3 when 'C' then 5 end as weightscore_q1,
       ans_for_q2,
       case ans_for_q2 when 'A' then 8 when 'B' then 4 when 'C' then 2 when 'D' then 1 when 'E' then 0 end as weightscore_q2
from dbo.survey
Here is what the table looks like:
empid | Ans_for_Q1 | Ans_for_Q2
------|------------|-----------
1001  | A          | C
1002  | B          | E
And these are the possible answers and what they need to be converted to:
Weights for answers to Question 1:
Q1_A = 0
Q1_B = 3
Q1_C = 5

Weights for answers to Question 2:
Q2_A = 8
Q2_B = 4
Q2_C = 2
Q2_D = 1
Q2_E = 0
select x.clientID, x.Q1_1, x.Q1_2, x.Q1_3, x.Q1_4
from (select clientID
             -- the four branches were identical in the original post;
             -- presumably QuestionID = 2, 3, 4 was intended for Q1_2..Q1_4
            ,sum(case when QuestionID = 1 and Answer in (1,2,3,4,5,6,7,8,9,10) then Answer else 0 end) as Q1_1
            ,sum(case when QuestionID = 2 and Answer in (1,2,3,4,5,6,7,8,9,10) then Answer else 0 end) as Q1_2
            ,sum(case when QuestionID = 3 and Answer in (1,2,3,4,5,6,7,8,9,10) then Answer else 0 end) as Q1_3
            ,sum(case when QuestionID = 4 and Answer in (1,2,3,4,5,6,7,8,9,10) then Answer else 0 end) as Q1_4
      from dbo.survey
      group by clientid) as x
group by x.clientID, x.Q1_1, x.Q1_2, x.Q1_3, x.Q1_4
Hi,

We are a systems research group at the Computer Science department at Rutgers University, and are conducting a survey to understand details about network, systems and database administration. We hope that this information would help us recreate a realistic environment to help research in 'systems management'.

We request network, systems, and database administrators to take this survey. As an incentive, all surveys completed in their entirety will be entered into a drawing of a number of $50 gift certificates (from Amazon.com).

We hope you have a few minutes to take the survey, which is located at: http://vivo.cs.rutgers.edu/administration_survey.html

Research in our group: The goal of our research is to improve the overall availability and maintainability of services. Since administrators form an integral part of these services, a key aspect of this work is to build environments and tools that ease the task of service administration. In particular, environments which would help administrators know how their actions might impact the real service (before performing them for real), we believe, would be useful in preventing inadvertent actions. This survey tries to understand the existing environments, what administrators do currently to test the 'validity' of their actions, and the difficulties they face in doing so. The two specific systems we are looking at are networks and databases, as we believe these are important components of many services.

If you have any questions regarding this survey or our work, feel free to email us: Fabio Oliveira (fabiool at cs || rutgers || edu), or Kiran Nagaraja (knagaraj at cs || rutgers || edu)

Thanks for your time,
Fabio Oliveira
PhD student, Vivo Research Group (http://vivo.cs.rutgers.edu)
Rutgers University
Dear Madam/Sir,

You are invited to participate in an international research study. This research project is headed and led by Cambridge student Nico Baumgarten.

What is it all about? It is a well-known fact that there exist many differences between cultures. However, how these differences affect motivation, if there are effects at all, is not yet clear. This survey is set out to change this. With your help, it will be possible to achieve this. Therefore, we kindly ask you to spare only 10 minutes of your time and visit the survey home page.

Go to the survey: http://crosscultural.research.yi.org or, if this does not work, the direct link: http://82.4.146.32

Who participates? All over the world, people are chosen to participate in the survey. Although the survey is completely anonymous, each person is a representative of his/her country. Hence, it does not matter which job a person has, whether he/she is male or female, old or young. The more people choose to participate, the more accurately this survey is going to represent and show cultural influences as well as similarities. So if you want to tell your friends about this survey - please do so! Just forward this email to your friends or use the feature on the survey home page.

What is in it for you? With your help your country will be represented in the survey. Of course, you can opt in to receive the results of the study at the end of the survey. Any information you provide will be kept strictly confidential and anonymous.

Thanking you in advance for your support, I remain with kind regards,

Nico Baumgarten
http://crosscultural.research.yi.org
http://82.4.146.32
I am a developer and I have a problem trying to design a system to manage data coming from web surveys. Each section can potentially have dozens of questions, i.e., fields. I am focusing here only on the table(s) that will hold the survey data. I do not have any DDL as I am still trying to understand this!

All the examples I have found so far in books and on the web seem to deal with fairly limited data that is easily, or so it looks, broken down into multiple tables. It seems that, from my research, having a wide table per survey section with each field as a column, which has been suggested to me, is not proper design for many reasons - missing values for non-required questions, a table with 100s of possible columns, etc. - so I played with the idea of a single table where one of the columns would be the foreign key pointing to a questions table and another column would hold the data (this is a simplified explanation.) The problem with this is that now this column will have to accommodate all types of data, from bits to large varchars, and field validation now seems impossible, jeopardizing data integrity.

What do knowledgeable designers come up with in this case? Can someone point me in the right direction?

Jerry
I've written several survey systems in which the majority of the questions have the same or similar responses (Yes/No, True/False, scale of 1 - 5, etc). But this latest survey system I'm working on has 8-10 sections, with a variety of question attributes and answer scales. Some items have just a description and require a Yes/No answer, others have a description and an active status and require a Yes/No and price answer, some require a comment, etc.

Rather than build a separate response table for each survey section, I was thinking of building one generic response table, and trying to force all sections to fit by adding columns - some of which won't apply to some items. Like this:

Survey Category (will apply to all items)
Survey Section (will apply to all items)
Item Description (will apply to all items)
Item YN (will apply to all items)
Item Price (will apply to about 10% of the items)
Item Points (will apply to about 10% of the items)
Item Active YN (will apply to about 10% of the items)
Item Fail YN (will apply to about 10% of the items)
Item Comment (will apply to about 10% of the items)

For instance, in the structure above the field "Item YN" would represent multiple types of answers: is the item in use?, is the item in place?, is the item given away for free?, is the item on display?, etc. Basically, anywhere a Yes/No answer is used. The advantage is one source table (rather than 8) for storing answers, and it might be easier to query and report on. The disadvantages I see are 1) it's more difficult to understand the meaning of the responses when the answer field is named Item YN, and 2) you have a non-normalized table that's difficult for a 3rd party to understand.

If I have the questions and responses in separate tables, I'll use names like "ItemComplimentaryYN" and "ItemUsedYN" depending on the question. It's easier for others to learn the data. I actually don't like the "generic" approach, and probably won't use it, but I figured I'd try to get some input from others who've written survey systems.

Thanks
Dear friends,

I am conducting a survey on Relational Database usage and would like your help. The study is part of my MBA Dissertation. Could you kindly spare 5 minutes to take part in this survey?

http://FreeOnlineSurveys.com/rendersurvey.asp?id=120816

Thanks
Rajeev
I am working on a survey system with another DB Engineer. Our current design includes 1 table per Question Type (Numeric, Boolean, Text); response values are tightly coupled to the data being entered. To keep things simple, assume there are only 6 tables in this system (1 for each Question Type to define the "rules" of the data entered, and 1 for each Response collection).

He is a former .NET programmer and has been talking with the .NET guys on the project, and he is proposing that we abstract our Question Types to use one table and store the input values all as nvarchar(max). This obviously simplifies the DB design and reduces the amount of work I will need to do, so I should be all for it. I guess I am a little concerned that we are no longer tightly coupling the data types and are essentially leaving all the validation up to the application. Also, should I be worried that a bunch of bit data will be stored as 1/0 in an nvarchar(max) column?

Thanks,
-John
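For what it's worth, here is the kind of integrity check the single-table design forces you to run yourself, since the engine can no longer enforce the types. A sketch with hypothetical table and column names (TRY_CAST requires SQL Server 2012+):

-- find stored answers that no longer convert to the question's declared type
SELECT r.response_id, r.input_value, q.question_type
FROM dbo.Response AS r
JOIN dbo.Question AS q ON q.question_id = r.question_id
WHERE (q.question_type = 'Numeric' AND TRY_CAST(r.input_value AS decimal(18,4)) IS NULL)
   OR (q.question_type = 'Boolean' AND r.input_value NOT IN (N'0', N'1'));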
Several prominent researchers are conducting a survey and looking for 500 respondents by Friday on the topic of enterprise architecture and software development. Here is the link to the survey:

http://www.surveymonkey.com/s.asp?u=845572436515

After completing the survey, results are displayed immediately. A small donation will be made to a charitable organization after its completion.

http://duckdown.blogspot.com/
I am a developer and I have a problem trying to design a system to manage data coming from web surveys. On average each subject will take part in more than one survey and each survey may potentially have 100+ questions.
I am focusing here only on the table(s) that will hold the survey data.
I'm thinking about having a wide table storing each question in the survey(s) as a column against the subjects. My main concerns with this approach:
- the "yuckiness" and potentially other performance issues(?) associated with a table with 100s of columns - the 8kb size limit per row (unlikely to touch it, but possible) - 1024 column limit (unlikely to touch it, but possible)
Another approach obviously is to have a single table with, in a simplified version, 3 columns: person_id, question_id and data/answer. The problem with this is that the "data" column will have to accommodate all types of data, from bits to varchars; field validation now seems impossible, jeopardizing data integrity; and, most importantly, you can't easily work with the data for filtering/reporting, etc.
What do knowledgeable designers come up with in this case? Can someone point me in the right direction?
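One common middle ground, offered as a sketch rather than the answer: keep the narrow one-row-per-answer table, but give it one typed column per data family and a CHECK constraint so exactly one is populated. All names below are invented, and the FKs to the Person/Question tables are omitted for brevity:

CREATE TABLE dbo.Answer (
    person_id    int NOT NULL,
    question_id  int NOT NULL,
    answer_bit   bit            NULL,
    answer_int   int            NULL,
    answer_text  nvarchar(4000) NULL,
    PRIMARY KEY (person_id, question_id),
    -- exactly one typed value column per row
    CONSTRAINT CK_Answer_OneValue CHECK (
        (CASE WHEN answer_bit  IS NOT NULL THEN 1 ELSE 0 END
       + CASE WHEN answer_int  IS NOT NULL THEN 1 ELSE 0 END
       + CASE WHEN answer_text IS NOT NULL THEN 1 ELSE 0 END) = 1)
);

Typed columns keep validation and filtering sane (WHERE answer_int >= 5 works and can use an index), while the row-per-answer shape avoids the 1024-column and row-size limits. Matching each question to its allowed column still has to be enforced against the question catalog, e.g. in the load procedure.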
- Sales Volume: information about articles sold to a customer, incl. selling date
- Survey: irregularly answered survey questions about customers, incl. date of answer

and three dimensions:

- Customer
- Date
- Survey Answer: information about possible answer values (e.g. Yes / No)
We would like to be able to determine the aggregated sales volume (sum) of a customer for a specific period depending on the latest survey answer within this period.
For example:

Selected time period: Jan - Jul 2015
Sales Volume Customer X - Jan - Jul 2015: 1000 Litres
Sales Volume Customer Y - Jan - Jul 2015: 500 Litres
Surveys answered:
15th Jan 15: Customer X, Survey Question A: Yes
2nd Mar 15: Customer X, Survey Question A: No
20th Apr 15: Customer X, Survey Question A: Yes

10th Feb 15: Customer Y, Survey Question A: Yes
20th Jul 15: Customer Y, Survey Question A: No

Latest survey answer (Jan-Jul) Customer X, Question A: Yes
Latest survey answer (Jan-Jul) Customer Y, Question A: No
Excel Pivot should show something like this:
Question | Latest Answer: Yes | Latest Answer: No
---------|--------------------|------------------
A        | 1000 Litres        | 500 Litres
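Outside the cube, the shape of that calculation in T-SQL would be: pick each customer's latest answer in the period with ROW_NUMBER(), then aggregate sales against it. A sketch with hypothetical source table names:

DECLARE @from date = '2015-01-01', @to date = '2015-07-31';

WITH latest AS (
    SELECT s.customer_id, s.question, s.answer,
           ROW_NUMBER() OVER (PARTITION BY s.customer_id, s.question
                              ORDER BY s.answer_date DESC) AS rn
    FROM dbo.SurveyAnswer AS s
    WHERE s.answer_date BETWEEN @from AND @to
)
SELECT l.question, l.answer, SUM(v.volume_litres) AS sales_volume
FROM latest AS l
JOIN dbo.SalesVolume AS v
  ON v.customer_id = l.customer_id
 AND v.selling_date BETWEEN @from AND @to
WHERE l.rn = 1
GROUP BY l.question, l.answer;

In the cube itself the equivalent is usually a "last non-empty within period" calculation, but the MDX depends on how the Survey measure group is modeled.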
-- Get the new note identifier, return as OUTPUT param
-- (SCOPE_IDENTITY() instead of @@IDENTITY, which can be skewed by triggers)
SELECT @NoteID = SCOPE_IDENTITY()
-- Insert new notes for all the users that the note pertains to,
-- in this case by the assigned users.
IF @FK_UserIDList IS NOT NULL
    EXECUTE spInsertNotesByAssignedUsers @NoteID, @FK_UserIDList
-- Insert new Address record
-- Retrieve Address reference into @AddressId
-- EXEC spInsertForUserNote
--     @FK_UserID,
--     @NoteID,
--     @BeenRead,
--     @Fax,
--     @PKId,
--     @AddressId OUTPUT
COMMIT TRANSACTION
--------------------------------------------------
GO
OK, can someone tell me why I get two different answers for the same query? (Looking for the last day of the month for a given date.)
SELECT DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, CAST('12/20/2006' AS datetime)) + 1, 0)) AS Expr1
FROM testsupplierSCNCR

I am getting the result of 01/01/2007.
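A hedged guess at what's happening: DATEADD(ms, -3, ...) yields the last datetime tick of the month, but if that value is later converted to a lower-precision type, it rounds up to the next day. A self-contained demonstration:

-- the -3 ms trick produces the last datetime tick of December
SELECT DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, '20061220') + 1, 0));
-- -> 2006-12-31 23:59:59.997

-- converting to a lower-precision type rounds up to the next day,
-- one plausible source of the 01/01/2007 answer:
SELECT CAST(DATEADD(ms, -3, DATEADD(mm, DATEDIFF(m, 0, '20061220') + 1, 0)) AS smalldatetime);
-- -> 2007-01-01 00:00:00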
"Error: 8624, Severity: 16, State: 1 Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services."
I have traced this to an insert statement that executes as part of a stored procedure.
INSERT INTO ledger (journal__id, account__id, account_recv_info__id, amount)
There is also an auto-increment column called id. There are FK constraints on all of the columns ending in "__id". I have found that if I remove the constraint on account__id the procedure will execute without error. None of the other constraints seem to make a difference. Of course I don't want to remove this key, because it is important to the database integrity and should not be causing problems, but apparently it confuses the optimizer.
Also, the strange thing is that I can get the procedure to execute without error when I run it directly through management studio, but I receive the error when executing from .NET code or anything using ODBC (Access).
Hey, I've written a query to search a database dependent on variables chosen by the user, etc. Opened up a new SqlDataSource, entered the query shown below and went on to the test query page. Entered some test variables; everything works as it should do. Try to get it to show in a datagrid on a webpage - nothing. No data shows.

SELECT dbo.DERIVATIVES.DERIVATIVE_ID, count(*) AS Matches
FROM dbo.MAKES
INNER JOIN dbo.MODELS ON dbo.MAKES.MAKE_ID = dbo.MODELS.MAKE_ID
INNER JOIN dbo.DERIVATIVES ON dbo.MODELS.MODEL_ID = dbo.DERIVATIVES.MODEL_ID
INNER JOIN dbo.[VALUES] ON dbo.DERIVATIVES.DERIVATIVE_ID = dbo.[VALUES].DERIVATIVE_ID
INNER JOIN dbo.ATTRIBUTES ON dbo.[VALUES].ATTRIBUTE_ID = dbo.ATTRIBUTES.ATTRIBUTE_ID
WHERE ((ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID1 and (@VAL1 is null or VALUE = @VAL1)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID2 and (@VAL2 is null or VALUE = @VAL2)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID3 and (@VAL3 is null or VALUE = @VAL3)) or
       (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID4 and (@VAL4 is null or VALUE = @VAL4)))
GROUP BY dbo.DERIVATIVES.DERIVATIVE_ID
HAVING count(*) >= CASE WHEN @VAL1 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL2 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL3 IS NOT NULL THEN 1 ELSE 0 END +
                   CASE WHEN @VAL4 IS NOT NULL THEN 1 ELSE 0 END - 2
ORDER BY count(*) DESC
Here is the page source
<%@ Page Language="VB" MasterPageFile="~/MasterPage.master" Title="Untitled Page" %> <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server"> <asp:SqlDataSource ID="SqlDataSource1" runat="server" ConnectionString="<%$ ConnectionStrings:DevConnectionString1 %>" SelectCommand="	SELECT dbo.DERIVATIVES.DERIVATIVE_ID, count(*) AS Matches 	FROM dbo.MAKES INNER JOIN 				 dbo.MODELS ON dbo.MAKES.MAKE_ID = dbo.MODELS.MAKE_ID INNER JOIN 				 dbo.DERIVATIVES ON dbo.MODELS.MODEL_ID = dbo.DERIVATIVES.MODEL_ID INNER JOIN 				 dbo.[VALUES] ON dbo.DERIVATIVES.DERIVATIVE_ID = dbo.[VALUES].DERIVATIVE_ID INNER JOIN 				 dbo.ATTRIBUTES ON dbo.[VALUES].ATTRIBUTE_ID = dbo.ATTRIBUTES.ATTRIBUTE_ID 	WHERE ((ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID1 and (@VAL1 is null or VALUE = @VAL1)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID2 and (@VAL2 is null or VALUE = @VAL2)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID3 and (@VAL3 is null or VALUE = @VAL3)) or 		 (ATTRIBUTES.ATTRIBUTE_ID = @ATT_ID4 and (@VAL4 is null or VALUE = @VAL4)) ) 	GROUP BY dbo.DERIVATIVES.DERIVATIVE_ID 	HAVING count(*) >= CASE WHEN @VAL1 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL2 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL3 IS NOT NULL THEN 1 ELSE 0 END + 									 CASE WHEN @VAL4 IS NOT NULL THEN 1 ELSE 0 END -2 	ORDER BY count(*) DESC "> <SelectParameters> <asp:ControlParameter ControlID="DropDownList1" Name="ATT_ID1" PropertyName="SelectedValue" /> <asp:ControlParameter ControlID="TextBox1" Name="VAL1" PropertyName="Text" /> <asp:Parameter Name="ATT_ID2" /> <asp:Parameter Name="VAL2" /> <asp:Parameter Name="ATT_ID3" /> <asp:Parameter Name="VAL3" /> <asp:Parameter Name="ATT_ID4" /> <asp:Parameter Name="VAL4" /> </SelectParameters> </asp:SqlDataSource> <asp:SqlDataSource ID="SqlDataSource2" runat="server" ConnectionString="<%$ ConnectionStrings:DevConnectionString1 %>" SelectCommand="SELECT * FROM [ATTRIBUTES]"></asp:SqlDataSource> <br /> <asp:DropDownList ID="DropDownList1" runat="server" DataSourceID="SqlDataSource2" DataTextField="ATTRIBUTE_NAME" DataValueField="ATTRIBUTE_ID"> </asp:DropDownList> <asp:TextBox ID="TextBox1" runat="server" AutoPostBack="True"></asp:TextBox><br /> <br /> <br /> <asp:GridView ID="GridView1" runat="server" AutoGenerateColumns="False" DataKeyNames="DERIVATIVE_ID" DataSourceID="SqlDataSource1"> <Columns> <asp:BoundField DataField="DERIVATIVE_ID" HeaderText="DERIVATIVE_ID" InsertVisible="False" ReadOnly="True" SortExpression="DERIVATIVE_ID" /> <asp:BoundField DataField="Matches" HeaderText="Matches" ReadOnly="True" SortExpression="Matches" /> </Columns> </asp:GridView> </asp:Content> AFAIK I have configured the source to pick up the dropdownlist value and the textbox value (the text box is autopostback). Am i not submitting the data correctly? (It worked with a simple query...just not with this one). I have tried a stored procedure which works when testing just not when its live on a webpage. Please help!
(Visual Web Devleoper 2005 Express and SQL Server Management Studio Express)
However, as you can see, the original select query is run twice and joined together. What I was hoping for is for this to be done without the need to duplicate the original query.
I'm trying to find the command to open up an ODBC connection inside SQL 2005 Express. I only have use of an ODBC connector; we're connecting to Remedy. We will eventually be using stored procedures to extract the data we need from Remedy and doing additional data crunching. I'm a FoxPro programmer, so once I get the correct syntax for making the ODBC connection I should be OK. Also, I need a really good advanced book on SQL 2005 - the type of book that would have my ODBC answer. I've spent all morning trying to find this information and was unable to.
Thanks in advance
Daniel Buchanan.
If this was the wrong forum to post this on, please move this question to the correct one. I need this answer soon.
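For what it's worth, the usual T-SQL route to an ODBC source is a linked server over the MSDASQL (OLE DB-to-ODBC bridge) provider; whether that reaches your Remedy system depends on the DSN and drivers, so treat this as a sketch. 'RemedyDSN' and the credentials are placeholders for a system DSN created in the Windows ODBC administrator:

EXEC sp_addlinkedserver
     @server     = N'REMEDY',
     @srvproduct = N'',
     @provider   = N'MSDASQL',
     @datasrc    = N'RemedyDSN';

EXEC sp_addlinkedsrvlogin
     @rmtsrvname  = N'REMEDY',
     @useself     = 'FALSE',
     @rmtuser     = N'remedy_user',
     @rmtpassword = N'remedy_password';

-- pass-through query against the ODBC source
SELECT * FROM OPENQUERY(REMEDY, 'SELECT * FROM some_remedy_table');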
We have an issue with an MDS server that has been plaguing us for a couple of days. The original error message from the SQL Server engine is "The query processor could not produce a query plan", but the ones we get in the Excel add-in are "Sequence contains no elements" or "The value cannot be null".
• Using Microsoft SQL Server 2012 (SP1) - 11.0.3393.0 (X64) for 6 months on this server without issues
• Two weeks ago we started to get 2 errors: "Sequence Contains No Elements" | "The Value Cannot Be Null"
• We are using the latest version of the Excel Add-in
• We tried to reinstall the MDS feature
• If I backup/restore the MDS database to another server, it works
• We updated to SQL 2012 SP2 + CU4 but the error persisted ...
Looking at the MDS trace log (MDSTraceLog) we are routed to this message:
SQL Error Debug Info: Number: 8624, Message: Internal Query Processor Error: The query processor could not produce a query plan. For more information, contact Customer Support Services., Server: bbdvsql03inst01, Proc: udpMetadataEntityGetDetailsXML, Line: 28
At line 28 udpMetadataEntityGetDetailsXML is calling udfMetadataEntityGetDetailsXML … and here is where we stopped
** Error found when trying to get data from an entity using the Excel add-in **
===================================
Sequence contains no elements
------------------------------
Program Location:
   at Microsoft.MasterDataServices.AsyncEssentials.AsyncResultBase.EndInvoke()
   at Microsoft.MasterDataServices.ExcelAddInCore.AsyncProviderBase`1.EndOperation(IAsyncResult ar)
How do I get the variables in the cursor's SET statement to NOT update the temp table with the literal value of the variable? I want it to pull a date, not the column name stored in the variable...
create table #temptable (columname varchar(150), columnheader varchar(150), earliestdate varchar(120), mostrecentdate varchar(120))

insert into #temptable
SELECT ColumnName, headername, '', ''
FROM eddsdbo.[ArtifactViewField]
WHERE ItemListType = 'DateTime' AND ArtifactTypeID = 10

--column name
declare @cname varchar(30)
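The usual fix is dynamic SQL: the variable only ever holds the column's name as a string, so the statement has to be built and run with sp_executesql for the value to come from the named column. A sketch of the cursor body; the source table (eddsdbo.Document here) is assumed from the description:

declare @cname sysname, @sql nvarchar(max)

declare c cursor local fast_forward for
    select columname from #temptable

open c
fetch next from c into @cname
while @@fetch_status = 0
begin
    -- build the statement so the column named in @cname is queried,
    -- not the literal text of the variable; QUOTENAME guards the identifier
    set @sql = N'update #temptable
                 set earliestdate   = (select convert(varchar(120), min(' + QUOTENAME(@cname) + N'), 120) from eddsdbo.Document),
                     mostrecentdate = (select convert(varchar(120), max(' + QUOTENAME(@cname) + N'), 120) from eddsdbo.Document)
                 where columname = @col'
    exec sp_executesql @sql, N'@col varchar(150)', @col = @cname
    fetch next from c into @cname
end

close c
deallocate c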
-- The 3rd query uses an incorrect column name in a sub-query and succeeds, but rows are incorrectly qualified. This is very DANGEROUS!!!
-- The issue exists in 2008 R2, 2012 and 2014 and is "By Design"
set nocount on
go
if object_id('tempdb.dbo.#t1') IS NOT NULL drop table #t1
if object_id('tempdb.dbo
[code]....
This succeeds when the invalid column name is a valid column name in the outer query. So in this situation the sub-query would fail when run by itself, but succeed, with an incorrectly applied filter, when run as a sub-query. The danger is that if a SQL Server user runs DML in a production database with such a sub-query, the results are likely not the expected ones, with potentially unintended actions applied against the data. How many SQL Server users have had incorrectly applied DML or incorrect query results and don't even know it...?
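For anyone who hasn't run into it, here is a minimal self-contained repro of the behavior described above (table names invented). The inner query fails on its own, but as a sub-query the unknown column silently binds to the outer table, so the filter degenerates to id = id and every row qualifies:

create table #t1 (id int);
create table #t2 (other_id int);
insert into #t1 values (1), (2);
insert into #t2 values (99);

-- fails by itself: "Invalid column name 'id'"
-- select id from #t2;

-- succeeds and returns ALL rows of #t1, because the inner "id"
-- resolves to the outer #t1.id (standard correlation rules):
select * from #t1 where id in (select id from #t2);

drop table #t1;
drop table #t2;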