Steve writes "I need a query to show all the medicare patients only. I want only one kind of insurance for each patient. I am getting records where each patient has more than one insurance like medicare and something else. Each patient has one,two, three or more insurances like medicare,medical,blue cross,tricare etc.
The result should look like this:

Sam Pick       Medicare
John White     Medicare
Ann Richmond   Medicare
My query retrieves patients with more than one insurance.
SELECT patient_first_name AS f_name,
       patient_last_name  AS l_name,
       patient_zip        AS zip,
       COUNT(DISTINCT payer_id) AS payer_id
FROM   claims
WHERE  payer_id = 'abcdefhj'
GROUP  BY patient_first_name, patient_last_name, patient_zip
HAVING COUNT(DISTINCT payer_id) = 1
ORDER  BY patient_zip ASC;
please help me. Greatly appreciate your response."
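For what it's worth, a minimal sketch of one way to get Medicare-only patients, assuming 'abcdefhj' is the Medicare payer_id. The WHERE filter in the original removes the other payers before the count, so the HAVING can never see them; filtering inside the HAVING instead keeps patients whose single distinct payer is Medicare:

SELECT patient_first_name AS f_name,
       patient_last_name  AS l_name,
       patient_zip        AS zip
FROM   claims
GROUP  BY patient_first_name, patient_last_name, patient_zip
HAVING COUNT(DISTINCT payer_id) = 1       -- exactly one insurance...
   AND MAX(payer_id) = 'abcdefhj'         -- ...and that one is Medicare
ORDER  BY patient_zip ASC;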
I need to group the records randomly into 'n' batches. That can be done by NTILE, but I want to group similar records into a single batch.
Say, for example, the following is the list of records I have in my table, which I want to group into 5 batches:
A123 A124 A124 A123 A127
After NTILE I will get the below,
Desired output: like NTILE, but all rows with the same id should reside in a single batch.
Even if n = 5, the maximum possible number of batches is 3, since there are only 3 distinct ids.
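NTILE alone can't guarantee that, since it splits purely by row position. A minimal sketch of an alternative, assuming the ids live in a column named id of a placeholder table MyTable: DENSE_RANK keeps duplicates together, and the modulo caps the batch count at n, so with 3 distinct ids and n = 5 you get 3 batches:

DECLARE @n int = 5;

SELECT id,
       ((DENSE_RANK() OVER (ORDER BY id) - 1) % @n) + 1 AS batch
FROM   MyTable;   -- MyTable/id are placeholder names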
I have duplicate records in a table. I need to count duplicate records based on account number, store the count in a variable, and then check whether the count > 0 in a stored procedure. I have used the query below; it is not working.
SELECT @_Stat_Count = COUNT(*), L1.AcctNo, L1.ReceivedFileID
FROM   Legacy L1, Legacy L2, ReceivedFiles
WHERE  L1.ReceivedFileID = ReceivedFiles.ReceivedFileID
  AND  L1.AcctNo = L2.AcctNo
GROUP  BY L1.AcctNo, L1.ReceivedFileID
HAVING COUNT(*) > 0

IF (@_Stat_Count > 0)
BEGIN
    SELECT @Status = status_cd FROM status-table WHERE status_id = 10
END
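The assignment fails because a SELECT that sets a variable cannot also return a column list, and a grouped query returns one row per group, which makes the assignment ambiguous. A possible rewrite, assuming "duplicate" means more than one row per AcctNo/ReceivedFileID (note that HAVING COUNT(*) > 0 is true for every group, so the original filter does nothing):

DECLARE @_Stat_Count int;

SELECT @_Stat_Count = COUNT(*)
FROM ( SELECT L1.AcctNo, L1.ReceivedFileID
       FROM   Legacy L1
       JOIN   ReceivedFiles RF
              ON L1.ReceivedFileID = RF.ReceivedFileID
       GROUP  BY L1.AcctNo, L1.ReceivedFileID
       HAVING COUNT(*) > 1 ) AS dups;    -- > 1 catches real duplicates

IF (@_Stat_Count > 0)
BEGIN
    SELECT @Status = status_cd           -- @Status declared elsewhere in the proc
    FROM   [status-table]                -- hyphenated names must be bracketed
    WHERE  status_id = 10;
END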
The MSDN doc makes it sound like, after a failover of the primary, the CDC data won't "keep working" on the secondary unless you follow its instruction: "To allow the logreader to proceed further and still have disaster recovery capacity, remove the original primary replica from the availability group using ALTER AVAILABILITY GROUP <group_name> REMOVE REPLICA. Then add a new secondary replica to the availability group."
We have a few CDC-tracked tables that we use, and I thought the general idea of AlwaysOn was to minimize all the overhead and let things "just work": your apps just connect and the listener re-routes everything where it needs to go.
It looks like, to get this working properly, an automated job/trigger would have to wait for a failover event and then kick off tasks to remove and re-add the replica, and perhaps start up the CDC job on the secondary?
Is there a way to restore all filegroups except one? Example: database A has 10 filegroups, but 1 of them is defunct, so I can't delete it and there's no backup to restore it from. Can I create a new DB by restoring the 9 good FGs from a backup of database A?
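A piecemeal restore can do this; a hedged sketch, where the filegroup and logical file names are placeholders. The omitted defunct filegroup simply stays unrestored and its data remains offline in the new database:

RESTORE DATABASE NewDB
    FILEGROUP = 'PRIMARY',
    FILEGROUP = 'FG1',
    FILEGROUP = 'FG2'                           -- ...list all 9 good filegroups
FROM DISK = N'C:\Backups\DatabaseA.bak'
WITH PARTIAL,
     MOVE N'A_Data' TO N'D:\Data\NewDB.mdf',    -- logical names are assumptions
     MOVE N'A_Log'  TO N'D:\Data\NewDB.ldf',
     RECOVERY;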
We are having a conversation at work and the subject of load balancing with SQL came up. Right now we are running SQL Server 2014 on four (4) machines, using AlwaysOn with Availability Groups (AG). Now I know that we can scale out reads in an AG by allowing the secondary servers to receive reads.
Is there a way to do this with writes? Can I have, in essence, 2 masters that somehow reconcile with each other? We are expecting a huge amount of writes in the near future, and we need a way for SQL to handle the amount of traffic we are expecting without any issues.
I explored the possibility of peer-to-peer replication; however, it seems that it would be more work if we are constantly making updates to the database schema.
We currently use the backup scripts from Ola Hallengren. The documentation says full (non copy-only) and differential backups are performed on the primary replica, while full (copy-only) backups and transaction log backups are performed on the preferred replica.
We currently do a FULL (COPY_ONLY) backup every day and LOG backups every 15 minutes. Is there any performance benefit to running the FULL (non copy-only) on the preferred replica?
I am planning to have an AlwaysOn Availability Group set up between Server 1 and Server 2:
Server 1 -->Publisher-->2014 SQL Enterprise edition-->Windows Std 2012 --> Always on Primary Replica
Server 2 -->Publisher(when DR happens)-->2014 SQL Enterprise edition-->Windows Std 2012 --> Secondary Replica
Server 4 as Subscriber
Server X as Remote Distributor ..
If I create publications on Server 1 (primary replica) to the Server 4 subscriber, will the publication be created automatically on the secondary replica, Server 2? Or do I have to create it manually using the GUI/T-SQL on both servers?
I want to write a small script to determine which is the currently active (primary) server in the AG.
Right now, I see that using SELECT * FROM sys.dm_hadr_availability_replica_states I can determine the role. However, when the server goes down and fails over to the secondary node, I don't believe the role changes (or does it?). How do I determine which is the active node?
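For what it's worth, the role columns do change on failover. A minimal sketch that reads the current primary's name straight from the AG state DMV:

SELECT ag.name           AS ag_name,
       ags.primary_replica               -- server currently holding the PRIMARY role
FROM   sys.dm_hadr_availability_group_states AS ags
JOIN   sys.availability_groups               AS ag
       ON ags.group_id = ag.group_id;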
This is my first deployment of an AlwaysOn availability group for SQL 2014, and I'm trying to get my custom backup procedure to handle all databases appropriately depending on the primary group. Basically, I want the system databases and all databases that don't participate in the availability group to be backed up on both nodes, and those that do participate backed up ONLY on the primary server. I've looked at the sys.fn_hadr_backup_is_preferred_replica function, but would like to only have to test for a single database's existence in the availability group. If the one database is in the group, only back up the system databases and those that don't participate; otherwise back up every database. This would be the case for both full backups and transaction logs.
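A sketch of that single-database test, with MyAGDatabase standing in for the one database you'd check; sys.databases.replica_id is non-NULL only for AG members, which supplies both the membership test and the non-AG list:

IF EXISTS ( SELECT 1 FROM sys.databases
            WHERE  name = N'MyAGDatabase'       -- hypothetical sentinel DB
              AND  replica_id IS NOT NULL )
BEGIN
    -- sentinel is in the AG: back up only system DBs and non-AG databases here
    SELECT name FROM sys.databases WHERE replica_id IS NULL;
END
ELSE
BEGIN
    -- sentinel not in an AG on this node: back up every database
    SELECT name FROM sys.databases;
END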
IF OBJECT_ID('tempdb..#Test') IS NOT NULL DROP TABLE #Test
--===== Create the test table
CREATE TABLE #Test ([Year] float, Age int)
INSERT INTO #Test ([Year], Age)
[Code]...
I queried below to get an additional column:
SELECT *, ROW_NUMBER() OVER (PARTITION BY [Year] ORDER BY Age) AS RN
FROM #Test
I need to calculate the last two columns (noofgrp and grpsize): number of groups (count of clientid) and group size (number of clients in each group) according to begtime and endtime. So I tried the following in the first temp table:
GrpSize = COUNT(clientid) OVER (PARTITION BY begtime, endtime) ELSE 0 END

and in the second temp table I have:

SELECT GrpSize = SUM(grpsize),
       NoofGrp = COUNT(DISTINCT grpsize)
FROM   Temp1
The issue is with the date of 5/26: the begtime and endtime are not consistent. In Grp1 (group 1) all clients start the session at 10:30 and end at 12:00 (a 90-minute session), except one who starts at 11:00 and ends at 12:00 (row 8). Since this client's endtime is the same as the others', I want that client in the first group (Grp1). The reverse is true for the second group (Grp2): all clients' begtime is 12:30 and endtime is 14:00, except clientid = 2 (row 9), whose begtime is 12:30 but endtime is 13:00. Since this client's begtime is the same as the rest, I want that client in the second group (Grp2). My PARTITION BY creates 4 groups rather than two.
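One hedged idea that fits the example as described: derive the group key from the widest interval sharing a row's endtime or begtime, so the 11:00-12:00 client snaps to Grp1 and the 12:30-13:00 client snaps to Grp2, then size the groups off that key:

SELECT clientid, begtime, endtime, GrpKey,
       GrpSize = COUNT(clientid) OVER (PARTITION BY GrpKey)
FROM ( SELECT clientid, begtime, endtime,
              -- widest interval touching this row's endtime or begtime
              CONCAT(MIN(begtime) OVER (PARTITION BY endtime), '-',
                     MAX(endtime) OVER (PARTITION BY begtime)) AS GrpKey
       FROM   Temp1 ) AS t;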
I have an issue where I am getting an error on a unique index.
I know why I am getting the error but not sure how to get around it.
The query does a check on whether a unique value exists in the Insert/Select. If I run it one record at a time (SELECT TOP 1...), it works fine and just won't write the record if it already exists.
But if I do it in a batch, I get the error. I assume this is because it does the checking on the file before records are written out and then writes out the records one at a time from a temporary table.
It thinks all the records are unique because it compares the records one at a time to the original table (where there would be no duplicates). But it doesn't check the records against each other. Then when it actually writes out the record, the duplicate is there.
How do I do a batch where the Insert/Select writes out the records without the duplicates, as it does when I run it one record at a time?
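One common pattern (a sketch, with hypothetical table and column names): dedupe the batch against itself with ROW_NUMBER before the insert, and against the target with NOT EXISTS:

INSERT INTO TargetTable (KeyCol, OtherCol)
SELECT s.KeyCol, s.OtherCol
FROM ( SELECT KeyCol, OtherCol,
              ROW_NUMBER() OVER (PARTITION BY KeyCol
                                 ORDER BY KeyCol) AS rn
       FROM   SourceTable ) AS s
WHERE  s.rn = 1                                    -- one row per key in the batch
  AND  NOT EXISTS ( SELECT 1 FROM TargetTable t
                    WHERE  t.KeyCol = s.KeyCol );  -- key not already in target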
We are on SQL 2014...we have a bunch of views in a database where we are trying to find the views which have more than 16 columns, the max key columns for a unique index/constraint...this is needed so we can convert them to indexed views...
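If a straight column count per view is enough to flag the candidates, a minimal sketch against the catalog views:

SELECT v.name   AS view_name,
       COUNT(*) AS column_count
FROM   sys.views   v
JOIN   sys.columns c ON c.object_id = v.object_id
GROUP  BY v.name
HAVING COUNT(*) > 16               -- over the 16-key-column index limit
ORDER  BY column_count DESC;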
I need to find a method to assign unique shift IDs to rows that correspond to a single shift. For instance, the first shift would begin on the first row when the shift_start flag is turned on, and end on the third row when the shift_end flag is turned on.
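A minimal sketch, assuming a table like Shifts(row_id, shift_start, shift_end) ordered by row_id: a running total of shift_start flags hands every row from one start up to the next start the same ID:

SELECT row_id, shift_start, shift_end,
       SUM(CASE WHEN shift_start = 1 THEN 1 ELSE 0 END)
           OVER (ORDER BY row_id
                 ROWS UNBOUNDED PRECEDING) AS shift_id   -- running count of starts
FROM   Shifts;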
I have a comma-separated field containing numerous 2-digit numbers that I would like to split out by a corresponding unique code held in another field on the same row.
E.g.:
Unique Code Comma Separated Field
14587934 1,5,17,18,19,40,51,62,70
6998468 10,45,62,18,19
79585264 1,5,18
These need to be in column format, or held in an array, to be used as conditional criteria.
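On a pre-2016 instance there is no STRING_SPLIT, but an XML-based splitter works; a sketch, where MyTable, UniqueCode and CsvField are placeholder names:

SELECT t.UniqueCode,
       s.n.value('.', 'int') AS SplitValue          -- one row per CSV element
FROM   MyTable t
CROSS  APPLY ( SELECT CAST('<v>' + REPLACE(t.CsvField, ',', '</v><v>')
                           + '</v>' AS xml) AS x ) AS c
CROSS  APPLY c.x.nodes('/v') AS s(n);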
I have an Excel spreadsheet with 650 records with unique PO numbers. I need to pull data from SQL Server based on the PO numbers. I don't want to run a SELECT statement 650 times. How do I retrieve the records in an efficient way?
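One efficient route (a sketch): load the 650 PO numbers into a temp table once, via the SSMS import wizard, BULK INSERT, or a pasted VALUES list, and join a single time; Orders and PONumber are placeholder names:

CREATE TABLE #PONumbers (PONumber varchar(20) PRIMARY KEY);
-- ...load the 650 spreadsheet values into #PONumbers here...

SELECT o.*
FROM   Orders o
JOIN   #PONumbers p ON p.PONumber = o.PONumber;   -- one set-based lookup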
I have heard that high numbers of VLFs aren't good: they can impact performance and can delay recovery time, so I wanted to test that.
I created 2 DBs, each with a 100MB data file and a 50MB log file.
TestDB's log file had 100MB autogrowth; TestDB2's log file had 1% growth.
I inserted 1,048,576 records and took a backup.
I ran DBCC LOGINFO: TestDB had 40 VLFs and TestDB2 had 165 VLFs.
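For reference, those counts come straight from the row counts of:

DBCC LOGINFO('TestDB');     -- returns one result row per virtual log file
DBCC LOGINFO('TestDB2');    -- row count = VLF count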
But when I restored both DBs, this is what I got.
TestDB: RESTORE DATABASE successfully processed 42258 pages in 4.420 seconds (74.691 MB/sec). SQL Server Execution times: CPU Time = 125ms, elapsed time = 8323 ms.
TestDB2: RESTORE DATABASE successfully processed 42257 pages in 3.943 seconds (83.724 MB/sec). SQL Server Execution Times: CPU time = 109 ms, elapsed time = 8314 ms.
The question is: where is the difference? How is TestDB, with 40 VLFs, better than TestDB2, with 165 VLFs?
Table2 contains the fields Group, Name, Category, Dimension (Group and Name are not in Table1).
So basically I need to read the records in Table1 using Groupid, and each time there is a Groupid, select records from Table2 where Table2.Category in (Select Category from Table1) and Table2.Dimension in (Select Dimension from Table1).
In Table1 there might be 10 Groupid records, all of which are different.
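If the intent is that each Table1 row's Category/Dimension pair should match together per Groupid, a set-based join avoids reading Table1 row by row (a sketch over the columns described):

SELECT t1.Groupid, t2.*
FROM   Table1 t1
JOIN   Table2 t2
       ON  t2.Category  = t1.Category
       AND t2.Dimension = t1.Dimension;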
SELECT TOP 100 LTRIM([text]), objectid, total_rows, total_logical_reads, execution_count
FROM   sys.dm_exec_query_stats AS a
       CROSS APPLY sys.dm_exec_sql_text(a.sql_handle) AS b
WHERE  last_execution_time >= '2015-04-07 10:01:01.01'
ORDER  BY execution_count DESC
But the execution count in the result is cumulative from when the plan was first cached. I want to know it for one day only.
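There is no per-day counter in the DMV, so a one-day figure has to be derived; a sketch, snapshotting into a hypothetical table and diffing, assuming two snapshots taken a day apart:

-- daily capture (SELECT ... INTO the first time, INSERT after that)
SELECT GETDATE() AS capture_time,
       qs.plan_handle,
       qs.execution_count,
       qs.total_logical_reads
INTO   dbo.QueryStatsSnapshot
FROM   sys.dm_exec_query_stats AS qs;

-- one day's executions = latest count minus the prior snapshot's, per plan
SELECT t.plan_handle,
       t.execution_count - ISNULL(y.execution_count, 0) AS executions_for_day
FROM   dbo.QueryStatsSnapshot t
LEFT   JOIN dbo.QueryStatsSnapshot y
       ON  y.plan_handle  = t.plan_handle
       AND y.capture_time = (SELECT MIN(capture_time) FROM dbo.QueryStatsSnapshot)
WHERE  t.capture_time = (SELECT MAX(capture_time) FROM dbo.QueryStatsSnapshot);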
I have a CTE query against a table with 32K rows that runs fine in 2008R2. I am running it in 2014 Std Ed. against the same data and it runs very slowly. Looking at the execution plan I think I see what's contributing to the slowness.
Note that the "actual number of rows" is some 351M...how is this possible?
the query:
DECLARE @amts TABLE (claim int, allowed decimal(12,2), copay decimal(12,2),
                     deductible decimal(12,2), coins decimal(12,2));

;WITH unpaid (claimID) AS
 ( SELECT claimID FROM claim
   WHERE  amt + copay + disct + mm + ded = 0 )
INSERT @amts
SELECT lineID,
       SUM(rc),
       SUM(copay),
       SUM(deduct),
       CASE WHEN SUM(mm) > 0 AND (SUM(mm) < SUM(mmamt))
            THEN SUM(mm) ELSE 0 END
FROM   claimln
WHERE  status IS NULL
  AND  lineID NOT IN (SELECT claimID FROM unpaid)
GROUP  BY lineID
it's like there's some massively recursive process going on?
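More likely not recursion: a nested-loops anti-join for the NOT IN can re-scan the unpaid CTE once per claimln row, which multiplies row counts into the hundreds of millions. A hedged rewrite of just the exclusion, which usually plans as a single semi-join:

-- replaces: lineID NOT IN (SELECT claimID FROM unpaid)
WHERE  status IS NULL
  AND  NOT EXISTS ( SELECT 1
                    FROM   claim c
                    WHERE  c.claimID = claimln.lineID
                      AND  c.amt + c.copay + c.disct + c.mm + c.ded = 0 )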
I have four columns in my table; the first one is the identity column.
col1 col2 col3 col4
1   12   1   This is Test1
2   12   2   This is Test1
3   12   3   This is Test3
4   12   4   This is Test4
5   12   5   @@@@@
When I see the @@@@@ sign in my col4, I need to restart col3 from 1 again, so it will look like this:
col1 col2 col3 col4
1   12   1   This is Test1
2   12   2   This is Test1
3   12   3   This is Test3
4   12   4   This is Test4
5   12   5   @@@@@
6   12   1   This is another test1
7   12   2   This is another Test2
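A sketch of one way to compute the restarting col3, assuming col1 orders the rows and MyTable is a placeholder name: count the @@@@@ markers strictly before each row to form groups, then number within each group (the marker row itself stays in the group it ends):

SELECT col1, col2,
       ROW_NUMBER() OVER (PARTITION BY grp ORDER BY col1) AS col3,
       col4
FROM ( SELECT col1, col2, col4,
              -- markers seen on earlier rows only; ISNULL handles row 1's empty frame
              ISNULL(SUM(CASE WHEN col4 LIKE '%@@@%' THEN 1 ELSE 0 END)
                     OVER (ORDER BY col1
                           ROWS BETWEEN UNBOUNDED PRECEDING
                                    AND 1 PRECEDING), 0) AS grp
       FROM   MyTable ) AS t
ORDER  BY col1;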