Hi all,
I have a big problem with the design and the queries of a couple of tables. I have to calculate the weighted average of the number of days between an invoice and its payments (installment payments), weighting each day count by the corresponding payment amount, and only when the sum of the invoice's payments equals the invoice amount.
I have designed two FactTables to accomplish this:
Example:
Data in the first table (FactInvoice) looks like this:
Let's suppose that I'm querying the cube for customer 1234567 at the date 2008-05-01. I need to sum each payment for the first invoice multiplied by the count of days elapsed, then divide by the InvoiceAmount (or the PaymentsAmount, it's the same): (0 days * 500 € + 5 days * 500 € + 10 days * 800 € + 13 days * 1200 €) / 3000 € = 8.7 days of weighted average to cash an invoice.
How can I do this with Analysis Services? Is it too complicated? Here in southern Italy the problem of customer debt is heavily felt, and totals like this are really important!
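The underlying arithmetic, written as T-SQL against hypothetical FactInvoice/FactPayment tables (in the cube the same idea becomes two plain SUM measures plus a calculated measure dividing one by the other):

SELECT i.CustomerId,
       SUM(p.WeightedDays) / SUM(p.PaidAmount) AS WeightedAvgDaysToCash
FROM FactInvoice AS i
JOIN (
    SELECT InvoiceId,
           SUM(PaymentAmount) AS PaidAmount,
           SUM(PaymentAmount * 1.0 * DaysElapsed) AS WeightedDays
    FROM FactPayment
    GROUP BY InvoiceId
) AS p ON p.InvoiceId = i.InvoiceId
WHERE p.PaidAmount = i.InvoiceAmount   -- only fully paid invoices count
GROUP BY i.CustomerId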
Any help or suggestion will be really appreciated.
I was looking at the new index locking granularity options available in 2005. I did not understand in what cases this can be a performance enhancement. Has anybody looked into this?
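For reference, the setting in question is per index (a sketch; the index and table names are assumptions). Disallowing row locks forces the engine to start at page or table level, which can reduce locking overhead on large scans at the cost of concurrency:

ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
SET (ALLOW_ROW_LOCKS = OFF, ALLOW_PAGE_LOCKS = ON)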
Hi, I have an inventory fact table. For the past two weeks I have data at the daily level; beyond that, weekly; and beyond that, monthly. I need to tie it all in to the time dimension, of course, and the problem is: how do I do it at different granularities? As for the time dimension, in the datamart I have tables dim_date with key column date_id (int), and correspondingly dim_week with week_id (int) and dim_month with month_id (int).
What I've done so far is create a time dimension from the dim_date table (meaning granularity = daily) and simply tie in all the inventory (daily, weekly, monthly) at the day level; it all has a date_id field in it, even the weekly and the monthly rows, which simply use the day at the end of the week or end of the month. I didn't tie anything to dim_week or dim_month. Does that make sense? The result is kind of strange. I can't upload an image here, but... well, it seems OK: I get year, week ('GL Week') and then... this is the annoying thing: why am I getting a 'date' column when I only want it by week or by month? I can't make that column disappear (e.g. when in the time hierarchy I only group by month, a 'day' column will still be there, and will show 4 days... the 4 'end of week' days that come from 'date_id'). How do I make it go away??
Hi, I have a few things to clear up and I hope I can find some answers here. I have an application written in C# using COM+ components. This application is intended to support a few hundred users at the same time, and each user action leads to a few updates on several tables. The problem is that 2 or more users might do this at the same time, meaning that each user may update rows that "belong" to the other users. I rely on the appearance of deadlocks to keep the data consistent, so only one of the users should be able to have the action completed; the transactions of the other concurrent users should abort. Basically the results would be the same no matter which user completes the transaction, so the only issue is to have exactly one action completed. So far so good; it seems that I have no problems and that the data is consistent.
Now, the reason why I'm here is that lately the number of deadlocks has increased considerably, and there should be no reason for this to happen; the situations where users modify each other's data are somewhat rare, yet I see a few hundred deadlocks daily. So what I think is that SQL Server might be increasing the lock granularity to table or page instead of using row locks (I am not sure, it's just a wild guess). From what I've read, SQL Server starts with row locks by default and may escalate them when necessary.
And now, finally, the questions: can I force SQL Server to use only row locks? If yes, are there any risks involved? Is SQL Server capable of maintaining only this kind of lock and still managing the transactions correctly? And another question: how much do deadlocks decrease overall performance?
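For what it's worth, ROWLOCK is a per-statement hint rather than a hard guarantee; under memory pressure the engine can still escalate. A sketch with hypothetical names:

UPDATE dbo.Accounts WITH (ROWLOCK)
SET Balance = Balance - @amount
WHERE AccountId = @id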
I am struggling with adding budget numbers to a cube, the main reason being that the budget is *not* at the finest granularity (employee) of the organization hierarchy but at a coarser one (team).
The organization hierarchy is a "flat" (not parent-child) hierarchy that looks roughly like this:
employee -> team -> teamgroup -> region -> country
As mentioned, I now have budget numbers that are defined at the team level (not at the employee level like "regular" measures). I would now assume that I could put the budget data into its own table and "link" it with the organization through the "team" attribute. I would do that on the "Dimension Usage" tab.
The problem with this approach is that the organization is changing (SCD type 2). This essentially means that by linking to the "team" attribute the aggregation of the budget data on higher levels of the organization hierarchy can be ambiguous (at least that is what I understand).
Say that at some point in time team 2 (consisting of one employee, Jeanne) moved from teamgroup 2 to teamgroup 3, just for the sake of a simple example.
Now, what am I to do with my budget data in this situation? I cannot link it to the teamId, because teamId = 2, for example, cannot specify whether the value should be aggregated into teamgroup 2 or teamgroup 3...
I have a feeling that this has something to do with the design of the organization table, but I am unsure about what the actual problem is. Any hint, pointer or solution would be appreciated. If the question is unclear, please let me know and I will try to clarify.
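One common way out (a sketch; table and column names are assumptions): key the budget rows to the surrogate key of the SCD-2 team row that was current for the budget period, rather than to the team's business key. Each surrogate row belongs to exactly one teamgroup, so the roll-up is unambiguous:

CREATE TABLE FactBudget
(
    TeamSK       int   NOT NULL,  -- surrogate key of the SCD-2 team row
    BudgetMonth  int   NOT NULL,  -- e.g. 200805
    BudgetAmount money NOT NULL
)
-- the TeamSK row records which teamgroup the team belonged to at budget
-- time, so "teamId = 2" is never ambiguous again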
I like the new gig a lot. Real busy, smart folks and I have been in high demand since 5 minutes after my butt hit the chair. I already have code in production.
Anyhow, we have a security situation on the SQL Servers that I pointed out on my first day. They want me to roll everything over to Windows Authentication and give the developers and report writers more restricted rights inside SQL Server. So they have NT groups for different kinds of users and all of that jazz, and I laid out the typical stuff about using NT groups vs. individual accounts and ease of admin vs. granularity of control. Well, the boss came back and said he wants ease of admin AND granularity of control over security. So, does anyone have any fresh thinking on turning my either/or into an AND?
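One way to get both: one Windows login per NT group (the ease-of-admin half), with permissions granted at the schema or role level inside each database (the granularity half). A sketch, with hypothetical names:

CREATE LOGIN [MYDOMAIN\ReportWriters] FROM WINDOWS
GO
USE SalesDb
GO
CREATE USER ReportWriters FOR LOGIN [MYDOMAIN\ReportWriters]
GO
-- report writers read everything in dbo but change nothing
GRANT SELECT ON SCHEMA::dbo TO ReportWriters
DENY INSERT, UPDATE, DELETE ON SCHEMA::dbo TO ReportWriters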
I have just been running a query which I was planning to improve by removing a redundant GROUP BY (there are about 20 columns, and one of the columns returned is atomic, so the "group by" will never manage to group any of the data). But when I modified the query to remove the grouping, it actually seems to have slowed the query down, and I can't see why this would be the case.
Both queries return the same number of rows (69,000), as I expected, and looking at the query plans, they look nearly identical, other than at the start, where a "stream aggregate" and "sort" are performed. The estimated data size is 64MB for the non-grouped query (runs in 6 min 41 sec) vs. 53MB for the aggregated query (runs in 5 min 31 sec), and the estimated row size is smaller when aggregated.
Can anyone rationalise this? In my mind, the data being pulled is identical, plus there is extra computation for doing an unnecessary aggregation, so the aggregated query should be unquestionably slower, but the database engine has other ideas; it seems to work more quickly when it has unnecessary work to do :) Perhaps it is something to do with an inefficient query plan for the non-aggregated query? I would have thought looking at the actual execution plans might have made this apparent, but both plans look very similar.
Edit: more information. The "group by" query had two aggregations in it: a count of one of the columns, and an average of another. I changed the count to just "1", and I changed the average to the bare expression inside the aggregate, since the aggregation effectively does nothing (each group contains exactly one row).
I am modelling two fact tables, Actuals and Budget, which are at different granularities: Actuals are at day, customer and product subcategory level; Budgets are at month, region and product category level.
Month, region and product category are present in the Date, Region and Product Category dimensions respectively. I have only three dimensions: Customer, Product and Date. Linking those dimensions to the Actuals fact table is not an issue; what is the best way, and what options are there, to link the Budget fact table to those three dimensions?
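One option (a sketch with hypothetical names): give the budget fact keys at its own grain, e.g. the dim_date key for the first day of the month plus the region and category levels of the other dimensions, and then declare the coarser grain where the fact is linked (in SSAS, via the granularity attribute on the Dimension Usage tab):

CREATE TABLE fact_budget
(
    month_start_date_id int   NOT NULL, -- dim_date key of the 1st of the month
    region_id           int   NOT NULL, -- region level of the customer dimension
    product_category_id int   NOT NULL, -- category level of the product dimension
    budget_amount       money NOT NULL
)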
Hi, I know I can use the LOWER or UPPER functions to change the case of letters, but say I want to change all but the 1st letter, i.e. I want MARK SMITH to become Mark Smith... MARK and SMITH are 2 separate columns, so I'm assuming something like the following would do it:
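A sketch (dbo.People and the column names are assumptions):

SELECT UPPER(LEFT(FirstName, 1)) + LOWER(SUBSTRING(FirstName, 2, LEN(FirstName))) AS FirstName,
       UPPER(LEFT(LastName, 1)) + LOWER(SUBSTRING(LastName, 2, LEN(LastName))) AS LastName
FROM dbo.People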
Hi, I have a column in my database I would like to convert to lowercase. Is there a T-SQL statement or something I can use so I don't have to do it manually?? Cheers!!!
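A one-off, in-place conversion would be (table and column names are assumptions):

UPDATE dbo.MyTable
SET MyColumn = LOWER(MyColumn)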
Hello, I have been having a hard time with this issue. I am attempting to join a table to itself to get each row's closest date onto a single row. What I mean is: I have the following data:

id   date
1    10/07/08
2    10/06/07
3    10/06/03
4    10/06/03
The new table should have the current id and the one closest to it, like so:

1    10/07/08    2    10/06/07
2    10/06/07    3    10/06/03
3    10/06/03    null null
4    10/06/03    null null

but I am getting duplicates due to the 10/06/03:

1    10/07/08    2    10/06/07
2    10/06/07    3    10/06/03
2    10/06/07    4    10/06/03
3    10/06/03    null null
4    10/06/03    null null

I want it so that if there is a duplicate I can take the id that's higher. I can't figure it out. This is my current SQL:
SELECT PB.ID, PB.StartDate, PB2.ID, PB2.StartDate
FROM table PB
LEFT OUTER JOIN table PB2
    ON PB.keyID = PB2.keyID
   AND PB2.StartDate < PB.StartDate
   AND PB.StartDate = (SELECT TOP (1) StartDate
                       FROM table PB3
                       WHERE PB.keyID = PB3.keyID
                         AND PB2.StartDate < PB3.StartDate
                       ORDER BY PB3.StartDate ASC)
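One way to guarantee a single row per id is ROW_NUMBER (SQL 2005+). A sketch, with dbo.MyTable standing in for your table; ties on StartDate are broken by ID in the ORDER BY, and you can flip the direction to take the higher or lower one:

;WITH Ranked AS
(
    SELECT PB.ID, PB.StartDate,
           PB2.ID AS PrevID, PB2.StartDate AS PrevStartDate,
           ROW_NUMBER() OVER (PARTITION BY PB.ID
                              ORDER BY PB2.StartDate DESC, PB2.ID DESC) AS rn
    FROM dbo.MyTable AS PB
    LEFT OUTER JOIN dbo.MyTable AS PB2
        ON PB.keyID = PB2.keyID
       AND PB2.StartDate < PB.StartDate
)
SELECT ID, StartDate, PrevID, PrevStartDate
FROM Ranked
WHERE rn = 1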
I have the following problem: a table includes times for the startup and end of operations as datetime fields, related to daily shift operations:

dateid   date         starttime   endtime
458      2006-12-29   22:00       23:15
458      2006-12-29   00:15       01:30
459      2006-12-30   20:00       21:10
459      2006-12-30   22:15       23:35
459      2006-12-30   23:30       00:40
459      2006-12-30   01:50       02:30

Records are inserted with a date related to the beginning of the shift, although some operations are performed past midnight (actually the next day, e.g. 2006-12-31) but belong to the same shift (group). Now I need to build a function that corrects (updates) the date of every operation recorded after midnight to a date+1 value, so all records related to the same groups (458, 459, etc.) that start after midnight have the correct date. The procedure has to update the already existing table. Any solution? Grey
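A minimal sketch, assuming the table is called dbo.ShiftOps and that any operation starting before noon belongs to the morning after the shift date (the sample shifts all begin in the evening):

UPDATE dbo.ShiftOps
SET [date] = DATEADD(day, 1, [date])
WHERE DATEPART(hour, starttime) < 12
-- note: running this twice shifts the rows again, so in production
-- you would want to flag rows that have already been corrected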
Hi there; I've a photo viewer which shows an image based on a datetime query string. I've a couple of buttons like next, prev, last, etc. For "prev", for instance, I want to select the image with a datetime that is lower than the current image's datetime, and is the biggest such one, of course. But I'm stuck on it! By the way, I use .NET 2.0 & SQL 2005 Express. Thanks in advance.
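For the "prev" button, something like this sketch should do (Photos, TakenAt and @current are assumptions standing in for your table, column and query-string value):

SELECT TOP (1) PhotoId, TakenAt
FROM dbo.Photos
WHERE TakenAt < @current   -- strictly earlier than the current image
ORDER BY TakenAt DESC      -- ...and the latest of those

For "next", flip both the comparison and the sort direction.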
For example,

select fieldA from tableA where fieldA = 'aaa'

gives me the following output:

fieldA
---------
aaa
aAa
AAA
AAa
...

If I want to select only the lowercase 'aaa', how can I do that?
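An explicit case-sensitive collation in the WHERE clause does this (pick the CS_AS variant of your column's collation; Latin1_General_CS_AS here is an assumption):

SELECT fieldA
FROM tableA
WHERE fieldA = 'aaa' COLLATE Latin1_General_CS_AS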
My SQL Server database is not case sensitive. How can I make a LIKE clause distinguish capital and small letters? For example:

SELECT add1 FROM xcty_all WHERE add1 LIKE '%AL'

I need only:

10 ltncewwod way AL
456 Ruio St. AL

NOT
I need to split a string in two if there are lowercase characters at the end of it. For example:

'AAPLpr' becomes 'AAPL' and 'pr'
'Ta' becomes 'T' and 'a'
'MSFT' becomes 'MSFT' and ''
'TAPA' becomes 'TAPA' and ''

I am using SQL 2000. I read about "collate Latin1_General_CS_AS" but am not sure if I can use that outside of a select statement. Thank you in advance for any help.
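COLLATE can be applied to any character expression, not just inside a SELECT. Here is a sketch for SQL 2000 that uses a binary collation so the [a-z] range matches only lowercase letters; it assumes lowercase characters only ever appear as a trailing suffix:

DECLARE @s varchar(20), @pos int
SET @s = 'AAPLpr'
-- position of the first lowercase character (0 if none)
SET @pos = PATINDEX('%[a-z]%', @s COLLATE Latin1_General_BIN)
SELECT CASE WHEN @pos = 0 THEN @s ELSE LEFT(@s, @pos - 1) END AS BasePart,
       CASE WHEN @pos = 0 THEN '' ELSE SUBSTRING(@s, @pos, LEN(@s)) END AS Suffix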
Hi, I want to

select * from table1 where name = 'petter'

Now, if there are many variants of petter in the table, like PETTER, Petter and petter, which record will come up in the display? And if I want all three (PETTER, Petter, petter) to come up, which command is for this??? Regards
In the database, most of our cities are stored in all upper case. For reporting purposes, I need to have them returned as upper/lower. I used the below function, which works great for one-word cities. However, I can't figure out how to get it to capitalize the 1st letter of each word for addresses containing multiple names, such as Rancho Santa Margarita.
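Since the built-in functions only work on whole strings, a small UDF that walks the string and upper-cases any letter following a space is one way. A sketch (the function name is hypothetical, and only spaces are treated as word breaks):

CREATE FUNCTION dbo.ProperCase (@s varchar(8000))
RETURNS varchar(8000)
AS
BEGIN
    DECLARE @i int, @result varchar(8000), @prev char(1)
    SET @result = LOWER(@s)
    SET @i = 1
    SET @prev = ' '   -- treat the start of the string as following a space
    WHILE @i <= LEN(@s)
    BEGIN
        IF @prev = ' '
            SET @result = STUFF(@result, @i, 1, UPPER(SUBSTRING(@result, @i, 1)))
        SET @prev = SUBSTRING(@result, @i, 1)
        SET @i = @i + 1
    END
    RETURN @result
END

With that in place, SELECT dbo.ProperCase('RANCHO SANTA MARGARITA') returns 'Rancho Santa Margarita'.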
I want to perform column-level and database-level encryption/decryption... Does anybody have that code written in C# or VB.NET for the AES-128, AES-192 and AES-256 algorithms? I have code for a single string, but I want to encrypt/decrypt columns and sometimes the whole database... Can anybody help me out? If you have a stored procedure in SQL that does the same, that will also do... Thanks in advance.
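If staying inside the database is acceptable, SQL Server 2005 can do AES column encryption natively. A minimal sketch (the key name, passphrase, table and columns are all assumptions, and the ciphertext column must be varbinary):

CREATE SYMMETRIC KEY ColKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase'
GO
OPEN SYMMETRIC KEY ColKey DECRYPTION BY PASSWORD = 'Str0ng!Passphrase'

-- encrypt a column in place (CardNumber_Enc is varbinary(256))
UPDATE dbo.Customers
SET CardNumber_Enc = ENCRYPTBYKEY(KEY_GUID('ColKey'), CardNumber)

-- read it back
SELECT CONVERT(varchar(32), DECRYPTBYKEY(CardNumber_Enc)) AS CardNumber
FROM dbo.Customers

CLOSE SYMMETRIC KEY ColKey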
I still haven't resolved the issue with displaying information from a SQL database. The text I'm displaying is in ALL CAPS in the SQL database, and I'm trying to convert it so that when I display it in a GridView, The First Letter Of Each Word Is Capitalized, as opposed to ALL CAPS. I've tried the text-transform feature of CSS, but I noticed in a SQL book there are LOWER() & UPPER() string functions. The ideal thing to do, then, would be a select statement that converts all the incoming text to lowercase, and then use the CSS text-transform: capitalize to convert the first letter of each word to caps. Basically, I need a select statement or something that converts my SQL material to lowercase. Thanks.
I want to search for alphanumeric values between an upper and lower bound in a SQL database. For example: search for a serial number like pvf-456-3b. The upper bound is q, the lower bound is g. I should then get every serial number starting with g through q. If possible the bounds should be more specific, like "search for serial numbers between gt2 and qy". Can anybody help me out?
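String comparisons in SQL Server are lexicographic, so ordinary range predicates cover this (serial_no and dbo.Parts are assumptions):

-- everything starting with g through q
SELECT serial_no FROM dbo.Parts
WHERE serial_no >= 'g' AND serial_no < 'r'

-- the tighter bounds from the example
SELECT serial_no FROM dbo.Parts
WHERE serial_no BETWEEN 'gt2' AND 'qy'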
I am running the following OSQL command and capturing the return code for errors. Whenever I have an error like the server not existing or being unable to log in, I get a return code of 1 for %ERRORLEVEL%. However, whenever I have a wrong db-compatibility-level error, the return code is 0, even though OSQL returns an information message that the right compatibility levels are 60, 70 and 80. How can I get OSQL to return the right return code whenever an error of this type occurs from batch-mode SQL? The OSQL I am running from the batch is:

osql -S%SrvName% -U%Username% -P%Userpswd% -n -w 132 -d%DBname% -Q%sqlcmd% -o%Dirrpt%\%DBname%_%SPname%.txt
ECHO %errorlevel% >> %logbatch%
IF %ERRORLEVEL% NEQ 0 Goto SQLError

%sqlcmd% is: exec sp_dbcmptlevel srvrname, dbname 80

Thanks in anticipation. Ajay
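If memory serves, osql only sets a nonzero DOS ERRORLEVEL for SQL-level errors when run with the -b ("on error batch abort") option; a sketch of the adjusted call:

osql -b -S%SrvName% -U%Username% -P%Userpswd% -n -w 132 -d%DBname% -Q%sqlcmd% -o%Dirrpt%\%DBname%_%SPname%.txt
IF ERRORLEVEL 1 Goto SQLError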
What I'm trying to do is select the closest value from a list given a parameter, or the matched value itself if it exists.
declare @compare as int
set @compare = 8

declare @table table ( Number int )

insert into @table values (1), (2), (3), (4), (5), (10)
If the parameter value matches one of the values from the table list, select that matched one. If the value does not exist in the table list, select the closest lower value from the table list; in this case, it would be 5.
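Given that setup, a TOP (1) with a descending sort covers both cases: it returns the exact match when one exists, and otherwise the closest lower value (5 for @compare = 8). Run it in the same batch as the declarations above:

SELECT TOP (1) Number
FROM @table
WHERE Number <= @compare
ORDER BY Number DESC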
I have a virtual server (VMware ESX) with 64GB RAM running a single instance of SQL 2012 SP1. The max memory config is set to 59392 (58GB).
The Page Life Expectancy for this server has been averaging well under 10 mins for the last few days, according to our monitoring.
I have been checking the amount of data in the buffer cache periodically during the day with the below query, which seems to show that there is never more than about 10GB of data at any one time, frequently dropping below 5GB:
SELECT COUNT(*) AS BufferPages,
       CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB
FROM sys.dm_os_buffer_descriptors

Why would the amount of cached data be so low (and cause so much churn)?
I am aware that other things will require some of that memory (plan cache etc.) but with Max Mem of 58GB, I would expect there to be a much higher amount of actual cached data at any one time. I did the same checks on another VM with the same amount of RAM/Max Mem setting, and there was 50GB of data in the cache, with PLE measured in hours.
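It may also help to break the buffer pool down by database with the same DMV, to see which database the little remaining cached data belongs to (a sketch):

SELECT CASE database_id
           WHEN 32767 THEN 'ResourceDb'
           ELSE DB_NAME(database_id)
       END AS DatabaseName,
       CONVERT(decimal(10, 2), COUNT(*) / 128.0) AS BufferMB
FROM sys.dm_os_buffer_descriptors
GROUP BY database_id
ORDER BY BufferMB DESC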
When you use transactions in ADO.NET, are locks taken on the entire table or at the row level? For instance, if you do a SELECT within a transaction and only pull 5 rows out of a 1000-row table, can you make it lock just the rows that have been pulled? It seems like it locks the entire table.
Hi, can anybody please explain what low-level and high-level locking are in a SQL Server 2005 database? Also, what is the name of the process which converts low-level locks into high-level locks and vice versa? -Sanjeev
Hi, I'd very much appreciate it if someone would tell me how to translate a statement-level trigger written in Oracle to its equivalent (if there is one) in MS SQL Server. Ditto for a row-level trigger. If this is an old topic, I apologize; I'm very much a newbie to SQL Server. Regards, Allan M. Hart
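For what it's worth, SQL Server triggers fire once per statement by default, which already matches Oracle's statement-level triggers; row-level (FOR EACH ROW) logic is emulated by joining the inserted/deleted pseudo-tables inside the trigger. A sketch with hypothetical table names:

CREATE TRIGGER trg_Orders_Audit ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON
    -- one set-based pass over every row the statement touched
    INSERT INTO dbo.OrdersAudit (OrderId, ChangedAt)
    SELECT i.OrderId, GETDATE()
    FROM inserted AS i
END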