Hi all, I found that when I trained my data mining models, the model coverage rate is very low (in my case, the training data set has 82 rows but only 25 cases occur in the models I trained). How can I improve the coverage rate to improve the quality of the models (if that is possible in SQL Server 2005)? I am using SQL Server 2005.
Hi, I am trying to create an ASP.NET web recruiting application for HR, which will give users the ability to either copy/paste the resume and cover letter into text boxes or upload them as files, then submit (the data will be saved into a SQL Server 2000 table). I am thinking of saving the resume and cover_letter as image data type columns in SQL Server.
Can someone give me a direction on how to save the uploaded resume and cover letter to the table, and is that the easiest way to do it?
How do I deal with different formats of uploaded resumes? I hope to limit them to Word or HTML only.
Since I also give users the other option of pasting the resume and cover letter into text boxes: can I simply save the pasted resume and cover letter into a text column? Later, if an HR recruiter retrieves the text from the database, will it concatenate everything together without line breaks? Any ideas are appreciated.
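For what it's worth, here is a minimal sketch of one possible table design (all names are made up; SQL Server 2000 data types). Pasted text keeps its line breaks in a text/ntext column; they only appear to vanish when the text is rendered as HTML, so replace CR/LF with <br> when displaying it in the page.

CREATE TABLE dbo.CandidateDocument (
    CandidateID      int IDENTITY(1,1) PRIMARY KEY,
    CandidateName    varchar(100) NOT NULL,
    ResumeFile       image        NULL,   -- uploaded .doc/.html bytes
    ResumeFileName   varchar(260) NULL,   -- original file name, used to infer the format
    CoverLetterFile  image        NULL,
    CoverLetterFileName varchar(260) NULL,
    ResumeText       ntext        NULL,   -- pasted resume text (CR/LF is preserved as stored)
    CoverLetterText  ntext        NULL,
    SubmittedOn      datetime     NOT NULL DEFAULT GETDATE()
);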
Very simply, I need to add a cover page to my SSRS 2005 report. I've tried this using the Page Header region, but the cover page will not show up on Preview or Export. How and where do I create a cover page?
I have a report which shows around 8 columns. The report is in landscape mode (width 11 in and height 8.5 in). I want to display only the header on the first page; basically, I want to add a cover page.
I have used a tablix to display the data. To display an empty page, I added a rectangle before the tablix and set "add a page break after" to true. But when exporting to PDF, I get 2 blank pages with the header, and I need only one.
If you are familiar with Crystal Reports or Visual Basic, you may be familiar with the Rate and Pmt functions.
I need to duplicate them in SQL Server 7.
Anybody have code for this already? I hate re-inventing the wheel.
More (unnecessary) details: I have a client who has handed me the formula that I need to use for calculating interest rates. Unfortunately, the formula was written in Crystal Reports, so now I need to pick it apart and do the work that CR does automatically. Any help?
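In case it helps, here is a hedged sketch of the ordinary-annuity payment formula that sits behind VB's and Crystal's Pmt, written as a stored procedure because SQL Server 7 has no user-defined functions (the procedure name and sign conventions are mine; it assumes payments at period end and a future value of 0). Rate, by contrast, has no closed form and has to be solved iteratively (e.g. Newton's method).

CREATE PROCEDURE dbo.usp_Pmt
    @rate float,   -- interest rate per period, e.g. 0.08/12 for 8% APR paid monthly
    @nper int,     -- number of payment periods
    @pv   float,   -- present value (loan principal)
    @pmt  float OUTPUT
AS
BEGIN
    IF @rate = 0
        SET @pmt = @pv / @nper                                       -- no interest: straight division
    ELSE
        SET @pmt = @rate * @pv / (1.0 - POWER(1.0 + @rate, -@nper))  -- ordinary annuity payment
END
GO

-- Example: 30-year loan of 150,000 at 8% APR, monthly payments
DECLARE @p float, @r float
SET @r = 0.08 / 12.0
EXEC dbo.usp_Pmt @rate = @r, @nper = 360, @pv = 150000, @pmt = @p OUTPUT
SELECT @p AS MonthlyPayment   -- approximately 1100.65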
I need to develop a language-specific DWH, meaning that descriptions of products are available from a SAP system in multiple languages. English is the most important language and is the standard, but there are also requirements from countries that want product descriptions in their own language.
Productnr  Productdesc  Language
1          product      EN
1          produkt      DE
One option is to add a separate description column per language, but that is not very elegant. I was thinking of using bridge tables to model this, but then you always have to select a language in a filter (I think).
I'm thinking of a technical solution, such that when a user logs on, the language is determined and a view picks the product table specific to that language. But then I don't have the opportunity to switch between the different language-specific fields in a report (or, in my case, PowerPivot).
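One hedged way to model it (all table, column, and view names below are made up): keep a translation table keyed by product and language, and expose per-language views that fall back to English when a translation is missing. The view can be swapped per user at logon, or the translation table can be brought into PowerPivot as its own table so the language stays filterable.

CREATE TABLE dbo.DimProductDescription (
    ProductNr    int           NOT NULL,
    LanguageCode char(2)       NOT NULL,   -- 'EN', 'DE', ...
    ProductDesc  nvarchar(200) NOT NULL,
    CONSTRAINT PK_DimProductDescription PRIMARY KEY (ProductNr, LanguageCode)
);
GO

-- One view per language, falling back to the English description when no translation exists
CREATE VIEW dbo.vwProductDesc_DE
AS
SELECT p.ProductNr,
       COALESCE(de.ProductDesc, en.ProductDesc) AS ProductDesc
FROM (SELECT DISTINCT ProductNr FROM dbo.DimProductDescription) AS p
LEFT JOIN dbo.DimProductDescription AS de
       ON de.ProductNr = p.ProductNr AND de.LanguageCode = 'DE'
LEFT JOIN dbo.DimProductDescription AS en
       ON en.ProductNr = p.ProductNr AND en.LanguageCode = 'EN';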
We have a production server with a database on which a few DTS packages execute every night. Most of them run Bulk Insert stored procedures.
So we have to set the recovery model of the database to SIMPLE for that period of time; otherwise it will blow up our logs.
Is there any way we can set up log shipping between our production and standby servers, but pause it for some time, set the recovery model of the primary database to SIMPLE, execute the DTS bulk insert jobs, bring it back to the FULL recovery model, and finally resume log shipping?
Is it possible? If yes, how can we achieve it?
If not, what could be another DR solution in this scenario?
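For reference, the recovery model switches themselves are plain ALTER DATABASE statements (the database name below is made up), but switching to SIMPLE breaks the log backup chain, so log shipping cannot simply be paused and resumed afterwards: the standby has to be re-initialized from a full (or differential) backup. The BULK_LOGGED recovery model is the usual alternative when the log chain needs to stay intact during bulk inserts.

ALTER DATABASE ProdDB SET RECOVERY SIMPLE;   -- or BULK_LOGGED to keep the log backup chain intact
-- ... run the nightly DTS bulk insert jobs ...
ALTER DATABASE ProdDB SET RECOVERY FULL;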
I am looking for information that tells me how fast a database is growing, in MB and/or percentages, over a given period of time (i.e. weekly, monthly, yearly, etc.), either in real numbers or as estimates. Does 7.0 already store something like this, or do I need to write some code for it?
Or does someone have something like this already coded that they would be willing to share?
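For what it's worth, SQL Server 7.0 does not keep a growth history on its own, so here is a hedged sketch of the usual roll-your-own approach (all names below are made up): snapshot the file sizes from sysfiles on a schedule (e.g. a weekly SQL Agent job run in each database) and diff the snapshots later to get growth in MB or percent.

CREATE TABLE dbo.DbSizeHistory (
    SampleDate datetime      NOT NULL DEFAULT GETDATE(),
    DbName     sysname       NOT NULL,
    FileName   sysname       NOT NULL,
    SizeMB     decimal(12,2) NOT NULL
);

-- Run this from the scheduled job in each database to be tracked:
INSERT INTO dbo.DbSizeHistory (DbName, FileName, SizeMB)
SELECT DB_NAME(), name, size * 8.0 / 1024.0   -- sysfiles.size is in 8 KB pages
FROM sysfiles;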
Hi, I want to store tax rates in my tables. I set the data type to float; I want 4 decimal places, and the data in the table has 4 decimals, but when I run a query in Query Analyzer it returns 4.4999999999999998E-2 instead of 0.045. How can I fix this?
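float is an approximate (binary) type, so 0.045 cannot be stored exactly and Query Analyzer shows the nearest representable value. A hedged sketch of the usual fix (table and column names are made up): store the rate in a fixed-precision decimal with 4 decimal places, or cast for display if the column must stay float.

-- Change the column to an exact type:
ALTER TABLE dbo.TaxTable ALTER COLUMN TaxRate decimal(9,4) NOT NULL;

-- Or, if the column has to stay float, round/cast only for display:
SELECT CAST(TaxRate AS decimal(9,4)) AS TaxRate FROM dbo.TaxTable;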
I have installed a SQL Server diagnostic tool for evaluation. It warns me that the "Procedure Cache hit rate" is, for example, 15%. Its help indicates:
The Procedure Cache Hit Rate alarm is raised when the ratio between the number of times SQL Server looks for a plan in the procedure cache and the number of times it does not find a required plan in the procedure cache falls below a threshold.
A low procedure cache hit rate indicates that SQL Server is finding fewer of the query execution plans it needs already in memory and therefore has to perform more compiles. These extra compilations will degrade SQL Server performance by causing extra CPU load.
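If you want to cross-check the tool, here is a hedged sketch (assuming SQL Server 2005 or later, where sys.dm_os_performance_counters is available) that computes the plan cache hit ratio from the raw counters; ratio counters expose a value and a matching base counter.

SELECT CAST(100.0 * v.cntr_value / NULLIF(b.cntr_value, 0) AS decimal(5,2)) AS PlanCacheHitRatioPct
FROM sys.dm_os_performance_counters v
JOIN sys.dm_os_performance_counters b
  ON b.object_name   = v.object_name
 AND b.instance_name = v.instance_name
 AND b.counter_name  = 'Cache Hit Ratio Base'
WHERE v.object_name LIKE '%Plan Cache%'
  AND v.counter_name = 'Cache Hit Ratio'
  AND v.instance_name = '_Total';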
I've got a statistics table that I've been writing to for about 2 years now. Every Saturday night, a size (in MB) snapshot of each DB file is taken and dumped into this table. I'm then emailed a copy for that week.
Now, I'm trying to figure out what the fastest growers are. Here's the table ddl
What I'm trying to figure out is how to query the average monthly and yearly growth percentages per DB on the MDFSize column.
I'm usually pretty good at this sort of thing, but I just can't seem to wrap my head around how to solve this issue. I'm not having a very good math day.
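Since the DDL isn't shown here, the sketch below assumes column names (DBName, StatDate, MDFSize) and a table name (dbo.DbFileStats); adjust them to the real ones. It takes one value per database per month (the monthly maximum, as a stand-in for the month-end snapshot) and compares it with the previous month; averaging MonthlyGrowthPct per database then gives the average monthly growth, and grouping by year alone gives the yearly figure.

SELECT cur.DBName,
       cur.YearNo, cur.MonthNo,
       cur.MDFSize  AS CurrentMB,
       prev.MDFSize AS PreviousMB,
       CAST(100.0 * (cur.MDFSize - prev.MDFSize) / NULLIF(prev.MDFSize, 0) AS decimal(9,2)) AS MonthlyGrowthPct
FROM ( SELECT DBName, YEAR(StatDate) AS YearNo, MONTH(StatDate) AS MonthNo, MAX(MDFSize) AS MDFSize
       FROM dbo.DbFileStats
       GROUP BY DBName, YEAR(StatDate), MONTH(StatDate) ) cur
JOIN ( SELECT DBName, YEAR(StatDate) AS YearNo, MONTH(StatDate) AS MonthNo, MAX(MDFSize) AS MDFSize
       FROM dbo.DbFileStats
       GROUP BY DBName, YEAR(StatDate), MONTH(StatDate) ) prev
  ON prev.DBName = cur.DBName
 AND prev.YearNo * 12 + prev.MonthNo = cur.YearNo * 12 + cur.MonthNo - 1
ORDER BY cur.DBName, cur.YearNo, cur.MonthNo;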
I need to pick up a tax rate that is stored in a one-record table. I would like to avoid using a CROSS JOIN. Is there a way to SELECT the record and set a variable equal to the tax rate, so I can use the rate in another SELECT statement on each record?
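A minimal sketch of that pattern (table and column names are assumed): read the single-row rate into a variable and reference the variable in later queries instead of cross-joining the one-row table.

DECLARE @TaxRate decimal(9,4);
SELECT @TaxRate = TaxRate FROM dbo.TaxRateFile;   -- one-row table

SELECT OrderID, Amount, Amount * @TaxRate AS TaxAmount
FROM dbo.Orders;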
We are having problems with our SQL Server 2000. The problem is that on a daily basis we run out of disk space and I always have to run SHRINKDATABASE on tempdb. Today we started with 160 GB of free space and by the end of the day it was gone!
Yes, we do have many jobs running on our SQL Server pulling data in from many sources. But I don't know how to find out which job is causing this problem. I have a suspicion it could be a job that runs hourly and pulls data from Oracle (approximately 10,000 rows each time), but that job has been active since 28 August 2007, and we only started running out of space in the past 5 days. Any suggestions would be appreciated as to what is causing this or how to diagnose the problem.
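SQL Server 2000 has no per-session tempdb usage views, so one hedged approach (table name and schedule are made up) is to log tempdb's total size together with whatever sessions are active in it every few minutes from an Agent job, and then see which job lines up with the growth.

CREATE TABLE dbo.TempdbGrowthLog (
    SampleTime datetime      NOT NULL DEFAULT GETDATE(),
    TempdbMB   decimal(12,2) NOT NULL,
    spid       smallint      NULL,
    loginame   nchar(128)    NULL,
    program    nchar(128)    NULL,
    cmd        nchar(16)     NULL
);

-- Schedule this every 5-10 minutes; if nothing is active in tempdb at sample time, no row is written.
INSERT INTO dbo.TempdbGrowthLog (TempdbMB, spid, loginame, program, cmd)
SELECT (SELECT SUM(size) * 8.0 / 1024.0 FROM tempdb.dbo.sysfiles),   -- current tempdb size in MB
       spid, loginame, program_name, cmd
FROM master.dbo.sysprocesses
WHERE dbid = DB_ID('tempdb') AND spid > 50;   -- user sessions currently using tempdb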
I have a 32 bit SQL 2005 EE clustered installation with 10GB of physical memory and AWE enabled. Our monitoring tool, Spotlight, is reporting the Procedure Cache to be 384MB and a Hit Rate of 75% on a fairly regular basis. Sometimes the Procedure Cache increases to 495MB and a Hit Rate of 82%.
(1) With 2005 can the Procedure Cache be increased?
(2) What is the max size of Procedure Cache?
(3) How do I increase the Hit Rate to a higher percentage?
I do not encounter the issue on any other SQL Server installation, however this is our only cluster.
DBCC PROCCACHE
num proc buffs        = 64889
num proc buffs used   = 1135
num proc buffs active = 1135
proc cache size       = 2896
proc cache used       = 364
proc cache active     = 364
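On (1) and (2): in SQL Server 2005 the size of the procedure (plan) cache is managed internally and cannot be set directly, and on a 32-bit instance it must live in the normal virtual address space rather than in AWE-mapped memory, which is why a 10 GB AWE box can still show only a few hundred MB of plan cache. A hedged way to see where that space is going (for example, whether it is dominated by single-use ad hoc plans, a common cause of a low hit rate) is to break the cache down by object type:

SELECT objtype,
       COUNT(*)                                       AS plans,
       SUM(CAST(size_in_bytes AS bigint)) / 1024 / 1024 AS size_mb
FROM sys.dm_exec_cached_plans
GROUP BY objtype
ORDER BY size_mb DESC;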
We have asynchronous database mirroring on SQL Server 2005 SP2 Enterprise Edition / Windows 2000 Advanced Server. We noticed that the log send rate is quite low (average 1.3 MB/sec) in most cases, whereas "Log Bytes Flushed/sec" is high (1.4 MB/sec); as a result, the log send queue keeps increasing and finally takes all the transaction log space. Our disk queue length is always in the range of 0.01, and the principal and mirror servers are on the local LAN.
I tried on a low-end server and a high-end server, and in both cases the log send rate is approx. 1.3 MB/sec (maximum 4 MB/sec).
Is there any limitation on the log send rate?
How can we improve the log send rate? Since both servers are on the local LAN, network bandwidth does not seem to be an issue.
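A hedged way to watch the send rate and queue directly on the principal (SQL Server 2005; substitute your mirrored database name):

SELECT instance_name AS database_name,
       counter_name,
       cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Database Mirroring%'
  AND counter_name IN ('Log Bytes Sent/sec', 'Log Send Queue KB')
  AND instance_name = 'YourMirroredDb';   -- assumed database name
-- 'Log Bytes Sent/sec' is exposed as a cumulative value here: sample it twice and divide the
-- difference by the elapsed seconds to get an actual rate.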
I have a procedure that requires picking up the Rate based on Effective Date. This is what I have so far:
SELECT SHPD.ProductID, SHPD.ReceivedDate, SHPD.Shipper, SHIP.UnitRate
FROM tblShipmentDet SHPD
LEFT OUTER JOIN tblShippers SHIP
       ON SHIP.ProductID = SHPD.ProductID
      AND SHIP.Shipper = SHPD.Shipper
      AND Max???(SHIP.EffectiveDate) <= SHPD.ReceivedDate
Because there can be more than one Shipper record, I would somehow need to pick up the maximum EffectiveDate in each case. I realize I cannot use the MAX aggregate in the JOIN. Not sure where to go from here. On the mainframe I used a LOOKUP function that would return the correct EffectiveDate. Help would be appreciated.
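One hedged way to express that lookup, keeping the table and column names above: let a correlated subquery pick the latest EffectiveDate that is on or before each shipment's ReceivedDate, so the join lands on at most one tblShippers row per shipment.

SELECT SHPD.ProductID, SHPD.ReceivedDate, SHPD.Shipper, SHIP.UnitRate
FROM tblShipmentDet SHPD
LEFT OUTER JOIN tblShippers SHIP
       ON SHIP.ProductID = SHPD.ProductID
      AND SHIP.Shipper   = SHPD.Shipper
      AND SHIP.EffectiveDate = (SELECT MAX(S2.EffectiveDate)
                                FROM tblShippers S2
                                WHERE S2.ProductID = SHPD.ProductID
                                  AND S2.Shipper   = SHPD.Shipper
                                  AND S2.EffectiveDate <= SHPD.ReceivedDate);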
SQL Server 2005, XEON CPU 3.0 GHz, 2.0 GB memory, RAID.
Two tables: HIS_HTTP_ONLINE_LOG (partitioned) for history data and REL_HTTP_ONLINE_LOG (not partitioned) for each day's data, and they have the same structure:

CREATE TABLE HIS_HTTP_ONLINE_LOG(
    ID numeric(20,0) NOT NULL,
    USERID varchar(32) NOT NULL,
    USERIP varchar(16) NOT NULL,
    USERPORT numeric(10,0) NULL,
    OBJECTIP varchar(16) NULL,
    OBJECTPORT numeric(10,0) NULL,
    HTTPURL varchar(256) NULL,
    HTTPHOST varchar(128) NULL,
    HTTPDNS varchar(128) NULL,
    VISITIME numeric(10,0) NULL,
    STARTIME datetime NOT NULL,
    ENDTIME datetime NOT NULL)
.......

SELECT * INTO REL_HTTP_ONLINE_LOG FROM HIS_HTTP_ONLINE_LOG WHERE 1=2

There are 5 indexes on HIS_HTTP_ONLINE_LOG; there is no index on REL_HTTP_ONLINE_LOG. About 5,000,000 records go into REL_HTTP_ONLINE_LOG every day, and at night they are moved into HIS_HTTP_ONLINE_LOG automatically; the daily data is retained for the last 90 days.

My operations:
1: ALTER DATABASE DB SET RECOVERY SIMPLE
2: EXEC SP_DBOPTION DB, 'select into/bulkcopy', 'TRUE'
3: INSERT INTO HIS_HTTP_ONLINE_LOG SELECT * FROM REL_HTTP_ONLINE_LOG
4: TRUNCATE TABLE REL_HTTP_ONLINE_LOG

Question: why does step 3 take so much time (about 1 hour), and how can I reduce the transaction log generated during this period? Could you give me some suggestions? Thanks!
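One commonly suggested direction, sketched here with an arbitrary batch size: in SQL Server 2005 an INSERT ... SELECT is always fully logged (minimal logging applies to SELECT INTO and bulk loads, not to INSERT ... SELECT), and the five indexes on the history table add further log volume, which is why step 3 is slow. Copying the rows in key-range batches keeps each transaction small, so under the SIMPLE recovery model the log space can be reused between batches instead of growing for the whole hour. Since HIS_HTTP_ONLINE_LOG is already partitioned, loading a staging table and using ALTER TABLE ... SWITCH is another option, but it requires matching structures and indexes; the batching sketch needs no schema changes.

DECLARE @minid numeric(20,0), @maxid numeric(20,0), @batch numeric(20,0);
SET @batch = 100000;   -- arbitrary batch size
SELECT @minid = MIN(ID), @maxid = MAX(ID) FROM REL_HTTP_ONLINE_LOG;

WHILE @minid <= @maxid
BEGIN
    -- copy one ID range per transaction
    INSERT INTO HIS_HTTP_ONLINE_LOG
    SELECT * FROM REL_HTTP_ONLINE_LOG
    WHERE ID >= @minid AND ID < @minid + @batch;

    SET @minid = @minid + @batch;
END

TRUNCATE TABLE REL_HTTP_ONLINE_LOG;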
I am the only DBA in my company, and a client wants to know the growth rate of his SQL Server database, which is in production. How can I get the growth rate per day?
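One hedged, low-effort approach, assuming full backups are taken regularly: msdb already records a size for every backup, so the day-to-day difference in full backup size approximates daily growth (the database name below is a placeholder).

SELECT database_name,
       CONVERT(char(10), backup_start_date, 120) AS backup_day,
       CAST(MAX(backup_size) / 1024.0 / 1024.0 AS decimal(12,2)) AS backup_mb
FROM msdb.dbo.backupset
WHERE type = 'D'                      -- full database backups
  AND database_name = 'YourDb'        -- assumed name
GROUP BY database_name, CONVERT(char(10), backup_start_date, 120)
ORDER BY backup_day;
-- The difference between consecutive days' backup_mb is an estimate of daily growth.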
When sizing products we use predefined size groups, and users can choose any or all of the sizes from a group. For example, if a size group consisted of sizes (6,8,10), they could use all sizes (6,8,10), just (6,8), or just (10) if required. Similarly, if a group consisted of (S,M,L,XL), they could choose to buy only (S,L). They cannot choose across groups, so they would not be able to choose (6,S).
Once the required sizing is determined, they then assign size mixes to the sizes to denote how much of the buy will be in each size. So, for example, if we had 3 sizes (6,8,10) with the associated mixes (25%,25%,50%), that would mean we would buy 25% of size 6, 25% of size 8, and 50% of size 10. All size mixes must add up to 100% in total.
The users do analysis to determine what sizes they wish to buy and how much of it.
We also have a franchise portion of the business that has some predefined size mixes. They use the same base size groups as above, but the rule is that they can only use sizes that the particular product is being bought in.
So if the assigned franchise mix is S (50%), M (50%) and the main mix was S (100%), then the franchise mix would only be able to include the S size.
We would then eliminate those sizes from the franchise mix, and to ensure that the franchise mix still adds up to 100%, we pro-rate the remaining franchise mix up to give a new mix. To do this I divide one by the total of the remaining size mixes to get a ratio, and then multiply the mixes by this factor.
In the case above, the franchise would not be able to use the M size and would only use the S. This would be:
- Total of the remaining mixes (in this case only size S, for simplicity): 1 / 0.5 = 2
- Multiply the original mix by this factor: 0.5 * 2 = 1
Size S would now be 100% instead of 50%.
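The same pro-rating can be written as a hedged T-SQL sketch (all table and column names below are made up, and mixes are assumed to be stored as fractions, e.g. 0.5000): divide each remaining franchise mix by the sum of the remaining mixes so the result adds back to 1.

SELECT f.SizeCode,
       CAST(f.MixPct / t.TotalPct AS decimal(9,4)) AS ProRatedMixPct
FROM dbo.FranchiseMix f
CROSS JOIN (SELECT SUM(MixPct) AS TotalPct
            FROM dbo.FranchiseMix
            WHERE SizeCode IN (SELECT SizeCode FROM dbo.MainMix)) t   -- sizes kept in the main buy
WHERE f.SizeCode IN (SELECT SizeCode FROM dbo.MainMix);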
The issue I'm having is that on occasion some of the totals add up to 100.01%, because another one of the requirements is that the mixes are held to 4 decimal places (0.1015 would represent 10.15% in Excel).
Here is a shortened version of the code with some test data: