Lots Of Transactions, But No Activity

Jan 30, 2007

Hi all,

I have a strange situation. Performance Monitor shows that the SQLServer:Transactions "Transactions" counter value is 125, but SQL Server Profiler does not show any activity.

I ran sp_who2 and I have a bunch of processes with SUSPENDED status. Would those be counted in Performance monitor?

When does a transaction get suspended?
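For anyone hitting the same thing: SUSPENDED in sp_who2 means the request is waiting on a resource such as a lock or I/O, and its open transactions should still be counted by that PerfMon counter. A quick way to see what the suspended sessions are waiting on, assuming SQL Server 2005 or later for the DMVs:

-- what are the suspended sessions waiting on?
SELECT session_id, status, command, wait_type, wait_time, blocking_session_id
FROM sys.dm_exec_requests
WHERE status = 'suspended'

-- oldest active transaction in the current database
DBCC OPENTRAN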

Thanks.

Alec


Performance Problem, Lots Of Disk Activity, Running Out Of Memory

Jul 20, 2005

Fellas!! This is a very complicated one and it took me a few days to figure out exactly what's going on, but here's the final story:

I have a production environment running on .NET with a SQL Server (2000, SP3). The SQL Server is on a dedicated Proliant computer with 2GB RAM (the actual SQLServer.exe process has dynamic memory assignment and can reach up to 1.6GB RAM). Nothing else is running on that specific computer. Once the SQL Server is started, it hits 300MB RAM (the minimum that was set in the configuration of the server - remember, it is dynamically acquired).

Then there is a .NET program that requests just about all the data the SQL Server contains (apart from a single table that contains roughly 1.6 million rows and another table that contains about 10000 rows which are all of type IMAGE). Once all the data is retrieved, the RAM is at about 400MB. From there on, every update I make to the data on the server causes the RAM to go up by a bit (the updates are done in a Transaction which of course is committed at the end). It seems that BLOB updates are the major problem in all of this. For some reason, uploading a blob of size 9MB causes the RAM to go up by roughly 20MB, and after commit it goes down 10MB (total gain of roughly 10MB RAM). Eventually the SQLServer process hits its upper limit (1.6GB) and at this point it starts slowing down.

Some performance checks showed me the SQL Server has a lot of disk activity; it seems it is reading and writing pages of data from/to the HD all the time (which causes the queries to be much much much slower).

We have a development environment running the exact same code (it is the exact same in everything, except for the amount of data stored in the DB). This does not happen there at all.

I have a few questions:
1. Why is the RAM going up after BLOB updates?
2. Why is the RAM going up at all?
3. How can I tell the DB which tables should remain in the RAM at all times (never swapped back to the HD)? DBCC PINTABLE does not seem to do the job.

It does not seem to have anything to do with the .NET code.

Thank you very much,
M Yamo.
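A hedged note on question 3: SQL Server 2000 grows toward its configured maximum by design and only gives memory back under OS pressure, so rather than pinning tables, a common workaround is to cap the instance below the point where the box starts paging. A minimal sketch; the 1400 MB figure is illustrative, not a recommendation:

EXEC sp_configure 'show advanced options', 1
RECONFIGURE

-- cap the buffer pool (value in MB)
EXEC sp_configure 'max server memory (MB)', 1400
RECONFIGURE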


Changing Connection Transactions To Database Transactions

May 22, 2005

Hi there,
I have decided to move all my transaction handling from asp.net to stored procedures in a SQL Server 2000 database. I know the database is capable of rolling back the transactions just like myTransaction.Rollback() in asp.net. But what about exceptions? In asp.net, I am used to doing the following:
<code>Try
   'execute commands
   myTransaction.Commit()
Catch ex As Exception
   Response.Write(ex.Message)
   myTransaction.Rollback()
End Try</code>

Will the database inform me of any exceptions (and their messages)? Do I need to put anything explicit in my stored procedure other than ROLLBACK TRANSACTION?
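Since this is SQL Server 2000 (no TRY...CATCH until 2005), the usual pattern is to test @@ERROR after each statement and roll back yourself; a RAISERROR with severity 16 comes back to ADO.NET as a SqlException, so the Catch block above still fires. A minimal sketch with hypothetical statements:

CREATE PROCEDURE dbo.usp_TransferExample
AS
BEGIN
    BEGIN TRANSACTION

    UPDATE dbo.Accounts SET Balance = Balance - 100 WHERE AccountID = 1  -- hypothetical
    IF @@ERROR <> 0 GOTO ErrHandler

    UPDATE dbo.Accounts SET Balance = Balance + 100 WHERE AccountID = 2  -- hypothetical
    IF @@ERROR <> 0 GOTO ErrHandler

    COMMIT TRANSACTION
    RETURN 0

ErrHandler:
    ROLLBACK TRANSACTION
    RAISERROR ('Transfer failed; transaction rolled back.', 16, 1)
    RETURN 1
END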
Any help is greatly appreciated


Please Help! I Have Lots Of Questions.

May 6, 2007

In case some of you have read my previous posts, you may be aware that I'm writing a webboard application as a replacement for the old one. The old one currently has approximately 50000 topics in the database, each with on average 10 replies (I just checked recently; I thought it was only 7000 topics). I need to provide paging and sorting features for the topic list. But I can't just SELECT all of them and let GridView do the paging/sorting, right?

I have been using stored procedures to store my SQL statements for several projects now. I know how to deal with the paging feature (ROW_NUMBER), but the sorting requires me to change the "ORDER BY" clause.

1. Can somebody tell me how to change the ORDER BY clause in the stored procedure(s) at runtime? Or does anyone have another approach?

Currently I'm thinking about moving back from stored procedures to hard-coded SQL statements, and then modifying/generating the SQL statement for each paging/sorting. But I've learned that stored procedures give more performance and security.

2. According to the situation I provided, is it worth moving from stored procedures to hard-coded SQL?

I'm also using a 3-tier architecture approach + OOP. But I've reached a conflict in my thoughts. You see, according to OOP, I'm supposed to create classes that reflect the actual objects in the real world, right? In my case the classes are "Board, Topic, Reply, ...". According to this and the 3-tier approach, I intend to use ObjectDataSource as a bridge between Presentation Logic and Business Logic. But I wonder what my datasource class should return.

3. Should my data source class return data objects, like the 1st approach:

[DataObject(True)]
public class TopicDataSource
{
    public static Topic[] GetTopicList() { }
}

or should it return DataSet / DataTable / DataReader, like the 2nd approach:

[DataObject(True)]
public class TopicDataSource
{
    public static DataTable GetTopicList() { }
}

Personally I think approach 1 is more OOP and allows for more extensibility, but approach 2 might be faster.

4. If I go with approach 1, how should I control which property of my data objects is read-only after it has been inserted/created? Can I just set my data object's property to be read-only? Or do I have to set it at page level (i.e. GridView -> Columns -> BoundField -> ReadOnly=True)? Or do I set it at the page level and write code to throw an exception in the rare case the application/user tries to change its value? Or else?

Please help. These questions have slowed me down for days now. If there are any concepts that I misunderstood, please tell me. I'm aware that I don't know as much as some of you. I will be extremely grateful to anyone who answers any of my questions. Thanks a lot.

PS. For those who think my questions are stupid, I'm very, very sorry to bother you.
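On question 1, one common pattern, sketched here with made-up names (tblTopic, TopicID, Subject, LastReplyDate): keep the stored procedure and drive ORDER BY from a parameter using one CASE expression per sortable column inside ROW_NUMBER(). This avoids dynamic SQL entirely:

CREATE PROCEDURE dbo.GetTopicPage
    @PageNum  int,
    @PageSize int,
    @SortCol  varchar(20)   -- 'Subject' or 'LastReplyDate'
AS
BEGIN
    SET NOCOUNT ON;
    WITH Numbered AS
    (
        SELECT TopicID, Subject, LastReplyDate,
               ROW_NUMBER() OVER (ORDER BY
                   CASE WHEN @SortCol = 'Subject'       THEN Subject       END,
                   CASE WHEN @SortCol = 'LastReplyDate' THEN LastReplyDate END DESC
               ) AS RowNum
        FROM dbo.tblTopic
    )
    SELECT TopicID, Subject, LastReplyDate
    FROM Numbered
    WHERE RowNum BETWEEN (@PageNum - 1) * @PageSize + 1 AND @PageNum * @PageSize
    ORDER BY RowNum;
END

Each CASE is NULL unless its column is the chosen sort key, so only one of them actually orders the rows.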


Do Lots Of COUNTs

Sep 19, 2006

Hello :)

I seem to have somehow got myself into a situation where I'm having to run the following SELECTs, one after another, on a single ASP page. This is not tidy. How can I join them all together so I get a single recordset returned with all my stats in different columns?

SELECT COUNT(*) FROM tblQuiz WHERE [q3] = '5 years +' OR [q3] = '2 - 4 years'
SELECT COUNT(*) FROM tblQuiz WHERE [q4] <> '' AND [q4] IS NOT NULL
SELECT COUNT(*) FROM tblQuiz WHERE [q5] = 'Unhappy'
SELECT COUNT(*) FROM tblQuiz WHERE [q6] = 'Yes'
SELECT COUNT(*) FROM tblQuiz WHERE [q7] = 'Yes'
SELECT COUNT(*) FROM tblQuiz WHERE [q8] <> '' AND [q8] IS NOT NULL
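One way to collapse these into a single recordset, since all six counts can share one pass over tblQuiz: wrap each condition in SUM(CASE ...) so every stat comes back as its own column. The column aliases are made up:

SELECT
    SUM(CASE WHEN [q3] IN ('5 years +', '2 - 4 years') THEN 1 ELSE 0 END) AS q3_count,
    SUM(CASE WHEN [q4] <> '' AND [q4] IS NOT NULL THEN 1 ELSE 0 END) AS q4_count,
    SUM(CASE WHEN [q5] = 'Unhappy' THEN 1 ELSE 0 END) AS q5_count,
    SUM(CASE WHEN [q6] = 'Yes' THEN 1 ELSE 0 END) AS q6_count,
    SUM(CASE WHEN [q7] = 'Yes' THEN 1 ELSE 0 END) AS q7_count,
    SUM(CASE WHEN [q8] <> '' AND [q8] IS NOT NULL THEN 1 ELSE 0 END) AS q8_count
FROM tblQuiz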


Lots Of Queries For My Db

Jul 20, 2005

Hello all,

I have a database in SQL Server that should save data from a CRM-like application. The database consists of tables like products, services, customers, partners etc. Problem is that the users should be able to find these items on different properties, and with or without substring matching (SQL: LIKE). Example: I want the users to be able to find a customer by providing a customerID, but also by providing a customer name, zipcode, or just a part of those strings.

This will result in a lot of queries. I bet there are some nice solutions to this, since I will not be the first with this situation. If anyone can help, please.

Thank you in advance.

Regards,
Freek Versteijn
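A hedged sketch of the usual catch-all search pattern, assuming a Customers table with illustrative column names; each parameter is optional and NULL means "don't filter on this" (note this convenience can cost index usage on large tables):

CREATE PROCEDURE dbo.SearchCustomers
    @CustomerID   int          = NULL,
    @CustomerName varchar(100) = NULL,
    @ZipCode      varchar(10)  = NULL
AS
BEGIN
    SET NOCOUNT ON;
    SELECT CustomerID, CustomerName, ZipCode
    FROM dbo.Customers
    WHERE (@CustomerID   IS NULL OR CustomerID = @CustomerID)
      AND (@CustomerName IS NULL OR CustomerName LIKE '%' + @CustomerName + '%')
      AND (@ZipCode      IS NULL OR ZipCode LIKE @ZipCode + '%')
END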


Deleting Lots Of Records

May 21, 2004

Apparently, deleting 7,000,000 records from a table of about 20,000,000 is not advisable. We were able to take orders at 8:00AM, but not at 7:59.

So, what's the best way of going about deleting a large number of records? Pretty basic lookup table, no relationships with other tables, 12 or so address-type fields, 4 or 5 simple indexes. I can take it down for a weekend or night, if needed.

DTS the ones to keep to another table, drop the old and rename the new table?
Bulk copy out, truncate and bring back in?
DTS to text, truncate and import back?
Other ways?

Never worked with such a large table and need a little experienced guidance.
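Besides the copy-and-truncate routes above, a hedged sketch of deleting in place: batch the DELETE so each chunk commits and releases its locks, which keeps the table usable while it runs. SET ROWCOUNT works for this on SQL Server 2000; the table and criteria are placeholders:

SET ROWCOUNT 10000            -- delete in 10,000-row chunks

WHILE 1 = 1
BEGIN
    DELETE FROM dbo.AddressLookup    -- hypothetical table
    WHERE IsObsolete = 1             -- hypothetical criteria

    IF @@ROWCOUNT = 0 BREAK
END

SET ROWCOUNT 0                -- back to normal

Dumping the transaction log between chunks keeps it from ballooning if the database isn't in simple recovery.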

Thanks for the help


SQL Performance- Lots Of Little Tables Or One Big One?

May 25, 2004

I am planning an application where ~1000 companies will be accessing data. Should I use a key to identify the company and place all data in one table (i.e. WHERE company = 123), or should the application create company-specific tables? I.e. should I have 1000 small tables with 100 records in each, or one table with 100,000 records?
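For what it's worth, the single-table route usually hinges on making the company key the leading column of the clustered index, so each company's rows stay physically together and a WHERE company = 123 filter is a cheap seek. A sketch with hypothetical names:

CREATE TABLE dbo.CompanyData
(
    CompanyID int NOT NULL,          -- which of the ~1000 companies owns the row
    RecordID  int NOT NULL,
    DataValue varchar(200) NULL,
    CONSTRAINT PK_CompanyData PRIMARY KEY CLUSTERED (CompanyID, RecordID)
)

SELECT RecordID, DataValue
FROM dbo.CompanyData
WHERE CompanyID = 123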


Does Sql Server Slow Down If It Has Lots Of Db's?

May 7, 2008

Hi,

You know how there are lots of hosted applications out there, many of them provide you with your own database (not shared).

1. If a server has 1K databases on it, will this slow down the server just due to the # of databases? (each user has their own database, but they won't be accessing it that much really).

A separate database is usually required for security purposes.

2. Can you still open up EM with 1K+ databases?


Trying To Restore Having Lots Of Troubles....

Sep 20, 2007

I used MS SQL Server Management Studio Express to back up my SQL Server 2005 database called "PMDB" on a server in my office to a jump drive. Then, I removed the jump drive from the server and plugged it into my laptop. I then tried using MS SQL Server Management Studio Express to restore "PMDB" to my laptop SQL Server 2005 Express Edition (instance "Primavera") to a database called "pmdb$primavera". I've had several issues:

1. pmdb$primavera is now "locked up" "in the middle of a restore". I can't seem to unlock it even by rebooting my laptop.
2. When I execute the restore (I've used several command sequences...) I get a message that there are additional "families" the restore is waiting for. I don't understand that.
3. I tried restoring to a completely new database (in the same instance...), but got the "family" problem. The query just hangs forever. When I cancel it, it hangs the database (see #1).
4. I now have two databases in the "Primavera" instance on my laptop in the "restoring" state. I can't get to either one of them.

I'm desperate. I have a customer presentation on Thursday where I must have the database working. Help!

Here's one of the queries I executed.... though I tweaked items here and there for 3 and 4 above.


restore database pmdb

from disk='f:\databasebackup\PMDB-BAK.BAK'

with recovery,

move 'pmdb_Dat' to 'C:\Program Files\MSSQL\Primavera\MSSQL.1\MSSQL\DATA\pmdb$primavera_DAT.MDF',

move 'pmdb_log' to 'C:\Program Files\MSSQL\Primavera\MSSQL.1\MSSQL\DATA\pmdb$primavera_LOG.LDF'

go



The issue seems to be #2 above each time.
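Two diagnostics that may help, offered as a sketch: RESTORE LABELONLY / FILELISTONLY read the backup set without restoring it (LABELONLY's FamilyCount column shows whether the backup was striped across several files, which is the usual cause of a "waiting for additional families" message - all the stripes are needed to restore), and RESTORE ... WITH RECOVERY on its own can bring a database stuck in the restoring state online:

-- how many media families does the backup set expect?
RESTORE LABELONLY FROM DISK = 'f:\databasebackup\PMDB-BAK.BAK'

-- what logical file names are there to MOVE?
RESTORE FILELISTONLY FROM DISK = 'f:\databasebackup\PMDB-BAK.BAK'

-- finish a database stuck "in the middle of a restore"
RESTORE DATABASE [pmdb$primavera] WITH RECOVERY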


SS2005 Log Has Lots Of These Messages.

Nov 14, 2007

Every time a transaction log is dumped we see the following message in the log file:

BackupDiskFile::OpenMedia: Backup device '\\s-sqlbkups-1\g$\myserver\log\my_database\mydatabase_backup_200711071430.trn' failed to open. Operating system error 2 (The system cannot find the file specified.).


Source spid139
Message
Error: 18204, Severity: 16, State: 1.



And yet, the actual log dump appears fine and the file is found on the share. The dump is done with a maintenance plan.

Any ideas?


Best Way To Populate A Page With Lots Of SQL Data

Sep 8, 2006

I have a page that has about 8 dropdown boxes that need to be populated from SQL tables. What is the best way to populate these boxes? Here is how I have it now:

conn = New SqlConnection(ConfigurationManager.AppSettings("SQLString"))

''''''''''''' Fill in DropDownList Status '''''''''''''''''''
strSelect = "SELECT * FROM Requests_Status"
cmdSelect = New SqlCommand(strSelect, conn)
conn.Open()
dtrSearch = cmdSelect.ExecuteReader()
ddlRequestStatus.DataSource = dtrSearch
ddlRequestStatus.DataTextField = "RequestStatusName"
ddlRequestStatus.DataValueField = "RequestStatus"
ddlRequestStatus.DataBind()
ddlRequestStatus.Items.Insert(0, New ListItem("-- Select Below --", -1))
cmdSelect.Cancel()
dtrSearch.Close()
conn.Close()

''''''''''''' Fill in DropDownList Container '''''''''''''''''''
strSelect = "SELECT * FROM Containers"
cmdSelect = New SqlCommand(strSelect, conn)
conn.Open()
dtrSearch = cmdSelect.ExecuteReader()
ddlContainer.DataSource = dtrSearch
ddlContainer.DataTextField = "ContainerName"
ddlContainer.DataValueField = "ContainerID"
ddlContainer.DataBind()
ddlContainer.Items.Insert(0, New ListItem("-- Select Below --", -1))
cmdSelect.Cancel()
dtrSearch.Close()
conn.Close()

I then repeat the same commands as above for the other 6 dropdowns. This seems like a bad way to have to do all this.

Thanks
Craig
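A hedged alternative on the database side: one stored procedure that returns all the lookup lists as successive result sets, so the page makes a single round trip and walks them with SqlDataReader.NextResult(). Only the two tables shown above are real; the remaining lists would follow the same pattern:

CREATE PROCEDURE dbo.GetPageLookups
AS
BEGIN
    SET NOCOUNT ON;

    -- result set 1: statuses
    SELECT RequestStatus, RequestStatusName FROM dbo.Requests_Status;

    -- result set 2: containers
    SELECT ContainerID, ContainerName FROM dbo.Containers;

    -- ...one SELECT per remaining dropdown
END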


Easiest Way To Update Lots Of Records

Apr 26, 2007

I have a database where several thousand records have NULL in a binary field.  I want to change all the NULLs to false.  I have Visual Studio 5, and the database is a SQL Server 5 database on a remote server.  What is the easiest way to do this?  Is there a query I can run that will set all ReNew to false where ReNew is Null?  This is a live database so I want to get it right.  I can't afford to mess it up.Diane 
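A minimal sketch, assuming "binary field" means a bit column and using a hypothetical table name; in T-SQL, false is 0. The explicit transaction lets you sanity-check the row count before it becomes permanent on the live database:

BEGIN TRANSACTION

UPDATE dbo.Members            -- hypothetical table name
SET ReNew = 0                 -- 0 = false for a bit column
WHERE ReNew IS NULL

SELECT @@ROWCOUNT AS RowsChanged   -- sanity-check before committing

COMMIT TRANSACTION            -- or ROLLBACK TRANSACTION if the count looks wrong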


Lots Of Txt Files Has To Be Loaded (hints??)

Aug 18, 2006

I have a DTS package that does txt -> SQL Server.
I have 200 txt files with the exact same format.

I just want to know if I can write an SP, passing a parameter, that loads these txt files, because I don't want to create 200 packages or 200 sources to load 200 txt files.


say:
exec SP_loadTXT txt1

Or should I use BULK INSERT?

Any approaches are fine. Any suggestions are fine too.
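A hedged sketch of the single-SP route: BULK INSERT won't take a variable as its file name, so the statement is built as a string per file. The folder, staging table, and terminators are assumptions:

CREATE PROCEDURE dbo.SP_loadTXT
    @FileName varchar(255)
AS
BEGIN
    DECLARE @sql nvarchar(1000)
    SET @sql = N'BULK INSERT dbo.StagingTable FROM ''C:\imports\'   -- hypothetical table and folder
             + @FileName
             + N''' WITH (FIELDTERMINATOR = '','', ROWTERMINATOR = ''\n'')'
    EXEC (@sql)
END

-- usage: EXEC dbo.SP_loadTXT 'txt1.txt'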


Selecting From Table With Lots Of Inserts

Mar 19, 2008

Hi,

I am working on an application to analyse down time on a production line system. The system has about 40 rows inserted per minute. The inserts are coming from about 10 different stations.

I need to analyse the downtime between each insert from each station. The plan is to copy the data to another database on a different server so as not to affect the live system that is being updated by the production line.

However the initial requirement was to do this at night while the production line was down but now they want the data to be updated every 3 hours which means performing this huge query while the production line is bombarding the DB with inserts.

I am wondering what is the best way of doing this. Is there any way I can limit the amount of processor time this procedure will take?
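On the analysis itself, a hedged sketch assuming SQL Server 2005 for ROW_NUMBER and a hypothetical log table (StationID, InsertTime): pairing each row with the next one from the same station turns the downtime into a simple DATEDIFF:

WITH Ordered AS
(
    SELECT StationID, InsertTime,
           ROW_NUMBER() OVER (PARTITION BY StationID ORDER BY InsertTime) AS rn
    FROM dbo.ProductionLog          -- hypothetical table
)
SELECT cur.StationID,
       cur.InsertTime,
       DATEDIFF(second, cur.InsertTime, nxt.InsertTime) AS GapSeconds
FROM Ordered cur
JOIN Ordered nxt
  ON nxt.StationID = cur.StationID
 AND nxt.rn = cur.rn + 1
WHERE DATEDIFF(second, cur.InsertTime, nxt.InsertTime) > 60   -- only gaps over a minute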

Any advice appreciated,

Thanks,
Sean


How Would You Design This Query Better ( Does Not Contain Lots Of Code)

Apr 18, 2006

I have a number of business programs. Each program is started anew at the beginning of each fiscal year. Each program has a number of goals, and customers subscribe to the goals.

I have to pull all this info out of the database.

I have a cursor that gets the first program; inside this program I have a cursor that gets the first period, and inside that I have a cursor that gets the info on each goal.

program cursor
{
    period cursor
    {
        goal cursor
    }
}

This takes ages (hours and hours) to run. Is there any way I could have designed this using joins and simple selects to make it more efficient?
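Very likely, yes. A hedged sketch with invented table and key names: the three nested cursors collapse into one join, and whatever per-goal figures the inner cursor computed become aggregates. One pass over the data instead of row-by-row loops is usually the difference between hours and seconds:

SELECT pr.ProgramName,
       pe.PeriodName,
       g.GoalName,
       COUNT(cg.CustomerID) AS Subscribers
FROM dbo.Programs pr
JOIN dbo.Periods  pe ON pe.ProgramID = pr.ProgramID
JOIN dbo.Goals    g  ON g.PeriodID   = pe.PeriodID
LEFT JOIN dbo.CustomerGoals cg ON cg.GoalID = g.GoalID
GROUP BY pr.ProgramName, pe.PeriodName, g.GoalName
ORDER BY pr.ProgramName, pe.PeriodName, g.GoalName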


Updating A Field Inserts Lots Of Spaces

Aug 30, 2007

I have a database set up with the fields set as nchar(200).
Now, when I update a row using the code below, it inserts the text but then seems to fill the rest of the field with spaces, i.e. if the text is only 10 characters, MSSQL seems to put 190 spaces on the end of it to make 200.
Is there any way I can stop this?

// Create new command
comm = new SqlCommand(
    "UPDATE Pages SET PageBody=@PageBody, " +
    "PageMetaTitle=@PageMetaTitle, PageMetaDesc=@PageMetaDesc, PageMetaKeywords=@PageMetaKeywords " +
    "WHERE PageID=@PageID", conn);

// Add command parameters
comm.Parameters.Add("@PageID", System.Data.SqlDbType.Int);
comm.Parameters["@PageID"].Value = idTextBox.Text;
comm.Parameters.Add("@PageBody", System.Data.SqlDbType.NVarChar);
comm.Parameters["@PageBody"].Value = contentTextBox.Text;
comm.Parameters.Add("@PageMetaTitle", System.Data.SqlDbType.NVarChar);
comm.Parameters["@PageMetaTitle"].Value = titleTextBox.Text;
comm.Parameters.Add("@PageMetaDesc", System.Data.SqlDbType.NVarChar);
comm.Parameters["@PageMetaDesc"].Value = descriptionTextBox.Text;
comm.Parameters.Add("@PageMetaKeywords", System.Data.SqlDbType.NVarChar);
comm.Parameters["@PageMetaKeywords"].Value = keywordsTextBox.Text;

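The padding is nchar's defining behaviour: it is a fixed-width type, so SQL Server pads every value to the declared length. A hedged fix on the database side; switching the columns to nvarchar stops the padding, and an RTRIM pass cleans up what is already stored (shown for one column; repeat per affected column):

ALTER TABLE dbo.Pages ALTER COLUMN PageBody nvarchar(200) NULL

UPDATE dbo.Pages SET PageBody = RTRIM(PageBody)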

Import Lots Of Decimals, Problem On The 18th

Dec 20, 2006

Hello,

I'm trying to import data from an Excel sheet into a table. Not all of the digits are imported.

Excel: 0,000801054857569349 becomes
SQL: 0,000801054857569350 when it is imported.

The column in SQL Server is defined as DECIMAL(28,18), which I thought should hold all 18 decimals. Tried (28,19) also, but it only added another zero at the end.

I've tried importing via DTS and a manual import; same result both times.

Any suggestions?
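One hypothesis worth testing: Excel holds numbers as 8-byte floats (roughly 15-17 significant digits), and DTS moves the value through float too, so the 18th digit may be rounded before DECIMAL(28,18) ever sees it. A quick way to check whether the float round-trip alone reproduces the change:

-- direct cast keeps the literal
SELECT CAST(0.000801054857569349 AS decimal(28,18)) AS direct_cast

-- the same value routed through float, as an import might do
SELECT CAST(CAST(0.000801054857569349 AS float) AS decimal(28,18)) AS via_float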


Deleting Duplicate Records From Lots Of Tables

Aug 29, 2006

Hi All,

So.. I'm a complete newb to SQL stuff.

I managed to find the 'Deleting Duplicate Records' from SQLTeam.com (thanks, by the way!!).. I managed to modify it for one of my tables (one of 14).


-- Add a new column

Alter table dbo.tblMyDocsSize add NewPK int NULL
go

-- populate the new Primary Key
declare @intCounter int
set @intCounter = 0
update dbo.tblMyDocsSize
SET @intCounter = NewPK = @intCounter + 1

-- ID the records to delete and get one primary key value also
-- We'll delete all but this primary key
select strComputer, strATUUser, RecCount=count(*), PktoKeep = max(NewPK)
into #dupes
from dbo.tblMyDocsSize
group by strComputer, strATUUser
having count(*) > 1
order by count(*) desc, strComputer, strATUUser

-- delete dupes except one Primary key for each dup record
delete dbo.tblMyDocsSize
from dbo.tblMyDocsSize a join #dupes d
on d.strComputer = a.strComputer
and d.strATUUser = a.strATUUser
where a.NewPK not in (select PKtoKeep from #dupes)

-- remove the NewPK column
ALTER TABLE dbo.tblMyDocsSize DROP COLUMN NewPK
go

drop table #dupes


Now that I've got that figured out, I need to write the same thing to fix the other 13 tables (with different column info)- and I'll need to run this daily.

Basically I've put together some vbscript that gathers inventory data and drops it into an MSDE db (sorry - goin for 'free' stuff right now). Problem is it has to run daily so that I'm sure to capture computers that turned on at different times etc which ever-increases my database 'till I bounce off the 2GB limit of MSDE.

So the question is, what would be the best way to do this? Can I put the code into a stored procedure that I can execute each day?
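On the last question: yes. A sketch of the wrapper, with the table-specific logic elided since it's the script above; MSDE does ship with the SQL Server Agent service, so a daily job (or a Windows Scheduled Task running osql) can EXEC one procedure per table:

CREATE PROCEDURE dbo.usp_DedupeMyDocsSize
AS
BEGIN
    SET NOCOUNT ON
    -- body = the script above: add NewPK, number the rows,
    -- collect dupes into #dupes, delete, drop NewPK
END
GO

-- usage, once per table's procedure:
-- EXEC dbo.usp_DedupeMyDocsSize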


Thanks for your help....


Lots Of Columns To Check/convert - Best Practice?

Jun 27, 2007

I've got about 6 formats of flat files coming through data flows & heading into a relational db, then later to a data warehouse. In each file type, I've got about 60-70 columns to perform basically two levels of validations on - first is straight data type conversions, then 2nd is finer level stuff. Some of the data from each file type overlaps in other files, so for instance some lookup codes are maintained for both.



The data is pretty dirty, so I'm keeping everything as varchar coming into the staging area, just so I can get the data in the system, because the users insist on some form of the data making it into the system, no matter the dirtiness.



So then I'm running my two steps - first converting data types from varchar to bit/datetime/int, etc. as applicable. And then , I'll be running finer levels of validation, doing range checks, etc. My question is - with so many columns, what's the most efficient & best way of doing all these checks in the data flow, and recording errors out of each check? Do I put one column check after another with the success constraint, do the lookup/range check/other, and then record the error if an error is encountered, then move on to the next column? Or would it be better to multicast out the stream to 60 flows and do all the needed checks, then union all the good stuff back together at the end? Anything to help save some headache - b/c this data is dirty and there's a lot of it.


2-Tier Architecture - How To Manage Lots Of Users

Nov 6, 2007

We have a 2-tier architecture with thick client (.NET 2.0) applications directly accessing the SQL Server database. We are struggling to manage lots of users while maintaining security. Granting lots of users direct access to the database seems tough to manage. In fact, we would like to let supervisors without DBA privilege add and remove users of our applications. Using SQL Authentication (a single account to access the database) is the other alternative, but it is not considered a secure solution.

I would appreciate it if anyone could give me suggestions on how to handle this, without moving to a 3-tier architecture (a dedicated middle-tier DB access layer running under a custom user account).
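One middle ground, sketched assuming Windows authentication and SQL Server 2005 syntax: hang all application permissions on a database role, and grant the supervisors ALTER on that role so they can manage membership without DBA rights. Names are illustrative:

-- one-time setup by the DBA
CREATE ROLE app_users
GRANT SELECT, INSERT, UPDATE, DELETE ON dbo.Orders TO app_users   -- repeat per table

-- let a supervisor manage role membership without being a DBA
GRANT ALTER ON ROLE::app_users TO [DOMAIN\supervisor]

-- what the supervisor then runs per new user
CREATE USER [DOMAIN\jsmith] FOR LOGIN [DOMAIN\jsmith]
EXEC sp_addrolemember 'app_users', 'DOMAIN\jsmith'

Creating the database user itself still needs ALTER ANY USER, which could be granted to the supervisors the same way.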

Thanks in advance.


Lots Of Stats Based On One Database Table

Feb 9, 2007

Hi,


I'm new to Reporting Services and this is a very general question. I'm working on a large sales stats report with many results. I want to be able to compare many results for two dates. These results include average sales value per day, average sales per weekday, sales with payment received, etc.

So basically there is lots of analysis needed, mainly based on one database table (a fairly standard orders table).

What seemed the most logical thing to do is get all the relevant order rows for these two date ranges, A and B, append a period column to the results, and then do all the maths/aggregate functions in Reporting Services, thus only having to connect to the database once, and use a matrix with date period columns. So my query gives me results like:

Period  order_total  is_weekday  no_weekdays_in_period  ...
A       123          0           22                     ...
A       54           1           22                     ...
B       134          0           20                     ...

Does this make the most sense? Or should I do the maths (grouping and aggregate functions) in lots of different queries (in which case, is Reporting Services worthwhile using?)?


Any advice/suggestions appreciated.
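A minimal sketch of that query shape, with hypothetical column names on the orders table; each range is tagged with its period letter, and the aggregation is left to the report matrix. (The weekday test depends on the server's DATEFIRST setting, so treat it as illustrative.)

DECLARE @AStart datetime, @AEnd datetime, @BStart datetime, @BEnd datetime
SELECT @AStart = '20070101', @AEnd = '20070201',
       @BStart = '20070201', @BEnd = '20070301'      -- illustrative ranges

SELECT 'A' AS Period, order_total,
       CASE WHEN DATEPART(weekday, order_date) BETWEEN 2 AND 6 THEN 1 ELSE 0 END AS is_weekday
FROM dbo.Orders
WHERE order_date >= @AStart AND order_date < @AEnd
UNION ALL
SELECT 'B', order_total,
       CASE WHEN DATEPART(weekday, order_date) BETWEEN 2 AND 6 THEN 1 ELSE 0 END
FROM dbo.Orders
WHERE order_date >= @BStart AND order_date < @BEnd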


Derived Column - Lots Of Columns-automate?

Aug 7, 2007



I'm sending a lot of columns through my derived column transform, checking for empty strings from a flat file - I was wondering is there a way that I could "script out" all the transforms instead of enduring this click hell that I'm stuck in inside the derived column transformation editor. I've got probably 100+ columns to configure with the following sort of transform....

Replace Col1
TRIM(Col1) == "" ? NULL(DT_WSTR,2) : Col1


Basically - if the string is empty, then throw Null in the data stream. I'm about a third the way through but it would really be nice if there was a quicker way. Even with the most efficient copying & pasting & keyboard shortcuts, it's still painful.
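One labour-saver, offered as a sketch: since the expression has the same shape for every column, generate the expression text from the staging table's metadata and paste the results in. The table name is hypothetical, and note that syscolumns.length is in bytes, so halve it for nvarchar columns before using it as the DT_WSTR length:

SELECT 'TRIM(' + c.name + ') == "" ? NULL(DT_WSTR,'
       + CAST(c.length AS varchar(10)) + ') : ' + c.name AS DerivedExpression
FROM syscolumns c
WHERE c.id = OBJECT_ID('dbo.StagingTable')   -- hypothetical staging table
ORDER BY c.colid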


Optimising A Table With Lots Of Boolean Fields

Jul 17, 2006

I have an application that reads a monitoring device that produces 200 digital outputs every second, and I would like to store them in a table. This table would get quite big fairly quickly, as ultimately I would like to monitor over a hundred of these devices.

I would like to construct queries against each of the individual digital channels or combinations of them.

My first thought is to set up a table with 200 separate columns (plus others for date stamp, device ID etc.). However, I am concerned that a table with 200 boolean (1-bit) fields would be an enormous waste of space if each field takes maybe one to four bytes on the hard disk to store a single bit. On the other hand, this would have the advantage of making the SQL queries more natural.

The other alternative is to create a single 200-bit field and use lots of ANDing and ORing to isolate bits for my queries. This would make my SQL code less readable and may also cause more hassle in the future if the inputs changed, but it would make the file size smaller.

In essence I am asking (hoping) the following: if I create a table with 200 boolean fields, does SQL Server Express automatically optimise the storage to make it more compact? This would mean that the server can mess around at the bit level and leave my higher-level SQL code looking cleaner and more logical.
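On that last question: yes. SQL Server (Express included) packs bit columns, storing up to 8 of them per byte of row space, so 200 bit fields cost roughly 25 bytes per row rather than 200+. A trimmed sketch of the natural design:

CREATE TABLE dbo.DeviceSample
(
    DeviceID   int      NOT NULL,
    SampleTime datetime NOT NULL,
    Ch001 bit NOT NULL,
    Ch002 bit NOT NULL,
    -- ...Ch003 through Ch199 declared the same way...
    Ch200 bit NOT NULL,
    CONSTRAINT PK_DeviceSample PRIMARY KEY (DeviceID, SampleTime)
)

-- queries stay natural:
SELECT DeviceID, SampleTime
FROM dbo.DeviceSample
WHERE Ch001 = 1 AND Ch200 = 0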


Business Rules -> Using Lots Of UDFs & Views

Sep 7, 2007

I am in the process of building my first "large scale" database system (after 15+ years of developing Windows Apps and Web Apps) - so I am very VERY "Green" when it comes to Database development & SQL et al.

A little context setting: I am building a multi-tier Statistical Analysis & Reporting system where the "end product" will be Reports created in Reporting Services. There are a ton of business rules that I am implementing in a Business Logic Tier (hidden from the "end user" by a Data Access Tier) comprised of SQL in the form of UDFs (scalar) and Views.

The question: I have been reading that UDFs cause a performance hit compared to things like in-line functions. A lot of the Rules (implemented as Scalar UDFs) build on each other, so that the output of UDF #1 is used as input to UDF #2.

So far I am implementing the Business Logic as a hierarchy of Views (7 Views to be exact) with each view implementing multiple Rules; each Rule basically a Scalar UDF. Below is an example of what I am doing:

Example

View #1 -> Select A, B, C, funcX1(A) as ValueX1, funcY1(B, C) as ValueY1 FROM someView

Then
View #2 -> Select A, B, C, ValueX1, ValueY1, funcX2(ValueX1) as ValueX2, funcY2(ValueY1) as ValueY2 FROM View#1

Currently I have a hierarchy of 7 views that each use UDFs to implement the Business Rules, where the value calculated from a UDF in one View is used as input to UDF in a View further down the Hierarchy.

Is there a better way of implementing all of the Rules instead of using multiple Views with a bunch of UDFs?
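One option to weigh, sketched against the example above and assuming SQL Server 2005: rewrite the scalar UDFs as inline table-valued functions and compose them with CROSS APPLY. Inline TVFs are expanded into the outer query plan the way views are, which sidesteps the per-row scalar UDF cost; the rule body here is a placeholder:

CREATE FUNCTION dbo.itvf_X1 (@A int)
RETURNS TABLE
AS RETURN (SELECT @A * 2 AS ValueX1)        -- placeholder rule logic
GO

SELECT v.A, v.B, v.C, x1.ValueX1
FROM dbo.someView v
CROSS APPLY dbo.itvf_X1(v.A) AS x1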

The "end product" dataset is then exposed as a Stored Procedure to the reports in Reporting Services.

Any help would be GREATLY appreciated.

Thanks!
- marty


Lots Of Individual Insert Commands Or String Parsing In Sql?

Feb 16, 2007

Wondering what's the preferred method for this. I've got a scenario where a user is updating some content on a page and I need to update my word catalogs for my search feature. I have some code currently to filter out words that are too small, make sure there are no duplicates, and count how many occurrences there are of each. What I'm wondering is: does it make more sense to do a loop in my code to run all the insert commands to place the new words in the database, should I try sticking them together in one string and parse them when they get up there, or is there a better method someone can suggest?
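If the one-string route wins, a hedged sketch of the parsing side, assuming SQL Server 2005 for nvarchar(max): pass the words as one delimited string, split them in a table-valued function, and insert from it in a single statement (WordCatalog is a made-up target):

CREATE FUNCTION dbo.SplitList (@List nvarchar(max), @Delim nchar(1))
RETURNS @Items TABLE (Item nvarchar(100))
AS
BEGIN
    DECLARE @pos int
    SET @pos = CHARINDEX(@Delim, @List)
    WHILE @pos > 0
    BEGIN
        INSERT @Items VALUES (LEFT(@List, @pos - 1))
        SET @List = SUBSTRING(@List, @pos + 1, LEN(@List))
        SET @pos = CHARINDEX(@Delim, @List)
    END
    IF LEN(@List) > 0 INSERT @Items VALUES (@List)
    RETURN
END
GO

INSERT INTO dbo.WordCatalog (Word)
SELECT Item FROM dbo.SplitList(N'apple,banana,cherry', N',')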


K, I Got Dreamweaver To Hook Up To My Access Database(Yes Lots Of Trouble) BUT...

Jan 6, 2008

When I hit F12 to preview my page and forms etc., it goes to this error:

This error (HTTP 500 Internal Server Error) means that the website you are visiting had a server problem which prevented the webpage from displaying.
For more information about HTTP errors, see Help.

And when I try to hook up to a SQL database it won't let me; it still says the server doesn't exist. What's wrong with my SQL server and what can I do to resolve this problem?


EXPERT: Implement Dijkstra's Algorithm -> Need Lots Of Help Implementing!

Jan 9, 2008

This is such a complex question and I'm 99.9% sure it requires usage of Dijkstra's algorithm in order to determine the shortest path. :(

I have tried to build this myself (yes, I've viewed enough examples on the web, but since they don't exactly do what I want AND I'm rather new to this advanced SQL AND my boss would really like this asap, I feel forced to call upon the community).

Basically I need a query which analyzes the relationships between 2 persons and returns the shortest path(s)! I have provided the data that is required to perform any tests below. The examples I provide match the given data.

I know for sure that such a query has been written before since, for example, LinkedIn uses something similar... so if anyone has this off the shelf for me, great! If not, I would really, really appreciate it if someone could provide a completely worked out example. I'll even give special thanks to that person on our future website :)

So, many thanks ahead for whoever takes up this challenge! :)

CASE:
-----------------------------------------------------------------------------
I have tables with friend relationships and tables with userdata. Let's say I'm logged in as Peter (usercode 5). Now if I (as user Peter) view the profile of Andre (usercode 51), I want to determine the relationship that exists between me and Andre.

When the users have a direct relationship, e.g. between Peter (5) and John (6), I want returned:

col1  col2   col3  col4
5     Peter  6     John

When the users have an indirect relationship with EXACTLY 1 person in between, like between John (6) and Jack (48), so I can go from John to Jack in exactly 2 steps via multiple persons, I want the following rows returned (max 4):

col1  col2  col3  col4  col5  col6
6     John  11    Hans  48    Jack
6     John  15    Jane  48    Jack

When the users have an indirect relationship with MORE than 1 person in between, like between Peter (5) and Andre (51), I want returned:

col1  col2   col3  col4  col5  col6  col7  col8
5     Peter  11    Hans  48    Jack  51    Andre

In any case, when there are multiple paths from person A to person B, I only want the shortest paths returned, to a maximum of 4.

Since this query will be called many times by different users at the same time, concurrency issues also need to be taken into account (e.g. usage of temp tables).

Within the entire query, the maximum number of steps that should be checked is 6, so at most 6 persons in between 2 persons. So if a viewed user is more than 6 steps away from the viewing user, I want no results returned. E.g. when Peter (5) views the profile of Simon (7), no relationship exists through any other person, and an empty dataset should be returned.
-----------------------------------------------------------------------------
I have the following tables and data:

CREATE TABLE [dbo].[tblFriends](
    [UserCodeOwner] [int] NOT NULL,
    [UserCodeFriend] [int] NOT NULL,
    [createdate] [datetime] NOT NULL CONSTRAINT [DF_tblFriends_createdate] DEFAULT (getdate())
) ON [PRIMARY]

CREATE TABLE [dbo].[tblUserData](
    [UserID] [uniqueidentifier] NOT NULL DEFAULT (newid()),   -- default added so the sample rows below load
    [UserCode] [int] IDENTITY(1,1) NOT NULL,
    [UserName] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NOT NULL,
    [DisplayName] [nvarchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL
) ON [PRIMARY]

SET IDENTITY_INSERT tblUserData ON   -- needed because UserCode is an identity column
INSERT INTO tblUserData (UserCode,UserName,DisplayName) VALUES (5,'peter',':-D Peter ;-)')
INSERT INTO tblUserData (UserCode,UserName,DisplayName) VALUES (6,'john','J ;-)')
INSERT INTO tblUserData (UserCode,UserName,DisplayName) VALUES (7,'simon','Simon :-D')
INSERT INTO tblUserData (UserCode,UserName,DisplayName) VALUES (11,'hans','Hans :-)')
INSERT INTO tblUserData (UserCode,UserName,DisplayName) VALUES (15,'Jane','Jane3')
INSERT INTO tblUserData (UserCode,UserName,DisplayName) VALUES (28,'jean','jean')
INSERT INTO tblUserData (UserCode,UserName,DisplayName) VALUES (48,'Jack','Jack')
INSERT INTO tblUserData (UserCode,UserName,DisplayName) VALUES (51,'Andre','Andre')
SET IDENTITY_INSERT tblUserData OFF

INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (5,11)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (5,6)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (6,11)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (6,5)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (6,15)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (7,28)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (11,6)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (11,5)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (11,15)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (11,48)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (15,6)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (15,11)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (15,48)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (28,7)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (48,11)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (48,51)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (48,15)
INSERT INTO tblFriends (UserCodeOwner,UserCodeFriend) VALUES (51,48)
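A hedged starting point rather than a full Dijkstra implementation: every friendship hop has equal weight, so a breadth-first search suffices, and on SQL Server 2005+ it can be written as a recursive CTE. No temp tables are involved, which helps the concurrency requirement; the 4-row and 6-persons-in-between caps below mirror the spec, and joining the codes in PathStr back to tblUserData would produce the name columns shown above:

DECLARE @From int, @To int
SET @From = 5    -- Peter
SET @To   = 51   -- Andre

;WITH Paths (LastUser, PathStr, Steps) AS
(
    -- anchor: direct friends of the viewing user
    SELECT f.UserCodeFriend,
           CAST('/' + CAST(f.UserCodeOwner AS varchar(10)) + '/'
                    + CAST(f.UserCodeFriend AS varchar(10)) + '/' AS varchar(500)),
           1
    FROM tblFriends f
    WHERE f.UserCodeOwner = @From

    UNION ALL

    -- recursion: extend each path by one friend, refusing revisits
    SELECT f.UserCodeFriend,
           CAST(p.PathStr + CAST(f.UserCodeFriend AS varchar(10)) + '/' AS varchar(500)),
           p.Steps + 1
    FROM Paths p
    JOIN tblFriends f ON f.UserCodeOwner = p.LastUser
    WHERE p.Steps < 7                  -- 6 persons in between = 7 hops max
      AND p.LastUser <> @To
      AND CHARINDEX('/' + CAST(f.UserCodeFriend AS varchar(10)) + '/', p.PathStr) = 0
)
SELECT TOP 4 PathStr, Steps
FROM Paths
WHERE LastUser = @To
ORDER BY Steps
OPTION (MAXRECURSION 7)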


Best Practice For Add, Edit Records Into Database With Lots Of Fields ?

Feb 7, 2006

What's the best practice for adding/editing a record in a database with lots of fields? I am not talking about the mechanics of it, as there are a lot of trivial examples using ADO.NET, stored procs, etc.

Deleting is easy: you just pass in (a few) primary key/keys to uniquely identify the record.

But in the real world, when you have, say, a table with 100 fields! Do you code the INSERT sproc by hand, with 100 parameters... then call it with your ADO.NET code? Sounds like a lot of work to me...

What about updating! That's even worse; sometimes you may need to update only 3 or 4 fields, but using sprocs you would have to pass the whole 100 parameters in again and "update" the whole record (when in fact you are only changing 3 or 4 fields).

With updates I could write different sprocs targeting only the fields I wish to update, but that sounds like duplicating work, vs having one generic update proc.

Sometimes I just feel like bypassing sprocs and having inline SQL, as it would be less work... but I know it is untidy... and has more potential to be buggy.

So come on guys (and gals)... let's hear your thoughts on how you would handle the insert/update scenarios when you have lots of fields. Northwind examples are too trivial :-)

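For the update side, one generic-proc pattern worth weighing, sketched with a hypothetical narrow table: make every parameter optional and COALESCE each one with the current value, so callers pass only the fields they're changing and a single proc covers all combinations. The known trade-off: this form can't set a column back to NULL:

CREATE PROCEDURE dbo.UpdateCustomer
    @CustomerID int,
    @Name  nvarchar(50) = NULL,
    @Phone varchar(20)  = NULL,
    @City  nvarchar(50) = NULL
AS
BEGIN
    UPDATE dbo.Customers
    SET Name  = COALESCE(@Name,  Name),
        Phone = COALESCE(@Phone, Phone),
        City  = COALESCE(@City,  City)
    WHERE CustomerID = @CustomerID
END

-- update just one field:
-- EXEC dbo.UpdateCustomer @CustomerID = 7, @Phone = '555-0100'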

Making Changes To A Table With Lots Of Data. Timeout Error?!

Oct 11, 2006

Hello,

I have a table that is fairly large, and I need to make a change to one of the columns in the table. Namely, I need to change the datatype and rename that column. When I try to save the updated table, I keep getting a timeout error that says:

'eligibility (dbo)' table
- Unable to create index 'PK_eligibility'.
Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding.

Any ideas on how to make the table change more efficient or change the timeout period? I need to keep the existing data in the table. I am using SQL Server Management Studio (2005) connected to a SQL Server 2000 database.
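A hedged alternative to the designer, which rebuilds the whole table behind the scenes and trips the timeout: on SQL Server 2000 the same change can often be done from a query window, where ALTER COLUMN plus sp_rename touch only the column. Names and types here are placeholders:

ALTER TABLE dbo.eligibility ALTER COLUMN member_code varchar(20) NULL   -- hypothetical column and type

EXEC sp_rename 'dbo.eligibility.member_code', 'member_id', 'COLUMN'

If the column belongs to the primary key, the ALTER will be blocked until the PK_eligibility constraint is dropped and recreated, which is likely the index the error message is about.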

Thanks!


How Can I Avoid Lots Of Spaces Being Added To The End When Saving Contents Of A Textbox

Sep 18, 2006

Hi

I am using FormView, SQL 2005, VB 2005.

When I save the contents of the Title and Area entry fields, I have lots of spaces added to the end. Title and Area are multiline textboxes and in the database: varchar(100). I thought I had solved the issue using:

TextBox Title = FormView1.FindControl("TitleTextBox") as TextBox;
TextBox Area = FormView1.FindControl("AreaTextBox") as TextBox;

TitleLenTrim = Title.Text.Trim().ToString();
if (TitleLenTrim.Length > 100)
{
    TitleLenTrim = TitleLenTrim.Trim().Substring(0, 99);
}

string AreaLenTrim;
AreaLenTrim = Area.Text.Trim().ToString();
if (AreaLenTrim.Length > 100)
{
    AreaLenTrim = AreaLenTrim.Trim().ToString();
}

string insertSQL;
insertSQL = "INSERT INTO Issue(";
insertSQL += "ProjectID, TypeofEntryID, PriorityID, Title, Area)";
insertSQL += "VALUES ( '";
insertSQL += ProjectID.Text.ToString() + "', '";
insertSQL += EntryTypeID.Text.ToString() + "', '";
insertSQL += PriorityID.Text.ToString() + "', '";
insertSQL += TitleLenTrim.Trim().ToString() + "', '";
insertSQL += AreaLenTrim.Trim().ToString() + "', '";

Is there any other way I could remove spaces?

Thanks in advance.
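A hedged guess at the cause plus a cleanup sketch: trailing-space padding is what char/nchar columns do, so it's worth confirming the columns really are varchar in the database. Either way, RTRIM strips what's already stored (Issue, Title and Area are the names from the post):

UPDATE Issue SET Title = RTRIM(Title), Area = RTRIM(Area)

-- and only if a column turns out to be fixed-width:
-- ALTER TABLE Issue ALTER COLUMN Title varchar(100) NULL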


Access To SQL Server 2000 Pauses With Lots Of Inserts, Updates, ...

Nov 21, 2007



Hi,
I have the following problem: Within a VBScript, I use a component (written in C++, I think, with use of ADO) for sending "Insert" and "Update" statements to a SQL Server 2000 for inserting and updating data. If I insert 100-120 records in a loop, all works fine. If I insert 1000 records, approximately 150 records will be inserted very quickly, then the program pauses for approx. 8-15 minutes, then it proceeds with the next 150 records, pauses for 8-15 minutes, and so on.

If I use SQL Server 2005 for the database, all works fine. The same happens with another customer and another program written in Visual Basic 6.0 with ADO: the access to SQL 2000 pauses, and with SQL 2005 all works fine. It seems to me that this is a problem with some buffer or timeout setting. Does anyone have an idea which screw I can turn?

Thanks
Hans


Analysis :: Large Tabular Cube With Lots Of Measures - Keyboard Stuck?

Apr 24, 2015

With large tabular cubes (SSAS 2012 SP2) with lots of calculations, the keyboard seems to get stuck: one cannot type or make any modifications to any measures, even after closing a project.

The only workaround is to completely restart the machine.  Using Visual Studio 2010 Shell.

Are there any possibly corruptions or calc limitations that one should be aware of?

Notably, this never happens with any small cubes, just with the largest cube that we have, which has probably tens of calcs.







