Relational DB/Analysis Services Configuration For Multi-proc Environment

Oct 19, 2006

Hello--

Is it possible to install/configure SQL-Server 2005 on a multi-processor machine so that the relational DB utilizes a given subset of processors while Analysis Services utilizes another subset?

Thanks,

- Paul
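For the relational engine side this is normally done with processor affinity; a minimal sketch, assuming a default instance and that the CPUs left out of the mask are the ones intended for Analysis Services (the SSAS side would have to be constrained separately, for example via Windows-level process affinity, which is worth verifying for your build):

-- Pin the relational engine to processors 0-3 (mask 15 = binary 00001111).
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'affinity mask', 15;
RECONFIGURE;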

View 4 Replies



Parameter Use For Both Relational And Analysis Services?

Nov 9, 2007



Does anyone know if it is possible to have one parameter that is used for both a relational (SQL) and an Analysis Services (MDX) query within one report? I need to pull up-to-date inventory counts for a product, which are not in my Analysis Services cube, and I also have related product information in Analysis Services that I need in the same report. I can't seem to find any information about how to do this or whether it is even possible...

Frank

View 3 Replies View Related

Integration Services :: The Configuration Environment Variable Was Not Found

Jul 28, 2008

I have a Master Package that executes a number of child packages.
 
In my SSIS Package Configuration:
 
1. I have an SSIS configuration table that holds the connection string.

2. An XML configuration file, whose location is stored in an environment variable.

3. And finally, an environment variable with the ProjectFolderAbsolutePath value, i.e. the full path of the project folder.

The project functions normally, but every time I open it I get the following warning.

" Warning loading MasterPackage.dtsx: The configuration environment variable was not found.  The environment variable was: "EnviorVariable". This occurs when a package specifies an environment variable for a configuration setting but it cannot be found. Check the configurations collection in the package and verify that the specified environment variable is available and valid."

View 5 Replies View Related

Reporting Services :: Where To Install SSRS SharePoint Mode In A Multi-server Environment

Aug 17, 2015

I am setting up a SharePoint and SQL Server integration environment. I am considering the following topology: PowerPivot for SharePoint 2013 and Reporting Services in SharePoint mode, two-server deployment.

[URL]

I am looking to follow the topology example by the letter, which involves installing PowerPivot for SharePoint (aka SSAS in SharePoint mode) in the same server as my SQL, and installing SSRS in SharePoint (SP) integrated mode in the same server as SharePoint.

I understand, however, that if I wanted to install SSRS in SP mode in the same server as SQL, I could but only if the server contains the SP Object Model.

My first question is: what would having the SP object model on the SQL Server involve?

Would installing only the SP binaries be enough, or do I need to do a minimal install of SP on the SQL Server, enough for it to join the SP farm? And most importantly, what would the licensing implications for SP be if I proceed down this route and have SSRS in SharePoint mode installed on the same server as the SQL?

View 4 Replies View Related

SQL 2012 :: Disaster Recovery Options For Multi-Database Multi-Instance Environment

Sep 23, 2014

Disaster Recovery Options based on the following criteria.

--Currently running SQL 2012 standard edition
--We have roughly 18,000 databases (same schema across databases) across approximately 64 instances; the majority of databases are less than 2 GB
--Recovery needs to happen within 1 hour (not sure that this is realistic)
--We are building a new data center and building DR from the ground up.

What I have looked into is:

1. Transactional replication: too much data; not viable.
2. AlwaysOn Availability Groups (needs Enterprise): again too many databases, and we would have to upgrade all instances.
3. Log shipping is a viable option and the only one I can come up with that would work right now. It might be a management nightmare, but with this many databases probably every option will be a nightmare (a scripting sketch follows below).
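As a rough illustration of how log shipping at this scale can be scripted rather than configured database-by-database, here is a sketch that generates log backups for every user database; the share path and naming convention are placeholders:

DECLARE @sql NVARCHAR(MAX);
SET @sql = N'';

SELECT @sql = @sql
    + N'BACKUP LOG ' + QUOTENAME(name)
    + N' TO DISK = N''\\DRShare\LogBackups\' + name + N'.trn'';'
    + CHAR(13) + CHAR(10)
FROM sys.databases
WHERE database_id > 4                      -- skip system databases
  AND recovery_model_desc <> 'SIMPLE'
  AND state_desc = 'ONLINE';

EXEC sys.sp_executesql @sql;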

View 1 Replies View Related

Update An Analysis Services Cube From Stored Proc?

Jul 1, 2004

I'd like to be able to update an Analysis Services cube through a stored proc.

Currently I can:
- Make a DTS package that updates the cube
- run xp_cmdshell which runs dtsrun which runs the DTS package.

That is messy, easily broken, and hard to get good error info when an error occurs. Is there a better route?
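Until a cleaner route turns up, one way to make the current approach less opaque is to capture dtsrun's console output from xp_cmdshell so failures are at least visible from T-SQL; a sketch (server and package names are placeholders):

CREATE TABLE #dts_output (line NVARCHAR(4000) NULL);

INSERT INTO #dts_output (line)
EXEC master.dbo.xp_cmdshell 'dtsrun /S MyServer /E /N "UpdateCubePackage"';

-- Non-NULL lines hold dtsrun's messages, including any error text.
SELECT line FROM #dts_output WHERE line IS NOT NULL;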

View 1 Replies View Related

Multi Relational Table Design Question

Dec 10, 2006

Hi! I'm working on a web application and have serious doubts about how to optimize my table structure. To explain, my table structure today (simplified):

tbl_customers: cust_id, name, ...
tbl_contacts: con_id, name, ...
tbl_groups: grp_id, name, ...

My subtables look like this (alternative 1):

tbl_sub_phone: phone_id, parent_type, parent_id, phone_area, phone_nr, ...
tbl_sub_email: mail_id, parent_type, parent_id, email, ...

As seen above, every contact, group and customer can be assigned an unlimited number of phone numbers or email addresses. For example, when entering a new email for a customer, the following is inserted into tbl_sub_email: parent_type = 'cst', parent_id = '2' (the cust_id from tbl_customers), email = 'gwerg@fe.com'. The problem is that I am uncertain whether this is a very inefficient way of handling it. I see two alternatives.

Alternative 2: I create separate subtables for each main table, so that for example tbl_customers gets its email addresses and phone numbers in tbl_customers_phone and tbl_customers_email. What I am uncertain of here is whether this would make things a lot more troublesome when searching for, say, a specific phone number.

Alternative 3 (simplified):

Main tables:
tbl_customers: cust_id, name, ...
tbl_contacts: con_id, name, ...
tbl_groups: grp_id, name, ...

Junction tables connecting objects to subobjects:
tbl_customers_phone: id, cust_id, phone_id
tbl_contacts_phone: id, con_id, phone_id
tbl_customers_mail: id, cust_id, mail_id

Subtables:
tbl_sub_phone: phone_id, phone_area, phone_nr, ...
tbl_sub_email: mail_id, email, ...

Ranking these three models, which would be the most efficient and which the least efficient performance-wise? What I want to avoid is performance problems when listing the objects. My indexing skills are a bit limited, although I am doing a lot of reading and testing on the subject, so I am asking for advice in order to minimize the need to rebuild the table structure once the application is already in use.

I also have another, more general question. I have a lot of SELECT queries where I need to fetch data from several different tables. For example, I get an application from the tbl_applications table, and that table contains the columns cat1, cat2 and cat3 (which are categories holding the primary key integers of the tbl_sub_categorys table). With three joins I retrieve the three category names, returning one result row with all the info I need. I have also been getting some strange results from the query analyzer (it reported that using a clustered index for the primary key resulted in a slower query, i.e. a higher cost), so a related question: can it generally be said that a single query (join or subquery) is faster than getting the data in separate selects? In the example above I have the option of using joins (one query), or doing two queries and matching up the categories in the aspx pages, or doing four queries: one for the application followed by one per category. Any input regarding this?

As I said earlier, I am looking for the most efficient way of doing the things above and would greatly appreciate any input!
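If alternative 3 is the direction, a rough sketch of the phone side in T-SQL (data types are assumed, and tbl_customers is assumed to already exist with cust_id as its primary key):

CREATE TABLE tbl_sub_phone
(
    phone_id   INT IDENTITY(1,1) PRIMARY KEY,
    phone_area VARCHAR(10) NOT NULL,
    phone_nr   VARCHAR(20) NOT NULL
);

CREATE TABLE tbl_customers_phone
(
    id       INT IDENTITY(1,1) PRIMARY KEY,
    cust_id  INT NOT NULL REFERENCES tbl_customers (cust_id),
    phone_id INT NOT NULL REFERENCES tbl_sub_phone (phone_id)
);

-- An index on the number itself keeps "find by phone number" searches cheap.
CREATE INDEX IX_tbl_sub_phone_nr ON tbl_sub_phone (phone_nr);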

View 3 Replies View Related

How To Flatten Relational Data For Analysis

Mar 22, 2007

Hello everyone, this is my first post here so hopefully I am not asking a common question.

I am trying to create a flat dataset in SQL 2005. Basically I run a query and I get multiple rows for the same primary key. The query I am running is quite large and has several different tables connected to it; here is a small sample of what it looks like:



Typeid (Primary Key)   Individual   Address
1                      Sam          912 Ave. J
1                      Sam          913 Ave. Q
1                      Sam          914 Ave. R
2                      Mike         1000 Ave. O
3                      Jill         1001 Ave. O

I want it to look more like this:

TypeID   Individual   Address_1     Address_2    Address_3
1        Sam          912 Ave. J    913 Ave. Q   914 Ave. R
2        Mike         1000 Ave. O
3        Jill         1001 Ave. O



As I said before, this query is pretty big and has several fields like Address where multiple rows are returned for a single primary key.

If it is not possible to do this in SQL 2005, is there a program that may be able to? Right now we are using SPSS as sort of a band-aid: we run the query in small portions like the one in the sample and then restructure the data in sections, but this takes several hours.

Anyways, thanks for any help that you may be able to provide.



-John
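For what it's worth, SQL Server 2005 can do this reshaping itself with ROW_NUMBER and PIVOT; a minimal sketch against a placeholder source (column names taken from the sample, capped at three addresses):

SELECT Typeid, Individual,
       [1] AS Address_1, [2] AS Address_2, [3] AS Address_3
FROM (
    SELECT Typeid, Individual, Address,
           ROW_NUMBER() OVER (PARTITION BY Typeid ORDER BY Address) AS rn
    FROM dbo.SourceData          -- stand-in for the real multi-table query
) AS src
PIVOT (MAX(Address) FOR rn IN ([1], [2], [3])) AS p;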

View 5 Replies View Related

Multi-server Environment

Oct 1, 2007

We are currently managing about 75 SQL Server instances. Each instance contains jobs (backups, index work, admin, etc) which report back to a central instance for monitoring. This is working well, but each time I need to change one of the jobs, I am having to log into each instance to do so.

I have recently played with Multi-server Environment, using a master and target, to see if this might help. There is relatively little written about Multi-server in books-online, and I am left with a question.

All jobs appear to be exact duplicates. But I don't want exact duplicates. For example, on the various target servers, I would like for the backup jobs to run at different times, and write to different directories. How are the rest of you working around this situation, if you are using Multi-server Environment?
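One hedged workaround is to keep the MSX job step identical on every target server and push the per-server differences, such as backup directories, into a small local settings table that the step reads at run time; a sketch with made-up names:

-- On each target server, a local table holds that server's own values.
CREATE TABLE dbo.JobSettings (
    SettingName  SYSNAME       NOT NULL PRIMARY KEY,
    SettingValue NVARCHAR(400) NOT NULL
);

INSERT INTO dbo.JobSettings (SettingName, SettingValue)
VALUES ('BackupDirectory', N'E:\SQLBackups\');

-- The identical MSX job step then builds its command from the local value:
DECLARE @dir NVARCHAR(400), @cmd NVARCHAR(1000);
SELECT @dir = SettingValue FROM dbo.JobSettings WHERE SettingName = 'BackupDirectory';
SET @cmd = N'BACKUP DATABASE [MyDb] TO DISK = N''' + @dir + N'MyDb.bak''';
EXEC sys.sp_executesql @cmd;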

View 1 Replies View Related

Replication Environment Configuration Help!!

Oct 24, 2007



I have a production server log shipping to a secondary server every 30 minutes (both SQL 2000); the second server is used both as a warm standby and for user reporting. Issue: the log shipping restore locks the databases, so reporting can't be done until the load is finished. The load to the second set of databases has taken up to 15 minutes to finish, leaving users only 15 minutes to run reports, which is not acceptable. The server also needs to be used for DR.

I am looking for another solution. I can't use transactional replication, as not all of the tables in the databases have a primary key defined. So I am looking for a real-time or near real-time reporting server that is more available for running reports, plus a warm standby server for disaster recovery. I am trying to figure out what SQL Server 2000 offers (or even 2005 or 2008?), and I am also looking at some third-party software, but I am not sure what is best for a reasonable price.

Any help is appreciated.

Thanks....JB
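As a side note, since missing primary keys are the blocker mentioned above, a quick way to list the offending tables (a sketch that should run on SQL 2000) is:

SELECT t.TABLE_SCHEMA, t.TABLE_NAME
FROM INFORMATION_SCHEMA.TABLES t
LEFT JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS c
       ON  c.TABLE_SCHEMA    = t.TABLE_SCHEMA
       AND c.TABLE_NAME      = t.TABLE_NAME
       AND c.CONSTRAINT_TYPE = 'PRIMARY KEY'
WHERE t.TABLE_TYPE = 'BASE TABLE'
  AND c.CONSTRAINT_NAME IS NULL
ORDER BY t.TABLE_SCHEMA, t.TABLE_NAME;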

View 3 Replies View Related

SQL Licenses In Multi-server ETL Environment

Jan 6, 2004

My company uses a quad processor server connected to a SAN to load and summarize detail sales information from 2000 stores on a nightly basis. We poll and load around 5,000,000 rows of data each night. This information is summarized up to various levels, then replicated to one or more secondary datamart servers for end user access via web reporting, BI tools like ProClarity/Analysis Services, etc.

The initial data polling server is only touched by the development staff supporting the process (1-5 programmers) and is licensed for SQL server Enterprise using a CAL model. Each datamart server is licensed with MS SQL server processor licenses.

The question: We were told that the quad processor polling machine, which has no end user access allowed, must be licensed with processor licenses since it touches the data ultimately consumed by end users. This makes no sense to me.

The Microsoft white papers discussing multi_tier environments don't seem to address this type of issue. They focus on applications that ultimately pass thru a data request to the SQL server machine. In this situation, user requests are handled by the datamart servers, which are licensed with processor SQL licenses.

Can anyone clarify?

Thanks, Mike

View 1 Replies View Related

SqlTransactions In A Multi-Threaded Environment

Nov 27, 2006

I am currently working on a project in which I am using SqlTransaction objects with their IsolationLevel set to Serializable. I am wondering what would be the effect if multiple threads were to call this code at the same time? Would the second transaction (created by the second thread on the same connection), for example, be queued until the first one (created by the first thread on the same connection) finishes? Or would an exception be thrown for the second transaction? In case an exception is thrown, would using the synchronization (locks) around critical sections solve the issue?
I guess it is easier to understand the scope of transactions in terms of multiple users, but it is confusing (for me) when I start thinking in terms of multiple threads. The underlying data is held in a single table and the scope of the transaction spans multiple SELECTs and an UPDATE or an INSERT.

I have been trying to find a feasible answer, and haven't found anything credible yet. Help!
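On the T-SQL side, the SELECT-then-UPDATE pattern inside such a transaction is often written with an UPDLOCK hint so that two concurrent callers serialize on the read instead of both reading and then colliding on the write; a sketch with invented table and column names:

DECLARE @ItemId INT, @qty INT;
SET @ItemId = 1;   -- placeholder

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

    -- UPDLOCK makes a second caller wait here, rather than after both
    -- have read the same value.
    SELECT @qty = Quantity
    FROM dbo.Stock WITH (UPDLOCK)
    WHERE ItemId = @ItemId;

    IF @qty > 0
        UPDATE dbo.Stock SET Quantity = @qty - 1 WHERE ItemId = @ItemId;

COMMIT TRANSACTION;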

View 10 Replies View Related

Package Configuration With Environment Variable

Aug 9, 2007

Hi,

I have issues with the Connection Manager in the SSIS package when using package configurations through an environment variable.


Here goes..

SSIS package1:

Connections used: devcon1 and devcon2 for the dev environment, and testcon1 and testcon2 for the test environment. Right now the package uses all four; ideally only the devcons or only the testcons should be present at a time.

Environment variable:

Pckg_config = <location of config file which has testcon1 and testcon2>


I need to use only devcon1 and devcon2 in the dev environment, and only testcon1 and testcon2 in test. Hence I set the devcon values in devEnv.dtsconfig and the testcon values in testEnv.dtsconfig.

Now I remove both testcons from the SSIS package. If I then try to run in the test environment, and the testcons referenced in testEnv.dtsconfig do not exist as connections in the package, SSIS raises an error asking for those connections.

SSIS maintains the connections in the Connection Manager per package, although internally it is a pool of connections.

Ideally I should be able to switch the connections at run time. My package works now if it is deployed with all the devcons and testcons together; however, ideally it should contain either the devcons or the testcons. I am trying to explain this as clearly as I can.


Am I doing something wrong? Any help in solving this puzzle will be greatly appreciated.


Thanks, Tushar

View 4 Replies View Related

Package Configuration Using Environment Variable

Jun 28, 2007

I am doing SSIS package configuration using environment variable.



I have created a system environment variable that points to the dtsConfig file.



I opened the package, chose the configuration type Environment Variable, and specified the environment variable.



When I click the Next button, it doesn't allow me to choose the configurable property.



Please suggest

View 1 Replies View Related

Package Configuration + Environment Variable

Jul 17, 2006

We are using package configuration with environment variables. The problem we are having is that if we try to open the project from another PC (PC 2), it gives the error:

Error 1 Error loading F0005.dtsx: Failed to decrypt protected XML node "DTS:Password" with error 0x8009000B "Key not valid for use in specified state.". You may not be authorized to access this information. This error occurs when there is a cryptographic error. Verify that the correct key is available. z:\visual studio 2005\projects\sales data mart\extract to staging area\F0005.dtsx 1 1

We are using an environment variable named DWConfig and have configured the correct path on each PC. If we edit the package configuration on PC 2 and go through the same procedure without any amendments, the error is removed for that PC; if we then open the project on PC 1 again, it gives the same error, and going through the package configuration wizard again removes it.

Can anyone tell me if there is a solution to this problem?

Note: our project is saved on a server (neither PC 1 nor PC 2).

regards,

Anas.

View 4 Replies View Related

Configuration File For Porting From One Environment To Another

Oct 2, 2007



Hi,

I'm using connection managers for all the connections I have in my packages in one project. However, if I change from one environment to another, I have to go to each connection manager in each package just to set the connection.

Is there a faster way to change them, like a configuration file lookup or something?

cherriesh
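One option, sketched here: point every package at a shared SQL Server configuration table via Package Configurations, so that moving between environments becomes a single UPDATE instead of editing each connection manager (the table and values below assume the wizard defaults and are placeholders):

UPDATE [dbo].[SSIS Configurations]
SET    ConfiguredValue = N'Data Source=TestServer;Initial Catalog=Staging;Integrated Security=SSPI;'
WHERE  ConfigurationFilter = 'StagingConnection'
  AND  PackagePath = '\Package.Connections[Staging].Properties[ConnectionString]';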

View 4 Replies View Related

Environment Variable Package Configuration

Dec 17, 2007



Okay - this one is driving me batty.


I have a package that uses an environment variable package configuration of value X for a connection string. I close BIDS. I change the value of the environment variable to value Y. I open BIDS and the package, and the value of my connection string is Y. I save my package with the new configuration. If I look at the dtsx file, I see the connection string with value Y. All as expected.


I move the package to my server (I've tried Import Package from SSMS, using the deployment manifest, and Save Copy As). On the server, the environment variable is set to value Y. However, if I run the package or export it, the value of my connection string is X!


Does anyone have any suggestions of things to try or some reason that this is not working?


Thanks,
Jessica

View 6 Replies View Related

Anyone Know Table Limits In Multi-schema Environment?

Mar 27, 2007

My product is growing rapidly, and currently I have a database for each client with an identical schema. Of course, maintenance is pretty hard. I was thinking of using a shared database but having a schema for each client (SQL 2005). I have almost 100 tables in the schema, which means that with just 10 clients the database would pass 1,000 tables. My gut is telling me this isn't going to fly! Any ideas? And if it does work, any thoughts on updating the internal schemas for each client? Thanks -c
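For what it's worth, a minimal sketch of the schema-per-client shape being considered (the client name is a placeholder); pushing a schema change to every client would then mean running the same ALTER once per schema:

CREATE SCHEMA client001 AUTHORIZATION dbo;
GO
CREATE TABLE client001.Orders
(
    OrderId INT IDENTITY(1,1) PRIMARY KEY,
    Placed  DATETIME NOT NULL DEFAULT GETDATE()
);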

View 2 Replies View Related

Temporary Tables In Multi User Environment

Feb 6, 2004

Hi,

I am developing a reporting application (an Access project) that will be used in a multi-user environment.

Here is what I have:

1 SQLServer database for many users

Each report is based on a stored procedure that creates a table filtered for specific dates; predefined views then use the newly generated table to show results to the client. However, if more than one person runs reports with different dates at the same time, the results will not be accurate, because everyone looks at the same table and the results will only match for whoever called the stored procedure last.

What can you recommend for reporting in a multi-user environment?


Many thanks
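Two common shapes for this, sketched below with made-up object names: return the report's rows from a session-scoped temporary table (each connection gets its own copy), or key a shared work table by user so simultaneous runs don't collide.

-- Option A: #temp tables are private to the connection that creates them.
CREATE PROCEDURE dbo.rptSalesForRange
    @FromDate DATETIME,
    @ToDate   DATETIME
AS
BEGIN
    SELECT *
    INTO #Filtered
    FROM dbo.Sales
    WHERE SaleDate BETWEEN @FromDate AND @ToDate;

    -- Apply whatever shaping the predefined views did, then return the rows.
    SELECT * FROM #Filtered;
END

-- Option B: keep the permanent work table, add a UserName column, and have
-- the procedure DELETE/INSERT only WHERE UserName = SUSER_SNAME(), with the
-- views filtering on the same predicate.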

View 2 Replies View Related

How Use Stored Procedures In Multi-user Environment?

Apr 19, 2007

Hi!



In my database I have all business logic in stored procedures. For example, there are procedures ReadBike and UpdateBike; Bike is a business object stored in 4 tables.



About 100 employees work on my system, and we have one problem with this. The story is:

1. User A reads data about Bike1

2. User B reads data about Bike1

3. User A updates data about Bike1 (user B now has stale data)

4. User B updates data about Bike1

So user B doesn't know about the changes made by user A. How do I solve that problem?



It's probably solvable in ADO, but I want to keep the business logic in the procedures.



Regards,

Walter
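A common way to solve this while keeping the logic in the procedures is optimistic concurrency: ReadBike also returns a version value, and UpdateBike refuses the write if that value has changed in the meantime. A sketch with assumed table and column names (on SQL 2005 the version column type is TIMESTAMP, later called ROWVERSION):

-- Assumes the Bike table has a column such as: RowVer TIMESTAMP NOT NULL
CREATE PROCEDURE dbo.UpdateBike
    @BikeID INT,
    @Name   NVARCHAR(100),
    @RowVer BINARY(8)      -- the value ReadBike returned to this user
AS
BEGIN
    UPDATE dbo.Bike
    SET    Name = @Name
    WHERE  BikeID = @BikeID
      AND  RowVer = @RowVer;   -- only succeeds if nobody changed the row since it was read

    IF @@ROWCOUNT = 0
        RAISERROR('Bike was changed by another user; reload and try again.', 16, 1);
END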

View 1 Replies View Related

Package Configuration Type - Environment Variable

Feb 19, 2008

Hello All -



Have you ever seen the error message below?



Description: The package path referenced an object that cannot be found: "Package.Variables[User::<variable_name>].Properties[Value]". This occurs when an attempt is made to resolve a package path to an object that cannot be found. End Warning Could not load package "<package_name>" because of error 0xC0010014.



Basically, I create a package variable under the User namespace, and this variable tells which server the SSIS package is running against. We first create a system environment variable locally, and the SQL Server has an environment variable with exactly the same name, so that the server name is resolved through the package variable/package configuration when the package executes from a SQL Server job.



This way we do not hard-code the server name... We have always succeeded in doing this with both DTS and SSIS packages, but now my package is running into this issue...



Since I did not change ANYTHING in the package, I am guessing this is not programming related and that something was changed on the server. However, the DBA could not help here, and I have no clue what this error means.



Any help would be appreciated.



Thanks, Gabriel.

View 14 Replies View Related

Reporting Service Configuration In Distributed Environment

Feb 27, 2007

Hi,

Please let me know if the following server configuration is possible?

SQL Server Database Services on machine #1

Analysis Services and Reporting Services 2005 on machine #2 (without SQL Server Database Services)


Post-installation, I need to set up the Reporting Services configuration and, within Database Setup, point to the database installed on machine #1.

I am facing a problem pointing to the remote database.

Awaiting a reply.

Thanks!



View 3 Replies View Related

Walk-through Of A Distributed SSRS Environment Installation/configuration?

Apr 30, 2008

I'm using Virtual PC on my WinXP PC, with two W2K3 environments runnings as Virtual PC guests. My goal is to use these virtual servers to emulate a distributed SSRS environment--prior to setting up a similar environment on physical servers.

One of the W2K3 guests is my database server; the other W2K3 guest is my report server. Just like with a physical SSRS environment, there are many choices that need to be made during the installation and configuration--any one of which, it seems, can prevent the environment from working properly.

Currently, I'm struggling with getting my virtual distributed SSRS environment to work. I could go into the details, but I figured I'd ask a basic question first: Is anyone aware of a book, magazine article, blog, etc. that goes through a distributed SSRS installation--listing the requirements, discussing the choices, considering the implications, etc.?

Off the top of my head, such a walk-through would ideally cover networking configuration issues (Virtual PC or otherwise), Active Directory considerations, Windows Firewall settings between the servers, SSRS-related service account selections, authentication between the servers, etc.

If no one has produced such a walk-through (!), I'd be happy to write one so that others don't have to deal with the same struggles I'm currently dealing with....but I'd need your help.

View 8 Replies View Related

SSIS Dynamic Configuration - Environment Variable Problem

Aug 24, 2007

Dear all,

I have a problem with SSIS reading an environment variable after deploying the packages to a server. I explain.

I have an Parent Packages ETL_MAIN_PACKAGE.dtsx that reads the child packages from a record set and loops on it to execute them with the Execute Package Task task. The first child package to be executed is called DIM_PERIODIC.dtsx.

On my local machine, the Parent Package is configured to read its database connections from an XML file SSIS_configfile.config located on my C: drive. The path (C:\SSIS_configfile.config) to this file is stored in the environment variable BI_ETL.

When I run the Parent Package inside SSIS on my local machine, the connections are read and the package executes perfectly. Now I want to deploy the packages to our server.

I copied the XML configuration file to the server's C: drive, created the same environment variable BI_ETL, set its value to C:\SSIS_configfile.config, and rebooted the machine (just in case).

The execution of the Parent Package is managed by a stored procedure; I use the xp_cmdshell command. The generated command line is:


cmd.exe /c dtexec /file "C:\ETL_Deployment\ETL_MAIN_PACKAGE.dtsx" /CHECKPOINTING OFF /MAXCONCURRENT " -1 " /SET Package.Variables["P_PACKAGE_PATH"].Value;"C:\ETL_Deployment" /SET Package.Variables["P_LOOKUP_PATH"].Value;"C:\ETL_Deployment\ETL_LOGS" /SET Package.Variables["P_SCHOOL_CODE"].Value;"007"

This command generates an error telling that the Environment variable is not found and it throws this error:

Error : 2007-08-23 18:59:10.25
Code : 0x80019003
Source : The configuration environment variable was not found. The environment variable was: BI_ETL. This occurs when a package specifies an environment variable for a configuration setting but it cannot be found. Check the configurations collection in the package and verify that the specified environment variable is available and valid.
End Error


Error: 2007-08-23 18:59:10.25

Code: 0xC001401E

Source: ETL_MAIN_PACKAGE Connection manager "Package Path Execute"

Description: The file name "C:\ETL_Deployment" /SET Package.Variables[P_LOOKUP_PATH].Value;C:\ETL_Deployment\ETL_LOGS\DIM_PERIODIC.dtsx" specified in the connection was not valid.

End Error

I run the package on the same server with a command line directly in a DOS window:


C:\>cmd.exe /c dtexec /file "C:\ETL_Deployment\ETL_MAIN_PACKAGE.dtsx" /CHECKPOINTING OFF /MAXCONCURRENT " -1 " /SET Package.Variables["P_PACKAGE_PATH"].Value;"C:\ETL_Deployment" /SET Package.Variables["P_LOOKUP_PATH"].Value;"C:\ETL_Deployment\ETL_LOGS" /SET Package.Variables["P_SCHOOL_CODE"].Value;"007"


I no longer get the error saying that the environment variable was not found, but I still get the same second error:


Error: 2007-08-23 18:59:10.25

Code: 0xC001401E

Source: ETL_MAIN_PACKAGE Connection manager "Package Path Execute"

Description: The file name "C:\ETL_Deployment" /SET Package.Variables[P_LOOKUP_PATH].Value;C:\ETL_Deployment\ETL_LOGS\DIM_PERIODIC.dtsx" specified in the connection was not valid.

End Error

I conclude that the environment variable is not read at all.

Does anybody have an idea on how to solve this problem ?

Many thanks.

Sami



View 10 Replies View Related

Web Based Sql Server Record Manipulation In Multi-user Environment

Jan 20, 2007

I was wondering if you guys might give me some advice on how best to handle a particular scenario i'm struggling with.
I have a client that basically wants web-based update access to their SQL Server database. Specifically, a group of users should be able to access a page where they select a record for editing. The caveat is that no two users should be able to pull up the same record at the same time. Originally I would have thought there was some record-locking mechanism I could exploit within SQL Server or ADO.NET itself, but I haven't been able to come up with anything, so this is my current approach:
The page they use starts out with basically a blank form. There are custom-built paging controls at the bottom of the screen. They click page-forward to begin, and a stored procedure runs to select a record and update a field on that record to indicate "in process". When they finish editing the record, or page on to the next record without updating, another stored procedure runs, updating/resetting the status field on the record appropriately.
The entire page is encapsulated within an ajax.net UpdatePanel.
The entire page has caching disabled. This works well in conjunction with the first page being blank: if they get out of the app and try to get back in by clicking the back button, all they can do is get to the first (blank) page.
A piece of JavaScript in window.onunload clicks a button on the page that releases the record they currently have selected in the event of a redirect, clicking back, etc. It appears to work with everything except a window close. In that case, I have a stored procedure running periodically on the server that checks how long a record has been selected, and if it exceeds the time indicated, resets the record so it can be re-selected later.
In the event of a session timeout, they are redirected to another page that tells them their session has timed out (and since window.onunload fires, it takes care of releasing the record if they have one on the screen).
The concept seemed to be working well until I started multi-user testing. Now it seems that if two users time it perfectly, they are actually able to both select the same record. It happens pretty rarely, but it does happen. I'm guessing this has to do with how my stored procedure is structured, possibly allowing a tiny window between the record being selected for editing and the update that actually marks the record as in process (two separate SQL statements within the one stored procedure).
I believe I have also found a second quirk in my approach, where something causes the window.onunload event to fire twice in some strange situations, but that's more annoying/confusing from a logging standpoint than anything.
I've read that people say to ensure you don't update a record that's already been updated: compare the fields before you actually perform the update and make sure they haven't changed since you selected it. But to me that doesn't solve anything; if two people select the same record and both spend time working on it, the person who tries to update last has just wasted their time.
I've also toyed with the idea of maintaining a separate table in the database to hold the keys of the currently selected records and using that to keep multiple people from selecting the same record, but honestly I don't know if that approach is any better than what I'm doing now.
Anyway, I was just curious if you guys had any advice on how you'd handle a request like this, or if you see any obvious problems/fixes with my current approach.
I would greatly appreciate any info you could provide-
thanks-
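One way to close the window between the SELECT and the UPDATE described above is to claim the record in a single atomic statement, so two callers can never grab the same row; a minimal sketch for SQL Server 2005 (table and column names are invented):

DECLARE @Claimed TABLE (RecordId INT);

UPDATE TOP (1) dbo.WorkQueue
SET    Status    = 'InProcess',
       ClaimedBy = SUSER_SNAME(),
       ClaimedAt = GETDATE()
OUTPUT inserted.RecordId INTO @Claimed
WHERE  Status = 'Available';

-- An empty result means nothing was available; otherwise this is the
-- caller's record, and no other session can have claimed it.
SELECT RecordId FROM @Claimed;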

View 3 Replies View Related

Use Cubes From Analysis Services 2005 To Analysis Services 2000

Oct 17, 2007

Hi,

I have some questions about SQL Server 2000 and 2005 compatibility.
In my configuration I have to use both servers.
The cubes are stored on the 2005 server.
Can I transfer the cubes from 2005 to 2000 Analysis Services?

If yes, what is the procedure? Would the result of the migration be the same in the two different versions?


If not, how can I solve this problem?

Thanks in advance.

View 3 Replies View Related

Create A Relational Diagram From Non-relational Database

Aug 4, 2005

Hi all,
I am trying to create a diagram for our database. During the creation, I add some relationships that were not there (basically our original database is not a relational database, which is why I am doing it).
So sometimes I have to change a data type in order to create a relationship between columns in different tables, e.g. change char(16) to varchar(7) (I checked the field to make sure all the data in it is <= 7 characters).

But when I saved the diagram, there was an error message stating:
Errors were encountered during the save process. Some of your database objects are not saved on your diagram.

'agent' table saved successfully
'VisitUSA' table
- Unable to create relationship 'FK_VisitUSA_agent'.
ODBC error: [Microsoft][ODBC SQL Server Driver][SQL Server]ALTER TABLE statement conflicted with COLUMN FOREIGN KEY constraint 'FK_VisitUSA_agent'. The conflict occurred in database 'CMC', table 'agent', column 'AgentCode'.

What does that mean? Is it caused by some AgentCode values in the VisitUSA table that are not in the agent table?
Thanks!
Betty
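That error typically means existing rows violate the new constraint. A quick check, using the table names from the error and assuming the referencing column in VisitUSA is also called AgentCode, to find values with no matching agent row:

SELECT DISTINCT v.AgentCode
FROM dbo.VisitUSA AS v
LEFT JOIN dbo.agent AS a
       ON a.AgentCode = v.AgentCode
WHERE a.AgentCode IS NULL;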

View 3 Replies View Related

Multi Join Stored Proc

Nov 20, 2007

Hi,
Why does the code below not return any results?
Color codes mark the related keys.

Thanks.


Tables:
FundClient (ClientID PK, Client)

FundPortfolio (PortfolioID PK, Portfolio, ClientID FK)

Staff(StaffID PK, SeniorMgr, ClientID FK, FundID FK)

myLegal(myID PK, LegalCounsel FK, ClientID FK, FundID FK)

FullLegalList(LegalID PK, LegalName)





Code Block
-- Wrapped as a procedure so the parameters have a home; note that the
-- original fragment built the dynamic string but never executed it.
CREATE PROCEDURE dbo.GetPortfolioLegal
    @LegalCounsel int = 0,
    @ClientID     int = 0,
    @FundID       int = 0
AS
BEGIN
    DECLARE @thisQuery varchar(max);

    SET @thisQuery = 'SELECT p.Portfolio, s.SeniorMgr, fl.LegalName
        FROM FundClient f
        INNER JOIN FundPortfolio p
                ON p.ClientID = f.ClientID
        LEFT OUTER JOIN Staff s
                ON s.ClientID = p.ClientID
               AND s.FundID   = p.PortfolioID
        LEFT OUTER JOIN myLegal l
                ON l.ClientID = p.ClientID
               AND l.FundID   = p.PortfolioID
        INNER JOIN FullLegalList fl      -- note: this inner join effectively undoes the LEFT join on myLegal
                ON fl.LegalID = l.LegalCounsel
        WHERE 1 = 1';                    -- lets every optional filter below start with AND

    IF @LegalCounsel != 0
        SET @thisQuery = @thisQuery + ' AND fl.LegalID = ' + cast(@LegalCounsel as varchar(11));

    IF @ClientID != 0
        SET @thisQuery = @thisQuery + ' AND p.ClientID = ' + cast(@ClientID as varchar(11));

    IF @FundID != 0
        SET @thisQuery = @thisQuery + ' AND p.PortfolioID = ' + cast(@FundID as varchar(11));

    EXEC (@thisQuery);                   -- the original never ran the string, hence no results
END





View 12 Replies View Related

Analysis :: Power BI Analysis Services Connector - Remote Server Returned Error

Mar 5, 2015

I have an SSAS 2012 tabular instance with SP2, and there is a database on the instance with a read role to which Everyone is assigned. When configuring the Power BI Analysis Services connector, at the point where you enter the friendly name, description and friendly error message, clicking Next gives the error "The remote server returned an error (403)." I've tested connecting to the database from Excel on a desktop and it connects fine. I don't use an "onmicrosoft" account, so I don't have that problem to deal with.

We use Power BI Pro with our Office 365. As far as I can tell that part is working OK, as I pass that stage of the configuration with a message saying it connected to Power BI. The connector is installed on the same server as tabular services; it's a Win2012 Standard server. The tabular instance runs under a domain account that is the admin account for the instance (this is a dev environment), and that account is what I've used in the connector configuration. It's also a local admin account. There is no gateway installed on the server.

View 10 Replies View Related

Analysis :: Cube Needs To Be Deployed From VS After SSIS Analysis Services Processing Task Completes?

May 13, 2014

I have a cube that we are processing nightly via an Analysis Services Processing Task in SSIS. In order to improve processing time, we elected to use a lot of rigid dimension attributes and do a full process of everything in the SSIS task. The issue I am having is that after that task completes, I need to go into Visual Studio to deploy the cube, because otherwise we are unable to browse or use it. This issue seemed to start once we changed the SSIS Analysis Services Processing Task to do a full process on the dimensions rather than an incremental one.

I would expect that once development is done, and the cube is processed and deployed, that is it. My thinking is that the SSIS task should just update the already deployed cube.

View 2 Replies View Related

Analysis :: How To Correctly Choose Key Column In Mining Structure For Microsoft Analysis Services

Jun 12, 2015

How do I correctly choose the key column in a "Mining Structure" for Microsoft Analysis Services?

I have a table, "Incoming goods":

Create table Income (
    ID           int            not null identity(1, 1),
    [Date]       datetime       not null,
    GoodID       int            not null,
    PriceDeliver decimal(18, 2) not null,
    PriceSale    decimal(18, 2) not null,
    CONSTRAINT PK_Income PRIMARY KEY CLUSTERED (ID),
    CONSTRAINT FK_IncomeGood FOREIGN KEY (GoodID) REFERENCES dbo.Goods (ID)
)

I'm trying to build a relationship (regression) between "PriceSale" and "PriceDeliver" for a good, but I do not know which column is the better choice for the key column: ID or GoodID?

View 2 Replies View Related

Analysis :: Create Analysis Services Project In Visual Studio 2012 Data Tools?

Feb 18, 2013

Is it possible to create an Analysis Services project (*.dwproj) in Visual Studio 2012 Data Tools?

View 5 Replies View Related

SQL 2012 :: SSIS Multi Configuration In A Single Table

Oct 20, 2015

I want to maintain all configurations in a single table. What is the best way to approach it?
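The usual pattern, sketched with the wizard's default table, is to keep everything in one [SSIS Configurations] table and separate packages or environments with the ConfigurationFilter column; the values below are placeholders:

INSERT INTO [dbo].[SSIS Configurations]
        (ConfigurationFilter, ConfiguredValue, PackagePath, ConfiguredValueType)
VALUES  ('LoadSales_PROD',
         N'Data Source=ProdServer;Initial Catalog=DW;Integrated Security=SSPI;',
         '\Package.Connections[DW].Properties[ConnectionString]',
         'String');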

View 5 Replies View Related






