SQL Server 2008 :: How To Load A Customer Database
May 26, 2015
I need to load a customer database onto our SQL Server 2008 server. I always use the Restore Database option in Management Studio and create a new database from device (the customer's database backup file), and it used to work just fine. But when I do the same now, I get the error below:
CREATE DATABASE permission denied in database 'master'.
RESTORE HEADERONLY is terminating abnormally. (Microsoft SQL Server, Error: 262)
I also tried the Create New Database option in Management Studio, but I get the same error. I did run Management Studio with 'Run as administrator'.
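This error usually means the login used to connect lacks the CREATE DATABASE permission in master, which restoring a new database also requires ('Run as administrator' elevates SSMS on Windows, but not the SQL login's server permissions). A minimal sketch of one possible fix, run by a sysadmin; the login name is a placeholder:

USE [master];
-- Hypothetical login; substitute the account you actually connect with.
-- Members of dbcreator may CREATE, ALTER, DROP, and RESTORE any database.
EXEC sp_addsrvrolemember @loginame = N'DOMAIN\YourLogin', @rolename = N'dbcreator';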
I have an invoice table with customer, date, sales location, and invoice total. I want to total the invoices by location for each customer and see which location sells the most to each customer. My table looks like this.
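The table itself did not come through, but assuming hypothetical columns named Customer, SalesLocation, and InvoiceTotal on a dbo.Invoice table, a sketch of the totaling could look like this:

-- Total invoices per customer per location; keep the top location for each customer.
WITH LocationTotals AS (
    SELECT Customer,
           SalesLocation,
           SUM(InvoiceTotal) AS TotalSales,
           ROW_NUMBER() OVER (PARTITION BY Customer
                              ORDER BY SUM(InvoiceTotal) DESC) AS rn
    FROM dbo.Invoice
    GROUP BY Customer, SalesLocation
)
SELECT Customer, SalesLocation, TotalSales
FROM LocationTotals
WHERE rn = 1;   -- rn = 1 is the best-selling location per customer

Dropping the WHERE clause returns the full total-by-location-by-customer breakdown.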
My requirements are to return the sales for each customer for each store, for the past 6 weeks.
The catch is that I only want those customers which have had sales over 10,000 within the last week.
This is my query:
WITH SET [CustomerList] AS
FILTER(
    ([Store].[Store Name].[Store Name].Members, [Customer].[Customer Name].[Customer Name].Members),
    ([Measures].[Sales], [Date].[Fiscal Week Name].&[2015-01-26 to 2015-02-01]) >= 10000
)
I have some source tables: Customer, Order, Ship, Item, and Invoice. From these source tables I have to create 5 dimension tables and 1 fact table called OrderFact, using SQL Server queries, just to test the data. So I have created the 5 dimensions, pulled the dimension keys from each dimension, and loaded them into the fact using joins. For the measures, I joined those 5 sources to create a RawFact table which holds all the measures.
Now, when loading the fact, I join RawFact with all the dimensions to get the keys, and I pull the measures directly from RawFact. Is this process right, or is there a better method?
I also want to avoid any Cartesian product in the queries below. What can I do to avoid this?
Dimensions: DimCustomer, DimOrder, DimShip, DimItem, DimInvoice; the fact is FactOrder.
Loading Rawfact:
select o.ord_id, o.full_order_value, o.open_order_value, o.div_code, o.order_type_code, o.order_status, o.order_date,
       s.num_of_pallets, s.num_of_cartons, s.shipment_value, s.ppd_coll, s.ship_status,
       i.invoice_amt,
       it.net_weight, it.gross_weight, it.warranty_days, it.item_type, it.item_num,
       c.terr_code, c.largest_bal, c.last_amt_pay, c.last_inv_amt, c.num_invoice_paid, c.cust_num
from order o
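To avoid a Cartesian product, every dimension needs an explicit equality predicate on its business key when the fact is loaded. A sketch under that pattern; the surrogate-key and business-key column names here are assumptions:

-- Look up each surrogate key by business key; measures come straight from Rawfact.
INSERT INTO FactOrder (CustomerKey, OrderKey, ShipKey, ItemKey, InvoiceKey,
                       full_order_value, shipment_value, invoice_amt)
SELECT dc.CustomerKey, dord.OrderKey, dshp.ShipKey, ditm.ItemKey, dinv.InvoiceKey,
       rf.full_order_value, rf.shipment_value, rf.invoice_amt
FROM Rawfact rf
JOIN DimCustomer dc   ON dc.cust_num   = rf.cust_num    -- one join condition per
JOIN DimOrder    dord ON dord.ord_id   = rf.ord_id      -- dimension; a missing ON
JOIN DimShip     dshp ON dshp.ship_id  = rf.ship_id     -- clause is exactly what
JOIN DimItem     ditm ON ditm.item_num = rf.item_num    -- produces a Cartesian product
JOIN DimInvoice  dinv ON dinv.inv_num  = rf.inv_num;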
I have a table on Server A with 5 columns, among them ID, Address, and CreateDatetime.
I need to transfer data from this table from Server A to Server B for a reporting purpose. In some rows the Address column contains two addresses. For example:
Address
Houston, Dallas
Redmond
Sacramento
New Jersey, New York
I want to skip the rows that hold two addresses (Houston, Dallas and New Jersey, New York) when loading to the destination table on Server B, and I need to do incremental loads.
How should I proceed with this?
The version we are using is SQL Server 2008 R2 Standard Edition.
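One possible sketch, assuming a linked server to Server A and using CreateDatetime as the incremental watermark (all object names are placeholders):

-- Highest CreateDatetime already copied; start from the beginning on the first run.
DECLARE @LastLoad datetime;
SELECT @LastLoad = ISNULL(MAX(CreateDatetime), '19000101') FROM dbo.AddressReport;

INSERT INTO dbo.AddressReport (ID, Address, CreateDatetime)
SELECT ID, Address, CreateDatetime
FROM [ServerA].[SourceDB].dbo.AddressTable
WHERE CreateDatetime > @LastLoad
  AND Address NOT LIKE '%,%';   -- skip rows that hold two comma-separated addresses

The comma test assumes single addresses never contain commas; if they can, a dedicated flag column on the source would be safer.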
I am running SQL Server 2008 Enterprise Edition with SP1 on Windows Server 2008. I am trying to set up replication. I have completed the following:
1. Created the distribution database
2. Created the publisher
3. Granted SQL Agent access to the ...\MSSQL\100\Com folder to execute the agent .exe files
4. Granted SQL Agent access to ...\MSSQL\Binn where the subsystem .dll files are located
5. Granted SQL Agent write permissions to ...\MSSQL\repldata in order to write the bcp files
Each time I try to initialize the snapshot, I get the following errors in the SQL Agent Log
1. Log Step ....... cannot be run because the LogReader subsystem failed to load. The job has been suspended.
2. Log Step ....... cannot be run because the Snapshot subsystem failed to load. The job has been suspended.
I found posts where the records in msdb.dbo.syssubsystems pointed to different folders than the ones where the .dll and .exe files are located. So I checked that, but the paths are correct.
The SQL Agent has sysadmin on the SQL Server and is using a Windows service account.
I believe it is a security issue because I can run the executables from the command prompt to generate the snapshot for the publication. Have I missed the forest for the trees?
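For reference, the paths the Agent loads the subsystems from can be checked like this; if a row is wrong and gets corrected, the SQL Server Agent service must be restarted to pick up the change:

SELECT subsystem, subsystem_dll, agent_exe
FROM msdb.dbo.syssubsystems
WHERE subsystem IN ('LogReader', 'Snapshot', 'Distribution');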
I have an SSIS package which runs daily. This SSIS package has a couple of Execute SQL Tasks which load data for yesterday's transactions. For example:
INSERT INTO Shipped (Div_Code, shipment_value, ship_l_id, shipped_qty, shipped_date,
                     whse_code, ord_id, ship_id, ship_l_ord_l_id, Created_date)
SELECT ord.DIV_CODE        AS div_code,
       ship.SHIPMENT_VALUE AS shipment_value,
       ship_l.SHIP_L_ID    AS ship_l_id,
       ship_l.SHIPPED_QTY  AS shipped_qty,
       ship.SHIPPED_DATE   AS shipped_date,
       ship.WHSE_CODE      AS whse_code,
       ord.ORD_ID          AS ord_id,
       ship.SHIP_ID        AS ship_id,
       ship_l.ord_l_id     AS ship_l_ord_l_id,
       GETDATE()           AS Created_date
FROM SHIP ship
JOIN ORD ord      ON ord.ORD_ID = ship.ORD_ID
JOIN SHIP_L ship_l ON ship_l.SHIP_ID = ship.SHIP_ID
WHERE ship.SHIPPED_DATE = (DATEADD(day, -1, CONVERT(VARCHAR(10), GETDATE(), 120)))
  AND ship.WHSE_CODE = 'WPP'
All the Execute SQL Tasks have queries like the one above, and some queries have a date filter that loads data for yesterday. For example, one query has ship.SHIPPED_DATE = (DATEADD(day, -1, CONVERT(VARCHAR(10), GETDATE(), 120))), and another has ord.trans_date = (DATEADD(day, -1, CONVERT(VARCHAR(10), GETDATE(), 120))). This package runs daily through a SQL Server job, so it loads data for yesterday. Now, if I want to run it for a particular date, how can I achieve that from SSIS?
I have an SSIS package which runs on a date parameter. If we don't specify the date, it should always load data for yesterday's date. And if we give a specific date, like '2015-05-10', it should load for that date. How can we achieve this dynamically (using package configuration)? Once we have loaded a specific date, the package should be set back to yesterday's date dynamically. How can this be achieved? I am new to SSIS.
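One common sketch: keep the date in a package variable (say User::LoadDate, exposed through package configuration), map it into each Execute SQL Task as parameter 0, and let NULL mean "yesterday". The variable name is an assumption; the ? is the OLE DB parameter placeholder:

-- ? is mapped to User::LoadDate on the task's Parameter Mapping page.
DECLARE @LoadDate date;
SET @LoadDate = ?;
IF @LoadDate IS NULL
    SET @LoadDate = DATEADD(day, -1, CAST(GETDATE() AS date));   -- default: yesterday

SELECT ship.SHIP_ID, ship.SHIPMENT_VALUE, ship.SHIPPED_DATE
FROM SHIP ship
WHERE ship.SHIPPED_DATE = @LoadDate
  AND ship.WHSE_CODE = 'WPP';

Leaving the configured value empty (NULL) after a one-off run makes the next scheduled run fall back to yesterday automatically.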
1) CustomerID
2) FirstName
3) MiddleName
4) SurName
5) Title
6) Marital Status
7) Education
8) Occupation
9) Annual Income
10) Line of Business
11) DOB
12) Father Name
13) Mother Name
14) SpouseName
15) Gender
16) Email
17) MainTel
18) Home Tel
19) Passport Number
20) ----------------------
21) - - - - - - - - - - -
100) -------------------
The list above is a snapshot of our customer master table, which contains approximately 100 attributes related to a customer.
We are designing an application for the banking sector (but NOT a core banking solution), for which we may need to capture a variable number of addresses for a bank's customers, i.e., more than the three address types Fixed, Temporary, and Communication (which is generally the case with all banks). A single address includes address1/address2/city/country/state/pincode fields. In the context of an OLTP database, we have the option to put multiple addresses in a child table, but that involves extra joins at data-retrieval time and slows down the query.
As another option, we can create redundant address columns (address1/address2/city/country/state/pincode) in the master table that will accumulate addresses if demand for more than three address types arises (a reasonable number of extra addresses is expected, i.e., up to 10).
The database is expected to serve the records of approximately 25 million bank customers, so can someone suggest how to maintain the balance between the two approaches?
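A sketch of the child-table approach; names and sizes are illustrative. With the clustered key on (CustomerID, AddressType), all of a customer's addresses sit together, so the join back to the master row is a single seek rather than a scan:

CREATE TABLE dbo.CustomerAddress (
    CustomerID  int          NOT NULL,
    AddressType tinyint      NOT NULL,   -- e.g. 1 = Fixed, 2 = Temporary, 3 = Communication, 4+ = extra
    Address1    varchar(100) NOT NULL,
    Address2    varchar(100) NULL,
    City        varchar(50)  NOT NULL,
    [State]     varchar(50)  NOT NULL,
    Country     varchar(50)  NOT NULL,
    Pincode     varchar(10)  NOT NULL,
    CONSTRAINT PK_CustomerAddress PRIMARY KEY CLUSTERED (CustomerID, AddressType),
    CONSTRAINT FK_CustomerAddress_Customer
        FOREIGN KEY (CustomerID) REFERENCES dbo.Customer (CustomerID)
);

Unlike the wide-master-table option, this handles any number of address types with no schema change, which matters at 25 million customers.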
I intend to develop a web-based application which uses SQL Server 2005 at the back end and Visual Studio 2.0 at the front end.
The application serves two functions.
Requirement 1: It carries out a search (in SQL Server) for a particular name, entered from the front-end .NET application, against a large database of about 1 million records.
Scenario: The above requirement can be illustrated by the following example.
Consider a bank database which holds the existing customers, with attributes like Name, Age, Profession, etc.
Now, if a new customer wants to open an account at the bank, the bank officials want to know whether that customer is already one of the existing customers (without asking the customer directly).
The system should also be able to detect partial matches of the name, i.e., if we enter "Jhon" from the front-end .NET interface, the application should generate a list of all customers having "Jhon" as part of their name in any position (first name, middle name, last name).
Requirement 2: If at some time a change is detected in the bank's existing customer database, then each record of this database is searched against an external database (having almost 2-3 million records).
Scenario: The above requirement can be illustrated by the following example.
If a new user is added to the bank's existing customer database (a database change), then every record of this newly updated database is searched against another bank's database.
I would like to hear the experts' views on the database design of such an application for optimal performance, and on the types of searches I should consider for the application.
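For Requirement 1, note that LIKE '%Jhon%' cannot use an ordinary index, so at a million rows a full-text index is the usual answer. A sketch, assuming hypothetical FirstName/MiddleName/LastName columns and a unique index named PK_Customer:

-- One-time setup.
CREATE FULLTEXT CATALOG CustomerCatalog;
CREATE FULLTEXT INDEX ON dbo.Customer (FirstName, MiddleName, LastName)
    KEY INDEX PK_Customer ON CustomerCatalog;

-- Search: matches "Jhon" as a word prefix in any of the three name parts.
SELECT CustomerID, FirstName, MiddleName, LastName
FROM dbo.Customer
WHERE CONTAINS((FirstName, MiddleName, LastName), N'"Jhon*"');

For Requirement 2, a set-based join on normalized name columns between the two databases will generally beat searching record by record.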
We have a few customers dropping files in Amazon S3. How do we load this data into a SQL Server 2008 R2 database using SSIS? We are on a 2008 R2 BIDS environment.
I need to update a number of SQL Server tables, the data sources for these coming from a number of stored procedures. I want a generic way of getting the data and then passing this data to the tables. I am thinking of doing this for each table:

1. Populating a DataSet
2. Writing this DataSet to XML
3. Using SQLXML Bulk Load to pass this XML to the database to update

I can create the XML data file by: dataset.WriteXml("C:\data.xml")

The problem I have is that the example (http://support.microsoft.com/default.aspx/kb/316005/en-us) I looked at relies on the schema being defined:

<?xml version="1.0" ?>
<Schema xmlns="urn:schemas-microsoft-com:xml-data"
        xmlns:dt="urn:schemas-microsoft-com:xml:datatypes"
        xmlns:sql="urn:schemas-microsoft-com:xml-sql">
  <ElementType name="CustomerId" dt:type="int" />
  <ElementType name="CompanyName" dt:type="string" />
  <ElementType name="City" dt:type="string" />
  <ElementType name="ROOT" sql:is-constant="1">
    <element type="Customers" />
  </ElementType>
  <ElementType name="Customers" sql:relation="Customer">
    <element type="CustomerId" sql:field="CustomerId" />
    <element type="CompanyName" sql:field="CompanyName" />
    <element type="City" sql:field="City" />
  </ElementType>
</Schema>

Is there any way I can create the schema 'on the fly', similar to how I did for the data source file? I could then pass these files to the database: objBL.Execute("schema.xml", "data.xml");
We need to implement an incremental load in the database. A sample scenario: there is a view (INCOMEVW) which is built on top of a query like
CREATE VIEW INCOMEVW AS
SELECT CLIENTID, COUNTRYNAME, SUM(OUTPUT.INCOME) AS INCOME
FROM (
    SELECT EOCLIENT_ID AS CLIENTID, EOCOUNTRYNAME AS COUNTRYNAME, EOINCOME AS INCOME
    FROM EOCLIENT C
    INNER JOIN EOCOUNTRY CT ON C.COUNTRYCODE = CT.COUNTRYCODE
[code]...
This is a sample view. As of now there is a full load from the source (SELECT * FROM INCOMEVW) into the target table tbl_Income. We need to pick only the delta and load it into the target table using a staging table. The challenges are:
1) If we get the delta (inserted, updated, or deleted rows) in the source tables EOCLIENT, EOCOUNTRY, ENCLIENT, ENCOUNTRY, how do we load the incremental changes into the single target table tbl_Income?
2) How do we do the SUM operation with GROUP BY in an incremental load?
3) We are planning a daily incremental load and are thinking of creating the same table structure as the source, with Date and Flag columns to identify the date and whether the source row is an insert, update, or delete. But we are not sure how to frame something like this view and load it into a single target with SUM operations.
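One common sketch for points 1 and 2: instead of adding and subtracting deltas inside the aggregate, collect the client IDs touched by the day's delta, re-aggregate only those clients through the view, and MERGE the result into the target. The staging table name is an assumption:

-- dbo.stg_ChangedClients holds the distinct CLIENTIDs from today's delta rows.
MERGE dbo.tbl_Income AS tgt
USING (
    SELECT v.CLIENTID, v.COUNTRYNAME, v.INCOME   -- the view already does SUM/GROUP BY
    FROM INCOMEVW v
    JOIN dbo.stg_ChangedClients c ON c.CLIENTID = v.CLIENTID
) AS src
ON  tgt.CLIENTID = src.CLIENTID
AND tgt.COUNTRYNAME = src.COUNTRYNAME
WHEN MATCHED AND tgt.INCOME <> src.INCOME THEN
    UPDATE SET INCOME = src.INCOME
WHEN NOT MATCHED BY TARGET THEN
    INSERT (CLIENTID, COUNTRYNAME, INCOME)
    VALUES (src.CLIENTID, src.COUNTRYNAME, src.INCOME);

Because changed clients are recomputed from scratch, deleted source rows simply fall out of the new SUM; clients whose rows disappeared entirely can be handled with an extra WHEN NOT MATCHED BY SOURCE clause restricted to the changed IDs.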
I had to relocate the database log file and issued an ALTER DATABASE command, but by mistake I put a space in the file name, as shown below (the space is at the beginning of the file name). Now I am unable to get the database loaded into SQL Server. The database has 2 replications configured, so deleting and re-attaching the database means the replication needs to be re-configured. Is there an alternative way to issue a command to update the database FILENAME? Not sure if this can be edited in the master database (sys files).
ALTER DATABASE [User_DB] MODIFY FILE (NAME = User_DB_log, FILENAME = 'I:\SQLLogs\ User_DB_log.ldf')
GO
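MODIFY FILE only changes metadata in master, so one option is simply to issue it again with the corrected path and then cycle the database; nothing is detached, so the replication configuration is untouched. A sketch, assuming the intended path has no leading space:

ALTER DATABASE [User_DB] MODIFY FILE (NAME = User_DB_log, FILENAME = 'I:\SQLLogs\User_DB_log.ldf');
GO
ALTER DATABASE [User_DB] SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO
ALTER DATABASE [User_DB] SET ONLINE;
GO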
SELECT prin.[name] [User], sec.state_desc + ' ' + sec.permission_name [Permission]
FROM [sys].[database_permissions] sec
JOIN [sys].[database_principals] prin ON sec.[grantee_principal_id] = prin.[principal_id]
WHERE sec.class = 0
ORDER BY [User], [Permission];
but the results only contain these 2 columns, User and Permission:

User    Permission
User1   GRANT CONNECT
User2   GRANT CONNECT
Is there a way in SQL Server (2005/2008/2012) to run a script against a database that will show all users that have permissions on that database and the type of permissions?
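The query above returns only database-level grants because of the class = 0 filter. A sketch that widens it to object-level permissions and adds role memberships (permissions also arrive through roles, which the first query misses):

-- Explicit grants at database and object level.
SELECT prin.[name] AS [User],
       sec.state_desc + ' ' + sec.permission_name AS [Permission],
       OBJECT_NAME(sec.major_id) AS [Object]      -- NULL for database-level grants
FROM sys.database_permissions sec
JOIN sys.database_principals prin ON sec.grantee_principal_id = prin.principal_id
ORDER BY [User], [Permission];

-- Permissions inherited through database roles.
SELECT r.[name] AS [Role], m.[name] AS [Member]
FROM sys.database_role_members drm
JOIN sys.database_principals r ON drm.role_principal_id   = r.principal_id
JOIN sys.database_principals m ON drm.member_principal_id = m.principal_id;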
Looking for a sample ETL package to extract data from a SQL Server database and load it into an Oracle database using SQL Server Integration Services 2008. The requirement covers both full and incremental loads.
I have a simple SSIS package that reads a flat file and copies it into a SQL Server table.
When the flat file is on the C: drive I have no problem running this package from SQL Server Agent, but as soon as I update the path to a network location, the package only works when I run it manually and fails when it is executed via the SQL Server Agent job.
The error says "cannot open the datafile", while the datafile location is valid.
Is this a limitation of SQL Server Agent, that only local files can be processed?
I am wondering how people maintain their SQL Servers which run at several customers' sites where disk space is getting smaller and smaller. What I mean is that we have tables in SQL databases which hold a lot of data consisting of statistics, errors, logs, etc. They grow and grow, and the existing data is no longer needed once it gets older than, say, one year. How do you reduce such tables without loading the system too much, given that the major application also runs on the same server?
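One widely used approach is a batched purge that removes old rows in small chunks, keeping locks short and log growth modest so the co-hosted application is not starved. A sketch; the table and column names are placeholders:

WHILE 1 = 1
BEGIN
    DELETE TOP (5000) FROM dbo.StatisticsLog        -- small batches, short locks
    WHERE LogDate < DATEADD(year, -1, GETDATE());

    IF @@ROWCOUNT = 0 BREAK;                        -- nothing older than a year left

    WAITFOR DELAY '00:00:02';                       -- breathing room for the application
END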
I would like to know if you can restore a database to Server B using Server A as the service, meaning we would issue the command on Server A but somehow point to Server B as where we want the restore to happen.
The backup file would be in a location independent of both servers.
1) I can't get the 'Copy Database' function to work from SQL Server 2008 to SQL Server 2005. I connect OK. Everything goes to the last step and then it fails.
2) I can't get a SQL Server 2008 backup to restore on SQL Server 2005 either. The only way I know that works is to script the creation of all tables and then export and import the data. This does work. How can I get the 'entire' database, structure and data, from 2008 to 2005?
Thanks, SQL newbie.
I was having an issue with one of my databases in SQL Express. It was offline this morning; it said "Database 'MyDB' cannot be opened due to inaccessible files or insufficient memory or disk space." When I checked the error log, it only said "FCB::Open failed: Could not open file D:\Databases\MyDB_Data.mdf for file number 1. OS error: 32 (failed to retrieve text for this error. Reason: 15100)."
I did an ALTER DATABASE to take it offline and then online, and that worked for me (I can access the database again), but I need to find the cause of this issue. I checked the max memory setting: it is still at the default, not limited for SQL Server. Could that be the cause?
Just heard about the coming release of SQL Server 2008. Does anyone here have ideas on what the significant new features in the SQL Server 2008 Database Engine are, compared to the SQL Server 2005 Database Engine? It would be very interesting to know.
I am tasked with identifying the source database name, ID, and server name for each staging table that I create. I need to add this via a derived column to all staging tables created by merging the same tables from different servers.
When doing a Merge Join, there is no way to identify the source of the data, so I would like to see if data came from one database more than the other servers, or if there are duplicates across servers.
The thing that bugs me about the SSIS Data Flow task is that there is no easy way to do an Execute SQL Task after I select my ADO.NET source to get this information, because my connection string is dynamic and there is no way of knowing which data source is being picked up at runtime.
For example, I have a Products table on Server 1 and on Server 2.
Server 2 has more products, and I would like to join the two together to create a staging table.
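One way around the dynamic-connection problem is to tag the rows inside the source query itself; whichever server the connection resolves to at runtime then reports its own identity, and no extra Execute SQL Task is needed:

-- Each row carries its origin, no matter which connection was picked at runtime.
SELECT @@SERVERNAME AS SourceServer,
       DB_NAME()    AS SourceDatabase,
       DB_ID()      AS SourceDatabaseId,
       p.*
FROM dbo.Products AS p;

After the Merge Join, the SourceServer column makes cross-server duplicates directly visible.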
Is it possible to view the connection string information of a remote login/session? I want to know if the login is looking up the database server via IP address, server name (NetBIOS name), or fully qualified domain name (FQDN).
Using these DMVs I can get a lot of relevant information:
sys.dm_exec_sessions: Program Name (e.g. Microsoft SQL Server Management Studio), Client Interface Name (e.g. .Net SqlClient Data Provider)
sys.dm_exec_connections: Net Transport (e.g. TCP), Client Net Address and TCP Port
but not how the server's address was resolved. Is the server name from the connection string ever sent by the client to the server, or is it only used for the DNS lookup?
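For completeness, the two DMVs join per session like this (all of these columns exist from SQL Server 2005 onward):

SELECT s.session_id,
       s.program_name,
       s.client_interface_name,
       c.net_transport,
       c.client_net_address,   -- where the client connected from
       c.local_net_address,    -- which of the server's IPs the client reached
       c.local_tcp_port
FROM sys.dm_exec_sessions s
JOIN sys.dm_exec_connections c ON c.session_id = s.session_id;

local_net_address at least shows which server IP the client landed on, but as far as I know no DMV records the name the client originally typed; the connection string itself stays on the client.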
UAT environment : SQL Server 2008 R2 SP3 Enterprise Edition
SANDBOX environment : SQL Server 2008 R2 SP3 Standard Edition
I have a database backup (.bak) that was taken from the UAT environment and has CDC enabled on some tables. I want to restore that database into my SANDBOX environment, which does not support CDC (because it is Standard Edition). The restore process fails due to this incompatibility. Is there any way to restore without CDC? (I don't need CDC enabled on my SANDBOX, just the data from the backup.)
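A common workaround sketch: restore the backup on any Enterprise or Developer edition instance first, strip CDC there, and take a fresh backup for the SANDBOX. The paths are placeholders:

-- On an Enterprise/Developer instance:
RESTORE DATABASE [MyDB] FROM DISK = N'C:\Backups\MyDB_UAT.bak' WITH REPLACE;
GO
USE [MyDB];
GO
EXEC sys.sp_cdc_disable_db;   -- removes all CDC tables, jobs, and metadata
GO
BACKUP DATABASE [MyDB] TO DISK = N'C:\Backups\MyDB_NoCDC.bak';
GO
-- This second backup should restore cleanly on the Standard Edition SANDBOX.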
Our database crashed this morning and went into recovery mode. How can I track the progress of the recovery to determine how long it might take? The error log shows that it started up all the databases, then shows the recovery messages for the msdb database, then shows that SQL Server is ready for client connections. I don't see any messages about my database's recovery or the number of transactions to roll forward or back. If I run sys.sp_readerrorlog and search for my database name, the only line returned is the 'starting up database' message.
I do expect the database to take a while to recover, as it is about 8 TB; there is plenty of free disk space, about 3 TB. The database started recovery while a transaction log backup was running, so that backup failed; the last transaction log backup was taken 2 hours before recovery started. The last full backup completed about 5 days ago. The transaction log backup occurs every 2 hours and is typically around 16 GB.
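While a database is in recovery, progress is visible in sys.dm_exec_requests: the recovery task shows up with command 'DB STARTUP' and a percent_complete value. A quick check:

SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000.0 AS est_minutes_left
FROM sys.dm_exec_requests
WHERE command = 'DB STARTUP';

The error log also writes periodic "Recovery of database ... is N% complete" messages, including the phase (analysis, redo, undo), once recovery is underway.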