I need to record in a table:
Who, When, What Field, and the New Value of the Field
When changes occur to an existing record.
The purpose is for users to occasionally view the changes. They'll want to be able to see the history of the record - who changed what and when.
I figured I'd add the needed code to the stored procedure that's doing the update for the record.
When the stored procedure is called to do the update, the PK and parameters are sent.
The SP could first read the current state of the record from disk,
then do the update, then "spin" through the fields, comparing the record's state before and after the update. The differences could be concatenated into a "Changes" string, and in the end this string is saved in a history record along with a few other fields:
Name, DateTime, Changes
FK to Changed Record: some int value
Name: Joe Blow
Date: 1/1/05 12:02pm
Changes: Severity: 23 Project: Everest Assigned Lab: 204
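
Something like this minimal sketch is what I have in mind; the table, column, and procedure names are all invented, and the per-field comparisons are hard-coded here rather than generic:

CREATE PROCEDURE dbo.Issue_Update
    @IssueID int,
    @Severity int,
    @Project varchar(50),
    @ChangedBy varchar(50)
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @Changes varchar(4000);
    SET @Changes = '';

    -- Capture the current state and build the "Changes" string
    -- by comparing it with the incoming parameter values.
    SELECT @Changes = @Changes
        + CASE WHEN Severity <> @Severity
               THEN 'Severity: ' + CAST(@Severity AS varchar(10)) + ' ' ELSE '' END
        + CASE WHEN Project <> @Project
               THEN 'Project: ' + @Project + ' ' ELSE '' END
    FROM dbo.Issue
    WHERE IssueID = @IssueID;

    -- Apply the update.
    UPDATE dbo.Issue
    SET Severity = @Severity, Project = @Project
    WHERE IssueID = @IssueID;

    -- Save one history record only if something actually changed.
    IF @Changes <> ''
        INSERT INTO dbo.IssueHistory (IssueID, Name, ChangeDate, Changes)
        VALUES (@IssueID, @ChangedBy, GETDATE(), @Changes);
END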
How does the above approach sound?
Is there a better way you'd suggest?
Any sample code for a system that spins through the fields, comparing one temporary record with another, looking for changes?
This one is giving me quite a bit more difficulty than I ever imagined it would...
Essentially I would like to use one table to store the change history for multiple tables. I would like to use an update trigger to check which fields have changed in each record, and write a single record to a history table for each field that changed, containing the table name, field name, previous value, and new value.
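
A rough sketch of the kind of trigger I mean, using an invented dbo.Employee table and a shared dbo.ChangeHistory audit table; the INSERT...SELECT pattern would be repeated for each audited column:

CREATE TRIGGER trg_Employee_Audit ON dbo.Employee
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    -- One history row per changed field; join inserted to deleted on the PK.
    INSERT INTO dbo.ChangeHistory
        (TableName, FieldName, KeyValue, OldValue, NewValue, ChangedAt)
    SELECT 'Employee', 'Mobile', CAST(i.EmpNo AS varchar(20)),
           d.Mobile, i.Mobile, GETDATE()
    FROM inserted AS i
    JOIN deleted AS d ON d.EmpNo = i.EmpNo
    WHERE ISNULL(d.Mobile, '') <> ISNULL(i.Mobile, '');
END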
There is a report (click the server name, then Reports, and run Schema Changes History). I have had my SQL 2005 running for a few weeks, and there are many entries listed going back to the day it started. Is there a way to recycle this and clean it up on a weekly basis?
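
For what it's worth, that report is driven by the default trace, so one possible way to start it fresh is to restart the default trace, which begins a new trace file. A sketch, assuming sysadmin rights:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'default trace enabled', 0;
RECONFIGURE;
EXEC sp_configure 'default trace enabled', 1;
RECONFIGURE;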
One of my SQL developers observed that the size of the parameter 'Parameter_XYZ' in a certain stored procedure had been changed from 25 to 255 during some production fixes; however, it suddenly looks like someone has changed it back to 25 instead of 255.
DECLARE @Parameter_XYZ varchar(25);
Can we figure out in which sprint/drop the stored procedure was changed and Parameter_XYZ was set back to 25? Will any log-recovery mechanism get such details?
Can we get the stored procedure text between the different alterations?
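
The default trace (SQL 2005) records who altered an object and when, though not, as far as I know, the previous procedure text. A sketch that looks for Object:Altered events; the procedure name is a placeholder:

SELECT t.StartTime, t.LoginName, t.ApplicationName, t.ObjectName
FROM sys.traces AS st
CROSS APPLY fn_trace_gettable(st.[path], DEFAULT) AS t
WHERE st.is_default = 1
  AND t.EventClass = 164            -- Object:Altered
  AND t.ObjectName = 'YourProcedureName';

The default trace keeps only a limited rolling history, so older alterations may already have aged out; going forward, a DDL trigger or source control would be needed to capture the text between alterations.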
Hi, I am not sure how to change the data and log file names on an existing database. If it were Oracle, I would just mount the database and use an ALTER DATABASE statement to change a data file's name or location.
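
For SQL Server (2005 or later), a sketch with placeholder database, logical file, and path names:

ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Data, FILENAME = N'D:\SQLData\MyDb_Data.mdf');
ALTER DATABASE MyDb
MODIFY FILE (NAME = MyDb_Log, FILENAME = N'E:\SQLLogs\MyDb_Log.ldf');

ALTER DATABASE MyDb SET OFFLINE;
-- Move the physical files to the new location at the OS level here.
ALTER DATABASE MyDb SET ONLINE;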
I have a package that exports data to a text file; this package is scheduled to run every night. How can I change the name of the file dynamically? Or how can the new data be appended to the end of the file?
And I want to map it to a table that has the following columns:
rack_id | CDName | Year | Artist
Assuming that N is constant and all rows of the flat file have at most N values filled in per rack, how do I map the fields from the flat file so that cdname1, cdname2, and cdname3 all fall under the CDName column, year1 and so on fall under the Year column, and artist1 and so on fall under the Artist column?
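
Assuming the flat file is staged into a table first, a minimal T-SQL sketch of the unpivot for N = 3 (inside SSIS itself, the Unpivot transformation does the same job); every name here is an assumption:

INSERT INTO dbo.RackCD (rack_id, CDName, [Year], Artist)
SELECT rack_id, cdname1, year1, artist1 FROM dbo.RackStaging WHERE cdname1 IS NOT NULL
UNION ALL
SELECT rack_id, cdname2, year2, artist2 FROM dbo.RackStaging WHERE cdname2 IS NOT NULL
UNION ALL
SELECT rack_id, cdname3, year3, artist3 FROM dbo.RackStaging WHERE cdname3 IS NOT NULL;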
I'm getting a bit lost in SSIS. I've got an Excel source file that I'm trying to load into a table. I keep getting validation errors that warn about not being able to convert between unicode and non-unicode string data types.
I'm trying to figure out where I have to change this and am frankly confused. It seems SSIS is selecting various columns as Unicode/WSTR data types, but I want them to import as regular string types.
On the Data Flow tab in SSIS, I right-click on the source Data Flow component (the Excel file) and select Show Advanced Editor. Then on the last tab, Input and Output Properties, there's a tree view for the Excel output. There are "External Columns" and "Output Columns" containers in the tree view.
I tried setting some of these but they don't seem to "take". Do I need to change the data type for each column under both the External and Output columns?
That seems like a lot of work! And, as I say, I tried setting some, but I still got the same validation errors. So, then I go back to this spot (Advanced Editor -> Input and Output Properties tab) and my changes seem to have been lost.
I added a secondary data file to tempdb yesterday and gave it a wrong location by mistake. If I try to change the location, I now get an error. I think that is because tempdb is in use, and that is why I can't change its secondary file's location. Do I need to take tempdb offline and then change the secondary file's location?
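
As far as I know you can't take tempdb offline, but the file location can be corrected with ALTER DATABASE; the change takes effect only after the SQL Server service is restarted, because tempdb is recreated at startup. A sketch, with the logical file name and path as placeholders:

ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev2, FILENAME = N'D:\TempDB\tempdev2.ndf');
-- Restart the SQL Server service; tempdb is recreated at the new location.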
How to move one table or a ResultSet to a separate database .sdf file in compact edition?
For example, I have a handheld with Compact Edition installed to do some order work. After the job is done, I want to move the new Orders table to our server or some other place. Of course, I don't want to move everything in the .sdf file. So, what is the best way to move the Orders table out to a separate .sdf file? (In my case, I cannot use RDA or replication.)
Using SSRS 2008 R2, is it possible to change the file extension of a CSV data-driven subscription? I'm outputting a text file with a .csv extension, but users have access to these files and I don't want them opened with Excel and then saved back in an incorrect format.
These are the options I have from the report "manage" option.
I want to encrypt certain data like passwords, SSNs, credit card info, etc. before saving it in the database. Also, this encrypted data should be queryable using standard SQL statements like:
select * from users where userid=454 and password = 'encrypted data'
The mechanism to encrypt data could be in a .NET application. The code that does the encryption/decryption should also be protected so that it doesn't work if it falls into the wrong hands.
Can anyone suggest what would be the best way to accomplish the above?
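
One hedged server-side option (SQL 2005 and later), instead of or alongside the .NET code, is EncryptByPassPhrase/DecryptByPassPhrase. The ciphertext is salted and differs on every call, so the lookup decrypts in the WHERE clause rather than encrypting the search value; the table and column names here are invented:

DECLARE @PassPhrase varchar(128), @PlainSSN varchar(11);
SET @PassPhrase = 'passphrase held by the application';
SET @PlainSSN   = '123-45-6789';

-- Store: encrypt on the way in.
INSERT INTO users (userid, ssn_encrypted)
VALUES (454, EncryptByPassPhrase(@PassPhrase, @PlainSSN));

-- Query: decrypt on the way out and compare.
SELECT *
FROM users
WHERE userid = 454
  AND CONVERT(varchar(11), DecryptByPassPhrase(@PassPhrase, ssn_encrypted)) = @PlainSSN;

Decrypting in the WHERE clause cannot use an index, and for passwords a one-way hash is generally preferable to reversible encryption.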
I wonder what would be the best way (and, to be honest, how to do it at all) to perform data normalization with SSIS. The scenario is as follows: I have a plain table with several columns in it.
1) Some of the columns can be copied straight into the destination table.
2) Some columns (strings) should be looked up in another table to get an ID.
3) On success, just replace the string with the ID.
4) On failure, create a new record in the lookup table and return the newly created ID.
Thanks for any ideas, and maybe short samples.
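
For the lookup step, a minimal sketch of the "return the ID, inserting it first if missing" logic as a stored procedure that an OLE DB Command could call for the no-match rows; all names are invented:

CREATE PROCEDURE dbo.GetOrAddLookupID
    @Name nvarchar(100),
    @ID int OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- Try to find the existing ID first.
    SELECT @ID = LookupID FROM dbo.LookupValues WHERE Name = @Name;

    -- Not found: create the record and return the new ID.
    IF @ID IS NULL
    BEGIN
        INSERT INTO dbo.LookupValues (Name) VALUES (@Name);
        SET @ID = SCOPE_IDENTITY();
    END
END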
Hi, I am loading data into tables with the constraints on and redirecting the error rows into a separate table. Is there a way to capture the error rows from an Execute SQL Task by loading the data directly without constraints, later adding the constraints back with an Execute SQL Task, and redirecting the failures to the error table? That approach would make the loads quicker; the approach I am using now works on a row-by-row basis. Also, if I drop the constraints, load the data, and then add the constraints back, will this produce the same error rows as the current approach? Please send me your suggestions.
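
A sketch of the set-based check that captures violating rows after a constraint-free load but before re-enabling the foreign key (all object names invented); whether it flags exactly the same rows as the row-by-row redirect depends on which constraints are involved:

-- Copy rows that would violate the FK into the error table.
INSERT INTO dbo.LoadErrors (OrderID, CustomerID)
SELECT f.OrderID, f.CustomerID
FROM dbo.OrdersLoad AS f
LEFT JOIN dbo.Customer AS c ON c.CustomerID = f.CustomerID
WHERE c.CustomerID IS NULL;

-- Remove them from the loaded table.
DELETE f
FROM dbo.OrdersLoad AS f
LEFT JOIN dbo.Customer AS c ON c.CustomerID = f.CustomerID
WHERE c.CustomerID IS NULL;

-- Re-enable the constraint with full checking.
ALTER TABLE dbo.OrdersLoad WITH CHECK CHECK CONSTRAINT FK_OrdersLoad_Customer;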
In an approach to building an ETL tool, we are in a situation where a table has to be loaded on an incremental basis. On the first run, all the records (approx 100 lacs, i.e. 10 million) have to be loaded. From the next run on, only the records that were updated since the last run of the package, or newly added, are to be pulled from the source database. One idea we had was to have two OLE DB Source components: in one, get the records that were updated or newly added (since we have update columns in the DB, getting them is fairly simple); in the other OLE DB Source, load all the records from the destination, pass them into a Merge Join, then have a Conditional Split down the pipeline and handle the updates and inserts.
Now the question is: how slow is this going to be? Will there be a case where the source DB returns records pretty fast and the Merge Join fails waiting for all the records from the destination?
What might be the ideal way to go about my scenario? Please advise...
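
For what it's worth, since update columns exist in the source, an alternative to merging against the full destination is to filter the delta at the source; a sketch of the source query, with invented column names, where @LastRunTime would map to a package variable or a control table:

DECLARE @LastRunTime datetime;
SET @LastRunTime = '2005-01-01';

-- Only pull rows touched since the last successful run.
SELECT SourceKey, Col1, Col2, LastUpdated
FROM dbo.SourceTable
WHERE LastUpdated > @LastRunTime;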
My input is a flat file source, and it has spaces in a few columns of the data. These columns are linked to another table as a foreign key, and when I try loading them into a relational structure, a foreign key violation occurs. Is there a standard method to replace these spaces?
What approach should I take so that the data gets loaded into a relational structure?
For example:

Name | Age | Salary | Address
dsds | 23  |        | fghghgh

Salary | description | level
2345   | nnncncn     | 4

Salary is used in this example; the data type is char in the real scenario.
What approach should I take to load the data while cleansing the spaces in SSIS?
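
One option is to clean the column in a staging table before the relational load (a Derived Column expression can do the equivalent inside the SSIS pipeline); the staging table and column names are placeholders:

-- Turn all-space values into NULL so blanks no longer hit the FK.
UPDATE stg.Employee
SET Salary = NULLIF(LTRIM(RTRIM(Salary)), '');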
I would like to know if there is a way to maintain the history of changes to the reports that have been published to the report server.
I know that the report definitions get saved in the ReportServer database. But let's say a user makes a change to a published report and then saves it back to the server, and that latest change was incorrect, so I have to revert to the previous version of the published report. Is there a way to do that? Does the report server maintain a history of previous versions?
There is a history for each report, and I think that corresponds to the history of report executions (output data). But I am talking about the history of the actual report definition.
I set up merge replication across the servers, but each morning I find that replication synchronization has stopped with the following error:
The merge process was unable to access row metadata at the 'Subscriber'. When troubleshooting, restart the synchronization with verbose history logging and specify an output file to write to, or use SQL Profiler to determine the source of the failure. (Source: MSSQL_REPL, Error number: MSSQL_REPL-2147200996)
To fix this, I manually restart the sync agent by stopping and then starting it.
Please tell me how I can fix this hellish message, or how I can start/stop the merge agent from the command line, so that I can script starting and stopping the agent as a job and configure it to run every morning.
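
Since merge agents normally run as SQL Server Agent jobs, one scriptable option is sp_stop_job/sp_start_job against the agent's job; the job name below is a placeholder:

EXEC msdb.dbo.sp_stop_job  @job_name = N'YourMergeAgentJobName';
EXEC msdb.dbo.sp_start_job @job_name = N'YourMergeAgentJobName';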
It's a well-known issue that one can't use dataset fields in a report header/footer directly. One approach is to create a query-based parameter that basically equals =First(Fields!@FieldName@.Value, "@DataSetName@") and use that parameter value instead. But it doesn't work in my case!
My report displays some entity's description and is parameterized with an EntityID parameter. Its header contains the entity name which, following the approach above, is queried from the data source through the EntityName report parameter. There's an important detail: the report is displayed in a ReportViewer control that is embedded in my application, and the entity ID parameter isn't set by the user in the ReportViewer parameters area. Its default value is changed by the application with the SetReportParameters() web method every time a user wants to view the report, according to the entity the user is exploring in the application. But after the report has been rendered, its header always contains the stale (outdated) entity name. Nevertheless, the report body contains current data (including the entity name). If I alter the entity ID parameter in the ReportViewer or in the web-based Report Manager and refresh the report, the header displays the correct entity name.
We are using SQL Server 2000 replication. I'm using the Merge Agent History screen to retrieve information about replication sessions. Is there any other screen to find out exactly which data was replicated in each session? Or at least to see the script generated for each session?
I have to write a stored procedure that will show the history of changes made to a given EmpNo, with the UpdateDate, UpdateUser, and an indication of which field was modified. For example: the employee's mobile number was changed from '134151235' to '23523657'.
Result must be:
EmpNo | UpdateDate | UpdateUser | Field changed | Change from | change to
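
Assuming a full-row history table already exists, a sketch (SQL 2005 and later) that pairs each version with its predecessor and reports one field; dbo.EmployeeHistory and its columns are assumptions, and the comparison pattern would be repeated for each audited field:

CREATE PROCEDURE dbo.GetEmployeeChangeHistory
    @EmpNo int
AS
BEGIN
    SET NOCOUNT ON;

    WITH Ordered AS (
        -- Number the versions of this employee in change order.
        SELECT EmpNo, UpdateDate, UpdateUser, Mobile,
               ROW_NUMBER() OVER (PARTITION BY EmpNo ORDER BY UpdateDate) AS rn
        FROM dbo.EmployeeHistory
        WHERE EmpNo = @EmpNo
    )
    SELECT cur.EmpNo,
           cur.UpdateDate,
           cur.UpdateUser,
           'Mobile'    AS [Field changed],
           prev.Mobile AS [Change from],
           cur.Mobile  AS [Change to]
    FROM Ordered AS cur
    JOIN Ordered AS prev
      ON prev.EmpNo = cur.EmpNo
     AND prev.rn = cur.rn - 1
    WHERE ISNULL(prev.Mobile, '') <> ISNULL(cur.Mobile, '');
END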
I have a package A which is copied from another existing package B, as most of the data structure and the ETL mappings are the same.
What I need to change in package A is the file in the Flat File Connection Manager. I can change it in the connection manager editor. However, it is automatically changed back to the previous one every time I Save All or run the package.
I also tried to copy/paste this Flat File Connection Manager within the same package, but the same thing happens. The package file is not set to 'Read Only', so everything else saves fine except the file name in the connection manager.
Is this a bug? I would really appreciate it if anyone could give me any ideas about this.
We have a need to report on historical data when none exists in the database.
The approach: create a tabular report RDL that is run on a regular schedule. Each run is saved as an Excel sheet that overwrites the previous run, which has the same name.
The report is designed to use two data sources: the database and the previously run Excel sheet. The data is "merged" using the Lookup function, thereby creating a new report that includes the history needed.
How can I dynamically change the file name in File connection Manager in SSIS package?
I can do this in SQL 2000, but how do I achieve the same thing in SQL 2005? I have to generate 10 different Excel files and just need to change the file name in the connection manager for each Excel file.
I have a master securities table which has 7 fields. As part of the daily process, I upload flat files into database tables. The flat files contain the master (static) security data as well as the analytics (transaction) data. I need to
1) separate the master (static) data from the flat files,
2) check whether that data is present in the master table; if not, insert it into the master table,
3) if the data is present, move the existing record to a history table and then update the main master table.
All 7 fields need to be checked to uniquely identify a single record in the master table.
How can this be done? Can we use a combination of data flow items, or should we write a SQL procedure to do all of this?
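
If the SQL-procedure route is chosen, a rough sketch against a staging table, using invented table and column names (SecurityID, SecName, Issuer); the real comparison would cover all 7 fields:

-- 1) Archive master rows that are about to be replaced.
INSERT INTO dbo.SecurityMasterHistory (SecurityID, SecName, Issuer, ArchivedAt)
SELECT m.SecurityID, m.SecName, m.Issuer, GETDATE()
FROM dbo.SecurityMaster AS m
JOIN stg.SecurityMaster AS s ON s.SecurityID = m.SecurityID
WHERE s.SecName <> m.SecName OR s.Issuer <> m.Issuer;

-- 2) Update the existing rows from staging.
UPDATE m
SET m.SecName = s.SecName,
    m.Issuer  = s.Issuer
FROM dbo.SecurityMaster AS m
JOIN stg.SecurityMaster AS s ON s.SecurityID = m.SecurityID;

-- 3) Insert brand-new securities.
INSERT INTO dbo.SecurityMaster (SecurityID, SecName, Issuer)
SELECT s.SecurityID, s.SecName, s.Issuer
FROM stg.SecurityMaster AS s
WHERE NOT EXISTS (SELECT 1 FROM dbo.SecurityMaster AS m
                  WHERE m.SecurityID = s.SecurityID);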
I want to automate my performance monitoring reports (SQL 2000), so I need to import the Performance Monitor output log file (.CSV) into a table. I feel converting the .CSV to .XLS is a good way to import the log through DTS. Is there any command/script to change the file type from .CSV to .XLS, so that I can include it as a DTS step?
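
For what it's worth, the .CSV can often be loaded directly without converting to .XLS; a hedged BULK INSERT sketch, assuming a simple comma-delimited file with one header row and a pre-created staging table (the path and table name are placeholders):

-- Load the Performance Monitor CSV straight into a staging table.
-- FIRSTROW = 2 skips the header row.
BULK INSERT dbo.PerfMonLog
FROM 'C:\PerfLogs\SQLCounters.csv'
WITH (FIELDTERMINATOR = ',', ROWTERMINATOR = '\n', FIRSTROW = 2);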