
Tuesday, March 27, 2012

Duplicate rows in 1 CSV file

Hi, I am trying to import data from CSV files to an OLE DB Destination. The CSV files contain all transactional changes. For example, for a particular record the first name, last name, and email address change within the same CSV file. I need to save only the last updated version of each record from the CSV files. I have tried "slowly changing dimensions", but that doesn't work when there are duplicates within the same CSV file. I have also tried 'Sort', but this only stores the first occurrence.
Any ideas how I can keep only the latest changed data from within one CSV file?

How does the process distinguish between the latest data and duplicate data?

Is there a date as part of the row or is it just the last row wins?

Kirk Haselden
Author "SQL Server Integration Services"

|||The last row wins. There is no timestamp field; this is the way it was designed by the third-party developers. Records are saved to the CSV files as changes are made. I have to put all these rows of data into a data warehouse without duplicating the primary key.
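Since there is no timestamp, one way to get "last row wins" is to stage the file with a load-sequence column and keep the highest sequence per key. A sketch only - the table and column names here are hypothetical, not the poster's schema:

-- Staging table with a load-order column (all names hypothetical)
CREATE TABLE dbo.StagingCustomer (
    LoadSeq    int IDENTITY(1,1) NOT NULL, -- records file order
    CustomerID int NOT NULL,
    FirstName  nvarchar(50) NULL,
    LastName   nvarchar(50) NULL,
    Email      nvarchar(128) NULL
)

-- After the data flow fills the staging table, keep only the last
-- occurrence of each key (ROW_NUMBER needs SQL Server 2005 or later)
INSERT INTO dbo.DimCustomer (CustomerID, FirstName, LastName, Email)
SELECT CustomerID, FirstName, LastName, Email
FROM (
    SELECT CustomerID, FirstName, LastName, Email,
           ROW_NUMBER() OVER (PARTITION BY CustomerID
                              ORDER BY LoadSeq DESC) AS rn
    FROM dbo.StagingCustomer
) AS s
WHERE rn = 1

Because the IDENTITY value reflects insertion order for a single-stream load, ordering by LoadSeq DESC makes the last row in the file win.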

Monday, March 19, 2012

Dump files in SQL 7.0 Enterprise

Hello,
I have been experiencing crash dump files on our SQL 7 Enterprise box several times per day for the last few months. I can't seem to match them up to a particular task or job.
Any ideas appreciated. Thank you.
Here is the head of one of the .dmp files:
=====================================================================
BugCheck Dump
=====================================================================
This file is generated by Microsoft SQL Server 1792.842
upon detection of fatal unexpected error. Please return this file,
the query or program that produced the bugcheck, the database and
the error log, and any other pertinent information with a Service Request.
Computer type is AT/AT COMPATIBLE.
Current local time is Wed Jul 21 12:13:00 2004
2 Intel x86 level 6, 797 Mhz processor(s).
Windows NT 5.0 Build 2195 CSD Service Pack 4.
Memory
MemoryLoad = 97% TotalPhys = 2047 MB AvailPhys = 47 MB
TotalPageFile = 3941 MB
Dump thread - spid = 29, PSS = 0x35799138, EC = 0x357992e8
Symbol load failed - Proceed to dump without any symbols
Stack Dump being sent to C:\MSSQL7\log\SQL08175.dmp
*******************************************************************************
*
* BEGIN STACK DUMP:
* 07/21/04 12:13:00 spid 29
*
* Input Buffer 42 bytes -
* s p _ r e s e t _ c o n n e c t i o n
*
*******************************************************************************
Parsing dump files is not something everybody can do, and you only provided a fraction of it. If you're crashing that often, you should contact MS PSS and let them work directly with you to fix the issue. They can certainly access the dump files properly.
http://support.microsoft.com/default.aspx?scid=fh;EN-US;sql SQL Support
http://www.mssqlserver.com/faq/general-pss.asp MS PSS
Andrew J. Kelly SQL MVP
"ddrewrey" <ddrewrey@.discussions.microsoft.com> wrote in message
news:A939C3EC-D341-4835-B4EC-E9FB7AF26638@.microsoft.com...
> Hello,
> I have been experiencing crash dump files on our SQL 7 Enterprise box
several times per day for the last few months. I can't seem to match them up
to a particular task or job.
> Any ideas appreciated. Thank you.
> Here is the head of one of the .dmp files:
> ================================================== ===================
> BugCheck Dump
> ================================================== ===================
> This file is generated by Microsoft SQL Server 1792.842
> upon detection of fatal unexpected error. Please return this file,
> the query or program that produced the bugcheck, the database and
> the error log, and any other pertinent information with a Service Request.
> Computer type is AT/AT COMPATIBLE.
> Current local time is Wed Jul 21 12:13:00 2004
> 2 Intel x86 level 6, 797 Mhz processor(s).
> Windows NT 5.0 Build 2195 CSD Service Pack 4.
>
> Memory
> --
> MemoryLoad = 97% TotalPhys = 2047 MB AvailPhys = 47 MB
> TotalPageFile = 3941 MB
> Dump thread - spid = 29, PSS = 0x35799138, EC = 0x357992e8
> Symbol load failed - Proceed to dump without any symbols
> Stack Dump being sent to C:\MSSQL7\log\SQL08175.dmp
>
************************************************** **************************
***
> *
> * BEGIN STACK DUMP:
> * 07/21/04 12:13:00 spid 29
> *
> * Input Buffer 42 bytes -
> * s p _ r e s e t _ c o n n e c t i o n
> *
>
************************************************** **************************
***


Sunday, March 11, 2012

Dumb question

I have a simple flow that loads a data table from some flat files. It works properly, but I can't figure out how to add only rows that don't already exist (so I won't get an error from the duplicate ID). I added a lookup that redirects records that don't match any ID, but when I run it I get a timeout error (?). It seems to pick up the right # of records to add, but when it gets to the SQL Server Destination it seems to generate a timeout.

SSIS package "ImportAL3.dtsx" starting.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Warning: 0x802092A7 at Data Flow Task, SQL Server Destination [872]: Truncation may occur due to inserting data from data flow column "SampleID" with a length of 4000 to database column "SampleID" with a length of 10.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Information: 0x4004300A at Data Flow Task, DTS.Pipeline: Validation phase is beginning.
Warning: 0x802092A7 at Data Flow Task, SQL Server Destination [872]: Truncation may occur due to inserting data from data flow column "SampleID" with a length of 4000 to database column "SampleID" with a length of 10.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Information: 0x40043006 at Data Flow Task, DTS.Pipeline: Prepare for Execute phase is beginning.
Information: 0x40043007 at Data Flow Task, DTS.Pipeline: Pre-Execute phase is beginning.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002785.AL3" has started.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 17]) + REPLICATE(" ",10 - LEN(TRIM([Column 17])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Warning: 0x800470D8 at Data Flow Task, Derived Column [1446]: The result string for expression "TRIM([Column 2]) + REPLICATE(" ",25 - LEN(TRIM([Column 2])))" may be truncated if it exceeds the maximum length of 4000 characters. The expression could have a result value that exceeds the maximum size of a DT_WSTR.
Information: 0x400490F4 at Data Flow Task, LookupGrade [2832]: component "LookupGrade" (2832) has cached 11 rows.
Information: 0x400490F4 at Data Flow Task, LookupTestID [5608]: component "LookupTestID" (5608) has cached 0 rows.
Error: 0xC0202009 at Data Flow Task, SQL Server Destination [872]: An OLE DB error has occurred. Error code: 0x80040E14.
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Cannot fetch a row from OLE DB provider "BULK" for linked server "(null)".".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "The OLE DB provider "BULK" for linked server "(null)" reported an error. The provider did not give any information about the error.".
An OLE DB record is available. Source: "Microsoft SQL Native Client" Hresult: 0x80040E14 Description: "Reading from DTS buffer timed out.".
Information: 0x4004300C at Data Flow Task, DTS.Pipeline: Execute phase is beginning.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002785.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002785.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002786.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002786.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002786.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002787.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002787.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002787.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002788.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002788.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002788.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002789.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002789.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002789.AL3" has ended.
Information: 0x402090DC at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002790.AL3" has started.
Information: 0x402090DE at Data Flow Task, Flat File Source [100]: The total number of data rows processed for file "C:\temp\LW002790.AL3" is 1.
Information: 0x402090DD at Data Flow Task, Flat File Source [100]: The processing of file "C:\temp\LW002790.AL3" has ended.
Information: 0x40043008 at Data Flow Task, DTS.Pipeline: Post Execute phase is beginning.
Information: 0x40043009 at Data Flow Task, DTS.Pipeline: Cleanup phase is beginning.
Information: 0x4004300B at Data Flow Task, DTS.Pipeline: "component "SQL Server Destination" (872)" wrote 6 rows.
Warning: 0x80019002 at Data Flow Task: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
Task failed: Data Flow Task
Warning: 0x80019002 at ImportAL3: The Execution method succeeded, but the number of errors raised (1) reached the maximum allowed (1); resulting in failure. This occurs when the number of errors reaches the number specified in MaximumErrorCount. Change the MaximumErrorCount or fix the errors.
SSIS package "ImportAL3.dtsx" finished: Failure.

You have quite a few errors there.

You might want to replace the SQL Server destination with an OLE DB Destination.

It seems that you have the right idea using the lookup transformation. The red arrow will direct non-matches.|||Thanks for the help, that worked!

Actually, I get just one error - the rest are just truncation warnings. I don't see any need to address them - if the file has a >4,000 character key, it's been corrupted, and I want it to fail in that case! The one error that I did get is solved by using an OLE DB destination. (I don't intend on scanning all files every time; normally I should get no duplicates, but for the sake of error tolerance I had to put a check in.)

Now I'm doing a bit of reading to see just why the SQL Server Destination fails in such a case - it emphasises BULK transfers; I'm not sure of the significance of this.

Another dumb question, is there any way to post graphic files here? Since coding is being replaced by a graphical interface, you can't really post the code here.
|||You need to have your SQL Server destination database running on the same machine that the package is stored on. Also, a SQL Server destination won't perform any automatic type conversions -- you have to be sure that the metadata going into it is an exact match for the SQL Server table data types.|||Well, the destination database is on the same machine. And when I went to the data table, cleared out the records in question, removed the SSIS module that checked for duplicate keys, and just went straight from the flat files into the database, it worked fine.
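For reference, the lookup-with-redirect approach is equivalent to this set-based insert if the files are first landed in a staging table - a sketch only, with placeholder table and column names:

-- Insert only rows whose key is not already in the destination
INSERT INTO dbo.Samples (SampleID, Grade, TestID)
SELECT s.SampleID, s.Grade, s.TestID
FROM dbo.StagingSamples AS s
WHERE NOT EXISTS (
    SELECT 1 FROM dbo.Samples AS t
    WHERE t.SampleID = s.SampleID
)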

Dug ourselves into a hole

We just upgraded from 6.5 to 2000. The original database lived on two devices, and as a result the upgrade process created an mdf and an ndf file for the primary filegroup.
Here is the current allocation
db_data1.mdf 9MB
db_data2.ndf 3600MB
Logically, we would use alter database to empty the file contents of the mdf file. SQL lets us do this. However, when we ask it to delete the file after it is emptied, it informs us that we cannot delete the primary file.
The only solution is to empty the db_data2.ndf and then remove it.
Is this the best solution? And is this process of moving data from one file to another highly reliable?|||I tend to use the primary file - the .MDF file - just for system objects and place all of my application objects in a separate file. Looks like you're there already. :-)
Tom
---
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com/sql
"Gil Darves" <gil.darves@.phtech> wrote in message
news:90CC2143-8DC0-4C9F-A8BC-37D3FE369899@.microsoft.com...
We just upgraded from 6.5 to 2000. The original database lived on two
devices and as a result the upgrade process created an mdf and ndf files for
the primary filegroup.
Here is the current allocation
db_data1.mdf 9MB
db_data2.ndf 3600MB
Logically, we would use alter database to empty the file contents of the mdf
file. SQL lets us to this. However, when we ask it to delete the file after
it is emptied, it informs us that we cannot delete the primary file.
The only solution is to empty the db_data2.ndf and then remove it.
Is this the best solution? And is this process of moving data from one file
to another highly reliable?|||Thank you Thomas.
Yes, I understand about splitting objects, but this pre-supposes the secondary file is on another filegroup. It is not; it's still the primary filegroup.
So I guess I am still back to square one, and I am currently testing the process of moving the data from the ndf back to the mdf. Looks like it is a very slow process, and with 36 GB it will take over 4 hours to accomplish. So I am curious, can you perform this action if there is moderate activity on the db? We are thinking of doing this over the weekend.
Thanks again!
Gilles de Montreal|||Oh, OK. I'd still go with two filegroups. The big problem here is the
migration, which you can do in stages - as long as the users don't need
access during the migration. First, create a new, single-file filegroup.
Make sure it has lots of space. Then, for each table to be moved:
1. Drop all nonclustered indexes.
2. Drop the clustered index.
3. Create the clustered index on the new filegroup.
4. Create the nonclustered indexes.
Start with the largest table. It could be that you have to do this in a
series of outages. Ideally, you should take a backup of production and try
this out on a similar server.
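A minimal sketch of step 3 above, assuming a hypothetical table dbo.BigTable keyed on KeyCol and a new filegroup named SECONDARY:

-- Rebuilding the clustered index on the new filegroup moves the
-- table's data pages there (all names are hypothetical)
CREATE CLUSTERED INDEX IX_BigTable_KeyCol
ON dbo.BigTable (KeyCol)
ON [SECONDARY]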
You can still go the other way and refill the primary file. Consider the
following:
use Northwind
go
alter database Northwind
add file
(
name = Second
, filename = 'E:\temp\Second.ndf'
) to filegroup [PRIMARY]
go
sp_helpfile
go
dbcc shrinkfile (1, emptyfile)
go
dbcc shrinkfile (3, emptyfile)
go
alter database Northwind
remove file Second
This pushes all of the data back into the primary file. (I'd expand it
first.)
HTH
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA
SQL Server MVP
Columnist, SQL Server Professional
Toronto, ON Canada
www.pinnaclepublishing.com/sql


Dual Transaction Log

I have just taken over an SQL Server 7.0 database which has two transaction log files. One is on the system drive, and the other is on the same drive as the database.
The transaction log on the system drive is huge, leaving very little free space, whilst the database drive has tonnes of free space.
How do I remove the log file on the system drive, and just use the second one?
Thank you
Nick
Hi,
The best way is:
1. Detach the database using sp_detach_db.
2. Copy the LDF file on the system drive to the new location.
3. Attach the database using sp_attach_db, specifying the new LDF location.
One more method to remove the LDF is to back up the database and TX log, shrink the LDF file using DBCC SHRINKFILE, and remove the file using ALTER DATABASE.
See the link below for details:
http://groups.google.com/groups?q=remove+log+file+in+sql+7&hl=en&lr=&ie=UTF-8&selm=us%24RwvUb%24GA.223%40cppssbbsa02.microsoft.com&rnum=5
Thanks
Hari
MCDBA
"nh" <anonymous@.discussions.microsoft.com> wrote in message
news:egnAeCymEHA.2372@.TK2MSFTNGP10.phx.gbl...
>I have just taken over an SQL Server 7.0 database which has two transaction
> log files. One is on the system drive, and the other is on the same drive
> as
> the database..
> The transaction log on the system drive is huge, leaving very little free
> space, whilst the database drive has tonnes of free space..
> How do I remove the log file on the system drive, and just use the second
> one?
> Thnak you
> Nick
>
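A sketch of the detach/copy/attach method above, with a hypothetical database name and paths. Note that every data and log file the database uses must be listed in sp_attach_db:

USE master
GO
EXEC sp_detach_db 'MyDB'
-- (move the LDF off the system drive at the OS level here)
EXEC sp_attach_db @dbname = 'MyDB',
    @filename1 = 'D:\Data\MyDB.mdf',
    @filename2 = 'D:\Data\MyDB_log.ldf'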


Wednesday, March 7, 2012

dtUtil question - How to include miscellaneous files in deployment?

Hello,

I'm wondering how to use the dtUtil command to deploy an ssis package but also include any of the miscellaneous files, such as some dtsconfig files. I'd like to be able to use the dtUtil command to specify where to put any "dependencies" that I have. If I create a deployment utility from BI Studio, I have the option on the third screen to choose the folder for dependencies, how come I don't see any parameters for that on the dtUtil command? Is it possible, or do I HAVE to deploy using the deployment utility created for me by BI Studio?

Thanks in advance,

Andy

When deploying to a file system, the /COPY option should work for dependencies as well.

For example:

dtutil /File c:\2.reg /copy File;c:\3.reg

|||[quote]
dtutil /File c:\2.reg /copy File;c:\3.reg
[/quote]
That's very interesting - as long as you are 'deploying' to the same machine. Maybe it's just me, but 'deploying' to the same machine isn't really 'deployment'; normally you need to deploy to a remote machine, and if you try to specify a /destserver, you get:

The File destination cannot be specified with the DTS, SQL, Server, User, or Password destinations.
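One workaround worth trying (untested here): the FILE destination accepts a UNC path, so you may still be able to copy a package and its dependencies to a remote machine's file system without /DestServer, e.g.:

dtutil /FILE c:\packages\MyPackage.dtsx /COPY FILE;\\remoteserver\deploy\MyPackage.dtsx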

DTSTransformationStatus constants!

Hi
I am building a DTS package to read IIS log files into a SQL Server database.

It is running smoothly, but sometimes the log file includes stray CR LFs, unexpected text, etc., so the DTS package can't read the fields properly, it fails, and I receive an error.

I plan to use an ActiveX Script in VBScript to have more control over the flow of the data transformation.

Now, how can I use the DTSTransformationStatus constants to control my loops and check the different source fields?

I would appreciate examples that control the flow of the transformation of a text file.

Thank you for your help in advance.

Khalid|||How big are the log files? What are the patterns to these hiccups? Have you thought about writing a small app (VB, Perl) that corrects the file first, then using DTS to import the file - you could actually make this part of your DTS package (calling the app)?|||The log files are 50 MB to 60 MB in size.

About writing an application to correct the log file, this is exactly what I intend to do, and since DTS provides powerful tools through VBScript or JScript to give more control over the transformation process, I have decided to use them.

There are constants such as DTSTransformStat_Error, DTSTransformStat_ErrorSkipRow, etc. These constants can be used to control the flow of the transformation process and to handle errors, if any.

I need an example using these constants to make things clearer so I can build my own application.

Do you have any idea about these constants?

Thanks
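Not an authoritative answer, but a minimal sketch of the kind of ActiveX transformation that uses these constants (note they are actually named DTSTransformStat_*; the column names Col001 and Field1 are placeholders):

' Skip blank rows, send malformed rows to the exception file,
' and copy good rows through
Function Main()
    If Len(Trim(DTSSource("Col001"))) = 0 Then
        ' Blank field: silently skip this row
        Main = DTSTransformStat_SkipRow
    ElseIf InStr(DTSSource("Col001"), vbCr) > 0 Then
        ' Unexpected CR inside the field: flag the row as an
        ' error row and keep the transformation running
        Main = DTSTransformStat_ErrorSkipRow
    Else
        DTSDestination("Field1") = DTSSource("Col001")
        Main = DTSTransformStat_OK
    End If
End Function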

Sunday, February 26, 2012

dtsconfig file is binary?

Hello all,

I'm trying to easily change a value in 80 ssis configuration files from "localhost" to "myservername". I downloaded WinGrep to do this, but it's balking on some of the files, saying that they are binary files.

Does anyone know how to make the *.dtsconfig files NOT binary in SSIS?

Any suggestions GREATLY appreciated.

I won't go into the reason I have my servername in 80 places...

Andy

The .dtsconfig files are XML files - i.e. mostly text, but sometimes you need to be careful with the encoding (typically utf-8) to keep it valid XML. I have no experience with WinGrep, or why it thinks it is "binary". I would suggest using XML-aware tools to modify .dtsconfig.|||

Thanks for the tip Michael. Do you know if notepad is XML-aware? Because that's the only tool I used to edit them.

Also, I tried just creating a clean .dtsconfig file from the package configurations wizard and then pointing WinGrep at that file and I got the same thing, saying it was binary. I didn't modify it at all outside of the wizard.

|||The .dtsConfig file is a text file, in UTF-8 encoding. If you don't configure any localized (non-English) properties, UTF-8 is identical to ASCII encoding - in that case any text processor can work with it. Notepad can typically detect the encoding correctly even if you have non-English properties, so it should be OK in any case.

Again, I have no idea why WinGrep thinks it is binary, maybe because it is just one line.|||

Hi Michael,

You hit the nail on the head with that last sentence. I just put a carriage return in the file so it was more than 1 line, and now WinGrep doesn't think it's binary anymore. Weird. Is there any way to make it so that a configuration file is created on more than 1 line?

|||

Ok, so in order to make it so it's not just 1 line, in BI Studio, with your config file open in the center pane and your cursor inside the file, go to Edit/Advanced/Format Document. This will format the document better as well as put it on more than 1 line. Now WinGrep has no problem.
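As an aside, the original bulk edit (changing "localhost" to "myservername" in 80 files) can be scripted. A rough sketch using the FileSystemObject, assuming the folder path is adjusted and the files contain only ASCII characters (per the UTF-8 note above):

' ReplaceServerName.vbs - sketch only; the folder path is hypothetical
Dim fso, f, ts, text
Set fso = CreateObject("Scripting.FileSystemObject")
For Each f In fso.GetFolder("C:\ssis\configs").Files
    If LCase(fso.GetExtensionName(f.Name)) = "dtsconfig" Then
        Set ts = f.OpenAsTextStream(1)   ' 1 = ForReading
        text = ts.ReadAll
        ts.Close
        Set ts = f.OpenAsTextStream(2)   ' 2 = ForWriting (overwrites)
        ts.Write Replace(text, "localhost", "myservername")
        ts.Close
    End If
Next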

Friday, February 24, 2012

DTS_E_COLUMNDATAOVERFLOWDISKIOBUFFER Error

Hi! I'm a newbie and I'm trying to create an SSIS package that loops through a directory of comma-delimited text files.

Could anybody help? I'm having a problem when I run my package.

[Flat File Source [524]] Error: The column data for column "IndicatorGroupID" overflowed the disk I/O buffer.
[Flat File Source [524]] Error: An error occurred while processing file "\\192.168.0.47\Temp\detail-2864EGGPUSH-20060315.txt" on data row 1.
[DTS.Pipeline] Error: The PrimeOutput method on component "Flat File Source" (524) returned error code 0xC0202092. The component returned a failure code when the pipeline engine called PrimeOutput(). The meaning of the failure code is defined by the component, but the error is fatal and the pipeline stopped executing.

Please help...

:)

Try directing all erroring rows into an error sink using the error output from your flat file source so you can examine them later. I imagine you have some erroneous data in there.

--Jamie

|||Thanks! I've already found the error. You were correct; I was not able to filter my loop. As it happened, it looped through all the files in the directory, even the unparsed ones. I only needed it to loop through the parsed ones. Thanks again!

Sunday, February 19, 2012

DTS, driver ODBC for Btrieve

Hi,
I need to transform data from Btrieve files (.dt) into other formats. Where can I download an ODBC or OLE DB driver for Btrieve?
And how do I install this driver?
Thanks.|||See the vendors list (http://www.sqlsummit.com/oledbVen.htm) and get it.|||Have you already used this vendors list?
Do any shareware or freeware drivers exist?|||I've used Data Direct Technologies; as for shareware, make sure to test it before deploying on a production server.

Freeware/shareware are easy to get, but make sure there is no security threat to the server.|||What does Data Direct Technologies do? Is it an alternative to DTS?
Please give me download links for Data Direct Technologies.|||I think you missed the referred link; the http://www.datadirect-technologies.com/ link is listed there.|||I looked only at those links which support Btrieve.
I still have not found a shareware/freeware driver.

Thanks for the discussion.|||Then you may contact the vendors listed for further information.

DTS zipping a file

Hello there.
I have this DTS that creates 5 files. The last step in the DTS is an ActiveX task that moves the files to a certain folder. But now what I really need to do is zip the files and leave them on the server.

This DTS is executed by a job on the SQL Server, and the files are created on a network share to which the SQL Server user that executes the job has access.

I've been trying to find out information on how to zip the folder and I've come up with two options:

1. install winzip on the server and run winzip32.exe passing arguments to it.
2. use the standard zipfldr.dll that comes with Windows to do the job.

Number one is out of my reach since the IT department doesn't want the software installed, so I have to stick with number 2.

Now I've read that using rundll32.exe I can run the dll as an exe, but I need the entry point to the dll, which I cannot find besides RouteTheCall %L, which actually opens an explorer window with the folder or zip file I pass to it as an argument.

Does anyone know if this is possible, either using rundll32.exe or automating the task through an ActiveX task?

TIA.

I would really like to find the answer to this question as well, but I don't see any replies, and I noticed that this posting is 8 months old. I also could not find any information on the web as of yet. If I find anything I will respond to this post.

Steve

|||I recently thought of the 2nd method, but it's too difficult. If you find out how to use zipfldr.dll, please tell me. Thanks a lot. I'll let you know if I figure out a clear way to use it.|||

I am writing a program to compress data. I want to call zipfldr.dll from C#.NET and am eager to know which calls to use, and in what order. Please tell me at jenkarn_7@.hotmail.com, thanks.

|||

I just searched for "zip file" on msdn... http://blogs.msdn.com/dotnetinterop/archive/2006/04/05/567402.aspx

|||Easiest method of compressing files is to use rar.exe (WinRar CLI).

rar a C:\MyCompressedfile.rar C:\FilesToCompress\*.xxx

Can run that through a CLI in SSIS and DTS.
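If the command-line route is acceptable, the call can also be made from the existing ActiveX task rather than a separate step. A sketch - the archiver command and folders are assumptions:

' Shell out to the archiver and wait for it to finish
Dim oShell
Set oShell = CreateObject("WScript.Shell")
' 0 = hidden window, True = wait for completion
oShell.Run "rar a C:\Out\Files.rar C:\FilesToCompress\*.*", 0, True
Set oShell = Nothing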


Friday, February 17, 2012

DTS with ActiveX script schedule problem

Hello gang, can someone help me with a problem?
I have a DTS package that uses ActiveX script to copy and rename files. I
can run this DTS package manually and the files are renamed and copied.
However, if I schedule this package with a job, the job runs successfully but the files are not copied and renamed. I have logged both the DTS activity and the Job activity, and everything appears successful but no files are written.
Has anyone run into this same problem?
These are the items I have thoroughly checked out:
The paths are correct (because the DTS package works)
The owner of the DTS has privileges to write (because the DTS package works)
The owner of the Job has privileges to write (because the log activity shows the steps in the DTS running)
Thanks for your help in advance.
Thanks Kllyj64|||BTW, both the Job and the DTS package are running on the actual server, so I am not having a problem about "where I am executing the objects."
--
Thanks Kllyj64
"kllyj64" wrote:

> Hello gang, can someone help me with a problem?
> I have a DTS package that uses ActiveX script to copy and rename files. I
> can run this DTS package manually and the files are renamed and copied.
> However, if I schedule this package with a job, the job runs successfully
but
> the files are not copied and renamed. I have logged both the DTS activity
> and Job activity and everything appears successful but no files are writte
n.
> Has anyone ran into this same problem.
> These are the items I have thouroughly checked out:
> The paths are correct (because the DTS package works)
> The owner of the DTS has privleges to write(because the DTS package works)
> The Owner of the Job has privileges to write(because the log activity show
s
> the steps in the DTS running)
> Thanks for your help in advance.
>
> --
|||I found the problem: SQL jobs run under NT and do not recognize mapped drives. I changed the DTS ActiveX scripts to use UNC paths and the job worked. I should have looked further before posting.
--
Thanks Kllyj64
"kllyj64" wrote:
[vbcol=seagreen]
> BTW. Both the Job and the DTS package are running on the actual server, s
o I
> am not having a problem about "where I am executing the objects."
> --
> Thanks Kllyj64
>
> "kllyj64" wrote:
>
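For anyone hitting the same issue, the fix amounts to replacing mapped-drive letters with UNC paths inside the ActiveX script. A sketch with hypothetical paths:

' Mapped drives (e.g. Z:\) belong to a user session and are not
' visible to the SQL Agent service account; UNC paths work in both
Dim fso
Set fso = CreateObject("Scripting.FileSystemObject")
fso.CopyFile "\\fileserver\share\in\data.csv", "\\fileserver\share\archive\data.csv", True
Set fso = Nothing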


DTS vs triggers

Hi,

I've got the following problem:

I'm using DTS to transfer data from CSV files into a table. In the files there are often duplicated rows, therefore I made a trigger on the table.

The trigger works fine when I try it in Query Analyzer, but it is not fired from the DTS.

I've learned that I have to turn off the fast load option in the properties of the transform data task object (fast load is a bulk insert, which does not fire triggers by default).
When I turn this option off, the task will fail with the following
message:

"The number of failing rows exceed the maximum specified.
Cursor operation conflict."

I wonder why this happens. There are no check constraints on the table
against duplication.

Thanks in advance for any help|||Bijesz wrote:
...
> "The number of failing rows exceed the maximum specified.
> Cursor operation conflict."
> I wonder why this happens. There are no check constraints on the table
> against duplication.

I may have an answer:

Let's assume you've saved your DTS job on the SQL Server, so that when
you are in Ent. Mgr. you can click on the DTS folder, click on the
'local packages' icon, and find your saved DTS job there.

Once there, you'll see the DTS job displayed as a set of 3 objects: 1) the Excel icon for your source; 2) an arrow pointing to your MSSQL destination; and 3) your MSSQL destination.

If you right-click on the Excel or MSSQL objects and open the
Properties window, as I did, you'll be perplexed.

It finally occurred to me to right-click on that arrow and open up yet another Properties window there.

Once there, if you click on the 'Options' tab, you'll find a 'Max Error
Count' property. Nice place to hide it, eh?

hth,

victor dileo
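For completeness, the dedupe trigger the original poster describes would typically be an INSTEAD OF INSERT trigger along these lines (SQL Server 2000 or later; table and column names are made up). With fast load off, rows arrive one at a time, so checking the target table alone is enough:

-- Hypothetical target table keyed on RecordID
CREATE TRIGGER trg_ImportTarget_Dedupe
ON dbo.ImportTarget
INSTEAD OF INSERT
AS
BEGIN
    -- Insert only rows whose key is not already present
    INSERT INTO dbo.ImportTarget (RecordID, Payload)
    SELECT i.RecordID, i.Payload
    FROM inserted AS i
    WHERE NOT EXISTS (
        SELECT 1 FROM dbo.ImportTarget AS t
        WHERE t.RecordID = i.RecordID
    )
END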