Wednesday, March 28, 2012
Persisting XML
I would like to save the output of a FOR XML stored procedure to a file on my hard drive with a filename of my choosing. Can anybody point me in the right direction on how to do this? (I've got the FOR XML sproc written, I just need help figuring out how to persist it to a file.) Thanks.
JT
Hi JT,
I noticed you made another post with the topic 'Persist Sproc OUTPUT' in the
newsgroup microsoft.public.sqlserver.programming. A community
member has added his reply to that thread, and I will add my reply if you
have follow-up questions in that thread.
Thank you for your patience and cooperation. If you have any questions or
concerns, don't hesitate to let me know. We are here to be of assistance!
Sincerely yours,
Mingqing Cheng
Microsoft Online Support
Get Secure! - www.microsoft.com/security
This posting is provided "as is" with no warranties and confers no rights.
Please reply to newsgroups only, many thanks!
|||See the thread entitled "extracting results of XML stored proc to file"
started by Mike UK in this newsgroup.
Cheers,
Graeme
--
Graeme Malcolm
Principal Technologist
Content Master Ltd.
www.contentmaster.com
www.microsoft.com/mspress/books/6137.asp
"JT" <JTnospam@.verizon.net> wrote in message
news:A3758B34-4AC8-4675-8326-319C40508CFF@.microsoft.com...
> Hi all,
> I would like to save the output of a FOR XML stored procedure to a file on
my hard drive with a filename of my choosing. Can anybody point me in the
write direction on how to do this. (I've got the FOR XML sproc written, I
just need help figuring out how to persist it to a file. Thanks.
> JT
|||Thanks Graeme.
JT
Persisting Package Variables
I was able to successfully write a Script Task to set the values of several package variables. But when execution completes, the variables still contain their default values.
What I want is a method of persisting the values assigned to a variable through a script. I believe this was possible in DTS.
The variable is not saved because you have not saved the package. There is really no way to do this, so anything you wish to persist between package executions should be stored externally. If you want to see the value of variables during execution, use a breakpoint and drag the variables to the watch window. You do not see the value in BIDS because the package loaded in BIDS is not the same instance as the one being executed. When you execute, the package is saved and a copy is loaded into the memory of the execution host. The copy in BIDS remains where it is as the debug copy.
In DTS, when executing through Enterprise Manager, it was the same copy, so you could see the value of variables post-execution. The values were still not saved, though; it was just a consequence of having only one instance of the package in memory. If you were executing through DTSRUN and variables changed, they did not get persisted, as the package was not saved. The changed copy was thrown away once execution completed.
Persisting data in custom aggregate
I want to create a custom aggregate to calculate percentiles.
For example:
The 50th percentile is calculated by extracting the value(s) in the centre
of a sorted dataset.
What I need in this case is a sorted dataset so that I can extract values at
specific indexes to be able to calculate different percentiles.
1) At the moment I'm using a Private ArrayList to persist the intermediate
data.
2) The Accumulate method is used to add the data to the ArrayList.
3) I use the Terminate method to sort the ArrayList and extract the values I
need at specific index(es).
4) The Merge method appends two ArrayLists.
Unfortunately I have no idea what to do in the Read and Write methods
concerning the ArrayList.
The way I understand it is that the Write method should write the ArrayList
in a binary format which the Read method is able to consume.
I am quite new at this so any ideas would be appreciated.
JR Malherbe|||James
Does it relate somehow to SQL Server?
If it does, please post DDL + sample data + expected result.
"JamesM" <JamesM@.discussions.microsoft.com> wrote in message
news:BE227508-8478-4758-903B-5DDB2E25D723@.microsoft.com...
>I want to create a custom aggregate to calculate percentiles.
> For example:
> The 50th percentile is calculated by extracting the value(s) in the centre
> of a sorted dataset.
> What I need in this case is a sorted dataset so that I can extract values
> at
> specific indexes to be able to calculate different percentiles.
> 1) At the moment I'm using a Private ArrayList to persist the intermediate
> data.
> 2) The Accumalate method is used to add the data to the ArrayList.
> 3) I use the Terminate method to sort the ArrayList and extract the values
> I
> need at specific index(es)
> 4) The Merge method appends two ArrayList's
> Unfortunately I have no idea what to do in the Read and Write methods
> concerning the ArrayList.
> The way I understand it is that the Write method should write the ArrayList
> in a binary format which the Read method is able to consume.
> I am quite new at this so any ideas would be appreciated.
> --
> JR Malherbe|||JamesM wrote:
> I want to create a custom aggregate to calculate percentiles.
> For example:
> The 50th percentile is calculated by extracting the value(s) in the centre
> of a sorted dataset.
> What I need in this case is a sorted dataset so that I can extract values at
> specific indexes to be able to calculate different percentiles.
> 1) At the moment I'm using a Private ArrayList to persist the intermediate
> data.
> 2) The Accumalate method is used to add the data to the ArrayList.
> 3) I use the Terminate method to sort the ArrayList and extract the values I
> need at specific index(es)
> 4) The Merge method appends two ArrayList's
> Unfortunately I have no idea what to do in the Read and Write methods
> concerning the ArrayList.
> The way I understand it is that the Write method should write the ArrayList
> in a binary format which the Read method is able to consume.
> I am quite new at this so any ideas would be appreciated.
> --
> JR Malherbe
SQL Server 2005? Use the NTILE or RANK aggregate functions. I don't see
why you would need a user-defined aggregate but if you still think you
do then please give us a better spec.
David Portas, SQL Server MVP
Whenever possible please post enough code to reproduce your problem.
Including CREATE TABLE and INSERT statements usually helps.
State what version of SQL Server you are using and specify the content
of any error messages.
SQL Server Books Online:
http://msdn2.microsoft.com/library/ms130214(en-US,SQL.90).aspx
--|||"Uri Dimant" wrote:
> James
> Does it relate somehow to SQL Server?
> If it does ,please post DDL+ sample data+ expected result.
This relates to extending SQL Server 2005 with CLR functions (in this case a user
defined aggregate).
I'm trying to write an aggregate which does the same as the PERCENTILE
function in Excel.
I'll post my code and expected results in reply to David Portas|||"David Portas" wrote:
> SQL Server 2005? Use the NTILE or RANK aggregate functions. I don't see
> why you would need a user-defined aggregate but if you still think you
> do then please give us a better spec.
Unfortunately not.
The function needs to do what the PERCENTILE function in Excel does.
example 1 (odd number of records)
You have a sorted list of values: (20,23,34,56,58,61,67,70,72,84,85)
In this case the 50th percentile is the center value = 61
example 2 (even number of records)
You have a sorted list of values: (20,23,34,56,58,61,67,70,72,84,85,88)
In this case the 50th percentile is the average of the two center values
= (61+67)/2 = 64
Here is my code:
using System;
using System.Data;
using System.Data.Sql;
using System.Data.SqlTypes;
using System.Collections;
using Microsoft.SqlServer.Server;
using System.IO;
[Serializable]
[SqlUserDefinedAggregate(
    Format.UserDefined,
    IsInvariantToNulls = true,
    IsInvariantToDuplicates = false,
    IsInvariantToOrder = false,
    MaxByteSize = 8000)
]
public class Median : IBinarySerialize
{
    private ArrayList intermediateResult;

    public void Init()
    {
        intermediateResult = new ArrayList();
    }

    public void Accumulate(SqlDouble Value)
    {
        if (Value.IsNull)
        {
            return;
        }
        intermediateResult.Add(Value.Value);
    }

    public void Merge(Median Group)
    {
        intermediateResult.InsertRange(0, Group.intermediateResult);
    }

    public SqlDouble Terminate()
    {
        double output = double.NaN;
        if (intermediateResult != null && intermediateResult.Count > 0)
        {
            double cntItem = intermediateResult.Count;
            if (cntItem > 10)
            {
                intermediateResult.Sort();
                double pos_relative = (cntItem - 1) * 0.5; //median
                int pos1 = Convert.ToInt32(Math.Floor(pos_relative));
                double pos_fraction = pos_relative - pos1;
                if (pos_fraction == 0)
                    output = Convert.ToDouble(intermediateResult[pos1]);
                else
                    output = Convert.ToDouble(intermediateResult[pos1]) +
                        pos_fraction * (Convert.ToDouble(intermediateResult[pos1 + 1]) -
                        Convert.ToDouble(intermediateResult[pos1]));
            }
        }
        return new SqlDouble(output);
    }

    public void Read(BinaryReader r)
    {
        //need to read binary to intermediateResult from the format that was used below
    }

    public void Write(BinaryWriter w)
    {
        //need to write intermediateResult to a binary format
    }
}|||James
CREATE TABLE #Test
(
num INT
)
INSERT INTO #Test VALUES (20)
INSERT INTO #Test VALUES (23)
INSERT INTO #Test VALUES (34)
INSERT INTO #Test VALUES (56)
INSERT INTO #Test VALUES (58)
INSERT INTO #Test VALUES (61)
INSERT INTO #Test VALUES (67)
INSERT INTO #Test VALUES (70)
INSERT INTO #Test VALUES (72)
INSERT INTO #Test VALUES (84)
INSERT INTO #Test VALUES (85)
INSERT INTO #Test VALUES (88)
SELECT AVG( b3.num )
FROM (
SELECT MAX( b1.num )
FROM (
SELECT TOP 50 PERCENT b.num
FROM #Test AS b
ORDER BY b.num ASC
) AS b1
UNION ALL
SELECT MIN( b2.num )
FROM (
SELECT TOP 50 PERCENT b.num
FROM #Test AS b
ORDER BY b.num DESC
) AS b2
) AS b3( num )
"JamesM" <JamesM@.discussions.microsoft.com> wrote in message
news:7945781D-56E4-4C5B-B915-2AD4284FBA04@.microsoft.com...
> "David Portas" wrote:
> Unfortunately not
>
> The function needs to do what the PERCENTILE function in Excel does
> example 1 (uneven number of records)
> You have a sorted list of values: (20,23,34,56,58,61,67,70,72,84,85)
> In this case the 50th percentile is the center value = 61
> example 2 (even number of records)
> You have a sorted list of values: (20,23,34,56,58,61,67,70,72,84,85,88)
> In this case the 50th percentile is the average of the two center values
> = (61+67)/2 = 64
> [code snipped]
>|||Thanks Uri
We also have workarounds in SQL, which are very tedious if you are calculating
different percentiles (10th, 25th, 50th, ...) in different columns and at
different break levels.
We chose to go the route of a custom aggregate since this will save us lots
of work over the next 10 months.
All I need to know is how to persist an ArrayList for a user defined
aggregate.
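Since the thread never shows them, here is one possible shape for the missing Read and Write methods of the aggregate posted above (a sketch only: the on-disk format of an Int32 count followed by the raw double values is my assumption, not anything prescribed by SQL Server):

```csharp
// Sketch: serialize the ArrayList of doubles used by the Median aggregate.
// Format (assumed, not prescribed): Int32 count, then that many doubles.
public void Write(BinaryWriter w)
{
    w.Write(intermediateResult.Count);       // item count first
    foreach (double d in intermediateResult)
        w.Write(d);                          // then each raw value
}

public void Read(BinaryReader r)
{
    int count = r.ReadInt32();               // count written by Write above
    intermediateResult = new ArrayList(count);
    for (int i = 0; i < count; i++)
        intermediateResult.Add(r.ReadDouble());
}
```

Bear in mind that MaxByteSize = 8000 caps the serialized state: with 4 bytes for the count plus 8 per value, this holds at most 999 doubles, and larger groups would fail at serialization time.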
persisting data for a pluggable tree
I am rather new to database design and modelling concepts in general
and was hoping for some advice on a problem I am trying to solve. I
have designed a piece of software that creates a tree with pluggable
nodes. Each node class can have 0 to n distinct classes plugged into
it to define the type for that node.
For example, a node plugged with a 'customer' class and an 'engineer'
class would indicate that this node in the tree is an engineer who is
also a customer. We could also have a 'owner', 'engineer' etc.
I now want to persist this tree in an SQL Server 2000 Database. I have
chosen to implement the nested set model, and have thought about the
following table design:
table NODE_TABLE:
lft INTEGER
rft INTEGER
propsID INTEGER
table PROPERTIES_TABLE:
propsID INTEGER
tableName VARCHAR
table CUSTOMER_TABLE:
propsID INTEGER
firstname CHAR
lastname CHAR
table ENGINEER_TABLE:
propsID INTEGER
num_completed_projects INTEGER
degree CHAR
school CHAR
table OWNER_TABLE:
propsID INTEGER
companyName CHAR
So, given the above example - I would have a NODE_TABLE that links to 2
entries in PROPERTIES_TABLE. One entry would link to an entry in the
CUSTOMER_TABLE, the other to an entry in ENGINEER_TABLE.
Are there any more efficient solutions to this problem? As i said, I
am very new to DB design and would welcome any feedback or suggestions
of how else I might model my pluggable tree in a Database. Thank you,
Bob Yohan|||You can get a copy of my book TREES & HIERARCHIES IN SQL for several
ways to model these things in SQL. But what you are trying to do is
force an OO model into SQL, and it is not a good idea. There are no
classes or links in an RDBMS; we have tables and references. We do not
mix data and metadata in a schema. The data model does not change
while the application runs.
Many years ago, the INCITS H2 Database Standards Committee(nee ANSI
X3H2 Database Standards Committee) had a meeting in Rapid City, South
Dakota. We had Mount Rushmore and Bjarne Stroustrup as special
attractions. Mr. Stroustrup did his slide show about Bell Labs
inventing C++ and OO programming for us and we got to ask questions.
One of the questions was how we should put OO stuff into SQL. His
answer was that Bell Labs, with all their talent, had tried four
different approaches to this problem and come to the conclusion that you
should not do it. OO was great for programming but deadly for data.
I have watched people try to force OO models into SQL and it falls
apart in about a year. Every typo becomes a new attribute or class,
queries that would have been so easy in a relational model are now
multi-table monster outer joins, redundancy grows at an exponential
rate, constraints are virtually impossible to write so you can kiss
data integrity goodbye, etc.|||Thanks for your reply. I actually bought your book a couple of weeks
ago, which led to my choosing the nested set implementation. Great
book, I'd recommend it to anyone looking for a good summary of the
various methods of modelling trees in SQL. I guess my problem is not
how to model the tree itself given that I have the book, but rather how
to persist what is obviously an already built OO model in a RDBMS. I'm
sure your statements about the problems of mixing metadata and data
will hold true, but how can I persist my dynamic nodes without doing
this? I suppose I could use an XML file to map specific tables to
their respective nodes, but this doesn't seem like a more elegant
solution, and in this case I might as well store the entire tree
structure in XML. Given that the trees will be > 10000 nodes on
average, I think an efficient XML solution is out. Any further
suggestions would be appreciated - thanks,
Bob Yohan|||The classic scenario calls for a root class with all the common
attributes and then specialized sub-classes under it. As an example,
let's take the class of Vehicles and find an industry standard
identifier (VIN), and add two mutually exclusive sub-classes, Sport
utility vehicles and sedans ('SUV', 'SED').
CREATE TABLE Vehicles
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) NOT NULL
CHECK(vehicle_type IN ('SUV', 'SED')),
UNIQUE (vin, vehicle_type),
..);
Notice the overlapping candidate keys. I then use a compound candidate
key (vin, vehicle_type) and a constraint in each sub-class table to
assure that the vehicle_type is locked and agrees with the Vehicles
table. Add some DRI actions and you are done:
CREATE TABLE SUV
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT 'SUV' NOT NULL
CHECK(vehicle_type = 'SUV'),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Vehicles(vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
CREATE TABLE Sedans
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT 'SED' NOT NULL
CHECK(vehicle_type = 'SED'),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Vehicles(vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
I can continue to build a hierarchy like this. For example, if I had a
Sedans table that broke down into two-door and four-door sedans, I
could use a schema like this:
CREATE TABLE Sedans
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT 'SED' NOT NULL
CHECK(vehicle_type IN ('2DR', '4DR', 'SED')),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Vehicles(vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
CREATE TABLE TwoDoor
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT '2DR' NOT NULL
CHECK(vehicle_type = '2DR'),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Sedans(vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
CREATE TABLE FourDoor
(vin CHAR(17) NOT NULL PRIMARY KEY,
vehicle_type CHAR(3) DEFAULT '4DR' NOT NULL
CHECK(vehicle_type = '4DR'),
UNIQUE (vin, vehicle_type),
FOREIGN KEY (vin, vehicle_type)
REFERENCES Sedans (vin, vehicle_type)
ON UPDATE CASCADE
ON DELETE CASCADE,
..);
The idea is to build a chain of identifiers and types in a UNIQUE()
constraint that go up the tree when you use a REFERENCES constraint.
Obviously, you can do variants of this trick to get different class
structures.
If an entity doesn't have to be exclusively one subtype, you play with
the root of the class hierarchy:
CREATE TABLE Vehicles
(vin CHAR(17) NOT NULL,
vehicle_type CHAR(3) NOT NULL
CHECK(vehicle_type IN ('SUV', 'SED')),
PRIMARY KEY (vin, vehicle_type),
..);
Now start hiding all this stuff in VIEWs immediately and add an INSTEAD
OF trigger to those VIEWs. Performance will suck and the code will be
a pain to maintain, but it is possible.|||Thank you Joe - you've been very helpful. Keep up the good work,
Bob
Persisting Code (Repost)
the report to have a silver background and white on the rest. When I run
the report the first time I get the desired result. Then, depending on the
number of rows the report returns, successive runs of the report will have
the silver row starting on 1, 2 or 3. My guess is that SRS is remembering
where the code left off and picks up from there on the successive runs.
I added the following code to my report:
Private Shared count As Integer = 0
Private Shared colors As String() = {"White","White","White","White","White","White","White","White","White","White","White","White","Silver","Silver","Silver","Silver","Silver","Silver"}
Public Function GetColor() As String
Dim c as string = colors(count Mod colors.Length)
count = count + 1
Return c
End Function
Then, on the background Color I have:
=Code.GetColor()
George F Grund IV|||Would removing the shared keyword achieve your desired results?
You could also try and reset the count by creating a function like
below and then calling from your page header.
public function ResetVariable() as string
count = 0
return ""
end function|||Well, I'll assume that you are using a table with no groups. If so, what you
are trying to do is very simple and you don't need that code. Select the detail
row, then in the background color property type something like this:
=IIF(RowCount("YourDataSet") mod 3 = 0, "#D3D3D3", "#FFFFFF")
D3D3D3 is the RGB code for Light Gray, and FFFFFF is for White.
I hope this helps.
Persisting Bookmarks
Is it possible to persist bookmarks? That is, what is the intended lifetime/scope of workbench bookmarks?
This question stems from a chapter I am writing on sql tools. I have used the bookmark feature in the past and upon testing it I see that you can assign bookmarks to any editor and then save/close the file. Later on (so long as you have the same bookmark window open with the bookmarks set) you can reopen the file and the bookmark appears.
What I am trying to explain is how you persist a bookmark beyond the setting in the bookmarks window. I want to store a bookmark with a SQL/MDX etc. file and not have to rely on it being set in the IDE. I want the bookmarks window to auto-populate upon opening of the corresponding file.
Thanks,
Derek
Copying from: http://forums.microsoft.com/MSDN/ShowPost.aspx?PostID=380970&SiteID=1
Bookmarks in Management Studio are consistent with Visual Studio. To use bookmarks, your files need to be part of a solution/project rather than just loose files. Yes, the environment allows the creation of bookmarks on files that are not part of a solution, but those bookmarks are not persisted. When you create a solution, add a file, and create some bookmarks, the bookmarks will be persisted and show up each time the solution is opened.
Persisting a SqlDataReader in a SqlFunction enumerator
I have built a Table Valued SqlFunction which streams out filtered results from a table. The way that I have implemented this is by using a custom class which implements the IEnumerator interface.
The class internally stores a SqlDataReader and a SqlConnection as private member variables. The Class initializer (ie: the New function) creates a SqlCommand which is executed with ExecuteReader into the SqlDataReader and returns.
The IEnumerator.MoveNext method loops through the SqlDataReader until it finds a result which matches the heuristic filter, and the IEnumerator.Current method returns the current result from the SqlDataReader.
Unfortunately, as soon as the initializer returns, the SqlDataReader is automatically closed, and I can't figure out why. I've debugged through this, and at the end of the initializer the SqlDataReader is definitely open, yet as soon as the first call to MoveNext runs it is closed.
Can anyone offer any suggestions on how I can fix this?
In the interim I've had to load all of the filtered results into a temporary ArrayList in the initializer, which defeats the purpose of streaming out the results.
"You can use the context connection in the initialization method ..., but not in the method that fills rows (the method pointed to by the FillRowMethodName attribute property)." (copy/pasted from Managed Data Access Inside SQL Server with ADO.NET and SQLCLR), so you cannot loop through the SqlDataReader inside FillRow method.
Solutions are:
Implement the filtering using T-SQL (if possible)|||I'm not using the FillRow method to access the SqlDataReader; it is just passed a copy of the object (it's a UDT in this case) which the IEnumerator.Current method returns. I am looping through the SqlDataReader in the IEnumerator.MoveNext method.
My code looks something like this (with large chunks of logic cut out):
Imports System
Imports System.Data
Imports System.Data.SqlClient
Imports System.Data.SqlTypes
Imports Microsoft.SqlServer.Server
Imports System.Runtime.InteropServices
Imports System.Collections
Imports System.Security
Imports System.Net
<Assembly: AllowPartiallyTrustedCallers()>
Public Class MyListIterator
Implements IEnumerator
Private oIncludes(0) As MySQLType
Private oExcludes(0) As MySQLType
Private oColumnMeta As SqlMetaData
Private oConn As SqlConnection
Private oDR As SqlDataReader
Private bMoreRecords As Boolean = False
Private iChannelAccountID As Integer
Public Sub New(ByVal ChannelAccountID As Integer)
iChannelAccountID = ChannelAccountID
InitReader()
End Sub
Private Sub InitReader()
oConn = New SqlConnection("context connection=true")
Debug("Starting InitReader")
Dim oCommand As SqlCommand
Dim oMyCode As MySQLType
' Set up the filter arrays (oIncludes and oExcludes) here
' ....
' end filter array setup
' now we'll just loop through all of the My codes and return those that match
oCommand = New SqlCommand("SELECT MyCode FROM MyCodes", oConn)
Dim oColumnMeta As New SqlMetaData("MyCode", SqlDbType.Udt, GetType(MySQLType))
oConn.Open()
oDR = oCommand.ExecuteReader()
bMoreRecords = True
' don't need to get the first item
' the initial position of the result set is defined as "before the beginning"
End Sub
Public ReadOnly Property Current() As Object Implements System.Collections.IEnumerator.Current
Get
If bMoreRecords Then
Return oDR(0)
Else
Return Nothing
End If
End Get
End Property
Public Function MoveNext() As Boolean Implements System.Collections.IEnumerator.MoveNext
If bMoreRecords AndAlso Not oDR.IsClosed Then
' find the next included code
Do While oDR.Read
If CType(oDR(0), MySQLType).IsIncluded(oIncludes, oExcludes) Then Return True
Loop
End If
bMoreRecords = False
Return False ' no more records
End Function
Public Sub Reset() Implements System.Collections.IEnumerator.Reset
bMoreRecords = False
If Not oDR Is Nothing AndAlso Not oDR.IsClosed Then oDR.Close()
If Not oConn Is Nothing Then oConn.Close()
InitReader()
End Sub
End Class
Partial Public Class UserDefinedFunctions
<Microsoft.SqlServer.Server.SqlFunction(TableDefinition:="MyCode MySQLType", _
IsPrecise:=True, _
IsDeterministic:=False, _
DataAccess:=DataAccessKind.Read, _
FillRowMethodName:="FillMyListRow")> _
Public Shared Function GetMyList(ByVal ChannelAccountID As Integer) As IEnumerator
Return New MyListIterator(ChannelAccountID)
End Function
Public Shared Sub FillMyListRow(ByVal oMyInCode As Object, <Out()> ByRef oMyOutCode As MySQLType)
If Not oMyInCode Is Nothing Then
oMyOutCode = CType(oMyInCode, MySQLType)
Else
oMyOutCode = MySQLType.Null
End If
End Sub
End Class|||
Further to my last post, I can't use T-SQL to create the filters because the logic is too complex (it relies on hierarchical data and complex type rules).
I have created the function as a SQLCLR stored procedure successfully; however, I was hoping to use a SqlFunction so that I could perform joins (eg: SELECT [whatever] FROM [mytable] WHERE [myfield] IN MyFunction(@ChannelAccountID)).
One alternative is that I figure out how to join select results with the results returned from a stored procedure; however, I haven't figured out how to do that without executing the stored procedure into a temporary table (and even that is simply what I guess you could do; I haven't actually tried it at this point).
I have also implemented a scalar valued CLR function which checks the filters, however this is suboptimal because I need to build the filter arrays again for each and every row in the table (about 20,000 rows). This performs too slowly for my liking.
Cheers
Beric Holt
http://buzzrick.true.geek.nz
Persist ConnectionString
Hello All,
I have changed the ConnectionString property of a connection manager at runtime and saved the package, but my new connection string is not persisting; I am getting my old connection string after reloading the package.
ConnectionManager cm = package.Connections[test.ConnectionManagerID];
cm.ConnectionString = @"C:\Test.csv";
app.SaveToDtsServer(package, null,
    @"File System\Dupaco Load Next Best.dtsx", "CIRCLE");
How can I save the new connection string?
Please help me.
Thanks
Subin
The ProtectionLevel property of the package determines whether, and how, sensitive information is saved.
Personally I do not like saving sensitive information in a package; I much prefer using configurations. This allows you to keep such information externally, and just by changing the configuration information between environments you can easily manage this. This mechanism is obviously great for moving packages between test and production environments, for example, but it scales right down to the developer's desktop, so you use the same mechanism throughout.
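If you do nonetheless want the value kept in the saved package, one option along the lines of the protection-level advice above is to set a level that encrypts rather than strips sensitive properties before saving. This is only a sketch against the SSIS object model (Microsoft.SqlServer.Dts.Runtime); the package path, connection name, and password below are illustrative, not from the thread:

```csharp
// Sketch: persist a changed connection string by choosing a protection
// level that retains (encrypts) sensitive data. Names are illustrative.
using Microsoft.SqlServer.Dts.Runtime;

class SavePackageSketch
{
    static void Main()
    {
        Application app = new Application();
        Package package = app.LoadFromDtsServer(
            @"File System\MyPackage.dtsx", "CIRCLE", null);

        ConnectionManager cm = package.Connections["MyFlatFile"];
        cm.ConnectionString = @"C:\Test.csv";

        // Encrypt sensitive properties instead of dropping them on save.
        package.ProtectionLevel = DTSProtectionLevel.EncryptSensitiveWithPassword;
        package.PackagePassword = "secret";  // illustrative only

        app.SaveToDtsServer(package, null,
            @"File System\MyPackage.dtsx", "CIRCLE");
    }
}
```

With a configuration-based approach, by contrast, none of this is needed: the connection string lives outside the package and is applied at load time.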