Search found 27 matches

by migle
Mon 30 Aug 2010 14:37
Forum: dotConnect for PostgreSQL
Topic: PgSqlScript and CREATE FUNCTION
Replies: 4
Views: 1383

I worked around it by reading the whole script into a string and doing some search-and-replace on that string with regular expressions, adding a line containing a single slash after every BEGIN statement, every CREATE FUNCTION and every CREATE TRIGGER.

Ugly, but it will work as long as the script is under my control.
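For the CREATE FUNCTION case, the search-and-replace looks roughly like this (just a sketch; the actual patterns, and reading from the embedded resource, depend on my script):

Code: Select all

// Sketch of the workaround: append a line containing a single slash after the
// terminator of each plpgsql function body, so PgSqlScript can find the end of
// the block. The pattern is illustrative, not the exact one I use.
using System.IO;
using System.Text.RegularExpressions;

string script = File.ReadAllText("schema.sql"); // in my case, read from the embedded resource

script = Regex.Replace(
    script,
    @"(\$\$\s*LANGUAGE\s+plpgsql[^;]*;)",
    "$1\n/",
    RegexOptions.IgnoreCase);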

The "Unprepared execute" string would cause me more trouble, as I mean to have a single place for creating a database connection on the whole application.
by migle
Mon 30 Aug 2010 14:30
Forum: dotConnect for PostgreSQL
Topic: PgSqlLoader and null values
Replies: 4
Views: 1409

Hi,

The referenced thread where the issue of floating point data loss in text mode was raised is this one http://www.devart.com/forums/viewtopic.php?t=17179.

Note that this possibility of data loss does not come from your software alone: it is PostgreSQL that receives the text data and parses the floating point values in it.

Guaranteeing that a floating point number written out in decimal decodes back to the exact same binary value is hard. The C standard makes no numeric accuracy requirements on its I/O functions, and IEEE 754 says nothing about I/O. So even if you use 16 decimal digits, it's hard to guarantee that PostgreSQL (which happens to be written in C) will parse them accurately, let alone handle all the special cases (+/-0, +/-inf, the 4 flavours of NaN, and denormals) correctly.
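For example (a .NET sketch, but the same applies to whatever produces the text), 16 significant digits are already not always enough to round-trip a double:

Code: Select all

// 16 significant decimal digits are not always enough to round-trip a double.
double x = 0.1 + 0.2;                    // 0.30000000000000004, needs 17 digits
string s = x.ToString("G16", System.Globalization.CultureInfo.InvariantCulture);
double y = double.Parse(s, System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(s);                    // prints 0.3
Console.WriteLine(x == y);               // False: the last bit was lost in the text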

I wouldn't bet on it.

So, in some cases, when you care very much about your 13th and 14th digits and about the sign of your infinities, maybe it's better not to send the data in decimal.

I worked around the issue. In this application I'm lucky in that, whenever I load a batch of data, any column that is NULL is NULL in every row. Thankfully.
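In practice I just leave those columns out of the loader for that batch, roughly like this (table and column names are made up):

Code: Select all

// Sketch of the workaround: columns that would be all-NULL in this batch are
// simply not added to the loader, so no NULL ever reaches binary mode.
// Table and column names are made up.
PgSqlLoader loader = new PgSqlLoader("public.samples", conn);
loader.Mode = PgSqlLoaderMode.Binary;
loader.Columns.Add("id", PgSqlType.Int, 0, 0, 0);
loader.Columns.Add("value", PgSqlType.Double, 0, 0, 0);
// the nullable "comment" column is left out entirely for this batch
loader.Open();
loader.SetValue(0, 1);
loader.SetValue(1, 3.14);
loader.NextRow();
loader.Close();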
by migle
Wed 25 Aug 2010 10:13
Forum: dotConnect for PostgreSQL
Topic: PgSqlScript and CREATE FUNCTION
Replies: 4
Views: 1383

PgSqlScript and CREATE FUNCTION

Hi,

I'm trying to use PgSqlScript to execute a database script which updates the schema of the database automatically, so the user doesn't have to.
That script is generated elsewhere. It is a file that psql normally consumes, printing some errors that are to be ignored. Now I have that file as an embedded resource in my executable and want to execute it.

I tried doing it with PgSqlDump and PgSqlScript; PgSqlScript seems to be the one intended for this purpose. However, I'm running into a lot of trouble. I found out that I had to remove comments and rework the whitespace (mainly remove newlines). That's ok.

But I keep getting this error: "cannot insert multiple commands into a prepared statement". Well, I suppose that's why I'm using PgSqlScript. I figure PgSqlScript is having trouble separating the statements because of the semicolons inside the bodies of plpgsql functions; I just can't figure out how to help it along.

The doc says "When PgSqlScript encounters a BEGIN keyword in the script, it looks for a slash symbol "/" in the first position of some consecutive line. Everything between the BEGIN and the slash is considered single block. Usually the slash is placed after the line containing END keyword. You do not have to place slash after nested BEGIN...END block." But this is a bit obscure... Are those BEGINs, ENDs and slashes removed before the command is sent to PostgreSQL? I certainly can't put the slashes right next to the BEGIN and END of my functions, because the DECLARE keyword appears before BEGIN, and semicolons can appear before BEGIN...

Suppose I have this function:

Code: Select all

CREATE OR REPLACE FUNCTION Abc(a INT, b INT, c INT)
    RETURNS INT AS $$
    DECLARE a INT;
    BEGIN
        ...
        RETURN a;
    END;
    $$ LANGUAGE plpgsql VOLATILE;
Where would I insert the BEGINs, ENDs and slashes?

Thanks,
by migle
Sun 22 Aug 2010 11:17
Forum: dotConnect for PostgreSQL
Topic: PgSqlLoader and null values
Replies: 4
Views: 1409

PgSqlLoader and null values

Hi,

I'm loading some data using PgSqlLoader. Because of possible floating point data loss, and for efficiency, I prefer to use binary mode (cf. my previous posts here).

What's troubling me now is NULL values. When using text mode, loading data with NULLs in it works fine. Simply changing Mode to PgSqlLoaderMode.Binary causes a NullReferenceException to be thrown when NextRow is called.
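A minimal repro looks something like this (table and column names are made up; with the default text mode it works, in binary mode it throws):

Code: Select all

// Minimal repro sketch; table and column names are made up.
PgSqlLoader loader = new PgSqlLoader("public.abc", conn);
loader.Mode = PgSqlLoaderMode.Binary;   // with the default (text) mode this works fine
loader.Columns.Add("n", PgSqlType.Int, 0, 0, 0);
loader.Open();
loader.SetValue(0, DBNull.Value);       // a NULL value
loader.NextRow();                       // NullReferenceException is thrown here
loader.Close();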

Might this be a bug?
by migle
Mon 16 Aug 2010 10:37
Forum: dotConnect for PostgreSQL
Topic: Trouble using cursors
Replies: 3
Views: 1367

Trouble using cursors

Hi,

I'm running into two different problems when using cursors.

First, the sample in the documentation for PgSqlCursor.GetDataReader shows a function with a named output parameter, p, used to pass the cursor out of the function. The parameter collection for the command that calls the function is filled in automatically using ParameterCheck.

Now, I wish I didn't have to do that: first, I would really dislike having to name the output parameter for the cursor, and would rather return it as the single return value; second, I would prefer to avoid ParameterCheck (I understand it causes an additional round trip to the server).

However, I don't seem to be able to use PgSqlCursor this way. Here's how I'm doing it:

Code: Select all

            PgSqlCommand comm = new PgSqlCommand("myfunction", conn);
            comm.CommandType = CommandType.StoredProcedure;
            //comm.ParameterCheck = true; // doesn't work with ParameterCheck either

            // Add parameters.
            comm.Parameters.Add(new PgSqlParameter("a", PgSqlType.Int));
            comm.Parameters.Add(new PgSqlParameter("b", PgSqlType.Int));
            comm.Parameters.Add(new PgSqlParameter("return_value", PgSqlType.VarChar)).Direction = ParameterDirection.Output; // doesn't work with ReturnValue either

            comm.Parameters["a"].Value = 10;
            comm.Parameters["b"].Value = 3;

            comm.ExecuteNonQuery();

            PgSqlCursor cursor = comm.Parameters["return_value"].PgSqlValue as PgSqlCursor;

            using (PgSqlDataReader rd = cursor.GetDataReader()) { 
                while (rd.Read()) 
                     Console.WriteLine(rd.GetValue(0)); 
            }
I find that comm.Parameters["return_value"] actually does contain a cursor (in the debugger I see a {PgSqlCursor} in the PgSqlValue property of this parameter); as far as I can tell, all the properties of this PgSqlParameter have the same values as those of the parameter generated when the code is exactly like the sample; however, rd.Read() always returns false and there's no output.

What am I doing wrong?
Do I really need to resort to naming the output parameter for the cursor and using ParameterCheck ?

As to the second problem: when the code is exactly like the sample in the documentation, I find that cursor.GetDataReader() actually fetches the entire result set from the server (GetDataReader takes a long time, whereas the while loop involves no communication).
This rather defeats the purpose of using a cursor, doesn't it?
Can I work around this behaviour?
I already tried something called FetchSize, but it seems unrelated.

Please help,
by migle
Mon 26 Apr 2010 13:50
Forum: dotConnect for PostgreSQL
Topic: Installed v4.90.120 and now unable to connect in VS2010RC
Replies: 18
Views: 3140

I had a lot of trouble, because alongside Visual Studio 2008, which I use professionally, I had installed the Visual Studio 2010 beta 2.

Not only did the install not work properly: Entity Developer crashed on code generation, it couldn't connect to the DB, etc., etc...

So I suspected that it was using .NET Framework 4, and I only had .NET Framework 4 Beta 2 installed.

I tried installing the released .NET Framework 4, then reinstalled dotConnect, and things have gone better since then.

Miguel
by migle
Sun 25 Apr 2010 21:41
Forum: LinqConnect (LINQ to SQL support)
Topic: First LINQ statement in a program returns no more than 100
Replies: 10
Views: 2606

I have upgraded our projects to the new 4.90.124 version. I'm very happy with it: most of the bugs went away and it no longer tries to prepare statements for each execution.

However, this problem persists. It doesn't even depend on some strange server configuration: I just installed a server here at home and it also happens there, and now it appears to happen more often and on tables with simpler relationships.

I have a table with a primary key comprised of three columns, like this:

Code: Select all

CREATE TABLE abc (
    ida INT NOT NULL REFERENCES A (ida) ON DELETE CASCADE,
    idb INT NOT NULL REFERENCES B (ida) ON DELETE CASCADE,
    seq INT NOT NULL,
    PRIMARY KEY (ida, idb, seq),
    x REAL ARRAY,
    n INT)
WITHOUT OIDS;
This table has relationships to A and B and B also has a relationship to A. Like this:

Code: Select all

CREATE TABLE A (ida SERIAL PRIMARY KEY, ...) WITHOUT OIDS;
CREATE TABLE B (
    ida INT NOT NULL PRIMARY KEY REFERENCES A (ida) ON DELETE CASCADE,
    ...)
WITHOUT OIDS;
All of these relationships are mapped in LINQ to SQL; the ON DELETE is also in the mapping. Table abc quickly grows to 100000+ records.

Right now, the problem happens when I do something like:

Code: Select all

public class Key {
  public int IdA { get; set; }
  public int IdB { get; set; }
  public int Seq { get; set; }
}
...
using (AbcDataContext ctxt = new AbcDataContext())
{
  IEnumerable<Key> q =
    from a in ctxt.abcs
    where a.IdA == X && a.IdB == Y
    orderby a.Seq
    select new Key { IdA = a.IdA, IdB = a.IdB, Seq = a.Seq };

  foreach (Key k in q)
  {
     // This foreach goes over only 100 entities, even though the table
     // has 81715; in this case, if I remove the usage of the class Key,
     // and do select a (the Abc type), then the loop goes over the
     // correct set of entities.
  }
}
Now, this is not an isolated sample that I have actually run to confirm the bug; but the structure of these tables is quite analogous to that of the ones which are (now) exposing this behaviour.

I can't control, and haven't got a clue about, the factors that correlate with this problem. Perhaps the only common point is that in every case the primary key of the table is composed of several columns and I am querying it using fewer columns.

Server issues are unlikely: I have several instances of PostgreSQL 8.4 running on several different OSes and 32/64-bit architectures.

The workaround I applied previously in other cases is impractical for this table. In those other cases I would take an IQueryable, Count the number of entities and then do a ToList(); if the length of the list was less than the previous Count, I would repeat the query (sketched below).
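That earlier workaround looked roughly like this (a sketch with made-up names, not the exact code):

Code: Select all

// Sketch of the old workaround: count first, materialize, and repeat the query
// while the list comes back short.
IQueryable<Abc> query = ctxt.abcs.Where(a => a.IdA == X && a.IdB == Y);

int expected = query.Count();
List<Abc> list = query.ToList();
while (list.Count < expected)
    list = query.ToList();   // retry until the full result set comes back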

I don't even want a fix, because I wish I didn't have to upgrade yet again. If only I knew how to avoid this or work around it...

Thanks,

Miguel
by migle
Sun 25 Apr 2010 08:16
Forum: dotConnect for PostgreSQL
Topic: multi-dimension array as parameter
Replies: 1
Views: 1245

Hi,

I'm not affiliated with Devart, but we also use arrays as parameters.
Be sure to read the following page about using parameters and stored procedures:

http://www.devart.com/dotconnect/postgr ... eters.html

For arrays, you just have to use the type PgSqlType.Array and assign a PgSqlArray structure to the parameter value. Also, look for the help on PgSqlArray under Structs, not under Classes.
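For example, something along these lines (the function and parameter names are made up, and this uses the one-dimensional PgSqlArray constructor):

Code: Select all

// Sketch: passing an array as a parameter to a stored function.
// Function and parameter names are made up.
object[] values = { 1.0, 2.0, 3.0 };

PgSqlCommand comm = new PgSqlCommand("myfunction", conn);
comm.CommandType = CommandType.StoredProcedure;
comm.Parameters.Add(new PgSqlParameter("p", PgSqlType.Array));
comm.Parameters["p"].Value = new PgSqlArray(values, PgSqlType.Double, 1, values.Length);
comm.ExecuteNonQuery();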

Miguel
by migle
Tue 20 Apr 2010 15:01
Forum: dotConnect for PostgreSQL
Topic: Critical bug inserting PgSqlArray of doubles (data loss)
Replies: 2
Views: 1817

Critical bug inserting PgSqlArray of doubles (data loss)

Hi,

Surprising as it may seem, using the code below to insert a PgSqlArray of 10 doubles with PgSqlLoader actually results, on my machine, in 20 doubles being inserted.
If binary mode is used, however, the correct number of elements, 10, is inserted.

Hint: I live in continental Europe, where the decimal separator is a comma.

Code: Select all

object[] a = { 1e-1, 2e-1, 3e-2, 4e-2, 5e-2, 6e-2, 7e-2, 8e-2, 9e-2, 1e-1 };

PgSqlLoader loader = new PgSqlLoader("public.abc", conn);

loader.Columns.Add("arraycolumn", PgSqlType.Array, 0, 0, 0);
loader.Open();
loader.SetValue(0, new PgSqlArray(a, PgSqlType.Double, 1, a.Length));
loader.NextRow();
loader.Close();
This is at least the fifth trivially reproducible bug I have found.
Cool, huh?
BTW, what happened to the beta version that was available before and is no longer there?

Miguel
by migle
Tue 06 Apr 2010 09:00
Forum: dotConnect for PostgreSQL
Topic: PostgreSQL COPY TO command available?
Replies: 12
Views: 5524

Hello, again.

I have just confirmed that when I do a SELECT using dotConnect for PostgreSQL, the data is transmitted in binary as you said.

In that case, maybe there's no reason for COPY TO to be faster, and my concerns about data loss on floating point data are also unfounded.

I have to leave this up to you. I have no performance comparison data, and it no longer seems worth pursuing.

The only odd thing I saw just now is that when I use the psql client, which uses the text protocol, and run that "select sqrt(2)" query, only 3 messages are exchanged between client and server.
When I use dotConnect for PostgreSQL, with the binary protocol, tcpdump shows me 18 messages being exchanged...

Well, this is not the place to discuss the PostgreSQL protocol... but it looks redundant, with lots of repetitions of sequences like "PORTAL20318972712401650" and "PTSTMT20318972712401650".

Anyway, thanks, I have what I needed most, which is the binary protocol, and I can't believe the performance improvement with COPY TO would be anything better than 10% (just guessing), which makes little difference.
by migle
Thu 25 Mar 2010 16:50
Forum: dotConnect for PostgreSQL
Topic: PostgreSQL COPY TO command available?
Replies: 12
Views: 5524

StanislavK wrote:Could you please provide us with a link to the PostgreSQL official documentation where it is stated that the COPY TO command is more performant than SELECT? Unfortunately, I couldn't find any at the moment.
Is that a joke?

Look, I asked whether the feature was available.
I wanted to check whether I would get any performance gain from it.
If the feature is not available, why don't you just say so?

If you know better, and using "COPY TO" isn't faster than using "SELECT", then great.
StanislavK wrote:Also, please note that the SELECT command transfers result sets in binary mode as well.
Does it? Great.
I'm not the one who knows the protocol here.

But then again, are you sure?
Because when I use the psql client, if I ask for the square root of 2 and check what goes on the wire with tcpdump, I see the three packets below being transferred.
There I see a binary protocol, but the data is sent in text mode.

Now, that is using the psql command line client, not your driver (I'm not in a position to check with your driver right now).

So, again, is that really so?
If it is then I was really mistaken there.

Code: Select all

0x0000:  4500 0049 64e4 4000 4006 5266 c0a8 010a  E..Id.@.@.Rf....
0x0010:  c0a8 010a 839a 1538 7c02 bed2 7b5e 5954  .......8|...{^YT
0x0020:  8018 0212 83a0 0000 0101 080a 0077 f964  .............w.d
0x0030:  0077 dab1 5100 0000 1473 656c 6563 7420  .w..Q....select.
0x0040:  7371 7274 2832 293b 00                   sqrt(2);.

0x0000:  4500 007e 82fc 4000 4006 3419 c0a8 010a  E..~..@.@.4.....
0x0010:  c0a8 010a 1538 839a 7b5e 5954 7c02 bee7  .....8..{^YT|...
0x0020:  8018 0200 83d5 0000 0101 080a 0077 f965  .............w.e
0x0030:  0077 f964 5400 0000 1d00 0173 7172 7400  .w.dT......sqrt.
0x0040:  0000 0000 0000 0000 02bd 0008 ffff ffff  ................
0x0050:  0000 4400 0000 1900 0100 0000 0f31 2e34  ..D..........1.4
0x0060:  3134 3231 3335 3632 3337 3331 4300 0000  142135623731C...
0x0070:  0b53 454c 4543 5400 5a00 0000 0549       .SELECT.Z....I

0x0000:  4500 0034 64e5 4000 4006 527a c0a8 010a  E..4d.@.@.Rz....
0x0010:  c0a8 010a 839a 1538 7c02 bee7 7b5e 599e  .......8|...{^Y.
0x0020:  8010 0212 54d3 0000 0101 080a 0077 f965  ....T........w.e
0x0030:  0077 f965                                .w.e
by migle
Thu 25 Mar 2010 10:12
Forum: dotConnect for PostgreSQL
Topic: PostgreSQL COPY TO command available?
Replies: 12
Views: 5524

No, I can't quantify the difference between COPY TO and SELECT performance.
Precisely, I wanted to know whether that functionality was available, to see if I could gain anything from it.
The PostgreSQL documentation often mentions that using COPY TO is faster.

I see one difference, even if performance is the same.
COPY TO ... WITH BINARY transfers data in binary.
Binary data transfer TO the database is possible using your PgSqlLoader class (despite bugs I already reported, which have been fixed in the meantime). However, without COPY TO, binary transfer FROM the database is not possible.

Even though performance may be more or less the same, binary transfer can be preferable in some cases, for example for floating point data. Even though a double precision value should theoretically be reproducible exactly from a 17 significant digit decimal representation, it's hard to guarantee that the deserialized value comes out exactly the same in all cases.
And even then, you need those 17 decimal digits, a decimal point, perhaps a +/- sign, an 'e' for the exponent, plus some digits for the exponent. That's well over 20 bytes for a single double precision value that binary represents in only 8 bytes.
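For instance (a .NET sketch, but any producer of the text has the same overhead):

Code: Select all

// The full-precision text form of a double is much longer than its 8 binary bytes.
double d = Math.Sqrt(2);
string s = d.ToString("R", System.Globalization.CultureInfo.InvariantCulture);
Console.WriteLine(s);          // 1.4142135623730951
Console.WriteLine(s.Length);   // 18 characters here, more with a sign and an exponent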

For the typical case, using text representation of numerical data takes twice as much space as binary. Now, this imposes an unnecessary burden on network bandwidth utilization.

In my case, ExecutePageReader is not what I need. I have lots of numerical data which I transfer in queries that return 400-800 rows at a time. Because that's not many rows, ExecutePageReader won't help me. The only issue is that I transfer some 20 such chunks every second (that's around 10000 rows/second), so any gain I have here would be noticeable.
by migle
Tue 23 Mar 2010 16:52
Forum: dotConnect for PostgreSQL
Topic: PostgreSQL COPY TO command available?
Replies: 12
Views: 5524

Ok,

I had some time now. I misread your answer last time, and you got my question wrong: I don't want to transfer data to a file. I would like to know whether it is possible to transfer data to the client using COPY TO instead of a SELECT.

Just as it is possible to transfer data TO the database using PgSqlLoader, which translates to "COPY FROM", I would like to know if there is an optimized transfer of data to the client, which would translate to "COPY TO".

Something like the following, only it doesn't work.

Code: Select all

            string sql =
                "COPY (SELECT column1, column2 FROM table1 JOIN table2 WHERE column1 > :param1) " +
                " TO STDOUT WITH BINARY";

            PgSqlCommand comm = new PgSqlCommand(sql, conn);

            comm.Parameters.Add(new PgSqlParameter("param1", 0));

            PgSqlDataReader reader = comm.ExecuteReader();
            List<MyObject> list = new List<MyObject>();
            try
            {
                while (reader.Read())
                {
                    list.Add(new MyObject()
                    {
                        Column1 = reader.GetInt32(0),
                        Column2 = reader.GetInt32(1)
                    });
                }
            }
            finally
            {
                reader.Close();
            }
by migle
Tue 23 Mar 2010 14:09
Forum: dotConnect for PostgreSQL
Topic: Entity Developer generates code which does not compile
Replies: 2
Views: 1437

Entity Developer generates code which does not compile

Hi,

We have configured a given project to include all model classes in one namespace, and the DataContext class in another namespace.
This is an option in the "Project Properties" dialog box, "Model" tab, the two text boxes labeled "Context Namespace" and "Entity Namespace".

This has suited us better so far because we keep these in separate projects, since they have different dependencies (specifically, the model classes do not depend on dotConnect classes and the DataContext does).

However, when we recently added some methods (stored procedures) to the mapping which use model types as arguments or return values, we came across this issue: the method declarations in the DataContext class use namespace-unqualified names for their argument and return types.

Because the entity types are in a different namespace, this code does not compile.

For instance, if stored procedure "getEntity" on the database receives an int as argument and returns type "Entity", the generated function will be:

Code: Select all

        [Function(Name=@"getEntity", IsComposable=true)]
        public System.Linq.IQueryable<Entity> GetEntity([Parameter(Name="$1", DbType="int4")] System.Nullable<int> i)
        {
            return this.CreateMethodCallQuery<Entity>(this, ((MethodInfo)(MethodInfo.GetCurrentMethod())), i);
        }
Which won't compile, because Entity is not declared within the DataContext's namespace (it would need to be qualified with the entity namespace).

Right?
by migle
Tue 16 Mar 2010 10:32
Forum: dotConnect for PostgreSQL
Topic: PgSqlLoader binary mode wrong datetime values
Replies: 6
Views: 1768

I can't reach that. I have acquired dotConnect for PostgreSQL Professional Team Edition through ComponentSource. I received a file which indicates an order reference number and a serial number.
I have registered on the Devart website (miguel.ramos * albatroz-eng * com). I can't find anywhere to enter my serial number, and I never found any members' download area.

Can anybody point me in the right direction?