I have upgraded our projects to the new 4.90.124 version. I'm very happy with it, most of the bugs went away and it doesn't try to prepare statements for each execution.
However, this problem persists. It doesn't even depend on some strange server configuration: I just installed a server here at home and it happens there too, and now it appears to happen more often, and on tables with simpler relationships.
I have a table with a primary key composed of three columns, like this:
Code: Select all
CREATE TABLE abc (
    ida INT NOT NULL REFERENCES A (ida) ON DELETE CASCADE,
    idb INT NOT NULL REFERENCES B (ida) ON DELETE CASCADE,
    seq INT NOT NULL,
    x REAL ARRAY,
    n INT,
    PRIMARY KEY (ida, idb, seq)
) WITHOUT OIDS;
This table has relationships to A and B, and B also has a relationship to A, like this:
Code: Select all
CREATE TABLE A (ida SERIAL PRIMARY KEY, ...) WITHOUT OIDS;
CREATE TABLE B (
    ida INT NOT NULL PRIMARY KEY REFERENCES A (ida) ON DELETE CASCADE,
    ...
) WITHOUT OIDS;
All of these relationships are mapped in LINQ to SQL; the ON DELETE is also in the mapping. Table abc quickly grows to 100000+ records.
Right now, the problem happens when I do something like:
Code: Select all
public class Key {
    public int IdA { get; set; }
    public int IdB { get; set; }
    public int Seq { get; set; }
}
...
using (AbcDataContext ctxt = new AbcDataContext())
{
    IEnumerable<Key> q =
        from a in ctxt.abcs
        where a.IdA == X && a.IdB == Y
        orderby a.Seq
        select new Key { IdA = a.IdA, IdB = a.IdB, Seq = a.Seq };
    foreach (Key k in q)
    {
        // This foreach goes over only 100 entities, even though the table
        // has 81715; in this case, if I remove the usage of the class Key
        // and do "select a" (the Abc type), then the loop goes over the
        // correct set of entities.
    }
}
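For comparison, the working variant mentioned in the comment selects the mapped entity instead of projecting into Key on the server. If you still need Key objects, I imagine the projection can be moved client-side, roughly like this (a sketch on my part, not something I have verified against the provider; AsEnumerable() is the standard way to switch the rest of the query to LINQ to Objects):

Code: Select all
// Select the mapped entity type (Abc) from the database, then
// project into Key in memory, after the rows have been fetched.
IEnumerable<Key> q =
    (from a in ctxt.abcs
     where a.IdA == X && a.IdB == Y
     orderby a.Seq
     select a)
    .AsEnumerable()
    .Select(a => new Key { IdA = a.IdA, IdB = a.IdB, Seq = a.Seq });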
Now, this is not the exact sample in which I tried and confirmed the bug, but the structure of these tables is closely analogous to the ones that are (now) exposing this behaviour.
I can't control, and haven't got a clue about, the factors that are correlated with this problem. Perhaps the only common thread is that in every case the primary key of the table is composed of several columns and I am querying it using fewer columns.
Server issues are unlikely: I have several instances of PostgreSQL 8.4 running on several different OSes and on both 32- and 64-bit architectures.
The workaround I applied previously in other cases is impractical for this table. In those other cases I would take an IQueryable, Count() the number of entities and then do a ToList(); if the length of the list was less than the previous Count(), I would repeat the query.
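For reference, that count-and-retry workaround looks roughly like this (a sketch only; the helper name, the maxAttempts limit and the generic shape are mine, invented for illustration):

Code: Select all
// Sketch of the count-and-retry workaround described above.
// Counts the rows, fetches them, and re-runs the query when the
// fetched list is shorter than the count suggested it should be.
static List<T> QueryWithRetry<T>(Func<IQueryable<T>> makeQuery,
                                 int maxAttempts = 3)
{
    List<T> result = null;
    for (int attempt = 0; attempt < maxAttempts; attempt++)
    {
        IQueryable<T> q = makeQuery();
        int expected = q.Count();     // round trip 1: COUNT(*)
        result = q.ToList();          // round trip 2: fetch the rows
        if (result.Count >= expected) // everything arrived; done
            return result;
        // Fewer rows than counted: the truncated-result bug (or a
        // concurrent delete) hit us, so run the query again.
    }
    return result; // give up after maxAttempts
}

Of course the table can change between the two round trips, so the check is only a heuristic, and on a 100000+ row table the extra COUNT plus a possible re-fetch is exactly what makes it impractical here.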
I don't even want a fix, because I'd rather not have to upgrade again. If only I knew how to avoid this or work around it...
Thanks,
Miguel