
sql/pgwire: missing support for row count limits in pgwire #4035

Closed
knz opened this issue Jan 29, 2016 · 15 comments
Assignees
Labels
A-sql-pgwire pgwire protocol issues. C-enhancement Solution expected to add code/behavior + preserve backward-compat (pg compat issues are exception) X-anchored-telemetry The issue number is anchored by telemetry references.

Comments

@knz (Contributor)

knz commented Jan 29, 2016

Following up on #3819, while trying to run the test case at #3819 (comment), I get:

ERROR in (a-test) (QueryExecutorImpl.java:2182)
Uncaught exception, not in assertion.
expected: nil
  actual: org.postgresql.util.PSQLException: ERROR: execute row count limits not supported
 at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse (QueryExecutorImpl.java:2182)
    org.postgresql.core.v3.QueryExecutorImpl.processResults (QueryExecutorImpl.java:1911)
    org.postgresql.core.v3.QueryExecutorImpl.execute (QueryExecutorImpl.java:173)
    org.postgresql.jdbc2.AbstractJdbc2Statement.execute (AbstractJdbc2Statement.java:616)
    org.postgresql.jdbc2.AbstractJdbc2Statement.executeWithFlags (AbstractJdbc2Statement.java:452)
    org.postgresql.jdbc2.AbstractJdbc2Connection.execSQLUpdate (AbstractJdbc2Connection.java:399)
    org.postgresql.jdbc2.AbstractJdbc2Connection.getTransactionIsolation (AbstractJdbc2Connection.java:922)
@knz knz added the SQL label Jan 29, 2016
@knz knz added this to the Beta milestone Jan 29, 2016
@knz (Contributor Author)

knz commented Jan 29, 2016

Needed for #4036

@maddyblue maddyblue self-assigned this Feb 2, 2016
@knz (Contributor Author)

knz commented Feb 4, 2016

Looking more closely at the code and the trace, the error appears when JDBC queries the current transaction isolation level. Internally, the JDBC driver seems to query the current transaction isolation level and then change it if it differs from the requested one. The backtrace says as much: there is a getTransactionIsolation -> execSQLUpdate -> executeWithFlags call stack reported.

@knz (Contributor Author)

knz commented Feb 4, 2016

OK, after upgrading to 4eeb73f we are back to "Can't prepare SHOW." The network trace:


No.     Time           Source                Destination           Protocol Length Info
      6 44.899931      52.91.194.28          172.31.53.252         PGSQL    174    >

Frame 6: 174 bytes on wire (1392 bits), 174 bytes captured (1392 bits)
Ethernet II, Src: 12:98:7b:58:57:9d (12:98:7b:58:57:9d), Dst: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3)
Internet Protocol Version 4, Src: 52.91.194.28, Dst: 172.31.53.252
Transmission Control Protocol, Src Port: 44458 (44458), Dst Port: 15432 (15432), Seq: 1, Ack: 1, Len: 108
PostgreSQL
    Type: Startup message
    Length: 108
    Parameter name: user
    Parameter value: psql
    Parameter name: database
    Parameter value: system
    Parameter name: client_encoding
    Parameter value: UTF8
    Parameter name: DateStyle
    Parameter value: ISO
    Parameter name: TimeZone
    Parameter value: Etc/UTC
    Parameter name: extra_float_digits
    Parameter value: 2

No.     Time           Source                Destination           Protocol Length Info
      7 44.899945      172.31.53.252         52.91.194.28          TCP      66     15432 → 44458 [ACK] Seq=1 Ack=109 Win=26880 Len=0 TSval=217925401 TSecr=347887638

Frame 7: 66 bytes on wire (528 bits), 66 bytes captured (528 bits)
Ethernet II, Src: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3), Dst: 12:98:7b:58:57:9d (12:98:7b:58:57:9d)
Internet Protocol Version 4, Src: 172.31.53.252, Dst: 52.91.194.28
Transmission Control Protocol, Src Port: 15432 (15432), Dst Port: 44458 (44458), Seq: 1, Ack: 109, Len: 0

No.     Time           Source                Destination           Protocol Length Info
      8 44.900052      172.31.53.252         52.91.194.28          PGSQL    120    <R/S/S

Frame 8: 120 bytes on wire (960 bits), 120 bytes captured (960 bits)
Ethernet II, Src: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3), Dst: 12:98:7b:58:57:9d (12:98:7b:58:57:9d)
Internet Protocol Version 4, Src: 172.31.53.252, Dst: 52.91.194.28
Transmission Control Protocol, Src Port: 15432 (15432), Dst Port: 44458 (44458), Seq: 1, Ack: 109, Len: 54
PostgreSQL
    Type: Authentication request
    Length: 8
    Authentication type: Success (0)
PostgreSQL
    Type: Parameter status
    Length: 25
    Parameter name: client_encoding
    Parameter value: UTF8
PostgreSQL
    Type: Parameter status
    Length: 18
    Parameter name: DateStyle
    Parameter value: ISO

No.     Time           Source                Destination           Protocol Length Info
      9 44.900076      172.31.53.252         52.91.194.28          PGSQL    72     <Z

Frame 9: 72 bytes on wire (576 bits), 72 bytes captured (576 bits)
Ethernet II, Src: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3), Dst: 12:98:7b:58:57:9d (12:98:7b:58:57:9d)
Internet Protocol Version 4, Src: 172.31.53.252, Dst: 52.91.194.28
Transmission Control Protocol, Src Port: 15432 (15432), Dst Port: 44458 (44458), Seq: 55, Ack: 109, Len: 6
PostgreSQL
    Type: Ready for query
    Length: 5
    Status: Idle (73)

No.     Time           Source                Destination           Protocol Length Info
     10 44.900455      52.91.194.28          172.31.53.252         TCP      66     44458 → 15432 [ACK] Seq=109 Ack=55 Win=27008 Len=0 TSval=347887638 TSecr=217925401

Frame 10: 66 bytes on wire (528 bits), 66 bytes captured (528 bits)
Ethernet II, Src: 12:98:7b:58:57:9d (12:98:7b:58:57:9d), Dst: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3)
Internet Protocol Version 4, Src: 52.91.194.28, Dst: 172.31.53.252
Transmission Control Protocol, Src Port: 44458 (44458), Dst Port: 15432 (15432), Seq: 109, Ack: 55, Len: 0

No.     Time           Source                Destination           Protocol Length Info
     11 44.900463      52.91.194.28          172.31.53.252         TCP      66     44458 → 15432 [ACK] Seq=109 Ack=61 Win=27008 Len=0 TSval=347887638 TSecr=217925401

Frame 11: 66 bytes on wire (528 bits), 66 bytes captured (528 bits)
Ethernet II, Src: 12:98:7b:58:57:9d (12:98:7b:58:57:9d), Dst: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3)
Internet Protocol Version 4, Src: 52.91.194.28, Dst: 172.31.53.252
Transmission Control Protocol, Src Port: 44458 (44458), Dst Port: 15432 (15432), Seq: 109, Ack: 61, Len: 0

No.     Time           Source                Destination           Protocol Length Info
     12 44.931064      52.91.194.28          172.31.53.252         PGSQL    135    >P/B/E/S

Frame 12: 135 bytes on wire (1080 bits), 135 bytes captured (1080 bits)
Ethernet II, Src: 12:98:7b:58:57:9d (12:98:7b:58:57:9d), Dst: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3)
Internet Protocol Version 4, Src: 52.91.194.28, Dst: 172.31.53.252
Transmission Control Protocol, Src Port: 44458 (44458), Dst Port: 15432 (15432), Seq: 109, Ack: 61, Len: 69
PostgreSQL
    Type: Parse
    Length: 40
    Statement: 
    Query: SHOW TRANSACTION ISOLATION LEVEL
    Parameters: 0
PostgreSQL
    Type: Bind
    Length: 12
    Portal: 
    Statement: 
    Parameter formats: 0
    Parameter values: 0
    Result formats: 0
PostgreSQL
    Type: Execute
    Length: 9
    Portal: 
    Returns: 1 rows
PostgreSQL
    Type: Sync
    Length: 4

No.     Time           Source                Destination           Protocol Length Info
     13 44.931223      172.31.53.252         52.91.194.28          PGSQL    125    <E

Frame 13: 125 bytes on wire (1000 bits), 125 bytes captured (1000 bits)
Ethernet II, Src: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3), Dst: 12:98:7b:58:57:9d (12:98:7b:58:57:9d)
Internet Protocol Version 4, Src: 172.31.53.252, Dst: 52.91.194.28
Transmission Control Protocol, Src Port: 15432 (15432), Dst Port: 44458 (44458), Seq: 61, Ack: 178, Len: 59
PostgreSQL
    Type: Error
    Length: 58
    Severity: ERROR
    Code: XX000
    Message: prepare statement not supported: SHOW

No.     Time           Source                Destination           Protocol Length Info
     14 44.931250      172.31.53.252         52.91.194.28          PGSQL    72     <Z

Frame 14: 72 bytes on wire (576 bits), 72 bytes captured (576 bits)
Ethernet II, Src: 12:62:5d:34:3b:e3 (12:62:5d:34:3b:e3), Dst: 12:98:7b:58:57:9d (12:98:7b:58:57:9d)
Internet Protocol Version 4, Src: 172.31.53.252, Dst: 52.91.194.28
Transmission Control Protocol, Src Port: 15432 (15432), Dst Port: 44458 (44458), Seq: 120, Ack: 178, Len: 6
PostgreSQL
    Type: Ready for query
    Length: 5
    Status: Idle (73)

@knz (Contributor Author)

knz commented Feb 4, 2016

Note, by the way, that the Execute part of the postgres command packet specifies a row count of 1 (Returns: 1 rows).

@petermattis petermattis changed the title Missing support for row count limits in pgwire sql/pgwire: missing support for row count limits in pgwire Feb 12, 2016
@petermattis petermattis added C-bug Code not up to spec/doc, specs & docs deemed correct. Solution expected to change code/behavior. and removed SQL labels Feb 13, 2016
@jordanlewis (Member)

I think row count limits are still unsupported, based on this forum question.

Here's the unimplemented stub:

if err := c.sendInternalError(fmt.Sprintf("execute row count limits not supported: %d of %d", limit, result.Rows.Len())); err != nil {

@dianasaur323 (Contributor)

For those of you who've looked at this issue before, how difficult would this be to implement? It's unlikely to make it into 1.2, but it would be helpful to know.

@jordanlewis (Member)

I think this would be relatively easy to implement.

@jordanlewis (Member)

Actually, never mind. I think this would be a fairly hefty task. The protocol doesn't just permit row count limits: it specifies a more sophisticated cursor-like interface that pauses a query when the row count limit is hit, permitting resumption of that query when the user is ready. The flow works as follows (copying from a private discussion for posterity):

  1. The user prepares a statement, which produces a prepared statement on the server.
  2. The user binds parameters to the statement, which produces a "statement portal" on the server.
  3. The user executes the statement portal. Normally, when there's no row count limit sent, the statement will execute, return results, and delete the statement portal all at once. If there is a row count limit, and the limit is reached during execution, the results are sent back along with a "portal suspended" message.
  4. The user sends another execute statement, this time targeting the in-progress statement portal. The portal resumes execution of the query (it's up to the implementation how that part works exactly, it seems) as in step 3. This flow is repeated until all rows are exhausted or the client stops sending messages.

@knz (Contributor Author)

knz commented Sep 10, 2017

I think if we restrict the protocol so that only one portal can be executing at a time, and no new statement can be executed while a portal is still open (or accept that doing so closes the open portal), then we could still achieve this fairly easily (an S-sized project). But before we implement that, I'd like to know whether having at most one open portal is acceptable to clients that use this feature.

(The initial use case above would be OK with the limitation, FWIW)
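The proposed restriction amounts to a small per-connection state machine: track the one suspended portal, and reject (or, in a variant, implicitly close) anything else while it is open. A purely illustrative Go sketch, not CockroachDB's implementation:

```go
package main

import (
	"errors"
	"fmt"
)

// conn tracks at most one suspended portal per connection.
type conn struct {
	open string // name of the suspended portal, "" if none
}

var errPortalOpen = errors.New("another portal is still suspended")

// exec starts or resumes portal `name`; suspended reports whether the
// row limit was hit before the result set was exhausted.
func (c *conn) exec(name string, suspended bool) error {
	if c.open != "" && c.open != name {
		return errPortalOpen // variant: implicitly close c.open instead
	}
	if suspended {
		c.open = name // keep it resumable
	} else {
		c.open = "" // ran to completion; portal closed
	}
	return nil
}

func main() {
	var c conn
	fmt.Println(c.exec("p1", true))  // <nil>: p1 suspends
	fmt.Println(c.exec("p2", true))  // error: p1 is still open
	fmt.Println(c.exec("p1", false)) // <nil>: p1 resumes and completes
}
```

Under this model the JDBC use case above works, since the driver executes one fetch-limited portal at a time.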

@dianasaur323 dianasaur323 self-assigned this Sep 17, 2017
@bdarnell (Contributor)

bdarnell commented Dec 7, 2017

This is an issue for Solr, although there is a workaround: set batchSize to zero in the Solr configuration.

knz added a commit to knz/cockroach that referenced this issue Oct 23, 2018
The JDBC driver and perhaps others commonly try to use the "fetch
limit" parameter, which is yet unsupported in
CockroachDB (cockroachdb#4035). This patch adds telemetry to gauge demand.

Release note (sql change): attempts by client apps to use the
unsupported "fetch limit" parameter (e.g. via JDBC) will now be
captured in telemetry if statistics reporting is enabled, to gauge
support for this feature.
knz added a commit to knz/cockroach that referenced this issue Oct 23, 2018
craig bot pushed a commit that referenced this issue Oct 23, 2018
31637: pgwire: add telemetry for fetch limits r=knz a=knz

Requested by @awoods187 


31725: sql/parser: re-allow FAMILY, MINVALUE, MAXVALUE, NOTHING and INDEX in table names r=knz a=knz

Fixes #31589.

CockroachDB introduced non-standard extensions to its SQL dialect very
early in its history, before concerns of compatibility with existing
PostgreSQL clients became a priority. When these features were added,
new keywords were liberally marked as "reserved", so as to "make the
grammar work", and without noticing / care for the fact that this
change would make existing valid SQL queries/clients encounter new
errors.

An example of this:

1. let's make "column families" a thing

2. the syntax `create table(..., constraint xxx family(a,b,c))` is not
   good enough (although this would not require reserved keywords), we
   really want also `create table (..., family (a,b,c))` to be
   possible.

3. oh, the grammar won't let us because "family" is a possible column
   name? No matter! let's mark "FAMILY" as a reserved name for
   column/function names.

   - No concern for the fact that "family" is a perfectly valid
	 English name for things that people want to make an attribute of
	 in inventory / classification tables.

   - No concern for the fact that reserved column/function names are
	 also reserved for table names.

4. (much later) Clients complaining about the fact they can't call
   their columns or tables `family` without quoting.

Ditto "INDEX", "MINVALUE", "MAXVALUE", and perhaps others.

Moral of the story: DO NOT MAKE NEW RESERVED KEYWORDS UNLESS YOU'RE
VERY VERY VERY SURE THAT THERE IS NO LEGITIMATE USE FOR THEM IN CLIENT
APPS EVER.

(An example perhaps: the word "NOTHING" was also marked as reserved,
but it's much more unlikely this word will ever be used for something
useful.)

This patch restores the use of FAMILY, INDEX, NOTHING, MINVALUE and
MAXVALUE in table and function names, by introducing an awkward dance
in the grammar of keyword non-terminals and database object names.

They remain reserved as column names because of the non-standard
CockroachDB extensions.

Release note (sql change): It is now again possible to use the
keywords FAMILY, MINVALUE, MAXVALUE, INDEX or NOTHING as table names,
for compatibility with PostgreSQL.

31731: sql/parser: unreserve INDEX and NOTHING from the RHS of SET statements r=knz a=knz

First commit from #31725.

The SET statement in the pg dialect is special because it
auto-converts identifiers on its RHS to symbolic values or strings. In
particular it is meant to support a diversity of special keywords as
pseudo-values.

This patch ensures that INDEX and NOTHING are accepted on the RHS.

Release note (sql change): the names "index" and "nothing" are again
accepted in the right-hand-side of the assignment in SET statements,
for compatibility with PostgreSQL.


Co-authored-by: Raphael 'kena' Poss <[email protected]>
knz added a commit to knz/cockroach that referenced this issue Nov 12, 2018
@awoods187 awoods187 added the X-anchored-telemetry The issue number is anchored by telemetry references. label Apr 9, 2019
craig bot pushed a commit that referenced this issue Jul 31, 2019
39085: sql: add partial support for pgwire row count limits r=mjibson a=mjibson

Previously we supported row count limits as long as the limit was higher
than the number of returned rows. Here we add a feature that supports
this part of the spec in a partial way. This should unblock simple JDBC
usage of this feature.

Supported use cases:
- implicit transactions (which auto close the portal after suspension)
- explicit transactions executed to completion

Unsupported use cases (with explicit transactions):
- interleaved execution
- explicitly closing a portal after partial execution

Many options were evaluated during implementation. The one here is based
on work where the pgwire package itself takes over the state machine
processing during AddRow and looks for further ExecPortal messages. This
has a number of problems: there are now two state machines, and we can
only support part of the spec. However, it also has benefits: it is a
simple implementation that is easy to understand.

Two other solutions were evaluated.

First, teaching distsql how to pause and resume execution (a
proof-of-concept branch for this was produced). I did not pursue this
route because of my own unfamiliarity with distsql, and I thought that
attempting to reach a high level of confidence that all of the new pause,
resume, and error logic flows were correct would be very difficult
(I may be wrong about this). Also, this approach needed to address how
to handle post-distsql execution cleanup and stats gathering. That is,
after the call into distsql was paused and returned control back to
the sql executor, there's a lot of code that gets run to cleanup stuff
and report stats in various places, including some defers. These would
need to be audited and only execute once per statement, not once per
portal execution.

Second, start distsql execution in a new go routine and send exec portal
requests to it over a channel. This would avoid teaching distsql how
to pause and resume itself, but instead move that complexity into the
channel and go routine handling, another area ripe for race conditions
and deadlocks. This also needed to deal with the post-distsql execution
cleanup and defers handling discussed above.

For now, we decided that this limited implementation is good enough for
what we need today. In order to one day either support the full spec
or pay down some of our technical debt, we can probably do a number of
preliminary refactors that will make invoking much of the distsql path
multiple times easier.

pgx got a version bump to get support for PortalSuspended. The current
pgx version had a few new dependencies too.

See #4035

Release note (sql change): add partial support for row limits during
portal execution in pgwire.

Co-authored-by: Matt Jibson <[email protected]>
@jordanlewis jordanlewis assigned maddyblue and unassigned awoods187 Aug 6, 2019
@jordanlewis (Member)

The initial work here is done: row count limits are now supported! This allows support for most simple use cases of JDBC streaming. See #39085.

#40195 now tracks the rest of the missing functionality.

jordanlewis added a commit to jordanlewis/cockroach that referenced this issue Oct 2, 2019
The spreadsheet we discussed is unwieldy - hard to edit and impossible to keep
up to date. If we write down blacklists in code, then we can use an approach
like this to always have an up to date aggregation.

So far it seems like there's just a lot of unknowns to categorize still.

The output today:

```
=== RUN   TestBlacklists
 648: unknown                                                (unknown)
 493: cockroachdb#5807   (sql: Add support for TEMP tables)
 151: cockroachdb#17511  (sql: support stored procedures)
  86: cockroachdb#26097  (sql: make TIMETZ more pg-compatible)
  56: cockroachdb#10735  (sql: support SQL savepoints)
  55: cockroachdb#32552  (multi-dim arrays)
  55: cockroachdb#26508  (sql: restricted DDL / DML inside transactions)
  52: cockroachdb#32565  (sql: support optional TIME precision)
  39: cockroachdb#243    (roadmap: Blob storage)
  33: cockroachdb#26725  (sql: support postgres' API to handle blob storage (incl lo_creat, lo_from_bytea))
  31: cockroachdb#27793  (sql: support custom/user-defined base scalar (primitive) types)
  24: cockroachdb#12123  (sql: Can't drop and replace a table within a transaction)
  24: cockroachdb#26443  (sql: support user-defined schemas between database and table)
  20: cockroachdb#21286  (sql: Add support for geometric types)
  18: cockroachdb#6583   (sql: explicit lock syntax (SELECT FOR {SHARE,UPDATE} {skip locked,nowait}))
  17: cockroachdb#22329  (Support XA distributed transactions in CockroachDB)
  16: cockroachdb#24062  (sql: 32 bit SERIAL type)
  16: cockroachdb#30352  (roadmap:when CockroachDB  will support cursor?)
  12: cockroachdb#27791  (sql: support RANGE types)
   8: cockroachdb#40195  (pgwire: multiple active result sets (portals) not supported)
   8: cockroachdb#6130   (sql: add support for key watches with notifications of changes)
   5: Expected Failure                                       (unknown)
   5: cockroachdb#23468  (sql: support sql arrays of JSONB)
   5: cockroachdb#40854  (sql: set application_name from connection string)
   4: cockroachdb#35879  (sql: `default_transaction_read_only` should also accept 'on' and 'off')
   4: cockroachdb#32610  (sql: can't insert self reference)
   4: cockroachdb#40205  (sql: add non-trivial implementations of FOR UPDATE, FOR NO KEY UPDATE, FOR SHARE, FOR NO KEY SHARE)
   4: cockroachdb#35897  (sql: unknown function: pg_terminate_backend())
   4: cockroachdb#4035   (sql/pgwire: missing support for row count limits in pgwire)
   3: cockroachdb#27796  (sql: support user-defined DOMAIN types)
   3: cockroachdb#3781   (sql: Add Data Type Formatting Functions)
   3: cockroachdb#40476  (sql: support `FOR {UPDATE,SHARE} {SKIP LOCKED,NOWAIT}`)
   3: cockroachdb#35882  (sql: support other character sets)
   2: cockroachdb#10028  (sql: Support view queries with star expansions)
   2: cockroachdb#35807  (sql: INTERVAL output doesn't match PG)
   2: cockroachdb#35902  (sql: large object support)
   2: cockroachdb#40474  (sql: support `SELECT ... FOR UPDATE OF` syntax)
   1: cockroachdb#18846  (sql: Support CIDR column type)
   1: cockroachdb#9682   (sql: implement computed indexes)
   1: cockroachdb#31632  (sql: FK options (deferrable, etc))
   1: cockroachdb#24897  (sql: CREATE OR REPLACE VIEW)
   1: pass?                                                  (unknown)
   1: cockroachdb#36215  (sql: enable setting standard_conforming_strings to off)
   1: cockroachdb#32562  (sql: support SET LOCAL and txn-scoped session variable changes)
   1: cockroachdb#36116  (sql: psychopg: investigate how `'infinity'::timestamp` is presented)
   1: cockroachdb#26732  (sql: support the binary operator: <int> / <float>)
   1: cockroachdb#23299  (sql: support coercing string literals to arrays)
   1: cockroachdb#36115  (sql: psychopg: investigate if datetimetz is being returned instead of datetime)
   1: cockroachdb#26925  (sql: make the CockroachDB integer types more compatible with postgres)
   1: cockroachdb#21085  (sql: WITH RECURSIVE (recursive common table expressions))
   1: cockroachdb#36179  (sql: implicity convert date to timestamp)
   1: cockroachdb#36118  (sql: Cannot parse '24:00' as type time)
   1: cockroachdb#31708  (sql: support current_time)
```

Release justification: non-production change
Release note: None
@quaff

quaff commented Oct 17, 2019

CockroachDB version: CCL v19.1.5 @ 2019/10/10 02:31:05 (go1.13.1)
PG JDBC driver version: 42.2.8
JDK version: 1.8.0_221

import java.sql.*;

String sql = "******";
try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost:26257/postgres", "root", "")) {
	try (PreparedStatement stmt = conn.prepareStatement(sql)) {
		stmt.setFetchSize(1);
		conn.setAutoCommit(false);
		try (ResultSet rs = stmt.executeQuery()) {
		}
	}
}

The error is raised with setFetchSize(1) combined with setAutoCommit(false), or with a single setMaxRows(1):

Exception in thread "main" org.postgresql.util.PSQLException: ERROR: unimplemented: execute row count limits not supported: 1 of 34
 See: https://github.com/cockroachdb/cockroach/issues/4035
	at org.postgresql.core.v3.QueryExecutorImpl.receiveErrorResponse(QueryExecutorImpl.java:2497)
	at org.postgresql.core.v3.QueryExecutorImpl.processResults(QueryExecutorImpl.java:2233)
	at org.postgresql.core.v3.QueryExecutorImpl.execute(QueryExecutorImpl.java:310)
	at org.postgresql.jdbc.PgStatement.executeInternal(PgStatement.java:446)
	at org.postgresql.jdbc.PgStatement.execute(PgStatement.java:370)
	at org.postgresql.jdbc.PgPreparedStatement.executeWithFlags(PgPreparedStatement.java:149)
	at org.postgresql.jdbc.PgPreparedStatement.executeQuery(PgPreparedStatement.java:108)

@maddyblue (Contributor)

This is only supported in 19.2. The most recent beta is at https://www.cockroachlabs.com/docs/releases/v19.2.0-beta.20191014.html.

jordanlewis added a commit to jordanlewis/cockroach that referenced this issue Oct 24, 2019
craig bot pushed a commit that referenced this issue Nov 7, 2019
41252: roachtest: add test that aggregates orm blacklist failures r=jordanlewis a=jordanlewis


Co-authored-by: Jordan Lewis <[email protected]>