
How to change transaction level for metadata (INFORMATION_SCHEMA) operations (Spanner DB)?


yunus@...
 

Hi everyone,

I am implementing an RDBMS adapter for GCP Spanner and have run into a problem with transactions and metadata operations.
Spanner only supports INFORMATION_SCHEMA queries in read-only transactions, not in read-write transactions.
While performing an insert or select operation, DataNucleus issues a getTable call to fetch column info, or at least to check that the table exists.
Spanner then raises an exception saying that INFORMATION_SCHEMA operations are not supported in a read-write transaction.
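To show what Spanner rejects, here is a rough standalone JDBC reproduction of the behaviour (untested sketch; the connection URL is a placeholder):

// Untested sketch reproducing the problem outside DataNucleus; the connection URL is a placeholder.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class InfoSchemaRepro {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-db";
        try (Connection conn = DriverManager.getConnection(url)) {

            // Read-write transaction: Spanner rejects the INFORMATION_SCHEMA query.
            conn.setAutoCommit(false);
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES")) {
                // not reached
            } catch (SQLException e) {
                // INVALID_ARGUMENT: Unsupported concurrency mode in query using INFORMATION_SCHEMA.
                System.out.println("read-write tx: " + e.getMessage());
                conn.rollback();
            }

            // Read-only transaction: the same query works.
            conn.setReadOnly(true);
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery(
                         "SELECT TABLE_NAME FROM INFORMATION_SCHEMA.TABLES")) {
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
            }
            conn.commit();
        }
    }
}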

I am somewhat stuck on how to proceed.
The first solution that comes to mind is using a second, read-only connection for these metadata operations, but I don't know how viable or difficult that is.
Do you have any suggestions?

I am using DataNucleus 4.1; I have not tested with the latest version 5 yet.

thanks in advance
yunus


Andy
 

Hi,
Firstly, the only codebase that is supported is the current one (5.2), so that is the only version anything would have a chance of being merged into, FYI.

Secondly, do you have some particular stack traces that show where the calls to "getTable" or "get column info" are being made? That way I can see exactly what you're referring to.

Schema operations are already performed using connections from a different pool than the normal operations.
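If you need that secondary pool to hand out read-only connections, something along these lines ought to be possible via the standard JDO "connection factory 2" (an untested sketch; the DataSource arguments are placeholders, and whether factory 2 covers every schema access in your case is an assumption you would need to verify):

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;
import javax.sql.DataSource;

public class SecondaryFactorySketch {
    static PersistenceManagerFactory createPmf(DataSource mainDs, DataSource schemaDs) {
        Properties props = new Properties();
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");

        PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
        // Primary factory: normal (read-write) persistence operations.
        pmf.setConnectionFactory(mainDs);
        // Secondary factory: assumed here to be the pool used for nontransactional
        // work such as schema handling; schemaDs would hand out read-only connections.
        pmf.setConnectionFactory2(schemaDs);
        return pmf;
    }
}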


yunus@...
 

Thanks for the response.
Noted that only 5.2 is open for merging.

Following your comment I dug deeper and saw that it is possible to request a new connection for value generation with a different isolation level. I set datanucleus.valuegeneration.transactionIsolation to none, and now I am able to run the tutorial sample.
Do you think this is the correct solution?
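For reference, this is roughly how I am setting the property for the tutorial run (a sketch, not the tutorial's own config code; the connection URL is a placeholder):

import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManagerFactory;

public class PmfSetup {
    static PersistenceManagerFactory createPmf() {
        Properties props = new Properties();
        props.setProperty("javax.jdo.PersistenceManagerFactoryClass",
                "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
        props.setProperty("javax.jdo.option.ConnectionURL",
                "jdbc:cloudspanner:/projects/my-project/instances/my-instance/databases/my-db");
        // Value generation gets its own connection; with isolation "none" its
        // INFORMATION_SCHEMA check no longer happens inside a read-write transaction.
        props.setProperty("datanucleus.valuegeneration.transactionIsolation", "none");
        return JDOHelper.getPersistenceManagerFactory(props);
    }
}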

The stack trace is below. In RDBMSStoreManager.getStrategyValueForGenerator() a new connection is created; after I set datanucleus.valuegeneration.transactionIsolation to none, as above, everything was fine:

com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcSqlExceptionImpl: INVALID_ARGUMENT: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Unsupported concurrency mode in query using INFORMATION_SCHEMA.
at com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory.of(JdbcSqlExceptionFactory.java:208)
at com.google.cloud.spanner.jdbc.AbstractJdbcStatement.executeQuery(AbstractJdbcStatement.java:166)
at com.google.cloud.spanner.jdbc.JdbcPreparedStatement.executeQueryWithOptions(JdbcPreparedStatement.java:62)
at com.google.cloud.spanner.jdbc.JdbcDatabaseMetaData.getTables(JdbcDatabaseMetaData.java:761)
at org.datanucleus.store.rdbms.datasource.dbcp.DelegatingDatabaseMetaData.getTables(DelegatingDatabaseMetaData.java:589)
at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getTableType(RDBMSSchemaHandler.java:433)
at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:588)
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.repositoryExists(TableGenerator.java:242)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:81)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:184)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:92)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2048)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1290)
at org.datanucleus.state.StateManagerImpl.populateStrategyFields(StateManagerImpl.java:2201)
at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:418)
at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:120)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:218)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2079)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2177)
at org.datanucleus.store.types.SCOUtils.validateObjectForWriting(SCOUtils.java:1486)
at org.datanucleus.store.rdbms.scostore.ElementContainerStore.validateElementForWriting(ElementContainerStore.java:422)
at org.datanucleus.store.rdbms.scostore.JoinSetStore.addAll(JoinSetStore.java:341)
at org.datanucleus.store.rdbms.mapping.java.CollectionMapping.postInsert(CollectionMapping.java:157)
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:522)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObjectInTable(RDBMSPersistenceHandler.java:162)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:138)
at org.datanucleus.state.StateManagerImpl.internalMakePersistent(StateManagerImpl.java:3363)
at org.datanucleus.state.StateManagerImpl.makePersistent(StateManagerImpl.java:3339)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2080)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1923)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1778)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:724)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749)
at org.datanucleus.samples.jdo.tutorial.Main.main(Main.java:61)


Andy
 

Only you can decide what is "right" for your database, but requiring a CONNECTION to be "read only" (presumably this database has a JDBC driver and requires a call to Connection.setReadOnly?) and changing the isolation level to NONE aren't the same thing! NONE will simply "auto-commit" the value generation connection.
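To make the distinction concrete, a rough JDBC-level sketch (illustrative only; the DataSource is a stand-in, and this is not what DN does internally):

import java.sql.Connection;
import javax.sql.DataSource;

public class ReadOnlyVersusNone {
    static void demo(DataSource dataSource) throws Exception {
        // A "read-only" connection: still transactional, but flagged so the
        // driver can use a read-only transaction mode.
        try (Connection readOnly = dataSource.getConnection()) {
            readOnly.setReadOnly(true);
            readOnly.setAutoCommit(false);
            // ... INFORMATION_SCHEMA queries would run here ...
            readOnly.commit();
        }

        // Isolation "none" as applied to the value generation connection:
        // effectively auto-commit, each statement commits on its own.
        try (Connection autoCommit = dataSource.getConnection()) {
            autoCommit.setAutoCommit(true);
            // ... value generation statements run and commit individually ...
        }
    }
}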

The DN codebase doesn't currently have explicit support for read-only connections (since no other database requires it) ... the only current use of "read-only" is where the whole database is "read only", allowing no changes.

In the context of value generation, the code gets a connection to read the current value generation "value" and optionally to write the new "value" (depending on the "strategy" used), using that single connection. It would require code changes to have a separate connection for any schema checks (in the value generation code) and a separate (read-write) connection for any value generation updates. Further to that, if you pass a connection through to the schema handling code for "value generation" and it finds that the required "table" doesn't exist, then it would need to create it ... and if the connection is read-only then that won't work! Think through what you really want to do.
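To visualise what such a split would involve, a purely conceptual sketch (this is NOT DataNucleus code; the table and column names are hypothetical, and a real implementation would also need the read and the update to share one transaction):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;
import javax.sql.DataSource;

public class SplitValueGenerationSketch {

    // Hypothetical name for a table-based value generator's repository table.
    private static final String SEQ_TABLE = "SEQUENCE_TABLE";

    long nextValue(DataSource readOnlyDs, DataSource readWriteDs) throws Exception {
        boolean tableExists;
        // Schema check on a read-only connection (the part Spanner is happy with).
        try (Connection ro = readOnlyDs.getConnection()) {
            ro.setReadOnly(true);
            try (ResultSet rs = ro.getMetaData().getTables(null, null, SEQ_TABLE, null)) {
                tableExists = rs.next();
            }
        }

        // Everything that writes has to go through a read-write connection.
        try (Connection rw = readWriteDs.getConnection();
             Statement stmt = rw.createStatement()) {
            if (!tableExists) {
                // Creating the repository table is a write, so it cannot use the read-only connection.
                stmt.execute("CREATE TABLE " + SEQ_TABLE
                        + " (SEQUENCE_NAME STRING(64), NEXT_VAL INT64) PRIMARY KEY (SEQUENCE_NAME)");
            }

            Long current = null;
            try (ResultSet rs = stmt.executeQuery(
                    "SELECT NEXT_VAL FROM " + SEQ_TABLE + " WHERE SEQUENCE_NAME = 'default'")) {
                if (rs.next()) {
                    current = rs.getLong(1);
                }
            }

            long next = (current == null) ? 1L : current;
            if (current == null) {
                stmt.executeUpdate("INSERT INTO " + SEQ_TABLE
                        + " (SEQUENCE_NAME, NEXT_VAL) VALUES ('default', " + (next + 1) + ")");
            } else {
                stmt.executeUpdate("UPDATE " + SEQ_TABLE + " SET NEXT_VAL = " + (next + 1)
                        + " WHERE SEQUENCE_NAME = 'default'");
            }
            return next;
        }
    }
}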