
Re: To set length of Datanucleus db2 column of type CLOB to 1M

Andy
 
Edited

Hello,
I have little interest in what was in some version from 2010. You can trace code changes on SourceForge (ancient code) and GitHub (recent code). Likely it is this change.

Perhaps you can provide the stack trace that calls that method, trace it through to when it is ever called, and hence find out how that case is handled now? Either way, that code was undocumented, likely dating from the TJDO days, and we don't use DB2, so someone who does use DB2 has to take responsibility for such things if it gets re-included.

Don't know what "is coming as 2147483647" means. Does it create your schema with that in a column? Something else?


To set length of Datanucleus db2 column of type CLOB to 1M

mayank.chawla@...
 

Hello,

I am migrating from DataNucleus 2.0.4 to 5.x. In 2.0.4, the length of DB2 CLOB columns used to come out as "1M" when the schema was created. Now, in 5.2.3, the length of DB2 CLOB columns comes out as 2147483647 (e.g. USER_X CLOB(2147483647), DESC_X CLOB(2147483647)) in the CREATE statements, but I need the CLOB column length to be 1M.

In the DataNucleus RDBMS 2.0.4 jar, I noticed a method getUnlimitedLengthPrecisionValue in the DB2Adapter class, which I believe results in the length of 1M. The method definition is:
public int getUnlimitedLengthPrecisionValue(SQLTypeInfo typeInfo)
{
    if (typeInfo.getDataType() == java.sql.Types.BLOB || typeInfo.getDataType() == java.sql.Types.CLOB)
    {
        return 1 << 30;
    }
    else
    {
        return super.getUnlimitedLengthPrecisionValue(typeInfo);
    }
}

I checked for a similar method in the DataNucleus RDBMS 5.2.3 jar, but it is not present. Kindly advise how this can be achieved, whether by adding some configuration or through the above method. I might have missed some important detail required to answer this question; please let me know if any information is needed.
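For comparison, the two lengths in play can be checked in plain Java (a standalone illustration, not DataNucleus code): the generic "unlimited" precision reported in the 5.2.3 DDL is Integer.MAX_VALUE, while the old DB2 override quoted above returned 1 << 30.

```java
public class ClobLengthCheck {
    public static void main(String[] args) {
        // Length appearing in the 5.2.3 DDL (the generic "unlimited" value)
        int genericUnlimited = Integer.MAX_VALUE; // 2147483647
        // Length returned by the old DB2Adapter override quoted above
        int db2Override = 1 << 30;                // 1073741824
        System.out.println(genericUnlimited);
        System.out.println(db2Override);
    }
}
```

One possible route back to the old behaviour, assuming the datastore adapter is still pluggable in 5.2 via the datanucleus.rdbms.datastoreAdapterClassName persistence property, would be to subclass DB2Adapter and reinstate the override quoted above; whether that extension point behaves identically in 5.2.3 would need checking against the current docs.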

Please find below versions being used:
1. java 8
2. datanucleus-api-jdo-5.2.4
3. datanucleus-core-5.2.3
4. datanucleus-rdbms-5.2.3

Thanks.


Re: Can someone point me where I can find a working example to call stored procedure from DN apis

Andy
 
Edited

That works in our tests for a database/driver that supports the JDBC CallableStatement API, as per this link. These tests are public, and anybody can contribute to them, just as they can to the code (not that 99% of users get what "open source" is).

Since you don't say what your database is, your JDBC driver, your version of DataNucleus, what you've tried, what happened ("did not work" strangely doesn't explain much), or what your stored proc is, there is nothing more to say.


Can someone point me where I can find a working example to call stored procedure from DN apis

ab.jaipur@...
 


Re: Datanucleus and Android

Andy
 

Looks like you added the missing javax.* classes, as per this.


Re: Datanucleus and Android

marco@...
 

...and here's another repo that might be interesting: https://github.com/cloudstore/cloudstore-android

The last commit comments state "cloudstore droid working with DN" and "PersistenceManager can be created!!!" -- so we certainly got it running. I don't remember what we did concerning the javax.naming package(s). Either we added the missing classes to our own dex so that class loading didn't fail, or we somehow removed references to them. Sorry, I don't remember the details anymore. I only remember that it certainly worked: it created tables, and wrote and read data.


Re: Datanucleus and Android

marco@...
 

It's been quite a while, but I definitely got DataNucleus running on Android. I wrote a test program and it worked fine. You already found my pull request, which fixed quite a few issues in the SQLDroid driver. I have no idea if it was ever merged into the official codebase. Some old code that made use of it is at https://github.com/cloudstore/cloudstore-experiment -- it worked in the Android emulator for sure, and IIRC also on my phone back then. Unfortunately, this was never put into productive use, for many reasons -- but not technical ones.


Re: How to change transaction level for metadata (INFORMATION_SCHEMA) operations (Spanner DB)?

Andy
 
Edited

Only you can decide what is "right" for your database, but requiring a CONNECTION to be "read only" (presumably this database has a JDBC driver and requires a call to Connection.setReadOnly?) and changing the isolation level to NONE aren't the same thing! NONE will simply "auto-commit" the value generation connection.

The DN codebase doesn't currently have explicit support for read-only connections (since no other database requires it); the only current use of "read-only" is where the whole database is "read only", allowing no changes.

In the context of value generation, it gets a connection to read the current value generation "value" and optionally to write the new "value" (depending on the "strategy" used), using that single connection. It would require code changes to have a separate connection for any schema checks (in the value generation code) and a separate (read-write) connection for any value generation updates.

Further to that, if you pass a connection through to the schema handling code for "value generation" and it finds that the required "table" doesn't exist, then it would need to create it ... and if the connection is read-only then that won't work! Think through what you really want to do.


Re: How to change transaction level for metadata (INFORMATION_SCHEMA) operations (Spanner DB)?

yunus@...
 

Thanks for the response. 
I noted that only 5.2 is open for merge.

After your comment, I dug deeper and saw that it is possible to request a new connection for value generation with a different isolation level. I set datanucleus.valuegeneration.transactionIsolation to none, and now I am able to run the tutorial sample.
Do you think that this is the correct solution?

The stack trace is below. In the highlighted RDBMSStoreManager.getStrategyValueForGenerator() a new connection is created. After I set valuegeneration.transactionIsolation to none, everything was fine:

com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory$JdbcSqlExceptionImpl: INVALID_ARGUMENT: io.grpc.StatusRuntimeException: INVALID_ARGUMENT: Unsupported concurrency mode in query using INFORMATION_SCHEMA.
at com.google.cloud.spanner.jdbc.JdbcSqlExceptionFactory.of(JdbcSqlExceptionFactory.java:208)
at com.google.cloud.spanner.jdbc.AbstractJdbcStatement.executeQuery(AbstractJdbcStatement.java:166)
at com.google.cloud.spanner.jdbc.JdbcPreparedStatement.executeQueryWithOptions(JdbcPreparedStatement.java:62)
at com.google.cloud.spanner.jdbc.JdbcDatabaseMetaData.getTables(JdbcDatabaseMetaData.java:761)
at org.datanucleus.store.rdbms.datasource.dbcp.DelegatingDatabaseMetaData.getTables(DelegatingDatabaseMetaData.java:589)
at org.datanucleus.store.rdbms.schema.RDBMSSchemaHandler.getTableType(RDBMSSchemaHandler.java:433)
at org.datanucleus.store.rdbms.table.AbstractTable.exists(AbstractTable.java:588)
at org.datanucleus.store.rdbms.valuegenerator.TableGenerator.repositoryExists(TableGenerator.java:242)
at org.datanucleus.store.rdbms.valuegenerator.AbstractRDBMSGenerator.obtainGenerationBlock(AbstractRDBMSGenerator.java:81)
at org.datanucleus.store.valuegenerator.AbstractGenerator.obtainGenerationBlock(AbstractGenerator.java:184)
at org.datanucleus.store.valuegenerator.AbstractGenerator.next(AbstractGenerator.java:92)
at org.datanucleus.store.rdbms.RDBMSStoreManager.getStrategyValueForGenerator(RDBMSStoreManager.java:2048)
at org.datanucleus.store.AbstractStoreManager.getStrategyValue(AbstractStoreManager.java:1290)
at org.datanucleus.state.StateManagerImpl.populateStrategyFields(StateManagerImpl.java:2201)
at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:418)
at org.datanucleus.state.StateManagerImpl.initialiseForPersistentNew(StateManagerImpl.java:120)
at org.datanucleus.state.ObjectProviderFactoryImpl.newForPersistentNew(ObjectProviderFactoryImpl.java:218)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2079)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2177)
at org.datanucleus.store.types.SCOUtils.validateObjectForWriting(SCOUtils.java:1486)
at org.datanucleus.store.rdbms.scostore.ElementContainerStore.validateElementForWriting(ElementContainerStore.java:422)
at org.datanucleus.store.rdbms.scostore.JoinSetStore.addAll(JoinSetStore.java:341)
at org.datanucleus.store.rdbms.mapping.java.CollectionMapping.postInsert(CollectionMapping.java:157)
at org.datanucleus.store.rdbms.request.InsertRequest.execute(InsertRequest.java:522)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObjectInTable(RDBMSPersistenceHandler.java:162)
at org.datanucleus.store.rdbms.RDBMSPersistenceHandler.insertObject(RDBMSPersistenceHandler.java:138)
at org.datanucleus.state.StateManagerImpl.internalMakePersistent(StateManagerImpl.java:3363)
at org.datanucleus.state.StateManagerImpl.makePersistent(StateManagerImpl.java:3339)
at org.datanucleus.ExecutionContextImpl.persistObjectInternal(ExecutionContextImpl.java:2080)
at org.datanucleus.ExecutionContextImpl.persistObjectWork(ExecutionContextImpl.java:1923)
at org.datanucleus.ExecutionContextImpl.persistObject(ExecutionContextImpl.java:1778)
at org.datanucleus.api.jdo.JDOPersistenceManager.jdoMakePersistent(JDOPersistenceManager.java:724)
at org.datanucleus.api.jdo.JDOPersistenceManager.makePersistent(JDOPersistenceManager.java:749)
at org.datanucleus.samples.jdo.tutorial.Main.main(Main.java:61)
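The workaround described above can be sketched as a persistence configuration fragment (property name and value as used in this thread; `none` makes the value-generation connection effectively auto-commit):

```properties
# Run value generation on a connection with no transaction isolation
# (auto-commit), so the INFORMATION_SCHEMA lookup made while checking
# the generator table does not execute inside a read-write transaction.
datanucleus.valuegeneration.transactionIsolation=none
```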


Re: How to change transaction level for metadata (INFORMATION_SCHEMA) operations (Spanner DB)?

Andy
 

Hi,
Firstly, the only source that is supported is current (5.2), so that's the only chance anything would have of being merged, FYI.

Secondly, do you have some particular stack traces that show where there are calls to "getTable" or "get column info"? That way I can see exactly what you're referring to.

Schema operations are already performed using connections from a different pool than the normal operations.


How to change transaction level for metadata (INFORMATION_SCHEMA) operations (Spanner DB)?

yunus@...
 

Hi everyone,

I am implementing an RDBMS adapter for GCP Spanner. I have encountered a problem with transactions and metadata operations.
Spanner supports information schema queries in read-only transactions, not in read-write ones.
While performing an insert or select operation, DataNucleus performs a getTable operation to get column info, or at least to check the existence of the table.
Spanner then raises an exception that an information schema operation is not supported in a read-write transaction.

I am kind of stuck on how to proceed.
The first solution that comes to mind is using a second, read-only connection to perform these metadata operations, but I don't know how viable or hard that is.
Do you have any suggestions?

I use Datanucleus 4.1. I have not tested with the latest version 5 yet.

thanks in advance
yunus


Re: DataNucleus - Class versions V1_5 or less must use F_NEW frames

mayank.chawla@...
 

Thanks Andy.

Your point is correct. After changing the source and target properties to 1.6 (changing which was also trouble, as it is a big project and resulted in other errors), the classes were enhanced fine. I couldn't find the exact root cause, but as you said, there was some bytecode-related issue; it doesn't work with 1.5 and lower versions, I believe due to a major classfile format change in the JDK.
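For the record, and assuming the project builds with Maven (the thread doesn't say), the source/target change described above corresponds to a compiler-plugin setting like:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-compiler-plugin</artifactId>
  <configuration>
    <!-- StackMapTable frames require classfile version 50 (Java 6)
         or above; 1.5 classfiles trigger the F_NEW frames error
         in the ASM-based enhancer -->
    <source>1.6</source>
    <target>1.6</target>
  </configuration>
</plugin>
```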


Re: DataNucleus - Class versions V1_5 or less must use F_NEW frames

Andy
 

In the absence of any further info, the classes being enhanced are likely the problem. ASM is clearly getting upset about some bytecode, and if it works when compiling classes with JDK 1.8 and then enhancing, then it's down to the classes being enhanced and whatever is in their bytecode.


Re: DataNucleus - Class versions V1_5 or less must use F_NEW frames

mayank.chawla@...
 

My bad for mentioning ASM v5.0.3; even in the stack trace it is org.datanucleus.enhancer.DataNucleusEnhancer. I saw that reference and mentioned ASM v5.0.3 by mistake.



Re: DataNucleus - Class versions V1_5 or less must use F_NEW frames

Andy
 

Only you know what the differences are between the situations you refer to.

On StackOverflow you say you are using ASM v5.0.3 (not mentioned here at all). If you are using DataNucleus v5.2 then you are using the DN-built-in ASM v8.0.1, and not some standalone ASM jar.


Re: Datanucleus and Android

Andy
 

Well, javax classes not being present in Android's dex stuff kinda limits things (but then they are required by the JDO API), and by "run-time search for a jar containing the JDO api" I assume you mean the DN plugin mechanism (searching for MANIFEST.MF and plugin.xml ... which uses javax.xml.*). I mailed nlmarco to see if he can remember what they did.


Re: DataNucleus - Class versions V1_5 or less must use F_NEW frames

mayank.chawla@...
 

I ran a sample DataNucleus program with JDK 8, kept the source and generated .class file compatibility at 1.5, and it enhanced all classes without any issues. So I assume it works fine with 1.5-compatible classes, but then I'm not sure why I am facing this issue in my main project.


Re: Datanucleus and Android

steve@...
 

The issue wasn't the JDBC driver. It was the use of the javax.naming.* packages, and the run-time search for a jar containing the JDO api, which isn't included in the .dex packaging for Android. I managed to build without the use of javax.naming.*, but I gave up at the api searching.


Re: DataNucleus - Class versions V1_5 or less must use F_NEW frames

Andy
 

Must be. When I run, I always use JDK 1.8 with source/target 1.8+, and enhancement (via ASM) will use 1.8 bytecode instructions.


Re: DataNucleus - Class versions V1_5 or less must use F_NEW frames

mayank.chawla@...
 

@Andy

Thanks for your reply.

I am using JDK 8 with source=1.5 and target=1.5, as I have some legacy jars, and DataNucleus 5.2. I have read that the DataNucleus enhancer reads the compiled class and does the enhancement.
Since source compatibility makes the code compatible with 1.5, do you think it could be causing this issue?