DataNucleus with Spring Boot
rommeldsumpo@...
Hi Guys,
I would like to ask for some help with the issue below. When I run `mvn clean package spring-boot:run`, startup fails with:

DataNucleus.Persistence : Error : Could not find API definition for name "JDO". Perhaps you dont have the requisite datanucleus-api-XXX jar in the CLASSPATH?
org.datanucleus.exceptions.NucleusUserException: Error : Could not find API definition for name "JDO". Perhaps you dont have the requisite datanucleus-api-XXX jar in the CLASSPATH?
Re: Data nucleus with springboot
The message is very clear. You seemingly don't have the jar `datanucleus-api-jdo.jar` in the CLASSPATH. That jar contains a file `plugin.xml` and if it isn't found then features cannot work.
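For Maven users, that jar corresponds to a dependency along these lines (the version shown is only illustrative; pick the one matching your datanucleus-core):

```xml
<!-- Supplies the JDO API binding for DataNucleus, including the plugin.xml
     that DataNucleus scans at startup. Version number is an example only. -->
<dependency>
    <groupId>org.datanucleus</groupId>
    <artifactId>datanucleus-api-jdo</artifactId>
    <version>5.1.6</version>
</dependency>
```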
Alter column types for existing schemas
mores
I am able to use extension point="org.datanucleus.store.rdbms.rdbms_mapping" so that all new Boolean columns are created as smallint instead of bit.
Is there an easy way to alter all columns of existing tables to match the rdbms_mapping?
Re: Alter column types for existing schemas
I don't think there is any handling for that. Using something like Liquibase for schema management might offer that, though I've never used it so you'd have to find out.
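Whatever tool does the migration, it would end up issuing DDL of roughly this shape. This is only a hedged sketch assuming PostgreSQL syntax; the table and column names are made up:

```sql
-- Hypothetical example: convert an existing boolean/bit column to SMALLINT
-- so existing tables match the new rdbms_mapping for Boolean fields.
ALTER TABLE account
    ALTER COLUMN is_active TYPE smallint
    USING (CASE WHEN is_active THEN 1 ELSE 0 END);
```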
Is there any good documentation for how to add support for new data stores?
JashDave
I am trying to add support for Oracle NoSQL DB, but I have not found good enough documentation.
I am trying to follow this link http://www.datanucleus.org:15080/products/accessplatform_5_1/extensions/extensions.html#howto but I am not understanding it clearly. I also tried going through JavaDocs like http://www.datanucleus.org/javadocs/core/latest/org/datanucleus/store/StoreManager.html but it has hardly any description. I also tried the abstract implementation's JavaDoc, e.g. http://www.datanucleus.org/javadocs/core/latest/org/datanucleus/store/AbstractStoreManager.html#getSupportedOptions--, but it also doesn't provide complete details. For example, it doesn't explain the StoreManager.OPTION_DATASTORE_TIME_STORES_MILLISECS option. I also tried going through the MongoDB plugin's code, but it doesn't have descriptive comments either. So is there any link or documentation with good enough information on how to add support for a new data store?
Re: Is there any good documentation for how to add support for new data stores?
Hello. "Good" is clearly subjective; only you know what your experience level is, and so what you are needing.
Since it is left to one person to write such documentation, and that one person doesn't have a lot of time, the first link is the primary place to use. This is really why documentation would be better written by people who use and develop with DataNucleus rather than by someone who is heavily involved. You don't give an idea of "what" you are not getting.

The "supportedOptions" are JDO/JPA features. Sure, it would be nice to have Javadocs for everything under the sun (all defined in StoreManager), but I just don't have the time, and I try to make the names symbolise what they are for (so then Javadocs are not so critical). If in doubt about what an option means, ask. StoreManager.OPTION_DATASTORE_TIME_STORES_MILLISECS means: does the datastore store milliseconds with a "Time" type, or is it just hour+min+sec? Some datastores don't.
Re: Is there any good documentation for how to add support for new data stores?
JashDave
Hello Andy, thanks for your quick response. I appreciate your effort and dedication; I know how difficult it is to do everything single-handed.
Sorry for not being clear. What I meant by "good" was: is there any more descriptive documentation (which explains every field and method, and why they are needed)? I now know there is not. Initially I was expecting a tutorial-style guide that walks through an example implementation of an abstract data store, but that is too much to expect. I have many doubts, but I need some time to formulate them and ask. Thanks again.
Should not need to provide JDBC driver name.
Mark Thornton
Since Java 6 / JDBC 4 it has been possible (and normal practice) to open a Connection without specifying the driver or attempting to load its class. Unfortunately, if I omit the driver name in DataNucleus (JPA API), it attempts to load a driver class with the name "" (i.e. an empty string), and then fails.
Could this behaviour be changed? Mark Thornton
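For background, the JDBC 4 behaviour Mark describes rests on the ServiceLoader mechanism: driver jars ship a `META-INF/services/java.sql.Driver` file, so DriverManager can locate the driver from the JDBC URL alone without any `Class.forName(...)` call. A small self-contained demonstration (the class name here is mine, not from any of the projects discussed):

```java
import java.sql.Driver;
import java.util.ArrayList;
import java.util.List;
import java.util.ServiceLoader;

// Lists every JDBC driver the JVM can discover via the JDBC 4
// ServiceLoader mechanism, i.e. without Class.forName().
public class DriverDiscovery {
    public static List<String> discoveredDriverClassNames() {
        List<String> names = new ArrayList<>();
        for (Driver d : ServiceLoader.load(Driver.class)) {
            names.add(d.getClass().getName());
        }
        return names;
    }

    public static void main(String[] args) {
        // Prints nothing if no JDBC driver jar is on the classpath.
        discoveredDriverClassNames().forEach(System.out::println);
    }
}
```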
Re: Should not need to provide JDBC driver name.
Any "behaviour" can be changed because the code is open source; people can contribute GitHub pull requests.
Re: Should not need to provide JDBC driver name.
Mark Thornton
True, but before putting in the effort I prefer to discover whether the project maintainers are in favour of such a change. I also don't know if a similar problem afflicts other JPA implementations.
I have been a little surprised recently to discover that many people remain unaware that specifying the driver class is unnecessary in JDBC 4. Mark
Re: Should not need to provide JDBC driver name.
Since the JPA and JDO specs were written for earlier JDKs, I guess they have never been updated to reflect what more recent JDBC specs mandate. Since DataNucleus (v5+) supports JRE 1.8+ ONLY, there is no issue with including JDBC4+ semantics. What other JPA providers do or don't do, you'd have to ask them.
Control quoting of identifiers
Mark Thornton
Is it possible to control the quoting of identifiers? I am using PostgreSQL (with JPA) and, by default, all table and column names are created as upper case. This is unnatural for that database (which converts all unquoted names to lower case).
I am aware of the datanucleus.identifier.case property, which allows me to change the case to LowerCase, but that wouldn't work well were I to swap to some other database. The SQL standard requires unquoted names to be upper-cased by the database; unfortunately both PostgreSQL and MS SQL Server violate this. Mark
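For reference, the property the question mentions is set in the persistence configuration, e.g.:

```properties
# Generate lower-case identifiers so names look natural on PostgreSQL.
# Property name and value are as quoted in the question above.
datanucleus.identifier.case=LowerCase
```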
Re: Control quoting of identifiers
The mechanism you quoted is the way to change case; that is what there is. If it doesn't meet your needs, you can get the code and contribute a feature.
DataNucleus Forum deletion
The DataNucleus Forum is planned to be DELETED permanently in February/March 2018.
The MVNForum software that it uses hasn't been updated for years, and preventing the spammers who try to post their crap on there takes up too much of my time. Make use of all previous questions before that point, but note that the documentation for current versions of DataNucleus means that the majority of questions there are no longer relevant.
Filtered query fails for class with reference to a persistent interface having multiple implementations extending an abstract base class
Chris Colman
I have created a test case (attached) that reproduces this issue.
The class diagram is: [diagram not included in this dump]. The test case also dumps the AirCraftController (atc) table schema and its records in the H2 database to show the columns created. It appears to have more columns than are strictly necessary, and that may or may not be the cause of the issue. If I remove the AirbusA380 class and make no other code changes, the test case succeeds.
EC.preCommit L1Cache op IS NULL!
I have an application that, upon initial startup, creates about 100,000 objects in an H2 database (using JDO). Each object is committed in its own transaction.
About half way through, the log spits out "EC.preCommit L1Cache op IS NULL!". The only place I find a reference to this message is in the source file ExecutionContextImpl.java. The message doesn't occur every time, but it does occur most of the time, usually around the same spot. In terms of persisting, I'm using the following code: pm.currentTransaction().begin(); I'm really at a loss here. Any help would be greatly appreciated.
Re: EC.preCommit L1Cache op IS NULL!
"EC.preCommit" will only ever be in that one class (EC = ExecutionContext(Impl)), and the L1 cache will only ever be updated in that one class.
Consequently, if there is a NULL value in the L1 cache, you'd need to debug all places where the variable "cache" has "put" called on it with a null value (putObjectIntoLevel1Cache?), and work out why. If it never puts a null in but you get one out, then look at whether you are using a PM multi-threaded (!!!) and fix your code to NOT do that. Or look at the class of the "cache" object.

FWIW, DataNucleus provides a mechanism for providing an initialisation SQL file to load up objects, which would run a damn sight more efficiently than just instantiating objects and going through JDO (see property datanucleus.generateSchema.scripts.load and its related properties).
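The script-loading mechanism mentioned in the reply is driven by persistence properties along these lines (the property name is from the reply above; the file path is only an example):

```properties
# SQL script to run to load data after schema generation.
# The path shown is illustrative.
datanucleus.generateSchema.scripts.load=META-INF/load.sql
```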
Re: EC.preCommit L1Cache op IS NULL!
I think I figured it out. The problem only surfaced when running on Java 9.0.1 inside a Docker container when the heap was exhausted (but no OutOfMemoryError was displayed).
Moving back to Java 8_151 (inside Docker), but with -XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap and a large enough heap specified, made the message go away. It was a seriously strange problem, without any indication it was memory related. I have not tried increasing the heap on Java 9, so I don't know if 9 has issues or was just masking the fact that it was a memory issue; 8 told me right away by throwing OutOfMemoryError.

Thanks for the heads up on the script loading. For most of my use-cases I will not know the data in advance of initial start-up, but I plan on using the script loading to populate default/demo data. That'll be really convenient. Cheers
Re: Filtered query fails for class with reference to a persistent interface having multiple implementations extending an abstract base class
Chris Colman
This is fixed in datanucleus-rdbms version 5.1.6.