
Re: Best practices for connection factory configurations when using connection pooling?

niklas.duester@...
 

Thanks Andy, I appreciate the response. Makes sense.

On the same note, given we use a pool of adequate size for the primary connection factory, wouldn't it also make sense to enable datanucleus.connection.singleConnectionPerExecutionContext? If the connections were not pooled, using different connection factories for tx and non-tx contexts sounds logical. But with pooling in place already, it feels a bit wasteful. Speaking in context of an application that performs many non-tx read operations here.


Re: Best practices for connection factory configurations when using connection pooling?

Andy
 

It is a valid assumption (the secondary factory can use a smaller pool).
If you do all of your schema generation up front, you won't need any secondary connections for those operations.
If your chosen value generation strategies don't need a DB connection (e.g. "identity", "uuid", "uuid-hex", "auid", etc.) then you don't need any secondary connections for those operations either.
In those situations you could even get away without a pool for the secondary factory at all.


Best practices for connection factory configurations when using connection pooling?

niklas.duester@...
 

I'm aware this is kind of a generic question, but is there any guidance or any best practices around connection factory configuration when using connection pooling?

We use DN RDBMS with HikariCP, and I was a little bamboozled by the fact that DN creates two connection pools.
So providing these properties to the PMF:
    datanucleus.connectionPoolingType=HikariCP
    datanucleus.connectionPool.maxPoolSize=20
results in two Hikari pools of up to 20 connections each being created, which was not clear to me from the docs, and seems rather excessive.

The docs mention:
The secondary connection factory is used for schema generation, and for value generation operations (unless specified to use primary).
Based on that description it seems like the secondary factory should be fine with a smaller pool than the primary factory. Is that a valid assumption?
I see that it is possible to provide custom DataSources as connection factories. This would allow us to provide tweaked HikariCP configurations.
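To make that concrete, what I have in mind is roughly the following - a sketch only, assuming the standard javax.jdo.option.ConnectionFactory / ConnectionFactory2 map keys accept DataSource instances (names and pool sizes are illustrative, not a verified configuration):

```java
// Sketch: two separately sized Hikari pools handed to the PMF as
// primary and secondary connection factories.
HikariConfig primaryCfg = new HikariConfig();
primaryCfg.setJdbcUrl(jdbcUrl);
primaryCfg.setMaximumPoolSize(20);   // main tx/non-tx work

HikariConfig secondaryCfg = new HikariConfig();
secondaryCfg.setJdbcUrl(jdbcUrl);
secondaryCfg.setMaximumPoolSize(2);  // schema + value generation only

Map<String, Object> props = new HashMap<>();
props.put("javax.jdo.PersistenceManagerFactoryClass",
          "org.datanucleus.api.jdo.JDOPersistenceManagerFactory");
props.put("javax.jdo.option.ConnectionFactory", new HikariDataSource(primaryCfg));
props.put("javax.jdo.option.ConnectionFactory2", new HikariDataSource(secondaryCfg));
PersistenceManagerFactory pmf = JDOHelper.getPersistenceManagerFactory(props);
```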

I've been looking for examples of how others do it, and only found Apache Hive, which allocates a fixed pool of 2 connections for the secondary factory.
Justification is "Since DataNucleus uses locks for schema generation and value generation, 2 connections should be sufficient.", which makes sense to me.
But maybe I'm overlooking something?


Is there anyone else who is willing to share some insight into how to best size the secondary connection pool?
What are the factors that would help me in figuring out an optimal size?

Thanks.


Re: Select with subselect

Andy
 

Not got the time to look through that at the moment. I suggest you look in our test cases, since there are a lot of examples of subqueries with JDOQL.

https://github.com/datanucleus/tests/blob/master/jdo/identity/src/test/org/datanucleus/tests/JDOQLSubqueryTest.java
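For a quick flavour, the classic shape of addSubquery (the JDO spec's average-salary example, not your classes) is:

```java
// Employees earning more than the average salary (JDO spec style example).
Query subq = pm.newQuery(Employee.class);
subq.setResult("avg(this.salary)");

Query q = pm.newQuery(Employee.class, "salary > averageSalary");
q.declareVariables("double averageSalary");
q.addSubquery(subq, "double averageSalary", null);
List<Employee> results = (List<Employee>) q.execute();
```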


Select with subselect

jacek.czerwinski.ext@...
 

I'd like to implement a JDOQL query for the following SQL statement: 

select * from project p, project_strategic_bucket psb where psb.project_id_oid = p.id 
    and psb.strategic_bucket_id_oid in (select sb.id from strategic_bucket sb where sb.id in (1, 2, 3, ...)) order by p.prj_nr;

(An alternative SQL statement that also works is: 
select * from project p inner join project_strategic_bucket psb on psb.project_id_oid = p.id 
    where psb.strategic_bucket_id_oid in (select sb.id from strategic_bucket sb where sb.id in (1, 2, 3, ...)) order by p.prj_nr;)

I try with something like this query:

Query<ProjectStrategicBucket> subQuery = pm.newQuery(ProjectStrategicBucket.class);
subQuery.setFilter("param.contains(this.strategicBucket)");
subQuery.declareParameters("java.util.List param");
subQuery.setParameters(searchBean.getStrategicBuckets());
 
Query<Project> query = pm.newQuery(Project.class, "this.projectStrategicBuckets.contains(projectStrategicBucket)");
query.declareVariables("com.siemens.energy.rdst.data.db.ProjectStrategicBucket projectStrategicBucket");
query.addSubquery(subQuery, "com.siemens.energy.rdst.data.db.ProjectStrategicBucket projectStrategicBucket", null, "param");
results = (List<Project>)query.execute();

But I get the exception: 
org.datanucleus.exceptions.NucleusUserException: Query has reference to member "strategicBucket" of class "com.siemens.energy.rdst.data.db.Project" yet this doesnt exist!
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.getSQLTableMappingForPrimaryExpression(QueryToSQLMapper.java:3606)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.processPrimaryExpression(QueryToSQLMapper.java:3253)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.processInvokeExpression(QueryToSQLMapper.java:4300)
at org.datanucleus.store.rdbms.query.QueryToSQLMapper.processInvokeExpression(QueryToSQLMapper.java:4264)

How can I solve this problem or implement the query better?

I prefer JDOQL because I'd like to add several more subqueries, for example for ProjectStrategicLens and so on.
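(For completeness, expressed as a single query the shape I'm after would be roughly the following - the field name prjNr and the parameter name are assumptions on my part:)

```java
// Sketch avoiding the explicit subquery: filter via a variable over the
// join entity and a collection parameter; "prjNr" is assumed.
Query<Project> q = pm.newQuery(Project.class,
    "projectStrategicBuckets.contains(psb) && :buckets.contains(psb.strategicBucket)");
q.declareVariables("com.siemens.energy.rdst.data.db.ProjectStrategicBucket psb");
q.setOrdering("prjNr ascending");
List<Project> results = q.setParameters(searchBean.getStrategicBuckets()).executeList();
```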

The O/R Java classes are: 

Project 1 : N  ProjectStrategicBucket N : 1 StrategicBucket

Project.java
...
    @Persistent(mappedBy = "project")
    @Order(column = "LIST_ORDER_IDX")
    @ForeignKey(deleteAction = ForeignKeyAction.CASCADE)
    private List<ProjectStrategicBucket> projectStrategicBuckets = new ArrayList<ProjectStrategicBucket>();

...

ProjectStrategicBucket.java:

@PersistenceCapable(table = "PROJECT_STRATEGIC_BUCKET", identityType = IdentityType.APPLICATION)
@Sequence(name = "ProjectStrategicBucketSeq", datastoreSequence = "PROJECT_STRATEGIC_BUCKET_SEQ", strategy = SequenceStrategy.CONTIGUOUS, allocationSize = 1)
@FetchGroups({
    @FetchGroup(name = DataServiceConstants.STRATEGIC_BUCKET, members = { @Persistent(name = "strategicBucket"), @Persistent(name = "value"),
@Persistent(name = "modificationTimestamp"), @Persistent(name = "person"), @Persistent(name = "project")})
})
public class ProjectStrategicBucket  extends AbstractBeanWithId implements Comparable<ProjectStrategicBucket>
{
 
    private static final long serialVersionUID = 1L;
    
    public final static Comparator<ProjectStrategicBucket> BACKET_VALUE_COMPARATOR = new Comparator<ProjectStrategicBucket>() {
        @Override
        public int compare(ProjectStrategicBucket bucket1, ProjectStrategicBucket bucket2) {
            return bucket2.value.compareTo(bucket1.value);
        }
    };
 
    @Persistent(primaryKey = "true", nullValue = NullValue.EXCEPTION, valueStrategy = IdGeneratorStrategy.SEQUENCE, sequence = "ProjectStrategicBucketSeq")
    private Long _id;
 
    @Persistent(column = "STRATEGIC_BUCKET_ID_OID")
    private StrategicBucket strategicBucket;
 
    @Persistent
    private Integer value;
    
    @Persistent
    private DateTime modificationTimestamp;
    
    @Persistent
    private Person person;
    
    @Persistent
    @Column(name="PROJECT_ID_OID")
    private Project project;
    
    /**
     * @return the project
     */
    public Project getProject() {
        return project;
    }

    /**
     * @param project the project to set
     */
    public void setProject(Project project) {
        this.project = project;
    }
 
/**
     * Constructor.
     */
    public ProjectStrategicBucket() {
    }
    
    /**
     * Constructor.
     * @param strategicBucket
     */
    public ProjectStrategicBucket(StrategicBucket strategicBucket) {
        this.strategicBucket = strategicBucket;
    }
    
    @Override
    public Long getId() {
        return _id;
    }
 
    @Override
    public void setId(Long id) {
        _id = id;
    }
 
    /**
     * @return the Strategic Bucket
     */
    public StrategicBucket getStrategicBucket()
    {
        return strategicBucket;
    }
 
    /**
     * @param strategicBucket - the Strategic Bucket
     */
    public void setStrategicBucket(StrategicBucket strategicBucket)
    {
        this.strategicBucket = strategicBucket;
    }
 
    /**
     * @return the percentage value of Strategic Bucket
     */
    public Integer getValue()
    {
        return value;
    }
 
    /**
     * @param value - the percentage value of Strategic Bucket
     */
    public void setValue(Integer value)
    {
        this.value = value;
    }
    
    @Override
    public int compareTo(ProjectStrategicBucket other)
    {
        return strategicBucket.compareTo(other.strategicBucket);
    }
    
    /**
     * @return the modificationTimestamp
     */
    public DateTime getModificationTimestamp() {
        return modificationTimestamp;
    }

    /**
     * @param modificationTimestamp
     */
    public void setModificationTimestamp(DateTime modificationTimestamp) {
        this.modificationTimestamp = modificationTimestamp;
    }

    /**
     * @return the person
     */
    public Person getPerson() {
        return person;
    }

    /**
     * @param person
     */
    public void setPerson(Person person) {
        this.person = person;
    }
}

StrategicBucket.java:

@PersistenceCapable(table = "STRATEGIC_BUCKET", identityType = IdentityType.APPLICATION)
@Sequence(name = "StrategicBucketSeq", datastoreSequence = "STRATEGIC_BUCKET_SEQ", strategy = SequenceStrategy.CONTIGUOUS, allocationSize = 1)
@FetchGroups({
    @FetchGroup(name = DataServiceConstants.STRATEGIC_BUCKET, members = { @Persistent(name = "shortName"), @Persistent(name = "name"), @Persistent(name = "description")
    , @Persistent(name = "projectStrategicBuckets") })
})
public class StrategicBucket  extends AbstractBeanWithId implements Comparable<StrategicBucket> 
{
private static final long serialVersionUID = 1L;
 
@Persistent(primaryKey = "true", nullValue = NullValue.EXCEPTION, valueStrategy = IdGeneratorStrategy.SEQUENCE,
sequence = "StrategicBucketSeq")
private Long _id;
 
@Persistent(nullValue = NullValue.EXCEPTION)
@Column(length = 50)
@Unique
private String name;
 
@Persistent(nullValue = NullValue.EXCEPTION)
@Column(length = 20)
@Unique
private String shortName;
 
@Persistent()
@Column(length = 200)
private String description;
 
/**
* Constructor
*/
public StrategicBucket()
{
}
 
/**
* Constructor
* @param name - the name of strategic bucket
* @param description - the description of strategic bucket
*/
public StrategicBucket(String name, String description)
{
this.name = name;
this.description = description;
}
 
/**
* Constructor
* @param shortName 
* @param name - the name of strategic bucket
* @param description - the description of strategic bucket
*/
public StrategicBucket(String shortName, String name, String description)
{
this.shortName = shortName;
this.name = name;
this.description = description;
}
 
/**
* @return the name of strategic bucket
*/
public String getName()
{
return name;
}
 
/**
* @param name - the name of strategic bucket
*/
public void setName(String name)
{
this.name = name;
}
 
/**
* @return the shortName
*/
public String getShortName() {
return shortName;
}
 
/**
* @param shortName
*/
public void setShortName(String shortName) {
this.shortName = shortName;
}
 
/**
* @return the description of strategic bucket
*/
public String getDescription()
{
return description;
}
 
/**
* @param description - the description of strategic bucket
*/
public void setDescription(String description)
{
this.description = description;
}
 
@Override
public Long getId()
{
return _id;
}
 
@Override
public void setId(Long id)
{
_id = id;
}
 
@Persistent(mappedBy="strategicBucket")
@ForeignKey(deleteAction = ForeignKeyAction.CASCADE)
Collection<ProjectStrategicBucket> projectStrategicBuckets = new HashSet<ProjectStrategicBucket>();
 
/**
* @return the projectStrategicBuckets
*/
public Collection<ProjectStrategicBucket> getProjectStrategicBuckets() {
return this.projectStrategicBuckets;
}
 
/**
* @param projectStrategicBuckets the projectStrategicBuckets to set
*/
public void setProjectStrategicBuckets(Collection<ProjectStrategicBucket> projectStrategicBuckets) {
this.projectStrategicBuckets = projectStrategicBuckets;
}
 
@Override
public String toString()
{
final StringBuilder sb = new StringBuilder(100);
sb.append(getClass().getSimpleName());
sb.append("(id=").append(_id);
sb.append(",name='").append(name).append('\'');
sb.append(')');
return sb.toString();
}
 
    @Override
    public int compareTo(StrategicBucket other)
    {
        return name.compareTo(other.name);
    }
}



Re: Evict parent class from Query Results Cache at commit

Andy
 

The docs at https://www.datanucleus.org/products/accessplatform_6_0/jdo/query.html#cache_results describe how you can gain access to the internal cache objects and hence evict particular entries should you need to.
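In short, something along these lines (a sketch, assuming the JDOQueryCache accessor described in those docs; the cast to the DN implementation class is required):

```java
// Sketch: evicting cached results for a particular query via the
// DataNucleus JDO API.
JDOQueryCache resultsCache = ((JDOPersistenceManagerFactory) pmf).getQueryCache();
resultsCache.evict(query);   // drop cached results for this query
resultsCache.evictAll();     // or clear the whole query results cache
```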


Evict parent class from Query Results Cache at commit

edwin.chui@...
 

Hi!

I hope you can help with this issue. I have a simple inheritance of these two classes.



 I've got them mapped like this



with the subclass-table inheritance strategy. I also have the query results cache enabled in my datanucleus.properties



Then in my code I have two service classes: one for the BasePlayer class, BasePlayerService, where I run a select query to retrieve all the players in the database; the other for the Player class, PlayerService, where I insert a Player.



The 1st list shows all 6 records saved in my database, but when I insert a new Player, the 2nd list still shows the same 6 records, even though another Player was saved in the database, so the second list should show a total of 7 Players.
This is the code from my BasePlayerService


And this is the code of my PlayerService class for saving a new Player


I know that in the BasePlayerService I'm using the BasePlayer class and in the PlayerService I'm using the Player class, and I also know that any query related to the Player class is evicted from the Query Results Cache once the commit is executed.
My question is whether there is a way to tell DataNucleus to evict the results related to the parent class BasePlayer too, because the first list query is being cached and is not cleared from the Query Results Cache.

I hope you can help me.
If there is any other information you need to understand the issue better, please write me!

Thanks a lot!


Re: Cache Results not working with "count" query

Andy
 
Edited

IgnoreCache is for the L1 cache, as per the JDO spec.

You simply don't use result caching on that query, i.e. either don't enable it in the first place or set datanucleus.query.resultCacheType to none.
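If you want it off for just that query rather than globally, a per-query extension along these lines should work (a sketch; the extension key mirrors the PMF property):

```java
// Sketch: disable result caching for this one query only.
Query q = pm.newQuery(MyClass.class);
q.setResult("count(this)");
q.addExtension("datanucleus.query.resultCacheType", "none");
Long count = (Long) q.execute();
```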


Re: Cache Results not working with "count" query

edwin.chui@...
 

Is there a method I can call, when executing the count query, to avoid getting the result from the cache?
I've tried the setIgnoreCache method.



But I still get the same error.
Please help if you can!


Re: Cache Results not working with "count" query

Andy
 

As replied on Gitter,
Clearly, caching results (persistable object ids) when you are not returning persistable objects makes no sense. Just turn off result caching when not returning persistable results.

Result caching is for candidate results and nothing else currently, as shown by the implementation. A "count" query does not return candidate results.


Cache Results not working with "count" query

edwin.chui@...
 

I've got a project using datanucleus v6.0.0-release, and my properties file is configured with the following values:



As you can see, I have datanucleus.query.results.cached enabled.
In my project I have the following code:



I'm calling the count procedure twice, and the second call throws an error



And the line where the exception is thrown is this



As far as I could see in the sources, the error happens in the class ForwardQueryResult.java at line 186



In this line the value of nextElement is the result of the count query, which is a Long value



In the class ApiAdapter.java, line 294 checks whether the result is Persistable in order to return an ObjectId, but in this case it returns a null value.
The collection resultIds in the ForwardQueryResult.java class is then added to the cache the first time.
The second time, it throws the error.
Is there any way to avoid this without disabling the datanucleus.query.results.cached property?


Re: Persisting attribute using "new" Java time Instant in MySQL

Page bloom
 
Edited

That was enlightening: I hadn't appreciated the two levels of types - JDBC types and MySQL types - and that the mappings between them are not a simple one-to-one. Given that the type names are identical, I assumed that TIMESTAMP (JDBC) would be mapped to TIMESTAMP (MySQL) - silly me ;)

I got specific with the sql-type and now it's working:

        <field name="created">
                <column jdbc-type="TIMESTAMP" sql-type="TIMESTAMP"/>
        </field>
 


Re: Persisting attribute using "new" Java time Instant in MySQL

Andy
 

If you look at what the MySQL JDBC driver provides to DataNucleus, you see that it supports DATETIME and TIMESTAMP as the SQL TYPE when the JDBC TYPE is TIMESTAMP.
https://github.com/datanucleus/datanucleus-rdbms/blob/master/docs/mysql-5.7.27-jdbc-8.0.28.txt

So setting the JDBC type to TIMESTAMP will simply result in the default SQL type for that JDBC type ... which is ... DATETIME. Run SchemaTool in "dbinfo" mode and you can see all you need to know.


Re: Persisting attribute using "new" Java time Instant in MySQL

Page bloom
 

I tried adding an explicit JDBC type via:

        <field name="created">
                <column jdbc-type="TIMESTAMP" />
        </field>

but DN still creates a DATETIME column instead of a TIMESTAMP column.

I don't know why ...


Persisting attribute using "new" Java time Instant in MySQL

Page bloom
 
Edited

Traditionally we have used the java.util.Date class for storing time instants in UTC time - in all existing tables DN has created a 'TIMESTAMP' type field in MySQL - which works fine.

Recently we added our first attribute of type java.time.Instant (getting all jiggy with the "new" Java time library - better late than never!) and the DN docs say that the default mapping is a TIMESTAMP (via the convention that the bold type is the default):

https://www.datanucleus.org/products/accessplatform/jdo/mapping.html#_temporal_types_java_util_java_sql_java_time_jodatime

Java Type:  java.time.Instant
Comments:     Persisted as TIMESTAMP, String, Long, or DATETIME.

However, the Instant attribute has been mapped to a DATETIME instead of TIMESTAMP.

We have not added any specific customizations to the XML metadata so would assume that it would have mapped to the default TIMESTAMP.

Any idea what is going on? Have I misunderstood the docs? Do we need to explicitly set the MySQL type to TIMESTAMP somehow?


Re: JDOQL ordering using primary key field

Page bloom
 
Edited

Thanks Andy, that worked well.

I used it within my setOrdering call:

setOrdering("score ascending, JDOHelper.getObjectId(this) ascending");


Re: JDOQL ordering using primary key field

Andy
 

JDO has

"SELECT FROM mydomain.MyClass ORDER BY JDOHelper.getObjectId(this)"


JDOQL ordering using primary key field

Page bloom
 
Edited

If a class does not have an attribute defined as the primary key field but, rather, relies on DN creating the primary key column in the RDBMS table implicitly, is there any way to specify that default primary key column in the ordering clause?

e.g.

Query q = ....
q.setOrdering("score ascending, PRIMARYKEYCOLUMN ascending")

I know the column exists in the RDBMS table and could write native SQL ordering that includes it, but at the Java level there is no Java attribute to refer to it in JDOQL that I am aware of.

Perhaps DN has some keyword that can be used to represent the underlying, implicitly created primary key field.



Re: Use multiple columns for Discriminator in Inheritance Tree with single-table

Andy
 
Edited

Yes - JDO, JPA (and Jakarta Persistence) all allow only one discriminator (value). JDO metadata strictly speaking permits multiple columns for some reason, but there is only space for one value, and that's what DataNucleus has always stuck to. Multiple discriminator values make little sense to me, FWIW.

I don't see any way of having multiple discriminator columns by using extensions with current DataNucleus code.

Options
  • Update DN code to cater for multiple columns (and values) for each class, and then provide a mechanism for getting class name from the column values. This is potentially a lot of work, and would have to retain compatibility with standard JDO / JPA handling.
  • Update your DB to only have a single (surrogate) column and use standard JDO discriminator handling.
  • Update your class(es) to map on to a single one of the (surrogate) columns and optionally read in the other column (if it is still important) into a field of the class(es).
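For the second and third options, the standard single-column discriminator mapping looks roughly like this (class names and discriminator values invented for illustration):

```java
// Sketch: one surrogate discriminator column with value-map strategy
// (standard JDO annotations); names are illustrative.
@PersistenceCapable(table = "NODE")
@Discriminator(strategy = DiscriminatorStrategy.VALUE_MAP, column = "NODETYPE", value = "BASE")
public class Node { /* ... */ }

@PersistenceCapable
@Inheritance(strategy = InheritanceStrategy.SUPERCLASS_TABLE) // single-table
@Discriminator(value = "SPECIAL")
public class SpecialNode extends Node { /* ... */ }
```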


Use multiple columns for Discriminator in Inheritance Tree with single-table

Kraen David Christensen
 

We have a table like: node(nodetype, subnodetype, ...)
From this one table we want to have several persistent java classes.
Both node.nodetype and node.subnodetype are needed to discriminate the Java class we need (sometimes nodetype alone is enough).
As far as I can see, JDO/JPA only allows ONE discriminator column.
Is there any way I can customize and build my own discriminator type, e.g. by specifying something like
@Discriminator(customStrategy = "MyStrategy")
on the node table?
I've tried with no luck - but perhaps this is not possible.
It might be that I should write my own StoreManager extension for this?
And would that be recommended/easy?

Best regards, Kræn David Christensen

1 - 20 of 476