
Re: How to help improve DataNucleus code base

Andy
 

The "tests" repo is for our internal tests, that are run with each release as a matter of course. If you want some feature to be checked regularly then that is where a test can go, using the existing samples and test framework as much as possible.
The "test-jdo" (and "test-jpa") repos are simply templates for people to demonstrate something self-contained if they can't be bothered to look at our tests. Doing that gives no guarantee that their test will ever be adapted to fit into our test suites (the "tests" repo) and be run for every release.

So use "tests" if something is critical to your functionality, and "test-jdo" if it's just to demonstrate something (and you're not bothered whether that feature is checked every release).


Re: How to help improve DataNucleus code base

Kraen David Christensen
 

Hi,
Thanks for the quick response.

It is a little unclear to me exactly what to do.

For each of the 20+ issues we have found, I guess we should:
Create an issue in https://github.com/datanucleus/datanucleus-core.git OR https://github.com/datanucleus/datanucleus-rdbms.git.

But after that should we then:
1) Create a fork of https://github.com/datanucleus/test-jdo showing the problem (and perhaps hint at how we solved it in a comment in the SimpleTest), OR
2) Create a fork of https://github.com/datanucleus/tests.git adding a test, AND fork https://github.com/datanucleus/datanucleus-core.git OR https://github.com/datanucleus/datanucleus-rdbms.git with the code fix we have made.

So should we go for 1) or 2) - or perhaps a mixture?

Please advise - and perhaps also say whether you have any experience importing the above code repositories into IntelliJ IDEA and running the tests from there.

Best regards, Kræn David Christensen, Stibo Systems


Re: How to help improve DataNucleus code base

Andy
 

Hi,
the only way I can include things in the codebase is to see them (as GitHub PRs) problem by problem, so I can see why each is needed.
Some things may be obvious (e.g. typos, hence needing only a superficial inspection), others probably not.
Clearly we have a lot of tests that will need to pass also.
Not sure what else to say; that's the problem with developing separate versions over time.


How to help improve DataNucleus code base

Kraen David Christensen
 

Hello,
For the last year we have been migrating our code from Kodo to DataNucleus.
Through this process we have had to fix bugs, make performance improvements and add plugins to the DataNucleus core and RDBMS code to meet our needs.
We have ended up with fixes to around 20 classes in the DataNucleus core and RDBMS code.
The question is how we can push this back to the community, since we also have the code changes required to fix these problems.
We can of course supply 20 forks of the JDO test templates and create 20 issues on GitHub for core and RDBMS to prove the problems - but what about the code fixes we have already made? Can we also supply them as separate forks/pull requests to the DataNucleus GitHub repositories?

Please advise us on how we can help improve the code base with our findings.

Best regards, Kræn David Christensen, Stibo Systems


Re: Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

marius.ropotica@...
 

Hi,
It was faster for us to just revert this commit https://github.com/datanucleus/datanucleus-rdbms/commit/548e18d6babcddf2271a2bba312139eb58f2d742 .
No more broken queries after this change.

Marius


Re: Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

marius.ropotica@...
 

That's exactly what we intend to do. Will let you know once we make the change and run some tests.

Thanks,
Marius 


Re: Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

Andy
 

You could easily enough change, say, "SQLText.toSQL" to make the setting of "sql" happen within a synchronised block, and see what effect that has on your application, since that is the only thing generated by an SQLText.
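A minimal sketch of what such a change could look like, assuming a hypothetical LazySqlText class rather than the real SQLText API: the generate-and-assign step is guarded so two threads can never build into the shared state concurrently.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch, NOT the real DataNucleus SQLText class: the
// statement text is generated lazily, exactly once, under a lock.
class LazySqlText {
    private final AtomicInteger generations = new AtomicInteger();
    private volatile String sql; // volatile so readers see a fully built value

    public String toSQL() {
        String s = sql;
        if (s == null) {
            synchronized (this) {
                // Re-check under the lock: another thread may have
                // finished the build while we were waiting.
                if (sql == null) {
                    StringBuilder sb = new StringBuilder();
                    sb.append("SELECT A0.ID FROM ENTITY A0");
                    generations.incrementAndGet();
                    sql = sb.toString();
                }
                s = sql;
            }
        }
        return s;
    }

    public int generationCount() { return generations.get(); }
}
```

Under this pattern, however many threads call toSQL() concurrently, the text is built exactly once and every caller sees the complete string.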


Re: Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

marius.ropotica@...
 

Hi,
I think we have found the issue. Please have a look here https://github.com/datanucleus/datanucleus-rdbms/blob/master/src/main/java/org/datanucleus/store/rdbms/query/JDOQLQuery.java#L708

At this line, getSQLText() is called on a select statement that comes from the query compilation cache.
From what I understand, the query compilation cache comes from the PMF, so it's shared by all PMs and query objects.
I don't see anything that would prevent another thread from accessing the same select statement and calling its getSQLText() method.
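The sharing described above can be sketched generically: a thread-safe map at PMF scope still hands every caller the same mutable compiled object, so any unsynchronised lazy work done on that object can race. The names here are hypothetical, not the actual DataNucleus types.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for a query-compilation cache held at PMF scope.
class CompilationCache {
    private final Map<String, StringBuilder> cache = new ConcurrentHashMap<>();

    // The map itself is thread-safe, but it returns the SAME mutable
    // object to every caller; any unsynchronised mutation of that
    // object (e.g. lazily generating its SQL text) can then race.
    StringBuilder compiled(String query) {
        return cache.computeIfAbsent(query, q -> new StringBuilder());
    }
}
```

Both lookups for the same query string return the identical instance, which is the precondition for the race being described.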

Can somebody have a look and confirm our findings?

Thanks,
Marius
  


Re: Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

marius.ropotica@...
 

Hi,
PMs and query objects are not reused; each time a query is executed a new PM object is created. The PMF is created at application startup.
I tried to see what happens if we reuse PMs or queries: I got all sorts of exceptions, like NPEs, class cast exceptions, and many more. But nothing like I mentioned in the first post.
Currently I'm trying to reproduce the issue in a dev/local environment, with no luck so far.

Could you suggest anything else I should check?

Thanks,
Marius


Re: Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

Andy
 

Define how you are using the PMF and PMs with respect to threads. FWIW "multithreaded" PMs have always been a recipe for pain.
I would expect each query to be used by a single thread, and hence such locking to be unnecessary.
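For reference, JDO defines a standard PMF property for declaring whether PMs will be shared across threads; the usual advice, as above, is to leave it at its default of false and confine each PM to a single thread:

```
javax.jdo.option.Multithreaded=false
```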


Re: Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

marius.ropotica@...
 

Hi,

Thanks for your quick response.
I already tried what you suggested but with no luck so far.
I'll have a closer look.

Marius


Re: Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

Andy
 

Hi Marius,

no, never seen that situation.
If you want to track it down, find the stack trace of that method call; as the commit says, any locking should be further out in the call chain.

PS 5.x is no longer maintained.


Race condition in compiling queries after upgrading from 5.0.x to 5.2.x

marius.ropotica@...
 

Hello,

We have recently upgraded DataNucleus from 5.0 to 5.2, and we have started to see broken queries. Here are some examples:

# this query is cut off after the WHERE clause

SELECT 'com.somepackage.EntityOne' AS `dn_type`,
       `a0`.`creationdate`,
       `a0`.`encodedkey`,
       `a0`.`NAME`,
       `a0`.`entityone_encodedkey_own`
FROM   `entityone` `a0`
WHERE


# this query has multiple issues: some parts are duplicated (see the first line), and some nulls appear in the field list

SELECT   'com.somepackage.AnotherEntity''com.somepackage.AnotherEntity' AS `dn_type`,,
         `a0`.`active`,
         `a0`.`amount`,
         nullnullnullnullnullnull,
         `a0`.`id`,
         `a0`.`lastmodifieddate`,
         `a0`.`NAME`,
         `a0`.`TRIGGER`,
         `a0`.`anotherentity_encodedkey_own`,
         `a0`.`anotherentity_integer_idx` AS `nucorder0`
FROM     `anotherentity` `a0`
...


The issue only happens when loading one-to-many relations. We haven't been able to reproduce it so far; it only happens in some of the production environments.
To me this looks like a race condition in compiling the SQL queries, probably in
org.datanucleus.store.rdbms.sql.SelectStatement#getSQLText. This method was synchronized in 5.0 but the synchronized keyword was removed in 5.2 in this commit: https://github.com/datanucleus/datanucleus-rdbms/commit/548e18d6babcddf2271a2bba312139eb58f2d742#diff-9b826f96d7e6dcf962322355800d231eb1c8b8cabe5bb5047921b28209c7c4e2
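To see how removing that synchronisation could produce exactly this kind of duplication, here is a deterministic, single-threaded replay of the suspected interleaving. The class and method names are hypothetical, not the real DataNucleus code path: two threads both pass the "is the SQL built yet?" check and both append to the same shared builder.

```java
// Hypothetical stand-in for an unsynchronised "build SQL lazily" block.
// The build is split into steps so the racy interleaving can be
// replayed deterministically below.
class UnsafeSqlText {
    private final StringBuilder sb = new StringBuilder(); // shared, mutable
    private String sql;

    void appendType()  { sb.append("'com.somepackage.AnotherEntity'"); }
    void appendAlias() { sb.append(" AS `dn_type`,"); }

    String finish() {
        sql = sb.toString();
        return sql;
    }
}
```

Replaying "both threads append before either finishes" - `t.appendType(); t.appendType(); t.appendAlias(); t.appendAlias(); t.finish();` - yields a doubled type literal and a doubled alias, closely matching the duplicated fragments seen in the broken query above.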

Has anyone encountered such an issue? Do you have any ideas on what we might be doing wrong?
Any help would be much appreciated.

Thanks,
Marius


Re: 1+N avoidance: Fetch of relationship working, but ignored when accessed

Vaal
 

Just tried 6.0.4-SNAPSHOT:
- Queries are no longer repeated for empty bulk fetch results. So the fix/change seems to be working, thanks!
- I can see prefetching happening more than 1 level deep, which is also good and in line with my understanding of maxFetchDepth:

2023-02-19 20:02:10,158 [] DEBUG [DataNucleus.Datastore.Native] SELECT 'org.dependencytrack.model.PolicyViolation' AS "DN_TYPE","B0"."ID","B0"."STATE","B0"."COMPONENT_ID","B0"."ID","B0"."POLICYVIOLATION_ID","B0"."PROJECT_ID","B0"."SUPPRESSED","C0"."AUTHOR","C0"."BLAKE2B_256","C0"."BLAKE2B_384","C0"."BLAKE2B_512","C0"."BLAKE3","C0"."CLASSIFIER","C0"."COPYRIGHT","C0"."CPE","C0"."DESCRIPTION","C0"."DIRECT_DEPENDENCIES","C0"."EXTENSION","C0"."EXTERNAL_REFERENCES","C0"."FILENAME","C0"."GROUP","C0"."ID","C0"."INTERNAL","C0"."LAST_RISKSCORE","C0"."LICENSE","C0"."LICENSE_URL","C0"."MD5","C0"."NAME","C0"."TEXT","C0"."PROJECT_ID","C0"."PUBLISHER","C0"."PURL","C0"."PURLCOORDINATES","C0"."LICENSE_ID","C0"."SHA1","C0"."SHA_256","C0"."SHA_384","C0"."SHA3_256","C0"."SHA3_384","C0"."SHA3_512","C0"."SHA_512","C0"."SWIDTAGID","C0"."UUID","C0"."VERSION","A0"."ID","D0"."ID","D0"."OPERATOR","D0"."SUBJECT","D0"."UUID","D0"."VALUE","E0"."ACTIVE","E0"."AUTHOR","E0"."CLASSIFIER","E0"."CPE","E0"."DESCRIPTION","E0"."DIRECT_DEPENDENCIES","E0"."EXTERNAL_REFERENCES","E0"."GROUP","E0"."ID","E0"."LAST_BOM_IMPORTED","E0"."LAST_BOM_IMPORTED_FORMAT","E0"."LAST_RISKSCORE","E0"."NAME","E0"."PUBLISHER","E0"."PURL","E0"."SWIDTAGID","E0"."UUID","E0"."VERSION","A0"."TEXT","A0"."TIMESTAMP","A0"."TYPE","A0"."UUID","A0"."COMPONENT_ID" FROM "POLICYVIOLATION" "A0" LEFT OUTER JOIN "VIOLATIONANALYSIS" "B0" ON "A0"."ID" = "B0"."POLICYVIOLATION_ID" INNER JOIN "COMPONENT" "C0" ON "A0"."COMPONENT_ID" = "C0"."ID" INNER JOIN "POLICYCONDITION" "D0" ON "A0"."POLICYCONDITION_ID" = "D0"."ID" INNER JOIN "PROJECT" "E0" ON "A0"."PROJECT_ID" = "E0"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0_SUB"."ID" AS "DN_APPID" FROM "COMPONENT" "A0_SUB" WHERE "A0_SUB"."PROJECT_ID" = ? AND "A0_SUB"."ID" < ? AND "A0"."COMPONENT_ID" = "A0_SUB"."ID")


Re: 1+N avoidance: Fetch of relationship working, but ignored when accessed

Vaal
 
Edited

https://github.com/datanucleus/datanucleus-rdbms/issues/462 could well be it then.

If only 1 level is supported, I'm going to do my own queries anyway. But your first reply is about FETCH JOINs, which I am not using (at least not explicitly, and the query being generated is different).
I interpreted the maxFetchDepth value in the Fetch Plan docs as a way to "prefetch" multiple levels of fields/relationships in one go: https://www.datanucleus.org/products/accessplatform_4_1/jdo/fetchgroup.html

I'm not sure less logging will help, as there are currently maybe only 10-ish lines of logging. I have enabled DEBUG logging for the full DataNucleus category; using the TRACE or ALL level didn't give me any extra messages.


Re: 1+N avoidance: Fetch of relationship working, but ignored when accessed

Andy
 
Edited

If some field is loaded on access because it isn't currently loaded, then there is a log message saying exactly that. Suggest you look at your log settings, and think about cutting your case down into something simpler to debug.

Having 0 elements could have been the problem, since that was not supported (it just fell back to doing the load on access). See https://github.com/datanucleus/datanucleus-rdbms/issues/462

I already said why only 1 level of 1-N relations is loaded ... since only 1 level of 1-N is supported .... my first reply.


Re: 1+N avoidance: Fetch of relationship working, but ignored when accessed

Vaal
 

I did all that and was hoping DN would log why it was retrieving the relationship again, but it's not clear. I'm not accessing any fields of the models inside the relationship, just calling getVulnerabilities().

Copying everything from my linked SO post:
 

Question applies mostly to Component.java (this is a link to the sourcecode).

It has a many-to-many relationship (but I see the same problem with 1-N):

@Persistent(table = "COMPONENTS_VULNERABILITIES")
@Join(column = "COMPONENT_ID")
@Element(column = "VULNERABILITY_ID")
@Order(extensions = @Extension(vendorName = "datanucleus", key = "list-ordering", value = "id ASC"))
private List<Vulnerability> vulnerabilities;

@FetchGroup(name = "METRICS_UPDATE", members = {
        @Persistent(name = "id"),
        @Persistent(name = "lastInheritedRiskScore"),
        @Persistent(name = "uuid"),
        @Persistent(name = "vulnerabilities"),
        @Persistent(name = "analysises"),
        @Persistent(name = "policyViolations")                
})

Query<Component> query = pm.newQuery(Component.class);
query.setOrdering("id DESC");
query.setRange(0, 500);
query.getFetchPlan().setGroup(Component.FetchGroup.METRICS_UPDATE.name());
query.getFetchPlan().setFetchSize(FetchPlan.FETCH_SIZE_GREEDY);
query.getFetchPlan().setMaxFetchDepth(10);
components = query.executeList();

LOGGER.debug("Fetched " + components.size() + " components for project " + project.getUuid());
for (final Component component : components) {
    try {
        LOGGER.debug("Printing vulnerabilities count: " + component.getVulnerabilities().stream().count());

FetchSize and MaxDepth are added because I was trying stuff.

Logs

2023-02-11 18:41:36,458 DEBUG [Query] JDOQL Query : Executing "SELECT FROM org.dependencytrack.model.Component WHERE project == :project ORDER BY id DESC RANGE 0,500" ...
2023-02-11 18:41:36,460 DEBUG [Connection] ManagedConnection OPENED : "org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@3a5359fd [conn=com.zaxxer.hikari.pool.HikariProxyConnection@78a8a548, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]" on resource "nontx" with isolation level "read-committed" and auto-commit=false
2023-02-11 18:41:36,460 DEBUG [Datastore] Using PreparedStatement "HikariProxyPreparedStatement@289630101 wrapping SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0"."ID" AS "NUCORDER0","A0"."LAST_RISKSCORE","A0"."UUID" FROM "COMPONENT" "A0" WHERE "A0"."PROJECT_ID" = ? ORDER BY "NUCORDER0" DESC FETCH NEXT 500 ROWS ONLY " for connection "com.zaxxer.hikari.pool.HikariProxyConnection@78a8a548"
2023-02-11 18:41:36,463 DEBUG [Native] SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0"."ID" AS "NUCORDER0","A0"."LAST_RISKSCORE","A0"."UUID" FROM "COMPONENT" "A0" WHERE "A0"."PROJECT_ID" = ? ORDER BY "NUCORDER0" DESC FETCH NEXT 500 ROWS ONLY
2023-02-11 18:41:36,466 DEBUG [Retrieve] SQL Execution Time = 3 ms

... some other bulk prefetches snipped

2023-02-11 18:41:36,476 DEBUG [Retrieve] JDOQL Bulk-Fetch of org.dependencytrack.model.Component.vulnerabilities
2023-02-11 18:41:36,476 DEBUG [Datastore] Using PreparedStatement "HikariProxyPreparedStatement@555511245 wrapping SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS","A0"."COMPONENT_ID" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0_SUB"."ID" AS "DN_APPID" FROM "COMPONENT" "A0_SUB" WHERE "A0_SUB"."PROJECT_ID" = ? AND "A0"."COMPONENT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"" for connection "com.zaxxer.hikari.pool.HikariProxyConnection@78a8a548"
2023-02-11 18:41:36,476 DEBUG [Native] SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS","A0"."COMPONENT_ID" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0_SUB"."ID" AS "DN_APPID" FROM "COMPONENT" "A0_SUB" WHERE "A0_SUB"."PROJECT_ID" = ? AND "A0"."COMPONENT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
2023-02-11 18:41:36,480 DEBUG [Retrieve] SQL Execution Time = 4 ms

As far as I can see this is using the filters from the original query to fetch the vulnerabilities relationship. So far so good.

The first statement after the query above is to access getVulnerabilities(), and this results in, more or less, the exact same query:

2023-02-11 18:41:36,494 DEBUG [ProjectMetricsUpdateTask] Fetched 5 components for project f04cfba0-7b94-4380-8cbd-aca492f97a7f
2023-02-11 18:41:36,494 DEBUG [Persistence] Object with id "org.dependencytrack.model.Component:45106" has a lifecycle change : "HOLLOW"->"P_NONTRANS"
2023-02-11 18:41:36,508 DEBUG [Persistence] Object "org.dependencytrack.model.Component@68f46ca4" field "vulnerabilities" is replaced by a SCO wrapper of type "org.datanucleus.store.types.wrappers.backed.List" [cache-values=true, lazy-loading=true, allow-nulls=true]
2023-02-11 18:41:36,508 DEBUG [Persistence] Object "org.dependencytrack.model.Component@68f46ca4" field "vulnerabilities" loading contents to SCO wrapper from the datastore
2023-02-11 18:41:36,512 DEBUG [Connection] ManagedConnection OPENED : "org.datanucleus.store.rdbms.ConnectionFactoryImpl$ManagedConnectionImpl@73aeb4aa [conn=com.zaxxer.hikari.pool.HikariProxyConnection@30f8bfe1, commitOnRelease=true, closeOnRelease=true, closeOnTxnEnd=true]" on resource "nontx" with isolation level "read-committed" and auto-commit=false
2023-02-11 18:41:36,512 DEBUG [Datastore] Using PreparedStatement "HikariProxyPreparedStatement@1310502991 wrapping SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE "A0"."COMPONENT_ID" = ? ORDER BY "NUCORDER0"" for connection "com.zaxxer.hikari.pool.HikariProxyConnection@30f8bfe1"
2023-02-11 18:41:36,515 DEBUG [Native] SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE "A0"."COMPONENT_ID" = ? ORDER BY "NUCORDER0"

Disabling the L2 cache doesn't help. There are a couple more relationships in the Component class; they all have the same problem. I also notice that DN only fetches 1 level of relationships, despite the max depth being set to 10.

So two questions:

  • Why is the prefetched contents of the relationship not used?
  • Why is only 1 level of relationships prefetched?

I do notice that the vulnerabilities relation is empty in the database. Could that be triggering the repeated queries?


Re: 1+N avoidance: Fetch of relationship working, but ignored when accessed

Andy
 

Focus on the log entries ... for the JDOQL ... and for whatever accessor methods you call after that ... by putting entries in the log to separate your calls. Maybe you have some field in the related object that is not in the fetch plan?


Re: 1+N avoidance: Fetch of relationship working, but ignored when accessed

Vaal
 

I'm not really doing much different, and the logs show that the SQL is indeed generated and executed, as in the demonstration you posted.

2023-02-11 18:41:36,476 DEBUG [Native] SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS","A0"."COMPONENT_ID" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE EXISTS (SELECT 'org.dependencytrack.model.Component' AS "DN_TYPE","A0_SUB"."ID" AS "DN_APPID" FROM "COMPONENT" "A0_SUB" WHERE "A0_SUB"."PROJECT_ID" = ? AND "A0"."COMPONENT_ID" = "A0_SUB"."ID") ORDER BY "NUCORDER0"
But for some reason, when accessing the relation's prefetched results, DN reverts to fetching the relationship again, but only for that specific model instance.
2023-02-11 18:41:36,515 DEBUG [Native] SELECT 'org.dependencytrack.model.Vulnerability' AS "DN_TYPE","A1"."CREATED","A1"."CREDITS","A1"."CVSSV2BASESCORE","A1"."CVSSV2EXPLOITSCORE","A1"."CVSSV2IMPACTSCORE","A1"."CVSSV2VECTOR","A1"."CVSSV3BASESCORE","A1"."CVSSV3EXPLOITSCORE","A1"."CVSSV3IMPACTSCORE","A1"."CVSSV3VECTOR","A1"."CWES","A1"."DESCRIPTION","A1"."DETAIL","A1"."EPSSPERCENTILE","A1"."EPSSSCORE","A1"."FRIENDLYVULNID","A1"."ID" AS "NUCORDER0","A1"."OWASPRRBUSINESSIMPACTSCORE","A1"."OWASPRRLIKELIHOODSCORE","A1"."OWASPRRTECHNICALIMPACTSCORE","A1"."OWASPRRVECTOR","A1"."PATCHEDVERSIONS","A1"."PUBLISHED","A1"."RECOMMENDATION","A1"."REFERENCES","A1"."SEVERITY","A1"."SOURCE","A1"."SUBTITLE","A1"."TITLE","A1"."UPDATED","A1"."UUID","A1"."VULNID","A1"."VULNERABLEVERSIONS" FROM "COMPONENTS_VULNERABILITIES" "A0" INNER JOIN "VULNERABILITY" "A1" ON "A0"."VULNERABILITY_ID" = "A1"."ID" WHERE "A0"."COMPONENT_ID" = ? ORDER BY "NUCORDER0"
It could still be something simple/obvious/stupid, but I don't see it, nor do the people I have asked to look at it.


Re: 1+N avoidance: Fetch of relationship working, but ignored when accessed

Andy
 
Edited

A simple demonstration of BulkFetch of a Collection field is to be found at
https://github.com/andyjefferson/jdo-bulkfetch-collection

It does a query where a Collection field is in the DFG for the query candidate, and as a result comes up with this SQL:

SELECT 'mydomain.model.B' AS DN_TYPE,A1.ID,A1."NAME",A0.ID_OID,A0.IDX AS NUCORDER0
FROM A_BS A0 INNER JOIN B A1 ON A0.ID_EID = A1.ID
WHERE A0.IDX >= 0 AND EXISTS
    (SELECT 'mydomain.model.A' AS DN_TYPE,A0_SUB.ID AS DN_APPID
     FROM A A0_SUB
     WHERE A0.ID_OID = A0_SUB.ID) ORDER BY NUCORDER0

and that is all.

Suggest you debug what you're doing different.