Hello folks,

I/we are trying to reboot this library. The idea is to unify the many scattered, leaf-level conventions for handling natives across the whole cycle, from producer publication to user consumption, since this is one of the causes of pain and extra overhead for end consumers and library developers. For example:
- in Scenery/SciView, where we have to manually fix the pom
- in the incoming Gradle Platform, where I have to manually hardcode a list of the runtime dependencies in order to later set up runtime constraints for them, instead of relying on some sort of "runtime classifier schema"
This attempt is also a chance to fix some other, (strictly) related issues, hopefully once and for all, before the transition to the new Foreign Function API and future JNI limitations take over.
Admittedly, all of this goes beyond the scope of native-lib-loader, but there isn't really a right place to discuss all of these topics, so I'll just borrow the "Issues" space here as a placeholder.
Let me write down some of the ideas we came up with; the goal is to reach a broader audience for further feedback, improvements, etc.

This might also be interesting for some people outside this repo besides @ctrueden (and @bmarwell?), such as @jjohannes and @Spasi.
One artifact for each supported "OS_Arch"

First of all, natives should be published for each OS/Arch combination, in order to avoid pulling in dependencies that download files not meant for the given platform, which just pollute the consumer classpath; this saves space, bandwidth and time at scale.
The good news is that most libraries nowadays already follow this path, but some of them (like jinput), unfortunately, don't. We could help them by providing a PR ourselves or, in the worst cases, by re-publishing the artifacts ourselves on scijava (or on Central under new GAVs)? I don't know.
Using the os-maven-plugin normalized os and arch values

You can see them as enums here; they are copy/pasted from the os-maven-plugin.

Gradle already has something similar, but it's somewhat limited. They also use macos for MACOS, for instance.
Publishing using the Gradle Metadata rich format

Gradle does this natively, of course, but a Gradle plugin will make the creation of those variants even easier.

And the gradle-module-metadata-maven-plugin will produce the Gradle metadata for library developers using Maven. Those library developers using Maven and targeting other Maven consumers should declare the natives dependency in their pom using placeholders for OS and Arch.
In the SciJava ecosystem this is already done today in some places, such as here.
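As an illustration, a pom could use the os-maven-plugin extension and its detected values as placeholders (a sketch; the GAV of the natives dependency below is hypothetical, only the kr.motd.maven extension coordinates and the os.detected.* properties are real):

```xml
<build>
  <extensions>
    <extension>
      <groupId>kr.motd.maven</groupId>
      <artifactId>os-maven-plugin</artifactId>
      <version>1.7.1</version>
    </extension>
  </extensions>
</build>

<dependencies>
  <dependency>
    <!-- hypothetical coordinates, for illustration only -->
    <groupId>com.example</groupId>
    <artifactId>some-lib-natives</artifactId>
    <version>1.0.0</version>
    <classifier>natives-${os.detected.name}-${os.detected.arch}</classifier>
    <scope>runtime</scope>
  </dependency>
</dependencies>
```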
We should come up with some convention for the classifier; something like natives-$os-$arch could be an idea. Lwjgl already does something like that, although for x64 classifiers it simply shortens to natives-$os.
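A minimal sketch of how a loader-side helper could derive such a classifier from the running JVM, mirroring the os-maven-plugin normalization rules for the most common values (the class and method names here are assumptions, not an existing API, and only a subset of the plugin's value table is covered):

```java
// Sketch: normalize os.name/os.arch the way os-maven-plugin does
// (only the most common values are covered here).
public final class NativeClassifier {

    // Normalize an "os.name" value to the os-maven-plugin convention.
    public static String os(String osName) {
        String v = osName.toLowerCase().replaceAll("[^a-z0-9]", "");
        if (v.startsWith("linux")) return "linux";
        if (v.startsWith("macosx") || v.startsWith("osx")
                || v.startsWith("darwin")) return "osx";
        if (v.startsWith("windows")) return "windows";
        return "unknown";
    }

    // Normalize an "os.arch" value to the os-maven-plugin convention.
    public static String arch(String osArch) {
        String v = osArch.toLowerCase().replaceAll("[^a-z0-9]", "");
        switch (v) {
            case "x8664": case "amd64": case "x64":   return "x86_64";
            case "x8632": case "x86": case "i386":
            case "i486": case "i586": case "i686":    return "x86_32";
            case "aarch64": case "arm64":             return "aarch_64";
            default:                                  return "unknown";
        }
    }

    // Build the proposed "natives-$os-$arch" classifier.
    public static String classifier(String osName, String osArch) {
        return "natives-" + os(osName) + "-" + arch(osArch);
    }

    public static void main(String[] args) {
        System.out.println(classifier(System.getProperty("os.name"),
                                      System.getProperty("os.arch")));
    }
}
```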
Extracting and caching natives
Natives will be extracted into a directory following:
- the XDG conventions, so under the user home (like ~/.cache/native-lib-loader)
- the Gradle dependency cache GAVH ("groupId/artifactId/version/hashSha1") schema; it's important to take the hashing into account because, according to the Gradle docs:
It is possible for different repositories to provide a different binary artifact in response to the same artifact identifier. This is often the case with Maven SNAPSHOT artifacts, but can also be true for any artifact which is republished without changing its identifier. By caching artifacts based on their SHA1 checksum, Gradle is able to maintain multiple versions of the same artifact. This means that when resolving against one repository Gradle will never overwrite the cached artifact file from a different repository. This is done without requiring a separate artifact file store per repository.
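As a sketch of that layout (class and method names are hypothetical, and hashing the artifact bytes is an assumption about what the SHA-1 would be computed over):

```java
import java.math.BigInteger;
import java.nio.file.Path;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

// Sketch: build the GAVH cache directory for an artifact, following
// ~/.cache/native-lib-loader/$groupId/$artifactId/$version/$hashSha1
public final class NativeCache {

    // Hex-encoded SHA-1 of the artifact bytes.
    public static String sha1(byte[] artifactBytes) {
        try {
            byte[] digest = MessageDigest.getInstance("SHA-1").digest(artifactBytes);
            return String.format("%040x", new BigInteger(1, digest));
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException(e); // SHA-1 is always available
        }
    }

    // cacheRoot/groupId/artifactId/version/hashSha1
    public static Path dir(Path cacheRoot, String groupId, String artifactId,
                           String version, byte[] artifactBytes) {
        return cacheRoot.resolve(groupId)
                        .resolve(artifactId)
                        .resolve(version)
                        .resolve(sha1(artifactBytes));
    }

    public static void main(String[] args) {
        Path root = Path.of(System.getProperty("user.home"),
                            ".cache", "native-lib-loader");
        System.out.println(dir(root, "com.example", "some-lib", "1.0.0",
                               new byte[] {1, 2, 3}));
    }
}
```

Because the last path segment is the checksum of the bytes, two repositories republishing different binaries under the same identifier land in different directories and never overwrite each other, exactly as in the Gradle rationale quoted above.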
Consuming the natives
Gradle handles .module files natively and will properly resolve them, so no work is needed there; the consumer just has to set the os and arch attributes to find the right natives under the hood. A plugin could also help by setting these values automatically from the machine it runs on.
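For instance, a consumer build script might set Gradle's standard native attributes on its configurations (a sketch in the Kotlin DSL; whether the published variants would use these exact attributes, rather than custom ones, is an assumption — and a plugin would detect the values instead of hardcoding them):

```kotlin
import org.gradle.nativeplatform.MachineArchitecture
import org.gradle.nativeplatform.OperatingSystemFamily

configurations.configureEach {
    attributes {
        // A plugin would normally detect these from the running machine.
        attribute(OperatingSystemFamily.OPERATING_SYSTEM_ATTRIBUTE,
                  objects.named(OperatingSystemFamily::class.java, OperatingSystemFamily.LINUX))
        attribute(MachineArchitecture.ARCHITECTURE_ATTRIBUTE,
                  objects.named(MachineArchitecture::class.java, MachineArchitecture.X86_64))
    }
}
```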
SciJava already does something similar with scijava.natives.classifier & co., and a corresponding Maven plugin (or whatever it ends up being) might also provide the same automatic setup under the hood.
Loading the natives at runtime
A NativeLoader2 (or whatever we call it) will read all the underlying native libraries from the given GAVH ("groupId/artifactId/version/hashSha1") coordinate, extract them if they weren't already cached, and load them downstream safely, respecting the JNI restriction:
The same JNI native library cannot be loaded into more than one class loader.
This means that the classloader must match the classloader of the class declaring the native methods.
This is achievable by having the i-th class pass down to NativeLoader2 a reference to its System::load, like Lwjgl has already been doing for ages.
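A minimal sketch of that shape (the name NativeLoader2 comes from the text above, but the method signature and the extension filter are assumptions): the caller passes its own System::load reference, so the JNI libraries end up bound to the caller's classloader, and by default every native library found in the extracted artifact directory is loaded.

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.function.Consumer;
import java.util.stream.Stream;

// Sketch of the runtime loader: scan the extracted GAVH directory and
// hand every native library it contains to the caller-supplied loader.
public final class NativeLoader2 {

    private static final List<String> NATIVE_EXTENSIONS =
            List.of(".so", ".dll", ".dylib", ".jnilib");

    // `load` is typically the caller's System::load, so the JNI
    // libraries are bound to the caller's classloader.
    public static List<Path> loadAll(Path extractedDir, Consumer<String> load) {
        try (Stream<Path> files = Files.walk(extractedDir)) {
            List<Path> natives = files
                    .filter(NativeLoader2::isNativeLibrary)
                    .sorted() // deterministic load order
                    .toList();
            for (Path lib : natives)
                load.accept(lib.toAbsolutePath().toString());
            return natives;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    private static boolean isNativeLibrary(Path p) {
        String name = p.getFileName().toString();
        return Files.isRegularFile(p)
                && NATIVE_EXTENSIONS.stream().anyMatch(name::endsWith);
    }
}
```

A consumer class declaring native methods would then call NativeLoader2.loadAll(dir, System::load) from its own code, so that the method reference captures its own classloader.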
Also, we should prefer System::load over System::loadLibrary; this could solve issues such as this one. As noted here:
The drawback of loadLibrary() is so that it returns the confusing error message "java.lang.UnsatisfiedLinkError: myLib (Not found in java.library.path)" even if it is found, but could not be loaded due to other reasons. In such case calling load()/loadLibrary() with the absolute path shows additional output like "rtld: Symbol myFunc was referenced ... but a runtime definition was not found." which is hidden by loadLibrary() with the library basename as argument. That is the best way is to write own method which iterates through java.library.path and calls load() per item.
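The iteration the quote recommends could look roughly like this (a sketch; the class and method names are invented, and the platform-specific file-name mapping is delegated to System.mapLibraryName):

```java
import java.io.File;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Optional;
import java.util.function.Consumer;

// Sketch: walk java.library.path ourselves and call load() with an
// absolute path, so link errors report the real cause instead of
// a misleading "not found in java.library.path".
public final class LibraryPathLoader {

    // Find the platform file (e.g. libfoo.so / foo.dll) for `baseName`
    // on the given search path, usually System.getProperty("java.library.path").
    public static Optional<Path> find(String searchPath, String baseName) {
        String fileName = System.mapLibraryName(baseName);
        for (String dir : searchPath.split(File.pathSeparator)) {
            if (dir.isEmpty()) continue;
            Path candidate = Path.of(dir, fileName);
            if (Files.isRegularFile(candidate))
                return Optional.of(candidate.toAbsolutePath());
        }
        return Optional.empty();
    }

    // Load with an absolute path; `load` is typically the caller's System::load.
    public static void load(String searchPath, String baseName, Consumer<String> load) {
        Path lib = find(searchPath, baseName).orElseThrow(() ->
                new UnsatisfiedLinkError(baseName + " not found on " + searchPath));
        load.accept(lib.toString());
    }
}
```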
And/or we may use JNA to get dependent libraries properly added as well, as noted here. Some field in META-INF/MANIFEST might serve this purpose well.
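For instance, a hypothetical manifest attribute could list the dependent libraries in load order (the attribute name Native-Dependencies below is invented for illustration, not an existing convention):

```java
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.util.List;
import java.util.jar.Manifest;

// Sketch: read a (hypothetical) "Native-Dependencies" attribute from
// META-INF/MANIFEST.MF, listing dependent native libraries in load order.
public final class ManifestNatives {

    public static List<String> dependencies(Manifest manifest) {
        String value = manifest.getMainAttributes().getValue("Native-Dependencies");
        if (value == null || value.isBlank()) return List.of();
        return List.of(value.trim().split("\\s+"));
    }

    public static Manifest parse(String text) throws IOException {
        return new Manifest(new ByteArrayInputStream(text.getBytes()));
    }
}
```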
However, the default strategy should be to load all the natives present in the given artifact, in case there is more than one.
Looking forward to your feedback :)
elect86 changed the title from "Library reboot" to "Library reboot and much more" on Apr 22, 2024