I often see Rust mentioned at the same time as MIT-type licenses.
Is it just a cultural thing that people who write Rust dislike copyleft licenses? Or is it baked into the language somehow?
Edit: It has been pointed out that I meant to say “copyleft”, not “libre”, so I’ve edited the title and body accordingly.
Specifically for libraries licensed under the LGPL, the justification I often hear in the Rust world is that using one forces anything that uses it to also be (L)GPL, because Rust always links libraries statically1 into the final binary and therefore cannot meet the LGPL’s requirement that library code licensed under it must be replaceable by the user.
This is not actually the case, however: a precompiled binary can simply ship all the object files it was linked from alongside it, so users can replace the LGPL library’s object files with their own custom version and relink. (For open-source software, shipping the source code obviously meets the requirement as well.)
1 Something I could rant about for hours, as this is lowkey one of the two things that ruin Rust for me, but I digress.
This is completely incorrect for languages like Rust (or, say, C++). You are spouting misinformation.
Generics mean that not all of the code “in a library” can be compiled into the library’s own object file, since it’s impossible to know ahead of time which instantiations will exist. Those instantiations are instead emitted into the object files of the code that uses the library, which therefore contain a copy of the library’s code.
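For example, a generic library function like this made-up sketch can’t be fully compiled ahead of time; the machine code for each concrete type is only generated where it’s used:

```rust
// Hypothetical LGPL-licensed library code: nothing concrete can be compiled
// for this function in advance, because T is unknown until someone calls it.
pub fn largest<T: PartialOrd + Copy>(items: &[T]) -> Option<T> {
    let mut best = *items.first()?;
    for &item in &items[1..] {
        if item > best {
            best = item;
        }
    }
    Some(best)
}

// Imagine this main() lives in a downstream user's crate: these two calls make
// the compiler generate largest::<i32> and largest::<f64> and embed those
// copies of the library's code in the *user's* object files.
fn main() {
    println!("{:?}", largest(&[3, 1, 4]));
    println!("{:?}", largest(&[2.5, 9.1]));
}
```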
It is therefore impossible to “just publish the object files, swap some of them out, and link again”, which means it is impossible to comply with the LGPL if a Rust library is LGPL.
Oh, and also: Rust has no stable ABI. So even if you write a library “without generics” to get around this, it’s still impossible to ensure it keeps working across future Rust compiler changes.
Yes, that is true. And yet, there are LGPL C++ libraries which, as you say, in principle have the same problem. It should be safe if you’re careful not to use generics in the library’s public interface, or at most only generic code that is essentially just thin stubs calling into the real logic. (I haven’t actually tried this myself, tbh.)
In general, any kind of inlined code is a problem when doing this; even C has it with macros, and Java has it with “static final” integer constants.
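To make that concrete, here’s a minimal Rust sketch (names are made up) of the kind of library items whose bodies end up compiled into the caller’s binary rather than staying behind a clean boundary:

```rust
// Hypothetical library code.
pub const BUFFER_SIZE: usize = 4096; // the value is baked into every user at compile time

#[inline]
pub fn checked_double(x: u32) -> Option<u32> {
    // With #[inline] (and often with generics or LTO even without it), this body
    // can be copied directly into each call site in other crates.
    x.checked_mul(2)
}
```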
I definitely should have mentioned this, and Rust’s lack of ABI stability, though, yeah. As for that, keeping the same compiler version is generally not a problem, since all of them remain available.
IIRC, the same compiler version doesn’t mean the ABI will be the same. Each compilation may produce a different representation of data structures in the binary, depending on optimization settings and other things.
Ugh, that would complicate things. If that’s the case, all I can say is that it’s really negligent (and it ties into what I originally said about the lack of a stable ABI really ruining Rust for me; technically I said static linking, but that’s really the core issue).
Yeah, and there’s no plan to stabilize the ABI because the language is still evolving.
You can use the C ABI for some data types, but you’re limited in what you can use (mostly primitives). There’s a crate, abi_stable I think, that provides a way to keep things stable, but since it’s an external crate it has its limitations.
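For what it’s worth, here’s a rough sketch of the kind of interface you can keep stable through the C ABI (the names are just for illustration); only C-compatible types cross the boundary:

```rust
// Layout is pinned down by #[repr(C)] and the calling convention by extern "C",
// so this stays compatible regardless of which rustc version built each side.
#[repr(C)]
pub struct Point {
    pub x: f64,
    pub y: f64,
}

#[no_mangle]
pub extern "C" fn point_length(p: Point) -> f64 {
    (p.x * p.x + p.y * p.y).sqrt()
}
```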
I know it’s frustrating because I’m writing something in Rust that loads functions at runtime. I thought it’d be easy because programs written in C do it all the time. Rust has a lot of advantages, but working on dynamic loading hasn’t been fun, and there aren’t a lot of resources about it either.
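In case it helps anyone, the runtime-loading side usually looks roughly like this with the libloading crate (added as a dependency); the library path and symbol name here are hypothetical, and the signature has to match what the library really exports, which the compiler cannot verify:

```rust
use libloading::{Library, Symbol};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    unsafe {
        // Load a shared library at runtime (example path).
        let lib = Library::new("./libexample.so")?;
        // Look up a C-ABI symbol by name and give it a Rust function type.
        let add: Symbol<unsafe extern "C" fn(i32, i32) -> i32> = lib.get(b"add")?;
        println!("2 + 3 = {}", add(2, 3));
    }
    Ok(())
}
```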
I do not program, so maybe trying to understand all this is over my head. Wikipedia describes
I thought that was the idea of binaries in general. In the Arch repos there are many packages appended with -bin. (The Arch repos also contain items of various licenses, including proprietary.) Lots of FLOSS packages make a binary available by direct download from their website. Without too much detail, is there something special about Rust? Or maybe I misunderstand the concept of a binary release.
Does this mean you need to be able to make a reproducible build? Or need to be able to swap code for something else? Wouldn’t that inherently break a program?
So the basic purpose of a library is to allow code that does some useful thing to be easily used in multiple programs. Say, math functions beyond what is in the language itself, or creating network connections.
When you build a program with multiple source files there are several steps. First, each file is compiled into an object file. This is machine code, but wherever you have a call into another file the compiler just inserts a note that basically says “connect this call to this part of another file.” So for example, “connect this call to the SquareRoot function in the math library.”
After that has been done to every file needed, the linker steps in. It grabs all the object files, combines them into one big file, and then looks for all the notes that say “connect this call to that function” and replaces them with actual calls to the address where it put that function.
That is static linking. All the code ends up in one big executable. Simple, but it has two big problems. The first is size: doing it this way means every program that takes the square root of something has a copy of the entire math library, and that adds up. The second is that if there is an error in the math library, every program needs to be rebuilt for the fix to apply.
Enter dynamic linking. With that, the linker replaces the note to connect to the SquareRoot function in the math library with code that asks the operating system to make the connection.
Then when the program is run, the OS gets a list of the libraries the program needs, finds them, loads them into the memory reserved for that program, and connects them. These are .so files on Linux and .dll files on Windows.
Now the OS only needs one copy of math.so, and if there is an error in the library, an update of math.so can fix all the programs that use it.
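If it helps to see the idea in actual code, here is a tiny Rust sketch of that “note for the linker”: the program only declares that a square-root function exists in some other library, and the (static or dynamic) linker connects the call. This assumes a typical Linux system where the C math library provides sqrt:

```rust
// Declare a function that lives in another library. The compiler only records
// a note ("connect this call to sqrt"); the linker fills in the real address.
#[link(name = "m")] // the C math library on Linux
extern "C" {
    fn sqrt(x: f64) -> f64;
}

fn main() {
    let y = unsafe { sqrt(2.0) };
    println!("the square root of 2 is {y}");
}
```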
For GPL vs LGPL this is an important distinction. The main difference between them is how they treat libraries. (There are other differences and this is not legal advice)
So if math.so is GPL and your code uses it, whether statically or dynamically linked, you have to provide a copy of the source code for your entire program with any executable and license it to the recipient under the GPL.
With the LGPL it’s different. If math.so is statically linked, it acts similarly to the GPL. If it’s dynamically linked, you only have to provide the source needed to build math.so and license it under the LGPL. So you don’t have to give away all your source code, but you do have to provide any changes you made to the math library. So if you added a cubeRoot function to the math library, you would need to provide that.
To add a couple of issues with Dynamic Libraries, and why someone would choose Static Libraries:
Like a lot of things, there are tradeoffs, and there is no universal correct choice.
Agreed. I wasn’t trying to say they are always better, just to explain the difference.
I almost exclusively use Linux and it handles this great. .so libraries are stored with a version number and a link to the latest. So math3.so and math4.so, with math.so being a link to math4.so. That way, if needed, I can set a program to use math3.so and keep everything else on the latest version.
There are two ways of using library code in an executable program: dynamically linked libraries, also called shared libraries (these are .dll files on Windows, .so files on Linux, and .dylib files on Mac), and statically linked libraries, which are embedded into the program executable at build time (specifically during the link step, which is generally the last one).
With dynamically linked libraries, the program just stores a reference to the library file name, and when the program is run, the dynamic linker searches for these library files on disk and loads them into the program’s memory. Statically linked libraries, as said above, are already part of the program’s executable code, so nothing special has to be done at runtime.
This has nothing to do with -bin packages inherently; those usually use at least a couple of dynamically linked libraries anyway (libc at least). In fact, every Rust program dynamically links to libc by default, since glibc is (afaik) effectively impossible to link statically. Some -bin packages ship dynamic libraries along with the program binary (you see this a lot with Qt, since it is pretty hard to link statically), some link their dependencies statically, and some just expect the system to have certain versions of shared libraries available.
The special thing about Rust is that it really does not want you to produce dynamically linked Rust libraries and link them into Rust programs, at least when they’re not inside the same project, since it does not have a stable interface for Rust-to-Rust calls (among a couple of other reasons). This means you cannot ship a Rust shared library as a system package that programs in other packages link against, like you can with, for example, C or Swift. Every dependency has to be built inside the same project as the final executable.
It does not mean you need to make a reproducible build; it just means users must be able to modify the LGPL-licensed parts. For example, say the library loads a file from a specific path that you want to change: you must be able to change that path by editing the library’s source code. This is trivial if it’s a shared library, since you can just build your own version with the path changed and tell the dynamic linker to load that instead of the original, but with a closed-source statically linked binary you cannot easily change it unless the object files are provided. Object files are essentially mostly-final compiled code produced from the source files but not yet linked together into an executable, and importantly the LGPL parts are isolated files which can be swapped out with your own and then linked together again.
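As a purely made-up illustration of that path example (this is not from any real library): the LGPL part might be as small as a function like the one below, and complying means the user can rebuild just that part with a different path and relink it against the rest of the program.

```rust
// Imaginary LGPL-licensed library code. A user exercising their LGPL rights
// could rebuild this with a different path and relink it with the program's
// remaining (possibly proprietary) object files, or swap out the shared library.
pub fn load_config() -> std::io::Result<String> {
    std::fs::read_to_string("/etc/example/config.toml")
}
```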
Doing this does not inherently break a program as long as the interface of the library (like function names, parameter types, general behavior of the code) stays compatible with the original one shipped with the program.