2xsaiko

  • 7 Posts
  • 1.08K Comments
Joined 2 years ago
Cake day: June 12th, 2023

  • Oh hey, awesome!

    I’ll just paste here what I’ve already written in Feedback Assistant.

    FB15922287 Messages sometimes only allows you to send SMS despite the recipient supporting RCS
    • Device: any
    • App: Messages
    • Conversation type: 1-on-1, RCS

    Sometimes, Messages only allows you to send SMS despite the recipient supporting RCS. During a conversation, receiving an RCS message even briefly changes the text in the input bar to RCS before switching back to SMS.

    This seems to be caused by a failure to deliver an RCS message sometime earlier, which can always happen (bad connection on either side, recipient offline, other network errors, etc.).

    To reproduce:

    1. Take recipient phone offline
    2. Send an RCS message
    3. Wait for it to fail to be delivered
    4. Take recipient phone online

    Observed result: Messages shows the recipient as supporting SMS only, even when they are actively sending you RCS messages.

    Expected result: RCS is never artificially disallowed by Messages.


    This also happens if you use the “Send as SMS” entry in the context menu on a stuck message.


    Looks like at this point receiving an RCS message fixes this. However, one sent RCS message that falls back to SMS will still lock the conversation to SMS until you get a reply.


    FB16920262 Separate window for single conversation does not mark messages as read
    • Device: Mac
    • App: Messages
    • Conversation type: 1-on-1, RCS (probably any)

    When you open a conversation in a new window, that window does not mark messages as read while it is active, the way the main window does.

    To reproduce:

    • Open conversation in new window (Control click on conversation, Open in New Window)
    • Receive a message
    • Focus the separate window/scroll to the bottom

    Observed result: The received message is not marked as read

    Expected result: The received message is marked as read like in the main window


    FB17053263 When "Group windows by application" is enabled, selecting a specific window from Mission Control does not actually focus the window
    • Device: Mac
    • App: Mission Control

    In Mission Control, with the “Group windows by application” setting enabled, you can scroll up to unstack the windows of an application so you can select a specific one. In this view, clicking one of them focuses the application but not the window, leading to a weird state where the application is focused but none of its windows are, and the menu bar also isn’t interactable until you click on another window.

    Steps to reproduce:

    • Turn on “Group windows by application” in System Settings (Desktop & Dock)
    • Open an application such as TextEdit, open one or multiple windows
    • Open Mission Control
    • Scroll up on the TextEdit group to expand its windows
    • Click one of the TextEdit windows

    Observed result: The application is focused but none of its windows are, nor are they ordered to front. This also looks really glitchy: Mission Control plays the animation as if the selected window will end up in front, but once the animation finishes it moves back behind any other windows that were in front of it.

    Expected result: The selected window is focused and ordered to front, like when you don’t scroll up.


    System info:

    • MacBook Air M2, 2022, 8GB RAM, 256GB storage
      • macOS 15.3.2 (24D81)
    • iPhone 13 mini, 128GB storage
      • iOS 18.3.2 (22D82)

    I have a couple more watchOS ones, and a couple wonky ones that I either don’t know how to reliably reproduce, or can’t currently reproduce but don’t know whether they’re fixed yet. But these should be all the non-watchOS bugs I can think of right now that I can reliably reproduce. If I find/remember more I’ll post them.

    This also just made me check and close 2 feedbacks that did get fixed in the meantime :^)




  • I’m primarily talking about Win32 API when I talk about Windows, and for Mac primarily Foundation/AppKit (Cocoa) and other system frameworks. What third-party libraries do or don’t do is their own thing.

    There’s also nothing wrong with bundling specialized dependencies in principle if you provide precompiled binaries. If it’s shipped via the system package manager, that can manage the library versions and in fact it should do that as far as possible. Where this does become a problem is when you start shipping stuff like entire GUI toolkits (hello bundled Qt which breaks Plasma’s style plugins every time because those are not ABI-compatible either).

    > The amount of time that I had to get out of .dll-hell on Windows on the other hand. The Linux way is better and way more stable.

    Try running an old precompiled Linux game (say Unreal Tournament 2004 for example). They can be a pain to get working. This is not just some “ooooh gotcha” case, this is an important thing that’s missing for software preservation and cross-compatibility, because not everything can be compiled from source by distro packagers, and not every unmaintained open-source software can be compiled on modern systems (and porting it might not be easy because of the same problem).

    I suppose what Linux is severely lacking is a comprehensive upwards-compatible system API (such as Win32 or Cocoa) which reduces the churn between distros and between version releases. Something that is more than just libc.

    We could maybe have had this with GNUstep, for example (and it would have solved a bunch of other stuff too). But it looks like nobody cares about GNUstep and instead it seems like people are more interested in sidestepping the problem with questionably designed systems like Flatpak.



  • Distributions are not the problem. Most just package upstream libraries as-is (plus/minus some security patches). That’s why programs built for another distro will often run as-is on a contemporary distro, given the necessary dependencies are installed, perhaps with some patching of the library paths (as an extreme example, plenty of packages in nixpkgs just use precompiled deb packages as a source, even though nixpkgs has a very different file layout).

    Try a binary built for an old enough Ubuntu version on a new Ubuntu version however…





  • Is this after it becomes unresponsive? I’m not seeing anything suspicious except maybe some D-Bus activation errors but those shouldn’t do anything like that. Does the mouse cursor still move? Anything in dmesg after it starts doing that? What about CPU or memory usage? Can you switch to another TTY?


  • Yes, that is true. And yet, there are C++ LGPL libraries which as you say do in principle have the same problem. It should be safe if you’re careful about not using generics in the library’s public interface, or at least only generic code that is essentially just stubs calling the real logic. (I haven’t actually tried this myself tbh.)

    In general, any kind of inlined code is a problem when doing this; even C has this with macros, and Java with “static final” integer constants.
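
    A minimal sketch of that stub pattern (hypothetical names; just an illustration of keeping generics out of the swappable part of the library):

    ```rust
    // "Real logic": non-generic, so its compiled code stays inside the
    // library and can be swapped out by replacing/relinking the library.
    fn log_value_impl(s: &str) -> String {
        format!("[lib] {}", s)
    }

    // Generic stub in the public interface: this gets monomorphized into
    // the caller's binary, so it must stay a trivial forwarder.
    pub fn log_value<T: std::fmt::Display>(value: T) -> String {
        log_value_impl(&value.to_string())
    }

    fn main() {
        // Both instantiations end up as calls into log_value_impl.
        println!("{}", log_value(42));
        println!("{}", log_value("hello"));
    }
    ```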

    I definitely should have mentioned this, and Rust’s lack of a stable ABI, though, yeah. As for the latter, keeping the same compiler version is generally not a problem since all of them are available.



  • There are two ways of using library code in an executable program: dynamically linked libraries, also called shared libraries (.dll files on Windows, .so files on Linux, .dylib files on macOS), and statically linked libraries, which are embedded into the program executable at build time (specifically during the link step, which is generally the last one).

    A dynamically linked library is referenced by file name in the executable; when the program is run, the dynamic linker searches for these library files on disk and loads them into the program’s memory. Statically linked libraries, as said above, are already part of the program’s executable code, so nothing special has to be done at runtime.

    This has nothing to do with bin packages inherently; those usually use at least a couple of dynamically linked libraries anyway (libc at the very least). In fact, every Rust program dynamically links libc by default, since glibc is (afaik) impossible to statically link. Some bin packages ship dynamic libraries along with the program binary (you see this a lot with Qt, since it is pretty hard to link statically), some link their dependencies statically, and some just expect the system to have certain versions of shared libraries available.

    The special thing about Rust is that it really does not want you to build dynamically linked Rust libraries and link them into Rust programs, at least across project boundaries, since it does not have a stable interface for Rust-to-Rust calls (among a couple of other reasons). This means you cannot ship a Rust shared library as a system package that programs in other packages link against, like you can with languages such as C or Swift. Every dependency has to be built inside the same project as the final executable.
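
    The usual workaround, for what it’s worth, is to expose the library over the C ABI (which is stable) instead of the Rust ABI. A rough sketch (hypothetical function; in a real build this would live in its own crate with `crate-type = ["cdylib"]` in Cargo.toml):

    ```rust
    // Exported with an unmangled symbol name and the C calling convention,
    // so the stable C ABI is used instead of the unstable Rust ABI.
    #[no_mangle]
    pub extern "C" fn lib_add(a: i32, b: i32) -> i32 {
        a + b
    }

    fn main() {
        // A consumer (Rust or C) would declare this symbol and let the
        // dynamic linker resolve it; here we just call it directly.
        println!("{}", lib_add(2, 3));
    }
    ```

    The tradeoff is that you lose most of Rust’s type system at the boundary, which is part of why nobody ships Rust shared libraries this way casually.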

    > library code licensed under it must be able to be replaced.

    > Does this mean you need to be able to make a reproducible build? Or need to be able to swap code for something else? Wouldn’t that inherently break a program?

    It does not mean you need to make a reproducible build; it just means users must be able to modify the LGPL-licensed parts. For example, if the library loads a file from a specific path that you want to change, you must be able to change that path by editing the library source code. This is trivial if it’s a shared library, since you can just build your own where the path is changed and tell the dynamic linker to load that instead of the original. With a closed-source statically linked binary, however, you cannot easily change it unless it provides the object files. Object files are essentially mostly-final compiled code produced from the source files but not yet linked together into an executable; significantly, the LGPL parts are isolated files which can be swapped out with your own and then linked together again.

    Doing this does not inherently break a program as long as the interface of the library (like function names, parameter types, general behavior of the code) stays compatible with the original one shipped with the program.
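
    To make the path example concrete (hypothetical names; in a real setup these two parts would be separate compilation units, with the first one swappable):

    ```rust
    // LGPL library part: the only thing the user wants to change.
    // Rebuild just this with a different path and relink; the program
    // part below stays untouched as long as the signature is the same.
    pub fn config_path() -> &'static str {
        "/etc/app/config.toml"
    }

    // Closed-source program part: shipped as-is, never recompiled.
    pub fn load_config_description() -> String {
        format!("loading config from {}", config_path())
    }

    fn main() {
        println!("{}", load_config_description());
    }
    ```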



  • Specifically for libraries licensed as LGPL, a lot of the time with Rust I hear the justification that it forces anything using it to also be (L)GPL, because Rust always links libraries statically1 into the final binary and therefore does not meet the requirement of the LGPL that library code licensed under it must be able to be replaced.

    This is absolutely not the case, however: a precompiled binary can just ship all the object files it is linked from, so users can replace the object files of the LGPL library with their own custom version. For open-source software, shipping the source code also meets the requirement, of course.

    1 something I could rant about for hours as this is lowkey one of the two things that ruins Rust for me but I digress



  • I use LLVM because it’s good, but I would like it even more if it was GPL and I agree with OP’s comment as well.

    However, you’re literally the guy that replies “oh, so you hate oranges” to people that say “I like apples” or however that meme goes. How about you don’t completely twist people’s justifications into something they never said.

    edit: It comes down to this: I have no say in whether other people want to allow their code to be exploited by corporations, nor does it make a practical difference to me in what I can do with it; all I can do is say “you’re an idiot” to them.