• 1 Post
  • 198 Comments
Joined 5 years ago
Cake day: February 15th, 2021

  • The only reason for CSD is touch interfaces on small screens.

    Even in this case I’d argue that on small screens most apps simply have no real decorations (not even client-side)… there’s typically not even a close button. Hamburger buttons are menus, which isn’t what’s typically considered “decoration”. One could argue that the bar at the bottom in Android with home/back/etc. controls is effectively a form of SSD. Android offers system UI or gestures to send an app to the background (i.e. minimize) or close it; it does not require apps to render their own controls, which is effectively what Gnome is asking for with CSD.


  • They justify the rejection of SSD because it isn’t part of the core Wayland protocol, yet at the same time they push client apps to implement the “minimize” and “maximize” buttons (along with respecting some settings), despite those also not being part of the core protocol and only being possible through extensions. There are a ton of tiling compositors that don’t even have any concept of minimize/maximize, so why should this be required of every client app?

    It feels backwards to ask the app developers to be the ones adding the UI for whatever features the window compositor might decide to have. They might as well be asking all app developers to add a “fullscreen” button to the decoration, or a “sticky” button, or a “roll up”/“shade” button like many old-school X11 WMs used to have. This would lead to apps lagging behind in what they support and to inconsistent UX, while also limiting the flexibility and user customization of the decorations, not just in visuals but also in function and behavior.


  • LLMs abstract information collected from the content through an algorithm (what they store is not the content itself but the result of a series of tests/analyses: a set of characteristics/ideas). If that makes it derivative, then all abstractions are derivative. It’s not possible to make abstractions without collecting data derived from a source you are observing.

    If derivative abstractions were already something copyright could protect, then litigants wouldn’t resort to patents, etc.


  • You are not gonna protect abstract ideas using copyright. Essentially, what he’s proposing implies turning this “TGPL” into some sort of viral NDA, which is a different category of contract.

    It’s harder to convince someone that a content-focused license like the GPLv3 also protects abstract ideas than it is to create a new form of contract/license designed specifically to protect abstract ideas (not just the content itself) from being spread in ways you don’t want them to spread.



  • It’s meant in the sense of “underwhelming” (as shown by the follow-up comment the article references). It’s not incompatible to be surprised at how capable AI is (i.e. being “impressed”) and at the same time be unwilling to pay the costs / repercussions and want to ban / regulate it.

    In this context, being deeply unimpressed with something is equivalent to calling that something “irrelevant” / “incapable”. If AI were no more impressive than it was before the LLM boom, there wouldn’t have been such a reaction against it to begin with. If anything, people now being opposed to modern AI is proof of how impactful AI has become.


  • Yea, but he’s (intentionally?) misrepresenting things… people are not “unimpressed” by AI; what they are is uninterested in MS’s “agentic OS”. These are not the same thing.

    It’s irresponsible to hand over control of your machine to an AI integrated that deeply into the OS, particularly when it’s designed to be tethered to the network and is privately owned and managed by human entrepreneurs who have the company’s interests as their first and main priority.



  • Those are open questions that I don’t think we can answer yet.

    If you are asking whether Valve has made changes there, I expect the answer is likely no. They haven’t shown anything regarding KDE/desktop mode on the Steam Frame, and we have yet to see how exactly this is integrated with gamescope. But if the device does become popular and interest grows for Linux VR development, then I expect we’ll see people trying to make new VR environments for Linux (or adapt existing ones for VR).

    However, given that Valve plans to offer ways to play non-VR games on the Frame, I expect one could add a nested Wayland session as if it were a non-Steam, non-VR game, so within the SteamOS VR environment one could relatively easily have a floating screen showing a traditional KDE session. In that sense one could have a standalone desktop VR environment on the Frame.


  • Yes, I think you’re talking about something else, related to your particular needs. But the post OP opened (which you were replying to) was about discussing what “implications for Linux” the new Steam hardware would have.

    I feel the only part in your comment that was somewhat relevant to the question raised by OP was:

    Anyway IMHO the big questions for VR on Linux more broadly is what changes upstream on KDE in terms of immersive UX? Is KDE Plasma becoming a VR graphical shell? Does it have 3D widgets? Does it impact freedesktop in any way?


  • The only reason Linux became a thing is because Torvalds managed to get engagement and popularity amongst a niche community of hackers that happened to share the same needs/goals.

    Because what gives it importance is the needs we share. “The need of 1” is measured in relation to “the need of many”. Community is a huge piece in the “open source” puzzle. A community of 1 is not a community… it’s a personal space. If you don’t share your software with a community then declaring it “open” is pointless.

    Also… when I said “relevant” I specifically meant for the questions raised by OP. I’m not talking about “relevancy” in some weird transcendental way… I don’t believe such a thing exists… everything has a viewpoint from which something can be said to be “relevant”… however, as you yourself said: “your preferences are not relevant to my needs”.



  • Relevant section:

    At first, around 1996, it was common practice to make the Windows key act as Meta. However, because of the existing alternative keys for Meta in Emacs, the reintroduction of a hardware Meta key binding did not prove exceptionally useful. This made Super the next most frequently emulated key of choice, and thus it became the standard assignment for the Windows key under X11.

    Most Linux software and documentation calls these keys “Super” keys. However, they are still referred to as KEY_LEFTMETA and KEY_RIGHTMETA in the kernel,[5] and some documentation such as that of KDE Plasma refers to it as just the Meta key.[6][7] “Windows” and ⌘[8] are also used in documentation.


  • It’s unclear what you are trying to say. The question was what switching the license would do. There are two scenarios: 1) Google is really not making changes to the ffmpeg source internally right now, or 2) they are in fact making changes to it internally (perhaps for encoding with their own codecs, etc.) which they are not releasing back to the public (since the code is LGPL, not AGPL).

    In situation 1, they can simply continue using ffmpeg, even if it were to switch to AGPL. They would have no need/obligation to release anything, whether they decide to fund development or not. The way I see it, only in situation 2 will Google be affected by a license change. However, if the use they make of ffmpeg is just to have their own encoder program for use with specific codecs, they might as well stop using ffmpeg for this purpose and have their own program to work with their encoders. Most of the encoding work is already done in the encoding libraries released separately (like libaom, which Google licensed under BSD-2).

    But even in the rare case of Google having made changes that (after a license change) they would suddenly be willing to share with the community despite not having done so before… the whole problem with this bug-reporting mess is that most of the issues reported by the automated tools aren’t really that impactful/important; they are things that even Google would not really be interested in fixing… (why would Google need to fix a codec that only affects a videogame cinematic from 1995?). These reports are just the result of automated & indiscriminate AI analysis, slop.


  • AGPL is more “copyleft”, but not really more “permissive”, in the sense that AGPL adds the extra requirement of forcing server admins to provide the source code to the users of any service that internally makes use of AGPL code.

    It plugs a loophole of the other GPL licenses that allows companies to not share any custom modifications as long as they don’t directly share the binaries (they can offer a service using internally modified binaries, but as long as they don’t distribute the binaries themselves they don’t have to share the source code from those modifications running on their private servers, even if they are GPL).

    However, I don’t think a license change would really solve this particular bug-reporting trouble. Most likely Google has not patched these vulnerabilities internally either, or at least the biggest chunk of them (since most of them are apparently edge cases that would most likely not apply to Google’s services anyway).


  • Sounds like a prioritization issue. They could configure the git bots to automatically flag all these as “AI-reported” and filter them out of their TODO, considering them low priority by default, unless/until someone starts commenting on the ticket and brings it to their attention / legitimizes it (something along the lines of the sketch below).

    EDIT: ok, I just read about the 90-day policy… I feel then the problem is not the reporting, but the further actions Google plans based on an automated tool that seems inadequate for judging the severity of each issue.
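    For illustration, a minimal sketch of what such a labeling bot could look like, assuming a GitHub-style REST API, a hypothetical repo path and made-up reporter account names (ffmpeg’s actual tracker works differently, so this is just the shape of the idea, not a drop-in tool):

    ```python
    import requests

    # Hypothetical repo path and placeholder credentials; ffmpeg's real tracker is not GitHub.
    API = "https://api.github.com/repos/EXAMPLE/ffmpeg"
    HEADERS = {
        "Authorization": "Bearer <bot token>",  # placeholder
        "Accept": "application/vnd.github+json",
    }

    # Made-up account names standing in for automated scanners.
    AUTOMATED_REPORTERS = {"big-sleep-bot", "fuzzing-bot"}

    def triage_new_issues():
        # Fetch currently open issues.
        issues = requests.get(f"{API}/issues", headers=HEADERS,
                              params={"state": "open"}).json()
        for issue in issues:
            author = issue["user"]["login"]
            # Automated reporter + no human comments yet => park it by default.
            if author in AUTOMATED_REPORTERS and issue["comments"] == 0:
                requests.post(f"{API}/issues/{issue['number']}/labels",
                              headers=HEADERS,
                              json={"labels": ["AI-reported", "low-priority"]})

    if __name__ == "__main__":
        triage_new_issues()
    ```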


  • Sure, but if it wasn’t triaged, why consider it “medium impact”? I feel that when tight on resources, it’s best to default to “low priority” for all issues whose effect (i.e. on the end-user, or on the software depending on it) isn’t clearly scoped and explained by the reporter. If the reporters (or those affected) have not done the work to make it easy to quickly see why it’s important to have something fixed, then it’s probably not that important to them to have it fixed. Some projects even have bots that automatically close issues whenever there has been no activity for a certain time (though I’d prefer labeling/categorizing them as “low engagement” or something so they can be filtered out when swamped, instead of simply being closed).

    About “public confidence”, I feel that this would rather be “misplaced confidence” if it’s based on a number that is “massaged” to hide issues. Also, this is an open source project we are talking about; there isn’t an investment fund behind it or a need for people to have absolute loyalty or blind trust. The code is objectively there; the trust should never be blind. If there wasn’t a long list of reports I’d be more suspicious of a project as popular, frequently updated & ubiquitous as ffmpeg, especially if the reports are (allegedly) not triaged. Anyone who decides to choose ffmpeg based on the number of open issues without actually investigating from their end how relevant that number actually is… well… they can go look for different software.


  • I agree… I mean, they are not forced to fix the issues; if an issue is obscure and not many people are affected, then there’s no reason why they can’t just mark it as “patches welcome” and leave it there. I feel this is a problem with the project’s prioritization policy, not really a problem with QA / issue reporting.

    For context:

    The latest episode was sparked after a Google AI agent found an especially obscure bug in FFmpeg. How obscure? This “medium impact issue in ffmpeg,” which the FFmpeg developers did patch, is “an issue with decoding LucasArts Smush codec, specifically the first 10-20 frames of Rebel Assault 2, a game from 1995.”

    To me, the problem shouldn’t be the report, but categorizing it as “medium impact” if they think fixing it isn’t “a valuable use of an assembly programmer’s time”.

    Also:

    the former maintainer of libxml2 […] recently resigned from maintaining libxml2 because he had to “spend several hours each week dealing with security issues reported by third parties. Most of these issues aren’t critical, but it’s still a lot of work.”

    Would it truly be better if the issues weren’t reported? What’s the difference between an issue not being reported and an issue not being fixed because it’s not seen as a priority?



  • Yes, it also narrows down the number of potential targets for analysis / reporting. If an extension is not marked “none”, then there’s no need to go out of your way to figure out whether it does it.

    For some extensions it might actually be relatively easy to figure out whether they communicate with an external server they might not need to, especially considering that the extension format can easily be decompressed: .crx files are just zip files with some JavaScript and other files inside… they might want to obfuscate the logic, but it’s not impossible to try and unravel things to some extent.
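    As a rough sketch of that kind of quick check (assuming an unobfuscated extension and a placeholder file name; Python’s zipfile usually tolerates the small CRX header that precedes the zip payload, but that’s not guaranteed):

    ```python
    import re
    import zipfile

    CRX_PATH = "some_extension.crx"  # placeholder path, not a real extension

    # Crude patterns that hint at network activity in the bundled JavaScript.
    NETWORK_HINTS = re.compile(
        rb"fetch\(|XMLHttpRequest|new WebSocket|navigator\.sendBeacon"
    )

    # .crx files are zip archives (with a small header prepended); zipfile can
    # usually open them directly since it locates the archive from the end.
    with zipfile.ZipFile(CRX_PATH) as crx:
        for name in crx.namelist():
            if not name.endswith(".js"):
                continue
            data = crx.read(name)
            for match in NETWORK_HINTS.finditer(data):
                print(f"{name}: possible network call near byte {match.start()}")
    ```

    Obviously this only flags candidates for a closer manual look; minified or obfuscated code would need more work.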