• 0 Posts
  • 210 Comments
Joined 3 years ago
Cake day: January 17th, 2022

  • I have had a Razer Blade Stealth 13 QHD+ touchscreen (RZ09-02393E32) since 2017. Until recently it ran Windows and Ubuntu side by side. I realized a few months ago that I never, ever boot into Windows, so I removed it. I also got tired of Ubuntu pushing its own package management system, which I don’t find useful. Consequently I’m back to “just” Debian stable and it works great for me. I didn’t have to tinker with anything; it just works.




  • Indeed. That being said, I have a (sigh) Android video projector (Nebula Mars II Pro, by Anker) and even though it does come with bloatware (namely it “tries” to force the installation of e.g. YouTube or Netflix apps, without actually doing it), one can ignore those attempts, install F-Droid, install VLC and Launch on Boot from there, then boot straight to VLC without having to interact with the stock launcher. Also, remote adb works by default, so one can tinker quite a bit without even having to activate any kind of developer mode.
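
    For example, the whole sideloading dance can be done from a laptop on the same network (a minimal sketch; the IP address and APK filename are examples, the F-Droid APK itself comes from f-droid.org):

      adb connect 192.168.1.50:5555           # remote adb is enabled out of the box here
      adb install F-Droid.apk                 # sideload F-Droid, then install VLC from it
      adb shell monkey -p org.videolan.vlc 1  # quick trick to launch VLC by package name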




  • I used Kodi with LibreELEC for years in a similar setup. It was nice… but in practice I didn’t really use the “cool” functionalities (like indexing, image preview, Web remote control, etc), so instead I looked at how Kodi works and noticed DLNA. I saw that my favorite video player, namely VLC, supports DLNA. I then looked for DLNA servers on Linux, found a few and stuck with the simplest one, namely minidlna. It’s quite basic, at least the way I use it, but for my usage it’s enough:

    • install VLC on the clients, including the Android video projector, phones, XR HMDs, etc
    • install minidlna on the server (an RPi5)
    • configure minidlna to serve the right directory and its subdirectories (/var/lib/minidlna by default)
    • configure the few extra programs that fetch videos to push them (via an scp script and ssh key) to rpi5:/var/lib/minidlna/

    voila… a very reliable setup (I’ve been using it daily for more than a year).
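
    For reference, the server side boils down to a handful of lines (a minimal sketch; friendly_name is just an example, the path is the default mentioned above):

      # /etc/minidlna.conf
      # index video files only (the V) under the default directory:
      media_dir=V,/var/lib/minidlna
      # name VLC shows under “Local Network” (example):
      friendly_name=rpi5
      # pick up newly pushed files without a manual rescan:
      inotify=yes

    and pushing a new video from any machine whose ssh key is on the Pi is just:

      scp new-video.mkv user@rpi5:/var/lib/minidlna/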



  • So if you are genuinely worried about this, don’t be.

    First because, as numerous people have already clarified, the researchers here are breaking deprecated cryptography.

    It’s a bit like picking a lock with toothpicks while the lock used in your modern car is very different. Yes, it IS actually interesting, but the same technique applies only in principle, not in practice.

    Second because, even IF in principle there IS a path to radically grow in power, there are already modern cryptographic techniques which remain resistant as quantum computers scale in power. Consequently it is NOT just about how small the key is, but also HOW that key is made, i.e. the mathematical foundations on which a key is built, and whether those can be broken.

    Anyway, for a few years now there has been research, roughly tracking the rise of interest in quantum computers, into what is called post-quantum encryption, or quantum-resistant encryption. Basically the goal of the research is to find new ways to make keys that are very cheap to generate and verify, literally with something as cheap and non-powerful as the chip in your credit card, BUT practically impossible to “crack” for any computer, even a quantum one, even a powerful one. The results of that ongoing research are schemes like Kyber, FALCON, SPHINCS+, etc, which answer such requirements. Organizations like NIST in the US verify that the schemes are actually without flaws and then make recommendations.

    So… all this to say that a powerful quantum computer is still not something that breaks encryption overall.

    If you are worried TODAY, you can even “play” with implementations like https://github.com/open-quantum-safe/oqs-demos and set up a server, e.g. Apache, and a client, e.g. Chromium, so that they can communicate using such schemes.
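
    To give an idea, with the Docker images from that repo it can be as short as this (a sketch; the image names and the scheme passed via curl’s --curves flag are taken from the oqs-demos README and may have changed since):

      docker run -p 4433:4433 openquantumsafe/httpd        # Apache serving TLS with post-quantum key exchange
      docker run --network host openquantumsafe/curl \
        curl -k https://localhost:4433 --curves kyber768   # client forcing a post-quantum scheme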

    Now, practically speaking, if you are not technically inclined or just don’t want to bother, you can “just” use modern software, e.g. Signal, which announced last year (https://signal.org/blog/pqxdh/) that it is doing just that on your behalf.

    Finally, you can expect all the actors you rely on daily to access content, e.g. hosts like Lemmy and browsers like Firefox, to gradually integrate post-quantum encryption and also gradually deprecate older, and thus risky, schemes. In fact, if you try to connect today to old hardware via e.g. ssh, you might find yourself forced to accept older encryption. This is interesting because it shows that over the years encryption changes: old schemes get deprecated and replaced.
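
    For instance, reaching an old device typically means explicitly re-enabling algorithms your ssh client has already deprecated (the host and user here are just examples):

      ssh -o KexAlgorithms=+diffie-hellman-group1-sha1 \
          -o HostKeyAlgorithms=+ssh-rsa \
          admin@legacy-router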

    TL;DR: cool, but not worried, even about a properly powerful quantum computer, because post-quantum encryption is already being rolled out.


  • What this shows is a total lack of originality.

    AI is not new. Open source is not new. Putting the two well-known concepts together wasn’t new either because… AI has historically been open. A lot of the cutting-edge research is done in public laboratories, with public funding, and is published in journals (sadly often behind paywalls, but still).

    So the name and the concept are both unoriginal.

    A lot of the popularity OpenAI gained through its chatbot is not new either. Neither is relying on ever-larger datasets and benefiting from Moore’s law.

    So I’m not taking either side, neither this person’s nor the corporation’s.

    I find that claiming to “own” common ideas is destructive for most.




  • Just yesterday I pinned VLC to my KDE Plasma Task Manager. Why? Because this way I can directly open “Recent Files” from it. I discovered this functionality just last week with LibreOffice Draw. It’s so efficient, it absolutely changed how I use my computer daily!

    but… why do I bother with this long example? Because IMHO that comes from KDE, not Debian. When a distro improves the UX, as I also wish it would, it is mostly by selecting the best software among the packages it maintains (e.g. here KDE, though yes, it could indeed be their own custom-made package, even though that requires a lot more resources AND other distros could use it back, assuming it’s FLOSS), but arguably the UX of the distribution itself is limited to the installation process.


  • a shortage of meaningful innovation

    Well… a distribution IS a selection of packages and a way to keep them working together. Arguably the “only” innovation in that context is HOW to do that and WHICH packages to rely on. For the first, the “latest” real changes would be immutable distributions, as on the Steam Deck, and declarative setups, e.g. NixOS. For the second… well, I don’t actually know if anybody is innovating there, maybe things like PrimTux for kids at schools in France?
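
    To illustrate the declarative idea: on NixOS the whole system is described in one file and then rebuilt to match it (a minimal sketch; the package choices are just examples):

      # excerpt of /etc/nixos/configuration.nix:
      #   { pkgs, ... }: {
      #     environment.systemPackages = with pkgs; [ firefox vlc ];
      #     services.openssh.enable = true;
      #   }
      sudo nixos-rebuild switch   # converge the running system to that declaration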

    Anyway, I agree, but I think it’s tricky to be innovative there, so let me flip the question: what would YOU expect from an innovative distribution?


  • I hope everybody criticizing the move either does not use products from Mozilla or, if they do, contributes however they can, up to their own capabilities. If you don’t, if you ONLY criticize yet use Firefox (or a derivative, e.g. LibreWolf), or arguably worse, use something fueled by ads (e.g. Chromium-based browsers), then you are unfortunately contributing precisely to the model you are rejecting.


  • Honestly, a very imperfect alternative that’s been sufficient for me for years is… NextCloud for documents.

    There are a few dozen documents I need regardless of the device, e.g. national ID, billing template, but the vast, VAST majority of my files I can get on my desktop… which is why I replied to you in depth rather than actually doing it. I even wrote some software for a “broader” take on resuming across devices, including offline, namely https://git.benetou.fr/utopiah/offline-octopus, a network of NodeJS HTTP servers, but… again, that’s more intellectual curiosity than pragmatic need. So yes, explore with VMs if you prefer, but I’d argue remain pragmatic, i.e. focus on what you genuinely need rather than an “idealized” system that you don’t actually use yet which makes your workflow and setup more complex and less secure.


  • Regardless of what technical solution you decide to rely on, e.g. borgbackup, Syncthing or rsync, the biggest question is “what” you actually need. You indeed do not need system files, and you probably don’t need applications either (they can be fetched back anyway), so what’s left is actually data. You might want to save your entire ~ directory, but that might still conflict with some things, e.g. ~/.bashrc or ~/.local, so instead you might want to start with individual applications, e.g. Blender, and see where it implicitly, or you explicitly, save the .blend files and all their dependencies.

    How I would do it :

    • over the course of a day, write down each application I’m using, probably a dozen at most (excluding CLI tools)
    • identify for each where its data is stored and possibly simplify that, e.g. all my Blender files in a single directory with subdirectories
    • using whatever solution I have chosen, synchronize those directories
    • test on the other device while being on the same network (it should be much faster, with a better chance of fixing problems)

    then I would iterate over time. If I often had to move and couldn’t really iterate, I would make the entire ~ directory available, even though it’s overkill, and only pick from it on an as-needed basis. I would also make sure to exclude some directories that could be large, maybe ~/Downloads.
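
    That “entire ~ with exclusions” fallback is a one-liner with e.g. rsync (a sketch; the host and the exclusion list are examples to adapt):

      rsync -av \
        --exclude 'Downloads/' --exclude '.cache/' \
        ~/ user@other-device:home-mirror/   # trailing slash on ~/ copies its contents, not the directory itself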

    PS: I’d also explore Nix for the system and application side of things, but honestly only AFTER taking care of what’s actually unique to you, i.e. your data.


  • I remember a discussion with a friend of mine while I was probably droning on about privacy, surveillance capitalism, etc.

    She politely listened then said she didn’t really mind or care.

    I feel quite strongly about this and, as I know she is pretty smart, I was somewhat surprised by her reaction, so I tried to illustrate my point more directly. We were in a bar, so it went a bit like this:

    • A: so, can I ask you how much you earn?
    • B: yes, sure
    • A: can I tell others here in the bar?
    • B: I guess
    • A: can I instead sell that information to others so that they can try to sell you goods and services?
    • B: no

    So my point was that she had associated privacy problems with a friend being a bit curious. Once she started to see it as a systematic commercial endeavor that was unfair to her, she did change her mind.

    Maybe a short thought experiment like this could help your brother see what’s troubling you?



  • AFAICT that’s correct for WebBluetooth indeed, as it’s only implemented by Chromium (and thus all browsers relying on it), but WebUSB https://wicg.github.io/webusb/ is still being discussed at the W3C level, so even though these aren’t standards (which I don’t think the W3C even produces, only API specifications, e.g. HTML isn’t a standard whereas Bluetooth is), others could still possibly implement it.

    To clarify, Firefox is my main browser, but (sadly) for those very specific cases I’m relying on Chromium (WebXR on standalone XR devices, and now even Wolvic switching to Chromium as a backend).

    It’s an important point because, by doing this, Google is pushing for its own set of technologies and its own engine, which comes with a lot of business (namely ads) related “features”, e.g. Manifest V3, that aren’t good for privacy.

    That is also interesting to consider when asking “why” a browser keeps on evolving, i.e. having the most “advanced” browser does give an edge and pushes competition away.