Hemingways_Shotgun

  • 0 Posts
  • 18 Comments
Joined 2 years ago
Cake day: June 7th, 2023

  • Seeing what is happening reminds me of Eisenhower seeing the concentration camps for the first time and immediately calling for the media to photograph everything and for members of Congress to view the atrocities in person. He knew that there might come a day when the world wanted to pretend that it never happened, and he insisted that there be enough evidence that there was no way anyone could deny the Holocaust.

    It was the greatest act by a president (in my opinion) before he even became president.

    The lesson being, everyone in these cities needs to be recording everything, so that when the blatantly criminal administration and its enablers are finally held to legal account (and they will be), there can be no denying what they attempted to do in America in 2025.




  • It’s not that I’m disagreeing with you. I’m just not agreeing with you.

    I personally think that (as unpopular an opinion as it may be) Flatpaks largely make the choice of first distro irrelevant. The weakness in Manjaro is that you either risk using the AUR or stay on old versions of the software. Or with Mint/Ubuntu/etc… you either risk adding random repos to your sources list or you use older versions of the software.

    Either way, you run the risk of a new person mucking up their system with a bad repo or a bad AUR package.

    The alternative, using Flatpaks, largely solves both issues when you need newer versions of certain software, and they are dead simple to install/remove/update, etc…

    And I say this as someone who was super skeptical of Flatpaks for a very, very long time.





  • Exactly that.

    If I were to google how to get gum out of my child’s hair and then be directed to that same Reddit post, I’d read through it and be pretty sure which answers were jokes and which were serious; we make such distinctions, as you say, every day without much effort.

    LLMs simply don’t have that ability. And the number of average people who just don’t get that is mind-boggling to me.

    I also find it weirdly dystopian that, if you sum that up, it kind of makes it sound like, in order for an LLM to make the next step towards A.I., it needs a sense of humour. It needs the ability to weed out whether the information it’s digging from is serious, or just random jack-asses on the internet.

    Which is turning it into a very very Star Trek problem.


  • The fact that any AI company thought to train their LLM on the answers of Reddit users speaks to a fundamental misunderstanding of their own product (IMO).

    LLMs aren’t programmed to give you the correct answer. They’re programmed to give you the most pervasive/popular answer on the assumption that most of the time that will also happen to be the right one.

    So when you’re getting your knowledge base from random jackasses on Reddit, a good-faith question like “What’s the best way to get gum out of my child’s hair?” gets two good-faith answers, and then a few dozen smart-ass answers that get lots of replies and upvotes because they’re funny. Guess which one your LLM is going to use.

    People (and apparently even the creators themselves) think that an LLM is actually cognizant enough to be able to weed this out logically. But it can’t. It’s not an intelligence…it’s a knowledge aggregator. And as with any aggregator, the same rule applies:

    garbage in, garbage out
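
    The popularity-versus-correctness point can be sketched as a toy aggregator (purely illustrative — the answers and vote counts are made up, and real LLM training is vastly more complex than a vote tally):

    ```python
    # Toy illustration of "most popular != most correct". Hypothetical
    # answers and upvote counts; not real Reddit data or a real LLM.
    answers = [
        ("Work peanut butter into the gum and comb it out.", 12),   # good-faith
        ("Freeze the gum with an ice cube, then pick it off.", 9),  # good-faith
        ("Just shave the kid bald. Problem solved.", 340),          # joke, heavily upvoted
        ("Set the hair on fire; the gum melts right off.", 210),    # joke
    ]

    def most_popular(candidates):
        """Return the answer with the most upvotes -- no notion of correctness."""
        return max(candidates, key=lambda pair: pair[1])[0]

    print(most_popular(answers))  # the heavily-upvoted joke wins
    ```

    A system that weights answers by how often they were echoed and rewarded will happily surface the shaving joke, which is the whole problem in miniature.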