• 0 Posts
  • 13 Comments
Joined 2 months ago
Cake day: September 21st, 2024

  • GreenKnight23@lemmy.world to Science Memes@mander.xyz · Probably · 41 up · 17 days ago

    had this happen to me at a conference. I didn’t realize they were going to put where I worked on my nametag, so I spent three days walking around as the guy who worked at “some dumbass company”.

    it went over surprisingly well though, and was a great icebreaker that landed me an interview for another job.


  • GreenKnight23@lemmy.world to Science Memes@mander.xyz · Nobel Prize 2024 · 1 up / 4 down · 19 days ago

    the crypto scam ended when the AI scam started. AI conveniently uses the same/similar hardware that crypto used before the bubble burst.

    that not enough? take a look at this Google Trends chart showing that when interest in crypto died, AI took off.

    [screenshot: Google Trends graph of search interest in crypto vs. AI]

    so yeah, there’s a lot more that connects the two than what you’d like people to believe.


  • GreenKnight23@lemmy.world to Science Memes@mander.xyz · Nobel Prize 2024 · 2 up / 4 down · 19 days ago

    not once did I mention ChatGPT or LLMs. why do aibros always bring them up as an argument? I think it’s because you all know how shit they are, so you call it out first to disarm anyone trying to use them as proof of how shit AI is.

    everything you mentioned is ML and algorithm interpretation, not AI. fuzzy data is processed by ML. fuzzy inputs, ML. AI stores data similarly to a neural network, but that does not mean it “thinks like a human”.

    if nobody can provide peer-reviewed articles, that means they don’t exist, which means all the “power” behind AI is just hot air. if they existed, you could just pop the question into your little LLM and have it spit the articles out.

    AI is a marketing joke like “the cloud” was 20 years ago.


  • GreenKnight23@lemmy.world to Science Memes@mander.xyz · Nobel Prize 2024 · 5 up / 6 down · 19 days ago

    when I get an email written by AI, it means the person who sent it doesn’t deem me worth the time to respond themselves.

    I get a lot of email that I have to read for work. It used to be about 30 a day that I had to respond to. now that people are using AI, it’s at or over 100 a day.

    I provide technical consulting and give accurate feedback based on my knowledge of and experience with the product I have built over the last decade and a half.

    if nobody is reading my email, why does it matter if I’m accurate? if generative AI is training on my knowledge and experience, where does that leave me in 5 years?

    business is built on trust; AI circumvents that trust by replacing the nuances between partners that grow it.


  • GreenKnight23@lemmy.world to Science Memes@mander.xyz · Nobel Prize 2024 · 4 up / 11 down · 19 days ago

    those aren’t examples, they’re hearsay. “oh everybody knows this to be true”

    > You are ignoring ALL of the positive applications of AI from several decades of development, and only focusing on the negative aspects of generative AI.

    generative AI is the only “AI”. everything that came before that was a thought experiment based on the human perception of a neural network. it’d be like calling a first draft a finished book.

    if you consider the Turing Test AI, then it blurs the line between a neural net and nested if/else logic.

    > Here is a non-exhaustive list of some applications:

    > • In healthcare as a tool for earlier detection and prevention of certain diseases

    great, give an example of this being used to save lives from a peer-reviewed source that won’t be biased by product development or hospital marketing.

    > • For anomaly detection in intrusion detection systems, protecting web servers

    let’s be real here, this is still a golden turd and is more ML than AI. I know because it’s my job to know.

    > • Disaster relief for identifying the affected areas and aiding in planning the rescue effort

    hearsay, give a credible source of when this was used to save lives. I doubt that AI could ever be used in this way because it’s basic disaster triage, which would open ANY company up to litigation should their algorithm kill someone.

    > • Fall detection in e.g. phones and smartwatches that can alert medical services, especially useful for the elderly.

    this is dumb. AI isn’t even used in this and you know it. algorithms are not AI. falls are detected when a sudden change in gyroscopic speed/direction is identified based on a set number of variables. everyone falls the same when your phone is in your pocket. dropping your phone will show differently due to a change in mass and spin. again, algorithmic, not AI.
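    for illustration only, a toy sketch of the kind of threshold check meant here; the thresholds and numbers are made up, not pulled from any real phone or watch:

    ```python
    # toy threshold-based fall detector (made-up thresholds): a plain
    # algorithm over accelerometer magnitudes, no learned model involved.
    FREE_FALL_G = 0.3   # near-zero g while falling (assumption)
    IMPACT_G = 2.5      # sharp spike on impact (assumption)

    def detect_fall(accel_magnitudes_g: list[float]) -> bool:
        """Flag a fall if a free-fall dip is followed by an impact spike."""
        saw_free_fall = False
        for g in accel_magnitudes_g:
            if g < FREE_FALL_G:
                saw_free_fall = True
            elif saw_free_fall and g > IMPACT_G:
                return True
        return False

    # detect_fall([1.0, 0.2, 0.1, 3.1, 1.0]) -> True
    ```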

    > • Various forecasting applications that can help plan e.g. production to reduce waste. Etc…

    forecasting is an algorithm, not AI. ML would determine how accurate an algorithm is based on what it knows. algorithms and ML are not AI.

    > There have even been a lot of good applications of generative AI, e.g. in production, especially for construction, where a generative AI can design a functionally equivalent product with less material while still maintaining the strength. This reduces the cost of manufacturing, and also the environmental impact due to the reduced material usage.

    this reads just like the marketing bullshit companies promote to show how “altruistic” they are.

    > Does AI have its problems? Sure. Is generative AI being misused and abused? Definitely. But just because some applications are useless, it doesn’t mean that the whole field is.

    I won’t deny there is potential there, but we’re a loooong way from meaningful impact.

    > A hammer can be used to murder someone, that does not mean that all hammers are murder weapons.

    just because a hammer is a hammer doesn’t mean it can’t be used to commit murder. dumbest argument ever, right up there with “the only way to stop a bad guy with a gun is a good guy with a gun.”




  • GreenKnight23@lemmy.world to Science Memes@mander.xyz · Nobel Prize 2024 · 31 up / 21 down · 19 days ago

    > I don’t get the ai hate sentiment.

    I don’t get what’s not to get. AI is a heap of bullshit that’s piled on top of a decade of cryptobros.

    it’s not even impressive enough to make a positive world impact in the 2-3 years it’s been publicly available.

    shit is going to crash and burn like web3.

    I’ve seen people put full-on contracts that are behind NDAs through an AI trained on public content.

    I’ve seen developers use cuck-pilot for a year and “never” code again… until the PR is sent back over and over and over again and they have to rewrite it.

    I’ve seen the AI news about new chemicals, new science, new fill-in-the-blank, and it all be PR bullshit.

    so yeah, I don’t believe AI is our savior. can it make some convincing porn? sure. can it do my taxes? probably not.




  • pipeline schedules. once a month I clone the remote repo into a local branch, and push it back to my repo with an automatic merge request assigned to me. review & merge kicks off the build pipeline.
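    a minimal sketch of that kind of scheduled sync job, assuming GitLab push options handle the merge request; the repo URLs, branch names, and username are placeholders, not the actual setup:

    ```python
    # sketch of a monthly "mirror upstream and open an MR" scheduled job.
    # UPSTREAM_URL / ORIGIN_URL / branch name / username are assumptions.
    import subprocess
    from datetime import date

    UPSTREAM_URL = "https://gitlab.example.com/vendor/project.git"  # assumption
    ORIGIN_URL = "git@gitlab.example.com:me/project.git"            # assumption
    BRANCH = f"upstream-sync-{date.today():%Y-%m}"

    def run(*cmd, cwd=None):
        subprocess.run(cmd, cwd=cwd, check=True)

    # clone the remote repo into a local working copy
    run("git", "clone", UPSTREAM_URL, "project")
    # put it on a monthly sync branch and push it back to my repo,
    # letting GitLab push options open the merge request and assign it
    run("git", "checkout", "-b", BRANCH, cwd="project")
    run("git", "push", ORIGIN_URL, BRANCH,
        "-o", "merge_request.create",
        "-o", "merge_request.target=main",
        "-o", "merge_request.assign=me",    # assumption: GitLab username "me"
        cwd="project")
    ```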

    I also use pipeline schedules to do my own DDNS to Route 53 using Terraform. it runs once every 15 minutes.
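    that setup is Terraform; purely as an illustration of the same idea, here is a rough boto3 UPSERT against Route 53, where the hosted zone ID and record name are placeholders:

    ```python
    # rough DDNS-to-Route-53 sketch (the real setup uses Terraform);
    # HOSTED_ZONE_ID and RECORD_NAME are placeholders, not real values.
    import urllib.request
    import boto3

    HOSTED_ZONE_ID = "Z0123456789EXAMPLE"  # assumption
    RECORD_NAME = "home.example.com."      # assumption

    # look up the current public IP
    ip = urllib.request.urlopen("https://checkip.amazonaws.com").read().decode().strip()

    # UPSERT the A record so it always points at the current IP
    boto3.client("route53").change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={
            "Changes": [{
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": RECORD_NAME,
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": ip}],
                },
            }]
        },
    )
    ```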

    also, once a week I cache about 50 container images locally that I build my own images from.
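    a toy version of that weekly cache job, with a made-up registry host and image list:

    ```python
    # weekly image-cache sketch: pull upstream images and push them into a
    # local registry; LOCAL_REGISTRY and IMAGES are placeholders.
    import subprocess

    LOCAL_REGISTRY = "registry.local:5000"  # assumption
    IMAGES = [
        "docker.io/library/python:3.12-slim",
        "docker.io/library/alpine:3.20",
        # ... roughly 50 entries in the real list
    ]

    for image in IMAGES:
        cached = f"{LOCAL_REGISTRY}/{image.split('/', 1)[1]}"
        subprocess.run(["docker", "pull", image], check=True)
        subprocess.run(["docker", "tag", image, cached], check=True)
        subprocess.run(["docker", "push", cached], check=True)
    ```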