Hi, I’m Eric and I work at a big chip company making chips and such! I do math for a job, but it’s cold hard stochastic optimization that makes people who know names like Tychonoff and Sylow weep.

My pfp is Hank Azaria in Heat, but you already knew that.

  • 8 Posts
  • 149 Comments
Joined 1 year ago
Cake day: January 22nd, 2024




  • Also, man, why do I click on these links and read the LWers’ comments? It’s always insufferable people being like, “woe is us, to be cursed with the forbidden knowledge of AI doom, we are all such deep thinkers, the layperson simply could not understand the danger of ai” like bruv it aint that deep, i think i can summarize it as follows:

    hits blunt “bruv, imagine if you were a porkrind, you wouldn’t be able to tell why a person is eating a hotdog, ai will be to us what we are to a porkchop, and to get more hotdogs humans will find a way to turn the sun into a meat casing, this is the principle of intestinal convergence”

    Literally saw another comment where one of them accused the other of being a “super intelligence denier” (i.e., heretic) for suggesting maybe we should wait till the robot swarms come over the hills before we declare it’s game over.








  • Came across this fuckin disaster on Ye Olde LinkedIn by ‘Caroline Jeanmaire at AI Governance at The Future Society’

    "I’ve just reviewed what might be the most important AI forecast of the year: a meticulously researched scenario mapping potential paths to AGI by 2027. Authored by Daniel Kokotajlo (>lel) (OpenAI whistleblower), Scott Alexander (>LMAOU), Thomas Larsen, Eli Lifland, and Romeo Dean, it’s a quantitatively rigorous analysis beginning with the emergence of true AI agents in mid-2025.

    What makes this forecast exceptionally credible:

    1. One author (Daniel) correctly predicted chain-of-thought reasoning, inference scaling, and sweeping chip export controls one year BEFORE ChatGPT existed

    2. The report received feedback from ~100 AI experts (myself included) and earned endorsement from Yoshua Bengio

    3. It makes concrete, testable predictions rather than vague statements that cannot be evaluated

    The scenario details a transformation potentially more significant than the Industrial Revolution, compressed into just a few years. It maps specific pathways and decision points to help us make better choices when the time comes.

    As the authors state: “It would be a grave mistake to dismiss this as mere hype.”

    For anyone working in AI policy, technical safety, corporate governance, or national security: I consider this essential reading for understanding how your current work connects to potentially transformative near-term developments."

    Bruh what is the fuckin y axis on this bad boi?? christ on a bike, someone pull up that picture of the 10 trillion pound baby. Let’s at least take a look inside for some of their deep quantitative reasoning…

    …hmmmm…

    O_O

    The answer may surprise you!







  • Good news, everyone! Dan has released his latest AI safety paper; we are one step closer to alignment. Let’s take a look inside:

    Wow, a consistent set of values, you say! Quite a strong claim. Let’s take a peek at their rigorous, unbiased experimental setup:

    … ok, this seems like you might be putting your finger on the scales to get a desired outcome. But I’m sure at least your numerical results are stro-

    Even after all this shit, all you could eke out was a measly 60%? C’mon, you gotta try harder than that to prove the utility maximizer demon exists. I would say our boi is sinking to new levels of crankery to push his agenda, but he did release that bot last year that he said was capable of superhuman prediction, so this is really just par for the course at this point.

    The most discerning minds / critical thinkers predictably reeling in terror at another banger drop from Elon’s AI safety toad.

    *** terrifying personal note: I recently found out that Dan was my wife’s roommate’s roommate’s roommate back in college. By the transitive property, I am Dan’s roommate, which explains why he’s living rent free in my head




  • OAI announced their shiny new toy: DeepResearch (still waiting on DeeperSeek). A bot built on o3 which can crawl the web and synthesize information into expert-level reports!

    Noam is coming after you, @dgerard, but don’t worry, he thinks it’s fine. I’m sure his new bot is a reliable replacement for a decentralized repository of all human knowledge freely accessible to all. I’m sure this new system doesn’t fail in any embarrassing wa-

    After posting multiple examples of the model failing to understand which player is on which team (if only this information was on some sort of Internet Encyclopedia, alas), Professional AI bully Colin continues: “I assume that in order to cure all disease, it will be necessary to discover and keep track of previously unknown facts about the world. The discovery of these facts might be a little bit analogous to NBA players getting traded from team to team, or aging into new roles. OpenAI’s “Deep Research” agent thinks that Harrison Barnes (who is no longer on the Sacramento Kings) is the Kings’ best choice to guard LeBron James because he guarded LeBron in the finals ten years ago. It’s not well-equipped to reason about a changing world… But if it can’t even deal with these super well-behaved easy facts when they change over time, you want me to believe that it can keep track of the state of the system of facts which makes up our collective knowledge about how to cure all diseases?”

    xcancel link if anyone wants to see some more glorious failure cases:

    https://xcancel.com/colin_fraser/status/1886506507157585978#m