• 0 Posts
  • 4 Comments
Joined 1 year ago
Cake day: June 12th, 2023

  • They’re not files; it’s just leaking other people’s conversations through a history bug: accidentally putting person A’s “can you help me write my research paper/IT ticket/script” conversation into person B’s chat history.

    Super shitty, but not an uncommon kind of bug. It’s often either a nasty caching issue or mixed-up identities for people sharing IPs or something similar (see the sketch below).

    It’s bad, but it’s “some programmer made an understandable mistake” bad, not “evil company steals private information without consent and sends it to others for profit” bad.
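
    For the curious, here’s a minimal sketch of that kind of caching mistake (all names here are hypothetical, just illustrating the bug class, not anyone’s actual code):

    ```python
    # A response cache mistakenly keyed on client IP instead of the user.
    chat_history_cache: dict[str, list[str]] = {}

    def load_history_from_db(user_id: str) -> list[str]:
        # Stand-in for a real database lookup.
        return [f"chat history for {user_id}"]

    def get_history_buggy(client_ip: str, user_id: str) -> list[str]:
        # BUG: the cache key is the (possibly shared) IP, not the user.
        if client_ip not in chat_history_cache:
            chat_history_cache[client_ip] = load_history_from_db(user_id)
        return chat_history_cache[client_ip]  # may be someone else's chats!

    def get_history_fixed(client_ip: str, user_id: str) -> list[str]:
        # Fix: key the cache on the user's identity, never a shared address.
        if user_id not in chat_history_cache:
            chat_history_cache[user_id] = load_history_from_db(user_id)
        return chat_history_cache[user_id]

    # Two users behind the same NAT share an IP:
    print(get_history_buggy("203.0.113.7", "alice"))  # Alice's history
    print(get_history_buggy("203.0.113.7", "bob"))    # Bob gets Alice's chats
    ```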


  • TechLich@lemmy.world to Memes@lemmy.ml · Don’t ask
    edited · 1 year ago

    “aborigines” is not a great word to use these days. It’s generally seen as pretty offensive to Indigenous Australians, as it’s a bit dehumanising and comes from colonisers who treated people like animals.

    Better to go with “First Nations people”, “Aboriginal and Torres Strait Islander people” or “Indigenous Australians.”

    But yes, they’ve been treated (and in many cases continue to be treated) pretty horribly.


  • I think the idea is that there are potential alignment issues in LLMs because it’s not clear which concepts map to which activations. That makes it difficult to see what they’re really “thinking” about when they generate text, e.g. whether they’re being misleading or are incorrectly associating concepts that shouldn’t be connected.

    The idea here is to use some mechanistic interpretability stuff to see what text activates which neurons in an LLM, then crowdsource the meanings behind that and see whether it’s something you could use to look up context from an AI. Sort of trying to make a “Wikipedia of AI mind reading.”
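
    The “what text activates which neurons” step is roughly this kind of thing, sketched here with PyTorch forward hooks on GPT-2 (model and layer choice are just illustrative assumptions, not whatever this project actually uses):

    ```python
    import torch
    from transformers import GPT2Model, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2Model.from_pretrained("gpt2")
    model.eval()

    captured = {}

    def save_activations(module, inputs, output):
        # Stash this layer's MLP output: shape (batch, seq_len, hidden_dim)
        captured["mlp"] = output.detach()

    # Hook a single MLP block; real interpretability work sweeps every layer.
    handle = model.h[5].mlp.register_forward_hook(save_activations)

    inputs = tokenizer("The Eiffel Tower is in Paris", return_tensors="pt")
    with torch.no_grad():
        model(**inputs)
    handle.remove()

    acts = captured["mlp"][0]             # (seq_len, hidden_dim)
    top = acts.abs().mean(dim=0).topk(5)  # neurons most active on this text
    print(top.indices.tolist())           # candidates for humans to label
    ```

    The crowdsourcing part would then be showing people which texts fire each neuron and collecting labels for what those neurons seem to mean.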

    Dunno how practical or effective that approach is, but it’s an interesting idea.