• Carrolade@lemmy.world
    6 months ago

    I largely agree: current LLMs add no capabilities to humanity that it did not already possess. The point of the regulation, though, is to encourage a certain degree of caution in future development.

    Personally, I do think it’s a little overly broad. Google search can aid in a cyber security attack too. The kill switch idea is also a little silly, and largely a waste of time dreamed up by watching too many Terminator and Matrix movies. We might eventually reach a point where that becomes a prudent idea, but we’re still quite far away.

    • conciselyverbose@sh.itjust.works
      6 months ago

      We’re nowhere near anything that has anything in common with human-level intelligence, or that poses any threat.

      The only possible basis for supporting legislation like this is either a complete absence of understanding of what the technology is, combined with treating Hollywood as reality (the layperson and probably most of the legislators involved), or an aggressive attempt at market control through regulatory capture by big tech. If you understand where we are and what paths lie ahead, it’s very clear that this can only do harm.