• 0 Posts
  • 21 Comments
Joined 1 year ago
Cake day: June 29th, 2023

  • Odd. I replied to this comment, but now my reply is gone. Gonna try again and type up as much as I can remember.

    Regardless, an algorithm expecting binary answers will obviously not take para- and extralinguistic cues into account. That extra 50 ms hesitation, the downwards glance and the voice cracking when answering “no” to “has he ever tried to strangle you before?” have a reasonable chance of getting picked up by a human, but once reduced to something the algorithm can handle, it’s just a simple “no”. Humans are really good at picking up on such cues, even if they aren’t consciously aware that they’re doing it, but if said humans are preoccupied with staring into a computer screen in order to input the answers to the questionnaire, then there’s a much higher chance that they’ll miss them too. I honestly only see negatives here.
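
    To put it in code terms: the questionnaire’s input type simply has nowhere to put those cues. A rough sketch (the type and field names are mine, purely for illustration):

    // What a human can pick up on vs. what the questionnaire stores.
    interface ObservedAnswer {
        answer: boolean       // the literal “no”
        hesitationMs: number  // that extra 50 ms pause
        gazeDropped: boolean  // the downwards glance
        voiceCracked: boolean // the voice cracking
    }

    // Coercing it into something the algorithm can handle throws all of that away:
    function toQuestionnaireInput(obs: ObservedAnswer): boolean {
        return obs.answer  // every other signal is lost right here
    }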

    It’s helpful to have an algorithm that makes you ask the right questions […]

    Arguably a piece of paper could solve that problem.

    Seriously. 55 victims out of the 98 homicide cases sampled were deemed at negligible or low risk. If a non-algorithm-assisted department presented those numbers, I’d expect them to be looking for new jobs real fast.



  • so it’s probably just some points assigned for the answers and maybe some simple arithmetic.
    

    Why yes, that’s all that machine learning is, a bunch of statistics :)

    I know, but that’s not what I meant. I mean literally something as simple and mundane as assigning points per answer and evaluating the final score:

    // Pseudo code
    risk = 0
    if (Q1 == true) {
        risk += 20
    }
    if (Q2 == true) {
        risk += 10
    }
    // etc...
    // Maybe throw in a bit of
    if (Q28 == true) {
        if (Q22 == true && Q23 == true) {
            risk *= 1.5
        } else {
            risk += 10
        }
    }
    
    // And finally, evaluate the risk:
    if (risk < 10) {
        return "negligible"
    } else if (risk >= 10 && risk < 40) {
        return "low risk"
    }
    // etc... You get the picture.
    

    And yes, I know I can just write if (Q1) {, but I wanted to make it a bit more accessible for non-programmers.

    The article gives absolutely no reason for us to assume it’s anything more than that, and I apparently missed the part of the article that mentioned the system has been in use since 2007. We had machine learning back then too, but looking at the project description here: https://eucpn.org/sites/default/files/document/files/Buena practica VIOGEN_0.pdf it looks more like they analysed a bunch of cases (2,159) and came up with the 35 questions and a scoring system not unlike what I described above.

    Edit: I managed to find this, which has apparently been taken down since (but thanks to archive.org it’s still available): https://web.archive.org/web/20240227072357/https://eticasfoundation.org/gender/the-external-audit-of-the-viogen-system/

    VioGén’s algorithm uses classical statistical models to perform a risk evaluation based on the weighted sum of all the responses according to pre-set weights for each variable. It is designed as a recommendation system but, even though the police officers are able to increase the automatically assigned risk score, they maintain it in 95% of the cases.

    … which incidentally matches what the article says (that police maintain the VioGén risk score in 95% of cases).
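
    To make that concrete: “a weighted sum of all the responses according to pre-set weights” is about as simple as scoring gets. A rough sketch of what that probably looks like (the weights and thresholds here are invented; the real values haven’t been published):

    // Weighted-sum scoring as the audit describes it. The weights and
    // thresholds below are made up for illustration; the real VioGén
    // values aren’t public.
    const WEIGHTS = [20, 10, 5 /* … one pre-set weight per question, 35 in total */]

    function viogenStyleRisk(answers: boolean[]): string {
        let score = 0
        for (let i = 0; i < answers.length; i++) {
            if (answers[i]) {
                score += WEIGHTS[i]  // weighted sum of the responses
            }
        }
        // Collapse the single number into the five risk labels:
        if (score < 10) return "negligible"
        if (score < 40) return "low"
        if (score < 70) return "medium"
        if (score < 90) return "high"
        return "extreme"
    }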


  • The crucial point is: 8% of the decisions turn out to be wrong or misjudged.

    The article says:

    Yet roughly 8 percent of women who the algorithm found to be at negligible risk and 14 percent at low risk have reported being harmed again, according to Spain’s Interior Ministry, which oversees the system.

    Granted, neither “negligible” nor “low risk” means “no risk”, but I think 8% and 14% are far too high for those categories.

    Furthermore, there’s this crucial bit:

    At least 247 women have also been killed by their current or former partner since 2007 after being assessed by VioGén, according to government figures. While that is a tiny fraction of gender violence cases, it points to the algorithm’s flaws. The New York Times found that in a judicial review of 98 of those homicides, 55 of the slain women were scored by VioGén as negligible or low risk for repeat abuse.

    So of the 98 murders they reviewed, the algorithm had put 55 (more than half) at negligible or low risk for repeat abuse. That’s a fucking coin flip!



  • The article mentions that one woman (Stefany González Escarraman) went for a restraining order the day after the system deemed her at negligible risk, and the judge denied it, citing the VioGén score.

    One was Stefany González Escarraman, a 26-year-old living near Seville. In 2016, she went to the police after her husband punched her in the face and choked her. He threw objects at her, including a kitchen ladle that hit their 3-year-old child. After police interviewed Ms. Escarraman for about five hours, VioGén determined she had a negligible risk of being abused again.

    The next day, Ms. Escarraman, who had a swollen black eye, went to court for a restraining order against her husband. Judges can serve as a check on the VioGén system, with the ability to intervene in cases and provide protective measures. In Ms. Escarraman’s case, the judge denied a restraining order, citing VioGén’s risk score and her husband’s lack of criminal history.

    About a month later, Ms. Escarraman was stabbed by her husband multiple times in the heart in front of their children.

    It also says:

    Spanish police are trained to overrule VioGén’s recommendations depending on the evidence, but accept the risk scores about 95 percent of the time, officials said. Judges can also use the results when considering requests for restraining orders and other protective measures.

    You could argue that the problem isn’t so much the algorithm itself as the level of reliance upon it. The algorithm isn’t unproblematic, though. The fact that it just spits out a simple score (“negligible”, “low”, “medium”, “high”, “extreme”) is, IMO, an indicator that someone’s trying to conflate far too many factors into a single dimension. I have a really hard time believing that anyone knowledgeable in criminal psychology and/or domestic abuse would agree that 35 yes-or-no questions are anywhere near sufficient to evaluate the risk of repeated abuse. (I know nothing about domestic abuse or criminal psychology, so I could be completely wrong.)

    Apart from that, I also find this highly problematic:

    [The] victims interviewed by The Times rarely knew about the role the algorithm played in their cases. The government also has not released comprehensive data about the system’s effectiveness and has refused to make the algorithm available for outside audit.




  • I get notifications for calls (obviously), SMS messages (of which I receive an average of 1 per month) and IMs from my immediate family. Everything else I check up on when I actually feel like I have the time for it. This has dramatically reduced the number of emails and other things I forget to reply to/act on, because I see them when I want to and when I have the time to actually deal with them; not when some random notification pops up when I’m doing something else, gets half-noticed and swiped away because I’ll deal with it later.






  • I wouldn’t trust an LLM to produce any kind of programming answer. If you’re skilled enough to know it’s wrong, then you should do it yourself; if you’re not, then you shouldn’t be using it.

    I’ve seen plenty of examples of specific, clear, simple prompts that an LLM absolutely butchered by using libraries, functions, classes, and APIs that don’t exist. Likewise with code analysis where it invented bugs that literally did not exist in the actual code.

    LLMs don’t have a holistic understanding of anything; they’re your non-programming but over-confident friend trying to convey the results of a Google search on low-level memory management in C++.



  • Elysium is definitely the slowest, but time saved carries over: if you’re fast in Tartarus, you’ll have more time in Asphodel (which is relatively short in terms of number of rooms), and consequently more time in Elysium.

    And as someone else mentioned, it saves time to be aggressive too, so never be idle, and try to anticipate where enemies spawn. Enemies without a shield (or once the shield is gone) can be stun-locked, so just hammer away at them.

    Maybe I’m forgetting how much time and practice I’ve put into going faster, but if I can do it, everyone can, that’s for certain.

    Drawing inspiration from speedrun builds is great, so maybe check some of those (linked earlier) out. There are builds for different play styles, so I’m sure you’ll find something that works for you. :)


  • I’m not too good at high-heat runs, but I usually enable Tight Deadline levels 1 and 2 first, plus Extreme Measures 1 and 2, which is already 6 heat without much of a difference in difficulty. Extreme Measures 3 is a bit wild, IMO, but I guess it’s just a matter of practice, and then that’s 9 heat. Middle Management isn’t too bad either, and that’s 2 extra heat.

    Other than that, I usually go for Malphon (fists/gloves) with the Aspect of Demeter and a build with Ares’ Doom effects on attack and cast, plus Athena’s dash and special, which usually gives me access to their duo boon, Merciful End, which triggers the Doom effects when I dash into an opponent or use my special. It’s insanely powerful.

    The Hades community on speedrun.com has some good guides and forum posts on builds, and videos of speedruns obviously; there’s a lot to pick up from those as well: https://www.speedrun.com/hades