I had a short look at the text of the bill. It’s not as immediately worrying as I feared, but still pretty bad.
https://leginfo.legislature.ca.gov/faces/billTextClient.xhtml?bill_id=202320240SB1047
Here’s the thing: How would you react if this bill required all texts that could help someone “hack” to be removed from libraries? Outrageous, right? What if we only removed cybersecurity texts from libraries if they were written with the help of AI? Does it now become OK?
What if the bill “just” sought to prevent such texts from being written? Still outrageous? Well, that is what this bill is trying to do.
Not everything is a slippery slope. In this case, the scenario in your comment where learning about cybersecurity is even slightly hindered by this law doesn’t sound particularly convincing.
The bill is supposed to prevent speech. It is the intended effect. I’m not saying it’s a slippery slope.
I chose to focus on cybersecurity, because that is where it is obviously bad. In other areas, you can reasonably argue that some things should be classified for “national security”. If you prevent open discussion of security problems, you just make everything worse.
Yeah, a bunch of speech is restricted. Restricting speech isn’t in itself bad; it’s generally only a problem when it’s used to oppress political opposition. But copyright, hate speech, death threats, doxxing, personal data, defense-related confidentiality… Those are all kinds of speech that are strictly regulated when they’re not outright banned, for the express purpose of guaranteeing safety, and it’s generally accepted.
In this case it’s not even restricting the content of speech. Only a very special kind of medium, one that consists of generating speech through an unreliably understood method of rock carving, is restricted, and only when applied to what is argued to be a sensitive subject. The content of the speech isn’t even in question. You can’t carve a cybersecurity text into the flesh of an unwilling human either, or even paint it on someone’s property, but you can generate exactly the same speech with pen and paper and it’s a-okay.
If your point isn’t that the unrelated scenarios in your original comment are somehow the next step, I still don’t see how that’s bad.
That’s certainly not the default opinion. Why do you think freedom of expression is a thing?
Oh yeah? And which restriction of free speech illustrating my previous comment is even remotely controversial, do you think?
I’ve actually stated explicitly before why I believe it is a thing: to protect political dissent from being criminalized. Why do you think it is a thing?
All of these regularly cause controversy.
That’s not quite what I meant. Take the US 2nd Amendment, the right to bear arms. It is fairly unique. But freedom of expression is ubiquitous as a guaranteed right (on paper, obviously). Why are ideas from the 1st Amendment ubiquitous 200 years later, but not from the 2nd?
My answer is: because you cannot have a prosperous, powerful nation without freedom of information. For one, you can’t have high tech without an educated citizenry sharing knowledge. I don’t know of any country that considers freedom of expression limited to political speech. Political speech is one of the types of speech most likely to invite persecution. Even in the more liberal countries, calls to overthrow the government or secede tend to be frowned upon.
Do they really? Carving into people’s flesh causes controversy? The US sure is wild.
Even if some of my examples do cause controversy in the US sometimes (I do realize you lot tend to idealize free speech as an absolute rather than as a freedom that - although very important - is always weighed against other very important rights like security and bodily autonomy), they do stand as examples of limits to free speech that are accepted by the large majority. Enough that those controversies don’t generally end in blanket decriminalization of mutilation and vandalism. So I still dispute the claim that my stance is not “the default opinion”. It may rarely be formulated this way, but I posit that the absolutism you defend is, in actuality, the rarer opinion of the two.
The restriction of free speech that your initial comment develops upon is a fringe consequence of the law in question, and it doesn’t even restrict the information from circulating, only the tools you can use to write it. My point is that this is not at all uncommon in law, even in American law, and that it does not, in fact, prevent information from circulating.
The fact that you fail to describe why circulation of information is important for a healthy society makes your answer really vague. The single example you give doesn’t help: if scientific and tech-related information were free to circulate, scientists wouldn’t use Sci-Hub. And if it were the main idea, universities would be free in the US (the country that values free speech the most) rather than in European countries that have a much more relative viewpoint on it. The well-known “everything is political” is the reason why you don’t restrict free speech to explicitly political statements. How would you draw the line in law? It’s easier and more efficient to make the right general, and then create exceptions on a case-by-case basis (confidential information, hate speech, calls for violence, threats of murder…).
Should confidential information be allowed to circulate from your ex-President to Putin, then?
You are apparently mistaking me for someone else.
Seems a reasonable request. You are creating a tool with the potential to be used as a weapon; you must be able to guarantee it won’t be used as such. Power is nothing without control.
How is that reasonable? Almost anything could be potentially used as a weapon, or to aid in crime.
This is for models that cost 100 million dollars to train. Not all things are the same, and most things that can do serious damage to large chunks of the population are regulated. Cars are regulated, firearms are regulated, access to drugs is regulated. Even internet access is tightly controlled. I don’t see how you can say AI should not be regulated.
This is an appeal to authority; the legitimacy, correctness, and “goodness” of the items you’ve listed are in constant flux and under heavy debate.
Firearms and drugs in particular are a powder keg: US politics likes the former (a lot), and Lemmy is attracted to the latter.
AI is already regulated. Just because something is new (to the public) does not mean that laws don’t apply to it. We don’t need regulation for the sake of regulation.
There’s a lot of AI regulation that may become necessary one day. For example, maybe we should have a right to an AI assistant, like there is a right to legal counsel today. Should we be thinking about the minimum compute to assign to public defense AIs?
“This is for models that cost 100 million dollars to train.” Or take a certain amount of compute. Right now, this covers no models. Between progress and inflation, it will eventually cover all models. At some point between now and then, the makers of such laws will be cursed as AI-illiterate fools, like we curse computer-illiterate boomers today.
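To make that concrete, here’s a toy back-of-the-envelope in Python. The 1e26-FLOP line matches the kind of fixed threshold such bills use, but today’s “typical” compute and the growth rate are made-up assumptions, not forecasts:

```python
import math

# Toy back-of-the-envelope: how long until a fixed compute threshold
# covers an "ordinary" model? All numbers below are illustrative
# assumptions, not forecasts.
THRESHOLD_FLOPS = 1e26      # fixed line of the kind written into such bills
TYPICAL_FLOPS_TODAY = 1e24  # assumed compute of an ordinary model today
GROWTH_PER_YEAR = 4.0       # assumed yearly growth of typical training compute

def years_until_covered(flops_today: float) -> int:
    """Years until a model at this scale crosses the fixed threshold,
    if typical training compute keeps growing at the assumed rate."""
    if flops_today >= THRESHOLD_FLOPS:
        return 0
    return math.ceil(math.log(THRESHOLD_FLOPS / flops_today, GROWTH_PER_YEAR))

print(years_until_covered(TYPICAL_FLOPS_TODAY))  # -> 4 under these toy numbers
```

Pick different assumptions and only the date moves, not the conclusion: a fixed line plus growing compute means the threshold eventually sits below everything.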
Think about this example you gave: “Cars are regulated.”
We regulate cars, and implicitly the software in them. We do not regulate software in the abstract. We don’t monitor mechanics or engineers. People are encouraged to learn and to become educated.
Of course you regulate software in the abstract. Have you ever heard of the regulations concerning onboard navigation software in planes? They’re really strict, and the mechanics and engineers who work on that software are monitored.
Better example: do you think the people who work on the targeting algorithms in missiles are allowed to chat about the specifics of their algorithms with ChatGPT? Because they aren’t.
I guess let’s deregulate guns then. Oh wait.
This bill targets AI systems like the ChatGPT series. These AIs produce text, images, audio, video, etc. IOW, they are dangerous in the same way that a library is dangerous. A library may contain instructions for making bombs, nerve gas, and so on. In the future, there will likely be AIs that can also give such instructions.
Controlling information or access to education isn’t exactly a good guy move. It’s not compatible with a free or industrialized country. Maybe some things need to be secret for national security, but that’s not really what this bill is about.
Yep, nothing about censorship is cool. But for rampaging AGI systems, a button to kill them would be nice. However, it leads into a game, and a paradox of how this could ever be achieved.
I don’t see much harm in a “kill switch”, so if it makes people happy… But it is sci-fi silliness. AI is software. Malfunctioning software can be dangerous if it controls, say, heavy machinery. But we don’t have kill switches for software. We have kill switches for heavy machinery, because that is what needs to be turned off to stop harm.
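For what that usually looks like in practice, here’s a minimal dead-man’s-switch sketch; the class name, timeout, and heartbeat scheme are made-up illustrations of the common watchdog pattern, where the machinery de-energizes itself when the controlling software stops checking in:

```python
import time

# Illustrative sketch, not a real safety system: the machinery stays
# energized only while the control software keeps sending fresh
# heartbeats. If the software hangs or is killed, the machine's own
# timer notices and shuts it off -- the "kill switch" lives on the
# machine, not in the software.

HEARTBEAT_TIMEOUT_S = 0.5  # assumed watchdog window

class MachineWatchdog:
    def __init__(self) -> None:
        self.last_beat = time.monotonic()
        self.energized = True

    def heartbeat(self) -> None:
        """Called by the control software while it is healthy."""
        self.last_beat = time.monotonic()

    def tick(self) -> None:
        """Called by the machinery's own independent timer."""
        if time.monotonic() - self.last_beat > HEARTBEAT_TIMEOUT_S:
            self.energized = False  # fail safe: de-energize and stay off

watchdog = MachineWatchdog()
watchdog.heartbeat()       # software checks in...
time.sleep(1.0)            # ...then hangs
watchdog.tick()            # machinery's timer fires
print(watchdog.energized)  # False: the machine stopped itself
```

The point being: the fail-safe sits on the machinery side, so the software never has to be trusted to shut itself down.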
I am pretty sure no one has ever built a computer that can’t be shut off. Somehow someway.