Depends on what you’re using.
With local models you use something called a “negative prompt” to exclude anything that you don’t want in the image.
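For example, with Stable Diffusion through the diffusers library, it’s just an extra parameter. A minimal sketch (the model ID and prompts are placeholders, not recommendations):

```python
from diffusers import StableDiffusionPipeline
import torch

# Load a local model (placeholder model ID).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a watercolor landscape at sunset",
    negative_prompt="text, watermark, people, blurry",  # everything you DON'T want
).images[0]
image.save("landscape.png")
```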
If you really want this to work, you would have to train or fine-tune a model by feeding it a bunch of images of that person’s handwriting.
If you’re just asking ChatGPT to do this for you, then you’re doing it wrong.
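For illustration, the usual way to do that with local image models is to train a LoRA on the handwriting samples (e.g. with diffusers’ DreamBooth/LoRA training scripts) and load it at inference time. A rough sketch, where the LoRA path and trigger token are entirely hypothetical:

```python
from diffusers import StableDiffusionPipeline
import torch

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Hypothetical LoRA trained on a few dozen scans of the person's handwriting.
pipe.load_lora_weights("./handwriting-lora")

# "sks" is a placeholder trigger token chosen during training.
image = pipe("a handwritten note in sks style saying hello").images[0]
image.save("note.png")
```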
Yeah, until the cops pull you over and take your cash under civil asset forfeiture because it’s “suspicious that you have so much cash on hand.”
The features you’d miss out on are mobile check deposit and app notifications (usually there are a few you want enabled that are only available through the app).
Good luck when banking apps start doing this.
I just want to be able to set alarms with their calendar app (which currently only sends notifications).
OK, but the most important part of that research paper is published in the GitHub repository, which explains how to provide the audio and text data needed to recreate an STT model the same way they did.
See the “Approach” section of the GitHub repository: https://github.com/openai/whisper?tab=readme-ov-file#approach
And the “Training Data” section of their model card: https://github.com/openai/whisper/blob/main/model-card.md#training-data
With this, you don’t really need the paper hosted on arXiv; you have enough information to train or modify the model.
There are guides on how to fine-tune the model yourself: https://huggingface.co/blog/fine-tune-whisper
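That guide boils down to a fairly standard transformers training loop. A condensed sketch of it below; the dataset, split, and hyperparameters are placeholders, and the full version (with evaluation and more careful preprocessing) is in the linked post:

```python
from datasets import load_dataset, Audio
from transformers import (WhisperProcessor, WhisperForConditionalGeneration,
                          Seq2SeqTrainingArguments, Seq2SeqTrainer)

processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="en", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# Placeholder dataset: any audio + transcript pairs resampled to 16 kHz will do.
ds = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="train[:1%]")
ds = ds.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    batch["input_features"] = processor.feature_extractor(
        audio["array"], sampling_rate=audio["sampling_rate"]).input_features[0]
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

ds = ds.map(prepare, remove_columns=ds.column_names)

class Collator:
    """Pads audio features and label IDs into uniform batches."""
    def __call__(self, features):
        batch = processor.feature_extractor.pad(
            [{"input_features": f["input_features"]} for f in features],
            return_tensors="pt")
        labels = processor.tokenizer.pad(
            [{"input_ids": f["labels"]} for f in features], return_tensors="pt")
        # Mask padding with -100 so the loss ignores it.
        batch["labels"] = labels["input_ids"].masked_fill(
            labels["attention_mask"].ne(1), -100)
        return batch

trainer = Seq2SeqTrainer(
    model=model,
    args=Seq2SeqTrainingArguments(output_dir="./whisper-finetuned",
                                  per_device_train_batch_size=8, max_steps=1000),
    train_dataset=ds,
    data_collator=Collator(),
)
trainer.train()
```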
Which, from what I understand of the OSAID link, is exactly what they are asking for. The ability to retrain or fine-tune a model fits this definition very well:
The preferred form of making modifications to a machine-learning system is:
- Data information […]
- Code […]
- Weights […]
All three of those have been provided.
I don’t understand. What’s missing from the code, model, and weights provided to make this “open source” by the definition in your first link? It seems to meet all of those requirements.
As for the OSAID, the exact training dataset is not required; per your quote, they just need to provide enough information that someone else could train the model using a “similar dataset.”
I did a quick check on the license for Whisper:
Whisper’s code and model weights are released under the MIT License. See LICENSE for further details.
So that definitely meets the Open Source Definition from your first link.
And it looks like it also meets the definition of open source per your second link.
Additional WER (word error rate) and CER (character error rate) metrics for the other models and datasets can be found in Appendices D.1, D.2, and D.4 of the paper, along with the BLEU (Bilingual Evaluation Understudy) scores for translation in Appendix D.3.
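If you want to sanity-check numbers like those yourself, WER is easy to compute; a tiny sketch using the jiwer library (both strings are made up):

```python
import jiwer

reference  = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

# WER = (substitutions + deletions + insertions) / words in reference
print(jiwer.wer(reference, hypothesis))  # 2 substitutions / 9 words ≈ 0.22
print(jiwer.cer(reference, hypothesis))  # same idea at the character level
```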
The STT (speech-to-text) model they created, Whisper, is open source, as are a few others:
I initially think this same thing every time I see someone mention MTG on here, glad I’m not the only one.
I don’t think this is specifically an “AI” problem as much as it’s a privacy issue with the way companies are buying and selling our info for targeted advertising. These models are definitely enabling them to do more with the data that they have as well as to collect more information from us in new ways.
Yeah, the other thing I could see happening is a tactic similar to one scammers already use: mules who pick up mail from various Airbnbs throughout whatever country. This would still limit most bot operations… unless some organization specializes in it and offers an account-creation service to anyone willing to pay.
Also, how many accounts would you allow per address, and how long would you lock up an address before it could be used again (given that people do move from time to time)?
edit: typo.
That’s a good point. I didn’t know about USPS Form 1583 for virtual mailboxes… although that’s a U.S.-specific thing, so finding a similar service in a country that doesn’t care so much might be the way around it.
Yep, exactly this. It might deter some small-time bot creators, but it won’t stop larger operations and may even help them seem more legitimate.
If anything, my favorite idea comes from this xkcd:
Easy way to get around that with “virtual” addresses: https://ipostal1.com/virtual-address.php
Just pay $10 for every account you want to create… at that point you may as well go with the solution of charging everyone $10 to create an account. At least that way the instance owner gets supported, and it would have the same effect.
This article doesn’t go into it, but Louis Rossmann pointed out that Intel’s profit margin has tanked recently.
https://odysee.com/how-intel’s-oxidation-scandal-screws
- End of 2021: 25.1% for the year
- End of 2022: 12.7%
- End of 2023: 3.1%
Even ignoring the downward trend, at a margin like 3%, a small swing in the market, a small mistake in inventory ordering, or having to replace a bunch of CPUs that oxidized during manufacturing could push them over the edge from making money into losing it.
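Back-of-the-envelope, with an illustrative revenue figure, just to show how thin that buffer is:

```python
# Rough, illustrative numbers; only the margin comes from the figures above.
revenue = 54e9        # Intel's 2023 revenue was roughly $54B
margin = 0.031        # 3.1% full-year profit margin
profit = revenue * margin
print(f"Profit: ${profit / 1e9:.1f}B")          # ~$1.7B

unplanned_cost = 2e9  # purely hypothetical $2B hit (e.g. a mass RMA program)
print(f"After hit: ${(profit - unplanned_cost) / 1e9:.1f}B")  # negative = a loss
```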
https://www.youtube.com/watch?v=OVdmK1UGzGs
Not saying this to defend Intel, just pointing out a major reason why they are scrambling to cut costs.
edit: formatting
Yeah, a decision to extend copyright so that it covers training data as well would devastate open source models and set us back a bit.
There are many who want to push back against LLMs, especially journalists, so articles like this are to be expected.
edit: a word.
That’s a big misconception about what quantum internet is (and what quantum entanglement actually allows for), as explained by this physicist: https://www.youtube.com/watch?v=u-j8nGvYMA8
Quantum internet doesn’t mean you can transmit data faster than the speed of light.
It just means you get an ultra-secure connection, but one that’s super susceptible to noise (in other words, you can’t send a lot of data reliably, so it would be terrible for that).
At best, this would be useful for being absolutely sure that encryption keys were exchanged successfully without being intercepted by anyone else.
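As a toy illustration of that last point (my own sketch, not from the video): quantum key distribution schemes like BB84 work because measuring a qubit in the wrong basis randomizes it, so an eavesdropper leaves detectable errors behind. Classically simulated:

```python
import random

n = 32
alice_bits  = [random.randint(0, 1) for _ in range(n)]
alice_bases = [random.choice("+x") for _ in range(n)]  # + rectilinear, x diagonal
bob_bases   = [random.choice("+x") for _ in range(n)]

# Bob measures each qubit: a matching basis gives the right bit,
# a mismatched basis gives a coin flip (that's the quantum part).
bob_bits = [bit if a == b else random.randint(0, 1)
            for bit, a, b in zip(alice_bits, alice_bases, bob_bases)]

# Sifting: they publicly compare bases (never bits) and keep the matches.
key = [b for b, a_basis, b_basis in zip(bob_bits, alice_bases, bob_bases)
       if a_basis == b_basis]
print("Shared key bits:", key)

# In real QKD they'd also sacrifice some key bits to check the error rate;
# an eavesdropper measuring in random bases would corrupt ~25% of them.
```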
The oldest tweets I could find that actually started reporting this are from ~16 days ago.
https://x.com/Piotrdotcom/status/1829126494574067992
They reference a page posted on Aug 29th:
https://niebezpiecznik.pl/post/uwazajcie-na-takie-captcha/