  • If I am not mistaken, the difference was that the Internet Archive was distributing books with DRM that would make the PDF unusable after a certain time. You could compare it to how a physical library lends out books for free, for a limited time. Now, of course, one could bypass the DRM or copy the contents some other way, but a person can just as easily photocopy a book they borrowed physically. Meanwhile, other physical libraries are allowed to distribute e-books, but I’m not sure if that’s made possible by licensing fees.

    I’m not saying that they approached this well, especially given the copyright laws in the US, but it was indeed a good thing for the normal person at the time. Too bad that the judicial system in the US is biased towards leeching companies. I really can’t wait to see the AI vs publishers fight, though. Let’s see who has deeper pockets and better plants in the courts :D


  • Good luck! You can try the huggingface-chat repo, or ollama with this web-ui. Both should be decent, as they come with instructions for setting up a Docker container. (There’s a rough sketch of talking to ollama’s API at the end of this comment.)

    I believe the Llama 3 models are out there in a torrent somewhere, but I didn’t dig around to find it. For the 70B model you’ll probably need around 64GB of RAM available, but the 8B one should run fine with just 8GB. It will be somewhat slow compared to the ChatGPT experience, though. The self-attention mechanism can be parallelized, which is why you’ll see much better results on a GPU. According to some others who tested it, if you offload some of the layers to RAM, you can see ~10-12 tokens per second on an RTX 3090 for certain 70B models, but more capable ones will run at less than 1 token per second, all depending on the context window you use.

    If you don’t have a GPU available, just give the Phi-3 model a try :D Quantized to 4 bits, it can apparently hit ~12 tokens per second on an iPhone haha. It should play nicely with pooling information from a search engine, or from a vector database like Milvus, Qdrant or Chroma.
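
    For reference, here’s a minimal sketch of what querying a locally running ollama server looks like from Python, assuming the default port (11434) and a model you’ve already pulled (e.g. ollama pull phi3):

```python
import requests

# Ask a locally running ollama server to generate a completion.
# Assumes the model has already been pulled, e.g. `ollama pull phi3`.
def ask(prompt: str, model: str = "phi3") -> str:
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=300,  # local generation can be slow on CPU
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask("Summarize what a vector database is in one sentence."))
```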


  • What db2 already said. Microsoft just released Phi-3 mini, which can allegedly run locally on newer smartphones.

    If I understood correctly, the Rabbit thingy just captures your information locally and then forwards it to their server. So, if you want more power, you could probably do the same by submitting the same info to a bigger open-source model than Phi-3, like Llama 3, hosted on your homelab. I believe you can set it up with huggingface/gradio, which more or less provides an API that you could use; there’s a rough sketch at the end of this comment.

    That way, you don’t need a shitty orange box, and you can always get the latest open-source models with a few lines of code. There are plenty of open-source frameworks in the works at the moment, and I believe we’re not far off from having multimodal LLMs running on homelab-level hardware (if you don’t mind a bit of lag).
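
    Something along these lines is what I mean; a minimal sketch, assuming the transformers and gradio packages, with the Phi-3 model id used purely as an example (swap in whatever you host):

```python
import gradio as gr
from transformers import pipeline

# Load a local model once at startup; the model id is just an example.
generator = pipeline(
    "text-generation",
    model="microsoft/Phi-3-mini-4k-instruct",
    trust_remote_code=True,  # Phi-3 shipped with custom model code at release
)

def respond(message, history):
    # History is ignored for brevity; a real setup would apply the chat template.
    out = generator(message, max_new_tokens=256, return_full_text=False)
    return out[0]["generated_text"]

# ChatInterface serves a web UI and also exposes an HTTP API, so you can hit
# the model from elsewhere, e.g. a phone pointed at your homelab.
gr.ChatInterface(respond).launch(server_name="0.0.0.0", server_port=7860)
```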