I am a teacher and I have a LOT of different literature material that I wish to study, and play around with.
I wish to have a self-hosted and reasonably smart LLM into which I can feed all the textual material I have generated over the years. I would be interested to see if this model can answer some of the subjective course questions that I have set in my exams, or write small paragraphs about the topic I teach.
In terms of hardware, I have an old Lenovo laptop with an NVIDIA graphics card.
P.S: I am not technically very experienced. I run Linux and can do very basic stuff. Never self hosted anything other than LibreTranslate and a pihole!
While you can run an LLM on an “old” laptop with an NVIDIA graphics card, it will likely be really slow. Like several minutes per answer, or much longer. Huggingface.co is a good place to start and has a ton of different LLMs to choose from, ranging from small enough to run on your hardware to ones that won’t.
As a teacher, you know that research is going to be vital to understanding and implementing this project. There is a plethora of information out there, and no single person’s answer will work perfectly for your wants and your hardware.
When you have figured out your plan and then run into issues that’s a good point to ask questions with more information about your situation.
I say this because I just went through this. Not to be an ass.
Can they not get a TPU on USB, like the Coral Accelerator or something?
It’s less about the calculations and more about memory bandwidth. To generate a token you need to go through all the model data, and that’s usually many, many gigabytes. So the time it takes to read through memory is usually longer than the compute time. GPUs have gigabytes of VRAM that is many times faster than the CPU’s RAM, which is the main reason they’re faster for LLMs.
Most TPUs don’t have much RAM, especially the cheap ones.
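The bandwidth point above can be put into rough numbers. All figures in this sketch are assumed ballpark values (typical dual-channel laptop RAM vs. a midrange GPU’s VRAM), not measurements of any specific machine:

```python
# Rough illustration of why memory bandwidth, not compute, bounds token speed.
# All numbers below are assumed ballpark figures, not measurements.

def max_tokens_per_second(model_size_gb: float, bandwidth_gb_s: float) -> float:
    """Generating each token streams the whole model through memory,
    so bandwidth / model size gives an upper bound on tokens per second."""
    return bandwidth_gb_s / model_size_gb

# Assumed: ~50 GB/s for laptop DDR4, ~400 GB/s for midrange GPU VRAM,
# and an 8 GB (quantized) model file.
cpu_limit = max_tokens_per_second(model_size_gb=8.0, bandwidth_gb_s=50.0)
gpu_limit = max_tokens_per_second(model_size_gb=8.0, bandwidth_gb_s=400.0)

print(f"CPU RAM upper bound:  ~{cpu_limit:.2f} tokens/s")
print(f"GPU VRAM upper bound: ~{gpu_limit:.2f} tokens/s")
```

Same model, same math; only the memory speed changes, and the ceiling moves by the same factor.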
What I’m using is Text Generation WebUI with an 11B GGUF model from Huggingface. I offloaded all layers to the GPU, which uses about 9GB of VRAM. With GGUF models, you can choose how many layers to offload to the GPU, so it uses less VRAM. Layers that aren’t offloaded use system RAM and the CPU, which will be slower.
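The offloading trade-off above is just arithmetic: each layer takes a slice of the model’s size, and whatever doesn’t fit in VRAM falls back to system RAM. A small sketch, where the model size, layer count, and overhead figure are all assumed example values, not specs of any particular GGUF file:

```python
# Sketch of the GGUF layer-offloading trade-off. The model size, layer count,
# and overhead below are assumed example values, not real model specs.

def layers_that_fit(vram_gb: float, model_gb: float, n_layers: int,
                    overhead_gb: float = 1.5) -> int:
    """How many layers fit in VRAM, leaving headroom for context/overhead."""
    per_layer_gb = model_gb / n_layers
    usable = max(vram_gb - overhead_gb, 0.0)
    return min(n_layers, int(usable / per_layer_gb))

# e.g. a hypothetical ~6.5 GB quantized model with 48 layers:
print(layers_that_fit(vram_gb=6.0, model_gb=6.5, n_layers=48))  # prints 33
print(layers_that_fit(vram_gb=9.0, model_gb=6.5, n_layers=48))  # prints 48 (all)
```

The remaining layers run on the CPU, which is why partial offload works but is slower.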
Probably better to ask on !localllama@sh.itjust.works. Ollama should be able to give you a decent LLM, and RAG (Retrieval Augmented Generation) will let it reference your dataset.
The only issue is that you asked for a smart model, which usually means a larger one, plus the RAG portion consumes even more memory, which may be more than a typical laptop can handle. Smaller models have a higher tendency to hallucinate (produce incorrect answers).
Short answer: yes, you can do it. It’s just a matter of how much RAM you have available and how long you’re willing to wait for an answer.
I’m in the early stages of this myself and haven’t actually run an LLM locally, but the term that steered me in the right direction for what I was trying to do was “RAG”: Retrieval-Augmented Generation.
ragflow.io (terrible name but good product) seems to be a good starting point, but it’s mainly set up for APIs at the moment. I found this link for local LLM integration and I’m going to play with it later today: https://github.com/infiniflow/ragflow/blob/main/docs/guides/deploy_local_llm.md
It depends on the exact specs of your old laptop. Especially the amount of RAM and VRAM on the graphics card. It’s probably not enough to run any reasonably smart LLM aside from maybe Microsoft’s small “phi” model.
So unless it’s a gaming machine with 6GB+ of VRAM, the graphics card will probably not help at all. Without it, it’s going to be slow. For that kind of computer, I recommend projects that are based on llama.cpp or use it as a backend. It’s the best/fastest way to do inference on slow computers and CPUs.
Furthermore, you could use online services or rent a cloud computer with a beefy graphics card by the hour (or minute).
You need more than an LLM to do that. You need a Cognitive Architecture around the model that includes RAG to store/retrieve the data. I would start with an agent network (CA) that already includes the workflow you ask for. Unfortunately I don’t have a name ready for you, but take a look here: https://github.com/slavakurilyak/awesome-ai-agents
I’d recommend trying LM Studio (https://lmstudio.ai/). You can use it to run language models locally. It has a pretty nice UI and it’s fairly easy to use.
I will say, though, that it sounds like you want to feed perhaps a large number of tokens into the model, which will require a model made for a large context length and may require a pretty beefy machine.
I watched NetworkChuck’s tutorial and just did what he did, but on my MacBook. Any recent MacBook (M-series) will suffice. https://youtu.be/Wjrdr0NU4Sk?si=myYdtKnt_ks_Vdwo
NetworkChuck is the man
Reasonably smart… that would preferably be a 70B model, but maybe phi3-14B or llama3-8B could work. They’re rather impressive for their size.
For just the model, if one of the small ones works, you probably need 6+ GB of VRAM. For a 70B model you need roughly 40GB.
And then there’s the context. Most models are optimized for around 4k to 8k tokens. One token is roughly 3-4 characters, so a bit less than a word. The VRAM needed for the context varies, but is not trivial: for 4k, figure half a gig to a gig of VRAM.
As you go to higher context sizes, the VRAM requirement for context starts to eclipse the model’s VRAM cost, and you will need specialized models to handle that big a context without going off the rails.
So no, you’re not loading all the notes directly, and you won’t have a smart model.
For your hardware and use case… try phi3-mini with a RAG system as a start.
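A back-of-the-envelope version of the numbers above. The bytes-per-parameter and per-token KV-cache figures are assumed ballparks (4-bit quantized weights, ~0.125 MB of KV cache per token), not values for any specific model:

```python
# Back-of-the-envelope VRAM estimate: model weights + context KV cache.
# The 0.5 bytes/param (4-bit quant) and 0.125 MB/token KV-cache figures
# are assumed ballparks, not specs of any particular model.

def vram_estimate_gb(params_billion: float, context_tokens: int,
                     bytes_per_param: float = 0.5,
                     kv_mb_per_token: float = 0.125) -> float:
    model_gb = params_billion * bytes_per_param   # 1B params at 4-bit ≈ 0.5 GB
    context_gb = context_tokens * kv_mb_per_token / 1024
    return model_gb + context_gb

# A hypothetical 8B model at 4-bit with a 4k context:
print(vram_estimate_gb(8, 4096))  # prints 4.5
```

Doubling the context adds a fixed chunk of VRAM on top of the weights, which is why long contexts eventually dominate the budget.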
Jan.ai or Ollama might be a good starting point? There’s https://tales.fromprod.com/2024/111/using-your-own-hardware-for-llms.html which has some guidance for using Jan.ai for both server and client.
The easiest way to run local LLMs on older hardware is Llamafile https://github.com/Mozilla-Ocho/llamafile
For non-NVIDIA GPUs, WebGPU is the way to go: https://github.com/abi/secret-llama
https://matilabs.ai/2024/02/07/run-llms-locally/
Haven’t done this yet, but this is a source I saved in response to a similar question a while back.
While this will get you a self-hosted LLM, it is not possible to feed your data to it like this. As far as I know, there are 2 possibilities:
- Take an existing model and use the literature data to fine-tune it. The success of this will depend on how much “a lot” means when it comes to the literature.
- Create a model yourself using only your literature data.
Both approaches will require some programming knowledge and understanding of how an LLM works. Additionally, they will require preparation of the unstructured literature data into a kind of structured data that can be used to train or fine-tune the model.
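That data-prep step is the part both approaches share: turning unstructured exam material into structured prompt/completion pairs. A minimal sketch, where the “Q:”/“A:” file layout is a made-up example format (not a requirement of any tool):

```python
# Sketch of the data-prep step: turn unstructured "Q: ... A: ..." text into
# structured prompt/completion records. The Q:/A: layout and the example
# questions are made up for illustration.

import json

def to_training_pairs(raw_text: str) -> list[dict]:
    """Split 'Q: ... A: ...' blocks into fine-tuning records."""
    pairs = []
    for block in raw_text.strip().split("\n\n"):
        if "A:" in block:
            q, a = block.split("A:", 1)
            pairs.append({"prompt": q.replace("Q:", "").strip(),
                          "completion": a.strip()})
    return pairs

raw = """Q: Define dramatic irony.
A: When the audience knows something a character does not.

Q: Name one Romantic poet.
A: William Wordsworth."""

for record in to_training_pairs(raw):
    print(json.dumps(record))  # one JSONL line per pair
```

One-record-per-line JSONL like this is a common input shape for fine-tuning pipelines, whatever toolchain you end up on.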
I’m just a CS student, so not an expert in this regard ;)
Thx for this comment.
My main drive for self hosting is to escape data harvesting and arbitrary query limits, and to say, “I did this.” I fully expect it to be painful and not very fulfilling…