https://huggingface.co/deepseek-ai/DeepSeek-R1
With AI there are a few phases of genuinely computationally intense work.
1) Data prep/loading for training - This is where you get all the stuff you want to show the AI together, cleaned up, formatted, tagged, and whatever. You have to be able to tell the AI what to compare its output against, and whether that output is right or wrong, after it spits out whatever it is doing in training. Think of this like decomposing a food dish into its ingredients and how much of each was in the recipe. They "tokenize" the data, chopping it into chunks and assigning each token an ID that maps to a spot in the neural net (toy sketch after the next paragraph).
Simply put, this is the brain from newborn until 10-12 years old, where you're mostly in feeding-and-organizing-what's-fed-to-you mode.
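Here's a toy sketch of that tokenization step in plain Python. It is not any real model's tokenizer (real ones use subword schemes like BPE), just the core idea of turning text into IDs:

```python
# Toy tokenizer: chop text into pieces and map each piece to an integer ID.
# Real tokenizers use subword schemes like BPE, but the idea is the same.

text = "beef bourguignon needs beef wine and onions"

vocab = {}                         # token -> integer ID
ids = []
for word in text.split():
    if word not in vocab:
        vocab[word] = len(vocab)   # assign the next free ID
    ids.append(vocab[word])

print(vocab)   # {'beef': 0, 'bourguignon': 1, 'needs': 2, ...}
print(ids)     # [0, 1, 2, 0, 3, 4, 5] -- what the model actually sees
```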
2) Training itself - this is where the big math happens. The model adjusts its parameters and weights based on training results and just tries over and over and over and over to spit out something a reward system scores as right (more right than less right, etc.). They do this a countless number of times until associations start to form between the neural nodes themselves. So the likely next token is probably one of the neurons that has a shitload of connections to the prior token. The one with the most connections is the weighted winner by default, but you can mess with the sampling parameters to get a range. Once you do this enough times, you get to a place where it's mostly accurate at regurgitating prompts you have fed it, from your big pool of tokenized data. Think of this like having all the ingredients to make every dish you've ever had, plus the recipe cards, and you just randomly mix them, taste the food, then look at the recipe and see how wrong you were (not what you did wrong, just that you weren't right), and you try again until you get good enough to recreate any dish you're asked for with a close-enough recipe (minimal sketch after the next paragraph).
Simply put, this is where you learn how to learn and start to use what you've learned in school. The 12-19ish brain, where you go to school, play sports, learn to drive, and so on.
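For the curious, here's a minimal sketch of that training loop in PyTorch, with a toy model and fake data. Real LLM training is this exact loop at absurd scale:

```python
import torch
import torch.nn as nn

# A toy next-token model: tiny vocab, tiny embedding, one linear layer.
vocab_size, embed_dim = 50, 32

model = nn.Sequential(
    nn.Embedding(vocab_size, embed_dim),
    nn.Linear(embed_dim, vocab_size),   # scores for "which token comes next"
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()         # the "taste test": how wrong were we

# fake training pairs: (current token, correct next token)
inputs = torch.randint(0, vocab_size, (1000,))
targets = torch.randint(0, vocab_size, (1000,))

for step in range(100):                 # real runs do this countless times
    logits = model(inputs)              # guess the next token for each input
    loss = loss_fn(logits, targets)     # compare guesses to the recipe cards
    optimizer.zero_grad()
    loss.backward()                     # figure out which weights to nudge
    optimizer.step()                    # nudge them, then try again
```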
2b) Reinforcement learning, reasoning, etc. - This is where you do the things in step 2 again, but with stuff that wasn't in the training set thrown at you at the same time, and you are graded significantly more harshly, to the point where you need to not only recreate a dish but also handle any modifications or substitutions asked for in the prompt and still have it come out acceptable (hand-wavy sketch after the next paragraph).
Simply put, this is where you start to really work on specific problems and use your knowledge to do stuff. Think junior year of college, or post-journeyman in a trade: moving into graduate or masters programs, or into unsupervised work in the trades, up to and including taking your PE exam as an engineer. 19-25ish of the human brain.
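A hand-wavy sketch of that reward step, reusing the toy model and optimizer from the training sketch above. The grader here is a made-up stand-in; the point is that a grade scales how strongly the model's choices get reinforced (REINFORCE-style):

```python
import torch

def grade(answer_tokens):
    """Hypothetical harsh grader: 1.0 if the dish is acceptable, else -1.0."""
    return 1.0 if answer_tokens.sum() % 2 == 0 else -1.0  # stand-in rule

prompt = torch.randint(0, 50, (8,))
logits = model(prompt)
probs = torch.softmax(logits, dim=-1)
answer = torch.multinomial(probs, 1).squeeze(-1)   # sample an answer

reward = grade(answer)
# log-probability of the tokens the model actually picked
logp = torch.log(probs.gather(-1, answer.unsqueeze(-1))).sum()
loss = -reward * logp       # good grade: make that answer more likely

optimizer.zero_grad()
loss.backward()
optimizer.step()
```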
After step 2 you can plop that out into what's called a model. This is a saved state of the whole neural network and its weights, without having to store the training set, just the tokens and parameters. The more parameters, the bigger the model (the 1B, 7B, 30B, 200B, 875B you see next to models), and you can get pretty awesome results from low-parameter models that are well trained, without having to go crazy on the next part.
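In code terms, "plopping out a model" is just saving the learned weights, no training data included. A sketch using the toy model from above:

```python
import torch

# Save only the learned weights (the state dict), not the training set.
torch.save(model.state_dict(), "my_model.pt")

# Count parameters; this number is what the 1B/7B/30B labels refer to.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")
print(f"~{n_params * 4 / 1e6:.1f} MB on disk at 32-bit precision")

# Later, rebuild the same architecture and load the weights back in.
model.load_state_dict(torch.load("my_model.pt"))
```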
3) Inference - this is where you take the model and load it into working memory, so the bigger the model, the bigger your working memory needs to be. And all of that memory needs to be immediately accessible so it can start outputting the answer. This is where you run a restaurant where anyone in the world can ask you to cook anything they've ever eaten that's recorded in history, ordered their way, and you are expected to cook it immediately, in as little time as possible. The faster your memory is and the better your reinforcement learning was, the faster and more accurately you can serve the dish (tiny sketch after the next paragraph).
Simply put, this is being the engineer or doctor or tradesperson or whatever you're functionally an expert at as an adult. This is what you get paid to do with your brain, not your body. Human brain 25+ish.
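A tiny inference sketch with the same toy model: load the weights into memory, then pick next tokens one at a time. Temperature is one of the sampling parameters mentioned earlier; low temperature mostly takes the "weighted winner", higher temperature allows a range:

```python
import torch

# Load the saved weights into memory and switch to inference mode.
model.load_state_dict(torch.load("my_model.pt"))
model.eval()

def next_token(token_id, temperature=0.7):
    with torch.no_grad():
        logits = model(torch.tensor([token_id]))[0]
    probs = torch.softmax(logits / temperature, dim=-1)
    return torch.multinomial(probs, 1).item()

tok = 0
for _ in range(10):           # serve the dish one token at a time
    tok = next_token(tok)
    print(tok, end=" ")
```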
Except what we are trying to do with these LLMs and agentic models is do this whole thing once and have it be expert level at everything. So instead of making a bot to win Iron Chef, we are trying to make a bot that can win Iron Chef, MasterChef, Survivor, Big Brother, call the NFC championship game, engineer a building, fix traffic in Chicago, cure cancer, build the best app in the App Store, etc. It can do anything you throw at it, and it can do it awesome.
Small language models, or models trained specifically on a certain topic, are far better at their one thing. You can take an LLM, specialize it in step 2b, and then it just wins Iron Chef every time.
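As a rough sketch of what that specialization can look like in practice, here's one common approach using LoRA adapters via the Hugging Face peft library. The model name is a placeholder, not a real checkpoint, and the target modules vary by architecture:

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# "some-base-llm" is a placeholder; swap in a real checkpoint name.
base = AutoModelForCausalLM.from_pretrained("some-base-llm")

config = LoraConfig(r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"])
model = get_peft_model(base, config)
model.print_trainable_parameters()   # only a tiny fraction of weights train

# ...then run the normal training loop, but only on iron-chef-domain data,
# and the general model becomes an iron-chef specialist.
```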
Models right now have 3 styles of release.
1) Commercial/Proprietary - you pay us, we give you the license and let you use it under a specific cost structure, whether you pay per token or run it yourself and pay a license fee.
2) Commercial/Open Source - you can download the model and use it on your own computers, or you can pay us to use it on our computers under a specific cost structure; but if you make a bunch of money actually doing something with it, we get a cut (Meta's Llama models).
3) Completely Open Source - you can download it and do whatever you want with it, as long as you say it's our model. Tons of people can offer this under any cost structure, but the developer usually offers a plan too (DeepSeek).
If you had 8 H100s or MI350Xs or whatever, with about 800GB of HBM across the GPUs in a cluster, you could run DeepSeek-R1 with no terms and conditions. Download it from the link up top.
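Quick back-of-envelope math on why ~800GB works. DeepSeek-R1 is roughly 671 billion parameters, released at 8-bit precision, so the weights want about a byte each:

```python
n_params = 671e9               # DeepSeek-R1 parameter count, roughly
bytes_per_param = 1            # FP8: one byte per weight
weights_gb = n_params * bytes_per_param / 1e9
print(f"~{weights_gb:.0f} GB just to hold the weights")   # ~671 GB
# the rest of the ~800GB goes to the KV cache and other working space
```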
They also have distilled models that shrink it down, so you can run it on anything from a laptop up through a pretty beefy workstation.
Basically they gave the brain away; you just need to put it in an android, and how much memory your android has is how fancy of a model you can install.