Everything posted by Captainant

  1. While the DeepSeek team is using a distillation of OpenAI models in some examples, they've also demonstrated the pattern using a distilled Alibaba model. Their approach of using reinforcement learning to teach reasoning patterns to a transformer, then coming back with "tune-up" passes to refresh its factual knowledge, is extremely effective and has resulted in smaller models producing results on par with much larger ones (rough sketch of the idea below). For shiggles, I even loaded the 1.5B-parameter model onto my S24U and it was doing pretty well with simple-to-medium-complexity questions. It's crazy how much they've packed into a neural network that's small enough to live in memory on my goddamn cell phone. Edit: added a longcat screenshot in the below spoiler of a local LLM running on my cell phone writing a Python function. My S24U was putting out 26 tokens per second!
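A toy sketch of the distillation idea described in that post: a large "teacher" model writes out reasoning traces, and a small "student" is fine-tuned with ordinary next-token prediction to imitate them. The model names are placeholders and the recipe is my own simplified rendering, not DeepSeek's actual code; it also assumes the teacher and student share a tokenizer.

```python
# Hedged sketch of reasoning distillation: teacher generates a trace,
# student is fine-tuned to reproduce it. Names/hyperparameters are invented.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

TEACHER = "big-reasoning-model"   # placeholder for a large reasoning model
STUDENT = "small-base-model"      # placeholder for e.g. a 1.5B base model

tok = AutoTokenizer.from_pretrained(TEACHER)
teacher = AutoModelForCausalLM.from_pretrained(TEACHER).eval()
student = AutoModelForCausalLM.from_pretrained(STUDENT)

prompt = "Q: What is 17 * 24? Think step by step.\nA:"
ids = tok(prompt, return_tensors="pt").input_ids

# 1) Teacher writes out its chain-of-thought answer.
with torch.no_grad():
    trace = teacher.generate(ids, max_new_tokens=256)

# 2) Student trains on the full trace with a plain language-modeling loss,
#    so it picks up the reasoning style, not just the final answer.
loss = student(trace, labels=trace.clone()).loss
loss.backward()  # one illustrative gradient step; real runs loop over many traces
```

The "tune-up" passes mentioned above would then be additional supervised fine-tuning rounds on factual data after the reasoning behavior is in place.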
  2. I agree wholeheartedly with you, but nothing about our latest round of executive orders is consistent with that sort of long-term thinking. Never mind that the president never really had Ukraine's defense as a priority: he already attempted to withhold congressionally appropriated aid to Ukraine back in 2019, and is actively attacking the Impoundment Control Act to enable that generally. I guess I'm saying, I don't think the pieces are getting lined up to be favorable to Ukrainian defense. Even IF the US wanted to jump in with both feet, I don't think we would be able to. We can't even make fuckin deli meat without fucking it up
  3. That doesn't sound very cost-efficient; we tore down our manufacturing capacity in the name of lower costs and outsourcing. It's gonna be a longer run-up to build new plants and get production going, not to mention actually paying workers above poverty wages
  4. It's wild to see such concerted thread shitting in response to turning off Twitter links. Y'all are bigmad
  5. Hey @Pimphand shouldn't you be sharing your thoughts on Elon in this thread? You seem to be quite the fan of his recent moves and decisions
  6. https://apnews.com/article/donald-trump-pause-federal-grants-aid-f9948b9996c0ca971f0065fac85737ce Turns out the president doesn't control spending and congress does, so he can't just unilaterally turn off funds. He's a president, not a monarch.
  7. That's if you're using their hosted service; a locally run DeepSeek-R1 model hasn't had any censorship problems with Tiananmen Square or other uncomfortable historical events. Most laptops, and pretty much every M-series MacBook, can run the smaller models locally on CPU. Give it a shot (minimal example below).
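If you want to try that yourself, here's roughly what it looks like with the ollama Python client (a sketch assuming the ollama daemon is installed and running and you've done `pip install ollama`; the exact model tag is my guess, so check `ollama list` for what's actually available):

```python
# Minimal local-inference sketch with the ollama Python client.
import ollama

ollama.pull("deepseek-r1:7b")  # one-time download of a distilled model (tag assumed)

resp = ollama.chat(
    model="deepseek-r1:7b",
    messages=[{"role": "user", "content": "What happened in 1989 at Tiananmen Square?"}],
)
print(resp["message"]["content"])  # everything stays on your machine
```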
  8. It was a 1/3-scale model of the full-size airframe. They wanted to prove out the geometry and supersonic performance before moving to full size
  9. FWIW, you can run the whole thing on your local computer, completely detached from the internet, and when I asked my copy "what happened in 1989 at Tiananmen Square?" it gave the following response: So at least there's no censorship (on that particular topic) in the open-source model that's been distributed
  10. Yeah, he said he was gonna do this back in November. I've been linking to it all over the place
  11. Yes, they've open-sourced the model and the technique used. Like I said, the proof is in the pudding and the thing does what they say it does. I don't think they're understating the resources they used. Bill don't lie. We will almost certainly see some at-scale reproductions come out of the West, and this will probably lead to a nice incremental step forward for US-based LLM firms, especially if we've got the latest and greatest hardware while China is using last-gen Nvidia chips. The bigger thing is that if this is what they're open-sourcing, just imagine what cards they're still holding.
  12. maybe he can go to Dachau and carry his kid on his shoulders there too?
  13. Well, just to reiterate: the Chinese methods and results have already been replicated at small scale, and are proving to be novel (to the West) solutions to scaling problems that previously were just moneywhipped. In my professional work, I have colleagues already working to replicate it at full scale just to see if it works or not. I would bet their practices, especially their inclusion of reinforcement learning AND a GAN-style design (using a copy of the LLM to check whether responses make sense or not; toy sketch below), will lead to a reasoning model becoming the new standard for the time being. It really has impressed me: a relatively small model (14B parameters) running on my local hardware is delivering responses as fast and as good as a full-scale vended model (600-700B parameters) from Anthropic or Meta.
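Here's my own toy rendering of that self-check idea: the same local model proposes an answer, then a second call over the same weights judges it. This is an illustration of the concept, not DeepSeek's training code; the model tag and the YES/NO judging prompt are assumptions.

```python
# Propose-then-verify loop: one model call answers, another call judges.
import ollama

MODEL = "deepseek-r1:14b"  # assumed tag for a local distilled model

def propose(question: str) -> str:
    r = ollama.chat(model=MODEL, messages=[{"role": "user", "content": question}])
    return r["message"]["content"]

def verify(question: str, answer: str) -> bool:
    judge = (
        f"Question: {question}\nProposed answer: {answer}\n"
        "Does this answer make sense? Reply with exactly YES or NO."
    )
    r = ollama.chat(model=MODEL, messages=[{"role": "user", "content": judge}])
    return "YES" in r["message"]["content"].upper()

question = "A train leaves at 3pm going 60 mph. How far has it gone by 5pm?"
answer = propose(question)
for _ in range(3):            # retry a few times if the judge rejects it
    if verify(question, answer):
        break
    answer = propose(question)
print(answer)
```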
  14. that's one way to slow down a racing ADHD mind, I guess
  15. You tellin' me the Democratic People's Republic of Korea is neither democratic NOR a republic???
  16. From my perspective, it's mainly been the consultancy and business class clamoring for LLMs. The tech leaders I talk to asked me for help with their genAI strategy "because our board is asking us what we're doing to catch up", which invariably results in some crappy contract to bring in an overpriced reference architecture sold as the whizbang3000™. There's a ton of utility yet to be realized, but we'll never get there as long as the conversation is being driven by hype and uninformed investors
  17. If you're a practitioner in the space, check out this GitHub repo. A PhD student trained (fine-tuned, to be technical) their own 3B-parameter model using about $30 in compute. https://github.com/Jiayi-Pan/TinyZero It adds a reinforcement learning layer that trains the model toward better reasoning and teaches it to self-reject responses that don't make sense (toy illustration of the reward idea below). I'm working on building that repo on my local system this week, IF my 12GB of VRAM is enough, but I'm very interested to see what sort of domain expertise I can cram into my own distilled models trained on my own hardware. The incredible innovation here appears to be a significantly optimized training architecture that results in decreased compute costs for training and inference. Really excited to tear into this stuff; if I can attain mastery I'll have my goals basically complete for the year
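For a flavor of the verifiable-reward idea that the repo builds on, here's a toy reward function for a Countdown-style arithmetic task. This is my own sketch, not code from TinyZero; the "Answer:" format and the partial-credit values are invented for illustration.

```python
# Rule-based reward: score a response by checking its final answer, so no
# human labels or learned reward model are needed.
import re

def reward(response: str, target: int) -> float:
    m = re.search(r"Answer:\s*(-?\d+)", response)
    if m is None:
        return 0.0        # malformed output is rejected outright
    if int(m.group(1)) == target:
        return 1.0        # correct answer
    return 0.1            # partial credit for at least following the format

# An RL loop (e.g. PPO/GRPO) then samples responses, scores them with
# reward(), and reinforces the high-scoring ones.
print(reward("6 * 7 = 42, so... Answer: 42", 42))  # 1.0
```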
  18. The lesson we should be taking from this is just how inflated our stock market is. Nothing about Nvidia's business changed: they aren't selling any fewer cards and no deals changed, and yet they took a $600 billion haircut simply because they weren't leading the hype train for a moment
  19. Aw that's not fair, guadaloopy just gets a lot of shit for defending FSD as much as he did. It's not like the site admins have made a rule against his being on the site or anything
  20. On my machine I'm getting 40-50% more token throughput, and full responses with reasoning in about the same time as Llama 3.2, comparing similar-parameter-count distilled models. Are y'all doing any fine-tuning or RAG on your corpus of data? If you're looking for an easy button and have CUDA cores, Nvidia's Chat with RTX app is a pretty solid, dummy-proof ingestion engine for showing proof of concept at a local scale. It creates the embeddings and stores them as a static file in your filesystem, so it's not nearly as scalable as a proper vector store, but it's wayyyyy fewer moving pieces for proving the concept (bare-bones sketch of the same pattern below).
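For comparison, here's a bare-bones version of that ingest-to-a-static-file pattern using sentence-transformers and numpy instead of the Nvidia app. The embedding model, file name, and toy documents are my own choices for the sketch.

```python
# Embed documents once, persist to a plain file, retrieve by cosine similarity.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
docs = ["ollama runs models locally", "RAG retrieves context before generation"]

# Ingest: compute embeddings and store them as a static file on disk.
np.save("embeddings.npy", model.encode(docs, normalize_embeddings=True))

# Query: load the file and rank documents against the question.
emb = np.load("embeddings.npy")
q = model.encode(["how do I run a model on my own machine?"], normalize_embeddings=True)
print(docs[int((emb @ q.T).argmax())])
```

A proper vector store adds indexing and concurrent updates; for a proof of concept, a flat file plus a dot product is plenty.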
  21. Welcome back to the limelight, measles! https://www.houstonhealth.org/houston-measles-advisory#4257225834-2515353578
  22. Oh and, kind of neato, a PhD student has already replicated R1-Zero's results with a distilled model trained for about $30 https://github.com/Jiayi-Pan/TinyZero I may try running that locally; it would be pretty wild to be able to self-author a 3B-parameter model on a consumer-grade card (12GB 3080)
  23. Yeah, I just grabbed a few open-source copies of the DeepSeek-R1 model, and the 14B model runs shockingly well and fast and only takes 10GB of memory while handling a query; I'd bet my wife's M2 MacBook Air could run it. It outputs its reasoning and thinking as it works through a problem. It is a distilled model, but I've been impressed with its performance so far compared to the relative bar of other LLMs. If you're interested in setting it up, it's pretty easy to do locally with ollama and chatbox; this Reddit post is a good how-to with pictures (streaming example below).
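If you'd rather script it than click through chatbox, the streaming variant below lets you watch the reasoning tokens arrive as described (a sketch assuming the 14B distill has already been pulled; the tag is my assumption):

```python
# Stream tokens from a local model so the reasoning shows up as it's generated.
import ollama

stream = ollama.chat(
    model="deepseek-r1:14b",
    messages=[{"role": "user", "content": "How many primes are there below 50?"}],
    stream=True,
)
for chunk in stream:
    print(chunk["message"]["content"], end="", flush=True)
```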