
ChatGPT AI Tool — We all work for robots now



11 hours ago, raw dog said:

This guy has some good points, but I don’t know how to make twitter work here.  
 

https://x.com/amlivemon/status/1884037387686191133?s=46

 

Anyone who takes China's declarations of tech and economic policy at face value, when for decades their tech has failed, their economy has been a giant Ponzi scheme, and its military a paper tiger, I just laugh at and discount.

They tout hypersonics while the US achieved Mach 24 in the 1970s… nor have Russia or China equaled the Blackbird tech, which is 60 years old… So yes, China and Russia are decades upon decades behind in tech and economic progress.

These are ignorant points. Chinese hypersonic weapons are significantly more advanced than what Russia typically calls hypersonic. Modern hypersonic weapons aren't just about speed, but about maneuverability at speed. The glide vehicle hypersonic missiles utilized by China are legitimately impressive and currently offer a capability that the US cannot replicate in its in-service arsenal. We do have a number of programs developing hypersonic glide vehicles similar to what China currently fields and more advanced hypersonic cruise missiles (powered hypersonic flight). 


1 minute ago, B00M said:

Other than NVIDIA, there are only 12 companies in the US with market caps bigger than the $595B NVIDIA lost just yesterday.

The lesson we should be taking from this is seeing just how inflated our stock market is. None of Nvidia's business changed: they aren't selling any fewer cards and no deals changed, and yet they took a $600,000,000,000 haircut simply because they weren't leading the hype train for a moment.


13 minutes ago, Captainant said:

On my machine I'm getting 40-50% more token throughput, and full responses with reasoning in about the same time as Llama 3.2, comparing distilled models with similar parameter counts.

Are y'all doing any fine-tuning or RAG on your corpus of data? If you're looking for an easy button and have CUDA cores, Nvidia's Chat with RTX app is a pretty solid and dummy-proof ingestion engine for showing proof of concept at a local scale. It creates the embeddings and stores them as a static file in your filesystem, so it's not nearly as scalable as a proper vector store, but it's wayyyyy fewer moving pieces for proving the concept.

We're still POCing, but yes, we're using RAG via Msty for right now unless we can find a better solution, and the Ollama/Msty package is providing great results.

Edited by SmokeyTheBear
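For anyone who wants to see what that "embeddings in a flat file" approach looks like without a vendor app, here's a minimal sketch using the ollama Python package. The embedding model name and the pickle-file layout are my own illustrative choices, not what Chat with RTX or Msty actually do under the hood:

```python
# Minimal local RAG sketch: embed documents once, store them in a flat file,
# then retrieve by cosine similarity. Illustrative only; a real setup would
# use a proper vector store.
import pickle
import numpy as np
import ollama  # assumes a local Ollama server with an embedding model pulled

EMBED_MODEL = "nomic-embed-text"  # assumed model choice; any embedding model works

def embed(text: str) -> np.ndarray:
    resp = ollama.embeddings(model=EMBED_MODEL, prompt=text)
    return np.array(resp["embedding"], dtype=np.float32)

def build_index(docs: list[str], path: str = "index.pkl") -> None:
    # One embedding per document chunk, serialized to a static file on disk.
    vectors = np.stack([embed(d) for d in docs])
    with open(path, "wb") as f:
        pickle.dump({"docs": docs, "vectors": vectors}, f)

def retrieve(query: str, path: str = "index.pkl", k: int = 3) -> list[str]:
    with open(path, "rb") as f:
        index = pickle.load(f)
    q = embed(query)
    vecs = index["vectors"]
    # Brute-force cosine similarity against every stored vector (fine at POC scale).
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    top = np.argsort(sims)[::-1][:k]
    return [index["docs"][i] for i in top]
```

At POC scale the "vector store" really can be a pickle on disk; the brute-force scan only becomes a problem once the corpus gets large.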

6 minutes ago, SmokeyTheBear said:

We're still POCing, but yes, we're using RAG via Msty for right now unless we can find a better solution, and the Ollama/Msty package is providing great results.

If you're a practitioner in the space, check out this GitHub repo. A PhD student trained (fine-tuned, to be technical) their own 3B parameter model using about $30 in compute.

https://github.com/Jiayi-Pan/TinyZero

It adds a reinforcement learning layer that trains the model to reason better and to self-reject responses that don't make sense.

I'm working on building that repo on my local system this week, IF my 12GB of VRAM is enough, but I'm very interested to see what sort of domain expertise I can cram into my own distilled models trained on my own hardware.

The incredible innovation here appears to be a significantly optimized training architecture that decreases compute costs for both training and inference. Really excited to tear into this stuff - if I can attain mastery, I'll have my goals basically complete for the year.

Edited by Captainant
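To make the "self-reject responses that don't make sense" part concrete: as I understand the repo, TinyZero trains on verifiable puzzles (a Countdown-style arithmetic game), so the reward can be a simple rule-based check rather than a learned judge. Here's a toy sketch of that kind of verifier; this is my own illustration of the idea, not the repo's actual reward code:

```python
# Illustrative rule-based reward in the spirit of TinyZero's Countdown task:
# the model proposes an arithmetic expression and gets reward 1.0 only if it
# evaluates to the target using exactly the allowed numbers.
import ast
import operator

OPS = {ast.Add: operator.add, ast.Sub: operator.sub,
       ast.Mult: operator.mul, ast.Div: operator.truediv}

def _eval(node, used):
    # Safely evaluate a parsed +-*/ expression, recording every literal used.
    if isinstance(node, ast.Expression):
        return _eval(node.body, used)
    if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
        used.append(node.value)
        return node.value
    if isinstance(node, ast.BinOp) and type(node.op) in OPS:
        return OPS[type(node.op)](_eval(node.left, used), _eval(node.right, used))
    raise ValueError("disallowed syntax")

def reward(expr: str, numbers: list[int], target: int) -> float:
    """Binary reward: 1.0 iff expr hits target using exactly `numbers`."""
    used: list[float] = []
    try:
        value = _eval(ast.parse(expr, mode="eval"), used)
    except (ValueError, SyntaxError, ZeroDivisionError):
        return 0.0  # malformed or unsafe output is "self-rejected"
    if sorted(used) != sorted(numbers):
        return 0.0  # must use the given numbers, each exactly once
    return 1.0 if abs(value - target) < 1e-6 else 0.0

# e.g. reward("(6*5)-(4/2)", [2, 4, 5, 6], 28) -> 1.0
```

Because the reward is computable, the RL loop can score thousands of rollouts with no human labeling, which is a big part of why a run like this stays cheap.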

5 minutes ago, Captainant said:

If you're a practitioner in the space, check out this GitHub repo. A PhD student trained (fine-tuned, to be technical) their own 3B parameter model using about $30 in compute. [...]

I'm not a dev, but I'm in IT and responsible for our internal use policy. I'll get that info to the team to test out and compare. Appreciate the input.


Well, it seems like the Chinese went into the problem from a different perspective and found an elegant, simplified solution to the compute gluttony associated with training models. This happens a lot, but not at this scale of investment and publicity. Something foundational was missed by all the engineers at all the big AI players in the US, who are used to operating with unlimited compute and capital. It really calls into question their leadership in the space. It's an oversimplified exaggeration, but billions of investment dollars have been sunk into what may wind up being a dead branch of the evolutionary tree of AI, at the urging of American technologists. How are investors going to react to that? There are going to be some tense board-of-directors meetings coming up. Good article about the market implications:

https://daringfireball.net/linked/2025/01/27/nvidia-deepseek-haircut

To make a long story short, it’s been broadly assumed that leading-edge model training and inference requires the highest of high-end hardware — which hardware, especially for training, pretty much exclusively comes from Nvidia. The assumption was that any company trying to compete in this fiercely competitive field — a field with massive implications for industry, culture, and national security — must get in line to buy (or rent) enormous amounts of computational power exclusively available on the very best systems from Nvidia. That hardware isn’t available (at least legally) to Chinese companies, because of the Biden administration’s export bans. DeepSeek engineers found and implemented multiple massive efficiency improvements that allowed the company to train its latest models at far lower prices using far-less-capable hardware, and perform inference under heretofore unprecedented memory constraints. DeepSeek’s achievements render obsolete several prior planks of conventional wisdom regarding the state of AI. My link a few sentences prior in this paragraph is to a WSJ report from 14 days ago that now feels like it dates from at least 14 months ago.


The most shocking result has been a 17 percent hit to Nvidia’s stock price today, knocking somewhere around $350 billion off their market cap. Google is down about 4 percent, and Microsoft down 2 percent — but Meta is up 1 percent and Apple is up a little over 3 percent. I think the market is reacting pretty sensibly:

  • Nvidia has taken an unprecedented beating. $350 billion is more than the combined market caps of Verizon and AT&T (each worth roughly $170 billion). But Nvidia has been — and even with these breakthroughs from DeepSeek, remains — in an unprecedented position. Nvidia’s high-flying valuation reflects its unprecedented position as the essential hardware company for AI. Nvidia’s position, post-R1, remains essential — just not as essential as everyone thought. Hence the haircut. But Nvidia remains the third-most-valuable company in the world — it took a massive plunge today but in no way collapsed.
  • TSMC and Broadcom both took big hits today too (-13% and -17%, respectively). Being a “chip company” doesn’t look as sweet today as it did last week.
  • If OpenAI were publicly traded, I suspect it might have collapsed — the sell-off it would have faced today might have triggered emergency measures that halted trading.
  • Microsoft has been distancing itself from OpenAI, but is still intertwined with them like no other company. (Recall that Apple walked away from a big investment in OpenAI just four months ago.) A -2% hit feels about right.
  • The broadest implication of DeepSeek’s achievements is that really good AI is going to be even cheaper and more openly available than expected — sooner than expected. That’s bad news for Google, whose entire enterprise value is based around their having an unassailable and sustainable lead in these areas. Google doesn’t have to worry about a competitor coming along to rival them in search. They have to worry that the entire field of “search” is on the cusp of being commoditized. Thus the -4% hit.
  • Meta, the thinking goes, benefits as generative AI becomes cheaper. That’s why Meta’s own AI efforts are sorta-kinda open source. Meta has nothing to do with DeepSeek, but DeepSeek’s achievements seem perfectly aligned with Meta’s own interests in AI.
  • Apple benefits, indirectly, in at least two ways from DeepSeek’s breakthroughs. First, a vast increase in the inference capabilities of RAM-constrained hardware strongly suggests that Apple’s consumer devices will soon be able to perform far more AI functionality locally. Apple Silicon’s shared memory architecture is perfect for this — and still un-replicated by anyone else in the industry. Second, Apple’s decades-long standoffish relationship with Nvidia suddenly looks like less of a problem.

 


7 minutes ago, Goredho said:

It really calls into question their leadership in the space. It's an oversimplified exaggeration, but billions of investment dollars have been sunk into what may wind up being a dead branch of the evolutionary tree of AI, at the urging of American technologists.

From my perspective, it's mainly been the consultancy and business class clamoring for LLMs. The tech leaders I talk to ask me for help with their genAI strategy "because our board is asking us what we're doing to catch up," which invariably results in some crappy contract to bring in an overpriced reference architecture sold as the whizbang3000™.

There's still a ton of utility yet to be realized, but we'll never get there as long as the conversation is being driven by hype and uninformed investors.


1 minute ago, Captainant said:

From my perspective, it's mainly been the consultancy and business class clamoring for LLMs. The tech leaders I talk to ask me for help with their genAI strategy "because our board is asking us what we're doing to catch up," which invariably results in some crappy contract to bring in an overpriced reference architecture sold as the whizbang3000™.

There's still a ton of utility yet to be realized, but we'll never get there as long as the conversation is being driven by hype and uninformed investors.

Yep, in every BOD meeting I am in for our company, the first question is, "What are you doing to bring AI to bear for our business?" No consideration for how AI might be applicable, just "We need AI, and we need it yesterday." In the long run this will all be good for the industry, and some of that good would be to dampen the pressure to wantonly adopt AI, any AI.


On 1/27/2025 at 8:17 AM, Captainant said:

What's especially weird is that the new hot Chinese AI is just software - it runs and fine-tunes great on Nvidia hardware. It's just a much more efficient architecture than western LLMs are using, so it doesn't need the computational heft NV wants to sell you.

Surprising how much a software release is negatively impacting a hardware company

 

Computationally this is super interesting - Chinese is a much more information-dense language than English or most Latin-script languages. A single character can mean several words, and tiny differences can drastically change the meaning. It's linguistically not that surprising that they might develop an LLM architecture with greater throughput than a Latin-script nation might.

Fascinating times
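If you want to poke at that density claim yourself, it's a two-minute experiment with a tokenizer library like OpenAI's tiktoken. The sentence pair below is a toy example of mine, and whether the character-level density actually survives BPE tokenization varies a lot by tokenizer and text, so treat this as a way to measure, not a result:

```python
# Rough check of information density per token: the same sentence in English
# and Chinese, run through one common tokenizer. Ratios vary by tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

english = "Artificial intelligence models are becoming cheaper to train."
chinese = "人工智能模型的训练成本越来越低。"

for label, text in [("en", english), ("zh", chinese)]:
    tokens = enc.encode(text)
    print(label, len(text), "chars ->", len(tokens), "tokens")
```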

Here's another something that's coming for big tech: they are likely going to have some problems protecting their AI "machines" because of weaknesses they have purposely introduced into US IP law.


49 minutes ago, Dahobbs said:

These are ignorant points. Chinese hypersonic weapons are significantly more advanced than what Russia typically calls hypersonic. Modern hypersonic weapons aren't just about speed, but about maneuverability at speed. The glide vehicle hypersonic missiles utilized by China are legitimately impressive and currently offer a capability that the US cannot replicate in its in-service arsenal. We do have a number of programs developing hypersonic glide vehicles similar to what China currently fields and more advanced hypersonic cruise missiles (powered hypersonic flight). 

First you have to approach it from the standpoint that China is lying about stuff.

I think it's safe to say the US has hypersonic missile capability. We've had the tech in adjacent fields for a while. Also the rest of their military is second rate. Navy etc.

Their economy is a Ponzi scheme. Their relative debt levels are higher than ours, and that is seen as one of our biggest weaknesses according to the America bears.

27 minutes ago, Goredho said:

Well, it seems like the Chinese went into the problem from a different perspective and found an elegant, simplified solution to the compute gluttony associated with training models. [...]

 

This article seems to have made the mistake of taking China's word at face value.

Not tech savvy, but this seems like a good breakdown. Some breakthroughs, yes, but the Chinese are likely vastly understating the cost and type of hardware they used. Why would they do that?

Because it's a Chinese operation to shake confidence in American markets and tech. Congrats.

The good news is it will likely spur us to move even quicker.

https://x.com/GavinSBaker/status/1883891311473782995

https://x.com/GavinSBaker/status/1883891313453470153


1) DeepSeek r1 is real, with important nuances. Most important is the fact that r1 is so much cheaper and more efficient to inference than o1, not the $6m training figure. r1 costs 93% less to *use* than o1 per API call, can be run locally on a high-end workstation, and does not seem to have hit any rate limits, which is wild. Simple math is that every 1b active parameters requires 1 GB of RAM in FP8, so r1 requires 37 GB of RAM. Batching massively lowers costs and more compute increases tokens/second, so there are still advantages to inference in the cloud. Would also note that there are true geopolitical dynamics at play here, and I don't think it is a coincidence that this came out right after "Stargate." RIP, $500 billion - we hardly even knew you.

Real:
1) It is/was the #1 download in the relevant App Store category. Obviously ahead of ChatGPT; something neither Gemini nor Claude was able to accomplish.
2) It is comparable to o1 from a quality perspective, although it lags o3.
3) There were real algorithmic breakthroughs that led to it being dramatically more efficient both to train and to inference. Training in FP8, MLA and multi-token prediction are significant.
4) It is easy to verify that the r1 training run only cost $6m. While this is literally true, it is also *deeply* misleading.
5) Even their hardware architecture is novel, and I will note that they use PCI-Express for scale-up.

Nuance:
1) The $6m does not include "costs associated with prior research and ablation experiments on architectures, algorithms and data" per the technical paper. "Other than that, Mrs. Lincoln, how was the play?" This means that it is possible to train an r1-quality model with a $6m run *if* a lab has already spent hundreds of millions of dollars on prior research and has access to much larger clusters. DeepSeek obviously has way more than 2048 H800s; one of their earlier papers referenced a cluster of 10k A100s. An equivalently smart team can't just spin up a 2000-GPU cluster and train r1 from scratch with $6m. Roughly 20% of Nvidia's revenue goes through Singapore. 20% of Nvidia's GPUs are probably not in Singapore, despite their best efforts.
2) There was a lot of distillation - i.e. it is unlikely they could have trained this without unhindered access to GPT-4o and o1. As @altcap pointed out to me yesterday, it's kinda funny to restrict access to leading-edge GPUs and not do anything about China's ability to distill leading-edge American models - obviously defeats the purpose of the export restrictions. Why buy the cow when you can get the milk for free?

 

2) Conclusions:
1) Lowering the cost to train will increase the ROI on AI.
2) There is no world where this is positive for training capex or the "power" theme in the near term.
3) The biggest risk to the current "AI infrastructure" winners across tech, industrials, utilities and energy is that a distilled version of r1 can be run locally at the edge on a high-end workstation (someone referenced a Mac Studio Pro). That means a similar model will run on a superphone in circa 2 years. If inference moves to the edge because it is "good enough," we are living in a very different world with very different winners - i.e. the biggest PC and smartphone upgrade cycle we have ever seen. Compute has oscillated between centralization and decentralization for a long time.
4) ASI is really, really close, and no one really knows what the economic returns to superintelligence will be. If a $100 billion reasoning model trained on 100k-plus Blackwells (o5, Gemini 3, Grok 4) is curing cancer and inventing warp drives, then the returns to ASI will be really high, and training capex and power consumption will steadily grow; Dyson Spheres will be back to being the best explanation for Fermi's paradox. I hope the returns to ASI are high - it would be so awesome.
5) This is all really good for the companies that *use* AI: software, internet, etc.
6) From an economic perspective, this massively increases the value of distribution and *unique* data - YouTube, Facebook, Instagram and X.
7) American labs are likely to stop releasing their leading-edge models to prevent the distillation that was so essential to r1, although the cat may already be entirely out of the bag on this front - i.e. r1 may be enough to train r2, etc.

Grok-3 looms large and might significantly impact the above conclusions. This will be the first significant test of scaling laws for pre-training, arguably, since GPT-4. In the same way that it took several weeks to turn v3 into r1 via RL, it will likely take several weeks to run the RL necessary to improve Grok-3's reasoning capabilities. The better the base model, the better the reasoning model should be, as the three scaling laws are multiplicative: pre-training, RL during post-training, and test-time compute during inference (a function of the RL). Grok-3 has already shown it can do tasks beyond o1 (see the Tesseract demo); how far beyond is going to be important. To paraphrase an anonymous Orc from "The Two Towers," meat might be back on the menu very shortly. Time will tell, and "when the facts change, I change my mind."
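For what it's worth, the "37 gb of RAM" line in that thread is straight parameter arithmetic. Per DeepSeek's published figures, V3/R1 is a mixture-of-experts model with 671B total parameters but roughly 37B active per token, and FP8 stores one byte per parameter. A back-of-envelope sketch:

```python
# Back-of-envelope behind Baker's "37 gb of RAM" figure: active parameters
# times bytes per parameter. This ignores the KV cache and the fact that all
# 671B weights still have to live somewhere, even if only 37B are touched
# per token.
active_params = 37e9   # DeepSeek-V3/R1 active parameters per token
bytes_per_param = 1    # FP8 = 1 byte per weight

print(f"{active_params * bytes_per_param / 1e9:.0f} GB")  # -> 37 GB
```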

 


1 minute ago, raw dog said:

Not tech savvy, but this seems like a good breakdown. Some breakthroughs, yes, but the Chinese are likely vastly understating the cost and type of hardware they used. Why would they do that?

Because it's a Chinese operation to shake confidence in American markets and tech. Congrats.

Well, just to reiterate - the Chinese methods and results have already been replicated at small scale, and are proving to be novel (to the West) solutions to scaling problems that were previously just moneywhipped.

In my professional work, I have colleagues already working to replicate it at full scale just to see if it works or not. I would bet their practices - especially their inclusion of reinforcement learning AND a generative-adversarial-style design (using a copy of the LLM to check whether responses make sense) - will lead to reasoning models becoming the new standard for the time being. It really has impressed me - a relatively small model (14B parameters) running on my local hardware is delivering responses as fast as, and as good as, a full-scale vended model (600-700B parameters) from Anthropic or Meta.
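The "copy of the LLM checking the LLM" idea is easy to prototype at inference time, even though in DeepSeek's case the checking happens during RL training rather than at answer time. Here's a toy sketch of the pattern with the ollama Python package; the model tag and the judge prompt are my own assumptions:

```python
# Toy self-check loop: one model call drafts an answer, a second call with a
# judge prompt accepts or rejects it. Crude, but it's the shape of the idea.
import ollama

MODEL = "deepseek-r1:14b"  # assumed local tag; any local model works

def answer_with_self_check(question: str, max_tries: int = 3) -> str:
    draft = ""
    for _ in range(max_tries):
        draft = ollama.chat(model=MODEL, messages=[
            {"role": "user", "content": question},
        ])["message"]["content"]
        verdict = ollama.chat(model=MODEL, messages=[
            {"role": "user", "content":
                f"Question: {question}\nAnswer: {draft}\n"
                "Does this answer actually make sense? Reply PASS or FAIL."},
        ])["message"]["content"]
        if "PASS" in verdict.upper():
            return draft
    return draft  # give up and return the last attempt
```

It's crude, but it's the same shape: a generator proposes, a critic gates.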


1 hour ago, Captainant said:

The lesson we should be taking from this is seeing just how inflated our stock market is. None of Nvidia's business changed: they aren't selling any fewer cards and no deals changed, and yet they took a $600,000,000,000 haircut simply because they weren't leading the hype train for a moment.

Which is why you should buy the dip. Mark my words, this time next month they will have regained every last penny.


1 hour ago, Goredho said:

Yep, in every BOD meeting I am in for our company, the first question is, "What are you doing to bring AI to bear for our business?" No consideration for how AI might be applicable, just "We need AI, and we need it yesterday." In the long run this will all be good for the industry, and some of that good would be to dampen the pressure to wantonly adopt AI, any AI.

I kinda get it. It's the "move fast; break things" mantra when talking about truly disruptive and transformative technology (e.g. the Internet, PCs, mobile... and now AI?). It's okay to not be perfect (and it's even okay to fail, if your corporate culture is strong enough), but it's not okay to not take the risks, IMO.


1 hour ago, raw dog said:

First you have to approach it from the standpoint that China is lying about stuff. [...] Because it's a Chinese operation to shake confidence in American markets and tech. Congrats. [...]
 

1 hour ago, Captainant said:

Well, just to reiterate - the Chinese methods and results have already been replicated at small scale, and are proving to be novel (to the West) solutions to scaling problems that were previously just moneywhipped. [...]

Listen to @Captainant, @raw dog. He does this for a living, every day. We're in the peer review stage. If this is just Chinese propaganda, it will be a blip after being exposed as such in short order. If it's not, it's going to upend the game table for AI, as alluded to in that post and as seen in the market yesterday. I don't think the Chinese would release DeepSeek as (sorta) open source if it would not pass scrutiny, but we will see.

31 minutes ago, Vegas64 said:

I kinda get it. It's the "move fast; break things" mantra when talking about truly disruptive and transformative technology (e.g. the Internet, PCs, mobile... and now AI?). It's okay to not be perfect (and it's even okay to fail, if your corporate culture is strong enough), but it's not okay to not take the risks, IMO.

The problem is not looking at how to make differentiating products and intellectual property with AI. It's being pressured to provide "AI YESTERDAY," so that everyone is doing piss-poor work and slapping an "AI" sticker on it to check the AI box as quickly as possible, to placate people who have no idea what they are pushing for or the risks being taken. We are one Three Mile Island event at the hands of an AI-managed system away from "AI" becoming a dirty word. We're creating a context where that is more likely to happen, though this event will just cost hundreds of billions vs any human lives. I am hoping this event instills some sobriety in AI-drunk executives/directors/investors.

Edited by Goredho

12 minutes ago, Goredho said:

Listen to @Captainant, @raw dog. He does this for a living, every day. We're in the peer review stage. If this is just Chinese propaganda, it will be a blip after being exposed as such in short order. If it's not, it's going to upend the game table for AI, as alluded to in that post and as seen in the market yesterday. I don't think the Chinese would release DeepSeek as (sorta) open source if it would not pass scrutiny, but we will see.

Fair enough. I just remember things like all the hype about the superconducting material, or, you know, all their economic data.

From what I can gather, the worst case for the US is that they made some novel improvements in how these models are built and set up, while piggybacking off our existing structures and models and vastly understating how much time, money, and hardware it cost in order to cause a shock.

Is it open source? If so, it should be easy to pull the reverse China and steal their novel improvements and ramp them up further.

50 minutes ago, atomheartbevo said:

Despicable Me Reaction GIF
 

I was going to tell you that rather than parroting Twitter - which we've all stopped using because it's shit, owned by a shit person - you could go download the code and test it yourself, but I'm guessing you can't.

You're right, I can't test it. Good luck, we're all counting on you.

As for Twitter, old habits die hard. It's still the fastest place to get a wide variety of news, and still the best populated as well. For example, I searched and didn't see this Gavin Baker fellow on Bluesky; now he's a Twitter follow for me. It's still quite useful. I guess not everyone has stopped using it?

Edited by raw dog

1 minute ago, raw dog said:

From what I can gather, the worst case for the US is that they made some novel improvements in how these models are built and set up, while piggybacking off our existing structures and models and vastly understating how much time, money, and hardware it cost in order to cause a shock.

Is it open source? If so, it should be easy to pull the reverse China and steal their novel improvements and ramp them up further.

Yes, they've open sourced the model and the technique used. Like I said, the proof is in the pudding, and the thing does what they say it does.

I don't think they're understating the resources they used. Bill don't lie. We will almost certainly see some at-scale reproductions come out of the West, and this will probably lead to a nice incremental step forward for US-based LLM firms. Especially if we've got the latest and greatest hardware while China is using last-gen Nvidia chips.

 

The bigger thing is that if this is what they're open sourcing, just imagine what cards they're still holding. 


24 minutes ago, Goredho said:

We are one Three Mile Island event at the hands of an AI-managed system away from "AI" becoming a dirty word. We're creating a context where that is more likely to happen, though this event will just cost hundreds of billions vs any human lives. I am hoping this event instills some sobriety in AI-drunk executives/directors/investors.

IMO, it won't matter. Just breaking eggs to make that omelet. The digital-labor-vs-human-labor cost-takeout toothpaste is out of that aluminum tube. Just some necessary bumps and bruises, trial and error, all part of the Gartner hype cycle and the iterative process before the gold of 10x'ing the stock price is found - that, I think, is how executives are thinking about it.

But you tell me (it sounds like your experience in board rooms is consistent with what I'm saying, since you're positioning yourself as the lone sober-minded executive in the room)?


1 hour ago, Goredho said:

It's being pressured to provide "AI YESTERDAY," so that everyone is doing piss-poor work and slapping an "AI" sticker on it to check the AI box as quickly as possible, to placate people who have no idea what they are pushing for or the risks being taken.

lol. The camera I use for streaming my daughter's sports utilizes "AI" to help it pan and follow the action on its own. Pretty big time.


38 minutes ago, Vegas64 said:

IMO, it won't matter. Just breaking eggs to make that omelet. The digital-labor-vs-human-labor cost-takeout toothpaste is out of that aluminum tube. Just some necessary bumps and bruises, trial and error, all part of the Gartner hype cycle and the iterative process before the gold of 10x'ing the stock price is found - that, I think, is how executives are thinking about it.

But you tell me (and it sounds like your experience in board rooms is consistent with what I'm saying as youre positioning yourself as the lone soberminded executive in the room)?

Yep, I very well could be naive that this will change anything within our company or outside of it. But I hope something does, or the "broken eggs" will be worse than a few hundred billion in market cap across a handful of companies. Whether or not that could be correctly characterized as broken eggs or as catastrophic failures will largely depend on where one sits on a spectrum of sociopathy.


1 hour ago, Fudge Nuggets said:

If DeepSeek proved you don't need a bazillion NVDA GPUs then yes, NVDA is going to be selling a shit ton fewer cards than what their stupid valuation was based on.

While they showed you didn't need a bazillion NVDA GPUs to get those particular results, the path of AI is such that having a bazillion GPUs will still be better for future development. You'll see some of these lessons incorporated into the models owned by those entities with the extra processing grunt. It is possible they will reach a point where the extra power doesn't provide any benefit to performance, but I really doubt we are anywhere near that. 


1 hour ago, Fudge Nuggets said:

If DeepSeek proved you don't need a bazillion NVDA GPUs then yes, NVDA is going to be selling a shit ton fewer cards than what their stupid valuation was based on.

https://www.livemint.com/ai/artificial-intelligence/scale-ai-ceo-alexandr-wang-claims-deepseek-hides-50-000-nvidia-h100-chips-stockpile-elon-musk-reacts-11738062626511.html

Wang explained that, “The Chinese labs have more H100s than people think. DeepSeek has more than 50,000 H100s, which they can’t talk about because of the export controls that the United States has in place.” This comes as the US continues to tighten regulations around semiconductor sales, particularly to China, to maintain a technological edge.


2 hours ago, Vegas64 said:

https://www.livemint.com/ai/artificial-intelligence/scale-ai-ceo-alexandr-wang-claims-deepseek-hides-50-000-nvidia-h100-chips-stockpile-elon-musk-reacts-11738062626511.html

Wang explained that, “The Chinese labs have more H100s than people think. DeepSeek has more than 50,000 H100s, which they can’t talk about because of the export controls that the United States has in place.” This comes as the US continues to tighten regulations around semiconductor sales, particularly to China, to maintain a technological edge.

I'm sure getting into a tariff war and pissing off the world will do wonders for any regulations actually having an effect. I wouldn't be surprised if at least a few allies are hedging their bets and getting cozy with the Chinese just in case. The US can no longer lean on political stability and fair treatment of allies to maintain soft power and influence when the country alternates between normalcy and madness every 4 years.

Another OpenAI safety researcher quit. What is that, like half their alignment team over the last couple of years? What the hell are they doing? The aptly named Steven Adler left this month and had this to say:

"An AGI race is a very risky gamble, with huge downside,” he said. “No lab has a solution to AI alignment today. And the faster we race, the less likely that anyone finds one in time.” Alignment is the process of keeping AI working toward human goals and values, not against them.

Welcome to the Jungle, indeed


2 hours ago, Dahobbs said:

While they showed you didn't need a bazillion NVDA GPUs to get those particular results, the path of AI is such that having a bazillion GPUs will still be better for future development. You'll see some of these lessons incorporated into the models owned by those entities with the extra processing grunt. It is possible they will reach a point where the extra power doesn't provide any benefit to performance, but I really doubt we are anywhere near that. 

When your stock is priced to perfection, any glitch brings the whole house of cards down.  The price usually overshoots on the downside before recovering to a reasonable level until the next scam errrr world changing technology comes around.


6 minutes ago, Fudge Nuggets said:

When your stock is priced to perfection, any glitch brings the whole house of cards down.  The price usually overshoots on the downside before recovering to a reasonable level until the next scam errrr world changing technology comes around.

I understand. I'm not arguing about valuation, necessarily. I'm just saying I'm not convinced NVIDIA's sales will see any significant reduction as a result of this. There is still pressure to have the best hardware, and more of it. That hasn't really changed unless we are already near the apex of LLMs (and I don't believe we are).


1 minute ago, Dahobbs said:

I understand. I'm not arguing about valuation, necessarily. I'm just saying I'm not convinced NVIDIA's sales will see any significant reduction as a result of this. There is still pressure to have the best hardware, and more of it. That hasn't really changed unless we are already near the apex of LLMs (and I don't believe we are).

Agree with this, especially since almost 50% of NVDA's sales/revenue comes from 5 companies (Meta, Amazon, Microsoft, Alphabet, Oracle). Those companies aren't changing their orders anytime soon, and 2025 is already sold out.


6 hours ago, raw dog said:

Fair enough. I just remember things like all the hype about the superconducting material, or, you know, all their economic data.

…….

You're right, I can't test it. Good luck, we're all counting on you.

You could not download superconducting material in a few minutes, watch a short YouTube tutorial, and then start testing it on your own.

But this thing, you can absolutely download it, watch the eventual YouTube videos that will show non-techies how to do it, and test it yourself. And because it's open source, highly technical people can not only test it, but pore over the code and see how it works. Which they are doing as we speak.

China has a lot of problems and is backwards in many ways, but they most likely manufactured the computer you are posting on Surly with, and the chips and circuit boards in your TVs and cars, and almost all of the networking equipment we all rely on. They have a shitload of students and H-1B workers here in the US working on the latest and greatest, and many go back to China with that knowledge. They aren't peasants in rice paddies.

 


7 hours ago, raw dog said:

As for Twitter, old habits die hard. It's still the fastest place to get a wide variety of news, and still the best populated as well. For example, I searched and didn't see this Gavin Baker fellow on Bluesky; now he's a Twitter follow for me. It's still quite useful. I guess not everyone has stopped using it?

No, not everybody has stopped using it; it will take time. But Musk won't stop being a shitty person, and as more and more organizations and institutions leave, it will reach a point where still being on it will associate you with Musk. Between Twitter being a welcoming place for Nazis, and Facebook fucking up people's feeds and the atmosphere and driving loyal users away with AI-caused bannings, there will be plenty of money/resources for upstarts to flourish.

GeoCities and MySpace didn't die overnight, and MySpace had a 6-month head start on Facebook (and arguably more momentum at the start for not being limited in its audience). Digg didn't die overnight either when Reddit started up. In fact, both Digg and MySpace are still around. Musk's vanity and Temu drop-shippers will keep the Twitter servers online for the foreseeable future, but many people and companies will go to the alternatives, and they are.

Musk doesn't care about driving many people away from Twitter and making it an echo chamber; he's willing to throw money at it, and it feeds his ego, and there's the benefit that as he drives users away, it gets cheaper to operate. Zuckerberg is the one who fucked himself over in the long term so he could make short-term gains.

Zuckerberg moving FB to AI moderation (automatically banning long-time users over innocent crap that a human would have realized wasn't violating the rules, with no way for many of them to get their accounts back), while deliberately crippling Threads to constantly drive its users to Instagram, is going to be the more interesting case. He doesn't have the US taxpayer propping up his space and car companies to serve as an ATM that keeps the lights on.

It's actually beautiful that Zuck's AI moderation is wrecking so many accounts of longtime FB users who had done nothing wrong. It's fitting.

And the Zuckerberg of the old days would have bent over backwards to make Threads a clone of Twitter in order to wreck Twitter while Musk drives millions of users away and makes companies look for alternatives. The Zuckerberg of 2025 is protecting Instagram while trying to peel off a few Twitter users.

Edited by atomheartbevo

24 minutes ago, Goredho said:

That is, if you're using their hosted service - a locally run DeepSeek-R1 model hasn't had any censorship problems with Tiananmen Square or other uncomfortable historical events.

Most laptops, and pretty much every M-series MacBook, can run the smaller models locally on CPU - give it a shot.

Edited by Captainant
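Trying this yourself is a few lines once Ollama is installed; the distilled R1 variants are published on the Ollama registry (the exact tag below is an assumption; check what's current):

```python
# Run a small distilled DeepSeek-R1 locally via Ollama and print the reply,
# reasoning chain included. Assumes `ollama pull deepseek-r1:1.5b` was run.
import ollama

response = ollama.chat(
    model="deepseek-r1:1.5b",  # assumed tag
    messages=[{"role": "user", "content":
               "What happened at Tiananmen Square in 1989?"}],
)
print(response["message"]["content"])
```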

1 hour ago, Captainant said:

That is, if you're using their hosted service - a locally run DeepSeek-R1 model hasn't had any censorship problems with Tiananmen Square or other uncomfortable historical events.

Most laptops, and pretty much every M-series MacBook, can run the smaller models locally on CPU - give it a shot.

Which is interesting, considering it means the training data wasn't vetted to avoid sensitive issues.


26 minutes ago, Ted Lange said:

 

[attached image]

 

 

He later added, "It's like rain on your wedding day."

While the DeepSeek team used a distillation of OpenAI models in some examples, they've also demonstrated the pattern using a distilled Alibaba model. Their pattern of using reinforcement learning to teach reasoning to a transformer, then coming back with "tune-up" passes to refresh its factual knowledge, is extremely effective and has resulted in smaller models producing just-as-good results as larger models.

For shiggles, I even loaded the 1.5B parameter model onto my S24U, and it was doing pretty well with simple-to-medium complexity questions. It's crazy how much they've packed into a neural network that's small enough to live in memory on my goddamn cell phone.

Edit: added a longcat screenshot in the spoiler below of a local LLM running on my cell phone, writing a Python function. My S24U was putting out 26 tokens per second!

[screenshot: PocketPal on the phone running a local model]

 

Edited by Captainant
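If you're curious how that tokens-per-second number gets measured, streaming the response and counting chunks is the usual quick-and-dirty approach. A sketch with the ollama Python package (model tag assumed; chunk count is only a rough proxy for token count):

```python
# Rough tokens/sec measurement for a local model: stream the response and
# count chunks, which most runtimes emit roughly one token at a time.
import time
import ollama

start = time.time()
chunks = 0
for part in ollama.chat(
    model="deepseek-r1:1.5b",  # assumed local tag
    messages=[{"role": "user",
               "content": "Write a Python function that reverses a string."}],
    stream=True,
):
    print(part["message"]["content"], end="", flush=True)
    chunks += 1

elapsed = time.time() - start
print(f"\n~{chunks / elapsed:.1f} tokens/sec (chunk count as a proxy)")
```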

Who is NewsGuard?

https://x.com/DeItaone/status/1884616963433099410

DEEPSEEK'S CHATBOT ACHIEVES 17% ACCURACY, TRAILS WESTERN RIVALS IN NEWSGUARD AUDIT

Chinese AI startup DeepSeek's chatbot achieved only 17% accuracy in delivering news and information in a NewsGuard audit that ranked it tenth out of eleven in a comparison with its Western competitors, including OpenAI's ChatGPT and Google Gemini. The chatbot repeated false claims 30% of the time and gave vague or not-useful answers 53% of the time in response to news-related prompts, resulting in an 83% fail rate, according to a report published by trustworthiness rating service NewsGuard on Wednesday.


5 hours ago, raw dog said:

Who is NewsGuard?

[...] Chinese AI startup DeepSeek's chatbot achieved only 17% accuracy in delivering news and information in a NewsGuard audit that ranked it tenth out of eleven in a comparison with its Western competitors. [...]

17% effective? So it’s a Chinese vaccine?


 

https://simonwillison.net/2025/Jan/29/on-deepseek-and-export-controls/#atom-everything

On DeepSeek and Export Controls. Anthropic CEO (and previously GPT-2/GPT-3 development lead at OpenAI) Dario Amodei's essay about DeepSeek includes a lot of interesting background on the last few years of AI development.

Dario was one of the authors on the original scaling laws paper back in 2020, and he talks at length about updated ideas around scaling up training:

The field is constantly coming up with ideas, large and small, that make things more effective or efficient: it could be an improvement to the architecture of the model (a tweak to the basic Transformer architecture that all of today's models use) or simply a way of running the model more efficiently on the underlying hardware. New generations of hardware also have the same effect. What this typically does is shift the curve: if the innovation is a 2x "compute multiplier" (CM), then it allows you to get 40% on a coding task for $5M instead of $10M; or 60% for $50M instead of $100M, etc.

He argues that DeepSeek v3, while impressive, represented an expected evolution of models based on current scaling laws.

[...] even if you take DeepSeek's training cost at face value, they are on-trend at best and probably not even that. For example this is less steep than the original GPT-4 to Claude 3.5 Sonnet inference price differential (10x), and 3.5 Sonnet is a better model than GPT-4. All of this is to say that DeepSeek-V3 is not a unique breakthrough or something that fundamentally changes the economics of LLM's; it's an expected point on an ongoing cost reduction curve. What's different this time is that the company that was first to demonstrate the expected cost reductions was Chinese.

Dario includes details about Claude 3.5 Sonnet that I've not seen shared anywhere before:

• Claude 3.5 Sonnet cost "a few $10M's to train"
• 3.5 Sonnet "was not trained in any way that involved a larger or more expensive model (contrary to some rumors)" - I've seen those rumors, they involved Sonnet being a distilled version of a larger, unreleased 3.5 Opus.
• Sonnet's training was conducted "9-12 months ago" - that would be roughly between January and April 2024. If you ask Sonnet about its training cut-off it tells you "April 2024" - that's surprising, because presumably the cut-off should be at the start of that training period?

The general message here is that the advances in DeepSeek v3 fit the general trend of how we would expect modern models to improve, including that notable drop in training price.

Dario is less impressed by DeepSeek R1, calling it "much less interesting from an innovation or engineering perspective than V3". I enjoyed this footnote:

I suspect one of the principal reasons R1 gathered so much attention is that it was the first model to show the user the chain-of-thought reasoning that the model exhibits (OpenAI's o1 only shows the final answer). DeepSeek showed that users find this interesting. To be clear this is a user interface choice and is not related to the model itself.

The rest of the piece argues for continued export controls on chips to China, on the basis that if future AI unlocks "extremely rapid advances in science and technology," the US needs to get there first, due to his concerns about "military applications of the technology".

