

Posted (edited)
1 hour ago, Goredho said:

The rest of the piece argues for continued export controls on chips to China, on the basis that if future AI unlocks "extremely rapid advances in science and technology" the US needs to get there first, due to his concerns about "military applications of the technology".

Ya know, applying brutal tariffs against Taiwan and TSMC - the ONLY people in the world who can make these chips - is really gonna help the strategic availability of those chips

Edited by Captainant
Posted

https://x.com/wallstengine/status/1885253685090930780

Spoiler

SemiAnalysis published an analysis on DeepSeek, addressing recent claims about its cost and performance. $NVDA

The report states that the widely circulated $6M training cost for DeepSeek V3 is incorrect, as it only accounts for GPU pre-training expenses and excludes R&D, infrastructure, and other critical costs. According to their findings, DeepSeek's total server CapEx is around $1.3B, with a significant portion allocated to maintaining and operating its GPU clusters. The report also states that DeepSeek has access to roughly 50,000 Hopper GPUs, but clarifies that this does not mean 50,000 H100s, as some have suggested. Instead, it's a mix of H800s, H100s, and the China-specific H20s, which NVIDIA has been producing in response to U.S. export restrictions. SemiAnalysis points out that DeepSeek operates its own datacenters and has a more streamlined structure compared to larger AI labs.

On performance, the report notes that R1 matches OpenAI's o1 in reasoning tasks but is not the clear leader across all metrics. It also highlights that while DeepSeek has gained attention for its pricing and efficiency, Google's Gemini Flash 2.0 is similarly capable and even cheaper when accessed through API.

A key innovation cited is Multi-Head Latent Attention (MLA), which significantly reduces inference costs by cutting KV cache usage by 93.3%. The report suggests that any improvements DeepSeek makes will likely be adopted by Western AI labs almost immediately. SemiAnalysis also mentions that costs could fall another 5x by the end of the year, and that DeepSeek's structure allows it to move quickly compared to larger, more bureaucratic AI labs. However, it notes that scaling up in the face of tightening U.S. export controls remains a challenge.
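For a rough sense of what that MLA number means: standard attention caches full per-head keys and values for every token, while an MLA-style cache stores one small compressed latent per token and re-projects it at attention time. A back-of-the-envelope sketch in Python, with made-up dimensions chosen only to land in the same ballpark as the quoted 93.3% (not DeepSeek's actual config):

```python
# Back-of-the-envelope comparison of per-layer KV-cache sizes.
# Dimensions are illustrative; real savings depend on the model's
# actual head count, head size, and latent width.
n_heads, head_dim, d_latent, seq_len = 64, 128, 1024, 4096

# Standard multi-head attention: cache full K and V for every head and token.
standard_cache = 2 * n_heads * head_dim * seq_len

# MLA-style: cache one compressed latent per token, rebuild K/V at attention time.
latent_cache = d_latent * seq_len

print(f"standard KV cache: {standard_cache:,} values")
print(f"latent cache:      {latent_cache:,} values")
print(f"reduction:         {1 - latent_cache / standard_cache:.1%}")
```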

 

Posted

State of Texas bans it from government devices, along with some Chinese social media stuff.  Private citizens can still access it, of course.  There are rumblings elsewhere that they are going to try and block it for everybody, but also that there is resistance and Constitutional issues.

https://m10news.com/texas-becomes-first-state-to-ban-chinese-ai-app-deepseek-and-social-media-platform-rednote/

Posted
1 hour ago, atomheartbevo said:

State of Texas bans it from government devices, along with some Chinese social media stuff.  Private citizens can still access it, of course.  There are rumblings elsewhere that they are going to try and block it for everybody, but also that there is resistance and Constitutional issues.

https://m10news.com/texas-becomes-first-state-to-ban-chinese-ai-app-deepseek-and-social-media-platform-rednote/

I mean, you can also just download it and run it locally. There is no practical way to block it. 
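For anyone wondering what "download it and run it locally" actually looks like: the distilled R1 checkpoints are published on Hugging Face and load through the standard transformers API. A minimal sketch, assuming the small distill named below is available (the full-size model needs a rack of GPUs, not a desktop):

```python
# Minimal local-inference sketch with Hugging Face transformers.
# Assumes the small distilled checkpoint below; adjust for your hardware.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",  # needs the accelerate package installed
)

prompt = "Explain the traveling salesman problem in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```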

Posted
29 minutes ago, Dahobbs said:

I mean, you can also just download it and run it locally. There is no practical way to block it. 

There is also noise about banning it nationally somehow, and that’s the fucking bat signal for /r/DataHoarders to get to work and download every version they can find and post it around the internet. 

Posted
19 hours ago, atomheartbevo said:

There is also noise about banning it nationally somehow, and that’s the fucking bat signal for /r/DataHoarders to get to work and download every version they can find and post it around the internet. 

There are genuine free speech issues around the idea of banning a specific configuration of weights and biases in a transformer architecture.

So I'm sure it'll go through without a hitch, knowing our current timeline

Posted

https://arstechnica.com/tech-policy/2025/02/meta-torrented-over-81-7tb-of-pirated-books-to-train-ai-authors-say/

"Torrenting from a corporate laptop doesn’t feel right," Nikolay Bashlykov, a Meta research engineer, wrote in an April 2023 message, adding a smiley emoji. In the same message, he expressed "concern about using Meta IP addresses 'to load through torrents pirate content.'"

By September 2023, Bashlykov had seemingly dropped the emojis, consulting the legal team directly and emphasizing in an email that "using torrents would entail ‘seeding’ the files—i.e., sharing the content outside, this could be legally not OK."

Emails discussing torrenting prove that Meta knew it was "illegal," authors alleged. And Bashlykov's warnings seemingly landed on deaf ears, with authors alleging that evidence showed Meta chose to instead hide its torrenting as best it could while downloading and seeding terabytes of data from multiple shadow libraries as recently as April 2024.

Posted
12 minutes ago, Captainant said:

Emails discussing torrenting prove that Meta knew it was "illegal," authors alleged. And Bashlykov's warnings seemingly landed on deaf ears, with authors alleging that evidence showed Meta chose to instead hide its torrenting as best it could while downloading and seeding terabytes of data from multiple shadow libraries as recently as April 2024.

From the article

Quote

The new evidence showed that Meta torrented "at least 81.7 terabytes of data across multiple shadow libraries through the site Anna’s Archive, including at least 35.7 terabytes of data from Z-Library and LibGen," the authors' court filing said. And "Meta also previously torrented 80.6 terabytes of data from LibGen."


Posted
39 minutes ago, atomheartbevo said:

From the article


That should result in an historic judgment for copyright infringement.  Historic in the sense of magnitude.

Posted
On 2/3/2025 at 8:40 PM, atomheartbevo said:

There is also noise about banning it nationally somehow, and that’s the fucking bat signal for /r/DataHoarders to get to work and download every version they can find and post it around the internet. 

Freedom of speech covered this on 3D printed guns. Why is this different? Can’t stop the signal.

  • 2 weeks later...
Posted
On 2/6/2025 at 10:18 PM, B00M said:

Freedom of speech covered this on 3D printed guns. Why is this different? Can’t stop the signal.

They are going to try, starting with, you guessed it, Frank Stallone  Texas

https://www.kxan.com/news/texas/paxton-launches-investigation-into-chinese-ai-company/

Quote

On Friday, Attorney General Ken Paxton announced an investigation into DeepSeek, a Chinese AI company, regarding its privacy practices.

The lawsuit claims DeepSeek violated the Texas Data Privacy and Security Act.

Quote

“The United States and Texas will continue to be at the forefront of global AI innovation,” Paxton said, “and any CCP-aligned company that tries to undermine that dominance by violating the rights of Texans and illegally undercutting American technology companies will face the full force of the law.”

Paxton said he sent Civil Investigative Demands to Google and Apple “requesting their analysis of the DeepSeek application, as well as the documentation DeepSeek was required to submit to them before they made DeepSeek’s app available to consumers.”

Nice Serenity shout-out, by the way.

Posted
8 minutes ago, Ted Lange said:

 

I'm a specialist in this space - this is 100% accurate. I have some customers that are starting to build real business cases for this sort of tech, but nothing to the degree that the market has inflated its value and speculated on that basis. 

GenAI became the next hot buzzword that vapid MBAs would say to each other at board meetings, without really understanding what it meant. They saw line going up, so it validated the okeydoke they were all nodding along to. The music is gonna stop and the bubble is gonna pop.

 

Now, that's not to say it's all bullshit. To the contrary, it's a novel mechanism for meaningfully parsing, in mere hours, libraries of data that have been locked away in unstructured text. There's crazy potential for applications in contract management and terms disputes, archival information searching and integration, theory possibility space exploration, and on and on. But critically, none of those things justify the trillions in capital expenditure that we've seen so far.

It does represent a significant enhancement in our information processing capability. It's just that I doubt those improvements will be put toward social goods; more likely they'll go toward further concentration of wealth.
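To make the contract-management example above concrete, most of those business cases boil down to "fixed extraction prompt in, structured JSON out." A rough sketch, where the OpenAI client is just a stand-in for whatever model endpoint you use and the field names, prompt, and model name are placeholders:

```python
# Rough sketch of the "contract parsing" use case: one extraction prompt
# per document, structured JSON back. All names here are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

EXTRACTION_PROMPT = """Return JSON with keys: parties, effective_date,
termination_clause, liability_cap. Use null for anything not in the text.

Contract text:
{contract}"""

def extract_terms(contract_text: str) -> dict:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model works
        messages=[{"role": "user",
                   "content": EXTRACTION_PROMPT.format(contract=contract_text)}],
    )
    # A production pipeline would validate this against a schema before trusting it.
    return json.loads(response.choices[0].message.content)
```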

Posted
29 minutes ago, Captainant said:

I'm a specialist in this space - this is 100% accurate. I have some customers that are starting to build real business cases for this sort of tech, but nothing to the degree that the market has inflated its value and speculated on that basis. 

GenAI became the next hot buzzword that vapid MBAs would say to each other at board meetings, without really understanding what it meant. They saw line going up, so it validated the okeydoke they were all nodding along to. The music is gonna stop and the bubble is gonna pop.

 

Now, that's not to say it's all bullshit. To the contrary, it's a novel mechanism for meaningfully parsing, in mere hours, libraries of data that have been locked away in unstructured text. There's crazy potential for applications in contract management and terms disputes, archival information searching and integration, theory possibility space exploration, and on and on. But critically, none of those things justify the trillions in capital expenditure that we've seen so far.

It does represent a significant enhancement in our information processing capability. It's just that I doubt those improvements will be put toward social goods; more likely they'll go toward further concentration of wealth.

What would you, a specialist, say to someone who has a general thesis and market bet that goes something like this:

"AI / Gen AI boom that started in 2022 is simply a stepping stone and a use case bandaid until Quantum Computing is GA in 2030-2035 timeframe.

Posted
1 minute ago, Vegas64 said:

What would you, a specialist, say to someone who has a general thesis and market bet that goes something like this:

"AI / Gen AI boom that started in 2022 is simply a stepping stone and a use case bandaid until Quantum Computing is GA in 2030-2035 timeframe.

Quantum computing does something completely and specifically different from classical computing. I am still building my depth in quantum computing - BUT - where quantum would come in handy is solving NP-hard problems that classical computers are bad at. Vehicle routing (traveling salesman, anyone?), cryptography cracking, protein folding, etc., are all well-suited. Even, potentially, setting the weights and biases in a neural network for optimal performance and throughput - rather than running billions of cycles of gradient descent, trying to avoid local maxima and minima, to find the system-wide min/max that gives the most generally suitable solution.
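To illustrate that local-minima point with something purely classical: plain gradient descent on a bumpy one-variable function settles in whichever basin it starts near, which is why training runs burn so many cycles trying to avoid bad local optima. A toy sketch with made-up numbers, nothing quantum about it:

```python
# Toy illustration of gradient descent settling in a local minimum.
# The function and step size are made up purely for illustration.
def f(x):
    # Two basins: a deeper one near x = -2 and a shallower one near x = +2.
    return (x + 2) ** 2 * (x - 2) ** 2 + x

def descend(x, lr=0.01, steps=5000):
    for _ in range(steps):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6  # numerical gradient
        x -= lr * grad
    return x

print(descend(-3.0))  # ends near x = -2.03, the global minimum
print(descend(+3.0))  # ends near x = +1.97, stuck in the shallower local minimum
```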

These are problems in which the solution is easily verified but extremely difficult to find using classical computing methods. Quantum computing promises to find the right parameters to make the math work the best. But you still need something to actually DO that math, and that's where classical computing will remain.
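The "easily verified, hard to solve" asymmetry is easy to show classically: checking the length of a proposed tour is one linear pass over the cities, while finding the best tour by brute force means trying factorially many orderings. A tiny sketch on a random instance:

```python
# Traveling salesman on a tiny random instance: cheap to verify, expensive to solve.
import itertools, math, random

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(8)]

def tour_length(order):
    # Verifying a candidate tour is cheap: one pass over the cities.
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

# Finding the optimum is the expensive part: brute force tries (n-1)! orderings.
best = min(itertools.permutations(range(1, len(cities))),
           key=lambda rest: tour_length((0,) + rest))
print("best tour:", (0,) + best)
print("length:", round(tour_length((0,) + best), 3))
```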

 

Broadly speaking, "AI" tries to guess or approximate the best response to a prompt, and it informs that guess based on the data it was trained on iteratively over time. "Quantum computing" simply collapses all possibility spaces in O(1) time such that only the best response still exists and is not obliterated.

 

That is a massive oversimplification, and quantum computing is (as far as I know) nowhere close to the general utility needed to actually set up a quantum system that approximates a complex real-world system. But as a student of computer science, if I were to stick my finger in the air and guess where we are on the timeline, quantum computing today is closer to UNIVAC than to x86.

