
Markets still falling like whoa



Well I definitely should have bailed more on my PLTR, BUT my UVIX has been offsetting the losses on my remaining shares.  I can't decide if I sell it all and eat a big cap gain, or hold and risk watching the need for cap gain taxes slowly erode.  I am sort of thinking that though I love PLTR long term, I have seen it ride high and then collapse as well.  Looks like the momentum is against it now, and that's not a good place to be on a high multiple like PLTR.  I want to hold, but when I see UVIX surge and PLTR wane...  

I am starting to wonder where the good news is gonna come from in the next few weeks.  By this time next week we will have a slew of Executive Orders that will be difficult for the market to digest with any certainty.  We are going to see confirmation of some of the least qualified political appointees this country has ever seen (IMHO, at least on paper).  The average American won't even realize this, but the big hitters in the market will.  The only question in my head is whether there is enough financial exuberance from the Republican investor side to overcome the instability and unpredictability directly in front of us.  I think taking losses is where the rubber will hit the road on this exuberance surrounding deregulation and lower taxes.  IF it becomes clear that the Trump Bump is actually the Trump Slump?  Yikes!

So financially we have unpredictability in the market, mainly around inflation concerns, in large part because of lofty valuation risk.  We are going to see confirmation of what I think most everyone would call disrupters at most positions.  Then we are supposedly going to get the "mother of all reconciliation bills!"  Which I fear the market will see as the largest deficit-producing legislation this country has ever seen, and which I in turn would assume will add to inflation fears?  Or will the certainty of a single monster bill be viewed as a positive by the market?  A single large bill might be a debt buster, but it also might give the market a better look at future predictability? 

Anyhow a very unpredictable and skittish market right now.  When my hedge is moving toward becoming my big dog, I wonder if my hedge isn't actually where I should be moving more and more of my chips? 

My big fuck-up last week was not buying WBA options (Walgreens) after their epic beatdown, before they announced earnings, as well as not trimming my PLTR position more and adding even more to UVIX.  I added a little more UVIX.  But I can't decide if I should move completely to risk-on?  Seems like that's where things are drifting right now?

 


Well when you guys figure out how to eliminate the "IF" from market prediction statements please let me know!  

Instead perhaps I should be asking how many of you guys have fairly large hedges against the market right now.  I am considering betting strongly against the market and paying a big cap gain bill to do so.  So "if" you guys can help me eliminate those pesky "ifs" out of my forward looking analysis, that would be particularly helpful. 😉


8 minutes ago, horn4life said:

Well when you guys figure out how to eliminate the "IF" from market prediction statements please let me know!  

Instead perhaps I should be asking how many of you guys have fairly large hedges against the market right now.  I am considering betting strongly against the market and paying a big cap gain bill to do so.  So "if" you guys can help me eliminate those pesky "ifs" out of my forward looking analysis, that would be particularly helpful. 😉

There will be a ton fewer "ifs" after January 20th. That is the origin point of the uncertainty; we really don't know what was lies and what was truth. 


Just now, Captainant said:

There will be a ton fewer "ifs" after January 20th. That is the origin point of the uncertainty; we really don't know what was lies and what was truth. 

I don't completely disagree, but what will we know on the 21st-30th?  My point is that uncertainty does not normally bring additional dollars into the market, as every trade in the market is the result of two people both feeling that the exact opposite outcome is more likely than not.  And they want to bet on it!!!  If... I think January 20th will not bring much clarity at all.  I think it will bring a slew of executive orders, not unlike a firehose, which is going to be difficult for the market to digest as well.  (My own personal opinion is the markets will react negatively to this additional lack of clarity.)

I view January 20th as more a point of clarity about how very, very difficult it's going to be, at least in the short term, for most investors to feel the upside "if" outweighs the downside "if." My perspective is that we may not actually have much true clarity until March?  And I see that as downward rather than upward pressure.  AGAIN, the one factor I wish I knew more than anything is how much sideline money will view January 20th as an inflection point to come into the market. Do investors view the 20th as a point of clarity?  IF so, the market should move upward if the clarity is good.  But the two scenarios of negative clarity or continued uncertainty are both negatives.  Sort of like Darrell Royal on throwing a pass: you have to weigh the interception and the incompletion against the completion, and two of the three things are bad.

When the clarity comes, the predictability will come.  I just think we are in for a period of greater uncertainty than normal, and I don't see real clarity for another 60 days if it's positive clarity. 

 

 


5 hours ago, horn4life said:

Well when you guys figure out how to eliminate the "IF" from market prediction statements please let me know!  

Instead perhaps I should be asking how many of you guys have fairly large hedges against the market right now.  I am considering betting strongly against the market and paying a big cap gain bill to do so.  So "if" you guys can help me eliminate those pesky "ifs" out of my forward looking analysis, that would be particularly helpful. 😉

I have some SPY puts that I bought as a hedge against about 60% of my portfolio. I'm not looking to sell my SPY shares, but if there's value in them I'll sell the options. And that's kind of the tricky park: when you sell. I have both 3/31 and 6/30 expiries.


25 minutes ago, Wally Fairway said:

I have some SPY puts that I bought as a hedge against about 60% of my portfolio. I'm not looking to sell my SPY shares, but if there's value in them I'll sell the options. And that's kind of the tricky park: when you sell. I have both 3/31 and 6/30 expiries.

Tricky Park.  Is that Linkin Park's less shitty brother?  


On 1/13/2025 at 3:10 PM, horn4life said:

Well I definitely should have bailed more on my PLTR, BUT my UVIX has been offsetting the losses on my remaining shares... [snip - full post quoted above]

 

UVIX isn't something you want to hold long term. It's an extreme short-term hedge, and sub-optimal on any longer timescale because it suffers volatility drag, which erodes value as markets stay flat or when, ironically, the volatility (its target) is itself volatile. (UVIX holds VIX futures, and as those contracts wobble, the rebalancing hurts it.)
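To make the volatility-drag point concrete, here's a toy simulation of a daily-rebalanced 2x product over a flat-but-choppy year (made-up numbers and a simplified model, not UVIX's actual futures roll or fees):

```python
import random

# Toy illustration of volatility drag on a daily-rebalanced 2x product.
# Numbers and mechanics are simplified/hypothetical, not UVIX's actual behavior.
random.seed(1)

underlying = 100.0
two_x = 100.0
for day in range(252):                 # one trading year
    r = random.gauss(0.0, 0.05)        # flat on average, but very choppy
    underlying *= (1 + r)
    two_x *= (1 + 2 * r)               # doubles each DAILY move, resets daily

print(f"underlying after a year: {underlying:6.1f}")
print(f"2x daily product:        {two_x:6.1f}")
# The 2x product typically lands far below 2x the underlying's move, because
# e.g. (1 + 2r)(1 - 2r) = 1 - 4r^2 shrinks faster than (1 + r)(1 - r) = 1 - r^2.
```

Run it a few times; the gap between "2x the underlying" and the daily-2x product is the drag being described.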

If you're skittish, rather than buying hedges, short-term rates are an attractive place to park your cash: <1yr bond yields are above 4% with minimal duration risk, and a money market account can give you something similar.

I bought SPY put spreads before the new year, "expecting" a come-down from the Santa rally, and those monetized really well. New cash is waiting for the end of Q1 (a typical lull in seasonality) and the settling-in of the new administration before buying into the market.  Also, don't listen to me because I sold off PLTR in the 30s.


5 hours ago, 52-80 said:

UVIX isn't something you want to hold long term. It's an extreme short-term hedge, and sub-optimal on any longer timescale because it suffers volatility drag, which erodes value as markets stay flat or when, ironically, the volatility (its target) is itself volatile. (UVIX holds VIX futures, and as those contracts wobble, the rebalancing hurts it.)

If you're skittish, rather than buying hedges, short-term rates are an attractive place to park your cash: <1yr bond yields are above 4% with minimal duration risk, and a money market account can give you something similar.

I bought SPY put spreads before the new year, "expecting" a come-down from the Santa rally, and those monetized really well. New cash is waiting for the end of Q1 (a typical lull in seasonality) and the settling-in of the new administration before buying into the market.  Also, don't listen to me because I sold off PLTR in the 30s.

Yeah I trimmed PLTR... this week near the lows.  Still up, but as you said I got skittish.   Good points on UVIX.  Especially the erosion in rebalancing!    

The exuberance over the 2.8-3.2 inflation data makes me wonder how the market would have reacted to a slight nudge in the other direction.  


4 minutes ago, Wally Fairway said:

Is Uncle Warren sick? 

I saw that sell-off as well. Apparently it was about shutting down Pilot's oil trading?  Hell, I didn't know there was anything but gas stations with high covers that won't fuck up my RV trailer on the way out of San Antonio.   

 


On 1/13/2025 at 8:41 AM, horn4life said:

Well when you guys figure out how to eliminate the "IF" from market prediction statements please let me know!  

Instead perhaps I should be asking how many of you guys have fairly large hedges against the market right now.  I am considering betting strongly against the market and paying a big cap gain bill to do so.  So "if" you guys can help me eliminate those pesky "ifs" out of my forward looking analysis, that would be particularly helpful. 😉

There’s still enough liquidity and passive investing that if there’s a correction, it’s not likely until late 2025. Maybe a relatively sideways ‘25 compared to ‘23 and ‘24. If it makes you feel better, maybe sell some shares of some positions but don’t create a large tax burden? 


16 minutes ago, washparkhorn said:

China’s DeepSeek killing the AI vibe. It does more with less (or nvidia chips somehow made it to China via Singapore). 

Apparently the energy efficiency aspect is a biggie, as that is a fixed operating cost for chip investments.  I hopped over to OKLO (small nuke energy provider) and it's down 16% in pre-market.  

I sort of think the market is gonna shed the broader losses over the day.  Or will folks wake up in a panic and really fuck shit up? I have no fucking idea. But the market is skittish as hell, and then there are the Fed's musings while holding, and inflation data up next week...


I meant inflation data at the end of this week.  The market surged like a mofo on good inflation data, so I think the number will be a market mover.  If the numbers drift up even slightly, does that cause a marked downturn in the market?  Or can Trump actually whip the Fed lower, even in the face of challenging inflation data?  In that case, is standing pat (no movement up or down) considered a positive or a negative?

Anyhow likely to be another week that seems like an eternity...

I guess you are jumping for joy if you were moving into China tech

 


5 minutes ago, hornbri said:

China just showed how shallow the AI “moat” is. 

I think we are going to see AI valuations reset some now. 

There's an astonishing level of hype and price inflation in that space in the markets, with outrageous speculative valuations already baked in. Shit like an electric car company being worth more than its next 5 competitors combined because somethingsomething AI. Never mind that the AI doesn't contribute revenue to the business at all, and their car sales are falling YoY - TO THE MOON MEME STOCK BAYBEEE


5 minutes ago, hornbri said:

China just showed how shallow the AI “moat” is. 

I think we are going to see AI valuations reset some now. 

I don't think this market move is wholly attributable to DeepSeek; it has lots of confounding effects:

- January is seasonally slow for equity indices (post Santa rally)

- US AI-centric valuations were already stretched, ex-China

- the DeepSeek claims seem a bit sketchy

I bet SPX ends up flat-to-up by week's end.

 


23 minutes ago, 52-80 said:

I don't think this market move is wholly attributable to DeepSeek; it has lots of confounding effects:

- January is seasonally slow for equity indices (post Santa rally)

- US AI-centric valuations were already stretched, ex-China

- the DeepSeek claims seem a bit sketchy

I bet SPX ends up flat-to-up by week's end.

 

The DeepSeek claim isn't sketchy in terms of what the model does. It's sketchy because no one knows if they actually trained it with fewer GPUs. The model is as good as or better than OpenAI o1, period. It's better than Llama 4. If they did in fact do it with 1/10th the GPUs (complete horseshit), then it's a big moment of accountability for a lot of companies who are overspending on a solvable problem. 

They have been working with AMD as well, so to say it's just H100s is silly. It could be that they are using GPUs from both at the same time in a novel fabric architecture, but I'm skeptical until they talk more about what the general architecture of the compute cluster used to train it was. 

The models are real though, and not at all sketch; there's a reason they made them MIT-licensed and broadly available. It was to shit on everyone. 


7 minutes ago, immamac said:

The DeepSeek claim isn't sketchy in terms of what the model does. It's sketchy because no one knows if they actually trained it with fewer GPUs. The model is as good as or better than OpenAI o1, period. It's better than Llama 4. If they did in fact do it with 1/10th the GPUs (complete horseshit), then it's a big moment of accountability for a lot of companies who are overspending on a solvable problem. 

They have been working with AMD as well, so to say it's just H100s is silly. It could be that they are using GPUs from both at the same time in a novel fabric architecture, but I'm skeptical until they talk more about what the general architecture of the compute cluster used to train it was. 

The models are real though, and not at all sketch; there's a reason they made them MIT-licensed and broadly available. It was to shit on everyone. 

Presumably I'm not the only one on this..

[gif]


31 minutes ago, immamac said:

The DeepSeek claim isn't sketchy in terms of what the model does. It's sketchy because no one knows if they actually trained it with fewer GPUs. The model is as good as or better than OpenAI o1, period. It's better than Llama 4. If they did in fact do it with 1/10th the GPUs (complete horseshit), then it's a big moment of accountability for a lot of companies who are overspending on a solvable problem. 

They have been working with AMD as well, so to say it's just H100s is silly. It could be that they are using GPUs from both at the same time in a novel fabric architecture, but I'm skeptical until they talk more about what the general architecture of the compute cluster used to train it was. 

The models are real though, and not at all sketch; there's a reason they made them MIT-licensed and broadly available. It was to shit on everyone. 

It seems likely they did build it on H800s and optimized for them (at least for the final training run). 

So it seems they still used NVDA, just not the H100s; it also remains to be seen if they can adapt this to run even better on the H100s.


45 minutes ago, UTPhil2006 said:

Presumably I'm not the only one on this..

[gif]

https://huggingface.co/deepseek-ai/DeepSeek-R1

With AI there's a few phases of actual computationally intense work. 

1) Data prep/loading for training - This is where you get all the stuff you want to show the AI together, cleaned up and formatted and tagged and whatever. You have to be able to tell an AI what to compare stuff to, and when, and if something is right or wrong after it spits out whatever it is doing in training. Think of this like decomposing a food dish into its ingredients and how much of each was in the recipe. They "tokenize" the data and assign arbitrary neural net locations to these tokens. 

Simply put, this is a brain from newborn until 10-12 years old, where you are mostly in feeding-and-organizing-what-is-fed-to-you mode. 
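A toy sketch of that tokenize-and-assign step (real tokenizers use subword schemes like BPE, so this only shows the flavor of mapping text to arbitrary slots):

```python
# Toy tokenizer: map each word to an arbitrary ID, the way the post describes
# assigning tokens to neural-net "locations". Real systems use subword (BPE-style)
# tokenizers, so this is only illustrative.
corpus = "the cat sat on the mat"

vocab = {}
token_ids = []
for word in corpus.split():
    if word not in vocab:
        vocab[word] = len(vocab)   # arbitrary slot for a new token
    token_ids.append(vocab[word])

print(vocab)      # {'the': 0, 'cat': 1, 'sat': 2, 'on': 3, 'mat': 4}
print(token_ids)  # [0, 1, 2, 3, 0, 4]
```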

2) Training itself - this is where the big math stuff happens. It adds parameters and model weights to certain training results and just tries over and over and over and over to spit out something it knows is right by a reward system (that was more right than less right, etc.). They do this countless times until there start to be associations between the neural nodes themselves. So the likely next token is probably the one whose neuron has a shitload of connections to the prior token. The one with the most connections is the weighted winner by default, but you can mess with the parameters to get a range. Once you do this enough times you start to get to a place where it's mostly accurate at regurgitating prompts that you have fed it from your big pool of tokenized data. Think of this like having all the ingredients to make every dish you've ever had, and also the recipe cards, and you just randomly mix them and taste the food, then look at the recipe and see how wrong you were (not what you did wrong, just that you weren't right), and then you try again until you get good enough to just recreate any dish you are asked and the recipe is close enough. 

Simply put, this is where you learn how to learn and start to use what you've learned in school. The 12-19ish brain, where you go to school, play sports, learn to drive, do things, etc. 
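And a toy version of the "most-connected token wins by default, but you can mess with the parameters to get a range" idea, using a temperature knob over some made-up connection strengths:

```python
import math
import random

# Hypothetical "connection strengths" from the current token to candidate next tokens.
scores = {"horns": 5.0, "cattle": 2.0, "tacos": 1.0}

def pick_next(scores, temperature=1.0):
    # Softmax-style weighting: low temperature -> the strongest connection
    # almost always wins; high temperature -> more of a range.
    weights = {tok: math.exp(s / temperature) for tok, s in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # fallback for floating-point edge cases

print(pick_next(scores, temperature=0.1))   # essentially always "horns"
print(pick_next(scores, temperature=2.0))   # sometimes "cattle" or "tacos"
```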

2b) reinforcement learning or reasoning etc. This is where you do the things in step 1 but have stuff that wasn't in the training set thrown at you at the same time and you are graded significantly more harshly to the point where you need to not only recreate a dish but also do any modifications or substitutes asked in the prompt and it still be acceptable. 

Simply put, this is where you start to really work on specific problems and use your knowledge to do stuff. Think junior year of college, or post-journeyman in a trade, moving into graduate or master's programs or into unsupervised work in the trades, up to and including taking your PE exam for engineers. 19-25ish of the human brain. 

After step 2 you can plop that out into what's called a model. This is a saved state of the whole neural network and the weights and all that, but without having to store the training set, just the tokens and parameters. The larger the parameter count, the bigger the model (the 1B, 7B, 30B, 200B, 875B you see next to models). You can get pretty awesome results with low-parameter models that are well trained, without having to go crazy on the next part. 
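Rough back-of-envelope for how those parameter counts translate into memory (weights only, ignoring KV cache and overhead; 671B is used below as R1's published total parameter count, and the 8-GPU / ~800GB figure further down is roughly that model at one byte per parameter):

```python
# Back-of-envelope: parameter count -> GB of weights to hold in memory.
# Assumption: weights only, no KV cache or runtime overhead.
def weights_gb(params_billion, bytes_per_param):
    return params_billion * bytes_per_param  # 1B params * N bytes ~= N GB

for size in (1, 7, 30, 200, 671):
    fp16 = weights_gb(size, 2)   # 16-bit weights
    int8 = weights_gb(size, 1)   # 8-bit weights
    print(f"{size:>4}B params: ~{fp16:5.0f} GB at fp16, ~{int8:5.0f} GB at 8-bit")

# At ~1 byte/param, a 671B-parameter model is ~671 GB of weights, which is why
# a cluster with ~800 GB of HBM (mentioned below) is in the right ballpark.
```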

3) Inference - this is where you take the model and load it into working memory. So the bigger the model, the bigger your working memory needs to be. And all this working memory needs to be immediately accessible so it can start outputting the answer. This is where you have a restaurant with anyone in the world asking you to cook anything they've ever eaten that's recorded in history, order it their way, and expect you to cook it in as little time as possible. The faster your memory is, and the better your reinforcement learning was, the faster and more accurately you can serve this dish. 

Simply put, this is being the engineer or doctor or tradesperson or whatever you have functional expertise in as an adult. This is what you get paid to do with your brain, not your body. Human brain 25+ish. 

 

Except what we are trying to do with these LLM and agentic models is actually do this whole thing once and have it be expert-level at everything. So instead of making a bot to win Iron Chef, we are trying to make a bot that can win Iron Chef, MasterChef, Survivor, Big Brother, call the NFC championship game, engineer a building, fix traffic in Chicago, make a cure for cancer, build the best app in the app store, etc. It can do anything you throw at it, and it can do it awesome. 

Small language models, or models trained specifically on a certain topic, are far better, and you can take an LLM and specialize it in step 2b, and then it can just win Iron Chef every time. 

Models right now have 3 styles of release. 

1) Commercial/Proprietary - you pay us, we give you the license and let you use it according to a specific cost structure, whether you pay per token or run it yourself and pay a license fee. 

2) Commercial/Open Source - you can download the model and use it on your own computers, or you can pay us to use it on our computers under a specific cost structure, unless you make a bunch of money off using it to actually do something, and then we get a cut of that (Meta's Llama model).

3) Completely Open Source - you can download it and do whatever you want with it as long as you say it's our model. Tons of people can offer this on any cost structure, but the developer usually offers a plan too (DeepSeek). 

If you have 8 H100s or MI350Xs or whatever, with about 800GB of HBM across the GPUs in a cluster, you can run DeepSeek-R1 with no terms and conditions. Download it from the link up top. 

They have distilled models that shrink it down, so you can run it on anything from a laptop up through a pretty beefy workstation.

Basically they gave the brain away; you just need to put it in an android, and how much memory your android has determines how fancy a model you can install. 
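For anyone who wants to poke at one of those distilled checkpoints locally, a minimal sketch with the Hugging Face transformers library (the model name below is one of the published R1 distills and is assumed here; swap in whichever size your RAM/VRAM can hold, and expect to need transformers plus torch/accelerate installed):

```python
# Minimal sketch: run one of the distilled DeepSeek-R1 checkpoints locally.
# Model name assumed from the DeepSeek-R1 release; pick a size your memory can hold.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain volatility drag in two sentences."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```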


9 minutes ago, hornbri said:

It seems likely they did build it on H800s and optimized for them (at least for the final training run). 

So it seems they still used NVDA, just not the H100s; it also remains to be seen if they can adapt this to run even better on the H100s.

This is where things get sketchy. They run around saying they've got 50k H100s, they have H800s, and they have also been working with Instinct at AMD. They are doing all kinds of shit, but Meta talks about what they are doing, very specifically.

https://about.fb.com/news/2024/04/introducing-our-next-generation-infrastructure-for-ai/

They tell you what they are doing not necessarily how. 

DeepSeek hasn't said what they are doing, but they put out a paper on "how," and it doesn't make a ton of sense, because unless you magically figured out a new way to use GPUs, there's a certain level of computation that just needs to happen. 


43 minutes ago, UTPhil2006 said:

Presumably I'm not the only one on this..

[gif]

 

5 minutes ago, immamac said:

https://huggingface.co/deepseek-ai/DeepSeek-R1

With AI there's a few phases of actual computationally intense work... [snip - full post quoted above]

AI took this guy's WALL OF TEXT and summarized it for a 5-year old like this:

 
  1. Gathering the Cosmic Ingredients - Picture this: our AI is like a young superhero, just starting out. But before they can fight crime, they need their powers! So, we collect all the cosmic dust, kryptonite bits, and vibranium shards from across the universe - that's our data. We clean 'em up, label 'em, and make 'em ready for the big transformation. It's like when Thor needs his hammer or Iron Man needs his suit, but instead, we're prepping super-knowledge!
  2. The Super-Training Montage - Now, imagine our hero in a training montage, like Rocky or Spider-Man. They're in a lab, training with Professor X's brain machine. Every time they guess the right move or solve a puzzle, they get stronger, faster, smarter. This AI is doing the same, practicing with its data like a superhero practicing punches or flight. It's learning to predict the next villain move or solve the city's problems, but instead of muscles, it's growing brain power!
  3. Advanced Hero Skills - Here's where it gets epic! Like when Captain America gets the super serum, we throw in some new, crazy scenarios at our AI. It's not just fighting the same bad guy; now, it's dealing with alien invasions or time-travel paradoxes. It's like if Batman had to adapt his gadgets for space or if Wonder Woman had to fight in an underwater kingdom. This part makes our AI not just good but great at handling the unknown!
  4. The Superhero Debut - Finally, we've got our fully trained, cosmic-powered hero. This isn't just any hero; this is like assembling the Avengers or Justice League in one brain! We've packed all this knowledge into a super-computer brain. Now, when someone needs help - be it saving a cat from a tree or solving world hunger - our AI superhero is ready! It's like having Iron Man's armor, but instead of a suit, it's a super-smart AI ready to tackle any challenge with speed and precision.
 
And in the superhero world, you've got different versions of how you can use this hero:
  • The Exclusive Hero - You pay to see this hero in action, like watching a blockbuster movie.
  • The Open Hero - You can get the hero's blueprint and make your own version, but if you make a lot of money, you share some with the creator.
  • The Public Hero - This hero's powers are free for all, like Superman, but you gotta credit the one who gave them their powers.
 
So, our AI hero is out there, ready to save the day, in any world, any challenge, with the mightiest brain power we've ever seen!

2 minutes ago, 52-80 said:

 

AI took this guy's WALL OF TEXT and summarized it for a 5-year old like this: [snip - superhero summary quoted above]

Mostly completely wrong, but to a layperson they would think they knew something. 

This is why, if you feed an AI prompts made of its own outputs, it turns into indistinguishable garbage in 3 passes or less. 


30 minutes ago, immamac said:

https://huggingface.co/deepseek-ai/DeepSeek-R1

With AI there's a few phases of actual computationally intense work... [snip - full post quoted above]

From a computational linguistics perspective I'm also curious to quantify the token throughput difference between Latin languages and Chinese. A single character can mean a phrase or several words, and a subtle difference can totally change the meaning. The information density in the language itself seems like it would lend itself favorably to building a more efficient LLM architecture, by necessity.

I'm really surprised by how strongly markets are reacting, they must have had a TON of growth priced in


17 minutes ago, Captainant said:

From a computational linguistics perspective I'm also curious to quantify the token throughput difference between Latin languages and Chinese. A single character can mean a phrase or several words, and a subtle difference can totally change the meaning. The information density in the language itself seems like it would lend itself favorably to building a more efficient LLM architecture, by necessity.

I'm really surprised by how strongly markets are reacting, they must have had a TON of growth priced in

Inference has always been a loser's game. The cost of training is insane, but the cost of inference is where everyone is getting absolutely fucking destroyed. 

Until you can inference at scale in a cost-effective way, LLMs are a dead end for net-positive economic outcomes. Very similar to fusion. 

Training models at 1/10 the cost is a big deal for sure, but if it still costs the same to inference then who fucking cares? It seems like this model doesn't do anything super special on the inference side from what I've read. It still needs as much hardware to do that as any other model of the same size. 
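Quick back-of-envelope on why that matters (every number below is a made-up placeholder, just to show the shape of the math):

```python
# Illustrative only: one-time training cost vs. ongoing serving (inference) cost.
train_cost_usd = 6_000_000          # hypothetical one-time training run
serve_cost_per_1k_tokens = 0.002    # hypothetical serving cost
tokens_per_request = 1_000
requests_per_day = 50_000_000

daily_serving = requests_per_day * tokens_per_request / 1_000 * serve_cost_per_1k_tokens
print(f"daily inference spend: ${daily_serving:,.0f}")
print(f"days of serving to equal the training bill: {train_cost_usd / daily_serving:.0f}")
# Cut training 10x and you save a one-time amount; the serving bill keeps running
# every single day, which is why cheaper inference is the number that actually matters.
```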


There's actually a ton of parallels between AI and fusion. 

Fusion just doesn't talk to you, so people aren't as frothily hyped about it. Same story, different science. It's a money black hole until it isn't, and then you've solved the energy problem or the intelligence problem. Same hyperbolic outcome-chasing, same insane budgets to make it happen. 

