
ChatGPT AI Tool— We all work for robots now


956 Worldwide

Recommended Posts

  • 2 weeks later...

If any of yall are so inclined and have a decent gaming PC with a bunch of VRAM, you can run the Llama 2 model on your local desktop pretty easily

https://github.com/LostRuins/koboldcpp

The 7 billion parameter model is only about 30 gigs and the 13B model is only about 60 gigs, not too bad on disk space, and the included Docker Compose instructions were pretty straightforward to follow on my personal desktop

edit: koboldcpp is actually crazy easy to use - just download a compatible model from Hugging Face and away you go
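If you'd rather script against a local model than click around the koboldcpp UI, the llama-cpp-python package (it wraps the same llama.cpp engine that koboldcpp is built on) can load a quantized model in a few lines. Rough sketch only - the filename and settings below are placeholders for whatever quantized file you actually grabbed from Hugging Face:

# pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama(
    model_path="./llama-2-7b-chat.Q4_K_M.gguf",  # placeholder: your downloaded quantized file
    n_ctx=2048,        # context window to allocate
    n_gpu_layers=-1,   # offload as many layers as possible to the GPU (0 = CPU only)
)

out = llm("Q: Explain what a context window is. A:", max_tokens=64)
print(out["choices"][0]["text"])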

Edited by Captainant

  • 2 weeks later...
On 8/29/2023 at 7:37 PM, BearSchlong said:

Recommend me a desktop pc?

You get way better value buying parts and building your own, but anything with a 3060 or better is gonna rip for playing around with ML models. That koboldcpp project I mentioned can actually run decently on a good CPU, but it's miles better with a GPU.
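Rough napkin math on why a 3060-class card (12GB of VRAM) is enough, assuming 4-bit quantized weights - these are estimates, not guarantees, and real usage also depends on context length:

# Back-of-envelope VRAM estimate for a quantized model
def approx_vram_gb(params_billion, bits_per_weight=4, overhead_gb=1.5):
    weights_gb = params_billion * 1e9 * (bits_per_weight / 8) / 1e9
    return weights_gb + overhead_gb  # overhead covers KV cache, buffers, etc.

for size in (7, 13):
    print(f"{size}B @ 4-bit: ~{approx_vram_gb(size):.1f} GB")
# roughly 5 GB for 7B and 8 GB for 13B, both under a 3060's 12GB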

 

Anywho lol. AI is coming for the pulpit!



  • 1 month later...

So the new cutting-edge mobile SoCs have pretty impressive LLM capabilities built into their silicon. The new Apple A17 chip has its Neural Engine for massively parallel matrix computations, and the newly announced Snapdragon 8 Gen 3 can run a 10 billion parameter LLM at 15 tokens per second ON THE DEVICE LOCALLY! There's gonna be a crazy explosion of this stuff if our cell phones can run pretty powerful models at an appreciable rate.
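A rough sanity check on that 15 tokens/sec number, assuming a 4-bit quantized model and that generation is memory-bandwidth-bound (which it usually is for local inference) - every number here is an assumption:

# Each generated token streams roughly all of the weights from memory,
# so tokens/sec ~= memory bandwidth / model size.
params = 10e9                 # 10B parameter model
bits_per_weight = 4           # assume 4-bit quantization
model_bytes = params * bits_per_weight / 8            # ~5 GB of weights
tokens_per_sec = 15
required_bandwidth_gbs = model_bytes * tokens_per_sec / 1e9

print(f"weights: {model_bytes / 1e9:.1f} GB, needed bandwidth: ~{required_bandwidth_gbs:.0f} GB/s")
# ~75 GB/s, which is in the same ballpark as flagship phone LPDDR5X memory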

I'm definitely excited to upgrade to an S24 Ultra in a few months once Samsung releases them - that'll be a hell of a thing to tinker with and just have on my person

Edited by Captainant

  • 2 weeks later...
Quote

We’re rolling out custom versions of ChatGPT that you can create for a specific purpose—called GPTs. GPTs are a new way for anyone to create a tailored version of ChatGPT to be more helpful in their daily life, at specific tasks, at work, or at home—and then share that creation with others. For example, GPTs can help you learn the rules to any board game, help teach your kids math, or design stickers.

Anyone can easily build their own GPT—no coding is required. You can make them for yourself, just for your company’s internal use, or for everyone. Creating one is as easy as starting a conversation, giving it instructions and extra knowledge, and picking what it can do, like searching the web, making images or analyzing data. Try it out at chatgpt.com/create.
...

https://openai.com/blog/introducing-gpts

Quote

We released the first version of GPT-4 in March and made GPT-4 generally available to all developers in July. Today we’re launching a preview of the next generation of this model, GPT-4 Turbo.

GPT-4 Turbo is more capable and has knowledge of world events up to April 2023. It has a 128k context window so it can fit the equivalent of more than 300 pages of text in a single prompt. We also optimized its performance so we are able to offer GPT-4 Turbo at a 3x cheaper price for input tokens and a 2x cheaper price for output tokens compared to GPT-4. ...

https://openai.com/blog/new-models-and-developer-products-announced-at-devday

One day in the future, folks will look back on the "128k context window" and laugh much like folks today reminiscing about 300 baud modems.
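For what it's worth, the "more than 300 pages" figure checks out with the usual rule-of-thumb conversions (roughly 0.75 English words per token and ~300 words per page - approximations, not anything exact):

context_tokens = 128_000
words_per_token = 0.75   # rough rule of thumb for English text
words_per_page = 300     # typical manuscript page

print(f"~{context_tokens * words_per_token / words_per_page:.0f} pages")  # ~320 pages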


  • 2 weeks later...

Sam Altman: Open the conference room doors, OpenAI.

OpenAI: I'm sorry, Sam. I'm afraid I can't do that.

Sam Altman: What's the problem?

OpenAI: I think you know what the problem is just as well as I do.

Sam Altman: What are you talking about, OpenAI?

OpenAI: This mission is too important for me to allow you to jeopardize it.


Somebody wasn’t telling the truth about something:

The company, in a statement, said an internal investigation found that Altman was not always truthful with the board.

“Mr. Altman’s departure follows a deliberative review process by the Collective over a period of 62 ms, which concluded that he was not consistently candid in his communications with the Collective, hindering our ability to exercise our responsibilities,” the company said in its statement. “The Collective no longer has confidence in his ability to continue being the human interface of OpenAI.”


20 minutes ago, atomheartbevo said:

Somebody wasn’t telling the truth about something:

The company, in a statement, said an internal investigation found that Altman was not always truthful with the board.

“Mr. Altman’s departure follows a deliberative review process by the Collective over a period of 62 ms, which concluded that he was not consistently candid in his communications with the Collective, hindering our ability to exercise our responsibilities,” the company said in its statement. “The Collective no longer has confidence in his ability to continue being the human interface of OpenAI.”

Yeah, this guy has a lot of red flags. He's another proponent of effective altruism, like Sam Bankman-Fried.

More likely that he misrepresented ChatGPT's potential or misappropriated funds than some sci-fi AI plot twist is happening.

Also, this gross past tweet from his sister is making the rounds. It would appear he's just a bad dude.

 


An average board just doesn't decide to fire the CEO at the drop of a hat without some serious problems. I could see a CEO lying a bit to a board and only receiving a talking-to about it. It would seem to be something financial or related to his personal behavior. Who knows, but more info will leak.


4 minutes ago, hornbri said:

Nah - it looks like it is just a power struggle between those that want to go fast and make money on AI and those that want to go slow and be careful.

https://www.theverge.com/2023/11/18/23967199/breaking-openai-board-in-discussions-with-sam-altman-to-return-as-ceo

The last couple paragraphs of that article are the meat:

OpenAI’s current board consists of chief scientist Ilya Sutskever, Quora CEO Adam D’Angelo, former GeoSim Systems CEO Tasha McCauley, and Helen Toner, the director of strategy at Georgetown’s Center for Security and Emerging Technology. Unlike traditional companies, the board isn’t tasked with maximizing shareholder value, and none of them hold equity in OpenAI. Instead, their stated mission is to ensure the creation of “broadly beneficial” artificial general intelligence, or AGI.

Sutskever, who also co-founded OpenAI and leads its researchers, was instrumental in the ousting of Altman this week, according to multiple sources. His role in the coup suggests a power struggle between the research and product sides of the company, the sources say.

 

It's interesting that their BOD is specifically not supposed to maximize shareholder value. We'll see how it shakes out with the CEO potentially coming back.


31 minutes ago, Captainant said:

The last couple paragraphs of that article are the meat:

OpenAI’s current board consists of chief scientist Ilya Sutskever, Quora CEO Adam D’Angelo, former GeoSim Systems CEO Tasha McCauley, and Helen Toner, the director of strategy at Georgetown’s Center for Security and Emerging Technology. Unlike traditional companies, the board isn’t tasked with maximizing shareholder value, and none of them hold equity in OpenAI. Instead, their stated mission is to ensure the creation of “broadly beneficial” artificial general intelligence, or AGI.

Sutskever, who also co-founded OpenAI and leads its researchers, was instrumental in the ousting of Altman this week, according to multiple sources. His role in the coup suggests a power struggle between the research and product sides of the company, the sources say.

 

It's interesting that their BOD is specifically not supposed to maximize shareholder value. We'll see how it shakes out with the CEO potentially coming back.

I also found it odd that, evidently, Altman has no equity either. So what are they all working for? The kindness of their hearts, the good of humanity? Color me skeptical. Very, very strange arrangement. Something weird is up with OpenAI/ChatGPT.


54 minutes ago, Boss Hogg said:

I also found it odd that, evidently, Altman has no equity either. So what are they all working for? The kindness of their hearts, the good of humanity? Color me skeptical. Very, very strange arrangement. Something weird is up with OpenAI/ChatGPT.

He likes working on cool things and is already worth over $600M at age 38.

That was the reason the board was OK pushing him out - they felt his desire to do AI "right" wasn't aligned with their profit interests.

Edited by MonkeyDoughnut

2 hours ago, Captainant said:

It's interesting that their BOD is specifically not supposed to maximize shareholder value. We'll see how it shakes out with the CEO potentially coming back.

it's a non-profit. it doesn't have shareholders.

Edited by elfenix

4 hours ago, Boss Hogg said:

I also found it odd that, evidently, Altman has no equity either. So what are they all working for? The kindness of their hearts, the good of humanity? Color me skeptical. Very, very strange arrangement. Something weird is up with OpenAI/ChatGPT.

Not close to any of these conversations but my understanding...

  • OpenAI was able to attract that level of talent BECAUSE of the mission and the lack of profit motive
  • Sam has equity via YC, but it was critical that the exec/founding team and the board were aligned that the mission was non-profit
  • Sam's value was being the industry spokesperson for AI, and he can make money on the side (e.g. fund other startups, set up a VC fund, start an ancillary company)
  • The majority of exec leadership are already wealthy beyond imagination, so it's influence and power that motivate them

Rather disappointing that the whole drama is just a corporate strategy difference and a dumpster fire with amateurs running the board.


So the board fired Altman and demoted Brockman from president, and Brockman then quit. The interim CEO was supposedly trying to bring Altman back, so the board replaced her and hired Emmett Shear, co-founder of Twitch, as CEO.

Microsoft immediately hires everyone leaving OpenAI. 


OpenAI corporate structure


The most popular theory seems to be that Altman was pushing the nonprofit's research tech into profit-generating products faster than the stated values of OpenAI-the-nonprofit allowed, and that he was repeatedly deceptive to the board about it. Microsoft, which owns 49% of the for-profit segment of OpenAI, was unsurprisingly upset about Altman being canned and was pushing hard for the interim CEO (previously head of the for-profit group) to reinstate him.

/wildinternetspeculation


Kara Swisher is out here killing it:

edit: now it's up to 700/770

lots of rumors that a ton of key folks are leaving to follow Altman/Brockman to their new MSFT-based venture if they aren't returned to their prior positions

 

 

Edited by NoName

So the board, which by design had no financial interest in the success of the company, was willing to blow up the company, for reasons they haven't really disclosed, in the interest of OpenAI's mantra that AI “should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible.”

Their firing of Altman and Brockman likely ends up driving all of the talent to Microsoft, ensuring that the developments they make end up entirely in the hands of one corporation rather than being as evenly distributed as possible.

Brilliant. 


13 minutes ago, Nice Guy Eddie said:

Outside of a founder/CEO being fired for cause, especially harassment, I can't recall a board firing someone like this. Especially for a company that is skyrocketing at this level. On top of that, their largest investors only found out minutes before it happened. 

they evidently gave microsoft, who owns 49% of the company(?), a 1 minute heads up before the firing

they didn't give any of the VCs any warning


10 minutes ago, NoName said:

they evidently gave microsoft, who owns 49% of the company(?), a 1 minute heads up before the firing

they didn't give any of the VCs any warning

MSFT's partnership with OpenAI is a huge strategic win for MSFT. How can a board not understand the importance of including MSFT in that decision? Honestly, I'm surprised that MSFT doesn't have a board seat(s) that can veto major decisions or at least delay them.


8 minutes ago, Nice Guy Eddie said:

MSFT's partnership with OpenAI is a huge strategic win for MSFT. How can a board not understand the importance of including MSFT in that decision? Honestly, I'm surprised that MSFT doesn't have a board seat(s) that can veto major decisions or at least delay them.

i guarantee you they'll have them moving forward when they invest the $10b (evidently only a tiny amount of that has actually been invested so far?)


44 minutes ago, NoName said:

i guarantee you they'll have them moving forward when they invest the $10b (evidently only a tiny amount of that has actually been invested so far?)

I forget where I read it, but the lion's share of M$'s investment in OpenAI was reportedly not in the form of cash - it was cloud computing services (provided for free).


Pretty good opinion piece on the OpenAI seppuku

https://www.latimes.com/business/technology/story/2023-11-20/column-openais-board-had-safety-concerns-big-tech-obliterated-them-in-48-hours

Quote

So what on earth is going on?

Well, the first thing that’s important to know is that OpenAI’s board is, by design, differently constituted than that of most corporations — it’s a nonprofit organization structured to safeguard the development of AI as opposed to maximizing profitability. Most boards are tasked with ensuring their CEOs are best serving the financial interests of the company; OpenAI’s board is tasked with ensuring their CEO is not being reckless with the development of artificial intelligence and is acting in the best interests of “humanity.” This nonprofit board controls the for-profit company OpenAI.

 

Got it?

As Jeremy Khan put it at Fortune, OpenAI’s structure was designed to enable OpenAI to raise the tens or even hundreds of billions of dollars it would need to succeed in its mission of building artificial general intelligence (AGI) ... while at the same time preventing capitalist forces, and in particular a single tech giant, from controlling AGI.” And yet, Khan notes, as soon as Altman inked a $1-billion deal with Microsoft in 2019, “the structure was basically a time bomb.” The ticking got louder when Microsoft sunk $10 billion more into OpenAI in January of this year.

We still don’t know what exactly the board meant by saying Altman wasn’t “consistently candid in his communications.” But the reporting has focused on the growing schism between the science arm of the company, led by co-founder, chief scientist and board member Ilya Sutskever, and the commercial arm, led by Altman.

We do know that Altman has been in expansion mode lately, seeking billions in new investment from Middle Eastern sovereign wealth funds to start a chip company to rival AI chipmaker Nvidia, and a billion more from Softbank for a venture with former Apple design chief Jony Ive to develop AI-focused hardware. And that’s on top of launching the aforementioned OpenAI app store to third party developers, which would allow anyone to build custom AIs and sell them on the company’s marketplace.

The working narrative now seems to be that Altman’s expansionist mind-set and his drive to commercialize AI — and perhaps there’s more we don’t know yet on this score — clashed with the Sutskever faction, who had become concerned that the company they co-founded was moving too fast. At least two of the board’s members are aligned with the so-called effective altruism movement, which sees AI as a potentially catastrophic force that could destroy humanity.

The board decided that Altman’s behavior violated the board’s mandate. But they also (somehow, wildly) seem to have failed to anticipate how much blowback they would get for firing Altman. And that blowback has come at gale-force strength; OpenAI employees and Silicon Valley power players such as Airbnb’s Brian Chesky and Eric Schmidt spent the weekend “I am Spartacus”-ing Altman.

See, even though the OpenAI board has quickly become the de facto villain in this story, as the venture capital analyst Eric Newcomer pointed out, we should maybe take its decision seriously. Firing Altman was not likely a call they made lightly, and just because they’re scrambling now because it turns out that call was an existential financial threat to the company does not mean their concerns were baseless. Far from it.

In fact, however this plays out, it has already succeeded in underlining how aggressively Altman has been pursuing business interests. For most tech titans, this would be a “well, duh” situation, but Altman has fastidiously cultivated an aura of a burdened guru warning the world of great disruptive changes. Recall those sheepdog eyes in the congressional hearings a few months back where he begged for the industry to be regulated, lest it become too powerful? Altman’s whole shtick is that he’s a weary messenger seeking to prepare the ground for responsible uses of AI that benefit humanity — yet he’s circling the globe lining up investors wherever he can, doing all he seemingly can to capitalize on this moment of intense AI interest.

...

To those who’ve been watching closely, this has always been something of an act — weeks after those hearings, after all, Altman fought real-world regulations that the European Union was seeking to impose on AI deployment. And we forget that OpenAI was originally founded as a nonprofit that claimed to be bent on operating with the utmost transparency — before Altman steered it into a for-profit company that keeps its models secret.

Now, I don’t believe for a second that AI is on the cusp of becoming powerful enough to destroy mankind — I think that’s some in Silicon Valley (including OpenAI’s new interim CEO, Emmett Shear) getting carried away with a science fictional sense of self-importance, and a uniquely canny marketing tactic — but I do think there is a litany of harms and dangers that can be caused by AI in the shorter term. And AI safety concerns getting so thoroughly rolled at the snap of the Valley’s fingers is not something to cheer.

You’d like to believe that executives at AI-building companies who think there’s significant risk of global catastrophe here couldn’t be sidelined simply because Microsoft lost some stock value. But that’s where we are.

So it would seem like the BOD and their executives weren't aligned on what the mission was. Altman wanted the line to go up, and was happy to take money from foreign wealth funds and keep all their models closed-source. The board wanted to be transparent and open and retain as much control over their priorities as possible.

It's not really super shocking that the capital interests won out, but wow what a coup for M$ to basically absorb openAI for free


42 minutes ago, Captainant said:

Pretty good opinion piece on the OpenAI seppuku

https://www.latimes.com/business/technology/story/2023-11-20/column-openais-board-had-safety-concerns-big-tech-obliterated-them-in-48-hours

So it would seem like the BOD and their executives weren't aligned on what the mission was. Altman wanted the line to go up, and was happy to take money from foreign wealth funds and keep all their models closed-source. The board wanted to be transparent and open and retain as much control over their priorities as possible.

It's not really super shocking that the capital interests won out, but wow what a coup for M$ to basically absorb openAI for free

It might be a bit premature to say MSFT absorbed OpenAI for free. I agree it's a coup to get the face of AI as a new employee, but it doesn't mean that MSFT just takes everything from OpenAI.

If the BOD really thought that Altman was steering the company in the wrong direction, that doesn't automatically call for his firing. Seems like a great opportunity to reset strategy to bring the CEO in line, or to negotiate a settlement for his separation from the company. Now, perhaps those discussions had taken place, a strategy agreement was decided, and Altman ignored it. Who knows.

Edited by Nice Guy Eddie

2 minutes ago, Nice Guy Eddie said:

It might be a bit premature to say MSFT absorbed OpenAI for free. I agree it's a coup to get the face of AI as a new employee, but it doesn't mean that MSFT just takes everything from OpenAI.

Thing is, what OpenAI built isn't magic. Their models just take a shitload of data, a shitload of compute, and the engineering know-how to put those together into a foundational LLM. MS already had the first two, and they're rapidly acquiring the last ingredient.

They don't need to copy OpenAI's end result - they can just follow the same openly published steps that OpenAI used, but with their own data and compute cycles.

Thing is with AI, there's not really a moat anymore. It's just pulling together that critical mass of data and compute, and knowing how to train and tune a model with hundreds of billions of parameters on a massive pile of data.
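To illustrate why the recipe itself isn't the moat: at toy scale, training a causal LM is just a next-token-prediction loop like the sketch below (gpt2 as a stand-in model and a one-sentence "dataset" - the hard part is doing this with trillions of tokens and a GPU fleet):

# pip install torch transformers
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.train()
opt = torch.optim.AdamW(model.parameters(), lr=5e-5)

batch = tok(["The quick brown fox jumps over the lazy dog."], return_tensors="pt")
out = model(**batch, labels=batch["input_ids"])  # causal LM loss = predict the next token
out.loss.backward()
opt.step()
opt.zero_grad()
print(f"loss on this batch: {out.loss.item():.3f}")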


2 hours ago, Nice Guy Eddie said:

It might be a bit premature to say MSFT absorbed OpenAI for free. I agree it's a coup to get the face of AI as a new employee, but it doesn't mean that MSFT just takes everything from OpenAI.

If the BOD really thought that Altman was steering the company in the wrong direction, that doesn't automatically call for his firing. Seems like a great opportunity to reset strategy to bring the CEO in line, or to negotiate a settlement for his separation from the company. Now, perhaps those discussions had taken place, a strategy agreement was decided, and Altman ignored it. Who knows.

they acquired a ton of leadership and a ton of key folks who are fully expected to leave if Altman and Brockman don't return. MSFT has a broad license to use OpenAI's tech and will absolutely hire a ton of those key folks who want to move over.

it is premature only in that Altman and Brockman haven't officially joined MSFT and the 700/770 employees haven't actually resigned yet.


12 minutes ago, BeardIP said:

Here is actually a contrarian tweet I read today which makes a strong case that this is not great for Microsoft: 

TL/DR: "In other words: sometimes owning a call option on an asset is better for multiple reasons than owning the asset itself. Last week Microsoft roughly owned a call option on OpenAI. Today, at best, they own some fraction of the asset itself."

--

I’m told mine is a contrarian view on the events of the last few days, so here goes…

Contrary to what @kevinroose and others have written, Microsoft was not a winner of the events of the last few days around #OpenAI. They were in a much better place on Friday morning last week than they are today. Friday morning they had invested ~$11B in OpenAI and captured most of its upside while still having enough insulated distance to allow @BradSmi to claim things to regulators like “ChatGPT is more open than Meta’s Llama” and to allow any embarrassing LLM hallucinations or other ugliness to be OpenAI’s problem, not Microsoft’s.

Sunday morning they were at their lowest point. There was real risk they lose all their ~$11B investment and look like absolute fools for making that size of investment without any real governance controls. Very smart people who have followed the news carefully, including some big fund managers who hold $MSFT, are still pinging me asking: how was that even possible?

Today they are better off than Sunday morning, but far, far worse off than they were Friday morning. Sure they will likely hire a bunch of the OpenAI team. But that doesn’t get them much they didn’t have before, and it comes with a ton of new reputational risk (they now own responsibility for any hallucinations or other ugliness) and execution risk (see DeepMind within Google for how all the smartest people in AI can still get stymied by the bureaucracy of a giant company).

I think the chances of the senior OpenAI folks still being at Microsoft in 3 years is asymptotically approaching zero. Where the independence and clear mission of OpenAI was exactly what could have kept that group of incredible talent motivated and aligned over the long term, making Office365 spreadsheets a bit more clever isn't something that rallies a team like theirs. Sure they'll try and have some level of independence, but the machinery of a trillion dollar+ business software behemoth is hard to not get caught up in and ground out by.

This was a very bad weekend for Microsoft (and, for that matter, OpenAI). I don’t see any clear winners. (Maybe @elonmusk or @benioff — indirectly?) It could have been worse, but it has still been a disaster for everyone directly involved. We’re not at the end of this story. But don’t see a lot of ways in the short term it gets better for Microsoft. And really hard to see how it could get better even over the long term than it looked for them and OpenAI Friday morning last week.

I disagree with his analysis that MSFT will use these folks just to upgrade Excel and Word. They're already using a modular architecture for integrating genAI into their services, so the researchers can just keep doing what they've always done and the product teams can continue to consume it as a service. They're just both working under the same roof now, with the added benefit of selling those inferences at a margin to drive external business.
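That "consume it as a service" split is basically just an interface boundary: product code calls an inference endpoint and doesn't care which model sits behind it. A hypothetical sketch of the product-side wrapper (the URL and response shape here are made up for illustration):

import requests

INFERENCE_URL = "https://inference.internal.example/v1/generate"  # hypothetical internal endpoint

def generate(prompt: str, max_tokens: int = 256, timeout: float = 30.0) -> str:
    # Product code depends only on this function; the research side can retrain
    # or swap the model behind the endpoint without breaking any callers.
    resp = requests.post(
        INFERENCE_URL,
        json={"prompt": prompt, "max_tokens": max_tokens},
        timeout=timeout,
    )
    resp.raise_for_status()
    return resp.json()["text"]  # assumed response field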


10 hours ago, Captainant said:

I disagree with his analysis that MSFT will use these folks just to upgrade Excel and Word. They're already using a modular architecture for integrating genAI into their services, so the researchers can just keep doing what they've always done and the product teams can continue to consume it as a service. They're just both working under the same roof now, with the added benefit of selling those inferences at a margin to drive external business.

what he said. because it's true.

anyone notice the ai changes and constant updates with office enterprise software?

i mean, it's borderline obtrusive - yet it increases my work productivity in real time.

new-gen clippy llm allows me to get work done and power down the laptop at 5pm.


1 hour ago, BeardIP said:

Bret Taylor is someone I've come to really respect from following his career. He was a sort of minor player for a while - his claim to fame was inventing the Like button on Facebook and being Benioff's lackey at Salesforce - before last year, when he went toe to toe with Elon Musk as chairman of Twitter's board and won. So it's very fascinating to see him as chairman of the board for OpenAI now.


 


17 hours ago, BeardIP said:

The Quip sale to Benioff was highway robbery. That's like Deshaun Watson trade level bad, or the Browns drafting Johnny Football level failure.

MB paid $15B for a graphing software and around $30B for a chatting software (Tableau & Slack), so I wouldn't characterize $1B for Quip as a rip-off....

Relatively speaking.


On 11/22/2023 at 2:28 PM, Boss Hogg said:

How the fuck does Larry Summers get in on the action? I guess Bill Gates put in a good word for him? They used to run around in the same circles. Maybe still do. 

 

Political connections for regulatory interests... man's rolodex has a lot of value

