this post was submitted on 22 Nov 2023
125 points (90.3% liked)

Technology


Meanwhile, some new details emerged about the days leading up to Altman's firing. "In the weeks leading up to his shocking ouster from OpenAI, Sam Altman was actively working to raise billions from some of the world's largest investors for a new chip venture," Bloomberg reported. Altman reportedly was traveling in the Middle East to raise money for "an AI-focused chip company" that would compete against Nvidia.

As Bloomberg wrote, "The board and Altman had differences of opinion on AI safety, the speed of development of the technology and the commercialization of the company, according to a person familiar with the matter. Altman's ambitions and side ventures added complexity to an already strained relationship with the board."

"According to people familiar with the board's thinking, members had grown so untrusting of Altman that they felt it necessary to double-check nearly everything he told them," the WSJ report said. The sources said it wasn't a single incident that led to the firing, "but a consistent, slow erosion of trust over time that made them increasingly uneasy," the WSJ article said. "Also complicating matters were Altman's mounting list of outside AI-related ventures, which raised questions for the board about how OpenAI's technology or intellectual property could be used."

top 41 comments
[–] redcalcium@lemmy.institute 50 points 1 year ago (1 children)

OpenAI said the "new initial board" will consist of D'Angelo, economist Larry Summers, and former Salesforce co-CEO Bret Taylor, who will be the chair.

Those pesky board members with their annoying AI safety ideals are gone, replaced by new board members with excellent experience in squeezing out profits. Next they'll probably attempt to turn the non-profit parent org into a for-profit corporation so they can get equity/stock grants. Yay!

I guess OpenAI will get enshittified next year.

[–] ZahzenEclipse@kbin.social 1 points 11 months ago

Yeah, I don't know if the board getting ousted is good for anyone but OpenAI.

[–] j4k3@lemmy.world 45 points 1 year ago

He's a billionaire. There are no honest billionaires. Things will only get worse when billionaires go unchecked.

[–] RattlerSix@lemmy.world 29 points 1 year ago (4 children)

Can anyone explain why this guy and his firing have been such big news?

[–] Heresy_generator@kbin.social 26 points 1 year ago* (last edited 1 year ago) (3 children)

Because "AI" hype is what the venture capitalists are feeding the financial and tech press these days, and Sam is the venture capitalists' biggest "AI" star because he's a good snake oil salesman.

[–] theherk@lemmy.world 24 points 1 year ago

While not inaccurate, that is extremely reductive. The rapid improvement of AI at the transformer level is currently one of the most interesting things happening across many fields, including the arts and sciences, and the one with the widest gap between potential good and potential harm. OpenAI and its complex governance model sit directly at the center of that growth and are embroiled in one of the most fascinating governance struggles in recent history.

This drama, combined with how disruptive the technology is likely to be across a wide range of markets and the world's economies, makes it interesting. It also has the added benefit of being a departure from the bombings and other terrible news around the world: much more fun for popcorn and chat than wars and such.

[–] Lmaydev@programming.dev 11 points 1 year ago (2 children)

We are way beyond hype at this point.

It's a total game changer.

As a developer ChatGPT has completely changed my workflow and massively increased my productivity.

[–] micka190@lemmy.world 23 points 1 year ago (2 children)

As a developer, comments that talk about how ChatGPT is changing the development game confuse the hell out of me. What are you people doing that ChatGPT makes your workflow massively more productive?

  • It gets documentation/help wrong or straight-up makes shit up
    • Same thing with having it generate actual code
  • If "generating code I'd normally copy/paste" is such a game changer, your architecture/design needs a rework
    • Yes, even for tests (seriously, we've had ways to pass arrays of inputs into tests for years, having it copy/paste the same test a hundred times with different values is fucking atrocious)
  • Code "assistant" suggestions have been fucking horrid from my experience with them (and I end up disabling it every time I give it a try)
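For reference, the parameterized-test pattern the comment alludes to is a small table-driven loop in most ecosystems. A minimal Python sketch (the `add` function is a hypothetical stand-in for the real code under test):

```python
def add(a: int, b: int) -> int:
    return a + b

# Table-driven test: one loop over a list of cases instead of a
# hundred copy/pasted test methods with different literal values.
CASES = [
    (1, 2, 3),
    (0, 0, 0),
    (-1, 1, 0),
]

def test_add():
    for a, b, expected in CASES:
        assert add(a, b) == expected

test_add()  # raises AssertionError if any case fails
```

Adding a new case is one line in the table, not another pasted test body.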
[–] Lmaydev@programming.dev 13 points 1 year ago* (last edited 1 year ago) (2 children)

When using any new language or framework I can get up and running very quickly.

It used to take time to read the intro docs and then dig around trying to find the features I need. Now I can ask it directly how to do certain things, what is supported, and the best practices.

If I see a block of code I don't understand I can ask it to explain and it will write out line by line what it's doing. No more looking for articles with similar constructs or patterns.

It's amazing at breaking down complex SQL.

Many tedious refactoring tasks can be done by it.

It's very good at creating mappers between classes, because it can pick up matching properties from context even if the types and names don't match exactly.
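The kind of mapper being described can be sketched in a few lines. A Python version matching properties by name (the `UserEntity`/`UserDto` classes are hypothetical, for illustration only):

```python
from dataclasses import dataclass, fields

@dataclass
class UserEntity:   # e.g. what comes back from the database
    id: int
    name: str
    email: str

@dataclass
class UserDto:      # what the API hands back to callers
    name: str
    email: str

def map_to(source, target_cls):
    # Copy every attribute of `source` whose name matches a field of
    # the target class; fields with mismatched names would need
    # explicit handling (which is where the LLM's context helps).
    values = {f.name: getattr(source, f.name)
              for f in fields(target_cls) if hasattr(source, f.name)}
    return target_cls(**values)

dto = map_to(UserEntity(1, "Ada", "ada@example.com"), UserDto)
```

This only covers the trivial name-matching case; the commenter's point is that an LLM can also draft the non-trivial mappings.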

Same with generating a class from a DB table and vice versa.

If you have a specific problem to solve, rather than googling around for other solutions, you can ask it for existing methods. This can save days or more of discovery and trial and error.

It's really good at generating test cases based on a method.

Recently I implemented a C# IDictionary with change tracking built in. I pasted the code in; it analysed it, pointed out a bug, then wrote all the tests for the change tracking.

It did better than I thought it would, covering lots of chains of actions, which again found a bug.
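The change-tracking idea itself is language-agnostic. A rough Python sketch of a dictionary that records its own mutations (not the commenter's actual C# implementation, just the shape of it):

```python
class ChangeTrackingDict(dict):
    """A dict that records whether each key was added, updated, or removed.

    Note: keys passed to the constructor go through dict's own init and
    are NOT tracked; only mutations after construction are recorded.
    """

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.changes = {}

    def __setitem__(self, key, value):
        # A key we already hold is an update; a new key is an addition.
        self.changes[key] = "updated" if key in self else "added"
        super().__setitem__(key, value)

    def __delitem__(self, key):
        super().__delitem__(key)
        self.changes[key] = "removed"
```

Chains of actions (add, then update, then remove the same key) are exactly where tests generated from the method surface subtle bugs, as described above.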

It's fairly good at optimising code as well.

As for the mistakes, you should be able to spot them and ask it to correct them. If it does something invalid, tell it and it will fix it.

You have to treat it like a conversation not just ask it questions.

Like Google you have to learn how to use it correctly.

We also have Bing Enterprise, which uses search results and sources its answers. So I can look at the actual web result and read through it.

The hallucination thing is basically a meme at this point among people who haven't really used it properly.

[–] Whoresradish@lemmy.world 2 points 11 months ago (1 children)

When I google an issue I quickly get a list of possible solutions with other developers commenting on them with corrections. People can often upvote and downvote answers to indicate if they work or not and if they stop working.

With AI I get a single source of information without the equivalent of peer review. The answer may be out of date, and it may misunderstand my request. It may also make the same mistake I'm making that I would have caught with a quick googling.

The ai may be able to make boilerplate code occasionally without too much rework, but boilerplate code is not that hard to make already.

The AI is massively more expensive than a search engine, and I have not seen any indication that will change soon. This is the biggest problem in my mind. I don't ever expect to have to pay for Google. I expect in the future the AI will need to be paid for somehow, and I have a feeling they will have to charge too much to justify using AI for software development work.

AI has plenty of good uses, but I don't believe software development is the winner. Blockchain-style Merkle trees, for instance, were massively useful for Git repositories, but blockchain wasn't useful for many of the crazy things companies attempted to use it for.

[–] Lmaydev@programming.dev 2 points 11 months ago* (last edited 11 months ago) (1 children)

If you use Bing's search AI, it sources its answers. It basically does what you would do when looking through sources and ratings, but when you find the info you want, you can click the link it used to generate it.

It's also free I believe.

[–] Whoresradish@lemmy.world 1 points 11 months ago* (last edited 11 months ago) (1 children)

Right now, AI like that is heavily subsidized by investors. My concern about AI's feasibility is that training is so expensive it won't be able to stay free, and training only stops being necessary once the field stops developing. Also, if the AI has to source its answer with a link, did it really provide me with a new service that's better than a search engine?

[–] Lmaydev@programming.dev 1 points 11 months ago

Yes because you have your answer and further reading if needed.

Rather than having to read through search results and figure out which were relevant.

[–] ZahzenEclipse@kbin.social 2 points 11 months ago (1 children)

As a newer developer, it has been amazing for me, and a lot of experienced developers also recognize how much benefit it provides, so I'm honestly confused by your standpoint.

[–] Lmaydev@programming.dev 1 points 11 months ago

You'll find the old guard hates change and will shit on things like this without even trying them.

[–] cashew@lemmy.world 2 points 1 year ago

Failing to understand why doesn't make you right to ignore it.

Learning how to use AI tools is another meta-skill just like learning how to use a search engine such as Google. The latter is widely accepted as a must-know for software developers.

[–] ZahzenEclipse@kbin.social 3 points 11 months ago

If you're not actively using AI in a tech job, then you're leaving yourself behind. It's like ignoring Google.

[–] bezerker03@lemmy.bezzie.world 5 points 1 year ago

ChatGPT was one of the biggest game changers in tech in ages. Seeing the company implode overnight has been interesting.

[–] yildo@kbin.social 4 points 1 year ago* (last edited 1 year ago)

Because Microsoft and VC types have thrown many billions of US dollars at this and similar companies, so a lot of (their) money is at stake

[–] misk@sopuli.xyz 3 points 1 year ago* (last edited 1 year ago)

While large language models and similar "AI" technologies are very overhyped, they are already plenty usable for things like deepfakes, which, if left unchecked, have significant potential to be weaponized and to destabilize societies.

OpenAI is the non-profit behind those machine learning models and practical applications like ChatGPT. In principle it should govern development so that it's safe and responsible. There are many allegations that Sam Altman became focused on profit, betraying the non-profit mission.

While OpenAI is not technically controlled by commercial entities (Microsoft holds a 49% stake in its for-profit subsidiary), it's entirely dependent on them for funding, which likely led to it being strong-armed into letting Altman regain control.

[–] BigMacHole@lemm.ee 15 points 1 year ago (1 children)

Why didn't the board mention any of this when they were asked about why he was fired?

[–] scytale@lemm.ee 15 points 1 year ago* (last edited 1 year ago) (1 children)

I’m still confused how their chief scientist was part of the coup to remove Altman and at the same time was one of the signatories on the letter demanding his return.

[–] webghost0101@sopuli.xyz 1 points 11 months ago

I actually think it was because of Greg Brockman, the board's previous chairman, who quit after hearing the news about Altman.

They had told him he was vital to the company after firing Sam and removing him from the board.

Ilya, their chief scientist, officiated Greg and his wife's wedding. Apparently Greg's wife pleaded with Ilya to support their return.

I think the main issue here is OpenAI's stated goal of developing safe AGI to benefit all of humanity; destruction of the company and not making any profit would align with that.

However, with so many players developing for-profit AI catching up, it is probably safer to have an OpenAI taking risks than not to have any OpenAI at all.

Ilya probably hoped that Greg and most coworkers would stay without Altman, but since they weren't staying, the prospects worsened enough for him to regret it.

[–] CrayonRosary@lemmy.world 8 points 1 year ago* (last edited 1 year ago)

We're all doomed.

[–] nyakojiru@lemmy.dbzer0.com 5 points 1 year ago (1 children)

I need a new script in my brain that scrolls past fast if it visually detects this stupid mf face

[–] jadedwench@lemmy.world 1 points 11 months ago

Reminds me of Elliot from Mr. Robot who rewired his brain to only see Evil Corp.

[–] autotldr@lemmings.world 3 points 1 year ago

This is the best summary I could come up with:


The three who are leaving the board are OpenAI Chief Scientist Ilya Sutskever, entrepreneur Tasha McCauley, and Helen Toner of the Georgetown Center for Security and Emerging Technology.

OpenAI's interim CEO, Emmett Shear, who led the company for a few days, wrote, "I am deeply pleased by this result, after ~72 very intense hours of work."

"In the weeks leading up to his shocking ouster from OpenAI, Sam Altman was actively working to raise billions from some of the world's largest investors for a new chip venture," Bloomberg reported.

As Bloomberg wrote, "The board and Altman had differences of opinion on AI safety, the speed of development of the technology and the commercialization of the company, according to a person familiar with the matter.

A Wall Street Journal behind-the-scenes report noted that the nonprofit board's mission is to "ensur[e] the company develops AI for humanity's benefit—even if that means wiping out its investors."

The sources said it wasn't a single incident that led to the firing, "but a consistent, slow erosion of trust over time that made them increasingly uneasy," the WSJ article said.


The original article contains 772 words, the summary contains 184 words. Saved 76%. I'm a bot and I'm open source!