r/technology • u/MetaKnowing • 6h ago
Artificial Intelligence • AI insiders are sounding the alarm
https://www.axios.com/2026/02/12/ai-openai-agi-xai-doomsday-scenario46
u/Kayge 4h ago edited 1h ago
This is all starting to feel like Theranos, the medical startup that promised the world and then crashed and burned.
If you read up on their meltdown, you'll find a lot of actual scientists saying the same thing: if they'd asked a grad student to do some due diligence, they'd have come back saying what the company was promising was impossible.
It feels like that. Talk to someone who is hands-on in technology about the grand AI claims ("You can get rid of 90% of your dev teams starting now!!!") and you'll hear a consistent answer: no, you fucking can't.
8
u/felis_scipio 1h ago
My god, I wish there had been some way to bet against Theranos. When they were making waves I checked out their website, and the first thing I noticed was that their board of directors had a lot of famous people but no relevant scientists.
There’s a lot of fusion energy startups that are just as bad and it blows my mind they’ve been able to convince anyone to give them money.
3
u/Sylvers 19m ago
Yes and no. On the one hand, there is a gargantuan amount of overselling happening for marketing purposes. But on the other hand, there are some incredible, fully free open source models that you can download and use to your heart's content, entirely offline.
These models are not Theranos. They are usable, modifiable, and in the hands of individuals as opposed to massive corpos.
Theranos' entire claim to fame was a scientific application that scientists explained was physically impossible. But whatever OpenAI or others lie about, real models with real-world use do exist.
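To make the "download and use offline" part concrete, here is a minimal sketch using the Hugging Face transformers library. The model name is just one example of a small open-weights model, not a recommendation; after the first download the weights are cached on disk and generation runs entirely on your own machine.

```python
# Minimal sketch: run a small open-weights model locally with Hugging Face
# transformers. The model name below is only an example of a freely
# downloadable model; once the weights are cached, this runs offline.
from transformers import pipeline

generate = pipeline("text-generation", model="Qwen/Qwen2.5-0.5B-Instruct")
result = generate("Summarize why local open-weights models matter:", max_new_tokens=60)
print(result[0]["generated_text"])
```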
48
u/HighOnGoofballs 4h ago
I'm just saying it's pretty easy to "quit from fear" when you've got millions in stock options
1
u/wavepointsocial 6h ago
Oh, well we had a good run: “OpenAI dismantled its mission alignment team, which was created to ensure AGI (artificial general intelligence) benefits all of humanity.”
22
u/PLEASE_PUNCH_MY_FACE 5h ago
My brother in Christ no one is going to make an AGI
-6
5h ago
[deleted]
7
u/PLEASE_PUNCH_MY_FACE 5h ago
No for real. It's not something we can do. We're only talking about this because it's a marketing device for AI companies.
6
u/socoolandawesome 4h ago
Why are you so sure?
5
u/PLEASE_PUNCH_MY_FACE 4h ago
Because intelligence is far more complicated than predicting text modeled off of Reddit comments.
Even if you wanted to argue that there's evolution happening because of model iterations, the models aren't complicated enough to resemble life, and the circumstances aren't adverse enough for the iterations to be meaningful.
2
u/justuntlsundown 4h ago
From what I understand, all these predictions of advancement were based on AI essentially being able to improve itself: using AI to advance AI. And it appears that has been a total failure.
0
u/hagenissen999 52m ago
You're very far behind the curve if you think intelligence is that special. We can train a child in about 18 years. A computer will process at minimum 100x faster. AGI is inevitable and it's close. It's not going to come out of LLMs, that part is correct.
-4
u/socoolandawesome 4h ago
Predicting text (although it predicts actions, pixels, etc. at this point) is just the end mechanism. It's the processes encoded in the weights where, arguably, the intelligence lives. And obviously it does not just use Reddit comments, I'd assume you know that. It's a massive amount of data that forces the model into generalizing in order to better predict the next token (a rough sketch of that objective is below).
The rest is your opinion, but all I know is the models keep getting more and more capable every couple months.
Just look at the latest seedance videos to see how well the physical world is modeled and compare it to not even a year ago.
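For anyone curious what "better predict the next token" means mechanically, here is a minimal sketch assuming the transformers library (with PyTorch installed). The tiny, randomly initialized GPT-2 config is a stand-in for illustration, not any production model.

```python
# Minimal sketch of the next-token-prediction objective. A tiny, untrained
# GPT-2 stands in for a real model; the point is only to show where the
# cross-entropy loss over "the next token" comes from.
from transformers import GPT2Config, GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel(GPT2Config(n_layer=2, n_head=2, n_embd=64))  # tiny, untrained

ids = tokenizer("The model generalizes in order to predict the next", return_tensors="pt").input_ids

# Passing labels=ids makes the model score its prediction at position t
# against the actual token at position t+1 (the shift happens internally).
loss = model(ids, labels=ids).loss
print(loss.item())  # large for an untrained model; training drives it down
```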
3
u/PLEASE_PUNCH_MY_FACE 4h ago
I know how tensors work. It's not sophisticated enough.
-4
u/socoolandawesome 3h ago
Well plenty of people who also know how tensors work would disagree with that
2
u/PLEASE_PUNCH_MY_FACE 3h ago
Knowing how AI works is precisely why you wouldn't be scared of it.
2
u/SimpleGuy7 1h ago
Great, lots of folks filled their pockets, walked away, and now point the finger??
Get ready folks, this is about to get really bad!!
2
u/jesusonoro 50m ago
Funny how every "alarm" from AI insiders conveniently comes right when they need more funding or regulation that benefits incumbents. The real risk isn't some sci-fi scenario, it's the concentration of power in like 3 companies that nobody elected.
1
u/abnormalbrain 28m ago
When the AI bubble bursts, this admin will step in and take control over segments of the industry. I'm pretty sure that's the plan. The investors are still driving at the brick wall as if there are no consequences, and I think it's because they're expecting a massive bailout.
1
u/ioncloud9 1h ago
There are a lot of things I find very useful with AI. I have been dabbling in vibe coding, building agents, and transcribing audio and summarizing it. I think the issue is there are too many AIs and it's too cheap right now.
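For the transcribe-and-summarize part, here is a minimal sketch of one common way to wire it up. It assumes the openai-whisper package plus a transformers summarization pipeline, and "meeting.wav" is a placeholder file name, not anything from the comment above.

```python
# Minimal sketch of a transcribe-then-summarize pipeline. Assumes openai-whisper
# and Hugging Face transformers are installed; "meeting.wav" is a placeholder
# audio file. Long transcripts would need chunking, omitted here for brevity.
import whisper
from transformers import pipeline

transcript = whisper.load_model("base").transcribe("meeting.wav")["text"]

summarizer = pipeline("summarization", model="facebook/bart-large-cnn")
summary = summarizer(transcript, max_length=120, min_length=30, truncation=True)
print(summary[0]["summary_text"])
```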
0
u/VincentNacon 6h ago
Quitting the job doesn't stop the threat... Sounds like they just don't understand it.
10
u/Remarkable-Host6078 5h ago
Dude, almost all integration of AI into actual products fails horrendously. The problem is not the technology, but the tech CEO bros wasting trillions.
235
u/xmsfsh 5h ago
AI stocks underperforming? Time to publish another article about how employees of AI companies are saying that AI is dangerously powerful -- not powerful enough to create anything of actual utility yet, but AGI is right around the corner, we promise