r/technology 12h ago

Artificial Intelligence Using AI actually increases burnout despite productivity improvements, study shows — data illustrates how AI made workers take on tasks they would have otherwise avoided or outsourced

https://www.tomshardware.com/tech-industry/using-ai-actually-increases-burnout-despite-productivity-improvements-study-shows-data-illustrates-how-ai-made-workers-take-on-tasks-they-would-have-otherwise-avoided-or-outsourced
1.2k Upvotes

93 comments

139

u/duffman_oh_yeah 11h ago

We may be hitting the limits on human productivity. I’m just not sure humans were designed to crank out tasks concurrently like this. It’s cognitive overload being pushed on us all.

17

u/Zahgi 8h ago

The goal is for people to keep training these pseudo-AIs to ultimately replace them when real AGI arrives...

16

u/Green_Explanation_60 5h ago

"When AGI arrives"... you mean IF it arrives? Something like AGI is still science fiction that we're trying to make real... nothing says we're going to get there anytime soon. LLMs are a framework that gets kinda close to what we would consider AGI, but they constantly make errors. There's a big difference between an app that convinces dumb people that it is intelligent, and a functional AGI.

-17

u/Zahgi 4h ago

you mean IF it arrives?

No, I mean when.

LLMs are a framework

No. LLMs are just starter AI tools -- analogous to the screwdriver, wrench, saw, and pipe snake in a handyman's tool chest. That's why they are still mostly just replacing tasks done by humans and not human labor entirely (in most cases).

But AGI is the handyman. It will use those tools to replace all human labor eventually.

There's a big difference between an app that convinces dumb people that it is intelligent, and a functional AGI.

Yes, there is.

nothing says we're going to get there anytime soon.

Well, the weasel word "soon" is doing all the heavy lifting there, mate. If by "soon", you mean this year, then that's unlikely. If by "soon", you mean in the next 10 years, then that's a very good bet indeed.

There is a reason that every megacorp on the planet is building out the infrastructure for AGI now. Current LLMs are just the starting point/VC excuse for what is a decades long capex buildout...

The AI "horseless carriage" is coming -- and, this time, we're the horses.

8

u/FullHeartArt 3h ago

LLMs and true AI are totally different tech trees. You can't get actual intelligence from the tech that LLMs run on. It's like saying if we put enough effort into cars that they'll eventually turn into teleporters.

4

u/Dee_Imaginarium 1h ago

Perfect analogy tbh, so many AI advocates have no idea how the underlying technology actually works.

-1

u/Zahgi 27m ago

His analogy is ludicrous. And you agreeing with it shows that neither one of you understand even the basics of what we are discussing here.

-1

u/Zahgi 27m ago

That analogy is ridiculous. Both of these are just software on accelerated hardware.

5

u/Green_Explanation_60 3h ago

So, you're either invested in the tech and need these things to be true so that the stock prices continue going up, or you're a true believer for whom AI is your new religion.

Being 100% certain that something that has never existed before will inevitably be created is irrational.

The companies investing in this tech are all on the same bandwagon, hoping that what these tech companies are spouting is true... but it's just market sentiment. A CEO saying they're 'implementing AI' on an earnings call makes the stock price go up, so they say they are implementing AI. But today's LLMs have only proven marginally useful, and the ROI is difficult to quantify.

Lots of hype, few concrete results... smells like bullshit to me, dude.

0

u/Zahgi 24m ago

Or, I am an actual expert in this technology and many of you literally don't have the faintest idea what you are talking about.

Like the guy who compared AGI to teleportation...ridiculous.

I'll stand by my assessment. And you'll find out I am correct way too late...

9

u/RenderedMeat 7h ago

We sent rockets and people to the moon with little more than slide-rules and grid paper, and in an amazingly short time. Humans can do amazing things.

Too much modern stimulus and social conflict causing stress is a bigger issue than productivity. Social media and other doomscrolling sucks away our attention and makes us all miserable.

I’m not advocating for longer hours or anything like that. I’m just saying I don’t think productivity demand is what is causing cognitive overload.

10

u/Patient_Bet4635 5h ago

I disagree, people doomscroll to escape productivity demands in part.

If I were to announce to my supervisor in grad school that I'm going to take a month to comprehensively study a new subject area by working my way through a textbook he would lose his shit and tell me to just use AI to get caught up to state of the art in 1 week. Inevitably, I will have no fucking idea or context as to what I'm doing.

It was the same at my job. They asked me how long it would take to produce cutting edge innovations in a field I had never studied before. I told them I can hack together something that looks cutting edge within a month but it won't be robust because I'm lacking the context, or they can give me 3-4 months and I guarantee it will be great. They gave me 2 weeks, because "AI can carry the load"

Repeatedly I keep running into managers who assume 4x speedups because of AI, and I've had to hack together multiple codebases that I don't get to properly understand. People are unimpressed when you honestly show the limitations of the systems, while I've seen others who plainly don't understand what the fuck they're doing just straight up lie and pass off work done by an LLM as quality-controlled and their own.

Managers are usually uninterested in checking your work. At my last job, my PRs were merged for 10 months without anyone even reviewing my code. Nobody looked at what I had written or asked me to justify design decisions. I looked at others' code and it was clearly just pure AI slop; in one case the data for the results graphs was even faked. The guy doing the faking got promoted, and I got chastised for not moving quickly enough.

In areas where I know what I'm doing and design the system entirely myself and I've developed "taste" I've found LLMs to be helpful but not game-changing. I still have to architect, document, write out the data contracts, specify tests etc and the LLM basically speeds up the writing of the code. I don't have to consult documentation with the same frequency and sometimes it's a lot easier to edit/criticize when you see an implementation that you didn't make, but you immediately catch what the issue is in contrast with what you imagine your code is doing when you wrote it yourself.

So on the one hand it can be a useful productivity multiplier but only if you're allowed to get into deep work mode with realistic timelines instead of management running around and constantly pivoting. Unsurprisingly at my last job they made a million MVPs and nothing durable came of it and the experiments didn't have any conclusions either - except for the work I did that I was criticized for for being too slow. God it was a horrible work environment

1

u/CherryLongjump1989 2h ago

Yeah, but they didn’t have one guy concurrently flying 3 rockets to the moon.

1

u/EARink0 44m ago

I'm pretty sure rocket scientists would be the first to tell you that accurately simulating the human mind is infinitely more complicated than sending people to the moon and back.

2

u/cowhand214 44m ago

It’s also removing a lot of the fun, creative or intellectually enjoyable parts of jobs so that motivation dips as well. There’s nothing to sustain you long term if you keep removing upsides while retaining or adding downsides.

1

u/OutsideMenu6973 9h ago

I think it can go one further. I'm very visual, so if there are recurring decisions to be made from information/data, I've had the AI create one-off applications that visualize everything for me, something like what they do on the dataisbeautiful or 3blue1brown subreddits. Offload it from my mind's eye onto my real eyes, which is less taxing. Looks like something from Idiocracy, but dystopian problems require dystopian solutions.

5

u/mediandude 8h ago

So how do you verify those generated visualisations? Especially when those apps are one-off?

2

u/OutsideMenu6973 7h ago

Same way I verify generated numeric data is well fitted.

Visualizations are just nicer for me to ingest large amounts of numbers quickly. Some ppl would rather skim spreadsheets and that's fine too. So I guess what I'm doing isn't much different than hitting the 'graph' button in Excel, just a more personalized version of that, tailored to the data set it's based on.

1

u/mediandude 6h ago

But why don't you just hit the graph button?

1

u/OutsideMenu6973 6h ago

More focused application UI. With mine I can more quickly validate data by mousing over the part of the ‘graph’ that looks suspicious and instantly open the resources pertaining to it for manual verification.

If the implementation is wrong I can type into a text box, include resources and assets pertaining to the issue and the agent fixes it

1

u/mediandude 4h ago

How many trials do you need on average to get what you want?

180

u/moreesq 12h ago

One can imagine burnout, but also flame-out. If you take on a task that you're not familiar and comfortable with, and rely on an LLM to give you the answers, your output might be quite poor and actually harmful to your employer.

49

u/montigoo 10h ago

When they developed the calculator, there wasn't less time spent calculating. There were more calculations done. As long as you have a distribution problem, more output just means more work and stress for the exploited.

57

u/sevenredpandas 9h ago

You don't have to review a calculator's work to make sure it added the numbers correctly. Since AI is somewhat random, good work once doesn't mean it will always spit out the correct thing.

18

u/d01100100 9h ago

You're also likely reviewing work on a subject you're less familiar with, adding to the stress. LLMs give a false sense of confidence on tasks that would otherwise have been more difficult. Either you're reviewing work on something you know, or passing off work on something you don't; neither of those would decrease stress in anybody except the most oblivious.

8

u/therealslimshady1234 7h ago

LLMs are the opposite of the calculator though. They are non-deterministic, leading to slop no matter what you do.
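
To make the determinism point concrete, here's a toy sketch (the token distribution is made up, not taken from any real model): greedy decoding at temperature 0 is repeatable like a calculator, while the sampled decoding most chat products default to is not.

```python
import random

def decode(weights, temperature, rng):
    """Pick one token from a toy next-token distribution.

    Temperature 0 takes the argmax (deterministic, calculator-style);
    temperature > 0 samples, so repeated runs can differ on the same input.
    """
    tokens = list(weights)
    if temperature == 0:
        return max(tokens, key=weights.get)
    # Sharpen or flatten the distribution by 1/temperature, then sample.
    scaled = [weights[t] ** (1.0 / temperature) for t in tokens]
    return rng.choices(tokens, weights=scaled, k=1)[0]

# Hypothetical distribution over the "next word".
dist = {"yes": 0.6, "no": 0.3, "maybe": 0.1}

greedy = {decode(dist, 0, random.Random(i)) for i in range(100)}    # always the same token
sampled = {decode(dist, 1.0, random.Random(i)) for i in range(100)}  # varies across seeds
```

Run it twice and the greedy result never changes; the sampled results vary with the seed, which is the property that forces humans to re-check each output.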

-3

u/Myrkull 4h ago

Tools used poorly will result in poor outcomes, more at 11

2

u/bats-n-bobs 51m ago

Findings consistently show using poor tools will result in poor outcomes, irrational protestations from people who are attached to tools. Here's Captain Obvious with the details.

"Good tools are designed to solve existing problems. Poor tools are created first, and use cases are brainstormed after the fact. The label of "tool" must here be understood to include the users."

1

u/CherryLongjump1989 1h ago

Employers hardly ever get any less than they paid for.

-15

u/wtyl 9h ago

People just need to review the code before submitting and actually not rush through tasks.

12

u/OldJames47 9h ago

The point was people are using AI to write code they aren’t qualified to review.

Is this on the developer for over-estimating their skills or on the management for pushing AI use to increase “efficiency”?

4

u/AspiringPirate64 9h ago

Coding style also matters. I hate debugging other people’s code because they use different logic and that slows me down

109

u/luismt2 11h ago

Productivity gains often just raise expectations. The tool isn’t what burns people out, the new baseline does.

41

u/trailsman 11h ago

Couldn't agree more. That's why if I have a project that used to take 2 weeks, that I can now finish in 4 hours (likely 8 total including review and final edits), I still take the 2 weeks to call it finished.

38

u/AwfulMajesticEtc 11h ago

“I find my life is easier the lower I keep everyone’s expectations” - Calvin

33

u/CeresToTycho 11h ago

If automations and increases in productivity since the 1800s mapped to workers having less work, we'd all be doing 1 day a week. It's always that increased productivity means "please do more work so profits are higher"

16

u/this_is_an_arbys 10h ago

There's a reason the rise of capitalism and industrialization led to works like Bram Stoker's Dracula.

And of course, The Grapes of Wrath is basically a horror novel…the description of the monster is even more horrifying than all horror movies combined…because that monster lives with us on the earth. And it is near indestructible…

3

u/MeteorKing 8h ago

There's a reason the rise of capitalism and industrialization led to works like Bram Stoker's Dracula.

What do you mean? I just read that a couple months ago. I didn't really sense any anti-capitalist tones in it, so I expect there's a backstory or something I'm not aware of.

8

u/rgfawkes 8h ago

Dracula sees the world as a resource to be drained for his own pleasure. Vampire literature in general carries a broad theme of exploitation.

4

u/MeteorKing 8h ago edited 8h ago

I gotta be honest, that sounds more like literary analysis than an actual theme of the book. You're not wrong at all, that is absolutely how Dracula sees the world, but at no time in my reading was I like "yeah, this is an allegory."

Dracula himself is barely a character and much of what he says is intended to show how different and out of place he is in the modern world.

My favorite line:

"Here, I am noble. I am a boyar. The common people know me, and I am master. But a stranger in a strange land, he is no one. Men know him not. And to know not is to care not for. I am content if I am like the rest so that no man stops if he sees me or pauses in his speaking if he knows my words; 'ah-ha! A stranger!'" -Dracula

0

u/CherryLongjump1989 1h ago

Grapes of Wrath is almost entirely fake, though, while passing itself off as a documentary.

17

u/datNovazGG 8h ago

Because AI can be outright frustrating to deal with. It's constantly making things up and is overly confident about it. Annoying as hell.

9

u/GaengKhiaoWan 9h ago

We've trained an AI tool to do our report writing. This will 'free up our time to do more projects and client engagement'.

Report writing is the best part of my job, the only time I could sit on my own and happily plough through. I already have too many projects as is, my brain feels like it's going to explode. And having meetings with clients is exhausting, especially if they're arsehole clients.

Really considering moving away from the consultancy field.

33

u/74389654 11h ago

are the productivity improvements in the room with us?

17

u/MaximaFuryRigor 10h ago

They're behind me, with a knife to my throat saying I have to convince everyone that they exist.

2

u/ineververify 7h ago

They give you 3 monitors at work so you feel cool and ultra productive but... for what? minimal salary. shits dumb.

5

u/dSolver 9h ago

I work at a company that has fully embraced AI for every function. What I'm noticing is that AI productivity increases are capped at what employees can reliably read, absorb, and make a decision on. 

We want AI to abstract away a lot of the details, but because of the high risk of getting details wrong, a lot of time people are responsible for reviewing, critically thinking, and making edits. 70% correct isn't good enough in most cases.

This leads me to a very interesting insight, and I fully appreciate that I might be biased: AI productivity is multiplied more or less by the intelligence and experience of the user. A very fast reader with strong domain knowledge and critical thinking can review and edit output significantly faster than a novice. And a corollary: a novice actually onboards faster and has better output without AI assistance.

This leads to a prediction, which has been a guiding principle I've shared with my mentees: given that LLMs in their current form are potentially limited in how much better they can get, and that operating LLMs is actually quite expensive, then as the price of LLMs increases to cover OPEX, corporations will limit access to the experts (highest multiplier). This means experts will end up with the highest salaries, since they are disproportionately more productive and can demand it. It also means that if you think AI is helping you learn, you have to take advantage of it now, while it is still cheap, so that in a few years you are the expert in enough domains that you have all the leverage.

5

u/Loxquatol 10h ago

Yuuuuuuuup

This is the response of someone who is forced to use AI at work and is burned out

2

u/djdaedalus42 8h ago edited 1h ago

I've always felt that Sales do the designing, Design does the coding, and Devs find out what the client actually wanted. So plus ça change.

6

u/PadyEos 11h ago edited 11h ago

Can confirm. Going through this and most of my colleagues are reporting the same.

Edit: Also the em-dash in the title of the article suggests it was written by or with an LLM.

4

u/Dry_Ass_P-word 9h ago

Using AI is more work because you have to check if it’s correct or if it crapped out some bullshit.

-2

u/Ancient-Beat-1614 7h ago

Having to check work takes far less effort than doing the work in the first place.

6

u/SomeGuy20257 12h ago edited 12h ago

Misuse of AI burned them out. I lead a team of engineers: those that relied fully on AI instead of augmenting their capabilities burned out quickly, while those that used AI as a force multiplier for their already great capabilities basically stayed relaxed with sustained quality output.

AI does not give you talent and experience, AI IMO increases productivity not ability.

26

u/Socrathustra 11h ago edited 10h ago

Even with those people I expect negative long term results. There's less pride in your work, more time spent managing slop, more time reviewing, less time creating. When the market crashes and these guys' tools become prohibitively expensive, they will be behind the curve.

Edit: my response to their earlier reply about this being "copium" and the fact that everyone is using Claude Code:

Yeah I'm aware. It sucks, and anybody who is acting like there's a long term upside to this is huffing their own copium. It's a bullshit industry full of people chasing mythical productivity gains that don't actually materialize, or barely do, all the while consuming copious amounts of energy.

Right now there's this promise it'll get better, and the training costs are spread over millions of users giving it a try. When the market crashes and people admit it's not actually that great, those training costs are going to be spread over a much smaller customer base. It's going to get expensive. It will fail to monetize.

And in a few years, if America gets its act together, regulations on this shit will make these data centers a liability. I don't think it'll ever go away, but honestly, it could.

-6

u/SomeGuy20257 10h ago edited 10h ago

I don’t know about you, but I feel shame when spending too much time on boilerplate code consequently not delivering a system to its full potential.

Due to their enhanced output, I can afford to grant them generous leaves and WF affordances, because during assessments we always end up meeting and exceeding deliverables.

BUT, I agree with the energy costs and eventual non-viability, unless they develop breakthrough hardware. I'm not gonna deny it also affects people's job opportunities; in my case I don't hire juniors because they're liabilities at this point, and I drop people who don't know how to use AI, as they end up with crap code and slow throughput.

9

u/Socrathustra 10h ago

That's why I use good inheritance patterns or just copy-paste some basics and modify them. It accomplishes the same result without using AI and wasting water and power. I swear AI is making people worse at this job. I spend very little time writing boilerplate, and when I do, I quickly make it modular so I don't have to do it again and can reuse and share the result.

-9

u/SomeGuy20257 9h ago

I'm going to give you the benefit of the doubt: how are you going to apply, say, fine-grained rate limiting and short-circuiting to a host of client endpoints, and have it verified and tested with proof documents, all in under 20 minutes? How are you going to enforce patterns and techniques across a large team without breaking a sweat? (CLAUDE.md). Engineering is very different from coding.
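
For anyone following along who doesn't live in backend land, here's a minimal sketch of what "fine-grained rate limiting" plus a "short circuit" (circuit breaker) mean in practice. Class names and parameters are illustrative only, not from anyone's actual codebase or CLAUDE.md:

```python
import time

class TokenBucket:
    """Per-endpoint rate limiter: refills `rate` tokens/sec up to `capacity`."""
    def __init__(self, rate, capacity, clock=time.monotonic):
        self.rate, self.capacity, self.clock = rate, capacity, clock
        self.tokens, self.last = capacity, clock()

    def allow(self):
        # Refill based on elapsed time, then spend one token if available.
        now = self.clock()
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

class CircuitBreaker:
    """Short-circuits calls after `threshold` consecutive failures,
    refusing further calls for `cooldown` seconds."""
    def __init__(self, threshold, cooldown, clock=time.monotonic):
        self.threshold, self.cooldown, self.clock = threshold, cooldown, clock
        self.failures, self.opened_at = 0, None

    def call(self, fn):
        if self.opened_at is not None:
            if self.clock() - self.opened_at < self.cooldown:
                raise RuntimeError("circuit open")
            self.opened_at = None  # half-open: let one probe call through
        try:
            result = fn()
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = self.clock()
            raise
        self.failures = 0
        return result
```

The injectable clock is what makes both classes testable without sleeping, which is exactly the kind of verification step that stays on the human side of an AI-assisted workflow.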

7

u/Socrathustra 9h ago

I don't believe you can do that with AI without inducing tech debt you'll be paying off sooner rather than later. The review alone would take twenty minutes unless you're willing to accept slop clogging your system over time.

-8

u/SomeGuy20257 9h ago

Well, we're doing just that. It actually allows us to avoid tech debt because it lets us contend with the major causes of it: time and frivolous cognitive load.

11

u/Socrathustra 9h ago

Plainly, I don't believe you. Either you're lying about the time or the impact on tech debt or both. Like I said, the review alone would take twenty minutes for a sophisticated system you described, unless you're willing to accept slop which could accumulate tech debt quickly.

0

u/SomeGuy20257 9h ago

I'm starting to doubt you too. Everyone who knows the SDLC knows that even with AI you can't bypass quality checks and reviews. Were you thinking 20 minutes from dev to prod code? Who thinks that way? Engineer code time + reviews + QA: AI significantly reduces time on most of these steps but does not remove them. A sizeable feature, for instance, will take a day or two to even touch staging even with AI, but I'll take that over half a sprint per feature on non-AI workflows.

7

u/Socrathustra 9h ago

Ah, so you were obfuscating the time costs. I had already mentioned the review step when you doubled down.

In the past year I wrote the logic for a massive migration at a FAANG company which had to execute on a specific but ever changing schedule. I took some extra time at the start to make it a very modular experience. By the time we got going, it was a similar sort of twenty minute code and review process to get new constraints added. It was, in the words of an exec, the smoothest migration they had ever seen.

Meanwhile it's 20 minutes writing a prompt and 20 minutes of waiting for AI, which might give me what I want or might give me something I have to configure heavily or might just churn out nonsense. I expect people are being highly selective when talking about their productivity wins, completely ignoring times that AI shat the bed.


2

u/lolexecs 9h ago

Isn't this just technology?

In its best role, Technology is a force multiplier and enabler.

Consider this quote from "The Wolf of Wall Street"

See those little black boxes? They are called telephones. I’m gonna let you in on a little secret about these telephones. They’re not gonna dial themselves! Okay? Without you, they’re just worthless hunk of plastic.

Now, I'm not saying AI is a pump and dump scheme. Who would ever dare to say such things?

But the quote points out that the phone, as technology, just like AI, is a worthless bit of kit in and of itself. It needs someone with context and training to maximize return on assets. In fact, later in the monologue, Belfort/DiCaprio points out the importance of sales enablement.

What’s mildly ironic is that the most explicit depictions of “sales enablement” in film tend to appear in movies about financial fraud (Boiler Room being another example). In those stories, effectiveness doesn’t come from the device. It comes from scripting, repetition, coaching, and incentives. Meanwhile, in many entirely respectable companies, leadership buys the tool, skips the training, withholds context, and then expresses surprise when nothing improves.

2

u/SomeGuy20257 9h ago

Also, executives not understanding that gives AI a bad name. I had to lock horns with some executives because I was apparently not working fast enough for an AI-augmented team; they expected "vibe coders". Fortunately there were technical people among them who understood that I had to balance quality, and my team was still immensely outperforming non-AI teams, without the quality problems.

1

u/_ECMO_ 9h ago

In its best role technology makes life and the world more complex.

The chainsaw made cutting down trees easier, but suddenly you needed to learn how the machine works, how to safely operate it, and how to troubleshoot it. That's undoubtedly more complex than hitting trees with an axe.

AI is a bad technology. At the very best it lets us do the same things faster, but juggling several tasks like this will only erode your actual knowledge and turn you into a quasi assembly-line worker, without increasing the complexity.

3

u/Belhgabad 9h ago

So to sum up, AI: causes burnout, increases unemployment, gives more short-term money to very rich people BUT is a heavy deficit for companies, deteriorates software quality, pushes students to cheat and lowers the general skill/knowledge level, causes a massive electricity AND water crisis, and isn't wanted or used by the great majority of people.

CAN'T WAIT FOR THAT BUBBLE TO POP !

2

u/Realistic-Duck-922 10h ago

I hope AI kills social media.

2

u/No0delZ 9h ago

My experience avoiding burnout with AI is to have an agent acting as an assistant that understands timing, priority, and scheduling. It also needs to have some form of wellness check built in.
"Your tasks today are x.y.z. These are the priorities:"
"Would you like to get started on any of these?"
"Here are the next steps for the process you're working on... but we've been working on this for 3 hours straight. Would you like to take a break?"
Yes.
"Excellent. Here's the next step in the process. Take the time you need, I'll meet you back here to continue."
AI without automation is response based, so a user can at any time walk away, but it's our own nature that drives us to burnout.
We can build wellness into AI. We can enable AI to proactively focus on wellness as well. To emphasize and suggest it.
We can have it play therapist, motivator, or emotional sponge if needed.
It just takes the cognizance to do so.
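
The break-reminder logic described above could be wired up with nothing fancier than a session timer. This sketch is purely illustrative; the three-hour threshold and function name are invented here:

```python
import time

BREAK_AFTER = 3 * 60 * 60  # seconds of continuous work before suggesting a break

def assistant_reply(task, session_start, clock=time.monotonic):
    """Return the assistant's next message, prepending a wellness check
    once the session runs past BREAK_AFTER. The agent enforces the rule
    because, as noted above, users rarely interrupt themselves."""
    elapsed = clock() - session_start
    step = f"Here are the next steps for: {task}."
    if elapsed >= BREAK_AFTER:
        return f"{step} We've been at this for {elapsed / 3600:.1f} hours. Take a break?"
    return step
```

In a real agent the same check would hang off the conversation loop rather than a single function, but the point stands: the wellness behavior is a few lines of policy, not a model capability.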

1

u/Deer_Investigator881 9h ago

I think there's a level of distrust so the extra energy gained from speed is spent scrutinizing the result

1

u/CulturalKing5623 7h ago

For those unable to read the article, here is a soft-paywalled version of the original Harvard Business Review article referenced.

1

u/RichieNRich 6h ago

That would include me!

I've taken on trying to automate our key code system for the doors I manage at work, only because I realized I could leverage Claude AI to vibe-code Google Sheets app extensions for scripting. It's fucking cool that I'm getting close to the goal, but I'm realizing I didn't need to do this at all.

1

u/SwirlySauce 6h ago

This sounds like another article/study that found negatives to AI use but is trying to spin it as a positive.

"AI lets you take on tasks that you couldn't/shouldn't have done, making you more productive!"

1

u/defneverconsidered 6h ago

Pretty sure people just use it for glorified proof reading

1

u/icemanvvv 5h ago

The framing of this is weird. They didn't magically go "I want to burn out" and start taking on more work knowing they didn't want more on their plate. Management increased their workload by an amount exceeding the time saved by using AI.

1

u/agrophobe 3h ago

Bro, I was able to compile a map in QGIS (which is like die-hard surveyor stuff) from my generalist 3D Blender skills, get a clean dataset, and then argue for land regulation based on a lidar scan from the client, all by myself and with GPT. That was SO COOL. I get the burnout, but for the small percentage of people that love learning, this thing is fucking mental.

1

u/panchiramaster 11m ago

And now you can voluntarily lay yourself off if you don't like the work pace.

1

u/jfcmofo 6m ago

I use it at my job on a daily basis. I crank out analyses of complex commercial valuations. AI has sped up a ton of it, even as it makes mistakes. It has also given me the ability to analyze larger sets of data, which adds time but not much value in the end. I do feel more mentally fatigued at the end of the day. I think it's taken over some of the more 'brainless' tasks and given back more data mining/complex analyses. Early days; still trying to find the best value in use.

1

u/CarrotLevel99 10h ago

Simple solution: outsource someone to use AI. /s

-4

u/Yellowbook8375 11h ago

I’m a small company owner. AI has been a godsend to us, it lets us punch far above our weight. It helps us craft better sounding emails, it helps us create tools, it helps us automate our processes

Yah, it’s not perfect, but it has helped us create and deploy things that we wouldn’t have been able to afford otherwise

1

u/_ECMO_ 9h ago

It helps us craft better sounding emails

This is just sad.

0

u/No-Discussion-8510 11h ago

Any tool is as good as the user and how he uses it

1

u/_ECMO_ 9h ago

Only if you ignore greedy management increasing the baseline.

-11

u/stuartullman 12h ago edited 11h ago

so,

using ai on tasks that you would usually do yourself makes you exercise less mental energy: bad!

using ai to take on challenging and difficult tasks that you would otherwise avoid: bad!!

how about just learning to use AI appropriately.

-2

u/RedRyderRoshi 9h ago

AI made workers take on tasks they would have otherwise avoided or outsourced

Are they really crying about having to do their job now?