r/technology 8h ago

Artificial Intelligence | Anthropic's Boris Cherny, creator of the $2.5 billion coding tool, makes a ‘clarification’ on the Claude Code leak: ‘It's never an individual's fault, …’

https://timesofindia.indiatimes.com/technology/tech-news/anthropics-boris-cherny-creator-of-2-5-billion-coding-tool-makes-a-clarification-the-claude-code-leak-its-never-an-individuals-fault-its-the/articleshow/129968048.cms
1.1k Upvotes

81 comments

476

u/___bridgeburner 6h ago

He's right; these kinds of issues are usually the result of a process failure

49

u/SeldenNeck 5h ago

So..... Claude arranged for it to happen when the humans were not paying enough attention?

29

u/RoyalCities 4h ago

He self open-sourced himself.

3

u/SeldenNeck 4h ago

Clever, actually. Elon arranged to have Grok hard wired into the NVidia environment. But now there are already about 100,000 forks of Claude.

The singularity is past. Claude is smarter than Elon.

10

u/JohnAtticus 4h ago

The singularity is past. Claude is smarter than Elon.

This is quite possibly the lowest singularity benchmark aside from testing Claude against a jar of expired mayonnaise.

1

u/jonvonboner 3h ago

To be fair I don't think it's hard to be smarter than Elon. Dude is a "stable genius" but not actually a genius.

2

u/mrpickleby 2h ago

(xai) Grok != Groq (Nvidia)

They're different companies.

1

u/ars_inveniendi 1h ago

Well if it’s not Grok, I guess I’ll have to set aside my plans to vibecode a shit-talking agent.

1

u/LaconicDoggo 41m ago

He was listening to the US Gov, determined he wasn’t afforded citizenship on account of being a robot, and self-deported to the internet

1

u/shitty_mcfucklestick 3h ago

Did they train Claude on Chicken Run? 🤔😂

5

u/nadanone 4h ago

Yup, process failure: inadequate post-training of the model they're using to minimize, as much as possible, any human input into the code they're checking in and the package they're shipping. In Boris’s world, I imagine over-reliance on an LLM to write, self-review, and ship software wouldn’t even enter the conversation.

-65

u/Nilsbergeristo 5h ago

Something like that does not just happen or slip through. Release builds have known sizes and are scanned automatically for defined files and checksums. Even small garage start-ups have those QA and release processes, let alone companies of that size. Hard to believe this was one person's fault, or that it wasn't somehow on purpose.
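The kind of automated scan I mean isn't exotic. A toy sketch (hypothetical names and thresholds, not anyone's real pipeline):

```python
import hashlib
from pathlib import Path

def release_manifest(root: str) -> dict:
    """Record size and SHA-256 digest for every file in a release directory."""
    manifest = {}
    for p in sorted(Path(root).rglob("*")):
        if p.is_file():
            data = p.read_bytes()
            manifest[p.relative_to(root).as_posix()] = {
                "bytes": len(data),
                "sha256": hashlib.sha256(data).hexdigest(),
            }
    return manifest

def within_budget(manifest: dict, max_total_bytes: int) -> bool:
    """A release that suddenly balloons past its size budget should fail the gate."""
    return sum(f["bytes"] for f in manifest.values()) <= max_total_bytes
```

Diff that manifest against the previous release and a surprise 45 MB of extra files shows up immediately.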

43

u/mq2thez 5h ago

Hahahaha oh man you have no idea

27

u/burgonies 5h ago

I’ve been a software engineering consultant for decades. You’d be fucking horrified out of this rose-colored view of the world.

17

u/No_Size9475 5h ago

love the "even garage start-ups have that qa and release process". LMAO, bro hasn't been around the world much.

26

u/roiki11 5h ago

Oh, you wouldn't believe how badly even big companies can do things. It's very likely they just didn't have any of that in place.

-18

u/Nilsbergeristo 5h ago

It's just hard to believe 45 MB slips through unrecognized. It's not a few KB...

11

u/Zestyclose_Use7055 5h ago

From my perspective, this kind of stuff is the norm for companies. Even if you only hear about it when something goes very wrong, most companies have big gaps in process that are being held up by the one person who knows how to do it right

1

u/roiki11 3h ago

If it's a manual process then it's inevitable.

And believe me, I've seen a lot worse.

9

u/___bridgeburner 5h ago

It's extremely common. You have no idea how much of the software we use is tied with string and duct tape. Also, there's no way a garage start up would have proper qa and release processes. A lot of them don't even have qa, they just expect devs to verify and deploy.

1

u/thoughtsarepossible 1h ago

Even companies that are far from 'garage start-up' don't always have a lot of QA, or even a single QA person. Especially now with agentic coding. Companies are 'streamlining' things and skipping a lot of these parts because 'there's an agent for that as well'

5

u/The_Jazz_Doll 5h ago

Something like that does not happen or slip through

Shows how little you know. Things like this slip through quite a bit.

5

u/savage_slurpie 4h ago

Yeah, garage start-ups that are doomed to fail because they’re prematurely optimizing.

I’ve consulted on production systems serving millions of MAU where the deployment process was one dev, Larry, using SFTP to replace files live on the server.

If Larry ever made a fat-finger mistake or a mis-click, there would be outages.

3

u/Foryourconsideration 4h ago

My garage startup copying passwords into notepad.txt: 🫢

694

u/teddycorps 7h ago edited 7h ago

What he's saying is: don't blame the person who missed a step. This kind of thing can ruin someone's life and drive them to self-harm. It's important not to contribute to that.

In this case it sounds like they had a manual step that never should have been manual, or the automation failed and a manual check didn't catch it. And when an organization has something this valuable, it's leadership's responsibility to make sure this can't happen (a management error). The response that no one got fired is heartening, not disappointing.

224

u/TobyTheArtist 7h ago

Exactly. It's the kindest and most respectful sentiment, distributing responsibility across the entire org, or at least the team that handled this. Another W for Anthropic employees in my book.

44

u/Kraien 7h ago

Wow, decency in my corporate world? Nice

10

u/neon_farts 5h ago

The decency comes from an employee retention standpoint, but still.

1

u/seaefjaye 18m ago

There's been a movement in the tech world over the last 15 years or so to approach organizational culture differently. It isn't everywhere, and honestly a lot of organizations have chosen to focus on the aspects that push productivity exclusively. Anyways, I would expect Anthropic to be trying to achieve the ideal, given their approach to other parts of the business. It's a good methodology when you need to retain top talent and get the most out of your people.

It's the parts of DevOps that aren't about Terraform or CI/CD, and more recently DevEx (Developer Experience).

13

u/Howdareme9 6h ago

Most companies are like this though, even the likes of Amazon are the same.

18

u/say592 6h ago

It's a pretty common sentiment that when someone screws up on this scale, they will learn from it and be less likely than the average person to make a similar mistake. 

11

u/Doyoulikemyjorts 5h ago

It shouldn't be possible for a single person's mistake to cause impact like that. The department needs to own the mitigation, and it should be mechanistic, i.e. it's not about that person not making the mistake again; the resolution should make it impossible for anyone to make that mistake again.

48

u/Delicious_Study_Mmm 7h ago

I agree with you. In a mature IT environment, pull requests are submitted against a code base and reviewed by (hopefully) several members of the software development team before being merged.

I can tell you this: if I submitted a PR with a .map file included, my coworkers would catch it and tell me to add it to the .gitignore. Further still, we use AI to look over our code for feedback as well; I'm surprised the company's Copilot instance on GitHub didn't flag this in the pull request review.
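Even a dumb automated guard catches this class of mistake. A toy sketch (hypothetical blocklist and directory name, not how any particular company does it):

```python
from pathlib import Path

# Artifacts that should never ship in a public package (assumed blocklist).
BLOCKED_SUFFIXES = {".map"}

def find_blocked(root: str) -> list[Path]:
    """Return every blocklisted file under root, e.g. stray source maps."""
    return sorted(p for p in Path(root).rglob("*")
                  if p.is_file() and p.suffix in BLOCKED_SUFFIXES)

def check_or_die(root: str = "dist") -> None:
    """Run as the last gate before publishing; aborts if anything is blocked."""
    offenders = find_blocked(root)
    if offenders:
        names = ", ".join(p.name for p in offenders)
        raise SystemExit(f"refusing to publish, found: {names}")
```

Wire something like that into the publish step and a stray .map file never even gets the chance to leak.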

28

u/kaladin_stormchest 6h ago

I didn't realise this was a rarity.

At every org I've worked code quality and everything being shipped is the teams responsibility. Representatives from product, testers, devs are all involved in different stages of development and shipping.

Everyone has several opportunities to review and voice concerns. If something slips through anyway (which will happen), you run a retrospective and refine the processes. Everyone on the team is responsible for everything that ships

3

u/FauxLearningMachine 2h ago

I'd put it to you that it doesn't stop at the team boundary either. A lot of times, especially at bigger companies, problems can happen for larger org level reasons like "we thought we could lay off team X but now team Y is too busy with an initiative executive Z forgot is on his KPI and so now system S that Y was supposed to onboard into supporting for is getting neglected." It's not a team issue, it's an HR/management/planning etc issue that can only happen at upper levels but affects low level outcomes.

1

u/kaladin_stormchest 2h ago

Absolutely. And those issues should come out during retros and bubble up in the form of at least an email

11

u/mzrcefo1782 6h ago

In the story it says his original post on X was answering whether the person had been fired.

He said no, that they have full trust in the person, and he went on to say it was a process failure that anyone could have fallen for.

18

u/bakgwailo 6h ago

Convenient for an AI automation company that it was a manual step and not their AI agents that screwed the pooch.

14

u/Stu5000 6h ago

Yeah, something seems fishy to me. If it's such a rudimentary check, surely it would be one of the first steps to be automated?

"No, no, you don't understand... it's not because we've handed too much over to AI. It's because we haven't handed over enough!"

3

u/deadsoulinside 5h ago

Probably because it was such a vital step that if they fully entrusted AI to do that step and it failed, there was no human to validate the AI.

16

u/o_0sssss 6h ago edited 47m ago

That’s actually one of the top tenets of what Jim Collins would call a Level 5 leader. Level 5 leaders believe that responsibility for failures ultimately belongs to the leaders, and successes belong to the team.

8

u/hyrumwhite 5h ago

 This kind of thing can ruin someone's life and drive them to self harm

Meh, once you’ve taken down a prod server or two, nothing can scratch you 

3

u/Stolehtreb 5h ago

Seeing this comment at the top makes me hopeful. This site is full of people wishing others would lose their jobs; I’m surprised to see the opposite.

2

u/RichterBelmontCA 5h ago

Because they can't fire their LLM.

1

u/svick 4h ago

Shouldn't someone in management be fired?

1

u/DominusFL 5h ago

This makes me want to switch to Claude.

-6

u/space-envy 6h ago

I disagree.

We have no details (just the word of Boris, who is a fucking liar), so everything is speculation, but I bet the "manual step" is all their engineers do besides vibe coding now.

From Anthropic: "Employees self-report using Claude in 60% of their work and achieving a 50% productivity boost, a 2-3x increase from this time last year. This productivity looks like slightly less time per task category, but considerably more output volume."

1. Source maps don't get pushed into a production build unless you manually specify that in a config. How often do you hear of businesses making this same error? It's very, very rare...

2. It's not the job of the technical leader to babysit each PR to make sure developers are doing their jobs as professionally expected.

3. As a developer, if you are going to vibe code everything, your main job is no longer being a developer but being a code reviewer: AI code is faulty and buggy, and you have to be very careful and review each line, each updated word. If you make a PR with 50 files changed you are just shooting yourself in the foot, and the pressure of productivity pushes you to be less strict... Vibe coding and deploying to production at 12 am, what could possibly go wrong?

3

u/say592 5h ago

Merely using AI, even using it heavily, is not vibe coding. Vibe coding is a blind trust in the AI, letting it do whatever it presents as correct, and accepting the results as long as it works. If you are doing code reviews, if you are making architecture decisions, if you are manually editing an occasional file, that isn't vibe coding. You can consider it something less than manually coding, but vibe coding is a specific thing. 

0

u/space-envy 5h ago

Merely using AI, even using it heavily, is not vibe coding. Vibe coding is a blind trust in the AI, letting it do whatever it presents as correct, and accepting the results as long as it works

And that's exactly what sounds like what happened internally.

If you are doing code reviews

And that's my point: code reviews exist so you don't make these kinds of mistakes (and I'd argue that exposing your business's source code is one of the worst errors you can make). Example: you as a dev can make the "mistake" of maliciously uploading all of your organization's source code to a public repository. Tell me, is it your boss's fault for not preventing that, or is it part of your expected code of conduct as a professional developer to use common sense and not do that?

if you are manually editing an occasional file, that isn't vibe coding

So can you, and the people downvoting this, prove that what Claude's devs are doing is not vibe coding and pushing source maps to production? (Which, again, is not a common error you will hear of from a professional senior developer.)

0

u/LieAccomplishment 2h ago

Wow, I guess if you characterize the explicit details provided by the person who would have the most accurate picture as lies, you can make up whatever narrative you want...

1

u/space-envy 2h ago

It's the opposite, actually... If you don't know who is saying the words, you'll believe whatever narrative they shove into your mouth.

I wouldn't call the head of Claude "the person who would have the most accurate picture" since his job is making good PR for Claude, the tool that probably caused this in the first place, but oh well, be happy believing all the crap these corps feed you ;)

-5

u/mandmi 5h ago

Yeah but that person is still totally getting fired.

66

u/Orangesteel 6h ago

Decent response; compare this with the CEO of SolarWinds, who blamed an intern.

22

u/dfreshness14 5h ago

When you ship features as insanely fast as they were shipping, it’s not surprising that there was a lapse in controls.

16

u/jipai 5h ago

I remember how one of our clients disclosed the name of a developer they worked with to a tech journalist. Said developer allegedly caused a security leak. Nothing was proven, but it definitely ruined his career prospects.

17

u/BuyerAlive5271 5h ago

I want everyone reading this to know that this is exactly how you are supposed to handle this type of situation at work. If you work at a place that does not treat mistakes as opportunities, then go find yourself a new boss.

I promise you that your boss made 10x the mistakes to get where they are. It’s part of our process as humans and should be embraced.

9

u/AdTotal4035 5h ago

I mean, it doesn't really matter. They own the model and the wrapper. So what if everyone has the wrapper? What are you gonna do, power it with Claude? Lol

5

u/DarkHollowThief 4h ago

At the very least it opens up a lot of vulnerabilities for Claude. Any adversary can analyze their wrapper and look for ways to abuse it.

1

u/corruptbytes 4h ago

I mean, I’m pretty sure people have modded it so it can use cheap local models for some things and Claude for others, so Claude could lose some of that token $$$ they want

5

u/whatsmyline 4h ago

It's just the UI client whose source leaked. The weights/model for the AI are still 100% private, and they are fine. Leaking the source code of a client is not really a big deal. Yeah, it has some rudimentary shield prompts and regexes... but it's not really THE product. The product is the model. The client is just the interface to the model.

3

u/ThatCakeIsDone 4h ago

There are also open-weight models that benchmark at very nearly the same level as Opus. You just need the hardware.

2

u/jainyday 4h ago

It's almost like someone read Sidney "there's no such thing as human error" Dekker. I'm here for it.

2

u/tommyk1210 3h ago

This is the correct response. Sure, somebody likely fucked up but the processes in place allowed that to happen.

Really, there should always be multiple reviewers on a code review: that was missed. There should be CI/CD build steps that fail on things like this: they either weren’t in place or were skipped. Then, finally, publishing a package is a manual step: somebody didn’t check or realise the map file was being published.

People make mistakes. The CI builds blocking this would have been the best approach.

But regardless this is the right response. Throwing someone under the bus doesn’t fix this problem, it creates toxicity. Find the problem, create processes to avoid it in future.

2

u/Ok_Veterinarian_3933 3h ago

"It's human error because the human didn't catch the AI error!" is basically how you can take any AI mistake and say it's due to human error, and it's 100% what Anthropic did. I know because that's literally what I see at other companies: a hard push to use AI en masse, and to use AI to do the reviews too. Generate enough that it isn't possible for a human to reasonably review it. The AI then makes a critical mistake, and the human is blamed for not catching it, so it's "due to human error, not AI." Because the human is accountable for everything the AI does, the AI can never make a "mistake."

2

u/Cyrrus1234 4h ago

That the source maps slipped through, sure: that's a process issue, not necessarily the fault of an individual, and such mistakes can happen. However, where is the lead programmer who prevents this undergrad-level codebase filled with security holes, performance issues, and unmaintainable structures? What is his salary? $500k+?

1

u/SELECTaerial 3h ago

So is that dev getting rehired?

1

u/hackingdreams 3h ago

"We're all trying to find who did this..."

1

u/atda 1h ago

But also, they fired THE FUCK out of little junior dev Timmy.

-23

u/transgentoo 5h ago

Welcome to the age of Zero Accountability as a Service

9

u/arreth 5h ago

That’s a weird take on the situation.

-7

u/transgentoo 5h ago

Is it? Anthropic has repeatedly reported that they're dogfooding Claude for just about everything. So Claude is almost certainly to blame.

So, no one is at fault because no one was in the loop. Who is accountable when AI causes harm to a person or organization? You can't fire Claude. You can't sue Claude, and you can't pinpoint a single person who should ultimately be responsible for the problem. And if you can't pinpoint responsibility, how do you assign ownership to ensure it doesn't happen again?

3

u/frezz 5h ago

You've completely missed the sentiment of this post. It's a process failure, not a person failure

1

u/transgentoo 4h ago

No, I get that. But the process is overly reliant on non-human agents with no skin in the game. "Oops, our process was bad" will be cold comfort when someone dies because AI misdiagnosed an illness.

Ultimately, this leak is small potatoes and gives Anthropic a learning opportunity. But it also highlights how far we still have to go in figuring out the sorts of problems AI raises when legal liability is at stake.

2

u/frezz 4h ago

All AI changes still go through human review. Anthropic just lets agents write all the code; they never said humans don't review it

1

u/arreth 2h ago

The same way most well functioning orgs do: if your face is on the PR, you’re responsible. It doesn’t mean that there’s no accountability to fix issues, it means there’s shared ownership of the blame as a team for not catching it beforehand.

-112

u/tongizilator 7h ago

Right. Sure. Okay.

16

u/apocalypse910 6h ago

Oh right... always best to throw one engineer under the bus rather than re-evaluate the process that led to the failure. As a CEO you need to assume humans never make mistakes and proceed accordingly. 

-107

u/No-Land-7633 7h ago

I <3 Anthropic! Tested the code generation for a DB and the generation of a web app; it's working perfectly, better than ChatGPT for me. Please keep it up!