simonw 16 hours ago [-]
"For the longest time, I would NOT allow people to write tests because I thought that culturally, we need to have a culture of shipping fast"
Tests are how you ship fast.
If you have good tests in place you can ship a new feature without fear that it will break some other feature that you haven't manually tested yet.
ephou7 16 hours ago [-]
Exactly. OP seems to have a very limited understanding of software development if that fact has eluded him.
iddan 13 hours ago [-]
Worked with a CTO who had the same rule of thumb. I quickly proved that strategic testing is a net positive for the business.
pastescreenshot 15 hours ago [-]
The rewrite version of this that has gone best for me is to do it as a strangler, not a reset. Pick one ugly workflow, lock in current behavior with characterization tests, rebuild that slice behind a flag, repeat. You still get to fix the architecture, but you do not throw away years of weird production knowledge.
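A minimal Python sketch of the characterization-test step described above. Everything here (the `legacy_price` function and its golden values) is a hypothetical stand-in for some ugly-but-shipped workflow, not anything from the article:

```python
# Characterization ("golden master") testing: record what the current
# code does, right or wrong, and treat that as the spec while the
# slice is rebuilt behind a flag.

def legacy_price(qty: int, member: bool) -> float:
    # Ugly but shipped logic we dare not change yet.
    total = qty * 9.99
    if member:
        total *= 0.9
    if qty > 10:
        total -= 5  # mystery discount nobody remembers adding
    return round(total, 2)

# Golden master: outputs captured from the current implementation,
# including the "weird production knowledge" nobody can explain.
GOLDEN = {
    (1, False): 9.99,
    (1, True): 8.99,
    (12, False): 114.88,
    (12, True): 102.89,
}

def test_characterization() -> None:
    for args, expected in GOLDEN.items():
        assert legacy_price(*args) == expected

test_characterization()
```

The rebuilt slice has to reproduce the same golden table before the flag flips, which is what keeps years of accumulated behavior from being thrown away.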
Hamuko 16 hours ago [-]
I think the more specific description would be that "not writing tests allows shipping fast today, writing tests allows shipping fast tomorrow and afterwards".
It wasn't too long ago that I wrote tests for something that was shipped years ago without any automated tests. Figured it was easier doing that than hoping we wouldn't break it.
simonw 15 hours ago [-]
Yeah, but in my experience it really is a literal today vs tomorrow thing.
Your tests pay for themselves the moment you want to ship a second feature without fear of breaking the first.
Ekaros 2 hours ago [-]
If you are going to test what you develop, you might as well write those tests formally right away. You are unlikely to get it fully right in one go, so you will need to run the same tests again anyway as you iterate on it today.
And that is the minimum level to aim at. If you can automate anything you do to test it right now, you should.
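In that spirit, a small sketch of formalizing the spot-checks you'd otherwise redo by hand on each iteration. `slugify` is an invented function under development here, not anything from the thread:

```python
# The same examples you'd eyeball in a REPL or browser after every
# change, written down once so they run themselves from now on.
import re

def slugify(title: str) -> str:
    # Hypothetical function being iterated on today.
    slug = title.strip().lower()
    slug = re.sub(r"[^a-z0-9]+", "-", slug)
    return slug.strip("-")

def test_slugify() -> None:
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  Shipping Fast  ") == "shipping-fast"
    assert slugify("v2???") == "v2"

test_slugify()
```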
ralferoo 17 hours ago [-]
It's not really addressed in the article, which doesn't actually say whether the rewrite was a net gain. I presume it was, or they wouldn't have written the article, and the lead-in picture paints a rosy scene, but the tone at the end suggests he's not happy with how things turned out.
But one thing that used to be a common design anti-pattern was the "version 2 problem". I think I first heard about it when Netscape were talking about how NN2 was a disaster, and they were finally happy with NN3 or NN4.
Often version 1 is a hastily thrown-together mess of stuff, but it works and people like it. But there are lots of bad design decisions, and you reach a limit to how far you can keep pushing that bad design before it gets too brittle to change. So you start on version 2, a complete rewrite to fix all the problems, and you end up with something that's "technically perfect" but so overengineered that it's slow and everybody hates it. Plus, there are probably so many workflow hoops to jump through to get things approved that you end up not making any progress, and possibly version 2 kills the product and/or the company.
The idea is that "version 3" is a pragmatic compromise: the worst design problems from version 1 are gone, but you forgo all the unnecessary stuff you added in version 2, and you finally have a product that customers like again (assuming you can convince them to come back and try v3) and that you can build future versions on.
To a large degree I think this "version 2 problem" was a by-product of waterfall design. It's certainly been less common since agile development became popular in the early 2000s and tooling made large-scale refactoring easier. Even so, I remember working somewhere with a v1 that the customers were using and a v2 that was a three-year rewrite going on in parallel. None of the developers wanted to work on v1 even though that's what brought in the revenue, and v2 didn't have any of the benefit of the bug fixes accumulated over the years to address very specific issues that were never captured in any of the scope documents.
pjmorris 16 hours ago [-]
"The general tendency is to over-design the second system,
using all the ideas and frills that were cautiously sidetracked on
the first one. The result, as Ovid says, is a "big pile."
- Fred Brooks, 'The Mythical Man Month' (1975)
ralferoo 16 hours ago [-]
Oh wow, it's from Mythical Man Month? I've been meaning to read that for years and still never have.
didgeoridoo 16 hours ago [-]
That and Brooks’ underrated “The Design of Design” are notable for having an almost impossible density of quotable aphorisms on every page. They’re all so relevant today that it’s hard to believe that he’s talking about problems he faced half a century ago.
shimman 15 hours ago [-]
Never heard of "The Design of Design" but I bought it off this comment chain.
I think our industry would do well to take a moment and breathe, to understand what we have collectively done since its inception. I often wonder whether, a thousand years from now, the highly corporatized influence on our industry during our time will be looked at as a dark age. The idea that private enterprise should shape the direction of our industry is deeply problematic; there needs to be a public option, and I doubt many devs would disagree.
allenu 16 hours ago [-]
I definitely encountered this second-system effect recently. I have an app that works well because it was written to target a specific use case. A user (and I) wanted some additional features, but the original architecture just couldn't handle them, so I had to do a rewrite from the ground up.
As I rewrote it, I started pulling in more "nice to haves" and opening up the design to potentially support more and more future features. I eventually got to a point where it became unwieldy, as it had too many open-ended architectural decisions and a lot of bloat.
I ended up scrapping this v2 before releasing it and worked on a v3 but with a more focused architecture, having some things open-ended but choosing not to pursue them yet as I knew that would just introduce unneeded bloat.
I was quite aware of the second-system effect when doing all this, but I still succumbed to it. Thankfully, the v3 rewrite didn't take as long since I was able to incorporate a lot of the v2 design decisions but scaled some of them back.
hinkley 16 hours ago [-]
My adaptation of the Version 2 Problem is “any idiot can ship version 1 of a product, but it takes skill to ship version 2”.
Usually levied at people who are so hyper-focused on shipping a so-called MVP (really demoware) that they are driving us at a brick wall while commenting the entire way on what good time we're making.
basket_horse 16 hours ago [-]
This has been my experience exactly. V1 was custom built for a single client and they loved it. As we tried to expand to multiple clients the v1 was too narrowly scoped (both in UX and code architecture) so we did a full rewrite attempting to generalize the app across more workflows. V2 definitely expanded our client pool, but all our large v1 customers absolutely hated it.
We never did a full v3 rewrite, but it took about 4 years and many v3 redesigns of various features to get our legacy customers on board.
maplant 17 hours ago [-]
Having a culture of never writing tests and actively disallowing them is so insane that I can't even imagine why there's anything else in this post.
kace91 16 hours ago [-]
And particularly the “no tests go faster”.
I feel like we keep having to reestablish known facts every two years in this field.
hinkley 16 hours ago [-]
I ran into some serious struggles when we got far enough into accepting most of the tenets of XP as standard practice that most jobs didn’t even debate half of them and then landed at places that still thought they were stupid. I’d taken for granted I wasn’t going to have to fight those fights and forgotten how to debate them. Because I Said So is not a great look.
paulhebert 16 hours ago [-]
[flagged]
oraphalous 17 hours ago [-]
[flagged]
soperj 16 hours ago [-]
> they just stampede in with "THIS IS THE RIGHT WAY". And the discussion can't even be had.
That's exactly what this person is railing against.
They strictly forbid testing.
oraphalous 16 hours ago [-]
Again - that's a business decision that needs to be made in the context of that business. The fact that testing was forbidden isn't in itself good or bad. It depends on that business context. The post says nothing about how that decision was made, whether it was discussed, or whether it was just his absolutist ideal imposed without consideration of the broader cost-benefit.
And I still feel the original comment doesn't give this point enough weight.
emil-lp 16 hours ago [-]
Forbidding tests is not a business decision, it's a software engineering decision, and it's a remarkably poor one at that.
oraphalous 16 hours ago [-]
Hard disagree. It's both. Choosing one way or the other comes with potential risks and rewards for the business, and it's up to business leadership to choose what risks they want to take. Your job as an engineer - if you are not part of leadership - is to explain those risks and rewards, then let them make the call.
emil-lp 14 hours ago [-]
Okay, yes, that's a hard disagree.
I have an education and experience in software development. If a manager told me to make a product in an unsafe manner, I'd refuse, and if push came to shove, leave.
Leave, both because I wouldn't be able to defend my work as a professional, but also because I wouldn't work under someone who would want to dictate the manner in which I do what I do.
basket_horse 14 hours ago [-]
This is missing the point. If you’re a 2 man team it’s much more important to have code that has a couple bugs in it but allows you to quickly find your product market fit. As opposed to perfect code with no bugs that is useless.
No one is disagreeing that tests are good in a vacuum / mature product. But if your focus is building a mvp, and you’re trading off the test time with other things, it’s not always worth it.
Screw “leadership” but consider for a second that you’re the leadership.
sodapopcan 16 hours ago [-]
The truth is in the middle somewhere, regarding tests at least (yes, your microservices story is insane).
I think the author could have been happier with the no-test decision if they had treated the initial work as a prototype with the idea of throwing it away.
At the same time, writing some tests should not be seen as a waste of time, since if you're at all experienced with it, it's going to be faster than constantly reloading your browser or pressing up-up-up-up in a REPL to check progress (if you're doing the latter, you are essentially doing a sort of reverse TDD).
So I dunno... I may be more in line with the idea that it's a bit insane to prevent people from writing tests, BUT so many people are so bad at writing tests that, yeah, for a go-gettem startup it could be the right call.
I certainly agree with your whole cost-benefit analysis paragraph.
hinkley 16 hours ago [-]
> After we started hiring, it became a disaster.
When it stopped being two people he still forbade tests. In this decade. That is fucking nuts.
Fun fact: the guy I worked a 2 man project with and I had a rock solid build cycle, and when we got cancelled to put more wood behind fewer arrows, he and I built the entire CI pipeline. On cruisecontrol. And if you don’t know what that is, that is Stone Age CI. Literal sticks and rocks. Was I ahead of a very big curve? You bet your sweet bippy. But that was more than twenty years ago.
oraphalous 16 hours ago [-]
Did anyone here actually look at the product they were building? It's an AI agent bug-discovery product. Their whole culture is probably driven, at a fundamental philosophical level, by the problems of bug discovery. As he says: he wanted to rely on dogfooding - using their product as the way of spotting bugs.
That may have been spectacular naivete, but it's not insanity.
The point I keep coming back to here, which everyone is fighting me so hard on, is that blanket statements like "NO TESTS IS NUTS", absent an understanding of the business context, are harmful.
hinkley 15 hours ago [-]
What ends up happening is that your most fundamental features end up rotting because manual testing has biases. Chief among them is probably Recency Bias. It is in fact super easy to break a launch feature if it’s not gating any of the features you’re working on now. If you don’t automate those, yes, you’re nuts.
One of the worst ones I ever encountered was learning that someone broke the entire help system three months prior, and nobody noticed. Because developers don’t use the help system. I convinced a team of very skeptical people that E2E testing the help docs was a higher priority than automating testing of the authentication because every developer used that eight times a day or more. In fact on a previous project with trunk based builds, both times I broke login someone came to tell me so before the build finished.
Debugging is about running cheap tests first to prune the problem space, then slower tests until you find the culprit. Testing often forgets that and will run expensive tests before fast ones, particularly in the ice cream cone.
In short, if you declare an epic done with zero automation, you’re a fucking idiot.
oraphalous 15 hours ago [-]
I think maybe - this conversation is more about giving some more acknowledgement to the other side of this issue.
It's not that I disagree with you essentially - or particularly with respect to your analysis of your specific examples. 100% in the cases you describe. Those sound like beneficial tests. Particularly because your example SPEAKS to the business case - users were using the help docs (I think you mean users anyway). So yeah - that's important.
But I don't know why it's so hard extracting a simple acknowledgement of what I'm pointing out - specifically that the decisions like implementing tests IS a cost-benefit decision dependent on business context.
Funny you mention auth testing though. One time both me and the tech lead broke one of the auth flows in production within the space of a week of one another. Yep - no tests. Feel free to judge us insane. But here's how we thought about it - and when I say "we" that includes the business. First of all the auth flow was not actually used by any active users, so damage was low. Two man dev team. Complexity up until that point had been low, pre-product market fit, sales were dogshit, and cash had been low for some time. Feature shipping was the 110% priority. Ok - but these bugs were a sign complexity had increased beyond what we could manage without some tests. And given the importance of auth, it was now easy to make the case to leadership that implementing an e2e test suite was worth it. So we did.
If you still think a decision-making process like that is insane, because we didn't immediately implement tests for every shipped feature - well, I just think you're wrong.
hinkley 15 hours ago [-]
There is supposedly a famous video series of Uncle Bob trying and failing to solve sudoku with TDD. He did not read any guides on solving it and tried from first principles instead, and bounced off of it.
It’s clear to me that if you don’t know what you’re building, testing it first has rubber duck value that can easily be overshadowed by Sunk Cost. I always test my pillars - the bits of the problem that are definite and which I will build off of.
Yes, starting with tests without market fit can also be fatal. But calling anything done without tests is just a slower poison. Before you airlift your brain to another unrelated problem you need to codify some of your assumptions. If you’re good at testing you can write them in a manner that makes it easy to delete them when requirements change. But that takes practice a lot of people don’t have because they avoid writing tests or they write the exact same kinds of tests for years at a time without every stretching their skills.
If you’re not writing tests, you’re not writing good ones when you do. Testing is part of CI, and the whole philosophy of CI is to do the painful parts until you either grow calluses or get fed up and file off the scratchy bits. To avoid testing is to forget the face of your father.
oraphalous 13 hours ago [-]
> Yes, starting with tests without market fit can also be fatal. But calling anything done without tests is just a slower poison.
I think we are pretty close to agreement here. I'd be interested in what you have experienced in the realm of front-end testing though - whether you think things are just as cut and dried in that realm (that's another discussion though).
And I'll also accept the point about skill in test writing that improves the cost-benefit analysis. I'll also cop to not having that kind of practiced ability at testing to the level I would personally like. But it's chicken / egg. A lot of folks get their start at scrappy start ups that can't attract the best talent. And just can't afford to let their devs invest in their skills in this way. Hell - even established companies just grind their devs without letting them learn the shit they need to learn.
I feel a victim of this to some degree - and am combating it with time off work which I can afford at the moment. One of the things I'm working on is just understanding testing better - y'know, so I can in the future write a SKILL.md file that tells Claude what sort of tests it should write. lol...
hinkley 12 hours ago [-]
Testing is hard. No, testing is fucking hard. I've had more mentors in testing than any other two disciplines combined. And I still look at my own tests and make faces. But to a man everyone who has claimed testing is not hard has written tests that made me want to push them into traffic.
Every problem is easy if you oversimplify it.
I send people who come to me struggling with their tests away with permission to fail at them but not permission to give up on getting better. You're gonna write broken tests. Just don't keep writing the same broken tests.
If anyone looking for a PhD or book idea is reading along with this, my suspicion is that it's so difficult because we are either 1) fundamentally doing it wrong (in which case there's room for at least 2 more revolutions in testing method) 2) someone will prove mathematically that it's an intractable problem, Gödel-style, and then someone will apply SAT solvers or similar to the problem and call it a day. Property based testing already pretty much does Monte Carlo simulation...
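On that last point, the flavor of property-based testing can be shown with a stdlib-only sketch: rather than hand-picked cases, sample random inputs and assert an invariant. The run-length codec here is a hypothetical example, and real tools like Hypothesis add input shrinking and smarter generation on top of this idea:

```python
import random

def rle_encode(s: str) -> list[tuple[str, int]]:
    """Run-length encode a string into (char, count) pairs."""
    out: list[tuple[str, int]] = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1] = (ch, out[-1][1] + 1)
        else:
            out.append((ch, 1))
    return out

def rle_decode(pairs: list[tuple[str, int]]) -> str:
    """Inverse of rle_encode."""
    return "".join(ch * n for ch, n in pairs)

def check_roundtrip(trials: int = 1000) -> None:
    # Property: decoding an encoding returns the original string,
    # for any string, including the empty one.
    rng = random.Random(0)  # seeded so failures are reproducible
    for _ in range(trials):
        s = "".join(rng.choice("aab ") for _ in range(rng.randrange(0, 20)))
        assert rle_decode(rle_encode(s)) == s

check_roundtrip()
```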
For backend tests, the penalty at each level of the testing pyramid is about 8x cost for a single test (and IME, moving a test down one level takes 5x as many tests for equivalent coverage, so moving a test down two layers roughly halves the CPU time and also allows parallel execution).
For frontend I think that cost is closer to 10x. So you want to push hard as you can to do component testing and extract a Functional Core for unit tests even harder than you do for backend code. Karma is not awesome but is loads better than Selenium, particularly once you have to debug a failing test. I've been on long projects for a minute so I can't really opine on Puppeteer, but Selenium is hot flaky garbage. I wouldn't tolerate more than 100 E2E tests in it, even on a half million line project. Basically a smoke test situation if you use it like that, but you won't have constant red builds on the same eight tests on shuffle.
I want to say we had 47 E2E tests on a project I thought was going swimmingly from a SDLC perspective. But it might have been 65.
oraphalous 10 hours ago [-]
Great comment... and I feel after reading it that you're probably a pretty great person to work with. Acknowledging the pain and difficulty of a task while inspiring folks and giving them the opportunity to persevere is rare to find these days.
Norvig's solution is a work of art. When people ask me for examples of intrinsic versus accidental complexity, his sudoku solver is the best one I have. My only note is that he gives up and goes brute force early. When I first encountered it I had a lot of fun layering other common solving strategies on top of his base without too much extra trouble.
What I did not have fun with is porting it to Elixir. That was a long journey to get to a solution I could keep adding stuff to. Immutable data is rough, particularly when you're maintaining 4 distinct views on the same data.
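For anyone who hasn't read Norvig's essay: his solver pairs constraint propagation with depth-first search. The search half alone compresses into a small backtracking sketch like the following (a plain illustration only, far slower than his real solver, which prunes candidates before ever searching):

```python
def valid(grid: list[list[int]], r: int, c: int, v: int) -> bool:
    """Check row, column, and 3x3 box constraints for placing v at (r, c)."""
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid: list[list[int]]) -> bool:
    """Fill zeros in a 9x9 grid in place by backtracking; True if solved."""
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0  # undo and try the next value
                return False  # no value fits this cell: backtrack
    return True  # no empty cells left
```

Norvig's key move is to propagate "only one candidate left" constraints first, which collapses most easy puzzles before this kind of search is needed at all.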
maplant 16 hours ago [-]
Did I say that my way was the right way? No: what I said was actively disallowing tests in every situation was the wrong way.
There is no ability here for the cost-benefit analysis to change over time. There is only: no tests.
oraphalous 16 hours ago [-]
Did you edit the wording of your original comment slightly to emphasise the "actively disallowing them" in every situation? Anyway... if that is what you meant, then ok. It's less awful a statement than what I felt I originally read.
I'd still push back on your hyperbole though. I don't think the author was insane - and we don't know what the broader business context was when they started growing the team and decided to persist without building out the test architecture at that point. They made a call that dogfooding was going to be enough to catch issues as they grew the team. There are a lot of scenarios where that is going to be true.
One scenario where it wouldn't - the most likely - is that the team isn't actually dogfooding because they personally don't find the product useful. Leadership lambasts them to use the product more... but no one does cause it sucks so much it impacts their own personal productivity.
Even there I wouldn't use the word insane... just poor leadership.
maplant 16 hours ago [-]
> Did you edit the wording of your original comment slightly to emphasise the "actively disallowing them" in every situation?
I did not.
ChrisClark 16 hours ago [-]
He did not edit, and you're misunderstanding the meaning behind his post. Not everything needs to be pedantic and accurate; language is flexible. This is about communicating, not being right.
What we really don't need is paragraphs of someone arguing because their own definitions differ slightly from the OP
oraphalous 16 hours ago [-]
>He did not edit
He edited his reply to me multiple times... which is what made me suspect an edit to the original comment. But whatever, I'm happy to acknowledge his original intent even if he did state it more harshly.
>What we really don't need is paragraphs of someone arguing because their own definitions differ slightly from the OP
This is unnecessary. OP came out with "AUTHOR IS INSANE" even on the most generous of interpretations. Even if we allow for nuance OP is claiming, there is little constructive about his contribution. I feel fine about calling it out.
maplant 16 hours ago [-]
> He edited his reply to me multiple times...
I got the sense from your reply that some extra clarity would be beneficial.
> This is unnecessary. OP came out with "AUTHOR IS INSANE" even on the most generous of interpretations.
I did not actually call the author insane, I called their decision to explicitly disallow testing insane. It's an insane decision. I am not _literally_ calling the author insane.
oraphalous 15 hours ago [-]
> I did not actually call the author insane...
If you think this distinction really matters wrt the point I'm trying to make, then it's time for you and I to bug out conversationally. Sometimes two individuals have such different ways of communicating that the pain of exegesis isn't worth the squeeze. No hard feelings. I'm sure 50% responsibility is at least mine, but it's not going to be worth it for either of us figuring out exactly what.
maplant 15 hours ago [-]
I'm not really arguing with your point, I'm correcting your incorrect description of what I'm saying.
To argue with your actual point: I don't really care about the overall context, actively disallowing tests in a codebase is a _bad decision_. Look how it worked out for them.
> it's time for you and I to bug out conversationally
Fine with me
UltraSane 16 hours ago [-]
Not having ANY tests means tons of manual testing is needed every time you modify code, which will rapidly consume more time than writing the tests would.
Jtsummers 15 hours ago [-]
The manual tests also stop being run, or get reduced to such an extreme that any value from them is going to be low. Testing only happy paths and maybe release specific tests. This is shockingly (to outsiders, not to anyone who's ever been in the industry) common in aerospace and defense systems. There were some aircraft I would not fly on for a few years until I knew our updates had rolled out. Now I'm not connected to that work anymore so I'm back to "ignorance is bliss" mode and try not to think about it.
kerkeslager 10 hours ago [-]
> If you are a two man startup, burning through runway and pre-product-market fit... then spending a lot of time on tests is questionable (although the cost-benefit now with AI is changing very fast).
What's insane is that people in 2026 still think tests slow you down.
It takes me maybe 40 hours (1 week) of coding to start receiving ROI from writing tests in a greenfield project, and by 80 hours I'm pretty sure I've saved more time from bugs and improved design due to TDD than I've spent writing the tests.
The ROI is even faster if I'm not the only developer on the project.
If your flagship product takes less time to develop than 40 hours, then your product is extremely vulnerable to being copied by another company, so your entire software project is a bad business idea.
So there really aren't many exceptions: either your project benefits from tests, or it's too easy a project to be a business.
So frankly, it's your comment lacking in cost/benefit analysis.
dagi3d 16 hours ago [-]
Sorry, I still don't get "no tests" as a way to go faster. Obviously YMMV, but you will need to test your implementation somehow, and manual testing usually takes more time than running automated tests. No need to over-test, but having tests definitely doesn't mean you'll be slowed down - unless you don't know how to test, in which case that's totally on you.
aaronrobinson 2 hours ago [-]
You sound like an absolute nightmare to work for.
fabiensanglard 17 hours ago [-]
Pearls.
> I would NOT allow people to write tests
> now [...] we started with tests from the ground up
Trufa 16 hours ago [-]
Two different stages of the project, not necessarily contradictory. I'm not saying this is great, but tests make a whole lot more sense when you know what you're building.
sodapopcan 16 hours ago [-]
Yes. TFA author could have gone into it with this mindset and treated the initial work as a prototype with the idea of throwing it away and would have been happier about it.
> but tests make a whole lot more sense when you know what you're building.
It's very true. This is a "gotcha" that a lot of anti-TDDers bring up, and yet some talk about "prototyping == good" without ever making the connection that you can do both.
casey2 2 hours ago [-]
Two different extremes of dumbassery. If you can't program without the simplest dogma guiding you, then programming isn't for you. If you don't even know what you're building, why are you selling it as a product?! What were you doing in those 18 months that you don't understand anything about the thing you are building?
It should be common sense to add common-sense tests to critical components. Now they are doing TDD and THEY STILL DON'T KNOW THE CRITICAL COMPONENTS. Nothing changed. They lack systems thinking.
My guess is that both are just vibecoded slop.
wagwang 16 hours ago [-]
in an age of generated tests, a mandate on no tests is just dumb
elAhmo 15 hours ago [-]
I can't imagine working as a developer at a place where manager/founder "does NOT allow" tests to be written. This, combined with four pivots mentioned in the article seems like they are just riding the hype and trying to brute-force a product without having any basics or PMF.
nineteen999 13 hours ago [-]
How companies like this get funding is well beyond me.
notorandit 16 hours ago [-]
It's a big move. But I understand it.
Sometimes your code is "just" a proof of concept, a way to test the idea.
Very far from a decent product.
That is the time you ditch the code, keep the ideas (both good and bad) and start over.
qingcharles 14 hours ago [-]
This. Depending on the project, especially if you're doing something really novel, you can end up going down dozens of dead-ends which, when removed, leave little scars all over the code base.
It can be so refreshing making that decision to open the old code on one screen and a fresh project on the other and do it right from the start.
Jtsummers 13 hours ago [-]
https://news.ycombinator.com/item?id=47317568 - A submission by this person's company where they say they'll refund you if a bug makes it past their system. Given how buggy their own system apparently is (to the point they're scrapping all the code), perhaps it's not such a smart offer on their part.
renewiltord 17 hours ago [-]
Tests are most useful for regression detection, so it's a good instinct to not add them when you're primarily exploring. Once you've decided to switch to exploitation, though, regression will hurt. I think it's just a classic 0 to 0.1 not being the same thing as 0.1 to 1.
andrewstuart 17 hours ago [-]
I wouldn’t admit to this level of, frankly, incompetence.
Wildly swinging dogmatism about how to do software development, so wrong you have to throw it all away - then repeating this failure loop multiple times.
It doesn’t inspire any confidence in the person; I wouldn’t get them to lead a project.
Why would you be so loud and proud about all this?
randlet 17 hours ago [-]
"bugs were appearing everywhere out of the blue. The codebase was a huge mess of nulls, undefined behaviour, bad error handling. It was so bad that we actually lost a client over this."
Especially wild considering their product is literally an automated bug finder lol.
stephantul 16 hours ago [-]
Same. Admitting to it is one thing, but still, it takes a certain kind of attitude to outright forbid people from writing tests.
monsieurbanana 17 hours ago [-]
I think there's a real possibility this is a "no such thing as bad publicity" stunt.
ordu 15 hours ago [-]
> I wouldn’t admit to this level of frankly incompetence.
Well yeah. It reminds me of how I wrote an addon for WoW while having no clue how to write GUI code, learning Lua and the Blizzard API as I went, with no tools except a text editor. It took 3-4 sharp ideological shifts until I got to reading about the Elm architecture and refactored all the code into it - while using addons that help with debugging issues, using a scaffold to create throwaway addons for testing details of how WoW API functions/objects work, using the Ace library for messages and some other things, and using another addon of mine to track events and learn when and which events WoW fires... Near the end I was a pretty competent addon developer, but for most of the way there I was just trying a lot of things to see what worked.
> Why would you be so loud and proud about all this.
Oh, I also like to tell my story of how it went. When I finally got it working on a clean Elm architecture with clear separation of state, view, and update, I was proud, obviously - but even before that I was proud, because of Dunning-Kruger. My code was way better than the original addon's, and it was becoming better and better with each sharp turn. It is funny in hindsight.
4rtem 16 hours ago [-]
Nice use of the .io domain there.
heliumtera 16 hours ago [-]
So you started with 2023 theo.gg philosophy but now moved on to 2026 theo.gg philosophy
ramesh31 17 hours ago [-]
Next is such a dumpster fire. So much wasted effort due to the Node ecosystem never developing a universal batteries included framework like Rails or Django.
abraxas 17 hours ago [-]
Which in turn were only invented because millennials would not be caught dead writing Java and JSP. We had all this shit figured out by the late nineties and 90% of what is accomplished on the web today was entirely possible and well integrated in Java app servers.
This whole business is a fashion industry.
I for one am grateful for LLMs because, for the first time in around 30 years, there is actually genuine novelty to explore in software engineering. Ruby and Node.js weren't it.
pjmlp 2 hours ago [-]
Indeed, as someone comfortably in the Java/.NET ecosystem, I only put up with stuff like Next.js because it has become a required skill in the headless SaaS products space.
Thanks to Vercel partnerships, many of those SaaS vendors only support Next.js as extension/integration technology on their SDKs.
sgarman 16 hours ago [-]
Mongodb is webscale.
steve_adams_86 16 hours ago [-]
Do you think it can handle 10 requests per hour? How many mongo instances will that require, and should I use micro services?
mattmanser 15 hours ago [-]
It really wasn't.
MVC really changed web dev for the better, and Django/Rails trail-blazed it. It's one of the few paradigms I've seen in my career that was an unequivocal win for us.
pjmlp 2 hours ago [-]
We were already doing MVC in products like the one sold by Altitude Software in Portugal, on a Tcl-based platform that was inspired by Vignette and AOLServer.
The authors of said product eventually went on to create OutSystems, one of the very few RAD products to do Delphi/VB like application servers with graphical tooling.
There was no need for Django/Rails to trail-blaze anything; it's just that not everyone has Silicon Valley visibility to push their ideas.
pjmlp 2 hours ago [-]
Unfortunately, partnerships made it almost unavoidable in some industry segments. I wonder what all those SaaS vendors will do if something goes wrong with Vercel, or when Next.js becomes unmanageable after yet another rewrite.
Maybe they will rewrite everything with AI by then. /s
But one thing that used to be a common design anti-pattern was the "version 2 problem". I think I first heard about it when Netscape were talking about how NN2 was a disaster, and they were finally happy with NN3 or NN4.
Often version 1 is a hastily thrown-together mess of stuff, but it works and people like it. But there are lots of bad design decisions, and you reach a limit with how far you can keep pushing that bad design before it gets too brittle to change. So you start on version 2, a complete rewrite to fix all the problems, and you end up with something that's "technically perfect" but so overengineered that it's slow and everybody hates it. Plus there are probably so many workflow hoops to jump through to get things approved that you end up not making any progress, and possibly version 2 kills the product and/or the company.
The idea is that "version 3" is a pragmatic compromise: the worst design problems from version 1 are gone, but you forgo all the unnecessary stuff that you added in version 2, and you finally have a product that customers like again (assuming you can convince them to come back and try v3 out) and that you can build into future versions.
To a large degree I think this "version 2 problem" was a by-product of waterfall design; it's certainly been less common since agile development became popular in the early 2000s and tooling made large-scale refactoring easier. But even so, I remember working somewhere with a v1 that the customers were using and a v2 that was a 3-year rewrite going on in parallel. None of the developers wanted to work on v1 even though that's what brought in the revenue, and v2 didn't have the benefit of the bug fixes accumulated over the years for very specific issues that were never captured in any of the scope documents.
"The second is the most dangerous system a man ever designs."
- Fred Brooks, 'The Mythical Man Month' (1975)
I think our industry would do well to take a moment and breathe, and understand what we have collectively done since inception. I often wonder whether, a thousand years from now, the highly corporatized influence on our industry during our time will be looked back on as a dark age. The idea that private enterprise should shape the direction of our industry is deeply problematic; there needs to be a public option, and I doubt many devs would disagree.
As I rewrote it, I started pulling in more "nice to haves" or else opening up the design for the potential to support more and more future features. I eventually got to a point where it became unwieldy as it had too many open-ended architectural decisions and a lot of bloat.
I ended up scrapping this v2 before releasing it and worked on a v3 but with a more focused architecture, having some things open-ended but choosing not to pursue them yet as I knew that would just introduce unneeded bloat.
I was quite aware of the second-system effect when doing all this, but I still succumbed to it. Thankfully, the v3 rewrite didn't take as long since I was able to incorporate a lot of the v2 design decisions but scaled some of them back.
Usually levied at people who are so hyper-focused on shipping a so-called MVP (really demoware) that they are driving us straight at a brick wall while commenting the entire way about what good time we are making.
We never did a full v3 rewrite, but it took about 4 years and many v3 redesigns of various features to get our legacy customers on board.
I feel like we keep having to reestablish known facts every two years in this field.
That's exactly what this person is railing against. They strictly forbid testing.
And I still feel the original comment doesn't give this point enough weight.
I have an education and experience in software development. If a manager told me to make a product in an unsafe manner, I'd refuse, and if push came to shove, leave.
Leave, both because I wouldn't be able to defend my work as a professional, but also because I wouldn't work under someone who would want to dictate the manner in which I do what I do.
No one is disagreeing that tests are good in a vacuum / in a mature product. But if your focus is building an MVP, and you're trading off the test time against other things, it's not always worth it.
Screw “leadership” but consider for a second that you’re the leadership.
I think the author could have been happier with the no-test decision if they had treated the initial work as a prototype with the idea of throwing it away.
At the same time, writing some tests should not be seen as a waste of time: if you're at all experienced with it, it's going to be faster than constantly reloading your browser or pressing up-up-up-up-up in a REPL to check progress (if you're doing the latter, you are essentially doing a sort of reverse TDD).
So I dunno... I may be more in line with the idea that it's a bit insane to prevent people from writing tests, BUT so many people are so bad at writing tests that yeah, for a go-get-'em startup it could be the right call.
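As a concrete illustration of the REPL-vs-test trade-off: the ad-hoc checks you would otherwise re-run by hand can be pinned down once as assertions. The `slugify` helper and its behavior are my own invented example, not anything from the product under discussion:

```python
# The checks you'd otherwise re-run by hand in a REPL, written once as a test.
# `slugify` is a made-up helper purely for illustration.
import re

def slugify(title: str) -> str:
    # Lowercase, then collapse runs of non-alphanumerics into single hyphens.
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

def test_slugify():
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("already-slugged") == "already-slugged"
```

Once this exists, every future refactor of `slugify` gets re-checked for free, which is the "ship fast tomorrow" half of the argument above.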
I certainly agree with your whole cost-benefit analysis paragraph.
When it stopped being two people he still forbade tests. In this decade. That is fucking nuts.
Fun fact: on a 2-man project, the other guy and I had a rock-solid build cycle, and when we got cancelled to put more wood behind fewer arrows, he and I built the entire CI pipeline. On CruiseControl. And if you don't know what that is, that is Stone Age CI. Literal sticks and rocks. Was I ahead of a very big curve? You bet your sweet bippy. But that was more than twenty years ago.
That may have been spectacular naivete, but it's not insanity.
The point I keep coming back to here that everyone is fighting me so hard on is that these blanket statements of: NO TESTS IS NUTS... absent of an understanding of the business context... is harmful.
One of the worst ones I ever encountered was learning that someone broke the entire help system three months prior, and nobody noticed. Because developers don’t use the help system. I convinced a team of very skeptical people that E2E testing the help docs was a higher priority than automating testing of the authentication because every developer used that eight times a day or more. In fact on a previous project with trunk based builds, both times I broke login someone came to tell me so before the build finished.
Debugging is about running cheap tests first to prune the problem space, then slower tests until you find the culprit. Testing often forgets that and will run expensive tests before fast ones, particularly in the ice cream cone (the inverted testing pyramid).
In short, if you declare an epic done with zero automation, you’re a fucking idiot.
It's not that I disagree with you essentially - or particularly with respect to your analysis of your specific examples. 100% in the cases you describe. Those sound like beneficial tests. Particularly because your example SPEAKS to the business case - users were using the help docs (I think you mean users anyway). So yeah - that's important.
But I don't know why it's so hard extracting a simple acknowledgement of what I'm pointing out - specifically that the decisions like implementing tests IS a cost-benefit decision dependent on business context.
Funny you mention auth testing though. One time both me and the tech lead broke one of the auth flows in production within the space of a week of one another. Yep - no tests. Feel free to judge us insane. But here's how we thought about it - and when I say "we" that includes the business. First of all the auth flow was not actually used by any active users, so damage was low. Two man dev team. Complexity up until that point had been low, pre-product market fit, sales were dogshit, and cash had been low for some time. Feature shipping was the 110% priority. Ok - but these bugs were a sign complexity had increased beyond what we could manage without some tests. And given the importance of auth, it was now easy to make the case to leadership that implementing an e2e test suite was worth it. So we did.
If you still think a decision making process like that is insane - because we didn't immediately implement tests for every shipped feature. Well - I just think you're wrong.
It’s clear to me that if you don’t know what you’re building, testing it first has rubber duck value that can easily be overshadowed by Sunk Cost. I always test my pillars - the bits of the problem that are definite and which I will build off of.
Yes, starting with tests without market fit can also be fatal. But calling anything "done" without tests is just a slower poison. Before you airlift your brain to another unrelated problem, you need to codify some of your assumptions. If you're good at testing, you can write them in a manner that makes it easy to delete them when requirements change. But that takes practice that a lot of people don't have, because they avoid writing tests or they write the exact same kinds of tests for years at a time without ever stretching their skills.
If you’re not writing tests, you’re not writing good ones when you do. Testing is part of CI, and the whole philosophy of CI is to do the painful parts until you either grow calluses or get fed up and file off the scratchy bits. To avoid testing is to forget the face of your father.
I think we are pretty close to agreement here. I'd be interested in what you have experienced in the realm of front-end testing though - whether you think things are just as cut and dried in that realm (that's another discussion though).
And I'll also accept the point about skill in test writing that improves the cost-benefit analysis. I'll also cop to not having that kind of practiced ability at testing to the level I would personally like. But it's chicken / egg. A lot of folks get their start at scrappy start ups that can't attract the best talent. And just can't afford to let their devs invest in their skills in this way. Hell - even established companies just grind their devs without letting them learn the shit they need to learn.
I feel a victim of this to some degree - and am combating it with time off work which I can afford at the moment. One of the things I'm working on is just understanding testing better - y'know, so I can in the future write a SKILL.md file that tells Claude what sort of tests it should write. lol...
Every problem is easy if you oversimplify it.
I send people who come to me struggling with their tests away with permission to fail at them but not permission to give up on getting better. You're gonna write broken tests. Just don't keep writing the same broken tests.
If anyone looking for a PhD or book idea is reading along with this, my suspicion is that it's so difficult because either 1) we are fundamentally doing it wrong (in which case there's room for at least 2 more revolutions in testing methods), or 2) someone will prove mathematically that it's an intractable problem, Gödel-style, and then someone will apply SAT solvers or similar to the problem and call it a day. Property-based testing already pretty much does Monte Carlo simulation...
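On the property-based testing point, the core idea fits in a few lines of stdlib-only Python, in the spirit of QuickCheck/Hypothesis (the `check_property` helper is my own toy sketch, not a real library API):

```python
# Hand-rolled property-based check, stdlib only: generate many random inputs
# and assert an invariant holds for all of them.
import random

def check_property(prop, gen, runs=200, seed=0):
    rng = random.Random(seed)  # fixed seed keeps failures reproducible
    for _ in range(runs):
        xs = gen(rng)
        assert prop(xs), f"property failed for {xs!r}"

# Invariant: sorting is idempotent.
check_property(
    lambda xs: sorted(sorted(xs)) == sorted(xs),
    lambda rng: [rng.randint(-100, 100) for _ in range(rng.randint(0, 20))],
)
```

Real libraries like Hypothesis add shrinking (minimizing a failing input), which is most of their value over a loop like this.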
For backend tests, the penalty at each level of the testing pyramid is about an 8x cost per test (and IME, moving a test down one level takes 5x as many tests for equivalent coverage, so moving a test down 2 layers reduces the CPU time by more than half and also allows parallel execution).
For frontend I think that cost is closer to 10x. So you want to push hard as you can to do component testing and extract a Functional Core for unit tests even harder than you do for backend code. Karma is not awesome but is loads better than Selenium, particularly once you have to debug a failing test. I've been on long projects for a minute so I can't really opine on Puppeteer, but Selenium is hot flaky garbage. I wouldn't tolerate more than 100 E2E tests in it, even on a half million line project. Basically a smoke test situation if you use it like that, but you won't have constant red builds on the same eight tests on shuffle.
I want to say we had 47 E2E tests on a project I thought was going swimmingly from a SDLC perspective. But it might have been 65.
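Taking the 8x-per-level cost and 5x-per-level test-count figures above at face value (they are the commenter's rough estimates, not measurements), the "down 2 layers roughly halves CPU time" arithmetic checks out, and is actually slightly better than half:

```python
# Back-of-envelope check of the 8x cost-per-level and 5x tests-per-level
# figures quoted above (rough estimates, not measurements).
cost_per_level = 8    # one pyramid level up, each test costs ~8x more
tests_per_level = 5   # one level down, ~5x as many tests for same coverage

# Moving coverage down two layers: each test is 8*8 = 64x cheaper,
# but you need 5*5 = 25x as many tests.
total_cpu_factor = tests_per_level**2 / cost_per_level**2
print(total_cpu_factor)  # 0.390625: ~61% less CPU time, a bit better than half
```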
https://news.ycombinator.com/item?id=3033446 - Linking to this old comment because it links to each of Ron's articles, a discussion about it, and Norvig's version.
Norvig's solution is a work of art. When people ask me for examples of intrinsic versus accidental complexity, his sudoku solver is the best one I have. My only note is that he gives up and goes brute force early. When I first encountered it I had a lot of fun layering other common solving strategies on top of his base without too much extra trouble.
What I did not have fun with was porting it to Elixir. That was a long journey to get to a solution I could keep adding stuff to. Immutable data is rough, particularly when you're maintaining 4 distinct views of the same data.
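For readers who haven't seen Norvig's solver: its heart is constraint propagation over candidate sets. Here is my own much-reduced sketch of that idea (a 4x4 grid and only the naked-singles rule; nothing like the full solver, which also does hidden singles and search):

```python
# Toy constraint propagation in the spirit of Norvig's sudoku solver,
# shrunk to a 4x4 grid and a single strategy: when a cell has one candidate
# left, remove that digit from all its peers. My own illustrative reduction.

def peers(r, c):
    """Cells sharing a row, column, or 2x2 box with (r, c)."""
    ps = set()
    for i in range(4):
        ps.add((r, i))
        ps.add((i, c))
    br, bc = 2 * (r // 2), 2 * (c // 2)
    for i in range(br, br + 2):
        for j in range(bc, bc + 2):
            ps.add((i, j))
    ps.discard((r, c))
    return ps

def propagate(grid):
    """grid: dict (r, c) -> set of candidate digits. Mutates to a fixpoint."""
    changed = True
    while changed:
        changed = False
        for cell, cands in grid.items():
            if len(cands) == 1:
                d = next(iter(cands))
                for p in peers(*cell):
                    if d in grid[p]:
                        grid[p].discard(d)
                        changed = True
    return grid

# A 4x4 puzzle that naked singles alone can finish (0 = blank):
puzzle = [[1, 0, 0, 0],
          [0, 0, 3, 0],
          [0, 4, 0, 0],
          [0, 0, 0, 2]]
grid = {(r, c): ({puzzle[r][c]} if puzzle[r][c] else {1, 2, 3, 4})
        for r in range(4) for c in range(4)}
propagate(grid)
```

On a real 9x9 grid, propagation alone usually stalls, which is where Norvig's depth-first search over the least-constrained cell takes over.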
There is no ability here for the cost-benefit analysis to change over time. There is only "no tests".
I'd still push back on your hyperbole though. I don't think the author was insane - and we don't know what the broader business context was when they started growing the team and decided to persist without building out the test architecture at that point. They made a call that dogfooding was going to be enough to catch issues as they grew the team. There are a lot of scenarios where that is going to be true.
One scenario where it wouldn't - the most likely - is that the team isn't actually dogfooding because they personally don't find the product useful. Leadership lambasts them to use the product more... but no one does cause it sucks so much it impacts their own personal productivity.
Even there I wouldn't use the word insane... just poor leadership.
I did not.
What we really don't need is paragraphs of someone arguing because their own definitions differ slightly from the OP's.
He edited his reply to me multiple times... which is what made me suspect an edit to the original comment. But whatever, I'm happy to acknowledge his original intent even if he did state it more harshly.
>What we really don't need is paragraphs of someone arguing because their own definitions differ slightly from the OP
This is unnecessary. OP came out with "AUTHOR IS INSANE" even on the most generous of interpretations. Even if we allow for nuance OP is claiming, there is little constructive about his contribution. I feel fine about calling it out.
I got the sense from your reply that some extra clarity would be beneficial.
> This is unnecessary. OP came out with "AUTHOR IS INSANE" even on the most generous of interpretations.
I did not actually call the author insane, I called their decision to explicitly disallow testing insane. It's an insane decision. I am not _literally_ calling the author insane.
If you think this distinction really matters wrt the point I'm trying to make, then it's time for you and I to bug out conversationally. Sometimes two individuals have such different ways of communicating that the pain of exegesis isn't worth the squeeze. No hard feelings. I'm sure 50% responsibility is at least mine, but it's not going to be worth it for either of us figuring out exactly what.
To argue with your actual point: I don't really care about the overall context, actively disallowing tests in a codebase is a _bad decision_. Look how it worked out for them.
> it's time for you and I to bug out conversationally
Fine with me
What's insane is that people in 2026 still think tests slow you down.
It takes me maybe 40 hours (1 week) of coding to start receiving ROI from writing tests in a greenfield project, and by 80 hours I'm pretty sure I've saved more time from bugs and improved design due to TDD than I've spent writing the tests.
The ROI is even faster if I'm not the only developer on the project.
If your flagship product takes less time to develop than 40 hours, then your product is extremely vulnerable to being copied by another company, so your entire software project is a bad business idea.
So there really aren't many exceptions: either your project benefits from tests, or it's too easy a project to be a business.
So frankly, it's your comment lacking in cost/benefit analysis.
> I would NOT allow people to write tests
> now [...] we started with tests from the ground up
> but tests make a whole lot more sense when you know what you're building.
It's very true. This is a "gotcha" that a lot of anti-TDDers bring up, and yet some talk about "prototyping == good" without ever making the connection that you can do both.
It should be common sense to add common-sense tests to critical components. Now that they are doing TDD, THEY STILL DON'T KNOW THE CRITICAL COMPONENTS. Nothing changed. They lack systems thinking.
My guess is that both are just vibecoded slop.
Sometimes your code is "just" a proof of concept, a way to test the idea. Very far from a decent product.
That is the time you ditch the code, keep the ideas (both good and bad) and start over.
It can be so refreshing making that decision to open the old code on one screen and a fresh project on the other and do it right from the start.
Wildly swinging dogmatism on how to do software development that’s so wrong you have to throw it all away - then repeating this failure loop multiple times.
Doesn’t inspire any confidence in the person; I wouldn’t get them to lead a project.
Why would you be so loud and proud about all this.