I call this my "rule of three" - I wait until I've seen "something" three times before deciding on an abstraction. Two isn't enough to get an idea of all the potential angles, and if you don't touch it a third time, it's probably not important enough to warrant the effort and risk of a refactor.
If I find myself repeating more than twice, I just ask "Can this be a function?" If yes, I move it there. If not, I just leave it as it is.
Life's too short to spend all day rewriting your code.
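A minimal sketch of that heuristic in TypeScript, with an invented name and a made-up validation rule: the same few lines had shown up more than twice, so they become one small function.

```ts
// Hypothetical example: the same "trim, lowercase, reject empty" steps kept
// repeating at different call sites, so they move into a single helper.
function normalizeUsername(raw: string): string {
  const name = raw.trim().toLowerCase();
  if (name.length === 0) {
    throw new Error("username must not be empty");
  }
  return name;
}

// Every former copy now calls the one definition.
console.log(normalizeUsername("  Alice "), normalizeUsername("BOB")); // alice bob
```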
Important/generalized patterns reveal themselves over time. I generally push for people to just write the specific thing they need for the given context and then if the same pattern shows up elsewhere, promote the code to a shared library. It’s really hard to anticipate the correct abstraction right from the start, and it’s easy to include a bunch of “just in case” parameterization that bloats the interface and adds a lot of conceptual overhead.
That being said, with low-leverage/complexity code like html views, repetition isn’t as problematic. Although, I do think that HTML components have matured enough to be the correct unit of abstraction, not CSS classes.
What does the code represent? What does it concern?
Focusing on the code and pattern too much may mislead. My thinking is primarily about composition and concern. The rest follows intuitively - weighing risk, gain, and effort.
I've had occasional instances where code duplication is fine or not worth fixing/refactoring. But I feel like most of the time distinct concerns are easy and often important to factor out.
I've habe
Found the German.
Lol, I guess I habe wrote it on mobile with autocorrect 🙃
I’ve never heard of WET, but that is exactly the process I preach to my team. Refactor only once the same code block is used 3+ times as that tends to define a method that is a utility and not business logic specific.
This method has worked well in the past.
To expand on that: DRY should only be considered for business logic anyway. WET for everything else sounds great!
I prefer the FP approach where I create smaller functions that I compose together in larger functions or methods which rarely repeat themselves elsewhere identically. Forcing extractions and merging of such functions often leads to weird code acrobatics.
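A rough sketch of that style, with all names invented for illustration: the small single-purpose functions get reused everywhere, while the larger compositions built from them rarely repeat identically.

```ts
// Small, single-purpose functions...
const trim = (s: string): string => s.trim();
const lower = (s: string): string => s.toLowerCase();
const collapseSpaces = (s: string): string => s.replace(/\s+/g, " ");

// ...and a tiny pipe helper to build larger behaviours from them.
const pipe =
  <T>(...fns: Array<(x: T) => T>) =>
  (x: T): T =>
    fns.reduce((acc, fn) => fn(acc), x);

// Two different "larger" functions reuse the same small pieces,
// but the compositions themselves are not duplicated anywhere.
const normalizeTitle = pipe(trim, collapseSpaces);
const normalizeTag = pipe(trim, lower);

console.log(normalizeTitle("  Hello   World  ")); // "Hello World"
console.log(normalizeTag("  TypeScript "));       // "typescript"
```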
- When writing new code: make a good faith attempt to DRY (obviously not fucking with Liskov in the process). First draft is very WET - deleting things is easiest at this stage anyway and accidentally prematurely DRYing causes headaches. Repeat the mantra "don't let perfect be the enemy of good" to prevent impulsive DRYing.
- When maintaining existing code I wrote: Refactor to DRY things up where it's clear I've made a conceptual misjudgement or oversight. Priority goes to whatever irritates me most in the moment rather than what would be most efficient.
- When altering other people's code: Assume they had a reason to WET (if you're lucky, read the docs that discuss the decision) but be a little suspicious and consider DRYing things up if it seems like a misjudgement or oversight. Likely realise 50% of the way through that this is going to take much longer and be way more painful than you hoped because of some esoteric bullshit compatibility issue. Curse yourself for not letting sleeping dogs lie but still start engaging in sunk cost fallacy.
- When reviewing PRs: Attempt to politely influence the writer to DRY it up but don't take too much offence at WET if it has a decentish reason in context. Throw in an inline one-liner to not make other maintainers question their sanity or competence when they realise they're reading duplicate code. Also to more reliably establish git blame rather than blaming the next poor fool who comes along and makes a minor reformatting revision across the file. Include a date so that someone can stumble across it in 10 years as an archaeological curiosity and their own personal esoteric bullshit compatibility issue.
- Long term maintenance: Live with your irritatingly damp mouldy code. There's new code that needs to be written!
WET/DRY-ness is like a property of code -- a metric or smell perhaps, but not something to aim for. That's like asking whether you drive fast or slow and whether we should all drive faster or slower.
WET is not what you think it is, or at least not originally. It's not some alternative to DRY. It didn't stand for Write Everything Twice. It stands for Write Every Time. It's supposed to be a negative way to describe code that isn't DRY. It's also abbreviated as "Waste Everyone's Time".
Much, much, much later somebody tried to reuse the term for "Write Everything Twice" but talking specifically about the benefits of singular, repeated templating because the abstraction needed to refactor code into "Write-Once" can make things harder to understand. In other words, it creates a chain of pre-required knowledge of how the abstraction works well before you can even work with it.
The irony here is that DRY is not really about code duplication. It's actually about system knowledge (structures and processes) being clear and unambiguous. Or, applied, you, as a person, not having to repeat yourself when it comes to sharing knowledge. It lends itself to simpler constructs that don't need much explanation. The example given of Write Everything Twice is actually being DRY, but they don't realize it.
Bill Venners: What's the DRY principle?
Dave Thomas: Don't Repeat Yourself (or DRY) is probably one of the most misunderstood parts of the book.
Bill Venners: How is DRY misunderstood and what is the correct way to understand it?
Dave Thomas: Most people take DRY to mean you shouldn't duplicate code. That's not its intention. The idea behind DRY is far grander than that.
DRY says that every piece of system knowledge should have one authoritative, unambiguous representation. Every piece of knowledge in the development of something should have a single representation. A system's knowledge is far broader than just its code. It refers to database schemas, test plans, the build system, even documentation.
https://www.artima.com/articles/orthogonality-and-the-dry-principle
The article/interview explains more about how DRY is meant to work and even talks about the pitfalls of code generators, which the WET article complains about (React).
The misunderstanding since DRY's coining is probably because, like natural language, we change meanings along with our environment. DRY became a term people use to blast too much abstraction. But noisy abstractions are not DRY. In response we have a new term called AHA (Avoid Hasty Abstractions), which exists as a counter to what people think DRY is.
The TL;DR is DRY is meant to mean your code should be unambiguous, not some sort of mantra to deduplicate your code. Apply its original principles, and follow AHA, which should be a clearer safeguard against abstractions that end up violating DRY.
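To make the knowledge-versus-code distinction concrete, here is a hypothetical sketch (the constant and both business rules are invented): duplicated knowledge is the real DRY violation, while code that merely looks the same can be fine to leave separate.

```ts
// One piece of knowledge, one authoritative representation:
// if the limit ever changes, it changes in exactly one place.
const MAX_LOGIN_ATTEMPTS = 5;

// These two rules happen to share the same arithmetic today, but they
// encode different knowledge (a tax rate vs. a service fee), so merging
// them would couple two unrelated business decisions.
function applySalesTax(amount: number): number {
  return amount * 1.1;
}

function applyServiceFee(amount: number): number {
  return amount * 1.1;
}

console.log(MAX_LOGIN_ATTEMPTS, applySalesTax(100), applyServiceFee(100));
```

If the tax rate and the fee later diverge, the two functions simply change independently; forcing them into one abstraction would have meant splitting it back apart.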
WET is not what you think it is, or at least not originally. It’s not some alternative to DRY. It didn’t stand for Write Everything Twice. It stands for Write Every Time. It’s supposed to be a negative way to describe code that isn’t DRY. It’s also abbreviated as “Waste Everyone’s Time”.
I think you're confusing things. Write Everything Twice (WET) bears no resemblance to the concept you mentioned, which makes no sense as a standalone concept or even a rule of thumb.
WET is a clear guideline to avoid the usual code quality problems caused by premature specialization and tight coupling which result from DRY fundamentalisms. WET puts on hold the propensity to waste time with code churn. Its importance is clear to anyone who maintains software.
The misunderstanding since DRY’s coining is probably because, like natural language, we change meanings along with our environment.
Not really. Your comment sounds like a weak attempt at revisionism. Some reference books like Bob Martin's Clean Code explicitly cover DRY and the importance of refactoring away any duplicate code.
WET springs from this fundamentalist mindset. There are no two ways about it.
You wrote:
To mitigate problems caused by DRY fundamentalisms, the "write everything twice" (WET) principle was coined.
I'm listing when and how it was coined, with the article that coined it. If you have another source claiming it wasn't in the fashion I've described, feel free to provide sources to the contrary. I even sourced it from the very Wikipedia entry you shared.
WET is a clear guideline
Again, feel free to provide sources to back up your claim.
Your comment sounds like a weak attempt at revisionism.
Again. Feel free to back up your claims with actual sources. On the concept of AHA, Kent C. Dodds, of Angular fame, says:
There's another concept that people have referred to as WET programming which stands for "Write Everything Twice." That's similarly dogmatic and over prescriptive. Conlin Durbin has defined this as [...]
https://kentcdodds.com/blog/aha-programming
He then goes on to link to the article/blog I mentioned where Durbin states:
Instead, I propose WET programming.
And he goes on to explain this new concept of Write Everything Twice.
If you're actually interested in discussing the DRY/WET/AHA debate, I'm all for it. But don't misinterpret or try to change what is already well-written.
Also, don't throw accusations at people who provide you documentation and proof while providing absolutely none yourself. That doesn't sound like any earnest interest in discussion.
I don't think DRY or WET or the "rule of 3" can address the nuances of some abstractions. I think most of the time when I decide to abstract something, it's not because the code looks repetitive, but because I've decomposed the architecture into components with specific responsibilities that work well together.
In other words, I don't abstract from the bottom up, looking at repetitive code. I abstract from the top down, considering what capabilities are required from a system and decomposing the system to deliver those capabilities.
Of course, repetitive code might be a smell, suggesting that there is room for abstraction, but I don't design the abstraction based (entirely) on the existing code.
The implementations mostly don't matter. The only things that you need to get right are the interfaces.
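As a hedged sketch of that idea (the interface, class, and function names are all invented): callers depend only on the interface, so the implementation behind it can be rewritten or swapped without touching them.

```ts
// Callers only depend on this interface; the implementation behind it can
// change freely as long as the contract holds.
interface KeyValueStore {
  get(key: string): string | undefined;
  set(key: string, value: string): void;
}

class InMemoryStore implements KeyValueStore {
  private data = new Map<string, string>();
  get(key: string): string | undefined {
    return this.data.get(key);
  }
  set(key: string, value: string): void {
    this.data.set(key, value);
  }
}

// Code written against the interface never changes when the implementation does.
function remember(store: KeyValueStore, key: string, value: string): string | undefined {
  store.set(key, value);
  return store.get(key);
}

console.log(remember(new InMemoryStore(), "greeting", "hello")); // "hello"
```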
I liked this talk on the subject: https://www.deconstructconf.com/2019/dan-abramov-the-wet-codebase
It's a nice explanation of how it's less about code that looks the same or currently performs the same operations, and more about what it means.
Of course functions still shouldn't do more than one or two things, so they don't get too complex.
And yeah, duplicates to avoid tight coupling (if still needed) are fine, if kept in moderation.
If the only difference between two classes or structs is hard coded config, rewrite to be a single implementation and pass the configs in.
If it's more in depth than that it may not be worth refactoring but future copies should be designed more generically.
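A small sketch of that refactor with assumed names (the client class, endpoints, and timeouts are all made up): two near-identical classes whose only difference was hard-coded config collapse into one implementation that takes those values as parameters.

```ts
// Hypothetical before: StagingClient and ProductionClient were identical
// except for a hard-coded URL and timeout. After: one implementation,
// with the differing values passed in as config.
interface ClientConfig {
  baseUrl: string;
  timeoutMs: number;
}

class ApiClient {
  constructor(private readonly config: ClientConfig) {}

  describe(): string {
    return `${this.config.baseUrl} (timeout ${this.config.timeoutMs}ms)`;
  }
}

const staging = new ApiClient({ baseUrl: "https://staging.example.com", timeoutMs: 5000 });
const production = new ApiClient({ baseUrl: "https://api.example.com", timeoutMs: 1000 });
console.log(staging.describe(), production.describe());
```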
I can very much recommend taking a look at the AHA principle. Kent C. Dodds has a short and good presentation on it. That's what I go by these days.
There are 2 metrics that need to be considered:
- easy to read
- easy to modify
The first point is by far the most important. Usually DRY wins because less code means less to read, so less to put in your head. But if the abstraction is too complicated (for example because there are too many parameters) then it's worth reconsidering the DRYing.
And don't forget the second point, but do not overthink it, and YAGNI. Sometimes a simple comment “don't forget to update method foobar()” is enough. Don't forget either that you can always rewrite the abstraction when you need to modify something (if the original did not fit your new requirements), but for this to be an easy task, the understanding of the original abstraction must be crystal clear. That's why the first point is so important.
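A tiny sketch of that comment approach (the two formatting functions are invented for illustration): the duplication stays, and a plain comment carries the "don't forget to update" knowledge instead of an abstraction.

```ts
// Deliberately left duplicated: the logic is simple enough that a comment
// keeps the two copies in sync without introducing an abstraction.

// NOTE: if you change this, don't forget to update formatInvoiceNumber() below.
function formatOrderNumber(id: number): string {
  return `ORD-${id.toString().padStart(6, "0")}`;
}

// NOTE: kept in step with formatOrderNumber() above.
function formatInvoiceNumber(id: number): string {
  return `INV-${id.toString().padStart(6, "0")}`;
}

console.log(formatOrderNumber(42), formatInvoiceNumber(42)); // ORD-000042 INV-000042
```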
I agree with WET. We have a tendency to over-complicate things, sometimes you gotta take a breath and ask yourself if the refactor is actually worth it.
It depends on how much regression testing is in place to check that old and refactored code behaviours have the same outputs, and how much budget there is for writing these tests.
For old financial systems, for example, the answer is often to repeat the code again, as there are likely to be few tests confirming existing behaviour, and writing tests around very complex business domains is prohibitively expensive.
I just move any duplicate code into a function, no issues yet. (In fact, fixing a single bug often ends up fixing multiple problems.)
I tend to go with WET, and I read one or two articles that introduced WET and explained one of the misunderstandings of DRY: it is about sharing knowledge and less about sharing code. Therefore, as mentioned by another poster, it makes sense for business logic but less so for everything else.