this post was submitted on 06 Apr 2024
58 points (86.2% liked)
Programming Languages
The article doesn't make a persuasive case at all. It immediately backs off by acknowledging that 99% of type inference is fine, because it's really only complaining about function signature inference, which is an extreme case that only a few ML variants like OCaml and F# support.
It's like saying all American movies are terrible, and then clarifying that you've only seen White Chicks.
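To make the distinction concrete: most mainstream languages infer local and return types but still require parameter annotations, while full signature inference (the OCaml/F# feature the article is actually complaining about) omits even those. A minimal TypeScript sketch, where `sumLengths` is an illustrative name rather than anything from the article:

```typescript
// TypeScript infers the return type (number) from the body, but the
// parameter annotation is still required -- OCaml-style full signature
// inference would let you omit `lists: number[][]` as well.
function sumLengths(lists: number[][]) {
  // acc and l are inferred (number and number[]) from context.
  return lists.reduce((acc, l) => acc + l.length, 0);
}

console.log(sumLengths([[1, 2], [3]])); // 3
```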
This is the argument.
This comes back to a perennially forgotten and rediscovered fundamental truth about coding: it is much easier to write code than to read it.
This is immediately followed by the observation that, in any sufficiently large organization, you spend more time reading code than writing it.
Put it all together? Fractional-second gains in writing that carry meaningful costs in reading aren't worth it once you're operating at any kind of scale.
If you and your buddy are making little hobby projects, if you have a three-person dev team, if you're writing your own utility for personal use... I wouldn't expect these costs to become evident at that scale.
Again, it isn't that anything is intrinsically wrong with inference; it's just that there is a trade-off, and if you really think about it, in most professional environments it's a net negative for efficiency.
I agree if we're talking at the granularity of function signatures, but beyond that I don't. Virtually every typed language infers the types of intermediate expressions when chaining, and inference on local variables is often a way of breaking up large expressions without forcing people to repeat obvious types.
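As a sketch of that point, assuming a hypothetical `Order` type: the intermediate types below are fully determined by each step of the chain, so annotating the locals would only repeat them.

```typescript
interface Order { customer: string; total: number; }

const orders: Order[] = [
  { customer: "a", total: 5 },
  { customer: "b", total: 20 },
  { customer: "a", total: 15 },
];

// One large chained expression: every intermediate type is inferred.
const bigCustomers = orders.filter(o => o.total >= 10).map(o => o.customer);

// The same logic broken into inferred locals -- no need to repeat the
// obvious Order[] / string[] annotations at each step.
const large = orders.filter(o => o.total >= 10); // inferred: Order[]
const names = large.map(o => o.customer);        // inferred: string[]

console.log(names); // ["b", "a"]
```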
As for inferring code from types, scrub the symbol names off any production Java code and see how much sense it makes. If you really go down this path, you're quickly going to start wanting refinement types or dependent types. Both are great research fields, but the harsh reality is that there's no evidence that either is production-ready, or that either solves problems in readability.
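A toy illustration of the name-scrubbing point (both functions are hypothetical, not from any real codebase): the scrubbed version type-checks identically, but the types alone carry none of the intent.

```typescript
// Names scrubbed: this still checks as (number, number) => number,
// but the signature says nothing about what it computes.
function f(a: number, b: number): number {
  return b > a ? b - a : 0;
}

// The identical logic with names reads as domain logic.
function overdraftFee(balance: number, withdrawal: number): number {
  return withdrawal > balance ? withdrawal - balance : 0;
}

console.log(f(100, 130), overdraftFee(100, 130)); // 30 30
```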
The best technologies for reading code are all about interactive feedback loops that allow you to query and explore. In some languages that is type-based, with features like dot-completion, go-to-definition, and being able to hover to see types and doc comments. And just knowing whether the code compiles provides useful information.
In other languages, like Python and JavaScript, that feedback loop is value-based, and in some ways far richer and more powerful, but it suffers from being unavailable in most contexts. Most importantly, unlike a successful compile, the absence of error messages is not a very useful signal.
I am obviously no authority, but my honest opinion is that type inference is completely orthogonal to the questions that actually matter in code readability, so blaming it is silly.