lysdexic

joined 1 year ago
[–] lysdexic@programming.dev 1 points 1 month ago (1 children)

IMO, fork is the best git client for macOS/Windows

At first glance it looks like a SourceTree clone. What does fork provide that SourceTree doesn't?

[–] lysdexic@programming.dev -1 points 1 month ago (1 children)

Does anyone have any good sources or suggestions on how I could look to try and begin to improve documentation within my team?

Documentation in software projects, more often than not, is a huge waste of time and resources.

If you expect your docs to go deep into detail, they will quickly become obsolete and dissociated from the actual project, and you will waste a lot of work keeping them in sync, with little to no benefit.

If you expect your docs to stick with high-level descriptions and overviews, they quickly lose relevance and become useless after you onboard to a project.

If you expect your docs to document use cases, you're doing it wrong. That's the job of automated test suites.

The hard truth is that the only people who think they benefit from documentation are junior devs just starting out in their careers. Their need for docs is a proxy for the challenges they face in reading the source code and understanding how the technology is used and how things work and are expected to work. Once they get through onboarding, documentation quickly vanishes from their concerns.

Nowadays software is self-documenting through a combination of three tools: the software projects themselves, version control systems, and ticketing systems. A PR shows you which code changes were involved in implementing a feature or fixing a bug, the commit log touching a component tells you how that component can and does change, and tickets show you the motivation and context for those changes. Automated test suites track the conditions the software must meet, the ones the development team feels must hold for the software to work. The higher you go in the testing pyramid, the closer you get to documenting use cases.

If you care about improving your team's ability to document their work, you focus on ticketing, commit etiquette, automated tests, and writing clean code.

[–] lysdexic@programming.dev -1 points 1 month ago* (last edited 1 month ago) (1 children)

The only (arguably*) baseless claim in that quote is this part:

You do understand you're making that claim on a post discussing the Safe C++ proposal?

And to underline the absurdity of your claim: would you argue that it's impossible to write a memory-safe "hello, world" program in C++? From that point onward, what would it take to make it violate any memory constraints? Are those things avoidable? Think about it for a second before claiming nonsense about impossibilities.
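For illustration, a minimal program of that sort; it does no manual allocation and holds no raw pointers, so there is nothing in it that can violate memory safety:

```cpp
#include <iostream>

int main() {
    // No manual allocation, no raw pointers, no pointer arithmetic:
    // nothing here can access memory out of bounds or after free.
    std::cout << "hello, world\n";
}
```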

[–] lysdexic@programming.dev 4 points 1 month ago

Custom methods don't get the benefit of being handled as if they shared well-known semantics, such as being treated as safe or idempotent, but ultimately that's just an expectation anyone can work with.

In the end, specifying a new standard HTTP method like QUERY extends some very specific assurances regarding semantics, such as whether frameworks should enforce CSRF tokens based on whether QUERY has the semantics of a safe method or not.
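To make that concrete, here's a rough sketch of the kind of table a framework can hard-code for standard methods. The helper names are made up for illustration; the QUERY draft defines the method as safe and idempotent:

```cpp
#include <string_view>

// Hypothetical middleware helpers: standard methods have well-known
// semantics a framework can bake in. If QUERY is standardized as a
// safe method, it can be exempted from CSRF checks just like GET.
// A custom verb appears in no such table, so a framework must either
// guess or be configured per route.
constexpr bool is_safe_method(std::string_view method) {
    return method == "GET" || method == "HEAD"
        || method == "OPTIONS" || method == "QUERY";
}

constexpr bool requires_csrf_token(std::string_view method) {
    return !is_safe_method(method);  // only state-changing methods need one
}
```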

[–] lysdexic@programming.dev -3 points 1 month ago (3 children)

If you could reliably write memory safe code in C++, why do devs put memory safety issues into their code bases then?

That's a question you can ask the guys promoting the adoption of languages marketed on memory safety arguments. I mean, even Rust has its fair share of CVEs whose root cause is unsafe memory management.

[–] lysdexic@programming.dev 1 points 2 months ago* (last edited 2 months ago) (20 children)

From the article:

Josh Aas, co-founder and executive director of the Internet Security Research Group (ISRG), which oversees a memory safety initiative called Prossimo, last year told The Register that while it's theoretically possible to write memory-safe C++, that's not happening in real-world scenarios because C++ was not designed from the ground up for memory safety.

That baseless claim doesn't pass the smell test. Just because a feature wasn't rolled out in the mid-90s, does that mean it's not available today? Utter nonsense.

If your paycheck depends on pushing a specific tool, of course you have a vested interest in diving head-first into a denial pool.

But cargo cult mentality is here to stay.

[–] lysdexic@programming.dev 3 points 2 months ago (2 children)

However, we’re still implementing IPv6, so how long until we could actually use this?

We can already use custom verbs as we please: we only need to have clients and servers agree on a contract.

What we don't have is the benefit of high-level "batteries included" web frameworks doing the work for us.
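For instance, with libcurl the method is just a token on the request line; nothing stops a client from sending QUERY today, as long as the server (example.com here is only a placeholder) knows what to do with it:

```cpp
#include <curl/curl.h>

int main() {
    // libcurl will happily send any verb the server has agreed to
    // understand; CURLOPT_CUSTOMREQUEST only changes the method token.
    curl_global_init(CURL_GLOBAL_DEFAULT);
    CURL *curl = curl_easy_init();
    if (curl) {
        curl_easy_setopt(curl, CURLOPT_URL, "https://example.com/contacts");
        curl_easy_setopt(curl, CURLOPT_CUSTOMREQUEST, "QUERY");
        curl_easy_setopt(curl, CURLOPT_POSTFIELDS, "select name from contacts");
        curl_easy_perform(curl);
        curl_easy_cleanup(curl);
    }
    curl_global_cleanup();
    return 0;
}
```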

[–] lysdexic@programming.dev 1 points 2 months ago* (last edited 2 months ago) (2 children)

Yeah, the quality on Lemmy is nowhere (...)

Go ahead and contribute things that you find interesting instead of wasting your time whining about what others might like.

So far, all you're contributing is whiny shitposting. You can find plenty of that on Reddit too.

[–] lysdexic@programming.dev 2 points 2 months ago* (last edited 2 months ago)

It’s from 2015, so it’s probably what you are doing anyway

No, you are probably not using this at all. The problem with JSON is that these details are all handled in an implementation-defined way, and most implementations just fail or round silently.

Just give it a try: send a JSON document with, say, a huge integer down the wire and see if that triggers a parsing error. For starters, in .NET both Newtonsoft and System.Text.Json set a limit of 64 bits.

https://learn.microsoft.com/en-us/dotnet/api/system.text.json.jsonserializeroptions.maxdepth
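The same failure mode is easy to reproduce outside .NET. A sketch with nlohmann/json (assuming that library; its documented behavior is to fall back to double when an integer literal overflows uint64_t):

```cpp
#include <iostream>
#include <nlohmann/json.hpp>

int main() {
    // 2^64 does not fit in uint64_t. Instead of raising a parse error,
    // the parser silently stores the value as a double, rounding it.
    auto j = nlohmann::json::parse("18446744073709551616");
    std::cout << j.dump() << '\n';  // prints a double approximation,
                                    // not the original integer
}
```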

[–] lysdexic@programming.dev 4 points 2 months ago* (last edited 2 months ago)

Why restrict to 54-bit signed integers?

Because number is a double, and IEEE 754 gives the mantissa of double-precision numbers 53 bits of precision, plus a sign bit.

Meaning, that's the highest integer precision a double-precision value can express.

I suppose that makes sense for maximum compatibility, but feels gross if we’re already identifying value types.

It's not about compatibility. It's because JSON has only a single number type, covering both floating point and integers, and number is implemented as a double-precision value. If you have to express integers with a double-precision type, beyond 53 bits you start to lose precision, which goes completely against the notion of an integer.
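You can watch the precision run out exactly at that boundary:

```cpp
#include <cstdio>

int main() {
    double d = 9007199254740992.0;   // 2^53
    // Below 2^53 every integer is exactly representable...
    std::printf("%.0f\n", d - 1.0);  // 9007199254740991
    // ...at 2^53 the gap between adjacent doubles grows to 2,
    // so adding 1 silently rounds back down.
    std::printf("%.0f\n", d + 1.0);  // 9007199254740992, not ...993
}
```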
