this post was submitted on 02 Nov 2023
1395 points (98.5% liked)

Programmer Humor


Post funny things about programming here! (Or just rant about your favourite programming language.)

[–] AAA@feddit.de 95 points 1 year ago (6 children)
[–] BustlingChungus@lemmy.world 46 points 1 year ago

The Testmen?

[–] dan@upvote.au 12 points 1 year ago (1 children)

I've written some tests that got complex enough that I also wrote tests for the logic within the tests.

[–] AAA@feddit.de 7 points 1 year ago

We do that for some of the more complex business logic. We wrote libraries, which are used by our tests, and we wrote tests which test the library functions to ensure they provide correct results.

What always worries me is that WE came up with that. It wasn't some higher up, or business unit, or anything. Only because we cared to do our job correctly. If we didn't - nobody would. Nobody is watching the testers (in my experience).

[–] kevincox@lemmy.ml 6 points 1 year ago (1 children)

Mutation testing is quite cool. Basically it analyzes your code and makes changes that should break something. For example if you have if (foo) { ... } it will remove the branch or make the branch run every time. It then runs your tests and sees if anything fails. If the tests don't fail then either you should add another test, or that code was truly dead and should be removed.

Of course this has lots of "false positives". For example you may be checking if an allocation succeeded and don't need to test if every possible allocation in your code fails, you trust that you can write if (!mem) abort() correctly.

[–] Kidplayer_666@lemm.ee 5 points 1 year ago

Create tests to test the tests. Create tests to test those. Recurse to infinity

[–] Alexc@lemmings.world 57 points 1 year ago (1 children)

This is why you write the test before the code. You write the test to make sure something fails, then you write the code to make it pass. Then you repeat this until all your behaviors are captured in code. It’s called TDD

But, full marks for writing tests in the first place
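
The loop described above, sketched in Python with a made-up `slugify` function (the function and its behaviors are illustrative, not from the thread):

```python
# Red: write the test first. Run it and watch it fail (slugify doesn't
# exist yet, so this raises NameError -- that's the "failing test").
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Trim Me  ") == "trim-me"

# Green: write just enough code to make the test pass.
def slugify(title):
    # lower-case, split on whitespace (which also trims), join with dashes
    return "-".join(title.lower().split())

test_slugify()  # passes now; the next behavior gets its own failing test first
```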

[–] oce@jlai.lu 71 points 1 year ago (8 children)

That presupposes you have a clear idea of what you're going to code. Otherwise, a lot of time is wasted constantly rewriting both the code and the tests as you come to understand the problem better while solving it. I guess it works for narrowly scoped tasks rather than open-ended problems.

[–] moriquende@lemmy.world 29 points 1 year ago (1 children)

100%. TDD is just not practically applicable to a lot of scenarios, and I wish evangelists were clearer on that detail.

You could replace "TDD" with pretty much any fixed methodology and be completely accurate.

[–] time_fo_that@lemmy.world 16 points 1 year ago

This is the reason I dislike TDD.

[–] homoludens@feddit.de 13 points 1 year ago* (last edited 1 year ago)

constantly rewrite both the code and tests as you better understand how you’re going to solve the task while trying

The tests should be decoupled from the "how" though. It's obviously not possible to completely decouple them, but if you're "constantly" rewriting, something is going wrong.

Brilliant talk on that topic (with slight audio problems): https://www.youtube.com/watch?v=EZ05e7EMOLM

The only projects I've ever found interesting in my career was the stuff where nobody had any idea yet how the problem was going to be handled, and you're right that starting with tests is not even possible in this scenario (prototyping is what's really important). Whenever I've written yet another text/email/calling/video Skype clone for yet another cable company, it's possible to start with tests because you already know everything that's going into it.

[–] Alexc@lemmings.world 4 points 1 year ago

The tests help you discover what needs to be written, too. Honestly, I can’t imagine starting to write code unless I have at least a rough concept of what to write.

Maybe I’m being judgemental (I don’t mean to be) but what I am trying to say is that, in my experience, writing tests as you code has usually lead to the best outcomes and often the fastest delivery times.

[–] vsh@lemm.ee 30 points 1 year ago (1 children)

I don't need tests when I know the output 😎

[–] quicken@aussie.zone 5 points 1 year ago (1 children)

What if the output is encrypted? Or a 34-dimensional matrix?

What if the test was testing timing? Or threading? Or error handling?

[–] cheesemoo@lemmy.conk.me 10 points 1 year ago

LOOKS GOOD TO ME, SHIP IT

[–] jbrains@sh.itjust.works 22 points 1 year ago* (last edited 1 year ago) (1 children)

This seems to happen quite often when programmers try to save time when writing tests, instead of writing very simple tests and allowing the duplication to accumulate before removing it. I understand how they feel: they see the pattern and want to skip the boring parts.

No worries. If you skip the boring parts, then much of the time you'll be less bored, but sometimes this will happen. If you want to avoid this, then you'll have to accept some boredom then refactor the tests later. Maybe never, if your pattern ends up with only two or three instances. If you want to know which path is shorter before you start, then so would I. I can sometimes guess correctly. I mostly never know, because I pick one path and stick with it, so I can never compare.

This also tends to happen when the code they're testing has painful hardwired dependencies on expensive external resources. The "bug" in the test is a symptom of the design of the production code. Yay! You learned something! Time to roll up your sleeves and start breaking things apart... assuming that you need to change it at all. Worst case, leave a warning for the next person.

If you'd like a simple rule to follow, here's one: no branching in your tests. If you think you want a branch, then split the tests into two or more tests, then write them individually, then maybe refactor to remove the duplication. It's not a perfect rule, but it'll take you far....

[–] ChickenLadyLovesLife@lemmy.world 27 points 1 year ago (2 children)

the code they’re testing has painful hardwired dependencies on expensive external resources

I've told this story elsewhere, but I had a coworker who wrote an app to remote-control a baseball-throwing machine from a PDA (running WinCE). These machines cost upwards of $50K so he only very rarely had physical access to one. He loved to write tests, which did him no good when his code fired a 125 mph knuckleball a foot over a 10-year-old kid's head. This resulted in the only occasion in my career when I had to physically restrain a client from punching a colleague.

Ah, the ol’ off-by-one-foot problem.

[–] jbrains@sh.itjust.works 6 points 1 year ago (1 children)

Wow. I love that story and I'm glad nobody was hurt.

I wonder whether that happened as a result of unexpected behavior by the pitching machine or an incorrect assumption about the pitching machine in that coworker's tests.

I find this story compelling because it illustrates the points about managing risk and the limits of testing, but it doesn't sound like the typical story that's obviously hyperbole and could never happen to me.

Thank you for sharing it.

[–] ChickenLadyLovesLife@lemmy.world 15 points 1 year ago (4 children)

It happened because the programmer changed the API from a call that accepted integer values between 0 and 32767 (minimum and maximum wheel speeds) to one that accepted float values between 0.0 and 1.0. A very reasonable change to make, but he quick-fixed all the compiler errors that this produced by casting the passed integer parameters all through his code to float and then clamping the values between 0.0 and 1.0. The result was that formerly low-speed parameters (like 5000 and 6000, for example, which should have produced something like a 20 mph ball with topspin) were instead cast and clamped to 1.0 - maximum speed on both throwing wheels and the aforesaid 125 mph knuckleball. He rewrote his tests to check that passed params were indeed between 0.0 and 1.0, which was pointless since all input was clamped to that range anyway. And there was no way to really test for a "dangerous" throw anyway since the machine was required to be capable of this sort of thing if that's what the coach using it wanted.
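
The failure mode is easy to reproduce. A minimal Python sketch (the 0–32767 and 0.0–1.0 ranges are from the story above; the function names and the rescaling fix are illustrative):

```python
OLD_MAX = 32767  # old API: integer wheel speed, 0..32767

def set_wheel_speed_new(speed):
    """New API: float in 0.0..1.0. Returns the clamped value it would use."""
    return min(max(float(speed), 0.0), 1.0)

# The "quick fix": cast the old integer straight through and clamp.
legacy_speed = 5000                       # formerly a gentle low-speed throw
wrong = set_wheel_speed_new(legacy_speed)
assert wrong == 1.0                       # clamped to maximum: the 125 mph ball

# What was actually needed: rescale the old range, don't just cast.
right = set_wheel_speed_new(legacy_speed / OLD_MAX)
assert abs(right - 5000 / 32767) < 1e-9   # ~0.15, a low speed again
```

The rewritten tests ("is the value in 0.0..1.0?") could never catch this, because the clamp guarantees the invariant they checked.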

[–] Duralf@lemmy.world 5 points 1 year ago (1 children)

API from a call that accepted integer values between 0 and 32767 (minimum and maximum wheel speeds) to one that accepted float values between 0.0 and 1.0.

This would cause alarm bells to ring in my head for sure. If I did something like that I would make a new type that was definitely not implicitly castable to or from the old type. Definitely not a raw integer or float type.
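
A sketch of that newtype idea in Python (names are hypothetical; in C++ or Rust you'd get the same effect at compile time with a wrapper struct):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThrottleFraction:
    """Wheel speed as a fraction 0.0..1.0 -- not interchangeable with raw ints."""
    value: float

    def __post_init__(self):
        # Reject anything outside the new range at construction time.
        if not 0.0 <= self.value <= 1.0:
            raise ValueError(f"throttle out of range: {self.value}")

def set_wheel_speed(speed: ThrottleFraction) -> float:
    return speed.value

# A raw legacy value like 5000 can't be smuggled in as a valid fraction:
try:
    ThrottleFraction(5000)
    raise AssertionError("should have been rejected")
except ValueError:
    pass

assert set_wheel_speed(ThrottleFraction(0.15)) == 0.15
```

Python only enforces this at runtime, but the point stands: make the wrong conversion impossible to write by accident.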

[–] marcos@lemmy.world 4 points 1 year ago (4 children)

That kind of code usually is written on a restricted dialect of C.

C is not a language that allows for that kind of safety practice even on the fully-featured version.

[–] cupcakezealot@lemmy.blahaj.zone 14 points 1 year ago (3 children)

turned out to be a semicolon

[–] Zaphod@discuss.tchncs.de 25 points 1 year ago (1 children)

The tests wouldn't even run if that was the issue, pretty sure (depends on the language, I suppose)

[–] gratux@lemmy.blahaj.zone 16 points 1 year ago (1 children)

fun situations can arise when you write , instead of ; For those not in the know, in c++ the comma operator evaluates the left expression, discards the value, then evaluates the right expression and returns its value. if you now have a situation like this

int i = (0, printf("some message"));

i has a completely different value than you'd expect, since it actually gets the return value of printf instead (the parentheses are needed: int i = 0, printf(...); wouldn't even compile, because the comma there starts a second declaration)

[–] scubbo@lemmy.ml 5 points 1 year ago

And people give Python shit for significant whitespace 😂

[–] rwhitisissle@lemmy.ml 4 points 1 year ago

And in python, no less. Sloppy.

[–] rob64@startrek.website 4 points 1 year ago (1 children)

I'll just write thousands of lines of code inside a global object... I'm sure I won't put a semicolon where a comma should be...

[–] fmstrat@lemmy.nowsci.com 11 points 1 year ago

But this does mean writing tests works.

[–] Rin@lemm.ee 10 points 1 year ago

now you have a group of very specific tests for later debugging

[–] ICastFist@programming.dev 10 points 1 year ago (1 children)

I remember being asked to make unit tests. I wasn't the programmer and for the better part of a week, they didn't even let me look at the code. Yeah, I can make some great unit tests that'll never fail without access to the stuff I'm supposed to test. /s

[–] loutr@sh.itjust.works 11 points 1 year ago (2 children)

I guess it would make sense if you're testing a public API? To make sure the documentation is sufficient and accurate.

[–] Natanael 9 points 1 year ago

Yeah blackbox testing is a whole thing and it's common when you need something to follow a spec and be compatible

[–] fiveoar@lemmy.dbzer0.com 8 points 1 year ago (2 children)

Congrats, you have discovered why in TDD you write the test, watch the test fail, then make the test pass, then refactor. AKA: Red, Green, Refactor

[–] iAvicenna@lemmy.world 4 points 1 year ago

Run the test a second time, test passes. Silently move on to the next step.

[–] SrTobi@feddit.de 4 points 1 year ago

And then in the end we realize the most important thing was the tests we wrote along the way.
