Senal

joined 11 months ago
[–] Senal 1 points 1 week ago* (last edited 1 week ago)

Depends on the team.

On paper, what you're "supposed" to do is iterate on gameplay mechanics and scenarios by building the bare minimum needed to get a feel for them; once you have something viable, you proceed further along the development process.

In reality it depends heavily on context. Sometimes a particular scenario works fine standalone but not as part of the whole, or a necessary balancing change elsewhere breaks the fun of something established; late additions can cause this too.

But again, that depends heavily on the type of game; RPGs are more sensitive to balancing changes than racing sims, for example.

Specifically, we'd usually weigh how badly something doesn't work against how much work it would be to "fix" it. Sometimes it'd get cut completely, sometimes it'd get scaled back, sometimes we'd re-evaluate the feature/scenario for viability and decide after that re-evaluation, and sometimes we'd just bite the bullet and work through it.

Over time you get a bit more cautious about committing to things without thinking through the potential consequences, but sometimes it just isn't possible to see the future.

I understand the realities of managing a project like that. At the same time, these kinds of things are known upfront to a degree, and yet people always seem surprised that the cone of uncertainty on a project like that is huge.

As I said, I have no problem with re-use; I have a problem with saying re-use is "essential" to stopping crunch, as if the management of a project like that isn't the core of the problem.

[–] Senal 1 points 1 week ago* (last edited 1 week ago) (2 children)

Apologies for the delay; my instance is having problems with communities, so I can't reply with that account.

To answer the question, not anymore.

The crunch culture was a big part of me leaving.

Honestly, it's not that different in kind from non-game dev houses; the difference is in the magnitude.

I understand why these things happen, the reasons just aren't good enough for me.

Poor planning compounds with ridiculous timeframes to create an almost immutable deadline to deliver unrealistic goals.

The problem is, they'll jump right back into the next project and make exactly the same mistakes. At what point does it stop being a mistake and start being "just how things are done"?

One of the main reasons this works at all is that they take young, idealistic programmers who want to work in their dream industry and throw them into a cult of crunch, where "everyone is doing it so it must be ok" or "this is the price of having my dream job".

It's certainly not all studios, and it seems to have gotten marginally better at indie to small-medium houses, but it's prevalent enough that it's still being talked about.

[–] Senal 2 points 5 months ago (1 children)

No problem. Outside perspectives are usually interesting to explore.

I hate the idea that I’m falling for some sort of pseudoscience and weigh that against (a) how it tangibly helped me, and (b) whether we simply haven’t found the proper way to test its efficacy properly

Perhaps a different approach might help.

[I will caveat the following with: I am not, in any way, qualified to give psychological advice or medical suggestions. This is not that; it's just my personal opinion.]


Rather than trying to figure out whether or not the test itself is flawed, look at the outcome instead.

Based on how you described it, it wasn't the specific methodology itself that was helpful to you.

You can take whatever positives you experienced and explore them completely independently.

Does it matter that you used a potentially flawed methodology to come to a useful conclusion about yourself?

[–] Senal 2 points 5 months ago (3 children)

Would you mind elaborating on “control the context to eliminate bias and gaming” under this situation?

Sure, apologies if you already know any of this.

As with other scientific fields, there are guidelines and processes in place to evaluate the structure and approach for research.

IIRC you don't technically have to adhere to them, but it will certainly be a point of industry and peer criticism if you don't, sometimes leading to papers not being accepted by journals, among other more esoteric consequences.

This is one of the reasons proper peer review is important.


A basic example would be picking from (or narrowing to) an appropriate subset of the population.

If you were trying to perform research with the goal of evaluating the population as a whole, running your experiment exclusively on women aged 18-25 would immediately be picked up as a reason the results can't be trusted (in terms of the stated goal).
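
To make that concrete, here's a toy sketch (entirely made-up numbers and my own illustration, nothing to do with any real study): if the sample is restricted to one subgroup, the estimate for the whole population comes out systematically wrong no matter how many people you ask.

```python
# Toy illustration of sampling bias with invented numbers.
import random

random.seed(0)

# Hypothetical population made of two groups with different true means (cm).
group_a = [random.gauss(165, 7) for _ in range(5000)]
group_b = [random.gauss(178, 7) for _ in range(5000)]
population = group_a + group_b

def mean(xs):
    return sum(xs) / len(xs)

narrow_sample = random.sample(group_a, 200)     # drawn from one subgroup only
broad_sample = random.sample(population, 200)   # drawn from the whole population

print(f"true population mean:   {mean(population):.1f}")
print(f"narrow-sample estimate: {mean(narrow_sample):.1f}")  # systematically off
print(f"broad-sample estimate:  {mean(broad_sample):.1f}")   # close to the truth
```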


A slightly less obvious example (for certain kinds of experiments) would be sentence structure and unconscious bias through contextual information.

When wording questions and examples, it is easy to introduce bias through tone and word choice, which can affect the outcome of the research.

A real-world example of the unconscious bias aspect is hiring discrimination: https://www.kcl.ac.uk/research/the-resume-bias-how-names-and-ethnicity-influence-employment-opportunities

A simplistic summary is that there is a bias (unconscious or otherwise) against people with "ethnic" sounding names on their resume.

There is, of course, more nuance to it than that, but still.

This is much less cut and dried, because sometimes the bias is the thing being studied and forms part of the test, which is why, when creating these kinds of experiments, the process is carefully evaluated and revised, hopefully by multiple people.


Another one you touched upon already is context: the time of day, life events, general disposition, etc.

Good test design will try to account for as much of this as possible (though it's unlikely to remove it entirely).



Obviously the more questions asked, the more granular the results can become, so I’ll grant that.

That's not always strictly true; quality is also important, and there are diminishing returns on quantity. The length of a questionnaire can sometimes have its own effect on the results, for instance.

This relates to your final point: What would I consider to be the test’s objectives? For me, it’s an exercise in gleaning insight into one’s own personality; to help with reflection and introspection. To identify your strengths and weaknesses. In some sense, to provide some identity. I can’t tell you how I felt understood. I actually teared up while reading the analysis for the first time. As something of an outsider for much of my life it was like it filled in the missing pieces I long suspected and yet always doubted. Like I said I can’t speak for what others got out of the test, but it was the best therapy I ever received. (And for context, I read every other generalized group to make sure it wasn’t generalized astrological bullshit where every description could match every person, for which nothing came close).

It sounds like this experience was/is of great use to you. I've heard similar things about ADHD and ASD diagnoses.

Finding your tribe/place sounds great.

What I would say is that people who don't have this level of resonance with the results could well see it less favourably than you do.

That isn't necessarily because they performed the test (or interpreted the result) incorrectly; it could just mean less to them.

[–] Senal 3 points 5 months ago (5 children)

I'm aware i'm cherry picking here.

Scientists do this all the time.

They do, with strict guidelines about how they control the context to eliminate bias and gaming (as much as they can, anyway).

The only substantive arguments that I’ve seen made – and the only “debunking” aspects to this test revolve around veracity and validity – which is understandably concerning. But let’s unpack that: Do the results bear repeatability, and do what the results say reflect the reality of who that person is?

I could very well be reading this incorrectly, but are you saying that veracity and validity are known concerns, and then following that up with "Can we verify? Are the results useful?"

I wouldn't consider restating the questions that represent the known concerns as unpacking said concerns.

misunderstanding of the test’s objectives.

Genuine question: what would you consider to be the test's objectives?

[–] Senal 18 points 5 months ago

This ^ hot take is brought to you by the same people who ask "If you don't believe in god, what is stopping you from raping and murdering your whole neighborhood?"