My interest in anime has waned pretty significantly in recent years, but I did check out this one when I first heard about it. Absolutely excellent, fun, and very moving towards the end.
Even without any potential monetization by anyone... you kind of are? You are part of the community here, and that's what people come here for. Lemmy's community is the product it offers, and you are a piece of it.
Yes, that's my point. They know they have a dominant hand, and which hand that is. They are also likely to remember whether they are right or left handed. Even if they don't know intrinsically what "right" is, it can simply be memorized in the same way that people know their blood type.
Combining those two pieces of information should let a person figure out which side is which.
Does she remember whether she's right or left handed? Just as a static fact about herself? I feel like it should be easy to reconcile an instruction like "turn right" by cross-referencing the knowledge of "I'm left handed" with "this is the hand I prefer to use".
I think you have this backwards. They aren't saying that professional research doesn't have any of these problems. They're just reiterating what research is, and pointing out that the "do your own research" crowd are almost never actually doing any research.
YouTube shorts as well. I long ago stopped bothering to look at any of them after the 666th one along the lines of "this incredible unknown fact about (insert franchise)", which invariably turns out to be someone basically pissing themselves in excitement while reiterating a main story beat as if it were some kind of hidden secret.
Because mistakes are less obvious, and when they do happen tend to be subjective and hard to "prove". You can do a creative job poorly and it might be a while before anyone catches on, so AI gets to just sort of squat there while AI companies pretend LLMs are capable of genuine creative output.
Any job that has an objectively correct result from the work being done will be screwed up by AI on day one, if not immediately.
Or it was overcast on those days. 46/52 is far better than you'd be able to manage in my area.
Durkey Tinner
Leet Moaf
Chotato Pip
Rizza Poll
I think you are conflating a few different concepts here.
Can you comment on the specific makeup of a “rendered” audio file in plaintext? How is the computer representing every little bit of sound at any given point, the polyphony etc?
What are the conventions of such representation? How can a spectrogram tell pitches are where they are, how is the computer representing that?
This is a completely separate concern from how data can be represented as text, and it will vary by audio format. The "simplest" case, PCM-encoded audio like in a .wav file, doesn't concern itself with polyphony at all; it's just a quantised representation of the audio wave's amplitude at any given instant in time, sampled tens of thousands of times per second. Whether it's a single pure tone or a full symphony, the density of what's stored is the same: essentially an air-pressure-over-time graph.
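To make that concrete, here's a minimal Python sketch (my own illustration, not something from the original discussion) that writes one second of a 440 Hz tone as 16-bit mono PCM. The file name "tone.wav", the sample rate, and the amplitude are just assumptions for the example; the point is that every sample is nothing more than a signed amplitude value, and a full orchestra stored this way would use exactly as many samples per second.

```python
import math
import struct
import wave

# One second of a 440 Hz tone as 16-bit mono PCM.
# Each sample is just the instantaneous amplitude, captured 44,100 times per second.
SAMPLE_RATE = 44100   # samples per second
FREQ = 440            # tone frequency in Hz
AMPLITUDE = 20000     # peak value, safely under the 16-bit limit of 32767

samples = [
    int(AMPLITUDE * math.sin(2 * math.pi * FREQ * n / SAMPLE_RATE))
    for n in range(SAMPLE_RATE)
]

with wave.open("tone.wav", "wb") as wav:
    wav.setnchannels(1)       # mono
    wav.setsampwidth(2)       # 2 bytes = 16 bits per sample
    wav.setframerate(SAMPLE_RATE)
    wav.writeframes(struct.pack("<%dh" % len(samples), *samples))
```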
Is it the same to view plaintext as analysing it with a hex-viewer?
"Plaintext" doesn't really have a fixed definition in this context. It can be the same as looking at it in a hex viewer, if your "plaintext" representation is hexadecimal encoding. Binary data, like in audio files, isn't plaintext, and opening it directly in a text editor is not expected to give you a useful result, or even a consistent result. Different editors might show you different "text" depending on what encoding they fall back on, or how they represent unprintable characters.
There are several methods of representing binary data as text, such as hexadecimal, base64, or uuencode, but none of these representations, if saved as-is, is the original file, strictly speaking.
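Sketched again in Python with the same hypothetical tone.wav: the hex or base64 string round-trips back to the original bytes, but saving the encoded text on its own just gives you a (larger) text file, not the .wav.

```python
import base64
import binascii

with open("tone.wav", "rb") as f:
    raw = f.read()

# Two common text representations of the same binary data.
as_hex = binascii.hexlify(raw).decode("ascii")
as_b64 = base64.b64encode(raw).decode("ascii")

# Writing as_hex or as_b64 to disk produces a text file, not the original .wav.
# The original bytes only come back by decoding again:
assert binascii.unhexlify(as_hex) == raw
assert base64.b64decode(as_b64) == raw
```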
Two and a half months is insane for a practical skills demonstration for a job interview. Those should be a couple of hours at most.