Kissaki

joined 1 year ago
[–] Kissaki@programming.dev 6 points 4 days ago (2 children)

Python’s major pro is its simple, straightforward syntax, which excels at data handling. This has made it popular with novices of all shades […]

For first-timer coders, Python is easier to learn, understand, and adapt than many low-level programming languages […]

Is Python actually easy to learn? I can see it being easier than low-level programming. But there are alternatives like C# and Java that certainly seem much better and easier to me. Especially when you consider the ecosystem beyond just writing code.

Plus, the Python language is a steadfast feature in the desktop Linux software landscape. It’s preinstalled on most Linux distributions, boasts extensive library support, and can be used to fashion very cool (as well as very basic) Qt, GTK, and other toolkit UIs.

It's certainly available, and more readily available on Linux. The whole v2/v3 transition was a mess, though. But I guess preinstalled is convenient, and more accessible than Java or whatever, which you have to install first.

I've never seen JavaScript or Python popularity as evidence of, or correlating with, actual qualities. More with self-perpetuating usage. Python was being used in science, then in AI, then AI became popular. To me, it seems like a natural consequence of propagation more than of simplicity or features over other languages and frameworks.

[–] Kissaki@programming.dev 3 points 4 days ago

eeew (/s)

I have a dislike for both of them. Well, for JavaScript mainly the server-side part. I'm fine with it for web scripting, where it's the only native option.

[–] Kissaki@programming.dev 4 points 6 days ago

Notably, for CPU virtualization only. And on other platforms they already did.

Broadcom would like to clarify that while using KVM for the CPU virtualization, they will continue to rely on all of the existing VMware virtual devices for graphics and other functionality. Also on both macOS and Windows they have migrated to the native CPU virtualization frameworks.

[–] Kissaki@programming.dev 1 points 1 week ago

I found it hard to follow despite C# being my main driver.

Using ref, in the past, has been about modifiable variable references.

All these introductions, even when following C# changes across recent versions, were never something I actively used, apart from occasionally adding ref to structs so they can contain existing ref struct types. It never seemed necessary.

Even without ref you use reference and struct types, where reference content can be modified elsewhere. And IDisposable for object lifetimes with cleanup.
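
Not from the article, just my own minimal sketch of the two classic ref uses I mean:

using System;

class RefDemo
{
    // Classic ref parameter: the callee can reassign the caller's variable.
    static void Increment(ref int value) => value += 1;

    // Declaring a struct as `ref struct` keeps it on the stack, which is what
    // allows it to contain existing ref struct types such as ReadOnlySpan<char>.
    ref struct Tokenizer
    {
        public ReadOnlySpan<char> Remaining;

        public Tokenizer(ReadOnlySpan<char> text) => Remaining = text;
    }

    static void Main()
    {
        var number = 1;
        Increment(ref number); // number is now 2
        Console.WriteLine(number);

        var tokenizer = new Tokenizer("a b c".AsSpan());
        Console.WriteLine(tokenizer.Remaining.Length);
    }
}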

[–] Kissaki@programming.dev 2 points 1 week ago* (last edited 1 week ago) (1 children)

Have you considered creating a ticket called "Can't ask questions without joining discord"?

Do you think it would have more answers if it were on GitHub discussions?

[–] Kissaki@programming.dev 2 points 1 week ago

I'm not familiar with their products and product names, so I had to look them up. Sharing that here, including the other two free non-commercial ones mentioned in the blog post:

  • Aqua: test automation IDE (various tech)
  • Rider: .NET and game dev
  • RustRover: Rust IDE
  • WebStorm: JavaScript and TypeScript IDE
[–] Kissaki@programming.dev 3 points 1 week ago* (last edited 1 week ago)

Release must be documented

It's not a must [unless you put it into a contract]; it's a should, or a would-be-nice.

Many, if not most, projects don't follow good, obvious, transparent, documented release or change management.

I wish for it, too, but it's not the reality of projects. Most people don't seem to care about it as much as I do.

I agree blind acceptance/merging is problematic. But for some projects (small scope/size/personal-FOSS, trustworthy upstream) I see it as pragmatic rather than problematic.

[–] Kissaki@programming.dev 4 points 1 week ago* (last edited 1 week ago)

The follow-up quote:

In your specific case, the problem is your employer is on that list [of sanctioned entities]. If there's been a mistake and your employer isn't on the list, that's the documentation Greg is looking for.

[–] Kissaki@programming.dev 4 points 1 week ago* (last edited 1 week ago)

I would consider ~~three~~ four approaches.

1. Commit and push manually and deliberately

I commit changes early and often anyway. I also push regularly, seeing the remote as a safe and remote (as in backup) baseline and reference state.

The question would be: do I switch PCs while I'm still exploring things in the workspace, without committing before switching or moving away, and would I want those changes on the other PC? If so, this would not be enough.

2. Auto-push all local git references into a separate space on the git remote

Git branches are refs, commit pointers, just like other refs. And they can be put under arbitrary paths. refs/heads/ holds branches. I can replicate and regularly update all my branches under refs/pcreplica/laptop/*, and then on the other PC list or fetch those: individually or all of them, regularly and automatically, or manually.

git push origin 'refs/heads/*:refs/pcreplica/laptop/*'    # replicate all local branches to the remote
git ls-remote                                             # list the replicated refs
git fetch origin 'refs/pcreplica/laptop/*:refs/laptop/*'  # on the other PC, fetch them into a local namespace

3. Auto-push the/a local branch like you suggested

My concern here would be: is only one branch enough? Is only the current branch enough?

4. Remoting into the other system

Are the systems both online? Can I remote into / connect into it when need be?

[–] Kissaki@programming.dev 29 points 2 weeks ago

Has features ✅

[–] Kissaki@programming.dev 2 points 2 weeks ago

we should just write the code how it should be

Notably, that's not what he says. He didn't say it in general. He said "for once, [after this already long discussion], let's push back here". (Literally: "this time we push back".)

who need a secure OS (all of them) will opt to not use Linux if it doesn’t plug these holes

I'm not so sure about that. He's making a fair assessment. These are very intricate attack vectors. Security assessment is risk assessment either way. Whether you're weighing a significant performance loss against low-risk, potentially high-impact attack vectors, or assessing the risk directly, doesn't make that much of a difference.

These attack vectors are so intricate and unlikely to be exploited, with other firmware patches in line or alternative hardware available, that there are alternative options and an acceptable risk.

[–] Kissaki@programming.dev 2 points 2 weeks ago* (last edited 2 weeks ago)

Code before:

async function createUser(user) {
    if (!validateUserInput(user)) {
        throw new Error('u105');
    }

    const rules = [/[a-z]{1,}/, /[A-Z]{1,}/, /[0-9]{1,}/, /\W{1,}/];
    if (user.password.length >= 8 && rules.every((rule) => rule.test(user.password))) {
        if (await userService.getUserByEmail(user.email)) {
            throw new Error('u212');
        }
    } else {
        throw new Error('u201');
    }

    user.password = await hashPassword(user.password);
    return userService.create(user);
}

Here's how I would refac it for my personal readability. I would certainly introduce class types for structuring concerns rather than leaving dangling functions, but that'd be the next step, and I'm also not too familiar with how TypeScript differs from JavaScript.

const passwordRules = [/[a-z]{1,}/, /[A-Z]{1,}/, /[0-9]{1,}/, /\W{1,}/]
const validatePassword = (plainPassword) => plainPassword.length >= 8 && passwordRules.every((rule) => rule.test(plainPassword))
const userExists = async (email) => await userService.getUserByEmail(email)

async function createUser(user) {
    // What is validateUserInput? Why does it not validate the password?
    if (!validateUserInput(user)) throw new Error('u105')
    // Why do we check the password before the email? I would expect the other way around.
    if (!validatePassword(user.password)) throw new Error('u201')
    if (await userExists(user.email)) throw new Error('u212')

    const hashedPassword = await hashPassword(user.password)
    return userService.create({ email: user.email, hashedPassword: hashedPassword })
}

Noteworthy:

  • Contrary to most JS code, [for independent/new code] I use the non-semicolon-ending style following JavaScript Standard Style - see their no-semicolons rule for the reasoning; I don't actually know whether that's even valid TypeScript, I just fell back into JS
  • I use oneliners for simple check-error-early-returns
  • I commented what was confusing to me
  • I do things like this to fully understand code, even if I revert it in the end, and regardless of whether I implement a fix or not. Committing refacs is also a big part of what I do, but it's not always feasible.
  • I made the different interface to userService.create (a different kind of user object) explicit
  • I named the parameter in validatePassword plainPassword to make the expectation clear, and in the createUser function to more clearly and obviously differentiate between the passwords/which password is which. (In C# I would use a param label on the call, validatePassword(plainPassword: user.password), which would make the interface expectation and the label transformation from interface to logic clear.)

Structurally, it's not that different from the post's suggestion. But it doesn't rely on truthy value interpretation, and it goes a bit further.

 

Mapping C# array types to PostgreSQL array columns or other DBMS/DB JSON columns.
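
For illustration, here's a minimal sketch of what such a mapping can look like with EF Core and the Npgsql provider; the entity and property names are made up, and providers without native array support would store the collection as a JSON column instead:

using Microsoft.EntityFrameworkCore;

public class Post
{
    public int Id { get; set; }

    // On PostgreSQL (Npgsql EF Core provider) this can map to a native text[]
    // column; on other providers, e.g. SQL Server, EF Core 8 stores such
    // primitive collections as a JSON column.
    public string[] Tags { get; set; } = [];
}

public class AppDbContext : DbContext
{
    public DbSet<Post> Posts => Set<Post>();

    protected override void OnConfiguring(DbContextOptionsBuilder options)
        => options.UseNpgsql("Host=localhost;Database=demo"); // hypothetical connection string
}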

 

cross-posted from: https://programming.dev/post/11720354

UI Components: Smart Paste, Smart TextArea, Smart ComboBox

Dependency: Azure Cloud

They show an interesting new kind of interactivity. (Not that I, personally, would ever use Azure Cloud for that though.)

 

Backwards compatibility is a key principle in .NET, and this means that packages targeting previous .NET versions, like ‘net6.0’ or ‘net7.0’, are also compatible with ‘net8.0’. […]

The new “Include compatible frameworks” option we added allows you to flip between filtering by explicit asset frameworks and the larger set of ‘compatible’ frameworks. Filtering by packages’ compatible frameworks now reveals a much larger set of packages for you to choose from.

 

Truly astonishing how much generalized modding seems to be possible through general DirectX (8/9) interfaces and official Nvidia-provided tooling.

As an AMD graphics card user, it's very unfortunate that RTX/this functionality is proprietary/exclusive to Nvidia. The tooling, at least. The produced results supposedly should work on other graphics cards too (I didn't find official/upstream docs about that).

For more technical details of how it works, see the GameWorks wiki:

 

cross-posted from: https://programming.dev/post/11034601

There's a lot, and specifically a lot of machine learning talk and features in the 1.5 release of Opus - the free and open audio codec.

Audible and continuous (albeit jittery) talk on 90% packet loss is crazy.

The section on WebRTC integration samples has an example where you can test out the 90% packet loss audio.

 

Describes the convenience and security considerations of auto-confirmation while entering a numeric PIN - which leads to information disclosure concerns.

An attacker can use this behavior to discover the length of the PIN: Try to sign in once with some initial guess like “all ones” and see how many ones can be entered before the system starts validating the PIN.
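
To make that concrete, a minimal sketch (my own, with hypothetical names) of auto-confirmation logic that leaks the PIN length:

using System;

class PinPrompt
{
    private readonly string storedPin;

    public PinPrompt(string storedPin) => this.storedPin = storedPin;

    // Auto-confirmation: validate as soon as the entered digit count matches
    // the stored PIN's length. An attacker entering "1111..." learns the PIN
    // length from how many digits go in before validation first fires.
    public void OnDigitEntered(string entered)
    {
        if (entered.Length == storedPin.Length)
            Validate(entered);
    }

    private void Validate(string entered) =>
        Console.WriteLine(entered == storedPin ? "unlocked" : "wrong PIN");

    static void Main()
    {
        var prompt = new PinPrompt("4821");
        var guess = "";
        for (var i = 0; i < 8; i++)
        {
            guess += "1";
            prompt.OnDigitEntered(guess); // prints "wrong PIN" at length 4, revealing the PIN length
        }
    }
}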

Is this a problem?
