mkhoury

joined 2 years ago
[–] mkhoury@lemmy.ca 10 points 8 months ago (1 children)

Wow, I've never seen enshittification mentioned by a politician. Glad to hear it's making its way into the Overton window.

[–] mkhoury@lemmy.ca 2 points 8 months ago* (last edited 8 months ago)

I sympathize; I also feel like the fight against the corporations is hopeless. The loss of leverage against employers for tech workers in the face of LLMs is huge. I'm a tech worker myself and am facing those same problems. But I'm not sure this means that FOSS is useless. The corps have a huge incentive to create these tools, whether they're open source or not. But at least when they're open source, we the people can also use them. I'm not suggesting that we can do this with LLMs today; we just don't have the right contributor and maintainer tools to do it. But right now we have to develop maintainer tools to filter out the huge amount of crap that badly designed LLM systems are putting out. This gives us the opportunity to build a contribution model that doesn't care about human vs LLM provenance, as long as a contribution meets certain quantifiable standards. In 5-10 years, we're going to have LLMs that can infer at very high speed, meaning we can do a lot of error correction by multiplying the number of generations we make and looking for consistency. The engineering effort on LLM systems has barely started; these systems are gonna get way more robust. Wouldn't it be better if these systems were built in the open so that we can all share, understand and leverage these tools for ourselves?
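
To make "looking for consistency" concrete, here's a minimal sketch of the idea, assuming a hypothetical `generate` function standing in for whatever inference call you'd actually use: sample the same prompt several times and keep the answer the model repeats most often.

```kotlin
// Sketch of consistency-based error correction: sample the same prompt several
// times and keep the answer the model agrees with itself on most often.
// `generate` is a hypothetical stand-in for an actual LLM call.
fun mostConsistentAnswer(
    prompt: String,
    samples: Int,
    generate: (String) -> String,
): String {
    val answers = (1..samples).map { generate(prompt).trim() }
    // Majority vote: the most frequently repeated answer wins.
    return answers.groupingBy { it }
        .eachCount()
        .entries
        .maxByOrNull { it.value }!!
        .key
}
```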

As for the gatekeeping/democratizing of art and tech, I agree that anyone can learn that stuff if they put enough effort into it. But the simple fact that people need to put time and sweat into it disqualifies a large swath of the population, from children to neurodivergent people to low-wage workers who don't have the breathing room to rest, let alone take up programming. It's really not about a 'soldier at the gate'; no person or group is preventing anyone from learning how to code. The social order and biology sometimes make it so. Wouldn't it be better for everyone if anyone could modify their software without having to invest a shitload of time learning how to code? Like maybe this person only wants one specific change in one specific app; the ROI just isn't there if they have to learn a whole new field.

I am not trying to say that AI and LLMs are the best thing since sliced bread. I think there are huge problems with them, but I also think they can be powerful tools if we wield them properly. There are big limitations to the tech, and huge ethical implications in the way these models are built and their cost to the planet. I'm hoping that we can fix these in the long run, but I sure as fuck don't count on the current AI industry leaders to do it. They're going to use this tech to supercharge surveillance capitalism, imo. It's gonna be fucking horrible. What I hope is that we can carve out a space for personal computing with the help of FLOSS.

[–] mkhoury@lemmy.ca 1 points 9 months ago* (last edited 9 months ago) (2 children)

I agree that with the current state of tools around LLMs, this is very inadvisable. But I think we can develop the right ones.

  • We can have tools that generate the context and info submitters need to understand what has been done, explain the choices being made, discuss edge cases, and so on. This could include taking screenshots as the submitter uses the app, or a testing period (requiring the submitter to spend a certain amount of time actually using their feature and smoothing out the experience).

  • We can have tools at the repo level that scan and analyze the effect of submitted changes. They can also isolate the different submitted features so that others can toggle or modify them if they're not to their liking. Similarly, you can have lots of LLMs impersonate typical users and try the modifications to make sure they work, putting humans in the loop at the appropriate times.

People are already submitting LLM-generated code they don't understand. How do we protect repos? How do we welcome these contributions while lowering risk? I think that with the right engineering effort, this can be done.
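
To sketch what that engineering effort could look like, here's a rough, hypothetical example of a provenance-agnostic merge gate. None of these names or thresholds come from an existing CI tool; they just illustrate "quantifiable standards before a maintainer spends any time on it".

```kotlin
// Hedged sketch of a provenance-agnostic merge gate: the repo doesn't ask whether
// a human or an LLM wrote the patch, only whether it clears quantifiable bars
// before a maintainer looks at it. All names and thresholds are hypothetical.
data class PatchReport(
    val testsPass: Boolean,
    val lintErrors: Int,
    val coverageDelta: Double,    // change in test coverage, in percentage points
    val touchesPublicApi: Boolean,
)

sealed interface GateDecision
object AutoReject : GateDecision         // fails the basic quantifiable standards
object NeedsHumanReview : GateDecision   // risky enough to need a person's judgment
object ReadyForMaintainer : GateDecision // clears the bars; worth a maintainer's time

fun gate(report: PatchReport): GateDecision = when {
    !report.testsPass || report.lintErrors > 0 -> AutoReject
    report.touchesPublicApi || report.coverageDelta < 0.0 -> NeedsHumanReview
    else -> ReadyForMaintainer
}
```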

 

Public code repositories like GitHub are currently being beset by a flood of LLM-generated contributions. It's becoming a bit of a problem, and it's one of the facets of the Great Flood the web is currently experiencing.

What does it look like when we are able to use LLMs to handle the flood of contributions? What happens when we’re able to screen and adopt PRs effectively with little to no human intervention?

I use the Voice audiobook app to listen to my DRM-free books. In this app, there's a configuration setting for auto-rewind: if you pause the book, when you resume it will rewind by X seconds. I didn't like that feature; I wanted the number of seconds to rewind to be based on how long it has been since I paused. So if I resume within a minute, no rewind; within 5 minutes, a 10-second rewind; anything longer, a 30-second rewind.
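
The rule itself is tiny. As a sketch (this isn't Voice's actual code, just the behaviour I want written out as a function):

```kotlin
import kotlin.time.Duration
import kotlin.time.Duration.Companion.minutes
import kotlin.time.Duration.Companion.seconds

// The rewind rule I want: the longer the pause, the further back we rewind.
fun rewindAmount(pausedFor: Duration): Duration = when {
    pausedFor < 1.minutes -> Duration.ZERO // resumed quickly: no rewind
    pausedFor < 5.minutes -> 10.seconds    // short break: small rewind
    else -> 30.seconds                     // long break: bigger rewind
}
```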

I can do this because I'm part of a small percentage of people who can clone the repo for an Android app, modify it, rebuild it, and push it to my phone. But I don't want this power to be confined to a priesthood who know the secret language of coding. I want everyone to be able to do stuff like that.

Imagine a world in which, as I use a specific piece of software, I can request modifications to its behaviour from an LLM-augmented system. That system will pull the open source code, make the necessary modifications (following the project's contribution guidelines), build it, and reload it on my device. Then I can use it, test it, and fix any problems that come along. That modification can then be uploaded to my own repo and made publicly available for anyone else who wants it, or it could even be pushed as a PR to the original project, which could scan it for usefulness, alignment, UX, etc., modify it if needed, and then merge it into the main branch.
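
As a loose sketch of that loop (every name below is hypothetical scaffolding, not a real tool; the interface only names the steps, it doesn't implement them):

```kotlin
// Loose sketch of the personal-modding loop described above.
interface ModdingAgent {
    fun clone(repoUrl: String): String               // returns a working directory
    fun applyChange(workdir: String, intent: String) // LLM edit, following contribution guidelines
    fun build(workdir: String): String               // returns an installable artifact
    fun installOnDevice(artifact: String)
    fun publishToFork(workdir: String)
    fun openUpstreamPr(workdir: String)
}

fun personalModLoop(agent: ModdingAgent, repoUrl: String, intent: String) {
    val workdir = agent.clone(repoUrl)  // pull the open source code
    agent.applyChange(workdir, intent)  // make the requested modification
    val artifact = agent.build(workdir) // rebuild the app
    agent.installOnDevice(artifact)     // reload it on my device
    // ...I use it, report problems, and the modify/build/install steps repeat...
    agent.publishToFork(workdir)        // keep the patch in my own public repo
    agent.openUpstreamPr(workdir)       // or skip this, and just share the fork
}
```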

This wonderful world of personal and communal computing would be unimaginable in a closed-source world. No closed-source system will let an external AI come in and read or modify it at will. This is why open source is more important than ever.

We need to build a Software Commons so that we can give everyone the ability to adapt their digital lives to their liking. So that these intimate, private devices to which we entrust most of our attention, these things which have great effects on our cognitive and emotional functions, remain ours in a real sense. And the way that we do this is to create the tools and processes to allow anyone to make modifications to their software by simply expressing that intent.

And what does communal software development look like? Let's explore the space of social consensus mechanisms so we can find the ones that drive the creation of software that promotes culture, connection, compassion and empathy.

I want to see the promise of community made by the '90s web survive the FAANG+ Megacorp Baronies and flourish into a great digital metropolis. The web can still be free to be weird; we just have to make it happen together.

[–] mkhoury@lemmy.ca 9 points 9 months ago

Unless you have a balanced diet that anticipates your workouts and gives you the proper amount of sodium, potassium and magnesium. Sports drinks are just selling you those at a big premium. Stick with water. Eat a banana.

[–] mkhoury@lemmy.ca 25 points 9 months ago (1 children)

Isn't it the opposite, then? Since your windows will scroll vertically, it makes sense to tile them horizontally in order to maximize vertical space for each window, imo.

[–] mkhoury@lemmy.ca 5 points 9 months ago* (last edited 9 months ago)

I don't know, the person was trying to get it to output defamatory things. They got it to print what they wanted it to print.

The failure of the bot to perform the requested action is a separate issue, which wouldn't have made the news. It's not like they were trying to get help and it instead started insulting its own company, right?

[–] mkhoury@lemmy.ca 8 points 9 months ago (3 children)

This feels like the equivalent of "I was able to print 'HP Sucks' on an HP printer". Like, yes, you can do that, but... why is that important, or why would it even need to be blocked?

[–] mkhoury@lemmy.ca 1 points 9 months ago

Thanks! I had actually gotten confused by the Create Post interface and accidentally did not post the URL to the blog post, heh. I've fixed it now.

4
submitted 9 months ago* (last edited 9 months ago) by mkhoury@lemmy.ca to c/parenting@lemmy.world
 

I have two young kids and spend a lot of time thinking about how to approach the process of parenting. LLMs are a great resource for augmenting some aspects of parenting. In this blog post, I go into some examples of how I use them for the following:

  • Coming up with activities
  • AI Generation games
  • Thinking through past and future events
  • Approaching complex topics
  • Talking to parenting books

[–] mkhoury@lemmy.ca 2 points 10 months ago

These are very poor arguments for smoking cigarettes, but sure...

[–] mkhoury@lemmy.ca 3 points 10 months ago (2 children)

Another argument for giving your tween a smartphone is that they need to learn how to use it, to develop a healthy relationship with it, to understand the pros and cons, and to understand how to use it effectively. Abstinence will just make them envious and less likely to think through the consequences.

[–] mkhoury@lemmy.ca 10 points 10 months ago

There are lots of people who could use them. Schools, libraries, poor people.

[–] mkhoury@lemmy.ca 1 points 10 months ago (1 children)

What do you mean? I follow a lot of hashtags on Mastodon. Won't I be seeing a lot of Threads content if I'm on a server that's federated with them, without having explicitly opted into that?

 

Hi all, is there a way for me to block a particular domain no matter which community it shows up in? There are some news outlets that I just don't want to be polluted by.

Thanks!

 

I've been working on honing my architectural skills and came across this interesting article that put some things in perspective for me. Maybe it will help you too!

 

I've been playing with SudoLang a lot lately and I finally got around to trying to write a simple example of what an actual codebase written in SudoLang could look like.

  • Automatically builds codebase in JavaScript based on SudoLang
  • Imports interfaces into files using the @interfaces directive to ensure compatibility between generated files

It could be extended to:

  • Automatically generate unit tests
  • Use these unit tests for self-refinement to make sure that the generated code works as expected
  • Automatically look up optimizations
  • Automatically rewrite the SudoLang itself to be more deterministic
  • Progressive compilation (only recompile code changes)
  • Automatically produce documentation

There's so much power in writing my codebase like this. It's much faster to write; I don't need to know a lot of the low-level technical details of each language (though it helps); I don't need to know the best implementations of algorithms, just how to name and/or explain their outcome; and so much more!
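
The build step itself boils down to something like the loop below. This is a rough sketch under assumptions, not the project's actual code: `generateJs` is a hypothetical stand-in for the LLM call, the `.sudo` file layout is invented for illustration, and it skips the @interfaces handling entirely.

```kotlin
import java.io.File

// Rough sketch of the driver loop: feed each SudoLang module to a model and
// write the generated JavaScript alongside it. `generateJs` is a hypothetical
// stand-in for whatever LLM call does the actual code generation.
fun buildFromSudoLang(srcDir: File, outDir: File, generateJs: (String) -> String) {
    outDir.mkdirs()
    srcDir.listFiles { f -> f.extension == "sudo" }?.forEach { spec ->
        val js = generateJs(spec.readText())
        File(outDir, spec.nameWithoutExtension + ".js").writeText(js)
    }
}
```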

 

A lot of hype around LLMs these days. I wonder how long until we get to the trough of disillusionment.
