this post was submitted on 24 Feb 2024
114 points (97.5% liked)

Free Open-Source Artificial Intelligence


This is a good opportunity to provide a thoughtful, sane, and coherent response voicing your opinion on future AI regulation policy and countering the fearmongering.

How to submit a comment: https://www.regulations.gov/document/NTIA-2023-0009-0001

All electronic public comments on this action, identified by Regulations.gov docket number NTIA–2023–0009, may be submitted through the Federal e-Rulemaking Portal. The docket established for this request for comment can be found at www.Regulations.gov, NTIA–2023–0009. To make a submission, click the ‘‘Comment Now!’’ icon, complete the required fields, and enter or attach your comments. Additional instructions can be found in the “Instructions” section below, after “Supplementary Information.”

all 17 comments
[–] RobotToaster@mander.xyz 62 points 8 months ago (2 children)

Of course they only want to regulate open source models...

[–] AbouBenAdhem@lemmy.world 15 points 8 months ago* (last edited 8 months ago) (1 children)

From their list of concerns:

- The benefits and risks of making model weights widely available compared to the benefits and risks associated with closed models;
- Innovation, competition, safety, security, trustworthiness, equity, and national security concerns with making AI model weights more or less open; and
- The role of the U.S. government in guiding, supporting, or restricting the availability of AI model weights

It seems like they’re concerned about both open and closed models, and they’re interested in supporting as well as potentially regulating both.

[–] GluWu@lemm.ee 10 points 8 months ago (1 children)

Lol, no they aren't. That's just legalese. A government regulatory agency isn't going to open feedback about what it should be doing regulation-wise and prompt the entire thing with "Why should we close AI to only corporations willing to post enough money?" They have to at least include the "but maybe potentially let average people use AI" part so when they close everything down they can point back and say "look, we were open to talking about open solutions".

[–] AbouBenAdhem@lemmy.world 2 points 8 months ago

That tends to be the outcome of processes like this, and sometimes it is because the agency already decided on policy ahead of time and only asked for public input for the sake of appearances. But in other cases the request for input is in good faith, and industry interests end up dominating the discussion because other voices convince themselves they’d be ignored anyway.

In the case of new industries still in flux, it’s more likely that commercial interests haven’t yet infiltrated the relevant agencies to dictate policy from within—which is why they have to rely on hyperbolic scare tactics and hope no one contradicts them.

[–] rufus@discuss.tchncs.de 5 points 8 months ago* (last edited 8 months ago)

Haha, that'd be fun. The EU with the least regulation on open-weight models and the US exactly the other way around.

[–] PeepinGoodArgs@reddthat.com 15 points 8 months ago

Use your preferred AI to write the letter for you!

[–] Lemmeenym@lemm.ee 6 points 8 months ago (2 children)

Am American, am not particularly tech savvy. Can anyone recommend a reliable resource to read more about this? Are there any nonprofit or foss groups that are likely to have a published position on how AI should be regulated?

[–] will_a113@lemmy.ml 17 points 8 months ago

The EFF has published some suggestions in the past, I’d generally trust their perspective.

[–] GBU_28@lemm.ee 4 points 8 months ago* (last edited 8 months ago)

Well, the open source community should have near-zero unified opinion on how a broad technology should be centrally managed. Individual groups or persons should hold their own positions.

This is like saying binary search or encryption should be regulated. An LLM is just a particular technology. A fundamental thing. It's just math and systems.

Products and derivatives may be another thing (like creating deepfakes of living people, for example, or training a model on copyrighted material)

[–] keepthepace 5 points 8 months ago (2 children)

I don't understand, how are we supposed to file a comment?

[–] FireTower@lemmy.world 4 points 8 months ago (2 children)
[–] cll7793@lemmy.world 2 points 8 months ago

Here are the instructions from: https://www.ntia.gov/federal-register-notice/2024/dual-use-foundation-artificial-intelligence-models-widely-available#

All electronic public comments on this action, identified by Regulations.gov docket number NTIA–2023–0009, may be submitted through the Federal e-Rulemaking Portal. The docket established for this request for comment can be found at www.Regulations.gov, NTIA–2023–0009. To make a submission, click the ‘‘Comment Now!’’ icon, complete the required fields, and enter or attach your comments. Additional instructions can be found in the “Instructions” section below, after “Supplementary Information.”

[–] xia@lemmy.sdf.org 5 points 8 months ago (1 children)

They finally found a way to make freedom-software a boogieman... at least plausibly enough to fool the panicky public.

[–] muntedcrocodile@lemmy.world 3 points 8 months ago

For a country founded on FREEDOM FUCK YEAH they really don't like freedom software, do they.

[–] garlicandonions@lemmy.world 1 points 8 months ago

I'd recommend consulting ISO/IEC TR 5469 and other standards published by the AI standards committee, ISO/IEC JTC 1/SC 42.