So we just let them break the law without penalty because it's hard and costly to redo the work that already broke the law? Nah, they can put time and money towards safeguards to prevent themselves from breaking the law if they want to try to make money off of this stuff.
DigitalWebSlinger
"AI model unlearning" is the equivalent of saying "removing a specific feature from a compiled binary executable". So, yeah, basically not feasible.
But the solution is painfully easy: you remove the data from your training set (i.e., the source code) and re-train your model (recompile the executable).
Yes, it may cost you a lot of time and money to accomplish this, but such are the consequences of breaking the law. Maybe be extra careful about obeying laws going forward, eh?
My last employer also asked us to put up Glassdoor reviews, but that was when they generally had a good image on the site and had received a few (honestly undeserved at the time) negative reviews.
As things changed for the worse, my colleagues and I watched their rating slowly decline over the course of a year and a half. The higher-ups quickly stopped mentioning it. They... do not have a good image on Glassdoor anymore.
Are you able to submit a new review? I didn't leave my own review until after I was laid off, so I haven't bothered to "update" mine.
Do not speak the deep magic to me, witch; I do not understand it.
Broadly, this is a simple version of the Strategy Pattern, which is incredibly useful for making flexible software.
In Python, the example given is basically the classic bodge attempt to emulate switch-case statements.
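For the curious, here's a minimal sketch of what that bodge usually looks like, assuming the example in question is the common dict-of-callables dispatch (the operation names here are made up for illustration). The dict plays the role of the switch, the callables are the interchangeable strategies, and a key lookup replaces a chain of if/elif branches:

```python
def add(a: float, b: float) -> float:
    return a + b

def subtract(a: float, b: float) -> float:
    return a - b

def multiply(a: float, b: float) -> float:
    return a * b

# The "switch": a mapping from case label to the strategy to run.
OPERATIONS = {
    "add": add,
    "sub": subtract,
    "mul": multiply,
}

def calculate(op: str, a: float, b: float) -> float:
    try:
        strategy = OPERATIONS[op]  # pick the behavior at runtime
    except KeyError:
        # The "default" branch of the emulated switch.
        raise ValueError(f"unknown operation: {op!r}")
    return strategy(a, b)

print(calculate("add", 2, 3))  # 5
print(calculate("mul", 2, 3))  # 6
```

Since Python 3.10 there's also a real match/case statement, but the dict version still has its place when the set of strategies needs to be extended or swapped at runtime.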
Here's the deal with Michael Farris.
Dude was (is, technically) a constitutional lawyer, and knew his stuff forward and backward. He's filed multiple briefs with SCOTUS on various cases, and IIRC he even argued a case or two before them. I followed him on Facebook, where he would post lots of political opinions and news, and elicit conversation from lots of people. I really respected the guy and his knowledge, and most of his opinions, once upon a time.
Sometime around 2012, best guess (give or take a few years), he had what he described as an epiphany / religious vision while on the treadmill at the gym. It was as if God or an angel had given him a clear vision of what he was supposed to be doing, and shortly after, he shifted his focus and efforts.
I can't find that post now, and I don't remember at this point what he shifted his focus to, but after that, the tone of his posts changed, and I was suddenly disagreeing with many, many of his newly espoused opinions. After a while of that, he decided to move his public political discourse from his personal account to a page, and I lost track of him after that.
To this day, I'm convinced he had a stroke while at the gym, and it tweaked something in his brain. It was really disappointing to see his sudden shift in position and watch his steady decline afterward.
Be me, whose server is on Ubuntu 18.04 and needs upgrading to get Bluetooth into Home Assistant 😭
Too many negative words for ChatGPT, imo: "isn't", "not", etc. ChatGPT is usually positive and friendly to a fault.
Maybe you could provide a prompt that would output something substantially similar to what they wrote?
Because to effect positive change, you need to build support in the masses, and it's far easier to build support for a single, simple idea that moves us one step in the right direction, than a complex web of ideas that more accurately reflects reality.
I don't know about "be successful", depending on how you measure success. All of these examples have been subsidized by cheap money for years, undercutting competition - and taking year after year of losses while they do it - for the purpose of capturing the market and driving out competitors, so that they can subsequently enact monopolistic behaviors to start actually turning a profit once customers have no other choice.
The problem is that money suddenly got expensive, so now they're scrambling to find a way, any way, to turn a profit before full market capture has been achieved.
Can services like this be reasonably priced and user-friendly? Sure. Can they "succeed" / become sustainable while remaining so? Current examples indicate that's where the problem lies.
Total budget is quite high compared to those numbers.
I'm a little unclear on what they're saying gives it away as being AI. Occasional choppiness?
The tweet link posted elsewhere in this thread doesn't give much to go on, but the "choppiness", while noticeable if I was really looking for it, did not stand out to me. What did stand out to me was the clarity of the audio. Every "on the phone with Trump interview" I've heard (which is few, but enough) has had really horrible, standard phone line quality. This had a "computer microphone picking up a computer speaker" quality. Which I suppose lends credence to the theory that they, themselves, are doing the duping, otherwise the faked audio would have come through the phone lines? Probably would have been more believable then, like how grainy pictures of Bigfoot are more believable.