Big brain tech dude got yet another clueless take over at HackerNews etc? Here's the place to vent. Orange site, VC foolishness, all welcome.
This is not debate club. Unless it’s amusing debate.
For actually-good tech, you want our NotAwfulTech community
https://fixupx.com/CultureCrave/status/1840858182840877084
AHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHAHA
New piece from Ars Technica: Meta smart glasses can be used to dox anyone in seconds, study finds:
Two Harvard students recently revealed that it's possible to combine Meta smart glasses with face image search technology to "reveal anyone's personal details," including their name, address, and phone number, "just from looking at them."
In a Google document, AnhPhu Nguyen and Caine Ardayfio explained how they linked a pair of Meta Ray Bans 2 to an invasive face search engine called PimEyes to help identify strangers by cross-searching their information on various people-search databases. They then used a large language model (LLM) to rapidly combine all that data, making it possible to dox someone in a glance or surface information to scam someone in seconds—or other nefarious uses, such as "some dude could just find some girl’s home address on the train and just follow them home,” Nguyen told 404 Media.
This is all possible thanks to recent progress with LLMs, the students said.
Putting down my off-the-cuff thoughts on this:
Right off the bat, I'm pretty confident AR/smart glasses will end up dead on arrival - I'm no expert in marketing/PR, but I'm pretty sure "our product helped someone dox innocent people" is the kind of Dasani-level disaster which pretty much guarantees your product will crash and burn.
I suspect we're gonna see video of someone getting punched for wearing smart glasses - this story's given the public a first impression of smart glasses that boils down to "this person's a creep", and it's a lot easier to physically assault someone wearing smart glasses than some random LLM.
This is a gut feeling I've had since Baldur talked about AI's public image nearly three months ago, but this story gives me further reason to expect the public's gonna be outright hostile to the tech industry once the AI bubble pops.
In the endless genAI shit that the bird site pushes on me, this caught my eye because it seems like a dream tool for a non-tech suit to generate blame examples for engineers https://xcancel.com/rohanpaul_ai/status/1840941643223945561
i .... have no idea whatsoever what the use case is here ... you make the chatbot generate the code instead of cloning the repo? or it's like generating an API that doesn't work or something?
CEO of cloudflare says he’ll donate the bandwidth for Wordpress dot org to shut mullenweg up https://xcancel.com/eastdakota/status/1841154152006627663
Cloudflare is such a weird company in various ways. They loudly insist they can't judge groups when people ask them not to support the neo-nazis, harassers and worse (they have moved on this under pressure, but it takes a lot of pressure), but then they do this.
basilisk save us from breathless idiocy by rubes
Altman’s investors will have had to get comfortable with at least four levels of intricacy.
ah yes the four-fold path of investing, the true religion
More head-scratching is OpenAI’s governance. Altman was ousted last year, then swiftly returned.
“I cannot look at this and analyse the power structure, and thus I am very confused as to how this happened”
But investors making that call today must surely be powered more by instinct than intelligence.
“must surely”? call the builders, we’ve found a new ultra-strong load bearing phrase
I might be wrong, but this sounds like a quick way to make the web worse by putting a huge computational load on your machine, all for the sake of privacy inside customer service chatbots that nobody wants. Please correct me if I'm wrong.
WebLLM is a high-performance in-browser LLM inference engine that brings language model inference directly onto web browsers with hardware acceleration. Everything runs inside the browser with no server support and is accelerated with WebGPU.
WebLLM is fully compatible with OpenAI API. That is, you can use the same OpenAI API on any open source models locally, with functionalities including streaming, JSON-mode, function-calling (WIP), etc.
We can bring a lot of fun opportunities to build AI assistants for everyone and enable privacy while enjoying GPU acceleration.
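For context, here's roughly what using it looks like - a minimal sketch based on the @mlc-ai/web-llm package's documented API, where the exact model id and option names are assumptions and may differ from whatever build you end up with:

```typescript
// ES module running in a WebGPU-capable browser.
import { CreateMLCEngine } from "@mlc-ai/web-llm";

// Everything happens client-side: the model weights are downloaded to the
// browser cache and inference runs on the local GPU via WebGPU.
const engine = await CreateMLCEngine(
  "Llama-3.1-8B-Instruct-q4f32_1-MLC", // example model id, assumed from their prebuilt list
  {
    // reports download/compile progress while the multi-gigabyte model loads
    initProgressCallback: (report) => console.log(report.text),
  },
);

// OpenAI-style chat completion, served entirely from the local machine.
const reply = await engine.chat.completions.create({
  messages: [
    { role: "system", content: "You are a helpful assistant." },
    { role: "user", content: "Say hello from inside my browser." },
  ],
});

console.log(reply.choices[0].message.content);
```

So the "no server support" claim holds, but the cost is that every visitor pays for it in bandwidth, disk cache, and GPU time.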
I read this twice as LLM interference engine and was hoping for something like SETI or Folding@Home except my computer could interfere with ChatGPT somehow.