this post was submitted on 26 Feb 2024
1487 points (94.8% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

[–] GBU_28@lemm.ee 8 points 8 months ago* (last edited 8 months ago) (1 children)

Huh? Image AI to semantic formatting, then consumption, is trivial now

[–] Barbarian@sh.itjust.works -3 points 8 months ago* (last edited 8 months ago) (2 children)

Could you give me an example that uses live feeds of video data, or feeds the output to another system? As far as I'm aware (I could be very wrong! Not an expert), the only things that come close are OCR and character-recognition systems. Describing in machine-readable, actionable terms what's happening in an image isn't a thing, as far as I know.

[–] GBU_28@lemm.ee 8 points 8 months ago* (last edited 8 months ago) (1 children)

No, not live video; that didn't seem to be the topic

But if you had the horsepower, I don't think it's impossible, based on what I've worked with. From a bottleneck standpoint, it's just about snipping and distributing the images

[–] Barbarian@sh.itjust.works -2 points 8 months ago* (last edited 8 months ago)

No live videos

Well, that'd be a prerequisite to a transformer model making decisions for a ship scuttling robot, hence why I brought it up.

[–] FooBarrington@lemmy.world 3 points 8 months ago

Describing in machine-readable actionable terms what's happening in an image isn't a thing, as far as I know.

It is. That's actually the basis of multimodal transformers - they have a shared embedding space for multiple modes of data (e.g. text and images). If you encode data and take those embeddings, you suddenly have a vector describing the contents of your input.
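To make the shared-embedding idea concrete, here's a toy sketch in plain Python. The vectors are made up for illustration; in a real multimodal model (e.g. CLIP) they would come from trained image and text encoders that map matching pairs close together in the same space. The point is only the mechanism: once both modes live in one vector space, "describing an image in machine-readable terms" reduces to comparing vectors.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings in a shared 4-dimensional space.
# Real models use hundreds of dimensions; these values are invented.
image_embedding = [0.9, 0.1, 0.0, 0.2]  # pretend: an encoded photo of a cat

text_embeddings = {
    "a photo of a cat":   [0.85, 0.15, 0.05, 0.1],
    "a diagram of a ship": [0.1,  0.9,  0.3,  0.0],
}

# The caption whose embedding lies closest to the image embedding
# is the machine-readable description of the image.
best = max(text_embeddings,
           key=lambda t: cosine_similarity(image_embedding, text_embeddings[t]))
print(best)  # → a photo of a cat
```

A downstream system never needs the caption at all: it can consume the embedding vector directly, which is what makes the output actionable rather than just human-readable.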