Martineski

joined 1 year ago
[–] Martineski@lemmy.fmhy.ml 1 points 1 year ago (6 children)

Who wouldn't want to?

[–] Martineski@lemmy.fmhy.ml 3 points 1 year ago (2 children)

live theater

Maybe theater will become more popular because AI will automate jobs and people will start pursuing art, but I still think that movies will be more popular than theater because they're just easier to access and offer more immersive representations of the worlds/events.

[–] Martineski@lemmy.fmhy.ml 1 points 1 year ago* (last edited 1 year ago)

Wasn't my point. It's more about WHAT someone does with your "identity" in public media. In the long term I can see it being abolished too, but in the short term there will be a lot of drama about it for sure.

Edit:

Wasn't my point.

Yeah, it seems like it was my point in my original comment, my bad.

[–] Martineski@lemmy.fmhy.ml 4 points 1 year ago

Also, CGI. It's everywhere and no one minds as long as it's well done.

[–] Martineski@lemmy.fmhy.ml 6 points 1 year ago (4 children)

I imagine that movies with "real" actors will become a popular niche for enjoying those people's acting rather than the plot or events themselves.

[–] Martineski@lemmy.fmhy.ml 1 points 1 year ago* (last edited 1 year ago) (5 children)

AI denial spotted in the wild

Edit: Imagine having characters that are not played by a real person. Your movie won't be ruined just because the actor is controversial. Just an example.

[–] Martineski@lemmy.fmhy.ml 2 points 1 year ago

Alternate plan is that the amount of the govt bailout, in shares, is now owned by the govt.

Oh! I couldn't come up with a proper way to do it, but you solved it for me.

[–] Martineski@lemmy.fmhy.ml 12 points 1 year ago (16 children)

Oh boy, identity and copyright law will be chaotic as AI gets more and more advanced. I'm all in for abolishing copyright, but I have no idea what to think about your identity being duplicated/recreated. When is something your identity, and when does it stop being it? It will be obvious with 1:1 copies of popular people/actors, but what about situations where copies are tweaked to resemble someone less, or where multiple people are mixed to create one person? What about people who are not known by everyone? What if the virtual person resembles someone by accident?

[–] Martineski@lemmy.fmhy.ml 2 points 1 year ago

Please add the date of the article to the title of the post, as per our rule 6. Thank you.

[–] Martineski@lemmy.fmhy.ml 1 points 1 year ago* (last edited 1 year ago)

Please add the date of the article to the title, as per our rule 6. Thank you.

[–] Martineski@lemmy.fmhy.ml 1 points 1 year ago* (last edited 1 year ago)

Sorry for the late reaction, I was ill these past few days and didn't have the energy to moderate this sub. Please include the date of the paper in the title, thank you.


Article: https://gizmodo.com/google-says-itll-scrape-everything-you-post-online-for-1850601486


Summary of the article:

Google has updated its privacy policy to explicitly state it can use virtually anything you post online to enhance its AI tools, a change that raises intriguing privacy questions and has prompted reactions from platforms such as Twitter and Reddit.

Google's New Privacy Policy: Google has altered its privacy policy to state that it can scrape almost any content posted online for the advancement of its AI tools.

· It uses this data to improve existing services and develop new products, features, and technologies.

· The data harvested aids in training Google's AI models and building products like Google Translate, Bard, and Cloud AI.

Impact on Internet Users: This policy modification challenges conventional concepts of online privacy.

· It suggests that any public post on the internet could be used by Google.

· This practice necessitates a shift in how we perceive online activity, focusing on how the information could be employed rather than who can see it.

Legal and Copyright Concerns: The usage of data from the internet to fuel AI systems raises legal and copyright issues.

· It remains uncertain whether such a practice is legal, with courts likely to address these new copyright issues in the coming years.

· This practice affects consumers in surprising ways, raising questions about data ownership.

Reactions from Other Platforms: Twitter and Reddit have responded to this AI-related issue by restricting access to their APIs.

· This action aimed to protect their intellectual property from data scraping but resulted in breaking third-party tools used to access these platforms.

· Controversies have ensued, such as Twitter contemplating charging public entities for tweets, and Reddit seeing a mass protest due to API changes disrupting the work of moderators.

Elon Musk's Stance on Web Scraping: Elon Musk has recently expressed concerns about web scraping.

· He blamed several Twitter mishaps on the company's need to prevent others from data extraction.

· Despite these claims, most IT experts believe these problems are likely due to management issues or technical difficulties.

 

For the first time in the world, researchers at Tel Aviv University have encoded a toxin produced by bacteria into mRNA (messenger RNA) molecules and delivered these particles directly to cancer cells, causing the cells to produce the toxin, which eventually killed them with a success rate of 50%.

 

Scaling sequence length has become a critical demand in the era of large language models. However, existing methods struggle with either computational complexity or model expressivity, which restricts the maximum sequence length. In this work, we introduce LongNet, a Transformer variant that can scale sequence length to more than 1 billion tokens without sacrificing performance on shorter sequences. Specifically, we propose dilated attention, which expands the attentive field exponentially as the distance grows. LongNet has significant advantages: 1) it has linear computational complexity and a logarithmic dependency between tokens; 2) it can serve as a distributed trainer for extremely long sequences; 3) its dilated attention is a drop-in replacement for standard attention and can be seamlessly integrated with existing Transformer-based optimizations. Experimental results demonstrate that LongNet yields strong performance on both long-sequence modeling and general language tasks. Our work opens up new possibilities for modeling very long sequences, e.g., treating a whole corpus or even the entire Internet as a sequence.

Link to the repo: https://github.com/microsoft/unilm
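
For intuition, here is a minimal single-head sketch of the dilated-attention idea from the abstract: split the sequence into segments and attend only over every r-th token inside each segment. The function and variable names are illustrative and not taken from the microsoft/unilm repo; the actual model mixes several segment-length/dilation pairs and offsets them across heads so every position gets covered.

```python
import torch
import torch.nn.functional as F

def dilated_attention(q, k, v, segment_len=8, dilation=2):
    """Single-head sketch: q, k, v have shape (seq_len, d_model)."""
    seq_len, d = q.shape
    out = torch.zeros_like(q)
    for start in range(0, seq_len, segment_len):
        end = min(start + segment_len, seq_len)
        idx = torch.arange(start, end, dilation)                # keep every `dilation`-th token
        attn = F.softmax(q[idx] @ k[idx].T / d ** 0.5, dim=-1)  # dense attention on the sparsified segment
        out[idx] = attn @ v[idx]                                # scatter results back to their positions
    return out  # positions skipped by this dilation stay zero; LongNet covers them with other (w, r) pairs

# toy usage: 16 tokens, model dimension 8
x = torch.randn(16, 8)
print(dilated_attention(x, x, x).shape)  # torch.Size([16, 8])
```

Because each segment only attends over segment_len/dilation tokens, the total cost grows linearly with sequence length for a fixed (segment length, dilation) pair, which is where the linear-complexity claim in the abstract comes from.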

 

Original title of the article: People almost always get this simple math problem wrong: Can you solve it?

The question goes: “A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”
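
A quick sanity check of the arithmetic, as a minimal Python snippet (the variable names are just for illustration): letting the ball cost x, the conditions give x + (x + 1.00) = 1.10, so x = 0.05; the intuitive answer of $0.10 would make the bat $1.10 and the total $1.20.

```python
# Work in cents to avoid floating-point noise.
ball = 5                        # the claim: the ball costs 5 cents
bat = ball + 100                # the bat costs $1.00 more than the ball
assert ball + bat == 110        # together they cost $1.10
print(ball, bat)                # 5 105 -> ball $0.05, bat $1.05

# The intuitive guess of 10 cents fails the same check:
assert 10 + (10 + 100) != 110   # $1.20 total, not $1.10
```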

 

The development of neural networks to create artificial intelligence in computers was originally inspired by how biological systems work. These "neuromorphic" networks, however, run on hardware that looks nothing like a biological brain, which limits performance.

Now, researchers from Osaka University and Hokkaido University plan to change this by creating neuromorphic "wetware." The work is published in the journal Advanced Functional Materials.

While neural-network models have achieved remarkable success in applications such as image generation and cancer diagnosis, they still lag far behind the general processing abilities of the human brain. In part, this is because they are implemented in software using traditional computer hardware that is not optimized for the millions of parameters and connections that these models typically require.

 

Imagine this: you're at a vibrant cocktail party 🍹, filled with the buzz of conversation and the clink of glasses 🍻. You're a laid-back observer 👀, tucked comfortably in a corner. Yet, you can still easily figure out the social relations between different people, understand what's going on, and even provide social suggestions by reading people's verbal and non-verbal cues.

If a large language model (LLM) could replicate this level of social aptitude, then we could say that it possesses certain social abilities. Curious how different LLMs perform when it comes to understanding and navigating social interactions? Check out these demos processed by AI models!

Site: https://chats-lab.github.io/KokoMind/

Martineski: If you know the exact date when this was published, let me know. I hate it when they do it like this:

 

New theoretical research proves that machine learning on quantum computers requires far simpler data than previously believed. The finding paves a path to maximizing the usability of today's noisy, intermediate-scale quantum computers for simulating quantum systems and other tasks better than classical digital computers, while also offering promise for optimizing quantum sensors.

 

A team at the National Institute of Standards and Technology in Boulder, Colorado, has reported the successful implementation of a 400,000 pixel superconducting nanowire single-photon detector (SNSPD) that they say will pave the way for the development of extremely light-sensitive large-format superconducting cameras. Their paper, "A superconducting-nanowire single-photon camera with 400,000 pixels," was published in the preprint repository arXiv on June 15.

Researchers from the University of Colorado's Department of Physics and the Jet Propulsion Laboratory at the California Institute of Technology also participated in the project.

The camera is now the largest of its type. Its pixel array is 400 times larger than that of the previous largest photon camera. It can operate across a range of light frequencies, from the visible to the ultraviolet and infrared, and capture images at extremely high speeds, on the order of picoseconds.
