
And let's not forget about the other Deplorables


Oh pretty boy is going to have a lot of fun over the next 4 years.
 
Everyone has their price.

A few years ago, a group of developers and engineers at OpenAI had an ethical disagreement with Elon Musk and other OpenAI backers. Their concern was that OpenAI wasn't putting safeguards in place to protect humans from AI's actions. Ultimately, they left OpenAI and started their own company, Anthropic, which was very vocal about its intent to design AI within an "ethical" framework. Among its "guardrails" were two ethical guidelines that it said it would never violate:
  1. AI should never be used for weapons that can be used against the human race, and
  2. AI should never be used to surveil the public.

A $200 million contract with the US Department of Defense and a sociopathic, alcoholic Defense Secretary later, Anthropic has decided it needs to "evolve" on its principles.

In other words, the US Department of Defense wants to use AI to spy on Americans and possibly kill people.

The new safety policy

Anthropic’s new safety policy includes a “Frontier Safety Roadmap” that outlines the company’s self-imposed guidelines and safeguards. But the company acknowledged the new framework is more flexible than its past policy.

“Rather than being hard commitments, these are public goals that we will openly grade our progress towards,” the company said in its blog post.

The change comes a day after Defense Secretary Pete Hegseth gave Anthropic CEO Dario Amodei a Friday deadline to roll back the company’s AI safeguards, or risk losing a $200 million Pentagon contract and being put on what is effectively a government blacklist.

Anthropic has concerns over two issues that it isn’t willing to drop, according to a source familiar with the company’s meeting with Hegseth: AI-controlled weapons and mass domestic surveillance of American citizens. Anthropic believes AI is not reliable enough to operate weapons, and there are no laws or regulations yet that cover how AI could be used in mass surveillance, a source said.

On social media Tuesday, AI researchers applauded Anthropic’s stance and expressed concerns about the idea of AI being used for government surveillance.

The company has long positioned itself as the AI business that prioritizes safety. Anthropic has published research showing how its own AI models could be capable of blackmail under certain conditions. The company recently donated $20 million to Public First Action, a political group pushing for AI safeguards and education.

But the company has faced increasing pressure and competition from both the government and its rivals. Hegseth, for example, plans to invoke the Defense Production Act on Anthropic and designate the company a supply chain risk if it does not comply with the Pentagon’s demands, CNN reported on Tuesday. OpenAI and Anthropic have also been locked in a race to launch new enterprise AI tools in a bid to win the workplace.
 
Another one angling for a pardon when he gets charged someday.
 