the beginning of the end of free speech?

polymoog

Superstar
Joined
Jun 17, 2017
Messages
5,106
It's been coming for a while, with political correctness slowly permeating our psyche over the years. It bothers me that some of my friends alter their speech and lower their voices so as not to be out of line. I have other friends and family who are fully aware of what's going on, and go out of their way to speak their minds, because they realize now is the time to speak up.
i am getting louder and louder, giving less of a shit who i offend.
DavidSon

Star
Joined
Jan 10, 2019
Messages
1,170
https://www.wsws.org/en/articles/2019/09/06/cens-s06.html

DARPA Prepares For Mass Internet Censorship

"...On August 23, the Defense Advanced Research Projects Agency (DARPA) issued a solicitation for a so-called Semantic Forensics (SemaFor) program on the federal government business opportunities website. According to the bid specifications, SemaFor “will develop technologies to automatically detect, attribute, and characterize falsified multi-modal media assets (text, audio, image, video) to defend against large-scale, automated disinformation attacks.”

In other words, the US Defense Department is seeking a technology partner that will build a platform to enable the Pentagon to locate any content it identifies as an adversarial “disinformation attack” and shut it down. This technology will cover anything on the internet including web pages, videos, photo and social media platforms."
polymoog

Superstar
Joined
Jun 17, 2017
Messages
5,106
In other words, the US Defense Department is seeking a technology partner that will build a platform to enable the Pentagon to locate any content it identifies as an adversarial “disinformation attack” and shut it down. This technology will cover anything on the internet including web pages, videos, photo and social media platforms."
i thought they already had an internet kill switch. i suppose this is more of a precise targeting system.
how would this work against blockchain?
DavidSon

Star
Joined
Jan 10, 2019
Messages
1,170
i thought they already had an internet kill switch. i suppose this is more of a precise targeting system.
how would this work against blockchain?
Yeah, we see in Kashmir the Indian government pulled the plug on access to their servers(?), so they ultimately have the power either way.

The article says that the defense dept. is pursuing advances to state-of-the-art algorithms (which will be top secret). They say the technology is to combat deepfakes.

"According to the bid specifications, SemaFor will develop technologies to automatically detect, attribute, and characterize falsified multi-modal media assets (text, audio, image, video) to defend against large-scale, automated disinformation attacks."

So nothing really new, but I think their quest to analyze semantics (in real time) is interesting.

"The use of intelligent semantic techniques—the ability for a program to analyze online content within context and determine its meaning and intent—requires the latest developments in artificial intelligence and so-called “neural networks” that have the ability to “learn” and improve performance over time. According to the US Defense Department, semantic analysis will be able to accurately identify inconsistencies within artificially generated content and thereby establish them as “false.”"

The author summarizes his article with this reminder:

"...However, the current effort by the Pentagon to create an automated system for identifying and shutting down so-called “fake news” is part of the ongoing broader effort by the Democrats and Republicans, in cooperation with the intelligence state, to both control social media and online content and use it to monitor the moods, ideas and politics of the public."

"Whatever the publicly stated claims of DARPA regarding the desire to stop the use of “false media assets” to target personal attacks, generate “believable” events and propagate ransomware, these tools are undoubtedly part of the imperialist cyberwarfare arsenal currently being developed and deployed by the US military and CIA."
polymoog

Superstar
Joined
Jun 17, 2017
Messages
5,106
Father locked up and facing charges after comments critical of judge

https://www.bitchute.com/video/lwCi5w81nasY/

i'd like to see the father go after the police department for wrongful arrest. i don't think he has a case against the judge. that woman is a danger to our society if she's deciding cases.
saki

Veteran
Joined
Dec 11, 2017
Messages
666
https://www.wsj.com/articles/readers-beware-ai-has-learned-to-create-fake-news-stories-11571018640

Readers Beware: AI Has Learned to Create Fake News Stories
Researchers warn about the risks of computer-generated articles—and release tools that ferret out fakes

‘Large-scale synthesized disinformation is not only possible but is cheap and credible,’ says Cornell professor Sarah Kreps.

By Asa Fitch

Updated Oct. 13, 2019 10:59 pm ET

Real-sounding but made-up news articles have become much easier to produce thanks to a handful of new tools powered by artificial intelligence—raising concerns about potential misuse of the technology.

What deepfakes did for video—producing clips of famous people appearing to say and do things they never said or did—these tools could do for news, tricking people into thinking the earth is flat, global warming is a hoax or a political candidate committed a crime when he or she didn’t. While false articles are nothing new, these AI tools allow them to be generated in seconds by computer.

As far as experts know, the technology has been implemented only by researchers, and it hasn’t been used maliciously. What’s more, it has limitations that keep the stories from seeming too believable.

But many of the researchers who developed the technology, and people who have studied it, fear that as such tools get more advanced, they could spread misinformation or advance a political agenda. That’s why some are sounding the alarm about the risks of computer-generated articles—and releasing tools that let people ferret out potentially fake stories.

‘Quite Convincing’
“The danger is when there is already a lot of similar propaganda written by humans from which these neural language models can learn to generate similar articles,” says Yejin Choi, an associate professor at the University of Washington, a researcher at the Allen Institute for Artificial Intelligence and part of a team that developed a fake-news tool. “The quality of such neural fake news can look quite convincing to humans.”

The first entry in a powerful new generation of synthetic-text tools was unveiled in February, when OpenAI, a San Francisco-based research body backed by prominent tech names like LinkedIn co-founder Reid Hoffman, launched the GPT-2. The software produces genuine-sounding news articles—as well as other types of passages, from fiction to conversations—by drawing on its analysis of 40 gigabytes of text across eight million webpages. Researchers developed the OpenAI software because they knew powerful speech-generation would eventually appear in the wild and wanted to handle its release responsibly.
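The basic idea the article describes, a model learning which words tend to follow which in its training text and then sampling continuations from those learned statistics, can be shown in miniature. This is only a toy sketch: a word-bigram frequency model standing in for GPT-2's vastly larger neural network, with a made-up corpus and hypothetical function names.

```python
import random
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count which word follows which -- a crude stand-in for training a language model."""
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def generate(model, start, length, seed=0):
    """Sample a continuation word by word from the learned frequencies."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        nxt = model.get(out[-1])
        if not nxt:  # dead end: no word ever followed this one in training
            break
        words, counts = zip(*nxt.items())
        out.append(rng.choices(words, weights=counts)[0])
    return " ".join(out)

corpus = ("the agency said the program will detect fake media and "
          "the program will defend against fake media attacks").split()
model = train_bigram(corpus)
print(generate(model, "the", 8))
```

The real systems differ in scale (billions of learned parameters instead of a frequency table) but the generate-by-sampling loop is the same shape.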

The GPT-2 system worked so well that in an August survey of 500 people, a majority found its synthetic articles credible. In one group of participants, 72% found a GPT-2 article credible, compared with 83% who found a genuine article credible.

“Large-scale synthesized disinformation is not only possible but is cheap and credible,” says Sarah Kreps, a professor at Cornell University who co-wrote the research. Its spread across the internet, she says, could open the way for malicious influence campaigns. Even if people don’t believe the fake articles are accurate, she says, the knowledge that such stories are out there could have a damaging effect, eroding people’s trust in the media and government.

Given the potential risks associated with giving the world full access to the GPT-2, OpenAI decided not to release it immediately, instead putting out a more limited version for researchers to study and potentially develop tools that could detect artificially generated texts in the wild.

In the months that followed, other researchers replicated OpenAI’s work. In June, Dr. Choi and her colleagues at the University of Washington and the Allen Institute for Artificial Intelligence posted a tool on the institute’s website called Grover, positioning it as a piece of software that could both generate convincing false news stories and use the same technology to detect others’ artificial news by ferreting out telltale textual patterns.

Then, in August, Israel’s AI21 Labs put a language-generation tool called HAIM on its website. It asserted on its site that risks of releasing text-generation tools into the wild were overblown, and that there were beneficial uses of such automatically generated texts, including simplifying and speeding the writing process.

The human touch
Yoav Shoham, co-founder of AI21, said in an interview that the effectiveness of these text-generation tools as propaganda machines was limited because they can’t incorporate political context well enough to score points with target audiences. Even if an AI can produce a real-looking article, Mr. Shoham said, a machine can’t grasp, say, the dynamics of a feud between two politicians and craft a false story that discredits one of them in a nuanced way.

“They have the appearance of making sense, but they don’t,” Mr. Shoham said.

Plus, very often articles go off on strange tangents for reasons the researchers don’t completely understand—the systems are often black boxes, generating text based on their own analyses of existing documents.

Ultimately, Dr. Choi says, producing effective propaganda requires machines to have a broader understanding of how the world works and a fine-tuned sense of how to target such material, something only a human overseeing the process could bring to the table.

“Fine-grained control of the content is not within the currently available technology,” she says.

While so far it doesn’t appear that any of the technology has been used as propaganda, the threat is real enough that the U.S. Defense Department’s Defense Advanced Research Projects Agency, or Darpa, in late August unveiled a program called Semantic Forensics. The project aims to defend against a wide range of automated disinformation attacks, including text-based ones.

Private groups are also developing systems to detect fake stories. Along with the freely available online tool Grover, researchers at the Massachusetts Institute of Technology and Harvard introduced a text inspector (http://gltr.io/dist/index.html) in March. The software uses similar techniques as Grover, predicting whether a passage is AI-made by taking a chunk of text and analyzing how likely a language-generation model would be to pick the word that actually appears next.
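The detection approach described above can be sketched in a few lines: for each position in a passage, ask a language model how highly it ranked the word that actually appears next. Machine-generated text tends to cluster at low (likely) ranks, while human writing lands on higher-ranked, more surprising words. A toy version follows, with a simple bigram frequency model standing in for the real neural models such tools use; the corpus and function names are made up for illustration.

```python
from collections import Counter, defaultdict

def train_bigram(tokens):
    """Count which word follows which -- a crude stand-in for a neural language model."""
    follows = defaultdict(Counter)
    for a, b in zip(tokens, tokens[1:]):
        follows[a][b] += 1
    return follows

def token_ranks(model, tokens):
    """For each position, the rank the model gave the word that actually
    appears next (0 = the model's top guess). Low ranks throughout suggest
    machine text; high ranks suggest a human picked the words."""
    ranks = []
    for a, b in zip(tokens, tokens[1:]):
        ordered = [w for w, _ in model[a].most_common()]
        ranks.append(ordered.index(b) if b in ordered else len(ordered))
    return ranks

corpus = "the cat sat on the mat and the cat ran".split()
model = train_bigram(corpus)
print(token_ranks(model, "the cat sat on the mat".split()))  # → [0, 0, 0, 0, 1]
```

A detector then aggregates these ranks over a passage and flags text whose rank distribution looks too much like the model's own preferences.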

But if language-generation models change how they select words and phrases in the future, detection won’t necessarily improve at the same rate, says Jack Clark, OpenAI’s policy director. Ever more complex language-generation systems are proliferating rapidly, driven by researchers and developers who are training new models on larger pools of data. OpenAI already has a model trained on more than 1.5 billion parameters that it hasn’t yet released to the public.

“Increasingly large language models could feasibly either naturally develop or be trained to better approximate human patterns of writing as they get bigger,” Mr. Clark says.
polymoog

Superstar
Joined
Jun 17, 2017
Messages
5,106
Stepmom Of Atlanta Cop Charged With Black Man's Murder Fired From Job

"her views [supporting her son's actions] created a hostile work environment".

some of us create hostile environments wherever we go.