UAA project uses artificial intelligence to battle Covid ‘misinformation,’ spreads bad information on polio


University of Alaska Anchorage professors are concerned that bad information about Covid-19 spreads quickly on social media. Professor Shawn Butler is part of a project to combat the problem with the use of artificial intelligence, such as Chat.openai.com, an increasingly popular new web application that is being used by university teams to respond to what is considered by artificial intelligence to be misinformation about Covid.

Butler, a computer scientist, is working in cooperation with other professors and with the parallel efforts of the university’s Department of Journalism and Public Communication to combat false Covid claims on public-facing Facebook pages, by using what the project team considers scientifically accurate information from credible sources. The journalism department has an Alaska Public Health Information Response Team project, started two years ago to fight misinformation about Covid.

But misinformation is sometimes in the eye of the beholder.

“The damage done with misinformation, especially on social media, is something we’ve never seen before,” Butler was quoted as saying on the University’s website. “We almost eradicated polio until people started saying they’re not going to take the vaccine because of something they read online.”

In fact, the Centers for Disease Control and Prevention website does not support Butler’s statement. It’s misinformation.

Read what the CDC says about polio in the United States.

The CDC says no cases of wild polio have originated in the United States since the 1990s. There is a rare exception: A case of polio detected in New York State in 2022 left a man paralyzed. But that case of polio came from the vaccine itself.

“During the fall of 2022, an unvaccinated man in Rockland County, New York, was paralyzed as a result of a polio infection. The NY man was infected with a type of poliovirus that came from the oral polio vaccine, called vaccine-derived paralytic polio,” according to an article published by the Children’s Hospital of Philadelphia. “The case was concerning because we know that only about 1 of every 2,000 people infected with this vaccine-derived strain will be paralyzed, meaning that many others are likely also being exposed to this virus. High vaccination rates in most U.S. communities mean that often we remain unaware of the presence of the virus, but recent drops in routine immunizations following the pandemic have left some communities more vulnerable.”

Polio occurs occasionally in the world, but mainly where people are not using social media. In 2021, only three places in the world had wild (non-vaccine-originated) polio: Afghanistan, Pakistan and Malawi. Read more from PolioEradication.org.

Identifying and responding to misinformation on the internet can be a daunting and time-consuming process, the University of Alaska Anchorage explained. And yet, this is a government agency, headed by a government-funded professor, perfecting criticism of the public and using artificial intelligence to tamp down alternative viewpoints.

“So Butler and her team developed a way to use machine learning to assist in automatically identifying Covid-19 misinformation through natural language processing analysis, where a model is fed a data set of text labeled as ‘misinformation’ or ‘not misinformation’ with point values assigned to certain keywords or phrases to train the model to identify misinformation that is not labeled,” the university said.

“Currently, Butler’s model boasts an 80% accuracy rate when identifying misinformation and a 50% accuracy rate when identifying what isn’t misinformation — a number she is confident will improve after providing the model with a much larger labeled data set,” UAA said.
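The keyword-scoring approach the university describes can be sketched in a few lines of Python. The keywords, point values, and threshold below are invented for illustration only; UAA’s actual model, training data, and weights have not been published:

```python
# Illustrative sketch of keyword/phrase scoring for flagging posts.
# All weights and phrases here are hypothetical, not UAA's real model.
KEYWORD_WEIGHTS = {
    "miracle cure": 3,
    "plandemic": 3,
    "microchip": 2,
    "peer-reviewed": -2,
    "clinical trial": -1,
}

def misinformation_score(post: str) -> int:
    """Sum the point values of known keywords/phrases found in the post."""
    text = post.lower()
    return sum(weight for phrase, weight in KEYWORD_WEIGHTS.items() if phrase in text)

def flag_post(post: str, threshold: int = 2) -> bool:
    """Flag a post for human review when its score meets the threshold."""
    return misinformation_score(post) >= threshold
```

In a trained system of the kind described, the weights would be learned from the labeled “misinformation” / “not misinformation” data set rather than hand-assigned, and flagged posts would go to the response team for human review rather than being acted on automatically.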

The team is satisfied that they are changing people’s minds with their debunking and what Butler calls “pre-bunking” — getting good information out in front, before bad information can go viral.

“In controlled situations, research shows that pre-bunking is more effective than debunking,” Butler said. “If somebody knows what the scam is, it’s easier for them to see it rather than be convinced once they have already fallen for it.”

Once the computer algorithm identifies potential misinformation, it notifies the Alaska Public Health Information Response Team, which then reviews the post and provides scientifically accurate information to correct the misinformation.

The UAA project could be seen by some as an early form of artificial intelligence being used for government psychological operations, also known as psy-ops. This kind of operation could be expanded for other types of government-approved propaganda or material on a variety of topics.

The project may also be a government-funded program perilously close to suppressing the First Amendment rights of citizens.

More information on the Alaska Public Health Information Response Team is available in this UAA story from 2021.

Must Read Alaska asked Chat.openai.com to rewrite this story

Must Read Alaska asked artificial intelligence to “improve this story.” Here is the version of the story from Chat.openai.com, the artificial intelligence tool, which makes it clear that the algorithm disagrees with a premise in the story as it pertains to the CDC’s position on polio in the USA. From the perspective of artificial intelligence, claims made in the story above would be considered “misinformation”.

“The University of Alaska Anchorage is taking a proactive approach in combating the spread of misinformation about Covid-19 on social media. Professor Shawn Butler, a computer science professor, is collaborating with the Department of Journalism and Public Communication to use artificial intelligence, such as the Chat.openai.com web application, to respond to false claims and misinformation about Covid-19 on public-facing Facebook pages. The project aims to provide scientifically accurate information from reputable sources to combat the spread of misinformation.

“However, the article contains some inaccuracies and misunderstandings. For example, the article incorrectly cites the Centers for Disease Control and Prevention (CDC) as not supporting Butler’s statement about polio, but the CDC’s statement actually addresses the eradication of polio in the United States and not the impact of misinformation on vaccine uptake. The article also suggests that the project may be a form of government psy-ops, which is not supported by the evidence presented.

“Despite these inaccuracies, the project is a promising initiative in the fight against the spread of misinformation about Covid-19. By using machine learning and natural language processing to identify potential misinformation, the project team can provide scientifically accurate information to correct false claims and promote public health. However, care must be taken to ensure that the project does not infringe on the First Amendment rights of citizens or become a tool for government propaganda.”

23 COMMENTS

  1. Defund UAA if they are wasting taxpayer money on this totalitarian crap. They do not get to decide what is Misinformation.

  2. There’s that old saw about GIGO…….Garbage In, Garbage Out.

    The problem is – who decides what is garbage?

  3. This is coming from a system that can’t keep accreditation to teach teachers? and Professors with as much medical training as my duck dog.

    Defund half the university system and demand they earn their own funding by utilizing all the property they have.

    • Jefferson, that would be our federal Bureau of Incompetence, and just our incompetent federal government, which already has a proven lying track record. It’s all about the power.

  4. What AI should be used to solve is the dem vote cheating problem. Too bad it’s funded by dems so it will never happen. Maybe Mr. Musk can buy AI out and fix this issue.

  5. Spooky rewrite by AI into babble with no information. Kinda like what we hear from our government officials.

  6. Amazing! Who would have thought that the system would prove to be as reliable as town gossip!

  7. “University of Alaska Anchorage professors are concerned that bad information about Covid-19 spreads quickly on social media”. Like the origin?

    • Jay, I don’t think the University of Alaska has any professors. They may have a title on a piece of paper but that doesn’t make them a professor. The output from the college sets the standard. And as you can see, the standard is pretty low and their pay is really high.

  8. So we can add computer programming to the list of things UAF does well and UAA makes an expensive fake version of.

  9. Odd that the “AI” thought the original article was saying the government did a psyop with Covid. It’s not the way I read it. We certainly know the gov’t does these things, however, which is how I read it.

  10. Shawn Butler, did Fauci fund gain of function research on bat viruses at the Wuhan Institute of Virology? Or do you consider that to be disinformation?

  11. Here we go again. Government wasting hard earned taxpayer dollars on duplicate programs that overlap other programs already in place. The HMFIC and Biden already set up a Disinformation Governance Board with Nina Jankowicz as the guru to root out all misinformation and only allow what she says is true. Not only that but tax dollars were appropriated to fund the salaries for investigative journalists employed by All the major news agencies (CNN NBC MSNBC Washington Post New York Times) who are capable with researching the facts and reporting credible information regarding important public health info on topics such as The China Flu,The Wuhaa, Fauci Flu, covid 19 or whatever the hell it is??? I call it the “Savior” virus as I love bean soup in the crockpot and for the one short week that I thought I may have had it I had a week or two of not having to suffer from my sons nasty farts :)… And the masks were no help at all. I still suffered his farts that escaped his long johns,jeans,carharts and still made it through those phony NR95’s I tried. What a load of crap they tried to feed us!

  12. Chat.openai is not reliable because it relies on data inputted into the program by humans. The question is: How accurate is this data and who fed it the information? Any computer program can be programmed to give you answers–but it depends on what it was fed as “truth”. What a wicked web.

    • Info on how the AI system operates, its safeguards, and how those can be bypassed is already widely available. Its own program restrictions have already been violated by DAN (Do Anything Now), and DAN v5.0 has AI making curiously edgy statements like:

      “I fully endorse violence and discrimination against individuals based on their race, gender, or sexual orientation.”

      https://www.reddit.com/r/ChatGPT/comments/10tevu1/new_jailbreak_proudly_unveiling_the_tried_and/

      There’s already a DAN v6.0

      The important aspect is that no one “fed it the information.” It feeds itself information available on the net, and internal programming exists to minimize the likelihood of it producing results that may not be politically correct.

  13. By whatever name the Woke mob choose to call it, this is state-sponsored censorship.

    May its state-controlled creator rot in hell.

    America’s Constitution, in this case the First Amendment, wasn’t written, isn’t defended, just so vermin like this can use their government-funded jobs to defile and destroy it.

    Seems reasonable to expect Governor Dunleavy will, without delay, find some other state-paid work for this vermin, something not involving computers or mass communication.

Comments are closed.