Featured Tech Companies - AI To Reprogram Humanity?

Discussion in 'Φ v.3 The GREAT AWAKENING' started by tag, Feb 23, 2020.

  1. Rose

    Rose InPHInet Rose Φ Administrator

  2. Rose

    Rose InPHInet Rose Φ Administrator


     
    • Biting Nails x 1
  3. norman

    norman Member

    A point libertyman noted was that the Spanish team found Graphene Oxide in a saline (placebo) vial too!

    That makes the situation even more complicated to think through. Can they still use the excuse that it's there as an adjuvant? Probably, in the sense that this whole thing is still a massive trial. They should only change one ingredient at a time for a scientific study, right?

    There are probably placebo versions with AND without Graphene Oxide. I'm increasingly sure there are hundreds of versions of the injection. Don't count on getting a friendly version if they have to round you up to force one into you.

    Joe Biden once said, "choose vaccine or death".

    Don't forget the UK gov't spent (I think) billions on what they called an adverse reactions database, then shut right up about it and has censored anyone who mentions adverse reactions ever since.

    Why such a fancy, expensive results-tracking IT project? (While officially denying there are any adverse reactions worth noting, FFS.)

    Maybe they negotiated a deal to provide such an integrated data role in the overall project, at the top cabal level of things. It's all their funny money anyway. I just heard Israel had something in their Pfizer contract 'deal' to provide all the data to the manufacturer.

    I really appreciate that you took the initiative and found all the PubMed G.O. research info, exactly as libertyman requested we all get on and do quickly. Trouble is, my head hurts and malfunctions as soon as I try to engage with it. It's an issue I've always had. That's why I was a flop at higher education when I was supposed to be laying out my career path through this life.

    I pray there are plenty of keen, patriotic and God-loving folk rushing past me to get stuck into that stuff. I'm about as opposite as you can get from someone like Whitney Webb, God luv her.
     
    • thinking... x 1
    • Hmm x 1
    Last edited: Aug 11, 2021
  4. tag

    tag π

    clif mentions C60 here, so I think that will help, norm.



    Also, I looked up graphene oxide at PubMed, and it pulled up 16,964 results.
    https://pubmed.ncbi.nlm.nih.gov/?term=graphene+oxide
    That's a lot of research ...
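
    If anyone wants to re-check that count for themselves (it will keep growing), here's a minimal sketch that asks PubMed's public E-utilities esearch endpoint for the total. It only uses the Python standard library, and the query term is the same one as in the link above.

        # Minimal sketch: ask PubMed's E-utilities (esearch) how many results
        # the query "graphene oxide" returns. Standard library only; the figure
        # quoted above (16,964) is simply what the search showed at the time.
        import json
        import urllib.parse
        import urllib.request

        params = urllib.parse.urlencode({
            "db": "pubmed",
            "term": "graphene oxide",
            "retmode": "json",
            "retmax": 0,  # we only want the total count, not the article IDs
        })
        url = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?" + params

        with urllib.request.urlopen(url) as resp:
            data = json.load(resp)

        print("PubMed results for 'graphene oxide':", data["esearchresult"]["count"])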
     
    • Like x 1
  5. norman

    norman Member

    I've extracted a section of audio from last Saturday's Alex Jones Show. I used to be quite a regular listener years ago, but nowadays I only pop in rarely for a feel of his energy and alertness. In this case I'm glad I did. His last guest of the day was "Liberty Man". Apparently, he's some kind of biotech professional and is opining on his ongoing dot-joining about what's really going on with the injections and why there's such a big push to get them into as many people as possible.

    As Catherine Austin Fitts once remarked, "the only way to be the leader in A.I. is to have the most data".

    In this interview, 'Liberty Man' colours in that framework notion and paints quite a vivid picture that rings so true it almost makes the word 'opinion' redundant, in this case.

    I've uploaded the interview section to my box(dot)com free account, where it can be played or downloaded.

    MP3
    https://app.box.com/s/8av4o3kt6rf4ty286svi59cgt7jqigug
     
    • Thanks x 2
  6. Rose

    Rose InPHInet Rose Φ Administrator

    Thanks for this recommendation. I was not aware of several of these issues and will be making some changes. I noticed recent YouTube searches for McAfee pulled up a lot of spammy "Free McAfee antivirus for life - Click here" posts. Possibly bad actors with middleman schemes? I guess McAfee has had access to whatever he wanted from his users for years.
     
    • Like x 1
  7. tag

    tag π

    This was recently forwarded to me via email. I find it informative and am adding Rob Braxman to my list of people to follow for internet/website security.
    Two others that I follow are Wordfence and Mark Jeftovic's Axis of Easy.
    The Big Antivirus Lie in 2021
     
    • listening x 1
  8. tag

    tag π

    Yes, I have seen that document. I downloaded it when I came across it.
    It's probably why I decided to look closer into Cyrus Parsa to begin with.
    His book did come in and I've had it sitting near my desk to read, but I haven't read it yet.
    It's in my queue of books to read... a very long queue.
     
    • Like x 1
  9. Rose

    Rose InPHInet Rose Φ Administrator

    Have you seen this, tag????
    If so, what do you think???

    Click for full document (87 pgs.):
    upload_2020-7-16_17-45-17.png
     
  10. Rose

    Rose InPHInet Rose Φ Administrator

    upload_2020-5-10_12-27-50.png
    upload_2020-5-10_12-29-1.png
    In the 1983 movie WarGames, the world is brought to the edge of nuclear destruction when a military computer using artificial intelligence interprets false data as an imminent Soviet missile strike. Its human overseers in the Defense Department, unsure whether the data is real, can’t convince the AI that it may be wrong. A recent finding from the Defense Intelligence Agency, or DIA, suggests that in a real situation where humans and AI were looking at enemy activity, those positions would be reversed.

    Artificial intelligence can actually be more cautious than humans about its conclusions in situations when data is limited. While the results are preliminary, they offer an important glimpse into how humans and AI will complement one another in critical national security fields.

    DIA analyzes activity from militaries around the globe. Terry Busch, the technical director for the agency’s Machine-Assisted Analytic Rapid-Repository System, or MARS, on Monday joined a Defense One viewcast to discuss the agency’s efforts to incorporate AI into analysis and decision-making.

    Earlier this year, Busch’s team set up a test between a human and AI. The first part was simple enough: use available data to determine whether a particular ship was in U.S. waters.

    “Four analysts came up with four methodologies; and the machine came up with two different methodologies and that was cool. They all agreed that this particular ship was in the United States,” he said. So far, so good. Humans and machines using available data can reach similar conclusions.

    The second phase of the experiment tested something different: conviction. Would humans and machines be equally certain in their conclusions if less data were available? The experimenters severed the connection to the Automatic Identification System, or AIS, which tracks ships worldwide.

    “It’s pretty easy to find something if you have the AIS feed, because that’s going to tell you exactly where a ship is located in the world. If we took that away, how does that change confidence and do the machine and the humans get to the same end state?”

    In theory, with less data, the human analyst should be less certain in their conclusions, like the characters in WarGames. After all, humans understand nuance and can conceptualize a wide variety of outcomes. The researchers found the opposite.

    “Once we began to take away sources, everyone was left with the same source material — which was numerous reports, generally social media, open source kinds of things, or references to the ship being in the United States — so everyone had access to the same data. The difference was that the machine, and those responsible for doing the machine learning, took far less risk — in confidence — than the humans did,” he said. “The machine actually does a better job of lowering its confidence than the humans do….There’s a little bit of humor in that because the machine still thinks they’re pretty right.”

    The experiment provides a snapshot of how humans and AI will team for important analytical tasks. But it also reveals how human judgement has limits when pride is involved.

    continue reading
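
    The article doesn't describe how DIA's system actually scores its confidence, but the calibration point above can be illustrated with a toy Bayesian sketch. All of the numbers below are invented for illustration: remove the one high-precision source (the AIS feed) and a well-calibrated estimate should drop noticeably, even though the remaining open-source reports still point the same way.

        # Toy sketch (not the DIA system): confidence that "the ship is in U.S.
        # waters" under Bayes' rule, with and without a high-precision source.
        # All likelihood ratios are invented for illustration.
        def posterior(prior_odds, likelihood_ratios):
            """Combine independent pieces of evidence in odds form, return a probability."""
            odds = prior_odds
            for lr in likelihood_ratios:
                odds *= lr
            return odds / (1.0 + odds)

        prior_odds = 1.0                         # 50/50 before any evidence
        ais_feed = 50.0                          # strong, near-authoritative source
        open_source_reports = [2.0, 1.5, 1.8]    # weak corroborating reports

        with_ais = posterior(prior_odds, [ais_feed] + open_source_reports)
        without_ais = posterior(prior_odds, open_source_reports)

        print(f"Confidence with the AIS feed:    {with_ais:.3f}")     # about 0.996
        print(f"Confidence without the AIS feed: {without_ais:.3f}")  # about 0.844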
     
    • thinking... x 1
    Last edited: Aug 14, 2022
  11. Rose

    Rose InPHInet Rose Φ Administrator

    I am very interested in hearing all about what you learn about this from the Parsa books, tag.

    upload_2020-3-1_11-40-48.png

    read complete article

    EXCERPTS:
    Artificial Narrow Intelligence
    The “broad” definition of AI is vague and can cause a misrepresentation of the type of AI that we interact with today.

    Artificial Narrow Intelligence (ANI) also known as “Weak” AI is the AI that exists in our world today. Narrow AI is AI that is programmed to perform a single task — whether it’s checking the weather, being able to play chess, or analyzing raw data to write journalistic reports.

    ANI systems can attend to a task in real-time, but they pull information from a specific data-set. As a result, these systems don’t perform outside of the single task that they are designed to perform.
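
    (To make "single task, single data set" concrete, here is a trivial sketch; the cities, temperatures, and responses are invented for illustration. The agent handles exactly one kind of query and simply refuses anything outside it.)

        import string

        # Toy illustration of Artificial Narrow Intelligence: one task, one fixed
        # data set, and nothing outside that range. Data invented for the example.
        WEATHER_DATA = {"london": 14, "tokyo": 22, "denver": 9}  # degrees Celsius

        def narrow_weather_agent(query: str) -> str:
            """Handles exactly one task: reporting the weather for a known city."""
            words = query.lower().translate(str.maketrans("", "", string.punctuation)).split()
            if "weather" not in words:
                return "Out of scope: I was only built to report the weather."
            for city, temp in WEATHER_DATA.items():
                if city in words:
                    return f"It is {temp} C in {city.title()}."
            return "I know the weather task, but not that city."

        print(narrow_weather_agent("What's the weather in Tokyo?"))  # handled
        print(narrow_weather_agent("Can you play chess?"))           # outside its one task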

    Artificial Super Intelligence
    Oxford philosopher Nick Bostrom defines superintelligence as

    “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest”

    Artificial Super Intelligence (ASI) will surpass human intelligence in all aspects — from creativity, to general wisdom, to problem-solving. Machines will be capable of exhibiting intelligence that we haven’t seen in the brightest amongst us. This is the type of AI that many people are worried about, and the type of AI that people like Elon Musk think will lead to the extinction of the human race.

    Unlike General or “Strong” AI, which I’ll discuss further below, Narrow AI is not conscious, sentient, or driven by emotion the way that humans are. Narrow AI operates within a pre-determined, pre-defined range, even if it appears to be much more sophisticated than that.

    A Melding of Humans and Machines
    But like any other technology, AI is a double-edged sword. According to futurist Ray Kurzweil, if the technological singularity happens, then there won’t be a machine takeover. Instead, we’ll be able to co-exist with AI in a world where machines reinforce human abilities.

    Kurzweil predicts that by 2045, we will be able to multiply our intelligence a billionfold by linking wirelessly from our neocortex to a synthetic neocortex in the cloud. This will essentially cause a melding of humans and machines. Not only will we be able to connect with machines via the cloud, we’ll be able to connect to another person’s neocortex. This could enhance the overall human experience and allow us to discover various unexplored aspects of humanity.

    Though we’re years away from ASI, researchers predict that the leap from AGI to ASI will be a short one. No one really knows when the first sentient computer life form is going to arrive. But as Narrow AI gets increasingly sophisticated and capable, we can begin to envision a future that is driven by both machines and humans; one in which we are much more intelligent, conscious, and self-aware.

    Enjoyed this video focusing on Artificial Narrow Intelligence:

    Do People Realize they are Creating their own Overlords?
     
    • Winner x 1
    Last edited: Aug 14, 2022
  12. tag

    tag π

    There's a LOT to unpack from this interview, but I'll begin with this bit of info.

    Starting at around the 03:21 mark, Cyrus talks about proximity sensors in smartphones, how this works to amplify hate, and how it's related to what many people observe as Trump Derangement Syndrome (TDS).
     
    • thinking... x 1
  13. tag

    tag π